\documentclass[conference]{IEEEtran}
% *** CITATION PACKAGES ***
%\usepackage{cite}
% *** GRAPHICS RELATED PACKAGES ***
\ifCLASSINFOpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage[dvips]{graphicx}
\usepackage{epsfig}
\fi
\usepackage{threeparttable}
%\usepackage[cmex10]{amsmath}
\usepackage{amsmath}
\usepackage{arydshln}
\usepackage[numbers,sort&compress]{natbib}
\usepackage{float}
\begin{document}
\title{Phases, Techniques, and Strategies of Software Development Illustrated by a Course Project in the Module SWE-II}
\DeclareRobustCommand{\IEEEauthorrefmark}[1]{\smash{\textsuperscript{\footnotesize #1}}}
\author{\IEEEauthorblockN{Felix Jacobsen\IEEEauthorrefmark{1},
Dennis Podkolsin\IEEEauthorrefmark{2} und Marcus Koppelmann\IEEEauthorrefmark{3}}
\IEEEauthorblockA{Fachbereich Duales Studium, Studiengang Informatik\\
Hochschule für Wirtschaft und Recht Berlin\\
Email: \IEEEauthorrefmark{1}s\[email protected],
\IEEEauthorrefmark{2}s\[email protected],
\IEEEauthorrefmark{3}s\[email protected]}}
\maketitle
%%%
% --- Abstract ---
%%%
\begin{abstract}
\boldmath
Agile courses in university settings aim to prepare students for the ever-increasing demands of the software industry, where Agile has become mainstream. Teaching and understanding Agile in such settings is therefore of the utmost importance. For this reason, Agile is no longer just a part of the software engineering curriculum in Computer Science but, in most cases, a standalone course, though one that poses increasing challenges for both faculty and students. This article presents another example of the design, planning, development, and evaluation of an Agile project-based course. The reason for addressing Agile teaching is twofold: not only are Agile theory and practice taught and experienced in class, but the teaching itself, and consequently the learning, has also been adapted to changing requirements and priorities in each edition of the course. Making the course project-based allows students to work on realistic projects through which they learn Agile more effectively, in collaborative and self-organizing teams. These insights, as well as the settings and experiences gathered over a total of four years, are addressed in this article.
\end{abstract}
%%%
% --- Keywords ---
%%%
\begin{keywords}
Agile, eXtreme Programming, teaching, project-based learning.
\end{keywords}
%%%
% --- Introduction ---
%%%
\section{Introduction}
\label{sec:intro}
There are many strong reasons for including Agile principles in CS education \cite{HaDu07}. Positive experiences ranging from \textit{project-based} Computer Science (CS) courses using Agile \cite{Pe09,Scetal12} through \textit{Agile teaching} \cite{RaAn06} to \textit{Agile instructional design} \cite{LeVa12} share a common denominator: the practices, values, and methods of agile software development are essential. Agile is now mainstream in the software industry \cite{VO13}, and educational environments profit from this, too. Meanwhile, project-based learning has proven to be very attractive in tertiary teaching: students learn the discipline via a realistic project, they pursue questions and connect them to activities that are part of the project, they construct knowledge and work autonomously towards a final product, and they master the curriculum standards with academic rigor \cite{Th00}.
The module \textit{Project Management} is part of the CS education during the third semester at the Berlin School of Economics and Law (BSEL). By successfully passing this module, dual studies CS students obtain 14 ECTS credits,\footnote{European Credit Transfer and Accumulation System. One credit point is equivalent to 30 hours of study.} which are assigned in the following proportion: 20\% go to the sub-module \textit{Project and Quality Management}, 30\% to the sub-module \textit{Multidisciplinary Lab using Agile techniques}, and 50\% to the sub-module \textit{Practice Transfer}, in which students work at their companies and apply the knowledge gained in software engineering in general and in Agile and project management in particular. Credit hours, however, were never intended to be a measure of student learning, as Laitinen argues in \cite{La13}. She argues that we should determine ``what students are expected to --and actually do-- learn'', as well as ways to meaningfully assess what they have learned, rather than relying only on time-based units. By introducing Agile project-based techniques in CS assignments and by accurately defining both the learning goals and the forms of their evaluation, as presented further in this article, a positive step in this direction is achieved.
Most Agile courses in university settings have a common goal: to prepare students to face the ever-increasing challenges of the software industry. Jaccheri and Morasca define in \cite{JaMo06} five main roles that industry can play in software engineering education from the point of view of the university teacher: industry as students, as teachers, as researchers, as customers, and as former students. Three of these roles are clearly identifiable in the aforementioned module \textit{Project Management}:
\begin{itemize}
\item Industry as teachers: the sub-module \textit{Project and Quality Management} runs in parallel with the sub-module \textit{Multidisciplinary Lab using Agile techniques}. The former is taught by an industry specialist in close collaboration with the latter's teacher.
\item Industry as customers: a real customer, who presents a problem to the students and is available for consultation, is simulated in the Lab whenever it is not possible to invite an actual one. The concrete problem that is selected and the algorithm for solving it are also present in many industrial applications.
\item Industry as former students: the BSEL has both a Faculty Technical Commission and a Faculty Commission for Cooperative Studies, each composed of several industry partners, some of them former dual studies students, which discuss and approve the curriculum and other teaching and learning issues. Part of the faculty is composed of former BSEL students as well.
\end{itemize}
Two of the most important advantages of the program that prepare CS students for their further professional life are the following. First, students from the Faculty of Cooperative Studies are dual studies students and work in German companies from their first semester on, which means they gain practical experience in real industry scenarios from the very beginning of their studies. Second, the sub-module \textit{Multidisciplinary Lab using Agile techniques} (Lab using Agile, for short) provides them with several hard skills, like specifying, designing, implementing, and testing software, as well as soft skills such as communicating, presenting, and working in a team. Furthermore, both advantages help to minimize new hires' common frustrations, as addressed in \cite{Co02}.
The Lab using Agile follows an interdisciplinary approach, reflected in the different cross-disciplinary topics addressed there. Perhaps these are the reasons why the course has been favorably received by both faculty and students. Its careful design and planning, as well as its constant adaptation to changing teaching and learning requirements, have proven very effective in project-based courses. The remainder of this paper describes the Lab using Agile in detail.
%%%
% --- Agile and XP techniques ---
%%%
\section{Agile and XP techniques}
\label{sec:agileXP}
One of Agile's most widely used methodologies is eXtreme Programming (XP), which has also been very popular in CS teaching \cite{Go04,KeDi04,MuTi01,ShWi02,StLuEr08}. For example, Stapel and colleagues propose in \cite{StLuEr08} an XP lab design for teaching a project-based XP course to CS master students, emphasizing XP practices as part of a closed block course. Their work inspired the study summarized in this paper, which recommends a change from a weekly course to a block course. However, not only the course design, its type, and the students' level, but also the blocks' duration, the XP iteration lengths, the team sizes, and the project content, among other indicators, differentiate their research from the one presented in this paper. Valuable insights from other works evaluating Agile in educational environments also influenced the findings presented here.
Pair programming is no longer foreign to CS education. In \cite{MeVaGo08}, for example, a case study concludes that pair programming, together with cooperative learning principles, is an effective approach for mastering computer programming. The authors also extensively review the literature on the advantages and disadvantages of pair programming as a teaching-learning strategy. In \cite{XuRa05}, the authors additionally comment on the benefits of pair programming when practicing it in graduate software engineering class projects. Furthermore, several works have been published on both the strengths and weaknesses of pair programming from the perspective of the Agile community.
The remaining XP techniques are also introduced to the students in the Lab using Agile, both theoretically and practically. The students are, however, undergraduates with little programming experience; in fact, they have only attended a few semesters at the university. Nevertheless, they quickly learn how to develop software with the aid of Agile, they solve a concrete real-world problem working in teams, and they gain experience by simulating a working day at a company as part of the course project.
Differentiated supervision and guidance allow for better reactions to problems that might arise when introducing Agile or simply when working with others. In the Lab using Agile, both individual and general coaching is offered. The faculty coaches individuals and teams in the course and is able to monitor progress and development at any time. Thus, continuous feedback can be provided to the students, to the teams, and to the entire group. In return, students should be capable of presenting different stages of working software, and they should discuss acceptance criteria for their software products with faculty in the role of (simulated) customers. In the coaching sessions, students are expected to come with concrete questions, prepared in advance, about any topic they need advice on.
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Course schedule: teaching blocks and semester credit hours.}
\label{tab:SCHblocks}
\centering
\begin{tabular}{|c|cc|cc|c|}
\hline
\bfseries Block 1 & \multicolumn{2}{|c|}{\bfseries Block 2} & \multicolumn{2}{|c|}{\bfseries Block 3} & \bfseries Block 4\\
\hline
\textit{Day 1} & \multicolumn{1}{|c|}{\textit{Day 2}} & \textit{Day 3} & \multicolumn{1}{|c|}{\textit{Day 4}} & \textit{Day 5} & \textit{Day 6}\\
\hline
\hline
& \multicolumn{1}{|c|}{8 SCH} & \multicolumn{1}{|c|}{8 SCH} & \multicolumn{1}{|c|}{8 SCH} & \multicolumn{1}{|c|}{8 SCH} & \\
6 SCH & \multicolumn{1}{|c|}{(PC Lab)} & \multicolumn{1}{|c|}{(PC Lab)} & \multicolumn{1}{|c|}{(PC Lab)} & \multicolumn{1}{|c|}{(PC Lab)} & 6 SCH \\
\cline{2-4}
\cline{4-5}
& \multicolumn{2}{|c|}{16 SCH} & \multicolumn{2}{|c|}{16 SCH} & \\
\hline
\hline
\multicolumn{6}{|c|}{44 SCH} \\
\hline
\end{tabular}
\end{table}
%%%
% --- Course Schedule ---
%%%
\section{Course Schedule}
\label{sec:courseschedule}
Table \ref{tab:SCHblocks} shows the course schedule for the Lab using Agile in teaching blocks and semester credit hours (SCH, where 1 SCH corresponds to 45 minutes of teaching time). The course is divided into four teaching blocks for a total of 44 SCH. Blocks 2 and 3 take place in a PC lab and are mainly intended for teamwork. In the Fall 2009 and Fall 2010 editions of the course, three XP iterations were scheduled, with three corresponding product releases. In Fall 2011 and Fall 2012, however, only two XP iterations and their respective releases were planned, in response to the course appraisals administered at the end of the previous terms. More on this can be found in Section \ref{sec:teacheval}.
Table \ref{tab:Agileblocks} shows the same course schedule, but in teaching blocks and Agile cycles. Both \textit{Syllabus} and \textit{Intro I} on Day 1 constitute Block 1 and refer to an introductory section, which states the purpose and goals of the course, as well as the theory behind the algorithms selected to solve the customer problem. \textit{Intro II} on Day 2 refers to an introduction to Agile and to XP. Days 2 and 3 are two consecutive calendar days from Block 2, as are Days 4 and 5 from Block 3. \textit{Iteration 1} starts with \textit{Planning game 1} and takes between three and four weeks until \textit{Release 1} is accomplished, with only the first two days at the university. The same holds for \textit{Iteration 2}, whose \textit{Release 2} takes place at the end of the course, on Day 6. The \textit{Conclusions} are mainly based on the presentations of the final product releases and on the teacher's feedback concerning the projects as a whole. In \cite{StLuEr08}, to name one crucial difference to this work, the block course has no interruption at all: the (very short) iterations are located contiguously within the course time frame.
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Course schedule: teaching blocks and Agile cycles.}
\label{tab:Agileblocks}
\centering
\begin{tabular}{|c|cc|cc|c|}
\hline
\bfseries Block 1 & \multicolumn{2}{|c|}{\bfseries Block 2} & \multicolumn{2}{|c|}{\bfseries Block 3} & \bfseries Block 4\\
\hline
\textit{Day 1} & \multicolumn{1}{|c|}{\textit{Day 2}} & \textit{Day 3} & \multicolumn{1}{|c|}{\textit{Day 4}} & \textit{Day 5} & \textit{Day 6}\\
\hline
\hline
& \multicolumn{1}{|c|}{Intro II} & & \multicolumn{1}{|c|}{Release 1} & & \\
Syllabus & \multicolumn{1}{|c|}{Planning game 1} & & \multicolumn{1}{|c|}{Planning game 2} & & Release 2 \\
\cline{2-2}
\cline{4-4}
Intro I & \multicolumn{2}{|c|}{Iteration 1} & \multicolumn{2}{|c|}{Iteration 2} & Conclusion \\
& \multicolumn{2}{|c|}{(Incremental teamwork)} & \multicolumn{2}{|c|}{(Incremental teamwork)} & \\
\hline
\end{tabular}
\end{table}
\textit{Incremental teamwork} in Blocks 2 and 3 means that students become more independent while working in a team. Students not only work incrementally on different tasks without interruption while planning and developing software; they also apply Agile techniques that make them more independent. They progressively need less coaching from faculty to master activities that become more complex over time. In order to cope with these challenges, the course schedule includes, also progressively, more time for programming and less for other didactic exercises.
%%%
% --- Learning and Teaching Goals ---
%%%
\section{Learning and Teaching Goals}
\label{sec:coursegoals}
Faculty should be aware of both the coarse-grained and the fine-grained learning goals of a course, in order to break down those goals and to focus on the content to be taught. The former, the coarse-grained learning goals, are often defined in the curriculum in a general way. The latter help faculty to plan and to lay out in detail what students need to master and the ways of achieving and evaluating that. By thoroughly defining the fine-grained learning goals of the Lab using Agile, faculty can create the course syllabus without difficulty, and individual blocks and days are planned more easily. Preparing all these teaching materials does require additional effort up front, but the time saved later amply repays the investment.
The second block of the Lab using Agile is dedicated to the first experiences with XP practice, especially on Day 2. The fine-grained learning goals of the second block (B2) for the first double credit hours (2 SCH each, i.e., 1$\frac{1}{2}$ hours) are:
\textit{After completion of the second block, the students will be able$\dots$ }
\textit{\bfseries B2.1 (2 SCH):} $\dots$\textit{to identify and to describe software requirements using story cards; to assess their priorities; to coordinate and to discuss their inclusion in the current iteration; and to plan and to schedule related activities for the first XP iteration.}
\textit{\bfseries B2.2 (2 SCH):} $\dots$\textit{to meet and to participate in ``stand-ups'' or daily meetings; to develop software programming in pairs.}
\textit{\bfseries B2.3 (2 SCH):} $\dots$\textit{to discuss and to formulate rules for working in a team; to discuss and to formulate rules for the work of several teams in a room.}
\textit{\bfseries B2.4 (2 SCH):} $\dots$\textit{to develop software working in teams.}
Didactic exercises worked out in this block include organizational aspects that allow for better collaborative work when applying XP, since this is essential to Agile \cite{Beetal01}. Rules for working in a team are then to be discussed by the students, for example, and each project group could present its set of rules using a flip chart in one of the sessions.
\textit{Teaching screenplays} were used to better schedule the sequence of concrete teaching and learning activities to be included into a class, as well as the time required to complete them. They were planned using a sandwich structure, i.e., by combining passive and active learning units, and are like lesson plans or teaching worksheets that describe the teaching roadmap for a class or for part of a class in detail. For example, the teaching screenplay for the first double credit hour from block B2 is shown in Table \ref{tab:screenplay1}. It corresponds to the fine-grained learning goals defined above for the first double credit hour of that block, i.e., for B2.1.
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Example teaching screenplay for the double credit hour B2.1.}
\label{tab:screenplay1}
\centering
\begin{tabular}{|c|c|c|l|}
\hline
& & 5 & \textbf{Start} -- \textit{passive unit} \\
& \textbf{Entry} & min. & \hspace{0.5em} Welcoming (oral) \\
& & & \hspace{0.5em} Contents and time schedule (flip chart) \\
\cline{2-4}
& & & \textbf{Content 1} -- \textit{passive unit} \\
& & 20 & \hspace{0.5em} Motivation (oral) \\
& & min. & \hspace{0.5em} Learning goals (flip chart) \\
& & & \hspace{0.5em} Planning game (flip chart, blackboard) \\
& & & \hspace{0.5em} Story cards (blackboard)\\
\cline{3-4}
& & 3 & \textbf{Brainstorming} -- \textit{active unit} \\
& & min. & \hspace{0.5em} Collect examples (plenum) \\
\cline{3-4}
& & & \textbf{Content 2} -- \textit{passive unit} \\
90 & \textbf{Working} & 20 & \hspace{0.5em} Project description (hand outs) \\
min. & \textbf{phase} & min. & \hspace{0.5em} Project goals (blackboard) \\
& & & \hspace{0.5em} Project requirements (hand outs) \\
& & & \hspace{0.5em} Requirements for 1st release (blackboard) \\
\cline{3-4}
& & 2 & \textbf{Introduce exercise} -- \textit{passive unit} \\
& & min. & \hspace{0.5em} Planning game: method, time management (oral) \\
\cline{3-4}
& & & \textbf{Knowledge transfer} -- \textit{active unit} \\
& & 35 & \hspace{0.5em} Planning game 1st iteration (teamwork, coaching) \\
& & min. & \hspace{1em} Define story cards \\
& & & \hspace{1em} Set priorities \\
& & & \hspace{1em} Discuss realization \\
\cline{2-4}
& & & \textbf{End} -- \textit{active and passive unit}\\
& \textbf{Exit} & 5 & \hspace{0.5em} Questions, feedback (oral) \\
& & min. & \hspace{0.5em} Conclusions (oral) \\
& & & \hspace{0.5em} Short about the next double SCH, i.e., B2.2 (oral) \\
\cline{1-4}
\end{tabular}
\end{table}
Nineteen such teaching screenplays are needed for Blocks 1 to 3, i.e., one screenplay as in Table \ref{tab:screenplay1} for each double SCH. Most of them, however, are only an outline like the one presented in Table \ref{tab:screenplay2}. All teaching screenplays can be adjusted and adapted depending on the rhythm of the concrete class when developing the course projects, which is simply an expression of Agile project-based teaching. An extra column could also be added to the screenplays for comments on self-reflection and self-assessment after completing the scheduled exercises and activities.
%%%
% --- Project Requirements ---
%%%
\section{Project Requirements}
\label{sec:projreq}
The general project description was formulated as follows: \textit{Solve the traveling salesman problem (TSP) using a metaheuristic algorithm in the context of an XP project. The deliverable is a software product with a graphical user interface (GUI) that includes menus and controls to define settings and to visualize results, as well as a graphical window showing both the cities and the optimization process in real time.}
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Example teaching screenplay for a double credit hour with teamwork.}
\label{tab:screenplay2}
\centering
\begin{tabular}{|c|c|c|l|}
\hline
& & 2 & \textbf{Start} -- \textit{passive unit} \\
& \textbf{Entry} & min. & \hspace{0.5em} Welcoming (oral) \\
& & & \hspace{0.5em} Goals and time schedule (flip chart) \\
\cline{2-4}
& & & \textbf{Teamwork and coaching} -- \textit{active unit} \\
90 & \textbf{Working} & 83 & \hspace{0.5em} Incremental software development (by students) \\
min. & \textbf{phase} & min. & \hspace{0.5em} Individual team coaching (by faculty) \\
& & & \hspace{0.5em} Questions, feedback (team-oriented) \\
\cline{2-4}
& & 5 & \textbf{End} -- \textit{passive unit}\\
& \textbf{Exit} & min. & \hspace{0.5em} Conclusions (oral) \\
& & & \hspace{0.5em} Short about the next double SCH (oral) \\
\cline{1-4}
\end{tabular}
\end{table}
Students should use metaheuristic algorithms, like genetic algorithms (GA) and ant colony optimization (ACO), to solve instances of the TSP. They should test their programs using two-dimensional, symmetric TSP instances of geographical problems from TSPLIB \cite{Re91}, and they should report both their findings and the software development using Agile in a research paper of at least five pages, following the guidelines for two-column conference proceedings in IEEE style.
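For illustration only, the following short Python sketch shows the kind of tour evaluation such a project builds on. The function names and the random-restart baseline are our own choices for this example (the course projects implemented GA or ACO instead), and reading TSPLIB files is omitted.
{\footnotesize
\begin{verbatim}
# Illustrative sketch only, not the students' code.
import math, random

def tour_length(cities, tour):
    # cities: list of (x, y); tour: index permutation
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = cities[tour[i]]
        x2, y2 = cities[tour[(i + 1) % len(tour)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def random_baseline(cities, tries=1000):
    # crude baseline that GA or ACO should beat
    best = None
    for _ in range(tries):
        tour = list(range(len(cities)))
        random.shuffle(tour)
        cost = tour_length(cities, tour)
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best
\end{verbatim}
}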
Software requirements are defined by the customer (real or simulated) at the beginning of each XP iteration, depending on the focus of the software development in that phase. Only those requirements related to the GUI development, for instance, are defined, specified, planned, and prioritized in the same planning game; those concerning the data and the algorithms to process them are defined in another planning game. Whether to start with the GUI or with the logic was discussed with the students. For many of them it was more important and attractive to have a working product, with options and other components to present to the customer in the different releases, to which further functionality could then be added.
In Fall 2012, the first release, at the beginning of the third block (see Table \ref{tab:Agileblocks}), was an ``individual'' meeting of each team with faculty playing the roles of \textit{customer} and \textit{coach}. The second release, in the last block of the course, was a ``public'' meeting (all teams, in plenum), where faculty played both the \textit{customer} and the \textit{evaluator} roles. In the former, each team presented a software prototype and addressed the main aspects related to other Agile methods and techniques. In the latter, the final release, a formal oral presentation of about 35 minutes gave insights into the final product, the project development, and the experiences and lessons learned during the project.
Emphasis was also put on project management tools for collaborative work. At least in the last two editions of the Lab using Agile, the students had the opportunity to test and use several tools that were new to them, such as Redmine\footnote{Redmine (at http://redmine.org) is a project management web application.} and Trello\footnote{Trello is a board-based collaboration tool. See more at http://trello.com/.}.
%%%
% --- Course Evaluation ---
%%%
\section{Course Evaluation}
\label{sec:courseeval}
The composition and the size of the class, together with other information related to the last four editions of the course, are presented in Table \ref{tab:groupsize}. The number of students who answered a customized, anonymous questionnaire at the end of the semester is given in parentheses for each course edition.
In Fall 2009 and Fall 2010, the course was offered weekly and there were a total of three XP iterations (and therefore three releases). No special didactic methods were applied at that time. In each of these editions, a different algorithm was used to solve the TSP instances, i.e., ACO in Fall 2009 and GA in Fall 2010. Students had difficulties especially when programming in class, since the time available each week was minimal. They also had problems that prevented them from completing their projects on time.
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Characteristics of the last four course editions.}
\label{tab:groupsize}
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
& \bfseries Group & \bfseries Female & \bfseries Weekly$/$ & \bfseries Agile & \bfseries Algo- & \bfseries Special & \bfseries Special\\
\bfseries Fall & \bfseries size & \bfseries prop. & \bfseries Blocks & \bfseries iter. & \bfseries rithm & \bfseries didactic & \bfseries coaching\\
\hline \hline
2009 & 30(30) & 1 & w & 3 & ACO & -- & -- \\
2010 & 30(30) & 2 & w & 3 & GA & -- & -- \\
2011 & 24(24) & -- & b & 2 & ACO & ++ & + \\
2012 & 28(19) & 2 & b & 2 & ACO & ++ & ++ \\
\hline
\end{tabular}
\end{table}
In Fall 2011 and Fall 2012, however, the course was divided into four on-site teaching blocks, as presented in Table \ref{tab:SCHblocks}. Both editions scheduled only two XP iterations, as derived from students' feedback in the earlier courses. The algorithm used for solving the TSP was the same in both cases (i.e., ACO). Both editions included several special didactic methods not applied before, as well as close team coaching by the professor, more intensive in Fall 2012. Additionally, in Fall 2012 the faculty was coached by an external training coach, an expert in didactics in higher education.
%%%
% --- Evaluating Students ---
%%%
\subsection{Evaluating Learning}
\label{sec:learneval}
Each student can earn at most 100 points, which are then converted to the German grading scale, as usual. A student's final grade is the grade of the team to which they belong. It is determined using a percentage system with 20\% for each of the following areas: first release, second release, research paper, software program, and project management.
For assessing the releases and the team presentations, an evaluation form was designed by the faculty. It considers key components like presentation skills, content, timing, confidence, and quality. The research paper was evaluated according to guidelines used for scientific events; what to consider for both its content and structure was previously discussed with the students. Last but not least, the software program should satisfy all requirements, the teams should submit an executable version free of bugs, and the main software features and their functioning should be shown in the final presentation, without neglecting the project management aspects related to the project as a whole.
%%%
% --- Evaluating Teaching ---
%%%
\subsection{Evaluating (not only) Teaching}
\label{sec:teacheval}
At the end of the term, a questionnaire independent of the formal faculty evaluations was administered to the students. The question catalogue with its descriptive scale values is shown in Table \ref{tab:questions}. The questions are grouped into four major topics, corresponding to the course requirements in particular, to teaching in general, to how students learned, and to Agile.
\begin{table*}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Questions catalogue with descriptive scale values.}
\label{tab:questions}
\centering
\begin{threeparttable}
\begin{tabular}{|c|l||cccc|}
\hline
& & \multicolumn{4}{|c|}{\bfseries Descriptive scale values and index}\\
\cline{3-6}
\multicolumn{1}{|c|}{\bfseries Id.} & \multicolumn{1}{|l||}{\bfseries Question} & \multicolumn{1}{|c}{4} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{2} & \multicolumn{1}{c|}{1\tnote{a}}\\
\hline\hline
A & What do you think about the required time for the course? & too high & normal & too low & abstention\\
B & How were the requirements concerning the course assignments/tasks? & too high & realistic & too low & abstention\\
C & How did you find the problem that was selected to be solved (i.e. TSP)? & motivating & neutral & dissuasive & abstention\\
D & How did you find the algorithm that was selected to solve the user problem? & motivating & neutral & dissuasive & abstention\\
\hline
E & How was the introduction on the course goals and topics? & very good & normal & very bad & abstention\\
F & How did the teacher/on-site customer respond to the questions, how was her feedback? & very good & normal & very bad & abstention\\
\hline
G & Do you feel as if you would have learned something during the course? & very much & normal & very little & abstention\\
H & How did the course form your interest on the working field? & motivating & neutral & dissuasive & abstention\\
\hline
I & Did you enjoy Agile practices, especially XP? & very much & normal & very little & abstention\\
J & Do you think you have improved your programming skills when participating in the XP project? & very much & normal & very little & abstention\\
K & And how about your social skills? Did you improve them? & very much & normal & very little & abstention\\
L & Do you think that using XP improves the productivity of small teams? & very much & normal & very little & abstention\\
M & Do you think that using XP improves the quality of the code? & very much & normal & very little & abstention\\
N & Do you think that Pair Programming speeds up the developing process? & very much & normal & very little & abstention\\
O & How did you find the planning game at the beginning of each iteration? & very helpful & normal & irritating & abstention\\
\hdashline
P & How was the division into two (Fall 2011, Fall 2012) / three (Fall 2009, Fall 2010) releases? & excessive & adequate & insufficient & abstention\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] The scale index with value 1 is reserved for abstentions, for each question, so that students can leave questions unanswered.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Students could also provide an overall evaluation of the course, including what they liked the most, what they did not like at all, as well as further suggestions and comments.
%%%
% --- Results ---
%%%
\section{Results and discussion}
\label{sec:results}
Figure \ref{fig:polargraph} shows a polar line chart with an area layer divided into four sectors corresponding to the four general questionnaire topics mentioned above. Question P is not included, since it refers to different scenarios (two or three releases).
\begin{figure}[!t]
\centering
\includegraphics[width=6.2cm]{Resources/webgraph.pdf}
\caption{Questionnaire results averaged for the four editions of the course.}
\label{fig:polargraph}
\end{figure}
The plotted data are computed using the following formula, which represents a weighted average for each question $i$:
\begin{align*}
y &= \frac{\sum\limits_{j=1}^4 (5-j) \cdot v_{ij}}{N}\\
&= \frac{4 \cdot v_{i1} + 3 \cdot v_{i2} + 2 \cdot v_{i3} + v_{i4}}{103}
\end{align*}
$N$ being the total number of student responses over the four years ($N=103$) and $v_{ij}$ being the number of responses to question $i$ for the $j$-th descriptive scale value, counted from left to right in Table \ref{tab:questions} (so that $j=1$ corresponds to scale index 4 and thus to weight 4). For example, question A refers to the required time for the course and has the descriptive scale values \textit{too high}, \textit{normal}, \textit{too low}, and \textit{abstention} (see Table \ref{tab:questions}). The numbers of responses were 13, 70, 18, and 2 for these values, respectively. Thus, $y=2.9126$ in the polar line chart for question A, which means that most students considered the required time to be normal.
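To make the arithmetic explicit, the value plotted for question A is obtained from the response counts above as
\begin{displaymath}
y_A = \frac{4 \cdot 13 + 3 \cdot 70 + 2 \cdot 18 + 1 \cdot 2}{103} = \frac{300}{103} \approx 2.91 .
\end{displaymath}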
The rest of the plotted data can be read in a similar way: most students found the requirements concerning the course assignments (question B) to be realistic, the TSP and solving it with the selected metaheuristic (questions C and D) were motivating, and so on. All in all, the students' feedback was very positive in general, particularly regarding Agile.
Figure \ref{fig:11vs12} shows a polar chart with a polar area layer and a polar line layer comparing in detail some data for Fall 2011 and Fall 2012, respectively. Only the questionnaire topics ``how students learned'' and ``Agile techniques'' are considered. In the figure, \textit{Learning} refers to the question with identifier \textit{G}, \textit{Interest in the field} to \textit{H}, \textit{Enjoy Agile} to \textit{I}, \textit{Programming skills} to \textit{J}, \textit{Social skills} to \textit{K}, \textit{Productivity small teams} to \textit{L}, \textit{Code quality} to \textit{M}, \textit{Development} to \textit{N}, \textit{Planning game} to \textit{O}, and \textit{Two releases} to \textit{P}, as specified in Table \ref{tab:questions}. The corresponding values are listed in Table \ref{tab:t11vs12}, which includes the relative percentage of responses for each descriptive scale value and each question, excluding abstentions, which are not relevant here. Such details give more information than the weighted average when comparing the two courses.
\begin{figure}[!t]
\centering
\includegraphics[width=7.4cm]{Resources/2011vs2012.pdf}
\caption{Questionnaire results comparing Fall 2011 and Fall 2012 in detail.}
\label{fig:11vs12}
\end{figure}
The main differences between the settings of Fall 2011 and Fall 2012 concern the presence of female students (none in 2011) and the team coaching (more intensive in 2012), as presented in Table \ref{tab:groupsize}. The questionnaire results, however, differ strongly in several aspects: almost all results for questions \textit{G} to \textit{P} show remarkable changes from Fall 2011 to Fall 2012. In the latter, for example, most students felt they learned \textit{very much} during the course (62.5\%). One year before, more than half (54.2\%) of the students rated their learning between \textit{normal} and \textit{very little}. Similarly, for the students in Fall 2012 the course was much more \textit{motivating} than for their peers in 2011, they thought their programming skills and the quality of the code improved \textit{very much} with XP, and two thirds found the division into two releases \textit{adequate} (\textit{insufficient} for 41.7\% of the students in 2011). However, students from Fall 2011 enjoyed Agile more (54.2\%), even though more respondents selected \textit{very little} to describe the following Agile characteristics: speeding up the development process with pair programming (33.3\%), improvement of code quality (20.8\%), improvement of social skills (16.7\%), and improvement of small teams' productivity (8.3\%). These values were much smaller or absent in the Fall 2012 responses for the descriptive scale value \textit{very little}.
\begin{table}[!t]
%% increase table row spacing, adjust to taste
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{0.6em}
\caption{Fall 2011 and 2012 compared for general learning and Agile data.}
\label{tab:t11vs12}
\centering
\begin{tabular}{|c||ccc|c|ccc|c|}
\hline
\bfseries Question & \multicolumn{4}{|c|}{\bfseries Fall 2011} & \multicolumn{4}{|c|}{\bfseries Fall 2012}\\
\cline{2-5}
\cline{6-9}
\bfseries Id. & \multicolumn{3}{|c|}{rel. $\%$} & \multicolumn{1}{|c|}{wgt. avg.} & \multicolumn{3}{|c|}{rel. $\%$} & \multicolumn{1}{|c|}{wgt. avg.}\\
\hline\hline
G & 45.8 & 50 & 4.2 & 3.4 & 62.5 & 16.7 & 0 & 3.8\\
H & 33.3 & 62.5 & 4.2 & 3.3 & 50 & 29.2 & 0 & 3.7\\
I & 54.2 & 29.2 & 12.5 & 3.3 & 29.2 & 45.8 & 4.2 & 3.3\\
J & 29.2 & 50 & 16.7 & 3.0 & 50 & 25 & 4.2 & 3.6\\
K & 25 & 58.3 & 16.7 & 3.1 & 29.2 & 33.3 & 16.7 & 3.2\\
L & 58.3 & 33.3 & 8.3 & 3.5 & 54.2 & 25 & 0 & 3.7\\
M & 45.8 & 33.3 & 20.8 & 3.2 & 58.3 & 20.8 & 0 & 3.7\\
N & 33.3 & 33.3 & 33.3 & 3.0 & 29.2 & 45.8 & 4.2 & 3.3\\
O & 33.3 & 58.3 & 8.3 & 3.2 & 16.7 & 54.2 & 4.2 & 3.0\\
P & 16.7 & 37.5 & 41.7 & 2.7 & 12.5 & 62.5 & 4.2 & 3.1\\
\hline
\end{tabular}
\end{table}
Figure \ref{fig:horiz} shows the ten most frequent positive impressions from the students, i.e., what they liked most, ordered from most to least frequent over all four courses. Most of them refer to Agile and XP. Pair programming was mentioned most often, with a total of 12 occurrences; both its benefits and its practice were well received by the students. Working in a team and applying XP to implement a motivating algorithm were also very important to the students, as was the chance to improve their programming skills in such a course project.
The students also had the possibility to mention what they did not like at all, as well as the opportunity to suggest changes to be considered in future editions of the course. Some typical responses were the following: too much work has to be done for too few credits (more credit points should be assigned to such a lab), the time pressure is too high (more time should be allocated for both programming and teamwork in class), and it is difficult to work in a room with too many teams at the same time (fewer teams should work in the same room).
The overall evaluation of the course over the four editions was as follows: about 80\% of all students evaluated the course as \textit{very positive} (18.45\%) or \textit{positive} (61.16\%). A neutral evaluation was given by 18.45\% of the students, mainly from Fall 2011. Two students from that same year evaluated the course as \textit{negative} (1.94\%). No student evaluated the course as \textit{very negative}.
\begin{figure}[!t]
\centering
\includegraphics[width=8.9cm]{Resources/horizontalBars.pdf}
\caption{Most mentioned positive comments.}
\label{fig:horiz}
\end{figure}
A tentative explanation of the negative results could be related to gender aspects, although no factual evidence is available. For years, informal comments among faculty, not only from Computer Science but also from the other three technical degree programs at the BSEL, have connected students' attention, participation, and discipline in class to the presence or absence of female students; they argue that courses with female students have a better balanced classroom dynamic. The group attending in Fall 2011 had no female students, and direct intervention was necessary several times to keep both the teamwork in the classroom and the discipline of a few students under control. For that group, these aspects were the worst of all four editions of the course. It should be mentioned, in addition, that the teaching professor is female, which is also infrequent in CS, at least in Germany. Furthermore, all female students from the other three years obtained the best grades, and this was also the case in other courses taught by the same female faculty member. It is also worth pointing out that all of them chose to do their two student research projects with this female teacher, and their final grades were the highest possible scores. This supports Shaikh's conclusion in \cite{Sh08}: ``the presence of female faculty in CS is also an important source of mentoring''.
Another possible reason is the students' observed behavior during the course assignments and exercises. Most students were somewhat resistant to participating in didactic exercises other than the ones they were used to from frontal teaching. Open feedback collected at the end of some blocks confirmed that, when exercises were not directly related to programming activities for their projects, students felt they were \textit{wasting their time}. They could not see the potential advantages that class games, student debates, or think-pair-share might have for long-term learning. In Fall 2012, already knowing the difficult situations that arose in Fall 2011, students were instructed in advance about the goals and benefits of such supporting exercises. Appropriate advice was also given by an expert coach. In general, the working environment and the student--faculty relationship were more relaxed and productive in 2012.
Of the final grade in the module, 30 points (out of 100) come from the Lab using Agile. The average final grades for the four editions of the course were:\footnote{The number of teams is given in parentheses. Each team is composed of 4 to 5 students, as a rule.} Fall 2009, 27.82 (6); Fall 2010, 26.79 (7); Fall 2011, 28.92 (5); and Fall 2012, 29.43 (7) points out of 30. All in all, the grades were more than satisfactory: all students earned the required credits, and the final grades were good despite the lack of participation and the difficult situations in Fall 2011. Most of the lost points concerned the scientific writing, not the programs. The developed software programs were working products that satisfied the defined requirements and were finished on time. Furthermore, most XP values and practices were well understood by the students and were consistently applied during the project realization.
%%%
% --- Conclusions ---
%%%
\section{Conclusions}
In this paper, the most significant differences between Agile weekly and block courses at the BSEL were presented. The combined use of all XP practices is very effective when developing Agile-based projects in these courses. Pair programming and the whole-team practice proved to be the ones students enjoyed most. Students' engagement is higher in block courses because they have more time to concentrate and to participate in active learning tasks that need more time to complete. Students exploit the XP practices better when they work without interruption and when the teaching process is adapted accordingly. In block courses, more than in weekly ones, they are able to improve their skills in planning and discussing, in analyzing and creating software, in evaluating and presenting results, as well as in working in teams.
Since Agile's success in the software industry, it has become a constant in CS curricula. Yet it is of utmost importance not only how students learn Agile, but also how to teach it effectively. Teaching screenplays can help faculty ease the preparation and use of teaching materials. These roadmaps can describe the fine-grained learning goals of Agile teaching in detail, and they proved to be very useful in Agile block courses.
Future work will address the introduction of other practices and techniques, for example from Scrum. The use of more tools to support Agile development in the classroom is also planned. These future editions should continue to value individuals and interactions, working software, customer collaboration, and response to change, as Agile software development encourages.
%%%
% --- Bibliography: style and .bib ---
%%%
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,bib2SERP13}
% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{18}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}
\end{document}
\section{A modelling intermezzo}
\subsection{}
\begin{frame}
\frametitleTC{Foreword and motivation}
\framesubtitleTC{}
\myPause
\begin{itemize}[<+-| alert@+>]
\item We said right at the outset that this activity would focus on control\\
and not on modelling.
\item However, without some knowledge on that matter, interpreting the\\
parameters of a control may be extremely cumbersome.
\item Hence we go through the bare minimum we need.
\item In particular, we temporarily bring the continuous-time (CT) domain\\
back into play.
\item \vfill CAVEAT: this material is not presented ``the mainstream way'',\\
so please be EXTREMELY attentive, stop and ask questions\\
immediately at the minimum unclear statement.
\end{itemize}
\end{frame}
\begin{frame}
\frametitleTC{Foreword and motivation}
\framesubtitleTC{}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Any phenomenon naturally takes place in the CT domain.
\item However we (shall) realise controllers as algorithms to run when a new value for $u$ is needed.
\item AS A RESULT OF THIS we need to model the controlled object as something that evolves ``by steps''.
\item We have three important questions to answer.
\begin{enumerate}[<+-| alert@+>]
\item Does the physics of that object ``naturally'' tell us when a new $u$ is to be\\
computed, or not (i.e., \underline{we} have to decide when to compute)?
\item In the second case, assuming that we are computing $u$ periodically,\\
is there any clue to choose the period?
\item Still in the second case, and most important to parametrise\\
a controller meaningfully, does the process model $P(z)$ change\\
if we change the period?
\end{enumerate}
\item \vfill As usual, we start from some examples and then abstract.
\end{itemize}
\end{frame}
\begin{frame}
\frametitleTC{Example 1}
\framesubtitleTC{Preemptive scheduling}
\myPause
\begin{itemize}[<+-| alert@+>]
\item We look at the controlled system (plus sensor and actuator), NOT at the control.
\item Let us focus on the \emph{control oriented} model for one task.
\item Functionally:
\begin{itemize}[<+-| alert@+>]
\item a CPU time amount (or \TC{burst}) $b$ is allotted by some control (not our business here);
\item a timer is set and the task activated (actuation);
\item there is \TC{no information} from the task (no assumption on its code can\\
be made, the OS has to be agnostic) till either the time elapses,\\
or the task yields the CPU;
\item the scheduler regains control and measures (sensing) the actually\\
used CPU time; this will in general be $b+\delta b$, where $\delta b$\\
is a disturbance (yield before $b$, preemption interrupt received\\
while in a critical section, and so on);
\item control is invoked, and the same or some other task is activated.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}
\frametitleTC{Example 1}
\framesubtitleTC{Preemptive scheduling}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Is there a ``natural cadence'' for computing a new control?
\item[] $\Rightarrow$ Yes, that of task activations.
\item Having $k$ count the scheduler interventions, what is the model with $b(k)$ as input
and the task's accumulated CPU time $t_{CPU}(k)$ as output?
\item[] $\Rightarrow$ Quite immediately, assuming that instant $k$ is at the \emph{end} of the activation\\
\hspace{5.5mm}period, thus the relative burst was decided at $k-1$,
\begin{displaymath}
t_{CPU}(k) = t_{CPU}(k-1)+b(k-1)+\delta_b(k-1).
\end{displaymath}
\item Does the model depend on the (continuous) time elapsed between\\
interventions $k-1$ and $k$?
\item[] $\Rightarrow$ Apparently, no.
\end{itemize}
\end{frame}
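\begin{frame}[fragile]
\frametitleTC{Example 1}
\framesubtitleTC{A minimal simulation sketch (illustrative only)}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Just to fix ideas, the recursion above can be iterated in a few lines\\
of code; the burst values and the disturbance range below are made up\\
for illustration, they are not prescribed by the scheduling model.
\item A sketch (Python, purely illustrative):
{\scriptsize
\begin{verbatim}
# t_cpu(k) = t_cpu(k-1) + b(k-1) + db(k-1)
import random

t_cpu = 0.0                         # accumulated CPU time
bursts = [10.0, 10.0, 8.0, 12.0]    # allotted by some control
for b in bursts:
    db = random.uniform(-1.0, 1.0)  # early yield, preemption, ...
    t_cpu = t_cpu + b + db
    print(t_cpu)
\end{verbatim}
}
\item Note that no $T_s$ appears anywhere: $k$ just counts activations,\\
whatever continuous time elapses in between.
\end{itemize}
\end{frame}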
\begin{frame}[fragile]
\frametitleTC{Example 2}
\framesubtitleTC{CPU thermal management --- brutally simplified}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Once again, no control and just the phenomenon to govern.
\item Thermal generation/storage/dissipation takes place in the CT domain:
\begin{itemize}[<+-| alert@+>]
\item {\scriptsize\verb#stored energy = CPU thermal capacity * CPU temperature#}
\item {\scriptsize\verb#time derivative of energy = generated power - dissipated power#}
\item {\scriptsize\verb#generated power = input#}
\item {\scriptsize\verb#dissipated power = sink thermal conductance * (CPU temp. - ambient temp.)#}
\item {\scriptsize\verb#ambient temperature = another input#}
\end{itemize}
\item As a differential equation, then,
\begin{displaymath}
C_{CPU} \frac{dT_{CPU}(t)}{dt} = P(t) - G_{sink}(T_{CPU}(t)-T_{amb}(t)).
\end{displaymath}
\end{itemize}
\end{frame}
\begin{frame}
\frametitleTC{Example 2}
\framesubtitleTC{CPU thermal management}
\myPause
\begin{itemize}[<+-| alert@+>]
\item We want a DT model, however, so we decide a timestep $T_s$ to compute the model state and output (here
just $T_{CPU}$) and \TC{replace the time derivative with the incremental ratio over one step}.
\item Writing $v(k)$ in the DT to indicate $v(kT_s)$ in the CT, whatever $v$ is, this gives
\begin{displaymath}
C_{CPU} \frac{T_{CPU}(k)-T_{CPU}(k-1)}{T_s} = P(k) - G_{sink}(T_{CPU}(k)-T_{amb}(k)).
\end{displaymath}
\item The curious may ask why $k$ and not $k-1$ on the right hand side.\\
We omit the matter in this course, but if interested look for\\
``implicit/explicit discretisation''.
\end{itemize}
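For completeness (this is just a rearrangement of the equation above, not new material), solving for $T_{CPU}(k)$ gives the explicit update that is iterated step by step:
\begin{displaymath}
T_{CPU}(k) = \frac{C_{CPU}\,T_{CPU}(k-1) + T_s\left(P(k) + G_{sink}T_{amb}(k)\right)}{C_{CPU}+G_{sink}T_s}.
\end{displaymath}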
\end{frame}
\begin{frame}[fragile]
\frametitleTC{Example 2}
\framesubtitleTC{CPU thermal management}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Now getting the model in $z$ form is straightforward:
\begin{displaymath}
C_{CPU} \frac{T_{CPU}-z^{-1}T_{CPU}}{T_s} = P - G_{sink}(T_{CPU}-T_{amb})
\end{displaymath}
\item Maxima:
{\scriptsize
\begin{verbatim}
dum: solve(Ccpu*(Tcpu-Tcpu/z)/Ts=P-Gsink*(Tcpu-Tamb),Tcpu); /* solve for Tcpu */
sol: rhs(dum[1]);              /* take the RHS of the 1st solution */
TFs: jacobian([sol],[P,Tamb]); /* transfer functions */
\end{verbatim}
}
\item Result:
\begin{displaymath}
\begin{array}{rcl}
T_{CPU}(k) &=& \frac{zT_s}{(C_{CPU}+G_{sink}T_s)z-C_{CPU}} \, P(k) \\ \\
& & +\frac{zG_{sink}T_s}{(C_{CPU}+G_{sink}T_s)z-C_{CPU}} \, T_{amb}(k).
\end{array}
\end{displaymath}
\end{itemize}
\end{frame}
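\begin{frame}[fragile]
\frametitleTC{Example 2}
\framesubtitleTC{A minimal simulation sketch (illustrative only)}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Rearranging the discretised equation gives an explicit update\\
(shown as comments below) that a few lines of code can iterate;\\
the numerical values are invented just to have something to run.
\item A sketch (Python, purely illustrative):
{\scriptsize
\begin{verbatim}
# Ccpu*(T(k)-T(k-1))/Ts = P(k) - Gsink*(T(k)-Tamb(k))
# => T(k) = (Ccpu*T(k-1) + Ts*(P(k) + Gsink*Tamb(k)))
#           / (Ccpu + Gsink*Ts)
Ccpu, Gsink, Ts = 10.0, 0.5, 0.1   # made-up parameters
T, Tamb, P = 25.0, 25.0, 30.0      # initial temp., constant inputs
for k in range(5):
    T = (Ccpu*T + Ts*(P + Gsink*Tamb)) / (Ccpu + Gsink*Ts)
    print(T)
\end{verbatim}
}
\item Here $T_s$ does appear in the update: changing it changes the\\
model the controller sees.
\end{itemize}
\end{frame}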
\begin{frame}
\frametitleTC{Example 2}
\framesubtitleTC{CPU thermal management}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Does the model depend on the continuous time elapsed between $k-1$ and $k$,\\
i.e., on $T_s$?
\item[] $\Rightarrow$ Apparently, yes: $T_s$ appears as a parameter in the transfer functions.
\item Is there a ``natural cadence'' for computing a new control for this model?
\item Reformulating, is there a ``good'' value of $T_s$ so that the points computed by the\\
DT model represent ``well enough'' the CT solution, so as to be informative\\
for control and thereby suggest the cadence above?
\item[] $\Rightarrow$ Yes, but that ``good'' $T_s$ apparently depends on the numbers\\
\hspace{5.5mm}in the model. We have to decide based on them.
\end{itemize}
\end{frame}
\begin{frame}
\frametitleTC{Lessons learnt}
\framesubtitleTC{}
\myPause
\begin{itemize}[<+-| alert@+>]
\item Sometimes $k$ just counts control interventions, and the time in between them does not change the process
model $P(z)$.
\item In this case the control system is DT, and one can reason entirely in the DT domain. No need to relate
control parameters to any CT entity.
\item \vspace{3mm}Sometimes, conversely, the model $P(z)$ seen by the controller depends on the time between
two evaluations of its output, which for us coincides with the cadence to compute the control signal.
\item In this case the control system is DT but also \TC{sampled-signals},\\
and for evident practical reasons, control parameters need\\
expressing in such a way to not change if $T_s$ is changed.
\item Of course, in this case clues to select $T_s$ are also needed.
\end{itemize}
\end{frame}
\subsection{Weak law of large numbers}
The sample mean is:
\(\bar X_n=\dfrac{1}{n}\sum_{i=1}^nX_i\)
The variance of this, assuming the \(X_i\) are independent and identically distributed with variance \(\sigma^2=Var[X]\), is:
\(Var[\bar X_n]=Var[\dfrac{1}{n}\sum_{i=1}^nX_i]\)
\(Var[\bar X_n]=\dfrac{1}{n^2}\sum_{i=1}^nVar[X_i]=\dfrac{1}{n^2}nVar[X]\)
\(Var[\bar X_n]=\dfrac{\sigma^2}{n} \)
We know from Chebyshev’s inequality:
\(P(|X-\mu | \ge k\sigma )\le \dfrac{1}{k^2}\)
Applying this with \(\bar X_n\) in place of \(X\), whose standard deviation is \(\dfrac{\sigma}{\sqrt n}\):
\(P(|\bar X_n-\mu | \ge \dfrac{k\sigma }{\sqrt n})\le \dfrac{1}{k^2}\)
Substituting \(k:=\dfrac{k\sqrt n}{\sigma}\) (a rescaling of \(k\)) gives:
\(P(|\bar X_n-\mu | \ge k)\le \dfrac{\sigma^2}{nk^2}\)
As \(n\) increases, the chance that the sample mean lies outside a given distance from the population mean approaches \(0\).
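As a concrete illustration, with arbitrarily chosen numbers: for \(\sigma^2=1\), \(n=400\) and \(k=0.1\),
\(P(|\bar X_n-\mu | \ge 0.1)\le \dfrac{1}{400\cdot 0.01}=0.25\)
Increasing the sample size to \(n=1600\) tightens this bound to \(0.0625\).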
\section{Project Development: Manufacturing}
\label{sec:manufDev}
The PCB layouts as shown below are currently being produced by Fritzing Fab. The estimated delivery date is mid-March.
\begin{figure}[H]
\centering
\makebox[\textwidth][c]{\includegraphics[width=1\textwidth]{ChauSze_ElectronicDesign1_BBSizing.jpg}}
\caption{PCB layout planning.}
\label{tiny2_plan}
\end{figure}
\subsection{ATtiny85 Design: 3 LEDs}
\begin{figure}[H]
\centering
\makebox[\textwidth][c]{\includegraphics[width=1\textwidth]{ChauSze_SeniorDesign1_Tiny1_PCB.PNG}}
\caption{Fritzing PCB layout of the first ATtiny85 design.}
\label{tiny1_pcb}
\end{figure}
\subsection{ATtiny85 Design: 6 LEDs}
\begin{figure}[H]
\centering
\makebox[\textwidth][c]{\includegraphics[width=1\textwidth]{ChauSze_SeniorDesign1_Tiny2_PCB.PNG}}
\caption{Fritzing PCB layout of the second ATtiny85 design.}
\label{tiny2_pcb}
\end{figure}
\par
\section{Prototypes and descriptions of {\tt BKL} methods}
\label{section:BKL:proto}
\par
This section contains brief descriptions including prototypes
of all methods that belong to the {\tt BKL} object.
\par
\subsection{Basic methods}
\label{subsection:BKL:proto:basics}
\par
As usual, there are four basic methods to support object creation,
setting default fields, clearing any allocated data, and free'ing
the object.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
BKL * BKL_new ( void ) ;
\end{verbatim}
\index{BKL_new@{\tt BKL\_new()}}
This method simply allocates storage for the {\tt BKL} structure
and then sets the default fields by a call to
{\tt BKL\_setDefaultFields()}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_setDefaultFields ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_setDefaultFields@{\tt BKL\_setDefaultFields()}}
This method sets the fields of the structure to their default
values:
{\tt bpg}, {\tt colors} and {\tt regwghts} are set to {\tt NULL},
the {\tt int} parameters are set to zero,
and the {\tt cweights} vector is filled with zeros.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_clearData ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_clearData@{\tt BKL\_clearData()}}
This method clears any data allocated by the object,
namely the {\tt colors} and {\tt regwghts} vectors.
It then fills the structure's fields with default values
with a call to {\tt BKL\_setDefaultFields()}.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_free ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_free@{\tt BKL\_free()}}
This method releases any storage by a call to
{\tt BKL\_clearData()} and then free's the storage for the
structure with a call to {\tt free()}.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Initializer methods}
\label{subsection:BKL:proto:initializers}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_init ( BKL *bkl, BPG *bpg, float alpha ) ;
\end{verbatim}
\index{BKL_init@{\tt BKL\_init()}}
This method initializes the {\tt BKL} object given a bipartite
graph object and cost function parameter as input.
Any previous data is cleared with a call to {\tt BKL\_clearData()}.
The {\tt ndom}, {\tt nseg} and {\tt nreg} scalars are set,
the {\tt regwghts[]} vector allocated and filled,
and
the {\tt colors[]} vector allocated and filled with zeros.
\par \noindent {\it Error checking:}
If {\tt bkl} or {\tt bpg} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Utility methods}
\label{subsection:BKL:proto:utilities}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_setRandomColors ( BKL *bkl, int seed ) ;
\end{verbatim}
\index{BKL_setRandomColors@{\tt BKL\_setRandomColors()}}
If {\tt seed > 0}
a random number generator is set using {\tt seed}.
The domains are then colored {\tt 1} or {\tt 2} randomly
and {\tt BKL\_setColorWeights()} is called to set the segment
weights.
\par \noindent {\it Error checking:}
If {\tt bkl} or {\tt bkl->bpg} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_setColorWeights ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_setColorWeights@{\tt BKL\_setColorWeights()}}
This method sets the color weights for the region.
It assumes that all domains are colored {\tt 1} or {\tt 2}.
The segments are then colored.
If a segment is adjacent only to domains of one color, its color is
that color, otherwise its color is {\tt 0}.
\par \noindent {\it Error checking:}
If {\tt bkl} or {\tt bkl->bpg} is {\tt NULL},
an error message is printed and the program exits.
The colors of the domains are checked to ensure they are {\tt 1} or
{\tt 2}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int BKL_segColor ( BKL *bkl, int iseg ) ;
\end{verbatim}
\index{BKL_segColor@{\tt BKL\_segColor()}}
This method returns the color of segment {\tt iseg}.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
or if {\tt iseg} is not in {\tt [bkl->ndom, bkl->nreg)},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_flipDomain ( BKL *bkl, int idom ) ;
\end{verbatim}
\index{BKL_flipDomain@{\tt BKL\_flipDomain()}}
This method flips the color of domain {\tt idom},
adjusts the colors of neighboring segments
and the {\tt cweights[]} vector.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
or if {\tt idom} is not in {\tt [0,bkl->ndom)},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int BKL_greyCodeDomain ( BKL *bkl, int count ) ;
\end{verbatim}
\index{BKL_greyCodeDomain@{\tt BKL\_greyCodeDomain()}}
This method returns the next domain id in a grey code sequence,
used to exhaustively search a subspace of partitions
defined by a set of candidate domains to flip;
a usage sketch is given after the partition evaluation methods below.
The value {\tt count} ranges from {\tt 1} to $2^{\mbox{\tt ndom}}$.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float BKL_setInitPart ( BKL *bkl, int flag, int seed, int domcolors[] ) ;
\end{verbatim}
\index{BKL_setInitPart@{\tt BKL\_setInitPart()}}
This method sets the initial partition
by coloring the domains and segments.
The {\tt flag} parameter has the following values.
\begin{itemize}
\item
{\tt flag = 1} $\longrightarrow$
random coloring of the domains
\item
{\tt flag = 2} $\longrightarrow$
one black domain, ({\tt seed} \% {\tt ndom}), rest are white
\item
{\tt flag = 3} $\longrightarrow$
one black pseudoperipheral domain, found using
domain ({\tt seed} \% {\tt ndom}) as root, rest are white
\item
{\tt flag = 4} $\longrightarrow$
roughly half-half split, breadth first search
of domains, ({\tt seed} \% {\tt ndom}) as root
\item
{\tt flag = 5} $\longrightarrow$
roughly half-half split, breadth first search
of domains, ({\tt seed} \% {\tt ndom}) as root to find
a pseudoperipheral domain as root
\item
{\tt flag = 6} $\longrightarrow$
use {\tt domcolors[]} to seed the {\tt colors[]} array
\end{itemize}
The {\tt seed} input parameter is for a random number generator.
The {\tt domcolors[]} input array is used only for {\tt flag = 6}.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
or if {\tt flag = 6} and {\tt domcolors} is {\tt NULL},
or if {\tt flag} is not in {\tt [1,6]},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
int BKL_domAdjToSep ( BKL *bkl, int dom ) ;
\end{verbatim}
\index{BKL_domAdjToSep@{\tt BKL\_domAdjToSep()}}
This method returns {\tt 1}
if domain {\tt dom} is adjacent to the separator
and {\tt 0} otherwise.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
or if {\tt dom} is not in {\tt [0,ndom)},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
\par
\subsection{Partition evaluation methods}
\label{subsection:BKL:proto:evaluation}
\par
There are three functions that evaluate the cost of a partition.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void BKL_evalgain ( BKL *bkl, int dom, int *pdeltaS, int *pdeltaB, int *pdeltaW ) ;
\end{verbatim}
\index{BKL_evalgain@{\tt BKL\_evalgain()}}
This method evaluates the change in the components
$\Delta S$, $\Delta B$ and $\Delta W$
that would occur if domain {\tt dom} were to be flipped.
These {\it gain} values are put into the storage pointed to by
{\tt pdeltaS}, {\tt pdeltaB} and {\tt pdeltaW}.
The method checks that {\tt bkl}, {\tt pdeltaS}, {\tt pdeltaB}
and {\tt pdeltaW} are not {\tt NULL}
and that {\tt dom} is in {\tt [0,bkl->ndom)}.
\par \noindent {\it Error checking:}
If {\tt bkl}, {\tt pdeltaS}, {\tt pdeltaB} or {\tt pdeltaW}
is {\tt NULL},
or if {\tt dom} is not in {\tt [0,ndom)},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float BKL_evalfcn ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_evalfcn@{\tt BKL\_evalfcn()}}
The $|S|$, $|B|$ and $|W|$ values are taken from the {\tt
cweights[]} vector.
If $\min(|B|,|W|) > 0$, this function returns
$$
|S|\left(1 + \alpha * \frac{\max(|B|,|W|)}{\min(|B|,|W|)} \right),
$$
otherwise it returns $(|S| + |B| + |W|)^2$.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float BKL_eval ( BKL *bkl, int Sweight, int Bweight, int Wweight ) ;
\end{verbatim}
\index{BKL_eval@{\tt BKL\_eval()}}
The $|S|$, $|B|$ and $|W|$ values are taken from the {\tt
Sweight}, {\tt Bweight} and {\tt Wweight} parameters.
If $\min(|B|,|W|) > 0$, this function returns
$$
|S|\left(1 + \alpha * \frac{\max(|B|,|W|)}{\min(|B|,|W|)} \right),
$$
otherwise it returns $(|S| + |B| + |W|)^2$.
The method checks that {\tt bkl} is not {\tt NULL}.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
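\par
\noindent
To make the interplay between the utility and evaluation methods concrete,
the fragment below sketches the kind of grey code scan that the improvement
methods of the next subsection perform internally.
It is an illustration only, not part of the library: it assumes that the
{\tt BKL} object {\tt bkl} has already been initialized and given an initial
coloring, and copying the best {\tt colors[]} vector found is omitted.
\begin{verbatim}
int    count, idom, nflips ;
float  best, cost ;

nflips = 1 << bkl->ndom ;                  /* 2^ndom colorings to visit   */
best   = BKL_evalfcn(bkl) ;                /* cost of starting partition  */
for ( count = 1 ; count <= nflips ; count++ ) {
   idom = BKL_greyCodeDomain(bkl, count) ; /* next domain to flip         */
   BKL_flipDomain(bkl, idom) ;             /* flip it, update cweights[]  */
   cost = BKL_evalfcn(bkl) ;               /* evaluate the new partition  */
   if ( cost < best ) {
      best = cost ;                        /* record best cost; saving    */
   }                                       /* bkl->colors[] is omitted    */
}
\end{verbatim}
\par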
\par
\subsection{Partition improvement methods}
\label{subsection:BKL:proto:improve}
\par
There are two functions that take a given partition and some input
parameters and return a (hopefully) improved partition.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float BKL_exhSearch ( BKL *bkl, int mdom, int domids[], int tcolors[] ) ;
\end{verbatim}
\index{BKL_exhSearch@{\tt BKL\_exhSearch()}}
This method performs an exhaustive search of a subspace of
partitions and returns the best partition.
The starting partition is given by the {\tt BKL} object's {\tt
colors[]} vector.
The subspace of domains to flip is defined by the {\tt
domids[mdom]} vector.
The {\tt tcolors[]} vector is a work vector.
There are $2^{\mbox{\tt mdom}}$ distinct partitions in the subspace
to be explored.
We flip the domains using a grey code sequence so a total of
$2^{\mbox{\tt mdom}}$ domain flips are performed.
The {\tt bkl->colors[]} vector is filled with the colors of
the best partition and its cost is returned.
\par \noindent {\it Error checking:}
If {\tt bkl}, {\tt domids} or {\tt tcolors} is {\tt NULL},
or if {\tt mdom < 1},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float BKL_fidmat ( BKL *bkl ) ;
\end{verbatim}
\index{BKL_fidmat@{\tt BKL\_fidmat()}}
If the number of domains is eight or less, an exhaustive search is
made.
Otherwise, this method finds a good partition using a variant of the
Fiduccia-Mattheyses algorithm.
At any step, only the domains that are adjacent to the separator
are eligible to be flipped.
For each eligible domain,
we maintain
$\Delta S$, $\Delta B$ and $\Delta W$,
the change in the three component weights
if this domain were to be flipped.
These values must be updated whenever a neighboring domain has been
flipped, and so they constitute {\it local} information.
The cost of the partition that would result if a domain were to be
flipped is a function of the local information
$\Delta S$, $\Delta B$ and $\Delta W$,
as well as the present weights of the components
(global information).
At each step we evaluate the cost of the resulting partition for each
domain that is eligible to be flipped.
This is relatively expensive when compared to using a heap to contain
$\Delta S$ for each domain, but we have found the resulting
partitions to be better.
The eligible domains are kept on a doubly linked list to allow easy
insertions and deletions.
\par \noindent {\it Error checking:}
If {\tt bkl} is {\tt NULL},
an error message is printed and the program exits.
%-----------------------------------------------------------------------
\end{enumerate}
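\par
\noindent
For orientation, here is a minimal usage sketch of a complete partition
improvement cycle using the methods documented above.
It is an illustration, not part of the library: it assumes that {\tt bpg}
already points to a properly initialized {\tt BPG} bipartite graph object,
that $\alpha = 1.0$ is an acceptable cost function parameter, that the seed
{\tt 2749} is as good as any other, and that the value returned by
{\tt BKL\_fidmat()} is the cost of the resulting partition, by analogy with
{\tt BKL\_exhSearch()}.
\begin{verbatim}
BKL   *bkl ;
float cost ;

bkl = BKL_new() ;                     /* allocate, set default fields     */
BKL_init(bkl, bpg, 1.0) ;             /* attach the bipartite graph       */
BKL_setInitPart(bkl, 3, 2749, NULL) ; /* flag 3: pseudoperipheral domain  */
cost = BKL_fidmat(bkl) ;              /* improve the partition            */
/* the final coloring is now in bkl->colors[] */
BKL_free(bkl) ;                       /* release all storage              */
\end{verbatim}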
\section{Evolution of Trust}
Trust Is Risk is a game that runs indefinitely. In each turn, a player is chosen, decides what to play and, if valid, the
chosen turn is executed.
\subimport{fc17/definitions/}{turns.tex}
\ifdefined\proceedings
\else
\subimport{common/}{turnexample.tex}
We use $prev\left(j\right)$ and $next\left(j\right)$ to denote the previous and next turn respectively played by
$Player(j)$.
\fi
\subimport{common/definitions/}{prevnextturn.tex}
\subimport{common/definitions/}{damage.tex}
\subimport{common/definitions/}{history.tex}
\noindent Knowledge of the initial graph $\mathcal{G}_0$, all players' initial capital and the history amounts to a full
description of the evolution of the game. Building on the example of Fig.~\ref{fig:utxo}, we can see the resulting graph
when $D$ plays
\begin{equation}
\label{turnexample}
Turn_1 = \{Steal\left(1, A\right), Add\left(4, C\right), Add\left(-1, B\right)\} \enspace.
\end{equation}
\subimport{common/figures/}{turnexample.tikz}
\noindent We now define the Trust Is Risk Game formally. We assume players are chosen in such a way that, after her turn, each
player will eventually play again.
\subimport{fc17/algorithms/}{trustisriskgame.tex}
\noindent \texttt{strategy[}$A$\texttt{]()} provides player $A$ with full knowledge of the game, except for the capitals of
other players. This assumption may not always be realistic. \texttt{executeTurn()} checks the validity of \texttt{Turn} and
substitutes it with an empty turn if invalid. Subsequently, it creates the new graph $\mathcal{G}_j$ and updates the
history accordingly. \ifdefined\proceedings\else For the routine code, see Appendix B.\fi
\documentclass[a4paper,notitlepage,11pt]{article}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz}
\usetikzlibrary{calc}
% add bulgarian support
\usepackage[utf8]{inputenc}
\usepackage[english,bulgarian]{babel}
\usepackage[T2B]{fontenc}
\usepackage[a4paper, total={7.5in, 10.7in}]{geometry}
\usepackage{tikz,lmodern}
\usepackage{mymacros}
\usepackage{mdwlist}
\usepackage{enumitem}
\usepackage[backref=true, %
hyperref=true, %
firstinits=true, %
indexing=true, %
url=false, %
style=alphabetic, % style=debug, alphabetic
bibstyle=ieee, %
backend=biber, %
doi=true,
texencoding=utf8,
bibencoding=utf8]{biblatex}
%\addbibresource{myPublications.bib}
%\definecolor{myblue}{rgb}{0.0,0.7,1.0}
%\definecolor{myblack}{rgb}{0.0,0.0,0.0}
%\definecolor{mywhite}{rgb}{1.0,1.0,1.0}
%%%%%%%%%%%%
% Theme 1 %
%%%%%%%%%%%%
\definecolor{myblue}{HTML}{31A2AC}
\definecolor{myred}{HTML}{AF1C1C}
%\definecolor{myred}{HTML}{8C001A} % requested by plamena
\definecolor{mywhite}{HTML}{F0EFF0}
\definecolor{myblack}{HTML}{2F2F28}
%%%%%%%%%%%%
% Theme 2 %
%%%%%%%%%%%%
%\definecolor{myblue}{HTML}{FF6600}
%\definecolor{myred}{HTML}{000000}
%\definecolor{mywhite}{HTML}{CCCCCC}
%\definecolor{myblack}{HTML}{000000}
\begin{document}
% turn off page numbering
\thispagestyle{empty}
\pagecolor{mywhite}
\begin{figure}[t!]
\hspace{-2cm}
% \centering
\begin{tikzpicture}
\coordinate (BL) at (-3,0);
\coordinate (TR) at (19,3);
\coordinate (A) at (8,2.2);
\coordinate (D) at (8.8,3);
\coordinate (A2) at (-3,2.2);
\coordinate (B) at (10.2,0);
\coordinate (C) at (11,0.8);
\coordinate (C2) at (19,0.8);
% \draw[step=1cm,gray,very thin] (BL) grid (TR);
\draw[myblue, fill=myblue] (BL)--(B)--(A)--(A2)--cycle;
\draw[myblue, fill=myblue] (C)--(D)--(TR)--(C2)--cycle;
\draw[myblue, fill=myblue] (0,0.5) rectangle (2.7,3.5);
% \draw[mylightblue, fill=mylightblue](A)--(B)--(C)--(D)--cycle;
% \draw[myblue, fill=myblue] at (3.0, 0.0) rectangle (2.5cm, 3.0cm);
\node[anchor=south west,inner sep=0] at (0.1,0.5) {\includegraphics[width=2.5cm]{ymadzhunkov.jpg}};
\node[draw=none,align=left] at (5.8,1) {
\color{mywhite}\Large \textbf{Yordan Madzhunkov}\\
% \color{mywhite}Physicist, Software Developer\\
\color{mywhite}[email protected]
};
\node[draw=none,align=left] at (14.5,2) {
\color{mywhite} \hfill \large \textbf{"Creativity is intelligence having fun"}\\
\color{mywhite} \hfill Albert Einstein
};
\end{tikzpicture}
\end{figure}
\noindent
\begin{minipage}[t]{.6\textwidth}
\color{myblack}
\input{minipage1.tex}
\end{minipage}% This must go next to `\end{minipage}`
\quad
\quad
\fcolorbox{myblue}{myblue}{
\begin{minipage}[t]{.39\textwidth}
\color{mywhite}
\input{minipage2.tex}
\end{minipage}
}
%\newpage
%\thispagestyle{empty}
%\noindent
%\begin{minipage}[t]{.6\textwidth}
% \vspace{0.1cm}
% \color{myblack}
% \input{minipage3.tex}
%\end{minipage}% This must go next to `\end{minipage}`
%\quad
%\quad
%\fcolorbox{myblue}{myblue}{
%\begin{minipage}[t]{.39\textwidth}
%\color{mywhite}
%\input{minipage4.tex}
%\end{minipage}
%}
%\vspace{-0.8cm}
%\mysection{Personality Strengths}
%\small
%I find fascinating to learn something new. I enjoy to analyze problems from
%every angle in quest for best solutions. When presented with chronic, systemic
%problems, I respond with enthusiasm and energy, putting days and nights to find
%a solution. I easily use my accumulated knowledge to identify fallacious
%arguments and systematical contradictions. Therefore, I can produce factual
%knowledge that can be trusted. I am confident in my abilities. I have passion
%to share my opinion, knowledge and critical thinking with my colleagues. People
%find intriguing my way with words. My confidence, quick thought and ability to
%connect distant ideas in novel ways create a style of communication that is
%charming, entertaining, and informative at the same time. I have very good
%communication skills, feeling at ease in social events or in situations where I
%need to connect to other people. Playing an active role in the society and
%knowing what makes other people tick means a lot for me. However, I am not
%influenced easily by others opinion about me.
%\newpage
%\section*{Bibliographies}
%\nocite{*}
%\printbibliography
\end{document}
\chapter*{Vita} \label{vita}
The vita should be written in narrative form, not resume or curriculum vitae form.
It should contain appropriate academic and professional information about the
author/student.
Personal information, such as the student's address or phone number, should not
be included.
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size
\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
\usepackage{graphicx}
\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default
\usepackage[english]{babel} % English language/hyphenation
\usepackage{amsmath,amsfonts,amsthm} % Math packages
\usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template
\usepackage{sectsty} % Allows customizing section commands
\allsectionsfont{\centering \normalfont\scshape} % Make all sections centered, the default font and small caps
\usepackage{fancyhdr} % Custom headers and footers
\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers
\fancyhead{} % No page header - if you want one, create it in the same way as the footers below
\fancyfoot[L]{} % Empty left footer
\fancyfoot[C]{} % Empty center footer
\fancyfoot[R]{\thepage} % Page numbering for right footer
\renewcommand{\headrulewidth}{0pt} % Remove header underlines
\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
\setlength{\headheight}{13.6pt} % Customize the height of the header
\numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
\numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
\numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4)
\setlength\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Create horizontal rule command with 1 argument of height
\title{
\normalfont \normalsize
\textsc{EEP702, Software Laboratory} \\ [25pt] % Your university, school and/or department name(s)
\horrule{0.5pt} \\[0.4cm] % Thin top horizontal rule
\huge Ubuntu Weather Application \\ % The assignment title
\horrule{2pt} \\[0.5cm] % Thick bottom horizontal rule
}
\author{Rishi Dua, 2010EE50557} % Your name
\date{\normalsize 27 January, 2014} % Today's date or a custom date
\begin{document}
\maketitle % Print the title
%----------------------------------------------------------------------------------------
% PROBLEM 1
%----------------------------------------------------------------------------------------
\section{Weather Application}
\subsection{Problem statement}
Develop an Ubuntu app which can accomplish the following two tasks:
\begin{enumerate}
\item {Forecast the weather conditions for the next few days}
\item {Keep a record of the past few days' weather conditions (max 30 days) in your database and show it as a strip chart which has a rewind / fast-forward button.}
\end{enumerate}
Use Tcl/Tk for the GUI to interact with the user and Python for the other operations. The user is asked to enter the name of a place for which weather conditions have to be forecast, the number of days (n) for which to show information, and to press a button to decide whether to forecast or to report the past n days' weather conditions. Use MySQL as the database. A maximum of 5 days' information needs to be recorded. Make use of the Yahoo API for weather forecasting information. Develop HTML documentation using Doxygen. Give the user an option to provide details of the place either by specifying the name of that place or by manually entering the geographical location (lat/long).
\subsection{Abstract}
Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more. Open source and business-friendly, Tcl is a mature yet evolving language that is truly cross platform, easily deployed and highly extensible.\\
Tk is a graphical user interface toolkit that takes developing desktop applications to a higher level than conventional approaches. Tk is the standard GUI not only for Tcl, but for many other dynamic languages, and can produce rich, native applications that run unchanged across Windows, Mac OS X, Linux and more\\
\subsection{Specification And Assumptions}
{\textbf {Tool Specifications:}}\\
Language used: Tcl\\
Platform: Ubuntu 12.04\\
Additional tools used: python\\
Bash Version: GNU bash, version 4.2.25(1)-release (x86\_64-pc-linux-gnu)\\
TK Version: 8.5.12\\
API used: http://query.yahooapis.com/v1/public/\\
MySQL: Used from the package lamp. The SQL dump is attached with the code\\
DoxyGen: This was used to generate the code documentation\\
{\textbf{Assumptions}}\\
Yahoo API provides weather report for all woeid (cities)\\
\textbf{Problem specifications}\\
Here is a list of all namespaces:\\
cleanup \\
getweather \\
woeidfromcordi \\
woeidfromquery \\
writefutureweather \\
writepastweather \\
Here is a list of all namespace members:
cleandb() : cleanup \\
getwoeid() : woeidfromcordi , woeidfromquery\\
parseweather() : getweather\\
showweather() : writefutureweather , writepastweather\\
yahootosql() : getweather , writepastweather , writefutureweather , woeidfromcordi\\
\newpage
\subsection{Flow chart}
{\center\includegraphics[height=10 in]{flowchart.jpg}}
The flowchart for the cron job is\\
{\center\includegraphics[height=4 in]{cronflow.jpg}}
\subsection{Logic Implementation}
The problem is broken into three parts.\\
\textbf{1. Getting woeid}\\
APIs used to get woeid:\\
Searching by city name:\\
http://query.yahooapis.com/v1/public/yql?q=select\%20*\%20from\%20geo.places\%\\
20where\%20text\%3D\%22Place\%20name\%22\&format=xml\\
Searching by cordinates:\\
http://query.yahooapis.com/v1/public/yql?q=select\%20*\%20from\%20geo.placefinder\%\\
20where\%20text="37.416275,-122.025092"\%20and\%20gflags="R"\&format=xml\\
Example YQL Queries for the same:\\
select city, woeid from geo.placefinder where text="delhi"\\
select city, woeid from geo.placefinder where text="28.7,77.2" and gflags="R"\\
\textbf{2. Getting weather}\\
The user-entered values are parsed and passed to the Python script as shell arguments\\
The Tcl file calls the Python code, which uses the Yahoo weather API and stores the results in the database\\
The result is fetched by Tcl and shown on the user interface\\
Exceptions are caught by the Python file whenever there is no internet connection\\
\textbf{3. Cleaning the database of records older than 30 days}\\
The software utility cron is a time-based job scheduler in Unix-like operating systems; it is used to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals, typically to automate system maintenance or administration. Here, a small Python script run as a cron job removes all records older than 30 days.
\subsection{Execution Directive}
No compilation is required.\\
Run directly by typing\\
./main.tcl\\
\subsection{Output Of The Program}
{\center\includegraphics[height=4 in]{ss1.png}}
{\center\includegraphics[height=4 in]{ss2.png}}
{\center\includegraphics[height=4 in]{ss3.png}}
{\center\includegraphics[height=4 in]{ss4.png}}
\subsection{Result}
A weather application is developed using the following:\\
MySQL: Used from the package lamp. The SQL dump is attached with the code\\
DoxyGen: This was used to generate the code documentation\\
\textbf{Problems encountered:}\\
Testing internet connectivity\\
Solution: The returned file is parsed and an exception is caught if the returned size is 0\\
\subsection{Conclusion}
Successfully developed an application that fetches the weather from the Yahoo API, stores it in a database and displays it to the user.
\end{document}
\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{amsmath,amsfonts}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage[margin=1in]{geometry}
\usepackage{enumitem}
\usepackage{multirow}
\usepackage{graphicx,wrapfig,lipsum}
\usepackage{graphicx} % for images
\usepackage{caption} % for captions under images/figures
\usepackage{url} % for urls
\setcounter{tocdepth}{5}
\setcounter{secnumdepth}{5}
\title{CS 669 Assignment 1}
\author{Rohit Patiyal \\ Devang Bacharwar}
\date{\today}
\begin{document}
\maketitle
\vspace{2.0cm}
\tableofcontents
\clearpage
\section {Objective}
To build Bayes and Naive-Bayes classifiers for different types of data sets:
\subsection{2-D artificial Data of 3 or 4 classes}
\begin{enumerate}
\item {Linearly separable data set}
\item {Nonlinearly separable data sets (3 Data sets)}
\item {Overlapping data set}
\end{enumerate}
\subsection{Real World data set}
\vspace{1.0cm}
\section{Procedure}
\begin{enumerate}
\item {The data for each class is partitioned into 75\% for training and
    25\% for testing.}
\item {The mean and covariance are calculated for each class using the
    training data.}
\item {For the points of a grid, the likelihood is calculated for each class,
    and each point is labelled with the class of maximum likelihood.}
\item{For the Bayes classifier, the class-conditional likelihood is assumed
    to be a multivariate Gaussian distribution (written out below).}
\item {These labelled points are plotted with different colors to visualize
    the regions separated by the decision boundaries.}
\item {The testing data is also plotted over the regions, and observations
    are made.}
\end{enumerate}
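\noindent For reference, the class-conditional density assumed in step 4 is the standard
multivariate Gaussian (with \(d=2\) for the artificial data sets), and the labelling
rule of step 3 is maximum likelihood, which coincides with the Bayes rule under equal
priors:
\[
p(\mathbf{x}\mid\omega_i)=\frac{1}{(2\pi)^{d/2}\,|\boldsymbol{\Sigma}_i|^{1/2}}
\exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^{T}\boldsymbol{\Sigma}_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i)\right),
\qquad
\hat{\omega}(\mathbf{x})=\arg\max_i\,p(\mathbf{x}\mid\omega_i),
\]
where \(\boldsymbol{\mu}_i\) and \(\boldsymbol{\Sigma}_i\) are the mean and covariance
estimated from the training data of class \(\omega_i\).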
\vspace{1.0cm}
\section{Observations}
\subsection{Bayes Classifier}
\subsubsection{Linearly separable data set}
The decision boundaries clearly separate the testing data into the
estimated classes, as the data forms widely separated clusters.
The results are similar whether the covariance is taken as the average of the
individual class covariances or calculated from all classes' data together.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/13/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/ls/pair/23/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/ls/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/ls/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/ls/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 0 & 125\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Non-Linearly separable data set }
\paragraph{Data of Interlocking Classes}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/interlock/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 239 \\
Incorrect : 11 \\
Accuracy : 95.6000 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 118 & 7 \\
& Class 2 & 4 & 121\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/interlock/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 239 \\
Incorrect : 11 \\
Accuracy : 95.6000 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 118 & 7 \\
& Class 2 & 4 & 121\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/interlock/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 240 \\
Incorrect : 10 \\
Accuracy : 96 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 118 & 7 \\
& Class 2 & 3 & 122\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\paragraph{A ring with a central mass}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/ring/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/ring/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/ring/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 75 & 0 \\
& Class 2 & 0 & 300\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\paragraph{Spiral Dataset}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/spiral/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/spiral/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/nls/spiral/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 75 & 0 \\
& Class 2 & 0 & 300\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Overlapping data set}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/13/diff_cov.png}\\
\hline
1 and
4&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/14/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/14/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/14/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/23/diff_cov.png}\\
\hline
2 and
4&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/24/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/24/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/24/diff_cov.png}\\
\hline
3 and
4&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/34/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/34/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/over/pair/34/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/over/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 450 \\
Incorrect : 50 \\
Accuracy : 90.000 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 111 & 4 & 4 & 6\\
& Class 2 & 1 & 116 & 0 & 8\\
& Class 3 & 9 & 0 & 116 & 0\\
& Class 4 & 6 & 12 & 0 & 107\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/over/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 453 \\
Incorrect : 47 \\
Accuracy : 90.600 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 111 & 6 & 4 & 4\\
& Class 2 & 2 & 118 & 0 & 5\\
& Class 3 &9 & 0 & 116 & 0\\
& Class 4 & 5 & 12 & 0 & 108\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/over/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 452 \\
Incorrect : 48 \\
Accuracy : 90.400 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 113 & 4 & 4 & 4\\
& Class 2 & 2 & 118 & 0 & 5\\
& Class 3 & 12 & 0 & 113 & 0\\
& Class 4 & 5 & 12 & 0 & 108\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Real world data set}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/real/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 1386 \\
Incorrect : 391 \\
Accuracy : 77.9966 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 510 & 1 & 30\\
& Class 2 & 21 & 332 & 261\\
& Class 3 & 7 & 71 & 544\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/real/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 1509 \\
Incorrect : 268 \\
Accuracy : 84.9184 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 510 & 5 & 26\\
& Class 2 & 20 & 397 & 197\\
& Class 3 & 7 & 13 & 602\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{bayes/real/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 0 & 125\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/13/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{bayes/real/pair/23/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsection{Naive-Bayes classifier}
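\noindent In the usual formulation, which is assumed in what follows, the naive Bayes
classifier differs from the Bayes classifier above in that the features are taken to
be class-conditionally independent; for Gaussian class-conditional densities this
amounts to using diagonal covariance matrices:
\[
p(\mathbf{x}\mid\omega_i)=\prod_{j=1}^{d}p(x_j\mid\omega_i),
\qquad
\boldsymbol{\Sigma}_i=\mathrm{diag}\left(\sigma_{i1}^{2},\dots,\sigma_{id}^{2}\right).
\]
The same three covariance settings (all together, average and class-specific) are
reported below, as for the Bayes classifier.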
\subsubsection{Linearly separable data set}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/ls/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using the covariance estimated from all classes together} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/ls/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/ls/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 0 & 125\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/13/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/ls/pair/23/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Non-Linearly separable data set }
\paragraph{Data of Interlocking Classes}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/interlock/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using a single covariance pooled over all classes} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/interlock/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/interlock/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 75 & 0 \\
& Class 2 & 0 & 300\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\paragraph{A ring with a central mass}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/ring/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using a single covariance pooled over all classes} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/ring/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/ring/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 75 & 0 \\
& Class 2 & 0 & 300\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\paragraph{Spiral Dataset}
\noindent
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/spiral/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using a single covariance pooled over all classes} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/spiral/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 188 \\
Incorrect : 187 \\
Accuracy : 50.133 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 46 & 29 \\
& Class 2 & 158 & 142\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/nls/spiral/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& & \multicolumn{2}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 75 & 0 \\
& Class 2 & 0 & 300\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Overlapping data set}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/13/diff_cov.png}\\
\hline
1 and
4&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/14/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/14/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/14/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/23/diff_cov.png}\\
\hline
2 and
4&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/24/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/24/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/24/diff_cov.png}\\
\hline
3 and
4&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/34/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/34/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/over/pair/34/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/over/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using a single covariance pooled over all classes} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 450 \\
Incorrect : 50 \\
Accuracy : 90.000 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 111 & 4 & 4 & 6\\
& Class 2 & 1 & 116 & 0 & 8\\
& Class 3 & 9 & 0 & 116 & 0\\
& Class 4 & 6 & 12 & 0 & 107\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/over/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 453 \\
Incorrect : 47 \\
Accuracy : 90.600 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 111 & 6 & 4 & 4\\
& Class 2 & 2 & 118 & 0 & 5\\
& Class 3 &9 & 0 & 116 & 0\\
& Class 4 & 5 & 12 & 0 & 108\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/over/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 452 \\
Incorrect : 48 \\
Accuracy : 90.400 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
& & \multicolumn{4}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3 & Class 4\\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1& 113 & 4 & 4 & 4\\
& Class 2 & 2 & 118 & 0 & 5\\
& Class 3 & 12 & 0 & 113 & 0\\
& Class 4 & 5 & 12 & 0 & 108\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Real world data set}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/real/all/all_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed, using a single covariance pooled over all classes} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/real/all/avg_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with average covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 374 \\
Incorrect : 1 \\
Accuracy : 99.733 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 1 & 124\\
\hline
\end{tabular}
\end{center}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\vspace{0pt} % [3]
\includegraphics[width=\textwidth]{naivebayes/real/all/diff_cov.png}
\captionof{figure}{ Decision region plot for all the classes together with the training data
superposed with different covariance} % [4]
\label{gfx/image}
\end{minipage}
\begin{minipage}[t]{0.2\linewidth} % [2]
\vspace{10pt} % [3]
Correct : 375 \\
Incorrect : 0 \\
Accuracy : 100 \\
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
& & \multicolumn{3}{| c |}{Predicted} \\
\hline
& & Class 1 & Class 2 & Class 3\\
\hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{Act.}} & Class 1 & 125 & 0 & 0\\
& Class 2 & 0 & 125 & 0\\
& Class 3 & 0 & 0 & 125\\
\hline
\end{tabular}
\end{center}
\end{minipage}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{figure}
\begin{tabular}{|c|c|c|c|}
\hline
Pair/Cov & All together & Average & Different \\
\hline
1 and
2&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/12/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/12/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/12/diff_cov.png}\\
\hline
1 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/13/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/13/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/13/diff_cov.png}\\
\hline
2 and
3&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/23/all_cov.png}&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/23/avg_cov.png}
&\includegraphics[width=40mm,height=30mm]{naivebayes/real/pair/23/diff_cov.png}\\
\hline
\end{tabular}
\caption{Decision region plot for every pair of classes}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\subsubsection{Linearly separable data set}
The decision boundaries are very similar to those of the Bayes classifier, and most of the test data fall within the estimated class regions.
\section{Conclusion}
Based on the above observations, we draw the following conclusions (a short, illustrative code sketch of the classifier set-up follows the list):
\begin{enumerate}
\item The decision boundaries are most accurate when a different covariance matrix is used for each class, compared to the other two cases.
\item The curvature of the decision boundaries is due to the covariance term in the likelihood, which makes the discriminant surface quadratic.
\item The decision boundaries are better in cases where the data are not overlapping and are separable, either linearly or non-linearly.
\item For the real-world data, the classes overlap more and are non-linearly distributed, resulting in lower accuracy on the test data.
\end{enumerate}
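For reference, the following is a minimal NumPy sketch (illustrative only, and not the exact code used to produce the results above; all function and variable names are our own) of a Gaussian Bayes classifier supporting the three covariance choices compared in this report. The corresponding naive-Bayes variant can be obtained by keeping only the diagonal of each covariance matrix.
\begin{verbatim}
import numpy as np

def fit_gaussian_bayes(X, y, cov_mode="different"):
    # cov_mode: "altogether" -> one covariance pooled over all samples,
    #           "average"    -> average of the per-class covariances,
    #           "different"  -> a separate covariance per class.
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    means = {c: X[y == c].mean(axis=0) for c in classes}
    per_class = {c: np.cov(X[y == c], rowvar=False) for c in classes}
    if cov_mode == "altogether":
        pooled = np.cov(X, rowvar=False)
        covs = {c: pooled for c in classes}
    elif cov_mode == "average":
        avg = sum(per_class.values()) / len(classes)
        covs = {c: avg for c in classes}
    else:
        covs = per_class
    return classes, priors, means, covs

def predict(X, model):
    classes, priors, means, covs = model
    scores = []
    for c in classes:
        diff = X - means[c]
        inv = np.linalg.inv(covs[c])
        # log Gaussian likelihood plus log prior (quadratic discriminant)
        g = (-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
             - 0.5 * np.log(np.linalg.det(covs[c]))
             + np.log(priors[c]))
        scores.append(g)
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]
\end{verbatim}
With the ``altogether'' and ``average'' options every class shares one covariance matrix, so the quadratic terms of the discriminants cancel and the resulting decision boundaries are linear; only the per-class (``different'') option produces genuinely quadratic boundaries, which matches the behaviour seen in the plots above.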
\end{document}
\documentclass[a4paper]{article}
\input{temp}
\begin{document}
\title{GRM}
\date{Lent 2016/2017}
\maketitle
\newpage
\tableofcontents
\newpage
\section{Groups}
\subsection{1.2}
\begin{defi}
A homomorphism is called an \emph{isomorphism} if it is a bijection. Say groups $G$ and $H$ are isomorphic if there exists an isomorphism $\phi:G \to H$ between them, write $G \cong H$.
\end{defi}
Exercise: If $\phi$ is an isomorphism, then the inverse function $\phi^{-1}: H \to G$ is also a homomorphism (so an isomorphism).
\begin{thm} (First isomorphism theorem)\\
Let $\phi:G \to H$ be a homomorphism. Then $\ker(\phi) \triangleleft G$, $\im(\phi) \leq H$, and $G/\ker(\phi) \cong \im(\phi)$.
\begin{proof}
We've done the first two parts.\\
Let $f:G/\ker(\phi) \to \im(\phi)$ by $g\ker(\phi) \to \phi(g)$.\\
$f$ is well-defined: if $g\ker(\phi) = g'\ker(\phi)$ then $g^{-1}g' \in \ker(\phi)$. So $e_H = \phi(g^{-1}g')=\phi(g^{-1}) \cdot \phi(g') = \phi(g)^{-1} \phi(g')$. So $\phi(g) = \phi(g')$. So we have $f(g\ker(\phi)) = f(g'\ker(\phi))$.
$f$ is a homomorphism: $f(g\ker(\phi)\cdot g'\ker(\phi)) = f(gg'\ker(\phi)) = \phi(gg') = \phi(g)\phi(g') = f(g\ker(\phi))\cdot f(g'\ker(\phi))$.
$f$ is surjective: Let $h \in \im(\phi)$, i.e. $h=\phi(g)$ for some $g$. So $h=f(g\ker(\phi))$.
$f$ is injective: Suppose $f(g\ker(\phi)) = e_H$, i.e. $\phi(g)=e_H$. Then $g \in \ker(\phi)$. So $g\ker(\phi) = e_G \ker(\phi)$.
\end{proof}
\end{thm}
\begin{eg}
Consider $\phi : \C \to \C \backslash \{0\}$ by $z \to e^z$. Then $\phi$ is a homomorphism from $(\C,+,0)$ to $(\C \backslash \{0\},\times,1)$. $\phi$ is onto because $\log$ exists (principal value). We have
\begin{equation*}
\begin{aligned}
\ker(\phi) = \{z \in \C | e^z = 1\} = \{2\pi ik \in \C | k \in \Z \} = 2\pi i \Z
\end{aligned}
\end{equation*}
So from first isomorphism theorem we get $(\C / 2\pi i\Z,+,0) \cong (\C\backslash\{0\},\times,1)$.
\end{eg}
\begin{thm} (Second isomorphism theorem)\\
Let $H \leq G$, $K \triangleleft G$. Then
\begin{equation*}
\begin{aligned}
HK = \{x=hk \in G | h \in H, k \in K\}
\end{aligned}
\end{equation*}
is a subgroup of $G$, $H \cap K \triangleleft H$, and
\begin{equation*}
\begin{aligned}
HK/K \cong H/H\cap K
\end{aligned}
\end{equation*}
\begin{proof}
Let $hk,h'k' \in HK$. Then
\begin{equation*}
\begin{aligned}
h'k'(hk)^{-1} = h'k'k^{-1}h^{-1}=h'h^{-1}hk'k^{-1}h^{-1}
\end{aligned}
\end{equation*}
$h'h^{-1}\in H$, and $hk'k^{-1}h^{-1} \in K$ since $K \triangleleft G$. So $h'k'(hk)^{-1} \in HK$. So $HK \leq G$.
Then consider $\phi: H \to G/K$ by $h \to hK$. This is a homomorphism (composition of $H \to G \to G/K$). Then
\begin{equation*}
\begin{aligned}
\ker(\phi) = \{h \in H|hK = eK\} = H \cap K
\end{aligned}
\end{equation*}
so $H\cap K$ is normal in $H$ by first isomorphism theorem. Also
\begin{equation*}
\begin{aligned}
\im(\phi)=\{gK\in G/K|gK=hK \text{ for some } h\in H\} = HK/K
\end{aligned}
\end{equation*}
So by first isomorphism theorem, $H/H \cap K \cong HK/K$ as required.
\end{proof}
\end{thm}
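\begin{eg}
As a quick sanity check, take $G = \Z$ (written additively, so $HK$ becomes $H+K$), $H = 4\Z$ and $K = 6\Z$. Then $H+K = 2\Z$, $H \cap K = 12\Z$, and indeed $(H+K)/K = 2\Z/6\Z \cong C_3 \cong 4\Z/12\Z = H/(H\cap K)$.
\end{eg}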
\begin{thm} (Subgroup correspondence)\\
Let $K \triangleleft G$. There is a bijection between subgroups of $G/K$ and subgroups of $G$ that contain $K$ by:\\
one direction sends $K \triangleleft L \leq G$ to $L/K \leq G/K$, and\\
the other sends $U \leq G/K$ to $\{g \in G \mid gK \in U\} \leq G$.
The same maps give a bijection between normal subgroups of $G/K$ and normal subgroups of $G$ that contain $K$.
\end{thm}
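\begin{eg}
As a small illustration, take $K = 6\Z \triangleleft \Z$. The subgroups of $\Z/6\Z$ are $\{0\}$, $3\Z/6\Z$, $2\Z/6\Z$ and $\Z/6\Z$, corresponding under this bijection to the subgroups $6\Z$, $3\Z$, $2\Z$ and $\Z$ of $\Z$ containing $6\Z$.
\end{eg}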
\begin{thm} (Third isomorphism theorem)\\
Let $K \triangleleft L$, $L \triangleleft G$. Then $(G/K)/(L/K) \cong G/L$.
\begin{proof}
Let $\phi:G/K \to G/L$ by $gK \to gL$.
$\phi$ is well-defined: if $gK=g'K$ then $g^{-1}g' \in K \leq L$. So $gL =g(g^{-1}g')L = g'L$.
$\phi$ is clearly surjective, and $\ker(\phi) = \{gK \in G/K | gL=eL \iff g\in L\} = L/K$.
So by first isomorphism theorem, $(G/K)/(L/K) \cong G/L$.
\end{proof}
\end{thm}
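\begin{eg}
For instance, taking $K = 12\Z \triangleleft L = 4\Z \triangleleft G = \Z$ gives $(\Z/12\Z)/(4\Z/12\Z) \cong \Z/4\Z$.
\end{eg}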
\begin{defi}
A group $G$ is \emph{simple} if its only normal subgroups are $\{e\}$ and $G$.
\end{defi}
\begin{lemma}
An abelian group is simple iff it is isomorphic to $C_p$ for prime $p$.
\begin{proof}
In an abelian group, every subgroup is normal. Now let $g \in G$ be non-trivial and consider $H=\{...,g^{-1},e,g,...\}$. This is a subgroup of $G$, so a normal subgroup of $G$. If $G$ is simple, then since $g$ is non-trivial, this must be equal to $G$. So $G$ is a cyclic group.
If $G$ is infinite, then it is isomorphic to $(\Z,+,0)$. But $2\Z \triangleleft \Z$. So this is not simple.
So $G \cong C_n$ for some $n$. If $n = a\cdot b$ for some $a,b \in \Z$ with $a,b \neq 1$, then $G$ contains $\left<g^a\right> \cong C_b$ as a proper non-trivial subgroup, which is normal since $G$ is abelian; this contradicts simplicity.
So $n$ must be a prime number.
Finally, note that $C_p$ for prime $p$ is indeed simple: by Lagrange theorem any subgroup of $C_p$ must have order $1$ or $p$.
\end{proof}
\end{lemma}
\subsection{Actions and Permutations}
\begin{thm}
Let $G$ be a non-abelian simple group, and $H \leq G$ a subgroup of index $n>1$. Then $G$ is isomorphic to a subgroup of $A_n$; moreover $n \geq 5$.
\begin{proof}
We let $G$ act on $X=G/H$, giving $\phi:G \to \Sym(G/H)$. Then $\ker(\phi) \triangleleft G$, so as $G$ is simple, either $\ker(\phi) = G$ or $\ker(\phi) = \{e\}$. But
\begin{equation*}
\begin{aligned}
\ker(\phi) = \bigcap_{g \in G} g^{-1}Hg \leq H
\end{aligned}
\end{equation*}
a proper subgroup of $G$; so the first case cannot occur. So $\ker(\phi) = \{e\}$.
By 1st isomorphism theorem,
\begin{equation*}
\begin{aligned}
G \cong G/\{e\} \cong \im(\phi) = G^X \leq \Sym(G/H) \cong S_n
\end{aligned}
\end{equation*}
Apply the 2nd isomorphism theorem to $A_n \triangleleft S_n$, $G^X \leq S_n$. Then $G^X \cap A_n \triangleleft G^X$ and $G^X/(G^X\cap A_n) \cong G^X A_n / A_n$.
As $G^X \cong G$ is simple and $G^X \cap A_n \triangleleft G^X$, either $G^X \cap A_n = \{e\}$ or $G^X \cap A_n = G^X$. But if the first case holds, then $G^X \cong G^XA_n/A_n \leq S_n / A_n \cong C_2$, contradicting $G^X \cong G$ being non-abelian. Hence $G^X \cap A_n = G^X$, i.e. $G^X \leq A_n$.
$n \geq 5$ because $A_2,A_3,A_4$ have no non-abelian simple subgroups.
\end{proof}
\end{thm}
\begin{coro}
If $G$ is non-abelian simple, $H \leq G$ is of index $n$, then $|G| \mid \frac{n!}{2}$.
\end{coro}
\begin{defi}
If $G$ acts on $X$, the \emph{orbit} of $x \in X$ is
\begin{equation*}
\begin{aligned}
G \cdot x = \{ y=g*x \in X| g \in G\}
\end{aligned}
\end{equation*}
and the \emph{stabiliser} of $x \in X$ is
\begin{equation*}
\begin{aligned}
G_x = \{g\in G|g*x = x\} \leq G.
\end{aligned}
\end{equation*}
\end{defi}
\begin{thm} (Orbit-stabiliser).\\
If $G$ acts on $X$, then for any $x \in X$, there is a bijection between $G\cdot x$ and $G/G_x$, given by $g*x \mapsto gG_x$ with inverse $gG_x \mapsto g*x$.
\end{thm}
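\begin{eg}
For example, let $S_4$ act on $X = \{1,2,3,4\}$ in the usual way. The orbit of $1$ is all of $X$, and its stabiliser is a copy of $S_3$ (the permutations fixing $1$), and indeed $|S_4 \cdot 1| = 4 = \frac{|S_4|}{|S_3|} = \frac{24}{6}$, matching the bijection with $G/G_x$.
\end{eg}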
\subsection{Conjugacy classes, centralisers and normalisers}
There is an action of $G$ on the set $X=G$ via $g*x := g\cdot x \cdot g^{-1}$.
This gives a map $\phi : G \to \Sym(G)$. Note $\phi(g)(x \cdot t)=g \cdot x \cdot t \cdot g^{-1} = gxg^{-1}gtg^{-1}=\phi(g)(x) \cdot \phi(g)(t)$, i.e. $\phi(g)$ is a group homomorphism. Also it's a bijection (in $\Sym(G)$), so it is an isomorphism.
Let $\Aut(G) = \{f:G \to G | f$ is a group isomorphism $\} \leq \Sym(G)$, called the automorphisms of $G$.
We have shown that $\phi:G \to \Sym(G)$ has image in $\Aut(G) \leq \Sym(G)$.
\begin{defi}
The \emph{conjugacy class} of $x \in G$ is $G \cdot x = Cl_G(x) = \{gxg^{-1} | g \in G\}$.\\
The \emph{centraliser} of $x\in G$ is $G_x=C_G(x) = \{g \in G | gxg^{-1} = x \iff gx = xg\}$.\\
The \emph{centre} of $G$ is $Z(G) = G_X = \ker(\phi) = \{g\in G \mid gxg^{-1} = x \ \forall x \in G\}$.\\
The \emph{normaliser} of $H \leq G$ is $N_G(H) = \{ g \in G | gHg^{-1} = H\}$.
\end{defi}
By Orbit-stabiliser theorem, there is a bijection between $Cl_G(x)$ and $G / C_G(x)$. So if $G$ is finite, then $|Cl_G(x)|$ equals the index of $C_G(x) \leq G$ which divides $|G|$.
Recall (from IA groups) that in $S_n$,\\
(i) everything can be written as a product of disjoint cycles;\\
(ii) permutations are conjugate iff they have the same cycle type.
\begin{thm}
$A_n$ is simple for $n \geq 5$.
\begin{proof}
First, we claim $A_n$ is generated by $3$-cycles.
We need to show that a product of two transpositions is a product of $3$-cycles. We have $(ab)(bc)=(abc)$, $(ab)(cd) = (acb)(acd)$.
Let $H \triangleleft A_n$. Suppose first that $H$ contains a $3$-cycle, say $(abc)$.
In $S_n$, there is a $\sigma$ so that $(abc) = \sigma^{-1} (123) \sigma$, i.e. $(123) = \sigma(abc)\sigma^{-1}$. If $\sigma \in A_n$, then $(123) \in H$ by normality. Otherwise, let $\sigma' = (45) \sigma \in A_n$; since $(45)$ commutes with $(123)$, we still have $(123) = \sigma'(abc)\sigma'^{-1} \in H$.
The same argument shows that all $3$-cycles are in $H$ if one of them is. In that case we know $H = A_n$.
So it is enough to show that any $\{e\} \neq H \triangleleft A_n$ contains a $3$-cycle.
Case 1: $H$ contains $\sigma = (123...r)\tau$ in disjoint cycle notation for some $r \geq 4$. Let $\delta = (123)$ and consider $\sigma^{-1}\delta^{-1}\sigma\delta$. This is in $H$. Evaluate it and we get
\begin{equation*}
\begin{aligned}
\sigma^{-1}\delta^{-1}\sigma\delta &= \tau^{-1}(r...21)(132)(12...r)\tau(123)\\
&=(r...21)(132)(12...r)(123)\\
&=(23r) \in H
\end{aligned}
\end{equation*}
is a $3-$cycle.
Case 2: $H$ contains $\sigma = (123)(456)\tau$ in disjoint cycle notation. Let $\delta = (124)$ and calculate
\begin{equation*}
\begin{aligned}
\sigma^{-1}\delta^{-1}\sigma\delta = (132)(465)(142)(123)(456)(124) = (12436)
\end{aligned}
\end{equation*}
So we've reduced to the first case.
Case 3: $H$ contains $\sigma = (123) \tau$, and $\tau$ is a product of $2-$cycles. Then $\sigma^2 = (132) \in H$.
Case 4: $H$ contains $\sigma = (12)(34)\tau$, and $\tau$ is a product of $2-$cycles. Let $\delta = (123)$, then
\begin{equation*}
\begin{aligned}
u=\sigma^{-1}\delta^{-1}\sigma\delta=(12)(34)(132)(12)(34)(123)=(14)(23)
\end{aligned}
\end{equation*}
Now let $v=(152)u(125) = (13)(45)$. We have $u\cdot v = (14)(23)(13)(45) = (12345)$. So we've reduced to the first case.
So $H$ contains a $3-$cycle.
\end{proof}
\end{thm}
\subsection{$p$-groups}
A finite group $G$ is a $p$-group if $|G| = p^n$ for some prime number $p$.
\begin{thm}
If $G$ is a finite $p$-group, then $Z(G) \neq \{e\}$.
\begin{proof}
The conjugacy classes partition $G$, and
\begin{equation*}
\begin{aligned}
|Cl(x)| = |G/C_G(x)| \mid |G|
\end{aligned}
\end{equation*}
by Orbit-Stabilizer and Lagrange's Theorem. So $|Cl(x)|$ is a power of $p$.
We know $|G|$ is the sum of the sizes of the conjugacy classes. We can write $|G|$ = (number of conjugacy classes of size $1$) + (sum of the sizes of all other conjugacy classes, which is divisible by $p$). Since $p \mid |G|$, the number of conjugacy classes of size $1$ is divisible by $p$; and since $|Cl(e)| = 1$ there is at least one such class, so there are at least $p$ of them.
Now note that $Z(G)$ consists of exactly those elements that commute with every element of the group, i.e. those whose conjugacy class has size $1$. So $|Z(G)| \geq p$.
\end{proof}
\end{thm}
\begin{coro}
A group of order $p^n$, $n > 1$, is \emph{never} simple.
\end{coro}
\begin{lemma}
For any group $G$, if $G/Z(G)$ is cyclic, then $G$ is abelian.
\begin{proof}
Let the coset $gZ(G)$ generate the cyclic group $G/Z(G)$. Then every coset is of the form $g^r Z(G)$, $r \in \Z$. So every element of $G$ is of the form $g^r \cdot z$ for some $z \in Z(G)$. Now take
\begin{equation*}
\begin{aligned}
(g^rz)\cdot(g^{r'} z') = g^r g^{r'} z z' = g^{r'}g^r z'z = g^{r'} z' g^r z
\end{aligned}
\end{equation*}
So $G$ is abelian.
\end{proof}
\end{lemma}
\begin{coro}
If $|G| = p^2$, $p$ is prime, then $G$ is abelian.
\begin{proof}
We know $\{e\} \lneq Z(G) \leq G$, so $|Z(G)| = p$ or $p^2$. If it's $p^2$ then $G=Z(G)$ is abelian.
If $|Z(G)| = p$, then $|G/Z(G)| = p$. So $G/Z(G)$ is cyclic. So $G$ is abelian.
\end{proof}
\end{coro}
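\begin{eg}
For instance, every group of order $4$ or $9$ is abelian; in fact (by the classification of finite abelian groups below) it is isomorphic to $C_{p^2}$ or $C_p \times C_p$.
\end{eg}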
\begin{thm}
If $|G| = p^a$, then $G$ has a subgroup of order $p^b$ for any $0 \leq b \leq a$.
\begin{proof}
We prove this by induction on $a$. If $a=1$ we are done. For $a>1$, we have $\{e\} \lneq Z(G)$. Let $e \neq x \in Z(G)$. Then $x$ has order a power of $p$, so some power of $x$ has order exactly $p$; call it $z$. Let $C=\left<z\right>$, a normal subgroup of $G$ (since it lies inside the centre).
Now $G/C$ has order $p^{a-1}$. By induction hypothesis, we may find a subgroup $H \leq G/C$ of order $p^{b-1}$. Now by subgroup correspondence, this $H$ gives some $L\leq G$ that contains $C$ (by $H =L/C$), and $|L| = p^b$.
\end{proof}
\end{thm}
\subsection{Finite abelian groups}
\begin{thm}
If $G$ is a finite abelian group, then
\begin{equation*}
\begin{aligned}
G \cong C_{d_1} \times C_{d_2} \times ... \times C_{d_k}
\end{aligned}
\end{equation*}
with $d_{i+1} | d_i$ for all $i$.
We will prove this later, by considering an abelian group as a $\Z$-module.
\end{thm}
\begin{eg}
If $|G|=8$ and $G$ is abelian, then $G$ is either $C_8$, or $C_4 \times C_2$, or $C_2 \times C_2 \times C_2$.
\end{eg}
\begin{lemma} (Chinese Remainder Theorem)\\
If $n,m$ are coprime, then $C_{nm} \cong C_n \times C_m$.
\begin{proof}
Let $g \in C_n$ have order $n$ and $h \in C_m$ have order $m$. Consider $x = (g,h)$ in $C_n \times C_m$. Clearly $x^{nm} = (e,e)$.
Now if $(e,e) = x^r = (g^r,h^r)$, then $n \mid r$ and $m \mid r$. So $nm \mid r$. So the order of $x$ is $nm$. So $\left<x\right> \cong C_{nm}$. Then by size we get the desired result.
\end{proof}
\end{lemma}
\begin{coro}
If $G$ is a finite abelian group, then
\begin{equation*}
\begin{aligned}
G \cong C_{n_1} \times C_{n_2} \times ... \times C_{n_l}
\end{aligned}
\end{equation*}
with each $n_i$ a power of a prime number.
\begin{proof}
If $d=p_1^{a_1} \cdots p_r^{a_r}$ for distinct primes $p_i$, the lemma shows
\begin{equation*}
\begin{aligned}
C_d \cong C_{p_1^{a_1}} \times C_{p_2^{a_2}} \times ... \times C_{p_r^{a_r}}
\end{aligned}
\end{equation*}
Apply this to the theorem.
\end{proof}
\end{coro}
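\begin{eg}
For example, $C_{12} \cong C_4 \times C_3$, whereas $C_2 \times C_6 \cong C_2 \times C_2 \times C_3$ is a different (non-cyclic) abelian group of order $12$.
\end{eg}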
\subsection{Sylow's Theorems}
\begin{thm} (Sylow's)\\
Let $|G| =p^a \cdot m$, with $(p,m) = 1$, where $p$ is prime. Then\\
(i) The set $Syl_p(G)=\{P \leq G \mid |P| = p^a\}$ of \emph{Sylow $p$-subgroups} is not empty.\\
(ii) All elements of $Syl_p(G)$ are conjugate in $G$.\\
(iii) The number $n_p = |Syl_p(G)|$ satisfies $n_p \equiv 1 \pmod p$ and $n_p \mid |G|$ (in fact $n_p \mid m$).
\end{thm}
\begin{lemma}
If $n_p=1$, then the unique Sylow $p$-subgroup is normal in $G$.
\begin{proof}
If $g \in G$ and $P \leq G$ is the unique Sylow $p$-subgroup, then $g^{-1}Pg$ is also a subgroup of order $p^a$. But $P$ is the only such subgroup, so $g^{-1}Pg = P$, i.e. $P \triangleleft G$.
\end{proof}
\end{lemma}
Note that this tells that, if $G$ is simple, then $n_p \neq 1$; or conversely, if $n_p = 1$ for some $p$, then $G$ is not simple.
\begin{eg}
Let $|G| = 96 = 2^5 \cdot 3$. So $n_2 \equiv 1 \pmod 2$ and $n_2 \mid 3$. So $n_2 = 1$ or $3$. Also, $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 32$. So $n_3 = 1,4,16$.
\end{eg}
$G$ acts on the set $Syl_p(G)$ by conjugation. So (ii) of the theorem says that this action has $1$ orbit. The stabilizer of $P \in Syl_p(G)$, i.e. the normalizer $N_G(P) \leq G$, is of index $n_p = |Syl_p(G)|$.
\begin{coro}
If $G$ is non-abelian simple, then
\begin{equation*}
\begin{aligned}
|G| \mid \frac{(n_p)!}{2}.
\end{aligned}
\end{equation*}
and $n_p \geq 5$.
\begin{proof}
$N_G(P)$ has index $n_p$. So apply the general result about subgroups of non-abelian simple groups (see section 1.2).
\end{proof}
\end{coro}
Now in the above example, $|G| \nmid \frac{3!}{2}$, so the group $G$ cannot be non-abelian simple. Also it cannot be abelian simple as 96 is not a prime.
\begin{eg}
Suppose $G$ is a simple group of order $132=2^2\times 3 \times 11$.
We know $n_{11} \equiv 1 \pmod {11}$ and $n_{11} \mid 12$. As $G$ is simple we can't have $n_{11} = 1$, so $n_{11} = 12$.
Each Sylow 11-subgroup has order 11, so is isomorphic to $C_{11}$, and so contains $10=(11-1)$ elements of order $11$. Two such subgroups can only intersect in the identity element, so we have $12 \times 10 = 120$ elements of order 11. We know $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 44$, so $n_3 = 1,4$ or $22$; as before $n_3 \neq 1$. If $n_3 = 4$ then we would need $|G| \mid \frac{4!}{2}$, which is impossible. So $n_3 = 22$. But then the $22$ Sylow 3-subgroups contribute $22 \times 2 = 44$ elements of order $3$, and $120 + 44 + 1 > 132$, a contradiction.
\end{eg}
\textbf{Proof of Sylow's Theorems.} Let $|G| = p^n \cdot m$.\\
i) Let $\Omega$ be the set of subsets of $G$ of order $p^n$, and let $G$ act on $\Omega$ via $g * \{g_1,...,g_{p^n}\} = \{gg_1,...,gg_{p^n}\}.$\\
Let $\varepsilon \subset \Omega$ be an orbit for this action. If $\{g_1,...,g_{p^n}\} \in \varepsilon$, then
\begin{equation*}
\begin{aligned}
(gg_1^{-1}) * \{g_1,...,g_{p^n}\} = \{ g,gg^{-1}_1 g_2,...,gg^{-1}_1 g_{p^n}\} \in \varepsilon
\end{aligned}
\end{equation*}
So for any $g \in G$, there is an element of $\varepsilon$ which contains $g$. So $|\varepsilon| \geq \frac{|G|}{p^n} = m$.
If there is some orbit $\varepsilon$ with $|\varepsilon|=m$, then the stabiliser $G_A$ of any $A \in \varepsilon$ has order $\frac{|G|}{|\varepsilon|} = \frac{p^n m}{m} = p^n$, so $G_A$ \emph{is} a Sylow $p$-subgroup. To show this happens, we must show that it is not possible for \emph{every} orbit of $G$ acting on $\Omega$ to have size $>m$.
By orbit-stabiliser, for any orbit $\varepsilon$, $|\varepsilon| \mid p^n \cdot m$, so if $|\varepsilon|>m$, then $p \mid |\varepsilon|$. So if \emph{all} orbits of $G$ acting on $\Omega$ have size $>m$, then $p$ divides all of them, so $p \mid |\Omega|$.
Let's calculate $|\Omega|$. We have
\begin{equation*}
\begin{aligned}
|\Omega| = {p^n m \choose p^n} = \prod_{j=0}^{p^n-1} \frac{p^n m - j}{p^n - j}
\end{aligned}
\end{equation*}
For $0 < j < p^n$, the largest power of $p$ dividing $p^n m - j$ is the same as the largest power of $p$ dividing $j$, which is the same as the largest power of $p$ dividing $p^n - j$; and the $j=0$ factor is $m$, which is coprime to $p$. So $|\Omega|$ is \emph{not} divisible by $p$.
ii) Let's show something \emph{stronger}: if $P \in Syl_p(G)$ and $Q \leq G$ is a $p$-subgroup, then there is a $g \in G$ s.t. $g^{-1}Qg \leq P$.
Let $Q$ act on $G/P$ by $q*gP = qgP$. By orbit-stabiliser, the size of each orbit divides $|Q|$, a power of $p$, so is either $1$ or divisible by $p$.
On the other hand, $|G/P| = \frac{|G|}{|P|} = m$ is not divisible by $p$. So there must be an orbit of size $1$, say $\{gP\}$, i.e. for every $q \in Q$, $qgP = gP$, i.e. $g^{-1}qg \in P$ $\forall q \in Q$, i.e. $g^{-1} Qg \leq P$.
(iii) By (ii), $G$ acts on $Syl_p(G)$ by conjugation with a single orbit, so by orbit-stabiliser, $n_p = |Syl_p(G)|$ divides $|G|$, which is the second part of (iii). For the congruence, let a fixed $P \in Syl_p(G)$ act on $Syl_p(G)$ by conjugation. Each orbit has size a power of $p$, and $\{P\}$ is an orbit of size $1$; it is the only such orbit, since if $P$ normalises some $Q \in Syl_p(G)$ then $PQ$ is a $p$-subgroup containing $Q$, so $PQ = Q$ and hence $P = Q$. All other orbits therefore have size divisible by $p$, so $n_p \equiv 1 \pmod p$.
\begin{eg}
Consider $GL_2(\Z/p)$. It has order $(p^2-1)(p^2-p) = p(p+1)(p-1)^2$. Let $l$ be an odd prime dividing $p-1$ exactly once. Then $l \nmid p$, and also $l \nmid p+1$ (otherwise $l$ would divide $(p+1)-(p-1)=2$). So $l^2$ is the largest power of $l$ dividing $|GL_2(\Z/p)|$, i.e. by Sylow there is at least one subgroup of order $l^2$. We have
\begin{equation*}
\begin{aligned}
(\Z/p)^\times &= \{ x \in \Z/p \mid \exists y \in \Z/p \ s.t. \ xy=1 \in \Z/p\}\\
&= \{x \in \Z /p \mid x\neq 0\}
\end{aligned}
\end{equation*}
has size $p-1$. As a group under \emph{multiplication}, $(\Z/p)^\times \cong C_{p-1}$. So there is a subgroup $C_l \leq C_{p-1}$, i.e. we can find a $1 \neq x \in (\Z/p)^\times$ so that $x^l=1$.
Now let
\begin{equation*}
\begin{aligned}
H &= \left\{ \left(\begin{matrix}
a & 0\\
0 & b
\end{matrix}\right) \mid a,b \in (\Z/p)^\times,\ a^l = b^l = 1 \right\} \cong C_l \times C_l\\
&\leq GL_2(\Z/p)
\end{aligned}
\end{equation*}
is a Sylow $l-$subgroup (order $l^2$).
\end{eg}
\begin{eg}
Consider $$SL_2 (\Z/p) = \ker\left(\det: GL_2(\Z/p) \to (\Z/p)^\times\right)$$
The determinant homomorphism is onto, so $SL_2(\Z/p) \leq GL_2(\Z/p)$ has index $(p-1)$. So $|SL_2(\Z/p)| = (p-1)p(p+1)$.
Now consider
\begin{equation*}
\begin{aligned}
PSL_2(\Z/p) := SL_2(\Z/p) / \left\{\left(\begin{matrix}
\lambda & 0 \\
0 & \lambda
\end{matrix}\right) \in SL_2(\Z/p)\right\}
\end{aligned}
\end{equation*}
If $\left(\begin{matrix}
\lambda & 0\\
0 & \lambda
\end{matrix}\right) \in SL_2(\Z /p)$ then $\lambda^2 = 1 \in (\Z/p)^\times \cong C_{p-1}$. As long as $p \geq 3$, there are two such $\lambda$, $+1$ and $-1$. So $|PSL_2(\Z/p)| =\frac{1}{2} (p-1)p(p+1)$.
Let $(\Z/p)_\infty =\Z/p \cup \{\infty\}$. Then $PSL_2 (\Z/p)$ acts on $(\Z/p)_\infty$ by M\"obius maps:
\begin{equation*}
\begin{aligned}
\left[\begin{matrix}
a & b\\
c & d
\end{matrix} \right] * z := \frac{az+b}{cz+d}
\end{aligned}
\end{equation*}
with the usual convention that if $cz+d=0$ then we get $\infty$.
\end{eg}
\begin{eg}
Let $p=5$, then this action gives a homomorphism $\phi: PSL_2(\Z/p) \to \Sym\left((\Z/5)_\infty\right) \cong S_6$.
We have $|PSL_2(\Z/5)| = \frac{1}{2}\cdot 4\cdot 5 \cdot 6 = 60$.
\textbf{Claim.} $\phi$ is injective.
\begin{proof}
If $\frac{az+b}{cz+d} = z$ for all $z \in (\Z/p)_\infty$, then setting $z=0$ gives $b=0$, setting $z=\infty$ gives $c=0$, and setting $z=1$ gives $a=d$. So $\left[\begin{matrix}
a & b\\
c & d
\end{matrix}\right] = \left[\begin{matrix}
1 & 0\\
0 & 1
\end{matrix}\right] \in PSL_2 (\Z/p)$.
\end{proof}
\textbf{Claim.} $\phi$ lands in $A_6 \leq S_6$.
\begin{proof}
Consider the composition
\begin{equation*}
\begin{aligned}
\psi: PSL_2(\Z/5) \to \Sym((\Z/5)_\infty) \cong S_6 \to \{\pm 1\}
\end{aligned}
\end{equation*}
by $\phi$ and $sgn$ respectively. We need to show that $\psi\left(\begin{matrix}
a & b\\
c & d
\end{matrix}\right) = +1$.
We know that elements of odd order in $PSL_2(\Z/5)$ have to be sent to $+1$, since their images in $\{\pm 1\}$ must also have odd order.
Note that $H = \left\{ \left[\begin{matrix}
\lambda & 0\\
0 & \lambda^{-1}
\end{matrix}\right], \left[\begin{matrix}
0 & \lambda\\
-\lambda^{-1} & 0
\end{matrix}\right] \in PSL_2(\Z/5) \mid \lambda \in (\Z/5)^\times \right\}$ has order $4$ (note that $\lambda$ and $-\lambda$ represent the same equivalence class as we are in $PSL$, so there are 2 of each kind), so is a Sylow $2$-subgroup of $PSL_2(\Z/5)$. Any element of order $2$ or $4$ is conjugate to an element in $H$. We'll show that $\psi(H) = \{+1\}$.
$H$ is generated by $\left[\begin{matrix}
2 & 0\\
0 & -2
\end{matrix}\right],\left[\begin{matrix}
0 & 1\\
-1 & 0
\end{matrix}\right]$. Now consider
\begin{equation*}
\begin{aligned}
\left[\begin{matrix}
2 & 0\\
0 & -2
\end{matrix}\right]
\end{aligned}
\end{equation*} acting on $(\Z/5)_\infty$. This sends
\includegraphics[scale=0.4]{GRM_01}
so is an even permutation. Then
\begin{equation*}
\begin{aligned}
\left[\begin{matrix}
0 & 1\\
-1 & 0
\end{matrix}\right]
\end{aligned}
\end{equation*}
sends
\includegraphics[scale=0.4]{GRM_02}
is also even. So they are both in $A_6$.
\end{proof}
\end{eg}
\newpage
\section{Rings}
In this course we only consider commutative rings with a multiplicative identity. Many of the things we are going to prove in this course will not hold without these two properties.
\subsection{Definitions}
\begin{defi}
A \emph{ring} is a quintuple $(R,+,\cdot,0_R,1_R)$ s.t. \\
(R1) $(R,+,0_R)$ is an abelian group;\\
(R2) The operation $- \cdot -$: $R \times R \to R$ is associative, and satisfies $1_R \cdot r = r = r \cdot 1_R$.\\
(R3) $r \cdot (r_1+r_2) = r \cdot r_1 + r \cdot r_2$, and $(r_1+r_2) \cdot r = r_1 \cdot r + r_2 \cdot r$ (Distributivity).
A ring is \emph{commutative} if in addition $a \cdot b = b \cdot a$ $\forall a,b \in R$.
From now on every ring we discuss will by default be commutative and have a multiplicative identity.
\end{defi}
\begin{defi}
If $(R,+,\cdot,0_R,1_R)$ is a ring and $S \subset R$ is a subset, then it is called a \emph{subring} if $0_R,1_R \in S$ and $+,\cdot$ make $S$ into a ring in its own right.
\end{defi}
\begin{eg}
We have $\Z \leq \Q \leq \R \leq \C$ as rings with the usual $0,1,+,\cdot$.
\end{eg}
\begin{eg}
$\Z[i] = \{a+ib\in \C \mid a,b \in \Z\} \leq \C$ is the subring called \emph{Gaussian integers}.
\end{eg}
\begin{eg}
$\Q[\sqrt{2}] = \{ a+\sqrt{2} \cdot b \in \R \mid a,b \in \Q \} \leq \R$ is a subring.
\end{eg}
\begin{defi}
An element $r \in R$ is a \emph{unit} if there is a $s \in R$ s.t. $sr = 1_R$.
Note that this depends not only on the element but also on which ring we are working in: $2 \in \Z$ is not a unit, but $2 \in \Q$ is.
If every $r \in R$ with $r \neq 0_R$ is a unit, then $R$ is called a field.
\end{defi}
\begin{notation}
If $x \in R$, write $-x \in R$ for the inverse of $x$ in $(R,+,0_R)$. We will write $y-x = y+(-x)$.
\end{notation}
\begin{eg}
$0_R+0_R = 0_R$, so $r \cdot (0_R+0_R) = r\cdot 0_R$, i.e. $r\cdot 0_R + r\cdot 0_R = r\cdot 0_R$, so $r\cdot 0_R = 0_R$. So if $R \neq \{0\}$, then $0_R \neq 1_R$, and $0_R$ is never a unit.
However, $(\{0\},+,\cdot,0,0)$ \emph{is} a valid ring.
\end{eg}
\begin{eg}
If $R,S$ are rings, then $R \times S$ has the structure of a ring via componentwise addition and multiplication, with $1=(1_R,1_S)$, $0 =(0_R,0_S)$.
Note that in this ring, if $e_1 = (1_R,0_S)$ and $e_2 = (0_R,1_S)$, then $e_1^2 =e_1$ and $e_2^2 = e_2$, and $e_1+e_2 = 1$.
\end{eg}
\begin{eg}
Let $R$ be a ring. A \emph{polynomial} $f$ over $R$ is an expression
\begin{equation*}
\begin{aligned}
f=a_0+a_1X+a_2X^2 + ... + a_n X^n
\end{aligned}
\end{equation*}
with $a_i \in R$. $X^i$ is just a symbol.
We will consider $f$ and
\begin{equation*}
\begin{aligned}
a_0+a_1 X +... + a_n X^n + 0_R \cdot X^{n+1}
\end{aligned}
\end{equation*}
as equal. The \emph{degree} of $f$ is the largest $n$ s.t. $a_n \neq 0$.
If in addition, $a_n = 1_R$, then we say $f$ is \emph{monic}.
We write $R[X]$ for the set of all polynomials over $R$.
If $g=b_0+...+b_m X^m$, then we define addition and multiplication by the usual way:
\begin{equation*}
\begin{aligned}
f+g &= \sum_{i\geq 0} (a_i+b_i) X^i\\
f\cdot g &= \sum_{i\geq 0} \left(\sum_{j=0}^i a_jb_{i-j}\right)X^i
\end{aligned}
\end{equation*}
Now consider $R$ as a subring of $R[X]$, given by the constant polynomials. In particular, $1_R \in R$ gives the multiplicative identity element of $R[X]$.
\end{eg}
\begin{eg}
Write $R[[X]]$ for the ring of \emph{formal power series}, i.e.
\begin{equation*}
\begin{aligned}
f =a_0+a_1 X + a_2 X^2 + ...
\end{aligned}
\end{equation*}
with the same addition and multiplication.
Consider $\Z/2[X]$ and an element $f = X+X^2$. Then
\begin{equation*}
\begin{aligned}
f(0)=0+0=0,
f(1) = 1+1=0
\end{aligned}
\end{equation*}
But definitely $f \neq 0$. So we see why we do not think of polynomials as functions, even though they do define functions: they are just elements of a particular ring.
\end{eg}
\begin{eg}
The \emph{Laurent polynomials} $R[X,X^{-1}]$ is the set of
\begin{equation*}
\begin{aligned}
f=\sum_{i \in \Z} a_i X^i
\end{aligned}
\end{equation*}
s.t. only finitely many $a_i$ are non-zero.
\end{eg}
\begin{eg}
The ring of \emph{Laurent series} are those expressions
\begin{equation*}
\begin{aligned}
f = \sum_{i\in \Z} a_i X^i
\end{aligned}
\end{equation*}
with only finitely many $i<0$ s.t. $a_i \neq 0$ (i.e. a formal power series in the positive part and a polynomial in the negative part). This ensures that each coefficient of a product is a finite sum, since we have not defined infinite sums in rings.
\end{eg}
\begin{eg}
If $R$ is a ring and $X$ is a set, the set $R^X$ of all functions $f: X \to R$ is a ring, with operations
\begin{equation*}
\begin{aligned}
(f+g) (x) &= f(x) + g(x),\\
(fg) (x) &= f(x) \cdot g(x).
\end{aligned}
\end{equation*}
The multiplicative identity element is the constant function $1(x) = 1_R$ for all $x \in X$, and similarly for the zero element.
Observe $\R^\R \supsetneq$ set of continuous $f:\R \to \R$ $\supset$ polynomials $\R \to \R$ = $\R[X]$. So $\R[X] \subsetneq \R^\R$.
\end{eg}
\subsection{Homomorphisms, ideals, quotients, and isomorphisms}
\begin{defi}
A function $\phi: R \to S$ between rings is a \emph{homomorphism} if\\
(H1) $\phi(r_1+r_2) = \phi(r_1)+\phi(r_2)$, i.e. $\phi$ is a group homomorphism between the additive groups of $R$ and $S$;\\
(H2) $\phi(r_1r_2) = \phi(r_1) \phi(r_2)$;\\
(H3) $\phi(1_R) = 1_S$.
If in addition, $\phi$ is a bijection, then we say it is an \emph{isomorphism}.
The \emph{kernel} of $\phi:R \to S$ is
\begin{equation*}
\begin{aligned}
\ker(\phi) = \{r \in R | \phi(r) = 0\}
\end{aligned}
\end{equation*}
\end{defi}
\begin{lemma}
$\phi:R \to S$ is injective if and only if $\ker(\phi) = \{0\}$.
\begin{proof}
Note that $\phi:(R,+,0_R) \to (S,+,0_S)$ is a group homomorphism, and its kernel as a group homomorphism is also $\ker(\phi)$. So by theorems in groups we get the desired result.
\end{proof}
\end{lemma}
\begin{defi}
A subset $I \subset R$ is an \emph{ideal}, written $I \triangleleft R$, if\\
(I1) $I$ is a subgroup of $(R,+)$;\\
(I2) If $x \in I$, $r \in R$, then $x \cdot r \in I$ (strong multiplicative closure).
We say $I \triangleleft R$ is proper if $I \neq R$.
\end{defi}
\begin{lemma}
If $\phi:R \to S$ is a homomorphism, then $\ker(\phi) \triangleleft R$.
\begin{proof}
(I1) holds for $\ker(\phi)$ since $\phi$ is a group homomorphism.
Now let $x \in \ker(\phi)$, $r \in R$. Then
\begin{equation*}
\begin{aligned}
\phi(r \cdot x) = \phi(r) \cdot \phi(x) = \phi(r) \cdot 0_S = 0_S
\end{aligned}
\end{equation*}
So $r \cdot x \in \ker(\phi)$.
\end{proof}
\end{lemma}
\begin{eg}
If $I \triangleleft R$ and $1_R \in I$, then for any $r \in R$, we have
\begin{equation*}
\begin{aligned}
r = r \cdot 1 \in I,
\end{aligned}
\end{equation*}
so $I=R$. In short, proper ideals never include $1$, so are never subrings.
\end{eg}
\begin{eg}
If $R$ is a field, then $\{0\}$ and $R$ are the only ideals. This is reversible: If $\{0\}$ and $R$ are the only ideals, then $R$ is a field.
\end{eg}
\begin{eg}
In the ring $\Z$, all ideals are of the form $n\Z$ for some $n \in \Z$, where
\begin{equation*}
\begin{aligned}
n\Z =\{...,-2n,-n,0,n,2n,...\}
\end{aligned}
\end{equation*}
\begin{proof}
$n \Z$ is certainly an ideal. Let $I \triangleleft \Z$ be an ideal. If $I = \{0\} = 0\Z$ we are done, so suppose not, and let $n \in I$ be the smallest positive element. Then $n \Z \subset I$. If this is not an equality, choose $m \in I\backslash n\Z$. Write $m=n \cdot q + r$ for some $0\leq r \leq n-1$. If $r=0$ then $m \in n\Z$, a contradiction. So
\begin{equation*}
\begin{aligned}
0 < r=m-n\cdot q < n
\end{aligned}
\end{equation*}
is in the ideal $I$, contradicting the minimality of $n$. So $I = n\Z$.
\end{proof}
\end{eg}
\begin{defi}
For an element $a \in R$, write
\begin{equation*}
\begin{aligned}
(a) = \left\{a \cdot r | r \in R\right\}
\end{aligned}
\end{equation*}
the \emph{ideal generated by $a$}. More generally, for a list $a_1,...,a_s$, write
\begin{equation*}
\begin{aligned}
(a_1,...,a_s) = \left\{\sum_i a_i r_i | r_i \in R\right\}
\end{aligned}
\end{equation*}
which somewhat resembles the linear combinations in a vector space.
Even more generally, if $A \subseteq R$ is a subset, then the ideal generated by $A$ is
\begin{equation*}
\begin{aligned}
\left(A\right) = \left\{\sum_{a \in A} a \cdot r_a | r_a \in R, \text{only finitely many }r_a \neq 0\right\}.
\end{aligned}
\end{equation*}
since we have no definition of infinite sums in rings.
If an ideal $I\triangleleft R$ is of the form $(a)$, then we say that $I$ is a \emph{principal ideal}.
\end{defi}
\begin{eg}
In $\Z$ we have
\begin{equation*}
\begin{aligned}
n\Z = (n) \triangleleft \Z
\end{aligned}
\end{equation*}
is principal.
\end{eg}
\begin{eg}
In $\C[X]$, the polynomials with constant coefficient $0$ form an ideal, which is just $(X)$ (check). This is also principal.
\end{eg}
\begin{prop}
Let $I \triangleleft R$ be an ideal. Define the \emph{quotient ring} $R/I$ to be the set of cosets $r+I$ (i.e. $(R,+,0)/$normal subgroup $I$), with addition and multiplication given by\\
$\bullet$ $(r_1+I)+(r_2+I) = r_1+r_2+I$,\\
$\bullet$ $(r_1+I)\cdot(r_2+I) = r_1r_2 + I$,\\
and $0_{R/I} =0_R+I$, $1_{R/I} = 1_R+I$.
This is a ring, and the quotient map $R \to R/I$ by $r \to r+I$ is a ring homomorphism.
\begin{proof}
We already know that $(R/I,+,0)$ is an abelian group, and addition as described above is well-defined. For multiplication: if $r_1+I=r'_1+I$ and $r_2+I = r'_2+I$, then $r'_1-r_1 =a_1 \in I$ and $r'_2-r_2 = a_2 \in I$. So
\begin{equation*}
\begin{aligned}
r'_1r'_2 = (r_1+a_1)(r_2+a_2)=r_1r_2+r_1a_2+a_1r_2+a_1a_2 = r_1r_2 + a
\end{aligned}
\end{equation*}
for some $a \in I$, i.e. $r'_1r'_2 + I = r_1 r_2 + I$. So multiplication is well-defined. The ring axioms for $R/I$ then follow from those of $R$.
\end{proof}
\end{prop}
\begin{eg}
$n\Z \triangleleft \Z$, so have a ring $\Z/n\Z$. This has elements $0+n\Z,1+n\Z,2+n\Z,...,(n-1)+n\Z$, and addition and multiplication are modular arithmetic $\pmod n$.
\end{eg}
\begin{eg}
$(X) \triangleleft \C[X]$, so we have a ring $\C[X] / (X)$. Then
\begin{equation*}
\begin{aligned}
a_0+a_1X+a_2X^2+...+a_nX^n + (X) = a_0 + (X).
\end{aligned}
\end{equation*}
If $a_0+(X) = b_0+(X)$, then $a_0-b_0 \in (X)$. So $X | a_0-b_0$, i.e. $a_0=b_0$.
So consider
\begin{equation*}
\begin{aligned}
\phi: \C &\longrightarrow \C[X] / (X)\\
a &\longmapsto a+(X)
\end{aligned}
\end{equation*}
which is surjective and injective by the above. So $\phi$ is a bijection.
Observe that $\phi$ is a ring homomorphism. Its inverse is $f+(X) \to f(0)$.
\end{eg}
\begin{prop} (Euclidean algorithm for polynomials)\\
Let $F$ be a field and $f,g \in F[X]$ with $g \neq 0$. Then we may write
\begin{equation*}
\begin{aligned}
f = g\cdot q + r
\end{aligned}
\end{equation*}
with $\deg(r) < \deg(g)$.
\begin{proof}
Let\\
$\deg(f) = n$, so $f=a_0+a_1X + ... + a_nX^n$ with $a_n \neq 0$;\\
$\deg(g) = m$, so $g=b_0+b_1X + ... + b_mX^m$ with $b_m \neq 0$.
If $n<m$, let $q=0$ and $r=f$.
Suppose $n \geq m$, and proceed by induction on $n$. Let
\begin{equation*}
\begin{aligned}
f_1 = f-g \cdot X^{n-m} \cdot a_n b_m^{-1}
\end{aligned}
\end{equation*}
we can do this because $F$ is a field, so $b_m$ has an inverse.
This has degree smaller than $n$.
If $n=m$, then $f = gX^{n-m}a_nb_m^{-1} + f_1$ where $\deg(f_1)<n=m$.
If $n>m$, by induction on degree, we have $f_1 = g\cdot q_1 + r$ with $\deg(r) < \deg(g)$. So $f=gX^{n-m}a_nb_m^{-1} + g\cdot q_1+r = g (X^{n-m}a_nb_m^{-1} + q_1) +r$ as required.
\end{proof}
\end{prop}
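\begin{eg}
For instance, in $\Q[X]$, dividing $f = X^3+X+1$ by $g = X^2+1$ gives $X^3+X+1 = X\cdot(X^2+1) + 1$, i.e. $q = X$ and $r = 1$, with $\deg(r) = 0 < 2 = \deg(g)$.
\end{eg}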
\begin{eg}
Consider $(X^2+1) \triangleleft \R[X]$, and $R=\R[X]/(X^2+1)$. Elements of $R$ are of the form $f+(X^2+1)$. By Euclidean algorithm we have $f=q\cdot (X^2+1) + r$ with $\deg(r) < 2$. So $f+(X^2+1) = r + (X^2+1)$. So every coset is represented by a polynomial $r$ of degree at most $1$.
If $a_1+b_1X + (X^2+1) = a_2+b_2X + (X^2+1)$, then
\begin{equation*}
\begin{aligned}
X^2+1 | (a_1+b_1X)-(a_2+b_2X)
\end{aligned}
\end{equation*}
But by degree we know that $(a_1+b_1X) - (a_2+b_2X) = 0$. So take
\begin{equation*}
\begin{aligned}
\phi:\R[X]/(X^2+1) &\to \C\\
a+bX+(X^2+1) &\to a+bi
\end{aligned}
\end{equation*}
This is a bijection. It sends addition to addition, and multiplication satisfies
\begin{equation*}
\begin{aligned}
&\phi((a+bX+(X^2+1)) \cdot (c+dX+(X^2+1)))\\
&=\phi(ac+(bc+ad)X + bdX^2 + (X^2+1))\\
&=\phi(ac+(bc+ad)X + bd(-1) + bd(X^2+1) + (X^2+1))\\
&=\phi((ac-bd) + (bc+ad)X + (X^2+1))\\
&=(ac-bd)+(bc+ad)i\\
&=(a+ib)(c+id)
\end{aligned}
\end{equation*}
So $\phi$ is a homomorphism. So $\R[X] / (X^2+1) \cong \C$.
\end{eg}
We also have $\Q[X]/(X^2-2) \cong \Q[\sqrt{2}] \subseteq \R$.
\begin{thm} (First isomorphism theorem)\\
Let $\phi:R \to S$ be a ring homomorphism. Then $\ker(\phi) \triangleleft R$, $\im(\phi) \leq S$, and $R/\ker(\phi) \cong \im(\phi)$ by $r+\ker(\phi) \to \phi(r)$.
\end{thm}
\begin{thm} (Second isomorphism theorem)\\
Let $R \leq S$ be a subring and $J \triangleleft S$. Then $R\cap J \triangleleft R$, $(R+J)/J = \{r+J | r \in R\} \leq S/J$, and $R / (R\cap J) \cong (R+J)/J$.
\end{thm}
\begin{thm} (Subring correspondence)\\
We have a bijection between subrings of $R/I$ and subrings of $R$ containing $I$ by:\\
$S/I \leq R/I \leftarrow I \triangleleft S \leq R$\\
$L \leq R/I \rightarrow \{r\in R | r+I \in L\}$, and the same map gives a bijection between ideals of $R/I$ and ideals of $R$ containing $I$ by
\begin{equation*}
\begin{aligned}
J/I \triangleleft R/I \leftrightarrow I \triangleleft J \triangleleft R.
\end{aligned}
\end{equation*}
\end{thm}
\begin{thm} (Third isomorphism theorem)\\
Let $I,J \triangleleft R$, $I \subset J$. Then $J/I \triangleleft R/I$ and $(R/I)/(J/I) \cong R/J$.
\end{thm}
\begin{eg}
Consider the homomorphism $\phi: \R[X] \to \C$ by substituting in $X=i$, which is onto. We know
\begin{equation*}
\begin{aligned}
\ker(\phi) = \{f \in \R[x] | f(i) = 0\} = (X^2+1)
\end{aligned}
\end{equation*}
because real polynomials with $i$ as a root also have $-i$ as a root, and so are divisible by $(X-i)(X+i) = (X^2+1)$. Then by the first isomorphism theorem,
\begin{equation*}
\begin{aligned}
\R[X] / (X^2+1) \cong \C
\end{aligned}
\end{equation*}
(Compare with the previous proof).
\end{eg}
\begin{defi}
For any ring $R$, there is a unique homomorphism
\begin{equation*}
\begin{aligned}
\iota:\Z &\to R\\
1 &\to 1_R\\
n>0 &\to 1_R+1_R+...+1_R \ (n \text{ times})\\
n<0 &\to -(1_R+1_R+...+1_R)\ (-n \text{ times})\\
\end{aligned}
\end{equation*}
Note that $\ker(\iota) \triangleleft \Z$, so $\ker(\iota) = n\Z$ for some $n \geq 0$. This $n \geq 0$ is called the \emph{characteristic} of the ring $R$.
\end{defi}
\begin{eg}
$\Z \leq \Q \leq \R \leq \C$ all have characteristic $0$, while $\Z/n$ has characteristic $n$.
\end{eg}
\subsection{Integral domains, field of fractions, maximal and prime ideal}
One thing to remember:
\begin{equation*}
\begin{aligned}
\text{Field} \implies \text{ED} \implies \text{PID} \implies \text{UFD} \implies \text{ID}.
\end{aligned}
\end{equation*}
The interesting bits start here.
\begin{defi}
A non-zero ring $R$ is called an \emph{integral domain (ID)} if for all $a,b \in R$, $a \cdot b = 0 \implies a=0$ or $b=0$.
We call $x$ a \emph{zero divisor} in $R$ if $x\neq 0$ but $\exists y \neq 0$ s.t. $xy = 0$.
\end{defi}
\begin{eg}
All fields are integral domains. If $xy=0$ with $y \neq 0$, then $xyy^{-1} = 0$ i.e. $x=0$.
A subring of an integral domain is an integral domain, so $\Z \leq \Q$ and $\Z[i] \leq \C$ are integral domains.
\end{eg}
\begin{defi}
A ring $R$ is a \emph{principal ideal domain (PID)} if it is an integral domain and every ideal is principal.
\end{defi}
For example, $\Z$ is a principal ideal domain.
\begin{lemma}
A finite integral domain is a field.
\begin{proof}
Let $a \neq 0 \in R$, and consider
\begin{equation*}
\begin{aligned}
a \cdot -: R &\to R\\
b &\to ab
\end{aligned}
\end{equation*}
This is a homomorphism of abelian groups and its kernel is $\{b \in R | ab=0\} = \{0\}$. So $a\cdot -$ is injective. But $R$ is finite. So $a \cdot -$ is bijective. In particular, $\exists b \in R$ s.t. $ab=1$. So $R$ is a field.
\end{proof}
\end{lemma}
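\begin{eg}
For example, $\Z/p$ for a prime $p$ is a finite integral domain (if $ab \equiv 0$ then $p \mid ab$, so $p \mid a$ or $p \mid b$), hence a field.
\end{eg}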
\begin{lemma}
Let $R$ be an integral domain, then $R[X]$ is also an integral domain.
\begin{proof}
Let $f=\sum_{i=0}^n a_i X^i$ and $a_n \neq 0$, $g=\sum_{i=0}^m b_i X^i$ and $b_m \neq 0$ be non-zero polynomials. Then the largest power of $X$ in $fg$ is $X^{n+m}$ and its coefficient is $a_nb_m \neq 0$ as $R$ is an ID. So $fg \neq 0$.
\end{proof}
\end{lemma}
Iterating this, we have
\begin{equation*}
\begin{aligned}
R[X_1,...,X_n] = (((R[X_1])[X_2])...[X_n])
\end{aligned}
\end{equation*}
is an integral domain.
\begin{thm}
Let $R$ be an ID. There is a \emph{field of fractions} $F$ of $R$ with the following properties:\\
(i) $F$ is a field;\\
(ii) $R \leq F$;\\
(iii) every element of $F$ is of the form $a \cdot b^{-1}$ for some $a,b \in R \leq F$ with $b \neq 0$.
\end{thm}
\begin{proof}
Consider
\begin{equation*}
\begin{aligned}
S=\{(a,b)\in R\times R| b\neq 0\}
\end{aligned}
\end{equation*}
with the equivalence relation $(a,b) \sim (c,d) \iff ad=bc$ in $R$. This is clearly reflexive and symmetric. For transitivity, suppose
\begin{equation*}
\begin{aligned}
(a,b) \sim (c,d) \text{ and } (c,d) \sim (e,f).
\end{aligned}
\end{equation*}
Then $(ad)f = (bc)f = b(cf) = b(ed) \implies d(af-be) =0$. But $d \neq 0$, so $af-be = 0$, i.e. $(a,b) \sim (e,f)$.
Let $F = S/\sim$. Write $[(a,b)] = \frac{a}{b}$ and define
\begin{equation*}
\begin{aligned}
\frac{a}{b} + \frac{c}{d} = \frac{ad+bc}{bd},\\
\frac{a}{b} \cdot \frac{c}{d} = \frac{ac}{bd}.
\end{aligned}
\end{equation*}
and $0=\frac{0}{1}$, $1=\frac{1}{1}$.
If $\frac{a}{b} \neq 0$ then $a \cdot 1 \neq 0 \cdot b$, i.e. $a \neq 0$. Then $\frac{b}{a} \in F$, so $\frac{a}{b} \cdot \frac{b}{a} = \frac{1}{1}$. So $\frac{a}{b}$ has an inverse, so $F$ is a field.
We make $R \leq F$ via the injective ring homomorphism $\phi:R \to F$, $r \to \frac{r}{1}$.
\end{proof}
\begin{eg}
The field of fractions of $\Z$ is $\Q$, and that of $\C[z]$ is the field of rational functions in $z$.
\end{eg}
Note: the ring $\{0\}$ is \emph{not} a field.
\begin{lemma}
A (non-zero) ring is a field iff its only ideals are $\{0\}$ and $R$.
\begin{proof}
Suppose $R$ is a field. If $I \triangleleft R$ is a non-zero ideal, then it contains some $a \neq 0$, which is a unit; but an ideal containing a unit must be the whole ring. Conversely, suppose the only ideals are $\{0\}$ and $R$, and let $x \neq 0 \in R$. Then $(x)$ must be $R$, as it is \emph{not} the zero ideal. So $\exists y \in R$ s.t. $xy = 1_R$. So $x$ is a unit.
\end{proof}
\end{lemma}
\begin{defi}
An ideal $I \triangleleft R$ is \emph{maximal} if it is proper and there is no proper ideal which properly contains $I$.
\end{defi}
\begin{lemma}
An ideal $I$ is maximal iff $R/I$ is a field.
\begin{proof}
$R/I$ is a field $\iff$ $I/I$ and $R/I$ are the only ideals in $R/I$ $\iff$ $I$ and $R$ are the only ideals of $R$ containing $I$ (by the ideal correspondence) $\iff$ $I$ is maximal.
\end{proof}
\end{lemma}
\begin{defi}
An ideal $I \triangleleft R$ is \emph{prime} if $I$ is proper, and if $a,b \in R$ are s.t. $a \cdot b \in I$, then $a\in I$ or $b \in I$.
\end{defi}
\begin{eg}
The ideal $n\Z \triangleleft \Z$ is prime if and only if $n$ is zero or a prime number: if $p$ is prime and $a \cdot b \in p\Z$, then $p | a \cdot b$, so $p|a$ or $p|b$, i.e. $a \in p\Z$ or $b \in p\Z$.
Conversely, if $n=uv$ is composite, $u\cdot v \in n\Z$ but $u,v \not\in n\Z$.
\end{eg}
\begin{lemma}
$I \triangleleft R$ is prime iff $R/I$ is an integral domain.\\
Note that this shows that every maximal ideal is prime since fields are integral domains.
\end{lemma}
\begin{proof}
Suppose $I$ is prime. Let $a+I,b+I \in R/I$ be s.t. $(a+I)(b+I) = 0$, i.e. $ab+I=0$, so $ab \in I$. But $I$ is prime, so $a \in I$ or $b \in I$. So $a+I = 0+I$ or $b+I = 0+I$ is the zero element in $R/I$. So $R/I$ is an integral domain.
For the other direction, suppose $R/I$ is an integral domain. Let $ab \in I$. Then $ab+I = 0$, so $(a+I)(b+I)=0$. So $a+I = 0+I$ or $b+I = 0+I$, i.e. $a \in I$ or $b \in I$.
\end{proof}
\begin{lemma}
If $R$ is an integral domain, then its characteristic is $0$ or a prime number.
\begin{proof}
Let $\iota:\Z \to R$ with $1 \to 1_R$, and consider $\ker(\iota) = n\Z$. By the first isomorphism theorem, $\Z / n\Z \cong Im\left(\iota\right) \leq R$. As a subring of an integral domain is again an integral domain, $\Z/n\Z$ is an integral domain, so $n\Z \triangleleft \Z$ is prime. So $n$ is zero or a prime number.
\end{proof}
\end{lemma}
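As an illustration of the last two lemmas:
\begin{eg}
$\Z/6$ has characteristic $6$, which is neither zero nor prime; consistently, $\Z/6$ is not an integral domain, since $2\cdot 3 = 0$ there.
\end{eg}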
\subsection{Factorisation in integral domains}
Suppose throughout this section that $R$ is an integral domain.\\
\begin{defi}
1) An element $a\in R$ is a unit if there is $b\in R$ s.t. $ab=1$. Equivalently, $\left(a\right)=R$.\\
2) $a$ divides $b$ if there is $c\in R$ s.t. $b=a\cdot c$. Equivalently, $\left(b\right) \subset \left(a\right)$.\\
3) $a,b\in R$ are associates if $a=b\cdot c$ with $c$ a unit. Equivalently, $\left(a\right)=\left(b\right)$, or $a|b$ and $b|a$.\\
4) $a\in R$ is irreducible if it is not 0, not a unit, and if $a=x\cdot y$ then $x$ or $y$ is a unit.\\
5) $a\in R$ is prime if it is not 0, not a unit, and when $a|x\cdot y$ then $a|x$ or $a|y$.\\
Note that $2\in \Z$ is prime, but $2\in \Q$ is not.\\
$2x\in \Q[x]$ is irreducible, but $2x\in \Z[x]$ is not irreducible, since $2$ is a unit in $\Q[x]$ but not in $\Z[x]$.
\end{defi}
\begin{lemma}
$\left(r\right)$ is a prime ideal in $R \iff r=0$ or $r$ is prime in $R$.
\begin{proof}
1) let $\left(r\right)$ be a prime, $r\neq 0$. As $\left(r\right) \neq R$, $r$ is not a unit.\\
Suppose $r|a\cdot b$. Then $a\cdot b\in \left(r\right)$, but $\left(r\right)$ is prime. So $a\in\left(r\right)$ or $b\in\left(r\right)$. So $r|a$ or $r|b$. So $r$ is prime in $R$.\\
2) if $r=0$ then $(0)$ is a prime ideal since $R$ is an integral domain.\\
Now let $r\neq 0$ and be prime in $R$.\\
Let $ab\in\left(r\right)$. Then $r|ab$. So $r|a$ or $r|b$. So $a\in\left(r\right)$ or $b\in\left(r\right)$. So $\left(r\right)$ is a prime ideal in $R$.
\end{proof}
\end{lemma}
\begin{lemma}
if $r\in R$ is prime, then it is irreducible.
\begin{proof}
let $r\in R$ be prime, and suppose $r=a\cdot b$.\\
As $r$ is prime, $r|a$ or $r|b$.\\
Suppose $r|a$. So $a=r\cdot c$. Then $r=r\cdot c\cdot b$.\\
As $R$ is an integral domain, \\
$r\left(c\cdot b-1\right)=0 \implies c\cdot b=1$.\\
So $b$ is a unit. So $r$ is irreducible.
\end{proof}
\end{lemma}
\begin{eg}
Let $R=\Z[\sqrt{-5}]=\left\{a+b\sqrt{-5}|a, b\in\Z\right\}\subseteq \C$.\\
$\C$ is a field and $R$ is a subring, so $R$ is an integral domain.\\
Consider the "norm":\\
\begin{equation*}
\begin{aligned}
N:R &\to \Z_{\geq 0}\\
a+b\sqrt{-5} &\to a^2+5b^2\\
z &\to z\overline{z}=|z|^2.
\end{aligned}
\end{equation*}
This satisfies $N\left(zw\right)=N\left(z\right)\cdot N\left(w\right)$.\\
If $r\cdot s=1$ then $1=N\left(1\right)=N\left(r\cdot s\right)=N\left(r\right)\cdot N\left(s\right)$.\\
So $N\left(s\right)=N\left(r\right)=1$. So any unit has norm 1.\\
i.e. $a^2+5b^2=1$. Then $a=\pm 1, b=0$: only $\pm 1\in R$ are units.\\
\textbf{Claim}: $2\in R$ is irreducible:\\
Suppose $2=ab$. Then $4=N\left(a\right) N\left(b\right)$.\\
Note that nothing in $R$ has norm 2. So WLOG $N\left(a\right)=1,N\left(b\right)=4$. So $a$ is a unit. So 2 is irreducible.\\
Similarly $3,1+\sqrt{-5},1-\sqrt{-5}$ are irreducible (no $r$ with $N\left(r\right)=3$).\\
Note that $\left(1+\sqrt{-5}\right)\left(1-\sqrt{-5}\right)=6=2\cdot 3$.\\
\textbf{Claim}: 2 does not divide $1 \pm \sqrt{-5} \implies 2$ is not prime:\\
if $2|1+\sqrt{-5}$, then $N\left(2\right)|N\left(1+\sqrt{-5}\right)$, i.e. $4|6$, contradiction.\\
\textbf{Lessons}: 1) irreducible doesn't imply prime in general.\\
2) $\left(1+\sqrt{-5}\right)\left(1-\sqrt{-5}\right) = 2\cdot 3$. So factorisation into irreducibles might not be unique.
\end{eg}
\begin{defi}
An integral domain $R$ is a \emph{Euclidean domain} (ED) if there is a function $\varphi: R \backslash \left\{0\right\}\to \Z_{\geq 0}$, a "Euclidean function", such that:\\
1) $\varphi \left(a\cdot b\right) \geq \varphi\left(b\right)$ for all $a,b\neq 0$;\\
2) if $a,b\in R$ with $b\neq 0$, there are $q,r\in R$ s.t. $a=b\cdot q+r$, such that $r=0$ or $\varphi\left(r\right)<\varphi\left(b\right)$ ($r$ is "strictly smaller than" $b$).
\end{defi}
\begin{eg}
1) $\Z$ is a Euclidean domain with $\varphi\left(n\right)=|n|$.\\
2) $F[x]$ with $F$ a field is a Euclidean domain with $\varphi\left(f\right)=\deg\left(f\right)$.\\
3) $R=\Z[i]$ is a Euclidean domain, with $\varphi\left(z\right)=N\left(z\right)=|z|^2=z\overline{z}$:\\
i) $\varphi\left(zw\right)=\varphi\left(z\right)\varphi\left(w\right) \geq \varphi\left(z\right)$, as $\varphi\left(w\right)\in \Z^+$ for $w\neq 0$;\\
ii) let $a,b\in\Z[i]$. Consider $\frac{a}{b}\in\C$.\\
We can choose $q\in\Z[i]$ with $|\frac{a}{b}-q|<1$ (take a nearest Gaussian integer to $\frac{a}{b}$; every point of $\C$ is within distance $\frac{\sqrt{2}}{2}<1$ of one), i.e. $\frac{a}{b}=q+c$ with $|c|<1$.\\
Then take $r=b\cdot c$, so $a=b\cdot q + b\cdot c = b\cdot q+r$.\\
$r=a-bq$, so $r$ is in the ring $\Z[i]$; and $\varphi\left(r\right)=N\left(bc\right)=N\left(b\right)N\left(c\right)<N\left(b\right)=\varphi\left(b\right)$ since $N\left(c\right)<1$.
\end{eg}
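To make the division step in $\Z[i]$ concrete, here is a small worked instance:
\begin{eg}
Take $a = 11+3i$ and $b = 1+2i$ in $\Z[i]$. Then
\begin{equation*}
\begin{aligned}
\frac{a}{b} = \frac{\left(11+3i\right)\left(1-2i\right)}{5} = \frac{17-19i}{5},
\end{aligned}
\end{equation*}
whose nearest Gaussian integer is $q=3-4i$. Then $r = a - bq = \left(11+3i\right) - \left(1+2i\right)\left(3-4i\right) = \left(11+3i\right)-\left(11+2i\right) = i$, and indeed $N\left(r\right) = 1 < 5 = N\left(b\right)$.
\end{eg}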
\begin{prop} (ED $\implies$ PID)\\
if $R$ is a Euclidean domain, then it is a principal ideal domain.
\begin{proof}
Let $R$ have Euclidean function $\varphi:R\backslash\left\{0\right\}\to\Z\geq 0$. Let $I\triangleleft R$ be non-zero. Let $b\in I\backslash\left\{0\right\}$ be an element with $\varphi\left(b\right)$ minimal.\\
Then for $a\in I$, write $a=bq+r$ with $r=0$, or $\varphi\left(r\right)<\varphi\left(b\right)$. But $r=a-bq\in I$, so we can't have $\varphi\left(r\right)<\varphi\left(b\right)$. So $r=0$.\\
Thus $a\in\left(b\right)$. Since $a$ was arbitrary, $I \subseteq \left(b\right)$. But $\left(b\right)\subseteq I$ as well, so $I=\left(b\right)$. So $R$ is a principal ideal domain.
\end{proof}
\end{prop}
\begin{eg}
$\Z$,$F[X]$($F$ field) are Principal ideal domains.\\
$\Z[i]$ is a PID.
In $\Z[X]$, $\left(2,x\right)\triangleleft \Z[X]$ is not a principal ideal.\\
Otherwise suppose $\left(2,x\right)=\left(f\right)$; then $2=f\cdot g$ for some $g$. Then $f$ has to have degree zero, so it is a constant, so $f=\pm 1$ or $\pm 2$.\\
If $f=\pm 1$ a unit, then $\left(f\right)=\Z[x]$, but $1\notin \left(2,x\right)$. Contradiction.
If $f=\pm 2$, $x\in \left(2,x\right)=\left(f\right)$ so $\pm 2 | x$, a contradiction.
\end{eg}
\begin{eg}
Let $A\in M_{n\times n} \left(F\right)$ be an $n\times n$ matrix over a field $F$.\\
$I=\left\{f\in F[X]|f\left(A\right)=0\right\}$.\\
If $f, g\in I$, then $\left(f+g\right)\left(A\right)=f\left(A\right)+g\left(A\right)=0+0=0$.\\
If $f\in I, g\in F[X]$ then $\left(f\cdot g\right)\left(A\right)=f\left(A\right) \cdot g\left(A\right)=0$\\
So $I$ is an ideal.\\
As $F[X]$ is a PID, we have $I=\left(m\right)$ for some $m\in F[X]$.\\
Suppose $f\in F[X]$ s.t. $f\left(A\right)=0$. Then $f\in I$ so $f=m\cdot g$. So $m$ is the minimal polynomial of $A$.
\end{eg}
\begin{defi}
An integral domain is a unique factorization domain (UFD) if:\\
1) every non-zero non-unit may be written as a product of irreducible elements;\\
2) if $p_{1}p_{2}...p_{n}=q_{1}q_{2}...q_{m}$ with $p_{i},q_{i}$ irreducible, then $n=m$, and they can be reordered such that $p_{i}$ is an associate of $q_{i}$. (they generate the same ideal)\\
\end{defi}
Goal: want to show that PID $\implies$ UFD.\\
\begin{lemma}
Let $R$ be a PID. If $p\in R$ is irreducible, then it is prime.\\
(prime $\implies$ irreducible in any integral domain)
\begin{proof}
Let $p\in R$ be irreducible. Suppose $p|a\cdot b$. Suppose $p \nmid a$.\\
Consider the ideal $\left(p,a\right)\triangleleft R$; $R$ is a PID, so $\left(p,a\right)=\left(d\right)$ for some $d\in R$.\\
So $d|p$,so $p=q_{1}\cdot d$ for some $q_{1}$.\\
We must have $q_{1}$ a unit or $d$ a unit.\\
If $q_{1}$ is a unit then $d=q_{1}^{-1}\cdot p$; but $d | a$, so $p | a$, a contradiction.\\
Thus $d$ is a unit, so $\left(p,a\right)=\left(d\right)=R$.\\
So we have $1_{R}=r\cdot p+s\cdot a$ for some $r,s \in R$.\\
So $b=r\cdot p\cdot b + s\cdot a\cdot b$. So $p|b$.
\end{proof}
\end{lemma}
\begin{lemma}
Let $R$ be a PID, let $I_{1}\subseteq I_{2}\subseteq...$ be a chain of ideals. Then there is an $N\in\N$ s.t. $I_{n}=I_{N}$ for all $n\geq N$. (This is called the ascending chain condition (ACC); a ring satisfying this condition is called \emph{Noetherian}.)
\begin{proof}
Let $I=\bigcup_{n\geq 1} I_{n}$; this is again an ideal, since the $I_n$ form a chain. As $R$ is a PID, $I=\left(a\right)$ for some $a\in R$. This $a$ lies in $I=\bigcup_{n\geq 1} I_{n}$, so $a\in I_{N}$ for some $N$. \\
Thus $\left(a\right) \leq I_{N} \leq I = \left(a\right)$.\\
So they are all equal, i.e. $I_{N}=\left(a\right)=I$, and hence $I_{n}=I_{N}$ for all $n \geq N$.
\end{proof}
\end{lemma}
\begin{prop}
PID $\implies$ UFD.
\begin{proof}
1) Need to show any non-zero non-unit $r\in R$ is a product of irreducibles.\\
Let $r\in R$. If $r$ is irreducible then we are done.\\
Suppose not, then $r=r_{1}s_{1}$ with $r_{1},s_{1}$ both non-units.\\
If both $r_{1},s_{1}$ are irreducible then we are done. Suppose not; WLOG write $r_{1}=r_{2}s_{2}$ with $r_{2},s_{2}$ non-units.\\
Continue in this way. If the process doesn't end, $\left(r\right) \leq \left(r_{1}\right) \leq ... \leq \left(r_{n}\right) \leq ...$.\\
So by the ACC property, $\left(r_{n}\right)=\left(r_{n+1}\right)=...$ for some $n$.\\
So $r_{n}=r_{n+1}\cdot s_{n+1}$, and $\left(r_{n}\right)=\left(r_{n+1}\right) \implies s_{n+1}$ is a unit. Contradiction.\\
2) Let $p_{1}p_{2}...p_{n}=q_{1}q_{2}...q_{m}$ with $p_{i},q_{i}$ irreducible.\\
So $p_{1} | q_{1}...q_{m}$. In a PID, irreducible $\iff$ prime. So $p_{1}$ divides some $q_{i}$; reorder to suppose $p_{1}|q_{1}$. So $q_{1}=p_{1}\cdot a$. But as $q_{1}$ is irreducible, $a$ must be a unit. So $p_{1}$ and $q_{1}$ are associates.\\
Cancelling $p_{1}$ gives:\\
$p_{2}p_{3}...p_{n}=\left(aq_{2}\right)q_{3}...q_{m}$ and we continue.\\
This also shows $n=m$: if, say, $n>m$, then after cancelling all the $q_{i}$ we are left with $p_{m+1}...p_{n}$ equal to a unit, a contradiction since irreducibles are not units.
\end{proof}
\end{prop}
\begin{defi}
$d$ is a greatest common divisor of $a_{1},a_{2},...,a_{n}$ if $d|a_{i}$ for all $i$, and if $d'|a_{i}$ for all $i$ then $d'|d$.
\end{defi}
\begin{lemma}
If $R$ is a UFD then the gcd exists, and is unique up to associates.
\begin{proof}
Every $a_i$ is a product of irreducibles, so let $p_{1},p_{2},...,p_{m}$ be a list of all the irreducibles which occur as factors of some $a_{i}$, with no two of them associates of each other.\\
Write $a_{i}=u_{i}\Pi_{j=1}^{m} p_{j}^{n_{ij}}$ for $u_{i}$ units and $n_{ij} \in \N$.\\
Let $m_{j}=\min_{i}\left(n_{ij}\right)$ and $d=\Pi_{j=1}^m p_{j}^{m_{j}}$. As $m_{j} \leq n_{ij}$ for all $i$, $d|a_{i}$ for all $i$.\\
If $d'|a_{i}$ for all $i$, then every irreducible factor of $d'$ is an associate of some $p_j$ (as $d'|a_1$), so we may write $d'=v \Pi_{j=1}^m p_{j}^{t_{j}}$ with $v$ a unit.\\
Then we must have $t_{j} \leq n_{ij} \forall i$ so $t_{j} \leq m_{j} \forall j$. Then $d'|d$.\\
\end{proof}
\end{lemma}
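For instance, this recipe recovers the usual gcd in $\Z$:
\begin{eg}
In $\Z$, $12 = 2^2\cdot 3$ and $18=2\cdot 3^2$, so taking minimal exponents gives $\gcd\left(12,18\right) = 2^{\min\left(2,1\right)}\cdot 3^{\min\left(1,2\right)} = 2\cdot 3 = 6$; the only other gcd is its associate $-6$.
\end{eg}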
\subsection{Factorisation in polynomial rings}
For $F$ a field, we know $F[x]$ is a Euclidean Domain(ED), so a PID, so a UFD. So\\
1) $I\triangleleft F[x] \implies I=\left(f\right)$.\\
2) $f\in F[x]\text{ irreducible} \iff f\text{ prime}$.\\
3) Let $f\in F[x]$ be irreducible, and $\left(f\right) \leq J \leq F[x]$. Then $J=\left(g\right)$ and $\left(f\right) \subset \left(g\right)$ so $f=g\cdot h$. But $f$ is irreducible, so $g$ or $h$ is a unit.\\
If $g$ is a unit, then $\left(g\right)=F[x]$;\\
If $h$ is a unit, then $\left(f\right)=\left(g\right)$.\\
So $\left(f\right)$ is a maximal ideal.\\
4) $\left(f\right)$ a non-zero prime ideal $\implies f$ prime $\implies f$ irreducible $\implies \left(f\right)$ is maximal.\\
So in $F[x]$, prime ideals are the same as maximal ideals.\\
5) $f$ is irreducible if and only if $F[x]/\left(f\right)$ is a field.\\
\begin{defi}
Let $R$ be a UFD and $f=a_0+a_1 X+...+a_n X^n \in R[x]$ with $a_n \neq 0$. The \emph{content} $c\left(f\right)$ of $f$ is the gcd of all the coefficients of $f$ in $R$, unique up to associates. Say $f$ is \emph{primitive} if $c\left(f\right)$ is a unit, i.e. the $a_i$ are coprime.
\end{defi}
\begin{lemma} (Gauss')
Let $R$ be a UFD, $f\in R[x]$ be a primitive polynomial. Then $f$ is irreducible in $R[x] \iff f$ is irreducible in $F[x]$, where $F$ is the field of fractions of $R$.
\end{lemma}
\begin{eg}
Consider $f=x^3+x+1 \in \Z[x]$. This has content 1 so is primitive.\\
Suppose $f$ is reducible in $\Q[x]$. Then by Gauss' lemma $f$ is reducible in $\Z[x]$ too, so $x^3+x+1=g\cdot h$ for $g,h\in \Z[x]$ with neither $g$ nor $h$ a unit. Neither $g$ nor $h$ can be constant (a constant non-unit factor would divide $c\left(f\right)$, which is a unit), so they both have degree at least 1. So WLOG suppose $g$ has degree 1 and $h$ has degree 2.\\
So $g=b_0+b_1x$, $h=c_0+c_1x+c_2x^2$.\\
Multiplying them out gives $b_0c_0=1$ and $c_2b_1=1$, so $b_0$ and $b_1$ are both $\pm 1$. So $g$ is $1+x$ or $1-x$ or $-1+x$ or $-1-x$, so has $\pm 1$ as a root. But $f$ doesn't have $\pm 1$ as a root. Contradiction.\\
Note that from this we also see that $f$ has no root in $\Q$.
\end{eg}
\begin{lemma}
Let $R$ be a UFD. If $f,g\in R[x]$ are primitive, then $f\cdot g$ is primitive too (Note that we don't know whether $R[x]$ is a UFD or not).
\begin{proof}
Let $f=a_0+a_1x+...+a_nx^n$ with $a_n\neq 0$,\\
$g=b_0+b_1x+...+b_mx^m$ with $b_m\neq 0$ be both primitive.\\
Suppose $f\cdot g$ is not primitive. Then $c\left(fg\right)$ is not a unit, so let $p$ be an irreducible which divides $c\left(fg\right)$.\\
By assumption $c\left(f\right)$ and $c\left(g\right)$ are units, so $p \nmid c\left(f\right)$ and $p \nmid c\left(g\right)$.\\
Suppose $p|a_0$, $p|a_1$, ..., $p|a_{k-1}$, but $p \nmid a_k$;\\
$p|b_0$,...,$p|b_{l-1}$,but $p \nmid b_l$.\\
Look at coefficient of $x^{k+l}$ in $f\cdot g$: \\
$...+a_{k+1}b_{l-1}+a_kb_l+a_{k-1}b_{l+1}+...=\sum_{i+j=k+l} a_ib_j$.\\
As $p|c\left(fg\right)$, we have $p|\sum_{i+j=k+l}a_ib_j$.\\
We see that the only term that might not be divisible by p is $a_kb_l$.\\
So $p|a_kb_l$. $p$ is irreducible (so prime), so $p|a_k$ or $p|b_l$. Contradiction.\\
So $f\cdot g$ is primitive.
\end{proof}
\end{lemma}
\begin{coro}
let $R$ be a UFD. Then for $f,g\in R[x]$ we have that $c\left(f\cdot g\right)$ is an associate of $c\left(f\right)c\left(g\right)$.
\begin{proof}
We can always write $f=c\left(f\right)f_1, g=c\left(g\right)g_1$ with $f_1, g_1$ being primitive.\\
Then $f\cdot g=c\left(f\right)c\left(g\right)\left(f_1\cdot g_1\right)$, and $f_1 g_1$ is primitive by the previous lemma. So $c\left(f\right)c\left(g\right)$ is a gcd of the coefficients of $f\cdot g$, i.e. it is $c\left(fg\right)$ up to associates.
\end{proof}
\end{coro}
\begin{proof}(Gauss' lemma)\\
We will show that a primitive $f\in R[x]$ is reducible in $R[x] \iff$ it is reducible in $F[x]$.\\
1) Let $f=g\cdot h$ be a product in $R[x]$, $g,h$ not units. As $f$ is primitive, so are $g$ and $h$. So both have degree at least 1.\\
So $g,h$ are not units in $F[x]$ either, so $f$ is reducible in $F[x]$.\\
2) Let $f=g\cdot h$ in $F[x]$, $g$ and $h$ not units. So $g$ and $h$ have degree at least 1.\\
We can find $a,b\in R$ s.t. $a\cdot g\in R[x]$ and $b\cdot h\in R[x]$ (clear the denominators).\\
Then $a\cdot b\cdot f = \left(a\cdot g\right)\left(b\cdot h\right)$ is a factorisation in $R[x]$.\\
Let $\left(a\cdot g\right)=c\left(a\cdot g\right)\cdot g_1$ with $g_1$ primitive, $\left(b\cdot h\right)=c\left(b\cdot h\right)\cdot h_1$ with $h_1$ primitive.\\
So
\begin{equation*}
\begin{aligned}
a\cdot b &= c\left(a\cdot b\cdot f\right)\\
&= c\left(\left(a\cdot g\right)\left(b\cdot h\right)\right)\\
&= u\cdot c\left(a\cdot g\right)\cdot c\left(b\cdot h\right)
\end{aligned}
\end{equation*}
by the previous corollary, where $u\in R$ is a unit.\\
But also $a\cdot b\cdot f=c\left(a\cdot g\right)\cdot c\left(b\cdot h\right)\cdot g_1 \cdot h_1$.\\
So cancelling $a\cdot b$ gives $f=u^{-1} g_1h_1 \in R[x]$; as $g_1$ and $h_1$ have degree at least 1, $f$ is reducible in $R[x]$.
\end{proof}
\begin{prop}
Let $R$ be a UFD, $g\in R[x]$ be primitive.\\
Let $J=\left(g\right)\triangleleft R[x]$, $I=\left(g\right)\triangleleft F[x]$.\\
Then $J=I \cap R[x]$.\\
(More plainly, if $f=g\cdot h\in R[x]$ with $h\in F[x]$, then in fact $f=g\cdot h'$ with $h' \in R[x]$.)
\begin{proof}
Certainly $J \subseteq I \cap R[x]$. Let $f\in I \cap R[x]$, so $f=g\cdot h$ with $h\in F[x]$. Choose $b\in R$ s.t. $b\cdot h\in R[x]$ (clear denominators).\\
Then $b\cdot f=g\cdot \left(bh\right)\in R[x]$.\\
Let $\left(b\cdot h\right)=c\left(b\cdot h\right)\cdot h_1$ for $h_1$ primitive. Then\\
$b\cdot f = c\left(b\cdot h\right)\cdot g\cdot h_1$. So $c\left(bf\right)=u\cdot c\left(bh\right)$ for $u$ a unit, since $g\cdot h_1$ is primitive.\\
But $c\left(b\cdot f\right)=b\cdot c\left(f\right)$. So $b|c\left(bh\right)$:\\
write $c\left(bh\right)=b\cdot c$ for some $c\in R$.\\
So $b\cdot f=b\cdot c gh_1$, cancelling $b$ gives $f=g\left(ch_1\right)$. So $g$ divides $f$ in $R[x]$.
\end{proof}
\end{prop}
\begin{thm}
If $R$ is a UFD, then $R[x]$ is a UFD.
\begin{proof}
Let $f\in R[x]$. We can write $f=c\left(f\right)\cdot f_1$ with $f_1$ primitive.\\
Firstly, as $R$ is a UFD, we may factor $c\left(f\right)=p_1 p_2 ... p_n$ for $p_i \in R$ irreducible (so also irreducible in $R[x]$).\\
If $f_1$ is not irreducible, write $f_1 = f_2 f_3$ with $f_2$ and $f_3$ both not units, so $f_2$ and $f_3$ must both have non-zero degree(since $f_1$ is primitive, they can't be constant). Also $\deg\left(f_2\right),\deg\left(f_3\right) < \deg\left(f_1\right)$.\\
If $f_2,f_3$ are irreducible then done. Else continue factoring. At each stage the degree of factors strictly decreases, so we must finish: $f_1 = q_1 q_2 ... q_m$ with $q_i$ irreducible.\\
So $f=p_1 p_2 ... p_n q_1 q_2 ... q_m$ is a product of irreducibles.\\
For uniqueness, first note that $c\left(f\right)=p_1 p_2 ... p_n$ is a unique factorisation up to reordering and associates, as $R$ is a UFD. So cancel this off to obtain $f_1 = q_1 ... q_m$. Now suppose $q_1 q_2 ... q_m = r_1 r_2 ... r_l$ is another factorisation of $f_1$.\\
Note that each $q_i$ and each $r_i$ is a factor of the primitive polynomial $f_1$, so each of them must be also primitive.\\
Let $F$ be the field of fractions of $R$, and consider $q_i, r_i \in F[x]$ instead. Now $F[x]$ is a ED, hence PID, hence UFD. By Gauss' lemma, the $q_i$ and $r_i$ are irreducible in $F[x]$. As $F[x]$ is a UFD we find that $l=m$; and after reordering $r_i = u_i q_i$ with $u_i \in F[x]$ a unit.\\
Firstly $u_i\in F$ since it is a unit.\\
Clearing denominators of $u_i$ (write $u_i = b_i/a_i$ with $a_i, b_i \in R$), we find that $a_i r_i = b_i q_i \in R[x]$.\\
So taking contents shows that $a_i$ and $b_i$ are associates. So $b_i = v_i a_i$ with $v_i\in R$ a unit.\\
Cancelling $a_i$ gives $r_i = v_i q_i$ as required.
\end{proof}
\end{thm}
\begin{eg}
$\Z[x]$ is a UFD.\\
$R$ is a UFD $\implies$ $R[x_1,x_2,...,x_n]$ is a UFD.
\end{eg}
\begin{thm} (Eisenstein's criterion)
Let $R$ be a UFD, let
\begin{equation*}
\begin{aligned}
f=a_0 + a_1 x + ... + a_n x^n \in R[x]
\end{aligned}
\end{equation*}
have $a_n \neq 0$ and $f$ primitive. Let $p\in R$ be irreducible (=prime, since $R$ is a UFD) such that:\\
1) $p \nmid a_n$;\\
2) $p | a_i$ for $0 \leq i \leq n-1$;\\
3) $p^2 \nmid a_0$.\\
Then $f$ is irreducible in $R[x]$, so also irreducible in $F[x]$ by Gauss' lemma.
\begin{proof}
Suppose $f$=$g\cdot h$ with\\
$g=r_0 + r_1 x+...+r_k x^k$ with $r_k \neq 0$,\\
$h=s_0 + s_1 x+...+s_l x^l$ with $s_l \neq 0$.\\
Now $r_k s_l = a_n$, and $p \nmid a_n$ so $p \nmid r_k$ and $p \nmid s_l$.\\
Also $r_0 s_0 = a_0$, and $p|a_0$ but $p^2 \nmid a_0$. So WLOG let $p | r_0$ but $p \nmid s_0$.\\
Let $j$ be such that $p|r_0, p|r_1,...,p|r_{j-1},p \nmid r_j$.\\
Then $a_j = r_0 s_j+r_1 s_{j-1} + ... + r_{j-1} s_1 + r_j s_0$. All but the last term are divisible by $p$, and $r_j s_0$ is not divisible by $p$ since both $r_j$ and $s_0$ are not divisible by $p$.\\
So $p \nmid a_j$. By condition (1) and (2) we must have $j=n$. Also we have $j \leq k \leq n$, so $j=k=n$. That means $l = n-k = 0$, so $h$ is a constant.\\
But $f$ is primitive, it follows that $h$ must be a unit. So $f$ is irreducible.
\end{proof}
\end{thm}
\begin{eg}
Consider $x^n - p \in \Z[x]$ for $p$ prime. Applying Eisenstein's criterion with the prime $p$, we find that all the conditions hold. So $x^n - p$ is irreducible in $\Z[x]$, and so in $\Q[x]$ as well by Gauss' lemma.\\
This implies that $x^n - p$ has no roots in $\Q$. So $\sqrt[n]{p} \notin \Q$.
\end{eg}
\begin{eg}
Consider $f= x^{p-1} + x^{p-2}+...+x^2+x+1 \in \Z[x]$ with $p$ a prime number.\\
Note $f=\frac{x^p-1}{x-1}$, so let $y=x-1$. Then\\
$\hat{f}\left(y\right) = \frac{\left(y+1\right)^p-1}{y} = y^{p-1} + {p \choose 1} y^{p-2} + ... + {p \choose p-1}$.\\
Now $p | {p\choose i}$ for $1\leq i \leq p-1$, but $p^2 \nmid {p \choose p-1} = p$.\\
So by Eisenstein's criterion, $\hat{f}$ is irreducible in $\Z[x]$.\\
Now if $f\left(x\right)=g\left(x\right)\cdot h\left(x\right) \in \Z[x]$, then get $\hat{f}\left(y\right) = g\left(y+1\right)\cdot h\left(y+1\right)$ a factorisation in $\Z[y]$. So $f$ is irreducible.
\end{eg}
\subsection{Gaussian integers}
Recall $\Z[i] = \left\{a+bi | a,b\in\Z\right\} \leq \C$ is the ring of \emph{Gaussian integers}.\\
The \emph{norm} $N\left(a+ib\right)=a^2 + b^2$ serves as a Euclidean function for $\Z[i]$. So it is a ED, so a PID, so a UFD.\\
The units are precisely $\pm 1$ and $\pm i$.\\
\begin{eg}
1) $2=\left(1+i\right)\left(1-i\right)$, so not irreducible, so not prime.\\
2) $3$: $N\left(3\right)=9$, so if $3=u\cdot v$ with $u,v$ not units, then $9=N\left(u\right)N\left(v\right)$ with $N\left(u\right)\neq 1 \neq N\left(v\right)$. So $N\left(u\right)=N\left(v\right)=3$. But $3=a^2+b^2$ has no solutions with $a,b\in \Z$. So 3 is irreducible, so a prime.\\
3) $5=\left(1+2i\right)\left(1-2i\right)$ is not irreducible, so not prime.
\end{eg}
\begin{prop}
A prime number $p\in\Z$ is prime in $\Z[i] \iff p \neq a^2+b^2$ for $a,b\in\Z\backslash\left\{0\right\}$.
\begin{proof}
If $p=a^2+b^2=\left(a+ib\right)\left(a-ib\right)$ then it is not irreducible, so not prime.\\
If $p=u\cdot v$, then $p^2 = N\left(u\right)N\left(v\right)$. So if $u,v$ are not units, then $N\left(u\right)=N\left(v\right)=p$ since $p$ is prime in $\Z$. Writing $u=a+ib$, this says $a^2+b^2=p$.
\end{proof}
\end{prop}
\begin{lemma}
Let $p$ be a prime number, $F_p = \Z/p\Z$ a field with $p$ elements.\\
Let $F_p^* = F_p\backslash\left\{0\right\}$ be the group of invertible elements under multiplication.\\
Then $F_p^* \cong C_{p-1}$.
\begin{proof}
Certainly $F_p^*$ has order $p-1$, and is abelian.\\
By the classification of finite abelian groups, if $F_p^*$ is not cyclic, then it must contain a subgroup $C_m \times C_m$ for some $m>1$.\\
Consider the polynomial $X^m -1 \in F_p[x]$, a UFD. At best this factors into $m$ linear factors, so $X^m -1$ has at most $m$ distinct roots.\\
If $C_m \times C_m \leq F_p^*$, then we have $m^2$ elements of $F_p$ which are roots of $X^m-1$. But $m^2 > m$, contradiction. So $F_p^*$ is cyclic.
\end{proof}
\end{lemma}
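A small concrete case of this lemma:
\begin{eg}
For $p=7$, the powers of $3$ in $F_7^*$ are $3,2,6,4,5,1$, so $F_7^* = \langle 3\rangle \cong C_6$.
\end{eg}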
\begin{prop}
The primes in $\Z[i]$ are, up to associates,\\
1) prime numbers $p\in \Z \leq \Z[i]$ s.t. $p\equiv 3 \mod 4$;\\
2) $z\in\Z[i]$ with $N\left(z\right)=z\overline{z}=p$ for $p$ prime, $p=2$ or $p \equiv 1 \mod 4$.
\begin{proof}
1) If $p\equiv 3 \mod 4$ then $p\neq a^2+b^2$, since squares are $\equiv 0$ or $1 \mod 4$, so $a^2+b^2 \not\equiv 3 \mod 4$.\\
By the previous proposition, $p\in\Z[i]$ is prime.\\
2) If $N\left(z\right)=p$ and $z=uv$, then $N\left(u\right)N\left(v\right)=p$. So $N\left(u\right)=1$ or $N\left(v\right)=1$, so $u$ or $v$ is a unit.\\
Let $z\in\Z[i]$ be irreducible (also prime). Then $\overline{z}$ is irreducible, so $N\left(z\right) = z\overline{z}$ is a factorisation of $N\left(z\right)$ into irreducibles.\\
Let $p\in\Z$ be a prime number dividing $N\left(z\right)$. ($N\left(z\right)\neq 1$ so such $p$ exists).\\
$\bullet$ Case 1: $p\equiv 3 \mod 4$. Then $p\in\Z[i]$ is prime by the first part of the proof. $p|N\left(z\right)=z\overline{z}$ so $p|z$ or $p|\overline{z}$. So perhaps conjugating, get $p|z$. But both are irreducible, so $p$ and $z$ are associates.\\
$\bullet$ Case 2: $p=2$ or $p\equiv 1 \mod 4$.\\
If $p\equiv 1\mod 4$ then $p-1 = 4k$ for some $k$. As $F_p^* \cong C_{p-1} = C_{4k}$, there is a unique element of order $2$, which must be $[-1]\in F_p$.\\
Let $[a]\in F_p^*$ be an element of order 4. Then $[a^2] = [-1]$.\\
So $a^2 + 1$ is divisible by $p$. So $p|\left(a+i\right)\left(a-i\right)$.\\
Also $2|\left(1+i\right)\left(1-i\right)$.\\
So we deduce that $p$ (or $2$) is not prime, hence not irreducible, as it clearly does not divide $a+i$ or $a-i$.\\
So $p=z_1 z_2$ for $z_1, z_2 \in \Z[i]$. So
\begin{equation*}
\begin{aligned}
p^2 = N\left(p\right) = N\left(z_1\right)N\left(z_2\right).
\end{aligned}
\end{equation*}
So as the $z_i$ are not units, $N\left(z_1\right) = N\left(z_2\right) = p$. Now $p=z_1 z_2$ and also $p=N\left(z_1\right)=z_1\bar{z_1}$, so cancelling $z_1$ gives $z_2 = \bar{z_1}$.\\
So $p=z_1 \bar{z_1} | N\left(z\right) = z\bar{z}$. So $z$ is an associate of $z_1$ or $\bar{z_1}$, as $z$ and $z_1$ are irreducible.
\end{proof}
\end{prop}
\begin{coro}
An integer $n\in\Z^+$ may be written as $x^2 + y^2$ (the sum of two squares) if and only if, when we write $n=p_1^{n_1} p_2^{n_2} ... p_k^{n_k}$ as a product of distinct primes, if $p_i \equiv 3 \mod 4$ then $n_i$ is even.
\begin{proof}
Let $n=x^2+y^2 = \left(x+iy\right)\left(x-iy\right)=N\left(x+iy\right)$. Let $z=x+iy$, so $z=\alpha_1 \alpha_2...\alpha_q$ a product of irreducibles in $\Z[i]$.\\
By the proposition, each $\alpha_i$ is either $\alpha_i = p$ prime number with $p\equiv 3 \mod 4$, or $N\left(\alpha_i\right)=p$ a prime number which is either 2 or $\equiv 1 \mod 4$.
\begin{equation*}
\begin{aligned}
n=x^2+y^2=N\left(z\right)=N\left(\alpha_1\right)N\left(\alpha_2\right)...N\left(\alpha_q\right)
\end{aligned}
\end{equation*}
Each $N\left(\alpha_i\right)$ satisfies: either\\
$\bullet$ $N\left(\alpha_i\right) = p^2$ with $p\equiv 3 \mod 4$ prime, or\\
$\bullet$ $N\left(\alpha_i\right) = p$ with $p=2$ or $p\equiv 1 \mod 4$ prime.\\
So if $p^m$ is the largest power of $p$ dividing $n$, we find that $m$ must be even if $p\equiv 3 \mod 4$.\\
Conversely, let $n=p_1^{n_1}p_2^{n_2}...p_k^{n_k}$ be a product of distinct primes.\\
For each $p_i$: either $p_i \equiv 3 \mod 4$ and $n_i$ is even, so $p_i^{n_i} = \left(p_i^2\right)^{\frac{n_i}{2}} = N\left(p_i^{\frac{n_i}{2}}\right)$; or $p_i = 2$ or $p_i \equiv 1 \mod 4$, in which case $p_i = N\left(\alpha_i\right)$ for some $\alpha_i \in \Z[i]$, so $p_i^{n_i} = N\left(\alpha_i^{n_i}\right)$.\\
So $n$ is the norm of some $z\in\Z[i]$, so $n=N\left(z\right)=N\left(x+iy\right) = x^2+y^2$ is a sum of squares.
\end{proof}
\end{coro}
\begin{eg}
$65=5\cdot 13$.\\
Then $5=\left(2+i\right)\left(2-i\right)$\\
$13=\left(2+3i\right)\left(2-3i\right)$.\\
So $65 = N\left(\left(2+i\right)\left(2+3i\right)\right)=N\left(1+8i\right)=1^2+8^2$.\\
Also $65 = N\left(\left(2+i\right)\left(2-3i\right)\right)=N\left(7-4i\right)=7^2+4^2$.
\end{eg}
\subsection{Algebraic integers}
\begin{defi}
$\alpha\in\C$ is called an \emph{algebraic integer} if it is a root of a monic polynomial in $\Z[x]$, i.e. $\exists$ monic $f\in\Z[x]$ s.t. $f\left(\alpha\right)=0$.\\
Write $\Z[\alpha]\leq \C$ for the smallest subring containing $\alpha$.\\
In other words,$\Z[\alpha]$=$Im\left(\varphi\right)$ where $\varphi$ is defined as:
\begin{equation*}
\begin{aligned}
\varphi: &\Z[x] \to \C\\
&g\to g\left(\alpha\right)
\end{aligned}
\end{equation*}
So also $\Z[\alpha] \cong \Z[x]/I$, $I=\ker\left(\varphi\right)$.
\end{defi}
\begin{prop}
If $\alpha\in\C$ is an algebraic integer then
\begin{equation*}
\begin{aligned}
I= \ker \left(\varphi:
\begin{array}{ll}
\Z[x] &\to\C\\
f &\to f\left(\alpha\right)
\end{array}
\right)
\end{aligned}
\end{equation*}
is a principal ideal and is generated by a monic irreducible polynomial $f_\alpha\in\Z[x]$, called the \emph{minimal polynomial} of $\alpha$.
\begin{proof}
By definition there is a monic $f\in\Z[x]$ s.t. $f\left(\alpha\right)=0$. So $f\in I$ so $I\neq 0$.\\
Let $f_\alpha\in I$ be a polynomial of minimal degree. We may suppose that $f_\alpha$ is primitive by dividing by its content.\\
We want to show that $I=\left(f_\alpha\right)$ and that $f_\alpha$ is irreducible.\\
Let $h\in I$. In $\Q[x]$ we have a Euclidean algorithm, so we may write $h=f_\alpha \cdot q+r$ with $r=0$ or $\deg\left(r\right)<\deg\left(f_\alpha\right)$.\\
We may multiply by some $a\in\Z$ to clear denominators and get
\begin{equation*}
\begin{aligned}
a\cdot h = f_\alpha \cdot \left(aq\right) + \left(ar\right)
\end{aligned}
\end{equation*}
with $aq$ and $ar$ in $\Z[x]$.\\
Evaluate at $\alpha$ gives
\begin{equation*}
\begin{aligned}
ah\left(\alpha\right) = f_\alpha\left(\alpha\right)\left(aq\right)\left(\alpha\right) + \left(ar\right)\left(\alpha\right)\\
\implies 0=\left(ar\right)\left(\alpha\right)
\end{aligned}
\end{equation*}
So $\left(ar\right)\in I$.\\
As $f_\alpha\in I$ has minimal degree, we cannot have $\deg\left(r\right)=\deg\left(ar\right)<\deg\left(f_\alpha\right)$. So instead must have $r=0$.\\
So $ah = f_\alpha \cdot\left(aq\right) \in \Z[x]$.\\
Take contents of everything, get
\begin{equation*}
\begin{aligned}
a\cdot c\left(h\right) = c\left(ah\right) = c\left(f_\alpha\left(aq\right)\right) = c\left(aq\right)
\end{aligned}
\end{equation*}
as $f_\alpha$ is primitive.\\
So $a|c\left(aq\right)$, so $aq = a \bar{q}$ with $\bar{q}\in\Z[x]$ and cancelling $a$ shows $q=\bar{q}\in \Z[x]$.\\
So $h=f_\alpha \cdot q\in\left(f_\alpha\right) \triangleleft \Z[x]$. So $I=\left(f_\alpha\right)$.\\
Now we want to show that $f_\alpha$ is irreducible. We have
\begin{equation*}
\begin{aligned}
\Z[x]/\left(f_\alpha\right) = \Z[x]/\ker\left(\varphi\right) \cong Im\left(\varphi\right) = \Z[\alpha] \leq \C
\end{aligned}
\end{equation*}
$\C$ is an integral domain, so Im$\left(\varphi\right)$ is an integral domain, so $\Z[x]/\left(f_\alpha\right)$ is an integral domain.\\
So $\left(f_\alpha\right)$ is prime. So $f_\alpha$ is prime, so irreducible.
\end{proof}
\end{prop}
\begin{eg}
$\alpha = i$ is an algebraic integer with $f_\alpha = x^2 + 1$.\\
$\alpha = \sqrt{2}$ is an algebraic integer with $f_\alpha = x^2-2$.\\
$\alpha = \frac{1}{2}\left(1+\sqrt{-3}\right)$ is an algebraic integer with $f_\alpha = x^2-x+1$.\\
The polynomial $x^5 - x + d\in\Z[x]$ with $0 \neq d\in \Z$ has precisely one real root $\alpha$, which is an algebraic integer.
\end{eg}
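One of these minimal polynomials can be checked directly:
\begin{eg}
For $\alpha = \frac{1}{2}\left(1+\sqrt{-3}\right)$ we have $\alpha^2 = \frac{1}{4}\left(1+2\sqrt{-3}-3\right) = \frac{1}{2}\left(-1+\sqrt{-3}\right)$, so
\begin{equation*}
\begin{aligned}
\alpha^2 - \alpha + 1 = \frac{1}{2}\left(-1+\sqrt{-3}\right) - \frac{1}{2}\left(1+\sqrt{-3}\right) + 1 = -1+1 = 0.
\end{aligned}
\end{equation*}
Since $x^2-x+1$ is monic and has no rational root (its roots are not real), it is irreducible in $\Q[x]$, so it is indeed $f_\alpha$.
\end{eg}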
\begin{rem} (Galois theory)\\
This $\alpha$ cannot be constructed from $\Z$ using $+,-,\times,/,\sqrt[n]{ }$.
\end{rem}
\begin{lemma}
If $\alpha\in\Q$ is an algebraic integer, then $\alpha \in \Z$.
\begin{proof}
Let $f_\alpha \in \Z[x]$ be the minimal polynomial, which is irreducible.\\
In $\Q[x]$, $x-\alpha$ must divide $f_\alpha$; but by Gauss' lemma, $f_\alpha$ is irreducible in $\Q[x]$ as well, so we must have $f_\alpha = x-\alpha$ up to a unit in $\Z$ (else there would be a proper factorisation). So $\alpha \in \Z$.
\end{proof}
\end{lemma}
\subsection{Hilbert basis theorem}
A ring $R$ satisfies the \emph{ascending chain condition (ACC)} if whenever
\begin{equation*}
\begin{aligned}
I_1 \subset I_2 \subset ...
\end{aligned}
\end{equation*}
is an increasing sequence of ideals, then we have
\begin{equation*}
\begin{aligned}
I_n = I_{n+1} = I_{n+2} = ...
\end{aligned}
\end{equation*}
for some $n \in \N$.\\
A ring satisfying this condition is called Noetherian.
\begin{eg}
Any finite ring, any field, and $\Z$ or any other PID is Noetherian (see next proposition).\\
Consider $\Z[x_1,x_2,...]$. Note that
\begin{equation*}
\begin{aligned}
\left(x_1\right) \subset \left(x_1,x_2\right) \subset \left(x_1,x_2,x_3\right) \subset ...
\end{aligned}
\end{equation*}
while none of the ideals are equal. Thus $\Z[x_1,x_2,...]$ is not Noetherian.
\end{eg}
\begin{prop}
A ring $R$ is Noetherian $\iff$ every ideal of $R$ is finitely generated, i.e. $I=\left(r_1,...,r_n\right)$ for some $r_1,...,r_n\in R$ for every ideal $I \subset R$.
\begin{proof}
Suppose every ideal of $R$ is finitely generated. Given $I_1 \subset I_2 \subset ...$, consider the ideal
\begin{equation*}
\begin{aligned}
I = I_1 \cup I_2 \cup ...
\end{aligned}
\end{equation*}
We have $I=\left(r_1,...,r_n\right)$, with each $r_i \in I_{k_i}$ for some $k_i$.\\
Now let $k=\max\left(k_1,...,k_n\right)$.\\
Then $r_1,...,r_n\in I_k$, hence $I_k = I$.
On the other hand, suppose $R$ is Noetherian and suppose some ideal $I$ is not finitely generated.\\
Choose $r_1 \in I$. Then $\left(r_1\right) \neq I$ as $I$ is not finitely generated. Then choose $r_2 \in I\backslash \left(r_1\right)$. Then $\left(r_1,r_2\right)\neq I$. Then choose $r_3,r_4,...$ similarly. But now we get a chain of ideals
\begin{equation*}
\begin{aligned}
\left(r_1\right) \subset \left(r_1,r_2\right) \subset ...
\end{aligned}
\end{equation*}
while none of them is equal to any other. Contradiction. So $I$ must be finitely generated.
Alternative proof of this direction (2017 Lent): suppose $R$ is Noetherian and let $I$ be an ideal.
Choose $a_1 \in I$. If $I=(a_1)$ then done, so suppose not. Then choose $a_2 \in I \setminus (a_1)$; if $I=(a_1,a_2)$ then done, so suppose not, and so on. If this process never terminates, then we get
\begin{equation*}
\begin{aligned}
(a_1) \subsetneq (a_1,a_2) \subsetneq (a_1,a_2,a_3) \subsetneq ...
\end{aligned}
\end{equation*}
which is impossible as $R$ is Noetherian. So $I=(a_1,a_2,...,a_r)$ for some $r$.
\end{proof}
\end{prop}
\begin{thm}(Hilbert's basis theorem)\\
$R$ is Noetherian $\implies$ $R[x]$ is Noetherian.\\
(hence e.g. $\Z[x]$ is Noetherian, whence $\Z[x,y]$ is Noetherian, etc.)
\begin{proof} (Lent 2017)\\
Let $J \triangleleft R[x]$. Let $f_1 \in J$ be a polynomial of minimal degree. If $J=(f_1)$ then done, else choose $f_2 \in J \setminus (f_1)$ of minimal degree. If $J=(f_1,f_2)$ then done... Suppose this never terminates, i.e. we have $(f_1) \subsetneq (f_1,f_2) \subsetneq ... \subsetneq(f_1,f_2,f_3) \subsetneq ...$.
Let $0 \neq a_i \in R$ be the coefficient of the largest power of $X$ in $f_i$, and consider the chain of ideals $(a_1) \subset (a_1,a_2) \subset (a_1,a_2,a_3) \subset ... \triangleleft R$. As $R$ is Noetherian, this chain stabilises, i.e. there exists $m$ s.t. every $a_i$ lies in $(a_1,...,a_m)$. In particular, $a_{m+1} = \sum_{i=1}^m a_i b_i$ for some $b_i \in R$.
Let $g = \sum_{i=1}^m b_i f_i X^{\deg(f_{m+1}) - \deg(f_i)}$ (the exponents are non-negative, since the $f_i$ were chosen of minimal degree at each stage); its top term is $\sum_{i=1}^m b_i a_i X^{\deg(f_{m+1})}$, i.e. $a_{m+1} X^{\deg(f_{m+1})}$.
Note that $f_{m+1} - g$ has degree strictly smaller than that of $f_{m+1}$. But $g \in (f_1,...,f_m)$, while $f_{m+1} \not\in (f_1,...,f_m)$. So $f_{m+1} -g \not\in (f_1,...,f_m)$, yet $f_{m+1}-g \in J$, contradicting the fact that we chose $f_{m+1}$ to be of minimal degree in $J \setminus (f_1,...,f_m)$.
\end{proof}
\begin{proof} (Lent 2016)\\
Let $I$ be an ideal in $R[x]$. For $n=0,1,2,...$, let
\begin{equation*}
\begin{aligned}
I_n = \left\{r\in R:\exists f\in I \text{ with } f=rx^n+...\right\} \cup \left\{0\right\}
\end{aligned}
\end{equation*}
Then each $I_n$ is an ideal of $R$.\\
Also $I_n \subset I_{n+1} \forall n$ since $f\in I \implies xf\in I$ (as $I$ is an ideal in $R[x]$).\\
Thus $I_N=I_{N+1}=...$ for some $N$ since $R$ is Noetherian.\\
For each $0\leq n \leq N$, we have
\begin{equation*}
\begin{aligned}
I_n=\left(r_1^{\left(n\right)},r_2^{\left(n\right)},...,r_{k\left(n\right)}^{\left(n\right)}\right)
\end{aligned}
\end{equation*}
for some $r_i^{\left(n\right)} \in R$, since $R$ is Noetherian (so each $I_n$ is finitely generated).\\
For each $r_i^{\left(n\right)}$, choose an $f_i^{\left(n\right)} \in I$ with $f_i^{\left(n\right)} = r_i^{\left(n\right)} x^n+ ...$\\
$\bullet$ Claim: The polynomials $f_i^{\left(n\right)}$ ($0\leq n \leq N, 1 \leq i \leq k\left(n\right)$) generate $I$.\\
Proof of claim: Suppose not. Then choose $g\in I$ of minimum degree that is not generated by the above polynomials $f_i^{\left(n\right)}$.\\
$\bullet$ If $\deg\left(g\right) = n \leq N$: we have $g=r x^n+...$, with $r\in I_n$. So $r=\sum_i \lambda_i r_i^{\left(n\right)}$ for some $\lambda_i \in R$.\\
So $\sum_i \lambda_i f_i^{\left(n\right)} = rx^n+...$, whence $g-\sum_i \lambda_i f_i^{\left(n\right)}$ has smaller degree than $g$ (or is zero) and is also not generated by the $f_i^{\left(n\right)}$, contradicting the fact that $g$ has minimum degree.\\
$\bullet$ If $\deg\left(g\right) = n > N$: we have $g=r x^n + ...$, with $r\in I_n = I_N$, so $r=\sum_i \lambda_i r_i^{\left(N\right)}$ for some $\lambda_i\in R$.\\
So $x^{n-N} \sum_i \lambda_i f_i^{\left(N\right)} = rx^n + ...$ lies in the ideal generated by the $f_i^{\left(n\right)}$, whence $g-x^{n-N} \sum_i \lambda_i f_i^{\left(N\right)}$ has smaller degree than $g$ (or is zero) and is also not generated by the $f_i^{\left(n\right)}$. Contradiction.
\end{proof}
\end{thm}
Does $R$ Noetherian imply every subring of $R$ is Noetherian?\\
The answer is no: e.g. let $R$ be the field of fractions of $\Z[x_1,x_2,...]$ (an integral domain). Then $R$ is a field, hence Noetherian, but its subring $\Z[x_1,x_2,...]$ is not Noetherian.\\
\begin{prop}
Let $R$ be Noetherian, $I$ be an ideal in $R$. Then $R/I$ is Noetherian.
\begin{proof}
Let
\begin{equation*}
\begin{aligned}
\varphi : &R \to R/I\\
&x \to x+I
\end{aligned}
\end{equation*}
Given an ideal $J$ in $R/I$, $\varphi^{-1}\left(J\right)$ is an ideal in $R$ (by the ideal correspondence).\\
So $\varphi^{-1}\left(J\right) = \left(r_1,...,r_n\right)$ for some $r_1,...,r_n\in R$ (since $R$ is Noetherian, $\varphi^{-1}\left(J\right)$ is finitely generated).\\
Thus $J=\left(\varphi\left(r_1\right),\varphi\left(r_2\right),...,\varphi\left(r_n\right)\right)$ is finitely generated. So $R/I$ is Noetherian.
\end{proof}
\end{prop}
What about $\Z[x]$? It is not a PID, since $(2,x)$ is not principal, but by Hilbert's basis theorem it is Noetherian.\\
\begin{rem}
Let $E\subset F[x_1,x_2,...,x_n]$ be any set of polynomial equations.\\
Consider $\left(E\right) \triangleleft F[x_1,x_2,...,x_n]$. By Hilbert's basis theorem, there is a finite list $f_1,...,f_k$ s.t. $\left(E\right) = \left(f_1,...,f_k\right)$.\\
Given $\left(\alpha_1,\alpha_2,...\alpha_n\right)\in F^n$, consider
\begin{equation*}
\begin{aligned}
\varphi_\alpha: \left(
\begin{array}{ll}
F[x_1,...,x_n] &\to F\\
x_i &\to \alpha_i
\end{array}
\right)
\end{aligned}
\end{equation*}
a ring homomorphism.\\
$\left(\alpha_1,...\alpha_n\right)\in F^n$ is a solution to the equations $E$ $\iff$ $\left(E\right) \subseteq \ker \left(\varphi_\alpha\right)$ $\iff$ $\left(f_1,...,f_k\right)\subseteq \ker\left(\varphi_\alpha\right)$ $\iff$ $\left(\alpha_1,...,\alpha_n\right)$ is a common solution to $f_1,...,f_k$.
\end{rem}
\newpage
\section{Modules}
\subsection{Definitions and examples}
\begin{defi}
Let $R$ be a commutative ring. A quadruple $\left(M,+,0_M,\cdot\right)$ is a $R-$module if:\\
$\bullet$ (M1) $\left(M,+,0_M\right)$ is an abelian group;\\
$\bullet$ (M2) The operation $-\cdot-:R\times M\to M$ satisfies
\begin{equation*}
\begin{aligned}
&\left(r_1+ r_2\right)\cdot m = \left(r_1\cdot m\right)+\left(r_2\cdot m\right)\\
&r\cdot\left(m_1 + m_2\right) = \left(r\cdot m_1\right) + \left(r\cdot m_2\right)\\
&r_1\cdot\left(r_2\cdot m\right) = \left(r_1 \cdot r_2\right)\cdot m\\
&1_R \cdot m = m
\end{aligned}
\end{equation*}
\end{defi}
\begin{eg}
1) Let $F$ be a field. An $F-$module is precisely the same as a vector space over $F$.\\
2) For any ring $R$, $R^n = R\times R\times ... \times R$ is a $R-$module via
\begin{equation*}
\begin{aligned}
r\cdot\left(r_1,r_2,...,r_n\right) = \left(r\cdot r_1,r\cdot r_2,...,r\cdot r_n\right)
\end{aligned}
\end{equation*}
3) If $I\triangleleft R$ is an ideal, then it is an $R-$module via
\begin{equation*}
\begin{aligned}
r\cdot_M a = r\cdot_R a
\end{aligned}
\end{equation*}
Also, $R/I$ is a $R-$module via
\begin{equation*}
\begin{aligned}
r\cdot \left(a+I\right) = r\cdot a + I
\end{aligned}
\end{equation*}
4) A $\Z-$module is precisely the same as an abelian group. For $A$ an abelian group,
\begin{equation*}
\begin{aligned}
\left(
\begin{array}{ll}
\Z \times A &\to A\\
\left(n,a\right) &\to \left\{
\begin{array}{ll}
a+a+...+a \text{ ($n$ times)} & n>0\\
0 & n=0\\
\left(-a\right)+\left(-a\right)+...+\left(-a\right) \text{ ($-n$ times)} & n<0
\end{array}
\right.
\end{array}
\right)
\end{aligned}
\end{equation*}
5) Let $F$ be a field, $V$ a vector space on $F$, and $\alpha: V\to V$ be a linear map.\\
Then $V$ is a $F[x]-$module via
\begin{equation*}
\begin{aligned}
\left(
\begin{array}{ll}
F[x]\times V &\to V\\
\left(f,v\right) &\to (f\left(\alpha\right))\left(v\right)
\end{array}
\right)
\end{aligned}
\end{equation*}
i.e. Substitute $\alpha$ in the polynomial $f$, then act on $v$. \\
Different choices of $\alpha$ make $V$ into different $F[x]-$modules, so the module structure depends on the choice of $\alpha$.\\
6) If $\varphi: R\to S$ is a ring homomorphism, then any $S-$module $M$ may be considered as a $R-$module via
\begin{equation*}
\begin{aligned}
\left(
\begin{array}{ll}
R\times M &\to M\\
\left(r,m\right) &\to \varphi\left(r\right) \cdot m
\end{array}
\right)
\end{aligned}
\end{equation*}
\end{eg}
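A concrete instance of example 5:
\begin{eg}
Take $F=\R$, $V=\R^2$ and let $\alpha$ be given by the matrix $A = \left(\begin{matrix} 0 & -1\\ 1 & 0\end{matrix}\right)$. Then $x\cdot\left(v_1,v_2\right) = \left(-v_2,v_1\right)$, and since $A^2 = -I$ we have $\left(x^2+1\right)\cdot v = \left(A^2+I\right)v = 0$ for every $v\in V$. Choosing instead $\alpha = \mathrm{id}$ gives $x\cdot v = v$, a different $\R[x]-$module structure on the same vector space.
\end{eg}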
\begin{defi}
If $M$ is a $R-$Module, a subset $N\subset M$ is a $R-$submodule if it is a subgroup of $\left(M,+,0_M\right)$ and if $n\in N$ and $r\in R$ then $r\cdot n \in N$.\\
We write $N \leq M$.
\end{defi}
\begin{eg}
A subset of $R$ is a submodule of the $R-$module $R$ \emph{precisely} if it is an ideal.\\
A subset of an $F-$module $V$ for $F$ a field is a submodule \emph{precisely} if it is a vector subspace.
\end{eg}
\begin{defi}
If $N\subseteq M$ is a $R-$submodule, the \emph{quotient module} $M/N$ is the set of $N-$cosets in the abelian group $\left(M,+,0_M\right)$ with
\begin{equation*}
\begin{aligned}
r\cdot\left(m+N\right) = r\cdot m + N
\end{aligned}
\end{equation*}
This is well defined: if two elements $m, m'$ represent the same coset then they differ by some $n\in N$, and then $r\cdot m$ and $r\cdot m'$ differ by $r\cdot n \in N$, so they represent the same coset.
\end{defi}
\begin{defi}
A function $f:M\to N$ between $R-$modules is an \emph{$R-$module homomorphism} if it is a homomorphism of abelian groups, and satisfies
\begin{equation*}
\begin{aligned}
f\left(r\cdot m\right) = r\cdot f\left(m\right)
\end{aligned}
\end{equation*}
\end{defi}
\begin{eg}
If $F$ is a field and $V,W$ are $F-$modules (vector spaces over $F$), then an $F-$module homomorphism is precisely an $F-$linear map.
\end{eg}
\begin{thm} (First isomorphism theorem)\\
Let $f:M\to N$ be a $R-$module homomorphism. Then
\begin{equation*}
\begin{aligned}
\ker\left(f\right) = \left\{m\in M| f\left(m\right)=0\right\} \leq M
\end{aligned}
\end{equation*}
(submodule),
\begin{equation*}
\begin{aligned}
Im\left(f\right)=\left\{n\in N| \exists m\in M \text{ s.t. } n=f\left(m\right)\right\} \leq N
\end{aligned}
\end{equation*}
Moreover, $M/\ker\left(f\right) \cong$ Im$\left(f\right)$.
\end{thm}
\begin{thm} (Second isomorphism theorem)\\
Let $A,B \leq M$. Then\\
\begin{equation*}
\begin{aligned}
A+B=\left\{m\in M|\exists a\in A, b\in B \text{ s.t. } m=a+b\right\} \leq M
\end{aligned}
\end{equation*}
(a submodule), and
\begin{equation*}
\begin{aligned}
A \cap B \leq M
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
(A+B)/A \cong B/(A\cap B).
\end{aligned}
\end{equation*}
\end{thm}
\begin{thm} (Third isomorphism theorem)\\
If $N\leq L \leq M$, then
\begin{equation*}
\begin{aligned}
M/L \cong (M/N)/(L/N).
\end{aligned}
\end{equation*}
In addition, there is a submodule correspondence between submodules of $M/N$ and submodules of $M$ which contain $N$.
\end{thm}
\begin{defi}
Let $M$ be a $R-$module, $m\in M$. The \emph{annihilator} of $m$ is
\begin{equation*}
\begin{aligned}
\Ann\left(m\right) = \left\{r\in R | r\cdot m=0\right\}
\end{aligned}
\end{equation*}
The annihilator of $M$ is
\begin{equation*}
\begin{aligned}
\Ann\left(M\right) = \bigcap_{m\in M} \Ann\left(m\right) = \left\{r\in R| r\cdot m = 0 \forall m\in M\right\}
\end{aligned}
\end{equation*}
\end{defi}
\begin{rem}
$\Ann\left(m\right)$ is an ideal of $R$ (so $\Ann\left(M\right)$ is too).
\end{rem}
\begin{defi}
If $M$ is a $R-$module and $m\in M$, the \emph{submodule generated by $m$} is
\begin{equation*}
\begin{aligned}
R_m = \left\{r\cdot m \in M | r\in R\right\}
\end{aligned}
\end{equation*}
\end{defi}
Consider the $R-$module homomorphism
\begin{equation*}
\begin{aligned}
\varphi:\left(
\begin{array}{ll}
R &\to M\\
r &\to r\cdot m
\end{array}
\right)
\end{aligned}
\end{equation*}
Here
\begin{equation*}
\begin{aligned}
&R_m = Im\left(\varphi\right)\\
&\Ann\left(m\right) = \ker\left(\varphi\right)
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
R_m \cong R/\Ann\left(m\right)
\end{aligned}
\end{equation*}
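A small example of this isomorphism:
\begin{eg}
Take $R=\Z$, $M=\Z/6$ and $m=2$. Then $\Ann\left(m\right) = \left\{r\in\Z| 2r = 0 \text{ in } \Z/6\right\} = 3\Z$, and $R_m = \left\{0,2,4\right\} \cong \Z/3\Z = \Z/\Ann\left(m\right)$.
\end{eg}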
\begin{defi}
Say an $R-$module $M$ is \emph{finitely generated} if there are elements $m_1,...,m_k$ s.t.
\begin{equation*}
\begin{aligned}
M&=R_{m_1} + R_{m_2} + ... + R_{m_k}\\
&= \left\{r_1 m_1 + r_2 m_2 + ... + r_k m_k|r_1,r_2,...,r_k\in R\right\}
\end{aligned}
\end{equation*}
\end{defi}
\begin{lemma}
A $R-$module $M$ is finitely generated if and only if there is a surjective $R-$module homomorphism
\begin{equation*}
\begin{aligned}
f: R^k \to M
\end{aligned}
\end{equation*}
\begin{proof}
If $M=R_{m_1}+...+R_{m_k}$, define
\begin{equation*}
\begin{aligned}
f:\left(
\begin{array}{ll}
R^k &\to M\\
\left(r_1,...,r_k\right) &\to r_1 m_1 + r_2 m_2 + ... + r_k m_k
\end{array}
\right)
\end{aligned}
\end{equation*}
This is a $R-$module map. This \emph{is} surjective by the definition of $M$.\\
Conversely, given a surjection $f:R^k \to M$, let
\begin{equation*}
\begin{aligned}
m_i = f\left(0,0,...,0,1,0,...,0\right)
\end{aligned}
\end{equation*}
where the 1 is in the $i^{th}$ position.\\
Let $m\in M$. As $f$ is surjective, $m=f\left(r_1,r_2,...,r_k\right)$ for some $r_1,...,r_k$.\\
Then write
\begin{equation*}
\begin{aligned}
f\left(r_1,...,r_k\right) &= f\left(\left(r_1,0,...,0\right) + \left(0,r_2,0,...,0\right)+...+\left(0,0,...,0,r_k\right)\right)\\
&=f\left(r_1\cdot 1,0,...,0\right)+f\left(0,r_2\cdot 1,0,...,0\right) + ... + f\left(0,...,0,r_k\cdot 1\right)\\
&=r_1 f\left(1,0,...,0\right) + r_2 f\left(0,1,0,...,0\right) + ... + r_k f\left(0,...,0,1\right)\\
&=r_1 m_1 + r_2 m_2 + ... + r_k m_k
\end{aligned}
\end{equation*}
So the $m_i$'s generate $M$.
\end{proof}
\end{lemma}
\begin{coro}
If $N\leq M$ and $M$ is finitely generated, then $M/N$ is finitely generated.
\begin{proof}
$M$ is finitely generated\\
$\implies$ there is a surjection $f:R^k \to M$\\
$\implies$ the composite $R^k \to M \to M/N$ (where $M\to M/N$ is $m\to m+N$) is a surjection, so $M/N$ is finitely generated.
\end{proof}
\end{coro}
\begin{eg}
A submodule of a finitely generated module need not be finitely generated.\\
Let
\begin{equation*}
\begin{aligned}
R = \C[x_1,x_2,x_3,...]
\end{aligned}
\end{equation*}
Let $M=R$, which is finitely generated (by $1$). The submodule $I=\left(x_1,x_2,...\right)\triangleleft R$ is not finitely generated as a module, because for an ideal being finitely generated as a module is the same as being finitely generated as an ideal, and this ideal is not.
\end{eg}
\begin{eg}
For $\alpha\in \C$, $\Z[\alpha]$ is a finitely generated $\Z$-module $\iff$ $\alpha$ is an algebraic integer (see example sheet).
\end{eg}
\subsection{Direct sums and free modules}
\begin{defi}
If $M_1,M_2,...,M_k$ are $R-$modules, the \emph{direct sum}
\begin{equation*}
\begin{aligned}
M_1 \oplus M_2 \oplus ... \oplus M_k
\end{aligned}
\end{equation*}
is the set
\begin{equation*}
\begin{aligned}
M_1 \times M_2 \times ... \times M_k
\end{aligned}
\end{equation*}
with addition
\begin{equation*}
\begin{aligned}
\left(m_1,m_2...,m_k\right) + \left(m'_1,m'_2,...,m'_k\right) = \left(m_1+m'_1,...,m_k+m'_k\right)
\end{aligned}
\end{equation*}
and $R$-module structure
\begin{equation*}
\begin{aligned}
r\cdot \left(m_1,...,m_k\right) = \left(r\cdot m_1,r\cdot m_2,...,r\cdot m_k\right)
\end{aligned}
\end{equation*}
\end{defi}
\begin{eg}
What we have been calling $R^n$ is $R\oplus R\oplus...\oplus R$ ($n$ times).
\end{eg}
\begin{defi}
Let $m_1,m_2,...,m_k\in M$. The set $\left\{m_1,...,m_k\right\}$ is \emph{independent} if
\begin{equation*}
\begin{aligned}
\sum_{i=1}^k r_i m_i = 0 \implies r_1 = r_2 = ... = r_k = 0
\end{aligned}
\end{equation*}
\end{defi}
\begin{defi}
A subset $S \subset M$ \emph{generates $M$ freely} if\\
1) $S$ generates $M$;\\
2) Any function $\psi: S\to N$ to a $R-$module extends to a $R-$module map $\theta: M\to N$.\\
If $\theta_1$ and $\theta_2$ are two such extensions, consider $\theta_1 - \theta_2 : M\to N$. Then $S\subseteq \ker\left(\theta_1 - \theta_2\right) \leq M$. So the submodule generated by $S$ lies in $\ker\left(\theta_1 - \theta_2\right)$ too. But 1) says $S$ generates $M$. So $M=\ker\left(\theta_1 - \theta_2\right)$. So $\theta_1 = \theta_2$, i.e. the extension is unique.\\
A $R-$module freely generated by some subset $S\subset M$ is called \emph{free}, and $S$ is called a \emph{basis}.
\end{defi}
\begin{prop}
For a finite subset $S=\left\{m_1,m_2,...,m_k\right\} \subset M$, the following are equivalent:\\
1) $S$ generates $M$ freely;\\
2) $S$ generates $M$ and the set $S$ is independent;\\
3) Every element of $M$ is \emph{uniquely} expressible as
\begin{equation*}
\begin{aligned}
r_1 m_1 + r_2 m_2 + ... + r_k m_k
\end{aligned}
\end{equation*}
for some $r_i \in R$.
\begin{proof}
$\bullet$ 1) $\implies$ 2):\\
Let $S$ generate $M$ freely.\\
If $S$ is \emph{not} independent, we have
\begin{equation*}
\begin{aligned}
0=r_1 m_1 + ... + r_k m_k
\end{aligned}
\end{equation*}
with some $r_j \neq 0$.\\
Let
\begin{equation*}
\begin{aligned}
\psi: \left(
\begin{array}{ll}
S \to &R\\
m_j \to &1_R\\
m_i \to &0 \text{ } (i \neq j)
\end{array}
\right)
\end{aligned}
\end{equation*}
a function.\\
As $S$ generates $M$ freely, this extends to a $R-$module homomorphism $\theta: M\to R$. Thus
\begin{equation*}
\begin{aligned}
0=\theta\left(0\right) &= \theta\left(r_1 m_1 + r_2 m_2 + ... + r_k m_k\right)\\
&= r_1 \theta\left(m_1\right) + ... + r_k \theta \left(m_k\right)\\
&= r_j\cdot 1_R \in R
\end{aligned}
\end{equation*}
a contradiction as we supposed $r_j \neq 0$.
The remaining steps are just as in Linear Algebra.
\end{proof}
\end{prop}
\begin{eg}
The set $S=\left\{2,3\right\} \subset \Z$ generates $\Z$, but \emph{not} freely, as $3\cdot 2 + \left(-2\right) \cdot 3 = 0$. So $S$ is not independent, so $S$ doesn't generate $\Z$ freely. Also $\left\{2\right\}$ and $\left\{3\right\}$ do \emph{not} generate $\Z$.
\end{eg}
\begin{eg}
The $\Z$-module $\Z/2$ is not free.\\
Its generating sets include $\left\{1\right\}$ and $\left\{0,1\right\}$.\\
1) for $\left\{1\right\}$: Let
\begin{equation*}
\begin{aligned}
\psi: \left(
\begin{array}{ll}
\left\{1\right\} &\to \Z\\
1 &\to 1
\end{array}
\right)
\end{aligned}
\end{equation*}
if $\left\{1\right\}$ generated $\Z/2$ freely, $\psi$ would extend to a $\Z-$module homomorphism
\begin{equation*}
\begin{aligned}
\theta: \left(
\begin{array}{ll}
\Z/2 &\to \Z\\
1 &\to 1\\
0=1+1 &\to 1+1
\end{array}
\right)
\end{aligned}
\end{equation*}
But this is a contradiction, since a homomorphism must send $0$ to $0$, while here $\theta\left(0\right)=\theta\left(1+1\right)=1+1=2\neq 0$. So $\left\{1\right\}$ does not generate $\Z/2$ freely.\\
The second case is essentially the same.
\end{eg}
\begin{lemma}
If $S=\{m_1,...,m_k\} \subset M$ is freely generated, then $M \cong R^k$ as an $R-$module.
\begin{proof}
Let $f:R^k \to M$ by $(r_1,...,r_k) \to \sum r_i m_i$ as $R-$module map. It is surjective as $\{m_i\}$ generate $M$, and is injective as the $m_i$ are independent.
\end{proof}
\end{lemma}
\begin{defi}
If $M$ is a finitely generated $R-$module, we have shown that there is a surjective $R-$module homomorphism $\varphi: R^k \to M$.\\
We call $\ker\left(\varphi\right)$ the \emph{relation module} for these generators.\\
Now as $M \cong R^k/\ker\left(\varphi\right)$, knowing $M$ is equivalent to knowing the relation module.\\
We say $M$ is \emph{finitely presented} if, in addition, $\ker\left(\varphi\right)$ is finitely generated.\\
More precisely, if $\left\{m_1, m_2, ... ,m_k\right\}$ generate $M$ and $\left\{n_1,n_2,...,n_l\right\}$ generate $\ker\left(\varphi\right)$, then each $n_i = \left(r_{i1},r_{i2},...,r_{ik}\right)$ corresponds to the relation
\begin{equation*}
\begin{aligned}
r_{i1} m_1 + r_{i2} m_2 + ... + r_{ik} m_k = 0
\end{aligned}
\end{equation*}
in $M$.
\end{defi}
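For instance:
\begin{eg}
The $\Z-$module $\Z/2$ is generated by $m_1 = 1$, via $\varphi:\Z\to\Z/2$, $r\to r+2\Z$. Here $\ker\left(\varphi\right) = 2\Z = \left(2\right)$ is generated by $n_1 = 2$, corresponding to the single relation $2\cdot m_1 = 0$. So $\Z/2$ is finitely presented.
\end{eg}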
\begin{prop} (Invariance of dimension (rank))\\
Let $R$ be a non-zero ring. Then if $R^n \cong R^m$ as a $R-$module, we must have $n=m$.
\begin{proof}
We know this is true if $R$ is a field (since they are vector spaces).\\
General construction: let $I\triangleleft R$ be an ideal and $M$ a $R-$module. Define
\begin{equation*}
\begin{aligned}
IM = \left\{ a_1 m_1 + a_2 m_2 + ... + a_l m_l | a_i\in I, m_i\in M\right\}
\end{aligned}
\end{equation*}
a submodule of $M$, so $M/IM$ is a $R-$module.\\
If $b\in I$ then $b\cdot \left(m+I M\right) = b\cdot m + IM = 0 + IM$.\\
So $M/IM$ is a $R/I$-module via
\begin{equation*}
\begin{aligned}
\left(r+I\right) \cdot \left(m+IM\right) = r\cdot m + IM
\end{aligned}
\end{equation*}
General property: every non-zero ring has a maximal ideal.\\
Observation: an ideal $I\triangleleft R$ is proper $\iff $ $1_R \not\in I$.\\
So an increasing union of proper ideals is proper, and Zorn's lemma therefore gives a maximal ideal.\\
Back to the proof: choose a maximal ideal $I\triangleleft R$.\\
If $R^n \cong R^m$, then $R^n/IR^n \cong R^m / IR^m$, i.e. $\left(R/I\right)^n \cong \left(R/I\right)^m$. But $I$ is maximal, so $R/I$ is a field. So this is an isomorphism between vector spaces over the field $R/I$. So $n=m$ by the usual dimension theory from linear algebra.
\end{proof}
\end{prop}
\subsection{Matrices over Euclidean domains}
Until further notice, $R$ is a Euclidean domain; write $\varphi: R\backslash\left\{0\right\} \to \Z_{\geq 0}$ for its Euclidean function.\\
We know what $\gcd(a,b)$ is for $a,b\in R$, and that it is unique up to associates. The Euclidean algorithm using $\varphi$ shows that $\gcd(a,b) = ax+by$ for some $x,y \in R$.
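For instance, in $\Z$ with $\varphi\left(n\right)=|n|$ we have $\gcd\left(6,10\right) = 2 = 2\cdot 10 + \left(-3\right)\cdot 6$, and in $\Q[x]$ with $\varphi = \deg$ we have $\gcd\left(x^2-1, x^2-2x+1\right) = x-1 = \frac{1}{2}\left(\left(x^2-1\right)-\left(x^2-2x+1\right)\right)$.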
\begin{defi}
\emph{Elementary row operations} on a $m\times n$ matrix $A$ with entries in $R$ are\\
(ER1) Add $c\in R$ times the $i^{th}$ row to the $j^{th}$. This may be done by multiplying $A$ on the left by
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
1 & & & & \\
&1& & c &\\
& &1& & \\
& & & ... & \\
& & & &1
\end{matrix}
\right)
\end{aligned}
\end{equation*}
Where $c$ is in the $j^{th}$ row and the $i^{th}$ column.\\
(ER2) Swap the $i^{th}$ and the $j^{th}$ rows. This is done using
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
1 & & & & & \\
&0& &1 & &\\
& &1& & &\\
&1& & 0& &\\
& & & & ...&\\
& & & & & 1
\end{matrix}
\right)
\end{aligned}
\end{equation*}
Where the two 1 are in the $(i,j)$ entry and the $(j,i)$ entry.\\
(ER3) Multiply the $i^{th}$ row by a \emph{unit} $c\in R$, using
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
1 & & & & \\
&1& & &\\
& &c& & \\
& & & ... & \\
& & & &1
\end{matrix}
\right)
\end{aligned}
\end{equation*}
Where $c$ is in the $(i,i)$ entry.
\end{defi}
We have analogues for column operations, called (EC1), (EC2), (EC3).
\begin{defi}
$A$ and $B$ are \emph{equivalent} if they differ by a sequence of elementary row or column operations.\\
If $A$ and $B$ are equivalent, there are invertible (square) matrices $P$,$Q$ s.t. $B=QAP^{-1}$.
\end{defi}
\begin{thm} (Smith normal form)\\
An $m\times n$ matrix $A$ over a Euclidean domain $R$ is equivalent to $\Diag(d_1,d_2,...,d_r,0,...,0)$ with the $d_i$ all non-zero and
\begin{equation*}
\begin{aligned}
d_1 | d_2 | ... | d_r
\end{aligned}
\end{equation*}
The $d_k$ are called \emph{invariant factors} of $A$.
\begin{proof}
If $A=0$ we are done. So suppose $A \neq 0$.\\
Then some entry $A_{ij}\neq 0$. Swapping the $i^{th}$ and first rows, then the $j^{th}$ and first columns, we may arrange that $A_{11} \neq 0$.\\
Try to reduce $\varphi\left(A_{11}\right)$ as much as possible:\\
Case 1) If there is an $A_{1j}$ not divisible by $A_{11}$, use division with remainder to write
\begin{equation*}
\begin{aligned}
A_{1j} = q \cdot A_{11} + r
\end{aligned}
\end{equation*}
with $\varphi\left(r\right) < \varphi\left(A_{11}\right)$.\\
Subtract $q$ times the first column from the $j^{th}$ column. In position $\left(1,j\right)$, we now have $r$. Swapping $j^{th}$ and $1^{st}$ columns puts $r$ in position $\left(1,1\right)$, and so $\varphi\left(r\right) < \varphi\left(A_{11}\right)$.\\
Case 2) If there is an $A_{i1}$ not divisible by $A_{11}$, do the analogous thing with rows to reduce $\varphi\left(A_{11}\right)$.\\
After finitely many applications of Cases 1 and 2, we get that $A_{11}$ divides all $A_{1j}$ and all $A_{i1}$.\\
Then subtracting appropriate multiples of the first column from all others makes $A_{1j} = 0$ for all $j$ apart from the first one. Do the same with rows. Then we have
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
d&0&0&...&0\\
0& & & &\\
0& & & &\\
...& & &C &\\
0& & & &
\end{matrix}
\right)
\end{aligned}
\end{equation*}
Case 3) If there is an entry of $C$ not divisible by $d$, say $A_{ij}$ with $i>1, j>1$, then write $A_{ij} = qd + r$, with $\varphi\left(r\right) < \varphi\left(d\right)$.\\
Now add column 1 to column $j$, subtract $q$ times row 1 from row $i$, swap row $i$ with row 1, and swap column $j$ with column 1. Then the $\left(1,1\right)$ entry is $r$, and $\varphi\left(r\right) < \varphi\left(d\right)$.\\
But now the zeroes are messed up. So do case 1 and case 2 if necessary to get
\begin{equation*}
\begin{aligned}
\left(\begin{matrix}
d'&0&0&...&0\\
0& & & &\\
0& & & &\\
...& & & C'&\\
0& & & &
\end{matrix}\right)
\end{aligned}
\end{equation*}
But now with $\varphi\left(d'\right) \leq \varphi\left(r\right) < \varphi\left(d\right)$.\\
Since case 3 strictly decreases $\varphi\left(d\right)$, it can only happen finitely many times.\\
Therefore, we arrive at
\begin{equation*}
\begin{aligned}
\left(\begin{matrix}
d&0&0&...&0\\
0& & & &\\
0& & & &\\
...& & &C &\\
0& & & &
\end{matrix}\right)
\end{aligned}
\end{equation*}
such that $d$ divides \emph{every} entry of $C$ (the process only stops once no entry of $C$ fails to be divisible by $d$).\\
Now apply the entire process to $C$. Note that row and column operations on $C$ produce $R$-linear combinations of its entries, so every entry stays divisible by $d$ throughout. We therefore end up with a diagonal matrix with the claimed divisibility.
\end{proof}
\end{thm}
\begin{eg}
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
3&7&4\\
1&-1&2\\
3&5&1
\end{matrix}
\right)
\to
\left(
\begin{matrix}
1&-1&2\\
3&7&4\\
3&5&1
\end{matrix}
\right)
\to
\left(
\begin{matrix}
1&0&0\\
3&10&-2\\
3&8&-5
\end{matrix}
\right)
\to
\left(
\begin{matrix}
1&0&0\\
0&10&-2\\
0&8&-5
\end{matrix}
\right)
\to
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\left(
\begin{matrix}
1&0&0\\
0&2&10\\
0&5&8
\end{matrix}
\right)
\to
\left(
\begin{matrix}
1&0&0\\
0&2&10\\
0&1&-12
\end{matrix}
\right)
\to
\left(
\begin{matrix}
1&0&0\\
0&1&-12\\
0&2&10
\end{matrix}
\right)
\end{aligned}
\to
\left(
\begin{matrix}
1&0&0\\
0&1&0\\
0&0&34
\end{matrix}
\right)
\end{equation*}
\end{eg}
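\begin{rem}
As a quick sanity check on this computation: elementary row and column operations change the determinant only by a unit (here $\pm 1$, since we are over $\Z$), and indeed the original matrix has determinant $34 = 1\cdot 1\cdot 34$, matching the diagonal form obtained.
\end{rem}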
To study the uniqueness of the invariant factors (the $d_k$'s) of a matrix $A$, we will consider \emph{minors}:
\begin{defi}
A \emph{$k\times k$ minor} of a matrix $A$ is the determinant of a $k\times k$ sub-matrix of $A$ (a matrix found by removing all but $k$ rows and all but $k$ columns).\\
For a matrix $A$, the \emph{$k^{th}$ fitting ideal}, written $\Fit_k\left(A\right) \triangleleft R$, is the ideal generated by the set of all $k\times k$ minors of $A$.
\end{defi}
\begin{lemma}
If $A$ and $B$ are equivalent matrices, then
\begin{equation*}
\begin{aligned}
\Fit_k\left(A\right) = \Fit_k\left(B\right)
\end{aligned}
\end{equation*}
for all $k$.
\begin{proof}
We just show that changing $A$ by the elementary row operations (or the column versions) doesn't change $\Fit_k\left(A\right)$. It suffices to consider the row operations, as $\Fit_k\left(A\right) = \Fit_k\left(A^T\right)$.\\
For (ER1): Fix a $k\times k$ sub-matrix $C$ of $A$. Let $B$ be the result of adding $c$ times the $i^{th}$ row to the $j^{th}$ row.\\
If the $j^{th}$ row is outside of $C$, then the minor is unchanged.\\
If $i^{th}$ and $j^{th}$ row are \emph{in} $C$, then the sub-matrix changes by a row operation. But we know from linear algebra that a row operation doesn't change the determinant.\\
If $j^{th}$ row is in $C$ but the $i^{th}$ row is not, then $C$ is changed to $C'$ with $j^{th}$ row equal to
\begin{equation*}
\begin{aligned}
\left(C_{j1} + cf_1, C_{j2} + cf_2, ... , C_{jk} + cf_k\right)
\end{aligned}
\end{equation*}
Where $\left(f_1, f_2, ... , f_k\right)$ is the corresponding part of the $i^{th}$ row of $A$.\\
Computing $\det\left(C'\right)$ by linearity of the determinant in the $j^{th}$ row, we get $\det\left(C'\right) = \det\left(C\right) + c\det\left(C''\right)$, where $C''$ is obtained from $C$ by replacing its $j^{th}$ row with $\left(f_1, f_2,...,f_k\right)$; both $\det\left(C\right)$ and $\det\left(C''\right)$ are (up to sign) $k\times k$ minors of $A$.
So $\det \left(C'\right) \in \Fit_k\left(A\right)$.\\
(ER2) and (ER3) follow by standard properties of swapping rows or multiplying rows on determinants.\\
So $\Fit_k\left(B\right) \leq \Fit_k\left(A\right)$. The reverse inclusion also holds, since row operations are invertible. So they are equal.
\end{proof}
\end{lemma}
\begin{rem}
If $B=\Diag\left(d_1,d_2,...,d_r,0,...,0\right)$ is a matrix in Smith Normal Form, then for $k \leq r$
\begin{equation*}
\begin{aligned}
\Fit_k \left(B\right) = \left(d_1 d_2 \cdots d_k\right)
\end{aligned}
\end{equation*}
and $\Fit_k\left(B\right) = 0$ for $k > r$. Indeed, any non-zero $k\times k$ minor of $B$ is a product $d_{i_1} \cdots d_{i_k}$ with $i_1 < ... < i_k$, and $d_1 \cdots d_k$ divides each such product.
\end{rem}
\begin{coro}
If $A$ has Smith Normal Form $\Diag\left(d_1,d_2,...,d_r,0,...,0\right)$ then $\left(d_1 d_2 \cdots d_k\right) = \Fit_k\left(A\right)$ for $k \leq r$, so each $d_k$ is unique up to associates.
\end{coro}
\begin{eg}
Consider
\begin{equation*}
\begin{aligned}
\left(\begin{matrix}
2&0\\
0&3
\end{matrix}\right) = A
\end{aligned}
\end{equation*}
Then
\begin{equation*}
\begin{aligned}
\Fit_1\left(A\right) = \left(2,3\right) = \left(1\right)
\end{aligned}
\end{equation*}
So $d_1 = \pm 1$,
\begin{equation*}
\begin{aligned}
\Fit_2\left(A\right) = \left(6\right)
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
d_1 d_2 = \pm 6 \implies d_2 = \pm 6
\end{aligned}
\end{equation*}
So
\begin{equation*}
\begin{aligned}
\left(\begin{matrix}
1& 0\\
0& 6
\end{matrix}\right)
\end{aligned}
\end{equation*}
is a Smith Normal Form for $A$.
\end{eg}
\begin{lemma}
Let $R$ be a Euclidean Domain. Any submodule of $R^m$ is generated by at most $m$ elements.
\begin{proof}
Let $N\leq R^m$ be a submodule. Consider the ideal
\begin{equation*}
\begin{aligned}
I= \left\{ r \in R | \left(r, r_2, ..., r_m\right) \in N \text{ for some } r_2, ..., r_m \in R\right\}
\end{aligned}
\end{equation*}
As $R$ is a ED, it is also a PID. So $I=\left(a\right)$ for some $a\in R$.\\
Choose $n=\left(a, a_2, ..., a_m\right) \in N$ (possible since $a \in I$).\\
For $\left(r_1, r_2,..., r_m\right) \in N$, we know $a|r_1$, so $r_1 = r \cdot a$ for some $r \in R$, and
\begin{equation*}
\begin{aligned}
\left(r_1, r_2, ..., r_m\right) - r\left(a, a_2, ..., a_m\right) = \left(0, r_2-ra_2, ..., r_m-ra_m\right)
\end{aligned}
\end{equation*}
This lies in $N' = N \cap \left(\left\{0\right\} \times R^{m-1}\right) \leq R^{m-1}$.\\
Then by induction we can suppose that there are $n_2, ... n_m \in N'$ generating $N'$. Thus
\begin{equation*}
\begin{aligned}
\left(r_1,...,r_m\right)
\end{aligned}
\end{equation*}
lies in the submodule generated by $n,n_2,...,n_m$. Since $\left(r_1,...,r_m\right) \in N$ was arbitrary, we conclude that $n,n_2,...,n_m$ generate $N$.
\end{proof}
\end{lemma}
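\begin{eg}
For instance, the submodule $N = \left\{ \left(x,y\right) \in \Z^2 \;|\; x+y \text{ even}\right\} \leq \Z^2$ is generated by the two elements $\left(1,1\right)$ and $\left(0,2\right)$: in the notation of the proof, $I = \Z = (1)$, we may take $n = \left(1,1\right)$, and $N' = N \cap \left(\left\{0\right\}\times\Z\right)$ is generated by $\left(0,2\right)$.
\end{eg}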
(missing 0.5 lecture?)
\begin{eg}
Let $R = \Z$ (a ED), and let $A$ be the abelian group (=$\Z-$module) generated by $a,b,c$, subject to $2a+3b+c=0$, $a+2b=0$ and $5a+6b+7c=0$.
Thus $A = \Z^3/N$ where $N \leq \Z^3$ generated by $(2,3,1)^T,(1,2,0)^T,(5,6,7)^T$.
Now, putting $M=\left(\begin{matrix}
2 & 1 & 5\\
3 & 2 & 6\\
1 & 0 & 7
\end{matrix}\right)$ into Smith Normal Form, we get $\Diag(1,1,3)$. To see this, we just calculate the fitting ideals: $\Fit_1(M) = (1)$, $\Fit_2(M)=(1)$ and $\Fit_3(M)=\left(\det(M)\right) = (3)$.
After changing basis, $N$ is generated by $(1,0,0),(0,1,0),(0,0,3)$. So $A \cong \Z/3$.
\end{eg}
\subsubsection{Structure theorem for finitely-generated abelian groups}
Any f.g. abelian group is isomorphic to $$C_{d_1} \times C_{d_2} \times ... \times C_{d_r} \times C_\infty \times C_\infty \times ... \times C_\infty$$ with $d_1|d_2|...|d_r$.
\begin{proof}
Apply classification of f.g. modules to the ED $R=\Z$, and note $\Z/(d)=C_d$ and $\Z/(0) = C_\infty$.
\end{proof}
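\begin{eg}
For example, the abelian groups of order $36$ are, up to isomorphism,
\begin{equation*}
\begin{aligned}
C_{36}, \quad C_2 \times C_{18}, \quad C_3 \times C_{12}, \quad C_6 \times C_6,
\end{aligned}
\end{equation*}
corresponding to the possible chains of invariant factors $36$; $2\,|\,18$; $3\,|\,12$; $6\,|\,6$.
\end{eg}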
\begin{lemma}
Let $R$ be a ED, $a,b \in R$ with $\gcd(a,b) = 1$. Then $R/(ab)\cong R/(a) \oplus R/(b)$.
\begin{proof}
Consider the $R-$module homomorphism
\begin{equation*}
\begin{aligned}
\phi: &R/(a) \oplus R/(b) &\to &R/(ab)\\
&(r_1+(a),r_2+(b)) &\to &(br_1+ar_2+(ab))
\end{aligned}
\end{equation*}
As $\gcd(a,b) = 1$, $(a,b) = (1)$. So $1=xa+yb$ for some $x,y \in R$. So for $r \in R$, we get $r= rxa+ryb$. So
\begin{equation*}
\begin{aligned}
r+(ab) = rxa+ryb+(ab) = \phi(ry+(a),rx+(b))
\end{aligned}
\end{equation*}
So $\phi$ is onto.
Now we also have to deal with injectivity (since $R/(ab)$ is not necessarily finite). If $\phi(r_1+(a),r_2+(b))=0+(ab)$, then $br_1+ar_2 \in (ab)$. Thus $a|br_1+ar_2$, so $a|br_1$; but $\gcd(a,b)=1$, so $a|r_1$, i.e. $r_1+(a) = 0+(a)$. Similarly $b|r_2$, so $r_2+(b) = 0+(b)$. Hence $\ker(\phi) = 0$ and $\phi$ is an isomorphism.
\end{proof}
\end{lemma}
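\begin{eg}
For example, with $R=\Z$, $a=2$, $b=3$ we get $\Z/(6) \cong \Z/(2) \oplus \Z/(3)$. Explicitly, $1 = (-1)\cdot 2 + 1\cdot 3$, and the map of the lemma is $\phi(r_1+(2), r_2+(3)) = 3r_1 + 2r_2 + (6)$.
\end{eg}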
\subsubsection{Primary decomposition theorem}
Let $R$ be a ED, $M$ a f.g. $R-$module. Then $M \cong N_1 \oplus ... \oplus N_t$ with each $N_i$ either equal to $R$, or to $R/(p^n)$ for some prime $p \in R$ and some $n \geq 1$.
\begin{proof}
Note that if $d=p_1^{n_1}...p_k^{n_k}$ with $p_i \in R$ \emph{distinct} primes, then the previous lemma shows that $R/(d) \cong R/(p_1^{n_1}) \oplus ... \oplus R/(p_k^{n_k})$. Plugging this into the usual classification of f.g. modules, we get the result.
\end{proof}
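\begin{eg}
For instance, $\Z/(12) \cong \Z/(4) \oplus \Z/(3)$: the cyclic group $C_{12}$ decomposes into the primary pieces $C_4$ and $C_3$.
\end{eg}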
\subsection{Modules over $F[X]$, and normal forms for matrices}
For any field $F$, $F[X]$ is a ED. So the results of the last section apply.
If $V$ is a vector space over $F$ and $\alpha:V \to V$ an endomorphism, then we have
\begin{equation*}
\begin{aligned}
&F[X]\times V &\to &V\\
&(f,v) &\to &f(\alpha)(v)
\end{aligned}
\end{equation*}
which makes $V$ into a $F[X]-$module, call it $V_\alpha$ (see section 3.1).
Lemma: if $V$ is finite-dimensional, then $V_\alpha$ is finitely-generated as a $F[X]-$module (indeed, any $F$-basis of $V$ already generates $V_\alpha$ over $F[X]$).
\begin{eg}
1) Suppose $V_\alpha \cong F[X]/(X^r)$ as a $F[X]-$module. This has $F-$basis $1,X,X^2,...,X^{r-1}$, and the action of $\alpha$ on $V$ corresponds to multiplication by $X$.
So in this basis, $\alpha$ has matrix with $A_{(i+1),i}=1$ and all other entries 0.
2) Suppose $V_\alpha \cong F[X] / \left(\left(X-\lambda\right)^r\right)$ as a $F[X]-$module. Consider $\beta = \alpha-\lambda\,\mathrm{Id}$, then
\begin{equation*}
\begin{aligned}
V_\beta \cong F[Y] / (Y^r)
\end{aligned}
\end{equation*}
as a $F[Y]-$module, where $Y = X - \lambda$. So by (1), $V$ has a basis in which $\beta$ is given by the matrix above. So $\alpha$ is given by $\lambda I + A$, where $A_{(i+1),i}=1$ and all other entries are 0.
3) Suppose $V_\alpha \cong F[X]/(f)$ with $f=a_0+a_1X+...+a_{r-1}X^{r-1}+X^r$. Then $1,X,...,X^{r-1}$ is a $F-$basis, and in this basis, $\alpha$ is given by the matrix $A$ of example (1) with its last column replaced by $-a_0,-a_1,...,-a_{r-1}$. This matrix is called the \emph{companion matrix} of $f$ and is written $C(f)$.
\end{eg}
\subsubsection{Rational canonical form theorem}
Let $\alpha:V \to V$ be a linear map, with $V$ a finite-dimensional vector space over $F$. Regarding $V$ as the $F[X]-$module $V_\alpha$, we have
\begin{equation*}
\begin{aligned}
V_\alpha \cong F[X]/(d_1) \oplus ... \oplus F[X]/(d_r)
\end{aligned}
\end{equation*}
with $d_1 | d_2 | ... | d_r$. Thus there is a basis of $V$ in which $\alpha$ is given by the block-diagonal matrix $\Diag(C(d_1),C(d_2),...,C(d_r))$. To prove this, simply apply the classification of f.g. modules over $F[X]$ (an ED), and note that no copies of $F[X]$ itself can appear, as $F[X]$ has infinite dimension over $F$ while $V$ is finite-dimensional.
Observations:\\
1) If $\alpha$ is represented by a matrix $A$ in some basis, then $A$ is conjugate to $\Diag(C(d_1),...,C(d_r))$.\\
2) The minimal polynomial for $\alpha$ is $d_r \in F[X]$.\\
3) The characteristic polynomial of $\alpha$ is $d_1d_2...d_r$.
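As a small worked illustration of the last two observations, take $F=\Q$ and suppose
\begin{equation*}
\begin{aligned}
V_\alpha \cong F[X]/(X-1) \oplus F[X]/\left((X-1)(X-2)\right),
\end{aligned}
\end{equation*}
so $d_1 = X-1$ and $d_2 = X^2-3X+2$. In a suitable basis $\alpha$ is given by
\begin{equation*}
\begin{aligned}
\Diag\left(C(d_1), C(d_2)\right) = \left(\begin{matrix}
1 & 0 & 0\\
0 & 0 & -2\\
0 & 1 & 3
\end{matrix}\right),
\end{aligned}
\end{equation*}
with minimal polynomial $d_2 = (X-1)(X-2)$ and characteristic polynomial $d_1 d_2 = (X-1)^2(X-2)$.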
\begin{lemma}
The primes in $\C[X]$ are $X-\lambda$ for $\lambda \in \C$, up to associates.
\begin{proof}
If $f \in \C[X]$ is irreducible, then $f$ is neither $0$ nor a unit, so $\deg f \geq 1$. The Fundamental Theorem of Algebra then says that $f$ has a root $\lambda$, so $X-\lambda \mid f$, i.e. $f=(X-\lambda)g$. But $f$ is irreducible, so $g$ is a unit, and hence $f$ is an associate of $X-\lambda$.
\end{proof}
\end{lemma}
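\begin{eg}
For example, $X^2+1 = (X-i)(X+i)$ factors into the primes $X-i$ and $X+i$ in $\C[X]$.
\end{eg}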
The conjugacy classes in $GL_2(\Z/3)$ are represented by the matrices
\begin{equation*}
\begin{aligned}
\left(\begin{matrix}
0 & 2\\
1 & 0
\end{matrix}\right),\left(\begin{matrix}
0 & 1\\
1 & 2
\end{matrix}\right),\left(\begin{matrix}
0 & 1\\
1 & 1
\end{matrix}\right),\left(\begin{matrix}
\lambda & 0\\
1 & \lambda
\end{matrix}\right),\left(\begin{matrix}
\lambda & 0\\
0 & \mu
\end{matrix}\right)
\end{aligned}
\end{equation*}
for non-zero $\lambda$ and $\mu$.
Recall
\begin{equation*}
\begin{aligned}
|GL_2(\Z/3)| = (9-1)(9-3) = 48 = 2^4 \cdot 3
\end{aligned}
\end{equation*}
so a Sylow 2-subgroup has order $16=2^4$. The first matrix among the above five has order 4, the second and third have order 8, the fourth has order 3 when $\lambda = 1$ and order 6 when $\lambda =2$, and the diagonal matrices have order 1 or 2. Since conjugate elements have equal order, no element of $GL_2(\Z/3)$ has order 16, so a Sylow 2-subgroup cannot be cyclic.
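(As a quick check of two of these orders: the first matrix squares to $2I$, and $(2I)^2 = 4I = I$ in $GL_2(\Z/3)$, so it has order 4; the second matrix squares to $\left(\begin{matrix} 1 & 2\\ 2 & 2 \end{matrix}\right)$, whose square is $2I$, so its $8^{th}$ power is $I$ and it has order 8.)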
Now let $A,B$ be the first and the second matrix respectively. Then
\begin{equation*}
\begin{aligned}
A^{-1}BA =\left(\begin{matrix}
2 & 2\\
2 & 0
\end{matrix}\right)
\end{aligned}
\end{equation*}
A direct computation shows that this is equal to $B^3$.
So $\left<B\right> \leq \left<A,B\right> \leq GL_2(\Z/3)$, and $\left<B\right> \triangleleft \left<A,B\right>$.
By the second isomorphism theorem, $\frac{\left<A,B\right>}{\left<B\right>} \cong \frac{\left<A\right>}{\left<A\right> \cap \left<B\right>}$. But
\begin{equation*}
\begin{aligned}
\left<A\right> \cap \left<B\right> = \left<\left(\begin{matrix}
2 & 0\\
0 & 2
\end{matrix}\right)\right>
\end{aligned}
\end{equation*}
is a group of order 2. But $\left<A\right>$ has order 4. So
\begin{equation*}
\begin{aligned}
\left|\left<A,B\right>/\left<B\right>\right| = \left|\left<A\right>/(\left<A\right>\cap\left<B\right>)\right| = 4/2 = 2
\end{aligned}
\end{equation*}
so $\left|\left<A,B\right>\right| = 2\cdot 8 = 16$. So this is a Sylow 2-subgroup of $GL_2(\Z/3)$. It is
\begin{equation*}
\begin{aligned}
\left<A,B \mid B^8 = I,\ A^2 = B^4,\ A^{-1}BA = B^3\right>
\end{aligned}
\end{equation*}
a \emph{semidihedral group of order 16}.
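(As a check that this group is semidihedral rather than generalized quaternion: the element $AB$ has order 2, since $(AB)^2 = A^2\left(A^{-1}BA\right)B = B^4 \cdot B^3 \cdot B = B^8 = I$, using $A^2 = B^4 = 2I$.)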
\begin{eg}
Let $R=\Z[X]/(X^2+5)$, which is isomorphic to $\Z[\sqrt{-5}] \leq \C$. Then
\begin{equation*}
\begin{aligned}
(1+X)(1-X) =1-X^2 = 1+5 = 6 = 2 \cdot 3
\end{aligned}
\end{equation*}
while $1 \pm X$, $2,3$ are all irreducible, so $R$ is \emph{not} a UFD. Let
\begin{equation*}
\begin{aligned}
I_1 = (3,1+X), I_2 =(3,1-X)
\end{aligned}
\end{equation*}
be ideals (submodules) of $R$. Consider
\begin{equation*}
\begin{aligned}
\phi: I_1 \oplus I_2 &\to R\\
(a,b) &\to a+b
\end{aligned}
\end{equation*}
an $R-$module map. Then
\begin{equation*}
\begin{aligned}
\im(\phi) = (3,1+X,1-X)
\end{aligned}
\end{equation*}
But $3-((1+X)+(1-X))=1$. So this is the whole ring.
Also $\ker(\phi) = \{(a,b) \in I_1 \oplus I_2 | a+b=0\} \cong I_1 \cap I_2$, by sending $x$ back to $(x,-x)$. Since $3 \in I_1$ and $3 \in I_2$, we have
\begin{equation*}
\begin{aligned}
(3) \subset I_1 \cap I_2
\end{aligned}
\end{equation*}
Conversely, take an element of $I_1 \cap I_2$ and write it as $s \cdot 3 + t(1+X) \in I_1 = (3,1+X) \subset R=\Z[X] / (X^2+5)$.
Since it also lies in $I_2 = (3,1-X)$, working modulo $(3)$ as well, we get
\begin{equation*}
\begin{aligned}
t(1+X) \equiv (1-X)p \pmod{\left(3,X^2+5\right)}, \qquad \left(3,X^2+5\right) = \left(3,X^2-1\right) = \left(3,(X-1)(X+1)\right).
\end{aligned}
\end{equation*}
Evaluating at $X=1$ (legitimate modulo $\left(3,(X-1)(X+1)\right)$) gives $2t(1) \equiv 0 \pmod 3$, so $(1-X) \mid t$ modulo $3$; say $t = (1-X)u + 3v$. Then in $R$ we have $t(1+X) = (1-X^2)u + 3(1+X)v = 6u + 3(1+X)v$ (using $X^2 = -5$), which lies in $(3)$.
Therefore $s \cdot 3 + t(1+X)$ lies in $(3)$, so $I_1 \cap I_2 \subset (3)$, and hence equality.
By Example Sheet 4 Q1(iii), if we have a module $N \leq M$ with $M/N \cong R^n$, then $M \cong N \oplus R^n$.
Hence $\left(I_1 \oplus I_2\right) / \ker(\phi) \cong \im(\phi)=R$, so $I_1 \oplus I_2 \cong R \oplus \ker(\phi) \cong R \oplus (3)$.
Consider
\begin{equation*}
\begin{aligned}
\psi: R &\to (3)\\
x &\to 3x
\end{aligned}
\end{equation*}
$\ker(\psi) = \{x \in R|3x=0\} = 0$ as $R$ is an integral domain, and $\psi$ is clearly surjective onto $(3)$. So $\psi$ is an isomorphism, and therefore $I_1 \oplus I_2 \cong R \oplus R$.
We claim that $I_1$ is not principal. If $I_1 = (a+bX)$, then applying the ring automorphism $X \mapsto -X$ gives $I_2 = (a-bX)$. Then
\begin{equation*}
\begin{aligned}
(3) = I_1 \cap I_2 = ((a+bX)(a-bX)) = (a^2-b^2X^2) = (a^2+5b^2)
\end{aligned}
\end{equation*}
so $3 \in (a^2+5b^2)$, i.e. $3=(a^2+5b^2)(c+dX)$ for some $c+dX \in R$; comparing coefficients, $d=0$ and $a^2 + 5b^2 \mid 3$ in $\Z$. But $a^2+5b^2 = 3$ has no integer solutions, and $a^2+5b^2 = 1$ would force $b=0$, $a=\pm 1$ and $I_1 = R$, which is false since $R/I_1 \cong \Z/3 \neq 0$. Contradiction. So $I_1$ cannot be principal, and likewise $I_2$ cannot be. But now:
$\bullet$ $I_1$ needs 2 elements to generate it, but it is not the free module $R^2$;\\
$\bullet$ $I_1$ \emph{is} a direct summand of $R^2$.
\end{eg}
\end{document}
\subsection{Class ExceptionBlock}\label{ExceptionBlock}
This class represents an exception block present in the object file. This class gives all the information pertaining to that exception block.
\begin{tabular}{p{1.25in}p{1in}p{3.25in}}
\toprule
Method name & Return type & Method description \\
\midrule
hasTry & bool & True if the exception block has a try block. \\
tryStart & Offset & Start of the try block if it exists, else 0. \\
tryEnd & Offset & End of the try block if it exists, else 0. \\
trySize & Offset & Size of the try block if it exists, else 0. \\
catchStart & Offset & Start of the catch block. \\
\bottomrule
\end{tabular}
\begin{apient}
bool contains(Offset addr) const
\end{apient}
\apidesc{
This method returns \code{true} if the offset \code{addr} is contained within the try block. If there is no try block associated with this exception block, or the offset does not fall within the try block, it returns \code{false}.
}
\documentclass[letter,10pt]{article}
\usepackage[letterpaper, margin=1.5in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{verbatim}
\usepackage{color}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{subfig}
% \usepackage{pstool}
\usepackage{algorithm}
\usepackage{array}
\usepackage[binary-units]{siunitx}
\usepackage[backref=page,
pageanchor=true,
plainpages=false,
pdfpagelabels,
bookmarks,
bookmarksnumbered,
linkbordercolor={1 0.5 0.5},
citebordercolor={0 0.5 0},
]{hyperref}
\newcommand{\eg}{\textit{e.g.},~}
\newcommand{\etc}{\textit{etc.}}
\let\eqref\undefined
\newcommand{\figref}[1]{Figure~\ref{fig:#1}}
\newcommand{\algoref}[1]{Algorithm~\ref{algo:#1}}
\newcommand{\secref}[1]{Section~\ref{sec:#1}}
\newcommand{\tabref}[1]{Table~\ref{tab:#1}}
\newcommand{\eqref}[1]{Equation~\ref{eq:#1}}
\newcommand{\figlabel}[1]{\label{fig:#1}}
\newcommand{\algolabel}[1]{\label{algo:#1}}
\newcommand{\seclabel}[1]{\label{sec:#1}}
\newcommand{\tablabel}[1]{\label{tab:#1}}
\newcommand{\eqlabel}[1]{\label{eq:#1}}
% Title Page
\title{CMPSCI 453\\HW6, Wireshark Lab 3}
\author{Tony Gao\iffalse\\
\href{mailto:[email protected]}{[email protected]}\fi
}
\begin{document}
\maketitle
%\begin{abstract}
%\end{abstract}
%\section*{Collaborators And Sources}
%List everyone you have collaborated with, or discussed the assignment with,
%along with their title, contact email addresses, and nature of collaboration.
%All sources must be acknowledged.
%\paragraph{Collaborators}
%\begin{itemize}
% \item Person One, CS PhD student,
% \href{mailto:[email protected]}{[email protected]}. Discussed math
%formatting.
% \item Person Two, Professor,
% \href{mailto:[email protected]}{[email protected]}. Discussed Maxwell's
%equations.
%\end{itemize}
%\paragraph{Sources}
%\begin{itemize}
% \item \url{https://en.wikibooks.org/wiki/LaTeX/Mathematics}
% \item \url{http://detexify.kirelabs.org/classify.html}
%\end{itemize}
\section{Problem 1}
\begin{enumerate}
\item Run nslookup to obtain the IP address of a Web server in Asia. What is the IP
address of that server? \\
\begin{verbatim}
Tonys-MBP:CS453 rpg711$ nslookup google.com.hk
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: google.com.hk
Address: 172.217.9.35
\end{verbatim}
The type A record for google.com.hk maps it to 172.217.9.35
\item Run nslookup to determine the authoritative DNS servers for a university in
Europe.
\begin{verbatim}
Tonys-MBP:CS453 rpg711$ nslookup -type=NS www.cam.ac.uk
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
www.cam.ac.uk canonical name = cam.ac.uk.
cam.ac.uk nameserver = dns0.cl.cam.ac.uk.
cam.ac.uk nameserver = authdns0.csx.cam.ac.uk.
cam.ac.uk nameserver = sns-pb.isc.org.
cam.ac.uk nameserver = dns0.eng.cam.ac.uk.
cam.ac.uk nameserver = ns2.ic.ac.uk.
Authoritative answers can be found from:
authdns0.csx.cam.ac.uk internet address = 131.111.8.37
sns-pb.isc.org internet address = 192.5.4.1
sns-pb.isc.org has AAAA address 2001:500:2e::1
dns0.eng.cam.ac.uk internet address = 129.169.8.8
\end{verbatim}
The response lists the NS records for the official authoritative DNS servers of Cambridge University.
\item Run nslookup so that one of the DNS servers obtained in Question 2 is queried for
the mail servers for Yahoo! mail. What is its IP address?
\begin{verbatim}
Tonys-MBP:CS453 rpg711$ nslookup -type=mx yahoo.com authdns0.csx.cam.ac.uk
Server: authdns0.csx.cam.ac.uk
Address: 131.111.8.37#53
** server can't find yahoo.com: REFUSED
\end{verbatim}
Doing an nslookup for -type=MX for yahoo.com gives mta7.am0.yahoodns.net as one of the mail servers. A type A query reveals that mta7.am0.yahoodns.net maps to many IP addresses, one of which is 98.137.159.27.
\begin{verbatim}
Tonys-MBP:CS453 rpg711$ nslookup mta7.am0.yahoodns.net
Server: 192.168.1.1
Address: 192.168.1.1#53
Non-authoritative answer:
Name: mta7.am0.yahoodns.net
Address: 98.137.159.27
.. truncated ..
\end{verbatim}
\end{enumerate}
\section{Problem 2}
\begin{enumerate}
\setcounter{enumi}{4}
\item Locate the DNS query and response messages. Are they sent over UDP or TCP?
UDP
\item What is the destination port for the DNS query message? What is the source port
of the DNS response message?
The query's destination port is 53 (the standard DNS port), and correspondingly the response's source port is 53. For the query: User Datagram Protocol, Src Port: 45190, Dst Port: 53.
\item To what IP address is the DNS query message sent? Use ipconfig to determine the
IP address of your local DNS server. Are these two IP addresses the same?
The query is sent to 192.168.1.1. Yes, the two addresses are the same: I am on a MacBook Pro running OS X 10.12, so ifconfig does not list my DNS servers and scutil must be used instead; it shows that my local DNS server is 192.168.1.1.
\begin{verbatim}
$ scutil --dns
DNS configuration
resolver #1
nameserver[0] : 192.168.1.1
if_index : 7 (en0)
flags : Request A records
reach : 0x00020002 (Reachable,Directly Reachable Address)
\end{verbatim}
\item Examine the DNS query message. What “Type” of DNS query is it? Does the
query message contain any “answers”?
\begin{verbatim}
Queries
clients6.google.com: type A, class IN
\end{verbatim}
It contains 0 answers and 1 question, the full query is below.
\begin{verbatim}
Domain Name System (query)
[Response In: 144]
Transaction ID: 0x5f53
Flags: 0x0100 Standard query
0... .... .... .... = Response: Message is a query
.000 0... .... .... = Opcode: Standard query (0)
.... ..0. .... .... = Truncated: Message is not truncated
.... ...1 .... .... = Recursion desired: Do query recursively
.... .... .0.. .... = Z: reserved (0)
.... .... ...0 .... = Non-authenticated data: Unacceptable
Questions: 1
Answer RRs: 0
Authority RRs: 0
Additional RRs: 0
Queries
clients6.google.com: type A, class IN
\end{verbatim}
\item Examine the DNS response message. How many “answers” are provided? What
do each of these answers contain?
2 answers: a CNAME record mapping clients6.google.com to the canonical name clients.l.google.com, and an A record giving the IP address 172.217.11.46 for clients.l.google.com.
\begin{verbatim}
Domain Name System (response)
[Request In: 143]
[Time: 0.014347000 seconds]
Transaction ID: 0x5f53
Flags: 0x8180 Standard query response, No error
1... .... .... .... = Response: Message is a response
.000 0... .... .... = Opcode: Standard query (0)
.... .0.. .... .... = Authoritative: Server is not an authority for domain
.... ..0. .... .... = Truncated: Message is not truncated
.... ...1 .... .... = Recursion desired: Do query recursively
.... .... 1... .... = Recursion available: Server can do recursive queries
.... .... .0.. .... = Z: reserved (0)
.... .... ..0. .... = Answer authenticated: Answer/authority portion was not authenticated by the server
.... .... ...0 .... = Non-authenticated data: Unacceptable
.... .... .... 0000 = Reply code: No error (0)
Questions: 1
Answer RRs: 2
Authority RRs: 0
Additional RRs: 0
Queries
Answers
clients6.google.com: type CNAME, class IN, cname clients.l.google.com
clients.l.google.com: type A, class IN, addr 172.217.11.46
\end{verbatim}
\item Consider the subsequent TCP SYN packet sent by your host. Does the destination
IP address of the SYN packet correspond to any of the IP addresses provided in
the DNS response message?
Yes.
\begin{verbatim}
Internet Protocol Version 4, Src: 192.168.1.20, Dst: 172.217.11.46
Transmission Control Protocol, Src Port: 55588, Dst Port: 80, Seq: 0, Len: 0
Source Port: 55588
Destination Port: 80
[Stream index: 25]
[TCP Segment Len: 0]
Sequence number: 0 (relative sequence number)
Acknowledgment number: 0
1011 .... = Header Length: 44 bytes (11)
Flags: 0x002 (SYN)
Window size value: 65535
[Calculated window size: 65535]
Checksum: 0xb461 [unverified]
[Checksum Status: Unverified]
Urgent pointer: 0
Options: (24 bytes), Maximum segment size, No-Operation (NOP), Window scale, No-Operation (NOP), No-Operation (NOP), Timestamps, SACK permitted, End of Option List (EOL)
\end{verbatim}
\item This web page contains images. Before retrieving each image, does your host
issue new DNS queries?
No. The image requests appear at Wireshark capture numbers 603-607, while DNS queries appear at 84-484 and 1538-1587, so there is no overlap, implying no new DNS queries are issued for the images. This makes sense, because the images are all hosted under the ietf.org hostname, which was already resolved.
\end{enumerate}
\end{document}
\documentclass[]{article}
\usepackage[left=1in,top=1in,right=1in,bottom=1in]{geometry}
%%%% more monte %%%%
% thispagestyle{empty}
% https://stackoverflow.com/questions/2166557/how-to-hide-the-page-number-in-latex-on-first-page-of-a-chapter
\usepackage{color}
% \usepackage[table]{xcolor} % are they using color?
% \definecolor{WSU.crimson}{HTML}{981e32}
% \definecolor{WSU.gray}{HTML}{5e6a71}
% \definecolor{shadecolor}{RGB}{248,248,248}
\definecolor{WSU.crimson}{RGB}{152,30,50} % use http://colors.mshaffer.com to convert from 981e32
\definecolor{WSU.gray}{RGB}{94,106,113}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand*{\authorfont}{\fontfamily{phv}\selectfont}
\usepackage{lmodern}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{abstract}
\renewcommand{\abstractname}{} % clear the title
\renewcommand{\absnamepos}{empty} % originally center
\renewenvironment{abstract}
{{%
\setlength{\leftmargin}{0mm}
\setlength{\rightmargin}{\leftmargin}%
}%
\relax}
{\endlist}
\makeatletter
\def\@maketitle{%
\pagestyle{empty}
\newpage
% \null
% \vskip 2em%
% \begin{center}%
\let \footnote \thanks
{\fontsize{18}{20}\selectfont\raggedright \setlength{\parindent}{0pt} \@title \par}%
}
%\fi
\makeatother
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\title{\textbf{\textcolor{WSU.crimson}{Analysis of Body measurements and
proportions between male and female}} }
%
% \author{ \Large true \hfill \normalsize \emph{} }
\author{\Large Minju
Lee\vspace{0.05in} \newline\normalsize\emph{Washington State
University} }
\date{December 14, 2020}
\setcounter{secnumdepth}{3}
\usepackage{titlesec}
% See the link above: KOMA classes are not compatible with titlesec any more. Sorry.
% https://github.com/jbezos/titlesec/issues/11
\titleformat*{\section}{\bfseries}
\titleformat*{\subsection}{\bfseries\itshape}
\titleformat*{\subsubsection}{\itshape}
\titleformat*{\paragraph}{\itshape}
\titleformat*{\subparagraph}{\itshape}
% https://code.usgs.gov/usgs/norock/irvine_k/ip-092225/
%\titleformat*{\section}{\normalsize\bfseries}
%\titleformat*{\subsection}{\normalsize\itshape}
%\titleformat*{\subsubsection}{\normalsize\itshape}
%\titleformat*{\paragraph}{\normalsize\itshape}
%\titleformat*{\subparagraph}{\normalsize\itshape}
% https://tex.stackexchange.com/questions/233866/one-column-multicol-environment#233904
\usepackage{environ}
\NewEnviron{auxmulticols}[1]{%
\ifnum#1<2\relax% Fewer than 2 columns
%\vspace{-\baselineskip}% Possible vertical correction
\BODY
\else% More than 1 column
\begin{multicols}{#1}
\BODY
\end{multicols}%
\fi
}
\usepackage{natbib}
\setcitestyle{aysep={}} %% no year, comma just year
% \usepackage[numbers]{natbib}
\bibliographystyle{./../biblio/ormsv080.bst}
\usepackage[strings]{underscore} % protect underscores in most circumstances
\newtheorem{hypothesis}{Hypothesis}
\usepackage{setspace}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% MONTE ADDS %%%
\usepackage{fancyhdr} % fancy header
\usepackage{lastpage} % last page
\usepackage{multicol}
\usepackage{etoolbox}
\AtBeginEnvironment{quote}{\singlespacing\small}
% https://tex.stackexchange.com/questions/325695/how-to-style-blockquote
\usepackage{soul} %% allows strike-through
\usepackage{url} %% fixes underscores in urls
\usepackage{csquotes} %% allows \textquote in references
\usepackage{rotating} %% allows table and box rotation
\usepackage{caption} %% customize caption information
\usepackage{booktabs} %% enhance table/tabular environment
\usepackage{tabularx} %% width attributes updates tabular
\usepackage{enumerate} %% special item environment
\usepackage{enumitem} %% special item environment
\usepackage{lineno} %% allows linenumbers for editing using \linenumbers
\usepackage{hanging}
\usepackage{mathtools} %% also loads amsmath
\usepackage{bm} %% bold-math
\usepackage{scalerel} %% scale one element (make one beta bigger font)
\newcommand{\gFrac}[2]{ \genfrac{}{}{0pt}{1}{{#1}}{#2} }
\newcommand{\betaSH}[3]{ \gFrac{\text{\tiny #1}}{{\text{\tiny #2}}}\hat{\beta}_{\text{#3}} }
\newcommand{\betaSB}[3]{ ^{\text{#1}} _{\text{#2}} \bm{\beta} _{\text{#3}} } %% bold
\newcommand{\bigEQ}{ \scaleobj{1.5}{{\ }= } }
\newcommand{\bigP}[1]{ \scaleobj{1.5}{#1 } }
\usepackage{endnotes} % he already does this ...
\renewcommand{\enotesize}{\normalsize}
% https://tex.stackexchange.com/questions/99984/endnotes-do-not-be-superscript-and-add-a-space
\renewcommand\makeenmark{\textsuperscript{[\theenmark]}} % in brackets %
% https://tex.stackexchange.com/questions/31574/how-to-control-the-indent-in-endnotes
\patchcmd{\enoteformat}{1.8em}{0pt}{}{}
\patchcmd{\theendnotes}
{\makeatletter}
{\makeatletter\renewcommand\makeenmark{\textbf{[\theenmark]} }}
{}{}
% https://tex.stackexchange.com/questions/141906/configuring-footnote-position-and-spacing
\addtolength{\footnotesep}{5mm} % change to 1mm
\renewcommand{\thefootnote}{\textbf{\arabic{footnote}}}
\let\footnote=\endnote
%\renewcommand*{\theendnote}{\alph{endnote}}
%\renewcommand{\theendnote}{\textbf{\arabic{endnote}}}
\renewcommand*{\notesname}{ENDNOTES}
\makeatletter
\def\enoteheading{\section*{\notesname
\@mkboth{\MakeUppercase{\notesname}}{\MakeUppercase{\notesname}}}%
\mbox{}\par\vskip-2.3\baselineskip\noindent\rule{.5\textwidth}{0.4pt}\par\vskip\baselineskip}
\makeatother
\renewcommand*{\contentsname}{TABLE OF CONTENTS}
\renewcommand*{\refname}{REFERENCES}
%\usepackage{subfigure}
\usepackage{subcaption}
\captionsetup{labelfont=bf} % Make Table / Figure bold
%%% you could add elements here ... monte says .... %%%
%\usepackage{mypackageForCapitalH}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
% move the hyperref stuff down here, after header-includes, to allow for - \usepackage{hyperref}
\makeatletter
\@ifpackageloaded{hyperref}{}{%
\ifxetex
\PassOptionsToPackage{hyphens}{url}\usepackage[setpagesize=false, % page size defined by xetex
unicode=false, % unicode breaks when used with xetex
xetex]{hyperref}
\else
\PassOptionsToPackage{hyphens}{url}\usepackage[draft,unicode=true]{hyperref}
\fi
}
\@ifpackageloaded{color}{
\PassOptionsToPackage{usenames,dvipsnames}{color}
}{%
\usepackage[usenames,dvipsnames]{color}
}
\makeatother
\hypersetup{breaklinks=true,
bookmarks=true,
pdfauthor={Minju Lee (Washington State University)},
pdfkeywords = {boxplots; multivariate chi-square
distribution; nonlinear growth curves; Richard's curve; simulated
critical points},
pdftitle={Analysis of Body measurements and proportions
between male and female},
colorlinks=true,
citecolor=blue,
urlcolor=blue,
linkcolor=magenta,
pdfborder={0 0 0}}
\urlstyle{same} % don't use monospace font for urls
% Add an option for endnotes. -----
%
% add tightlist ----------
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
% add some other packages ----------
% \usepackage{multicol}
% This should regulate where figures float
% See: https://tex.stackexchange.com/questions/2275/keeping-tables-figures-close-to-where-they-are-mentioned
\usepackage[section]{placeins}
\pagestyle{fancy}
\lhead{\textcolor{WSU.crimson}{\textbf{ Analysis of Body measurements
and proportions between male and female }}}
\chead{}
\rhead{\textcolor{WSU.gray}{\textbf{ Page\ \thepage\ of\ \protect\pageref{LastPage} }}}
\lfoot{}
\cfoot{}
\rfoot{}
\begin{document}
% \pagenumbering{arabic}% resets `page` counter to 1
%
% \maketitle
{% \usefont{T1}{pnc}{m}{n}
\setlength{\parindent}{0pt}
\thispagestyle{plain}
{\fontsize{18}{20}\selectfont\raggedright
\maketitle % title \par
}
{
\vskip 13.5pt\relax \normalsize\fontsize{11}{12}
\textbf{\authorfont Minju Lee} \hskip 15pt \emph{\small Washington State
University}
}
}
\begin{abstract}
\hbox{\vrule height .2pt width 39.14pc}
\vskip 8.5pt % \small
\noindent In this article we compare the
\emph{empirical characteristic function} \citep{Tukey:1977, Becker:1988}
to a \emph{moment-generating-functional form} to compute the proportion
of hypotheses \(m\) that are rejected under the null hypothesis.
\vspace{0.25in}
\noindent Here is a second paragraph of the abstract (if necessary), and
with the pipe notation it doesn't break. Notice it still needs to be
indented. \vspace{0.25in}
\noindent Generally, we write this abstract last. Often it is called the
executive summary. It should succinctly summarize the entire document.
You can include references such as this one to the Appendices section
\ref{sec:appendix} if necessary.
\vskip 8.5pt \noindent \textbf{\underline{Keywords}:} boxplots;
multivariate chi-square distribution; nonlinear growth curves; Richard's
curve; simulated critical points \par
\hbox{\vrule height .2pt width 39.14pc}
\vskip 5pt
\hfill \textbf{\textcolor{WSU.gray}{ December 14, 2020 } }
\vskip 5pt
\end{abstract}
\vskip -8.5pt
% removetitleabstract
\noindent
\section{Introduction}
\label{sec:intro}
\noindent The global mean height of women is about four and a half
inches, or 12 centimeters shorter than that of men. Globally, the ratio
is 1.07, meaning that on average, men are about 7\% taller than women.
Across the world, this relative difference between the sexes can vary
from only 2-3\% to over 12\% \citep{owidhumanheight:2013}. It is
well-established knowledge that men are taller than women on average
throughout the globe. This study will further examine the anatomical
differences between men and women. More specifically, in the context of
standing height, leg length, wingspan, hand length, foot length, and
head height.
\begin{figure}[!ht]
\hrule
\caption{ \textbf{Height distribution between male and female} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.65\textwidth]{pdfs/kernel_density-1.pdf} }
\end{center}
\label{fig:height-dist}
\hrule
\end{figure}
\noindent Figure \ref{fig:height-dist} visualizes the height
distributions of males and females in the sample data. The red line
indicates the mean value of the female height sample, and the blue line
indicates the mean value of the male height sample. The difference
between the average heights is about 13.7cm, slightly greater than
the global mean difference of 12cm, according to
\citep{owidhumanheight:2013}. In our sample data, men are about 8.4\%
taller than women on average. This confirms that our sample is within
the normal range of global height differences.
\section{Research Question: Male and female comparison of body measurements and proportions }
\label{sec:rq}
\subsection{How are male and female differ in terms of body measurements}
\label{sec:rq2}
\begin{figure}[!ht]
\hrule
\caption{ \textbf{boxplots of male and female measurements} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.80\textwidth]{pdfs/measure_box_plots-1.pdf} }
\end{center}
\label{fig:measure.mf}
\hrule
\end{figure}
\begin{figure}[!ht]
\hrule
\caption{ \textbf{boxplots of white/asian male and female body measurements} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.80\textwidth]{pdfs/measure_boxplot_amfwmf-1.pdf} }
\end{center}
\label{fig:measure.amfwmf}
\hrule
\end{figure}
\noindent The sampled measurements differ between males and females.
Figure \ref{fig:measure.mf} shows the distribution of each variable
for men and women. For every variable, the median of one sex lies outside
the interquartile box of the other sex's boxplot; that is, there is likely
to be a difference between the two genders. The interquartile ranges (the
boxes) show a similar dispersion for males and females. The overall spread
of the boxplots reveals that men vary more in head height and foot length
than women. Some of the outliers in head height, hand length, and foot
length are likely due to measurement error: it is very unlikely that a
normal human head height or foot length exceeds 30cm. Extreme outliers
will be removed for the hypothesis testing.\vspace{0.25in}
\noindent We further investigate the measurements with ethnicity as an
additional factor; in our sample, we only compare Asians and whites.
Figure \ref{fig:measure.amfwmf} displays boxplots of Asian male and female
measurements on the top row and white male and female measurements on the
bottom row. The boxplots are closer together and overlap more for Asians
than for whites, which indicates less difference between the genders among
Asians than among whites. Leg length between Asian males and females is
interesting: they seem to have the same median value, although the Asian
males' median height is about 10cm greater. Do Asian women have longer leg
lengths relative to height than Asian men? Note that the distributions for
Asian males vary more than those for Asian females across every measurement
variable. The distributions for white males and females are very similar to
the overall male and female distributions, although the average measurement
values for whites are slightly higher than the sample average.\vspace{0.25in}
\subsection{How are male and female differ in terms of body proportions}
\label{sec:rq3}
\begin{figure}[!ht]
\hrule
\caption{ \textbf{boxplots of male and female body proportions} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.80\textwidth]{pdfs/proportion_box_plots-1.pdf} }
\end{center}
\label{fig:proportion.mf}
\hrule
\end{figure}
\noindent Figure \ref{fig:proportion.mf} shows boxplots of body proportions
relative to height for the male and female sample data. The distributions
are clearly very similar: body proportions do not seem to differ much
between males and females. On average, the arm span is about the same as a
person's height. The median head height proportion is about 0.13, meaning
that roughly 7.7 head heights fit into a person's standing
height.\vspace{0.25in}
\begin{figure}[!ht]
\hrule
\caption{ \textbf{boxplots of white/asian male and female body proportions} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.80\textwidth]{pdfs/proportion_box_plots_amfwmf-1.pdf} }
\end{center}
\label{fig:proportion.amfwmf}
\hrule
\end{figure}
\noindent Figure \ref{fig:proportion.amfwmf} breaks down the male-female
proportion boxplots by Asian and white ethnicity. Again, the boxplots
largely overlap. However, the median values for Asian males and females
differ slightly: the Asian male boxplots show longer arm span and hand
length, and shorter leg length, compared to the Asian female proportions.
In the next section, the study examines the significance of the differences
in body measurements and proportions between genders.\vspace{0.25in}
\section{Data Description}
\label{sec:data}
\noindent Data was collected by each classmate: every classmate created a
handout for data collection and collected measurements from 10 individuals.
The original data contained 428 observations. After the data cleaning
process, the data set was reduced to 190 individuals. Measurements taken on
both the right and left sides of the body were merged into a single
averaged value, and all measurement units were converted to
centimeters.\vspace{0.25in}
\noindent The sample is restricted to individuals aged 18 or above. The
measurements comprise each individual's height, leg length, arm span, head
height, hand length, and foot length. The sample data also contains the
proportion of each measurement relative to height. Other variables used
in the sample data include gender, sex, and age.
\noindent Reference the section in the Appendix with greater detail
about the data provenance \ref{sec:appendix}.
\subsection{Summary of Sample}
\label{sec:data-sample}
\noindent Figure \ref{fig:sample} shows the sample data counts grouped by
gender and ethnicity. A total of 167 observations are in the sample data,
comprising 83 females and 84 males. About 70\% of the sample is
white and about 30\% Asian.
\begin{figure}[!ht]
\hrule
\caption{ \textbf{sample data grouping by gender and ethnicity} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.85\textwidth]{pdfs/sample.pdf} }
\end{center}
\label{fig:sample}
\hrule
\end{figure}
\noindent The histogram in Figure \ref{fig:age-dist} shows the density of
the sample data across age groups. About 45\% of the participants
are aged between 20 and 30.
\begin{figure}[!ht]
\hrule
\caption{ \textbf{sample data age distribution} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.85\textwidth]{pdfs/sample_proportion-1.pdf} }
\end{center}
\label{fig:age-dist}
\hrule
\end{figure}
\subsection{Summary Statistics of Data}
\label{sec:data-summary}
\noindent
See table \ref{table:male-correlation} and
\ref{table:female-correlation} \newpage
\input{tables/will.table}
\input{tables/female-correlation-table}
\section{Key Findings}
\label{sec:findings}
\section{Conclusion}
\label{sec:conclusion}
\newpage
This was a new page
This is a newline. \newline Here is some more text.
Below are some example code that may benefit you in preparing your
document. \newline
\vspace{0.25in}
\noindent Please state your name: \hrulefill \newline I was born on
\hrulefill in \hrulefill \vspace{0.25in}
\begin{equation}
\label{eq:my-model}
Y_{jt} = \alpha + \bm{\beta}X_{jt} + \upsilon_{j} + \varepsilon_{jt} ,
\end{equation}
\noindent where \(\alpha\) is the grand mean, \(\upsilon_{j}\) is the
fixed-time country mean, \(X_{jt}\) (country \(j\) at time \(t\)) is the
matrix of country-level observations for the vector of aforementioned
parameters \(\bm{\beta}\), and \(\varepsilon_{jt}\) represents the
residual idiosyncratic disturbance. Our panel data set consists of
repeated observations of countries over time. Therefore, we employ
cross-section time-series models. This approach redefines
Equation\textasciitilde{}\ref{eq:my-model} by subtracting time-demeaned
values. This \emph{within} transformation subtracts constant country
effects for the dependent variable \(\bar{Y_{j}}\), the predictor
variables \(\bar{X_{j}}\), and the intercept \(\bar{\upsilon_{j}}\):
\begin{equation}
\label{eq:my-random}
(Y_{jt} - \theta \bar{Y_{j}}) = (1-\theta)\alpha + \bm{\beta}(X_{jt} - \bar{X_{j}}) + (\upsilon_{jt} - \theta \bar{\upsilon_{j}}) ,
\end{equation}
\noindent If \(\theta = 0\), the model reduces to a basic pooled
ordinary-least-squares (OLS) model; if \(\theta = 1\), the model reduces
to a fixed-effects model; otherwise the model represents a
random-effects model. The pooled OLS estimation is biased if country
effects exist \citep{Hsiao:2003}. The random-effects model may be
susceptible to omitted-variable bias \citep{Wooldridge:2006}: bias
because a predictor was excluded from the model specification.
Conversely, the fixed-effects model is not susceptible to this bias as
it captures unobserved intracountry variation around its average
country-level ``fixed effect." Panel-data analysis commonly has issues
with heteroskedasticity, serial autocorrelation, and cross-sectional
autocorrelation.
\vspace{0.5in}
\(i=1\) and \[i = 1\]
\vspace{0.5in}
\begin{tabular}{ c c c c c}
1 & 2 & 3 & 4 & 5 \\
\hline
6 & 7 & 8 & 9 & 10
\end{tabular}
\vspace{0.5in}
\begin{figure}[!ht]
%% figures have hrule, tables have hline
\hrule
\caption{ \textbf{Conceptual Model} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=\textwidth]{figures/conceptual-model-v4.pdf} }
\end{center}
\label{fig:conceptual-model}
\hrule
\end{figure}
See Figure \ref{fig:conceptual-model}.
\newpage
This is a
footnote\footnote{This is a footnote that can be really long. \newline You can have multiple paragraphs in the footnote. You can have \underline{underline} or \textbf{bold} or \emph{italics}. You can even have a math equation inline. \newline In this section, we review the regression results to summarize our findings. First, we examine each model for significance, and conclude the hypothesized models fit well with the data. Second, we conclude that the fixed country effects represent consistent and unbiased parameter estimates. Third, with the use of the \citet{Driscoll:1998} robust standard errors, we adjust any variance bias to ascertain the significance of these consistent estimates. Therefore, we are able to make inferences about the hypotheses using our model estimates. For ease of interpretation across these 12 models, we introduce $\betaSH{{ \ \ }M1}{Total}{1}$ as notation to refer to parameter estimate $\hat{\beta}_{1}$ (HDI) for the Total Sample and (M1) Model 1: Main Effects. We proceed by reporting findings for the total sample. \newline The footnotes are automatically converted to "endnotes" and will be included at the end of the document. It will finish when you have that outer brace like this.}
that can be placed within a document.
\vspace{1.5in}
Refer to the Appendices in section\textasciitilde{}\ref{sec:appendix}
where I am going to cite John \citep[pp. 2-3]{Tukey:1962}.
Here is a quote by \citet[pp. 2-3]{Tukey:1962}:
\begin{quote}
For a long time I have thought I was a statistician, interested in inferences from the particular to the general. But as I have watched mathematical statistics evolve, I have had to cause to wonder and to doubt. [...] All in all, I have come to feel that my central interest is in \emph{data analysis}, which I take to include among other things: procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing the data.
Large parts of data analysis are inferential in the sample-to-population sense, but these are only parts, not the whole. Large parts of data analysis are incisive, laying bare indications which we could not perceive by simple and direct examination of the raw data, but these too are only parts, not the whole. Some parts of data analysis, as the term is her stretch beyond its philology, are allocation, in the sense that they guide us in the distribution of effort and other valuable considerations in observation, experimentation, or analysis. Data analysis is a larger and more varied field than inference, or incisive procedures, or allocation.
Statistics has contributed much to data analysis. In the future it can, and in my view should, contribute more. For such contributions to exist, and be valuable, it is not necessary that they be direct. They need not provide new techniques, or better tables for old techniques, in order to influence the practice of data analysis.
\end{quote}
\newpage
\section{APPENDICES}
\label{sec:appendix}
\subsection{Data Provenance}
\label{sec:appendix-data-provenance}
\newpage
\subsubsection{Data Collection Handout}
\label{sec:appendix-data-handout}
\begin{figure}[!ht]
\hrule
\caption{ \textbf{Handout Page 1} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.85\textwidth]{pdfs/handout1.pdf} }
\end{center}
\label{fig:handout-1}
\hrule
\end{figure}
\newpage
\begin{figure}[!ht]
\hrule
\caption{ \textbf{Handout Page 2} }
\begin{center}
\scalebox{1.00}{ \includegraphics[trim = 0 0 0 0,clip,width=0.85\textwidth]{pdfs/handout2.pdf} }
\end{center}
\label{fig:handout-2}
\hrule
\end{figure}
\newpage
\begin{figure}[!ht]
\begin{subfigure}[h]{0.5\textwidth}
\centering
% trim={<left> <lower> <right> <upper>}
% https://shantoroy.com/latex/add-subfig-in-latex/
\includegraphics[trim = 0 0 11.25cm 0,clip,scale=1]{figures/Vitruvian.pdf}
\caption{ \citet{Thomas:2020} discuss this. }
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}[h]{0.5\textwidth}
\centering
\includegraphics[trim = 11.25cm 0 0 0,clip,scale=1]{figures/Vitruvian.pdf}
\caption{Schnitt realer Sensor \citep{Thomas:2020}}
\label{fig:sub-second}
\end{subfigure}
\vspace{2.5mm}
\hrule
\vspace{2.5mm}
\caption{\textbf{ Der Sensor in Theorie und Verwirklichung... caption at bottom instead? } I can write a really long caption if I want. \newline This is using "crop" to include one image and trim it to appear as two. Likely you will have two separate images if you use this option, so you would set the trim parameters all equal to 0. \newline This figure has subfigures which each also have a possible caption. }
\label{fig:combined}
\vspace{-2.5mm}
\hrule
\end{figure}
\newpage
\subsection{Preparing the Report Workspace as a subsection}
\label{sec:appendix-setup}
\subsubsection{Prepare Sample Data}
\label{sec:appendix-setup2}
Below are the functions and libraries required to run the code
referenced in this document.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(devtools); }\CommentTok{\# required for source\_url}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Loading required package: usethis
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{path.humanVerseWSU =}\StringTok{ "https://raw.githubusercontent.com/MonteShaffer/humanVerseWSU/"}
\KeywordTok{source\_url}\NormalTok{( }\KeywordTok{paste0}\NormalTok{(path.humanVerseWSU,}\StringTok{"master/misc/functions{-}project{-}measure.R"}\NormalTok{) );}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## SHA-1 hash of file is 091aa1c443f262dce181395047d037a756331a65
\end{verbatim}
\begin{verbatim}
## Loading required package: lattice
\end{verbatim}
\begin{verbatim}
## Loading required package: survival
\end{verbatim}
\begin{verbatim}
## Loading required package: Formula
\end{verbatim}
\begin{verbatim}
## Loading required package: ggplot2
\end{verbatim}
\begin{verbatim}
##
## Attaching package: 'Hmisc'
\end{verbatim}
\begin{verbatim}
## The following objects are masked from 'package:base':
##
## format.pval, units
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{source\_url}\NormalTok{( }\KeywordTok{paste0}\NormalTok{(path.humanVerseWSU,}\StringTok{"master/humanVerseWSU/R/functions{-}dataframe.R"}\NormalTok{) );}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## SHA-1 hash of file is 1149cbf3e865f692b50d4d1983e6364dc56ce62d
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{source\_url}\NormalTok{( }\KeywordTok{paste0}\NormalTok{(path.humanVerseWSU,}\StringTok{"master/humanVerseWSU/R/functions{-}EDA.R"}\NormalTok{) );}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## SHA-1 hash of file is 62ba3333da32792e57c410e3f02a443a4c7f4985
\end{verbatim}
\begin{verbatim}
## Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa
\end{verbatim}
\begin{verbatim}
##
## Attaching package: 'psych'
\end{verbatim}
\begin{verbatim}
## The following object is masked from 'package:Hmisc':
##
## describe
\end{verbatim}
\begin{verbatim}
## The following objects are masked from 'package:ggplot2':
##
## %+%, alpha
\end{verbatim}
Below is the code to load the data and prepare it for analysis.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{path.project =}\StringTok{ "C:/\_git\_/WSU\_STATS419\_FALL2020/project{-}measure/"}\NormalTok{;}
\NormalTok{path.to.secret =}\StringTok{ "C:/Users/13608/Dropbox/WSU{-}419/Fall 2020/\_\_student\_access\_\_/\_SECRET\_/"}
\NormalTok{measure =}\StringTok{ }\NormalTok{utils}\OperatorTok{::}\KeywordTok{read.csv}\NormalTok{( }\KeywordTok{paste0}\NormalTok{(path.to.secret, }\StringTok{"measure{-}students.txt"}\NormalTok{), }\DataTypeTok{header=}\OtherTok{TRUE}\NormalTok{, }\DataTypeTok{quote=}\StringTok{""}\NormalTok{, }\DataTypeTok{sep=}\StringTok{"|"}\NormalTok{);}
\NormalTok{getOne =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"hand.length"}\NormalTok{, }\StringTok{"hand.width"}\NormalTok{, }\StringTok{"hand.elbow"}\NormalTok{, }\StringTok{"elbow.armpit"}\NormalTok{, }\StringTok{"arm.reach"}\NormalTok{, }\StringTok{"foot.length"}\NormalTok{, }\StringTok{"floor.kneepit"}\NormalTok{, }\StringTok{"floor.hip"}\NormalTok{, }\StringTok{"floor.armpit"}\NormalTok{);}
\CommentTok{\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#}
\NormalTok{path.github =}\StringTok{ "https://raw.githubusercontent.com/minju{-}lee92/WSU\_STATS419\_FALL2020/"}\NormalTok{;}
\KeywordTok{source\_url}\NormalTok{( }\KeywordTok{paste0}\NormalTok{(path.github,}\StringTok{"master/functions/functions{-}project{-}measure.R"}\NormalTok{) );}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## SHA-1 hash of file is 5aa82ee87a0a62bfc9b04ccc942d44c0cc3ed092
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# this is your function}
\CommentTok{\# convert units to cm}
\NormalTok{measureAscm \textless{}{-}}\KeywordTok{convert.inchestocm}\NormalTok{(measure)}
\CommentTok{\# build merged left/right value cols}
\NormalTok{merged.df \textless{}{-}}\KeywordTok{merge.left.right}\NormalTok{(measureAscm, getOne)}
\CommentTok{\# remove nas, duplicates, create categorical variables... etc }
\NormalTok{cleaned.df =}\StringTok{ }\KeywordTok{prepareMeasureData}\NormalTok{(merged.df)}
\CommentTok{\# create scaled variables using height}
\NormalTok{colnum =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\DecValTok{3}\OperatorTok{:}\DecValTok{7}\NormalTok{,}\DecValTok{20}\OperatorTok{:}\DecValTok{28}\NormalTok{)}
\NormalTok{new.colname =}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"p.height"}\NormalTok{, }\StringTok{"p.head.h"}\NormalTok{, }\StringTok{"p.head.c"}\NormalTok{, }\StringTok{"p.arm.span"}\NormalTok{, }\StringTok{"p.floor.navel"}\NormalTok{,}\StringTok{"p.hand.length"}\NormalTok{,}\StringTok{"p.hand.width"}\NormalTok{,}\StringTok{"p.hand.elbow"}\NormalTok{,}\StringTok{"p.elbow.armpit"}\NormalTok{,}\StringTok{"p.arm.reach"}\NormalTok{,}\StringTok{"p.foot.length"}\NormalTok{,}\StringTok{"p.floor.kneepit"}\NormalTok{, }\StringTok{"p.floor.hip"}\NormalTok{, }\StringTok{"p.floor.armpit"}\NormalTok{)}
\NormalTok{v1.df =}\KeywordTok{build.scale.variables}\NormalTok{(cleaned.df, colnum, new.colname, cleaned.df}\OperatorTok{$}\NormalTok{height)}
\CommentTok{\# final sample data for the analysis}
\NormalTok{sample.df \textless{}{-}v1.df[,}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\OperatorTok{:}\DecValTok{4}\NormalTok{,}\DecValTok{6}\NormalTok{,}\DecValTok{20}\NormalTok{,}\DecValTok{25}\NormalTok{,}\DecValTok{27}\NormalTok{,}\DecValTok{13}\NormalTok{,}\DecValTok{29}\OperatorTok{:}\DecValTok{31}\NormalTok{,}\DecValTok{36}\NormalTok{,}\DecValTok{37}\NormalTok{,}\DecValTok{39}\NormalTok{,}\DecValTok{41}\NormalTok{,}\DecValTok{46}\NormalTok{,}\DecValTok{48}\NormalTok{)]}
\NormalTok{sample.data \textless{}{-}}\StringTok{ }\NormalTok{sample.df[sample.df}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{!=}\StringTok{\textquotesingle{}o\textquotesingle{}}\OperatorTok{\&}\StringTok{ }\NormalTok{(sample.df}\OperatorTok{$}\NormalTok{my.ethnicity}\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\OperatorTok{|}\NormalTok{sample.df}\OperatorTok{$}\NormalTok{my.ethnicity}\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{),]}
\CommentTok{\# age over 18}
\NormalTok{sample.data \textless{}{-}}\StringTok{ }\NormalTok{sample.data[sample.data}\OperatorTok{$}\NormalTok{age }\OperatorTok{\textgreater{}=}\DecValTok{18}\NormalTok{,]}
\KeywordTok{summary}\NormalTok{(sample.data)}
\CommentTok{\# getting rid of outliers.}
\NormalTok{sample.data \textless{}{-}}\StringTok{ }\NormalTok{sample.data[sample.data}\OperatorTok{$}\NormalTok{height }\OperatorTok{\textgreater{}=}\StringTok{ }\DecValTok{145} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span}\OperatorTok{\textgreater{}}\DecValTok{100} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length}\OperatorTok{\textgreater{}}\DecValTok{10} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length}\OperatorTok{\textless{}}\DecValTok{30} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length}\OperatorTok{\textgreater{}}\DecValTok{19} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip}\OperatorTok{\textgreater{}}\DecValTok{50} \OperatorTok{\&}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height}\OperatorTok{\textless{}}\DecValTok{30}\NormalTok{,]}
\KeywordTok{saveRDS}\NormalTok{(sample.data,}\StringTok{"sample.data.rds"}\NormalTok{);}
\NormalTok{ utils}\OperatorTok{::}\KeywordTok{write.table}\NormalTok{(sample.data, }\DataTypeTok{file=}\StringTok{"sample.data.txt"}\NormalTok{, }\DataTypeTok{quote=}\OtherTok{FALSE}\NormalTok{, }\DataTypeTok{col.names=}\OtherTok{TRUE}\NormalTok{, }\DataTypeTok{row.names=}\OtherTok{FALSE}\NormalTok{, }\DataTypeTok{sep=}\StringTok{"|"}\NormalTok{);}
\end{Highlighting}
\end{Shaded}
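As an optional sanity check after the cleaning and filtering above, the hard-coded cut-offs can be re-asserted before the sample is saved. This is a minimal sketch that reuses only the columns and thresholds already applied above; it is not part of the original preparation pipeline.

\begin{verbatim}
# Minimal sanity check (sketch): re-assert the outlier cut-offs applied above
# so a later re-run fails loudly if the filters were skipped or reordered.
stopifnot(
  all(sample.data$height      >= 145, na.rm = TRUE),
  all(sample.data$arm.span    >  100, na.rm = TRUE),
  all(sample.data$hand.length >   10 & sample.data$hand.length < 30, na.rm = TRUE),
  all(sample.data$foot.length >   19, na.rm = TRUE),
  all(sample.data$floor.hip   >   50, na.rm = TRUE),
  all(sample.data$head.height <   30, na.rm = TRUE),
  all(sample.data$age         >=  18, na.rm = TRUE)
)
\end{verbatim}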
\subsubsection{Generate Summary of Sample}
\label{sec:appendix-setup-sample-summary}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(dplyr)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##
## Attaching package: 'dplyr'
\end{verbatim}
\begin{verbatim}
## The following objects are masked from 'package:Hmisc':
##
## src, summarize
\end{verbatim}
\begin{verbatim}
## The following objects are masked from 'package:stats':
##
## filter, lag
\end{verbatim}
\begin{verbatim}
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{sample.data \textless{}{-}}\StringTok{ }\KeywordTok{na.omit}\NormalTok{(sample.data)}
\NormalTok{dist.sample \textless{}{-}}\StringTok{ }\NormalTok{sample.data }\OperatorTok{\%\textgreater{}\%}\StringTok{ }\KeywordTok{group\_by}\NormalTok{(my.gender, my.ethnicity) }\OperatorTok{\%\textgreater{}\%}\StringTok{ }\NormalTok{dplyr}\OperatorTok{::}\KeywordTok{summarise}\NormalTok{(}\DataTypeTok{count =} \KeywordTok{n}\NormalTok{())}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## `summarise()` regrouping output by 'my.gender' (override with `.groups` argument)
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(gridExtra)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
##
## Attaching package: 'gridExtra'
\end{verbatim}
\begin{verbatim}
## The following object is masked from 'package:dplyr':
##
## combine
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{pdf}\NormalTok{(}\StringTok{"pdfs/sample.pdf"}\NormalTok{, }\DataTypeTok{height=}\DecValTok{2}\NormalTok{, }\DataTypeTok{width=}\DecValTok{3}\NormalTok{)}
\KeywordTok{grid.table}\NormalTok{(dist.sample)}
\KeywordTok{dev.off}\NormalTok{()}
\KeywordTok{hist}\NormalTok{(sample.data}\OperatorTok{$}\NormalTok{age, }\DataTypeTok{breaks=}\KeywordTok{seq}\NormalTok{(}\DecValTok{10}\NormalTok{,}\DecValTok{100}\NormalTok{,}\DecValTok{10}\NormalTok{), }\DataTypeTok{ylim=}\KeywordTok{c}\NormalTok{(}\DecValTok{0}\NormalTok{,}\FloatTok{0.05}\NormalTok{), }\DataTypeTok{freq=}\OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{dim}\NormalTok{(sample.data)}
\KeywordTok{str}\NormalTok{(sample.data)}
\NormalTok{summary.sample \textless{}{-}}\KeywordTok{summary}\NormalTok{(sample.data[,}\KeywordTok{c}\NormalTok{(}\DecValTok{13}\OperatorTok{:}\DecValTok{18}\NormalTok{)])}
\KeywordTok{library}\NormalTok{(gridExtra)}
\KeywordTok{pdf}\NormalTok{(}\StringTok{"pdfs/summary{-}sample.pdf"}\NormalTok{, }\DataTypeTok{height=}\DecValTok{3}\NormalTok{, }\DataTypeTok{width=}\DecValTok{10}\NormalTok{)}
\KeywordTok{grid.table}\NormalTok{(summary.sample)}
\KeywordTok{dev.off}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\subsubsection{Generate Summary Statistic Tables}
\label{sec:appendix-setup-tables}
Below is the code to generate the summary statistics and save them as the
table shown in Table \ref{table:correlation}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{path.project =}\StringTok{ "C:/\_git\_/WSU\_STATS419\_FALL2020/project{-}measure/"}\NormalTok{;}
\NormalTok{path.tables =}\StringTok{ }\KeywordTok{paste0}\NormalTok{(path.project,}\StringTok{"tables/"}\NormalTok{);}
\KeywordTok{createDirRecursive}\NormalTok{(path.tables);}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{file.correlation =}\StringTok{ }\KeywordTok{paste0}\NormalTok{(path.tables,}\StringTok{"correlation{-}table.tex"}\NormalTok{)}
\NormalTok{myData =}\StringTok{ }\KeywordTok{as.matrix}\NormalTok{(sample.data[,}\KeywordTok{c}\NormalTok{(}\DecValTok{3}\OperatorTok{:}\DecValTok{8}\NormalTok{)]) }\CommentTok{\# numeric values only, only what will appear in table}
\CommentTok{\# https://www.overleaf.com/read/srzhrcryjpwn}
\CommentTok{\# keepaspectratio of include graphics }
\CommentTok{\# could scale \textbackslash{}input if still too big ...}
\CommentTok{\# https://tex.stackexchange.com/questions/13460/scalebox{-}knowing{-}how{-}much{-}it{-}scales\#13487}
\KeywordTok{buildLatexCorrelationTable}\NormalTok{(myData, }
\DataTypeTok{rotateTable =} \OtherTok{FALSE}\NormalTok{,}
\DataTypeTok{width.table =} \FloatTok{0.80}\NormalTok{, }\CommentTok{\# best for given data ... 0.95 when rotateTable = FALSE}
\CommentTok{\# 0.60 when rotateTable = TRUE}
\DataTypeTok{myCaption =} \StringTok{"Overall Descriptive Statistics and Correlation Analysis"}\NormalTok{,}
\DataTypeTok{myFile =}\NormalTok{ file.correlation,}
\DataTypeTok{myNames =} \KeywordTok{colnames}\NormalTok{(myData),}
\DataTypeTok{showOnes =} \StringTok{"left"}\NormalTok{)}
\KeywordTok{Sys.sleep}\NormalTok{(}\DecValTok{2}\NormalTok{) }\CommentTok{\# in case Knit{-}PDF doesn\textquotesingle{}t like that I just created the file...}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{file.correlation =}\StringTok{ }\KeywordTok{paste0}\NormalTok{(path.tables,}\StringTok{"male{-}correlation{-}table.tex"}\NormalTok{)}
\NormalTok{male\_sample.data \textless{}{-}}\StringTok{ }\NormalTok{sample.data[sample.data}\OperatorTok{$}\NormalTok{my.gender}\OperatorTok{==}\StringTok{\textquotesingle{}m\textquotesingle{}}\NormalTok{,]}
\NormalTok{myData =}\StringTok{ }\KeywordTok{as.matrix}\NormalTok{(male\_sample.data[,}\KeywordTok{c}\NormalTok{(}\DecValTok{3}\OperatorTok{:}\DecValTok{8}\NormalTok{)]) }\CommentTok{\# numeric values only, only what will appear in table}
\CommentTok{\# https://www.overleaf.com/read/srzhrcryjpwn}
\CommentTok{\# keepaspectratio of include graphics }
\CommentTok{\# could scale \textbackslash{}input if still too big ...}
\CommentTok{\# https://tex.stackexchange.com/questions/13460/scalebox{-}knowing{-}how{-}much{-}it{-}scales\#13487}
\KeywordTok{buildLatexCorrelationTable}\NormalTok{(myData, }
\DataTypeTok{myLabel =} \StringTok{"table:male{-}correlation"}\NormalTok{,}
\DataTypeTok{rotateTable =} \OtherTok{FALSE}\NormalTok{,}
\DataTypeTok{width.table =} \FloatTok{0.80}\NormalTok{, }\CommentTok{\# best for given data ... 0.95 when rotateTable = FALSE}
\CommentTok{\# 0.60 when rotateTable = TRUE}
\DataTypeTok{myCaption =} \StringTok{"Descriptive Statistics and Correlation Analysis in male"}\NormalTok{,}
\DataTypeTok{myFile =}\NormalTok{ file.correlation,}
\DataTypeTok{myNames =} \KeywordTok{colnames}\NormalTok{(myData),}
\DataTypeTok{width.names =} \StringTok{"15mm"}\NormalTok{,}
\DataTypeTok{space.M.SD =} \StringTok{"0.1mm"}\NormalTok{,}
\DataTypeTok{space.SD.corr =} \StringTok{"0.5mm"}\NormalTok{,}
\DataTypeTok{space.between =} \StringTok{"0.1mm"}\NormalTok{,}
\DataTypeTok{showOnes =} \StringTok{"center"}\NormalTok{)}
\KeywordTok{Sys.sleep}\NormalTok{(}\DecValTok{2}\NormalTok{) }\CommentTok{\# in case Knit{-}PDF doesn\textquotesingle{}t like that I just created the file...}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{file.correlation =}\StringTok{ }\KeywordTok{paste0}\NormalTok{(path.tables,}\StringTok{"female{-}correlation{-}table.tex"}\NormalTok{)}
\NormalTok{female\_sample.data \textless{}{-}}\StringTok{ }\NormalTok{sample.data[sample.data}\OperatorTok{$}\NormalTok{my.gender}\OperatorTok{==}\StringTok{\textquotesingle{}f\textquotesingle{}}\NormalTok{,]}
\NormalTok{myData =}\StringTok{ }\KeywordTok{as.matrix}\NormalTok{(female\_sample.data[,}\KeywordTok{c}\NormalTok{(}\DecValTok{3}\OperatorTok{:}\DecValTok{8}\NormalTok{)]) }\CommentTok{\# numeric values only, only what will appear in table}
\CommentTok{\# https://www.overleaf.com/read/srzhrcryjpwn}
\CommentTok{\# keepaspectratio of include graphics }
\CommentTok{\# could scale \textbackslash{}input if still too big ...}
\CommentTok{\# https://tex.stackexchange.com/questions/13460/scalebox{-}knowing{-}how{-}much{-}it{-}scales\#13487}
\KeywordTok{buildLatexCorrelationTable}\NormalTok{(myData,}
\DataTypeTok{myLabel =} \StringTok{"table:female{-}correlation"}\NormalTok{, }
\DataTypeTok{rotateTable =} \OtherTok{FALSE}\NormalTok{,}
\DataTypeTok{width.table =} \FloatTok{0.80}\NormalTok{, }\CommentTok{\# best for given data ... 0.95 when rotateTable = FALSE}
\CommentTok{\# 0.60 when rotateTable = TRUE}
\DataTypeTok{myCaption =} \StringTok{"Descriptive Statistics and Correlation Analysis in female"}\NormalTok{,}
\DataTypeTok{myFile =}\NormalTok{ file.correlation,}
\DataTypeTok{myNames =} \KeywordTok{colnames}\NormalTok{(myData),}
\DataTypeTok{width.names =} \StringTok{"15mm"}\NormalTok{,}
\DataTypeTok{space.M.SD =} \StringTok{"0.1mm"}\NormalTok{,}
\DataTypeTok{space.SD.corr =} \StringTok{"0.5mm"}\NormalTok{,}
\DataTypeTok{space.between =} \StringTok{"0.1mm"}\NormalTok{,}
\DataTypeTok{showOnes =} \StringTok{"center"}\NormalTok{)}
\KeywordTok{Sys.sleep}\NormalTok{(}\DecValTok{2}\NormalTok{) }\CommentTok{\# in case Knit{-}PDF doesn\textquotesingle{}t like that I just created the file...}
\end{Highlighting}
\end{Shaded}
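The two gender-specific calls above differ only in the data subset, the label, the caption, and the output file. As a minimal sketch (assuming \texttt{buildLatexCorrelationTable} accepts exactly the arguments used above), both tables could be produced in a single loop:

\begin{verbatim}
# Sketch: build the male and female correlation tables in one loop,
# using the same columns (3:8) and layout arguments as the calls above.
for (g in c("male", "female")) {
  code     <- substr(g, 1, 1)                       # 'm' or 'f'
  sub.data <- sample.data[sample.data$my.gender == code, ]
  myData   <- as.matrix(sub.data[, c(3:8)])
  buildLatexCorrelationTable(myData,
      myLabel       = paste0("table:", g, "-correlation"),
      rotateTable   = FALSE,
      width.table   = 0.80,
      myCaption     = paste0("Descriptive Statistics and Correlation Analysis in ", g),
      myFile        = paste0(path.tables, g, "-correlation-table.tex"),
      myNames       = colnames(myData),
      width.names   = "15mm",
      space.M.SD    = "0.1mm",
      space.SD.corr = "0.5mm",
      space.between = "0.1mm",
      showOnes      = "center")
  Sys.sleep(2)
}
\end{verbatim}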
\subsubsection{Generate Summary Statistic Figures}
\label{sec:appendix-setup-figures}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{head.height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{head.height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(height.m, height.f,}
\DataTypeTok{main =} \StringTok{"height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{140}\NormalTok{,}\DecValTok{220}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(arm.span.m, arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{140}\NormalTok{,}\DecValTok{220}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(floor.hip.m, floor.hip.f,}
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{80}\NormalTok{,}\DecValTok{160}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(head.height.m, head.height.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{16}\NormalTok{,}\DecValTok{35}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(hand.length.m, hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{16}\NormalTok{,}\DecValTok{35}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(foot.length.m, foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\DecValTok{16}\NormalTok{,}\DecValTok{35}\NormalTok{),}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
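The block above builds one pair of male/female vectors per measurement by hand. A more compact alternative, sketched below with the same columns and plot settings (the per-panel y-limits from above are dropped for brevity), loops over the measurement names:

\begin{verbatim}
# Sketch: the same 2x3 panel of male-vs-female boxplots, driven by a loop
# over the measurement columns instead of hand-built vectors.
measures <- c("height", "arm.span", "floor.hip",
              "head.height", "hand.length", "foot.length")
par(mfrow = c(2, 3))
for (m in measures) {
  boxplot(sample.data[[m]][sample.data$my.gender == "m"],
          sample.data[[m]][sample.data$my.gender == "f"],
          main   = m,
          xlab   = "male vs female",
          ylab   = "unit(cm)",
          col    = c("orange", "red"),
          border = "brown",
          notch  = TRUE)
}
\end{verbatim}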
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Would certain ethnicity change the measurements between male and female?}
\NormalTok{a.height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}} \OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.head.height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.head.height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{a.foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{w.height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}} \OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.head.height.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.head.height.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{head.height[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{w.foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{6}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(a.height.m, a.height.f,}
\DataTypeTok{main =} \StringTok{"height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(157.5, 162, 169, 172.86, 177.5, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.arm.span.m, a.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(160, 165.5, 174.5, 177.8, 181.5, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.floor.hip.m, a.floor.hip.f, }
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(a.head.height.m, a.head.height.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(18.5, 20.795, 22.75, 24, 28, 19, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.hand.length.m, a.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(a.foot.length.m, a.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(22.5, 23.875, 25.375, 26.25, 26.67, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(w.height.m, w.height.f,}
\DataTypeTok{main =} \StringTok{"height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.arm.span.m, w.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.floor.hip.m, w.floor.hip.f, }
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.head.height.m, w.head.height.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.hand.length.m, w.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.foot.length.m, w.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(w.height.m, a.height.f,}
\DataTypeTok{main =} \StringTok{"height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.arm.span.m, a.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.floor.hip.m, a.floor.hip.f, }
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.head.height.m, a.head.height.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.hand.length.m, a.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(w.foot.length.m, a.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"white male vs asian female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(a.height.m, w.height.f,}
\DataTypeTok{main =} \StringTok{"height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(157.5, 162, 169, 172.86, 177.5, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.arm.span.m, w.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(160, 165.5, 174.5, 177.8, 181.5, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.floor.hip.m, w.floor.hip.f, }
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(a.head.height.m, w.head.height.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(18.5, 20.795, 22.75, 24, 28, 18, : some
## notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(a.hand.length.m, w.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(a.foot.length.m, w.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"asian male vs white female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"unit(cm)"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(22.5, 23.875, 25.375, 26.25, 26.67, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
What is the best way to represent proportions of the measurements? One density-based alternative is sketched after the boxplot code below.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{p.arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{p.arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{p.floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{p.floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{p.head.h.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{p.head.h.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{p.hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{p.hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\NormalTok{p.foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\NormalTok{]}
\NormalTok{p.foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\NormalTok{]}
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(p.arm.span.m, p.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(p.floor.hip.m, p.floor.hip.f, }
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(p.head.h.m, p.head.h.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(p.hand.length.m, p.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(p.foot.length.m, p.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"male vs female"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
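As one alternative answer to the question posed before the boxplot code, the proportions can also be compared with overlaid density curves. The sketch below uses base R and the arm-span proportion vectors already defined above; the same pattern applies to the other proportions.

\begin{verbatim}
# Sketch: overlaid density curves as an alternative view of a proportion
# (arm span / height) for males vs. females.
d.m <- density(na.omit(p.arm.span.m))
d.f <- density(na.omit(p.arm.span.f))
plot(d.m, main = "arm span / height", xlab = "proportion",
     ylim = range(0, d.m$y, d.f$y), col = "orange", lwd = 2)
lines(d.f, col = "red", lwd = 2)
legend("topright", legend = c("male", "female"),
       col = c("orange", "red"), lwd = 2)
\end{verbatim}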
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{ap.arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.head.h.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.head.h.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{ap.foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}a\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.arm.span.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.arm.span.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.arm.span[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.floor.hip.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.floor.hip.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.floor.hip[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.head.h.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.head.h.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.head.h[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.hand.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.hand.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.hand.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.foot.length.m \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}m\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
\NormalTok{wp.foot.length.f \textless{}{-}}\StringTok{ }\NormalTok{sample.data}\OperatorTok{$}\NormalTok{p.foot.length[sample.data}\OperatorTok{$}\NormalTok{my.gender }\OperatorTok{==}\StringTok{ \textquotesingle{}f\textquotesingle{}}\OperatorTok{\&}\NormalTok{sample.data}\OperatorTok{$}\NormalTok{my.ethnicity }\OperatorTok{==}\StringTok{\textquotesingle{}w\textquotesingle{}}\NormalTok{]}
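\CommentTok{\# Notched boxplots in a 2 x 5 grid: male vs female proportions within each ethnicity (a = Asian, w = white)}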
\KeywordTok{par}\NormalTok{(}\DataTypeTok{mfrow =} \KeywordTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{5}\NormalTok{))}
\KeywordTok{boxplot}\NormalTok{(ap.arm.span.m, ap.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(ap.floor.hip.m, ap.floor.hip.f,}
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(0.477941176470588, 0.526795735129068, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(ap.head.h.m, ap.head.h.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(0.108823529411765, 0.121212121212121, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(ap.hand.length.m, ap.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(0.102601156069364, 0.107252706993406, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(ap.foot.length.m, ap.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"am vs af"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
## Warning in bxp(list(stats = structure(c(0.141509433962264, 0.146318342151675, :
## some notches went outside hinges ('box'): maybe set notch=FALSE
\end{verbatim}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{boxplot}\NormalTok{(wp.arm.span.m, wp.arm.span.f,}
\DataTypeTok{main =} \StringTok{"arm span"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(wp.floor.hip.m, wp.floor.hip.f,}
\DataTypeTok{main =} \StringTok{"leg length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(wp.head.h.m, wp.head.h.f,}
\DataTypeTok{main =} \StringTok{"head height"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(wp.hand.length.m, wp.hand.length.f,}
\DataTypeTok{main =} \StringTok{"hand length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\KeywordTok{boxplot}\NormalTok{(wp.foot.length.m, wp.foot.length.f,}
\DataTypeTok{main =} \StringTok{"foot length"}\NormalTok{,}
\DataTypeTok{xlab =} \StringTok{"wm vs wf"}\NormalTok{,}
\DataTypeTok{ylab =} \StringTok{"proportion"}\NormalTok{,}
\DataTypeTok{col =} \KeywordTok{c}\NormalTok{(}\StringTok{"orange"}\NormalTok{, }\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{),}
\DataTypeTok{border =} \StringTok{"brown"}\NormalTok{,}
\DataTypeTok{notch =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
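\CommentTok{\# Mean proportion for each gender/ethnicity group, assembled into one data frame and plotted as a grouped bar chart}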
\NormalTok{comb\textless{}{-}}\KeywordTok{c}\NormalTok{(}\StringTok{\textquotesingle{}af\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}am\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}wf\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}wm\textquotesingle{}}\NormalTok{)}
\NormalTok{comb \textless{}{-}}\KeywordTok{as.factor}\NormalTok{(comb)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{)}
\NormalTok{height \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\KeywordTok{mean}\NormalTok{(ap.head.h.f), }\KeywordTok{mean}\NormalTok{(ap.head.h.m), }\KeywordTok{mean}\NormalTok{(wp.head.h.f), }\KeywordTok{mean}\NormalTok{(wp.head.h.m))}
\NormalTok{head\_height \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\KeywordTok{mean}\NormalTok{(ap.arm.span.f), }\KeywordTok{mean}\NormalTok{(ap.arm.span.m), }\KeywordTok{mean}\NormalTok{(wp.arm.span.f), }\KeywordTok{mean}\NormalTok{(wp.arm.span.m))}
\NormalTok{arm\_span \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\KeywordTok{mean}\NormalTok{(ap.floor.hip.f), }\KeywordTok{mean}\NormalTok{(ap.floor.hip.m), }\KeywordTok{mean}\NormalTok{(wp.floor.hip.f), }\KeywordTok{mean}\NormalTok{(wp.floor.hip.m))}
\NormalTok{leg\_length \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\KeywordTok{mean}\NormalTok{(ap.hand.length.f), }\KeywordTok{mean}\NormalTok{(ap.hand.length.m), }\KeywordTok{mean}\NormalTok{(wp.hand.length.f), }\KeywordTok{mean}\NormalTok{(wp.hand.length.m))}
\NormalTok{hand\_length \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{mean\textless{}{-}}\KeywordTok{c}\NormalTok{(}\KeywordTok{mean}\NormalTok{(ap.foot.length.f), }\KeywordTok{mean}\NormalTok{(ap.foot.length.m), }\KeywordTok{mean}\NormalTok{(wp.foot.length.f), }\KeywordTok{mean}\NormalTok{(wp.foot.length.m))}
\NormalTok{foot\_length \textless{}{-}}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(comb, mean)}
\NormalTok{head\_height}\OperatorTok{$}\NormalTok{name \textless{}{-}}\StringTok{\textquotesingle{}head\_height\textquotesingle{}}
\NormalTok{arm\_span}\OperatorTok{$}\NormalTok{name \textless{}{-}}\StringTok{\textquotesingle{}arm\_span\textquotesingle{}}
\NormalTok{leg\_length}\OperatorTok{$}\NormalTok{name \textless{}{-}}\StringTok{\textquotesingle{}leg\_length\textquotesingle{}}
\NormalTok{hand\_length}\OperatorTok{$}\NormalTok{name \textless{}{-}}\StringTok{\textquotesingle{}hand\_length\textquotesingle{}}
\NormalTok{foot\_length}\OperatorTok{$}\NormalTok{name \textless{}{-}}\StringTok{ \textquotesingle{}foot\_length\textquotesingle{}}
\NormalTok{d \textless{}{-}}\StringTok{ }\KeywordTok{rbind}\NormalTok{(head\_height, arm\_span, leg\_length, hand\_length, foot\_length)}
\NormalTok{p \textless{}{-}}\StringTok{ }\KeywordTok{ggplot}\NormalTok{(d, }\KeywordTok{aes}\NormalTok{(comb, mean, }\DataTypeTok{fill=}\NormalTok{name))}\OperatorTok{+}
\StringTok{ }\KeywordTok{geom\_bar}\NormalTok{(}\DataTypeTok{position =} \StringTok{\textquotesingle{}dodge\textquotesingle{}}\NormalTok{, }\DataTypeTok{stat=}\StringTok{\textquotesingle{}identity\textquotesingle{}}\NormalTok{)}\OperatorTok{+}
\StringTok{ }\KeywordTok{xlab}\NormalTok{(}\StringTok{"Asian/white male and female"}\NormalTok{)}\OperatorTok{+}
\StringTok{ }\KeywordTok{ylab}\NormalTok{(}\StringTok{"proportion as a function of height"}\NormalTok{)}
\NormalTok{p}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Two{-}way ANOVA with interaction effect}
\NormalTok{res.aov2 \textless{}{-}}\StringTok{ }\KeywordTok{aov}\NormalTok{(height }\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{my.gender }\OperatorTok{*}\StringTok{ }\NormalTok{my.ethnicity, }\DataTypeTok{data =}\NormalTok{ sample.data)}
\KeywordTok{summary}\NormalTok{(res.aov2)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{two.way \textless{}{-}}\StringTok{ }\KeywordTok{aov}\NormalTok{(height }\OperatorTok{\textasciitilde{}}\StringTok{ }\NormalTok{my.gender }\OperatorTok{*}\StringTok{ }\NormalTok{my.ethnicity, }\DataTypeTok{data =}\NormalTok{ sample.data)}
\KeywordTok{summary}\NormalTok{(two.way)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
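\CommentTok{\# One{-}sided Welch two{-}sample t{-}tests (alternative: first group mean is greater)}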
\KeywordTok{t.test}\NormalTok{(height.m, height.f,}
\DataTypeTok{alternative =} \StringTok{"greater"}\NormalTok{,}
\DataTypeTok{mu =} \DecValTok{0}\NormalTok{, }\DataTypeTok{paired =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{var.equal =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{t.test}\NormalTok{(a.height.m, w.height.f,}
\DataTypeTok{alternative =} \StringTok{"greater"}\NormalTok{,}
\DataTypeTok{mu =} \DecValTok{0}\NormalTok{, }\DataTypeTok{paired =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{var.equal =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{t.test}\NormalTok{(a.floor.hip.m, a.floor.hip.f,}
\DataTypeTok{alternative =} \StringTok{"greater"}\NormalTok{,}
\DataTypeTok{mu =} \DecValTok{0}\NormalTok{, }\DataTypeTok{paired =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{var.equal =} \OtherTok{FALSE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(ggplot2)}
\KeywordTok{library}\NormalTok{(plyr)}
\KeywordTok{library}\NormalTok{(dplyr)}
\NormalTok{csample.data \textless{}{-}}\StringTok{ }\KeywordTok{ddply}\NormalTok{(sample.data, }\StringTok{"my.gender"}\NormalTok{, summarise, }\DataTypeTok{height.mean=}\KeywordTok{mean}\NormalTok{(height))}
\NormalTok{csample.data \textless{}{-}}\StringTok{ }\KeywordTok{na.omit}\NormalTok{(csample.data)}
\NormalTok{csample.data}
\CommentTok{\# Density plots with semi{-}transparent fill}
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{height, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom\_vline}\NormalTok{(}\DataTypeTok{data=}\NormalTok{csample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{xintercept=}\NormalTok{height.mean, }\DataTypeTok{colour=}\NormalTok{my.gender), }\DataTypeTok{linetype=}\StringTok{"dashed"}\NormalTok{, }\DataTypeTok{size=}\DecValTok{1}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{labs}\NormalTok{(}\DataTypeTok{x =} \StringTok{"height(cm)"}\NormalTok{,}
\DataTypeTok{title =} \StringTok{"Kernel density estimate of height"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{asample.data \textless{}{-}}\StringTok{ }\KeywordTok{ddply}\NormalTok{(sample.data, }\StringTok{"my.gender"}\NormalTok{, summarise, }\DataTypeTok{arm.span.mean=}\KeywordTok{mean}\NormalTok{(arm.span))}
\NormalTok{asample.data \textless{}{-}}\StringTok{ }\KeywordTok{na.omit}\NormalTok{(asample.data)}
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{arm.span, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom\_vline}\NormalTok{(}\DataTypeTok{data=}\NormalTok{asample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{xintercept=}\NormalTok{arm.span.mean, }\DataTypeTok{colour=}\NormalTok{my.gender), }\DataTypeTok{linetype=}\StringTok{"dashed"}\NormalTok{, }\DataTypeTok{size=}\DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{floor.hip, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{head.height, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{hand.length, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{ggplot}\NormalTok{(sample.data, }\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x=}\NormalTok{foot.length, }\DataTypeTok{fill=}\NormalTok{my.gender)) }\OperatorTok{+}\StringTok{ }\KeywordTok{geom\_density}\NormalTok{(}\DataTypeTok{alpha=}\NormalTok{.}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# cols = colnames(sample.data[,{-}c(1:2)]);}
\CommentTok{\# rows = sample.data$my.gender;}
\CommentTok{\# }
\CommentTok{\# df = removeColumnsFromDataFrame(sample.data, "my.gender");}
\CommentTok{\# df.t = transposeMatrix(df);}
\CommentTok{\# colnames(df.t) = rows;}
\CommentTok{\# }
\CommentTok{\# sample.t = as.data.frame(df.t);}
\CommentTok{\# sample.t;}
\CommentTok{\# }
\CommentTok{\# X = removeColumnsFromDataFrame(sample.data,"my.gender");}
\CommentTok{\# X}
\CommentTok{\# X.kmeans = kmeans(X, 3); \# default algorithm}
\CommentTok{\# stars(X.kmeans$centers, len = 0.5, key.loc = c(4, 3),}
\CommentTok{\# main = "Algorithm: DEFAULT [Hartigan{-}Wong] \textbackslash{}n Stars of KMEANS=3", draw.segments = TRUE);}
\CommentTok{\# print(X.kmeans)}
\end{Highlighting}
\end{Shaded}
%% appendices go here!
\newpage
\theendnotes
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% biblio %%%%%%%%
\newpage
\begin{auxmulticols}{2}
\singlespacing
\bibliography{./../biblio/master.bib}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% biblio %%%%%%%%
\end{auxmulticols}
\newpage
{
\hypersetup{linkcolor=black}
\setcounter{tocdepth}{3}
\tableofcontents
}
\end{document} | {
"alphanum_fraction": 0.7186172428,
"avg_line_length": 59.5289256198,
"ext": "tex",
"hexsha": "e0f073559b6e55f256e1cc94f7287344b619e895",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ff1e5c4a1ed8cd254281bae9f8c2f5cae38082fa",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "minju-lee92/WSU_STATS419_FALL2020",
"max_forks_repo_path": "project-measure/project-measure-writeup.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ff1e5c4a1ed8cd254281bae9f8c2f5cae38082fa",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "minju-lee92/WSU_STATS419_FALL2020",
"max_issues_repo_path": "project-measure/project-measure-writeup.tex",
"max_line_length": 1238,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ff1e5c4a1ed8cd254281bae9f8c2f5cae38082fa",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "minju-lee92/WSU_STATS419_FALL2020",
"max_stars_repo_path": "project-measure/project-measure-writeup.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 33630,
"size": 108045
} |
\section{About land based explosive devices}\label{mas_23}
The most common purpose of landmines and unexploded ordnance (UXO) is not necessarily to kill the victim, but to disable personnel and/or equipment. This can result in long-term medical and psychological trauma and places a financial burden both on the affected individuals and on mine action efforts \cite{LandmineMonitor2019}. Areas covered in mines can restrict access to clean water, arable land, roads, healthcare services, and facilities \cite{OxfordAcademic2005}. Therefore, it is easy to see the benefits that can be gained from demining.
\subsection{What is a landmine}
\begin{wrapfigure}{R}{0.47\linewidth}
\vspace{-8mm}
\centering
\includegraphics[width=\linewidth]{00 - Images/composition of vehicle landmine.png}
\caption{Composition of Vehicle Landmine \cite{NAP10071}}
\label{fig:comp_veh_mine}
\end{wrapfigure}
Landmines are explosive devices with an unmanned trigger mechanism. They are constructed specifically to disable either vehicles or personnel. The content of a mine varies based on its intended target, but all mines have a trigger mechanism and a main explosive charge. Detonation usually relies on an explosive chain reaction initiated by the trigger mechanism. As shown in figure \ref{fig:comp_veh_mine}, the chronological order of initiation is the fuse, then the detonator or igniter, followed by the booster charge and finally the main charge. \cite{NAP10071}
\subsection{Mine types}\label{mine_types}
For landmines to be effective against “enemy” forces they have to be hidden out of sight while remaining close enough for their detonation to release a critical force upon the target. Landmines come in many shapes and sizes, ranging from round to square and from large to small. The various types of mines serve different purposes and can be divided into two main categories: \gls{ap} mines and \gls{at} mines.
\subsubsection*{\gls{ap} Blast mines}
\begin{wrapfigure}{R}{0.5\linewidth}
\vspace{-8mm}
\centering
\includegraphics[width=0.5\linewidth]{00 - Images/vs-50-001.jpg}
\caption{VS-50 Blast mine ø45mm \cite{vs-50}}
\label{fig:vs-50}
\end{wrapfigure}
These mines are usually placed on or dug into the ground, often camouflaged, and are designed to detonate when the victim steps on the fuse. Depending on the situation, this mine type does not necessarily kill the victim; instead, it disables the victim. Because personnel are the designated targets, these mines are designed to be small: the diameter ranges from 45--200mm, the operating pressure ranges from 5--16kg, and they are usually circular or rectangular in shape \cite{mine_detection}\cite{landmines_and_mine_action}.
\newpage
\subsubsection*{\gls{ap} Fragmentation mines}
\paragraph{Bounding}
\begin{wrapfigure}{R}{0.5\linewidth}
\vspace{-8mm}
\centering
\includegraphics[width=0.5\linewidth]{00 - Images/pp-mi-sr-001.jpg}
\caption{PP Mi-Sr Bounding mine ø102mm\cite{pp-mi-sr}}
\label{fig:pp-mi-sr}
\end{wrapfigure}
mines are buried in the ground with the top detonator protruding and are designed to detonate when a victim comes into contact with the detonator. When activated, the booster charge ignites a propellant which lifts the mine approximately 1m into the air; at this point the main charge explodes, causing severe damage to the upper part of the body. The specifications of these mines are similar to those of the \gls{ap} Blast mines, except for the additional height needed for the propellant. They are normally cylindrical in shape and made of plastic or metal. \cite{mine_detection}
\paragraph{Directional}
\begin{wrapfigure}{R}{0.5\linewidth}
\vspace{-8mm}
\centering
\includegraphics[width=0.5\linewidth]{00 - Images/hamd-y-001.jpg}
\caption{HAMDY Egyptian Directional mine \cite{hamdy}}
\label{fig:hamdy}
\end{wrapfigure}
mines are placed on top of the ground supported by their bottom scissor legs. They are usually filled with steel balls or glass fragments, which upon detonation scatter in all directions and inflict severe injuries on the victim. Directional fragmentation mines are usually triggered by a tripwire or a remote detonator and come in rectangular or round shapes made of materials such as plastic or metal \cite{mine_detection}.
\subsubsection*{\gls{at} mines}
\begin{wrapfigure}[9]{R}{0.5\linewidth}
\vspace{-8mm}
\centering
\includegraphics[width=0.5\linewidth]{00 - Images/m19i-003a.jpg}
\caption{M19 \gls{at} mine \cite{m19}}
\label{fig:m19}
\end{wrapfigure}
These mines are usually much larger than the previously mentioned types because they carry a much larger main charge, making them able to disable a tank or vehicle; the diameter ranges from 45--200mm and the operating pressure is usually above 100kg. Furthermore, these mines come in circular, rectangular, and square shapes and are made of materials such as plastic and metal. \cite{mine_detection}
\subsection{Landmines in Conflict Zones}
In unconventional warfare there are no rules and thereby almost nothing to protect civilians, or the land itself, from being “affected” more than necessary by the conflict. In this case it is unlikely that the unconventional force has access to mass production of landmines, and it therefore makes use of improvised solutions. This can involve re-purposing other types of munition into creative but unorthodox landmines and explosive devices, also known as improvised explosive devices (IEDs).
\newpage
\iffalse
ORIGINAL SEKTION:
Mines are usually simple mechanisms that commonly consist of a container, the internal explosive material, and a trigger. These components can vary according to their intended purpose or accessibility \cite{LandmineDetectionTechniques2010}.
The container can be made with a variety of different materials, ex. plastic, wood, metal, or a combination of those three \cite{LandmineDetectionTechniques2010}. This raises a problem for the mine detection techniques since the materials and size of the explosive device can vary a lot. Improvised mines are usually made with materials available at hand \cite{DetectionAndLocalizationOfImprovisedDevices2010}. That includes both the casing, the explosive material, and the trigger. The trigger component is what will detonate the mine. The mine could be activated by an electronic or pressure sensor. If the mine is intended as an anti-personnel mine with a pressure sensor, then it usually requires a pressure between 5-16 kg to initiate. Anti-tank mines require more than 100kg pressure to initiate. Some mines are buried just below the ground while others have their triggers above the ground \cite{LandmineDetectionTechniques2010}.
The most common purpose of landmines and unexploded ordnance (UXO) is not necessarily to kill the victim, but to disable personnel and/or equipment. This can result in long-term medical and psychological trauma, as well as being a financial burden for the affected individuals. As well as the mine action, see Figure \ref{fig:contributions_by_thematic_sector_2018}. Areas covered in mines can restrict access to clean water, arable land, roads, healthcare services, and facilities \cite{OxfordAcademic2005}. Therefore, it is easy to see the benefits that can be gained from demining.
\fi
| {
"alphanum_fraction": 0.800082158,
"avg_line_length": 78.5268817204,
"ext": "tex",
"hexsha": "918528545b79c25b72a470dea960126849ad5763",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "777d6c3774e927268d3ba9b9a761d2353a1a782a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RiceCurry2/aau",
"max_forks_repo_path": "tex/02 - Sections/04-Mines_and_specifications.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "777d6c3774e927268d3ba9b9a761d2353a1a782a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RiceCurry2/aau",
"max_issues_repo_path": "tex/02 - Sections/04-Mines_and_specifications.tex",
"max_line_length": 936,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "777d6c3774e927268d3ba9b9a761d2353a1a782a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RiceCurry2/aau",
"max_stars_repo_path": "tex/02 - Sections/04-Mines_and_specifications.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1730,
"size": 7303
} |
\chapter{Project Outline}
\pagenumbering{arabic} \setcounter{page}{24}
Ever since the introduction of the DADA pipeline, which generates exact sequence variants instead of clustered sequence variants, there has been a debate in the metagenomics community. Numerous studies have reported that there is no statistically significant difference between the two approaches when high similarity cut-offs are used, yet the impact of these strategies on microbial co-occurrence networks remains an open question. Microbes do not dwell in isolation; instead, they thrive in colonies and form associations. These associations shape the patterns and structure of their microscopic world, which in turn governs the macroscopic world. To analyse these associative patterns, microbiologists have applied concepts from network science to these associations. The graphical form of these pairwise associations is called a microbial co-occurrence network. To draw ecologically sound inferences from such networks, the incorporation of phylogenetic measures is also crucial, as it increases interpretability and aids in elucidating the community assembly process. This project will test whether clustering microbiome gene sequences at different levels of sequence similarity (i.e. 97\% to 100\%) affects the inferred microbial interactions in correlation networks, particularly regarding the relative importance of habitat-linked associations. We will be using existing soil microbiome datasets from the Chernobyl Exclusion Zone (Ukraine), in addition to public datasets. | {
"alphanum_fraction": 0.8375568551,
"avg_line_length": 307.8,
"ext": "tex",
"hexsha": "5f241edf50c22de85803eb3e1c35a41148496b05",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bfa93af6bb8631ee0e19ac6005db72bbf18f0fae",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "spriyansh/metagenomics-CSG-2021",
"max_forks_repo_path": "literatureReview/tex/Aims.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bfa93af6bb8631ee0e19ac6005db72bbf18f0fae",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "spriyansh/metagenomics-CSG-2021",
"max_issues_repo_path": "literatureReview/tex/Aims.tex",
"max_line_length": 1466,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "bfa93af6bb8631ee0e19ac6005db72bbf18f0fae",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "spriyansh/metagenomics-CSG-2021",
"max_stars_repo_path": "literatureReview/tex/Aims.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 278,
"size": 1539
} |
\documentclass[11pt]{article}
\usepackage{a4wide,times}
\usepackage{parskip}
\usepackage{lmodern}
\usepackage{multicol}
\usepackage{url}
\usepackage{mathtools}
\usepackage{bm}
\usepackage{color}
\usepackage{float}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage{tgpagella}
\numberwithin{equation}{section}
\DeclareMathOperator*{\argmin}{\arg\min}
\begin{document}
\date{}
\title{\huge \bf{Using support vector machines and\\ decision tree-based methods for\\ analysing gene expression data}}\maketitle
\centerline{\Large {Machine Learning and Algorithms for Data Mining --- Assignment 2}}
\vspace{1em}
\centerline{(Word count: 2161)}
\vspace{2em}
\begin{abstract}
\noindent
\textit{This report presents a methodology for classification of gene expression data using support vector machines and various methods based on decision trees (bagging, random forests, adaptive and gradient boosting). The aim is to distinguish patients with rheumatoid arthritis from healthy controls by processing expression levels of 22,283 genes from three clinical groups in Jena, Berlin and Leipzig \cite{woetzel2014}. The first section gives an overview of the clinical problem and presents the theoretical aspects behind the previously mentioned algorithms. Following this, I describe the dataset pre-processing and software packages used to implement my methodology. Finally, the evaluation section describes the results obtained by applying the different methods to the gene expression data.}
\end{abstract}
\section{Introduction}
\subsection{Motivation}
\textit{Rheumatoid arthritis} (RA) is one of the most common forms of arthritis---a progressive, chronically inflammatory and destructive joint disease that is sustained by an invasive synovial membrane. Distinguishing RA patients from those affected by other inflammatory or degenerative joint diseases or healthy controls, based solely on differentially expressed genes, is difficult. Consequently, we require other unbiased approaches for classification; one such method is discussed by Woetzel et al. \cite{woetzel2014}, who employed rule-based classifiers to discriminate between RA, osteoarthritis (OA) and healthy controls.
Using a subset of the data described in their research, my work addresses binary classification, distinguishing RA patients from the control group. The machine learning methods that I applied---support vector machines and decision tree-based methods---are described in the following subsections.
\subsection{Support vector machines}
The \textit{support vector machine} (SVM) represents a supervised machine learning technique that performs binary classification by separating the classes with a \textit{hyperplane} in the feature space. For optimal performance, an SVM aims to find the biggest distance from the plane to the boundary data points.
A $p$-dimensional hyperplane is defined as:
\begin{equation}
\beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_pX_p = 0,
\end{equation}
such that if the data point $\mathbf{X} = (X_1, X_2, ..., X_p)^\top$ satisfies the equation, then it lies on the hyperplane.
Assuming the samples in our dataset take the form $(\mathbf{X}_i, Y_i)$, with $Y_i \in \{-1, +1\}$ (for binary classification), we aim to learn a function $f(\mathbf{X}) = \beta_0 + \beta_1X_1 + \beta_2X_2 + ... + \beta_pX_p$ that approximates the response variable $Y$ such that examples from one class lie on one side of the hyperplane ($f(\mathbf{X}) < 0$) and the others satisfy $f(\mathbf{X}) > 0$.
Then, we need to select the hyperplane that achieves the \textit{maximal margin} between the two classes, defined as the biggest perpendicular distance between the plane and the points closest to it.
This can be defined as a \textit{constraint optimisation problem}---find a set of values $\beta_i, i = 0, 1, 2, ..., p$ to maximise $M$, subject to $\sum_{i = 0}^{p} \beta_i^2 = 1$, such that:
\begin{equation}
\beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + ... + \beta_pX_{ip} \geq M, \forall i = 1, 2, ..., n,
\end{equation}
where $n$ is the number of training examples.
However, deriving the best hyperplane might result in overfitting and poorer generalisation ability of the SVM when exposed to unseen data. Instead, we can have a better performance on the test set by misclassifying a few training examples:
\begin{itemize}
\item introduce and find values for \textit{slack variables} $\epsilon_1, \epsilon_2, ..., \epsilon_n$ representing how far away each data point can be on the wrong side of the hyperplane;
\item re-design the constraint optimisation problem such that:
\begin{equation}
\beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + ... + \beta_pX_{ip} \geq M(1-\epsilon_i), \forall i = 1, 2, ..., n,
\end{equation}
with $\epsilon_i \geq 0$ and $\sum_{i = 1}^{n} \epsilon_i \leq C$.
\end{itemize}
Problems might still arise when we cannot linearly separate two classes. In this case, the solution is to \textit{enlarge the feature space} through various functions of features. \textit{Kernel functions} can quantify how similar two data points are, while avoiding complexities such as computing polynomials in feature space (which quickly becomes intractable). The kernels I used are listed below; a small numerical sketch of each follows the list:
\begin{enumerate}
\item linear kernel (corresponds to the support vector classifier):
\begin{equation}
K(x_i, x_{i'}) = \sum_{j = 1}^{p} x_{ij} x_{i'j},
\end{equation}
\item polynomial kernel:
\begin{equation}
K(x_i, x_{i'}) = \bigg(1 + \sum_{j = 1}^{p} x_{ij} x_{i'j}\bigg)^d,
\end{equation}
\item radial (Gaussian) kernel (also known as RBF kernel):
\begin{equation}
K(x_i, x_{i'}) = \exp\bigg(-\gamma\sum_{j = 1}^{p}(x_{ij} - x_{i'j})^2\bigg), \text{with } \gamma > 0.
\end{equation}
\end{enumerate}
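For illustration (this snippet is not part of the evaluation pipeline; the feature vectors and hyperparameter values are arbitrary), the three kernels can be computed directly with \texttt{numpy}:
\begin{verbatim}
import numpy as np

def linear_kernel(x, z):
    return np.dot(x, z)

def polynomial_kernel(x, z, d=2):
    return (1 + np.dot(x, z)) ** d

def rbf_kernel(x, z, gamma=0.1):
    return np.exp(-gamma * np.sum((x - z) ** 2))

# Two arbitrary feature vectors (p = 4)
x_i = np.array([1.0, 2.0, 0.5, 3.0])
x_j = np.array([0.8, 1.9, 0.7, 2.5])

print(linear_kernel(x_i, x_j))      # inner product
print(polynomial_kernel(x_i, x_j))  # (1 + <x_i, x_j>)^d with d = 2
print(rbf_kernel(x_i, x_j))         # exp(-gamma * ||x_i - x_j||^2) with gamma = 0.1
\end{verbatim}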
\subsection{Decision trees}
Widely used for discriminative tasks, a decision tree learns a hierarchy of \textit{if/else} questions which eventually lead to the classification of the given data point. Nodes in the tree can either represent \textit{questions} or \textit{decisions}---in the latter case, the node is a leaf---thereby providing a series of binary splits in the feature plane. These splits produce non-overlapping regions that correspond to classes.
Overfitting usually occurs when the decision tree represents a highly complex model of the training data, such that all its leaves are \textit{pure} (the tree obtains 100\% accuracy on the training set). This can be solved by either:
\begin{itemize}
\item early stopping of the tree construction (\textit{pre-pruning})---can be done by limiting the maximum depth or number of leaves of the tree;
\item removing/collapsing nodes which contain little information (\textit{pruning}).
\end{itemize}
At each step, we choose the variable that best splits the set of items according to an \textit{impurity} measure; the one used in this study is the Gini measure:
\begin{equation}
H(N_m) = \sum_{k = 1}^{C} p_{mk} (1 - p_{mk}),
\end{equation}
where $p_{mk}$ is the proportion of examples belonging to class $k$ at node $m$. The algorithm I used for decision trees is CART (Classification and Regression Trees), which constructs binary trees using the feature and threshold that give the largest information gain upon every split.
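As an illustrative aside, the Gini measure and the weighted impurity of a candidate binary split can be computed as follows (the helper functions are mine, not part of \texttt{scikit-learn}):
\begin{verbatim}
import numpy as np

def gini(labels):
    # H(N_m) = sum_k p_mk * (1 - p_mk)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * (1 - p))

def split_impurity(left_labels, right_labels):
    # Impurity of the two child nodes, weighted by their sizes
    n_l, n_r = len(left_labels), len(right_labels)
    return (n_l * gini(left_labels) + n_r * gini(right_labels)) / (n_l + n_r)

y_left = np.array([0, 0, 0, 1])      # mostly healthy controls
y_right = np.array([1, 1, 1, 1, 0])  # mostly RA patients
print(gini(y_left), gini(y_right), split_impurity(y_left, y_right))
\end{verbatim}
CART evaluates this quantity for every candidate feature and threshold and keeps the split with the lowest weighted impurity, which corresponds to the largest information gain.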
\subsection{Bagging}
Decision trees are easy to construct, but they can produce noisy or weak classifiers. More powerful machine learning models can be obtained by \textit{ensembling} multiple classifiers. One of these approaches is called \textit{bagging} and aims to improve prediction performance by reducing the variance of the classifier and obtaining smoother decision boundaries. The bagging algorithm proceeds in the following manner:
\begin{itemize}
\item draw $B$ bootstrap samples $\mathbf{S'}_i, i = 1, 2, ..., B$ with replacement from the dataset $\mathbf{S}$;
\item use each sample $\mathbf{S'}_i$ to train a separate decision tree and record its out-of-bag (OOB) error rate on the examples from $\mathbf{S}$ that were not included in $\mathbf{S'}_i$;
\item average the recorded values and provide a final estimate for the OOB error.
\end{itemize}
The classification performance is improved, as averaging $B$ such estimates reduces the variance of the OOB error from an initial value of $\sigma^2$ to $\frac{\sigma^2}{B}$, assuming the individual estimates are approximately uncorrelated. A hand-written sketch of this procedure is given below.
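For illustration only, the procedure can be sketched by hand with bootstrap indices; the results reported later rely on \texttt{scikit-learn}'s \texttt{BaggingClassifier} rather than this simplified version, which assumes binary labels in $\{0, 1\}$ stored in \texttt{numpy} arrays:
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_oob_error(Xs, ys, B=50, seed=0):
    rng = np.random.RandomState(seed)
    n = len(ys)
    votes = np.zeros((n, 2))  # class votes from trees that did not train on the example
    for _ in range(B):
        boot = rng.randint(0, n, size=n)        # bootstrap sample, with replacement
        oob = np.setdiff1d(np.arange(n), boot)  # out-of-bag examples for this tree
        tree = DecisionTreeClassifier().fit(Xs[boot], ys[boot])
        if len(oob) > 0:
            votes[oob, tree.predict(Xs[oob])] += 1
    covered = votes.sum(axis=1) > 0             # examples left out by at least one tree
    return np.mean(votes[covered].argmax(axis=1) != ys[covered])
\end{verbatim}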
\subsection{Random forests}
A further refinement to decision trees, \textit{random forests} (RFs) improve on the bagging approach by de-correlating the trees. This is crucial when we encounter a strong predictor variable in the dataset; most trees will first split the data by this predictor, so all of the bagged trees will be similar and their predictions highly correlated. This in turn will hurt the variance, since averaging many highly correlated quantities does not reduce the variance as much as if the quantities are uncorrelated.
RFs solve this issue by considering only a random subset of $m$ of the $p$ predictors at each split in the tree. The values I used for $m$ were $m = \sqrt{p}$ and $m = \log_2p$.
\subsection{Boosting}
\textit{Boosting} approaches improve on random forests by averaging trees which are grown to weighted versions of the training set. I use two types of boosting:
\begin{itemize}
\item gradient boosting, which takes three hyperparameters ($B$---the number of steps, $\epsilon$---the shrinkage factor, $d$---the maximum tree depth) and proceeds as follows: starting from a model $\hat{G} = 0$ and a residual vector $\mathbf{r} = \mathbf{y}$, at each step we fit a regression tree $\tilde{g}_i$ to the current residuals, then we update the model $\hat{G}_i = \hat{G}_{i-1} + \epsilon \cdot \tilde{g}_i$ and the residuals $r_i = r_i - (\epsilon \cdot \tilde{g}_i)(\mathbf{X}_i)$ (a short sketch of this loop is given after the list);
\item adaptive boosting (AdaBoost): a special case of gradient boosting that assigns higher importance to more poorly fitted training examples.
\end{itemize}
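The gradient-boosting loop can be sketched as follows for this binary task (an illustrative re-implementation rather than the \texttt{GradientBoostingClassifier} used for the results below; the $0.5$ decision threshold assumes labels in $\{0, 1\}$):
\begin{verbatim}
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(Xs, ys, B=25, eps=0.1, d=3):
    trees = []
    r = ys.astype(float)              # residual vector, initially r = y
    for _ in range(B):
        g = DecisionTreeRegressor(max_depth=d).fit(Xs, r)
        trees.append(g)
        r = r - eps * g.predict(Xs)   # update the residuals
    return trees

def boosted_predict(trees, Xs, eps=0.1):
    scores = eps * sum(g.predict(Xs) for g in trees)  # the additive model G_hat
    return (scores > 0.5).astype(int)
\end{verbatim}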
\section{Implementation}
All the work presented in this report has been carried out in the Python programming language, using various libraries that I will describe in the following subsections.
\subsection{Processing the dataset}
The data I used to evaluate the various machine learning models was downloaded from the following GEO DataSet browser locations and represents three clinical groups in Jena, Berlin and Leipzig:
\begin{verbatim}
https://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS5401
https://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS5402
https://www.ncbi.nlm.nih.gov/sites/GDSbrowser?acc=GDS5403
\end{verbatim}
For the purpose of the report methodology, I only extracted the columns representing healthy controls (30 in total) and rheumatoid arthritis patients (33 across the three datasets) for binary classification. The \texttt{GEOparse} library \cite{geoparse}, based on the R library \texttt{GEOquery}, allowed me to parse the data from the SOFT files in a suitable format---a table which facilitated the retrieval of data for the mentioned patients, as they are indexed by columns. First, I loaded the SOFT files into Python:
\begin{verbatim}
gds = GEOparse.get_GEO(filepath="./GDS5401_full.soft")
\end{verbatim}
Next, I prepared containers for the predictor (gene expression levels) and response variables (indicator whether the person is a healthy control or an RA patient):
\begin{verbatim}
Xs = np.empty(shape=(num_samples, num_features), dtype=float)
ys = np.empty(shape=(num_samples,), dtype=int)
\end{verbatim}
To extract data for the healthy controls (C) and rheumatoid arthritis patients (RA), I first looked at the corresponding ranges on the GEO webpages, under the 'Sample Subsets' tab (e.g. the GDS5403 dataset contains RA patients in the range of identifiers GSM13373\textbf{14}--GSM13373\textbf{26}).
After retrieving the columns through \texttt{cols = gds.table.columns}, I filtered the relevant data from the rest (the ranges [\texttt{l\_C}, ..., \texttt{r\_C}] and [\texttt{l\_RA}, ..., \texttt{r\_RA}] are the ones identified as described above):
\begin{verbatim}
curr_sample = 0
for key in cols:
# Check if this column corresponds to patient data
if 'GSM' in key:
if l_C <= key[-2:] and key[-2:] <= r_C:
# Class = healthy control
ys[curr_sample] = 0
elif l_RA <= key[-2:] and key[-2:] <= r_RA:
# Class = RA patient
ys[curr_sample] = 1
else:
# Other data - not interested
continue
# Record gene expression levels for the current person
Xs[curr_sample, :] = gds.table[key]
curr_sample += 1
\end{verbatim}
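After repeating this extraction for each of the three SOFT files, the per-dataset arrays can be stacked into a single design matrix. The \texttt{Xs\_jena}, \texttt{Xs\_berlin} and \texttt{Xs\_leipzig} names below are hypothetical and only illustrate the step:
\begin{verbatim}
X_all = np.concatenate([Xs_jena, Xs_berlin, Xs_leipzig], axis=0)
y_all = np.concatenate([ys_jena, ys_berlin, ys_leipzig], axis=0)
# 30 healthy controls + 33 RA patients, each with 22,283 gene expression levels
assert X_all.shape == (63, 22283) and y_all.shape == (63,)
\end{verbatim}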
\subsection{Machine learning models}
Classification methodologies require evaluation and comparison; the \texttt{scikit-learn} library \cite{scikit-learn} provides simple interfaces to many ML models, including SVMs, decision trees, random forests and ensembling approaches such as bagging and boosting. Furthermore, the library contains useful evaluation metrics (e.g. confusion matrix, recall, precision, F1 score).
The following lines illustrate how to instantiate all the classifiers I used for this report:
\begin{verbatim}
SVM_linear = svm.SVC(kernel='linear', C=C)
SVM_poly = svm.SVC(kernel='poly', C=C, degree=deg)
SVM_rbf = svm.SVC(kernel='rbf', C=C, gamma=gamma)
dt = tree.DecisionTreeClassifier(max_depth=max_depth)
b = BaggingClassifier(base_estimator=None, # defaults to a decision tree
n_estimators=num_trees,
oob_score=True)
rf = RandomForestClassifier(n_estimators=num_trees,
max_features=max_features, # log2 or sqrt
oob_score=True)
gb = GradientBoostingClassifier(n_estimators=num_trees,
max_depth=max_depth)
adab = AdaBoostClassifier(n_estimators=num_trees)
\end{verbatim}
The performance of SVMs, decision trees, gradient boosting and adaptive boosting was evaluated using the metrics below and saved to pickled files (the \texttt{pickle} module performs Python object serialization):
\begin{verbatim}
cm = confusion_matrix(y_test, y_pred)
recall = recall_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
# Save the results in a pickled format
res = { "cm" : cm, "recall" : recall, "precision" : precision, "f1" : f1 }
pickle.dump(res, open(filename, "wb"))
\end{verbatim}
Bagging and random forests perform bootstrap sampling with replacement, so their evaluation consists of out-of-bag error estimates:
\begin{verbatim}
oob_error = 1 - b.oob_score_
oob_error = 1 - rf.oob_score_
\end{verbatim}
\section{Evaluation}
All machine learning models described in this report were evaluated with 20 random train/test partitions of the dataset and the results were averaged, in order to provide a better indicator of the real accuracy. As the dataset is quite small---63 examples in total---a single model is likely to encounter high variance on other data, so the repeated random partitioning helps reduce this variance.
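A minimal sketch of this repeated-split evaluation is shown below; the 70/30 split ratio and the choice of the linear SVM as the classifier are assumptions made for illustration only:
\begin{verbatim}
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

f1_scores = []
for seed in range(20):  # 20 random train/test partitions
    X_train, X_test, y_train, y_test = train_test_split(
        X_all, y_all, test_size=0.3, random_state=seed)
    clf = svm.SVC(kernel='linear', C=0.1).fit(X_train, y_train)
    f1_scores.append(f1_score(y_test, clf.predict(X_test)))
print(sum(f1_scores) / len(f1_scores))  # average F1 over the partitions
\end{verbatim}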
\subsection{SVMs}
To evaluate the performance of SVM models, I trained and tested one separate model for every combination of the following parameters:
\begin{itemize}
\item for all SVMs: $C = \{0.1, 1, 10\}$;
\item polynomial kernel SVMs: $\textit{degree} = \{2, 3, ..., 15\}$;
\item RBF kernel SVMs: $\gamma = \{0.1, 1, 10\}$.
\end{itemize}
Table \ref{table:resSVM} describes the best results for each type of SVM (chosen by the highest value for the F1 score, which incorporates both precision and recall), along with the corresponding hyperparameters.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& Precision & Recall & F1 \\
\hline
Linear kernel ($C = 0.1$) & \bf{0.92} & \bf{1.0} & \bf{0.96} \\
\hline
Polynomial kernel ($C = 0.1, d = 2$) & 0.91 & 0.97 & 0.93 \\
\hline
RBF kernel ($C = 0.1, \gamma = 0.1$) & 0.64 & 1.0 & 0.77 \\
\hline
\end{tabular}
\caption{Results for the best \textbf{SVM} models.}
\label{table:resSVM}
\end{center}
\end{table}
It can be noticed that a linear boundary separates the two classes well enough, while doing feature expansion through a second-degree polynomial kernel already reduces the performance of the SVM. With an F1 score of 0.77, the Gaussian kernel is clearly unsuitable for discriminating between RA and healthy controls, which is in agreement with the linear kernel performing so well (the Gaussian kernel is not likely to cover the area outside the one determined by training examples).
\subsection{Decision tree-based methods}
A single \textbf{decision tree} was evaluated with no bounds on the maximum depth, and then limited by $depth = \{1, 2, 4, 7\}$. As seen in Table \ref{table:resDT}, the unbounded tree obtains the best performance (only slightly better than a tree with depth 4). This might be due to numerous splits being required to finally make a decision regarding the class, using all of the 22,283 features.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
& Precision & Recall & F1 \\
\hline
Depth = 1 & 0.851 & 0.884 & 0.854 \\
\hline
Depth = 2 & 0.859 & 0.897 & 0.866 \\
\hline
Depth = 4 & 0.894 & 0.924 & 0.896 \\
\hline
Depth = 7 & 0.874 & 0.884 & 0.866 \\
\hline
Depth = $\infty$ (unbounded) & \bf{0.879} & \bf{0.938} & \bf{0.903} \\
\hline
\end{tabular}
\caption{Results for a single \textbf{decision tree}.}
\label{table:resDT}
\end{center}
\end{table}
\textbf{Bagging} is expected to improve on the performance of a single decision tree; here, only the OOB error estimate is available. Figure 1 shows that it tends to oscillate between $0.6$ and $0.9$ for most values of $t$. However, the minimum value for the OOB error is $0.038$ (so a proportion of 96.2\% correctly classified out-of-bag examples), at $t = 33$ trees.
\begin{figure}[H]
\includegraphics[width=\textwidth]{bagging.png}
\caption{Out-of-bag (OOB) error rates for the \textbf{bagging} method.}
\end{figure}
Even further improvements (once again, expected) can be noted from the results of evaluating \textbf{random forests}. This is an effect of limiting the number of features considered at each split, such that the predictions of the obtained trees are decorrelated. Figure 2 indicates that when we allow only $m = \sqrt{p}$ and $m = \log_2p$ features upon splitting, the minimum values for the OOB error are both equal to $0.0$, at $t = 79$ trees and $t = 64$ trees, respectively. However, this estimate might change if the dataset used for evaluation is considerably larger, but it still indicates a significant improvement over bagging. The graph also confirms the fact that, once a sufficient number of trees is reached, the performance cannot degrade.
\begin{figure}[H]
\includegraphics[width=\textwidth]{rf.png}
\caption{Out-of-bag (OOB) error rates for \textbf{random forests}, shown in \textcolor{blue}{blue} for $m = \sqrt{p}$ and \textcolor{green}{green} for $m = \log_2p$.}
\end{figure}
\textbf{Gradient boosting} was evaluated with the following hyperparameters: maximum depth of the trees $d$ between 2 and 10, number of trees $t$ ranging from 10 to 100. The best performance (chosen by the largest F1 score) is achieved by ($d = 3$, $t = 25$):
\begin{itemize}
\item precision: 0.93
\item recall: 0.95
\item \underline{F1: 0.93}.
\end{itemize}
I evaluated the \textbf{adaptive boosting} approach by running a separate model for every number of trees between $10$ and $200$. As seen in Figure 3, the \underline{F1 metric peaks at $0.91$}, for $t = 74$ trees.
\begin{figure}[H]
\includegraphics[width=\textwidth]{adaboost.png}
\caption{F1 score as a function of the number of trees for \textbf{adaptive boosting}.}
\end{figure}
Although boosting methods usually produce more powerful discriminators than random forests, the size of the dataset might have affected the outcome in this study. Both gradient and adaptive boosting still perform better than a single decision tree---which is highly expected---but they fail to reach the (obviously superior) performance of random forests.
\section{Conclusion}
The machine learning algorithms discussed in this report---especially the decision tree-based ones---generally behaved as expected from the theoretical discussion when exposed to the RA/healthy controls data. The best performances were obtained by the linear SVM model (F1 score of 0.96) and the random forest approach (OOB error of 0.0), but using significantly larger datasets to train the classifiers might produce different results which are likely to indicate performances closer to the real ones.
In the case of SVMs, the superiority of the linear kernel over the polynomial and RBF ones indicates that a separating hyperplane provides a good approximation to the boundary between the two classes. Bagging was clearly outperformed by random forests, while decision trees were also surpassed by boosting approaches (F1 scores of 0.90, 0.91 and 0.93, respectively). Nevertheless, these results show an impressive performance for a dataset of 63 examples, illustrating the power of the various improvements that are added to a basic algorithm (e.g. taking bootstrap samples of the dataset to reduce variance and create smoother decision boundaries).
\newpage
\medskip
\begin{thebibliography}{9}
\bibitem{woetzel2014}
Woetzel, D., Huber, R., Kupfer, P., Pohlers, D., Pfaff, M., Driesch, D., ... \& Kinne, R. W. (2014). Identification of rheumatoid arthritis and osteoarthritis patients by transcriptome-based rule set generation. Arthritis research \& therapy, 16(2), R84.
\bibitem{mldm}
Jamnik, M., Li\`o, P., \& Sauerwald, T. (2017). Lecture notes for \textit{Machine Learning and Algorithms for Data Mining} (University of Cambridge Computer Laboratory - MPhil in Advanced Computer Science).
\bibitem{geoparse}
Gumienny, R. (2015). GEOparse: Python library to access Gene Expression Omnibus Database (GEO). \texttt{https://geoparse.readthedocs.io/en/latest/introduction.html}.
\bibitem{scikit-learn}
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., \& Duchesnay, E. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825--2830.
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7524376062,
"avg_line_length": 66.7402985075,
"ext": "tex",
"hexsha": "6f4e48af75c70a766725325977f5d4a0b08c5c0c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "85945aef1b1beb8553012d778a450adf50290fab",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "catalina17/GE-analysis",
"max_forks_repo_path": "mldm2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "85945aef1b1beb8553012d778a450adf50290fab",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "catalina17/GE-analysis",
"max_issues_repo_path": "mldm2.tex",
"max_line_length": 802,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "85945aef1b1beb8553012d778a450adf50290fab",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "catalina17/GA-analysis",
"max_stars_repo_path": "mldm2.tex",
"max_stars_repo_stars_event_max_datetime": "2017-11-27T14:39:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-11-27T14:39:13.000Z",
"num_tokens": 5843,
"size": 22358
} |
\section{Data serialization}\label{sec:dsdl_data_serialization}
\newcommand{\hugett}[1]{\texttt{\huge{#1}}}
\subsection{General principles}
\subsubsection{Design goals}
The main design principle behind the serialized representations described in this section is
the maximization of compatibility with native representations used by currently existing and
likely future computer microarchitectures.
The goal is to ensure that the serialized representations defined by DSDL match internal data representations of
modern computers, so that, ideally, a typical system will not have to perform any data conversion whatsoever while
exchanging data over a UAVCAN network.
The implicit truncation and implicit zero extension rules introduced in this section are designed to
facilitate structural subtyping and to enable extensibility of data types while retaining backward compatibility.
This is a conscious trade-off between runtime type checking and long-term stability guarantees.
This model assumes that data type compatibility is determined statically and is not, normally, enforced at runtime.
\subsubsection{Bit and byte ordering}
The smallest atomic data entity is a bit.
Eight bits form one byte;
within the byte, the bits are ordered so that the least significant bit is considered first (0-th index),
and the most significant bit is considered last (7-th index).
Numeric values consisting of multiple bytes are arranged so that the least significant byte is encoded first;
such format is also known as little-endian.
\begin{figure}[H]
$$
\overset{\text{bit index}}{%
\underbrace{%
\overset{\text{M}}{\overset{7}{\hugett{0}}}
\overset{6}{\hugett{1}}
\overset{5}{\hugett{0}}
\overset{4}{\hugett{1}}
\overset{3}{\hugett{0}}
\overset{2}{\hugett{1}}
\overset{1}{\hugett{0}}
\overset{\text{L}}{\overset{0}{\hugett{1}}}
}_\text{least significant byte}%
}
\hugett{\ldots}
\overset{\text{bit index}}{%
\underbrace{%
\overset{\text{M}}{\overset{7}{\hugett{0}}}
\overset{6}{\hugett{1}}
\overset{5}{\hugett{0}}
\overset{4}{\hugett{1}}
\overset{3}{\hugett{0}}
\overset{2}{\hugett{1}}
\overset{1}{\hugett{0}}
\overset{\text{L}}{\overset{0}{\hugett{1}}}
}_\text{most significant byte}%
}
$$
\caption{Bit and byte ordering\label{fig:dsdl_serialization_bit_ordering}}
\end{figure}
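\begin{remark}
The following is an illustrative, non-normative Python sketch of the ordering rules above:
bytes are ordered least-significant first, and within each byte the bit with index 0 is the least significant one.
\begin{minted}{python}
value = 0xA55A                             # an arbitrary 16-bit example value
data = value.to_bytes(2, "little")         # least significant byte comes first
print([hex(b) for b in data])              # ['0x5a', '0xa5']

bit = lambda byte, i: (byte >> i) & 1      # bit i of a byte (0 = LSB, 7 = MSB)
print([bit(data[0], i) for i in range(8)]) # bits of the least significant byte
\end{minted}
\end{remark}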
\subsubsection{Implicit truncation of excessive data}\label{sec:dsdl_serialization_implicit_truncation}
When a serialized representation is deserialized, implementations shall ignore
any excessive (unused) data or padding bits remaining upon deserialization\footnote{%
The presence of unused data should not be considered an error.
}.
The total size of the serialized representation is reported either by the underlying transport layer, or,
in the case of nested objects, by the \emph{delimiter header}
(section \ref{sec:dsdl_serialization_composite_non_sealed}).
As a consequence of the above requirement the transport layer can introduce
additional zero padding bits at the end of a serialized representation
to satisfy data size granularity constraints.
Non-zero padding bits are not allowed\footnote{%
Because padding bits may be misinterpreted as part of the serialized representation.
}.
\begin{remark}
Because of implicit truncation a serialized representation constructed from an instance of type $B$ can be
deserialized into an instance of type $A$ as long as $B$ is a structural subtype of $A$.
Let $x$ be an instance of data type $B$, which is defined as follows:
\begin{minted}{python}
float32 parameter
float32 variance
\end{minted}
Let $A$ be a structural supertype of $B$, being defined as follows:
\begin{minted}{python}
float32 parameter
\end{minted}
Then the serialized representation of $x$ can be deserialized into an instance of $A$.
The topic of data type compatibility is explored in detail in section~\ref{sec:dsdl_versioning}.
\end{remark}
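\begin{remark}
The following non-normative Python sketch illustrates the implicit truncation rule using the types $A$ and $B$
defined above; \texttt{float32} fields are IEEE 754 binary32 values serialized least-significant-byte first,
and the deserializer of $A$ simply ignores the trailing bytes.
\begin{minted}{python}
import struct

def serialize_B(parameter, variance):
    # Two float32 fields, little-endian.
    return struct.pack("<ff", parameter, variance)

def deserialize_A(buf):
    # A consists of a single float32; excess data is ignored, not an error.
    (parameter,) = struct.unpack_from("<f", buf, 0)
    return parameter

buf = serialize_B(1.25, 0.5)
print(deserialize_A(buf))  # 1.25 -- the 'variance' bytes are implicitly truncated
\end{minted}
\end{remark}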
\subsubsection{Implicit zero extension of missing data}\label{sec:dsdl_serialization_implicit_zero_extension}
For the purposes of deserialization routines,
the serialized representation of any instance of a data type shall \emph{implicitly} end with an
infinite sequence of bits with a value of zero (0)\footnote{%
This can be implemented by checking for out-of-bounds access during deserialization and returning zeros
if an out-of-bounds access is detected. This is where the name ``implicit zero extension rule'' is derived
from.
}.
Despite this rule, implementations are not allowed to intentionally truncate trailing zeros
upon construction of a serialized representation of an object\footnote{%
Intentional truncation is prohibited because a future revision of the specification may remove the implicit zero
extension rule.
If intentional truncation were allowed, removal of this rule would break backward compatibility.
}.
The total size of the serialized representation is reported either by the underlying transport layer, or,
in the case of nested objects, by the \emph{delimiter header}
(section \ref{sec:dsdl_serialization_composite_non_sealed}).
\begin{remark}
The implicit zero extension rule enables extension of data types by introducing additional fields
without breaking backward compatibility with existing deployments.
The topic of data type compatibility is explored in detail in section~\ref{sec:dsdl_versioning}.
The following example assumes that the reader is familiar with the variable-length array serialization rules,
explained in section~\ref{sec:dsdl_serialized_variable_length_array}.
Let the data type $A$ be defined as follows:
\begin{minted}{python}
uint8 scalar
\end{minted}
Let $x$ be an instance of $A$, where the value of \verb|scalar| is 4.
Let the data type $B$ be defined as follows:
\begin{minted}{python}
uint8[<256] array
\end{minted}
Then the serialized representation of $x$ can be deserialized into an instance of $B$ where the field
\verb|array| contains a sequence of four zeros: $0, 0, 0, 0$.
\end{remark}
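\begin{remark}
The following non-normative Python sketch illustrates the implicit zero extension rule using the types $A$ and $B$
defined above: reads past the end of the buffer return zero instead of failing.
\begin{minted}{python}
def read_u8(buf, offset):
    return buf[offset] if offset < len(buf) else 0  # implicit zero extension

def deserialize_B(buf):
    n = read_u8(buf, 0)                    # the implicit 8-bit array length field
    return [read_u8(buf, 1 + i) for i in range(n)]

serialized_A = bytes([4])                  # an instance of A with scalar = 4
print(deserialize_B(serialized_A))         # [0, 0, 0, 0]
\end{minted}
\end{remark}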
\subsubsection{Error handling}\label{sec:dsdl_serialized_error}
In this section and further, an object that nests other objects is referred to as an \emph{outer object}
in relation to the nested object.
Correct UAVCAN types shall have no serialization error states.
A deserialization process may encounter a serialized representation that does not belong to the
set of serialized representations of the data type at hand.
In such case, the invalid serialized representation shall be discarded and the implementation
shall explicitly report its inability to complete the deserialization process for the given input.
Correct UAVCAN types shall have no other deserialization error states.
Failure to deserialize a nested object renders the outer object invalid\footnote{%
Therefore, failure in a single deeply nested object propagates upward, rendering the entire structure invalid.
The motivation for such behavior is that if an inner object cannot be deserialized,
then the outer object is likely to be invalid as well.
}.
\subsection{Void types}\label{sec:dsdl_serialized_void}
The serialized representation of a void-typed field attribute is constructed as a sequence of zero bits.
The length of the sequence equals the numeric suffix of the type name.
When a void-typed field attribute is deserialized, the values of respective bits are ignored;
in other words, any bit sequence of correct length is a valid serialized representation
of a void-typed field attribute.
This behavior facilitates usage of void fields as placeholders for non-void fields
introduced in newer versions of the data type (section~\ref{sec:dsdl_versioning}).
\begin{remark}
The following data type will be serialized as a sequence of three zero bits $000_2$:
\begin{minted}{python}
void3
\end{minted}
The following bit sequences are valid serialized representations of the type:
$000_2$,
$001_2$,
$010_2$,
$011_2$,
$100_2$,
$101_2$,
$110_2$,
$111_2$.
Should the padding field be replaced with a non-void-typed field in a future version of the data type,
nodes utilizing the newer definition may be able to retain compatibility with nodes using older types,
since the specification guarantees that padding fields are always initialized with zeros:
\begin{minted}{python}
# Version 1.1
float64 a
void64
\end{minted}
\begin{minted}{python}
# Version 1.2
float64 a
float32 b # Messages v1.1 will be interpreted such that b = 0.0
void32
\end{minted}
\end{remark}
\subsection{Primitive types}
\subsubsection{General principles}
Implementations where native data formats are incompatible with those adopted by UAVCAN shall perform
conversions between the native formats and the corresponding UAVCAN formats during
serialization and deserialization.
Implementations shall avoid or minimize information loss and/or distortion caused by such conversions.
Serialized representations of instances of the primitive type category that are longer than one byte (8 bits)
are constructed as follows.
First, only the least significant bytes that contain the used bits of the value are preserved;
the rest are discarded following the lossy assignment policy selected by the specified cast mode.
Then the bytes are arranged in the least-significant-byte-first order\footnote{Also known as ``little endian''.}.
If the bit width of the value is not an integer multiple of eight (8), then the next value in the type will begin
at the next bit in the current byte. If there are no further values then the remaining bits
shall be zero (0).
\begin{remark}
The value $1110\,1101\,1010_2$ (3802 in base-10) of type \verb|uint12| is encoded as follows.
The bit sequence is shown in the base-2 system, where bytes (octets) are comma-separated:
$$
\overset{\text{byte 0}}{%
\underbrace{%
\overset{7}{\hugett{1}}
\overset{6}{\hugett{1}}
\overset{5}{\hugett{0}}
\overset{4}{\hugett{1}}
\overset{3}{\hugett{1}}
\overset{2}{\hugett{0}}
\overset{1}{\hugett{1}}
\overset{0}{\hugett{0}}
}_{\substack{\text{Least significant 8} \\ \text{bits of }3802_{10}}}%
}%
\hugett{,}%
\overset{\text{byte 1}}{%
\underbrace{
\overset{7}{\hugett{?}}
\overset{6}{\hugett{?}}
\overset{5}{\hugett{?}}
\overset{4}{\hugett{?}}
}_{\substack{\text{Next object} \\ \text{or zero} \\ \text{padding bits}}}%
\underbrace{
\overset{3}{\hugett{1}}
\overset{2}{\hugett{1}}
\overset{1}{\hugett{1}}
\overset{0}{\hugett{0}}
}_{\substack{\text{Most} \\ \text{significant} \\ \text{4 bits of} \\ \text{3802}_{10}}}%
}
$$
\end{remark}
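\begin{remark}
The following non-normative Python sketch reproduces the \verb|uint12| example above by appending the bits of the
value to the output starting from the least significant bit of the first byte.
\begin{minted}{python}
def pack_bits(fields):
    """fields: iterable of (value, bit_width); returns the packed bytes."""
    bits = []
    for value, width in fields:
        bits.extend((value >> i) & 1 for i in range(width))  # LSB first
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        out[i // 8] |= b << (i % 8)
    return bytes(out)

data = pack_bits([(3802, 12)])
print(f"{data[0]:08b} {data[1]:08b}")  # 11011010 00001110
\end{minted}
The upper four bits of the second byte are zero here; in a larger type they would be occupied by the next value.
\end{remark}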
\subsubsection{Boolean types}\label{sec:dsdl_serialized_bool}
The serialized representation of a value of type \verb|bool| is a single bit.
If the value represents falsity, the value of the bit is zero (0); otherwise, the value of the bit is one (1).
\subsubsection{Unsigned integer types}\label{sec:dsdl_serialized_unsigned_integer}
The serialized representation of an unsigned integer value of length $n$ bits
(which is reflected in the numerical suffix of the data type name)
is constructed as if the number were to be written in base-2 numerical system
with leading zeros preserved so that the total number of binary digits would equal $n$.
\begin{remark}
The serialized representation of integer 42 of type \verb|uint7| is $0101010_2$.
\end{remark}
\subsubsection{Signed integer types}
The serialized representation of a non-negative value of a signed integer type is constructed as described
in section~\ref{sec:dsdl_serialized_unsigned_integer}.
The serialized representation of a negative value of a signed integer type is computed by
applying the following transformation:
$$2^n + x$$
where $n$ is the bit length of the serialized representation
(which is reflected in the numerical suffix of the data type name)
and $x$ is the value whose serialized representation is being constructed.
The result of the transformation is a positive number,
whose serialized representation is then constructed as described in section~\ref{sec:dsdl_serialized_unsigned_integer}.
The representation described here is widely known as \emph{two's complement}.
\begin{remark}
The serialized representation of integer -42 of type \verb|int7| is $1010110_2$.
\end{remark}
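\begin{remark}
A non-normative Python sketch of the transformation above: a negative value $x$ stored in $n$ bits is encoded
as $2^n + x$.
\begin{minted}{python}
def encode_signed(x, n):
    # Two's complement: non-negative values are kept as-is.
    return x if x >= 0 else (1 << n) + x

print(format(encode_signed(-42, 7), "07b"))  # 1010110
\end{minted}
\end{remark}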
\subsubsection{Floating point types}
The serialized representation of floating point types follows the IEEE 754 series of standards as follows:
\begin{itemize}
\item \verb|float16| --- IEEE 754 binary16;
\item \verb|float32| --- IEEE 754 binary32;
\item \verb|float64| --- IEEE 754 binary64.
\end{itemize}
Implementations that model real numbers using any method other than IEEE 754 shall be able to model
positive infinity, negative infinity, signaling NaN\footnote{%
Per the IEEE 754 standard, NaN stands for
``not-a-number'' -- a set of special bit patterns that represent lack of a meaningful value.
}, and quiet NaN.
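\begin{remark}
A non-normative Python sketch showing that the serialized representations are plain little-endian IEEE 754 values;
the \texttt{struct} format characters \texttt{e}, \texttt{f} and \texttt{d} correspond to binary16, binary32 and
binary64, respectively.
\begin{minted}{python}
import struct

print(struct.pack("<e", 1.5).hex())  # 003e
print(struct.pack("<f", 1.5).hex())  # 0000c03f
print(struct.pack("<d", 1.5).hex())  # 000000000000f83f
\end{minted}
\end{remark}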
\subsection{Array types}
\subsubsection{Fixed-length array types}
Serialized representations of a fixed-length array of $n$ elements of type $T$ and
a sequence of $n$ field attributes of type $T$ are equivalent.
\begin{remark}
Serialized representations of the following two data type definitions are equivalent:
\begin{minted}{python}
AnyType[3] array
\end{minted}
\begin{minted}{python}
AnyType item_0
AnyType item_1
AnyType item_2
\end{minted}
\end{remark}
\subsubsection{Variable-length array types}\label{sec:dsdl_serialized_variable_length_array}
A serialized representation of a variable-length array consists of two segments:
the implicit length field immediately followed by the array elements.
The implicit length field is of an unsigned integer type.
The serialized representation of the implicit length field
is injected in the beginning of the serialized representation of its array.
The bit length of the unsigned integer value is first determined as follows:
$$b=\lceil{}\log_2 (c + 1)\rceil{}$$
where $c$ is the capacity (i.e., the maximum number of elements) of the variable-length array and
$b$ is the minimum number of bits needed to encode $c$ as an unsigned integer. An additional transformation
of $b$ ensures byte alignment of this implicit field when serialized\footnote{Future updates to the specification
may allow this second step to be modified but the default action will always be to byte-align the implicit
length field.}:
$$2^{\lceil{}\log_2 (\text{max}(8, b))\rceil{}}$$
The number of elements $n$ contained in the variable-length array is encoded
in the serialized representation of the implicit length field
as described in section~\ref{sec:dsdl_serialized_unsigned_integer}.
By definition, $n \leq c$; therefore, bit sequences where the implicit length field contains values
greater than $c$ do not belong to the set of serialized representations of the array.
The rest of the serialized representation is constructed as if the variable-length array was
a fixed-length array of $n$ elements\footnote{%
Observe that the implicit array length field, per its definition,
is guaranteed to never break the alignment of the following array elements.
There may be no padding between the implicit array length field and its elements.
}.
\begin{remark}
Data type authors must take into account that variable-length arrays with a capacity of $\leq{}255$ elements will
consume an additional 8 bits of the serialized representation
(while a capacity of 256 to 65535 elements will consume an additional 16 bits, and so on).
For example:
\begin{minted}{python}
uint8 first
uint8[<=6] second # The implicit length field is 8 bits wide
@assert _offset_.max / 8 <= 7 # This would fail.
\end{minted}
In the above example the author attempted to fit the message into a single Classic CAN frame but
did not account for the implicit length field. The correct version would be:
\begin{minted}{python}
uint8 first
uint8[<=5] second # The implicit length field is 8 bits wide
@assert _offset_.max / 8 <= 7 # This would pass.
\end{minted}
If the array contained three elements, the resulting set of its serialized representations would
be equivalent to that of the following definition:
\begin{minted}{python}
uint8 first
uint8 implicit_length_field # Set to 3, because the array contains three elements
uint8 item_0
uint8 item_1
uint8 item_2
\end{minted}
\end{remark}
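\begin{remark}
A non-normative Python sketch of the sizing rule for the implicit length field: first the minimum number of bits
$b$ needed to represent the capacity $c$, then rounding up to a byte-aligned power of two no smaller than 8.
\begin{minted}{python}
from math import ceil, log2

def length_field_bits(c):
    b = ceil(log2(c + 1))              # minimum bits to encode the capacity
    return 2 ** ceil(log2(max(8, b)))  # byte-aligned, at least one byte

print(length_field_bits(6))      # 8   (e.g. uint8[<=6])
print(length_field_bits(255))    # 8
print(length_field_bits(256))    # 16
print(length_field_bits(65535))  # 16
\end{minted}
\end{remark}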
\subsection{Composite types}\label{sec:dsdl_serialization_composite}
\subsubsection{Sealed structure}
A serialized representation of an object of a sealed composite type that is not a tagged union
is a sequence of serialized representations of its field attribute values joined into a bit sequence,
separated by padding if such is necessary to satisfy the alignment requirements.
The ordering of the serialized representations of the field attribute values follows the order
of field attribute declaration.
\begin{remark}
Consider the following definition,
where the fields are assigned runtime values shown in the comments:
\begin{minted}{python}
# decimal bit sequence comment
truncated uint12 first # +48858 1011_1110_1101_1010 overflow, MSB truncated
saturated int3 second # -1 111 two's complement
saturated int4 third # -5 1011 two's complement
saturated int2 fourth # -1 11 two's complement
truncated uint4 fifth # +136 1000_1000 overflow, MSB truncated
@sealed
\end{minted}
It can be seen that the bit layout is rather complicated because the field boundaries do not align with byte
boundaries, which makes it a good case study.
The resulting serialized byte sequence is shown below in the base-2 system:
$$
\underbrace{%
\overbrace{%
\underset{7}{\overset{7}{\hugett{1}}}%
\underset{6}{\overset{6}{\hugett{1}}}%
\underset{5}{\overset{5}{\hugett{0}}}%
\underset{4}{\overset{4}{\hugett{1}}}%
\underset{3}{\overset{3}{\hugett{1}}}%
\underset{2}{\overset{2}{\hugett{0}}}%
\underset{1}{\overset{1}{\hugett{1}}}%
\underset{0}{\overset{0}{\hugett{0}}}%
}^{\texttt{first}}%
}_{\texttt{byte 0}}%
\hugett{,}%
\underbrace{%
\overbrace{%
\underset{7}{\overset{0}{\hugett{1}}}%
}^{\texttt{third}}%
\overbrace{%
\underset{6}{\overset{2}{\hugett{1}}}%
\underset{5}{\overset{1}{\hugett{1}}}%
\underset{4}{\overset{0}{\hugett{1}}}%
}^{\texttt{second}}%
\overbrace{%
\underset{3}{\overset{11}{\hugett{1}}}%
\underset{2}{\overset{10}{\hugett{1}}}%
\underset{1}{\overset{9}{\hugett{1}}}%
\underset{0}{\overset{8}{\hugett{0}}}%
}^{\texttt{first}}%
}_{\texttt{byte 1}}%
\hugett{,}%
\underbrace{%
\overbrace{%
\underset{7}{\overset{2}{\hugett{0}}}%
\underset{6}{\overset{1}{\hugett{0}}}%
\underset{5}{\overset{0}{\hugett{0}}}%
}^{\texttt{fifth}}%
\overbrace{%
\underset{4}{\overset{1}{\hugett{1}}}%
\underset{3}{\overset{0}{\hugett{1}}}%
}^{\texttt{fourth}}%
\overbrace{%
\underset{2}{\overset{3}{\hugett{1}}}%
\underset{1}{\overset{2}{\hugett{0}}}%
\underset{0}{\overset{1}{\hugett{1}}}%
}^{\texttt{third}}%
}_{\texttt{byte 2}}%
\hugett{,}%
\underbrace{%
\overbrace{%
\underset{7}{\overset{?}{\hugett{?}}}%
\underset{6}{\overset{?}{\hugett{?}}}%
\underset{5}{\overset{?}{\hugett{?}}}%
\underset{4}{\overset{?}{\hugett{?}}}%
\underset{3}{\overset{?}{\hugett{?}}}%
\underset{2}{\overset{?}{\hugett{?}}}%
\underset{1}{\overset{?}{\hugett{?}}}%
}^{\substack{\text{Next object or} \\ \text{zero padding bits}}}
\overbrace{%
\underset{0}{\overset{3}{\hugett{1}}}%
}^{\texttt{fifth}}%
}_{\texttt{byte 3}}%
$$
Note that some of the complexity of the above illustration stems from the modern convention of writing
numbers with the most significant components on the left and the least significant components on the right.
If this convention were reversed, the bit sequence of each field in the composite would appear continuous
as it crossed byte boundaries. Using such a reversed representation is not recommended, however, because
the conventional order is deeply ingrained in most readers, tools, and technologies.
\end{remark}
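\begin{remark}
The byte sequence of the example above can be reproduced with the following non-normative Python sketch,
which appends each field to the bit stream least-significant-bit first after applying truncation
(for the truncated cast mode) or the two's-complement transformation (for negative values).
\begin{minted}{python}
def to_unsigned(x, n):
    return x & ((1 << n) - 1)   # truncation and two's complement in one step

def pack_bits(fields):
    bits = []
    for value, width in fields:
        v = to_unsigned(value, width)
        bits.extend((v >> i) & 1 for i in range(width))  # LSB first
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        out[i // 8] |= b << (i % 8)
    return bytes(out)

data = pack_bits([(48858, 12),   # first:  truncated uint12
                  (-1, 3),       # second: saturated int3
                  (-5, 4),       # third:  saturated int4
                  (-1, 2),       # fourth: saturated int2
                  (136, 4)])     # fifth:  truncated uint4
print(data.hex())                # dafe1d01
\end{minted}
\end{remark}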
\subsubsection{Sealed tagged union}
Similar to variable-length arrays, a serialized representation of a sealed tagged union consists of two segments:
the implicit \emph{union tag} value followed by the selected field attribute value.
The implicit union tag is an unsigned integer value whose serialized representation
is implicitly injected in the beginning of the serialized representation of its tagged union.
The bit length of the implicit union tag is determined as follows:
$$b=\lceil{}\log_2 n\rceil{}$$
where $n$ is the number of field attributes in the union, $n \geq 2$ and $b$ is the minimum number of bits needed
to encode $n$ as an unsigned integer. An additional transformation of $b$ ensures byte alignment of this implicit
field when serialized\footnote{Future updates to the specification may allow this second step to be modified but
the default action will always be to byte-align the implicit union tag field.}:
$$2^{\lceil{}\log_2 (\text{max}(8, b))\rceil{}}$$
Each of the tagged union field attributes is assigned an index according to the order of their definition;
the order follows that of the DSDL statements (see section~\ref{sec:dsdl_grammar} on statement ordering).
The first defined field attribute is assigned the index 0 (zero),
the index of each following field attribute is incremented by one.
The index of the field attribute whose value is currently held by the tagged union is encoded
in the serialized representation of the implicit union tag as described in section
\ref{sec:dsdl_serialized_unsigned_integer}.
By definition, $i < n$, where $i$ is the index of the current field attribute;
therefore, bit sequences where the implicit union tag field contains values
that are greater than or equal to $n$ do not belong to the set of serialized representations of the tagged union.
The serialized representation of the implicit union tag is immediately followed by
the serialized representation of the currently selected field attribute value\footnote{%
Observe that the implicit union tag field, per its definition,
is guaranteed to never break the alignment of the following field.
There may be no padding between the implicit union tag field and the selected field.
}.
\begin{remark}
Consider the following example:
\begin{minted}{python}
@sealed
@union # In this case, the implicit union tag is one byte wide
uint16 FOO = 42 # A regular constant attribute
uint16 a # Field index 0
uint8 b # Field index 1
uint32 BAR = 42 # Another regular constant attribute
float64 c # Field index 2
\end{minted}
In order to serialize the field \verb|b|, the implicit union tag shall be assigned the value 1.
The following type will have an identical layout:
\begin{minted}{python}
@sealed
uint8 implicit_union_tag # Set to 1
uint8 b # The actual value
\end{minted}
Suppose that the value of \verb|b| is 7.
The resulting serialized representation is shown below in the base-2 system:
$$%
\overset{\text{byte 0}}{%
\underbrace{\hugett{00000001}}_{\substack{\text{union} \\ \text{tag}}}%
}%
\hugett{,}%
\overset{\text{byte 1}}{%
\underbrace{\hugett{00000111}}_{\text{field }\texttt{b}}%
}
$$
\end{remark}
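\begin{remark}
A non-normative Python sketch of the example above: three field attributes give $b = 2$, which is rounded up to an
8-bit union tag, followed by the selected field (\verb|b|, index 1, value 7).
\begin{minted}{python}
from math import ceil, log2

def union_tag_bits(n_fields):
    b = ceil(log2(n_fields))           # minimum bits to encode the field index
    return 2 ** ceil(log2(max(8, b)))  # byte-aligned, at least one byte

assert union_tag_bits(3) == 8
serialized = bytes([1, 7])             # tag = 1 (field b), value = 7
print(serialized.hex())                # 0107
\end{minted}
\end{remark}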
\begin{remark}
Let the following data type be defined under the short name \verb|Empty| and version 1.0:
\begin{minted}{python}
# Empty. The only valid serialized representation is an empty bit sequence.
@sealed
\end{minted}
Consider the following union:
\begin{minted}{python}
@sealed
@union
Empty.1.0 none
AnyType.1.0 some
\end{minted}
The set of serialized representations of the union given above is equivalent to
that of the following variable-length array:
\begin{minted}{python}
@sealed
AnyType.1.0[<=1] maybe_some
\end{minted}
\end{remark}
\subsubsection{Delimited types}\label{sec:dsdl_serialization_composite_non_sealed}
Objects of delimited (non-sealed) composite types that are nested inside other objects\footnote{%
Of any type, not necessarily composite; e.g., arrays.
}
are serialized into opaque containers that consist of two parts:
the fixed-length \emph{delimiter header},
immediately followed by the serialized representation of the object as if it was of a sealed type.
Objects of delimited composite types that are \emph{not} nested inside other objects (i.e., top-level objects)
are serialized as if they were of a sealed type (without the delimiter header).
The delimiter header, therefore, logically belongs to the container object rather than the contained one.
\begin{remark}
Top-level objects do not require the delimiter header because a change in their length does not necessarily
affect backward compatibility, thanks to the implicit truncation rule
(section \ref{sec:dsdl_serialization_implicit_truncation}) and the implicit zero extension rule
(section \ref{sec:dsdl_serialization_implicit_zero_extension}).
\end{remark}
The delimiter header is an implicit field of type \verb|uint32| that encodes the length of the
serialized representation it precedes in bytes\footnote{%
Remember that by virtue of the padding requirement (section \ref{sec:dsdl_composite_alignment_cumulative_bls}),
the length of the serialized representation of a composite type is always an integer number of bytes.
}.
During deserialization, if the length of the serialized representation reported by its delimiter header
does not match the expectation of the deserializer,
the implicit truncation (section \ref{sec:dsdl_serialization_implicit_truncation})
and the implicit zero extension (section \ref{sec:dsdl_serialization_implicit_zero_extension})
rules apply.
The length encoded in a delimiter header cannot exceed the number of bytes remaining between the delimiter header
and the end of the serialized representation of the outer object.
Otherwise, the serialized representation of the outer object is invalid and is to be discarded
(section \ref{sec:dsdl_serialized_error}).
It is allowed for a sealed composite type to nest non-sealed composite types, and vice versa.
No special rules apply in such cases.
\begin{remark}
The resulting serialized representation of a delimited composite is identical to that of \verb|uint8[<2**32]|
(sans the higher alignment requirement):
the implicit array length field plays the role of the delimiter header,
and the array content is the serialized representation of the composite as if it were sealed.
The following illustrates why this is necessary for robust extensibility.
Suppose that some composite $C$ contains two fields whose types are $A$ and $B$.
The fields of $A$ are $a_0,\ a_1$;
likewise, $B$ contains $b_0,\ b_1$.
Suppose that $C^\prime$ is modified such that $A^\prime$ contains an extra field $a_2$.
If $A$ (and $A^\prime$) were sealed, this would result in the breakage of compatibility between $C$ and $C^\prime$
as illustrated in figure \ref{fig:dsdl_sealed_non_extensibility} because the positions of the fields of $B$
(which is sealed) would be shifted by the size of $a_2$.
The use of opaque containers allows the implicit truncation and the implicit zero extension rules to apply
at any level of nesting, enabling agents expecting $C$ to truncate $a_2$ away,
and enabling agents expecting $C^\prime$ to zero-extend $a_2$
if it is not present, as shown in figure \ref{fig:dsdl_non_sealed_extensibility},
where $H_A$ is the delimiter header of $A$.
Observe that it is irrelevant whether $C$ (same as $C^\prime$) is sealed or not.
\begin{figure}[H]
\centering
\begin{tabular}{r c c c c c}
\cline{2-5}
$C$ &
\multicolumn{1}{|c|}{$a_0$} & \multicolumn{1}{c|}{$a_1$}
&\multicolumn{1}{c|}{$b_0$} & \multicolumn{1}{c|}{$b_1$} &
\\\cline{2-5}
& $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ \\
\cline{2-6}
$C^\prime$ &
\multicolumn{1}{|c|}{$a_0$} & \multicolumn{1}{c|}{$a_1$} & \multicolumn{1}{c|}{$a_2$}
&\multicolumn{1}{c|}{$b_0$} & \multicolumn{1}{c|}{$b_1$}
\\\cline{2-6}
\end{tabular}
\caption{Non-extensibility of sealed types}
\label{fig:dsdl_sealed_non_extensibility}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{r c c c c c c}
\cline{2-7}
$C$ &
\multicolumn{1}{|c|}{$H_A$} & \multicolumn{1}{c|}{$a_0$} & \multicolumn{1}{c|}{$a_1$}
&\multicolumn{1}{c|}{\footnotesize{$\ldots$}}
&\multicolumn{1}{c|}{$b_0$} & \multicolumn{1}{c|}{$b_1$}
\\\cline{2-7}
& $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\
\cline{2-7}
$C^\prime$ &
\multicolumn{1}{|c|}{$H_A$} & \multicolumn{1}{c|}{$a_0$} & \multicolumn{1}{c|}{$a_1$} &
\multicolumn{1}{c|}{$a_2$}
&\multicolumn{1}{c|}{$b_0$} & \multicolumn{1}{c|}{$b_1$}
\\\cline{2-7}
\end{tabular}
\caption{Extensibility of delimited types with the help of the delimiter header}
\label{fig:dsdl_non_sealed_extensibility}
\end{figure}
This example also illustrates why the extent is necessary.
Per the rules set forth in \ref{sec:dsdl_composite_extent_and_sealing},
it is required that the extent (i.e., the buffer memory requirement) of $A$ shall be large enough to accommodate
serialized representations of $A^\prime$, and, therefore,
the extent of $C$ is large enough to accommodate serialized representations of $C^\prime$.
If that were not the case, then an implementation expecting $C$ would be unable to correctly process $C^\prime$
because the implicit truncation rule would have cut off $b_1$, which is unexpected.
The design decision to make the delimiter header a fixed width may not be obvious, so it is worth explaining.
There are two alternatives: making it variable-length and making the length a function of the extent
(section \ref{sec:dsdl_composite_extent_and_sealing}).
The first option does not align with the rest of the specification because DSDL does not make use of
variable-length integers (unlike some other formats, like Google Protobuf, for example),
and because a variable-length length {\footnotesize{(sic!)}} prefix would have somewhat complicated the
bit length set computation.
The second option would make nested hierarchies (composites that nest other composites) possibly highly fragile
because the change of the extent of a deeply nested type may inadvertently move the delimiter header of an
outer type into a different length category, which would be disastrous for compatibility and hard to spot.
There is an in-depth discussion of this issue (and other related matters) on the forum.
The fixed-length delimiter header may be considered large,
but delimited types tend to also be complex, which makes the overhead comparatively insignificant,
whereas sealed types that tend to be compact and overhead-sensitive do not contain the delimiter header.
\end{remark}
\begin{remark}
In order to efficiently serialize an object of a delimited type,
the implementation may need to perform a second pass to reach the delimiter header
after the object is serialized, because before that, the value of the delimiter header cannot be known
unless the object is of a fixed size (i.e., the cardinality of the bit length set is one).
Consider:
\begin{minted}{python}
uint8[<=4] x
\end{minted}
Let $\texttt{x} = \left[ 4, 2 \right]$,
then the nested serialized representation would be constructed as:
\begin{enumerate}
\item Memorize the current memory address $M_\text{origin}$.
\item Skip 32 bits.
\item Encode the length: 2 elements.
\item Encode $x_0 = 4$.
\item Encode $x_1 = 2$.
\item Memorize the current memory address $M_\text{current}$.
\item Go back to $M_\text{origin}$.
\item Encode a 32-bit wide value of $(M_\text{current} - M_\text{origin})$.
\item Go back to $M_\text{current}$.
\end{enumerate}
However, if the object is known to be of a constant size, the above can be simplified,
because there may be only one possible value of the delimiter header.
Automatic code generation tools should take advantage of this knowledge.
\end{remark}
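\begin{remark}
A non-normative Python sketch of the two-pass procedure above for a nested object of type \verb|uint8[<=4] x|
with $\texttt{x} = \left[4, 2\right]$: reserve the 32-bit delimiter header, serialize the object as if it were
sealed, then backfill the header with the resulting byte length.
\begin{minted}{python}
import struct

def serialize_nested(x):
    out = bytearray()
    header_at = len(out)
    out += b"\x00\x00\x00\x00"      # steps 1-2: reserve the delimiter header
    start = len(out)
    out.append(len(x))              # step 3: implicit 8-bit array length field
    out.extend(x)                   # steps 4-5: the array elements
    size = len(out) - start         # steps 6-9: backfill the header
    out[header_at:header_at + 4] = struct.pack("<I", size)
    return bytes(out)

print(serialize_nested([4, 2]).hex())  # 03000000020402
\end{minted}
\end{remark}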
\documentclass[11pt,a4paper]{article}
\usepackage{fontspec}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{lmodern}
\usepackage{minted}
\usepackage{booktabs}
\usepackage{paralist}
\usepackage[left=1.00in, right=1.00in, top=1.00in, bottom=1.00in]{geometry}
\title{Code Standards}
\author{Daniel Celis Garza}
\date{\today}
%\newcommand{\lfort}[1]{\mint{fortran}|#1|}
\begin{document}
%
\maketitle
\section{Introduction}
Documenting coding standards.
%
\section{General Practices}
%
\emph{Always} use \texttt{implicit none} before any variable declaration. Include only what will be used in the code via \texttt{use <module\textunderscore name>, only : <list>}.
Use spaces to clearly denote what is going on in the code. Try not to use \texttt{**} when raising to a low integer power. Should \texttt{**} become necessary, do not add spaces around it. Use line breaks to improve the code's readability. Align similar lines of code to make the code neater.
Modularise code and make subroutines as general as possible.
\section{Files}
%
\subsection{Makefile}
%
Makefiles should start with \texttt{make\textunderscore} followed by the main file's name. They should have, \emph{at minimum}, the following flags: array bounds, unused variables \& unallocated variables. Comments should be used to explain what each file and flag is needed/used for.
%
\subsection{Main}
%
Main filenames must be as clear and short as possible, where clarity $\gg$ length. Use the following naming structure: \texttt{<project>\textunderscore main}. The extension must be \texttt{.out} for Unix systems and \texttt{.exe} for Windows. It should only contain calls to other subroutines and functions.
%
\subsection{Modules and Sub-modules}
%
Do \emph{NOT} use global variables and/or parameters unless \emph{completely} necessary. Global parameters and variables have a tendency to bite one in the ass when one least expects it. If they seem necessary, such as in the case of user-defined parameters, use a module to read and store them so they become global variables defined at runtime.
Modules and submodules should be named under the same scheme as main files, but without the underscore and what follows it.
Limit a module's scope to a specific purpose.
%
\subsection{Data}
%
Data filenames should use either of the following naming structures: \texttt{<project>.dat} or \texttt{<project>\textunderscore <parameters>.dat}.
The top line of the data file should contain a short commented description of the file structure and purpose. Whenever possible, the column print order should be as shown in Table~\ref{t:dfpo}.
\begin{table}
\centering
\caption{Data file print order.}
\label{t:dfpo}
\begin{tabular}{cc}
\toprule
Independent Vars & Dependent Vars \\
\midrule
$ x_{1 \ldots n} $ & $ \dfrac{\mathrm{d}^{0 \ldots k} }{\mathrm{d} x_{1 \ldots n}^{0\ldots k}} f_{1 \ldots m}(x_{1 \ldots n}) $ \\
\bottomrule
\end{tabular}
\end{table}
%
\section{Variables and Arguments}
%
Should a subroutine or function be used as an argument of another function or subroutine, make interfaces explicit. Use derived types, pointers and allocatable variables where appropriate.
Variable and subroutine names should make it clear what they mean; pointer names should start with \texttt{p\textunderscore}. The same principle applies as in naming files. Variables should be declared by type with kind in ascending order:
\begin{inparaenum}[\itshape 1\upshape )]
\item Scalar,
\item Array,
\item Coarray,
\item Optional,
\item Derived,
\item Pointer,
\item Allocatable;
\end{inparaenum}
and intent:
\begin{inparaenum}[\itshape 1\upshape )]
\item \texttt{intent(in)},
\item \texttt{intent(out)},
\item \texttt{intent(inout)},
\item locals, and
\item \texttt{parameter}.
\end{inparaenum}
%
\section{Intent}
%
\texttt{intent()} must always be declared within functions and subroutines.
%
\section{Functions and Subroutines}
%
Aforementioned principles apply to their naming conventions.
%
\subsection{Elemental}
%
Elemental functions \& subroutines should start their names with \texttt{elmt\textunderscore}.
%
\subsection{Recursive}
%
Recursive functions \& subroutines should start their names with \texttt{rcsv\textunderscore}.
%
\subsection{Elemental Recursive}
%
Elemental recursive functions \& subroutines should start their names with \texttt{rcel\textunderscore}.
%
%
\section{Comments and Labels}
%
Comments should be used to explain the reasoning behind logical blocks. Expanded comments should be used whenever a non-trivial procedure is being carried out.
Every loop, if and case block must be labeled with a logical label made up of the initial letters of the block's main purpose as delineated by the comment explaining its function.
The purpose and variables of subroutines and functions must be documented before \texttt{implicit none}. These comments must have a structure similar to the following:
\pagebreak
\begin{minted}{fortran}
!=======================!
! Short explanation. !
! Method used. !
! Author & date !
!-----------------------!
! Extended explanation. !
!-----------------------!
! Inputs: !
! scalar = explanation !
! array() = explanation !
!-----------------------!
! Outputs: !
!-----------------------!
! Inputs-Outputs: !
!-----------------------!
! Locals: !
!=======================!
\end{minted}
\end{document}
% -*- mode:LaTex; mode:visual-line; mode:flyspell; fill-column:75-*-
\chapter{Introduction} \label{secIntro}
Introduction.
\section{Installation instructions}
This template was tested with TeX Live 2017, which includes all required packages~\cite{TUG2017}. Mac users: this is included as part of OSX and TeXShop. After successfully installing TeX Live, compile the PDF file using your favorite build tool (we tested with \verb!make! on OSX).
\section{How to use this template}
Write each chapter as a separate \LaTeX\ file and include them in \verb!thesis-main.tex!. Edit the abstract, acknowledgments, background, title, dedication, and funding files as necessary. Include additional packages in \verb!thesis-packages.tex! and define helpful macros in \verb!thesis-macros.tex!.
\subsection{Algorithms}
Define each algorithm as a separate \LaTeX\ file in the algorithms folder using either the \verb!algorithmicx! or \verb!algpseudocode! packages. For example, see Algorithm~\ref{algTemplate}.
\input{algorithms/alg-template.tex}
\chapter{Soccer Simulation 2D}\label{chapter:ss2d}
In this chapter we explain the architecture of the simulator itself. It is divided into five sections: the RoboCup Soccer Simulation Server (rcssserver) \cite{rcssserver}; the formation tactics and the formation editor; the two major agents, namely the coach and the players; and the environment used to train our agents.
\section{Server}\label{section:rcssserver}
The RoboCup Soccer Simulation Server (RCSSS) is the core of SS2D. It processes the whole environment, receiving the messages from every agent and returning the current state of the game in discretized time. A long-standing challenge for teams in the league is that the returned state is noisy and therefore not fully reliable. The server also handles the communication with monitors, through which one can watch the game.
RCSSS follows a UDP/IP client-server model, which allows teams to develop in any programming language, as long as they implement this communication. There is a specific port for the 11 infield players and another for the coach. Only messages that follow the allowed protocols are processed by the server. For example, only the goalie can perform the catch action; if any other player attempts to catch, the action is invalidated and the agent loses a cycle. The two agents with special commands are the goalie and the coach, which will be explained in Section \ref{section:coach}.
After the initial connection with the clients, the server communicates its parameters. Those parameters are divided into the server parameters, which describe the environment itself (position of the goal, size of the ball, how many cycles it waits for an action, etc.), and the 18 player types. Each player type has random parameters for speed, acceleration, size (which influences the tackle action), maximum stamina and kickable area. In game, clients can exchange N messages with other clients, N being given by the server parameters.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/field_params.png}
\caption{Default field parameters given by the server. Image from \cite{ss2dmanual}.}
\label{fig:fieldparams}
\end{figure}
\section{Formation}\label{section:formation}
The formation file is one of the most important pieces of tactical information that the players have. It describes positions depending on the ball's position; for an agent based on Helios Base (also called Agent2D) \cite{heliosbase}, as is the case for a large share of the teams in the league, this information is indispensable. Together with Agent2D, Fedit2 was also released. It is a user interface (UI) with which one can create formation files containing up to 128 static situations. See Figure \ref{fig:fedit2}.
Teams usually have more than one formation file to be used in a game. The formation can be changed in a given situation. For example, if our team is winning and has a good chance of securing the victory, we can choose an aggressive formation instead of a defensive one.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{images/fedit2.png}
\caption{Actual Footage of Fedit2. Image from \cite{heliosbase}.}
\label{fig:fedit2}
\end{figure}
\section{Agents}\label{section:agents}
In this section we discuss the two main high-level agents: the coach and the players. The idea of each agent is based on a real soccer game, where there is a coach and 11 players: the coach can notice things that the players cannot and can send messages to specific players, but only the players can execute the actions that the coach instructs.
\subsection{Coach}\label{section:coach}
The coach is a special agent that can see the whole field without noise and has the largest message capacity of any agent. As it has a global view of the game, it can analyze the game globally (e.g., where our defense line is breaking, how we score goals most often, how many passes were completed). Cyrus2014 \cite{cyrus2014} released their code, which shows some statistical information about the game. See Figure \ref{fig:cyrus_coach}.
For the next RoboCup, teams will not be able to see each other's names, so strategies developed against a given team will be more difficult to apply. For that reason, the coach is the best-placed agent to analyze the game in real time, infer which team the opponent probably is, and then apply the corresponding strategy.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/cyrus_coach.png}
\caption{Example of game analysis by Cyrus2014's coach.}
\label{fig:cyrus_coach}
\end{figure}
\subsection{Players}\label{section:players}
The players are the usual agents in the field. The players can take 8 actions:
\begin{itemize}
\item Dash: given a dash power $\alpha \in [-100, 100]$, the agent ``runs'' with power $\alpha$ in its body's direction.
\item Turn: given an angle $\gamma \in [-180, 180]$, the agent turns its body in $\gamma$ degrees.
\item Kick: given a power $\alpha \in [-100, 100]$ and an angle $\gamma \in [-180, 180]$, the agent kicks in $\gamma$ direction with power $\alpha$.
\item Tackle: given a power $\alpha \in [-100, 100]$, the agent tackles the ball.
\item Say: given a message M and a target N, the agent sends M to the server to be delivered to N.
\item Turn\_neck: given an angle $\gamma \in [-180, 180]$, the agent turns its neck in $\gamma$ degrees.
\item Move: given a point (x,y), the agent is teleported to (x,y). It is a special action allowed only while the game is paused.
\item Catch: only the goalie can perform this action.
\end{itemize}
The agent also has some viewing restrictions. The farther the agent is from a target P, the more difficult it becomes to see P. Figure \ref{fig:view_ranges} describes the view model: at points a, c and d the agent can see all parameters of P, at point e it cannot see the uniform number of P, at point f it cannot see which team P belongs to, and at points b and g the agent cannot see P at all. To support tactics that depend on all agents in the field, \cite{heliosbase} created a memory for the agent that allows checking an old position of the target. The catch and kick actions also have restrictions: they can only be performed if the ball is in the catchable or kickable area, respectively. Figure \ref{fig:catchable_area} shows an example of the catchable area.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/view_ranges.png}
\caption{Ranges of view of the players. Image from \cite{ss2dmanual}.}
\label{fig:view_ranges}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/catchable_area.png}
\caption{Description of Catchable Area. Image from \cite{ss2dmanual}.}
\label{fig:catchable_area}
\end{figure}
\section{Half Field Offensive}\label{section:HFO}
The Half Field Offensive (HFO) environment \cite{hfo} is an interface designed specifically to train SARSA (state, action, reward, next state, next action) agents, modeled after the usual OpenAI environments. HFO tolerates delayed messages from the agent (caused by algorithm training or some other heavy processing) and allows writing the agent's code in the Python programming language \cite{python} while using the original actions of the C++ \cite{cpp} agent through an interface; see Figure \ref{fig:HFO_diagram}. It provides two spaces of states and actions:
\begin{itemize}
\item Low-Level Features and Actions- Uses raw features from SS2D server (angle, positioning) and provides raw actions (kick, dash, turn).
\item High-Level Features and Actions - Uses processed features (distance to opponent, interceptable ball) and only complex or chained actions (dribble, pass to teammate).
\end{itemize}
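The following is a minimal, non-authoritative sketch of an agent loop built on the HFO Python bindings. The names used here (\texttt{HFOEnvironment}, \texttt{connectToServer}, \texttt{getState}, \texttt{act}, \texttt{step} and the feature-set and action constants) follow the example agents shipped with HFO; the exact argument lists may differ between versions and should be checked against \cite{hfo}.
\begin{verbatim}
import itertools
from hfo import HFOEnvironment, HIGH_LEVEL_FEATURE_SET, IN_GAME, DRIBBLE

env = HFOEnvironment()
env.connectToServer(HIGH_LEVEL_FEATURE_SET)   # assumes a running HFO server

for episode in itertools.count():
    status = IN_GAME
    while status == IN_GAME:
        state = env.getState()     # processed high-level features
        env.act(DRIBBLE)           # a chained high-level action
        status = env.step()        # advance one simulation cycle
\end{verbatim}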
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{images/HFO_diagram.png}
\caption{Half Field Offensive diagram. Image from \cite{hfo}.}
\label{fig:HFO_diagram}
\end{figure}
\documentclass[a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{xcolor}
\usepackage{amsthm}
\usepackage[mathcal]{euscript}
\usepackage{url}
\newcommand{\Hcal}{\mathcal{H}}
\newcommand{\real}{\mathbb{R}}
\title{Study of the constructive proof of weak compactness}
\author{Nazarov Ivan}
\date{\today}
\begin{document}
\maketitle
Let $(\Hcal, \langle\cdot,\cdot\rangle)$ be a Hilbert space. A sequence $(x_n)_{n\geq1}
\in \Hcal$ converges {\bf strongly} to $x\in\Hcal$ if $\|x_n - x\| \to 0$, and {\bf
weakly} if $\langle x_n, z \rangle \to \langle x, z\rangle$ for all $z\in \Hcal$.
Weak convergence is necessary for strong convergence, since for every $z\in \Hcal$
by the Cauchy-Schwarz inequality
\begin{equation*}
\liminf_{n\to \infty}
\lvert \langle x_n - x, z\rangle \rvert
\leq \limsup_{n\to \infty}
\lvert \langle x_n - x, z\rangle \rvert
\leq \|z\| \limsup_{n\to \infty} \|x_n - x \|
= 0
\,.
\end{equation*}
Let $(x_n)_{n\geq1} \in \Hcal$ be a bounded sequence $M = \sup_{n\geq1} \|x_n\| < +\infty$.
Then there is $(n_k)_{k\geq1}\uparrow$ such that $x_{n_k}$ converges weakly to some
$h \in \Hcal$, i.e. $x_{n_k}\rightharpoonup h$.
\paragraph{Proof} % (fold)
\label{par:proof}
Let $\Hcal^0_0 = \{\sum_{i=1}^n \beta_i x_i \colon \beta_i \in \real,\,n\geq 0 \}$.
The set $\Hcal^0_0$ is a linear subspace of $\Hcal$ and hence a pre-Hilbert space with
respect to $\langle\cdot, \cdot \rangle$; it can be completed w.r.t. this inner
product to obtain $\Hcal_0$, and $\Hcal^0_0$ is dense in this completion.
\paragraph{Subsequence} % (fold)
\label{par:subsequence}
Let's construct a subsequence, which could potentially weakly converge to something.
Let $x^0_n = x_n$. Suppose for $k\geq 0$ we have $(x^k_n)_{n\geq 1} \subseteq (x_n)_{n\geq 1}$
and $(\alpha^i)_{i\leq k}$, such that $\langle x_i, x^k_n\rangle \to \alpha^i$ for
all $1\leq i\leq k$ as $n\to \infty$.
A sequence $\alpha^{k+1}_n = \langle x_{k+1}, x^k_n \rangle$ in $\real$ is bounded,
since $\lvert\alpha^{k+1}_n\rvert \leq \|x_{k+1}\| \|x^k_n\| \leq M^2$ and $x^k_n \in
\{x_n\colon n\geq 1\}$. Therefore, it contains a subsequence $(n_p)_{p\geq1}$ such that
$\alpha^{k+1}_{n_p}$ converges to some $\alpha^{k+1} \in \real$. If we let $x^{k+1}_p
= x^k_{n_p}$, then
\begin{itemize}
\item $\langle x_j, x^{k+1}_p \rangle \to \alpha^j$ since $(x^{k+1}_p)_{p\geq 1}
\subseteq (x^k_p)_{p\geq 1} \subseteq (x^j_p)_{p\geq 1}$ for $j\leq k$, and
subsequences converge to the same limit as the parent sequence;
\item $\langle x_{k+1}, x^{k+1}_p \rangle = \alpha^{k+1}_{n_p} \to \alpha^{k+1}$
by construction.
\end{itemize}
For the diagonal sequence $(x^p_p)_{p\geq1}$ there are indices $(n_p)_{p\geq1} \uparrow$
such that $x^p_p = x_{n_p}$, and we have $(x^p_p)_{p\geq1} \subseteq (x^k_p)_{p\geq1}$
for all $k\geq 0$, whence we must have $\langle x_n, x^p_p \rangle \to \alpha^n$
for all $n\geq 1$.
% paragraph subsequence (end)
\paragraph{A mapping} % (fold)
\label{par:a_mapping}
We shall show that a map $l(x) = \lim_{p\to \infty} \langle x, x_{n_p} \rangle$ is
well defined.
Right from the start we know that $l(x_n) = \alpha^n$ for all $n\geq 1$. Since the
inner product is bilinear and the limit is additive, for any $z = \sum_{i=1}^n \beta_i x_i$
we have
\begin{equation*}
l(z)
= \lim_{p\to \infty} \Bigl \langle \sum_{i=1}^n \beta_i x_i, x_{n_p} \Bigr \rangle
= \sum_{i=1}^n \beta_i \lim_{p\to \infty} \langle x_i, x_{n_p} \rangle
= \sum_{i=1}^n \beta_i l(x_i)
= \sum_{i=1}^n \beta_i \alpha^i
\in \real
\,.
\end{equation*}
Therefore $l$ is defined on $\Hcal^0_0$.
Since $\Hcal^0_0$ is dense in $\Hcal_0$ w.r.t.\ the induced norm, for any $z\in \Hcal_0$
there is $(z_k)_{k\geq1} \in \Hcal^0_0$ such that $\|z_k - z\|\to 0$. Thus for any
$\varepsilon > 0$ there is $k_\varepsilon \geq 1$ such that for all $k\geq k_\varepsilon$
we have $\|z_k - z\| \leq \tfrac\varepsilon{M}$. Hence for all such $k$ and every $p\geq 1$ we get
\begin{align*}
\bigl\lvert \langle z, x_{n_p} \rangle - \langle z_k, x_{n_p} \rangle \bigr\rvert
\leq \| z - z_k \| \|x_{n_p} \|
\leq \tfrac\varepsilon{M} M
\,.
\end{align*}
We conclude that $\sup_{p\geq 1} \lvert \langle z - z_k, x_{n_p} \rangle \rvert \to 0$
as $k\to \infty$, since for any $\varepsilon > 0$ there is $k_\varepsilon \geq 1$ such
that $\sup_{p\geq 1} \lvert \langle z - z_k, x_{n_p} \rangle \rvert \leq \varepsilon$ for
all $k\geq k_\varepsilon$.
Next observe that for any $k \geq j$ and all $p$ we have
\begin{align*}
\lvert l(z_k) - l(z_j) \rvert
&\leq \lvert l(z_j) - \langle z_j, x_{n_p} \rangle \rvert
+ \lvert l(z_k) - \langle z_k, x_{n_p} \rangle \rvert
\\
&+ \lvert \langle z_j - z, x_{n_p} \rangle \rvert
+ \lvert \langle z_k - z, x_{n_p} \rangle \rvert
\,.
\end{align*}
From the convergence of the supremum above we can pick $k_\varepsilon \geq1$ such
that the last two terms are bounded each by $\tfrac\varepsilon2$ for all $j,k \geq
k_\varepsilon$. Taking limit suprema of both sides with respect to $p$ eliminates
the first two absolute terms of the right hand side, thereby implying that
$\lvert l(z_k) - l(z_j) \rvert \leq \varepsilon$ for all such $j$ and $k$. Hence
$(l(z_k))_{k\geq 1}$ is Cauchy in $\real$, and thus $l(z_k) \to \alpha$ for some
$\alpha \in \real$.
To show that $l(z) = \alpha$ we make the following observation: for all $k,p\geq 1$
\begin{equation*}
\lvert \langle z, x_{n_p} \rangle - \alpha \rvert
\leq \lvert l(z_k) - \alpha \rvert
+ \lvert \langle z_k, x_{n_p} \rangle - l(z_k) \rvert
+ \sup_{p\geq 1} \lvert \langle z - z_k, x_{n_p} \rangle \rvert
\,.
\end{equation*}
For any $\varepsilon > 0$ there is $k_\varepsilon \geq 1$ such that the sum of the
first and last terms on the right hand side is not greater than $\varepsilon$ at
$k_\varepsilon$. Hence the $\limsup_{p\to \infty}$ of the expression in the left
is not greater than $\varepsilon$, since
\begin{equation*}
\limsup_{p\to \infty} \lvert \langle z, x_{n_p} \rangle - \alpha \rvert
\leq \varepsilon
+ \limsup_{p\to \infty} \lvert
\langle z_{k_\varepsilon}, x_{n_p} \rangle - l(z_{k_\varepsilon})
\rvert
\,,
\end{equation*}
and $\langle z_k, x_{n_p} \rangle \to l(z_k)$ in $\real$ as $p\to \infty$ for any
$k \geq 1$. Since the last bound holds for arbitrarily small $\varepsilon > 0$,
$\langle z, x_{n_p} \rangle \to \alpha$, and therefore $l$ is defined on $\Hcal_0$.
What about $z\in \Hcal_0^\perp$, the orthogonal complement of $\Hcal_0$ in $\Hcal$?
For any $z\in \Hcal_0^\perp$ by orthogonality $\langle z, x_n \rangle = 0$ for all
$n\geq 1$, whence $\lim_{p\to\infty} \langle z, x_{n_p} \rangle = 0$. Therefore $l$
is defined on $\Hcal_0^\perp$.
% paragraph a_mapping (end)
\paragraph{Linearity and continuity} % (fold)
\label{par:linearity_and_continuity}
Thus $z\mapsto l(z) = \lim_{p\to\infty} \langle z, x_{n_p} \rangle$ is a well defined
$\Hcal \to \real$ map. Furthermore, it is linear, since the inner product is bilinear:
\begin{equation*}
l(z + \alpha x)
= \lim_{p\to\infty} \langle z + \alpha x, x_{n_p} \rangle
= \lim_{p\to\infty} \langle z, x_{n_p} \rangle
+ \alpha \lim_{p\to\infty} \langle x, x_{n_p} \rangle
= l(z) + \alpha l(x)
\,.
\end{equation*}
Finally, $l$ is also a bounded map, since for all $z\in \Hcal$
\begin{equation*}
\lvert l(z) \rvert
= \lim_{p\to\infty} \lvert \langle z, x_{n_p} \rangle \rvert
\leq \sup_{p\geq 1} \|z\| \| x_{n_p}\|
\leq M \|z\|
\,.
\end{equation*}
So the map $l$ is linear and bounded, hence continuous.
% paragraph linearity_and_continuity (end)
\paragraph{Weak limit} % (fold)
\label{par:weak_limit}
The Riesz representation theorem therefore implies the existence of some $h\in \Hcal$ such
that $l(z) = \langle z, h\rangle$ for any $z\in \Hcal$. Therefore $x_{n_p} \rightharpoonup
h$ for this subsequence $(x_{n_p})_{p\geq 1}$ of $(x_n)_{n\geq 1}$.
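For intuition (a standard textbook example, not needed for the argument above), consider the
orthonormal sequence $(e_n)_{n\geq 1}$ in $\ell^2$: for any $z\in \ell^2$ Bessel's inequality gives
$\sum_{n\geq 1} \lvert \langle z, e_n \rangle \rvert^2 \leq \|z\|^2 < \infty$, whence
\begin{equation*}
\langle z, e_n \rangle \to 0 = \langle z, 0 \rangle
\,,
\end{equation*}
so $e_n \rightharpoonup 0$ even though $\|e_n\| = 1$ for all $n$, i.e.\ a weakly convergent
sequence need not converge in norm.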
% paragraph weak_limit (end)
% paragraph proof (end)
\end{document}
\documentclass[11pt,]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\newcommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{}
\pretitle{\vspace{\droptitle}}
\posttitle{}
\author{}
\preauthor{}\postauthor{}
\date{}
\predate{}\postdate{}
\usepackage{placeins}
\AtBeginDocument{\let\maketitle\relax}
\begin{document}
\FloatBarrier
\newpage
\hypertarget{conclusion}{%
\section*{Conclusion}\label{conclusion}}
\addcontentsline{toc}{section}{Conclusion}
In this work we have introduced the reader to the problem of choosing among different
modelling paradigms in consumer choice studies. By means of an experimental
theory-testing framework we demonstrated the complexity of evaluating model
performance, highlighting the potential bottlenecks and the questions to be
answered at every level of the data exploration procedure. The correct
specification of the theoretical assumptions, the dataset generation,
the model choice, and the choice of performance measures were all studied.
The main objective, to propose a comprehensive methodology for building a
theory-testing framework, was accomplished, and the potential of the
devised framework was illustrated on an economic question drawn from the real
world.
Two different consumer choice situations were explored, based on the
setting defined by Michaud, Llerena, and Joly (2012). The discrete
choice context allowed us to compare how the presence of heterogeneous
preferences for environmental attributes affected the ability to
correctly identify the underlying utility functions, as well as to
derive the WTP and premiums for the attributes. The implementation of
artificial dataset simulation techniques proved its potential for
creating fully controlled data samples, providing two consistent
datasets constructed under RUM assumptions. Given these data, we could
observe how taste heterogeneity affected the population's choice
distribution and the resulting datasets, as well as its impact on the
models' performances.
A total of three models, drawn from distinct disciplines, namely
econometrics (MNL and MMNL) and ML (CNN-MNL), were implemented on the
generated artificial datasets. We could demonstrate the differences and
similarities between traditional econometric models and ML
techniques such as NN. The econometric models allowed us to observe the
potential biases that researchers risk introducing when using the simplest
models in unjustified contexts. The ML model made it possible to
demonstrate how different approaches to optimisation and algorithmic
choices influence the obtained results. Moreover, the framework
demonstrated that ML models could be used instead of traditional
econometric techniques under correct specification, as NN
are technically able to approximate simpler linear or non-linear
models. All of the models demonstrated good overall performance in the
homogeneous individual preferences setting, while only the most complex
MMNL model achieved sufficient results in the presence of taste
heterogeneity.
The multidimensionality of the explored situation allows us to draw
several conclusions from this work in terms of model performance in the
presence of heterogeneous preferences. The MMNL models demonstrated
better adaptivity to the different datasets and consequently better
performance in all cases. This family of models showed great
tolerance to potential misspecification of the assumptions about the
presence of random effects. On the contrary, the MNL models produced
biased estimates in the presence of random effects in the population,
which indicates a real danger and signals the importance of correct
specification and of preliminary data studies performed before
model estimation. For the NN model, the only observed difference was that the
resulting approximation was unable to directly estimate the variance of
the linear-part coefficients, which is not the primary focus of
NN models. However, the marginal effects could still be derived for
the individual characteristics or the alternative-specific attributes,
assuming a correct approximation was used, which does not inflate the
overall variance of the marginal effects.
Nevertheless, there exist potential biases that require particular
attention and caution in future research. The implemented
data-generation procedure risks biasing the results in favour of the
econometric models, which were used to simulate the data. Regarding
the models, we have observed that the adaptiveness and flexibility
of the MMNL model comes at some cost in resource efficiency: the
time, computational power and amount of data needed to achieve satisfying
results are significantly higher than for the other models.
This work demonstrates only a fraction of the full potential of the
theory-testing framework. Many extensions and generalisations should be
carried out before it could be used at scale. For example, it would be
particularly interesting to introduce an extension providing
the possibility to explore and compare how different behavioural
theories (RUM, RRM, QDM) affect the estimation results. Furthermore, with
this methodology it becomes possible to explore the effects of
non-additive utilities or the behaviour of populations in which several
behavioural rules are mixed. Another extension concerns the implemented
mathematical models and consists in incorporating the most recent
developments in the ML field into the framework, enabling users to
implement models such as decision trees or more advanced NN. Last but
not least, the framework could be complemented with a methodological
tool-set for hypothesis testing that exploits the advantages of
controlled-experiment data collection.
To summarise, we conclude that the experimental framework has proven its
importance for empirical and theoretical studies and has
demonstrated its potential. There clearly exists a strong need for more
extensive study and development of this framework in order to provide the
research community with a hypothesis-testing tool-set that can be
used in the context of consumer choice modelling. The exploration of
potential biases and theory testing will allow us to establish a
comprehensive and consistent methodology to be implemented later in
empirical work, and in controlled experiments in particular.
\newpage
\renewcommand\contentsname{}
\setcounter{tocdepth}{4}
\hypertarget{table-of-contents}{%
\section*{Table of contents}\label{table-of-contents}}
\addcontentsline{toc}{section}{Table of contents}
\vspace{-12mm}
\tableofcontents
\newpage
\hypertarget{list-of-figures}{%
\section*{List of figures}\label{list-of-figures}}
\addcontentsline{toc}{section}{List of figures}
\renewcommand\listfigurename{}
\vspace{-12mm}
\listoffigures
\newpage
\hypertarget{list-of-tables}{%
\section*{List of tables}\label{list-of-tables}}
\addcontentsline{toc}{section}{List of tables}
\renewcommand\listtablename{}
\vspace{-12mm}
\listoftables
\newpage
\hypertarget{bibliography}{%
\section*{Bibliography}\label{bibliography}}
\addcontentsline{toc}{section}{Bibliography}
\hypertarget{refs}{}
\leavevmode\hypertarget{ref-llerena2013rose}{}%
Michaud, Celine, Daniel Llerena, and Iragael Joly. 2012. ``Willingness
to pay for environmental attributes of non-food agricultural products: a
real choice experiment.'' \emph{European Review of Agricultural
Economics} 40 (2): 313--29. \url{https://doi.org/10.1093/erae/jbs025}.
\end{document}
\documentclass{article}
\usepackage[utf8]{inputenc}
\title{PS5 Yarberry}
\author{Megan N. Yarberry }
\date{February 20, 2020}
\begin{document}
\maketitle
\section{Problem 3}
For my non-API web scraping, I scraped some National Championship history of Division I college football programs. The chart I was able to scrape contains not only the count of national titles each school has, but also each year they won the title. This data is relevant to me when looking at research and topic ideas: I am currently studying college football patterns, in particular those that pertain to donor gifts, including major gifts back to the athletic department, and how much of an effect winning a National Championship has on such gifts. I scraped my data using rvest, which is similar to the example we went over in class.
\section{Problem 4}
For problem four I scraped data from Yahoo Finance, in particular Nike's stock, using htmlTreeParse. I parsed the data from the site without using an API and was able to download the data straight from the website using a URL. I then broke the data down into more readable sections that I could use; this data will also be updated as the stock changes if the page is refreshed. I don't particularly need Nike's stock for any of my current projects, but it could be a contributing factor for college football programs, which is my topic of study.
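As an analogous sketch in Python (the work above was done in R with rvest and htmlTreeParse), reading an HTML table without an API might look like the following; the URL and table index are placeholders rather than the exact pages used above.
\begin{verbatim}
# Minimal non-API scraping sketch (Python analogue of the rvest workflow above).
# The URL is a placeholder; point it at the page containing the table of interest.
# Requires the lxml or html5lib parser to be installed for pandas.read_html.
import pandas as pd

url = "https://example.com/national-championship-history"  # hypothetical page
tables = pd.read_html(url)   # parses every <table> element on the page
titles = tables[0]           # assume the first table is the one we want
print(titles.head())         # quick look at the scraped rows
\end{verbatim}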
\end{document}
\section{Maintaining repeatability}\label{sec-limitations}
Another way Research Objects can contribute to scientific transparency is by
clarifying claims about computational repeatability.
Just as the overall scientific reproducibility of a study represented by a Research Object
might be described precisely in terms of individual components required to
satisfy particular (namespaced) definitions of \emph{reproducible} or \emph{replicable},
additional statements could be made about the various dimensions of computational
reproducibility in particular.
There have long been discussions about packaging ROs for transparency and
reproducibility \cite{claerbout1992, gentleman2007, peng2011}, but less emphasis has been
placed on \emph{describing} characteristics of the published ROs. The aim would be for researchers
publishing their work via ROs to be fully aware of the implications of the claims they make about
the computations represented by the RO.
Researchers discovering, evaluating, or using the Research Object for further research
would be able to interpret these claims unambiguously.
The implications for the possibilities of rerunning, reproducing, or exactly repeating the
computations described in the RO under different conditions would be clear
to all parties.
As discussed above, \emph{exact repeatability} promises increasingly to be a powerful,
new addition to the modern researcher's repertoire of reproducibility techniques.
At the same time, there appears to be (possibly growing) confusion over what is actually
possible in terms of computational reproducibility generally, exact repeatability
specifically, and the conditions required to achieve them in practice.
The fundamental limitations computers impose on the exact replicability of program executions
are well known.
At the lowest level, finite precision arithmetic, differing word sizes between processors,
the effects of round-off errors, and the implications of choosing between different
mathematically equivalent orderings of operations all have the potential to
impose limits on the replicability of scientific computations across different computing environments.
Virtual machines and software containers cannot fully address issues at this level.
The fact that we can expect options such as full-processor emulation, either in software~\cite{QEMU}
or on customizable hardware,
to provide better guarantees of exact computational repeatability under more circumstances
over time reveals the true crux of the problem.
What we can expect from computers in terms of reproducibility in general, and exact
repeatability in particular, is changing quickly---and likely will continue to do so for the
foreseeable future.
In the case of hiding hardware differences, time is on our side---or can be if we happen to save the
information actually needed to enable exact re-execution of our analyses in the future.
In many other cases, time works against repeatability~\cite{hinsen_dealing_2019}.
A Dockerfile that today correctly produces the software environment in which computations were originally
performed may not do so a year from now---if it builds at all.
Due to the dependencies of most scientific software on packages not
bundled with the language compiler or runtime (with these packages typically depending on
other packages, and so on), the chances of rebuilding or rerunning software
equivalent to that used to produce a result in a Research Object decreases
rapidly with time.
Fortunately, time works for us in this dimension as well, as new ways of specifying software
environments and archiving dependencies emerge.
But again the issue arises---are we saving the right information to enable computational repeatability
in the future?
What Research Objects can offer here is analogous to the proposed function of mediating between competing
and contradictory definitions of reproducibility and replicability.
Rather than trying to anticipate all future developments in the area of computational reproducibility,
and representing computing environments, software dependencies, and machine information
in a way that we hope will be usable by future technologies, we can take the pluralistic
path here as well.
We can characterize the various dimensions in which computing technology currently supports---or fails to
support---exact repeatability; then create mappings from the specific capabilities
of existing technologies (Docker~\cite{docker2019}, Singularity~\cite{Singularity2019}, Jupyter~\cite{jupyter2019}, etc)
and software stacks (Binder~\cite{Binder_2018}, Whole Tale~\cite{WT2019}, etc)
onto these dimensions.
As new technologies that better (or differently) support computational reproducibility emerge or gain acceptance,
the capabilities of these tools can be mapped as well; and the common, underlying model can be enhanced as needed.
The advantage of including such capabilities in Research Objects is that researchers could be made aware
of the implications of the various technologies, programming environments, and specification
standards they choose to employ.
It is easy to imagine a current-day researcher intending to enable others to reproduce their computational
results by sharing any custom source code or scripts in a Git repository, along with the Dockerfile
they used to create the computing environment in which they worked.
While this is laudable, and almost certainly better than nothing, in many cases it is likely the researcher's
expectations with regard to how these actions will ensure reproducibility will exceed
what is actually the case.
If instead the researcher composed their study as a Research Object, they could be prompted---by whatever
software environment they are using to create the RO---for details about their precise expectations with
regard to reproducibility.
They may then find that a Dockerfile that does not specify the version of the base image, for example, is not sufficient
to meet their expectations.
When faced with the current limitations of available technology they may choose to archive the Docker image itself,
or even a virtual machine image, while still being made aware of the limitations associated with these alternative
approaches.
Researchers evaluating an RO similarly could probe its reproducibility capabilities. One might discover for example
that an RO comes with a Dockerfile that currently references a non-existent base image.
Or that it depends on a software package no longer available in the Ubuntu Apt Repository.
The archived Docker image referenced in the RO might no longer be compatible with the latest version of Docker.
These are all challenging issues to discover, debug, and remediate even for experts in these technologies.
Making Research Objects \emph{transparent} with respect to their actual reproducibility capabilities would be a
step forward for making the computational components of scientific research reproducible and
transparent.
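As a toy illustration of the kind of automated capability check such tooling might perform (a simplified sketch, not a proposal for the Research Object model itself), the following Python snippet flags Dockerfile base images that are not pinned to an explicit tag or digest:
\begin{verbatim}
# Toy check: flag FROM lines whose base image is not pinned to a tag or digest.
# Simplified on purpose: ignores --platform flags, registries with ports,
# and build arguments.
import sys

def unpinned_base_images(dockerfile_text):
    issues = []
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if not stripped.upper().startswith("FROM "):
            continue
        image = stripped.split()[1]
        if "@" in image:                 # pinned by digest: strongest guarantee
            continue
        if ":" not in image or image.endswith(":latest"):
            issues.append(stripped)      # untagged, or the mutable "latest" tag
    return issues

if __name__ == "__main__":
    text = open(sys.argv[1]).read() if len(sys.argv) > 1 else "FROM ubuntu\n"
    for line in unpinned_base_images(text):
        print("unpinned base image:", line)
\end{verbatim}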
% Methods
In this chapter, we give a description of the event selection, definitions of the
key physical variables, and how they are used to select events. Then, a procedure describing
how the signal samples are manipulated to produce a high-statistics off-shell Higgs sample is presented.
Finally, the binning of the variables used to obtain the results is determined and defined.
\section{Event selection and physical variables}
Proton bunches cross each other at a rate of about \SI{40}{\mega\hertz} in the beamline of
the LHC. Naturally, not all of these crossings are recorded, due both to technical
limitations of the electronics and to the fact that the vast majority of these
crossings don't produce an inelastic collision that is energetic enough to be interesting.
After the selection of the Level-1 (L1) trigger and the high-level trigger (HLT), fewer than 1000
events per second are permanently recorded and go to full off-line reconstruction. Among these,
we only select the ones that pass certain HLT triggers; for the 2018 data set these are:
\begin{itemize}
\item HLT\_Mu17\_TrkIsoVVL\_Mu8\_TrkIsoVVL\_DZ\_Mass3p8
\item HLT\_IsoMu24
\item HLT\_Ele23\_Ele12\_CaloIdL\_TrackIdL\_IsoVL
\item HLT\_DoubleEle25\_CaloIdL\_MW
\item HLT\_DoublePhoton70
\item HLT\_Ele32\_WPTight\_Gsf
\item HLT\_Photon200
\end{itemize}
Most of the trigger names are self-explanatory; for example, the numbers are the transverse momentum ($p_\mathrm{T}$)
thresholds (in \si{\giga\electronvolt}) required to pass each of them. The HLT is the starting point of the analysis. The
purpose of the triggers is to ensure that the events in the corresponding samples contain the physics
we want (since run data samples are divided according to which triggers they passed).
The next step is to `add' composite variables that are relevant to the physics we are interested in, as
well as to make baseline cuts on the events.
The jets are all AK4 jets unless mentioned otherwise, and a special type of jet, the b-tagged
jet, is identified using the DeepFlavour algorithm~\cite{deepflavour}. These jets
likely originate from relatively long-lived bottom quarks, judging from a combination of many measurements
including the displacement of vertices.
We shall also define a few of the
less common variables in the list; physical motivations are given in the following
paragraphs.
After requiring that events pass the set of triggers above, the baseline cuts, motivated by more
detailed physics considerations, are:
\begin{itemize}
\item Missing transverse energy $\met > 125 \gev$:
the signal process creates true $\met$ from neutrinos; this cut also reduces backgrounds such
as Drell--Yan (DY).
\item Both leptons have $p_\mathrm{T} > \SI{25}{\giga\electronvolt}$
\item $\abs{\Delta\phi_{\ell\ell\_\met}} > 1.0$
\item $\abs{\Delta\phi_{\ell\ell\mathrm{Jets}\_\met}} > 2.5$
\item $\abs{m_{\ell\ell} - \SI{91.2}{\giga\electronvolt}} < \SI{15}{\giga\electronvolt}$:
the signal process consists of $\mathrm{Z}\rightarrow{}\ell\ell$, we
require the di-lepton system has a mass that is consistent within the Z mass peak.
\item $p_\mathrm{T}^{\ell\ell} > \SI{55}{\giga\electronvolt} $:
the DY process creates a large number of background events, but its di-leptons
go back-to-back, so the expected value of this variable is close to 0.
\item No ak4-jet b-tagged jet ($\Delta{}R=0.4$, loose b-tagging working point (WP))
\item $\min{\abs{ \Delta\phi_{\mathrm{j}\_\met}}} > 0.25$
\end{itemize}
In addition, to reduce detector effects at high pseudorapidity:
\begin{itemize}
\item $\eta_\mu< 2.4$
\item $\eta_e< 2.5$
\end{itemize}
First and most importantly, the $\met$ cut reduces the background from DY significantly, since DY events don't have
real $\met$. Notice that many of the
cuts are related to angles between the various physical objects present in the
reconstruction. The reason is simple: the signal events, where the Higgs boson goes to ZZ and one Z
goes to two charged leptons while the other goes to two neutrinos, ideally have the two Z momenta
`back-to-back' in the Higgs boson's rest frame, leading to a large angle in the transverse plane.
% ( assuming the $\met$ mainly comes from the two neutrinos, of course)
Furthermore, in background events which do not contain this kinematic feature, the
correlation in the directions of $\vec{E}_{T}^\mathrm{missing}$ and the transverse momentum of the leptons is weaker.
To use this kinematic feature to increase the signal-to-background ratio, we define
$\abs{\Delta\phi_{\ell\ell\_\met}}$ as the azimuthal angle (perpendicular to
the beamline) between the di-lepton system and the missing transverse energy. In
signal events that produce 0 jets, this variable should be $\pi$. The cut is lowered to $1.0$
because in the (not so rare) cases where one or more jets recoil against
the ZZ system, this variable can take quite low values.
This leads to the next variable on the list, $\abs{\Delta\phi_{\ell\ell\mathrm{Jets}\_\met}}$, which
is almost the same except that we add all jets' momentum into the di-lepton system to account
for the events that have produced jets, which in turn would cause the angles to be lower in
a multi-body final state.
Also, a veto on b-tagged jets is placed to reduce background events originating from a pair
of top quarks ($t\bar{t}$), since top quarks almost always decay to a bottom quark.
Finally, $\min{\abs{ \Delta\phi_{\mathrm{j}\_\met}}}$ is the minimum azimuthal angle difference between
any of the jets (that pass the cuts) and the $\met$. The cut exists because jets are among the
most difficult physical objects to measure; they often create so-called instrumental $\met$ due to jet
mis-measurements, and it can be quite large in magnitude. Such mis-measurements typically yield large
$\met$ in the direction of the original jet, whereas in the signal processes
the jet recoils against the ZZ system, so this cut requires angular separation between the two.
We also define variables that are not cut on, but are used for fitting:
\be
D_{j j}^{V B F}=\frac{P_\mathrm{S M}^{V B F}(\vec{\Omega})}{P_\mathrm{S M}^{g g}(\vec{\Omega})+P_\mathrm{S M}^{V B F}(\vec{\Omega})}
\ee
as first introduced in another CMS paper~\cite{djjvbf}. In short, this variable (a discriminator) is sensitive
to the VBF physics and to the correlations between (the angles and mass of) the outgoing jets resulting from the
VBF topology. At the same time:
\be
{\mtzz}^2={\left[\sqrt{p_{\mathrm{T}, \ell\ell}^2+m_{2 \ell}^2}+\sqrt{{E_{\mathrm{T}}^{\mathrm{miss}}}^{2}+m_\mathrm{Z}^2}\right]}^2-\left[\vec{p}_{\mathrm{T},\ell\ell}+\vec{E}_{\mathrm{T}}^{\mathrm{miss}}\right]^{2}
\ee
is defined based on the hypothesis that the $\met$ comes mainly from the two neutrinos
of one of the Z bosons. We shall see the usefulness of this variable in channels where
not enough jets are present to construct the $D_{j j}^{V B F}$ variable.
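For concreteness, a minimal Python sketch of this definition (an illustration only; the function name and inputs are ours, not the analysis code) is:
\begin{verbatim}
import math

def mt_zz(pt_ll, m_ll, met, m_z=91.1876):
    """Transverse mass of the ZZ system in GeV, assuming the MET is carried by
    the two neutrinos of the second Z. pt_ll and met are (px, py) pairs in GeV."""
    et_ll = math.hypot(math.hypot(*pt_ll), m_ll)   # sqrt(pT_ll^2 + m_ll^2)
    et_nn = math.hypot(math.hypot(*met), m_z)      # sqrt(MET^2 + mZ^2)
    px, py = pt_ll[0] + met[0], pt_ll[1] + met[1]
    mt2 = (et_ll + et_nn) ** 2 - (px * px + py * py)
    return math.sqrt(max(mt2, 0.0))

# e.g. a 100 GeV dilepton system recoiling against a back-to-back 100 GeV MET
print(mt_zz(pt_ll=(100.0, 0.0), m_ll=91.2, met=(-100.0, 0.0)))
\end{verbatim}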
On top of the cuts stated in the beginning, several $\met$ filters exist to remove detector
anomalies. A veto on additional isolated lepton tracks is implemented
in the underlying analysis framework to
remove events with additional leptons that are likely background, such as those found in WZ production.
\section{Signal samples re-weighting}
\label{sec:sig_rewgt}
Extra attention is given to the off-shell Higgs samples used in this thesis, and two different kinds
of re-weighting of the simulated events are applied in order to produce an MC sample with a wide
mass spectrum extending well beyond the Higgs boson mass ($\approx \SI{125}{\giga\electronvolt}$). We use the
gluon fusion Higgs (ggH) sample to illustrate the procedures. The same procedures are applied to
the VBF samples as well.
We start by generating separate samples with different Higgs pole masses.
This is the mass of the Higgs boson that appears in the propagator on
the R.H.S.\ of Eq.~\ref{eqn:diff_xsec}, as mentioned before. The raw distribution of the true mass
in the different samples (without any weights) is shown in Fig.~\ref{fig:LHE_raw} (left). As expected,
the peak of the distribution moves to the right as the pole mass of the sample becomes larger; at the
same time, the `peak' of the samples with very large mass is wider because the width of the Higgs boson increases.
We also see that some lower-mass
samples (200, 300, 400 GeV, etc.) have a cut-off (due to insufficient statistics) beyond
$\mathrm{M}\approx\SI{2500}{\giga\electronvolt}$, which means they have zero
statistics beyond that mass range. After applying the GEN, PU, and ME weights given by each sample's
individual MC process and by JHUGen MELA, as shown in Fig.~\ref{fig:LHE_raw} (right),
we see that the samples are consistent with each other's line shape. However, it is clear
that:
\begin{enumerate}[label=(\roman*)]
\item Lower mass samples have insufficient statistics in the tail region
\item Samples have poor statistics in mass windows that are
far from their true mass (as listed in the legend).
\end{enumerate}
The second point is best illustrated by the wide spikes of the lower-mass samples near their `cut-offs',
as well as by the visible fluctuations of the high-mass samples in mass regions far from their pole mass
(note that the plots are on a semi-log scale, which visually compresses these effects).
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/LHE_Raw.pdf}}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/LHE_PUME_wgts.pdf}}\\
\end{center}
\caption{Unit-normalized distributions of LHECandMass before (left) and after (right) applying
the weights. Together they show the need to combine samples into a wide-range, high-statistics
signal sample. Bin size = \SI{10}{\giga\electronvolt}.}
\label{fig:LHE_raw}
\end{figure}
The goal of combining the samples is to use all the events, but with a correction weight
such that each sample has a higher weight in the regions where it has good statistics,
while the overall normalization stays unchanged. To do this, we pick a list of
`mass windows' with edges sitting on the true masses of the samples, and we define the effective
number of events $\mathrm{N}_\mathrm{eff} = \frac{(\sum\mathrm{wgts})^2}{\sum(\mathrm{wgts}^2)}$
within each mass window. Here, the weight corresponds to the product of the PU weight, GEN weight, K-factor,
and ME weight for
the GGH sample under consideration. For a specific GGH sample $i_0$ whose events fall in a mass
window $j$, $\mathrm{N}_\mathrm{eff}^{i_0j}$ is first obtained and a re-weighting factor is computed:
\be
\mathrm{wgt}_\mathrm{window}^{i_0j} =
\frac{\mathrm{N}_\mathrm{eff}^{i_0j}}{\sum_i\mathrm{N}_\mathrm{eff}^{ij}}
\ee
This factor is applied to all events from sample $i_0$ within the window $j$. Conceptually,
the effective number of events ensures the weight is not skewed by differences in the overall
normalization of the samples, and in each mass window, samples with more concentrated
statistics in that window are given a higher weight. In Fig.~\ref{fig:window_wgt_matrix} (a), a
clear diagonal pattern can be seen. Physically this means that samples with higher true
mass are given a higher weight in the tail mass windows, consistent with the expected outcome. The complete
table of weights can be found in Appendix~\ref{apdx:window_wgts}.
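A minimal sketch of this bookkeeping (illustrative only; the variable names and the composition of the per-event weight are assumptions, not the analysis code) is:
\begin{verbatim}
import numpy as np

def n_eff(weights):
    """Effective number of events, N_eff = (sum w)^2 / sum(w^2)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum() if w.size else 0.0

def window_weights(samples, edges):
    """samples: {name: (masses, weights)} arrays; returns factors[name][j]
    = N_eff of that sample in window j divided by the sum over all samples."""
    factors = {name: [] for name in samples}
    for lo, hi in zip(edges[:-1], edges[1:]):
        neff = {}
        for name, (masses, weights) in samples.items():
            in_win = (masses >= lo) & (masses < hi)
            neff[name] = n_eff(weights[in_win])
        total = sum(neff.values())
        for name in samples:
            factors[name].append(neff[name] / total if total > 0 else 0.0)
    return factors
\end{verbatim}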
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/Window_wgt_GGH.pdf}}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/LHC_compare_windowwgt.pdf}}
\end{center}
\caption{Heatmap of window re-weight factors of different samples and mass windows (left);
effects of applying window factors for the combined sample(right)}
\label{fig:window_wgt_matrix}
\end{figure}
However, even with the unit normalization, there is still an inconsistency in the shape, as shown
in Fig.~\ref{fig:window_wgt_matrix} (b). This inconsistency is likely caused by the finite number of events
and the non-infinitesimal mass window size used. We introduce another correction factor for
these small artifacts. Iterating through the samples, between each sample and the
next one we derive a sample mass factor based on:
\be
\mathrm{wgt}_\mathrm{mass}^{i, i+1} = \frac{\sum{\mathrm{wgt}_{i}}}{\sum{\mathrm{wgt}_{i+1}}}
\text{, for events with mass between the sample masses of $i$ and $i+1$.}
\ee
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/Mass_wgt_GGH.pdf}}
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/LHC_compare_bothwgt.pdf}}
\end{center}
\caption{Iterative sample mass factors obtained (left) and the final combined sample (right)}
\label{fig:LHE_rewgt}
\end{figure}
This factor corrects the large variations in overall normalization between samples. The factors and results
are shown in Fig.~\ref{fig:LHE_rewgt}. As expected, the high-mass samples need a small downward correction
to eliminate the deviating trend seen before. Complete tables of these weights for both samples can be found
in Appendix~\ref{apdx:mass_wgts}.
Finally, the weights of all ggZZ processes (the BKG, SIG, and BSI components of the GGH sample) are multiplied
by 1.10 as a K-factor for the next-to-next-to-leading-order (NNLO) $\rightarrow$
next-to-next-to-next-to-leading-order (N3LO) QCD correction.
Although we used the matrix element weights for one particular signal hypothesis, these two
correction factors apply to all hypotheses. A plot of the components without unit-area normalization is shown
in Fig.~\ref{fig:bsi_sig_bkg_compare}. As expected, the background exceeds the signal by more than 100\%,
which is partially why the constraint is hard to obtain.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.7\linewidth]{fig/LHE_integral_difference.pdf}
\end{center}
\caption{Distributions of the background, the signal, and the background--signal interference}
\label{fig:bsi_sig_bkg_compare}
\end{figure}
\clearpage
\section{Strategy for variable selection, binning, and systematic uncertainties}
After preparing the signal samples and deciding on the event selection criteria, we then
choose the variables and the binning of the 2D histograms (shown in Fig.~\ref{fig:templates_demo}) before a combined-limits
fit can be applied. As discussed in earlier sections, one of the variables
newly introduced specifically for this analysis is the $D_{j j}^{V B F}$ discriminator. However, it is
clear that this variable is undefined for events with $\njets < 2$. In order
not to `waste' any statistical significance, we use $\met$ in its place for the
$\njets = 0,1$ categories. In total, we have 2 ($ee$ or $\mu\mu$) $\times$ 4 ($\njets = 0,1,2,3+$) = 8
channels to consider when making the histogram templates.
The bin edges for different categories in number of jets is the following:
\begin{itemize}
\item $\njets \geq 2$
\begin{itemize}
\item $\mtzz$ = 150, 300, 400, 600, 800, 1000, 13000 (GeV)
\item KD1 = DJJVBF = 0, 0.2, 0.4, 0.6, 0.8, 1
\end{itemize}
\item $\njets < 2$
\begin{itemize}
\item $\mtzz$ = 150, 300, 400, 600, 800, 1000, 13000 (GeV)
\item KD1 = $\met$ = 125, 200, 280, 420, 500, 800, 13000 (GeV)
\end{itemize}
\end{itemize}
The higher-mass ($\mtzz$) bins are wider because the samples,
especially the backgrounds, have difficulty populating them for physical reasons
and because of the applied cuts.
The different shapes of the backgrounds and signals are shown in Fig.~\ref{fig:1d_hists_all}. These plots have noticeable features,
including the VBF samples peaking at the ends of the $D_{j j}^{V B F}$ range, which shows the power of this variable.
See Appendix~\ref{sec:1dhists}
for a more complete collection of plots.
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=.4\linewidth]{fig/MT_ZZ_Nj_2.png}}
\subfloat[]{\includegraphics[width=.4\linewidth]{fig/MET_Nj_2.png}}\\
\subfloat[]{\includegraphics[width=.5\linewidth]{fig/DJJ_VBF_Nj_2.png}}
\end{center}
\caption{Distributions (integrals not to be taken literally) of $\mtzz$ (left), $\met$ (right), and
$D_{j j}^{V B F}$ (bottom) for backgrounds and signals with
$N_\mathrm{jets}=2$. Here sig a3 is a scaled-up version of the signal (a1), and VVV corresponds to
WWW, WWZ, or WZZ.}
\label{fig:1d_hists_all}
\end{figure}
We use the above binning for all samples and all 8 channels considered in this
thesis. A limited number of the 2D template histograms are
shown in Fig.~\ref{fig:templates_demo}. Again, we notice that the peaks of $D_{j j}^{V B F}$ in the
VBF samples match our expectation.
See Appendix~\ref{apdx:templates_hist} for a compilation of the template histograms of the ggZZ sample.
\begin{figure}[htb]
\begin{center}
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/ggZZ_2e2nu_nj1_Sig.pdf}}
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/VVZZ_2e2nu_nj1_Sig.pdf}}
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/qqZZ_2e2nu_nj1_count.pdf}}\\
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/ggZZ_2e2nu_nj2_Sig.pdf}}
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/VVZZ_2e2nu_nj2_Sig.pdf}}
\subfloat[]{\includegraphics[width=.33\linewidth]{fig/counts_templates/qqZZ_2e2nu_nj2_count.pdf}}
\end{center}
\caption{ggZZ (left), VBF (center), and qqZZ (right), histogram templates for fitting,
in $2e2\nu$ channel with 1,2 jet(s). Color bar (event counts) is scaled by \num{1e5}.}
\label{fig:templates_demo}
\end{figure}
For low-yield background samples, bins will sometimes have negative content due to the nature of the NLO samples.
To mitigate the pathological effect this has on the likelihood fit, we replace such bin contents
by (integral of the histogram) $\times \num{e-5}$.
Systematic uncertainties are also included in the fit:
\begin{itemize}
\item Luminosity: GGH\_ZZ, VBF\_ZZ, qqZZ, qqWZ
\item NRB (non-resonant background) estimation: TT, WW
\item Branching ratio of Higgs to ZZ to 4l: GGH\_ZZ, VBF\_ZZ
\item K-factors of the background gluon-gluon parameter
\end{itemize}
See Table~\ref{tab:uncertainty} for the respective ranges of these uncertainties.
\subsection{MazezaM}
\screenshot{plugins/images/ss-mazezam}{MazezaM}{fig:mazezam}
The goal of this puzzle game is to escape a dungeon consisting of ten
``mazezams''.
These are rooms containing rows of blocks which can be shifted left or
right.
You can move the rows only by pushing them and if you move the rows
carelessly, you will get stuck.
You can have another go by selecting ``retry level'' from the menu,
but this will cost you a life.
You start the game with three lives.
Luckily, there are checkpoints at levels four and eight.
\begin{btnmap}
\opt{IPOD_3G_PAD}{\ButtonScrollBack, \ButtonScrollFwd, \ButtonLeft, \ButtonRight}
\nopt{IPOD_3G_PAD}{\PluginUp, \PluginDown, \PluginLeft, \PluginRight}
&
\opt{HAVEREMOTEKEYMAP}{
\PluginRCUp, \PluginRCDown, \PluginRCLeft, \PluginRCRight
&}
Move Character
\\
\opt{IPOD_3G_PAD}{\ButtonMenu}
\nopt{IPOD_3G_PAD}{\PluginCancel}
&
\opt{HAVEREMOTEKEYMAP}{\PluginRCCancel
&}
Menu
\\
\end{btnmap}
\section{Adaptive AR Model Based Time-Frequency Estimation}
\subsection{AR modelling of FM signal}
\begin{figure}[htb]
\centering
\includegraphics[width=0.36\textwidth]{fig/32/32a1.eps}
\caption{Time-variant frequency of FM signal}
\label{fig:3_2_a1}
\end{figure}
\noindent
An FM signal with the time-variant frequency shown in Fig.~\ref{fig:3_2_a1} is modulated, with additive white noise distributed as $\mathcal{N}(0, 0.05)$. The frequency is composed of three segments with constant, linear and quadratic parts, resulting in a non-stationary FM signal.
\begin{figure}[htb]
\centering
\hspace{-0.4cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a2.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a3.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a4.eps}
\end{subfigure}
\caption{Whole FM AR estimation with order=1, 10 and 15}
\label{fig:3_2_a2}
\end{figure}\\
When the MATLAB function \texttt{aryule} is used to estimate the coefficients for the entire signal, Fig.~\ref{fig:3_2_a2} depicts the performance of the estimation for different orders. For AR(1) modelling, the estimated frequency peak is inaccurate since the function is incapable of modelling a non-stationary signal. When increasing the order of the estimated model, only the constant frequency at $100\,Hz$ is successfully estimated; the frequencies of the other segments are still not captured accurately. Thus, the previous method from Part 2.2 is not applicable to the FM signal.
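For reference, a minimal Python sketch of the Yule-Walker AR fit and the corresponding power spectral density (an illustration of the procedure, not the MATLAB code used to produce the figures) is:
\begin{verbatim}
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients a_1..a_p and the noise variance."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # x[n] ~ sum_k a_k x[n-k]
    sigma2 = r[0] - np.dot(a, r[1:order + 1])
    return a, sigma2

def ar_psd(a, sigma2, fs=1.0, nfft=1024):
    """AR spectrum sigma^2 / |1 - sum_k a_k exp(-j 2 pi f k / fs)|^2."""
    freqs = np.linspace(0.0, fs / 2.0, nfft)
    k = np.arange(1, len(a) + 1)
    denom = 1.0 - np.exp(-2j * np.pi * np.outer(freqs / fs, k)) @ a
    return freqs, sigma2 / np.abs(denom) ** 2
\end{verbatim}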
\begin{figure}[htb]
\centering
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a5.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a6.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a7.eps}
\end{subfigure}
\\
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a8.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a9.eps}
\end{subfigure}
\hspace{-0.4cm}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32a10.eps}
\end{subfigure}
\caption{Block-based AR estimation with order=1 and 15}
\label{fig:3_2_a3}
\end{figure}\\
In addition, the frequency is a time-varying function with constant, linear and quadratic segments. Thus, a block-based estimation is implemented with a segment length of $N=500$ samples. Fig.~\ref{fig:3_2_a3} illustrates the performance of the estimated AR models with orders 1 and 15. The constant-frequency part can be successfully estimated with $p=1$, as shown by a peak at $100\,Hz$. However, the linear and quadratic segments are estimated poorly, with inaccurate frequencies. By increasing the capacity of the estimated model, the frequency range of each segment can be approximately estimated with acceptable uncertainty, as shown in the second row of Fig.~\ref{fig:3_2_a3}. Nevertheless, the linear and quadratic relationships cannot be observed from these estimates since those segments are not stationary.
\subsection{CLMS based estimated AR coefficient}
In this section, the CLMS algorithm is applied to estimate the non-stationary signal. Fig.~\ref{fig:3_2_b} shows the performance of estimating the AR(1) coefficient with varying step-size $\mu$. It is obvious that the CLMS can adaptively capture the time-variant frequencies. However, the step-size $\mu$ also affects the performance of the CLMS estimation. With a small step-size, the learning curve cannot converge, leading to inadequate estimation. The performance at $\mu=0.01$ is improved, despite missing the beginning of the constant-frequency segment. Setting $\mu=0.05$ gives a near-optimal estimation, while a larger step causes oscillations in the convergence, and as a consequence the spectrum shows large variance and distortions.
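For illustration, a minimal Python sketch of the CLMS recursion for a single AR(1) coefficient (an outline of the algorithm, not the exact code behind the figures) is:
\begin{verbatim}
import numpy as np

def clms_ar1(x, mu):
    """Complex LMS estimate of a time-varying AR(1) coefficient a(n),
    modelling x(n) = a(n) * x(n-1) + noise."""
    a = np.zeros(len(x), dtype=complex)     # coefficient trajectory
    w = 0.0 + 0.0j
    for n in range(1, len(x)):
        e = x[n] - w * x[n - 1]             # prediction error
        w = w + mu * e * np.conj(x[n - 1])  # CLMS weight update
        a[n] = w
    return a

def inst_freq(a, fs):
    """Instantaneous frequency estimate (Hz) from the AR(1) coefficient."""
    return np.angle(a) * fs / (2 * np.pi)
\end{verbatim}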
\begin{figure}[htb]
\centering
\hspace{0.4cm}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32b1.eps}
\end{subfigure}
\hspace{0.4cm}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32b2.eps}
\end{subfigure}
\\
\hspace{0.4cm}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32b3.eps}
\end{subfigure}
\hspace{0.4cm}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/32/32b4.eps}
\end{subfigure}
\caption{CLMS-based AR estimation with different step $\mu$}
\label{fig:3_2_b}
\end{figure}
\section{Setup}
\subsection{Datasets}
We perform experiments on three task-oriented dialog datasets: bAbI Dialog \cite{BordesW16}, CamRest \cite{wenEMNLP2016}, and Stanford Multi-Domain Dataset \cite{Ericsigdial}.
\noindent
\textbf{bAbI Dialog} consists of synthetically generated dialogs with the goal of restaurant reservation. The dataset consists of five different tasks, all grounded to a KB. This KB is split into two mutually exclusive halves. One half is used to generate the train, validation, and test sets, while the other half is used to create a second test set called the OOV test set.
\noindent
\textbf{CamRest} is a human-human dialog dataset, collected using the Wiz-of-Oz framework, also aimed at restaurant reservation. It is typically used to evaluate traditional slot filling systems. In order to make it suitable for end-to-end learning, we stripped the handcrafted state representations and annotations in each dialog, and divided the 676 available dialogs into train, validation, and test sets (406, 135, and 135 dialogs, respectively).
\noindent
\textbf{Stanford Multi-Domain Dataset (SMD)} is another human-human dialog dataset collected using the Wiz-of-Oz framework. Each conversation is between a driver and an in-car assistant. The other datasets consist of dialogs from just one domain (restaurant reservation), whereas SMD consists of dialogs from multiple domains (calendar scheduling, weather information retrieval, and navigation).
\subsection{Knowledge Adaptability (KA) Test Sets}
Each bAbI dialog task has an additional OOV test set, which helps to evaluate a model's robustness to change in information in the KB. A model that perfectly disentangles language and knowledge should have no drop in accuracy on the OOV test set when compared to the non-OOV test set. To measure the degree of disentanglement in a model, we generated 10 additional test sets for each real-world corpus by varying the percentage (in multiples of 10) of unseen entities in the KB. Our adversarial attacks systematically pick random KB entities and replace all their occurrences in the dialog with new entity names. We will refer to these generated dialogs as the \emph{Knowledge Adaptability} (KA) test sets.
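A minimal sketch of this entity substitution (illustrative only; the actual generation scripts for the KA test sets may differ) is:
\begin{verbatim}
import random

def make_ka_dialog(turns, kb_entities, unseen_entities, fraction):
    """Replace a random `fraction` of KB entities with unseen names in every turn."""
    k = int(len(kb_entities) * fraction)
    mapping = dict(zip(random.sample(kb_entities, k), unseen_entities[:k]))
    out = []
    for turn in turns:
        for old, new in mapping.items():
            turn = turn.replace(old, new)   # swap all occurrences of the old entity
        out.append(turn)
    return out
\end{verbatim}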
\subsection{Baselines}
We compare \sys\ against several existing end-to-end task-oriented dialog systems. These include retrieval models, such as the query reduction network (QRN) \cite{seo2016query}, memory network (MN) \cite{BordesW16}, and gated memory network (GMN) \cite{liu2017gated}.
We also compare against generative models such as a sequence-to-sequence model (Seq2Seq), a copy augmented Seq2Seq (Seq2Seq+Copy) \cite{ptr-unk}, and Mem2Seq \cite{mem2seq}.\footnote{We thank the authors for releasing a working code at \url{https://github.com/HLTCHKUST/Mem2Seq}} For fairness across models, we do not compare against key-value retrieval networks \cite{Ericsigdial} as they simplify the dataset by canonicalizing all KB words in dialogs.
We noticed that the reported results in the Mem2Seq paper are not directly comparable, as they pre-processed\footnote{Mem2Seq used the following pre-processing on the data: 1) The subject (restaurant name) and object (rating) positions of the rating KB tuples in bAbI dialogs are flipped 2) An extra fact was added to the navigation tasks in SMD which included all the properties (distance, address, etc.) combined together as the subject and \textit{poi} as the object. See Appendix.} training data in SMD and bAbI datasets. For fair comparisons, we re-run Mem2Seq on the original training datasets. For completeness we mention their reported results (with pre-processing) as Mem2Seq*.
\subsection{Evaluation Metrics}
We evaluate \sys\ and other models based on their ability to generate valid responses. The {\em per-response accuracy} \cite{BordesW16} is the percentage of generated responses that exactly match their respective gold response. The {\em per-dialog accuracy} is the percentage of dialogs with all correctly generated responses. These accuracy metrics are a good measure for evaluating datasets with boilerplate responses such as bAbI.
To quantify performance on other datasets, we use {\em BLEU} \cite{papineni2002bleu} and {\em Entity F1} \cite{eric2017copy} scores. BLEU measures the overlap of n-grams between the generated response and its gold response and has become a popular measure to compare task-oriented dialog systems. Entity F1 is computed by micro-F1 over KB entities in the entire set of gold responses.
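For reference, the evaluation logic can be sketched as follows (a simplified outline that assumes exact string matching and whitespace tokenization; it is not the exact evaluation script used here):
\begin{verbatim}
def per_response_accuracy(predictions, golds):
    """Fraction of generated responses that exactly match their gold response."""
    hits = sum(p == g for p, g in zip(predictions, golds))
    return hits / len(golds)

def per_dialog_accuracy(dialogs):
    """dialogs: list of (predictions, golds) pairs, one pair per dialog."""
    perfect = sum(all(p == g for p, g in zip(preds, golds))
                  for preds, golds in dialogs)
    return perfect / len(dialogs)

def entity_f1(predictions, golds, kb_entities):
    """Micro-F1 over KB entities appearing in the gold responses."""
    tp = fp = fn = 0
    for pred, gold in zip(predictions, golds):
        pred_ents = {w for w in pred.split() if w in kb_entities}
        gold_ents = {w for w in gold.split() if w in kb_entities}
        tp += len(pred_ents & gold_ents)
        fp += len(pred_ents - gold_ents)
        fn += len(gold_ents - pred_ents)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
\end{verbatim}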
\subsection{Human Evaluation}
We use two human evaluation experiments to compare (1) the \emph{usefulness} of a generated response with respect to solving the given task, and (2) the \emph{grammatical correctness} and \emph{fluency} of the responses on a 0--3 scale. We obtain human annotations by creating Human Intelligence Tasks (HITs) on Amazon Mechanical Turk (AMT). For each test condition (percentage of unseen entities), we sampled 50 dialogs from Camrest and SMD each, and two AMT workers labeled each system response for both experiments, resulting in 200 labels per condition per dataset per system. We evaluate four systems in this study, leading to a total of 1600 labels per condition.
\begin{figure}
\centering
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\linewidth]{assets/AMT_screen.png}
\caption{}\label{fig:testa}
\end{subfigure}
% \vspace*{0.5in}
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\linewidth]{assets/AMT_screen_grammar.png}
\caption{}\label{fig:testb}
\end{subfigure}
\caption{A sample HIT on Amazon Mechanical Turk used to (a) validate useful responses based on the given dialog context, and (b) rate the grammatical correctness of different responses on a scale of 0--3.}
\end{figure}
\noindent \textbf{Response Relevance Test}
We show a sample Human Intelligence Task (HIT) from Amazon Mechanical Turk in Figure \ref{fig:testa}. We present, in random order, the responses generated by the three baseline models and \sys\ for the same dialog, and ask the worker to tick every response option that captures the relevant information of the given sample response. A total of 200 such annotations were collected for Camrest and SMD each.
\noindent \textbf{Response Grammar Test}
We show a sample Human Intelligence Task (HIT) from Amazon Mechanical Turk in Figure \ref{fig:testb}. We present, in random order, the responses generated by the three baseline models and \sys\ for the same dialog, and ask the worker to rate each response for grammatical correctness and natural flow. Ratings range from 0 to 3, where 0 is the worst and 3 is the best. Note that the responses were not rated relative to each other, but as individual occurrences. A total of 200 such annotations were collected for Camrest and SMD each.
\subsection{Training}
We train \sys\ using the Adam optimizer \cite{kingma2014adam} and apply gradient clipping with a clip-value of 40. We identify hyper-parameters based on evaluation on the held-out validation sets. We sample word embedding, hidden layer, and cell sizes from \{64, 128, 256\} and learning rates from \{10$^{-3}$, 5$\times$10$^{-4}$, 10$^{-4}$\}. The hyper-parameter $\gamma$ in the loss function is chosen from the range [0, 1.5]. The Disentangle Label Dropout rate is sampled from \{0.1, 0.2\}. The number of hops for multi-hop attention in the encoder is sampled from \{1, 3, 6\}.
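As a rough sketch of this setup (assuming a PyTorch-style training loop; the toy model, data, and loss below are placeholders, and only the optimizer and clipping reflect the description above):
\begin{verbatim}
import torch

# Toy stand-ins so the snippet runs; the real model is the dialog model above.
model = torch.nn.Linear(16, 4)
data = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(10)]

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # lr from {1e-3, 5e-4, 1e-4}
loss_fn = torch.nn.CrossEntropyLoss()

for x, y in data:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # the full loss also adds the gamma-weighted term
    loss.backward()
    # Gradient clipping with a clip-value of 40, as described above.
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=40)
    optimizer.step()
\end{verbatim}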
We list the complete set of hyperparameters used to train \sys\ on the various datasets in Table~\ref{tab:params}.
\vspace*{0.5in}
\begin{table*}[ht]
\centering
\footnotesize
\begin{tabular}{c|ccccc}
\toprule
\textbf{Task} & \textbf{Learning Rate} & \textbf{Hops} & \textbf{Embedding Size} & \textbf{Disentangle Loss Weight} & \textbf{DLD}\\
\midrule
T1 & 0.001 & 1 & 128 & 1.0 & 0.2 \\
T2 & 0.001 & 1 & 128 & 1.0 & 0.2 \\
T3 & 0.0005 & 3 & 128 & 1.5 & 0.2 \\
T4 & 0.001 & 1 & 128 & 1.0 & 0.2 \\
T5 & 0.0005 & 3 & 256 & 1.0 & 0.2 \\
CamRest & 0.0005 & 6 & 256 & 1.0 & 0.2 \\
SMD & 0.0005 & 3 & 256 & 1.0 & 0.1 \\
\bottomrule
\end{tabular}
\caption{The hyperparameters used to train \sys\ on the different datasets.}
\label{tab:params}
\end{table*}
\section{Calculating \(\pi\)}
%
% Chapter 4.5
%
\section*{4.5 Substitution Rule}
\subsection*{Substitution Rule for Indefinite Integrals}
If \(u=g(x)\) is a differentiable function whose range is an interval \(I\) and \(f\) is continuous on \(I\), then
$$ \int f(g(x))g'(x)dx = \int f(u)du $$
\subsection*{Substitution Rule for Definite Integrals}
If \(g'\) is continuous on \([a, b]\) and \(f\) is continuous on the range of \(u=g(x)\), then
$$ \int_a^b f(g(x))g'(x)dx=\int_{g(a)}^{g(b)}f(u)du $$
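For example, taking \(u=x^2\) so that \(du=2x\,dx\), \(g(0)=0\) and \(g(2)=4\):
$$ \int_0^2 2x\,e^{x^2}\,dx = \int_0^4 e^u\,du = e^4-1 $$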
\subsection*{Integrals of Symmetric Functions}
Suppose \(f\) is continuous on \([-a, a]\).
$$ \text{If } f \text{ is even } [f(-x)=f(x)] \text{, then } \int_{-a}^a f(x)dx = 2 \int_{0}^{a} f(x)dx. $$
$$ \text{If } f \text{ is odd } [f(-x)=-f(x)] \text{, then } \int_{-a}^a f(x)dx = 0. $$
\documentclass{article}
% Language setting
% Replace `english' with e.g. `spanish' to change the document language
\usepackage[english]{babel}
% Set page size and margins
% Replace `letterpaper' with`a4paper' for UK/EU standard size
\usepackage[letterpaper,top=2cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry}
% Useful packages
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorlinks=true, allcolors=blue]{hyperref}
\title{Investigating Rust's Anonymous Symbols for Embedded Contexts}
\author{Gabe Barney}
\begin{document}
\maketitle
\section{The concept of embedded data and why it's important}
Embedded data refers to the disk space that is not explicitly allocated by the application, but is instead produced by compiler and language constructs that increase the size of the binary. Currently, we detect embedded data by parsing the space between different symbols, which tells us how much padding there is, as well as the space that is used by constant strings.
\section{What I've looked at}
\subsection{Manually attributing symbols to functions on ARM}
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{word_img.png}
\caption{\label{fig:word} An example of ldr instructions and ARM words}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{str0_img.png}
\caption{\label{fig:str0}An example symbol in ARM's literal pool}
\end{figure}
In ARM, '.word' directives introduce a level of indirection that makes parsing symbols directly through pure string manipulation very difficult because of address alignment. Compounding this, because words are an arbitrary directive, we cannot always tell what each word associated with a function means. Some words are generated purely as padding to align the literals in the literal pools, and these automatically generated words cannot be understood without looking into each individual case. Another issue is that when we find the location of a word, we often do not know its length, owing to the difficulty of manually parsing registers. For some functions, such as the basic panic in \autoref{fig:word}, it is easy to determine the length of the literal being referred to, but the byte-wise coverage we obtain this way is not nearly as significant as we would hope.
Approximately 52\% of the imix board's embedded data is panic-related; this is deduced by removing the panic handler and comparing binary sizes. By parsing the length of strings used in panics through the string-length registers (r1 in the case of \autoref{fig:word}), a surprisingly low 8\% of this panic-related data can be attributed with this method, even though this is the most common, and by far the simplest, case of panic data that we can detect manually.
The main issue here seems to be rooted in ARM's literal pooling. Parsing strings, like the ones in \autoref{fig:str0}, is difficult when they sit in such large collections with unrelated strings immediately surrounding them. We can easily attribute the first string in \autoref{fig:str0} from the assembly in \autoref{fig:word} through string manipulation, but what about cases that are not as explicit and straightforward as panics? Additionally, ARM seems to mix unreferenced embedded strings in with referenced ones, which greatly added to the complexity when we looked at the imix binary.
Detecting embedded data on ARM is not a simple task. You would expect that looking at the symbol table would easily give you the size of all the embedded string symbols, even if they are clumped together with unrelated strings. It turns out, however, that the symbol sizes in the symbol table only account for the first string in each symbol. This means that any string past the first one is not reflected in the size of that symbol, and that the true overall size of a symbol can only be determined by looking at the difference between its address and that of its neighboring symbol.
With ARM, there is limited 4KB offset for the LDR instruction, which can make it understandable that there are some oddities in ARM's literal pool. This gives some inspiration to changes in platform to RISC-V that can be read in Section 2.4.
\subsection{llvm-dwarfdump and pyelftools}
I quickly realized that manually parsing the lengths of these anonymous symbols is too difficult, especially for such low coverage. Leveraging the available ELF and DWARF tooling therefore became attractive, as it would be far more general: we could get more coverage of the panics, as well as attribute embedded data to more functions beyond just panics.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{dwarfdump_img.png}
\caption{\label{fig:dwarfdump}This plot shows how often these tools attribute symbols to files}
\end{figure}
I believe that pyelftools\footnote{\url{https://github.com/eliben/pyelftools}} could be particularly useful for attributing symbols to functions, but at face value it seems equivalent to llvm-dwarfdump. LLVM's dwarfdump is simply faster, so I settled on it after a brief testing period.
From \autoref{fig:dwarfdump}, we can see that the tool's attribution scheme appears reasonable and that it can attribute embedded data to files and line numbers. However, the differing DWARF outputs for similar embedded data made it hard to establish further findings, particularly the functions in which the data occurs. The tools output the line and column numbers where the symbols should be, but it remains unclear how or where the embedded data comes into play: the reported locations do not point to interesting code, so this approach does not quite achieve what is wanted. There is potential promise, but it seems best left alone for now; those with extensive DWARF experience could probably get further.
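For the symbol-level view (as opposed to DWARF line information), pyelftools does make it easy to compute the neighbor-address size estimate described in Section 2.1; a minimal sketch (the ELF path is a placeholder):
\begin{verbatim}
from elftools.elf.elffile import ELFFile

with open("imix.elf", "rb") as f:   # placeholder path to the board image
    elf = ELFFile(f)
    symtab = elf.get_section_by_name(".symtab")
    syms = sorted(symtab.iter_symbols(), key=lambda s: s["st_value"])
    for cur, nxt in zip(syms, syms[1:]):
        gap = nxt["st_value"] - cur["st_value"]
        # st_size only covers the first string in a clumped symbol, so the
        # address difference to the next symbol is a better upper bound.
        print(cur.name, cur["st_size"], gap)
\end{verbatim}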
\subsection{Determining the influence of LLVM's Link Time Optimization}
Discussion led to investigating how LTO affects the organization of the binary on ARM. It caused a minor increase in the number of symbols, but made no clear improvement in the ability to easily parse and attribute strings to functions, as they were still heavily clumped together with unrelated strings. The importance of this is that it further cements the conclusion that the clumping is most likely an issue with ARM.
\subsection{RISC-V analysis}
The lack of meaningful information gained from turning off LTO on ARM prompted looking at a RISC-V board. Coming into this without knowing any differences between ARM and RISC-V, it was natural to first check whether LTO has any influence on RISC-V, and then to see how easily symbols can be attributed to functions, as in Section 2.1.
With LTO off there are 779 symbols related to embedded data, while with LTO on there are 675. This makes intuitive sense.
On RISC-V, there is a new symbol type named L\_\_unnamed, whose functionality seems identical to that of Lanon symbols. Turning LTO on causes there to be more Lanon symbols than unnamed ones, while turning LTO off causes there to be more unnamed symbols than Lanon ones. There appears to be no functional difference between the two symbol types, but the change in their distribution with LTO turned on/off is interesting. Additionally, there seems to be a range in the \_\_unnamed\_ naming scheme, increasing from 170, that corresponds to the symbols being unreferenced.
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{riscv_str0_img.png}
\caption{\label{fig:riscv_str0}An example symbol in RISC-V's literal pool}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=.8\textwidth]{byte_usage_plot.png}
\caption{\label{fig:byte_usage}A plot showing the embedded string byte contribution per function}
\end{figure}
It's reasonable to believe that the limitation on LDR offsets is a potential cause of the literal pool issues, and therefore the reason why ARM is so hard to parse. There is no behavior like this LDR limitation in RISC-V: symbols that were clumped together on ARM are completely isolated on RISC-V.
This property of RISC-V, along with the absence of ARM's words, makes it very easy to parse the assembly and attribute symbols to functions, which in turn makes computing per-function attributions and code-size contributions simple. From the limited information visible in \autoref{fig:byte_usage}, I've completed a mapping of functions to their embedded string byte usage. This also means it's possible to map all of the symbols to the functions they're used in.
Given the long function names and the difficulty of representing the y-axis, if you want the details of this plot, as well as other related plots, you'll need to hover over the bars in the graphs using the script or the notebook. \footnote{\url{https://github.com/barneyga/tock-embedded-data}}
\subsection{Next steps}
\begin{enumerate}
\item Embedded data detection doesn't work the same for RISC-V boards as it does for ARM boards. This makes it hard to gauge the coverage we get from mapping the symbols to their functions.
\item Following from this, we want to isolate certain functions and determine their coverage and individual attributions. Currently I can only think of compiling without the panic handler, as done before, and I'm unsure how other features could be isolated.
\item Expanding the current print\_tock\_memory script with a flag that prints information about the embedded data organized by their parent functions.
\end{enumerate}
\end{document}
\section{Properties of the Dual Map and Double Dual}
\begin{lemma}
Let $V,W$ be vector spaces over $F$ and $\alpha\in L(V,W)$.
Let $\alpha^\ast\in L(W^\ast,V^\ast)$ be the dual map, then:\\
1. $\ker\alpha^\ast=(\operatorname{Im}\alpha)^\circ$, so $\alpha^\ast$ is injective iff $\alpha$ is surjective.\\
2. $\operatorname{Im}\alpha^\ast\le(\ker\alpha)^\circ$ with equality if $V,W$ are finite dimensional, in which case it implies that $\alpha^\ast$ is surjective iff $\alpha$ is injective.
\end{lemma}
This is very important as it shows how we can understand $\alpha$ from $\alpha^\ast$, which is often simpler.
\begin{proof}
1. Pick $\epsilon\in W^\ast$, then $\epsilon\in\ker\alpha^\ast$ iff $\alpha^\ast(\epsilon)=0$ iff $\epsilon\circ\alpha=0$ iff $\epsilon\in(\operatorname{Im}\alpha)^\circ$.\\
2. We first show that $\operatorname{Im}\alpha^\ast\le(\ker\alpha)^\circ$.
Indeed, for any $\epsilon\in\operatorname{Im}\alpha^\ast$, we have $\epsilon=\alpha^\ast(\phi)=\phi\circ\alpha$ for some $\phi\in W^\ast$.
But then for any $u\in \ker\alpha$ we have $\epsilon(u)=\phi\circ\alpha(u)=\phi(0)=0$, which means $\epsilon\in(\ker\alpha)^\circ$.
In finite dimension, pick bases $B,C$ of $V,W$ and we get
\begin{align*}
\dim\operatorname{Im}\alpha^\ast&=r(\alpha^\ast)=r([\alpha^\ast]_{C^\ast,B^\ast})=r([\alpha]_{B,C}^\top)\\
&=r([\alpha]_{B,C})=r(\alpha)\\
&=\dim V-\dim\ker\alpha\\
&=\dim (\ker\alpha)^\circ
\end{align*}
So they have the same dimension, hence equal.
\end{proof}
We now turn to a very important concept known as double dual.
$V^\ast$ is a vector space too, so we can also construct its dual
$$V^{\ast\ast}=L(V^\ast,F)=(V^\ast)^\ast$$
Why is it important?
Well, not much in finite dimensions, but in infinite dimensional spaces, it is very hard to find obvious relations between $V$ and $V^\ast$.
However, there is a canonical embedding of $V$ into $V^{\ast\ast}$.
Indeed, pick $v\in V$, consider $\hat{v}:V^\ast\to F$ via $\epsilon\mapsto\epsilon(v)$, which is a well-defined element of $V^{\ast\ast}$.
Quite ironically, our first theorem on this topic is about finite-dimensional spaces.
\begin{theorem}
If $V$ is finite dimensional, then this operation $\hat{}:V\to V^{\ast\ast}$ we just described is an isomorphism of vector spaces.
\end{theorem}
So we can just identify $V^{\ast\ast}$ with $V$.
\begin{proof}
Linearity is standard.
To see it is injective, let $e\in V\setminus\{0\}$ and extend $\{e\}$ to a basis $\{e,e_2,\ldots,e_n\}$ of $V$.
So the dual basis $(\epsilon,\epsilon_2,\ldots,\epsilon_n)$ would have $\hat{e}(\epsilon)=\epsilon(e)=1$.
Therefore $\hat{}$ has trivial kernel, hence injective.
It then follows that it is an isomorphism as $\dim V=\dim V^\ast=\dim V^{\ast\ast}$.
\end{proof}
\begin{remark}
In further linear analysis and functional analysis, we will see that $\hat{}$ remains injective for a huge class of infinite dimensional vector spaces (those of interests are often space of functions).
And there are many of them (called reflexive spaces) where $\hat{}$ is actually an isomorphism.
The theories emerged from here have numerous applications in analysis.
\end{remark}
\begin{lemma}
Let $V$ be a finite dimensional vector space over $F$ and $U\le V$.
Define $\hat{U}=\{\hat{u}:u\in U\}\le V^{\ast\ast}$.
Then $\hat{U}=U^{\circ\circ}=(U^\circ)^\circ$.
\end{lemma}
Thus we can identify $U^{\circ\circ}$ with $U$ too.
\begin{proof}
Trivial.
\end{proof}
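(To spell it out: for $u\in U$ and $\epsilon\in U^\circ$ we have $\hat{u}(\epsilon)=\epsilon(u)=0$, so $\hat{U}\le U^{\circ\circ}$; and $\dim U^{\circ\circ}=\dim V^\ast-\dim U^\circ=\dim V-(\dim V-\dim U)=\dim U=\dim\hat{U}$, so the two subspaces coincide.)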
\begin{lemma}
Let $V$ be finite dimensional vector space over $F$ and $U_1,U_2\le V$, then:\\
1. $(U_1+U_2)^\circ=U_1^\circ\cap U_2^\circ$.\\
2. $(U_1\cap U_2)^\circ=U_1^\circ+U_2^\circ$.
\end{lemma}
\begin{proof}
Just write it out.
\end{proof}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UMB-CS240-2016S: Programming in C
% Copyright 2016 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/UMB-CS240-2016S
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Question 2}
The following functions compile and run as expected.
Briefly explain what each function is used for.
For each function, write a simple \texttt{main()} function that calls the function.
\lstset{language=c,tabsize=4}
\begin{enumerate}[label=(\alph*)]
\item \lstinline{}
\lstinputlisting[firstline=16,lastline=33]{\resDirectory/str2int.c}
\item \lstinline{}
\lstinputlisting[firstline=17,lastline=29]{\resDirectory/matrix.c}
\newpage
\item \lstinline{}
\lstinputlisting[firstline=16,lastline=27]{\resDirectory/adjacent.c}
\item \lstinline{}
\lstinputlisting[firstline=12,lastline=15]{\resDirectory/strcpy.c}
\end{enumerate}
% =========================================
% COMMAND: _URLDEC
% =========================================
\newpage
\section{\_URLDEC}
\label{cmd:_URLDEC}
\paragraph{Syntax:}
\subparagraph{}
\texttt{\_URLDEC "<string>" <variable>}
\paragraph{Purpose:}
\subparagraph{}
Stores a URL-decoded string into $<$variable$>$.
%%\documentclass[sigconf]{acmart}
\documentclass[10pt,twocolumn]{article}
\title{Binge: Processing All of the Things with a BINary-at-the-EdGe}
\author{
Kevin M. Greenan\\
BRB Inc.\\
Santa Cruz, CA 950602
}
\date{\today}
\usepackage{pdfpages}
\usepackage{listings}
\usepackage{listings-golang}
\usepackage{makecell}
\renewcommand\theadalign{bc}
\renewcommand\theadfont{\bfseries}
\renewcommand\theadgape{\Gape[4pt]}
\renewcommand\cellgape{\Gape[4pt]}
\def\code#1{\texttt{#1}}
\lstset{basicstyle=\small\ttfamily,columns=fullflexible,language=Golang}
\begin{document}
\maketitle
\begin{abstract}
Most stream and event processing is done using popular Stream Processing
Engines (SPE), such as Apache Storm, using event-based design patterns within a
custom application stack, or a combination of the two. In either case, the
foundation of these architectures relies on a centralized event bus or pub/sub
system that acts as a buffer for processing events. While this approach is
well understood and ubiquitous, it is not well-suited to many current and
future applications within edge computing and modern SaaS applications. While
there will likely always be a need for centralized event buses and custom
application stacks, a great deal of stream processing can be done without an
SPE or core application stack.
In this paper, we present Binge (binary-at-the-edge), a lightweight, durable,
scalable, stream processing daemon that can run on commodity hardware without
the need for complex application and infrastructure configurations. Each binge
instance only relies on its own local configuration, which allows it to scale
horizontally and heterogeneously. We show that binge can be leveraged for
simple stream processing tasks at the IoT edge, VPC edge, PoP edge and as a
mesh of coordinating endpoints.
\end{abstract}
\section{Introduction}
The ubiquity of software-as-a-service (SaaS) (e.g., Salesforce, Slack, GitHub,
etc.) and cloud platform services (PaaS) (e.g., AWS, Google Cloud and Azure)
has created a complex ecosystem of integration platforms that integrate SaaS
services via events and APIs. There are a great many products aimed at
automating decision making, tracking customer experience, automating
engineering processes, and so on. These products are effectively consuming
event data from SaaS services and processing it in managed PaaS services.
Integrating with a SaaS platform typically means subscribing to events,
consuming events and calling their APIs. A basic integration platform may
consume all events from any number of SaaS integrations and publish them to
Kafka topics to be consumed by SPEs, custom microservice applications or big
data systems such as BigQuery.
The many emerging IoT use cases are very similar. That is, consuming disparate
events, publishing them to a centralized event bus and using SPEs to process.
The main difference between the IoT use cases and the integration platform use
case is the definition of edge. In the case of IoT, the edge is as close to
the devices as possible, while the integration platform is usually a
point-of-presence (PoP) or a load balancer in the platform's VPC or data
center. In each case, there are different assumptions around what resources
are available. For example, it might not be safe to assume low-latency access
to a Kafka broker in the IoT use case, but the integration platform edge may be
on the same network as a Kafka cluster. In the most ideal case, all filtering,
transformation and processing can be done as close to the edge as possible. In
reality, most of this is still done in a centralized fashion, albeit in
distributed SPEs and microservice architectures.
The goal of binge is to simplify moving as much event processing as possible to
the edge (depending on what edge means for the application) in a way that is
durable, flexible and easy to operate. The goal is not to usurp existing SPEs
or event-based architectures, but to complement them in a way that performs
processing in the most appropriate tier (edge vs. hub) depending on cost,
resources, performance, etc.
\section{Outline}
The remainder of this article is organized as follows.
Sections~\ref{binge:design}~and~\ref{binge:implementation} cover the design and
implementation of Binge. We go through a few potential use cases for using
Binge in Section~\ref{usecases}. We evaluate the performance of Binge in
Section~\ref{evaluation}. Section~\ref{previous:work} gives an account of
previous work. Finally, Section~\ref{future:work} covers future work and we
conclude in Section~\ref{conclusions}.
\section{Binge Design}\label{binge:design}
The high-level components of Binge are illustrated in
Figure~\ref{fig:binge-daemon}. Opposed to most SPEs and microservice
architectures, which require a great deal of configuration and moving parts,
Binge core is a single binary that can run as a command (e.g. process a single
event in a Lambda for use in a server-less architecture, for testing or
debugging) or as a daemon. The figure shows the components used for daemon
mode. Binge exposes a HTTP endpoint that accepts POST requests containing
JSON-formatted content, each representing an event. All events consumed by the
daemon are persisted to a durable queue, from which they are consumed by workers. Each
event will be processed by a worker in one or more pipelines defined in the
configuration. Once a worker has completed processing an event, it will ack
the event and pick up more work. This in combination with checkpointing
(discussed later) allows each daemon to be killed without losing events or
processing state. Later we will discuss tradeoffs with the various durability
configurations.
\begin{figure}[h]
\centering
\includegraphics[scale=0.60]{figures/binge-daemon.pdf}
\caption{High-level components of the binge daemon.}
\label{fig:binge-daemon}
\end{figure}
Today, the daemon can also be configured in stateless mode, which disables
the durable queue. This configuration can be used in cases where reliable
delivery is less important than performance. In addition, there are some use
cases where HTTP is not a sufficient interface. Adding new endpoint interfaces
is relatively easy. For example, we can create a consumer interface that
consumes messages from MQTT queues when running at an IoT edge. The only
difference here is that the daemon operates in a pull model opposed to the push
model of an HTTP endpoint.
The rest of this section will be devoted to digging deeper into the durable
queuing mechanism, pipelines, configuration and tradeoffs between different
configurations.
\subsection{Durable Queue}
All events posted to the binge daemon are immediately placed into a durable
queue before replying with success to the caller. This is done for two
reasons. First, it allows this or another daemon to process events that were
either unprocessed or in-flight after a crash. Second, it provides a buffering
mechanism between the incoming events and the workers, preventing the need to
apply backpressure. The remainder of this section is devoted to both of these
aspects of the durable queue.
\subsubsection{Event Processing and Crashes}
The durable queue maintains three buckets: in-flight, unprocessed and an
internal bucket for queue metadata, such as head and tail location of the
unprocessed events. Figure~\ref{fig:durable-queue} shows the basic data
structures and a simple example. The queue can technically be backed by any
underlying data structure that implements the following interface:
\begin{lstlisting}[linewidth=\columnwidth,breaklines=true]
// DQueue is the interface a durable queue backend must implement.
type DQueue interface {
	// Dequeue moves the next unprocessed item to the in-flight bucket and returns it.
	Dequeue() (*QueueItem, error)
	// Enqueue persists a new event to the unprocessed bucket.
	Enqueue(v []byte) error
	// Ack removes a fully processed item from the in-flight bucket.
	Ack(*QueueItem) error
}
\end{lstlisting}
We currently rely on BoltDB (https://github.com/etcd-io/bbolt) for persistence.
Swapping out backends is trivial as long as the backing system can be mapped to
a Key-Value interface. We chose BoltDB because it is fast, stable and runs
in-process, which allows us to minimize the number of external dependencies.
Running BoltDB also allows for configurations to easily leverage external block
stores for persistence, which lends itself to more ephemeral environments.
As shown in Figure~\ref{fig:durable-queue}, we have $5$ unprocessed events and
$0$ in-flight, before a worker pulls an event off the queue. Prior to
returning the event to the worker, $Evt_{3}$ is atomically swapped to the
in-flight queue. An event is not removed from the in-flight queue until it is
ack-ed. While processing the event, the daemon (as well as the worker) crashes
and restarts. Before accepting any new connections the daemon will process
the unprocessed and in-flight queues. Depending on the current status of each
event, some may remain on the in-flight queue after the start-up process
finishes. This could be due to an external resource being unavailable or an
issue with the state of the event or checkpoint. The proper action depends
mostly on the use case and could be a combination of: throw the events away,
fire an alert, or forward the events to another system \footnote{ToDo: Add
recovery rules to the pipelines. For example, add an OnFail section to a
process, which can take an action when it fails}
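The two-bucket semantics just described can be summarized with a small sketch (written in Python for brevity; binge itself implements this in Go on top of BoltDB, and persistence details such as key layout are omitted):
\begin{lstlisting}[language=Python,linewidth=\columnwidth,breaklines=true]
import uuid

class TwoBucketQueue:
    # Illustrative only: unprocessed -> in-flight -> ack.
    def __init__(self):
        self.unprocessed = {}  # id -> event (persisted bucket)
        self.in_flight = {}    # id -> event (persisted bucket)

    def enqueue(self, event):
        self.unprocessed[str(uuid.uuid4())] = event

    def dequeue(self):
        # Atomically move the next item to in-flight before handing it to a worker.
        item_id, event = next(iter(self.unprocessed.items()))
        del self.unprocessed[item_id]
        self.in_flight[item_id] = event
        return item_id, event

    def ack(self, item_id):
        # Only an ack removes an item from the in-flight bucket.
        del self.in_flight[item_id]

    def recover(self):
        # After a crash, anything left in either bucket must be replayed or triaged.
        return list(self.in_flight.items()) + list(self.unprocessed.items())
\end{lstlisting}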
\begin{figure}[h]
\centering
\includegraphics[scale=0.60]{figures/durable-queue.pdf}
\caption{The durable queue's unprocessed and in-flight buckets.}
\label{fig:durable-queue}
\end{figure}
The self-contained nature of binge also allows many instances to serve the same
event streams where daemons fail and recover without direct coordination. For
example, if running in Kubernetes, a binge pod can be bounced and will simply
continue using the same persistent volume when it restarts. We get similar
behavior when running in VMs or on physical hardware, provided a supervisor
detects the daemon stopped and requires restart.
\subsubsection{Backpressure}
We want to ensure all events are eventually processed, but there are times when
a daemon gets overloaded and applies backpressure, usually in the form of a
HTTP $429$ response. One way to avoid the need to apply backpressure is to put
a buffer between the endpoint serving the request and the processing. Here, the
tradeoff is that returning a $200$ OK only means the event has been persisted
and the event is hopefully processed. As we have shown, we use a durable queue
as our buffer. We use tracing to ensure we have visibility into the state of
all events.
Given that binge may be running in a resource constrained environment, it is
possible that a daemon is overloaded and the queue exhausts disk space in
either a local or remote volume. There are two complementary ways to ensure
events are not lost: spin-up more instances (if possible) and/or specify
high-water marks used to offload the latest events to an external system until
a low-water mark is hit and we can pull those events in \footnote{ToDo: This
can be worked into the configuration, likely as a command line option for the
daemon}.
In the worst case, the daemon is running in a resource constrained environment,
runs out of disk space and eventually has to resort to backpressure. In any
case, the daemon itself can be configured to mitigate this issue by offloading
newly consumed events until a low-water mark is hit.
\subsubsection{Stateless Mode}
As previously discussed, binge can run in stateless mode, either as a daemon or
in command mode. Stateless mode effectively disables the durable queuing
mechanism and crash recovery life-cycle. This means that backpressure will be
applied when all of the daemon threads or allowable connections are
consumed, and unprocessed events will likely be dropped in the event of a
crash.
This mode is best suited for cases where the source events can be safely
dropped or binge is run within a serverless architecture (i.e. command
mode). As we will show in the next section, pipelines provide a checkpoint
mechanism that can ensure individual pipeline executions can proceed after a
crash.
\subsection{Pipeline Processes}\label{sec:pipelineproc}
Pipelines are at the heart of binge. Each event will be processed through
every pipeline configured for the daemon. This allows different
binge/configuration artifacts to be deployed for specific events. For example,
you can deploy a specific configuration to an auto-scale group to serve events
for your CI/CD processes from GitHub, GitLab and CircleCI, while a separate
configuration is deployed to an auto-scale group to serve your Slack
events.
A pipeline is a collection of processes that are applied in sequential order.
Each process must implement the following interface to be used within a pipeline:
\begin{lstlisting}[linewidth=\columnwidth,breaklines=true]
type PipelineProcess interface {
Process(ctx context.Context, in map[string]interface{}) (map[string]interface{}, error)
}
\end{lstlisting}
The context is used to plumb contextual information through the pipeline, the
individual processes and onto external dependencies, such as Key-Value Stores.
As we will see the pipelines can be configured to automatically add
OpenTelemetry~\cite{OT} trace context to select requests.
At a high level, the Process function takes in a map and outputs a map. The
source can be any structured data format that can be converted into a map, such
as JSON, YAML, CSV, protobuf, etc. As we will see, this simple abstraction
allows one to define a rich set of operations that can handle many stream
processing tasks.
Each pipeline invocation will process an event serially through the pipeline.
We rely on a \code{Completable}/\code{Future} abstraction, which simplifies asynchronous
processing. Since Golang does not have native support for \code{Futures}, we
created our own implementation. In a nutshell, a \code{Completable} contains the
eventual result of a computation, which is exposed to the caller as a \code{Future}.
The caller can invoke \code{Future.Get()} to block and obtain the result, or rely on
callbacks to perform actions on the result. To enable highly concurrent
pipelines, the Future abstraction exposes a \code{Then(Runnable)} function
that will run the provided runnable using output of the parent future as input,
when the parent future completes. This model maps very nicely with invoking
pipelines on events.
\begin{figure}
\begin{lstlisting}[linewidth=\columnwidth,breaklines=true]
/* RunnableStartProcess will create
* a runnable that calls
* outMap = p1.Process(inMap)
*/
r1 := NewRunnableStartProcess(p1, inMap)
/* RunnablePartialProcess will create a
* runnable that implements
* SetInData(x map[string]interface{}) and
* runs p.Process(SetInData(x))
*/
r2 := NewRunnablePartialProcess(p2)
r3 := NewRunnablePartialProcess(p3)
// A pipeline invocation is a chain of futures
f1 := CreateFuture(r1)
f2 := f1.Then(r2)
f3 := f2.Then(r3)
f3.Get()
\end{lstlisting}
\caption{Example of chaining runnables}
\label{fig:runnables}
\end{figure}
Figure~\ref{fig:runnables} illustrates how the Future abstraction fits nicely
with pipeline invocation. Each pipeline process is contained in a runnable
object. A \code{RunnableStartProcess} must implement \code{Run()}, which will simply
invoke \code{Process}. A \code{RunnablePartialProcess} must implement both \code{Run} and
\code{SetInData}, where \code{SetInData} is called by \code{Then} with the result of the
previous future and \code{Run} invokes \code{Process} with the result.
In addition to chaining, callbacks such as Prepare, OnSuccess and OnFailure are
used to add instrumentation (meters and counters) and trace information (spans)
to the individual processes.
\subsection{Pipeline Configuration}
There are five major components to a pipeline configuration:
\begin{itemize}
\item[]{{\bfseries External Systems:}} This contains global configuration for external systems, such as databases, key-value stores, SPEs, HTTP/gRPC endpoints or pub/sub systems.
\item[]{{\bfseries Pipeline Process Definition:}} This contains the configuration for a pipeline process that can be referenced by one or more pipelines.
\item[]{{\bfseries Pipeline Manifests:}} Each pipeline will contain a manifest, which is a list of ordered pipeline processes that define the pipeline.
\item[]{{\bfseries Checkpoint Process:}} The checkpoint process is a special process that will checkpoint state to a provided external system. It is configured per pipeline.
\item[]{{\bfseries Pipelines:}} Pipelines is the root configuration object for binge and contains the external systems, pipeline process definitions and pipeline manifests.
\end{itemize}
\subsection{Pipeline Process Types}
\begin{table*}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
{\bfseries Type} & {\bfseries Desc} & {\bfseries Cond?} & {\bfseries Update?} & {\bfseries Stateful?} \\ \hline
Annotator & Add annotations to output map & Yes & Yes & No \\ \hline
Aggregator & Update an aggregation based on one or more fields & Yes & Yes & Yes \\ \hline
Completer & \makecell{Define a join on $N$ fields and emit a completion \\ annotation when a specific value for all $N$ fields is observed} & Yes & Yes & Yes \\ \hline
Filter & Filter (or inverse filter) fields using string match or regex & No & Yes & No \\ \hline
Spawner & Spawn a job & Yes & No & No \\ \hline
Transformer & Transform one or more fields of the source map & Yes & Yes & No \\ \hline
Tee & Send the current map or a transformed map to an external system & Yes & No & No \\ \hline
\end{tabular}
\caption{Process types}
\label{tab:processtypes}
\end{table*}
As shown in Table~\ref{tab:processtypes}, there are seven process types. As
described in Section~\ref{sec:pipelineproc}, a process essentially processes an
input map and returns an output map. Many of the processes can be
conditionally guarded with a condition implementing the following interface:
\begin{lstlisting}[linewidth=\columnwidth,breaklines=true]
type Condition interface {
Evaluate(in map[string]interface{}) (bool, error)
}
\end{lstlisting}
A condition is applied to the input map, where the target process will run if
and only if Condition evaluates to true. An error will either lead to failure of
the pipeline run for this event or will invoke the error handler specified in the
process definition.
All but two of the processes apply updates to the map. Note that the input map
is left untouched and that an update simply means the output map is a transformed copy
of the input map.
\subsubsection{Stateless Processes}
Stateless processes are the easiest to reason about, since they can more or less
run anywhere without an external dependency for managing state.
The Annotator, Filter and Spawner are the simplest of the processes, and all
can be defined with a conditional guard. An Annotator will simply add
annotations to the map. A Filter will either apply a filter or inverse filter
to a map. Finally, a Spawner will spawn a job and conditionally block.
Currently, jobs are processes that are spawned locally and must adhere to the same
interface as a Process. That is, a JSON-encoded map is written to standard
input and a JSON-encoded map is expected on standard output.
A Transformer process will transform the provided map. A transformer is
defined by a list of transformation specifications, each containing
\code{sourceField}, \code{targetField} and a \code{transformation}. A transformation can be
applied to an entire map by leaving the source and target fields empty. Each
transformation is defined by its type and type-specific arguments; a short
sketch of these semantics follows the list. There are
currently seven types of transformation:
\begin{itemize}
\item[]{\bfseries Copy} will simply perform a deep copy of the value
\item[]{\bfseries Map} takes a path argument identifying the mapping function
\item[]{\bfseries MapAdd/Mult} takes a value argument that will apply a constant operation
\item[]{\bfseries MapRegex} will apply a provided regex
\item[]{\bfseries FoldLeft/Right} will apply the appropriate fold using the provided function
\end{itemize}
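As referenced above, the following sketch (in Python for brevity; the spec layout and field names are illustrative rather than binge's exact configuration schema) shows the intended semantics of a per-field Map transformation:
\begin{lstlisting}[language=Python,linewidth=\columnwidth,breaklines=true]
def apply_map_transform(in_map, source_field, target_field, fn):
    # Transformations never mutate the input; they return a transformed copy.
    out_map = dict(in_map)
    out_map[target_field] = fn(in_map[source_field])
    return out_map

event = {"latencyMs": 120, "service": "checkout"}
# A Map transformation that converts milliseconds to seconds.
print(apply_map_transform(event, "latencyMs", "latencySec", lambda v: v / 1000))
\end{lstlisting}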
A Tee process will write the input map to the configured external system and
return the input map as output. The input map can optionally be transformed,
using a transformer, prior to writing to the external system. In both cases,
the original input map is always returned as the output of the process.
The Tee process is very powerful in that it can be used to send events to
external webhooks, internal microservices, or other binge processes in response
to an event. This extends the basic directed-list topology of a single binge
process to a tree-like topology, which allows system designers to
deliberately place some processes close to the event generation and others
closer to centralized systems.
\subsubsection{Stateful Processes}
An Aggregator exposes many common aggregations provided by SPEs and databases.
Currently, we support Sum, Max, Min, Avg, Count and Histograms. An Aggregator
is defined by 4 components:
\begin{itemize}
\item[]{\bfseries State Store} This specifies the external system used to store
the aggregation, which can be anything from a local file system to an external
key-value store.
\item[]{\bfseries Field Key} This is the field key that corresponds to the value to aggregate.
\item[]{\bfseries Aggregation Type} This is the type of aggregation to apply.
\item[]{\bfseries Group By} Group by applies to all but the Histogram aggregation and will aggregate by the keys provided.
\end{itemize}
Figure~\ref{fig:aggregation} shows an example snapshot of a pipeline run
containing three aggregation processes. First, we see the output map of the
previous process passed into the Sum aggregation. The Aggregator process will
try to fetch state for this aggregation. If there is no state, it will create
new state. In either case, a new annotation is added to the map containing the
current state of the aggregation after it is updated. These annotations can be
used by downstream processes to conditionally run processes, to further
aggregate or make other decisions. The figure also illustrates the use of
histogram aggregations and a simple count.
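The fetch-or-create, update, and annotate cycle described above can be sketched as follows (Python for brevity; a plain dict stands in for the configured state store, and the annotation key is illustrative):
\begin{lstlisting}[language=Python,linewidth=\columnwidth,breaklines=true]
def run_sum_aggregator(state_store, in_map, field_key, group_by):
    # Build the state key from the aggregation type, field, and group-by values.
    key = ("sum", field_key) + tuple(in_map[k] for k in group_by)
    # Fetch existing aggregation state, or create it if absent, then update it.
    state = state_store.get(key, 0) + in_map[field_key]
    state_store[key] = state
    # Annotate the output map with the updated aggregate for downstream processes.
    out_map = dict(in_map)
    out_map["agg.sum." + field_key] = state
    return out_map

store = {}
print(run_sum_aggregator(store, {"repo": "agglo", "bytes": 512}, "bytes", ["repo"]))
\end{lstlisting}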
A Completion is a special process that listens for matching values of a set of
fields. The first time a value is seen for a set of fields, a completion is
triggered. Once that value is seen for all fields in the set, the completion
is complete. This can be seen as a very simple online join of events. A completion
is defined by the following components:
\begin{itemize}
\item[]{\bfseries State Fields} A list of fields to extract and include in the
completion's state \footnote{ToDo: need to add this functionality}
\item[]{{\bfseries JoinKeys}} The field keys that define the set of fields to watch
\item[]{{\bfseries Timeout}} A timer is started the first time a value is seen.
If the timer exceeds the time-out, we cancel the completion for the value.
\end{itemize}
Figure~\ref{fig:completion} shows an example completion from getting triggered
to being completed. In this example, the JoinKeys are \code{commit.repo},
\code{dev-deploy.origin} and \code{unit-tests.origin}. Here, an event sourced
from GitHub arrives first, followed by a deployer event and finally a tester
event. As shown in the figure, an annotation is added whenever a completion is
triggered or completed, which allows downstream processes to conditionally act
on the current state of a completion.
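A minimal sketch of this join logic (Python for brevity; time-outs, persistence, and the extra state fields are omitted, and key names are illustrative) looks roughly like this:
\begin{lstlisting}[language=Python,linewidth=\columnwidth,breaklines=true]
def update_completion(state, in_map, join_keys):
    # state maps a join value to the set of join keys it has been seen on.
    for key in join_keys:
        if key in in_map:
            value = in_map[key]
            seen = state.setdefault(value, set())
            out_map = dict(in_map)
            out_map["completion.triggered"] = len(seen) == 0  # first sighting
            seen.add(key)
            out_map["completion.completed"] = seen == set(join_keys)
            return out_map
    return in_map

state = {}
keys = ["commit.repo", "dev-deploy.origin", "unit-tests.origin"]
update_completion(state, {"commit.repo": "agglo"}, keys)
\end{lstlisting}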
These examples highlight a well-known issue with stateful stream processes:
tracking consistent state. There are three main tradeoffs that arise with
respect to stateful processes:
\begin{itemize}
\item[]{{\bfseries Consistency}} Maintaining consistent state requires coordination or centralization.
\item[]{{\bfseries Performance}} Requiring consistent stateful processes will negatively impact latency.
\item[]{{\bfseries Starvation}} Requiring consistent stateful processes could lead to starvation.
\end{itemize}
There are two modes of operation here:
\begin{itemize}
\item{Use a centralized key-value store to maintain state. At minimum we would
want atomic put and delete to ensure consistency.}
\item{Rely on a local persistent store (file system or local key-value store)
and aggregate the aggregations at a binge sink that is close to a centralized
key-value store.}
\end{itemize}
The first option is the easiest to reason about, since we simply specify an
external system to use. The major cloud providers have a variety of options
that can be deployed at the push of a button. Here, the concern is
performance: increase in latency due to round-trip time and contention.
Contention could be mitigated using high-performance key-value stores, such as
Anna~\cite{ANNA}. In either case, the distance between the binge process and
the centralized key-value store will largely dictate the performance overhead.
The second option relies on the existence of a binge-mesh, where a pipeline
maintains local aggregation state and relies on a downstream binge process to
perform the final aggregation. In this case, the final aggregation or
completion could be performed closer to a centralized key-value store. Note
that here the round-trip time doesn't change, but our throughput will likely be
higher than the first option. The main disadvantage to this approach is
managing the mesh of binge processes. We will cover this in
Section~\ref{sec:orchestration}.
\subsubsection{Commutative Updates}
As discussed in the previous section, providing highly reliable and performant
stateful updates is quite challenging. Fortunately, we can exploit properties
of the updates to provide mechanisms that support lock-free updates under contention using a
standard key-value store interface. We accomplish this with commutative
updates.
\subsubsection{Checkpointing}
We use a two-level checkpointing mechanism to provide reliable replay of crashed
or failed pipelines.
\section{Orchestration}\label{sec:orchestration}
\section{Operations}\label{sec:operations}
The discussion of any infrastructure service or distributed system is incomplete
without discussing how it is monitored and managed. We have built binge with
operations in mind.
\section{Example Use Cases}\label{usecases}
\section{Performance Evaluation}\label{evaluation}
Interesting things to measure:
Environment:
Single host: EC2 micro, small, large and xlarge
Microbenchmarks: single-host
\begin{itemize}
\item[] Singleton pipelines of each type, using in-memory dependencies
\item[] Singleton aggregations against a KV store with injected latency; simulate high contention to compare contention policies
\item[] Pipelines of size 2--10 with and without persistence (in-memory dependencies)
\end{itemize}
Microbenchmarks: Use cases (mileage varies somewhat, but microbenchmarks can inform decisions)
\begin{itemize}
\item[] Edge pipeline connected to non-edge pipeline, compare sensitivity of operations at each
\item[] Mesh case where multiple edge binge instances aggregate at sink binge instance(s)
\end{itemize}
\section{Previous work}\label{previous:work}
\section{Limitations and Future Work}\label{future:work}
\section{Conclusions}\label{conclusions}
We worked hard, and achieved very little.
\bibliographystyle{abbrv}
\bibliography{main}
\section{Appendix}\label{sec:appendix}
\section{Process Examples}\label{sec:appendix:process}
\begin{figure*}
\centering
\includegraphics{figures/aggregation.pdf}
\caption{An example pipeline run through three aggregation processes.}
\label{fig:aggregation}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{figures/completion.pdf}
\caption{An example completion, from being triggered to being completed.}
\label{fig:completion}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{figures/tee.pdf}
\caption{An example Tee process.}
\label{fig:tee}
\end{figure*}
\end{document}
\section*{Objectives}
\label{sec:objectives}
Objectives provide a list of goals that will be achieved through the proposed
research:
\begin{itemize}
	\item What benefits or impact will be generated if the
	research problem is answered?
	\item Why should this research be allowed to be done?
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Diaz Essay
% LaTeX Template
% Version 2.0 (13/1/19)
%
% This template originates from:
% http://www.LaTeXTemplates.com
%
% Authors:
% Vel ([email protected])
% Nicolas Diaz ([email protected])
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[11pt]{diazessay} % Font size (can be 10pt, 11pt or 12pt)
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\title{\textbf{Capstone Project: Building a Machine Learning Model, Develop a Flask Web App and Deploy in the Cloud using Heroku} \\ {\Large\itshape Deploy ML model on the Web App and Train on the Fly}} % Title and subtitle
\author{\textbf{Md Sohel Mahmood} \\ \textit{}} % Author and institution
\date{\today} % Date, use \date{} for no date
%----------------------------------------------------------------------------------------
\begin{document}
\maketitle % Print the title section
%----------------------------------------------------------------------------------------
% ABSTRACT AND KEYWORDS
%----------------------------------------------------------------------------------------
%\renewcommand{\abstractname}{Summary} % Uncomment to change the name of the abstract to something else
\clearpage
\begin{center}
\thispagestyle{empty}
\vspace*{\fill}
To my beloved wife Samia Jannat
\vspace*{\fill}
\end{center}
\clearpage
\tableofcontents
\clearpage
\begin{abstract}
Time series data is used to predict future trends. Stock price and weather forecasting techniques use machine learning algorithms to develop a sustainable model and look into the future course of the data. This project aims to develop a stock price prediction machine learning model and then deploy it. There are three stages in this project. First, a machine learning model is created for time series data extracted from Yahoo Finance. Next, a web app is developed locally using Python's Flask library. Finally, it is deployed on the Heroku cloud platform to run the application on the cloud.
\end{abstract}
\hspace*{3.6mm}\textit{Keywords:} Time Series, Machine Learning, LSTM, Web App, Python, Flask, Cloud Implementation % Keywords
\vspace{30pt} % Vertical whitespace between the abstract and first section
%----------------------------------------------------------------------------------------
% ESSAY BODY
%----------------------------------------------------------------------------------------
\section{Introduction}
We experience time series data every day and everywhere. Stock prices and weather forecasts are prime examples of time-stamped data that can be used for ML model deployment. There are other scenarios where people want to gauge the financial health of a company or country, measure quarterly metrics, analyze future markets, and so on, where time serves as the base dimension. In all cases, the data is a sequence of values collected at specific time steps. A time series can be either univariate or multivariate. A univariate time series has only one data point per time step, whereas a multivariate time series has multiple values of different variables per time step.
There are also examples with seasonality, trend and noise [Figure 1]. Seasonal data repeats its pattern when the season arrives. For example, ice-cream purchases increase every summer. There may also be a trend in the data, indicating upward or downward movement. If we collect ice-cream sales data over many years, we may see that people are buying more ice-cream in recent summers than in the summers of the 80's or 90's. This tells us that the series has an upward trend. Finally, there may be noise embedded in the data, which can corrupt it and make it difficult to extract the actual trend of the series.
%\begin{wrapfigure}{l}{0.42\textwidth} % Inline image example, use an 'r' column type to position the figure on the right
% \includegraphics[width=\linewidth]{time_series.png}
% \caption{An example fish.}
%\end{wrapfigure}
Here, the project aims at the following goals:
\begin{enumerate}
\item Create a machine learning model for the time series data extracted from Yahoo Finance
\item Develop a local web app using Python's Flask library
\item Deploy the final app on the Heroku cloud platform to run the application on the cloud
\end{enumerate}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.65\textwidth]{time_series.png} % Include the image placeholder.png
\caption{Time series data with trend and seasonality [1]}
\end{center}
\end{figure}
\section{Dependencies}
\begin{itemize}
\item Python 3.6+
\item Visualization libraries: Matplotlib
\item Libraries for data and array: pandas and numpy
	\item Machine learning libraries: TensorFlow, Keras, Scikit-Learn
\item Web App library: Flask
\item Financial data parsing library: Yfinance
\end{itemize}
\section{Git Repository}
To clone the git repository:
{\fontfamily{qcr}\selectfont
git clone https://github.com/mdsohelmahmood/stock-price-predict
}
\section{Execution of program}
\addcontentsline{toc}{section}{Part 1}
\section*{Part 1}
The first step is to create an ML model for the time series data from the historical stock price. The data is extracted from Yahoo Finance using the yfinance library.
We start the project code by importing the necessary libraries.\\
{\fontfamily{qcr}\selectfont
\noindent import tensorflow as tf\\
import pandas as pd\\
import numpy as np\\
import matplotlib.pyplot as plt\\
import keras\\
from keras.models import Sequential\\
from keras.layers import Dense\\
from keras.layers import LSTM\\
from keras.layers import Dropout\\
from sklearn.preprocessing import MinMaxScaler\\
from sklearn.metrics import mean\_squared\_error\\
from sklearn.metrics import mean\_absolute\_error\\
from sklearn.model\_selection import train\_test\_split\\
import yfinance as yf\\
}
Next, the historical data is imported using yfinance. To parse the data, the stock name needs to be keyed in. For example, to get the historical price of Bitcoin in USD, the name should be ``BTC-USD'' [Figure 2].\\
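The exact call used in the project code may differ, but a minimal sketch of this step with the yfinance library looks like the following (the variable 'hist' is the dataframe used in the rest of the code):\\
{\fontfamily{qcr}\selectfont
\noindent \# Sketch: download the full price history for the chosen ticker\\
ticker = yf.Ticker('BTC-USD')\\
hist = ticker.history(period='max')\\
}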
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{hist.png} % Include the image placeholder.png
\caption{Historical trend of the stock}
\end{center}
\end{figure}
The next phase is the creation of the training and test datasets. The training dataset is the first 80 percent of the total data, and the remaining 20 percent will be predicted and used as test data.\\
{\fontfamily{qcr}\selectfont
\noindent df=hist\\
d=30\\
ahead=10\\
n=int(hist.shape[0]*0.8)\\
training\_set = df.iloc[:n, 1:2].values\\
test\_set = df.iloc[n:, 1:2].values\\
}
The dataset is scaled and reshaped in the following step.\\
{\fontfamily{qcr}\selectfont
\noindent sc = MinMaxScaler(feature\_range = (0, 1))\\
training\_set\_scaled = sc.fit\_transform(training\_set)\\
X\_train = []\\
y\_train = []\\
for i in range(d, n-ahead):\\
X\_train.append(training\_set\_scaled[i-d:i, 0])\\
y\_train.append(training\_set\_scaled[i+ahead, 0])\\
X\_train, y\_train = np.array(X\_train), np.array(y\_train)\\
X\_train = np.reshape(X\_train, (X\_train.shape[0], X\_train.shape[1], 1))\\
}
Then the model is defined using several LSTM layers from TensorFlow. TensorFlow is an open-source library for machine learning. It can handle sequence information using Recurrent Neural Networks (RNN). TensorFlow has several recurrent layer types, including SimpleRNN, Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). This project utilizes the LSTM layer.\\
While passing through an RNN, some information may be lost due to data transformations. If the initial data is lost, the subsequent result will have no trace of the first inputs. This is true for humans too: if we do not memorize a piece of data, it will fade from our memory after some time. To fix this issue, an RNN can be deployed with Long Short-Term Memory (LSTM) cells that effectively retain information about previous data. So while using LSTM, we do not need to worry about long-term dependencies in the data. LSTM was first introduced in 1997 by Hochreiter and Schmidhuber [2] and was developed further in later years.
The model is defined as below.\\
{\fontfamily{qcr}\selectfont
\noindent model = Sequential()\\
model.add(LSTM(units = 100, return\_sequences = True, input\_shape = (X\_train.shape[1], 1)))\\
model.add(Dropout(0.2))\\
model.add(LSTM(units = 100, return\_sequences = True))\\
model.add(Dropout(0.2))\\
model.add(LSTM(units = 50, return\_sequences = True))\\
model.add(Dropout(0.2))\\
model.add(LSTM(units = 50))\\
model.add(Dense(units = 1))\\
model.compile(optimizer = 'adam', loss = 'mean\_squared\_error')\\
model.fit(X\_train, y\_train, epochs = 50, batch\_size = 32)\\
}
When the model runs, it will report the loss value after each epoch. As the model trains for more epochs, the loss value will drop. Four LSTM layers are added, with 100 units for the first two layers and 50 units for the last two. Dropping 20 percent of the output reduces overfitting of the training data. One dense layer with a single unit is added at the bottom, and the model is then compiled with the 'adam' optimizer and 'mean\_squared\_error' loss. \\
The model is saved in the next phase.\\
{\fontfamily{qcr}\selectfont
\noindent model.save("BTC-predict.h5")
}
Once the model is defined, the test dataset is extracted from the historical data, and from there we take the first 30 days as input to the model (this number can be defined by the user, but here, for the sake of understanding, 30 days are used to predict the 31st point, or whichever point the user wants). \\
{\fontfamily{qcr}\selectfont
\noindent dataset\_train = df.iloc[:n, 1:2]\\
dataset\_test = df.iloc[n:, 1:2]\\
dataset\_total = pd.concat((dataset\_train, dataset\_test), axis = 0)\\
inputs = dataset\_total[len(dataset\_total) - len(dataset\_test) - d:].values\\
inputs = inputs.reshape(-1,1)\\
inputs = sc.transform(inputs)\\
}
The test data is reshaped afterwards.\\
{\fontfamily{qcr}\selectfont
\noindent X\_test = []\\
for i in range(d, inputs.shape[0]):\\
X\_test.append(inputs[i-d:i, 0])\\
X\_test = np.array(X\_test)\\
X\_test = np.reshape(X\_test, (X\_test.shape[0], X\_test.shape[1], 1))\\
print(X\_test.shape)\\
}
The final step of part 1 is to predict the test data set and plot along with the actual dataset [Figure 3].\\
{\fontfamily{qcr}\selectfont
\noindent predicted\_stock\_price = model.predict(X\_test)\\
predicted\_stock\_price = sc.inverse\_transform(predicted\_stock\_price)\\
df['Date']=df.index\\
df=df.reset\_index(drop=True)\\
plt.plot(df.loc[n:, 'Date'],dataset\_test.values, color = 'red', label = 'Real Bitcoin Stock Price')\\
plt.plot(df.loc[n:, 'Date'],predicted\_stock\_price, color = 'blue', label = 'Predicted Bitcoin Stock Price')\\
plt.title('Bitcoin Price Prediction')\\
plt.xlabel('Time')\\
plt.ylabel('Bitcoin Price')\\
plt.legend()\\
plt.xticks(rotation=90)\\
plt.show()\\
}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.8\textwidth]{pred.png} % Include the image placeholder.png
		\caption{Predicted stock price plotted with the actual price}
\end{center}
\end{figure}
\addcontentsline{toc}{section}{Part 2}
\section*{Part 2}
In Part 2, local hosting of the web app is implemented. For the local server, Python's Flask library is used in this project because of its ease of deployment. The first step in Part 2 is to consolidate all the project files inside a project folder [Figure 4]. The project folder contains the main project file in .py format and the saved model from Part 1. The HTML files are saved inside the 'templates' folder. That's all we need to run the app on localhost.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{dir.png} % Include the image placeholder.png
\caption{Files inside the project folder}
\end{center}
\end{figure}
Once the project folder is set up, the command line is used to initiate the web app. First, a separate Python environment is created for Flask. The environment is named 'flask\_env' and is then activated using 'conda'.
'Conda' is a Python package and environment manager.\\
{\fontfamily{qcr}\selectfont
\noindent conda create --name flask\_env\\
conda activate flask\_env\\
}
Then the current directory is changed to the project folder using 'cd'. Afterwards, the stock.py file is set as the running script for the web app with the following command.\\
{\fontfamily{qcr}\selectfont
\noindent set FLASK\_APP=stock.py\\
}
Now the HTML files need to be set up for proper user input. I have created two HTML files. One is named 'form.html', which takes the user input for the machine learning model training, and the other is 'plot.html', which delivers the predicted plot. These two files are placed inside the templates folder.
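The actual stock.py in the repository may be organized differently, but a minimal sketch of how Flask can wire these two pages together looks like this (route names, form field names and function bodies are illustrative):\\
{\fontfamily{qcr}\selectfont
\noindent \# Illustrative sketch only, not the project's actual stock.py\\
from flask import Flask, render\_template, request\\
app = Flask(\_\_name\_\_)\\
@app.route('/')\\
def form():\\
~~~~return render\_template('form.html')\\
@app.route('/plot', methods=['POST'])\\
def plot():\\
~~~~stock = request.form['stock']  \# e.g. 'BTC-USD'\\
~~~~\# train or load the model here and generate the plot for plot.html\\
~~~~return render\_template('plot.html')\\
}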
In the final step, the 'flask run' command is executed to deploy the app on the local server [Figure 5].
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{cmd1.png} % Include the image placeholder.png
\caption{Flask app running at localhost having address of 127.0.0.1 at port 5000}
\end{center}
\end{figure}
Figure 6 shows the front end of the web app. It displays the necessary input explanations with the textboxes below. With this app I am basically trying to predict the future trend of any stock. The ML model can be trained differently based on different user inputs [Figure 7]. The model can therefore be trained on the fly, and the user gets an instant result based on the inputs.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\textwidth]{form.png}
\caption{Front-end of the web app}
\end{center}
\end{figure}
\begin{enumerate}
	\item Stock Name is the capitalized ticker symbol of the stock: AAPL for Apple, BTC-USD for Bitcoin, TSLA for Tesla.
	\item Epochs is the number of passes over the entire training dataset that the machine learning algorithm completes. The more epochs, the better the accuracy. Epochs can be set to 50 for good performance.
	\item Ahead is the number of days the user wants to predict ahead of time. The smaller the value of ``Ahead'', the higher the accuracy, because the machine has a harder time predicting further into the future.
	\item Days is the number of days the user wants to use as the input window to predict the future. For example, one can use 30 days of data to predict the 31st day (ahead=1) or use the same data to predict the 40th day (ahead=10).
\end{enumerate}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{user.png} % Include the image placeholder.png
\caption{Sample user input for Bitcoin (BTC-USD)}
\end{center}
\end{figure}
While the machine is learning from the training data, the command line will show the progress of the epochs [Figure 8]. A higher number of epochs will take longer to complete training.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.65\textwidth]{epoch.png} % Include the image placeholder.png
\caption{Back-end training of the model}
\end{center}
\end{figure}
When training completes, the app redirects to plot.html and delivers the predicted price along with the actual price of the stock [Figure 9].
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.75\textwidth]{plot.png}
\caption{Output of the web app delivering the predicted price of the stock on localhost}
\end{center}
\end{figure}
\addcontentsline{toc}{section}{Part 3}
\section*{Part 3}
In Part 3, I implement the stock price prediction web app on the Heroku cloud server. The free Heroku account provides 500 MB of RAM, which is not enough for training the model with TensorFlow on the fly; TensorFlow itself consumes more than 300 MB. Therefore, I later built and saved the model. Since the saved model has all the necessary information to predict the test data and TensorFlow is not required, the app stays below the 500 MB limit and can easily be deployed on Heroku.
There are 2 additional requirements for Heroku to implement the web app.
\begin{enumerate}
\item requirements.txt
\item Procfile
\end{enumerate}
The requirements.txt file is a simple text file containing all the required libraries for the app to run. It will look something like Figure 10.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{req.png} % Include the image placeholder.png
\caption{Content of requirements.txt file}
\end{center}
\end{figure}
This file is created using the following command.
{\fontfamily{qcr}\selectfont
\noindent pip freeze > requirements.txt\\
}
Then a Procfile is required, which declares the type of the app and the app file to execute. The Procfile is created on the command line.\\
{\fontfamily{qcr}\selectfont
\noindent echo web: gunicorn stock:app > Procfile\\
}
It contains only one line. 'Gunicorn' is a Python WSGI (web server gateway interface) HTTP server that serves the app in the cloud. I have used git commands to commit the project files and push them to the server. The command sequence is below.\\
{\fontfamily{qcr}\selectfont
\noindent git init\\
git add .\\
git commit -am "initial commit"\\
}
Then Heroku login is required to push the project files. \\
{\fontfamily{qcr}\selectfont
\noindent heroku login\\
heroku create\\
heroku rename new-name\\
git push heroku master\\
}
The command 'heroku login' will prompt the user to log in in the browser. Once the project is created, the files are subsequently pushed to the master branch [Figure 11].
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\textwidth]{heroku1.png} % Include the image placeholder.png
\caption{Heroku deployment completion}
\end{center}
\end{figure}
When the app is opened by going to the provided https link, the same front end [Figure 12] is displayed, but now on the cloud.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\textwidth]{heroku2.png} % Include the image placeholder.png
\caption{Front-end of the web app on Heroku server}
\end{center}
\end{figure}
I proceeded with similar user inputs to train the model on the fly, but with the limited RAM the training could not complete. Therefore, I saved the trained model, placed it inside the project folder, and committed it. In this way, I do not need the TensorFlow library or the other ML libraries, the required RAM is reduced well below 500 MB, and the app can easily be deployed on Heroku. The downside of this kind of pre-saved model implementation is that the model is not trained on the latest data and will provide predictions based on the owner's saved model.
To use the model in the Flask app, I created a separate .py file that loads the pre-saved model and tests the data at hand. The file is saved as model\_stock.py in the same working directory. Only the Procfile is changed, and it is redirected to this new .py file.\\
{\fontfamily{qcr}\selectfont
\noindent web: gunicorn stock\_model:app\\
}
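Inside this new file, loading the pre-saved model can be as simple as the following sketch (illustrative; the actual file in the repository may differ):\\
{\fontfamily{qcr}\selectfont
\noindent \# Illustrative sketch of using the pre-saved model\\
from keras.models import load\_model\\
model = load\_model('BTC-predict.h5')  \# model saved in Part 1\\
\# X\_test is prepared exactly as in Part 1, then:\\
predicted = model.predict(X\_test)\\
}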
Following similar git commands, I pushed the project along with the saved model successfully to the Heroku cloud. The app then shows the predicted price trend of the stock on the cloud [Figure 13].
\begin{figure}[!h]
\begin{center}
\includegraphics[width=\textwidth]{heroku3.png} % Include the image placeholder.png
\caption{Predicted output price of stock from the web app on Heroku server}
\end{center}
\end{figure}
\section{Discussion}
The web app developed to predict stock prices works for any stock, since the user has the flexibility to input their preferred stock name. The users can also specify how many days ahead in time they want to predict and how many training samples they want to use to predict one future data point. They also have the flexibility to modify the number of epochs so that the loss function can be minimized. The Heroku platform's limited RAM prevented the app from training the model on the fly on the cloud, so the pre-saved trained model was uploaded instead to reduce the RAM usage. The drawback of this approach is that the user can only get predictions for the specific stock that the owner has uploaded. If more memory were available, the cloud app would behave the same as the local server and would provide more flexibility to the users. That being said, the web app has been successfully deployed on the local server and on the cloud using the LSTM machine learning algorithm.
\section{Acknowledgement}
I would like to acknowledge the learning I received from the book [3] by Aurelien Geron and the book [1] by Laurence Moroney. I also thank Udacity for providing me the opportunity to dive deep into the data science realm. The online HTML editor [4] has also eased the HTML coding.
\section{Conclusion}
This work demonstrates the implementation of a stock price prediction model using machine learning libraries. The model seems to follow the actual price of stocks reliably up to 10 days ahead of time. Four LSTM layers have been used in the model, and dropouts are added to minimize overfitting of the data. The app is deployed on the local server, which provides the flexibility to input any stock's name as well as the other variable features that the user can control. The app is then deployed on the Heroku cloud platform using the pre-saved model. This web app has shown the capability to be trained on any time series data and has successfully predicted the stock price.
\begin{thebibliography}{9}
\bibitem{latexcompanion}
Laurence Moroney,
\textit{AI and Machine Learning for Coders, Chapter 9},
\\\texttt{(https://github.com/lmoroney/tfbook/tree/master/chapter9)},
O'Reilly Media Inc., Sebastopol, CA 95472
\bibitem{latexcompanion}
Sepp Hochreiter and Jurgen Schmidhuber,
\textit{LONG SHORT-TERM MEMORY},
\\\texttt{(https://www.bioinf.jku.at/publications/older/2604.pdf)},
Neural Computation 9(8):1735-1780, 1997.
\bibitem{latexcompanion}
Aurelien Geron,
\textit{Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems},
O'Reilly Media Inc., Sebastopol, CA 95472
\bibitem{latexcompanion}
HTML editor,
\\\texttt{(https://html-online.com/editor/)}
\end{thebibliography}
%----------------------------------------------------------------------------------------
% BIBLIOGRAPHY
%----------------------------------------------------------------------------------------
\end{document}
% $Id$
%
% Earth System Modeling Framework
% Copyright 2002-2020, University Corporation for Atmospheric Research,
% Massachusetts Institute of Technology, Geophysical Fluid Dynamics
% Laboratory, University of Michigan, National Centers for Environmental
% Prediction, Los Alamos National Laboratory, Argonne National Laboratory,
% NASA Goddard Space Flight Center.
% Licensed under the University of Illinois-NCSA License.
%\subsection{Description}
\subsubsection{Profiling}
\label{sec:Profiling}
ESMF's built in {\em profiling} capability collects runtime statistics
of an executing ESMF application through both automatic and manual code
instrumentation. Timing information for all phases of all ESMF components
executing in an application can be automatically collected using the
{\tt ESMF\_RUNTIME\_PROFILE} environment variable (see below for settings).
Additionally, arbitrary user-defined code regions can be timed by
manually instrumenting code with special API calls. Timing profiles
of component phases and user-defined regions can be output in several
different formats:
\begin{itemize}
\item in text at the end of ESMF Log files
\item in separate text file, one per PET (if the ESMF Logs are turned off)
\item in a single summary text file that aggregates timings over multiple PETs
\item in a binary format for import into \htmladdnormallink{Cupid}{https://earthsystemcog.org/projects/cupid/} for detailed analysis
\end{itemize}
The following table lists important environment variables that control
aspects of ESMF profiling.
\begin{tabular} {|p{6cm}|p{8cm}|p{6cm}|p{6cm}|}
\hline\hline
{\bf Environment Variable} & {\bf Description} & {\bf Example Values} & {\bf Default}\\
\hline\hline
{\tt ESMF\_RUNTIME\_PROFILE} & Enables/disables all profiling functions & {\tt ON} or {\tt OFF} & {\tt OFF} \\
\hline\hline
{\tt ESMF\_RUNTIME\_PROFILE\_PETLIST} & Limits profiling to an explicit list of PETs & ``{\tt 0-9 50 99}'' & {\em profile all PETs}\\
\hline\hline
{\tt ESMF\_RUNTIME\_PROFILE\_OUTPUT} & Controls output format of profiles; multiple can be specified in a space separated list & {\tt TEXT}, {\tt SUMMARY}, {\tt BINARY} & {\tt TEXT} \\
\hline\hline
\end{tabular}
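For example, assuming a bash-like shell, setting
{\tt export ESMF\_RUNTIME\_PROFILE=ON},
{\tt export ESMF\_RUNTIME\_PROFILE\_PETLIST="0-9"} and
{\tt export ESMF\_RUNTIME\_PROFILE\_OUTPUT=SUMMARY}
before launching the application would collect timings on PETs 0 through 9 only and
write a single aggregated summary text file. The variable names and values here are
taken from the table above; the exact export syntax depends on the shell and batch
environment.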
\subsubsection{Tracing}
\label{sec:Tracing}
Whereas profiling collects summary information from an application,
{\em tracing} records a more detailed set of events for later analysis. Trace
analysis can be used to understand what happened during a program's
execution and is often used for diagnosing problems, debugging, and
performance analysis.
ESMF has a built-in tracing capability that records events into special
binary log files. Unlike log files written by the {\tt ESMF\_Log} class,
which are primarily for human consumption (see Section \ref{sec:Log}),
the trace output files are
recorded in a compact binary representation and are processed by tools
to produce various analyses. ESMF event streams are recorded in the
\htmladdnormallink{Common Trace Format}{http://diamon.org/ctf/} (CTF).
CTF traces include one or more event streams,
as well as a metadata file describing the events in the streams.
Several tools are available for reading in the CTF traces output by ESMF.
Of the tools listed below, the first one is designed specifically for
analyzing ESMF applications and the second two are general purpose tools
for working with all CTF traces.
\begin{itemize}
\item \htmladdnormallink{Cupid}{https://earthsystemcog.org/projects/cupid/}
is a plugin for the Eclipse Integrated Development Environment
that can read and analyze ESMF traces.
\item \htmladdnormallink{TraceCompass}{http://tracecompass.org/}
is a general purpose tool for reading, analyzing, and visualizing traces.
\item \htmladdnormallink{Babeltrace}{http://www.efficios.com/babeltrace}
is a command-line tool and library for trace conversion
that can read and write CTF traces. Python bindings are available
to open CTF traces and iterate through events.
\end{itemize}
Events that can be captured by the ESMF tracer include the following. Events
are recorded with a high-precision timestamp to allow timing analyses.
\begin{description}
\item [phase\_enter] indicates entry into an initialize, run, or finalize ESMF component routine
\item [phase\_exit] indicates exit from an initialize, run, or finalize ESMF component routine
\item [region\_enter] indicates entry into a user-defined code region
\item [region\_exit] indicates exit from a user-defined code region
\item [mem] records current memory usage information
\end{description}
The following table lists important environment variables that control
aspects of ESMF tracing.
\begin{tabular} {|p{6cm}|p{8cm}|p{6cm}|p{6cm}|}
\hline\hline
{\bf Environment Variable} & {\bf Description} & {\bf Example Values} & {\bf Default}\\
\hline\hline
{\tt ESMF\_RUNTIME\_TRACE} & Enables/disables all tracing functions & {\tt ON} or {\tt OFF} & {\tt OFF} \\
\hline\hline
{\tt ESMF\_RUNTIME\_TRACE\_CLOCK} & Sets the type of clock for timestamping events (see Section \ref{sec:TracingClocks}). & {\tt REALTIME} or {\tt MONOTONIC} or {\tt MONOTONIC\_SYNC} & {\tt REALTIME}\\
\hline\hline
{\tt ESMF\_RUNTIME\_TRACE\_PETLIST} & Limits tracing to an explicit list of PETs & ``{\tt 0-9 50 99}'' & {\em trace all PETs}\\
\hline\hline
{\tt ESMF\_RUNTIME\_TRACE\_COMPONENT} & Enables/disables tracing of Component phase\_enter and phase\_exit events & {\tt ON} or {\tt OFF} & {\tt ON} \\
\hline\hline
{\tt ESMF\_RUNTIME\_TRACE\_FLUSH} & Controls frequency of event stream flushing to file & {\tt DEFAULT} or {\tt EAGER} & {\tt DEFAULT} \\
\hline\hline
\end{tabular}
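For example, assuming a bash-like shell, setting
{\tt export ESMF\_RUNTIME\_TRACE=ON} and
{\tt export ESMF\_RUNTIME\_TRACE\_PETLIST="0"}
before launching the application would enable tracing on PET 0 only, using the default
{\tt REALTIME} clock and including Component {\tt phase\_enter} and {\tt phase\_exit}
events (since {\tt ESMF\_RUNTIME\_TRACE\_COMPONENT} defaults to {\tt ON}). As above,
the exact export syntax depends on the shell and batch environment.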
\documentclass[10pt,a4paper]{article}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{pdfpages}
\usepackage[left=2.00cm, right=2.00cm, top=2.00cm, bottom=2.00cm]{geometry}
\begin{document}
\begin{center}
\Large{\underline{\bf Methods Attempted}}
\end{center}
\section{Original Report Method}
We wish to find the probability of moving from a graph \(G\) to any other graph \(G'\) in the space of all graphs on \(n\) nodes, \(\mathcal{G}\), that is \(\mathbb{P}(G'|G)\). We can do this by using the Law of Total Probability, and partitioning the space by counting how many edge swaps are required to get from \(G\) to \(G'\). We will denote this \(\delta\).\par
We restrict our possible moves as follows:
\[\mathbb{P}(G'|G) = \begin{cases}
f(G,\delta) & \text{if }G' \text{ is within } \delta \text{ edge swaps of }G\\
0
& \text{otherwise}
\end{cases}\]
We define the set of all graphs within \(\delta\) edge swaps of \(G\) to be the \(\delta\)-neighbourhood of \(G\), denoted \(\mathrm{nbhd}_{\delta}(G)\). \par
If we restrict our choice of \(\delta\) to an interval \(\left[ -W/2,W/2\right]\backslash\{0\}\), we obtain
\begin{align*}
\mathbb{P}(G'|G)= \sum_{\substack{d=-W/2,\\ d\neq0}}^{W/2}\mathbb{P}(G'|G,\delta=d)\mathbb{P}(\delta=d)
\end{align*}
This can be calculated explicitly for \(W=2\).
\begin{align*}
\mathbb{P}(G'|G,\delta=-1) = \frac{\binom{m}{1}}{\binom{\binom{n}{2}}{m}}
\end{align*}
There are \(m\) possible edges to remove, of which we are choosing 1. This is divided by the number of graphs with \(m\) edges on \(n\) nodes. Similarly,
\begin{align*}
\mathbb{P}(G'|G,\delta=1) = \frac{\binom{\binom{n}{2}-m}{1}}{\binom{\binom{n}{2}}{m}}
\end{align*}
Assigning a discrete uniform distribution on \(\delta\) gives \(\mathbb{P}(\delta=d)=\frac{1}{W}=\frac{1}{2}\). As such,
\[\mathbb{P}(G'|G) = \frac{1}{2}\left[\frac{\binom{m}{1}}{\binom{\binom{n}{2}}{m}}+\frac{\binom{\binom{n}{2}-m}{1}}{\binom{\binom{n}{2}}{m}}\right] = \frac{m+\left(\binom{n}{2}-m\right)}{2\binom{\binom{n}{2}}{m}} = \frac{\binom{n}{2}}{2\binom{\binom{n}{2}}{m}}\]
To extend to larger windows, I thought it would work in the same way, but with different numbers of links chosen in the numerator.
\begin{align*}
q(G,G')=\frac{1}{W\binom{\binom{n}{2}}{m}}\left[\sum_{d=1}^{W/2} \binom{m}{d}+\binom{\binom{n}{2}-m}{d}\right]
\end{align*} However, this fails even for \(W=4\), as it gives a probability greater than 1. I think the error lies in the denominator, as it remains constant while the numerator grows with every increase in window size. Using this in the algorithm clearly does not work: no proposals are accepted.
\section{Report Code}
In the algorithm, this was coded erroneously. The rejection step was inverted, i.e. \(\Delta Q(G,G') = -\log q(G,G')+\log q(G',G)\). When running this with \(n=100, p=0.01\) for \(10^6\) iterations, convergence is good. The Kolmogorov-Smirnov statistic is 0.0163. Figure \ref{fig:goodhist} shows a histogram comparison and a QQ plot.
\begin{figure}[h]
\centering
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{histogram}
\end{subfigure}\\%
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{qq}
\end{subfigure}
\caption{Good Convergence for 100 nodes, 0.01 link density}
\label{fig:goodhist}
\end{figure}
\begin{figure}[h]
	\centering
	\includegraphics[width=\textwidth]{ksstat}
	\caption{KS statistic for varying runtime, 100 nodes, 0.01 link density}
	\label{fig:ksstat}
\end{figure}
For \(n=1000\), \(p=0.01\): KS statistic 0.2824. The mean should be 4995.
\begin{figure}[h]
\includegraphics[width=\textwidth]{badhist}
\end{figure}
\begin{figure}[h]
\includegraphics[width=\textwidth]{qqbad}
\caption{Bad Convergence for 1000 nodes, 0.01 link density}
\label{fig:badhist}
\end{figure}
\section{Discrete uniform over entire Graph space}
This suggests \(\mathbb{P}(G'|G) = \frac{\mathbb{P}(\delta=m'-m)}{|\mathrm{nbhd}_{\delta}(G)|}\), but this also does not work.
\section{Difference in m,m'}
Not yet coded; this is the current best guess.
\end{document}
%&context
\section[sct_ed]{Stream editing}
\subsection{Background}
The programs we have seen so far are all specialized for a given task,
hopefully one of the more frequent ones. It is, of course, impossible to
foresee every way in which one may need to process a tree, and with this we hit
a limit to specialization. In some cases, a more general-purpose program may
offer a solution.
As an analogy, consider the task of finding lines in a text that match a
given pattern. This can be done, for example, in the following ways, from the
general to the specific:
\startitemize
\item a Perl program
\item a \progname{sed} one-liner
\item a \progname{grep} command
\stopitemize
Perl is a general-purpose language, it just happens to be rather good at
processing text.\footnote{Ok, Perl was {\em initially} designed for processing
text -- it's the Practical Extraction and Report Language, after all -- but it
has long grown out of this initial specialization.} Sed is specialized for
editing text streams, and \progname{grep} is designed for precisely the line-finding
task in question.\footnote{The name "grep" comes from the \progname{sed} expression
{\tt g/re/p}, where "re" stands for "regular expression".} We should expect
\progname{grep} to be the most efficient, but we should not expect it to be able to
perform any significantly different task. By contrast, Perl may be (I haven't
checked!) less efficient than \progname{grep}, but it can handle pretty much any
task. Sed will lie in between. The \nutils{} programs we have seen so far are
\progname{grep}-like: they are specialized for one task (and hopefully, they are
efficient).
The programs described in this section are more \progname{sed}-like: they are
less specialized, usually less efficient, but more flexible than the ones shown
up to now. They were in fact inspired by \progname{sed} and \progname{awk},
which perform an action on the parts of input (usually lines) that meet some
condition. Rather than lines in a file, the programs presented here work with
nodes in a tree: each node is visited in turn, and if it meets a user-specified
condition, a user-specified action is performed. In other words, they are
node-oriented stream editors for trees.
As a word of warning, I should say that these programs are among the more
experimental in the \nutils{} package. This is why there are three programs
that do basically the same thing, although differently and with different
capabilities: \ed{} is the simplest (and first), it is written entirely in C
and it is fairly limited. \sched{} was developed to address \ed{}'s
limitations: by embedding a Scheme (\from[URL:Scheme]) interpreter (\gnu{}
Guile, \from[URL:Guile]), its flexibility is, for practical purposes,
limitless. Of course, this comes at the price of linking to an external
library, which may not be available. Therefore \ed{}, for all its limitations,
will stay in the package as it has no external dependency. Finally, I
understand that Scheme is not the only solution as an embedded language, and
that many people (myself included) find learning it a bit of a challenge.
Therefore, I tried the same approach with Lua\footnote{In case you're curious,
the reason I tried Scheme before Lua is simply that I heard about them in that
order.} (\from[URL:Lua]), which is designed as an embeddable language, is even
smaller than Guile, and by most accounts easier to learn.\footnote{And, in my
experience, easier to embed in a C program, but your experience may differ. In
particular, I could provide all of \luaed's functionality without writing a
single line of Lua code, whereas \sched{} relies on a few dozen lines of
embedded Scheme code that have to be parsed and interpreted on each run. But
that may very possibly just reflect my poor Scheme/Guile skills. Furthermore, I
can apparently run \luaed{} through Valgrind's (\from[URL:Valgrind])
memcheck utility without problems (I do this with all the programs in the
utils), but with \sched{} I get tons of error messages. But it may be that I
don't get how to manage memory with Guile}. The result, \luaed, is probably the
best so far: as powerful as \sched, while smaller, faster and easier to use.
For this reason, I will probably not develop \sched{} much more, but I won't
drop it altogether either, not soon at any rate.
\subsection{The General Idea}
\ed, \sched, and \luaed{} work in the same way: they iterate over the nodes in a
specific order (\nw{} order by default), and for each node they evaluate a
logical expression provided by the user. If the expression is true, they perform
a user-specified action. By default, the (possibly modified) tree is printed at
the end of the run.
Where the programs differ is the way the expression and action are expressed; as
well as the expressive power of the languages used; some of them can also
perform actions at the start or end of the run, or of each input tree. These are
summarized in the table below.
\startplacesilenttable
\startlocalfootnotes
\starttabulate[|l|l|l|l|]
\NC \NC \ed \NC \sched \NC \luaed \NC\NR
\HL
\NC language \NC own \NC Scheme \NC Lua \NC\NR
\NC programming constructs \NC no \NC Scheme's \NC Lua's \NC\NR
\NC functions \NC fixed \NC arbitrary \footnote{(\ie, user can define their own)} \NC arbitrary \NC\NR
\NC depends on \NC nothing \NC \gnu{} Guile \NC Lua library \NC\NR
\NC pre- \& post-tree code \NC no \NC yes \NC yes \NC\NR
\NC pre- \& post-run code \NC no \NC yes \NC yes \NC\NR
\stoptabulate
\placelocalfootnotes
\stoplocalfootnotes
\stopplacesilenttable
\subsection[sct:luaed]{\luaed}
Although \luaed{} is the most recent of the three, we'll cover it first because
if this one does what you need it's unlikely you'll need the others. Let's look
at an example before we jump into the details. Here is a tree of vertebrate
genera, showing support values:
\svgCmdOutput{ed_1}
Let's extract all well-supported clades, using a support value of
95\% or more as the criterion for being well-supported. In our jargon, the
{\em condition} would be that a node i) have a support value in the first place
(some nodes don't, \eg{} the root and the \lca{} of (Fugu,Tetraodon)), and ii)
that this value be no less than 95. The {\em action} would simply be to print
out the tree rooted at the current node.
\txtCmdOutput{ed_10}
Note that the \id{(Papio,(Hylobates,Homo))} clade appears twice -- once on
its own, and once within a larger clade (the tetrapods). This is because both
clades meet the condition -- the first has support value 99, the second has 100.
There is a way (see Examples below) of only showing non-overlapping clades,
which results in the "deepest" of two overlapping clades to be printed.
As always, the first argument to the program is a tree file,
\filename{vrt2cg.nw} in this example.
The second argument, \type{b ~= nil and b >= 95}, is the condition. In this
case, it is just the conjunction (\code{and}) of two expressions
\type{b ~= nil} and \code{b >= 95}. The former checks that the node's support
value (variable \code{b}) is defined (\ie, not \code{nil}); the latter
checks that the support value is no less than 95. Note that the checks occur in
that order, and that if \code{b} isn't defined, the second check isn't even
performed, as it is meaningless.
The third argument, \code{s()}, is the action: it specifies what to do when a
node meets the condition -- in this case, call function \code{s},
which just prints the tree rooted at the current node.
\subsubsection{Conditions}
Conditions are Boolean expressions usually involving node properties which are
available as predefined variables. As the program "visits" each node in turn,
the variables are set to the current node's properties. These predefined
variables have short names, to keep expressions concise. They are shown in table
\in{}[tbl:luaedPredefVar].
\placetable[][tbl:luaedPredefVar]{%
Predefined variables in \luaed. Variables \code{b} and \code{lbl} are
both derived from the label, but \code{b} is interpreted as a number, and
is undefined if the conversion to a number fails, or if the node is a leaf.
Edge length and depth (\code{L} and \code{d}) are undefined (not zero!)
if not specified in the Newick tree, as in cladograms.
}{%
\starttabulate[|c|l|l|]
\NC name \NC (Lua) type \NC meaning (refers to the current node)\NC\NR
\HL
\NC \code{a} \NC integer \NC number of ancestors \NC\NR
\NC \code{b} \NC number \NC support value (or \code{nil}) \NC\NR
\NC \code{c} \NC integer \NC number of children (direct descendants) \NC\NR
\NC \code{D} \NC integer \NC total number of descendants (includes children) \NC\NR
\NC \code{d} \NC number \NC depth (distance to root) \NC\NR
\NC \code{i} \NC Boolean \NC true iff node is strictly internal (i.e., not root!) \NC\NR
\NC \code{lbl} \NC string \NC label \NC\NR
\NC \code{l} (ell) \NC Boolean \NC true iff node is a leaf \NC\NR
\NC \code{L} \NC number \NC parent edge length \NC\NR
\NC \code{N} \NC node \NC the current node itself \NC\NR
\NC \code{r} \NC Boolean \NC true iff node is the root \NC\NR
\stoptabulate
}
The condition being just a Boolean expression written in Lua, all the logical
operators of the Lua language can be used (indeed, any valid Lua snippet can be
used, provided it evaluates to a Boolean), and you can use parentheses to
override operator precedence or for clarity.
Here are some examples of \luaed{} conditions:
\starttabulate[|c|l|]
\NC expression \NC selects: \NC\NR
\HL
\NC \code{l} (lowercase ell) \NC all leaves \NC\NR
\NC \code{l and a <= 3} \NC leaves with 3 ancestors or less \NC\NR
\NC \code{i and (b \~{ }= nil) and (b >= 95)} \NC internal nodes with support $\geq$ 95\% \NC\NR
\NC \code{i and (b \~{ }= nil) and (b < 50)} \NC unsupported nodes (less than 50\%) \NC\NR
\NC \code{not r} \NC all nodes except the root \NC\NR
\NC \code{c > 2} \NC multifurcating nodes \NC\NR
\stoptabulate
Notes:
\startitemize
\item If it is certain that all nodes do have support, checks such as \type{b ~= nil} can be omitted.
\item if an action must be performed on every node, just pass \code{true} as
the condition.
\stopitemize
\subsubsection{Actions}
Actions are arbitrary Lua expressions. These will typically involve printing out
data or altering node properties or even tree structure. \luaed{} predefines a
few functions for such purposes (\in{table}[tbl:luaedPredefFunc]), and you can
of course write your own (unless the function is very short, this is easier if
you pass the Lua code in a file, see \in{}[sct:lua_file_input]).
\placetable[][tbl:luaedPredefFunc]{%
Predefined actions in \luaed. The names are one letter long for convenience
when passing the action on the command line. When called without an
argument, these functions operate on the current node (\ie, \code{s()} is
the same as \code{s(N)} (where \code{N} means the current node -- see
\in{table}[tbl:luaedPredefVar]).
}{%
\starttabulate[|c|l|c|]
\NC code \NC effect \NC modifies tree?\NC\NR
\HL
\NC \code{o} \NC splice out the node \NC yes \NC\NR
\NC \code{s} \NC print the subtree rooted at the node \NC no \NC\NR
\NC \code{u} \NC delete ("unlink") the node (and all descendants) \NC yes
\NC\NR
\stoptabulate
}
\luaed{} defines a "node" type, and the current node is always accessible as
variable \code{N} (other nodes can be obtained through node properties, see
below). Node properties can be accessed as fields in a Lua table. Table \in{}[tbl:luaedNodeProp] lists the available node
fields.
\placetable[][tbl:luaedNodeProp]{%
Node properties accessible from Lua. rw: read-write, ro: read only. Some
fields have both a short and a long name, the former is intended for use on
the command line (where space is at a premium), the latter is for use in
scripts (but both can be used anywhere). Note that when referring to the
{\em current} node, the predefined variables (see table
\in{}[tbl:luaedPredefVar]) are even more concise, \eg{} \code{N.len} or
\code{N.L} can be written just \code{L}, but they are read-only.
}{%
%\center
\starttabulate[|l|l|c|l|]
\NC name \NC type \NC mode \NC meaning \NC\NR
\HL
\NC \code{is_root}, \code{r} \NC boolean \NC ro \NC true iff node is root
\NC\NR
\NC \code{len}, \code{L} \NC number \NC rw \NC parent edge's length \NC\NR
\NC \code{lbl} \NC string \NC rw \NC label \NC\NR
\NC \code{b} \NC number \NC ro \NC support value \NC\NR
\NC \code{par} \NC node \NC ro \NC parent \NC\NR
\NC \code{first\_child}, \code{fc} \NC node \NC ro \NC first child \NC\NR
\NC \code{last\_child}, \code{lc} \NC node \NC ro \NC last child \NC\NR
\NC \code{children\_count}, \code{c} \NC integer \NC ro \NC number of children \NC\NR
\NC \code{kids} \NC table \NC ro \NC list of children nodes
\NC\NR
\stoptabulate
}
So for example the parent of the current node is expressed by
\code{N.par}; doubling its length could be \code{N.par.len = N.par.len * 2}.
\subsubsection[sct:lua_file_input]{Lua script in a file}
Sometimes the command line is too short to comfortably write the condition and
action. In this case, one can put the Lua code in a file, which is passed to
\luaed{} via option \code{-f}. The file can contain any Lua code, but some
function names are special: they function as "hooks", that is, they are called
on predefined occasions. Table \in{}[tbl:luaedHooks] shows the hook names and
when they are called.
\placetable[][tbl:luaedHooks]{%
Hooks defined by \luaed. If a function named \code{start\_tree} is
defined, it will be called once per tree, before the tree is processed; etc.
If a hook is not defined, no action is performed on the corresponding
occasion. Strictly speaking, \code{start\_run} is not really necessary, as
the file is evaluated before the run anyway, but it seems cleaner to provide
a start-of-the-run hook as well.
}{
\starttabulate[|l|l|]
\NC hook name \NC called\ldots \NC\NR
\HL
\NC \code{start\_run} \NC before processing any tree \NC\NR
\NC \code{start\_tree} \NC for each tree, before processing \NC\NR
\NC \code{node} \NC for each node \NC\NR
\NC \code{stop\_tree} \NC for each tree, after processing \NC\NR
\NC \code{stop\_run} \NC after processing all trees \NC\NR
\stoptabulate
}
See "tree counter" and "numbering nodes" below for examples of use of
\code{-f}.
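As a schematic skeleton (essentially a stripped-down version of the tree counter
discussed below, with illustrative function bodies), such a file could look like this:
\starttyping
-- skeleton of a hook file, to be passed to the program with -f
function start_run()  count = 0 end            -- before any tree is processed
function node()       end                      -- for each node (empty here)
function stop_tree()  count = count + 1 end    -- after each tree
function stop_run()   print(count) end         -- after the last tree
\stoptyping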
\subsubsection{Examples}
\subsubsubsection[sct:ed_ed]{Opening Poorly-supported Nodes}
When a node has low support, it may be better to splice it out from the tree,
reflecting uncertainty about the true topology. Consider the following tree,
\filename{HRV\_cg.nw}:
\svgFigure{ed_3}
The inner node labels represent bootstrap support, as a percentage of
replicates. As we can see, some nodes are much better supported than others.
For example, the \id{(COXB2,ECHO6)} node (top of the figure) has a support
of only 35\%, and in the lower right of the graph there are many nodes with even
poorer support. Let's use \luaed's \code{o} function to "open" the nodes with
less than 50\% support. This means that those nodes will be spliced out, and
their children attached to their "grandparent":
\svgCmdOutput{ed_4}
Now \id{COXB2} and \id{ECHO6} are siblings of
\id{ECHO1}, forming a node with 90\% support. What this means is that the
original tree strongly supports that these three species form a clade, but is
much less clear about the relationships {\em within} the clade. Opening the
nodes makes this fact clear by creating multifurcations. Likewise, the lower
right of the figure is now occupied by a highly multifurcating (8 children) but
perfectly supported (100\%) node, none of whose descendants has less than 80\%
support.
\subsubsubsection{Formatting Lengths}
Some phylogeny programs return \nw{} trees with an unrealistic number
of decimal places. For example, the \filename{HRV.nw} tree has six:
\txtCmdOutput{ed_6}
Here I use \nwindent{} to show each node on a line for clarity, and
show only the last ten.\footnote{the {\em first} ten lines contain only
opening parentheses.} To format\footnote{\sched{} automatically loads the {\tt
format} module so that the full-fledged {\tt format} function is available.}
the lengths to two decimal places, do the following:
\txtCmdOutput{ed_11}
\page[no]
\subsubsubsection{Multiplying lengths by a constant}
It may be necessary to have two trees which only differ by a constant multiple
of the branch lengths. This can be used, for example, to test competing
hypotheses about evolution rates.
Here is our good friend the Catarrhinine tree again:
\svgCmdOutput{ed_8}
To multiply all its branch lengths by (say) 3.5, do the following:
\svgCmdOutput{ed_12}
\subsubsubsection{Implementing other \nutils}
\luaed{} can emulate other programs in the package, when these
iterate on every node and perform some action. There is no real reason to use
\luaed{} rather than the original, since \luaed{} will be slower (after all, it
has to start the Lua interpreter, parse the Lua expressions, etc.). But these
"equivalents" can serve as illustration.
\startalignment[center]
\starttabulate[|l|l|]
\NC program \NC \luaed{} equivalent \NC\NR
\HL
\NC \labels{} \NC {\tt 'lbl \~{}= ""' 'print(lbl)'} \NC\NR
\NC \labels{} {\tt -I} \NC {\tt 'l and lbl \~{}= ""' 'print(lbl)'} \NC\NR
\NC \topology{} \NC {\tt true 'N.len = ""'} \NC\NR
\NC \topology{} {\tt -I} \NC {\tt true 'N.len = ""; if not l then N.lbl = "" end'} \NC\NR
\stoptabulate
\stopalignment
The {\tt lbl \~{}= ""} condition in the \labels{} replacements is checked
because the original \labels{} does not print empty labels. In the \topology{}
replacement, the check for node type (\code{l}) is done in the action rather
than the condition, because there is some code that is performed for every node
and some additional code only for non-leaves.
\subsubsubsection{A tree counter}
As you know by now, the \nutils{} are able to process files that contain any
number of trees. But just how many trees are there in a file? If you're certain
that there is exactly one tree per line, you just use \code{wc -l}. But the
\nw{} format allows trees to span more than one line, or conversely there may be
more than one tree per line; moreover there may be blank lines. All these
conspire to yield wrong tree counts. To solve this, we write a tree counter in
Lua, and pass it to \luaed. Here is the counter:
\txtCmdOutput{ed_13}
As you can see, I've defined three of the five possible hooks. Before any tree
is processed, \code{start\_run} is called, which defines variable
\code{count} and initializes it to zero. After each tree is processed
(actually, no processing is done, since the \code{node} hook is not defined),
function \code{stop\_tree} is called, which increments the counter. And after
the last tree has been processed, the \code{stop\_run} hook is called, which
just prints out the count.
Here it is in action. First, the simple case of one tree per line:
\txtCmdOutput{ed_14}
\txtCmdOutput{ed_15}
Right. Now how about this one: these are the same trees as in
\filename{forest}, but all on a single line:
\txtCmdOutput{ed_16}
\txtCmdOutput{ed_17}
\luaed{} is not fooled! And this is the opposite case -- an indented tree, which has one {\em node} per line:
\txtCmdOutput{ed_18}
\txtCmdOutput{ed_19}
There's no confusing our tree counter, it seems. Note that in future versions I
might well make this unnecessary by supplying a predefined variable which counts
the input trees, akin to Awk's \code{NR}.
\subsubsubsection{Numbering inner nodes}
I was once handed a tree with the task of numbering the inner nodes, starting
close to the leaves and ending at the root.\footnote{Thanks to Beno\^{i}t
Defaucheux for this example.} Here is a tree with unlabeled inner nodes (I hide
the branch lengths lest they obscure the inner node labels, which will also be
numeric):
\svgCmdOutput{ed_22}
A solution is the following \luaed{} script:
\txtCmdOutput{ed_20}
\svgCmdOutput{ed_21}
\subsubsubsection{Extracting deep, well-supported clades}
In the first example of this section (\in{}[sct:luaed]), we extracted
well-supported clades, but there was an overlap because one well-supported clade
was a subclade of another. We may want to extract only the "deepest" clades that
meet the condition, in other words, once a node has been found to match, its
descendants should not be processed. This is the purpose of option \code{-o}.
For this option to be useful, though, the tree must be processed from the root
to the leaves, which is the opposite of the default (namely, \no). To override
this, we pass option \code{-r} ("reverse"):
\txtCmdOutput{ed_23}
All overlap has now disappeared: the
\id{(Papio,(Hylobates,Homo))} clade is no longer printed on its own.
\subsubsection{Future}
I intend to develop \luaed{} further. Among the items on my TODO list are a few
new predefined variables (number of records, root of the tree), more powerful
structure-altering functions, etc.
%%%% nw_ed
%
\subsection{\ed}
{\bf Note}: it is likely that \luaed{} (\in{}[sct:luaed]) will be more useful
than \ed. See also \in{section}[sct_ed] for a general intro to the stream
editing programs. This section gives a minimal description of \ed, without
delving further into examples.
As in \luaed, the two parameters of \ed{} (besides the input file) are the
{\it condition} and the {\it action}.
\subsubsection{Conditions}
Conditions are logical expressions involving node properties; they are composed
of numbers, logical operators, and node functions. The functions have
one-letter names, to keep expressions short (after all, they are passed on the
command line). There are two types, numeric and Boolean.
\startalignment[center]
\starttabulate[|c|l|l|]
\NC name \NC type \NC meaning \NC\NR
\HL
\NC \code{a} \NC numeric \NC number of ancestors of node \NC\NR
\NC \code{b} \NC numeric \NC node's support value (or zero) \NC\NR
\NC \code{c} \NC numeric \NC node's number of children (direct) \NC\NR
\NC \code{D} \NC numeric \NC node's number of descendants (includes children) \NC\NR
\NC \code{d} \NC numeric \NC node's depth (distance to root) \NC\NR
\NC \code{i} \NC Boolean \NC true iff node is strictly internal (i.e., not root!) \NC\NR
\NC \code{l} (ell) \NC Boolean \NC true iff node is a leaf \NC\NR
\NC \code{r} \NC Boolean \NC true iff node is the root \NC\NR
\stoptabulate
\stopalignment
The logical and relational operators work as expected. Here is the list, in
order of precedence, from tightest- to loosest-binding. In any case, you can use
parentheses to override precedence, so don't worry.
\startalignment[center]
\starttabulate[|c|l|]
\NC symbol \NC operator \NC\NR
\HL
\NC \code{!} \NC logical negation \NC\NR
\HL
\NC \code{==} \NC equality \NC\NR
\NC \code{!=} \NC inequality \NC\NR
\NC \code{<} \NC less than \NC\NR
\NC \code{>} \NC greater than \NC\NR
\NC \code{>=} \NC greater than or equal to \NC\NR
\NC \code{<=} \NC less than or equal to \NC\NR
\HL
\NC \code{\&} \NC logical {\em and} \NC\NR
\HL
\NC \code{\|} \NC logical {\em or} \NC\NR
\stoptabulate
\stopalignment
Here are a few examples:
\startalignment[center]
\starttabulate[|c|l|]
\NC expression \NC selects: \NC\NR
\HL
\NC \code{l} \NC all leaves \NC\NR
\NC \code{l \& a <= 3} \NC leaves with 3 ancestors or fewer \NC\NR
\NC \code{i \& (b >= 95)} \NC internal nodes with support of at least 95\% \NC\NR
\NC \code{i \& (b < 50)} \NC unsupported nodes (support below 50\%) \NC\NR
\NC \code{!r} \NC all nodes except the root \NC\NR
\NC \code{c > 2} \NC multifurcating nodes \NC\NR
\stoptabulate
\stopalignment
\subsubsection{Actions}
The actions are also coded by a single letter, again for brevity's sake. The
following are implemented:
\startalignment[center]
\starttabulate[|c|l|c|]
\NC code \NC effect \NC modifies tree?\NC\NR
\HL
\NC \code{d} \NC delete the node (and all descendants) \NC yes \NC\NR
\NC \code{l} \NC print the node's label \NC no \NC\NR
\NC \code{o} \NC splice out the node \NC yes \NC\NR
\NC \code{s} \NC print the subtree rooted at the node \NC no \NC\NR
\stoptabulate
\stopalignment
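Putting a condition and an action together: the following hypothetical
invocation prints every clade whose support is at least 95, assuming the
program is called as \code{nw\_ed} with the input file followed by the
condition and the action (\filename{forest} is the sample file used earlier):
\starttyping
nw_ed forest 'i & (b >= 95)' s
\stoptyping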
I have no plans to implement any other actions, as this can be done easily
with \luaed{} (or \sched).
%%%%% nw_sched
\subsection{\sched}
{\bf Note}: it is likely that \luaed{} (\in{}[sct:luaed]) will be more
convenient than \sched. See also \in{section}[sct_ed] for a general intro to the
stream editing programs. This section gives a minimal description of \sched,
with no motivation and only a few examples (see \in{}[sct:luaed] for more).
As mentioned above, \sched{} works like \luaed{}, but uses Scheme
instead of Lua. Accordingly, the condition and action are passed as a Scheme
expression. The Scheme language has a simple syntax, but it can
be slightly surprising at first. To understand the following examples, you just
need to know that operators {\em precede} their arguments, as do function
names, so that the sum of 2 and 2 is written \code{(+ 2 2)}, the sine of $x$
is \code{(sin x)}, \code{(< 3 2)} is false, etc.
As a first example, let's again extract all well-supported clades from the tree
of vertebrate genera, as we did with \luaed.
\txtCmdOutput{ed_5}
The expression \code{((\& (def? 'b) (>= 95 b)) (s))} parses as follows:
\startitemize
\item the first element (or \code{car}, in Scheme parlance), \code{(\& (def?
'b) (>= 95 b))}, is the condition. It is a Boolean expression, namely a
conjunction (\code{\&})\footnote{\code{\&} is a short name for the Scheme form
\code{and}, which is defined by \sched{} to allow for shorter expressions on
the command line.} of the expressions \code{(def? 'b)} and \code{(>= 95 b)}.
The former checks that variable \code{b} (bootstrap support) is
defined\footnote{In \ed, \code{b} was zero if the support was not defined.
\sched{} distinguishes between undefined and zero, which is why one has to
check that \code{b} is defined before using it. \code{def?} is just a shorter
name for \code{defined?}.}, and the latter is true iff \code{b} is not smaller
than 95.
\item the second element (\code{cadr} in Scheme jargon), \code{(s)}, is the
action -- in this case, a call to function \code{s}, which has the same
meaning as action \code{s} in \ed, namely to print out the subclade rooted at
the current node.
\stopitemize
\subsubsection{Selectors}
Like \ed{} addresses, \sched{} selectors are Boolean expressions normally
involving node properties which are available as predefined variables. As the
program "visits" each node in turn, the variables are set to reflect the current
node's properties. As in \ed{}, the variables have short names, to keep
expressions concise. The predefined variables are shown in the table below.
\startalignment[center]
\starttabulate[|c|l|l|]
\NC name \NC type \NC meaning \NC\NR
\HL
\NC \code{a} \NC integer \NC number of ancestors \NC\NR
\NC \code{b} \NC rational \NC support value \NC\NR
\NC \code{c} \NC integer \NC number of children (direct descendants) \NC\NR
\NC \code{D} \NC integer \NC total number of descendants (includes children) \NC\NR
\NC \code{d} \NC numeric \NC depth (distance to root) \NC\NR
\NC \code{i} \NC Boolean \NC true iff node is strictly internal (i.e., not root!) \NC\NR
\NC \code{lbl} \NC string \NC label \NC\NR
\NC \code{l} (ell) \NC Boolean \NC true iff node is a leaf \NC\NR
\NC \code{L} \NC rational \NC parent edge length \NC\NR
\NC \code{r} \NC Boolean \NC true iff node is the root \NC\NR
\stoptabulate
\stopalignment
Variables \code{b} and \code{lbl} are both derived from the
label, but \code{b} is interpreted as a number, and is undefined if the
conversion to a number fails, or if the node is a leaf. Edge length and depth
(\code{L} and \code{d}) are undefined (not zero!) if not specified in the
Newick tree, as in cladograms.
Whereas \ed{} defines its own logical and relational operators, \sched{} simply uses those
of the Scheme language. It merely defines a few shorter names to help keep command
lines compact:
\startalignment[center]
\starttabulate[|l|c|l|]
\NC Scheme \NC \sched{} short form \NC meaning \NC\NR
\HL
\NC \code{not} \NC \code{!} \NC logical negation \NC\NR
\NC \code{and} \NC \code{\&} \NC logical and \NC\NR
\NC \code{or} \NC \code{\|} \NC logical or \NC\NR
\NC \code{defined?} \NC \code{def?} \NC checks if arg is defined \NC\NR
\stoptabulate
\stopalignment
Here are the same examples as above, but for \sched:
\startalignment[center]
\starttabulate[|c|l|]
\NC expression \NC selects: \NC\NR
\HL
\NC \code{l} (lowercase ell) \NC all leaves \NC\NR
\NC \code{(\& l (<= a 3))} \NC leaves with 3 ancestors or fewer \NC\NR
\NC \code{(\& i (def? 'b) (>= b 95))} \NC internal nodes with support of at least 95\% \NC\NR
\NC \code{(\& i (def? 'b) (< b 50))} \NC unsupported nodes (support below 50\%) \NC\NR
\NC \code{(! r)} \NC all nodes except the root \NC\NR
\NC \code{(> c 2)} \NC multifurcating nodes \NC\NR
\stoptabulate
\stopalignment
When it is clear that all inner nodes will have a defined support
value, one can leave out the \code{(def? 'b)} clause and write just, e.g.,
\code{(\& i (>= b 95))}.
\subsubsection{Actions}
Actions are arbitrary Scheme expressions, so they are much more flexible than
the fixed actions defined by \ed. \sched{} provides most of the \ed{} actions, as well as a
few new ones, as Scheme functions\footnote{Note that you must use Scheme's function call syntax to call the function, \ie, \code{(function [args...])}.}:
\startalignment[center]
\starttabulate[|c|l|c|]
\NC code \NC effect \NC modifies tree?\NC\NR
\HL
\NC \code{l! <len>} \NC sets the node's parent-edge length to \code{len} \NC yes \NC\NR
\NC \code{lbl! <lbl>} \NC sets the node's label to \code{lbl} \NC yes \NC\NR
\NC \code{o} \NC splice out the node \NC yes \NC\NR
\NC \code{p <arg>} \NC print \code{arg}, then a newline \NC no \NC\NR
\NC \code{s} \NC print the subtree rooted at the node \NC no \NC\NR
\NC \code{u} \NC delete ("unlink") the node (and all descendants) \NC yes \NC\NR
\stoptabulate
\stopalignment
The \code{l} action of \ed{}, which prints the current node's
label, can be achieved in \sched{} with the more general \code{p} function:
\code{(p lbl)}.
The \code{l!} function sets the current node's parent-edge length. It accepts
a string or a number. If the argument is a string, it attempts to convert it to
a number. If this fails, the edge length is undefined. The \code{lbl!}
function sets the current node's label. Its argument is a string.
\subsubsection{Future}
I do not plan to develop \sched{} any more, because in my opinion \luaed{} is
better. I will probably drop it eventually, but not immediately.
\section{Discussion of Results}
\label{sec:DIS}
\subsection{Subsection 1}
\subsection{Subsection 2}
\subsection{Subsection 3}
\chapter*{Appendix C \\ Star-forming/Quiescent Classifications \label{chap:append3}}
\addcontentsline{toa}{appendix}{Appendix C}
\addtocontents{toa}{\addvspace{10pt}}%just to separate the entries in the list
In our parameterization of the observed $\fqcen$ in Eqs.~\ref{eq:fq} and~\ref{eq:fqz0},
we derive the best-fit values for the parameters $A_0$ and $A_1$ by fitting
$\fqcen$ measured in the SDSS DR7 group catalog. The SDSS DR7 group catalog
$\fqcen$ is derived using a $\mathrm{SFR} - \mathcal{M}_*$ cut
specified in Eq.~\ref{eq:sfr_cut}. For $\alpha(\mathcal{M}_*)$, the
parameter that dictates the $\fqcen$ redshift dependence, however,
the best-fit value is derived from fitting \cite{Tinker:2013aa}
$\fqcen$ measurements of the COSMOS survey. These $\fqcen$ measurements
use $(NUV - R) - (R - J)$ color-color cuts described in
\cite{Bundy:2010aa} for the star-forming/quiescent classification.
In this section, we demonstrate the consistency between the SDSS
DR7 group catalog $\fqcen$, using a $\mathrm{SFR} - \mathcal{M}_*$
cut, and the \cite{Tinker:2013aa} $\fqcen$, using a $(NUV - R) - (R - J)$
color-color cut.
For the galaxies in our SDSS DR7 group catalog, we construct a catalog
with UV, optical, and infrared photometry. For UV and optical,
we obtain GALEX and SDSS photometry from the NASA-Sloan Atlas\footnote{http://www.nsatlas.org/}.
For infrared, we use photometry from the 2MASS all-sky map~\citep{Cutri:2000aa}.
We then determine the $FUV, NUV$, $u, g, r, i, z$, $J, H, K_s$
band $K$-corrections and absolute magnitudes for the galaxies
using $\mathtt{K-correct}$\footnote{http://howdy.physics.nyu.edu/index.php/Kcorrect}~\citep[v4.2][]{Blanton:2007aa}.
Using these absolute magnitudes, in Figure~\ref{fig:NUV_R_J}, we
plot the $(NUV-R) - (R-J)$ color-color relation for the SDSS DR7 group
catalog (black). We highlight the galaxies in the sample that are
classified as quiescent using the $\mathrm{SFR} - \mathcal{M}_*$
cut in orange. Furthermore, we plot the color-color cuts from
\cite{Bundy:2010aa} (blue dash-dotted and red dashed lines).
Galaxies that lie above both color-color cuts are classified as
quiescent in the $(NUV-R) - (R-J)$ classification.
The horizontal color-color cut (blue dash-dotted) is evaluated
using the \cite{Bundy:2010aa} parameterization, at $z \sim 0.0$. The
diagonal cut (red dashed) in \cite{Bundy:2010aa} is, however,
parameterized using coefficients determined by inspection of
redshift bins. Therefore, for the SDSS DR7 group catalog, we
extrapolate the coefficients from the COSMOS $z \sim 0.3, 0.7$
bins. We note that using the coefficients from the lowest COSMOS
redshift bin ($z \sim 0.3$) instead of extrapolating to $z \sim 0.0$
does not significantly impact the comparison in this section.
Comparison of the quiescent galaxies classified with $\mathrm{SFR} - \mathcal{M}_*$
to the color-color cuts in Figure~\ref{fig:NUV_R_J} finds that
the two classifications are generally consistent. To further
test whether the different classifications can impact quiescent
fraction parameterization, in Figure~\ref{fig:fq_colorcolor}, we
compare the quiescent fractions derived from them for the
SDSS DR7 group catalog: $\fq^{SFR-\mathcal{M}_*}$ (black) versus
$\fq^{\rm color}$ (orange). Throughout the mass range of the
catalog, $\fq^{SFR-\mathcal{M}_*}$ and $\fq^{\rm color}$ are
consistent with each other. Therefore, deriving the $\fqcen(\mathcal{M}_*, z)$
parameterization (Eq.~\ref{eq:fq}) from measurements of the SDSS DR7 group catalog
and \cite{Tinker:2013aa}, which use different classifications, does {\em not}
affect the results of this work.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figs/cenq/nuv_r_j.pdf}
\caption{
The $(NUV - R) - (R - J)$ color-color relation for the
SDSS DR7 group catalog (black) calculated from photometry
compiled from GALEX, SDSS, and 2MASS (\S~\ref{chap:append2}).
Galaxies classified as quiescent using the $\mathrm{SFR} - \mathcal{M}_*$
cut are highlighted (orange). Furthermore, we plot the color-color
cuts from \cite{Bundy:2010aa} that describe the classification
of star-forming/quiescent galaxies in \cite{Tinker:2013aa}.
We note that the quiescent galaxies classified using the
$\mathrm{SFR} - \mathcal{M}_*$ cut are generally consistent with
the \cite{Bundy:2010aa} color-color cuts.
}
\label{fig:NUV_R_J}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figs/cenq/fq_colorcolor.pdf}
\caption{Comparison of the SDSS DR7 group catalog $\fq(\mathcal{M}_*)$
measured using the $\mathrm{SFR} - \mathcal{M}_*$ versus the
$(NUV - R) - (R - J)$ color-color classifications. The $\fq$s
measured using the two different classification methods are
consistent with each other. This consistency illustrates that
the $\fqcen(\mathcal{M}_*, z)$ parameterization derived from
measurements of the SDSS DR7 group catalog and \cite{Tinker:2013aa}
does {\em not} affect the results of this work.}
\label{fig:fq_colorcolor}
\end{center}
\end{figure}
% !!! M2/O2 and GC (if a program contains O2 module)
% !!! Review Interfacing to C (it rlates to SysCall etc. as well)
%
\chapter{Multilanguage programming}\label{multilang}
\index{multilanguage programming}
\xds{} allows you to mix \mt{}, \ot{}, C, and Assembler
modules, libraries, and object files in one project.
\section{Modula-2 and Oberon-2}\label{multilang:m2o2}
\index{multilanguage programming!Modula-2/Oberon-2}
It is not necessary to notify the compiler that \mt{}
objects are used in an \ot{} module and vice versa. The compiler detects
the language automatically when processing symbol files for an
\verb'IMPORT' clause.
\subsection{Basic types}
In \ot{} the basic types have the same length on all platforms. In
\mt{} the size of types {\tt INTEGER}, {\tt CARDINAL} and {\tt
BITSET} may be different and depends on the value of the
\OERef{M2BASE16} option. The following table summarizes the correspondence
between the basic types.
\begin{center}
\begin{tabular}{lcccc}
\bf Type & \bf Size & \bf \ot{} & \multicolumn{2}{c|}{\bf Modula-2} \\
& & & \tt M2BASE16+ & \tt M2BASE16- \\ \hline
integer & 8 & \tt SHORTINT & --- & --- \\
integer & 16 & \tt INTEGER & \tt INTEGER & --- \\
integer & 32 & \tt LONGINT & --- & \tt INTEGER \\
cardinal & 8 & --- & --- & --- \\
cardinal & 16 & --- & \tt CARDINAL & --- \\
cardinal & 32 & --- & --- & \tt CARDINAL \\
bitset & 16 & --- & \tt BITSET & --- \\
bitset & 32 & \tt SET & --- & \tt BITSET
\end{tabular}
\end{center}
The system types {\tt INT} and {\tt CARD} correspond to
\mt{} {\tt INTEGER} and {\tt CARDINAL} types respectively.
We recommend using {\tt INT} and {\tt CARD} in \ot{} when importing \mt{}
modules. For example, if the procedure {\tt Foo} is defined
in the \mt{} definition module \verb'M' as
\begin{verbatim}
DEFINITION MODULE M;
PROCEDURE Foo(VAR x: INTEGER);
END M.
\end{verbatim}
its portable usage in \ot{} is as follows:
\begin{verbatim}
VAR x: SYSTEM.INT;
. . .
M.Foo(x);
\end{verbatim}
\subsection{Data structures}
\xds{} allows any \mt{} data structures to be used in \ot{}
modules, even those that can not be defined in \ot{} (e.g.
variant records, range types, set types, enumerations, etc).
However, usage of \mt{} types in \ot{} and vice versa is
restricted. Whenever possible \xds{} tries to produce the correct
code. If a correct translation is impossible, an error
is reported:
\begin{itemize}
\item
a \mt{} record field type cannot be of an \ot{} pointer, record
or array type;
\item
a \mt{} pointer to an \ot{} record cannot be used in specific
\ot{} constructs (type-bound procedures, type guards, etc);
\item
an opaque type cannot be defined as an Oberon pointer.
\end{itemize}
Standard procedures \verb'NEW' and \verb'DISPOSE' are always applied according to
the language of a parameter's type. For example, for the following
declarations in an \ot{} module:
\begin{verbatim}
TYPE
Rec = RECORD END;
MP = POINTER ["Modula"] TO Rec; (* Modula pointer *)
OP = POINTER TO Rec; (* Oberon pointer *)
VAR
m: MP;
o: OP;
\end{verbatim}
the call \verb|NEW(m)| will be treated as a call to the \mt{}
default {\tt ALLOCATE}, while \verb|NEW(o)| will be treated as
a call of the standard \ot{} run-time routine.
See also \ref{multilang:direct}.
Implicit memory deallocation (garbage collection) is applied to \ot{}
objects only. If a variable of a \mt{} pointer type is declared in
an \ot{} module, it shall be deallocated explicitly.
\paragraph{Example: Using the Modula data type in Oberon}
\begin{verbatim}
(* Modula-2*) DEFINITION MODULE m2;
TYPE
Rec = RECORD (* a record with variant parts *)
CASE tag: BOOLEAN OF
|TRUE: i: INTEGER;
|FALSE: r: REAL;
END;
END;
Ptr = POINTER TO Rec;
VAR
r: Rec;
p: Ptr;
PROCEDURE Foo(VAR r: Rec);
END m2.
(* Oberon-2 *) MODULE o2;
IMPORT m2; (* import of a Modula-2 module *)
VAR
r: m2.Rec; (* using the Modula-2 record type *)
p: m2.Ptr; (* using the Modula-2 pointer type *)
x: POINTER TO m2.Rec;
BEGIN
NEW(p); (* Modula-2 default ALLOCATE *)
NEW(x); (* Oberon-2 NEW *)
m2.Foo(r);
m2.Foo(p^);
m2.Foo(x^);
END o2.
\end{verbatim}
\subsection{Garbage collection}
It is important to remember that \mt{} and \ot{} have different
approaches to memory utilization. When a program contains both
\mt{} and \ot{} modules, garbage collection is used.
See \ref{rts:mm} for more information.
%--------------------------------------------------------------
\section{Direct language specification}\label{multilang:direct}
\index{multilanguage programming!language specification}
\index{C interface!language specification}
The compiler must know the implementation language of a
module to take into account different semantics of different
languages and to produce correct code.
In some cases, it is necessary for a procedure or data type to be
implemented according to the rules of a language other than that
of the whole module. In \xds{}, it is possible to explicitly
specify the language of a type or object. {\em Direct language
specification \index{DLS}(DLS)} is allowed either if
language extensions are enabled or if the module \verb'SYSTEM' is imported.
In a record, pointer, or procedure type declaration, or in a procedure declaration,
the desired language (or, more precisely, the way in which that
declaration is treated by the compiler) can be specified as \verb'"[" language "]"'
immediately following the keyword \verb'RECORD', \verb'POINTER',
or \verb'PROCEDURE'. \verb'language' can be a string or integer constant
expression\footnote{We recommend using strings; integer values are
preserved for backward compatibility.}:
\begin{center}
\begin{tabular}{lll}
\bf Convention & \bf String & \bf Integer \\
\hline
Oberon-2 & \tt "Oberon" & \tt 0 \\
Modula-2 & \tt "Modula" & \tt 1 \\
C & \tt "C" & \tt 2 \\
Pascal & \tt "Pascal" & \tt 5 \\
Win32 API & \tt "StdCall" & \tt 7 \\
OS/2 API & \tt "SysCall" & \tt 8 \\
\end{tabular}
\end{center}
Examples:
\verb' TYPE'\\
\verb' UntracedPtr = POINTER ["Modula"] TO Rec;'
Here {\tt UntracedPtr} is defined as a \mt{} pointer, hence variables
of that type will not be traced by the garbage collector.
\verb' PROCEDURE ["C"] sig_handler (id : SYSTEM.int);'\\
\verb' . . .'\\
\verb' signal.signal(signal.SYSSEGV, sig_handler);'\\
Here \verb'sig_handler' has C calling and naming conventions, so it
can be installed as a signal handler in the C run-time support.
A direct language specification clause placed after the name of a field,
constant, type, or variable indicates that the name of the object
will be treated according to the rules of the specified
language.
\begin{verbatim}
TYPE
Rec ["C"] = RECORD
name ["C"]: INTEGER;
END;
CONST pi ["C"] = 3.14159;
VAR buffer[]["C"]: POINTER TO INTEGER;
\end{verbatim}
{\bf Note:} In ISO \mt{}, an absolute address may be specified
for a variable after its name in square brackets, so
the empty brackets are required in the last line.
A procedure name is treated according to the language of
its declaration, so in the following declaration:
\verb' PROCEDURE ["C"] Foo;'
both the procedure type and the procedure name are treated
according to the C language rules.
{\bf Note:} If you are using a C++ compiler, the {\tt Foo}
function should be declared with C linkage (i.e., without C++ name mangling).
Consult your C++ manuals for further information.
The compiler option \OERef{SYSTEMFLAGEXT} enables extended syntax
of direct language specification. The language string, the "{\tt Volatile}"
specification, and an absolute address can be specified in the same clause,
separated by commas. See section \ref{m2:ext:volatile} for details.
%----------------------------------------------------------------
\section{Interfacing to C}\label{multilang:C}
\index{multilanguage programming!interface to C}
\index{C interface}
Special efforts were made in \XDS{} to provide a convenient
interface to other languages, primarily to the C language.
The main goal is to allow direct usage of existing C libraries and
APIs in \mt{}/\ot{} programs.
\subsection{Foreign definition module}
\label{multilang:foreigndefmod}
\index{multilanguage programming!foreign definition module}
\index{C interface!foreign definition module}
A \See{direct language specification}{}{multilang:direct}
clause may appear immediately after keywords \verb'DEFINITION'
\verb'MODULE'. The effect is that all objects defined in
that module are translated according to the specified
language rules, thus making unnecessary direct language
specifications for each object.
Several options are often used in foreign definition modules.
\ifgenc
See \ref{maptoc:opt:foreign} for the
description of options used to create a foreign definition module.
\fi
\Example
\begin{verbatim}
<*+ M2EXTENSIONS *>
<*+ CSTDLIB *> (* C standard library *)
<*+ NOHEADER *> (* we already have header file *)
DEFINITION MODULE ["C"] string;
IMPORT SYSTEM;
PROCEDURE strlen(s: ARRAY OF CHAR): SYSTEM.size_t;
PROCEDURE strcmp(s1: ARRAY OF CHAR;
s2: ARRAY OF CHAR): SYSTEM.int;
END string.
\end{verbatim}
Take the following considerations into account when designing
your own foreign definition module:
\begin{itemize}
\item
If you are developing an interface to an existing header file,
use the {\bf NOHEADER} option to disable generation of the header
file. This option is meaningful for translators only.
\item
If the header file is a standard header file, use the
{\bf CSTDLIB} option. This option is meaningful for the translators
only.
\item
Use the special \verb'SYSTEM' types {\tt int}, {\tt unsigned},
\verb|size_t|, and {\tt void} for corresponding
C types.
\item
\xds{} compilers use relaxed type compatibility rules
for foreign entities. See \ref{multilang:relax} for
more information.
\end{itemize}
\ifgenc
Definition modules for ANSI C libraries ({\tt stdio.def},
{\tt string.def}, etc) can be used as tutorial examples.
\fi
\subsection{External procedures specification}
\label{multilang:extproc}
\index{multilanguage programming!external procedures}
\index{C interface!external procedures}
In some cases, it may be desirable not to write a foreign
definition module but to use some C or API functions directly.
\xds{} compilers allow a function to be declared as external.
The declaration of an external procedure consists of a procedure
header only. The procedure name in the header is prefixed by the
symbol \verb|"/"|.
\verb' PROCEDURE ["C"] / putchar(ch: SYSTEM.int): SYSTEM.int;'
\subsection{External variables specification}
\label{multilang:extvar}
\index{multilanguage programming!external variables}
\index{C interface!external variables}
In some cases, it may be desirable not to write a foreign
definition module but to use some C or API variables directly.
\xds{} compilers allow a variable to be declared as external.
The name of an external variable is prefixed by the
symbol \verb|"/"|.
As a rule, a \See{direct language specification}{}{multilang:direct}
clause is placed after external variable name to point out that the name
of the variable will be treated according to the rules of the specified
language.
\verb' VAR / buffer []["C"]: POINTER TO INTEGER;'
%------------------------------------------------------------------
\section{Relaxation of compatibility rules}
\label{multilang:relax}
The compiler performs all semantic checks for an object or type
according to its language specification. Any object declared as \mt{}
or \ot{} is subject to \mt{} or \ot{} compatibility rules
respectively. The compiler uses relaxed compatibility rules for
objects and types declared as \verb'"C"', \verb'"Pascal"',
\verb'"StdCall"', and \verb'"SysCall"'.
\subsection{Assignment compatibility}
Two pointer type objects are considered assignment compatible, if
\begin{itemize}
\item they are of the same \mt{} or \ot{} type.
\item at least one of their types is declared as
\verb'"C"', \verb'"Pascal"', \verb'"StdCall"', or \verb'"SysCall"',
and their {\it base types} are the same.
\end{itemize}
\begin{verbatim}
VAR
x: POINTER TO T;
y: POINTER TO T;
z: POINTER ["C"] TO T;
BEGIN
x := y; -- error
y := z; -- ok
z := y; -- ok
\end{verbatim}
\subsection{Parameter compatibility}
\label{multilang:parmcomp}
For procedures declared as \verb'"C"', \verb'"Pascal"',
\verb'"StdCall"', or \verb'"SysCall"', the
type compatibility rules for parameters are significantly relaxed:
If a formal value parameter is of a type declared as \verb'POINTER TO T',
the actual parameter can be of any of the following types:
\begin{itemize}
\item the same type (the only case for regular \mt{}/\ot{}
procedures);
\item another type declared as \verb'POINTER TO T'.
\item any array type whose elements are of type {\tt T}.
In this case the address of the first array element is passed,
as it is done in C.
\item the type {\tt T} itself, if {\tt T} is a record type.
In this case the address of the actual parameter is passed.
\end{itemize}
If a formal parameter is an open array of type \verb'T',
the actual parameter can be of any of the following types:
\begin{itemize}
\item an (open) array of type \verb'T' (the only case for regular \mt{}/\ot{}
procedures);
\item the type \verb'T' itself (if \OERef{M2EXTENSIONS}\iftopspeed or \OERef{TOPSPEED}\fi
option is set ON);
\item any type declared as \verb'POINTER TO T'.
\end{itemize}
This relaxation, in conjunction with the
\See{{\tt SYSTEM.REF} function procedure}{}{m2:sysfunc:nonstandard},
simplifies \mt{}/\ot{} calls to C libraries and the target operating
system API, preserving the advantages of the type checking mechanism
provided by those languages.
\Example
\begin{verbatim}
TYPE
Str = POINTER TO CHAR;
Rec = RECORD ... END;
Ptr = POINTER TO Rec;
PROCEDURE ["C"] Foo(s: Str); ... END Foo;
PROCEDURE ["C"] Bar(p: Ptr); ... END Bar;
PROCEDURE ["C"] FooBar(a: ARRAY OF CHAR); ... END FooBar;
VAR
s: Str;
a: ARRAY [0..5] OF CHAR;
p: POINTER TO ARRAY OF CHAR;
R: Rec;
  A: ARRAY [0..20] OF Rec;
  P: POINTER TO Rec;
BEGIN
  Foo(s); (* allowed - the same type *)
Foo(a); (* allowed for the "C" procedure *)
Foo(p^); (* allowed for the "C" procedure *)
Bar(R); (* the same as Bar(SYSTEM.REF(R)); *)
Bar(A); (* allowed for the "C" procedure *)
Bar(P); (* allowed for the "C" procedure *)
FooBar(s); (* allowed for the "C" procedure *)
\end{verbatim}
\subsection{Ignoring function result}
\index{C interface!using C functions}
It is a standard practice in C programming to ignore the result of
a function call. Some standard library functions are designed
with that practice in mind. For example, the string copy
function accepts the destination string as a variable parameter (in
terms of \mt{}) and returns a pointer to it:
\verb' extern char *strcpy(char *, const char *);'
In many cases, the result of the {\tt strcpy} function call is ignored.
In \xds{}, it is possible to ignore results of functions
defined as \verb'"C"', \verb'"Pascal"', \verb'"StdCall"', or \verb'"SysCall"'.
Thus, the function {\tt strcpy} defined in the \verb'string.def'
foreign definition module as
\verb' PROCEDURE ["C"] strcpy(VAR d: ARRAY OF CHAR;'\\
\verb' s: ARRAY OF CHAR): ADDRESS;'
can be used as a proper procedure or as function procedure:
\verb' strcpy(d,s);'\\
\verb' ptr:=strcpy(d,s);'
% This section has to be included if and only if gencode is true.
\ifgencode
% !!! DS_EQ_SS - throw away?
\section{Configuring XDS for a C Compiler}\label{multilang:ccomp}
% CC equation: WATCOM/SYMANTEC/BORLAND/MSVC/OS2SYSCALL
% options: GENCPREF, DS_NEQ_SS, ONECODESEG
Different C compilers have different naming and calling conventions.
If you use C functions or libraries in your projects,
you have to specify your C compiler using the \OERef{CC} equation in
order to have all C functions to be called in a way compatible with
that compiler. The compiler also sets the default values of some
other options and equations according to the value of the \OERef{CC} equation.
\iflinux \else See \ref{multilang:ccomp:opt}. \fi % this section not included for Linux
\iflinux
For Linux \XDS{} supports the GCC (ELF) compiler. Therefore, the \OERef{CC}
equation has to be set to \verb'"GCC"', written in any case. If the equation value
is not set, \verb'"GCC"' is assumed by default.
\fi
\ifwinnt
For Windows NT and Windows 95 \XDS{} supports the MSVC++ and Watcom
(stack convention) compilers. The corresponding values of the \OERef{CC}
equation are \verb'"MSVC"' and \verb'"WATCOM"', written in any case.
If the equation is not set, the compiler will assume \verb'"WATCOM"'
by default. Add the line
\verb' -cc=Watcom'
or
\verb' -cc=MSVC'
to your configuration file.
\fi
Alignment of data structures is controlled by the \OERef{ALIGNMENT} equation.
\iflinux
{\bf ATTENTION!} Libraries included in the \XDS{} distribution are built with
GCC. Since GCC usually produces aligned code, the \OERef{ALIGNMENT} equation
has to be set to 4. Setting it to other values may cause unpredictable results.
Don't change it unless you know exactly what you are doing!
\fi
Names in an object file produced by a C compiler may have a leading
underscore. If you are going to use C modules and libraries, you have to
force \XDS{} to use the same naming rules. To do this, turn the
\OERef{GENCPREF} option ON in the foreign definition modules:
\verb' <* +GENCPREF *>' \\
\verb' DEFINITION MODULE ["C"] stdio;'
\iflinux
Since GCC (ELF) produces no underscore prefixes, you should not turn this option
ON.
\fi
\ifcomment !!!
See {\tt samples.txt} from \XDS{} on-line documentation for more
information.
\fi
\subsection{Possible problems}
To use a C function or a data type from \mt{} or \ot{} you have
to express its type in one of these languages. Usually
this is done in a foreign definition module (see \ref{multilang:C}).
The current version of \XDS{} does not support
all calling conventions, so direct usage of some functions
is not possible, namely:
\begin{itemize}
\item functions with a parameter of a structured type, passed by value,
e.g.:
\verb' void foo(struct MyStruct s);'
\item functions that return structured types, e.g.:
\verb' struct MyStruct foo(void)'
\item C functions with Pascal calling convention that return
a real type.
\iflinux \else % not Linux
\item functions that are compiled with non-stack calling conventions.
{\bf Note:} stack calling conventions shall be set for
Watcom using "-3s", "-4s", or "-5s" option.
\fi
\end{itemize}
\iflinux \else % not Linux
\XDS{} does not support usage of data structures with
non-standard alignments. If the \OERef{ALIGNMENT} equation is set
to {\em n}, use the option "-zp{\em n}" for Watcom C and
"-Zp{\em n}" for MSVC.
\fi
Both \mt{} and C/C++ have exception handling and finalization
facilities. Unpredictable results may occur
if you try to use these facilities from both languages in one program.
\iflinux \else % not Linux
\subsection{Using an unsupported compiler}
\XDS{} does not support all available C compilers.
You can use additional configuration options (See \ref{multilang:ccomp:opt})
to adapt \XDS{} to your C compiler.
The \OERef{DEFLIBS} option should be switched off in that case.
It may be necessary to make some changes in the run-time support
or to build a special version of the library for a particular C compiler.
This can be done under the terms of a special support program.
\subsection{Additional configuration options}\label{multilang:ccomp:opt}
The following options can be used to adapt \XDS{} to an unsupported
C compiler. We recommend using these options with care.
\begin{itemize}
\ifcomment
\item[DS\_NEQ\_SS] \index{DS\_NEQ\_SS@{\bf DS\_NEQ\_SS}} \mbox{}
If the option is ON, the compiler assumes that DS register
is not equal to SS register.
\fi
\item The \OERef{GENCPREF} option controls whether the compiler should precede
all external names in object files with an underscore prefix (ON),
or leave them unchanged (OFF).
\item The \OERef{ONECODESEG} option controls whether the compiler should
produce a single code segment for all code of a module (ON),
or a separate code segment for each procedure (OFF).
\end{itemize}
The table below shows the default values of these options for the supported
C compilers:
\begin{center}
\begin{tabular}{lcc}
\OERef{CC} setting & \tt WATCOM & \tt MSVC \\
\hline
\ifcomment
DS\_NEQ\_SS & ? & ? \\
\fi
\OERef{GENCPREF} & OFF & ON \\
\OERef{ONECODESEG} & OFF & ON
\end{tabular}
\end{center}
\fi % not Linux
\fi % not gencode
\ifcomment % ------------------------------------------------------
%!!!!
\section{Interfacing to other languages}
\index{multilanguage programming!other languages}
Although \xds{} compiles to ANSI C, it is possible to use {Pascal} and
{FORTRAN} libraries from \mt{}/\ot{} sources.
Most C compilers can recognize the special keyword {\tt pascal}
(and/or {\tt fortran}), immediately preceding a C function name.
\begin{verbatim}
void pascal PrintScreen (void);
/* pascal function declaration */
\end{verbatim}
The naming and parameter passing conventions are different in
C and in {Pascal}/{FORTRAN}, and therefore a special keyword is necessary.
If your C compiler supports the foreign library interface, you can use
{Pascal}/{FORTRAN} procedures, variables, data types, etc. in the
same way as C objects.
To do it, you should first create the corresponding C header file if it
does not exist. Consult your C compiler documentation for details.
After this, a Modula-2 definition module should be created.
Special keywords such as {\tt pascal}, {\tt far}, {\tt volatile} may
be omitted in the definition module.
Compile this definition file with the {\bf NOHEADER}
option to produce a symbol file.
Having the Pascal declaration
\begin{verbatim}
function Foo(p: real): real;
\end{verbatim}
the respective C header file line should be
\begin{verbatim}
float pascal far Foo(float p);
\end{verbatim}
and the definition file
\begin{verbatim}
<*+ NOHEADER *>
DEFINITION MODULE [2] Pascals;
PROCEDURE Foo(p: REAL): REAL;
END Pascals.
\end{verbatim}
\fi % comment -----------------------------------------------------
\chapter[
for the toc
for the toc]{
title text title text
title text title text
title text title text
}
body text body text
body text body text
body text body text
\section{section text}
section body
\section{section text}
section body
\section{section text}
section body
\documentclass[onecolumn, draftclsnofoot, 10pt, compsoc]{IEEEtran}
\usepackage{graphicx}
\usepackage{url}
\usepackage{setspace}
\usepackage{geometry}
\geometry{textheight=9.5in, textwidth=7in}
% 1. Fill in these details
\def \CapstoneTeamName{ Team 41}
\def \CapstoneTeamNumber{ 41}
\def \GroupName{ 30k CS Avionics}
\def \GroupMemberOne{ Joshua Novak}
\def \GroupMemberTwo{ Allison Sladek}
\def \GroupMemberThree{ Levi Willmeth}
\def \CapstoneProjectName{ 30K Rocket Spaceport America}
\def \CapstoneSponsorCompany{ Oregon State University}
\def \CapstoneSponsorPerson{ Dr. Nancy Squires}
% 2. Uncomment the appropriate line below so that the document type works
\def \DocType{ %Problem Statement
%Requirements Document
Technology Review
%Design Document
%Progress Report
}
\newcommand{\NameSigPair}[1]{
\par
\makebox[2.75in][r]{#1} \hfill
\makebox[3.25in]{\makebox[2.25in]{\hrulefill} \hfill \makebox[.75in]{\hrulefill}}
\par\vspace{-12pt}
\textit{
\tiny\noindent \makebox[2.75in]{} \hfill
\makebox[3.25in]{\makebox[2.25in][r]{Signature} \hfill \makebox[.75in][r]{Date}}
}
}
% 3. If the document is not to be signed, uncomment the RENEWcommand below
%\renewcommand{\NameSigPair}[1]{#1}
% \renewcommand{\thesubsubsection}{\thesection.\alph{subsubsection}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{titlepage}
\pagenumbering{gobble}
\begin{singlespace}
%\includegraphics[height=4cm]{coe_v_spot1}
\hfill
% 4. If you have a logo, use this includegraphics command to put it on the coversheet.
%\includegraphics[height=4cm]{CompanyLogo}
\par\vspace{.2in}
\centering
\scshape{
\huge CS Capstone \DocType \par
{\large\today}\par
\vspace{.5in}
\textbf{\Huge\CapstoneProjectName}\par
\vfill
% {\large Prepared for}\par
% \Huge \CapstoneSponsorCompany\par
% \vspace{5pt}
% {\Large\NameSigPair{\CapstoneSponsorPerson}\par}
{\large Prepared by }\par
% \GroupName\par
% 5. comment out the line below this one if you do not wish to name your team
% \CapstoneTeamName\par
\vspace{5pt}
{\Large
% \NameSigPair{\GroupMemberOne}\par
% \NameSigPair{\GroupMemberTwo}\par
\NameSigPair{\GroupMemberThree}\par
}
\vspace{20pt}
}
\end{singlespace}
\section*{Revision History}
\begin{tabular*}{1\linewidth}{@{\extracolsep{\fill}}|c|c|c|c|}
\hline
Name & Date & Reason For Changes & Version\\
\hline
Levi Willmeth&11/5/17&Initial document draft&0.1\\
\hline
Levi Willmeth&11/16/17&Edits using feedback from peer review&0.11\\
\hline
\end{tabular*}
\\
\begin{abstract}
A comparison of the technical features of several telemetry transmitters, methods of decoding APRS packets, and database solutions for organizing and storing flight data, as they relate to the Oregon State University entry in the 2017-18 Spaceport America Cup 30k challenge.
\end{abstract}
\end{titlepage}
\newpage
\pagenumbering{arabic}
\tableofcontents
% 7. uncomment this (if applicable). Consider adding a page break.
%\listoffigures
%\listoftables
% ===================================================================================
\section{Project Overview}
\subsection{Introduction}
Our project is to design, build, and test software that will fly on board Oregon State University's entry in the Spaceport America Cup's 30k Challenge. The Spaceport America Cup is an international engineering competition to design, build, and fly a student-made rocket to 30,000 feet. The competition is scored on several criteria, including software components such as flight avionics, recording and displaying telemetry, and using a scientific research payload.
\subsection{My personal role in the team}
Our team has many members, including at least 12 mechanical engineering, three electrical engineering, and three computer science students, as well as several faculty and industry mentors. My primary role in the team is to work with my computer science subteam to design, build, and test the ground station software, as well as the flight software that will control avionics for the rocket, and a scientific payload that will be ejected from the rocket at apogee.
We have divided the programming aspect of the project into 4 groups of software: rocket avionics, payload avionics, parsing data, and displaying data. Our goal is to work together so that all three of us contribute to each piece, even though one of us may be called on to take the lead on different pieces at different times. Personally, my primary interests lie in the rocket and payload avionics components, but I also have previous experience in parsing APRS packets and graphing data.
\subsection{My expected contributions towards our team goals}
This project requires writing avionics software to control events on the rocket, separate avionics software to control events on the payload, ground station software to process incoming experiment or telemetry data and store the results in a database, and software to display information from the database in a browser. Additionally, each program will include a suite of unit tests.
These four goals provide a way of dividing the project into individual portions that can be developed simultaneously. Each of us intends to contribute to all of the portions, but we may also choose to each manage a portion in order to gain experience both leading, and following other team members.
I would like to manage one of the avionics programs, and possibly the parser program. For the rocket avionics, I would be excited to write a Kalman filter to exclude errant sensor data and prevent accidental early parachute deployment. For the payload avionics, I would like to write a closed-loop PID controller to adjust the motor speed to provide optimum thrust to achieve microgravity. The parser will also be interesting to work on because it may require parsing a wide variety of sensor data from CSV files.
% ===================================================================================
\section{Transmitting telemetry data}
\subsection{Overview of telemetry}
In this context, telemetry is the data transmitted down from the rocket during flight. This data will be used to calculate the rocket's maximum altitude, as well as locate the rocket after the flight. If something goes wrong during the flight and the rocket is not recovered, telemetry can be used to determine what went wrong so that future flights may avoid the same problem.
\subsection{Criteria for telemetry}
This project will use a commercial, off the shelf telemetry module to transmit data from the rocket to the ground. There are several commercial products that offer this capability. The important features of a telemetry module are:
\begin{itemize}
\item Radio frequency
\item Sensor types and transmission rates
\item Physical characteristics and costs
\end{itemize}
\subsection{Considered Technologies}
\subsubsection{Bigredbee BeeLine GPS}
The BeeLine GPS unit is a self contained GPS and telemetry transmitter that operates on the 70cm radio band. It is 1.25" x 3", weighs about 2 ounces, and commonly uses a 35cm long antenna. The BeeLine GPS can also record up to 2 hours of GPS data internally. The BeeLine GPS can log and transmit limited positional telemetry fields at 1Hz intervals at 100mW of power, with an advertised range of 40 miles.\cite{8}
The BeeLine GPS transmits an audio signal that needs to be decoded on the ground, using a terminal node controller (TNC). There are several devices that can accomplish this, including using the sound card on a computer. The price varies by vendor, but most hardware TNC's cost over \$80.\cite{2} Software TNC's are limited to one signal per sound card, which makes exact pricing difficult to determine. Using a dedicated raspberry pi zero with USB sound card, the cost to process APRS packets with a software TNC should be around \$25 per signal.\cite{3}
It is unknown how many signals can be processed by a single computer, but it seems reasonable to assume that processing additional signals will eventually begin to reduce the rate of successful packet captures. In other words, processing each signal with a dedicated computer may add redundancy and performance.
The Bigredbee Beeline GPS costs \$259 and requires a handheld radio, which costs around \$40. It will also likely require a dedicated computer per signal, which could be a raspberry pi zero with usb sound card for an additional \$25 per signal. This brings the total cost to around \$324 per signal.
\subsubsection{Altus Metrum TeleMega}
The TeleMega is a self contained GPS and telemetry transmitter on the 70cm radio band, that can also function as a flight computer. It measures 1.25" x 3.25" and often uses a 35cm long antenna. The data is sent as an audio packet that is decoded by the receiver. The TeleMega can store several hours of data on board, and transmits many types of telemetry at custom intervals, using 40mW of power and an advertised range of 40 miles.\cite{9}
TeleMega data packets are formatted in 32 bit packets to reduce errors, and are designed to be received and decoded by a TeleDongle ground station. The TeleDongle is an additional cost but also provides a convenient serial output over USB. This is important because it means a single computer can simultaneously receive multiple signals from different transmitters.\cite{10}
The TeleMega costs around \$280 and offers a 10\% student discount. The TeleDongle is another \$100 or so, bringing the total to around \$342 per signal.
\subsubsection{RFD900}
The RFD900 is a 900 MHz radio designed to work as a pair of serial modems. We could generate a signal on the flight computer, send that signal to a TX RFD900 on the rocket, pick up the signal on an RX RFD900 on the ground, and get whatever telemetry fields we want on the ground. The advantage is that the system would be very customizable. We could send packets of any size at any rate we want. The disadvantage is reliability. The RFD900 transmits at a much higher frequency, which reduces transmission rates and makes it more susceptible to interference from atmospheric conditions such as humidity.\cite{1}
The RFD900 costs about \$300 for a transmitter/receiver pair, and requires additional processing on the ground, for a total of around \$350 per signal. One significant downside to the RFD900 is that it is designed to work as a transmitter/receiver pair, which means we could not easily use redundant receivers. There is an open broadcast mode available that would offer that feature, but there are reasons to believe the RFD900 is not the most reliable choice.
\subsection{Comparisons}
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{tabular}{| l | c | c | c | c | c | c |}
\hline
Model & Radio Frequency & Range & Sensor fields & Transmission rate & Size & Cost\\
\hline
BeeLine GPS&100mW 70 cm&40 miles&\shortstack{
Latitude\\
Longitude\\
Altitude
}&1 Hz&1.25" x 3", ~2 ounces&\$324\\
\hline
TeleMega&40mW 70 cm&40 miles&\shortstack{
Latitude\\
Longitude\\
Altitude\\
Ground speed\\
Ascent rate\\
Bearing\\
Gps confidence\\
Time
}&1 Hz&1.25" x 3"&\$342\\ \cline{4-5}
&&&\shortstack{
Device ID\\
Flight number\\
Configuration version number\\
Apogee deploy delay\\
Main parachute deploy\\
Radio operator identifier\\
Flight state\\
Battery voltage\\
Pyro battery voltage\\
Calculated height
}&5 Hz&&\\ \cline{4-5}
&&&\shortstack{
Angle from vertical\\
100g accelerometer on Z axis\\
3 axis accelerometer\\
3 axis gyroscope\\
3 axis magnetometer\\
Barometer\\
Temperature
}&10 Hz&&\\ \cline{4-5} \hline
RFD900&1W 900MHz&26 miles&Customizable, any&Any&1.25" x 2.5"&\$300\\
\hline
\end{tabular}
} % end resizebox
\end{center}
\subsection{Conclusions}
Based on all factors, I believe the TeleMega is the best choice for the telemetry module. It provides far more fields of telemetry, offers similar range and reliability, could act as a redundant flight computer if needed, is competitively priced, and is designed and supported by one of our project mentors.
% ===================================================================================
\section{Receiving and parsing the telemetry packets}
\subsection{Overview of telemetry signal formatting}
This section depends entirely on the exact model of telemetry transmitter used. Based on several discussions with the ECE subteam, we assume they will select the Bigredbee Beeline GPS telemetry unit, which is limited to sending a tone-based signal formatted as an APRS packet. That means we will need to use a HAM radio to receive the signal and convert it to an audio tone, which then needs to be parsed into a string by a Terminal Node Controller (TNC).
\subsection{Criteria}
The goal of this component is to parse an audio signal into text. As such, the main priority is accuracy and reliability. The parser should be able to correctly and reliably interpret the highest number of packets, even under sub optimal conditions. Multiple options will be judged based on performance (compare identical sample input packets across multiple parsers), reliability (what can go wrong), license or comprehension (open source vs black box), and cost.
\subsection{Using a Hardware TNC}
For a long time, this problem was solved in hardware. Several HAM radio companies sell units known as hardware TNCs, which process the audio signal into serial text. These have the advantage of being plug-and-play: they convert input to output, and the user does not need to know much about what happens inside.
There is an argument to be made that hardware TNCs are not all built around the same algorithm, and that modern software TNCs may be able to decode a greater proportion of `messy' signals containing some amount of noise or interference. (This is addressed in the software TNC section below.)
Another downside to a hardware TNC is that it is essentially a black box to us. We would not be able to understand the circuit well enough to make any changes, fixes, or improvements.\cite{2}
There are several hardware TNCs available for sale, listed below from most to least common.
\begin{itemize}
\item Kantronics KPC-3+ Packet Communicator, MSRP \$199.
\item Kantronics KAM, MSRP unknown, no longer commonly sold.
\item Kenwood TM-D710A, MSRP \$549.
\item PacComm Tiny-2 MK-II, MSRP \$99.
\end{itemize}
Another problem with hardware TNCs is that they are not as popular as they once were. It was difficult to find any units for sale to compile this list, even on sites like eBay or Amazon. Finding several identical, new hardware TNC units for our project may prove difficult or impractical.
\subsection{Using a Software TNC}
Software TNCs convert audio tones to text, which allows them to perform the same task as a hardware TNC without a physical circuit. Some software TNCs also include features to verify APRS packets, and even to decode packets into individual fields. This is ideal for a situation like ours, where individual fields could then be inserted into a database.
The downside to using a software TNC is that it requires one sound card per audio signal. Even if an audio splitter were used, multiple simultaneous audio sources would likely increase the number of errors, even with additional software attempting to separate the sources. Therefore, it seems likely that one computer would be needed per audio source. Using several Raspberry Pi computers may be a good choice because they are small, cheap, and can run headless without a keyboard or monitor.
\subsubsection{Direwolf}
The most popular software TNC is an open-source project called Direwolf. Direwolf is a software ``soundcard'' AX.25 packet modem/TNC and APRS encoder/decoder. Because Direwolf is open source, it does not need to be a black box to us: we can look inside to understand how it works, make changes as needed, and use only the pieces we need for our project.\cite{3}
Additionally, the developer of Direwolf proposed an experiment comparing its decoding algorithm against hardware TNCs, using a pre-recorded two-hour sample of APRS traffic from the city of Los Angeles. This allows others to run the same experiment and compare their results against other hardware TNCs, or against their specific computer hardware, including sound card and processor. His results showed that the Kantronics KPC-3 Plus successfully decoded 30\% fewer packets, and the Kenwood TM-D710A 33\% fewer packets, than the software TNC.\cite{4}
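As a rough illustration of how decoded packets could be pulled out of Direwolf programmatically, the sketch below reads KISS frames from a running Direwolf instance over TCP and prints the AX.25 information field (the APRS payload). The host and port (8001 is assumed here to be Direwolf's default KISS-over-TCP port), the omission of KISS byte escaping, and the assumption of standard UI frames are simplifications for the example, not a tested configuration.
\begin{verbatim}
# Minimal sketch: read KISS frames from a running Direwolf instance over TCP
# and print the AX.25 information field (the APRS payload text).
# Assumptions: Direwolf listens on localhost:8001 (KISS TCP), frames are
# standard AX.25 UI frames, and KISS byte escaping is ignored.
import socket

FEND = 0xC0  # KISS frame delimiter byte

def ax25_info(frame):
    """Skip the AX.25 address, control and PID bytes; return the payload."""
    i = 0
    while i < len(frame) and not (frame[i] & 0x01):  # address-extension bit
        i += 1
    return frame[i + 3:]  # i = last address byte, then control + PID

sock = socket.create_connection(("localhost", 8001))
buf = bytearray()
while True:
    data = sock.recv(4096)
    if not data:
        break
    buf += data
    while buf.count(FEND) >= 2:                # at least one complete frame
        start = buf.index(FEND)
        end = buf.index(FEND, start + 1)
        frame = bytes(buf[start + 2:end])      # drop FEND and KISS command byte
        del buf[:end + 1]
        if frame:
            print(ax25_info(frame).decode("ascii", errors="replace"))
\end{verbatim}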
\subsubsection{AGWPE}
AGWPE is another software TNC solution. Although it is free, AGWPE is not open source: it is distributed as a Windows executable and operates as a black box. It is not clear whether it can run on Linux under Wine or in a virtual machine running Windows, but the additional complexity would very likely cause problems for cheap, distributed, redundant receivers built on Raspberry Pi computers. There are no readily available statistics comparing the performance of AGWPE to similar technologies.\cite{11}
\subsection{Comparisons}
All of these options also require purchasing a portable HAM radio unit and appropriate audio cable for each signal.
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{tabular}{| c | c | c | c | c |}
\hline
Product&Software/Hardware&Relative performance&Availability/License&Cost\\
\hline
Kantronics KPC-3+ &Hardware &0.7 &Limited, used &\$199\\
Kenwood TM-D710A &Hardware &0.67 &Limited, used &\$549\\
PacComm Tiny-2 &Hardware &? &Limited, used &\$99\\
Direwolf+Pi Zero &Software &1 &Open source &\$25\\
AGWPE (Windows laptop)&Software &? &Closed source &\$0\\
\hline
\end{tabular}
} % end resizebox
\end{center}
\subsection{Conclusions}
Using this information, I believe that the Direwolf software TNC is the right choice for our project. Furthermore, I believe that using one Raspberry Pi per audio source is a good way to distribute the signal processing. Using a Pi Zero and a USB sound card, the cost would be around \$25 per audio signal. Each Pi would process one telemetry signal and be networked to a central computer, which would collect and store the data from every Pi.
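One simple way to network the Pis to the central computer, sketched below, would be to have each Pi push its parsed fields to a collector process as small JSON datagrams over UDP. The collector address, the port, and the field names are placeholders, and UDP is only one of several reasonable transport choices.
\begin{verbatim}
# Minimal sketch: a Pi forwards parsed telemetry fields to the central
# computer as a JSON datagram. Host, port and field names are placeholders.
import json
import socket
import time

COLLECTOR = ("192.168.1.10", 5005)   # hypothetical central computer

def send_fields(sock, source_id, lat, lon, alt_ft):
    msg = {
        "source": source_id,         # which Pi / which telemetry signal
        "time": time.time(),         # receive timestamp on the Pi
        "lat": lat,
        "lon": lon,
        "alt_ft": alt_ft,
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), COLLECTOR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_fields(sock, "pi-rocket-1", 44.502, -123.291, 12345)
\end{verbatim}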
% ===================================================================================
\section{Storing telemetry and flight data}
\subsection{Overview of the data to be stored}
This project will generate telemetry data from the rocket, telemetry data from the payload, sensor data from the rocket, and sensor data from the payload. All of this data is useful and may be related. For example, using acceleration data from the rocket may clarify or improve results obtained from the payload.
Additionally, the data will need to be used by other programs. We intend to view telemetry data and generate dynamic graphs while the rocket is still in the air, and after recovering the rocket we will import the on-board sensor data and generate a more complete set of graphs from it.
With these conditions, we believe a database is a good solution for storing all of this data.
\subsection{Considered Technologies}
\subsubsection{SQLite}
SQLite is an embedded, server-less, transactional SQL database engine. Records are stored in a single file, which can easily be copied across platforms. These features make it very lightweight and ideal for single-user applications.\cite{5}
\subsubsection{MySQL}
MySQL is one of the most popular databases worldwide, especially for web-based applications. It is free, receives regular updates, and is very reliable. MySQL could be considered the `standard' against which other databases are compared.\cite{6}
\subsubsection{PostgreSQL}
PostgreSQL is another free database commonly used for web applications. It offers more features than MySQL, but at the cost of additional complexity and arguably worse documentation. One interesting perk for PostgreSQL is that it supports JSON.\cite{5}
\subsubsection{MariaDB}
MariaDB is the newest database being considered. It offers additional features such as server- and application-level encryption and progress reporting for long-running statements, and it claims improved stability and performance over MySQL.\cite{7}
\subsection{Comparisons}
After researching several of the popular relational databases, the following comparison table was made\cite{5}\cite{6}\cite{7}:
\begin{center}
\resizebox{\linewidth}{!}{%
\begin{tabular}{| c | c | c | c |}
\hline
Database &Pros &Cons &Conclusion\\
\hline
SQLite
&\shortstack{
Easy to set up\\
Stored as a single file
}
&\shortstack{
No user/access management\\
Cannot easily function as server
}
&Good for single-user applications\\
\hline
MySQL
&\shortstack{
Popular, lots of tutorials\\
Fast writes
}
&\shortstack{
Not fully SQL compliant\\
Queries using TIME lack precision
}
&Not a bad choice\\
\hline
PostgreSQL
&\shortstack{
Fully SQL compliant\\
Fast reads
}
&More complex than others
&More database than we really need\\
\hline
MariaDB
&\shortstack{
Active userbase\\
Claims to be faster than MySQL
}
&&I like it!\\
\hline
\end{tabular}
}
\end{center}
\subsection{Conclusions}
An active user base and an open-source license are nice features for any piece of software. It would also be smart to use a popular database, so that the skills we learn during this project are more likely to be applicable to future projects.
SQLite lacks the server functionality we need for this project. PostgreSQL may complicate setup and usage by offering more features than we really need. I don't think it will matter at our scale, but PostgreSQL also seems to be optimized for read performance, while we will probably want to optimize for write performance due to the time-sensitive nature of our incoming data.
After comparing several types of databases, I believe MariaDB fits our needs best. I like that it is open source and has modern, easy-to-use guides and support forums. It is SQL compliant, so the skills we learn can also be applied to other, similar databases on future projects. MariaDB also supports precise queries on TIME or DATETIME fields, which may prove very useful for our type of data.\cite{7}
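As a rough sketch of how parsed telemetry could land in MariaDB, the snippet below creates a table keyed by a microsecond-precision timestamp and inserts one row through the MySQL-compatible Python connector (MariaDB speaks the MySQL wire protocol). The table layout, column names, credentials, and the choice of the \texttt{mysql-connector-python} package are assumptions for illustration, not a finished schema.
\begin{verbatim}
# Minimal sketch: store one parsed telemetry row in MariaDB.
# Table layout, credentials and the mysql-connector-python package are
# assumptions for illustration only.
import mysql.connector  # MariaDB speaks the MySQL wire protocol

conn = mysql.connector.connect(host="localhost", user="telemetry",
                               password="changeme", database="rocket")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS telemetry (
        received_at DATETIME(6) NOT NULL,   -- microsecond precision
        source      VARCHAR(32) NOT NULL,   -- which receiver / Pi
        latitude    DOUBLE,
        longitude   DOUBLE,
        altitude_ft INT,
        PRIMARY KEY (received_at, source)
    )
""")
cur.execute(
    "INSERT INTO telemetry (received_at, source, latitude, longitude,"
    " altitude_ft) VALUES (NOW(6), %s, %s, %s, %s)",
    ("pi-rocket-1", 44.502, -123.291, 12345),
)
conn.commit()
conn.close()
\end{verbatim}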
% ===================================================================================
\bibliography{references}
\bibliographystyle{IEEEtran}
\end{document}
\problem{Induction 1}
Let $n \in \mathcal{N}$. Conjecture a formula for:
$$a_n = \frac{1}{(1)(2)} + \frac{1}{(2)(3)} + ... + \frac{1}{(n)(n + 1)}$$
\solution
\part
\begin{theorem}
The formula $a_n = \frac{n}{n + 1}$ describes the summation.
\end{theorem}
\begin{proof}
Proceeding by induction, it is first established that the formula holds for $a_1$, because $a_1 = \frac{1}{(1)(2)} = \frac{1}{2} = \frac{1}{1 + 1}$.
Then, suppose the formula is true for an arbitrary $n \geq 1$, i.e. that $a_n = \frac{n}{n + 1}$.
From this, $a_{n + 1} = \frac{n}{n + 1} + \frac{1}{(n + 1)(n + 2)} = \frac{n^2 + 2n + 1}{(n + 1)(n + 2)} = \frac{(n + 1)(n + 1)}{(n + 1)(n + 2)} = \frac{n + 1}{n + 2}$.
By induction, this formula describes the summation $a_n$.
\end{proof}
\subsection{Scratch work}
$a_1 = (1/2)$ \\
$a_2 = (1/2) + (1/6) = 4/6 = 2/3$ \\
$a_3 = (1/2) + (1/6) + (1/12) = 9/12 = 3/4$ \\
A likely formula is $a_n = n/(n+1)$
\chapter*{Acronyms}
\begin{table}
\begin{tabular}{ll}
NTV & Narrow Tilting Vehicle \medskip \\
DOF & Degree(s) Of Freedom \medskip \\
DTC & Direct Tilt Control \medskip \\
STC & Steering Tilt Control \medskip \\
SDTC & Steering and Direct Tilt Control \medskip \\
PID & Proportional Integral Derivative \medskip \\
LQR & Linear Quadratic Regulator \medskip \\
\end{tabular}
\end{table}
\chapter*{Notations}
\begin{longtable}{ll}
$\psi$ & Vehicle yaw angle \medskip \\
$\theta$ & Vehicle tilt angle \medskip \\
$y$ & Lateral vehicle distance from instantaneous center of circular path\medskip \\
$m$ & Mass of the vehicle \medskip \\
$m_o$ & Mass of the vehicle without wheels \medskip \\
$m_f$ & Mass of each front wheel \medskip \\
$m_r$ & Mass of the rear wheel \medskip \\
$h$ & Height of center of gravity (c.g.) of vehicle from ground \medskip \\
$h_f$ & Height c.g. of front wheels from ground \medskip \\
$h_r$ & Height of c.g. of the rear wheel from ground \medskip \\
$h_o$ & Height of c.g. of vehicle without wheels from ground \medskip \\
$L_f$ & Longitudinal distance from center of gravity to front axle \medskip \\
$L_r$ & Longitudinal distance from center of gravity to rear axle \medskip \\
$L_o$ & Longitudinal distance from c.g. of vehicle to c.g. of vehicle without wheels \medskip \\
$I_x$ & Tilting moment of inertia of the vehicle \medskip \\
$I_z$ & Yaw moment of inertia of the vehicle \medskip \\
$I_{z\,o}$ & Yaw moment of inertia of the vehicle without wheels\medskip \\
$I_{wheel\:f\:\theta}$ & Tilting inertia of each front wheel about its own axis \medskip \\
$I_{wheel\:f\:rot}$ & Inertia of each front wheel about its rotating axis \medskip \\
$I_{wheel\:f\:\phi}$ & Yaw inertia of each front wheel about its own axis \medskip \\
$I_{wheel\:r\:\theta}$ & Tilting inertia of the rear wheel about its own axis \medskip \\
$I_{wheel\:r\:rot}$ & Inertia of the rear wheel about its rotating axis \medskip \\
$I_{wheel\:r\:\phi}$ & Yaw inertia of the rear wheel about its own axis \medskip \\
$R_{wf}$ & Front wheel radius \medskip \\
$R_{wr}$ & Rear wheel radius \medskip \\
$F_{z\:f}$ & Vertical reaction in each front wheel \medskip \\
$F_{z\:r}$ & Vertical reaction in the rear wheel \medskip \\
$C_f$ & Cornering stiffness of each front wheel \medskip \\
$C_r$ & Cornering stiffness of the rear wheel \medskip \\
$\lambda_f$ & Camber stiffness of each front wheel \medskip \\
$\lambda_r$ & Camber stiffness of the rear wheel \medskip \\
$\delta$ & Front wheel steering angle \medskip \\
$\beta$ & Front wheel trail angle \medskip \\
$\gamma$ & Front wheel trail \medskip \\
$\gamma_{pneumatic}$ & Front wheel pneumatic trail \medskip \\
$M_{\delta}$ & Steering torque input from driver \medskip \\
$M_{trail}$ & Moment due to the front wheel trail \medskip \\
$M_{t}$ & Tilt torque from actuator \medskip \\
$\alpha_f$ & Slip angle at front wheel \medskip \\
$\alpha_r$ & Slip angle at rear wheel \medskip \\
$V_x$ & Longitudinal vehicle velocity \medskip \\
$R$ & Curvature radius of the curve path \medskip \\
$b$ & Width of the vehicle in the base \medskip \\
\end{longtable}
\documentclass[pdftex,10pt]{book}
\RequirePackage[hyperindex,colorlinks,plainpages=false]{hyperref}
\hypersetup{pdfauthor={Heng Li},linkcolor=blue,citecolor=blue,urlcolor=blue}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{caption}
\renewcommand{\captionfont}{\fontsize{9pt}{10pt}\selectfont}
\addtolength{\textwidth}{2cm}\addtolength{\hoffset}{-1cm}\addtolength{\textheight}{2cm}\addtolength{\voffset}{-1cm}
\DeclareMathOperator*{\argmax}{argmax}
\makeindex
\title{Mathematical Notes on SAMtools Algorithms}
\author{Heng Li}
\begin{document}
\maketitle
\chapter{Duplicate Rate}
\section{Amplicon duplicates}
Let $N$ be the number of distinct segments (or seeds) before the
amplification and $M$ be the total number of amplicons in the
library. For seed $i$ ($i=1,\ldots,N$), let $k_i$ be the number of
amplicons in the library, where $k_i$ is drawn from a Poisson distribution
${\rm Po}(\lambda)$. When $N$ is sufficiently large, we have:
\[
M=\sum_{i=1}^Nk_i=N\sum_{k=0}^{\infty}kp_k=N\lambda
\]
where $p_k=e^{-\lambda}\lambda^k/{k!}$.
At the sequencing step, we sample $m$ amplicons from the library. On the
condition that:
\begin{equation}
m\ll M
\end{equation}
we can regard this procedure as sampling with replacement. For seed $i$,
let:
\begin{equation*}
X_i=\left\{\begin{array}{ll}
1 & \mbox{seed $i$ has been sampled at least once} \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation*}
and then:
\begin{equation*}
{\rm E} X_i= \Pr\{X_i=1\}=1-\Big(1-\frac{k_i}{M}\Big)^m\simeq 1-e^{-k_i m/M}
\end{equation*}
Let:
\[Z=\sum_{i=1}^NX_i\]
be the number of seeds sampled from the
library. The fraction of duplicates $d$ is:
\begin{eqnarray*}
d&=&1-\frac{{\rm E}(Z)}{m}\\
&\simeq&1-\frac{N}{m}\sum_{k=0}^{\infty}\big(1-e^{-km/M}\big)p_k\\
&=&1-\frac{N}{m}+\frac{N e^{-\lambda}}{m}\sum_k \frac{1}{k!}\big(\lambda e^{-m/M}\big)^k\\
&\simeq& 1-\frac{N}{m}\Big[1-e^{-\lambda}\cdot e^{\lambda(1-m/M)}\Big]
\end{eqnarray*}
i.e.
\begin{equation}
d \simeq 1 - \frac{N}{m}\Big(1-e^{-m/N}\Big)
\end{equation}
independent of $\lambda$. In addition, when $m/N$ is sufficiently small:
\begin{equation}\label{equ:d2}
d\approx \frac{m}{2N}
\end{equation}
This deduction assumes that i) $k_i\ll M$, which should almost always hold;
ii) $m\ll M$, which should largely hold because otherwise the fraction of
duplicates would be far more than half given $\lambda\sim 1000$; and iii)
$k_i$ is drawn from a Poisson distribution.
The basic message is that to reduce PCR duplicates, we should either
increase the original pool of distinct molecules before amplification or
reduce the number of reads sequenced from the library. Reducing PCR
cycles, however, plays little role.
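As an illustrative example with arbitrarily chosen numbers: for a library built from $N=10^6$ distinct seed molecules and sequenced to $m=2\times10^5$ read pairs, the exact expression gives $d\simeq 1-5\,(1-e^{-0.2})\approx 0.094$, while the approximation of Eq.~(\ref{equ:d2}) gives $d\approx m/(2N)=0.1$; doubling $N$ roughly halves the duplicate fraction, whereas changing $\lambda$, and hence the number of PCR cycles, leaves it unchanged.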
\section{Alignment duplicates}
For simplicity, we assume a read is as short as a single base pair. For
$m$ read pairs, define an indicator function:
\begin{equation*}
Y_{ij}=\left\{\begin{array}{ll}
1 & \mbox{if at least one read pair is mapped to $(i,j)$} \\
0 & \mbox{otherwise}
\end{array}\right.
\end{equation*}
Let $\{p_k\}$ be the distribution of insert size. Then:
\begin{equation*}
{\rm E}Y_{ij}=\Pr\{Y_{ij}=1\}=1-\Big[1-\frac{p_{j-i}}{L-(j-i)}\Big]^m\simeq 1-e^{-p_{j-i}\cdot m/[L-(j-i)]}
\end{equation*}
where $L$ is the length of the reference. The fraction of random
coincidence is:
\begin{eqnarray*}
d'&=&1-\frac{1}{m}\sum_{i=1}^L\sum_{j=i}^L{\rm E}Y_{ij}\\
&\simeq&1-\frac{1}{m}\sum_{i=1}^L\sum_{j=i}^L\big(1-e^{-p_{j-i}\cdot m/(L-(j-i))}\big)\\
&=&1-\frac{1}{m}\sum_{k=0}^{L-1}(L-k)\big[1-e^{-p_k m/(L-k)}\big]
\end{eqnarray*}
On the condition that $L$ is sufficiently large and:
\begin{equation}
m\ll L
\end{equation}
\begin{equation}\label{equ:dd}
d'\simeq\frac{m}{2}\sum_{k=0}^{L-1}\frac{p_k^2}{L-k}
\end{equation}
We can calculate/approximate Equation~\ref{equ:dd} for two types of
distributions. Firstly, if $p_k$ is evenly distributed between
$[k_0,k_0+k_1]$, $d'\simeq\frac{m}{2k_1L}$. Secondly, assume $k$ is
drawn from $N(\mu,\sigma)$ with $\sigma\gg 1$:
\begin{equation*}
p_k=\frac{1}{\sqrt{2\pi}\sigma}\int_k^{k+1}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx
\simeq \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(k-\mu)^2}{2\sigma^2}}
\end{equation*}
If $p_0\ll 1$, $\mu\ll L$ and $L\gg 1$:
\begin{eqnarray*}
d'&\simeq&\frac{m}{4\pi\sigma^2}\int_0^1\frac{1}{1-x}\cdot e^{-\frac{(Lx-\mu)^2}{\sigma^2}}\,dx\\
&\simeq&\frac{m}{4\pi\sigma^2}\int_{-\infty}^{\infty}e^{-\frac{(x-\mu/L)^2}{(\sigma/L)^2}}\,dx\\
&=&\frac{m}{4\pi\sigma^2}\cdot\frac{\sqrt{2\pi}\cdot \sqrt{2}\sigma}{L}\\
&=&\frac{m}{2\sqrt{\pi}\sigma L}
\end{eqnarray*}
\chapter{Base Alignment Quality (BAQ)}
Let the reference sequence be $x=r_1\ldots r_L$. We can use a profile
HMM to simulate how a read $y={\tt\char94}c_1\ldots c_l{\tt\char36}$
with quality $z=q_1\ldots q_l$ is generated (or sequenced) from the
reference, where ${\tt\char94}$ stands for the start of the read
sequence and ${\tt\char36}$ for the end.
\begin{figure}[!hb]
\includegraphics[width=\textwidth]{ahmm}
\caption{A profile HMM for generating sequence reads from a reference
sequence, where $L$ is the length of the reference sequence, $M$
states stand for alignment matches, $I$ for alignment insertions to
the reference and $D$ states for deletions.}\label{fig:ahmm}
\end{figure}
The topology of the profile HMM is given in Fig~\ref{fig:ahmm}. Let
$(M,I,D,S)=(0,1,2,3)$. The transition matrix between different types of
states is
$$
{\bf A}=(a_{ij})_{4\times4}=\left(\begin{array}{cccc}
(1-2\alpha)(1-s) & \alpha(1-s) & \alpha(1-s) & s\\
(1-\beta)(1-s) & \beta(1-s) & 0 & s \\
1-\beta & 0 & \beta & 0 \\
(1-\alpha)/L & \alpha/L & 0 & 0 \\
\end{array}\right)
$$
where $\alpha$ is the gap open probability, $\beta$ is the gap extension
probability and $s=1/(2l)$ with $l$ being the average length of a
read. As to emission probabilities, $P(c_i|D_k)=1$,
$P({\tt\char94}|S)=P({\tt\char36}|S)=1$, $P(c_i|I_k)=0.25$ and
$$
P(b_i|M_k)=e_{ki}=\left\{\begin{array}{ll}
1-10^{-q_i/10} & \mbox{if $r_k=b_i$} \\
10^{-q_i/10}/3 & \mbox{otherwise}
\end{array}\right.
$$
The forward-backward algorithm\footnote{We may adopt a banded
forward-backward approximation to reduce the time complexity. We may
also normalize $f_{\tilde{k}}(i)$ for each $i$ to avoid floating point
underflow.} is as follows:
\begin{eqnarray*}
f_S(0)&=&1\\
f_{M_k}(1)&=&e_{k1}\cdot a_{30}\\
f_{I_k}(1)&=&0.25\cdot a_{31}\\
f_{M_k}(i)&=&e_{ki}\cdot\Big[a_{00}f_{M_{k-1}}(i-1)+a_{10}f_{I_{k-1}}(i-1)+a_{20}f_{D_{k-1}}(i-1)\Big]\\
f_{I_k}(i)&=&0.25\cdot\Big[a_{01}f_{M_k}(i-1)+a_{11}f_{I_k}(i-1)\Big]\\
f_{D_k}(i)&=&a_{02}f_{M_{k-1}}(i)+a_{22}f_{D_{k-1}}(i)\\
f_S(l+1)&=&\sum_{k=1}^La_{03}f_{M_k}(l)+a_{13}f_{I_k}(l)
\end{eqnarray*}
\begin{eqnarray*}
b_S(l+1)&=&1\\
b_{M_k}(l)&=&a_{03}\\
b_{I_k}(l)&=&a_{13}\\
b_{M_k}(i)&=&e_{k+1,i+1}a_{00}b_{M_{k+1}}(i+1)+a_{01}b_{I_k}(i+1)/4+a_{02}b_{D_{k+1}}(i)\\
b_{I_k}(i)&=&e_{k+1,i+1}a_{10}b_{M_{k+1}}(i+1)+a_{11}b_{I_k}(i+1)/4\\
b_{D_k}(i)&=&(1-\delta_{i1})\cdot\Big[e_{k+1,i+1}a_{20}b_{M_{k+1}}(i+1)+a_{22}b_{D_{k+1}}(i)\Big]\\
b_S(0)&=&\sum_{k=1}^Le_{k1}a_{30}b_{M_k}(1)+a_{31}b_{I_k}(1)/4
\end{eqnarray*}
and the likelihood of the data is $P(y)=f_S(l+1)=b_S(0)$\footnote{Evaluating
whether $f_S(l+1)=b_S(0)$ holds helps to check the correctness of the formulae
and the implementation.}. The posterior probability of a read base
$c_i$ being matching state $\tilde{k}$ (M- or I-typed) is
$f_{\tilde{k}}(i)b_{\tilde{k}}(i)/P(y)$.
\chapter{Modeling Sequencing Errors}
\section{The revised MAQ model}
\subsection{General formulae}
Firstly it is easy to prove that for any $0\le\beta_{nk}<1$ ($0\le k\le n$),
$$
\sum_{k=0}^n(1-\beta_{nk})\prod_{l=0}^{k-1}\beta_{nl}=1-\prod_{k=0}^n\beta_{nk}
$$
where we regard that $\prod_{i=0}^{-1}\beta_{ni}=1$. In particular, when
$\exists k\in[0,n]$ satisfies $\beta_{nk}=0$, we have:
\[\sum^n_{k=0}(1-\beta_{nk})\prod_{i=0}^{k-1}\beta_{ni}=1\]
If we further define:
\begin{equation}\label{equ:alpha-nk}
\alpha_{nk}=(1-\beta_{nk})\prod_{i=0}^{k-1}\beta_{ni}
\end{equation}
on the condition that some $\beta_{nk}=0$, we have:
\[\sum_{k=0}^n\alpha_{nk}=1\]
\begin{equation*}
\beta_{nk}=1-\frac{\alpha_{nk}}{1-\sum_{i=0}^{k-1}\alpha_{ni}}
=\frac{1-\sum_{i=0}^k\alpha_{ni}}{1-\sum_{i=0}^{k-1}\alpha_{ni}}
=\frac{\sum_{i=k+1}^n\alpha_{ni}}{\sum_{i=k}^n\alpha_{ni}}
\end{equation*}
In the context of error modeling, if we define:
\[
\beta_{nk}\triangleq\left\{\begin{array}{ll}
\Pr\{\mbox{at least $k+1$ errors}|\mbox{at least $k$ errors out of $n$ bases}\} & (k>0) \\
\Pr\{\mbox{at least $1$ error out of $n$ bases}\} & (k=0)
\end{array}\right.
\]
we have $\beta_{nn}=0$, and
$$
\gamma_{nk}\triangleq\prod_{l=0}^{k-1}\beta_{nl}=\Pr\{\mbox{at least $k$ errors out of $n$ bases}\}
$$
then
$$
\alpha_{nk}=(1-\beta_{nk})\gamma_{nk}=\Pr\{\mbox{exactly $k$ errors in $n$ bases}\}
$$
\subsection{Modeling sequencing errors}
Given a uniform error rate $\epsilon$ and independent errors, let
$$
\bar{\alpha}_{nk}(\epsilon)=\binom{n}{k}\epsilon^k(1-\epsilon)^{n-k}
$$
and
$$
\bar{\beta}_{nk}(\epsilon)=\frac{\sum_{i=k+1}^n\bar{\alpha}_{ni}(\epsilon)}{\sum_{i=k}^n\bar{\alpha}_{ni}(\epsilon)}
$$
we can calculate that the probability of seeing at least $k$ errors is
$$
\bar{\gamma}_{nk}(\epsilon)=\prod_{l=0}^{k-1}\bar{\beta}_{nl}(\epsilon)
$$
When errors are dependent, the true $\beta_{nk}$ will be larger than
$\bar{\beta}_{nk}$. A possible choice of modeling this is to let
$$
\beta_{nk}=\bar{\beta}_{nk}^{f_k}
$$
where $0<f_k\le1$ models the dependency for $k$-th error. The
probability of seeing at least $k$ errors is thus
$$
\gamma_{nk}(\epsilon)=\prod_{l=0}^{k-1}\bar{\beta}^{f_l}_{nl}(\epsilon)
$$
For non-uniform errors $\epsilon_1\le\epsilon_2\le\cdots\le\epsilon_n$,
we may approximate $\gamma_{nk}(\vec{\epsilon})$ as
$$
\gamma_{nk}(\vec{\epsilon})=\prod_{l=0}^{k-1}\bar{\beta}^{f_l}_{nl}(\epsilon_{l+1})
$$
\subsection{Practical calculation}
We consider diploid samples only. Let $g\in\{0,1,2\}$ be the number of
reference alleles. Suppose there are $k$ reference alleles whose base
error rates are $\epsilon_1\le\cdots\le\epsilon_k$, and there are $n-k$
alternate alleles whose base error rates are
$\epsilon'_1\le\cdots\le\epsilon'_{n-k}$. We calculate
$$
P(D|0)=\gamma_{nk}(\vec\epsilon)=\prod_{l=0}^{k-1}\bar\beta_{nl}^{f_l}(\epsilon_{l+1})
$$
$$
P(D|2)=\gamma_{nk}(\vec\epsilon')=\prod_{l=0}^{n-k-1}\bar\beta_{nl}^{f_l}(\epsilon'_{l+1})
$$
and
$$
P(D|1)=\frac{1}{2^n}\binom{n}{k}
$$
where $f_l=0.97\eta^{\kappa_l-1}+0.03$ with $\kappa_l$ being the rank of base $l$
among the same type of bases on the same strand, ordered by error
rate. For sequencing data, error rates are usually discretized. We may
precompute $\bar\beta_{nk}(\epsilon)$ for sufficiently large
$n$\footnote{SAMtools precomputes a table for $n\le255$. Given higher
coverage, it randomly samples $255$ reads.} and all possible
discretized $\epsilon$. Calculating the likelihood of the data is
trivial.
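As a rough sketch (not the SAMtools implementation), the table $\bar\beta_{nk}(\epsilon)$ can be precomputed from the binomial terms as follows; the maximum depth and the error rate used in the example call are arbitrary.
\begin{verbatim}
# Rough sketch (not the SAMtools code): precompute
# beta_bar[n][k] = P(>k errors) / P(>=k errors) for one error rate eps.
# The maximum depth and the eps in the example call are arbitrary.
from math import comb

def beta_bar_table(eps, n_max=255):
    beta = [[0.0] * (n + 1) for n in range(n_max + 1)]
    for n in range(n_max + 1):
        # alpha[k] = P(exactly k errors out of n), independent errors
        alpha = [comb(n, k) * eps**k * (1 - eps)**(n - k)
                 for k in range(n + 1)]
        tails = [0.0] * (n + 2)            # tails[k] = P(at least k errors)
        for k in range(n, -1, -1):
            tails[k] = tails[k + 1] + alpha[k]
        for k in range(n + 1):
            beta[n][k] = tails[k + 1] / tails[k] if tails[k] > 0 else 0.0
    return beta

beta = beta_bar_table(eps=0.01, n_max=60)
print(beta[30][0], beta[30][1])   # tail ratios at depth n = 30
\end{verbatim}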
\subsection{The original MAQ model}
The original MAQ models the likelihood of data by
$$
\alpha_{nk}(\epsilon)=(1-\bar\beta^{f_k}_{nk})\prod_{i=0}^{k-1}\bar\beta^{f_i}_{ni}
$$
instead of $\gamma_{nk}(\epsilon)$. For non-uniform errors,
$$
\alpha_{nk}(\vec{\epsilon})=c_{nk}(\bar\epsilon)\cdot\prod_{i=0}^{k-1}\epsilon_{i+1}^{f_i}
$$
where
$$
\log{\bar\epsilon}=\frac{\sum_{i=0}^{k-1}f_i\log\epsilon_{i+1}}{\sum_{i=0}^{k-1}f_i}
$$
and
$$
c_{nk}(\bar\epsilon)\triangleq
\Big[1-\bar\beta^{f_k}_{nk}(\bar\epsilon)\Big]
\prod_{i=0}^{k-1}\Bigg[\frac{\bar\beta_{ni}(\bar\epsilon)}{\bar\epsilon}\Bigg]^{f_i}
$$
The major problem with the original MAQ model is that for $\epsilon$
close to 0.5 and large $n$, the chance of seeing no errors may be so
small that it is even smaller than the chance of seeing all errors
(i.e. $\alpha_{n0}<\alpha_{nn}$). In this case, the model prefers seeing
all errors, which is counterintuitive. The revised model uses the
accumulative probability $\gamma_{nk}$ and does not have this problem.
For small $\epsilon$ and $n$, the original and the revised MAQ models
seem to have similar performance.
\chapter{Modeling Multiple Individuals}
\section{Notations}
Suppose there are $N$ sites from $n$ individuals with $i$-th individual
having $m_i$ ploids. Let $M=\sum_im_i$ be the total number of
chromosomes. Let ${\bf D}=(\vec{D}_1,\ldots,\vec{D}_N)^{\rm T}$ be the
data matrix with vector ${\vec D}_a=(D_{a1},\ldots,D_{an})$ representing
the alignment data for each individual at site $a$. Similary, let
\mbox{${\bf G}=(\vec{G}_a,\ldots,\vec{G}_N)^{\rm T}$} and
\mbox{$\vec{G}_a=(G_{a1},\dots,G_{an})$} be the true genotypes, where
$0\le G_{ai}\le m_i$ equals the number of reference alleles~\footnote{If we
take the ancestral sequence as the reference, the non-reference allele
will be the derived allele.}. Define
\begin{equation}
X_a=X_a(\vec{G}_a)\triangleq\sum_iG_{ai}
\end{equation}
to be the number of reference alleles at site $a$ and ${\bf
X}=(X_1,\ldots,X_N)^{\rm T}$. Also define
$\Phi=(\phi_0,\ldots,\phi_M)$ as the allele frequency spectrum (AFS)
with $\sum_k\phi_k=1$.
For convenience, we may drop the position subscript $a$ when it is
unambiguous in the context that we are looking at one locus. Also
define
\begin{equation}
P(D_i|g_i)\triangleq\Pr\{D_i|G_i=g_i\}
\end{equation}
to be the likelihood of the data for individual $i$ when the underlying
genotype is known. $P(D_i|g_i)$ is calculated in
Section~\ref{sec:pdg}. And define
\begin{equation}
P(g_i|\phi)\triangleq\binom{m_i}{g_i}\phi^{g_i}(1-\phi)^{m_i-g_i}
\end{equation}
to be the probability of a genotype under the Hardy-Weinberg equilibrium
(HWE), when the site allele frequency is $\phi$.
\section{Estimating AFS}\label{sec:afs}
\subsection{The EM procedure}
We aim to find $\Phi$ that maximizes $P({\bf D}|\Phi)$ by EM. Suppose at
the $t$-th iteration the estimate is $\Phi_t$. We have
$$
\log \Pr\{{\bf D},{\bf X}={\bf x}|\Phi\}=\log \Pr\{{\bf D}|{\bf X}={\bf
x}\}\Pr\{{\bf X}={\bf x}|\Phi\}=C+\sum_a\log \phi_{x_a}
$$
where $C$ is not a function of $\{\phi_k\}$. The EM $Q$ function
is\footnote{We assume site independency in the following.}
\begin{eqnarray*}
Q(\Phi|\Phi_t)&=&\sum_{\bf x}\Pr\{{\bf X}={\bf x}|{\bf D},\Phi_t\}\log \Pr\{{\bf D},{\bf X}={\bf x}|\Phi\}\\
&=&C+\sum_{\bf x}\prod_a \Pr\{X_a=x_a|\vec{D}_a,\Phi_t\}\sum_b\log \phi_{x_b}\\
&=&C+\sum_{a=1}^N\sum_{x_a=0}^M \Pr\{X_a=x_a|\vec{D}_a,\Phi_t\}\log \phi_{x_a}
\end{eqnarray*}
Requiring $\partial_{\phi_k}(Q-\lambda\sum_l{\phi_l})=0$ leads to
$$
\frac{1}{\phi_k}\sum_a\Pr\{X_a=k|\vec{D}_a,\Phi_t\}-\lambda=0
$$
from which $\lambda$ can be calculated as:
$$
\lambda=\sum_k\sum_a \Pr\{X_a=k|\vec{D}_a,\Phi_t\}=N
$$
and thus at the $(t+1)$ iteration:
\begin{equation}\label{equ:em}
\phi_k^{(t+1)}=\frac{1}{N}\sum_a\Pr\{X_a=k|\vec{D}_a,\Phi_t\}
\end{equation}
where $\Pr\{X_a=k|\vec{D}_a,\Phi_t\}$ is calculated as follows.
\subsection{The distribution of site reference allele count}
Firstly, as we are only looking at a site from now on, we drop subscript
$a$ for convenience. Without proof\footnote{Supposedly, this can be
proved by polynomial expansion. Wikipedia gives a simplified version of
this formula as
\href{http://en.wikipedia.org/wiki/Vandermonde's_identity\#Generalized_Vandermonde.27s_identity}{generalized
Vandermonde's identity}.}, we note that
$$
\binom{M}{k}\equiv\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\prod_i\binom{m_i}{g_i}
$$
where
$$
s_j(\vec{g})\triangleq\sum_{i=1}^jg_i
$$
and $\delta_{kl}=1$ if $k=l$ and 0 otherwise. The probability of
sampling $\vec{g}$ conditional on $\sum_ig_i=k$ is
$\delta_{k,s_n(\vec{g})}\prod_i\binom{m_i}{g_i}\big/\binom{M}{k}$. With this
preparation, we can calculate\footnote{The derivation below does
\emph{not} assume Hardy-Weinberg equilibrium.}
\begin{eqnarray*}
\Pr\{\vec{D}|X=k\}&=&\sum_{\vec{g}}\Pr\{\vec{D},\vec{G}=\vec{g}|X=k\}\\
&=&\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\Pr\{\vec{D}|\vec{G}=\vec{g}\}\Pr\{\vec{G}=\vec{g}|X=k\}\\
&=&\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\prod_iP(D_i|g_i)\cdot\frac{\prod_j\binom{m_j}{g_j}}{\binom{M}{k}}\\
&=&\frac{1}{\binom{M}{k}}\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\prod_i\binom{m_i}{g_i}P(D_i|g_i)
\end{eqnarray*}
where $P(D_i|g_i)\triangleq\Pr\{D_i|G_i=g_i\}$ is the likelihood of data
when the underlying genotype is known. To calculate
$\Pr\{\vec{D}|X=k\}$, we define
$$
z_{jk}\triangleq\sum_{g_1=0}^{m_1}\cdots\sum_{g_j=0}^{m_j}\delta_{k,s_j(\vec{g})}\prod_{i=1}^j\binom{m_i}{g_i}P(D_i|g_i)
$$
for $0\le k\le\sum_{i=1}^jm_i$ and 0 otherwise. It can be calculated
iteratively with\footnote{To make it explicit, for diploid samples, if
$A$ is the reference allele:
$$
z_{jk}=z_{j-1,k}P(D_j|\langle aa\rangle)+2z_{j-1,k-1}P(D_j|\langle Aa\rangle)+z_{j-1,k-2}P(D_j|\langle AA\rangle)
$$
}
\begin{equation}\label{equ:zcal}
z_{jk}=\sum_{g_j=0}^{m_j}z_{j-1,k-g_j}\cdot\binom{m_j}{g_j}P(D_j|g_j)
\end{equation}
with $z_{00}=1$. Thus
$$
\Pr\{\vec{D}|X=k\}=\frac{z_{nk}}{\binom{M}{k}}
$$
and
\begin{equation}\label{equ:postk}
\Pr\{X=k|\vec{D},\Phi\}=\frac{\phi_k\Pr\{\vec{D}|X=k\}}{\sum_l\phi_l\Pr\{\vec{D}|X=l\}}
=\frac{\phi_kz_{nk}/\binom{M}{k}}{\sum_l\phi_lz_{nl}/\binom{M}{l}}
\end{equation}
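As a rough sketch (not the SAMtools implementation), the recursion of Eq.~(\ref{equ:zcal}) for diploid samples and the posterior of Eq.~(\ref{equ:postk}) can be computed as follows; the rescaling of the next subsection is omitted, so this direct form is only usable for small $n$, and the genotype likelihoods in the example call are arbitrary.
\begin{verbatim}
# Rough sketch (not the SAMtools code) of Eq. (zcal) for diploid samples
# (m_i = 2) plus the posterior of Eq. (postk), without rescaling, so it is
# only usable for small n. The example inputs are arbitrary.
from math import comb

def site_allele_count_posterior(pdg, phi):
    """pdg[i] = (P(D_i|g=0), P(D_i|g=1), P(D_i|g=2)); phi[k] = prior AFS."""
    n = len(pdg)
    M = 2 * n
    z = [1.0] + [0.0] * M                      # z_{0,0} = 1
    for j, (p0, p1, p2) in enumerate(pdg, start=1):
        new = [0.0] * (M + 1)
        for k in range(0, 2 * j + 1):
            new[k] = (z[k] * p0
                      + (2 * z[k - 1] * p1 if k >= 1 else 0.0)
                      + (z[k - 2] * p2 if k >= 2 else 0.0))
        z = new
    w = [phi[k] * z[k] / comb(M, k) for k in range(M + 1)]
    s = sum(w)
    return [x / s for x in w]                  # Pr{X = k | D, Phi}

post = site_allele_count_posterior([(1e-4, 0.1, 0.9), (0.8, 0.2, 1e-3)],
                                   [0.2] * 5)  # two samples, flat prior
print(post)
\end{verbatim}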
\subsection{Numerical stability}\label{sec:numsta}
Numerical computation of Eq.~(\ref{equ:zcal}) may lead to floating point
underflow for large $n$. To avoid this, let
$y_{jk}=z_{jk}/\binom{M_j}{k}$, where
$M_j=\sum_{i=1}^jm_i$. Eq.~(\ref{equ:zcal}) becomes
$$
y_{jk}=\left(\prod_{l=0}^{m_j-1}\frac{k-l}{M_{j}-l}\right)\cdot\sum_{g_j=0}^{m_j}\left(\prod_{l=g_j}^{m_j-1}\frac{M_{j-1}-k+l+1}{k-l}\right)\cdot y_{j-1,k-g_j}\cdot\binom{m_j}{g_j}P(D_j|g_j)
$$
In case of diploid samples:
\begin{eqnarray*}
y_{jk}&=&\frac{1}{2j(2j-1)}\Big[(2j-k)(2j-k-1)\cdot y_{j-1,k}P(D_j|\langle aa\rangle)+2k(2j-k)\cdot y_{j-1,k-1}P(D_j|\langle Aa\rangle)\\
&&+k(k-1)\cdot y_{j-1,k-2}P(D_j|\langle AA\rangle)\Big]
\end{eqnarray*}
For haploid samples,
\[
y_{jk}=\frac{1}{M_{j}}\Big[(M_{j-1}-k+1)\cdot y_{j-1,k}P(D_j|a)+k\cdot y_{j-1,k-1}P(D_j|A)\Big]
\]
However, this is not good enough. $y_{jk}$ still decreases exponentially
with increasing $j$. To solve this issue, we rescale $y_{jk}$ for each
$j$. Define
$$
\tilde{y}_{jk}=\frac{y_{jk}}{\prod_{i=1}^j t_i}
$$
where $t_j$ is chosen such that $\sum_l\tilde{y}_{jl}=1$. The posterior is
$$
\Pr\{X=k|\vec{D},\Phi\}=\frac{\phi_k\tilde{y}_{nk}}{\sum_l\phi_l\tilde{y}_{nl}}
$$
It should be noted that $P(D_i|g_i)$ can also be rescaled without
affecting the calculation of the posterior. Furthermore, in the
$\{\tilde{y}_{jk}\}$ matrix, most of the cells should be close to
zero. Computation of $\tilde{y}_{nk}$ can be carried out in a band instead
of in a triangle. For large $n$, this may considerably reduce computing
time.
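Combining Eq.~(\ref{equ:em}) with the rescaled per-site quantities $\tilde y_{nk}$, one round of the AFS EM can be sketched as below; \texttt{site\_lik[a][k]} stands for $\tilde y_{nk}$ at site $a$ (already computed and rescaled), and the flat starting AFS, the fixed iteration count and the toy input are arbitrary choices for illustration.
\begin{verbatim}
# Rough sketch (not the SAMtools code) of the AFS EM of Eq. (em).
# site_lik[a][k] holds the per-site terms y_{nk} (already rescaled);
# the flat starting AFS and the iteration count are arbitrary.
def estimate_afs(site_lik, M, n_iter=20):
    phi = [1.0 / (M + 1)] * (M + 1)            # flat starting AFS
    for _ in range(n_iter):
        new_phi = [0.0] * (M + 1)
        for y in site_lik:                     # one site at a time
            w = [phi[k] * y[k] for k in range(M + 1)]
            s = sum(w)
            for k in range(M + 1):
                new_phi[k] += w[k] / s         # Pr{X_a = k | D_a, Phi_t}
        phi = [x / len(site_lik) for x in new_phi]
    return phi

print(estimate_afs([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]], M=2, n_iter=5))
\end{verbatim}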
\subsection{The initial AFS}
The EM procedure guarantees that $\Pr\{{\bf D}|\Phi\}$ monotonically
increases with each iteration and converges to a local optimum. However,
if we start the iteration from a bad initial AFS, we may need many
iterations, and the iteration is also more likely to be trapped in a local
optimum. Here we give several AFSs, under different conditions, derived from
the infinite-site Wright-Fisher model.
Let $\phi'_k$ be the probability of seeing $k$ non-reference alleles out
of $M$ chromosomes. The frequency of reference alleles $\phi_k$ equals
$\phi'_{M-k}$.
If we take the ancestral sequence as the reference, the standard model
gives $\phi'_k=\theta/k$ and $\phi'_0=1-\sum_k\phi'_k$. When we do not
know if the reference allele is ancestral, the same conclusion still
stands. To see this, for $k>0$:
$$
\phi'_k=\frac{M+1-k}{M+1}\left(\frac{\theta}{k}+\frac{\theta}{M+1-k}\right)=\frac{\theta}{k}
$$
and for $k=0$:
$$
\phi'_k=1-\sum_{k=1}^{M+1}\frac{\theta}{k}+\frac{\theta}{M+1}=1-\sum_{k=1}^{M}\frac{\theta}{k}
$$
where the first term corresponds to the case wherein the reference is
ancestral and the second to the case wherein the reference is derived.
Another useful AFS is the derived allele frequency spectrum on the
condition of loci being discovered from two chromosomes. Under the
Wright-Fisher model, it is:
$$
\phi'_k=\frac{2(M+1-k)}{(M+1)(M+2)}
$$
A third AFS is the derived allele frequency spectrum on the condition of
knowing one derived allele from a chromosome. It is a flat distribution
$$
\phi'_k=\frac{1}{M+1}
$$
\subsection{Joint distribution of allele counts for 2 groups of samples}
Suppose at a locus we have two data sets $\vec{D}'$ and $\vec{D}''$,
with $n'$ samples and $M'$ haplotypes, and $n''$ samples and $M''$
haplotypes, respectively. Let $n=n'+n''$, $M=M'+M''$ and
$\vec{D}=\vec{D}'\oplus\vec{D}''$ be the joint of the two data sets. In
addition, let $X'=\sum_{i'}G'_{i'}$, $X''=\sum_{i''}G''_{i''}$ and
$X=X'+X''$. We have
$$
\Pr\{\vec{D}'|X'=k'\}=\frac{z'_{n'k'}}{\binom{M'}{k'}}
$$
$$
\Pr\{\vec{D}''|X''=k''\}=\frac{z''_{n''k''}}{\binom{M''}{k''}}
$$
and
$$
\Pr\{\vec{D}',\vec{D}''|X'=k',X''=k''\}=\Pr\{\vec{D}'|X'=k'\}\Pr\{\vec{D}''|X''=k''\}=\frac{z'_{n'k'}z''_{n''k''}}{\binom{M'}{k'}\binom{M''}{k''}}
$$
where
$$
z'_{j'k'}\triangleq\sum_{g'_1=0}^{m'_1}\cdots\sum_{g'_{j'}=0}^{m'_{j'}}\delta_{k',s_{j'}(\vec{g}')}\prod_{i'=1}^{j'}\binom{m'_{i'}}{g'_{i'}}P(D'_{i'}|g'_{i'})
$$
and similarly for $z''_{j''k''}$. If we note that
$$
\Pr\{X'=k',X''=k''|\Phi\}=\phi_k\cdot\frac{\binom{M'}{k'}\binom{M''}{k''}}{\binom{M}{k}}
$$
and by definition
$$
z_{nk}=z_{n'+n'',k'+k''}=\sum_{\{k',k''|k'+k''=k\}}z'_{n'k'}z''_{n''k''}
$$
we have
$$
\Pr\{\vec{D}|\Phi\}=\sum_k\Pr\{\vec{D},X=k|\Phi\}=\sum_{k',k''}\Pr\{\vec{D}',\vec{D}'',X'=k',X''=k''|\Phi\}
$$
Thus we can derive the joint distribution:
\begin{equation}
\Pr\{X'=k',X''=k''|\vec{D},\Phi\}=\frac{\phi_{k'+k''}z'_{n'k'}z''_{n''k''}\big/\binom{M}{k'+k''}}{\sum_l\phi_lz_{nl}\big/\binom{M}{l}}
\end{equation}
If we let $y_{jk}=z_{jk}/\binom{M_j}{k}$ as in Section~\ref{sec:numsta},
$$
\Pr\{X'=k',X''=k''|\vec{D},\Phi\}=\frac{\phi_{k'+k''}y'_{n'k'}y''_{n''k''}}{\sum_l\phi_ly_{nl}}\cdot\frac{\binom{M'}{k'}\binom{M''}{k''}}{\binom{M'+M''}{k'+k''}}
$$
This derivation can be extended to arbitrary number of data sets.
To test the association between the two groups of samples, we may use
the following statistic:
\[
P=\sum_{k',k''}\Pr\{X'=k',X''=k''|\vec{D},\Phi\}\cdot P(\chi^2(k',n'-k',k'',n''-k'')|1)
\]
where $\chi^2(n_{11},n_{12},n_{21},n_{22})$ gives the $\chi^2$ statistic of
a 2-by-2 contingency table and $P(\chi^2|1)$ computes the $P$-value of a
1-degree-of-freedom $\chi^2$ statistic. Theoretically speaking, the equation
above is not correct. As a result, the $P$-value computed this way tends to
underestimate the significance on real data. Nonetheless, the relative value
is still useful, and this statistic has the nice property that when the data
contain little ambiguity, it approaches the standard 1-degree-of-freedom
$\chi^2$ test used in GWAS.
\section{EM estimate of site statistics}
\subsection{Estimating site allele frequency assuming HWE}
Here we aim to find $\phi$ that maximises $\Pr\{\vec{D}|\phi\}$. We have:
$$\log \Pr\{\vec{D},\vec{G}=\vec{g}|\phi\}=\log\prod_iP(D_i|g_i)P(g_i|\phi)=C+\sum_i\log P(g_i|\phi)$$
Given an estimate $\phi_t$ at the $t$-th iteration, the $Q(\phi|\phi_t)$
function of EM is:
\begin{eqnarray*}
Q(\phi|\phi_t)&=&\sum_{\vec{g}}\Pr\{\vec{G}=\vec{g}|\vec{D},\phi_t\}\log \Pr\{\vec{D},\vec{G}=\vec{g}|\phi\}\\
&=&C+\sum_{\vec{g}}\prod_i\Pr\{G_i=g_i|D_i,\phi_t\}\sum_j\log P(g_j|\phi)\\
&=&C+\sum_{i=1}^n\sum_{g_i=0}^{m_i}\Pr\{G_i=g_i|D_i,\phi_t\}\log P(g_i|\phi)\\
&=&C+\sum_i\sum_{g_i}\Pr\{G_i=g_i|D_i,\phi_t\}\log \binom{m_i}{g_i}\phi^{g_i}(1-\phi)^{m_i-g_i}\\
&=&C'+\sum_i\sum_{g_i}\Pr\{G_i=g_i|D_i,\phi_t\}\Big[g_i\log\phi+(m_i-g_i)\log(1-\phi)\Big]\\
\end{eqnarray*}
Requiring $\partial_{\phi}Q\Big|_{\phi=\phi_{t+1}}=0$ gives:
$$\frac{1}{\phi_{t+1}(1-\phi_{t+1})}\sum_i\sum_{g_i}\Pr\{G_i=g_i|D_i,\phi_t\}(g_i-m_i\phi_{t+1})=0$$
Thus
\begin{equation}\label{equ:freq1}
\phi_{t+1}=\frac{1}{\sum_jm_j}\sum_i\sum_{g_i}g_i\Pr\{G_i=g_i|D_i,\phi_t\}
=\frac{1}{M}\sum_i\frac{\sum_{g_i}g_iP(D_i|g_i)P(g_i|\phi_t)}{\sum_{g_i}P(D_i|g_i)P(g_i|\phi_t)}
\end{equation}
which is the EM estimate at the $(t+1)$-th iteration and also the
expected reference allele frequency.
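A direct transcription of Eq.~(\ref{equ:freq1}) for diploid samples is sketched below; the genotype likelihoods in the example call, the starting value and the convergence tolerance are arbitrary.
\begin{verbatim}
# Rough sketch (not the SAMtools code) of the site allele-frequency EM of
# Eq. (freq1) for diploid samples; pdg[i] = (P(D_i|0), P(D_i|1), P(D_i|2)).
# The starting value, tolerance and example inputs are arbitrary.
def estimate_site_freq(pdg, tol=1e-8, max_iter=100):
    phi = 0.5
    M = 2 * len(pdg)
    for _ in range(max_iter):
        total = 0.0
        for p0, p1, p2 in pdg:
            prior = ((1 - phi)**2, 2 * phi * (1 - phi), phi**2)   # HWE
            w = [p * q for p, q in zip((p0, p1, p2), prior)]
            total += (w[1] + 2 * w[2]) / sum(w)    # E[g_i | D_i, phi]
        new_phi = total / M
        if abs(new_phi - phi) < tol:
            break
        phi = new_phi
    return phi

print(estimate_site_freq([(1e-4, 0.1, 0.9), (0.8, 0.2, 1e-3), (0.5, 0.5, 0.5)]))
\end{verbatim}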
\subsection{Estimating genotype frequency}
In the previous section, we estimate the site allele frequency assuming Hardy-Weinberg equilibrium (HWE).
We can actually relax this assumption when all samples have the same ploidy.
Let $m=m_1=\cdots=m_n$ and $\vec{\phi}=\{\phi_0,\phi_1,\ldots,\phi_m\}$ be the vector of
genotype frequency satisfying $\sum_g\phi_g=1$. It is easy to compute the $Q$-function:
$$
Q(\vec{\phi}|\vec{\phi}^{(t)})=C+\sum_{i=1}^n\sum_{g=0}^m\Pr\{G_i=g|D_i,\vec{\phi}^{(t)}\}\log\phi_g
$$
Requiring $\partial_{\phi_g}\big[Q(\vec{\phi}|\vec{\phi}^{(t)})-\lambda\sum_{g'}\phi_{g'}\big]\Big|_{\phi_g=\phi_g^{(t+1)}}$=0 gives $\lambda=n$
and thus
$$
\phi_g^{(t+1)}=\frac{1}{n}\sum_{i=1}^n\Pr\{G_i=g|D_i,\vec{\phi}^{(t)}\}=\frac{1}{n}\sum_{i=1}^n\frac{P(D_i|g)\phi_g^{(t)}}{\sum_{g'}P(D_i|g')\phi_{g'}^{(t)}}
$$
Obviously, $\sum_g\phi_g^{(t+1)}=1$ as it should be. This estimate is useful to test HWE.
\subsection{Estimating two-locus LD}
In this section, we only consider diploid samples
(i.e. $m_1=\cdots=m_n=2$). Let ${\bf D}=(\vec{D},\vec{D}')$ be the data
at two loci, respectively; and $H_i$ and $H^{\dag}_i$ be the two
underlying haplotypes for individual $i$ with $H_i\in\{0,1,2,3\}$
representing one of the four possible haplotypes at the 2 loci. We write
${\bf H}=\overrightarrow{(H_i,H^{\dag}_i)}$ as a haplotype configuration of the
samples. Define
$$
\mathcal{G}_{hk}=\mathcal{G}_{kh}=\lfloor h/2\rfloor+\lfloor k/2\rfloor
$$
$$
\mathcal{G}'_{hk}=\mathcal{G}'_{kh}=(h\mod 2)+(k\mod 2)
$$
which calculate the genotype of each locus, respectively.
\begin{eqnarray*}
Q(\vec{\phi}|\vec{\phi}^{(t)})&=&\sum_{\bf h}P({\bf H}={\bf h}|{\bf D},\vec{\phi}^{(t)})\log\Pr\{{\bf D},{\bf H}={\bf h}|\vec{\phi}\}\\
&=&C+\sum_{\bf h}\prod_i\Pr\{H_i=h_i,H^{\dag}_i=h^{\dag}_i|{\bf D},\vec{\phi}^{(t)}\}\sum_j\log P(h_j,h^{\dag}_j|\vec{\phi})\\
&=&C+\sum_i\sum_{h_i}\sum_{h^{\dag}_i}\Pr\{H_i=h_i,H^{\dag}_i=h^{\dag}_i|{\bf D},\vec{\phi}^{(t)}\}\sum_j\log(\phi_{h_i}\phi_{h^{\dag}_i})
\end{eqnarray*}
%P(\mathcal{G}(h_i,h^{\dag}_i)|\vec{D},\vec{\phi})P(\mathcal{G}'(h_i,h^{\dag}_i)|\vec{D}',\vec{\phi})\log(\phi_{h_i}\phi_{h^{\dag}_i})
Solving $\partial_{\phi_k}Q-\lambda=0$ gives
\begin{eqnarray*}
\phi_k&=&\frac{1}{2n}\sum_i\sum_h\left(\Pr\{H_i=h,H^{\dag}_i=k|{\bf D},\vec{\phi}^{(t)}\}+\Pr\{H_i=k,H^{\dag}_i=h|{\bf D},\vec{\phi}^{(t)}\}\right)\\
&=&\frac{\phi^{(t)}_k}{2n}\sum_{i=1}^n\frac{\sum_{h}\phi^{(t)}_h\big[P(D_i|{\cal G}_{hk})P(D'_i|{\cal G}'_{hk})+P(D_i|{\cal G}_{kh})P(D'_i|{\cal G}'_{kh})\big]}{\sum_{k',h}\phi^{(t)}_{k'}\phi^{(t)}_hP(D_i|{\cal G}_{hk'})P(D'_i|{\cal G}'_{hk'})}\\
&=&\frac{\phi^{(t)}_k}{n}\sum_{i=1}^n\frac{\sum_{h}\phi^{(t)}_hP(D_i|{\cal G}_{hk})P(D'_i|{\cal G}'_{hk})}{\sum_{k',h}\phi^{(t)}_{k'}\phi^{(t)}_hP(D_i|{\cal G}_{hk'})P(D'_i|{\cal G}'_{hk'})}
\end{eqnarray*}
\section{An alternative model assuming HWE}
In Section~\ref{sec:afs}, $\phi_k$ in $\Phi=\{\phi_k\}$ is interpreted
as the probability of seeing exactly $k$ alleles from $M$
chromosomes. Under this model, the prior of a genotype configuration is
$$
\Pr\{\vec{G}=\vec{g}|\Phi\}=\phi_{s_n(\vec{g})}\frac{\prod_i\binom{m_i}{g_i}}{\binom{M}{s_n(\vec{g})}}
$$
and the posterior is
$$
\Pr\{\vec{G}=\vec{g}|\vec{D},\Phi\}=\frac{\phi_{s_n(\vec{g})}}{\Pr\{\vec{D}|\Phi\}}\cdot\frac{\prod_i\binom{m_i}{g_i}P(D_i|g_i)}{\binom{M}{s_n(\vec{g})}}
$$
Suppose we want to calculate the expectation of $\sum_if_i(g_i)$; we can compute
$$
\sum_i\sum_{\vec{g}}f_i(g_i)\Pr\{\vec{G}=\vec{g}|\vec{D},\Phi\}
=\frac{1}{\Pr\{\vec{D}|\Phi\}}\sum_i\sum_k\frac{\phi_k}{\binom{M}{k}}\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}f_i(g_i)\prod_j\binom{m_j}{g_j}P(D_j|g_j)
$$
Due to the presence of $\delta_{k,s_n(\vec{g})}$, we are unable to
reduce the formula to a simpler form. Although we can take a similar
strategy in Section~\ref{sec:afs} to calculate $\sum_k\sum_{\vec{g}}$,
which is $O(n^2)$, another sum $\sum_i$ will bring this calculation to
$O(n^3)$. Even calculating the marginal probability
$\Pr\{G_i=g_i|\vec{D},\Phi\}$ requires this time complexity. All the
difficulty comes from the fact that individuals are correlated conditional on
$\{X=k\}$.
An alternative model is to interpret the AFS as the discretized AFS of
the population rather than for the observed individuals. We define the
population AFS discretized on $M$ chromosomes as
$\Phi'=\{\phi'_k\}$. Under this model,
$$
\Pr\{\vec{G}=\vec{g}|\Phi'\}=\sum_k\phi'_k\prod_iP(g_i|k/M)
$$
$$
\Pr\{\vec{G}=\vec{g},\vec{D}|\Phi'\}=\sum_k\phi'_k\prod_iP(D_i|g_i)P(g_i|k/M)
$$
$$
\Pr\{\vec{D}|\Phi'\}=\sum_{\vec{g}}\Pr\{\vec{G}=\vec{g},\vec{D}|\Phi'\}
=\sum_{k=0}^M\phi'_k\prod_{i=1}^n\sum_{g_i=0}^{m_i}P(D_i|g_i)P(g_i|k/M)
$$
and
\begin{eqnarray}\label{equ:fexp}
&&\sum_i\sum_{\vec{g}}f_i(g_i)\Pr\{\vec{G}=\vec{g}|\vec{D},\Phi'\}\\\nonumber
&=&\frac{1}{\Pr\{\vec{D}|\Phi'\}}\sum_i\sum_{\vec{g}}f_i(g_i)\sum_k\phi'_k\prod_jP(D_j|g_j)P(g_j|k/M)\\\nonumber
&=&\frac{1}{\Pr\{\vec{D}|\Phi'\}}\sum_k\phi'_k\sum_if_i(g_i)P(D_i|g_i)P(g_i|k/M)\prod_{j\not=i}\sum_{g_j}P(D_j|g_j)P(g_j|k/M)\\\nonumber
&=&\frac{1}{\Pr\{\vec{D}|\Phi'\}}\sum_k\phi'_k\left[\prod_i\sum_{g_i}P(D_i|g_i)P(g_i|k/M)\right]\cdot\left[\sum_i\frac{\sum_{g_i}f_i(g_i)P(D_i|g_i)P(g_i|k/M)}{\sum_{g_i}P(D_i|g_i)P(g_i|k/M)}\right]
\end{eqnarray}
The time complexity of this calculation is $O(n^2)$. Consider that if
$f_i(g_i)=g_i$, $X=\sum_if_i(G_i)$. We can easily calculate
$E(X|\vec{D},\Phi')$ with the formula above.
\subsection{Posterior distribution of the allele count}
Under the alternative model, we can also derive the posterior
distribution of $X$, $\Pr\{X=k|\vec{D},\Phi'\}$ as follows.
$$
\Pr\{X=k|\Phi'\}=\binom{M}{k}\sum_{l=0}^M\phi'_l\left(\frac{l}{M}\right)^k\left(1-\frac{l}{M}\right)^{M-k}
$$
Then
\begin{equation}\label{equ:pdk-alt}
\Pr\{\vec{D},X=k|\Phi'\}=z_{nk}\sum_{l=0}^M\phi'_l\left(\frac{l}{M}\right)^k\left(1-\frac{l}{M}\right)^{M-k}
\end{equation}
In fact, we also have an alternative way to derive this
$\Pr\{\vec{D},X=k|\Phi'\}$. Let $\phi'$ be the true site allele
frequency in the population. Assuming HWE, we have
\begin{eqnarray}\label{equ:pdk2}
\Pr\{X=k,\vec{D}|\phi'\}&=&\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\Pr\{\vec{D}|\vec{G}=\vec{g}\}\Pr\{\vec{G}=\vec{g}|\phi'\}\\\nonumber
&=&\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\prod_iP(D|g_i)\binom{m_i}{g_i}\phi'^{g_i}(1-\phi')^{m_i-g_i}\\\nonumber
&=&\phi'^k(1-\phi')^{M-k}\sum_{\vec{g}}\delta_{k,s_n(\vec{g})}\prod_i\binom{m_i}{g_i}P(D_i|g_i)\\\nonumber
&=&\phi'^k(1-\phi')^{M-k}z_{nk}
\end{eqnarray}
Summing over the AFS gives
\begin{equation*}
\Pr\{\vec{D},X=k|\Phi'\}=\sum_l\phi'_l\Pr\{X=k,\vec{D}|\phi'=l/M\}
\end{equation*}
which is exactly Eq.~(\ref{equ:pdk-alt}). It is worth noting that
$E(X|\vec{D},\Phi')$ calculated by Eq.~(\ref{equ:pdk2}) is identical to
the one calculated by Eq.~(\ref{equ:fexp}), which has been numerically
confirmed.
In practical calculation, the alternative model has very similar
performance to the method in Section~\ref{sec:afs} in one
iteration. However, the proof of the EM procedure requires
$\Pr\{X=k|\Phi\}=\phi_k$, which no longer holds under the alternative
interpretation. Iterating Eq.~(\ref{equ:pdk-alt}) therefore may not
monotonically increase the likelihood function; even if this iteration were
itself a valid EM procedure, we have not proved it. Yi {\it et al.}
(2010) essentially calculate Eq.~(\ref{equ:pdk2}) for a $\phi'$ estimated
from the data, without summing over the AFS. This method probably delivers
similar results, but it is not theoretically sound and may not be iterated,
either.
\section{Likelihood of data given genotype}\label{sec:pdg}
Given a site covered by $k$ reads from an $m$-ploid individual, the
sequencing data is:
\[
D=(b_1,\dots,b_k)=(\underbrace{1,\ldots,1}_l,\underbrace{0,\ldots,0}_{k-l})
\]
where 1 stands for a reference allele and 0 otherwise. The $j$-th base
is associated with error rate $\epsilon_j$, which is the larger error
rate between sequencing and alignment errors. We have
\begin{equation}
P(D|0)=\prod_{j=1}^l\epsilon_j\prod_{j=l+1}^k(1-\epsilon_j)=\left(1-\sum_{j=l+1}^k\epsilon_j+o(\epsilon^2)\right)\prod_{j=1}^l\epsilon_j
\end{equation}
\begin{equation}
P(D|m)=\left(1-\sum_{j=1}^l\epsilon_j+o(\epsilon^2)\right)\prod_{j=l+1}^k\epsilon_j
\end{equation}
For $0<g<m$:
\begin{eqnarray}\label{eq:gd}
P(D|g)&=&\sum_{a_1=0}^1\cdots\sum_{a_k=0}^1\Pr\{D|B_1=a_1,\ldots,B_k=a_k\}\Pr\{B_1=a_1,\ldots,B_k=a_k|g\}\\\nonumber
&=&\sum_{\vec{a}}\left(\frac{g}{m}\right)^{\sum_j a_j}\left(1-\frac{g}{m}\right)^{k-\sum_j a_j}\cdot\prod_j p_j(a_j)\\\nonumber
&=&\left(1-\frac{g}{m}\right)^k\prod_j\sum_{a=0}^1 p_j(a)\left(\frac{g}{m-g}\right)^a\\\nonumber
&=&\left(1-\frac{g}{m}\right)^k\prod_{j=1}^l\left(\epsilon_j+\frac{g}{m-g}(1-\epsilon_j)\right)\prod_{j=l+1}^k\left(1-\epsilon_j+\frac{\epsilon_jg}{m-g}\right)\\\nonumber
&=&\left(1-\frac{g}{m}\right)^k\left\{\left(\frac{g}{m-g}\right)^l+\left(1-\frac{g}{m-g}\right)\left(\sum_{j=1}^l\epsilon_j-\sum_{j=l+1}^k\epsilon_j\right)+o(\epsilon^2)\right\}
\end{eqnarray}
where $a=\sum_j a_j$ and
$$
p_j(a)=\left\{\begin{array}{ll}
\epsilon_j & \mbox{$a=1$}\\
1-\epsilon_j & \mbox{$a=0$}\\
\end{array}\right.
$$
In the bracket of Eq.~\ref{eq:gd}, the first term explains the deviation between $l/k$ and
$g/m$ by imperfect sampling, while the second term explains the
deviation by sequencing errors. The second term can be ignored when $k$
is small but may play a major role when $k$ is large. In particular, for
$m=2$, $P(D|1)=2^{-k}$, independent of sequencing errors.
In case of dependent errors, we may replace:
\[\epsilon_1<\epsilon_2<\cdots<\epsilon_l\]
with
\[\epsilon'_j=\epsilon_j^{\alpha^{j-1}}\]
where parameter $\alpha\in[0,1]$ addresses the error dependency.
\section{Multi-sample SNP calling and genotyping}
The probability of the site being polymorphic (i.e.\ carrying at least one
non-reference allele) is $1-\Pr\{X=M|\vec{D},\Phi\}$. For individual $i$, we may estimate the
genotype $\hat{g}_i$ as:
$$
\hat{g}_i=\argmax_{g_i} \Pr\{G_i=g_i|D_i,\phi_E\}=\argmax_{g_i}\frac{P(D_i|g_i)P(g_i|\phi_E)}
{\sum_{h_i}P(D_i|h_i)P(h_i|\phi_E)}
$$
where
$$
\phi_E=E(X|\vec{D},\Phi)/M
$$
This estimate of genotypes may not necessarily maximize the posterior
probability $P(\vec{g}|\vec{D})$, but it should be good enough in
practice. However, it should be noted that $\sum_i\hat{g}_i$ is usually
a bad estimator of site allele frequency. The max-likelihood estimator
by Eq. \eqref{equ:freq1} is much better.
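The genotype call above can be written out directly, as sketched below for a diploid individual; the genotype likelihoods and the value of $\phi_E$ in the example call are arbitrary, and ties are broken arbitrarily.
\begin{verbatim}
# Rough sketch (not the SAMtools code): call the genotype of one diploid
# individual by maximising its posterior under the HWE prior at phi_E.
# The likelihoods and phi_E in the example call are arbitrary.
def call_genotype(pdg_i, phi_E):
    p = pdg_i                                   # P(D_i | g) for g = 0, 1, 2
    prior = ((1 - phi_E)**2, 2 * phi_E * (1 - phi_E), phi_E**2)
    post = [l * q for l, q in zip(p, prior)]
    s = sum(post)
    post = [x / s for x in post]
    g_hat = max(range(3), key=lambda g: post[g])
    return g_hat, post[g_hat]                   # called genotype, its posterior

print(call_genotype((1e-4, 0.1, 0.9), phi_E=0.3))
\end{verbatim}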
\end{document}
\chapter{Summary}
\label{ch:summary}
\begin{figure}
\centering
\inputTikZ{figures/overview}
\caption{Overview over the various important concepts, equations and adaptive filters covered in these lecture notes.}
\label{fig:overview}
\end{figure}
\noindent Fig.~\ref{fig:overview} shows an overview over the various important concepts, equations and adaptive filters covered in these lecture notes. All relationships indicated by the arrows have been covered with the exception of the arrow from the steepest descent block to the NLMS block. The interested reader may establish this relationship by solving Problem 1 of Chapter 6 in \cite{Haykin2001}.
Table~\ref{tab:summary} shows a summary of the adaptive filters covered in these lecture notes. Most of the expressions for the mean-square stability, the excess mean-square error (EMSE), the misadjustment, and the mean-square deviation (MSD) are approximations, and they can therefore only be used as a rule of thumb. For more accurate expressions, see the description of the adaptive filters and the references therein.
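To connect the table entries to an implementation, the snippet below sketches the NLMS recursion from Table~\ref{tab:summary} in a system-identification setting; the filter length, step size $\beta$, regularisation $\epsilon$, and the synthetic unknown system are arbitrary choices for illustration.
\begin{verbatim}
# Sketch of the NLMS recursion from the summary table on synthetic data;
# filter length, beta, eps and the unknown system are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
M, N, beta, eps = 8, 5000, 0.5, 1e-6
w_true = rng.standard_normal(M)                # unknown system to identify
u = rng.standard_normal(N + M)
d = np.convolve(u, w_true)[:N + M] + 0.01 * rng.standard_normal(N + M)

w = np.zeros(M)
for n in range(M, N + M):
    u_vec = u[n - M + 1:n + 1][::-1]           # [u(n), ..., u(n-M+1)]
    e = d[n] - u_vec @ w                       # a priori error e(n)
    w = w + beta / (eps + u_vec @ u_vec) * u_vec * e
print(np.round(w - w_true, 3))                 # should be close to zero
\end{verbatim}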
\begin{sidewaystable}
\centering
\small
\begin{tabular}{@{}Sl Sr<{ = }@{ }Sl Sc Sc Sc Sc Sc Sc@{}}
\toprule
Name & \multicolumn{2}{c}{Algorithm} & Cost & Mean-Square Stability & EMSE & Misadjustment & MSD\\
\midrule
SD & $\vect{g}(\vect{w}(n))$ & $2\vect{R}_u\vect{w}(n)-2\vect{r}_{ud}(n)$ & $\mathcal{O}(M)$ & $0 < \mu < \displaystyle\frac{2}{\lambda_\textup{max}}$ & 0 & 0 & 0\\
& $\vect{w}(n+1)$ & $\vect{w}(n)-\displaystyle\frac{\mu}{2}\vect{g}(\vect{w}(n))$ & & & & &\\[4mm]
LMS & $e(n)$ & $d(n)-\vect{u}^T(n)\vect{w}(n)$ & $\mathcal{O}(M)$ & $0 < \mu < \displaystyle\frac{2}{\tr{\vect{R}_u}}$ & $\displaystyle\frac{\mu}{2}J_\textup{min}\tr{\vect{R}_u}$ & $\displaystyle\frac{\mu}{2}\tr{\vect{R}_u}$ & $\displaystyle\frac{\mu}{2}J_\textup{min}M$\\
& $\vect{w}(n+1)$ & $\vect{w}(n)+\mu\vect{u}(n)e(n)$ & & & & &\\[4mm]
NLMS& $e(n)$ & $d(n)-\vect{u}^T(n)\vect{w}(n)$ & $\mathcal{O}(M)$ & $0 < \beta < 2$& $\displaystyle\frac{\beta}{2}J_\textup{min}$ & $\displaystyle\frac{\beta}{2}$ & $\displaystyle\frac{\beta J_\textup{min}}{2\tr{\vect{R}_u}}$\\
& $\vect{w}(n+1)$ & $\vect{w}(n)+\displaystyle\frac{\beta}{\epsilon+\|\vect{u}(n)\|^2}\vect{u}(n)e(n)$ & & & & &\\[4mm]
APA & $\vect{e}(n)$ & $\vect{d}(n)-\vect{U}^T(n)\vect{w}(n)$ & $\mathcal{O}(MK^2)$& $0 < \beta < 2$&$\displaystyle\frac{\beta}{2}J_\textup{min}K$ & $\displaystyle\frac{\beta}{2}K$ & no simple expression\\
& $\vect{w}(n+1)$ & \multicolumn{2}{@{}Sl}{$\vect{w}(n)+\beta\vect{U}(n)[\epsilon\vect{I}+\vect{U}^T(n)\vect{U}(n)]^{-1}\vect{e}(n)$} & & & &\\[4mm]
RLS & $\vect{\pi}(n)$ & $\vect{P}(n-1)\vect{u}(n)$ & $\mathcal{O}(M^2)$ & $0 < \lambda \leq 1$&$\displaystyle\frac{J_\textup{min}\frac{1-\lambda}{1+\lambda}M}{1-\frac{1-\lambda}{1+\lambda}M}$ & $\displaystyle\frac{\frac{1-\lambda}{1+\lambda}M}{1-\frac{1-\lambda}{1+\lambda}M}$ & $\displaystyle\frac{J_\textup{min}\frac{1-\lambda}{1+\lambda}M}{1-\frac{1-\lambda}{1+\lambda}M}\sum_{m=1}^M\frac{1}{\lambda_m}$\\
& $\vect{k}(n)$ & $\displaystyle\frac{\vect{\pi}(n)}{\lambda+\vect{u}^T(n)\vect{\pi}(n)}$& & & & &\\
& $\xi(n)$ & $d(n)-\vect{u}^T(n)\vect{w}(n-1)$ & & & & &\\
& $\vect{w}(n)$ & $\vect{w}(n-1)+\vect{k}(n)\xi(n)$ & & & & &\\
& $\vect{P}(n)$ & \multicolumn{2}{@{}Sl}{$\lambda^{-1}\left[\vect{P}(n-1)-\vect{k}(n)\vect{\pi}^T(n)\right]$}& & & &\\
\bottomrule
\end{tabular}
\caption{Overview over the basic adaptive filters and their properties. Most of the expressions for the mean-square stability, the excess mean-square error (EMSE), the misadjustment, and the mean-square deviation (MSD) are approximations.}
\label{tab:summary}
\end{sidewaystable}
\chapter*{Interactions}
Part \ref{part:resonators} introduced various resonators in isolation. Most musical instruments are composed of several individual systems which all interact with one another. The following chapters will describe different ways of interaction between individual resonators.
Chapter \ref{ch:collisions} describes collision interactions between different systems and has been used for the interactions between the different components of the tromba marina in papers \citeP[D] and \citeP[E]. Furthermore, the theory described in this chapter has been used for the collision between the lips that excite the trombone in paper \citeP[H]. Then, Chapter \ref{ch:connections} describes various ways to connect models and has been used extensively for papers \citeP[A] and \citeP[B]. Furthermore, the dynamic grid presented in paper \citeP[G] uses the principles described in this chapter.
"alphanum_fraction": 0.818380744,
"avg_line_length": 228.5,
"ext": "tex",
"hexsha": "3ea4d196232c6ed468bbf17b33b30f1981bc975e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "SilvinWillemsen/phdThesis",
"max_forks_repo_path": "aauPhdCollectionThesis/interactions/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "SilvinWillemsen/phdThesis",
"max_issues_repo_path": "aauPhdCollectionThesis/interactions/introduction.tex",
"max_line_length": 614,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b0a59790e12d0c308a065958c6dc47c8763d8c34",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "SilvinWillemsen/phdThesis",
"max_stars_repo_path": "aauPhdCollectionThesis/interactions/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 185,
"size": 914
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% HEPDB - Reference Manual -- LaTeX Source %
% %
% Appendices - System dependencies %
% - Technical details %
% %
% Original: Jamie Shiers (from HEPDB source) %
% %
% Last Mod.: 02 Sep 1991 11:30 mg %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Filename{H1hdbappen-Conversion-of-existing-database-files}
\chapter{Conversion of existing database files}
Existing database files may be converted to {\tt HEPDB} format
using a simple program. The following examples describe the
conversion of a file that is already in {\tt Zebra RZ} format
and the conversion of a file that does not use {\tt Zebra RZ}.
\Filename{H2hdbappen-Conversion-of-the-CPLEAR-calibration-database}
\section{Conversion of the {\tt CPLEAR} calibration database}
The {\tt CPLEAR} calibration database consists of a single
{\tt Zebra RZ} file containing a number of directories
corresponding to the long term, medium term and short term
calibration constants of the various subdetectors.
Information is stored in these directories as individual {\tt Zebra}
banks, identified by the directory name and four keys. These
keys contain the following information:
\begin{DLtt}{1234567890}
\item[VAL\_STAR]Run number defining the lower bound of the validity range (integer)
\item[VAL\_STOP]Run number defining the upper bound of the validity range (integer)
\item[DETECTOR]Detector name to which the information corresponds (hollerith)
\item[POINTER]Hollerith bank identifier of the {\tt Zebra} bank (hollerith)
\end{DLtt}
This corresponds to a {\tt HEPDB} database with one validity range pair.
The {\tt detector} and {\tt pointer} information are stored as user keys.
The first step is to create a new {\tt HEPDB}
database file, which is done by the following program.
\begin{XMPt}{Creating a new {\tt HEPDB} database for {\tt CPLEAR}}
*CMZ : 23/10/92 10.16.19 by Jamie Shiers
*-- Author :
PROGRAM CDEXA1
* ==============
*
* Create a new, empty database
*
PARAMETER (NWPAW=100000)
COMMON/PAWC/PAW(NWPAW)
*
* Initialise Zebra, HBOOK and HEPDB
*
CALL CDPAW(NWPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
*
* Unit for database access
*
LUNCD = 1
*
* Database parameters
*
NPAIR = 1
NREC = 20000
NPRE = 200
NTOP = 1
NQUO = 65000
*
* Accept default record length (1024 words)
*
LRECL = 0
CALL CDNEW(LUNCD,'HEPDB','RZKAL.DBS',IDIV,NPAIR,NQUO,NREC,NPRE,NTOP,
+ LRECL,'F',IRC)
*
* Set the log level
*
CALL CDLOGL(' ',3,'A',IRC)
*
* Terminate
*
CALL CDEND(' ','A',IRC)
END
\end{XMPt}
The following program shows how the directory structure is created in
the {\tt HEPDB} database. Note that the directory structure is somewhat
simplified in the conversion, but this is of course optional.
\begin{XMPt}{Creating the directory structure in the {\tt HEPDB} database}
CDECK ID>, KALCONV.
PROGRAM KALCONV
*
* Program to convert CPLEAR calibration database
* to HEPDB format
*
* RZKAL keys: VAL_STAR (I)
* VAL_STOP (I)
* DETECTOR (H)
* BANK ID (H)
* insertion time = RZ date/time
*
* HEPDB keys: NPAIR = 1
* VAL_STAR = KEYS(11) (I)
* VAL_STOP = KEYS(12) (I)
* NUSER = 2
* DETECTOR = KEYS(13) (H)
* BANK ID = KEYS(14) (H)
* insertion time = KEYS(IDHINS)
*
* Output pathnames:
*
* //CDCD/CALIBRATION/DC_ST
* //CDCD/CALIBRATION/DC_LT
* //CDCD/CALIBRATION/DC_MT
*
* //CDCD/CALIBRATION/PC_ST
* //CDCD/CALIBRATION/PC_LT
* //CDCD/CALIBRATION/PC_MT
*
* //CDCD/CALIBRATION/PID_ST
* //CDCD/CALIBRATION/PID_LT
* //CDCD/CALIBRATION/PID_MT
*
* //CDCD/CALIBRATION/ST_MT
*
* //CDCD/CALIBRATION/CALO_LT
*
PARAMETER (NWPAW=100000)
COMMON/PAWC/ PAW(NWPAW)
COMMON/USRLNK/IDIV,LADDR
CHARACTER*4 CHTOP
CHARACTER*80 CHFILE
EXTERNAL CPKALC
*
* Initialise Zebra, HBOOK and HEPDB
*
CALL CDPAW(NWPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
*
* Link area of banks retrieved from database
*
CALL MZLINK(IDIV,'/USRLNK/',LADDR,LADDR,LADDR)
*
* Unit for database access
*
LUNCD = 1
*
* Unit for database update (via journal files)
*
LUNFZ = 2
*
* Unit for RZKAL file
*
LUNRZ = 3
*
* Open CPLEAR calibration database (RZKAL.DATA)
*
LRECL = 0
CALL RZOPEN(LUNRZ,'RZKAL','rzkal.data',' ',LRECL,IRC)
CALL RZFILE(LUNRZ,'RZKAL',' ')
*
* Find the database file and construct the top directory name
*
CALL CDPREF(10,'CD',CHTOP,CHFILE,IRC)
*
* Open the database file
*
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
*
* Loop over directories in RZKAL.DATA
*
CALL RZSCAN('//RZKAL',CPKALC)
*
* Terminate
*
CALL CDEND(' ','A',IRC)
CALL RZCLOS(' ','A')
END
CDECK ID>, CPKALC.
SUBROUTINE CPKALC(CHDIR)
CHARACTER*(*) CHDIR
PARAMETER (NKEYS=2)
PARAMETER (MAXOBJ=1000)
CHARACTER*8 CHTAG(NKEYS)
CHARACTER*2 CHFOR
CHARACTER*255 CHPATH,CHSAVE
DATA NENTRY/0/
SAVE NENTRY
IF(NENTRY.EQ.0) THEN
NENTRY = 1
RETURN
ENDIF
*
* Must save directory in local variable: calls to RZ
* overwrite it!
*
LDIR = LENOCC(CHDIR)
CHSAVE = CHDIR(1:LDIR)
*
* Make directories in HEPDB database
*
DELTA = 0.0
IPREC = 0
CHFOR = 'HH'
CHTAG(1) = 'DETECTOR'
CHTAG(2) = 'POINTER '
*
* Construct directory name for HEPDB file
*
LSLASH = INDEXB(CHSAVE(1:LDIR),'/') + 1
IF(INDEX(CHSAVE(1:LDIR),'MONTE').EQ.0) THEN
CHPATH = '//CDCD/CALIBRATION/'//CHSAVE(LSLASH:LDIR)
LPATH = LDIR - LSLASH + 20
ELSE
CHPATH = '//CDCD/'//CHSAVE(LSLASH:LDIR)
LPATH = LDIR - LSLASH + 8
ENDIF
CALL CDMDIR(CHPATH(1:LPATH),NKEYS,CHFOR,CHTAG,MAXOBJ,
+ IPREC,DELTA,'CP',IRC)
99 CONTINUE
CALL RZCDIR(CHSAVE(1:LDIR),' ')
END
\end{XMPt}
The data is then entered using a program that is very similar to the above.
\begin{XMPt}{Entering the data into the {\tt HEPDB} database}
CDECK ID>, KALCONV.
PROGRAM KALCONV
*
* Program to convert CPLEAR calibration database
* to HEPDB format
*
* RZKAL keys: VAL_STAR (I)
* VAL_STOP (I)
* DETECTOR (H)
* BANK ID (H)
* insertion time = RZ date/time
*
* HEPDB keys: NPAIR = 1
* VAL_STAR = KEYS(11) (I)
* VAL_STOP = KEYS(12) (I)
* NUSER = 2
* DETECTOR = KEYS(13) (H)
*                 BANK ID        = KEYS(14)        (H)
* insertion time = KEYS(IDHINS)
*
* Output pathnames:
*
* //CDCD/CALIBRATION/DC_ST
* //CDCD/CALIBRATION/DC_LT
* //CDCD/CALIBRATION/DC_MT
*
* //CDCD/CALIBRATION/PC_ST
* //CDCD/CALIBRATION/PC_LT
* //CDCD/CALIBRATION/PC_MT
*
* //CDCD/CALIBRATION/PID_ST
* //CDCD/CALIBRATION/PID_LT
* //CDCD/CALIBRATION/PID_MT
*
* //CDCD/CALIBRATION/ST_MT
*
* //CDCD/CALIBRATION/CALO_LT
*
PARAMETER (NWPAW=100000)
COMMON/PAWC/ PAW(NWPAW)
COMMON/USRLNK/IDIV,LADDR
CHARACTER*4 CHTOP
CHARACTER*80 CHFILE
EXTERNAL CPKALC
*
* Initialise Zebra, HBOOK and HEPDB
*
CALL CDPAW(NWPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
*
* Link area of banks retrieved from database
*
CALL MZLINK(IDIV,'/USRLNK/',LADDR,LADDR,LADDR)
*
* Unit for database access
*
LUNCD = 1
*
* Unit for database update (via journal files)
*
LUNFZ = 2
*
* Unit for RZKAL file
*
LUNRZ = 3
*
* Open CPLEAR calibration database (RZKAL.DATA)
*
LRECL = 0
CALL RZOPEN(LUNRZ,'RZKAL','rzkal.data',' ',LRECL,IRC)
CALL RZFILE(LUNRZ,'RZKAL',' ')
*
* Find the database file and construct the top directory name
*
CALL CDPREF(10,'CD',CHTOP,CHFILE,IRC)
*
* Open the database file
*
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
*
* Loop over directories in RZKAL.DATA
*
CALL RZSCAN('//RZKAL',CPKALC)
*
* Terminate
*
CALL CDEND(' ','A',IRC)
CALL RZCLOS(' ','A')
END
CDECK ID>, CPKALC.
SUBROUTINE CPKALC(CHDIR)
CHARACTER*(*) CHDIR
COMMON/USRLNK/IDIV,LADDR
PARAMETER (NKEYS=2)
PARAMETER (MAXOBJ=1000)
      DIMENSION KEYS(14)
DIMENSION KEYZ(4)
CHARACTER*8 CHTAG(NKEYS)
CHARACTER*2 CHFOR
CHARACTER*255 CHPATH,CHSAVE
PARAMETER (IQDROP=25, IQMARK=26, IQCRIT=27, IQSYSX=28)
COMMON /QUEST/ IQUEST(100)
COMMON /ZVFAUT/IQVID(2),IQVSTA,IQVLOG,IQVTHR(2),IQVREM(2,6)
COMMON /ZEBQ/ IQFENC(4), LQ(100)
DIMENSION IQ(92), Q(92)
EQUIVALENCE (IQ(1),LQ(9)), (Q(1),IQ(1))
COMMON /MZCA/ NQSTOR,NQOFFT(16),NQOFFS(16),NQALLO(16), NQIAM
+, LQATAB,LQASTO,LQBTIS, LQWKTB,NQWKTB,LQWKFZ
+, MQKEYS(3),NQINIT,NQTSYS,NQM99,NQPERM,NQFATA,NQCASE
+, NQTRAC,MQTRAC(48)
EQUIVALENCE (KQSP,NQOFFS(1))
COMMON /MZCB/ JQSTOR,KQT,KQS, JQDIVI,JQDIVR
+, JQKIND,JQMODE,JQDIVN,JQSHAR,JQSHR1,JQSHR2,NQRESV
+, LQSTOR,NQFEND,NQSTRU,NQREF,NQLINK,NQMINR,LQ2END
+, JQDVLL,JQDVSY,NQLOGL,NQSNAM(6)
DIMENSION IQCUR(16)
EQUIVALENCE (IQCUR(1),LQSTOR)
COMMON /MZCC/ LQPSTO,NQPFEN,NQPSTR,NQPREF,NQPLK,NQPMIN,LQP2E
+, JQPDVL,JQPDVS,NQPLOG,NQPNAM(6)
+, LQSYSS(10), LQSYSR(10), IQTDUM(22)
+, LQSTA(21), LQEND(20), NQDMAX(20),IQMODE(20)
+, IQKIND(20),IQRCU(20), IQRTO(20), IQRNO(20)
+, NQDINI(20),NQDWIP(20),NQDGAU(20),NQDGAF(20)
+, NQDPSH(20),NQDRED(20),NQDSIZ(20)
+, IQDN1(20), IQDN2(20), KQFT, LQFSTA(21)
DIMENSION IQTABV(16)
EQUIVALENCE (IQTABV(1),LQPSTO)
C
COMMON /RZCL/ LTOP,LRZ0,LCDIR,LRIN,LROUT,LFREE,LUSED,LPURG
+, LTEMP,LCORD,LFROM
EQUIVALENCE (LQRS,LQSYSS(7))
C
PARAMETER (KUP=5,KPW1=7,KNCH=9,KDATEC=10,KDATEM=11,KQUOTA=12,
+ KRUSED=13,KWUSED=14,KMEGA=15,KIRIN=17,KIROUT=18,
+ KRLOUT=19,KIP1=20,KNFREE=22,KNSD=23,KLD=24,KLB=25,
+ KLS=26,KLK=27,KLF=28,KLC=29,KLE=30,KNKEYS=31,
+ KNWKEY=32,KKDES=33,KNSIZE=253,KEX=6,KNMAX=100)
C
DATA NENTRY/0/
SAVE NENTRY
IF(NENTRY.EQ.0) THEN
NENTRY = 1
RETURN
ENDIF
*
* Must save directory in local variable: calls to RZ
* overwrite it!
*
LDIR = LENOCC(CHDIR)
CHSAVE = CHDIR(1:LDIR)
*
* Retrieve the keys in this directory
*
IF(LQRS.EQ.0) GOTO 99
IF(LCDIR.EQ.0) GOTO 99
LS = IQ(KQSP+LCDIR+KLS)
LK = IQ(KQSP+LCDIR+KLK)
NK = IQ(KQSP+LCDIR+KNKEYS)
NWK= IQ(KQSP+LCDIR+KNWKEY)
DO 10 I=1,NK
K=LK+(NWK+1)*(I-1)
DO 20 J=1,NWK
IKDES=(J-1)/10
IKBIT1=3*J-30*IKDES-2
IF(JBYT(IQ(KQSP+LCDIR+KKDES+IKDES),IKBIT1,3).LT.3)THEN
KEYZ(J)=IQ(KQSP+LCDIR+K+J)
ELSE
CALL ZITOH(IQ(KQSP+LCDIR+K+J),KEYZ(J),1)
ENDIF
20 CONTINUE
CALL VZERO(KEYS,10)
CALL UCOPY(KEYZ(1),KEYS(11),4)
*
* Retrieve the highest cycle of this object
* (will need modification if all cycles are to be converted)
*
ICYCLE = 9999
JBIAS = 2
CALL RZIN(IDIV,LADDR,JBIAS,KEYZ,ICYCLE,' ')
IF(IQUEST(1).NE.0) THEN
PRINT *,'CPKALC. error ',IQUEST(1),' from RZIN for ',KEYZ
GOTO 10
ENDIF
*
* Date/time of insertion
*
CALL RZDATE(IQUEST(14),IDATE,ITIME,1)
CALL CDPKTM(IDATE,ITIME,IPACK,IRC)
KEYS(4) = IPACK
*
* Store objects in HEPDB with appropriate keys
* Option H: honour insertion time in KEYS(IDHINS)
*
CALL CDSTOR(CHPATH(1:LPATH),LADDR,LKYBK,IDIV,KEYS,'H',IRC)
*
* Reset directory
*
CALL RZCDIR(CHSAVE(1:LDIR),' ')
*
* Drop this bank
*
CALL MZDROP(IDIV,LADDR,' ')
LADDR = 0
10 CONTINUE
99 CONTINUE
*
* Send updates to server one directory at a time
*
CALL CDSTSV(' ',0,IRC)
CALL RZCDIR(CHSAVE(1:LDIR),' ')
END
\end{XMPt}
\Filename{H2hdbappen-Creation-of-the-CHORUS-database}
\section{Creation of the {\tt CHORUS} database}
The programs in this section were all written by
{\tt J. Brunner/CHORUS}.
The following program shows the creation of the directory
structure and aliases for the {\tt CHORUS} geometry database.
Once the directories have been created by the server,
the program can be rerun to enter the aliases.
\begin{XMPt}{Creating the directories and aliases for {\tt CHORUS}}
PROGRAM MKDIRHDB
C ----------------
C CREATES THE DIRECTORY STRUCTURE
C FOR THE GEOMETRY DATABASE OF CHORUS
C
PARAMETER (NPAW=100000,NHBOOK=0,NDX=43)
COMMON /PAWC/ PAW(NPAW)
CHARACTER*4 CHTOP
CHARACTER*80 CHFILE
CHARACTER*80 DNAME
CHARACTER*40 DITAG(NDX)
CHARACTER*4 ALIAS(NDX)
DATA ALIAS /'3021','3022','3023','3024','3061','3062','3063',
+'3041','3042','3043','3044','3051','3052','3053','3054',
+'3011','3012','3015','3016','3017','3014','3018',
+'3031','3032','3033','3034','3091',
+'3071','3072','3073','3074','3075','3076','3077','3078',
+'3081','3082','3083','3084','3085','3086','3087','3088'/
DATA DITAG /'TUBES/X-COORD',
+ 'TUBES/V-COORD',
+ 'TUBES/V-OFFSET',
+ 'TUBES/ORIENTATION',
+ 'TUBES/ANALOG-V-COORD',
+ 'TUBES/ANALOG-V-OFFSET',
+ 'TUBES/MAGNET',
+ 'BREMS/X-COORD',
+ 'BREMS/V-COORD',
+ 'BREMS/V-OFFSET',
+ 'BREMS/ORIENTATION',
+ 'DRIFT/X-COORD',
+ 'DRIFT/V-COORD',
+ 'DRIFT/V-OFFSET',
+ 'DRIFT/ORIENTATION',
+ 'CALOR/X-COORD',
+ 'CALOR/V-COORD',
+ 'CALOR/ELM-V-OFFSET',
+ 'CALOR/HA1-V-OFFSET',
+ 'CALOR/HA2-V-OFFSET',
+ 'CALOR/ORIENTATION',
+ 'CALOR/MASK',
+ 'FIBER/X-COORD',
+ 'FIBER/V-COORD',
+ 'FIBER/V-OFFSET',
+ 'FIBER/ORIENTATION',
+ 'DIAMO/X-COORD',
+ 'TRIGG/X-COORD-PLAN',
+ 'TRIGG/Y-COORD-PLAN',
+ 'TRIGG/Z-COORD-PLAN',
+ 'TRIGG/X-WIDTH-PLAN',
+ 'TRIGG/Y-WIDTH-PLAN',
+ 'TRIGG/Z-WIDTH-PLAN',
+ 'TRIGG/Z-ANGLE-PLAN',
+ 'TRIGG/POINTER-TO-BAR',
+ 'TRIGG/X-COORD-BAR',
+ 'TRIGG/Y-COORD-BAR',
+ 'TRIGG/Z-COORD-BAR',
+ 'TRIGG/X-WIDTH-BAR',
+ 'TRIGG/Y-WIDTH-BAR',
+ 'TRIGG/Z-WIDTH-BAR',
+ 'TRIGG/Z-ANGLE-BAR',
+ 'TRIGG/POINTER-TO-PLAN'/
C
C--- INITIALISATION
C
CALL CDPAW(NPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
PRINT '('' IRC FROM CDPAW '',I5)',IRC
LUNCD=1
LUNFZ=2
CALL CDPREF(10,'CH',CHTOP,CHFILE,IRC)
PRINT '('' IRC FROM CDPREF '',I5)',IRC
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
PRINT '('' IRC FROM CDOPEN '',I5)',IRC
C
C--- CREATE DIRECTORIES
C
IPREC = -8
MAX = 100
DELTA = 0.0
NKEYS = 0
DO IDX=1,NDX
DNAME = '//CDCH/GEOMETRY/'//DITAG(IDX)
*
* First run with the following call to CDMDIR
*
CALL CDMDIR(DNAME,NKEYS,' ',' ',MAX,IPREC,DELTA,' ',IRC)
PRINT '('' IRC FROM CDMDIR '',I5)',IRC
*
* Then rerun with the following call uncommented and
* the previous call to CDMDIR commented out
*
* CALL CDALIA(DNAME,ALIAS(IDX),'P',IRC)
* PRINT '('' IRC FROM CDALIA '',I5)',IRC
END DO
C
C--- TERMINATION
C
CALL CDEND(' ','A',IRC)
END
\end{XMPt}
The following program shows an example of how the directories
created by the previous program can be populated with vectors.
{\tt HEPDB} always stores objects as Zebra banks and so the
first operation is to convert the vectors into banks using
the routine \Rind{CDVECT}. The banks can then be stored
using \Rind{CDSTOR}.
\begin{XMPt}{Storing vectors in a {\tt HEPDB} database}
PROGRAM FILLHDB
C ----------------
C FILLS THE DIRECTORY STRUCTURE
C FOR THE GEOMETRY DATABASE OF CHORUS
C
DIMENSION KEYDBS(100)
PARAMETER (NPAW=400000,NHBOOK=0,NDX=42)
COMMON /PAWC/ PAW(NPAW)
CHARACTER*4 CHTOP
CHARACTER*80 CHFILE
CHARACTER*80 DNAME
CHARACTER*40 DITAG(NDX)
CHARACTER*4 ALIAS(NDX)
DIMENSION IPO(1300)
DATA (IPO(L),L=1,90)/
+ 13633, 13634, 13635, 13636, 13637, 13638, 13639, 13640, 0,
+ 0, 13641, 13642, 13643, 13644, 13645, 13646, 13647, 13648,
+ 13649, 13650, 13651, 13652, 13653, 13654, 13655, 13656, 13657,
+ 13658, 13659, 13660, 13661, 13662, 13663, 13664, 13665, 13666,
+ 13667, 13668, 0, 0, 13377, 13378, 13379, 13380, 13381,
+ 13382, 13383, 13384, 0, 0, 13385, 13386, 13387, 13388,
+ 13389, 13390, 13391, 13392, 13393, 13394, 13395, 13396, 13397,
+ 13398, 13399, 13400, 13401, 13402, 13403, 13404, 13405, 13406,
+ 13407, 13408, 13409, 13410, 13411, 13412, 0, 0, 13121,
+ 13122, 13123, 13124, 13125, 13126, 13127, 13128, 0, 0/
data (ipo(L),L=91,180)/
+ 13129, 13130, 13131, 13132, 13133, 13134, 13135, 13136, 13137,
+ 13138, 13139, 13140, 13141, 13142, 13143, 13144, 13145, 13146,
+ 13147, 13148, 13149, 13150, 13151, 13152, 13153, 13154, 13155,
+ 13156, 0, 0, 12865, 12866, 12867, 12868, 12869, 12870,
+ 12871, 12872, 0, 0, 12873, 12874, 12875, 12876, 12877,
+ 12878, 12879, 12880, 12881, 12882, 12883, 12884, 12885, 12886,
+ 12887, 12888, 12889, 12890, 12891, 12892, 12893, 12894, 12895,
+ 12896, 12897, 12898, 12899, 12900, 0, 0, 12609, 12610,
+ 12611, 12612, 12613, 12614, 12615, 12616, 0, 0, 12617,
+ 12618, 12619, 12620, 12621, 12622, 12623, 12624, 12625, 12626/
data (IPO(L),L=181,270)/
+ 12627, 12628, 12629, 12630, 12631, 12632, 12633, 12634, 12635,
+ 12636, 12637, 12638, 12639, 12640, 12641, 12642, 12643, 12644,
+ 0, 0, 13697, 13698, 13699, 13700, 13701, 13702, 13703,
+ 13704, 0, 0, 13705, 13706, 13707, 13708, 13709, 13710,
+ 13711, 13712, 13713, 13714, 13715, 13716, 13717, 13718, 13719,
+ 13720, 13721, 13722, 13723, 13724, 13725, 13726, 13727, 13728,
+ 13729, 13730, 13731, 13732, 0, 0, 13441, 13442, 13443,
+ 13444, 13445, 13446, 13447, 13448, 0, 0, 13449, 13450,
+ 13451, 13452, 13453, 13454, 13455, 13456, 13457, 13458, 13459,
+ 13460, 13461, 13462, 13463, 13464, 13465, 13466, 13467, 13468/
data (IPO(L),L=271,360)/
+ 13469, 13470, 13471, 13472, 13473, 13474, 13475, 13476, 0,
+ 0, 13185, 13186, 13187, 13188, 13189, 13190, 13191, 13192,
+ 0, 0, 13193, 13194, 13195, 13196, 13197, 13198, 13199,
+ 13200, 13201, 13202, 13203, 13204, 13205, 13206, 13207, 13208,
+ 13209, 13210, 13211, 13212, 13213, 13214, 13215, 13216, 13217,
+ 13218, 13219, 13220, 0, 0, 12929, 12930, 12931, 12932,
+ 12933, 12934, 12935, 12936, 0, 0, 12937, 12938, 12939,
+ 12940, 12941, 12942, 12943, 12944, 12945, 12946, 12947, 12948,
+ 12949, 12950, 12951, 12952, 12953, 12954, 12955, 12956, 12957,
+ 12958, 12959, 12960, 12961, 12962, 12963, 12964, 0, 0/
data (IPO(L),L=361,450)/
+ 12673, 12674, 12675, 12676, 12677, 12678, 12679, 12680, 0,
+ 0, 12681, 12682, 12683, 12684, 12685, 12686, 12687, 12688,
+ 12689, 12690, 12691, 12692, 12693, 12694, 12695, 12696, 12697,
+ 12698, 12699, 12700, 12701, 12702, 12703, 12704, 12705, 12706,
+ 12707, 12708, 0, 0, 9537, 9538, 9539, 9540, 9541,
+ 9542, 9543, 9544, 9545, 9546, 9547, 9548, 9549, 9550,
+ 9551, 9552, 9553, 9554, 9555, 9556, 9557, 9558, 9559,
+ 9560, 9561, 9562, 9563, 9564, 9565, 9566, 9567, 9568,
+ 9569, 9570, 9571, 9572, 9573, 9574, 9575, 9576, 9281,
+ 9282, 9283, 9284, 9285, 9286, 9287, 9288, 9289, 9290/
data (IPO(L),L=451,540)/
+ 9291, 9292, 9293, 9294, 9295, 9296, 9297, 9298, 9299,
+ 9300, 9301, 9302, 9303, 9304, 9305, 9306, 9307, 9308,
+ 9309, 9310, 9311, 9312, 9313, 9314, 9315, 9316, 9317,
+ 9318, 9319, 9320, 9025, 9026, 9027, 9028, 9029, 9030,
+ 9031, 9032, 9033, 9034, 9035, 9036, 9037, 9038, 9039,
+ 9040, 9041, 9042, 9043, 9044, 9045, 9046, 9047, 9048,
+ 9049, 9050, 9051, 9052, 9053, 9054, 9055, 9056, 9057,
+ 9058, 9059, 9060, 9061, 9062, 9063, 9064, 8769, 8770,
+ 8771, 8772, 8773, 8774, 8775, 8776, 8777, 8778, 8779,
+ 8780, 8781, 8782, 8783, 8784, 8785, 8786, 8787, 8788/
data (IPO(L),L=541,630)/
+ 8789, 8790, 8791, 8792, 8793, 8794, 8795, 8796, 8797,
+ 8798, 8799, 8800, 8801, 8802, 8803, 8804, 8805, 8806,
+ 8807, 8808, 8513, 8514, 8515, 8516, 8517, 8518, 8519,
+ 8520, 8521, 8522, 8523, 8524, 8525, 8526, 8527, 8528,
+ 8529, 8530, 8531, 8532, 8533, 8534, 8535, 8536, 8537,
+ 8538, 8539, 8540, 8541, 8542, 8543, 8544, 8545, 8546,
+ 8547, 8548, 8549, 8550, 8551, 8552, 9601, 9602, 9603,
+ 9604, 9605, 9606, 9607, 9608, 9609, 9610, 9611, 9612,
+ 9613, 9614, 9615, 9616, 9617, 9618, 9619, 9620, 9621,
+ 9622, 9623, 9624, 9625, 9626, 9627, 9628, 9629, 9630/
data (IPO(L),L=631,720)/
+ 9631, 9632, 9633, 9634, 9635, 9636, 9637, 9638, 9639,
+ 9640, 9345, 9346, 9347, 9348, 9349, 9350, 9351, 9352,
+ 9353, 9354, 9355, 9356, 9357, 9358, 9359, 9360, 9361,
+ 9362, 9363, 9364, 9365, 9366, 9367, 9368, 9369, 9370,
+ 9371, 9372, 9373, 9374, 9375, 9376, 9377, 9378, 9379,
+ 9380, 9381, 9382, 9383, 9384, 9089, 9090, 9091, 9092,
+ 9093, 9094, 9095, 9096, 9097, 9098, 9099, 9100, 9101,
+ 9102, 9103, 9104, 9105, 9106, 9107, 9108, 9109, 9110,
+ 9111, 9112, 9113, 9114, 9115, 9116, 9117, 9118, 9119,
+ 9120, 9121, 9122, 9123, 9124, 9125, 9126, 9127, 9128/
data (IPO(L),L=721,810)/
+ 8833, 8834, 8835, 8836, 8837, 8838, 8839, 8840, 8841,
+ 8842, 8843, 8844, 8845, 8846, 8847, 8848, 8849, 8850,
+ 8851, 8852, 8853, 8854, 8855, 8856, 8857, 8858, 8859,
+ 8860, 8861, 8862, 8863, 8864, 8865, 8866, 8867, 8868,
+ 8869, 8870, 8871, 8872, 8577, 8578, 8579, 8580, 8581,
+ 8582, 8583, 8584, 8585, 8586, 8587, 8588, 8589, 8590,
+ 8591, 8592, 8593, 8594, 8595, 8596, 8597, 8598, 8599,
+ 8600, 8601, 8602, 8603, 8604, 8605, 8606, 8607, 8608,
+ 8609, 8610, 8611, 8612, 8613, 8614, 8615, 8616, 5186,
+ 5187, 5188, 5189, 5190, 5191, 5192, 5193, 5194, 5195/
data (IPO(L),L=811,900)/
+ 5196, 5197, 5198, 5199, 5200, 5201, 5202, 5203, 5204,
+ 5205, 5206, 5207, 5208, 5209, 5210, 5211, 5212, 5213,
+ 5214, 5215, 5216, 5217, 5218, 5219, 5220, 5221, 5222,
+ 5223, 5224, 5225, 5226, 5227, 5228, 5229, 5230, 5231,
+ 5232, 5233, 5234, 5235, 5236, 5237, 5238, 5239, 5240,
+ 5241, 5242, 5243, 5244, 5245, 4930, 4931, 4932, 4933,
+ 4934, 4935, 4936, 4937, 4938, 4939, 4940, 4941, 4942,
+ 4943, 4944, 4945, 4946, 4947, 4948, 4949, 4950, 4951,
+ 4952, 4953, 4954, 4955, 4956, 4957, 4958, 4959, 4960,
+ 4961, 4962, 4963, 4964, 4965, 4966, 4967, 4968, 4969/
data (IPO(L),L=901,990)/
+ 4970, 4971, 4972, 4973, 4974, 4975, 4976, 4977, 4978,
+ 4979, 4980, 4981, 4982, 4983, 4984, 4985, 4986, 4987,
+ 4988, 4989, 4674, 4675, 4676, 4677, 4678, 4679, 4680,
+ 4681, 4682, 4683, 4684, 4685, 4686, 4687, 4688, 4689,
+ 4690, 4691, 4692, 4693, 4694, 4695, 4696, 4697, 4698,
+ 4699, 4700, 4701, 4702, 4703, 4704, 4705, 4706, 4707,
+ 4708, 4709, 4710, 4711, 4712, 4713, 4714, 4715, 4716,
+ 4717, 4718, 4719, 4720, 4721, 4722, 4723, 4724, 4725,
+ 4726, 4727, 4728, 4729, 4730, 4731, 4732, 4733, 4418,
+ 4419, 4420, 4421, 4422, 4423, 4424, 4425, 4426, 4427/
data (IPO(L),L=991,1080)/
+ 4468, 4469, 4470, 4471, 4472, 4473, 4474, 4475, 4476,
+ 4477, 5250, 5251, 5252, 5253, 5254, 5255, 5256, 5257,
+ 5258, 5259, 5260, 5261, 5262, 5263, 5264, 5265, 5266,
+ 5267, 5268, 5269, 5270, 5271, 5272, 5273, 5274, 5275,
+ 5276, 5277, 5278, 5279, 5280, 5281, 5282, 5283, 5284,
+ 5285, 5286, 5287, 5288, 5289, 5290, 5291, 5292, 5293,
+ 5294, 5295, 5296, 5297, 5298, 5299, 5300, 5301, 5302,
+ 5303, 5304, 5305, 5306, 5307, 5308, 5309, 4994, 4995,
+ 4996, 4997, 4998, 4999, 5000, 5001, 5002, 5003, 5004,
+ 5005, 5006, 5007, 5008, 5009, 5010, 5011, 5012, 5013/
data (IPO(L),L=1081,1170)/
+ 5014, 5015, 5016, 5017, 5018, 5019, 5020, 5021, 5022,
+ 5023, 5024, 5025, 5026, 5027, 5028, 5029, 5030, 5031,
+ 5032, 5033, 5034, 5035, 5036, 5037, 5038, 5039, 5040,
+ 5041, 5042, 5043, 5044, 5045, 5046, 5047, 5048, 5049,
+ 5050, 5051, 5052, 5053, 4738, 4739, 4740, 4741, 4742,
+ 4743, 4744, 4745, 4746, 4747, 4748, 4749, 4750, 4751,
+ 4752, 4753, 4754, 4755, 4756, 4757, 4758, 4759, 4760,
+ 4761, 4762, 4763, 4764, 4765, 4766, 4767, 4768, 4769,
+ 4770, 4771, 4772, 4773, 4774, 4775, 4776, 4777, 4778,
+ 4779, 4780, 4781, 4782, 4783, 4784, 4785, 4786, 4787/
data (IPO(L),L=1171,1260)/
+ 4788, 4789, 4790, 4791, 4792, 4793, 4794, 4795, 4796,
+ 4797, 4482, 4483, 4484, 4485, 4486, 4487, 4488, 4489,
+ 4490, 4491, 4532, 4533, 4534, 4535, 4536, 4537, 4538,
+ 4539, 4540, 4541, 4428, 4429, 4430, 4431, 4432, 4433,
+ 4434, 4435, 4436, 4437, 4438, 4439, 4440, 4441, 4442,
+ 4443, 4444, 4445, 4446, 4447, 4448, 4449, 4450, 4451,
+ 4452, 4453, 4454, 4455, 4456, 4457, 4458, 4459, 4460,
+ 4461, 4462, 4463, 4464, 4465, 4466, 4467, 4492, 4493,
+ 4494, 4495, 4496, 4497, 4498, 4499, 4500, 4501, 4502,
+ 4503, 4504, 4505, 4506, 4507, 4508, 4509, 4510, 4511/
data (IPO(L),L=1261,1300)/
+ 4512, 4513, 4514, 4515, 4516, 4517, 4518, 4519, 4520,
+ 4521, 4522, 4523, 4524, 4525, 4526, 4527, 4528, 4529,
+ 4530, 4531, 4417, 4673, 4929, 5185, 4478, 4737, 4990,
+ 5249, 0, 0, 4481, 4734, 4993, 5246, 4542, 4798,
+ 5054, 5310, 0, 0/
DATA ALIAS /'3021','3022','3023','3024','3061','3062',
+'3041','3042','3043','3044','3051','3052','3053','3054',
+'3011','3012','3015','3016','3017','3014','3018',
+'3031','3032','3033','3034','3091',
+'3071','3072','3073','3074','3075','3076','3077','3078',
+'3081','3082','3083','3084','3085','3086','3087','3088'/
DATA DITAG /'TUBES/X-COORD',
+ 'TUBES/V-COORD',
+ 'TUBES/V-OFFSET',
+ 'TUBES/ORIENTATION',
+ 'TUBES/ANALOG-V-COORD',
+ 'TUBES/ANALOG-V-OFFSET',
+ 'BREMS/X-COORD',
+ 'BREMS/V-COORD',
+ 'BREMS/V-OFFSET',
+ 'BREMS/ORIENTATION',
+ 'DRIFT/X-COORD',
+ 'DRIFT/V-COORD',
+ 'DRIFT/V-OFFSET',
+ 'DRIFT/ORIENTATION',
+ 'CALOR/X-COORD',
+ 'CALOR/V-COORD',
+ 'CALOR/ELM-V-OFFSET',
+ 'CALOR/HA1-V-OFFSET',
+ 'CALOR/HA2-V-OFFSET',
+ 'CALOR/ORIENTATION',
+ 'CALOR/MASK',
+ 'FIBER/X-COORD',
+ 'FIBER/V-COORD',
+ 'FIBER/V-OFFSET',
+ 'FIBER/ORIENTATION',
+ 'DIAMO/X-COORD',
+ 'TRIGG/X-COORD-PLAN',
+ 'TRIGG/Y-COORD-PLAN',
+ 'TRIGG/Z-COORD-PLAN',
+ 'TRIGG/X-WIDTH-PLAN',
+ 'TRIGG/Y-WIDTH-PLAN',
+ 'TRIGG/Z-WIDTH-PLAN',
+ 'TRIGG/Z-ANGLE-PLAN',
+ 'TRIGG/POINTER-TO-BAR',
+ 'TRIGG/X-COORD-BAR',
+ 'TRIGG/Y-COORD-BAR',
+ 'TRIGG/Z-COORD-BAR',
+ 'TRIGG/X-WIDTH-BAR',
+ 'TRIGG/Y-WIDTH-BAR',
+ 'TRIGG/Z-WIDTH-BAR',
+ 'TRIGG/Z-ANGLE-BAR',
+ 'TRIGG/POINTER-TO-PLAN'/
C
C--- INITIALISATION
C
CALL CDPAW(NPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
PRINT '('' IRC FROM CDPAW '',I5)',IRC
LUNCD=1
LUNFZ=2
CALL CDPREF(10,'CH',CHTOP,CHFILE,IRC)
PRINT '('' IRC FROM CDPREF '',I5)',IRC
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
PRINT '('' IRC FROM CDOPEN '',I5)',IRC
C
C--- STORE VECTORS
C
DO IDX=1,NDX
DNAME = '//CDCH/GEOMETRY/'//DITAG(IDX)
NDAT = 1300
CALL CDVECT(' ',IPO,NDAT,JADDR,'PI',IRC)
PRINT '('' IRC FROM CDVECT '',I5)',IRC
KEYDBS(11) = 1
KEYDBS(12) = 999999
IDIV = 0
CALL CDSTOR(DNAME(1:26),JADDR,LDUMI,IDIV,KEYDBS,' ',IRC)
PRINT '('' IRC FROM CDSTOR '',I5)',IRC
END DO
C
C--- TERMINATION
C
CALL CDEND(' ','A',IRC)
END
\end{XMPt}
The following example shows how objects may be copied from
one database to another. The directory structures in the two
databases are different in this case.
\begin{XMPt}{Copying objects from one database to another}
PROGRAM COPYHDB
C ----------------
C FILLS THE DIRECTORY STRUCTURE
C FOR THE GEOMETRY DATABASE OF CHORUS
C
DIMENSION KEYDBS(100),KEY(5),JP(5)
DATA KEY /3023,3062,3043,3053,3071/
DATA JP / 3, 6, 10, 14, 7/
PARAMETER (NWPAW=400000,NHBOOK=0,NDX=43)
COMMON/PAWC/NWP,IXPAWC,IHDIV,IXHIGZ,IXKU,FENC(5),LMAIN,HCV(NWPAW)
DIMENSION IQ(2),Q(2),LQ(8000)
EQUIVALENCE (LQ(1),LMAIN),(IQ(1),LQ(9)),(Q(1),IQ(1))
CHARACTER*7 CHPAT1
DATA CHPAT1 /'//CDC2/'/
CHARACTER*4 CHTOP
CHARACTER*80 CHFILE,CHPATH
CHARACTER*80 DNAME
CHARACTER*40 DITAG(NDX)
CHARACTER*4 ALIAS(NDX)
DATA ALIAS /'3021','3022','3023','3024','3061','3062','3063',
+'3041','3042','3043','3044','3051','3052','3053','3054',
+'3011','3012','3015','3016','3017','3014','3018',
+'3031','3032','3033','3034','3091',
+'3071','3072','3073','3074','3075','3076','3077','3078',
+'3081','3082','3083','3084','3085','3086','3087','3088'/
DATA DITAG /'TUBES/X-COORD',
+ 'TUBES/V-COORD',
+ 'TUBES/V-OFFSET',
+ 'TUBES/ORIENTATION',
+ 'TUBES/ANALOG-V-COORD',
+ 'TUBES/ANALOG-V-OFFSET',
+ 'TUBES/MAGNET',
+ 'BREMS/X-COORD',
+ 'BREMS/V-COORD',
+ 'BREMS/V-OFFSET',
+ 'BREMS/ORIENTATION',
+ 'DRIFT/X-COORD',
+ 'DRIFT/V-COORD',
+ 'DRIFT/V-OFFSET',
+ 'DRIFT/ORIENTATION',
+ 'CALOR/X-COORD',
+ 'CALOR/V-COORD',
+ 'CALOR/ELM-V-OFFSET',
+ 'CALOR/HA1-V-OFFSET',
+ 'CALOR/HA2-V-OFFSET',
+ 'CALOR/ORIENTATION',
+ 'CALOR/MASK',
+ 'FIBER/X-COORD',
+ 'FIBER/V-COORD',
+ 'FIBER/V-OFFSET',
+ 'FIBER/ORIENTATION',
+ 'DIAMO/X-COORD',
+ 'TRIGG/X-COORD-PLAN',
+ 'TRIGG/Y-COORD-PLAN',
+ 'TRIGG/Z-COORD-PLAN',
+ 'TRIGG/X-WIDTH-PLAN',
+ 'TRIGG/Y-WIDTH-PLAN',
+ 'TRIGG/Z-WIDTH-PLAN',
+ 'TRIGG/Z-ANGLE-PLAN',
+ 'TRIGG/POINTER-TO-BAR',
+ 'TRIGG/X-COORD-BAR',
+ 'TRIGG/Y-COORD-BAR',
+ 'TRIGG/Z-COORD-BAR',
+ 'TRIGG/X-WIDTH-BAR',
+ 'TRIGG/Y-WIDTH-BAR',
+ 'TRIGG/Z-WIDTH-BAR',
+ 'TRIGG/Z-ANGLE-BAR',
+ 'TRIGG/POINTER-TO-PLAN'/
C
C--- INITIALISATION, OPEN 2 DATABASE FILES
C
CALL CDPAW(NWPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
PRINT '('' IRC FROM CDPAW '',I5)',IRC
LUNCD=1
LUNFZ=2
CALL CDPREF(10,'CH',CHTOP,CHFILE,IRC)
PRINT '('' IRC FROM CDPREF1 '',I5)',IRC
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
PRINT '('' IRC FROM CDOPEN1 '',I5)',IRC
*
LUNCD=3
LUNFZ=4
CALL CDPREF(10,'C2',CHTOP,CHFILE,IRC)
PRINT '('' IRC FROM CDPREF2 '',I5)',IRC
LRECL = 0
CALL CDOPEN(LUNCD,LUNFZ,CHTOP,CHFILE,LRECL,IDIV,' ',IRC)
PRINT '('' IRC FROM CDOPEN2 '',I5)',IRC
C
C--- COPY OBJECTS
C
C DO IDX=1,NDX
DO J=1,5
WRITE(CHPATH,'(A7,I4)') CHPAT1,KEY(J)
DNAME = '//CDCH/GEOMETRY/'//DITAG(JP(J))
NRUN = 1
CALL CDUSE(CHPATH,JKEY,NRUN,'N',IRC)
JADDR = LQ(JKEY-1)
PRINT '(A30)',CHPATH
PRINT '(I8)',JADDR
PRINT '('' IRC FROM CDUSE '',I5)',IRC
KEYDBS(11) = 1
KEYDBS(12) = 999999
CALL CDSTOR(DNAME,JADDR,LDUMI,IDIV,KEYDBS,' ',IRC)
PRINT '('' IRC FROM CDSTOR '',I5)',IRC
END DO
C
C--- TERMINATION
C
CALL CDEND(' ','A',IRC)
END
\end{XMPt}
\Filename{H1hdbappen-HEPDB-compression-algorithms}
\chapter{HEPDB compression algorithms}
Three methods of packing are used by HEPDB when storing data.
These are the {\bf differences} method,
the {\bf delta} method, and the {\bf packing} method.
\Filename{H2hdbappen-The-delta-packing-method}
\section{The delta packing method}
In the delta packing method, the user has to supply a
floating point threshold (delta). All numbers whose absolute
value lies below this threshold are treated as zero and suppressed.
The output bank has 3 extra words.
\begin{OL}
\item The number of data words in the original bank
\item delta
\item the number of words stored after the zero-suppression.
\end{OL}
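As a purely illustrative example, suppose a bank of four floating point
words \Lit{(0.005, 1.2, -0.003, 3.4)} is stored with \Lit{delta = 0.01}.
The first and third values lie below \Lit{delta} in absolute value and are
suppressed, so the three extra words would contain \Lit{4} (the original
number of data words), \Lit{0.01} (delta) and \Lit{2} (the number of words
kept after the zero-suppression).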
\Filename{H2hdbappen-The-packing-method}
\section{The packing method}
In the packing method, all numbers are first converted
to integers by multiplying the original number by \Lit{10**IPREC},
where \Lit{IPREC} is a user-specified quantity.
They are then truncated at the decimal point.
The minimum offset to make all numbers positive is then found and added
to each value.
The optimum packing factor is then determined.
This
factor is chosen so that the minimum number of words is used after packing.
The data is then packed, except for those words which cannot be stored
with the chosen packing factor, which are stored in 32 bit words.
Here again 3 extra words are used to keep useful information, as follows:
\begin{OL}
\item The number of data words \Lit{*10**4 + original data type*1000 + (IPREC+100)}
The data type is indicated as follows:
\begin{DLtt}{1}
\item[2]Integer
\item[3]Floating point
\end{DLtt}
\item The offset to make all the numbers positive
\item A packed word with bits \\
\Lit{1-26} = number of words stored as 32-bit words,\\
bits \Lit{27-31} = number of bits used in packing,\\
bit \Lit{32} set to distinguish from the other methods.
\end{OL}
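Purely as an illustration of the scheme above, consider three floating
point values \Lit{1.23}, \Lit{-0.47} and \Lit{2.85} stored with
\Lit{IPREC = 2}. Multiplication by \Lit{10**2} and truncation gives the
integers \Lit{123}, \Lit{-47} and \Lit{285}; adding the offset \Lit{47}
makes all values positive (\Lit{170}, \Lit{0}, \Lit{332}), and the largest
of these fits into 9 bits, which determines the packing factor. The first
extra word would then contain \Lit{3*10**4 + 3*1000 + (2+100) = 33102} and
the second the offset \Lit{47}.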
\Filename{H2hdbappen-The-differences-method}
\section{The differences method}
In addition to the above two methods, one can also store
the difference of the current object from a master.
A comparison is made with the five nearest
neighbours which are not themselves updates. The differences are then taken
against the neighbour for which the minimum number of words is required.
The user may select that the user keys must also match during the search
for nearest neighbours.
\Filename{H1hdbappen-Extraction-and-Merging-of-database-records}
\chapter{Extraction and Merging of database records}
\Filename{H2hdbappen-Copying-database-records-from-one-to-another-database}
\section{Copying database records from one to another database}
There will probably be a need to extract only a subset of the contents
of a database to form a smaller private database. For maintaining the
main database, the manager will have to be able to merge new or updated
records from privately created database files into the main database,
and there will also be the need to create `snapshot' records of the
status of the current valid records in the database.
Routine \Rind{HDBEXTR} serves these purposes.
\Rind{HDBEXTR} extracts records from the subdirectory (or subdirectories)
specified by the path \Rarg{PATHI}, and also (if \Rarg{PATHI} only leads to a
directory, not a subdirectory) the \Rarg{NSECD}-element \Lit{CHARACTER} vector
\Rarg{CHSDIR}, which contains the subdirectory names (optionally, records
from all subdirectories are extracted). The extracted records are
valid for the period of validity specified in the integer vector
\Rarg{KEYS}. If the \Ropt{W} option is not requested and there is more than
one valid current record of the appropriate type in the range,
a series of summary records is created, each of which compresses the
information from the chain of records that apply to a part of
the period of validity. The extracted record is written into
the corresponding subdirectory in the file with top-directory name
\Rarg{PATHO}.
There is also additional information written which is needed for
book-keeping when databases are merged
(see also option \Ropt{E} in \Rind{HDBSTOR} and \Rind{HDBSTOM}).
If the user only wants a copy of the database records he can
choose option \Ropt{M}, which suppresses the storing of merge information.
By default a brief summary of all extracted records is printed, unless
option \Ropt{N} is specified. If the user also wishes to write the
extracted records to an FZ file, he should specify the option \Ropt{X};
note that the appropriate FZ calls must have been
issued before calling \Rind{HDBEXTR}.
\Shubr{HDBEXTR}{(PATHI,PATHO,KEYS,CHSDIR,NSECD,CHOPT,IRC*)}
\begin{DLtt}{1234567}
\item[PATHI]Full pathname of directory to be extracted.
\item[PATHO]Name of top directory of the output database.
This database should not yet contain any of the
subdirectories that are to be copied from the main database, unless
option \Ropt{O} is requested.
\item[KEYS]Vector specifying the required validity range.
\item[CHSDIR]Character array of subdirectories to be extracted
\item[NSECD]Number of elements in array \Rarg{CHSDIR}
\item[CHOPT]Character variable containing a string of single letter options
\begin{DLtt}{123}
\item[' ']directory specified by \Rarg{PATHI} will be extracted
\item['A']All subdirectories of the specified path will be extracted
\item['C']extract only records with key status bit copy set
\item['I']extend the validity range beyond one experiment number
\item['M']no merge information will be stored
\item['N']no summary will be printed
\item['S']a selected subset of subdetector subdirectories
(specified in \Rarg{CHSDIR}) will be extracted
\item['O']subdetector-subdirectories in \Rarg{PATHO} may exist
\item['W']all of the stored records are
copied to the auxiliary database. However,
the key word containing the pointer to the
logging will be changed. The input values of \Rarg{KEYS} are ignored.
\item['X']write extracted records also to an FZ file,
where the FZ logical unit number is taken to be
the logical unit number used for \Rarg{PATHO} plus 10
\end{DLtt}
\index{return code!{\tt IRC}}
\item[IRC]Return status
\begin{DLtt}{1}
\item[0]Normal completion
\end{DLtt}
\end{DLtt}
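The following fragment sketches a possible calling sequence for
\Rind{HDBEXTR}. The pathnames, the output top directory \Lit{//CDX1},
the subdirectory names and the placement of the validity range within
\Rarg{KEYS} (taken here to follow the convention of the earlier
\Rind{CDSTOR} examples) are illustrative assumptions only; the source
and output database files are assumed to have been opened beforehand
as in the earlier examples.
\begin{XMPt}{Illustrative calling sequence for {\tt HDBEXTR}}
      PROGRAM CDEXTX
*
*     Extract two subdetector subdirectories into a private
*     database (illustrative names and validity range only)
*
      DIMENSION KEYS(100)
      CHARACTER*8 CHSDIR(2)
      DATA CHSDIR /'DC_ST','DC_LT'/
*
*     ... initialise Zebra/HEPDB and open the source and output
*     database files as in the previous examples ...
*
      CALL VZERO(KEYS,100)
*
*     Validity range (placement in KEYS assumed as for CDSTOR)
*
      KEYS(11) = 1
      KEYS(12) = 999999
*
*     Extract the selected subdirectories (option S)
*
      CALL HDBEXTR('//CDCD/CALIBRATION','//CDX1',KEYS,CHSDIR,2,
     +             'S',IRC)
      PRINT *,'IRC FROM HDBEXTR ',IRC
      END
\end{XMPt}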
\Filename{H2hdbappen-Merging-databases-or-database-records}
\section{Merging databases or database records}
The problem of merging records into the main database in a
coherent manner is not a trivial one. In particular, care
must be taken that conflicting updates are not entered into
the main database. The routine \Rind{HDBMERG} described below
performs checks to ensure that a record to be merged into
the database does not conflict with any updates already
in the database that have been made since the information
upon which the new record is based was extracted from the
main database.
\par
Routine \Rind{HDBMERG} will merge a database or records from a database with a
pre-existing `target' database. Depending on the options selected,
all subdetector subdirectories or only a selected few may have their
records merged. The subdetector-subdirectories to be merged are either
specified by a full pathname in \Rarg{PATHI}, or else a path to the
relevant subdetector-directory in \Rarg{PATHI} and a list of the selected
subdetector-subdirectory names in the \Rarg{NSECD} elements of the \Lit{CHARACTER}
vector \Rarg{CHSDIR}. The ``target'' database is
identified by the top-directory name given in \Rarg{PATHO}.
The database which contains the records to be merged can be created
with the \Rind{HDBEXTR} routine or with the routine \Rind{HDBNEW}. If the
database has been created with the \Rind{HDBEXTR} routine only the records which
have been stored in this database after calling \Rind{HDBEXTR} will be merged
in. In the other case all records will be merged in. If the option \Ropt{X}
is specified, the merged records are also written to an FZ file; note
that the appropriate FZ calls must have been issued before
calling \Rind{HDBMERG}.
By default a brief summary of all merged records is printed. This
can be suppressed with option \Ropt{N}. For expert use only there is the
facility to change the range of applicability of the records on merging
(option \Ropt{R}).
\Shubr{HDBMERG}{(PATHI,PATHO,KEYS,CHSDIR,NSECD,CHOPT,IRC*)}
\begin{DLtt}{1234567}
\item[PATHI]Full pathname of the directory whose records are to be merged.
\item[PATHO]Name of top directory of the output database.
This database should not yet contain any of the
subdirectories that are to be copied from the main database, unless
option \Ropt{O} is requested.
\item[KEYS]Vector specifying the required validity range.
\item[CHSDIR]Character array of subdirectories to be merged
\item[NSECD]Number of elements in array \Rarg{CHSDIR}
\item[CHOPT]Character variable containing a string of single letter options
\begin{DLtt}{123}
\item[' ']directory specified by \Rarg{PATHI} will be merged
\item['A']All subdirectories of the specified path will be
merged
\item['C']merge only records with key status bit copy set
\item['I']extend the validity range beyond one experiment number
\item['M']no merge information will be stored
\item['N']no summary will be printed
\item['S']a selected subset of subdetector subdirectories
(specified in \Rarg{CHSDIR}) will be merged
\item['O']subdetector-subdirectories in \Rarg{PATHO} may exist
\item['R']the new period of validity given by the
integer vector \Rarg{KEYS} is used. This
may only be used for database records created by
the \Rind{HDBEXTR} routine and with option \Ropt{W} not requested.
\item['X']write merged records also to an FZ file,
where the FZ logical unit number is taken to be
the logical unit number used for \Rarg{PATHO} plus 10.
\end{DLtt}
\index{return code!{\tt IRC}}
\item[IRC]Return status
\begin{DLtt}{12}
\item[0]Normal completion
\end{DLtt}
\end{DLtt}
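By way of illustration only, the following fragment merges the records
previously extracted into the top directory \Lit{//CDX1} back into the
main database; the pathnames are assumptions, and both database files
are taken to have been opened as in the earlier examples.
\begin{XMPt}{Illustrative calling sequence for {\tt HDBMERG}}
      PROGRAM CDMRGX
*
*     Merge previously extracted records back into the main
*     database (illustrative names only)
*
      DIMENSION KEYS(100)
      CHARACTER*8 CHSDIR(1)
*
*     ... open the source and target database files as in the
*     previous examples ...
*
      CALL VZERO(KEYS,100)
*
*     Merge all subdirectories below the given path (option A);
*     CHSDIR is not used in this case
*
      CALL HDBMERG('//CDX1/CALIBRATION','//CDCD',KEYS,CHSDIR,0,
     +             'A',IRC)
      PRINT *,'IRC FROM HDBMERG ',IRC
      END
\end{XMPt}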
\Filename{H2hdbappen-Transferring-datastructures-to-and-from-FZ-files}
\section{Transferring datastructures to and from FZ files}
RZ files are not in themselves transportable between computers, unless
they are written in exchange format (which is the default for
HEPDB RZ files).
They may also be translated into FZ file format for
transportation. This process is possible for a whole database, a
directory sub-tree or a single database record. It is possible
to store the validity information in the record headers or in
a special start-of-run record.
The records are written to an FZ file using the routine \Rind{HDBTOFZ}, and
may be read back into a database using the routine \Rind{HDBFRFZ}. It is
also possible to store the contents of some other FZ file in a
specified subdirectory using the routine \Rind{HDBFRFZ}. The routine
\Rind{HDBFZM} can only be used for FZ files which have been produced by the
routines \Rind{HDBEXTR} or \Rind{HDBMERG} with the option \Ropt{X} requested,
because of the special FZ file organisation.
\Filename{H2hdbappen-Converting-database-records-into-an-FZ-file}
\section{Converting database records into an FZ file}
\Shubr{HDBTOFZ}{(PATH,LUN,ISEL,CHOPT,IRC*)}
\begin{DLtt}{1234567}
\item[PATH]Pathname of the directory from where the data are
to be retrieved.
\item[LUN]Logical unit number for the FZ file.
\item[CHOPT]Character variable containing a string of single character options
\begin{DLtt}{1234}
\item[' ']Write retrieved record to a FZ file.
\item['C']Write pathname into header vector (not valid for
\Ropt{R} option).
\item['P']Write period of validity and \Lit{NSTYPE} in header vector.
\item['R']Write start-of-run record.
\item['RP']Write start-of-run record with header containing
the period of validity.
\item['W']Writes the directory and all subdirectories to
a FZ file (using \Rind{RZTOFZ}). Other options are ignored.
\end{DLtt}
\index{return code!{\tt IRC}}
\item[IRC]Return status
\begin{DLtt}{1}
\item[0]Normal completion
\end{DLtt}
\end{DLtt}
Information from a HEPDB database may be copied to a ZEBRA FZ sequential
file using the routine \Rind{HDBTOFZ}. This routine can output a single
datastructure, or the contents of a subdirectory or directory tree, with optional
selection on period of validity.
The required \Rind{OPEN} and \Rind{FZFILE} statements must have
previously been issued for the FZ file before calling this routine
(see, for example, \Lit{PROGRAM HDBTOFZ} in the HEPDB PAM file).
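As a sketch only, the following fragment writes one directory tree to
an FZ file with option \Ropt{W}. The file name, the unit number and the
\Rind{OPEN} and \Rind{FZFILE} options (alphanumeric exchange format is
assumed here) are illustrative and should be adapted as required;
termination of the FZ file is omitted.
\begin{XMPt}{Illustrative calling sequence for {\tt HDBTOFZ}}
      PROGRAM CDTOFX
*
*     Write a directory tree to an FZ file (sketch only; the
*     file name, unit and FZFILE options are assumptions)
*
      DIMENSION ISEL(100)
      LUNFZ = 11
      OPEN(LUNFZ,FILE='calib.fz',STATUS='NEW',FORM='FORMATTED')
      CALL FZFILE(LUNFZ,0,'AO')
*
*     ... open the database as in the previous examples ...
*
      CALL VZERO(ISEL,100)
*
*     Write the directory and all its subdirectories (option W);
*     ISEL is assumed not to be used in this case
*
      CALL HDBTOFZ('//CDCD/CALIBRATION',LUNFZ,ISEL,'W',IRC)
      PRINT *,'IRC FROM HDBTOFZ ',IRC
      END
\end{XMPt}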
\newpage
\Filename{H2hdbappen-Reading-database-records-from-FZ-files}
\section{Reading database records from FZ files}
\Shubr{HDBFRFZ}{(PATH,LUN,NSKIP,KEYS,CHOPT,IRC*)}
\begin{DLtt}{1234567}
\item[PATH]Pathname of the directory into which the records are
to be copied.
\item[LUN]Logical unit number for the FZ file.
\item[NSKIP]Skip \Rarg{NSKIP} records before reading.
If option \Ropt{R} is chosen the
start-of-run record is not included in \Rarg{NSKIP}.
\item[KEYS]Vector of \Rarg{KEYS} containing the period of validity.
\item[CHOPT]Character variable containing a string of single character options
\begin{DLtt}{1234}
\item[' ']No start-of-run record and take period
of validity from input. Store as base record.
\item['A']Create the directory specified by \Rarg{PATH} before
reading the FZ file (cannot create a new file
if a whole database is to be loaded).
\item['B']Store as base record.
\item['C']Header contains the pathname, i.e. first header
contains length of the pathname (max. 40 words)
and the following words contain the pathname.
However, the pathname cannot be written into the start-of-run header.
\item['H']Take period of validity from the first 6 words of each header.
If option \Ropt{P} is chosen take
sub-run type from the seventh word of the header.
\item['I']extend the validity range beyond one experiment number
\item['P']Store as partial record.
\item['R']FZ file begins with start-of-run record.
\item['RH']As option \Ropt{R} and take period of validity from
start-of-run header as is described for option \Ropt{H}.
\item['T']Take time stamp from input \Rarg{KEYS} vector
\item['W']The whole directory is loaded via call to \Rind{RZFRFZ}.
\end{DLtt}
\index{return code!{\tt IRC}}
\item[IRC]Return status
\begin{DLtt}{12}
\item[0]Normal completion
\end{DLtt}
\end{DLtt}
An FZ file created using the routine \Rind{HDBTOFZ} or the routines
\Rind{HDBMERG} or \Rind{HDBEXTR} with the {\tt X} option may
be read using the routine \Rind{HDBFRFZ}.
In the special case that the FZ file contains the contents of a complete database,
using the option \Ropt{W} in \Rind{HDBTOFZ}, a new database file should be
created using the routine \Rind{HDBNEW} before calling this routine.
In all other cases, the data is entered into the subdirectory specified
by the path \Rarg{PATH}, or else the subdirectory tree stored is added
at the level specified by the path \Rarg{PATH}.
This routine may also be used to process FZ files that are not written
by one of the HEPDB routines. For example, an FZ file containing a single
datastructure may be read in and entered into the database in the directory
specified by the variable \Rarg{PATH}
with the period of validity as specified by \Rarg{KEYS}.
Clearly, if the FZ file contains a database or directory tree, the
validity of the recovered records is the same as those originally
stored. However, when the FZ file contains a single record extracted
from a database previously, or a completely new ZEBRA structure, then
the validity may be specified. If the validity has been stored with the
record, then this may be retrieved and applied. If the record contains
a start-of-run record, the option \Ropt{R} must be used.
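The corresponding read-back is sketched below under the same
assumptions about the file name, unit number and \Rind{FZFILE} options;
the target top directory \Lit{//CDX2} is hypothetical, and \Rarg{NSKIP}
and \Rarg{KEYS} are assumed not to be used when option \Ropt{W} is
chosen.
\begin{XMPt}{Illustrative calling sequence for {\tt HDBFRFZ}}
      PROGRAM CDFRFX
*
*     Read a directory tree back from an FZ file (sketch only;
*     the file name, unit and FZFILE options are assumptions)
*
      DIMENSION KEYS(100)
      LUNFZ = 11
      OPEN(LUNFZ,FILE='calib.fz',STATUS='OLD',FORM='FORMATTED')
      CALL FZFILE(LUNFZ,0,'AI')
*
*     ... open the target database as in the previous examples ...
*
      CALL VZERO(KEYS,100)
      NSKIP = 0
*
*     Add the stored directory tree at the given level (option W)
*
      CALL HDBFRFZ('//CDX2',LUNFZ,NSKIP,KEYS,'W',IRC)
      PRINT *,'IRC FROM HDBFRFZ ',IRC
      END
\end{XMPt}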
\newpage
\Filename{H2hdbappen-Merging-FZ-files-written-by-HDBEXTR-or-HDBMERG}
\section{Merging FZ files written by \protect\Rind{HDBEXTR} or \protect\Rind{HDBMERG}}
FZ files written by \Rind{HDBEXTR} or \Rind{HDBMERG} with the
option \Ropt{X} may be processed with the routine \Rind{HDBFZM}.
This routine decodes this FZ file structure and merges the
corresponding information into the database. The user must issue
the appropriate \Rind{OPEN} statement and call to the routine \Rind{FZFILE}
before calling \Rind{HDBFZM}.
\Shubr{HDBFZM}{(PATH,LUN,CHOPT,IRC*)}
\begin{DLtt}{1234567}
\item[PATH]Top directory name of the database into which the records
are to be copied.
\item[LUN]Logical unit number for the FZ file.
\item[CHOPT]Character variable containing a string of single character options
\begin{DLtt}{123}
\item[' ']Copy records, skip directories which are not
present, and print a summary.
\item['A']Add directories which are not present.
\item['N']No summary will be printed.
\end{DLtt}
\item[IRC]Integer return code
\index{return code!{\tt IRC}}
\begin{DLtt}{1}
\item[0]Normal completion
\end{DLtt}
\end{DLtt}
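A minimal sketch of its use follows; the FZ file is assumed to have
been produced with the \Ropt{X} option of \Rind{HDBEXTR} or
\Rind{HDBMERG}, and the file name, unit number and \Rind{FZFILE}
options are again illustrative assumptions.
\begin{XMPt}{Illustrative calling sequence for {\tt HDBFZM}}
      PROGRAM CDFZMX
*
*     Merge an FZ file written by HDBEXTR or HDBMERG (option X)
*     into a database (file name, unit and FZFILE options are
*     assumptions)
*
      LUNFZ = 11
      OPEN(LUNFZ,FILE='extract.fz',STATUS='OLD',FORM='FORMATTED')
      CALL FZFILE(LUNFZ,0,'AI')
*
*     ... open the target database as in the previous examples ...
*
*     Add any directories not yet present and print the summary
*
      CALL HDBFZM('//CDCD',LUNFZ,'A',IRC)
      PRINT *,'IRC FROM HDBFZM ',IRC
      END
\end{XMPt}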
\Filename{H1hdbappen-Updating-HEPDB-databases}
\chapter{Updating HEPDB databases}
Several methods exist for updating HEPDB databases. Normally,
updates are not applied directly to the database itself but
queued to a dedicated server. The client-server communication
also has several variants, the most important of which is
communication via files.
Only this method will be described here as it is the only one
enabled in the standard CERNLIB distribution of the package.
When a user accesses a HEPDB database a journal file is opened
in which database modifications are written. This journal file
is created in the user's directory and is in Zebra FZ alpha exchange
format. This file is then moved to a queue directory upon request,
when the user starts to modify a different
database or when the database is closed. The queue directory is
defined by a configuration file which is described in detail below.
The journal file name contains information on which database
the modifications are for and the user and node name from which
the update originated.
The above scheme works well in both localised and distributed
environments and is designed with file systems such as NFS and
DFS in mind. In the case of nodes which do not offer network
file systems, journal files are transferred using the
CERN Program Library package CSPACK~\cite{bib-CSPACK}.
\chapter{Access control}
By default, all users may access any database in read mode.
Updates are also possible, but are not performed directly,
but placed in a queue where they will be handled by a server.
One may use standard file permissions to control read and write
access to a database (using write access to the queue directory
in case of updates.) Alternatively, the {\bf :read} and {\bf :write}
tags, described on page \pageref{sect-NAMES}, may be used.
\Filename{H1hdbappen-Creating-a-new-database}
\chapter{Creating a new database}
The following example shows how to create a new database file.
\begin{XMPt}{Creating a new database file}
PROGRAM CDEXA1
* ==============
*
* Create a new, empty database
*
PARAMETER (NWPAW=100000)
COMMON/PAWC/PAW(NWPAW)
*
* Initialise Zebra, HBOOK and HEPDB
*
CALL CDPAW(NWPAW,NHBOOK,IDIV,'USR-DIV',5000,50000,'ZPHU',IRC)
*
* Unit for database access
*
LUNCD = 1
*
* Database parameters
*
      NPAIR = 1
      NREC = 20000
      NPRE = 100
      NTOP = 1
      NQUO = 65000
*
* Accept default record length (1024 words)
*
      LRECL = 0
      CALL CDNEW(LUNCD,'HEPDB','HEPDB.DBS',IDIV,NPAIR,NQUO,NREC,NPRE,
     +           NTOP,LRECL,' ',IRC)
*
* Set the log level
*
CALL CDLOGL(' ',3,'A',IRC)
*
* Terminate
*
CALL CDEND(' ','A',IRC)
END
\end{XMPt}
The same result can be achieved by running the {\tt CDMAKE} program, e.g.
using the following script.
\begin{XMPt}{Script to run CDMAKE program}
#
# Make a new database
#
# export CDFILE='name of the database file'
export CDFILE='test.dbs'
# export CDPAIR='number of validity range pairs'
export CDPAIR=1
# export CDPRE='number of records to be preformatted'
export CDPRE=100
# export CDTOP='numeric ID for database'
export CDTOP=1
# export CDQUO='number of records for database quota'
export CDQUO=65000
# export CDRECL='record length of database file (words)'
export CDRECL=1024
#
# now run the job
#
/cern/pro/bin/cdmake
\end{XMPt}
\Filename{H1hdbappen-Managing-the-database-servers}
\chapter{Managing the database servers}
\label{sect-NAMES}
\index{names file}
\index{HEPDB NAMES}
Once the database file has been created, the server must be configured
for this file. This is done using a \Lit{NAMES} file, as follows.
\begin{XMPt}{Names file entries for a database file (hepdb.names)}
:nick.config
:list.ge au
:log./hepdb/cplear/logs
:queue./hepdb/cplear/queue
:todo./hepdb/cplear/todo
:save./hepdb/cplear/save
:bad./hepdb/cplear/bad
:loglevel.3
:wakeup.60
:servers.cernvm vxcpon hepdb
:nick.ge
:file./hepdb/cplear/database/geo.dbs
:servers.vxcpon hepdb cernvm
:desc.Geometry database for the CPLEAR experiment
:nick.au
:file./hepdb/cplear/database/aux.dbs
:servers.vxcpon cernvm
:desc.Auxiliary database for the CPLEAR experiment
:read.*
:write.phr cpb
:nick.hepdb
:userid.cdcplear
:node.hepdb
:localq./hepdb/l3/todo
:nick.vxcpon
:userid.cdcplear
:node.vxcpon
:queue.disk$db:[cdcplear.todo]
:protocol.tcpip
:localq./hepdb/cplear/tovxcpon
:nick.cernvm
:userid.cdcplear
:node.cernvm
:queue./hepdb/cplear/tocernvm
:protocol.tcpip
:localq./hepdb/cplear/tocernvm
\end{XMPt}
The various tags in the preceding names file have the following meanings.
\begin{DLtt}{1234567890}
\item[CONFIG]Configuration details for the server, as follows.
\begin{DLtt}{1234567890}
\item[LIST]A list of two character database prefixes
\item[LOG]The directory where the server logs are written
\item[QUEUE]The directory where new updates are placed by HEPDB clients
\item[TODO]The directory scanned by the HEPDB servers for updates to
process. In the case of {\tt MASTER} servers, the {\tt todo}
and {\tt queue} directories are the same. In the case of {\tt SLAVE}
servers, these queues are different.
\item[BAD]The directory where the server places {\tt bad} updates.
Bad updates are files for which the corresponding database cannot be
found, or updates which cannot be successfully processed by the
database server.
\item[SAVE]The directory where the server saves updates after processing
\item[LOGLEVEL]The log level for the server
\item[WAKEUP]The wakeup interval in seconds for the server
\item[SERVERS]This is the {\it or} of the list of servers for the
individual databases. The database servers are responsible for
moving updates to the local queues for the remote servers. A separate
process, CDMOVE, is responsible for moving the processes between different
systems.
\end{DLtt}
\item[prefix]The two character database prefix, e.g. \Lit{aa}.
\begin{DLtt}{1234567890}
\item[FILE]The full name of the database file. For VM/CMS systems,
the syntax is {\tt <user.address>filename.filetype}
\item[DESC]A comment string identifying the database and/or its purpose
\item[SERVERS]The list of remote servers for this database. Each node
in this list {\bf must} also be in the list for the :nick.config entry.
\item[READ]A list of users who may read the database. An asterisk
grants read access to all users. If this tag is not present,
read access control is not performed.
\item[WRITE]A list of users who may update the database.
Users with write access automatically gain read access.
If this tag is not present, write access control is not performed.
\end{DLtt}
\item[server]The nickname of a server, e.g. \Lit{aa1}.
\begin{DLtt}{1234567890}
\item[USERID]Userid under which the server runs on the remote node
\item[NODE]Node on which the server runs
\item[QUEUE]Input queue on the remote node
\item[PROTOCOL]Method by which updates are transmitted
\item[LOCALQ]The local directory where updates are written pending
transmission to the remote node. This may, in fact, be the same
as \Lit{QUEUE}, e.g. when the directory is accessible via NFS
or AFS.
\index{NFS}\index{AFS}%
\end{DLtt}
\end{DLtt}
\Filename{H2hdbappen-Master-and-slave-database-servers}
\section{Master and slave database servers}
Objects entered into a {\tt HEPDB} database are assigned a unique
key within the directory into which they are inserted (the key serial number)
and are stamped with the insertion date and time. It is important
that these values are the same in all copies of the database. This is
achieved by assigning these values centrally. The node on which the
so-called {\tt master} server runs may be different for each experiment
but will typically be at the laboratory where the experiment is being
conducted. At CERN, a dedicated system has been set up to host the
master database servers. This is node {\tt hepdb}.
Master and slave servers operate identically. The only difference
lies in the names file (hepdb.names) that drives them. Updates are always queued
by the {\tt HEPDB} client software into the directory pointed to
by the {\tt :queue} tag in the names file, as described above.
The servers scan the directory pointed to by the {\tt :todo} tag
for outstanding updates. In the case of the master server, the
{\tt :queue} and {\tt :todo} directories are the same. In all other
cases it is a separate process that performs the automatic distribution
of updates between servers. In the case of distributed file systems
such as {\tt afs}, this operation is trivial. In other cases
{\tt TCP/IP}, {\tt Bitnet}, {\tt DECnet} or another transport mechanism is used.
The updates are stamped with the user and node name of origin. This
allows the servers to avoid forwarding updates back to their node
of origin.
\Filename{H1hdbappen-Managing-HEPDB-servers-at-CERN}
\chapter{Managing HEPDB servers at CERN}
\label{CERN-servers}
The following sections describe the setup of HEPDB at CERN.
In general, the descriptions are also valid for other sites.
\Filename{H2hdbappen-Creating-a-new-server-on-CERNVM}
\section{Creating a new server on CERNVM}
On CERNVM a dedicated account is used per experiment.
Thus for {\tt CPLEAR} we have the account {\tt CDCPLEAR},
for {\tt CHORUS} we have {\tt CDCHORUS}. These accounts
are created using the standard {\tt USERREG} procedure.
Each account has 3 mini-disks plus a link to the {\tt 191}
disk of the {\tt HEPDB} machine. The latter is used to
store the various EXECs that are required for the servers,
to avoid cluttering up the {\tt CERNLIB} disks.
The various disks are used as follows:
\begin{DLtt}{1234}
\item[191]Disk for the database files and log files
\item[192]Link to {\tt HEPDB 191}
\item[193]Disk for journal files
\item[194]Disk for {\tt bad} files, i.e. those that
cannot be successfully processed.
\end{DLtt}
In addition there is a special server named {\tt CDHEPDB}.
This is used to exchange journal files between node {\tt hepdb}
and {\tt CERNVM}. It is also used as a gateway to remote {\tt Bitnet}
sites. Thus, in the case of {\tt CPLEAR}, updates from the
{\tt VM/CMS} systems in Lyon, Saclay and Rutherford are first
sent via {\tt CDHEPDB} to the master server on node {\tt hepdb}.
Once a unique key serial number and the insertion date and time
has been allocated, the new journal files are then resent
to the slave servers on those nodes and CERNVM.
The database servers are autologged by the machine {\tt FATONE},
which also controls the {\tt FATMEN} servers.
\Filename{h2hdbappen-Update-flow-between-CERNVM-and-HEPDB}
\section{Transfer of updates between CERNVM and HEPDB}
Updates are transferred between CERNVM and HEPDB by a dedicated service
machine running under the account CDHEPDB. This machine keeps
a TCP/IP connection open between the two nodes. Upon startup, it builds
a list of HEPDB servers on the HEPDB node and transfers any pending
updates. These updates are then sent to the appropriate server on
CERNVM, or to a distribution list. The following example shows a
NAMES file (hepdb.names) entry that causes
updates to be distributed to multiple VM/CMS systems.
\begin{XMPt}{Sending updates to multiple VM/CMS systems}
:nick.CDCPLEAR
:list.cdcplear at frcpn11 cdcplear at cernvm
\end{XMPt}
When an update is received by this service machine, it is immediately
transferred to HEPDB and a scan is then made for updates pending for CERNVM.
\begin{XMPt}{Code for CDHEPDB service machine}
PROGRAM CDHEPDB
*CMZ : 21/02/91 16.24.17 by Jamie Shiers
*-- Author : Jamie Shiers 21/02/91
* Program to move updates between CERNVM and HEPDB
*
* Stolen from FATMEN.
*
PARAMETER (NDIR=100)
CHARACTER*255 CHDIRS(NDIR)
PARAMETER (NMAX=500)
CHARACTER*64 FILES(NMAX)
CHARACTER*8 HEPUSR,HEPNOD,REMUSR,REMNOD,REMDBS
CHARACTER*64 REMOTE,TARGET
CHARACTER*12 CHTIME
CHARACTER*8 CHUSER,CHPASS
CHARACTER*8 CHNODE,CHTYPE,CHSYS,CHRAND
CHARACTER*6 CHENT
CHARACTER*80 CHMAIL,LINE,CHDIR
CHARACTER*38 VALID
CHARACTER*255 ERRMSG
CHARACTER*2 CDPREF
CHARACTER*255 CDFILE
COMMON/PAWC/PAW(50000)
PARAMETER (IPRINT=6)
PARAMETER (IDEBUG=0)
PARAMETER (LUNI=1)
PARAMETER (LUNO=2)
+CDE,QUEST.
+CDE,SLATE.
DATA NENTRY/0/
DATA VALID/'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890._'/
*
* Initialise ZEBRA
*
CALL HLIMIT(50000)
*
* Initialise XZ
*
CALL XZINIT(IPRINT,IDEBUG,LUNI,LUNO)
*
CALL CDHOST(CHNODE,IRC)
LNODE = LENOCC(CHNODE)
*
* Open connection to HEPDB...
*
+SELF,IF=TCPSOCK.
IDUMMY = CINIT(IDUMMY)
+SELF,IF=-TCPSOCK.
CALL VMREXX('F','USER',CHUSER,IC)
CALL VMREXX('F','PWD' ,CHPASS,IC)
CALL CUTOL(CHUSER)
CALL CUTOL(CHPASS)
CALL VMSTAK(CHPASS,'L',IC)
CALL VMSTAK(CHUSER,'L',IC)
+SELF.
CALL CZOPEN('zserv','HEPDB',IRC)
*
* First entry: look on hepdb before sleeping
*
NDIRS = 0
GOTO 20
10 CALL VMCMS('EXEC HDBSERV',IRC)
IF(IRC.EQ.99) GOTO 20
IF(IRC.NE.0) THEN
PRINT *,'CDHEPDB. error ',IRC,' from HDBSERV. Stopping...'
GOTO 90
ENDIF
NENTRY = NENTRY + 1
*
* Get the user and node name for this file...
*
CALL VMCMS('GLOBALV SELECT *EXEC STACK HDBADDR',IC)
CALL VMRTRM(LINE,IEND)
ISTART = ICFNBL(LINE,1,IEND)
CALL CDWORD(HEPUSR,0,' ',LINE(ISTART:IEND),IC)
LHEP = LENOCC(HEPUSR)
CALL CDWORD(HEPNOD,1,' ',LINE(ISTART:IEND),IC)
LNOD = LENOCC(HEPNOD)
*
* Get file name (for database prefix and name of remote server)
*
CALL VMCMS('GLOBALV SELECT *EXEC STACK CDFILE',IC)
CALL VMRTRM(CDFILE,LFILE)
CDPREF = CDFILE(1:2)
LBLANK = INDEX(CDFILE(1:LFILE),' ')
JBLANK = INDEXB(CDFILE(1:LFILE),' ')
REMDBS = CDFILE(LBLANK+1:JBLANK-1)
LDBS = JBLANK - LBLANK - 1
IF(IDEBUG.GE.1)
+PRINT *,'CDHEPDB. Update received for ',REMDBS(1:LDBS),' prefix ',
+ CDPREF
*
* Number of pending files
*
CALL VMCMS('GLOBALV SELECT *EXEC STACK HDBFILES',IC)
CALL VMRTRM(LINE,IEND)
NFILES = ICDECI(LINE,1,IEND)
CALL DATIME(ID,IT)
WRITE(CHTIME,'(I6.6,I4.4,I2.2)') ID,IT,IS(6)
WRITE(CHENT ,'(I6.6)') NENTRY
CALL CDRAND(CHRAND,IRC)
*
* Now put this file...
* This assumes the HEPDB naming convention: /hepdb/cdgroup,
* e.g. /hepdb/cdchorus
CHDIR = '/hepdb/'//REMDBS(1:LDBS)//
+ '/todo'
LDIR = LENOCC(CHDIR)
*
REMOTE = ' '
REMOTE = 'zz'//CHTIME//CHRAND//CHENT
+ //'.'//HEPUSR(1:LHEP)//'_'//HEPNOD(1:LNOD)
LREM = LENOCC(REMOTE)
TARGET = REMOTE(1:LREM)
*
* Change remote directory
*
CALL CUTOL(CHDIR(1:LDIR))
IF(IDEBUG.GE.1) PRINT *,'CDHEPDB. Changing remote directory to ',
+ CHDIR(1:LDIR)
CALL XZCD(CHDIR(1:LDIR),IRC)
IF(IDEBUG.GE.1) PRINT *,'CDHEPDB. Sending file as ',
+ REMOTE(1:LREM)
CALL XZPUTA(CDFILE(1:LFILE),REMOTE(1:LREM),' ',IC)
IF(IC.NE.0) THEN
WRITE(ERRMSG,9001) IC,HEPUSR,HEPNOD
9001 FORMAT(' CDHEPDB. error ',I6,' sending update from ',
+ A,' at ',A,' to HEPDB')
LMSG = LENOCC(ERRMSG)
GOTO 100
ENDIF
*
* Rename the remote update file
*
LSTA = INDEXB(TARGET(1:LREM),'/') + 1
TARGET(LSTA:LSTA+1) = CDPREF
IF(IDEBUG.GE.1) PRINT *,'CDHEPDB. Renaming file to ',
+ TARGET(1:LREM)
CALL XZMV(REMOTE(1:LREM),TARGET(1:LREM),' ',IRC)
*
* Delete this update...
*
CALL VMCMS('ERASE '//CDFILE(1:LFILE),IC)
*
* Try to clear out RDR
*
IF(NFILES.GT.10) GOTO 10
*
* Are there any files for us to get?
*
20 CONTINUE
*
* Get list of remote directories
*
JCONT = 0
IF(NDIRS.EQ.0) THEN
IF(IDEBUG.GE.1) PRINT *, 'HEPDB. Retrieving list of remote '
+ //'directories...'
CALL XZLS('/hepdb/cd*/tovm',CHDIRS,NDIR,NDIRS,JCONT,'D',IC)
NDIRS = MIN(NDIR,NDIRS)
IF(JCONT.NE.0) THEN
IC = 0
PRINT *,'CDHEPDB. too many directories - excess names ',
+ 'will be flushed'
*
30 CONTINUE
CALL CZGETA(CHMAIL,ISTAT)
LCH = LENOCC(CHMAIL)
IF(CHMAIL(1:1).EQ.'0') THEN
*
* Nop
*
ELSEIF(CHMAIL(1:1).EQ.'1') THEN
ELSEIF(CHMAIL(1:1).EQ.'2') THEN
GOTO 30
ELSEIF(CHMAIL(1:1).EQ.'3') THEN
IQUEST(1) = 1
IRC = 1
ELSEIF(CHMAIL(1:1).EQ.'E') THEN
IQUEST(1) = -1
IRC = -1
ELSEIF(CHMAIL(1:1).EQ.'V') THEN
GOTO 30
ELSE
IQUEST(1) = 1
IRC = 1
ENDIF
*
ENDIF
ENDIF
IF(NDIRS.EQ.1.AND.INDEX(CHDIRS(1),'not found').NE.0) THEN
ERRMSG = 'CDHEPDB. there are no remote directories!'
LMSG = LENOCC(ERRMSG)
GOTO 100
ENDIF
DO 80 J=1,NDIRS
LDIR = LENOCC(CHDIRS(J))
IF(LDIR.EQ.0) GOTO 80
CALL CLTOU(CHDIRS(J)(1:LDIR))
*
* Get the name of the server for whom these updates are intended...
*
JSTART = INDEX(CHDIRS(J)(1:LDIR),'/CD')
IF(JSTART.EQ.0) THEN
IF(IDEBUG.GE.-3)
+ PRINT *,'CDHEPDB. unrecognised directory - skipped ',
+ '(',CHDIRS(J)(1:LDIR),')'
GOTO 80
ELSE
JSTART = JSTART + 1
ENDIF
JEND = INDEX(CHDIRS(J)(JSTART:LDIR),'/')
IF(JEND.EQ.0) THEN
PRINT *,'CDHEPDB. unrecognised file name - skipped ',
+ '(',CHDIRS(J)(1:LDIR),')'
GOTO 80
ENDIF
REMUSR = CHDIRS(J)(JSTART:JSTART+JEND-2)
LREM = LENOCC(REMUSR)
IF(LREM.EQ.0) THEN
IF(IDEBUG.GE.-3)
+ PRINT *,'CDHEPDB. unrecognised file name - skipped ',
+ '(',CHDIRS(J)(1:LDIR),')'
GOTO 80
ENDIF
IF(IDEBUG.GE.1)
+ PRINT *,'CDHEPDB. processing updates for ',REMUSR(1:LREM)
CALL XZCD(CHDIRS(J)(1:LDIR),IRC)
IF(IRC.NE.0) THEN
IF(IDEBUG.GE.-3)
+ PRINT *,'CDHEPDB. cannot set directory to ',
+ CHDIRS(J)(1:LDIR)
GOTO 80
ENDIF
ICONT = 0
NFILES = 0
IF(IDEBUG.GE.1) PRINT *, 'HEPDB. Retrieving list '
+ //'of remote files in ', CHDIRS(J)(1:LDIR)
CALL XZLS(' ',FILES,NMAX,NFILES,ICONT,' ',IC)
NFILES = MIN(NFILES,NMAX)
IF(IDEBUG.GE.2)
+ PRINT *,'CDHEPDB. ',NFILES,' files found in ',CHDIRS(J)(1:LDIR)
IF(ICONT.NE.0) THEN
IC = 0
IF(IDEBUG.GE.0)
+ PRINT *,'CDHEPDB. too many files - excess names will be '
+ //'flushed'
*
40 CONTINUE
CALL CZGETA(CHMAIL,ISTAT)
LCH = LENOCC(CHMAIL)
IF(CHMAIL(1:1).EQ.'0') THEN
*
* Nop
*
ELSEIF(CHMAIL(1:1).EQ.'1') THEN
ELSEIF(CHMAIL(1:1).EQ.'2') THEN
GOTO 40
ELSEIF(CHMAIL(1:1).EQ.'3') THEN
IQUEST(1) = 1
IRC = 1
ELSEIF(CHMAIL(1:1).EQ.'E') THEN
IQUEST(1) = -1
IRC = -1
ELSEIF(CHMAIL(1:1).EQ.'V') THEN
GOTO 40
ELSE
IQUEST(1) = 1
IRC = 1
ENDIF
*
ENDIF
DO 70 I=1,NFILES
LF = LENOCC(FILES(I))
IF(LF.EQ.0) GOTO 70
CALL CLTOU(FILES(I))
*
* Fix for the case when there are no files...
*
IF(NFILES.EQ.1) THEN
IF(INDEX(FILES(I)(1:LF),'DOES NOT EXIST').NE.0.OR.
+ INDEX(FILES (I)(1:LF),'NOT FOUND').NE.0) GOTO 10
IF(INDEX(FILES(I)(1:LF),'ARG LIST TOO LONG').NE.0) THEN
IF(IDEBUG.GE.-3) THEN
PRINT *,'CDHEPDB. Stopping due to the following '
+ //'error...'
PRINT *,FILES(I)(1:LF)
PRINT *,'(Intervention required on HEPDB)'
ENDIF
CALL VMCMS('EXEC TELL JAMIE '//FILES(I)(1:LF),IC)
CALL VMCMS('EXEC TELL JAMIE Logging off...',IC)
CALL VMCMS('EXEC TELL FATONE Logging off due to'//
+ FILES(I)(1:LF),IC)
CALL VMSTAK('LOGOFF','L',IC)
STOP
ENDIF
ENDIF
*
* Check that file name is valid
*
DO 50 L=1,LF
IF(INDEX(VALID,FILES(I)(L:L)).EQ.0) THEN
IF(IDEBUG.GE.-3) THEN
PRINT *,'CDHEPDB. invalid character ',
+ FILES(I)(L:L),
+ ' at ',L,' in ',FILES(I)(1:LF)
PRINT *,'CDHEPDB. skipping update...'
ENDIF
GOTO 70
ENDIF
50 CONTINUE
IF(INDEX(FILES(I)(1:LF),CHNODE(1:LNODE)).NE.0) THEN
IF(IDEBUG.GE.1)
+ PRINT *,'CDHEPDB. skipping update for ',CHNODE(1:LNODE),
+ '(',FILES(I)(1:LF),')'
GOTO 70
ENDIF
LSLASH = INDEXB(FILES(I)(1:LF),'/')
IF(FILES(I)(LSLASH+1:LSLASH+2).EQ.'ZZ') THEN
IF(IDEBUG.GE.1)
+ PRINT *,'CDHEPDB. active file - skipped ', '(',FILES(I)
+ (1:LF),')'
GOTO 70
ENDIF
IF(IDEBUG.GE.2)
+ PRINT *,'CDHEPDB. update found for ',REMUSR(1:LREM), '(',
+ FILES(I)(1:LF),')'
IF(IDEBUG.GE.1) PRINT *,'CDHEPDB. retrieving update ',
+ FILES(I)(1:LF)
CDPREF = FILES(I)(1:LF)
CALL CDRAND(CHRAND,IRC)
CDFILE = CDPREF // CHRAND(3:) // '.HEPDB.B'
LFILE = 16
CALL XZGETA(CDFILE(1:LFILE),FILES(I)(1:LF),'S',IC)
IF(IC.NE.0) THEN
WRITE(ERRMSG,9002) IC,REMUSR(1:LREM)
9002 FORMAT(' CDHEPDB. error ',I6,' retrieving update for ',A)
LMSG = LENOCC(ERRMSG)
GOTO 100
ENDIF
*
* Protection against zero length files
*
IF(IQUEST(11).EQ.0) GOTO 60
LDOT = INDEX(CDFILE(1:LFILE),'.')
CDFILE(LDOT:LDOT) = ' '
LDOT = INDEX(CDFILE(1:LFILE),'.')
CDFILE(LDOT:LDOT) = ' '
CALL VMCMS('EXEC SENDFILE '//CDFILE(1:LFILE)//' TO '
+ //REMUSR(1:LREM),IC)
IF(IC.NE.0) THEN
WRITE(ERRMSG,9003) IC,REMUSR(1:LREM)
9003 FORMAT(' CDHEPDB. error ',I6,' sending update to ',A)
LMSG = LENOCC(ERRMSG)
GOTO 100
ENDIF
*
* Now delete local file
*
CALL VMCMS('ERASE '//CDFILE(1:LFILE),IC)
*
* and the remote one
*
60 continue
CALL XZRM(FILES(I)(1:LF),IC)
IF(IC.NE.0) THEN
WRITE(ERRMSG,9004) IC,FILES(I)(1:LF)
9004 FORMAT(' CDHEPDB. error ',I6,' deleting file ',A)
LMSG = LENOCC(ERRMSG)
GOTO 100
ENDIF
70 CONTINUE
80 CONTINUE
*
* Wait for some action...
*
GOTO 10
90 CALL CZCLOS(ISTAT)
STOP
100 CONTINUE
*
* Error exit
*
IF(IDEBUG.GE.-3) PRINT *,ERRMSG(1:LMSG)
CALL VMCMS('EXEC TELL JAMIE '//ERRMSG(1:LMSG),IC)
CALL VMCMS('EXEC TELL JAMIE Logging off...',IC)
CALL VMCMS('EXEC TELL FATONE Logging off due to'//ERRMSG(1:LMSG),
+ IC)
CALL VMSTAK('LOGOFF','L',IC)
GOTO 90
END
\end{XMPt}
\Filename{h2hdbappen-Update-flow-between-HEPDB-and-other-nodes}
\section{Transfer of updates between HEPDB and other nodes}
\label{sect-CDMOVE}
\index{CDMOVE}
\index{Update distribution}
Updates are transferred between HEPDB and non-VM nodes using
a generalisation of the above procedure, known as {\bf CDMOVE}.
Rather than maintain
a permanent network connection with each of the remote nodes,
the method is as follows:
\begin{itemize}
\item
Upon startup, the program obtains a list of experiments
for which updates are to be processed. This list is defined by
the environmental variable {\bf CDGROUPS}.
\begin{XMPt}{Defining a list of groups}
CDGROUPS="CDCHORUS,CDCPLEAR,CDNOMAD,CDCMS,CDNA48";export CDGROUPS
\end{XMPt}
\item
Each group is then processed in turn. The directory containing
the NAMES file (hepdb.names) for a given group is obtained from the environmental
variable {\bf CDgroup}, e.g. {\bf CDCPLEAR}.
\item
The list of remote nodes is identified by the {\bf :servers} tag of
the {\bf :nick.config} entry. Thus, for CPLEAR, the list of remote
nodes is vxcrna uxcp05 axcp01 (see below).
{\bf N.B. Each database may have a different list of remote
servers. The :servers tags on the individual database entries
are processed by CDSERV, not CDMOVE. CDMOVE only looks at
the :servers tag on the :nick.config entry.}
\item
The remote nodes are processed for each database. Any files
in the local queue for the remote node will be sent.
The local queue is defined by the tag {\bf :localq}.
If this tag is missing, a directory named {\tt to} followed by
the name of the remote node, e.g. {\tt tovxcrna}, is used.
\item
The updates are transferred to the directory
defined by the {\bf :queue} tag. If this tag is
missing, the subdirectory {\bf TODO} is used.
\item
One may define a different list of remote nodes for each
database. However, this is the concern of {\bf CDSERV},
the database server, and not {\bf CDMOVE}. {\bf CDSERV}
makes a copy of all update files in a local queue directory
for each remote server defined by the {\bf :servers} tag
for the current database. {\bf CDMOVE} looks at all
of the local queues; the database to which an update
file corresponds is determined by the two character
prefix of the update file, as illustrated in the sketch
after this list.
\item
If the tag {\bf :receive.yes} is specified, any pending updates
will be retrieved from the remote system.
\item
If the tag {\bf :poll.yes} is specified, the remote node
will be contacted and any pending updates
will be retrieved from the remote system, regardless of
whether there are any updates pending for the remote node
in question.
\item
A connection is only made to a remote system if there are updates
pending for that system. This makes sense in an environment where
database updates are typically produced centrally.
\item
The files are received from the directory identified by the
{\bf :remoteq} tag. If this tag is not specified, the
subdirectory {\bf queue} is used.
\item
In all cases, the updates are received into the {\bf todo}
subdirectory of the local server. It is assumed
that {\bf CDMOVE} will be run on the same machine
as the master server.
\end{itemize}
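The use of the two character prefix mentioned above can be sketched
in shell syntax as follows. The fragment is purely illustrative and
the file name is invented; the real logic is of course contained in
{\bf CDMOVE} itself.
\begin{XMPt}{Illustrative use of the two character prefix}
#!/bin/ksh
# Hypothetical update file sitting in a local queue directory
file=/hepdb/cdcplear/tovxcrna/ge9402211630123456.someuser_somenode
#
# The first two characters of the file name give the database prefix,
# here "ge", i.e. the geometry database (:nick.ge) in the names file
prefix=`basename $file | cut -c1-2`
echo "update `basename $file` belongs to database prefix $prefix"
\end{XMPt}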
\begin{XMPt}{Example script to run CDMOVE}
#!/bin/ksh
#
# Define the list of groups for whom updates are to be processed
#
export CDGROUPS="CDCPLEAR,CDCHORUS"
#
# For each group, define the pathname where the hepdb.names file resides
#
export CDCPLEAR=/hepdb/cdcplear
export CDCHORUS=/hepdb/cdchorus
#
# Now start the server
#
/cern/pro/bin/cdmove
\end{XMPt}
\begin{XMPt}{NAMES file for CDCPLEAR (hepdb.names) }
:nick.config
:list.au ge ca aa
:log./hepdb/cdcplear/log
:queue./hepdb/cdcplear/todo
:todo./hepdb/cdcplear/todo
:save./hepdb/cdcplear/save
:bad./hepdb/cdcplear/bad
:loglevel.3
:wakeup.60
:servers.vxcrna uxcp05 axcp01
:nick.au
:file./hepdb/cdcplear/aux.dbs
:desc.auxil database
:servers.cernvm vxcrna uxcp05
:nick.ca
:file./hepdb/cdcplear/cal.dbs
:desc.calibration database
:servers.cernvm vxcrna uxcp05
:nick.ge
:file./hepdb/cdcplear/geo.dbs
:desc.geometry database
:servers.cernvm vxcrna uxcp05
:nick.aa
:file./hepdb/cdcplear/aa.dbs
:desc.test database
:servers.cernvm
:nick.cernvm
:userid.cdcplear
:node.cernvm
:localq./hepdb/cdcplear/tovm
:nick.vxcrna
:userid.cdcplear
:node.axcrnb
:localq./hepdb/cdcplear/tovxcrna
:queue.disk$mf:[cdcplear.todo]
:nick.uxcp05
:userid.cdcplear
:node.uxcp05
:localq./hepdb/cdcplear/touxcp05
:queue.disk$mf:[cdcplear.todo]
:poll.yes
:nick.axcp01
:userid.cdcplear
:node.axcp01
:localq./hepdb/cdcplear/toaxcp01
:queue./hepdb/cdcplear/todo
\end{XMPt}
\begin{XMPt}{NAMES file for CDCHORUS (hepdb.names) }
:nick.config
:list.c2 ch
:log./hepdb/cdchorus/log
:queue./hepdb/cdchorus/todo
:todo./hepdb/cdchorus/todo
:save./hepdb/cdchorus/save
:bad./hepdb/cdchorus/bad
:loglevel.3
:wakeup.60
:servers.vxcrna xantia
:nick.c2
:file./hepdb/cdchorus/charm2.dbs
:desc.Charm2 database converted to HEPDB format
:servers.cernvm vxcrna xantia
:nick.ch
:file./hepdb/cdchorus/chorus.dbs
:desc.CHORUS (geometry) database
:servers.cernvm vxcrna xantia
:nick.cernvm
:userid.cdhepdb
:node.cernvm
:localq./hepdb/cdchorus/tovm
:nick.vxcrna
:userid.cdhepdb
:node.vxcrna
:localq./hepdb/cdchorus/tovxcrna
:nick.xantia
:userid.cdhepdb
:node.xantia.caspur.it
:localq./hepdb/cdchorus/toxantia
:receive.yes
:queue./cern/hepdb/cdchorus/todo
\end{XMPt}
The actual file transfer is performed by \CSPACK{}~\cite{bib-CSPACK}
routines. Thus, the remote node must be set up correctly, i.e.
{\bf ZSERV} must be installed. The user name and password for
the remote connection are defined in the {\bf .netrc} file
(or {\tt ftplogin.} for VMS systems)
in the home directory of the account under which the
{\bf CDMOVE} job is run.
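A minimal {\bf .netrc} entry might look as follows; the node and
account names are taken from the examples below and the password is,
of course, a dummy. The file should not be readable by other users.
\begin{XMPt}{Illustrative .netrc entry}
machine vxcrna login cdcplear password xxxxxxxx
\end{XMPt}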
\section{Example of configuration on two VAX systems}
Let us take the example of two VAX systems: {\bf VXLNFB}
and {\bf AXPALS}. In this example, the {\it master} server
will run on the node {\bf VXLNFB}. To achieve this, we
must configure the file {\bf hepdb.names} so that the
{\bf :queue} and {\bf :todo} tags point to the same directory,
as shown below.
\begin{XMPt}{Example names file for master server}
:nick.config
:list.ct rt gt
:log.DISK\$MF:[KLOEDB.LOG]
:queue.DISK\$MF:[KLOEDB.TODO]
:todo.DISK\$MF:[KLOEDB.TODO]
:save.DISK\$MF:[KLOEDB.SAVE]
:bad.DISK\$MF:[KLOEDB.BAD]
:loglevel.3
:wakeup.60
:servers.axpals
:nick.ct
:file.DISK\$MF:[KLOEDB.DATABASE]CALRT.DBS
:desc.Calibration Database for Test Beam 1994
:servers.axpals
:nick.rt
:file.DISK\$MF:[KLOEDB.DATABASE]RUNRT.DBS
:desc.RUN CONDITION Database for Test Beam 1994
:servers.axpals
:nick.gt
:file.DISK\$MF:[KLOEDB.DATABASE]GEORT.DBS
:desc.Geometry Database for Test Beam 1994
:servers.axpals
:nick.axpals
:userid.kloedb
:node.axpals
:poll.yes
\end{XMPt}
The names file for the slave server is similar, as shown below.
\begin{XMPt}{Example names file for slave server}
:nick.config
:list.ct rt gt
:log.DISK\$MF:[KLOEDB.LOG]
:queue.DISK\$MF:[KLOEDB.TOVXLNFB]
:todo.DISK\$MF:[KLOEDB.TODO]
:save.DISK\$MF:[KLOEDB.SAVE]
:bad.DISK\$MF:[KLOEDB.BAD]
:loglevel.3
:wakeup.60
:servers.
:nick.ct
:file.DISK\$MF:[KLOEDB.DATABASE]CALRT.DBS
:desc.Calibration Database for Test Beam 1994
:servers.vxlnfb
:nick.rt
:file.DISK\$MF:[KLOEDB.DATABASE]RUNRT.DBS
:desc.RUN CONDITION Database for Test Beam 1994
:servers.vxlnfb
:nick.gt
:file.DISK\$MF:[KLOEDB.DATABASE]GEORT.DBS
:desc.Geometry Database for Test Beam 1994
:servers.vxlnfb
\end{XMPt}
In the above examples, it is assumed that {\bf CDMOVE} will run on the
same node as the {\it master} server, i.e. {\bf VXLNFB}.
\subsection{Making an update on a node with a {\bf MASTER} server}
\begin{itemize}
\item
If an update is made on the node where the master server is running,
it is written into the directory {\bf DISK\$MF:[KLOEDB.TODO]}.
\item
The master server will process this update and make a copy in the directory
{\bf DISK\$MF:[KLOEDB.TOAXPALS]}.
\item
A separate process, {\bf CDMOVE}, takes the files in
{\bf DISK\$MF:[KLOEDB.TOAXPALS]}, connects to the node {\bf AXPALS}
and transfers the files to the directory
{\bf DISK\$MF:[KLOEDB.TODO]}.
\item
The files will then be processed by the
{\bf CDSERV} process on this node and the local copy of the
database updated.
\end{itemize}
\subsection{Making an update on a node with a {\bf SLAVE} server}
\begin{itemize}
\item
If an update is made on the node where the slave server is running,
it will be written into the directory {\bf DISK\$MF:[KLOEDB.TOVXLNFB]}.
\item
As we have specified the tag {\bf :poll.yes}, the {\bf CDMOVE}
process running on VXLNFB will connect at periodic
intervals and transfer any pending files.
Alternatively, one may specify
{\bf :receive.yes}, in which case updates will only be transferred
from {\bf AXPALS} if the {\bf CDMOVE} server has made a connection
to transfer updates from {\bf VXLNFB} to {\bf AXPALS}. If neither
tag is specified, updates will never be transferred from {\bf AXPALS}
to {\bf VXLNFB}.
\end{itemize}
\subsection{Running {\bf CDMOVE} on the slave node}
Normally, one would run the {\bf CDMOVE} process on the same
node as the master server. As stated above, this requires
that {\bf ZSERV} has been correctly configured on all slave
nodes.
\index{ZSERV}
\index{UCX}
If this is not possible for some reason, e.g. if UCX is running
on the slave node, then {\bf CDMOVE} can be run on the slave
node. This, in turn, requires that {\bf ZSERV} is correctly
configured on the master node.
In this case the names file on the slave node must be modified
as shown below.
The changes are indented for clarity, although this is purely
cosmetic.
\begin{XMPt}{Example names file for slave server with changes for CDMOVE}
:nick.config
:list.ct rt gt
:log.DISK\$MF:[KLOEDB.LOG]
:queue.DISK\$MF:[KLOEDB.TOVXLNFB]
:todo.DISK\$MF:[KLOEDB.TODO]
:save.DISK\$MF:[KLOEDB.SAVE]
:bad.DISK\$MF:[KLOEDB.BAD]
:loglevel.3
:wakeup.60
:servers.VXLNFB
:nick.ct
:file.DISK\$MF:[KLOEDB.DATABASE]CALRT.DBS
:desc.Calibration Database for Test Beam 1994
:servers.vxlnfb
:nick.rt
:file.DISK\$MF:[KLOEDB.DATABASE]RUNRT.DBS
:desc.RUN CONDITION Database for Test Beam 1994
:servers.vxlnfb
:nick.gt
:file.DISK\$MF:[KLOEDB.DATABASE]GEORT.DBS
:desc.Geometry Database for Test Beam 1994
:servers.vxlnfb
:nick.VXLNFB
:userid.kloedb
:node.VXLNFB
:poll.yes
\end{XMPt}
\Filename{H2hdbappen-Creating-a-new-server-on-VXCRNA}
\section{Creating a new server on VXCRNA}
Although it is possible to access remote database files
from {\tt VAX/VMS} systems using {\tt NFS}, there
are cases, such as on an online {\tt VAXcluster}, when
it is desirable to have a database server on the
{\tt VAX} itself.
As above, an account is first created using {\tt USERREG}.
This account is then configured using the following command
file, included in the standard distribution:
\begin{XMPt}{CDNEW.COM}
$!DECK ID>, CDNEW.COM
$ !
$ ! Setup the directory and file structure for a new
$ ! server
$ !
$ procedure = f$parse(f$environment("PROCEDURE"),,,"NAME")
$ say = "write sys$output"
$ if p1 .eqs. ""
$ then
$ write sys$output "''procedure': usage ''procedure' group"
$ exit
$ endif
$ !
$ ! Does the directory already exist?
$ !
$ home = f$search("DISK$MF:[000000]''p1'.dir")
$ if home .eqs. ""
$ then
$ say "''procedure': home directory for ''p1' does not exist."
$ say "''procedure': please create an account using USERREG"
$ exit
$ endif
$ !
$ ! Create subdirectories
$ !
$ create/directory DISK$MF:['p1'.BAD]
$ create/directory DISK$MF:['p1'.LOG]
$ create/directory DISK$MF:['p1'.QUEUE]
$ set file/protection=w:rwe DISK$MF:['p1']QUEUE.DIR
$ create/directory DISK$MF:['p1'.TODO]
$ create/directory DISK$MF:['p1'.SAVE]
$ directory DISK$MF:['p1'] /security
$ !
$ ! Create names file
$ !
$ open/write out DISK$MF:['p1']HEPDB.NAMES
$ write out ":nick.config"
$ write out ":list.aa"
$ write out ":bad.DISK$MF:[''p1'.BAD]"
$ write out ":log.DISK$MF:[''p1'.LOG]"
$ write out ":queue.DISK$MF:[''p1'.QUEUE]"
$ write out ":todo.DISK$MF:[''p1'.TODO]"
$ write out ":save.DISK$MF:[''p1'.SAVE]"
$ write out ":wakeup.120"
$ write out ":loglevel.3"
$ close out
$ type DISK$MF:['p1']HEPDB.NAMES
\end{XMPt}
Should the disk in question have disk quotas enabled,
one should ensure that the {\tt queue} directory is
owned by an identifier and has an ACL as in the following
example:
\begin{XMPt}{Queue directory on VAX/VMS systems}
(IDENTIFIER=CDCHORUS,ACCESS=READ+WRITE+EXECUTE+DELETE+CONTROL)
(IDENTIFIER=ID$_CHORUS,ACCESS=READ+WRITE+EXECUTE)
(IDENTIFIER=ID$_CHORUS,OPTIONS=DEFAULT,ACCESS=READ+WRITE+EXECUTE+DELETE+CONTROL)
(IDENTIFIER=CDF_EXPERIMENT,OPTIONS=DEFAULT,ACCESS=READ+WRITE+EXECUTE)
\end{XMPt}
The identifier must be granted, with the {\tt RESOURCE} attribute,
to all users who should be permitted to update the database.
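One possible way of granting such an identifier, assuming that the
identifier itself already exists, is via the {\tt AUTHORIZE} utility,
as sketched below (the user name is just an example).
\begin{XMPt}{Granting an identifier with the RESOURCE attribute}
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> GRANT/IDENTIFIER CDCHORUS SOMEUSER /ATTRIBUTES=RESOURCE
UAF> EXIT
\end{XMPt}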
As for {\tt CERNVM}, an extra account exists which is used to
exchange updates between {\tt VXCRNA} and {\tt hepdb}.
The files created on {\tt hepdb} must have the correct ownership
(in this case UID 102 and GID 3) which must be mapped to the UIC
under which the command file is executed on the VAX.
This is performed as follows:
\begin{XMPt}{Mapping a Unix UID/GID pair to a VMS username}
$ ! MULTINET CONFIGURE /NFS
$ ! NFS-CONFIG>add cdhepdb 102 3
$ ! NFS-CONFIG>ctrl-z
\end{XMPt}
The exchange of updates itself is performed by the following command file:
\begin{XMPt}{Moving updates between VXCRNA and hepdb}
$!DECK ID>, CDSEND.COM
$ !
$ ! Command file to move updates between 'slave' and 'master'
$ !
$ ! Invoked by CDSERV.COM from the account CDHEPDB on VXCRNA
$ !
$ ! Assumes correct UID & GID mapping for directories on 'master'
$ ! MULTINET CONFIGURE /NFS
$ ! NFS-CONFIG>add cdhepdb 102 3
$ ! NFS-CONFIG>ctrl-z
$ !
$ set noon
$ !
$ ! List of servers
$ !
$ cdservers = "CDCPLEAR,CDCHORUS,CDNOMAD"
$ !
$ ! Master & slave definitions
$ !
$ slave = "VXCRNA"
$ master = "HEPDB"
$ !
$ main_loop:
$ nserver = 0
$ !
$ ! Loop over all servers
$ !
$ loop_servers:
$ server = f$element(nserver,",",cdservers)
$ nserver = nserver + 1
$ if server .eqs. "," then goto sleep
$ !
$ ! Look for files waiting to be sent to 'master'
$ !
$ to_hepdb:
$ journal_file = f$search("DISK$MF:[''server'.TO''master']*.*")
$ !
$ if journal_file .eqs. "" then goto from_hepdb
$ !
$ ! Skip 'active' files
$ !
$ if f$extract(0,2,journal_file) .eqs. "ZZ" then goto to_hepdb
$ !
$ ! Build remote file name
$ !
$ istart = f$locate("]",journal_file) + 1
$ remote_file = "''master':[''server'.TODO]ZZ" + -
f$extract(istart+2,f$length(journal_file),journal_file)
$ !
$ ! Copy the file over
$ !
$ copy 'journal_file' 'remote_file' /log /noconfirm
$ !
$ ! Rename remote file and delete local file if it was ok
$ !
$ if $severity .eq. 1
$ then
$ remote_update = "''master':[''server'.TODO]" + -
f$extract(istart,f$length(journal_file),journal_file)
$ rename 'remote_file' 'remote_update' /nolog /noconfirm
$ if $severity .eq. 1 then delete /nolog /noconfirm 'journal_file'
$ endif
$ !
$ goto to_hepdb
$ !
$ ! Look for files to be pulled over from 'master'
$ !
$ from_hepdb:
$ journal_file = f$search("HEPDB:[''server'.TO''slave']*.*")
$ !
$ if journal_file .eqs. "" then goto loop_servers
$ !
$ ! Skip 'active' files
$ !
$ if f$extract(0,2,journal_file) .eqs. "ZZ" then goto from_hepdb
$ !
$ ! Build local file name
$ !
$ istart = f$locate("]",journal_file) + 1
$ local_file = "DISK$MF:[''server'.TODO]ZZ" + -
f$extract(istart+2,f$length(journal_file),journal_file)
$ !
$ ! Copy the file over
$ !
$ copy 'journal_file' 'local_file' /log /noconfirm
$ !
$ ! Rename local file and delete remote file if it was ok
$ !
$ if $severity .eq. 1
$ then
$ local_update = "DISK$MF:[''server'.TODO]" + -
f$extract(istart,f$length(journal_file),journal_file)
$ rename 'local_file' 'local_update' /log /noconfirm
$ if $severity .eq. 1 then delete /log /noconfirm 'journal_file'
$ endif
$ !
$ goto from_hepdb
$ !
$ sleep:
$ wait 00:30:00
$ goto main_loop
\end{XMPt}
The servers are controlled by the following job, which runs in the
{\tt SYS\$FATMEN} queue:
\begin{XMPt}{Command file to control HEPDB servers}
$!DECK ID>, CDMAST.COM
$SET NOON
$ !
$ ! Master HEPDB command file
$ !
$ save_mess = f$environment("MESSAGE")
$ set message/nofacility/noseverity/noid/notext
$ write sys$output "CDMAST starting at ''f$time()'"
$ !
$ ! define list of servers
$ !
$ servers = "CDHEPDB,CDCHORUS,CDCPLEAR" ! Separate by commas
$ wakeup :== 00:30:00 ! Every 30 minutes
$ !
$ ! define symbols - this is VXCRNA specific
$ !
$ n = 0
$ loop:
$ server = f$element(n,",",servers)
$ if server .eqs. "," then goto again
$ 'server' == "DISK$MF:[''server']"
$ n = n + 1
$ goto loop
$ again:
$ !
$ ! Run the command files that expect a complete list as argument
$ !
$ write sys$output ">>> CDPURGE..."
$ @CERN_ROOT:[EXE]CDPURGE 'servers' ! Purge old journal files
$ !
$ write sys$output ">>> CDCHECK..."
$ @CERN_ROOT:[EXE]CDCHECK 'servers' ! Check that servers are started
$ !
$ write sys$output ">>> Time is ''f$time()'. Waiting ''wakeup'..."
$ wait 'wakeup'
$ write sys$output ">>> Wakeup at ''f$time()'."
$ goto again
$ set message 'save_mess'
$ exit
\end{XMPt}
The job {\tt CDPURGE} purges old journal and log files and is as
follows:
\begin{XMPt}{Job to purge old journal and log files}
$!DECK ID>, CDPURGE.COM
$SET NOON
$ !
$ ! Purge journalled HEPDB updates that are over a day old
$ !
$ if p1 .eqs. "" then exit
$ hepdb = p1
$ count = 0
$ save_mess = f$environment("MESSAGE")
$ set message/nofacility/noseverity/noid/notext
$loop:
$ server = f$element(count,",",hepdb)
$ if server .eqs. "," then goto end
$ count = count + 1
$ write sys$output "Processing ''server'..."
$ ON WARNING THEN GOTO UNDEFINED
$ cddir = &server
$ purge 'cddir' ! Purge old log files
$ cdfil = f$extract(0,f$length(cddir)-1,cddir) + ".SAVE]*.*;*"
$ ON WARNING THEN CONTINUE
$ delete/before=-0-23:59 'cdfil'
$ goto loop
$ undefined:
$ write sys$output "Warning: symbol ''server' is not defined"
$ goto loop
$ end:
$ set message 'save_mess'
$ exit
\end{XMPt}
The job to check and restart the servers is as follows:
\begin{XMPt}{CDCHECK command file}
$!DECK ID>, CDCHECK.COM
$SET NOON
$ !
$ ! Check that HEPDB servers are started
$ !
$ if p1 .eqs. "" then exit
$ servers = p1
$ count = 0
$ save_mess = f$environment("MESSAGE")
$ set message/nofacility/noseverity/noid/notext
$ !
$ ! Check that the queue is started
$ !
$ if f$getqui("DISPLAY_QUEUE","QUEUE_STOPPED","SYS$FATMEN") .eqs. "FALSE" then -
start/queue sys$fatmen
$loop:
$ server = f$element(count,",",servers)
$ if server .eqs. "," then goto end
$ count = count + 1
$ write sys$output "Processing ''server'..."
$ show user/nooutput 'server'
$ if $severity .ne. 1
$ then
$ !
$ ! Check that server has not been stopped
$ !
$ ON WARNING THEN GOTO UNDEFINED
$ cddir = &server
$ ON WARNING THEN CONTINUE
$ cddir = f$extract(0,f$length(cddir)-1,cddir) + ".TODO]SIGNAL.STOP"
$ if f$search(cddir) .nes. ""
$ then write sys$output "Signal.Stop file found - will not restart"
$ goto loop
$ endif
$ write sys$output "Restarting server ..."
$ cdserv = &server + "CDSERV.COM"
$ submit/queue=sys$fatmen/user='server' /id 'cdserv'
$ endif
$ goto loop
$ undefined:$ write sys$output "Warning: symbol ''server' is not defined"
$ goto loop
$ end:
$ exit
\end{XMPt}
\Filename{H2hdbappen-Accessing-remote-database-files-over-NFS}
\section{Accessing remote database files over NFS}
\index{NFS}
One may avoid running a local database server on a given node
by accessing the database files over the network. This is the
recommended procedure for Unix systems at CERN.
To enable this, one should first mount the {\tt /hepdb} file
system as shown below.
\begin{XMPt}{Mounting the /hepdb file system on a machine running Unix}
mount hepdb:/hepdb /hepdb
\end{XMPt}
\index{CORE}
\index{SHIFT}
\index{CSF}
Should your experiment require access to {\bf HEPDB} from one of
the CORE/SHIFT/CSF systems, please contact your CORE representative
and ask them to perform the above action on the nodes in question.
On a VAX/VMS system that has the NFS client software installed,
as is the case on VXCERN, the following commands are issued
at system startup time.
\begin{XMPt}{Mounting the /hepdb file system on a machine running VMS}
$ !
$ ! Mount the file system if not already done
$ !
$ if f$trnlnm("HEPDB").eqs."" then NFSMOUNT/soft HEPDB::"/hepdb" HEPDB
\end{XMPt}
The HEPDB software automatically uses C I/O to access remote database files
on VMS systems, as VAX Fortran does not recognise the file
structure of the remote Unix database file. The use of C I/O is in any
case completely transparent to the user.
It is currently recommended that the update directory reside on the local
VMS system. This is because Multinet NFS requires that VMS UICs are
mapped to Unix UID and GID pairs on the remote node, even if the remote
directory is {\tt world} writable (or writable by {\tt others} in Unix
parlence). On VXCERN only a single UIC is mapped to a valid UID/GID pair
on node hepdb. A job runs under this UIC to move the update files between
the local and remote file systems.
\Filename{H2hdbappen-VMS-systems-running-UCX}
\section{VMS systems running UCX}
{\bf N.B. The use of UCX is not recommended. The following section
remains for historical reasons only.}
At the time of writing, DEC's UCX product still does not provide
an NFS client. In this case one can mount a VMS directory on
node {\tt hepdb}. This is done today for {\tt CPLEAR}.
As is the case for Multinet NFS, one must map a Unix UID/GID pair
to a VMS username. In addition, a {\tt binding} must be made between
a VMS directory and a Unix style file name.
This can be done as follows:
\begin{XMPt}{Binding a VMS directory to a Unix name}
$ UCX
UCX> BIND UXCP05$DKA300: /vxcplear
UCX> show bind
Logical filesystem Pathname
UXCP05$DKA300: /vxcplear
UCX>
\end{XMPt}
\begin{XMPt}{Mapping a UID/GID pair to a VMS username}
$ UCX
UCX> ADD PROXY CDCPLEAR /UID=102 /GID=1 /HOST=hepdb.cern.ch
\end{XMPt}
Note that UCX treats hostnames as case sensitive.
Finally, one must start the UCX NFS server. This involves
\begin{itemize}
\item
Modifying (correcting) the UCX startup command file
{\tt SYS\$MANAGER:UCX\$NFS\_STARTUP.COM}
\item
Invoking the command file at system startup.
\end{itemize}
\begin{XMPt}{Modifying the UCX NFS startup command file}
$ ! ...
$ !
$ ! Set the following UID and GIDs
$ !
$ DEFINE/SYSTEM/EXE/NOLOG UCX$NFS00000000_GID 1
$ DEFINE/SYSTEM/EXE/NOLOG UCX$NFS00000000_UID 0
$ !
$ ! ...
$ !
$ ! Comment out the following line
$ ! RUN SYS\$SYSTEM:UCX\$SERVER\_NFS.EXE
$ !
$ ! The following section contains NFS process quota that is required by
$ ! manual startup. Please uncomment the following lines and comment out
$ ! the "RUN" command above, if you choose to manually start NFS.
$ !
$ RUN SYS\$SYSTEM:UCX\$SERVER\_NFS.EXE/DETACH -
/OUTPUT=NLA0: -
/ERROR='P1' -
/AST\_LIMIT=512 -
/BUFFER\_LIMIT=200000 -
/EXTENT=20000 -
/FILE\_LIMIT=1024 -
/IO\_BUFFERED=400 -
/IO\_DIRECT=200 -
/QUEUE\_LIMIT=64 -
/ENQUEUE\_LIMIT=3000 -
/MAXIMUM\_WORKING\_SET=20000 -
/PAGE\_FILE=20000 -
/PRIORITY=8 -
/PRIVILEGES=(BYPASS,SYSPRV) -
/UIC=[1,4] -
/NORESOURCE
$ !
$EXIT:
$ EXIT
\end{XMPt}
The file system is now ready for mounting on {\tt hepdb}.
\begin{XMPt}{Extract from /etc/filesystems for /vxcplear}
/vxcplear:
dev = /vxcplear/cdcplear
vfs = nfs
nodename = uxcp05
mount = true
options = bg,hard,intr
\end{XMPt}
\Filename{H2hdbappen-Setting-up-a-new-server-on-hepdb}
\section{Setting up a new server on {\tt hepdb}}
The {\tt hepdb} system is a dedicated IBM RS6000 that only runs
{\tt HEPDB} servers and associated jobs. The database files
are maintained in the {\tt /hepdb} file system. This is exported via NFS
and should be mounted on other Unix systems, such as {\tt CSF},
as follows:
\begin{XMPt}{Mounting the /hepdb file system}
mount hepdb:/hepdb /hepdb
\end{XMPt}
Before creating a new server, the account must be registered for
service {\bf AFS} using {\bf USERREG}. The account should be
the letters {\bf cd} followed by the name of the experiment,
e.g. {\bf cdatlas, cdnomad, cdna49}.
Once the account has been centrally registered for {\bf AFS}, one should
create an account on the {\bf HEPDB} machine, using the UID and GID
allocated by {\bf USERREG} and visible through {\bf XWHO}.
Finally, the following script is run to create the necessary
directory structure and dummy configuration files.
\begin{XMPt}{Creating the files and directories for a new server}
#
# Setup the directory and file structure for a new
# server
#
iam=`whoami`
#
# Are we root?
#
if [ "$iam" != "root" ]
then
echo $0: This script must be run from root
exit
fi
#
# Did we get any arguments?
#
if [ $# != 1 ]
then
echo $0: usage $0 group
exit
fi
#
# Does this directory exist?
#
if [ -d /hepdb/$1 ]
then
echo $0: Directory /hepdb/$1 already exists
exit
fi
#
# No, so make it
#
mkdir /hepdb/$1
#
# and the subdirectories...
#
mkdir /hepdb/$1/bad
mkdir /hepdb/$1/log
mkdir /hepdb/$1/queue
chmod o+w /hepdb/$1/queue
mkdir /hepdb/$1/todo
mkdir /hepdb/$1/save
ls -F /hepdb/$1
#
# now create the names file
#
echo :nick.config > /hepdb/$1/hepdb.names
echo :list.aa >> /hepdb/$1/hepdb.names
echo :log./hepdb/$1/log >> /hepdb/$1/hepdb.names
echo :queue./hepdb/$1/queue >> /hepdb/$1/hepdb.names
echo :todo./hepdb/$1/todo >> /hepdb/$1/hepdb.names
echo :save./hepdb/$1/save >> /hepdb/$1/hepdb.names
echo :bad./hepdb/$1/bad >> /hepdb/$1/hepdb.names
echo :loglevel.3 >> /hepdb/$1/hepdb.names
echo :wakeup.60 >> /hepdb/$1/hepdb.names
echo :nick.aa >> /hepdb/$1/hepdb.names
echo :file./hepdb/$1/aa.dbs >> /hepdb/$1/hepdb.names
echo :desc.Description of the database >> /hepdb/$1/hepdb.names
echo :servers. >> /hepdb/$1/hepdb.names
cat /hepdb/$1/hepdb.names
#
# Link the server script
#
ln -s /cern/new/bin/cdserv.sh /hepdb/$1/cdserv
#
# and the server module
#
ln -s /cern/new/bin/cdserv /hepdb/$1/cdsrv
\end{XMPt}
The servers are started at boot time by adding an entry to the
file /etc/inittab as follows:
\begin{XMPt}{Extract from /etc/inittab}
rcnfs:2:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
hepdb:2:wait:/etc/rc.hepdb > /dev/console 2>&1 # Start HEPDB
cons:0123456789:respawn:/etc/getty /dev/console
\end{XMPt}
This invokes the following script:
\begin{XMPt}{rc.hepdb}
#!/bin/sh
#
# Start HEPDB servers
#
#
if [ -x /cern/pro/bin/cdstart ]
then
echo Start HEPDB servers ...
su - hepdb /cern/pro/bin/cdstart 2>&1
fi
\end{XMPt}
One may execute {\tt cdstart} at any time, as it will only
restart servers that are not already running.
\begin{XMPt}{cdstart script}
#!/bin/ksh
start=" "
stop=" "
run=" "
log=" "
scr=" "
b="."
#
# Ensure that variables are defined...
#
for i in /hepdb/cd*
do
echo
typeset -u cdgrp
cdpath=$i
cdgrp=`basename $i`
echo Setting $cdgrp to $cdpath ...
eval $cdgrp=$cdpath;export $cdgrp
#
# and start the servers
#
if [ -x ${i}/cdserv ]
then
#
# does a log file exist?
#
if [ -f /hepdb/${cdgrp}.log ]
then
echo '>>> log file exists - looking for existing process'
log=${log}${b}${cdgrp}
pid=`cat /hepdb/${cdgrp}.log | awk '{printf "%s\\n",$13}'`
if (test $pid)
then
echo Looking for server process for $cdgrp
if(ps -ae | grep -s $pid )
then
echo CDSRV running PID = $pid
run=${run}${b}${cdgrp}
else
echo No existing server found for $cdgrp - starting server
if [ -f ${i}/todo/signal.stop ]
then echo but signal.stop file found!
else echo Starting server for $cdgrp
nohup ${i}/cdserv ${cdgrp} > $i/cdserv.log &
start=${start}${b}${cdgrp}
fi
fi
else
echo No existing server found for $cdgrp - starting server
if [ -f ${i}/todo/signal.stop ]
then echo but signal.stop file found!
stop=${stop}${b}${cdgrp}
else echo Starting server for $cdgrp
nohup ${i}/cdserv ${cdgrp} > $i/cdserv.log &
start=${start}${b}${cdgrp}
fi
fi
else
echo No server log found in $i
if [ -f ${i}/todo/signal.stop ]
then echo but signal.stop file found!
stop=${stop}${b}${cdgrp}
else echo Starting server for $cdgrp
nohup ${i}/cdserv ${cdgrp} > $i/cdserv.log &
start=${start}${b}${cdgrp}
fi
fi
else
echo No cdserv script found in $i - cannot start server
scr=${scr}${b}${cdgrp}
fi
done
echo
echo Log files found for $log | tr '.' ' '
echo Started servers for $start | tr '.' ' '
echo Servers already running for $run | tr '.' ' '
echo Servers stopped $stop | tr '.' ' '
echo No scripts found for $scr | tr '.' ' '
\end{XMPt}
The servers can be checked by running the following script:
\begin{XMPt}{Looking for running servers}
echo 'HEPDB server Elapsed CPU time %CPU'
echo '==============================================================================='
ps -aef -F "args,etime,time,pcpu" | grep "/cdsrv" | sort +2 -r
\end{XMPt}
\begin{XMPt}{Output from the above script}
HEPDB server Elapsed CPU time %CPU
===============================================================================
/hepdb/cdnomad/cdsrv 7-02:19:29 00:04:44 0.0
/hepdb/cdchorus/cdsrv 7-02:19:29 00:04:43 0.0
/hepdb/cdcplear/cdsrv 7-02:19:29 00:04:41 0.0
\end{XMPt}
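As can be seen from the {\tt cdstart} and {\tt CDCHECK} procedures above,
a server is not restarted automatically if a {\tt signal.stop} file exists
in its {\tt todo} directory. A server can thus be kept down, e.g. during
an intervention on its database files, roughly as follows (the group
name is just an example):
\begin{XMPt}{Preventing a server from being restarted automatically}
touch /hepdb/cdcplear/todo/signal.stop
#
# ... intervention ...
#
rm /hepdb/cdcplear/todo/signal.stop
/cern/pro/bin/cdstart
\end{XMPt}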
\Filename{H1hdbappen-Examples-of-of-the-flow-of-journal-files}
\chapter{Examples of the flow of journal files}
\Filename{H2hdbappen-Updating-a-database-on-node-hepdb}
\section{Updating a database residing on node {\tt hepdb} from a Unix system at CERN}
In this simple example, the journal file is written directly to the directory
pointed to by the {\tt :queue} tag in the {\tt hepdb.names} file. As the
server is the database master, the {\tt :queue} and {\tt :todo} tags point to
the same directory. Whilst the journal file is being written, the reserved prefix
{\tt zz} is used. As soon as it is complete, the file is renamed to have the
correct database prefix so that the server, which can of course handle several
databases for the same experiment, can identify which database file is to be updated.
After the update has been processed, the master server sends the new journal file
to all slave servers.
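The file naming convention described above can be illustrated as
follows. The file names are invented, but follow the pattern used by
the servers and by the {\tt CDHEPDB} program listed earlier.
\begin{XMPt}{Illustrative journal file names in the queue directory}
zz9402211630123456.someuser_somenode   (still being written)
ge9402211630123456.someuser_somenode   (complete, geometry database "ge")
\end{XMPt}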
\Filename{H2hdbappen-Updating-a-database-residing-on-node-hepdb-from-CERNVM}
\section{Updating a database residing on node {\tt hepdb} from CERNVM}
In this case the update is first sent to the service machine {\tt CDHEPDB}.
This machine receives the file and transfers it to the {\tt todo} directory
of the appropriate server on {\tt hepdb}. Once the update has been processed,
the new journal file will be written to a {\tt tovm} directory and transferred
back. This file will then be sent to the local slave server on CERNVM and
any remote servers on Bitnet nodes.
\Filename{H2hdbappen-Updating-a-database-on-node-hepdb-from-VMS}
\section{Updating a database residing on node {\tt hepdb} from a VMS system at CERN}
This is similar to the above, except that the journal file is written to a special
{\tt tohepdb} directory on the local VMS system. A batch job periodically scans
this directory and copies any files found over NFS to {\tt hepdb}. Once again,
once the update has been processed the new journal file is copied back and placed
in the {\tt todo} directory of the local VMS slave server.
\Filename{H2hdbappen-Updating-a-database-residing-on-hepdb-from-remote-VM-system}
\section{Updating a database residing on node {\tt hepdb} from a remote VM system}
This is the same as for the CERNVM case, except that the names file on the remote
system points to {\tt CDHEPDB at CERNVM}, rather than simply {\tt CDHEPDB}.
\chapter{Hardware configuration of node HEPDB}
The central HEPDB server at CERN is an RS6000 model 320.
It has an internal disk, used only for the system, and
two external disks, used for the /hepdb and /backdb filesystems.
\begin{XMPt}{HEPDB disk configuration}
name status location description
hdisk0 Available 00-01-00-00 320 MB SCSI Disk Drive
hdisk1 Available 00-01-00-30 Other SCSI Disk Drive
hdisk2 Available 00-01-00-40 Other SCSI Disk Drive
\end{XMPt}
\Filename{H1hdbappen-Return-codes}
\chapter{Return codes}
\begin{XMP}
+-----+------------------------------------------------+--------------+
|Error| Meaning | Routine Name |
|Code | | |
+-----+------------------------------------------------+--------------+
| -1 |Invalid top directory name | CDINIT |
| -2 |The file is already open with correct LUNRZ and | CDINIT |
| |TOPNM | |
| -3 |The file is already open with wrong LUNRZ or | CDINIT |
| |TOPNM | |
| -5 |Invalid process name in Online context | CDINIT |
| -6 |Error in IC_BOOK for booking the CACHE | CDINIT |
| -7 |Error in CC_SETUP for reserving the CLUSCOM | CDINIT |
| -8 |Error in opening journal file in server mode | CDFOPN |
| -9 |Unable to open FZ communication channel | CDINIT |
| -10 |Host unable to open RZ file | CDINIT |
+-----+------------------------------------------------+--------------+
| 1 |Illegal character option |CDUSEDB/CDUSEM|
| 2 |Illegal path name |CDGETDB/CDUSE/|
| | |CDUSEM |
| 3 |Data base structure in memory clobbered |CDUSE/CDUSEDB/|
| | |CDUSEM |
| 4 |Illegal key option |CDUSE/CDUSEDB/|
| | |CDUSEM |
| 5 |Error in CDCHLD in P3 communication | CDUSP3 |
+-----+------------------------------------------------+--------------+
| 12 |Illegal pathname | CDNODE |
| 13 |Not enough structural link to support a new Node| CDNODE |
| 15 |Cannot define IO descriptor for Key bank | CDNODE |
+-----+------------------------------------------------+--------------+
| 21 |Too many keys with option M | CDKMUL |
| 22 |Illegal key option | CDKMUL |
| 24 |No Key bank created satisfying key options for | CDBKKS |
| |option S | |
| 25 |Illegal Path Name | CDBKKS |
+-----+------------------------------------------------+--------------+
| 31 |Illegal path name or path name in node bank |CDCHCK/CDKXIN/|
| |is wrong |CDPRIN |
| 32 |No keys/data in this directory |CDCHCK/CDGETDB|
| | |CDPRIN |
| 33 |No valid data for the given range of insertion | CDKXIN |
| |time or for the given set of keys and program | |
| |version number | |
| 34 |RZIN fails to read the data | CDRZIN |
| 35 |Wrong reference to data objects in update mode | CDKXIN |
| 36 |Data bank address zero on return from CDKXIN | CDCHCK |
| 37 |Insufficient space in USER store array | CDCHCK |
| 38 |Read error in getting the RZ date and time | CDPRDT |
| 39 |Illegal data type in the key descriptor | CDPRKY |
+-----+------------------------------------------------+--------------+
| 43 |Too many key elements | CDMDIR |
| 44 |Cannot find the top directory name | CDMDIR |
| |(wrong initialization) | |
| 45 |Illegal Path name | CDMDIR |
| 47 |The Directory already exists | CDMKDI |
| 48 |Error in directory search sequence | CDMKDI |
| 49 |FZOUT fails to write on the sequential file | CDSDIR |
+-----+------------------------------------------------+--------------+
| 51 |Illegal character option | CDFRDB |
| 52 |No access to the Key banks | CDFRDB |
| 54 |Pathname not matched to that found in bank NODB | CDFRDB |
| 57 |Illegal pathname | CDFRDB |
| 58 |Database structure in memory clobbered | CDFRDB |
| 59 |Some of the expected key banks not found | CDFRDB |
+-----+------------------------------------------------+--------------+
| 61 |Too many keys |CDENTB/CDREPL |
| 62 |Illegal character option |CDREPL/CDSTOM |
| 63 |Data base structure in memory clobbered |CDREPL/CDSTOR |
| 64 |Error in MZCOPY while copying Data bank |CDREPL/CDSTOR |
| 65 |Illegal number of data objects | CDSTOM |
| 66 |Illegal logical unit number |CDATOI/CDRHLP |
| 67 |File too long; no space in buffer | CDATOI |
| 68 |Input directory is partitioned | CDPART |
| 69 |Input directory is not partitioned | CDPURP |
| 70 |Error in deleting a partition through RZDELT | CDPURP |
+-----+------------------------------------------------+--------------+
| 71 |Illegal path name |CDDONT/CDENFZ/|
| | |CDENTB/CDFZUP/|
| | |CDKOUT/CDPART/|
| | |CDPURP/CDRTFZ |
| 72 |Read error on the FZ file (journal file) |CDENFZ/CDFZUP |
| 73 |RZOUT fails to write on disk |CDDONT/CDENFZ/|
| | |CDENTB/CDKOUT/|
| | |CDPART/CDPURP |
| 74 |Error in RZRENK in updating key values for |CDENFZ/CDENTB/|
| |partitioned data set |CDKOUT/CDPART/|
| | |CDPURP |
| 76 |Cannot form the IO descriptor for the FZ header |CDDONT/CDENTB/|
| | |CDFZUP/CDFZWR/|
| | |CDKOUT/CDPART |
| 77 |FZOUT fails to write on the sequential journal |CDDONT/CDENFZ/|
| |file |CDENTB/CDFZWR/|
| | |CDKOUT/CDPART/|
| | |CDPURP |
| 78 |Illegal number of keys on data base/journal file|CDFZUP/CDKOUT |
| 79 |Top directory name illegal in the FZ file | CDFZUP |
+-----+------------------------------------------------+--------------+
| 81 |Precision is not correctly given | CDUCMP |
| 82 |Illegal Data Type | CDUCMZ |
| 83 |Data update but uncompressed | CDUNCP |
| 84 |The update structure has different number of | CDUNCP |
| |data words | |
| 85 |No data in the structure | CDUNCP |
| 86 |The update structure has different data type | CDUNCP |
+-----+------------------------------------------------+--------------+
| 91 |Illegal Character Option | CDOPTS |
| 92 |Nonstandard IO descriptor | CDFRUS |
| 93 |Illegal time |CDPKTM/CDUPTM |
| 94 |Nonmatching NPAR's in different UPCD banks | CDVALID |
| 95 |Description not found in the dictionary | CDLDIC |
| 96 |RZCDIR fails to set to the current directory | CDLDUP |
| 97 |No matching UPCD bank found |CDLDUP/CDVALID|
| 98 |Invalid path name in Node bank | CDSTAT |
| 99 |No space in memory for creating the bank |CDBANK/CDRZIN |
+-----+------------------------------------------------+--------------+
| 111 |Illegal path name |CDPURG/CDPURK |
| 112 |No key or data for the path name |CDPURG/CDPURK |
| 113 |Illegal character option | CDPURK |
| 114 |Valid data object(s) in the Node/Key structure | CDPURK |
| 115 |Cannot form the IO descriptor for the FZ header | CDSPUR |
| 116 |FZOUT fails to write on the sequential file | CDSPUR |
+-----+------------------------------------------------+--------------+
| 131 |Illegal pathname (in key bank for CDLAST) |CDLAST/CDLKEY/|
| | |CDLMOD |
| 132 |Illegal number of keys in the directory |CDLAST/CDLKEY |
| | |CDLMOD |
| 135 |Illegal Top directory name |CDFZOP/CDILDU |
| 136 |Illegal logical unit number |CDILDF/CDILDU/|
| | |CDJOUR |
+-----+------------------------------------------------+--------------+
| 140 |Illegal top directory name | CDUDIC |
| 141 |Error in creating the DICTIONARY/HELP directory | CDUDIC |
| 142 |Error in RZ in writing the dictionary object |CDCDIC/CDUDIC |
| 143 |Error in RZ in purging the dictionary directory |CDCDIC/CDUDIC |
| 144 |Dictionary directory cannot be loaded | CDCDIC |
| 145 |Pathname already exists in the dictionary | CDCDIC |
| 146 |Illegal path name |CDDINF/CDEALI |
| | |CDEHLP/CDENAM/|
| | |CDGNAM/CDRHLP/|
| | |CDRNAM |
| 147 |Dictionary directory not found in memory |CDEALI/CDGNAM/|
| | |CDRNAM |
| 148 |FZOUT fails to write on the sequential file |CDEALI/CDSNAM |
| 149 |Error in RZ for writing to the R.A. file |CDEALI/CDSNAM |
| 150 |Illegal number of data words | CDENAM |
| 151 |No description of data elements for the given |CDGNAM/CDRNAM |
| |path name exists in the data base | |
| 152 |Illegal flag (IFLAG) | CDSNAM |
| 153 |FZIN error for reading the data structure | CDSNAM |
| 154 |Illegal alias name for a directory | CDRALI |
| 155 |No HELP directory inside the data base | CDRHLP |
| 156 |No help information for this path stored yet | CDRHLP |
+-----+------------------------------------------------+--------------+
| 171 |Illegal Path name | CDDDIR |
| 172 |Cannot find the top directory for the path name | CDDDIR |
| 173 |Error in RZ for reading the dictionary object | CDDDIR |
| 174 |Error in FZOUT for saving the journal file | CDDDIR |
| 175 |Error in RZ in writing the dictionary object | CDDDIR |
| 176 |Error in RZ in purging the dictionary directory | CDDDIR |
| 177 |Error in RZ in deleting the tree | CDDDIR |
| 178 |Error in RZ in deleting Name/Help information | CDDDIR |
+-----+------------------------------------------------+--------------+
| 191 |Illegal path name | CDRENK |
| 192 |Specified key elements do not match with any of | CDRENK |
| |the existing set of keys | |
| 194 |Cannot form the IO descriptor for the FZ header | CDRENK |
| 195 |FZOUT fails to write on the sequential journal | CDRENK |
| |file | |
| 196 |Error in RZRENK in updating key values | CDRENK |
| |partitioned data set | |
+-----+------------------------------------------------+--------------+
| 211 |Illegal number of paths | CDKEEP |
| 212 |Illegal path name |CDFPAT/CDKEEP |
| 213 |Conflicting top directory names | CDKEEP |
+-----+------------------------------------------------+--------------+
| 221 |Error in CC_WRITELOCK for locking CLUSCOM (VAX);| CDWLOK |
| 222 |Error in CC_RELEASE for releasing CLUSCOM (VAX) | CDCWSV |
| 223 |Error in IC_SIGNAL for signalling the VAX Server| CDCWSV |
| 225 |Error in sending spool file to the server (IBM | CDSTSV |
| |or APOLLO) | |
+-----+------------------------------------------------+--------------+
\end{XMP}
\Filename{H1hdbappen-Format-for-FZ-output}
\chapter{Format for FZ output}
HEPDB can create a journal file and can also update a data base from
the corresponding journal file. The journal file format is defined as an
FZ record consisting of a header and the data part. The format is
general enough and can also be used for the communication between the
server and a process which wants to update the data base.
The data part of the FZ record is relevant only for data to be
entered. It is exactly the same data structure as input to \Lit{DBENTR}. For
efficiency reasons, HEPDB stores in its own journal file the data
structure as input to the \Rind{RZOUT} call. This difference can easily be
recognised from the value of \Lit{KEY(1)}, which is zero for an outside source
and nonzero for HEPDB's own journal file.
The header part has a very similar structure for the eight actions
foreseen so far, namely entering data, creating new directories, deleting
data objects, deleting a directory tree, renaming the keys, entering
names of data elements or help information for a directory, entering
an alias name for a directory and deleting a few partitions in a
partitioned directory. However, they differ in detail and the eight
different types of FZ headers are listed below.
\begin{XMP}
Header for entering data :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=1) |
| 2 | NWKEY | I | Number of key elements |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name |
| 5 | IPREC | I | Precision chosen for packing |
| | | | (see DBENTR) |
| 6 | KEY(1) | I | Key element 1 |
| .. | ... | .. | ........ |
| NWKEY+5 |KEY(NWKEY)| .. | Key element NWKEY |
| NWKEY+6 | CHOPT | H | Character option |
| .. | .. | H | |
| NWKEY+6 | PATHN | H | Path name |
| +NWDOP | | | |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for creating directories :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=2) |
| 2 | NWKEY | I | Number of key elements |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name |
| 5 | MXKP | I | Maximum number of objects inside one |
| | | | partition (see DBMDIP) |
| 6 | INSTM | I | Insertion time packed up to minutes |
| | | | (see DBPKTM) |
| 7 | NRECD | I | Unused at this moment |
| 8 | CHOPT | H | Character option (e.g., 'P' for a |
| .. | ... | .. | partitioned directory) |
| NDOP+8 | CHFOR | H | Description of key element type. This |
| .. | .. | .. | information is stored in NCFO = (NWKEY |
| .. | .. | .. | +3)/4 words |
| NDOP+8 | CHTAG | H | Tags for each key element. This info. |
| +NCFO | .. | .. | is stored in NTAG = 2*NWKEY words. |
|NDOP+NCFO | PATHN | H | Path name |
| +NTAG+8 | | | |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for deleting objects :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=3) |
| 2 | NWKEY | I | Number of key elements |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name |
| 5 | NPARS | I | Number of pairs of validity range (set |
| | | | for CDPURK) or -1 for CDPURG |
| 6 | INSTM | I | Deletion time packed up to minutes |
| | | | (see DBPKTM) |
| 7 | ISEL(1) | I | The objects to be selected using the |
| .. | ... | | validity criteria in CDPURK |
| NPARS+6 | ISEL(n) | I | |
| NPARS+7 | KEY(1) | I | Key element 1 for CDPURK |
| | ... | .. | ......... |
| NENDK | KEY(n) | .. | Key element NWKEY for CDPURK |
| 7 | KYDAT | I | To be used for CDPURG |
| 8 | KYTIM | I | To be used for CDPURG |
| .. | ... | .. | |
| NENDK | | | NWKEYth word following KYDAT for CDPURG|
| NENDK+1 | CHOPT | H | Character option |
| .. | ... | .. | |
| NENDK+1 | PATHN | H | Path name |
| +NWDOP | | | |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for deleting directories :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=4) |
| 2 | --- | I | Unused (set to 0) |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name |
| 5 | --- | I | Unused (set to 0) |
| 6 | INSTM | I | Deletion time packed up to minutes |
| | | | (see DBPKTM) |
| 7 | CHOPT | H | Character option |
| NWDOP+7 | PATHN | H | Path name |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for renaming keys :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=5) |
| 2 | NWKEY | I | Number of key elements |
| 3 | NWDOP | I | Number of words for CHOPT (= 0) |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name |
| 5 | Unused | I | Set to zero |
| 6 | KYO(1) | I | Old key element 1 |
| .. | ... | .. | ........ |
| NWKEY+5 |KYO(NWKEY)| .. | Old key element NWKEY |
| NWKEY+6 | KYN(1) | I | New key element 1 |
| .. | .. | .. | ........ |
|2*NWKEY+5 |KYN(NWKEY)| .. | New key element NWKEY |
|2*NWKEY+6 | PATHN | H | Path name |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for entering/deleting names or help information :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=6) |
| 2 | NWKEY | I | Number of key elements |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4 | NDOP | I | Number of words used to store the |
| | | | path name (DICTIONARY or HELP) |
| 5 | IFLAG | I | Flag (1 for help information; 2 for |
| | | | names of the data elements) |
| 6 | KEY(1) | I | Key element 1 ( = Identifier of path) |
| .. | ... | .. | ........ |
| NWKEY+5 |KEY(NWKEY)| .. | Key element NWKEY |
| NWKEY+6 | CHOPT | H | Character option |
| .. | ... | .. | ........ |
| NWKEY+ | | | |
| NWDOP+6 | PATHN | H | Path name (DICTIONARY or HELP) |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for entering the alias name :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=7) |
| 2 | --- | I | Unused (set to 0) |
| 3 | NWDOP | I | Number of words used to store CHOPT(=0)|
| 4        | NDOP     | I    | Number of words used to store the      |
|          |          |      | path name of the dictionary            |
| 5 | IFLAG | I | Flag (0 means temporary; 1 permanent) |
| 6 | NWDP | I | Number of words used to store the |
| | | | path name |
| 7 | PATHD | H | Path name of the dictionary |
| .. | .. | H | |
| NDOP+7 | ALIAS | H | Alias name |
| .. | .. | H | |
| NDOP+9 | PATHN | H | Path name of the directory |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
Header for deleting a few partitions in a partitioned directory :
+----------+----------+------+----------------------------------------+
|Word Count| Mnemonic | Type | Content |
+----------+----------+------+----------------------------------------+
| 1 | IACT | I | Action code (=8) |
| 2 | --- | I | Unused (set to 0) |
| 3 | NWDOP | I | Number of words used to store CHOPT |
| 4        | NDOP     | I    | Number of words used to store the      |
| | | | path name |
| 5 | INSTM | I | Deletion time packed up to minutes |
| | | | (see CDPKTM) |
| 6 | NKEEP | I | Number of partitions to be kept |
| 7 | CHOPT | H | Character option |
| NWDOP+7 | PATHN | H | Path name of the directory |
| .. | .. | H | |
+----------+----------+------+----------------------------------------+
\end{XMP}
The bank structure created in memory by HEPDB is shown below.
\begin{verbatim}
(3) +-------\
+---------| FZDB > List of directories to be updated
| +-------/
|
+--------\ +--------\
| UPDB >-----| UPDB > Support for all top directories opened
+--------/ +--------/
| |
| | (2) +--------\
| +---------| DICT > Dictionary information
| +--------/
|
| (1) +--------\
+-------------| NODB > Node bank for the top directory
+--------/
|..|..||
|
|
+--------\
| NODB >
+--------/
|....|||
| Node bank of subdirectory for which data
+--------\ +--------\ +--------\ is retrieved
| NODB >--| KYDB >--| KYDB >
+--------/ +--------/ +--------/ Key banks
| (1) | (1)
+--------+ +--------+
| DATA | | DATA |
+--------+ +--------+
\end{verbatim}
\begin{XMPt}{Bank description}
========================================================================
|Bank: UPDB Top level bank |
|NL_/NS_ = 2/2 IO_ = '8I -H' |
|NW_ = 12 |
+----------------------------------------------------------------------+
|LINKS: |
|link type bank offset |
|---- ---- ---- ------ |
| -3 Ref FZDB KLFZDB ( 3) |
| -2 Str DICT KLDICT ( 2) |
| -1 Str NODB |
| 0 nxt UPDB of the next data base file |
+----------------------------------------------------------------------+
|DATA WORDS: |
|word type contents offset |
|---- ---- -------- ------ |
| 1 I Logical unit number of RZ file MUPLUN ( 1) |
| 2 I Flag if database to be updated (0 if not) MUPFLG ( 2) |
| 3 I Logical unit number of standard journal file MUPJFL ( 3) |
| 4 I Logical unit number of special backup file MUPBAK ( 4) |
| 5    I    Identifier of the top directory                 MUPDIC ( 5) |
| 6 I Number of characters in the top directory MUPNCH ( 6) |
| name |
| 7 I Shared/server flag (IOPS*10 + IOPP) MUPSRV ( 7) |
| (IOPS = 1 if S option in DBINIT; |
| IOPP = 1 if P option in DBINIT) |
| 8 I Maximum insertion time for subsequent MUPKY7 ( 8) |
| object retrieval |
|9-12 H Name of the top directory MUPNAM ( 9) |
+----------------------------------------------------------------------+
========================================================================
|Bank: DICT Dictionary bank |
|NL_/NS_ = 0/0 IO_ = '1I /3I 22H' |
|NW_ = 1 + 25*n |
+----------------------------------------------------------------------+
|DATA WORDS: |
|word type contents offset |
|---- ---- -------- ------ |
| 1 I Number of nodes in the dictionary MDCNTM ( 1) |
| For each node (Node number n) |
|IOFF+ (= (n-1)*NWITDB + 1) (NWITDB = 25) |
| 1 I Unique identifier of the node MDCITM ( 1) |
| 2 I Number of characters for describing the path MDCNCH ( 2) |
| to the node |
| 3    I    Last update to the node (not available yet)     MDCLUP ( 3) |
| 4-5 H Alias name MDCALI ( 4) |
|6-25 H Name of the path to the node (excluding the MDCNAM ( 6) |
| top directory part) |
+----------------------------------------------------------------------+
========================================================================
|Bank: NODB Node bank |
|NL_/NS_ = NS_/(number of down nodes) IO_ = '4I 16B -H' |
|NW_ = 20 + words needed for path name |
+----------------------------------------------------------------------+
|LINKS: |
|link type bank offset |
|---- ---- ---- ------ |
| -n Str NODB (next level node) |
| 0 nxt KYDB of the first key bank to the node KLDYDB ( 0) |
+----------------------------------------------------------------------+
|DATA WORDS: |
|word type contents offset |
|---- ---- -------- ------ |
| 1 I Number of key elements for this node MNDNWK ( 1) |
| 2 I Total number of data words in the Key bank MNDNWD ( 2) |
| 3 I Number of characters describing the path to MNDNCH ( 3) |
| the node |
| 4 I Unique identifier of this node MNDDIC ( 4) |
|5-20 B IO descriptor of the Key bank MNDIOF ( 5) |
|21-.. H Name of the path to the node MNDNAM (21) |
+----------------------------------------------------------------------+
========================================================================
|Bank: KYDB Key bank |
|NL_/NS_ = 3/1 IO_ = Dynamic |
|NW_ = NWKEY + NWFXM(=6) |
+----------------------------------------------------------------------+
|LINKS: |
|link type bank offset |
|---- ---- ---- ------ |
| -2 Ref UPDB (Top level bank) KLUPDB ( 3) |
| -2 Ref NODB (parent node bank) KLNODB ( 2) |
| -1 Str Data bank KLDADB ( 1) |
| 0 nxt KYDB of the next key bank |
+----------------------------------------------------------------------+
|DATA WORDS: |
|word type contents offset |
|---- ---- -------- ------ |
| 1 I Serial number of the object |
| 2    I    Reference to the master object (for update)                |
| 3    I    Start validity time (up to seconds)                        |
| 4    I    End validity time (up to seconds)                          |
| 5 I Source identifier |
| 6 I Flag for storing the object (internal to HEPDB) |
| Bit JRZUDB (=1) : Full RZ option |
| JIGNDB (=2) : Ignore the object |
| JPRTDB (=3) : Directory is partitioned |
| JASFDB (=4) : Specially encoded ASCII |
| 7    I    Insertion time (up to minutes)                              |
|8-NWKEY User keys |
|NWKEY+1 I  Logical end validity time (up to seconds)                  |
|NWKYDB+ |
| -4 I Number of physical reads to disk for this key MKYRID (-4) |
| -3 I Number of calls to DBUSE in the same event MKYCEV (-3) |
| -2 I Number of calls to DBUSE in the entire run MKYCRU (-2) |
| -1 I Precision used for storing the object MKYPRE (-1) |
| 0 I Free flag (set by DBFREE call) MKYFRI ( 0) |
+----------------------------------------------------------------------+
========================================================================
|Bank: FZDB List of directories to be updated |
|NL_/NS_ = 0/0 IO_ = '-H' |
|NW_ = 4 + 20*n |
+----------------------------------------------------------------------+
|LINKS: |
|link type bank offset |
|---- ---- ---- ------ |
| 0 nxt FZDB of the next data base file |
+----------------------------------------------------------------------+
|DATA WORDS: |
|word type contents offset |
|---- ---- -------- ------ |
| 1-4 H Top directory name MFZTOP ( 1) |
| For each directory (number n) |
|IOFF+ (= (n-1)*(MXLWDB+1) + MFZDIR) (MXLWDB = 20; MFZDIR = 5) |
| 1 I Number of characters in the path |
|2-21 H Complete pathname of the directory or the root |
+----------------------------------------------------------------------+
\end{XMPt}
| {
"alphanum_fraction": 0.5576314404,
"avg_line_length": 38.8224522293,
"ext": "tex",
"hexsha": "51a05ed7fb87154ad71576a6548628dad4db614a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "berghaus/cernlib-docs",
"max_forks_repo_path": "hepdb/hdbappen.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "berghaus/cernlib-docs",
"max_issues_repo_path": "hepdb/hdbappen.tex",
"max_line_length": 86,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "berghaus/cernlib-docs",
"max_stars_repo_path": "hepdb/hdbappen.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z",
"num_tokens": 45389,
"size": 146283
} |
\filetitle{doubledata}{Convert tseries observations to double precision}{tseries/doubledata}
\paragraph{Syntax}\label{syntax}
\begin{verbatim}
x = doubledata(x)
\end{verbatim}
\paragraph{Input arguments}\label{input-arguments}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
  \texttt{x} {[} tseries {]} - Tseries object whose observations will be
  converted to double precision.
\end{itemize}
\paragraph{Output arguments}\label{output-arguments}
\begin{itemize}
\itemsep1pt\parskip0pt\parsep0pt
\item
\texttt{y} {[} numeric {]} - Tseries object with double-precision
observations.
\end{itemize}
\paragraph{Description}\label{description}
\paragraph{Example}\label{example}
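
For example (an illustrative note, not part of the original documentation): if
\texttt{x} is a tseries object whose observations are stored in a lower
precision, calling \texttt{x = doubledata(x)} returns the same series with its
observations converted to class \texttt{double}.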
| {
"alphanum_fraction": 0.7606232295,
"avg_line_length": 20.7647058824,
"ext": "tex",
"hexsha": "56324c63f1b7e133504908079fa7c23f5aa70557",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z",
"max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_forks_repo_path": "-help/tseries/doubledata.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_issues_repo_path": "-help/tseries/doubledata.tex",
"max_line_length": 96,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave",
"max_stars_repo_path": "-help/tseries/doubledata.tex",
"max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z",
"num_tokens": 205,
"size": 706
} |
\documentclass[a4paper,12pt]{article} % My Personal Preference
\usepackage[table,xcdraw]{xcolor} % Prettier Tables
\usepackage[utf8]{inputenc} % Makes input utf8 (I think?)
\usepackage{amsmath} % All maths related things
\usepackage{amssymb} % Maths symbols
\usepackage{cancel} % Show cancellations in maths
\usepackage{graphicx} % Use images in your report
\usepackage{fancyhdr}           % Make very nice looking header and footers for your report
\usepackage{titlesec} % Change the way the default titles/heading look
\usepackage{float} % Make your pictures float/set where you want them
\usepackage[titletoc]{appendix} % Nicer appendices - acts as a section, own environment, etc.
\usepackage{siunitx} % Use SI Units in a readable manner
\usepackage{pgfplots} % Add graphs very simply
\usepackage{caption} % Options for captions
\usepackage{pdflscape}          % Make landscape pages display in a PDF in landscape, instead of sideways portrait
\usepackage{geometry}           % ability to change the sizes of a page. useful for `fullbleed' pages
\usepackage{mathrsfs} % Used for Maths-based script letters (like the R for the real number set, or F for Fourier Transform)
\usepackage{listings} % format your code for pretty display
\usepackage[american]{babel} % Does something to do with formatting based on culture
\usepackage{csquotes}
\usepackage[url=true,backend=biber,style=apa]{biblatex} % better bibliographies
\DeclareLanguageMapping{american}{american-apa}
\usepackage{nameref} % Reference names and make them links (I think?)
\usepackage[backref=true,colorlinks=true]{hyperref} % Should be last package include. Adds links that work
\usepackage[numbered,framed]{matlab-prettifier} % Pretty-print MATLAB code
\usepackage{setspace}
$if(graphics)$
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=0.9\maxwidth,height=\maxheight,keepaspectratio}
$endif$
\let\origfigure\figure
\let\endorigfigure\endfigure
\renewenvironment{figure}[1][2] {
\expandafter\origfigure\expandafter[H]
} {
\endorigfigure
}
% `caption' setup
\captionsetup{justification=centering}
% Required for syntax highlighting
$highlighting-macros$
$for(bibliography)$
\addbibresource{$bibliography$}
$endfor$
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\newcommand{\dueDate}{September 7, 2018}
%Fancy Header Formatting
% This is pretty much my standard, but I move the parts around based on
% how long titles are
\pagestyle{fancy}
\fancyhf{}
\setlength{\headheight}{15pt}
\lhead{Buchan-Swanson, David}
\rhead{\dueDate}
\lfoot{Simplifying Bioinformatic Dataset Access}
\rfoot{\thepage}
\renewcommand{\footrulewidth}{0.4pt}
\begin{document}
%--------------PREAMBLE-------------------------
% includes the title page, gives roman numerals to all pages before the first page,
\include{titlepage}
\pagenumbering{roman}
% Changes the page style to plain - only display page number - no headers
\pagestyle{plain}
\doublespacing
\section*{Executive Summary}
\addcontentsline{toc}{section}{Abstract}
{\color{cyan}
Bio-informatics is an increasingly important field, and its very nature
produces an extraordinarily large amount of data. Management and usage of
this data becomes critical in order to achieve anything meaningful
\parencite{lee_bioinformatics_2012}.
A type provider is a compiler extension for F\# which allows for types to be
programmatically generated for large datasets. This makes them easier to manage
and interact with.
The goal for this research is to simplify access to bioinformatic data,
starting with Genbank. To achieve this, the aims are to explore the
viability of a tool to simplify the processing of data, and to establish its
usefulness in the community.
To date, an initial implementation of an F\# type provider has been built,
with functionality to view taxa and genomes available on Genbank.
There is continuing work on additional functionality to reach usability by the
wider community.
Included is current progress made towards completion of the type provider,
challenges faced in the project and an outline of future work.
}
\newpage
\singlespacing
\tableofcontents
% then clears the page, and goes back to arabic numerals
\newpage
\doublespacing
\pagestyle{fancy} % sets the page style back to nice headers
\pagenumbering{arabic}
\setcounter{page}{1}
% \renewcommand{\thesection}{} % don't show a part number in the ToC
\titleformat{\section}[hang]{\Large\bfseries}{}{0ex}{}{} % No Part Specifier in title
$body$
\newpage
\printbibliography
\addcontentsline{toc}{section}{References}
\newpage
\titleformat{\section}[hang]{\Large\bfseries}{}{0ex}{{\normalfont Appendix \thesection}\hspace{.5ex}\textbar\hspace{.5ex}}{}
\pagestyle{empty}
\begin{appendices}
\begin{landscape}
\section{Gantt Chart}
\label{gantt-chart}
\begin{figure}
\centering
\makebox[\textwidth][c]{
\includegraphics[width=1.5\textwidth]{src/images/ganttChart.pdf}}%
\caption{Tasks in green are upcoming and are not due to have started
yet, blue are in progress and within time expectations, and an orange
contingency signifies between 5 and 10 days available.}
\end{figure}
\end{landscape}
\end{appendices}
\end{document}
| {
"alphanum_fraction": 0.7550268097,
"avg_line_length": 38.2564102564,
"ext": "tex",
"hexsha": "42288846f9cde6e23155848356c5f88613ac4657",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bbacd815cb58db8e62879e3cbd3460805a984cd8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "deecewan/GenbankTypeProvider",
"max_forks_repo_path": "progress-report/template.tex",
"max_issues_count": 8,
"max_issues_repo_head_hexsha": "bbacd815cb58db8e62879e3cbd3460805a984cd8",
"max_issues_repo_issues_event_max_datetime": "2018-08-09T03:59:38.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-06-04T17:58:27.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "deecewan/GenbankTypeProvider",
"max_issues_repo_path": "progress-report/template.tex",
"max_line_length": 182,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "bbacd815cb58db8e62879e3cbd3460805a984cd8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "deecewan/GenbankTypeProvider",
"max_stars_repo_path": "progress-report/template.tex",
"max_stars_repo_stars_event_max_datetime": "2018-09-30T16:49:24.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-06-02T17:08:48.000Z",
"num_tokens": 1641,
"size": 5968
} |
\section{Further Research}
While our trust network can form a basis for risk-invariant transactions in a pseudonymous and decentralized setting, more
research is required to achieve other desirable properties. Some directions for future research are outlined below.
%\subsection{Trust Transfer Algorithms}
If $Alice$ indirectly trusts $Bob$ enough to make a purchase from him, she should not directly pay him the value of the
good because then she will increase her indirect trust towards $Bob$. She first has to reduce her outgoing direct trust in
a manner such that the supposition (\ref{primetrust}) of the Risk Invariance theorem is satisfied. The methods $Alice$ can use
to recalculate her outgoing direct trust will be discussed in a future paper.
%\subsection{Dynamic Setting}
The current description of Trust Is Risk refers to a static setting where the game evolves in turns. In each turn only one
user changes the state of the network and the game is controlled by a central algorithm, the Trust Is Risk Game. In the
dynamic setting, users should be able to play simultaneously, freely join, depart or disconnect temporarily from the
network.
%\subsection{Zero knowledge}
Our network evaluates indirect trust by computing the max flow in the graph of lines-of-credit. In order to do that,
complete information about the network is required. However, disclosing the network topology may be undesirable, as
it can compromise the privacy of the participants even when they are treated pseudonymously, as deanonymisation
techniques can be used \cite{deanonymisation}. To avoid such issues, exploring the ability to calculate flows in a
zero knowledge fashion may be desirable. However, performing network queries in zero knowledge may allow an adversary
to extract topological information. More research is required to establish how flows can be calculated effectively in
zero knowledge and what bounds exist with regard to information revealed in such fashion. \cite{silentwhispers}
and its centralized predecessor, PrivPay \cite{privpay}, seem to offer invaluable insight into how privacy can be achieved.
%\subsection{Game Theoretic Analysis}
Our game theoretic analysis is simple. An interesting analysis would involve modelling repeated purchases with the
respective edge updates on the trust graph and treating trust on the network as part of the utility function.
%\subsection{Implementation}
We are proposing a concrete financial game and not a theoretical concept. Thus its implementation as a wallet on any
blockchain will be most welcome.
%\subsection{Experimental Results}
A simulation or actual implementation of Trust Is Risk, combined with analysis of the resulting dynamics can yield
interesting experimental results. Subsequently, our trust network can be used in other applications, such as decentralized
social networks \cite{synereo}.
%\subsection{Alternative Multisigs}
1-of-2 multisigs correspond intuitively to simple directed weighted graphs. However it can be interesting to explore
the trust relations that can arise by using other types of multisig, such as 1-of-3, as a vessel for multi-party direct trust
schemes. Our results do not necessarily hold for other multisigs and the simple relations now represented by directed
weighted graphs have to be revised under a new kind of representation.
| {
"alphanum_fraction": 0.8053019146,
"avg_line_length": 73.8043478261,
"ext": "tex",
"hexsha": "044a4a6e399c0cd33bd95da45263e118870285b7",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2017-08-28T06:32:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-03-07T10:49:58.000Z",
"max_forks_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "dionyziz/DecentralizedTrust",
"max_forks_repo_path": "thesis/further_research.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9",
"max_issues_repo_issues_event_max_datetime": "2017-07-31T14:42:20.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-07T12:25:26.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "dionyziz/DecentralizedTrust",
"max_issues_repo_path": "thesis/further_research.tex",
"max_line_length": 125,
"max_stars_count": 25,
"max_stars_repo_head_hexsha": "60f65bff00041e7e940491913bd4ca3f11bf22d9",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "dionyziz/DecentralizedTrust",
"max_stars_repo_path": "thesis/further_research.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-01T14:07:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-15T14:33:06.000Z",
"num_tokens": 682,
"size": 3395
} |
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={Data Journalism with R and the Tidyverse},
pdfauthor={Matt Waite},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{apalike}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{Data Journalism with R and the Tidyverse}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Matt Waite}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{2019-12-26}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{introduction}{%
\chapter{Introduction}\label{introduction}}
\hypertarget{what-well-cover}{%
\subsection{What we'll cover}\label{what-well-cover}}
\begin{itemize}
\tightlist
\item
Public records and open data
\item
Installations
\item
R Basics
\item
Replication
\item
Data basics and structures
\item
Exploratory data analysis
\item
Aggregates
\item
Mutating
\item
Filters
\item
Data smells
\item
Cleaning data I
\item
Cleaning data II
\item
Pulling data from PDFs
\item
Advanced analysis: Correlations and regressions
\item
Advanced analysis: Logistic regression
\item
Basic data scraping
\item
Intermediate data scraping
\item
Getting data from APIs: Census
\item
Visualizing for reporting: Basics
\item
Visualizing for reporting: More forms
\item
Visualizing for reporting: Faceting
\item
Geographic data basics
\item
Geographic queries
\item
Geographic visualization
\item
Text analysis basics
\item
Text analysis
\item
Writing with and about data
\item
Data journalism ethics
\end{itemize}
\hypertarget{public-records}{%
\chapter{Public records}\label{public-records}}
\hypertarget{installations}{%
\chapter{Installations}\label{installations}}
Here is a review of existing methods.
\hypertarget{methods}{%
\chapter{Methods}\label{methods}}
We describe our methods in this chapter.
\hypertarget{applications}{%
\chapter{Applications}\label{applications}}
Some \emph{significant} applications are demonstrated in this chapter.
\hypertarget{example-one}{%
\section{Example one}\label{example-one}}
\hypertarget{example-two}{%
\section{Example two}\label{example-two}}
\hypertarget{final-words}{%
\chapter{Final Words}\label{final-words}}
We have finished a nice book.
\hypertarget{data-basics}{%
\chapter{Data basics}\label{data-basics}}
\hypertarget{exploratory-data-analysis}{%
\chapter{Exploratory data analysis}\label{exploratory-data-analysis}}
\hypertarget{aggregates}{%
\chapter{Aggregates}\label{aggregates}}
\end{document}
| {
"alphanum_fraction": 0.756884343,
"avg_line_length": 24.0947867299,
"ext": "tex",
"hexsha": "d3f982455222e497814a2d708304ef4576e989d1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "417a650db9735ee303822af7cc8c41aece5f226d",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "dwillis/jour472files",
"max_forks_repo_path": "coursefiles/data_journalism_book/bookdown-demo.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "417a650db9735ee303822af7cc8c41aece5f226d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "dwillis/jour472files",
"max_issues_repo_path": "coursefiles/data_journalism_book/bookdown-demo.tex",
"max_line_length": 83,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "417a650db9735ee303822af7cc8c41aece5f226d",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "dwillis/jour472files",
"max_stars_repo_path": "coursefiles/data_journalism_book/bookdown-demo.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T18:31:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-11T18:31:42.000Z",
"num_tokens": 1575,
"size": 5084
} |
\documentclass[letterpaper]{article}
\usepackage{fullpage}
\usepackage{nopageno}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tikz}
\usepackage[utf8]{luainputenc}
\usepackage{aeguill}
\usepackage{setspace}
\tikzstyle{edge} = [fill,opacity=.5,fill opacity=.5,line cap=round, line join=round, line width=50pt]
\usetikzlibrary{graphs,graphdrawing}
\usegdlibrary{trees}
\pgfdeclarelayer{background}
\pgfsetlayers{background,main}
\allowdisplaybreaks
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
\title{Notes}
\date{9 mars, 2015}
\maketitle
\section*{7.1 genus of a graph}
last chapter we discussed ``how far'' from planar a graph was. we used the crossing \#
in the same vein, but more useful is the genus of a graph. we know that $K_{3,3}$ cannot be embedded on the plane. what about on a donut (torus)?
recall that $\text{cr}(K_5)=1$.
Now we can ``mold'' the torus into figure 7.5 on page 271.
so handles can get around crossings.
a sphere with $k$ handles is called a surface with genus $k$. the book calls it $S_k$. think a $k$ holed torus.
an easier way:
think of the torus as a plane rolled up into a tube, with edges connected. now we associate opposite edges. use arrows or something to show this
with this interpretation
\subsection*{thrm}
if $G$ is connected with $|G|=n, |E(G)|=m$ and $G$ is embedded minimally with $r$ regions, then we have $n-m+r=2-2\gamma(G)$ where $\gamma(G)$ is the minimal genus
like before we get a bound right away:
if $G$ is a connected graph with $|G|\ge 3$ then $\gamma(G)\ge \frac{m}{6}-\frac{n}{2}+1$
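quick example to see the bound in action (my own check, not from lecture): for $K_5$ we have $n=5$ and $m=10$, so
\[\gamma(K_5)\ge \frac{10}{6}-\frac{5}{2}+1=\frac{1}{6},\]
and since the genus is a nonnegative integer this forces $\gamma(K_5)\ge 1$. one handle is enough to get around the single crossing of $K_5$, so $\gamma(K_5)=1$.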
\section*{Homework}
2,8,11
\end{document}
| {
"alphanum_fraction": 0.730509515,
"avg_line_length": 29.0892857143,
"ext": "tex",
"hexsha": "b54fd5a1c425984f653b6458ba32ef63687f77b0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "ylixir/school",
"max_forks_repo_path": "graph/graph-notes-2015-03-09.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "ylixir/school",
"max_issues_repo_path": "graph/graph-notes-2015-03-09.tex",
"max_line_length": 160,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "ylixir/school",
"max_stars_repo_path": "graph/graph-notes-2015-03-09.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 524,
"size": 1629
} |
%!TEX root = ../../report.tex
\subsection{Building Envelopes [NOT DONE]} % (fold)
\label{sub:building_envelopes}
Sabri Gokmen in \cite{Gokmen2013} presents a way to create envelope systems for buildings. In this case he had an approach that was inspired by Goethean morphology and leaf venation patterns.
This article explains form as described by Johann Wolfgang von Goethe in the late eighteenth century. He started by working on annual plants through the work of Linnaeus. Goethe considered the external properties of plants to be the result of an internal principle, an idea that was influenced by the prior topological work by Linnaeus that classified plants according to their physical characteristics.
For Goethe these external properties are not constant and change over time according to environmental conditions.
Forms in architecture are described as a ``topological entity following an overall schema or as a replication of an existing type that appears fixed" (Garcia, 2010). The article says that architects often work with topologies for various buildings and that variations are achieved using topological operations. Adding to this is the idea of parametric design to create ``smooth variable systems". These parametric top-down systems are able to control the overall behaviour of a design and have been used to evaluate and adapt performance-based approaches in design solutions. ``However these systems are ineffective to provide a morphogenetic approach towards design."
Because morphology considers form as the result of a bottom-up process, it is a better method to model growth: the form is not defined from the start but is the result of the growth process.
Gokmen mixed these two ideas, namely performance-based and morphogenetic approaches. The parametric property is not applied throughout the system but in parts that grow with a morphogenetic approach.
Leaf venation patterns:
There is no single accepted theory about the guiding principles behind leaf patterns; there are many theories that try to explain them. One of them, called canalization theory, views these patterns as a distribution network that depends on the concentration and distribution of auxin producers to efficiently distribute auxin throughout the plant.
Computation of leaf venation patterns:
The growth algorithm presented in this paper is based on the canalization theory. It starts by generating a density map that guides the distribution of auxin sources, which is the second step.
Finally, it generates the leaf venation patterns from the auxin source map.
The generation of the venation patterns starts by setting root nodes that will be the points from which the patterns start growing. Then ``at each time step the closest vein node to each auxin source will be defined. Then these nodes will grow towards the average direction influenced by the auxin sources".
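To make this growth step more concrete, the following Python sketch illustrates one possible implementation of the rule quoted above (this is an illustration written for this review, not code from \cite{Gokmen2013}; the step size and the distance at which an auxin source is considered reached are assumptions):
\begin{verbatim}
import numpy as np

def grow_veins(vein_nodes, auxin_sources, step=1.0, kill_dist=2.0):
    # vein_nodes    : (n, 2) array with the positions of existing vein nodes
    # auxin_sources : (m, 2) array with the positions of the auxin sources

    # For every auxin source, find the closest existing vein node.
    dists = np.linalg.norm(auxin_sources[:, None, :] -
                           vein_nodes[None, :, :], axis=2)
    closest = np.argmin(dists, axis=1)

    new_nodes = []
    for i in np.unique(closest):
        # Average the directions from this vein node towards the auxin
        # sources that selected it, then grow one step in that direction.
        directions = auxin_sources[closest == i] - vein_nodes[i]
        directions = directions / np.linalg.norm(directions, axis=1,
                                                 keepdims=True)
        mean_dir = directions.mean(axis=0)
        mean_dir = mean_dir / np.linalg.norm(mean_dir)
        new_nodes.append(vein_nodes[i] + step * mean_dir)

    if new_nodes:
        vein_nodes = np.vstack([vein_nodes, np.array(new_nodes)])

    # Drop auxin sources that the growing veins have reached.
    dists = np.linalg.norm(auxin_sources[:, None, :] -
                           vein_nodes[None, :, :], axis=2)
    auxin_sources = auxin_sources[dists.min(axis=1) > kill_dist]
    return vein_nodes, auxin_sources
\end{verbatim}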
% subsection building_envelopes (end)
| {
"alphanum_fraction": 0.8179064288,
"avg_line_length": 102.4482758621,
"ext": "tex",
"hexsha": "d3ea8e8af875f3a8680dcf8ed6412837f46540a3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "arturalkaim/ProceduralGeneration",
"max_forks_repo_path": "sections/Works/8-Building_Envelopes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "arturalkaim/ProceduralGeneration",
"max_issues_repo_path": "sections/Works/8-Building_Envelopes.tex",
"max_line_length": 676,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "736fcb8a15291ede1db069ad968527508bc081c4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "arturalkaim/ProceduralGeneration",
"max_stars_repo_path": "sections/Works/8-Building_Envelopes.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 583,
"size": 2971
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath}
\usepackage{graphicx}
\graphicspath{{plots/}}
\setlength{\parindent}{0em}
\setlength{\parskip}{0.5em}
\title{CTA200 Assignment 2}
\author{Emily Crawley}
\date{05/05/2021}
\begin{document}
\maketitle
\section*{Question 1}
Using the approximation given for evaluating a function $f(x)$ at $x_0$ for small finite $h$, we define \texttt{deriv(f, x0, h)} which computes
$$ \frac{f(x_0 + h) - f(x_0 - h)}{2h} $$
We then define our function to be differentiated, \texttt{f(x) = sin(x)} and its analytical derivative \texttt{dfdx(x) = cos(x)}. Values of $h < 1$ were chosen ranging from 0.1 to 1e-8.
We compute the analytical derivative $\cos(0.1)$, and the numerical derivatives at each value of $h$, plotting them each against the analytical solution. Since the $h$ values vary by orders of 10 we plot the error on logarithmic axes using \texttt{pyplot.loglog}.
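A minimal sketch of this computation is shown below (for illustration only; the script actually used for the assignment may differ in details such as the exact error definition and plot labels):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def deriv(f, x0, h):
    # Central finite-difference approximation of f'(x0)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

f = np.sin
dfdx = np.cos
x0 = 0.1

h_values = np.logspace(-1, -8, 8)   # h from 0.1 down to 1e-8
errors = [abs(deriv(f, x0, h) - dfdx(x0)) / abs(dfdx(x0)) for h in h_values]

plt.loglog(h_values, errors, 'o-')
plt.xlabel('h')
plt.ylabel('relative error')
plt.show()
\end{verbatim}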
\includegraphics[scale=0.7]{plot1}
From the plot we can see that the error in the numerical solution decreases linearly for smaller values of $h$, until around $h < 10^{-6}$ when the larger relative error presumably comes from the precision limitations of the \texttt{float} type. The slope of the linear part of the graph represents how quickly the error decreases as $h$ decreases, so in some sense represents its effectiveness in terms of accuracy. That is, we want to be able to greatly decrease error with only one or two order-of-magnitude decrements of $h$, so a steeper slope is preferable.
\section*{Question 2}
We cover a square region of the complex plane by taking $c = x + iy$ for numerous combinations of $x$- and $y$-values between $-2$ and $2$, and use $|z|^2 = \Re(z)^2 + \Im(z)^2$ to test for divergence.
For each point $c$ we iterate the expression $z_{i + 1} = z_i^2 + c$ starting from $z_0 = 0$ and test the absolute square of each $z_i$ to see whether it grows beyond the bound. If so, we assign the point a value of 0 in a 2D boolean array representing whether each point is bounded or not. We also record the iteration number in a separate 2D array, which contains values of -1 for points that stay bounded. We then plot the results using \texttt{pyplot.imshow}.
When tested running for 1000 iterations, none of the points in the sample set take more than 32 iterations to diverge, so to save time we will only iterate for 32 steps.
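The core of the iteration can be sketched as follows (illustrative only; the grid resolution and the escape test $|z|^2 > 4$ are assumptions rather than values quoted from the assignment):
\begin{verbatim}
import numpy as np

nx = ny = 500                # grid resolution (assumed)
max_iter = 32
xs = np.linspace(-2, 2, nx)
ys = np.linspace(-2, 2, ny)

bounded = np.ones((ny, nx), dtype=bool)       # 0 marks divergent points
iterations = -np.ones((ny, nx), dtype=int)    # -1 marks bounded points

for i in range(ny):
    for j in range(nx):
        c = xs[j] + 1j * ys[i]
        z = 0.0
        for n in range(max_iter):
            z = z * z + c
            if z.real**2 + z.imag**2 > 4.0:   # |z|^2 grows beyond the bound
                bounded[i, j] = False
                iterations[i, j] = n
                break
\end{verbatim}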
The only obvious pattern from the resulting images is that there are no points that remain bounded from our iteration in an area centered around $c = -0.5i$. This same area on the iteration numbers plot also notably contains the brightest points, i.e. the points in these areas take a higher number of iterations to diverge. There is a single point near the center of this area that notably takes the single highest number of iterations to diverge. These results are taken from a finite number of points across the area and not every possible point, so there could be more "interesting" points that are missed by numerical analysis.
\includegraphics[scale=0.7]{plot2_1}
\includegraphics[scale=0.7]{plot2_2}
\section*{Question 3}
Our initial value problem is given as:
\begin{align}
\frac{dS}{dt} &= -\frac{\beta S I}{N},\\
\frac{dI}{dt} &= \frac{\beta S I}{N} - \gamma I,\\
\frac{dR}{dt} &= \gamma I
\end{align}
$I(0) = 1, S(0)=999, R(0) = 0$
$N=1000$, $0\leq t \leq 200$
Using \texttt{scipy.integrate.solve\_ivp} as the ODE integrator, which takes parameters \texttt{(fun, tspan, y0)}, we set up the time derivative functions as a vector $[\frac{dS}{dt},\frac{dI}{dt},\frac{dR}{dt}]$ for the \texttt{fun} parameter, and the initial conditions vector $[S(0), I(0), R(0)]$ as \texttt{y0}. We also specify the parameter \texttt{vectorized=True} so that the integrator handles the vectors properly. We will use its default Runge-Kutta method, \texttt{method='RK45'}.
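A sketch of this setup is given below (illustrative; the values of $\beta$ and $\gamma$ shown here are placeholders for the scenarios described next, and the \texttt{vectorized} option is omitted for brevity):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N = 1000
beta, gamma = 0.3, 0.1        # example infection / recovery rates (assumed)

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

y0 = [999, 1, 0]              # S(0), I(0), R(0)
sol = solve_ivp(sir, (0, 200), y0, t_eval=np.linspace(0, 200, 401))
S, I, R = sol.y               # solutions on the requested time grid
\end{verbatim}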
$\beta$ represents an infection rate and $\gamma$ represents a recovery rate, so we will select some meaningful variations on these parameters, i.e. a disease that spreads quickly but has a fast recovery rate, a disease that spreads quickly with a slower recovery rate, etc. The solutions show us the different progressions for each of these sets of parameters, for example with a high infection rate and low recovery rate there is a large spike in $I(t)$, whereas with the same infection rate but a higher recovery rate the $I(t)$ "spike" is much lower as the disease does not have the opportunity to progress. The first three scenarios plotted all have the recovered and susceptible populations become constant when the infected population reaches zero. With a zero recovery rate, the entire population eventually becomes infected.
\includegraphics[scale=0.7]{plot3}
\end{document} | {
"alphanum_fraction": 0.7518860017,
"avg_line_length": 72.303030303,
"ext": "tex",
"hexsha": "55f79311f00a1a81d321bc3f6ec459afc73af61b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "7343229002a5a6df8a6280c721f908018d0ccbd8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "emilychallice/CTA200",
"max_forks_repo_path": "assignment_2/Assignment 2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "7343229002a5a6df8a6280c721f908018d0ccbd8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "emilychallice/CTA200",
"max_issues_repo_path": "assignment_2/Assignment 2.tex",
"max_line_length": 833,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "7343229002a5a6df8a6280c721f908018d0ccbd8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "emilychallice/CTA200",
"max_stars_repo_path": "assignment_2/Assignment 2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1283,
"size": 4772
} |
\chapter{Introduction}
\label{Introduction}
Differential equations, ordinary or partial, allow modeling phenomena that evolve with respect to space and time. They are commonly used to describe the propagation of sound or heat and appear frequently in models related to electrostatics, electrodynamics, fluid dynamics, elasticity, quantum mechanics, and among other more related areas. \\
However, their analytical solutions cannot always be easily obtained and in many cases, it will be necessary to resort to very complex techniques that tend to give solutions with very impractical mathematical expressions to use. In particular, problems characterized as non-linear present these difficulties, and considering a different alternative to find solutions may be a more reasonable option. \\
There are alternatives to find solutions to differential equations, which depend on the nature of the problem to be solved. For example, computational fluid dynamics is one of the branches of fluid mechanics that uses numerical methods and algorithms to solve and analyze fluid flow problems that perform millions of calculations to simulate the interaction of liquids and gases through complex surfaces. However, even with simplified equations and high-performance supercomputers, in many cases, only approximate results can be achieved. \\
The resolution of differential equations related to the characterization of fluids, and in general of those that occur in the field of complex systems, is considered of utmost importance, since it allows studying problems of great interest such as the turbulence phenomenon and understanding its dynamics with precision. Understanding these phenomena through differential equations is not enough; it is also necessary to characterize their nature, which in some cases is possible if the dimensionless Reynolds number ($Re$), which indicates whether a fluid follows a laminar or turbulent flow, is known. However, with this information it is not always possible to predict those phenomena that present turbulence in combination with convection or combustion processes and that therefore require greater attention to their dynamics. \\
Spectral methods have recently emerged as a viable alternative for the numerical solution of partial differential equations. They have proved particularly useful in fluid dynamics simulation, where large spectral hydrodynamics codes are now regularly used to study turbulence, numerical weather prediction, ocean dynamics, and other problems where high accuracy is desired. \\
The above has motivated the development of this thesis: studying spectral methods extensively in order to acquire the ability to use this tool and to understand it from the point of view of mathematical analysis. To develop this study we will focus first on the elementary theory of these methods, building enough knowledge to allow us to develop, implement and analyze under this approach a wide variety of evolution problems governed by partial differential equations. \\
To illustrate the application of these methods, the well-known Burgers' equation has been chosen: in addition to being a non-linear problem with interesting characteristics, it is useful for developing the ability to attack more complex problems. Furthermore, in order to extend the study of the implementation of spectral methods, we will also work with the stochastic version of this equation, which gives a general sense of how problems of this type are approached; such problems are considered of great importance due to their wide field of applications, and they remain an area in full development because of the great difficulty in obtaining solutions. \\
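For reference, we recall here one standard way of writing this equation (the precise setting, domains and notation will be fixed in the later chapters). The deterministic Burgers' equation for a velocity field $u(x,t)$ with viscosity $\nu > 0$ reads
\[
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2},
\]
and its stochastic version is obtained by adding a random forcing term $f(x,t)$ to the right-hand side,
\[
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2} + f(x,t).
\]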
To carry out this study, we will divide the work into six parts organized as follows
\begin{enumerate}
\item[1.] In chapter \ref{Introduction}, a brief history of Burgers' equation in its deterministic version will be presented, and we will also show how to obtain the analytical solution for an initial value problem of this equation by transforming it into a linear equation that can be solved using the Fourier transform. Later, the origin of the stochastic version will be discussed, in addition to its importance within mathematics and physics.
\item[2.] In chapter \ref{Chapter_2}, we will study the theoretical bases of spectral methods using the well-known Fourier series as the main tool, which will allow us to study a theory of approximation of functions under two approaches, one using orthogonal projections and the other using interpolation techniques. These two approaches will be examined independently, studying their implementation and some theoretical results that will be useful in chapter \ref{Chapter_3}.
\item[3.] In chapter \ref{Chapter_3}, the spectral methods known as Fourier-Galerkin and Fourier-Collocation will be developed using the tools examined in chapter \ref{Chapter_2}, verifying their convergence theory. For this, the deterministic Burgers' equation will be used, taking advantage of its linearized form to describe the methods, which will then be applied to the original nonlinear equation to describe the algorithms of its computational implementation that will allow us to perform numerical experiments and to be able to observe some characteristics interesting for your discussion.
\item[4.] In chapter \ref{Chapter_4}, a spectral method used to solve stochastic partial differential equations that was studied in \cite{Delgado2016} will be presented in as much detail as possible. We will see that this method, which is built on the well-known Hermite polynomials, allows us to obtain solutions of stochastic problems by solving a deterministic problem, and it will be illustrated using the stochastic Burgers' equation, developing its implementation together with numerical simulations.
\item[5.] In this last chapter, we will discuss the most relevant aspects of each chapter, and we will give some observations on the numerical results obtained, concluding with some ideas that can be considered to extend this work.
\item[6.] At the end of this work, an appendix \ref{Appendix_A} was added, which will be useful to understand in more detail the spectral method developed in chapter \ref{Chapter_4}.
\end{enumerate}
\input{introduction/Burgers_Deterministic}
\newpage
\input{introduction/Burgers_Stochastic}
| {
"alphanum_fraction": 0.8157048829,
"avg_line_length": 181.4722222222,
"ext": "tex",
"hexsha": "39fd63679e32e9ca2493e79e869e2cce462fbe5c",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-03-04T13:29:56.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-03-04T13:29:56.000Z",
"max_forks_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alanmatzumiya/spectral-methods",
"max_forks_repo_path": "docs/introduction/Introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alanmatzumiya/spectral-methods",
"max_issues_repo_path": "docs/introduction/Introduction.tex",
"max_line_length": 856,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "c5e2a019312fb8f9bc193b04b07b7815e6ed4032",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alanmatzumiya/Maestria",
"max_stars_repo_path": "docs/introduction/Introduction.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-12T11:18:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-12-29T10:44:02.000Z",
"num_tokens": 1229,
"size": 6533
} |
\documentclass{amsart}
%\documentclass[a4paper,10pt]{scrartcl}
\usepackage[utf8x]{inputenc}
\usepackage[british]{babel}
%\usepackage[a4paper, inner=0.5cm, outer=0.5cm, top=1cm,
%bottom=1.5cm, bindingoffset=1cm]{geometry}
\usepackage{amsmath}
\usepackage{amssymb, latexsym}
\usepackage{longtable}
\usepackage[table]{xcolor}
\usepackage{textcomp}
\usepackage{stmaryrd}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{yfonts}
\usepackage{algpseudocode}
\usepackage{algorithm}
\usepackage{hyperref}
\usepackage{MnSymbol}
\setlist[enumerate]{label*=\arabic*.}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{example}{Example}[section]
\newtheorem{definition}{Definition}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{notation}{Notation}[section]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\title{Creating Custom Rule Primitives for Jena}
\author{Henriette Harmse}
\date{\today}
\pdfinfo{%
/Title (Creating Custom Rule Primitives for Jena)
/Author (Henriette Harmse)
/Creator ()
/Producer ()
/Subject (DL)
/Keywords ()
}
\begin{document}
\maketitle
In this post I will show you
\begin{enumerate}
\item how to add your own custom rule primitive,
\item how to inform Jena of your custom rule primitive, and
\item I will discuss things you have to keep in mind when writing a custom primitive.
\end{enumerate}
\section{Adding a Custom Rule Primitive}
A powerful feature of Jena is that it allows you to create your own custom builtin rule primitives. Building on our student example of the previous post, assume we want to calculate the final mark for a student given their test result, exam result and project result. We assume we have the following data
\begin{small}
\begin{verbatim}
:Peet :hasTestResult 77 .
:Peet :hasExamResult 86 .
:Peet :hasProjectResult 91 .
\end{verbatim}
\end{small}
for which we add the following rule
\begin{small}
\begin{verbatim}
[calcStudentFinalMarkRule:
(?student :hasTestResult ?testResult)
(?student :hasExamResult ?examResult)
(?student :hasProjectResult ?projectResult)
calcFinalMark(?testResult, ?examResult, ?projectResult, ?finalMark)
-> (?student :hasFinalMark ?finalMark)]
\end{verbatim}
\end{small}
This states that when \texttt{?student} has a \texttt{?testResult}, \texttt{?examResult} and \texttt{?projectResult}, a \texttt{?finalMark} can be calculated for which we add an associated triple for \texttt{?student}. What is left for us is to implement our \texttt{calcFinalMark} primitive, which we do by creating a class \texttt{CalcFinalMark} that extends the \texttt{BaseBuiltin} abstract class. To name our custom primitive we override \texttt{getName()}:
\begin{small}
\begin{verbatim}
@Override
public String getName() {
return "calcFinalMark";
}
\end{verbatim}
\end{small}
We state that our custom primitive has 4 parameters by overriding:
\begin{small}
\begin{verbatim}
@Override
public int getArgLength() {
return 4;
}
\end{verbatim}
\end{small}
To ensure that \texttt{calcFinalMark} can be used both in the body and the head of rules we override
\begin{small}
\begin{verbatim}
@Override
public boolean bodyCall(Node[] args, int length, RuleContext context) {
return doUserRequiredAction(args, length, context);
}
@Override
public void headAction(Node[] args, int length, RuleContext context) {
doUserRequiredAction(args, length, context);
}
\end{verbatim}
\end{small}
The meat of the implementation is the \texttt{doUserRequiredAction(args, length, context)} method, which consists of the following steps:
\begin{enumerate}
\item check that we have the correct number of parameters,
\item retrieve the input parameters,
\item verify the typing of input parameters,
\item doing the actual calculation,
\item creating a node for the output parameter, and
\item binding the node to the output parameter.
\end{enumerate}
\begin{small}
\begin{verbatim}
private boolean doUserRequiredAction(Node[] args, int length,
RuleContext context) {
// Check we received the correct number of parameters
checkArgs(length, context);
boolean success = false;
// Retrieve the input arguments
Node studentTestResult = getArg(0, args, context);
Node studentExamResult = getArg(1, args, context);
Node studentProjectResult = getArg(2, args, context);
// Verify the typing of the parameters
if (studentTestResult.isLiteral() && studentExamResult.isLiteral() &&
studentProjectResult.isLiteral()) {
Node finalMark = null;
if (studentTestResult.getLiteralValue() instanceof Number &&
studentExamResult.getLiteralValue() instanceof Number &&
studentProjectResult.getIndexingValue() instanceof Number) {
Number nvStudentTestResult =
(Number)studentTestResult.getLiteralValue();
Number nvStudentExamResult =
(Number)studentExamResult.getLiteralValue();
Number nvStudentProjectResult =
(Number)studentProjectResult.getLiteralValue();
// Doing the calculation
int nFinalMark = (nvStudentTestResult.intValue() * 20)/100 +
(nvStudentExamResult.intValue() * 50)/100 +
(nvStudentProjectResult.intValue() * 30)/100;
// Creating a node for the output parameter
finalMark = Util.makeIntNode(nFinalMark);
// Binding the output parameter to the node
BindingEnvironment env = context.getEnv();
success = env.bind(args[3], finalMark);
}
}
return success;
}
\end{verbatim}
\end{small}
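As a quick sanity check of the arithmetic (worked out here by hand, not taken from the original post): with the sample data above, the integer expression evaluates to \texttt{(77*20)/100 + (86*50)/100 + (91*30)/100 = 15 + 43 + 27 = 85}, so after inference the model should contain a triple along the lines of \texttt{:Peet :hasFinalMark 85}.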
\section{Registering a Custom Primitive with Jena}
Our code for loading our rules and activating them is similar to my previous post, except that you have to make a call to register the custom primitive:
\begin{small}
\begin{verbatim}
// Load RDF data
String data = path.toFile().getAbsolutePath() +
"/src/main/resources/data2.ttl";
Model model = ModelFactory.createDefaultModel();
model.read(data);
// Register custom primitive
BuiltinRegistry.theRegistry.register(new CalcFinalMark());
// Load rules
String rules = path.toFile().getAbsolutePath() +
"/src/main/resources/student2.rules";
Reasoner reasoner = new GenericRuleReasoner(Rule.rulesFromURL(rules));
InfModel infModel = ModelFactory.createInfModel(reasoner, model);
infModel.rebind();
\end{verbatim}
\end{small}
\section{Things to Keep in Mind}
There are two main things I think one needs to keep in mind with Jena custom rule primitives:
\begin{enumerate}
\item A primitive is supposed to be an elementary building block. Being able to create your own primitives may tempt you to add all sorts of interesting processing besides the manipulation of triples, but I strongly advise against that. Arbitrary processing in your builtin primitive can degrade the performance of inferencing.
\item Do not assume that you have control over when a rule will be triggered. Exactly when a rule fires depends on when the Jena \texttt{InfModel} implementation decides to re-evaluate the rules, which in turn depends on the internal caching of the \texttt{InfModel} implementation and on how it deals with modifications made to the model. Even though I believe \texttt{InfModel} implementations will avoid arbitrarily re-evaluating rules, it is still conceivable that under some circumstances the same rule may be triggered more than once for the same data. Furthermore, the Jena 3.6.0 documentation states that the \texttt{InfModel} interface is still in flux, which could mean that even if a rule is currently triggered only once for given data, unforeseen changes may cause it to be triggered more than once in future releases of Jena.
\end{enumerate}
\section{Conclusion}
In this post I gave an example of how you can develop a custom rule primitive for Jena. The code for this example can be found at \href{https://github.com/henrietteharmse/henrietteharmse/tree/master/blog/tutorial/jena/source/rules/}{github}.
\bibliographystyle{amsplain}
\bibliography{../../../BibliographicDetails_v.0.1}
\end{document}
\declareIM{g}{G}{2021-06-30}{AOS using ComCam, commanding MT}{AOS CC/MT}
Executive Summary:
Perform wave-front sensing using ComCam and a notebook
\textbf{Does not map to P6}
\subsection{Goals of IM}
Demonstrate:
\begin{itemize}
\item Slew to field, generating telemetry (\cf \IM{f})
\item Piston the camera (e.g. in, -out, in, +out, in). \XXX{five exposures at each position}
\item Take data using ComCam in playback mode, commanded from the OCS
\item Use \gls{OCPS} to reduce data on the commissioning cluster
\item Load Zernikes into the \gls{EFD}
\item Use MTAOS to update the state of the optics
\end{itemize}
\subsection{Prerequisites}
\begin{itemize}
\item ability to control camera hexapod
\item \IM{pg}
\item \IM{f}
\begin{itemize}
\item No need for replaying corner raft data
\end{itemize}
\item notebook/script to perform pistoning of camera hexapod with image acquisition
\item \gls{OCPS}
\end{itemize}
\subsection{Procedure}
\begin{itemize}
\item Generate simulated images with desired pistons (e.g. $[0, -10, 0, 10, 0]$) and appropriate
metadata, and load them into ComCam
\item Use \gls{scriptQueue} to issue slew and camera commands, sending commands to \gls{OCPS}
to process the data. \Nb will require the \gls{AOS} to synchronise the piston values
with the OCPS exposure numbers (\texttt{seqNum}).
\item Check that desired events are in the EFD
\end{itemize}
\section{Credits}
The sketch project was started at UC Berkeley in 2005 by Armando Solar-Lezama and Ras Bodik and has been led by Solar-Lezama at MIT since 2009. The current code base includes important contributions by the following individuals (in chronological order):
Gilad Arnold,
Liviu Tancau,
Chris Jones,
Nicholas Tung,
Lexin Shan,
Jean Yang,
Rishabh Singh,
Zhilei Xu,
Rohit Singh,
Jeevana Priya Inala,
Xiaokang Qui,
Miguel Velez.
The project also relies heavily on code from MiniSat (Niklas Een, Niklas Sorensson), StreamIt (led by Bill Theis and Saman Amarasinghe with code from David Maze, Michal Karczmarek and others), as well as the open source systems ANTLR (Terence Parr), Apache Commons CLI and Rats/xtc (Robert Grimm).
Over the years, the project has benefited from funding by the following projects:
\begin{itemize}
\item NSF-1049406 EAGER:Human-Centered Software Synthesis
\item NSF-1116362 SHF: Small: Human-Centered Software Synthesis
\item NSF-1161775 SHF: Medium: Collaborative Research: Marrying program analysis and numerical Search
\item DOE: ER25998/DE-SC0005372: Software Synthesis for High Productivity Exascale Computing
\item NSF-1139056 Collaborative Research: Expeditions in Computer Augmented Programming
\item DARPA: UHPC Program
\item DOE: ER26116/DE-SC0008923: D-TEC: DSL Technology for Exascale Computing
\item DARPA: MUSE FA8750-14-2-0242
\item Toyota Research Institute
\end{itemize}
\documentclass{article}
\usepackage[a4paper, total={180mm, 260mm}]{geometry}
\usepackage{graphicx}
\usepackage{url}
%\usepackage{natbib}
\usepackage{todonotes}
\usepackage{booktabs}
\usepackage{lineno}
\usepackage{color}
%\usepackage{auto-pst-pdf}
\usepackage[colaction]{multicol}
\usepackage{caption}
\usepackage{svg}
\usepackage{authblk}
\usepackage{standalone}
\usepackage{xr}
\externaldocument{extended_figures}
\usepackage[section]{placeins}
%\setcitestyle{square, citesep={,}}
\linespread{1.5}
\makeatletter
\renewcommand{\maketitle}{\bgroup\setlength{\parindent}{0pt}
\begin{flushleft}
{\huge\textbf{\@title}}
\bigskip
{\large\textbf{\@author}}
\end{flushleft}\egroup
}
\makeatother
% Title
\title{A Topography of Climate Change Research}
\author[1,2]{Max Callaghan}
\author[1,2]{Jan Minx}
\author[2]{Piers M. Forster}
\affil[1]{Mercator Research Institute on Global Commons and Climate Change, Torgauer Straße, 10829 Berlin, Germany}
\affil[2]{Priestley International Centre for Climate, University of Leeds, Leeds LS2 9JT, United Kingdom}
\begin{document}
\maketitle
\begin{linenumbers}
\noindent\textbf{\input{abstract.tex}}
\bigskip
\noindent We live in an age of ``Big Literature''
\cite{Nunez-Mir2016, Minx2017l}, where the science of climate change is expanding exponentially \cite{Grieneisen2011, Haunschild2016}. In the five years since the publication of the last IPCC assessment report \cite{IPCC2014c}, 202,000 papers on climate change were published in the Web of Science (WoS) (see Table \ref{tab}). This is almost as many as the 205,000 papers identified by the same query \cite{Grieneisen2011} during the first five assessment periods, a span of nearly 30 years. Around 350,000 new publications can be expected before the sixth assessment report (AR6) of the Intergovernmental Panel on Climate Change (IPCC), based on current growth patterns (Figure \ref{pub-growth}). Moreover, from the expansion of the literature's vocabulary (see methods) - from 2,000 unique words in the first assessment period to 95,000 words so far in the sixth - we can observe the literature's increasing diversity of content. For example, the zika virus, mentioned in 182 articles from 2014-2018, had never before been discussed in the titles or abstracts of articles relating to climate change. Yet it has emerged as a topic of high relevance: the incidence of the virus, whose outbreak in Brazil in 2016 was declared a public health emergency by the World Health Organization, is set to increase under rising global temperatures \cite{Rao2019}. Similar rapid emergence patterns can be seen for Intended Nationally Determined Contributions (INDCs) in AR6, and Biochar in AR5, among others.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=180mm]{../plots_pub/pubs_time_wgb.pdf}
\caption{The number of climate change documents in the Web of Science in each year. A total of 406,191 documents were published until the end of 2018. The number of publications in each assessment period is shown in square brackets. For 2019-21 we project the number of papers under two assumptions: that there is no further growth, and that growth continues at the same rate as over the past five years.}
\label{pub-growth}
\end{center}
\end{figure}
\begin{table}[htp]
\begin{center}
{\scriptsize
\input{../tables/growth_table.tex}}
\caption{Growth of Literature on Climate Change. A glossary of acronyms is provided in the note below}
\label{tab}
\end{center}
\end{table}
\section*{}
\textbf{Note - definition of acronyms}: \textbf{co2:} Carbon Dioxide,
\textbf{ncep:} National Centers for Environmental Prediction,
\textbf{fco:} Fugacity of Carbon Dioxide,
\textbf{pfc:} Perfluorocompound,
\textbf{otcs:} Open Top Chambers,
\textbf{dtr:} Diurnal Temperature Range,
\textbf{sres:} Special Report on Emissions Scenarios (2000),
\textbf{petm:} Paleocene Eocene Thermal Maximum,
\textbf{amf:} Arbuscular Mycorrhizal Fungal,
\textbf{sf5cf3:} trifluoromethyl sulfur pentafluoride (A Potent Greenhouse Gas Identified in the Atmosphere, 2000),
\textbf{clc:} Chemical Looping Combustion,
\textbf{cwd:} Coarse woody debris,
\textbf{etm:} Enhanced Thematic Mapper (NASA satellite sensor),
\textbf{cmip5:} Coupled Model Intercomparison Project phase 5 (starting 2008),
\textbf{cmip3:} Coupled Model Intercomparison Project phase 3 (first published 2007 \cite{Meehl2007}),
\textbf{mofs:} metal-organic frameworks (for CO2 storage),
\textbf{sdm:} statistical-dynamical model,
\textbf{mmms:} Mixed Matrix Membranes (for CO2 capture),
\textbf{cop21:} 21st Conference of Parties (Paris 2015),
\textbf{c3n4:} Carbon nitride (a synthetic nanomaterial used for hydrogen production),
\textbf{sdg:} Sustainable Development Goals,
\textbf{(i)ndc:} (Intended) Nationally Determined Contributions
\section*{}
Big literature poses at least three challenges for scientific policy advice and science itself: First, established procedures in scientific assessments like those conducted by the IPCC struggle to address the exploding literature base. For example, the ratio of studies cited in IPCC reports to the number of studies on climate change in the WoS has declined from 60\% to 20\% \cite{Minx2017l}, posing a rapidly growing risk of selection bias. The exponentially increasing volume of literature means that the provision of ``comprehensive, objective, open and transparent'' assessments of the available scientific literature, as defined in the principles governing IPCC work \cite{IPCC2013}, is no longer possible by traditional means.
Machine reading and learning methods, among other data science applications, are required to enable an understanding of the field of climate change research at scale.
Second, evidence synthesis - the enterprise of reviewing the literature based on a formal and systematic set of methods \cite{Chalmers2002} - becomes increasingly important for aggregating and consolidating rapidly emerging knowledge and enabling scientific assessments to do their job.
Yet traditional methods of evidence synthesis themselves are pushed to their limits by the large amount of scientific publications. The field of evidence synthesis technology, which tries to streamline human tasks through machine learning at the different stages of the review process, is still in its infancy \cite{Beller2018}. Finally, overwhelming amounts of literature may be a major reason why studies of scientific assessments \cite{Bjurstroem2011} do not offer robust quantification for their claims about the relationship between report citations and the underlying literature.
This study uses topic modelling \cite{Blei2010} to map the vast body of evidence on climate change. Topic modelling is an unsupervised machine-learning technique, where patterns of word co-occurrences are used to learn a set of topics, groups of words, which describe the corpus.
The word topic derives from the Greek word for place (topos), and by situating the documents in a reduced-form projection of their thematic content (Figure \ref{oecd_topic_map}), we create a topographic map of the literature on climate change. Such a systematic engagement with the thematic content of the climate science is missing from the literature so far.
We then use this map to understand how IPCC reports have represented the available climate change literature and re-evaluate claims of bias based on a more comprehensive understanding of the available climate science.
We enrich the discussion on representation by discussing topics as well as disciplines.
%\subsection*{Mapping the landscape of climate change literature}
Figure \ref{oecd_topic_map} shows a thematic or topographic map of the 378,000 publications on climate change in our dataset with abstracts. Using non-negative matrix factorization \cite{Lee1999}, the 140 topics are machine-learned from the papers' abstracts (see methods for details). The topic scores of each document are reduced to the two dimensions shown through t-distributed stochastic neighbour embedding (t-SNE) \cite{vandermaaten2008} (a full list of topics and related words, and a list of documents, their positions on the map, and their related topics are given in the supplementary material). The two dimensions represent a projection of the 140-dimensional topic scores of each document that seeks to preserve small distances between topically similar documents.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=180mm]{../plots_pub/all_topic_words_oecds.png}
\caption{A map of the literature on climate change. Document positions are obtained by reducing the topic scores to two dimensions via t-SNE (see methods for further details). The two axes therefore have no direct interpretation, but represent a reduced version of similarities between documents across 140 topics. Documents are coloured by web of science discipline category. Topic labels are placed in the center of each of the large clusters of documents associated with each topic. }
\label{oecd_topic_map}
\end{center}
\end{figure}
Our map covers a broad range of topics, with related topics in clusters. Generally, topics related to climate science and impacts are on the left, while solution-oriented topics are on the right. More fine-grained research areas can also be distinguished. For example, publications related to urban infrastructure (\textbf{buildings}, \textbf{cement}, \textbf{waste}) are located on the right, physical climate impacts (\textbf{sea-level}, \textbf{droughts} or [crop] \textbf{yield}) are in the lower left and energy systems are in upper right. Larger groups of documents at the fringes of the map relate mainly to one or two specific topics like \textbf{biochar} or \textbf{coral}. Interestingly, scenarios feature centrally in the map, at the interface between different scientific communities. This corresponds to their integrative nature in IPCC reports \cite{Moss2010}. %This map of the thematic structure of the literature could be useful for individual scientific communities or for climate change assessments.
The disciplinary composition of this research topography, indicated by the different colours in Figure \ref{oecd_topic_map}, highlights the dominance of natural sciences in climate change research. More than 60\% of the literature is published in natural science journals. Similarly, 115 of 140 topics contain a greater share of publications from natural science journals than any other discipline. We calculate disciplinary entropy of topics as a measure of their degree of interdisciplinarity (Extended Data Figure \ref{dis-entropy} and methods for details). This shows how research on \textbf{health}, \textbf{food}, or \textbf{policy} comes from a range of disciplines, while research on \textbf{ice} or \textbf{oceans} comes almost exclusively from the natural sciences.
\begin{figure}
\begin{center}
\includegraphics[width=180mm]{../plots_pub/topic_evolution_4.png}
\caption{Evolution of the landscape of climate change literature. In each period, the 10 fastest growing topics are labelled. Where documents could be matched to IPCC citations, they are coloured by the working group citing them. Panel a) shows documents from assessment periods 1-3, b) shows documents from assessment period 4, c) shows documents from assessment period 5, and d) shows documents published during assessment period 6.}
\label{evolution-map}
\end{center}
\end{figure}
Finally, the topography shows the thematic evolution of the literature (Figure \ref{evolution-map}), with topics exhibiting distinct patterns of growth. Fast-growing topics in the last three assessment periods have included, among others, \textbf{coral}, \textbf{risks}, \textbf{adaptation}, \textbf{hydrogen}, \textbf{buildings}, \textbf{CO2 removal}, \textbf{networks} and \textbf{biochar}. \textbf{Biochar} is particularly remarkable in that the sizeable literature which emerged in AR5 was completely absent from the climate change literature beforehand.
The identification of new topics as they emerge, particularly as these are identified without prior knowledge of the literature, can help researchers and assessment-makers to keep abreast of a quickly evolving field.
%\subsection*{Research representation in IPCC reports}
We apply our topic map to understand how IPCC assessments represent the science and respond to policymakers' and consulted experts' demands for more solution-oriented knowledge \cite{Kowarsch2017}. Several studies have identified, made, or repeated claims of a disciplinary bias of IPCC assessments towards the natural sciences, and within the social sciences towards economics \cite{Bjurstroem2011, Victor2015, Hulme2010, Corbera2016}. Where these claims were based on an analysis of IPCC citations \cite{Bjurstroem2011}, they assessed this without a measurable baseline.
In view of the organisation's mandate to provide ``comprehensive, objective, open and transparent'' assessments of the available science \cite{IPCC2013}, our dataset of publications allows us - albeit imperfectly, as discussed in the concluding section - to study representation with a meaningful baseline. Further we provide an update to the last quantitative assessment of IPCC citations \cite{Bjurstroem2011}, which looked only at AR3.
This baseline forms a starting point for informed discussion about how to represent the literature according to the IPCC's priorities.
By matching the documents in our dataset to a set of references scraped from all published IPCC reports \cite{Minx2017l}, we assess the representation of a group of studies by comparing its share in IPCC citations with its share in the dataset of WoS studies on climate change (see methods).
Figure \ref{oecd_rep}a shows that social science documents (as identified by WoS) were indeed under-represented in AR3, but by AR5 were the most over-represented discipline, with a share in the literature cited by IPCC reports 1.32 times higher than their share in our WoS dataset. Likewise, social \& economic geography, political science, and ``Other social sciences'' were better represented in AR5 than economics.
This challenges what we think we know about the IPCC.
Instead of under-representing the social sciences, the IPCC has been under-representing the Agricultural Sciences and Engineering \& Technology.
%Humanities are also under-represented, although they make up a very small proportion of the total literature.
The topography allows us to delve deeper into subjects that receive more or less attention in the IPCC.
%Figures \ref{oecd_rep}b and \ref{oecd_rep}c plot topic representation.
Figure \ref{oecd_rep}c shows that topics more commonly cited by IPCC working group I (WGI) are older and largely better represented in IPCC reports. These topics, for example \textbf{ozone}, \textbf{oceans}, and \textbf{aerosols}, are core topics for WGI, which addresses the physical science of climate change.
The topics in the lower right of the graph are the most pertinent to the question of whether the IPCC is well representing knowledge on climate change. They are newer and until now have been under-represented in IPCC reports. Their novelty may be highly salient in a periodic assessment process. These topics are primarily in working group III, on mitigation and are ``solutions-relevant''. But while policymakers' demands for solutions-oriented IPCC assessments were often focussed on policy options, these under-represented new topics deal with more technical solutions and are found in technical disciplines within engineering \& technology and the agricultural sciences.
Further, WGIII topics that are well represented contain a greater proportion of social science research (figure \ref{oecd_rep}b). The topics \textbf{countries}, \textbf{policy}, and \textbf{prices} are close to a proportional representation and are made up of around 30\% social science research. \textbf{Waste}, \textbf{biochar}, and \textbf{cement}, are more than 3 times more prevalent in the wider literature than in the literature cited by the IPCC, and are made up of around 5\% social science research. This pattern is not visible in other working groups (Extended Data Figure \ref{socsci-wgs}).
The difference between under-represented new topics and new topics that are better represented is intriguing. This is visible in figure \ref{evolution-map}, where in AR5, the clusters of documents around the \textbf{buildings} and \textbf{biochar} topics contain few IPCC citations, whereas the clusters around \textbf{adaptation} and \textbf{food} contain more. As shown in figure \ref{oecd_rep}c, \textbf{buildings} and \textbf{biochar} are 3.34 and 3.61 times more prevalent in the literature than in IPCC citations, while \textbf{food} is 1.22 times more prevalent in the literature and \textbf{adaptation} is 2.22 times more prevalent in IPCC citations respectively. %The IPCC, has been better at integrating new knowledge from these topics, and in general better at integrating new knowledge from WGII than WGIII topics.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=180mm]{../plots_pub/big_panel_representation.pdf}
\caption{Representation in IPCC reports: \textbf{a)} by discipline, \textbf{b)} by social science proportion of WGIII topics, \textbf{c)} and novelty of all topics, where topics in the highest and lowest 10\% of either axis are labelled. Topics are coloured according to the working group from which they receive the most citations, although infrequently cited topics may not correspond to the relevant working group (see methods). Representation is the share of the subset of documents being cited by the IPCC divided by the share of the subset in the whole literature. We plot on a log scale so that 0.5 is equally distant to 1 as 2; plot labels show real values. Assessment period occurrence refers to the center of a topic's distribution across assessment periods (see methods for further details).}
\label{oecd_rep}
\end{center}
\end{figure}
%\subsection*{Machine-learning for climate change assessments}
Notwithstanding the over-representation of social science and under-representation of technical solutions in the IPCC with respect to the WoS, a
perfectly proportional representation of the literature is of course not optimal. A recommendation that the IPCC cite more or less of any part of the literature is by no means the goal of such an analysis. The IPCC, as a community of scientific experts, is vastly better placed to decide what is relevant than any algorithm.
As with many machine learning applications, we should be mindful of David Hume's is-ought problem.
Machine learning can help us to more efficiently understand and describe the landscape of climate change literature, but cannot tell us how things should be.
The results represent new knowledge about the interaction between the IPCC and the literature, which can have a variety of implications.
If the IPCC needs to include more social science knowledge \cite{Victor2015}, our analysis suggests that this is a result of insufficient production or funding of social science research on climate, rather than IPCC bias.
The under-representation of solutions-relevant topics (despite calls for solutions-oriented assessments), and the small proportion of social science research within these topics, suggests areas for future highly relevant social science research, as well as opportunities for particularly fruitful interdisciplinary collaboration.
As a guide for future assessments, the map could facilitate well informed decisions about the representation of different areas of climate literature, from the early scoping process, through to selection by authors of individual studies.
One advantage of topic modelling is that outcomes are not determined by any categorisation scheme imposed by the modeller, facilitating the discovery of ``unsearched'' for topics.
Highlighting recent research on, for example, membranes, biochar or e-vehicles, could prompt discussion in the scoping process about their inclusion in chapter outlines.
This mode of discovery can act as a complement to human expertise, which may be better at identifying under-researched niches, existing biases or knowledge requirements.
The methods shown here could also be extended to aid other processes in the production of IPCC reports, such as the identification of potential authors to achieve a better balance across sectors and regions \cite{Corbera2016}. The possible benefits or risks of using data science methods for IPCC processes constitutes an important area for future research.
Outside of the IPCC, this approach is part of ongoing attempts to make use of machine learning within evidence synthesis. This topographic map is a new approach to rapidly mapping very large literatures.
Our dataset of more than 400,000 publications represents a wealth of knowledge on climate change and climate solutions, but is by no means exhaustive. We repeat an established query \cite{Haunschild2016}, granting that it may have imperfections. Furthermore, we miss publications not in WoS (some small journals, some books, and most grey literature, not to mention indigenous knowledge \cite{Ford2016b}); and studies relevant for the work of the IPCC, that do not directly mention climate change (for example on energy policy). We argue that this remains a reasonable system boundary given data availability, and stress that documents not included in our study alter our findings only if they have systematically different patterns of citation by the IPCC.
A future topography could be improved by making use of more sources of climate change knowledge, extracting and classifying information from full texts, or exploring author networks and interdisciplinarity.
Most importantly, exploring machine learning applications that support IPCC authors in their assessments would prepare the IPCC for the age of big literature.
\section*{Methods}
\subsection*{Data}
This study reproduces the query developed by Grieneisen and Zhang \cite{Grieneisen2011}, which is carried out on the Web of Science core collection. We downloaded the results of the query on March 19, 2019. Though not exhaustive, the Web of Science gives a good coverage of the literature in major peer-reviewed journals. The Web of Science data gives us a disciplinary classification (based on the journal) and publication year, among other metadata, for each document. Each document is assigned to an assessment period according to the timeline shown in table \ref{tab}.
We also tested the query documented in \cite{Haunschild2016}, by checking a random sample of documents exclusive to it. We found that the majority of additional documents were not relevant, and decided to use only the query from \cite{Grieneisen2011}.
We use the references scraped from IPCC assessment reports from Minx et al. \cite{Minx2017l}, and attempt to match these with the results from the Web of Science. We use doc2vec similarity scores \cite{Le2014} to identify the 500 most similar titles for each reference, and count the document as a match if the Jaccard similarity score of the two-word shingles of the reference title and the document title is greater than 0.5 \cite{Khabsa2014}. Extended Data Table \ref{ipcc-matching} shows the percentage of IPCC citations matched in each working group for each assessment report. This proportion is significantly lower in earlier periods, as data coverage and the quality of citation databases are lower for earlier periods. Matching in WG III is also lower, suggesting a greater share of non-peer-reviewed literature, or literature not directly mentioning climate change, but related to its mitigation (for example on energy policy).
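To illustrate the matching threshold with two invented titles: the two-word shingles of ``a topography of climate change research'' are \{a topography, topography of, of climate, climate change, change research\}, while those of ``topography of climate change research'' are \{topography of, of climate, climate change, change research\}; the Jaccard similarity is $4/5 = 0.8 > 0.5$, so this pair would be counted as a match.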
We analysed by hand a sample of 100 IPCC references which could not be matched and found that 46\% of these references were not in the Web of Science at all, 53\% were in the Web of Science but not in our query, and 1 document was in our query but had mistakenly been identified as not being so. This was due to a different version of the title appearing in the IPCC citation and the Web of Science record.
\begin{table}[htp]
\begin{center}
\input{../tables/ipcc_matching.tex}
\caption{The proportion of citations in each report that could be matched with a document in our query from the Web of Science}
\label{ipcc-matching}
\end{center}
\end{table}
\subsection*{Pre-processing}
Data quality in earlier Web of Science results is poorer, and some documents have missing abstracts. In the quantification of the size of the literature and its vocabulary in table \ref{tab}, titles are substituted for abstracts where they are not available. The words of the documents are lemmatized, replacing different forms of the same word (i.e. word/words) with a single instance. Commonly occurring words, or ``stopwords'' are removed, as are all words shorter than 3 characters, and all words containing only punctuation or numbers.
The documents are transformed into a document-term matrix, where each row represents a document, and each column represents a unique word. Each cell contains the number of that column's terms in that document. Only terms which occur more than once are considered.
For the calculation of the topic model, documents with missing abstracts are ignored, and the document-term matrix is transformed into a term
frequency-inverse document frequency (tf-idf) matrix, where scores are scaled according to the frequency of their occurrence in the corpus. This gives more weight to terms which appear in few documents, and less weight to those which appear in many.
\begin{equation}
tf(t,d) = f_{t,d} \mathrm{,}\quad idf(t,D) = \log\frac{N}{|\{d \in D:t \in d\}|}
\end{equation}
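As an illustration with invented numbers: a term that appears in 100 of 10,000 documents receives an idf weight of $\log(10000/100) = \log(100) \approx 4.6$ (using the natural logarithm), whereas a term that appears in every document receives $\log(1) = 0$ and is therefore effectively discounted.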
\subsection*{Topic Model}
We use non-negative matrix factorisation (NMF) \cite{Lee1999}, an approach to topic modelling which factorises the term frequency-inverse document frequency matrix \( V \) into the matrices \(W\), the document-topic matrix, and \( H \), the topic-term matrix, whose product approximates \(V\):
\begin{equation}
V_{i\mu} \approx (WH)_{i\mu} = \sum_{a=1}^{r}W_{ia}H_{a\mu}
\end{equation}
As demonstrated in Extended Data Figure \ref{doc-topic}, each topic is represented as a set of word scores, and each document a set of topic scores. The combination of the two approximates the word scores in the document. For clarity in the figure, these are shown as simple counts, but in the model these are scaled according to each term's frequency within the corpus as explained above.
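As a toy illustration with invented numbers, two documents described by three terms might be approximated with two topics as
\[
V = \left(\begin{array}{ccc} 2 & 2 & 0 \\ 1 & 2 & 1 \end{array}\right)
= \left(\begin{array}{cc} 1 & 0 \\ 0.5 & 0.5 \end{array}\right)
\left(\begin{array}{ccc} 2 & 2 & 0 \\ 0 & 2 & 2 \end{array}\right) = WH,
\]
where the first document loads only on the first topic and the second document mixes both topics equally.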
Topics are calculated using the scikit-learn library \cite{Pedregosa2011}, and are saved in a database and topic visualisation system based on that developed by Chaney and Blei \cite{Chaney2012}. Our system adds new functionality to this and combines it with a system for managing sets of documents and queries. The code and additional information are published online at \url{https://github.com/mcallaghan/tmv}.
\subsubsection*{Model selection}
Topic models are calculated for 70, 80, 90, 100, 110, 120, 130, 140 and 150 topics. The run with 150 topics was discarded as it contained a topic to which no terms or documents were assigned. The relative usefulness of each model was assessed subjectively by the authors, based on inspection of the online visualisation tool, and the spreadsheet \textbf{topic\_comparison.xlsx} accompanying the supporting information. The spreadsheet shows each set of topics in adjacent columns. Topics from each model are placed next to the topics with the largest number of each topic's 10 highest scoring words in common. This helps authors to find an appropriate level of granularity for the analysis. Statistical methods for the selection of topic model parameters are available but they do not necessarily align with human perceptions of topic model quality \cite{Chang2009}. We make a judgement based on subjective criteria, but for transparency publish the results of the analysis for different numbers of topics in Extended Data Figure \ref{top-rep-ks}. The main conclusions drawn about the rapid growth and under-representation of solutions-relevant topics are stable across models.
\subsubsection*{Topic assignment to working groups}
\label{topic-wg}
A topic's score for each working group is calculated by summing the document-topic scores for all documents cited by that working group. We call the topic's primary working group that working group for which the above sum is the highest, but in some cases, where there are very few IPCC citations of documents related to a topic, this can be misleading. For example, the word ``capacity'' is relevant to the adsorption topic, so documents talking about adaptive capacity receive a low score for the topic. Because only very few documents highly relevant to the topic (in that they talk about adsorption or adsorptive capacity) are cited by the IPCC, and many of the weakly relevant documents are cited by the IPCC, the sum of the topic scores of the weakly relevant documents outweighs the sum of the topic scores of the strongly relevant documents, meaning that the topic is mistakenly assigned to working group II when it is more properly relevant to working group III. We point out that topics are in any case mixtures of documents cited by different working groups, and stress that the colouring of the topics by working group is merely illustrative.
\subsubsection*{Topic Representation and Newness}
To calculate topic representation in IPCC reports we divide each topic's share in the subsample of documents cited by IPCC reports by its share in the whole corpus (excluding documents published after the last assessment report). Disciplinary representation is calculated in the same way.
We calculate a topic's total score as the sum of document-topic scores. A topic's window score is the sum of document-topic scores considering only documents in the given time window. To represent a topic's newness, we multiply each assessment period number by the share of its total score occurring in that window, and sum these weighted values (a weighted mean of the assessment period numbers). A topic in which 100\% of documents which make it up occurred in assessment period 1 (6) would thereby receive a score of 1 (6), while a topic evenly distributed across all assessment periods would receive a score of 3.5.
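As invented illustrations of these two measures: a topic that accounts for 2\% of the documents cited by the IPCC but only 1\% of the corpus has a representation of $0.02/0.01 = 2$, i.e.\ it is over-represented by a factor of two; a topic with half of its total score in the fifth assessment period and half in the sixth receives a newness score of $5 \times 0.5 + 6 \times 0.5 = 5.5$.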
\subsubsection*{Disciplinary Entropy}
Disciplinary Entropy inverts the measurement of a conference's topical diversity suggested in \cite{Hall2008}, by measuring a topic \(z\)'s entropy \(H\), where
\begin{equation}
H(f|z) = -\sum_{i=1}^K \hat{p}(f_i|z) \log \hat{p}(f_i|z)
\end{equation}
based on the empirical distribution of a field \(f\) in the documents \(d\) in each topic:
\begin{equation}
\hat{p}(f|z) = \sum_{d:z_d=z} \hat{p} (f|d) \hat{p} (d|z)
\end{equation}
It is an indication of the diversity of disciplines within the set of documents related to a topic.
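For example, a topic drawn entirely from a single discipline has entropy $H = 0$, while a topic split evenly between two disciplines has entropy $H = -2 \times 0.5 \log 0.5 = \log 2 \approx 0.69$ (using the natural logarithm); more even spreads across more disciplines give higher values.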
\subsubsection*{Topic Map}
The topic model gives us the location of each document in a 140-dimensional topic space, with each dimension corresponding to that document's ``topic-ness'' in a given topic. t-Distributed Stochastic Neighbour Embedding (t-SNE) is a dimensionality reduction technique which we use to represent each document's topic scores in 2 dimensions \cite{vandermaaten2008}. Documents are placed on the map such that documents with similar combinations of topics are close together.
Datasets showing the documents considered, the topics, and the associations between topics and documents are published alongside this piece \cite{Callaghan2019}.
\end{linenumbers}
\section*{Author Information}
\subsection*{Author Contribution Statement}
M.C. and J.M. designed the research. M.C. performed the analysis. M.C., J.M. and P.M.F analysed the results. M.C. wrote the manuscript with contributions from all authors.
\subsection*{Corresponding Author}
Correspondence to Max Callaghan
\section*{Acknowledgements}
Max Callaghan is supported by a PhD stipend from the Heinrich Böll Stiftung. Jan Minx acknowledges funding from the German Federal Ministry of Research and Education within the PEGASOS project (grant reference: 01LA1826A)
\section*{Data Availability}
Three datasets from this study are available at \url{https://doi.org/10.6084/m9.figshare.9009665}
\medskip\noindent
\texttt{docs.csv} contains a list of the documents considered in this study, along with basic metadata and their position on the map. For copyright reasons, the full metadata from Web of Science can not be published. To reproduce the analysis, it would be necessary to download the abstracts for the papers shown, either using the Web of Science IDs provided, or the query documented in \cite{Grieneisen2011}.
\medskip\noindent
\texttt{topics.csv} Contains a list of the topics, along with their features discussed in this paper. The top 10 words associated with each topic are also shown
\medskip\noindent
\texttt{doctopics.csv} Contains a list of document-topic scores, which can be cross-referenced with the document and topic ids in \texttt{docs.csv} and \texttt{topics.csv}.
\medskip\noindent
\texttt{topic\_comparison.xlsx} shows models with different numbers of topics. It was used to select the topic model used for analysis in this paper.
\section*{Code Availability}
The code used to produce this paper is available at \url{https://github.com/mcallaghan/cc-topography}.
\section*{Ethics Declarations}
\subsection*{Competing Interests}
The authors declare no competing interests.
%\listoffigures
\linespread{1}
\bibliography{Mendeley}
\bibliographystyle{natunsrt}
% \include{methods}
\end{document}
\documentclass[11pt]{article}
% This is a package for drawing figures
% it is a part of standard latex 2e distribution
\usepackage{tikz}
\usetikzlibrary{shapes}
\usepackage{fullpage}
\usepackage{palatino}
\RequirePackage{ifthen}
\usepackage{latexsym}
\RequirePackage{amsmath}
\RequirePackage{amsthm}
\RequirePackage{amssymb}
\RequirePackage{xspace}
\RequirePackage{graphics}
\usepackage{xcolor}
\definecolor{OliveGreen}{rgb}{0,0.6,0}
\RequirePackage{textcomp}
\usepackage{keyval}
%\usepackage{listings}
\usepackage{xspace}
\usepackage{mathrsfs,paralist, amsmath,amssymb,url,listings,mathrsfs}
%\usepackage{pvs}
%\usepackage{supertabular,alltt,latexsym}
%\usepackage{multicol,multirow,epsfig}
%\usepackage[dvips, usenames]{color}
\usepackage{framed}
\usepackage{lipsum}
%\usepackage[dvipsnames]{color}
\usepackage{multicol}
% copyright notice
\definecolor{reddish}{rgb}{1,.8,0.8}
\definecolor{blueish}{rgb}{0.8,.8,1}
\definecolor{greenish}{rgb}{.8,1,0.8}
\definecolor{yellowish}{rgb}{1,1,.20}
\usepackage[pdftex]{hyperref}
\hypersetup{
pdftitle={Lecture notes for Modeling and Verification of Real-time and Hybrid Systems},
pdfauthor={Sayan Mitra},
colorlinks=true,
citecolor={blue},
linkcolor = {blue},
pagecolor={blue},
backref={true},
bookmarks=true,
bookmarksopen=false,
bookmarksnumbered=true
}
%\newcommand{\remove}[1]{}
\input{prelude1}
\newcommand{\handout}[6]{
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in { {\bf ECE/CS 584: Embedded and CPS Verification } \hfill #2 }
\vspace{4mm}
\hbox to 5.78in { {\Large \hfill #5 \hfill} }
\vspace{2mm}
\hbox to 5.78in { {\Large \hfill #6 \hfill} }
\vspace{2mm}
\hbox to 5.78in { {\em #3 \hfill #4} }
}
}
\end{center}
\vspace*{4mm}
}
\newcommand{\smallheader}[4]{
\noindent
\begin{center}
\framebox{
\vbox{
\hbox to 5.78in { {\bf ECE/CS 584: Embedded and CPS System Verification } \hfill #2 }
\vspace{2mm}
\hbox to 5.78in { {\em #3 \hfill #4} }
}
}
\end{center}
\vspace*{4mm}
}
\newcommand{\lecture}[4]{\handout{#1}{#2}{#3}{Scribe: #4}{Lecture #1}}
\newcommand{\homework}[2]{\smallheader{#1}{Spring 2016}{Homework #1}{#2}}
\newcommand{\solution}[2]{\smallheader{#1}{Spring 2016}{Solutions for Homework #1}{#2}}
\newcommand{\interestingfact}[1]{
\noindent
\begin{center}
\colorbox{yellowish}{
\parbox{11.5cm}{{\bf Factoid.} #1}
}
\end{center}
\vspace*{4mm}
}
%\definecolor{MyGray}{rgb}{0.96,0.97,0.98}
\makeatletter\newenvironment{color1box}{%
\begin{lrbox}{\@tempboxa}\begin{minipage}{\columnwidth}}{\end{minipage}\end{lrbox}%
\colorbox{reddish}{\usebox{\@tempboxa}}
}\makeatother
\makeatletter\newenvironment{color3box}{%
\begin{lrbox}{\@tempboxa}\begin{minipage}{\columnwidth}}{\end{minipage}\end{lrbox}%
\colorbox{blueish}{\usebox{\@tempboxa}}
}\makeatother
% 1-inch margins, from fullpage.sty by H.Partl, Version 2, Dec. 15, 1988.
\topmargin 0pt
\advance \topmargin by -\headheight
\advance \topmargin by -\headsep
\textheight 8.9in
\oddsidemargin 0pt
\evensidemargin \oddsidemargin
\marginparwidth 0.5in
\textwidth 6.5in
\parindent 0in
\parskip 1.5ex
%\renewcommand{\baselinestretch}{1.25}
\begin{document}
\homework{2 on Hybrid Models and Computation--- Due on March $2^{nd}$, 2016}{Chiao Hsieh}
\paragraph{Problem 1 (10 points).}
Give an example where the composition $\A = \A_1 \| \A_2$ of two compatible hybrid input/output automata $\A_1$ and $\A_2$ is not a hybrid input/output automaton. Recall, an HIOA has to satisfy the input action and input trajectory enabled conditions.
Give a sufficient condition under which $\A = \A_1 \| \A_2$ is a hybrid I/O automaton.
Ans.
The example automata $\A_1$ and $\A_2$ are given below;
both automata have no actions and no discrete transitions.
\begin{multicols}{2}
\begin{lstlisting}[language=ioaLang, mathescape=true]
automaton $\A_1$()
variables
input $u_1:\Real$;
output $y_1:\Real$;
internal $x_1:\Real = 0$;
trajectories
evolve $x_1 = u_1$; $y_1 = x_1$
invariant $true$
\end{lstlisting}
\vfill\columnbreak
\begin{lstlisting}[language=ioaLang, mathescape=true]
automaton $\A_2$()
variables
input $y_1:\Real$, $u_2:\Real$;
output $u_1:\Real$;
internal $x_2:\Real = 0$;
trajectories
evolve $x_2 = y_1 + 1$; $u_1 = x_2$
invariant $true$
\end{lstlisting}
\end{multicols}
Clearly, $\A_1$ and $\A_2$ are compatible.
However, the composition $\A_1 \| \A_2$ does not satisfy the input trajectory enabled condition.
Since both automata have no actions and no discrete transitions,
satisfying input trajectory enabled condition~(\emph{E2}) requires that
\[
\forall \vx \in val(X), \forall v \in Traj(U), \exists \tau \in \T,(\tau.\fstate = \vx \land \tau \restrrange U = v)
\]
In $\A_1 \| \A_2$, $X = \{x_1, x_2\}$ and $U = (\{u_1\} \cup \{y_1, u_2\}) \setminus (\{y_1\} \cup \{u_1\}) = \{u_2\}$.
We pick $(x_1,x_2)=(0,0) \in val(X)$ as a counterexample violating E2.
If there is a trajectory $\tau \in \T$ with $\tau.\fstate = (0,0)$, then $y_1 = x_1 = 0$ from $\A_1$ but $y_1 = x_2 - 1 = -1$ from $\A_2$.
Hence, no $\tau$ can satisfy E2. We conclude that $\A_1 \| \A_2$ is not an HIOA.
A sufficient condition for $\A_1 \| \A_2$ to be an HIOA is that $U_1 \cap Y_2 = \emptyset$,
that is, the output variables of $\A_2$ are not used as inputs by $\A_1$,
or more generally that there is no dependency loop in the composition.
More sufficient conditions can be found in Section 6.3 of~\cite{Lynch2003105}.
\newpage
\paragraph{Problem 2 (10 points).}
Consider the composed hybrid automaton $\A = \A_1 \| \A_2$ and
an alternating sequence $\alpha = \tau_0, a_1, \ldots \tau_n$ where each $a_i$ is an action of $\A$ and each $\tau_i$ is a trajectory of the variables of $A$.
Note that $\alpha$ may not be an execution of $\A$.
Show that if $\alpha \lceil (A_i, V_i)$ is an execution of $\A_i$ for each $i \in \{1, 2\}$,
then $\alpha$ is indeed an execution of $\A$.
%
Here $\alpha \lceil (A_i, V_i)$ denotes the restriction of $\alpha$ to the set of actions $A_i$ and the set of variables $V_i$.
Ans.
Let $\alpha \lceil (A_i, V_i) = \tau_0^i, a_1^i, \ldots, \tau_{m_i}^i$
with last trajectory $\tau_{m_i}^i$.
If $\alpha \lceil (A_i, V_i)$ is an execution of $\A_i$ for each $i \in \{1, 2\}$,
then the following holds by definition.
\begin{align}
&\forall j, \tau_j^i \in \T_i \land a_j^i \in A_i \land \tau_j^i.\lstate \arrow{a_{j+1}^i} \tau_{j+1}^{i}.\fstate \label{eqn:p1}\\
&\tau_0^i.\fstate \in \Theta_i \label{eqn:p2}
\end{align}
And we have to prove $\alpha = \tau_0, a_1, \ldots \tau_n$ satisfies the above two conditions.
By definition of $\alpha \lceil (A_i, V_i)$ at Section 3.3.4 in~\cite{Lynch2003105},
we can prove (\ref{eqn:p1}) by induction on the sequence.
\textbf{Base:}\\
The base case is $\alpha = \tau_0$ without actions.
We know that $\tau_0 \restrrange V_i$ should be a prefix of $\tau_0^i$.
That is, $\tau_0 \restrrange V_i \in \T_i$ and
\[
\because \forall i, \tau_0 \restrrange V_i \in \T_i \qquad \therefore \tau_0 \in \T
\]
Property~(\ref{eqn:p1}) holds in the base case because there are no actions.
\textbf{Induction:}\\
Let $\alpha = \alpha' a_j \tau_j$ for a transition $a_{j}, \tau_{j}$.
By induction hypothesis, $\alpha'$ satisfies property (\ref{eqn:p1}).
Also from definition of composition, $\tau_j\restrrange V_i \in \T_i$.
If the action $a_j \in A_i$, there must be the last discrete transition $\tau_{m_i-1}^i, a_j, \tau_{m_i}^i$ in $\alpha \lceil (A_i, V_i)$.
Therefore, we know $\alpha'\lceil (A_i, V_i).\lstate = \tau_{m_i-1}^i.\lstate$ and $\tau_{j} \restrrange V_i.\fstate = \tau_{m_i}^i.\fstate$.
Because property (\ref{eqn:p1}) holds for $\alpha \lceil (A_i, V_i)$,
\begin{equation*}
\because \tau_{m_i-1}^i.\lstate \arrow{a_{j}} \tau_{m_i}^i.\fstate \quad
\therefore \alpha'\lceil (A_i, V_i).\lstate \arrow{a_{j}} \tau_{j} \restrrange V_i.\fstate \\
\end{equation*}
If the action $a_{j} \notin A_i$,
the last trajectory $\tau'$ of $\alpha'\lceil (A_i, V_i)$ will concatenate with $\tau_j \restrrange V_i$ to form
the last trajectory of $\alpha\lceil (A_i, V_i)$.
Formally, $\tau_{m_i}^i = \tau'\concat (\tau_j\restrrange V_i)$.
By definition of concatenation,
\begin{align*}
\because &\alpha'\lceil (A_i, V_i).\lstate = \tau'.\lstate = \tau_j\restrrange V_i.\fstate \nonumber \\
\therefore& \alpha'\lceil (A_i, V_i).\lstate \arrow{a_{j}} \tau_j \restrrange V_i.\fstate
\end{align*}
Now we know that, whether or not $a_{j} \in A_i$,
the last action $a_j$ and trajectory $\tau_j$ of $\alpha$ always satisfy $\alpha'.\lstate \arrow{a_{j}} \tau_{j}.\fstate$.
Also, from the induction hypothesis, every trajectory and action in $\alpha'$ satisfies (\ref{eqn:p1}).
We conclude by induction that $\alpha$ always satisfies property (\ref{eqn:p1}).
Finally, for property (\ref{eqn:p2}), since $\tau_0 \restrrange V_i$ should be a prefix of $\tau_0^i$,
$\tau_0\restrrange V_i.\fstate = \tau_0^i.\fstate \in \Theta_i$ and
\[
\because \forall i, \tau_0\restrrange V_i.\fstate = \tau_0^i.\fstate \in \Theta_i \quad \therefore \tau_0.\fstate \in \Theta
\]
\newpage
\paragraph{Problem 3. (20 points).}
Consider an idealized billiard table of length $a$ and width $b$. This table has no pockets, its surface has no friction, and its boundary bounces the balls perfectly. Write a hybrid automaton model of the position of {\bf two\/} balls on this table. The balls have some initial velocities. The balls bounce from the walls when their position is at the boundary. They collide whenever $|x_1 - x_2| \leq \epsilon$ and $|y_1 - y_2| \leq \epsilon$ and their velocity vectors are pointing towards each other. Here $\epsilon$ is some constant.
Whenever a bounce occurs, the appropriate velocity changes sign.
Whenever a collision occurs, the balls exchange their velocity vectors. Make all the variables {\bf internal\/}. Wall bounces are modeled by an output action called $\act{bounce}$ and collision are modeled by an output action $\act{collision}$.
Ans.
\begin{lstlisting}[language=ioaLang, mathescape=true]
automaton BilliardTable($a$:Real, $b$:Real, $\epsilon$:Real)
variables
internal $x_1$:Real, $y_1$:Real, $x_2$:Real, $y_2$:Real
initially $\forall i \in [1, 2], 0 \leq x_i \leq a \land 0 \leq y_i \leq b$,
$vx_1$:Real, $vy_1$:Real, $vx_2$:Real, $vy_2$:Real
initially $\forall i \in [1, 2], 0 < vx_i \land 0 < vy_i$
signature
output bounce(i:[1, 2])
output collision
transitions
output bounce(i:[1, 2])
pre $x_i= 0 \lor x_i= a \lor y_i= 0 \lor y_i= b$
eff if ($x_i=0 \lor x_i=a$) then
$vx_i := -vx_i$;
fi
if ($y_i=0 \lor y_i=b$) then
$vy_i := -vy_i$;
fi
output collision
pre $|x_1 - x_2| \leq \epsilon \land |y_1 - y_2| \leq \epsilon \land is\_colliding$
eff swap($vx_1, vx_2$)
swap($vy_1, vy_2$)
trajectories
evolve d$(x_i) = vx_i$; d$(y_i) = vy_i$; d$(vx_i) = 0$; d$(vy_i) = 0$;
invariant $0 \leq x_i \leq a \land 0 \leq y_i \leq b$
\end{lstlisting}
The $is\_colliding$ predicate captures the condition when the velocity vectors of both balls are pointing toward each other.
\[
is\_colliding := \frac{x_1 - x_2}{vx_1 - vx_2}<0 \land \frac{y_1 - y_2}{vy_1 - vy_2}<0 \land \frac{vy_1}{vx_1} = \frac{vy_2}{vx_2}
\]
\newpage
\paragraph{Part b.} State conservation of momentum as an invariant property of the automaton. Is this an inductive invariant?
Ans.
Conservation of momentum does not hold, since the direction of a ball's velocity changes when it bounces off a wall.
Instead, we discuss conservation of kinetic energy.
Since we assume the two balls have the same mass, conservation of kinetic energy simplifies to
\[
CE := \left({vx_1^2 + vy_1^2} + {vx_2^2 + vy_2^2}\right) = E_0
\]
where $E_0$ is the initial energy ${vx_1^2(t_0) + vy_1^2(t_0)} + {vx_2^2(t_0) + vy_2^2(t_0)}$.
The set of states satisfying $CE$ trivially contains the initial states,
and it holds for both \texttt{bounce} and \texttt{collision} transitions.
Further, since $d(vx_i) = 0$ and $d(vy_i) = 0$ for trajectories,
the velocities remain the same in any trajectory;
hence it holds for any trajectory $\tau$ that $\tau.\fstate \in CE \implies \tau.\lstate \in CE$.
Conservation of kinetic energy is an inductive invariant for the automaton.
\paragraph{Part c.} Prove an inductive invariant (using the theorem in lecture 9) which implies conservation of momentum.
Ans.
Let $S$ be $vx_1^2 + vx_2^2 = EX_0 \land vy_1^2 + vy_2^2 = EY_0$
where $EX_0$ is ${vx_1^2(t_0) + vx_2^2(t_0)}$ the initial energy along x-axis.
Similarly, $EY_0$ is ${vy_1^2(t_0) + vy_2^2(t_0)}$.
It is obvious that $S \implies CE$.
To prove $S$ is an inductive invariant, we first check if $\Theta \subseteq S$,
and it trivially holds when we substitute $vx_i, vy_i$ with $vx_i(t_0), vy_i(t_0)$.
Further, we compute $Post_\A(S) = \bigcup_{a\in A} Post\_Trans(S, a) \cup Post\_Traj(S)$.
From our automaton, we derive the post states for each discrete transition.
\[
\begin{array}{rcl}
Post\_Trans(S, \texttt{bounce(1)})&=& (-vx_1)^2 + vx_2^2 = EX_0 \land (-vy_1)^2 + vy_2^2 = EY_0\\
Post\_Trans(S, \texttt{bounce(2)})&=& vx_1^2 + (-vx_2)^2 = EX_0 \land vy_1^2 + (-vy_2)^2 = EY_0\\
Post\_Trans(S, \texttt{collision})&=& vx_2^2 + vx_1^2 = EX_0 \land vy_2^2 + vy_1^2 = EY_0\\
\end{array}
\]
Clearly, each set of post states is essentially the same as $S$.
Finally for trajectories,
\[
Post\_Traj(S) = \{ v' \mid \exists \tau \in \T, \tau.\fstate \in S \land \tau.\lstate = v'\}
\]
Since $\frac{d(vx_i)}{dt} =0$ and $\frac{d(vy_i)}{dt} = 0$, $Post\_Traj(S)$ is also the same set as $S$.
Therefore, we prove that $Post_\A(S) \subseteq S$ and $S$ is an inductive invariant.
\newpage
\paragraph{Problem 4 (30 points).}
Consider two satellites orbiting the earth on circular orbits with (constant) angular speeds $\omega_1$ and $\omega_2$.
Write a hybrid automaton model of the position of the satellite-pair in the $[0,2\pi]^2$ space. When one of the satellites hit $0$ or $2\pi$ its position has to be reset. Model this using an output action called $\act{jump}$.
Ans.
\begin{lstlisting}[language=ioaLang, mathescape=true]
automaton Satelite($\omega_1$:Real, $\omega_2$:Real)
type indices:enumeration [1, 2]
variables internal $pos$:[indices $\to$ Real] initially $\forall i \in [1,2], 0\leq pos[i]\leq 2\pi$;
signature
output jump(i:indices)
transitions
output jump(i:indices)
pre $pos[i] \leq 0 \lor pos[i] \geq 2\pi$
eff if $pos[i] \leq 0$ then
$pos[i] := pos[i] + 2\pi$
else if $pos[i] \geq 2\pi$ then
$pos[i] := pos[i] - 2\pi$
fi
trajectories
evolve d$(pos[1]) = \omega_1$; d$(pos[2]) = \omega_2$
invariant $\forall i \in [1,2], 0 \leq pos[i] \leq 2\pi$
\end{lstlisting}
Note that the automaton will jump repeatedly if $\omega_i = 0$ and $pos[i] = 0$.
%\paragraph{Part b}
%Consider the model of a {\bf single\/} ball in the billard table of Problem 3.
%For appropriate choices of $\omega_1, \omega_2$, and the initial position of the satellites, show that this hybrid automaton strongly simulates the hybrid automaton modeling the single ball in the billiard table. Write down the forward simulation relation. Check that the relation is preserved by the transitions and trajectories.
\paragraph{Problem 5 a. (10 points).}
Model the following hysteresis-based switching system as a hybrid automaton.
The automaton has (at least) $n$ continuous variables $x_1,\ldots,x_n$ and a discrete variable called $\mathit{m}$ which takes the values in the set
$\mathit{mode_1},\ldots,\mathit{mode}_n$.
We say that the system is in mode $i$, when $m = \mathit{mode}_i$.
There are $n \times (n-1)$ actions $\auto{switch}(i,j)$, where $i,j \in [n]$ and $j \neq i$.
%
When in mode $i$, $\dot{x}_i = a_i x_i$, where $a_i >0$ is a positive constant and $\dot{x}_j = 0$ for all $j \neq i$.
When in mode $i$, for any $j \neq i$, if $x_i$ becomes greater than $(1+h) x_j$ then the automaton switches to mode $j$, otherwise it continues in mode $i$.
Here $h > 0$ is a parameter of the model. Is your model deterministic?
Ans.
The model is non-deterministic, as multiple actions can be enabled at the same time.
Consider an automaton with 3 modes, all $a_i = 1$, and $h = 1$;
it can be implemented as follows.
\begin{lstlisting}[language=ioaLang, mathescape=true]
automaton Switch()
variables
internal $x_1$:Real, $x_2$:Real, $x_3$:Real, $m$:$\{\mathit{mode_1},\mathit{mode}_2,\mathit{mode}_3\}$
signature
internal switch(1, 2); internal switch(2, 1); internal switch(3, 1);
internal switch(1, 3); internal switch(2, 3); internal switch(3, 2);
transitions
internal switch(1, 2) pre $m=\mathit{mode_1} \land x_1 > 2 x_2$ eff $m := \mathit{mode}_2$
internal switch(1, 3) pre $m=\mathit{mode_1} \land x_1 > 2 x_3$ eff $m := \mathit{mode}_3$
internal switch(2, 1) pre $m=\mathit{mode_2} \land x_2 > 2 x_1$ eff $m := \mathit{mode}_1$
internal switch(2, 3) pre $m=\mathit{mode_2} \land x_2 > 2 x_3$ eff $m := \mathit{mode}_3$
internal switch(3, 1) pre $m=\mathit{mode_3} \land x_3 > 2 x_1$ eff $m := \mathit{mode}_1$
internal switch(3, 2) pre $m=\mathit{mode_3} \land x_3 > 2 x_2$ eff $m := \mathit{mode}_2$
trajectories
evolve d$(x_1) = x_1$; d$(x_2) = 0$; d$(x_3) = 0$ invariant $m = \mathit{mode}_1\land x_1 \leq 2 x_2 \land x_1 \leq 2 x_3$
evolve d$(x_1) = 0$; d$(x_2) = x_2$; d$(x_3) = 0$ invariant $m = \mathit{mode}_2\land x_2 \leq 2 x_1 \land x_2 \leq 2 x_3$
evolve d$(x_1) = 0$; d$(x_2) = 0$; d$(x_3) = x_3$ invariant $m = \mathit{mode}_3\land x_3 \leq 2 x_1 \land x_3 \leq 2 x_2$
\end{lstlisting}
Given the initial state $(x_1,x_2,x_3,m) = (1, 1, 1, \mathit{mode}_1)$,
the state evolves until $x_1 > 2$.
At that point, both actions \texttt{switch(1,2)} and \texttt{switch(1,3)} are enabled,
so either transition may be taken.
That is, the automaton is non-deterministic.
\paragraph{Problem 5 b (20 points).}
Write a program (in any language) to generate simulations of the above automaton for given values of the parameters $\{a_i\}$ and $h$.
First, set the initial values of the variables.
The basic idea for simulating the system is to write a big {\bf For\/}-loop in which each iteration advances time by some small amount $\delta >0$ and updates the current state of the system.
So, in the body of the loop you will first compute, for each $i$, $\tau.lstate \lceil x_i$ as a function of $\tau.fstate \lceil x_i, \tau.fstate \lceil m$ and $\delta$.
Then check if $\tau.\lstate$ satisfies any of the switching conditions and in that case update $m$. Ignore the fact that switches may actually occur sometime before the $\delta$ time interval.
With start state $x_i = 0$ for each $i$ and $m = 0$, plot two executions of the system with two different values of $h$.
Ans.
Notice that, with the dynamics $\dot{x}_i(t) = a_i x_i(t)$ and initial value $x_i(0) = 0$,
we consider trajectories such as
\[
x_i(t) = 0 \quad \text{or} \quad x_i(t) = e^{a_i t} - 1
\]
Hence, a trivial execution is $m(t) = 0$ and, for all $i$, $x_i(t) = 0$,
and no discrete transition occurs.
For another execution, we use $x_i(t) = e^{a_i t} - 1$ as the first trajectory.
The execution is simulated with 3 modes, $a_0=1, a_1=1, a_2=1$, and $h=9$.
The plot is shown in Figure~\ref{fig:execution}.
\begin{figure}
\centering
\caption{Colors: \textcolor{blue}{$m$}, \textcolor{OliveGreen}{$x_0$}, \textcolor{red}{$x_1$}, \textcolor{cyan}{$x_2$}}\label{fig:execution}
\includegraphics{execution-1}
\end{figure}
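For reference, a minimal fixed-step simulator along the lines described in the problem statement is sketched below (illustrative Python, not the script used to produce Figure~\ref{fig:execution}; the initial values are chosen slightly above zero so that the dynamics are non-trivial).
\begin{lstlisting}[language=Python]
import math

def simulate(a, h, x0, m0, delta=1e-3, t_end=5.0):
    """Fixed-step simulation of the hysteresis switching automaton."""
    n = len(a)
    x, m, t = list(x0), m0, 0.0
    trace = [(t, m, tuple(x))]
    while t < t_end:
        # advance time by delta: only the active variable x_m evolves
        x[m] *= math.exp(a[m] * delta)
        t += delta
        # check the switching conditions; take the first enabled switch,
        # which resolves the non-determinism arbitrarily
        for j in range(n):
            if j != m and x[m] > (1 + h) * x[j]:
                m = j
                break
        trace.append((t, m, tuple(x)))
    return trace

# three modes, a_i = 1, h = 9; starting exactly at x_i = 0 would give the
# trivial execution x_i(t) = 0, so start slightly above zero instead
print(simulate([1.0, 1.0, 1.0], 9.0, [0.1, 0.1, 0.1], 0)[-1])
\end{lstlisting}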
\bibliography{refs}
\bibliographystyle{plain}
\end{document}
\newpage
\section{Sample and sample characteristics}
\subsection{Sample}
\begin{definition}
A vector \(\vec{X} = \left( x_1, \ldots, x_n \right)\), whose components \(x_i \sim P(\xi)\)
are independent identically distributed (i.i.d.) random variables, is called
a sample of size $n$ with distribution $P(\xi)$ (from the general population $P(\xi)$).
\end{definition}
\begin{remark}
$F_{\vec{X}} \left( y_1, \ldots, y_n \right) = P(x_1 \leq y_1,
\ldots, x_n \leq y_n) = \prod_{i=1}^{n} P(x_i \leq y_i) =
\prod_{i=1}^{n} F_{\xi} (y_i) $, where $F_{\xi}(x) = P(\xi \leq x)$ is the
distribution function of $\xi$.
\end{remark}
By $\mathcal{F} = \{F_{\xi}\}$ we denote the class of admissible distribution
functions for the random variable $\xi$.
In a parametric model, \(\mathcal{F} = \{F(x, \theta), \theta \in \Theta\}\), where $\Theta$ is the set of all admissible values of $\theta$.
\begin{example}
$P(\xi)$ is a normal distribution with known variance $\sigma^2$ but unknown expectation $\theta$. Then the parametric model is:
\[
\mathcal{F} = \{F(x, \theta), \theta \in \Theta = (-\infty, \infty)\}, \text{ where } F(x, \theta) \text{ has density }
\]
\[
f(x, \theta) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\theta)^2}{2\sigma^2}}, x \in \mathbb{R}
.\]
\end{example}
\begin{example}
$P(\xi)$ has a Poisson distribution with unknown parameter $\theta$.
Then the parametric model is:
\[
\mathcal{F} = \{F(x, \theta), \; \theta \in \Theta = (0, \infty)\}
.\]
\[
F(x, \theta) = P(\xi = x) = \frac{\theta^{x}}{x!} e^{-\theta}, x = 0,1,2,\ldots
.\]
\end{example}
\begin{definition}
A measurable function of the sample (and only of the sample) is called a statistic.
\[
T_n(\vec{X}) \text{ is a statistic}
.\]
\end{definition}
\begin{example}
\[
x_1, \ldots, x_n \text{ are i.i.d. random variables}
.\]
\[
T(x_1, \ldots, x_n) = x_1
.\]
\[
T(x_1, \ldots, x_n) = \frac{1}{n} \sum_{i=1}^{n} x_i
.\]
\[
T(x_1, \ldots,x_n) = \min(x_1, \ldots, x_n)
.\]
\end{example}
\begin{example}
\[
x_i \sim Poiss(\theta), \quad \theta \text{ is an unknown parameter}
.\]
\(T(x_1, \ldots, x_n) = \frac{x_1}{\theta}\) is not a statistic, since it depends on the unknown parameter $\theta$.
\end{example}
\subsection{Variation series of the sample}
Suppose $\vec{X} = (X_1, \ldots, X_n)$ a sample, $\vec{x} = (x_1, \ldots, x_n)$ a realization of the sample.
Let
\begin{align*}
x_{(1)} = \min(x_1, \ldots, x_n) \\
x_{(2)} \text{ -- the second smallest value} \\
\hdots \\
x_{(n)} = \max(x_1, \ldots, x_n)
.\end{align*}
In probability and statistics, a realization, observation, or observed value, of a random variable is the value that is actually observed (what actually happened).
Let $X_{(k)}$ be the random variable that, for every realization $\vec{x}$ of the sample $\vec{X}$, takes the value $x_{(k)}$. Then the series
\[
R = \left( X_{(1)}, X_{(2)}, \ldots, X_{(n)} \right)
.\]
is called the variation series of the sample.
$X_{(k)}$ is the $k$th order statistic.
\begin{remark}
The order statistics $X_{(1)}, \ldots, X_{(n)}$ are neither independent nor identically distributed.
\end{remark}
Let's find $F_{X_{(1)}}, F_{X_{(k)}}, F_{X_{(n)}}$:
\begin{gather*}
F_{X_{(1)}}(y) = P( X_{(1)} \leq y ) = P( \min \left( X_1, \ldots, X_n \right) \leq y) = \\
= 1 - P( \min \left( X_1, \ldots, X_n \right) > y) = 1 - P( X_1 > y, \ldots, X_n > y) = \\
= 1 - \prod_{i=1}^{n} P( X_i > y) = 1 - \left( 1 - F(y) \right) ^{n} ;
\end{gather*}
\begin{gather*}
F_{X_{(n)}} (y) = P \left( \max \left( X_1, \ldots, X_n \right) \leq y \right) = \\
= P\left( X_1 \leq y, \ldots, X_n \leq y \right) = \left[ F (y) \right] ^{n}.
\end{gather*}
\begin{gather*}
F_{ X_{(k)} } (y) = P \left( X_{(k)} \leq y \right) =
P \left( \text{ at least $k$ elements do not exceed $y$} \right) = \\
= \sum_{m=k}^{n} C_n^m \left[ F(y) \right]^m \left( 1 - F(y) \right) ^{n-m}.
\end{gather*}
\begin{figure}[ht]
    \centering
    \incfig{distribution-of-k-th-ordinal-statistics}
    \caption{Distribution of the $k$th order statistic}
    \label{fig:distribution-of-k-th-ordinal-statistics}
\end{figure}
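As a quick numerical illustration of this formula (a sketch that is not part of the lecture; \texttt{numpy} is assumed to be available), one can compare it with a simulation for uniform samples, where $F(y) = y$:
\begin{verbatim}
import numpy as np
from math import comb

n, k, y = 5, 3, 0.4
rng = np.random.default_rng(0)

# empirical probability that the k-th order statistic does not exceed y
samples = np.sort(rng.random((200_000, n)), axis=1)
empirical = np.mean(samples[:, k - 1] <= y)

# the formula: at least k of the X_i do not exceed y (here F(y) = y)
formula = sum(comb(n, m) * y**m * (1 - y)**(n - m) for m in range(k, n + 1))

print(empirical, formula)   # the two values agree closely
\end{verbatim}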
\begin{proposition}[joint distribution of variation series]
Let \( \vec{X} = \left( X_1, \ldots, X_n \right) \) be a sample in which each $X_i$ has density
$f(x)$. Then:
\[
f_{( X_{(1)}, \ldots, X_{(n)})} (y_1, \ldots, y_n) = n! f(y_1) \ldots f(y_n) \times
\mathds{1}(y_1 \leq y_2 \leq \ldots \leq y_n)
.\]
\end{proposition}
\begin{proof}
Consider distribution function of variation series:
\[
F_{(X_{(1)}, \ldots, X_{(n)})} (y_1, y_2, \ldots, y_n) =
P\left( X_{(1)} \leq y_1, \ldots, X_{(n)} \leq y_n \right).
\]
Suppose that $y_1 > y_2$. Then $X_{(2)} \leq y_2 \Rightarrow X_{(1)} \leq y_1$
(since $X_{(1)} \leq X_{(2)} \leq y_2 < y_1$).
\[
\left\{ X_{(2)} \leq y_2 \right\} \cap \left\{ X_{(1)} \leq y_1 \right\} = \left\{ X_{(2)} \leq y_2 \right\}
.\]
Therefore:
\[
F_{\left( X_{(1)}, \ldots, X_{(n)} \right) } (y_1, \ldots, y_n) =
P\left( X_{(2)} \leq y_2, \ldots, X_{(n)} \leq y_n \right)
.\]
Since the right-hand side does not depend on $y_1$,
\[
f_R (y_1, \ldots, y_n) = 0
\]
whenever the ordering condition $y_1 \leq y_2 \leq \ldots \leq y_n$ is violated.
Let $\Gamma = \{\left( y_1, \ldots, y_n \right) \in \mathbb{R}^n : y_1 \leq \ldots \leq y_n\}$.
\[
\forall A \subset \Gamma : P\left( \left( X_{(1)}, \ldots, X_{(n)} \right) \in A \right) =
\int_{A}^{} f_R (y_1, \ldots, y_n) dy_1 \ldots dy_n
.\]
On the other hand:
\begin{gather*}
P\left( \left( X_{(1)}, \ldots, X_{(n)} \right) \in A \right) =
\sum_{\sigma \in S_n}^{} P\left( \left( X_{\sigma(1)}, \ldots, X_{\sigma(n)} \right) \in A \right) = \\
= n! \cdot P\left( \left( X_1, \ldots, X_n \right) \in A \right)
= n! \cdot \int_{A}^{} f_{\vec{X}} \left( y_1, \ldots, y_n \right) dy_1\ldots dy_n
\end{gather*}
Hence:
\[
f_R(y_1, \ldots, y_n) = n! f_{\vec{X}} (y_1, \ldots, y_n) = n! f(y_1) \cdot \ldots \cdot f(y_n)
.\]
Finally:
\[
f_R (y_1, \ldots, y_n) = n! \prod_{i=1}^{n} f(y_i) \cdot \mathds{1}(y_1 \leq \ldots \leq y_n)
.\]
\end{proof}
\subsection{Empirical distribution function}
Let $\vec{X} = \left( X_1, \ldots, X_n \right) $ be a sample.
Consider:
\[
F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathds{1}\left(X_i \leq x\right)
.\]
$F_n(x)$ is called the \textbf{empirical distribution function}.
$F_n(x)$ is a random function: for every $x \in \mathbb{R}$ it takes values in $\{0, \frac{1}{n}, \frac{2}{n}, \ldots, 1\}$.
Moreover:
\[
P\left( F_n (x) = \frac{k}{n} \right) = C_n^k \left[ F(x) \right] ^k
(1 - F(x))^{n-k}, \;\;\;\; k = \overline{0, n}
.\]
\[
n \cdot F_n(x) \sim \operatorname{Bin} (n, F(x))
.\]
Hence:
\begin{enumerate}
\item
\begin{align*}
M \left[ n \cdot F_n(x) \right] &= n \cdot F(x) \\
&\Downarrow \\
M F_n(x) &= F(x)
.\end{align*}
\item
\begin{align*}
\mathcal{D}(n \cdot F_n(x)) &= n \cdot F(x) (1-F(x)) \\
&\Downarrow \\
\mathcal{D}F_n(x) &= \frac{1}{n} F(x) (1-F(x))
.\end{align*}
\end{enumerate}
Using law of large numbers:
\[
F_n(x) = \frac{\mathds{1}(X_1 \leq x) + \ldots + \mathds{1} (X_n \leq x)}{n}
\underset{n \to \infty}{\overset{P}{\to}}
M \mathds{1}(X_1 \leq x) = P(X_1 \leq x) = F(x)
.\]
Using the Central Limit Theorem (CLT):
\[
\frac{n \cdot F_n(x) - n F(x)}{\sqrt{nF(x) (1-F(x))} }
\underset{n\to\infty}{\overset{d}{\to}}
\mathcal{N}(0, 1)
\]
\[
\sqrt{n} \cdot \frac{F_n(x) - F(x)}{\sqrt{F(x)(1-F(x))}}
\overset{d}{\to}
\mathcal{N}(0,1)
\]
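A short illustration of how $F_n$ is computed from a sample and how it approaches $F$ (a sketch that is not part of the lecture; \texttt{numpy} and \texttt{scipy} are assumed):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sample = rng.normal(size=1000)            # sample from N(0, 1)

def F_n(x, sample):
    """Empirical distribution function: fraction of sample values <= x."""
    return np.mean(sample <= x)

# F_n(x) approaches the true distribution function F(x) as n grows
for x in (-1.0, 0.0, 1.0):
    print(x, F_n(x, sample), norm.cdf(x))
\end{verbatim}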
\subsection{Histogram and frequency polygon}
Suppose that $X_1, \ldots, X_n$ is a sample, $X_i \sim \xi$, and $\xi$ has a continuous density $f(x)$ (unknown).
Let $\mathcal{I}_1, \ldots, \mathcal{I}_m$ be a partition of the range $\mathcal{I}$ of possible values of $\xi$.
Let $\nu_r = \sum_{j=1}^{n} \mathds{1}(X_j \in \mathcal{I}_r)$ be the number of elements of the sample that fall in $\mathcal{I}_r$.
Then by the Law of Large Numbers:
\[
\frac{\nu_r}{n} = \frac{\sum_{j=1}^{n} \mathds{1}(X_j \in \mathcal{I}_r)}{n}
\underset{n\to\infty}{\overset{P}{\to}}
M \mathds{1}(X_1 \in \mathcal{I}_r) = P(X_1 \in \mathcal{I}_r) =
\int_{\mathcal{I}_r}^{} f(x) dx
.\]
Since $f$ is continuous, by the mean value theorem for integrals:
\[
\int_{\mathcal{I}_r}^{} f(x) dx = \left| \mathcal{I}_r \right| \cdot f(x_r)
\]
where $x_r$ is an interior point of the interval $\mathcal{I}_r$ and $\left| \mathcal{I}_r \right| $ is the length of the interval.
For large $n$ and small enough $\left| \mathcal{I}_r \right| $ we can therefore take
\[
\frac{\nu_r}{n \cdot \left| \mathcal{I}_r \right| } \approx f(x_r)
\]
where $x_r$ is the midpoint of $\mathcal{I}_r$.
\begin{definition}
Piecewise constant function
\[
f_n(x) = \frac{\nu_r}{n \cdot \left| \mathcal{I}_r \right| } \mathds{1}(x \in \mathcal{I}_r), \;\;\;\; r = \overline{1, m}
\]
is called a histogram.
\end{definition}
For large $n$ and a fine enough partition, the histogram $f_n(x)$ is
an approximation of the true density $f(x)$.
\begin{example}
The heights of $n=500$ students were measured; the results are shown as an
interval statistical series:
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
	\hline
	Height (cm) & 145--150 & 150--155 & 155--160 & 160--165 & 165--170 & 170--175 & 175--180 & 180--185 \\
	\hline
	$\nu_r$ & 1 & 2 & 28 & 90 & 169 & 132 & 55 & 23 \\
	\hline
\end{tabular}
\end{center}
\begin{align*}
\left| \mathcal{I}_r \right| = 5; && n = 500; && f_n(x) = \frac{\nu_r}{ 2500} \mathds{1}(x \in \mathcal{I}_r)
.\end{align*}
\begin{figure}[ht]
\centering
\incfig{students-height-hystogram}
\caption{Students' height histogram}
\label{fig:students-height-hystogram}
\end{figure}
\end{example}
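The histogram heights from this example can be reproduced numerically (a sketch outside the lecture notes; \texttt{numpy} is assumed):
\begin{verbatim}
import numpy as np

edges  = np.arange(145, 190, 5)                # interval boundaries
counts = np.array([1, 2, 28, 90, 169, 132, 55, 23])
n = counts.sum()                               # 500 students
width = 5

f_n = counts / (n * width)                     # heights nu_r / (n |I_r|)

for left, height in zip(edges[:-1], f_n):
    print(f"[{left}, {left + width}): f_n = {height:.4f}")
\end{verbatim}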
\begin{definition}
The frequency polygon is a polyline that connects the midpoints of the segments in the histogram.
\end{definition}
\subsection{Sample mean}
\begin{definition}
The statistic $\overline{X} = \frac{X_1 + \ldots + X_n}{n}$ is called the
sample mean or the first sample moment.
\end{definition}
Properties:
\begin{enumerate}
\item $M \overline{X} = \frac{1}{n} \sum_{i=1}^{n} MX_i = MX_1 = m$
\item $\mathcal{D}\overline{X} = \frac{1}{n^2} \sum_{i=1}^{n} \mathcal{D}X_i = \frac{1}{n} \mathcal{D}X_1 = \frac{\sigma^2}{ n}$
\item Using Law of Large Numbers:
\[
\overline{X} = \frac{X_1 + \ldots + X_n}{n}
\underset{n\to\infty}{\overset{P}{\to}}
MX_1 = m
.\]
\item Using the Central Limit Theorem:
\[
\frac{n \cdot \overline{X} - n \cdot M\overline{X}}{\sqrt{\mathcal{D}[n\overline{X}]} }
\overset{d}{\to}
\mathcal{N}(0,1), \quad n \to \infty
\]
\[
\frac{n\left( \overline{X} - m \right) }{n \sqrt{\frac{\sigma^2}{n}} }
\overset{d}{\to}
\mathcal{N}(0,1)
\]
\[
\sqrt{n} \cdot \frac{\overline{X} - m}{\sigma}
\overset{d}{\to}
\mathcal{N}(0,1), \;\;\; n \to \infty
.\]
\end{enumerate}
\subsection{Sample variance}
\[
S^2 = \frac{1}{n} \sum_{i=1}^{n} \left( X_i - \overline{X} \right)^2
\]
where $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$.
\[
S^2 = \overline{X^2} - \left( \overline{X} \right) ^2
\]
where $\overline{X^2} = \frac{X_1^2 + \ldots + X_n^2}{n}$ is the second sample moment.
Using Law of Large Numbers:
\[
S^2 = \overline{X^2} - \left( \overline{X} \right) ^2
\underset{n \to \infty}{\overset{P}{\to}}
MX_1^2 - \left( MX_1 \right) ^2 = \mathcal{D}X_1
.\]
\[
S_o^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( X_i - \overline{X} \right) ^2
.\]
$S_o^2$ is the \textbf{unbiased sample variance}.
Then
\begin{equation}
MS_o^2 = \sigma^2
\end{equation}
Indeed:
\begin{gather*}
\sum_{i=1}^{n} \left( X_i - m \right) ^2
= \sum_{i=1}^{n} \left( \left( X_i - \overline{X} \right) + \left( \overline{X} - m \right) \right) ^2 = \\
= \sum_{i=1}^{n} \left( X_i - \overline{X} \right) ^2 + 2\left( \overline{X} - m \right) \cdot \overbrace{\sum_{i=1}^{n} \left( X_i - \overline{X} \right)}^0 +
n \cdot \left( \overline{X} - m \right) ^2 = \\
= \sum_{i=1}^{n} \left( X_i - \overline{X} \right) ^2 + n \left( \overline{X} - m \right) ^2 = \\
= \left( n-1 \right) S_o^2 + n \cdot \left( \overline{X} -m \right) ^2.
\end{gather*}
Taking the \textit{expectation} of both sides:
\begin{gather*}
M \sum_{i=1}^{n} \left( X_i - m \right) ^2 =
\sum_{i=1}^{n} M \left( X_i - m \right) ^2 =
\sum_{i=1}^{n} \mathcal{D}X_i = n \cdot \sigma^2
\end{gather*}
\begin{gather*}
M\left( \overline{X} - m \right) ^2 = M\left( \overline{X} - M\left( \overline{X} \right) \right) ^2 = \mathcal{D}\overline{X} = \frac{\sigma^2}{n}
\end{gather*}
\begin{align*}
n \cdot \sigma^2 = \left( n-1 \right) & \cdot MS_o^2 + n \cdot \frac{\sigma^2}{n} \\
& \Downarrow \\
MS_o^2 = \sigma^2
.\end{align*}
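The unbiasedness of $S_o^2$ is also easy to see in a simulation (a sketch outside the lecture notes; \texttt{numpy} is assumed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0                                   # true variance
samples = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, 10))

s2_biased   = samples.var(axis=1, ddof=0)      # S^2, divides by n
s2_unbiased = samples.var(axis=1, ddof=1)      # S_o^2, divides by n - 1

# E[S_o^2] is close to sigma^2 = 4, while E[S^2] is close to
# (n - 1)/n * sigma^2 = 3.6 for n = 10
print(s2_biased.mean(), s2_unbiased.mean())
\end{verbatim}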
\section{Data analysis}
\label{refl1:anal}
XRR and NR methods have a well documented history for the analysis of the structure of phospholipid monolayers at the air-water interface.\autocite{mohwald_phospholipid_1990,kewalramani_effects_2010,bayerl_specular_1990,johnson_structure_1991,clifton_role_2012,helm_phospholipid_1987,daillant_x-ray_1990}
Typically these have involved using a model-dependent analysis method; however, the modelling approaches have varied significantly in the number of layers used, the shape of the layers, the use of interfacial roughness, the parameterisation of constraints employed, and even the method by which the reflectometry profile was calculated from the model.
Recently, an evaluation of the applicability of different models for surfactant and phospholipid monolayers using NR outlined a view of ``best practice''.\autocite{campbell_structure_2018}
However, the constraints employed in the modelling process frequently include the volumes of the phospholipid head and tail groups.
These values are taken from a variety of other techniques; some examples are shown in Table~\ref{tab:water}.
%
\begin{sidewaystable}
\centering
\small
\caption{Phospholipid component volumes extracted from different literature sources. $V_l$ corresponds to the total phospholipid volume, $V_t$ to the tail group volume, $V_h$ to the head group volume, MD to molecular dynamics simulations, WAXS to wide-angle X-ray scattering, NB to neutral buoyancy, and DVTD to differential vibrating tube densimetry. The values for DPPC are from \cite{armen_phospholipid_1998}, \cite{sun_order_1994}, and \cite{kucerka_determination_2004,balgavy_evaluation_2001} respectively, the values for DMPC are from \cite{armen_phospholipid_1998} and \cite{kucerka_determination_2004,balgavy_evaluation_2001} respectively, the values for DLPC are from \cite{armen_phospholipid_1998} and \cite{kucerka_determination_2004,balgavy_evaluation_2001} respectively, the values for DMPG are from \cite{pan_molecular_2012}, and the values for POPG are from \cite{kucerka_scattering_2012}.}
\label{tab:water}
\begin{tabular}{l | l l l | l l | l l | l | l}
\toprule
Phospholipid & \multicolumn{3}{l|}{DPPC} & \multicolumn{2}{|l|}{DMPC} & \multicolumn{2}{|l|}{DLPC} & DMPG & POPG \\
\midrule
$V_l$/\si{\angstrom\cubed} & \num{1287.3 \pm 25.5} & \num{1148 \pm 2} & \num{1268.2 \pm 32.1} & \num{1172.5 \pm 25.1} & \num{1155.4 \pm 30.0} & \num{1057.7 \pm 24.7} & \num{1046.6 \pm 28.0} & \num{1011.4} & \num{1203} \\
$V_t$/\si{\angstrom\cubed} & \num{966.4 \pm 5.4} & \num{829 \pm 4} & \num{924.7 \pm 17.6} & \num{851.5 \pm 5.0} & \num{815.9 \pm 15.5} & \num{736.8 \pm 4.6} & \num{707.1 \pm 13.5} & \num{720.4} & \num{914} \\
$V_h$/\si{\angstrom\cubed} & \num{320.9 \pm 20.1} & \num{319 \pm 6} & \num{339.5 \pm 14.5} & \num{320.9 \pm 20.1} & \num{339.5 \pm 14.5} & \num{320.9 \pm 20.1} & \num{339.5 \pm 14.5} & \num{291.0} & \num{289} \\
\midrule
Method & MD & WAXS & NB & MD & NB & MD & NB & DVTD & MD \\
T/\si{\celsius}& 50 & 24 & 30 & 50 & 30 & 50 & 30 & 20 & 25 \\
\bottomrule
\end{tabular}
\end{sidewaystable}
%
Table~\ref{tab:water} provides a general consensus that the volume of the PC head group is \SIrange{320}{360}{\angstrom\cubed}, while the PG head group is \SIrange{289}{291}{\angstrom\cubed}.
However, these values were all determined from experiments\autocite{sun_order_1994,kucerka_determination_2004,balgavy_evaluation_2001,pan_molecular_2012} or simulations \autocite{armen_phospholipid_1998,kucerka_scattering_2012} where the head group was interacting with water molecules.
It is not clear if this will influence the volume that it occupies, and if that volume will change in the presence of a non-aqueous solvent.\footnote{Such as the DES considered herein.}
The charged nature of the zwitterionic and anionic phospholipid head groups may have different interactions with the polar, but neutral water and the charged DES.\autocite{sanchez-fernandez_self-assembly_2018}
Additionally, it is known that, on water, increased SPs and the associated Liquid-Expanded to Liquid-Condensed phase transition will lead to a compression of the phospholipid tail volume, compared to the values in Table~\ref{tab:water},\autocite{marsh_molecular_2010,small_lateral_1984} and that this compaction has not necessarily been accounted for in the literature.\autocite{campbell_structure_2018}
These factors meant that it was necessary to develop a model that was appropriate for the phospholipid chemistry while applying as much of the ``best practice'' from Campbell \emph{et al.}\autocite{campbell_structure_2018} as possible, and ensuring that the head and tail group volumes were not constrained parameters.
The absence of these normally constrained parameters meant that it was necessary to consider methods by which the reflectometry measurements could be co-refined, in a similar fashion to contrast variation co-refinement in NR.
This could be achieved by the co-refinement of reflectometry measurements at different SPs, as the model was appropriate for the phospholipid chemistry, and the different SPs were in the same phase.\footnote{Liquid-Condensed (LC) for DPPC and Liquid-Expanded (LE) for DMPC, DLPC, and DMPG.}
Therefore the head and tail group volumes remain constant, and only the surface concentration and tail thickness vary.
The chemically-consistent model that has been used in this work was implemented in the Python library \texttt{refnx}.\autocite{nelson_refnx_2019,nelson_refnx_2019-1}
The software enables the inclusion of custom model classes that feed parameters into the Abel\`{e}s model.\sidecite[discussed in detail in Section~\ref{sec:refltheory}]{abeles_sur_1948,parratt_surface_1954}
Our chemically-consistent model class can be seen in Code Block~\ref{cb:chemconsis}, and is shared under a CC BY-SA 4.0 license in the ESI for the associated publication.\autocite{mccluskey_lipids_at_airdes_2019}
In order to ensure that the phospholipid chemistry was consistent both within the phospholipid molecule and across the different SPs, Code Block~\ref{cb:const} was implemented.
%
\begin{listing}[t]
\forcerectofloat
\centering
\caption{The chemically-consistent model class that was implemented in \texttt{refnx} \cite{nelson_refnx_2019,nelson_refnx_2019-1}. The input variables are \texttt{vol} which is an array of floats containing the initial values for the head and tail group volumes, \texttt{b} which is the calculated scattering length for the head and tail groups, \texttt{d\_h} which is the initial value for the thickness of the head group region, \texttt{c\_length} which is the number of carbon atoms in the phospholipid tail, \texttt{str} which is the name to be given to the object. The \texttt{slabs} function will return an array of floats representing the scattering length density profile.}
\lstinputlisting[nolol,firstline=1,lastline=48]{reports/code_blocks/mol_vol.py}
\label{cb:chemconsis}
\end{listing}
%
\begin{listing}[t]
\forceversofloat
\centering
\caption{The \texttt{set\_constraints} function that was used to impose chemical consistency on the phospholipid monolayer structure. The input variables are \texttt{lipids} and \texttt{structures}, which are \texttt{refnx} objects that contain information about the phospholipids and monolayers, and \texttt{hold\_tails}, \texttt{hold\_rough}, and \texttt{hold\_phih}, which are Boolean switches to constrain the tail layer thickness, the interfacial roughness, and the volume fraction of solvent across the different measurements; in this work these were all kept as \texttt{False}.}
\lstinputlisting[nolol,firstline=1,lastline=35]{reports/code_blocks/ref_help.py}
\label{cb:const}
\end{listing}
%
The chemically-consistent model\footnote{That is outlined in Code Block~\ref{cb:chemconsis}.} consisted of two layers that define the phospholipid monolayer; the head layer at the interface with the solvent and the tail layer at the air interface.
The head groups have a scattering length that can be calculated from a summation of the X-ray or neutron atomic scattering lengths, $b_h$, and a volume, $V_h$.
These groups make up a layer of a given thickness, $d_h$, which has some interfacial roughness, $\sigma_h$, within which some volume fraction of solvent may penetrate, $\phi_h$.
Therefore, the $\text{SLD}_i$ in Equation~\ref{equ:sld} may be found and used in Equation~\ref{equ:knsld}, while the layer thickness may be used in Equation~\ref{equ:nowthick} to calculate the reflected intensity from a given layer.
The tail layer is defined in the same way; however, the tail thickness, $d_t$, is constrained such that it can be no greater than the maximum extended length for the phospholipid tail,\footnote{This is defined as the Tanford length, $t_t$ from \cite{tanford_hydrophobic_1980}.} which is given in Table~\ref{tab:invar}, and that no solvent may penetrate into the layer.\footnote{Such that $\phi_t=0$.}
Therefore, the $\text{SLD}$ may be determined as discussed in Equation~\ref{equ:sld}.
Based on the work of Campbell \emph{et al.},\autocite{campbell_structure_2018} a single value for the interfacial roughness was fitted for all of the interfaces, including the subphase,\footnote{i.e. $\sigma_t = \sigma_h = \sigma_s$.} as there is only a single phospholipid molecule type present in each monolayer.
Therefore, any capillary wave roughness at the air-DES interface is carried conformally through the layers.
The interfacial roughness was constrained to be greater than \SI{3.3}{\angstrom}, in agreement with previous work.\autocite{sanchez-fernandez_micellization_2016}
%
\begin{table}[b]
\forceversofloat
\centering
\small
\caption{The invariant parameters within the chemically-consistent model. Values for $t_t$ were taken from the Tanford formula (\cite{tanford_hydrophobic_1980}), and the SLD values for the DES from \cite{sanchez-fernandez_micellization_2016}.}
\label{tab:invar}
\begin{tabular}{l | l l l | l}
\toprule
Component & $b_t$/\si{\femto\meter} & $b_h$/\si{\femto\meter} & $t_t$/\AA & $\text{SLD}$/$10^{-6}$ \AA$^{-2}$ \\
\midrule
X-ray & & & & \\
DPPC & 6827 & 4635 & 20.5 & -- \\
DMPC & 5924 & 4635 & 18.0 & -- \\
DLPC & 5021 & 4635 & 15.5 & -- \\
DMPG & 5924 & 4694 & 18.0 & -- \\
Air & -- & -- & -- & 0 \\
DES & -- & -- & -- & 10.8 \\
\midrule
Neutron & & & & \\
d$_{54}$-DMPC & & & 18.0 & -- \\
d$_{62}$-DPPC & & & 20.5 & -- \\
h-DES & -- & -- & -- & 0.43 \\
hd-DES & -- & -- & -- & 3.15 \\
\bottomrule
\end{tabular}
\end{table}
%
The constraints implemented in Code Block~\ref{cb:const} involved two aspects.
The first was to ensure that the number density of head groups and pairs of tail groups was kept the same.
This was achieved with the following relation,\autocite{braun_polymers_2017}
%
\begin{equation}
\phi_h = 1 - \bigg(\frac{d_tV_h}{V_td_h}\bigg).
\label{equ:phih}
\end{equation}
%
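As a standalone illustration of this constraint (a sketch, not the \texttt{refnx} implementation in Code Block~\ref{cb:const}; the head-layer thickness used below is purely illustrative), the head-layer solvent fraction can be evaluated directly from Equation~\ref{equ:phih}:
\begin{lstlisting}[language=Python]
def head_solvent_fraction(d_t, v_h, v_t, d_h):
    """Solvent volume fraction in the head layer (Equation phih).

    d_t, d_h: tail and head layer thicknesses / Angstrom
    v_t, v_h: tail and head group volumes / cubic Angstrom
    """
    phi_h = 1 - (d_t * v_h) / (v_t * d_h)
    if not 0 <= phi_h < 1:
        raise ValueError("unphysical solvent fraction; check the inputs")
    return phi_h

# DPPC-like values from the tables above, with an assumed 10 Angstrom head layer
print(head_solvent_fraction(d_t=20.5, v_h=320.9, v_t=966.4, d_h=10.0))
\end{lstlisting}
%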
The second aspect was to enforce chemically-consistent constraints across the measurements that were conducted at different SPs.
This was achieved by constraining the head and tail group volumes and the head layer thickness such that they do not vary between the different SP measurements.
The justification for constraining the tail volume is built on the assumption that the phospholipids remain in the same phase.
On water, this may be demonstrated with a Langmuir isotherm.
However, it was not possible to collect consistent Langmuir isotherm measurements.\footnote{Due to the high viscosity of the DES.}
Instead, grazing incidence X-ray diffraction was used to confirm the phases of DMPC and DPPC at \SI{30}{\milli\newton\per\meter}.
Figure~\ref{fig:gixd} shows the grazing-incidence X-ray diffraction\footnote{Abbreviated to GIXD.} data from different phospholipids at different temperatures.
Unfortunately, all the patterns show a weak artefact due to scattering from the Teflon trough.
However, there are clear (2, 0) diffraction peaks in the GIXD pattern for DPPC at \SI{22}{\celsius} and DMPC at \SI{7}{\celsius} indicating that both phospholipids are in the LC phase.
This peak was also present at other SPs (data not shown).
The peak position corresponded well with that found for DPPC in water.\autocite{watkins_structure_2009}
DMPC at \SI{22}{\celsius} showed no evidence of a diffraction peak indicating the presence of the LE phase.
It was assumed that DLPC and DMPG were also in the LE phase as there is no reason for the phase behaviour of these systems to differ significantly from that of DMPC at room temperature.
%
\begin{figure}[t]
\forcerectofloat
\centering
\includegraphics[width=\textwidth]{reflectometry1/gixd}
\caption{The GIXD patterns, where $Q_z$ is the scattering vector normal to the interface and $Q_{xy}$ is that in the plane of the interface; (a) DPPC at \SI{30}{\milli\newton\per\meter} and \SI{22}{\celsius}, (b) DMPC at \SI{30}{\milli\newton\per\meter} and \SI{22}{\celsius}, and (c) DMPC at \SI{30}{\milli\newton\per\meter} and \SI{7}{\celsius}. Note that $Q$ is equivalent to $q$.}
\label{fig:gixd}
\end{figure}
%
Initially, this chemically-consistent modelling approach was applied only to the XRR data.
The tail layer thickness and interfacial roughness were allowed to vary independently across the SPs, while the other parameters were constrained as discussed above or held constant to the values given in Table~\ref{tab:invar}.
For each co-refinement of four XRR measurements, there were, in total, eleven degrees of freedom.
Throughout all of the analyses, the reflected intensity scale factor was allowed to vary freely, while the background was constrained to the reflected intensity at the largest $q$-value.
Following this, the head and tail group volumes, and the head layer thickness that were found from the XRR analysis were used as fixed variables for the refinement of the NR measurements.
This reduced the number of fitted parameters in the NR data to two, namely the thickness of the tail layer, $d_t$, and the interfacial roughness, $\sigma_{t,h,s}$, for the co-refinement of two datasets.
Table~\ref{tab:invar} also presents details of the scattering lengths and SLDs used for the NR refinement.
Again, the intensity scale factor was allowed to vary freely and the background constrained to the intensity at the largest $q$-value.
In both the XRR and the NR analysis, the refinement of the chemically-consistent model to the experimental data involved the transformation of the reflectometry calculated from the model and the data into $Rq^4$-space, such that the contribution of the Fresnel decay was removed.\autocite{gerelli_aurore_2016}
The model was then optimised using the DE method that is available within the \texttt{scipy} library.\autocite{jones_scipy_nodate}
This refined the parameters to give the best fit to the data.
MCMC was then used to probe the search-space available to each parameter, given the experimental uncertainty of the data.
The MCMC sampling method used was Goodman \& Weare's Affine Invariant Ensemble\autocite{goodman_ensemble_2010} as implemented in the \texttt{emcee} package.\autocite{foreman-mackey_emcee_2013}
This enabled the determination of the probability distribution for each of the parameters, and therefore the quantification of their inverse uncertainty, given the uncertainty in the experimental data.
A Shapiro-Wilk test\sidecite[this is a common test to assess the normality of a distribution]{shapiro_analysis_1965} was used to determine whether the PDF was well described by a normal distribution and therefore could be considered to have symmetric confidence intervals.
If the PDF failed the test the value was quoted with asymmetric confidence intervals, compared with the symmetric confidence intervals given for those that passed the Shapiro-Wilk test.
It is important to note that the determined confidence intervals are not true confidence intervals, as the PDFs account only for the uncertainty that is present in the data.\footnote{Therefore, they do not account for systematic uncertainty in the measurement technique.}
In addition to determining parameter confidence intervals, it was also possible to use these probability distributions to understand the correlations present between the parameters and the impact this has on the fitting process.
The correlation was quantified using the Pearson correlation coefficient\autocite{pearson_notes_1895}, a common statistical definition for the level of correlation present between two variables.
The Pearson correlation coefficient can take values from \num{-1} to \num{1}: a value of \num{-1} corresponds to a complete negative correlation,\footnote{An increase in one variable is associated with a decrease in the other.} a value of \num{1} corresponds to a complete positive correlation,\footnote{An increase in one variable is associated with a similar increase in the other.} and a value of \num{0} indicates no correlation between the two variables.
The MCMC sampling involved 200 walkers that were used for 1000 iterations, following a burn-in of 200 iterations.
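The two diagnostics described above can be computed from a flattened chain of samples as sketched below (the array here is a random stand-in for the real chain; \texttt{numpy} and \texttt{scipy} are assumed):
\begin{lstlisting}[language=Python]
import numpy as np
from scipy import stats

# stand-in for the flattened MCMC chain: shape (n_samples, n_parameters)
rng = np.random.default_rng(0)
chain = rng.normal(size=(20000, 2))

# Pearson correlation coefficient between two parameters
r, _ = stats.pearsonr(chain[:, 0], chain[:, 1])

# Shapiro-Wilk normality test on one marginal distribution
# (a subsample is used, as the test is intended for moderate sample sizes)
w, p_value = stats.shapiro(chain[:5000, 0])

print(f"Pearson r = {r:.3f}, Shapiro-Wilk p = {p_value:.3f}")
\end{lstlisting}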
\section{Analysis of Kelly and Amburgey 1991}
Here I analyse the citations to \cite{kelly1991organizational}.\\
\subsection{Aldrich 1999}
The PDF provided was a scan of chapter 2 of the book, ``Organizations Evolving'' (\cite{aldrich1999organizations}). \\
I could not find any reference to \cite{kelly1991organizational}.\\
\subsection{Gold et al 2001}
\cite{gold2001knowledge} refers to Kelly and Amburgey's paper as reference \#67 in the paper. The paper is not searchable. \\
The citation is used as a source for the description of the preconditions of knowledge management as capabilities and resources \cite[p.186]{gold2001knowledge}.\\
There is another reference to capabilities and resources in the organizational behaviour literature \cite[p.192]{gold2001knowledge}.\\
\textit{In particular, the organization should experience a learning effect in which it improves over time in its capabilities for creating value.} \cite[p.196]{gold2001knowledge}.\\
The results of \cite{kelly1991organizational} are neither challenged nor confirmed in this paper by \cite{gold2001knowledge}. \\
\subsection{Baum and Oliver 1991}
\cite{baum1991institutional} ("Institutional linkages and organizational mortality").\\
\begin{quote}
Research on the effects of transformation on the risk of organizational failure has provided inconclusive results. Some studies report that transformation reduces the risk of failure (Zucker, 1987b). some report increases in the risk (Carroll, 1984; Miner, Amburgey, and Stearns, 1990), \textbf{some report no effect (Kelly and Amburgey, 1991)}, and still others report outcomes that depend on the type of change (Singh, House, and Tucker, 1986; Baum, 1990; Haveman, 1990) or amount of time since change (Amburgey, Kelly, and Barnett, 1990; Baum, 1990). One possible explanation for this inconsistency is that interorganizational linkages are capable of providing organizations with a "transformational shield" that "insulates an organization against the probability of failure resulting from transformation" (Miner, Amburgey, and Stearns, 1990: 695).
\begin{flushright}
\cite[p.194]{baum1991institutional} \\
\textbf{Emphasis Mine}
\end{flushright}
\end{quote}
\subsection{Fiss 2011}
\cite{fiss2011building} ("Building better causal theories: A fuzzy set approach to typologies in organization research").\\
\begin{quote}
Perhaps the most influential view of core versus periphery in organizations is that of Hannan and Freeman (1984), who defined an organization’s core as its mission, authority structure, technology, and marketing strategy. In their definition, “Coreness means connectedness” (Hannan et al., 1996:506), with change in core elements requiring adjustments in most other features of an organization. \textbf{This definition of core versus peripheral elements has been adopted in a considerable number of subsequent studies (e.g., Kelly \& Amburgey, 1991;} Singh, House, \& Tucker, 1986).
\begin{flushright}
\cite[p.397]{fiss2011building} \\
\textbf{Emphasis Mine}
\end{flushright}
\end{quote}
\subsection{Lavie and Rosenkopf 2006}
\cite{lavie2006balancing} ("Balancing exploration and exploitation in alliance formation").\\
\begin{quote}
\textbf{The Implications of Firm Age}. The inertia perspective suggests that as firms mature, they are less likely to engage in exploration. \textbf{Older firms develop managerial commitment to existing technologies (Burgelman, 1994; Kelly and Amburgey, 1991), and are less likely to explore new technologies through their alliances.} In support of this argument, Rothaermel (2001b) found that incumbents benefited by exploiting complementary assets rather than by exploring new technologies with partners. In addition, firms nurture relationships as they mature and become embedded in alliance networks. This embeddedness encourages repeated alliances with prior partners (Gulati, 1995b; Li and Rowley, 2002), which are instituted on familiarity, trust, and established collaboration routines. Finally, maturation leads to the development of organizational routines that become embedded in decision-making processes and are applied almost automatically in response to external stimuli (Nelson and Winter, 1982). When a new problem arises, firms engage in local search for relevant experiences (Cyert and March, 1963) and elicit a response that conforms to their established routines.
\begin{flushright}
\cite[p.B2]{lavie2006balancing} \\
\textbf{Emphasis Mine}
\end{flushright}
\end{quote}
\begin{quote}
With respect to firm age and partnering experience, our results clearly supported the slack and absorptive capacity perspectives with no contingencies across dimensions of exploration. Older firms with substantial partnering experience tended to engage in exploration rather than exploitation in alliance formation. Hence, a firm’s accumulated experience is essential for expanding its knowledge domain and pursuing emerging opportunities that contribute to innovation and variation through alliance formation. While experience may also lead to the evolution of rigid routines and inertial forces, we found no evidence that such processes enforce exploitation. It is possible, however, that inertial forces reduce the efficiency of exploration without constraining the tendency to explore, at least in the context of alliances. \textbf{While our findings are consistent with prior research showing increase in the rate of innovation with firm age (Sorensen and Stuart, 2000), they stand in contrast with the inertia literature that argued for limited responsiveness of older firms to organizational change (Amburgey, Kelly, and Barnett, 1993; Hannan and Freeman, 1984; Kelly and Amburgey, 1991).} Possibly, older firms that encounter inertial impediments for innovation explore through alliances while pursuing exploitation through their internal units.
\begin{flushright}
\cite[p.B6]{lavie2006balancing} \\
\textbf{Emphasis Mine}
\end{flushright}
\end{quote}
\section{Conclusion}
For this project to be successful, the right technologies must be chosen for the right components. For the graphical, physics engine, and auditory components there are many options, each with its own advantages and disadvantages. The sheer number of JavaScript libraries that are available for 3D visualizations is staggering and can make this choice unclear. For this project, however, there are clear choices for each component.
The physics engine should be Cannon.js. While Oimo.js and Ammo.js are both great physics engines, only Cannon.js is built directly into the A-Frame API. As this project will be making use of A-Frame, it only makes sense to use what works with it. This, along with the extensive documentation that Cannon.js has, makes this an easy choice.
The graphics API must be Three.js. This API stands out as the definitive choice among the three APIs listed. Like Cannon.js, Three.js is built into A-Frame, which makes it highly desirable. Its incredible feature richness and vibrant community are further factors in this recommendation. It is important that there is lots of support behind each component. Development is made far easier when the entire world of web developers stands behind the API being used by a project.
For the audio API it was another easy choice. HowlerJS is the favorite among web developers for 3D audio and spatialization. Its wide compatibility and its abstraction of the Web Audio API make it simple and easy to use for beginners in the audio arena. Like the other choices, HowlerJS has the most extensive documentation among its competitors.
Each technology has its advantages and disadvantages, but the technologies recommended here will only serve to improve the development experience of this project.
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry}
\geometry{letterpaper}
\usepackage[hyphens]{url}
\usepackage{fancyhdr}
\usepackage{enumitem}
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{url}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\title{$B^3$: Billions for Big Brains}
\date{}
\begin{document}
\vspace{-0.5cm}
\maketitle
\vspace{-2cm}
\begin{center}
Design a large scale proposal for understanding brain connectivity
\end{center}
\section{Request for Proposal}
Neuroscience is in a golden age of data and computation, with data acquisition systems being developed across
a broad range of scales, computational frameworks enabling scalable analysis unlike ever before, and growing
statistical and biological models enabling more realistic simulation of living organisms than we have seen to date.
We, Grelliam Corp., are formally issuing a request for proposals (RFP) in which we will fund a single group in the
amount of \$10~billion to uncover a mystery about the brain. Proposals responding to this RFP shall address a
challenge of the authors' choosing, spending up to but not exceeding the available \$10~billion fund. Evaluation
will be based upon the considerations enumerated below, in Section~\ref{sec:eval}. Each team, along with the
submission of a proposal, will give a presentation to the review board which will highlight the key points of
their proposal.
\section{Background}
This project is inspired by a course entitled \textit{3YP} taught by Frank Wood at The University of Oxford.
The link to Frank's course is here: \url{http://www.robots.ox.ac.uk/~fwood/teaching/3YP_2016/}. On Frank's
website there is a large number of resources, links, papers, and other tools that may be helpful in this
design process. It is important to note that the goal of the projects is different, so although you may benefit
from some of the resources contained within his site, the objective, topic, and evaluation criteria stated
in this document should be followed in case of any discrepancy.
\section{Evaluation}
\label{sec:eval}
The proposals shall address several key considerations, enumerated and valued below.
\begin{itemize}[noitemsep]
\item \textbf{scientific question} (20 points) Each team must propose a scientific question that is of interest to solving
an important neuroscience challenge relevant to human health, computation, or technology,
\item \textbf{hardware + facilities} (10 points) Specifying what type of scanning/imaging equipment will be used to acquire images
of the brain is imperative. The scale of data collection may be up to the proposing team, and different modalities
may have significant advantages for specific scientific questions. Points will be awarded for describing the data
collection procedure in adequate detail such that the reviewers understand the efficacy of the chosen method in answering
the proposed question.
\item \textbf{data collection processes} (10 points) In addition to specifying the modality and necessary equipment to acquire
the images, consideration must be given to how to get the images from the acquisition hardware into a format that enables
processing.
\item \textbf{information extraction processes} (20 points) Once data is acquired it must be processed and analyzed such that
the proposed scientific question can begin to be answered. This includes image processing such as registration, segmentation,
annotation, and graph extraction, as a start. Additionally, once the given derivatives have been extracted, statistical analyses
must be performed to verify the findings and provide an indication of the significance of their strength.
\item \textbf{data upload} (10 points) All raw and processed data, as well as their summary figures, must be uploaded to a
publicly available resource which enables other researchers and members of society to benefit from their existence.
\item \textbf{data storage} (10 points) Data storage must be considered for larger volumes, both in the short term when data
is collected from a microscope and in the longer term when it is being shared publicly for an indefinite period of time.
\item \textbf{cost} (20 points) A cost breakdown will summarize the expense of this proposal in as much detail as possible.
Additional points will not be awarded if a team submits a project below budget (i.e. if the money is not spent it is considered
lost).
\item \textbf{feasibility} (50 points) This proposal must be feasible in the eyes of the review board, based on the details
contained within. Providing sufficient detail in order for this review to be possible is essential, as unanswered questions by
the reviewers will be considered holes in the feasibility of the proposal.
\item \textbf{total} (150 points)
\end{itemize}
\section{Submission Requirements}
\begin{itemize}[noitemsep]
\item 3-5 page description of proposal in detail written in \LaTeX, thus in PDF format
\item 3 slide presentation summarizing the proposal in PDF format
\item 5-minute elevator pitch of the aforementioned slides.
\end{itemize}
\section{Submission Deadline}
The proposals must be submitted to Grelliam Corp. via a link to a Github repository by no later than 14h45 on Monday,
January 16th, 2017.
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS622: Theory of Formal Languages
% Copyright 2014 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/beacon
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Question 3}
Let $A = \{a,b\}$.
Prove that there are no words $x, y \in A^*$ such that $xay = ybx$.
Prove that there is no word $x \in \{a,b\}^*$ such that $ax = xb$.
\subsection*{Solution}
A prerequisite for the equality of any two words $u, v \in \{a,b\}^*$ is that they contain an equal number of each symbol.
We show that the two words $u=xay$ and $v=ybx$ fail to satisfy this prerequisite.
Suppose that $x$ and $y$ together contain $m$ symbols $a$ and $n$ symbols $b$.
Then the total number of $a$ symbols in $u$ is $m+1$, whereas the total number of $a$ symbols in $v$ is $m$.
Thus the prerequisite is not met and $u \neq v$.
Similarly, if $u = ax$ and $v = xb$, the number of $a$ symbols in $u$ is always one more than the number of $a$ symbols in $v$; thus $u \neq v$.
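As a quick sanity check of the second claim (a hypothetical script, not part of the original solution), it can be verified exhaustively for short words:
\begin{verbatim}
from itertools import product

# verify that ax != xb for every word x over {a, b} of length at most 8
for n in range(9):
    for letters in product("ab", repeat=n):
        x = "".join(letters)
        assert "a" + x != x + "b", f"counterexample: {x}"
print("no word x with |x| <= 8 satisfies ax = xb")
\end{verbatim}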
Our approach to adaptive refinement in \castro\ uses a nested hierarchy
of logically-rectangular grids with simultaneous refinement of the
grids in both space and time. The integration algorithm on the grid
hierarchy is a recursive procedure in which coarse grids are advanced
in time, fine grids are advanced multiple steps to reach the same time
as the coarse grids and the data at different levels are then
synchronized.
During the regridding step, increasingly finer grids
are recursively embedded in coarse grids until the solution is
sufficiently resolved. An error estimation procedure based on
user-specified criteria (described in Section \ref{sec:tagging})
evaluates where additional refinement is needed
and grid generation procedures dynamically create or
remove rectangular fine grid patches as resolution requirements change.
A good introduction to the style of AMR used here is in Lecture 1
of the Adaptive Mesh Refinement Short Course at
\url{https://ccse.lbl.gov/people/jbb/shortcourse/lecture1.pdf}.
\section{Tagging for Refinement}
\label{sec:tagging}
\castro\ determines what zones should be tagged for refinement at the
next regridding step by using a set of built-in routines that test on
quantities such as the density and pressure and determine whether
the quantities themselves or their gradients pass a user-specified
threshold. This may then be extended if {\tt amr.n\_error\_buf} $> 0$
to a certain number of zones beyond these tagged zones. This section
describes the process by which zones are tagged, and describes how to
add customized tagging criteria.
The routines for tagging cells are located in the
\code{Tagging\_nd.f90} file in the {\tt Source/driver/} directory. (These are
dimension-agnostic routines that loop over all three dimensional
indices even for 1D or 2D problems.) The main routines are
\code{ca\_denerror}, \code{ca\_temperror}, \code{ca\_presserror},
\code{ca\_velerror}, and \code{ca\_raderror}. They refine based on
density, temperature, pressure, velocity, and radiation energy density
(if enabled), respectively. The same approach is used for all of
them. As an example, we consider the density tagging routine. There
are four parameters that control tagging. If the density in a zone is
greater than the user-specified parameter \runparam{denerr}, then that
zone will be tagged for refinement, but only if the current AMR level
is less than the user-specified parameter \runparam{max\_denerr\_lev}.
Similarly, if the absolute density gradient between a zone and any
adjacent zone is greater than the user-specified parameter
\runparam{dengrad}, that zone will be tagged for refinement, but only
if we are currently on a level below
\runparam{max\_dengrad\_lev}. Note that setting {\tt denerr} alone
will not do anything; you'll need to set {\tt max\_denerr\_lev} $\geq
1$ for this to have any effect.
All four of these parameters are set in the \code{\&tagging} namelist
in your {\tt probin}\index{\tt probin} file. If left unmodified, they
default to a value that means we will never tag. The complete set of
parameters that can be controlled this way is the following:
\begin{itemize}
\item density:
\begin{itemize}
\item value: {\tt denerr}, {\tt max\_denerr\_lev}
\item gradient: {\tt dengrad}, {\tt max\_dengrad\_lev}
\end{itemize}
\item temperature:
\begin{itemize}
\item value: {\tt temperr}, {\tt max\_temperr\_lev}
\item gradient: {\tt tempgrad}, {\tt max\_tempgrad\_lev}
\end{itemize}
\item velocity (magnitude):
\begin{itemize}
\item value: {\tt velerr}, {\tt max\_velerr\_lev}
\item gradient: {\tt velgrad}, {\tt max\_velgrad\_lev}
\end{itemize}
\item pressure:
\begin{itemize}
\item value: {\tt presserr}, {\tt max\_presserr\_lev}
\item gradient: {\tt pressgrad}, {\tt max\_pressgrad\_lev}
\end{itemize}
\item radiation energy density:
\begin{itemize}
\item value: {\tt raderr}, {\tt max\_raderr\_lev}
\item gradient: {\tt radgrad}, {\tt max\_radgrad\_lev}
\end{itemize}
\end{itemize}
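For illustration, a {\tt \&tagging} namelist that refines on both the density value and the density gradient might look like the following sketch; the numerical values are purely hypothetical and need to be adapted to your problem:
\begin{verbatim}
&tagging
  denerr          = 1.d-4   ! tag zones whose density exceeds this value ...
  max_denerr_lev  = 2       ! ... but only while the AMR level is below this
  dengrad         = 1.d-2   ! tag zones whose density jump exceeds this value ...
  max_dengrad_lev = 2       ! ... but only while the AMR level is below this
/
\end{verbatim}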
Since there are multiple algorithms for determining
whether a zone is tagged or not, it is worthwhile to specify
in detail what is happening to a zone in the code during this step.
We show this in the following pseudocode section. A zone
is tagged if the variable {\tt itag = SET}, and is not tagged
if {\tt itag = CLEAR} (these are mapped to 1 and 0, respectively).
\begin{verbatim}
itag = CLEAR
for errfunc[k] from k = 1 ... N
// Three possibilities for itag: SET or CLEAR or remaining unchanged
call errfunc[k](itag)
end for
\end{verbatim}
In particular, notice that there is an order dependence of this operation; if {\tt errfunc[2]}
{\tt CLEAR}s a zone and then {\tt errfunc[3]} {\tt SET}s that zone, the final operation will
be to tag that zone (and vice versa). In practice by default this does not matter, because the
built-in tagging routines never explicitly perform a \texttt{CLEAR}. However,
it is possible to overwrite the {\tt Tagging\_nd.f90} file if you want to change how
{\tt ca\_denerror, ca\_temperror}, etc. operate. This is not recommended, and if you do so
be aware that {\tt CLEAR}ing a zone this way may not have the desired effect.
We also provide the ability for users to define their own tagging criteria.
This is done through the Fortran subroutine \code{set\_problem\_tags} in the
\code{problem\_tagging\_*d.f90} files. This subroutine is passed the entire
state (including density, temperature, velocity, etc.) and the array
of tagging status for every zone. As an example of how to use this, suppose we
have a 3D Cartesian simulation where we want to tag any zone that has a
density gradient greater than 10, but we don't care about any regions
outside a radius $r > 75$ from the problem origin; we leave them always unrefined.
We also want to ensure that the region $r \leq 10$ is always refined.
In our {\tt probin} file we would set {\tt dengrad = 10} and {\tt max\_dengrad\_lev = 1}
in the {\tt \&tagging} namelist. We would also make a copy of
{\tt problem\_tagging\_3d.f90} to our work directory and set it up as follows:
\begin{lstlisting}[language=fortran]
subroutine set_problem_tags(tag,tagl1,tagl2,tagl3,tagh1,tagh2,tagh3, &
state,state_l1,state_l2,state_l3, &
state_h1,state_h2,state_h3,&
set,clear,&
lo,hi,&
dx,problo,time,level)
use bl_constants_module, only: ZERO, HALF
use prob_params_module, only: center
use meth_params_module, only: URHO, UMX, UMY, UMZ, UEDEN, NVAR
implicit none
integer ,intent(in ) :: lo(3),hi(3)
integer ,intent(in ) :: state_l1,state_l2,state_l3, &
state_h1,state_h2,state_h3
integer ,intent(in ) :: tagl1,tagl2,tagl3,tagh1,tagh2,tagh3
double precision,intent(in ) :: state(state_l1:state_h1, &
state_l2:state_h2, &
state_l3:state_h3,NVAR)
integer ,intent(inout) :: tag(tagl1:tagh1,tagl2:tagh2,tagl3:tagh3)
double precision,intent(in ) :: problo(3),dx(3),time
integer ,intent(in ) :: level,set,clear
double precision :: x, y, z, r
integer          :: i, j, k
! Loop over the zones in the box and compute the distance of each zone
! center from the problem center.
do k = lo(3), hi(3)
  z = problo(3) + (dble(k) + HALF) * dx(3) - center(3)
  do j = lo(2), hi(2)
    y = problo(2) + (dble(j) + HALF) * dx(2) - center(2)
    do i = lo(1), hi(1)
      x = problo(1) + (dble(i) + HALF) * dx(1) - center(1)
      r = (x**2 + y**2 + z**2)**(HALF)
      ! Never refine outside r > 75; always refine inside r <= 10.
      if (r > 75.0) then
        tag(i,j,k) = clear
      elseif (r <= 10.0) then
        tag(i,j,k) = set
      endif
enddo
enddo
enddo
end subroutine set_problem_tags
\end{lstlisting}
\section{Synchronization Algorithm}
\label{sec:amr_synchronization}
Here we present the AMR algorithm for the compressible equations with
self-gravity. The gravity component of the algorithm is closely
related to (but not identical to) that in Miniati and Colella, JCP,
2007. The content here is largely based on the original
\castro\ paper (\cite{castro_I}). The most significant difference is the
addition of a different strategy for when to employ the synchronization;
but regardless of whether the original or new strategy is used, the fundamental
synchronization step is identical.
\subsection{Synchronization Methodology}
\label{sec:synchronization_methodology}
Over a coarse grid time step we collect flux register information for
the hyperbolic part of the synchronization:
\begin{equation}
\delta\Fb = -\Delta t_c A^c F^c + \sum \Delta t_f A^f F^f
\end{equation}
Analogously, at the end of a coarse grid time step we store the
mismatch in normal gradients of $\phi$ at the coarse-fine interface:
\begin{equation}
\delta F_\phi = - A^c \frac{\partial \phi^c}{\partial n}
+ \sum A^f \frac{\partial \phi^f}{\partial n}
\end{equation}
We want the composite $\phi^{c-f}$ to satisfy the multilevel
version of (\ref{eq:Self Gravity}) at the synchronization time, just
as we want the coarse and fine fluxes at that time to match. So the goal
is to synchronize $\phi$ across levels at that time and then zero out
this mismatch register.
At the end of a coarse grid time step we can define
${\overline{\Ub}}^{c-f}$ and $\overline{\phi}^{c-f}$ as the composite
of the data from coarse and fine grids as a provisional solution at
time $n+1$. (Assume $\overline{\Ub}$ has been averaged down so that
the data on coarse cells underlying fine cells is the average of the
fine cell data above it.)
The synchronization consists of two parts:
\begin{itemize}
\item Step 1: Hyperbolic reflux
In the hyperbolic reflux step, we update the conserved variables with
the flux synchronization and adjust the gravitational terms to reflect
the changes in $\rho$ and $\ub$.
\begin{equation}
{\Ub}^{c, \star} = \overline{\Ub}^{c} + \frac{\delta\Fb}{V},
\end{equation}
where $V$ is the volume of the cell and the correction from
$\delta\Fb$ is supported only on coarse cells adjacent to fine grids.
Note: this can be enabled/disabled via {\tt castro.do\_reflux}. Generally,
it should be enabled ({\tt 1}).
Also note that for axisymmetric or 1D spherical coordinates, the
reflux of the pressure gradient is different, since it cannot be
expressed as a divergence in those geometries. We use a separate
flux register in the hydro code to store the pressure term in these
cases.
\item Step 2: Gravitational synchronization
In this step we correct for the mismatch in normal derivative in
$\phi^{c-f}$ at the coarse-fine interface, as well as accounting for
the changes in source terms for $(\rho \ub)$ and $(\rho E)$ due to the
change in $\rho.$
On the coarse grid only, we define
\begin{equation}
(\delta \rho)^{c} = \rho^{c, \star} - {\overline{\rho}}^{c} .
\end{equation}
We then form the composite residual, which is composed of two
contributions. The first is the degree to which the current $
\overline{\phi}^{c-f}$ does not satisfy the original equation on a
composite grid (since we have solved for $\overline{\phi}^{c-f}$
separately on the coarse and fine levels). The second is the response
of $\phi$ to the change in $\rho.$ We define
\begin{equation} R \equiv 4 \pi G \rho^{\star,c-f} - \Delta^{c-f} \; \overline{\phi}^{c-f}
= - 4 \pi G (\delta \rho)^c - (\nabla \cdot \delta F_\phi ) |_c .
\end{equation}
Then we solve
\begin{equation}
\Delta^{c-f} \; \delta \phi^{c-f} = R
\label{eq:gravsync}
\end{equation}
as a two level solve at the coarse and fine levels.
We define the update to gravity,
\begin{equation}
\delta {\bf g}^{c-f} = \nabla (\delta \phi^{c-f}) .
\end{equation}
Finally, we need to
\begin{itemize}
\item add $\delta \phi^{c-f}$ directly
to $\phi^{c}$ and $\phi^{f}$ and interpolate $\delta \phi^{c-f}$ to any finer
levels and add to the current $\phi$ at those levels.
\item if level $c$ is not the coarsest level in the calculation, then we must transmit the
effect of this change in $\phi$ to the coarser levels by updating the flux register between
level $c$ and the next coarser level, $cc.$ In particular, we set
\begin{equation}
\delta {F_\phi}^{cc-c} = \delta F_\phi^{cc-c}
+ \sum A^c \frac{\partial (\delta \phi)^{c-f}}{\partial n} .
\end{equation}
\end{itemize}
The gravity synchronization algorithm can be disabled with {\tt
gravity.no\_sync = 1}. This should be done with care. Generally,
it is okay only if the refluxing happens in regions of low density that
don't affect the gravity substantially.
\end{itemize}
\subsection{Source Terms}
\label{sec:synchronization_sources}
After a synchronization has been applied, the state on the coarse grid
has changed, due to the change in fluxes at the coarse-fine boundary as
well as the change in the gravitational field. This poses a problem
regarding the source terms, all of which generally rely either on the
state itself, or on the global variables affected by the synchronization
such as the gravitational field. The new-time sources constructed on the
coarse grid all depended on what the state was after the coarse-grid
hydrodynamic update, but the synchronization and associated flux
correction step retroactively changed that hydrodynamic update. So one
can imagine that in a perfect world, we would have calculated the
hydrodynamic update first, including the coarse-fine mismatch
corrections, and only then computed the source terms at the new time.
Indeed, an algorithm that did not subcycle, but marched every zone along
at the same timestep, could do so -- and some codes, like FLASH,
actually do this, where no new-time source terms are computed on any
level until the hydrodynamic update has been fully completed and the
coarse-fine mismatches corrected. But in \castro\ we cannot do this; in
general we assume the ability to subcycle, so the architecture is set up
to always calculate the new-time source terms on a given level
immediately after the hydrodynamic update on that level. Hence on the
coarse level we calculate the new-time source terms before any fine grid
timesteps occur.
One way to fix this, as suggested by Miniati and Colella for the case
of gravity, is to explicitly compute what the difference in the source
term is as a result of any flux corrections across coarse-fine
boundaries. They work out the form of this update for the case of a
cell-centered gravitational force, which has contributions from both
the density advected across the coarse-fine interfaces
(i.e. $\delta \rho \mathbf{g}$, where $\delta \rho$ is the density
change due to the coarse-fine synchronization on the coarse grid), as
well as the global change in the gravitational field due to the
collective mass motion (see Miniati and Colella for the explicit form
of the source term). This has a couple of severe limitations. First,
it means that when the form of the source term is changed, the form of
the corrector term is changed too. For example, it is less easy to
write down the form of this corrector term for the flux-based
gravitational energy source term that is now standard in \castro.
Second, gravity is a relatively easy case due to its linearity in the
density and the gravitational acceleration; other source terms
representing more complicated physics might not have an easily
expressible representation in terms of the reflux contribution. For
example, for a general nuclear reaction network (that does not have an
analytic solution), it is not possible to write down an analytic
expression for the nuclear reactions that occur because of
$\delta \rho$.
Instead we choose a more general approach. On the coarse level, we save
the new-time source terms that were applied until the end of the fine
timesteps. We also save the fine level new-time source terms. Then, when
we do the AMR synchronization after a fine timestep, we first subtract
the previously applied new-time source terms to both the coarse and the
fine level, then do the flux correction and associated gravitational
sync solve, and then re-compute the new-time source terms on both the
coarse and the fine level\footnote{In the absence of a global field like
the gravitational potential, this would only need to be done on the
coarse level, as we always assume that the solution on the fine grid is
correct and average it down to the coarse grid. In \castro\ we do it by
default on the fine level too in anticipation of the fact that gravity
is a common component of many of our production science
simulations. This could be generalized so that if you aren't using any
global force fields, you don't bother updating the fine level. If this
is important to the science you want to do, please let the \castro\
developers know and we can look into it.}. In this way, we get almost
the ideal behavior -- if we aren't subcycling, then we get essentially
the same state at the end of the fine timestep as we would in a code
that explicitly had no subcycling. The cost is re-computing the new-time
source terms that second time on each level. For most common source
terms such as gravity, this is not a serious problem -- the cost of
re-computing $\rho \mathbf{g}$ (for example, once you already know
$\mathbf{g}$) is negligible compared to the cost of actually computing
$\mathbf{g}$ itself (say, for self-gravity). If you believe that the
error in not recomputing the source terms is sufficiently low, or the
computational cost of computing them too high, you can disable this
behavior\footnote{in general it may be desirable for this to be a
source-term specific setting, so that source terms that are cheap
or physically important are re-computed after a synchronization,
while others can be disabled. If this is important for
your science application, please let the developers know, as this would
be a straightforward extension of the current architecture.} using the
code parameter {\tt castro.update\_sources\_after\_reflux}.
Note that at present nuclear reactions are not enabled as part of this
scheme, and at present are not automatically updated after an AMR
synchronization. This will be amended in a future release of \castro.
\subsection{Synchronization Timing}
\label{sec:synchronization_timing}
The goal of the synchronization step is for the coarse and fine grid to
match at the end of a coarse timestep, after all subcycled fine grid
timesteps have been completed and the two levels have reached the same
simulation time. If subcycling is disabled, so that the coarse and fine
grid take the same timestep, then this is sufficient. However, in the
general subcycling case, the situation is more complicated. Consider the
discussion about source terms in Section \ref{sec:synchronization_sources}. If
we have a coarse level and one fine level with a refinement ratio of
two, then for normal subcycling the fine grid takes two timesteps for
every one timestep taken by the coarse level. The strategy advocated by
the original \castro\ paper (and Miniati and Colella) is to only do the
AMR synchronization at the actual synchronization time between coarse
and fine levels, that is, at the end of the second fine timestep.
Consequently, we actually only update the source terms after that second
fine timestep. Thus note that on the fine grid, only the {\it new-time}
source terms in the {\it second} fine timestep are updated. But a
moment's thought should reveal a limitation of this. The first fine grid
timestep was also responsible for modifying the fluxes on the coarse
grid, but the algorithm as presented above didn't take full account of
this information. So, the gravitational field at the {\it old} time in
the second fine timestep is actually missing information that would have
been present if we had updated the coarse grid already. Is there a way
to use this information? For the assumptions we make in \castro, the
answer is actually yes. If we apply the effect of the synchronization
not at the synchronization time but at the end of {\it every} fine
timestep, then every fine timestep always has the most up-to-date
information possible about the state of the gravitational field. Now, of
course, in fine timesteps before the last one, we have not actually
reached the synchronization time. But we already know at the end of the
first fine timestep what the synchronization correction will be from
that fine timestep: it will be equal to 1/2 of the coarse contribution
to the flux register and the normal contribution to the flux register
for just that timestep. This is true because in \castro, we assume that
the fluxes provided by the hydrodynamic solver are piecewise-constant
over the timestep, which is all that is needed to be second-order
accurate in time if the fluxes are time centered\footnote{If this scheme
is generalized to higher-order methods, in principle all one would need
to do is integrate the fluxes until $\Delta t / 2$, which is what we are
doing here for the constant-in-time flux case.}. So it is fair to say
that halfway through the coarse timestep, half of the coarse flux has
been advected, and we can mathematically split the flux register into
two contributions that have equal weighting from the coarse flux. (In
general, of course, the coarse flux contribution at each fine timestep
is weighted by $1/R$ where $R$ is the refinement ratio between the
coarse and fine levels.) So, there is nothing preventing us from
updating the coarse solution at the synchronization time $t^{n+1}_c$
after this first fine timestep; we already know at that point how the
coarse solution will change, so why not use that information? We can
then update the gravitational potential at $t^{n+1/2}_c$ that is used to
construct the boundary conditions for the gravitational potential solve
on the fine grid at the beginning of the second fine timestep.
In practice, this just means calling the synchronization routine
described in Section \ref{sec:synchronization_methodology}, with the only
modification being that the flux register contribution from the coarse
grid is appropriately weighted by the fine grid timestep instead of
the coarse grid timestep, and we only include the current fine step:
\begin{equation}
\delta\Fb = -\Delta t_f A^c F^c + \Delta t_f A^f F^f
\end{equation}
The form of the $\phi$ flux register remains unchanged, because the
intent of the gravity sync solve is to simply instantaneously correct
the mismatch between the fine and coarse grid. The only difference,
then, between the old strategy and this new method is that we call the
synchronization at the end of every fine timestep instead of only the
last subcycled one, and we change the weighting appropriately. This
new method is more expensive as currently implemented because we have
to do $R$ gravitational sync solves, refluxes, and source term
recalculations instead of only one. However, it results in maximal
possible accuracy, especially in cases where significant amounts of
material are crossing refinement boundaries. The reflux strategy is
controlled by the parameter {\tt castro.reflux\_strategy}. At present
the old method is still the default.
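For reference, the parameters referred to in this and the preceding subsections are ordinary runtime parameters set in the \castro\ inputs file. The excerpt below is purely illustrative; consult the runtime parameter documentation for the accepted values, in particular for {\tt castro.reflux\_strategy}:
\begin{verbatim}
castro.do_reflux                   = 1   # apply the hyperbolic reflux correction
castro.update_sources_after_reflux = 1   # re-compute new-time sources after a sync
gravity.no_sync                    = 0   # keep the gravitational sync solve enabled
# castro.reflux_strategy selects between the original (end-of-coarse-step)
# strategy and the new per-fine-step strategy described above.
\end{verbatim}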
Note that one does not need to be using self-gravity for this to be
beneficial. Even in pure hydrodynamics this can matter. If a regrid
occurs on the fine level, new zones on the boundaries of the current
fine level are filled by interpolation from the coarse level. In the
old method, that interpolation is not using the most up-to-date data
that accounts for the synchronization.
For multiple levels of refinement, the scheme extends naturally. In
the old method, we always call the synchronization at the
synchronization time between any two levels. So for example with two
jumps in refinement by a factor of two, there is a synchronization at
the end of the first two timesteps on level 2 (between level 1 and
level 2), a synchronization after the next two timesteps on level 2
(again between level 1 and level 2), and then a synchronization
between level 0 and level 1. In the new method, we always call the
synchronization at the end of every timestep {\it on the finest level
only}, and we simultaneously do the synchronization {\it on every
level}. The timestep $\Delta t_f$ in the flux register is just the
timestep on the finest level. (If this is unclear, give it a sanity
check: when the sum of all flux register totals is added up, the level
0 contribution will have a factor of $\Delta t$ equal to the coarse
grid timestep since the sum of the timesteps on the finest level over
the entire advance must equal the level 0 timestep. So, the final
contribution from the flux register is the same as if we had saved up
the flux mismatch until the end of the level 0 timestep.) The
synchronization no longer needs to be called at the end of any coarser
level's timestep because it will already be up to date as a result of
the synchronizations applied at the end of the fine level timesteps.
\startchapter{Related Works}
\label{chapter:relatedworks}
\newlength{\savedunitlength}
\setlength{\unitlength}{2em}
\section{Version Control History and GitHub}
%CVCS
Version Control Systems (VCS) were developed to record changes to a file or set of files over time so that a specific version can be recalled later. \cite{chacon2009pro} Traditional Centralized Version Control Systems (CVCS) require developers to commit a change to a central repository and to merge and resolve any conflicts. The downside of this mechanism is that the entire history of the project lies in the central database; if that database becomes corrupted, all history data is lost. \cite{chacon2009pro}\cite{5071408}
%DVCS
Distributed Version Control Systems (DVCS) relaxed the requirement of CVCSs to have a central, master repository. In a DVCS, each developer has ``first-class access'' to the project repository and can access its entire history.\cite{5071408} DVCS has the potential to ``make releasing, developing, and coordinating large software projects much less rigid than its exclusively centralized predecessor''.\cite{6607694}
%Git
Among all these DVCSs, Git has gained the most momentum. It began in 2005 as a revision management system used for coordinating the Linux kernel's development. Over the years, Git has evolved by ``leaps and bounds'' due to its functionality, portability, efficiency, and rich third-party adoption.\cite{6188603}
%Applications based on Git
Using Git as a back end to host open source projects, many Web-based applications enhanced project management functionality by adding rich user-friendly interfaces, which provide a convenient way for developers to set up repositories, clone existing projects and commit their contributions.\cite{6188603} These applications also lay more emphasis on the social aspect of software engineering.
%GitHub
Among all these applications, GitHub is the most popular one with more than 12.2 million users\footnote{\url{https://github.com/about/press}}. GitHub not only allows users to \textit{star}\footnote{\url{https://help.github.com/articles/about-stars/}} or \textit{watch}\footnote{\url{https://help.github.com/articles/watching-repositories/}} repositories to keep track of projects they find interesting, but also allows users to follow other users to see what other people are working on and who they are connecting with. GitHub also supports team management using the \textit{organizations} feature, projects discovery using \textit{explore} feature, bugs tracing using \textit{issues}, etc.\footnote{\url{https://help.github.com/articles/be-social}} By integrating these social features into the version control system, the communication and coordination among developers is greatly enhanced.\cite{6357175}
\section{Recent Research on GitHub}
GitHub introduces a new open source environment that has given rise to research from many different angles. GitHub is a huge data pool that contains not only numerous projects but also developers' profiles and developers' activities such as their contributions to projects and their interactions with other developers. More and more researchers have jumped into this data pool, trying to discover some interesting patterns or good stories in either software engineering or social networking.
Some researchers have been working on helping employers find technical experts by analyzing developers' profiles and their activities.\cite{6336698}\cite{Marlow:2013:ATS:2441776.2441794}\cite{Venkataramani:2013:DTE:2487788.2487832}\cite{6671295} Others have focused on the source code in project repositories. For example, Bissyande et al. (2013) took advantage of the rich data on GitHub by examining the ``popularity'', ``interoperability'', and ``impact'' of various programming languages measured in different ways, such as lines of code, development teams, issues, etc.\cite{6649842} There are also some researchers who tried to discover patterns in developers' collaboration and interaction, such as how developers assess each other and find proper collaborators\cite{Majumder:2012:CTF:2339530.2339690}\cite{Marlow:2013:IFO:2441776.2441792}\cite{Singer:2013:MAS:2441776.2441791}, the herding phenomena on GitHub\cite{Choi:2013:HOS:2441955.2441989}, the relation between developers' behavior on GitHub and other Q\&A websites like StackOverflow\footnote{\url{http://www.stackoverflow.com/}}\cite{6693332}, etc.
However, no scientific studies focusing on GitHub Gists have been found.
\section{GitHub Gists}
\subsection{Related Tools}
There are several other snippet management tools similar to GitHub Gists such as \textit{pastebin}\footnote{\url{http://pastebin.com/}}, \textit{snipt}\footnote{\url{https://snipt.net/}}, \textit{codepen}\footnote{\url{http://codepen.io/}}, \textit{dabblet}\footnote{\url{http://dabblet.com/}}, etc. However, GitHub Gists is the only one that manages snippets using version control. Over the years, the features of GitHub Gists have been improved and updated to be more user friendly\footnote{\url{https://github.com/blog/1837-change-the-visibility-of-your-gists}} \footnote{\url{https://github.com/blog/1850-gist-design-update}}.
\subsection{The Features of Gists}
Gists are expected to be small snippets of code or text that utilize the version control tool Git for creation and management. Wikipedia provides a good explanation of the benefits of Gists as follows\footnote{\url{http://en.wikipedia.org/wiki/GitHub\#Gist}}:
\textsl{``Gist builds upon that idea by adding version control for code snippets, easy forking, and SSL encryption for private pastes. Because each `gist' is its own Git repository, multiple code snippets can be contained in a single paste and they can be pushed and pulled using Git. Further, forked code can be pushed back to the original author in the form of a patch, so pastes can become more like mini-projects.''}
In addition, GitHub also provides a powerful Web-based editor to create or modify Gists, which makes it possible to work on Gists without Git. It also supports comments on Gists and provides a Web service that makes Gists embeddable in a Web page. All these features make Gists very flexible, functional and user friendly, helping users better manage their gists and share them.
"alphanum_fraction": 0.8034672971,
"avg_line_length": 147.5581395349,
"ext": "tex",
"hexsha": "741d70c8e38ee68edc1c335e2f17faede7d7ef98",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6dc9a6d6ecd7ce4b05603138f44f2b458d01c212",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "wlwg/GitHub-Gists-analysis",
"max_forks_repo_path": "report/chapters/relatedwork/relatedwork.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6dc9a6d6ecd7ce4b05603138f44f2b458d01c212",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "wlwg/GitHub-Gists-analysis",
"max_issues_repo_path": "report/chapters/relatedwork/relatedwork.tex",
"max_line_length": 1123,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6dc9a6d6ecd7ce4b05603138f44f2b458d01c212",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "wlwg/GitHub-Gists-analysis",
"max_stars_repo_path": "report/chapters/relatedwork/relatedwork.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1442,
"size": 6345
} |
\problemname{Debugging}
\illustration{0.5}{bug}{Picture in public domain via \href{https://en.wikipedia.org/wiki/File:H96566k.jpg}{Wikimedia Commons}}%
\noindent
Your fancy debugger will not help you in this matter. There are many
ways in which code can produce different behavior between debug and
release builds, and when this happens, one may have to resort to more
primitive forms of debugging.
% -- a real coder uses them all!
So you and your printf are now on your own in the search for a line of
code that causes the release build to crash. Still you are lucky:
adding printf statements to this program affects neither the bug (it
still crashes at the same original code line) nor the execution time
(at least not notably). So even the naive approach of putting a
printf statement before each line, running the program until it
crashes, and checking the last printed line, would work.
However, it takes some time to add each printf statement to the code,
and the program may have a lot of lines. So perhaps a better plan would involve
putting a printf statement in the middle of the program, letting it run, seeing
whether it crashes before the added line, and then continuing the search in
either the first or second half of the code.
But then again, running the program may take a lot of time, so the
most time-efficient strategy might be something in between. Write a
program that computes the minimum worst-case time to find the crashing
line (no matter where it is), assuming you choose an optimal strategy
for placing your printf statements.
We're releasing the new version in five hours, so this issue is escalated and needs to be fixed ASAP.
%Did I already mention this issue is escalated and needs to be fixed asap?
\section*{Input}
The input consists of one line with three integers:
\begin{itemize}
\item $n$ ($1 \le n \le 10^6$), the number of code lines;
\item $r$ ($1 \le r \le 10^9$), the amount of time it takes to compile and run the program until it crashes;
\item $p$ ($1 \le p \le 10^9$), the time it takes to add a single printf line.
\end{itemize}
You have already run the program once and therefore already know that it does crash somewhere.
\section*{Output}
Output the worst-case time to find the crashing line when using an
optimal strategy.
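As an illustration of the strategy space described above, the following Python sketch computes the worst-case time via a naive memoized recursion. It assumes, without proof here, that an optimal strategy splits the remaining candidate lines as evenly as possible in each run; it is not an official reference solution and it is not tuned to meet the stated limits (for $n$ up to $10^6$ one would, for instance, only iterate over the distinct values of $\lceil m/(k+1)\rceil$).
\begin{verbatim}
import sys
from functools import lru_cache

def worst_case_time(n, r, p):
    # f(m): minimum worst-case time to locate the crashing line among
    # m remaining candidate lines.
    @lru_cache(maxsize=None)
    def f(m):
        if m <= 1:
            return 0  # a single remaining candidate must be the crashing line
        best = None
        # Add k printf statements, splitting the m candidate lines into
        # k + 1 blocks of size at most ceil(m / (k + 1)), then run once.
        for k in range(1, m):
            cost = k * p + r + f((m + k) // (k + 1))
            if best is None or cost < best:
                best = cost
        return best
    return f(n)

if __name__ == "__main__":
    n, r, p = map(int, sys.stdin.readline().split())
    print(worst_case_time(n, r, p))
\end{verbatim}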
\SetAPI{J-C}
\section{ICacheProviderExtendable}
\label{extendable:ICacheProviderExtendable}
\ClearAPI
\javadoc{com.koch.ambeth.cache.ICacheProviderExtendable}{ICacheProviderExtendable}
\javadoc{com.koch.ambeth.cache.ICacheProvider}{ICacheProvider}
\TODO
%% GENERATED LISTINGS - DO NOT EDIT
\inputjava{Extension point for instances of \type{ICacheProvider}}
{jambeth-cache/src/main/java/com/koch/ambeth/cache/ICacheProviderExtendable.java}
\begin{lstlisting}[style=Java,caption={Example to register to the extension point (Java)}]
IBeanContextFactory bcf = ...
IBeanConfiguration myExtension = bcf.registerBean(...);
bcf.link(myExtension).to(ICacheProviderExtendable.class);
\end{lstlisting}
\inputcsharp{Extension point for instances of \type{ICacheProvider}}
{Ambeth.Cache/ambeth/cache/ICacheProviderExtendable.cs}
\begin{lstlisting}[style=Csharp,caption={Example to register to the extension point (C\#)}]
IBeanContextFactory bcf = ...
IBeanConfiguration myExtension = bcf.RegisterBean(...);
bcf.Link(myExtension).To<ICacheProviderExtendable>();
\end{lstlisting}
%% GENERATED LISTINGS END
%!TEX root = handout.tex
\setcounter{equation}{0}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Getting started}
\subsection{Installation}
Before we get started, we will install OpenMS and KNIME. If you take part in a training session, you will likely have received a USB stick from us that contains the required data and software. If we provide laptops with the software, you may of course skip the installation process and continue reading the next section.
\subsubsection{Installation from the OpenMS USB stick}
Please choose the directory that matches your operating system and execute the installer.
For example for \textbf{Windows} you call
\begin{itemize}
\item the OpenMS installer: \directory{\WindowsOpenMSInstallerName}
\item the KNIME installer: \directory{\WindowsKnimeInstallerName}
\item OpenMS prerequisites (Windows-only): After installation, before your first use of the OpenMS plugin in KNIME you will be asked to download it automatically if certain requirements are not found in your Windows registry. Alternatively, you can get a bundled version \href{\WindowsPrerequisitesLink}{here} or on the OpenMS USB stick ( \directory{\WindowsOpenMSPrereqInstallerName} ).
\end{itemize}
on \textbf{macOS} you call
\begin{itemize}
\item the OpenMS installer: \directory{\MacOpenMSInstallerName}
\item the KNIME installer: \directory{\MacKnimeInstallerName}
\end{itemize}
and follow the instructions. For the OpenMS installation on \textbf{macOS}, you need to accept the license and drag and drop the OpenMS folder into your Applications folder.
\note{Due to increasing security measures for downloaded apps (e.g. path randomization) on \textbf{macOS} you might need to open TOPPView.app and TOPPAS.app while holding \keys{\ctrl} and accept the warning.
If the app still does not open, you might need to move them from \directory{Applications / OpenMS-2.7.0} to e.g. your Desktop and back.}
On \textbf{Linux}, you can extract KNIME to a folder of your choice; for TOPPView, you need to install OpenMS via your package manager or build it yourself following the instructions under \href{https://www.openms.de/documentation}{www.openms.de/documentation}.
\note{If you have installed OpenMS on Linux or macOS via your package manager (for instance by installing the \texttt{OpenMS-2.7.0-Linux.deb} package), then you need to set the \texttt{OPENMS\_DATA\_PATH} variable to the directory containing the shared data (normally \texttt{/usr/share/OpenMS}). This must be done prior to running any TOPP tool.}
\subsubsection{Installation from the internet}
If you are working through this tutorial at home you can get the installers under the following links:
\begin{itemize}
\item OpenMS: \href{https://www.openms.de/download/openms-binaries}{ https://www.openms.de/download/openms-binaries}
\item KNIME: \href{https://www.knime.org/downloads/overview}{ https://www.knime.org/downloads/overview}
\item OpenMS prerequisites (Windows-only): After installation, before your first use of the OpenMS plugin in KNIME you will be asked to download it automatically if certain requirements are not found in your Windows registry. Alternatively, you can get a bundled version \href{\WindowsPrerequisitesLink}{here}.
\end{itemize}
Choose the installers for the platform you are working on.
\subsection{Data conversion}
\label{Data_Conversion}
Each MS instrument vendor has one or more formats for storing the acquired data. Converting these data into an open format (preferably mzML) is the very first step when you want to work with open-source mass spectrometry software. A freely available conversion tool is \texttt{MSConvert}, which is part of a \texttt{ProteoWizard} installation. All files used in this tutorial \textbf{have already been converted to mzML} by us, so you do not need to perform the data conversion yourself.
However, we provide a small raw file so you can try the important step of raw data conversion for yourself.
\note{The OpenMS installation package for Windows automatically installs ProteoWizard, so you do not need to download and install it separately. Due to restrictions from the instrument vendors, file format conversion for most formats is \textbf{only possible on Windows} systems. In practice, performing the conversion to mzML on the acquisition PC connected to the instrument is usually the most convenient option.}
\noindent To convert raw data to mzML using \texttt{ProteoWizard} you can either use \texttt{MSConvertGUI} (a graphical user interface) or \texttt{msconvert} (a simple command line tool). Both tools are available in:
\newline
\directory{\WindowsDefaultPWizFolder}.
You can find a small RAW file on the USB stick \directory{ \ExampleDataFolder / Introduction / datasets / raw}.
\subsubsection{MSConvertGUI}
\texttt{MSConvertGUI} (see Fig.~\ref{fig:MSConvertGUI}) exposes the main parameters for data conversion in a convenient graphical user interface.
\begin{figure}
\centering
\includegraphics[width=12cm]{introduction/proteowizard.png}
\caption{MSConvertGUI (part of ProteoWizard), allows converting raw files to mzML. Select the raw files you want to convert by clicking on the browse button and then on Add. Default parameters can usually be kept as-is. To reduce the initial data size, make sure that the peakPicking filter (converts profile data to centroided data (see Fig.~\ref{fig:ProfileCentroidData})) is listed, enabled (true) and applied to all MS levels (parameter "1-"). Start the conversion process by clicking on the Start button.}
\label{fig:MSConvertGUI}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=12cm]{introduction/profilecentroided.png}
\caption{The amount of data in a spectra is reduced by peak picking. Here a profile spectrum (blue) is converted to centroided data (green). Most algorithms from this point on will work with centroided data.}
\label{fig:ProfileCentroidData}
\end{figure}
\subsubsection{msconvert}
The \texttt{msconvert} command line tool has no user interface but offers more options than the application \texttt{MSConvertGUI}. Additionally, since it can be used within a batch script, it allows converting large numbers of files and can be much more easily automatized.
\noindent To convert and peak-pick the file \textit{raw\_data\_file.RAW}, you may write:
%% -{}- avoids latex ligatures
\noindent\menu{msconvert raw\_data\_file.RAW -{}-filter "peakPicking true 1-"}
\noindent in your command line.
\noindent To convert all RAW files in a folder, you may write:
\noindent\menu{msconvert *.RAW -o my\_output\_dir}
\note{To display all options you may type \menu{msconvert -{}-help}. Additional information is available on the ProteoWizard web page.}
\subsubsection{ThermoRawFileParser}
Recently, the open-source, platform-independent ThermoRawFileParser tool has been developed. While ProteoWizard and MSConvert are only available for Windows systems, this new tool also allows converting raw data on macOS or Linux.
\note{To learn more about the ThermoRawFileParser and how to use it in KNIME see Section \ref{sec:Minimal_Workflow}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Data visualization using \OPENMSTOOL{TOPPView}}
\label{Data_Visualization}
Visualizing the data is the first step in quality control, an essential tool in understanding the data, and of course an essential step in pipeline development.
OpenMS provides a convenient viewer for some of the data: \OPENMSTOOL{TOPPView}.
\begin{figure}
\includegraphics[width=\textwidth]{introduction/TOPPView.png}
\caption{TOPPView, the graphical application for viewing mass spectra and analysis results. Top window shows a small region of a peak map. In this 2D representation of the measured spectra, signals of eluting peptides are colored according to the raw peak intensities. The lower window displays an extracted spectrum (=scan) from the peak map. On the right side, the list of spectra can be browsed.}
\label{fig:toppview}
\end{figure}
\noindent We will guide you through some of the basic features of \OPENMSTOOL{TOPPView}. Please familiarize yourself with the key controls and visualization methods.
We will make use of these later throughout the tutorial. Let's start with a first look at one of the files of our tutorial data set. Note that conceptually, there are no differences in visualizing metabolomic or proteomic data. Here, we inspect a simple proteomic measurement:
\begin{figure}[!htb]
\includegraphics[width=0.75\textwidth]{introduction/3dview.png}
\caption{3D representation of the measured spectra, signals of eluting peptides are colored according to the raw peak intensities.}
\label{fig:toppview_3D}
\end{figure}
\begin{itemize}
\item Start \OPENMSTOOL{TOPPView} (see \textbf{Windows}' Start-Menu or \directory{Applications/OpenMS-2.7.0} on \textbf{macOS})
\item Go to \menu{File > Open File}, navigate to the directory where you copied the contents of the USB stick to,
and select
\directory{Example\_Data / Introduction / datasets / small / velos005614.mzML}
. This file contains only a reduced LC-MS map \footnote{only a selected RT and m/z range
was extracted using the TOPP tool \OPENMSTOOL{FileFilter}} of a label-free proteomic platelet measurement recorded on an Orbitrap velos.
The other two mzML files contain technical replicates of this experiment.
First, we want to obtain a global view on the whole LC-MS map - the default option \textit{Map view 2D} is the correct one and we can click the \menu{Ok} button.
\item Play around.
\item Three basic modes allow you to interact with the displayed data: scrolling, zooming and measuring:
\begin{itemize}
\item Scroll mode
\begin{itemize}
\item Is activated by default (though each loaded file is initially displayed fully zoomed out, so you do not need to scroll).
\item Allows you to browse your data by moving around in RT and m/z range.
\item When zoomed in, you can scroll through the spectra. Click-drag on the current view.
\item Arrow keys can be used to scroll the view as well.
\end{itemize}
\item Zoom mode
\begin{itemize}
\item Zooming into the data: either mark an area in the current view with your mouse while holding the left mouse
button plus the \keys{\ctrlwin} key to zoom to this area
or use your mouse wheel to zoom in and out.
\item All previous zoom levels are stored in a zoom history. The zoom history can be traversed using
\keys[,]{\ctrlwin,+} or \keys[,]{\ctrlwin,-} or the mouse wheel (scroll up and down).
\item Pressing backspace \keys{\, \backspace \,\,} zooms out to show the full LC-MS map (and also resets the zoom history).
\end{itemize}
\item Measure mode
\begin{itemize}
\item It is activated using the \keys{\, \shift \,\,\, } (shift) key.
\item Press the left mouse button down while a peak is selected and drag the mouse to
another peak to measure the distance between peaks.
\item This mode is implemented in the 1D and 2D mode only.
\end{itemize}
\end{itemize}
\item Right click on your 2D map and select \menu{Switch to 3D view} and examine your
data in 3D mode (see Fig. ~\ref{fig:toppview_3D})
%TODO: Inconsitent buttons 2D 1D 3D
\item Go back to the 2D view. In 2D mode, visualize your data in the different intensity normalization modes: linear, percentage, snap and log view (icons on the upper left tool bar). You can hover over the icons for additional information.
\note{On \textit{macOS}, due to a bug in one of the external libraries used by OpenMS, you will see a small window of the 3D mode when switching to 2D. Close the 3D tab in order to get rid of it.}
\item In \OPENMSTOOL{TOPPView} you can also execute TOPP tools. Go to
\menu{Tools > Apply tool (whole layer)} and choose a TOPP tool (e.g., \OPENMSTOOL{FileInfo}) and
inspect the results.
\end{itemize}
\noindent Depending on your data, MS/MS spectra can be visualized as well (see Fig.~\ref{fig:ms2}). You can do so by double-clicking on the MS/MS spectrum shown in the scan view.
\newline
\begin{figure}[!htb]
\includegraphics[width=0.75\textwidth]{introduction/ms2_introduction.png}
\caption{MS/MS spectrum}
\label{fig:ms2}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Introduction to KNIME / OpenMS}
\label{KNIME_Intro}
Using OpenMS in combination with KNIME, you can create, edit, open, save, and run workflows
that combine TOPP tools with the powerful data analysis capabilities of KNIME. Workflows can
be created conveniently in a graphical user interface. The parameters of all involved
tools can be edited within the application and are also saved as part of the workflow.
Furthermore, KNIME interactively performs validity checks during the workflow editing
process, in order to make it more difficult to create an invalid workflow.
\newline
\noindent Throughout most parts of this tutorial you will use KNIME to create and
execute workflows. The first step is to make yourself familiar with KNIME. Additional
information on basic usage of KNIME can be found on the KNIME
\href{https://tech.knime.org/knime}{Getting Started page}. However,
the most important concepts will also be reviewed in this tutorial.
\subsubsection{Plugin and dependency installation}
\label{Install_plugins}
Before we can start with the tutorial, we need to install all the required extensions for KNIME. Since KNIME 3.2.1, the program automatically
detects missing plugins when you open a workflow, but to make sure that the right source for the OpenMS plugin is chosen, please follow the instructions here.
First, we install some additional extensions that are required by our OpenMS nodes or used in the Tutorials e.g. for visualization and file handling.
\begin{enumerate}
\item Click on \menu{Help > Install New Software...}
\item From the \menu{Work with:} drop-down list select \menu{\KnimeUpdateSite}
\item Now select the following plugins from the \textit{KNIME \& Extensions} category
\begin{itemize}
\item KNIME Base Chemistry Types \& Nodes
\item KNIME Chemistry Add-Ons
\item KNIME File Handling Nodes (required for OpenMS nodes in general)
\item KNIME Interactive R Statistics Integration
\item KNIME Report Designer
\item KNIME SVG Support
% \item KNIME XLS Support not needed anymore (integrated in e.g. KNIME 3.2.1)
% \item KNIME XML-Processing (integrated in e.g. KNIME 3.2.1)
% \item KNIME Math Expression (JEP) (integrated in e.g. KNIME 3.2.1)
\end{itemize}
%\item And the following plugin from the \textit{Marvin Chemistry Extensions (donated by Infocom \& Chemaxon)} category
% \begin{itemize}
% \item ChemAxon/Infocom Marvin Extensions Feature
% \end{itemize}
\item Click on \menu{Next} and follow the instructions (you may but don't need to restart KNIME now)
\item Click again on \menu{Help > Install New Software...}
\item From the \menu{Work with:} drop-down list select \\\menu{\KnimeTrustedSite}
\item Now select the following plugin from the "KNIME Community Contributions - Cheminformatics" category
\begin{itemize}
\item RDKit KNIME integration
\end{itemize}
\item Click on \menu{Next} and follow the instructions and after a restart of KNIME the dependencies will be installed.
\end{enumerate}
%Now you need to decide which OpenMS nodes you want to install. You may choose between the stable, well-tested release or the unstable, nightly release with extended functionality.
\noindent In addition, we need to install R for the statistical downstream analysis. Choose the directory that matches your operating system, double-click the R installer and follow the instructions. We recommend to use the default settings whenever possible. On macOS you also need to install XQuartz from the same directory.\\
\noindent Afterwards, open your R installation. If you use Windows, you will find an ``R x64 3.6.X'' icon on your desktop. If you use macOS, you will find R in your Applications folder. In R, type the following lines (you might also copy them from the file \directory{R / install\_R\_packages.R} on the USB stick):
\begin{code}
\begin{minted}{R}
install.packages('Rserve',,"http://rforge.net/",type="source")
install.packages("Cairo")
install.packages("devtools")
install.packages("ggplot2")
install.packages("ggfortify")
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install()
BiocManager::install(c("MSstats"))
\end{minted}
\end{code}
\noindent In KNIME, click on \menu{KNIME > Preferences}, select the category \menu{KNIME > R} and set the ``Path to R Home'' to your installation path. You can use the following settings, if you installed R as described above:
\begin{itemize}
\item Windows: C: \textbackslash Program Files \textbackslash R \textbackslash R-3.6.X (where X is the version you used to install the above libraries)
\item macOS: /Library/Frameworks/R.framework/Versions/3.6/Resources
\end{itemize}
\noindent You are now ready to install the OpenMS nodes.
\begin{itemize}
\item Open KNIME.
\item Click on \menu{Help > Install New Software...}
\end{itemize}
%\iftoggle{isprerelease}
%{
%\note{For this tutorial we use the \textbf{bleeding edge, nightly release} version of OpenMS because it corresponds to a prerelease of the new OpenMS version. While not being a full release, it was nevertheless intensively tested to ensure its functionality for this tutorial. For regular use we still recommend using the latest stable OpenMS release. Please also note that some of the workflows shown here require new functionality contained only in the prerelease version of OpenMS. These will likely not work if transferred to the current stable, but older OpenMS version.}
%
%Instructions for the bleeding edge, nightly release:
%\begin{itemize}
% % If you base the tutorial on the nightly contributions (not recommended - more of a emergency measure)
% \item \label{it:add_site} In the now open dialog choose \menu{Add...} (in the upper right corner of the dialog) to define a new update site. In the opening dialog enter the following details. \\
% \textit{Name:} \texttt{Trunk Community Contributions} \\
% \textit{Location:} \menu{\KnimeTrunkSite}
% \item \label{it:select_site} After pressing \keys{OK} KNIME will show you all the contents of the added Update Site.
% \item \textbf{Note:} From now on, you can use this repository for plugins in the \menu{Work with:} drop-down list.
% \item Select the \textbf{OpenMS} nodes in the category: \\ "KNIME Community Contributions - Bioinformatics \& NGS" and click \keys{Next}.
% \item Follow the instructions and after a restart of KNIME the OpenMS nodes will be available in the Node repository under “Community Nodes”.
%\end{itemize}
%}
%{
%Instructions for the stable release (recommended):
%\begin{itemize}
% \item From the \menu{Work with:} drop-down list select the \\ \menu{\KnimeTrustedSite}
% \item Select the \textbf{OpenMS} nodes in the category: \\ "KNIME Community Contributions - Bioinformatics \& NGS" and click \keys{Next}.
% \item Follow the instructions and after a restart of KNIME the OpenMS nodes will be available in the Node repository under “Community Nodes”.
%\end{itemize}
We included a custom KNIME update site to install the OpenMS KNIME plugins from the USB stick. If you do not have a USB stick available, please see the internet-based instructions further below.
\begin{itemize}
% If you base the tutorial on the nightly contributions (not recommended - more of a emergency measure)
\item \label{it:add_site} In the dialog that has just opened, choose \menu{Add...} (in its upper right corner) to define a new update site. In the dialog that opens next, enter the following details. \\
\textit{Name:} \texttt{OpenMS 2.7 UpdateSite} \\
\textit{Location:} \KnimeUSBUpdateSite \\
\item \label{it:select_site} After pressing \keys{OK} KNIME will show you all the contents of the added Update Site.
\item \textbf{Note:} From now on, you can use this repository for plugins in the \menu{Work with:} drop-down list.
\item Select the \textbf{OpenMS} nodes in the "Uncategorized" category and click \keys{Next}.
\item Follow the instructions and after a restart of KNIME the OpenMS nodes will be available in the Node repository under “Community Nodes”.
\end{itemize}
Alternatively, you can follow these steps to install the OpenMS KNIME plugins from the internet. Note that the download can be slow.
\begin{itemize}
% If you base the tutorial on the nightly contributions (not recommended - more of a emergency measure)
\item In the dialog that has just opened, choose \menu{Add...} (in its upper right corner) to define a new update site. In the dialog that opens next, enter the following details. \\
\textit{Name:} \texttt{OpenMS 2.5 UpdateSite} \\
\textit{Location:} \menu{\KnimeTrunkSite}
\item After pressing \keys{OK} KNIME will show you all the contents of the added Update Site.
\item \textbf{Note:} From now on, you can use this repository for plugins in the \menu{Work with:} drop-down list.
\item Select the \textbf{OpenMS} nodes in the "Uncategorized" category and click \keys{Next}.
\item Follow the instructions and after a restart of KNIME the OpenMS nodes will be available in the Node repository under “Community Nodes”.
\end{itemize}
%}
\subsubsection{KNIME concepts}
\label{KNIME_concepts}
A \textbf{workflow} is a sequence of computational steps applied to one or more inputs in order to process and analyze the data.
In KNIME such workflows are implemented graphically by connecting so-called \textbf{nodes}.
A node represents a single analysis step in a workflow.
Nodes have input and output \textbf{ports} where the data enters the node or the results are provided for other nodes after processing, respectively.
KNIME distinguishes between different port types, representing different types of data.
The most common representation of data in KNIME is the table (similar to an Excel sheet).
Ports that accept tables are marked with a small triangle.
For OpenMS nodes, we use a different port type, so-called \textbf{file ports}, representing complete files.
Those ports are marked by a small blue box.
Filled blue boxes represent mandatory inputs and empty blue boxes optional inputs. The same holds for output ports, although you can deactivate them
in the configuration dialog (double-click on the node) under the \textit{OutputTypes} tab. After execution, deactivated ports will be marked with a red cross, and downstream nodes
will be inactive (not configurable).\\
A typical OpenMS workflow in KNIME can be divided into two conceptually different parts:
\begin{itemize}
\item
Nodes for signal and data processing, filtering and data reduction. Here, files are passed between nodes. Execution times of the individual steps are typically longer for these types of nodes as they perform the main computations.
\item
Downstream statistical analysis and visualization. Here, tables are passed between nodes and mostly internal KNIME nodes or nodes from third-party statistics plugins are used. The transfer from files (produced by OpenMS) to tables usually happens with our provided Exporter and Reader nodes (e.g., MzTabExporter followed by MzTabReader).
\end{itemize}
Moreover, nodes can have three different states, indicated by the small traffic light below the node.
\begin{itemize}
\item
Inactive, failed, and not yet fully configured nodes are marked red.
\item
Configured but not yet executed nodes are marked yellow.
\item
Successfully executed nodes are marked green.
\end{itemize}
If the node execution fails, the node will switch to the red state. Other anomalies and warnings like missing information or empty results
will be presented with a yellow exclamation mark above the traffic light.
Most nodes will be configured as soon as all input ports are connected. Some nodes need to know about the output of the predecessor and may stay red until the predecessor has been executed.
If nodes still remain in a red state, they probably require additional parameters in the configuration dialog that can neither be guessed from the data nor filled with sensible defaults.
In this case, or if you want to customize the default configuration in general, you can open the configuration dialog of a node with a double-click on the node.
For all OpenMS nodes you will see a configuration dialog like the one shown in \cref{fig:knime_configure}.
\note{OpenMS distinguishes between normal parameters and advanced parameters.
Advanced parameters are by default hidden from the users since they should only rarely be customized.
In case you want to have a look at these parameters or need to customize them in one of the tutorials, you can show them by clicking the checkbox \menu{Show advanced parameter} in the lower part of the dialog. Afterwards, the advanced parameters are shown in a light gray color.}
The dialog shows the individual parameters, their current value and type, and, in the lower part of the dialog, the
documentation for the currently selected parameter. Please also note the tabs on the top of the configuration dialog.
In the case of OpenMS nodes, there will be another tab called \textit{OutputTypes}. It contains dropdown menus for
every output port that let you select the output filetype that you want the node to return (if the tool supports
it). For optional output ports you can select \textit{Inactive} such that the port is crossed out after execution and
the associated generation of the file and possible additional computations are not performed. Note that this will
deactivate potential downstream nodes connected to this port.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{knime_setup/knime_configure_dialog.png}
\caption{Node configuration dialog of an OpenMS node.}
\label{fig:knime_configure}
\end{figure}
\subsubsection{Overview of the graphical user interface}
\begin{figure}
\includegraphics[width=\textwidth]{knime_setup/knime_workbench_marked.png}
\caption{The KNIME workbench.}
\label{fig:knime_workbench}
\end{figure}
The graphical user interface (GUI) of KNIME consists of different components or so-called panels that are shown in \cref{fig:knime_workbench}.
We will briefly introduce the individual panels and their purposes below.
\begin{description}
\item[Workflow Editor:]
The workflow editor is the central part of the KNIME GUI.
Here you assemble the workflow by adding nodes from the Node Repository via ``drag \& drop''. For quick creation of a
workflow, note that double-clicking on a node in the repository automatically connects it to the selected node in the workbench
(connecting its inputs to as many fitting outputs of the previous node as possible).
Manually, nodes can be connected by clicking on the output port of one node and dragging the edge until releasing the
mouse at the desired input port of the next node. Deletions are possible by selecting nodes and/or edges and pressing
\keys{Del} or (\keys{Fn}+)\keys{Backspace} depending on your OS and settings. Multiselection happens via dragging
rectangles with the mouse or adding elements to the selection by clicking them while holding down \keys{Ctrl}.
\item[KNIME Explorer:]
Shows a list of available workflows (also called workflow projects).
You can open a workflow by double-clicking it.
A new workflow can be created with a right-click in the Workflow Explorer followed by choosing \menu{New KNIME
Workflow...} from the context menu that appears.
Remember to save your workflow often with the \keys{Ctrl}+\keys{S} shortcut.
\item[Workflow Coach (since KNIME 3.2.1):]
Shows a list of suggested follow-up nodes, based on the nodes you last added or clicked.
When you are not sure which node to choose next, this gives you a reasonable suggestion based on other users' behavior.
Connect a suggested node to the last node with a double-click.
\item[Node Repository:]
Shows all nodes that are available in your KNIME installation.
Every plugin you install will provide new nodes that can be found here.
The OpenMS nodes can be found in \menu{Community Nodes > OpenMS}.
Nodes for managing files (e.g., Input Files or Output Folders) can be found in \menu{Community Nodes > GenericKnimeNodes}.
You can search the node repository by typing the node name into the small text box in the upper part of the node repository.
\item[Outline:]
The Outline panel contains a small overview of the complete workflow. While of limited use when working on a small
workflow, this feature is very helpful as soon as the workflows get bigger. You can adjust the zoom level by changing
the percentage in the toolbar at the top of KNIME.
\item[Console:]
In the console panel, warning and error messages are shown.
This panel will provide helpful information if one of the nodes fails or shows a warning sign.
\item[Node Description:]
As soon as a node is selected, the Node Description window will show the documentation of the node, including
documentation for all its parameters and, especially, its inputs and outputs, so that you know what types of data
the node may expect or produce.
For OpenMS nodes you will also find a link to the tool page of the online documentation.
\end{description}
\subsubsection{Creating workflows}
\label{sec:create_workflows}
Workflows can easily be created by a right click in the Workflow Explorer followed by clicking on \menu{New KNIME Workflow...}.
\subsubsection{Sharing workflows}
\label{sec:sharing_workflows}
To be able to share a workflow with others, KNIME supports the import and export of complete workflows.
To export a workflow, select it in the Workflow Explorer and select \menu{File > Export KNIME Workflow...}.
KNIME will export workflows as a \textit{knwf} file containing all the information on nodes, their connections, and their parameter configuration.
Those \textit{knwf} files can again be imported by selecting \menu{File > Import KNIME Workflow...}.
\note{For your convenience we added all workflows discussed in this tutorial to the \directory{Workflows} folder on the USB Stick. Additionally, the
workflow files can be found on our \href{https://github.com/OpenMS/Tutorials}{GitHub repository}.
If you want to check your own workflow by comparing it to the solution or got stuck, simply import the full workflow from the corresponding \textit{knwf} file and after that double-click it in your KNIME Workflow repository to open it.}
\subsubsection{Duplicating workflows}
\label{sec:duplicate-wf}
In this tutorial, a lot of the workflows will be created based on the workflow from a previous task.
To keep the intermediate workflows, we suggest you create copies of your workflows so you can see the progress.
To create a copy of your workflow, save it, close it and follow the next steps.
\begin{itemize}
\item
Right click on the workflow you want to create a copy of in the Workflow Explorer and select \menu{Copy}.
\item
Right click again somewhere on the workflow explorer and select \menu{Paste}.
\item
This will create a workflow with the same name as the one you copied, with ``(2)'' appended.
\item
To distinguish them later on, you can easily rename the workflows in the Workflow Explorer by right-clicking on the workflow and selecting \menu{Rename}. \note{To rename a workflow, it has to be closed as well.}
\end{itemize}
\subsubsection{A minimal workflow}
\label{sec:Minimal_Workflow}
Let us now start with the creation of our very first, very simple workflow.
As a first step, we will gather some basic information about the data set before starting the
actual development of a data analysis workflow. This minimal workflow can also be used to check if all requirements
are met and that your system is compatible.
\begin{itemize}
\item
Create a new workflow.
\item Add an \KNIMENODE{Input File} node and an \KNIMENODE{Output Folder} node (to be found in \menu{Community Nodes > GenericKnimeNodes > IO}) and a \KNIMENODE{FileInfo} node (to be found in the category \menu{Community Nodes > OpenMS > File Handling}) to the workflow.
\item Connect the \KNIMENODE{Input File} node to the \KNIMENODE{FileInfo} node, and the first output port of the \KNIMENODE{FileInfo} node to the \KNIMENODE{Output Folder} node.
\note{In case you are unsure about which node port to use, hovering the cursor over the port in question will display the port name and what kind of input it expects.}
The complete workflow is shown in \cref{fig:knime_minimal}.
Note that FileInfo can produce two different kinds of output files; here we use its first output port.
\item All nodes are still marked red, since we are missing an actual input file.
Double-click the Input File node and select \menu{Browse}.
In the file system browser select \directory{Example\_Data / Introduction / datasets / tiny / velos005614.mzML} and click \menu{Open}.
Afterwards close the dialog by clicking \menu{Ok}.
\note{Make sure to use the ``tiny'' version this time, not ``small'', for the sake of faster workflow execution.}
\item The \KNIMENODE{Input File} node and the \KNIMENODE{FileInfo} node should now have switched to yellow, but the \KNIMENODE{Output Folder} node is still red.
Double-click on the \KNIMENODE{Output Folder} node and click on \menu{Browse} to select an output directory for the generated data.
\item Great! Your first workflow is now ready to be run. Press \keys{\shift + F7} (shift key + F7; or the button with multiple green triangles in the KNIME Toolbar) to execute the complete workflow.
You can also right click on any node of your workflow and select \menu{Execute} from the context menu.
\item The traffic lights tell you about the current status of all nodes in your workflow.
Currently running tools show either a progress in percent or a moving blue bar, nodes waiting for data show the small word ``queued'', and successfully executed ones become green.
If something goes wrong (e.g., a tool crashes), the light will become red.
\item In order to inspect the results, you can just right-click the \KNIMENODE{Output Folder} node and select \menu{View: Open the output folder}.
You can then open the text file and inspect its contents.
You will find some basic information about the data contained in the mzML file, e.g., the total number of spectra and peaks, the RT and m/z ranges, and how many MS1 and MS2 spectra the file contains.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.59\textwidth]{knime_setup/Minimal_FileInfo}
\caption{A minimal workflow calling FileInfo on a single file.}
\label{fig:knime_minimal}
\end{figure}
Workflows are typically constructed to process a large number of files automatically.
As a simple example, consider that you would like to convert multiple Thermo Raw files into the mzML format.
We will now modify the workflow to process three different files in a loop and then write the output files to a folder.
\begin{itemize}
\item
We start from the previous workflow.
\item
First we need to replace our single input file with multiple files.
Therefore we add the \KNIMENODE{Input Files} node from the category \menu{Community Nodes > GenericKnimeNodes > IO}.
\item
To select the files, we double-click on the \KNIMENODE{Input Files} node and click on \menu{Add}.
In the file system browser, we select all three files from the directory \directory{Example\_Data / Introduction / datasets / tiny /}
and close the dialog with \menu{Ok}.
\item
We now add two more nodes: the \KNIMENODE{ZipLoopStart} and the \KNIMENODE{ZipLoopEnd} node from the category \menu{Community Nodes > GenericKnimeNodes > Flow}. We also replace the \KNIMENODE{FileInfo} node with a \KNIMENODE{FileConverter} node (to be found under \menu{Community Nodes > OpenMS}).
\item
Afterwards we connect the \KNIMENODE{Input Files} node to the first port of the \KNIMENODE{ZipLoopStart} node, the first port of the \KNIMENODE{ZipLoopStart} node to the \KNIMENODE{FileConverter} node, the first output port of the \KNIMENODE{FileConverter} node to the first input port of the \KNIMENODE{ZipLoopEnd} node, and the first output port of the \KNIMENODE{ZipLoopEnd} node to the \KNIMENODE{Output Folder} node (NOT to an \KNIMENODE{Output File} node).
The complete workflow is shown in \cref{fig:knime_minimal_loop}.
\item
The workflow is already complete.
Simply execute the workflow and inspect the output as before.
\end{itemize}
In case you had trouble understanding what ZipLoopStart and ZipLoopEnd do, here is a brief explanation, followed by a small conceptual sketch:
\begin{itemize}
\item
The \KNIMENODE{Input Files} node passes a list of files to the \KNIMENODE{ZipLoopStart} node.
\item
The \KNIMENODE{ZipLoopStart} node takes the file list as input but passes the files on sequentially (that is, one after the other) to the next node.
\item
The \KNIMENODE{ZipLoopEnd} node collects the single files that arrive at its input port. After all files have been processed, the collected files are passed on again as a file list to the node that follows.
\end{itemize}
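The following plain-Python sketch is purely a conceptual illustration of this pattern; it is not how KNIME executes the loop internally, and the file names and the \texttt{convert\_file} helper are made up for the example:
\begin{code}
\begin{minted}{python}
# Conceptual view of ZipLoopStart / ZipLoopEnd as a plain loop over files.
def convert_file(path):
    # stand-in for whatever node sits inside the loop (e.g. a file conversion)
    return path + ".converted"

input_files = ["file1", "file2", "file3"]   # what the Input Files node provides

collected = []                  # what ZipLoopEnd accumulates
for f in input_files:           # ZipLoopStart hands the files over one by one
    collected.append(convert_file(f))
# after the loop, 'collected' is passed on as a single file list
print(collected)
\end{minted}
\end{code}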
\begin{figure}
\centering
\includegraphics[width=\textwidth]{knime_setup/Minimal_RawFileConverter_Loop}
\caption{A minimal workflow calling the FileConverter on multiple Thermo Raw files in a loop.}
\label{fig:knime_minimal_loop}
\end{figure}
\subsubsection{Digression: Working with chemical structures}
Metabolomics analyses often involve working with chemical structures. Popular cheminformatic toolkits such as RDKit~\cite{rdkit} or CDK~\cite{cdk} are available as KNIME plugins and allow us to work with chemical structures directly from within KNIME. In particular, we will use KNIME and RDKit to visualize a list of compounds and filter them by predefined substructures. Chemical structures are often represented as SMILES (\textbf{S}implified \textbf{m}olecular \textbf{i}nput \textbf{l}ine \textbf{e}ntry \textbf{s}pecification), a simple and compact way to describe complex chemical structures as text. For example, the chemical structure of L-alanine can be written as the SMILES string C[C@H](N)C(O)=O. As we will discuss later, all OpenMS tools that perform metabolite identification will report SMILES as part of their result, which can then be further processed and visualized using RDKit and KNIME.
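As a brief aside, the same RDKit functionality is also available as a plain Python library outside of KNIME. The sketch below parses two SMILES strings and tests them against a substructure; the SMARTS pattern is our own rough approximation of an aromatic carboxylic acid and not necessarily the exact definition used by the KNIME filter node used later:
\begin{code}
\begin{minted}{python}
from rdkit import Chem

# rough SMARTS for a carboxylic acid attached to an aromatic carbon (assumption)
pattern = Chem.MolFromSmarts("c[CX3](=O)[OX2H1]")

for name, smiles in [("L-alanine", "C[C@H](N)C(O)=O"),
                     ("benzoic acid", "c1ccccc1C(=O)O")]:
    mol = Chem.MolFromSmiles(smiles)          # parse the SMILES string
    print(name, mol.HasSubstructMatch(pattern))
# L-alanine: False, benzoic acid: True
\end{minted}
\end{code}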
\begin{figure}
\centering
\includegraphics[width=\textwidth]{metabo/structures_filter_workflow.png}
\caption{Workflow to visualize a list of SMILES strings and filter them by predefined substructures.}
\label{fig:structures_filter_workflow}
\end{figure}
\noindent Perform the following steps to build the workflow shown in Fig.~\ref{fig:structures_filter_workflow}. You will use this workflow to visualize a list of SMILES strings and filter them by predefined substructures:
\begin{itemize}
\item Add the node \KNIMENODE{File Reader}, open the node configuration dialog and select the file \directory{smiles.csv}. This file has been exported from the Human Metabolome Database (HMDB) and contains the portion of the human metabolome that has been detected and quantified. The file preview on the bottom of the dialog shows that each compound is given by its HMDB accession, compound name, and SMILES string. Click on the column header 'SMILES' to change its properties. Change the column type from 'string' to 'smiles' and close the dialog with \menu{Ok}. Afterwards the SMILES column will be visualized as chemical structures instead of text directly within all KNIME tables.
\item Add the node \KNIMENODE{RDKit From Molecule} and connect it to the \KNIMENODE{File Reader}. This node will use the provided SMILES strings to add an additional column that is required by RDKit.
\item Add the node \KNIMENODE{RDKit Functional Group Filter} and open the node configuration dialog. You can use this dialog to filter the compounds by any combination of functional groups. In this case we want to find all compounds that contain at least one aromatic carboxylic acid group. To do this, set this group as active and choose '>=' and '1'.
\item Connect the first output port (Molecules passing the filter) to a \KNIMENODE{CSV Writer} node to save the filtered metabolites to a file. Right click \KNIMENODE{RDKit Functional Group Filter} and select the view 'Molecules passing the filter' to inspect the selected compounds in KNIME. How many compounds pass the chosen filter (see Fig.~\ref{fig:structures_filter_results})?
\begin{figure}
\centering
\includegraphics[width=0.59\textwidth]{metabo/structures_filter_results.png}
\caption{Resulting list of compounds that contain at least one aromatic carboxylic acid group.}
\label{fig:structures_filter_results}
\end{figure}
\end{itemize}
\subsubsection{Advanced topic: Metanodes}
Workflows can get rather complex and may contain dozens or even hundreds of nodes. KNIME provides a simple way to improve the handling and clarity of large workflows:
\KNIMENODE{Metanodes} allow you to bundle several nodes into a single \KNIMENODE{Metanode}.
\begin{task}
Select multiple nodes (e.g., all nodes of the ZipLoop including the start and end node). To select a set of nodes, draw a rectangle around them with the left mouse button or hold \keys{Ctrl} to add/remove single nodes from the selection. \textbf{Pro-tip:} There is a \menu{Select Loop} option when you right-click a node in a loop, which does exactly that for you. Then, open the context menu (right-click on a node in the selection) and select \menu{Create Metanode}. Enter a caption for the \KNIMENODE{Metanode}. The previously selected nodes are now contained in the \KNIMENODE{Metanode}. Double-clicking on the \KNIMENODE{Metanode} will display the contained nodes in a new tab window.
\end{task}
%TODO: Check what is going on here with Metanode options! Wrap, ceate, freeze
%TODO: Check component
\begin{task}
Wrap the Metanode to let it behave like an encapsulated single node. First select the \KNIMENODE{Metanode},
open the context menu (right-click) and select \menu{Metanode > Wrap}. The differences between Metanodes and their
wrapped counterparts are marginal (and only apply when exposing user inputs and workflow variables). Therefore, we
suggest using standard Metanodes to clean up your workflow and to cluster common subparts until you actually hit
their limits.
\end{task}
\begin{task}
Undo the packaging. First select the \KNIMENODE{(Wrapped) Metanode}, open the context menu (right-click) and select \menu{(Wrapped) Metanode > Expand}.
\end{task}
\subsubsection{Advanced topic: R integration}
KNIME provides a large number of nodes for a wide range of statistical analysis, machine learning, data processing,
and visualization tasks. Still, more recent statistical methods, specialized visualizations, or cutting-edge
algorithms may not be covered in KNIME. In order to expand its capabilities beyond the readily available nodes,
external scripting languages can be integrated. In this tutorial, we primarily use scripts written in the powerful
statistical computing language R. Note that this part is considered advanced and might be difficult to follow if you
are not familiar with R; in that case, you may skip it.
The \KNIMENODE{R View (Table)} node allows you to seamlessly include R scripts in KNIME. We will demonstrate with a minimal
example how such a script is integrated.
\begin{task}
First we need some example data in KNIME, which we will generate using the \KNIMENODE{Data Generator} node. You can
keep the default settings and execute the node. The table contains four columns with random coordinates
and one column containing a cluster number (Cluster\_0 to Cluster\_3). Now place an \KNIMENODE{R View (Table)} node
into the workflow and connect the upper output port of the \KNIMENODE{Data Generator} node to the input of the
\KNIMENODE{R View (Table)} node. Right-click the \KNIMENODE{R View (Table)} node and configure it.
If you get an error message like ``Execute failed: R\_HOME does not contain a folder with name 'bin'.'' or ``Execution
failed: R Home is invalid.'', please change the R settings in the preferences. To do so, open \menu{File > Preferences
> KNIME > R} and enter the path to your R installation (the folder that contains the bin directory, e.g., \directory{C:
/ Program Files / R / R-3.4.3}).
If you get an error message like ``Execute failed: Could not find Rserve package. Please install it in your R installation
by running install.packages('Rserve').'', you may need to run your R binary as administrator (in Windows Explorer:
right-click and choose \menu{Run as administrator}) and enter \texttt{install.packages('Rserve')} to install the package.
If R is correctly recognized, we can start writing an R script. Suppose we are interested in plotting the first
and second coordinates, colored according to their cluster number. In R, this can be done with a single call to \texttt{plot}.
In the \KNIMENODE{R View (Table)} text editor, enter the following code: \\
\begin{code}
\begin{minted}{R}
plot(x = knime.in$Universe_0_0,            # x-coordinates
     y = knime.in$Universe_0_1,            # y-coordinates
     main = "Plotting column Universe_0_0 vs. Universe_0_1",
     col = knime.in$"Cluster Membership")  # color points by cluster
\end{minted}
\end{code}
\textbf{Explanation:}
The table provided as input to the \KNIMENODE{R View (Table)} node is available as R \texttt{data.frame} with name
\texttt{knime.in}. Columns (also listed on the left side of the R View window) can be accessed in the usual R way by
first specifying the \texttt{data.frame} name and then the column name (e.g. \texttt{knime.in\$Universe\_0\_0}).
\texttt{plot} is the plotting function we use to generate the image. We tell it to use the data in column
\texttt{Universe\_0\_0} of the dataframe object \texttt{knime.in} (denoted as \texttt{knime.in\$Universe\_0\_0}) as
x-coordinate and the other column \texttt{knime.in\$Universe\_0\_1} as y-coordinate in the plot. \texttt{main} is
simply the main title of the plot and \texttt{col} the column that is used to determine the color (in this case it is
the \texttt{Cluster Membership} column).
Now press the \menu{Eval script} and \menu{Show plot} buttons.
\end{task}
\note{Note that we needed to put some extra quotes around \texttt{Cluster Membership}. If we omit those, R would
interpret the column name only up to the first space (\texttt{knime.in\$Cluster}) which is not present in the table
and leads to an error. Quotes are regularly needed if column names contain spaces, tabs or other special characters
like \$ itself.}
% -*- coding:utf-8 -*-
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\newenvironment{note}{\paragraph{NOTE:}}{}
\newenvironment{field}{\paragraph{field:}}{}
\newcommand*{\tags}[1]{\paragraph{tags: }#1}
\begin{document}
\begin{note}
\tags{tag here}
\begin{field}
\[
\]
\begin{center}
(text here)
\end{center}
\end{field}
\begin{field}
\[
\]
\begin{center}
(text here)
\end{center}
\end{field}
\end{note}
\end{document}
\documentclass{article}
\usepackage{bm}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{mdwlist}
\usepackage[colorlinks=true]{hyperref}
\usepackage{geometry}
\geometry{margin=1in}
\geometry{headheight=2in}
\geometry{top=2in}
\usepackage{palatino}
%\renewcommand{\rmdefault}{palatino}
\usepackage{fancyhdr}
\usepackage{subcaption}
%\pagestyle{fancy}
\rhead{}
\lhead{}
\chead{%
{\vbox{%
\vspace{2mm}
\large
Statement of Purpose \hfill
\\
Seoul National University
\\[4mm]
%Homework \#(\textbf{1})\\
\textbf{ GAO ZHIYUAN (2017-81842)}
}
}
}
\usepackage{paralist}
\usepackage{todonotes}
\setlength{\marginparwidth}{2.15cm}
\usepackage{tikz}
\usetikzlibrary{positioning,shapes,backgrounds}
\begin{document}
\pagestyle{fancy}
%!TEX root = hw1.tex
\section{ \textbf{Personal Background}}
\section{ \textbf{Study Plan}}
\end{document}
\documentclass[a4paper,10pt]{article}
%\documentclass[a4paper,10pt]{scrartcl}
\usepackage{hyperref}
% \usepackage{caption}
\hypersetup{pdfborder={0 0 0}}
\usepackage[all]{hypcap}
\usepackage{default}
\usepackage{graphicx}
\usepackage{array}
\usepackage{amsmath}
\usepackage{listings}
\lstset{language=Python,% general command to set parameter(s)
basicstyle =\small\ttfamily, % print whole listing small
showstringspaces=false,
commentstyle=\ttfamily}
\title{Perceptron Exercises}
\author{Dr. Olivia Guest}
\date{\today}
\begin{document}
\maketitle
\section{Running a perceptron on paper}
\begin{figure}[hb]
\centering
\includegraphics{../slides/fig/perceptron_empty.pdf}
\caption{A perceptron that you can run on paper.}
\label{fig:perceptron}
\end{figure}
Using the activation propagation rule in \autoref{eq:propagate}, as presented and applied in the handouts, calculate the output of the perceptron in \autoref{fig:perceptron}:
\begin{equation}
\label{eq:propagate}
y = f\left( \sum_{j=1}^N x_j \times w_{ji} \right)
\end{equation}
Firstly, because this is just a practice run to help you understand how activations propagate, randomly choose some values for the weights and inputs (simple small numbers will make the arithmetic easier). Then, using the three inputs and the three connection weights you just picked, calculate the value of the single output unit. If you wish to calculate the so-called post-synaptic activation for the output unit, you can use an activation function. A simple version of this function, denoted by $f$, is to change the output unit's value to $1.0$ if the total of all inputs to it is above $0$, otherwise set it to $0$. In other words:
\begin{equation}
\label{eq:f}
y = \left\{
\begin{array}{l l}
1 & \quad \text{if $ \sum_{j=1}^N x_j \times w_{ji} > 0$ }\\
0 & \quad \text{otherwise}\\
\end{array} \right.
\end{equation}
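If you would like to check your paper-and-pencil calculation afterwards, the same arithmetic takes only a few lines of Python; the numbers below are arbitrary examples, so substitute the values you picked:
\begin{lstlisting}
x = [1.0, 0.0, 1.0]    # three example inputs
w = [0.5, -0.2, 0.3]   # three example connection weights

total = 0.0
for j in range(3):
    total += x[j] * w[j]   # 1*0.5 + 0*(-0.2) + 1*0.3 = 0.8

y = 1.0 if total > 0 else 0.0   # apply f: 0.8 > 0, so y = 1.0
print(total, y)
\end{lstlisting}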
What does $f$ help the network do?
\section{Playing around with the network}
Now you will have a go at playing around with, but hopefully not breaking, the network depicted in \autoref{fig:perceptron_maths}. However, don't worry if you do break it: there is an infinite supply.
\begin{figure}[hb]
\centering
\includegraphics{../slides/fig/perceptron_maths.pdf}
\caption{A perceptron}
\label{fig:perceptron_maths}
\end{figure}
Have a look at the code that makes it learn. Open the file called \texttt{pyceptron.py} and go to the part that says \texttt{def Train(self):}. This function, as I probably already explained, does a bunch of things that essentially implement the two basic equations. The first equation is that for calculating the activation of the output unit(s):
\begin{equation}
y_j = f \left( \sum_{i=1}^N w_i \times x_i \right)
\end{equation}
where $y_j$ is the (output) unit whose state you want to calculate, $N$ is the number of units on the previous layer, $w_{i}$ is the weight on the connection between $i$ and $j$, and $f$ is a function that the unit applies. In the code you can write that in many ways, just like you can say the same or similar things in natural language in a number of different ways. I decided on this:
\begin{lstlisting}
for i in range(N+1):
    y += x[i] * w[i]
y = f(y)
\end{lstlisting}
and hopefully you can guess what each variable is based on the equation. Find this snippet of code within the \texttt{Train} function. Using your own judgement, add some comments to explain what is going on, for example:
\begin{lstlisting}
for i in range(N+1):
    # loop through all the indices of input units
    y += x[i] * w[i]
    # sum the product of each input unit x[i]
    # and weight w[i] and save it in output
    # unit y
y = f(y)
# calculate the post-synaptic value for y
\end{lstlisting}
Ask for help if something is not making sense straight away.
\\ \
Can you find where \texttt{f} is defined?
\\ \
The second, and equally important equation, you need is the one that trains the weights:
\begin{equation}
\Delta w_i = \eta ( d_j - y_j ) \times x_{i}
\end{equation}
where $d$ is what you want $y$ to be given $x$, and $\eta$ is the learning rate. In other words, $d$ is the target, $y$ is as before the output unit (or units), and $x$ are the input units. As before, there are a number of different ways to say this in Python. Hopefully, my choice is a clear translation:
\begin{lstlisting}
error = d[p][0] - y
for i in range(N+1):
    w[i] += h * error * x[i]
\end{lstlisting}
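To see how the two equations fit together, here is a minimal stand-alone sketch of a full training loop. The variable names mirror the snippets above but are chosen for illustration; \texttt{pyceptron.py} may differ in detail, and the two training patterns and their targets are arbitrary:
\begin{lstlisting}
def f(a):
    return 1.0 if a > 0 else 0.0

x_patterns = [[0.2, 0.7], [0.8, 0.3]]   # two arbitrary input patterns
d = [[1.0], [0.0]]                      # their (arbitrary) targets
N = 2                                   # number of input units
h = 0.1                                 # learning rate (eta)
w = [0.0] * (N + 1)                     # weights, incl. one for a bias input

for iteration in range(100):
    for p in range(len(x_patterns)):
        x = x_patterns[p] + [1.0]       # append a constant bias input
        y = 0.0
        for i in range(N + 1):
            y += x[i] * w[i]
        y = f(y)                        # forward pass (first equation)
        error = d[p][0] - y
        for i in range(N + 1):
            w[i] += h * error * x[i]    # weight update (second equation)

print(w)
\end{lstlisting}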
Now that you have a slightly clearer idea (if not --- ask me or a demonstrator for clarification) of how the network works, have a go changing things to see what happens. Before you change anything though, try running the network to see what happens with the current settings.
\begin{enumerate}
\item Can you figure out what it is learning based on the output it prints to screen?
\item After how many iterations does it learn (i.e., have no error)?
\end{enumerate}
Now try the following:
\begin{enumerate}
\item Change the number of training iterations as set in \texttt{Main()}. What happens?
\item Change the value of the learning rate as set in \texttt{Main()}. What happens?
\item What does \texttt{N.Run()} do? Try commenting it back in to see.
\item \texttt{N.Run()} calls the function \texttt{def Run(self):} (below \texttt{Train}) --- can you tell now what it might do?
\item Can you spot similarities (and differences) between \texttt{Run} and \texttt{Train}? Imagine if it was called something less informative than \texttt{Run}, would you still be able to reverse-engineer it? Add comments to help you if you need to return to it.
\item Change the \texttt{graphs} parameter in the initialisation of the network \texttt{N} to \texttt{True}. What happens?
\item Is there something with respect to the actual running or training of the network that you do not understand? Try to use \texttt{print} to see what is going on in such a case.
\end{enumerate}
With respect to the graphs, they represent what the network knows about the inputs. Inputs that are classed as 1.0 (their target is 1.0) are represented with one colour and inputs in the other class are coloured with another. The line that moves around during training is a function of the weights and shows how the network is separating the data into two groups (i.e., $y$ is either 1.0 or 0.0). In other words, the graph depicts the training set as points in a two-dimensional space and the line depicts the weights. If the training set can be linearly separated, i.e., if the line can dissect the input patterns into two classes that reflect the targets, then the network has represented the training set correctly.
\ \\
\ \\
What do the x and y axes represent in the graph? Compare the points' coordinates with the values of the input patterns.
\ \\
\ \\
Based on these graphs can you tell when the network has learned?
\ \\
\ \\
Is the concept of linear separation useful to keep? If you think so, keep the \texttt{graphs} set to \texttt{True} for the next section of coding.
\section{Teaching the network logic}
To explore some of the very basic uses of neural networks, you will first attempt to teach the perceptron logical operators. Logical operators were touched on in the first part of the workshop, when we talked about \textbf{not}, \textbf{and}, \textbf{or}, etc., and applied them to truth values (i.e., true and false).
Even though logical operators might seem very abstract, saying things like ``true \textbf{and} true is true'' is just a formalised way of saying much more common expressions such as ``I agree \textbf{and} you agree, therefore we both agree''. Natural language is full of such logical operations, for example when we ask questions, e.g., ``do you want sugar \textbf{or} milk in your tea?'' is equivalent to ``sugar \textbf{or} milk'' in formal logic (recall this allows for both as well as either). Such formalisation is useful because it is an easy way to avoid ambiguity, thus helping us gain a clear understanding of what a network is telling us.
\subsection{Not}
The first logical operator you will teach your perceptron is the \textbf{not} operation. This requires one input unit and one output unit because negation (a synonym for not) is applied to one variable (or in our case input unit) at a time. So when you give the network a 0 you want it to return a 1, and when you give the network a 1 you need a 0 on the output. That means our training patterns are 0 and 1, and our targets are 1 and 0, in that order. \autoref{tbl:not} represents our patterns and their required targets.
%modify this section to say where in the code you need to edit stuff
\begin{table}[htb]
\centering
\begin{tabular}[t]{cc}
Input & Output\\ \hline
0 & 1\\
1 & 0
\end{tabular} \caption{Truth table for logical \textbf{not}.}
\label{tbl:not}
\end{table}
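One possible encoding of \autoref{tbl:not} in code is shown below; the variable names are placeholders, so put the values wherever \texttt{pyceptron.py} defines its patterns and targets:
\begin{lstlisting}
patterns = [[0.0], [1.0]]   # the single input: 0 and 1
targets  = [[1.0], [0.0]]   # not 0 = 1, not 1 = 0
\end{lstlisting}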
Once you have made the required changes, run the network to see what happens and how long it takes to train.
\ \\ Can the network learn \textbf{not}? Yes / No
\ \\ Is \textbf{not} linearly separable? Yes / No
\subsection{And}
Now let's teach it something a little more complex. The most basic form of logical \textbf{and} is the one with two inputs (e.g., false \textbf{and} true is false) and it always has one output. This means that the network needs to learn the input-output mapping given in \autoref{tbl:and}. By looking at \autoref{tbl:and}, you might notice that now there are three units: two input units and the same number of output units as before. So you need to modify the network to have two input units, leaving the single output unit untouched. Thankfully, the code can figure out the number of input and output units required based on the patterns and targets you give it. This is because when I wrote the code I made no assumptions about the number of input units. I did however make a strong assumption that there is only a single output unit (i.e., the targets are always one value long), to keep the code a little simpler.
% This is something I wrote myself, if you want to see how that is done feel free to play around with the , but do not get bogged down with understanding this %modify this section to say where in the code you need to edit stuff
Once the patterns and targets are set, run the network like before and see what happens.
\ \\ Can the network learn \textbf{and}? Yes / No
\ \\ Is \textbf{and} linearly separable? Yes / No
\begin{table}[ht]
\centering
\begin{tabular}[t]{ccc}
Input 1 & Input 2 & Output\\ \hline
0 & 0 & 0\\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1 \\
\end{tabular} \caption{Truth table for logical \textbf{and}.}
\label{tbl:and}
\end{table}
\subsection{Or}
Now let's do the same for \textbf{or}. Logical \textbf{or} returns a 1, or true, if either input is on. In other words, it only returns 0 when neither is 1, as shown in \autoref{tbl:or}. Remember that at any point you can check what any logical operator does directly in the Python shell: e.g., if you type \texttt{False or True}, Python will say \texttt{True} (corresponding to the 3rd line of \autoref{tbl:or}).
\ \\ Can the network learn \textbf{or}? Yes / No
\ \\ Is \textbf{or} linearly separable? Yes / No
\begin{table}[ht]
\centering
\begin{tabular}[t]{ccc}
Input 1 & Input 2 & Output\\ \hline
0 & 0 & 0\\
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 1 \\
\end{tabular} \caption{Truth table for logical \textbf{or}.}
\label{tbl:or}
\end{table}
\subsection{Could the network buy me fruit?}
Instead of only learning the outcomes of logical operators, the network can try to classify some fruit, based on two of their features (colour and taste), into ones I do and ones I do not want to eat. Can the network learn which of the fruit in \autoref{tbl:fruit} I like?
\begin{table}[ht]
\centering
\begin{tabular}[t]{cccc}
Name & Colour & Taste & Does Olivia like it? \\ \hline
& Yellow-Red & Sour-Sweet & Yes-No \\ \hline
Loquat & 0.1 & 0.5 & 1.0 \\
Lemon & 0.0 & 0.0 & 0.0 \\
Apple & 1.0 & 0.8 & 0.0 \\
Strawberry & 1.0 & 0.9 & 1.0 \\
\end{tabular} \caption{Some fruit for the network to categorise.}
\label{tbl:fruit}
\end{table}
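The only new ingredient compared to the logical operators is that the inputs are now real-valued. An encoding of \autoref{tbl:fruit}, again with placeholder variable names, could look like this:
\begin{lstlisting}
#            colour, taste    loquat      lemon       apple       strawberry
patterns = [[0.1, 0.5], [0.0, 0.0], [1.0, 0.8], [1.0, 0.9]]
targets  = [[1.0], [0.0], [0.0], [1.0]]   # 1.0 = Olivia likes it
\end{lstlisting}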
\ \\ Can the network classify the fruit based on my personal preferences? Yes / No
\ \\ Is it an easy classification for the network to learn? Yes / No
\ \\ Does changing the learning rate help? Yes / No
\ \\ Does training for longer help? Yes / No
\subsection{Xor}
Things start to become interesting when you teach the network \textbf{xor}, also known as exclusive or. Like its namesake \textbf{or}, \textbf{xor} returns true when either one or the other input is true, \emph{but it does not} return true when both are true (compare \autoref{tbl:xor} with \autoref{tbl:or}).
In Python one way of specifying \textbf{xor} is by using \texttt{(a and not b) or (b and not a)}. You do not need to know this to program the network, because the network figures out how to map inputs onto outputs itself, or at least tries to...
\begin{table}[ht]
\centering
\begin{tabular}[t]{ccc}
Input 1 & Input 2 & Output\\ \hline
0 & 0 & 0\\
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 0 \\
\end{tabular} \caption{Truth table for logical \textbf{xor}.}
\label{tbl:xor}
\end{table}
\ \\ Can the network learn \textbf{xor}? Yes / No
\ \\ Is it an easy classification for the network to learn? Yes / No
\ \\ Does changing the learning rate help? Yes / No
\ \\ Does training for longer help? Yes / No
\ \\ Describe what is going on here.
\end{document}
\chapter{A Brief Note on Feature Engineering \label{chapter:feateng}}
All machine learning algorithms and statistical models depend on the concept of a \textbf{feature}. A feature is some aspect of a dataset that, the model designer believes, represents the data in a way that is relevant to the problem he/she is trying to solve.
Before any algorithm can be applied, therefore, it is necessary to decide how to represent the data: which features to include and how to extract them from the raw data. This task is called \textbf{feature engineering}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Study Design vs. Feature Engineering}
We have seen a large number of features in Chapters~\ref{chapter:overview}--\ref{chapter:logreg}, but we never stopped to consider them. That's because, in many datasets, the features are chosen at the \textbf{study design} stage. The analyst (statistician, data scientist, etc.) has no say in what the features look like or which features are included.
This paradigm is changing as data science increasingly focuses on large, observational datasets, like those from electronic medical records (EMRs). In these types of studies, the raw data were not collected for the study itself, but to fulfill some other purpose. The analyst must choose how to build features from the raw data and use them in models.
\vspace{4mm}
\begin{question}{}
The examples in Chapters~\ref{chapter:classification} and~\ref{chapter:regression} used the same two features. What were these features? How were they represented? What are some alternatives to this choice of features?
\end{question}
\begin{question}{}
In Chapter~\ref{chapter:decisiontrees}, we looked at the Wisconsin Breast Cancer Dataset, which includes $30$ different imaging features relevant to predicting whether a tumor is benign or malignant. How were these features represented? What are some alternatives to this choice?
\end{question}
\begin{question}{}
In Chapters~\ref{chapter:linreg} and~\ref{chapter:logreg}, we looked at two datasets that were collected for the purposes of answering particular questions. Do you agree with these study designers' choice of features? What other features could potentially have been relevant to answering each research question?
\end{question}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Turning Data into Numbers}
A model is just a tool for learning relationships among sets of numbers. The first step in any data science problem, therefore, is deciding how to represent what is often a large, complex, noisy dataset as a set of numbers.
\subsection{Numbers}
Sometimes you get lucky and the feature you need is already a number, such as a vital sign measurement, lab value, or other biomarker. In that case, more often than not, the feature enters into the model as its raw value.
In some cases, you may also choose to apply a \textbf{transformation} to the feature before it enters the model. A transformation is simply the application of a deterministic mathematical function that changes the shape of the distribution of the feature. Transformations are often used to improve the interpretability of a model and/or to ensure that the model fulfills the assumptions of the statistical inference method(s) being used (e.g., a hypothesis test).
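To make this concrete, here is a small sketch of applying a few common transformations to a skewed feature. NumPy is an assumption here (the notes are not tied to any language), and these particular transformations are typical examples rather than necessarily the five shown in the questions that follow:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 0.8, size=100)  # a right-skewed feature

transforms = {
    "log":        np.log(x),                   # pulls in the right tail
    "sqrt":       np.sqrt(x),                  # milder than the log
    "square":     x ** 2,                      # exaggerates the right tail
    "reciprocal": 1 / x,                       # reverses the ordering
    "z-score":    (x - x.mean()) / x.std(),    # centering and scaling
}
for name, t in transforms.items():
    print(f"{name:10s} mean={t.mean():8.3f}  sd={t.std():8.3f}")
\end{verbatim}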
\vspace{4mm}
\begin{question}{}
Here are $100$ random samples from a normal distribution with $\mu = 3.0$ and $\sigma = 0.5$ and five different transformations of those samples. What do you notice about the shape and position of the data under the different transformations?
\begin{center}
\includegraphics[width=0.4\textwidth]{img/normal-transform-0.png}
\includegraphics[width=0.4\textwidth]{img/normal-transform-1.png}
\includegraphics[width=0.4\textwidth]{img/normal-transform-2.png}
\includegraphics[width=0.4\textwidth]{img/normal-transform-3.png}
\includegraphics[width=0.4\textwidth]{img/normal-transform-4.png}
\includegraphics[width=0.4\textwidth]{img/normal-transform-5.png}
\end{center}
\end{question}
\begin{question}{}
Here are $100$ random samples from an exponential (see Section~\ref{sect:exponential}) distribution with $\lambda = 0.8$ and the same five transformations of those samples. What do you notice about the shape and position of the data under the different transformations?
\begin{center}
\includegraphics[width=0.4\textwidth]{img/exp-transform-0.png}
\includegraphics[width=0.4\textwidth]{img/exp-transform-1.png}
\includegraphics[width=0.4\textwidth]{img/exp-transform-2.png}
\includegraphics[width=0.4\textwidth]{img/exp-transform-3.png}
\includegraphics[width=0.4\textwidth]{img/exp-transform-4.png}
\includegraphics[width=0.4\textwidth]{img/exp-transform-5.png}
\end{center}
\end{question}
Economics, the social sciences, and related disciplines, which are heavily dependent on the use of regression models and hypothesis tests, rely extensively on transformations. In my experience, machine learning folks spend almost no time on them because their primary concern is predictive accuracy, not model interpretation. Machine learning practitioners, however, very frequently \textbf{scale and center} their predictors (see footnote in Section~\ref{section:sehyp}), which is another type of transformation. We will get into more detail on transformations as we continue to learn about regression models.
\subsection{Binary Variables}
For features which are yes/no (e.g., presence/absence of a disease, symptom, physical attribute, etc.) the most common coding scheme is to use ``1'' for ``yes'' and ``0'' for ``no''. This is useful for interpretation, particularly in regression models. In a linear regression model using this coding scheme, for example, the model coefficient will be the shift in the mean of the normal distribution representing the outcome, $y$, when the feature is present.
\subsection{Categories}
Categorical features with $k>2$ categories are generally represented using \textbf{indicator variables}. If a feature, $x$, has $k$ \textbf{levels}, we can use $k-1$ yes/no indicator variables to represent that feature. For example, assume $k = 3$ and the possible levels of our feature, $x$, are $A$, $B$, and $C$. We set:
\begin{align*}
x_1 &= \left\{ \begin{array}{cl} 1 & \text{ if }x = A \\
0 & \text{ otherwise} \end{array} \right. \\[2mm]
x_2 &= \left\{ \begin{array}{cl} 1 & \text{ if }x = B \\
0 & \text{ otherwise} \end{array} \right.
\end{align*}
If the value of $x$ is $A$, $x_1 = 1$ and $x_2 = 0$. If it's $B$, $x_1 = 0$ and $x_2 = 1$. The value $C$ is called our \textbf{reference category} and has $x_1 = 0$ and $x_2 = 0$. In this way, information about all three categories is captured using only two variables. Creating indicator variables is just another way of transforming the value of a feature.
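As an illustration, here is one way to build such indicator variables in code. Using pandas is an assumption on my part; any statistics package that builds design matrices does the equivalent automatically:
\begin{verbatim}
import pandas as pd

df = pd.DataFrame({"x": ["A", "B", "C", "A", "C"]})

# Manual coding with C as the reference category
df["x1"] = (df["x"] == "A").astype(int)
df["x2"] = (df["x"] == "B").astype(int)

# The same thing via dummy coding; dropping x_C makes C the reference
auto = pd.get_dummies(df["x"], prefix="x").drop(columns=["x_C"])
print(df)
print(auto)
\end{verbatim}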
\vspace{4mm}
\begin{question}{}
In Section~\ref{sect:lowbwt}, we saw an example of a model that predicts whether or not a mother will give birth to a low birthweight baby. One of the factors considered in that model is the mother's race, which was coded (crudely and probably inaccurately, I might add) as \texttt{1 = white, 2 = Black, 3 = other}. You can tell how the feature \texttt{RACE} was coded by examining the model output. How many indicator variables were used? Which level of the feature was used as the reference category?
\end{question}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Contents: Things you need to know
% $Id: things.tex 536 2015-06-26 06:41:33Z oetiker $
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Things You Need to Know}
\begin{intro}
The first part of this chapter presents a short
overview of the philosophy and history of \LaTeXe. The second part
focuses on the basic structures of a \LaTeX{} document.
After reading this chapter, you should have a rough knowledge
of how \LaTeX{} works, which you will need to understand the rest
of this book.
\end{intro}
\section{The Name of the Game}
\subsection{\TeX}
\TeX{} is a computer program created by \index{Knuth, Donald E.}Donald
E. Knuth \cite{texbook}. It is aimed at typesetting text and
mathematical formulae. Knuth started writing the \TeX{} typesetting
engine in 1977 to explore the potential of the digital printing
equipment that was beginning to infiltrate the publishing industry at
that time, especially in the hope that he could reverse the trend of
deteriorating typographical quality that he saw affecting his own
books and articles. \TeX{} as we use it today was released in 1982,
with some slight enhancements added in 1989 to better support 8-bit
characters and multiple languages. \TeX{} is renowned for being
extremely stable, for running on many different kinds of computers,
and for being virtually bug free. The version number of \TeX{} is
converging to $\pi$ and is now at $3.141592653$.
\TeX{} is pronounced ``Tech,'' with a ``ch'' as in the German word
``Ach''\footnote{In German there are actually two pronunciations for ``ch''
and one might assume that the soft ``ch'' sound from ``Pech'' would be the
more appropriate one. Asked about this, Knuth wrote in the German Wikipedia:
\emph{I do not get angry when people pronounce \TeX{} in their favorite way
\ldots{} and in Germany many use a soft ch because the X follows the vowel
e, not the harder ch that follows the vowel a. In Russia, `tex' is a very
common word, pronounced `tyekh'. But I believe the most proper pronunciation
is heard in Greece, where you have the harsher ch of ach and Loch.}}
or in the Scottish ``Loch.'' The ``ch'' originates from the Greek
alphabet where X is the letter ``ch'' or ``chi''. \TeX{} is also the first syllable
of the Greek word {\greektext teqnik'h} (technique). In an ASCII environment, \TeX{}
becomes \texttt{TeX}.
\subsection{\LaTeX}
\LaTeX{} enables authors to typeset and print their work at the highest
typographical quality, using a predefined, professional layout. \LaTeX{} was
originally written by \index{Lamport, Leslie}Leslie Lamport~\cite{manual}.
It uses the \TeX{} formatter as its typesetting engine. These days \LaTeX{}
is maintained by \index{Mittelbach, Frank}Frank Mittelbach.
%In 1994 the \LaTeX{} package was updated by the \index{LaTeX3@\LaTeX
% 3}\LaTeX 3 team, led by \index{Mittelbach, Frank}Frank Mittelbach,
%to include some long-requested improvements, and to re\-unify all the
%patched versions which had cropped up since the release of
%\index{LaTeX 2.09@\LaTeX{} 2.09}\LaTeX{} 2.09 some years earlier. To
%distinguish the new version from the old, it is called \index{LaTeX
%2e@\LaTeXe}\LaTeXe. This documentation deals with \LaTeXe. These days you
%might be hard pressed to find the venerable \LaTeX{} 2.09 installed
%anywhere.
\LaTeX{} is pronounced ``Lay-tech'' or ``Lah-tech.'' If you refer to
\LaTeX{} in an ASCII environment, you type \texttt{LaTeX}.
\LaTeXe{} is pronounced ``Lay-tech two e'' and typed \texttt{LaTeX2e}.
%Figure~\ref{components} above % on page \pageref{components}
%shows how \TeX{} and \LaTeXe{} work together. This figure is taken from
%\texttt{wots.tex} by Kees van der Laan.
%\begin{figure}[btp]
%\begin{lined}{0.8\textwidth}
%\begin{center}
%\input{kees.fig}
%\end{center}
%\end{lined}
%\caption{Components of a \TeX{} System.} \label{components}
%\end{figure}
\section{Basics}
\subsection{Author, Book Designer, and Typesetter}
To publish something, authors give their typed manuscript to a
publishing company. One of their book designers then
decides the layout of the document (column width, fonts, space before
and after headings,~\ldots). The book designer writes his instructions
into the manuscript and then gives it to a typesetter, who typesets the
book according to these instructions.
A human book designer tries to find out what the author had in mind
while writing the manuscript. He decides on chapter headings,
citations, examples, formulae, etc.\ based on his professional
knowledge and from the contents of the manuscript.
In a \LaTeX{} environment, \LaTeX{} takes the role of the book
designer and uses \TeX{} as its typesetter. But \LaTeX{} is ``only'' a
program and therefore needs more guidance. The author has to provide
additional information to describe the logical structure of his
work. This information is written into the text as ``\LaTeX{}
commands.''
This is quite different from the \wi{WYSIWYG}\footnote{What you see is
what you get.} approach that most modern word processors, such as
\emph{MS Word} or \emph{LibreOffice}, take. With these
applications, authors specify the document layout interactively while
typing text into the computer. They can see on the
screen how the final work will look when it is printed.
When using \LaTeX{} it is not normally possible to see the final output
while typing the text, but the final output can be previewed on the
screen after processing the file with \LaTeX. Then corrections can be
made before actually sending the document to the printer.
\subsection{Layout Design}
Typographical design is a craft. Unskilled authors often commit
serious formatting errors by assuming that book design is mostly a
question of aesthetics---``If a document looks good artistically,
it is well designed.'' But as a document has to be read and not hung
up in a picture gallery, its readability and understandability are
much more important than its beautiful look.
Examples:
\begin{itemize}
\item The font size and the numbering of headings have to be chosen to make
the structure of chapters and sections clear to the reader.
\item The line length has to be short enough not to strain
the eyes of the reader, while long enough to fill the page
beautifully.
\end{itemize}
With \wi{WYSIWYG} systems, authors often generate aesthetically
pleasing documents with very little or inconsistent structure.
\LaTeX{} prevents such formatting errors by forcing the author to
declare the \emph{logical} structure of his document. \LaTeX{} then
chooses the most suitable layout.
\subsection{Advantages and Disadvantages}
When people from the \wi{WYSIWYG} world meet people who use \LaTeX{},
they often discuss ``the \wi{advantages of \LaTeX{}} over a normal
word processor'' or the opposite. The best thing to do when such
a discussion starts is to keep a low profile, since such discussions
often get out of hand. But sometimes there is no escaping \ldots
\medskip\noindent So here is some ammunition. The main advantages
of \LaTeX{} over normal word processors are the following:
\begin{itemize}
\item Professionally crafted layouts are available, which make a
document really look as if ``printed.''
\item The typesetting of mathematical formulae is supported in a
convenient way.
\item Users only need to learn a few easy-to-understand commands
that specify the logical structure of a document. They almost never
need to tinker with the actual layout of the document.
\item Even complex structures such as footnotes, references, table of
contents, and bibliographies can be generated easily.
\item Free add-on packages exist for many typographical tasks not directly supported by basic
\LaTeX. For example, packages are
available to include \PSi{} graphics or to typeset
bibliographies conforming to exact standards. Many of these add-on
packages are described in \companion.
\item \LaTeX{} encourages authors to write well-structured texts,
because this is how \LaTeX{} works---by specifying structure.
\item \TeX, the formatting engine of \LaTeXe, is highly portable and free.
Therefore the system runs on almost any hardware platform
available.
%
% Add examples ...
%
\end{itemize}
\medskip
\noindent\LaTeX{} also has some disadvantages, and I guess it's a bit
difficult for me to find any sensible ones, though I am sure other people
can tell you hundreds \texttt{;-)}
\begin{itemize}
\item \LaTeX{} does not work well for people who have sold their
souls \ldots
\item Although some parameters can be adjusted within a predefined
document layout, the design of a whole new layout is difficult and
takes a lot of time.\footnote{Rumour says that this is one of the
key elements that will be addressed in the upcoming \LaTeX 3
system.}\index{LaTeX3@\LaTeX 3}
\item It is very hard to write unstructured and disorganized documents.
\item Your hamster might, despite some encouraging first steps, never be
able to fully grasp the concept of Logical Markup.
\end{itemize}
\section{\LaTeX{} Input Files}
The input for \LaTeX{} is a plain text file. On Unix/Linux, text files are
pretty common. On Windows, one would use Notepad to create a text file. The file
contains the text of the document, as well as the commands that tell
\LaTeX{} how to typeset the text. If you are working with a \LaTeX{} IDE, it will
include an editor for creating \LaTeX{} input files in text format.
\subsection{Spaces}
``Whitespace'' characters, such as blank or tab, are
treated uniformly as ``\wi{space}'' by \LaTeX{}. \emph{Several
consecutive} \wi{whitespace} characters are treated as \emph{one}
``space.'' Whitespace at the start of a line is generally ignored, and
a single line break is treated as ``whitespace.''
\index{whitespace!at the start of a line}
An empty line between two lines of text defines the end of a
paragraph. \emph{Several} empty lines are treated the same as
\emph{one} empty line. The text below is an example. On the left hand
side is the text from the input file, and on the right hand side is the
formatted output.
\begin{example}
It does not matter whether you
enter one or several spaces
after a word.
An empty line starts a new
paragraph.
\end{example}
\subsection{Special Characters}
The following symbols are \wi{reserved characters} that either have a
special meaning under \LaTeX{} or are not available in all the fonts.
If you enter them directly in your text, they will normally not print,
but rather coerce \LaTeX{} to do things you did not intend.
\begin{code}
\verb.# $ % ^ & _ { } ~ \ . %$
\end{code}
As you will see, these characters can be used in your documents all
the same by using a prefix backslash:
\begin{example}
\# \$ \% \^{} \& \_ \{ \} \~{}
\textbackslash
\end{example}
The other symbols and many more can be printed with special commands
in mathematical formulae or as accents. The backslash character
\textbackslash{} can \emph{not} be entered by adding another backslash
in front of it (\verb|\\|); this sequence is used for
line breaking. Use the \ci{textbackslash} command instead.
\subsection{\LaTeX{} Commands}
\LaTeX{} \wi{commands} are case sensitive, and take one of the following
formats:
\begin{itemize}
\item They start with a \wi{backslash} \verb|\| and then have a name
consisting of letters only. Command names are terminated by a
space, a number or any other `non-letter.'
\item They consist of a backslash and exactly one non-letter.
\item Many commands exist in a `starred variant' where a star is appended
to the command name.
\end{itemize}
%
% \\* doesn't comply !
%
%
% Can \3 be a valid command ? (jacoboni)
%
\label{whitespace}
\hyphenation{white-spaces white-space}
\LaTeX{} ignores whitespace after commands. If you want to get a
\index{whitespace!after commands}space after a command, you have to
put either an empty parameter \verb|{}| followed by a blank, or a special
spacing command, after the command name. The empty parameter \verb|{}| stops
\LaTeX{} from eating up all the whitespace after the command name.
\begin{example}
New \TeX users may miss whitespaces
after a command. % renders wrong
Experienced \TeX{} users are
\TeX perts, and know how to use
whitespaces. % renders correct
\end{example}
Some commands require a \wi{parameter}, which has to be given between
\wi{curly braces} \verb|{ }| after the command name. Some commands take
\wi{optional parameters}, which are inserted after the command name in
\wi{square brackets}~\verb|[ ]|.
\begin{code}
\verb|\|\textit{command}\verb|[|\textit{optional parameter}\verb|]{|\textit{parameter}\verb|}|
\end{code}
The next examples use some \LaTeX{}
commands. Don't worry about them; they will be explained later.
\begin{example}
You can \textsl{lean} on me!
\end{example}
\begin{example}
Please, start a new line
right here!\newline
Thank you!
\end{example}
\subsection{Comments}
\index{comments}
When \LaTeX{} encounters a \verb|%| character while processing an input file,
it ignores the rest of the present line, the line break, and all
whitespace at the beginning of the next line.
This can be used to write notes into the input file, which will not show up
in the printed version.
\begin{example}
This is an % stupid
% Better: instructive <----
example: Supercal%
ifragilist%
icexpialidocious
\end{example}
The \texttt{\%} character can also be used to split long input lines where no
whitespace or line breaks are allowed.
For longer comments you could use the \ei{comment} environment
provided by the \pai{verbatim} package. Add the
line \verb|\usepackage{verbatim}| to the preamble of your document as
explained below to use this command.
\begin{example}
This is another
\begin{comment}
rather stupid,
but helpful
\end{comment}
example for embedding
comments in your document.
\end{example}
Note that this won't work inside complex environments, like math for example.
\section{Input File Structure}
\label{sec:structure}
When \LaTeXe{} processes an input file, it expects it to follow a
certain \wi{structure}. Thus every input file must start with the
command
\begin{code}
\verb|\documentclass{...}|
\end{code}
This specifies what sort of document you intend to write. After that,
add commands to influence the style of the whole
document, or load \wi{package}s that add new
features to the \LaTeX{} system. To load such a package you use the
command
\begin{code}
\verb|\usepackage{...}|
\end{code}
When all the setup work is done,\footnote{The area between \texttt{\bs
documentclass} and \texttt{\bs
begin$\mathtt{\{}$document$\mathtt{\}}$} is called the
\emph{\wi{preamble}}.} you start the body of the text with the
command
\begin{code}
\verb|\begin{document}|
\end{code}
Now you enter the text mixed with some useful \LaTeX{} commands. At
the end of the document you add the
\begin{code}
\verb|\end{document}|
\end{code}
command, which tells \LaTeX{} to call it a day. Anything that
follows this command will be ignored by \LaTeX.
Figure~\ref{mini} shows the contents of a minimal \LaTeXe{} file. A
slightly more complicated \wi{input file} is given in
Figure~\ref{document}.
\begin{figure}[!bp]
\begin{lined}{6cm}
\begin{verbatim}
\documentclass{article}
\begin{document}
Small is beautiful.
\end{document}
\end{verbatim}
\end{lined}
\caption{A Minimal \LaTeX{} File.} \label{mini}
\end{figure}
\begin{figure}[!bp]
\begin{lined}{10cm}
\begin{verbatim}
\documentclass[a4paper,11pt]{article}
% define the title
\author{H.~Partl}
\title{Minimalism}
\begin{document}
% generates the title
\maketitle
% insert the table of contents
\tableofcontents
\section{Some Interesting Words}
Well, and here begins my lovely article.
\section{Good Bye World}
\ldots{} and here it ends.
\end{document}
\end{verbatim}
\end{lined}
\caption[Example of a Realistic Journal Article.]{Example of a Realistic
Journal Article. Note that all the commands you see in this example will be
explained later in the introduction.} \label{document}
\end{figure}
\section{A Typical Command Line Session}
I bet you must be dying to try out the neat small \LaTeX{} input file
shown on page \pageref{mini}. Here is some help:
\LaTeX{} itself comes without a GUI or
fancy buttons to press. It is just a program that crunches away at your
input file. Some \LaTeX{} installations feature a graphical front-end where
there is a \LaTeX{} button to start compiling your input file. On other systems
there might be some typing involved, so here is how to coax \LaTeX{} into
compiling your input file on a text based system. Please note: this
description assumes that a working \LaTeX{} installation already sits on
your computer.\footnote{This is the case with most well groomed Unix
Systems, and \ldots{} Real Men use Unix, so \ldots{} \texttt{;-)}}
\begin{enumerate}
\item
Edit/Create your \LaTeX{} input file. This file must be plain ASCII
text. On Unix all the editors will create just that. On Windows you
might want to make sure that you save the file in ASCII or
\emph{Plain Text} format. When picking a name for your file, make
sure it bears the extension \eei{.tex}.
\item
Open a shell or cmd window, cd to the directory where your input file is located and run \LaTeX{} on your input file. If successful you will end up with a
\texttt{.dvi} file. It may be necessary to run \LaTeX{} several times to get
the table of contents and all internal references right. When your input
file has a bug \LaTeX{} will tell you about it and stop processing your
input file. Type \texttt{ctrl-D} to get back to the command line.
\begin{lscommand}
\verb+latex foo.tex+
\end{lscommand}
\item
Now you may view the DVI file. There are several ways to do that. Look at the file on screen with
\begin{lscommand}
\verb+xdvi foo.dvi &+
\end{lscommand}
This only works on Unix with X11. If you are on Windows you might want to try \texttt{yap} (yet another previewer).
Convert the dvi file to \PSi{} for printing or viewing with \wi{GhostScript}.
\begin{lscommand}
\verb+dvips -Pcmz foo.dvi -o foo.ps+
\end{lscommand}
If you are lucky your \LaTeX{} system even comes with the \texttt{dvipdf} tool, which allows
you to convert your \texttt{.dvi} files straight into pdf.
\begin{lscommand}
\verb+dvipdf foo.dvi+
\end{lscommand}
\end{enumerate}
\section{The Layout of the Document}
\subsection {Document Classes}\label{sec:documentclass}
The first information \LaTeX{} needs to know when processing an
input file is the type of document the author wants to create. This
is specified with the \ci{documentclass} command.
\begin{lscommand}
\ci{documentclass}\verb|[|\emph{options}\verb|]{|\emph{class}\verb|}|
\end{lscommand}
\noindent Here \emph{class} specifies the type of document to be created.
Table~\ref{documentclasses} lists the document classes explained in
this introduction. The \LaTeXe{} distribution provides additional
classes for other documents, including letters and slides. The
\emph{\wi{option}s} parameter customises the behaviour of the document
class. The options have to be separated by commas. The most common options for the standard document
classes are listed in
Table~\ref{options}.
\begin{table}[!bp]
\caption{Document Classes.} \label{documentclasses}
\begin{lined}{\textwidth}
\begin{description}
\item [\normalfont\texttt{article}] for articles in scientific journals, presentations,
short reports, program documentation, invitations, \ldots
\index{article class}
\item [\normalfont\texttt{proc}] a class for proceedings based on the article class.
\index{proc class}
\item [\normalfont\texttt{minimal}] is as small as it can get.
It only sets a page size and a base font. It is mainly used for debugging
purposes.
\index{minimal class}
\item [\normalfont\texttt{report}] for longer reports containing several chapters, small
books, PhD theses, \ldots \index{report class}
\item [\normalfont\texttt{book}] for real books \index{book class}
\item [\normalfont\texttt{slides}] for slides. The class uses big sans serif
letters. You might want to consider using the Beamer class instead.
\index{slides class}
\end{description}
\end{lined}
\end{table}
\begin{table}[!bp]
\caption{Document Class Options.} \label{options}
\begin{lined}{\textwidth}
\begin{flushleft}
\begin{description}
\item[\normalfont\texttt{10pt}, \texttt{11pt}, \texttt{12pt}] \quad Sets the size
of the main font in the document. If no option is specified,
\texttt{10pt} is assumed. \index{document font size}\index{base
font size}
\item[\normalfont\texttt{a4paper}, \texttt{letterpaper}, \ldots] \quad Defines
the paper size. The default size is \texttt{letterpaper}. Besides
that, \texttt{a5paper}, \texttt{b5paper}, \texttt{executivepaper},
and \texttt{legalpaper} can be specified. \index{legal paper}
\index{paper size}\index{A4 paper}\index{letter paper} \index{A5
paper}\index{B5 paper}\index{executive paper}
\item[\normalfont\texttt{fleqn}] \quad Typesets displayed formulae left-aligned
instead of centred.
\item[\normalfont\texttt{leqno}] \quad Places the numbering of formulae on the
left hand side instead of the right.
\item[\normalfont\texttt{titlepage}, \texttt{notitlepage}] \quad Specifies
whether a new page should be started after the \wi{document title}
or not. The \texttt{article} class does not start a new page by
default, while \texttt{report} and \texttt{book} do. \index{title}
\item[\normalfont\texttt{onecolumn}, \texttt{twocolumn}] \quad Instructs \LaTeX{} to typeset the
document in \wi{one column} or \wi{two column}s.
\item[\normalfont\texttt{twoside, oneside}] \quad Specifies whether double or
single sided output should be generated. The classes
\texttt{article} and \texttt{report} are \wi{single sided} and the
\texttt{book} class is \wi{double sided} by default. Note that this
option concerns the style of the document only. The option
\texttt{twoside} does \emph{not} tell the printer you use that it
should actually make a two-sided printout.
\item[\normalfont\texttt{landscape}] \quad Changes the layout of the document to print in landscape mode.
\item[\normalfont\texttt{openright, openany}] \quad Makes chapters begin either
only on right hand pages or on the next page available. This does
not work with the \texttt{article} class, as it does not know about
chapters. The \texttt{report} class by default starts chapters on
the next page available and the \texttt{book} class starts them on
right hand pages.
\end{description}
\end{flushleft}
\end{lined}
\end{table}
Example: An input file for a \LaTeX{} document could start with the
line
\begin{code}
\ci{documentclass}\verb|[11pt,twoside,a4paper]{article}|
\end{code}
which instructs \LaTeX{} to typeset the document as an \emph{article}
with a base font size of \emph{eleven points}, and to produce a
layout suitable for \emph{double sided} printing on \emph{A4 paper}.
\pagebreak[2]
\subsection{Packages}
\index{package} While writing your document, you will probably find
that there are some areas where basic \LaTeX{} cannot solve your
problem. If you want to include \wi{graphics}, \wi{coloured text} or
source code from a file into your document, you need to enhance the
capabilities of \LaTeX. Such enhancements are called packages.
Packages are activated with the
\begin{lscommand}
\ci{usepackage}\verb|[|\emph{options}\verb|]{|\emph{package}\verb|}|
\end{lscommand}
\noindent command, where \emph{package} is the name of the package and
\emph{options} is a list of keywords that trigger special features in
the package. The \ci{usepackage} command goes into the preamble of the
document. See section \ref{sec:structure} for details.
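For instance (an illustrative sketch; the packages chosen here are among those listed in Table~\ref{packages}), a preamble that loads two packages could look like this:
\begin{verbatim}
\documentclass[a4paper,11pt]{article}
% packages are loaded in the preamble
\usepackage{ifthen}            % if...then...else commands
\usepackage[latin1]{inputenc}  % input encoding, here ISO Latin-1
\begin{document}
...
\end{document}
\end{verbatim}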
Some packages come with the \LaTeXe{} base distribution
(See Table~\ref{packages}). Others are provided separately. You may
find more information on the packages installed at your site in your
\guide. The prime source for information about \LaTeX{} packages is \companion.
It contains descriptions of hundreds of packages, along with
information on how to write your own extensions to \LaTeXe.
Modern \TeX{} distributions come with a large number of packages
preinstalled. If you are working on a Unix system, use the command
\texttt{texdoc} for accessing package documentation.
\begin{table}[btp]
\caption{Some of the Packages Distributed with \LaTeX.} \label{packages}
\begin{lined}{\textwidth}
\begin{description}
\item[\normalfont\pai{doc}] Allows the documentation of \LaTeX{} programs.\\
Described in \texttt{doc.dtx}\footnote{This file should be installed
on your system, and you should be able to get a \texttt{dvi} file
by typing \texttt{latex doc.dtx} in any directory where you have
write permission. The same is true for all the
other files mentioned in this table.} and in \companion.
\item[\normalfont\pai{exscale}] Provides scaled versions of the
math extension font.\\
Described in \texttt{ltexscale.dtx}.
\item[\normalfont\pai{fontenc}] Specifies which \wi{font encoding}
\LaTeX{} should use.\\
Described in \texttt{ltoutenc.dtx}.
\item[\normalfont\pai{ifthen}] Provides commands of the form\\
`if\ldots then do\ldots otherwise do\ldots.'\\ Described in
\texttt{ifthen.dtx} and \companion.
\item[\normalfont\pai{latexsym}] To access the \LaTeX{} symbol
font, you should use the \texttt{latexsym} package. Described in
\texttt{latexsym.dtx} and in \companion.
\item[\normalfont\pai{makeidx}] Provides commands for producing
indexes. Described in section~\ref{sec:indexing} and in \companion.
\item[\normalfont\pai{syntonly}] Processes a document without
typesetting it.
\item[\normalfont\pai{inputenc}] Allows the specification of an
input encoding such as ASCII, ISO Latin-1, ISO Latin-2, 437/850 IBM
code pages, Apple Macintosh, Next, ANSI-Windows or user-defined one.
Described in \texttt{inputenc.dtx}.
\end{description}
\end{lined}
\end{table}
\subsection{Page Styles}
\LaTeX{} supports three predefined \wi{header}/\wi{footer}
combinations---so-called \wi{page style}s. The \emph{style} parameter
of the \index{page style!plain@\texttt{plain}}\index{plain@\texttt{plain}}
\index{page style!headings@\texttt{headings}}\index{headings@\texttt{headings}}
\index{page style!empty@\texttt{empty}}\index{empty@\texttt{empty}}
\begin{lscommand}
\ci{pagestyle}\verb|{|\emph{style}\verb|}|
\end{lscommand}
\noindent command defines which one to use.
Table~\ref{pagestyle}
lists the predefined page styles.
\begin{table}[!hbp]
\caption{The Predefined Page Styles of \LaTeX.} \label{pagestyle}
\begin{lined}{\textwidth}
\begin{description}
\item[\normalfont\texttt{plain}] prints the page numbers on the bottom
of the page, in the middle of the footer. This is the default page
style.
\item[\normalfont\texttt{headings}] prints the current chapter heading
and the page number in the header on each page, while the footer
remains empty. (This is the style used in this document.)
\item[\normalfont\texttt{empty}] sets both the header and the footer
to be empty.
\end{description}
\end{lined}
\end{table}
It is possible to change the page style of the current page
with the command
\begin{lscommand}
\ci{thispagestyle}\verb|{|\emph{style}\verb|}|
\end{lscommand}
A description of how to create your own
headers and footers can be found in \companion{} and in section~\ref{sec:fancy} on page~\pageref{sec:fancy}.
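For example (an illustrative snippet with no particular document in mind), to use the \texttt{headings} style throughout while suppressing the header and footer on the title page, one might write:
\begin{verbatim}
\documentclass{article}
\pagestyle{headings}       % running heading and page number at the top
\begin{document}
\maketitle
\thispagestyle{empty}      % no header or footer on the title page
...
\end{document}
\end{verbatim}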
%
% Pointer to the Fancy headings Package description !
%
\section{Files You Might Encounter}
When you work with \LaTeX{} you will soon find yourself in a maze of
files with various \wi{extension}s and probably no clue. The following
list explains the various \wi{file types} you might encounter when
working with \TeX{}. Please note that this table does not claim to be
a complete list of extensions, but if you find one missing that you
think is important, please drop me a line.
\begin{description}
\item[\eei{.tex}] \LaTeX{} or \TeX{} input file. Can be compiled with
\texttt{latex}.
\item[\eei{.sty}] \LaTeX{} Macro package. Load this
into your \LaTeX{} document using the \ci{usepackage} command.
\item[\eei{.dtx}] Documented \TeX{}. This is the main distribution
format for \LaTeX{} style files. If you process a .dtx file you get
documented macro code of the \LaTeX{} package contained in the .dtx
file.
\item[\eei{.ins}] The installer for the files contained in the
matching .dtx file. If you download a \LaTeX{} package from the net,
you will normally get a .dtx and a .ins file. Run \LaTeX{} on the
.ins file to unpack the .dtx file.
\item[\eei{.cls}] Class files define what your document looks
like. They are selected with the \ci{documentclass} command.
\item[\eei{.fd}] Font description file telling \LaTeX{} about new fonts.
\end{description}
The following files are generated when you run \LaTeX{} on your input
file:
\begin{description}
\item[\eei{.dvi}] Device Independent File. This is the main result of a \LaTeX{}
compile run. Look at its content with a DVI previewer
program or send it to a printer with \texttt{dvips} or a
similar application.
\item[\eei{.log}] Gives a detailed account of what happened during the
last compiler run.
\item[\eei{.toc}] Stores all your section headers. It gets read in for the
next compiler run and is used to produce the table of contents.
\item[\eei{.lof}] This is like .toc but for the list of figures.
\item[\eei{.lot}] And again the same for the list of tables.
\item[\eei{.aux}] Another file that transports information from one
compiler run to the next. Among other things, the .aux file is used
to store information associated with cross-references.
\item[\eei{.idx}] If your document contains an index, \LaTeX{} stores all
the words that go into the index in this file. Process this file with
\texttt{makeindex}. Refer to section \ref{sec:indexing} on
page \pageref{sec:indexing} for more information on indexing.
\item[\eei{.ind}] The processed .idx file, ready for inclusion into your
document on the next compile cycle.
\item[\eei{.ilg}] Logfile telling what \texttt{makeindex} did.
\end{description}
% Package Info pointer
%
%
%
% Add Info on page-numbering, ...
% \pagenumbering
\section{Big Projects}
When working on big documents, you might want to split the input file
into several parts. \LaTeX{} has two commands that help you to do
that.
\begin{lscommand}
\ci{include}\verb|{|\emph{filename}\verb|}|
\end{lscommand}
\noindent Use this command in the document body to insert the
contents of another file named \emph{filename.tex}. Note that \LaTeX{}
will start a new page
before processing the material input from \emph{filename.tex}.
The second command can be used in the preamble. It allows you to
instruct \LaTeX{} to only input some of the \verb|\include|d files.
\begin{lscommand}
\ci{includeonly}\verb|{|\emph{filename}\verb|,|\emph{filename}%
\verb|,|\ldots\verb|}|
\end{lscommand}
After this command is executed in the preamble of the document, only
\ci{include} commands for the filenames that are listed in the
argument of the \ci{includeonly} command will be executed.
The \ci{include} command starts typesetting the included text on a new
page. This is helpful when you use \ci{includeonly}, because the
page breaks will not move, even when some include files are omitted.
Sometimes this might not be desirable. In this case, use the
\begin{lscommand}
\ci{input}\verb|{|\emph{filename}\verb|}|
\end{lscommand}
\noindent command. It simply includes the file specified.
No flashy suits, no strings attached.
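Here is a sketch of how the three commands fit together; the file names \texttt{chapter1}, \texttt{chapter2} and \texttt{notes} are placeholders:
\begin{verbatim}
\documentclass{book}
% while revising chapter 2, typeset only that file
\includeonly{chapter2}
\begin{document}
\include{chapter1} % skipped on this run, page breaks do not move
\include{chapter2} % typeset, starting on a new page
\input{notes}      % always read in place, no page break
\end{document}
\end{verbatim}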
To make \LaTeX{} quickly check your document use the \pai{syntonly}
package. This makes \LaTeX{} skim through your document only checking for
proper syntax and usage of the commands, but doesn't produce any (DVI) output.
As \LaTeX{} runs faster in this mode you may save yourself valuable time.
Usage is very simple:
\begin{verbatim}
\usepackage{syntonly}
\syntaxonly
\end{verbatim}
When you want to produce pages, just comment out the second line
(by adding a percent sign).
%
% Local Variables:
% TeX-master: "lshort2e"
% mode: latex
% mode: flyspell
% End:
% Copyright 2019 by Till Tantau
%
% This file may be distributed and/or modified
%
% 1. under the LaTeX Project Public License and/or
% 2. under the GNU Free Documentation License.
%
% See the file doc/generic/pgf/licenses/LICENSE for more details.
\section{Axes}
\label{section-dv-axes}
\subsection{Overview}
When a data point is visualized, the most obvious way of creating a visual
representation of its many attributes is to vary \emph{where} the data point is
shown. The data visualization system uses \emph{axes} to turn data point
attributes into positions on a page. The simplest -- and most common -- use of
axes is to vary the horizontal position of data points according to one
attribute and to vary the vertical position according to another attribute. In
contrast, in a polar plot one attribute dictates the distance of the data point
from the origin and another attribute describes the angle. From the data
visualization engine's point of view, in both cases two \emph{axes} are
involved.
In addition to specifying how the value of a certain attribute is converted
into a displacement on the page, an axis is also typically (but not always)
visualized (``drawn'') somewhere on the page. In this case, it is also
customary to add a visual representation on this axis of which attribute values
correspond to which positions on the page -- something commonly known as
\emph{ticks}. Similar to ticks, \emph{grid lines} also indicate positions where
a certain attribute has a certain value, but instead of just indicating a
single position on an axis, a grid line goes through all points that share an
attribute value.
In the following, in Section~\ref{section-dv-axes-main} we first have a look at
how axes can be defined and configured. As you will see, a lot of powerful
configurations are available, but you will rarely define and configure an axis
from scratch. Rather, it is more common to use a preconfigured axis instead.
Section~\ref{section-dv-axis-systems} introduces \emph{axis systems}, which are
predefined bundles of axes. You can define your own axis systems, but, again,
in most cases it will suffice to just use one of the many preconfigured axis
systems and use a few options to configure it so that it fits your need.
Section~\ref{section-dv-ticks-and-grids} explains how ticks and grid lines can
be configured. Again, several layers of options allow you to configure the way
ticks look and where they are placed in great detail.
This section documents the standard axis systems that are always available. For
polar axis systems, a special library needs to be loaded, which is documented
in Section~\ref{section-dv-polar}.
\subsection{Basic Configuration of Axes}
\label{section-dv-axes-main}
Inside the data visualization system, an \emph{axis} is roughly a ``systematic,
named way of mapping an attribute to a position on a page''. For instance, the
classical ``$x$-axis'' is the ``systematic way of mapping the value of the |x|
attribute of data points to a horizontal position on the page''. An axis is
\emph{not} its visual representation (such as the horizontal line with the
ticks drawn to represent the $x$-axis), but a visual representation can be
created once an axis has been defined.
The transformation of an attribute value (such as the value |1000000000| for
the |x| attribute) to a specific displacement of the corresponding data point
on the page involves two steps:
%
\begin{enumerate}
\item First, the range of possible values such as $[-5.6\cdot
10^{12},7.8\cdot 10^{12}]$ must be mapped to a ``reasonable'' interval
such as $[0\mathrm{cm},5\mathrm{cm}]$ or $[0^\circ,180^\circ]$.
\tikzname's drawing routines will only be able to cope with values from
such a ``reasonable'' interval.
\item Second, the values from the reasonable interval must be mapped to a
transformation.
\end{enumerate}
%
The first step is always the same for all axes, while the second requires
different strategies. For this reason, the command |new axis base| is used to
create a ``basic'' axis that has a ``scaling mapper'', whose job it is to map
the range of values of a specific attribute to a reasonable interval, but such
a basic axis does not define an actual transformation object. For this second
step, additional objects such as a |linear transformer| need to be created
separately.
\subsubsection{Usage}
To create an axis, the key |new axis base| is used first. Since this key does
not create a transformation object, users typically do not use this key
directly. Rather, it is used internally by other keys that create ``real''
axes. These keys are listed in Section~\ref{section-dv-reference-axis-types}.
\begin{key}{/tikz/data visualization/new axis base=\meta{axis name}}
This key defines a new axis for the current data visualization called
\meta{axis name}. This has two effects:
%
\begin{enumerate}
\item A so-called \emph{scaling mapper} is created that will monitor a
certain attribute, rescale it, and map it to another attribute.
(This will be explained in detail in a moment.)
\item The \meta{axis name} is made available as a key that can be used
to configure the axis:
%
\begin{key}{/tikz/data visualization/\meta{axis name}=\meta{options}}
This key becomes available once |new axis base=|\meta{axis name}
has been called. It will execute the \meta{options} with the
path prefix |/tikz/data visualization/axis options|.
%
\begin{codeexample}[code only]
[new axis base=my axis,
my axis={attribute=some attribute}]
\end{codeexample}
\end{key}
\item The \meta{axis name} becomes part of the current set of axes.
This set can be accessed through the following key:
%
\begin{key}{/tikz/data visualization/all axes=\meta{options}}
This key passes the \meta{options} to all axes inside the
current scope, just as if you had written \meta{some axis
name}|=|\meta{options} for each \meta{some axis name} in the
current scope, including the just-created name \meta{axis
name}.
\end{key}
\end{enumerate}
%
There are many \meta{options} that can be passed to a newly created axis.
They are explained in the rest of this section.
\end{key}
Note that |new axis base| does \emph{not} cause attributes to be mapped to
positions on a page. Rather, special keys like |new Cartesian axis| first use
|new axis base| to create an axis and then create an internal object that
performs a linear mapping of the attribute to positions along a vector.
\subsubsection{The Axis Attribute}
\label{section-dv-axis-attribute}
The first main job of an axis is to map the different values of some attribute
to a reasonable interval. To achieve this, the following options are important
(recall that these options are passed to the key whose name is the name of the
axis):
\begin{key}{/tikz/data visualization/axis options/attribute=\meta{attribute}}
Specifies that the axis is used to transform the data points according to the
different values of the key |/data point/|\meta{attribute}. For instance,
when we create a classical two-dimensional Cartesian coordinate system,
then there are two axes called |x axis| and |y axis| that monitor the
values of the attributes |/data point/x| and |/data point/y|, respectively:
%
\begin{codeexample}[code only]
[new axis base=x axis,
new axis base=y axis,
x axis={attribute=x},
y axis={attribute=y}]
\end{codeexample}
%
In another example, we also create an |x axis| and a |y axis|. However,
this time, we want to plot the values of the |/data point/time| attribute
on the $x$-axis and, say, the value of the |height| attribute on the
$y$-axis:
%
\begin{codeexample}[code only]
[new axis base=x axis,
new axis base=y axis,
x axis={attribute=time},
y axis={attribute=height}]
\end{codeexample}
%
During the data visualization, the \meta{attribute} will be ``monitored''
during the survey phase. This means that for each data point, the current
value of |/data point/|\meta{attribute} is examined and the minimum value
of all of these values as well as the maximum value is recorded internally.
Note that this works even when very large numbers like |100000000000| are
involved.
Here is a real-life example. The |scientific axes| create two axes, called
|x axis| and |y axis|, respectively.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [scientific axes,
x axis={attribute=people, length=2.5cm, ticks=few},
y axis={attribute=year},
visualize as scatter]
data {
year, people
1900, 100
1910, 200
1950, 200
1960, 250
2000, 150
};
\end{codeexample}
%
\end{key}
\subsubsection{The Axis Attribute Range Interval}
Once an attribute has been specified for an axis, the data visualization engine
will start monitoring this value. This means that before any actual
visualization is done, a ``survey phase'' is used to determine the range of
values encountered for the attribute for all data points. This range of values
results in what is called the \emph{attribute range interval}. Its minimum is
the smallest value encountered in the data and its maximum is the largest
value.
Even though the attribute range interval is computed automatically and even
though you typically do not need to worry about it, there are some situations
where you may wish to set or enlarge the attribute range interval:
%
\begin{itemize}
\item You may wish to start the interval with $0$, even though the range of
values contains only positive values.
\item You may wish to slightly enlarge the interval so that, say, the
maximum is some ``nice'' value like |100| or |60|.
\end{itemize}
The following keys can be used to influence the size of the attribute range
interval:
%
\begin{key}{/tikz/data visualization/axis options/include value=\meta{list of values}}
This key ``fakes'' data points for which the attribute's values are in the
comma-separated \meta{list of values}. For instance, when you write
|include value=0|, then the attribute range interval is guaranteed to
contain |0| -- even if the actual data points are all positive or all
negative.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes, all axes={length=3cm},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes, all axes={length=3cm},
visualize as line,
x axis={include value=20},
y axis={include value=0}]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/axis options/min value=\meta{value}}
This key allows you to simply set the minimum value, regardless of which
values are present in the actual data. This key should be used with care:
If there are data points for which the attribute's value is less than
\meta{value}, they will still be depicted, but typically outside the normal
visualization area. Usually, saying |include value=|\meta{value} will
achieve the same as saying |min value=|\meta{value}, but with less danger
of creating ill-formed visualizations.
\end{key}
\begin{key}{/tikz/data visualization/axis options/max value=\meta{value}}
Works like |min value|.
\end{key}
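For instance, one might pin the surveyed range of the |x| attribute to $[0,100]$ regardless of the data; the option list below is an illustrative sketch in the style of the earlier examples:
%
\begin{codeexample}[code only]
[x axis = {attribute = x, min value = 0, max value = 100}]
\end{codeexample}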
\subsubsection{Scaling: The General Mechanism}
The above key allows us to specify which attribute should be ``monitored''. The
next key is used to specify what should happen with the observed values.
\begin{key}{/tikz/data visualization/axis options/scaling=\meta{scaling spec}}
The \meta{scaling spec} must have the following form:
%
\begin{quote}
\meta{$s_1$}| at |\meta{$t_1$}| and |\meta{$s_2$}| at |\meta{$t_2$}
\end{quote}
%
This means that monitored values in the interval $[s_1,s_2]$ should be
mapped to values in the ``reasonable'' interval $[t_1,t_2]$ instead. For
instance, we might write
%
\begin{codeexample}[code only]
[y axis = {scaling = 1900 at 0cm and 2000 at 5cm}]
\end{codeexample}
%
in order to map dates between 1900 and 2000 to the dimension interval
$[0\mathrm{cm},5\mathrm{cm}]$.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization
[scientific axes,
x axis={attribute=people, length=2.5cm, ticks=few},
y axis={attribute=year, scaling=1900 at 0cm and 2000 at 5cm},
visualize as scatter]
data {
year, people
1900, 100
1910, 200
1950, 200
1960, 250
2000, 150
};
\end{codeexample}
%
So much for the basic idea. Let us now have a detailed look at what
happens.
\medskip
\textbf{Number format and the min and max keywords.}
The source values $s_1$ and $s_2$ are typically just numbers like |3.14| or
|10000000000|. However, as described in
Section~\ref{section-dv-expressions}, you can also specify expressions like
|(pi/2)|, provided that (currently) you put them in parentheses.
Instead of a number, you may alternatively also use the two key words |min|
and |max| for $s_1$ and/or $s_2$. In this case, |min| evaluates to the
smallest value observed for the attribute in the data, symmetrically |max|
evaluates to the largest values. For instance, in the above example with
the |year| attribute ranging from |1900| to |2000|, the keyword |min| would
stand for |1900| and |max| for |2000|. Similarly, for the |people|
attribute |min| stands for |100| and |max| for |250|. Note that |min| and
|max| can only be used for $s_1$ and $s_2$, not for $t_1$ and $t_2$.
A typical use of the |min| and |max| keywords is to say
%
\begin{codeexample}[code only]
scaling = min at 0cm and max at 5cm
\end{codeexample}
%
to map the complete range of values into an interval of length 5cm.
The interval $[s_1,s_2]$ need not contain all values that the
\meta{attribute} may attain. It is permissible that values are less than
$s_1$ or more than $s_2$.
\medskip
\textbf{Linear transformation of the attribute.}
As indicated earlier, the main job of an axis is to map values from a
``large'' interval $[s_1,s_2]$ to a more reasonable interval $[t_1,t_2]$.
Suppose that for the current data point the value of the key
|/data point/|\meta{attribute} is the number $v$. In the simplest case, the
following happens: A new value $v'$ is computed so that $v' = t_1$ when
$v=s_1$, $v'=t_2$ when $v=s_2$, and $v'$ is some value in between $t_1$
and $t_2$ when $v$ is some value in between $s_1$ and $s_2$. (Formally, in
this basic case $v' = t_1 + (v-s_1)\frac{t_2-t_1}{s_2-s_1}$.)
Once $v'$ has been computed, it is stored in the key
|/data point/|\meta{attribute}|/scaled|. Thus, the ``reasonable'' value
$v'$ does not replace the value of the attribute, but it is placed in a
different key. This means that both the original value and the more
``scaled'' values are available when the data point is visualized.
As an example, suppose you have written
%
\begin{codeexample}[code only]
[x axis = {attribute = x, scaling=1000 at 20 and 2000 at 30}]
\end{codeexample}
%
Now suppose that |/data point/x| equals |1200| for a data point. Then the
key |/data point/x/scaled| will be set to |22| when the data point is being
visualized.
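Written out with the basic formula above, this value is simply
\begin{align*}
v' = 20 + (1200 - 1000)\,\frac{30 - 20}{2000 - 1000} = 20 + 200 \cdot \frac{10}{1000} = 22.
\end{align*}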
\medskip
\textbf{Nonlinear transformations of the attribute.}
By default, the transformation of $[s_1,s_2]$ to $[t_1,t_2]$ is the linear
transformation described above. However, in some cases you may be interested
in a different kind of transformation: For example, in a logarithmic plot,
values of an attribute may range between, say, |1| and |1000| and we want
an axis of length |3cm|. So, we would write
%
\begin{codeexample}[code only]
[x axis = {attribute = x, scaling=1 at 0cm and 1000 at 3cm}]
\end{codeexample}
%
Indeed, |1| will now be mapped to position |0cm| and |1000| will be mapped
to position |3cm|. Now, the value |10| will be mapped to approximately
|0.03cm| because it is (almost) at one percent between |1| and |1000|.
However, in a logarithmic plot we actually want |10| to be mapped to the
position |1cm| rather than |0.03cm| and we want |100| to be mapped to the
position |2cm|. Such a mapping a \emph{nonlinear} mapping between the
intervals.
In order to achieve such a nonlinear mapping, the |function| key can be
used, whose syntax is described in a moment. The effect of this key is to
specify a function $f \colon \mathbb{R} \to \mathbb{R}$ like, say, the
logarithm function. When such a function is specified, the mapping of $v$
to $v'$ is computed as follows:
%
\begin{align*}
v' = t_1 + (f(v) - f(s_1))\frac{t_2 - t_1}{f(s_2)-f(s_1)}.
\end{align*}
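As a quick check against the logarithmic example above, take $f = \ln$, $s_1 = 1$, $s_2 = 1000$, $t_1 = 0\mathrm{cm}$, $t_2 = 3\mathrm{cm}$ and $v = 10$:
\begin{align*}
v' = 0\mathrm{cm} + \bigl(\ln 10 - \ln 1\bigr)\,\frac{3\mathrm{cm} - 0\mathrm{cm}}{\ln 1000 - \ln 1}
   = \frac{\ln 10}{3\ln 10} \cdot 3\mathrm{cm} = 1\mathrm{cm},
\end{align*}
and, likewise, $v = 100$ is mapped to $2\mathrm{cm}$, just as a logarithmic axis should behave.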
The syntax of the |function| key is described next, but you typically will
not call this key directly. Rather, you will use a key like |logarithmic|
that installs appropriate code for the |function| key for you.
%
\begin{key}{/tikz/data visualization/axis options/function=\meta{code}}
The \meta{code} should specify a function $f$ that is applied during
the transformation of the interval $[s_1,s_2]$ to the interval
$[t_1,t_2]$ in the following way: When the \meta{code} is called, the
macro |\pgfvalue| will have been set to an internal representation of
the to-be-transformed value~$v$. You can then call the commands of the
math-micro-kernel of the data visualization system, see
Section~\ref{section-dv-math-kernel}, to compute a new value. This new
value must once more be stored in |\pgfvalue|.
The most common use of this key is to say
%
\begin{codeexample}[code only]
some axis={function=\pgfdvmathln{\pgfvalue}{\pgfvalue}}
\end{codeexample}
%
This specifies that the function $f$ is the logarithm function.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization
[scientific axes,
x axis={ticks={major={at={1,10,100,1000}}},
scaling=1 at 0cm and 1000 at 3cm,
function=\pgfdvmathln{\pgfvalue}{\pgfvalue}},
visualize as scatter]
data [format=named] {
x={1,100,...,1000}, y={1,2,3}
};
\end{codeexample}
%
Another possibility might be to use the square-root function, instead:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization
[scientific axes,
x axis={ticks=few,
scaling=1 at 0cm and 1000 at 3cm,
function=\pgfdvmathunaryop{\pgfvalue}{sqrt}{\pgfvalue}},
visualize as scatter]
data [format=named] {
x={0,100,...,1000}, y={1,2,3}
};
\end{codeexample}
\end{key}
\medskip
\textbf{Default scaling.}
When no scaling is specified, it may seem natural to use $[0,1]$ both as
the source and the target interval. However, this would not work when the
logarithm function is used as transformations: In this case the logarithm
of zero would be computed, leading to an error. Indeed, for a logarithmic
axis it is far more natural to use $[1,10]$ as the source interval and
$[0,1]$ as the target interval.
For these reasons, the default value for the |scaling| that is used when no
value is specified explicitly can be set using a special key:
%
\begin{key}{/tikz/data visualization/axis options/scaling/default=\meta{text}}
The \meta{text} is used as |scaling| whenever no other scaling is
specified. This key is mainly used when a transformation function is
set using |function|; normally, you will not use this key directly.
\end{key}
\end{key}
Most of the time, you will use neither the |scaling| nor the |function| key
directly; rather, you will use one of the predefined styles documented in the
following sections.
\subsubsection{Scaling: Logarithmic Axes}
\begin{key}{/tikz/data visualization/axis options/logarithmic}
When this key is used with an axis, three things happen:
%
\begin{enumerate}
\item The transformation |function| of the axis is set to the
logarithm.
\item The strategy for automatically generating ticks and grid lines is
set to the |exponential strategy|, see
Section~\ref{section-dv-exponential-strategy} for details.
\item The default scaling is set up sensibly.
\end{enumerate}
%
All told, to turn an axis into a logarithmic axis, you just need to add
this option to the axis.
%
\begin{codeexample}[
width=8cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization [scientific axes,
x axis={logarithmic},
y axis={logarithmic},
visualize as line]
data [format=function] {
var x : interval [0.01:100];
func y = \value x * \value x;
};
\end{codeexample}
%
Note that this will work with any axis, including, say, the degrees on a
polar axis:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.polar}}]
\tikz \datavisualization
[new polar axes,
angle axis={logarithmic, scaling=1 at 0 and 90 at 90},
radius axis={scaling=0 at 0cm and 100 at 3cm},
visualize as scatter]
data [format=named] {
angle={1,10,...,90}, radius={1,10,...,100}
};
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.polar}}]
\tikz \datavisualization
[new polar axes,
angle axis={degrees},
radius axis={logarithmic, scaling=1 at 0cm and 100 at 3cm},
visualize as scatter]
data [format=named] {
angle={1,10,...,90}, radius={1,10,...,100}
};
\end{codeexample}
%
\end{key}
\subsubsection{Scaling: Setting the Length or Unit Length}
\begin{key}{/tikz/data visualization/axis options/length=\meta{dimension}}
Sets |scaling| to |min at 0cm and max at |\meta{dimension}. The effect is
that the range of all values of the axis's attribute will be mapped to an
interval of exact length \meta{dimension}.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [scientific axes,
x axis={length=3cm},
y axis={length=2cm},
all axes={ticks=few},
visualize as line]
data {
x, y
10, 10
20, 20
15, 30
13, 20
};
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [scientific axes,
x axis={length=3cm},
y axis={length=4cm},
all axes={ticks=few},
visualize as line]
data {
x, y
10, 10
20, 20
15, 30
13, 20
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/axis options/unit length=\meta{dimension}\opt{| per |\meta{number}| units|}}
Sets |scaling| to |0 at 0cm and 1 at |\meta{dimension}. In other words,
this key allows you to specify how long a single unit should be. This key
is particularly useful when you wish to ensure that the same scaling is
used across multiple axes or pictures.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [scientific axes,
all axes={ticks=few, unit length=1mm},
visualize as line]
data {
x, y
10, 10
40, 20
15, 30
13, 20
};
\end{codeexample}
%
The optional |per |\meta{number}| units| allows you to apply more drastic
scaling. Suppose that you want to plot a graph where one billion
corresponds to one centimeter. Then the unit length would need to be set
to a hundredth of a nanometer -- much too small for \TeX\ to handle as a
dimension. In this case, you can write
|unit length=1cm per 1000000000 units|:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization
[scientific axes,
x axis={unit length=1mm per 1000000000 units, ticks=few},
visualize as line]
data {
x, y
10000000000, 10
40000000000, 20
15000000000, 30
13000000000, 20
};
\end{codeexample}
%
\end{key}
%
\begin{key}{/tikz/data visualization/axis options/power unit length=\meta{dimension}}
This key is used in conjunction with the |logarithmic| setting. It causes
the |scaling| to be set to |1 at 0cm and 10 at |\meta{dimension}. This
causes a ``power unit'', that is, one power of ten in a logarithmic plot,
to get a length of \meta{dimension}. Again, this key is useful for ensuring
that the same scaling is used across multiple axes or pictures.
%
\begin{codeexample}[width=8cm,preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization
[scientific axes,
y axis={logarithmic, power unit length=1mm, grid},
visualize as line]
data {
x, y
0, 0.0000000001
1, 1
2, 100000
3, 100000000000
4, 10000000000000000000000000000000
5, 500000000
6, 5000000000000000000
};
\end{codeexample}
%
\end{key}
\subsubsection{Axis Label}
An axis can have a \emph{label}, which is a textual representation of the
attribute according to which the axis varies the position on the page. You can
set the label using the following key:
\begin{key}{/tikz/data visualization/axis options/label=\opt{|\char`\{[|\meta{options}|]|}\meta{text}\opt{|\char`\}|}
(default \normalfont axis's attribute in math mode)%
}
This key sets the label of an axis to \meta{text}. This text will typically
be placed inside a |node| and the \meta{options} can be used to further
configure the way this node is rendered. The \meta{options} will be
executed with the path prefix |/tikz/data visualization/|, so you need to
say |node style| to configure the styling of a node, see
Section~\ref{section-dv-style}.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [
scientific axes,
x axis = {label, length=2.5cm},
y axis = {label={[node style={fill=blue!20}]{$x^2$}}},
visualize as smooth line]
data [format=function] {
var x : interval [-3:5];
func y = \value x * \value x;
};
\end{codeexample}
%
\end{key}
Note that using the |label| key does not actually cause a node to be created,
because it is somewhat unclear where the label should be placed. Instead, the
|visualize label| key is used (typically internally by an axis system) to show
the label at some sensible position. This key is documented in
Section~\ref{section-dv-visualize-label}.
\subsubsection{Reference: Axis Types}
\label{section-dv-reference-axis-types}
As explained earlier, when you use |new axis base| to create a new axis, a
powerful scaling and attribute mapping mechanism is installed, but no mapping
of values to positions on the page is performed. For this, a
\emph{transformation object} must be installed. The following keys take care of
this for you. Note, however, that even these keys do not cause a visual
representation of the axis to be added to the visualization -- this is the job
of an axis system, see Section~\ref{section-dv-axis-systems}.
\begin{key}{/tikz/data visualization/new Cartesian axis=\meta{name}}
This key creates a new ``Cartesian'' axis, named \meta{name}. For such an
axis, the (scaled) values of the axis's attribute are transformed into a
displacement on the page along a straight line. The following key is used
to configure in which ``direction'' the axis points:
%
\begin{key}{/tikz/data visualization/axis options/unit vector=\meta{coordinate} (initially {(1pt,0pt)})}
Recall that an axis takes the values of an attribute and rescales them
so that they fit into a ``reasonable'' interval $[t_1,t_2]$. Suppose
that $v'$ is the rescaled dimension in (\TeX) points. Then when the
data point is visualized, the coordinate system will be shifted by $v'$
times the \meta{coordinate}.
As an example, suppose that you have said
|scaling=0 at 10pt and 50 at 20pt|. Then when the underlying
attribute has the value |25|, it will be mapped to a $v'$ of $15$
(because |25| lies in the middle of |0| and |50| and |15pt| lies in the
middle of |10pt| and |20pt|). This, in turn, causes the data point to
be displaced by $15$ times the \meta{coordinate}.
The bottom line is that the \meta{coordinate} should usually denote a
point that is at distance |1pt| from the origin and that points into
the direction of the axis.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\begin{tikzpicture}
\draw [help lines] (0,0) grid (3,2);
\datavisualization
[new Cartesian axis=x axis, x axis={attribute=x},
new Cartesian axis=y axis, y axis={attribute=y},
x axis={unit vector=(0:1pt)},
y axis={unit vector=(60:1pt)},
visualize as scatter]
data {
x, y
0, 0
1, 0
2, 0
1, 1
2, 1
1, 1.5
2, 1.5
};
\end{tikzpicture}
\end{codeexample}
\end{key}
\end{key}
\subsection{Axis Systems}
\label{section-dv-axis-systems}
An \emph{axis system} is, as the name suggests, a whole family of axes that act
in concert. For example, in the ``standard'' axis system there is a horizontal
axis called the $x$-axis that monitors the |x| attribute (by default, you can
change this easily) and a vertical axis called the $y$-axis. Furthermore, a
certain number of ticks are added and labels are placed at sensible positions.
\subsubsection{Usage}
Using an axis system is usually pretty easy: You just specify a key like
|scientific axes| and the necessary axes get initialized with sensible default
values. You can then start to modify these default values, if necessary.
First, you can (and should) set the attributes to which the different axes
refer. For instance, if the |time| attribute is plotted along the $x$-axis, you
would write
%
\begin{codeexample}[code only]
x axis = {attribute = time}
\end{codeexample}
Second, you may wish to modify the lengths of the axes. For this, you can use
keys like |length| or further keys as described in the references later on.
Third, you may often wish to modify how many ticks and grid lines are shown. By
default, no grid lines are shown, but you can say the following in order to
cause grid lines to be shown:
%
\begin{codeexample}[code only]
all axes={grid}
\end{codeexample}
%
Naturally, instead of |all axes| you can also specify a single axis, causing
only grid lines to be shown for this axis. In order to change the number of
ticks that are shown, you can say
%
\begin{codeexample}[code only]
all axes={ticks=few}
\end{codeexample}
%
or also |many| instead of |few| or even |none|. Far more fine-grained control
over the tick placement and rendering is possible, see
Section~\ref{section-dv-ticks-and-grids} for details.
Fourth, consider adding units (like ``cm'' for centimeters or
``$\mathrm{m}/\mathrm{s}^2$'' for acceleration) to your ticks:
%
\begin{codeexample}[code only]
x axis={ticks={tick unit=cm}}, y axis={ticks={tick unit=m/s^2}}
\end{codeexample}
Finally, consider adding labels to your axes. For this, use the label option:
%
\begin{codeexample}[code only]
x axis={label=time $t$ (ms)}, y axis={label=distance $d$ (mm)}
\end{codeexample}
Here is an example that employs most of the above features:
%
\begin{codeexample}[width=8.5cm,preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
scientific axes=clean,
x axis={attribute=time, ticks={tick unit=ms},
label={elapsed time}},
y axis={attribute=v, ticks={tick unit=m/s},
label={speed of disc}},
all axes=grid,
visualize as line]
data {
time, v
0, 0
1, 0.001
2, 0.002
3, 0.004
4, 0.0035
5, 0.0085
6, 0.0135
};
\end{codeexample}
\subsubsection{Reference: Scientific Axis Systems}
\begin{key}{/tikz/data visualization/scientific axes=\opt{\meta{options}}}
This key installs a two-dimensional coordinate system based on the
attributes |/data point/x| and |/data point/y|.
%
\begin{codeexample}[
width=7cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\begin{tikzpicture}
\datavisualization [scientific axes,
visualize as smooth line]
data [format=function] {
var x : interval [0:100];
func y = sqrt(\value x);
};
\end{tikzpicture}
\end{codeexample}
This axis system is usually a good choice to depict ``arbitrary two
dimensional data''. Because the axes are automatically scaled, you do not
need to worry about how large or small the values will be. The name
|scientific axes| is intended to indicate that this axis system is often
used in scientific publications.
You can use the \meta{options} to fine tune the axis system. The
\meta{options} will be executed with the following path prefix:
%
\begin{codeexample}[code only]
/tikz/data visualization/scientific axes
\end{codeexample}
%
All keys with this prefix can thus be passed as \meta{options}.
This axis system will always distort the relative magnitudes of the units
on the two axes. If you wish the units on both axes to be equal, consider
directly specifying the unit length ``by hand'':
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [visualize as smooth line,
scientific axes,
all axes={unit length=1cm per 10 units, ticks={few}}]
data [format=function] {
var x : interval [0:100];
func y = sqrt(\value x);
};
\end{tikzpicture}
\end{codeexample}
The |scientific axes| have the following properties:
%
\begin{itemize}
\item The |x|-values are surveyed and the $x$-axis is then scaled and
shifted so that it has the length specified by the following key.
%
\begin{key}{/tikz/data visualization/scientific axes/width=\meta{dimension} (initially 5cm)}
\end{key}
%
The minimum value is at the left end of the axis and at the canvas
origin. The maximum value is at the right end of the axis.
\item The |y|-values are surveyed and the $y$-axis is then scaled so that
it has the length specified by the following key.
%
\begin{key}{/tikz/data visualization/scientific axes/height=\meta{dimension}}
By default, the |height| is the golden ratio times the |width|.
\end{key}
%
The minimum value is at the bottom of the axis and at the canvas
origin. The maximum value is at the top of the axis.
\item Lines (forming a frame) are depicted at the minimum and maximum
values of the axes in 50\% black.
\end{itemize}
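For instance, the |width| and |height| keys just mentioned can be set explicitly
when the axis system is installed. The following is an illustrative sketch
(shown as code only; the data values are arbitrary):
%
\begin{codeexample}[code only]
\tikz \datavisualization [
    scientific axes={width=6cm, height=2cm},
    all axes={ticks=few},
    visualize as line]
  data {
    x, y
    0, 0
    1, 1
    2, 4
    3, 9
  };
\end{codeexample}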
The following keys are executed by default as options: |outer ticks| and
|standard labels|.
You can use the following style to overrule the defaults:
\begin{stylekey}{/tikz/data visualization/every scientific axes}
\end{stylekey}
\end{key}
The keys described in the following can be used to fine-tune the way the
scientific axis system is rendered.
\begin{key}{/tikz/data visualization/scientific axes/outer ticks}
This causes the ticks to be drawn ``on the outside'' of the frame so that
they interfere as little as possible with the data. It is the default.
%
\begin{codeexample}[
width=7cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\begin{tikzpicture}
\datavisualization [scientific axes=outer ticks,
visualize as smooth line]
data [format=function] {
var x : interval [-12:12];
func y = \value x*\value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/scientific axes/inner ticks}
This axis system works like |scientific axes|, only the ticks are on the
``inside'' of the frame.
%
\begin{codeexample}[
width=7cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\begin{tikzpicture}
\datavisualization [scientific axes=inner ticks,
visualize as smooth line]
data [format=function] {
var x : interval [-12:12];
func y = \value x*\value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
This axis system is also common in publications, but the ticks tend to
interfere with marks if they are near the border, as can be seen in the
following example:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\begin{tikzpicture}
\datavisualization [scientific axes={inner ticks, width=3.2cm},
style sheet=cross marks,
visualize as scatter/.list={a,b}]
data [set=a] {
x, y
0, 0
1, 1
0.5, 0.5
2, 1
}
data [set=b] {
x, y
0.05, 0
1.5, 1
0.5, 0.75
2, 0.5
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/scientific axes/clean}
The axes and the ticks are completely removed from the actual data, making
this axis system especially useful for scatter plots, but also for most
other scientific plots.
%
\begin{codeexample}[
width=7.5cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization [
scientific axes=clean,
visualize as smooth line]
data [format=function] {
var x : interval [-12:12];
func y = \value x*\value x*\value x;
};
\end{codeexample}
The distance of the axes from the actual plot is given by the padding of
the axes.
\end{key}
For all scientific axis systems, different label placement strategies can be
specified. They are discussed in the following.
\begin{key}{/tikz/data visualization/scientific axes/standard labels}
As the name suggests, this is the standard placement strategy. The label of
the $x$-axis is placed below the center of the $x$-axis, the label of the
$y$-axis is rotated by $90^\circ$ and placed left of the center of the
$y$-axis.
%
\begin{codeexample}[
width=8cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization
[scientific axes={clean, standard labels},
visualize as smooth line,
x axis={label=degree $d$,
ticks={tick unit={}^\circ}},
y axis={label=$\sin d$}]
data [format=function] {
var x : interval [-10:10] samples 10;
func y = sin(\value x);
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/scientific axes/upright labels}
Works like |standard labels|, only the label of the
$y$-axis is not rotated.
%
\begin{codeexample}[
width=8cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization [
scientific axes={clean, upright labels},
visualize as smooth line,
x axis={label=degree $d$,
ticks={tick unit={}^\circ}},
y axis={label=$\cos d$, include value=1,
ticks={style={
/pgf/number format/precision=4,
/pgf/number format/fixed zerofill}}}]
data [format=function] {
var x : interval [-10:10] samples 10;
func y = cos(\value x);
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/scientific axes/end labels}
Places the labels at the end of the $x$- and the $y$-axis, similar to the
axis labels of a school book axis system.
%
\begin{codeexample}[
width=8cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization [
scientific axes={clean, end labels},
visualize as smooth line,
x axis={label=degree $d$,
ticks={tick unit={}^\circ}},
y axis={label=$\tan d$}]
data [format=function] {
var x : interval [-80:80];
func y = tan(\value x);
};
\end{codeexample}
%
\end{key}
\subsubsection{Reference: School Book Axis Systems}
\begin{key}{/tikz/data visualization/school book axes=\meta{options}}
This axis system is intended to ``look like'' the coordinate systems often
used in school books: The axes are drawn in such a way that they intersect
to origin. Furthermore, no automatic scaling is done to ensure that the
lengths of units are the same in all directions.
This axis system must be used with care -- it is nearly always necessary to
specify the desired unit length by hand using the option |unit length|. If
the magnitudes of the units on the two axes differ, different unit lengths
typically need to be specified for the different axes.
Finally, if the data is ``far removed'' from the origin, this axis system
will also ``look bad''.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [school book axes, visualize as smooth line]
data [format=function] {
var x : interval [-1.3:1.3];
func y = \value x*\value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
The stepping of the ticks is one unit by default. Using keys like
|ticks=some| may help to give better steppings.
The \meta{options} are executed with the key itself as path prefix. Thus,
the following subkeys are permissible options:
%
\begin{key}{/tikz/data visualization/school book axes/unit=\meta{value}}
Sets the scaling so that 1\,cm corresponds to \meta{value} units. At
the same time, the stepping of the ticks will also be set to
\meta{value}.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [school book axes={unit=10},
visualize as smooth line,
clean ticks,
x axis={label=$x$},
y axis={label=$f(x)$}]
data [format=function] {
var x : interval [-20:20];
func y = \value x*\value x/10;
};
\end{tikzpicture}
\end{codeexample}
\end{key}
\begin{key}{/tikz/data visualization/school book axes/standard labels}
This key makes the label of the $x$-axis appear at the right end of
this axis and it makes the label of the $y$-axis appear at the top of
the $y$-axis.
Currently, this is the only supported placement strategy for the school
book axis system.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [school book axes={standard labels},
visualize as smooth line,
clean ticks,
x axis={label=$x$},
y axis={label=$f(x)$}]
data [format=function] {
var x : interval [-1:1];
func y = \value x*\value x + 1;
};
\end{tikzpicture}
\end{codeexample}
\end{key}
\end{key}
\subsubsection{Advanced Reference: Underlying Cartesian Axis Systems}
The axis systems described in the following are typically not used directly by
the user. The systems set up \emph{directions} for several axes in some sensible
way, but they do not actually draw anything on these axes. For instance, the
|xy Cartesian| creates two axes called |x axis| and |y axis| and makes the
$x$-axis point right and the $y$-axis point up. In contrast, an axis system
like |scientific axes| uses the axis system |xy Cartesian| internally and then
proceeds to setup a lot of keys so that the axis lines are drawn, ticks and
grid lines are drawn, and labels are placed at the correct positions.
\begin{key}{/tikz/data visualization/xy Cartesian}
This axis system creates two axes called |x axis| and |y axis| that point
right and up, respectively. By default, one unit is mapped to one cm.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [xy Cartesian, visualize as smooth line]
data [format=function] {
var x : interval [-1.25:1.25];
func y = \value x*\value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
\begin{key}{/tikz/data visualization/xy axes=\meta{options}}
This key applies the \meta{options} both to the |x axis| and the
|y axis|.
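For instance, the following illustrative sketch (shown as code only; the data
values are arbitrary) uses |xy axes| to give both axes the same unit length:
%
\begin{codeexample}[code only]
\tikz \datavisualization
  [xy Cartesian,
   xy axes={unit length=5mm},
   visualize as line]
  data {
    x, y
    0, 0
    1, 1
    2, 0
    3, 2
  };
\end{codeexample}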
\end{key}
\end{key}
\begin{key}{/tikz/data visualization/xyz Cartesian cabinet}
This axis system works like |xy Cartesian|, only it \emph{additionally}
creates an axis called |z axis| that points left and down. For this axis,
one unit corresponds to $\frac{1}{2}\sin 45^\circ\mathrm{cm}$. This is also
known as a cabinet projection.
\begin{key}{/tikz/data visualization/xyz axes=\meta{options}}
This key applies the \meta{options} to the |x axis|, the |y axis|, and
the |z axis|.
\end{key}
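For instance, a small three-dimensional scatter plot could be set up as follows
(an illustrative sketch, shown as code only):
%
\begin{codeexample}[code only]
\tikz \datavisualization
  [xyz Cartesian cabinet,
   visualize as scatter]
  data {
    x, y, z
    0, 0, 0
    1, 0, 0
    0, 1, 0
    0, 0, 1
    1, 1, 1
  };
\end{codeexample}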
\end{key}
\begin{key}{/tikz/data visualization/uv Cartesian}
This axis system works like |xy Cartesian|, but it introduces two axes
called |u axis| and |v axis| rather than the |x axis| and the |y axis|. The
idea is that in addition to a ``major'' $xy$-coordinate system this is also
a ``smaller'' or ``minor'' coordinate system in use for depicting, say,
small vectors with respect to this second coordinate system.
\begin{key}{/tikz/data visualization/uv axes=\meta{options}}
Applies the \meta{options} to both the |u axis| and the |v axis|.
\end{key}
\end{key}
\begin{key}{/tikz/data visualization/uvw Cartesian cabinet}
Like |xyz Cartesian cabinet|, but for the $uvw$-system.
\begin{key}{/tikz/data visualization/uvw axes=\meta{options}}
Like |xyz axes|.
\end{key}
\end{key}
\subsection{Ticks and Grids}
\label{section-dv-ticks-and-grids}
\subsubsection{Concepts}
A \emph{tick} is a small visual indication on an axis of the value of the
axis's attribute at the position where the tick is shown. A tick may be
accompanied additionally by a textual representation, but it need not. A
\emph{grid line} is similar to a tick, but it is not an indication on the axis,
but rather a whole line that indicates all positions where the attribute has a
certain value. Unlike ticks, grid lines (currently) are not accompanied by a
textual representation.
Just as for axes, the data visualization system decouples the specification of
which ticks are present \emph{in principle} from where they are visualized. In
the following, I describe how you specify which ticks and grid lines you would
like to be drawn and what they should look like (their styling). The axis system
of your choice will then visualize the ticks at a sensible position for the
chosen system. For details on how to change where the whole axis is shown along
with its ticks, see Section~\ref{section-dv-visualize-ticks}.
Specifying which ticks you are interested in is done as follows: First, you use
the |ticks| key (or, for specifying which grid lines should be present, the |grid|
key). This key takes several possible options, described in detail in the
following, which have different effects:
%
\begin{enumerate}
\item Keys like |step=10| or |minor steps between steps| cause a
``semi-automatic'' computation of possible steps. Here, you explicitly
specify the stepping between ticks, but the position of the first tick and
the number of ticks are computed automatically according to the range of
possible values for the attribute.
\item Keys like |few|, |some|, or |many| can be passed to |ticks| in order
to have \tikzname\ compute good tick positions automatically. This is
usually what you want to happen, which is why most axis systems will
implicitly say |ticks={some}|.
\item Keys like |at| or |also at| provide ``absolute control'' over which
ticks or grid lines are shown. For these keys, you can not only specify
at what value a tick should be shown, but also its styling and also
whether it is a major, minor, or subminor tick or grid line.
\end{enumerate}
In the following, the main keys |ticks| and |grid| are documented first. Then
the different kinds of ways of specifying where ticks or grid lines should be
shown are explained.
\subsubsection{The Main Options: Tick and Grid}
\begin{key}{/tikz/data visualization/axis options/ticks=\meta{options} (default some)}
This key can be passed to an axis in order to configure which ticks are
present for the axis. The possible \meta{options} include, for instance,
keys like |step|, which is used to specify a stepping for the ticks, but
also keys like |major| or |minor| for specifying the positions of major and
minor ticks in detail. The list of possible options is described in the
rest of this section.
Note that the |ticks| option will only configure which ticks should be
shown in principle. The actual rendering is done only when the
|visualize ticks| key is used, documented in
Section~\ref{section-dv-visualize-ticks}, which is typically done only
internally by an axis system.
The \meta{options} will be executed with the path prefix
|/tikz/data visualization/|. When the |ticks| key is used multiple times
for an axis, the \meta{options} accumulate.
%
\begin{codeexample}[width=6cm,preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
scientific axes, visualize as line,
x axis={ticks={step=24, minor steps between steps=3},
label=hours}]
data {
x, y
0, 0
10, 0
20, 0.5
30, 0.75
40, 0.7
50, 0.6
60, 0.5
70, 0.45
80, 0.47
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/axis options/grid=\meta{options} (default at default ticks)}
This key is similar to |ticks|, only it is used to configure where grid
lines should be shown rather than ticks. In particular, the options that
can be passed to the |ticks| key can also be passed to the |grid| key. Just
like |ticks|, the \meta{options} only specify which grid lines should be
drawn in principle; it is the job of the |visualize grid| key to actually
cause any grid lines to be shown.
If you do not specify any \meta{options}, the default text
|at default ticks| is used. This option causes grid lines to be drawn at
all positions where ticks are shown by default. Since this is usually exactly
what you would like to happen, most of the time you just need to say
|all axes=grid| to cause a grid to be shown.
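For instance, the following illustrative sketch (shown as code only; the data
values are arbitrary) relies on this default and simply switches the grid on
for both axes:
%
\begin{codeexample}[code only]
\tikz \datavisualization [scientific axes,
                          all axes={grid, ticks=few},
                          visualize as line]
  data {
    x, y
    0, 0
    1, 1
    2, 4
    3, 9
  };
\end{codeexample}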
\end{key}
\begin{key}{/tikz/data visualization/axis options/ticks and grid=\meta{options}}
This key passes the \meta{options} to both the |ticks| key and also to the
|grid| key. This is useful when you want to specify some special points
explicitly where you wish a tick to be shown and also a grid line.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
visualize as smooth line,
all axes= {grid, unit length=1.25cm},
y axis={ ticks=few },
x axis={ ticks=many, ticks and grid={ major also at={(pi/2) as $\frac{\pi}{2}$}}}]
data [format=function] {
var x : interval [-pi/2:3*pi] samples 50;
func y = sin(\value x r);
};
\end{codeexample}
%
\end{key}
\subsubsection{Semi-Automatic Computation of Tick and Grid Line Positions}
\label{section-dv-concept-tick-placement-strategies}
Consider the following problem: The data visualization engine determines that
in a plot the $x$-values vary between $17.4$ and $34.5$. In this case, we
certainly do not want, say, ten ticks at exactly ten evenly spaced positions
starting with $17.4$ and ending with $34.5$, because this would yield ticks at
positions like $32.6$. Ticks should be placed at ``nice'' positions like $20$,
$25$, and $30$.
Determining which positions are ``nice'' is somewhat difficult. In the above
example, the positions $20$, $25$, and $30$ are certainly nice, but only three
ticks may be a bit too few. Better might be the tick positions $17.5$,
$20$, $22.5$, through to $32.5$. However, users might prefer even numbers over
fractions like $2.5$ as the stepping.
A \emph{tick placement strategy} is a method of automatically deciding which
positions are \emph{good} for placing ticks. The data visualization engine
comes with a number of predefined strategies, but you can also define new ones
yourself. When the data visualization is requested to automatically determine
``good'' positions for the placement of ticks on an axis, it uses one of
several possible \emph{basic strategies}. These strategies differ dramatically
in which tick positions they will choose: For a range of values between $5$ and
$1000$, a |linear steps| strategy might place ticks at positions $100$, $200$,
through to $1000$, while an |exponential steps| strategy would prefer the tick
positions $10$, $100$ and $1000$. The exact number and values of the tick
positions chosen by either strategy can be fine-tuned using additional options
like |step| or |about|.
Here is an example of the different stepping chosen when one varies the tick
placement strategy:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [scientific axes, visualize as smooth line]
data [format=function] {
var x : interval [1:11];
func y = \value x*\value x;
};
\end{tikzpicture}
\qquad
\begin{tikzpicture}
\datavisualization [scientific axes, visualize as smooth line,
y axis={exponential steps},
x axis={ticks={quarter about strategy}},
]
data [format=function] {
var x : interval [1:11];
func y = \value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
Two strategies are always available: |linear steps|, which yields
(semi)automatic ticks at evenly spaced positions, and |exponential steps|,
which yields (semi)automatic ticks at exponentially increasing
positions -- which is exactly what is needed for logarithmic plots. These
strategies are detailed in Section~\ref{section-dv-strategies}.
The following options are used to configure tick placement strategies like
|linear steps|. Unlike the basic choice of a placement strategy, which is an
axis option, the following should be passed to the option |ticks| or |grid|
only. So, you would write things like |x axis={ticks={step=2}}|, but
|x axis={linear steps}| (the strategy itself is passed directly to the axis).
\begin{key}{/tikz/data visualization/step=\meta{value} (initially 1)}
The value of this key is used to determine the spacing of the major ticks.
The key is used by the |linear steps| and |exponential steps| strategies,
see the explanations in Section~\ref{section-dv-strategies} for details.
Basically, all ticks are placed at all multiples of \meta{value} that lie
in the attribute range interval.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [
school book axes, visualize as smooth line,
y axis={ticks={step=1.25}},
]
data [format=function] {
var x : interval [0:3];
func y = \value x*\value x/2;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/minor steps between steps=\meta{number} (default 9)}
Specifies that between any two major steps (whose positions are specified
by the |step| key), there should be \meta{number} many minor steps. Note
that the default of |9| is exactly the right number so that each interval
between two minor steps is exactly a tenth of the size of a major step. See
also Section~\ref{section-dv-strategies} for further details.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\begin{tikzpicture}
\datavisualization [school book axes, visualize as smooth line,
x axis={ticks={minor steps between steps=3}},
y axis={ticks={minor steps between steps}},
]
data [format=function] {
var x : interval [-1.5:1.5];
func y = \value x*\value x;
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/phase=\meta{value} (initially 0)}
See Section~\ref{section-dv-strategies} for details on how the phase of
steps influences the tick placement.
\end{key}
\subsubsection{Automatic Computation of Tick and Grid Line Positions}
The |step| option gives you ``total control'' over the stepping of ticks on an
axis, but you often do not know the correct stepping in advance. In this case,
you may prefer to have a good value for |step| being computed for you
automatically.
Like the |step| key, these options are passed to the |ticks| option. So, for
instance, you would write |x axis={ticks={about=4}}| to request about four
ticks to be placed on the $x$-axis.
\begin{key}{/tikz/data visualization/about=\meta{number}}
This key asks the data visualization to place \emph{about} \meta{number}
many ticks on an axis. It is not guaranteed that \emph{exactly}
\meta{number} many ticks will be used, rather the actual number will be the
closest number of ticks to \meta{number} so that their stepping is still
``good''. For instance, when you say |about=10|, it may happen that exactly
|10| ticks, but perhaps even |13| ticks, are actually selected, provided that
these numbers of ticks lead to good stepping values like |5| or |2.5|
rather than numbers like |3.4| or |7|. The method that is used to determine
which steppings are deemed to be ``good'' depends on the current tick
placement strategy.
\medskip
\textbf{Linear steps.}
Let us start with |linear steps|: First, the difference between the maximum
value $v_{\max}$ and the minimum value $v_{\min}$ on the axis is computed;
let us call it $r$ for ``range''. Then, $r$ is divided by \meta{number},
yielding a target stepping~$s$. If $s$ is a number like $1$ or $5$ or $10$,
then this number could be used directly as the new value of |step|.
However, $s$ will typically be something strange like $0.023\,45$ or
$345\,223.76$, so $s$ must be replaced by a better value like $0.02$ in the
first case and perhaps $250\,000$ in the second case.
In order to determine which number is to be used, $s$ is rewritten in the
form $m \cdot 10^k$ with $1 \le m < 10$ and $k \in \mathbb Z$. For
instance, $0.023\,45$ would be rewritten as $2.345 \cdot 10^{-2}$ and
$345\,223.76$ as $3.452\,2376 \cdot 10^5$. The next step is to replace the
still not-so-good number $m$ like $2.345$ or $3.452\,237$ by a ``good''
value $m'$. For this, the current value of the |about strategy| is used:
%
\begin{key}{/tikz/data visualization/about strategy=\meta{list}}
The \meta{list} is a comma-separated sequence of pairs
\meta{threshold}/\meta{value} like for instance |1.5/1.0| or |2.3/2.0|.
When a good value $m'$ is sought for a given $m$, we iterate over the
list and find the first pair \meta{threshold}/\meta{value} where
\meta{threshold} exceeds~$m$. Then $m'$ is set to \meta{value}. For
instance, if \meta{list} is |1.5/1.0,2.3/2.0,4/2.5,7/5,11/10|, which is
the default, then for $m=3.141$ we would get $m'=2.5$ since $4 >
3.141$, but $2.3 \le 3.141$. For $m=6.3$ we would get $m'=5$.
\end{key}
%
Once $m'$ has been determined, the stepping is set to $s' = m' \cdot 10^k$.
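As an illustrative check of this procedure (assuming the default
|about strategy|), consider again an attribute ranging from $17.4$ to $34.5$
and |about=4|: we get $r = 17.1$ and $s = r/4 = 4.275 = 4.275 \cdot 10^0$. The
first pair in the default strategy whose threshold exceeds $4.275$ is |7/5|, so
$m' = 5$ and the stepping becomes $s' = 5 \cdot 10^0 = 5$, which yields ticks
at $20$, $25$, and $30$.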
% Define an axis type
\tikzdatavisualizationset{
one dimensional axis/.style={
new Cartesian axis=axis,
axis={
attribute=main,
unit vector={(0pt,1pt)},
visualize axis={style=->},
visualize ticks={major={tick text at low},direction axis=perpendicular},
length=3cm
},
new Cartesian axis=perpendicular,
perpendicular={
attribute=perp,
unit vector={(1pt,0pt)},
include value=0,
include value=1
}
}
}
\def\showstrategy#1{
% Show the effect for the different strategies
\medskip
\begin{tikzpicture}
\foreach \max/\about [count=\c] in {10/5,20/5,30/5,40/5,50/5,60/5,70/5,80/5,90/5,100/5,100/3,100/10}
{
\begin{scope}[xshift=\c pt*30]
\datavisualization [#1,
one dimensional axis,
axis={
ticks={about=\about},
include value=0,
include value=\max
}
];
\node at (0,-5mm) [anchor=mid] {\texttt{\about}};
\end{scope}
}
\node at (30pt,-5mm) [anchor=mid east] {\texttt{about=\ \ }};
\end{tikzpicture}
}
The net effect of all this is that for the default strategy the only valid
steppings are the values $1$, $2$, $2.5$, and $5$, as well as every value obtainable
by multiplying one of these values by a power of ten. The following example
shows the effects of, first, setting |about=5| (corresponding to the |some|
option) and then having axes where the minimum value is always |0| and
where the maximum value ranges from |10| to |100| and, second, setting
|about| to the values from |3| (corresponding to the |few| option) and to
|10| (corresponding to the |many| option) while having the minimum at |0|
and the maximum at |100|:
\showstrategy{standard about strategy}
\medskip
\textbf{Exponential steps.}
For |exponential steps| the strategy for determining a good stepping value
is similar to |linear steps|, but with the following differences:
%
\begin{itemize}
\item Naturally, since the stepping value refers to the exponent, the
whole computation of a good stepping value needs to be done ``in
the exponent''. Mathematically speaking, instead of considering the
difference $r = v_{\max} - v_{\min}$, we consider the difference $r
= \log v_{\max} - \log v_{\min}$. With this difference, we still
compute $s = r / \meta{number}$ and let $s = m \cdot 10^k$ with $1
\le m < 10$.
\item It makes no longer sense to use values like $2.5$ for $m'$ since
this would yield a fractional exponent. Indeed, the only sensible
values for $m'$ seem to be $1$, $3$, $6$, and $10$. Because of
this, the |about strategy| is ignored and one of these values or a
multiple of one of them by a power of ten is used.
\end{itemize}
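As an illustrative example, for an axis ranging from $5$ to $1000$ and
|about=3| we get $r = \log 1000 - \log 5 \approx 2.3$ and
$s \approx 0.77 = 7.7 \cdot 10^{-1}$; rounding to a permissible value yields
$m' = 10$ and hence a stepping of $1$ in the exponent, which gives exactly the
tick positions $10$, $100$, and $1000$ mentioned earlier.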
The following example shows the chosen steppings for a maximum varying from
$10^1$ to $10^5$ and from $10^{10}$ to $10^{50}$ as well as for $10^{100}$
for |about=3|:
\medskip
\begin{tikzpicture}
\foreach \max [count=\c] in {1,...,5,10,20,...,50,100}
{
\begin{scope}[xshift=\c pt*40]
\datavisualization [
one dimensional axis,
axis={
logarithmic,
ticks={about=3},
include value=1,
include value=1e\max
}
];
\end{scope}
}
\end{tikzpicture}
\medskip
\textbf{Alternative strategies.}
In addition to the standard |about strategy|, there are some additional
strategies that you might wish to use instead:
\begin{key}{/tikz/data visualization/standard about strategy}
Permissible values for $m'$ are: $1$, $2$, $2.5$, and~$5$. This
strategy is the default strategy.
\end{key}
\begin{key}{/tikz/data visualization/euro about strategy}
Permissible values for $m'$ are: $1$, $2$, and~$5$. These are the same
values as for the Euro coins, hence the name.
\showstrategy{euro about strategy}
\end{key}
\begin{key}{/tikz/data visualization/half about strategy}
Permissible values for $m'$: $1$ and $5$. Use this strategy if only
powers of $10$ or halves thereof seem logical.
\showstrategy{half about strategy}
\end{key}
\begin{key}{/tikz/data visualization/decimal about strategy}
The only permissible value for $m'$ is $1$. This is an even more
radical version of the previous strategy.
\showstrategy{decimal about strategy}
\end{key}
\begin{key}{/tikz/data visualization/quarter about strategy}
Permissible values for $m'$ are: $1$, $2.5$, and $5$.
\showstrategy{quarter about strategy}
\end{key}
\begin{key}{/tikz/data visualization/int about strategy}
Permissible values for $m'$ are: $1$, $2$, $3$, $4$, and $5$.
\showstrategy{int about strategy}
\end{key}
\end{key}
\begin{key}{/tikz/data visualization/many}
This is an abbreviation for |about=10|.
\end{key}
\begin{key}{/tikz/data visualization/some}
This is an abbreviation for |about=5|.
\end{key}
\begin{key}{/tikz/data visualization/few}
This is an abbreviation for |about=3|.
\end{key}
\begin{key}{/tikz/data visualization/none}
Switches off the automatic step computation. Unless you use |step=|
explicitly to set a stepping, no ticks will be (automatically) added.
\end{key}
\subsubsection{Manual Specification of Tick and Grid Line Positions}
The automatic computation of ticks and grid lines will usually do a good job,
but not always. For instance, you might wish to have ticks exactly at, say,
prime numbers or at Fibonacci numbers or you might wish to have an additional
tick at $\pi$. In these cases you need more direct control over the
specification of tick positions.
First, it is important to understand that the data visualization system
differentiates between three kinds of ticks and grid lines: major, minor, and
subminor. The major ticks are the most prominent ticks where, typically, a
textual representation of the tick is shown; and the major grid lines are the
thickest. The minor ticks are smaller, more numerous, and lie between major
ticks. They are used, for instance, to indicate positions in the middle between
major ticks or at all integer positions between major ticks. Finally, subminor
ticks are even smaller than minor ticks and they lie between minor ticks.
Four keys are used to configure the different kinds:
\begin{key}{/tikz/data visualization/major=\meta{options}}
The key can be passed as an option to the |ticks| key and also to the
|grid| key, which in turn is passed as an option to an axis. The
\meta{options} passed to |major| specify at which positions major
ticks/grid lines should be shown (using the |at| option and |also at|
option) and also any special styling. The different possible options are
described later in this section.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[ school book axes, visualize as smooth line,
x axis={ticks={major={at={1, 1.5, 2}}}}]
data [format=function] {
var x : interval [-1.25:2];
func y = \value x * \value x / 2;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/minor=\meta{options}}
Like |major|, only for minor ticks/grid lines.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[ school book axes, visualize as smooth line,
x axis={grid={minor={at={1, 1.5, 2}}}}]
data [format=function] {
var x : interval [-1.25:2];
func y = \value x * \value x / 2;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/subminor=\meta{options}}
Like |major|, only for subminor ticks/grid lines.
\end{key}
\begin{key}{/tikz/data visualization/common=\meta{options}}
This key allows you to specify \meta{options} that apply to |major|,
|minor| and |subminor| alike. It does not make sense to use |common| to
specify positions (since you typically do not want both a major and a minor
tick at the same position), but it can be useful to configure, say, the
size of all kinds of ticks:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[ school book axes, visualize as smooth line,
x axis={ticks={minor steps between steps, common={low=0}}} ]
data [format=function] {
var x : interval [-1.25:2];
func y = \value x * \value x / 2;
};
\end{codeexample}
%
\end{key}
The following keys can now be passed to the |major|, |minor|, and |subminor|
keys to specify where ticks or grid lines should be shown:
\begin{key}{/tikz/data visualization/at=\meta{list}}
Basically, the \meta{list} must be a list of values that is processed with
the |\foreach| macro (thus, it can contain ellipses to specify ranges of
values). Empty values are skipped.
The effect of passing |at| to a |major|, |minor|, or |subminor| key is that
ticks or grid lines on the axis will be placed exactly at the values in
\meta{list}. Here is an example:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[ school book axes, visualize as smooth line,
x axis={ticks={major={at={-1,0.5,(pi/2)}}}}]
data [format=function] {
var x : interval [-1.25:2];
func y = \value x * \value x / 2;
};
\end{codeexample}
When this option is used, any previously specified tick positions are
overwritten by the values in \meta{list}. Automatically computed ticks are
also overwritten. Thus, this option gives you complete control over where
ticks should be placed.
Normally, the individual values inside the \meta{list} are just numbers
that are specified in the same way as an attribute value. However, such a
value may also contain the keyword |as|, which allows you to specify the
styling of the tick in detail. Section~\ref{section-dv-ticks-styling}
details how this works.
It is often a bit cumbersome that one has to write things like
%
\begin{codeexample}[code only]
some axis = {ticks = {major = {at = {...}}}}
\end{codeexample}
%
A slight simplification is given by the following keys, which can be passed
directly to |ticks| and |grid|:
%
\begin{key}{/tikz/data visualization/major at=\meta{list}}
A shorthand for |major={at={|\meta{list}|}}|.
\end{key}
%
\begin{key}{/tikz/data visualization/minor at=\meta{list}}
A shorthand for |minor={at={|\meta{list}|}}|.
\end{key}
%
\begin{key}{/tikz/data visualization/subminor at=\meta{list}}
A shorthand for |subminor={at={|\meta{list}|}}|.
\end{key}
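With these shorthands, the long form shown above can be written more compactly,
for instance (illustrative values, shown as code only):
%
\begin{codeexample}[code only]
some axis = {ticks = {major at = {2, 4, 8, 16}}}
\end{codeexample}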
\end{key}
\begin{key}{/tikz/data visualization/also at=\meta{list}}
This key is similar to |at|, but it causes ticks or grid lines to be placed
at the positions in the \meta{list} \emph{in addition} to the ticks that
have already been specified either directly using |at| or indirectly using
keys like |step| or |some|. The effect of multiple calls of this key
accumulate. However, when |at| is used after an |also at| key, the |at| key
completely resets the positions where ticks or grid lines are shown.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[ school book axes, visualize as smooth line,
x axis={grid, ticks and grid={major={also at={0.5}}}}]
data [format=function] {
var x : interval [-1.25:2];
func y = \value x * \value x / 2;
};
\end{codeexample}
%
As for |at|, there are some shorthands available:
%
\begin{key}{/tikz/data visualization/major also at=\meta{list}}
A shorthand for |major={also at={|\meta{list}|}}|.
\end{key}
%
\begin{key}{/tikz/data visualization/minor also at=\meta{list}}
A shorthand for |minor={also at={|\meta{list}|}}|.
\end{key}
%
\begin{key}{/tikz/data visualization/subminor also at=\meta{list}}
A shorthand for |subminor={also at={|\meta{list}|}}|.
\end{key}
\end{key}
\subsubsection{Styling Ticks and Grid Lines: Introduction}
\label{section-dv-ticks-styling}
When a tick, a tick label, or a grid line is visualized on the page, a whole
regiment of styles influences the appearance. The reason for this large number
of interdependent styles is the fact that we often wish to influence only a
very specific part of how a tick is rendered while leaving the other aspects
untouched: Sometimes we need to modify just the font of the tick label;
sometimes we wish to change the length of the tick label and the tick label
position at the same time; sometimes we wish to change the color of grid lines,
ticks, and tick labels; and sometimes we wish to generally change the thickness
of all ticks.
Let us go over the different kinds of things that can be styled (grid lines,
ticks, and tick labels) one by one and let us have a look at which styles are
involved. We will start with the grid lines, since they turn out to be the most
simple, but first let us have a look at the general |style| and |styling|
mechanism that is used in many places in the following:
\subsubsection{Styling Ticks and Grid Lines: The Style and Node Style Keys}
\label{section-dv-style}
All keys of the data visualization system have the path prefix
|/tikz/data visualization|. This is not only true for the main keys like
|scientific axes| or |visualize as line|, but also for keys that govern how
ticks are visualized. In particular, a style like |every major grid| has the
path prefix |/tikz/data visualization| and all keys stored in this style are
also executed with this path prefix.
Normally, this does not cause any trouble since most of the keys and even
styles used in a data visualization are intended to configure what is shown in
the visualization. However, at some point, we may also wish to specify options
that no longer configure the visualization in general, but specify the
appearance of a line or a node on the \tikzname\ layer.
Two keys are used to ``communicate'' with the \tikzname\ layer:
\begin{key}{/tikz/data visualization/style=\meta{\tikzname\ options}}
This key takes options whose path prefix is |/tikz|, not
|/tikz/data visualization|. These options will be \emph{appended} to a
current list of such options (thus, multiple calls of this key accumulate).
The resulting list of keys is not executed immediately, but it will be
executed whenever the data visualization engine calls the \tikzname\ layer
to draw something (these places will be indicated in the following).
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
all axes={ticks={style=blue}, length=3cm},
y axis={grid, grid={minor steps between steps, major={style=red}}},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/styling}
Executing this key will cause all ``accumulated'' \tikzname\ options from
previous calls to the key |/tikz/data visualization/style| to be executed.
Thus, you use |style| to set \tikzname\ options, but you use |styling| to
actually apply these options. Usually, you do not call this option directly
since this application is only done deep inside the data visualization
engine.
\end{key}
Similar to |style| (and |styling|) there also exist the |node style| (and
|node styling|) key that takes \tikzname\ options that apply to nodes only --
in addition to the usual |style|.
\begin{key}{/tikz/data visualization/node style=\meta{\tikzname\ options}}
This key works like |style|, but it has an effect only on nodes that are
created during a data visualization. This includes tick labels and axis
labels:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
all axes={ticks={node style=red}, length=3cm},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
Note that in the example the ticks themselves (the little thicker lines)
are not red.
\end{key}
\begin{key}{/tikz/data visualization/node styling}
Executing this key will cause all ``accumulated'' node stylings to be
executed.
\end{key}
\subsubsection{Styling Ticks and Grid Lines: Styling Grid Lines}
\label{section-dv-styling-grid-lines}
When a grid line is visualized, see
Section~\ref{section-dv-visualize-gridlines} for details on when this happens,
the following styles are executed in the specified order.
%
\begin{enumerate}
\item |grid layer|.
\item |every grid|.
\item |every major grid| or |every minor grid| or |every subminor grid|,
depending on the kind of grid line.
\item locally specified options for the individual grid line, see
Section~\ref{section-dv-local-styles}.
\item |styling|, see Section~\ref{section-dv-style}.
\end{enumerate}
All of these keys have the path prefix |/tikz/data visualization|. However, the
options stored in the first style (|grid layer|) and also in the last
(|styling|) are executed with the path prefix |/tikz| (see
Section~\ref{section-dv-style}).
Let us now have a look at these keys in detail:
\begin{stylekey}{/tikz/data visualization/grid layer (initially on background layer)}
\label{section-dv-grid-layer}%
This key is used to specify the \emph{layer} on which grid lines should
be drawn (layers are explained in Section~\ref{section-tikz-backgrounds}).
By default, all grid lines are placed on the |background| layer and thus
behind the data visualization. This is a sensible strategy since it avoids
obscuring the more important data with the far less important grid lines.
However, you can change this style to ``get the grid lines to the front'':
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
all axes={
length=3cm,
grid,
grid={minor steps between steps}
},
grid layer/.style=, % none, so on top of data (bad idea)
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
When this style is executed, the keys stored in the style will be executed
with the prefix |/tikz|. Normally, you should only set this style to be
empty or to |on background layer|.
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every grid}
This style provides overall configuration options for grid lines. By
default, it is set to the following:
%
\begin{codeexample}[code only]
low=min, high=max
\end{codeexample}
%
This causes grid lines to span all possible values when they are
visualized, which is usually the desired behavior (the |low| and |high|
keys are explained in Section~\ref{section-dv-visualize-ticks}). You can
append the |style| key to this style to configure the overall appearance of
grid lines. It should be noted that settings to |style| inside |every grid|
will take precedence over ones in |every major grid| and |every minor grid|.
In the following example we cause all grid lines to be dashed (which is not
a good idea in general since it creates a distracting background pattern).
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
all axes={length=3cm, grid},
every grid/.append style={style=densely dashed},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every major grid}
This style configures the appearance of major grid lines. It does so by
calling the |style| key to setup appropriate \tikzname\ options for
visualizing major grid lines. The default definition of this style is:
%
\begin{codeexample}[code only]
style = {help lines, thin, black!25}
\end{codeexample}
%
In the following example, we use thin major blue grid lines:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes,
all axes={
length=3cm,
grid,
grid={minor steps between steps}
},
every major grid/.style = {style={blue, thin}},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
As can be seen, this is not exactly visually pleasing. The default settings
for the grid lines should work in most situations; you may wish to increase
the blackness level, however, when you experience trouble during printing
or projecting graphics.
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every minor grid}
Works like |every major grid|. The default is
%
\begin{codeexample}[code only]
style = {help lines, black!25}
\end{codeexample}
%
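As with |every major grid|, this style can be adjusted. For instance, the
following illustrative line (shown as code only) would make all minor grid
lines dotted:
%
\begin{codeexample}[code only]
every minor grid/.append style={style=dotted}
\end{codeexample}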
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every subminor grid}
Works like |every major grid|. The default is
%
\begin{codeexample}[code only]
style = {help lines, black!10}
\end{codeexample}
%
\end{stylekey}
\subsubsection{Styling Ticks and Grid Lines: Styling Ticks and Tick Labels}
\label{section-dv-styling-ticks}
Styling ticks and tick labels is somewhat similar to styling grid lines. Let us
start with the tick \emph{mark}, that is, the small line that represents the
tick. When this mark is drawn, the following styles are applied:
%
\begin{enumerate}
\item |every ticks|.
\item |every major ticks| or |every minor ticks| or |every subminor ticks|,
depending on the kind of ticks to be visualized.
\item locally specified options for the individual tick, see
Section~\ref{section-dv-local-styles}.
\item |tick layer|
\item |every odd tick| or |every even tick|, see
Section~\ref{section-dv-stacking}.
\item |draw|
\item |styling|, see Section~\ref{section-dv-style}.
\end{enumerate}
For the tick label node (the node containing the textual representation of the
attribute's value at the tick position), the following styles are applied:
%
\begin{enumerate}
\item |every ticks|.
\item |every major ticks| or |every minor ticks| or |every subminor ticks|,
depending on the kind of ticks to be visualized.
\item locally specified options for the individual tick, see
Section~\ref{section-dv-local-styles}.
\item |tick node layer|
\item |every odd tick| or |every even tick|, see
Section~\ref{section-dv-stacking}.
\item |styling|, see Section~\ref{section-dv-style}.
\item |node styling|, see Section~\ref{section-dv-style}.
\end{enumerate}
\begin{stylekey}{/tikz/data visualization/every ticks}
This style allows you to configure the appearance of ticks using the
|style| and |node style| key. Here is (roughly) the default definition of
this style:
%
\begin{codeexample}[code only]
node style={
font=\footnotesize,
inner sep=1pt,
outer sep=.1666em,
rounded corners=1.5pt
}
\end{codeexample}
%
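For instance, a different tick label font could be requested by appending a
|node style| to this style (an illustrative sketch, shown as code only):
%
\begin{codeexample}[code only]
every ticks/.append style={node style={font=\scriptsize\itshape}}
\end{codeexample}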
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every major ticks}
The default is
%
\begin{codeexample}[code only]
style={line cap=round}, tick length=2pt
\end{codeexample}
%
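For instance, longer major tick marks could be requested by appending to this
style (an illustrative sketch, shown as code only):
%
\begin{codeexample}[code only]
every major ticks/.append style={tick length=4pt}
\end{codeexample}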
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every minor ticks}
The default is
%
\begin{codeexample}[code only]
style={help lines,thin, line cap=round}, tick length=1.4pt
\end{codeexample}
%
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every subminor ticks}
The default is
%
\begin{codeexample}[code only]
style={help lines, line cap=round}, tick length=0.8pt
\end{codeexample}
%
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/tick layer (initially on background layer)}
Like |grid layer|, this key specifies on which layer the ticks should be
placed.
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/tick node layer (initially \normalfont empty)}
Like |tick layer|, but now for the nodes. By default, tick nodes are placed
on the main layer and thus on top of the data in case the tick nodes lie
inside the data.
\end{stylekey}
\subsubsection{Styling Ticks and Grid Lines: Exceptional Ticks}
You may sometimes wish to style a few ticks differently from the other ticks.
For instance, in the axis system |school book axes| there should be a tick
label at the |0| position only on one axis and then this label should be offset
a bit. In many cases this is easy to achieve: When you add a tick ``by hand''
using the |at| or |also at| option, you can add any special options in square
brackets.
However, in some situations the special tick position has been computed
automatically for you, for instance by the |step| key or by saying |ticks=some|.
In this case, adding a tick mark with the desired options using |also at| would
cause the tick mark with the correct options to be shown in addition to the
tick mark with the wrong options. In cases like this one, the following option
may be helpful:
\begin{key}{/tikz/data visualization/options at=\meta{value} |as [|\meta{options}|]|}
This key causes the \meta{options} to be executed for any tick mark(s) at
\meta{value} in addition to any options given already for this position:
%
\begin{codeexample}[
width=7cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization [
scientific axes,
visualize as smooth line,
x axis={ticks={major={
options at = 3 as [no tick text],
also at = (pi) as
[{tick text padding=1ex}] $\pi$}}}]
data [format=function] {
var x : interval[0:2*pi];
func y = sin(\value x r);
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/no tick text at=\meta{value}}
Shorthand for |options at=|\meta{value}| as [no tick text]|.
\end{key}
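For instance, the |options at = 3 as [no tick text]| from the example above
can equivalently be written as follows:
%
\begin{codeexample}[code only]
x axis={ticks={major={no tick text at=3}}}
\end{codeexample}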
\subsubsection{Styling Ticks and Grid Lines: Styling and Typesetting a Value}
\label{section-dv-local-styles}
\label{section-dv-tick-labels}
The |at| and |also at| keys allow you to provide a
comma-separated \meta{list} of \meta{value}s where ticks or grid lines should
be placed. In the simplest case, the \meta{value} is simply a number. However,
the general syntax allows three different kinds of \meta{value}s:
%
\begin{enumerate}
\item \meta{value}
\item \meta{value} |as| |[|\meta{local options}|]|
\item \meta{value} |as| \opt{|[|\meta{local options}|]|} \meta{text}
\end{enumerate}
In the first case, the \meta{value} is just a number that is interpreted like
any other attribute value.
In the second case, the keyword |as| is followed by some options in square
brackets, and nothing follows the closing square bracket. When the tick or grid
line at position \meta{value} is shown, the \meta{local options} are executed
first. These can use the |style| key or the |node style| key to configure the
appearance of this single tick or grid line. You can also use keys like |low|
or |high| to influence how large the grid lines or the ticks are, or keys like
|tick text at low| to explicitly hide or show a tick label.
In the third case, which is only important for |ticks| and not for |grid|, the
same happens as in the second case, but the text that is shown as tick label is
\meta{text} rather than the automatically generated tick label. This automatic
generation of tick labels is explained in the following.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes=clean,
x axis={length=2.5cm, ticks={major at={
5,
6 as [style=red],
7 as [{style=blue, low=-1em}],
8 as [style=green] $2^3$,
10 as ten
}}},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
A value like ``2'' or ``17'' could just be used as \meta{text} to be displayed
in the node of a tick label. However, things are more difficult when the
to-be-shown value is $0.0000000015$, because we then would typically (but not
always) prefer something like $1.5 \cdot 10^{-9}$ to be shown. Also, we might
wish a unit to be added like $23\mathrm{m}/\mathrm{s}$. Finally, we might wish
a number like $3.141$ to be replaced by $\pi$. For these reasons, the data
visualization system does not simply put the to-be-shown value in a node as
plain text. Instead, the number is passed to a \emph{typesetter} whose job it
is to typeset this number nicely using \TeX's typesetting capabilities. The
only exception is, as indicated above, the third syntax version of the |at| and
|also at| keys, where \meta{text} is placed in the tick label's node,
regardless of what the typesetting would usually do.
The text produced by the automatic typesetting is computed as follows:
%
\begin{enumerate}
\item The current contents of the key |tick prefix| is put into the node.
\item This is followed by a call of the key |tick typesetter| which gets
the \meta{value} of the tick as its argument in scientific notation.
\item This is followed by the contents of the key |tick suffix|.
\end{enumerate}
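For example, with |tick prefix=$\langle$|, |tick suffix=$]$|, and the default
typesetter, a tick at value $25$ (which is passed to the typesetter as |2.5e1|)
would come out roughly as $\langle 25]$.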
Let us have a look at these keys in detail:
\begin{key}{/tikz/data visualization/tick prefix=\meta{text} (initially \normalfont empty)}
The \meta{text} will be put in front of every typeset tick:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes, all axes={ticks=few, length=2.5cm},
x axis={ticks={tick prefix=$\langle$, tick suffix=$]$}},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/tick suffix=\meta{text} (initially \normalfont empty)}
Works like |tick prefix|. This key is especially useful for adding units
like ``cm'' or ``$\mathrm m/\mathrm s$'' to every tick label. For this
reason, there is a (near) alias that is easier to memorize:
%
\begin{key}{/tikz/data visualization/tick unit=\meta{roman math text}}
A shorthand for |tick suffix={$\,\rm|\meta{roman math text}|$}|:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization
[scientific axes, all axes={length=3cm},
x axis={ticks={tick unit=s}},
y axis={ticks={tick unit=m/s^2}},
visualize as line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
\end{key}
\end{key}
\begin{key}{/tikz/data visualization/tick typesetter=\meta{value}}
The key gets called for each number that should be typeset. The argument
\meta{value} will be in scientific notation (like |1.0e1| for $10$). By
default, this key applies |\pgfmathprintnumber| to its argument. This
command is a powerful number printer whose configuration is documented in
Section~\ref{pgfmath-numberprinting}.
You are invited to change the code underlying this key so that a different typesetting
mechanism is used. Here is a (not quite finished) example that shows how,
say, numbers could be printed in terms of multiples of $\pi$:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\def\mytypesetter#1{%
\pgfmathparse{#1/pi}%
\pgfmathprintnumber{\pgfmathresult}$\pi$%
}
\tikz \datavisualization
[school book axes, all axes={unit length=1.25cm},
x axis={ticks={step=(0.5*pi), tick typesetter/.code=\mytypesetter{##1}}},
y axis={include value={-1,1}},
visualize as smooth line]
data [format=function] {
var x : interval [0.5:7];
func y = sin(\value x r);
};
\end{codeexample}
%
\end{key}
\subsubsection{Stacked Ticks}
\label{section-dv-stacking}
Sometimes, tick labels are so long or so numerous that the texts of adjacent
tick labels overlap (or have too little padding):
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
There are two ways to address this problem:
%
\begin{itemize}
\item One can rotate the labels on horizontal axes:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
x axis={ticks={node style={rotate=90, anchor=east}}},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
This is often a good solution, but may be hard to read. Also consider
rotating labels only by $45^\circ$ or $30^\circ$.
\item One can specify different shifts of the nodes for the different
    ticks, so that the tick texts no longer overlap.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
x axis={ticks={major at={0,4000,8000,
2000 as [node style={yshift=-1em}],
6000 as [node style={yshift=-1em}],
10000 as [node style={yshift=-1em}]}}},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
However, specifying shifts ``by hand'' in the above way is not always
an option, especially when the tick positions should be computed
automatically. Instead, the |stack| option can be used, which is much
easier to use and gives better results:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm}, x axis={ticks=stack},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
\end{itemize}
The |stack| option is actually just a style that gives you access to the
general even/odd mechanism for ticks with labels. Whenever a tick mark is
created where a tick label is also to be drawn, two special things happen:
%
\begin{enumerate}
\item For every odd tick mark, the |every odd tick| style is executed, for
    every even tick mark the |every even tick| style. Here, ``odd'' and ``even''
are with respect to the order in which the ticks have been added to the
list of |at| positions for each major, minor, or subminor tick list,
not with respect to the order in which they will appear on the axis.
Thus, when you write
%
\begin{codeexample}[code only]
ticks={major at={1,2,3,4}, major at={0,-1,-2}, minor at={9,8,7}}
\end{codeexample}
%
then for |1|, |3|, |0|, and |-2| as well as |9| and |7| the key
|every odd tick| will be executed, while |every even tick| will be
executed for positions |2|, |4|, |-1|, and also |8|.
\item When a tick node label is shown at the |low| position of the tick
mark, the dimension stored in the key |tick text low even padding| is
added to the |low| value. Provided that this padding is not zero (which
is the default), the length of the even tick marks will be increased
and the tick label node will be placed at a greater distance from the
axis.
    Similar keys exist for padding ticks with labels at high positions and
    also at odd positions.
\end{enumerate}
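For instance, the following sketch (which again assumes that these styles can
be adjusted directly in the option list) would typeset every other tick label
in red:
%
\begin{codeexample}[code only]
\tikz \datavisualization [scientific axes,
    all axes={length=2.5cm},
    every odd tick/.append style={node style={red}},
    visualize as smooth line]
  data [format=function] {
    var y : interval[-100:100];
    func x = \value y*\value y;
  };
\end{codeexample}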
\begin{key}{/tikz/data visualization/tick text low even padding=\meta{dimension} (initially 0pt)}
When a tick label is shown at the low position of an even tick, the
    \meta{dimension} is added to the |low| value, see also
Section~\ref{section-dv-visualize-ticks}.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
x axis={ticks={tick text low even padding=-1em}},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
Note that \meta{dimension} should usually be non-positive.
\end{key}
The following keys work similarly:
%
\begin{key}{/tikz/data visualization/tick text low odd padding=\meta{dimension} (initially 0pt)}
\end{key}
%
\begin{key}{/tikz/data visualization/tick text high even padding=\meta{dimension} (initially 0pt)}
\end{key}
%
\begin{key}{/tikz/data visualization/tick text high odd padding=\meta{dimension} (initially 0pt)}
\end{key}
\begin{key}{/tikz/data visualization/tick text odd padding=\meta{dimension}}
    A shorthand for setting |tick text low odd padding| and
    |tick text high odd padding| at the same time.
\end{key}
\begin{key}{/tikz/data visualization/tick text even padding=\meta{dimension}}
    A shorthand for setting |tick text low even padding| and
    |tick text high even padding| at the same time.
\end{key}
\begin{key}{/tikz/data visualization/tick text padding=\meta{dimension}}
Sets all text paddings to \meta{dimension}.
\end{key}
\begin{key}{/tikz/data visualization/stack=\meta{dimension} (default 1em)}
Shorthand for |tick text even padding=|\meta{dimension}.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
x axis={ticks={stack=1.5em}},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
\end{key}
\begin{key}{/tikz/data visualization/stack'=\meta{dimension}}
Shorthand for |tick text odd padding=|\meta{dimension}. The difference to
    |stack| is that the set of values that are ``lowered'' is exactly exchanged
    with the set of values ``lowered'' by |stack|.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes,
all axes={length=2.5cm},
x axis={ticks=stack'},
visualize as smooth line]
data [format=function] {
var y : interval[-100:100];
func x = \value y*\value y;
};
\end{codeexample}
%
\end{key}
Note that the above keys have an effect on all tick labels of an axis, also on
special ticks that you may have added using the |also at| key. When using the
|stack| key, you should specify a |tick text padding| explicitly for such ticks:
%
\begin{codeexample}[
width=7cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\tikz \datavisualization
[scientific axes,
x axis={ticks={stack, many, major also at=
{(pi) as [{tick text padding=2.5em}] $\pi$}}},
visualize as smooth line]
data [format=function] {
var x : interval[0:(2*pi)];
func y = sin(\value x r);
};
\end{codeexample}
\subsubsection{Reference: Basic Strategies}
\label{section-dv-strategies}
\begin{key}{/tikz/data visualization/axis options/linear steps}
This strategy places ticks at positions that are evenly spaced by the
current value of |step|.
In detail, the following happens: Let $a$ be the minimum value of the data
values along the axis and let $b$ be the maximum. Let the current
\emph{stepping} be $s$ (the stepping is set using the |step| option, see
    below) and let the current \emph{phasing} be $p$ (set using the |phase|
    option). Then ticks are placed at all positions $i\cdot s + p$ that lie in the
interval $[a,b]$, where $i$ ranges over all integers.
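    For example, if the data ranges from $a = 17$ to $b = 34$, the stepping is
    $s = 3$, and the phase is $p = 0$, then ticks are placed at $18$, $21$,
    $24$, $27$, $30$, and $33$, since these are exactly the multiples of $3$
    that lie in $[17,34]$ (compare the $x$-axis of the example below).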
The tick positions computed in the way described above are \emph{major}
step positions. In addition to these, if the key
    |minor steps between steps| is set to some number $n$, then $n$ many minor
    ticks are introduced between each two major ticks (and also before the first
    and after the last major tick, provided the values still lie in the interval
    $[a,b]$). Note that if $n$ is $1$, then one minor tick will be added in the
middle between any two major ticks. Use a value of $9$ (not $10$) to
partition the interval between two major ticks into ten equally sized minor
intervals.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\begin{tikzpicture}
\datavisualization
[scientific axes={inner ticks, width=3cm},
x axis={ticks={step=3, minor steps between steps=2}},
y axis={ticks={step=.36}},
visualize as scatter]
data {
x, y
17, 30
34, 32
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\label{section-dv-exponential-strategy}
\begin{key}{/tikz/data visualization/axis options/exponential steps}
This strategy produces ticks at positions that are appropriate for
logarithmic plots. It is automatically selected when you use the
|logarithmic| option with an axis.
In detail, the following happens: As for |linear steps| let numbers $a$,
$b$, $s$, and $p$ be given. Then, major ticks are placed at all positions
$10^{i\cdot s+p}$ that lie in the interval $[a,b]$ for $i \in \mathbb{Z}$.
The minor steps are added in the same way as for |linear steps|. In
particular, they interpolate \emph{linearly} between major steps.
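    For instance, with a stepping of $s = 1$ and a phase of $p = 0$, major
    ticks are placed at $\dots, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, \dots$,
    restricted to those values that lie in $[a,b]$.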
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\begin{tikzpicture}
\datavisualization
[scientific axes,
x axis={logarithmic, length=2cm, ticks={step=1.5}},
y axis={logarithmic, ticks={step=1, minor steps between steps=9}},
visualize as scatter]
data {
x, y
1, 10
1000, 1000000
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\subsubsection{Advanced: Defining New Placement Strategies}
\begin{key}{/tikz/data visualization/axis options/tick placement strategy=\meta{macro}}
This key can be used to install a so-called \emph{tick placement strategy}.
Whenever |visualize ticks| is used to request some ticks to be visualized,
it is checked whether some automatic ticks should be created. This is the
case when the following key is set:
%
\begin{key}{/tikz/data visualization/compute step=\meta{code}}
The \meta{code} should compute a suitable value for the stepping to be
used by the \meta{macro} in the tick placement strategy.
For instance, the |step| key sets |compute step| to
|\def\tikz@lib@dv@step{#1}|. Thus, when you say |step=5|, then the
desired stepping of |5| is communicated to the \meta{macro} via the
macro |\tikz@lib@dv@step|.
\end{key}
Provided |compute step| is set to some nonempty value, upon visualization
of ticks the \meta{macro} is executed. Typically, \meta{macro} will first
call the \meta{code} stored in the key |compute step|. Then, it should
    implement some strategy that uses the value of the computed or desired
stepping to create appropriate |at| commands. To be precise, it should set
the keys |major|, |minor|, and/or |subminor| with some appropriate |at|
values.
Inside the call of \meta{macro}, the macro |\tikzdvaxis| will have been set
to the name of the axis for which default ticks need to be computed. This
allows you to access the minimum and the maximum value stored in the
|scaling mapper| of that axis.
%
\begin{codeexample}[width=7cm,preamble={\usetikzlibrary{datavisualization}}]
\def\silly{
\tikzdatavisualizationset{major={at={
2,3,5,7,11,13}}}
}
\begin{tikzpicture}
\datavisualization [
scientific axes, visualize as scatter,
x axis={tick placement strategy=\silly}
]
data {
x, y
0, 0
15, 15
};
\end{tikzpicture}
\end{codeexample}
%
\end{key}
\subsection{Advanced: Creating New Axis Systems}
The |datavisualization| library comes with a number of predefined axis systems,
like |scientific axes=clean|, but it is also possible to define new axis
systems. Doing so involves the following steps:
%
\begin{enumerate}
\item Creating a number of axes.
\item Configuring attributes of these axes like their length or default
scaling.
\item Creating visual representations of the axes.
\item Creating visual representations of the ticks and grid lines.
\end{enumerate}
The first step uses |new ... axis| keys to create new axes, the last steps use
|visualize ...| keys to create the visual representations of the axes.
Note that the axis system has no control over the actual attribute value ranges
and neither over which ticks need to be drawn. The axis system can only provide
good defaults and then specify \emph{how} the ticks or labels should be drawn
and \emph{where} on the page -- but not at which values.
In the following, as a running example let us develop an axis system
|our system| that does the following: For the $x$-axis it looks like a normal
scientific axis system, but there are actually two $y$-axes: One at the left
and one at the right, each using a different attribute, but both coexisting in
the same picture.
\subsubsection{Creating the Axes}
A new axis system is created as a style key with the prefix
|/tikz/data visualization|. Thus, we would write:
%
\begin{codeexample}[code only]
\tikzset{
data visualization/our system/.style={
...
}
}
\end{codeexample}
In our system we need three axes: The $x$-axis, the left axis and the right
axis. Since all of these axes are Cartesian axes, we write the following:
%
\begin{codeexample}[code only]
\tikzset{
data visualization/our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
}
}
\end{codeexample}
%
As can be seen, we also configure things so that the $x$-axis will use the |x|
attribute by default (users can later change this by saying
|x axis={attribute=|\meta{some other attribute}|}|), but we do not configure
the attributes of either the |left axis| or the |right axis|. We also make the
left and right axes point upward (the |x axis| needs no configuration here since
a Cartesian axis points right by default). The reason for not configuring their
attributes is that |left| would not be a particularly good attribute name; this
way we ensure that users have to pick names themselves (hopefully good ones).
The next step is to define a standard scaling for the axes. Here, we can use
the same as for |scientific axes|, so we would add the following keys to the
definition of |our system|:
%
\begin{codeexample}[code only]
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
\end{codeexample}
We now already have enough to try our system, although we will not yet see any
axes or ticks, but we will see the correct scaling of the attributes. Let us
first define a data group:
%
\begin{codeexample}[setup code]
\tikz \datavisualization data group {people and money} = {
data [set=people 1] {
time, people
1900, 1000000000
1920, 1500000000
1930, 2000000000
1980, 3000000000
}
data [set=people 2] {
time, people
1900, 2000000000
1920, 2500000000
1940, 4000000000
2000, 5700000000
}
data [set=money 1] {
time, money
1910, 1.1
1920, 2
1930, 5
1980, 2
}
data [set=money 2] {
time, money
1950, 3
1960, 3
1970, 4
1990, 3.5
}
};
\end{codeexample}
\begin{codeexample}[setup code,hidden]
\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
our system,
x axis={attribute=time, length=4cm},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2},
people 1={style={visualizer color=blue}},
people 2={style={visualizer color=blue!50}},
money 1={style={visualizer color=red}},
money 2={style={visualizer color=red!50}}]
data group {people and money};
\end{codeexample}
\subsubsection{Visualizing the Axes}
\label{section-dv-visualize-axis}
We must now show the axes themselves. For this we can use the |visualize axis|
key:
\begin{key}{/tikz/data visualization/axis options/visualize axis=\meta{options}}
This key is passed to an axis as an option. It causes a visual
representation of the axis to be created during the data visualization. The
\meta{options} are used to determine where the axis should be drawn and how
long it should be. We can specify, for instance, that an axis should be
drawn at the minimum value of another axis or where another axis has the
value |0|.
\medskip
\textbf{The goto, high, and low Keys.}
In our example, the |left axis| should be shown at the left hand side. This
is the position where the |x axis| has its minimum value. To specify this,
we would use the following code:
%
\begin{codeexample}[code only]
left axis={ visualize axis={ x axis={ goto=min } } }
\end{codeexample}
%
As can be seen, we can pass another axis as an \meta{option} to
|visualize axis|, where we pass the following key to the axis in turn:
%
\begin{key}{/tikz/data visualization/axis options/goto=\meta{value}}
The key can be passed to an axis. It will set the attribute monitored
by the axis to the given \meta{value}, which is usually some number.
However, \meta{value} may also be one of the following, which causes a
special behavior:
%
\begin{itemize}
\item |min|: The attribute is set to the minimal value that the
attribute has attained along this axis.
\item |max|: Like |min|.
\item |padded min|: This will also set the \meta{attribute}
monitored by the axis to the same value as |min|.
Additionally, however, the subkey
|/data point/|\meta{attribute}|/offset| is set to the current
padding for the minimum, see the description of |padding min|
later on. The effect of this is that the actual point ``meant''
by the attribute is offset by this padding along the
attribute's axis.
\item |padded max|: Like |padded min|.
\end{itemize}
\end{key}
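    As mentioned above, \meta{value} may also be a concrete number. For
    instance, the following sketch would visualize the $x$-axis at the height
    at which the |left axis| attribute equals |0|:
    %
\begin{codeexample}[code only]
x axis={ visualize axis={ left axis={ goto=0 } } }
\end{codeexample}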
The |right axis| would be visualized the same way, only at |goto=max|. The
$x$-axis actually needs to be visualized \emph{twice}: Once at the bottom
and once at the top. Thus, we need to call |visualize axis| twice for this
axis:
%
\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}
\begin{codeexample}[
preamble={\usetikzlibrary{datavisualization}},
pre={\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}}]
\tikzset{
data visualization/our system/.append style={
left axis= {visualize axis={x axis= {goto=min}}},
right axis={visualize axis={x axis= {goto=max}}},
x axis= {visualize axis={left axis={goto=min}},
visualize axis={left axis={goto=max}}},
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=4cm},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
There is another key that is similar to |goto|, but has a slightly
different semantics:
%
\begin{key}{/tikz/data visualization/axis options/goto pos=\meta{fraction}}
The key works like |goto|, only the \meta{fraction} is not interpreted
as a value but as a fraction of the way between the minimum and the
maximum value for this axis.
Suppose that for an axis the attribute range interval is $[500,1000]$
and the reasonable interval is $[1,3]$. Then for a \meta{fraction} of
|0|, the mapping process would choose value $1$ from the reasonable
interval, for a \meta{fraction} of |1| the position $3$ from the
        reasonable interval, and for a \meta{fraction} of |0.25| the position
        $1.5$ since it lies at one quarter of the distance from $1$ to $3$.
Note that neither the attribute range interval nor the transformation
function for the attribute are important for the |goto pos| option --
the \meta{fraction} is computed with respect to the reasonable
interval. Also note that the values of the actual attribute
corresponding to the fractional positions in the reasonable interval
are not computed.
%
\begin{codeexample}[
preamble={\usetikzlibrary{datavisualization}},
pre={\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}}]
\tikzset{
data visualization/our system/.append style={
x axis= {visualize axis={left axis={goto pos=0.25}},
visualize axis={left axis={goto pos=0.5}}},
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=4cm},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
\end{key}
By default, when an axis is visualized, it spans the set of all possible
values for the monitored attribute, that is, from |min| to |max|. However,
there are actually two keys that allow you to adjust this:
%
\begin{key}{/tikz/data visualization/low=\meta{value}}
This is the attribute value where the axis visualization starts. The
same special values as for |goto| are permissible (like |min| or
|padded min|, but also |0| or |1|).
\end{key}
%
\begin{key}{/tikz/data visualization/high=\meta{value}}
Like |low|, only for where the axis ends.
\end{key}
By default, |low=min| and |high=max| are set for an axis visualization.
Another sensible setting is |low=padded min| and |high=padded max|. The
following key provides a shorthand for this:
%
\begin{key}{/tikz/data visualization/padded}
Shorthand for |low=padded min, high=padded max|.
\end{key}
%
As an example, consider the |scientific axes=clean|. Here, each axis is
actually drawn three times: Once at the minimum, once at the maximum and
then once more at the padded minimum.
\medskip
\textbf{The axis line.}
When an axis is drawn, \tikzname\ does not simply draw a straight line from
the |low| position to the |high| position. In reality, the data
visualization system uses the two commands |\pgfpathdvmoveto| and
|\pgfpathdvlineto| internally. These will replace the straight line by a
curve in certain situations. For instance, in a polar coordinate system, if
an axis should be drawn along an angle axis for a fixed radius, an arc will
be used instead of a straight line.
\medskip
\textbf{Styling the axis.}
As can be seen, we now get the axis we want (but without the ticks,
visualizing them will be explained later). The axis is, however, simply a
black line. We can \emph{style} the axis in a manner similar to styling
ticks and grid lines, see Section~\ref{section-dv-style}. In detail, the
following styles get executed:
%
\begin{enumerate}
\item |axis layer|
\item |every axis|
\item |styling|
\end{enumerate}
%
Additionally, even before |every axis| is executed, |low=min| and
|high=max| are executed.
\begin{stylekey}{/tikz/data visualization/axis layer (initially on background layer)}
The layer on which the axis is drawn. See the description of
|grid layer| on page~\pageref{section-dv-grid-layer} for details.
\end{stylekey}
\begin{stylekey}{/tikz/data visualization/every axis}
Put styling of the axis here. It is usually a good idea to set this
style to |style={black!50}|.
\end{stylekey}
Recall that the |styling| key is set using the |style| key, see
Section~\ref{section-dv-style}.
%
\begin{codeexample}[
preamble={\usetikzlibrary{datavisualization}},
pre={\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}}]
\tikzset{
data visualization/our system/.append style={
every axis/.style={style=black!50}, % make this the default
left axis= {visualize axis={x axis= {goto=min}, style=red!75}},
right axis={visualize axis={x axis= {goto=max}, style=blue!75}},
x axis= {visualize axis={left axis={goto=min}},
visualize axis={left axis={goto=max}}},
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=4cm},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
\tikzset{
data visualization/our system/.append style={
every axis/.style={style=black!50}, % make this the default
left axis= {visualize axis={x axis= {goto=min}, style=red!75}},
right axis={visualize axis={x axis= {goto=max}, style=blue!75}},
x axis= {visualize axis={left axis={goto=min}},
visualize axis={left axis={goto=max}}},
}
}
\medskip
\textbf{Padding the Axis.}
When an axis is visualized, it is often a good idea to make it ``a little
bit longer'' or to ``remove it a bit from the border'', because the
visualization of an axis should not interfere with the actual data. For
this reason, a \emph{padding} can be specified for axes:
\begin{key}{/tikz/data visualization/axis options/padding min=\meta{dimension}}
This is the dimension that is used whenever |goto=padded min| is used.
The \meta{dimension} is then put into the |offset| subkey of the
attribute monitored by the axis. When a data point is transformed by a
linear transformer and when this subkey is nonzero, this offset is
added. (For an angle axis of a polar transformer, the \meta{dimension}
is interpreted as an additional angle rather than as an additional
distance). Note that \meta{dimension} should typically be negative
since ``adding the \meta{dimension}'' will then make the axis longer
(because it starts at a smaller value). The standard axis systems set
the padding to some default and take its value into account:
%
\begin{codeexample}[
width=8cm,
preamble={\usetikzlibrary{datavisualization.formats.functions}},
]
\begin{tikzpicture}
\datavisualization [scientific axes=clean,
x axis={padding min=-1cm},
visualize as smooth line]
data [format=function] {
var x : interval [-3:5];
func y = \value x * \value x;
};
\end{tikzpicture}
\end{codeexample}
    Using this padding together with the |padded| key, we can visualize our
    axes ``a little removed from the actual data'':
%
\begin{codeexample}[
preamble={\usetikzlibrary{datavisualization}},
pre={\tikzdatavisualizationset{
our system/.style={
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
x axis={attribute=x},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}}
}
}%
\tikzset{
data visualization/our system/.append style={
every axis/.style={style=black!50}, % make this the default
left axis= {visualize axis={x axis= {goto=min}, style=red!75}},
right axis={visualize axis={x axis= {goto=max}, style=blue!75}},
x axis= {visualize axis={left axis={goto=min}},
visualize axis={left axis={goto=max}}},
}
}}]
\tikzset{
data visualization/our system/.append style={
all axes= {padding=.5em},
left axis= {visualize axis={x axis= {goto=padded min}, padded}},
right axis={visualize axis={x axis= {goto=padded max}, padded}},
x axis= {visualize axis={left axis={goto=padded min}, padded},
visualize axis={left axis={goto=padded max}, padded}},
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=3cm},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
\end{key}
\begin{key}{/tikz/data visualization/axis options/padding max=\meta{dimension}}
Works like |padding min|, but \meta{dimension} should typically be
positive.
\end{key}
\begin{key}{/tikz/data visualization/axis options/padding=\meta{dimension}}
Sets both |padding min| to the negated value of \meta{dimension} and
|padding max| to \meta{dimension}.
\end{key}
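    In other words, the following two settings have the same effect:
    %
\begin{codeexample}[code only]
all axes={ padding=.5em }
all axes={ padding min=-.5em, padding max=.5em }
\end{codeexample}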
\end{key}
\subsubsection{Visualizing Grid Lines}
\label{section-dv-visualize-gridlines}
As explained earlier, the |grid| key is used to specify at which positions grid
lines should be drawn in principle. However, this key does not actually cause
any grid lines to be drawn. Instead, the |visualize grid| key is used by the
axis system to specify how grid lines are drawn.
\begin{key}{/tikz/data visualization/axis options/visualize grid=\meta{options}}
This key is passed to an axis. It causes grid lines to be drawn at the
positions specified by the |grid| key for this axis. The \meta{options}
govern where and how the grid lines will be drawn.
\medskip
\textbf{The direction axis.}
At first sight, one might expect that the grid lines for an axis should
simply be drawn perpendicular to the axis between the minimum and maximum
value of the axis. However, things are somewhat more difficult in reality:
%
\begin{enumerate}
\item A grid line is supposed to indicate all positions where a certain
attribute attains a fixed value. But, then, a grid line does not
really need to be a grid \emph{line}. Consider for instance a three
dimensional axis system. A ``grid line'' for the $x$-coordinate |3|
would actually be a ``grid plane''.
\item For a polar coordinate system and a fixed radius, this set of
positions at a certain radius is not a straight line, but an arc.
For more complicated coordinate systems such as the one arising
from three-dimensional spherical projections, a grid line may well
be a fairly involved curve.
\end{enumerate}
%
The |visualize grid| command addresses these complications as follows:
%
\begin{enumerate}
\item A grid line is always a line, not a plane or a volume. This means
that in the example of a three dimensional axis system and the
$x$-attribute being |3|, one would have to choose whether the grid
line should go ``along'' the $y$-axis or ``along'' the $z$-axis for
this position. One can, however, call the |visualize grid| command
twice, once for each direction, to cause grid lines to be shown for
both directions.
\item A grid line is created by moving to a start position and then
doing a lineto to the target position. However, the ``moveto'' and
``lineto'' are done by calling special commands of the data
visualization system. These special commands allow coordinate
system to ``notice'' that the line is along an axis and will allow
them to replace the straight line by an appropriate curve. The
polar axes systems employ this strategy, for instance.
\end{enumerate}
By the above discussion, in order to create a grid line for attribute $a$
having value $v$, we need to specify an axis ``along'' which the line
should be drawn. When there are only two axes, this is usually ``the other
axis''. This ``other axis'' is specified using the following key:
%
\begin{key}{/tikz/data visualization/direction axis=\meta{axis name}}
You must pass this key as an \meta{option} each time you use
        |visualize grid|. When the grid line is drawn, the attribute $a$ is set
to $v$ and the axis \meta{axis name}'s attribute is set once to the
current value of |low| and once to |high|. Then a line is drawn between
these two positions using |\pgfpathdvlineto|.
\end{key}
%
The |low| and |high| keys are the same as the ones used in the
|visualize axis| key.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
xyz Cartesian cabinet,
all axes={visualize axis={low=0, style=->}},
x axis={visualize grid={direction axis=y axis}, grid=many},
visualize as scatter]
data {
x, y, z
0, 0, 1
0, 1, 0
2, 2, 2
};
\end{codeexample}
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
xyz Cartesian cabinet,
all axes={visualize axis={low=0, style=->}, grid=many},
x axis={visualize grid={direction axis=z axis}},
z axis={visualize grid={direction axis=x axis},
visualize grid={direction axis=y axis},},
visualize as scatter]
data {
x, y, z
0, 0, 1
0, 1, 0
2, 2, 2
};
\end{codeexample}
\medskip
\textbf{Styling the grid lines.}
When a grid line is draw, styles are applied as described in
Section~\ref{section-dv-styling-grid-lines}.
\medskip
\textbf{The major, minor, and subminor grid lines.}
The |grid| option allows you to specify for each kind of grid line (major,
minor, or subminor) a set of different values for which these grid lines
should be drawn. Correspondingly, it is also possible to configure for each
kind of grid line how it should be drawn. For this, the |major|, |minor|,
|subminor|, and also the |common| keys can be used inside the
\meta{options} of |visualize grid|. While as option to |grid| these keys
are used to specify |at| values, as options of |visualize grid| they are
used to configure the different kinds of grid lines.
Most of the time, no special configuration is necessary since all styling
is best done by configuring keys like |every major grid|. You need to use a
key like |major| only if you wish to configure for instance the |low| or
|high| values of a |major| grid line differently from those of |minor| grid
    lines -- a rather unlikely setting -- or when the styling should deviate
from the usual settings.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikz \datavisualization [
xy Cartesian,
all axes={visualize axis={low=0, style=->},
grid={some, minor steps between steps}},
x axis= {visualize grid={
direction axis=y axis,
minor={low=0.25, high=1.75, style=red!50}}},
visualize as scatter]
data {
x, y
0, 0
3, 3
};
\end{codeexample}
%
\end{key}
Returning to the example of |our system| with its two vertical axes, it is
straightforward to configure the grid lines of the $x$-axis: The direction
axis is either of the other two axes (they point in the same direction and they
have the same range). For the other two axes, we visualize their grids
independently of one another, using different colors.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikzset{
data visualization/our system/.append style={
x axis= {visualize grid={direction axis=left axis}},
left axis= {visualize grid={direction axis=x axis,
common={style=red!50}}},
right axis={visualize grid={direction axis=x axis,
common={style=blue!50}}},
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=3cm, grid=many},
left axis ={attribute=money, grid=some},
right axis={attribute=people, grid=few},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
\subsubsection{Visualizing the Ticks and Tick Labels}
\label{section-dv-visualize-ticks}
\begin{key}{/tikz/data visualization/axis options/visualize ticks=\meta{options}}
    Visualizing a tick involves (possibly) drawing a tick mark and (possibly)
    adding the tick label node. The process is similar to |visualize grid|: Users
use the |ticks| key to configure how many ticks they would like for an axis
and at which positions. The axis system uses the |visualize ticks| key to
specify where these ticks should actually be shown.
Unlike grid lines, which are typically only visualized once for each
combination of an axis and a direction axis, tick marks might be visualized
at different places for the same axis. Consider for instance the
|scientific axes|:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes, all axes={length=3cm},
x axis={ticks={stack}},
visualize as smooth line]
data [format=function] {
var x : interval [0:2];
func y = \value x*\value x;
};
\end{codeexample}
%
Have a look at the ticks on the $y$-axis: There are ticks at values |0|,
|1|, |2|, |3|, and~|4|. These are visualized both at the left side (where
the tick nodes are also shown) and additionally also at the right side, but
only as small marks. Similarly, the ticks on the $x$-axis appear at the
bottom, but also (in much simpler versions) at the top. Both for the
$x$-axis and for the $y$-axis the |visualize ticks| key was called twice.
\medskip
\textbf{The tick marks.}
Drawing a tick mark is quite similar to visualizing a grid line; indeed a
tick mark can be thought of as a ``mini grid line'': Just like a grid line
    it ``points along an axis''. However, a tick will always be a short
straight line -- even when the coordinate system is actually twisted
(experimentation has shown that ticks that follow the curvature of the
coordinate system like grid lines are hard to recognize). For this reason,
the |low| and |high| keys have a different meaning from the one used with
    the |visualize grid| key. In detail, to configure the size and position of a
tick mark for the value $v$ of attribute $a$, proceed as follows:
%
\begin{itemize}
        \item The |visualize ticks| key will have set up attribute $a$ to be
equal to $v$.
\item You should now use the |goto| or |goto pos| key together with all
\emph{other} axes to configure at which position with respect to
            these other axes the tick mark should be shown. For instance,
suppose we want tick marks in |our system| for the $x$-axis at the
bottom and at the top. This corresponds to once setting the
|left axis| to its minimal value and once to its maximal value:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikzset{
data visualization/our system/.append style={
x axis={visualize ticks={direction axis=left axis, left axis={goto=min}},
visualize ticks={direction axis=left axis, left axis={goto=max}},
}
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=3cm, ticks=many},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
%
\item In the above example, we may wish to shorten the ticks a bit at
the bottom and at the top. For this, we use the |low| and |high|
            keys:
%
\begin{key}{/tikz/data visualization/low=\meta{dimension}}
When used with the |visualize ticks| option, the |low| key
                contains a dimension that specifies the extent of the tick
going ``toward the minimum'' of the direction axis. More
precisely, when a tick mark is visualized, a unit tangent
vector at the current data point in the direction of the
|direction axis| is computed and this vector is multiplied by
\meta{dimension} to compute the start position of the tick
line. The end position is given by this vector times the |high|
value.
Note that the \meta{dimension} should usually be negative for
the |low| key and positive for the |high| key.
For tick marks where a tick label node is shown, the
\meta{dimension} is increased by the current values of keys
like |tick text even low padding|, see
Section~\ref{section-dv-stacking} for details.
\end{key}
%
\begin{key}{/tikz/data visualization/high=\meta{dimension}}
Like |low|.
\end{key}
%
\begin{key}{/tikz/data visualization/tick length=\meta{dimension}}
Shorthand for |low=-|\meta{dimension}|, high=|\meta{dimension}.
\end{key}
What we want to happen is that in the upper visualization of the
ticks the |low| value is |0pt|, while in the lower one the |high|
value is |0pt|:
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikzset{
data visualization/our system/.append style={
x axis={
visualize ticks={direction axis=left axis,high=0pt,left axis={goto=min}},
visualize ticks={direction axis=left axis,low=0pt,left axis={goto=max}},
}
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=3cm, ticks=many},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
%
\end{itemize}
%
In order to style the tick mark, use the styling mechanism that is detailed
in Section~\ref{section-dv-styling-ticks}.
\medskip
\textbf{The tick label node.}
At certain tick positions, we may wish to add a node indicating the value
of the attribute at the given position. The |visualize ticks| command has
no influence over which text should be shown at a node -- the text is
specified and typeset as explained in Section~\ref{section-dv-tick-labels}.
    Each time |visualize ticks| is used, for each tick position up to two tick
    label nodes will be created: one at the |low| position and one at the |high|
    position. The following keys are used to configure which of these cases
    happens:
%
\begin{key}{/tikz/data visualization/tick text at low=\opt{\meta{true or false}} (default true)}
Pass this option to |visualize ticks| when you want tick label nodes to
be placed at the |low| position of each tick mark.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization}}]
\tikzset{
data visualization/our system/.append style={
x axis={
visualize ticks={direction axis=left axis, left axis={goto=min},
high=0pt, tick text at low, stack},
visualize ticks={direction axis=left axis, left axis={goto=max},
low=0pt, tick text at high, stack}
}
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, length=3cm, ticks=some},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
\end{key}
%
\begin{key}{/tikz/data visualization/tick text at high=\opt{\meta{true or false}} (default true)}
Like |tick text at low|.
\end{key}
\begin{key}{/tikz/data visualization/no tick text}
Shorthand for |tick text at low=false, tick text at high=false|.
%
\begin{codeexample}[preamble={\usetikzlibrary{datavisualization.formats.functions}}]
\tikz \datavisualization [scientific axes, all axes={length=3cm},
x axis={ticks={
major also at={6.5 as [no tick text]}}},
visualize as smooth line]
data [format=function] {
var x : interval [5:10];
func y = \value x * \value x;
};
\end{codeexample}
\end{key}
When a tick label node is to be placed at the low or the high position, the
next step is to determine the exact position and the correct anchor of the
node. This is done as follows:
%
\begin{itemize}
\item In order to compute an appropriate |anchor|, the tick mark is
considered: This is a short line pointing in a certain direction.
For a tick label node at the |low| position, the |anchor| attribute
            is set up in such a way that the node label will be below the |low|
position when the tick mark direction points up, it will be to the
right when the direction points left, above when it points down,
and so on also for diagonal directions. Similarly, for the |high|
position, when the direction points up, the node will be placed
above the tick mark and so on.
This computation is done automatically.
\item The tick label node is styled. The styles that are applied are
described in Section~\ref{section-dv-styling-ticks}.
\item A tick label node for the |low| position is usually anchored at
this |low| position, but an additional padding will be added as
described in Section~\ref{section-dv-stacking}.
\end{itemize}
\end{key}
\subsubsection{Visualizing the Axis Labels}
\label{section-dv-visualize-label}
The |label| option can be used with an axis to specify a text that should be
shown next to the axis to indicate which attribute this axis refers to. Like |ticks|
or |grid|, the |label| option does not actually draw the label, this is the job
of the |visualize label| key, which is configured by the axis system.
\begin{key}{/tikz/data visualization/axis options/visualize label=\meta{options}}
The \meta{options} should be used to configure a ``good place'' for the
axis label. Usually, you will use the |goto| or the |goto pos| key.
For the example of |our system|, we would like the label of the |x axis| to
be placed below at the middle of the axis, so we use |goto pos=.5| to
    determine this position. Concerning the other axes, we want the label to be placed
at the minimum position of the |left axis| with a lot of padding.
%
\begin{codeexample}[width=7cm,preamble={\usetikzlibrary{datavisualization}}]
\tikzdatavisualizationset{
our system/.append style={
x axis={visualize label={
x axis={goto pos=.5},
left axis={padding=1.5em, goto=padded min}}}
}
}
\tikz \datavisualization [
our system,
x axis={attribute=time, ticks=some, label},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={
people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
In the above example, the |padding| of |1.5em| was rather arbitrary and
``suboptimal''. It would be outright wrong if the labels on the |x axis|
were larger or if they were missing. It would be better if the vertical
    position of the |x axis| label were always ``below'' all other parts of the
    visualization. For such cases a slightly strange approach is useful: You
    position the node using |node style={at=...}|, where |at| is now the normal
    \tikzname\ option that is used to specify the position of a node. Inside the
    |...|, you specify that the vertical position should be the bottom of the
    up-to-now-constructed data visualization and that the horizontal position
    should be at the ``origin'', which is, however, the position computed by the
|goto| keys for the axes:
%
\begin{codeexample}[width=7cm,preamble={\usetikzlibrary{datavisualization}}]
\tikzdatavisualizationset{
our system/.append style={
x axis={visualize label={
x axis={goto pos=.5},
node style={
at={(0,0 |- data visualization bounding box.south)},
below
} } } } }
\tikz \datavisualization [
our system,
x axis={attribute=time, ticks=some, label=Year},
left axis ={attribute=money},
right axis={attribute=people},
visualize as line/.list={
people 1, people 2, money 1, money 2}]
data group {people and money};
\end{codeexample}
Two additional keys are useful for positioning axis labels:
%
    \begin{key}{/tikz/data visualization/axis options/anchor at min}
When passed to an axis, this key sets the |anchor| so that a node
positioned at either the |min| or the |padded min| value of the axis
will be placed ``nicely'' with respect to the axis. For instance, if
the axis points upwards from the |min| value to the |max| value, the
|anchor| would be set to |north| since this gives a label below the
axis's start. Similarly, if the axis points right, the anchor would be
set to |east|, and so on.
\end{key}
%
    \begin{key}{/tikz/data visualization/axis options/anchor at max}
Like |anchor at min|.
\end{key}
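    As a sketch (the exact placement of these keys inside the \meta{options} is
    an assumption), they might be combined with the |goto| key as follows when
    placing the |x axis| label at the padded minimum of the |left axis|:
    %
\begin{codeexample}[code only]
x axis={visualize label={
    x axis={goto pos=.5},
    left axis={goto=padded min, anchor at min}}}
\end{codeexample}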
\end{key}
\subsubsection{The Complete Axis System}
Here is the code for the complete axis system developed above and an example of
how it is used:
%
\begin{codeexample}[code only]
\tikzdatavisualizationset{ our system/.style={
% The axes
new Cartesian axis=x axis, new Cartesian axis=left axis, new Cartesian axis=right axis,
% The directions of the axes
all axes={padding=.5em}, left axis={unit vector={(0cm,1pt)}}, right axis={unit vector={(0cm,1pt)}},
% The default attributes, other attributes must be configured
x axis={attribute=x},
% The lengths of the axes
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
% The styling of the axes
every axis/.style={style=black!50}, % make this the default
% Visualizing the axes themselves
left axis= {visualize axis={x axis= {goto=padded min}, style=red!75, padded}},
right axis={visualize axis={x axis= {goto=padded max}, style=blue!75,padded}},
x axis= {visualize axis={left axis={goto=padded min}, padded},
visualize axis={left axis={goto=padded max}, padded}},
% Visualizing the grid, when requested
x axis= {visualize grid={direction axis=left axis}},
left axis= {visualize grid={direction axis=x axis, common={style=red!50}}},
right axis={visualize grid={direction axis=x axis, common={style=blue!50}}},
% Visualizing the ticks, when requested
left axis={visualize ticks={style={red!50!black}, direction axis=x axis,
x axis={goto=padded min}, high=0pt, tick text at low}},
right axis={visualize ticks={style={blue!80!black}, direction axis=x axis,
x axis={goto=padded max}, low=0pt, tick text at high}},
x axis={visualize ticks={direction axis=left axis, left axis={goto=padded min}, high=0pt,
tick text at low},
visualize ticks={direction axis=left axis, left axis={goto=padded max}, low=0pt}},
% By default, there are ticks on all axes
all axes={ticks},
% Visualizing the axis labels, when requested
x axis={visualize label={x axis={goto pos=.5}, node style={
at={(0,0 |- data visualization bounding box.south)}, below}}},
left axis={visualize label={left axis={goto pos=.5}, node style={
at={(0,0 -| data visualization bounding box.west)}, rotate=90, anchor=south, red!50!black}}},
right axis={visualize label={right axis={goto pos=.5}, node style={
at={(0,0 -| data visualization bounding box.east)}, rotate=-90, anchor=south, blue!80!black}}},
}}
\end{codeexample}
\begin{codeexample}[
preamble={\usetikzlibrary{datavisualization}},
pre={\tikzdatavisualizationset{
our system/.style={
% The axes
new Cartesian axis=x axis,
new Cartesian axis=left axis,
new Cartesian axis=right axis,
% The default attributes, other attributes must be configured
x axis={attribute=x},
% The directions of the axes
all axes={padding=.5em},
left axis={unit vector={(0cm,1pt)}},
right axis={unit vector={(0cm,1pt)}},
% The lengths of the axes
x axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/width}},
left axis ={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
right axis={length=\pgfkeysvalueof{/tikz/data visualization/scientific axes/height}},
% The styling of the axes
every axis/.style={style=black!50}, % make this the default
% Visualizing the axes themselves
left axis= {visualize axis={x axis= {goto=padded min}, style=red!75, padded}},
right axis={visualize axis={x axis= {goto=padded max}, style=blue!75,padded}},
x axis= {visualize axis={left axis={goto=padded min}, padded},
visualize axis={left axis={goto=padded max}, padded}},
% Visualizing the grid, when requested
x axis= {visualize grid={direction axis=left axis, padded}},
left axis= {visualize grid={direction axis=x axis, padded, common={style=red!50}}},
right axis={visualize grid={direction axis=x axis, padded, common={style=blue!50}}},
% Visualizing the ticks, when requested
left axis={
visualize ticks={style={red!50!black}, direction axis=x axis, x axis={goto=padded min}, high=0pt, tick text at low}},
right axis={
visualize ticks={style={blue!80!black}, direction axis=x axis, x axis={goto=padded max}, low=0pt, tick text at high}},
x axis={
visualize ticks={direction axis=left axis, left axis={goto=padded min}, high=0pt, tick text at low},
visualize ticks={direction axis=left axis, left axis={goto=padded max}, low=0pt}
},
% By default, there are ticks on all axes
all axes={ticks},
% Visualizing the axis labels, when requested
x axis={visualize label={
x axis={goto pos=.5}, node style={at={(0,0 |- data visualization bounding box.south)}, below}}},
left axis={visualize label={
left axis={goto pos=.5}, node style={
at={(0,0 -| data visualization bounding box.west)}, rotate=90, anchor=south, red!50!black}}},
right axis={visualize label={
right axis={goto pos=.5}, node style={
at={(0,0 -| data visualization bounding box.east)}, rotate=-90, anchor=south, blue!80!black}}},
}
}}]
\tikz \datavisualization [
our system,
x axis={attribute=time, label=Year,
ticks={tick text padding=2pt, style={/pgf/number format/set thousands separator=}}},
left axis={attribute=money, label=Spending,
padding min=0, include value=0, grid,
ticks={tick prefix=\$, style={/pgf/number format/fixed,
/pgf/number format/fixed zerofill, /pgf/number format/precision=2}}},
right axis={attribute=people,
label=Population,
padding min=0, include value=0,
ticks={style=/pgf/number format/fixed}},
visualize as line/.list={
people 1, people 2, money 1, money 2},
people 1={style={visualizer color=blue}},
people 2={style={visualizer color=blue!50}},
money 1={style={visualizer color=red}},
money 2={style={visualizer color=red!50}} ]
data group {people and money};
\end{codeexample}
\subsubsection{Using the New Axis System Key}
The axis system |our system| that we developed in the course of the previous
section is not yet ``configurable''. The only configuration that was possible
was to ``misuse'' the |width| and |height| keys of the |scientific axes|.
In order to make |our system| configurable so that we can say
|our system=|\meta{options}, where \meta{options} are executed with the path
prefix
%
\begin{codeexample}[code only]
/tikz/data visualization/our system
\end{codeexample}
%
we can use the following key:
\begin{key}{/tikz/data visualization/new axis system=\marg{axis system
name}\marg{axis setup}\marg{default options}\\ \marg{application
options}%
}
The |new axis system| key takes four parameters. The first one,
\meta{axis system name}, is the name of the to-be-created axis system,
|our system| in our case. The |new axis system| will create the following
new key:
%
\begin{key}{/tikz/data visualization/\meta{axis system name}=\opt{\meta{options}}}
When the key \meta{axis system name} is used, the following keys will be
executed in the following order:
%
\begin{enumerate}
\item The \meta{axis setup} with the path prefix
|/tikz/data visualization/|.
\item The \meta{default options} with the same path prefix.
\item The following style:
%
\begin{stylekey}{/tikz/data visualization/every \meta{axis system name}}
Even though this style has the path prefix
|/tikz/data visualization| itself, the keys stored in this
style will be executed with the path prefix
|/tikz/data visualization/|\meta{axis system name}.
\end{stylekey}
\item The \meta{options} with the path prefix
|/tikz/data visualization/|\meta{axis system name}.
\item The \meta{application options} with the path prefix
|/tikz/data visualization/|
\end{enumerate}
\end{key}
Let us now have a look at what all of this means. First, the \meta{axis
setup} will contain all options that set up the axis system in all ways that
need not be configured. For instance, the \meta{axis setup} for the
|scientific axes| will create an |x axis| and also a |y axis| (because
these are always present), but will not set up the label visualization
(because this can be configured in different ways). For |our system|, which
cannot be configured at all, we would place all of our configuration in the
\meta{axis setup}.
The \meta{default options} can be used to pick default values that would
usually be passed to the \meta{options} of the newly created axis system.
For instance, for |scientific axes|, the \meta{default options} are set to
|outer ticks,standard labels|, because these are the defaults.
Finally, the \meta{application options} can be used to actually apply the
configuration that has been chosen by the \meta{options}. The idea is that
\meta{default options}, \meta{options}, and also |every| \meta{axis system
name} all have a chance of changing, re-changing and re-setting all sorts
of styles and keys. Then, with the last change ``winning'', the resulting
setting of a style can be executed, which may then cause a label
visualization to be installed.
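    As an illustration only (a sketch of how the pieces fit together, not a
    definition copied from elsewhere in this manual), |our system| could now be
    registered roughly as follows, with the non-configurable keys from the
    previous section placed in the \meta{axis setup} argument and the last two
    arguments left empty:
    %
\begin{codeexample}[code only]
\tikzdatavisualizationset{
  new axis system={our system}
  {% axis setup: the fixed, non-configurable part
    new Cartesian axis=x axis,
    new Cartesian axis=left axis,
    new Cartesian axis=right axis,
    x axis={attribute=x},
    all axes={padding=.5em, ticks},
    % ... remaining keys from the previous section ...
  }
  {}% no default options
  {}% no application options
}
\end{codeexample}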
\end{key}
| {
"alphanum_fraction": 0.6854607202,
"avg_line_length": 40.8213444939,
"ext": "tex",
"hexsha": "373402b5cd30afa5a2084b673dac0abda5107a95",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "waqas4afzal/LatexUrduBooksTools",
"max_forks_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-dv-axes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "waqas4afzal/LatexUrduBooksTools",
"max_issues_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-dv-axes.tex",
"max_line_length": 124,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "waqas4afzal/LatexUrduBooksTools",
"max_stars_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/generic/pgf/text-en/pgfmanual-en-dv-axes.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 41518,
"size": 156060
} |
\documentclass[11pt, english, numbers=endperiod]{scrartcl}
\usepackage{listings}
\usepackage{color}
\usepackage{minted}
\usepackage{bussproofs}
\title{Assignment 1}
\subtitle{COS30023 - Languages in Software Development}
\author{Daniel Parker - 971328X}
\date{\today}
\begin{document}
\maketitle
\section{Proof Tree}
\begin{prooftree}
\AxiomC{father(X,luke)}
\AxiomC{father(X,leia)}
\AxiomC{mother(Y,luke)}
\AxiomC{mother(Y,leia)}
\QuaternaryInfC{sibling(luke, leia)}
\end{prooftree}
\section{Bubble Sort}
\subsection{bubble/2}
In planning this predicate, I rationalised the result I wanted as the sorted combination of the head of the current list and the head of its tail. I keep the smaller of the two, and recursively bubble the larger of the two as the head of the remaining tail.
\subsection{bubble\_sort/2}
The bubble\_sort predicate makes use of three other predicates: bubble/2, reverse/2, and remove/3. This predicate is again recursive in nature. A sorted list is created by bubbling, reversing the result, removing the topmost element (the head), reversing it back again, and calling bubble sort on the remaining tail.
\subsection{Source}
\inputminted[tabsize=4]{prolog}{bubblesort.pl}
\section{Logical Circuits}
\subsection{NAND}
\inputminted[firstline=3, lastline=7 ,tabsize=4]{prolog}{gates.pl}
\subsection{NAND Logic Gates}
\subsubsection{AND}
\inputminted[firstline=9, lastline=11 ,tabsize=4]{prolog}{gates.pl}
\subsubsection{OR}
\inputminted[firstline=19, lastline=22 ,tabsize=4]{prolog}{gates.pl}
\subsubsection{NOR}
\inputminted[firstline=13, lastline=17 ,tabsize=4]{prolog}{gates.pl}
\subsubsection{XOR}
\inputminted[firstline=24, lastline=28 ,tabsize=4]{prolog}{gates.pl}
\subsubsection{XNOR}
\inputminted[firstline=30, lastline=35 ,tabsize=4]{prolog}{gates.pl}
\end{document}
| {
"alphanum_fraction": 0.7762085823,
"avg_line_length": 32.298245614,
"ext": "tex",
"hexsha": "6692a1188a21e3707b7c22cdd23e2248c8ab279f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "rlgod/languages-in-software-development",
"max_forks_repo_path": "Assignments/1/assignment-1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "rlgod/languages-in-software-development",
"max_issues_repo_path": "Assignments/1/assignment-1.tex",
"max_line_length": 317,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4a936667f22d2783521c917d60861249a2ae6671",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "rlgod/languages-in-software-development",
"max_stars_repo_path": "Assignments/1/assignment-1.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 563,
"size": 1841
} |
\documentclass[12pt,oneside]{article}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %
% The mathematical style of these documents follows %
% %
% A. Thompson and B.N. Taylor. The NIST Guide for the Use of the International System of Units. %
% NIST Special Publication 881, 2008. %
% %
% http://www.nist.gov/pml/pubs/sp811/index.cfm %
% %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\input{../Bibliography/commoncommands}
\usepackage{fancyhdr}
\pagestyle{fancy}
\lhead{}
\rhead{}
\chead{}
\renewcommand{\headrulewidth}{0pt}
\begin{document}
\title{Vulnerability Model Summary - October 1 2014}
\author{DK Ezekoye}
\maketitle
We discussed a simple model for a year one deliverable to exercise the vulnerability framework. The steps that we agreed upon are outlined below.
\section{Target Jurisdiction}
Because of readily available fire data on Arlington, we chose the city/county(?) of Arlington, VA as our testbed.
\subsection{Required Data}
We will request fire incident data directly from Arlington and also through NFIRS. Having both data sets will be useful in exploring any differences between the local and national data. We will also need census data for demographics at the tract level. Housing inventory data will also be needed at the census tract level.
\section{Ignition Model}
We recognize that fire ignition may or may not trigger fire department response. We choose to analyze the fraction of ignition events that trigger firefighter response. The model to be built on fire ignition will be derived directly from fire incident data sets fit to census tract demographic and building explanatory variables. The use of a model form as compared to direct draws from the existing data is motivated by the need to exercise the ignition model under conditions in which the underlying census parameters change in time. A similar incident ignition model was derived for Austin, TX and was presented to the group.
\section{Fire Model}
A simple fire model of the form shown below will be used to drive the fire growth process. The model is required to include the effects of delays in firefighter response or inadequate firefighter response on increased property loss and damage. The model is intended to be a simple modification to existing flame spread models that are parameterized using a pyrolysis length. In this case, a pyrolysis area $A_p$ and a burned-out area $A_b$ are to be used. Sprinkler effects will be modeled into the system, as well as firefighter suppression effects.
\begin{equation}
\frac{dA_p}{dt} = \frac{A_f-A_p}{\tau_{ign}} - f(SP,FF,etc.)
\label{eq_AP}
\end{equation}
In the above, the flame area is defined in terms of the difference between the pyrolysis area and the burnout area. An equation is required for the burnout area. Note that modeling of the effects of sprinklers and FF responses is included.
\begin{equation}
\frac{dA_b}{dt} = \frac{A_p-A_b}{\tau_{bo}}
\label{eq_AB}
\end{equation}
In both equations, several calibration parameters are included that will be calibrated using a calibration data set.
\section{FF Response Model}
The efforts of the past study teams in measuring firefighter response times will be used in this model. We recognize that there are times associated with fire detection, alarm, deployment, travel, and initiation of suppression activities. Some of these times will be held fixed, while others, such as travel time, will be determined by direct computation of the distance between the specific structure drawn from the population of census tract structures and a responding fire station. Noise will be added to the travel time to simulate the effects of traffic.
\section{Fire Loss}
The response behavior coupled to the fire model provides a damage area estimate for a particular structure. By sampling over a sufficiently large number of incidents, we will build a histogram/pdf of damage area for the census tract and then for the jurisdiction. This will be compared to available fire incident data as validation of the model.
\end{document}
| {
"alphanum_fraction": 0.6639762107,
"avg_line_length": 65.3888888889,
"ext": "tex",
"hexsha": "dea9ab888779a36e59a9816ee51c8cc35d15b5f4",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2017-01-19T00:26:34.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-04-30T15:13:22.000Z",
"max_forks_repo_head_hexsha": "c85fec01ae6d9940aed184db910a8d994ee7f8a1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tbuffington7/fire-risk",
"max_forks_repo_path": "docs/Notes/Notes_10_1_14.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "c85fec01ae6d9940aed184db910a8d994ee7f8a1",
"max_issues_repo_issues_event_max_datetime": "2017-03-16T15:41:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-30T14:06:03.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tbuffington7/fire-risk",
"max_issues_repo_path": "docs/Notes/Notes_10_1_14.tex",
"max_line_length": 629,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "c85fec01ae6d9940aed184db910a8d994ee7f8a1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tbuffington7/fire-risk",
"max_stars_repo_path": "docs/Notes/Notes_10_1_14.tex",
"max_stars_repo_stars_event_max_datetime": "2019-05-29T15:53:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-21T03:11:28.000Z",
"num_tokens": 913,
"size": 4708
} |
% \vspace{-.12in}
\paragraph{\textbf{Notation: }}
Unless otherwise specified, lower-case English alphabet characters indicate scalars $x \in \Real$. Bold indicates column vectors $\mb{x} \in \Real^p$,
and upper-case bold indicates matrices, $\bX \in \Real^{p \times q}$. Parameters and constants are Greek characters. Time is $t \in [0,T]$, and
$i \in [N]$ indexes the $N$ neurons, where $[N]=\{1,2,\ldots,N\}$. Script denotes sets, and pipes denote the cardinality of a set, e.g., $|\mc{T}|$.
\vspace{-.1in}
Our overall model is then:
\begin{subequations}
\begin{align}
\mc{T}_i\ \ &\sim \text{PP}(\lambda_i) \quad i \in \NN, \quad &\text{ where } \mathsf{\Lambda}(\cdot)&=\sum_{i=1}^{\infty} \lambda_i \delta_{\theta^*_i} \sim \Gamma \text{P}(\alpha, \mathsf{H}(\cdot| {\phi})), \\ %\mathcal{NW}(\mu, \Sigma)) \\ \\
\vspace{-.8in}
x_i(t) &= \sum_{j = 1}^{|\mc{T}_i|} \sum_{k = 1}^{K} y^*_{ijk} \mathsf{d}_k(t - \tau_{ij}), \quad &\text{ where }\by^*_{ij} &\sim \mathsf{N}_K(\mb{\mu}^*_i, \Sigma^*_i) \quad i,j \in \NN, \\
x(t) &= \sum_{i=1}^{\infty} x_i(t) + \eps_t, \quad &\text{ where at any time $t$, } \eps_t &\sim \mathsf{N}(0,\Sigma_x) \text{ independently}
\end{align}
\end{subequations}
\begin{algorithm}
\caption{Generative mechanism for the multi-electrode, non-stationary, discrete-time process}\label{alg:gen_proc}
\begin{tabular}{p{1.2cm}p{12.4cm}}
Input:& a) the number of bins $T$, and the bin-width $\Delta$\\
& b) the $K$-by-$L$ dictionary $\bD$ of $K$ basis functions\\
& c) the DP hyperparameters $\alpha$ and $\phi$.\\
& d) the transition matrix $\bB$ of the neuron AR process \\
Output:& \ An $M$-by-$T$ matrix $\bX$ of multielectrode recordings. % defined by a set of state and time pairs.
\end{tabular}
\begin{algorithmic}[1]
\State Initialize the number of clusters $C_1$ to $0$.
\State Draw the overall spiking rate $\Lambda \sim \text{Gamma}(\alpha, 1)$.
%\State Set $A_{s_i}(\tau) = \sum_j A_{s_i,j} (\tau)$ and define $u_{s_i}(\tau) \ge A_{s_i}(\tau) \forall \tau$. \label{alg:loop}
%\State Let $\tau_o = (w_{i} - l_i)$. \label{alg:smjp_loop}
\For{$t$ in $[T] $}
\State Sample $\mt{z}_t \sim \text{Bernoulli}(\Lambda \Delta)$, with $\mt{z}_t = 1$ indicating a spike in bin $t$.
\If{$\mt{z}_t = 1$} \label{enum:thin}
\State Sample $\mt{\nu}_t$, assigning the spike to a neuron, with
%\begin{align*}
$ \mathsf{P}({\mt{\nu}_t} = i) \propto
\begin{cases}
|\mc{T}^t_i| \quad i \in [C_t] \\
\alpha \quad\ i = C_t + 1 \\
\end{cases}$
%\end{align*}
\If{ $\nu_t = C_t + 1$}
\State $C_{t+1} \leftarrow C_t + 1$.
\State Set $\theta^*_{C_{t+1}} \sim H_{\phi}(\cdot)$, and $\mc{T}_{C_{t+1}}$=\{t\}.
\Else \State $\mc{T}_{\nu_t} \leftarrow \mc{T}_{\nu_t} \cup \{t\}$.
\EndIf
\State Set $\theta_t = \theta^*_{\nu_t}$, recalling that $\theta_t \equiv (\mb{\mu}_t, \Sigma_t)$.
\State Sample $\by_t = (\by^1_t; \cdots; \by^M_t) %\equiv (\by_{t1}, \cdots, \by_{t\Upsilon})
\sim \mathsf{N}(\mb{\mu}_t, \Sigma_t)$, determining the spike shape at all electrodes.
%\State $\bx^h_{t:t+L} = A\by^h$
\State $ x^m_t = \sum_{h = 1}^L \bD_{:,h}^{\T} \by^m_{t-h-1} + \epsilon^m_t \text{\qquad where $\epsilon^m_t \sim \mathsf{N}(0,\sigma^2), m \in [M]$.} $
\State Update the cluster parameters: ${\mb{\mu}}^*_i = \mathbf{B} {\mb{\mu}}^*_i + r_i \quad i \in [C_{t+1}]$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
| {
"alphanum_fraction": 0.6104582844,
"avg_line_length": 55.8032786885,
"ext": "tex",
"hexsha": "a92311b27bf6c2923b8962bfdbe37a2f19fd24a6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "jovo/online-spike-sorting",
"max_forks_repo_path": "tex/supp-text.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "jovo/online-spike-sorting",
"max_issues_repo_path": "tex/supp-text.tex",
"max_line_length": 249,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "24b8bac41bff449381c5c60a9d09ac40995035b5",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "jovo/online-spike-sorting",
"max_stars_repo_path": "tex/supp-text.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1357,
"size": 3404
} |
\documentclass[]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\PassOptionsToPackage{usenames,dvipsnames}{color} % color is loaded by hyperref
\hypersetup{unicode=true,
pdftitle={Portfolio intuition},
colorlinks=true,
linkcolor=Maroon,
citecolor=Blue,
urlcolor=blue,
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\newcommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{Portfolio intuition}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{Chris Kennedy\\
Bridge Alternatives\\
\href{mailto:[email protected]}{\nolinkurl{[email protected]}}}
\preauthor{\centering\large\emph}
\postauthor{\par}
\predate{\centering\large\emph}
\postdate{\par}
\date{August 21, 2018}
\usepackage{fancyhdr}
\pagestyle{fancy}
\makeatletter
\lhead{\@title}
\rhead{\includegraphics[width = 0.15\textwidth]{logo.png}}
\makeatother
\begin{document}
\maketitle
\hypertarget{introduction}{%
\subsection{Introduction}\label{introduction}}
There's a short preference test in this paper that most readers, if not
all, will answer incorrectly. It's a ``preference test'' (and not a
quiz) because selections should be made without calculation or
computation. I'm looking to test your intuition.
Below I argue for why we should care about where our intuition is
leading us, why it might be creating blind spots and how we can do
better. Specifically, I attempt to demonstrate the limitations of
financial judgement based on return to risk ratios (i.e.~Sharpe ratios)
and how using the following formula might lead to better outcomes:
\[
\frac{RRR_a}{\rho} > RRR_p
\]
Don't worry, we'll come back to this.
\hypertarget{intuition}{%
\subsection{Intuition}\label{intuition}}
Practitioners in the hedge fund industry must often rely on intuition.
In some ways, given the highly quantitative nature of the industry, this
is surprising; in other ways it isn't. The types of problems regularly
encountered can be extremely complicated, requiring involved computation
for, at best, reasonable approximations. Successful individuals develop
heuristics (shortcuts) to more efficiently guide their day-to-day
filtering of opportunities. They develop a sense for what should foot
and what shouldn't. It's an element of discretion, molded by experience,
that manifests in small but important decisions across the sector
everyday:
\begin{itemize}
\tightlist
\item
Does this manager's presentation warrant follow up?
\item
Does this line of research show promise?
\item
How do I feel about the risk we're taking?
\item
Does this make sense?
\end{itemize}
Granted, these are questions we'd ideally approach quantitatively, but
this takes time we don't always have.
A common instance of this discretion is visible in the evaluation of
performance statistics. Hedge fund investors are confronted with an
amazing amount of data: daily returns, monthly returns, return
statistics, sector exposures, positions, correlation matrices and more.
Given this mountain of information, it's necessary to employ time saving
tools to filter true prospects from everything else.
One such tool is the return to risk ratio (\(RRR\)). It might seem
strange to call the \(RRR\) a ``tool,'' especially in the context of
shortcuts and heuristics, since the ratio has such a strong theoretical
source: William Sharpe\footnote{Yes, this is a slight inaccuracy.
Sharpe's namesake ratio incorporated the risk free rate, but the idea
of dimensionalizing returns using risk is what's important here.}. But
it helps to remember that the \(RRR\) is, indeed, a deliberate
simplification, facilitating the comparison of assets of different risks
``as long as the correlations of the {[}assets{]} with other relevant
asset classes are reasonably similar''
(\href{https://web.stanford.edu/~wfsharpe/art/sr/sr.htm}{Sharpe's
words}). So while the \(RRR\) and Sharpe ratio have their limitations,
you'd be hard pressed to find any presentation of hedge fund performance
that didn't show them. The statistic carries an importance, a currency,
derived from its simplicity and implication of skill---even though
confidently quantifying ``skill'' is probably more complicated.
In the next section, you'll be asked to put yourself in the shoes of a
portfolio manager. You'll be presented with a series of investment
preference tests designed to examine your ability to pick superior
assets. It will be tempting to reach for a spreadsheet or Python or R,
but don't. Try and let intuition guide you. And while this may feel
frivolous, if you were ever presented with statistics describing some
potential investment and, without deliberately calculating its potential
impact on your portfolio, decided it wasn't for you, then you've made
choices like this before.
\hypertarget{preference-tests}{%
\subsection{Preference Tests}\label{preference-tests}}
Below are four preference tests where you'll need to pick one of two
assets (i.e. \(A_1\) or \(A_2\); \(B_1\) or \(B_2\); etc.). In each,
assume you already own a portfolio that exhibits a return of 5\% and a
risk of 15\%. Also, pretend you're seeking to allocate capital such that
the resulting weights will be split 90\% to your current portfolio and
10\% to your asset choice. This asset will therefore play a small but
not unimportant role in the new portfolio. Lastly, assume your
objective is to maximize your portfolio's return over risk ratio
(\(RRR\)).\footnote{It's perfectly reasonable to prefer some other
objective function. The \(RRR\), while mathematically flexible, has
some practical limitations. For instance, the maximized \(RRR\) might
exhibit a very low return. But the spirit of this work---attempting to
distill portfolio goals into simple but complete tools and guides---is
extendable to any objective function.}
\[
\begin{array}{r|c|c|c}
\text{} & A_1 & or & A_2 \\
\hline{}
\text{Return }(r) & 4.00\% & & 4.00\% \\
\text{Risk }(\sigma) & 7.96\% & & 46.04\% \\
\text{Correlation }(\rho) & -0.2 & & -0.2
\end{array}
\]
\[
\begin{array}{r|c|c|c}
\text{} & B_1 & or & B_2 \\
\hline
\text{Return }(r) & 10.54\% & & 3.57\% \\
\text{Risk }(\sigma) & 20.00\% & & 20.00\% \\
\text{Correlation }(\rho) & 0.8 & & -0.2
\end{array}
\]
\[
\begin{array}{r|c|c|c}
\text{} & C_1 & or & C_2 \\
\hline{}
\text{Return }(r) & 9.33\% & & 6.50\% \\
\text{Risk }(\sigma) & 27.50\% & & 12.50\% \\
\text{Correlation }(\rho) & 0.4 & & 0.4
\end{array}
\]
\[
\begin{array}{r|c|c|c}
\text{} & D_1 & or & D_2 \\
\hline{}
\text{Return }(r) & 6.43\% & & -2.64\% \\
\text{Risk }(\sigma) & 10.00\% & & 40.00\% \\
\text{Correlation }(\rho) & 0.5 & & -0.6
\end{array}
\]
After presenting this test to several people, a typical set of answers
started to emerge:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\(A_1\) was preferred to \(A_2\). With \(A_1\), for the same return
and correlation, you take on roughly one-sixth of the risk.
\item
\(B_1\) was slightly preferred to \(B_2\). For the same risk, \(B_1\)
delivers much more return, though \(B_2\)'s correlation is better.
\item
\(C_2\) was preferred. Its \(RRR\) is higher (about 0.52 versus about
0.34).
\item
\(D_1\) was preferred to \(D_2\). \(D_1\)'s return to risk ratio
(\(RRR\)) is much higher. \(D_2\)'s return is negative.
\end{enumerate}
\hypertarget{results}{%
\subsection{Results}\label{results}}
To reveal the results and see how far intuition can get you, we'll start
with the case of \(A_1\) and \(A_2\), which should exhibit the most
consensus. Most, if not all, will have chosen \(A_1\) over \(A_2\). For
the same return and correlation you receive much, much less risk. Let's
calculate the portfolio impact of each investment and see exactly how
much better you are with \(A_1\) over \(A_2\):
\hypertarget{portfolio-return-r}{%
\subsubsection{Portfolio Return (R)}\label{portfolio-return-r}}
Since the returns of the prospective assets \(A_1\) and \(A_2\) are
identical, the resulting portfolio returns will be identical as well:
\[
\begin{aligned}
R & = w_p r_p + w_a r_a \\
R & = 0.9 \times 0.05 + 0.1 \times 0.04 \\
R & = 0.049 \\
\end{aligned}
\]
\[
\begin{aligned}
Where: & \\
R & = \text{return of the portfolio after allocating to the prospective asset} \\
r_p & = \text{return of the current portfolio} \\
r_a & = \text{return of the prospective asset} \\
w_p & = \text{weight of the current portfolio post-allocation} \\
w_a & = \text{weight of the prospective asset post-allocation} \\
w_a & = (1 - w_p)
\end{aligned}
\]
\hypertarget{portfolio-risk-s}{%
\subsubsection{Portfolio Risk (S)}\label{portfolio-risk-s}}
Now, because the risks of \(A_1\) and \(A_2\) are so different, we
expect a logical preference to very clearly emerge below. We'll first
calculate the portfolio's resulting risk (\(S\)) for \(A_1\):
\[
\begin{aligned}
S & = \sqrt{w_p^2 \sigma_p^2 + w_a^2 \sigma_a^2 + 2 w_p w_a \sigma_p \sigma_a \rho} \\
S & = \sqrt{(0.9 \times 0.15)^2 + (0.1 \times 0.0796)^2 + 2 \times 0.9 \times 0.1 \times 0.15 \times 0.0796 \times -0.2} \\
S & = \sqrt{0.018225 + 0.0000633616 - 0.00042984} \\
S & = \sqrt{0.0178585216} \\
S & = 0.13363577964 \\
S & \approx 13.363...\%
\end{aligned}
\]
\[
\begin{aligned}
Where: & \\
S & = \text{risk of the portfolio after allocating to the prospective asset} \\
\sigma_p & = \text{risk of the current portfolio} \\
\sigma_a & = \text{risk of the prospective asset} \\
\rho & = \text{correlation between the current portfolio and the prospective asset} \\
\end{aligned}
\]
If we take the ratio of \(R\) to \(S\), the resulting portfolio's
\(RRR\) is now about 0.3667, which represents a 10\% improvement. Not
bad.
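Spelling out that arithmetic with the numbers computed above:
\[
\frac{R}{S} = \frac{0.049}{0.13364} \approx 0.3667,
\qquad
\frac{0.3667}{0.05/0.15} = \frac{0.3667}{0.3333} \approx 1.10
\]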
Let's see how much better that is than \(A_2\):
\[
\begin{aligned}
S & = \sqrt{w_p^2 \sigma_p^2 + w_a^2 \sigma_a^2 + 2 w_p w_a \sigma_p \sigma_a \rho} \\
S & = \sqrt{(0.9 \times 0.15)^2 + (0.1 \times 0.4604)^2 + 2 \times 0.9 \times 0.1 \times 0.15 \times 0.4604 \times -0.2} \\
S & = \sqrt{0.018225 + 0.0021196816 - 0.00248616} \\
S & = \sqrt{0.0178585216} \\
S & = 0.13363577964 \\
S & \approx 13.363...\%
\end{aligned}
\]
\hypertarget{um-what}{%
\subsection{Um, what?}\label{um-what}}
There are no tricks here. Feel free to replicate the math yourself.
You'll find that \(A_1\) and \(A_2\) deliver the exact same change to
the portfolio's risk, which, since their returns are identical, results
in the same +10\% change to the portfolio's \(RRR\).\footnote{It should
be noted that, in an ideal scenario, we'd have transparency into the
constituency of the current portfolio, and that's obviously not
provided here. The portfolio's particular makeup could influence
allocation decisions, likely affecting choices about weightings or
even the binary question of inclusion. In the scenario painted in this
paper, we implicitly assume the portfolio manager cannot see their own
portfolio's inner-workings. This clear limitation may be a relatable
one, though. How often does a portfolio manager within a large
institution (like a pension fund) have access to, or the ability to
affect, other parts of the portfolio? So while this is indeed a
shortcut, it's a fair one that allows us to make progress. I'd add two
more points: (1) prioritizing opportunities based on Sharpe ratios
ignores the portfolio's inner workings too, and (2) this paper is not
proposing a better way to optimize portfolio weights but only a fairer
way to quickly filter opportunities.}
Think about that. We essentially took an asset (\(A_1\)), added nearly
40\% more risk and ended up with the same resulting portfolio. How's
that possible? Look at the formula for the post-allocation risk (\(S\)),
specifically the last two terms: \(w_a^2 \sigma_a^2\) and
\(2 w_p w_a \sigma_p \sigma_a \rho\). Respectively, these items describe
risk contributions from the prospective asset individually and in
combination with the original portfolio. As we increase the asset's
risk, the first term grows quadratically, and the second term shrinks
linearly (remember, the correlation is negative). It turns out that, at
least temporarily, the shrinking effect from the negative correlation
outweighs the quadratic term. We can show this graphically by plotting
how these terms and their sum change as we adjust the asset's risk:
\begin{center}\includegraphics{paper_files/figure-latex/Risk terms-1} \end{center}
For a while, the red line, which describes the risk reduction from the
negatively correlated combination, wins out, allowing for pairs like
\(A_1\) and \(A_2\).
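To make that concrete with the numbers from the two risk calculations above,
the last two terms under the square root work out to
\[
A_1:\ 0.0000634 - 0.0004298 \approx -0.000366,
\qquad
A_2:\ 0.0021197 - 0.0024862 \approx -0.000366,
\]
so the two assets change the portfolio's risk by exactly the same amount.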
When I first discovered the math behind these strange asset pairs (see
the Appendix for more details), I remember feeling uneasy. We're
strongly trained to view more risk, holding all else constant, as a bad
thing. In some ways, it's a sort of tenet of sound financial thinking.
But here, in this example, that's just not the case.
I doubt at this point you'll be too surprised to learn that all the
other assets in the preference test also deliver the exact same 10\%
improvement to the portfolio's return to risk ratio. Each asset from the
test is, from a quantitative perspective, equivalent. You were right
(and wrong) whatever you picked.
\hypertarget{visualizing-equivalence}{%
\subsection{Visualizing equivalence}\label{visualizing-equivalence}}
In order to make sense of the equivalence behind these seemingly
disparate assets, let's first plot them in a risk-return space:
\begin{center}\includegraphics{paper_files/figure-latex/Scatter-1} \end{center}
At first glance it's hard to see the ``equivalence.'' Looking at this
plot, it is natural to gravitate towards points higher and to the left.
In this light, \(A_1\), \(D_1\), \(C_2\), \(B_1\) look preferable. And
if the \(RRR\) of each asset is our sole consideration, then these truly
are the preferable assets. But the story is incomplete without considering
correlation. When we do that, the picture begins to take shape:
\begin{center}\includegraphics{paper_files/figure-latex/Scatter with correlation lines-1} \end{center}
Each line drawn in this plot represents a class of assets that:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Share some correlation to the portfolio (0.8, 0.5, 0.4, -0.2, or -0.6)
\item
Deliver a 10\% improvement, measured in \(RRR\), to the original
portfolio when allocated at a 10\% weight
\end{enumerate}
With the added correlation lines it's easier to see how, as the
correlation drops (corresponding to lines of ``cooler'' coloring), less
return is required to deliver the same 10\% improvement. Put
differently, return is traded for lower correlation such that the
resulting portfolio improvement remains constant.
Correlation is what connects these assets. While that's obvious now, it
wasn't when you were selecting preferences. Whatever you picked, you
still picked, and I think this proves two important things:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
How difficult it is to internalize correlation
\item
At the same time, how important correlation really is
\end{enumerate}
This is a problem, right? \(RRR\)s and Sharpe ratios are mentally
portable. They are easy, compact and can be helpful when correlations
are ``reasonably similar.'' But even by that logic you'd have developed
illogical preferences for \(A_1\) (over \(A_2\)) and \(C_2\) (over
\(C_1\)). It seems, especially within the world of hedge funds where
lower correlations to traditional assets are more common, something
better is required that:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Incorporates correlation
\item
Remains simple to calculate
\end{enumerate}
Luckily, I develop just the formula below. While being remarkably
simple, the formula is surprisingly difficult to derive, which may
explain why I haven't seen it used much. Starting with the idea of an
indifference curve, I'll reveal the intuition and a bit of the math
behind the derivation.
\hypertarget{indifference-curves}{%
\subsection{Indifference curves}\label{indifference-curves}}
An indifference curve, in the traditional context of economics, connects
combinations of variables into a curve for which a consumer is
indifferent. They are indifferent because each point on one of these
curves corresponds to the same level of utility or happiness or reward.
In the context of portfolio mathematics we'll connect combinations of
prospective asset return, risk and correlation for which an allocator is
indifferent. Since the idea of utility translates nicely into \(RRR\)s
(we're generally happier with more return per unit of risk), what we're
really examining are different kinds of prospective assets that deliver
the exact same change to the portfolio. And this should sound familiar.
In the preference test, we constructed a set of 8 assets that exhibited
very different returns, risks and correlations, for which we were
indifferent; they all delivered the same +10\% change to the portfolio's
\(RRR\).
To derive the new formula, we'll focus not on assets that deliver a
+10\% change, but those that deliver no change. This is a little
confusing since it's hard to see the practical value of such an
exercise. Essentially these are assets that, when added at some weight
(\(w_a\)), do not change the portfolio's return over risk. It's
important to realize that such an asset doesn't have to be identical to
the portfolio with a correlation of 1.0. It can exhibit any correlation
or risk or return subject to some quantitative constraints. For
instance, if some prospective asset delivers relatively low return, it
will obviously lower the portfolio's return. It must balance this with
attractive (i.e.~low) correlation that will serve to reduce the
portfolio's risk. If the risk is lowered enough to offset the lower
return, the ratio of the two is preserved and the portfolio's \(RRR\) is
unchanged.
We can express this idea formally:
\[
\frac{r_p}{\sigma_p} = \frac{w_p r_p + w_a r_a}
{\sqrt{w_p^2 \sigma_p^2 + w_a^2 \sigma_a^2 + 2 w_p w_a \sigma_p \sigma_a \rho}}
\]
\[
\begin{aligned}
Where: & \\
r_p & = \text{return of the current portfolio} \\
r_a & = \text{return of the prospective asset} \\
\sigma_p & = \text{risk of the current portfolio} \\
\sigma_a & = \text{risk of the prospective asset} \\
\rho & = \text{correlation between the current portfolio and the prospective asset} \\
w_p & = \text{weight of the current portfolio post-allocation} \\
w_a & = \text{weight of the prospective asset post-allocation} \\
w_a & = (1 - w_p)
\end{aligned}
\]
The left hand side of the equation is just the current, pre-allocation
portfolio's return over risk. It's the starting point. We're interested
in the class of prospective assets that, when combined with the
portfolio at a particular weight (\(w_a\)), do not change the original
return over risk. We can call these ``do no harm'' or ``no change''
assets. Since this class of assets all essentially do nothing, we're not
only indifferent to which member of the class we choose, but whether we
choose one or not. Whatever we do, it doesn't make a difference; the
portfolio's return over risk will remain the same.
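For concreteness, here is one such ``no change'' asset I computed for the
5\%/15\% portfolio with \(w_a = 0.1\) (my own example, not one drawn from the
preference test): \(r_a = 2\%\), \(\sigma_a \approx 75.8\%\), \(\rho = -0.2\).
Plugging in:
\[
R = 0.9 \times 0.05 + 0.1 \times 0.02 = 0.047,
\qquad
S \approx 0.141,
\qquad
\frac{R}{S} \approx 0.333 = \frac{r_p}{\sigma_p}
\]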
\hypertarget{so-why-are-we-interested-at-all}{%
\subsection{So why are we interested at
all?}\label{so-why-are-we-interested-at-all}}
Good question. Let's first graph these curves and get back to that.
\begin{center}\includegraphics{paper_files/figure-latex/Indifference curves-1} \end{center}
Each of these lines is an indifference curve, representing different
kinds of those ``no change'' assets, drawn for a particular portfolio
weight (\(w_p\)). Every portfolio weight implies an allocation weight
(\(w_a\)) since \(w_a = 1 - w_p\). The x-axis here is simply the
correlation between the portfolio and prospective asset. The y-axis
captures the relative return to risk of the asset when compared to the
portfolio. When the \(Relative\ RRR\) is:
\begin{itemize}
\tightlist
\item
= 1, the return over risk ratios of the asset and portfolio are
the same
\item
\textgreater{} 1, the return over risk ratio of the asset is greater
than the portfolio
\item
\textless{} 1, the return over risk ratio of the asset is less than
the portfolio
\end{itemize}
Notice how, as the weight allocated to the asset increases (the lines
move upward, from green to purple), the asset must be more performant in
order to do no harm; it must be better relative to the portfolio. Put
differently, as the role played by the asset increases, more is required
of it, and that sounds about right.
If we go the other way and shrink the weight allocated to the
prospective asset, we obviously reduce what's required of it. It turns
out, as a consequence of the math, that if we allow the weight allocated
to approach but not equal zero (illustrated by the dashed line), we're
left with an amazingly concise limiting scenario:
\[
\frac{RRR_a}{RRR_p} = \rho
\]
\[
\begin{aligned}
Where: & \\
RRR_p & = \text{return to risk ratio of the current portfolio}\ (r_p / \sigma_p) \\
RRR_a & = \text{return to risk ratio of the prospective asset}\ (r_a / \sigma_a) \\
\rho & = \text{correlation between the current portfolio and the prospective asset} \\
\end{aligned}
\]
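One quick way to verify this limiting result (my own sketch, not the
derivation used here, which proceeds through the indifference curves): with
\(w_p = 1 - w_a\), write the post-allocation ratio \(R/S\) as a function of
\(w_a\) and differentiate at \(w_a = 0\):
\[
\left.\frac{d}{dw_a}\frac{R(w_a)}{S(w_a)}\right|_{w_a = 0}
= \frac{r_a \sigma_p - r_p \sigma_a \rho}{\sigma_p^2}
\]
This derivative is zero exactly when \(r_a / \sigma_a = \rho \, r_p /
\sigma_p\), that is, when \(RRR_a / RRR_p = \rho\), and it is positive
whenever \(RRR_a > \rho \times RRR_p\).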
This formula describes what's required, in an absolute bare-minimum
mathematical sense, of a prospective asset in order to do no harm. If
for some reason the asset's correlation goes up or its \(RRR\) down, by
any amount, we know that the asset will only harm the portfolio
irrespective of the weight chosen.
Take a minute to appreciate the brevity. Just three terms are required
to answer this question: ``What is required from an asset (in terms of
return, risk and correlation) in order to add value to my portfolio?'' I
think that's awesome.
Since we'd like to be better than the bare-minimum, we can add an
inequality to this expression and rearrange it in two ways in order to
reinforce its practical usability:
\[
\frac{RRR_a}{\rho} > RRR_p
\]
\[
RRR_a > \rho \times RRR_p
\]
Given your own portfolio's return over risk, the return over risk of
some prospective asset and how they correlate, you can immediately
determine if the asset will be able to add value. That's powerful and
simple.
For example, assume you still hold the 5\% return and 15\% risk
portfolio from the preference test. Let's say you've just been presented
with an investment opportunity that exhibits a 0.7 correlation to your
portfolio and a return to risk ratio of 0.2, which isn't great but
probably sounds additive. Using the first arrangement, we know that the
quotient of \(RRR_a\) and \(\rho\) must be greater than \(RRR_p\). When
we divide 0.2 by 0.7, we're left with about 0.286, which is actually
less than 0.333. Using the second arrangement, we know that the product
of \(RRR_p\) and \(\rho\) must be less than \(RRR_a\). When we multiply
0.7 and 0.333, we're predictably left with about 0.233, which is not
less than 0.2. Therefore, at any weight, we know this asset will reduce
the return to risk ratio of the portfolio and probably shouldn't be
where we focus our effort.
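In display form, the two checks from that example are:
\[
\frac{RRR_a}{\rho} = \frac{0.2}{0.7} \approx 0.286 < 0.333 \approx RRR_p,
\qquad
\rho \times RRR_p = 0.7 \times 0.333 \approx 0.233 > 0.2 = RRR_a
\]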
\hypertarget{interpretations}{%
\subsection{Interpretations}\label{interpretations}}
It's worth noting a few points about this inequality. First, as far as I
can tell, it cannot be used to somehow rank prospective assets. It can
only serve as a binary filter: yes or no. This might feel like a real
limitation. \(RRR\)s are absolutely rankable. They are measurements of
the same unit (risk). But as we've shown in this paper, those rankings
are not indicative of their true value within the context of a
portfolio. Making decisions based only on return and risk is like
ranking runners based on their times without asking how far they ran. It
doesn't make sense. If you take away one thing from this paper, this
should be it!
Second, correlation is best understood as a sort of performance hurdle.
For assets exhibiting low correlation, less is required of their
standalone performance (i.e.~return over risk), all else equal. In fact,
plug zero correlation into the second inequality and you get: \(RRR_a\)
\textgreater{} 0. This means that if you happen to find a truly
zero-correlation asset it will be additive as long as it has positive
returns.\footnote{Correlation, at least compared to return and risk, is
a relatively unstable measurement. Its measurement error is
comparatively large. This is a result of the math that defines it.
Interestingly, our portfolio is relatively stable with respect to
changes in correlation. Since we never realize our exact expectation
of return, risk and correlation (we get a draw from the distribution
instead), both of these effects should be considered in practical
applications. This is an area I hope to write more about soon.}
Lastly, return over risk can still be helpful. If you look back at the
chart, you'll see that we don't plot relative \(RRR\)s that are greater
than one. These points correspond to prospective asset \(RRR\)s that are
greater than our portfolio's. The point is that if you find an asset
whose \(RRR\) is greater than your portfolio's, it will add value
regardless of the correlation. But that's not justification for focusing
solely on return over risk. If you are managing a well diversified
portfolio you should, by definition, struggle to find these better
performing assets. You'll more likely need to add value using assets
that exhibit \(RRR\)s which are less than what you already have.
\hypertarget{implications}{%
\subsection{Implications}\label{implications}}
Even though we've been writing about them for years\footnote{Burghardt,
Walls \emph{Managed futures for Institutional Investors (Chapter 12:
Superstars versus Teamwork)}. If you'd like a copy of this work,
please email me.}, we know return to risk ratios and Sharpe ratios are
not going anywhere. That's fine, but we do hope that people remain
cognizant of their limitations. Those specific limitations were made
very obvious in the above preference test.
I also hope this work helps practitioners remain mindful of correlation
when evaluating potential portfolio changes. Using the formulas derived
just above, I believe that task becomes easier, but it does require the
standard conversation to change somewhat. In particular, I think hedge
fund investors/allocators should be more open with their portfolio
returns. These are often highly diversified portfolios. The risk of
reverse engineering individual allocations or positions is very low and
the upside of prospective investments being prepared with their unique
correlations to your portfolio is real.
Likewise, hedge fund managers shouldn't be afraid to ask for an
investor's benchmarks and/or their performance. Bring correlation to the
conversation more often, for it's an important and somehow
underappreciated avenue for value delivery and differentiation.
\hypertarget{references}{%
\subsection{References}\label{references}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
I'm very thankful for the many colleagues and friends who provided a
patient, helpful ear during the writing of this.
\item
Parts of this work (primarily the idea of Sharpe indifference curves)
were influenced by a wonderful presentation from Roy Niederhoffer of
\href{https://www.niederhoffer.com/}{R. G. Niederhoffer Capital
Management}, given during the 2016 \href{https://timesummit.org/}{time
Summit}.
\item
During this research, I came across
\href{https://pdfs.semanticscholar.org/c094/f7fd32f6e5c36f121a0d246d6127587a473a.pdf}{related
and interesting work} from Marcos Lopez De Prado and David Bailey of
the Lawrence Berkeley National Laboratory. They are motivated by
similar observations and arrive at comparable conclusions.
\end{enumerate}
\hypertarget{appendix}{%
\subsection{Appendix}\label{appendix}}
\(A_1\) and \(A_2\) are examples of an interesting subset of solutions
to the two asset functions for return and risk. Yes, these are somewhat
fun and surprising demonstrations of portfolio math, but they also
reveal how it can be logical, in certain situations, to want more risk
from an investment, all else equal. That's fun too.
Let's start with a plot of \(A_1\) and \(A_2\):
\begin{center}\includegraphics{paper_files/figure-latex/A1 A2-1} \end{center}
The line in this plot connects assets that, just like \(A_1\) and
\(A_2\), deliver a +10\% change with a correlation of -0.2. And since
this line is a parabola, we can, in certain situations, find two asset
risks (i.e.~solutions) given a particular prospective asset return. If
you think about this for a minute, it shouldn't be too surprising. Take
a look at the formula below, which is our starting point for drawing
these curves:
\[
RRR_{pa} = \frac{w_p r_p + w_a r_a}{\sqrt{w_p^2 \sigma_p^2 + w_a^2 \sigma_a^2 + 2 w_p w_a \sigma_p \sigma_a \rho}}
\]
\[
\begin{aligned}
Where: & \\
RRR_{pa} & = \text{return to risk ratio of the portfolio and the prospective asset combination} \\
r_p & = \text{return of the current portfolio} \\
r_a & = \text{return of the prospective asset} \\
\sigma_p & = \text{risk of the current portfolio} \\
\sigma_a & = \text{risk of the prospective asset} \\
\rho & = \text{correlation between the current portfolio and the prospective asset} \\
w_p & = \text{weight of the current portfolio post-allocation} \\
w_a & = \text{weight of the prospective asset post-allocation} \\
w_a & = (1 - w_p)
\end{aligned}
\]
Basically, if we solve for \(\sigma_a\) here, we'll be left with an
equation in quadratic form (\(a\sigma_a^2 + b\sigma_a + c = 0\) ). It is
a bit messy since our \(a\), \(b\) and \(c\) are multivariate
expressions, but that's what computers are for. Additionally, check out
this paper's supporting markdown source file\footnote{This paper was
written in markdown with code weaved throughout. It was rendered with
\href{https://rmarkdown.rstudio.com/}{RMarkdown} and the
\href{https://rstudio.github.io/reticulate/}{reticulate package}. The
raw file, which includes all of the code supporting the charts and
calculations, is available on
\href{https://github.com/cjken/portfolio-intuition}{Github}.},
specifically the function \texttt{implied\_s2}. Given all the variables
in the above equation (besides \(\sigma_a\)), that function will return
the implied prospective asset risk, which could be a pair like \(A_1\)
and \(A_2\) if the following constraints are satisfied:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
the prospective asset's return must be less than the portfolio's
\item
and greater than \(-\large{\frac{r_p w_p}{w_a}}\) (that's a fun one to
figure out)
\item
the correlation must be negative
\end{enumerate}
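For reference, here is the quadratic form mentioned above written out
explicitly (my own expansion of the displayed formula; the paper's
\texttt{implied\_s2} function presumably does the equivalent). Setting
\(RRR_{pa}\) equal to a target ratio \(k\) and squaring both sides gives
\(a\sigma_a^2 + b\sigma_a + c = 0\) with
\[
a = k^2 w_a^2,
\qquad
b = 2 k^2 w_p w_a \sigma_p \rho,
\qquad
c = k^2 w_p^2 \sigma_p^2 - (w_p r_p + w_a r_a)^2
\]
With the preference-test portfolio, \(k = 1.1 \times 0.05/0.15\), \(r_a =
4\%\) and \(\rho = -0.2\), the two roots come out to roughly \(7.96\%\) and
\(46.04\%\), i.e.~the pair \(A_1\) and \(A_2\).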
While the pairs are amusing, I think what's more interesting is that we
can add risk, get nothing in return for it (pun intended) and end up
better off. Look back at the \(A_1\) and \(A_2\) plot. Notice how, at
least until we hit the vertex, if we move from left to right,
representing an increase in risk, we're actually reducing return.
Essentially, this means that, in order to keep the benefit delivered
constant at +10\%, we actually need less return from the assets in
question. Put differently, if we added risk and didn't reduce return
we'd deliver more than a 10\% improvement; risk has a positive payoff
here, which is very cool. Note that this isn't some scaling effect where
by increasing the risk we increase the return too. We're actually
damaging the prospective asset's \(RRR\) by increasing the risk and
ending up better off.
This is very unintuitive, and to demonstrate that more viscerally, add
20 percentage points of risk to \(A_1\), leaving you with 27.96\%.
Change nothing else. Recalculate the effect of adding \(A_1\) to the
portfolio using the formula above. You'll see the change has increased
from 10\% to more than 11\%. Yes, this is modest, but 1\% is 1\%. And be
honest; given the choice between \(A_1\) and \(A_1\) plus 20\% more
risk, would you really pick the riskier one? This is another limitation
of thinking in \(RRR\)s.
Lastly, you can explore these ``riskier is better'' situations yourself
by taking the partial derivative of the \(RRR_{pa}\) formula above with
respect to \(\sigma_a\). After plugging in the previously used
parameters for the portfolio, plus the shared return and correlation
from \(A_1\) and \(A_2\), I plot the derivative below:
\begin{center}\includegraphics{paper_files/figure-latex/Partial-1} \end{center}
Focus on the portion of the curve where the partial is greater than zero
(corresponding to prospective asset risks of less than about 30\%).
Within this positive region, we know that a small increase in the
prospective asset risk translates into a positive change in the
resulting portfolio's \(RRR\). A very cool result.
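For completeness, that partial derivative can be written out (again my own
working, using the notation from the formula above, with \(S\) denoting the
denominator of \(RRR_{pa}\)):
\[
\frac{\partial RRR_{pa}}{\partial \sigma_a}
= -\frac{(w_p r_p + w_a r_a)\, w_a \,(w_a \sigma_a + w_p \sigma_p \rho)}{S^3}
\]
which is positive exactly when \(\sigma_a < -w_p \sigma_p \rho / w_a\); with
the parameters used here that threshold is \(0.9 \times 0.15 \times 0.2 / 0.1
= 27\%\), in line with the crossover just short of 30\% visible in the plot.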
\hypertarget{disclaimer}{%
\subsection{Disclaimer}\label{disclaimer}}
\emph{This is not a solicitation to buy or sell commodity futures or
options on commodity futures and should not be construed as such.
Futures and options trading involves substantial risk and is not for
everyone. Such investments may not be appropriate for the recipient. The
valuation of futures and options may fluctuate, and, as a result,
clients may lose more than their original investment. Nothing contained
in this message may be construed as an express or an implied promise,
guarantee or implication by, of, or from Bridge Alternatives Inc that
you will profit or that losses can or will be limited in any manner
whatsoever. Past performance is not necessarily indicative of future
results. Although care has been taken to assure the accuracy,
completeness and reliability of the information contained herein, Bridge
Alternatives Inc makes no warranty, express or implied, or assumes any
legal liability or responsibility for the accuracy, completeness,
reliability or usefulness of any information, product, service or
process disclosed.}
\end{document}
\documentclass[simplex.tex]{subfiles}
% NO NEED TO INPUT PREAMBLES HERE
% packages are inherited from simplex.tex; you can compile this on its own
\onlyinsubfile{
\title{NeuroData SIMPLEX Report: Subfile}
}
\begin{document}
\onlyinsubfile{
\maketitle
\thispagestyle{empty}
The following report documents the progress made by the labs of Randal~Burns and Joshua~T.~Vogelstein at Johns Hopkins University towards goals set by the DARPA SIMPLEX grant.
%%%% Table of Contents
\tableofcontents
%%%% Publications
\bibliographystyle{IEEEtran}
\begin{spacing}{0.5}
\section*{Publications, Presentations, and Talks}
\vspace{-20pt}
\nocite{*}
{\footnotesize \bibliography{simplex}}
\end{spacing}
%%%% End Publications
}
\subsection{ndreg}
The Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm is
an image registration method used to compute a smooth invertible
transform $\phi_{10}$ which aligns template image $I_0$ to target
image $I_1$. The log Jacobian determinant of the mapping $\log
|D\phi_{10}|$ measures local volume change during LDDMM. Wherever it’s
negative the template CLARITY brain expanded to match the targeted Allen
Reference Atlas (ARA). Wherever it’s positive the CLARITY brain
contracted. Figure~\ref{fig:ndreg} shows the $\log |D\phi_{10}|$ overlaid
on a CLARITY brain and a histogram of its values. It’s clear that this
brain expanded in most places to match the ARA. This was also the case
for 8 of the 9 other brains. Our calculations showed that
CLARITY brains were 21\% smaller than the ARA on average. This likely
occurred due to shrinkage introduced by the CLARIfying process.
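For readers who want to compute the same quantity on their own transforms, the following is a minimal, library-free sketch (it does not use the ndreg API) that estimates \(\log |D\phi|\) from a mapping sampled on a regular grid:
\begin{verbatim}
# Local volume change of a mapping phi sampled on a regular grid, measured
# by log |det D(phi)| via finite differences. Pure numpy; illustration only.
import numpy as np

def log_jacobian_determinant(phi, spacing=1.0):
    """phi: array of shape (3, nx, ny, nz) holding the mapped coordinates."""
    grads = [np.gradient(phi[i], spacing) for i in range(3)]  # d(phi_i)/d(x_j)
    J = np.empty(phi.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    return np.log(np.abs(np.linalg.det(J)))

# Identity mapping -> log |det| = 0 everywhere (no volume change).
grid = np.stack(np.meshgrid(*[np.arange(16.0)] * 3, indexing="ij"))
print(np.allclose(log_jacobian_determinant(grid), 0.0))
\end{verbatim}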
\begin{figure}[h!]
\begin{cframed}
\centering
\includegraphics[width=.73\textwidth]{./figs/ndreg.png}
\includegraphics[width=0.25\textwidth]{./figs/ndreg-md.png}\\
\includegraphics[width=0.75\textwidth]{./figs/ndreg-time.png}
\caption{
The metric distance between the CLARITY and ARA was computed at the
end point of the registration by integrating the time varying velocity
fields from LDDMM. It is a measure of how much the CLARITY brain
differed from the ARA. The next figure compares the calculated
distance when a Mean Square Error (MSE) cost is used during LDDMM to
registration with Mutual Information (MI). The metric distance was
consistently smaller for MI under all 3 conditions (Control, Cocaine
and Fear). This indicates that the quality of the registration was
higher under MI than under an MSE cost.
}
\label{fig:ndreg}
\end{cframed}
\end{figure}
NeuroData’s registration module (ndreg) uses the Large Deformation
Diffeomorphic Metric Mapping (LDDMM) for image alignment. LDDMM
computes a smooth invertible mapping between template image $I_0$ and
target image $I_1$. The plot below shows the timing results of experiments
registering a CLARITY brain template to the Allen Reference Atlas (ARA)
target. In the first experiment the CLARITY image was registered to the
ARA using a single-scale approach on a 50~$\mu$m grid. In the second
experiment registration was done by a coarse-to-fine multi-scale method.
Registration done at a lower resolution was used to initialize
the algorithm at the subsequent higher resolution level. Resolution
levels of 800, 400, 200, 100 and then 50~$\mu$m were used. The Mutual
Information (MI) between the deformed CLARITY and ARA images was
normalized to a range of $[0, 1.0]$ where 1.0 indicates that no
registration occurred and 0 indicates the best possible scenario ($I_0 =
I_1$). The plot in figure~\ref{fig:ndreg} shows that the multi-scale optimization was much faster
than the single-scale method.
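Schematically, the coarse-to-fine schedule amounts to the loop below, where \texttt{register} is a hypothetical stand-in for a single fixed-resolution LDDMM run rather than the actual ndreg call:
\begin{verbatim}
# Sketch of the coarse-to-fine schedule; `register` is a placeholder.
def multiscale_register(template, target, register,
                        levels_um=(800, 400, 200, 100, 50)):
    transform = None  # identity at the coarsest level
    for res in levels_um:
        # Each level is initialized with the transform estimated at the
        # previous, coarser level, so finer levels only refine the alignment.
        transform = register(template, target, resolution_um=res,
                             init=transform)
    return transform
\end{verbatim}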
\begin{figure}[h!]
\begin{cframed}
\centering
\includegraphics[width=0.75\textwidth]{./figs/ndreg-single.png}
\includegraphics[width=0.75\textwidth]{./figs/ndreg-multiscale.png}
\caption{
The images below are checkerboard composite images of the deformed
CLARITY image and the ARA. They show that usage of the multi-scale
approach did not reduce registration quality when compared to the
single-scale method.
}
\label{fig:ndregChecker}
\end{cframed}
\end{figure}
\end{document}
\documentclass[main.tex]{subfiles}
\begin{document}
% \section*{Tue Oct 29 2019}
Recall the equations from last time: continuity, momentum and energy conservation in the isothermal case:
%
\begin{subequations}
\begin{align}
\dot{M} &= 4 \pi r^2 \rho v \\
v \dv{v}{r} &= - \frac{1}{\rho } \dv{P}{r} - \frac{GM}{r^2} \\
T(r) &\equiv T
\,.
\end{align}
\end{subequations}
Also, by differentiating the ideal gas law \(P \mu = \mathcal{R} T \rho \) we get
%
\begin{equation}
\frac{1}{\rho } \dv{P}{r} = \frac{\mathcal{R} T}{\mu } \frac{1}{\rho } \dv{\rho }{r}
\marginnote{Divided through by \(\rho \).}
\,,
\end{equation}
%
since the temperature is constant.
Using this, we manipulate the momentum equation into
%
\begin{align}
v \dv{v}{r} &= - \frac{1}{\rho } \frac{\mathcal{R} T}{\mu } \dv{\rho }{r} - \frac{GM}{r^2}
\,,
\end{align}
%
but we know by the continuity equation that the density gradient must correspond to the velocity gradient:
%
\begin{equation}
- \frac{1}{\rho } \dv{\rho }{r} = \frac{1}{v} \dv{v}{r} + \frac{2}{r}
\,,
\end{equation}
%
which we can substitute into the equation: we get
%
\begin{align}
v \dv{v}{r} &= \frac{\mathcal{R} T}{\mu } \qty(\frac{1}{v} \dv{v}{r} + \frac{2}{r}) - \frac{GM}{r^2}
\,.
\end{align}
It is a known fact that the isothermal speed of sound is given by
%
\begin{equation}
a^2 = \pdv{P}{\rho } =\pdv{}{\rho } \qty(\frac{\mathcal{R} \rho T}{\mu }) = \frac{\mathcal{R} T }{\mu }
\implies
a = \sqrt{\frac{\mathcal{R} T}{\mu }}
\,,
\end{equation}
%
so
%
\begin{align}
v \dv{v}{r} &= a^2 \qty(\frac{1}{v} \dv{v}{r} + \frac{2}{r}) - \frac{GM}{r^2}
\,.
\end{align}
Now we move all the terms which are proportional to the velocity gradient on the LHS:
%
\begin{equation}
\dv{v}{r} \qty(v - \frac{a^2}{v}) = \frac{2a^2 }{r }-\frac{GM}{r^2}
\,,
\end{equation}
%
which can be written as
%
\begin{equation}
\frac{1}{v}\dv{v}{r} \qty(v^2 - a^2) = \frac{2a^2 }{r }-\frac{GM}{r^2}
\,,
\end{equation}
%
so the differential equation is singular at \(v=a\): if \(v=a\), the LHS vanishes, so the RHS must vanish too, which requires \(2a^2r = GM\); this fixes the radius to the so-called Parker radius, \(r_P = GM / (2 a^2)\), obtained by setting the RHS of the equation to zero.
Close to the star, the speed is subsonic (\(v < a\)), so the denominator \(D\) is negative; also the numerator \(N\) is negative in
%
\begin{equation}
\frac{1}{v} \dv{v}{r}
= \frac{2 a^2 / r - GM/ r^2}{v^2- a^2}
\overset{\text{def}}{=} \frac{N}{D}
\,,
\end{equation}
%
so on the whole \(N/D\) is positive,
which is consistent with our assumption \(\dv*{v}{r} > 0 \), which we make since we are considering winds, as opposed to accretion.
\begin{bluebox}
Why is the numerator negative? This is equivalent to saying
%
\begin{align}
2 \frac{\mathcal{R} T}{\mu } < \frac{GM}{r}
\,,
\end{align}
%
which means that \(\Circled{B} \times 4/5 < \Circled{C}\), (using the notation from equation \eqref{eq:energy-conservation-differential}) which holds since, as we wrote there, \(\Circled{B} \ll \Circled{C} \) near the stellar radius.
\end{bluebox}
Far from the star the numerator is positive, so the speed must be supersonic, so that \(D>0\) and we can still have \(N/D>0\) everywhere, guaranteeing that the velocity gradient is always positive.
So, the only physical solution is transonic.
The critical velocity \emph{must} be attained at the Parker radius in order to have a physically meaningful transonic solution; always-subsonic and always-supersonic solutions are mathematically possible but not usually observed.
The velocity gradient at the critical point can be found by de l'Hôpital's rule to be
%
\begin{equation}
\dv{v}{r} \bigg|_{r_P} = \pm \frac{2a^3}{GM}
\,.
\end{equation}
\begin{bluebox}
This is calculated by expanding in \(r\); however note that we must vary both \(r\) and \(v\) when we differentiate with respect to \(r\). We find:
%
\begin{subequations}
\begin{align}
\dv{v}{r} &= v \frac{2 a^2 / r - GM/ r^2}{v^2- a^2} \\
&= \bigg(a + \underbrace{\dv{v}{r}(r - r_P)}_{\text{higher order}}\bigg)
\frac{- 2 a^2/r_P^2 + 2 GM / r_P^3}{2 a \dv*{v}{r}} \marginnote{Differentiated above and below at \(r = r_P\) and \(v =a \)} \\
\qty(\dv{v}{r})^2 &= - \frac{a^2}{r_P^2} + \frac{GM}{r_P^3} \marginnote{Simplified the \(2a\)} \\
&= -a^2 \qty(\frac{2a^2}{GM})^2 + GM \qty(\frac{2a^2}{GM})^3 \marginnote{Substituted \(1/r_P = 2a^2 /(GM)\)}\\
&= -4 \frac{a^{6}}{(GM)^2} + 8 \frac{a^{6}}{(GM)^2} \\
\dv{v}{r} & = \pm 2 \frac{a^3}{GM}
\,.
\end{align}
\end{subequations}
%
\end{bluebox}
\begin{claim}[Exercise]
The speed of sound at the critical point equals half of the escape velocity at that radius.
\end{claim}
\begin{bluebox}
The escape velocity is given by
%
\begin{align}
v _{\text{esc}}^2 = \frac{2GM}{r}
\,,
\end{align}
%
but we know that \(r_P = GM / 2 a^2\): so
%
\begin{align}
a^2 = \frac{GM}{2 r_P}
\,,
\end{align}
%
which means that, if we calculate the escape velocity at the Parker radius, we have \(v _{\text{esc}}^2 / a^2 = 4\), so \(v _{\text{esc}} / a = 2\).
\end{bluebox}
There are exactly two transonic solutions: they both pass through the critical point \((r_P, a)\) of the \((r, v)\) plane, crossing there and forming an X. The solution with the decreasing velocity gradient is an accretion solution:
since what is plotted is the \emph{absolute value} of the velocity, the accretion solution has an always-negative gradient of \(|v|\).
Always-supersonic solutions and always-subsonic ones are also found, but they do not obey the monotonicity of the velocity, so we discard them.
The boundary condition is the velocity at some \(r_0 \): the problem is second-order, but if we select the specific transonic solution we remove the need for one of the boundary conditions.
The choice we make for \(v_0 = v(r_0)\) is key, and nontrivial.
If we have the density, velocity and radius at the lower boundary we have \emph{fixed} the mass-loss rate: \(\dot{M} = 4 \pi r_0^2 \rho_0 v_0 \).
This is actually fixed by specifying only the gravity, temperature and density at the star's edge. It would seem that we also need \(v_0 \), but we do not actually need it, as we will show in a moment.
With these hypotheses, we can solve the momentum equation analytically: we get
%
\begin{subequations}
\begin{align} \label{eq:isothermal-wind-speed-equation}
\frac{v}{v_0 } \exp( -\frac{v^2}{2 a^2}) &= \qty(\frac{r_0}{r})^2 \exp(\frac{GM}{a^2} \qty(\frac{1}{r_0 } - \frac{1}{r})) \\
&= \qty(\frac{r_0}{r})^2 \exp(\frac{2r_c}{r_0 } - \frac{2r_c}{r})
\,.
\end{align}
\end{subequations}
%
\begin{bluebox}
So, to calculate the Mach number we need to solve
%
\begin{align}
M e^{-M^2/2} = \frac{v_0 }{a} \qty(\frac{r_0 }{r})^2 \exp(\frac{2r_c}{r_0 } - \frac{2 r_c}{r})
\,,
\end{align}
%
but there seems to be an issue: the left hand side is bounded (it attains its maximum value of \(e^{-1/2} \approx 0.6\) at \(M=1\)) while the RHS is maximized at \(r = r_c\), where it is equal to
%
\begin{align}
\frac{v_0}{a} \qty(\frac{r_0 }{r_c})^2 \exp(\frac{2r_c}{r_0 } - 2)
\,,
\end{align}
%
which is dependent on \(v_0 \)! If we plug in solar values for the parameters, \(T = \SI{1e6}{K}\), \(\mu = \num{.62}\) and select an initial velocity like \(v_0 = \SI{1}{km/s}\) we get that the RHS's maximum is around 35!
This means that, in order to always have solutions, we need to fix \(v_0 \) to set the maxima to be equal.
This gives us
%
\begin{subequations}
\begin{align}
v_0 &= e^{-1/2} \qty(\frac{1}{a} \qty(\frac{r_0}{r_c})^2
\exp(\frac{2 r_c}{r_0 }) e^{-2})^{-1} \\
&= a \qty(\frac{r_c}{r_0 })^2 \exp(- \frac{2r_c}{r_0 } + \frac{3}{2})
\,.
\end{align}
\end{subequations}
Specifically, we can see that for the solar values mentioned before \SI{1}{km/s} was an overestimate, and with the values we fixed the initial velocity is actually more like \SI{17}{m/s}.
\end{bluebox}
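These numbers can be checked directly; the short sketch below assumes \(M = M_{\odot}\) and \(r_0 = R_{\odot}\) (the stellar mass and radius are not restated explicitly here, so these are assumptions) together with \(T = \SI{1e6}{K}\) and \(\mu = \num{.62}\).
%
\begin{verbatim}
# Quick check of the quoted numbers; M = M_sun and r_0 = R_sun are assumed.
import numpy as np

R_gas, G = 8.314, 6.674e-11          # SI units
M_sun, R_sun = 1.989e30, 6.957e8
T, mu = 1e6, 0.62e-3                 # K, kg/mol

a = np.sqrt(R_gas * T / mu)          # isothermal sound speed, ~116 km/s
r_c = G * M_sun / (2 * a**2)         # Parker radius, ~7.1 R_sun

rhs_max = (1e3 / a) * (R_sun / r_c)**2 * np.exp(2 * r_c / R_sun - 2)
v_0 = a * (r_c / R_sun)**2 * np.exp(-2 * r_c / R_sun + 1.5)

print(round(rhs_max))   # ~35: the RHS maximum for v_0 = 1 km/s
print(round(v_0))       # ~17 m/s: the consistent initial velocity
\end{verbatim}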
At large distances, we get \(v \rightarrow 2 a \sqrt{\ln(r/r_0)}\).
\begin{bluebox}
To derive this, take the log of the equation and take the limit \(r \rightarrow \infty \).
%
\begin{align}
-\frac{M^2}{2} + \log M
= \log \frac{v_0 }{a} + 2 \log \frac{r_0 }{r} + \frac{2 r_c}{r_0 } - \frac{2 r_c}{r}
\,,
\end{align}
%
so the RHS diverges as \(- \log r\) does for \(r \rightarrow \infty\), which means that the LHS must also diverge and become very large and negative. The polynomial term \(-M^2/2\) dominates the logarithmic one.
So, asymptotically we have \(- M^2/2 \approx 2 \log r_0 /r \), which means \(M \approx 2 \sqrt{\log (r/r_0) }\).
\end{bluebox}
\todo[inline]{This is reported incorrectly in the slides as \(v \rightarrow 2 a \log (r / r_0 )\).}
This is unphysical (the velocity diverges!?) and due to the fact that we assume a constant temperature (and, thus, energy input) even for diverging \(r\).
The density profile can be found from the continuity equation to be
%
\begin{align} \label{eq:isothermal-wind-density}
\frac{\rho}{\rho_0} \exp(\frac{1}{2} \qty(\frac{v_0 \rho_0 r_0^2}{a \rho r^2})^2) = \exp(2 r_c \qty(\frac{1}{r} - \frac{1}{r_0}))
\,,
\end{align}
%
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/atmosphere_critical_radius.pdf}
\caption{Velocity profile (solution to \eqref{eq:isothermal-wind-speed-equation}) and density wind profile (solution to \eqref{eq:isothermal-wind-density}), plus atmosphere (hydrostatic) profile (shown in \eqref{eq:density-profile-hydrostatic-equilibrium}).}
\label{fig:atmosphere_critical_radius}
\end{figure}
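The velocity profile in figure \ref{fig:atmosphere_critical_radius} can also be reproduced numerically: for each radius we solve the transcendental equation for the Mach number \(M = v/a\), taking the subsonic root inside the critical radius and the supersonic one outside. In the sketch below the ratio \(r_c/r_0 \approx 7.1\) comes from the solar values used above, and \(v_0\) is fixed by the matching condition derived before.
%
\begin{verbatim}
# Sketch: Mach number M(r) = v/a of the transonic wind solution.
import numpy as np
from scipy.optimize import brentq

r0, rc = 1.0, 7.1                                       # radii in units of r_0
v0_over_a = (rc / r0)**2 * np.exp(-2 * rc / r0 + 1.5)   # matching condition

def mach(r):
    rhs = v0_over_a * (r0 / r)**2 * np.exp(2 * rc / r0 - 2 * rc / r)
    f = lambda M: M * np.exp(-M**2 / 2) - rhs
    # subsonic branch inside the critical radius, supersonic outside
    return brentq(f, 1e-12, 1.0) if r < rc else brentq(f, 1.0, 50.0)

# v(r) = a * mach(r); the density then follows from the continuity equation.
for r in (1.0, 2.0, 5.0, 10.0, 50.0, 200.0):
    print(r, round(mach(r), 4))
\end{verbatim}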
Now we look at the structure of the wind in the subcritical region.
In the corresponding slide: the dashed line in the density profile is the density we'd expect in a hydrostatic atmosphere, while the solid one is the solution.
The hydrostatic density structure is given by
%
\begin{subequations}
\begin{align}
\frac{1}{\rho } \dv{P}{r} +\frac{GM }{r^2} &= 0
\,,
\end{align}
\end{subequations}
%
since we set the (log) velocity gradient to zero.
Manipulating this we get
%
\begin{equation}
\frac{r^2}{\rho } \dv{\rho }{r} = - \frac{GM}{a^2}
\,,
\end{equation}
%
So, integrating this we see that the density profile follows a decreasing exponential law:
%
\begin{equation} \label{eq:density-profile-hydrostatic-equilibrium}
\frac{\rho (r)}{\rho_0 } = \exp(\frac{-(r-r_0 )}{H_0 } \frac{r_0}{r} )
\,,
\end{equation}
%
where \(H_0 \) is the scale height, \(H_0 = \mathcal{R} T / \qty(\mu g_0 )\) with \(g_0 = GM / r_0^2\).
The length scale at which this decreases is defined by \(H_0 \).
The density profile in the subsonic region is very well approximated by this hydrostatic profile: as we can see in figure \ref{fig:atmosphere_critical_radius} in the subsonic region the log-velocity gradient is small, so the pressure contribution dominates.
The mass loss rate is our main prediction: we have \(\dot{M } = 4 \pi r^2_0 \rho_0 v_0 = 4 \pi r_c^2\rho_c a\).
In order to compute it at the critical radius, we can use the density profile equation:
%
\begin{equation}
\dot{M} = 4 \pi r_c^2 a \rho_0 \exp(\frac{-(r_c - r_0 )}{H_0 } \frac{r_0 }{r_c})
\,.
\end{equation}
%
\todo[inline]{This is an approximation, justified by the fact that in the subsonic region the density profile of the wind is almost the same as in hydrostatic equilibrium. There is a correction factor of \(\exp(-1/2)\).}
If we consider this numerically, we find that the exponential is the dominant part: the mass loss rate becomes lower when the critical point moves outward.
We can specify it by fixing
%
\begin{enumerate}
\item the temperature at the corona, \(T_C\);
\item the radius at the bottom of the corona, \(r_0 \);
\item the stellar mass \(M\);
\item the density at the bottom of the corona \(\rho_0 \).
\end{enumerate}
In the slides there are numerical estimates.
As \(H_0 \) increases, the density profile is less steep: the density remains high at larger radii.
\begin{bluebox}
Why is the dependence on \((r_c - r_0 ) /H_0 \) so strong? The idea seems to be that in order for the wind to expel a significant quantity of material it must accelerate it to a large velocity within a small distance; otherwise the wind only becomes supersonic where its density is already very small.
\end{bluebox}
\end{document}
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{xcolor}
\usepackage{listings}
\lstdefinelanguage{AnB}{
keywords=[1]{Agent, Number, Function, Symmetric_key, PublicKey},
keywords=[2]{Protocol, Types, Knowledge, Goals, Actions},
keywords=[3]{section, inits, rules, step, initial_state, attack_state, not, where, authenticates, on, secret, between},
keywordstyle=[1]\color{violet},
keywordstyle=[2]\color{magenta},
keywordstyle=[3]\bf,
sensitive=false, % keywords are not case-sensitive
morecomment=[l]{\#}, % line comment
morecomment=[l]{\%}, % line comment
%morecomment=[s]{/*}{*/}, % start and end delimiter
basicstyle=\footnotesize,
literate=%
{->}{\begingroup{\color{blue}$\rightarrow$}\endgroup}2%
{\{}{\begingroup{\color{blue}\{}\endgroup}1%
{\}}{\begingroup{\color{blue}\}}\endgroup}1%
% Catch {| |} s.t. we can add them to a math env. = less space between chars
{\{|}{\begingroup{\color{red}$\{\vert$}\endgroup}2%
{|\}}{\begingroup{\color{red}$\vert\}$}\endgroup}2%
{*}{$\bullet$}1,
}
\lstset{
language=AnB,
captionpos=b, % bottom
frame=single, % frame aroundcode
numbers=left
}
\title{AnB Syntax}
\author{Nicolai Søborg}
\date{May 08, 2017} %\today
\begin{document}
\section*{Introduction}
This is a test of the syntax highlighting for the language AnB.
\begin{lstlisting}
Protocol: Basic_Kerberos # Bounded-verified
Types:
Agent C,a,g,s;
Number N1,N2,T1,T2,T3,Payload,tag;
Function hash,sk;
Symmetric_key KCG,KCS
Knowledge:
C: C, a, g, s, sk(C, a), hash, tag;
a: a, g, hash, C, sk(C, a), sk(a, g), tag;
g: a, g, sk(a, g), sk(g, s), hash, tag;
s: g, s, sk(g, s), hash, tag
Actions:
C -> a: C,g,N1
a -> C: {| KCG, C, T1 |}sk(a,g),
{| KCG, N1, T1, G |}sk(C,a)
C -> g: {| KCG, C, T1 |}sk(a,g),
{|C,T1|}KCG,
s,N2
g -> C: {| KCS, C, T2 |}sk(g,s),
{| KCS, N2, T2, s |}KCG
C -> s: {| KCS, C, T2 |}sk(g,s),
{| C, T3 |}KCS
s -> C: {|T3|}KCS, {|tag,Payload|}KCS
Goals:
s *->* C: Payload
\end{lstlisting}
\end{document}
\section{Humans}\label{race:human}
Humans are a widely spread race of different ethnicities.
No other race is as numerous as humans are.
This is due to their high versatility and tenacity, as well as their adaptability.\\
They mature at 16 and live to around 80.\\
\textbf{Humans can freely add 4 attribute points, but they cannot increase any attribute above 12 with this.}\\
Additionally, humans can \textbf{create their character with 1800 CP instead of 1500}.\\
Humans start the game with 25 Health, 25 Stamina and 25 Mana.
"alphanum_fraction": 0.7836812144,
"avg_line_length": 65.875,
"ext": "tex",
"hexsha": "9c7cf650ddabf7427054582e8f752ab8dc56991c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_forks_repo_path": "charCreation/races/humans.tex",
"max_issues_count": 155,
"max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_issues_repo_path": "charCreation/races/humans.tex",
"max_line_length": 112,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper",
"max_stars_repo_path": "charCreation/races/humans.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z",
"num_tokens": 134,
"size": 527
} |
\subsection{Definition in I/O space}
\begin{itemize}
\item Each item in output receives \textbf{at most} 1 input $\implies$ injectivity
\item Each item in output receives \textbf{at least} 1 input $\implies$ surjectivity
\item If \textbf{both conditions are satisfied} $\implies$ bijectivity
\end{itemize}
Invertibility is therefore synonymous with bijectivity.
\subsection{Conclusions}
Injectivity concludes that $\mathrm{rank}(A)=m$, where $A$ is $n\times m$. This is because
there must be a leading one in each column.\\
Surjectivity requires a leading one in every row of $\mathrm{rref}(A)$; otherwise some augmented system would reduce to the inconsistent row $0\;0\cdots 0\;1$.
Thus there must be no rows of 0 in $\mathrm{rref}(A)$, i.e. $\mathrm{rank}(A)=n$, so all invertible matrices are square. Also,
an invertible matrix is \textbf{nonsingular} and a non-invertible matrix is \textbf{singular}.
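As a quick numerical sanity check (not part of the derivation), the two rank conditions can be tested directly:
\begin{verbatim}
# Injectivity: rank equals the number of columns.
# Surjectivity: rank equals the number of rows.
import numpy as np

def is_injective(A):
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(is_injective(A) and is_surjective(A))   # True: A is invertible
\end{verbatim}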
"alphanum_fraction": 0.7478580171,
"avg_line_length": 45.3888888889,
"ext": "tex",
"hexsha": "9d99922e93622b12da3f69b88df8bdc4cb2b30a7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bbd935b7ff9781169775c052625b1917a47d5dcc",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "sidnb13/latex-notes",
"max_forks_repo_path": "linear-algebra/tex/8_inverse_lin_trans.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bbd935b7ff9781169775c052625b1917a47d5dcc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "sidnb13/latex-notes",
"max_issues_repo_path": "linear-algebra/tex/8_inverse_lin_trans.tex",
"max_line_length": 98,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "bbd935b7ff9781169775c052625b1917a47d5dcc",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "sidnb13/latex-notes",
"max_stars_repo_path": "linear-algebra/tex/8_inverse_lin_trans.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 236,
"size": 817
} |
%!TEX TS-program = pdflatex
%!TEX TS-program = skim
%
% PyMC User's Guide
%
% Created by Chris Fonnesbeck on 2006-05-03.
% Copyright (c) 2006 . All rights reserved.
%
\documentclass[article]{jss}
%% almost as usual
\author{Anand Patil}
\title{A Gaussian process module for \pkg{PyMC}}
%% for pretty printing and a nice hypersummary also set:
\Plainauthor{Anand Patil} %% comma-separated
\Plaintitle{A Gaussian process module for PyMC} %% without formatting
% \Shorttitle{GP's For PyMC} %% a short title (if necessary)
%% an abstract and keywords
\Abstract{
This article introduces a package adding Gaussian process functionality to the Bayesian analysis package \pkg{PyMC}. Gaussian processes (GPs) are probability distributions for functions. In Bayesian statistics, they are often used as priors for functions whose forms are unknown. They can encode many types of knowledge about functions, yet remain much less restrictive than priors based on particular functional forms. GPs are not hard to understand at a conceptual level, but implementing them efficiently on a computer can require fairly involved linear algebra. This package implements Gaussian processes as a set of \proglang{Python} classes that can support many types of usage, from intuitive exploration to embedding them in larger probability models and fitting with MCMC.
}
\Keywords{gaussian process, bayesian, \proglang{Python}}
\Plainkeywords{gaussian process, bayesian, Python} %% without formatting
%% at least one keyword must be supplied
%% publication information
%% NOTE: Typically, this can be left commented and will be filled out by the technical editor
%% \Volume{13}
%% \Issue{9}
%% \Month{September}
%% \Year{2004}
%% \Submitdate{2004-09-29}
%% \Acceptdate{2004-09-29}
%% The address of (at least) one author should be given
%% in the following format:
\Address{
Anand Patil\\
Malaria Atlas Project\\
Department of Zoology\\
University of Oxford\\
Oxford, OX1 3PS, UK\\
E-mail: \email{[email protected]}
}
% Use utf-8 encoding for foreign characters
%\usepackage[utf8]{inputenc}
% % Setup for fullpage use
% \usepackage{fullpage}
% \usepackage{amsmath}
\usepackage{epsfig}
\usepackage{upquote}
\usepackage{verbatim}
%
% % \usepackage{pdfsync}
%
% % Flexible citation syntax
% \usepackage{natbib}
% % Uncomment some of the following if you use the features
% %
%
% % Multipart figures
% %\usepackage{subfigure}
%
% % More symbols
% \usepackage{amsmath}
% \usepackage{amssymb}
% % \usepackage{latexsym}
%
% % Package for including code in the document
% \usepackage{listings}
%
% % Surround parts of graphics with box
% %\usepackage{boxedminipage}
%
% % This is now the recommended way for checking for PDFLaTeX:
% \usepackage{ifpdf}
%
% % Enable hyperlinks
% % \usepackage[pdfpagemode=FullScreen,colorlinks=true,linkcolor=red]{hyperref}
%
% % \ifpdf
% % \usepackage[pdftex]{graphicx}
% % \else
% % \usepackage{graphicx}
% % \fi
%
% %%% EPYDOC STUFF %%%
\usepackage{underscore}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}\label{sec:firstlook}
Gaussian processes are probability distributions for functions. The statement `random function $f$ has a Gaussian process distribution with mean $M$ and covariance $C$' is usually written as follows:
\begin{equation}
f\sim\textup{GP}(M,C).
\end{equation}
The two parameters of the distribution are analogous to the parameters of the normal distribution. $M$ is the mean function, $M(x)=\E(f(x))$. $C$ is the covariance function, $C(x,y)=\COV(f(x),f(y))$. Among other things, $C$ regulates the amount by which $f$ may deviate from $M$ at any input value $x$, the roughness of $f$ and the typical lengthscale of changes in $f$.
As with any probability distribution, random values can be drawn from a Gaussian process. However, these values (called `realizations') are actually functions rather than the usual numbers or vectors. This package represents these random values as \code{Realization} objects which, in accordance with intuition, behave like \proglang{Python} functions with a few extra features.
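To make this concrete without depending on the package's API (which is introduced in the tutorials below), the following sketch shows what `drawing a realization' amounts to: sampling a multivariate normal on a grid and wrapping the result so that it behaves like an ordinary \proglang{Python} function. The helper \code{make\_realization} is illustrative only and is not part of the package.
\begin{verbatim}
# Conceptual sketch only -- this does NOT use the pymc.gp API.
import numpy as np

def make_realization(M, C, grid, jitter=1e-8, seed=None):
    """Draw f ~ GP(M, C) on `grid`, returned as a callable (linear interp)."""
    rng = np.random.default_rng(seed)
    mean = np.array([M(x) for x in grid])
    cov = np.array([[C(x, y) for y in grid] for x in grid])
    f_vals = rng.multivariate_normal(mean, cov + jitter * np.eye(len(grid)))
    return lambda x: np.interp(x, grid, f_vals)

M = lambda x: 0.0                           # zero mean function
C = lambda x, y: np.exp(-0.5 * (x - y)**2)  # squared-exponential covariance
f = make_realization(M, C, np.linspace(-3, 3, 200), seed=0)
print(f(0.5), f(1.7))   # the realization behaves like an ordinary function
\end{verbatim}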
Gaussian processes are represented by \code{GaussianProcess} objects. These are \pkg{PyMC} stochastic variables \citep{pymc} (analogous to \pkg{WinBugs}' stochastic nodes \citep{bugs}) valued as \code{Realization} objects. These models can be fit using Markov chain Monte Carlo \cite{gamerman}, after which the `dynamic trace' for each \code{GaussianProcess} consists of a sequence of \code{Realization} objects sampled from the target distribution.
This intuitive object model simplifies and expedites construction and fitting of probability models, as well as predictive simulation. It also makes available an enormous model space, which extends far beyond the standard linear model and generalized linear model families. The \code{GaussianProcess} object `is' a Gaussian process in the sense that a \pkg{PyMC} \code{Normal} variable `is' a normal random variable; as is the case for a \code{Normal}, a \code{GaussianProcess}'s descendants' dependencies on it can be defined using the full expressive power of \proglang{Python}.
This package's model-fitting functionality is generic enough to support, in theory, any model that can be defined using \code{GaussianProcess}s and the standard variable classes in \pkg{PyMC}. It is known to perform well for standard situations such as Bayesian geostatistics (see chapter \textbf{ref}), but it cannot provide good performance in all cases. \pkg{PyMC}'s friendly and extensible system of step methods can be used to create fine-tuned jumping strategies for particular applications.
To improve performance, all of this package's numerical computations are done in C or Fortran extensions, some of which are provided by the \pkg{NumPy} package \citep{numpybook}. Covariance function evaluations are multithreaded for sufficiently large matrices.
\medskip
This paper introduces the package through the two tutorials that follow. All examples can be found in the folder \code{pymc/examples/gp} in the \pkg{PyMC} source tree.
\include{tutorial1}
\include{tutorial2}
\section{Conclusion}
This package is built around the \code{Realization} object, which represents random mathematical functions as random \proglang{Python} functions. This is arguably the most natural and intuitive representation possible within the \proglang{Python} programming language, which itself is widely regarded as unusually human-friendly.
This package inherits the flexibility of PyMC, as described in section \ref{sec:gp-sub}. Because \pkg{PyMC} allows any \proglang{Python} function to be used to transform variables in probability models, and \proglang{Python} (like all modern programming languages) allows functions to be passed to other functions as arguments, and \code{GaussianProcess} objects are function-valued random variables, this package supports construction of a wide variety of probability models that involve scalar-valued Gaussian processes. The Duffy negativity example in section \ref{sub:geostat} implements a probability model that is not a member of the generalized linear model family, whose nonstandard likelihood function involves transformations of two Gaussian processes. In keeping with the extensible spirit of PyMC, the package accommodates user-specified covariance functions, and provides support for automatic combination of covariance functions and distance metrics.
Strenuous efforts at optimization have resulted in good performance for `standard' Gaussian process-based analyses such as Bayesian geostatistics. For example, \cite{map} recently used it to conduct a fully Bayesian MCMC analysis involving a very expensive covariance function evaluated at 4,291 input locations. The library of covariance functions provided by the package is implemented in Fortran, and can take advantage of SMP systems. The linear algebra functionality is provided by \code{NumPy}, which can be configured to make use of optimized, multithreaded BLAS and LAPACK \textbf{cite} implementations.
However, there are many use cases for which this package cannot achieve performance on par with hand-optimized algorithms. For example, the \code{mesh} attribute of \texttt{GPSubmodel} objects is fixed on creation, but in many reversible-jump MCMC applications (e.g. \cite{gramacy} and the toy example above) the set of input locations under consideration changes as the MCMC progresses. It would be possible to fit these models using \texttt{GPSubmodel}, but the acceptance rate would typically be much lower than that enjoyed by a true reversible-jump MCMC algorithm for reasons explained in section \textbf{ref}. It remains to be seen whether this and related performance problems can be overcome without either diluting the conceptual fidelity of the object model or incurring an unacceptable level of program complexity.
More tutorial material, as well as documentation of several additional features and the incomplete Cholesky decomposition algorithms, can be found in the \href{http://pymc.googlecode.com/files/GPUserGuide.pdf}{package documentation}.
% \nocite{*}
% \bibliographystyle{plain}v
\bibliography{gp}
\end{document}
\documentclass{classrep}
\usepackage[utf8]{inputenc}
\frenchspacing
\usepackage{graphicx}
\usepackage[usenames,dvipsnames]{color}
\usepackage[hidelinks]{hyperref}
\usepackage{lmodern}
\usepackage{placeins}
\usepackage{url}
\usepackage{amsmath, amssymb, mathtools}
\usepackage{listings}
\usepackage{fancyhdr, lastpage}
\pagestyle{fancyplain}
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\cfoot{\thepage\ / \pageref*{LastPage}}
%--------------------------------------------------------------------------------------%
\studycycle{Applied Information Technology, 2 cycle}
\coursesemester{II}
\coursename{Soft Computing Laboratory}
\courseyear{2021/2022}
\courseteacher{dr inż. Kamil Stokfiszewski}
\coursegroup{Wednesday, 8:30}
\author{%
\studentinfo[[email protected]]{Jan Karwowski}{239671}\\
\studentinfo[[email protected]]{Kamil Kowalewski}{239676}\\
}
\title{Assignment 2.: MADALINE network for character recognition}
\begin{document}
\maketitle
\thispagestyle{fancyplain}
\tableofcontents
\newpage
\section{Main goal}
\label{main_goal} {
The main goal of this task is to prepare an implementation of a MADALINE network.
MADALINE stands for Multiple Adaptive Linear Elements. The purpose of the
network is to recognize single characters of given dimensions (e.g. 4x4).
}
\section{Theoretical background}
\label{theory} {
A MADALINE network is just a series (or rather a single layer) of linear units called
artificial neurons. Fig. \ref{fig:madeline} presents the structure of such a
model. It has $M$ inputs, which are propagated to $N$ neurons. Each neuron is
responsible for identifying a single character (e.g. ,,B'') and has its
own set of weights, which decide what it focuses on. The value
of $M$ is determined by the image dimensions, while the value of $N$ is
determined by the number of recognized characters. The normalized, binarized
and flattened image template (one per character) is used to set
up each neuron's weight vector. Given a single template image for each
character, the whole MADALINE network can therefore be constructed directly.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\textwidth]{img/madeline.jpg}
\caption{Madaline network}
\label{fig:madeline}
\end{figure}
\FloatBarrier
}
\section{Implementation details}
\label{implementation} {
The program was written in the Python programming language, using the popular numpy
library. The template and test ``images'' are parsed and loaded
from txt files: the \textit{\#} character is converted to 1 and
\textit{-} is converted to 0. Thanks to this we can store the artificial image in
the computer's memory and perform further computations. An object of the Madaline class
accepts such an artificial image and returns the predicted character when its
prediction method is called. The whole infrastructure is exercised in the main
method, and results demonstrating the effectiveness of the MADALINE network are
printed when the program is run.
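The listing below is a minimal sketch of this approach; the class is simplified and the 4x4 patterns are illustrative stand-ins, not the exact program nor the templates from fig.~\ref{fig:template_images}. Weights are the normalized template vectors and the activation is the normalized dot product between the test image and each template.
\begin{verbatim}
import numpy as np

def parse_image(rows):
    # '#' -> 1, '-' -> 0, flattened to a vector
    return np.array([1.0 if ch == '#' else 0.0 for row in rows for ch in row])

class Madaline:
    def __init__(self, templates):
        self.labels = list(templates)
        vecs = [parse_image(rows) for rows in templates.values()]
        self.weights = np.stack([v / np.linalg.norm(v) for v in vecs])

    def activations(self, rows):
        x = parse_image(rows)
        return self.weights @ (x / np.linalg.norm(x))

    def predict(self, rows):
        return self.labels[int(np.argmax(self.activations(rows)))]

templates = {  # simplified patterns, not the report's actual templates
    "X": ["#--#", "-##-", "-##-", "#--#"],
    "Y": ["#--#", "-##-", "--#-", "--#-"],
    "Z": ["####", "--#-", "-#--", "####"],
}
net = Madaline(templates)
print(net.predict(["#--#", "-##-", "-##-", "#--#"]))  # -> X (activation 1.0)
\end{verbatim}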
}
\section{Experiments and results}
\label{results} {
In order to explore the MADALINE network's behaviour, a simple network was prepared. It
is created based on three 4x4 image templates describing the letters ,,X'',
,,Y'' and ,,Z''. These templates are presented in fig.~\ref{fig:template_images}.
To test the MADALINE network, a set of test characters was prepared. It has the same
structure as the template characters, i.e.\ it contains 4x4 images of the letters ,,X'', ,,Y''
and ,,Z'', but some malformed and negative (inverted) image examples were
added. These images are presented in fig.~\ref{fig:test_images}. After network
construction all test images were processed by the model and the results are presented
in table \ref{tab:results}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\textwidth]{img/template_images.jpg}
\caption{Template character images}
\label{fig:template_images}
\end{figure}
\FloatBarrier
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.8\textwidth]{img/test_images.jpg}
\caption{Test character images}
\label{fig:test_images}
\end{figure}
\FloatBarrier
\begin{table}[!htbp]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Test pattern name & ,,X'' activation & ,,Y'' activation & ,,Z''
activation \\ \hline
undisturbed letter "X" & 1.0 & 0.866 & 0.671 \\
disturbed letter "X" & 0.875 & 0.722 & 0.671 \\
negative of letter "X" & 0.0 & 0.0 & 0.447 \\
undisturbed letter "Y" & 0.866 & 1.0 & 0.645 \\
disturbed letter "Y" & 0.722 & 0.833 & 0.645 \\
negative of letter "Y" & 0.224 & 0.0 & 0.5 \\
undisturbed letter "Z" & 0.671 & 0.645 & 1.0 \\
disturbed letter "Z" & 0.64 & 0.615 & 0.858 \\
negative of letter "Z" & 0.289 & 0.167 & 0.0 \\ \hline
\end{tabular}
\caption{MADELINE results for test characters set}
\label{tab:results}
\end{table}
\FloatBarrier
}
\section{Summary and conclusions}
\label{summary} {
Looking at the results in table \ref{tab:results}, it is simple to find a rule that
describes the MADALINE behavior. When the network ,,sees'' an undisturbed letter, the
related neuron has the highest possible activation ($1.0$). In this scenario the
network cannot make a mistake, because the recognized image is exactly the same as the template
image: the dot product/correlation reaches its maximum value. The second scenario
concerns the disturbed images. Here we were somewhat lucky: in
all three cases, despite the malformed image, the proper neuron still has the highest
activation, but the margin is not large (e.g. $0.875$ versus $0.722$ for the
disturbed ,,X''). It is almost certain that, with more recognized
character classes, even a small distortion could cause the network to make a mistake. The last
test scenario is very straightforward: the undisturbed images were inverted,
producing ,,negatives'' of the letters. In such a case the proper neuron has exactly
$0$ activation, since there is zero similarity between the template image and the test image.
\begin{thebibliography}{0}
% @formatter:off
\bibitem{instruction}{Labolatory instruction, URL: https://ftims.edu.p.lodz.pl/pluginfile.php/75438/\\mod\_resource/content/2/soft\_comp\_lab\_02\_MADALINE.pdf}
% @formatter:on
\end{thebibliography}
\end{document}
\section{Parbox}
\directlua{for i=0,10 do tex.sprint("\\teststepparbox{1}{\lorema}{\loremb}") end}
% This macro records time for each run
\marktime
%Additionally, as explained in Section \ref{sec:uschema}, models trained to perform link prediction will often suffer from a lack of specificity resulting from learning only to `type check,' i.e. incorrectly predicting that Brad Pitt acted in the movie \emph{The Godfather}. Requiring text provinence makes it much more difficult to make this mistake, since it is highly unlikely that any text will support the above false relation. Embedding entity pairs, rather than embedding single entities then combining their encoded vectors to represent pairs, also helps to avoid the `type checking' problem, and given enough data has been shown to perform better than entity models \citep{toutanova2015representing}. We only experiment on embedding entity pairs as the distinction between entity and entity pairs is orthogonal to this work.
%Many papers evaluate on the popular FB15k dataset for link prediction in knowledge base completion. However, we do not find this interesting as it reduces to little more than type clustering. We are instead interested in learning high quality pattern encoders that, given providence, are able to accurately score the correlation \todo{better word} between patterns and patterns, and patterns and kb relations. In this situation, Brad Pitt would never be inferred to have acted in the Godfather unless there was provinence seen with that entity pair that scored highly with the acted\_in relation. Additionally, this allows us to evaluate unseen entities and address the coldstart problem.
%We choose to use entity pair vectors which, given adequate data, perform better than entity models see \citet{toutanova2015representing} test data with mentions. Although there are many models for kb embedding some using entity vectors and some using entity pair vectors, we do not investigate this distinction as it is orthogonal to our work.
\section{Task and System Description}
We focus on the TAC KBP slot-filling task. Much related work on embedding knowledge bases evaluates on the FB15k dataset \citep{transe,wang2014knowledge,lin2015learning,bishan,toutanova2015representing}. Here, relation extraction is posed as link prediction on a subset of Freebase. This task does not capture the particular difficulties we address: (1) evaluation on entities and text unseen during training, and (2) zero-annotation learning of a predictor for a low-resource language.
Also, note both~\citet{toutanova2015representing} and~\citet{limin} explore the pros and cons of learning embeddings for entity pairs vs. separate embeddings for each entity. As this is orthogonal to our contributions, we only consider entity pair embeddings, which performed best in both works when given sufficient data.
%, which has received considerable attention in the NLP community since it focuses on a practical AKBC scenario characteristic of various real-world applications.
\subsection{TAC Slot-Filling Benchmark}
The aim of the TAC benchmark is to improve both coverage and quality of relation extraction evaluation compared to just checking the extracted facts against a knowledge base, which can be incomplete and where the provenances are not verified. In the slot-filling task, each system is given a set of paired query entities and relations or `slots' to fill, and the goal is to correctly fill as many slots as possible along with provenance from the corpus. For example, given the query entity/relation pair (\emph{Barack Obama, per:spouse}), the system should return the entity \emph{Michelle Obama} along with sentence(s) whose text expresses that relation. The answers returned by all participating teams, along with a human search (with timeout), are judged manually for correctness, i.e. whether the provenance specified by the system indeed expresses the relation in question.
%Some slots, such as \emph{per:countries\_of\_residence} can be filled by multiple entities, whereas others, such as \emph{per:country\_of\_birth} can contain only one filler. The query entities are restricted to people (PER) and organizations (ORG) (rather than locations or other noun types, such as religion, which may fill query slots), and the 2013 English evaluation query set is made up of 50 PER entities and 50 ORG entities.
In addition to verifying our models on the 2013 and 2014 English slot-filling task, we evaluate our Spanish models on the 2012 TAC Spanish slot-filling evaluation. Because this TAC track was never officially run, the coverage of facts in the available annotation is very small, resulting in many correct predictions being marked incorrectly as precision errors. In response, we manually annotated all results returned by the models considered in Table~\ref{es-tac-table}. Precision and recall are calculated with respect to the union of the TAC annotation and our new labeling\footnote{Following \citet{surdeanu2012multi} we remove facts about undiscovered entities to correct for recall.}.
% 18 teams participated
% pooled and annotated - not just evaluated against (incomplete) knowledge base, and checked whether fact is actually expressed by text
% \todo{Ben: something about the popularity of this task. Why it is a good benchmark. }
\subsection{Retrieval Pipeline \label{sec:pipeline}}
Our retrieval pipeline first generates all valid slot filler candidates for each query entity and slot, based on entities extracted from the corpus using {\sc Factorie}~\citep{mccallum09:factorie:} to perform tokenization, segmentation, and entity extraction. We perform entity linking heuristically, using anchor text in Wikipedia and the fact that most Freebase entries contain a link to the corresponding Wikipedia page: all entity mentions in our text corpora are linked to a Freebase entity by the following process:
First, a set of candidate entities is obtained from frequent link anchor text statistics.
We then select the candidate entity for which the cosine similarity between the respective Wikipedia page and the sentence context of the mention is highest, and link to that entity if a threshold is exceeded.
An entity pair qualifies as a candidate prediction if it meets the type criteria for the slot.\footnote{Due to the difficulty of retrieval and entity detection, the maximum recall for predictions is limited. For this reason, \citet{surdeanu2012multi} restrict the evaluation to answer candidates returned by their system and effectively rescaling recall. We do not perform such a re-scaling in our English results in order to compare to other reported results. Our Spanish numbers are rescaled. All scores reflect the `anydoc' (relaxed) scoring to mitigate penalizing effects for systems not included in the evaluation pool.} The TAC 2013 English and Spanish newswire corpora each contain about 1 million newswire documents from 2009--2012. The document retrieval and entity matching components of our relation extraction pipeline are based on RelationFactory~\citep{roth2014relationfactory}, the top-ranked system of the 2013 English slot-filling task. We also use the English distantly supervised training data from this system, which aligns the TAC 2012 corpus to Freebase.
More details on alignment are described in Appendix \ref{sec:ds-el}.
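A schematic version of the linking heuristic is sketched below; the function name, data structures, candidate cut-off and similarity threshold are illustrative placeholders rather than the actual implementation.
\begin{verbatim}
# Illustrative sketch of the anchor-text entity linking heuristic.
import numpy as np

def link(mention, sentence_vec, anchor_counts, wiki_page_vecs, threshold=0.3):
    """anchor_counts[mention]: dict entity -> frequency as link anchor text."""
    cands = anchor_counts.get(mention, {})
    best, best_sim = None, -1.0
    for entity in sorted(cands, key=cands.get, reverse=True)[:10]:
        page = wiki_page_vecs[entity]
        sim = float(sentence_vec @ page /
                    (np.linalg.norm(sentence_vec) * np.linalg.norm(page) + 1e-12))
        if sim > best_sim:
            best, best_sim = entity, sim
    return best if best_sim >= threshold else None
\end{verbatim}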
%Our maximum recall is significantly limited by our entity extraction pipeline: $\sim$60\% for English using the TAC `anydoc' (relaxed) scoring. \todo{Ben: juxtapose this with stanford scoring. explain what the SOA numbers are.}
As discussed in Section~\ref{sec:non-comp}, models using a deep sentence encoder and using a pattern lookup table have complementary strengths and weaknesses. In response, we present results where we ensemble the outputs of the two models by simply taking the union of their individual outputs. Slightly higher results might be obtained through more sophisticated ensembling schemes. % We manually shift the models' thresholds to be more precision-biased, and take the union of the predictions returned by the two models. In contrast,~\citet{toutanova2015representing}, add the confidence scores of the systems and then apply a threshold. We found that this ensembling approach does not adequately account for the qualitative distinction in types of prediction that each technique can make accurately.
\subsection {Model Details \label{sec:models}}
All models are implemented in Torch (code publicly available\footnote{\url{https://github.com/patverga/torch-relation-extraction}}).
Models are tuned to maximize F1 on the 2012 TAC KBP slot-filling evaluation.
We additionally tune the thresholds of our pattern scorer on a per-relation basis to maximize F1 using 2012 TAC slot-filling for English and the 2012 Spanish slot-filling development set for Spanish.
As in~\citet{limin}, we train using the BPR loss of~\citet{rendle2009bpr}.
Our CNN is implemented as described in \citet{toutanova2015representing}, using width-3 convolutions, followed by tanh and max pool layers.
The LSTM uses a bi-directional architecture where the forward and backward representations of each hidden state are averaged, followed by max pooling over time.
See Section \ref{sec:details} for further details; a schematic sketch of the two encoders follows.
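The sketch below is a hedged PyTorch rendering of the two encoders, given for concreteness only; the original system was implemented in Torch/Lua, so this is an illustration rather than the actual code.
\begin{verbatim}
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):          # width-3 convolution, tanh, max pool
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):             # x: (batch, seq_len, dim)
        h = torch.tanh(self.conv(x.transpose(1, 2)))
        return h.max(dim=2).values

class BiLSTMEncoder(nn.Module):       # average directions, max pool over time
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)

    def forward(self, x):             # x: (batch, seq_len, dim)
        h, _ = self.lstm(x)           # (batch, seq_len, 2 * dim)
        fwd, bwd = h[..., :x.size(-1)], h[..., x.size(-1):]
        return ((fwd + bwd) / 2).max(dim=1).values
\end{verbatim}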
We also report results including an alternate names (AN) heuristic, which uses automatically-extracted rules to detect the TAC `alternate name' relation.
To achieve this, we collect frequent Wikipedia link anchor texts for each query entity.
If a high probability anchor text co-occurs with the canonical name of the query in the same document, we return the anchor text as a slot filler.
\chapter{Conclusions}
\label{ch:Conclusions}
This work proposed a new educational early warning system to be used in the context of a hybrid course offered by the IMD. This system uses predictive models to evaluate student performance each week and predict the likelihood of success in the course. The objectives of the research were achieved by following the steps of the EDM cycle, as described in figure \ref{fig:edm}. The source of the data is the IMD's IT technical course. After the preprocessing, four distinct classification algorithms were evaluated and compared. The Logistic Regression binary model was chosen based on the accuracy metric. This model's accuracy ranged from 66.51\%, in week 1, to 88.67\%, in week 18 (see table \ref{tab:lr}, and figures \ref{fig:ab} and \ref{fig:a3c}).
The questions that motivated this work were addressed. Firstly, the developed web system prototype provides a simple dashboard that enables students and teachers to receive periodic feedback. Even though the web application was not tested by the intended users, the prototype shows how the system can work. Secondly, the models trained during the experiments achieved satisfactory results: all four classifiers performed similarly well, and Logistic Regression was chosen as the final model based on its performance metrics.
\section{Future work}
Possible directions for future work include:
\begin{itemize}
\item University API integration: the web system developed in this work used only mocked data to simulate student and teacher interactions with the system. Thus, before the application can be used in the IMD's IT technical course, the University's official API would have to expose services for accessing the classes and grades of students and teachers. Given the modularity of the system developed, this would be an easy integration.
\item Model improvement: the Logistic Regression models obtained after all experiments in this research could still be improved. One possible improvement is to analyze the features and drop attributes with high mutual correlation (a minimal sketch of this idea is given after this list). This could simplify the models without losing much accuracy, and it is relevant because independence between features is one of the assumptions that Logistic Regression models make.
\item New features for the web system: the current web application has a dashboard with only one feature, for both teachers and students. New features could be added to improve the system, such as visualization of each class's average weekly grades and, for students, the possibility of simulating grades to see the probability of success given by the model.
\end{itemize}
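As a minimal illustration of the model-improvement direction mentioned above, the following Python sketch drops one attribute from each pair of highly correlated features before the Logistic Regression model would be refitted. The column names, the synthetic data and the 0.9 threshold are assumptions for illustration only; they are not the data set or code used in this work.
\begin{verbatim}
# Illustrative sketch: drop one feature from each highly correlated pair.
# Column names, synthetic data and the 0.9 threshold are assumptions.
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.9):
    """Remove one column of every pair whose |correlation| > threshold."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return df.drop(columns=to_drop), to_drop

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    week1 = rng.normal(size=200)
    data = pd.DataFrame({
        "week1_grade": week1,
        "week2_grade": week1 * 0.95 + rng.normal(scale=0.1, size=200),
        "forum_posts": rng.poisson(3, size=200),
    })
    reduced, dropped = drop_correlated(data)
    print("dropped:", dropped)   # e.g. ['week2_grade']
\end{verbatim}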
"alphanum_fraction": 0.8071481757,
"avg_line_length": 167.875,
"ext": "tex",
"hexsha": "ba41c3ffa22e0fae52e3f55a6c50bfb721d3b072",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ec277ee70b591c3a9cb3186e7c715b5cf26a42a6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lieet/Developing-a-web-system-for-predicting-student-success-using-learning-analytics",
"max_forks_repo_path": "Capitulos/TrabalhosFuturos.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ec277ee70b591c3a9cb3186e7c715b5cf26a42a6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lieet/Developing-a-web-system-for-predicting-student-success-using-learning-analytics",
"max_issues_repo_path": "Capitulos/TrabalhosFuturos.tex",
"max_line_length": 754,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ec277ee70b591c3a9cb3186e7c715b5cf26a42a6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lieet/Developing-a-web-system-for-predicting-student-success-using-learning-analytics",
"max_stars_repo_path": "Capitulos/TrabalhosFuturos.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 526,
"size": 2686
} |
%%----------------------------------------------------------------------------|
%% EGM USERS GUIDE |
%% |
%% P. J. C. Hodder |
%%----------------------------------------------------------------------------|
\documentstyle[11pt,twoside,psfig]{article}
\input{egm_macros.tex}
\begin{document}
%%----------------------------------------------------------------------------|
%% TITLE PAGE |
%%----------------------------------------------------------------------------|
\thispagestyle{empty}
\noindent
\framebox[16cm]{\parbox{16cm}{
\begin{center}
\huge \bf
A Users Guide to EGM --
\rule{0mm}{10mm}
A Galaxy Modelling Program
\rule{0mm}{10mm}
Version 1.7
\end{center}}}
\rule{0mm}{10mm}
\noindent
\framebox[16cm]{\parbox{16cm}{
{\begin{center}
\large \bf
Contents
\end{center}}
\large
\begin{tabbing}
\rule{2cm}{0cm} \= \rule{1cm}{0cm} \= \rule{1cm}{0cm} \= \rule{9cm}{0cm} \= \kill
\> 1 \> Introduction \> \> ~1 \\ \\
\> 2 \> A Brief Review of EGM \> \> ~1\\
\> \> 2.1 \> The Model \> ~1 \\
\> \> 2.2 \> Parameter Files \> ~1\\
\> \> 2.3 \> Running EGM \> ~2 \\
\> \> 2.4 \> Command Line Options \> ~3 \\
\> \> 2.5 \> Error Codes \& Runtime Statistics \> ~5 \\ \\
\> 3 \> Parameter Description and Formats \> \> ~6 \\
\> \> 3.1 \> Adjustable Parameters \> ~6 \\
\> \> 3.2 \> File Formats \> 10 \\
\> \> 3.3 \> Output Format \> 10 \\ \\
\> 4 \> Model Description \> \> 22 \\
\> \> 4.1 \> Geometrical Considerations \> 22 \\
\> \> 4.2 \> Component Models \> 24 \\
\> \> 4.3 \> Distribution Functions \> 26 \\
\> \> 4.4 \> Program Function Details \> 27 \\ \\
\end{tabbing}
}}
%%----------------------------------------------------------------------------|
%% INTRODUCTION |
%%----------------------------------------------------------------------------|
\newpage
\setcounter{page}{1}
\section{Introduction}
This guide is divided into 4 main sections. This is the first section (not
surprisingly!). Section 2 is a brief overall review of the model and how
to run \egm. Section 3 describes the model parameters in more detail, and
Section 4 contains background information and references for some of the
scientific background.
Bug reports and suggestions are welcome. The author of \egm\ (Philip Hodder)
can be contacted at the Department of Geophysics and Astronomy, University of
British Columbia, \#129-2219 Main Mall, Vancouver, B.C., Canada V6T 1Z4, or
via E--mail at {\tt [email protected]}.
%%----------------------------------------------------------------------------|
%% A BRIEF REVIEW OF EGM |
%%----------------------------------------------------------------------------|
\section{A Brief Review of EGM}
%%----------------------------------------------------------------------------|
\subsection{The Model}
The \egm\ program computes number counts and colour distributions for a
model of an (external) spiral galaxy containing two or three components.
Using user supplied luminosity functions and colour-magnitude diagrams \egm\
will integrate along a line of sight to a particular field in the galaxy,
down to the desired limiting magnitude. Almost every parameter
of the model can be altered by the user by specifying it in a parameter file
or on the command line -- reasonable defaults are used for all other
parameters (see \S 3).
Currently the forms of the density distributions of the components are
fixed. The disk component is an exponential disk (varying in height
and length) whose scale lengths can be specified. The form of the spheroidal
component is a de Vaucouleurs ${\rm r}^\frac{1}{4}$ spheroid. The
optional third component is also an exponential disk whose parameters may
be varied independently of the first, thin, disk.
%%----------------------------------------------------------------------------|
\subsection{Parameter Files}
\egm\ is run entirely from the command line -- it has no direct interaction
with the user in the form of prompts, queries and so on, although it will
print warnings before it quits when it cannot do something. A
{\em parameter file} contains all the necessary information to successfully
produce a model -- it is therefore vital that the user fully understand
all the necessary elements of this file. This isn't as daunting as it
sounds -- whilst there are 56 parameters that {\em can} be specified, you can
usually get away with using 7 or 8. The others may be changed if needed.
Most of the commonly changed parameters can also be specified on the
command line. Command line options will override any values given in the
parameter file (see \S 2.4).
The parameter file contains specifications for the field you wish to
integrate along (in arc minutes along the major and minor axes), the field
size (in square degrees), the magnitude limits, the files containing the
luminosity functions and colour--magnitude diagrams, and so on.
Note that unlike the original Bahcall \& Soneira (B\&S) code, \egm\ will
work with luminosity functions (\lf s) and colour--magnitude diagrams (\cmd s),
not just those in $V$ and $B-V$. This is because the user supplies this
data -- it is not calculated from a hard--coded analytical formula during
execution of the program, or from a function call.
Section 3 contains a list of all the parameters and their defaults.
It also lists their {\em keywords} -- i.e. the words in the parameter file
that \egm\ recognizes. For example, to specify a field size of 0.6 square
degrees, you could put this line in your parameter file:
\begin{verbatim}
OMEGA = 0.6
\end{verbatim}
All keywords must be in uppercase, and the keyword and the actual value
must be separated by an equals sign. There can be any number of spaces
around the {\tt =}. Any line beginning with a \# is treated as a comment and is
ignored.
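As an illustration of these rules (this is {\em not} part of \egm\ itself,
which does its own parsing in its own language), a parameter file of this
form could be read with a few lines of Python:
\begin{verbatim}
# Illustrative sketch only -- EGM is not written in Python.
def read_parameter_file(path):
    """Parse KEYWORD = value pairs; '#' and blank lines are ignored."""
    params = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            keyword, _, value = line.partition("=")
            params[keyword.strip()] = value.strip()
    return params

# read_parameter_file("default.pm") would return something like
# {"X0": "50.0", "Y0": "90.0", "OMEGA": "1.0", ...}
\end{verbatim}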
I recommend that for the sake of consistency all parameter files end
in ``.pm'' -- \egm\ comes with ``default.pm'' as an example. The \cmd\ files
end in ``.(mag)(col)'' (e.g. m5.vbv is a \cmd\ for M5 in V and B--V). Likewise
the \lf s end in ``.(mag)lf'' (e.g. analytic.vlf).
Although \egm\ does not care what these files are called, it
{\em does} care how the data inside it is arranged (see \S3 for a description
of the various file formats).
\egm\ will look in several places for these files. Firstly it
will look in the current directory. Then it will look for them in the
directory specified by the {\tt LIBRARY} keyword in the parameter file.
Next it will check the command line for the {\tt -L} option.
Lastly it will look in a standard
library. This is a directory containing a selection of \lf s and \cmd s whose
pathname is compiled into \egm\ during installation.
You can of course add files to this library as you need them.
\egm\ will quit (and complain) if it can't find the required files somewhere.
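Purely as an illustration (\egm\ is not written in Python), the search order
just described amounts to:
\begin{verbatim}
# Illustrative sketch of the file search order: current directory, the
# LIBRARY keyword, the -L option, then the standard library compiled in
# during installation (the path below is the documented default).
import os

def find_data_file(name, library_keyword=None, cli_library=None,
                   standard_library="/home/hodder/lib/egm/"):
    for directory in (os.curdir, library_keyword, cli_library,
                      standard_library):
        if directory is None:
            continue
        candidate = os.path.join(directory, name)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(name)   # EGM quits (and complains) here
\end{verbatim}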
An example of a parameter file is shown below:
\begin{verbatim}
nukta-1> cat default.pm
# Model Parameters #
X0 = 50.0
Y0 = 90.0
OMEGA = 1.0
DGC_FILE = dskm67.vbv
SGC_FILE = m13.vbv
DLF_FILE = analytic.vlf
SLF_FILE = analytic.vlf
\end{verbatim}
%%----------------------------------------------------------------------------|
\subsection{Running EGM}
So, once you have your parameter file set up (you can use your favourite
text editor to do this -- it is just a text file) you can run \egm\ by
typing the following at the system prompt:
\begin{verbatim}
nukta-2> egm my_params.pm
\end{verbatim}
The model will then be printed out to the terminal/xterm/whatever. Since
the output can be 600 to 700 lines long this is not terribly
useful. I recommend you redirect the output to a file of your choice:
\begin{verbatim}
nukta-3> egm my_params.pm > my_results.dat
\end{verbatim}
Alternatively,
\begin{verbatim}
nukta-3> egm my_params.pm -o my_results.dat
\end{verbatim}
\egm\ (note that the program name is ``egm'', in lowercase) is usually quite
quick to run, but you may want to put it in the background anyway:
\begin{verbatim}
nukta-4> egm my_params.pm > my_results.dat &
\end{verbatim}
\egm\ outputs the model results in one big file (though this may change in
future versions).
There is a short header section at the front of the file. This is
essentially a copy of all relevant parameters. The number counts are printed
out next followed by the colour distribution. Additional blocks of data may
then be printed (depending on various options set on the command line) and
each block is prefaced with an identifier. Section 3.3 describes the output
in more detail.
You can now split this file up into the separate pieces and display it using
whatever plotting package turns you on! \egm\ does NO plotting itself -- a
design goal was to make it as machine independent as possible.
A macro package for SuperMongo (\sm) is included in the source code
archive. This
takes the whole file output by \egm\ and plots the number counts and
colour distributions automatically. See Appendix B for more information.
%%----------------------------------------------------------------------------|
\subsection{Command Line Options}
There are several command line options that can be given to \egm. These
options override those parameters given in the parameter file (which, in turn,
override the default values supplied by the program).
These options can be useful if you want to run the same model
for a different field and don't want to create another parameter file.
The options are:
\medskip
\begin{clo}{-a absorption\_mode}
The absorption mode to use in the model. Allowable choices are 'none', 'cosec'
and 'Sandage' for no absorption, the cosecant law and the Sandage model
respectively. Only the first letter of the word is important, and it may
be in upper or lower case.
\end{clo}
\begin{clo}{-A absorption\_value}
A fixed amount of absorption to include in the model (in magnitudes).
\end{clo}
\begin{clo}{-bm magnitude\_bin}
Size of magnitude bin for scaling differential number counts on output.
\end{clo}
\begin{clo}{-cb magnitude}
The brightest magnitude of stars to be included in the colour distribution.
\end{clo}
\begin{clo}{-ce colour\_error}
The colour error in magnitudes. The default is 0.1.
\end{clo}
\begin{clo}{-cf magnitude}
The faintest magnitude of stars to be included in the colour distribution.
\end{clo}
\begin{clo}{-cl colour}
The minimum colour for the colour distribution.
\end{clo}
\begin{clo}{-cu colour}
The maximum colour for the colour distribution.
\end{clo}
\begin{clo}{-dm magnitude\_interval}
The apparent magnitude interval for the number counts.
\end{clo}
\begin{clo}{-dM magnitude\_interval}
The absolute magnitude interval for number counts and colour distribution.
\end{clo}
\begin{clo}{-dn disk[:thick\_disk]:spheroid}
The density normalization radii for the galaxy model components, in parsecs
\end{clo}
\begin{clo}{-D distance}
The distance to the galaxy (in parsecs).
\end{clo}
\begin{clo}{-es eccentricity}
The eccentricity of the spheroid component.
\end{clo}
\begin{clo}{-f field\_size}
The size of the field in square degrees.
\end{clo}
\begin{clo}{-F}
This tells \egm\ to integrate all components on the far side of the galaxy
(i.e. on the other side of the disk) as well as the near side. The default is
to only use the side closest to the observer, but this may change in future
versions. (See \S4 for further information).
\end{clo}
\begin{clo}{-FD}
\egm\ will integrate the near and far sides of the disk component, and only
the near sides of the other components.
\end{clo}
\begin{clo}{-FS}
\egm\ will only integrate the far side of the spheroid component.
\end{clo}
\begin{clo}{-FT}
\egm\ will only integrate the far side of the thick disk component.
\end{clo}
\begin{clo}{-h}
This option prints a list of command line options to the standard output
stream.
\end{clo}
\begin{clo}{-H}
This option prints a list of parameter keywords to the standard output stream.
\end{clo}
\begin{clo}{-i inclination}
The inclination of the disk of the galaxy (in degrees).
\end{clo}
\begin{clo}{-L path}
Specifies an alternate directory for the \lf\ and \cmd\ files. The
default is ``/home/hodder/lib/egm/'' (note the ending ``/''). The
keyword {\tt LIBRARY} may also be used in the parameter file for this
purpose.
\end{clo}
\begin{clo}{-mb magnitude}
The brightest apparent magnitude to be included in the model.
\end{clo}
\begin{clo}{-mf magnitude}
The faintest apparent magnitude to be included in the model.
\end{clo}
\begin{clo}{-Md magnitude}
The brightest absolute magnitude of the disk luminosity function that is
to be included.
\end{clo}
\begin{clo}{-Mf magnitude}
The faintest absolute magnitude of both the disk and spheroid luminosity
functions that is to be included in the model.
\end{clo}
\begin{clo}{-Ms magnitude}
The brightest absolute magnitude of the spheroid luminosity function that is
to be included.
\end{clo}
\begin{clo}{-MS}
This tells \egm\ to include a main-sequence colour-magnitude diagram in the
calculations. The actual file name to use for this \cmd\ must be specified
in the parameter file using the {\tt MSC\_FILE} keyword. (See \S4 for further
information).
\end{clo}
\begin{clo}{-o filename}
Sends the output of \egm\ to {\tt filename}, rather than standard output.
\end{clo}
\begin{clo}{-P "parameter file entry"}
This is an alternate way to specify any parameter that has a parameter file
keyword. The string after the {\tt -P} should be of the same format as an
entry in the parameter file. For example, there is no explicit option to
set the disk \lf\ but one could include {\tt -P "DLF\_FILE = foo.vlf"} on
the command line to set it without editing the parameter file.
\end{clo}
\begin{clo}{-re radius}
The effective (half--light) radius of the spheroid (in parsecs).
\end{clo}
\begin{clo}{-sh disk\_scale\_height}
The scale height of the disk, in parsecs.
\end{clo}
\begin{clo}{-sl disk\_scale\_length}
The scale length of the disk, in parsecs.
\end{clo}
\begin{clo}{-T}
This tells \egm\ to include the thick disk in the calculations. The default
is to not include this component. The parameters of this thick disk must be
set in the parameter file if anything other than the defaults values are to
be used (see \S3).
\end{clo}
\begin{clo}{-V}
This turns on ``verbose mode''. \egm\ will print out extra data
concerning the distribution functions used in the model. This is
mainly used for debugging purposes. See \S3.4 for more information.
\end{clo}
\begin{clo}{-x x\_position}
The position along the major axis in arc minutes. The relevant geometry is
described in section 4.1.
\end{clo}
\begin{clo}{-y y\_position}
The position along the minor axis in arc minutes.
\end{clo}
%%----------------------------------------------------------------------------|
\subsection{Error Codes \& Runtime Statistics}
This section will only be useful for a few people, but it should be written
down somewhere! If \egm\ runs to completion the normal exit status is 0.
If at any point during execution \egm\ encounters an error it will stop
with an error message and the appropriate exit status.
\begin{description}
\item[0] Indicates successful completion of program. No error message.
\item[1] A usage error. Usually this means that no command line options were
specified.
\item[2] An invalid command line option was used. See section 2.4, Command
Line Options, and try again.
\item[3] An invalid parameter was found in the parameter file. See section 3.1,
Adjustable Parameters, for a complete list of the allowed keywords.
\item[4] The file specified in one of the parameter file keywords does not
exist. This may also mean that no parameter file was given on the command
line, or that the library of standard files does not exist or is
incorrectly specified. See Appendix~A for installation and compilation
guidelines.
\item[5] A file could not be read properly. The file is most likely in the
wrong format. See section 3.2, File Formats.
\item[6] Memory allocation error. \egm\ could not allocate enough memory
for one of its internal tables. See your friendly, neighbourhood systems
manager.
\item[7] A math error. Currently this will only be generated by the spline
interpolation routine. Make sure all the input files are monotonically
increasing in the first column.
\end{description}
\egm\ runs fairly quickly on most computer systems. The following times have
been recorded on three different setups. The model used was given by the
default set of parameters. The {\tt -O} flag was used during compilation to
produce more optimised code.
\begin{tabbing}
\rule{1cm}{0cm} \= \rule{10cm}{0cm} \= \kill
\> IBM PC Clone (25 MHz, 386DX, 387 Co-Processor) : \> 39.0 seconds \\
\> Sparc 1 (SunOS) [iridium] : \> 15.0 seconds \\
\> Sparc 2 (SunOS) [wendigo] : \> ~8.0 seconds
\end{tabbing}
%%----------------------------------------------------------------------------|
%% PARAMETER DESCRIPTION AND FORMATS |
%%----------------------------------------------------------------------------|
\section{Parameter Description and Formats}
%%----------------------------------------------------------------------------|
\subsection{Adjustable Parameters}
There follows a list of all the parameters that can be adjusted, though
it usually won't be necessary to change most of these.
Those parameters marked with an asterisk ($*$) are
ones that I think should at least be thought about before you run \egm,
even if you don't change them. Those marked with a plus ($+$) shouldn't be
changed unless you're really sure you know what you're doing. The list
is in no particular order -- the parameter file also does not need to be
arranged in any special way. The list is given by keyword with the
default in parentheses (note that most of these defaults apply to M31).
\medskip
\begin{key}{A\_0}{100.0}{}
The scale height for absorption/extinction (in parsecs).
\end{key}
\begin{key}{A\_1}{0.15}{}
The absorption in magnitudes at the Galactic poles (i.e. $a_1(90^\circ)$) in
the ``cosec'' absorption law. The default is for the $V$ filter.
\end{key}
\begin{key}{A\_2}{0.165}{}
The $a_2$ parameter for the Sandage absorption law (see \S 4.2). The default
value is for the $V$ filter.
\end{key}
\begin{key}{A\_3}{1.192}{}
The $a_3$ parameter for the Sandage absorption law (see \S 4.2). The default
value is for the $V$ filter.
\end{key}
\begin{key}{ABS}{0.0}{}
An arbitrary value (in magnitudes) to include in the model. This can simulate
absorption in our own Galaxy.
\end{key}
\begin{key}{AMODE}{Sandage}{}
This keyword selects the absorption mode to be used for the model. Possible
values are ``none'', ``cosec'' and ``Sandage'' to use no absorption,
the cosecant law or the Sandage obscuration model. Like the {\tt -a} option
only the first letter of the word is important and it may be in upper or
lower case.
\end{key}
\begin{key}{COL\_BIN}{0.20}{(*)}
The colour bin size for output. The colour distribution will be scaled to
number of stars per ({\tt COL\_BIN}) magnitude interval.
\end{key}
\begin{key}{COL\_ERR}{0.10}{(*)}
The FWHM of the Gaussian error function convolved with the
calculated colour distribution to produce a predicted colour distribution.
\end{key}
\begin{key}{COL\_NAME}{B-V}{}
The name of the colour scale. It has no effect on the model calculation.
\end{key}
\begin{key}{CON\_FAC}{1.0E-6}{(+)}
A convergence criterion for ceasing integration. When the contribution to
the number counts of one element is less than {\tt CON\_FAC} times the total,
the integration is stopped. (See section 4.4.)
\end{key}
\begin{key}{DEL\_ABS}{0.05}{}
The absolute magnitude interval used during calculation.
\end{key}
\begin{key}{DEL\_APP}{0.5}{}
The apparent magnitude interval used during calculation.
\end{key}
\begin{key}{DEL\_COL}{0.005}{(+)}
The colour interval for calculation before convolution.
\end{key}
\begin{key}{DGC\_FILE}{dskm67.vbv}{(*)}
The file containing the colour--magnitude information for the disk giants.
\end{key}
\begin{key}{DISTANCE}{725000.0}{}
The distance to the galaxy in parsecs.
\end{key}
\begin{key}{DLF\_FILE}{wielen.vlf}{(*)}
The file containing the \lf\ for the disk.
\end{key}
\begin{key}{DNR}{100:100 $|$ 10:10:10}{(*)}
The density normalization radii, in parsecs, for each of the components. For
a two component model the string refers to disk and spheroid; for the three
component model it refers to disk, thick disk and spheroid. These values are
used to calculate the actual density normalization.
\end{key}
\begin{key}{DR\_D}{25.0}{}
The distance increment for integration in the disk (in parsecs).
\end{key}
\begin{key}{DR\_T}{25.0}{}
The distance increment for integration in the thick disk (in parsecs).
\end{key}
\begin{key}{DR\_S}{1000.0}{}
The distance increment for integration in the spheroid (in parsecs).
\end{key}
\begin{key}{DSK\_ABS}{1.0}{}
An extra amount of absorption included when integrating through the disk (i.e.
it applies to calculations performed on the far side of the model only).
\end{key}
\begin{key}{E\_SPHER}{0.8}{}
The spheroid axis ratio.
\end{key}
\begin{key}{FAR\_DISK}{FALSE}{}
Integrate the far side of the disk in addition to the ``near'' side.
\end{key}
\begin{key}{FAR\_SIDE}{FALSE}{}
Integrate the far side of all the components in addition to the ``near'' side.
\end{key}
\begin{key}{FAR\_SPHD}{FALSE}{}
Integrate the far side of the spheroid in addition to the ``near'' side.
\end{key}
\begin{key}{FAR\_THIK}{FALSE}{}
Integrate the far side of the thick disk in addition to the ``near'' side.
\end{key}
\begin{key}{FMS\_FILE}{analytic.vfm}{(*)}
The file containing the fraction of stars on the main sequence as a function
of absolute magnitude. This is only needed when the main sequence is being
included.
\end{key}
\begin{key}{GSH\_D}{250.0}{}
The scale height of (the giants in) the disk (in parsecs).
\end{key}
\begin{key}{GSH\_T}{1300.0}{}
The scale height of (the giants in) the thick disk (in parsecs).
\end{key}
\begin{key}{INC}{77.5}{}
The inclination of the disk in degrees. This is defined so that an
inclination of $90^{\circ}$ is edge on to the observer.
\end{key}
\begin{key}{LIBRARY}{/home/hodder/lib/egm}{}
The standard library of \lf, \cmd\ and \fms\ files, compiled into \egm\ during
installation.
\end{key}
\begin{key}{M\_BRI}{-6.0}{}
The absolute magnitude of the bright end cutoff for all the \lf s.
\end{key}
\begin{key}{M\_DIM}{16.5}{}
The absolute magnitude of the faint end cutoff of all \lf s.
\end{key}
\begin{key}{MA\_BRT}{16.0}{}
The brightest apparent magnitude used in the model.
\end{key}
\begin{key}{MA\_DIM}{26.0}{}
The limiting apparent magnitude of the model. The integration will not be
continued for stars fainter than this limit.
\end{key}
\begin{key}{MAG\_BIN}{0.0}{}
The bin size to use for scaling the counts. A value of
0.0 indicates that no scaling occurs.
\end{key}
\begin{key}{MAG\_NAME}{V}{}
The name of the magnitude scale. It has no effect on the model calculation, but
has been included for completeness.
\end{key}
\begin{key}{MAX\_COL}{3.0}{(*)}
The maximum colour of the colour distribution. The default value of 3.0
refers to the default $B-V$ colour.
\end{key}
\begin{key}{MBC}{16.0}{}
The brightest apparent magnitude of stars to be included in the colour
distribution.
\end{key}
\begin{key}{MFC}{26.0}{}
The faintest apparent magnitude of stars to be included in the colour
distribution.
\end{key}
\begin{key}{MIN\_COL}{0.0}{(*)}
The minimum colour of the colour distribution. The default value of 0.0
refers to the default $B-V$ colour.
\end{key}
\begin{key}{MS\_FILE}{ms.vbv}{(*)}
The file containing the colour--magnitude information for the (optional)
main sequence.
\end{key}
\begin{key}{OMEGA}{0.001}{(*)}
The size of the field (in square degrees).
\end{key}
\begin{key}{PSL\_D}{5500.0}{}
The scale length of the disk (in parsecs).
\end{key}
\begin{key}{PSL\_T}{5500.0}{}
The scale length of the thick disk (in parsecs).
\end{key}
\begin{key}{R\_CO}{100.0}{}
The scale radius used in the Caldwell \& Ostriker model, if it is enabled.
\end{key}
\begin{key}{R\_EK}{2670.0}{}
The de Vaucouleurs radius of the spheroid (in parsecs).
\end{key}
\begin{key}{R\_MAX\_D}{362500.0}{}
The maximum distance to integrate to in the disk (in parsecs).
\end{key}
\begin{key}{R\_MAX\_T}{362500.0}{}
The maximum distance to integrate to in the thick disk (in parsecs).
\end{key}
\begin{key}{R\_MAX\_S}{362500.0}{}
The maximum distance to integrate to in the spheroid (in parsecs).
\end{key}
\begin{key}{R\_MIN}{0.0}{}
The distance to start integration from in the disk (in parsecs). This is
measured from the plane of the galaxy, not from the observer.
\end{key}
\begin{key}{SGC\_FILE}{m13.vbv}{(*)}
The file containing the colour--magnitude information for the spheroid giants.
\end{key}
\begin{key}{SLF\_FILE}{dacosta.vlf}{(*)}
The file containing the \lf\ for the spheroid.
\end{key}
\begin{key}{THICK}{FALSE}{}
Include a thick disk component in the model.
\end{key}
\begin{key}{TGC\_FILE}{dskm67.vbv}{(*)}
The file containing the colour--magnitude information for the thick disk
giants.
\end{key}
\begin{key}{TLF\_FILE}{wielen.vlf}{(*)}
The file containing the \lf\ for the thick disk.
\end{key}
\begin{key}{USE\_MS}{FALSE}{}
Include a main-sequence \cmd\ in the calculations. See the descriptions
of the {\tt MSC\_FILE} and {\tt FMS\_FILE} keywords, and section 4.3 for
more information.
\end{key}
\begin{key}{VERBOSE}{FALSE}{}
Print out all data arrays. This data is appended to the normal output of
\egm. The format used is described in section 3.3.
\end{key}
\begin{key}{X0}{23.74}{(*)}
The position of the field in arc minutes along the major axis.
\end{key}
\begin{key}{Y0}{0.00}{(*)}
The position of the field in arc minutes along the minor axis.
\end{key}
%%----------------------------------------------------------------------------|
\subsection{File Formats}
One of the key features of \egm\ is the ability of the user to specify a
luminosity function and a colour-magnitude diagram. If you do use your
own you may want to check that the right parameters are set. All of the
defaults assume a $V$ luminosity function and a $B-V$ colour. It is also
assumed that every entry is a floating point number (float or double).
\egm\ currently uses double precision in all floating point calculations.
For use with the {\tt -MS} option a file giving the fraction of stars on the
main sequence can also be specified. Any line beginning with a ``\#'' is
considered a comment and is ignored.
\begin{table}[h]
\centering
\begin{tabular}{ccccc} \\
Luminosity Functions: & & Colour Magnitude Diagrams: & & Fraction on M-S: \\
M[1] \hspace{1cm} $\log_{10}$N[1] & \hspace{1cm} & M[1] \hspace{1cm} C[1] & \hspace{1cm} & M[1] \hspace{1cm} F[1] \\
... \hspace{1cm} ... & \hspace{1cm} & ... \hspace{1cm} ... & \hspace{1cm} & ... \hspace{1cm} ... \\
M[N] \hspace{1cm} $\log_{10}$N[N] & \hspace{1cm} & M[N] \hspace{1cm} C[N] & \hspace{1cm} & M[N] \hspace{1cm} F[N]
\end{tabular}
\end{table}
\noindent where M, C, F and N are magnitudes, colours, fractions ($<1$) and
numbers respectively.
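Purely as an illustration of this layout (\egm\ reads these files with its
own routines), such a two--column file could be loaded as follows. Note the
check that the first column increases monotonically, which \egm\ also
requires (see error code 7 in \S 2.5):
\begin{verbatim}
# Illustrative sketch only -- not EGM code.
import numpy as np

def read_two_column_file(path):
    data = np.loadtxt(path, comments="#")   # '#' lines are comments
    first, second = data[:, 0], data[:, 1]
    if np.any(np.diff(first) <= 0):
        raise ValueError("first column must increase monotonically")
    return first, second

# For a luminosity function the columns are M and log10(N); for a CMD
# they are M and colour; for an FMS file, M and F (< 1).
\end{verbatim}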
%%----------------------------------------------------------------------------|
\subsection{Output Format}
\egm\ sends all output to the standard output stream -- it may therefore
be redirected to a file. The format used depends somewhat on the command
line options that have been set. The first part of the output is always
a header section, each line of which starts with a ``\#''. The data shown
here is basically a printout of all the parameters used to calculate
the model. A copy of the header section of the output is shown in Figure~1 (two
component model) and Figure~4 (three component model). The only real change
is the inclusion of the thick disk parameters in the latter. In both types the
value of the ``{\tt SPH}'' entry will change to ``{\tt C \& O}'' if the
Caldwell \& Ostriker spheroid model has been used. The ``{\tt rek}'' entry
will then be changed to ``{\tt rco}''.
The section beginning with ``{\tt Stars in COL}'' contains results for
the colour distribution in the
order disk, spheroid and total. The first line is the number of stars,
then the mean colours of each distribution. Next is the fraction of
stars on the main sequence, followed by the fraction of giants in
each one. The number distribution comes next: the first 3 columns are
the magnitude bin ranges and centre. In the next 6 columns, an ``A''
indicates differential counts (stars per magnitude bin), an ``N'' integral
counts (cumulative counts down to that magnitude). The letters ``d'', ``s''
and ``t'' indicate disk, spheroid and total counts respectively.
Care should be taken when interpreting the
differential counts if the magnitude bin size ({\tt -bm} or
{\tt MAG\_BIN}) is not equal to 0.0. In the default case
the differential counts are in stars per magnitude bin (i.e. all the stars in
that bin between V1 and V2). Setting the magnitude
bin size to anything other than 0.0 will make \egm\ scale the counts to that
bin size by multiplying by ({\tt DM\_APP} / {\tt MAG\_BIN}).
Therefore the magnitude bin edges in columns 1 and
3 will {\em not} be ``correct''. The bin centre (column 2) will be
correct -- the {\tt -bm} option merely causes scaling of the differential
number counts. The integral counts (``N'') are not affected by this option.
Figures~2 and 3 show the output format for the number counts and colour
distributions and the verbose output ({\tt -v}) for the two component models.
The latter two are, of course, only printed out if the appropriate options
have been set. Figures~5 and 6 show the same things for the three component
model. The last three figures (7, 8 and 9) are plots (produced using the \sm\
macros described in Appendix~B) of the results obtained using the default
parameters.
The fields in the header section refer to the following (in the order you'll
read them in the file -- note that {\tt rmin} appears three times in the
output -- it is possible that this parameter will be split into 3 (one for
disk, thick disk and spheroid)):
%%--------------|
\begin{tabbing}
\rule{2cm}{0cm} \= \kill
x0 \> The position of the field in arc minutes along the major axis. \\
y0 \> The position of the field in arc minutes along the minor axis. \\
far \> Was the integration carried through to the far side of the disk? \\
A \> Field size in square degrees. \\
D \> The distance to the galaxy in parsecs. \\
i \> The inclination of the galaxy in degrees. \\
r0 \> Distance to galactic centre (in pc). \\
rpk \> Distance of peak of distribution from galactic plane (see \S 4.1) \\
mabrt \> Brightest apparent magnitude at which to begin integration.\\
madim \> Faintest apparent magnitude to carry integration down to.\\
dm \> Apparent magnitude interval. \\
Mbin \> Magnitude bin size for scaling differential counts. \\
Mbri \> Brightest absolute magnitude for all the \lf s. \\
Mdim \> Faintest absolute magnitude for all \lf s.\\
dM \> The absolute magnitude interval. \\
mnbri \> The brightest magnitude for normalizing the \lf s.\\
Cerr \> The colour error (in magnitudes). \\
Cint \> Colour interval for calculation. \\
Cbin \> The colour bin size for scaling colour distribution. \\
mndim \> The faintest magnitude for normalizing the \lf s.\\
maxC \> Maximum colour of the colour distribution.\\
minC \> Minimum colour of the colour distribution. \\
mbc \> Bright apparent magnitude limit for colour distribution. \\
mfc \> Faint apparent magnitude limit for colour distribution. \\
amode \> The absorption model used. \\
abs \> Any arbitrary absorption included (e.g. from our Galaxy). \\
dska \> The absorption added by looking through the disk.\\
a0 \> Scale height of the absorption. \\
a1 \> Absorption at the Galactic poles in the ``cosec'' absorption law. \\
a2 \> The $a_2$ parameter in the Sandage absorption law (\S4.2). \\
a3 \> The $a_3$ parameter in the Sandage absorption law (\S4.2). \\
den\_d \> The density normalization of the disk at rn\_d. \\
rn\_d \> The normalization radius for the disk \lf. \\
psl\_d \> Scale length in the plane for the disk component (in pc). \\
gsh\_d \> Scale height of the disk giants (in pc). \\
dn\_d \> Calculated disk normalization at r0. \\
dr\_d \> The distance increment for the disk integration. \\
rmin \> The minimum distance for integration. \\
rmaxd \> The maximum distance for integration in the disk component. \\
dsklf \> The disk luminosity function. \\
dgcmd \> The disk giant colour magnitude diagram. \\
dmsfn \> The main--sequence colour magnitude diagram. \\
fmsfn \> File containing the fraction of stars on the main sequence. \\
den\_t \> The density normalization of the thick disk at rn\_t. \\
rn\_t \> The normalization radius for the thick disk \lf. \\
psl\_t \> Scale length in the plane for the thick disk component (in pc). \\
gsh\_t \> Scale height of the thick disk giants (in pc). \\
dn\_t \> Calculated thick disk normalization at r0. \\
dr\_t \> The distance increment for the thick disk integration. \\
rmaxt \> The maximum distance for integration in the thick disk. \\
tsklf \> The thick disk luminosity function. \\
tgcmd \> The thick disk giant colour magnitude diagram. \\
den\_s \> The density normalization of the spheroid at rn\_s. \\
rn\_s \> The normalization radius for the spheroid \lf. \\
rek \> The de Vaucouleurs radius of the spheroid (in pc). \\
es \> The spheroid eccentricity/axis ratio. \\
dn\_s \> Calculated spheroid normalization at r0. \\
dr\_s \> The distance increment for the spheroid integration. \\
rmaxs \> The maximum distance for integration in the spheroid. \\
sphlf \> The spheroid luminosity function. \\
sgcmd \> The spheroid giant colour magnitude diagram. \\
\end{tabbing}
%%--------------|
Additional information may be printed out at the end of the output depending on
what options were set on the command line. The format of these extra data
blocks is one header line indicating the data type followed by two
columns of numbers.
If ``verbose mode'' has been turned on (with the {\tt -v} option)
\egm\ will print out data concerning the various distribution functions
that \egm\ has used in the model. They are not the same as the input files
that were specified in
the parameter file. \egm\ uses spline interpolation on the input
\lf s and \cmd s and stores the resulting look--up--table in an array.
The {\tt -v} option just tells \egm\ to print out these arrays.
First of these data blocks is the disk luminosity function
(header: DLF, format: magnitude number), then the spheroid luminosity
function (SLF) (format: magnitude number). Following that are
three blocks containing the colour-magnitude diagrams used:
main sequence (MSC), disk giant (DGC) and then spheroid giant
(SGC) (format: magnitude colour). Finally the fraction of stars
on the main sequence (FMS) (format: magnitude number) is given.
%%--------------|
%% Figure 1 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
# EXTERNAL GALAXY MODEL: default.pm Components: 2
# x0 = 23.740 y0 = 0.000 A = 0.00100
# D = 725000.000 i = 77.500 far = NO dska = 1.000
# mabrt = 16.000 madim = 26.000 dm = 0.500 Mbin = 0.000
# Mbrd = -6.000 Mbrs = -3.000 Mdim = 16.500 dM = 0.050
# Cerr = 0.100 Cint = 0.005 Cbin = 0.200
# maxC = 3.000 minC = 0.000 mbc = 16.000 mfc = 26.000
# dr = 25.000 r0 = 5006.702 amode = Sandage abs = 0.000
# a0 = 100.000 a1 = 0.000 a2 = 0.165 a3 = 1.192
# dn_d = 500.000 dn_t = 0.000 dn_s = 1.000 rn = 8000.000
#
# DSK:
# den_d = 1.723 psl_d = 5500.000 gsh_d = 250.000
# dsklf = wielen.vlf dgcmd = dskm67.vbv
# dmsfn = NONE fmsfn = NONE
#
# SPH: R ^ (1/4)
# den_s = 9.196e-03 rek = 2670.000 es = 0.800
# sphlf = dacosta.vlf sgbfn = m13.vbv
#
# DSK SPH TOT
# Stars in COL = 41438.973 2090.437 43529.410
# Mean Colours = 1.196 0.839 1.179
# Star Fraction = 0.952 0.048
# Giant Fraction = 1.000 1.000 1.000
#
\end{verbatim}
\end{center}
\caption{The header information for the two component (disk and spheroid)
model. If the Caldwell \& Ostriker spheroid model has been used the
``{\tt SPH}'' entry will read {\tt C \& O} and {\tt rek} will be replaced by
{\tt rco}.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 2 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
# NUM:
# V1 V V2 Ad(V) Nd(<V2) As(V) Ns(<V2) At(V) Nt(<V2)
16.50 16.75 17.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.00 17.25 17.50 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.50 17.75 18.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
18.00 18.25 18.50 6.381e-02 3.190e-02 0.000e+00 0.000e+00 6.381e-02 3.190e-02
18.50 18.75 19.00 1.028e+00 5.460e-01 0.000e+00 0.000e+00 1.028e+00 5.460e-01
...
23.00 23.25 23.50 1.891e+03 1.848e+03 7.318e+01 5.732e+01 1.964e+03 1.905e+03
23.50 23.75 24.00 3.668e+03 3.682e+03 1.664e+02 1.405e+02 3.834e+03 3.822e+03
24.00 24.25 24.50 6.864e+03 7.114e+03 3.129e+02 2.970e+02 7.177e+03 7.411e+03
24.50 24.75 25.00 1.235e+04 1.329e+04 4.561e+02 5.250e+02 1.280e+04 1.381e+04
25.00 25.25 25.50 2.127e+04 2.392e+04 7.708e+02 9.104e+02 2.204e+04 2.483e+04
#
# COL:
# B-V Nd Ns NT
0.0000 0.0025 0.0050 0.000 0.000 0.000
0.0400 0.0425 0.0450 0.000 0.000 0.000
0.0800 0.0825 0.0850 0.000 0.000 0.000
0.1200 0.1225 0.1250 0.000 0.000 0.000
0.1600 0.1625 0.1650 0.000 0.000 0.000
...
2.8400 2.8425 2.8450 5.690 0.000 5.690
2.8800 2.8825 2.8850 4.572 0.000 4.572
2.9200 2.9225 2.9250 3.571 0.000 3.571
2.9600 2.9625 2.9650 2.666 0.000 2.666
3.0000 3.0025 3.0050 1.865 0.000 1.865
\end{verbatim}
\end{center}
\caption{The format for the number counts and colour distribution for the
two component (disk and spheroid) model.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 3 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
#
# V DLF SLF FMS
-6.00 1.486e-08 2.645e-09 4.407e-01
-5.95 1.614e-08 2.912e-09 4.408e-01
-5.90 1.753e-08 3.206e-09 4.409e-01
-5.85 1.904e-08 3.530e-09 4.410e-01
-5.80 2.068e-08 3.886e-09 4.410e-01
...
16.30 1.375e-02 1.375e-02 1.000e+00
16.35 1.373e-02 1.373e-02 1.000e+00
16.40 1.372e-02 1.372e-02 1.000e+00
16.45 1.372e-02 1.372e-02 1.000e+00
16.50 1.373e-02 1.373e-02 1.000e+00
#
# V MSC DGC SGC
-6.00 -3.450e-01 3.132e+00 2.768e+00
-5.95 -3.441e-01 3.117e+00 2.753e+00
-5.90 -3.433e-01 3.102e+00 2.737e+00
-5.85 -3.424e-01 3.086e+00 2.722e+00
-5.80 -3.415e-01 3.071e+00 2.706e+00
...
16.30 2.006e+00 2.006e+00 1.846e+00
16.35 2.012e+00 2.012e+00 1.852e+00
16.40 2.019e+00 2.019e+00 1.859e+00
16.45 2.025e+00 2.025e+00 1.865e+00
16.50 2.031e+00 2.031e+00 1.871e+00
\end{verbatim}
\end{center}
\caption{The information produced using verbose mode ({\tt -v}) for the two
component (disk and spheroid) model. In this case the {\tt -MS} option had
been used so the \fms\ and main-sequence \cmd\ files are displayed.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 4 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
# EXTERNAL GALAXY MODEL: default.pm Components: 3
# x0 = 23.740 y0 = 0.000 A = 0.00100
# D = 725000.000 i = 77.500 far = NO dska = 1.000
# mabrt = 16.000 madim = 26.000 dm = 0.500 Mbin = 0.000
# Mbrd = -6.000 Mbrs = -3.000 Mdim = 16.500 dM = 0.050
# Cerr = 0.100 Cint = 0.005 Cbin = 0.200
# maxC = 3.000 minC = 0.000 mbc = 16.000 mfc = 26.000
# dr = 25.000 r0 = 5006.702 amode = Sandage abs = 0.000
# a0 = 100.000 a1 = 0.000 a2 = 0.165 a3 = 1.192
# dn_d = 500.000 dn_t = 10.000 dn_s = 1.000 rn = 8000.000
#
# DSK:
# den_d = 1.723 psl_d = 5500.000 gsh_d = 250.000
# dsklf = wielen.vlf dgcmd = dskm67.vbv
# dmsfn = NONE fmsfn = NONE
#
# THK:
# den_t = 3.447e-02 psl_t = 5500.000 gsh_t = 1300.000
# thklf = wielen.vlf tgbfn = dskm67.vbv
#
# SPH: R ^ (1/4)
# den_s = 9.196e-03 rek = 2670.000 es = 0.800
# sphlf = dacosta.vlf sgbfn = m13.vbv
#
# DSK THK SPH TOT
# Stars in COL = 41438.973 2556.082 2090.437 46085.492
# Mean Colours = 1.196 1.224 0.839 1.181
# Star Fraction = 0.899 0.055 0.045
# Giant Fraction = 1.000 1.000 1.000 1.000
#
\end{verbatim}
\end{center}
\caption{The header information for the three component (disk, thick disk
and spheroid) model. If the Caldwell \& Ostriker spheroid model has been used
the ``{\tt SPH}'' entry will read {\tt C \& O} and {\tt rek} will be replaced
by {\tt rco}.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 5 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
# NUM:
# V1 V V2 Ad(V) Nd(<V2) At(V) Nt(<V2)
16.50 16.75 17.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.00 17.25 17.50 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.50 17.75 18.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
18.00 18.25 18.50 6.381e-02 3.190e-02 1.320e-03 6.599e-04
18.50 18.75 19.00 1.028e+00 5.460e-01 2.538e-02 1.335e-02
...
23.00 23.25 23.50 1.891e+03 1.848e+03 1.111e+02 1.077e+02
23.50 23.75 24.00 3.668e+03 3.682e+03 2.179e+02 2.166e+02
24.00 24.25 24.50 6.864e+03 7.114e+03 4.129e+02 4.231e+02
24.50 24.75 25.00 1.235e+04 1.329e+04 7.526e+02 7.994e+02
25.00 25.25 25.50 2.127e+04 2.392e+04 1.315e+03 1.457e+03
# V1 V V2 As(V) Ns(<V2) AT(V) NT(<V2)
16.50 16.75 17.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.00 17.25 17.50 0.000e+00 0.000e+00 0.000e+00 0.000e+00
17.50 17.75 18.00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
18.00 18.25 18.50 0.000e+00 0.000e+00 6.513e-02 3.256e-02
18.50 18.75 19.00 0.000e+00 0.000e+00 1.054e+00 5.594e-01
...
23.00 23.25 23.50 7.318e+01 5.732e+01 2.075e+03 2.013e+03
23.50 23.75 24.00 1.664e+02 1.405e+02 4.052e+03 4.039e+03
24.00 24.25 24.50 3.129e+02 2.970e+02 7.590e+03 7.834e+03
24.50 24.75 25.00 4.561e+02 5.250e+02 1.356e+04 1.461e+04
25.00 25.25 25.50 7.708e+02 9.104e+02 2.336e+04 2.629e+04
#
# COL:
# B-V Nd Nt Ns NT
0.0000 0.0025 0.0050 0.000 0.000 0.000 0.000
0.0400 0.0425 0.0450 0.000 0.000 0.000 0.000
0.0800 0.0825 0.0850 0.000 0.000 0.000 0.000
0.1200 0.1225 0.1250 0.000 0.000 0.000 0.000
0.1600 0.1625 0.1650 0.000 0.000 0.000 0.000
...
2.8400 2.8425 2.8450 5.690 0.410 0.000 6.100
2.8800 2.8825 2.8850 4.572 0.330 0.000 4.901
2.9200 2.9225 2.9250 3.571 0.258 0.000 3.828
2.9600 2.9625 2.9650 2.666 0.192 0.000 2.858
3.0000 3.0025 3.0050 1.865 0.135 0.000 2.000
\end{verbatim}
\end{center}
\caption{The format for the number counts and colour distribution for the
three component (disk, thick disk and spheroid) model.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 6 |
%%--------------|
\begin{figure}[p]
\begin{center}
\begin{verbatim}
#
# V DLF TLF SLF
-6.00 1.486e-08 1.486e-08 2.645e-09
-5.95 1.614e-08 1.614e-08 2.912e-09
-5.90 1.753e-08 1.753e-08 3.206e-09
-5.85 1.904e-08 1.904e-08 3.530e-09
-5.80 2.068e-08 2.068e-08 3.886e-09
...
16.30 1.375e-02 1.375e-02 1.375e-02
16.35 1.373e-02 1.373e-02 1.373e-02
16.40 1.372e-02 1.372e-02 1.372e-02
16.45 1.372e-02 1.372e-02 1.372e-02
16.50 1.373e-02 1.373e-02 1.373e-02
#
# V DGC TGC SGC
-6.00 3.132e+00 3.132e+00 2.768e+00
-5.95 3.117e+00 3.117e+00 2.753e+00
-5.90 3.102e+00 3.102e+00 2.737e+00
-5.85 3.086e+00 3.086e+00 2.722e+00
-5.80 3.071e+00 3.071e+00 2.706e+00
...
16.30 -6.167e+02 -6.167e+02 -1.818e+01
16.35 -6.243e+02 -6.243e+02 -1.846e+01
16.40 -6.319e+02 -6.319e+02 -1.873e+01
16.45 -6.397e+02 -6.397e+02 -1.901e+01
16.50 -6.474e+02 -6.474e+02 -1.929e+01
\end{verbatim}
\end{center}
\caption{The information produced using the {\tt -v} option for the three
component (disk, thick disk and spheroid) model. The main sequence was not
included.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 7 |
%%--------------|
\begin{figure}[p]
\centerline{\psfig{figure=figure7.eps}}
\caption{The differential number counts for the default model.}
\end{figure}
%%--------------|
%% Figure 8 |
%%--------------|
\begin{figure}[p]
\centerline{\psfig{figure=figure8.eps}}
\caption{The integral number counts for the default model.}
\end{figure}
%%--------------|
%% Figure 9 |
%%--------------|
\begin{figure}[p]
\centerline{\psfig{figure=figure9.eps}}
\caption{The colour distribution produced by the default model.}
\end{figure}
%%--------------|
%%----------------------------------------------------------------------------|
%% MODEL DESCRIPTION |
%%----------------------------------------------------------------------------|
\newpage
\section{Model Description}
%%------------------------------------|
\subsection{Geometrical Considerations}
In order to use \egm\ properly it is necessary to know what the parameters
describing positions and angles actually refer to. The first step is to
convert the Right Ascension and Declination of a field to rectangular
coordinates, as shown in figure~10. The $x$ and $y$ axes are defined to
be along the major and minor axes of the galaxy. The $x$-axis is inclined
to the R.A. axis by the position angle $P$. It is a simple task to transform
the coordinates of G from $(\alpha,\delta)$ to $(x,y)$:
\[
x = \alpha \cos P - \delta \sin P
\]
\[
y = \alpha \sin P + \delta \cos P
\]
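In code form (purely illustrative, not \egm\ source) this rotation is simply:
\begin{verbatim}
# Illustrative sketch of the rotation above: offsets (alpha, delta) in
# R.A. and Dec. rotated by the position angle P (in degrees) into (x, y)
# along the major and minor axes.
import math

def sky_to_galaxy_axes(alpha, delta, position_angle):
    p = math.radians(position_angle)
    x = alpha * math.cos(p) - delta * math.sin(p)
    y = alpha * math.sin(p) + delta * math.cos(p)
    return x, y
\end{verbatim}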
%%--------------|
%% Figure 10 |
%%--------------|
\begin{figure}[b]
\centerline{\psfig{figure=figure10.eps,height=8cm}}
\caption{Converting the R.A. and Dec. of the field at G to {\sf x} and
{\sf y} coordinates. These are defined to lie along the major and minor
axes of the galaxy, respectively.}
\end{figure}
\egm\ calculates the number counts using a variation of the Bahcall \&
Soneira Model. In practice this means imagining that the observer is at
the field (G, in figure~10) and calculating the counts back along the line
of sight to the observer. This requires the ``galactic latitude and
longitude'' of the observer as seen from G. Figures~11 and 12 show the
geometry of the situation. In figure~12 an observer at O looks at a field P
in a galaxy whose plane (G) is inclined at an angle $i$ to the plane of the
sky (S). The angle $b$ is the ``galactic latitude'' of O as seen from P.
Provided $D \gg x$ then $d \approx D$ and $e \approx 0$. Thus we may assume
that $b = 90^\circ - i $. For the inclination and distance of M31 and a field
where $x = 55\,{\rm kpc}$ this is true to within 0.5\%.
%%--------------|
%% Figure 11 |
%%--------------|
\begin{figure}[p]
\centerline{\psfig{figure=figure11.eps,width=16cm,angle=-90}}
\caption{A simplified view of the geometry of the situation. The plane of
the galaxy (G) is inclined at an angle $i$ to the plane of the sky (S)
(as seen from O). Assuming $x \ll D$ we have $b = 90^\circ - i$.}
\end{figure}
%%--------------|
%%--------------|
%% Figure 12 |
%%--------------|
\begin{figure}[p]
\centerline{\psfig{figure=figure12.eps,width=16cm}}
\caption{As seen by the observer the field Q on the sky projects to P in the
galaxy. $r$ and $l$ can be found by geometrical analysis.}
\end{figure}
%%--------------|
The model now needs to know $r$, the distance of the field P from the
centre of the galaxy, and $\ell$, the ``galactic longitude''. Figure~12 shows
the geometry in this case. It is necessary to define a coordinate
system on the plane of the sky -- $x$ and $y$ in the figure. The $x$-axis is
defined to lie along the line of nodes of the intersection of the galaxy
with the plane of the sky. So we give the model the $(x,y)$ coordinates of the
field Q (on the sky, which projects to P in the galaxy). Then, writing
$(x,y) = (\rho \cos \phi,\, \rho \sin \phi)$,
\[
r = \rho \left( \cos^2 \phi + \sec^2 i \sin^2 \phi \right) ^ {\frac{1}{2}},
\]
\[
\theta = \arctan \left( \frac{\tan \phi}{\sec i} \right),
\]
\[
\ell = 90^\circ - \theta.
\]
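Again purely as an illustration (not \egm\ source code), the deprojection
above can be written as follows; it assumes, as in the text, that $(x,y)$
are the sky--plane coordinates of Q with $x$ along the line of nodes:
\begin{verbatim}
# Illustrative sketch of the deprojection above; i is the inclination in
# degrees and (x, y) are the sky-plane coordinates of the field Q.
import math

def deproject(x, y, inclination):
    i = math.radians(inclination)
    rho = math.hypot(x, y)
    phi = math.atan2(y, x)
    sec_i = 1.0 / math.cos(i)
    r = rho * math.sqrt(math.cos(phi)**2 + (sec_i * math.sin(phi))**2)
    theta = math.atan2(math.tan(phi), sec_i)
    ell = 90.0 - math.degrees(theta)     # the "galactic longitude"
    return r, ell
\end{verbatim}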
%%--------------------------|
\subsection{Component Models}
There are currently only two types of components in \egm\ -- the thin,
exponential
disk and the spheroidal halo. The thick disk is functionally equivalent to
the thin disk model. The functional forms discussed below have been
hard--coded into the program and cannot be changed without rewriting the
appropriate subroutines. The parameters in these functional forms can,
of course, be set by the user.
%%----------------------|
\subsubsection*{The Disk}
The density distribution of the disk stars in the Bahcall and Soneira
model is represented by the following exponential function:
\[
\rho_d(r,M) = e^{\left[\frac{-z}{H} - \frac{x-R_0}{h} \right]}
\]
where $H$ is the scale height of the disk and $h$ is the scale length.
$R_0$ is the distance of the field from the galactic centre (in pc).
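For reference, this law is (illustratively, in Python rather than \egm's own
code):
\begin{verbatim}
# Illustrative sketch of the exponential disk law above; z is the height
# above the plane, x the distance along the plane, H the scale height,
# h the scale length and R0 the distance of the field from the centre.
import math

def disk_density(z, x, scale_height, scale_length, r0):
    return math.exp(-z / scale_height - (x - r0) / scale_length)

# e.g. with the defaults GSH_D = 250 pc and PSL_D = 5500 pc:
#   disk_density(100.0, 5000.0, 250.0, 5500.0, 5006.702)
\end{verbatim}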
%%--------------------------|
\subsubsection*{The Spheroid}
de Vaucouleurs (1959, in Handbuch der Physik, Vol. 53, ed. S. Flugge
(Berlin, Springer--Verlag), p. 311) found that the projected brightness
distribution of ellipticals was given by the famous $r^\frac{1}{4}$ law:
\[
\log \frac{I(r)}{I(r_e)} = -3.3307 \left( \left(\frac{r}{r_e}\right)^
{\frac{1}{4}} - 1 \right)
\]
where $r_e$ is the spatial distance that projects to an angle containing
half the total luminosity. Young (1976, AJ, 81, 807) gives an asymptotic
approximation for the spatial density of stars which leads to this form
of projection:
\[
\rho_s(r) \approx \frac{e^{-b\left(\frac{r}{r_e}\right)}}
{2\left(\frac{r}{r_e}\right)^3}
\left( \frac{\pi}{8 b \left(\frac{r}{r_e}\right)} \right)^{\frac{1}{2}}
\]
where $b = 7.6692$.
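Illustratively (again not \egm\ source code, and with only a relative
normalization):
\begin{verbatim}
# Illustrative sketch of Young's asymptotic approximation quoted above,
# with b = 7.6692 and r_eff the effective (half-light) radius.
import math

def spheroid_density(r, r_eff, b=7.6692):
    s = r / r_eff
    return (math.exp(-b * s) / (2.0 * s**3)) \
        * math.sqrt(math.pi / (8.0 * b * s))

# e.g. spheroid_density(5006.702, r_eff=2670.0)
\end{verbatim}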
%%------------------------|
\subsubsection*{Absorption}
Three different models for galactic absorption are currently supported -- no
absorption, the ``cosec'' law and the Sandage absorption law, the latter being
the default model. The absorption model can be specified on the command line
(the {\tt -a} option) or in the parameter file with the keyword {\tt AMODE}.
In both cases the words ``none'', ``cosec'' and ``Sandage'' can be used to
choose the desired model. Only the first letter of the word (which may be
in upper or lower case) is significant.
Each absorption mode needs various parameters (except the no absorption
case!). The cosecant law calculates absorption based on the formula
\[
A(b) = a_1(90^\circ) \csc b
\]
where $a_1(90^\circ) = 0.15$ magnitudes in $V$ (the default) and
$A_V = 0.75 A_B$. This coefficient can be specified in the parameter file
with the {\tt A\_1} keyword. You will have to decide on the value of this
parameter for other filters on your own.
The Sandage absorption model (Sandage, 1972, ApJ, 178, 1) in the $V$ band is
\[
\begin{array}{lll}
A(b) = & a_2 ( a_3 - \tan b) \csc b & |b| \leq 50^\circ \\
A(b) = & 0 & |b| > 50^\circ
\end{array}
\]
where for the $V$ filter $a_2 = 0.165$ and $a_3 = 1.192$. In the $B$ filter
$A_B = 1.33 A_V$. To the best of my knowledge this formula is only defined
for the $V$ and $B$ filters. The parameters $a_2$ and $a_3$ can be
reset in the parameter file with the {\tt A\_2} and {\tt A\_3} keywords
respectively.
Finally the absorption in magnitudes at distance $R$ is calculated from
\[
A(R) = A(b) \left[ 1 - e^{(- \sin b / a_0) R} \right]
\]
where $a_0$ is the scale height of the absorbing material. The default is
100 pc, and the appropriate keyword is {\tt A\_0}. It is assumed there is no
variation with distance from the galactic centre, only an exponential
variation perpendicular to the plane.
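The three modes and the distance scaling can be summarised with the
following sketch (illustrative only; the defaults are the $V$ band values
quoted above):
\begin{verbatim}
# Illustrative sketch of the absorption laws above. b is the effective
# galactic latitude in degrees, r the distance in parsecs, a0 the scale
# height of the absorbing material.
import math

def total_absorption(b_deg, mode="sandage", a1=0.15, a2=0.165, a3=1.192):
    """A(b): absorption over the whole line of sight at latitude b."""
    b = math.radians(b_deg)
    if mode.lower().startswith("n"):        # no absorption
        return 0.0
    if mode.lower().startswith("c"):        # cosecant law
        return a1 / math.sin(b)
    if abs(b_deg) > 50.0:                   # Sandage law, |b| > 50 degrees
        return 0.0
    return a2 * (a3 - math.tan(b)) / math.sin(b)

def absorption_to_distance(b_deg, r, mode="sandage", a0=100.0):
    """A(R): absorption out to distance r."""
    b = math.radians(b_deg)
    scale = 1.0 - math.exp(-math.sin(b) / a0 * r)
    return total_absorption(b_deg, mode) * scale
\end{verbatim}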
Two other absorption parameters may be defined. The first is simply a
number (in magnitudes) added to the calculations (use the {\tt ABS} keyword
or the {\tt -A} option) -- this may simulate absorption in our Galaxy. The
second (using the {\tt DSK\_ABS} keyword) only applies if the {\tt FAR\_SIDE}
option is set -- this number is added to all calculations that occur of the
far side of the disk. This simulates looking through a thin dust layer in the
plane of the galaxy.
%%-----------------------------------|
\subsubsection*{Density Normalization}
One item of program input that needs careful consideration is the density
normalization to choose for the disk, thick disk and spheroid components.
This is done with the {\tt -dnd}, {\tt -dnt} and {\tt -dns} options on the
command line or the corresponding {\tt DND}, {\tt DNT} and {\tt DNS}
parameters. The format is: {\tt density@radius} where {\tt density} is the
density on stars per cubic parsec at a distance of {\tt radius} (in the
plane of the galaxy).
The \lf\ is then normalized to {\tt density} stars per cubic parsec, between
the magnitudes given by the {\tt MN\_BRI} and {\tt MN\_DIM} parameters.
The distance $r_0$ of the field from the centre of the
galaxy is then calculated and the actual density normalizations at that point
are found by re--scaling from {\tt radius}. For example, the default field is
at 5006.702 pc from the centre of the galaxy which gives a disk density of 1.723 and a spheroid density of \sci{9.196}{-3}.
%%----------------------------------------------------------------------------|
\subsection{Distribution Functions}
%%----------------------------------|
\subsubsection*{Luminosity Functions}
In this release of \egm, three files are given for the luminosity function
of the disk (``analytic.vlf'', ``wielen.vlf'', and ``dacosta.vlf'').
The first of these, the pure analytic \lf, is given by equation (1) of
Bahcall and Soneira 1980 (ApJS, 44, 73):
\[
\begin{array}{lll}
\phi(M) = & \frac{n_\ast 10^{\beta(M-M_\ast)}}
{\left( 1 + 10^{-(\alpha - \beta)\delta(M-M_\ast)}\right)^{\frac{1}{\delta}}}
& M_b \leq M \leq M_c \nonumber \\
\phi(M) = & \phi(M_c) & M_c \leq M \leq M_d \\
\phi(M) = & 0 & M \leq M_b {\rm ~or~~} M \geq M_d
\end{array}
\]
where, for the $V$ band, $n_\ast = 4.03 \times 10^{-3}$, $M_\ast = +1.28$,
$\alpha = 0.74$, $\beta = 0.04$, $\frac{1}{\delta} = 3.40$, $M_b = -6$,
$M_c = +15$ and $M_d = +19$.
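A direct transcription of this form (illustrative only, with the $V$ band
constants above as defaults) is:
\begin{verbatim}
# Illustrative sketch of the Bahcall & Soneira (1980) analytic LF above.
def analytic_lf(M, n_star=4.03e-3, M_star=1.28, alpha=0.74, beta=0.04,
                inv_delta=3.40, M_b=-6.0, M_c=15.0, M_d=19.0):
    if M < M_b or M > M_d:
        return 0.0
    m = min(M, M_c)      # phi is constant at phi(M_c) for M_c <= M <= M_d
    delta = 1.0 / inv_delta
    numerator = n_star * 10.0 ** (beta * (m - M_star))
    denominator = 1.0 + 10.0 ** (-(alpha - beta) * delta * (m - M_star))
    return numerator / denominator ** inv_delta
\end{verbatim}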
This has been modified to include the ``Wielen dip'' (Wielen, 1974, in
Highlights of Astronomy, Vol. 3, ed. G. Contopoulos (Dordrecht, Reidel),
p. 365) in the file ``wielen.vlf'' and further modified to include the
so-called ``globular cluster feature'' (DaCosta, 1982, AJ, 87, 990) in the
file ``dacosta.vlf''. As discussed in Bahcall and Soneira (1980) these can
probably be used for the spheroid \lf\ without too much of a problem.
%%----------------------------------------|
\subsubsection*{Colour--Magnitude Diagrams}
\egm\ uses \cmd s only to calculate the expected colour distribution of the
field in question. Thus if you are interested in say, number counts in $I$
but have no $V-I$ \cmd, you could run \egm\ with an $I$ band \lf\ and ignore
the colour distribution that \egm\ produces.
Several \cmd s are provided with \egm\ in the standard library. The $B-V$ ones
come mainly from the functions supplied with the FORTRAN version of the
Bahcall \& Soneira code. The $V-I$ ones are taken from Da~Costa \& Armandroff
(1990, AJ, 100, 162).
%%----------------------------------------|
\subsubsection*{Fraction of stars on the main--sequence}
If the {\tt -MS} option has been set, \egm\ needs to know the fraction of
stars on the main sequence as a function of absolute magnitude. This is
so it can weight the giant branch and main sequence number counts
accordingly. Mamon \& Soneira, 1982 (ApJ, 255, 181) give the following
analytical form for the fraction of stars on the main sequence:
\[
\begin{array}{lll}
f(M) = & C e^{\alpha(M+\beta)^{\gamma}} & M < M_a \\
f(M) = & 1 & M \geq M_a
\end{array}
\]
where for the $V$ band, $C=0.44$, $\alpha = 1.5\times10^{-4}$, $\beta=8.0$,
$\gamma=3.5$ and $M_a = 3.7$.
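Again purely for illustration, this form is:
\begin{verbatim}
# Sketch of the Mamon & Soneira (1982) main-sequence fraction,
# using the V-band constants quoted above.
import math

def ms_fraction(M, C=0.44, alpha=1.5e-4, beta=8.0, gamma=3.5, M_a=3.7):
    """Fraction of stars on the main sequence at absolute magnitude M."""
    if M >= M_a:
        return 1.0
    return C * math.exp(alpha * (M + beta) ** gamma)
\end{verbatim}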
%%----------------------------------------------------------------------------|
\subsection{Program Function Details}
This section describes how \egm\ produces a model. It is reasonably
detailed and is {\em not} required reading. In some cases it refers to
what the computer is actually doing, so if you're really interested
you may want to have a copy of the source code close at hand.
The discussion also assumes that all the files \egm\ needs are available
and that it won't crash!
\subsubsection*{Initialization}
The first thing \egm\ does is initialize all the parameters to default
values. Then it scans the command line for the name of the parameter file and
reads it one line at a time. Each line is parsed for a keyword and this
is checked against an internal list of valid keywords.
If it comes across an invalid keyword it will stop,
otherwise it will overwrite the default value of that parameter with the
value in the parameter file.
It then scans the command line for options and updates the appropriate
parameters, checking for invalid options and overwriting the appropriate
parameter values. Command line options are scanned last and will therefore
overwrite values given in the parameter file.
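In outline the precedence works like this (hypothetical Python with
invented names -- \egm\ itself is not written in Python):
\begin{verbatim}
# Sketch of the parameter precedence only: defaults, then the
# parameter file, then command-line options, which are scanned last
# and therefore win.  All names here are invented for illustration.
def load_parameters(defaults, param_file_pairs, cli_pairs):
    params = dict(defaults)
    for key, value in param_file_pairs:   # parsed "KEYWORD value" lines
        if key not in params:
            raise SystemExit("invalid keyword: " + key)
        params[key] = value
    for key, value in cli_pairs:          # parsed "-x value" options
        if key not in params:
            raise SystemExit("invalid option: " + key)
        params[key] = value               # overrides the parameter file
    return params
\end{verbatim}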
Next \egm\ initializes various look-up-tables (LUTs) that it needs. These are
arrays for the \lf, \cmd\ and (if required) \fms\ data. From
its parameters it determines the size of each array needed and reads in
the corresponding data file. It performs
spline interpolation on this data to fill in the array. Several arrays are
needed -- one each for the disk and spheroid \lf s, two for the disk giant
and spheroid giant colour--magnitude diagrams and possibly two for the main
sequence \cmd\ and for the \fms\ data. Corresponding arrays will be needed for
the thick disk if it is to be included. Other arrays used to hold the results
of the integration (number counts and colour distribution) are also set up at
this time but are merely initialized to zero.
\subsubsection*{Integration Procedure}
\egm\ now calculates the ``projection'' parameters -- the effective
galactic latitude and longitude based on the input parameters as described
in section 4.1.
\egm\ begins with the disk component. Starting at {\tt R\_MIN} it moves along the
line of sight in steps of {\tt DR}. It then calculates the size of the volume
element, its distance from the observer (not from the galaxy) and the
absorption (if any). The distances of the element above and along the plane
of the galaxy are calculated to enable the density at that point to be found.
The program then steps through the range of absolute magnitudes (defined
by {\tt M\_DIM} and {\tt M\_BRI}). For each of these steps it calculates the apparent
magnitude -- if this is outside the desired range ({\tt MA\_DIM} and {\tt MA\_BRT})
it proceeds to the next step, otherwise it looks up the number of
stars at that absolute magnitude in the disk luminosity function LUT.
If the main-sequence is being included the fraction of stars on the
giant branch and main-sequence are found using the \fms\ LUT.
The number of stars is then found by multiplying
together the volume, the \lf\ at that (absolute) magnitude, the
\fms\ value and the density distribution, evaluated at that position.
The effect on the colour distribution is found by looking up the colour
corresponding to that absolute magnitude on the \cmd\ LUTs. The count
for that particular colour is filled in and various other running totals
(for the mean colour etc.) are computed.
The program then loops back for the next magnitude step in the \lf.
When that loop is finished it proceeds to the next distance step and so on.
Integration is terminated when all loops have been finished or when
the following condition is met: the total counts accumulated over the
\lf\ loop are compared to the running grand total of counts, and if this
sub-total is less than {\tt C\_FAC} times the grand total then an insignificant
number of stars is being added and the integration stops.
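Schematically the disk integration looks like the sketch below (the
helper functions, {\tt R\_MAX} and the magnitude step {\tt DM} are
invented for illustration; \egm's own loop differs in detail):
\begin{verbatim}
# Schematic of the disk integration loop and the C_FAC cut-off.
# density_at, volume_element, apparent_mag, lf_lut and fms_lut stand
# in for egm's internal routines and tables; R_MAX and DM are assumed
# names for the outer distance limit and the magnitude step.
def integrate_disk(p, density_at, volume_element, apparent_mag,
                   lf_lut, fms_lut):
    grand_total = 0.0
    r = p["R_MIN"]
    while r <= p["R_MAX"]:
        sub_total = 0.0
        dV = volume_element(r, p)
        M = p["M_BRI"]
        while M <= p["M_DIM"]:
            m = apparent_mag(M, r)                # includes absorption
            if p["MA_BRT"] <= m <= p["MA_DIM"]:
                sub_total += (dV * lf_lut(M) * fms_lut(M)
                              * density_at(r, p))
            M += p["DM"]
        grand_total += sub_total
        if grand_total > 0 and sub_total < p["C_FAC"] * grand_total:
            break                                 # negligible contribution
        r += p["DR"]
    return grand_total
\end{verbatim}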
The calculation of the spheroid counts proceeds in much the same way,
with one slight difference. The spheroid generally extends much farther
than the disk, so because of the larger number of steps involved the
distance steps are done in logarithmic intervals to speed up the calculation.
Tests have shown that this does not introduce any appreciable difference
compared with calculating in linear steps.
The calculations for the far side of the galaxy proceed in an identical
manner, except for the inclusion of extra absorption and the fact that as
distances from the galaxy increase, so do the distances from the observer.
After this, all that remains is some ``tidying up''. The computed colour
distribution is convolved with a Gaussian error distribution to obtain the
predicted colour distribution. Various other ``colour'' statistics are also
worked out. The total number counts are found, as are the integral
number counts for each component.
The results are then written out on the standard output stream.
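As a final illustration, the Gaussian smoothing of the colour
distribution mentioned above amounts to something like this (NumPy is
used purely for illustration):
\begin{verbatim}
# Smooth the raw colour histogram with a normalized Gaussian kernel.
import numpy as np

def smooth_colour_distribution(counts, bin_width, sigma):
    """Convolve a colour histogram with a Gaussian of width sigma."""
    half = int(np.ceil(4.0 * sigma / bin_width))
    x = np.arange(-half, half + 1) * bin_width
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                # preserve the total counts
    return np.convolve(counts, kernel, mode="same")
\end{verbatim}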
\end{document}
\myframe{proving things about \tla\ specs: \hfill correctness}{
One may want to prove something like \\
\vspace{5pt}
\hfill the following about the Quicksort specification:
\vspace{15pt}
\begin{example}[correctness]
\[
\begin{array}{r c l}
Correctness & \triangleq & (U = \{\})\ \Rightarrow\ A \in PermsOf(A_0) \ \wedge \ IsSorted(A)\\
\\
\end{array}
\]
\end{example}
\vspace{75pt}
}
\input{tla-pic.tex}
\subsection{The Basics}
\myframe{proving things about \tla\ specs: \hfill \tlapm}{
{\color{Maroon} Idea:} \\
\vspace{15pt}
User writes something fairly close to\\
\hspace{50pt} a Hilbert-style mathematical proof \\
\hspace{45pt} (or a Fitch-style natural deduction proof); \\
\vspace{13pt}
\hfill backend provers take care of the annoying details.
\vspace{75pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm}{
{\color{Maroon} Idea:} \\
\vspace{15pt}
User writes something {\color{Maroon}fairly close} to\\
\hspace{50pt} a Hilbert-style mathematical proof \\
\hspace{45pt} (or a Fitch-style natural deduction proof); \\
\vspace{15pt}
\hfill backend provers take care of the annoying details.
\vspace{25pt}
% reasonably light-weight
{\color{Maroon} some guidance using keywords like\\
\hfill \kw{assume-prove}, \kw{pick}, \kw{have}, \kw{witness}, \kw{case}, etc }
\vspace{26pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm}{
{\color{Maroon} Idea:} \\
\vspace{15pt}
User writes something fairly close to\\
\hspace{50pt} a Hilbert-style mathematical proof \\
\hspace{45pt} (or a Fitch-style natural deduction proof); \\
\vspace{15pt}
\hfill backend provers {\color{Maroon}take care of the annoying details}.
\vspace{25pt}
\hfill {\color{Maroon} we wish... \smiley }
\vspace{38pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} The simplest \tlapm\ proof is a one-liner:}
\begin{example}[reflexivity of $PermsOf$]
\vspace{-10pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_refl & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T] \\
& & \kw{prove}\ \ X \in PermsOf(X) \\ \\
& \kw{by} & \kw{def}\ \ PermsOf,\ Automorphisms,\ **
\end{array}
\]
\end{example}
\vspace{80pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} The simplest \tlapm\ proof is a one-liner:}
\begin{example}[reflexivity of $PermsOf$]
\vspace{-10pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_refl & \triangleq &
{\color{Maroon}\textbf{\kw{assume}}}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T] \\
& &{\color{Maroon} \textbf{\kw{prove}}}\ \ X \in PermsOf(X) \\ \\
& \kw{by} & \kw{def}\ \ PermsOf,\ Automorphisms,\ **
\end{array}
\]
\end{example}
{\color{Maroon}
\underline{\kw{assume-prove}} \\ \vspace{10pt}
equivalent to: \hfill
$\forall S,\ \forall T,\ \forall X \in [S \rightarrow T] .\ X \in PermsOf(X) $\\ \vspace{10pt}
actually: \hfill $X \in [S \rightarrow T]\ \vdash\ X \in PermsOf(X)$
}
\vspace{20pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} The simplest \tlapm\ proof is a one-liner:}
\begin{example}[reflexivity of $PermsOf$]
\vspace{-10pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_refl & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T] \\
& & \kw{prove}\ \ X \in PermsOf(X) \\ \\
&{\color{Maroon} \textbf{\kw{by}}} &{\color{Maroon} \textbf{\kw{def}\ \ PermsOf,\ Automorphisms,\ **}}
\end{array}
\]
\end{example}
\vspace{80pt}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} The simplest \tlapm\ proof is a one-liner:}
\begin{example}[reflexivity of $PermsOf$]
\vspace{-10pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_refl & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T] \\
& & \kw{prove}\ \ X \in PermsOf(X) \\ \\
&{\color{Maroon} \textbf{\kw{by}}} &{\color{Maroon} Isa \ \ \textbf{\kw{def}\ \ PermsOf,\ Automorphisms,\ **}}
\end{array}
\]
\end{example}
{\color{Maroon}
\underline{Tactics} \\ \vspace{10pt}
\hfill \kw{SMT, (CVC, Z3, Yices), \ Isa, (Auto, Blast, Force), Zenon, PTL} \\ \vspace{10pt}
default: \hfill \kw{SMT, Zenon, Isa}
}
\vspace{22.5pt}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PERMS OF TRANS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & \kw{new}\ Y \in PermsOf(X), \kw{new}\ Z \in PermsOf(Y) \\
& & \kw{prove}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
& \langle1\rangle 1 & Y \in [S \rightarrow T] \\
& & \kw{by}\ PermsOf\_type \\
& \langle1\rangle 2 & \kw{pick}\ f \in Automorphisms(S) : Y = X **\, f \\
& & \kw{by}\ \kw{def}\ PermsOf \\
& \langle1\rangle 3 & \kw{pick}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & \kw{by}\ \langle1\rangle 1\ \ \kw{def}\ PermsOf \\
& \langle1\rangle 4 & Z = X **\, (f **\, g) \\
& & \kw{by}\ \langle1\rangle 2, \langle1\rangle 3\ \ \kw{def}\ Automorphisms, ** \\
& \langle1\rangle\ \ & \kw{qed} \\
& & \kw{by}\ Automorphisms\_trans, \langle1\rangle 4\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & {\color{Maroon} \textbf{\kw{new}}\ Y \in PermsOf(X), \textbf{\kw{new}}\ Z \in PermsOf(Y)} \\
& & \color{Maroon} \textbf{\kw{prove}}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
& \langle1\rangle 1 & Y \in [S \rightarrow T] \\
& & \kw{by}\ PermsOf\_type \\
& \langle1\rangle 2 & \kw{pick}\ f \in Automorphisms(S) : Y = X **\, f \\
& & \kw{by}\ \kw{def}\ PermsOf \\
& \langle1\rangle 3 & \kw{pick}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & \kw{by}\ \langle1\rangle 1\ \ \kw{def}\ PermsOf \\
& \langle1\rangle 4 & Z = X **\, (f **\, g) \\
& & \kw{by}\ \langle1\rangle 2, \langle1\rangle 3\ \ \kw{def}\ Automorphisms, ** \\
& \langle1\rangle\ \ & \kw{qed} \\
& & \kw{by}\ Automorphisms\_trans, \langle1\rangle 4\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & \kw{new}\ Y \in PermsOf(X), \kw{new}\ Z \in PermsOf(Y) \\
& & \kw{prove}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
&\color{Maroon} \langle1\rangle 1 & Y \in [S \rightarrow T] \\
& & {\color{Maroon}\textbf{\kw{by}}}\ PermsOf\_type \\
&\color{Maroon} \langle1\rangle 2 & \kw{pick}\ f \in Automorphisms(S) : Y = X **\, f \\
& & {\color{Maroon}\textbf{\kw{by}}}\ \kw{def}\ PermsOf \\
&\color{Maroon} \langle1\rangle 3 & \kw{pick}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & {\color{Maroon}\textbf{\kw{by}}}\ \langle1\rangle 1\ \ \kw{def}\ PermsOf \\
&\color{Maroon} \langle1\rangle 4 & Z = X **\, (f **\, g) \\
& & {\color{Maroon}\textbf{\kw{by}}}\ \langle1\rangle 2, \langle1\rangle 3\ \ \kw{def}\ Automorphisms, ** \\
&\color{Maroon} \langle1\rangle\ \ & \color{Maroon} \textbf{\kw{qed}} \\
& & {\color{Maroon}\textbf{\kw{by}}}\ Automorphisms\_trans, \langle1\rangle 4\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & \kw{new}\ Y \in PermsOf(X), \kw{new}\ Z \in PermsOf(Y) \\
& & \kw{prove}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
& \langle1\rangle 1 & Y \in [S \rightarrow T] \\
& & \kw{by}\ \color{Maroon} PermsOf\_type \\
& \langle1\rangle 2 & \kw{pick}\ f \in Automorphisms(S) : Y = X **\, f \\
& & \kw{by}\ \kw{def}\ PermsOf \\
& \langle1\rangle 3 & \kw{pick}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & \kw{by}\ {\color{Maroon} \langle1\rangle 1}\ \ \kw{def}\ PermsOf \\
& \langle1\rangle 4 & Z = X **\, (f **\, g) \\
& & \kw{by}\ {\color{Maroon} \langle1\rangle 2, \langle1\rangle 3}\ \ \kw{def}\ Automorphisms, ** \\
& \langle1\rangle\ \ & \kw{qed} \\
& & \kw{by}\ {\color{Maroon} Automorphisms\_trans, \langle1\rangle 4}\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & \kw{new}\ Y \in PermsOf(X), \kw{new}\ Z \in PermsOf(Y) \\
& & \kw{prove}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
& \langle1\rangle 1 & \color{Maroon} \bf Y \in [S \rightarrow T] \\
& & \kw{by}\ PermsOf\_type \\
& \langle1\rangle 2 & \kw{pick}\ f \in Automorphisms(S) : Y = X **\, f \\
& & \kw{by}\ \kw{def}\ PermsOf \\
& \langle1\rangle 3 & \kw{pick}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & \kw{by}\ \langle1\rangle 1\ \ \kw{def}\ PermsOf \\
& \langle1\rangle 4 & \color{Maroon} \bf Z = X **\, (f **\, g) \\
& & \kw{by}\ \langle1\rangle 2, \langle1\rangle 3\ \ \kw{def}\ Automorphisms, ** \\
& \langle1\rangle\ \ & \kw{qed} \\
& & \kw{by}\ Automorphisms\_trans, \langle1\rangle 4\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\hspace{-18pt} {\color{Maroon} Some \tla\ proofs are more complicated (`hierarchical'):}
\footnotesize
\begin{example}[transitivity of $PermsOf$]
\footnotesize
\vspace{-7pt}
\[
\begin{array}{r c l }
\kw{lemma}\ PermsOf\_trans & \triangleq &
\kw{assume}\ \kw{new}\ S, \kw{new}\ T, \kw{new}\ X \in [S \rightarrow T], \\
& & \kw{new}\ Y \in PermsOf(X), \kw{new}\ Z \in PermsOf(Y) \\
& & \kw{prove}\ \ Z \in PermsOf(X) \\ \\
\end{array}
\] \vspace{-15pt}
%\scriptsize
\[
\begin{array}{r c l}
& \langle1\rangle 1 & Y \in [S \rightarrow T] \\
& & \kw{by}\ PermsOf\_type \\
& \langle1\rangle 2 & {\color{Maroon} \text{\bf{\kw{pick}}}}\ f \in Automorphisms(S) : Y = X **\, f \\
& & \kw{by}\ \kw{def}\ PermsOf \\
& \langle1\rangle 3 & {\color{Maroon} \text{\bf{\kw{pick}}}}\ g \in Automorphisms(S) : Z = Y **\, g \\
& & \kw{by}\ \langle1\rangle 1\ \ \kw{def}\ PermsOf \\
& \langle1\rangle 4 & Z = X **\, (f **\, g) \\
& & \kw{by}\ \langle1\rangle 2, \langle1\rangle 3\ \ \kw{def}\ Automorphisms, ** \\
& \langle1\rangle\ \ & \kw{qed} \\
& & \kw{by}\ Automorphisms\_trans, \langle1\rangle 4\ \ \kw{def}\ PermsOf \\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill \tlapm\ proofs}{
\begin{center}
\href{run:examples/videos/PermsOf.mp4}{\textbf{PermsOf lemmas demo}}
\end{center}
}
\subsection{Safety Properties: Proving Boxes}
\myframe{proving things about \tla\ specs: \hfill safety proofs}
{
For some safety property $P$, the thing to prove is generally:
\vspace{25pt}
\begin{theorem}[Safety]
\begin{center}
$Spec\ \Rightarrow\ \Box\, P$
\end{center}
\end{theorem}
\vspace{75pt}
}
\myframe{proving things about \tla\ specs: \hfill safety proofs}
{
For some safety property $P$, the thing to prove is generally:
\vspace{25pt}
\begin{theorem}[Safety]
\begin{center}
$Init \wedge \Box\, [Next]_V \ \Rightarrow \ \Box\, P$
\end{center}
\end{theorem}
\vspace{75pt}
}
\myframe{proving things about \tla\ specs: \hfill safety proofs}
{
For some safety property $P$, the thing to prove is generally:
\vspace{25pt}
\begin{theorem}[Safety]
\begin{center}
$Init \wedge \Box\, [Next]_V \ \Rightarrow \ \Box\, P$
\end{center}
\end{theorem}
\vspace{20pt}
{\color{Maroon} Question:} \\
\hfill How does one prove a $\Box$ ?
\vspace{27pt}
}
\myframe{proving things about \tla\ specs: \hfill proving $\Box$}
{
\hspace{-18pt} {\color{Maroon}There are two rules that `make' a $\Box$:}
\vspace{15pt}
\begin{rl}[$\Box$-Induction -- standard LTL]
\begin{prooftree}
\AxiomC{$Init \seq P$}
\AxiomC{$P\ ,\ [Next]_V \seq P'$}
\RightLabel{\rlable{\Box-Ind}}
\BinaryInfC{$Init\ ,\ \Box\, [Next]_V \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{30pt}
\begin{rl}[Necessitation]
\begin{prooftree}
\AxiomC{$\seq P$}
\RightLabel{\rlable{Nec}}
\UnaryInfC{$\hspace{8pt} \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{45pt}
}
\myframe{proving things about \tla\ specs: \hfill proving $\Box$}
{
\hspace{-18pt} {\color{Maroon}There are two rules that `make' a $\Box$:}
\vspace{15pt}
\begin{rl}[$\Box$-Induction -- standard LTL]
\begin{prooftree}
\AxiomC{$Init \seq P$}
\AxiomC{$P\ ,\ [Next]_V \seq P'$}
\RightLabel{\rlable{\Box-Ind}}
\BinaryInfC{$Init\ ,\ \Box\, [Next]_V \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{30pt}
\begin{rl}[Necessitation]
\begin{prooftree}
\AxiomC{$\Box\, F_1, \ldots, \Box\, F_n \seq P$}
\RightLabel{\rlable{Nec}}
\UnaryInfC{$\hspace{8pt} \Box\, F_1, \ldots, \Box\, F_n \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{45pt}
}
\myframe{proving things about \tla\ specs: \hfill proving $\Box$}
{
\hspace{-18pt} {\color{Maroon}There are two rules that `make' a $\Box$:}
\vspace{15pt}
\begin{rl}[$\Box$-Induction -- standard LTL]
\begin{prooftree}
\AxiomC{$Init \seq P$}
\AxiomC{$P\ ,\ [Next]_V \seq P'$}
\RightLabel{\rlable{\Box-Ind}}
\BinaryInfC{$Init\ ,\ \Box\, [Next]_V \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{30pt}
\begin{rl}[Necessitation -- careful about weakening !]
\begin{prooftree}
\AxiomC{$\Box\, F_1, \ldots, \Box\, F_n \hspace{10pt} \seq P$}
\RightLabel{\rlable{WK}}
\UnaryInfC{$\hspace{1pt} \Box\, F_1, \ldots, \Box\, F_n, G \seq P$}
\RightLabel{\color{Maroon} \Large $\times$}
\UnaryInfC{$\hspace{9pt} \Box\, F_1, \ldots, \Box\, F_n , G \seq \Box\, P$}
\end{prooftree}
\end{rl}
\vspace{45pt}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CORRECTNESS FOR QUICKSORT
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\myframe{proving things about \tla\ specs: \hfill correctness of Quicksort}{
\hspace{-18pt} {\color{Maroon}Correctness of Quicksort is a safety property:}
\vspace{10pt}
\begin{example}[correctness of quicksort]
\vspace{-12pt}
\[
\begin{array}{r c l}
Correctness & \triangleq & (U = \{\})\ \Rightarrow\ A \in PermsOf(A_0) \ \wedge \ IsSorted(A)\\ \\
\kw{theorem} & & Spec \Rightarrow \Box \, Correctness
\\
\end{array}
\]
\end{example}
\vspace{15pt}
Using $\Box$-Induction to prove this directly is not a good idea, \\
\hfil because $Correctness$ is vacuously true as long as $U \neq \{\}$.
\vspace{15pt}
\hfill {\color{Maroon} We need an inductive invariant... }
\vspace{18.5pt}
}
\myframe{proving things about \tla\ specs: \hfill correctness of Quicksort}{
\hspace{-18pt} {\color{Maroon}Correctness of Quicksort is a safety property:}
\vspace{10pt}
\begin{example}[correctness of quicksort]
\[
\begin{array}{r c l}
Correctness & \triangleq & (U = \{\})\ \Rightarrow\ A \in PermsOf(A_0) \ \wedge \ IsSorted(A)
\\ \\
% \kw{theorem} & & Spec \Rightarrow \Box \, Correctness
% \\ \\
UnsortedIsCovered & \triangleq & \forall\ i, j \in 1\,..\,N .\ i < j \ \wedge \ A[j] < A[i] \\
& & \Rightarrow \exists\ u \in U. \ \{i,j\} \subseteq u[1]\,..\, u[2]\\
\\
NoOverlap & \triangleq & \forall\ u,v \in U .\ (u[1]\, ..\, u[2]) \cap (v[1]\, ..\, v[2]) \neq \{\} \\
& & \Rightarrow u = v
\\ \\
Inv & \triangleq & UnsortedIsCovered \wedge NoOverlap
\\
\end{array}
\]
\end{example}
}
\myframe{proving things about \tla\ specs: \hfill correctness of Quicksort}{
\hspace{-18pt} {\color{Maroon}The proof will look like this:}
\vspace{10pt}
\begin{example}[correctness of quicksort]
\[
\begin{array}{r c l r}
\kw{theorem} & & Spec \Rightarrow \Box \, Correctness
\\ \\
& \langle 1 \rangle 1 & Spec \Rightarrow \Box \, TypeOK & \mathsf{\Box-Ind}
\\
& \langle 1 \rangle 2 & Spec \Rightarrow \Box \, Inv & \mathsf{\Box-Ind}
\\
& \langle 1 \rangle 3 & TypeOK \wedge Inv \Rightarrow Correctness
\\
& \langle 1 \rangle q & \kw{qed} \\
& & \kw{by}\ \langle1\rangle1, \langle1\rangle2, \langle1\rangle3, \kw{PTL} & \mathsf{Nec}
\\
\end{array}
\]
\end{example}
\vspace{36.5pt}
}
\myframe{proving things about \tla\ specs: \hfill correctness of Quicksort}{
\hspace{-18pt} {\color{Maroon}The problem with weakening manifests itself here:}
\vspace{10pt}
\begin{example}[correctness of quicksort]
\[
\begin{array}{r c l r}
\kw{theorem} & & Spec \Rightarrow \Box \, Correctness
\\ \\
& \langle 1 \rangle 1 & Spec \Rightarrow \Box \, TypeOK & \mathsf{\Box-Ind}
\\
& \langle 1 \rangle 2 & Spec \Rightarrow \Box \, Inv & \mathsf{\Box-Ind}
\\
& \langle 1 \rangle 3 & TypeOK \wedge Inv {\color{Maroon} \wedge\ Init} \Rightarrow Correctness
\\
& \langle 1 \rangle q & \kw{qed} \\
& & \kw{by}\ \langle1\rangle1, \langle1\rangle2, \langle1\rangle3, \kw{PTL} & \color{Maroon} \Large{ \mathbf{\times}}
\\
\end{array}
\]
\end{example}
\vspace{36.5pt}
}
\myframe{proving things about \tla\ specs: \hfill correctness of Quicksort}{
\begin{center}
\href{run:examples/videos/QuicksortCor.mp4}{\textbf{Quicksort correctness demo}}
\end{center}
% 1. dummy proof at bottom
% 2. add init show failure
% 3. show proof with lemmata, show both lemmata
}
\myframe{proving things about \tla\ specs: \hfill so far so good ...}{
\footnotesize
\hspace{-18pt} {\color{Maroon} There are several nice aspects of writing proofs in \tla:}
\begin{itemize}
\item little overhead over the pen-and-paper version ;
\item proofs are quite robust wrt changes in definitions, etc
\href{run:examples/videos/NoOverlaps.mp4}{{\color{Maroon}(\textbf{demo})}} ;
\item interface facilitates asynchronous working style ; % {\color{Maroon} ...}
\item module system ;
\item show me another ITP that supports modal reasoning.
\end{itemize}
\vspace{20pt}
\hspace{-18pt} {\color{Maroon} There are also several not-so-nice aspects of writing proofs in \tla:}
\begin{itemize}
\item controlling the exact obligations is subtle ;
% this is partly due to design decisions taken to minimize the explicit modal reasoning...
\item modal reasoning is subtle, can look strange, and is limited {\color{Maroon} (at present)} ;
\item no certificates {\color{Maroon} (at present)}.
\end{itemize}
\vspace{10pt}
}
\documentclass[11pt]{article}
\usepackage{listings}
\newcommand{\numpy}{{\tt numpy}} % tt font for numpy
\topmargin -.5in
\textheight 9in
\oddsidemargin -.25in
\evensidemargin -.25in
\textwidth 7in
\begin{document}
% ========== Edit your name here
\author{Francesco Penasa}
\title{Network Security - Network aspects}
\maketitle
\medskip
% ========== Begin answering questions here
\texttt{https://securitylab.disi.unitn.it/doku.php?id=course\_netsec\_2016}\\
Autonomous Systems (AS) $\rightarrow$ logically separated networks.
The Internet is made of several logically separated networks.
Communication protocols within an AS (intra-AS):
\begin{enumerate}
\item IGP (Interior Gateway Protocol)
\item OSPF (Open Shortest Path First)
\end{enumerate}
Communication protocols between ASes (inter-AS):
\begin{enumerate}
\item BGP (Border Gateway Protocol)
\end{enumerate}
\paragraph{OSI model} % (fold)
\label{par:osi_model}
\begin{enumerate}
\item 1 physical (prevent physical access)
\item 2 data link (Ethernet / MAC address)
\item 3 network (IP address)
\item 4 transport (TCP/UDP port)
\item 5 session
\item 6 presentation
\item 7 application
\end{enumerate}
% paragraph osi_model (end)
\paragraph{OSI data link layer} % (fold)
\label{par:osi_data_link_layer}
Hosts are identified by an Ethernet address, bound to a physical interface and
given by a 48-bit MAC identifier (HH-HH-HH-HH-HH-HH).
Inside a local network all traffic is delivered using MAC addresses (IP
addresses matter only outside the local network); see \texttt{ifconfig en0}.
The Ethernet cable is a shared bus, so it is assumed to be trusted.
% paragraph osi_data_link_layer (end)
\paragraph{Network layer IP} % (fold)
\label{par:network_layer_ip}
IPv4 and IPv6
An IP address represents a host; IPs are dynamically assigned by a DHCP server.
% paragraph network_layer_ip (end)
ARP = associates an IP address with a MAC address.
ICMP = internet control message protocol
stateful protocol (TCP)
stateless protocol (UDP)
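A minimal Python sketch of the stateful/stateless difference
(\texttt{example.com} is a placeholder host):
\begin{lstlisting}[language=Python]
# Stateful (TCP) vs stateless (UDP) transport with the standard
# socket module.
import socket

# TCP: a three-way handshake sets up per-connection state first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))          # handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(256))
tcp.close()

# UDP: each datagram stands alone; no connection state is kept.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("example.com", 9))   # discard port; no reply expected
udp.close()
\end{lstlisting}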
\end{document}