\section{NWChem Architecture}
\label{sec:arch}

NWChem has a five-tiered modular architecture. This structure is illustrated conceptually by the diagram in Figure 1, which shows the five tiers and their relationships to each other. The first tier is the {\em generic task interface}. This interface\footnote{Note that this is an abstract programming interface, not a user interface. The user's "interface" with the code is the input file.} serves as the mechanism that transfers control to the different modules in the second tier, which consists of the {\em Molecular Calculation Modules}. The molecular calculation modules are the high level programming modules that accomplish computational tasks, performing particular operations using the theories specified in the user input file. These independent modules share data only through a disk-resident database, which allows them to exchange data or to share access to files containing data. The third tier consists of the {\em Molecular Modeling Tools}. These routines provide basic chemical functionality such as symmetry, basis sets, grids, geometry, and integrals. The fourth tier is the {\em Software Development Toolkit}, which is the basic foundation of the code. The fifth tier provides the {\em Utility Functions} needed by nearly all modules in the code, such as input processing, output processing, and timing.

In addition to using a modular approach for the design, NWChem is built on the concepts of object oriented programming (OOP) and non-uniform memory access (NUMA). The OOP approach might seem incompatible with a code written primarily in Fortran77, since Fortran77 lacks much of the functionality required of an object oriented language (OOL). However, many of the required features can be simulated by careful adherence to the guidelines for encapsulation and data hiding outlined in Section \ref{sec:coding-style}. The main advantage of an object-oriented approach is that it allows for orderly and logical access to data more-or-less independently of why or when a given module might require the information. In addition, it allows considerable flexibility in the manipulation and distribution of data on shared memory, distributed memory, and massively parallel hardware architectures, which is needed in a NUMA approach to parallel computations. However, this model does require that the program developer have a fairly comprehensive understanding of the overall structure of the code and the way in which the various parts fit together. The following subsections describe this structure in broad outline, and refer to the specific chapters and sections where the various modules, tools, and "objects" are described in detail.

\subsection{Object Oriented Design}
\label{sec:ood}

The basic principles of object-oriented software development are abstraction, hierarchy, encapsulation, and modularity. {\em Abstraction} is the separation of the problem to be solved from the process used to solve it, which facilitates the introduction of new methods as programming tools and hardware capabilities evolve. In complex systems, abstraction can be carried out on many levels, resulting in a hierarchy that allows connections between many different components and the development of further abstractions.
{\em Encapsulation} is the creation of isolated data structures or other objects in such a way that they can be manipulated only in carefully controlled and well-defined ways, which helps to reduce the problems due to unexpected interactions between components that are supposed to be independent. {\em Modularity}, which is the use of relatively small program units having well-defined functionality, can also help reduce interaction problems. It can also aid overall code efficiency, if the modules are written to be easily reused.

In an object oriented language such as C++, this methodology can be a feature of the actual coding of the program. NWChem, however, is written in a mixture of C and Fortran, and uses object oriented ideas at the design stage. This requires some self-discipline on the part of the developers, but the effort is well rewarded in improved implementation and easier code maintenance. In a programming language such as Fortran77, which is not object oriented by design, the concept of objects can be simulated by developing a well defined interface for the programmer to use that in essence hides all of the gory details of "creating", "manipulating", and "destroying" an object. The objects are treated as if they can be manipulated only through the interface. In reality, of course, Fortran77 allows the programmer to use any of the "private" data and routines that are underneath the interface. For this reason, the rules for encapsulation and data hiding must be adhered to religiously, by following the guidelines outlined in Section \ref{sec:coding-style}.

One of the basic features of an object is that all of the data and the functions related to the data are encapsulated and available only through a "public" programming interface. This encapsulation allows programmers to put related data together in one object that is accessed in a well defined manner. For example, the basis set object (described further in Section \ref{sec:basis}) contains the number of basis functions, the exponents, the coefficients, and other data related to basis sets. It also has a well defined interface that can be used to access and manipulate the data.

Because the data description, the internal "private" functions, and the "public" interface together define only the abstract concept of the object, specific examples of the object must be created (instantiated) before it can be used. Instantiations (or unique copies) of objects are simulated by allowing the user and the programmer to use different handles for different objects of the same type. This feature gives the user the capability of defining different basis sets during a computation simply by naming different basis set objects (see Section \ref{sec:basis}). For example, two different basis sets can be defined for a molecule in an input file, as follows:

\begin{verbatim}
geometry
  Ne 0.0 0.0 0.0
end
basis "dz set"
  Ne library cc-pvdz
end
basis "qz set"
  Ne library cc-pvqz
end
set "ao basis" "dz set"
task scf
set "ao basis" "qz set"
task scf
task mp2
\end{verbatim}

The above example has two basis sets that share the same object abstraction (exponents, coefficients, etc.), but are different instantiations of the object, \verb+"dz set"+ and \verb+"qz set"+, with different handles (i.e., names). The handles can then be used to select the currently "active" basis set for the computation, using the input command \verb+set "ao basis" "qz set"+.
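The same discipline can be sketched in a few lines of executable pseudocode. The example below is written in Python for brevity; the \verb+bas_*+ routine names are invented for illustration and are {\em not} the actual NWChem basis set API. It shows the essential pattern: all "private" state lives behind the interface, and callers see only an integer handle plus a small set of "public" access routines, which is how the Fortran77 objects in NWChem behave.

\begin{verbatim}
# Sketch of the handle-based "object" pattern (hypothetical names, not
# the real NWChem basis set API).  All state is module-private; callers
# hold only integer handles and call the public routines.

_basis_store = {}     # handle -> "private" data of one basis set instance
_next_handle = 0

def bas_create(name, exponents, coefficients):
    """Instantiate a basis set object and return an opaque handle."""
    global _next_handle
    _next_handle += 1
    _basis_store[_next_handle] = {"name": name,
                                  "exponents": list(exponents),
                                  "coefficients": list(coefficients)}
    return _next_handle

def bas_nbf(handle):
    """Public accessor: number of basis functions."""
    return len(_basis_store[handle]["exponents"])

def bas_exponent(handle, i):
    """Public accessor: i-th exponent."""
    return _basis_store[handle]["exponents"][i]

def bas_destroy(handle):
    """Release the instance; the handle becomes invalid."""
    del _basis_store[handle]

# Two instantiations of the same abstraction, distinguished only by handle.
# The numerical values are placeholders, not real cc-pVDZ/cc-pVQZ data.
dz = bas_create("dz set", [17.0, 3.8, 1.0], [0.1, 0.4, 0.7])
qz = bas_create("qz set", [28.0, 9.1, 2.9, 0.9], [0.1, 0.3, 0.4, 0.5])
print(bas_nbf(dz), bas_nbf(qz))   # -> 3 4
\end{verbatim}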
Related to the object oriented design is the idea of an abstract programming interface (API). An API provides a common interface to many different methods that perform the same type of task. An API differs from an object in the sense that there is no instantiation process and, while the functions are encapsulated, there is really no data that is encapsulated. For example, memory objects, basis objects, and geometry objects are passed into the integral API, and integrals are passed back out in the memory objects. The integral API decides which of the three different integral packages will be used to compute the integrals.

\subsection{Non-Uniform Memory Access}

One of NWChem's design goals is to scale to massively parallel hardware architectures in all aspects of the hardware: CPU, disk, and memory. With this goal in mind, distributing the data across all of the nodes becomes necessary. Therefore, in addition to the modular and object oriented architecture discussed above, NWChem is built on the principle of non-uniform memory access (NUMA). Just as a workstation has various levels of memory (registers, primary and secondary cache, main memory, and swap space) with varying sizes and access speeds, distributing data across nodes "simply" adds another level of remote memory. The programmer must be aware of this extra level of memory access when designing the parallel algorithms in NWChem in order to get efficient, scalable code.

The Memory Allocator (MA) tool allows the programmer to allocate memory that is local to the calling process. This is data that will generally not be directly shared with other processes, such as workspace for a particular local calculation or replicated copies of very small sets of data. The Global Arrays (GA) tool supports the NUMA model by allowing processes to share arrays as if the memory were physically shared. It allows the programmer to use relatively simple routines to access and manipulate data in the shared arrays. However, the programmer must be aware that access to shared data will be slower than access to local data. Just as GA allows the programmer to use the NUMA model effectively for memory, ChemIO is used to create files that are either local to the process or distributed among file systems. This allows the programmer to perform parallel I/O in the manner most efficient for the particular algorithm or the particular hardware. Together, MA, GA, and ChemIO provide the tools needed to implement a NUMA architecture. They also form a significant part of the Software Development Toolkit layer.
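To make the NUMA model concrete, the toy example below mimics the get/compute/accumulate style of access that GA encourages. It is written in Python against an ordinary NumPy array; the \verb+get+ and \verb+acc+ methods are stand-ins that only mirror the style of the corresponding Global Arrays operations, not calls into the real GA library, and in a real distributed run each such access would carry a communication cost.

\begin{verbatim}
import numpy as np

# Toy stand-in for a distributed global array: patches are fetched into
# local buffers, updated locally, and accumulated back.
class ToyGlobalArray:
    def __init__(self, n):
        self._data = np.zeros((n, n))        # plays the role of "remote" memory

    def get(self, lo, hi):
        return self._data[lo[0]:hi[0], lo[1]:hi[1]].copy()   # remote -> local

    def acc(self, lo, hi, patch):
        self._data[lo[0]:hi[0], lo[1]:hi[1]] += patch        # local -> remote

n, nblock = 8, 4
g_a = ToyGlobalArray(n)

# Each "process" owns a block of rows: fetch it, do purely local work on it,
# and accumulate the result back into the shared array.
for iproc in range(n // nblock):
    lo, hi = (iproc * nblock, 0), ((iproc + 1) * nblock, n)
    local = g_a.get(lo, hi)      # non-uniform (slow) access
    local += 1.0                 # local (fast) work
    g_a.acc(lo, hi, local)
\end{verbatim}

The point made in the text is visible in the sketch: the algorithm is written as if the array were shared, but every \verb+get+ and \verb+acc+ crosses the "remote memory" level, so a scalable implementation keeps such accesses to a minimum and does as much work as possible on the local patch.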
\subsection{The Five-Tiered Modular Architecture}

With a basic understanding of the object oriented and NUMA approaches in hand, the programmer also needs to understand the modular architecture used in NWChem. This section provides a basic overview of each of the tiers and describes how they fit together to make a cohesive and extensible program.

\subsubsection{The Generic Task Interface}

In old-fashioned structured Fortran programming, the Generic Task Interface would be referred to as the main program. As the "interface" between the user and the chemistry modules comprising NWChem, the generic task interface processes the input, sets up the parallel environment, and performs any initialization needed for the desired calculations. It then transfers control to the appropriate module, which performs the calculation. After a particular task is completed, control returns to the main program. If the input specifies more than one task, control is transferred to the appropriate module for the next task. This process continues until all specified tasks have been completed, or an error condition occurs. When all tasks complete successfully, the interface terminates program execution in an orderly manner. When errors occur, the interface tries to terminate program execution gracefully, but the degree of success depends somewhat on the severity of the error. Chapter \ref{sec:generic} presents a detailed discussion of the Generic Task Interface and how it functions in NWChem.
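The control flow just described can be summarized in a short schematic. The sketch below is purely illustrative Python, not actual NWChem code; the module entry points and the dictionary-style database are hypothetical stand-ins for the real task routines and the RTDB.

\begin{verbatim}
# Schematic of the Generic Task Interface control flow (illustrative only).

def task_scf(rtdb):                      # stand-in for an SCF energy module
    print("SCF energy using", rtdb["ao basis"]); return True

def task_mp2(rtdb):                      # stand-in for an MP2 energy module
    print("MP2 energy using", rtdb["ao basis"]); return True

MODULES = {"scf": task_scf, "mp2": task_mp2}   # task name -> calculation module

def nwchem_main(task_list, rtdb):
    # ... input parsing and parallel initialization would happen here ...
    for task in task_list:
        ok = MODULES[task](rtdb)         # transfer control to the module
        if not ok:                       # on error, shut down as gracefully as possible
            raise SystemExit("task '%s' failed" % task)
    # orderly termination once every requested task has completed

nwchem_main(["scf", "mp2"], {"ao basis": "qz set"})
\end{verbatim}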
\subsubsection{The Molecular Calculation Modules}

The second level of the five-tiered structure of NWChem consists of the high level molecular calculation modules. These are independent modules that perform the various functions of the code specified by the task directives. Examples include the self-consistent field (SCF) energy, the SCF analytic gradient, and the density functional theory (DFT) energy modules. The independent molecular calculation modules in NWChem can share data only through the run time database or through other well defined disk files. Each of the modules in this layer uses toolkits and routines in the lower layers of the architecture to accomplish its tasks. Chapter \ref{sec:modules} presents discussions of each of the calculation modules in NWChem, and the various operations that can be performed with these modules.

\subsubsection{The Molecular Modeling Toolkit}

The third level of the architecture of NWChem consists of the molecular modeling toolkit. Chapter \ref{sec:mmt} describes the elements of this toolkit in detail, including discussions of the geometry object (see Section \ref{sec:geometry}), the basis set object (see Section \ref{sec:basis}), the linear algebra routines (see Section \ref{sec:la}), symmetry (see Section \ref{sec:sym}), and the integral API (see Section \ref{sec:intapi}). Each of these tools provides a basic functionality that is common to many of the algorithms in chemistry. The integral API provides a common interface to the three integral packages available in NWChem. The basis set object provides the programmer with information related to a specific basis set. The geometry object provides the basic geometry in different formats and provides for the definition of molecular as well as periodic systems; it also carries information such as symmetry, atomic charges, and atomic masses. The linear algebra routines provide many general algorithms for basic vector-vector, vector-matrix, and matrix-matrix operations, and for solving eigenproblems.

\subsubsection{The Software Development Toolkit}

The Software Development Toolkit makes up the foundation level of the five-tiered structure of the code, and is the feature that makes it possible to develop an object oriented code that is constructed mainly in Fortran77. Chapter \ref{sec:sdt} presents a detailed discussion of this toolkit, which consists of four objects: the runtime database (RTDB) (see Section \ref{sec:rtdb}), the memory allocator (MA) (see Section \ref{sec:ma}), Global Arrays (GA) (see Section \ref{sec:ga}), and ChemIO (see Section \ref{sec:ChemIO}). Each of these tools provides the interface between the chemistry specific part of the program and the hardware, and they support the NUMA parallel programming paradigm used by NWChem.

The RTDB is a persistent data storage mechanism used in NWChem to hold calculation-specific information for the high level programming modules. Since NWChem does not destroy the RTDB at the end of a calculation unless specifically directed to do so by the user, a given RTDB can be used in several independent calculations. The MA tool allocates memory that is used locally by the calling process. The GA tool allocates memory that is distributed across the nodes (or shared, in the case of shared memory machines) and is addressable by all of the nodes. ChemIO is a high-performance I/O API designed to meet the requirements of large-scale computational chemistry problems; it allows the programmer to create I/O files that may be local or distributed.

\subsubsection{The Utility Routines}

This lowest level of the architecture provides much of the basic "odds-and-ends" functionality that most of the tiers above need. Examples include the timing routines, the input parser, and the print routines.
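To close this overview, the sketch below illustrates the data-sharing pattern that ties the tiers together: one calculation module records a result in a persistent, disk-resident key/value store, and a later, independent module (possibly in a later run) reads it back. The class and method names are invented for the example and do not reproduce the actual RTDB interface.

\begin{verbatim}
import json, pathlib

# Toy RTDB: a disk-resident key/value store that outlives a single module
# call (and, like the real RTDB, a single run unless deliberately removed).
class ToyRTDB:
    def __init__(self, path):
        self._path = pathlib.Path(path)
        self._db = (json.loads(self._path.read_text())
                    if self._path.exists() else {})

    def put(self, key, value):
        self._db[key] = value
        self._path.write_text(json.dumps(self._db))   # persist immediately

    def get(self, key, default=None):
        return self._db.get(key, default)

# One module records its results ...
rtdb = ToyRTDB("calc.rtdb")
rtdb.put("ao basis", "qz set")
rtdb.put("scf:energy", -128.5)       # placeholder value

# ... and a later, independent module picks them up.
rtdb2 = ToyRTDB("calc.rtdb")
print(rtdb2.get("scf:energy"))
\end{verbatim}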
{ "alphanum_fraction": 0.8056894562, "avg_line_length": 50.7896551724, "ext": "tex", "hexsha": "f5ad6417b47c2589f54e14a182f1bf61d1bb7c41", "lang": "TeX", "max_forks_count": 135, "max_forks_repo_forks_event_max_datetime": "2022-03-31T02:28:49.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-19T18:36:44.000Z", "max_forks_repo_head_hexsha": "21cb07ff634475600ab687882652b823cad8c0cd", "max_forks_repo_licenses": [ "ECL-2.0" ], "max_forks_repo_name": "dinisAbranches/nwchem", "max_forks_repo_path": "doc/prog/nwarch.tex", "max_issues_count": 356, "max_issues_repo_head_hexsha": "21cb07ff634475600ab687882652b823cad8c0cd", "max_issues_repo_issues_event_max_datetime": "2022-03-31T02:28:21.000Z", "max_issues_repo_issues_event_min_datetime": "2017-12-05T01:38:12.000Z", "max_issues_repo_licenses": [ "ECL-2.0" ], "max_issues_repo_name": "dinisAbranches/nwchem", "max_issues_repo_path": "doc/prog/nwarch.tex", "max_line_length": 87, "max_stars_count": 317, "max_stars_repo_head_hexsha": "21cb07ff634475600ab687882652b823cad8c0cd", "max_stars_repo_licenses": [ "ECL-2.0" ], "max_stars_repo_name": "dinisAbranches/nwchem", "max_stars_repo_path": "doc/prog/nwarch.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-28T11:48:24.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-20T21:29:11.000Z", "num_tokens": 3157, "size": 14729 }
% ======================================================================== % Copyright (c) 1985 The University of Washington % % Licensed under the Apache License, Version 2.0 (the "License"); % you may not use this file except in compliance with the License. % You may obtain a copy of the License at % % http://www.apache.org/licenses/LICENSE-2.0 % % Unless required by applicable law or agreed to in writing, software % distributed under the License is distributed on an "AS IS" BASIS, % WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. % See the License for the specific language governing permissions and % limitations under the License. % ======================================================================== % % Documentation for University of Washington thesis LaTeX document class % by Jim Fox % [email protected] % % Revised 2020/02/24, added \caption()[]{} option. No ToC. % % Revised for version 2015/03/03 of uwthesis.cls % Revised, 2016/11/22, for cleanup of sample copyright and title pages % % This document is contained in a single file ONLY because % I wanted to be able to distribute it easily. A real thesis ought % to be contained on many files (e.g., one for each chapter, at least). % % To help you identify the files and sections in this large file % I use the string '==========' to identify new files. % % To help you ignore the unusual things I do with this sample document % I try to use the notation % % % --- sample stuff only ----- % special stuff for my document, but you don't need it in your thesis % % --- end-of-sample-stuff --- % Printed in twoside style now that that's allowed % \documentclass [11pt, proquest] {uwthesis}[2020/02/24] % % The following line would print the thesis in a postscript font % \usepackage{natbib} % \def\bibpreamble{\protect\addcontentsline{toc}{chapter}{Bibliography}} \setcounter{tocdepth}{1} % Print the chapter and sections to the toc % ========== Local defs and mods % % --- sample stuff only ----- % These format the sample code in this document \usepackage{alltt} % \newenvironment{demo} {\begin{alltt}\leftskip3em \def\\{\ttfamily\char`\\}% \def\{{\ttfamily\char`\{}% \def\}{\ttfamily\char`\}}} {\end{alltt}} % metafont font. If logo not available, use the second form % % \font\mffont=logosl10 scaled\magstep1 \let\mffont=\sf % --- end-of-sample-stuff --- \begin{document} % ========== Preliminary pages % % ( revised 2012 for electronic submission ) % \prelimpages % % ----- copyright and title pages % \Title{The Suitability of the \LaTeX\ Text Formatter\\ for Thesis Preparation by Technical and\\ Non-technical Degree Candidates} \Author{Jim Fox} \Year{1995} \Program{IT Infrastructure} \Chair{Name of Chairperson}{Title of Chair}{Department of Chair} \Signature{First committee member} \Signature{Next committee member} \Signature{etc} \copyrightpage \titlepage % % ----- signature and quoteslip are gone % % % ----- abstract % \setcounter{page}{-1} \abstract{% This sample dissertation is an aid to students who are attempting to format their theses with \LaTeX, a sophisticated text formatter widely used by mathematicians and scientists everywhere. \begin{itemize} \item It describes the use of a specialized macro package developed specifically for thesis production at the University. 
The macros customize \LaTeX\ for the correct thesis style, allowing the student to concentrate on the substance of his or her text.% \footnote{See Appendix A to obtain the source to this thesis and the class file.} \item It demonstrates the solutions to a variety of formatting challenges found in thesis production. \item It serves as a template for a real dissertation. \end{itemize} } % % ----- contents & etc. % \tableofcontents \listoffigures %\listoftables % I have no tables % % ----- glossary % \chapter*{Glossary} % starred form omits the `chapter x' \addcontentsline{toc}{chapter}{Glossary} \thispagestyle{plain} % \begin{glossary} \item[argument] replacement text which customizes a \LaTeX\ macro for each particular usage. \item[back-up] a copy of a file to be used when catastrophe strikes the original. People who make no back-ups deserve no sympathy. \item[control sequence] the normal form of a command to \LaTeX. \item[delimiter] something, often a character, that indicates the beginning and ending of an argument. More generally, a delimiter is a field separator. \item[document class] a file of macros that tailors \LaTeX\ for a particular document. The macros described by this thesis constitute a document class. \item[document option] a macro or file of macros that further modifies \LaTeX\ for a particular document. The option {\tt[chapternotes]} constitutes a document option. \item[figure] illustrated material, including graphs, diagrams, drawings and photographs. \item[font] a character set (the alphabet plus digits and special symbols) of a particular size and style. A couple of fonts used in this thesis are twelve point roman and {\sl twelve point roman slanted}. \item[footnote] a note placed at the bottom of a page, end of a chapter, or end of a thesis that comments on or cites a reference for a designated part of the text. \item[formatter] (as opposed to a word-processor) arranges printed material according to instructions embedded in the text. A word-processor, on the other hand, is normally controlled by keyboard strokes that move text about on a display. \item[\LaTeX] simply the ultimate in computerized typesetting. \item[macro] a complex control sequence composed of other control sequences. \item[pica] an archaic unit of length. One pica is twelve points and six picas is about an inch. \item[point] a unit of length. 72.27 points equals one inch. \item[roman] a conventional printing typestyle using serifs. the decorations on the ends of letter strokes. This thesis is set in roman type. \item[rule] a straight printed line; e.g., \hrulefill. \item[serif] the decoration at the ends of letter strokes. \item[table] information placed in a columnar arrangement. \item[thesis] either a master's thesis or a doctoral dissertation. This document also refers to itself as a thesis, although it really is not one. \end{glossary} % % ----- acknowledgments % \acknowledgments{% \vskip2pc % {\narrower\noindent The author wishes to express sincere appreciation to University of Washington, where he has had the opportunity to work with the \TeX\ formatting system, and to the author of \TeX, Donald Knuth, {\it il miglior fabbro}. 
% \par} } % % ----- dedication % \dedication{\begin{center}to my dear wife, Joanna\end{center}} % % end of the preliminary pages % % ========== Text pages % \textpages % ========== Chapter 1 \chapter {Introduction} The utility of a clean, professionally prepared thesis is well documented% \footnote{See, for example, W.~Shakespeare\cite{Hamlet} for a recent discussion.} and, even if you never intend to actually print your thesis, you still ought to format it as if that were your intention. \TeX\ facilitates that. It is a flexible, complete and professional typesetting system. It will produce {\bf pdf} output as required by the Graduate School. \section{The Purpose of This Sample Thesis} This sample is both a demonstration of the quality and propriety of a \LaTeX formatted thesis and documentation for its preparation. It has made extensive use of a custom class file developed specifically for this purpose at the University of Washington. Chapter~II discusses \TeX\ and \LaTeX. Chapter III describes the additional macros and functions provided by the custom thesis class file. Finally, Chapter IV hopes to tie things up. It is impossible to predict all the formatting problems one will encounter and there will be problems that are best handled by a specialist. The Graduate School may be able to help you find help. Some departments may also be able to provide \LaTeX\ assistance. \section{Conventions and Notations} In this thesis the typist refers to the user of \LaTeX---the one who makes formatting decisions and chooses the appropriate formatting commands. He or she will most often be the degree candidate. This document deals with \LaTeX\ typesetting commands and their functions. Wherever possible the conventions used to display text entered by the typist and the resulting formatted output are the same as those used by the \TeX books. Therefore, {\tt typewriter type} is used to indicate text as typed by the computer or entered by the typist. It is quite the opposite of {\it italics,} which indicates a category rather than exact text. For example, {\tt alpha} and {\tt beta} might each be an example of a {\it label}. \section{Nota bene} This sample thesis was produced by the \LaTeX\ document class it describes and its format is consonant with the Graduate School's electronic dissertation guidelines, as of November, 2014, at least. However, use of this package does not guarantee acceptability of a particular thesis. % ========== Chapter 2 \chapter{A Brief \\ Description of \protect\TeX} The \TeX\ formatting program is the creation of Donald Knuth of Stanford University. It has been implemented on nearly every general purpose computer and produces exactly the same copy on all machines. \section{What is it; why is it spelled that way; and what do really long section titles look like in the text and in the Table of Contents?} \TeX\ is a formatter. A document's format is controlled by commands embedded in the text. \LaTeX\ is a special version of \TeX---preloaded with a voluminous set of macros that simplify most formatting tasks. \TeX\ uses {\it control sequences} to control the formatting of a document. These control sequences are usually words or groups of letters prefaced with the backslash character ({\tt\char'134}). For example, Figure \ref{start-2} shows the text that printed the beginning of this chapter. Note the control sequence \verb"\chapter" that instructed \TeX\ to start a new chapter, print the title, and make an entry in the table of contents. 
It is an example of a macro defined by the \LaTeX\ macro package. The control sequence \verb"\TeX", which prints the word \TeX, is a standard macro from the {\it\TeX book}. The short control sequence \verb"\\" in the title instructed \TeX\ to break the title line at that point. This capability is an example of an extension to \LaTeX\ provided by the uwthesis document class. \begin{figure} \begin{demo} \uwsinglespace \\chapter\{A Brief\\\\Description of \\TeX\} The \\TeX\\ formatting program is the creation of Donald Knuth of Stanford University. \end{demo} \label{start-2} \caption{The beginning of the Chapter II text} \end{figure} Most of the time \TeX\ is simply building paragraphs from text in your source files. No control sequences are involved. New paragraphs are indicated by a blank line in the input file. Hyphenation is performed automatically. \section{\TeX books} The primary reference for \LaTeX\ is Lamport's second edition of the \textit{\LaTeX\ User's Guide}\cite{Lbook}. It is easily read and should be sufficient for thesis formatting. See also the \textsl{\LaTeX\ Companion}\cite{companion} for descriptions of many add-on macro packages. Although unnecessary for thesis writers, the \textsl{\TeX book} is the primary reference for \TeX sperts worldwide. \section{Mathematics} The thesis class does not expand on \TeX's or \LaTeX's comprehensive treatment of mathematical equation printing.% \label{c2note}\footnote{% % a long footnote indeed. Although many \TeX-formatted documents contain no mathematics except the page numbers, it seems appropriate that this paper, which is in some sense about \TeX, ought to demonstrate an equation or two. Here then, is a statement of the {\it Nonsense Theorem}. \smallskip \def\RR{{\cal R\kern-.15em R}} {\narrower\hangindent\parindent Assume a universe $E$ and a symmetric function $\$$ defined on $E$, such that for each $\$^{yy}$ there exists a $\$^{\overline{yy}}$, where $\$^{yy} = \$^{\overline{yy}}$. For each element $i$ of $E$ define ${\cal S}(i)=\sum_i \$^{yy}+\$^{\overline{yy}}+0$. Then if $\RR$ is that subset of $E$ where $1+1=3$, for each $i$ $$\lim_{\$\to\infty}\int {\cal S}di = \cases{0,&if $i\not\in\RR$;\cr \infty,&if $i\in\RR$.\cr}$$ \par}} % end of the footnote % The {\it\TeX book}\cite{book}, {\it \LaTeX\ User's Guide}\cite{Lbook}, and {\it The \LaTeX\ Companion}\cite{companion} thoroughly cover this topic. \section{Languages other than English} Most \LaTeX\ implementations at the University are tailored for the English language. However, \LaTeX\ will format many other languages. Unfortunately, this author has never been successful in learning more than a smattering of anything other than English. Consult your department or the Tex Users Group. \smallskip \begin{center} {\tt http://tug.org/}, \end{center} \smallskip for assistance with non-English formatting. Unusual characters can be defined via the font maker \hbox{\mffont METAFONT} (documented by Knuth\cite{Metafont}). The definitions are not trivial. Students who attempt to print a thesis with custom fonts may soon proclaim, % Note. This is not the ideal way to print Greek \medskip \begin{center} ``$\mathaccent"7027\alpha\pi o\kern1pt\theta\alpha\nu\epsilon\hat\iota\nu$ \ $\theta\acute\epsilon\lambda\omega$.'' \end{center} % ========== Chapter 3 \chapter{The Thesis Unformatted} This chapter describes the uwthesis class (\texttt{uwthesis.cls}, version dated 2014/11/13) in detail and shows how it was used to format the thesis. 
A working knowledge of Lamport's \LaTeX\ manual\cite{Lbook} is assumed. \section{The Control File} The source to this sample thesis is a single file only because ease of distribution was a concern. You should not do this. Your task will be much easier if you break your thesis into several files: a file for the preliminary pages, a file for each chapter, one for the glossary, and one for each appendix. Then use a control file to tie them all together. This way you can edit and format parts of your thesis much more efficiently. Figure~\ref{control-file} shows a control file that might have produced this thesis. It sets the document style, with options and parameters, and formats the various parts of the thesis---% but contains no text of its own. % control file caption and figure % % \begin{figure}[p] \begin{fullpage} \uwsinglespace \begin{verbatim} % LaTeX thesis control file \documentclass [11pt, proquest]{uwthesis}[2014/11/13] \begin{document} % preliminary pages % \prelimpages \include{prelim} % text pages % \textpages \include{chap1} \include{chap2} \include{chap3} \include{chap4} % bibliography % \bibliographystyle{plain} \bibliography{thesis} % appendices % \appendix \include{appxa} \include{appxb} \include{vita} \end{document} \end{verbatim} \caption[A thesis control file]% {\narrower A thesis control file ({\tt thesis.tex}). This file is the input to \LaTeX\ that will produce a thesis. It contains no text, only commands which direct the formatting of the thesis. } \label{control-file} \end{fullpage} \end{figure} The first section, from the \verb"\documentclass" to the \verb"\begin{document}", defines the document class and options. This sample thesis specifies the \texttt{proquest} style, which is now required by the Graduate School and is the default. Two other, now dated, other styles are available: \verb"twoside", which is similar but produces a wider binding margin and is more suitable for paper printing; and \verb"oneside", which is really old fashoned. This sample also specified a font size of 11 points. Possible font size options are: \verb"10pt", \verb"11pt", and \verb"12pt". Default is 12 points, which is the preference of the Graduate School. If you choose a smaller size be sure to check with the Graduate School for acceptability. The smaller fonts can produce very small sub and superscripts. Include most additional formatting packages with \verb"\usepackage", as describe by Lamport\cite{Lbook}. The one exception to this rule is the \verb"natbib" package. Include it with the \verb"natbib" document option. Use the \verb"\includeonly" command to format only a part of your thesis. See Lamport\cite[sec. 4.4]{Lbook} for usage and limitations. \section{The Text Pages} A chapter is a major division of the thesis. Each chapter begins on a new page and has a Table of Contents entry. \subsection{Chapters, Sections, Subsections, and Appendices} Within the chapter title use a \verb"\\" control sequence to separate lines in the printed title (recall Figure \ref{start-2}.). The \verb"\\" does not affect the Table of Contents entry. Format appendices just like chapters. The control sequence \verb"\appendix" instructs \LaTeX\ to begin using the term `Appendix' rather than `Chapter'. Specify sections and subsections of a chapter with \verb"\section" and \verb"\subsection", respectively. In this thesis chapter and section titles are written to the table of contents. Consult Lamport\cite[pg. 176]{Lbook} to see which subdivisions of the thesis can be written to the table of contents. 
The \verb"\\" control sequence is not permitted in section and subsection titles. \subsection{Footnotes} \label{footnotes} Footnotes format as described in the \LaTeX\ book. You can also ask for end-of-chapter or end-of-thesis notes. The thesis class will automatically set these up if you ask for the document class option \texttt{chapternotes} or \texttt{endnotes}. If selected, chapternotes will print automatically. If you choose endnotes however you must explicitly indicate when to print the notes with the command \verb"\printendnotes". See the style guide for suitable endnote placement. \subsection{Figures and Tables} Standard \LaTeX\ figures and tables, see Lamport\cite[sec.~C.9]{Lbook}, normally provide the most convenient means to position the figure. Full page floats and facing captions are exceptions to this rule. If you want a figure or table to occupy a full page enclose the contents in a \texttt{fullpage} environment. See figure~\ref{facing-caption}. \subsubsection{Facing pages} Facing page captions are an artifact of traditional, dead-tree printing, where a left-side (even) page faces a right-side (odd) page. In the \texttt{twoside} style, a facing caption is full page caption for a full page figure or table and should face the illustration to which it refers. You must explicitly format both pages. The caption part appears on an even page (left side) and the figure or table comes on the following odd page (right side). Enclose the float contents for the caption in a \texttt{leftfullpage} environment, and enclose the float contents for the figure or table in a \texttt{fullpage} environment. The first page (left side) contains the caption. The second page (right side) could be left blank. A picture or graph might be pasted onto this space. See figure~\ref{facing-caption}. \begin{figure}[t] \uwsinglespace \begin{verbatim} \begin{figure}[p]% the left side caption \begin{leftfullpage} \caption{ . . . } \end{leftfullpage} \end{figure} \begin{figure}[p]% the right side space \begin{fullpage} . . . ( note.. no caption here ) \end{fullpage} \end{figure} \end{verbatim} \caption[Generating a facing caption page]{This text would create a double page figure in the two-side styles. } \label{facing-caption} \end{figure} You can use these commands with the \texttt{proquest} style, but they have little effect on online viewing. \subsection{Horizontal Figures and Tables} Figures and tables may be formatted horizontally (a.k.a.\ landscape) as long as their captions appear horizontal also. \LaTeX\ will format landscape material for you. Include the \texttt{rotating} package \begin{demo} \\usepackage[figuresright]\{rotating\} \end{demo} and read the documentation that comes with the package. Figure~\ref{sideways} is an example of how a landscape table might be formatted. \begin{figure}[t] \uwsinglespace \begin{verbatim} \begin{sidewaystable} ... \caption{ . . . } \end{sidewaystable} \end{verbatim} \caption[Generating a landscape table]{This text would create a landscape table with caption.} \label{sideways} \end{figure} \subsection{Figure and Table Captions} Most captions are formatted with the \verb"\caption" macro as described by Lamport\cite[sec. C.9]{Lbook}. The uwthesis class extends this macro to allow continued figures and tables, and to provide multiple figures and tables with the same number, e.g., 3.1a, 3.1b, etc. 
To format the caption for the first part of a figure or table that cannot fit onto a single page use the standard form: \begin{demo} \\caption[\textit{toc}]\{\textit{text}\} \end{demo} To format the caption for the subsequent parts of the figure or table use this caption: \begin{demo} \\caption(-)\{(continued)\} \end{demo} It will keep the same number and the text of the caption will be {\em(continued)}. To format the caption for the first part of a multi-part figure or table use the format: \begin{demo} \\caption(a)[\textit{toc}]\{\textit{text}\} \end{demo} The figure or table will be lettered (with `a') as well as numbered. To format the caption for the subsequent parts of the multi-part figure or table use the format: \begin{demo} \\caption(\textit{x})\{\textit{text}\} \end{demo} where {\em x} is {\tt b}, {\tt c}, \ldots. The parts will be lettered (with `b', `c', \ldots). If you want a normal caption, but don't want a ToC entry: \begin{demo} \\caption()\{\textit{text}\} \end{demo} Note that the caption number will increment. You would normally use this only to leave an entire chapter's captions off the ToC. \subsection{Line spacing} Normally line spacing will come out like it should. However, the ProQuest style allows single spacing in certain situations: figure content, some lists, and etc. Use \verb"\uwsinglespace" to switch to single spacing within a \verb"\begin{}" and \verb"\end{}" block. The code examples in this document does this. \section{The Preliminary Pages} These are easy to format only because they are relatively invariant among theses. Therefore the difficulties have already been encountered and overcome by \LaTeX\ and the thesis document classes. Start with the definitions that describe your thesis. This sample thesis was printed with the parameters: \begin{demo} \\Title\{The Suitability of the \\LaTeX\\ Text Formatter\\\\ for Thesis Preparation by Technical and\\\\ Non-technical Degree Candidates\} \\Author\{Jim Fox\} \\Program\{IT Infrastructure\} \\Year\{2012\} \\Chair\{Name of Chairperson\}\{title\}\{Chair's department\} \\Signature\{First committee member\} \\Signature\{Next committee member\} \\Signature\{etc\} \end{demo} Use two or more \verb"\Chair" lines if you have co-chairs. \subsection{Copyright page} Print the copyright page with \verb"\copyrightpage". \subsection{Title page} Print the title page with \verb"\titlepage". The title page of this thesis was printed with% \begin{demo} \\titlepage \end{demo} You may change default text on the title page with these macros. You will have to redefine \verb"\Degreetext", for instance, if you're writing a Master's thesis instead of a dissertation.\footnote{If you use these they can be included with the other information before \\copyrightpage".} \begin{list}{}{\itemindent\parindent\itemsep0pt \def\makelabel#1{\texttt{\char`\\#1}\hfill}\uwsinglespace} \item[Degree\char`\{{\it degree name}\char`\}] defaults to ``Doctor of Philosophy'' \item[School\char`\{{\it school name}\char`\}] defaults to ``University of Washington'' \item[Degreetext\char`\{{\it degree text}\char`\}] defaults to ``A dissertation submitted \ldots'' \item[textofCommittee\char`\{{\it committee label}\char`\}] defaults to ``Reading Committee:'' \item[textofChair\char`\{{\it chair label}\char`\}] defaults to ``Chair of the Supervisory Committee:'' \end{list} These definitions must appear \underline{before} the \verb"\titlepage" command. \subsection{Abstract} Print the abstract with \verb"\abstract". It has one argument, which is the text of the abstract. 
All the names have already been defined. The abstract of this thesis was printed with \begin{demo} \\abstract\{This sample . . . `real' dissertation.\} \end{demo} \subsection{Tables of contents} Use the standard \LaTeX\ commands to format these items. \subsection{Acknowledgments} Use the \verb"\acknowledgments" macro to format the acknowledgments page. It has one argument, which is the text of the acknowledgment. The acknowledgments of this thesis was printed with \begin{demo} \\acknowledgments\{The author wishes . . . \{\\it il miglior fabbro\}.\\par\}\} \end{demo} \subsection{Dedication} Use the \verb"\dedication" macro to format the dedication page. It has one argument, which is the text of the dedication. \subsection{Vita} Use the \verb"\vita" macro to format the curriculum vitae. It has one argument, which chronicles your life's accomplishments. Note that the Vita is not really a preliminary page. It appears at the end of your thesis, just after the appendices. %% %% \section{Customization of the Macros} %% %% Simple customization, including %% alteration of default parameters, changes to dimensions, %% paragraph indentation, and margins, are not too difficult. %% You have the choice of modifying the class file ({\tt uwthesis.cls}) %% or loading %% one or more personal style files to customize your thesis. %% The latter is usually most convenient, since you do not need %% to edit the large and complicated class file. %% % ========== Chapter 4 \chapter{Running \LaTeX\\ ({\it and printing if you must})} From a given source \TeX\ will produce exactly the same document on all computers and, if needed, on all printers. {\it Exactly the same} means that the various spacings, line and page breaks, and even hyphenations will occur at the same places. How you edit your text files and run \LaTeX\ varies from system to system and depends on your personal preference. \section{Running} The author is woefully out of his depth where \TeX\ on Windows is concerned. Google would be his resource. On a linux system he types \begin{demo} \$\ pdflatex uwthesis \end{demo} and it generally works. \section{Printing} All implementations of \TeX\ provide the option of {\bf pdf} output, which is all the Graduate School requires. Even if you intend to print a copy of your thesis create a {\tt pdf}. It will print most anywhere. \printendnotes % % ========== Bibliography % \nocite{*} % include everything in the uwthesis.bib file \bibliographystyle{plain} \bibliography{uwthesis} % % ========== Appendices % \appendix \raggedbottom\sloppy % ========== Appendix A \chapter{Where to find the files} The uwthesis class file, {\tt uwthesis.cls}, contains the parameter settings, macro definitions, and other \TeX nical commands which allow \LaTeX\ to format a thesis. The source to the document you are reading, {\tt uwthesis.tex}, contains many formatting examples which you may find useful. The bibliography database, {\tt uwthesis.bib}, contains instructions to BibTeX to create and format the bibliography. You can find the latest of these files on: \begin{itemize} \item My page. \begin{description} \item[] \verb%http://staff.washington.edu/fox/tex/uwthesis.html% \end{description} \item CTAN \begin{description} \item[] \verb%http://tug.ctan.org/tex-archive/macros/latex/contrib/uwthesis/% \item[] (not always as up-to-date as my site) \end{description} \end{itemize} \vita{Jim Fox is a Software Engineer with IT Infrastructure Division at the University of Washington. His duties do not include maintaining this package. 
That is rather an avocation which he enjoys as time and circumstance allow. He welcomes your comments to {\tt [email protected]}. } \end{document}
{ "alphanum_fraction": 0.7382885257, "avg_line_length": 31.7578125, "ext": "tex", "hexsha": "de0fe29b0a8001e960ec8e68a66f0ae08bbab856", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7c7e62cb9375eb7886653df1f2184e958f156ca4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "altsalt/UWThesis", "max_forks_repo_path": "uwthesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7c7e62cb9375eb7886653df1f2184e958f156ca4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "altsalt/UWThesis", "max_issues_repo_path": "uwthesis.tex", "max_line_length": 101, "max_stars_count": null, "max_stars_repo_head_hexsha": "7c7e62cb9375eb7886653df1f2184e958f156ca4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "altsalt/UWThesis", "max_stars_repo_path": "uwthesis.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7261, "size": 28455 }
\chapter{Tree} \label{a:tree} \OverviewLineNoTitle \begin{footnotesize}\begin{verbatim} import java.util.Vector; public abstract class Tree { public abstract String toString(); protected void errorAndExit(String s) {System.out.print(s); System.exit(0);} protected int lineno; // line # of element in source public int lineno(){return(lineno);} } class Classdef extends Tree { private int getCounter = 0; private Vector classList = new Vector(); // touple af (navn, contents) // constructor Classdef(int lineno){super.lineno = lineno;} public void add(Object o){classList.addElement(o);} public void resetGet(){getCounter = 0;} public Object getNext() { if(getCounter > -1 && getCounter < classList.size()) return(classList.elementAt(getCounter++)); return(null); } public void pushBack(){getCounter--;} public String toString() { StringBuffer buf = new StringBuffer(); for(int i = 0; i < classList.size(); i++) buf.append("class " + (String) classList.elementAt(i++) + "\n{\n" + classList.elementAt(i).toString() + "}\n\n"); return(buf.toString()); } } class Classcontents extends Tree { private int getCounter = 0; private Vector defList = new Vector(); // list of vardef/fncdef // constructor Classcontents(int lineno){super.lineno = lineno;} public void add(Object o){defList.addElement(o);} public void resetGet(){getCounter = 0;} public Tree getNext() { if(getCounter > -1 && getCounter < defList.size()) return((Tree) defList.elementAt(getCounter++)); return(null); } public void pushBack(){getCounter--;} public String toString() { StringBuffer buf = new StringBuffer(); for(int i = 0; i < defList.size(); i++) buf.append( defList.elementAt(i).toString() + "\n"); return(buf.toString()); } } class Vardef extends Tree { private String name; private String type; // constructor Vardef(String t, String n, int lineno) {type=t; name=n; super.lineno = lineno;} public String getName(){ return(name); } public String getType(){ return(type); } public String toString() { return("(vardef) " + type + " " + name); } } class Fncdef extends Tree { private String name; private Vector argList; // contains pair of <ID>, == 2*n number of <ID> private Tree sentences; // constructor Fncdef(String name, Vector list, Tree s, int lineno) {this.name = name; argList = list; sentences = s; super.lineno = lineno;} public String getName(){return(name);} public Vector getArgList(){return(argList);} public Sentences getSentence(){return((Sentences) sentences);} public String toString() { // navn StringBuffer buf = new StringBuffer("\n(fncdef) void " + name + "("); // argumenter for(int i = 0; i < argList.size(); i++) { if(i != 0) buf.append(", "); buf.append((String) argList.elementAt(i++) + " " + (String) argList.elementAt(i)); } // indhold buf.append(")\n{\n" + sentences.toString() + "}"); return(buf.toString()); } } class Sentences extends Tree { private int getCounter = 0; private Vector sList = new Vector(); // liste of <sentences> // constructor Sentences(int lineno){super.lineno = lineno;} public void add(Object o){sList.addElement(o);} public Tree getNext() { if(getCounter > -1 && getCounter < sList.size()) return((Tree) sList.elementAt(getCounter++)); return(null); } public void pushBack(){getCounter--;} public String toString() { StringBuffer buf = new StringBuffer(); for(int i = 0; i < sList.size(); i++) buf.append( sList.elementAt(i).toString() + "\n"); return(buf.toString()); } } class Fnccall extends Tree { private boolean localCall; private String name; private Vector argList = new Vector(); // constructor Fnccall(String name, 
boolean localCall, int lineno) {this.name = name; this.localCall = localCall; super.lineno = lineno;} // to convert an OpCall to a Fnccall Fnccall(String name, Vector argList, boolean localCall) {this.name = name; this.argList = argList; this.localCall = localCall;} public String getName(){return(name);} public Vector getArgList(){return(argList);} public boolean isCallLocal(){return(localCall);} public void add(Object o){argList.addElement(o);} public String toString() { StringBuffer buf = new StringBuffer("\n(fnccall) "+ name + "("); for(int i = 0; i < argList.size(); i++) { if(i > 0) buf.append(", "); buf.append(argList.elementAt(i).toString() + "\n"); } return(buf.toString() + ")"); } } class If extends Tree { private Tree condCode; private Tree thenCode; private Tree elseCode; // constructor If(Tree c, Tree t, Tree e, int lineno) { condCode = c; thenCode = t; elseCode = e; super.lineno = lineno;} public Tree getCond(){return(condCode);} public Tree getThen(){return(thenCode);} public Tree getElse(){return(elseCode);} public String toString() { return("if(" + condCode.toString() + ")\n{\n" + thenCode.toString() + "}\nelse\n{\n" + elseCode.toString() + "\n}\n"); } } class While extends Tree { private Tree condCode; private Tree whileCode; //constructor While(Tree c, Tree w, int lineno){condCode = c; whileCode = w;super.lineno = lineno;} public Tree getCond(){return(condCode);} public Tree getWhile(){return(whileCode);} public String toString() { return("while(" + condCode.toString() + ")\n{" + whileCode.toString()+ "}\n"); } } class Break extends Tree { // constructor Break(int lineno){super.lineno = lineno;} public String toString(){return("break");} } class Return extends Tree { //constructor Return(int lineno){super.lineno = lineno;} public String toString(){return("return");} } class Assign extends Tree { int op; // determine wether we have <ID> or <NAME> private String name; private Tree E; // constructor Assign(String name, int op, Tree E, int lineno) {this.name = name; this.op = op; this.E = E; super.lineno = lineno;} public String getName(){return(name);} public Tree getE(){return(E);} public int getOp(){return(op);} public String toString() { return("(assign) " + name + " = " + E.toString()); } } class OpMonadic extends Tree implements TokenNames { private int op; // == TokenName private Tree right; // constructor OpMonadic(int op, Tree right, int lineno) {this.op = op; this.right = right; super.lineno = lineno;} public int getOp(){return(op);} public Tree getR(){return(right);} public String toString() { switch(op) { case NOT: return("(! 
" + right.toString()+")" ); case MINUS: return("(- " + right.toString()+")" ); default: errorAndExit("Strange operator (#"+op+") in 'opMonadic' made me stop compiling!\n"); } return("dummy to make Java<tm> happy"); } } class OpDual extends Tree implements TokenNames { private int op; // == TokenName private Tree left, right; // constructor OpDual(int op, Tree left, Tree right, int lineno) { this.op = op; this.left = left; this.right = right; super.lineno = lineno;} public Tree getL(){return(left);} public Tree getR(){return(right);} public int getOp(){return(op);} public String toString() { StringBuffer buf = new StringBuffer("("+left.toString() + " "); switch(op) { case PLUS: buf.append("+"); break; case MINUS: buf.append("-"); break; case MULT: buf.append("*"); break; case DIV: buf.append("/"); break; case SET: buf.append("="); break; case NEQUAL: buf.append("!="); break; case EQUAL: buf.append("=="); break; case LEQUAL: buf.append("<="); break; case LESS: buf.append("<"); break; case AND: buf.append("&&"); break; case BITAND: buf.append("&"); break; case OR: buf.append("||"); break; case BITOR: buf.append("|"); break; case MODULO: buf.append("%"); break; default: errorAndExit("Strange operator (#"+op+") in 'opDual' made me stop compiling!\n"); } return(buf + " " + right.toString()+")" ); } } // functioncalls samt NEW class OpCall extends Tree implements TokenNames { private boolean fncCall = false; private int op; // anvendes til at bestemme om det er et "new" eller et funktionskald private String name; private Vector argList = new Vector(); // constructor OpCall(int op, String name, int lineno) {this.op = op; this.name = name; super.lineno = lineno;} public boolean isFncCall(){return(fncCall);} public void setIsFncCall(){fncCall = true;} public int getOp(){return(op);} public String getName(){return(name);} public Vector getArgList(){return(argList);} public void add(Object o){argList.addElement(o);} public String toString() { if(op == NEW) return("new " + name + "() "); else { StringBuffer buf = new StringBuffer(name); if(argList.size() > 0) { buf.append("("); for(int i = 0; i < argList.size(); i++) { if(i != 0) buf.append(", "); buf.append(argList.elementAt(i).toString()); } return(buf.toString() + ")"); } else return(buf.toString()); } } } class OpConst extends Tree implements TokenNames { private int op; // tokenName private int nval; private String sval; // constructor OpConst(int op, int nval, int lineno) {this.op = op; this.nval = nval; super.lineno = lineno;} OpConst(int op, String sval, int lineno) {this.op = op; this.sval = sval; super.lineno = lineno;} // for the code generator public int getOp(){return(op);} public int getNval(){return(nval);} public String getSval(){return(sval);} public String toString() { switch(op) { case VAL_CHAR: return("'"+sval+"' "); case VAL_STRING: return("\""+sval+"\" "); case VAL_INT: String s = "" + nval; return(s); default: errorAndExit("Strange operator (#"+op+") in 'opConst' made me stop compiling!\n"); } return("dummy to make Java<tm> happy"); } } \end{verbatim}\end{footnotesize}
{ "alphanum_fraction": 0.5361400584, "avg_line_length": 27.7020785219, "ext": "tex", "hexsha": "c0d1588ed06c9042f26575dda8d33945c74d7f53", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f4effdde5ee060eac1c186bc11dd261f1e47d958", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "kbilsted/CompilerTeknik", "max_forks_repo_path": "rapport/a_tree.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f4effdde5ee060eac1c186bc11dd261f1e47d958", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "kbilsted/CompilerTeknik", "max_issues_repo_path": "rapport/a_tree.tex", "max_line_length": 106, "max_stars_count": null, "max_stars_repo_head_hexsha": "f4effdde5ee060eac1c186bc11dd261f1e47d958", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "kbilsted/CompilerTeknik", "max_stars_repo_path": "rapport/a_tree.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2769, "size": 11995 }
%%% PREAMBLE - Do not touch %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[10pt,twocolumn,letterpaper]{article}
\usepackage[ansinew]{inputenc}
\usepackage[portuges,brazil,english]{babel}
\usepackage{model}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{color}
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\usepackage{float}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{algpseudocode,algorithm}
\usepackage{listings}
\lstset{ %
  backgroundcolor=\color{white},   % choose the background color; you must add \usepackage{color} or \usepackage{xcolor}
  basicstyle=\footnotesize,        % the size of the fonts that are used for the code
  breakatwhitespace=false,         % sets if automatic breaks should only happen at whitespace
  breaklines=true,                 % sets automatic line breaking
  commentstyle=\color{gray},       % comment style
  deletekeywords={...},            % if you want to delete keywords from the given language
  escapeinside={\%*}{*)},          % if you want to add LaTeX within your code
  extendedchars=true,              % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8
  frame=single,                    % adds a frame around the code
  keepspaces=true,                 % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible)
  keywordstyle=\color{blue},       % keyword style
  language=Python,                 % the language of the code
  numbers=left,                    % where to put the line-numbers; possible values are (none, left, right)
  numbersep=5pt,                   % how far the line-numbers are from the code
  numberstyle=\tiny\color{gray},   % the style that is used for the line-numbers
  rulecolor=\color{black},         % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
  showspaces=false,                % show spaces everywhere adding particular underscores; it overrides 'showstringspaces'
  showstringspaces=false,          % underline spaces within strings only
  showtabs=false,                  % show tabs within strings adding particular underscores
  stepnumber=2,                    % the step between two line-numbers. If it's 1, each line will be numbered
  stringstyle=\color{red},         % string literal style
  tabsize=2,                       % sets default tabsize to 2 spaces
  title=\lstname                   % show the filename of files included with
}
\pagenumbering{gobble}
\cvprfinalcopy
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}
\ifcvprfinal\pagestyle{empty}\fi
\newcommand{\TODO}[1]{TODO: #1}
\newcommand{\CITEONE}[2]{\mbox{#1 \cite{#2}}}
\newcommand{\CITETWO}[3]{\mbox{#1 and #2 \cite{#3}}}
\newcommand{\CITEN}[2]{\mbox{#1 et al. \cite{#2}}}

%%% Report beginning %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}

%%% Title and authors %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{A Multi-Layer Modeling Approach to Music Genre Classification}
\author{Renan V. Novas
% \thanks{\textbf{Contact}:\tt\small{}}
\\ \\
Gabriel S. Vicente
\thanks{\textbf{Contact}: \tt\small{[email protected]}}
}
\maketitle

%%% Abstract %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
Listening to music and understanding its patterns and structure is a fairly easy task for human beings, even for listeners without formal musical education. However, building computational models to mimic those processes has proven a remarkably resilient problem. The algorithms and data associated with the Music Information Retrieval (MIR) field are both complex.
The scientific endeavour is motivated by a large class of challenging demands in the media business that require efficient and robust audio classification. Application scenarios include audio streaming services, such as Spotify and Pandora, automatic media monitoring, content-based search in multimedia databases, improvements to recommendation and filtering systems and, last but not least, purely artistic explorations.
\end{abstract}

%%% Introduction %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Motivation}
Musical genre is a notoriously subjective concept and its classification has been a standard problem in Music Information Retrieval (MIR) research. It is a very active and multidisciplinary research field, comprising musicology, psychology, signal processing, artificial intelligence and machine learning.

MIR applications can typically be divided into four groups.
\begin{itemize}
\item Instrument and musical structure recognition
\item Recommendation and taste prediction engines
\item Automatic music transcription and algorithmic composition
\item Automatic categorization
\end{itemize}
We are interested here in the last one. An extraordinary range of information is hidden inside music waveforms, ranging from perceptual to auditory, which inevitably makes large-scale applications challenging. There are a number of commercially successful online music services, such as Spotify, Pandora and Last.fm, but most of them are merely based on traditional text information retrieval.

Many different features can be used for music classification, such as reference features (title and composer), content-based acoustic features (tonality, pitch, and beat), and symbolic features extracted from the scores. Content-based music genre classification has been gaining importance and enjoying a growing amount of attention. Commonly used classifiers include Support Vector Machines (SVMs), Nearest-Neighbor (NN) classifiers, Gaussian Mixture Models and Linear Discriminant Analysis (LDA).

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}
A long line of work addresses problems in Music Information Retrieval. There is growing interest in the scientific community and a vast diversity of new content-based models \cite{conf/sigir/HuO12} \cite{Bou-rabee12classifyingthe}.

The content-based acoustic features are classified into timbral texture features, rhythmic content features and pitch content features. Timbral features mostly originate from traditional speech recognition techniques. They are usually calculated for every short-time frame of sound based on the Short Time Fourier Transform (STFT) and countless other audio descriptors, such as Mel-Frequency Cepstral Coefficients (MFCCs). More atypical descriptors, such as Daubechies Wavelet Coefficient Histograms (DWCH), have been used in experimentation \cite{Li:2003:CSC:860435.860487}.

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiments and Discussion}
We used several approaches to leverage different types of information about a song into a final classifier. First of all, we look at the dataset in more detail.

\subsection{Million Song Dataset}
The Million Song Dataset (MSD) \cite{Bertin-Mahieux2011} is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.
The project emerged as a collaborative enterprise between The Echo Nest (now a subsidiary of Spotify) and Columbia University's Laboratory for the Recognition and Organization of Speech and Audio (LabROSA) to provide a dataset collection for evaluating audio-related research and to encourage algorithms that scale to commercial sizes.

\begin{wrapfigure}{R}{0.10\textwidth}
\centering
\includegraphics[width=0.1\textwidth]{img/LogoMSD.jpg}
\includegraphics[width=0.1\textwidth]{img/LogoTheEchoNest.jpg}
\includegraphics[width=0.1\textwidth]{img/LogoLabROSA.jpg}
\end{wrapfigure}

The dataset itself does not include any audio signal, only derived features. It contains nearly $300$ GB of metadata and audio-descriptive data, available as an Amazon Public Dataset (AWS), or through a directly-downloadable subset consisting of $10,000$ songs selected at random for a quick look.

\begin{itemize}
\item \textbf{Million Song Dataset}
\begin{itemize}
\item 280 GB of data
\item 1,000,000 songs/files
\item 44,745 unique artists
\item 7,643 The Echo Nest tags
\item 2,321 MusicBrainz tags
\item 2,201,916 asymmetric similarity relationships
\item 515,576 dated tracks starting from 1922
\end{itemize}
\end{itemize}

Each audio track is associated with a unique Echo Nest song ID, with artist and song metadata, and with numerical fields taken directly from the Echo Nest Analyze API. It is indeed a very extensive and scalable dataset. However, the dataset involved in the present experiment is modest compared to the full MSD. Since the original collection does not explicitly provide genre information, we recreated a smaller dataset using additional databases. The idea is to use artist tags that describe typical artist-genre associations to assign the information to each track. We used MusicBrainz \cite{MusicBrainz} tags instead of hand-picking associations, as they were applied by humans and are usually very reliable. They also tend to be standardized, as MusicBrainz cares about consistency. We applied the routine to a subset, yielding $16,000$ different tracks, of which 90\% were selected at random for training.

\begin{table}[H]
\begin{center}
\caption{List of genre groups and number of sample tracks}
\begin{tabular}{|l|l|c|c|}
\hline
&\textbf{Genre} & Training & Test \\ \hline
A&dance and electronica & 2,880 & 320 \\
B&folk & 2,880 & 320 \\
C&jazz and blues & 2,880 & 320 \\
D&punk & 2,880 & 320 \\
E&soul and reggae & 2,880 & 320 \\ \hline
 & & 14,400 & 1,600 \\ \hline
\end{tabular}
\end{center}
\end{table}

Evidently, building such a simplified dataset introduces significant flaws. The main one is potential imbalance in the data; in our case, we addressed it by selecting the same number of songs for each group. A more subtle issue is the genre definition itself. Music genre is a notoriously subjective concept. Assigning a genre is far more complex than a trivial binary classification, and is often inconclusive and prone to ambiguity. We considered only artists that have been tagged consistently. One could argue about the label of a particular track, but the labels were still reasonable. This is a little extreme, but we wanted to avoid ambiguous artists, such as those that span more than one genre.

\subsection{Audio features}
As for audio features, timbre is represented as 12-dimensional vectors that are the principal components of Mel-frequency cepstral coefficients (MFCCs); they represent the power spectrum of sound, and are derived from Fourier analysis and further processing. MFCCs are very commonly used in speech recognition and music information retrieval systems.
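Since the MSD provides timbre at the segment level, each track's variable-length sequence of 12-dimensional timbre vectors has to be collapsed into a fixed-length summary before it can be fed to a classifier. The NumPy sketch below illustrates the kind of aggregation used here; the array and field names are illustrative rather than the actual MSD accessors, and the scalar track-level fields introduced below are simply prepended to the timbre statistics.

\begin{lstlisting}[language=Python]
import numpy as np

def track_feature_vector(segments_timbre, track_fields):
    """Build a fixed-length feature vector for one track.

    segments_timbre : (n_segments, 12) array of timbre vectors.
    track_fields    : sequence of scalar fields, e.g. loudness,
                      tempo, time signature, key, mode, duration.
    """
    timbre = np.asarray(segments_timbre, dtype=float)
    means = timbre.mean(axis=0)       # 12 timbre averages
    variances = timbre.var(axis=0)    # 12 timbre variances
    head = np.asarray(track_fields, dtype=float)
    return np.concatenate([head, means, variances])
\end{lstlisting}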
In addition to the timbre statistics, we use track-level audio features such as loudness and tempo, which capture high-level information about the audio. Tempo is defined as the number of beats per minute (BPM), and loudness is a real-valued number that describes the overall loudness of the song. We therefore use the 30 simplest audio features from The Echo Nest: loudness, tempo, time signature, key, mode, duration, and the 12 averages and 12 variances of the timbre vectors.

\subsection{Results and Analysis}
We first experimented with multi-class classification using 15 different support vector machines (SVMs) from the open-source Python implementation of the \texttt{Scikit-learn} project \cite{scikit-learn}. More detailed information is available in the documentation. We set the kernel parameter to \textcolor{red}{rbf}, C to \textcolor{red}{1} and gamma to \textcolor{red}{0}. The following code matrix served as input. \emph{One-vs-One} corresponds to models 1-10 and \emph{One-vs-Rest} to models 11-15. For the latter models, we multiplied C by weights inversely proportional to the class frequencies of each SVM, in order to perform some balancing.

\begin{table}[H]
\begin{center}
\caption{Code Matrix}
\begin{tabular}{|l|*5{c|}}
\hline
\textbf{SVM}& A & B & C & D & E \\ \hline
1& 1 & -1& 0 & 0 & 0 \\ \hline
2& 1 & 0 & -1& 0 & 0 \\ \hline
3& 1 & 0 & 0 & -1& 0 \\ \hline
4& 1 & 0 & 0 & 0 & -1 \\ \hline
5& 0 & 1 & -1& 0 & 0 \\ \hline
6& 0 & 1 & 0 & -1& 0 \\ \hline
7& 0 & 1 & 0 & 0 & -1 \\ \hline
8& 0 & 0 & 1 & -1& 0 \\ \hline
9& 0 & 0 & 1 & 0 & -1 \\ \hline
10& 0 & 0 & 0 & 1 & -1 \\ \hline
11& 1 & -1& -1& -1& -1 \\ \hline
12& -1& 1 & -1& -1& -1 \\ \hline
13& -1& -1& 1& -1& -1 \\ \hline
14& -1& -1& -1& 1& -1 \\ \hline
15& -1& -1& -1& -1& 1 \\ \hline
\end{tabular}
\end{center}
\end{table}

For each model, we used a random 80\% of the samples for training and the remaining 20\% for validation. All data (training, validation and test) were normalized using the mean values and standard deviations of the corresponding model.

\begin{table}[H]
\begin{center}
\caption{Scores}
\begin{tabular}{|l|*2{c|}}
\hline
 & \multicolumn{2}{c|}{Normalized Accuracies} \\ \hline
\textbf{SVM} & Training & Validation \\ \hline
1 & 92.5\% & 87.2\% \\
2 & 88.4\% & 83.5\% \\
3 & 91.2\% & 86.9\% \\
4 & 93.7\% & 90.7\% \\
5 & 89.8\% & 83.9\% \\
6 & 87.6\% & 83.2\% \\
7 & 90.6\% & 87.2\% \\
8 & 90.1\% & 86.3\% \\
9 & 93.5\% & 90.4\% \\
10 & 91.9\% & 86.9\% \\
11 & 86.7\% & 84.2\% \\
12 & 88.1\% & 84.7\% \\
13 & 87.7\% & 85.0\% \\
14 & 84.3\% & 80.9\% \\
15 & 91.8\% & 88.5\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\subsection{A Multi-Layer Approach}
Our first, naive attack gave us 15 different classifiers. We fed them our initial data frames and attached their predictions as additional features for a more complex 2-layer model. We devised an \emph{Extremely Randomized Trees} \cite{Geurts} approach, also implemented in the \texttt{Scikit-learn} package. We employed \emph{bagging} for constructing the forest and the \emph{out-of-bag} estimate for scoring the success rate. Furthermore, in order to optimize the forest construction parameters, we performed a grid search. The best setting was \textcolor{red}{90} trees, a maximum of \textcolor{red}{6} features and a minimum of \textcolor{red}{3} samples per leaf.
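The second layer can be reproduced in a few lines of \texttt{Scikit-learn}. The sketch below is only schematic: it runs on random stand-in data, and the grid values are illustrative rather than the exact settings reported above; the stacked design simply assumes the 15 SVM outputs have been appended to the original 30 features as extra columns.

\begin{lstlisting}[language=Python]
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in for the real data: 45 columns = 30 audio features plus the
# 15 SVM outputs appended as extra columns; labels are the 5 genres.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 45))
y_train = rng.integers(0, 5, size=1000)

param_grid = {
    "n_estimators": [50, 90, 150],
    "max_features": [4, 6, 8],
    "min_samples_leaf": [1, 3, 5],
}
forest = ExtraTreesClassifier(bootstrap=True, oob_score=True, random_state=0)
search = GridSearchCV(forest, param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print("out-of-bag score of best forest:", search.best_estimator_.oob_score_)
\end{lstlisting}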
\begin{table}[H]
\begin{center}
\caption{Confusion Matrix}
\begin{tabular}{|l|*5{c|r|}}
\hline
 & \multicolumn{5}{c|}{Predicted Genre} & \\ \hline
 & A & B & C & D & E & \\ \hline
A& 224& 21 & 24 & 17 & 34 & 320 \\ \hline
B& 12 & 224& 39 & 15 & 30 & 320 \\ \hline
C& 27 & 42 & 223& 9 & 19 & 320 \\ \hline
D& 24 & 15 & 7 &240 & 34 & 320 \\ \hline
E& 39 & 36 & 16 & 11 & 218& 320 \\ \hline
 & 326 &338 & 309 & 292 & 335 & \\ \hline
\end{tabular}
\end{center}
\end{table}

The average impact of the SVM-related features was approximately 0.8\%, with a maximum contribution of 7.46\%. In fact, the original 30 features remained the most relevant. A success rate of 86.99\% was attained in training and 76.9\% in the \emph{out-of-bag} estimate. In the test set, we successfully categorized 70.56\% of the tracks. Comparatively, more conventional random forest models yield 69\%--70\%.

\subsection{Alternative Solutions}
In our experiments, we also tested several alternative initial procedures. Using exclusively the SVM-related features, our best result was a single \emph{random forest} \cite{Breiman} with 150 trees and a maximum of 150 leaf nodes. The success rate reached 79.19\% in training, 77.15\% \emph{out-of-bag} and 69.13\% in test. \emph{One-vs-One} models contributed nearly 4\%, while \emph{One-vs-Rest} models were more important, holding 10\%.

Using the original 30 features, the best result was again a \emph{random forest}, with 450 trees, obtaining success on 70.79\% of the training cases, 68.64\% \emph{out-of-bag} and 68.64\% in test. Comparing the three models in order of appearance: \\

\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{img/ModelComparison.png}
\end{figure}

Exploring further, multi-class classifiers such as ECOC, \emph{One-vs-One} and \emph{One-vs-Rest} without proper tuning do not yield impressive results, with success rates ranging from 35\% to 40\% in test.

%%% Add section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions and Future Work}
In this project, we proposed a multi-layer modeling approach to music genre classification. The \emph{Extremely Randomized Trees} model showed the best results, using SVM classifiers for feature extraction. We achieved a 70.56\% success rate in test. Looking at the confusion matrix, it is evident that we avoided bias toward any particular genre.

Since genre tags were associated with artists in our experiment, musical compositions that differ considerably from an artist's usual characteristics and patterns might explain some of the confusion. In the future, it would be interesting to analyse cases in which the final classification is a more valid answer than our target value. Assuming, hypothetically, that the similarity matrices indicate stable results, it would be possible to infer that some tracks are wrongly classified because their artist cannot be exclusively assigned to a single genre. It would also be convenient to analyse genre overlap and ambiguity.

%%% References %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\small
\bibliographystyle{unsrt}
\bibliography{refs}
}

%\onecolumn
%\section*{Source Code}
%\lstinputlisting[caption=Filtering And Sampling,language=Python]
%{../../Codes/Usados/FilteringAndSampling.py}
%\lstinputlisting[caption=Auxiliary Functions,language=Python]
%{../../Codes/Usados/header.py}
%\lstinputlisting[caption= Classification,language=Python]
%{../../Codes/Usados/ecocSVM.py}
\end{document}
{ "alphanum_fraction": 0.7128093403, "avg_line_length": 46.7389033943, "ext": "tex", "hexsha": "c27522d5bc29c351e56e7f3052000da396d6b8bc", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d67ee8927847529aaf667e6b54c6952d9043a36f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "gabrielusvicente/MC886", "max_forks_repo_path": "report/report/ReportML.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d67ee8927847529aaf667e6b54c6952d9043a36f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "gabrielusvicente/MC886", "max_issues_repo_path": "report/report/ReportML.tex", "max_line_length": 146, "max_stars_count": null, "max_stars_repo_head_hexsha": "d67ee8927847529aaf667e6b54c6952d9043a36f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "gabrielusvicente/MC886", "max_stars_repo_path": "report/report/ReportML.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4891, "size": 17901 }
\section{Class options}
\label{sec:classoptions}

There are some class options available; an example of how to pass them follows the list:
\begin{itemize}
\item internaldraft: Includes notes, remarks, and todos. Additionally, the date, build number, and commit number are appended to each page, making review a little easier.
\item fulldocument: Includes front matter and end matter.
\item printversion: Removes colors from hyperlinks.
\item showcompilenumber: Adds the number of compiles to the bottom of internaldraft pages. The counter is increased using the provided Makefile. If you don't want to use make, you can disable the number here.
\item showcommitnumber: Adds the git commit to the bottom of internaldraft pages. The commit number is obtained using the provided Makefile. If you don't want to use make, you can disable the number here.
\end{itemize}

%%% Local Variables:
%%% ispell-local-dictionary: "en_US"
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: "dissertation"
%%% TeX-engine: xetex
%%% End:
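For instance, assuming the class file is named \texttt{dissertation} (as suggested by this template's \texttt{TeX-master} setting; adjust the name if yours differs), a reviewable draft with both counters enabled could be requested as:

\begin{verbatim}
\documentclass[internaldraft,showcompilenumber,showcommitnumber]{dissertation}
\end{verbatim}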
{ "alphanum_fraction": 0.7647679325, "avg_line_length": 47.4, "ext": "tex", "hexsha": "8d543d3959316bec8ca8c5079d6756f4c14bf843", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "33fb348ed42dedef66d848a1d2c4a723be9164a3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "simonreich/dissertation-template", "max_forks_repo_path": "pages/chapter2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33fb348ed42dedef66d848a1d2c4a723be9164a3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "simonreich/dissertation-template", "max_issues_repo_path": "pages/chapter2.tex", "max_line_length": 200, "max_stars_count": null, "max_stars_repo_head_hexsha": "33fb348ed42dedef66d848a1d2c4a723be9164a3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "simonreich/dissertation-template", "max_stars_repo_path": "pages/chapter2.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 236, "size": 948 }
\documentclass[a4paper]{article}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{float}
\usepackage[hidelinks]{hyperref}

%opening
\title{Deep Reinforcement Learning Beginners Tutorial \\Installation Guide}
\author{Julian Bernhart, Robin Guth}

\begin{document}

\maketitle

\tableofcontents

\section{Introduction}
Deep Reinforcement Learning (also called RL) is a huge step towards the creation of a universal artificial intelligence. In 2013 a company called ``DeepMind'', since acquired by Google, was able to create an astonishing implementation of RL, which was capable of playing retro games of the ``Atari 2600'' console. In many cases the AI\footnote{Artificial Intelligence} was not only able to play the games successfully, but also exceeded human performance significantly \cite{atari}.\\
As a fairly new topic, beginners often struggle to find a good starting point into the world of AI and specifically RL. Many tutorials are written for more advanced users, who already know the basics of machine learning. The ``Deep Reinforcement Learning Tutorial'' will provide an easy-to-follow, hands-on beginner's guide to RL. After completing it, we will be able to write our own algorithm that plays some basic games for us.\\
This installation guide will lead the reader through the setup process for the tutorial. It can also serve as a base for other machine learning projects, as we will install many standard utilities.
\paragraph{Prerequisites}
It is recommended that the reader has already acquired some basic knowledge about programming with Python and is familiar with basic deep learning concepts like un-/supervised learning, or at least with basic concepts of AI. It is also useful to be able to perform some basic Linux commands, as we will use them later to install some additional packages.
\paragraph{Outlook}
The user will get to know basic concepts of RL with the help of the following programs/utilities/libraries:
\begin{description}
\item[Python3] One of the most popular programming languages for deep learning. The following tutorials will be based on Python. See \url{https://www.python.org/} for more information.
\item[Pip] A package installer for Python, which will download and install packages for us from a repository.
\item[Jupyter Lab] Enables execution of Python code inside a document; used to teach theory and implementation.
\item[Google Cloud] Cloud-based processing power for training of different implementations of RL. Delivers a Linux environment with preinstalled utilities for deep learning, including different libraries for Python, Python itself and Jupyter Lab.
\item[Keras] A framework which delivers different premade algorithms.
\item[OpenAiGym] A framework to train and evaluate different AI-based algorithms.
\end{description}

\section{Installation}
\subsection{Google Cloud}
As reinforcement learning is pretty resource-intensive, Google Cloud will deliver the required performance in the cloud. Python itself, all required libraries and Jupyter Lab need to be available. Luckily, Google already provides a template for a Linux virtual machine, which we can use as a base for this tutorial. You will have to create a Google account to use the cloud.\\
It is also possible to follow this tutorial locally. You will have to prepare a Jupyter Lab environment with the help of pip or Anaconda and install all required libraries manually. Both are package installers for Python. Check this tutorial \url{https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html} for more information.
\subsubsection{Deep Learning VM} To get started in the most simple way, we will use the Google "Deep Learning VM" template to create our environment. It can be found at \url{https://cloud.google.com/deep-learning-vm} or alternatively you can search for "Deep Learning VM" in the Google Cloud marketplace, as shown in picture \ref{fig_marketplaceSearch}. \begin{figure}[H] \centerline{\includegraphics[width=\textwidth]{img/marketplaceSearch}} \caption{Searching for ''Deep Learning VM''} \label{fig_marketplaceSearch} \end{figure} \noindent After you choose the virtual machine, you will be provided with some additional information. Choose ''start in compute engine'' as shown in picture \ref{fig_additionalInfo}. \begin{figure}[H] \centerline{\includegraphics[width=\textwidth]{img/additionalInfo}} \caption{After Selecting ''Deep Learning VM''} \label{fig_additionalInfo} \end{figure} You will be presented with the final configuration screen as shown in picture \ref*{fig_finalConfig} . We need to configure the following options: \begin{itemize} \item Choose a name for your virtual machine \item Choose any number of CPU/GPU you want to use (1 CPU is enough) \item Choose TensorFlow 1.13 as framework \item Activate: ''Install NVIDIA GPU Driver automatically on first startup?'' \item Activate: ''Enable access to JupyterLab via URL instead of SSH.'' \item Choose a proper size for the boot disk (minimum of 30 GB is enough) \end{itemize} \begin{figure} \centerline{\includegraphics[scale=0.92]{img/finalConfig}} \caption{Final Configuration Screen} \label{fig_finalConfig} \end{figure} \paragraph{Starting the VM} The VM will start automatically after creating it. It can also be started manually by selecting it in the vm instances menu and pressing the start button, as shown in picture \ref{fig_startVM}. \begin{figure}[H] \centerline{\includegraphics[width=\textwidth]{img/startVM}} \caption{Starting the VM} \label{fig_startVM} \end{figure} \subsubsection{Installing Additional Software} After we successfully created our first virtual machine instance, we need to install some additional software on the virtual machine. This is possible through an administrator console, which can be opened by clicking the ''SSH'' button in the ''virtual machine instances'' sub-menu, as shown in picture \ref{fig_adminConsole}. The open console can be seen in picture \ref{fig_adminConsole2}.\\ \begin{figure}[H] \centerline{\includegraphics[width=\textwidth]{img/adminConsole}} \caption{The ''SSH'' button} \label{fig_adminConsole} \end{figure} \begin{figure}[H] \centerline{\includegraphics[width=\textwidth]{img/adminConsole2}} \caption{The administrator console} \label{fig_adminConsole2} \end{figure} We need to execute the following commands: \begin{itemize} \item \lstinline|sudo apt-get install python-opengl| \item \lstinline|sudo apt-get install xvfb| \end{itemize} This will install both python-opengl and xvfb with root rights on our server. We may need to allow the installation by entering ''y'', if the console requests it. As the Google Cloud server does not provide a display or Open-Gl graphics drivers, we need these programs to run OpenAiGym with visible training/evaluation.\\ After the installation we can finally use Jupyter Lab. \subsection{Jupyter Lab} We can connect to Jupyter Lab through the external address that is displayed in the vm instances menu and the port 8080 as shown in picture \ref{fig_startVM}. The address can be entered into a web browser as follows: \lstinline|<external-address>:8080|. 
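Before uploading the tutorial notebooks, it can be worthwhile to run a quick smoke test in a fresh notebook to confirm that the Python environment works. The snippet below is only a suggestion; it assumes the \texttt{gym} package is already available (if it is not, the tutorial notebooks will install it for you) and simply creates an environment and performs a few random steps without rendering:

\begin{lstlisting}[language=Python]
import gym

# Create a small test environment and take a few random actions.
env = gym.make("CartPole-v1")
observation = env.reset()
for _ in range(10):
    action = env.action_space.sample()          # random policy
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
print("Environment check passed")
\end{lstlisting}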
\subsubsection{Upload Notebooks}
After connecting, you will be presented with the main menu of Jupyter Lab as shown in picture \ref{fig_mainMenu}. We can now upload the supplied ``Deep Reinforcement Learning'' notebooks, either by using the upload button in the upper left corner or by dragging and dropping the files into the file browser. The notebooks can now be opened and used.
\begin{figure}[H]
\centerline{\includegraphics[width=\textwidth]{img/jupyterLab}}
\caption{Main Menu}
\label{fig_mainMenu}
\end{figure}

\section{Conclusion}
We have now successfully set up the Google Cloud VM and installed all dependencies that have to be set up manually. All other libraries will be downloaded automatically while using the notebooks. We can now continue with the first notebook.

\bibliography{bib}
\bibliographystyle{ieeetr}

\end{document}
{ "alphanum_fraction": 0.797063253, "avg_line_length": 75.8857142857, "ext": "tex", "hexsha": "ca030600689964b8867c04d70c53b803de633e6c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "54e226a53f1fbc7e43d1426b4279a5f9f27dc7a4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RobinGuth/DL_01", "max_forks_repo_path": "Installation/installation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "54e226a53f1fbc7e43d1426b4279a5f9f27dc7a4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RobinGuth/DL_01", "max_issues_repo_path": "Installation/installation.tex", "max_line_length": 477, "max_stars_count": null, "max_stars_repo_head_hexsha": "54e226a53f1fbc7e43d1426b4279a5f9f27dc7a4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RobinGuth/DL_01", "max_stars_repo_path": "Installation/installation.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1824, "size": 7968 }
\chapter{PERIPHERAL DEVICES} Before we discuss the features of MGED, we will introduce the hardware devices used to implement them. These devices are the ``tools of the trade'' for the MGED user. We will discuss only basic operational characteristics here. Specific use of these devices will be covered in the later sections on the viewing and editing features of MGED. \section{Joystick} The joystick is a mechanical device used to do rotations in MGED. Any movement left or right rotates the display about the X-axis. Any movement up or down rotates the display about the Y-axis. When the joystick top is twisted in a clockwise or counterclockwise direction, the display rotates about the Z-axis. Any combination motion of the joystick will produce a ``combined'' rotation about the appropriate axes. As implemented on the Vector General hardware, all of these motions have a spring return to a null center position. \section{Button Box} The button box contains a collection of buttons. On each button is a light that can be lit under program control. Pressing a button sends a ``press'' event to MGED, and results in an action occurring, or a condition being set. The exact functions assigned to these buttons will be discussed in the sections on viewing the display and on editing. \subsection{Vector General Buttons} \begin{figure} \centering \includegraphics{fig-vg-buttons.ps} \caption{Vector General Button Assignments} \label{vg-buttons} \end{figure} \begin{figure} \centering \includegraphics{fig-sgi-buttons.ps} \caption{Silicon Graphics Button Assignments} \label{sgi-buttons} \end{figure} % \gluein 4.5in by 4.5in, Vector General Button Assignments, vg-buttons. % XXX \gluein 4.5in by 4in, Megatek Dial and Button Box, mg-buttons. The Vector General has thirty-two buttons. Figure \ref{vg-buttons} depicts the functions programmed for each button. The buttons in the shaded area are used for editing while the rest are used for viewing the display. \subsection{Megatek Buttons} \begin{figure}[tbp] \begin{tabular}{rl} Button & Function\\ \\ 1 & View Mode: Restores View \\ & Edit Mode: Translation in the Object-Edit mode \\ 2 & View Mode: Saves View \\ & Edit Mode: Translation in the Object-Edit mode \\ 3 \\ & Edit Mode: Saves the model being displayed on the screen \\ 4 & Off: Viewing mode \\ & On: Edit mode \\ 5 & View Mode: Resets View \\ & Edit Mode: Scaling in the Object-Edit mode \\ 6 \\ & Edit Mode: Rotation in the Object-Edit mode \\ 7 & View Mode: Angle/Distance Cursor \\ & Edit Mode: Translation in the Object-Edit mode \\ 8 \\ & Edit Mode: Rejects display and returns to Viewing display \\ 9 & View Mode: Bottom View \\ & Edit Mode: Scaling in the Solid-Edit mode \\ 10 & View Mode: Left View \\ & Edit Mode: Rotation in the Solid-Edit mode \\ 11 & View Mode: Rear View \\ & Edit Mode: Translation in the Solid-Edit mode \\ 12 & View Mode: 90, 90 View \\ & Edit Mode: Restores Edit mode menu \\ 13 & View Mode: Top View \\ & Edit Mode: Transfers from Viewing to Solid Pick \\ 14 & View Mode: Right View \\ & Edit Mode: Transfers from Viewing to Object Pick \\ 15 & View Mode: Front View \\ \\ 16 & View Mode: 35/45 View \end{tabular} \caption{Megatek Buttons \label{mg-button-table} } \end{figure} The Megatek button box is a general purpose input/output device that communicates with MGED through an intelligent control unit. The device has eight rotatable knobs and 16 buttons with lights. % XXX See Figure \ref{mg-buttons}. The ``buttons'' and ``knobs'' of the Megateks are located in the same box. 
There are not enough buttons to have just one assigned meaning, hence most buttons have dual functions. To toggle the functions of the buttons, use the upper right button (toggle button). When the light on this button is ON, the functions listed on the RIGHT above each button is the current function. When the light on the ``toggle'' button is OFF, the functions labeled on the LEFT are then in effect. The left/right meaning of these buttons is grouped generally according to viewing functions on the left and editing functions on the right. Figure \ref{mg-button-table} summarizes the uses of the buttons. Depressing the button switches the light on and off. Many of these serve a dual role depending upon the selected mode - viewing or editing. The mode is selected by depressing button 4. If light 4 is off, the system is performing in the viewing mode, and the commands shown in the top half of the table are executed. If light 4 is on, the system is performing in the edit mode, and the commands shown in the bottom half are executed. \subsection{Silicon Graphics Buttons} The button box layout for the SGI Iris is given in Figure \ref{sgi-buttons}. Note that the ``right'' button shows you the right side of the model, as if you were looking in from the left. To achieve the customary draftsman views, this function goes on the left. The upper left button is the {\bf help} key. If this button is held down, and any other button (or knob) is activated, a descriptive string is displayed in the eight character LED display on the button box. The upper right button is used to reset all the knobs to zero. This is useful to halt a runaway rotation or zoom operation. %\begin{figure}[tbp] %{\tt \begin{verbatim} % |---------|---------|---------|---------| % | | | | Zero | % | Help | ADC | Reset | Knobs | %|---------|---------|---------|---------|---------|---------| %| Obj | Obj | Obj | Obj | | Save | %| Scale | ScaleX | ScaleY | ScaleZ | empty | View | %|---------|---------|---------|---------|---------|---------| %| Obj | Obj | Obj | Obj | | Restore | %| TransX | TransY | TransXY | Rotate | empty | View | %|---------|---------|---------|---------|---------|---------| %| Solid | Solid | Solid | Solid | Obj | Solid | %| Trans | Rot | Scale | Menu | Pick | Pick | %|---------|---------|---------|---------|---------|---------| %| REJECT | Bottom | Top | Rear | 90,90 | ACCEPT | %|---------|---------|---------|---------|---------|---------| % | Right | Front | Left | 35,25 | % |---------|---------|---------|---------| %\end{verbatim} } %\caption{Silicon Graphics Button Layout \label{XXsgi-buttons} } %\end{figure} \section{Knobs (Dials)} The knobs (or control dials) are used to send digital information to the computer. As a knob is turned, a succession of numbers are available for use by the computer. The knobs can be used to rotate a displayed object about the x, y, or z axis, translate the object along the x or y axis, and change the size of the view. Action performed by these knobs is continuous and is initiated by turning the knob in the proper direction and terminated by turning the knob in the opposite direction. \subsection{Vector General Knobs} \begin{figure} \centering \includegraphics{fig-vg-knobs.ps} \caption{Vector General Knob Assignments} \label{vg-knobs} \end{figure} \begin{figure} \centering \includegraphics{fig-sgi-knobs.ps} \caption{Silicon Graphics Knob Assignments} \label{sgi-knobs} \end{figure} % \gluein 4.5in by 3.5in, Vector General Knobs, vg-knobs. 
% \gluein 4.5in by 3.5in, Silicon Graphics Knobs, sgi-knobs.
Figure \ref{vg-knobs} depicts the functions assigned to each of the ten knobs.
The exact functions of each of these knobs will be discussed
in the angle distance cursor section and in the viewing features section.

\subsection{Megatek Knobs}

The ``buttons'' and ``knobs'' of the Megateks are located in the same box.
% XXX as shown in Figure \ref{mg-buttons}.
There are not enough knobs to have ONE assigned meaning, hence
three knobs have dual functions.
The second function of the first three knobs is in effect only when the
angle-distance cursor (ADC) is on the screen.

\subsection{Silicon Graphics Knobs}

Figure \ref{sgi-knobs} depicts the functions assigned to the eight knobs
on the Silicon Graphics knob box.
In normal operation, the left knobs provide rotations,
and the right knobs provide translations and zooming.
When the angle/distance cursor is activated, some of the knobs are redefined.

\section{Mouse or Data Tablet}

Moving the mouse or the data tablet ``pen'' causes a cursor on the screen to move.
The screen X-Y coordinates of the cursor can be sensed by MGED at any time.
Clicking one of the mouse buttons, or depressing the tip of the pen,
results in MGED receiving a special event notification.
The meaning of this mouse event depends on the current editing mode
and on the portion of the display faceplate in which the cursor is located.
Below is a list of some of the functions the mouse is used for in MGED:
\begin{itemize}
\item Selecting editing menus, edit functions (move faces, move edges, etc.)
and viewing functions (selected from the main edit menu);
move the pointer to the appropriate edit function and press the center mouse button.
\item Pointing functions; interactively positioning a solid primitive relative
to other solids, with the position or size update being displayed at the same time;
position the pointer where required and click the center mouse button.
\item Scaling of view size; enlarge or reduce for a more detailed view of an object.
The left button shrinks the view size, the right button enlarges the view size.
\item During the solid or object illuminate phase of editing, the screen is
divided into invisible horizontal sections.
The available selections are scanned by moving the mouse up and down.
\item When MGED is in the viewing state, and a mouse event is received which is
not in the menu area of the faceplate, the point at which the cursor is pointing
will be translated to become the center of the current view.
By pointing and clicking the center mouse button, the center of the viewing cube
can be moved to allow close-up viewing of different areas in your model.
\end{itemize}

\subsection{Vector General Data Tablet}

Position information is entered using a pen-like stylus.
The distance this pen is from the tablet is important.
If the pen tip is within one half inch of the tablet surface, the cursor location
on the screen corresponds to the X,Y location of the pen on the tablet.
This condition is called the ``near'' position.
If the pen is more than one half inch from the tablet surface,
the cursor remains located in the center of the screen.
When the pen is pressed against the tablet surface, the pressure switch
is activated and a ``mouse'' event is sent to MGED.

\subsection{Megatek Data Tablet}

Some Megatek systems enter position data on the data tablet using a pen-like stylus.
If the tip of the stylus is within one-half inch of the surface of the tablet,
a ``star'' corresponding to this location is displayed on the display screen.
If the tip is moved more than one-half inch from the surface,
the position of the star remains fixed.
When the stylus is pressed against the tablet surface, the pressure switch
is activated and a ``mouse'' event is sent to MGED.

Other Megatek data tablets have a mouse instead of a pen.
This mouse has four buttons on it.
The yellow (top) button is used during illumination and editing
just as the pen on the Vector General terminals.
However, in the viewing mode, when it is pushed, the point at which it was pointing
will be drawn at the center of the screen.
The blue (bottom) button has this same function at ALL times
and is used to ``slew'' the display during editing.
The white (left) and the green (right) buttons on the mouse are used for
zooming the display at a fixed rate.
The white button will zoom out and the green button will zoom in.

\subsection{Silicon Graphics Mouse}

The left and right mouse buttons are used for binary (2x) zooming,
and the center mouse button is used for all other MGED mouse functions.

On the Silicon Graphics 3-D workstations, MGED can be run directly,
or it can be run under the window manager MEX.
In both cases, MGED opens two windows, one outlined in white for all
text interaction, and one outlined in yellow for all graphics display.
When running MGED directly (without MEX), all mouse events are sent to MGED,
regardless of where the mouse is pointing.
In order to shift emphasis between the graphics and text windows, the smaller
one can be enlarged by pointing the cursor within the boundaries of the
smaller window, and pressing the center button.
This enlarges that window, and reduces the size of the other window.
When MEX is running, it is necessary to follow the MEX convention of moving
the cursor into the desired window, and clicking the right mouse button,
to ``attach'' all input to that window.
This has the unfortunate consequence of requiring a lot of extra mouse clicking,
because the graphics window needs to be attached when using the buttons, knobs,
and mouse, while the text window needs to be attached in order to enter
keyboard commands.
On the Silicon Graphics 4-D workstations running 4Sight, mouse events are sent
to MGED only when the cursor is within the boundaries of the MGED graphics window.

\subsection{Sun Workstation Mouse}

On the Sun workstation, MGED must be run in a {\bf suntools} window.
The main consequence of this is that mouse events are sent to MGED only when
the cursor is within the boundaries of the MGED graphics window on the screen.
The left and right mouse buttons are used for binary (2x) zooming,
and the center mouse button is used for all other MGED mouse functions.

\section{Keyboard}

The keyboard is used to issue commands and supply parameters to MGED.
It is also used to log in and out of the UNIX system, and to run other
UNIX programs.
All characters typed on the keyboard, with the exception of the user's
password, are displayed (echoed) on the monitor.
In this text, all input typed by the user is shown in {\em italics},
while all literal MGED output is shown in {\tt typewriter font}.
All entries are terminated by depressing the RETURN key.
This action immediately precedes the execution of the directive.
In most cases, lower case letters must be used.
A space must be used between the command and its arguments.
Embedded blanks are not allowed.
Entering Control/H causes the cursor to backspace and erase entered information.
An MGED command is interrupted by entering Control/C.
End-of-File is sent to MGED by entering Control/D.
The graphics editor displays the prompt \begin{verbatim} mged> \end{verbatim} on the display whenever it is ready to accept a command from the keyboard. \chapter{OPERATING INSTRUCTIONS} \section{Entering the Graphics Editor} Type {\em mged filename}, e.g.: \begin{verbatim} mged s_axle.g mged shaft.g mged fred.g \end{verbatim} where the filename is the name of the UNIX file in which your object description data is stored. It is conventional that the extension ``.g'' on the filename signifies a graphics file, and is a good practice, but is not required. If the named database does not already exist, MGED will ask if it should create a new database. MGED will ask: {\tt \begin{verse} \% {\em mged new.g} \\ BRL-CAD Release 3.0 Graphics Editor (MGED) \\ \ \ \ \ Tue Sep 6 02:52:55 EDT 1988 \\ \ \ \ \ mike@video:/cad/mged.4d \\ new.g: No such file or directory \\ Create new database (y|n)[n]? {\em y} \\ attach (nu|tek|tek4109|ps|plot|sgi)[nu]? {\em sgi} \\ ATTACHING sgi (SGI 4d) \\ Untitled MGED Database (units=mm) \\ mged> \end{verse} } Here, the {\em italic} type indicates the user's response: {\em y} instructs MGED to create the new database, and {\em sgi} instructs MGED to attach to a window on the Silicon Graphics (SGI) workstation. Directives to the graphics editor are made by \begin{enumerate} \item entering information from the keyboard, shown here in the text by the use of {\em italics}, \item using the stylus to select items from the menu (select), and \item pressing buttons and twisting knobs on the function control box (press, twist). \end{enumerate} The prompt for a command is {\tt mged>}. \subsection{Running MGED on a Silicon Graphics} When running MGED from the console of a Silicon Graphics workstation, MGED retains the text window from which it was started, and opens a second window for the graphics display. By default, the graphics window is quite large, and the text window is rather small. On the SGI 3-d workstations, should you wish to have a large text window to scan printed output, move the mouse pointer into the text window and click the center mouse button. Use the reverse procedure to regain a large graphics window, i.e., move the mouse pointer into the graphics window and click the center mouse button. \subsection{Running MGED on a Tektronix} To run MGED on the tek4014 class of terminals one needs to have TWO terminals - the graphics terminal (4014 or one which emulates a 4014) and another terminal to enter commands on. The procedure is as follows: \begin{enumerate} \item login on the graphics terminal \item enter {\em tty} to find out which terminal number has been assigned \item enter {\em sleep 32000} to put the graphics terminal in sleep mode \item login on the other terminal \item enter {\em mged file} to execute MGED \item enter {\em tek} to select the tek4014 device processor \item enter the tty value found in step 2. \item perform editing \item enter {\em q} to quit MGED \item enter control-c on the graphics terminal to end the sleep mode \item logout on both terminals \end{enumerate} Since there are no knobs or buttons on the tek4014 class of terminals, one is forced to use the {\em press} and {\em knob} commands to emulate these peripherals. Other commands which can/should be used are: \begin{tabular}{rl} ill & put up a desired path \\ center & slew the display \\ size & zoom the display \\ sed & solid edit immediately \end{tabular} The main force behind the writing of a driver for the tek4014 terminals was to allow the use of the Teletype 5620 terminals. 
These graphic terminals have an internal processor and different windows can be set up which represent different terminals. Hence two terminals are NOT necessary. The use of the Teletype 5620 terminals is then the same as the procedure outlined above, except each window represents a terminal. \section{The Pop-Up Button Menu} The default MGED faceplate is shown in Figure \ref{faceplate}. If the BUTTON MENU area on the screen is selected with the mouse, then the pop-up button menu appears, as shown in Figure \ref{buttonmenu}. This menu can be very useful in reducing the amount of hand motion between the mouse and the button box. \section{Starting Your Model} Modeling practices using MGED can be quite individual. The following is a suggested modeling method to start with; you may end up developing your own style as you become more familiar with MGED. First of all, decide how you want to represent your model, including the amount of detail, types of solids and regions necessary. Have an accurate sketch or engineering drawing available, so that you can easily transfer its information into the types of primitive solids necessary to create your model. Where possible it is recommended to start with a large block solid and ``subtract'' pieces from it. In this way you avoid errors with abutting faces of a collection of solids ``unioned'' together. Next the solids are created using the {\em make}, {\em cp}, {\em mirror} or {\em in} commands. Depending on the complexity of the model, the solids may be created in the desired location or created at the origin and later translated to the desired location. Creation at the origin provides an opportunity to take advantage of possible symmetries in the geometry. Once all the solids are finished it is time to create the region[s], which will describe (to MGED) how to combine the solids to represent the model. The region[s] are then given the desired item/air code (if this is necessary, otherwise leave it as the system default value), and material codes. The regions are then put onto a group, usually for functionality only. A group has no operations as such (like union [u], intersection [+] or difference [-]) and is just a collection of objects for convenient naming of a whole screen or collection of objects.
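To make the workflow concrete, a minimal session along these lines might look as follows. The names are arbitrary, and the exact command arguments and prompts can differ between MGED versions, so treat this only as an illustration and consult the command documentation for the syntax of your release:
\begin{verbatim}
mged demo.g
mged> in block.s rpp 0 100 0 50 0 50
mged> in hole.s rcc 50 25 0 0 0 50 10
mged> r body.r u block.s - hole.s
mged> g vehicle.g body.r
mged> q
\end{verbatim}
Here a box and a cylinder are created, a region is formed by subtracting the
cylinder from the box, and the region is placed in a group for convenient naming.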
{ "alphanum_fraction": 0.7387506101, "avg_line_length": 41.310483871, "ext": "tex", "hexsha": "f132c1a08ac7c49b71711fb450bac499e98e6d8c", "lang": "TeX", "max_forks_count": 54, "max_forks_repo_forks_event_max_datetime": "2022-03-28T23:20:37.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-10T07:57:06.000Z", "max_forks_repo_head_hexsha": "fb56f37c201b51241e8f3aa7b979436856f43b8c", "max_forks_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_forks_repo_name": "pombredanne/sf.net-brlcad", "max_forks_repo_path": "doc/mged/b.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "fb56f37c201b51241e8f3aa7b979436856f43b8c", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:31:33.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-24T17:07:48.000Z", "max_issues_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_issues_repo_name": "pombredanne/sf.net-brlcad", "max_issues_repo_path": "doc/mged/b.tex", "max_line_length": 78, "max_stars_count": 83, "max_stars_repo_head_hexsha": "34b72d3efd24ac2c84abbccf9452323231751cd1", "max_stars_repo_licenses": [ "BSD-4-Clause", "BSD-3-Clause" ], "max_stars_repo_name": "dservin/brlcad", "max_stars_repo_path": "doc/mged/b.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:33:46.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-10T05:54:52.000Z", "num_tokens": 4931, "size": 20490 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % fphw Assignment % LaTeX Template % Version 1.0 (27/04/2019) % % This template originates from: % https://www.LaTeXTemplates.com % % Authors: % Class by Felipe Portales-Oliva ([email protected]) with template % content and modifications by Vel ([email protected]) % % Template (this file) License: % CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass[ 12pt, % Default font size, values between 10pt-12pt are allowed %letterpaper, % Uncomment for US letter paper size %spanish, % Uncomment for Spanish ]{fphw} % Template-specific packages \usepackage[utf8]{inputenc} % Required for inputting international characters \usepackage[T1]{fontenc} % Output font encoding for international characters \usepackage{fontspec,unicode-math} % Required for using utf8 characters in math mode \usepackage{parskip} % To add extra space between paragraphs \usepackage{graphicx} % Required for including images \usepackage{booktabs} % Required for better horizontal rules in tables % \usepackage{listings} % Required for insertion of code \usepackage{enumerate}% To modify the enumerate environment \usepackage{import} % This 4 packages and the command allow importing pdf \usepackage{xifthen} % figures generated with inkscape \usepackage{pdfpages} % Source: https://castel.dev/post/lecture-notes-2/ \usepackage{transparent} \newcommand{\incfig}[1]{% \def\svgwidth{0.45\columnwidth} \small \import{./images/}{#1.pdf_tex} } \setlength{\parindent}{15pt} \setlength{\headheight}{22.66pt} %---------------------------------------------------------------------------------------- % ASSIGNMENT INFORMATION %---------------------------------------------------------------------------------------- \title{Task 4 \\ Shortest paths} % Assignment title \author{Emilio Domínguez Sánchez} % Student name \date{May 7th, 2021} % Due date \institute{University of Murcia \\ Faculty of Mathematics} % Institute or school name \class{Geometría Global de Superficies} % Course or class name \professor{Dr. Luis J. Alías Linares} % Professor or teacher in charge of the assignment %---------------------------------------------------------------------------------------- % Definitions %---------------------------------------------------------------------------------------- \setmathfont{latinmodern-math.otf} \setmathfont[range=\setminus]{asana-math.otf} \usepackage{physics} \usepackage{cleveref} \usepackage{amsthm} \newtheorem{lemma}{Lemma} \newcommand{\R}{\mathbb{R}} \newcommand{\clsr}[1]{\overline{#1}} \DeclareMathOperator{\len}{len} \begin{document} \maketitle % Output the assignment title, created automatically using the information in the custom commands above %---------------------------------------------------------------------------------------- % ASSIGNMENT CONTENT %---------------------------------------------------------------------------------------- \section*{Problem} \begin{problem} Let $S$ be a regular surface and $V$ a normal neighborhood centered in $p \in S$. Let $ε$ be small enough so that the closure of the geodesic disk $D(p, ε)$ is contained in $V$, $\clsr{D}(p, ε) \subset V$. Let $q \in S \setminus D(p, ε)$ and consider the function $r(x) := d(x, q)$ that measures the distance along $S$ from $x$ to $q$. 
Let $m$ be a point where the function $r$ attains its minimum over the compact set $S(p, ε)$, that is, $r(m) = \min_{x \in S(p, ε)} r(x)$.
Then
\begin{equation*}
	d(p, q) = d(p, m) + d(m, q) = ε + d(m, q).
\end{equation*}
\end{problem}

%----------------------------------------------------------------------------------------

The length minimizing property of the geodesics tells us that
if $D(p,ρ)$ is contained in a normal neighborhood of $p$,
then the geodesic that joins $p$ with $x \in D(p,ρ)$ is a shortest path.
Because $\clsr{D}(p,ε) \subset V$, we can find $ρ > ε$ such that $D(p,ρ) \subset V$.
Applying the length minimizing property to $S(p,ε) \subset D(p,ρ)$,
we conclude that $d(p,x) = ε$ for all $x \in S(p,ε)$.

Once we prove that, the rest of the exercise involves reasoning
common to any distance $d(p,q)$ defined as
the minimum length of the paths joining $p$ and $q$\footnote{
	For some notions of length and path that follow the common sense
	of what is a path and what is a length.
}.
Here I have chosen to break the proof into small results.

\begin{lemma}
	Fix a path $α : [0, l] \to S$ from $p$ to $q$.
	Given that $p$ belongs to $D(p,ε)$, but $q$ does not,
	$α$ needs to traverse the boundary of $D(p,ε)$.
	That is, $α([0, l]) \cap S(p,ε) \ne \emptyset$.
\end{lemma}

\begin{proof}
	Let $U_1 := α^{-1}(D(p,ε))$ and $U_2 := α^{-1}(S \setminus \clsr{D}(p,ε))$.
	Both are the preimage of an open set and are therefore open.
	If the intersection $α([0,l]) \cap S(p,ε)$ were empty,
	then $U_1 \cup U_2$ would be a separation of $[0,l]$
	into two disjoint open sets, which is absurd because $[0,l]$ is connected.
\end{proof}

\begin{lemma}
	If every path from $p$ to $q$ needs to traverse a set $I$, then
	\begin{equation}\label{eq:in}
		d(p,q) \ge d(p,I) + d(I,q).
	\end{equation}
\end{lemma}

\begin{proof}
	Let $α : [0,l] \to S$ be any path from $p$ to $q$
	and let $t$ be such that $α(t) \in I$,
	which exists because $α$ needs to traverse $I$.
	Then $\len_0^l(α) = \len_0^{t}(α) + \len_{t}^l(α) \ge d(p, I) + d(I, q)$.
	Taking the infimum over all such paths $α$ gives the inequality.
\end{proof}

\begin{figure}[hbt]
	\centering
	\hspace{0.15\columnwidth}
	\incfig{distances}
	\caption{An example of a set that separates two points and where the inequality \eqref{eq:in} is strict.}
	\label{fig:in}
\end{figure}

Notice that the inequality \eqref{eq:in} may be strict (\cref{fig:in}),
because the points $x \in I$ for which $d(p, x) = d(p, I)$, if they exist,
can be different from those where $d(x, q) = d(I, q)$.
In our case though, $d(p, x) = ε$ for all $x \in S(p,ε)$.
We conclude the proof by writing
\begin{equation*}
	d(p, m) + d(m, q) = ε + d(m, q) = d(p, S(p,ε)) + d(S(p,ε), q) \le d(p, q).
\end{equation*}
The reverse inequality, $d(p, q) \le d(p, m) + d(m, q)$, is just the triangle
inequality for the intrinsic distance, so equality holds and
$d(p, q) = ε + d(m, q)$.

%----------------------------------------------------------------------------------------

\end{document}
{ "alphanum_fraction": 0.5767686129, "avg_line_length": 37.4219653179, "ext": "tex", "hexsha": "ee924a78a3144329953dc77fe4bd6ae1036128ba", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_forks_repo_path": "shortest-paths.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "useredsa/introductory-exercises-of-differential-geometry", "max_issues_repo_path": "shortest-paths.tex", "max_line_length": 114, "max_stars_count": 1, "max_stars_repo_head_hexsha": "19b17a0a4c729e3a99f51ea285ae1539352c742b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "useredsa/exercises-surfaces-geometry", "max_stars_repo_path": "shortest-paths.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-25T03:04:15.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-25T03:04:15.000Z", "num_tokens": 1815, "size": 6474 }
\documentclass[UTF8,12pt]{article} % 12pt 为字号大小 \usepackage{amssymb,amsfonts,amsthm} %\usepackage{fontspec,xltxtra,xunicode} %\usepackage{times} \usepackage{amsmath,bm} \usepackage{mdwlist} \usepackage[colorlinks,linkcolor=blue]{hyperref} \usepackage{cleveref} \usepackage{float} \usepackage{enumerate} \usepackage{extarrows} %\numberwithin{equation}{section} %---------- % 定义中文环境 %---------- \usepackage{xeCJK} \setCJKmainfont[BoldFont={Heiti SC Light},ItalicFont={Kaiti SC Regular}]{Songti SC Regular} \setCJKsansfont{Heiti SC Light} \setCJKfamilyfont{song}{Songti SC Regular} \setCJKfamilyfont{zhhei}{Heiti SC Light} \setCJKfamilyfont{zhkai}{Kaiti SC Regular} \setCJKfamilyfont{zhfs}{STFangsong} \setCJKfamilyfont{zhli}{Libian SC Regular} \setCJKfamilyfont{zhyou}{Yuanti SC Regular} \newcommand*{\songti}{\CJKfamily{zhsong}} % 宋体 \newcommand*{\heiti}{\CJKfamily{zhhei}} % 黑体 \newcommand*{\kaiti}{\CJKfamily{zhkai}} % 楷体 \newcommand*{\fangsong}{\CJKfamily{zhfs}} % 仿宋 \newcommand*{\lishu}{\CJKfamily{zhli}} % 隶书 \newcommand*{\yuanti}{\CJKfamily{zhyou}} % 圆体 %---------- % 版面设置 %---------- %首段缩进 \usepackage{indentfirst} \setlength{\parindent}{2em} %行距 \renewcommand{\baselinestretch}{1.2} % 1.2倍行距 %页边距 \usepackage[a4paper]{geometry} \geometry{verbose, tmargin=2cm,% 上边距 bmargin=2cm,% 下边距 lmargin=2.5cm,% 左边距 rmargin=2.5cm % 右边距 } %---------- % 其他宏包 %---------- %图形相关 \usepackage[x11names]{xcolor} % must before tikz, x11names defines RoyalBlue3 \usepackage{graphicx} \graphicspath{{figures/}} \usepackage{pstricks,pst-plot,pst-eps} \usepackage{subfig} \def\pgfsysdriver{pgfsys-dvipdfmx.def} % put before tikz \usepackage{tikz} %原文照排 \usepackage{verbatim} %网址 \usepackage{url} %---------- % 定理、习题与解答环境 %---------- %定理环境 \usepackage[most]{tcolorbox} \newtcbtheorem[number within=section]{theorem}{Theorem}{ enhanced, breakable, sharp corners, attach boxed title to top left={ yshifttext=-1mm }, colback=blue!4!white, colframe=blue!75!black, fonttitle=\bfseries, boxed title style={ sharp corners, size=small, colback=blue!75!black, colframe=blue!75!black, } }{theorem} \newtcbtheorem[number within=section]{definition}{Definition}{ enhanced, breakable, sharp corners, attach boxed title to top left={ yshifttext=-1mm }, colback=blue!4!white, colframe=blue!75!black, fonttitle=\bfseries, boxed title style={ sharp corners, size=small, colback=blue!75!black, colframe=blue!75!black, } }{definition} \newtcbtheorem[number within=section]{corollary}{Corollary}{ enhanced, breakable, sharp corners, attach boxed title to top left={ yshifttext=-1mm }, colback=blue!4!white, colframe=blue!75!black, fonttitle=\bfseries, boxed title style={ sharp corners, size=small, colback=blue!75!black, colframe=blue!75!black, } }{corollary} \newtcbtheorem[number within=section]{myboxes}{Box}{ enhanced, breakable, sharp corners, attach boxed title to top left={ yshifttext=-1mm }, %colback=white, colframe=black!75!white, fonttitle=\bfseries, boxed title style={ sharp corners, size=small, colback=black!75!white, colframe=black!75!white, } }{myboxes} %习题环境 \newtcbtheorem[number within=section]{exercise}{Problem}{ enhanced, breakable, sharp corners, attach boxed title to top left={ yshifttext=-1mm }, colback=white, colframe=black, fonttitle=\bfseries, boxed title style={ sharp corners, size=small, colback=black, colframe=black, } }{Problem} %解答环境 \ifx\proof\undefined\ \newenvironment{proof}[1][\protect\proofname]{\par \normalfont\topsep6\p@\@plus6\p@\relax \trivlist \itemindent\parindent \item[\hskip\labelsep \scshape #1]\ignorespaces }{% 
\endtrivlist\@endpefalse
}
\fi
\renewcommand{\proofname}{\it{Solution}}

%==========
% Main body
%==========
\begin{document}

\title{Chapter 2: Quantum Dynamics}
\author{Yuquan Chen}
\date{2019/03/19} % If the automatically inserted date is not needed, remove the preceding comment; a custom date format can also be given inside { }
\maketitle

\section{Time evolution operator and Hamiltonian}
Here we consider the non-relativistic situation, and we treat time as a parameter, not an operator. In the position representation $\langle x|\psi\rangle = \psi(x)$; adding the time evolution, we have
$$\langle x|\psi(t)\rangle = \psi(x,t)$$
First, we state the 6 postulates of quantum mechanics.\\
\begin{myboxes}{Postulates of Quantum Mechanics}{}
\textbf{Postulate 1.} At any time $t$, the state of a physical system is defined by a ket $|\psi\rangle$, or \textit{state}, in a relevant Hilbert space $H$.\\\par
\textbf{Postulate 2.} The only possible result of measuring an observable $A$ is one of the eigenvalues of $A$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=7cm]{post2}
%\caption{}
%\label{}
\end{center}
\end{figure}
Aside:
\begin{enumerate*}
\item If $A$ is Hermitian, then the measurement gives a real number.
\item If $A$'s spectrum is discrete, then we only see quantized results.
\end{enumerate*}
\textbf{Postulate 3.} Every measurable physical quantity $A$ is described by a Hermitian operator.\\\par
\textbf{Postulate 4.} If $A|u_{\alpha}\rangle = a_{\alpha}|u_{\alpha}\rangle$, then for a system in $|\psi\rangle$, when we measure $A$, the probability of getting $a_{\alpha}$ is $P(a_{\alpha}) = |\langle u_{\alpha}|\psi\rangle|^{2}$.\\
Aside: If we have degenerate $a_{\alpha}$'s, i.e. $\{|u_{\alpha,1}\rangle, |u_{\alpha,2}\rangle, ...\}$ share the same eigenvalue, then $P(a_{\alpha}) = \sum_{i} |\langle u_{\alpha,i}|\psi\rangle|^{2}$\\
Example: $A = I$, all $a_{\alpha} = 1$\\\par
\textbf{Postulate 5.} If a measurement projects $|\psi\rangle$ into a new state $|u_{\alpha}\rangle$, then the physical new state should be $|u_{\alpha}'\rangle = \frac{|u_{\alpha}\rangle}{\sqrt{\langle u_{\alpha}|u_{\alpha}\rangle}}$, so that $\langle u_{\alpha}'|u_{\alpha}'\rangle = 1$.\\\par
\textbf{Postulate 6.} Between measurements the state vector $|\psi(t)\rangle$ evolves in time according to the time-dependent Schr\"{o}dinger equation
$$i\hbar \frac{d}{dt}|\psi(t)\rangle = \hat{H}(t)|\psi(t)\rangle$$
where $\hat{H}$ is the Hamiltonian.
\end{myboxes}

We apply a time displacement $dt'$ to the state $|\psi(t)\rangle$,
\begin{align}\label{dt}
\Rightarrow U(dt')|\psi(t)\rangle = |\psi(t+dt')\rangle,~\text{where } UU^{\dag} = 1
\end{align}
This is similar to the momentum (translation) case; here we have
$$\begin{cases}U(dt') = I - i\frac{\hat{H}}{\hbar}dt'\\\hat{H} \text{ is Hermitian, called the Hamiltonian}\end{cases}$$
so (\ref{dt}) can be evaluated as:
\begin{align}
\text{LHS} &= \left(I - i\frac{\hat{H}}{\hbar}dt'\right) \psi(x,t) = \psi(x,t) - i\frac{\hat{H}}{\hbar}dt'\psi(x,t) \\
\text{RHS} &= \psi(x,t+dt') = \psi(x,t) + \left(\frac{\partial}{\partial t}\psi(x,t)\right)dt'
\end{align}
\begin{align}
\Rightarrow \boxed{i\hbar\frac{\partial}{\partial t} \psi(x,t) = H\psi(x,t)}
\end{align}
which is Schr\"{o}dinger's equation in the position representation.
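As a quick numerical aside (not part of the original notes), the first-order expansion $U(dt') = I - i\frac{\hat{H}}{\hbar}dt'$ can be checked against the exact free-particle evolution on a grid. The following Python/NumPy sketch (all variable names are ours; we set $\hbar=m=1$) applies the exact momentum-space phase for a small step $dt$ and compares it with the first-order formula; the difference is of order $dt^2$.
\begin{verbatim}
import numpy as np

hbar = m = 1.0
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
dx = x[1] - x[0]
p = 2.0 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)  # momentum grid

# Gaussian wave packet with mean momentum p0, normalised on the grid
p0 = 2.0
psi = np.exp(-x**2) * np.exp(1j * p0 * x / hbar)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt = 1e-4
psi_p = np.fft.fft(psi)
# exact free evolution: multiply by exp(-i p^2 dt / (2 m hbar)) in p-space
psi_exact = np.fft.ifft(np.exp(-1j * p**2 * dt / (2*m*hbar)) * psi_p)
# first-order evolution: (I - i H dt / hbar) psi, with H = p^2 / (2m)
H_psi = np.fft.ifft(p**2 / (2*m) * psi_p)
psi_first = psi - 1j * dt / hbar * H_psi

print(np.max(np.abs(psi_exact - psi_first)))  # ~O(dt^2), essentially zero
\end{verbatim}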
In general, we have
\begin{align}\label{shordinger}
\boxed{i\hbar\frac{\partial}{\partial t} |\psi(t)\rangle = H|\psi(t)\rangle}
\end{align}
$H$: the Hamiltonian, in analogy to classical mechanics,
\begin{align}
H = T + V,~ \begin{cases}T = \frac{p^{2}}{2m} \text{ is kinetic energy} \\ V \text{ is potential energy}\end{cases}
\end{align}
and in quantum mechanics, we have
\begin{align}
\hat{H} = \hat{T} + \hat{V} = \frac{\hat{p}^{2}}{2m} + \hat{V}(x)
\end{align}
Here are some examples of Hamiltonians in different systems.
\begin{myboxes}{Examples of Hamiltonians in different systems}{}
\begin{enumerate*}
\item A free particle ($V=0$)
$$\hat{H} = \frac{\hat{p}^{2}}{2m}$$
\item Hydrogen atom
$$\hat{H} = \frac{\hat{p}_{e}^{2}}{2m_{e}} + \frac{\hat{p}_{n}^{2}}{2m_{n}} - \frac{e^{2}}{4\pi\varepsilon_{0}|\vec{r}_{e}-\vec{r}_{n}|}$$
\item A particle with magnetic moment $\vec{\mu}$ in an external magnetic field $\vec{B}$
$$\hat{H} = -\vec{\mu}\cdot\vec{B}$$
\end{enumerate*}
\end{myboxes}

When $\hat{H} = \frac{\hat{p}^{2}}{2m}$, it is convenient to work in the momentum representation, with $\{|p\rangle\}$ as our basis. Applying $\langle p|$ on the left of the equation $H|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle$, we get
\begin{align}
&\text{LHS} = \left(\langle p|\frac{\hat{p}^{2}}{2m}\right)|\psi(t)\rangle = \frac{p^{2}}{2m} \langle p|\psi(t)\rangle = \frac{p^{2}}{2m} \psi(p,t) \\
&\text{RHS} = i\hbar\langle p|\frac{\partial}{\partial t}|\psi(t)\rangle \xlongequal{\frac{\partial}{\partial t}\langle p| = 0} i\hbar\frac{\partial}{\partial t}\langle p|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}\psi(p,t) \\
&\Rightarrow \frac{p^{2}}{2m}\psi(p,t) = i\hbar\frac{\partial}{\partial t}\psi(p,t) \\
&\Rightarrow \psi(p,t) = \psi(p,0) e^{-i\frac{p^{2}t}{2m\hbar}}
\end{align}
If we let
\begin{align}
\psi(p,0) = \frac{1}{\sqrt{2\pi\hbar}} e^{-i\frac{px}{\hbar}} = \langle p|x\rangle
\end{align}
then in this case,
\begin{align}
\boxed{\psi(p,t) = \frac{1}{\sqrt{2\pi\hbar}} e^{-i\frac{px}{\hbar}} e^{-i\frac{p^{2}t}{2m\hbar}} = \frac{1}{\sqrt{2\pi\hbar}} e^{-i\frac{p}{\hbar}\left(x + \frac{pt}{2m}\right)}}
\end{align}
so we have
\begin{align}
\psi(p,0) &= \langle p|x\rangle \text{, the momentum representation of } |x\rangle \\
\psi(p,t) &= \langle p|x + \frac{pt}{2m}\rangle \text{; if we set } v=\frac{p}{m} \text{, then } x+\frac{pt}{2m} = x+\frac{vt}{2}
\end{align}
\begin{myboxes}{Comment}{}
We should observe the structure of $H$ and choose the right representation. We have
$$\psi(x,0) \xrightarrow[\text{rewrite in p-repres}]{\text{Fourier transform}} \psi(p,0)$$
$$\psi(p,t) \xlongequal{H = \frac{p^{2}}{2m}} \psi(p,0) e^{-i\frac{p^{2}t}{2m\hbar}}$$
If we need $\psi(x,t)$, we can obtain it from $\psi(p,t)$ by another (inverse) Fourier transform.
\end{myboxes}

\section{Static Schr\"{o}dinger's equation}
Recall equation (\ref{shordinger}), that
$$\hat{H}|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle$$
In the position representation,
\begin{align}
\langle x|\hat{H}|\psi(t)\rangle &= i\hbar\frac{\partial}{\partial t}\langle x|\psi(t)\rangle \\
\Rightarrow H\psi(x,t) &= i\hbar\frac{\partial}{\partial t}\psi(x,t)
\end{align}
where $H = \frac{\hat{p}^{2}}{2m} + V(x),~ \hat{p} \leftrightarrow -i\hbar\frac{\partial}{\partial x}$,
\begin{align}\label{sex}
\Rightarrow \boxed{-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}\psi(x,t) + V(x)\psi(x,t) = i\hbar\frac{\partial}{\partial t}\psi(x,t)}
\end{align}
\begin{myboxes}{A common mistake}{}
We know that in the $x$ representation, $\hat{p} \leftrightarrow -i\hbar\frac{\partial}{\partial x}$, but
$$\langle x|\hat{p}^{2}|\psi\rangle \ne -\hbar^{2}\left(\frac{\partial}{\partial x} \psi(x)\right)^{2}$$
instead,
$$\boxed{\langle x|\hat{p}^{2}|\psi\rangle = -\hbar^{2}\frac{\partial^{2}}{\partial x^{2}}\psi(x)}$$
because we have
$$\langle x|\hat{p}|\psi\rangle = -i\hbar\frac{\partial}{\partial x}\langle x|\psi\rangle = -i\hbar\frac{\partial}{\partial x}\psi(x)$$
then
\begin{align*}
\langle x|\hat{p}^{2}|\psi\rangle \xlongequal{|\phi\rangle = \hat{p}|\psi\rangle}& \langle x|\hat{p}|\phi\rangle = -i\hbar\frac{\partial}{\partial x}\langle x|\phi\rangle = (-i\hbar)\frac{\partial}{\partial x}\langle x|\hat{p}|\psi\rangle \\
=& (-i\hbar)\frac{\partial}{\partial x}\left(-i\hbar\frac{\partial}{\partial x}\langle x|\psi\rangle\right) = -\hbar^{2}\frac{\partial}{\partial x}\left(\frac{\partial}{\partial x}\psi(x)\right) \\
=& -\hbar^{2}\frac{\partial^{2}}{\partial x^{2}}\psi(x)
\end{align*}
\end{myboxes}
To solve equation (\ref{sex}), it is best to separate the variables. Suppose we set
\begin{align}
\psi(x,t) = \Psi(x)\phi(t)
\end{align}
Since $\hat{H}$ is Hermitian, its eigenvalue equation is
\begin{align}
H|\psi_{E}\rangle = E|\psi_{E}\rangle,~\begin{cases}E \text{ is the eigen energy}\\|\psi_{E}\rangle \text{ is the eigenstate}\end{cases}
\end{align}
Taking the inner product with the eigenstate $\langle\psi_{E}|$, we have
\begin{align}
H|\psi(t)\rangle &= i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle \notag\\
\Rightarrow \langle\psi_{E}|H|\psi(t)\rangle &= i\hbar\frac{\partial}{\partial t}\langle\psi_{E}|\psi(t)\rangle \\
\Rightarrow E\langle\psi_{E}|\psi(t)\rangle &= i\hbar\frac{\partial}{\partial t}\langle\psi_{E}|\psi(t)\rangle \\
\xLongrightarrow{\xi(t) = \langle\psi_{E}|\psi(t)\rangle} E\xi(t) &= i\hbar\frac{\partial}{\partial t}\xi(t) \\
\Rightarrow \xi(t) &= e^{-i\frac{Et}{\hbar}}\xi(0)
\end{align}
\begin{theorem}{}{}
The inner product of a state $|\psi(t)\rangle$ with an eigenstate $|\psi_{E}\rangle$ acquires a phase $e^{-i\frac{Et}{\hbar}}$ over time.
\end{theorem}
The probability of obtaining $E$ in an energy measurement after a time $t$ of evolution is the same as at any other time.
\begin{align*}
P_{E}(t) = \left|\langle\psi_{E}|\psi(t)\rangle\right|^{2} = |e^{-i\frac{Et}{\hbar}}\langle \psi_{E}|\psi(0)\rangle|^{2} = |\langle \psi_{E}|\psi(0)\rangle|^{2} = P_{E}(t=0)
\end{align*}
\begin{corollary}{}{}
In the basis of energy eigenstates $\{|\psi_{E}^{\left(i\right)}\rangle\}$,
\begin{align}
|\psi(0)\rangle &\xlongequal{\text{discrete}} \sum_{i} c_{i}|\psi_{E}^{\left(i\right)}\rangle \\
|\psi(t)\rangle &\xlongequal{H \ne H(t)} \sum_{j} c_{j} e^{-i\frac{E_{j}t}{\hbar}}|\psi_{E}^{\left(j\right)}\rangle
\end{align}
\end{corollary}

\end{document}
{ "alphanum_fraction": 0.6554458408, "avg_line_length": 36.826446281, "ext": "tex", "hexsha": "5be5d0fd6ebc5e807e481da39c9d6da06472e56a", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-08T14:25:18.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-08T14:25:18.000Z", "max_forks_repo_head_hexsha": "6bcc76280f36fab7bfd7aee0f98bbf92145d0927", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "YQChen-QI/Quantum-Mechanics", "max_forks_repo_path": "Lecture notes(me)/Chapter 2/0319.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6bcc76280f36fab7bfd7aee0f98bbf92145d0927", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "YQChen-QI/Quantum-Mechanics", "max_issues_repo_path": "Lecture notes(me)/Chapter 2/0319.tex", "max_line_length": 293, "max_stars_count": 7, "max_stars_repo_head_hexsha": "6bcc76280f36fab7bfd7aee0f98bbf92145d0927", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "YQChen-QI/Quantum-Mechanics", "max_stars_repo_path": "Lecture notes(me)/Chapter 2/0319.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-10T09:50:55.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-03T05:08:46.000Z", "num_tokens": 4852, "size": 13368 }
%%
%% Beginning of file 'sample62.tex'
%%
%% Modified 2018 January
%%
%% This is a sample manuscript marked up using the
%% AASTeX v6.2 LaTeX 2e macros.
%%
%% AASTeX is now based on Alexey Vikhlinin's emulateapj.cls
%% (Copyright 2000-2015). See the classfile for details.
%% AASTeX requires revtex4-1.cls (http://publish.aps.org/revtex4/) and
%% other external packages (latexsym, graphicx, amssymb, longtable, and epsf).
%% All of these external packages should already be present in the modern TeX
%% distributions. If not they can also be obtained at www.ctan.org.
%% The first piece of markup in an AASTeX v6.x document is the \documentclass
%% command. LaTeX will ignore any data that comes before this command. The
%% documentclass can take an optional argument to modify the output style.
%% The command below calls the preprint style which will produce a tightly
%% typeset, one-column, single-spaced document. It is the default and thus
%% does not need to be explicitly stated.
%%
%%
%% using aastex version 6.2
\documentclass[twocolumn]{aastex62}

\usepackage{amsmath, amssymb, mathtools, bm}
\usepackage{color}
\usepackage[cache=false]{minted}

\newcommand{\ktwo}{{\it K$\mathit{2}$}}
\newcommand{\tess}{{\it TESS}}
\newcommand{\kepler}{{\it Kepler}}
\newcommand{\lightkurve}{\texttt{lightkurve}}
\newcommand{\LightCurve}{\texttt{LightCurve}}
\newcommand{\KeplerLightCurve}{\texttt{KeplerLightCurve}}
\newcommand{\numpy}{\texttt{numpy}}
\newcommand{\clh}[1]{\textcolor{red}{ \textbf{***CLH: #1 ***}}}
\newcommand{\ze}[1]{\textcolor{blue}{ \textbf{***ZE: #1 ***}}}
\newcommand{\gb}[1]{\textcolor{green}{ \textbf{***GB: #1 ***}}}
\DeclareMathOperator*{\argmin}{arg\,min}

\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}

\received{January 1, 2018}
\revised{January 7, 2018}
\accepted{\today}
\submitjournal{ApJ}

\begin{document}

\title{\lightkurve: an open source Python package for NASA's \kepler, \ktwo~ and \tess~ data analysis}

\correspondingauthor{Geert Barentsen}
\email{[email protected]}

\author{\lightkurve~contributors}
\affil{The galactic startup}

\author{Jos\'e Vin\'icius de Miranda Cardoso}
\affiliation{Bay Area Environmental Research Institute \\ Petaluma \\ California, USA}

\collaboration{(AAS Journals Data Scientists collaboration)}

\begin{abstract}
The \kepler~ and \ktwo~ missions have provided the astronomy community with light curves of more than 400,000 objects.
These ready-made light curves have allowed the community to quickly investigate targets and find exoplanet candidates.
However, light curves processed by the Kepler pipeline have built-in, fixed assumptions, such as aperture choice and background correction methods.
These assumptions are valid for the majority of targets, but for certain science cases a bespoke analysis may be more valuable.
Performing custom photometry with the raw \kepler~ data has many benefits.
Working with the raw data allows users to mitigate systematics for their particular science case (such as the \kepler~ Rolling band) and verify the aperture selected by the pipeline.
Working directly with the pixel data allows users to check the data quality, mitigate and remove cosmic rays, or identify stray asteroids in the target aperture.
To this end, we present \lightkurve, an open source package for analyzing time-series pixel data using Python.
\lightkurve~ is designed to interface seamlessly with data from NASA's \kepler, \ktwo~ and \tess~ missions.
Using \lightkurve, it is simple and quick to create corrected time-series photometry from raw pixel data from any of these missions and perform many common light curve corrections, including correcting for spacecraft motion induced noise using the self-flat-field method and correcting for correlated trends using cotrending basis vectors.
\lightkurve~ is open source and provides an excellent learning tool for any users wanting to get started with Kepler data.
\end{abstract}

\keywords{christina}

\section{Introduction}
\label{sec:intro}

Time series photometry of a wide variety of astrophysical targets is available from a myriad of ground- and space-based missions, in an assortment of formats.
This ranges from 30-year-long time series of variable stars\cite{citationneeded} to short time-series of hours for X-ray objects.
\clh{What are the challenges we face in time-series astronomy?}

NASA's \kepler~ and \ktwo~ missions have provided some of the most precise, long term monitoring of stars to date \cite{citationneeded}.
\kepler~ has delivered time-series photometry of more than 200,000 objects in the visible spectrum with a four-year-long baseline, allowing for continuous monitoring of potential exoplanet hosting systems and characterization of host stars.
Since the loss of a second reaction wheel in 2013, the \kepler~ mission was repurposed into the \ktwo~ mission.
In the near future the \tess~ mission will deliver high-precision time series data for 90\% of the sky, providing light curves for X millions of objects \cite{citationneeded}.

\paragraph{What tools are currently available?}
\begin{itemize}
\item PyKE
\item \clh{Geert says he can do this para}
\item However, no one package provides a simple, open source framework for manipulating time-series data that is general purpose.
\end{itemize}

\paragraph{What are we presenting}
\begin{itemize}
\item{We present \lightkurve~as a general tool for working with almost all time-series photometry, with a particular focus on \kepler, \ktwo~ and \tess.}
\item While these missions have powerful pipelines which deliver high-precision light curves for many objects (citation), \lightkurve~allows bespoke analyses tailored for specific science cases.
\item These might include custom aperture photometry, PSF photometry in crowded fields and studies of long-period transient events such as supernovae and AGN.
\item \lightkurve~ is not designed to replace NASA pipelines, but to allow users more flexibility when producing time-series for their unique science cases.
\item{We have designed \lightkurve~as a tool to process this vast wealth of data easily and intuitively, with many features and tools that reduce the overhead in using this data.}
\item By using these tools users have the advantage of easy reproducibility. By sharing the same tools and the same short scripts for producing their light curve products, different teams will be better able to compare results.
\end{itemize}

\paragraph{How do you use lightkurve}
\begin{itemize}
\item designed to be flexible
\item nuts and bolts
\item open source
\item easy data fetching
\item easy api
\end{itemize}

\paragraph{What is the selling point of lightkurve}
\begin{itemize}
\item There are two sides to the \lightkurve~package. Firstly, \lightkurve~can be used as an extraction package for creating time-series photometry from astronomical images such as \kepler~ Target Pixel Files (TPFs) or \tess~ Full Frame Images (FFIs). This includes simple aperture photometry, PSF photometry and centroiding.
\item Secondly, \lightkurve~can be used for analysis of time-series photometry. This includes motion detrending, CBV corrections, outlier rejection and period folding.
\item Together these two sides can be combined to convert raw data from \kepler, \ktwo~ and \tess~ to cleaned light curves of exoplanet candidates, supernovae and extra-galactic objects.
\item One flexible system for all optical photometry
\item Learning/teaching tool
\end{itemize}

\paragraph{Future resources?}
\begin{itemize}
\item This is \lightkurve~1.0
\item Anticipate adding new features
\item Easily extendible for users to add in new features
\item{There are already tutorials, which will be expanded}
\end{itemize}

\paragraph{What's in this paper?}
\begin{itemize}
\item In this paper we discuss the basic components of \lightkurve
\item We will show three key components of analysis with \lightkurve: manipulating lightcurves, creating lightcurves, and removing systematics from lightcurves.
\item \lightkurve~has a full complement of tutorials
\item More details can be found in our documentation at link.
\end{itemize}

\section{Package overview}

\clh{Ze can you do a beautiful diagram here?}

\subsection{The \LightCurve~and \KeplerLightCurve~classes}

The \LightCurve~class is a simple container to store \numpy~arrays (hereafter, arrays) related to flux time-domain measurements.
The \LightCurve~object provides methods to store, process, and convert lightcurves.
Table~\ref{tab:methods} contains a description of a subset of the methods.
A \LightCurve~object can be instantiated by passing a \texttt{time} array, a \texttt{flux} array, and, optionally, a \texttt{flux\_err} array which accounts for uncertainties in the \texttt{flux} measurements, i.e.,

\begin{minted}{python}
from lightkurve import LightCurve
lc = LightCurve(time, flux)
\end{minted}

The \KeplerLightCurve~class extends \LightCurve~by adding attributes to store metadata information such as channel number, quality flags, campaign or quarter number, Kepler ID, etc.
Additionally, \texttt{KeplerLightCurve} can be corrected for motion-dependent correlated noise using the \texttt{correct} method, which will be discussed in Section~\ref{subsection:motion}.

\subsection{The \texttt{KeplerLightCurveFile} class}

The \texttt{KeplerLightCurveFile} class defines a structure to deal with lightcurve files from both NASA's Kepler and K2 missions.
To instantiate a \texttt{KeplerLightCurveFile} object, it is necessary to pass a \texttt{path} which represents the address (URL or local path) of a lightcurve file in the FITS (or compressed) format, and a \texttt{quality\_bitmask} string which specifies quality flags of cadences that should be ignored.

One crucial method of the \texttt{KeplerLightCurveFile} class is \texttt{get\_lightcurve}, which returns a \texttt{KeplerLightCurve} object with the metadata provided by the corresponding \texttt{KeplerLightCurveFile}.
Therefore, one can, for example, perform the following series of operations in order to fold a lightcurve from the MAST archive

\begin{minted}{python}
lc_file = KeplerLightCurveFile("kplr011904151-2009350155506_llc.fits")
klc = lc_file.PDCSAP_FLUX.fold(period=0.837495)
klc.plot()
\end{minted}

\subsection{The \texttt{KeplerTargetPixelFile} class}

A \texttt{KeplerTargetPixelFile} object can be instantiated by passing a \texttt{path} (URL or local) of a target pixel file.
Optionally, the user can elect to throw away frames that contain a specific flag by using the \texttt{quality\_bitmask} argument.
\texttt{KeplerTargetPixelFile} offers a number of methods that range from getting raw aperture photometry lightcurves to data visualization.
For instance, the method \texttt{plot} can be used to visualize a given frame, as depicted in Fig.~\ref{fig:plot-method}.

\begin{minted}{python}
import numpy as np
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile("kplr008462852-2011073133259_lpd-targ.fits")
tpf.plot()
tpf.plot(aperture_mask=tpf.flux[0] > np.nanmean(tpf.flux[0]))
\end{minted}

\begin{figure}[!htb]
\centering
\plotone{figs/tpf-plot.pdf}
\plotone{figs/tpf-plot-aperture.pdf}
\caption{Displaying a given frame of a TPF using \texttt{plot}. Optionally, an \texttt{aperture\_mask} can be passed, which is highlighted on the right-hand side.}
\label{fig:plot-method}
\end{figure}

In an image with $n$ pixels, where the flux and the center positions of the $i$-th pixel are denoted as $f_i$ and $(x_i, y_i)$, respectively, the centroids may be expressed as
\begin{align}
x^{\star} = \dfrac{\sum_{i=1}^{n} f_i x_i}{\sum_{i=1}^{n}f_i}, ~~y^{\star} = \dfrac{\sum_{i=1}^{n} f_i y_i}{\sum_{i=1}^{n}f_i}.
\end{align}
In \texttt{lightkurve}, the centroids in every cadence can be computed as

\begin{minted}{python}
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile('ktwo246199087-c12_lpd-targ.fits.gz')
x_star, y_star = tpf.get_centroids()
\end{minted}

\clh{This table I believe needs a little updating? There can also be a second table for TPF}
\begin{table}[!htb]
\centering
\caption{A subset of methods provided by the \LightCurve~class}
\begin{tabular}{cp{6.5cm}}
\hline
\textbf{Method} & \textbf{Short description} \\
\hline
\texttt{stitch} & appends the attributes \texttt{flux}, \texttt{time}, and \texttt{flux\_err} of other given \texttt{LightCurve} objects.\\
\texttt{flatten} & applies a Savitzky-Golay filter to capture low frequency flux variations, which can then be removed in order to aid transit detection algorithms.\\
\texttt{fold} & folds a lightcurve at a given period and phase.\\
\texttt{bin} & bins a lightcurve using a block mean or median.\\
\texttt{cdpp} & computes the Combined Differential Photometric Precision (CDPP) metric, which is a proxy for the amount of scatter in the lightcurve signal. \\
\texttt{plot} & displays a lightcurve.
\end{tabular}
\label{tab:methods}
\end{table}

%This might belong somewhere else, but it's slowing down our introduction
\subsection{PyKE Tools}
\begin{itemize}
\item previously there was pyke, don't spend too long on this
\item pyke is still available
\item \lightkurve~uses only Python
\item new Python package which makes the custom analysis of targets easy. Based on AstroPy (cite Astropy).
\end{itemize}

\section{Common Use Cases}
\subsection{Creating Custom Light Curves}
\subsubsection{Simple Aperture Photometry (SAP)}
\begin{itemize}
\item What is SAP?
\item Why do SAP?
\item How do you do SAP?
\end{itemize}

\subsubsection{Point Spread Function (PSF) Photometry}
%What is PSF?
Point Spread Function (PSF) photometry is the de facto technique to process crowded-field images~\cite{stetson1987, heasley1999}.
In the context of the Kepler and K2 missions, Libralato \textit{et al.}~\cite{libralato2016} have shown...
See a detailed explanation of PSF photometry in Appendix~\ref{appendix:psf}.
%Why do PSF?
The underlying principle of PSF photometry consists in modelling a given crowded image as a linear combination of individual PSFs and possibly a background model.
On the PSF model itself, it is commonly assumed that the flux at an arbitrary pixel position increases linearly with the integrated flux~\cite{stetson1987, heasley1999}.

%How do you do PSF?
\lightkurve~contains routines to perform PSF photometry in TPFs, which are implemented in the \texttt{psf} module.
The example below illustrates PSF photometry on the target \texttt{EPIC 246199087} (Trappist-1):

\clh{If you want to include oktopus in this snippet you have to explain what it is to the reader:)}

\begin{minted}{python}
from lightkurve import KeplerTargetPixelFile
from lightkurve.psf import PRFPhotometry, SceneModel
from oktopus import UniformPrior
tpf = KeplerTargetPixelFile("ktwo246199087-c12_lpd-targ.fits.gz")
prf = tpf.get_prf_model()
prior = UniformPrior(lb=[4e3, 990, 25, 1], ub=[2e4, 996, 30, 2e3])
scene = SceneModel(prf=[prf])
phot = PRFPhotometry(scene_model=scene, prior=prior)
results = phot.fit(tpf.flux + tpf.flux_bkg)
\end{minted}

The photometric results are stored in a $c \times 4$ matrix, where $c$ is the number of frames (cadences).

\subsection{Correcting Common Systematics}

We provide tools to correct for two systematics that are common between targets on the same channel.
We provide corrections using \emph{Cotrending Basis Vectors} (CBVs), which mitigate systematics due to e.g. spacecraft heating.
(See Appendix \ref{appendix:cbvs} for a detailed explanation of CBVs.)
We also provide a simple implementation of the \emph{Self Flat Fielding} (SFF) method to correct for spacecraft motion.
(See Appendix \ref{appendix:motion} for a detailed explanation of SFF.)

We only intend to provide simple tools. Ideally, systematics are removed simultaneously with fitting a model (e.g. Montet and DFM 2015).

\subsubsection{Correcting Spacecraft Motion}

Spacecraft-induced correlated noise remains one of the greatest hurdles to analyzing K2 lightcurves.
Many algorithms have been developed to mitigate motion-dependent artifacts~\cite{vanderburg2014}[CITE K2SC and EVEREST].
In \lightkurve, we implement an algorithm based on the self-flat-field (SFF) method presented in~\cite{vanderburg2014}.
SFF works by decorrelating the simple aperture flux against the information on the spacecraft motion, obtained by computing the arclength using the centroids of the target.
(See Appendix \ref{appendix:motion} for a detailed explanation of SFF.)

\begin{minted}{python}
from lightkurve import KeplerTargetPixelFile
tpf = KeplerTargetPixelFile("ktwo248667471-c14_lpd-targ.fits.gz")
lc = tpf.to_lightcurve()
centroids = tpf.get_centroids()
lcc = lc.correct(centroids[0], centroids[1])
\end{minted}

\subsubsection{Correcting Common Systematics with CBVs}

\clh{Simple paragraph describing CBVs}
Cotrending basis vectors (CBVs) can remove global correlated systematics present in a given channel \cite{smith2012}.
An example of SAP flux correction for target \texttt{KOI 8462852} (Tabby's star) can be written as follows

\begin{minted}{python}
from lightkurve.lightcurve import KeplerCBVCorrector
cbv = KeplerCBVCorrector("kplr008462852-2011073133259_llc.fits")
cbv_lc = cbv.correct(cbvs=[1,2])
\end{minted}

Fig~\ref{fig:cbv-correction} illustrates the correction.
The pink line has a shift from the green line because in \lightkurve~we do not account for the flux lost outside of the aperture mask.

\begin{figure}[!htb]
\centering
\plotone{figs/cbv.pdf}
\caption{CBV correction applied on \texttt{KOI 8462852} \clh{This is great, really valuable plot.
I would just change to have two lines with labels corrected and uncorrected.}}
\label{fig:cbv-correction}
\end{figure}

Improperly tuning the number of CBVs can cause over-/under-fitting.
One way to identify a reasonable number of CBVs is to perform a grid search as shown in Fig~(\ref{fig:cbv-grid-search}).
The number of CBVs can be selected by inspecting the grid-search curve, or through model comparison heuristics like AIC, BIC, or cross-validation \cite{ivezi2014}.

In the same fashion, we can apply cotrending basis vector correction to \ktwo~lightcurves.

\subsection{Recovering a planet signal}

\section{Example Code Snippets}
\subsection{Recover a Planet in 5 lines}

\begin{figure}
\plotone{figs/fold-lc.pdf}
\caption{Folded lightcurve of target \texttt{KIC011904151} quarter 3, showing the transit signal of Kepler-10b. \label{fig:fold-method}}
\end{figure}

\section{Future work}

Explain PSF photometry needs users and data-driven model capability.

We do not intend to implement transit fitting, more advanced detrending, etc. Instead, lightkurve intends to provide the building blocks needed to build or interact with such packages.

We intend to add many tutorials.

Explain how people can contribute.

\section{Conclusions}

we will discuss => we have discussed

\appendix

\section{Cotrending basis vectors}
\label{appendix:cbvs}

Given a set of $n$ CBVs, one is interested in finding a vector of $n$ coefficients $\bm{\theta}=(\theta_1, \theta_2, ..., \theta_n)$ which minimizes some cost function between the SAP flux and the set of CBVs.
The mathematical structure of the cost function is a direct consequence of the statistical assumptions made for the data.
For instance, if one assumes that the data comes from an independent and identically distributed (iid) multivariate Gaussian distribution with mean $\sum_{j=1}^{n}\theta_j v_{j}$, in which $v_j$ is the $j$-th CBV, and known variance $\sigma^2$, then the cost function can be expressed as follows
\begin{align}
\mathcal{C}(\bm{\theta}, f_{SAP}) = ||f_{SAP} - \bm{\theta}V||^{2}_{2}
\label{eq:chi-square}
\end{align}
in which $f_{SAP}$ is the SAP flux light curve and $V = (v_1, v_2, ..., v_n)^{T}$ is a matrix composed of the CBVs stacked row-wise.
The above problem is a simple linear least-squares problem which presents an analytical solution as $\theta^{*} = f_{SAP}V^{\mathrm{T}}(VV^{\mathrm{T}})^{-1}$.
However, Equation~(\ref{eq:chi-square}) is sensitive to outliers~\cite{ivezi2014}; therefore, as the default behaviour in \lightkurve, we use the following cost function
\begin{align}
\mathcal{C}(\bm{\theta}, f_{SAP}) = \sum_{t} \left\lvert f_{SAP}(t) - \sum_{j=1}^{n}\theta_j v_{j}(t)\right\rvert.
\label{eq:l1-norm}
\end{align}
Then, the CBV-corrected flux can be computed as
\begin{equation}
f_{CBV} = f_{SAP} - \sum_{j=1}^{n}\theta^{\star}_j v_{j}(t).
\end{equation}

%I think this para will be moved in later versions, we don't need this figure,
The number of CBVs will directly contribute to overfitting effects.
One way to identify a reasonable number of CBVs is to perform a grid search as suggested in Fig~(\ref{fig:cbv-grid-search}), which shows the cost function as a function of the number of CBVs.
Usually, as the number of CBVs increases, the value of the cost function decreases; therefore, the user should empirically choose a number of CBVs which does not remove the astrophysical signal of interest [add reference].
% MGS note: Wait--- won't the cost function decrease monotonically for CBVs?
% Do you use AIC or BIC or cross-validation?
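As a concrete illustration of Equation~(\ref{eq:chi-square}) and its closed-form solution, the coefficients and corrected flux can be computed directly in NumPy. This standalone sketch is ours and is not part of the \lightkurve~API; the array shapes are assumptions made for the example.
\begin{minted}{python}
import numpy as np

# f_sap: SAP flux, shape (n_cadences,)
# V:     CBVs stacked row-wise, shape (n_cbvs, n_cadences)
def cbv_correct_least_squares(f_sap, V):
    # theta* = f_SAP V^T (V V^T)^{-1}, the minimizer of ||f_SAP - theta V||^2
    theta = f_sap @ V.T @ np.linalg.inv(V @ V.T)
    # corrected flux: f_CBV = f_SAP - sum_j theta_j v_j
    return f_sap - theta @ V, theta
\end{minted}
Note that this is the plain $\ell_2$ solution; as discussed above, \lightkurve~defaults to the more outlier-robust cost of Equation~(\ref{eq:l1-norm}).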
%CLH: This is a little detailed for this paper, I might consider just leaving it as ``we use BIC'' An objective way of selecting the number of CBVs is to use Bayes' factors [add reference]. In the Bayes' factor setting, the selected number of CBVs is the one that provide the least gain in posterior probability, i.e., for all ordered pairs of CBVs, the Bayes factor selects $n^{\star}$ number of CBVs as follows \begin{align} n^{\star} = \argmin_{n} \dfrac{p_{n+1}}{p_n}, \end{align} in which $p_n$ is the posterior probability evaluated at the Maximum A Posteriori Estimator (MAP) obtained using $n$ CBVs. A Laplacian prior with zero mean and variance $16$ is the default prior density over the CBVs coefficients. \section{Point spread function photometry} \label{appendix:psf} NASA's Kepler and K2 missions have been delivering high-precision time series data for a wide range of stellar types through the official~\cite{jenkins2010} and community-developed pipelines~\cite{vanderburg2014, luger2016, aigrain2016}. Although those pipelines have been extremely successful, they tend to focus on studying isolated targets using simple aperture photometry and often underperform in crowded fields. However, crowded fields are frequent in many K2 campaigns and will be a major characteristic of TESS [ADD CITATION]. Therefore, pipelines that can deal with crowding in a principled way will play a key role on processing such type of data. Briefly, the PSF photometry problem that \lightkurve~solves can be formulated as follows. Given an image $\bm{y}$, with $n$ pixels and $m$ stars, and a PSF model $\lambda(\bm{\theta}) = \sum_{j=1}^{m} \lambda({\theta}_j)$, find the best parameter vector (which encodes fluxes and center positions for $m$ stars) $\bm{\theta}^{\star} = (\theta_1^{\star}, \theta_2^{\star}, ..., \theta_m^{\star})$ that minimizes some cost (or loss) function $R(\lambda(\bm{\theta}), \bm{y})$ of assigning $\bm{\theta} = \bm{\theta}^{\star}$. From a probabilistic point of view, one is often interested in minimizing the expected cost with respect to some probability distribution assigned to the data $\bm{y}$ and to the parameter vector $\bm{\theta}$, from which the cost function $R$ naturally arises. The default assumption, made in \lightkurve,~on the data is that it follows a Poisson probability distribution, whereas the probability distribution on the parameter vector has to be assigned by the user using the $\texttt{prior}$ argument. Using a uniform prior for $\bm{\theta}$, the MAP estimator can be written as \begin{align} \bm{\theta}^{\star}(\bm{y}) = \argmin_{\bm{\theta} \in \Lambda} \sum_{i=1}^{n} \left(\sum_{j=1}^{m}\lambda_i(\bm{\theta}_j) - y_i\log\sum_{j=1}^{m}\lambda_i(\bm{\theta}_j)\right), \end{align} in which $\Lambda$ is the support of $\bm{\theta}$. Another important aspect is the PSF model... \section{Motion-dependent Correlated Noise} \label{appendix:motion} \acknowledgments We would like to express our gratitude... Funding sources \vspace{5mm} \facilities{Kepler} \software{astropy} \bibliographystyle{aasjournal} \bibliography{ms} \end{document}
{ "alphanum_fraction": 0.7532056419, "avg_line_length": 43.4773519164, "ext": "tex", "hexsha": "2d42a09b752081e9c4f49ffeed41a2c0058cae69", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-04-09T08:58:57.000Z", "max_forks_repo_forks_event_min_datetime": "2018-04-09T08:58:57.000Z", "max_forks_repo_head_hexsha": "1683516b0e4daeee217e58df0a21a4ec26b3d39f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mirca/pyke-paper", "max_forks_repo_path": "paper/ms.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "1683516b0e4daeee217e58df0a21a4ec26b3d39f", "max_issues_repo_issues_event_max_datetime": "2018-04-10T17:04:08.000Z", "max_issues_repo_issues_event_min_datetime": "2018-04-04T02:04:44.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mirca/pyke-paper", "max_issues_repo_path": "paper/ms.tex", "max_line_length": 323, "max_stars_count": null, "max_stars_repo_head_hexsha": "1683516b0e4daeee217e58df0a21a4ec26b3d39f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mirca/pyke-paper", "max_stars_repo_path": "paper/ms.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6672, "size": 24956 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Forester - One Page Two Column Resume % LaTeX Template % Version 1.1 (30/4/2014) % % Original author: % Sean Forester w/ inspiration from Deedy % % Original repository: % https://github.com/lonesome-data % % IMPORTANT: COMPILE WITH XeLaTeX % % This template uses several fonts not included with Windows/Linux by % default. If you get compilation errors saying a font is missing, find the line % on which the font is used and either change it to a font included with your % operating system or comment the line out to use the default font. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TODO: % 1. Integrate biber/bibtex for article citation under publications. % 2. Figure out a smoother way for the document to flow onto the next page. % 3. Add styling information for a "Projects/Hacks" section. % 4. Add location/address information % 5. Merge OpenFont and MacFonts as a single sty with options. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % CHANGELOG: % v1.1: % 1. Fixed several compilation bugs with \renewcommand % 2. Got Open-source fonts (Windows/Linux support) % 3. Added Last Updated % 4. Move Title styling into .sty % 5. Commented .sty file. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Known Issues: % 1. Overflows onto second page if any column's contents are more than the % vertical limit % 2. Hacky space on the first bullet point on the second column. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentclass[]{Forester-Resume} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % LAST UPDATED DATE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \lastupdated %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % TITLE NAME % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \namesection{Sean}{Forester} { Data Science Consultant with robust experience as Team Lead and Individual Contributor: Expert in prototyping machine learning solutions in Python. Current Top Secret Clearance with SCI (CI Poly) } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN ONE % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{minipage}[t]{0.33\textwidth} \sectionsep 910.376.1138 \href{mailto:[email protected]}{\bf [email protected]}\\ Github:// \href{https://github.com/lonesome-data}{\bf lonesome-data} \\ LinkedIn:// \href{https://www.linkedin.com/in/smforester/}{\bf {smforester}} \\ \section{Skills} \subsection{Environment} \location{Jupyter Labs/Hub/Notebook \textbullet{} Google Colab \textbullet{} Azure} \subsection{Machine Learning} \location{Pandas \textbullet{} Scikit-learn \textbullet{} TensorFlow \textbullet{} R} \subsection{Visualization} \location{PyViz \textbullet{} Bokeh \textbullet{} seaborn \textbullet{} Tableau} \subsection{Version Control} \location{Git \textbullet{} Bitbucket} \section{Education} \subsection{Columbia University} \descript{Professional Certificate\\ \textbf{Data Science for Executives}} \location{edX September 2019} \sectionsep \subsection{Naval Postgraduate} \descript{Academic Certificate\\ \textbf{Data Science}} \location{Distance Learning September 2018} \sectionsep \subsection{Microsoft Pro Program} \descript{Professional Certificate\\ \textbf{Data Science}} \location{edX August 2017} \sectionsep \subsection{Tuck School of Business} \location{at Dartmouth} \descript{Executive Education Certificate} \sectionsep \subsection{Naval Postgraduate} \descript{\textbf{MS, Computer Science}} \location{Monterey, CA} \sectionsep \subsection{U.S. 
Naval Academy} \descript{\textbf{BS, Ocean Engineering}} \location{Annapolis, MD} \textbf{4-yr Varsity Football} \sectionsep \section{Coursework} \subsection{Graduate} Advanced Machine Learning \\ Artificial Intelligence \\ IoT and Energy Harvesting\\ %\sectionsep \subsection{Undergraduate} Ocean Systems Design \\ Adv Electrical Engineering\\ Engineering Thermodynamics \\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % COLUMN TWO % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{minipage} \hfill \begin{minipage}[t]{0.66\textwidth} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % EXPERIENCE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Accomplished [X] as measured by [Y] by doing [Z] % https://www.linkedin.com/pulse/20140929001534-24454816-my-personal-formula-for-a-better-resume/ \section{Technical Experience} \runsubsection{auction.com} \descript{Data Scientist} \location{May 19 - Aug 19 | Irvine, CA} \vspace{\topsep} % Hacky fix for awkward extra vertical space \begin{tightemize} \item Improved accuracy by over 10 percent of legacy predictive model by prototyping an XGBoost classifier to target distressed assets \item Clustered customer behavior to minimize time to value of online experience \item Provided quality assurance to business intelligence team responsible for analytic dashboard reporting \end{tightemize} \sectionsep \runsubsection{Dept of Defense} \descript{Data Scientist | Principal Consultant} \location{Aug 15 - May 19 | Greater San Diego, CA} \begin{tightemize} \item Built an artificial neural network with TensorFlow backend using Google CoLab's virtual machines resulting in discovery of tradecraft application bias and enterprise-wide change to auditing practice \item Chief architect behind a Naive Bayes classification model for sentiment classification (Natural Language Processing) resulting in discovery of implicit clues contradicting post-audit client satisfaction (Likert scaled) responses \item Retained talent by improving performance recognition by conducting (unsupervised) Latent Semantic Analysis in unstructured collection of archived awards and identifying key patterns \end{tightemize} \sectionsep \runsubsection{U.S. Naval Academy} \descript{Computer Science Senior Lecturer} \location{Sep 09 - Jul 12 | Annapolis, MD} \begin{tightemize} \item Prepared lessons and instructed students on theoretical machine learning (convex optimization/modeling, regularization and perturbation) \item Hand-selected from cohort of nine postgrads to teach computer science \item Runner-up among 20 candidates nominated for Faculty Teaching Excellence \item Taught computer architecture, networking, cyber security, algorithm complexity and machine learning concepts \end{tightemize} \section{Highlights of Military Service} \descript{U.S. 
Marine Corps Officer (1999 - 2019) | Domestic \& Int'l}
\begin{tightemize}
\item Led collaborative efforts of 40 associates resulting in 100\% on-time delivery of end-to-end analyses predicated on accurate requirements specification and evidence-based solutions
\item Advised the US Ambassador to Japan's decision making by providing key insights related to military base migration within the island of Okinawa
\item Worked independently and collaboratively with key foreign stakeholders in detailed planning and execution of multilateral military exercises in Australia, Taiwan, Japan, Maldives, the Republic of the Philippines and the Republic of Korea
\item Planned and executed the first-ever employment of prepositioned military equipment in Manila, thereby increasing disaster response posture
\item Exceeded STEM quotas as Chair of the Computer Science Dept Recruitment Committee by studying and empathizing with the behavioral economics of students and the competition (other departments)
\item Executed 100\% of budget without incident as the Computer Science Dept Finance Officer by prioritizing department requisitions, tracking expenditures and routinely meeting budget milestones
\item Conducted full-spectrum military operations in Afghanistan with a 12-Marine team responsible for advising the Afghan National Army
\end{tightemize}
\sectionsep

\end{minipage}
\end{document}
{ "alphanum_fraction": 0.7286124152, "avg_line_length": 34.08, "ext": "tex", "hexsha": "dd795cd9313115e91c87ef36ccc82d02ab9b2445", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "eab07c8b0c6007436f234b0d4dfb93f032b3b544", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lonesome-data/Data_Science_Resume", "max_forks_repo_path": "Forester-Resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "eab07c8b0c6007436f234b0d4dfb93f032b3b544", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lonesome-data/Data_Science_Resume", "max_issues_repo_path": "Forester-Resume.tex", "max_line_length": 239, "max_stars_count": null, "max_stars_repo_head_hexsha": "eab07c8b0c6007436f234b0d4dfb93f032b3b544", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lonesome-data/Data_Science_Resume", "max_stars_repo_path": "Forester-Resume.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1741, "size": 7668 }
\subsection{Division}
\subsubsection{Introduction}
Division is the inverse operation of multiplication. It does not necessarily have solutions in the natural numbers or the integers.

\subsubsection{Division of natural numbers}

\(a\cdot b=c\rightarrow b=\dfrac{c}{a}\)

\subsubsection{Division is not commutative}

Division is not commutative:

\(\dfrac{x}{y}\ne \dfrac{y}{x}\)

\subsubsection{Division is not associative}

\(\dfrac{x}{\dfrac{y}{z}}\ne \dfrac{\dfrac{x}{y}}{z}\)

\subsubsection{Division is not left distributive}

Division is not left distributive over subtraction:

\(\dfrac{a}{b-c} \ne \dfrac{a}{b} -\dfrac{a}{c}\)

\subsubsection{Division is right distributive}

Division is right distributive over subtraction:

\(\dfrac{a-b}{c} =\dfrac{a}{c} -\dfrac{b}{c}\)

For example, \(\dfrac{6-4}{2}=\dfrac{6}{2}-\dfrac{4}{2}=1\).

\subsubsection{Division of integers}
{ "alphanum_fraction": 0.7268351384, "avg_line_length": 20.775, "ext": "tex", "hexsha": "7038c103da397d31d202744a0d87a0d8c520ccf4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/logic/arithmetic/01-02-division.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/logic/arithmetic/01-02-division.tex", "max_line_length": 75, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/logic/arithmetic/01-02-division.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 243, "size": 831 }
% % API Documentation for QSTK % Package QSTK.quicksim % % Generated by epydoc 3.0.1 % [Mon Mar 5 00:49:21 2012] % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Module Description %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \index{QSTK \textit{(package)}!QSTK.quicksim \textit{(package)}|(} \section{Package QSTK.quicksim} \label{QSTK:quicksim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Modules %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Modules} \begin{itemize} \setlength{\parskip}{0ex} \item \textbf{quickSim} \textit{(Section \ref{QSTK:quicksim:quickSim}, p.~\pageref{QSTK:quicksim:quickSim})} \end{itemize} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Variables %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Variables} \vspace{-1cm} \hspace{\varindent}\begin{longtable}{|p{\varnamewidth}|p{\vardescrwidth}|l} \cline{1-2} \cline{1-2} \centering \textbf{Name} & \centering \textbf{Description}& \\ \cline{1-2} \endhead\cline{1-2}\multicolumn{3}{r}{\small\textit{continued on next page}}\\\endfoot\cline{1-2} \endlastfoot\raggedright \_\-\_\-p\-a\-c\-k\-a\-g\-e\-\_\-\_\- & \raggedright \textbf{Value:} {\tt None}&\\ \cline{1-2} \end{longtable} \index{QSTK \textit{(package)}!QSTK.quicksim \textit{(package)}|)}
{ "alphanum_fraction": 0.4180278282, "avg_line_length": 33.06, "ext": "tex", "hexsha": "4f6922b2917f3795a71711d3d96314f03a5655c7", "lang": "TeX", "max_forks_count": 154, "max_forks_repo_forks_event_max_datetime": "2022-03-19T02:27:59.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-30T09:41:15.000Z", "max_forks_repo_head_hexsha": "0eb2c7a776c259a087fdcac1d3ff883eb0b5516c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "jenniyanjie/QuantSoftwareToolkit", "max_forks_repo_path": "Legacy/Docs/pdf/QSTK.quicksim-module.tex", "max_issues_count": 19, "max_issues_repo_head_hexsha": "0eb2c7a776c259a087fdcac1d3ff883eb0b5516c", "max_issues_repo_issues_event_max_datetime": "2021-07-19T11:13:47.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-04T13:12:33.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "jenniyanjie/QuantSoftwareToolkit", "max_issues_repo_path": "Legacy/Docs/pdf/QSTK.quicksim-module.tex", "max_line_length": 97, "max_stars_count": 339, "max_stars_repo_head_hexsha": "4981506c37227a72404229d5e1e0887f797a5d57", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "elxavicio/QSTK", "max_stars_repo_path": "Docs/pdf/QSTK.quicksim-module.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T23:32:24.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-01T10:06:49.000Z", "num_tokens": 429, "size": 1653 }
\section{Masked attention implementation}

In this section we will detail the prefix sum algorithm proposed by \citet{choromanski2021rethinking} for masked kernelized attention, and give an implementation using standard functions of neural network frameworks.

\begin{equation}
    A^{masked} = \mathrm{masked} \left( \phi(Q) \times \phi(K)^T \right) \times V
\end{equation}

In this expression, $\mathrm{masked}$ is the operation that sets all cells above the diagonal of a matrix to 0. The naive implementation of this operation has complexity $O(L_QL_Kd)$. We can reduce this complexity using the prefix-sum mechanism proposed by \citet{choromanski2021rethinking}. We will derive its formulation here. To start, $A^{masked}$ is defined as

\begin{equation}
    A^{masked} = S^{masked} \times V
\end{equation}

\noindent{}which in summation form is expressed as

\begin{equation}
    A^{masked}_{ij} = \sum_k V_{kj} \times S^{masked}_{ik}
\end{equation}

\noindent{}and the elements of $S^{masked}$ are defined as:

\begin{equation}
    S^{masked}_{ik} =
    \begin{cases}
        \sum_l \left( \phi(Q)_{il} \times \phi(K)_{kl} \right) & k \leq i \\
        0 & \text{otherwise}
    \end{cases}
\end{equation}

Putting these elements together leads to

\begin{equation}
    A^{masked}_{ij}= \sum_{k=1}^i V_{kj} \times \sum_l \left( \phi(Q)_{il} \times \phi(K)_{kl} \right)
\end{equation}

\noindent{}which can be reworked as

\begin{equation}
    A^{masked}_{ij}= \sum_l \phi(Q)_{il} \times \sum_{k=1}^i \left(V_{kj} \times \phi(K)_{kl} \right)
\end{equation}

In this work we make use of these ideas to implement the calculation of $A^{masked}$ with complexity $O(\max(L_Q, L_K) \times d^2)$ without custom GPU code, as per the following algorithm:

\begin{enumerate}
    \def\labelenumi{\arabic{enumi}.}
    \item $\phi(Q)$, $\phi(K)$, $V$ tensors of shape $(L_Q, 1, d)$, $(L_K, d)$, $(L_K, d)$
    \item $Unrolled_{kjl} = V_{kj} \times \phi(K)_{kl}$ tensor of shape $(L_K, d, d)$
    \item $Right = cumsum(Unrolled,\text{ dim=0})$ tensor of shape $(L_K, d, d)$
    \item $Right = align(Right, L_Q)$ tensor of shape $(L_Q, d, d)$
    \item $A^{masked} = \phi(Q) \otimes Right$ tensor of shape $(L_Q, 1, d)$
\end{enumerate}

\noindent{}with

\begin{itemize}
    \item $\otimes$ the batch matrix product along the last two dimensions
    \item $cumsum(\_, dim=0)$ the function that calculates the cumulative sum along the first dimension
    \item $align(\_, L_Q)$ the function that extends the first dimension to size $L_Q$ by repeating the last element if $L_Q > L_K$, or truncates to size $L_Q$ if $L_Q < L_K$. (Can be implemented with slicing and concatenation.)
\end{itemize}
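As a reference, the five steps above can be written down directly in NumPy. This sketch is ours, not code from \citet{choromanski2021rethinking}; the names and shapes follow the description above, with the feature maps assumed to be already applied.
\begin{verbatim}
import numpy as np

def masked_kernel_attention(phi_q, phi_k, v):
    """Causal masked kernelized attention via prefix sums.

    phi_q: (L_Q, d) feature-mapped queries
    phi_k: (L_K, d) feature-mapped keys
    v:     (L_K, d) values
    Returns A_masked with shape (L_Q, d).
    """
    L_q = phi_q.shape[0]
    L_k = phi_k.shape[0]
    # step 2: outer products V_kj * phi(K)_kl, stored as (L_K, d_l, d_j)
    unrolled = np.einsum('kj,kl->klj', v, phi_k)
    # step 3: cumulated sum over the key axis
    right = np.cumsum(unrolled, axis=0)
    # step 4: align the first axis to L_Q (repeat last slice or truncate)
    if L_q > L_k:
        pad = np.repeat(right[-1:], L_q - L_k, axis=0)
        right = np.concatenate([right, pad], axis=0)
    else:
        right = right[:L_q]
    # step 5: batched (1, d) x (d, d) products
    return np.einsum('il,ilj->ij', phi_q, right)

# check against the naive O(L^2 d) masked product (square case)
L, d = 16, 4
rng = np.random.default_rng(0)
phi_q, phi_k, v = rng.random((L, d)), rng.random((L, d)), rng.random((L, d))
naive = np.tril(phi_q @ phi_k.T) @ v
assert np.allclose(masked_kernel_attention(phi_q, phi_k, v), naive)
\end{verbatim}
\endinput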
{ "alphanum_fraction": 0.7010270065, "avg_line_length": 32.8625, "ext": "tex", "hexsha": "8744b3d1c8b7ef18e083d2204ed9fae4ac1c0205", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-01T06:24:31.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-01T06:24:31.000Z", "max_forks_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ScalableTransformer/Scaleformer", "max_forks_repo_path": "paper/sections/03-masked-attention.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ScalableTransformer/Scaleformer", "max_issues_repo_path": "paper/sections/03-masked-attention.tex", "max_line_length": 214, "max_stars_count": null, "max_stars_repo_head_hexsha": "57e65deb7ba5fdda88a21bdaf71092a0101cf8c3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ScalableTransformer/Scaleformer", "max_stars_repo_path": "paper/sections/03-masked-attention.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 855, "size": 2629 }
\documentclass{beamer} % See this: http://texblog.net/latex-archive/uncategorized/beamer-warnings/ \let\Tiny=\tiny \usepackage[ compress, %minimal, %nonav, red, %gold, blue, %numbers, %nologo, %nominilogo, minilogoleft, polyu, comp, forty, %seventyfive, ] {beamerthemeHongKong} \title[Title short]{Title full} \author[Author short name]{Author full name} \institute[institute]{institute full name} \date{\today} \begin{document} \frame{\titlepage} \section*{Table of Contents} \frame { \frametitle{\secname} \tableofcontents } \AtBeginSubsection[] { \frame<handout:0> { \frametitle{Outline} \tableofcontents[current,currentsubsection] } } \section{Section A} \subsection{Subsection A-A} \begin{frame}{\subsecname} \begin{columns} \column{0.5\textwidth} \begin{figure} \includegraphics[width=\textwidth]{image/test-image1} \caption{figure A} \end{figure} \column{0.5\textwidth} \begin{block}{example} \begin{enumerate} \item<alert@1> text about figure A \item text \end{enumerate} \end{block} \end{columns} \end{frame} \subsection{Subsection A-B} \section{Section B} \section{Section C} \subsection*{Thanks} \begin{frame}{\subsecname} \begin{columns} \column{2.5cm} \column{5cm} \Huge{Thank you!} \column{2.5cm} \end{columns} \end{frame} \end{document}
{ "alphanum_fraction": 0.6561844864, "avg_line_length": 16.2613636364, "ext": "tex", "hexsha": "60506bf53da83349ec7ed5ff8695da385ac83d9a", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-10-12T04:13:23.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-02T03:10:26.000Z", "max_forks_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_forks_repo_path": "200+ beamer 模板合集/PolyU_beamer_theme-1.2.1(香港理工大学电子计算机系)/example/example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_issues_repo_path": "200+ beamer 模板合集/PolyU_beamer_theme-1.2.1(香港理工大学电子计算机系)/example/example.tex", "max_line_length": 75, "max_stars_count": 13, "max_stars_repo_head_hexsha": "3ab28a23fb60cb0a97fcec883847e2d8728b98c0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lemoxiao/Awesome-Beamer-Collection", "max_stars_repo_path": "200+ beamer 模板合集/PolyU_beamer_theme-1.2.1(香港理工大学电子计算机系)/example/example.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-24T09:27:26.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-30T04:09:54.000Z", "num_tokens": 456, "size": 1431 }
\documentclass{article} %\usepackage{fullpage} %\usepackage{nopageno} \usepackage[margin=1.5in]{geometry} \usepackage{amsmath} \usepackage{amssymb} \usepackage[normalem]{ulem} \usepackage{fancyhdr} %\renewcommand\headheight{12pt} \pagestyle{fancy} \lhead{March 12, 2014} \rhead{Jon Allen} \allowdisplaybreaks \newcommand{\abs}[1]{\left\lvert #1 \right\rvert} \begin{document} \section*{Project Prep} I chose binary trees. I found out you can map Dyck paths to binary trees and back. I also never really thought about how many ways you could symmetrise the treees. The statistic finder is meh, just generates trees. The database is interesting in that the statistics in there are kind of random from where I'm sitting. Like for example the first statistic is ``The number of left oriented leafs except the first one.'' \section*{Chapter 5} \begin{enumerate} \setcounter{enumi}{29}\item Prove that the only antichain of $S=\{1,2,3,4\}$ of size 6 is the antichain of all 2-subsets of $S$ We can easily see that the only antichain that contains $\emptyset$ is $\{\emptyset\}$, similarly the only antichain containing $S$ is $\{S\}$. Both of these are obviously of size one. We see then that any antichain that is to be bigger than 6 must contain a subset of size 1,2 or 3. Lets pick a subset of size one to be in our antichain. Without loss of generality we say that subset is $\{1\}$. Since any other subsets in our antichain must not be a superset of $\{1\}$ we can form the remaining subsets in our antichain from $\{2,3,4\}$. We know from theorem 5.3.3 that we can form an antichain of maximum size $\binom{3}{\left\lfloor \frac{3}{2}\right\rfloor}=3$ from this set of size 3. The maximum size then of an antichain containing a subset of size 1 is then $3+1=4$ which is less than 6. The antichain we are looking for then must be made up entirely of subsets of size 2 and 3. Since the antichain we are looking for is not all subsets of size two, we must have at least one of size 3. Let us assume our antichain contains a subset of size 3. Canditates for the antichain consist of any subsets of $A$ with the last element from $S$ added in, or the last element of $S$ only. Without loss of generality we choose a subset $A$ of size 3 from $S$ to be in our antichain. We'll say $A=\{1,2,3\}$. Then an antichain with $A$ can also contain the subset the remaining element, $\{4\}$ or any two combinations from $S$ which contain the remaining element and one element from $A$, eg, $\{1,4\}$, or three combinations of the remaining element and two elements from $A$ eg, $\{1,2,4\}$. Any subsets not containing the last element are subsets of $A$ and therefore not candidates for inclusion in the antichain. There is one possible combination consisting solely of the last element, 3 combinations consisting of the remaining element and one of the elements from $A$ and 3 combinations consisting of the remaining element and two of the 3 elements in $A$. So we have taken one of our 6 possible subsets with $A$. Leaving us with 5 slots to fill with 7 candidates. If we choose the remaining element as a subset of our antichain, then none of the other candidates will fit, as they all also contain the remaining element. This gives us an antichain of size 2. Likewise if we choose one of the 2 element subsets consisting of an element in $A$ and the remaining element of $S$ then we have chosen a subset that is also a subset of two of the 3 element subsets. This gives us an antichain of size 2 and a pool of size 3. 
This gives us a maximum of 5 subsets in our antichain at this point. So, no matter how we choose from our pool of 7 candidates, we reduce the available remaining candidates to the point where we can not build an antichain of size 6.$\Box$ %Let us assume our antichain contains a subset of size 2. Say without loss of generality $\{1,2\}$. Then the biggest possible antichain containing this set is given by adding the sizes of the largest possible antichains of the original set without each element of $\{1,2\}$ and subtracting the possible antichains of the original set without either element. So again without loss of genrality we have $\{2,3,4\}$ and $\{1,3,4\}$ minus largest antichain of $\{3,4\}$ which is $2\cdot\binom{3}{\left\lfloor\frac{3}{2}\right\rfloor}-\binom{2}{\left\lfloor\frac{2}{2}\right\rfloor}=2*3-1=5$. And adding back in the original set we chose, we have a maximum antichain size of 6. \setcounter{enumi}{32}\item Construct a partition of the subsets of $\{1,2,3,4,5\}$ into symmetric chains. \begin{align*} \emptyset\subset\{1\}\subset\{1,2\}\subset\{1,2,3\}\subset\{1,2,3,4\}\subset\{1,2,3,4,5\}\\ \{5\}\subset\{1,5\}\subset\{1,2,5\}\subset\{1,2,3,5\}\\ \{4\}\subset\{1,4\}\subset\{1,2,4\}\subset\{1,2,4,5\}\\ \{4,5\}\subset\{1,4,5\}\\ \{2\}\subset\{2,3\}\subset\{2,3,4\}\subset\{2,3,4,5\}\\ \{2,5\}\subset\{2,3,5\}\\ \{2,4\}\subset\{2,4,5\}\\ \{3\}\subset\{1,3\}\subset\{1,3,4\}\subset\{1,3,4,5\}\\ \{3,5\}\subset\{1,3,5\}\\ \{3,4\}\subset\{3,4,5\}\\ \end{align*} %34 is for graduate students %\item %In a partition of the subsets of $\{1,2,\dots,n\}$ into symmetric chains, how many chains have only one subset in them? two subsets? $k$ subsets? \setcounter{enumi}{36}\item Use the multinomial theorem to show that, for positive integers $n$ and $t$, \begin{align*} t^n&=\sum{\binom{n}{n_1n_2\dots n_t}}, \end{align*} where the summation extends over all nonnegative integral solutions $n_1,n_2,\dots,n_t$ of $n_1+n_2+\dots+n_t=n$ \begin{align*} t^n&=(1_1+1_2+\dots1_t)^n\\ &=\sum{\binom{n}{n_1n_2\cdots n_t}{1_1}^{n_1}{1_2}^{n_2}\cdots{1_t}^{n_t}}\\ &=\sum{\binom{n}{n_1n_2\cdots n_t}} \end{align*} \item Use the multinomial theorem to expand $(x_1+x_2+x_3)^4$ \begin{align*} (x_1+x_2+x_3)^4&=\sum\limits_{n_1+n_2+n_3=4}{\binom{4}{n_1n_2n_3}{x_1}^{n_1}{x_2}^{n_2}{x_3}^{n_3}}\\ &=\binom{4}{4\;0\;0}{x_1}^4{x_2}^0{x_3}^0 +\binom{4}{0\;4\;0}{x_1}^0{x_2}^4{x_3}^0 +\binom{4}{0\;0\;4}{x_1}^0{x_2}^0{x_3}^4\\ &\quad+\binom{4}{3\;1\;0}{x_1}^3{x_2}^1{x_3}^0 +\binom{4}{3\;0\;1}{x_1}^3{x_2}^0{x_3}^1 +\binom{4}{0\;3\;1}{x_1}^0{x_2}^3{x_3}^1\\ &\quad+\binom{4}{1\;3\;0}{x_1}^1{x_2}^3{x_3}^0 +\binom{4}{0\;1\;3}{x_1}^0{x_2}^1{x_3}^3 +\binom{4}{1\;0\;3}{x_1}^1{x_2}^0{x_3}^3\\ &\quad+\binom{4}{2\;0\;2}{x_1}^2{x_2}^0{x_3}^2 +\binom{4}{2\;2\;0}{x_1}^2{x_2}^2{x_3}^0 +\binom{4}{0\;2\;2}{x_1}^0{x_2}^2{x_3}^2\\ &\quad+\binom{4}{2\;1\;1}{x_1}^2{x_2}^1{x_3}^1 +\binom{4}{1\;2\;1}{x_1}^1{x_2}^2{x_3}^1 +\binom{4}{1\;1\;2}{x_1}^1{x_2}^1{x_3}^2\\ &={x_1}^4+{x_2}^4+{x_3}^4\\ &\quad+4{x_1}^3{x_2}+4{x_1}^3{x_3}+4{x_2}^3{x_3}\\ &\quad+4{x_1}{x_2}^3+4{x_2}{x_3}^3+4{x_1}{x_3}^3\\ &\quad+6{x_1}^2{x_3}^2+6{x_1}^2{x_2}^2+6{x_2}^2{x_3}^2\\ &\quad+12{x_1}^2{x_2}{x_3}+12{x_1}{x_2}^2{x_3}+12{x_1}{x_2}{x_3}^2\\ \end{align*} \setcounter{enumi}{39}\item What is the coefficient of ${x_1}^3{x_2}^3x_3{x_4}^2$ in the expansion of \[(x_1-x_2+2x_3-2x_4)^{9}?\] \[\binom{9}{3\;3\,1\;2}\cdot1^3\cdot(-1)^3\cdot2^1\cdot(-2)^2=5040\cdot -1\cdot2\cdot4=-40320\] \setcounter{enumi}{41}\item Prove the identity (5.21) by a combinatorial argument. 
(\emph{Hint:} Consider the permutations of a multiset of objects of $t$ different types with repetition numbers $n_1,n_2,\dots,n_t$, respectively. Partition these permutations according to what type of object is in the first position.)

We know that $\binom{n}{n_1n_2\cdots n_t}$ gives us the number of permutations of a multiset of objects of $t$ different types with repetition numbers $n_1,n_2,\dots,n_t$, respectively. Now let's calculate the number of permutations of the same multiset which start with an object of type $i$. The number of objects we are choosing from is reduced by one to $n-1$ because the first object is chosen already. Likewise we have $n_i-1$ objects of type $i$ left to choose from. So we have $\binom{n-1}{n_1,\dots,n_i-1,\dots,n_t}$ permutations which start with an object of type $i$. Now if we want to find the total number of permutations, we just add together all the permutations which start with each type:
\begin{align*}
\sum\limits_{i=1}^t{\binom{n-1}{n_1\dots (n_i-1)\dots n_t}},
\end{align*}
which gives us our identity.$\Box$
\end{enumerate}
\section*{Chapter 6}
\begin{enumerate}
\item Find the number of integers between 1 and 10,000 inclusive that are not divisible by 4, 5, or 6.

I'll start by just saying that all the math I do will be integer math, e.g. $\frac{10}{4}=2$. So we can just subtract from 10,000 the number of integers that are divisible by 4, 5, or 6 to get the number of integers that are not. The number of integers divisible by 4 is $10000/4=2500$. Similarly for 5 we have $10000/5=2000$ and for 6 we have $10000/6=1666$. Since 20 is the least common multiple of 4 and 5, the number of integers divisible by both 4 and 5 is $10000/20=500$; similarly for 4 and 6 we have $10000/12=833$, and for 5 and 6, $10000/30=333$. Finally, the number of integers divisible by all of 4, 5, and 6 is $10000/60=166$. Putting it all together with the inclusion-exclusion principle (inverted since we are subtracting from the total) we have $10000-2500-2000-1666+500+833+333-166=5334$ integers.
\item Find the number of integers between 1 and 10,000 inclusive that are not divisible by 4, 6, 7, or 10.

As in problem 1, all math will be integer math, not real. The logic is also the same as in problem 1.
\begin{align*}
\text{lcm}(4,6)&=12&\text{lcm}(4,7)&=28&\text{lcm}(10,4)&=20\\
\text{lcm}(6,7)&=42&\text{lcm}(6,10)&=30&\text{lcm}(7,10)&=70\\
\text{lcm}(4,6,7)&=84&\text{lcm}(4,6,10)&=60&\text{lcm}(6,7,10)&=210&\text{lcm}(4,7,10)&=140\\
\text{lcm}(4,6,7,10)&=420\\
10000/4&=2500&10000/6&=1666&10000/7&=1428&10000/10&=1000\\
10000/12&=833&10000/28&=357&10000/20&=500\\
10000/42&=238&10000/30&=333&10000/70&=142\\
10000/84&=119&10000/60&=166&10000/210&=47&10000/140&=71\\
10000/420&=23
\end{align*}
So we have $10000-2500-1666-1428-1000+833+357+500+238+333+142-119-166-47-71+23=5429$ integers.
\item Find the number of integers between 1 and 10,000 that are neither perfect squares nor perfect cubes.

Because $\sqrt{10000}=100$, the perfect squares between 1 and 10000 inclusive are $\{1^2,2^2,\dots,100^2\}$. There are then 100 perfect squares. Similarly, because $21^3<10000<22^3$ there are 21 perfect cubes, which are $\{1^3,2^3,\dots,21^3\}$. Now if something is a perfect cube \emph{and} a perfect square, then it is a perfect sixth power. Because $4^6<10000<5^6$ there are 4 numbers which are both perfect squares and perfect cubes between 1 and 10000. Putting it all together we have $10000-100-21+4=9883$ integers that are neither perfect cubes nor perfect squares.
\end{enumerate} \end{document}
{ "alphanum_fraction": 0.7054162394, "avg_line_length": 74.4930555556, "ext": "tex", "hexsha": "99971a29e3f1767ca61ab688c55959e37749445b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ylixir/school", "max_forks_repo_path": "combinatorics/combinatorics-hw-2014-03-12.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ylixir/school", "max_issues_repo_path": "combinatorics/combinatorics-hw-2014-03-12.tex", "max_line_length": 851, "max_stars_count": null, "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ylixir/school", "max_stars_repo_path": "combinatorics/combinatorics-hw-2014-03-12.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4087, "size": 10727 }
\documentclass[journal,12pt,twocolumn]{IEEEtran} % \usepackage{setspace} \usepackage{gensymb} \usepackage{siunitx} \usepackage{tkz-euclide} \usepackage{textcomp} \usepackage{standalone} \usetikzlibrary{calc} %\doublespacing \singlespacing %\usepackage{graphicx} %\usepackage{amssymb} %\usepackage{relsize} \usepackage[cmex10]{amsmath} %\usepackage{amsthm} %\interdisplaylinepenalty=2500 %\savesymbol{iint} %\usepackage{txfonts} %\restoresymbol{TXF}{iint} %\usepackage{wasysym} \usepackage{amsthm} %\usepackage{iithtlc} \usepackage{mathrsfs} \usepackage{txfonts} \usepackage{stfloats} \usepackage{bm} \usepackage{cite} \usepackage{cases} \usepackage{subfig} %\usepackage{xtab} \usepackage{longtable} \usepackage{multirow} %\usepackage{algorithm} %\usepackage{algpseudocode} \usepackage{enumitem} \usepackage{mathtools} \usepackage{steinmetz} \usepackage{tikz} \usepackage{circuitikz} \usepackage{verbatim} \usepackage{tfrupee} \usepackage[breaklinks=true]{hyperref} %\usepackage{stmaryrd} \usepackage{tkz-euclide} % loads TikZ and tkz-base %\usetkzobj{all} \usetikzlibrary{calc,math} \usepackage{listings} \usepackage{color} %% \usepackage{array} %% \usepackage{longtable} %% \usepackage{calc} %% \usepackage{multirow} %% \usepackage{hhline} %% \usepackage{ifthen} %% %optionally (for landscape tables embedded in another document): %% \usepackage{lscape} \usepackage{multicol} \usepackage{chngcntr} \usepackage{amsmath} \usepackage{cleveref} %\usepackage{enumerate} %\usepackage{wasysym} %\newcounter{MYtempeqncnt} \DeclareMathOperator*{\Res}{Res} %\renewcommand{\baselinestretch}{2} \renewcommand\thesection{\arabic{section}} \renewcommand\thesubsection{\thesection.\arabic{subsection}} \renewcommand\thesubsubsection{\thesubsection.\arabic{subsubsection}} \renewcommand\thesectiondis{\arabic{section}} \renewcommand\thesubsectiondis{\thesectiondis.\arabic{subsection}} \renewcommand\thesubsubsectiondis{\thesubsectiondis.\arabic{subsubsection}} % correct bad hyphenation here \hyphenation{op-tical net-works semi-conduc-tor} \def\inputGnumericTable{} %% \lstset{ %language=C, frame=single, breaklines=true, columns=fullflexible } %\lstset{ %language=tex, %frame=single, %breaklines=true %} \usepackage{graphicx} \usepackage{pgfplots} \begin{document} % \newtheorem{theorem}{Theorem}[section] \newtheorem{problem}{Problem} \newtheorem{proposition}{Proposition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}{Example}[section] \newtheorem{definition}[problem]{Definition} %\newtheorem{thm}{Theorem}[section] %\newtheorem{defn}[thm]{Definition} %\newtheorem{algorithm}{Algorithm}[section] %\newtheorem{cor}{Corollary} \newcommand{\BEQA}{\begin{eqnarray}} \newcommand{\EEQA}{\end{eqnarray}} \newcommand{\define}{\stackrel{\triangle}{=}} \bibliographystyle{IEEEtran} %\bibliographystyle{ieeetr} \providecommand{\mbf}{\mathbf} \providecommand{\pr}[1]{\ensuremath{\Pr\left(#1\right)}} \providecommand{\qfunc}[1]{\ensuremath{Q\left(#1\right)}} \providecommand{\sbrak}[1]{\ensuremath{{}\left[#1\right]}} \providecommand{\lsbrak}[1]{\ensuremath{{}\left[#1\right.}} \providecommand{\rsbrak}[1]{\ensuremath{{}\left.#1\right]}} \providecommand{\brak}[1]{\ensuremath{\left(#1\right)}} \providecommand{\lbrak}[1]{\ensuremath{\left(#1\right.}} \providecommand{\rbrak}[1]{\ensuremath{\left.#1\right)}} \providecommand{\cbrak}[1]{\ensuremath{\left\{#1\right\}}} \providecommand{\lcbrak}[1]{\ensuremath{\left\{#1\right.}} \providecommand{\rcbrak}[1]{\ensuremath{\left.#1\right\}}} \theoremstyle{remark} 
\newtheorem{rem}{Remark} \newcommand{\sgn}{\mathop{\mathrm{sgn}}} \providecommand{\abs}[1]{\left\vert#1\right\vert} \providecommand{\res}[1]{\Res\displaylimits_{#1}} \providecommand{\norm}[1]{\left\lVert#1\right\rVert} %\providecommand{\norm}[1]{\lVert#1\rVert} \providecommand{\mtx}[1]{\mathbf{#1}} \providecommand{\mean}[1]{E\left[ #1 \right]} \providecommand{\fourier}{\overset{\mathcal{F}}{ \rightleftharpoons}} %\providecommand{\hilbert}{\overset{\mathcal{H}}{ \rightleftharpoons}} \providecommand{\system}{\overset{\mathcal{H}}{ \longleftrightarrow}} %\newcommand{\solution}[2]{\textbf{Solution:}{#1}} \newcommand{\solution}{\noindent \textbf{Solution: }} \newcommand{\cosec}{\,\text{cosec}\,} \providecommand{\dec}[2]{\ensuremath{\overset{#1}{\underset{#2}{\gtrless}}}} \newcommand{\myvec}[1]{\ensuremath{\begin{pmatrix}#1\end{pmatrix}}} \newcommand{\mydet}[1]{\ensuremath{\begin{vmatrix}#1\end{vmatrix}}} %\numberwithin{equation}{section} \numberwithin{equation}{subsection} %\numberwithin{problem}{section} %\numberwithin{definition}{section} \makeatletter \@addtoreset{figure}{problem} \makeatother \let\StandardTheFigure\thefigure \let\vec\mathbf %\renewcommand{\thefigure}{\theproblem.\arabic{figure}} \renewcommand{\thefigure}{\theproblem} %\setlist[enumerate,1]{before=\renewcommand\theequation{\theenumi.\arabic{equation}} %\counterwithin{equation}{enumi} %\renewcommand{\theequation}{\arabic{subsection}.\arabic{equation}} \def\putbox#1#2#3{\makebox[0in][l]{\makebox[#1][l]{}\raisebox{\baselineskip}[0in][0in]{\raisebox{#2}[0in][0in]{#3}}}} \def\rightbox#1{\makebox[0in][r]{#1}} \def\centbox#1{\makebox[0in]{#1}} \def\topbox#1{\raisebox{-\baselineskip}[0in][0in]{#1}} \def\midbox#1{\raisebox{-0.5\baselineskip}[0in][0in]{#1}} \vspace{3cm} \title{Matrix Theory (EE5609) Assignment 14} \author{Arkadipta De\\MTech Artificial Intelligence\\AI20MTECH14002} \maketitle \newpage %\tableofcontents \bigskip \renewcommand{\thefigure}{\theenumi} \renewcommand{\thetable}{\theenumi} \begin{abstract} This document proves the existence of inverse of Hilbert Matrix. 
\end{abstract}
All the code for this document can be found at
\begin{lstlisting}
https://github.com/Arko98/EE5609/blob/master/Assignment_14
\end{lstlisting}
\section{\textbf{Problem}}
Prove that the following matrix is invertible and $\vec{A^{-1}}$ has integer entries.\\
\begin{align*}
\vec{A} = \myvec{1&\frac{1}{2}&\dots&\frac{1}{n}\\\frac{1}{2}&\frac{1}{3}&\dots&\frac{1}{n+1}\\\vdots&\vdots&\dots&\vdots\\\frac{1}{n}&\frac{1}{n+1}&\dots&\frac{1}{2n-1}}
\end{align*}
\section{\textbf{Solution}}
\begin{comment}
Let $\vec{H_n}$ be the $n$-th Hilbert matrix given by
\begin{align}
\vec{H_n} &= \left[\frac1{i+j-1}\right]_{i,j}\\
\intertext{Then $\vec{H_{n+1}}$ is given by,}
\vec{H_{n+1}} &= \myvec{\vec{H_n}&\vec{u}\\\vec{u^T}&\frac{1}{2n-1}}
\end{align}
\end{comment}
Let $\vec{A_3}$ be the $3 \times 3$ case, i.e.,
\begin{align}
\vec{A_3} &= \myvec{1&\frac{1}{2}&\frac{1}{3}\\\frac{1}{2}&\frac{1}{3}&\frac{1}{4}\\\frac{1}{3}&\frac{1}{4}&\frac{1}{5}}
\end{align}
Now we find the inverse of the matrix $\vec{A_3}$ as follows,
\begin{align}
\myvec{1&\frac{1}{2}&\frac{1}{3}&1&0&0\\\frac{1}{2}&\frac{1}{3}&\frac{1}{4}&0&1&0\\\frac{1}{3}&\frac{1}{4}&\frac{1}{5}&0&0&1}\\
\xleftrightarrow[R_3 = R_3 - \frac{1}{3}R_1]{R_2 = R_2 - \frac{1}{2}R_1}\myvec{1&\frac{1}{2}&\frac{1}{3}&1&0&0\\0&\frac{1}{12}&\frac{1}{12}&-\frac{1}{2}&1&0\\0&\frac{1}{12}&\frac{4}{45}&-\frac{1}{3}&0&1}\\
\xleftrightarrow[]{R_3 = R_3 - R_2}\myvec{1&\frac{1}{2}&\frac{1}{3}&1&0&0\\0&\frac{1}{12}&\frac{1}{12}&-\frac{1}{2}&1&0\\0&0&\frac{1}{180}&\frac{1}{6}&-1&1}\\
\xleftrightarrow[R_3 = 180R_3]{R_2 = 12R_2}\myvec{1&\frac{1}{2}&\frac{1}{3}&1&0&0\\0&1&1&-6&12&0\\0&0&1&30&-180&180}\\
\xleftrightarrow[R_1=R_1-\frac{1}{3}R_3]{R_2 = R_2 - R_3}\myvec{1&\frac{1}{2}&0&-9&60&-60\\0&1&0&-36&192&-180\\0&0&1&30&-180&180}\\
\xleftrightarrow{R_1 =R_1-\frac{1}{2}R_2}\myvec{1&0&0&9&-36&30\\0&1&0&-36&192&-180\\0&0&1&30&-180&180}
\end{align}
Hence we see that $\vec{A_3}$ is invertible, its inverse contains integer entries, and $\vec{A_3^{-1}}$ is given by,
\begin{align}
\vec{A_3^{-1}} = \myvec{9&-36&30\\-36&192&-180\\30&-180&180}\label{A3inv}
\end{align}
Let $\vec{A_4}$ be the $4 \times 4$ matrix as follows,
\begin{align}
\vec{A_4} &= \myvec{1&\frac{1}{2}&\frac{1}{3}&\frac{1}{4}\\\frac{1}{2}&\frac{1}{3}&\frac{1}{4}&\frac{1}{5}\\\frac{1}{3}&\frac{1}{4}&\frac{1}{5}&\frac{1}{6}\\\frac{1}{4}&\frac{1}{5}&\frac{1}{6}&\frac{1}{7}}
\end{align}
Now, expressing $\vec{A_4}$ using $\vec{A_3}$ we get,
\begin{align}
\vec{A_4} &= \myvec{\vec{A_3}&\vec{u}\\\vec{u^T}& d}\label{eqA4}
\intertext{where,}
\vec{u} &= \myvec{\frac{1}{4} \\ \frac{1}{5} \\ \frac{1}{6}} \\
d &= \frac{1}{7}
\end{align}
Now assuming $\vec{A_4}$ has an inverse, then from \eqref{eqA4}, the inverse of $\vec{A_4}$ can be written using block matrix inversion as follows,
\begin{align}
\vec{A_4^{-1}} &= \myvec{\vec{A_3^{-1}}+\vec{A_3^{-1}}\vec{u}x_4^{-1}\vec{u^T}\vec{A_3^{-1}} & -\vec{A_3^{-1}}\vec{u}x_4^{-1}\\-x_4^{-1}\vec{u^T}\vec{A_3^{-1}} & x_4^{-1}}\label{eqblockinv}
\intertext{where,}
x_4 &= d-{\vec{u^T}\vec{A_3^{-1}}\vec{u}}\label{eqX}
\end{align}
Now, the assumption of $\vec{A_4}$ being invertible will hold if and only if $\vec{A_3}$ is invertible, which has been proved in \eqref{A3inv}, and $x_4$ from \eqref{eqX} is invertible, i.e., $x_4$ is a nonzero scalar.
We now prove that $x_4$ is invertible as follows,
\begin{align}
x_4 &= \frac{1}{7}-\myvec{\frac{1}{4}&\frac{1}{5}&\frac{1}{6}}\myvec{9&-36&30\\-36&192&-180\\30&-180&180}\myvec{\frac{1}{4} \\ \frac{1}{5} \\ \frac{1}{6}}\\
&= \frac{1}{7}-\frac{57}{400}\\
\implies x_4 &= \frac{1}{2800}
\intertext{Hence, $x_4$ is a nonzero scalar, so $x_4^{-1}$ exists and is given by,}
x_4^{-1} &= 2800
\end{align}
Hence, $\vec{A_4}$ is invertible. Now putting the values of $\vec{A_3^{-1}}$, $x_4^{-1}$ and $\vec{u}$ we get,
\begin{align}
\vec{A_3^{-1}}+\vec{A_3^{-1}}\vec{u}x_4^{-1}\vec{u^T}\vec{A_3^{-1}} &= \myvec{16&-120&240\\-120&1200&-2700\\240&-2700&6480}\label{A1}\\
-\vec{A_3^{-1}}\vec{u}x_4^{-1} &= \myvec{-140\\1680\\-4200}\label{A2}\\
-x_4^{-1}\vec{u^T}\vec{A_3^{-1}} &= \myvec{-140&1680&-4200}\label{A3}\\
x_4^{-1} &= 2800\label{A4}
\end{align}
Putting values from \eqref{A1}, \eqref{A2}, \eqref{A3} and \eqref{A4} into \eqref{eqblockinv} we get,
\begin{align}
\vec{A_4^{-1}} &= \myvec{16&-120&240&-140\\-120&1200&-2700&1680\\240&-2700&6480&-4200\\-140&1680&-4200&2800}\label{A4invfin}
\end{align}
Hence, from \eqref{A4invfin} we see that $\vec{A_4}$ is invertible and $\vec{A_4^{-1}}$ has integer entries.\\
Let $\vec{A_{n-1}}$ be invertible with integer entries. Then we can represent $\vec{A_{n}}$ as follows,
\begin{align}
\vec{A_{n}} &= \myvec{\vec{A_{n-1}}&\vec{u}\\\vec{u^T}&d}\label{eqAn}
\intertext{where,}
\vec{u} &= \myvec{\frac{1}{n} \\ \frac{1}{n+1} \\ \vdots \\ \frac{1}{2n-2}} \\
d &= \frac{1}{2n-1}
\end{align}
Now assuming $\vec{A_{n}}$ has an inverse, then from \eqref{eqAn}, the inverse of $\vec{A_n}$ can be written using block matrix inversion as follows,
\begin{align}
\vec{{A_n}^{-1}} &= \myvec{\vec{A_{n-1}^{-1}}+\vec{A_{n-1}^{-1}}\vec{u}x_n^{-1}\vec{u^T}\vec{A_{n-1}^{-1}} & -\vec{A_{n-1}^{-1}}\vec{u}x_n^{-1}\\-x_n^{-1}\vec{u^T}\vec{A_{n-1}^{-1}} & x_n^{-1}}\\
&= x_n^{-1}\myvec{x_n\vec{A_{n-1}^{-1}}+\vec{A_{n-1}^{-1}}\vec{u}\vec{u^T}\vec{A_{n-1}^{-1}} & -\vec{A_{n-1}^{-1}}\vec{u}\\-\vec{u^T}\vec{A_{n-1}^{-1}} &1}\label{eqblockinv1}
\intertext{where,}
x_n &= d-\vec{u^T}\vec{A_{n-1}^{-1}}\vec{u}\label{eqX1}
\end{align}
Now, the assumption of $\vec{A_n}$ being invertible will hold if and only if $\vec{A_{n-1}}$ is invertible, which has been assumed, and $x_n$ from \eqref{eqX1} is invertible, i.e., $x_n$ is a nonzero scalar. We now prove that $x_n$ is invertible as follows,
\begin{align}
x_n &= \frac{1}{2n-1}-\myvec{\frac{1}{n}&\frac{1}{n+1}&\dots&\frac{1}{2n-2}}\vec{A_{n-1}^{-1}}\myvec{\frac{1}{n}\\\frac{1}{n+1}\\\vdots\\\frac{1}{2n-2}} \label{eqX2}
\end{align}
In equation \eqref{eqX2}, $\vec{u}$ contains no negative or zero entries, and $\vec{A_{n-1}^{-1}}$ has nonzero integer entries, hence $\vec{u^T}\vec{A_{n-1}^{-1}}\vec{u}$ is a nonzero scalar. Moreover, $d$ is not equal to $\vec{u^T}\vec{A_{n-1}^{-1}}\vec{u}$, hence in \eqref{eqX2} $x_n$ is a nonzero scalar and is therefore invertible. Hence $\vec{A_n}$ is invertible, as was to be proved.
\end{document}
{ "alphanum_fraction": 0.6598645009, "avg_line_length": 44.8754578755, "ext": "tex", "hexsha": "e147ffb205a6393a69cecb7abae77eaeb232674c", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-10-01T17:05:21.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-02T11:29:27.000Z", "max_forks_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Arko98/EE5609-Matrix-Theory-", "max_forks_repo_path": "Assignment_14/Beamer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Arko98/EE5609-Matrix-Theory-", "max_issues_repo_path": "Assignment_14/Beamer.tex", "max_line_length": 390, "max_stars_count": null, "max_stars_repo_head_hexsha": "7c72720b4e5241a9dc3b62b38d4537f2cdd67e07", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Arko98/EE5609-Matrix-Theory-", "max_stars_repo_path": "Assignment_14/Beamer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4885, "size": 12251 }
%%TC:ignore Due to the coronavirus pandemic that started in March 2020, and has been ongoing since the beginning of this research project (June 2020 to August 2020), work had to be conducted remotely. \section{Coding environment} Due to the physical lab machines equipped with the GPUs being inaccessible during the pandemic, these machines had to be remotely accessed through SSH. As it is not efficient to implement an entire deep learning pipeline through a command-line interface, a \textit{Jupyter Lab} session was created on a lab computer port and forwarded back to a local personal computer.\\ This is done by first logging onto the lab machine via SSH, activating the virtual environment and launching the Jupyter Lab session (these instructions are added into a bash script to be quickly executed): \begin{lstlisting} ssh [email protected] -t ssh [email protected] source /cs/scratch/agj6/tf2/venv/bin/activate cd ~/Projects/Breast-Cancer-Detection-and-Segmentation jupyter lab --no-browser --port=8888 \end{lstlisting} Keeping the first SSH session alive, a second terminal session is opened, and the port forwarding from the lab machine to the personal computer is carried out using the following command: \begin{lstlisting} nohup ssh -J [email protected] [email protected] -L 8888:localhost:8888 -N \end{lstlisting} To prevent the two SSH sessions from being automatically killed, some SSH settings are added to the ``\textit{/etc/ssh/ssh\_config}'' file, making the personal computer send a null packet to the lab machine every 5 minutes: \begin{lstlisting} Host * ServerAliveInterval 300 ServerAliveCountMax 2 \end{lstlisting} Finally, \url{http://localhost:8888} is visited on a web browser to gain access to the Jupyter Lab interface, containing a menu bar, a file explorer, code editor and command line for running python files (see Figure~\ref{fig:appendix-jupyter_interface}). \begin{figure}[ht] \centerline{\includegraphics[width=\textwidth]{figures/appendix/jupyter_interface.png}} \caption{\label{fig:appendix-jupyter_interface}Screenshot of the Jupyter Lab interface used to implement the project.} \end{figure} \section{Code collaboration} For the group work part of the project, the code was collaboratively written and version controlled using GitHub, making use of the pull requests and merging features to combine code written by each group member. The code developed as a group can be found online on GitHub: \url{https://github.com/Adamouization/Breast-Cancer-Detection-Code}, while the code developed individually can be found online on GitHub as well: \url{https://github.com/Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning}. \section{Supervisor meetings} Weekly video meetings with the project supervisor, Dr David Harris-Birtill, and the project co-supervisor, Lewis McMillan, were planned once per week via Microsoft Teams. Additional meetings with the other group members, Ashay Patel and Shuen-Jen Chen, were conducted via Microsoft Teams as well. %%TC:endignore
{ "alphanum_fraction": 0.7976190476, "avg_line_length": 69.0666666667, "ext": "tex", "hexsha": "a7586ae3f3d174a82d55cb51609f64619c01c9d7", "lang": "TeX", "max_forks_count": 7, "max_forks_repo_forks_event_max_datetime": "2022-03-15T10:24:03.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-13T01:19:51.000Z", "max_forks_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning", "max_forks_repo_path": "report/chapters-content/appendix/remote-work-setup.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:59:33.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-19T11:36:44.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning", "max_issues_repo_path": "report/chapters-content/appendix/remote-work-setup.tex", "max_line_length": 507, "max_stars_count": 28, "max_stars_repo_head_hexsha": "a8682484886f409e10ff8ecb7326b2d0bf8b17a0", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "Adamouization/Breast-Cancer-Detection-Mammogram-Deep-Learning", "max_stars_repo_path": "report/chapters-content/appendix/remote-work-setup.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-08T16:36:15.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-17T14:20:24.000Z", "num_tokens": 751, "size": 3108 }
\section{Additional Information}
\label{sec:moreinfo}

Your grade school teacher who told you that there is no such thing as a bad question? A liar and an intellectual fraud. Before beginning this project, a member of the \PAPI team explained that ``someone who \emph{really} understands the hardware knows that there are certain questions you just don't ask.'' In \R, we generally prefer to abstract away from the hardware as much as possible. Hardware manufacturers aren't trying to make profiling as simple as possible; they're trying to execute instructions as fast as possible. As such, there are some special caveats worth making clear.

\subsection{Calling Context}

\PAPI is designed to ``listen in'' on hardware counters only from its calling context. In particular, this means that system noise should not interfere with profiler data gathered by \thispackage.

\subsection{Parallel Code}

\thispackage currently does not support the profiling of parallel code. Attempting to do so will almost certainly return incorrect values. The exception would be SPMD-written code using MPI, where each R process is single threaded.

This is very easy to see with \R's \code{mclapply()} from the \pkg{parallel} package. Consider this simple example:

\begin{Output}
system.flops(lapply(1, function(i) as.double(1:1000)/100))$flpops
# [1] 1008

system.flops(mclapply(1:2, function(i) as.double(1:1000)/100))$flpops
# [1] 94
\end{Output}

Here we measure the number of floating point operations performed in dividing the (float) numbers 1 through 1000 by 100. In the first example, we get 1000 and some change (the \PAPI + \thispackage + \R overhead). In the second, we get\dots94? Where did the others go? \PAPI is designed to profile only the calling context, and \code{mclapply()} does its computation in forks, outside the scope of \PAPI's ability to profile.

This is (usually) true as well of multi-threading, for example, via OpenMP, although this is more difficult to describe, and frankly is platform dependent. Consider this example:

\begin{lstlisting}[language=rr]
n <- 5000
x <- matrix(rnorm(n*n), n, n)

system.flops(x %*% x)
\end{lstlisting}

If we run this with OpenBLAS using 4 threads, on this reference Sandy Bridge hardware, we find:

\begin{Output}
$real_time
[1] 7.485725

$proc_time
[1] 7.457387

$flpops
[1] 13

$mflops
[1] 1.743238e-06
\end{Output}

This is obviously not what we intended to find. Using the serial Atlas BLAS on the same machine, we find:

\begin{Output}
$real_time
[1] 20.52688

$proc_time
[1] 20.50697

$flpops
[1] 110667824

$mflops
[1] 5.396596
\end{Output}

which is almost certainly closer to reality.
{ "alphanum_fraction": 0.7552160954, "avg_line_length": 28.8602150538, "ext": "tex", "hexsha": "6e6a68d98dba79b2be3453bd5fd105a6b266fcd6", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-10-28T16:17:37.000Z", "max_forks_repo_forks_event_min_datetime": "2015-09-05T05:21:14.000Z", "max_forks_repo_head_hexsha": "708bee501de20eb82829e03b92b24b6352044f49", "max_forks_repo_licenses": [ "Intel", "BSD-3-Clause" ], "max_forks_repo_name": "QuantScientist3/pbdPAPI", "max_forks_repo_path": "vignettes/include/06-addendum.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "708bee501de20eb82829e03b92b24b6352044f49", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Intel", "BSD-3-Clause" ], "max_issues_repo_name": "QuantScientist3/pbdPAPI", "max_issues_repo_path": "vignettes/include/06-addendum.tex", "max_line_length": 88, "max_stars_count": 8, "max_stars_repo_head_hexsha": "cb3fad3bccd54b7aeeef9e687b52d938613a356e", "max_stars_repo_licenses": [ "Intel", "BSD-3-Clause" ], "max_stars_repo_name": "wrathematics/pbdPAPI", "max_stars_repo_path": "vignettes/include/06-addendum.tex", "max_stars_repo_stars_event_max_datetime": "2016-02-01T20:13:43.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-14T17:00:51.000Z", "num_tokens": 738, "size": 2684 }
\documentclass{amsart}
\begin{document}
\section{Minipage}
% CC-by from https://tex.stackexchange.com/questions/576515/numbering-the-equations-in-latex-with-minipage-and-eqnarray
\begin{eqnarray}
\begin{minipage}{\textwidth}
\begin{split}
P_{LS}(f)=\frac{1}{2\sigma^2}\*\Bigg\{\frac{[\sum_{k=1}^N\*(x_k-\bar{x})\cos(2\pi{f}(t_k-\tau))]^2}{\sum_{k=1}^N\*\cos^2(2\pi{f}(t_k-\tau))}\\
+\frac{[\sum_{k=1}^N\*(x_k-\bar{x})\sin(2\pi{f}(t_k-\tau))]^2}{\sum_{k=1}^N\*\sin^2(2\pi{f}(t_k-\tau))}\Bigg\}\footnote[2]{To see more about Matlab Plomb equations and described in Equation 4.1 variables, \newline visit Matlab Plomb Documentation www.mathworks.com/help/signal/ref/plomb.html}
\end{split}
\end{minipage}
\end{eqnarray}
\end{document}
{ "alphanum_fraction": 0.6956521739, "avg_line_length": 38.7368421053, "ext": "tex", "hexsha": "b18d4a2567bd4dc41d0c37a8fa8eab1e491f8578", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "be292ba03326441656d0aa8a192c7b7ba9c25063", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "zorkow/COV885", "max_forks_repo_path": "exercises/math_tests/antipatterns/minipage.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "be292ba03326441656d0aa8a192c7b7ba9c25063", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "zorkow/COV885", "max_issues_repo_path": "exercises/math_tests/antipatterns/minipage.tex", "max_line_length": 433, "max_stars_count": null, "max_stars_repo_head_hexsha": "be292ba03326441656d0aa8a192c7b7ba9c25063", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "zorkow/COV885", "max_stars_repo_path": "exercises/math_tests/antipatterns/minipage.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 292, "size": 736 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{Conditional Statements} \begin{description} \begin{lstlisting}[caption={\texttt{if} statements}] walker init { a = 4; b = 5; if(a < b): std.out("Hello!"); } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] Hello! \end{lstlisting} \item[Description] \texttt{} \end{description} \begin{description} \begin{lstlisting}[caption={\texttt{else} statement}] walker init { a = 4; b = 5; if(a == b): std.out("A equals B"); else: std.out("A is not equal to B"); } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] A is not equal to B \end{lstlisting} \item[Description] \texttt{} \end{description} \begin{description} \begin{lstlisting}[caption={\texttt{elif} statement}] walker init { a = 4; b = 5; if(a == b): std.out("A equals B"); elif(a > b): std.out("A is greater than B"); elif(a == b - 1): std.out("A is one less than B"); elif(a == b - 2): std.out("A is two less than B"); else: std.out("A is something else"); } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] A is one less than B \end{lstlisting} \item[Description] \texttt{} \end{description} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{Loops} \begin{description} \begin{lstlisting}[caption={\texttt{for} loops}] walker init { for i=0 to i<10 by i+=1: std.out("Hello", i, "times!"); } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] Hello 0 times! Hello 1 times! Hello 2 times! Hello 3 times! Hello 4 times! Hello 5 times! Hello 6 times! Hello 7 times! Hello 8 times! Hello 9 times! \end{lstlisting} \item[Description] \texttt{} \end{description} \begin{description} \begin{lstlisting}[caption={\texttt{for} loops iterating through lists}] walker init { my_list = [1, 'jon', 3.5, 4]; for i in my_list: std.out("Hello", i, "times!"); } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] TEST CASE NOT GENERATED YET \end{lstlisting} \item[Description] \texttt{} \begin{remark} \begin{tBox} Remember, though this looks very much like python, the \texttt{:} operator here indicates single line block. Braces should be used for multiline code blocks, e.g., \begin{lstlisting} for i in my_list { if(i == 'jon'): i = 5; std.out("Hello", i, "times!"); } \end{lstlisting} \end{tBox} \end{remark} \end{description} \begin{description} \begin{lstlisting}[caption={ \texttt{while} loops }] walker init { i = 5; while(i>0) { std.out("Hello", i, "times!"); i -= 1; } } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] Hello 5 times! Hello 4 times! Hello 3 times! Hello 2 times! Hello 1 times! \end{lstlisting} \item[Description] \texttt{} \end{description} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{Loop Control Statements} \begin{description} \begin{lstlisting}[caption={\texttt{break} statement}] walker init { for i=0 to i<10 by i+=1 { std.out("Hello", i, "times!"); if(i == 6): break; } } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] Hello 0 times! Hello 1 times! Hello 2 times! Hello 3 times! Hello 4 times! Hello 5 times! Hello 6 times! \end{lstlisting} \item[Description] \texttt{} \end{description} \begin{description} \begin{lstlisting}[caption={\texttt{continue} statement}] walker init { i = 5; while(i>0) { if(i == 3){ i -= 1; continue; } std.out("Hello", i, "times!"); i -= 1; } } \end{lstlisting} \item[Output] \texttt{} \begin{lstlisting}[language=shell] Hello 5 times! Hello 4 times! Hello 2 times! Hello 1 times! 
\end{lstlisting} \item[Description] \texttt{} \end{description}
{ "alphanum_fraction": 0.5852891362, "avg_line_length": 24.0290697674, "ext": "tex", "hexsha": "a2c340f728d3596cec2d9f10e224121c98b3b48f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cca187ed3e6aae31514c6c0353a7844f7703d039", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Gim3l/jaseci", "max_forks_repo_path": "archive/old_book/tex/sec/sec-jaccontrol.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cca187ed3e6aae31514c6c0353a7844f7703d039", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Gim3l/jaseci", "max_issues_repo_path": "archive/old_book/tex/sec/sec-jaccontrol.tex", "max_line_length": 181, "max_stars_count": null, "max_stars_repo_head_hexsha": "cca187ed3e6aae31514c6c0353a7844f7703d039", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Gim3l/jaseci", "max_stars_repo_path": "archive/old_book/tex/sec/sec-jaccontrol.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1231, "size": 4133 }
\part{MOSAIC Components}

\p{In recent years --- as publishing has become more \q{digitized} --- many online platforms have emerged for scientists to share and publish their research work, including both raw data and scientific papers. These portals generally provide an index/search feature where readers can locate research based on the name of the author, title of the publication, keywords, or subject matter, along with a document viewer where readers can view the abstract of each publication (and sometimes full text of the article) and other publication details (such as co-authors, date of publication, bibliographic references, etc.). However, these portals offer only limited features for interactively exploring publications, particularly when it comes to examining data sets and/or browsing multimedia assets associated with publications, such as audio, video, interactive diagrams, or \ThreeD{} graphics.}

\p{The \MOSAIC{} system is envisioned, overall, as a suite of code libraries developed by LTS allowing institutions to host publication repositories --- and individual users to access publication repositories --- free of the limitations of existing scientific document portals. In the immediate future, LTS is focused on the more modest goal of implementing software to access existing portals (via \MdsX{}) and to create individual data sets (via \dsC{}). With \MOSAIC{}, each publication can be linked to a \textit{supplemental archive} that contains information about the author's research methods, data sets, and focal topics. If desired, these archives can include machine-readable representations of full publication text, to support advanced text-mining techniques across the repository. The supplemental archive can be explicitly linked to publications within their host repository, or it may be maintained in a decentralized manner external to the hosting platform.}

\p{Using \MOSAIC{}, developers can implement a hosting platform and/or a client platform for this supplemental material, in addition to the publications themselves, where the platform provides software enabling readers to browse and access supplemental archives and their data sets and/or methodological descriptions. A \MOSAIC{} publication repository, also called a \MOSAIC{} \q{portal,} is a structured collection of data and documents which can be hosted via web servers (including fully encapsulated and containerizable cloud services) implemented with the help of \MOSAIC{} libraries.\footnote{\MOSAIC{} repositories can serve as general-purpose portals, hosting academic papers covering a broad range of topics. Alternatively, \sMOSAIC{} repositories can serve as targeted portals hosting papers focused on more narrowly defined subject domains. Organizations can utilize \sMOSAIC{} internally as the basis of a private document management system (\sDMS{}), or to provide (public) open access to complete publications, including human-readable and machine-readable full text and all supplemental content, with no restrictions\footnote{Excepting sensitive/private information which might be provided via a dedicated component within the portal with specific features for authors, editors, and administrators. Any project which \textit{does} restrict access to some part of any document requires a commercial \MOSAIC{} license.}.
\lMOSAIC{} can be customized for different subject areas by incorporating domain-specific ontologies into document-search features as well as machine-readable document-text annotations.} \p{\lMOSAIC{} is designed so that the software to access publications and publications' supplemental archives can be embedded in scientific-computing applications via \MOSAIC{} \textit{plugins}. This ensures that publications and data sets can be examined interactively with the same software that scientists employ to conduct research. \lMOSAIC{} also introduces a \textit{structured reporting system} (\MOSAICSR{}) for describing research/experimental/lab methods/protocols. Supplemental archives, plugins, and the structured reporting system are outlined below.} \section{Supplemental Archives} \p{\lMOSAIC{} supplemental archives are additional resources paired with \MOSAIC{} publications. In general, supplemental archives may include raw data, descriptions of research methods, or annotated data linked to publication texts. Each supplemental archive could have any of the following resources: \begin{enumerate} \item Interactive versions of the publication, with annotations indicating important concepts and phrases, perhaps aggregated into a \q{glossary} defining technical terms; \item Machine-readable representations of document texts, with special-purpose character encodings designed to facilitate Text and Data Mining (\TDM{}); \item Structured files containing raw data discussed in the publication, along with interactive software allowing scientists to access and reuse the data; \item Detailed reports of the author's research methods and experimental setup and/or protocols, conforming to the relevant standards with respect to the publication's subject classification --- for instance, the \q{Minimum Information for Biological and Biomedical Investigations} (\MIBBI{}) includes about 40 specific standards for different branches of biology and medicine; \item Representations of analytic methods and algorithms underlying the research findings, which are provided directly via computer code or indirectly via formal descriptions of computational workflows; \item Self-contained computer software which demonstrates code that the author developed and/or used for analyzing/curating research data; and \item Multi-media assets such as audio or video files, annotated images, \ThreeD{} graphics, interactive \TwoD/\ThreeD{} plots and diagrams, or other kinds of non-textual content which needs to be viewed with special multi-media software. \end{enumerate} } \p{The contents of a supplemental archive will be different for different publications, depending on whether the archive contains specific raw data or just a summary of the methods used to obtain the raw data. In the former case, a typical archive will include a \textit{data-set application}, or a semi-autonomous software component allowing researchers to study and visualize the data set, typically provided in turn via data files that are also located in the archive. The data-set application will, in general, provide both a visual interface for raw data and code libraries for computationally manipulating this data; typically raw data files will encode serialized data structures that can be deserialized by code (e.g., \Cpp{} classes) included in the data-set application. \lMOSAIC{} includes generic implementations of a data-set explorer (\q{\MdsX{}}), which can be adopted to the specific kind of information that needs to be serialized for a given publication. 
As a result, data-set application can then be packaged as a plugin to existing scientific software within the relevant scientific discipline.} \section{\protect\llMOSAIC{} Plugins} \p{By design, \lMOSAIC{} Portals will provide a suite of plugins for existing scientific software, allowing publications and supplemental archives hosted on the portal to be read and examined within computer software associated with each publication's subject matter. For example, articles about chemistry could be read within \IQmol{}, a molecular visualization program; articles about cellular biology and bioimaging could be read within \CaPTk{} (the Cancer Imaging and Phenomics Toolkit), an image-processing application; articles discussing novel computer code could be read within \Qt{} Creator, an Integrated Development Environment (\IDE{}) for programming; etc. The advantage of accessing a publication and a supplemental archive within actual scientific software is that it allows research work to be understood, evaluated, and reused within the computing environments which scientists typically use to conduct professional research. This is different than existing science publication portals, which generally rely on web browsers to access supplemental materials like data sets and multimedia files --- in this standard setup the software ecosystem wherein readers examine published research is fundamentally separate from the software platforms where research is actually conducted. This example demonstrates how existing portals are \textit{limited} in their ability to share research in rigorously reusable and replicable ways --- and how \MOSAIC{} offers an improvement.} \p{Embedding presentations of their research within existing scientific software has the added benefit for authors of making their work more practically and immediately useful for the scientific community. Such presentations establish the computational framework for pragmatically deploying their techniques in real-world scientific contexts, accelerating the pace at which research work can be translated to concrete scientific (and clinical/lab/experimental) practice. As one example, research involving novel image analysis techniques could be packaged so as to target a \MOSAIC{} plugin for bioimaging software such as \CaPTk{}, so that readers could actually run the author's code as a \CaPTk{} module.\footnote{This toolkit provides a good case-study for research publication because it has an innovative \Qt{} and Common Workflow Language based extension mechanism; cf. \bhref{https://cbica.github.io/CaPTk/tr_integration.html}.} This is important because such functional assessment and adoption of novel contributions is harder to carry out if a body of research work is described indirectly within article text, as opposed to being concretely implemented within a specific scientific application.} \p{Another benefit of using plugins to access supplemental archives is that the host application will usually provide more sophisticated multimedia and data visualization capabilities, compared to static \PDF{} images or even interactive web portals. Publishers have begun to develop online platforms for browsing research papers in conjunction with multimedia content such as interactive diagrams and \ThreeD{} graphics --- a physical model of a protein or a chemical compound, for instance, can be viewed online via \WebGL{}; such graphics could even be embedded as an \q{\textbf{iframe}} within an \HTML{} version of the publications. 
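\p{To make the data-set application concept concrete, the listing below gives a minimal \Cpp{} sketch of the kind of interface such a component might expose to a host application. The names used here (\texttt{MdsDataset}, \texttt{MdsPlugin}, and their methods) are illustrative assumptions made for this overview only; they are not part of a published \MOSAIC{} or \MdsX{} interface.}

\begin{verbatim}
// Hypothetical sketch only: MdsDataset/MdsPlugin and their methods are
// illustrative assumptions, not a published MOSAIC or MdsX API.
#include <memory>
#include <string>
#include <vector>

// A deserialized data set shipped inside a supplemental archive.
class MdsDataset {
public:
  virtual ~MdsDataset() = default;
  // Reconstruct records from a serialized file bundled in the archive.
  virtual bool deserialize(const std::string& path) = 0;
  // Expose column names and raw values to the host application.
  virtual std::vector<std::string> fields() const = 0;
  virtual std::vector<double> column(const std::string& field) const = 0;
};

// Entry point a host application (e.g., an imaging or chemistry tool)
// would call in order to embed the publication's data-set explorer.
class MdsPlugin {
public:
  virtual ~MdsPlugin() = default;
  virtual std::string publicationId() const = 0;  // link back to the paper
  virtual std::shared_ptr<MdsDataset>
  openDataset(const std::string& name) = 0;       // look up a data set by name
  virtual void showViewer() = 0;                  // launch the interactive explorer
};
\end{verbatim}

\p{The intent of such a design is that the host application links against a small, stable plugin interface, while each supplemental archive supplies its own concrete implementation of these classes together with the serialized data files that the implementation knows how to read.}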
Publishers consider this to be cutting-edge technology. However, the same molecular \ThreeD{} model, when viewed in \IQmol{}, can be enhanced with many additional visual features, representing bonds, orbitals, torsion angles, etc. The multimedia experience of exploring chemistry data in custom software like \IQmol{} is therefore much greater than the experience of generic web multimedia, which means that the scientific software is a better forum for showcasing novel research.} \section{\protect\llMOSAIC{} Structured Reporting (\protect\lsMOSAICSR{})} \p{The \MOSAIC{} structured reporting framework (\MOSAICSR{}) includes tools to help authors develop interactive presentations supplementing academic documents, and specifically to use supplemental archives to document how their research has been conducted. With \MOSAICSR{}, authors can implement or reuse code libraries that report on research/experiment methods, workflows, and protocols. The \MOSAICSR{} information may be structured as a \q{minimum information checklist} conformant to standards such as those collectively gathered into the \MIBBI{} recommendations; in this case \MOSAICSR{} would be applied by implementing object models instantiating \MIBBI{} policies. Alternatively, \MOSAICSR{} reports can be derived from actual computer programs simulating research workflows, similar to \BioCoder{}\footnote{See \bhref{https://jbioleng.biomedcentral.com/articles/10.1186/1754-1611-4-13}.} (which is included by LTS, in an updated \Cpp{} version, as one \MOSAIC{} library). Finally, \MOSAICSR{} presentations may be based on annotations applied to research/analytic code. For example, in the context of image analysis, the \Pandore{} project (an image-processing application) provides an \q{Image Processing Objectives} Ontology as well as a suite of image-processing operations that can be called from computer code. Image-analysis pipelines can therefore be explained by annotating the pertinent function-calls (which \Pandore{} calls \q{operators}) with terms from the \Pandore{} controlled vocabulary, providing annotation targets for \MOSAICSR{} presentations.} \p{\lMOSAICSR{} can express both computational workflows that are fully encapsulated by published code and real-world protocols concerning laboratory equipment and physical materials or samples under investigation. In the latter guise, \MOSAICSR{} code can employ or instantiate standardized terminologies and data structures for describing experiments --- such as \MIBBI{} policies or \BioCoder{} functions. In this case, the role of \MOSAICSR{} code is to serve as a serialization/deserialization endpoint for sharing research metadata. Conversely, when workflows are fully implemented within software developed as part of a body of research, \MOSAICSR{} can provide a functional interface allowing this code to be embedded in scientific software. For the latter, \MOSAICSR{} provides a framework for modeling how a software component specific to a given research project exposes its functionality to host and/or networked peer applications. There are also instances where both scenarios are relevant --- the \MOSAICSR{} code would simultaneously document real-world experimental protocols and construct a digital interface as part of a workflow which is part digital (\q{\textit{in silico}}) and part \q{real-world} (\q{in the lab}).} \section{\protect\llMOSAIC{} Annotations} \p{Included in any \MOSAIC{} plugin would be specially-designed \PDF{} viewers for interactively reading authors' papers. 
In particular, these \PDF{} applications would recognize cross-references linking between publications and their associated supplemental archives. This would allow authors to identify concepts which are discussed and/or represented both in the research paper and in the archive, annotating their papers accordingly. For example, the concept \q{\RNA{} Extraction} may be discussed in a publication text, and also formally declared as one step in the lab processing as represented via \BioCoder{}, summarized in a \BioCoder{}-generated chart, and included in the supplemental archive (a similar example is used in the documentation for \BioCoder{}). The \PDF{} viewer would then ensure that the phrase \q{\RNA{} Extraction} in the text is interactively linked to the concordant step in the experimental process, so that readers would then be able to view the \BioCoder{} summary as a context-menu action associated with the phrase where it appears in the \PDF{} file. For a different example, the phrase \q{Oxygenated Airflow} refers to airflow in assisted-breathing devices, such as ventilators; to ensure that the device is working properly, the equipment must be monitored to check that a steady stream of oxygen reaches the patient. Research into the design and manufacturing of ventilators and similar devices may then include \q{Oxygenated Airflow} both as a phrase within the article text and as a parameter in data sets evaluating the device's performance. In this situation, the publication-text location of the \q{Oxygenated Airflow} phrase should once again be annotated with links to the relevant part of the data set (e.g., a table column) where measurements of airflow levels are recorded.} \part{Concrete Applications} \section{\protect\llMOSAIC{} in the context of Image Analysis} \p{This section will focus on one specific application of \MOSAICSR{} in the context of image analysis and bioimaging --- specifically, what we term \DSPIN{} (\q{Data Structure Protocol for Image-Analysis Networking}). \lDSPIN{} adds a narrower focus by extending \MOSAICSR{}. Software that uses the \DSPIN{} protocol is able to provide a description of image processing capabilities which have been utilized and/or are functionally exposed by code and data associated with a research project. This includes \q{structured reporting} of research objectives as well as a concrete interface for invoking analyses associated with the relevant research (either new algorithms or techniques used to obtain reported findings). \lDSPIN{}, in turn, is based on \CaPTk{} and \Pandore{} (which includes both data models and interactive software) and the \Pandore{} \q{Image Processing Objectives} Ontology, mentioned above. \DSPIN{} adopts protocols from \CaPTk{} in order to accept information about how different objectives are merged into workflows, particularly with respect to implementating image-analysis capabilities as extensions to a core application, and with respect to \CaPTk{}'s implementation of the Common Workflow Language (\CWL{}). In effect, \DSPIN{} formalizes the data models and prototypes adopted by \Pandore{} and \CaPTk{} so as to concretize \MOSAICSR{} for the specific domain of image processing and Computer Vision. 
The sections below will therefore outline \DSPIN{} features in the context of \MOSAICSR{} design principles and objectives.} \section{Meta-Procedural Modeling in \protect\lsDSPIN{} and \protect\lsMOSAICSR{}} \p{When developed as a systematic outline of computational workflows --- not just a digital summary of real-world (e.g. lab) protocols --- \MOSAICSR{} reports can be formalized using a \MOSAIC{} component which we call \q{Hypergraph Multi-Application Configuration Language} (\HMCL{}). Most approaches to modeling research workflows involve some concept of \q{meta-objects},\footnote{See the \sVISSION{} system: \bhref{https://pdfs.semanticscholar.org/1ad7/c459dc4f89f87719af1d7a6f30e6f58dff17.pdf}.} \q{tools} (in the terminology of \CWL{}), or \q{transitions} (in the language of Petri Net theory). In \HMCL{}, the equivalent concept is that of \textit{metaprocedures}, which are analogous to ordinary computational procedures but add extra sources of information concerning input and output parameters. In general, rather than simply passing an input value into an executable routine, metaprocedures define steps which can be taken to acquire the proper values when needed. Aside from ordinary runtime values, the most important input sources are methods defined on \GUI{} components; command-line parameters; file contents; and not-yet-evaluated expressions (perhaps encapsulated in scripts or function pointers). A metaprocedure formulation abstracts the acquisition of input arguments (or the consumption of values within \q{channels} via which a procedure sends and receives data) from the concrete procedure or procedures which are eventually executed. Therefore, an \HMCL{} metaprocedure definition has two separate parts: a preamble where input sources are described; and an executive sequence where concrete procedures are indicated. A \textit{meta-evaluator} then operates in accord with these definitions, concretizing the input values and launching the actual procedure(s). For \DSPIN{}, metaprocedures can be defined using a framework based on \BioCoder{}, but adopted to the imaging and Computer Vision context.} \p{Image analysis methods are often described in academic literature in terms of mathematical formulae and/or characterizations of computing environments (such as Graphical Processing Units). It requires additional construction to translate these overviews into actual computer code. Once Computer Vision innovations are in fact concretely implemented, there is then an additional stage of development requisite for users to enact in practice the computations described in the research. Although it is theoretically possible to demonstrate novel methods within fully self-contained autonomous applications, it is more convenient for users if research code is integrated with existing imaging software. 
Along these lines, the \DSPIN{} interface can help connect new code to existing applications, allowing users to access the new code's functionality through \GUI{} actions, command-line invocations, or inter-application messaging.}

\p{In addition to pragmatically enabling application embedding, \DSPIN{} models represent research methods and theories, contributing to transparency and reusability according to the \MIBBI{} and \FAIR{} (Findable, Accessible, Interoperable, Reusable) standards.\footnote{See \bhref{https://www.researchgate.net/publication/331775411_FAIRness_in_Biomedical_Data_Discovery}.} This can be achieved, in part, by implementing data structures conforming to \Pandore{} Image Processing Objectives. However, \DSPIN{} embeds this logic in an Object-Oriented context which allows imaging-specific workflow notations to be paired with specifications outside of image-processing in the narrowest sense. This allows \DSPIN{} to be available for hybrid computational-objectives representations which are only partially covered by the imaging domain --- analogous to the \MIAPEGI{} (Gel Electrophoresis Informatics) component of \MIAPE{} (Minimum Information About a Proteomics Experiment). The following section will discuss several domains where \DSPIN{} has been explicitly integrated with code libraries codifying \MIBBI{}-style research protocols.}

%\vspace{-1em}

\section{\protect\lsDSPIN{} in Contexts Supplemental to Image Processing}
\vspace{2em}

\subsection{Image Flow Cytometry}

\p{One important use-case for biomedical image processing is to analyze cellular microscopy in conjunction with cytometric experiments which indirectly investigate cells and cellular-scale entities (such as proteins). Conceptually, image analysis and flow cytometry (\FCM{}) analysis are mathematically similar, and some commercial cytometry software has been extended with image-processing capabilities. The overlap between cytometric and image analysis has also inspired attempts to merge cytometry standards, such as \MIFlowCyt{} (the Minimum Information about a Flow Cytometry Experiment policy within \MIBBI{}), with bioimaging standards such as \DICOM{} (Digital Imaging and Communications in Medicine). One such proposal is due to Robert Leif, who argues that \q{The large overlap between imaging and flow cytometry provides strong evidence that both modalities should be covered by the same standard} and has formalized an \XML{} language (\CytometryML{}) to serve as that overarching bridge.\footnote{See \bhref{https://spie.org/Publications/Proceedings/Paper/10.1117/12.2295220?SSO=1}.} The \DSPIN{} project builds on this work by introducing its own \FCM{}/\DICOM{} hybrid, albeit in an object-oriented rather than \XML{}-based context, though it also incorporates an expanded \XML{}-oriented schema (discussed in the next part of this paper). As a reference implementation for this \DSPIN{} extension, the project also provides a pure-\Cpp{} cytometry library based on the \openCyto{} (see \bhref{https://www.bioconductor.org/packages/release/bioc/html/cytolib.html}) and \FACSanadu{} (see \bhref{https://www.biorxiv.org/content/biorxiv/early/2017/10/13/201897.full.pdf}) libraries, while eliminating external dependencies such as \R{} and \Java{}.\footnote{Upon request, LTS can provide a detailed summary of this project reconciling \sopenCyto{} and \sFACSanadu{}.
Specifically, although \sopenCyto{} as a whole uses \sR{} for its visual layer, \sopenCyto{} contains \scytoLib{}, a \sCpp{} library for cytometric analysis, which LTS employs as the basis for a standalone pure-\sCpp{} flow cytometry application. Meanwhile, \sFACSanadu{} is a \sJava{} application which uses a \sQt{} front-end that LTS is migrating to \sCpp{}. The supplemental information supplied by LTS about the \sFACSanadu{}/\scytoLib{} integration identifies the specific data types where \scytoLib{} code (different kinds of gates, for instance) can be incorporated as alternatives to the \sFACSanadu{} \sJava{} equivalents, while still using \sFACSanadu{} \sGUI{} classes.} The \FCM{}/\DICOM{} bridge is implemented in this context via a \DSPIN{} supplement to \DICOM{} based on \q{Semantic \DICOM{},} which is an effort to standardize query processing within \PACS{} (Picture Archiving and Communication System) workstations and to more effectively integrate \DICOM{} with clinical data.}

\subsection{A Semantic \protect\lsDICOM{} Object Model}

\p{As a formal representation of imaging workflows, \DSPIN{} would reasonably be paired in many contexts with \DICOM{}, insofar as \DICOM{} represents the canonical standard for exchanging medical image data. For its applications within the medical-imaging context, \DSPIN{}, therefore, provides object-oriented accessors to \DICOM{} data such that image-processing objectives and \DICOM{} object models can interoperate. This object-oriented foundation also provides a basis on which to further integrate clinical data in the form of \q{Semantic} \PACS{} models.\footnote{See \bhref{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5119276/}.} The original Semantic \PACS{} implementation draws clinical data from \DICOM{} headers, and represents this extracted information via \RDF{}. In keeping with \MOSAIC{}'s more object-oriented focus, the \MOSAIC{}-Semantic \DICOM{} (\MOSAICSD{}) integration is engineered instead as an extension to the \DICOM{} Toolkit (\DCMTK{}) library, though it imports many constructs within Semantic \DICOM{} in the guise of a \q{Hypergraph Ontology} --- one example of a system \MOSAIC{} uses to merge Semantic Web schema with Object-Oriented code. In short, \MOSAICSD{} extends \DSPIN{} by defining a central \DICOM{} object model affixed to both clinical and image-processing object models.}

\p{The \MOSAICSD{} object system applies not only to \DICOM{} integrated with statements of image-processing objectives, but also to other biomedical contexts where image analysis should be integrated with other analytic modalities as well as with clinical or epidemiological information. For example, Flow Cytometry overlaps with clinical data tracking because one of \FCM{}'s essential investigative roles is to examine patients' immunological response to diseases and/or interventions. In the context of Covid-19, say --- with respect to achieving a deeper understanding of how and why SARS-CoV-2 symptoms present differently in different patients --- \q{The starting point will likely be a deep characterization of the immune system in patients with different stages of the disease.}\footnote{See \bhref{https://onlinelibrary.wiley.com/doi/full/10.1002/cyto.a.24002}.} That is, \FCM{} observations need to be matched with clinical data in order to classify (and consider statistical correlations between) immunological findings and clinical facts: risk factors, sociodemographics, disease progression, and so forth.
This analysis intrinsically assumes that \FCM{} data can be transparently linked with all relevant clinical data, but such integration is difficult even in \DICOM{}, whose headers are specifically designed for preserving patient data across picture-sharing networks (there is no analogous \q{header} component in the Flow Cytometry Standard, \FCS{}). In brief, \DSPIN{} extensions to non-imaging domains can promote data integration insofar as \DSPIN{} Clinical Object Models, based on Semantic \DICOM{}, provide an affixation point for clinical data in analytic contexts which are operationally related to image analysis, and not just to image analysis by itself.}

\subsection{Semantic \protect\lsDICOM{} via \protect\lsDCMTK{} Extensions}

\p{As a concrete example of how a Semantic \DICOM{} Object Model could be implemented, consider the process of developing \DICOM{} extensions by modifying existing \DICOM{} client libraries, such as \DCMTK{}. As a file \textit{standard}, \DICOM{} itself is an abstraction; the \DICOM{} protocol only becomes a concrete phenomenon insofar as \PACS{} workstations, radiology software, and other applications employ \DICOM{} client libraries. Therefore, any \DICOM{} \textit{extension} is similarly concretized only by an application which links against modified \DICOM{} libraries that recognize the extended syntax and/or semantics. When implementing a \DICOM{} extension, accordingly, we can assume that the modified client libraries exist in an ambient context which provides capabilities that can be leveraged by the extension as it becomes formalized. For example, if a \DICOM{} extension provides a unified \FCM{}/\DICOM{} format, we can assume that the workstation linking against modified \DCMTK{} libraries supports \FCM{} features, such as the ability to construct geometric gating models (that is, rectangular or ellipsoid segments on \FCM{} images) and to consume information about the cytometric equipment used to derive the current \FCM{} data.}

\p{In the case of Semantic \DICOM{}, we can similarly assume that the host application which loads an enhanced \q{Semantic} version of a \DICOM{} client library has the ability to consume more complex clinical data models than are incorporated into conventional \DICOM{}. In the case of \DCMTK{}, patient information is accessed through a \textbf{DcmDataset} object, which in turn contains multiple \textbf{DcmObject} values spanning several more specific types. A \DICOM{} extension can, accordingly, be concretized by expanding the range of object types subsumed under \textbf{DcmObject}, as well as modifying the \textbf{DcmDataset} code so that instances of these extended types can be identified and passed off to the appropriate handlers. Developers can then bundle files serializing these extended types within the overall collection of files zipped into a single \DICOM{} resource.}

\p{The steps outlined above allow an extension object model to be embedded in \DICOM{} by hooking extra processing code into the procedures whereby conventional \textbf{DcmDataset}s are extracted from \DICOM{} files. It is then necessary to ensure that the \DICOM{} client application can properly manipulate the extension object data. This could be handled simply by linking all necessary libraries into the \DICOM{} client itself, but a more flexible solution is to employ some form of multi-application networking.
In the case of non-standard clinical data, a flexible approach would be to pair client libraries for serializations of such data with a standalone application that can parse these serializations and respond to requests about the encoded information. With such an application in place, the \DCMTK{} host application would not need to implement all logic related to the extension object model; it would simply need to be able to launch and/or communicate with a secondary application tailored to each individual extension in particular.}

\p{As a concrete example, consider a specialized data model and self-contained application specifically devoted to tracking immunological responses and post-recovery symptomology for Covid-19. The goal would be to develop more sophisticated technology for modeling the progression of active Covid-19 infections, as well as the lingering effects of the disease for patients who experience adverse reactions (such as lasting neurological damage or lung impairment) even after their ostensible recovery. Detailed models of Covid-19 affliction could then be matched against epidemiological, treatment, and sociodemographic data to determine whether there are predictors of which patients may experience more severe or long-lasting symptoms, and whether these symptoms can be mitigated by different treatment options. The data consumed by such a Covid-specific application might be bundled into \DICOM{} files even if \DICOM{} clients cannot directly process this data. In such a case, \DICOM{} clients could instead be programmed to launch the Covid-specific application and query that application for select pieces of information that \textit{are} relevant for a \DICOM{} workstation (e.g., an immunological profile of the patient adding layers of detail to cytological imaging). In short, a Covid-specific application would potentially be accessed by several different peer applications addressing several different biomedical domains (bioimaging, cytometry, epidemiology, genomics), and could provide different sorts of information from a Covid-specific data model for different contexts: given an overall information package about a patient's SARS-CoV-2 immunology and symptomology, some categories of data will be relevant for bioimaging, while others will be relevant for genomics, and so forth. The file formats employed by these various peer applications (such as \DICOM{}, in the bioimaging context) may then be extended only as much as is needed to seed shared data packages with enough information to launch the Covid-specific application and request the information relevant to each peer application's own biomedical domain.}

\subsection{Geo-Imaging and Geographic Information Systems}

\p{Another area where \DSPIN{} provides structured object models is that of Geographic Information Systems (\GIS{}) annotations. There is a direct link between image processing and \GIS{} insofar as identifying geotaggable features is one dimension or application of Computer Vision. Effectively manipulating geoimaging data requires mathematical translations between several different coordinate systems, in both two and three dimensions. These coordinate transforms --- as well as semantic interpretations of geoimage segments (buildings, land features, roads, etc.)
--- can serve as the basis for an object model attaching image-processing objectives to \GIS{} workflows.}

\p{In conventional \GIS{} annotation, data structures are linked both to geospatial coordinates and to visual cues or icons allowing locations of interest to be indicated on maps. The actual geotagged data structures could be derived from any domain; as such, any object model may be integrated with \GIS{} annotations so long as one can assign spatial interpretations to the phenomena computationally encapsulated by the domain in question. In a medical context, for instance, geotagged data might represent the scope of a vaccination campaign, or the extent of an epidemic, along with relevant geographic or civic features (villages, medical clinics, national borders, and so on). The actual map as a visible digital artifact therefore serves as a virtual glue where clinical, geographical, governmental, and \GUI{} data are all sutured together. Insofar as geoimaging involves analysis of photographed land features and/or urban environments, image-processing information represents a further object model that can be added to the mix. Even when dealing solely with virtual maps, however --- rather than with satellite images or other geospatial photography --- the analysis and application-level rendering of map features is sufficiently similar to image processing that a rigorous model of \GIS{} integration belongs properly within \DSPIN{}, specifically within a \MOSAICGIS{} extension.}

%\part{Markup Serialization and \protect\q{Grounding}}
%MOSAIC Data Models}
%\section{Conceptual Space Models and Hypergraph Database Implementation}
%\p{\lMOSAIC{} repositories need to model data }

\vspace{-9pt}
\part{MOSAIC Data Models}
\vspace{-6pt}

\section{Hypergraph Database Models for Publication Data}

\p{At the core of any \MOSAIC{} portal is a collection of distinct files, representing individual publications and supplemental archives. However, in a typical case it becomes necessary to ground the system of files in a database that hosts publication information, so that readers can search the portal for specific authors, keywords/phrases, titles, subject areas, and so forth. Searches within publication metadata are the simplest example of this functionality. However, searches within publication texts themselves are more complex, because what seems like a straightforward keyword search from a user's point of view might be complicated at the processing level by documents' markup structure. That is, technical terms or acronyms which readers simply see as single words/phrases might be defined in a more complicated way within the document, such as by internally structured text segments rather than raw character streams. In addition, keyword searches might also be tripped up by ligatures, footnote interruptions, and other visual features wherein the document in human-readable formats like \PDF{} markedly diverges from machine-readable raw text.}

\p{To address these potential search complications, \MOSAIC{} provides a \q{Hypergraph Text Encoding Protocol} (\HTXN{}) which effectively separates document text into different layers, dividing presentation-related content from textual content and making search capabilities more reliable. Each publication can have an \HTXN{} representation, essentially a machine-readable structure encapsulating document text, as part of its supplemental archive.
These \HTXN{} resources can then be used to support robust searching among text publication sets.}

\p{Apart from metadata and keyword searches within texts, \MOSAIC{} allows an additional layer of search functionality: the option to search across supplemental archive contents such as data sets and research protocol descriptions. \lMOSAIC{} provides a hypergraph-based database engine that can be used, at a minimum, for publication information; in addition, developers have the option of including supplemental content within this central database. Each supplemental archive encompasses raw data and/or methodological descriptions via files internal to the archive; these assets are not necessarily shared with the host \MOSAIC{} repository database (except via operations that directly examine archive files). However, those who create or maintain a repository could choose to model some or all archive content within its publication database, either directly storing archives' data sets or importing certain select information (such as object/type schema or representative data samples). In this case, some such data in the supplemental archives will be mirrored in the central database, so that it is accessible for user-facing searches. To accommodate many different kinds of data models (from \q{\SQL{}-like} tables to graphs and hierarchical collection types), \lMOSAIC{} employs a flexible Hypergraph Database engine. Thus, the publication database within a \MOSAIC{} portal would have the flexibility to import most data which is present in supplemental content associated with given publications.}

\p{In some contexts, however, authors are given the option to structure their shared research data in a format that can be exported to a \MOSAIC{} publication database. This is possible because \MOSAIC{}'s hypergraph data model supplies a rigorous framework for documenting how a data set is organized, which in turn can provide a formal overview of the theoretical or methodological commitments informing the research. The \MOSAIC{} database engine --- called \ConceptsDB{} --- represents an expanded hypergraph metamodel built around different paradigms for expressing scientific knowledge, such as Conceptual Space Theory and Conceptual Role Semantics.\footnote{Or at least a variation of Conceptual Roles consistent with \AI{} tools such as \Grakenai{}, and formalizations of Conceptual Spaces such as those described in \bhref{https://arxiv.org/abs/1706.06366}.}}

\section{Markup Serialization and \protect\q{Grounding} in \protect\lsMOSAIC{} Portals}
%\vspace{-1em}

\p{The data-integration mechanisms discussed up to now have focused on object models and object-oriented programming techniques. While composing special-purpose object-based libraries tailored to individual data-integration problems is a powerful tool for solving such problems, data-integration initiatives in practice are often organized around standards for sharing or serializing conformant data structures. Therefore, an important aspect of data integration is implementing proper data-serialization technology that is sufficiently rigorous to serve as a \textit{proxy} for formal interface definitions.
\lXML{} formats present a good case-study for such formalizations because most serialization in the biomedical and bioimaging context operates through \XML{} languages.}

\p{Standardizing a data format by stipulating how \XML{} files may encode the data is much simpler than defining an analogous specification in terms of executable computer code: one way to document the shape of any relevant data is simply to explain how an \XML{} document will be structured insofar as it encodes data accordingly. Potentially, such a specification can be a single \XML{} \DTD{} file, or an \XML{} sample, providing a convenient reference point for developers to grasp the underlying data model. However, the structure of an \XML{} document does not, in and of itself, present a clear picture of how the information which the document represents is semantically organized. Even though \XML{} is processed by computer programs, one cannot determine from an \XML{} document or schema which \XML{} elements (if any) correspond to data types recognized by applications which read and/or write the corresponding \XML{} code. For example, the \XML{} portion of \OMETIFF{} (the principal Open Microscopy Environment imaging format) includes an explicit \textbf{Image} element (which gathers up all significant image metadata); an application reading \OMETIFF{} files might, therefore, introduce a single datatype --- analogous to (and maybe wrapping an) \textbf{itk::Image} from the Insight Toolkit imaging libraries --- bundling the data in that part of the \OME{} \XML{}. In this case, there will be a one-to-one correspondence between \XML{} structure and application-level data types, at least for that one \textbf{Image} node. On the other hand, software reading \OMETIFF{} information might not manipulate images directly, but rather pull out other kinds of metadata, such as an experimenter's name or description of the microscopy setup. In this case, the application might not have an explicit \q{object} representing the image itself, but it may still read information about the image from child nodes of the \XML{} \textbf{Image} element.}

\p{In short, because applications can read or use data from an \XML{} document in different ways, the document's structure does not itself provide a clear picture of how the specific code that reads the \XML{} is organized. Such uncertainty becomes significant when one wishes to use \XML{} specifications as an indirect strategy for documenting parameters and features of the data structures which are serialized via the relevant \XML{} language. In effect, \XML{} serialization operates on two levels: on the one hand, the specific \XML{} document provides an encoding of data conforming to a given structure; but, at a more abstract level, one can model the relationship between the surface-level \XML{} node structure and the application-level data structures thereby serialized.
This second level of detail is usually implicit and unstated, but in the \MOSAIC{} technology, such \q{meta-serialization} --- a term we are using to suggest the idea of providing meta-data \textit{about} a serialization --- is directly formulated through a notion of \q{grounding,} which involves adding supplemental markup clarifying how documents instantiating a serialization format (such as \XML{}) relate to the coding protocols and data types of software that reads or creates these documents.}

\p{For maximum expressivity, \MOSAIC{} introduces a meta-serialization system that can be applied to languages more flexible than \XML{} --- notably \TAGML{} --- as well as to \XML{} proper.\footnote{See \bhref{http://www.balisage.net/Proceedings/vol21/print/HaentjensDekker01/BalisageVol21-HaentjensDekker01.html}.} In effect, \MOSAIC{} provides parsers for an extended version of \TAGML{} which includes an additional \q{grounding} layer. Grounding, in this context, means describing how elements in the markup --- nodes, attributes, and character sequences --- are \q{grounded} in application-level types, data fields, and other programming constructs. \lMOSAIC{} provides parsers for this \q{Grounded \TAGML{}} (\GTagML{}) language as well as converters to output serialized data as conventional \XML{}, so that \GTagML{} specifications can be used in contexts (\MIFlowCyt{}, for instance) where ordinary \XML{} is expected and serves as a basis of standardization. \lMOSAIC{} also provides code libraries for extracting data from \GTagML{} documents.}

\p{\lGTagML{} documents, with certain restrictions enforced, are structurally isomorphic to \XML{} and can be rendered as pure \XML{}; as such, \GTagML{} schemata can be used to define norms for \XML{} languages, although the logical role and operations of such requirements are not identical to those of \XML{} schema definitions. By design, \GTagML{} schemata operate on two levels: first, they constrain the organization of \GTagML{} files themselves; and second, they stipulate how \GTagML{} document structure relates to the type systems of code libraries serializing and deserializing \GTagML{} files. To model the second level of metadata, \GTagML{} introduces a hypergraph-based type model organized around \q{infosets,} which are structured overviews of application-level data types. To fully utilize \GTagML{} features, programmers may consequently compose \q{infoset classes} as wrappers around ordinary data types (e.g., \Cpp{} classes), whose instances are serialized via nodes or character strings within \GTagML{} documents. Infoset classes provide hypergraph \q{views} onto type-instances, and act as a bridge between data types and their associated \GTagML{} schema. In particular, infoset classes (rather than the types they encapsulate) are the basis for schematizing the relation between \GTagML{} document elements and type instances (and the data they contain).}

\p{The hypergraph-based modeling incorporated into \GTagML{} reuses much of the code associated with \ConceptsDB{}, the new hypergraph database being developed alongside \MOSAIC{}. More information about \ConceptsDB{} --- and on the \HTXN{} text encoding protocol for \MOSAIC{} portal manuscripts, mentioned earlier --- can be found on the GitHub project page for Linguistic Technology Systems (\bhref{https://github.com/Mosaic-DigammaDB/LingTechSys}).}

\end{document}
{ "alphanum_fraction": 0.7919796335, "avg_line_length": 48.9989690722, "ext": "tex", "hexsha": "4888941ca829e45f684e32d1306745ceecd49ea2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e24f46cdf657a8bdb990c7883c6bd3d0a0c9cff0", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "scignscape/PGVM", "max_forks_repo_path": "extra/papers/csaf/csaf-d1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e24f46cdf657a8bdb990c7883c6bd3d0a0c9cff0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "scignscape/PGVM", "max_issues_repo_path": "extra/papers/csaf/csaf-d1.tex", "max_line_length": 245, "max_stars_count": null, "max_stars_repo_head_hexsha": "e24f46cdf657a8bdb990c7883c6bd3d0a0c9cff0", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "scignscape/PGVM", "max_stars_repo_path": "extra/papers/csaf/csaf-d1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10739, "size": 47529 }
% !TEX root = developer.tex

\chapter{A Custom Object: Beginning To End}
\label{chapter:custom}

Suppose we have a brilliant design for a new topology we want to test.
We want to run a simple test \emph{without} having to modify the \sstmacro build system.
We can create a simple external project that links the new topology object to \sstmacro libraries.
The Makefile can be found in \inlineshell{tutorials/programming/topology}.
You are free to make \emph{any} Makefile you want.
After \sstmacro installs, it creates compiler wrappers \inlineshell{sst++} and \inlineshell{sstcc} in the chosen \inlineshell{bin} folder.
These are essentially analogs of the MPI compiler wrappers, and they configure all includes and linkage for the simulation.

We want to make an experimental topology in a ring.
Rather than a simple ring with connections to nearest neighbors, though, we will have ``express'' connections that jump to switches far away.
We begin with the standard typedefs.

\begin{CppCode}
#include <sstmac/hardware/topology/structured_topology.h>

namespace sstmac {
namespace hw {

class xpress_ring :
  public structured_topology
{
 public:
  typedef enum {
    up_port = 0,
    down_port = 1,
    jump_up_port = 2,
    jump_down_port = 3
  } port_t;

  typedef enum {
    jump = 0,
    step = 1
  } stride_t;
\end{CppCode}

Packets can either go to a nearest neighbor or they can ``jump'' to a switch further away.
Each switch in the topology will need four ports for step/jump going up/down.
The header file can be found in \inlineshell{tutorials/programming/topology/xpressring.h}.

We now walk through each of the functions in the topology's public interface in turn.
We got some functions for free by inheriting from \inlinecode{structured_topology}.

We start with

\begin{CppCode}
xpress_ring::xpress_ring(sprockit::sim_parameters* params) :
  structured_topology(params)
{
  ring_size_ = params->get_int_param("xpress_ring_size");
  jump_size_ = params->get_int_param("xpress_jump_size");
}
\end{CppCode}

determining how many switches are in the ring and how big a ``jump'' link is.

The topology then needs to tell objects how to connect

\begin{CppCode}
void
xpress_ring::connect_objects(connectable_map& objects)
{
  for (int i=0; i < ring_size_; ++i) {
    connectable* center_obj = objects[switch_id(i)];

    switch_id up_idx((i + 1) % ring_size_);
    connectable* up_partner = find_object(objects, cloner, up_idx);
    center_obj->connect(up_port, down_port, connectable::network_link, up_partner);

    switch_id down_idx((i + ring_size_ - 1) % ring_size_);
    connectable* down_partner = find_object(objects, cloner, down_idx);
    center_obj->connect_mod_at_port(down_port, up_port, connectable::network_link, down_partner);

    switch_id jump_up_idx((i + jump_size_) % ring_size_);
    connectable* jump_up_partner = find_object(objects, cloner, jump_up_idx);
    center_obj->connect(jump_up_port, jump_down_port, connectable::network_link, jump_up_partner);

    switch_id jump_down_idx((i + ring_size_ - jump_size_) % ring_size_);
    connectable* jump_down_partner = find_object(objects, cloner, jump_down_idx);
    center_obj->connect(jump_down_port, jump_up_port, connectable::network_link, jump_down_partner);
  }
}
\end{CppCode}

We loop through every switch in the ring and form $+/-$ connections to neighbors and $+/-$ connections to jump partners.
Each of the four connections gets a unique port number.
We must identify both the output port for the sender and the input port for the receiver.
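To make the wiring concrete, the short standalone program below (not part of \sstmacro; it simply reproduces the partner-index arithmetic used in \inlinecode{connect_objects} above) prints the four partners of each switch for a ring of ten switches with a jump size of five. Note that when the jump size is exactly half the ring, the up-jump and down-jump partners coincide.

\begin{CppCode}
// Standalone sketch: reproduce the partner-index arithmetic used above.
#include <cstdio>

int main()
{
  const int ring_size = 10;
  const int jump_size = 5;
  for (int i = 0; i < ring_size; ++i) {
    int up        = (i + 1) % ring_size;
    int down      = (i + ring_size - 1) % ring_size;
    int jump_up   = (i + jump_size) % ring_size;
    int jump_down = (i + ring_size - jump_size) % ring_size;
    std::printf("switch %d: up=%d down=%d jump_up=%d jump_down=%d\n",
                i, up, down, jump_up, jump_down);
  }
  return 0;
}
\end{CppCode}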
To compute the distance between two switches

\begin{CppCode}
int
xpress_ring::num_hops(int total_distance) const
{
  int num_jumps = total_distance / jump_size_;
  int num_steps = total_distance % jump_size_;
  int half_jump = jump_size_ / 2;
  if (num_steps > half_jump) {
    //take an extra jump
    ++num_jumps;
    num_steps = jump_size_ - num_steps;
  }
  return num_jumps + num_steps;
}

int
xpress_ring::minimal_distance(
  const coordinates& src_coords,
  const coordinates& dest_coords) const
{
  int src_pos = src_coords[0];
  int dest_pos = dest_coords[0];
  int up_distance = abs(dest_pos - src_pos);
  int down_distance = abs(src_pos + ring_size_ - dest_pos);
  int total_distance = std::max(up_distance, down_distance);
  return num_hops(total_distance);
}
\end{CppCode}

Essentially you compute the number of jumps to get close to the final destination and then the number of remaining single steps.

For computing coordinates, the topology has dimension one.

\begin{CppCode}
switch_id
xpress_ring::get_switch_id(const coordinates& coords) const
{
  return switch_id(coords[0]);
}

void
xpress_ring::get_productive_path(
  int dim,
  const coordinates& src,
  const coordinates& dst,
  routable::path& path) const
{
  minimal_route_to_coords(src, dst, path);
}

void
xpress_ring::compute_switch_coords(switch_id swid, coordinates& coords) const
{
  coords[0] = int(swid);
}
\end{CppCode}

Thus the coordinate vector is a single element with the \switchid.

The most complicated function is the routing function.

\begin{CppCode}
void
xpress_ring::minimal_route_to_coords(
  const coordinates& src_coords,
  const coordinates& dest_coords,
  routable::path& path) const
{
  int src_pos = src_coords[0];
  int dest_pos = dest_coords[0];

  //can route up or down
  int up_distance = abs(dest_pos - src_pos);
  int down_distance = abs(src_pos + ring_size_ - dest_pos);
  int xpress_cutoff = jump_size_ / 2;
\end{CppCode}

First we compute the distance in the up and down directions.
We also compute the cutoff distance where it is better to jump or step to the next switch.
If going up is a shorter distance, we have

\begin{CppCode}
  if (up_distance <= down_distance) {
    if (up_distance > xpress_cutoff) {
      path.outport = jump_up_port;
      path.dim = UP;
      path.dir = jump;
      path.vc = 0;
    }
    else {
      path.outport = up_port;
      path.dim = UP;
      path.dir = step;
      path.vc = 0;
    }
  }
\end{CppCode}

We then decide if it is better to step or jump.
We do not concern ourselves with virtual channels here and just set the virtual channel to zero.
Similarly, if the down direction in the ring is better

\begin{CppCode}
  else {
    if (down_distance > xpress_cutoff) {
      path.outport = jump_down_port;
      path.dim = DOWN;
      path.dir = jump;
      path.vc = 0;
    }
    else {
      path.outport = down_port;
      path.dim = DOWN;
      path.dir = step;
      path.vc = 0;
    }
  }
}
\end{CppCode}

For adaptive routing, we need to compute productive paths.
In this case, we only have a single dimension so there is little adaptive routing flexibility.
The only productive paths are the minimal paths.

\begin{CppCode}
void
xpress_ring::get_productive_path(
  int dim,
  const coordinates& src,
  const coordinates& dst,
  routable::path& path) const
{
  minimal_route_to_coords(src, dst, path);
}
\end{CppCode}

We are now ready to use our topology in an application.
In this case, we just demo with the built-in MPI ping all program from \sstmacro.
Here every node in the network sends a point-to-point message to every other node.
There is a parameter file in the \inlineshell{tutorials/programming/topology} folder.
To specify the new topology

\begin{ViFile}
# Topology
topology.name = xpress
topology.xpress_ring_size = 10
topology.xpress_jump_size = 5
\end{ViFile}

with application launch parameters

\begin{ViFile}
# Launch parameters
launch_indexing = block
launch_allocation = first_available
launch_cmd_app1 = aprun -n10 -N1
launch_app1 = mpi_test_all
\end{ViFile}

The file also includes a basic machine model.

After compiling in the folder, we produce an executable \inlineshell{runsstmac}.
Running the executable, we get the following

\begin{ShellCmd}
Estimated total runtime of 0.00029890 seconds
SST/macro ran for 0.4224 seconds
\end{ShellCmd}

where the \sstmacro wall clock time will vary depending on platform.
We estimate the communication pattern executes and finishes in 0.30 ms.

Suppose we change the jump size to a larger number.
Communication between distant nodes will be faster, but communication between medium-distance nodes will be slower.
We now set \inlinefile{jump_size = 10} and get the output

\begin{ShellCmd}
Estimated total runtime of 0.00023990 seconds
SST/macro ran for 0.4203 seconds
\end{ShellCmd}

We estimate the communication pattern executes and finishes in 0.24 ms, a bit faster.
Thus, this communication pattern favors longer jump links.
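To see concretely how the jump size changes the hop counts produced by the \inlinecode{num_hops} arithmetic above, the following standalone program (again independent of \sstmacro) tabulates hop counts for ring distances of one through nine under both jump sizes.

\begin{CppCode}
// Standalone sketch: tabulate hop counts using the same arithmetic
// as xpress_ring::num_hops above.
#include <cstdio>

static int num_hops(int total_distance, int jump_size)
{
  int num_jumps = total_distance / jump_size;
  int num_steps = total_distance % jump_size;
  int half_jump = jump_size / 2;
  if (num_steps > half_jump) {
    //take an extra jump and walk back
    ++num_jumps;
    num_steps = jump_size - num_steps;
  }
  return num_jumps + num_steps;
}

int main()
{
  for (int d = 1; d <= 9; ++d) {
    std::printf("distance %d: %d hops (jump=5), %d hops (jump=10)\n",
                d, num_hops(d, 5), num_hops(d, 10));
  }
  return 0;
}
\end{CppCode}

The larger jump reduces hop counts for the longest distances at the cost of more hops at intermediate distances; which effect dominates depends on the traffic pattern being simulated.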
{ "alphanum_fraction": 0.7358882604, "avg_line_length": 32.4456928839, "ext": "tex", "hexsha": "d0223591ad150f44c0862129656322e4cab8a7f1", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-21T11:39:08.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-21T11:39:08.000Z", "max_forks_repo_head_hexsha": "b590e7a5c604e38c840569de0e2b80bfc85539a0", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "deanchester/sst-macro", "max_forks_repo_path": "docs/developer/custom.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b590e7a5c604e38c840569de0e2b80bfc85539a0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "deanchester/sst-macro", "max_issues_repo_path": "docs/developer/custom.tex", "max_line_length": 156, "max_stars_count": null, "max_stars_repo_head_hexsha": "b590e7a5c604e38c840569de0e2b80bfc85539a0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "deanchester/sst-macro", "max_stars_repo_path": "docs/developer/custom.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2161, "size": 8663 }
\section{Data Mining through RAVEN}
\label{sec:DMraven}

Data mining is the computational process of discovering patterns in large data sets (``big data'') involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use.
\\RAVEN supports several different data mining algorithms, such as:
\begin{enumerate}
  \item \textit{Hierarchical methodologies}
  \item \textit{K-Means}
  \item \textit{Mean-Shift}, etc.
\end{enumerate}
The goals of this section are to learn how to:
\begin{enumerate}
  \item Set up a sampling strategy, perturbing a driven code, to generate the data on which clustering algorithms can be applied
  \item Analyze the data using clustering algorithms.
\end{enumerate}
To accomplish these tasks, the following RAVEN \textbf{Entities} (XML blocks in the input files) need to be defined:
\begin{enumerate}
   \item \textbf{\textit{RunInfo}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{RunInfo}
     The \textit{RunInfo} \textbf{Entity} is intended to set up the analysis sequence that needs to be performed. In this specific case, two steps (\xmlNode{Sequence}) are sequentially run using forty processors (\xmlNode{batchSize}).
     \\In the first step, the original physical model is sampled. The obtained results are then analyzed with data mining algorithms.
   \item \textbf{\textit{Files}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{Files}
     Since the driven code uses a single input file, that original input file is specified in this section. The attribute \xmlAttr{name} represents the alias that is going to be used in all the other input blocks in order to refer to this file.
   \item \textbf{\textit{Models}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{Models}
     The goal of this example is to show how the data mining algorithms in RAVEN can be used to analyze large data sets.
     \\In addition to the previously explained Code model, two Post-Processor models ($DataMining|cluster|KMeans$ and $DataMining|decomposition|PCA$) are specified. Note that the post-processing is performed on all the output FOMs used in this example ($A,\, B,\, C \, and \, D$).
   \item \textbf{\textit{Distributions}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{Distributions}
     In the Distributions XML section, the stochastic models for the uncertainties are reported. In this case, two distributions are defined:
     \begin{itemize}
       \item $sigma \sim \mathbb{U}(0,1000)$, used to model the uncertainties associated with the Model parameters \textit{sigma-A} and \textit{sigma-B};
       \item $decayConstant \sim \mathbb{U}(1e-8,1e-7)$, used to model the uncertainties associated with the Model parameters \textit{decay-A} and \textit{decay-B}.
     \end{itemize}
   \item \textbf{\textit{Samplers}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{Samplers}
     In order to obtain the data set on which the data mining algorithms are going to be applied, a \textit{Grid} sampling approach is employed here.
   \item \textbf{\textit{DataObjects}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{DataObjects}
     In this block, two \textit{DataObjects} are defined: 1) a PointSet named ``samples'', used to collect the final outcomes of the code, and 2) a HistorySet named ``histories'', in which the full time responses of the variables $A,B,C,D$ are stored.

    %figure samples
    \begin{figure}[h!]
      \centering
      \includegraphics[scale=0.7]{../../tests/framework/user_guide/DataMining/gold/dataMiningAnalysis/1-PlotLabels_dataMining.png}
      \caption{K-means clustering on the original dataset.}
      \label{fig:KmeanOrigData}
    \end{figure}

   \item \textbf{\textit{OutStreams}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{OutStreams}
     This workflow uses one Print OutStream and three Plot OutStreams:
     \begin{itemize}
       \item ``samplesDump'', which writes the original sample set with the additional columns from the PostProcess steps,
       \item ``PlotKMeans1'', which plots the samples against the Figures of Merit with coloring according to the KMeans clustering,
       \item ``PlotLabels'', which plots all the samples and colors them according to the KMeans clustering,
       \item ``PlotPCA1'', which plots the surrogate principal component dimensions and their associated clustering.
     \end{itemize}
     Note that a special kind of plot, the ``dataMining'' \xmlNode{type}, has been implemented to simplify plotting complicated results using RAVEN, and is used in all three of the plots in this workflow. Also note the use of the \xmlNode{range} block to define the data range of the plots created.
   \item \textbf{\textit{Steps}}:
     \xmlExample{framework/user_guide/DataMining/dataMiningAnalysis.xml}{Steps}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
     Finally, all the previously defined \textbf{Entities} can be combined in the \xmlNode{Steps} block; three \xmlNode{Steps} are defined:
     \begin{itemize}
       \item \xmlNode{MultiRun} named ``sample'', used to run the multiple instances of the driven code and collect the outputs in the two \textit{DataObjects}. The \xmlNode{Sampler} is specified to communicate to the \textit{Step} that the driven code needs to be perturbed through the Grid sampling strategy;
       \item \xmlNode{PostProcess} named ``kmeans'', used to analyze the data obtained through the sampling strategy. In this Step, a K-Means algorithm is employed and the clustering results are plotted;
       \item \xmlNode{PostProcess} named ``pca'', used to analyze the data obtained through the sampling strategy. In this Step, a PCA algorithm is employed and the decomposition results are plotted.
     \end{itemize}
\end{enumerate}

Figure~\ref{fig:KmeanOrigData} shows the clustering on the original input space.
\\Figure~\ref{fig:KmeanProjected} shows the clustering on the projected input space. It can be noticed that the algorithm fully captures the fact that the parameter $sigma-B$ does not impact the response $A$ (they are completely independent).
\\Figure~\ref{fig:PCAplot} shows the PCA decomposition on the data set.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%figure samples
 \begin{figure}[h!]
   \centering
   \includegraphics[scale=0.7]{../../tests/framework/user_guide/DataMining/gold/dataMiningAnalysis/1-PlotKMeans1_dataMining-dataMining-dataMining.png}
   \caption{K-means clustering on projected parameters.}
   \label{fig:KmeanProjected}
 \end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%figure samples
\begin{figure}[h!]
\centering \includegraphics[scale=0.7]{../../tests/framework/user_guide/DataMining/gold/dataMiningAnalysis/1-PlotPCA1_dataMining.png} \caption{Principal Component Analysis.} \label{fig:PCAplot} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%
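For readers unfamiliar with the clustering step itself, the following minimal, self-contained sketch (illustrative only; it is not RAVEN code, and RAVEN drives its data mining Post-Processors internally) shows the assignment and update iteration that a K-Means algorithm performs on a handful of hypothetical two-dimensional output samples.

\begin{verbatim}
// Illustrative K-Means (Lloyd's algorithm) with k = 2 on hypothetical points.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

static double sq_dist(const Point& a, const Point& b)
{
  double dx = a.x - b.x, dy = a.y - b.y;
  return dx * dx + dy * dy;
}

int main()
{
  // Hypothetical sampled outputs, e.g. (A, B) pairs from a grid of runs.
  std::vector<Point> pts = { {0.10, 0.20}, {0.20, 0.10}, {0.15, 0.25},
                             {0.90, 1.00}, {1.10, 0.95}, {1.00, 1.05} };
  Point centroid[2] = { pts[0], pts[3] };   // simple initial guess
  std::vector<int> label(pts.size(), 0);

  for (int iter = 0; iter < 10; ++iter) {
    // Assignment step: attach each point to its nearest centroid.
    for (std::size_t i = 0; i < pts.size(); ++i)
      label[i] = (sq_dist(pts[i], centroid[1]) < sq_dist(pts[i], centroid[0])) ? 1 : 0;
    // Update step: move each centroid to the mean of its assigned points.
    for (int k = 0; k < 2; ++k) {
      double sx = 0.0, sy = 0.0;
      int n = 0;
      for (std::size_t i = 0; i < pts.size(); ++i)
        if (label[i] == k) { sx += pts[i].x; sy += pts[i].y; ++n; }
      if (n > 0) { centroid[k].x = sx / n; centroid[k].y = sy / n; }
    }
  }
  for (std::size_t i = 0; i < pts.size(); ++i)
    std::printf("sample %zu -> cluster %d\n", i, label[i]);
  return 0;
}
\end{verbatim}

In RAVEN, this machinery is configured declaratively through the Post-Processor model shown above rather than coded by the user.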
{ "alphanum_fraction": 0.7145337964, "avg_line_length": 54.2463768116, "ext": "tex", "hexsha": "594993a37d57b0f4b0e1be41e26ddd48bc6ce093", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "milljm/raven", "max_forks_repo_path": "doc/user_guide/dataMiningExample.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "milljm/raven", "max_issues_repo_path": "doc/user_guide/dataMiningExample.tex", "max_line_length": 369, "max_stars_count": null, "max_stars_repo_head_hexsha": "5f29fe81b75e2ffbeb54a55aa63647e7b2f6457b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "milljm/raven", "max_stars_repo_path": "doc/user_guide/dataMiningExample.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1823, "size": 7486 }
\documentclass[12pt, titlepage]{article} \usepackage{amsmath, mathtools} \usepackage[round]{natbib} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{colortbl} \usepackage{xr} \usepackage{hyperref} \usepackage{longtable} \usepackage{xfrac} \usepackage{tabularx} \usepackage{float} \usepackage{siunitx} \usepackage{booktabs} \usepackage{multirow} \usepackage[section]{placeins} \usepackage{caption} \usepackage{fullpage} \hypersetup{ bookmarks=true, % show bookmarks bar? colorlinks=true, % false: boxed links; true: colored links linkcolor=red, % color of internal links (change box color with linkbordercolor) citecolor=blue, % color of links to bibliography filecolor=magenta, % color of file links urlcolor=cyan % color of external links } \usepackage{array} %% Comments \newif\ifcomments\commentstrue \ifcomments \newcommand{\authornote}[3]{\textcolor{#1}{[#3 ---#2]}} \newcommand{\todo}[1]{\textcolor{red}{[TODO: #1]}} \else \newcommand{\authornote}[3]{} \newcommand{\todo}[1]{} \fi \newcommand{\wss}[1]{\authornote{blue}{SS}{#1}} \newcommand{\bmac}[1]{\authornote{red}{BM}{#1}} \newcommand{\progname}{SWHS} \begin{document} \title{Module Interface Specification for Solar Water Heating Systems Incorporating Phase Change Material} \author{Brooks MacLachlan and Spencer Smith} \date{\today} \maketitle \pagenumbering{roman} \newpage \tableofcontents \newpage \section{Symbols, Abbreviations and Acronyms} See SRS Documentation at \url{https://github.com/smiths/swhs} \pagenumbering{arabic} \section{Introduction} The following document details the Module Interface Specifications for the implemented modules in a program simulating a Solar Water Heating System with Phase Change Material. It is intended to ease navigation through the program for design and maintenance purposes. Complementary documents include the System Requirement Specifications and Module Guide. The full documentation and implementation can be found at \url{https://github.com/smiths/swhs}. The specification is given in terms of functions, rather than sequences. For instance, the predicted temperature of the water is given as a function of time ($\mathbb{R} \rightarrow \mathbb{R})$, not as a sequence ($\mathbb{R}^n$). This approach is more straightforward for the specification, but in the implementation stage, it will likely be necessary to introduce a sequence, assuming that a numerical solver is used for the system of ODEs. \section{Notation} The structure of the MIS for modules comes from \citet{HoffmanAndStrooper1995}, with the addition that template modules have been adapted from \cite{GhezziEtAl2003}. The mathematical notation comes from Chapter 3 of \citet{HoffmanAndStrooper1995}. For instance, the symbol := is used for a multiple assignment statement and conditional rules follow the form $(c_1 \Rightarrow r_1 | c_2 \Rightarrow r_2 | ... | c_n \Rightarrow r_n )$. The following table summarizes the primitive data types used by \progname. 
\begin{center} \renewcommand{\arraystretch}{1.2} \noindent \begin{tabular}{l l p{7.5cm}} \toprule \textbf{Data Type} & \textbf{Notation} & \textbf{Description}\\ \midrule character & char & a single symbol or digit\\ integer & $\mathbb{Z}$ & a number without a fractional component in (-$\infty$, $\infty$) \\ natural number & $\mathbb{N}$ & a number without a fractional component in [1, $\infty$) \\ real & $\mathbb{R}$ & any number in (-$\infty$, $\infty$)\\ \bottomrule \end{tabular} \end{center} \noindent The specification of \progname \ uses some derived data types: sequences, strings, and tuples. Sequences are lists filled with elements of the same data type. Strings are sequences of characters. Tuples contain a list of values, potentially of different types. In addition, \progname \ uses functions, which are defined by the data types of their inputs and outputs. Local functions are described by giving their type signature followed by their specification. \section{Module Decomposition} The following table is taken directly from the Module Guide document for this project. \begin{table}[h!] \centering \begin{tabular}{p{0.3\textwidth} p{0.6\textwidth}} \toprule \textbf{Level 1} & \textbf{Level 2}\\ \midrule {Hardware-Hiding} & ~ \\ \midrule \multirow{7}{0.3\textwidth}{Behaviour-Hiding} & Input Parameters\\ & Output Format\\ & Output Verification\\ & Temperature ODEs\\ & Energy Equations\\ & Control Module\\ & Specification Parameters Module\\ \midrule \multirow{3}{0.3\textwidth}{Software Decision} & {Sequence Data Structure}\\ & ODE Solver\\ & Plotting\\ \bottomrule \end{tabular} \caption{Module Hierarchy} \label{TblMH} \end{table} \newpage ~\newpage \section{MIS of Control Module} \label{Main} \subsection{Module} main \subsection{Uses} Param (Section~\ref{Parameters}), Temperature (Section~\ref{Temperature}), Solver (Section~\ref{ODE}), Energy (Section~\ref{Energy}), verify\_output (Section~\ref{VerifyOutput}), plot (Section~\ref{Plot}), output (Section~\ref{Output}) \subsection{Syntax} \subsubsection{Exported Access Programs} \begin{center} \begin{tabular}{p{2cm} p{4cm} p{4cm} p{2cm}} \hline \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ \hline main & - & - & - \\ \hline \end{tabular} \end{center} \subsection{Semantics} \subsubsection{State Variables} None \subsubsection{Access Routine Semantics} \noindent main(): \begin{itemize} \item transition: Modify the state of Param module and the environment variables for the Plot and Output modules by following these steps\\ \end{itemize} \noindent Get (filenameIn: string) and (filenameOut: string) from user\\ \noindent load\_params(filenameIn)\\ \noindent \#\textit{Find temperature function} ($T_W^\text{Solid}, T_W^\text{Melting}, T_W^\text{Liquid}$, $T_P^\text{Solid}, T_P^\text{Melting}, T_P^\text{Liquid}$), \textit{and energy} ($Q_P$) \textit{and times of transition between solid, melting and liquid phases} ($t_\text{melt}^{\text{init}}, t_\text{melt}^{\text{final}}$)\\ \noindent $t_\text{melt}^\text{init}, [T_W^{\text{Solid}}, T_P^{\text{Solid}}]^T := \text{solve}(\text{ODE\_SolidPCM}, 0.0, [T_\text{init}, T_\text{init}]^T, \text{event\_StartMelt} , t_\text{final})$\\ \noindent $t_\text{melt}^\text{final}$, $[T_W^{\text{Melting}}, T_P^{\text{Melting}}, Q_p]^T$ := solve(ODE\_MeltingPCM, $t_\text{melt}^\text{init}$, $[T_W^{\text{Solid}}(t_\text{melt}^\text{init}), T_P^{\text{Solid}}(t_\text{melt}^\text{init}), 0.0]^T$, event\_EndMelt, $t_\text{final}$)\\ \noindent $[ T_W^{\text{Liquid}}, T_P^{\text{Liquid}}]^T$ := 
solveNoE(ODE\_LiquidPCM, $t_\text{melt}^\text{final}$, [$T_W^{\text{Melting}}(t_\text{melt}^\text{final})$, $T_P^{\text{Melting}}(t_\text{melt}^\text{final})$]$^T$, $t_\text{final}$)\\ \noindent \#\textit{Combine temperatures for $0 \leq t \leq t_\text{final}$}\\ \noindent $T_W(t) = (0 \leq t < t_\text{melt}^{\text{init}} \Rightarrow T_W^{\text{Solid}} | t_\text{melt}^{\text{init}} \leq t < t_\text{melt}^{\text{final}} \Rightarrow T_W^{\text{Melting}} | t_\text{melt}^{\text{final}} \leq t \leq t_\text{final} \Rightarrow T_W^{\text{Liquid}} )$\\ \noindent $T_P(t) = (0 \leq t < t_\text{melt}^{\text{init}} \Rightarrow T_P^{\text{Solid}} | t_\text{melt}^{\text{init}} \leq t < t_\text{melt}^{\text{final}} \Rightarrow T_P^{\text{Melting}} | t_\text{melt}^{\text{final}} \leq t \leq t_\text{final} \Rightarrow T_P^{\text{Liquid}} )$\\ \noindent \#\textit{Energy values} $(E_W(t), E_P(t))$ \textit{for} $0 \leq t \leq t_\text{final}$\\ \noindent $E_W(t) = (0 \leq t < t_\text{melt}^{\text{init}} \Rightarrow \text{energyWater}(T_W^{\text{Solid}}) | t_\text{melt}^{\text{init}} \leq t < t_\text{melt}^{\text{final}} \Rightarrow \text{energyWater}(T_W^{\text{Melting}}) | t_\text{melt}^{\text{final}} \leq t \leq t_\text{final} \Rightarrow \text{energyWater}(T_W^{\text{Liquid}}) )$\\ \noindent $E_P(t) = (0 \leq t < t_\text{melt}^{\text{init}} \Rightarrow \text{energySolidPCM}(T_P^{\text{Solid}}) | t_\text{melt}^{\text{init}} \leq t < t_\text{melt}^{\text{final}} \Rightarrow \text{energyMeltingPCM}(Q_P) | t_\text{melt}^{\text{final}} \leq t \leq t_\text{final} \Rightarrow \text{energyLiquidPCM}(T_P^{\text{Liquid}}) )$\\ \noindent \#\textit{Output calculated values to a file and to a plot. Verify calculated values obey conservation of energy.}\\ \noindent verify\_output($T_w$, $T_p$, $E_w$, $E_p$, $t_\text{final}$)\\ \noindent plot($T_w$, $T_p$, $E_w$, $E_p$, $t_\text{final}$)\\ \noindent output(filenameOut, $T_w$, $T_p$, $E_w$, $E_p$, $t_\text{final}$)\\ \newpage \section{MIS of Input Parameters Module} \label{Parameters} The secrets of this module are the data structure for input parameters, how the values are input and how the values are verified. The load and verify secrets are isolated to their own access programs. \subsection{Module} Param \subsection{Uses} SpecParam (Section~\ref{SpecParam}) \subsection{Syntax} \begin{tabular}{p{3cm} p{1cm} p{1cm} >{\raggedright\arraybackslash}p{9cm}} \toprule \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ \midrule load\_params & string & - & FileError \\ verify\_params & - & - & badLength, badDiam, badPCMVolume, badPCMAndTankVol, badPCMArea, badPCMDensity, badMeltTemp, badCoilAndInitTemp, badCoilTemp, badPCMHeatCapSolid, badPCMHeatCapLiquid, badHeatFusion, badCoilArea, badWaterDensity, badWaterHeatCap, badCoilCoeff, badPCMCoeff, badInitTemp, badFinalTime, badInitAndMeltTemp \\ $L$ & - & $\mathbb{R}$\\ $D$ & - & $\mathbb{R}$\\ $V_P$ & - & $\mathbb{R}$\\ $A_P$ & - & $\mathbb{R}$\\ ... & ... 
& ...\\ $m_W^{\text{noPCM}}$ & - & $\mathbb{R}$ \\ $\tau_W^{\text{noPCM}}$ & - & $\mathbb{R}$\\ \bottomrule \end{tabular} \subsection{Semantics} \subsubsection{Environment Variables} inputFile: sequence of string \#\textit{f[i] is the ith string in the text file f}\\ \subsubsection{State Variables} \renewcommand{\arraystretch}{1.2} \begin{longtable*}[l]{l} \# From R1\\ $L$: $\mathbb{R}$ \\ $D$: $\mathbb{R}$ \\ $V_P$: $\mathbb{R}$ \\ $A_P$: $\mathbb{R}$ \\ $\rho_P$ : $\mathbb{R}$ \\ $T_\text{melt}^{P}$: $\mathbb{R}$ \\ $C^S_P$: $\mathbb{R}$ \\ $C^L_P$: $\mathbb{R}$ \\ $H_f$: $\mathbb{R}$ \\ $A_C$: $\mathbb{R}$ \\ $T_C$: $\mathbb{R}$ \\ $\rho_W$: $\mathbb{R}$ \\ $C_W$: $\mathbb{R}$ \\ $h_C$: $\mathbb{R}$ \\ $h_P$: $\mathbb{R}$ \\ $T_\text{init}$: $\mathbb{R}$ \\ $t_\text{step}$: $\mathbb{R}$ \\ $t_\text{final}$: $\mathbb{R}$ \\ $\mathit{AbsTol}$: $\mathbb{R}$ \\ $\mathit{RelTol}$: $\mathbb{R}$ \\ $\mathit{ConsTol}$: $\mathbb{R}$ \\ ~\\ \# From R2\\ $V_\text{tank}$: $\mathbb{R}$ \\ $m_W$: $\mathbb{R}$ \\ $m_P$: $\mathbb{R}$ \\ ~\\ \noindent \# From R3\\ $\tau_W$: $\mathbb{R}$ \\ $\eta$: $\mathbb{R}$ \\ $\tau_P^S$: $\mathbb{R}$ \\ $\tau_P^L$: $\mathbb{R}$ \\ ~\\ \# To Support IM4\\ $E_{P\text{melt}}^{\text{init}}$: $\mathbb{R}$ \\ $E_{P\text{melt}}^{\text{all}}$: $\mathbb{R}$ \\ ~\\ \# To Support Testing\\ $m_W^{\text{noPCM}}$: $\mathbb{R}$ \\ $\tau_W^{\text{noPCM}}$: $\mathbb{R}$\\ \end{longtable*} \subsubsection{Assumptions} \begin{itemize} \item load\_params will be called before the values of any state variables will be accessed. \item The file contains the string equivalents of the numeric values for each input parameter in order, each on a new line. The order is the same as in the table in R1 of the SRS. Any comments in the input file should be denoted with a '\#' symbol. \end{itemize} \subsubsection{Access Routine Semantics} \noindent Param.$L$: \begin{itemize} \item output: \textit{out} := $L$ \item exception: none \end{itemize} \noindent Param.$D$: \begin{itemize} \item output: \textit{out} := $D$ \item exception: none \end{itemize} ... ~\newline \noindent Param.$m_W^{\text{noPCM}}$: \begin{itemize} \item output: \textit{out} := $m_W^{\text{noPCM}}$ \item exception: none \end{itemize} \noindent Param.$\tau_W^{\text{noPCM}}$: \begin{itemize} \item output: \textit{out} := $\tau_W^{\text{noPCM}}$ \item exception: none \end{itemize} \noindent load\_params($s$): \begin{itemize} \item transition: The filename $s$ is first associated with the file f. {inputFile} is used to modify the state variables using the following procedural specification: \begin{enumerate} \item Read data sequentially from inputFile to populate the state variables from R1 ($L$ to $\mathit{ConsTol}$). 
\item Calculate the derived quantities (all other state variables) as follows: \begin{itemize} \item $V_{\text{tank}} := \pi \times L \times (\frac{D}{2}) ^ 2$ \item $m_W := \rho_w (V_t - V_p)$ \item $m_P := \rho_p V_p$ \item $\tau_W := \frac{m_w C_w}{A_c h_c}$ \item $\eta := \frac{h_p A_p}{h_c A_c}$ \item $\tau_P^S := \frac{m_p C_{ps}}{h_p A_p}$ \item $\tau_P^L := \frac{m_p C_{pl}}{h_p A_p}$ \item $E_{P\text{melt}}^{\text{init}} := C_{ps} m_p (T_{\text{melt}} - T_{\text{init}})$ \item $E_{P\text{melt}}^{\text{all}} := H_f m_p$ \item $m_W^{\text{noPCM}} := \rho_w V_t$ \item $\tau_W^{\text{noPCM}} := \frac{m_W^{\text{noPCM}} C_w}{h_c A_c}$ \end{itemize} \item verify\_params() \end{enumerate} \item exception: exc := a file name $s$ cannot be found OR the format of inputFile is incorrect $\Rightarrow$ FileError \end{itemize} \noindent verify\_params(): \begin{itemize} \item out: \textit{out} := none \item exception: exc := \end{itemize} \noindent \begin{longtable*}[l]{l l} $\neg (L > 0)$ & $\Rightarrow$ badLength\\ $\neg (L_{\text{min}} \leq L \leq L_{\text{max}})$ & $\Rightarrow$ warnLength\\ $\neg (D > 0)$ & $\Rightarrow$ badDiam\\ $\neg ({\frac{D}{L}}_\text{min} \leq \frac{D}{L} \leq {\frac{D}{L}}_\text{max})$ & $\Rightarrow$ warnDiam\\ $\neg (V_P > 0)$ & $\Rightarrow$ badPCMVolume\\ $ \neg (V_P \geq \text{minfract} \cdot V_{\text{tank}}(D, L)) $ & $\Rightarrow$ warnPCMVol\\ $\neg (V_P < V_{\text{tank}}(D, L))$ & $\Rightarrow$ badPCMAndTankVol\\ $\neg (A_P > 0)$ & $\Rightarrow$ badPCMArea\\ $\neg (V_P \leq A_P \leq \frac{2}{h_\text{min}} V_P)$ & $\Rightarrow$ warnVolArea\\ $\neg (\rho_P > 0)$ & $\Rightarrow$ badPCMDensity\\ $\neg (\rho_P^{\text{min}} < \rho_P < \rho_P^{\text{max}})$ & $\Rightarrow$ warnPCMDensity\\ % $T_\text{melt}^{P}$ & $0 < T_\text{melt}^{P} < T_C$ (+) & & 44.2 % \si[per-mode=symbol] {\celsius} & 10\% % \\ % $C_P^S$ & $C_P^S > 0$ & $C_{P\text{min}}^S < C_P^S < C_{P\text{max}}^S$ & 1760 % \si[per-mode=symbol] {\joule\per\kilo\gram\per\celsius} & 10\% % \\ % $C_P^L$ & $C_P^L > 0$ & $C_{P\text{min}}^L < C_P^S < C_{P\text{max}}^L$ & 2270 % \si[per-mode=symbol] {\joule\per\kilo\gram\per\celsius} & 10\% % \\ % $H_f$ & $H_f > 0$ & $H_f^{\text{min}} < H_f < H_f^{\text{max}}$ & 211600 % \si[per-mode=symbol] {\joule\per\kilo\gram} & 10\% % \\ % $A_C$ & $A_C > 0$ (*) & $A_C \leq A_C^{\text{max}}$ & 0.12 \si[per-mode=symbol] {\square\metre} & 10\% % \\ % $T_C$ & $0 < T_C < 100$ (+) & & 50 \si[per-mode=symbol] {\celsius} & 10\% % \\ % $\rho_W$ & $\rho_W > 0$ & $\rho_W^{\text{min}} < \rho_W \leq \rho_W^{\text{max}}$ % & 1000 \si[per-mode=symbol] {\kilo\gram\per\cubic\metre} & 10\% % \\ % $C_W$ & $C_W > 0$ & $C_W^{\text{min}} < C_W < C_W^{\text{max}}$ & 4186 % \si[per-mode=symbol] {\joule\per\kilo\gram\per\celsius} & 10\% % \\ % $h_C$ & $h_C > 0$ & $h_C^{\text{min}} \leq h_C \leq h_C^{\text{max}}$ % & 1000 \si[per-mode=symbol] {\watt\per\square\metre\per\celsius} & 10\% % \\ % $h_P$ & $h_P > 0$ & $h_P^{\text{min}} \leq h_P \leq h_P^{\text{max}}$ % & 1000 \si[per-mode=symbol] {\watt\per\square\metre\per\celsius} & 10\% % \\ % $T_\text{init}$ & $0 < T_\text{init} < T_\text{melt} $ (+) & & 40 \si[per-mode=symbol] {\celsius} & 10\% % \\ % $t_\text{final}$ & $t_\text{final} > 0$ & $t_\text{final} < t_{\text{final}}^{\text{max}}$ (**) % & 50000 \si[per-mode=symbol] {\second} & 10\% % \\ \end{longtable*} etc. See Appendix (Section~\ref{Appendix}) for the complete list of exceptions and associated error messages. 
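As an informal illustration of the procedural specification above, the following Python sketch shows one possible shape of load\_params together with the derived-quantity calculations and a few of the verification checks. It is not part of this specification: the class name \texttt{ParamData}, the attribute names, and the assumption that the input file lists the R1 values in the order of the state variable list above are illustrative only, and only the first checks of verify\_params are shown (the complete list of exceptions is in the Appendix, Section~\ref{Appendix}).
\begin{verbatim}
# Illustrative sketch only -- names and file layout are assumptions, not part of this MIS.
import math

class ParamData:
    """Hypothetical container for the state variables of the Param module."""

    def load_params(self, filename):
        # Read the R1 values (assumed to be in the order of the state variable list),
        # skipping '#' comments as described in the Assumptions.
        with open(filename) as f:
            values = [float(line.split('#')[0])
                      for line in f
                      if line.split('#')[0].strip()]
        (self.L, self.D, self.V_P, self.A_P, self.rho_P, self.T_melt_P,
         self.C_PS, self.C_PL, self.H_f, self.A_C, self.T_C, self.rho_W,
         self.C_W, self.h_C, self.h_P, self.T_init, self.t_step,
         self.t_final, self.AbsTol, self.RelTol, self.ConsTol) = values

        # Derived quantities (step 2 of the procedural specification above).
        self.V_tank = math.pi * self.L * (self.D / 2) ** 2
        self.m_W = self.rho_W * (self.V_tank - self.V_P)
        self.m_P = self.rho_P * self.V_P
        self.tau_W = (self.m_W * self.C_W) / (self.A_C * self.h_C)
        self.eta = (self.h_P * self.A_P) / (self.h_C * self.A_C)
        self.tau_P_S = (self.m_P * self.C_PS) / (self.h_P * self.A_P)
        self.tau_P_L = (self.m_P * self.C_PL) / (self.h_P * self.A_P)
        self.E_Pmelt_init = self.C_PS * self.m_P * (self.T_melt_P - self.T_init)
        self.E_Pmelt_all = self.H_f * self.m_P
        self.m_W_noPCM = self.rho_W * self.V_tank
        self.tau_W_noPCM = (self.m_W_noPCM * self.C_W) / (self.h_C * self.A_C)

        self.verify_params()

    def verify_params(self):
        # Only the first few checks are shown; see the Appendix for the full list.
        if not self.L > 0:
            raise ValueError("badLength: Tank length must be > 0")
        if not self.D > 0:
            raise ValueError("badDiam: Tank diameter must be > 0")
        if not self.V_P < self.V_tank:
            raise ValueError("badPCMAndTankVol: PCM volume must be < tank volume")
\end{verbatim}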
\subsection{Considerations} The value of each state variable can be accessed through its name (getter). An access program is available for each state variable. There are no setters for the state variables, since the values will be set and checked by load params and not changed for the life of the program. \newpage % \section{MIS of Input Verification Module} \label{VerifyInput} % \subsection{Module} % verify\_params % \subsection{Uses} % Param (Section~\ref{Parameters}) % \subsection{Syntax} % \subsubsection{Exported Access Programs} % \begin{center} % \begin{tabular}{p{3cm} p{1cm} p{1cm} p{9cm}} % \hline % \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ % \hline % verify\_valid & - & - & badLength, badDiam, badPCMVolume, badPCMAndTankVol, % badPCMArea, badPCMDensity, badMeltTemp, % badCoilAndInitTemp, badCoilTemp, badPCMHeatCapSolid, % badPCMHeatCapLiquid, badHeatFusion, badCoilArea, % badWaterDensity, badWaterHeatCap, badCoilCoeff, % badPCMCoeff, badInitTemp, badFinalTime, % badInitAndMeltTemp \\ % \hline % verify\_recommend & - & - & - \\ % \hline % \end{tabular} % \end{center} % \subsection{Semantics} % \subsubsection{Environment Variables} % $win$: 2D array of pixels displayed on the screen. % \subsubsection{Assumptions} % All of the fields Param have been assigned values before any of the access % routines for this module are called. % \subsubsection{Access Routine Semantics} % verify\_valid(): % \begin{itemize} % \item transition: none % \item exceptions: exc := (\\ % Param.getL() $\leq 0 \Rightarrow$ badLength $|$\\ % Param.get\_diam() $\leq 0 \Rightarrow$ badDiam $|$\\ % Params.get\_Vp() $\leq 0 \Rightarrow$ badPCMVolume $|$\\ % Params.getVp() $\geq$ Params.Vt $\Rightarrow$ badPCMAndTankVol $|$\\ % Params.getAp() $\leq 0 \Rightarrow$ badPCMArea $|$\\ % Params.get\_rho\_p() $\leq 0 \Rightarrow$ badPCMDensity $|$\\ % Params.getTmelt() $\leq 0 \Rightarrow$ badMeltTemp $|$\\ % Params.getTmelt() $\geq$ Params.getTc() $\Rightarrow$ badMeltTemp $|$\\ % Params.getTc() $\leq$ Params.getTinit() $\Rightarrow$ badCoilAndInitTemp $|$\\ % Params.getTc() $\geq 100 \lor$ Params.getTc() $\leq 0 \Rightarrow$ badCoilTemp $|$\\ % Params.getC\_ps() $\leq 0 \Rightarrow$ badPCMHeatCapSolid $|$\\ % Params.getC\_pl() $\leq 0 \Rightarrow$ badPCMHeatCapLiquid $|$\\ % Params.getHf() $\leq 0 \Rightarrow$ badHeatFusion $|$\\ % Params.getAc() $\leq 0 \Rightarrow$ badCoilArea $|$\\ % Params.get\_rho()\_w $\leq 0 \Rightarrow$ badWaterDensity $|$\\ % Params.getC\_w() $\leq 0 \Rightarrow$ badWaterHeatCap $|$\\ % Params.get\_hc() $\leq 0 \Rightarrow$ badCoilCoeff $|$\\ % Params.get\_hp() $\leq 0 \Rightarrow$ badPCMCoeff $|$\\ % Params.getTinit() $\leq 0 \lor$ Params.getTinit() $\geq 100 \Rightarrow$ % badInitTemp $|$\\ % Params.get\_tfinal() $\leq 0 \Rightarrow$ badFinalTime $|$\\ % Params.getTinit() $\geq$ Params.getTmelt() $\Rightarrow$ badInitAndMeltTemp) % \end{itemize} % verify\_recommend(): % \begin{itemize} % \item transition: none % \item exceptions: exc := (\\ % Params.getL() $< 0.1 \lor$ Params.getL() $> 50 \Rightarrow$ warnLength $|$\\ % Params.getdiam() / Params.getL() $< 0.002 \lor$ Params.getdiam() / Params.getL() $> 200 % \Rightarrow$ warnDiam $|$\\ % Params.getVp() $<$ Params.getVt() $\times 10 ^ -6 \Rightarrow$ warnPCMVol $|$\\ % Params.getVp() $>$ Params.getAp() $\lor$ Params.getAp $> (2/0.001) \times$ Params.getVp() % $\Rightarrow$ warnVolArea $|$\\ % (Params.get\_rho\_p() $\leq 500) \lor ($ Params.get\_rho\_p() $\geq 20000) \Rightarrow$ % warnPCMDensity $|$ ... 
)\\ % \# \textit{Need to continue for the rest of the example - tabular form?} % \# \textit{Should add a module (Configuration Module) to store symbolic constants} % % Params.getC\_ps \leq 100 \lor Params.getC\_ps \geq 4000 \Rightarrow$ % % warnPCMHeatCapSolid $|$\\ % % Params.getC\_pl \leq 100 \lor Params.getC\_pl \geq 5000 \Rightarrow$ % % warnPCMHeatCapLiquid $|$\\ % % Params.getAc > \pi \times (Params.getdiam / 2) ^ 2 \Rightarrow$ warnCoilArea % % $|$\\ % % Params.getrho\_w \leq 950 \lor Params.getrho\_w > 1000 \Rightarrow$ % % warnWaterDensity $|$\\ % % Params.getC\_w \leq 4170 \lor Params.getC\_w \geq 4210 \Rightarrow$ % % warnWaterHeatCap $|$\\ % % Params.gethc \leq 10 \lor Params.gethc \geq 10000 \Rightarrow$ warnCoilCoeff $|$\\ % % Params.gethp \leq 10 \lor Params.gethp \geq 10000 \Rightarrow$ warnPCMCoeff $|$\\ % % Params.gettfinal \leq 0 \lor Params.gettfinal \geq 86400 \Rightarrow$ warnFinalTime) % \end{itemize} % \subsection{Considerations} % See Appendix (Section~\ref{Appendix}) for the complete list of exceptions and % associated error messages. \newpage \section{MIS of Temperature ODEs Module} \label{Temperature} \subsection{Module} Temperature \subsection{Uses} Param (Section~\ref{Parameters}) \subsection{Syntax} \subsubsection{Exported Access Programs} \begin{center} \begin{tabular}{p{3.5cm} p{1cm} p{7cm} p{2cm}} \hline \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ \hline ODE\_SolidPCM & -- & $(\mathbb{R}^{3} \rightarrow \mathbb{R})^2$ & - \\ \hline ODE\_MeltingPCM & -- & $(\mathbb{R}^{4} \rightarrow \mathbb{R})^3$ & - \\ \hline ODE\_LiquidPCM & -- & $(\mathbb{R}^{3} \rightarrow \mathbb{R})^2$ & - \\ \hline event\_StartMelt & -- & $\mathbb{R}^2 \rightarrow \mathbb{R}$ & - \\ \hline event\_EndMelt & -- & $\mathbb{R}^3 \rightarrow \mathbb{R}$ & - \\ \hline \end{tabular} \end{center} \subsection{Semantics} \subsubsection{State Variables} none \subsubsection{Assumptions} none \subsubsection{Access Routine Semantics} ODE\_SolidPCM(): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := \dfrac{d}{dt} \left [ \begin{array}{c} T_W\\ T_P \end{array} \right] = \left [ \begin{array}{c} \frac{1}{\tau_W}[(T_C - T_W(t)) + {\eta}(T_P(t) - T_W(t))]\\ \frac{1}{\tau^S_P}(T_W(t) - T_P(t)) \end{array} \right] $ \item exception: none \end{itemize} ODE\_MeltingPCM(): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := \dfrac{d}{dt} \left [ \begin{array}{c} T_W\\ T_P\\ Q_P \end{array} \right] = \left [ \begin{array}{c} \frac{1}{\tau_W}[(T_C - T_W(t)) + {\eta}(T_P(t) - T_W(t))]\\ 0 \\ h_P A_P (T_W(t) - T_\text{melt}^P) \end{array} \right] $ \item exception: none \end{itemize} ODE\_LiquidPCM(): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := \dfrac{d}{dt} \left [ \begin{array}{c} T_W\\ T_P \end{array} \right] = \left [ \begin{array}{c} \frac{1}{\tau_W}[(T_C - T_W(t)) + {\eta}(T_P(t) - T_W(t))]\\ \frac{1}{\tau^L_P}(T_W(t) - T_P(t)) \end{array} \right] $ \item exception: none \end{itemize} event\_StartMelt(): \begin{itemize} \item output: $out := g([T_W, T_P]^T) = T_\text{melt}^P - T_P$ \item exception: none \end{itemize} event\_EndMelt(): \begin{itemize} \item output: $out := g([T_W, T_P, Q_P]^T) = 1 - \phi$, where $\phi = \frac{Q_P}{E_{P\text{melt}}^{\text{all}}}$ \item exception: none \end{itemize} \newpage \section{MIS of ODE Solver Module} \label{ODE} \#\textit{Bold font is used to indicate variables that are a sequence type} \subsection{Module} Solver($n: \mathbb{N}$) \#\textit{$n$ is the length of the 
sequences}
\subsection{Uses}
None
\subsection{Syntax}
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{1.5cm} >{\raggedright\arraybackslash}p{9.25cm} >{\raggedright\arraybackslash}p{2.4cm} p{1.5cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Except.} \\
\hline
solve & $\textbf{f}: (\mathbb{R}^{n+1} \rightarrow \mathbb{R}^n), t_0 : \mathbb{R}, \textbf{y}_0: \mathbb{R}^n, g: \mathbb{R}^n \rightarrow \mathbb{R}, t_\text{fin}: \mathbb{R}$ & $t_1: \mathbb{R}, \textbf{y}: (\mathbb{R} \rightarrow \mathbb{R})^n$ & ODE\_ERR, NO\_EVENT\\
solveNoE & $\textbf{f}: (\mathbb{R}^{n+1} \rightarrow \mathbb{R}^n), t_0 : \mathbb{R}, \textbf{y}_0: \mathbb{R}^n, t_\text{fin}: \mathbb{R}$ & $\textbf{y}: (\mathbb{R} \rightarrow \mathbb{R})^n$ & ODE\_ERR\\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
None
\subsubsection{Access Routine Semantics}
\#\textit{Solving} $\frac{d}{dt} \mathbf{y} = \mathbf{f}(t, \mathbf{y}(t))$\\
\noindent solve($\textbf{f}, t_0, \textbf{y}_0, g, t_\text{fin}$):
\begin{itemize}
\item output: $out := t_1, \textbf{y}(t)$ where
$$\textbf{y}(t) = \textbf{y}_0 + \int_{t_0}^{t} \textbf{f}(s, \textbf{y}(s)) ds$$
with $t_1$ determined by the first time where $g(\textbf{y}(t_1)) = 0$.  $\textbf{y}(t)$ is calculated from $t = t_0$ to $t = t_1$.
\item exception: $exc :=$ ( $\neg (\exists t: \mathbb{R}| t_0 \leq t \leq t_\text{fin} : g(\textbf{y}(t)) = 0) \Rightarrow$ NO\_EVENT $|$ ODE Solver Fails $\Rightarrow$ ODE\_ERR)
\end{itemize}
solveNoE($\textbf{f}$, $t_0$, $\textbf{y}_0$, $t_\text{fin}$):
\begin{itemize}
\item output: $out := \textbf{y}(t)$ where
$$\textbf{y}(t) = \textbf{y}_0 + \int_{t_0}^{t} \textbf{f}(s, \textbf{y}(s)) ds$$
$\textbf{y}(t)$ is calculated from $t = t_0$ to $t = t_\text{fin}$.
\item exception: $exc :=$ ( ODE Solver Fails $\Rightarrow$ ODE\_ERR) \end{itemize} \newpage \section{MIS of Energy Module} \label{Energy} \subsection{Module} Energy \subsection{Uses} Param (Section~\ref{Parameters}) \subsection{Syntax} \subsubsection{External Access Programs} \begin{center} \begin{tabular}{p{4cm} p{4cm} p{3cm} p{2cm}} \hline \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ \hline energyWater & $\mathbb{R} \rightarrow \mathbb{R}$ & $\mathbb{R} \rightarrow \mathbb{R}$ & - \\ \hline energySolidPCM & $\mathbb{R} \rightarrow \mathbb{R}$ & $\mathbb{R} \rightarrow \mathbb{R}$ & - \\ \hline energyMeltingPCM & $\mathbb{R} \rightarrow \mathbb{R}$ & $\mathbb{R} \rightarrow \mathbb{R}$ & - \\ \hline energyLiquidPCM & $\mathbb{R} \rightarrow \mathbb{R}$ & $\mathbb{R} \rightarrow \mathbb{R}$ & - \\ \hline \end{tabular} \end{center} \subsection{Semantics} \subsubsection{State Variables} None \subsubsection{Assumptions} None \subsubsection{Access Routine Semantics} energyWater($T_W$): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := C_W m_W (T_W - T_\text{init})$ \item exception: none \end{itemize} \noindent energySolidPCM($T_P$): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := C^S_P m_P (T_P - T_\text{init})$ \item exception: none \end{itemize} \noindent energyMeltingPCM($Q_P$): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := E_{P\text{melt}}^{\text{init}} + Q_P$ \item exception: none \end{itemize} \noindent energyLiquidPCM($T_P$): \renewcommand*{\arraystretch}{1.5} \begin{itemize} \item output: $out := E_{P\text{melt}}^{\text{init}}+H_f m_p + C_P^L m_P(T_P(t) - T_\text{melt}^P)$ \item exception: none \end{itemize} \newpage \section{MIS of Output Verification Module} \label{VerifyOutput} \subsection{Module} verify\_output \subsection{Uses} Param (Section~\ref{Parameters}) \subsection{Syntax} \subsubsection{Exported Constant} ADMIS\_ER = $1 \times 10^{-6}$ \subsubsection{Exported Access Programs} \begin{center} \begin{tabular}{p{3cm} p{7cm} p{1cm} p{2cm}} \hline \textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\ \hline verify\_output & $T_W(t):\mathbb{R} \rightarrow \mathbb{R}, T_P(t):\mathbb{R} \rightarrow \mathbb{R}, E_W(t):\mathbb{R} \rightarrow \mathbb{R}, E_P(t):\mathbb{R} \rightarrow \mathbb{R}, t_\text{final}: \mathbb{R}$ & - & EWAT\_NOT\_CONSERVE, EPCM\_NOT\_CONSERVE \\ \hline \end{tabular} \end{center} \subsection{Semantics} \subsubsection{State Variables} None \subsubsection{Assumptions} All of the fields of the input parameters structure have been assigned a value. 
\subsubsection{Access Routine Semantics}
\noindent verify\_output($T_W, T_P, E_W, E_P$, $t_\text{final}$):
\begin{itemize}
\item exception: exc := (
\end{itemize}
\noindent
$ \neg (\forall t | 0 \leq t \leq t_\text{final} : \text{relErr}(E_W, \int_{0}^{t} h_C A_C (T_C - T_W(t)) dt - \int_{0}^{t} h_P A_P (T_W(t) - T_P(t)) dt) < \text{ADMIS\_ER}) \Rightarrow \text{EWAT\_NOT\_CONSERVE} $
$|$
\noindent
$ \neg (\forall t | 0 \leq t \leq t_\text{final} : \text{relErr}(E_{P}, \int_{0}^{t} h_{P} A_{P} (T_{W}(t) - T_{P}(t)) dt) < \text{ADMIS\_ER}) \Rightarrow \text{EPCM\_NOT\_CONSERVE} $
)
\subsubsection{Local Functions}
relErr: $\mathbb{R}$ $\times$ $\mathbb{R}$ $\rightarrow$ $\mathbb{R}$ \\
$\text{relErr}(t, e) \equiv \frac{|t - e|}{|t|}$ \\
\newline
\newpage
\section{MIS of Plotting Module} \label{Plot}
\subsection{Module}
plot
\subsection{Uses}
N/A
\subsection{Syntax}
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{2cm} p{8cm} p{2cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
plot & $T_W(t):\mathbb{R} \rightarrow \mathbb{R}, T_P(t):\mathbb{R} \rightarrow \mathbb{R}, E_W(t):\mathbb{R} \rightarrow \mathbb{R}, E_P(t):\mathbb{R} \rightarrow \mathbb{R}$, $t_\text{final}: \mathbb{R}$ & - & - \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
None
\subsubsection{Environment Variables}
win: 2D sequence of pixels displayed on the screen\\
\subsubsection{Assumptions}
None
\subsubsection{Access Routine Semantics}
\noindent plot($T_W$, $T_P$, $E_W$, $E_P$, $t_\text{final}$):
\begin{itemize}
\item transition: Modify win to display a plot where the vertical axis is time, one horizontal axis is temperature, and the other horizontal axis is energy. The time should run from $0$ to $t_\text{final}$.
\item exception: none
\end{itemize}
\newpage
\section{MIS of Output Module} \label{Output}
\subsection{Module}
output
\subsection{Uses}
Param (Section~\ref{Parameters})
\subsection{Syntax}
\subsubsection{Exported Constants}
$max\_width$: integer
\subsubsection{Exported Access Programs}
\begin{center}
\begin{tabular}{p{3cm} p{7cm} p{2cm} p{2cm}}
\hline
\textbf{Name} & \textbf{In} & \textbf{Out} & \textbf{Exceptions} \\
\hline
output & fname: string, $T_W(t):\mathbb{R} \rightarrow \mathbb{R}, T_P(t):\mathbb{R} \rightarrow \mathbb{R}, E_W(t):\mathbb{R} \rightarrow \mathbb{R}, E_P(t):\mathbb{R} \rightarrow \mathbb{R}$, $t_\text{final}: \mathbb{R}$ & - & - \\
\hline
\end{tabular}
\end{center}
\subsection{Semantics}
\subsubsection{State Variables}
None
\subsubsection{Environment Variables}
file: A text file
\subsubsection{Access Routine Semantics}
\noindent output(fname, $T_W$, $T_P$, $E_W$, $E_P$, $t_\text{final}$):
\begin{itemize}
\item transition: Write to the file associated with fname the following: the input parameters from Param, and the calculated values $T_W$, $T_P$, $E_W$, $E_P$ from time $0$ to $t_\text{final}$. The functions will be output as sequences in this file. The spacing between points in the sequence should be selected so that the heating behaviour is captured in the data.
\item exception: none
\end{itemize}
\newpage
\section{MIS of Specification Parameters} \label{SpecParam}
The secret of this module is the value of the specification parameters.
\subsection{Module} SpecParam \subsection{Uses} N/A \subsection{Syntax} \subsubsection{Exported Constants} \renewcommand{\arraystretch}{1.2} \begin{longtable*}[l]{l} \# From Table 2 in SRS\\ $L_\text{min}$ := 0.1\\ $L_\text{max}$ := 50\\ ${\frac{D}{L}}_\text{min}$ := 0.002 \\ ${\frac{D}{L}}_\text{max}$ := 200 \\ $\text{minfrac} $ := $10^{-6}$\\ $h_\text{min}$ := 0.001 \\ $\rho_P^{\text{min}}$ := 500\\ $\rho_P^{\text{max}}$ := 20000\\ $C_{P\text{min}}^S$ := 100 \\ $C_{P\text{max}}^S$ := 4000\\ $C_{P\text{min}}^L$ := 100 \\ $C_{P\text{max}}^L$ := 5000\\ $A_C^{\text{max}}$ := $\pi(\frac{D}{2})^2$\\ $\rho_W^{\text{min}}$ := 950\\ $\rho_W^{\text{max}}$ := 1000\\ $C_W^{\text{min}}$ := 4170\\ $C_W^{\text{max}}$ := 4210\\ $h_C^{\text{min}}$ := 10\\ $h_C^{\text{max}}$ := 10000\\ $h_P^{\text{min}}$ := 10\\ $h_P^{\text{max}}$ := 10000\\ $t_{\text{final}}^{\text{max}}$ := 86400\\ \end{longtable*} \wss{$A_C^{\text{max}}$ shouldn't be in this table of constants} \subsection{Semantics} N/A \newpage \bibliographystyle {plainnat} \bibliography {MIS} \newpage \section{Appendix} \label{Appendix} \renewcommand{\arraystretch}{1.2} \begin{longtable}{l p{12cm}} \caption{Possible Exceptions} \\ \toprule \textbf{Message ID} & \textbf{Error Message} \\ \midrule badLength & Error: Tank length must be $> 0$ \\ badDiam & Error: Tank diameter must be $> 0$ \\ badPCMVolume & Error: PCM volume must be $> 0$ \\ badPCMAndTankVol & Error: PCM volume must be $<$ tank volume \\ badPCMArea & Error: PCM area must be $> 0$ \\ badPCMDensity & Error: rho\_p must be $> 0$ \\ badMeltTemp & Error: Tmelt must be $> 0$ and $< Tc$ \\ badCoilAndInitTemp & Error: Tc must be $>$ Tinit \\ badCoilTemp & Error: Tc must be $> 0$ and $< 100$ \\ badPCMHeatCapSolid & Error: C\_ps must be $> 0$ \\ badPCMHeatCapLiquid & Error: C\_pl must be $> 0$ \\ badHeatFusion & Error: Hf must be $> 0$ \\ badCoilArea & Error: Ac must be $> 0$ \\ badWaterDensity & Error: rho\_w must be $> 0$ \\ badWaterHeatCap & Error: C\_w must be $> 0$ \\ badCoilCoeff & Error: hc must be $> 0$ \\ badPCMCoeff & Error: hp must be $> 0$ \\ badInitTemp & Error: Tinit must be $> 0$ and $< 100$ \\ badFinalTime & Error: tfinal must be $> 0$ \\ badInitAndMeltTemp & Error: Tinit must be $<$ Tmelt \\ ODE\_ACCURACY & $reltol$ and $abstol$ were not satisfied by the ODE solver for a given solution step. 
\\ ODE\_BAD\_INPUT & Invalid input to ODE solver \\ ODE\_MAXSTEP & ODE solver took $MaxStep$ steps and did not find solution \\ warnLength & Warning: It is recommended that 0.1 $<$= L $<$= 50 \\ warnDiam & Warning: It is recommended that 0.002 $<$= D/L $<$= 200 \\ warnPCMVol & Warning: It is recommended that Vp be $>$= 0.0001\% of Vt \\ warnVolArea & Warning: It is recommended that Vp $<$= Ap $<$= (2/0.001) * Vp \\ warnPCMDensity & Warning: It is recommended that 500 $<$ rho\_p $<$ 20000 \\ warnPCMHeatCapSolid & Warning: It is recommended that 100 $<$ C\_ps $<$ 4000 \\ warnPCMHeatCapLiquid & Warning: It is recommended that 100 $<$ C\_pl $<$ 5000 \\ warnCoilArea & Warning: It is recommended that Ac $<$= pi * (D/2) $\wedge$ 2 \\ warnWaterDensity & Warning: It is recommended that 950 $<$ rho\_w $<$= 1000 \\ warnWaterHeatCap & Warning: It is recommended that 4170 $<$ C\_w $<$ 4210 \\ warnCoilCoeff & Warning: It is recommended that 10 $<$ hc $<$ 10000 \\ warnPCMCoeff & Warning: It is recommended that 10 $<$ hp $<$ 10000 \\ warnFinalTime & Warning: It is recommended that 0 $<$ tfinal $<$ 86400 \\ warnWaterError & Warning: There is greater than $x$\% relative error between the energy in the water output and the expected output based on the law of conservation of energy. (Where $x$ is the value of $ConsTol$) \\ warnPCMError & Warning: There is greater than $x$\% relative error between the energy in the PCM output and the expected output based on the law of conservation of energy. (Where $x$ is the value of $ConsTol$) \\ \bottomrule \end{longtable} \end{document}
{ "alphanum_fraction": 0.65478256, "avg_line_length": 31.2633420822, "ext": "tex", "hexsha": "11f92a45a584250dd43263d3fca978850b5bb8ae", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-11-14T14:39:38.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-10T12:52:58.000Z", "max_forks_repo_head_hexsha": "a0d54b30b0c624c61ad3b3c2aa182b2dd193d51c", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "smiths/swhs", "max_forks_repo_path": "docs/Design/MIS/PCM_MIS.tex", "max_issues_count": 52, "max_issues_repo_head_hexsha": "a0d54b30b0c624c61ad3b3c2aa182b2dd193d51c", "max_issues_repo_issues_event_max_datetime": "2018-10-29T20:50:31.000Z", "max_issues_repo_issues_event_min_datetime": "2016-05-31T15:09:18.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "smiths/swhs", "max_issues_repo_path": "docs/Design/MIS/PCM_MIS.tex", "max_line_length": 216, "max_stars_count": 2, "max_stars_repo_head_hexsha": "a0d54b30b0c624c61ad3b3c2aa182b2dd193d51c", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "smiths/swhs", "max_stars_repo_path": "docs/Design/MIS/PCM_MIS.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-02T20:33:26.000Z", "max_stars_repo_stars_event_min_datetime": "2017-02-22T16:14:51.000Z", "num_tokens": 12737, "size": 35734 }
% Created 2020-07-09 jeu. 09:06
% Intended LaTeX compiler: pdflatex
\documentclass{ISMA_USD2020}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\usepackage[most]{tcolorbox}
\usepackage{bm}
\usepackage{booktabs}
\usepackage{tabularx}
\usepackage{array}
\usepackage{siunitx}
\usepackage{amsmath,amssymb,amsfonts, cases}
\usepackage{algorithmic, graphicx, textcomp}
\usepackage{xcolor, import, hyperref}
\usepackage{subcaption}
\usepackage[USenglish, english]{babel}
\setcounter{footnote}{1}
\input{config.tex}
\author[1,3] {T. Dehaeze}
\author[1,2] {C. Collette}
\affil[1] {Precision Mechatronics Laboratory\NewLineAffil University of Liege, Belgium \NewAffil}
\affil[2] {BEAMS Department\NewLineAffil Free University of Brussels, Belgium \NewAffil}
\affil[3] {European Synchrotron Radiation Facility \NewLineAffil Grenoble, France e-mail: \textbf{[email protected]}}
\bibliographystyle{IEEEtran}
\usepackage{tikz}
\usetikzlibrary{shapes.misc,arrows,arrows.meta}
\date{}
\title{Active Damping of Rotating Platforms using Integral Force Feedback}
\hypersetup{
 pdfauthor={},
 pdftitle={Active Damping of Rotating Platforms using Integral Force Feedback},
 pdfkeywords={},
 pdfsubject={},
 pdfcreator={Emacs 27.0.91 (Org mode 9.4)},
 pdflang={English}}
\begin{document}
\maketitle
\abstract{
This paper investigates the use of Integral Force Feedback (IFF) for the active damping of rotating mechanical systems.
Guaranteed stability, a typical benefit of IFF, is lost as soon as the system is rotating, due to gyroscopic effects.
To overcome this issue, two modifications of the classical IFF control scheme are proposed.
The first consists of slightly modifying the control law while the second consists of adding springs in parallel with the force sensors.
Conditions for stability and optimal parameters are derived.
The results reveal that, despite their different implementations, both modified IFF control schemes have almost identical damping authority on the suspension modes.
}
\section{Introduction}
\label{sec:orgf2d9f1e}
\label{sec:introduction}
There is an increasing need to reduce the undesirable vibration of sensitive equipment.
A common method to reduce vibration is to mount the sensitive equipment on a suspended platform, which attenuates the vibrations above the frequency of the suspension modes.
In order to further decrease the residual vibrations, active damping can be used to reduce the magnification of the response in the vicinity of the resonances.
In \cite{preumont92_activ_dampin_by_local_force}, the Integral Force Feedback (IFF) control scheme has been proposed, where a force sensor, a force actuator and an integral controller are used to directly augment the damping of a mechanical system.
When the force sensor is collocated with the actuator, the open-loop transfer function has alternating poles and zeros, which makes it easy to guarantee the stability of the closed-loop system \cite{preumont02_force_feedb_versus_accel_feedb}.
However, when the platform is rotating, gyroscopic effects alter the system dynamics and IFF cannot be applied as is.
The purpose of this paper is to study how the IFF strategy can be adapted to deal with these gyroscopic effects.
The paper is structured as follows.
Section \ref{sec:dynamics} presents a simple model of a rotating suspended platform that will be used throughout this study.
Section \ref{sec:iff} explains how the unconditional stability of IFF is lost due to the gyroscopic effects induced by the rotation.
Section \ref{sec:iff_hpf} suggests a simple modification of the control law such that damping can be added to the suspension modes in a robust way.
Section \ref{sec:iff_kp} proposes to add springs in parallel with the force sensors to regain the unconditional stability of IFF.
Section \ref{sec:comparison} compares both proposed modifications to the classical IFF in terms of damping authority and closed-loop system behavior.
\section{Dynamics of Rotating Platforms}
\label{sec:orgf97884c}
\label{sec:dynamics}
In order to study how the rotation affects the use of IFF, a model of a suspended platform on top of a rotating stage is used.
Figure \ref{fig:system} shows a schematic of this model, which is the simplest one in which gyroscopic forces can be studied.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/system.pdf}
\caption{\label{fig:system}Schematic of the studied System}
\end{figure}
The rotating stage is assumed to be ideal, meaning it induces a perfect rotation \(\theta(t) = \Omega t\) where \(\Omega\) is the rotational speed in \(\si{\radian\per\second}\).
The suspended platform consists of two orthogonal actuators, each represented by three elements in parallel: a spring with a stiffness \(k\) in \(\si{\newton\per\meter}\), a dashpot with a damping coefficient \(c\) in \(\si{\newton\per\meter\second}\), and an ideal force source \(F_u, F_v\).
A payload with a mass \(m\) in \(\si{\kilo\gram}\), representing the sensitive equipment, is mounted on the (rotating) suspended platform.
Two reference frames are used: an inertial frame \((\vec{i}_x, \vec{i}_y, \vec{i}_z)\) and a uniformly rotating frame \((\vec{i}_u, \vec{i}_v, \vec{i}_w)\) rigidly fixed on top of the rotating stage, with \(\vec{i}_w\) aligned with the rotation axis.
The position of the payload is represented by \((d_u, d_v, 0)\) expressed in the rotating frame. \par
To obtain the equations of motion for the system represented in Figure \ref{fig:system}, the Lagrangian equations are used:
\begin{equation}
\label{eq:lagrangian_equations}
\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_i} \right) + \frac{\partial D}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = Q_i
\end{equation}
with \(L = T - V\) the Lagrangian, \(T\) the kinetic coenergy, \(V\) the potential energy, \(D\) the dissipation function, and \(Q_i\) the generalized force associated with the generalized variable \(\begin{bmatrix}q_1 & q_2\end{bmatrix} = \begin{bmatrix}d_u & d_v\end{bmatrix}\).
The equation of motion corresponding to the constant rotation in the \((\vec{i}_x, \vec{i}_y)\) plane is disregarded, as this motion is considered to be imposed by the rotating stage.
\begin{equation}
\label{eq:energy_functions_lagrange}
\begin{aligned}
T & = \frac{1}{2} m \left( \left( \dot{d}_u - \Omega d_v \right)^2 + \left( \dot{d}_v + \Omega d_u \right)^2 \right), \quad V = \frac{1}{2} k \left( {d_u}^2 + {d_v}^2 \right),\\
D &= \frac{1}{2} c \left( \dot{d}_u{}^2 + \dot{d}_v{}^2 \right), \quad Q_1 = F_u, \quad Q_2 = F_v
\end{aligned}
\end{equation}
Substituting equations \eqref{eq:energy_functions_lagrange} into \eqref{eq:lagrangian_equations} for both generalized coordinates gives two coupled differential equations
\begin{subequations}
\label{eq:eom_coupled}
\begin{align}
m \ddot{d}_u + c \dot{d}_u + ( k - m \Omega^2 ) d_u &= F_u + 2 m \Omega \dot{d}_v \\
m \ddot{d}_v + c \dot{d}_v + ( k \underbrace{-\,m \Omega^2}_{\text{Centrif.}} ) d_v &= F_v \underbrace{-\,2 m \Omega \dot{d}_u}_{\text{Coriolis}}
\end{align}
\end{subequations}
The uniform rotation of the system induces two gyroscopic effects as shown in \eqref{eq:eom_coupled}:
\begin{itemize}
\item Centrifugal forces, which can be seen as an added negative stiffness \(- m \Omega^2\) along \(\vec{i}_u\) and \(\vec{i}_v\)
\item Coriolis forces, which couple the motion in the two orthogonal directions
\end{itemize}
\par
To study the dynamics of the system, the differential equations of motion \eqref{eq:eom_coupled} are transformed into the Laplace domain and the \(2 \times 2\) transfer function matrix \(\bm{G}_d\) from \(\begin{bmatrix}F_u & F_v\end{bmatrix}\) to \(\begin{bmatrix}d_u & d_v\end{bmatrix}\) is obtained
\begin{align}
\begin{bmatrix} d_u \\ d_v \end{bmatrix} &= \bm{G}_d \begin{bmatrix} F_u \\ F_v \end{bmatrix} \label{eq:Gd_mimo_tf} \\
\bm{G}_{d} &=
\begin{bmatrix}
\frac{ms^2 + cs + k - m \Omega^2}{\left( m s^2 + cs + k - m \Omega^2 \right)^2 + \left( 2 m \Omega s \right)^2} & \frac{2 m \Omega s}{\left( m s^2 + cs + k - m \Omega^2 \right)^2 + \left( 2 m \Omega s \right)^2} \\
\frac{-2 m \Omega s}{\left( m s^2 + cs + k - m \Omega^2 \right)^2 + \left( 2 m \Omega s \right)^2} & \frac{ms^2 + cs + k - m \Omega^2}{\left( m s^2 + cs + k - m \Omega^2 \right)^2 + \left( 2 m \Omega s \right)^2}
\end{bmatrix} \label{eq:Gd_m_k_c}
\end{align}
To simplify the analysis, the undamped natural frequency \(\omega_0\) and the damping ratio \(\xi\) are used
\begin{equation}
\omega_0 = \sqrt{\frac{k}{m}} \text{ in } \si{\radian\per\second}, \quad \xi = \frac{c}{2 \sqrt{k m}}
\end{equation}
The transfer function matrix \(\bm{G}_d\) then becomes
\begin{equation}
\label{eq:Gd_w0_xi_k}
\bm{G}_{d} = \frac{1}{k}
\begin{bmatrix}
\frac{\frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2}}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0}}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} \\
\frac{- 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0}}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{\frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2}}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2}
\end{bmatrix}
\end{equation}
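As a simple numerical check of the above derivation, the coupled equations of motion \eqref{eq:eom_coupled} can be put in state-space form and their frequency response compared with the closed-form expression \eqref{eq:Gd_m_k_c}. The short Python sketch below uses arbitrary test values (not the parameters adopted later in the paper) and is only illustrative; it is not the code used to generate the results of this study.
\begin{verbatim}
# Check that the state-space form of the coupled EOM reproduces the closed-form G_d.
import numpy as np

m, k, c, Omega = 1.5, 2.0, 0.1, 0.4   # arbitrary test values
s = 1j * 0.7                          # arbitrary test frequency s = j*w

# States x = [d_u, d_v, v_u, v_v] with v = d_dot; inputs [F_u, F_v]; outputs [d_u, d_v]
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-(k - m*Omega**2)/m, 0, -c/m, 2*Omega],
              [0, -(k - m*Omega**2)/m, -2*Omega, -c/m]])
B = np.array([[0, 0], [0, 0], [1/m, 0], [0, 1/m]])
C = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])

G_ss = C @ np.linalg.inv(s * np.eye(4) - A) @ B     # state-space frequency response

D = m*s**2 + c*s + k - m*Omega**2                   # common term of the closed form
G_formula = np.array([[D, 2*m*Omega*s],
                      [-2*m*Omega*s, D]]) / (D**2 + (2*m*Omega*s)**2)

assert np.allclose(G_ss, G_formula)                 # both expressions agree
\end{verbatim}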
For all further numerical analysis in this study, we consider \(\omega_0 = \SI{1}{\radian\per\second}\), \(k = \SI{1}{\newton\per\meter}\) and \(\xi = 0.025 = \SI{2.5}{\percent}\).
Even though no physical system with such parameters will be encountered in practice, the conclusions drawn relative to these normalized parameters can be generalized to any other set of parameters. \par
The poles of \(\bm{G}_d\) are the complex solutions \(p\) of
\begin{equation}
\left( \frac{p^2}{{\omega_0}^2} + 2 \xi \frac{p}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{p}{\omega_0} \right)^2 = 0
\end{equation}
Supposing small damping (\(\xi \ll 1\)), two pairs of complex conjugate poles are obtained:
\begin{subequations}
\label{eq:pole_values}
\begin{align}
p_{+} &= - \xi \omega_0 \left( 1 + \frac{\Omega}{\omega_0} \right) \pm j \omega_0 \left( 1 + \frac{\Omega}{\omega_0} \right) \\
p_{-} &= - \xi \omega_0 \left( 1 - \frac{\Omega}{\omega_0} \right) \pm j \omega_0 \left( 1 - \frac{\Omega}{\omega_0} \right)
\end{align}
\end{subequations}
The real and imaginary parts of these two pairs of complex conjugate poles are represented in Figure \ref{fig:campbell_diagram} as a function of the rotational speed \(\Omega\).
As the rotational speed increases, \(p_{+}\) goes to higher frequencies and \(p_{-}\) to lower frequencies.
The system becomes unstable for \(\Omega > \omega_0\) as the real part of \(p_{-}\) is positive.
Physically, the magnitude of the negative stiffness term \(-m\Omega^2\) induced by the centrifugal forces then exceeds the spring stiffness \(k\).
In the rest of this study, rotational speeds smaller than the undamped natural frequency of the system are assumed (\(\Omega < \omega_0\)).
\begin{figure}[htbp]
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/campbell_diagram_real.pdf}
\caption{\label{fig:campbell_diagram_real} Real Part}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/campbell_diagram_imag.pdf}
\caption{\label{fig:campbell_diagram_imag} Imaginary Part}
\end{subfigure}
\hfill
\caption{\label{fig:campbell_diagram}Campbell Diagram: Evolution of the real and imaginary parts of the system's poles as a function of the rotational speed \(\Omega\)}
\centering
\end{figure}
Looking at the transfer function matrix \(\bm{G}_d\) in Eq. \eqref{eq:Gd_w0_xi_k}, one can see that the two diagonal (direct) terms are equal and the two off-diagonal (coupling) terms are opposite.
The Bode plots of these two terms are shown in Figure \ref{fig:plant_compare_rotating_speed} for several rotational speeds \(\Omega\).
These plots confirm the expected behavior: the frequencies of the two pairs of complex conjugate poles separate further as \(\Omega\) increases.
For \(\Omega > \omega_0\), the low frequency pair of complex conjugate poles \(p_{-}\) becomes unstable.
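The pole locations behind the Campbell diagram can be reproduced by solving the characteristic equation numerically for a set of rotational speeds. The following Python sketch, based on the normalized parameters given above, is only illustrative and is not the code used to generate the figures of this paper.
\begin{verbatim}
# Poles of G_d as a function of the rotational speed (Campbell diagram data).
import numpy as np

w0, xi = 1.0, 0.025                       # normalized parameters used in this study

for Omega in [0.0, 0.5 * w0, 0.9 * w0, 1.1 * w0]:
    # Characteristic equation, multiplied by w0^4:
    #   (p^2 + 2*xi*w0*p + w0^2 - Omega^2)^2 + (2*Omega*p)^2 = 0
    q = np.array([1.0, 2 * xi * w0, w0**2 - Omega**2])
    char_poly = np.polyadd(np.polymul(q, q), np.array([4 * Omega**2, 0.0, 0.0]))
    poles = np.roots(char_poly)
    print(f"Omega/w0 = {Omega/w0:.1f}: poles = {np.round(poles, 4)}, "
          f"unstable = {bool(np.any(poles.real > 1e-12))}")
\end{verbatim}
For \(\Omega = 1.1\,\omega_0\), the computed low frequency pair has a positive real part, consistent with the discussion above.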
\begin{figure}[htbp] \begin{subfigure}[c]{0.45\linewidth} \includegraphics[width=\linewidth]{figs/plant_compare_rotating_speed_direct.pdf} \caption{\label{fig:plant_compare_rotating_speed_direct} Direct Terms \(d_u/F_u\), \(d_v/F_v\)} \end{subfigure} \hfill \begin{subfigure}[c]{0.45\linewidth} \includegraphics[width=\linewidth]{figs/plant_compare_rotating_speed_coupling.pdf} \caption{\label{fig:plant_compare_rotating_speed_coupling} Coupling Terms \(d_v/F_u\), \(-d_u/F_v\)} \end{subfigure} \hfill \caption{\label{fig:plant_compare_rotating_speed}Bode Plots for \(\bm{G}_d\) for several rotational speed \(\Omega\)} \centering \end{figure} \section{Decentralized Integral Force Feedback} \label{sec:orgf541d3f} \label{sec:iff} In order to apply IFF to the system, force sensors are added in series with the two actuators (Figure \ref{fig:system_iff}). As this study focuses on decentralized control, two identical controllers \(K_F\) are used to feedback each of the sensed force to its associated actuator and no attempt is made to counteract the interactions in the system. The control diagram is schematically shown in Figure \ref{fig:control_diagram_iff}. \begin{minipage}[t]{0.50\linewidth} \begin{center} \includegraphics[width=\linewidth]{figs/system_iff.pdf} \captionof{figure}{\label{fig:system_iff}System with added Force Sensor in series with the actuators} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.45\linewidth} \begin{center} \includegraphics[width=\linewidth]{figs/control_diagram_iff.pdf} \captionof{figure}{\label{fig:control_diagram_iff}Control Diagram for decentralized IFF} \end{center} \end{minipage} \par The forces \(\begin{bmatrix}f_u & f_v\end{bmatrix}\) measured by the two force sensors represented in Figure \ref{fig:system_iff} are equal to \begin{equation} \label{eq:measured_force} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} = \begin{bmatrix} F_u \\ F_v \end{bmatrix} - (c s + k) \begin{bmatrix} d_u \\ d_v \end{bmatrix} \end{equation} Inserting \eqref{eq:Gd_w0_xi_k} into \eqref{eq:measured_force} yields \begin{align} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} &= \bm{G}_{f} \begin{bmatrix} F_u \\ F_v \end{bmatrix} \label{eq:Gf_mimo_tf} \\ \bm{G}_{f} &= \begin{bmatrix} \frac{\left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} \right) \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{- \left( 2 \xi \frac{s}{\omega_0} + 1 \right) \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} \\ \frac{\left( 2 \xi \frac{s}{\omega_0} + 1 \right) \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{\left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} \right) \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - 
\frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} \end{bmatrix} \label{eq:Gf} \end{align} The zeros of the diagonal terms of \(\bm{G}_f\) are equal to (neglecting the damping for simplicity) \begin{subequations} \begin{align} z_c &= \pm j \omega_0 \sqrt{\frac{1}{2} \sqrt{8 \frac{\Omega^2}{{\omega_0}^2} + 1} + \frac{\Omega^2}{{\omega_0}^2} + \frac{1}{2} } \label{eq:iff_zero_cc} \\ z_r &= \pm \omega_0 \sqrt{\frac{1}{2} \sqrt{8 \frac{\Omega^2}{{\omega_0}^2} + 1} - \frac{\Omega^2}{{\omega_0}^2} - \frac{1}{2} } \label{eq:iff_zero_real} \end{align} \end{subequations} The frequency of the pair of complex conjugate zeros \(z_c\) \eqref{eq:iff_zero_cc} always lies between the frequency of the two pairs of complex conjugate poles \(p_{-}\) and \(p_{+}\) \eqref{eq:pole_values}. For non-null rotational speeds, two real zeros \(z_r\) \eqref{eq:iff_zero_real} appear in the diagonal terms inducing a non-minimum phase behavior. This can be seen in the Bode plot of the diagonal terms (Figure \ref{fig:plant_iff_compare_rotating_speed}) where the low frequency gain is no longer zero while the phase stays at \(\SI{180}{\degree}\). The low frequency gain of \(\bm{G}_f\) increases with the rotational speed \(\Omega\) \begin{equation} \label{eq:low_freq_gain_iff_plan} \lim_{\omega \to 0} \left| \bm{G}_f (j\omega) \right| = \begin{bmatrix} \frac{\Omega^2}{{\omega_0}^2 - \Omega^2} & 0 \\ 0 & \frac{\Omega^2}{{\omega_0}^2 - \Omega^2} \end{bmatrix} \end{equation} This can be explained as follows: a constant force \(F_u\) induces a small displacement of the mass \(d_u = \frac{F_u}{k - m\Omega^2}\), which increases the centrifugal force \(m\Omega^2d_u = \frac{\Omega^2}{{\omega_0}^2 - \Omega^2} F_u\) which is then measured by the force sensors. \begin{figure}[htbp] \centering \includegraphics[scale=1]{figs/plant_iff_compare_rotating_speed.pdf} \caption{\label{fig:plant_iff_compare_rotating_speed}Bode plot of the dynamics from a force actuator to its collocated force sensor (\(f_u/F_u\), \(f_v/F_v\)) for several rotational speeds \(\Omega\)} \end{figure} \par \label{sec:iff_pure_int} The two IFF controllers \(K_F\) consist of a pure integrator \begin{equation} \label{eq:Kf_pure_int} \bm{K}_F(s) = \begin{bmatrix} K_F(s) & 0 \\ 0 & K_F(s) \end{bmatrix}, \quad K_F(s) = g \cdot \frac{1}{s} \end{equation} where \(g\) is a scalar representing the gain of the controller. In order to see how the IFF affects the poles of the closed loop system, a Root Locus plot (Figure \ref{fig:root_locus_pure_iff}) is constructed as follows: the poles of the closed-loop system are drawn in the complex plane as the controller gain \(g\) varies from \(0\) to \(\infty\) for the two controllers \(K_F\) simultaneously. As explained in \cite{preumont08_trans_zeros_struc_contr_with,skogestad07_multiv_feedb_contr}, the closed-loop poles start at the open-loop poles (shown by \(\tikz[baseline=-0.6ex] \node[cross out, draw=black, minimum size=1ex, line width=2pt, inner sep=0pt, outer sep=0pt] at (0, 0){};\)) for \(g = 0\) and coincide with the transmission zeros (shown by \(\tikz[baseline=-0.6ex] \draw[line width=2pt, inner sep=0pt, outer sep=0pt] (0,0) circle[radius=3pt];\)) as \(g \to \infty\). The direction of increasing gain is indicated by arrows \(\tikz[baseline=-0.6ex] \draw[-{Stealth[round]},line width=2pt] (0,0) -- (0.3,0);\). 
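The closed-loop pole trajectories of Figure \ref{fig:root_locus_pure_iff} can be traced numerically by writing the closed-loop system in state-space form and sweeping the gain. The Python sketch below is only illustrative: it augments the mechanical states with the two integrator states of \(\bm{K}_F\) and assumes the feedback sign convention \(F_u = -g \int f_u \, dt\), \(F_v = -g \int f_v \, dt\) (chosen here so that damping is added when \(\Omega = 0\)); it is not the code used to generate the figures of this paper.
\begin{verbatim}
# Closed-loop poles of decentralized IFF with pure integrators, for increasing gain g.
import numpy as np

m, k, xi, Omega = 1.0, 1.0, 0.025, 0.1    # normalized parameters, Omega = 0.1*w0
c = 2 * xi * np.sqrt(k * m)

def closed_loop_poles(g):
    # States: x = [d_u, d_v, v_u, v_v, z_u, z_v], with v = d_dot and z_dot = f.
    # Assumed feedback sign convention: F_u = -g*z_u, F_v = -g*z_v.
    kc = k - m * Omega**2                 # stiffness including the centrifugal term
    A = np.array([
        [0,      0,      1,        0,        0,     0   ],
        [0,      0,      0,        1,        0,     0   ],
        [-kc/m,  0,     -c/m,      2*Omega, -g/m,   0   ],
        [0,     -kc/m,  -2*Omega, -c/m,      0,    -g/m ],
        [-k,     0,     -c,        0,       -g,     0   ],
        [0,     -k,      0,       -c,        0,    -g   ],
    ])
    return np.linalg.eigvals(A)

for g in [0.0, 0.1, 1.0, 2.0, 5.0]:
    poles = closed_loop_poles(g)
    print(f"g = {g:4.1f}: largest real part of the closed-loop poles = {poles.real.max():+.4f}")
\end{verbatim}
Under this convention, the largest real part of the closed-loop poles is positive for the non-zero gains tested, which illustrates the loss of unconditional stability discussed below.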
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_pure_iff.pdf}
\caption{\label{fig:root_locus_pure_iff}Root Locus: evolution of the closed-loop poles with increasing controller gain \(g\)}
\end{figure}
Whereas collocated IFF is usually associated with unconditional stability \cite{preumont91_activ}, this property is lost as soon as the rotational speed is non-null due to gyroscopic effects.
This can be seen in the Root Locus plot (Figure \ref{fig:root_locus_pure_iff}) where the poles corresponding to the controller go to the right half plane, implying closed-loop system instability.
Physically, this can be explained as follows: at low frequency, the loop gain is very large due to the pure integrators in \(K_F\).
The control system thus cancels the spring forces, which leaves the suspended platform unable to hold the payload against the centrifugal forces, hence the instability.
In order to apply decentralized IFF on rotating platforms, two solutions are proposed to deal with this instability problem.
The first one consists of slightly modifying the control law (Section \ref{sec:iff_hpf}) while the second one consists of adding springs in parallel with the force sensors (Section \ref{sec:iff_kp}).
\section{Integral Force Feedback with High Pass Filter}
\label{sec:orgf53673d}
\label{sec:iff_hpf}
As was explained in the previous section, the instability comes in part from the high gain at low frequency caused by the pure integrators.
In order to limit this low frequency controller gain, a high pass filter (HPF) can be added to the controller
\begin{equation}
\label{eq:IFF_LHF}
K_{F}(s) = g \cdot \frac{1}{s} \cdot \underbrace{\frac{s/\omega_i}{1 + s/\omega_i}}_{\text{HPF}} = g \cdot \frac{1}{s + \omega_i}
\end{equation}
This is equivalent to slightly shifting the controller pole to the left along the real axis.
This modification of the IFF controller is typically done to avoid saturation associated with the pure integrator \cite{preumont91_activ}.
This is however not the case in this study, as will become clear in the next section. \par
The loop gains, \(K_F(s)\) times the direct dynamics \(f_u/F_u\), with and without the added HPF are shown in Figure \ref{fig:loop_gain_modified_iff}.
As expected, the added HPF limits the low frequency loop gain.
The Root Loci for the decentralized IFF with and without the HPF are displayed in Figure \ref{fig:root_locus_modified_iff}.
With the added HPF, the closed-loop system is shown to be stable up to some value of the gain \(g_\text{max}\)
\begin{equation}
\label{eq:gmax_iff_hpf}
g_{\text{max}} = \omega_i \left( \frac{{\omega_0}^2}{\Omega^2} - 1 \right)
\end{equation}
It is interesting to note that \(g_{\text{max}}\) also corresponds to the gain where the low frequency loop gain (Figure \ref{fig:loop_gain_modified_iff}) reaches one.
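Expression \eqref{eq:gmax_iff_hpf} can be checked numerically from the low frequency behavior of the loop gain. The short Python sketch below, using the normalized parameters of this study, is only illustrative.
\begin{verbatim}
# Numerical check: at g = g_max the low-frequency loop gain reaches unity.
import numpy as np

w0, xi, Omega = 1.0, 0.025, 0.1           # normalized parameters used in this study
wi = 0.1 * w0                              # HPF cut-off frequency (as in the figures)

g_max = wi * (w0**2 / Omega**2 - 1)        # expression for g_max given above

def Gf_direct(s):
    # Direct (diagonal) term of G_f
    num = (s**2/w0**2 - Omega**2/w0**2) * (s**2/w0**2 + 2*xi*s/w0 + 1 - Omega**2/w0**2) \
          + (2*Omega*s/w0**2)**2
    den = (s**2/w0**2 + 2*xi*s/w0 + 1 - Omega**2/w0**2)**2 + (2*Omega*s/w0**2)**2
    return num / den

def K_F(s, g):
    # Modified IFF controller with HPF
    return g / (s + wi)

w = 1e-6                                   # representative "low" frequency in rad/s
loop_gain = abs(K_F(1j*w, g_max) * Gf_direct(1j*w))
print(f"g_max = {g_max:.2f}, low-frequency loop gain at g_max = {loop_gain:.4f}")  # ~ 1
\end{verbatim}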
\begin{minipage}[b]{0.45\linewidth}
\begin{center}
\includegraphics[scale=1]{figs/loop_gain_modified_iff.pdf}
\captionof{figure}{\label{fig:loop_gain_modified_iff}Modification of the loop gain with the added HPF, \(g = 2\), \(\omega_i = 0.1 \omega_0\) and \(\Omega = 0.1 \omega_0\)}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.5\linewidth}
\begin{center}
\includegraphics[scale=1]{figs/root_locus_modified_iff.pdf}
\captionof{figure}{\label{fig:root_locus_modified_iff}Modification of the Root Locus with the added HPF, \(\omega_i = 0.1 \omega_0\) and \(\Omega = 0.1 \omega_0\)}
\end{center}
\end{minipage}
\par
Two parameters can be tuned for the modified controller \eqref{eq:IFF_LHF}: the gain \(g\) and the pole's location \(\omega_i\).
The optimal values of \(\omega_i\) and \(g\) are here considered as the values for which the damping of all the closed-loop poles is simultaneously maximized.
In order to visualize how \(\omega_i\) affects the attainable damping, the Root Loci for several \(\omega_i\) are displayed in Figure \ref{fig:root_locus_wi_modified_iff}.
It is shown that even though small values of \(\omega_i\) seem to allow more damping to be added to the suspension modes, the control gain \(g\) may be limited to small values due to \eqref{eq:gmax_iff_hpf}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/root_locus_wi_modified_iff.pdf}
\caption{\label{fig:root_locus_wi_modified_iff}Root Locus for several HPF cut-off frequencies \(\omega_i\), \(\Omega = 0.1 \omega_0\)}
\end{figure}
In order to study this trade-off, the attainable closed-loop damping ratio \(\xi_{\text{cl}}\) is computed as a function of \(\omega_i/\omega_0\).
The gain \(g_{\text{opt}}\) at which this maximum damping is obtained is also displayed and compared with the gain \(g_{\text{max}}\) at which the system becomes unstable (Figure \ref{fig:mod_iff_damping_wi}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/mod_iff_damping_wi.pdf}
\caption{\label{fig:mod_iff_damping_wi}Attainable damping ratio \(\xi_\text{cl}\) as a function of \(\omega_i/\omega_0\). Corresponding control gain \(g_\text{opt}\) and \(g_\text{max}\) are also shown}
\end{figure}
Three regions can be observed:
\begin{itemize}
\item \(\omega_i/\omega_0 < 0.02\): the added damping is limited by the maximum allowed control gain \(g_{\text{max}}\)
\item \(0.02 < \omega_i/\omega_0 < 0.2\): the attainable damping ratio is maximized and is reached for \(g \approx 2\)
\item \(0.2 < \omega_i/\omega_0\): the added damping decreases as \(\omega_i/\omega_0\) increases
\end{itemize}
\section{Integral Force Feedback with Parallel Springs}
\label{sec:org4c124af}
\label{sec:iff_kp}
In this section, additional springs in parallel with the force sensors are added to counteract the negative stiffness induced by the rotation.
Such springs are schematically shown in Figure \ref{fig:system_parallel_springs}, where \(k_a\) is the stiffness of the actuator and \(k_p\) the stiffness in parallel with the actuator and force sensor.
Amplified piezoelectric stack actuators can also be used for this purpose, where part of the piezoelectric stack is used as an actuator while the rest is used as a force sensor \cite{souleille18_concep_activ_mount_space_applic}.
The parallel stiffness \(k_p\) then corresponds to the amplification structure.
An example of such a system is shown in Figure \ref{fig:cedrat_xy25xs}.
\begin{minipage}[t]{0.48\linewidth} \begin{center} \includegraphics[width=\linewidth]{figs/system_parallel_springs.pdf} \captionof{figure}{\label{fig:system_parallel_springs}Studied system with additional springs in parallel with the actuators and force sensors} \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.48\linewidth} \begin{center} \includegraphics[width=\linewidth]{figs/cedrat_xy25xs.png} \captionof{figure}{\label{fig:cedrat_xy25xs}XY Piezoelectric Stage (XY25XS from Cedrat Technology)} \end{center} \end{minipage} \par The forces \(\begin{bmatrix}f_u & f_v\end{bmatrix}\) measured by the two force sensors represented in Figure \ref{fig:system_parallel_springs} are equal to \begin{equation} \label{eq:measured_force_kp} \begin{bmatrix} f_{u} \\ f_{v} \end{bmatrix} = \begin{bmatrix} F_u \\ F_v \end{bmatrix} - (c s + k_a) \begin{bmatrix} d_u \\ d_v \end{bmatrix} \end{equation} In order to keep the overall stiffness \(k = k_a + k_p\) constant, thus not modifying the open-loop poles as \(k_p\) is changed, a scalar parameter \(\alpha\) (\(0 \le \alpha < 1\)) is defined to describe the fraction of the total stiffness in parallel with the actuator and force sensor \begin{equation} k_p = \alpha k, \quad k_a = (1 - \alpha) k \end{equation} The equations of motion are derived and transformed in the Laplace domain \begin{align} \begin{bmatrix} f_u \\ f_v \end{bmatrix} &= \bm{G}_k \begin{bmatrix} F_u \\ F_v \end{bmatrix} \label{eq:Gk_mimo_tf} \\ \bm{G}_k &= \begin{bmatrix} \frac{\left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} + \alpha \right) \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{- \left( 2 \xi \frac{s}{\omega_0} + 1 - \alpha \right) \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} \\ \frac{\left( 2 \xi \frac{s}{\omega_0} + 1 - \alpha \right) \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} & \frac{\left( \frac{s^2}{{\omega_0}^2} - \frac{\Omega^2}{{\omega_0}^2} + \alpha \right) \left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right) + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2}{\left( \frac{s^2}{{\omega_0}^2} + 2 \xi \frac{s}{\omega_0} + 1 - \frac{{\Omega}^2}{{\omega_0}^2} \right)^2 + \left( 2 \frac{\Omega}{\omega_0} \frac{s}{\omega_0} \right)^2} \end{bmatrix} \label{eq:Gk} \end{align} Comparing \(\bm{G}_k\) \eqref{eq:Gk} with \(\bm{G}_f\) \eqref{eq:Gf} shows that while the poles of the system are kept the same, the zeros of the diagonal terms have changed. 
The two real zeros \(z_r\) \eqref{eq:iff_zero_real} that were inducing non-minimum phase behavior are transformed into complex conjugate zeros if the following condition holds
\begin{equation}
\label{eq:kp_cond_cc_zeros}
\alpha > \frac{\Omega^2}{{\omega_0}^2} \quad \Leftrightarrow \quad k_p > m \Omega^2
\end{equation}
Thus, if the added parallel stiffness \(k_p\) is higher than the negative stiffness induced by centrifugal forces \(m \Omega^2\), the direct dynamics from actuator to force sensor will show minimum phase behavior.
This is confirmed by the Bode plot of the direct dynamics in Figure \ref{fig:plant_iff_kp}.
Figure \ref{fig:root_locus_iff_kp} shows the Root Loci for \(k_p = 0\), \(k_p < m \Omega^2\) and \(k_p > m \Omega^2\) when \(K_F\) is a pure integrator \eqref{eq:Kf_pure_int}.
It is shown that if the added stiffness is higher than the maximum negative stiffness, the poles of the closed-loop system stay in the (stable) left half-plane, and hence the unconditional stability of IFF is recovered.
\begin{minipage}[b]{0.43\linewidth}
\begin{center}
\includegraphics[scale=1]{figs/plant_iff_kp.pdf}
\captionof{figure}{\label{fig:plant_iff_kp}Bode Plot of \(f_u/F_u\) without parallel spring, with parallel springs with stiffness \(k_p < m \Omega^2\) and \(k_p > m \Omega^2\), \(\Omega = 0.1 \omega_0\)}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[b]{0.53\linewidth}
\begin{center}
\includegraphics[scale=1]{figs/root_locus_iff_kp.pdf}
\captionof{figure}{\label{fig:root_locus_iff_kp}Root Locus for IFF without parallel spring, with parallel springs with stiffness \(k_p < m \Omega^2\) and \(k_p > m \Omega^2\), \(\Omega = 0.1 \omega_0\)}
\end{center}
\end{minipage}
\par
Even though the parallel stiffness \(k_p\) has no impact on the open-loop poles (as the overall stiffness \(k\) stays constant), it has a large impact on the transmission zeros.
Moreover, as the attainable damping is generally proportional to the distance between poles and zeros \cite{preumont18_vibrat_contr_activ_struc_fourt_edition}, the parallel stiffness \(k_p\) is expected to have a large impact on the attainable damping.
To study this effect, Root Locus plots for several parallel stiffnesses \(k_p > m \Omega^2\) are shown in Figure \ref{fig:root_locus_iff_kps}.
The frequencies of the transmission zeros of the system increase with the parallel stiffness \(k_p\), and the associated attainable damping is reduced.
Therefore, even though the parallel stiffness \(k_p\) should be larger than \(m \Omega^2\) for stability reasons, it should not be taken too high as this would limit the attainable damping.
This is confirmed in Figure \ref{fig:mod_iff_damping_kp} where the attainable closed-loop damping ratio \(\xi_{\text{cl}}\) and the associated control gain \(g_\text{opt}\) are computed as a function of \(\alpha\).
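Condition \eqref{eq:kp_cond_cc_zeros} can be verified numerically by computing the zeros of the diagonal terms of \(\bm{G}_k\) \eqref{eq:Gk} for values of \(\alpha\) below and above \(\Omega^2/{\omega_0}^2\). The following Python sketch is only illustrative and is not the code used for the results of this paper.
\begin{verbatim}
# Zeros of the diagonal terms of G_k below and above the threshold alpha = Omega^2/w0^2.
import numpy as np

w0, xi, Omega = 1.0, 0.025, 0.1           # normalized parameters used in this study
threshold = Omega**2 / w0**2              # condition: alpha > Omega^2/w0^2

for alpha in [0.0, 0.5 * threshold, 5.0 * threshold]:
    # Numerator of the direct term of G_k, scaled by w0^4 (the scaling does not move the zeros)
    p1 = np.array([1.0, 0.0, alpha * w0**2 - Omega**2])
    p2 = np.array([1.0, 2 * xi * w0, w0**2 - Omega**2])
    num = np.polyadd(np.polymul(p1, p2), np.array([4 * Omega**2, 0.0, 0.0]))
    zeros = np.roots(num)
    real_zeros = bool(np.any(np.abs(zeros.imag) < 1e-9))
    print(f"alpha = {alpha:.4f}: purely real zeros present = {real_zeros}")
\end{verbatim}
Real zeros are only found for \(\alpha\) below the threshold, in agreement with \eqref{eq:kp_cond_cc_zeros}.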
\begin{minipage}[t]{0.48\linewidth}
\begin{center}
\includegraphics[width=\linewidth]{figs/root_locus_iff_kps.pdf}
\captionof{figure}{\label{fig:root_locus_iff_kps}Comparison of the Root Loci for three parallel stiffnesses \(k_p\)}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\begin{center}
\includegraphics[width=\linewidth]{figs/mod_iff_damping_kp.pdf}
\captionof{figure}{\label{fig:mod_iff_damping_kp}Optimal Damping Ratio \(\xi_\text{opt}\) and the corresponding optimal gain \(g_\text{opt}\) as a function of \(\alpha\)}
\end{center}
\end{minipage}
\section{Comparison and Discussion}
\label{sec:org537f1b3}
\label{sec:comparison}
Two modifications to adapt the IFF control strategy to rotating platforms have been proposed in Sections \ref{sec:iff_hpf} and \ref{sec:iff_kp}.
These two methods are now compared in terms of added damping, closed-loop compliance and transmissibility.
For the following comparisons, the cut-off frequency for the HPF is set to \(\omega_i = 0.1 \omega_0\) and the stiffness of the parallel springs is set to \(k_p = 5 m \Omega^2\). \par
Figure \ref{fig:comp_root_locus} shows the Root Loci for the two proposed IFF modifications.
While the two pairs of complex conjugate open-loop poles are identical for both techniques, the transmission zeros are not.
This means that the closed-loop behavior of both systems will differ when large control gains are used.
One can observe that the closed loop poles of the system with added springs (in red) are confined to the left half plane, implying unconditional stability.
This is not the case for the system where the controller is augmented with an HPF (in blue).
It is interesting to note that the maximum added damping is very similar for both techniques and is reached for the same control gain \(g_\text{opt} \approx 2 \omega_0\).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{figs/comp_root_locus.pdf}
\caption{\label{fig:comp_root_locus}Root Locus for the two proposed modifications of decentralized IFF, \(\Omega = 0.1 \omega_0\)}
\end{figure}
\par
The two proposed techniques are now compared in terms of closed-loop transmissibility and compliance.
The transmissibility is defined as the transfer function from the displacement of the rotating stage to the displacement of the payload.
It is used to characterize how much vibration is transmitted through the suspended platform to the payload.
The compliance describes the displacement response of the payload to external forces applied to it.
This is a useful metric when disturbances are directly applied to the payload.
The two techniques are also compared with passive damping (Figure \ref{fig:system}) where the damping coefficient \(c\) is tuned to critically damp the resonance when the rotating speed is null.
\begin{equation}
c_\text{crit} = 2 \sqrt{k m}
\end{equation}
Very similar results are obtained for the two proposed IFF modifications in terms of transmissibility (Figure \ref{fig:comp_transmissibility}) and compliance (Figure \ref{fig:comp_compliance}).
It is also confirmed that these two techniques can significantly damp the suspension modes.
\begin{figure}[htbp]
  \centering
  \begin{subfigure}[c]{0.49\linewidth}
    \includegraphics[width=\linewidth]{figs/comp_transmissibility.pdf}
    \caption{\label{fig:comp_transmissibility} Transmissibility}
  \end{subfigure}
  \hfill
  \begin{subfigure}[c]{0.49\linewidth}
    \includegraphics[width=\linewidth]{figs/comp_compliance.pdf}
    \caption{\label{fig:comp_compliance} Compliance}
  \end{subfigure}
  \caption{\label{fig:comp_active_damping}Comparison of the two proposed Active Damping Techniques, \(\Omega = 0.1 \omega_0\)}
\end{figure}
One can see in Figure \ref{fig:comp_transmissibility} that the degradation of the transmissibility at high frequency observed with passive damping techniques is overcome by the use of IFF. The addition of the HPF or the use of the parallel stiffness limits the degradation of the compliance compared with classical IFF (Figure \ref{fig:comp_compliance}).
\section{Conclusion}
\label{sec:orga805aaa}
\label{sec:conclusion}
Due to gyroscopic effects, decentralized IFF with pure integrators was shown to be unstable when applied to rotating platforms. Two modifications of the classical IFF control have been proposed to overcome this issue.
The first modification concerns the controller and consists of adding a high-pass filter to the pure integrators. This is equivalent to moving the controller pole to the left along the real axis. This renders the closed-loop system stable up to some value of the controller gain \(g_\text{max}\).
The second proposed modification concerns the mechanical system. Additional springs are added in parallel with the actuators and force sensors. It was shown that if the stiffness \(k_p\) of the additional springs is larger than the negative stiffness \(m \Omega^2\) induced by centrifugal forces, the classical decentralized IFF regains its unconditional stability property.
While having very different implementations, both proposed modifications are very similar when it comes to the attainable damping and the obtained closed-loop system behavior.
Future work will focus on the experimental validation of the proposed active damping techniques.
The Matlab code that was used for this study is available under an MIT License and archived in Zenodo \cite{dehaeze20_activ_dampin_rotat_posit_platf}.
\section*{Acknowledgment}
\label{sec:orge39bf3f}
This research benefited from a FRIA grant from the French Community of Belgium.
\bibliography{ref.bib}
\end{document}
{ "alphanum_fraction": 0.7361683032, "avg_line_length": 66, "ext": "tex", "hexsha": "fbb8bfa19d651a69c93101179da364bc2c07f171", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_forks_repo_path": "paper/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_issues_repo_path": "paper/paper.tex", "max_line_length": 729, "max_stars_count": null, "max_stars_repo_head_hexsha": "301c13d132319389384d1b2783da7765037a0199", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tdehaeze/dehaeze20_contr_stewa_platf", "max_stars_repo_path": "paper/paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 11783, "size": 37884 }
\documentclass[fontsize=11pt,paper=a4,twoside,openright,cleardoublepage=empty]{scrreprt} \input{Titlepage-Cafoscari-Generic-Config} \usepackage{libertine} \begin{document} \input{Titlepage-Cafoscari} \cleardoublepage \setcounter{page}{1} \section{Example Document} This is a wonderful thesis. \end{document}
{ "alphanum_fraction": 0.8, "avg_line_length": 17.5, "ext": "tex", "hexsha": "8933897ac29ef4febceb36b9714c3bdb1ee5b52c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9f7e41b5e2bf2ccbe20b553bee54e45126436a8f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Lazza/cafoscari-thesis-cover", "max_forks_repo_path": "ExampleThesis.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9f7e41b5e2bf2ccbe20b553bee54e45126436a8f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Lazza/cafoscari-thesis-cover", "max_issues_repo_path": "ExampleThesis.tex", "max_line_length": 88, "max_stars_count": 2, "max_stars_repo_head_hexsha": "9f7e41b5e2bf2ccbe20b553bee54e45126436a8f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Lazza/cafoscari-thesis-cover", "max_stars_repo_path": "ExampleThesis.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-16T06:06:24.000Z", "max_stars_repo_stars_event_min_datetime": "2018-10-07T15:42:02.000Z", "num_tokens": 99, "size": 315 }
\subsection{Amps}
{ "alphanum_fraction": 0.7, "avg_line_length": 5, "ext": "tex", "hexsha": "83c3065b1bb4804e415ab57e56b0d7d877daf033", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/engineering/engineeringElectrical/02-02-Amps.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/engineering/engineeringElectrical/02-02-Amps.tex", "max_line_length": 17, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/engineering/engineeringElectrical/02-02-Amps.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 20 }
\documentclass[11pt, oneside]{article} % use "amsart" instead of "article" for AMSLaTeX format \usepackage[utf8]{inputenc} \usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots. \geometry{letterpaper} % ... or a4paper or a5paper or ... %\geometry{landscape} % Activate for rotated page geometry %\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent \usepackage{graphicx} % Use pdf, png, jpg, or eps§ with pdflatex; use eps in DVI mode % TeX will automatically convert eps --> pdf in pdflatex \usepackage{amssymb} \usepackage{listings} \usepackage{xcolor} \usepackage[export]{adjustbox} \usepackage{booktabs} \usepackage{float} \usepackage{tikz} \usepackage{pgfplots} \usepgfplotslibrary{fillbetween} \usetikzlibrary{patterns} \pgfplotsset{compat=1.16} \definecolor{codegreen}{rgb}{0,0.6,0} \definecolor{codegray}{rgb}{0.5,0.5,0.5} \definecolor{codepurple}{rgb}{0.58,0,0.82} \definecolor{backcolour}{rgb}{0.95,0.95,0.92} \lstdefinestyle{mystyle}{ backgroundcolor=\color{backcolour}, commentstyle=\color{codegreen}, keywordstyle=\color{magenta}, numberstyle=\tiny\color{codegray}, stringstyle=\color{codepurple}, basicstyle=\ttfamily\footnotesize, breakatwhitespace=false, breaklines=true, captionpos=b, keepspaces=true, numbers=left, numbersep=5pt, showspaces=false, showstringspaces=false, showtabs=false, tabsize=2 } \lstset{style=mystyle} \title{Domain Testing} \author{Zelin Cai, Patrick Silvestre} \date{} \begin{document} \maketitle \section{\texttt{ticketing\_module.py}} \begin{lstlisting}[language=Python] __author__ = "Zelin Cai, Patrick Silvestre" __license__ = "MIT" def ticketing_module(age, gender): if gender == "boy": if age < 6: return "rhyming" elif 7 < age < 10: return "storytelling" elif 11 < age < 15: return "quiz" elif 20 < age: return "poetry" else: return "" elif gender == "girl": if age < 6: return "rhyming" elif 7 < age < 10: return "drawing" elif 10 < age < 15: return "essay writing" elif 20 < age: return "poetry" else: return "" else: return "" \end{lstlisting} \newpage %\section{\texttt{test\_ticketing\_module.py}} %\begin{lstlisting}[language=Python] %__author__ = "Zelin Cai, Patrick Silvestre" %__license__ = "MIT" % %from ticketing_module import * %import unittest % %class testInvalidValues(unittest.TestCase): % def test1(self): % actual_output = ticketing_module(10, "none") % expected_output = "" % self.assertEqual(actual_output, expected_output) % %class testBoy(unittest.TestCase): % def test1(self): % actual_output = ticketing_module(5, "boy") % expected_output = "rhyming" % self.assertEqual(actual_output, expected_output) % % def test2(self): % actual_output = ticketing_module(6, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test3(self): % actual_output = ticketing_module(7, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test4(self): % actual_output = ticketing_module(9, "boy") % expected_output = "storytelling" % self.assertEqual(actual_output, expected_output) % % def test5(self): % actual_output = ticketing_module(10, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test6(self): % actual_output = ticketing_module(11, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test7(self): % actual_output = ticketing_module(13, "boy") % expected_output = "quiz" % self.assertEqual(actual_output, expected_output) % % def test8(self): % actual_output = 
ticketing_module(15, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test9(self): % actual_output = ticketing_module(20, "boy") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test10(self): % actual_output = ticketing_module(22, "boy") % expected_output = "poetry" % self.assertEqual(actual_output, expected_output) % % %class testGirl(unittest.TestCase): % def test1(self): % actual_output = ticketing_module(5, "girl") % expected_output = "rhyming" % self.assertEqual(actual_output, expected_output) % % def test2(self): % actual_output = ticketing_module(6, "girl") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test3(self): % actual_output = ticketing_module(7, "girl") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test4(self): % actual_output = ticketing_module(9, "girl") % expected_output = "drawing" % self.assertEqual(actual_output, expected_output) % % def test5(self): % actual_output = ticketing_module(10, "girl") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test6(self): % actual_output = ticketing_module(13, "girl") % expected_output = "essay writing" % self.assertEqual(actual_output, expected_output) %\end{lstlisting} %\newpage % %\begin{lstlisting}[language=Python] % def test7(self): % actual_output = ticketing_module(15, "girl") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test8(self): % actual_output = ticketing_module(20, "girl") % expected_output = "" % self.assertEqual(actual_output, expected_output) % % def test9(self): % actual_output = ticketing_module(22, "girl") % expected_output = "poetry" % self.assertEqual(actual_output, expected_output) %\end{lstlisting} %\newpage \section{Control Flow Graph} \begin{figure}[!htb] \includegraphics[width=\linewidth]{control-flow.png} \end{figure} \newpage \section{Predicates} \begin{table}[!htb] \centering \begin{tabular}{|l|l|} \hline \# & Predicate \\ \hline $P_1$ & gender == "boy" \\ \hline $P_2$ & gender == "girl" \\ \hline $P_3$ & age \textless 6 \\ \hline $P_{4a}$ & 7 \textless age \\ \hline $P_{4b}$ & age \textless 10 \\ \hline $P_{5a}$ & 10 \textless age \\ \hline $P_{5b}$ & age \textless 15 \\ \hline $P_6$ & 20 \textless age \\ \hline $P_7$ & age \textless 6 \\ \hline $P_{8a}$ & 7 \textless age \\ \hline $P_{8b}$ & age \textless 10 \\ \hline $P_{9a}$ & 11 \textless age \\ \hline $P_{9b}$ & age \textless 15 \\ \hline $P_{10}$ & 20 \textless age \\ \hline \end{tabular} \end{table} \section{Domain Graph} \begin{tikzpicture} \begin{axis} [ axis x line = middle, axis y line = middle, x=0.6cm, xlabel = $age$, xlabel style={at=(current axis.right of origin), anchor=west}, ylabel = $gender$, ylabel style={at=(current axis.above origin), anchor=south}, xmin = 0, xmax = 22, ymin = -14, ymax = 14, yticklabels={,,} ] \addplot [mark=none] coordinates {(0, 0) (0, 5)} node [right] {male}; \addplot [mark=none] coordinates {(0, 0) (0, 1)} node [right] {$P_1$, $P_2$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(0, 0) (25, 0)}; \addplot [mark=none, color=black, dashed, thick] coordinates {(6, 0) (6, 10)} node [above] {$P_7$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(7, 0) (7, 10)} node [above] {$P_{8a}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(10, 0) (10, 10)} node [above] {$P_{8b}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(11, 0) (11, 10)} node 
[above] {$P_{9a}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(15, 0) (15, 10)} node [above] {$P_{9b}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(20, 0) (20, 10)} node [above] {$P_{10}$}; \addplot [mark=none] coordinates {(0, 0) (0, -5)} node [right] {female}; \addplot [mark=none, color=black, dashed, thick] coordinates {(6, 0) (6, -10)} node [below] {$P_3$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(7, 0) (7, -10)} node [below] {$P_{4a}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(10, 0) (10, -10)} node [below] {$P_{4b}$, $P_{5a}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(15, 0) (15, -10)} node [below] {$P_{5b}$}; \addplot [mark=none, color=black, dashed, thick] coordinates {(20, 0) (20, -10)} node [below] {$P_{6}$}; \end{axis} \end{tikzpicture} \newpage \section{Test Cases} Order of tests is ON (on the border), ON, OFF. \subsection{Predicate $P_1$ gender == ``boy''} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “boy” & Yes \\ \hline “girl” & “girl” & “boy” & Yes \\ \hline “boy” & “boy” & “boy” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “girl” & Yes \\ \hline “girl” & “girl” & “girl” & No \\ \hline “boy” & “boy” & “girl” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “girl” & Yes \\ \hline “girl” & “girl” & “boy” & Yes \\ \hline “boy” & “boy” & “boy” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “boy” & “ ” & Yes \\ \hline “boy” & “boy” & “boy” & No \\ \hline “girl” & “girl” & “girl” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_2$ gender == ``girl''} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “girl” & Yes \\ \hline “boy” & “boy” & “girl” & Yes \\ \hline “girl” & “girl” & “girl” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “boy” & No \\ \hline “boy” & “boy” & “boy” & No \\ \hline “girl” & “girl” & “boy” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “ ” & “girl” & Yes \\ \hline “boy” & “boy” & “boy” & No \\ \hline “girl” & “girl” & “girl” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Input & Expected Input & Fault Detected \\ \hline “ ” & “girl” & “ ” & Yes \\ \hline “girl” & “girl” & “girl” & No \\ \hline “boy” & “boy” & “boy” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_3$ age \textless 6} 
\subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “rhyming” & Yes \\ \hline 6.0001 & “ ” & “rhyming” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “ ” & No \\ \hline 6.0001 & “ ” & “ ” & No \\ \hline 5.9999 & “rhyming” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “ ” & No \\ \hline 6.0001 & “ ” & “rhyming” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “rhyming” & “ ” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline 6.0001 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{4a}$ 7 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “drawing” & Yes \\ \hline 6.9999 & “ ” & “drawing” & Yes \\ \hline 7.0001 & “drawing” & “drawing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “ ” & No \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline 7.0001 & “drawing” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “drawing” & Yes \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline 7.0001 & “drawing” & “drawing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “drawing” & “ ” & Yes \\ \hline 7.0001 & “drawing” & “drawing” & No \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{4b}$ age \textless 6} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “drawing” & Yes \\ \hline 10.0001 & “essay writing” & “drawing” & Yes \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “essay writing” & Yes \\ \hline 10.0001 & “essay writing” & “essay writing” & No \\ \hline 9.9999 & “drawing” & “essay writing” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “essay writing” & Yes \\ \hline 10.0001 & “essay writing” & “drawing” & 
Yes \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “drawing” & “ ” & Yes \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline 10.0001 & “essay writing” & “essay writing” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{5a}$ 10 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “essay writing” & Yes \\ \hline 9.9999 & “drawing” & “essay writing” & Yes \\ \hline 10.0001 & “essay writing” & “essay writing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “drawing” & Yes \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline 10.0001 & “essay writing” & “drawing” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “essay writing” & Yes \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline 10.0001 & “essay writing” & “essay writing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “essay writing” & “ ” & Yes \\ \hline 10.0001 & “essay writing” & “essay writing” & No \\ \hline 9.9999 & “drawing” & “drawing” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{5b}$ age \textless 15} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “ ” & “essay writing” & Yes \\ \hline 15.0001 & “ ” & “essay writing” & Yes \\ \hline 14.9999 & “essay writing” & “essay writing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “ ” & “ ” & No \\ \hline 15.0001 & “ ” & “ ” & No \\ \hline 14.9999 & “essay writing” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “ ” & “ ” & No \\ \hline 15.0001 & “ ” & “essay writing” & Yes \\ \hline 14.9999 & “essay writing” & “essay writing” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “essay writing” & “ ” & Yes \\ \hline 14.9999 & “essay writing” & “essay writing” & No \\ \hline 15.0001 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_6$ 20 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “poetry” & Yes \\ \hline 19.9999 & “ ” & “poetry” & Yes \\ \hline 20.0001 & 
“poetry” & “poetry” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “ ” & No \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline 20.0001 & “poetry” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “poetry” & Yes \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline 20.0001 & “poetry” & “poetry” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “poetry” & “ ” & Yes \\ \hline 20.0001 & “poetry” & “poetry” & No \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_7$ age \textless 6} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “rhyming” & Yes \\ \hline 6.0001 & “ ” & “rhyming” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “ ” & No \\ \hline 6.0001 & “ ” & “ ” & No \\ \hline 5.9999 & “rhyming” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “ ” & “ ” & No \\ \hline 6.0001 & “ ” & “rhyming” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 6 & “rhyming” & “ ” & Yes \\ \hline 5.9999 & “rhyming” & “rhyming” & No \\ \hline 6.0001 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{8a}$ 7 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “storytelling” & Yes \\ \hline 6.9999 & “ ” & “storytelling” & Yes \\ \hline 7.0001 & “storytelling” & “storytelling” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “ ” & No \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline 7.0001 & “storytelling” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “ ” & “storytelling” & Yes \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline 7.0001 & “storytelling” & “storytelling” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 7 & “storytelling” & “ ” & Yes \\ \hline 7.0001 & “storytelling” & 
“storytelling” & No \\ \hline 6.9999 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{8b}$ age \textless 10} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “storytelling” & Yes \\ \hline 10.0001 & “ ” & “storytelling” & Yes \\ \hline 9.9999 & “storytelling” & “storytelling” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “ ” & No \\ \hline 10.0001 & “ ” & “ ” & No \\ \hline 9.9999 & “storytelling” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “ ” & “ ” & No \\ \hline 10.0001 & “ ” & “storytelling” & Yes \\ \hline 9.9999 & “storytelling” & “storytelling” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 10 & “storytelling” & “ ” & Yes \\ \hline 9.9999 & “storytelling” & “storytelling” & No \\ \hline 10.0001 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{9a}$ 11 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 11 & “ ” & “quiz” & Yes \\ \hline 10.9999 & “ ” & “quiz” & Yes \\ \hline 11.0001 & “quiz” & “quiz” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 11 & “ ” & “ ” & No \\ \hline 10.9999 & “ ” & “ ” & No \\ \hline 11.0001 & “quiz” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 11 & “ ” & “quiz” & Yes \\ \hline 10.9999 & “ ” & “ ” & No \\ \hline 11.0001 & “quiz” & “quiz” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 11 & “quiz” & “ ” & Yes \\ \hline 11.0001 & “quiz” & “quiz” & No \\ \hline 10.9999 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{9b}$ age \textless 15} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “ ” & “quiz” & Yes \\ \hline 15.0001 & “ ” & “quiz” & Yes \\ \hline 14.9999 & “quiz” & “quiz” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “ ” & “ ” & No \\ \hline 15.0001 & “ ” & “ ” & No \\ \hline 14.9999 & “quiz” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected 
Output & Fault Detected \\ \hline 15 & “ ” & “ ” & No \\ \hline 15.0001 & “ ” & “quiz” & Yes \\ \hline 14.9999 & “quiz” & “quiz” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 15 & “quiz” & “ ” & Yes \\ \hline 14.9999 & “quiz” & “quiz” & No \\ \hline 15.0001 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \subsection{Predicate $P_{10}$ 20 \textless age} \subsubsection{Boundary Shift - Reduced Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “poetry” & Yes \\ \hline 19.9999 & “ ” & “poetry” & Yes \\ \hline 20.0001 & “poetry” & “poetry” & No \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Shift - Increased Domain} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “ ” & No \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline 20.0001 & “poetry” & “ ” & Yes \\ \hline \end{tabular} \end{table} \subsubsection{Boundary Tilt} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “ ” & “poetry” & Yes \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline 20.0001 & “poetry” & “poetry” & No \\ \hline \end{tabular} \end{table} \subsubsection{Closure Error} \begin{table}[!htb] \centering \begin{tabular}{|l|l|l|l|} \hline Test & Actual Output & Expected Output & Fault Detected \\ \hline 20 & “poetry” & “ ” & Yes \\ \hline 20.0001 & “poetry” & “poetry” & No \\ \hline 19.9999 & “ ” & “ ” & No \\ \hline \end{tabular} \end{table} \newpage \end{document}
{ "alphanum_fraction": 0.5352026635, "avg_line_length": 32.3827893175, "ext": "tex", "hexsha": "c0fe9d4bad362bc3f1c7949711d9603013c75a22", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "701a0e2edebc36ad854e5990de97e33f71fa8098", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cmpe-187/game-expo", "max_forks_repo_path": "doc/domain-testing.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "701a0e2edebc36ad854e5990de97e33f71fa8098", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cmpe-187/game-expo", "max_issues_repo_path": "doc/domain-testing.tex", "max_line_length": 108, "max_stars_count": null, "max_stars_repo_head_hexsha": "701a0e2edebc36ad854e5990de97e33f71fa8098", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cmpe-187/game-expo", "max_stars_repo_path": "doc/domain-testing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10440, "size": 32739 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%% ICML 2017 EXAMPLE LATEX SUBMISSION FILE %%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Use the following line _only_ if you're still using LaTeX 2.09. %\documentstyle[icml2017,epsf,natbib]{article} % If you rely on Latex2e packages, like most moden people use this: \documentclass{article} % use Times \usepackage{times} % For figures \usepackage{graphicx} % more modern %\usepackage{epsfig} % less modern \usepackage{subfigure} % For citations \usepackage{natbib} % For algorithms \usepackage{algorithm} \usepackage{algorithmic} \usepackage{amssymb} \usepackage{bm} \usepackage{float} \usepackage{footmisc} \usepackage{mathtools} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{caption} % As of 2011, we use the hyperref package to produce hyperlinks in the % resulting PDF. If this breaks your system, please commend out the % following usepackage line and replace \usepackage{icml2017} with % \usepackage[nohyperref]{icml2017} above. \usepackage{hyperref} % Packages hyperref and algorithmic misbehave sometimes. We can fix % this with the following command. \newcommand{\theHalgorithm}{\arabic{algorithm}} % Employ the following version of the ``usepackage'' statement for % submitting the draft version of the paper for review. This will set % the note in the first column to ``Under review. Do not distribute.'' %\usepackage{icml2017} % Employ this version of the ``usepackage'' statement after the paper has % been accepted, when creating the final version. This will set the % note in the first column to ``Proceedings of the...'' \usepackage[accepted]{icml2017} % The \icmltitle you define below is probably too long as a header. % Therefore, a short form for the running title is supplied here: \icmltitlerunning{Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR Matrices Using $K$-Simplex Numbers} \begin{document} \twocolumn[ \icmltitle{Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR Matrices Using $K$-Simplex Numbers} % It is OKAY to include author information, even for blind % submissions: the style file will automatically remove it for you % unless you've provided the [accepted] option to the icml2017 % package. % list of affiliations. the first argument should be a (short) % identifier you will use later to specify author affiliations % Academic affiliations should list Department, University, City, Region, Country % Industry affiliations should list Company, City, Region, Country % you can specify symbols, otherwise they are numbered in order % ideally, you should not use this facility. affiliations will be numbered % in order of appearance and this is the preferred way. \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Andrew Nystrom}{google} \icmlauthor{John Hughes}{brown} \end{icmlauthorlist} \icmlaffiliation{google}{Google Inc.} \icmlaffiliation{brown}{Brown University} \icmlcorrespondingauthor{Andrew Nystrom}{[email protected]} \icmlcorrespondingauthor{John Hughes}{[email protected]} % You may provide any keywords that you % find helpful for describing your paper; these are used to populate % the "keywords" metadata in the PDF but will not be shown in the document \icmlkeywords{csr, polynomial, sparse, expansion, triangle, tetrahedral, simplex} \vskip 0.3in ] % this must go after the closing bracket ] following \twocolumn[ ... 
% This command actually creates the footnote in the first column % listing the affiliations and the copyright notice. % The command takes one argument, which is text to display at the start of the footnote. % The \icmlEqualContribution command is standard text for equal contribution. % Remove it (just {}) if you do not need this facility. %\printAffiliationsAndNotice{} % leave blank if no need to mention equal contribution %\printAffiliationsAndNotice{\icmlEqualContribution} % otherwise use the standard text. %\footnotetext{hi} \begin{abstract} We provide an algorithm that speeds up polynomial and interaction feature expansion on sparse matrices. The algorithm operates on and produces a compressed sparse row matrix with no densification. It works by defining a bijective function involving simplex numbers of column indices in the original matrix to column indices in the expanded matrix. This function allows for only the nonzero elements to be iterated over and multiplied together, greatly improving the expansion of sparse matrices in compressed sparse row format. For a vector of dimension $D$ and density $0 \le d \le 1$, the algorithm has time complexity $\Theta(d^KD^K)$ where $K$ is the polynomial-feature order; this is an improvement by a factor $d^K$ over the standard method. [Keywords: compressed sparse row, csr, feature expansion, feature mapping, polynomial expansion, sparse matrix] \end{abstract} \section{Introduction} In machine learning and statistical modeling, feature mappings are intra-instance transformations, usually denoted by $x \mapsto \phi(\vec{x})$, that map instance vectors to higher dimensional spaces in which they are more linearly separable, allowing linear models to capture nonlinearities \cite{yuan2012recent}. A well known and widely used feature mapping is the \emph{polynomial expansion}, which produces a new feature for each degree-$k$ monomial in the original features. (If the original features are $x,y,z$, the order-2 polynomial features are $x^2, y^2, z^2, xy, xz, xy$, and the order-3 ones are $x^3, y^3, z^3, x^2y, x^2z, xy^2, y^2 z, xz^2, yz^2,$ and $xyz$. A $K$-order polynomial feature expansion of the feature space allows a linear model to learn polynomial relationships between dependent and independent variables. This mapping was first utilized in a published experiment by Joseph Diaz Gergonne in 1815 \cite{gergonne1974application, smith1918standard}. While other methods for capturing nonlinearities have been developed, such as kernels (the direct offspring of feature mappings), trees, generative models, and neural networks, feature mappings are still a popular tool \cite{barker200114, chang2010training, shaw2006intellectual}. The instance-independent nature of feature mappings allows them to pair well with linear parameter estimation techniques such as stochastic gradient descent, making them a candidate for certain types of large-scale machine learning problems when $D \ll N$. The compressed sparse row (CSR) matrix format \cite{saad1994sparskit} is widely used \cite{liu2012sparse, bulucc2009parallel, bell2008efficient, white1997improving} and supported \cite{eigenweb, bastien2012theano, scikit-learn, koenker2003sparsem}, and is considered the standard data structure for sparse, computationally heavy tasks. However, polynomial feature expansions cannot be performed directly on CSR matrices, nor on any sparse matrix format, without intermediate densification steps. 
This densification not only adds overhead, but wastefully computes combinations of features that have a product of zero, which are then discarded during conversion into a sparse format. We describe an algorithm that takes a CSR matrix as input and produces a CSR matrix for its degree-$K$ polynomial feature expansion with no intermediate densification. The algorithm leverages the CSR format to only compute products of features that result in nonzero values. This exploits the sparsity of the data to achieve an improved time complexity of $\Theta(d^KD^K)$ on each vector of the matrix where $K$ is the degree of the expansion, $D$ is the dimensionality, and $d$ is the density. The standard algorithm has time complexity $\Theta(D^K)$. Since $0 \le d \le 1$, our algorithm is a significant improvement for small $d$. While the algorithm we describe uses CSR matrices, it can be readily adapted to other sparse formats. \section{Notation} We denote matrices by uppercase bold letters thus: $\bm{A}$. The $i^{th}$ the row of $\bm{A}$ is written $\bm{a}_i$. All vectors are written in bold, and $\bm{a}$, with no subscript, is a vector. Non-bold letters are scalars. We sometimes use `slice' notation on subscripts, so that $x_{2:5}$ indicates the second through fifth elements of the vector $x$. A CSR matrix representation of an $r$-row matrix $\bm{A}$ consists of three vectors: $\bm{c}$, $\bm{d}$, and $\bm{p}$ and a single number: the number $N$ of columns of $\bm{A}$. The vectors $\bm{c}$ and $\bm{d}$ contain the same number of elements, and hold the column indices and data values, respectively, of all nonzero elements of $\bm{A}$. The vector $\bm{p}$ has $r$ entries. The values in $\bm{p}$ index both $\bm{c}$ and $\bm{d}$. The $i$th entry $\bm{p}_i$ of $\bm{p}$ tells where the data describing nonzero columns of $\bm{a}_i$ are within the other two vectors: $\bm{c}_{\bm{p}_i:\bm{p}_{i+1}}$ contain the column indices of those entries; $\bm{d}_{\bm{p}_i:\bm{p}_{i+1}}$ contain the entries themselves. Since only nonzero elements of each row are held, the overall number $N$ of columns of $\bm{A}$ must also be stored, since it cannot be derived from the other data. Scalars, vectors, and matrices are often decorated with a superscript $k$, which is not to be interpreted as an exponent, but instead as an indicator of polynomial expansion: For example, if the CSR for $\bm{A}$ is $\bm{c}, \bm{d}, \bm{p}$, then $\bm{c}^2$ is the vector that holds columns for nonzero values in $\bm{A}$'s quadratic feature expansion CSR representation. Uppercase $K$ refers to the degree of a polynomial or interaction expansion. When a superscript $\kappa$ (kappa) appears, it indicates that the element below it is in a polynomially expanded context of degree $K$. For example, if $nnz_i$ is the number of nonezero elements in the vector $i$th row vector of some matrix, $nnz_i^\kappa$ is the number of nonzero elements in the polynomial expansion of that row vector. Lowercase $k$ refers to a column index. \section{Motivation} In this section we present a strawman algorithm for computing polynomial feature expansions on dense matrices. We then modify the algorithm slightly to operate on a CSR matrix, to expose its infeasibility in that context. We then show how the algorithm would be feasible with a bijective mapping from $k$-tuples of column indicies in the input matrix to column indices in the polynomial expansion matrix, which we then derive in the following section. 
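For concreteness, the triplet $(\bm{c}, \bm{d}, \bm{p})$ described in the notation above corresponds directly to the \texttt{indices}, \texttt{data}, and \texttt{indptr} arrays of a SciPy CSR matrix; the following snippet is only an illustration of the data structure and is not part of the paper's implementation:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

# A small matrix with N = 2 rows and D = 4 columns
A = csr_matrix(np.array([[0., 2., 0., 3.],
                         [1., 0., 0., 0.]]))

print(A.indices)  # c: column indices of nonzeros -> [1 3 0]
print(A.data)     # d: the nonzero values         -> [2. 3. 1.]
print(A.indptr)   # p: row pointers into c and d  -> [0 2 3]

# Row i of A is described by A.indices[A.indptr[i]:A.indptr[i+1]]
# and A.data[A.indptr[i]:A.indptr[i+1]].
\end{verbatim}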
It should be noted that in practice, as well as in our code and experiments, expansions for degrees $1, 2, \dots, k-1$ are also generated. The final design matrix is the augmentation of all such expansions. However, the algorithm descriptions in this paper omit these steps as they would become unnecessarily visually and notationally cumbersome. Extending them to include all degrees less than $K$ is trivial and does not affect the complexity of the algorithm as the terms that involve $K$ dominate. \subsection{Dense Second Degree Polynomial Expansion Algorithm} A natural way to calculate polynomial features for a matrix $\bm{A}$ is to walk down its rows and, for each row, take products of all $K$-combinations of elements. To determine in which column of $\bm{A}^\kappa_i$ products of elements in $\bm{A}_i$ belong, a simple counter can be set to zero for each row of $\bm{A}$ and incremented after each polynomial feature is generated. This counter gives the column of $\bm{A}^\kappa_i$ into which each expansion feature belongs. This is shown in Algorithm \ref{alg:Dense-Second-Order-Polynomial-Expansion}. % BEGIN vrtically centered %\pagebreak %\hspace{0pt} %\vfill \begin{algorithm}%[H] \caption{Dense Second Order Polynomial Expansion} \label{alg:Dense-Second-Order-Polynomial-Expansion} \begin{algorithmic}[1] \STATE {\bfseries Input:} data $\bm{A}$, size $N \times D$ \STATE $\bm{A}^\kappa$ $\gets$ empty $N \times \binom{D}{2}$ matrix \FOR{$i \gets 0$ {\bfseries to} $N-1$} \STATE $c_p \gets 0$ \FOR{$j_1 \gets 0$ {\bfseries to} $D-1$} \FOR{$j_2 \gets j_1$ {\bfseries to} $D-1$} \STATE $\bm{A}^\kappa_{i{c_p}} \gets \bm{A}_{ij_1} \cdot \bm{A}_{ij_2}$ \STATE $c_p \gets c_p + 1$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} %\vskip 1.5in \subsection{Incomplete Second Degree CSR Polynomial Expansion Algorithm} \label{sec:final-algo} Now consider how this algorithm might be modified to accept a CSR matrix. Instead of walking directly down rows of $\bm{A}$, we will walk down sections of $\bm{c}$ and $\bm{d}$ partitioned by $\bm{p}$, and instead of inserting polynomial features into $\bm{A}^\kappa$, we will insert column numbers into $\bm{c}^\kappa$ and data elements into $\bm{d}^\kappa$. Throughout the algorithm, we use variables named $nnz$, with sub- or superscripts, to indicate the number of nonzero entries in either a matrix or a row of a matrix. See Algorithm \ref{alg:Incomplete-Sparse-Second-Order-Polynomial-Expansion}. 
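For reference, a direct (and intentionally naive) NumPy rendering of the dense expansion in Algorithm \ref{alg:Dense-Second-Order-Polynomial-Expansion} might look as follows; this is a sketch for exposition, not the Cython implementation used in the experiments. Note that because the inner index starts at $j_1$, each row produces $D(D+1)/2$ products, and the sketch sizes its output accordingly.
\begin{verbatim}
import numpy as np

def dense_poly2(A):
    """Dense second-order polynomial expansion (sketch of Algorithm 1)."""
    N, D = A.shape
    out = np.empty((N, D * (D + 1) // 2))  # one column per pair j1 <= j2
    for i in range(N):
        cp = 0                              # running output-column counter
        for j1 in range(D):
            for j2 in range(j1, D):
                out[i, cp] = A[i, j1] * A[i, j2]
                cp += 1
    return out
\end{verbatim}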
%\vfill
%\hspace{0pt}
%\pagebreak
% END vertically centered
\begin{algorithm}%[H]
   \caption{Sparse Second Order Polynomial Expansion}
   \label{alg:Incomplete-Sparse-Second-Order-Polynomial-Expansion}
\begin{algorithmic}[1]
   \STATE {\bfseries Input:} data $\bm{A}$, size $N \times D$
   \STATE $\bm{p}^\kappa$ $\gets$ vector of size $N+1$
   \STATE $\bm{p}^\kappa_0 \gets 0$
   \STATE $nnz^\kappa \gets 0$
   \FOR{$i \gets 0$ {\bfseries to} $N-1$}
   \STATE $i_{start} \gets \bm{p}_i$
   \STATE $i_{stop} \gets \bm{p}_{i+1}$
   \STATE $\bm{c}_i \gets \bm{c}_{i_{start}:i_{stop}}$
   \STATE $nnz^\kappa_i \gets \binom{|\bm{c}_i|}{2}$ \label{li:row_nnz_count}
   \STATE $nnz^\kappa \gets nnz^\kappa + nnz^\kappa_i$
   \STATE $\bm{p}^\kappa_{i+1} \gets \bm{p}^\kappa_i + nnz^\kappa_i$
   \ENDFOR
   \STATE $\bm{p}^\kappa$ $\gets$ vector of size $N+1$
   \STATE $\bm{c}^\kappa$ $\gets$ vector of size $nnz^\kappa$
   \STATE $\bm{d}^\kappa$ $\gets$ vector of size $nnz^\kappa$
   \STATE $n \gets 0$
   \FOR {$i \gets 0$ {\bfseries to} $N-1$}
   \STATE $i_{start} \gets \bm{p}_i$
   \STATE $i_{stop} \gets \bm{p}_{i+1}$
   \STATE $\bm{c}_i \gets \bm{c}_{i_{start}:i_{stop}}$
   \STATE $\bm{d}_i \gets \bm{d}_{i_{start}:i_{stop}}$
   \FOR {$c_1 \gets 0$ {\bfseries to} $|\bm{c}_i|-1$}
   \FOR {$c_2 \gets c_1$ {\bfseries to} $|\bm{c}_i|-1$}
   \STATE $\bm{d}^\kappa_{n} \gets (\bm{d}_i)_{c_1} \cdot (\bm{d}_i)_{c_2}$
   \STATE $\bm{c}^\kappa_{n} = ?$ \label{li:set_ck}
   \STATE $n \gets n + 1$
   \ENDFOR
   \ENDFOR
   \ENDFOR
\end{algorithmic}
\end{algorithm}
The crux of the problem is at line \ref{li:set_ck} of Algorithm \ref{alg:Incomplete-Sparse-Second-Order-Polynomial-Expansion}. Given the arbitrary columns involved in a polynomial feature of $\bm{A}_i$, we need to determine the corresponding column of $\bm{A}^\kappa_i$. We cannot simply reset a counter for each row as we did in the dense algorithm, because only columns corresponding to nonzero values are stored. Any time a column that would have held a zero value is implicitly skipped, the counter would err.
To develop a general algorithm, we require a mapping from a list of $K$ columns of $\bm{A}$ to a single column of $\bm{A}^\kappa$. If there are $D$ columns of $\bm{A}$ and $\binom{D}{K}$ columns of $\bm{A}^\kappa$, this can be accomplished by a bijective mapping of the following form:
\begin{equation}
(j_0, j_1, \dots, j_{K-1}) \rightarrowtail \hspace{-1.9ex} \twoheadrightarrow p_{j_0j_1 \dots j_{K-1}} \in \{0,1,\dots,\binom{D}{K}-1\}
\end{equation}
where (i) $ 0 \le j_0 \le j_1 \le \dots \le j_{K-1} < D$, (ii) $(j_0, j_1, \dots, j_{K-1})$ are elements of $\bm{c}$, and (iii) $p_{j_0j_1 \dots j_{K-1}}$ is an element of $\bm{c}^\kappa$. (For interaction features, the constraint is $ 0 \le j_0 < j_1 < \dots < j_{K-1} < D$.)
Stated more verbosely, we require a bijective mapping from tuples consisting of column indices of the original input to the column index of the corresponding product of features in the polynomial expansion. While any bijective mapping would suffice, a common order in which to produce polynomial features is $(0, 1)$, $(0, 2)$, $\dots$, $(0, D-1)$, $(1, 1)$, $(1, 2)$, $\dots$, $(1, D-1)$, $\dots$, $(D-1, D-1)$ for $K=2$, where the elements of the tuples are column indices. That is, the further to the right an index is, the sooner it is incremented. If we want our algorithm to be backwards compatible with existing models, the mapping must use this same ordering.
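As a purely illustrative aside (not part of the paper), this target ordering, with repeated indices such as $(0, 0)$ included for polynomial features, is exactly the lexicographic order produced by Python's \texttt{itertools.combinations\_with\_replacement} (or \texttt{itertools.combinations} for interaction features), and the position of each tuple in that order is the expansion column the mapping must return:
\begin{verbatim}
from itertools import combinations_with_replacement

D = 4
for p, (j0, j1) in enumerate(combinations_with_replacement(range(D), 2)):
    print(p, (j0, j1))
# 0 (0, 0)
# 1 (0, 1)
# 2 (0, 2)
# 3 (0, 3)
# 4 (1, 1)
# ...
# 9 (3, 3)
\end{verbatim}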
%column indicies of a row vector $\vec{x}$ of an $N \times D$ input matrix, and $p_{i_0i_1 \dots i_{k-1}}$ is a column index into the polynomial expansion vector for $\vec{x}$ where the product of elements corresponding to indices $i_0, i_1, \dots, i_{k-1}$ will be stored. \section{Construction of Mappings} %\textbf{\color{red} This looks like the wrong mapping, since it uses $ i < j$ rather than $i \le j$, i.e., it's "interaction features" rather than polynomial features. Am I missing something? I've tried to rewrite a bit --jfh} Within this section, $i$, $j$, and $k$ denote column indices. We will construct mappings for second ($K=2$) and third ($K=3$) degree interaction and polynomial expansions. To accomplish this, we will require the triangle and tetrahedral numbers. We denote the $n$th triangle number as $T_2(n) = \frac{n(n+1)}{2}$ and the $n$th tetrahedral number as $T_3(n) = \frac{n(n+1)(n+2)}{6}$. For reference, we list the first five triangle and tetrahedral numbers in the following table: \begin{tabular}{| c | c | c |} \caption{The first five triangle and tetrahedral numbers.} \hline $n$ & $T_2(n)$ & $T_3(n)$ \\ \hline 0 & 0 & 0 \\ 1 & 1 & 1 \\ 2 & 3 & 4 \\ 3 & 6 & 10 \\ 4 & 10 & 20 \\ \hline \end{tabular} \subsection{Second Degree Interaction Mapping} For second order interaction features, we require a bijective function that maps the elements of the ordered set \begin{equation} ((0, 1), (0, 2), \dots, (1, 2), (1, 3), \dots, (D-2, D-1)) \end{equation} to the elements of the ordered set \begin{equation} (0,1,\dots,\binom{D-1}{2}-1) \end{equation} For $D=4$, we can view the desired mapping $f$ as one that maps the coordinates of matrix cells to $0, 1, 2, 3$. If we fill the cells of the matrix with the codomain, the target matrix is as follows: \begin{align} \begin{bmatrix} x & 0 & 1 & 2 \\ x & x & 3 & 4 \\ x & x & x & 5 \\ x & x & x & x \end{bmatrix} \label{eq:4x4mat} \end{align} where the entry in row $i$, column $j$, displays the value of $f(i, j)$. It will be simpler to instead construct a preliminary mapping, $r(i, j)$ of the following form: \begin{align} \begin{bmatrix} x & 6 & 5 & 4 \\ x & x & 3 & 2 \\ x & x & x & 1 \\ x & x & x & x \end{bmatrix} \label{eq:preliminary4x4} \end{align} and then subtract the preliminary mapping from the total number of elements in the codomain to create the final mapping. Note that in equation \ref{eq:preliminary4x4} we have the following relation: \begin{equation} T_2(D-i-2) < e^i \le T_2(D-i-1) \end{equation} where $e^i$ is the value of any cell in row $i$ of equation \ref{eq:preliminary4x4}. Therefore, the following mapping will produce equation \ref{eq:preliminary4x4}: \begin{equation} r(i, j) = T_2(D-i-1) - (j - i - 1) \end{equation} We can now use this result to construct a mapping for equation \ref{4x4mat} by subtracting it from the size of the codomain: \begin{equation} f(i, j) = T_2(D-1) - [T_2(D-i-1) - (j - i - 1)] \end{equation} \subsection{Second Degree Polynomial Mapping} In this case, the target matrix is of the form \begin{align} \begin{bmatrix} 0 & 1 & 2 & 3 \\ x & 4 & 5 & 6 \\ x & x & 7 & 8 \\ x & x & x & 9 \end{bmatrix} \label{eq:4x4mat} \end{align} A very similar analysis can be done for the $K=2$ case to yield \begin{equation} f(i, j) = T_2(D) - [T_2(D-i) - (j - i)] \end{equation} \subsection{Third Degree Interaction Mapping} For $K=3$ we can no longer view the necessary function as mapping matrix coordinates to cell values; rather, $DxDxD$ tensor coordinates to cell values. 
For simplicity, we will instead list the column index tuples and their necessary mappings in a table. We shall consider the case of $D=5$. Again, it is simpler to first find a mapping $r(i, j, k)$ that maps the target indices to their reverse order (plus one), and then to create the final mapping by subtracting that mapping from the number of elements in the codomain. We therefore seek a preliminary mapping of the form
\begin{tabular}{| c | c | c |}
\hline
$(i, j, k)$ & $r(i,j,k)$ & $f(i,j,k)$ \\ \hline
$(0, 1, 2)$ & 10 & 0 \\
$(0, 1, 3)$ & 9 & 1 \\
$(0, 1, 4)$ & 8 & 2 \\
$(0, 2, 3)$ & 7 & 3 \\
$(0, 2, 4)$ & 6 & 4 \\
$(0, 3, 4)$ & 5 & 5 \\ \hline
$(1, 2, 3)$ & 4 & 6 \\
$(1, 2, 4)$ & 3 & 7 \\
$(1, 3, 4)$ & 2 & 8 \\ \hline
$(2, 3, 4)$ & 1 & 9 \\
\hline
\end{tabular}
The mapping has been partitioned according to the $i$ dimension. Note that within each partition is a mapping very similar to the $K=2$ equivalent, but with the indices shifted by a function of $T_3$. For example, when $i=0$, the indices are shifted by $T_3(2)$, when $i=1$, the shift is $T_3(1)$, and finally, when $i=2$, the shift is $T_3(0)$. The preliminary mapping is therefore
\begin{equation}
r(i, j, k) = T_3(D-i-3) + T_2(D-j-1) - (k-j-1)
\end{equation}
and the final mapping is therefore
\begin{align}
f(i, j, k) = T_3(D-2) - [&T_3(D-i-3) + \\ &T_2(D-j-1) - \\ &(k-j-1)]
\end{align}
\subsection{Third Degree Polynomial Mapping}
The analysis of the $K=3$ polynomial case is very similar to that of the $K=3$ interaction case. However, the target mapping now includes the edge of the simplex, as it included the diagonal of the matrix in the $K=2$ polynomial case. The analysis yields the mapping
\begin{equation}
f(i, j, k) = T_3(D) - [T_3(D-i-1) + T_2(D-j) - (k-j)]
\end{equation}
\section{Higher Order Mappings}
It can be seen that mappings to higher orders can be constructed inductively. A $K$-degree mapping is a function of the $K$-simplex numbers and reparameterized versions of all lower order mappings. However, in practice, higher degrees are not often used as the dimensionality of the expanded vectors becomes prohibitively large. A fourth degree polynomial expansion of a $D=1000$ vector would have $\binom{1000}{4} = 41,417,124,750$ dimensions.
\section{Final CSR Polynomial Expansion Algorithm}
With the mapping from columns of $\bm{A}$ to a column of $\bm{A}^\kappa$, we can now write the final form of the innermost loop of the algorithm from Section \ref{sec:final-algo}. Let the polynomial mapping for $K=2$ be denoted $h^2$. Then the innermost loop can be completed as follows:
%shown in Algorithm \ref{alg:Inner-Loop-of-Completed-Sparse-Second-Order-Polynomial-Expansion}.
\begin{algorithm}[H]
   \caption*{Completed Inner Loop of Algorithm \ref{alg:Incomplete-Sparse-Second-Order-Polynomial-Expansion}}
   \label{alg:Inner-Loop-of-Completed-Sparse-Second-Order-Polynomial-Expansion}
\begin{algorithmic}[1]
   \FOR {$c_2 \gets c_1$ {\bfseries to} $|\bm{c}_i|-1$}
   \STATE $j_0 \gets (\bm{c}_i)_{c_1}$
   \STATE $j_1 \gets (\bm{c}_i)_{c_2}$
   \STATE $c_p \gets h^2(j_0, j_1)$
   \STATE $\bm{d}^\kappa_{n} \gets (\bm{d}_i)_{c_1} \cdot (\bm{d}_i)_{c_2}$
   \STATE $\bm{c}^\kappa_{n} \gets c_p$
   \STATE $n \gets n + 1$
   \ENDFOR
\end{algorithmic}
\end{algorithm}
The algorithm can be generalized to higher degrees by simply adding more nested loops, using higher order mappings, modifying the output dimensionality, and adjusting the counting of nonzero polynomial features in line \ref{li:row_nnz_count}.
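As a concrete illustration of the completed procedure, the whole $K=2$ expansion, including the mapping $h^2(j_0, j_1) = T_2(D) - [T_2(D-j_0) - (j_1-j_0)]$ derived above, can be sketched in plain Python on top of SciPy's CSR arrays. This sketch is for exposition only (it is not the optimized Cython implementation used in the experiments) and assumes the column indices within each row are sorted:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix

def t2(n):
    """n-th triangle number T_2(n) = n(n+1)/2."""
    return n * (n + 1) // 2

def csr_poly2(A):
    """Degree-2 polynomial expansion of a CSR matrix, no densification."""
    N, D = A.shape
    A.sort_indices()              # the mapping assumes ascending column order
    D_out = t2(D)                 # number of degree-2 monomials with repetition
    indptr, indices, data = [0], [], []
    for i in range(N):
        cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
        vals = A.data[A.indptr[i]:A.indptr[i + 1]]
        for a in range(len(cols)):
            for b in range(a, len(cols)):
                j0, j1 = cols[a], cols[b]
                # h^2(j0, j1) = T2(D) - [T2(D - j0) - (j1 - j0)]
                indices.append(D_out - (t2(D - j0) - (j1 - j0)))
                data.append(vals[a] * vals[b])
        indptr.append(len(indices))
    return csr_matrix((data, indices, indptr), shape=(N, D_out))
\end{verbatim}
For a $D=4$ input, for example, this yields the ten output columns enumerated in the $K=2$ polynomial target matrix above, and only the nonzero products are ever computed.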
%For interaction features, the interaction mappings can be used in lieu of the polynomial mappings with the additional change of the output dimensionality and the number of nonzero features in each row (combinations without repetition instead of with).
\section{Higher Degree and Interaction Algorithms}
Most of the steps for higher degree and interaction expansions (as opposed to polynomial) are the same as for the $K=2$ polynomial case.
The differences are that for higher degrees, an extra loop is needed to iterate over another column index, and a different mapping is required.
For interaction expansions, the column indices are never allowed to equal each other, so each loop executes one less time, and an interaction mapping is required.
Also, for interaction expansions, the way $nnz$ is computed on line \ref{li:row_nnz_count} of Algorithm \ref{alg:Incomplete-Sparse-Second-Order-Polynomial-Expansion} changes: instead of $\binom{|\bm{c}_i|+K-1}{K}$ nonzero features per row (combinations with repetition), there are $\binom{|\bm{c}_i|}{K}$ (combinations without repetition).

\section{Time Complexity}
\subsection{Analytical}
\label{sec:analytical}
Calculating $K$-degree polynomial features via our method for a vector of dimensionality $D$ and density $d$ requires $T_K(dD)$ products.
The complexity of the algorithm, for fixed $K \ll dD$, is therefore
\begin{align}
&\Theta\left(T_K(dD)\right) =\\
&\Theta\left(dD(dD+1)(dD+2)\dots(dD+K-1)\right) =\\
&\Theta\left(d^KD^K\right)
\end{align}
For a matrix of size $N \times D$, the complexity is therefore $\Theta\left(Nd^KD^K\right)$.
The dense algorithm (Algorithm \ref{alg:Dense-Second-Order-Polynomial-Expansion}) does not leverage sparsity, and its complexity is $\Theta\left(ND^K\right)$.
Since $0 \le d \le 1$, the complexity of the sparse algorithm is lower than that of the dense algorithm by a factor of $d^K$.

\subsection{Empirical Results}
To empirically verify the average time complexity of our algorithm, we implemented both the sparse version and the baseline in the Cython programming language so that results would be directly comparable.
We sought the relationships between runtime and the instance count ($N$), the instance dimensionality ($D$), and the instance density ($d$).
To find these relationships, we individually varied $N$, $D$, and $d$ while holding the remaining two constant.
For each of these configurations, we generated $20$ matrices and took the average time to reduce variance.
The time to densify did not count against the dense algorithm.
Figure~\ref{fig:all-vs-time} summarizes our findings.
Varying the density ($d$) (column 1) shows that our algorithm scales polynomially with $d$, but that the baseline is unaffected by it.
The runtimes for both algorithms increase polynomially with the dimensionality ($D$), but ours at a significantly reduced rate.
Likewise, both algorithms scale linearly with the instance count ($N$), but ours with a much smaller constant factor.
Note that the point at which the sparse and dense algorithms intersect when varying $d$ is to the right of $0.5$, the density below which a matrix is conventionally considered sparse.
The point at which this happens will depend on $D$, but the results show that our algorithm is useful even for some dense matrices.
\begin{figure*}[ht!]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\textwidth]{all_vs_time.png}}
\caption{
%This figure shows how the runtimes of the sparse and dense
%algorithms scale with $d$, $D$, and $N$.
%Each point is the average of 20 samples.
%The time to densify did not count against the dense
%algorithm.
%The figure shows that our algorithm scales polynomially with %$d$ and $D$ and linearly with $N$, which is in accordance %with the analysis of section \ref{sec:analytical}. Summary performance comparison plots for quadratic (top) and cubic (bottom) cases showing how the algorithm's performance varies with $d$, $D$, and $N$; our sparse algorithm is shown in blue, the dense algorithm in red. Each point in each graph is an average of 20 runs, and the time used in densification is not included in the dense-algorithm timings. In the quadratic case, sparsity loses its advantage at about 67\%, and at about 77\% for the cubic case, though these precise intersections depend on $D$. In general, taking advantage of sparsity shows large benefits, so large that it's difficult to see that the performance does not actually change linearly with $D$ (column 2); figure \ref{fig:sparse_D_and_N_vs_time} gives further details.} \label{fig:all-vs-time} \end{center} \vskip -0.2in \end{figure*} \begin{figure*}[ht!] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\textwidth]{sparse_D_and_N_vs_time.png}} \caption{A closer view of only the sparse runtimes while varying $D$ (left) and $N$ (right) for $d = 0.2$. The left subplot shows that varying $D$ gives polynomial growth in runtime; quadratic for $K = 2$ (dashed line) and cubic for $K = 3$ (dotted line). These nonlinearities were not apparent in Figure \ref{fig:all-vs-time} due to the much greater runtimes of the dense algorithm. The right subplot shows linear growth in runtime for both. These findings are in accordance with the analysis of section \ref{sec:analytical}.} \label{fig:sparse_D_and_N_vs_time} \end{center} \vskip -0.2in \end{figure*} %\section{Discussion} %A natural question to ask is ``Why not just use a decent hashing algorithm?'' There are two answers. The first is that doing so does not avoid the \emph{computation} of features that contain a zero factor, and that alone prevents a reduction in big-theta performance. The second is that hashing is great when you don't have a clear understanding of the distribution of items, but our index-computing function is simpler to compute than most hash functions, and provides perfect storage density. \section{Conclusion} We have developed an algorithm for performing polynomial feature expansions on CSR matrices that scales polynomially with respect to the density of the matrix. The areas within machine learning that this work touches are not en vogue, but they are workhorses of industry, and every improvement in core representations has an impact across a broad range of applications. %This improvement could therefore spare the burning of much fossil fuel. \newpage \bibliography{nystrom_hughes_csr_polynomial_expansions} \bibliographystyle{icml2017} \end{document} % This document was modified from the file originally made available by % Pat Langley and Andrea Danyluk for ICML-2K. This version was % created by Lise Getoor and Tobias Scheffer, it was slightly modified % from the 2010 version by Thorsten Joachims & Johannes Fuernkranz, % slightly modified from the 2009 version by Kiri Wagstaff and % Sam Roweis's 2008 version, which is slightly modified from % Prasad Tadepalli's 2007 version which is a lightly % changed version of the previous year's version by Andrew Moore, % which was in turn edited from those of Kristian Kersting and % Codrina Lauth. Alex Smola contributed to the algorithmic style files.
{ "alphanum_fraction": 0.7304302409, "avg_line_length": 56.8348794063, "ext": "tex", "hexsha": "97c5fba23b3606223b1443b57beefffcb55632f2", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-02-08T17:36:00.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-08T17:28:39.000Z", "max_forks_repo_head_hexsha": "68ac222d7a826a344675d0e5196d82cb1711a69a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "AWNystrom/SparseInteraction", "max_forks_repo_path": "paper/Final/nystrom_hughes_csr_polynomial_expansions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68ac222d7a826a344675d0e5196d82cb1711a69a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "AWNystrom/SparseInteraction", "max_issues_repo_path": "paper/Final/nystrom_hughes_csr_polynomial_expansions.tex", "max_line_length": 747, "max_stars_count": 6, "max_stars_repo_head_hexsha": "68ac222d7a826a344675d0e5196d82cb1711a69a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "AWNystrom/SparseInteraction", "max_stars_repo_path": "paper/Final/nystrom_hughes_csr_polynomial_expansions.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-16T15:35:26.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-08T17:17:55.000Z", "num_tokens": 8610, "size": 30634 }
\documentclass[12pt,twoside]{naturefigs} % figs inline %\documentclass[12pt,twoside]{nature} \usepackage[english]{babel} \usepackage{times,subeqnarray} \usepackage{url} \usepackage{lineno} \linenumbers % following is for pdflatex vs. old(dvi) latex \newif\myifpdf \ifx\pdfoutput\undefined % \pdffalse % we are not running PDFLaTeX \usepackage[dvips]{graphicx} \else \pdfoutput=1 % we are running PDFLaTeX % \pdftrue \usepackage[pdftex]{graphicx} \fi % if you want to be more fully apa-style for submission, then use this %\usepackage{setspace,psypub,ulem} %\usepackage{setspace} % must come before psypub %\usepackage{psypub} %\usepackage{psydraft} \usepackage{one-in-margins} % use instead of psydraft for one-in-margs %\usepackage{apa} % apa must come last % using latex2e as standard, use the following for latex209 % \documentstyle [times,11pt,twoside,subeqnarray,psydraft,apa,epsf]{article} %\input netsym % tell pdflatex to prefer .pdf files over .png files!! \myifpdf \DeclareGraphicsExtensions{.pdf,.eps,.png,.jpg,.mps,.tif} \fi % use 0 for psypub format \parskip 2pt % for double-spacing, determines spacing %\doublespacing %\setstretch{1.7} \columnsep .25in % 3/8 in column separation \def\myheading{ Deep Predictive Learning } % no twoside for pure apa style, use \markright with heading only \pagestyle{myheadings} \markboth{\hspace{.5in} \myheading \hfill}{\hfill O'Reilly, Russin, \& Rohrlich \hspace{.5in}} \bibliographystyle{naturemag} \title{ Deep Predictive Learning as a Model of Human Learning } \author{Randall C. O'Reilly$^{1}$, Jacob L. Russin$^1$, \& John Rohrlich$^1$} \begin{document} % sloppy is the way to go! \sloppy \raggedbottom \spacing{1.5} \maketitle \begin{affiliations} \item Departments of Psychology and Computer Science, Center for Neuroscience, University of California, Davis \end{affiliations} \pagestyle{myheadings} \begin{abstract} How does the human brain learn new concepts from raw sensory experience, without explicit instruction? This longstanding mystery remains unsolved, despite recent demonstrations of the impressive learning power of deep convolutional neural networks (DCNN's), which notoriously require explicit training from massive human-labeled datasets\cite{KrizhevskySutskeverHinton12,LeCunBengioHinton15,Schmidhuber15a}. The plausibility of the error backpropagation\cite{RumelhartHintonWilliams86} powering these models has also long been questioned on biological grounds\cite{Crick89}, although various related biologically plausible mechanisms have been proposed\cite{OReilly96,XieSeung03,BengioMesnardFischerEtAl17}. Here, we show that a biologically based form of {\em predictive} error-driven learning, where error signals arise from differences between a prediction and what actually occurs\cite{Elman90,ElmanBatesKarmiloff-SmithEtAl96}, learns to systematically categorize 3D objects according to invariant shape properties from raw visual inputs alone. We found that these categories match human judgments on the same stimuli, and are consistent with neural representations in inferotemporal (IT) cortex in primates\cite{CadieuHongYaminsEtAl14}. Biologically, we propose that distinctive patterns of connectivity between the neocortex and thalamus\cite{ShermanGuillery06} drive alternating top-down prediction and bottom-up outcome representations over the pulvinar nucleus, at the alpha frequency (10 Hz), with the temporal difference driving error-driven learning throughout neocortex. 
We show that comparison predictive DCNN models lacking these biological features\cite{LotterKreimanCox16} did not learn object categories that go beyond the visual input structure.
Thus, we argue that incorporating these biological properties of the brain can potentially provide a better understanding of human learning at multiple levels relative to existing DCNN models.
\end{abstract}

\begin{figure}
\centering\includegraphics[width=6in]{figs/fig_deepleabra_wwi_abc_pred_model_frames}
\caption{\small \protect\spacing{1}
{\bf a)} Temporal evolution of information flow in the DeepLeabra algorithm predicting visual sequences, over two alpha cycles of 100 msec each.
In each alpha cycle, the V2 Deep layer (lamina 5, 6) uses the prior 100 msec of context to generate a prediction ({\em minus} phase) on the pulvinar thalamic relay cells (TRC).
The bottom-up outcome is driven by V1 5IB strong driver inputs ({\em plus} phase); error-driven learning occurs as a function of the {\em temporal difference} between these phases, in both superficial (lamina 2, 3) and deep layers, sent via broad pulvinar projections.
5IB bursting in V2 drives update of temporal context in V2 Deep layers, and also the plus phase in higher area TRC, to drive higher-level predictive learning.
See supplementary information (SI) for more details.
{\bf b)} The {\em What-Where-Integration, WWI} model.
The dorsal {\em Where} pathway learns first, using easily-abstracted {\em spatial blobs}, to predict object location based on prior motion, visual motion, and saccade efferent copy signals.
This drives strong top-down inputs to lower areas with accurate spatial predictions, leaving the {\em residual} error concentrated on {\em What} and {\em What * Where} integration.
The V3 and DP (dorsal prelunate) constitute the {\em What * Where} integration pathway, binding features and locations.
V4, TEO, and TE are the {\em What} pathway, learning abstracted object category representations, which also drive strong top-down inputs to lower areas.
{\em s} suffix = superficial, {\em d} = deep, {\em p} = pulvinar.
{\bf c)} Example sequence of 8 alpha cycles that the model learned to predict, with the reconstruction of each image based on the V1 Gabor filters ({\em V1 recon}), and the model-generated prediction (correlation $r$ prediction error shown).
The low resolution and reconstruction distortion impair visual assessment, but $r$ values are well above the $r$'s for each V1 state compared to the previous time step (mean = .38, min of .16 on frame 4 -- see SI for more analysis).
Eye icons indicate when a saccade occurred.}
\label{fig.model}
\end{figure}

Motivated by considerable biological evidence\cite{OReillyWyatteRohrlich14}, we hypothesize that sensory predictions in posterior neocortex are generated roughly every 100 msec (i.e., the {\em alpha} rhythm), by neurons in the deep layers of the neocortex that project to the pulvinar nucleus of the thalamus (Figure~\ref{fig.model}a).
The pulvinar represents this top-down prediction for roughly 75 msec of the alpha cycle as it develops, after which point the layer 5IB intrinsic-bursting neurons send strong, bottom-up driving input to the pulvinar, representing the actual sensory stimulus\cite{ShermanGuillery06}.
These 5IB neurons burst at the alpha frequency, determining the overall timing of the predictive learning cycle, along with other dynamic parameters of the thalamocortical circuit\cite{LorinczKekesiJuhaszEtAl09,FranceschettiGuatteoPanzicaEtAl95,SaalmannPinskWangEtAl12}.
The prediction error is implicit in the temporal difference between these two periods of activity within the alpha cycle over the pulvinar, which is consistent with the biologically plausible form of error-driven cortical learning used in our models\cite{OReilly96}.
The pulvinar sends broad projections back up to all of the areas that drive top-down predictions into it\cite{Shipp03,Mumford91}, thus broadcasting this error signal to drive local synaptic plasticity in the neocortex.
This mathematically approximates gradient descent to minimize overall prediction errors.
This computational framework makes sense of otherwise puzzling anatomical and physiological properties of the cortical and thalamic networks\cite{ShermanGuillery06}, and is consistent with a wide range of detailed neural and behavioral data regarding the effects of the alpha rhythm on learning and perception\cite{BuffaloFriesLandmanEtAl11,VanRullenKoch03,JensenBonnefondVanRullen12,FiebelkornKastner19}.
It has many testable differences from other existing theories of predictive learning that have been proposed over the years, at varying levels of biological detail\cite{Mumford92,RaoBallard99,KawatoHayakawaInui93,Friston05}.

A critical question for predictive learning is whether it can develop high-level, abstract ways of representing the raw sensory inputs, while learning from nothing but predicting these low-level visual inputs.
For instance, can predictive learning really eliminate the need for human-labeled image datasets where abstract category information is explicitly used to train object recognition models via error backpropagation?
From a cognitive perspective, there is considerable evidence that non-verbal primates and pre-verbal human infants naturally develop abstract categorical encodings of visual objects in IT cortex\cite{CadieuHongYaminsEtAl14}, without relying on any explicit external categorical labels.
Existing predictive-learning models based on error backpropagation\cite{LotterKreimanCox16} have not demonstrated the development of abstract, categorical representations.
Previous work has shown that predictive learning can be a useful method for pretraining networks that are subsequently trained using human-generated labels, but here we focus on the formation of systematic categories {\em de novo}.

To determine if our biologically based predictive learning model (Figure~\ref{fig.model}b) can naturally form such categorical encodings in the complete absence of external category labels, we showed the model brief movies of 156 3D object exemplars drawn from 20 different basic-level categories (e.g., car, stapler, table lamp, traffic cone, etc.) selected from the CU3D-100 dataset\cite{OReillyWyatteHerdEtAl13}.
The objects moved and rotated in 3D space over 8 movie frames, where each frame was sampled at the alpha frequency (Figure~\ref{fig.model}c).
There were also saccadic eye movements every other frame, introducing an additional predictive-learning challenge.
An efferent copy signal enabled full prediction of the effects of each eye movement, allowing the model to capture {\em predictive remapping}, a widely studied signature of predictive learning in the brain\cite{DuhamelColbyGoldberg92,CavanaghHuntAfrazEtAl10}.
The only learning signal available to the model was a prediction error generated by the temporal difference between what it predicted to see in the next frame and what was actually seen.
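To make the nature of this learning signal concrete, the following toy sketch shows the two-phase, temporal-difference idea in its simplest possible form: the minus-phase activity is a linear prediction of the next input, the plus-phase activity is the actual next input, and the weight change is driven purely by the contrast between the two phases (a contrastive update in the spirit of the error-driven learning rule cited above\cite{OReilly96}, which in this one-layer linear case reduces to the delta rule). This is a schematic caricature for illustration only; it is not the DeepLeabra implementation, no category labels appear anywhere, and the sizes and learning rate are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # toy layer size (arbitrary)
W = rng.normal(scale=0.1, size=(n, n))   # prediction weights
lr = 0.02

def alpha_cycle(W, x_t, x_next, lr):
    minus = W @ x_t        # minus phase: prediction generated from the prior input
    plus = x_next          # plus phase: the actual next input (the outcome)
    # contrastive two-phase update (equivalent to the delta rule in this case)
    W += lr * np.outer(plus - minus, x_t)
    return W, float(np.mean((plus - minus) ** 2))

# toy "world": the next frame is a fixed linear function of the current one
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
for step in range(5000):
    x_t = rng.normal(size=n)
    W, err = alpha_cycle(W, x_t, A @ x_t, lr)
print("prediction error after learning:", err)
\end{verbatim}
The only quantity that ever drives learning in this sketch is the difference between the two phases of activity within each cycle, which is the same property the full model exploits at scale.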
\begin{figure}
\centering\includegraphics[width=6in]{figs/fig_deepleabra_wwi_rsa_leabra_expt1}
\caption{\small \protect\spacing{1}
{\bf a)} Category similarity structure that developed in the highest layer, TE, of the biologically based predictive learning model, showing {\em 1-correlation} similarity of the TE representation for each 3D object against every other 3D object (156 total objects).
Blue cells have high similarity, and the model has learned block-diagonal clusters or categories of high-similarity groupings, contrasted against the dissimilar off-diagonal cells for other categories.
Clustering maximized the average {\em within - between} correlation distance (see SI).
All items from the same basic-level object categories (N=20) are reliably subsumed within learned categories.
{\bf b)} Human similarity ratings for the same 3D objects, presented with the V1 reconstruction (see Fig 1c) to capture the model's coarse perception, aggregated by 20 basic-level categories.
Each cell is 1 minus the proportion of times a given object pair was rated more similar than another pair (see SI).
The human matrix shares the same centroid categorical structure as the model (confirmed by permutation testing and agglomerative cluster analysis, see SI).
{\bf c)} Emergence of abstract category structure over the hierarchy of layers.
Red line = correlation similarity between the TE similarity matrix (shown in panel a) and all layers; black line = correlation similarity between V1 and all layers (1 = identical; 0 = orthogonal).
Both show that IT layers (TEO, TE) progressively differentiate from the raw input similarity structure present in V1, and, critically, that the model has learned structure beyond that present in the input.}
\label{fig.rsa}
\end{figure}

We performed a representational similarity analysis (RSA) on the learned activity patterns at each layer in the model, and found that the highest IT layer (TE) produced a systematic organization of the 156 3D objects into 5 categories (Figure~\ref{fig.rsa}a), which visually correspond to the overall shape of the objects (pyramid-shaped, vertically-elongated, round, boxy / square, and horizontally-elongated).
This organization of the objects matches that produced by humans making shape similarity judgments on the same set of objects, using the V1 reconstruction as shown in Figure~\ref{fig.model}c to capture the model's coarse-grained perception (Figure~\ref{fig.rsa}b; see supplementary information for methods and further analysis).
Critically, Figure~\ref{fig.rsa}c shows that the overall similarity structure present in IT layers (TEO, TE) of the biological model is significantly different from the similarity structure at the level of the V1 primary visual input.
Thus the model, despite being trained only to generate accurate visual input-level predictions, has learned to represent these objects in an abstract way that goes beyond the raw input-level information.
Furthermore, this abstract category organization reflects the overall visual shapes of the objects as judged by human participants, suggesting that the model is extracting geometrical shape information that is invariant to the differences in motion, rotation, and scaling that are present in the V1 visual inputs.
We further verified that at the highest IT levels in the model, a consistent, spatially-invariant representation is present across different views of the same object (e.g., the average correlation across frames within an object was .901).
This is also evident in Figure~\ref{fig.rsa}a by virtue of the close similarity across multiple objects within the same category. \begin{figure} \centering\includegraphics[width=4in]{figs/fig_deepleabra_wwi_rsa_leabra_macaque} \caption{\small \protect\spacing{1} Comparison of progression from V4 to IT in macaque monkey visual cortex (top row, from Cadieu et al., 2014) versus same progression in model (replotted using comparable color scale). Although the underlying categories are different, and the monkeys have a much richer multi-modal experience of the world to reinforce categories such as foods and faces, the model nevertheless shows a similar qualitative progression of stronger categorical structure in IT, where the block-diagonal highly similar representations are more consistent across categories, and the off-diagonal differences are stronger and more consistent as well (i.e., categories are also more clearly differentiated). Note that the critical difference in our model versus those compared in Cadieu et al. 2014 and related papers is that they explicitly trained their models on category labels, whereas our model is {\em entirely self-organizing} and has no external categorical training signal.} \label{fig.macaque} \end{figure} Further evidence for the progressive nature of representation development in our model is shown in Figure~\ref{fig.macaque}, which compares the similarity structures in layers V4 and IT in macaque monkeys\cite{CadieuHongYaminsEtAl14} with those in corresponding layers in our model. In both the monkeys and our model, the higher IT layer builds upon and clarifies the noisier structure that is emerging in the earlier V4 layer. Considerable other work has also compared DCNN representations with these same data from monkeys\cite{CadieuHongYaminsEtAl14}, but it is essential to appreciate that those DCNN models were explicitly trained on the category labels, making it somewhat less than surprising that such categorical representations developed. By contrast, we reiterate that our model has discovered its categorical representations entirely on its own, with no explicit categorical inputs or training of any kind. % (JLR) General comment: I know you have the github link, but it might be nice to really emphasize that your model is available online for anyone looking to compare models with empirical RSA (helps make the case for broader impact). I've seen more and more of those, e.g. that one with PredNet and fMRI data. \begin{figure} \centering\includegraphics[width=4.5in]{figs/fig_deepleabra_wwi_bp_prednet_simat} \caption{\small \protect\spacing{1} {\bf a)} Best-fitting category similarity for TE layer of the backpropagation (Bp) model with the same What / Where structure as the biological model. Only two broad categories are evident, and the lower {\em max} distance (0.3 vs. 1.5 in biological model) means that the patterns are highly similar overall. {\bf b)} Best-fitting similarity structure for the PredNet model, in the highest of its layers (layer 6), which is more differentiated than Bp (max = 0.75) but also less cleanly similar within categories (i.e., less solidly blue along the block diagonal), and overall follows a broad category structure similar to V1. 
{\bf c)} Comparison of similarity structures across layers in the Bp model (compare to Figure~2c): unlike in the biological model, the V1 structure is largely preserved across layers, and is little different from the structure that best fits the TE layer shown in panel {\bf a}, indicating that the model has not developed abstractions beyond the structure present in the visual input. Layer V3 is most directly influenced by spatial prediction errors, so it differs from both in strongly encoding position information. {\bf d)} The best fitting V1 structure, which has 2 broad categories and banana is in a third category by itself. The lack of dark blue on the block diagonal indicates that these categories are relatively weak, and every item is fairly dissimilar from every other. {\bf e)} The same similarities shown in panel {\bf a} for Bp TE also fit reasonably well sorted according to the V1 structure (and they have a similar average within - between contrast differences, of 0.0838 and 0.0513 -- see SI for details). {\bf f)} The similarity structure from the biological model resorted in the V1 structure does {\em not} fit well: the blue is not aligned along the block diagonal, and the yellow is not strictly off-diagonal. This is consistent with the large difference in average contrast distance: 0.5071 for the best categories vs. 0.3070 for the V1 categories.} \label{fig.bpred} \end{figure} Figure~\ref{fig.bpred} shows the results from a purely backpropagation-based (Bp) version of the same model architecture, and a standard PredNet model\cite{LotterKreimanCox16} with extensive hyperparameter optimization (see SI). In the Bp model, the highest layers in the network form a simple binary category structure overall, and the detailed item-level similarity structure does not diverge significantly from that present at the lowest V1 inputs, indicating that it has not formed novel systematic structured representations, in contrast to those formed in the biologically based model. Similar results were found in the PredNet model, where the highest layer representations remained very close to the V1 input structure. Thus, it is clear that the additional biologically derived properties are playing a critical role in the development of abstract categorical representations that go beyond the raw visual inputs. These properties include: excitatory bidirectional connections, inhibitory competition, and an additional Hebbian form of learning that serves as a regularizer (similar to weight decay) on top of predictive error-driven learning\cite{OReilly98,OReillyMunakata00}. Each of these properties could promote the formation of categorical representations. Bidirectional connections enable top-down signals to consistently shape lower-level representations, creating significant attractor dynamics that cause the entire network to settle into discrete categorical attractor states. By contrast, backpropagation networks typically lack these kinds of attractor dynamics, and this could contribute significantly to their relative lack of categorical learning. Hebbian learning drives the formation of representations that encode the principal components of activity correlations over time, which can help more categorical representations coalesce (and results below already indicate its importance). Inhibition, especially in combination with Hebbian learning, drives representations to specialize on more specific subsets of the space. 
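As a purely schematic illustration of the last two of these properties (a toy sketch only, not the Leabra implementation; the layer sizes, learning rate, and data are arbitrary), a normalized Hebbian update combined with a simple k-winners-take-all form of inhibitory competition makes individual units specialize on recurring clusters of correlated inputs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, k = 50, 10, 2               # arbitrary toy sizes
W = rng.normal(scale=0.1, size=(n_hid, n_in))
lr = 0.01

def kwta(act, k):
    # crude inhibitory competition: only the k most active units stay on
    out = np.zeros_like(act)
    winners = np.argsort(act)[-k:]
    out[winners] = np.maximum(act[winners], 0.0)
    return out

def hebbian_step(W, x, lr, k):
    y = kwta(W @ x, k)
    # Oja-style normalized Hebbian update: the winning units move toward
    # the inputs that drive them, so each comes to encode one cluster of
    # the input correlation structure
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W

# toy data: noisy samples of a few underlying prototype patterns
prototypes = rng.normal(size=(4, n_in))
for step in range(5000):
    x = prototypes[rng.integers(4)] + 0.1 * rng.normal(size=n_in)
    W = hebbian_step(W, x, lr, k)
\end{verbatim}
What such a feedforward sketch cannot capture are the bidirectional attractor dynamics discussed above, which is part of why the full biological model requires considerably more machinery.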
Ongoing work is attempting to determine which of these is essential in this case (perhaps all of them) by systematically introducing some of these properties into the backpropagation model, though this is difficult because full bidirectional recurrent activity propagation, which is essential for conveying error signals top-down in the biological network, is incompatible with the standard efficient form of error backpropagation, and requires much more computationally intensive and unstable forms of fully recurrent backpropagation\cite{WilliamsZipser92,Pineda87}. Furthermore, Hebbian learning requires inhibitory competition which is difficult to incorporate within the backpropagation framework. \begin{figure} \centering\includegraphics[width=4in]{figs/fig_deepleabra_wwi_leabra_manips} \caption{\small \protect\spacing{1} Effects of various manipulations on the extent to which TE representations differentiate from V1. {\em Std} is the same result shown in Figure 2c from the intact model for ease of comparison. All of the following manipulations significantly impair the development of abstract TE categorical representations (i.e., TE is more similar V1 and the other layers). {\bf a)} Dorsal {\em Where} pathway lesions, including lateral inferior parietal sulcus (LIP), V3, and dorsal prelunate (DP). This pathway is essential for regressing out location-based prediction errors, so that the residual errors concentrate feature-encoding errors that train the {\em What} pathway. {\bf b)} Allowing the deep layers full access to current-time information, thus effectively eliminating the prediction demand and turning the network into an auto-encoder, which significantly impairs representation development, and supports the importance of the challenge of predictive learning for developing deeper, more abstract representations. {\bf c)} Reducing the strength of Hebbian learning by 20\% (from 2.5 to 2), demonstrating the essential role played by this form of learning on shaping categorical representations. Eliminating Hebbian learning entirely (not shown) prevented the model from learning anything at all, as it also plays a critical regularization and shaping role on learning.} \label{fig.manips} \end{figure} Figure~\ref{fig.manips} shows just a few of the large number of parameter manipulations that have been conducted to develop and test the final architecture. For example, we hypothesized that separating the overall prediction problem between a spatial {\em Where} vs. non-spatial {\em What} pathway\cite{UngerleiderMishkin82,GoodaleMilner92}, would strongly benefit the formation of more abstract, categorical object representations in the {\em What} pathway. Specifically, the {\em Where} pathway can learn relatively quickly to predict the overall spatial trajectory of the object (and anticipate the effects of saccades), and thus effectively regress out that component of the overall prediction error, leaving the residual error concentrated in object feature information, which can train the ventral {\em What} pathway to develop abstract visual categories. Figure~\ref{fig.manips}a shows that, indeed, when the {\em Where} pathway is lesioned, the formation of abstract categorical representations in the intact {\em What} pathway is significantly impaired. 
Figure~\ref{fig.manips}b shows that full predictive learning, as compared to just encoding and decoding the current state (which is much easier computationally, and leads to much better overall accuracy), is also critical for the formation of abstract categorical representations --- prediction is a ``desirable difficulty''\cite{Bjork94}.
Finally, Figure~\ref{fig.manips}c shows the impact of reducing Hebbian learning, which impairs category learning as expected.

In conclusion, we have demonstrated that learning based strictly on predicting what will be seen next is, in conjunction with a number of critical biologically motivated network properties and mechanisms, capable of generating abstract, invariant categorical representations of the overall shapes of objects.
The nature of these shape representations closely matches human shape similarity judgments on the same objects.
Thus, predictive learning has the potential to go beyond the surface structure of its inputs, and develop systematic, abstract encodings of the ``deeper'' structure of the environment.
Relative to existing machine-learning-based approaches in ``deep learning'', which have generally focused on raw categorization accuracy measures using explicit category labels or other human-labeled inputs, the results here suggest that focusing more on the nature of what is learned in the model might provide a valuable alternative approach.
Considerable evidence in cognitive neuroscience suggests that the primary function of the many nested (``deep'') layers of neural processing in the neocortex is to {\em simplify} and aggressively {\em discard} information\cite{SimonsRensink05}, to produce precisely the kinds of extremely valuable abstractions such as object categories, and, ultimately, symbol-like representations that support high-level cognitive processes such as reasoning and problem-solving\cite{RougierNoelleBraverEtAl05,OReillyPetrovCohenEtAl14}.
Thus, particularly in the domain of predictive or generative learning, the metric of interest should not be the accuracy of prediction itself (which is indeed notably worse in our biologically based model compared to the DCNN-based PredNet and backpropagation models), but rather whether this learning process results in the formation of simpler, abstract representations of the world that can in turn support higher levels of cognitive function.

Considerable further work remains to be done to more precisely characterize the essential properties of our biologically motivated model necessary to produce this abstract form of learning, and to further explore the full scope of predictive learning across different domains.
We strongly suspect that extensive cross-modal predictive learning in real-world environments, including between sensory and motor systems, is a significant factor in infant development and could greatly multiply the opportunities for the formation of higher-order abstract representations that more compactly and systematically capture the structure of the world\cite{YuSmith12}.
Future versions of these models could thus potentially provide novel insights into the fundamental question of how deep an understanding a pre-verbal human, or a non-verbal primate, can develop\cite{SpelkeBreinlingerMacomberEtAl92,ElmanBatesKarmiloff-SmithEtAl96}, based on predictive learning mechanisms.
This would then represent the foundation upon which language and cultural learning builds, to shape the full extent of human intelligence.
\bibliography{ccnlab} \begin{addendum} \item We thank Dean Wyatte, Tom Hazy, Seth Herd, Kai Krueger, Tim Curran, David Sheinberg, Lew Harvey, Jessica Mollick, Will Chapman, Helene Devillez, and the rest of the CCN Lab for many helpful comments and suggestions. {\bf Funding} Supported by: ONR grants ONR N00014-19-1-2684 / N00014-18-1-2116, N00014-14-1-0670 / N00014-16-1-2128, N00014-18-C-2067, N00014-13-1-0067, D00014-12-C-0638. This work utilized the Janus supercomputer, which is supported by the National Science Foundation (award number CNS-0821794) and the University of Colorado Boulder. The Janus supercomputer is a joint effort of the University of Colorado Boulder, the University of Colorado Denver and the National Center for Atmospheric Research. {\bf Author Contributions} RCO developed the model, performed the non-PredNet simulations, and drafted the paper. JLR performed the PredNet simulations and analysis, and edited the paper. JR contributed to developing the model and edited the paper. \item[Competing Interests] R. C. O'Reilly is Chief Scientist at eCortex, Inc., which may derive indirect benefit from the work presented here. \item[Correspondence] Correspondence and requests for materials should be addressed to R.C. O'Reilly~(email: [email protected]). \item[Data and Materials Availability] All data and materials will be available at \url{https://github.com/ccnlab/deep-obj-cat} upon publication. \end{addendum} % \section*{Supplementary Information} % % \noindent Materials and Methods % % \noindent Figures S1 - S9 % % \noindent Table S1 \end{document}
{ "alphanum_fraction": 0.8108782606, "avg_line_length": 196.3154362416, "ext": "tex", "hexsha": "2bd0f2d4d6fbf845a4b949e41da64f3f62143edd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8989ae654c8d0e2e142d28e4190d6d0938251191", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "ccnlab/deep-obj-cat", "max_forks_repo_path": "papers/short2019/deep_pred_lrn_2019_sub2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8989ae654c8d0e2e142d28e4190d6d0938251191", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "ccnlab/deep-obj-cat", "max_issues_repo_path": "papers/short2019/deep_pred_lrn_2019_sub2.tex", "max_line_length": 2129, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8989ae654c8d0e2e142d28e4190d6d0938251191", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "ccnlab/deep-obj-cat", "max_stars_repo_path": "papers/short2019/deep_pred_lrn_2019_sub2.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-03T04:57:52.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-03T04:57:52.000Z", "num_tokens": 6511, "size": 29251 }
\section{Blog} % \item[Jan 28, 2016] { \qisun{ \begin{itemize} \item dataset's diversity and generality \item algorithm overview \item foveal vs periphery in optimization \end{itemize} }%qisun }%item \begin{description} \item[December 18, 2016] { \qisun{ I am starting to double this as a tutorial on how to use paper drafts to manage project progress. This is still work in progress and not yet done, but I hope to finish a draft within the next few months. }%qisun }%item \end{description}
{ "alphanum_fraction": 0.7251461988, "avg_line_length": 19.7307692308, "ext": "tex", "hexsha": "cb1e7985b0d552657192393080e9936be2d838a5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "78c5df57431bd9319c738b74f743d3fb27dfa80c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "snowymo/research-templates", "max_forks_repo_path": "TOG/note.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "78c5df57431bd9319c738b74f743d3fb27dfa80c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "snowymo/research-templates", "max_issues_repo_path": "TOG/note.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "78c5df57431bd9319c738b74f743d3fb27dfa80c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "snowymo/research-templates", "max_stars_repo_path": "TOG/note.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 150, "size": 513 }
\BoSSSopen{ParameterStudy/ParameterStudy} \graphicspath{{ParameterStudy/ParameterStudy.texbatch}} \BoSSScmd{ restart } \BoSSSexeSilent \BoSSScmd{ /// This guide will give you an example of how to conduct a parameter study with all the necessary steps. /// \section{Initialization of solver, processor and workflow} /// We start with initializing of the workflow } \BoSSSexe \BoSSScmd{ WorkflowMgm.Init("Name of Workflow"); } \BoSSSexe \BoSSScmd{ /// This line helps us manage the sessions later on while evaluating the results. /// Next, we connect to the database. } \BoSSSexe \BoSSScmd{ var myDb = OpenOrCreateDefaultDatabase(); } \BoSSSexe \BoSSScmd{ /// To check all the sessions in the current workflow, use the line: } \BoSSSexe \BoSSScmd{ WorkflowMgm.Sessions; } \BoSSSexe \BoSSScmd{ /// Now, all the necessary libraries need to be loaded } \BoSSSexe \BoSSScmd{ using System.Diagnostics;\newline using BoSSS.Foundation.Grid.RefElements;\newline using BoSSS.Application.IBM\_Solver;\newline using BoSSS.Platform.LinAlg; } \BoSSSexe \BoSSScmd{ /// The last part of the initialization is to set up processors. /// Here, we have two choices - either we do the calculations locally \code{myBatch} or on the network cluster \code{myHPC}. /// In order to use the network cluster, create a folder "cluster" in "P" and then set the HPC-Directory to this path. } \BoSSSexe \BoSSScmd{ var myBatch = new MiniBatchProcessorClient();\newline //var myHPC = new MsHPC2012Client(@"Cluster Directory"); } \BoSSSexe \BoSSScmd{ /// \section{Define geometrical boundaries} /// After loading the grid and giving the dimensions, we need to adjust the edges and their names. With the following code we assign every edge with number and name. Keep in mind that the name corresponds to the boundary condition (in this case "Pressure Dirichlet"). } \BoSSSexe \BoSSScmd{ g.EdgeTagNames.Add(1, "wall");\newline g.EdgeTagNames.Add(2, "Velocity\_Inlet");\newline g.EdgeTagNames.Add(3, "Pressure\_Dirichlet\_back");\newline g.EdgeTagNames.Add(4, "Pressure\_Dirichlet\_top");\newline \newline g.DefineEdgeTags(delegate (double[] X) \{\newline \btab byte ret = 0;\newline \btab if (Math.Abs(X[1]-(0.0))<= 1.0e-8)\newline \btab \btab ret = 1;\newline \btab if (Math.Abs(X[0]-(0.0))<= 1.0e-8)\newline \btab \btab ret = 2;\newline \btab if (Math.Abs(X[1]-(1.0))<= 1.0e-8)\newline \btab \btab ret = 3;\newline \btab if (Math.Abs(X[0]-(1.0))<= 1.0e-8)\newline \btab \btab ret = 4;\newline \btab return ret;\newline \newline \}); } \BoSSSexe \BoSSScmd{ /// \section{Angle/Velocity Profile} /// In this particular case we will use inflow profile represented via tan-function and the angle of inflow will be $30$ degrees. } \BoSSSexe \BoSSScmd{ string caseName = string.Format("k\{0\}\_\{1\}", k, grd);\newline Console.WriteLine("setting up: " + caseName);\newline \newline double beta = 30;\newline string CosBeta = Math.Cos(beta*Math.PI/180.0).ToString();\newline string SinBeta = Math.Sin(beta*Math.PI/180.0).ToString(); } \BoSSSexe \BoSSScmd{ /// These code lines set up the case name and introduce the sine and cosine /// functions to our simulation. Next, we define the velocities in /// $x$- and $y$-direction via a tan-function. These velocities and angles are only for this particular example and would not be suited for your simulation. 
}
\BoSSSexe
\BoSSScmd{
var UX = new Formula (string.Format("X=> \{0\}*Math.Atan(X[1]*5)*2.0/Math.PI",CosBeta),false);\newline
var UY = new Formula (string.Format("X=> \{0\}*Math.Atan(X[1]*5)*2.0/Math.PI",SinBeta),false);
}
\BoSSSexe
\BoSSScmd{
/// After the velocities and boundary conditions are set, we need to determine all other simulation parameters needed to proceed. The variable $\code{ctrl}$ is used to store the $\code{IBM\_Control}$-object. All other parameters are self-explanatory.
}
\BoSSSexe
\BoSSScmd{
var ctrl = new IBM\_Control();\newline
controls.Add(ctrl);\newline
\newline
ctrl.SessionName = caseName;\newline
ctrl.SetDatabase(myDb);\newline
ctrl.SetGrid(grd);\newline
ctrl.SetDGdegree(k);\newline
ctrl.NoOfMultigridLevels = int.MaxValue;
}
\BoSSSexe
\BoSSScmd{
/// \section{Boundary conditions/Initial values}
/// We move on to the part where we define the boundary conditions and initial values.
}
\BoSSSexe
\BoSSScmd{
ctrl.AddBoundaryValue("wall");\newline
ctrl.AddBoundaryValue("Velocity\_Inlet");\newline
ctrl.AddBoundaryValue("Pressure\_Dirichlet\_back");\newline
ctrl.AddBoundaryValue("Pressure\_Dirichlet\_top");\newline
ctrl.BoundaryValues["Velocity\_Inlet"].Value.Add("VelocityX",UX);\newline
ctrl.BoundaryValues["Velocity\_Inlet"].Value.Add("VelocityY",UY);
}
\BoSSSexe
\BoSSScmd{
/// and for the initial values
}
\BoSSSexe
\BoSSScmd{
ctrl.InitialValues.Add("VelocityX", new Formula("X => 0.0", false));\newline
ctrl.InitialValues.Add("VelocityY", new Formula("X => 0.0", false));\newline
ctrl.InitialValues.Add("Pressure", new Formula("X => 0.0", false));\newline
ctrl.InitialValues.Add("Phi", new Formula("X => -1.0", false));
}
\BoSSSexe
\BoSSScmd{
/// \section{Fluid properties}
/// Here we set up the density and the Reynolds number; keep in mind that the calculations are dimensionless, so leave the values as seen above ($100$ is an example value).
}
\BoSSSexe
\BoSSScmd{
double reynolds = 100;\newline
ctrl.PhysicalParameters.rho\_A = 1;\newline
ctrl.PhysicalParameters.mu\_A = 1.0/reynolds;
}
\BoSSSexe
\BoSSScmd{
/// \section{Simulation options}
/// We set the simulation parameters, such as the time-step size, the end time and the number of time-steps.
}
\BoSSSexe
\BoSSScmd{
ctrl.Timestepper\_Scheme = IBM\_Control.TimesteppingScheme.BDF2;\newline
double dt = 7e-2;\newline
ctrl.dtMax = dt;\newline
ctrl.dtMin = dt;\newline
ctrl.Endtime = 1e16;\newline
ctrl.NoOfTimesteps = 100;
}
\BoSSSexe
\BoSSScmd{
/// For the time-stepping scheme, you can choose either BDF2 or ImplicitEuler.
/// \section{Starting of simulation}
/// You have two possible ways to start a simulation - locally on the PC via $\code{myBatch}$ or on the network cluster $\code{myHPC}$.
}
\BoSSSexe
\BoSSScmd{
Console.WriteLine(" Submitting to Cluster: " + ctrl.SessionName);\newline
ctrl.RunBatch(myHPC,NumberOfMPIProcs:1);\newline
\newline
Console.WriteLine(" Submitting " + ctrl.SessionName);\newline
ctrl.RunBatch(myBatch,NumberOfMPIProcs:1, UseComputerNodesExclusive:true);
}
\BoSSSexe
\BoSSScmd{
/// \section{Evaluation and Error Calculation} /// After all of the desired simulations are finished, you need to evaluate the different parameters and their effect on the whole system.
Typing the following command gives you a list of all simulations with their status (FinishedSuccessful or with certain errors) } \BoSSSexe \BoSSScmd{ WorkflowMgm.AllJobs.Select(kv => kv.Key + ": \textbackslash t" + kv.Value.Status); } \BoSSSexe \BoSSScmd{ /// With the next command line you are able to select a certain session(simulation) and see the different time-steps for control purposes. } \BoSSSexe \BoSSScmd{ WorkflowMgm.AllJobs.ElementAt(9).Value.Stdout; } \BoSSSexe \BoSSScmd{ /// \subsection{$L^2$-Error} /// This section introduces the calculation of the $L^2$-Error. } \BoSSSexe \BoSSScmd{ ITimestepInfo[] AllSolutionS = WorkflowMgm.AllJobs.Select( kv => kv.Value.LatestSession.Timesteps.Last()).ToArray(); } \BoSSSexe \BoSSScmd{ ITimestepInfo[] k1\_SolutionS = AllSolutionS.Where(ts => ts.Fields.Single(f => f.Identification == "Pressure").Basis.Degree == 0).ToArray();\newline ITimestepInfo[] k2\_SolutionS = AllSolutionS.Where(ts => ts.Fields.Single(f => f.Identification == "Pressure").Basis.Degree == 1).ToArray();\newline ITimestepInfo[] k3\_SolutionS = AllSolutionS.Where(ts => ts.Fields.Single(f => f.Identification == "Pressure").Basis.Degree == 2).ToArray(); } \BoSSSexe \BoSSScmd{ k1\_SolutionS.Select(ts => ts.Fields.Single(f => f.Identification == "Pressure").Basis.Degree); } \BoSSSexe \BoSSScmd{ double[] GridRes;\newline Dictionary<string, double[]> L2Errors;\newline DGFieldComparison.ComputeErrors(new[]\{"VelocityX","VelocityY"\}, k1\_SolutionS, out GridRes, out L2Errors); } \BoSSSexe \BoSSScmd{ /// To check the particular errors, type } \BoSSSexe \BoSSScmd{ GridRes; } \BoSSSexe \BoSSScmd{ L2Errors["VelocityX"]; } \BoSSSexe \BoSSScmd{ L2Errors["VelocityY"]; } \BoSSSexe \BoSSScmd{ /// \section{Plotting of errors} /// This section gives a brief example of how to plot the erros and all the data from the previous simulations. 
} \BoSSSexe \BoSSScmd{ Plot(GridRes,L2Errors["VelocityX"],"VelXErr","-oy",\newline \btab GridRes,L2Errors["VelocityY"],"VelXErr","-xb",logX:true,logY:true); } \BoSSSexe \BoSSScmd{ /// for a plot with more specifics and more possible adjustments, here is an example code } \BoSSSexe \BoSSScmd{ var FancyPlot = new Plot2Ddata(); } \BoSSSexe \BoSSScmd{ FancyPlot.LogX = true;\newline FancyPlot.LogY = true; } \BoSSSexe \BoSSScmd{ var k1plot = new Plot2Ddata.XYvalues("VelXErr-k1",GridRes,L2Errors["VelocityY"]); } \BoSSSexe \BoSSScmd{ ArrayTools.AddToArray(k1plot, ref FancyPlot.dataGroups); } \BoSSSexe \BoSSScmd{ var CL = FancyPlot.ToGnuplot().PlotCairolatex(); } \BoSSSexe \BoSSScmd{ CL.PlotNow(); } \BoSSSexe \BoSSScmd{ /// \section{Exporting the session table} } \BoSSSexe \BoSSScmd{ static class AddCols \{\newline \btab static public object SipMatrixAssembly\_time(ISessionInfo SI) \{\newline \btab \btab var mcr = SI.GetProfiling()[0];\newline \btab \btab var ndS = mcr.FindChildren("SipMatrixAssembly");\newline \btab \btab var nd = ndS.ElementAt(0);\newline \btab \btab return nd.TimeSpentInMethod.TotalSeconds / nd.CallCount;\newline \btab \}\newline \btab static public object Aggregation\_basis\_init\_time(ISessionInfo SI) \{\newline \btab \btab var mcr = SI.GetProfiling()[0];\newline \btab \btab var ndS = mcr.FindChildren("Aggregation\_basis\_init");\newline \btab \btab var nd = ndS.ElementAt(0);\newline \btab \btab return nd.TimeSpentInMethod.TotalSeconds / nd.CallCount;\newline \btab \}\newline \btab static public object Solver\_Init\_time(ISessionInfo SI) \{\newline \btab \btab var mcr = SI.GetProfiling()[0];\newline \btab \btab var ndS = mcr.FindChildren("Solver\_Init");\newline \btab \btab var nd = ndS.ElementAt(0);\newline \btab \btab //Console.WriteLine("Number of nodes: " + ndS.Count() + " cc " + nd.CallCount );\newline \btab \btab return nd.TimeSpentInMethod.TotalSeconds / nd.CallCount;\newline \btab \}\newline \btab static public object Solver\_Run\_time(ISessionInfo SI) \{\newline \btab \btab var mcr = SI.GetProfiling()[0];\newline \btab \btab var ndS = mcr.FindChildren("Solver\_Run");\newline \btab \btab var nd = ndS.ElementAt(0);\newline \btab \btab return nd.TimeSpentInMethod.TotalSeconds / nd.CallCount;\newline \btab \}\newline \} } \BoSSSexe \BoSSScmd{ /// this code adds additional/user-defined colums. Now, we want to export he saved session table in a file. } \BoSSSexe \BoSSScmd{ var SessTab = WorkflowMgm.SessionTable; } \BoSSSexe \BoSSScmd{ SessTab = SessTab.ExtractColumns(AllCols.ToArray()); } \BoSSSexe \BoSSScmd{ using System.IO; } \BoSSSexe \BoSSScmd{ /// Here, we define the filename } \BoSSSexe \BoSSScmd{ var now = DateTime.Now;\newline SessTab.TableName = "SolverRuns--" + now.Year + "-" + now.Month + "-" + now.Day;\newline string docpath = Path.Combine(CurrentDocDir, SessTab.TableName + ".json"); } \BoSSSexe \BoSSScmd{ /// saving the session table as a file could also be done in our git reposatory } \BoSSSexe \BoSSScmd{ SessTab.SaveToFile(docpath); } \BoSSSexe \BoSSScmd{ /// } \BoSSSexe
{ "alphanum_fraction": 0.7292434239, "avg_line_length": 32.7837078652, "ext": "tex", "hexsha": "ac31b49f7c5bd0e16d547892b1aca329d480452f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "leyel/BoSSS", "max_forks_repo_path": "doc/handbook/ParameterStudy/ParameterStudy.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "leyel/BoSSS", "max_issues_repo_path": "doc/handbook/ParameterStudy/ParameterStudy.tex", "max_line_length": 267, "max_stars_count": 1, "max_stars_repo_head_hexsha": "39f58a1a64a55e44f51384022aada20a5b425230", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "leyel/BoSSS", "max_stars_repo_path": "doc/handbook/ParameterStudy/ParameterStudy.tex", "max_stars_repo_stars_event_max_datetime": "2018-12-20T10:55:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-12-20T10:55:58.000Z", "num_tokens": 3547, "size": 11671 }
\documentclass[11pt]{article} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{bm} \usepackage{url} \usepackage{fancyvrb} \usepackage{graphicx} \usepackage{float} \title{APPM 5510 HW 7} \author{Zane Jakobs} \maketitle \begin{document} \subsection*{1(a)} Plots are not included here because both filters blew up to machine infinity within 7 assimilation cycles, so I suspect there is an error in my implementation (and if this line is still here when I turn this in, that means I ran out of time trying to find it), which is attached to the end of the assignment (and will be on my GitHub). If you want to run this code and don't want to write your own makefile, let me know and I'll send you mine. \subsection*{2(a)} Solving numerically with the Python code at the end, we have \[ C = \begin{bmatrix} 0.5 & 0.25 & -0.25 \\ 0.25 & 0.5 & 0.25 \\ -0.25 & o.25 & 0.5\end{bmatrix} \] \subsection*{2(b)} Again with the Python code, we get something different--since the values are a bit non-integral, here's the terminal output:\\ \begin{Verbatim} [[ 0.43250799 0.06829073 -0.36421725] [ 0.06829073 0.01078275 -0.05750799] [-0.36421725 -0.05750799 0.30670927]] \end{Verbatim} This is different from the above, because the matrix $\Sigma^{-1}\Sigma^2\Sigma^{-1}$ (where $^{-1}$ indicates taking the pseudoinverse) is not quite the identity (the bottom right element is a zero), so the formula for $C$ is a bit different than in the EnKF case. \subsection*{2(c)} This time, we get the same result as in part (a), because this time, we put the zero eigenvalue in the third column, which multiplies the zero in $\Sigma^{-1}\Sigma^2\Sigma^{-1}$, and thus does not introduce a new zero into the multiplication chain, which does happen in part (b), since there, the zero eigenvalue multiplies a nonzero part of $\Sigma^{-1}\Sigma^2\Sigma^{-1}$. \section*{Filter Implementation} \begin{Verbatim}[xleftmargin=-2cm] #ifndef ENKF_HPP #define ENKF_HPP #include <Eigen/Core> #include <Eigen/Dense> #include <Eigen/Cholesky> #include <vector> #include <functional> #include <utility> #include <cassert> #include <random> #include <cmath> #include <iostream> namespace fastmath { using Vec = Eigen::VectorXd; using Mat = Eigen::MatrixXd; template<class State_t, class Obs_t, class ObsOp_t, class... ForecastArgs> class EnKF { public: constexpr EnKF() {}; //filter with one observation /* virtual void filter(const Obs_t& y);*/ virtual Obs_t observe(const State_t& x); virtual State_t forecast(State_t& x, ForecastArgs&... args); virtual ~EnKF() {}; }; //scalar EnKF with linear observations template<class... 
ForecastArgs> class VectorEnKF : public EnKF<Vec, Vec, Mat, ForecastArgs...> { private: Mat m_H; Vec m_state; Vec m_obs = m_H * m_state; std::function<Vec(Vec&, ForecastArgs&...)> m_forecast; Vec m_ensemble_mean; Mat m_ensemble_covariance; Mat m_obs_covariance; int m_ensemble_size; int m_dimension; Mat m_ensemble = Mat::Zero(m_dimension, m_ensemble_size); bool m_initialized_ensemble = false; Mat m_background_covariance = Mat::Identity(m_dimension, m_dimension); Mat m_ensemble_transform = gen_covariance_transform(m_ensemble_covariance); public: VectorEnKF(const std::function<Vec(Vec&, ForecastArgs&...)>& n_forecast, const Mat& n_H, const Vec& n_initial_state, const Mat& n_observe_err_covariance, const Vec& n_initial_ensemble_mean, const Mat& n_initial_ensemble_covariance, const int n_ensemble_size ) : EnKF<Vec, Vec, Mat, ForecastArgs...>(), m_H(n_H), m_state(n_initial_state), m_forecast(n_forecast), m_ensemble_mean(n_initial_ensemble_mean), m_ensemble_covariance(n_initial_ensemble_covariance), m_obs_covariance(n_observe_err_covariance), m_ensemble_size(n_ensemble_size), m_dimension(static_cast<int>(n_initial_state.size())) {}; void set_state(const Vec& n_state) noexcept { m_state = n_state; } void set_ensemble_size(const int n_ensemble_size) noexcept { assert(n_ensemble_size > 0); m_ensemble_size = n_ensemble_size; } Vec state() { return m_state; } int ensemble_size() { return m_ensemble_size; } Vec ensemble_mean() { return m_ensemble_mean; } Mat ensemble_covariance() { return m_ensemble_covariance; } Vec observe(const Vec& x){ return m_H * x; } Vec forecast(Vec& x, ForecastArgs&... args){ return m_forecast(x, args...); } private: Mat gen_covariance_transform(const Mat& target_covariance) const { return target_covariance.ldlt().matrixL(); } Mat gen_obs_perturbations() { auto rcovtransform = gen_covariance_transform(m_obs_covariance); std::random_device rd{}; std::mt19937 gen{rd()}; std::normal_distribution<> dis{0.0, 1.0}; Mat A(m_H.rows(), m_ensemble_size); Vec epsilon(m_H.rows()); for(auto i = 0; i < m_ensemble_size; ++i){ for(auto e = 0; e < m_H.rows(); ++e){ epsilon[e] = dis(gen); } A.col(i) = epsilon; } return rcovtransform * A; } Mat gen_ensemble_perturbations(bool gen_cov_transform=true) { std::random_device rd{}; std::mt19937 gen{rd()}; std::normal_distribution<> dis{0.0, 1.0}; Mat A(m_dimension,m_ensemble_size); Vec epsilon(m_dimension); if(gen_cov_transform){ m_ensemble_transform = gen_covariance_transform(m_ensemble_covariance); } for(auto i = 0; i < m_ensemble_size; ++i){ for(auto e = 0; e < m_dimension; ++e){ epsilon[e] = dis(gen); } A.col(i) = epsilon + m_ensemble_mean; } return m_ensemble_transform * A;// * A; } /*std::pair<Vec,Mat> sample_mean_covariance(const std::vector<Vec>& sample) { Vec mu(sample[0].size()); for(auto i = 0; i < m_ensemble_size; ++i){ for(auto i = 0; i < sample[0].size(); ++i){ mu[i] += s[i]; } } mu /= sample.size(); Mat cov = Mat::Zero(sample.size(), sample.size()); for(const auto& s : sample){ auto delta = s - mu; cov.noalias() += delta * delta.transpose(); } cov /= (sample.size() - 1); return std::make_pair(mu, cov); }*/ //A is the result of a call to form_A Mat kalman_gain_A(const Mat& A) { auto V = m_H * A; auto vvr = V * V.transpose();// + m_obs_covariance; return A * V.transpose() * vvr.inverse(); } //copy ensemble here Mat form_A(Mat ensemble) { for(auto i = 0; i < m_ensemble_size; ++i){ ensemble.col(i) -= m_ensemble_mean; } return ensemble / std::sqrt(m_ensemble_size - 1); } Mat kalman_gain_ensemble(const Mat& ensemble) { auto A = 
form_A(ensemble); return kalman_gain_A(A); } //B matrix given an ensemble Mat ensemble_prior_covariance(const Mat& A) { return A * A.transpose(); } Vec ensemble_member_update(const Mat& K, const Vec& perturbed_y, const Vec& xi) { auto err = perturbed_y - m_H * xi; return xi + K * err; } Mat ensemble_update(const Mat& K, const Mat& perturbed_yvals, const Mat& ensemble_xvals) { Mat updated_xvals(ensemble_xvals.rows(), ensemble_xvals.cols()); for(auto i = 0; i < m_ensemble_size; ++i){ updated_xvals.col(i) = ensemble_member_update(K, perturbed_yvals.col(i), ensemble_xvals.col(i)); } return updated_xvals; } Mat ETKF_ensemble_update_mat(Mat& A) { auto V = m_H * A; auto Id = Mat::Identity(m_ensemble_size); auto emat = Id + V.transpose() * m_obs_covariance.inverse() * V; Eigen::SelfAdjointEigenSolver<Mat> esolver(emat); assert(esolver.info() == Eigen::Success);//, "ETKF eigendecomposition failed."); Mat Gamma = esolver.eigenvalues().asDiagonal();/* I + Gamma in the notes*/ auto Q = esolver.eigenvectors(); for(auto i = 0; i < Gamma.cols(); ++i){ Gamma(i,i) = std::sqrt(Gamma(i,i)); } return Q * Gamma * Q.transpose(); } Mat ETKF_posterior_ensemble(Mat& A) { // A is prior ensemble auto X = ETKF_ensemble_update_mat(A); auto one = Vec::Ones(m_ensemble_size); auto aplus = A * one / m_ensemble_size; return A - aplus * one.transpose(); } Mat ensemble_posterior_covariance(const Mat& B, const Mat& K) { return B - K * m_H * B; } public: void ETKF_filter(Vec& obs, ForecastArgs&... args) { for(auto i = 0; i < m_ensemble_size; ++i){ m_ensemble.col(i) = m_state; } m_ensemble += gen_ensemble_perturbations(); m_ensemble = ETKF_posterior_ensemble(m_ensemble); m_state = m_ensemble.rowwise().mean(); m_ensemble_mean = m_state; m_ensemble_covariance = m_ensemble * m_ensemble.transpose(); } void filter(Vec& obs, ForecastArgs&... 
args) { // generate ensemble if(!m_initialized_ensemble){ for(auto i = 0; i < m_ensemble_size; ++i){ m_ensemble.col(i) = m_state; } //m_initialized_ensemble = true; } m_ensemble += gen_ensemble_perturbations(); /* forecast ensemble */ for(auto i = 0; i < m_ensemble_size; ++i){ Vec xstate = m_ensemble.col(i); m_ensemble.col(i) = m_forecast(xstate, args...); } m_ensemble_mean = m_ensemble.rowwise().mean(); auto A = form_A(m_ensemble); m_background_covariance = A * A.transpose(); Mat obs_perturbed = gen_obs_perturbations(); for(auto i = 0; i < m_ensemble_size; ++i){ obs_perturbed.col(i) += obs; } auto K = kalman_gain_A(A); A = ensemble_update(K, obs_perturbed / std::sqrt(m_ensemble_size-1), A); m_ensemble_covariance = A * A.transpose(); /* get ensemble back */ A *= std::sqrt(m_ensemble_size - 1); //add prior mean back and re-scale m_ensemble_mean = std::sqrt(m_ensemble_size - 1) * (A.rowwise().mean() + m_ensemble_mean); m_state = m_ensemble_mean; } }; }//namespace fastmath #endif \end{Verbatim} \section*{Filter Calling Code} \begin{Verbatim}[xleftmargin=-2cm] #include "enkf.hpp" #include <Eigen/Core> #include <vector> #include <fstream> using Vec = Eigen::VectorXd; using Mat = Eigen::MatrixXd; Vec Henon_Map(Vec& x, double& a, double& b){ auto xn = x[0]; auto yn = x[1]; x[0] = 1 - a * xn * xn + yn; x[1] = b * xn; return x; } double obs_err(){ std::random_device rd{}; std::mt19937 gen{rd()}; std::normal_distribution<> dis{0.0, std::sqrt(0.02)}; return dis(gen); } int main(int argc, char** argv){ using namespace fastmath; std::function<Vec(Vec&, double&, double&)> fmap = Henon_Map; int ensemble_size = 31; auto initial_state = Vec::Zero(2); Mat H(1,2); H << 1.0, 0.0; Vec true_state(2); true_state << 0.0, 0.0; Vec ensemble_mean = Vec::Zero(ensemble_size); Mat ensemble_cov = 0.01 * Mat::Identity(2,2); Mat obs_err_cov = 0.02 * Mat::Identity(1,1); int n_assim_cycles = 100; double a = 1.4, b = 0.3; Mat state_mat(2, n_assim_cycles); for(auto i = 0; i < n_assim_cycles; ++i){ true_state = Henon_Map(true_state, a, b); state_mat.col(i) = true_state; } std::vector<double> filter_err, etkf_err; auto enkf_runner = VectorEnKF<double, double>(fmap, H, initial_state, obs_err_cov, initial_state, ensemble_cov, ensemble_size); auto etkf_runner = VectorEnKF<double, double>(fmap, H, initial_state, obs_err_cov, initial_state, ensemble_cov, ensemble_size); Vec y(1); double yval; Vec analysis_mean; for(auto i = 0; i < n_assim_cycles; ++i){ true_state = state_mat.col(i); yval = true_state[0] + obs_err(); y[0] = yval; enkf_runner.filter(y, a, b); etkf_runner.filter(y, a, b); analysis_mean = enkf_runner.state(); Vec etkf_mean = etkf_runner.state(); Vec err = true_state - analysis_mean; filter_err.push_back(err.norm()); err = true_state - etkf_mean; etkf_err.push_back(err.norm()); } std::ofstream fwriter("hw7.csv"), twriter("hw7etkf.csv"); for(const auto& e : filter_err){ fwriter << e << '\n'; } for(const auto& e : etkf_err){ twriter << e << '\n'; } return 0; } \end{Verbatim} \section*{Python Code (Problem 2)} \begin{Verbatim}[xleftmargin=-2cm] import numpy as np def gamma_pseudoinverse(gmat, n): g = np.zeros_like(gmat) for i in range(n): g[i,i] = 1/ np.sqrt(1 + g[i,i]) return g if __name__ =='__main__': x = 2 ** 0.5 * np.array([[1.0,1.0,0.0],[-1.0,0.0, 1.0],[0.0,-1.0,-1.0]]) H = np.eye(3) R = np.eye(3) xdelta = x - x.mean(axis=0, keepdims=True) B = 0.5 * xdelta.T @ xdelta #H is identity K = B @ np.linalg.inv(B + R) #again, H is identity C = (H - K) @ B print("Posterior covariance: ", C) A = xdelta.T / (2 ** 0.5) V = H @ A 
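# The block below computes the EAKF posterior covariance for problem 2(b):
# the SVD of A^T A provides the singular values used to form the
# pseudo-identity Sigma^+ Sigma^2 Sigma^+, the eigendecomposition of V^T V
# (with R = I) provides Gamma and Q, the smallest eigenvalue is then zeroed
# by hand (eigh returns eigenvalues in ascending order), and finally
#   C = A Q (I + Gamma)^(-1/2) Sigma^+ Sigma^2 Sigma^+ (I + Gamma)^(-1/2) Q^T A^T.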
F, sgm, W = np.linalg.svd(A.T @ A) sigma = np.diag(sgm) sigmaInv = np.linalg.pinv(sigma) sigmaPseudoId = sigmaInv @ sigma @ sigma @ sigmaInv #V^T V is Hermitian, and R is identity so V^T R V = V^T V Gamma, Q = np.linalg.eigh(V.T @ V) Gamma[0] = 0.0 #2b GammaInv = np.diag(1.0/np.sqrt(Gamma + 1)) C = A @ Q @ GammaInv @ sigmaPseudoId @ GammaInv @ Q.T @ A.T print("EAKF C: ", C) #2c, same as above, just flip the columns in Gamma and Q A = xdelta.T / (2 ** 0.5) V = H @ A F, sgm, W = np.linalg.svd(A.T @ A) sigma = np.diag(sgm) sigmaInv = np.linalg.pinv(sigma) sigmaPseudoId = sigmaInv @ sigma @ sigma @ sigmaInv Gamma, Q = np.linalg.eigh(V.T @ V) Qp = np.fliplr(Q) Gamma[0] = Gamma[2] Gamma[2] = 0.0 GammaInv = np.diag(1.0/np.sqrt(Gamma + 1)) print(sigmaPseudoId) C = A @ Qp @ GammaInv @ sigmaPseudoId @ GammaInv @ Qp.T @ A.T print("EAKF C: ", C) \end{Verbatim} \end{document}
{ "alphanum_fraction": 0.673800457, "avg_line_length": 22.9144851658, "ext": "tex", "hexsha": "7206485cdbbe12d74b44acacc98f5c795baf1c4d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "DiffeoInvariant/Data-Assimilation", "max_forks_repo_path": "HW-Text/hw7.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "DiffeoInvariant/Data-Assimilation", "max_issues_repo_path": "HW-Text/hw7.tex", "max_line_length": 463, "max_stars_count": null, "max_stars_repo_head_hexsha": "7afe25b1efb87a6988bea6df34e17650d9eb86fa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "DiffeoInvariant/Data-Assimilation", "max_stars_repo_path": "HW-Text/hw7.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4110, "size": 13130 }
%Seccion "Resumen" % \chapter{Combined index notation for finite element analysis} \section{Combined index notation for finite element analysis} In the formulation of finite element methods it is customary to start from expressions written in index notations like: \[ \delta W = \intL_V \sigma_{ij} \delta u_{i,j} \dd{V} \] and then proceed to introduce discretization or approximations via interpolation theory. In this case it is useful to combine index notation to describe the physical tensorial fields and at the same time the superposition implicit in interpolation schemes. In this appendix we clarify the use of such combined index notation. \subsection{Index notation of Cartesian tensor fields} In index notation vector and tensor fields are represented by a letter defining the name of the field and a set of different subscripts (or index). The number of different indices associated to the letter indicates whether the field is a vector (1 index), a second order tensor (2 indices), a third order tensor (3 indices) as in: \[u_i , \sigma_{ij} , C_{ijk}\] and further clarified next. Consider a vector $\overset\rightharpoonup u$ represented in a Cartesian reference system in terms of its scalar components $u_x , u_y , u_z$. The following representations of the vector are equivalent: \[u_i\leftrightarrow\begin{bmatrix}u_x&u_y&u_z\end{bmatrix}\leftrightarrow\overset\rightharpoonup u=u_x\widehat i+u_y\widehat j+u_z\widehat k.\] In the first case, the vector is denoted by the letter $u$ and by the subscript $i$ which represents in condensed form the scalar components $\begin{bmatrix}u_x&u_y&u_z\end{bmatrix}$ of the vector. Similarly, consider a second order tensor $\overset{\rightharpoonup(2)}\sigma$ represented in a cartesian reference system in terms of its scalar components $\sigma_{xx} , \sigma_{xy}, \sigma_{xz} , \sigma_{yx}, \sigma_{yy}, \sigma_{yz}, \sigma_{zx}, \sigma_{zy}, \sigma_{zz}$. The following representations of the tensor are equivalent: \[\sigma_{ij}\leftrightarrow\begin{bmatrix}\sigma_{xx}&\sigma_{xx}&\sigma_{xx}\\\sigma_{xx}&\sigma_{xx}&\sigma_{xx}\\\sigma_{xx}&\sigma_{xx}&\sigma_{xx}\end{bmatrix}\leftrightarrow\overset{\rightharpoonup(2)}\sigma.\] Note that each free or different subscript associated to a letter is a condensed representation of normal scalar components along a basis $\widehat i-\widehat j-\widehat k$. Accordingly, in the case of the second order tensor the subscript $i$ represents variation through $x-y-z$ and subscript $j$ represents variation through $x-y-z.$ \subsection{The summation convention} In the context of indicial notation subscripts which appear repeated (forming pairs) are used to represent summation. The most simple example is the representation of the dot product between two vectors which can be represented in the following alternative forms: \[\overrightarrow u\cdot\overrightarrow w\leftrightarrow u_xw_x+u_yw_y+u_zw_z\leftrightarrow\sum_{i=1}^3u_iw_i\leftrightarrow u_iw_i.\] Accordingly repeated indices represent summation over the range of variation of the index. \subsection{Indicial notation in interpolation} From interpolation theory we know that a function $f(x)$ can be approximated in terms of $n$ known values of the solution $\begin{bmatrix}f^1&f^2&...f^n\end{bmatrix}$ using a superposition like \[f(x)=L^1(x)f^1+L^2(x)f^2+...+L^n(x)f^n\] and where the terms $L^k s$ are $n$ interpolation functions of order $n-1$. 
This approximated function can also be represented in terms of indicial notation, where we now use capitalized superscripts to refer to the components of the interpolation set as follows:
\[f(x)=L^Q(x)f^Q.\]
In the expression above the approximated function represents a scalar quantity varying over a one-dimensional domain with independent variable $x$. In the case of approximation via interpolation theory of vector or tensor valued functions we will have subscripts indicating the scalar components of the vector or tensor field and superscripts representing the interpolation polynomials. Using explicit notation, a vector function $u_i (x,y,z)$ varying over a three-dimensional space with independent variables $x,y,z$ would be represented like:
\[ \begin{array}{l}u_x(x,y,z)=N^1(x,y,z)u_x^1+N^2(x,y,z)u_x^2+\cdots+N^n(x,y,z)u_x^n\\u_y(x,y,z)=N^1(x,y,z)u_y^1+N^2(x,y,z)u_y^2+\cdots+N^n(x,y,z)u_y^n\\u_z(x,y,z)=N^1(x,y,z)u_z^1+N^2(x,y,z)u_z^2+\cdots+N^n(x,y,z)u_z^n.\end{array} \]
In this representation it has been assumed that each component $u_x$, $u_y$ and $u_z$ has been approximated using the same interpolation space. In the generalization of the indicial notation to the interpolation of the vector valued function we will associate the subscript representing the scalar components of the field with the interpolation polynomials. Thus the above expression would be written like:
\[u_i\;(x,y,z)\;=\;N_i^Q(x,y,z)u^Q\]
or
\[u_i(\overrightarrow x)\;=N_i^Q(\overrightarrow x)u^Q\]
after considering $\overrightarrow x \equiv x,y,z$. The subscript $i$, representing the physical components of the vector field $u_i$, has been carried over as a subscript to the shape function $N^Q$ for the nodal point $Q$, while the term $u^Q$ refers to the scalar components of the field $u_i$ at the nodal point $Q$. The main advantage of this notation is the possibility of conducting further operations, as required in the derivation of very general finite element algorithms, while combining physical and discrete information.
\paragraph*{Example}
In the expression
\[ \delta W = \intL_V \sigma_{ij} \delta u_{i,j} \dd{V} \]
assume that the primary variable is the displacement field $u_i$, in such a way that at the nodal point $Q$ the displacement vector $u^Q$ is known. Write the interpolated version of $\delta W$. First, let us write the interpolated approximation to the displacement field $\delta u_i$ carrying the subscript to the shape functions:
\[\delta u_i(\overrightarrow x)\;=N_i^Q(\overrightarrow x)\delta u^Q.\]
To write the interpolated version of $\delta u_{i,j}$ we note that in the expression above the spatial variation of the field is carried entirely by the shape functions, so we just extend the derivative with respect to $x_j$ to these functions to obtain the following interpolated version of the second order tensor field $\delta u_{i,j}$:
\[\delta u_{i,j}(\overrightarrow x)\;=N_{i,j}^Q(\overrightarrow x)\delta u^Q.\]
Making
\[B_{ij}^Q(\overrightarrow x)\equiv N_{i,j}^Q(\overrightarrow x) \]
the above can be written like:
\begin{equation}
\delta u_{i,j}(\overrightarrow x)\;=B_{ij}^Q(\overrightarrow x)\delta u^Q.
\label{B_strain}
\end{equation}
In \cref{B_strain}, the term $B_{ij}^Q(\overrightarrow x)$ is an interpolation function (which is indicated by the superscript $Q$) associated with a second order tensor (which is indicated by the subscripts $ij$). It must be recognized that the $B_{ij}^Q(\overrightarrow x)$ are not independent shape functions but just derivatives of the primary interpolation polynomials $N^Q (x,y)$.
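As a concrete and purely illustrative sketch of this last point, the short Python script below evaluates the derivatives $N^Q_{,j}$ of the four bilinear shape functions of a square element defined over $[-1,1]\times[-1,1]$; these derivatives are exactly the quantities from which the entries of $B_{ij}^Q$ are built (the element and the evaluation point are chosen only for the example).
\begin{verbatim}
import numpy as np

def shape_derivatives(r, s):
    """Derivatives dN^Q/dr and dN^Q/ds of the four bilinear shape
    functions N^Q = (1 + r*r_Q)(1 + s*s_Q)/4 on [-1, 1] x [-1, 1]."""
    rq = np.array([-1.0,  1.0, 1.0, -1.0])
    sq = np.array([-1.0, -1.0, 1.0,  1.0])
    dNdr = rq * (1.0 + s * sq) / 4.0
    dNds = sq * (1.0 + r * rq) / 4.0
    return dNdr, dNds

# derivatives at an arbitrary interior point of the element
dNdr, dNds = shape_derivatives(0.2, -0.5)
print(dNdr)   # entries N^Q_{,1} for Q = 1,...,4
print(dNds)   # entries N^Q_{,2} for Q = 1,...,4
\end{verbatim}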
Consider now the stress-strain relationship in the theory of elasticity relating the stress tensor $\sigma_{ij}$ to the strain tensor $\epsilon_{ij}$ through the elastic constitutive tensor $C_{ijkl}$ as:
\[\sigma_{ij} = C_{ijkl} \epsilon_{kl}.\]
In the above the strain tensor $\epsilon_{ij}$ is given by a combination of space derivatives of the displacement field $u_i$ like
\[ \varepsilon_{ij}=\frac12(u_{i,j}+u_{j,i}) \]
which can be written like
\[ \epsilon_{ij}(\overrightarrow x)\;= \frac12 [B_{ij}^Q(\overrightarrow x) + B_{ji}^Q(\overrightarrow x) ] u^Q \equiv H_{ij}^Q(\overrightarrow x) u^Q . \]
Using the above set of results in $\delta W $ gives:
\begin{equation}
\delta W=\delta u^Q\int_VH_{ij}^QC_{ijkl}H_{kl}^P\dd{V}\,u^P
\label{discrete}
\end{equation}
which is the final discrete version of $\delta W $.
\begin{tcolorbox}
In the indicial representation of a tensor quantity an index which does not repeat itself in the expression is termed a {\bf free index}, and the number of non-repeated free indices appearing in the expression gives the order of the expression. By contrast, repeated indices appearing in the expression are termed {\bf dummy indices} and they imply summation.
\end{tcolorbox}
\paragraph*{Problem: Discretization of the principle of virtual displacements from the theory of elasticity}
The principle of virtual displacements in the linearized theory of elasticity is given by:
\begin{equation}
\intL_V \sigma_{ij} \delta u_{i,j} \dd{V} - \intL_V f_i \delta u_i \dd{V} - \intL_{S_t} t_i^n \delta u_i \dd{S} = 0.
\label{pvw_2}
\end{equation}
where $u_i$ is the displacement field, $\epsilon_{ij}$ is the strain field and $\sigma_{ij}$ is the stress field, satisfying the following relations
\[ \varepsilon_{ij}=\frac12(u_{i,j}+u_{j,i}) \]
and
\[\sigma_{ij} = C_{ijkl} \epsilon_{kl}\]
where $C_{ijkl}$ is a fourth-order tensor whose terms are material constants. Also $f_i$ and $t_i^n$ are the body forces and the surface traction vectors. Assuming that in a finite element method the displacement field $u_i$ is approximated via interpolation like:
\[ u_i(\overrightarrow x)\;=N_i^Q(\overrightarrow x)u^Q.\]
\begin{itemize}
\item Write the discrete version of \cref{pvw_2}.
\item Write the term $K^{QP} = \int_VH_{ij}^QC_{ijkl}H_{kl}^P\dd{V}$ in matrix form and implement it in a Python script.
\end{itemize}
\newpage
\chapter{Generalized boundary value problems}
%\section{Generalized boundary value problems from a balance law}
In this chapter we take as the problem to be solved via the finite element algorithm the case of a general initial boundary value problem (I-BVP). In the first part of this appendix we will introduce the differential formulation given in terms of a set of governing equations and properly specified boundary conditions. The resulting equations are obtained after using a generalized balance law. Following this classical and well-known approach we formally re-state these equations in the so-called strong form. Subsequently we re-write, and prove equivalent, an integral form of the balance law which is much better suited to a numerical solution. Since in the integral description of the problem the order of the derivatives in the field functions decreases by one, the resulting statement is called a weak formulation.
\section{Governing equations}
Let $\dd{S}$ be a differential surface element, $\dd{V}$ a differential volume element, $u(\vb{x},t)$ a scalar (or vector) function of space and time.
The flux or rate of flow of the quantity $u(\vb x, t)$ through $\dd{S}$ at time $t$ is defined like \[p(\vb x)\grad u \cdot \hat n \dd{S} \enspace ,\] where $p(\vb x)$ is a positive function, assumed known and time independent. Similarly, the time rate of change of $u(\vb x, t)$ in an element $\dd{V}$ is given by \[\rho (\vb x)\pdv{u}{t} \dd{V} \enspace ,\] where once again $\rho (\vb x)$ is a known, given, time independent positive function. Additional effects occurring in the element $\dd{V}$ at the time $t$ can be expressed like \[H(\vb x, t)\dd{V} \equiv - q(\vb x)u(\vb x, t) + \hat F(\vb x,t)\] where $\hat F(\vb x, t) = \rho (\vb x)F(\vb x, t)$. In the above the term $q u$ represents internal effects due to changes proportional to $u$ while $\hat F(\vb x, t)$ are other external influences in the medium. Balancing the internal and external changes yields \[\int\limits_V \rho (\vb x)\pdv{u}{t}\dd{V} = \int\limits_S p(\vb x)\vb \grad u \cdot \hat n\dd{S} + \int\limits_V H(\vb x, t)\dd{V} \enspace ,\] or equivalently \[\int\limits_V \rho (\vb x)\pdv{u}{t}\dd{V} = \int\limits_S p(\vb x)\vb \grad u \cdot \hat n\dd{S} - \int\limits_V q(\vb x)u(\vb x, t)\dd{V} + \int\limits_V \rho (\vb x)F(\vb x, t)\dd{V} \enspace .\] Using the divergence theorem as \[\int\limits_S p(\vb x) \grad u \cdot \hat n \dd{S} = \int\limits_V \vb \div (p\vb \grad u)\dd{V}\] yields after substitution \[\int\limits_V \left[\rho (\vb x)\pdv{u}{t} - \div (p(\vb x)\grad u) + q(\vb x)u(\vb x, t) - \rho (\vb x)F(\vb x, t)\right]\dd{V} = 0\] Assuming a continuous integrand, the arbitrariness of $V$ implies \[\rho (\vb x)\pdv{u}{t} - \div(p(\vb x)\grad u) + q(\vb x)u(\vb x, t) - \rho (\vb x)F(\vb x, t) = 0\] Letting \[\mathcal{L} \equiv - \div p(\vb x)\grad + q(\vb x)\] allows us to write the generalized set of partial differential equations like: \begin{equation} \rho(\vb x) \pdv{u(\vb x, t)}{t} + \mathcal{L}u(\vb x, t) = \rho (\vb x)F(\vb x, t) \enspace . \label{eq:GenPDE} \end{equation} These are categorized as: \begin{itemize} \item Hyperbolic; \[\rho(\vb x) \pdv[2]{u(\vb x, t)}{t} + \mathcal{L}u(\vb x,t) = 0\] \item Parabolic; \[\rho(\vb x) \pdv{u(\vb x,t)}{t} + \mathcal{L}u(\vb x, t) = 0\] \item Elliptic \[\mathcal{L}u(\vb x, t) = \rho (\vb x)F(\vb x, t) \enspace .\] \end{itemize} It is convenient to show that $\mathcal{L}$ satisfies the \emph{symmetry} condition \[\int\limits_V \mathcal{L}(u)v\dd{V} = \int\limits_V \mathcal{L}(v)u \dd{V}\] and that the operator is positive definite, that is \[\int\limits_V \mathcal{L}(u)u\dd{V} > 0, \quad \forall u \enspace .\] \subsection{Strong form} The strong form for the generalized boundary value problem formulated above can now be explicitly written as follows. Given $\rho(\vb x)$, $q(\vb x)$, $p(\vb x)$, $F(\vb x, t)$ and $\bar u (\vb x , t)$ find $u(\vb x , t):V \to \mathbb{R}$ such: \[\rho(\vb x) \pdv{u}{t} - \div \left[p(\vb x) \grad u\right] + q(\vb x) u(\vb x, t) - \rho(\vb x) F(\vb x, t) = 0 \quad \forall \vb x \in V \] and \begin{align*} &u = \bar u \quad \forall \vb x \in S_u\\ &p(\vb x)u_{,i} \hat n_i= B(\vb x,t)\quad \forall \vb x \in S_t \enspace . \end{align*} In the FEM we will look for approximate solutions to $u$ subject to the following conditions: \[u = \bar u \quad \forall \vb x \in S_u \quad \text{(Essential boundary conditions)}\] and \[\int\limits_S \left(\pdv{u}{x_j}\right)^2 \dd{S} < \infty \enspace ,\] which corresponds to the functions being square integrable. We will denote this space by $\mathbb{H}$ . 
The space of functions satisfying the above two conditions will be denoted by $\zeta$ and termed the space of trial functions, formally defined like: \[\zeta = \left\{ u \mid u \in \mathbb{H},u = \bar u \quad \forall\vb x \in S_u \right\}\] On the other hand, to validate (or test) the correctness of the approximated or proposed trial functions $u$ it is also necessary to introduce test functions $w$ which are arbitrary except that they satisfy the following conditions: \[w = 0\quad \forall \vb x \in S_u\] and \[\int\limits_S \left(\pdv{w}{x_j}\right)^2 \dd{S} < \infty \enspace ,\] which corresponds to the functions being square integrable. The space of functions satisfying the above two conditions will be denoted by $\pounds$ and termed the space of test functions, formally defined like: \[\pounds = \left\{w \mid w \in \mathbb{H},w = 0 \quad \forall \vb x \in S_u\right\}\] \subsection{Weak form} Given $\rho(\vb x)$, $q(\vb x)$, $p(\vb x)$, $F(\vb x, t)$ and $\bar u (\vb x , t)$ find $u(\vb x , t):V \to \mathbb{R}$ and $\forall w \in \pounds$ such: \begin{align*} \int\limits_V p(\vb x) u_{,i}\, w_{,i}\dd{V} - \int\limits_{S_t} B(\vb x, t)w \dd{S} + \int\limits_V q(\vb x)u(\vb x, t)w\dd{V} &+ \int\limits_V \rho(\vb x)\pdv{u}{t}w \dd{V} \\& - \int\limits_V \rho(\vb x) F(\vb x, t)w\dd{V} = 0 \end{align*} and \[u = \bar u\quad \forall\vb x \in S_u \enspace .\] \subsection{Equivalence between the strong and weak forms} \begin{multline} -\int\limits_V [p(\vb x)u_{,i}]_{,i}w \dd{V} + \int\limits_{S_t} [p(\vb x)u_{,i}] \hat n_i w\dd{S} - \int\limits_{S_t} B(\vb x, t)w\dd{S}\\ + \int\limits_V q(\vb x)u(\vb x, t)w \dd{V} + \int\limits_V \rho(\vec x)\pdv{u}{t}w\dd{V} - \int\limits_V \rho(\vb x)F(\vb x, t)w\dd{V} = 0 \end{multline} Grouping together common terms yields \begin{align*} &\int\limits_V \left\{\rho(\vb x)\pdv{u}{t} - [p(\vb x)u_{,i}]_{,i} + q(\vb x)u(\vb x, t) - \rho(\vb x)F(\vb x, t)\right\} w\dd{V}\\ &+ \int \limits_{S_t} \left\{[p(\vb x)u_{,i}]\hat n_i - B(\vb x, t) \right\} w\dd{S} = 0 \end{align*} from which \[\rho(\vb x)\pdv{u}{t} - [p(\vb x)u_{,i}]_{,i} + q(\vb x)u(\vb x, t) - \rho(\vb x)F(\vb x, t) = 0\] and \[p(\vb x)u_{,i}\hat n_i = B(\vb x, t) \quad\forall \vb x \in S_t \enspace .\] \section{Weighted residual methods} This section introduces the concept of residual or difference from zero in a differential equation once its solution is approximated. For that purpose we will take as prototype equation the one obtained as our general model of BVP (see \cref{eq:GenPDE}) and recalled here for completeness \begin{equation} \rho(\vb x) \pdv{u(\vb x, t)}{t} + \mathcal{L}u(\vb x, t) = \rho (\vb x)F(\vb x, t) \enspace . \label{eq:GenPDE2} \end{equation} We will assume that the actual solution to the generalized BVP given by \cref{eq:GenPDE2} is approximated by $\tilde u(\vb{x})$ through a superposition like \begin{equation} \tilde u (\vb{x}) = {N^I}(\vb{x}){u^I} \label{basicsuper} \end{equation} where ${N^I}(\vb{x})$ are interpolating functions and $I$ denotes a superposition index varying like $I=1,2,...,K$ with $K$ being the number of points where the solution is known. In what follows we will use $u(\vb{x})$ instead of $\tilde u (\vb{x})$ but will keep in mind that we are actually using the approximation given by \cref{basicsuper}. Similarly, in order to keep the discussion simple for the time being we will drop the time effects reducing the generalized PDE to the simple form: \begin{equation} \mathcal{L}u(\vb x) = \rho (\vb x)F(\vb x) \enspace . 
\label{eq:GenPDE3}
\end{equation}
Now, since we are using the approximation given by \cref{basicsuper}, this equation is not strictly satisfied but instead we will have the following ``unbalanced'' condition
\[\mathcal{L}u(\vb{x}) - \rho (\vb{x})F(\vb{x}) \equiv R \ne 0\]
where the term $R$ corresponds to a residual error which is to be distributed throughout the solution domain. The so-called weighted residual methods differ in the way in which they distribute the residual among the $K$ points that make up the computational domain. Using \cref{basicsuper} in \cref{eq:GenPDE2} and the linearity of the differential operator yields
\[R = \mathcal{L}({N^P}){u^P} - \rho F\, .\]
We can see that the residual $R$ is a function defined over the domain of interest. The residual would be exactly zero for the solution of the differential equation, but it will not be zero in general. Thus, we want to make the function $R$ as close to zero as possible. To do so we need a functional that allows us to compare different approximation functions and whose value we can then minimize. For this minimization we could use the norm of the function; another option is to compute a weighted \emph{average} of the function over the domain. This is what we call a weighted residual
\[\Pi[u, w] = \int\limits_V w R(u) \dd{V}\, ,\]
and we want to minimize it by making
\[\var{\Pi}[u, w] = 0\, .\]
In what follows we will consider different strategies to distribute or weight the residual $R$ over the computational domain.
\subsection{Galerkin method}
In the Galerkin scheme the interpolation functions are also used as weighting functions, leading to:
\[\int\limits_V N^Q R\dd{V} = 0 \]
or explicitly
\begin{equation}
\int\limits_V N^Q \mathcal{L} (N^P)\dd{V} u^P = \int\limits_V N^Q\rho F\dd{V}\, .
\label{eq:Galer}
\end{equation}
Imposing \cref{eq:Galer} at the $K$ points of the computational domain, or equivalently letting $Q$ range from $1$ to $K$, leads to the following system of algebraic equations
\begin{equation}
{K^{QP}}{U^P} = {f^Q}
\label{eq:DGaler}
\end{equation}
where $U^P$ is a vector that stores the point values of the function $u$ at the $K$ points of the computational domain, while $f^Q$ stores the corresponding point excitations.
\subsection{Least squares method}
In this method the integral of the square of the residual is minimized with respect to the $K$ point parameters or nodal values of the function. Accordingly,
\begin{align*}
&\pdv{u^I}\int\limits_V R^2 \dd{V} = 0\\
&\int\limits_V R \pdv{R}{u^I} \dd{V} = 0\, .
\end{align*}
The least squares method is a special case of the weighted residual method for the weight functions
\[w^I = \pdv{R}{u^I}\, .\]
Expanding the residual, and considering the operator $\mathcal{L}$ as linear, we obtain
\begin{align*}
&\pdv{u^I}\int\limits_V [\mathcal{L}(N^P u^P) - \rho F]^2 \dd{V} = 0\\
&\int\limits_V [\mathcal{L} N^P u^P - \rho F] \mathcal{L}(N^I) \dd{V} = 0\\
&\int\limits_V \mathcal{L}(N^I) \mathcal{L}(N^P) \dd{V} u^P - \rho \int\limits_V \mathcal{L}(N^I) F \dd{V} = 0
\end{align*}
which can be written like
\begin{equation}
K^{IP} U^P = f^I\, .
\label{eq:Dsquares}
\end{equation}
\subsection{Collocation method}
In the collocation method the coefficients of the approximation are determined by forcing the residual to be exactly zero at $K$ points over the computational domain, i.e.,
\[\mathcal{L}(N^I) u^I - \rho F = 0\, ,\]
or
\[\mathcal{L}[N^I(x^J)] u^I - \rho F(x^J) = 0\, ,\]
where $J$ ranges between $1$ and $K$.
This equation can be rewritten as a weighted-residual if we consider the residual to be $\delta(x - x^I)$, the Dirac delta function over the selected points The resulting system of algebraic equation can be written as \begin{equation} K^{IP} U^P = f^I\, . \label{eq:Colo} \end{equation} \subsection{Subdomain method} The zero value of the residual is imposed upon $K$ subdomains \[\int\limits_{V^I} \mathcal{L}(N^P)\dd{V^I} u^P - \rho \int\limits_{V^I} F\dd{V^I} = 0 \qquad I=1,\cdots,K\, .\] For instance, for the $N$-th element it follows that \[\int\limits_{V^N} \mathcal{L}(N^P)\dd{V^N} u^P - \rho \int\limits_{V^N} F\dd{V^N} = 0 \qquad P=1,\cdots,K\, .\] Applying the equation over the $K$ subdomains leads to the discrete system; \begin{equation} K^{IP} U^P = f^I \quad \quad I=1,\cdots,K\, . \label{eq:Subdomain} \end{equation} \subsection{Ritz method} It operates directly upon the variational statement of the problem. For a given functional \[\Pi = \Pi (N^Q u^Q)\] the variational equation reads \[\var{\Pi} \equiv \pdv{\Pi}{u^Q} \var{u^Q} = 0\] from which \[\pdv{\Pi}{u^Q} = 0\, .\] \paragraph*{Example: The generalized parabolic equation} Let us consider the case of the generalized parabolic equation and its discretization following the Galerkin method \[\rho(\vb x) \pdv{u(\vb x,t)}{t} + \mathcal{L}u(\vb x, t) = 0\] which can also be written using index notation \[\pdv{x_i}\left[p(x)\pdv{u}{x_i} \right] + q(x)u + \rho \pdv{u}{t} = \rho F\, ,.\] Assuming that $p(x)=1$ yields \begin{align*} &-\int\limits_V N^P N_{,ii}^Q \dd{V} u^Q + \int\limits_V q N^P N^Q\dd{V} u^Q + \rho \int\limits_V N^P N^Q \dd{V} v^Q\\ &- \rho \int\limits_V N^P F \dd{V} = 0\\ &\int\limits_V N_{,i}^P N_{,i}^Q \dd{V} u^Q - \int\limits_S N^P N_{,i}^Q \hat{n}_i \dd{S} u^Q + \int\limits_V q N^P N^Q \dd{V} u^Q\\ &+ \rho \int\limits_V N^P N^Q \dd{V} v^Q - \rho \int\limits_V N^P F \dd{V} = 0 \\ &\int\limits_V \left(N_{,i}^P N_{,i}^Q + q N^PN^Q \right)\dd{V} u^Q + \rho \int\limits_V N^P N^Q \dd{V} V^Q =\\ &\int\limits_S N^P N_{,i}^Q \hat{n}_i \dd{S} u^Q + \rho \int\limits_V N^P F \dd{V} \end{align*} which can be written in discrete form as \[K^{PQ} U^Q + C^{PQ} V^Q = f^p\, .\] \paragraph*{Example: The wave equation.} In this case the differential equation reads \[\div \left[ \frac{1}{\rho} \grad p(\vb{x})\right] - \pdv{t} \left(\frac{1}{\lambda} \pdv{\rho}{t}\right) - q(\vb{x}) = 0\] where we recognize that \[\mathcal{L} \equiv \div \left(\frac{1}{\rho} \grad \right) - \pdv{t}\left(\frac{1}{\lambda}\pdv{t}\right)\, .\] Let \[p(x) = N^K p^K\] then \[\mathcal{L}(p) \equiv \vec \nabla \cdot \left( {\frac{1}{\rho }\vec \nabla {N^K}{p^K}} \right) - \frac{\partial }{{\partial t}}\left( {\frac{1}{\lambda }\frac{{\partial {N^K}{p^K}}}{{\partial t}}} \right)\] or in index notation \[\mathcal{L}(p) \equiv {\left( {\frac{1}{\rho }N_{,i}^K} \right)_{,i}}{p^K} - \frac{\partial }{{\partial t}}\left( {\frac{1}{\lambda }\frac{{\partial {N^K}}}{{\partial t}}} \right){p^K}\] which is equivalent to \[\mathcal{L}(p) \equiv \mathcal{L}({N^K}){p^K}\] using the trial functions as weighting function and recalling the definition of the residual which in this case reads \[R = \mathcal{L}({N^K}){p^K} - q\, ,\] and yields \[\int\limits_V {{N^J}RdV = 0} \quad \quad J=1,2,...,K \] \[\int\limits_V {{N^J}L({N^K})dV{p^K}} - \int\limits_V {{N^J}qdV} = 0\] \[\int\limits_V {{N^J}{{\left( {\frac{1}{\rho }N_{,i}^K} \right)}_{,i}}dV{p^K} - \int\limits_V {{N^J}\frac{\partial }{{\partial t}}\left( {\frac{1}{\lambda }\frac{{\partial {N^K}}}{{\partial t}}} \right)dV{p^K}} } - 
\int\limits_V {{N^J}qdV} = 0\] Integrating by parts the first term on the right-hand-side gives us \[ - \int\limits_V {N_{,i}^J\frac{1}{\rho }N_{,i}^KdV} {p^K} + \int\limits_S {{N^J}\frac{1}{\rho }N_{,i}^K{{\hat n}_i}dS{p^K}} = \int\limits_V {{N^J}\frac{\partial }{{\partial t}}\left( {\frac{1}{\lambda }\frac{{\partial {N^K}}}{{\partial t}}} \right)dV{p^K}} + \int\limits_V {{N^J}qdV} \] \[\int\limits_V {N_{,i}^J\frac{1}{\rho }N_{,i}^KdV} {p^K} + \int\limits_V {{N^J}\frac{1}{\lambda }{N^K}dV{{\ddot p}^K}} = \int\limits_S {{N^J}\frac{1}{\rho }N_{,i}^K{{\hat n}_i}dS{p^K}} + \int\limits_V {{N^J}qdV} \] \[{K^{JK}}{P^K} + {M^{JK}}{{\ddot P}^K} + {f^J} = 0\] \newpage \paragraph*{Example: The Navier equations.} The stress equilibrium equations when written in terms of displacements after using the constitutive tensor and the proper kinematic description take the form: \[(\lambda + \mu ){u_{j,ij}} + \mu {u_{i,jj}} + {f_i} = 0 \enspace \] which are known as the Navier equations and where the differential operator reads \[L_{ij} \equiv (\lambda + \mu )\pdv[2]{}{x_i}{x_j} + \mu \pdv[2]{}{x_k}{x_k}\delta_{ij}.\] Applying this operator to an interpolated version of the displacement field $u_i = N_i^Q u^Q$ gives: \[L_{ij}(u_j)\equiv R_i(u_j)\] thus \begin{align*} &\mathcal{L}_{ij}(u_j) \equiv (\lambda + \mu )(N_j^Q u^Q)_{,ij} + \mu (N_j^Q u^Q)_{,kk}\delta_{ij}\\ &\mathcal{L}_{ij}(u_j) \equiv (\lambda + \mu )N_{j,ij}^Q u^Q + \mu N_{i,kk}^Q u^Q\\ &\mathcal{L}_{ij}(u_j) \equiv \mathcal{L}_{ij}(N_j^Q) u^Q \enspace . \end{align*} In the Galerkin scheme we use the trial function as weighting function. \[R_i \equiv L_{ij}(N_j^Q) u^Q + f_i\] and we state \[\int\limits_V N_i^P R_i \dd{V} = 0 \qquad P=1,2,\cdots N\, .\] Thus \begin{align*} &\int\limits_V N_i^P \mathcal{L}_{ij} (N_j^K)\dd{V} u^K + \int\limits_V N_i^P f_i \dd{V} = 0 \\ &(\lambda + \mu )\int\limits_V N_i^PN_{j,ij}^K \dd{V} u^K + \mu \int\limits_V N_i^PN_{i,kk}^K \dd{V} u^K + \int\limits_V N_i^P f_i \dd{V} = 0 \end{align*} integrating by parts \begin{align*} - (\lambda + \mu )\int\limits_V N_{i,j}^P N_{j,i}^K \dd{V} u^K + (\lambda + \mu )\int\limits_S N_i^P N_{j,i}^K \hat{n}_j \dd{S} u^K - \mu \int\limits_V N_{i,k}^P N_{i,k}^K \dd{V} u^K \\ + \mu \int\limits_S N_i^P N_{i,k}^K \hat{n}_k \dd{S} u^K + \int\limits_V N_i^P f_i \dd{V} = 0 \end{align*} which can be written like \[K^{PQ} U^Q = F^P\] where \[K^{PQ} = (\lambda + \mu )\int\limits_V N_{i,j}^P N_{j,i}^Q \dd{V} + \mu \int\limits_V N_{i,k}^P N_{i,k}^Q\dd{V} \] and \[F^P = \int\limits_S N_i^P t_i^{(\hat n)} \dd{S} + \int\limits_V N_i^P f_i \dd{V} = 0 \, .\] %\newpage %\section*{Quadratures} %\section{Gaussian quadratures} %In the numerical quadrature corresponding to the extended trapezoidal rule written in the form % %\begin{equation} %\int\limits_a^b {f(x)dx \approx \sum\limits_{I = 1}^{npts} {{w^I}f({x^I})} } %\label{quadra2} %\end{equation} % %the integration points are equidistantly spaced. In a Gaussian quadrature in addition to adjusting the $N$ weighting factors $w^I$ one also leaves as adjustable parameters the location of the $N$ integration points. As a result, there are now $2N$ parameters to adjust in the derivation of an algorithm to numerically approximate the integral of $f(x)$ between $x=a$ and $x=b$ with the maximum accuracy and the minimum number of operations. This class of quadratures provide better precision than those based on Newton-Cotes techniques (such as the trapezoidal rule) when the function to integrate can be appropriately represented by a polynomial. 
%Using a Gaussian quadrature one can integrate functions which are expressible in the form:
%
%\[\int\limits_a^b {w(x)f(x)dx} \approx \sum\limits_{I = 1}^{npts} {{w^I}f({x^I})}. \]
%
%This particular factorization ${w(x)f(x)}$ is useful since it allows us to write a function as the product of a polynomial $f(x)$ times a known function $w(x)$. This last function can be selected to remove integrable singularities out of the integral. To clarify, consider the following (Gauss-Chebyshev) integral:
%
%
%\[I = \int\limits_{ - 1}^1 {\frac{{{e^{ - C_x^2}}}}{{\sqrt {1 - {x^2}} }}} dx \equiv \int\limits_{ - 1}^1 {\frac{1}{{\sqrt {1 - {x^2}} }}} {e^{ - C_x^2}}dx\]
%where
%\[w(x) = \frac{1}{{\sqrt {1 - {x^2}} }}\]
%
%and
%
%\[f(x) = {e^{ - C_x^2}}.\]
%
%Now, making:
%\[g(x) = w(x)f(x)\]
%and
%\[{v^I} = \frac{{{w^I}}}{{w({x^I})}}\]
%
%yields:
%
%\begin{equation}
%\int\limits_{ - 1}^{ + 1} {g(x)dx} \approx \sum\limits_{I = 1}^{npts} {{v^I}g({x^I})}
%\label{gauss}
%\end{equation}
%
%In general, different Gaussian quadratures are found in the literature reported in terms of tables providing the locations of integration (or Gauss) points and the corresponding weighting factors $w^I$. As an example \cref{ejemplo2} gives abscissas and weighting factors for a 4-point Gaussian quadrature.
%
%\begin{center}
%\begin{tabular}{cc}
% \hline
% $x^I$ & $w^I$ \\
% \hline
% $-0.86113$ & $0.34785$ \\
% $-0.33998$ & $0.65214$ \\
% $ +0.33998$ & $0.65214$ \\
% $ +0.86113$ & $0.34785$ \\
% \hline
%\end{tabular}
%\captionof{table}{Abscissas and weighting factors to compute $\int\limits_{ - 1}^{ + 1} {f(x)dx}$}
%\label{ejemplo2}
%\end{center}
%
%To facilitate coding of these quadratures and allow for approximation of general integrals, it is common to consider a primitive range of integration $[-1.0,+1.0]$ which requires transforming the original integral (including the function and its integration limits)to this primitive integral as discussed in \cref{isopar}. \Cref{fig:quagauss} schematizes the primitive integration range and the corresponding Gauss points denoted by the black $x$s. Transformation of a given integral to the primitive space is discussed at a later section.
%
%\begin{figure}[H]
%\centering
%\includegraphics[width=10cm]{img/quagauss.pdf}
%\caption{Schematic reperesentation of a Gaussian quadrature in the primitive range $[-1.0,1.0]$.}
%\label{fig:quagauss}
%\end{figure}
%
%\paragraph*{Example:Derivation of a Gaussian quadrature}
%Let $n=2$ and the integration interval $[a,b]=[-1,+1]$.
%Find $w^1$, $w^2$ and $x^1$, $x^2$ such the quadrature
%
%
%\[I = \int\limits_{ - 1}^{ + 1} {f(x)dx} \approx {w^1}f({x^1}) + {w^2}f({x^2})\]
%
%integrated exactly the function $f(x)$ corresponding to a third order polynomial like:
%
%
%
%\[f(x) = {a_0} + {a_1}x + {a_2}{x^2} + {a_3}{x^3}.\]
%
%Using $f(x)$ in $I$ and stating the integral for each term we have:
%
%
%\[I = \int\limits_{ - 1}^{ + 1} {{a_0}dx} + \int\limits_{ - 1}^{ + 1} {{a_1}xdx} + \int\limits_{ - 1}^{ + 1} {{a_2}{x^2}dx} + \int\limits_{ - 1}^{ + 1} {{a_3}{x^3}dx} \]
%
%where:
%
%\[\int\limits_{ - 1}^{ + 1} {dx} = 2 = {w^1} \cdot 1 + {w^2} \cdot 1\]
%
%\[\int\limits_{ - 1}^{ + 1} {xdx} = 0 = {w^1} \cdot {x^1} + {w^2} \cdot {x^2}\]
%
%\[\int\limits_{ - 1}^{ + 1} {{x^2}dx} = \frac{2}{3} = {w^1} \cdot {({x^1})^2} + {w^2} \cdot {({x^2})^2}\]
%
%\[\int\limits_{ - 1}^{ + 1} {{x^3}dx} = 0 = {w^1} \cdot {({x^1})^3} + {w^2} \cdot {({x^2})^3}.\]
%
%The resulting system of equations is solved in order to determine the 4 quadrature parameters, namely $w^1$, $w^2$ and $x^1$, $x^2$ giving $w^1 = 1$, $w^2 = 1$, $x^1 = - \sqrt 3 /3$ and $x^1 = + \sqrt 3 /3$ which allows us to write the quadrature in the general form:
%
%\[I = \int\limits_{ - 1}^{ + 1} {f(x)dx} \approx 1.0 \cdot f( - \sqrt 3 /3) + 1.0 \cdot f( + \sqrt 3 /3)\]
%
%which is exact for polynomial functions of order at most 3.
%
%The idea behind Gaussian quadratures can be extended to the integration of higher order polynomials, however its derivation requires an effective method to determine the weighting factors and the abscissas of the Gauss points. The next section discusses a method which is applicable to $2n$-order polynomials, in which advantage is taken from the property of orthogonality existing in certain special polynomials.
%
%\paragraph*{Orthogonal polynomials}
%Two polynomials $P(x)$ and $Q(x)$, where $P(x) \ne Q(x)$ are said to be orthogonal if:
%
%\[\int\limits_a^b {P(x)Q(x)} dx = 0.\]
%
%Particularly, the Legendre polynomials, defined by:
%
%\[P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}[(x^2 - 1)^n]\]
%
%which at the same time are solution to the equation:
%
%
%\[(1 - {x^2}){y^{''}} - 2x{y^{'}} + n(n + 1)y = 0\]
%
%in the range $[-1,+1]$ satisfy the following orthogonality condition:
%
%\[\int\limits_{ - 1}^{ + 1} {{Q_i}(x)} {P_j}(x)dx = 0\]
%
%where ${{Q_i}(x)}$ is any polynomial function of order $i<j$.
%
%Besides the orthogonality property, Legendre polynomials have roots in the range $(-1.0,+1.0)$ which are different and symmetrical with respect to zero. This last condition make the roots useful in the derivation of quadratures for the integration of polynomial functions of order less than $2n$. For instance, the second Legendre polynomial given by:
%
%\[{P_2}(x) = {x^2} - \frac{1}{3}\]
%
%has roots ${x^1} = - \frac{{\sqrt 3 }}{3}$ and ${x^2} = + \frac{{\sqrt 3 }}{3}$ which correspond to integration points for an exact quadrature of order 3.
%
%\paragraph*{Theorem}
%
%Let $\left\{ {{x^1},{x^2},...,{x^n}} \right\}$ the roots of the Legendre polynomial ${P_n}(x)$ of order $n$; let \[{w^I} = \int\limits_{ - 1}^{ + 1} {\prod\limits_{J = 1}^n {\frac{{x - {x^J}}}{{{x^I} - {x^J}}}} dx} \] and let $f(x)$ be any polynomial function of order less than $2n$, then:
%
%\begin{equation}
%I = \int\limits_{ - 1}^{ + 1} {f(x)dx} = \sum\limits_{I = 1}^n {{w^I}f({x^I})}.
%\label{Legendre}
%\end{equation}
%
%
%\paragraph*{Proof}
%(i) If $f(x)$ is of order less than $n$, then clearly it is representable in terms of Lagrange polynomials which automatically satisfy condition \eqref{Legendre}.
%
%
%(ii) If $f(x)$ is of order less than $2n$ then it is representable like:
%
%\[f(x) = Q(x){P_n}(x) + R(x)\]
%
%where $Q(x)$ is the quotient of $f(x)/{P_n}(x)$ and of order $n-1$ (or lesser) and $R(x)$ is the residual and of order lesser than $n$. Integrating this representation of $f(x)$ we have that:
%
%\[\int\limits_{ -1}^{ +1} {Q(x) P_n(x)\ dx} + \int\limits_{ - 1}^{ + 1} R(x)\ dx \]
%
%which reduces to:
%
%\[I = \int\limits_{ -1}^{ +1} {f(x)dx} = \int\limits_{ -1}^{ +1} R(x)\ dx \]
%
%after using the orthogonality property between $Q(x)$ and $P_n (x)$. Now, recalling the expression
%
%\[f(x) = Q(x){P_n}(x) + R(x)\]
%
%and if this is evaluated at the roots of the Legendre polynomials it gives:
%
%\[f(x^I) = R(x^I)\]
%
%completing the proof.
%\newpage
%\begin{minted}[mathescape,
% gobble=4,
% frame=lines,
% framesep=2mm]{python}
% """
% Computes the integral of f(x) using a Gauss quadrature
%
% """
% from __future__ import division, print_function
% import numpy as np
% from sympy import symbols, integrate
%
%
% def gpoints4():
% """Gauss points for a 2 by 2 grid
%
% Returns
% -------
% xw : ndarray
% Weights for the Gauss-Legendre quadrature.
% xp : ndarray
% Points for the Gauss-Legendre quadrature.
%
% """
% xw = np.zeros([4])
% xp = np.zeros([4])
% xp[0] = -0.861136311594053
% xp[1] = -0.339981043584856
% xp[2] = +0.339981043584856
% xp[3] = +0.861136311594053
% xw[0] = 0.347854845137454
% xw[1] = 0.652145154862546
% xw[2] = 0.652145154862546
% xw[3] = 0.347854845137454
% return xw, xp
%
%
% f = lambda x: x**3 + 4*x**2 - 10
%
%
% gauss_inte = 0.0
% xw, xp = gpoints4()
% for i in range(4):
% r = xp[i]
% w = xw[i]
% gauss_inte = gauss_inte + w*f(r)
%
% x = symbols('x')
% analytic_inte = integrate(f(x) , (x , -1 , 1))
% print("Analytic integral: {:.6f}".format(float(analytic_inte)))
% print("Gauss quadrature: {:.6f}".format(gauss_inte))
%\end{minted}
\newpage
\chapter{Convergence analysis}
In this chapter we address the problem of convergence of analysis results. We will approach the problem in a loose way, proceeding from an engineering point of view. For a thorough discussion the reader is referred to textbooks of numerical analysis, see for instance \cite{abaqus1989karlsson}. Particularly, we will review the fundamental aspects that must be satisfied by a finite element solution. In the first part we address the problem from the element point of view, while in the final part we study the convergence of a particular problem in terms of several self-contained meshes.
\section{What is meant by convergence?}
Mathematical convergence of order $p$ and rate $c$ for a series of numerically computed values $\vec u_{k}$ and for a problem with exact solution $\vec u$ is defined like:
\[\mathop {\lim }\limits_{k \to \infty } \frac{{\left\| {{{\vec u}_{k + 1}} - \vec u} \right\|}}{{{{\left\| {{{\vec u}_k} - \vec u} \right\|}^p}}} = c.\]
A practical definition of convergence in finite element analysis is given as follows.
Let us denote by $\Pi$ and $\Pi _{FE}$ the potential energy functionals corresponding to the exact mathematical model and to the finite element solution, respectively, where the functional corresponding to a given discretization can be computed as:
\[{\Pi _{FE}} = - \frac{1}{2}{U^T}KU\]
where $K$ and $U$ are the global stiffness matrix and the global nodal displacement vector. If $k$ represents the number of finite elements in a given discretization, then we define convergence as the condition that:
\begin{equation}
\mathop {\lim }\limits_{k \to \infty } {\Pi_{FE}} \to \Pi
\label{convergence}
\end{equation}
In order to guarantee that a finite element solution converges to the exact (unknown) solution of a problem, certain conditions must be met both by the individual elements and by the whole assembled finite element mesh. The analysis of the element is conducted when the element is formulated for the first time, while the analysis of the mesh is problem dependent. In the following sections we will address both problems.
\section{Conditions on a single element}
From a physical point of view we may expect the following behaviour from the individual elements in a given discretization:
\begin{itemize}
\item[•] Under a rigid body compatible displacement field the element must predict a zero strain field ${\varepsilon _{ij}} = 0$. This condition is required in order to keep regions of the domain that undergo rigid body motion in a stress-free condition.
\item[•] The element must be able to predict constant strain states as its size decreases. This condition guarantees that as the element size decreases, the element approaches the behaviour of an actual material point.
\item[•] The work from the surface tractions along the element interfaces must vanish. This is nothing but Newton's third law in terms of surface tractions. The fact that the first order derivatives of the shape functions (equivalent to surface tractions) are discontinuous along the element boundaries results in finite jumps in the boundary tractions. The element must be such that these jumps vanish as the element size decreases.
\end{itemize}
In terms of the shape functions these three conditions are equivalent to:
\begin{itemize}
\item[(i)] All the element shape functions must be selected in such a way that the element predicts ${\varepsilon _{ij}} = 0$ under rigid body compatible nodal displacements.
\item[(ii)] All the element shape functions must be selected in such a way that if the nodal displacements are compatible with a constant strain state, that state is actually obtained.
\item[(iii)] All the element shape functions must be selected in such a way that the strains over the element interfaces are finite and the displacements along these boundaries are continuous.
\end{itemize}
Conditions (i) and (ii) are known as the {\bf completeness condition} while condition (iii) is referred to as the {\bf compatibility} condition. In order to determine if a specific element is complete we find the eigenvalues of the stiffness matrix and study the resulting eigenvectors, which correspond to the deformation modes of the element. The eigenvalue problem reads:
\begin{equation}
\left[ {K - \lambda I} \right]\phi = 0
\label{eigen}
\end{equation}
whose solution gives the rigid body modes and straining modes that can be reproduced by the specific element. In \cref{eigen} $\lambda$ corresponds to the eigenvalues and the vector $\phi$ stores the corresponding eigenmodes.
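A minimal Python sketch of this computation is shown below; it assumes that the element stiffness matrix has already been assembled and stored in a NumPy array (a symmetric placeholder matrix is used only so that the snippet runs on its own, and the actual {\bf modes.py} script used in the example that follows is not reproduced at this point).
\begin{verbatim}
import numpy as np

# K is the (ndof x ndof) element stiffness matrix; it is assumed to be
# assembled elsewhere, so a symmetric placeholder is built here only to
# make the snippet self-contained.
ndof = 8
A = np.random.rand(ndof, ndof)
K = 0.5 * (A + A.T)

# eigenvalues (lambda) and eigenmodes (phi) of [K - lambda I] phi = 0
vals, modes = np.linalg.eigh(K)
for lam, phi in zip(vals, modes.T):
    print("lambda = {:8.4f}  mode = {}".format(lam, np.round(phi, 3)))
\end{verbatim}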
As an example of the single element analysis we show the solution for a single bi-linear, perfectly square element of characteristic size $2h=1.0$ and material properties given by $E=1.0$ and $\nu = 0.30$ (see \citep{Bathe1995}). The eigenvalue problem is solved with the script {\bf modes.py} listed in the last section. Solution of \cref{eigen} predicts the following set of eigenvalues:
\[\lambda = [0(3),0.495(2),0.769(2),1.43].\]
From these, the first three zero-valued eigenvalues can be shown to describe the possible rigid body motions, namely one rotation and two translations along the horizontal and vertical directions, respectively. The next two eigenvalues, corresponding to $0.495$, represent flexure modes. Similarly, the repeated values corresponding to $0.769$ are associated with shear modes, while the last eigenvalue corresponds to a uniform extension mode. The number of different modes satisfies the following condition:
\[{N_S} = {N_{DOF}} - {N_{RB}}\]
where ${N_S}$, ${N_{DOF}}$ and ${N_{RB}}$ are the number of straining modes, the number of degrees of freedom and the number of rigid body modes, respectively. The original and deformed element shapes are shown in \cref{modos}, which is obtained after combining the eigenvectors appropriately. \Cref{strains} (obtained with the script {\bf strfield.py}) shows the zero-valued strain field associated with the first rigid body mode.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{first.pdf}
\caption{$\lambda_1 = 0$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{second.pdf}
\caption{$\lambda_2 = 0$.}
\end{subfigure}\\
%
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{third.pdf}
\caption{$\lambda_3 = 0$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{fourth.pdf}
\caption{$\lambda_4 = 0.495$.}
\end{subfigure}\\
%
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{fifth.pdf}
\caption{$\lambda_5 = 0.495$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{sixth.pdf}
\caption{$\lambda_6 = 0.769$.}
\end{subfigure}\\
%
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{seventh.pdf}
\caption{$\lambda_7 = 0.769$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{eight.pdf}
\caption{$\lambda_8 = 1.43$.}
\end{subfigure}
%
\caption{Deformation modes of a bi-linear element.}
\label{modos}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{epsilonx.png}
\caption{$\epsilon_{xx}$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{epsilony.png}
\caption{$\epsilon_{yy}$.}
\end{subfigure}
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{gammaxy.png}
\caption{$\gamma_{xy}$.}
\end{subfigure}
%
\caption{Zero-valued strain field associated with the first rigid body mode.}
\label{strains}
\end{figure}
\section{Analysis of the mesh results}
Consider the squared energy norm of the error $\left\| {{{\vec e}_h}} \right\|_E^2$. This error satisfies
\[\left\| {{{\vec e}_h}} \right\|_E^2 > 0\]
and will be used as an estimate of the accuracy of the finite element solution.
In particular, we will use the following relationship (see \cite{abaqus1989karlsson})
\begin{equation}
\left\| {{{\vec e}_h}} \right\| \le \alpha {h^k}
\label{estimate1}
\end{equation}
from which we can write:
\begin{equation}
\log \left\| {{{\vec e}_h}} \right\| \approx \log \alpha + k\log h
\label{estimate2}
\end{equation}
In \cref{estimate2} $k$ is the order of the complete interpolation polynomial present in the mesh and gives a measure of the order of convergence in the finite element solution, while the rate of convergence is given by $\alpha$. In order to conduct the convergence study we perform a series of finite element analyses. For each mesh we compute $\left\| {\vec u - {{\vec u}_h}} \right\|$, which is equivalent to $\vec e_h$, where $\vec u$ is the exact solution. In order to find the exact solution we assume that the most refined results have functionals corresponding to ${\Pi _{n - 2}}$, ${\Pi _{n - 1}}$ and ${\Pi _{n}}$ from which:
\[{\Pi _{Exa}} = \frac{{\Pi _{n - 1}^2 - {\Pi _n}{\Pi _{n - 2}}}}{{(2{\Pi _{n - 1}} - {\Pi _n} - {\Pi _{n - 2}})}}\]
The procedure is summarized below:
\begin{itemize}
\item Solve a series of meshes with solutions given by $\vec u_1, \vec u_2,...,\vec u_n$. Each mesh has a characteristic element size $h$.
\item For each mesh find the total potential energy:
\[{\Pi_h} = - \frac{1}{2}{U^T}KU\]
\item Using the most refined meshes compute the potential energy for the exact solution:
\[{\Pi _{Exa}} = \frac{{\Pi _{n - 1}^2 - {\Pi _n}{\Pi _{n - 2}}}}{{2{\Pi _{n - 1}} - {\Pi _n} - {\Pi _{n - 2}}}}\]
\item For each mesh compute:
\[\frac{{\left\| {{{\vec u}_{Exa}} - {{\vec u}_h}} \right\|}}{{\left\| {{{\vec u}_{Exa}}} \right\|}} = {\left[ {\frac{{{\Pi _{Exa}} - {\Pi _h}}}{{{\Pi _{Exa}}}}} \right]^{1/2}}\]
and fill out the following table:
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
$h$ & ${\Pi _{FE}}$ & $\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|$ & $\frac{{\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|}}{{\left\| {{{\vec u}_{Exa}}} \right\|}}$ \\
\hline
$1.0$ & $$ & $$ & $$ \\
\hline
$0.5$ & $$ & $$ & $$ \\
\hline
$0.25$ & $$ & $$ & $$ \\
\hline
$ 0.125$ & $$ & $$ & $$ \\
\hline
$ 0.0625$ & $$ & $$ & $$ \\
\hline
\end{tabular}
\captionof{table}{Convergence of analysis results}
\label{tabconv}
\end{center}
\item Plot the values of $\log \left( {\frac{{\left\| {{{\vec u}_{Exa}} - {{\vec u}_h}} \right\|}}{{\left\| {{{\vec u}_{Exa}}} \right\|}}} \right)$ vs $\log h$ and determine the slope which upon convergence must be close to the order of the complete polynomial used in the discretization.
\end{itemize}
\paragraph*{Example: bar in compression}
\Cref{bar} shows a tapered bar under a compressive uniformly distributed load of total magnitude $P=0.5$. The bar is of length $l=10$ and its large and short ends are given by $h_1 = 2.0$ and $h_2 =0.5$, respectively. The material properties correspond to a Poisson's ratio and Young's modulus of $\nu=0.30$ and $E=1.0$. We want to find the converged solution for the bar after using 3-noded triangular elements.
\begin{figure}[H]
\centering
\includegraphics[width=5.0 in]{img/tappered.pdf}
\caption{Tapered bar under compressive load at the tip.}
\label{bar}
\end{figure}
\Cref{mallas} displays 4 consecutive meshes with decreasing element size, namely $h=[1.0, 0.5, 0.25, 0.125]$, while \cref{gamatap} displays the shear strain contour maps for the finite element solutions corresponding to the coarse and refined meshes. It is clear how these contours become smooth as the mesh is refined.
This approach is sometimes used as an empirical test of convergence. To measure the finite element convergence we first compute the total potential energy in each mesh according to:
\[{\Pi _{FE}} = - \frac{1}{2}{U^T}KU.\]
Now, assuming we have consecutive meshes, each one obtained after halving the elements in the previous mesh, we have the following approximation for the exact total potential energy of the system, computed from the three most refined meshes:
\[{\Pi _{Exa}} = \frac{{( - 1.161)^2 - ( - 1.160)( - 1.155)}}{{2( - 1.161) - ( - 1.160) - ( - 1.155)}} = -1.160.\]
To compute the energy norm of the error we use:
\[\frac{{\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|}}{{\left\| {{{\vec u}_{Exa}}} \right\|}} = {\left[ {\frac{{{\Pi _{Exa}} - {\Pi _{FE}}}}{{{\Pi _{Exa}}}}} \right]^{1/2}}.\]
The analysis results are reported in \cref{ejemplo}.
%\begin{figure}[H]
%\centering
% \subfloat [$h=1.00$]{\includegraphics[width=2.5 in]{tap1.pdf}}
% \subfloat [$h=0.50$]{\includegraphics[width=2.5 in]{tap2.pdf}}\\
%% \vspace{-.2 cm}
% \subfloat [$h=0.250$]{\includegraphics[width=2.5 in]{tap3.pdf}}
% \subfloat [$h=0.125$]{\includegraphics[width=2.5 in]{tap4.pdf}}\\
% \caption{Refined meshes for a tappered bar.}
% \label{mallas}
%\end{figure}
%%%%%
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{tap1.pdf}
\caption{$h=1.00$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{tap2.pdf}
\caption{$h=0.50$.}
\end{subfigure}\\
%
\centering
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{tap3.pdf}
\caption{$h=0.250$.}
\end{subfigure}\,
%
\begin{subfigure}[b]{0.450\textwidth}\qquad
\includegraphics[width=\textwidth]{tap4.pdf}
\caption{$h=0.125$.}
\end{subfigure}
%
\caption{Refined meshes for a tapered bar.}
\label{mallas}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=6 in]{gamatap.pdf}
\caption{Shear strain distribution for the coarse and fine mesh.}
\label{gamatap}
\end{figure}
%%%%%%
\begin{center}
\begin{tabular}{cccc}
\hline
$h$ & ${\Pi _{FE}}$ & $\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|$ & $\frac{{\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|}}{{\left\| {{{\vec u}_{Exa}}} \right\|}}$ \\
\hline
$1.0$ & $-1.151$ & $0.095$ & $0.088$ \\
$0.5$ & $-1.155$ & $0.071$ & $0.066$ \\
$0.25$ & $-1.161$ & $0.032$ & $0.030$ \\
$ 0.125$ & $-1.160$ & $0.001$ & $0.001$ \\
\hline
\end{tabular}
\captionof{table}{Convergence of analysis results}
\label{ejemplo}
\end{center}
To measure convergence we plot:
\[\log \left( {\left\| {{{\vec u}_{Exa}} - {{\vec u}_{FE}}} \right\|} \right) = \log c + k\log h\]
leading to \cref{fig:conv}, from which $k \approx 1.14$.
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{img/conver.pdf}
\caption{Energy norm of the error.}
\label{fig:conv}
\end{figure}
\newpage
\paragraph*{Example: cantilever beam}
Consider the cantilever beam shown in \cref{fig:viga}
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{img/beam.pdf}
\caption{Cantilever beam.}
\label{fig:viga}
\end{figure}
with analytic solution \citep{book:timoshenko} given by:
\begin{align*}
u &= -\frac{P}{2EI} x^2 y - \frac{\nu P}{6EI} y^3 + \frac{P}{2IG}{y^3} + \left(\frac{P l^2}{2EI} - \frac{P c^2}{2IG}\right)y \, ,\\
v &= \frac{\nu P}{2EI} x y^2 + \frac{P}{6EI} x^3 - \frac{Pl^2}{2EI}x + \frac{Pl^3}{3EI}\, ,\\
\varepsilon_{xx} &= \pdv{u}{x} \equiv - \frac{P}{EI}xy\, ,\\
\varepsilon_{yy} &= \pdv{v}{y} \equiv \frac{\nu P}{EI} xy\, ,\\
xy\, ,\\ \gamma_{xy} &= \pdv{u}{y} + \pdv{v}{x} \equiv \frac{P}{2IG} (y^2 - c^2)\, . \end{align*} The horizontal and vertical displacement field corresponding to the particular values of $E=1000.0$, $\nu=0.30$ for the material parameters; $l=24$ and $2c=8$ for the geometry and $P=50$ (upwards) for the load is shown below: \begin{figure}[H] \centering \includegraphics[width=0.75\textwidth]{img/anahorizo.pdf}\\ \includegraphics[width=0.75\textwidth]{img/anavertic.pdf} \caption{Displacement field for the cantilever beam.} \label{fig:ecuacion} \end{figure} Perform a finite element simulation using a series of refined meshes with charateristic element size corresponding to $h=[6.0,3.0,1.5,0.75,0.375]$ and show that convergence is achieved. To conduct the finite element analysis fill out \cref{problem} \begin{center} \begin{tabular}{cccc} \hline $h$ & $\prod_{FE}$ & $\left\| \vec{u}_{Exa} - \vec{u}_{FE}\right\|$ & $\frac{\left\| \vec{u}_{Exa} - \vec{u}_{FE}\right\|}{\left\| \vec{u}_{Exa} \right\|}$ \\ \hline $6.0$ & $$ & $$ & $$ \\ $3.0$ & $$ & $$ & $$ \\ $1.5$ & $$ & $$ & $$ \\ $ 0.75$ & $$ & $$ & $$ \\ $ 0.375$ & $$ & $$ & $$ \\ \hline \end{tabular} \captionof{table}{Convergence of anlysis results} \label{problem} \end{center}
{ "alphanum_fraction": 0.6661457855, "avg_line_length": 51.0272045028, "ext": "tex", "hexsha": "186d1d568ffe20e597609b4645b90c87a90a8243", "lang": "TeX", "max_forks_count": 18, "max_forks_repo_forks_event_max_datetime": "2022-03-02T07:54:28.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-17T07:24:59.000Z", "max_forks_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jgomezc1/Introductory-Finite-Elements", "max_forks_repo_path": "course_notes/src/appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jgomezc1/Introductory-Finite-Elements", "max_issues_repo_path": "course_notes/src/appendix.tex", "max_line_length": 836, "max_stars_count": 39, "max_stars_repo_head_hexsha": "a4b44d8bf29bcd40185e51ee036f38102f9c6a72", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "AppliedMechanics-EAFIT/Introductory-Finite-Elements", "max_stars_repo_path": "course_notes/src/appendix.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T17:57:11.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-26T13:28:30.000Z", "num_tokens": 17995, "size": 54395 }
\cleardoublepage \chapter{Introduction} \markboth{Introduction}{Introduction} %\addcontentsline{toc}{chapter}{Introduction} \vspace{-1cm} \section{Motivation} % Describe preterm birth Preterm birth (PTB) --- characterised as birth before 37 full weeks of gestation --- affects an estimated 7\% of births in Switzerland, and 11.1\% of all live births worldwide, which corresponds to nearly 15 million babies per year \citep{Blencowe2013}. This comes with a heavy societal burden as it is one of the predominant risk factors behind neurodevelopmental disorders \citep{Pierrat2017,Twilhaar2018}, besides increasing the neonatal and post-discharge costs up to 33 times \citep{Tommiska2003} as compared to term birth. %Economic costs of care in extremely low birth weight infants during the first 2 years of life PTB has been associated with a wide range of impairments in cognitive functions spanning attention \citep{Rommel2017}, working memory \citep{Allotey2018}, affective behaviour \citep{Hornman2016}, executive functions \citep{Costa2017, Burnett2018}, among others \citep{Moreira2014, Allotey2018}. Often unveiled only when children reach school age, some of these difficulties may persist throughout life \citep{Anderson2014, Kajantie2019}. In Switzerland, while the majority of the patients have positive outcomes, 21\% show some form of cognitive impairment, particularly in short term memory \citep{Pittet-Metrailler2019a}. Understanding the neurological underpinnings of these difficulties is thus crucial to identify potential interventions and establish critical periods to restore typical development \citep{Wolke2019}. % Introduce fMRI as an ideal tool to study it Functional magnetic resonance imaging (fMRI) is a powerful tool to characterise brain function in a non-invasive fashion and is, therefore, ideal to investigate the neurological basis of clinical outcomes in the young population. Typically relying on the blood oxygenation level dependent signal, it indirectly measures brain activity with exceptional spatial specificity due to its signal reliability, high spatial resolution for a non-invasive method, and reproducibility. Thanks to this technique, it is now known that brain activity intrinsically oscillates in a highly organised way in rest \citep{Damoiseaux2006}, and during performance of tasks \citep{Elliott2019}. This has promoted discoveries linking brain function and the performance of cognitive demanding tasks in several domains of cognitive neuroscience \citep{Raichle2001, Poldrack2012, DEsposito2016}. FMRI is, in many ways, well-suited to investigate paediatric populations, especially since robust measures of functional activation and connectivity can be obtained from short scanning sessions. It has been successfully employed in studies involving young cohorts tapping into language \citep{Centeno2014, Pigdon2020}, somatomotor \citep{Zwicker2011, Sgandurra2018}, attention \citep{Somandepalli2015, Jiang2019, Harrewijn2020}, memory \citep{Mankinen2015, DeBie2015}, affective processing \citep{Loveland2008, McRae2012}, working memory \citep{Siffredi2017, Yaple2018}, and executive functions \citep{Wang2013, Staphorsius2015}. All of these abilities are more likely to be impaired in preterm- than in term-born individuals, highlighting fMRI's fitness to study this population. 
Indeed, this technique has uncovered altered brain responses in regions underlying executive functions in preterm-born children in frontal \citep{Reveillon2013,Murner-Lavanchy2014} and temporal areas \citep{Kwon2014a, Wilke2014} which were linked to impaired language performance at age 14\textendash15 \citep{Wilke2014}. Recently, fMRI studies have shown that the brain activity is highly dynamic, fluctuating between large-scale brain states formed by simultaneous activation of different subsets of brain regions during rest \citep{Chang2010, Preti2017, Liu2018} and task performance \citep{Di2015, Cheng2018}. Crucially, features of these moment-to-moment fluctuations of brain configuration have very recently been discovered to be linked to cognitive ability in humans, both during rest \cite{Chen2019} and while performing attentional tasks \cite{Fong2019}. These findings indicate the high potential of brain dynamics as an avenue to further characterise the effects of prematurity in the brain, additionally shedding light on how they relate to cognitive outcomes in those who were born too soon. \section{Organisation and main contributions} % Summary of all chapters The goal of this thesis is to advance the knowledge on the neural effects of preterm birth, in the resting state as well as during performance of cognitive tasks, through the development of state-of-the-art imaging analyses. This manuscript is thus organised as a compilation of two published articles and three preprints in preparation for submission. Chapter \ref{chapter:ch2} provides an overview of the state of the art in functional MRI analysis and preterm birth research, and serves as a background for the studies presented in subsequent chapters. It starts by introducing fMRI as a powerful tool to investigate human brain function, followed by a description of currently available methodologies for human brain mapping using this technique. It then characterises the clinical aspects of preterm birth and presents the current knowledge on how its outcomes relate to brain function. Chapters \ref{chapter:ch3}, \ref{chapter:ch4} and \ref{chapter:ch5}, reproduce published manuscripts and articles in preparation which contribute both through novel research, as well as complementary analyses to existing knowledge. Chapter \ref{chapter:ch6} then summarises and integrates the results, and proposes avenues for future research. % Disclaimer – use of pronoun "we" intead of "I" Below, I summarise the main research questions and contributions of each article. In all of them I contributed to the planning, performed all data processing; methods' development; and statistical analysis where applicable, and wrote --- or contributed equally to --- the manuscript and revisions. Since these studies were achieved thanks to a collaboration involving large groups of people, I will often use the personal pronoun "we" when discussing the work done. \begin{figure}[h!] \centering\includegraphics[width=0.95\linewidth]{images/Ch1/Overview.pdf} \caption{\textbf{Thesis overview.} Main contributions on brain dynamics in preterm-birth. 
} \label{fig:overview} \end{figure} \subsection*{Chapter \ref{chapter:ch3}: BOLD signal variability and dynamic spontaneous brain function in the preterm-born} Although brain analysis methods often rely on measuring and comparing the average activity in certain areas of interest, blood oxygenation level dependent (BOLD) signal variability has been shown to yield additional information on brain function that is linked to cognitive abilities. To the best of my knowledge, no one has investigated BOLD signal variability in preterm-born populations. In this chapter, I look into functional brain dynamics in two ways: first, in terms of voxelwise BOLD signal variability and its relationship with gestational age and age at assessment. Secondly, I perform a seed-based co-activation pattern analysis focusing on the dorsal anterior cingulate cortex, an area previously described to be affected by preterm-birth \citep{White2014,Daamen2015,Lordier2019} that was also highlighted in the analysis of BOLD variability. \subsubsection*{Section \ref{section:ch3_BOLDvar_paper}: \textit{(Journal Article)} Altered BOLD variability development and brain dynamics in preterm-born young adolescents} \textit{Is the development of BOLD signal variability affected by preterm birth?}\\ \textit{Does preterm birth affect finer temporal scale brain dynamics?} BOLD signal variability, calculated as the standard deviation of BOLD signal time series, is a measure of how dynamic brain activity is throughout the duration of an fMRI experiment. It has been shown to change with age and cognitive ability \citep{Garrett2013} and to be altered in clinical populations \citep{Zoller2017,Nomi2018, Easson2019}. These studies support its role in reflecting the brain's dynamic range and complexity and, when present at the optimal levels, in allowing greater flexibility for brain function \citep{McIntosh2010, Deco2011}. It is thus a promising avenue to investigate brain "dynamism" in preterm populations. In this article, I investigate the link between dynamic brain function and gestational age, as well as with age at assessment, using a multivariate partial least squares (PLS) approach in a resting-state fMRI paradigm. I have addressed this in two steps: First, I compare how those relationships evolve during early adolescence in a preterm-born and a fullterm control group of children. Then, because BOLD variability is closely linked to functional connectivity, I delve deeper into how the relationship between a region of interest identified in this analysis --- namely, the anterior cingulate cortex --- and other parts of the brain evolve over time in these two groups by using co-activation patterns as brain measures for the PLS. We identify interesting interactions between age at assessment and gestational age in both analyses, suggesting that preterm birth alters the development of dynamism in the brain at later stages in life. % Description of Chapter 4 - ORFi? \subsection*{Chapter \ref{chapter:ch4}: Studying cognition with task-based fMRI: Reality Filtering in young populations} While resting-state analyses provide profound insight into brain function, task-based fMRI paradigms are crucial to understand how the brain works under specific cognitive demands. In the case of preterm populations, there is particular interest in cognitive functions involving frontal brain areas, as neuroimaging studies have highlighted widespread alterations in the prefrontal cortex's structure and function in preterm individuals across lifetime. 
In this chapter, we thus employ a Reality Filtering (RF) task, known to recruit the orbitofrontal cortex (OFC) in adults, to study brain function in this area in young adolescents. To the best of our knowledge, no one has looked into brain function related to RF in children so far. Therefore, this chapter is divided in two steps: First, we confirm the OFC's involvement in RF in typically developing, fullterm-born children. Then, we look into whole-brain, as well as OFC-seed differences, between a preterm-born and a control group while performing an RF task. \subsection*{Section \ref{section:ch4_orfi_ctrl}: \textit{(Journal Article)} "Get real: orbitofrontal cortex mediates the ability to sense reality in early adolescents"} \textit{What are the neural processes underlying reality filtering in early adolescents?} The typical approach to understand the neural underpinnings of cognition is to investigate how brain function changes as a direct effect from task performance. Here, we focus our study on the orbitofrontal cortex (OFC), known to be crucial for the ability to sense reality in adults but to be still under development in young adolescents, to understand how activation in this area changes in the latter population depending on stimuli presentation. Using a previously validated task paradigm adapted to children we confirmed, for the first time using fMRI and in young adolescents, that the OFC mediates reality filtering already at this age. \subsection*{Section \ref{section:ch4_orfi_groups}: \textit{(Journal Article)} Altered orbitofrontal activation in preterm-born young adolescents during performance of a reality filtering task} \textit{Are preterm-born young adolescents able to perform a reality filtering task?}\\ \textit{What are the neural processes underlying reality filtering in preterm-born young adolescents?} Because the prefrontal cortex ––– of which the OFC is a constituting part --- is known to be affected by preterm birth in several ways, we wanted to investigate both whether preterm-born young adolescents are capable of reality filtering and, if this is the case, whether the OFC is involved. Using the same task as in the previous section, we found that although children in the preterm group were able to perform the task with comparable accuracy to the fullterm group, the levels of OFC activation in the former group are lower and no other regions were more activated than in controls. This suggests that preterm-born individuals may have developed mechanisms to optimise OFC activity such that they are still able to perform the task without depending on the same level of activation as the control group. % Description of Chapter 5 - PPI-CAPs paper? \subsection*{Chapter \ref{chapter:ch5}: Time-resolved brain dynamics during task performance} While typical task-based studies compare and contrast how brain activity changes between different task contexts, they still mostly assume stationary within task blocks. This provides a limited, incomplete snapshot of how the brain works under these circumstances. Resting-state fMRI has benefited from methods that uncover dynamic features of large-scale neuronal function for over a decade \citep{Chang2010}, but task-based paradigms have only recently started to explore this important avenue \citep{Gonzalez-Castillo2018}. 
\subsection*{Section \ref{section:ppicaps_method}: \textit{(Journal Article)} "Time-Resolved Effective Connectivity in Task fMRI: Psychophysiological Interactions of Co-Activation Patterns"} \textit{Can we capture relevant task-related functional dynamics in a frame-wise way?} % Useful for defense: https://lib.ugent.be/fulltxt/RUG01/002/508/649/RUG01-002508649_2018_0001_AC.pdf Previous studies have shown that relevant information on brain function is condensed in specific moments of high amplitude peaks in the BOLD signal \citep{Tagliazucchi2011a}, meaning that large parts of the fMRI time series contain information that does not necessarily add information for certain analyses. This means the fMRI data can reduce to a point-process \citep{Tagliazucchi2012} characterised by a sequence of time points when a seed signal traverses a given threshold. If these points are then averaged, one obtains patterns of co-activation with a seed that are recurring throughout the experiment, at a single-frame resolution \citep{Liu2013}. In this article I develop a seed-based method called Psychophysiological Interactions of Co-Activation Patterns (PPI-CAPs) to investigate such dynamic modulations of functional brain connectivity in a task-based context. In a naturalistic setting in which participants watched a short TV program, several patterns of co-activation were yielded using a posterior cingulate cortex seed --- chosen due to its well documented connectivity arrangements \citep{Liu2013, Karahanoglu2015a,Lin2017} and its description as a hub region \citep{Andrews-hanna2010}. These patterns' occurrence rates and polarity varied according the context; the seed activity; or an interaction between the two. Moreover, this method unveiled the consistency in effective connectivity patterns over time and across subjects, which allowed us to uncover links between PPI-CAPs and specific stimuli contained in the video. The main contribution of this study was revealing that explicitly tracking connectivity pattern transients is paramount to advance our understanding of how different brain areas dynamically communicate when presented with a set of cues. Given its ability to concentrate the analysis on very limited amounts of data, this represents a promising avenue for further study of dynamic features of task-modulated brain function in clinical or young populations, such as the preterm-born young adolescents most of this thesis concentrates on. The code developed to perform the analysis described in this work has been made available on \url{https://github.com/lorenafreitas/PPI_CAPs} \subsection*{Section \ref{section:ppicaps_preterm}: \textit{(Journal Article)} "Tracking moment-to-moment functional connectivity in preterm-born young adolescents during movie watching and emotion regulation"} \textit{Do preterm-born young adolescents present altered configurations of task-related functional dynamics as compared to fullterm-born controls?} Having shown that PPI-CAPs is a compelling avenue for the study of dynamic features of context-driven brain function in clinical populations \citep{Freitas2020} in Section \ref{section:ppicaps_method}, I then proceed to employ this approach to study dynamic connectivity in preterm-born young adolescents as compared to age-matched controls. 
To this end, our participants undergo a block-type task which alternates between moments of movie watching --- where the films have an emotional valence (\textit{i.e.}, amusing or repulsive) to them --- followed by moments of emotion regulation and concentration on their own breathing. We recover six robust and reoccurring patterns of co-activation with a dorsal anterior cingulate cortex seed. Moreover, we show that several of the data-driven patterns have a seed, task, or group main effects, as well as interactions between those. This study further highlights the importance of investigating task-driven brain dynamics in the context of clinical populations to obtain a more accurate picture of healthy and altered brain function.
{ "alphanum_fraction": 0.8196326062, "avg_line_length": 174.2, "ext": "tex", "hexsha": "6951b75b4570727d6778e31499e0d301dab38921", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a93aaca70d47d4187627549e1624500b5ff77908", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lorenafreitas/EPFL_thesis_template", "max_forks_repo_path": "main/ch1_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a93aaca70d47d4187627549e1624500b5ff77908", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lorenafreitas/EPFL_thesis_template", "max_issues_repo_path": "main/ch1_introduction.tex", "max_line_length": 1428, "max_stars_count": null, "max_stars_repo_head_hexsha": "a93aaca70d47d4187627549e1624500b5ff77908", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lorenafreitas/EPFL_thesis_template", "max_stars_repo_path": "main/ch1_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3708, "size": 17420 }
%\RequirePackage[]{lineno} \documentclass[iop]{emulateapj} \usepackage{tikz} \usepackage{natbib} \usepackage{amsmath} \usepackage{hyperref} %\usepackage{graphicx} %\usepackage{lineno} \usepackage[percent]{overpic} \usepackage{float} \usepackage{wrapfig} %\usetikzlibrary{shapes.geometric, arrows} %\usetikzlibrary{fit} % %\tikzstyle{hyper} = [circle, text centered, draw=black] %\tikzstyle{param} = [circle, text centered, draw=black] %\tikzstyle{data} = [circle, text centered, draw=black, line width=2pt] %\tikzstyle{arrow} = [thick,->,>=stealth] %\usepackage{soul} \usepackage[title]{appendix} \newcommand{\todo}[3]{{\color{#2}\emph{#1}: #3}} \newcommand{\aim}[1]{\todo{AIM}{red}{#1}} \newcommand{\dwh}[1]{\todo{Attn. Hogg}{blue}{#1}} \newcommand{\que}[1]{\todo{Question}{cyan}{#1}} \newcommand{\myemail}{[email protected]} \newcommand{\mathul}[1]{\underline{#1}} \newcommand{\Sect}[1]{Section~\ref{#1}} \newcommand{\Eq}[1]{Equation~\ref{#1}} \newcommand{\Fig}[1]{Figure~\ref{#1}} \newcommand{\Tab}[1]{Table~\ref{#1}} \newcommand{\project}[1]{\textsc{#1}} \newcommand{\lsst}{\project{LSST}} \newcommand{\desc}{\lsst-\project{DESC}} \newcommand{\sdss}{\project{SDSS}} \newcommand{\boss}{\project{BOSS}} \newcommand{\des}{\project{DES}} \newcommand{\Chippr}{\project{CHIPPR}}% maybe change this to algorithm/italics \newcommand{\repo}[1]{\texttt{#1}} \newcommand{\qp}{\repo{qp}} \newcommand{\chippr}{\repo{chippr}} \newcommand{\cosmolike}{\repo{CosmoLike}} \newcommand{\emcee}{\repo{emcee}} \newcommand{\github}{\href{https://github.com}{GitHub}}% maybe remove link and use code font \newcommand{\python}{\textit{Python}}% establish code font? \newcommand{\data}{\ensuremath{\vec{d}}}% could change to bold \newcommand{\like}{\mathscr{L}} \newcommand{\pr}[1]{\ensuremath{\mathrm{p}(#1)}}% could change to Prob or Pr \newcommand{\expect}[1]{\left<#1\right>} \newcommand{\normal}[2]{\mathcal{N} (#1, #2)} \newcommand{\gvn}{\mid}% could use | or \vert \newcommand{\integral}[2]{\ensuremath{\int #1 \mathrm{d} #2}} \newcommand{\sz}{spec-$z$} \newcommand{\Sz}{Spec-$z$} \newcommand{\pz}{photo-$z$} \newcommand{\Pz}{Photo-$z$} \newcommand{\zpdf}{\pz\ PDF}% could change to posterior \newcommand{\Zpdf}{\Pz\ PDF}% could change to posterior \newcommand{\pzpdf}{\pz\ posterior PDF}% could change to implicit posterior \newcommand{\Pzpdf}{\Pz\ posterior PDF}% could change to implicit posterior \newcommand{\pzip}{\pz\ implicit posterior} \newcommand{\nz}{$n(z)$} \newcommand{\Nz}{$N(z)$} \newcommand{\stack}{$\hat{n}(z)$} \newcommand{\ntot}{\ensuremath{N_{\mathrm{tot}}}} \newcommand{\bvec}[1]{\ensuremath{\boldsymbol{#1}}}% could change to \vec \newcommand{\ndphi}{\bvec{\phi}} \newcommand{\mmle}{marginalized maximum likelihood estimate}% really marginalized maximum a posteriori estimate, may still change this \begin{document} %\linenumbers \title{How to obtain the redshift distribution from probabilistic redshift estimates} \author{Alex I. Malz\altaffilmark{1,2}} \author{David W. 
Hogg\altaffilmark{2,3,4,5}} \email{[email protected]} \altaffiltext{1}{German Centre of Cosmological Lensing, Ruhr-Universit\"{a}t, Universit\"{a}tsstra{\ss}e 150, 44801 Bochum, Germany} \altaffiltext{2}{Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, 9th floor, New York, NY 10003, USA} \altaffiltext{3}{Simons Center for Computational Astrophysics, 162 Fifth Avenue, 7th floor, New York, NY 10010, USA} \altaffiltext{4}{Center for Data Science, New York University, 60 Fifth Avenue, 7th floor, New York, NY 10003, USA} \altaffiltext{5}{Max-Planck-Institut f\"ur Astronomie, K\"onigstuhl 17, D-69117 Heidelberg, Germany} \begin{abstract} A trustworthy estimate of the redshift distribution $n(z)$ is crucial for using weak gravitational lensing and large-scale structure of galaxy catalogs to study cosmology. Spectroscopic redshifts for the dim and numerous galaxies of next-generation weak-lensing surveys are expected to be unavailable, making photometric redshift (photo-$z$) probability density functions (PDFs) the next-best alternative for comprehensively encapsulating the nontrivial systematics affecting photo-$z$ point estimation. The established stacked estimator of $n(z)$ avoids reducing photo-$z$ PDFs to point estimates but yields a systematically biased estimate of $n(z)$ that worsens with decreasing signal-to-noise, the very regime where photo-$z$ PDFs are most necessary. We introduce Cosmological Hierarchical Inference with Probabilistic Photometric Redshifts (\textsc{CHIPPR}), a statistically rigorous probabilistic graphical model of redshift-dependent photometry, which correctly propagates the redshift uncertainty information beyond the best-fit estimator of $n(z)$ produced by traditional procedures and is provably the only self-consistent way to recover $n(z)$ from photo-$z$ PDFs. We present the \texttt{chippr} prototype code %and use it to forecast constraints in the space of cosmological parameters , noting that the mathematically justifiable approach incurs computational expense. The \textsc{CHIPPR} approach is applicable to any one-point statistic of any random variable, provided the prior probability density used to produce the posteriors is explicitly known; if the prior is implicit, as may be the case for popular photo-$z$ techniques, then the resulting posterior PDFs cannot be used for scientific inference. We therefore recommend that the photo-$z$ community focus on developing methodologies that enable the recovery of photo-$z$ likelihoods with support over all redshifts, either directly or via a known prior probability density. \end{abstract} \keywords{cosmology: cosmological parameters --- galaxies: statistics --- gravitational lensing: weak --- methods: data analysis --- methods: statistical} \maketitle %\aim{TODO: Add test case panel labels to all figs of \chippr\ sims/outputs. Take higher-z samples in mock \pzpdf\ plots relative to outlier populations. Include $\bar{z}$ in results plots.} \section{Introduction} \label{sec:intro} % what are photo-zs? Photometric redshift (\pz) estimation has been a staple of studies of galaxy evolution, large-scale structure, and cosmology since its conception half a century ago \citep{baum_photoelectric_1962}. An extremely coarse spectrum in the form of photometry in a handful of broadband filters can be an effective substitute for the time- and photon-intensive process of obtaining a spectroscopic redshift (\sz), a procedure that may only be applied to relatively bright galaxies. 
Once the photometric colors are calibrated against either a library of spectral energy distribution (SED) templates or a data set of spectra for galaxies with known redshifts, a correspondence between photometric colors and redshifts may be constructed, forming a trustworthy basis for \pz\ estimation or testing. % why do we need photo-zs? Calculations of correlation functions of cosmic shears and galaxy positions that constrain the cosmological parameters require large numbers of high-confidence redshifts of surveyed galaxies. Many more \pz s may be obtained in the time it would take to observe a smaller number of \sz s, and \pz s may be measured for galaxies too dim for accurate \sz\ confirmation, permitting the compilation of large catalogs of galaxies spanning a broad range of redshifts and luminosities. \Pz s have thus enabled the era of precision cosmology, heralded by weak gravitational lensing tomography and baryon acoustic oscillation peak measurements. % what's wrong with photo-zs? However, \pz s are susceptible to inaccuracy and imprecision in the form of their inherent noisiness resulting from the coarseness of photometric filters, catastrophic errors in which galaxies of one SED at one redshift are mistaken for galaxies of another SED at a different redshift, and systematics introduced by observational techniques, data reduction processes, and training or template set limitations. Figure~\ref{fig:pedagogical_scatter} is an adaptation of the ubiquitous plots of \pz\ vs. \sz\ illustrating the assumptions underlying \pz\ estimation in general, that \sz s are a good approximation to true redshifts and \pz s represent special non-linear projections of observed photometry to a scalar variable that approximates the true redshift. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/jain05.png} \caption{ A generic probability space (darker in areas of higher probability density) of true redshift ($x$-axis) and a nonlinear projection of photometric data ($y$-axis), with vertical cuts and marginals (orange) indicating the construction of likelihoods and horizontal cuts and marginals (blue) indicating the construction of posteriors, with a theoretically perfect \pz\ estimate on the diagonal (yellow) for reference. The data points were extracted using WebPlotDigitizer \citep{rohatgi_webplotdigitizer_2019} from a \sz\ vs. \pz\ plot in \citet{jain_whole_2015}. % \aim{TODO: recolor insets/bars to not be same as gradient.} } \label{fig:pedagogical_scatter} \end{center} \end{figure} % how do we interpret photo-z imperfections? There are several varieties of generally non-Gaussian deviation from a trivial relationship between redshift and data in Figure~\ref{fig:pedagogical_scatter}, represented by a $y = x$ diagonal line. The coarseness of the photometric filters causes scatter about the diagonal, with larger scatter perpendicular to the diagonal at redshifts where highly identifiable spectral features pass between the filters, as well as higher scatter at high redshifts where faint galaxies with large photometric errors are more abundant. There are populations of outliers, far from the diagonal, comprised of galaxies for which the redshift estimate is catastrophically distinct from the true redshift, showing that outliers are not uniformly distributed nor restricted to long tails away from a Gaussian scatter. 
And, though hardly perceptible in the plot, there is a systematic bias, wherein the average of the points would not lie on the diagonal but would be offset by a small bias, suggested by the trend of high-redshift points to lie below the diagonal. % how much do these imperfections matter? Once propagated through the calculations of correlation functions of cosmic shear and galaxy positions, \pz\ errors are a dominant contributor to the total uncertainties reported on cosmological parameters \citep{abruzzo_impact_2019}. As progress has been made on the influence of other sources of systematic error, the uncertainties associated with \pz s have come to dominate the error budget of cosmological parameter estimates made by current surveys such as \des\ \citep{hoyle_dark_2018}, \project{HSC} \citep{tanaka_photometric_2018}, and \project{KiDS} \citep{hildebrandt_kids-450:_2017}. Based on the goals of a photometric galaxy survey, limits can be placed on the tolerance to these effects. For example, the Science Requirements Document \citep{mandelbaum_weak_2017} states \lsst's requirements for the main cosmological sample, reproduced in Table~\ref{tab:lsstsrd}. \begin{table} \begin{center} \caption{\Pz\ requirements for \lsst\ cosmology\\ \citep{mandelbaum_weak_2017}.} \begin{tabular}{ll} Number of galaxies & $\approx 10^{7}$\\ Root-mean-square error & $< 0.02 (1 + z)$\\ $3 \sigma$ catastrophic outlier rate & $< 10\%$\\ Canonical bias & $< 0.003 (1 + z)$\\ \end{tabular} \label{tab:lsstsrd} \end{center} \end{table} % how can we improve photo-zs? Much effort has been dedicated to improving \pz s, though they are still most commonly obtained by a maximum likelihood estimator (MLE) based on libraries of galaxy SED templates, with conservative approaches to error estimation. The presence of galaxies whose SEDs are not represented by the template library tends to lead to catastrophic outliers distributed like the horizontally oriented population of \Fig{fig:pedagogical_scatter}. For data-driven approaches, training sets that are incomplete in redshift coverage tend to result in catastrophic outliers like the vertically oriented population of \Fig{fig:pedagogical_scatter}. The approaches of using a training set versus a template library are related to one another by \citet{budavari_unified_2009}. Sophisticated Bayesian techniques and machine learning methods have been employed to improve precision \citep{carliles_random_2010} and accuracy \citep{sadeh_annz2:_2016}, while other advances have focused on identifying and removing catastrophic outliers when using \pz s for inference \citep{gorecki_new_2014}. % PDFs are a better way to improve photo-zs The probability density function (PDF) in redshift space for each galaxy, commonly written as $\pr{z}$, is an alternative to the MLE (with or without presumed Gaussian error bars) \citep{koo_photometric_1999}. This option is favorable because it contains more potentially useful information about the uncertainty on each galaxy's redshift, incorporating our understanding of precision, accuracy, and systematic error. However, denoting \zpdf s as ``$\pr{z}$'' is an abuse of notation, as it does not adequately convey what information is being used to constrain the redshift $z$; \zpdf s are \textit{posterior} PDFs, conditioned on the photometric data and prior knowledge. In terms of \Fig{fig:pedagogical_scatter}, \zpdf s are horizontal cuts, probabilities of redshift conditioned on a specific value of data, i.e. 
posteriors $\pr{z \gvn \data}$, which constrain redshifts, whereas vertical cuts through this space are probabilities of data conditioned on a specific redshift, i.e. likelihoods $\pr{\data \gvn z}$, from which photometric data is actually drawn. % photo-z PDFs are established \Pzpdf s have been produced by completed surveys \citep{hildebrandt_cfhtlens:_2012, sheldon_photometric_2012} and will be produced by ongoing and upcoming surveys \citep{abell_lsst_2009, carrasco_kind_exhausting_2014, bonnett_redshift_2016, masters_mapping_2015}. \Pzpdf s are not without their own shortcomings, however, including the resources necessary to calculate and record them for large galaxy surveys \citep{carrasco_kind_sparse_2014, malz_approximating_2018} and the divergent results of each method used to derive them \citep{hildebrandt_phat:_2010, dahlen_critical_2013, sanchez_clustering_2013, bonnett_redshift_2016, tanaka_photometric_2018}. Though the matter is outside the scope of this paper, reviews of various methods have been presented in the literature \citep{sheldon_photometric_2012, ball_robust_2008, carrasco_kind_tpz:_2013, carrasco_kind_exhausting_2014, schmidt_evaluation_2020}. The most concerning weakness of \pzpdf s, however, is their usage in the literature, which is at best inconsistent and at worst incorrect. % photo-z PDFs are most often reduced to point estimates Though their potential to improve estimates of physical parameters is tremendous, \pzpdf s have been applied only to a limited extent, most often by reduction to familiar point estimates. If the true redshifts $\{z_{j}^{\dagger}\}$ of galaxies $j$ are known, then their redshift PDFs are well-approximated by delta functions $\{\delta(z, z_{j}^{\dagger})\}$ centered at the true redshift\footnote{Note that \sz s are not the same as known true redshifts; the PDFs of \sz s would be narrow and almost always unimodal, but they would not be delta functions due to observational errors.}, and the redshift distribution is effectively approximated by a histogram or other interpolation of the delta functions $\{\delta(z, z_{j}^{\dagger})\}$. When \pzpdf s are available instead of true redshifts, the simplest approach reduces them to point estimates $\{\hat{z}_{i}\}$ of redshift by using $\delta(z, \hat{z}_{j})$ in place of $\delta(z, z_{j}^{\dagger})$. Though it is most common for $\hat{z}_{j}$ to be the maximum or \textit{mode} of the \pzpdf, there are other, more principled point estimate reduction procedures \citep{tanaka_photometric_2018}. % photo-z PDFs may also be used to define cuts Regardless of how it is done, any procedure that reduces \pzpdf s to point estimates discards valuable information about the uncertainty on redshift. \Pzpdf s have also been used to form selection criteria of samples from galaxy surveys without propagation through the calculations of physical parameters \citep{van_breukelen_reliable_2009, viironen_high_2015}. Probability cuts on Bayesian quantities are not uncommon \citep{leung_bayesian_2017, dipompeo_quasar_2015}, but that procedure does not fully take advantage of all information contained in a probability distribution for parameter inference. The most prevalent application of \pzpdf s that preserves their information content is the estimation of the \textit{redshift distribution function \Nz}, or, interchangably, its normalized cousin the \textit{redshift density function \nz}. 
\nz\ is used to calculate the redshift calibration bias $b_{z}$ between the true and observed critical surface densities in galaxy-galaxy lensing \citep{mandelbaum_precision_2008} and the geometric lens efficiency $g_{k}(\chi)$ in tomographic weak lensing by large-scale structure \citep{benjamin_cfhtlens_2013}. \Nz\ may be used to validate survey selection functions used in generation of realistic, multi-purpose mock catalogs \citep{norberg_2df_2002}. As a key input to the traditional calculation of the power spectra of weak gravitational lensing and large-scale structure, the accuracy and precision to which \Nz\ is estimated can strongly impact our constraints on the parameters of cosmological models \citep{bonnett_using_2015, masters_mapping_2015, viironen_high_2015, asorey_galaxy_2016, bonnett_redshift_2016, yang_calibrating_2018}, so it is unsurprising that this last application dominates the canonical bias requirement of Table~\ref{tab:lsstsrd}. %\aim{TODO: Say why \Nz\ matters for cosmology, what precision is needed for \Nz\ for future weak lensing surveys, motivate how well we need to know \Nz.} Even with \pz s adhering to the \lsst\ requirements of \Tab{tab:lsstsrd}, the degree to which constraints on the cosmological parameters can advance is limited by the accuracy and precision to which \nz\ is known \citep{abruzzo_impact_2019}. % Say what precision the mass function is needed (in, say cluster studies) for precision cosmology. % how people get n(z) from photo-z PDFs Though it is traditional to estimate \nz\ from \pz\ point estimates \citep{abruzzo_impact_2019}, it has become more common to use \pzpdf s directly to calculate the conceptually simple but mathematically inconsistent \citep{hogg_data_2012} \textit{stacked estimator} $\hat{n}(z)$ of the redshift density function \citep{lima_estimating_2008} \begin{align} \label{eqn:stack} \hat{n}(z) &= \frac{1}{J} \sum_{j = 0}^{J} \pr{z}_{j} \end{align} for a sample of $J$ galaxies $j$, or, equivalently, the redshift distribution function $\hat{N}(z) = J \hat{n}(z)$, by effectively averaging the \pzpdf s. This summation procedure has been used extensively in cosmological analyses with photometric galaxy samples \citep{mandelbaum_precision_2008, benjamin_cfhtlens_2013, kelly_weighing_2014}. %\aim{TODO: Continue adding equations/citations for the use of \Nz\ in cosmology.} % what this paper is about Despite the growing prevalence of \pzpdf\ production, no implementation of inference using \pzpdf s has yet been presented with a mathematically consistent methodology. This paper challenges the logically invalid yet pervasive analysis procedure of stacking \pzpdf s by presenting and validating a hierarchical Bayesian technique for the use of \pzpdf s\ in the inference of \nz, yielding a method applicable to arbitrary one-point statistics relevant to cosmology, large-scale structure, and galaxy evolution; future work will extend this methodology to higher-order statistics. We aim to develop a clear methodology guiding the use of \pzpdf s in inference so they may be utilized effectively by the cosmology community. Though others have approached the problem before \citep{leistedt_hierarchical_2016, leistedt_hierarchical_2018}, the method presented here differs in that it makes use of any existing catalog of \pzpdf s, rather than requiring a simultaneous derivation of the \pzpdf s and the redshift distribution, making it preferable to ongoing surveys for which there may be inertia preventing a complete restructuring of the analysis pipeline. 
In Section~\ref{sec:meth}, we present the \Chippr\ model for characterizing the full posterior probability landscape of \Nz\ using \pzpdf s. In Section~\ref{sec:application}, we present the \chippr\ implementation of the \Chippr\ model and the experimental set up by which we validate it, including the forward modeling of mock \pzpdf s. In Section~\ref{sec:alldata}, we present a number of informative test cases and compare the results of \chippr\ with alternative approaches. In Section~\ref{sec:results}, we stress-test the \Chippr\ model under nontraditional conditions. % in the context of cosmology. Finally, in Section~\ref{sec:con}, we make recommendations for future research involving \nz\ estimation. \section{Model} \label{sec:meth} Consider a survey of $J$ galaxies $j$, each with photometric data $\data_{j}$; thus the entire survey over some solid angle produces the ensemble of photometric magnitudes (or colors) and their associated observational errors $\{\data_{j}\}$. Each galaxy $j$ has a redshift parameter $z_{j}$ that we would like to learn. The distribution of the ensemble of redshift parameters $\{z_{j}\}$ may be described by the hyperparameters defining the redshift distribution function \nz\ that we would like to quantify. The redshift distribution function \nz\ is the number of galaxies per unit redshift, effectively defining the evolution in the number of galaxies convolved with the selection function of the sample \citep{menard_clustering-based_2013}. In \Sect{sec:forward}, we establish a forward model encapsulating the causal relationship between \nz\ and photometry $\data$. In \Sect{sec:prob}, we present the directed acyclic graph of this probabilistic generative model and interpret the corresponding mathematical expression, whose full derivation may be found in the Appendix. In \Sect{sec:limitations}, we summarize the necessary assumptions of the model. %\aim{TODO: check for consistent notation of \pzip s vs. \pzpdf s.} \subsection{Forward Model} \label{sec:forward} We begin by reframing the redshift distribution \nz\ from a probabilistic perspective. Here we define a redshift density \nz\ as the normalized probability density \begin{equation} \label{eqn:nz} \int_{-\infty}^{\infty}\ n(z)\ dz\ \equiv\ \frac{1}{J}\ \int_{-\infty}^{\infty}\ \sum_{j=1}^{J}\ \delta(z_{j},\ z)\ dz = 1 \end{equation} of finding a galaxy $j$ in a catalog of $J$ galaxies having a redshift $z$. We believe that galaxy redshifts are indeed sampled, or drawn, from \nz, making it a probability density over redshift; this fact can also be confirmed by dimensional analysis of \Eq{eqn:nz}, as suggested in \citet{hogg_data_2012}. We may without loss of generality impose a parameterization \begin{equation} \label{eqn:fz} f(z; \ndphi)\ \equiv\ n(z) \end{equation} in terms of some parameter vector $\ndphi$. At this point, the parameter vector is quite general and may represent coefficients in a high-order polynomial as a function of redshift, a set of means and variances defining Gaussians that sum to the desired distribution, a set of histogram heights that describe a binned version of the redshift distribution function, etc. Upon doing so, we may rewrite \Eq{eqn:fz} as \begin{equation} \label{eqn:pz} z_{j}\ \sim\ \pr{z \gvn \ndphi}\ \equiv\ f(z; \ndphi), \end{equation} a probability density over redshift conditioned on the parameters $\ndphi$ specifying \nz. 
Note that $z_{j}$ does not depend on the redshift $z_{j'}$ of some other galaxy $j' \neq j$, a statement of the causal independence of galaxy redshifts from one another. In addition to believing \nz\ is a PDF from which redshifts are drawn, we also believe that there is some higher dimensional probability space $\pr{z, \data}$ of redshift $z$ and photometric data vectors $\data$, which may be any combination of fluxes, magnitudes, colors, and their observational errors. Under this framework, \nz\ is equivalent to an integral \begin{equation} \label{eqn:integral} n(z)\ =\ \integral{\pr{z, \data}}{\data} \end{equation} over the dimension of data in that joint probability space. Note that galaxies may have different observational data despite sharing the same redshift, and that galaxies at different redshifts may have identical photometry; the space $\pr{z, \data}$ need not be one-to-one. We assume a stronger version of statistical independence here, that draws $(z_{j}, \data_{j})$ are independent of draws $(z_{j'}, \data_{j'})$ in this space; the data and redshift of each galaxy are independent of those of other galaxies. However, this problem has additional causal structure that we can acknowledge. The photometry results from the redshifts, not the other way around. This is the fundamental assumption upon which \pz\ estimation is based. The forward model corresponds to first drawing redshifts according to \Eq{eqn:pz} and then drawing data from the likelihood \begin{equation} \label{eqn:pzpdf} \data_{j}\ \sim\ \pr{\data \gvn z_{j}} \end{equation} of photometry conditioned on redshift, illustrated in Figure~\ref{fig:pedagogical_scatter}. This description of the physical system corresponds to a forward model by which we actually believe photometry is generated: \begin{enumerate} \item There exists a redshift distribution \nz\ with parameters $\ndphi$. \item Galaxy redshifts $\{z_{j}\}$ are independent draws from $\pr{z \gvn \ndphi}$. \item Galaxy photometry $\data_{j}$ is drawn from the likelihoods $\pr{\data_{j} \gvn z}$. \end{enumerate} \subsection{Probabilistic Model} \label{sec:prob} A forward model such as that of \Sect{sec:forward} corresponds to a probabilistic graphical model (PGM), represented by a directed acyclic graph (DAG) as in \Fig{fig:pgm}. A DAG conveys the causal relationships between physical parameters and, like a Feynman diagram in the context of particle physics, is a shorthand for mathematical relationships between variables. The photometric data $\data_{j}$ of a galaxy is drawn from some function of its redshift $z_{j}$, independent of other galaxies' data and redshift. Both data and redshift are random variables, but data is the one that we observe and redshift is not directly observable. In this problem, we don't care about further constraining the redshifts of individual galaxies, only the redshift distribution \nz, so we consider redshift to be a \textit{latent variable}. Because the parameters $\ndphi$ that we seek are causally separated from the data by the latent variable of redshift, we call them \textit{hyperparameters}. \begin{figure} \begin{center} \includegraphics[height=0.25\textheight]{figures/chippr/pgm.png} \caption{The directed acyclic graph of the CHIPPR model, where circles indicate random variables and arrows indicate causal relationships. The redshift distribution \nz\ parameterized by hyperparameters $\ndphi$ exists independent of the survey of $J$ galaxies, indicated as a box. 
The redshifts $\{z_{j}\}$ of all galaxies in the survey are latent variables independently drawn from the redshift distribution, which is a function of $\ndphi$. The photometric data $\data_{j}$ for each galaxy is drawn from a function of its redshift $z_{j}$ and observed, indicated by a shaded circle. } \label{fig:pgm} \end{center} \end{figure} The problem facing cosmologists is to determine the true value of $\ndphi$ from observing the photometry $\{\data_{j}\}$ of a large sample of $J$ galaxies $j$. To self-consistently propagate the uncertainty in the inference of redshift, however, it is more appropriate to estimate the posterior $\pr{\ndphi \gvn \{\data_{j}\}}$ over all possible values of $\ndphi$ conditioned on all the observed data $\{\data_{j}\}$ available in a generic catalog. In order to use the DAG of \Fig{fig:pgm} to derive an expression for $\pr{\ndphi \gvn \{\data_{j}\}}$ in terms of \pzpdf s, we must introduce two more concepts, confusingly named the \textit{implicit prior} and the \textit{prior probability density} (\textit{prior PDF}), elaborated upon below. When we constrain the redshift of a galaxy using its observed photometric data $\data_{j}$, we are effectively estimating a posterior $\pr{z \gvn \data_{j}}$, the probability of an unknown quantity conditioned on the quantity we have in hand, i.e the photometric data. This posterior is effectively a marginalization with respect to redshift at a given value of $\data = \data_{j}$ of the \textit{empirical frequency distribution} $\pr{z, \data \gvn \ndphi^{\dagger}}$, the joint probability density corresponding to the true redshift distribution parameterized by $\ndphi^{\dagger}$, which exists in nature but need not be known. %\que{Propagate new $\pr{z, \data \gvn \ndphi^{\dagger}}$ notation through appendix?} As the hyperparameters $\ndphi^{\dagger}$ of the true redshift distribution are in general unknown, the investigator seeking to estimate a posterior $\pr{z \gvn \data_{j}}$ must have a model $\phi^{*}$ for the general relationship between redshifts and photometry, whether empirical, as is the case for machine learning \pzpdf\ methods, or analytic, as is the case for template-based \pzpdf\ methods. If we were to marginalize over the photometry in $\pr{\data, z}$, we would obtain a one-dimensional PDF $\pr{z \gvn \ndphi^{*}}$ over redshift, which can by definition be parameterized by the same functional form as \nz, for some $\ndphi^{*}$ specific to the estimation procedure that may or may not bear any relation to the hyperparameters $\ndphi^{\dagger}$ of the true \nz. Rather, $\ndphi^{*}$ is a consequence of the generative model for how photometry results from redshift, including the influence of intrinsic galaxy spectra and instrumental effects. We call $\pr{z \gvn \ndphi^{*}}$ the \textit{implicit prior}, as it is rarely explicitly known nor chosen by the researcher\footnote{For template-based methods, the implicit prior is often an explicitly known input to the algorithm, engineered as an initial guess for the true $\ndphi$, with an aim for a realistic choice guided by an earlier spectroscopic survey. (See \citet{benitez_bayesian_2000} for more detail.) It may thus be more appropriate to call it an \textit{interim prior}, but we will use the former term throughout this paper for generality.} Because the implicit prior is unavoidable and almost inherently not uninformative, the \pzpdf s reported by any method must be \textit{implicit posteriors} ${\pr{z \gvn \data, \ndphi^{*}}}$ weighted by the implicit prior. 
%Posteriors differ from likelihoods by way of a prior distribution, so we cannot simply assume that the available data products are \pz\ posteriors $\pr{z \gvn \data_{j}}$. %Rather, we have a catalog of implicit-prior weighted \pz\ posteriors $\pr{z \gvn \data_{j}, \ndphi^{*}}$. %There must have been some interim prior probability distribution $p(z|\vec{\theta}^{*})$ defined in terms of the interim prior parameter values (hereafter the interim prior) $\vec{\theta}^{*}$ explicitly chosen or implicitly made to perform the calculation of the probabilistic photo-$z$s. %If it is implicit, it may not be representable in the parametrization we have chosen, and furthermore it may not be known at all; a method that produces interim photo-$z$ posteriors of this kind is not suitable for inference. %However, so long as the implicit prior is known, hierarchical inference is possible. The prior probability density $\pr{\ndphi}$ is a more familiar concept in astronomy; to progress, we will have to choose a prior probability density over all possible values of the hyperparameters $\ndphi$. This prior need not be excessively proscriptive; for example, it may be chosen to enforce smoothness at physically motivated scales in redshift without imposing any particular region as over- or under-dense. With inputs of the \pzip\ catalog $\{\pr{z \gvn \data, \ndphi^{*}}\}$, the implicit prior $\pr{z \gvn \ndphi^{*}}$, and the prior PDF $\pr{\ndphi}$, we thus aim to obtain the posterior probability $\pr{\ndphi \gvn \{\data_{j}\}}$ of the redshift density function given all the photometric data. By performing the derivation of the Appendix, we arrive at the desired expression \begin{equation} \label{eqn:fullpost} \pr{\ndphi \gvn \{\data_{j}\}} \propto \pr{\ndphi} \integral{\prod_{j=1}^{J} \frac{\pr{z \gvn \data_{j}, \ndphi^{*}} \pr{z \gvn \ndphi}}{\pr{z \gvn \ndphi^{*}}}}{z}, \end{equation} which is the very heart of \Chippr, also given as \Eq{eqn:final}. This in effect replaces the implicit prior with the sampled model hyperparameters, thereby converting the \pzip s into likelihoods in order to obtain unbiased posteriors. \subsection{Model Limitations} \label{sec:limitations} Finally, we explicitly review the assumptions made by this approach, which are as follows: \begin{enumerate} \item Photometric measurements of galaxies are statistically independent Poisson draws from the set of all galaxies such that \Eq{eqn:indiedat} and \Eq{eqn:indie} hold. \item We take the reported \pzip s to be accurate, free of model misspecification; draws thereof must not be inconsistent with the distribution of photometry and redshifts. Furthermore, we must be given the implicit prior $\ndphi^{*}$ used to produce the \pzip s. \item We must assume a hyperprior distribution $\pr{\ndphi}$ constraining the underlying probability distribution of the hyperparameters, which is informed by our prior beliefs about the true redshift distribution function. \end{enumerate} These assumptions have known limitations. First, the photometric data are not a set of independent measurements; the data are correlated not only by the conditions of the experiment under which they were observed (instrument and observing conditions) but also by redshift covariances resulting from physical processes governing underlying galaxy spectra and their relation to the redshift distribution function. 
Second, the reported \pzip s may not be trustworthy; there is not yet agreement on the best technique to obtain \pzpdf s, and the implicit prior may not be appropriate or even known to us as consumers of \pzip s. Third, the hyperprior may be quite arbitrary and poorly motivated if the underlying physics is complex, and it can only be appropriate if our prior beliefs about \nz\ are accurate. Furthermore, in Section~\ref{sec:prob}, we have made an assumption of \textit{support}, meaning the model $\pr{z, \data \gvn \ndphi}$ has mutual coverage with the parameter values that real galaxies can take. In other words, any probability distribution over the $(z, \data)$ space must be nonzero where real galaxies can exist. Additionally, the hyperprior $\pr{\ndphi}$ must be nonzero at the hyperparameters $\ndphi^{\dagger}$ of the true redshift density function \nz. This assumption cannot be violated under the experimental design of Section~\ref{sec:forward}, but it is not generically guaranteed when performing inference on real data; thus the chosen $\pr{z, \data \gvn \ndphi^{*}}$ and $\pr{\ndphi}$ must be sufficiently general as to not rule out plausible areas of parameter space. \section{Methods \& Data} \label{sec:application} Here we describe the method by which we demonstrate the \Chippr\ model. In \Sect{sec:exp}, we outline the implementation of the \chippr\ code. %In \Sect{sec:sheldon}, we introduce the alternative \nz\ estimators against which \Chippr\ is compared. %In \Sect{sec:diag}, we present the quantitative metrics by which the \nz\ estimators are compared. In \Sect{sec:mock}, we outline the procedure for emulating mock \pzip s. \subsection{Implementation} \label{sec:exp} We implement the \Chippr\ model in code in order to perform tests of its validity and to compare its performance to that of traditional alternatives. In \Sect{sec:mcmc}, we describe the publicly available \chippr\ library. In \Sect{sec:sheldon}, we introduce the alternative approaches evaluated for comparison with \Chippr. In \Sect{sec:diag}, we describe the diagnostic criteria by which we assess estimators of \nz. %In \Sect{app:acorr}, we outline how \chippr\ can be used to sample the full log-posterior distribution $\ln[\pr{\ndphi \gvn \{\data_{j}\}}]$. \subsubsection{Code} \label{sec:mcmc} \chippr\ is a \python\ 2 library\footnote{\url{https://github.com/aimalz/chippr}} that includes an implementation of the \Chippr\ model as well as an extensive suite of tools for comparing \Chippr\ to other approaches. Though there are plans for future expansion to more flexible parameterizations, the current version of \chippr\ uses a log-space piecewise constant parameterization \begin{equation} \label{eqn:logstepfunc} f(z; \ndphi) = \exp[\phi^{k}]\ \mathrm{if}\ z^{k} < z < z^{k+1} \end{equation} for \nz\ and every \pzpdf, satisfying \begin{equation} \label{eqn:logstepfuncnorm} \sum_{k=1}^{K} \exp[\phi^{k}] \delta z^{k} = 1 \end{equation} with $K$ bins of width $\delta z^{1}, \dots, \delta z^{K}$ defined by endpoints $z^{0}, \dots, z^{K}$. Thus each $\pr{z \gvn \data_{j}} = f(z; \ndphi_{j})$ has parameters $\ndphi_{j}$ that are defined in the same basis as those of \nz. To infer the full log-posterior distribution $\ln[\pr{\ndphi \gvn \{\data_{j}\}}]$, one must provide a plaintext file with $K+1$ redshift bin endpoints $\{z_{k}\}$, the parameters $\ndphi^{*}$ of the implicit log-prior, and the parameters $\{\ndphi_{j}\}$ of the log-posteriors $\ln[\pr{z \gvn \data_{j}, \ndphi^{*})}$. 
The \emcee \citep{foreman-mackey_emcee_2013} implementation of ensemble sampling is used to sample the full log-posterior of \Eq{eqn:final}.
\chippr\ accepts a configuration file of user-specified parameters, among them the number $W$ of walkers.
At each iteration $i$ and for each walker, a proposed sample $\hat{\ndphi}_{i}$ is drawn from the ensemble's proposal distribution and accepted or rejected according to the full log-posterior distribution.
%Two threshold conditions are defined, one designating all previous samples to be ignored as as products of a burn-in phase and another indicating when a sufficient number of post-burn samples have been accepted.
%In this case, the first threshold (described in \Sect{app:acorr}) is defined in terms of sub-runs of $10^{3}$ accepted samples, and the second is defined as an accumulation of $10^{4}$ samples.
%Though previous versions used \texttt{HDF5} for the primary I/O format due to its efficiency for large quantities of data, it was abandoned in favor of \texttt{pickle} in the working release due to the instability of the \python\ implementation of the format on high-performance computing systems.
The resulting output is a set of files,
% is a set of ordered \texttt{pickle} files
% enumerated by $\rho$
each containing the state information of $10^{3}$ accepted samples, with the last ten files including only samples taken after the completion of the burn-in phase, as defined by the Gelman-Rubin statistic \citep{gelman_inference_1992}.
% after each sub-run.
%The state information includes $\frac{I_{0}}{s}$ accepted samples $\ndphi_{i}$ for a pre-specified chain thinning factor $s$ and their full posterior probabilities $\pr{\ndphi_{i} \gvn \{\data_{j}\}}$, as well as the autocorrelation times and acceptance fractions calculated for each element of $\ndphi$, divided into separate files before and after the completion of the burn-in phase, as defined by the Gelman-Rubin statistic \citep{gelman_inference_1992}.

\subsubsection{Alternative approaches for comparison}
\label{sec:sheldon}

In this study, we compare the results of \Eq{eqn:fullpost} to those of the two most common approaches to estimating \nz\ from a catalog of \pzip s: the distribution $n(z_{\mathrm{max}})$ of the redshifts at maximum posterior probability
\begin{equation}
\label{eqn:mmap}
f^{MMAP}(z; \hat{\ndphi}) = \sum_{j=1}^{J}\ \delta(z, \mathrm{mode}[\pr{z \gvn \data_{j}, \ndphi^{*}}])
\end{equation}
(i.e., the distribution of modes of the \pzip s) and the stacked estimator of \Eq{eqn:stacked}, which can be rewritten as
\begin{equation}
\label{eqn:stacked}
f^{stack}(z; \hat{\ndphi}) = \sum_{j=1}^{J}\ \pr{z \gvn \data_{j}, \ndphi^{*}}
\end{equation}
in terms of the \pzip s we have.
These two approaches have been compared to one another by \citet{hildebrandt_cfhtlens:_2012}, \citet{benjamin_cfhtlens_2013}, and \citet{asorey_galaxy_2016} in the past but not to \Chippr.
Point estimation converts the implicit \pz\ posteriors $\pr{z \gvn \data_{j}, \ndphi^{*}}$ into delta functions with all probability at a single estimated redshift.
Some variants of point estimation choose this single redshift to be that of maximum a posteriori probability $\mathrm{mode}[\pr{z \gvn \data_{j}, \ndphi^{*}}]$ or the expected value of redshift $\langle z \rangle = \integral{z \pr{z \gvn \data_{j}, \ndphi^{*}}}{z}$.
\citet{tanaka_photometric_2018} directs attention to deriving an optimal point estimate reduction of a \pzpdf, but since the purpose of this paper is to compare against the most established alternative estimators of \nz, its use will be postponed until a future study.
Stacking these modified \pzip s leads to the marginalized maximum a posteriori (MMAP) estimator and the marginalized expected value (MExp) estimator, though only the former is included in this study since the latter has fallen out of favor in recent years\footnote{And for good reason! Consider a bimodal \pzpdf; its expected value may very well fall in a region of very low probability, yielding a less probable point estimate than the point at which either peak achieves its maximum.}.
It is worth discussing the relationship between point estimation and stacking.
When the point estimator of redshift is equal to the true redshift, stacking delta function \pzpdf s will indeed lead to an accurate recovery of the true redshift distribution function.
However, stacking is in general applied indiscriminately to broader \pzpdf s and imperfect point estimators of redshift.
It is for these reasons that alternatives are considered here.

A final estimator of the hyperparameters is the maximum marginalized likelihood estimator (MMLE), the value of $\ndphi$ that maximizes the log-posterior given by \Eq{eqn:final}, which may be found with any optimization code.
The MMLE can be obtained in substantially less time than is needed to accumulate enough samples to characterize the full log-posterior distribution of \nz.
However, the MMLE yields only a point estimate of \nz\ rather than characterizing the full log-posterior on $\ndphi$, and it does not escape the dependence on the choice of hyperprior distribution.
Furthermore, derivatives will not in general be available for the full posterior distribution, restricting the optimization methods that can be used, and, as is true for any optimization code, there is a risk of numerical instability.
%\begin{equation}
%\label{eq:mmle}
%\ln[p(\{\vec{d}_{j}\}|\vec{\theta})] \propto -\int\ f_{\vec{\theta}}(z)\
%dz+\sum_{j=1}^{J}\ln\left[\int\
%\exp\left[\ln[p(z_{j}|\vec{d}_{j},\vec{\theta}^{*})]+\ln[f_{\vec{\theta}}(z)]-\l
%n[f_{\vec{\theta}^{*}}(z)]\right]\ dz\right],
%\end{equation}
%accessible with any optimization code.

\subsubsection{Performance metrics}
\label{sec:diag}

The results of the computation described in \Sect{sec:exp} are evaluated for accuracy on the basis of several quantitative measures.
Beyond visual inspection of samples, we calculate summary statistics to quantitatively compare different estimators' precision and accuracy.
Since the marginal distributions of the MCMC samples of each hyperparameter are approximately Gaussian, we can quantify the breadth of the distribution for each hyperparameter using the standard deviation, regardless of whether the true values are known.
In simulated cases where the true parameter values are known, we calculate the Kullback-Leibler divergence (KLD), given by
\begin{equation}
\label{eqn:kl}
KL_{\ndphi,\ndphi^{\dagger}} = \integral{\pr{z \gvn \ndphi} \ln \left[ \frac{\pr{z \gvn \ndphi}}{\pr{z \gvn \ndphi^{\dagger}}} \right]}{z} ,
\end{equation}
which measures a distance from parameter values $\ndphi$ to true parameter values $\ndphi^{\dagger}$.
The KLD is a measure of the information loss, in units of nats, due to using $\ndphi$ to approximate the true $\ndphi^{\dagger}$ when it is known.
A detailed exploration of the KLD may be found in the Appendix to \citet{malz_approximating_2018}.
%We note that $KL_{\ndphi,\ndphi^{\ddagger}} \neq KL_{\ndphi^{\ddagger},\ndphi}$ and is only interpretable when there is a notion that $\ndphi^{\dagger}$ is closer to the truth than $\ndphi$.
%In simulated tests, $\ndphi^{\dagger}$ is the true value and $\ndphi'$ is that produced by one of the methods in question.
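For concreteness, the following sketch (again \python\ with only \texttt{numpy}; the binned inputs are hypothetical) constructs the histogram of modes, the stacked estimator, and the KLD of \Eq{eqn:kl} on a shared redshift grid; it is a simplified stand-in for the corresponding \chippr\ utilities, with all estimates normalized to unit integral before comparison.
\begin{verbatim}
import numpy as np

def histogram_of_modes(pzips, dz):
    """MMAP-style estimator: normalized histogram of per-galaxy modes (Eq. mmap)."""
    modes = np.argmax(pzips, axis=1)    # index of the max-posterior bin per galaxy
    counts = np.bincount(modes, minlength=pzips.shape[1]).astype(float)
    return counts / np.dot(counts, dz)

def stacked_estimator(pzips, dz):
    """Stacked estimator (Eq. stacked): average of interim posteriors, normalized."""
    stack = pzips.mean(axis=0)
    return stack / np.dot(stack, dz)

def kld(n_est, n_true, dz, tiny=1.e-300):
    """Kullback-Leibler divergence of Eq. (kl), in nats, from an estimate to the truth."""
    n_est = np.clip(n_est, tiny, None)
    n_true = np.clip(n_true, tiny, None)
    return np.dot(n_est * np.log(n_est / n_true), dz)
\end{verbatim}
Here \texttt{pzips} is a hypothetical $(J, K)$ array of binned \pzip s and \texttt{dz} the corresponding bin widths.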
%\aim{TODO: Also include the mean redshift/$\Delta_{z}$ for each estimator of \nz, cite the DES/KiDS/HSC papers that motivate this.}

\subsection{Validation on mock data}
\label{sec:mock}

\begin{figure*}
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/chippr/flowchart.pdf}
\caption{
% \que{Change ``true joint model'' to true forward model, condition on $\phi^{\dagger}$, add footnote explaining distinction between dagger, star, prime, etc., check that derivation matches notation}
A flow chart illustrating the forward model used to generate mock data in the validation of \Chippr, as described in \Sect{sec:forward}.
Ovals indicate a quantity that must be chosen in order to generate the data, rectangles indicate an operation we perform, and rounded rectangles indicate a quantity created by the forward model.
Arrows indicate the inputs and outputs of each operation performed to simulate mock \pzip\ catalogs.
}
\label{fig:flowchart}
\end{center}
\end{figure*}

We compare the results of \Chippr\ to those of stacking and the histogram of \pzip\ maxima (modes) on mock data in the form of catalogs of emulated \pzip s generated via the forward model discussed in \Sect{sec:forward}.
\Fig{fig:flowchart} illustrates the implementation of the forward model, defined by the much simpler \Fig{fig:pgm}, used for validating the method presented here.
The irony of a simple model and complex validation procedure is not lost on the authors.

\Fig{fig:flowchart} outlines the four phases of the generative model, which uses a total of three inputs.
The experimental design requires our choice of true values $\phi^{\dagger}$ of the hyperparameters governing \nz, a \pz\ model $\pr{z, \data}$ defining the space of redshift and photometry, and prior values $\phi^{*}$ of the hyperparameters of \nz.
In the first phase, we sample $J = 10^{4}$ redshifts $z_{j}^{\dagger} \sim \pr{z \gvn \phi^{\dagger}}$.
In the second phase, we evaluate the \pz\ model at those redshifts, yielding a set of $J$ likelihoods $\pr{\data \gvn z_{j}^{\dagger}}$, from which we then sample data $\data_{j}^{\dagger} \sim \pr{\data \gvn z_{j}^{\dagger}}$ for each galaxy.
In the third phase, we evaluate the \pz\ model at that data to obtain $J$ posteriors $\pr{z \gvn \data_{j}^{\dagger}}$.
In the fourth phase, we convolve the posteriors with the chosen prior $\pr{z \gvn \phi^{*}}$, yielding implicit posteriors $\pr{z \gvn \data_{j}^{\dagger}, \phi^{*}}$.
The true redshift distribution used in these tests is a particular instance of a gamma distribution
\begin{equation}
\label{eqn:gamma}
n^{\dagger}(z) = \frac{1}{2 c_{z}} \left(\frac{z}{c_{z}}\right)^{2}\ \exp\left[-\frac{z}{c_{z}}\right]
\end{equation}
with $c_{z} = 0.3$, because it has been used in forecasting studies for \des\ and \lsst.
%\aim{I learned this from talking to people and don't know of a published source that talks about the nitty gritty of the internal validation tests performed before there was data.}

The mock data emulate the three sources of error of highest concern to the \pz\ community that are explored in detail later in this section: intrinsic scatter (\Sect{sec:scatter}), catastrophic outliers (\Sect{sec:outliers}), and canonical bias (\Sect{sec:bias}).
\Fig{fig:mega_scatter} illustrates these three effects simultaneously at the tolerance of \lsst\ for demonstrative purposes, harking back to Figure~\ref{fig:pedagogical_scatter}.
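A minimal sketch of the first two phases of this forward model (assuming only \texttt{numpy}; the specific parameter values and the treatment of the data as an observed-redshift-like quantity are simplifying assumptions, not the exact \chippr\ implementation) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
J, c_z = 10_000, 0.3

# Phase 1: true redshifts from the n(z) of Eq. (gamma), a Gamma(shape=3, scale=c_z) density
z_true = rng.gamma(shape=3.0, scale=c_z, size=J)

# Phase 2: mock data (an observed-redshift-like quantity standing in for the photometry)
sigma_z, outlier_frac, delta_z = 0.02, 0.1, 0.003   # hypothetical levels of the three effects
z_obs = z_true + delta_z * (1.0 + z_true)                     # canonical bias
z_obs += sigma_z * (1.0 + z_true) * rng.standard_normal(J)    # intrinsic scatter (may be held constant)
is_outlier = rng.random(J) < outlier_frac                     # uniformly distributed outliers
z_obs[is_outlier] = rng.uniform(z_true.min(), z_true.max(), is_outlier.sum())
\end{verbatim}
Phases three and four, evaluating the \pz\ model at the mock data and applying the implicit prior, then proceed as in \Fig{fig:flowchart}.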
\begin{figure}%[H] \begin{center} \includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst-mega_scatter.png} \caption{ The joint probability space of true and estimated redshift for the three concerning \pz\ systematics at the level of the \lsst\ requirements: intrinsic scatter, uniformly distributed catastrophic outliers, and bias. The main panel shows samples (black points) in the space of mock data and redshift, akin to the standard scatterplots of true and estimated redshift, the $z_{\mathrm{spec}} = z_{\mathrm{phot}}$ diagonal (gray line), and posterior probabilities evaluated at the given estimated redshift (colored step functions). The insets show marginal histograms (light gray) in each dimension, that can be compared with the true \nz\ used to make the figure (black) to see the effect of these systematics, as well as the implicit prior (dark gray). % \aim{TODO: Include slice further up so we can see outliers. % Label panel.} } \label{fig:mega_scatter} \end{center} \end{figure} The hyperprior distribution chosen for these tests is a multivariate normal distribution with mean $\vec{\mu}$ equal to the implicit prior $\ndphi^{*}$ and covariance \begin{equation} \label{eqn:priorcov} \Sigma_{k,k'} = q\ \exp[-\frac{e}{2}\ (\bar{z}_{k}-\bar{z}_{k'})^{2}]\ +\ t\delta(k,k') \end{equation} inspired by one used in Gaussian processes, where $k$ and $k'$ are indices ranging from $1$ to $K$ and $q=1.0$, $e=100.0$, and $t=q\cdot10^{-5}$ are constants chosen to permit draws from this prior distribution to produce shapes similar to that of a true $\tilde{\ndphi}$. We adapt the full log-posterior of \Eq{eqn:final} to the chosen binning of redshift space. %An example of such samples from the prior are shown in \Fig{fig:prior}. %\que{Should I add back the figure of prior samples?} The sampler is initialized with $W=100$ walkers each with a value chosen from a Gaussian distribution of identity covariance around a sample from the hyperprior distribution. \section{Results} \label{sec:alldata} Here, we compare the results of the \Chippr\ methodology with those of established \nz\ estimators under the three traditional measures of \pz\ uncertainty one at a time: \Sect{sec:scatter} concerns the redshift-dependent intrinsic scatter, \Sect{sec:outliers} concerns realistically complex catastrophic outlier populations, and \Sect{sec:bias} concerns the canonical bias in the mean redshift. \subsection{Intrinsic scatter} \label{sec:scatter} %Several factors contribute to photometric redshifts' intrinsic scatter. %Distant galaxies are dimmer compared to galaxies of identical luminosity that are closer, driving up photometric errors in flux-limited surveys. %The nature of the galaxy sample at higher redshifts also changes, meaning the generation of the photometric redshift posterior based on an a locally-calibrated SED template library or spectroscopically-confirmed training set is more likely to be inappropriate, leading to broader features. %In general, the galaxies that could not have been observed spectroscopically will have different and noisier photo-$z$ likelihoods than those that could fall into a spectroscopic training set (or spectroscopically derived template library). %This effect may be stronger for high-redshift galaxies. \Fig{fig:pzs-scatter} shows some examples of \pzpdf s generated with only the systematic of intrinsic scatter, at the level of the \lsst\ requirements on the left and twice that on the right. 
One can see that the histogram of redshift estimates is broader than that of true redshifts, and that the effect is substantially more pronounced by just doubling the intrinsic scatter from the level of the \lsst\ requirements. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{figures/chippr/single_varsigmas-mega_scatter.png} \includegraphics[width=0.45\textwidth]{figures/chippr/thesis_hivarsig-mega_scatter.png} \caption{ Examples of mock \pzpdf s generated with intrinsic scatter at the \lsst\ requirements (left) and twice the \lsst\ requirements (right), including samples from the probability space of true and observed redshift (black points), \pzpdf s (colored step functions), and the true redshifts of the example \pzpdf s (colored vertical lines). A histogram (light gray) of points in each dimension is shown in the respective inset, with the true redshift distribution (black) and implicit prior (dark gray). % \aim{TODO: Label panels. % Show the mean redshift for each estimator.} } \label{fig:pzs-scatter} \end{center} \end{figure*} \Fig{fig:results-scatter} shows the \nz\ recovered by \Chippr\ and the alternative approaches. As expected, the estimates of \nz\ based on the modes of the \pzpdf s and stacking are broader than the marginalized maximum likelihood estimator from \chippr, with more broadening as the intrinsic scatter increases. \Chippr's \mmle\ is robust to intrinsic scatter and is unaffected by increased intrinsic scatter, though the \Chippr\ posterior distribution on the redshift distribution is itself broader for the higher intrinsic scatter case than for the \lsst\ requirements. The broadening of the alternative estimators corresponds to a loss of 3-4 times as many nats of information about \nz\ for the \lsst\ requirements relative to the \mmle\ of \Chippr. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{figures/chippr/single_varsigmas_log_estimators.png} \includegraphics[width=0.45\textwidth]{figures/chippr/thesis_hivarsig_log_estimators.png} \caption{ The results of \Chippr\ (samples in light blue and optimization in dark blue) and the alternative approaches (the stacked estimator in red and the histogram of modes in yellow) on \pzpdf s with intrinsic scatter of the \lsst\ requirements (left) and twice that (right), with the true redshift density (black curve) and implicit prior (gray curve). \Chippr\ is robust to intrinsic scatter, but the alternatives suffer from overly broad \nz\ estimates that worsen with increasing intrinsic scatter. % \aim{TODO: Label panels. % Show the mean redshift for each estimator.} } \label{fig:results-scatter} \end{center} \end{figure*} \subsection{Catastrophic outliers} \label{sec:outliers} As was covered in \Sect{sec:intro}, catastrophic outliers tend to be distributed non-uniformly across the space of observed and true redshift. However, the \lsst\ requirements do not specify details for a distribution of outliers to which they were tuned, and it is still instructive to examine the impact of uniform outliers on the inference of \nz, so we begin by addressing uniformly distributed outliers before considering more realistic outlier distributions. A uniformly distributed population of outliers was simulated by giving every sample in true redshift a $10\%$ chance of having an observed redshift drawn from a uniform distribution rather than the Gaussian about the true redshift. 
Though this results in slightly less than a $10\%$ catastrophic outlier rate, it can be implemented independently of the definition of the standard deviation, so it was adopted for demonstrative purposes.
\Fig{fig:uniform-outliers} shows examples of \pzpdf s from a uniformly distributed outlier population at the level of the \lsst\ requirements (top) as well as the results of \Chippr\ and other \nz\ estimation methods (bottom).
In order to isolate the effect of outliers, the intrinsic scatter of the tests in this section does not increase with redshift as indicated in Table~\ref{tab:lsstsrd} but is instead held constant at $\sigma_{z} = 0.02$.

\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/chippr/single_uout-mega_scatter.png}\\
\includegraphics[width=0.45\textwidth]{figures/chippr/single_uout_log_estimators.png}
\caption{
Top: Examples of \pzpdf s with a uniformly distributed catastrophic outlier population at the level of the \lsst\ requirements, including samples from the probability space of true and observed redshift (black points), \pzpdf s (colored step functions), and the true redshifts of the example \pzpdf s (colored vertical lines), with marginal histograms (light gray) for each dimension, alongside the true redshift distribution (black) and implicit prior (dark gray), in the insets.
% \aim{TODO: Include slice further up.
% Label panels.}
Bottom: The results of \Chippr\ (samples in light blue, optimization in dark blue) and the alternative approaches (the stacked estimator in red, the histogram of modes in yellow) on \pzpdf s with uniformly distributed catastrophic outliers, with the true redshift density (black curve) and implicit prior (gray curve).
% \aim{TODO: Label panels.
% Show the mean redshift for each estimator.}
The presence of the catastrophic outlier population broadens the histogram of modes and stacked estimator of the redshift distribution, but the result of \Chippr\ is unbiased.
}
\label{fig:uniform-outliers}
\end{center}
\end{figure}

\Fig{fig:uniform-outliers} shows that at the level of the \lsst\ requirements, the alternative estimators are overly broad, whereas \Chippr's \mmle\ yields an unbiased estimate of \nz.
Further, the result of stacking is even broader than that of the histogram of modes, corresponding to ten times the information loss of \Chippr's \mmle, making it worse than the most naive reduction of \pzpdf s to point estimates.

What comes to mind when one thinks of the \pzpdf s of catastrophic outliers, however, is multimodality: reducing such \pzpdf s to point estimates to make a standard scatterplot of true and observed redshifts places substantial probability density off the diagonal.
These coordinated catastrophic outliers may be emulated in the joint probability space of true and estimated redshifts by using a mixture of the unbiased diagonal defined by the intrinsic scatter and an additional Gaussian in one dimension, with constant observed redshift for a template-fitting code and constant true redshift for a machine learning code.
In the case of a catastrophic outlier population like that anticipated of template-fitting codes, $10\%$ of all galaxies have their observed redshift at a particular value unrelated to their true redshift, as illustrated in the left panel of \Fig{fig:nonuniform-outliers-data}.
This case is subject to the same caveat as the uniformly distributed outliers when it comes to the \lsst\ requirements.
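A minimal sketch of how such a coordinated outlier population might be emulated (assuming \texttt{numpy}; the fixed outlier redshift, rate, and scatter are hypothetical values rather than those of the actual tests) is:
\begin{verbatim}
import numpy as np

def add_template_like_outliers(z_obs, rng, z_fixed=1.2, rate=0.1, sigma=0.02):
    """Template-fitting-like outliers: a random fraction of galaxies is assigned
    an observed redshift near one fixed value, regardless of true redshift.
    (The machine-learning-like case instead selects galaxies whose *true*
    redshift lies near a fixed value and scatters their observed redshifts broadly.)"""
    z_out = z_obs.copy()
    hit = rng.random(z_obs.size) < rate
    z_out[hit] = z_fixed + sigma * rng.standard_normal(hit.sum())
    return z_out
\end{verbatim}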
It is less straightforward to emulate catastrophic outliers like those anticipated of a machine learning code, those that are truly multimodal.
The testing conditions here, illustrated in the right panel of \Fig{fig:nonuniform-outliers-data}, give $10\%$ of galaxies at the redshift affected by outliers an observed redshift that is uniformly distributed relative to the true redshift, meaning that far fewer than $10\%$ of all galaxies in the sample are catastrophic outliers.

\begin{figure*}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_eout-mega_scatter.png}
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_rout-mega_scatter.png}
\caption{
Examples of \pzpdf s with a catastrophic outlier population like that seen in template-fitting \pzpdf\ codes (left) and machine learning \pzpdf\ codes (right), including samples from the probability space of true and observed redshift (black points), \pzpdf s (colored step functions), and the true redshifts of the example \pzpdf s (colored vertical lines), with marginal histograms (light gray) for each dimension, alongside the true redshift distribution (black) and implicit prior (dark gray), in the insets.
% \aim{TODO: Label panels.
% Include slices higher up.}
}
\label{fig:nonuniform-outliers-data}
\end{center}
\end{figure*}

The results of \Chippr\ and the alternative estimators of \nz\ are presented in \Fig{fig:nonuniform-outliers-results}.
The most striking feature is that the histogram of modes is highly sensitive to both outlier populations, producing a severe overestimate in the case of an outlier population like those seen in template-fitting codes and a severe underestimate in the case of an outlier population like those seen in machine learning codes, corresponding to a twenty-fold loss of information compared to the \Chippr\ \mmle\ in both cases.
The effect on the stacked estimator of \nz\ is more subtle though still concerning.
In the case of outliers like those resulting from template-fitting, the stacked estimator is overly broad even without realistic intrinsic scatter, resulting in ten times the information loss compared to the \Chippr\ \mmle, and in the case of outliers like those resulting from machine learning, the stacked estimator features an overestimate at the redshift affected by the outlier population, resulting in about five times the information loss relative to the \Chippr\ \mmle.
The \Chippr\ \mmle, however, appears unbiased and withstands these effects, and the breadth of the distribution of samples of \nz\ is invariant.

\begin{figure*}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_eout_log_estimators.png}
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_rout_log_estimators.png}
\caption{
The results of \Chippr\ (samples in light blue and optimization in dark blue) and the alternative approaches (the stacked estimator in red, the histogram of modes in yellow) on \pzpdf s with catastrophic outliers like those seen in template-fitting \pzpdf\ codes (left) and machine learning \pzpdf\ codes (right) at the level of the \lsst\ requirements, with the true redshift density (black curve) and implicit prior (gray curve).
Though the histogram of modes is most sensitive to a catastrophic outlier population, the stacked estimator also overestimates \nz\ at the redshift affected by the outliers (machine learning-like outliers) and beyond it (template fitting-like outliers).
% \aim{TODO: Label panels.
% Show the mean redshift for each estimator.}
}
\label{fig:nonuniform-outliers-results}
\end{center}
\end{figure*}

\subsection{Canonical bias}
\label{sec:bias}

Systematic bias in \pz\ point estimates is a concern for \lsst's cosmology results, for the same reasons explored in \citet{hoyle_dark_2018}.
This form of bias is typically summarized by a shift parameter $\Delta_{z} = \langle z \rangle_{\hat{\ndphi}} - \langle z \rangle_{\ndphi^{\dagger}}$, representing the difference between the first moment of the estimated redshift density function and that of the true redshift density function.
To distinguish this common form of bias from the other manifestations of bias mentioned above, we refer to $\Delta_{z}$ as the \textit{canonical bias}.
In the context of \pzpdf s, the canonical bias represents an instance of model misspecification.
Consider that if the canonical bias were included in the framework of Figure~\ref{fig:pedagogical_scatter}, it could be trivially modeled out by the simple linear transformation $z_{\mathrm{phot}} \to z_{\mathrm{phot}} - \Delta_{z} (1 + z_{\mathrm{phot}})$ of the $(z_{\mathrm{spec}}, z_{\mathrm{phot}})$ space.
Regardless, for completeness, a test at ten times the canonical bias of the \lsst\ requirements, with neither redshift-dependent intrinsic scatter nor catastrophic outliers, is provided in \Fig{fig:bias}.

\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_neghivarbias-mega_scatter.png}\\
\includegraphics[width=0.45\textwidth]{figures/chippr/thesis_neghivarbias_log_estimators.png}
\caption{
Top: Examples of \pzpdf s with ten times the bias of the \lsst\ requirements, including samples from the probability space of true and observed redshift (black points), \pzpdf s (colored step functions), and the true redshifts of the example \pzpdf s (colored vertical lines), with marginal histograms (light gray) for each dimension, alongside the true redshift distribution (black) and implicit prior (dark gray), in the insets.
% \aim{TODO: Include slice further up.
% Label panels.}
Bottom: The results of \Chippr\ (samples in light blue, optimization in dark blue) and the alternative approaches (the stacked estimator in red, the histogram of modes in yellow) on \pzpdf s with ten times the bias of the \lsst\ requirements, with the true redshift density (black curve) and implicit prior (gray curve).
% \aim{TODO: Label panel.
% Show the mean redshift for each estimator.}
The impact of bias at even ten times the level of the \lsst\ requirements is almost imperceptible for all estimators, though the \Chippr\ \mmle\ minimizes the information loss regardless.
}
\label{fig:bias}
\end{center}
\end{figure}

As expected based on self-consistency of the forward-modeled \pzpdf s, \Chippr\ is immune to linear bias of the form of $\Delta_{z}$.
Furthermore, the alternative estimators are only weakly affected, with information loss two and four times greater than that of the \Chippr\ \mmle\ for the histogram of modes and stacked estimator, respectively.
(This general robustness suggests that the canonical bias may not be the most relevant measure of the performance of estimators of \nz.)

\section{Discussion}
\label{sec:results}

The experiments of \Sect{sec:alldata} quantify the influence of each canonical type of \pz\ error, in isolation, on each estimator of \nz.
Now, we stress-test \Chippr\ by exploring the impact of the implicit prior, which has thus far not received much attention in the literature.
%two realistically complex cases, one in which the \nz\ estimates are made tomographically as in a modern cosmological analysis (\Sect{sec:lsstdemo}) and one \Sect{sec:interim} demonstrates the sensitivity of \nz\ estimation methods to realistically complex implicit priors, and \Sect{sec:violations} demonstrates the consequences of mischaracterization of the implicit prior used to generate the \pzip\ catalog. These results provide compelling motivation for the \pz\ community to prioritize the study of implicit priors of existing and developing \pzpdf\ techniques. %\que{Add back the results of the LSST requirements here?} %\subsection{LSST Requirements} %\label{sec:lsstdemo} % %It is of interest to explore the impact of incorrectly estimated \nz\ on the cosmological inference to answer the question of how wrong we will be in our understanding of the universe if we incorrectly constrain \nz. %To test the impact of these uncertainties, we simulate mock data with all three effects with which \lsst\ is concerned at the levels of Table~\ref{tab:lsstsrd} and propagate the results of \Chippr\ and the other estimators to a Fisher matrix forecast using \cosmolike\ \citep{krause_cosmolike_2017}, a publicly available cosmological forecasting code. % %\begin{figure} % \begin{center} % \includegraphics[width=0.45\textwidth]{figures/chippr/cosmolike_inputs.png} % \caption{ % The \lsst-like tomographic binning and true redshift distribution, where the truth (solid) is a PDF evaluated on a fine grid of $350$ redshifts $0.0101 < z < 3.5001$, and the binned (dashed) and drawn (dotted) \nz\ are piecewise constant functions evaluated in $35$ evenly spaced bins, for four different tomographic bins (colors). % } % \label{fig:tomobins} % \end{center} %\end{figure} % %\dwh{We consider as ground truth a set of known \nz\ corresponding to each of four hypothetical samples of galaxies and the corresponding cosmological parameter covariance matrix. %The \nz\ of each galaxy subsample emulates that anticipated of galaxies binned by a redshift point estimate, as is common in tomographic redshift analyses, though our experimental procedure is agnostic to how the samples are identified. %The cosmological parameter covariance matrices are those used for \desc\ forecasting with the ground truth \nz\ in the same four bins.} %The true \nz\ in each pre-defined bin is already provided in the form of an evaluation of a function on a fine grid of $350$ redshifts $0.0101 < z < 3.5001$. % %First, we bin them down to a piecewise constant parameterization with a manageable $35$ hyperparameters for \chippr's sampling capabilities. %Next, we draw $10^{4}$ true redshifts from the binned true \nz\ for each tomographic bin. %The original, binned, and drawn \nz\ are shown in \Fig{fig:tomobins}. %We emulate \pzpdf s for the $10^{4}$ true redshifts drawn from the true \nz\ in each bin using the procedure of \Fig{fig:flowchart} with all three effects of Table~\ref{tab:lsstsrd} %at their given levels. %Illustrations of this process are provided in \Fig{fig:per-bin-scatter}. 
% %\begin{figure*} % \begin{center} % \includegraphics[width=0.24\textwidth]{figures/chippr/0single_lsst_mega_scatter.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/1single_lsst_mega_scatter.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/2single_lsst_mega_scatter.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/3single_lsst_mega_scatter.png} % \caption{As in \Fig{fig:mega_scatter}, with a different tomographic bin in each panel and the three effects of intrinsic scatter, uniformly distributed catastrophic outliers, and bias at the levels of the \lsst\ SRD, given in Table~\ref{tab:lsstsrd}. % \aim{TODO: Make this one big plot instead of four little ones to eliminate repeated insets and legend. % Enlarge axis labels. % Label panels. % Also, add watermark of ``mock data'' in UL corner, same for ``results of inference'' on other kind of plot. % Show the mean redshift for each estimator of N(z), cite the DES papers that motivate this.} % } % \label{fig:per-bin-scatter} % \end{center} %\end{figure*} % %\que{Is the distinction between binned samples determined by some observational property and probabilistic redshift distributions sufficiently clear?} % %We then make a point estimate of \nz\ using \chippr's \mmle\ optimization option as well as the alternative methods on the \pzpdf\ catalog for each tomographic bin, shown in \Fig{fig:per-bin-ests}, because \cosmolike\ produces cosmology constraints from a single \nz\ result, rather than samples from the full posterior probability density of possible \nz. %Note that \Fig{fig:per-bin-ests} is shown in linear rather than log probability units, unlike all other plots in this paper, to better show the behavior at low probability. %The excessive breadth of the alternative estimators can be seen quite plainly. % %\begin{figure*} % \begin{center} % \includegraphics[width=0.24\textwidth]{figures/chippr/0single_lsst_lin_estimators.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/1single_lsst_lin_estimators.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/2single_lsst_lin_estimators.png} % \includegraphics[width=0.24\textwidth]{figures/chippr/3single_lsst_lin_estimators.png} % \caption{ % The \chippr-derived and other estimators of \nz\ in each tomographic bin, with the true \nz\ (black), the implicit prior (gray), stacked estimator (red), histogram of modes (yellow), and \Chippr\ \mmle\ (blue). % The result of stacking is far too broad for \lsst-like \pzpdf s, even moreso than the simplistic histogram of modes. % \aim{TODO: Make this one big plot instead of four little ones to eliminate repeated axis labels and legend. % Label figure/panels. % Also, add watermark of ``results of inference'' in UL corner, same for ``mock data'' on other kind of plot. % Show the mean redshift for each estimator of N(z), cite the DES papers that motivate this.} % } % \label{fig:per-bin-ests} % \end{center} %\end{figure*} % %We then use the different estimators of \nz\ in a cosmological forecasting procedure with \cosmolike, constraining $\Omega_{m}$, $\Omega_{b}$, $w_{a}$, $w_{0}$, $n_{s}$, $S_{8}$, and $H_{0}$. %Though there are also slight differences in the angle of the error ellipses, the most striking effect is the broadening of the contours under the alternative estimators relative to \Chippr, which are almost indistinguishable from those derived by using the true redshift distribution in each bin. 
%The stacked estimator is significantly worse than the \Chippr\ \mmle\ for all parameters except $\Omega_{b}$ and $H_{0}$. %Stacking, however, outperforms the histogram of modes for all parameters except $\Omega_{m}$ and $S_{8}$, for which their constraints are quite similar. %Though the true values of the parameters themselves were not accessible with the Fisher matrix-based framework, we calculate the seven-dimensional KLD for the three \nz\ estimators relative to the constraints derived from the true \nz, showing that \Chippr\ preserves information $200-800$ times better than the alternatives, with the histogram of modes doing about four times better than stacking. %\que{So a Fisher matrix analysis inherently can't provide the bias, because that requires data. %Because we have no data, only posteriors conditioned on hypothetical data, this isn't possible. %Do you think it's sufficient to provide the bias on the moments of \nz, since that's what everyone uses anyway?} % %\begin{figure*} % \begin{center} % \includegraphics[width=0.9\textwidth]{figures/chippr/final_plot.png} % \caption{ % \que{Are the contours any easier to see now?} % The result of propagating the estimators of \nz\ by stacking (red), the histogram of modes (yellow), \Chippr\ (blue), and the true \nz\ (black) of \Fig{fig:per-bin-ests} to a subset of cosmological parameters. % For all parameters considered, \Chippr\ yields contours no broader than those corresponding to the true \nz, whereas for most parameters, stacking and the histogram of modes yield broader contours. % \aim{TODO: Fix formatting of axis labels. % Standardize ticklabels.} % } % \label{fig:cornerplot} % \end{center} %\end{figure*} % %\que{Does this discussion adequately quantify how good \Nz\ has to be and how wrong we'll be if we estimate it wrong? % Suggestions for how to better establish context would be appreciated.} \subsection{Realistically complex implicit prior} \label{sec:interim} \chippr\ can handle any implicit prior with support over the redshift range where \nz\ is defined, but some archetypes of implicit prior are more likely to be encountered in the wilds of \pzip\ codes. Ideally, an uninformative implicit prior would be used, although it may be complicated to compute from the covariances of the raw data. Template-fitting codes have an explicit prior input formed by redshifting a small number of templates, leading to a highly nonuniform but physically-motivated interim prior. %Another potential method for selecting an interim prior with support over the entire redshift range expected of the photometric survey is to sum two or more $N(z)$ distributions obtained from reliable photometric surveys in the past. %This is just as problematic as using a biased spectroscopically derived $N(z)$ as the interim prior because the sum of redshift distributions for two or more surveys does not reflect our beliefs about the true distribution for a single survey even though it provides support over the same redshift range. %To simulate this case, we choose an interim prior with more weight at high and low redshifts than for mid-range redshifts. Machine learning approaches tend to be trained on previously observed data sets that are biased towards low redshift, which biases the implicit prior towards low redshift. 
Some efforts have been made to modify an observationally informed implicit prior so that it is more representative of the photometric data for which redshifts are desired \citep{sheldon_photometric_2012}, but, unless it is equal to the true \nz, it will propagate into the results of traditional \nz\ estimation methods.
%Because low-redshift galaxies are more likely to be bright enough to be observed by such a survey, $N(z)$ determined from that sample may be heavily biased to low redshift galaxies.
%By contrast, the galaxies that were unobserved in such a survey are more likely be dimmer, making them more likely to be at higher redshifts.
%Since the interim prior is not compatible with our beliefs about the true redshift distribution, the resulting interim redshift posteriors will be inappropriate.

\Fig{fig:pzs-priors} shows examples of \pzip s with a low-redshift-favoring implicit prior emulating that of a machine learning approach to \pz\ estimation (left panel) and a more complex interim prior emulating that of a template-fitting \pz\ method (right panel).
One can see that the \pzip s take different shapes from one another even though the marginal histograms of the points are identical.
The machine learning-like implicit prior has been modified to have a nonzero value at high redshift because the implicit prior must be strictly positive for the \Chippr\ model to be valid.

\begin{figure*}
\begin{center}
\includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_trpr-mega_scatter.png}
\includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_tmpr-mega_scatter.png}
\caption{
Examples of mock \pzip s generated with a machine learning-like implicit prior (left) and a template-fitting-like implicit prior (right), including samples from the probability space of true and observed redshift (black points), \pzip s (colored step functions), and the true redshifts of the example \pzip s (colored vertical lines).
A histogram (light gray) of points in each dimension is shown in the respective inset, with the true redshift distribution (black) and implicit prior (dark gray).
% \aim{TODO: Label panels.
% Include slices higher up.}
}
\label{fig:pzs-priors}
\end{center}
\end{figure*}

\Fig{fig:results-priors} shows the performance of \Chippr\ and the traditional methods on \pzip s generated with nontrivial implicit priors.
In both cases, the \Chippr\ \mmle\ effectively recovers the true redshift distribution, and the distribution of \nz\ parameter values reflects higher uncertainty where the implicit prior undergoes large changes in derivative.
The alternatives, on the other hand, are biased by the implicit prior except where it is flat, as at high redshifts for the machine learning-like implicit prior, resulting in over $1,000$ times the information loss on \nz\ for the machine learning-like implicit prior and some $5$--$20$ times the information loss for the template fitting-like implicit prior, relative to the \Chippr\ \mmle.
\begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_trpr_log_estimators.png} \includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_tmpr_log_estimators.png} \caption {The results of \Chippr\ (samples in light blue and optimization in dark blue) and the alternative approaches (the stacked estimator in red and the histogram of modes in yellow) on \pzip s with an implicit prior like that of machine learning \pzip\ approaches (left) and an implicit prior like that of template-fitting \pzip\ codes (right), with the true redshift density (black curve) and implicit prior (gray curve). \Chippr\ is robust to a nontrivial implicit prior, but the alternatives are biased toward the implicit prior. % \aim{TODO: Label panels. % Show the mean redshift for each estimator.} } \label{fig:results-priors} \end{center} \end{figure*} The main implication of the response of \nz\ estimates to a nontrivial implicit prior is that the implicit prior must be accounted for when using \pzip\ catalogs. \subsection{Violations of the model} \label{sec:violations} In this test, the \pzip s are made to the \lsst\ requirements but the implicit prior used for the inference is not the same as the implicit prior used for generating the data. \Pzpdf\ codes do not generally provide their implicit prior, with the exception of some template-fitting techniques for which it is a known input. If we naively used the \pzip\ catalog produced by a generic machine learning or template-fitting code and assumed a flat implicit prior, we would observe the contents of \Fig{fig:mischaracterized}. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_trpr_wrong_log_estimators.png} \includegraphics[width=0.45\textwidth]{figures/chippr/single_lsst_tmpr_wrong_log_estimators.png} \caption {The results of \Chippr\ (samples in light blue, optimization in dark blue) and the alternative approaches (the stacked estimator in red, the histogram of modes in yellow) when run with an incorrectly specified implicit prior (gray curve). The data upon which each panel's results are based are provided in Figure~\ref{fig:pzs-priors}, where the left corresponds to the sort of implicit prior anticipated of machine learning approaches and the right corresponds to an implicit prior like that of a template-fitting code. Here, \Chippr\ has been provided with a uniform implicit prior rather than those used to produce the mock \pzip s, and its performance is notably worse than when it is provided an accurate implicit prior, as in Figure~\ref{fig:results-priors}. % \aim{Label panels. % Show the mean redshift for each estimator.} When the incorrect implicit prior is provided to \chippr, even Bayesian inference cannot recover the true \nz. } \label{fig:mischaracterized} \end{center} \end{figure*} The results of using a mischaracterized implicit prior are disastrous, causing every estimator, including \Chippr, to be strongly biased. The stacked estimator and histogram of modes don't make use of the implicit prior so do no worse than when the implicit prior is accurately provided, but \Chippr\ is sensitive to prior misspecification, which violates the model upon which it is based. It is thus crucial that \pzip\ methods always characterize and provide the implicit prior. 
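To make the role of the implicit prior concrete, the following sketch (assuming \texttt{numpy}; all inputs are hypothetical toy quantities) applies the correction of \Eq{eqn:cancel} to a single binned \pzip\ using the correct implicit prior and a naively assumed flat one:
\begin{verbatim}
import numpy as np

z_mid = np.linspace(0.05, 0.95, 10)           # bin midpoints
dz = np.full_like(z_mid, 0.1)

implicit_prior = np.exp(-z_mid / 0.1)         # a steep, low-redshift-favoring interim prior
implicit_prior /= np.dot(implicit_prior, dz)
likelihood = np.exp(-0.5 * ((z_mid - 0.65) / 0.2) ** 2)   # toy p(d|z) peaked at z = 0.65

pzip = likelihood * implicit_prior            # interim posterior p(z|d, phi*)
pzip /= np.dot(pzip, dz)

recovered_right = pzip / implicit_prior       # Eq. (cancel) with the correct interim prior
recovered_wrong = pzip / np.ones_like(z_mid)  # naively assuming a flat interim prior
# recovered_right is proportional to the true likelihood; recovered_wrong is not.
print(z_mid[np.argmax(likelihood)],           # 0.65
      z_mid[np.argmax(recovered_right)],      # 0.65
      z_mid[np.argmax(recovered_wrong)])      # 0.25, dragged low by the uncorrected prior
\end{verbatim}
Dividing by the wrong prior leaves the implicit prior's preference imprinted on every term of \Eq{eqn:final}, which is the failure mode seen in \Fig{fig:mischaracterized}.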
% removed ancient investigation on real data, would be hard to redo with the new \chippr\ code on this timescale given that I haven't even looked at the data format in 4 years

\section{Conclusion}
\label{sec:con}
%\que{TODO: Break up conclusion into subsections of \Sect{sec:alldata}/\Sect{sec:results}?}

This study derives and demonstrates a mathematically consistent inference of a one-point statistic, the redshift density function \nz, based on an arbitrary catalog of \pzpdf s.
The fully Bayesian \Chippr\ model, grounded in the fundamental laws of probability, begins with a probabilistic graphical model corresponding to equations for the full posterior distribution over the parameters for \nz.
The \Chippr\ model is implemented in the publicly available \chippr\ code and validated on mock data.
% at the level of \nz\ as well as in terms of constraining power on the cosmological parameters.
%In the tests on simulated data performed here, the full posterior distribution over the hyperparameters defining $N(z)$ derived by this method is consistent with the true redshift distribution function, making the mean of sampled values an excellent point estimator of $N(z)$.
%The information contained in the full posterior distribution's shape convey the traditional error bar information without having to explicitly propagate any error estimates.
%The results of those tests is summarized below and in \Tab{tab:kld}, where lower values indicate a closer match between the true $N(z)$ and the estimator.
%Tests were also performed on subsets of BOSS DR10 data with results consistent with those of simulations.

Using a flexible, self-consistent forward model of the relationship between true and estimated redshifts, capable of encapsulating the complexity of observed redshift-photometry relations (e.g., \Fig{fig:pedagogical_scatter}), we emulate the canonical \pz\ error statistics of intrinsic scatter (\Sect{sec:scatter}), catastrophic outliers (\Sect{sec:outliers}), and canonical bias (\Sect{sec:bias}), one at a time.
Though these test cases may appear overly simplistic, they enable rigorous quantification of the relative performance of each \nz\ estimation technique under the controlled conditions of each type of error in isolation, at levels equal to and beyond those of \lsst.
%\aim{TODO: point out that Fig 1 is uglier than 4, 5, 7, 8, 10, 11; these tests are simplistic, reality is more complex, but because \Chippr is provably correct, it can handle complexity of the real thing if p(z) is correct and captures complexity}
Based on our tests, the following statements about the \Chippr\ methodology may be made with confidence:
\begin{itemize}
\item \Chippr\ outperforms traditional estimators of \nz\ under realistically complex conditions, even at pessimistic levels relative to future survey requirements on the traditional \pz\ error statistics, as demonstrated both by eye and according to KLD values corresponding to $10\%$ of the information loss of alternative methods.
%\aim{TODO: Refer to quantitative results on KLD of \Nz.}
\item Both the \Chippr\ \mmle\ and the mean of \chippr\ samples are good point estimators of \nz, whereas the histogram of modes is very sensitive to outliers and the stacked estimator is always excessively broad.
\item The error bars on the posterior distribution over \nz\ hyperparameters are interpretable and arise naturally under \Chippr, unlike those that may be assumed for the conventional point estimators.
\end{itemize} Not only is \Chippr\ the only mathematically correct approach to the problem, it also recovers the true values of the hyperparameters defining \nz\ better than popular alternatives, as measured by the loss of information in \nz. % \ and the size of error ellipses in the space of cosmological parameters. However, the mathematically valid approach to inference with probabilistic data products incurs nontrivial computational expense, motivating future work to optimize the implementation. Additionally, this work highlights a crucial and almost entirely overlooked complication to the usage of \pzpdf s, namely the implicit prior, motivating the following recommendations: \begin{itemize} \item In the presence of a nontrivial implicit prior corresponding to the specifics of the architecture of the method by which \pzpdf s are obtained, established methods cannot recover \nz; a principled hierarchical inference such as \Chippr\ is the only way to recover \nz\ from \pzpdf s. \item %\Chippr, however, is sensitive to misspecification of the implicit prior; Neither \Chippr\ nor traditional alternatives can recover \nz\ in the presence of a misspecified implicit prior; the implicit prior used to produce the \pzpdf\ catalog must be known and provided to \Chippr\ in order to recover the true \nz. \end{itemize} Given the significance of the implicit prior \citep{schmidt_evaluation_2020}, it is therefore imperative that those developing codes to obtain \pzpdf s provide a way to isolate the implicit prior and that those publishing \pzpdf\ catalogs provide the implicit prior to users. This mandate is easier said than done, both for template fitting and machine learning approaches. While the implicit prior is often an explicit input to model-based routines, it may be defined in a space of redshift and SED templates. In this case, it may not be possible to apply \Chippr\ without marginalizing over additional variables $\psi$ for the SEDs. In other words, obtaining the implicit prior from a template fitting code may be challenging or even require consideration of higher-dimensional PDFs such as $\pr{z, \mathrm{SED} \gvn \psi^{*}}$. The situation appears more dire for data-driven techniques, whose training sets may not straightforwardly translate into an implicit prior. For example, some training set galaxies may contribute to the \pzpdf s more than others, resulting in different effective weights when factoring into, say, a histogram of training set redshifts as the implicit prior. Additionally, the weights may be stochastic, depending on the random seed used to initialize non-deterministic methods, precluding reproducibility. It is thus unclear whether the implicit prior can be meaningfully obtained from such methods at all. %\aim{TODO: new paragraph for what can go wrong, % call to community for what to do about it, % what aspects of implicit prior are knowable and not knowable? % if testable how so? outside the scope of paper, data producers be warned! % a likelihood is better -- give us that if you can! % focus on methods that acknowledge probabilistic structure of problem.} A thorough investigation of the degree to which the implicit prior can be meaningfully obtained is outside this paper but should be a priority for all consumers of \pzpdf s. As an alternative, however, we must point out that if likelihoods were available rather than posteriors, the trouble with the implicit prior would be avoided altogether. 
We thus encourage the community of those making \pzpdf s to consider developing such methods so that the resulting data products may be correctly used in scientific inference more generically. %\aim{TODO: Claim that implications for tomographic binning are severe, if that can be motivated by mean \Nz shifts.} % by Section~\ref{sec:lsstdemo}.} %\subsection*{Recommendations for future work} %The following conclusions and recommendations can be made with confidence: %\begin{enumerate} % % % \item The marginalized maximum likelihood estimator is an excellent estimator for strongly featured redshift distribution function with simple, clean photo-$z$ posteriors; stacking smooths features more than sampling and photo-$z$ point estimation. % \item When the implicit prior is known to be a poor match to the data, only the results of \Chippr\ are satisfactory estimators of the redshift distribution function because they are the only methods that can account for the bias induced on the \pzpdf\ catalog by the method that produces it; this is the most compelling case for the sampler because of the ubiquity of inappropriate interim priors. %\end{enumerate} By showing that \Chippr\ is effective in recovering the true redshift distribution function and posterior distributions on its parameters from catalogs of \pzpdf s, this work supports the production of \pzpdf s by upcoming photometric surveys such as \lsst\ to enable more accurate inference of the cosmological parameters. We discourage researchers from co-adding \pzpdf s or converting them into point estimates of redshift and instead recommend the use of Bayesian probability to guide the usage of \pzpdf s. We emphasize to those who produce \pzpdf s from data that it is essential to release the implicit prior used in generating this data product in order for any valid inference to be conducted by consumers of this information. Methodologies for obtaining \pzpdf s must therefore be designed such that there is a known implicit prior, i.e. one that is not implicit at all, so that likelihoods may be recovered. The technique herein developed is applicable with minimal modification to other one-point statistics of redshift to which we will apply this method in the future, such as the redshift-dependent luminosity function and weak lensing mean distance ratio. Future work will also include the extension of this fully probabilistic approach to higher-order statistics of redshift such as the two-point correlation function. \begin{acknowledgements} AIM acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research. During the completion of this work, AIM was supported by National Science Foundation grant AST-1517237 and the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists, Office of Science Graduate Student Research (SCGSR) program, administered by the Oak Ridge Institute for Science and Education for the DOE under contract number DE‐SC0014664. The authors thank Phil Marshall for advice on relevant examples, Elisabeth Krause for assistance with the \cosmolike\ code, Mohammadjavad Vakili for statistical insights, Geoffrey Ryan for programming advice, and Boris Leistedt for other helpful comments in the development of \Chippr. 
% The authors also acknowledge
% \aim{TODO: Circulate draft to GCCL; submit to ApJ; circulate to Dan Foreman-Mackey, Boris Leistedt, Kate Storey-Fisher; post to arXiv; circulate to Johann Cohen-Tanugi, Will Hartley, Alan Heavens, Mike Jarvis, Francois Lanusse, Ann Lee, Rachel Mandelbaum, Phil Marshall, Chris Morrison, Jeff Newman, Sam Schmidt, Anze Slosar, Josh Speagle, others for feedback.}
This work was completed with generous nutritional support from the Center for Computational Astrophysics.
% \aim{TODO: Thank thesis readers.}
\end{acknowledgements}
%\aim{TODO: add software citation section.}

\appendix
%\renewcommand{\thesection}{\Alph{section}}
%\renewcommand{\thesubsection}{\Alph{subsection}}
\numberwithin{equation}{section}

\section{Derivation}
\label{app:math}

%We begin by parametrizing $N(z)$ in terms of $\vec{\theta}$, comprising some set of hyperparameters that define the form $N(z)$ may take in whatever basis we choose.
%We define a function $f_{\vec{\theta}}(z)=N(z)$ that transforms these hyperparameters into the redshift distribution function $N(z)$.
%Because
%\begin{equation}
%\eqlabel{eq:definition}
%N(z) \propto p(z \gvn \vec{\theta}),
%\end{equation}
%we may discontinue discussion of $N(z)$ in favor of the likelihood $p(z|\vec{\theta})$.
We perform the derivation of \Eq{eqn:fullpost} using log-probabilities.
What we wish to estimate is then the full log-posterior probability distribution (hereafter the full log-posterior) of the hyperparameters $\ndphi$ given the catalog of photometry $\{\data_{j}\}$.
By Bayes' Rule, the full log-posterior
\begin{equation}
\label{eqn:basicbayes}
\ln[\pr{\ndphi \gvn \{\data_{j}\}}] = \ln[\pr{\{\data_{j}\} \gvn \ndphi}] + \ln[\pr{\ndphi}] - \ln[\pr{\{\data_{j}\}}]
\end{equation}
may be expressed in terms of the full log-likelihood probability distribution (hereafter the full log-likelihood) $\ln[\pr{\{\data_{j}\} \gvn \ndphi}]$ by way of a hyperprior log-probability distribution (hereafter the hyperprior) $\ln[\pr{\ndphi}]$ over the hyperparameters and the log-evidence probability of the data $\ln[\pr{\{\data_{j}\}}]$.
However, the evidence is rarely known, so we probe the full log-posterior modulo an unknown constant of proportionality.
The full log-likelihood may be expanded in terms of a marginalization over the redshifts as parameters, as in
\begin{equation}
\label{eqn:marginalize}
\ln[\pr{\{\data_{j}\} \gvn \ndphi}] = \ln\left[\integral{\pr{\{\data_{j}\} \gvn \{z_{j}\}} \pr{\{z_{j}\} \gvn \ndphi}}{\{z_{j}\}}\right].
\end{equation}
We shall make two assumptions of independence in order to make the problem tractable; their limitations are discussed below.
First, we take $\ln[\pr{\{\data_{j}\} \gvn \{z_{j}\}}]$ to be the sum of $J$ individual log-likelihood distribution functions $\ln[\pr{\data_{j} \gvn z_{j}}]$, as in
\begin{equation}
\label{eqn:indiedat}
\ln[\pr{\{\data_{j}\} \gvn \{z_{j}\}}] = \sum_{j=1}^{J}\ \ln[\pr{\data_{j} \gvn z_{j}}],
\end{equation}
a result of the definition of probabilistic independence encoded by the box in \Fig{fig:pgm}.
Second, we shall assume the true redshifts $\{z_{j}\}$ are $J$ independent draws from the true $\pr{z \gvn \ndphi}$.
Additionally, $J$ itself is a Poisson random variable.
The combination of these assumptions is given by
\begin{equation}
\label{eqn:indie}
\ln[\pr{\{z_{j}\} \gvn \ndphi}] = -\integral{f(z; \ndphi)}{z} + \sum_{j=1}^{J}\ \ln[\pr{z_{j} \gvn \ndphi}].
\end{equation} %It is important to note that the integral $\integral{n(z)}{z} N(z)\ dz$ is not constrained to equal the variable defining the Poisson distribution but instead $J$ by \Eq{eq:definition}, which can be thought of as another parameter. The derivation differs when $J$ is not known, say, when we want to learn about a distribution in nature rather than a distribution specific to data in hand, but for a photometric galaxy catalog where the desired quantity is $n(z)$ for the galaxies entering a larger cosmology calculation, it is a fixed quantity. A detailed discussion of this matter may be found in \citet{foreman-mackey_exoplanet_2014}. Applying Bayes' Rule, we may combine terms to obtain \begin{align} \begin{split} \label{eqn:posterior} \ln[\pr{\ndphi \gvn \{\data_{j}\}}] & \propto \ln[\pr{\ndphi}] - \integral{f(z; \ndphi)}{z} + \sum_{j=1}^{J}\ln\left[\integral{\pr{\data_{j} \gvn z} \pr{z \gvn \ndphi}}{z}\right]. \end{split} \end{align} %\Eq{eq:posterior} contains two quantities that merit further discussion, the prior distribution $p(\vec{\theta})$ discussed further in \Sect{sec:exp} and the photo-$z$ log-likelihoods $\ln[p(\vec{d}_{j}|z_{j})]$ that have not been mentioned since \Eq{eq:marginalize}. %Though photo-$z$ log-likelihoods would be desirable for use in these equations, they are not generally the product of either empirical and data-driven methods for obtaining photo-$z$ probability distributions. %Though probabilistic photo-$z$s are typically reported as generic probability distributions $p(z_{j})$, the methods that produce them may be understood to always yield posteriors, probability distributions conditioned on the data we believe to be true. %If they were not based in this assumption, they would require a sum over an infinite space of possible datasets. Since we only have access to \pzip s, we must be able to write the full log-posterior in terms of log \pzip s rather than the log-likelihoods of \Eq{eqn:posterior}. To do so, we will need an explicit statement of this implicit prior $\ndphi^{*}$ for whatever method is chosen to produce the \pzip s. To perform the necessary transformation from likelihoods to posteriors, we follow the reasoning of \citet{foreman-mackey_exoplanet_2014}. Let us consider the probability of the parameters conditioned on the data and an interim prior and rewrite the problematic likelihood of \Eq{eqn:posterior} as \begin{align} \label{eqn:trick} \begin{split} \ln[\pr{\data_{j} \gvn z}] = & \ln[\pr{\data_{j} \gvn z}] + \ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] - \ln[\pr{z \gvn \data_{j}, \ndphi^{*}}]. \end{split} \end{align} Once the implicit prior $\ndphi^{*}$ is explicitly introduced, we may expand the last term in \Eq{eqn:trick} according to Bayes' Rule to get \begin{align} \begin{split} \label{eqn:expand} \ln[\pr{\data_{j} \gvn z}] = & \ln[\pr{\data_{j} \gvn z}] + \ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] + \ln[\pr{\data_{j} \gvn \ndphi^{*}}] - \ln[\pr{z \gvn \ndphi^{*}}] - \ln[\pr{\data_{j} \gvn z, \ndphi^{*}}]. \end{split} \end{align} Because there is no direct dependence of the data upon the hyperparameters, we may again expand the term $\ln[\pr{\data_{j} \gvn z, \ndphi^{*}}]$ to obtain \begin{align} \begin{split} \label{eqn:indterm} \ln[\pr{\vec{d}_{j} \gvn z}] = & \ln[\pr{\data_{j} \gvn z}] + \ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] + \ln[\pr{\data_{j} \gvn \ndphi^{*}}] - \ln[\pr{z \gvn \ndphi^{*}}]- \ln[\pr{\data_{j} \gvn \ndphi^{*}}] - \ln[\pr{\data_{j} \gvn z}] . 
\end{split} \end{align} Canceling the undesirable terms for the inaccessible likelihood $\ln[\pr{\data_{j} \gvn z}]$ and trivial $\ln[\pr{\data_{j} \gvn \ndphi^{*}}]$ yields \begin{equation} \label{eqn:cancel} \ln[\pr{\data_{j} \gvn z}] = \ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] - \ln[\pr{z \gvn \ndphi^{*}}]. \end{equation} We put this all together to get the full log-posterior probability distribution of \begin{align} \begin{split} \label{eqn:final} \ln[\pr{\ndphi \gvn \{\data_{j}\}}] \propto & \ln[\pr{\ndphi}] + \ln \left[\integral{\exp \left[\sum_{j=1}^{J} \left(\ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] + \ln[\pr{z \gvn \ndphi}] - \ln[\pr{z \gvn \ndphi^{*}}] \right)\right]}{z}\right] , \end{split} \end{align} which is equivalent to that of \citet{hogg_inferring_2010}, though the context differs. The argument of the integral in the log-posterior of \Eq{eqn:final} depends solely on knowable quantities (and those we must explicitly assume) and can be calculated for a given sample of log \pzip s $\{\ln[\pr{z \gvn \data_{j}, \ndphi^{*}}]\}$ and the implicit prior $\pr{z \gvn \ndphi^{*}}$ with which they were obtained, noting the relation of \begin{equation} \label{eqn:params} \pr{z \gvn \ndphi} = \frac{f(z; \ndphi)}{\integral{f(z; \ndphi)}{z}}. \end{equation} Since we cannot know constant of proportionality, we sample the desired full log-posterior $\ln[\pr{\ndphi \gvn \{\data_{j}\}}]$ using Monte Carlo-Markov chain (MCMC) methods. % %\begin{align} %\begin{split} %\eqlabel{eqn:fullpost} %\ln[\pr{\ndphi \gvn \{\data_{j}\}}] & \propto \ln[\pr{\ndphi}] + \ln \left[\integral{\exp \left[\sum_{j=1}^{J} \left(\ln[\pr{z \gvn \data_{j}, \ndphi^{*}}] + \ln[\pr{z \gvn \ndphi}] - \ln[\pr{z \gvn \ndphi^{*}}]\right)\right]}{z}\right] , %\end{split} %\end{align} %\section{Convergence Criteria} %\label{app:acorr} % %\que{Cut convergence criteria section?} % %In addition to qualitative visual inspection of the chains, two quantities that probe the convergence of the sampler are used in this study, the autocorrelation time and the Gelman-Rubin convergence criterion. %%\Fig{fig:chains} shows the %evolution of the values of one parameter of one walker over the course of all %iterations of the sampler. % %%\begin{figure} %%%\includegraphics[width=0.5\textwidth]{figs/null/chain0.pdf} %%\caption{This figure shows the evolution of one walker's parameter values for %%one element of the parameter vector $\vec{\theta}$ as a function of iteration %%number, demonstrating the completion of the burn-in phase.} %%\label{fig:chains} %%\end{figure} % %The autocorrelation time is effectively a measure of the efficiency of the method and can be described as the expected number of iterations necessary to accept a new sample independent of the current accepted sample. %A sampler that converges faster will have a smaller autocorrelation time, and smaller autocorrelation times are preferable because it means fewer iterations are wasted on non-independent samples when independent samples are desired. %See \citet{foreman-mackey_emcee_2013} for a more complete exploration of the autocorrelation time. %In all tests discussed here, autocorrelation times across walkers and parameters were approximately 20, meaning two samples 20 or more iterations apart were independent, a satisfactory level of efficiency. %Low autocorrelation times are a necessary but not always sufficient convergence condition, as the autocorrelation times calculated for tests in this paper were constant across all sub-runs, even those that were obviously burning in. 
% %The Gelman-Rubin statistic %\begin{equation} %\label{eqn:gr} %R_{k} = \sqrt{\frac{(1 - \frac{2}{I_{0}}) w_{k} + \frac{2}{I_{0}} b_{k}}{w_{k}}}, %\end{equation} %a weighted sum of the mean $w_{k}$ of the variances within individual walkers' chains and the variance $b_{k}$ between chains of different walkers $m$, is calculated over each sub-run $i$ to determine the duration of the burn-in period. %Convergence is achieved when the statistic approaches unity. \bibliographystyle{apj} \bibliography{draft} %\aim{TODO: find way to cite Dance Your Ph.D. video} \end{document}
{ "alphanum_fraction": 0.7760868516, "avg_line_length": 89.9522569444, "ext": "tex", "hexsha": "49372d11155752055f41e1109bddbaabdfc39164", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-09-14T17:25:38.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-08T15:30:01.000Z", "max_forks_repo_head_hexsha": "e69960cddafbd0bfbac96d23e0f065ca6db7672f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eacharles/chippr", "max_forks_repo_path": "research/paper/draft.tex", "max_issues_count": 56, "max_issues_repo_head_hexsha": "e69960cddafbd0bfbac96d23e0f065ca6db7672f", "max_issues_repo_issues_event_max_datetime": "2020-09-22T17:26:58.000Z", "max_issues_repo_issues_event_min_datetime": "2016-12-24T00:05:31.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eacharles/chippr", "max_issues_repo_path": "research/paper/draft.tex", "max_line_length": 550, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e69960cddafbd0bfbac96d23e0f065ca6db7672f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eacharles/chippr", "max_stars_repo_path": "research/paper/draft.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-27T18:57:58.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-27T18:57:58.000Z", "num_tokens": 26163, "size": 103625 }
\chapter{BKNR Templates} \begin{figure}[htbp] \centering \includegraphics{templatesicon} \end{figure}
{ "alphanum_fraction": 0.75, "avg_line_length": 13.5, "ext": "tex", "hexsha": "d24bb42237c2ce006b2c205641e107b1e4a1ef8d", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-01-20T18:05:06.000Z", "max_forks_repo_forks_event_min_datetime": "2015-09-03T04:17:33.000Z", "max_forks_repo_head_hexsha": "3eb43346a13cd10d15149f6e8c55cb2d41a8c98d", "max_forks_repo_licenses": [ "0BSD" ], "max_forks_repo_name": "Hellseher/bknr-datastore", "max_forks_repo_path": "doc/templates.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "3eb43346a13cd10d15149f6e8c55cb2d41a8c98d", "max_issues_repo_issues_event_max_datetime": "2022-03-18T07:55:26.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-20T17:38:22.000Z", "max_issues_repo_licenses": [ "0BSD" ], "max_issues_repo_name": "Hellseher/bknr-datastore", "max_issues_repo_path": "doc/templates.tex", "max_line_length": 31, "max_stars_count": 63, "max_stars_repo_head_hexsha": "3eb43346a13cd10d15149f6e8c55cb2d41a8c98d", "max_stars_repo_licenses": [ "0BSD" ], "max_stars_repo_name": "Hellseher/bknr-datastore", "max_stars_repo_path": "doc/templates.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-24T06:50:01.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-12T17:41:24.000Z", "num_tokens": 33, "size": 108 }
\documentclass[a4paper, draft]{report} \pagestyle{headings} \author{C. Bohr} \title{Feed-forward neuronal networks - an introduction} \usepackage{amsmath,amsthm, amsfonts,amscd, amssymb, a4} \usepackage[final]{graphicx} \usepackage[final]{listings} \usepackage{bbm} \usepackage{empheq} \usepackage{caption} \usepackage{hyperref} \renewcommand\lstlistingname{Algorithm} \captionsetup[lstlisting]{singlelinecheck=false, margin=0pt, font={sf},labelsep=space,labelfont=bf} % Numbering \numberwithin{section}{chapter} \numberwithin{equation}{chapter} % Theorem environments %% \theoremstyle{plain} %% This is the default \newtheoremstyle{own} {3pt} % Space above {3pt} % Space below {\itshape} % Body font {} % Indent amount {\scshape} % Theorem head font {.} % Punctuation after theorem head {.5em} % Space after theorem head {} % Theorem head spec (can be left empty, meaning ‘normal’) \theoremstyle{own} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{ax}{Axiom}[section] %% \theoremstyle{definition} \newtheorem{defn}{Definition}[section] %% \theoremstyle{remark} \newtheorem{rem}{Remark}[section] \newtheorem*{notation}{Notation} \newtheorem{algorithm}{Algorithm}[section] \theoremstyle{remark} \newtheorem{example}{Example}[section] % Fix alignments % \setlength{\parindent}{0cm} \newcommand*\widefbox[1]{\fbox{\hspace{4em}#1\hspace{4em}}} \newcommand*\fullbox[1]{\framebox[\columnwidth]{#1}} % Math definitions % Fields \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\N}{\mathbb{N}} \newcommand{\quat}{\mathbb{H}} %Groups \newcommand{\Lo}{\mathbf{O}(3,1)} \newcommand{\SL}{\mathbf{SL}} \newcommand{\SU}{\mathbf{SU}} \newcommand{\Spin}{\mathbf{Spin}} \newcommand{\Pin}{\mathbf{Pin}} \newcommand{\SO}{\mathbf{SO}} \newcommand{\Poincare}{\mathcal{P}} \newcommand{\Poincarecov}{\widetilde{\mathcal{P}}} \newcommand{\Poincareprop}{\widetilde{\mathcal{P}}_+^{\uparrow}} \newcommand{\Aut}{\mathrm{Aut}} % Rings \newcommand{\End}{\mathrm{End}} \newcommand{\CCl}{\mathbb{C}\mathrm{l}} \newcommand{\Cl}{\mathrm{Cl}} \newcommand{\Mat}{\mathrm{Mat}} % Lie algebras \newcommand{\spin}{\mathfrak{spin}} \newcommand{\so}{\mathfrak{so}} \newcommand{\su}{\mathfrak{su}} \newcommand{\slc}{\mathfrak{sl}} %Three-vectors \newcommand{\xt}{\mathbf{x}} \newcommand{\yt}{\mathbf{y}} \newcommand{\pt}{\mathbf{p}} \newcommand{\nt}{\mathbf{n}} \newcommand{\sigmat}{\mathbf{\sigma}} % Vector spaces \newcommand{\Hil}{\mathcal{H}} % Other \newcommand{\calE}{\mathcal{E}} \newcommand{\calD}{\mathcal{D}} \newcommand{\calF}{\mathcal{F}} \newcommand{\calP}{\mathcal{P}} \newcommand{\Fock}{\mathcal{F}} \newcommand{\Op}{\mathrm{Op}} \DeclareMathOperator{\per}{per} \DeclareMathOperator{\sign}{sgn} \DeclareMathOperator{\logit}{logit} \begin{document} \maketitle \tableofcontents %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Probability %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Statistics and neuronal networks} Learning neuronal networks is hard. Of course getting a first exposure to neuronal networks and getting a first example for image recognition set up and running has become very easy thanks to the plethora of good (and not so good) tutorials available in the net and thanks to frameworks like Theano or Tensorflow. 
However, if that is not enough and you start digging deeper and trying to understand why and how neuronal networks actually work, it turns out that things quickly get much more complicated than that. You will very soon be confronted with - at times non-trivial - mathematics from branches like statistics, information theory, linear algebra and real analysis. The purpose of these notes is to guide you on that journey and to provide a bit more detail behind the usual introductory level crash courses. We will mainly focus on the area of {\em feed forward networks} used for {\em supervised learning}. Some other classes of networks like energy based models and some methods from unsupervised learning are presented in more detail on my blog \url{www.leftasexercise.com}. Most of the mathematics that we will need is somehow related to the field of mathematical statistics - let us first try to understand where this relation comes from. Many neuronal networks are designed to excel at {\em classification tasks}. As an example, suppose you wanted to design and train a neuronal network that, given data about an animal, classifies the animal as either a bird or some other animal (called a ``non-bird'' for convenience). So our starting point is a set modelling all possible objects that could be presented to the network. How exactly we model this set is not so important; more important is that in general, the network will not have access to all the data about the animal, but only to certain attributes of elements in the set called {\em features}. So there could be an attribute which we call $X_1$ that is defined as
$$
X_1 = \text{the animal can fly}
$$
taking values in $\{0,1\}$. Another data point the network could get is
$$
X_2 = \text{length of animal in cm}
$$
taking values in $\R^+$ and so forth. More generally, we assume that on the set of all possible objects, we have certain functions $X_i$ taking values in, say, the real numbers. Based on these numbers, the network will then try to decide whether a given animal is a bird or not. Thus we do not have to deal directly with our space of objects, but use the functions $X_i$ as primary objects. If the network had a chance to look at every possible animal, this would be easy, even though it would cost a lot of memory - we could simply remember all possible combinations of features and, for each combination, store the correct answer. In reality however, this does not work. Instead, we have access to a small subset of data - a {\em sample} for which we can evaluate the $X_i$. Based on this subset, we then have to derive a model which gives the right answer in as many cases as possible. Thus we try to make a statement about the full space of things that could be presented to our network for classification based on a small sample. This is where probabilities come into play very naturally. We need to assume that our sample has been chosen randomly, but still we need to make assertions about the full set. This is exactly what {\em inferential statistics} is doing. The fact that our sample is chosen randomly turns our $X_i$ into {\em random variables}. Similarly, the variable
$$
Y = \text{is a bird}
$$
taking values in $\{0,1\}$ is a random variable, and we try to gain information on the distribution of $Y$ across the full population based on its values on a given set of labelled samples, i.e. a set of samples where $Y$ is known. Thus $Y$ would represent the {\em labels} or {\em targets} in the language of neuronal networks.
Applying the methods of statistical inference to this situation would typically start by choosing a statistical model and then using estimators or hypothesis testing to make deductions. Apart from the fact that we have to derive information on the full population based on a sample, there is another reason why probabilities appear naturally in the theory of machine learning. In many cases, the available input - being a reduction of the full set of data - is not sufficient to classify the sample with full certainty. To see this, let us go back to our examples. How would you derive the property ``bird'' from the given data ``can fly'' and ``length''? Not all animals that can fly are birds - and not all birds can fly. So we have to try to distinguish for instance a butterfly from a hummingbird based on the length. The smallest hummingbird - a bee hummingbird - is about 5 cm in length. The largest known butterfly - the Queen's Alexandra birdwing - can be as long as 8 cm (both figures taken from Wikipedia). Thus our data is not sufficient to clearly distinguish butterflies and birds in all cases. However, very small birds and very large butterflies have one thing in common - they are rare. So chances are that a given animal that can fly and is larger than 5 cm is actually a bird (yes, there are bats....). In other words, if again $Y$ denotes the variable which is 1 on birds and 0 on all other animals, we can in general not hope that $Y$ is a function of the $X_i$, but we can hope that given some values of the $X_i$, the probability $P(Y=1)$ of being a bird depends on the $X_i$. In other words, using the language of {\em conditional probabilities},
$$
P(Y=1 | X = x) = f(x)
$$
with some unknown function $f$. In a Bayesian interpretation of probability, the certainty with which we can say ``this animal is a bird'' is a function of the values $x_i$ of the observable variables $X_i$. With these considerations, we now arrive at the following mathematical model for what a classification algorithm is about. We are given a probability space $(\Omega, P)$ with a vector valued random variable $X$. The attributes of a sample are described by the {\em feature vector} $X \in \R^m$ where $m$ is the number of different features. In our example, $m=2$, as we try to classify animals based on two properties. The result of the classification is described by a random variable $Y$ taking - for the simple case of a binary classification problem - values in $\{0,1\}$. We then assume that
$$
P(Y =1 | X=x) = f(x;w_0)
$$
where $f(\cdot;w)$ is a function parametrized by some parameter $w$. The actual value $w_0$ of $w$ is unknown. Based on a sample for $X$ and $Y$, we then try to {\em fit the model}, i.e. we try to find a value for $w$ such that $f(\cdot;w)$ models the actual conditional distribution of $Y$ as well as possible. Once the fitting phase is completed, we can then use the model to derive predictions about objects which are not in our initial sample set. This model sounds a bit abstract, but in the next section, we will look at a simple but very fundamental example which directly relates to a popular type of simple neuronal networks.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Logistic regression
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Logistic regression}
\label{sec:logisticregression}
In statistics, a very common problem is to make a binary valued prediction based on a set of continuous random variables.
An example which is often used as an illustration for machine learning is the distinction of two different species of iris flowers based on four different features of their blossoms, the lengths and widths of petal and sepal. Here, the outcome is a binary variable taking values in $\{0,1\}$ and we suspect that the distribution of this random variable depends somehow on the four features. Another example is a test at school which can have the outcome 1 (pass) or 0 (fail). We could measure the time that a student has invested into preparation and would assume that there is some relation between the preparation time and the outcome. If we model the outcome by a binary valued random variable $Y$ and the preparation time by a real valued random variable $X$, we would probably not believe that $X$ determines $Y$ completely, but at least that the probability to pass is a function of $X$, or more precisely that
$$
P(Y=1 | X = x)
$$
is a function of $x$. Of course this function can be very complicated, so we need a simplified model, and this is what logistic regression is doing. In general, the most fundamental relation between two quantities is a linear dependency. So a first attempt could be to assume that
$$
P(Y = 1 | X = x) = ax + b
$$
with real numbers $a,b$ (or matrices and vectors in the more general case). Obviously, this cannot be true, as a probability must be a number between $0$ and $1$. Therefore the easiest class of models we can obtain is a model where
$$
P(Y = 1 | X = x) = \Phi(ax + b)
$$
with some continuous or even smooth function $\Phi$ mapping the real axis to $(0,1)$ - and this is what logistic regression is about. Logistic regression was first studied in detail by D. Cox in \cite{Cox}. In its simplest form, we assume that we are given a sequence $Y^{1}, Y^2, \dots, Y^n$ of random variables that each take the values $0$ and $1$, representing the outcomes of a classification of a sample with $n$ objects, and a corresponding sequence $X^i, i = 1, \dots, n$ of real valued random variables, which in our example would be the preparation time of the i-th student in the sample. We will use the symbols $Y$ and $X$ to denote the vector formed by the $Y^i$ and the $X^i$. The model assumes that the $Y^i$ are conditionally independent given the $X^i$ and that there is a relation
$$
\logit P(Y^i = 1 | X = x) = \alpha + w x^i
$$
where $\logit$ is the function defined by
$$
\logit(x) = \ln \frac{x}{1-x}
$$
which maps the open interval $(0,1)$ one-to-one onto the real axis. The inverse of this function is the {\em standard logistic function} or {\em sigmoid function}
$$
\sigma(x) = \frac{e^x}{e^x + 1} = \frac{1}{1+e^{-x}},
$$
which is also called expit in some software packages. Thus
$$
P(Y^i = 1 | X = x) = \frac{e^{\alpha + w x^i}}{e^{\alpha + w x^i} + 1}
$$
which, in particular, depends only on $x^i$ (which is a very natural assumption - whether a student fails in a standardized and fair test should only depend on the preparation of that particular student). If we write
$$
p_i (x) = P(Y^i = 1 | X = x),
$$
then we see that the conditional distribution of the $Y^i$ is simply a Bernoulli distribution
$$
P(Y^i = y^i | X = x) = p_i^{y^i}(1-p_i)^{1-y^i}
$$
so that the joint distribution of the $Y^i$ is given by
$$
P(Y = y | X = x) = \prod_i p_i^{y^i}(1-p_i)^{1-y^i}
$$
Note that the $Y^i$ are not identically distributed.
In our example, the probability for a student to pass the test is not the same for all students, but it depends on the preparatory work modelled by the variable $X$, as we expect. However, the parameters $w$ and $\alpha$ are the same for all $i$ and thus across the population of all students. Let us now turn to the question of how we can estimate the unknown parameter $w$. A standard approach to estimating parameters of a probability distribution is {\em maximum likelihood estimation}. Thus, given a sample $y^i$ and $x^i$, we try to identify the values $w$ and $\alpha$ which maximize the likelihood
$$
P_{w, \alpha}(Y = y | X = x )
$$
Instead of maximizing the likelihood itself, we can also maximize the logarithm
$$
\ln P_{w,\alpha}(Y = y | X = x )
$$
By assumption, the $Y^i$ are independent given $X$. Thus
\begin{align*}
P_{w,\alpha} (Y = y | X = x) &= \prod_i P(Y^i = y^i | X = x) = \prod_i p_i^{y^i}(1-p_i)^{1-y^i}
\end{align*}
Taking the logarithm, we obtain that the log likelihood $\ln L(w, \alpha)$ is given by
$$
\ln L(w, \alpha) = \sum_i y^i \ln p_i + \sum_i (1-y^i) \ln (1-p_i)
$$
Finally, it is common to add a minus sign to obtain a {\em loss function} which we need to minimize in order to maximize the likelihood. We therefore define the loss function to be
$$
l(w, \alpha) = - \ln L(w,\alpha) = -\sum_i y^i \ln p_i - \sum_i (1-y^i) \ln (1-p_i)
$$
It is not really difficult to calculate the derivative of this function. Our starting point is the derivative of the sigmoid function, which is given by
$$
\sigma' = \sigma (1 - \sigma)
$$
This immediately implies that
\begin{align*}
\nabla \ln p_i &= (1 - p_i) x^i \\
\nabla \ln (1-p_i) &= - p_i x^i
\end{align*}
Using this, we obtain that
$$
\nabla l(w) = \sum_i (p_i - y^i) x^i
$$
Note that $p_i - y^i$ can be interpreted as the error that we make when we replace the computed probabilities by their true values found in the sample. This is a nice expression, but there is no obvious way to derive a closed expression for its zeroes. Thus in order to identify the values of $w$ and $\alpha$ that maximize the likelihood, we need a numerical optimization method. Before we present and explain the method that is commonly used for this purpose, let us generalize our model to the case of vector valued random variables $X^i$. So let us assume that $X^1, \dots, X^n$ is a sequence of $n$ vector valued random variables $X^i$ taking values in $\R^d$. The obvious generalization of our model is then a probability distribution
$$
p_i (x) = P(Y^i = 1 | X = x) = \sigma(w^t x^i + \alpha)
$$
where now $w \in \R^d$ but $\alpha \in \R$. Our log likelihood function $\ln L(w, \alpha)$ now depends on the vector $w$ and the scalar $\alpha$. It is very common to add an additional dimension to $d$ by setting $x_0 = 1$ and $w_0 = \alpha$ so that
$$
\sum_{j=1}^d w_j x_j^i + \alpha = \sum_{j=1}^d w_j x_j^i + w_0 x_0^i = \sum_{j=0}^d w_j x_j^i = w^t x^i
$$
so that it suffices to consider the case $\alpha = 0$ where we only have to optimize the parameter $w$. Our log likelihood function is now a smooth function $\R^{d+1} \rightarrow \R$ which we want to maximize. Alternatively, we could also minimize $- \ln L(w)$. So we can apply any of the available numerical methods to find a global minimum for a smooth function on $\R^{d+1}$, i.e. we are dealing with a classical (nonlinear) optimization problem with loss function $- \ln L(w)$.
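To make the formulas above concrete, the following short sketch evaluates the loss function and its gradient for a sample stored row-wise in a matrix. This is a minimal illustration only; the function and variable names are chosen ad hoc, and we assume that the bias $\alpha$ has already been absorbed into the weight vector by adding a constant feature as described above.
\begin{lstlisting}[frame=single,language=Python,caption=Sketch of the loss function and its gradient]
import numpy as np
from scipy.special import expit   # the sigmoid function

def logistic_loss_and_gradient(w, X, y):
    # probabilities p_i = sigma(w^t x^i), one entry per row of X
    p = expit(np.matmul(X, w))
    # small constant to avoid taking the logarithm of zero
    eps = 1e-12
    loss = -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # gradient sum_i (p_i - y^i) x^i as a single matrix-vector product
    grad = np.matmul(np.transpose(X), p - y)
    return loss, grad
\end{lstlisting}
A numerical optimization method only needs to evaluate such a function repeatedly; the gradient descent method discussed next is the simplest way to do this.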
A very popular method - especially in the context of neuronal networks - is the {\em gradient descent method} which is sometimes also called the method of steepest descent. To explain this, suppose we are given a smooth function
$$
f \colon \R^d \rightarrow \R
$$
and are looking for a minimum of this function. We want to do this iteratively, i.e. we start with a randomly chosen point $x_0 \in \R^d$ and want to determine a new point $x_1$ such that $f(x_1) < f(x_0)$. To do this, recall that for small values of $t$ and a unit vector $u$, we have
$$
f(x_0 + tu) \approx f(x_0) + t u \cdot \nabla f(x_0)
$$
Thus if we step away from $x_0$ in the direction $u$ with step size $t$, we obtain a point $x_1$ for which $f(x_1)$ will be minimized if we choose $u$ to point in the direction of $- \nabla f(x_0)$. This motivates the most basic version of the gradient descent algorithm.
\begin{enumerate}
\item Start with an arbitrary point $x_0$
\item Define a step size $\lambda > 0$
\item Let $x_k = x_{k-1} - \lambda \nabla f (x_{k-1})$
\item Repeat for a given number of steps or until convergence
\end{enumerate}
In practice, this algorithm does of course have a few limitations. First, the choice of the step size is critical. If we choose the step size too small, then the algorithm converges - if at all - very slowly. If the step size is too large, then the algorithm will overcompensate and the $x_k$ will zig-zag around, missing the local minimum. There are versions of the algorithm that adapt $\lambda$ dynamically, but in the most basic version $\lambda$ is fixed and needs to be chosen carefully. Second, if the algorithm converges, it might converge to a local minimum instead of a global minimum. However, the algorithm is still very popular and is - along with modifications like RMSProp or AdaGrad - widely used in implementations of neural networks. Using the gradient descent algorithm, we would now determine the value of the parameter $w$ in our model as follows. First, we choose a step size $\lambda$ and initialize $w$ to some random value $w_0$. Then, based on an existing sample $x, y$, we compute the probabilities
$$
p_i = P(Y^i = 1 | X = x) = \sigma(w_0^t x^i )
$$
Using the loss function
$$
l(w) = - \ln L(w) = - \sum_i y^i \ln p_i - \sum_i (1-y^i) \ln (1-p_i)
$$
we determine the error vector
$$
e = y - p
$$
and the gradient
$$
\nabla l(w_0) = - \sum_i e_i x^i
$$
at $w_0$. We then set
$$
w_1 = w_0 - \lambda \nabla l(w_0) = w_0 + \lambda \sum_i e_i x^i
$$
and repeat the process at $w_1$ to determine $w_2$ and so forth. This is usually repeated for a fixed number of steps. Note that the loss function depends on all sample points, so we run through the full sample in every iteration, which is computationally expensive. A version of the algorithm which only uses a small - typically random - subset of the full sample is known as {\em stochastic gradient descent}. The size of this subset is called the {\em batch size}. In the special case that the batch size is one, i.e. we update $w$ based on only one data point, the algorithm is called {\em online gradient descent}, because it is well suited for online applications where the parameters of the model are updated based on individual requests or messages. Note that to update the weights, we only need the error term, not the value of the loss function itself.
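Before turning to the implementation for our model, the following minimal sketch shows the basic iteration itself on a simple quadratic function; the function names and the example are made up purely for illustration and are not part of the regression model.
\begin{lstlisting}[frame=single,language=Python,caption=Basic gradient descent with a fixed step size]
import numpy as np

def gradient_descent(grad_f, x0, step_size=0.1, steps=100):
    # iterate x_k = x_{k-1} - lambda * grad f(x_{k-1})
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - step_size * grad_f(x)
    return x

# Example: f(x, y) = (x - 1)^2 + 4 y^2 with gradient (2 (x - 1), 8 y);
# the iteration converges towards the minimum at (1, 0)
grad = lambda v: np.array([2.0 * (v[0] - 1.0), 8.0 * v[1]])
print(gradient_descent(grad, [5.0, 3.0]))
\end{lstlisting}
In this example, any step size larger than $0.25$ already makes the second coordinate diverge, which illustrates how sensitive the method is to the choice of $\lambda$.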
The computation of the error terms for all elements in the sample can easily be expressed as a matrix operation, which speeds up the calculation on hardware that supports highly parallel matrix operations, like modern GPUs. In addition, this makes the algorithm very easy to implement. The code snippet below shows the core part of the training algorithm written in Python and using the numpy package, where we assume that the sample feature vectors have been stored in a matrix $X$ (one row corresponding to one sample point), and the labels of the sample have been stored in a vector $labels$ with one entry per sample point. The weights are stored in a vector $w$, and the {\em learning rate} is the parameter $\lambda$, i.e. the step-size in the gradient descent algorithm. Finally, the {\em bias} which was called $\alpha$ in our treatment of the logistic regression is called $b$ here, following the usual conventions.
\begin{lstlisting}[frame=single,language=Python,caption=Implementation of the training phase in Python]
import numpy as np
from scipy.special import expit

for step in range(epochs):
    #
    # First we compute the probabilities p_i. The activation is
    # given by the matrix-vector product X w plus the bias b,
    # and expit is the sigmoid function
    #
    p = expit(np.matmul(X, w) + b)
    #
    # Next we compute the error
    error = labels - p
    #
    # finally we update the weights and the bias
    #
    w = w + learning_rate * np.matmul(error, X)
    b = b + learning_rate * np.sum(error)
\end{lstlisting}
The weights and the bias are typically initialized to random values, for instance by drawing from a random normal distribution. To test the algorithm, we apply it to a typical use case which we already mentioned - the {\em Iris data set}. This is a historical data set with 150 records which captures four features (sepal length, sepal width, petal length and petal width) of flowers and assigns the samples to one of three species. In our tests, we have used only two species, i.e. the first hundred records of the set, and two features (sepal length and petal length). Then we ran a training phase with 15 epochs and visualized the results graphically. In all examples, $\lambda = 0.01$ was used as the learning rate.
\includegraphics[scale=0.47]{LogisticRegressionIris.png}
In the left diagram, we have plotted the norm and the mean of the error vector after a given number of epochs. We see that the network converges already after a few epochs, and the average error becomes very small. The norm of the error vector, however, is always different from zero as the $p_i$ are never exactly $1$ or $0$. Instead, when asking the network to classify a sample, we assign the sample to class $1$ if $p > 0.5$ and to class $0$ otherwise. On the right hand side, the result is displayed. Each item represents one sample with its features sepal length and petal length. The color indicates the class to which the sample has been assigned after the training has been completed. We see that the logistic regression model is able to correctly detect the two clusters which can obviously be linearly separated. Obviously, logistic regression performs less satisfactorily if the underlying distribution does not allow a clear separation of the training set into two classes. In the following example, a training set of 500 samples has been created which consists of data points drawn from two clusters, both following a normal distribution in two dimensions with covariance matrix
$$
\begin{pmatrix} 0.7 & 0 \\ 0 & 0.7 \end{pmatrix}
$$
with mean values $(6.5, 4.5)$ and $(5,
1.5)$, and the samples have been labeled according to the cluster from which they have been sampled.
\includegraphics[scale=0.47]{LogisticRegressionSample.png}
As we can see in the picture on the right hand side, these clusters overlap significantly. The model is trying to separate the clusters along the line where blue and yellow cluster points overlap, but - as we can see from the metrics on the left hand side - the errors oscillate and the network does not converge. In fact, there are even phases where the error goes up again for a few epochs.
\chapter{Entropy and Kullback-Leibler divergence}\label{sec:entropy-and-kullback-leibler-diverergence}
Let us now relate this statistical model to a simple neuronal network with a sigmoidal activation function. Specifically, consider a neuronal network with one input layer consisting of $d$ units and an output layer consisting of one unit only. If we denote the activations of the units in the input layer by $x_i$, the activation of the output unit is given by
$$
z = \sum_i w_i x_i = w^t x
$$
with a weight vector $w \in \R^d$. We then apply the sigmoid function as an activation function, i.e. the output of the unit is
$$
\sigma(z) = \frac{1}{1+e^{-z}}
$$
This is already very close to our logistic regression model discussed in the last section. A given activation of the input units is a $d$-dimensional vector which is a point $x^i$ in a sample, and we can identify the resulting output of the network as
$$
P(Y = 1 | X = x)
$$
in the regression model. We can now understand why such a network is used as a binary classifier. The inputs represent features of a sample - in our initial example, these would be the features of being able to fly and the body length - and the output approximates the probability that the object at hand belongs to a certain class, birds in our example. During training, the weights are adapted such that the output comes as close to the actual distribution of $Y$ as possible, so that when we apply the network to a new object which has not been part of the sample, the output is still a good approximation to the value of $Y$ at this new point.
\includegraphics[scale=.4]{BirdClassifier.png}
But how do we train the model? Typically a neuronal network is trained by defining a loss function $l$ which depends on the weights $w$ and the {\em labels}, i.e. the given sample $y^i$, and by iteratively adjusting the weights using a method like gradient descent to minimize the loss function. A frequently used loss function is the {\em cross entropy}, and we will now study how the cross entropy is defined and why the resulting loss function is actually nothing other than the negative log likelihood function from our logistic regression model. Let us first recall a few definitions. To avoid some technicalities, we will restrict ourselves to the case of discrete random variables, which is not a real restriction in the applications to neural networks as any quantity calculated by a machine with finite word length has a finite and therefore discrete range. We refer the reader to standard textbooks in statistics and probability theory for more details, for instance \cite{Schervish}, section 2.3.2 or \cite{LehmannRomano}, section 11.2. The first definition that we will need is the Kullback-Leibler divergence.
So let us assume that we are given two random variables $X$ and $X'$ with the same range ${\mathcal X}$, described by probability mass functions
$$
f(x) = P(X = x)
$$
and
$$
f'(x) = P(X' = x)
$$
\begin{defn}
We define the Kullback-Leibler divergence or relative entropy $D_{KL}(X,X')$ as the expectation of the logarithmic difference of $f$ and $f'$, where the expectation is taken with respect to the distribution of $X$, i.e.
$$
D_{KL}(X, X') = \sum_{x \in {\mathcal X}} f(x) \ln \frac{f(x)}{f'(x)} = {\mathbb{E}}_{P(X)} \ln \frac{P(X = x)}{P(X' = x)}
$$
\end{defn}
This definition assumes that $f$ is absolutely continuous with respect to $f'$, i.e. that $f'(x)=0$ implies $f(x)=0$, so that the corresponding summand is zero. There is also a conditional version of this definition which is as follows.
\begin{defn}
Suppose that $X$ and $X'$ are random variables. We then define the conditional Kullback-Leibler divergence of $X$ and $X'$ given a third discrete random variable $T$ to be
$$
D_{KL}(X, X' | T) = \sum_{t} P(T=t) \sum_{x \in {\mathcal X}} P(X = x | T = t) \ln \frac{P(X = x | T = t)}{P(X' = x | T = t)}
$$
\end{defn}
Note that this is actually an expectation value - for each value $t$, we determine the Kullback-Leibler divergence between $X$ conditioned on $T = t$ and $X'$ conditioned on $T = t$, and then we take the average over all values of $t$, i.e. the expectation value. Observing that
$$
P(X = x | T = t) P(T = t) = P(T = t, X = x)
$$
this can also be written as the expectation value
$$
\sum_{x,t} P(X = x, T = t) \ln \frac{P(X = x | T = t)}{P(X' = x | T = t)} = \mathbb{E}_{P(T,X)} \ln \frac{p_{X}(x | t)}{p_{X'}(x | t)}
$$
where we denote by $p_X(\cdot | t)$ the probability mass function for $X$ conditioned on $T = t$, similarly for $X'$, and by $P(T,X)$ the joint probability distribution of $T$ and $X$. This in turn is the same as
$$
\sum_{x,t} P(X = x, T = t) \ln \frac{P(X = x , T = t)}{P(X' = x , T = t)} = \mathbb{E}_{P(T,X)} \ln \frac{p_{(X,T)}(x,t)}{p_{(X',T)}(x,t)}
$$
In other words, the conditional Kullback-Leibler divergence between $X$ and $X'$ conditioned on $T$ is the same as the Kullback-Leibler divergence between the joint distributions $(X,T)$ and $(X',T)$. This is only true because the distribution $T$ on which we condition is the same on both sides; in a more general setting with two different distributions $T$ and $T'$, we would get an additional term $D_{KL}(T,T')$, a fact which is known as the chain rule for relative entropy. It is well known that the Kullback-Leibler divergence between two probability distributions is always non-negative and is zero if and only if the two distributions agree almost surely. Thus, in a certain sense, the Kullback-Leibler divergence is a measure for the distance of two distributions in an abstract space of all probability distributions. Note, however, that this is not a real metric, as it is not symmetric. We will now investigate a second quantity, called the {\em cross entropy}, which is often used to express the distance between two probability distributions, and investigate how it relates to the Kullback-Leibler divergence. Recall that for a discrete random variable $X$ taking values in some finite set $\mathcal X$, we define the {\em entropy} of $X$ to be
$$
H(X) = {\mathbb{E}}_{P(X)} \ln \frac{1}{P(X = x)} = - \sum_x P(X = x) \ln P(X = x)
$$
This quantity was introduced in \cite{Shannon}, which is considered to be the starting point of modern information theory.
In this paper, Shannon argues that, up to a constant, the entropy is the only measure of the uncertainty that we have about the outcome of a random event which satisfies certain reasonable axioms for such a measure. For instance, $H(X) \geq 0$ and $H(X) = 0$ only if $X$ is atomic, i.e. $P(X = x) = 1$ for some $x \in {\mathcal X}$. In addition, the entropy is maximized by a uniform distribution for which all outcomes have the same probability. This is in accordance with our intuition that the uniform distribution is the distribution about whose outcome we have the least prior information. The entropy is additive, i.e. if $X$ and $Y$ are discrete and independent random variables and $U=(X,Y)$ the joint distribution, we have
$$
H(U) = H(X) + H(Y)
$$
Thus if we observe two independent events, the information we gain from observing $X$ and $Y$ is the sum of the information that we gain from $X$ and $Y$ alone. Again, there is a conditional version of entropy. The idea is the same as for the Kullback-Leibler divergence - we determine the entropy for each of the conditional distributions and then average over the values of the random variable on which we condition. Thus we get the following
\begin{defn}
Suppose that $X$ and $T$ are discrete random variables. We define the conditional entropy of $X$ conditioned on $T$ to be
$$
H(X | T) = - \sum_t P(T = t) \sum_x P(X = x | T = t) \ln P(X = x | T = t)
$$
\end{defn}
Note that using $P(X = x | T = t) P(T=t) = P(X = x, T = t)$, we can write the conditional entropy equally well as
$$
H(X | T) = - \mathbb{E}_{P(X,T)} \ln P(X = x | T = t)
$$
where $P(X,T)$ denotes the joint distribution of $X$ and $T$. It is also straightforward to show that
$$
H(U) = H(X | Y) + H(Y)
$$
where again $U = (X,Y)$ is the joint distribution of $X$ and $Y$. Thus the conditional entropy given $Y$ measures how much information we gain from observing $X$ after already having observed $Y$. Combining this with our earlier result on the entropy of a joint distribution, we see in particular that $H(X | Y) = H(X)$ if $X$ and $Y$ are independent. Let us now investigate how the Kullback-Leibler divergence is related to the entropy. So suppose that we are given two discrete random variables $X$ and $Y$ described by probability mass functions $p_X$ and $p_Y$. Then the Kullback-Leibler divergence is
$$
D_{KL}(X,Y) = \sum_a p_X(a) \ln \frac{p_X(a)}{p_Y(a)}
$$
Obviously, this can be written as
$$
D_{KL}(X,Y) = \sum_a p_X(a) \ln p_X(a) - \sum_a p_X(a) \ln p_Y(a)
$$
The first term is easily recognized as $-H(X)$. Thus
$$
D_{KL}(X,Y) + H(X) = - \sum_a p_X(a) \ln p_Y(a)
$$
This quantity is called the {\em cross entropy} between $X$ and $Y$.
\begin{defn}
Suppose that $X$ and $Y$ are discrete random variables with a common range ${\mathcal A}$. Then we call the quantity
$$
H(X,Y) = - \sum_{a \in {\mathcal A}} P(X = a) \ln P(Y = a)
$$
the cross entropy between $X$ and $Y$.
\end{defn}
We have claimed above that the cross entropy is another measure for the distance between $X$ and $Y$. In fact, we have seen above that
$$
H(X,Y) = H(X) + D_{KL}(X,Y)
$$
Thus for a fixed random variable $X$, the Kullback-Leibler divergence and the cross entropy differ only by a constant term, namely the entropy of $X$. Thus they both measure to what extent $Y$ differs from $X$. Of course, we can also replace the Kullback-Leibler divergence and the entropy by their conditional versions to obtain a {\em conditional cross entropy}.
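Since all of these quantities are finite sums, the relation $H(X,Y) = H(X) + D_{KL}(X,Y)$ is easy to verify numerically. The following minimal sketch does this for two small discrete distributions; the function names are chosen for illustration only, and strictly positive probability mass functions are assumed so that no logarithm of zero occurs.
\begin{lstlisting}[frame=single,language=Python,caption={Entropy, Kullback-Leibler divergence and cross entropy}]
import numpy as np

def entropy(p):
    # H(X) = - sum_x p(x) ln p(x)
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    # D_KL(X, Y) = sum_x p(x) ln (p(x) / q(x))
    return np.sum(p * np.log(p / q))

def cross_entropy(p, q):
    # H(X, Y) = - sum_x p(x) ln q(x)
    return -np.sum(p * np.log(q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
# both numbers agree, illustrating H(X, Y) = H(X) + D_KL(X, Y)
print(cross_entropy(p, q), entropy(p) + kl_divergence(p, q))
\end{lstlisting}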
After these preparations, let us now come back to the logistic regression model and try to understand the relation between Kullback-Leibler divergence, loss function and cross entropy. In fact, we will study a slightly more general model. We assume that $X^i$ and $Y^i$ are discrete random variables, with $X^i$ taking values in some subset ${\mathcal X}$ of $\R^d$ and $Y^i$ taking values in some subset ${\mathcal Y} \subset \R^s$, where $i$ ranges from $1$ to the sample size $N$. We let
$$
U^i = (X^i,Y^i)
$$
denote the combined random variables taking values in $\R^{d+s}$ and assume that the $U^i$ are independent and identically distributed. Further, we let $X = (X^1, X^2, \dots, X^N)$ be the random variable with values in $\R^{Nd}$ obtained by combining the $X^i$ into a single variable, and similarly for $Y$ and $U$. Recall that if $X$ and $Y$ are independent, then $f(X)$ and $g(Y)$ are independent for any two measurable functions $f$ and $g$. Applying this to projections, we see that the $X^i$ are mutually independent and similarly the $Y^i$ are mutually independent. We do, however, not assume that $X^i$ and $Y^i$ are independent. Instead, we assume a relationship
$$
P(Y^i = y | X^i = x) = f(x,y ; \Theta_0)
$$
for a function $f$ depending on $u = (x,y)$ which is parametrizable via a parameter $\Theta$. In the case of logistic regression, we have ${\mathcal Y} = \{0,1\}$ and the function $f$ is given by
$$
f(x,y; \Theta) = p^{y} (1-p)^{1-y}
$$
with $p = p(x; \Theta) = \sigma(\Theta^t x)$. However, our model is more general and can describe a neuronal network with $d$ input units and $s$ output units if we allow more general functions $f$, with $\Theta$ representing the weights of the neural network. The sample $U^i$ is then a model for the training set, which we assume to be drawn from the distribution corresponding to some parameter $\Theta_0$, and during training, we try to iteratively determine $\Theta$ to be as close to $\Theta_0$ as possible. Let us now try to write down the likelihood function in this model. So suppose that $u^1, \dots, u^N$ is a realization of the $U^i$, i.e. a given actual sample, which we write again as $u^i = (x^i, y^i)$. The $y^i$ are the labels and the $x^i$ the features of the i-th point in the sample. The full sample is then represented by vectors $y$ and $x$ and the conditional likelihood of $\Theta$ given the input data is given by
\begin{align*}
L(\Theta) = P(Y =y | X = x ; \Theta) &= \frac{P(U = u)}{P(X =x)} \\
&= \frac{\prod_i P(U^i = u^i)}{\prod_i P(X^i = x^i)} \\
&= \prod_i \frac{P(U^i = u^i)}{P(X^i = x^i)} \\
&= \prod_i P(Y^i = y^i | X^i = x^i) = \prod_i f(x^i,y^i;\Theta)
\end{align*}
Maximizing the likelihood therefore amounts to minimizing the {\em loss function}
$$
l(\Theta) = - \frac{1}{N} \ln L(\Theta) = - \frac{1}{N} \sum_i \ln f(x^i,y^i ; \Theta)
$$
where the reason for the factor $\frac{1}{N}$ will become clear soon. Now we can define random variables $F_i$ by
$$
F_i = \ln f(X^i, Y^i ; \Theta) = \ln f(U^i ; \Theta)
$$
As the $U^i$ are assumed to be independent and identically distributed, the same is true for the $F_i$. We can therefore apply the law of large numbers to conclude that
\begin{align}\label{eq:expectationvaluelossdistribution}
\lim_{N \rightarrow \infty} l(\Theta) = - {\mathbb{E}_{P(U;\Theta_0)}} \ln f(u;\Theta)
\end{align}
where $P(U;\Theta_0)$ denotes the probability distribution from which the sample $U^i$ was drawn, i.e. the real distribution corresponding to the real parameter $\Theta_0$.
But this expectation value can be written as
\begin{align*}
- {\mathbb{E}_{P(U;\Theta_0)}} \ln f(u;\Theta) &= {\mathbb{E}_{P(U;\Theta_0)}} \ln \frac{1}{ f(u;\Theta)} \\
&= {\mathbb{E}_{P(U;\Theta_0)}} \ln \frac{f(u;\Theta_0)}{f(u;\Theta)} - {\mathbb{E}_{P(U;\Theta_0)}} \ln f(u;\Theta_0)
\end{align*}
The first term is easily identified as the conditional Kullback-Leibler divergence between the distributions of $Y$ given $X$ belonging to the parameters $\Theta_0$ and $\Theta$. The second term is the conditional entropy of the distribution belonging to the parameter $\Theta_0$. Thus we have found that {\em for a sufficiently large sample, the negative log likelihood function is - up to a constant - approximately equal to the conditional cross entropy between the real distribution from which the sample was drawn and the distribution given by $\Theta$. Therefore minimizing the cross entropy is essentially the same as maximizing the conditional likelihood. } Thus we have reconciled the approach to the logistic regression model via a maximum likelihood estimator with the approach that is typically presented in introductions to neuronal networks and based on the principle of minimizing the cross-entropy - the outcome will be the same. Popular software packages for working with neuronal networks like Theano (\cite{Theano}) or TensorFlow (\cite{Tensorflow}) offer support for calculating the cross entropy for a logistic distribution and also support automatic differentiation to calculate gradients, so that a gradient descent approach can be applied to iteratively determine a minimum of the loss function - and this is exactly what happens during the training phase of a neuronal network. There is another interpretation of the maximum likelihood estimator in terms of cross entropy which is worth mentioning. So far we have tried to minimize the distance of the distribution corresponding to $\Theta$ from the real distribution, which we implicitly assumed to be given by some parameter $\Theta_0$. What happens if we drop this assumption? We then need to measure the distance to some other distribution that we consider as being the ``real'' distribution from which the $Y^i$ are sampled. An obvious candidate for this is the distribution which explains the observed sample exactly, i.e. the {\em empirical distribution} which assigns to each vector $u$ the relative frequency with which this vector is observed in the sample. For this distribution, the expectation value of any measurable function is the average over the observed values. In particular, if we replace $P(U;\Theta_0)$ in equation \eqref{eq:expectationvaluelossdistribution} by the empirical distribution, the limit turns into an equality. Thus we see that the loss function can alternatively be interpreted as the conditional cross entropy between the empirical distribution based on the observed sample and the distribution given by the parameter $\Theta$.
\chapter{Multinomial logistic regression}
So far we have studied the logistic regression model, understood why this model describes a simple neuronal network acting as a classifier and studied the loss function that is typically used to train such a network. The logistic regression model we have used so far is, however, restricted to a dependent variable $Y$ which only takes on two values, i.e. a binary random variable. Let us now assume that we want to model the conditional probability for an outcome variable $Y$ which can take on $K$ distinct values; for simplicity, we take these to be $\{1, \dots, K\}$.
Thus, instead of generating one probability $p_i$ from each sample $X^i$, we now have to create $K$ distinct probabilities $p^i_k$ that add up to one for a fixed $i$. A standard choice for this purpose is the {\em softmax function} which is a function that maps $\R^K$ to itself and is given by
$$
s_j(z_1, \dots, z_K) = \frac{e^{z_j}}{\sum_{i=1}^K e^{z_i}}
$$
There is an obvious relation to the sigmoid function. In fact, for $K=2$, we have
$$
s_1 (z_1, z_2) = \sigma(z_1 - z_2)
$$
Obviously
$$
\sum_k s_k(z_1, \dots, z_K) = 1
$$
so that the softmax function is well suited to modelling probabilities that add up to one. In order to build an equivalent for the logistic regression model, we now choose our weights to be a matrix $W$ with $d$ rows and $K$ columns, where again $d$ is the number of features, i.e. $X^i \in \R^d$. Similarly, we combine all feature vectors $X^i$ into a matrix with $N$ rows and $d$ columns. We can then form the matrix $X \cdot W$ which has $N$ rows and $K$ columns. Each of the rows of this matrix - which we denote by $(X \cdot W)^i$ - is then a $K$-dimensional vector which is given by
$$
(X \cdot W)^i_j = \sum_{k=1}^d X^i_k W^k_j
$$
and
$$
p^i_j = P(Y^i = j | X^i = x^i) = s_j((X \cdot W)^i)
$$
which can be written in an even more compact form as
$$
p = s(X \cdot W)
$$
Then the $i$-th row of this matrix represents the $K$ probabilities for $Y^i$ to take the values $j = 1, \dots, K$. Let us now again calculate the likelihood function, the resulting loss function and its gradient. For calculations, especially those that involve logarithms, it is more convenient to use a {\em 1-of-K encoding}. Specifically, we define a random variable $T$ which takes values in $\R^K$ with the restriction that each component can only be one or zero and that there is always exactly one component which is one. We relate $T$ to $Y$ by the prescription that the j-th component of $T$ is one if and only if $Y=j$. In other words,
$$
P(T^i_j = 1 | X^i = x^i) = P(Y^i = j | X^i = x^i) = p^i_j
$$
This has the advantage that for a given target vector $t$, we can write
$$
P(T^i = t^i | X^i = x^i) = \prod_j {p^i_j}^{t^i_j}
$$
where of course only one of the factors will be different from one as only one of the $t^i_j$ is different from zero. This makes it easy to write down the loss function, again remembering that the $Y^i$ and thus the $T^i$ are supposed to be independent:
$$
l(W) = - \ln L(W) = - \sum_{i,j} t^i_j \ln p^i_j
$$
Now let us again calculate gradients. First, a short calculation shows that
$$
\frac{\partial s_i}{\partial z_j} = - s_i s_j + \delta_{ij} s_j
$$
where as usual $\delta_{ij}$ is the Kronecker-Delta. If we denote the activation by
$$
a^i_j = \sum_{k=1}^d X^i_k W^k_j
$$
then we have
$$
\frac{\partial a^i_j}{\partial W^s_t} = \delta_{jt} X^i_s
$$
and
$$
p^i_j = s_j (a^i)
$$
Therefore the chain rule gives us
\begin{align*}
\frac{\partial p^i_j}{\partial W^s_t} &= \sum_k \frac{\partial p^i_j}{\partial a^i_k} \frac{\partial a^i_k}{\partial W^s_t} = \frac{\partial s_j}{\partial a^i_t} X^i_s \\
&= (-s_j s_t + \delta_{tj} s_j) X^i_s \\
&= (-p^i_j p^i_t + \delta_{tj} p^i_j) X^i_s
\end{align*}
and therefore
$$
\frac{\partial}{\partial W^s_t} \ln p^i_j = (-p^i_t + \delta_{tj})X^i_s
$$
We can use this result to calculate the derivative of the loss function and obtain
$$
\frac{\partial l}{\partial W^s_t} = \sum_i (-t^i_t + \sum_j t^i_j p^i_t) X^i_s = \sum_i (p^i_t - t^i_t) X^i_s
$$
where we have used that $\sum_j t^i_j = 1$.
Again this can be interpreted as the observation $X$ times an error term that measures the difference between the target $t$ and the actual result $p$. If we define a matrix $E$ as
$$
E = T - p
$$
we can write this as
$$
\frac{ \partial l}{\partial W^s_t} = - (X^t E)^s_t
$$
and we obtain the following rule to update our weight vectors in each iteration:
$$
W \leftarrow W + \lambda (X^t E)
$$
where again $\lambda$ denotes the step size in the gradient descent algorithm. Thus it is again very easy to implement multinomial regression in a programming language like Python:
\begin{lstlisting}[frame=single,language=Python,caption=Implementation of the training phase in Python]
import numpy as np
from scipy.special import softmax

for step in range(epochs):
    #
    # The activation is given by the matrix product X W,
    # and the softmax is applied row by row
    #
    p = softmax(np.matmul(X, W), axis=1)
    #
    # Next we compute the error
    E = target - p
    #
    # finally we update the weights
    #
    W = W + learning_rate * np.matmul(np.transpose(X), E)
\end{lstlisting}
Note that this algorithm assumes that the target is given by a sample of the random variable $T$, i.e. in the 1-of-K encoding. The result of the algorithm is then described by the value of $p$ for a test sample, and the value in column $j$ and row $i$ of $p$ is the probability that the sample $i$ belongs to class $j$. Again, we can use the iris data set as an example. In this case, we have four features (sepal length, petal length, sepal width, petal width), so that $d=4$, and three classes corresponding to the three types of flowers, i.e. $K=3$. This corresponds to a network with four input units and three output units. The following diagram shows the result of training such a network for 300 epochs, using again a learning rate of 0.01. Again, the left hand side displays the learning progress, measured in terms of norm and mean of the absolute value of the error vector, whereas the right hand side visualizes the classification results in a 3-dimensional scatter plot, where we have suppressed one of the four features.
\includegraphics[scale=0.47]{MultinomialRegressionIris.png}
In the plot on the right hand side, samples which are not correctly classified are displayed in red. Even after a few hundred epochs, we still have between 3 and 6 incorrectly classified samples. We see that these are sample points that are hidden in a cluster of samples of a different species, and a linear classifier is no longer able to clearly separate these points from their closest neighbors. So our simple model is reaching its limit. It is instructive to see how the learning algorithm changes if we replace our multinomial regression model by a model with $K$ independent output variables $Y_j$. In a multinomial model, the targets $T_j$ are obviously not independent, as only one of them can be one. In a model with $K$ independent binary variables $Y_j$, each variable can be $0$ or $1$ and the variables are fully independent. We can model this by again using the logistic (sigmoid) function instead of the softmax function, i.e. we assume that our conditional probabilities are given by
$$
P(Y^i_j = 1 | X^i = x^i) = p^i_j = \sigma((XW)^i_j) = \sigma(\sum_k X^i_k W^k_j)
$$
where again $X$ is a matrix with $N$ rows (the size of the sample) and $d$ columns (the number of features), and $W$ is a matrix with $d$ rows and $K$ columns, and that the $Y_j$ are independent.
Then our loss function is
$$
l(W) = - \ln L(W) = - \sum_{i,j} ( y^i_j \ln(p^i_j) + (1 - y^i_j) \ln (1-p^i_j))
$$
Using a calculation similar to the one before, we find that
$$
\frac{\partial p^i_j}{\partial W^s_t} = p^i_j (1-p^i_j) \delta_{jt} X^i_s
$$
and
$$
\frac{\partial l}{\partial W^s_t} = - \sum_i X^i_s (y^i_t - p^i_t)
$$
Thus, if we again define an error matrix as
$$
E = Y - p
$$
we have
$$
\frac{\partial l}{\partial W^s_t} = - (X^T E)^s_t
$$
In other words, the rule to update the weights for a model with $K$ independent binary output variables has exactly the same form as the update rule for the weights in the multinomial model (but of course the rule to calculate the matrix $p$ and thus $E$ is different, using $\sigma$ instead of the softmax function). It is worth mentioning that this is not pure coincidence, but true for a larger class of models, the so-called generalized linear models (see \cite{Bishop}, section 4.3.6).

So when do we use a softmax activation function and when do we use a sigmoid activation function when designing a neural network? The answer depends (surprise) on what you want to model. If the purpose is a classification where the different classes are mutually exclusive, then a multinomial model and correspondingly a softmax activation function is more appropriate. If, however, we want to model independent binary variables, a sigmoid function is the better choice. In practice, hidden layers of neural networks tend to use a sigmoid activation (or an alternative choice like the rectified linear unit (ReLU) activation function), whereas the final output layer is often modelled using a softmax activation function.

\chapter{Feed forward networks and backpropagation}

So far we have considered statistical models that are described by a conditional probability distribution of the form
$$
P (Y^i| X^i ) = h(XW)
$$
with a matrix $X$ describing the inputs to the model, a matrix $W$ describing weights and bias and a function $h$ which was either the sigmoid function or the softmax function. We have used these models as linear classifiers. The output for the i-th feature vector in the sample is given by multiplying the i-th row of $X$ with the weight matrix $W$ and applying $h$, i.e. the input to the activation function is linear in the feature vector. Thus, feature vectors that only differ by an element in the kernel of $W$ will yield the same output. This implies that if we use these models as classifiers, they will only be able to distinguish different sets of feature vectors that can be separated by linear submanifolds in the sample space.

In order to obtain more general models, we can start to implement more complex networks which have layers. A {\em layered neural network} consists of a set of layers $0, 1, 2, \cdots$ of neurons (usually called {\em units} for short) such that a unit in layer $i$ receives input only from units in layer $i-1$, and each layer acts as a simple linear classification model as discussed above. To make this more precise, suppose we are again given a matrix $X$ of random variables with $N$ rows and $d$ columns, where $N$ is interpreted as the sample size and $d$ as the number of features. Suppose further that we are given a set of matrices $W^i, i = 1, \cdots, L$. We will denote the element at row $s$ and column $t$ of matrix $i$ by $W^{i,s}_t$. We can then form the matrix
$$
Z^1 = \sigma(X W^1)
$$
which is nothing but a logistic regression model. We can then feed the output of this model as input into a second logistic regression model, i.e.
we consider
$$
Z^2 = \sigma(Z^1 W^2)
$$
and so forth. The output of the last layer is given by
$$
Z^L = s(Z^{L-1} W^L)
$$
i.e. we use the softmax function as the activation function of the output layer (other choices are possible, and most of what follows applies to other activation functions as well). We call the quantity
$$
a^i = Z^{i-1} W^i
$$
the {\em activation} of layer $i$ for $i > 0$. Thus $a^{k,i}_s$ denotes the activation of unit $s$ in layer $i$ for sample $k$. Moreover, we set $X=Z^0$. The situation is illustrated below, where not all connections are shown.

\includegraphics[scale=0.42]{LayeredNetworkTopology.png}

The last layer $Z^L$ is usually called the {\em output layer}. The first layer $Z^0$ is called the {\em input layer}. All other layers are called {\em hidden layers}. Thus our example is a network with one softmax output layer, two sigmoid hidden layers and one input layer. By convention, the weight matrix $W^i$ connects layer $i-1$ and layer $i$. Note that the number of rows of the matrix $W^i$ is the number of units in layer $i-1$, and the number of columns is the number of units in layer $i$.

Again we interpret the model as calculating a probability distribution, i.e. we define a distribution conditional on $X$ by
$$
P(Y^i = j | X^i ) = Z^{L,i}_j
$$
and assume, as before, that different samples are independent. Our goal will again be to apply the maximum likelihood method to fit all weights $W^i$ to a given labeled sample of feature vectors, where we assume that the labels are given by a matrix $T$ in which each row is the label of the corresponding sample in a 1-of-K encoding. In order to determine the maximum of the likelihood function, we again intend to use the method of gradient descent. Using the results of the previous section on the multinomial regression model, we can immediately write down the loss function:
$$
l(W) = - \sum_{i,j} T^i_j \ln Z^{L,i}_j = - \sum_{i,j} T^i_j \ln s_j (a^{L,i})
$$
However, this is a significantly more complex function of the weights than in the case of one layer, so we need a clever way to organize the calculation to be able to run it efficiently. The standard method for that purpose is known as the {\em backpropagation algorithm} (see \cite{RHW}), which we now explain.

We start by computing the partial derivatives of the loss function with respect to the activation of the last layer. Thus we compute
\begin{align*}
\frac{\partial}{\partial a^{L,s}_t} \ln s_j(a^{L,i}) = \frac{1}{s_j} \delta_{is} (-s_j s_t + \delta_{jt} s_j) = \delta_{is} (\delta_{jt} - s_t)
\end{align*}
and obtain
\begin{align*}
\frac{\partial l}{\partial a^{L,s}_t} &= - \sum_{i,j} T^i_j \delta_{is} (\delta_{jt} - s_t) \\
&= - T^s_t + \sum_j s_t T^s_j = - T^s_t + s_t \sum_j T^s_j = - T^s_t + Z^{L,s}_t
\end{align*}
Thus this derivative is given by the error term that we already met before, which is simply the difference between the actual output ($Z^{L,s}_t$) and the target ($T^s_t$) and which we have now identified as a partial derivative. This motivates the definition of an error term for the deeper layers. We call
$$
E^{i,s}_t = - \frac{\partial l}{\partial a^{i,s}_t}
$$
the error term for layer $i$. Then our calculation shows that the error for the output layer is
$$
E^L = T - Z^L
$$
as we expect. Before we turn to the question of how we can compute the error terms for the other layers, let us first try to understand why these terms are useful.
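As a short aside, the forward pass that produces the outputs $Z^i$ and the output layer error $E^L = T - Z^L$ translates directly into numpy. The following is only a sketch under a few assumptions that are not part of the text: the weight matrices are collected in a Python list, the sigmoid is taken from scipy, and the softmax function is the one sketched earlier. The complete training loop, including the backward pass, is shown later in this chapter.
\begin{lstlisting}[frame=single,language=Python,caption=Sketch of the forward pass]
import numpy as np
from scipy.special import expit as sigmoid

def forward(X, weights, target):
    #
    # X is the N x d input matrix Z^0, weights is the list [W^1, ..., W^L]
    # and target is the N x K matrix T in 1-of-K encoding
    #
    Z = X
    for W in weights[:-1]:
        #
        # Hidden layers use the sigmoid activation function
        #
        Z = sigmoid(np.matmul(Z, W))
    #
    # The output layer uses the softmax function
    #
    output = softmax(np.matmul(Z, weights[-1]))
    #
    # Error term of the output layer: E^L = T - Z^L
    #
    E = target - output
    return output, E
\end{lstlisting}
We now return to the question of why the error terms are useful.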
Suppose, then, that we wanted to calculate the derivative of the loss function with respect to a weight matrix element $W^{i,s}_t$ in the weight matrix that determines the input to layer $i$. The loss function can be written as a function of the weights of all layers $j > i$ and the input to layer $i$. Thus the dependency on the weights for layer $i$ can be written as an implicit dependency via the inputs to layer $i$, i.e. the activation of layer $i$. Using the chain rule, this implies that
$$
\frac{\partial l}{\partial W^{i,s}_t} = \sum_{p,q} \frac{\partial l}{\partial a^{i,p}_q} \frac{\partial a^{i,p}_q}{\partial W^{i,s}_t}
$$
The first term in each summand is simply minus the error term that we have defined above. The second term is also easy to calculate, because the activations are simply given by the matrix product of input and weights, so its derivatives with respect to the weights are simply the elements of the input matrix:
$$
\frac{\partial a^{i,p}_q}{\partial W^{i,s}_t} = \frac{\partial}{\partial W^{i,s}_t} (Z^{i-1} W^i)^p_q = \delta_{tq} Z^{i-1,p}_s
$$
We therefore obtain
$$
\frac{\partial l}{\partial W^{i,s}_t} = - \sum_p E^{i,p}_t Z^{i-1,p}_s = - ((Z^{i-1})^T E^i)^s_t
$$
Thus we see that again, the gradient can conveniently be expressed as a product of matrices, namely the transpose of the matrix describing the input to the i-th layer (which is the output of the previous layer) and the error matrix for the i-th layer. Thus our update rule will again be
\begin{align}\label{eq:iterativeweightupdate}
W^i \leftarrow W^i + \lambda (Z^{i-1})^T E^i
\end{align}
with a step size $\lambda$.

This is nice, but only really useful if we find a way to calculate the error terms efficiently. The key point of the backpropagation algorithm is that these terms can be computed iteratively. In fact, the loss function depends on the activation of layer $i-1$ only via the activation of layer $i$, as every layer is connected only to the next layer. Thus we can again use the chain rule to write
$$
E^{i-1,s}_t = - \frac{\partial l}{\partial a^{i-1,s}_t} = - \sum_{p,q} \frac{\partial l}{\partial a^{i,p}_q} \frac{\partial a^{i,p}_q}{\partial a^{i-1,s}_t} = \sum_{p,q} E^{i,p}_q \frac{\partial a^{i,p}_q}{\partial a^{i-1,s}_t}
$$
The first factor in each summand is the error term for layer $i$. The second factor is a partial derivative that we can easily calculate using
$$
a^i = \sigma(a^{i-1}) W^i
$$
and obtain
$$
\frac{\partial a^{i,p}_q}{\partial a^{i-1,s}_t} = \delta_{ps} \sigma' (a^{i-1,s}_t) W^{i,t}_q
$$
We can therefore write
$$
E^{i-1,s}_t = \sigma'(a^{i-1,s}_t) \sum_q E^{i,s}_q W^{i,t}_q
$$
If we use the symbol $\odot$ to denote the Hadamard product of two matrices (which is simply the entrywise product), we can write this conveniently as
\begin{align}\label{eq:iterativeerrorterm}
E^{i-1} = \sigma'(a^{i-1}) \odot (E^i (W^i)^T)
\end{align}
The relations \eqref{eq:iterativeweightupdate} and \eqref{eq:iterativeerrorterm} now suggest the following approach to training a multi-layer neural network. We first calculate the output $Z^i$ of each layer, starting with the input layer $Z^0$, by multiplying the output of the previous layer with the weight matrix and then applying the activation function. This phase of a training step is called {\em forward propagation}. Once we have reached the output layer, we calculate the error term for the output layer, which is simply the difference between the target and the output of the last layer.
We then apply \eqref{eq:iterativeweightupdate} and \eqref{eq:iterativeerrorterm} to iteratively update the weights of each layer and compute the error term for the previous layer, thus working our way back from the output layer to the input layer. We can interpret \eqref{eq:iterativeerrorterm} as distributing the error of layer $i$ back to the units of layer $i-1$, using the weights, and multiplying by the derivative of the activation function to normalize. This step is called {\em backpropagation}. Also note that in practice, the derivative of the activation function is easily calculated using the formula
$$
\sigma' (a^i) = \sigma(a^i) \odot (J-\sigma(a^i)) = Z^i \odot (J - Z^i)
$$
where we have introduced the matrix $J$ for which every entry is one.

Formula \eqref{eq:iterativeerrorterm} has another interesting feature. Suppose that we are in an early training phase and start with an error in the output layer which is close to 1 for most entries. When we now apply the formula layer by layer to propagate the error back into the previous layers, we see that we multiply the error for each layer by a derivative of the sigmoid function. By design, the sigmoid function squeezes the full real line into the unit interval. Consequently, its derivative is bounded by a number smaller than one, and in fact the derivative is close to zero for large input values. Thus, as we proceed backwards through the layers, the error term gets smaller and smaller, and consequently the correction to the weights gets smaller and smaller. Thus, for neural networks with many layers, we experience the so-called {\em vanishing gradient problem}: the lower layers of the network learn very slowly. On the other hand, since the output of the higher layers depends on the weights of the lower layers as well, the correct adjustment of the lower layer weights is crucial for the precision of the network. This problem prevented the successful training of deep neural networks with many layers for some years, before the increasing computational capacity of modern PCs and in particular graphics processing units, advanced optimization and training procedures, and novel network architectures like convolutional networks or deep belief networks finally put us in a position to successfully train networks with a large number of layers.

For practical applications, it is useful to make the bias, which can also be absorbed into the weight matrix by adding an additional input dimension, explicit. Using a bias, the formula for the activation becomes
$$
a^{i,s}_p = \sum_k Z^{i-1,s}_k W^{i,k}_p + b^i_p
$$
where $b^i$ is the bias vector for layer $i$. It is convenient to introduce a set of matrices $B^i$ by letting
$$
B^{i,p}_q = b^i_q
$$
so that we can write this as
$$
a^i = Z^{i-1} W^i + B^i
$$
The update rule \eqref{eq:iterativeweightupdate} remains correct also in the presence of an explicit bias. The same holds for the iterative rule \eqref{eq:iterativeerrorterm} to derive the error terms layer by layer. However, we do need an additional rule to calculate the necessary updates to the bias vectors. We have
$$
\frac{\partial l}{\partial b^i_t} = \sum_{p,q} \frac{\partial l}{\partial a^{i,p}_q} \frac{\partial a^{i,p}_q}{\partial b^i_t} = \sum_{p,q} \frac{\partial l}{\partial a^{i,p}_q} \delta_{tq} = - \sum_p E^{i,p}_t
$$
Thus, up to the sign, we get one contribution for each sample $p$, which is simply the error term for that sample in the given layer.
Using again the matrix $J$ for which $J_{ij} = 1$ for all $i,j$, we can write our update rule as
$$
B^i \leftarrow B^i + \lambda (J^T E^i)
$$
which is in complete analogy with the update rule for the weights.

In Python, the update rules of the backpropagation algorithm can again be implemented easily using a package like numpy. The following code snippet demonstrates how the backpropagation rule can be expressed using the matrix operations provided by this package to determine error terms and update weights during one iteration of the gradient descent algorithm. Here we assume that the forward phase is already concluded and has stored the outputs of the hidden layers in matrices $H[i]$, where $i$ ranges from zero to the number of hidden layers minus one, and the output of the last (softmax) layer is stored in the matrix $O$.
\begin{lstlisting}[frame=single,language=Python,caption=Backpropagation in Python]
#
# Now do actual backpropagation
#
for i in range(hidden_layers, -1, -1):
    #
    # Compute error term
    #
    if i == hidden_layers:
        E = target - O
    else:
        E = H[i]*(1-H[i])*np.matmul(E, np.transpose(W[i+1]))
    #
    # Determine input of layer
    #
    if i == 0:
        input = np.transpose(X)
    else:
        input = np.transpose(H[i-1])
    #
    # Adapt weights and bias
    #
    W[i] += learning_rate*np.matmul(input, E)
    B[i] += learning_rate*np.sum(E, axis=0)
\end{lstlisting}
Again, we can visualize the results of a training and inference phase for this network. The following diagram shows, as an example, the results of a training run with two hidden layers, having 100 and 30 hidden units, respectively. The learning rate was 0.0005 and the training was run for 500 steps.

\includegraphics[scale=0.47]{LayeredNetworkIris.png}

We see that even with this more advanced network architecture, a few samples cannot be correctly classified (one or two for most training runs) because they are too close to instances of other species in the feature space. What happens if we increase the number of neurons and layers further? The following diagram shows the results of a training run with a large network with five hidden layers, having 500, 400, 300, 200 and 100 units, and a learning rate of 0.00005, where, after step 3,200, the network correctly classified all samples.

\includegraphics[scale=0.47]{LayeredNetworkIrisLarge.png}

Intuitively, the network uses its large number of individual neurons to simply memorize all samples together with their correct labels instead of extracting general rules. This is a typical example of {\em overtraining}, which occurs if the number of free parameters that are available is much higher than the number of samples in the training set. The network will then achieve a perfect match on the training set, but will perform poorly on any new samples. Sometimes, the technique of regularization that we touch upon in the next chapter can be used to avoid overtraining.

\chapter{A few ideas from Bayesian inference}
\label{chap:ideasfrombayesianinference}

So far we have essentially employed one tool from mathematical statistics, namely the maximum likelihood estimator, to derive the weights of our neural networks. In this chapter, we will take a brief look at some ideas that come into play when we apply Bayesian inference to the statistical models considered so far. As a starting point, let us go back to logistic regression. So far, we have motivated the usage of the sigmoid function by the need for a function that maps the full real line into the unit interval to construct probabilities.
However, there is an approach that leads us to the sigmoid function in a more natural way. To illustrate this, suppose we are given a scalar random variable $X$ and suspect that a binary random variable $Y$ depends on $X$. Let us assume that the data points for each class $Y = 0,1$ are distributed according to a Gaussian distribution with a given mean and variance, i.e. that the distribution of $X$ conditioned on $Y$ is given by
$$
P(X = x | Y = 0) = \frac{1}{\sqrt{2\pi} \sigma} e^{- \frac{(x - \mu_0)^2}{2\sigma^2}}
$$
and similarly
$$
P(X = x | Y = 1) = \frac{1}{\sqrt{2\pi} \sigma} e^{- \frac{(x - \mu_1)^2}{2\sigma^2}}
$$
where we have assumed that both distributions have the same variance. Let $\pi = P(Y = 1)$ denote the a priori probability for $Y$ to be one. Then we can use Bayes theorem to calculate the probability that a given sample falls into class $Y = 1$ given $X$:
\begin{align*}
P(Y=1 | X = x) &= \frac{P(X = x | Y = 1)P(Y=1)}{P(X = x)} \\
&= \frac{\pi P(X = x | Y = 1)}{(1 - \pi)P(X = x | Y = 0) + \pi P(X = x | Y = 1)} \\
&= \frac{1}{1 + \frac{1-\pi}{\pi} \frac{P(X = x | Y = 0)}{P(X = x | Y = 1)}} \\
&= \frac{1}{1 + \frac{1-\pi}{\pi} e^{- \frac{1}{2\sigma^2} (\mu_0^2 - \mu_1^2 + 2 x(\mu_1 - \mu_0))}}
\end{align*}
This looks complicated, but it is in fact a sigmoid function. If we let
$$
w = \frac{1}{\sigma^2} (\mu_1 - \mu_0)
$$
and
$$
b = \frac{1}{2\sigma^2}(\mu_0^2 - \mu_1^2) - \ln \frac{1-\pi}{\pi}
$$
then we obtain
$$
P(Y = 1 | X = x) = \frac{1}{1 + e^{-(wx + b)}} = \sigma(wx + b)
$$
Thus we see that our assumption has led us directly to a model where the conditional probability of $Y$ is described by a sigmoid function. We also obtain a nice interpretation of the parameters $w$ and $b$. The weight $w$ is proportional to $\mu_1 - \mu_0$, i.e. to the vector pointing from the centre of the cluster of points for which $Y = 0$ to the centre of the cluster of points for which $Y = 1$. The second term in the bias accounts for the fact that the a priori probabilities for $Y$ to be $1$ or $0$ are different. Note that the assumption of equal variance for both distributions makes sure that the term quadratic in $x$ cancels and we obtain a linear model. The distributions $P(X = x | Y)$, which we assumed to be Gaussian, are often called {\em class conditional densities}, and the prior distribution of $Y$, which is given by $\pi$ in our case, is called the {\em class prior}.

We now have two different approaches to fit our model to training data. First, we could determine values for the parameters $w$ and $b$ using the maximum likelihood approach, and we are back in the logistic regression model. This requires fitting two parameters in order to make predictions on the outcome for future samples. Second, we could try to fit the parameters $\pi$, $\mu_0, \mu_1$ and $\sigma$ directly. This requires the fitting of four parameters, but comes with the additional benefit that once we have these parameters, we could also generate additional data which is not contained in the training set. Therefore this approach is sometimes called the {\em generative model approach}.

Even if we use the approach of the logistic regression model to fit the parameters $w$ and $b$, we can gain additional insights by applying Bayesian methods to our problem. We could, for instance, treat $w$ as a random variable (we now ignore $b$ again, as we have seen that a bias can always be modelled using an additional input dimension) and assume a prior distribution $p(w)$.
Then, we can again apply Bayes theorem to calculate the a posteriori distribution of $w$ given some training data $\mathcal D$. We obtain
$$
P(w | {\mathcal D}) = \frac{P({\mathcal D} | w) P(w)}{P({\mathcal D})}
$$
The first term in the numerator is the likelihood $L(w)$ given the data. The second term is the prior distribution of $w$. The term in the denominator is a constant that does not depend on $w$. Instead of maximizing the likelihood, we could now maximize the a posteriori probability for $w$, i.e. we could try to determine the value of $w$ for which the a posteriori probability is highest. As the denominator does not depend on $w$, this amounts to minimizing
$$
- \ln L(w) - \ln P(w) = l(w) - \ln P(w)
$$
where $l(w)$ is the negative log-likelihood function described earlier. This method is called the {\em MAP method} (maximum a posteriori method). Thus switching from the maximum likelihood method to MAP amounts to adding an additional term to the loss function which is minus the logarithm of the prior probability. Conversely, we see that using a maximum likelihood estimator corresponds to the assumption of a uniform distribution of $w$ (which, of course, is in general not true, simply because the domain of $w$ is typically unbounded and therefore a uniform distribution does not define a finite probability measure).

As an example, let us assume that the weight $w$ is distributed according to a Gaussian with mean zero and variance $\sigma^2$, i.e. that
$$
P(w) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{1}{2\sigma^2} w^2}
$$
Then our new loss function is
\begin{align}\label{eq:lossfunctionwithpenalty}
l(w) + \frac{1}{2\sigma^2} w^2 + \text{const.}
\end{align}
Thus we introduce a penalty for large values of $w$. This additional term is called a {\em regularization term} (or regularizer) in the language of neural networks. Thus a Bayesian argument not only explains the origin of the regularization term, but also suggests a natural value for its coefficient, namely one half of the inverse of the variance; the inverse of the variance is commonly called the {\em precision} and denoted by $\beta$. Thus, our update rule for the weights during the gradient descent process with step size $\lambda$ is now
$$
w \leftarrow w - \lambda \nabla l - \beta \lambda w
$$
Since the additional term pulls the weights towards zero in every step, it is sometimes called {\em weight decay}.

It is worth pointing out that this is still not a full Bayesian treatment. We still search for a single value of $w$ during the training phase of the network and then use this value for a later inference phase. In a Bayesian approach, one would treat the parameter $w$ itself as a random variable. So instead of computing
$$
P(t | w,{\mathcal D})
$$
for a fixed value of $w$ and using this to draw conclusions, one would use the relation
$$
P(t |{\mathcal D}) = \int P(t | w,{\mathcal D}) P(w | {\mathcal D}) dw
$$
and compute the {\em average} of the values $P(t | w,{\mathcal D})$ weighted by the probability $P(w | {\mathcal D})$. As a full numerical integration over $w$ is usually out of the question due to the large number of dimensions of $w$, one typically tries to replace the integral by an average over a sample. Thus, one draws a sample $w_1, \dots, w_n$ distributed according to $P(w | {\mathcal D})$, and then uses
$$
P(t |{\mathcal D}) \approx \frac{1}{n} \sum_i P(t | w_i,{\mathcal D})
$$
as an approximation, backed by the law of large numbers.
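Before we turn to the question of how such a sample can actually be generated, let us briefly record what the MAP update rule with weight decay looks like in code. The sketch below uses the gradient $X^T(p - y)$ of the negative log-likelihood for the sigmoid model derived earlier; the function name and the default values for the learning rate and the precision are arbitrary choices made only for this illustration.
\begin{lstlisting}[frame=single,language=Python,caption=Sketch of a gradient descent step with weight decay]
import numpy as np
from scipy.special import expit as sigmoid

def gradient_step(w, X, y, learning_rate=0.01, beta=1.0):
    #
    # One gradient descent step for logistic regression with weight decay.
    # X is the N x d feature matrix, y the vector of labels in {0,1} and
    # beta the precision of the Gaussian prior on w
    #
    p = sigmoid(np.matmul(X, w))
    #
    # Gradient of the negative log-likelihood: X^T (p - y)
    #
    grad_l = np.matmul(np.transpose(X), p - y)
    #
    # Usual gradient descent step plus the weight decay term
    #
    return w - learning_rate * grad_l - learning_rate * beta * w
\end{lstlisting}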
Returning to the sampling problem: in general, it is of course far from obvious how to generate a sample from the posterior distribution $P(w | {\mathcal D})$, and we will again have to use a non-trivial sampling method due to the high number of dimensions of the space in which $w$ lives. In the next chapter, we will see how we can apply {\em Markov chain Monte Carlo methods} to this problem.

\chapter{A simple Monte Carlo perceptron}\label{chap:MCPerceptron}

In this chapter, we will use a Monte Carlo approach to sample from the space of weights of a neural network and use this instead of a gradient descent algorithm that determines a single maximum likelihood estimate for the best set of weights. We only discuss a very simple example and refer the reader to e.g. the work of R.~Neal (\cite{Neal1996}) for a far more detailed and complete study.

As in the first few sections, we consider a random variable $Y$ with a conditional distribution depending on a vector $X$ (the input of the neural network) and parameters $w$ (the weights) and $\alpha$ (the bias) according to
$$
P(Y = 1 | X = x) = \sigma(w^T x + \alpha)
$$
For a training sample with input values $X^{(i)} \in \R^d$ and labels $Y^{(i)} \in \{ 0,1 \}$, the loss function, which is minus the logarithm of the likelihood, is given by
$$
l(w, \alpha) = - \ln L(w,\alpha) = -\sum_i Y^{(i)} \ln p_i - \sum_i (1-Y^{(i)}) \ln (1-p_i)
$$
where
$$
p_i = \sigma(w^T x^{(i)} + \alpha)
$$
Now suppose we are given a training set ${\mathcal T}$, consisting of $N$ input vectors $X^{(i)}$ and corresponding labels $Y^{(i)}$. Then the likelihood function is, by definition, the probability to observe this data given values of the parameters $w$ and $\alpha$:
$$
P({\mathcal T} | w,\alpha) = L(w,\alpha) = \exp(-l(w,\alpha))
$$
Let us also assume, following the Bayesian approach, that we have a prior distribution for the parameter
$$
\Theta = (w,\alpha) \in \R^{d+1}
$$
given by some density $p(\Theta)$:
$$
P(\Theta) = p(\Theta) d\Theta
$$
where $d \Theta$ is the Lebesgue measure. Now suppose that we are given an additional test input vector $X$ and are interested in the probability that the corresponding value of $Y$ is one, i.e. in the classification result for this input. We can then write
\begin{align}\label{eq:outcomeisexpectationoverparameterspace}
P(Y=1 | X, {\mathcal T}) = \int P(Y=1 | X, \Theta) P(\Theta | {\mathcal T}) \, d\Theta
\end{align}
Now the first term in the integral is simply the output of the network given some value $\Theta = (w,\alpha)$ and the input $X$. The second term is the probability of the parameter given the training data. According to Bayes rule, this is
\begin{align}\label{eq:conditionalparameterdensityfromlikelihood}
P(\Theta | {\mathcal T}) = \frac{P({\mathcal T} | \Theta) P(\Theta)}{P({\mathcal T})} = L(w,\alpha) \frac{P(\Theta)}{P({\mathcal T})}
\end{align}
Together, these equations suggest an approach to obtaining an approximation for the classification of a new input. In fact, equation \ref{eq:outcomeisexpectationoverparameterspace} presents the classification result as an expectation value over the posterior distribution of the parameter after having observed the training data. To estimate this integral, we can use a Monte Carlo method like the Metropolis-Hastings algorithm (if you have never heard of that algorithm before, I invite you to take a look at my notes \cite{christianb2018}). For that purpose, we need the conditional density of the parameter up to a constant, and equation \ref{eq:conditionalparameterdensityfromlikelihood} gives us exactly what we need, assuming that the prior density $p(\Theta)$ is known.
In fact, given a symmetric proposal density $q$, the acceptance probability to go from one parameter $\Theta$ to the proposed parameter $\Theta^*$ is
$$
\alpha = \min \{ 1, \frac{p(\Theta^*)L(\Theta^*)}{p(\Theta)L(\Theta)} \}
$$
as in the Metropolis algorithm, as $q$ cancels (by a slight abuse of notation, we denote the acceptance probability by $\alpha$ as well; it should not be confused with the bias). Let us now make a specific choice for the prior distribution - we will use a multivariate Gaussian distribution with variance $\sigma^2$. Thus the density is proportional to
$$
\exp(- \frac{1}{2\sigma^2} \Theta^2)
$$
Let us now define an {\em energy function} on the parameter space, given by
$$
E(\Theta) = \frac{1}{2\sigma^2} \Theta^2 + l(\Theta)
$$
(even though there is an obvious similarity with the sum of a kinetic energy and a potential energy in physics, we will see later that we should {\em not} think of the first term as a kinetic energy, but rather as an additional potential energy term). Then, up to a constant,
$$
\exp(-E) = p(\Theta) L(\Theta)
$$
and therefore we can write the Metropolis-Hastings acceptance probability as
$$
\alpha = \min \{1, \exp(E(\Theta) - E(\Theta^*)) \}
$$
Thus our algorithm will proceed as follows. We start with some initial parameter value $\Theta = (w,\alpha)$ and calculate the energy $E = E(\Theta)$ for this state. Then we use this parameter as the starting point for a Markov chain. In each step, we draw a new parameter value $\Theta^* = (w^*, \alpha^*)$ from a fixed symmetric proposal density $q$ and calculate the new energy $E^* = E(\Theta^*)$. We then accept the new parameter with probability
$$
\min \{1, \exp(E - E^*) \}
$$
After an initial burn-in period, we continue to sample for a given number of steps. For each parameter in the sample, we then calculate the output of the neural network for our new input value $X$ and take the average of these values. This is our result for the probability that $Y = 1$.

We remark that the energy function is nothing but the loss function including a regularization term studied in equation \eqref{eq:lossfunctionwithpenalty}. As we generate our sample according to the distribution given by the exponential of $-E$ times a normalization factor, we will draw most samples from those regions of the parameter space where the energy is small, i.e. in regions close to the maximum a posteriori estimate for the weights. Our algorithm basically uses a random path through the parameter space to explore that region. However, note that this is more than just a stochastic search of the parameter space for the point with lowest energy, as we also use sample points away from the minimum to calculate our expectation value. If we compare this with a direct scan for the weights with lowest energy, using for instance stochastic gradient descent, we see that we do not make use of the gradient at all. There are methods, like the Hamiltonian Monte Carlo algorithm, that use gradient information to guide the Markov chain towards regions of lower energy, but we will not go deeper into this question in these notes.

As before, this algorithm can easily be programmed in a language like Python, as demonstrated in the listing below, which shows the Markov chain itself. The listing relies on two helper functions, {\it energy} and {\it propose}, that are not spelled out in these notes.
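A minimal sketch of how these two helpers might look is the following; the energy function is the one defined above with a Gaussian prior, while the proposal is a simple Gaussian random walk. The proposal width, the default prior variance, and the assumption that the training data is available in arrays X and Y in the enclosing scope are illustrative choices only.
\begin{lstlisting}[frame=single,language=Python,caption=A possible sketch of the helper functions]
import numpy as np
from scipy.special import expit

def energy(w, alpha, sigma=1.0):
    #
    # E(Theta) = Theta^2 / (2 sigma^2) + l(Theta), where l is the negative
    # log-likelihood of the training data X, Y (assumed to be defined in
    # the enclosing scope)
    #
    p = expit(np.matmul(X, w) + alpha)
    l = -np.sum(Y*np.log(p) + (1-Y)*np.log(1-p))
    return (np.sum(w**2) + alpha**2) / (2*sigma**2) + l

def propose(w, alpha, width=0.1):
    #
    # Symmetric proposal: add independent Gaussian noise to every
    # component of the parameter
    #
    return w + width*np.random.randn(*w.shape), alpha + width*np.random.randn()
\end{lstlisting}
With these helpers in place, the Markov chain itself looks as follows.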
\begin{lstlisting}[frame=single,language=Python,caption=Monte Carlo Perceptron in Python]
prob = 0
E = energy(w, alpha)
for i in range(steps):
    w_new, alpha_new = propose(w, alpha)
    E_new = energy(w_new, alpha_new)
    if (np.random.random() <= np.exp(E - E_new)):
        w, alpha = w_new, alpha_new
        E = E_new
    #
    # If we are in the inference phase, add current
    # value to sample
    #
    if infer:
        prob += expit(np.matmul(X,w) + alpha)
return prob / steps
\end{lstlisting}
This code performs a burn-in (if the switch {\it infer} is false) or calculates the probability that $Y = 1$ given the feature data stored in a matrix $X$ as before (if {\it infer} is true). In any case, it will run the Markov chain described above for a given number of steps. If inference is turned on, it will in addition add up the output of the Perceptron for each of the sampled positions in the variable {\it prob}, so that this number, divided by the number of steps, can be returned as an approximation for the desired classification results.

How does this algorithm work in practice? In figure \ref{fig:MonteCarloPerceptron}, we have displayed the results of a simulation run with a Perceptron using the Monte Carlo algorithm outlined above and the Iris data set as used for the deterministic Perceptron with gradient descent. Here a prior distribution for the weights and the bias with variance $1$, i.e. a standard normal distribution, was used, with a burn-in phase of 50 steps and an inference phase with 5 steps. The diagram on the left displays the classification results. In fact, the network is able to classify all samples correctly with these parameters. The diagram on the right shows how the (two-dimensional) weights evolve over time. We see that after starting at a point close to $(-0.4, -2.2)$, the weights move to the upper left corner of the diagram in a few steps where they then remain. The diagram in the lower part of the figure displays how the loss function evolves over time. In this run, only 11 updates were accepted during the 50-step burn-in phase.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{MonteCarloPerceptron.png}
\caption{Monte Carlo Perceptron simulation results}
\label{fig:MonteCarloPerceptron}
\end{figure}
It is not very surprising that a Monte Carlo approach works in this case, as the two classes are linearly separable and any weight vector perpendicular to a line separating the classes will work. Thus it suffices to end up in a region close to the vector which actually minimizes the loss function, and this is exactly what the stochastic algorithm achieves. Neal (\cite{Neal1993}, \cite{Neal1996}) has applied Monte Carlo simulations to more complex and layered networks, using not a plain Metropolis-Hastings algorithm but more sophisticated sampling methods. The approach of working with a prior distribution over the weights and then using Bayesian methods to make predictions is sometimes referred to as a {\em Bayesian neural network}. Libraries like Edward (http://edwardlib.org), which is built on top of TensorFlow, or PyMC3 can assist in implementing Bayesian models that use Monte Carlo methods or {\em variational inference}, i.e. the approximation of the true posterior by simpler distributions; see also \cite{Bishop}, section 5.7 and chapter 10. Finally, to avoid confusion, it is worth mentioning that in this example, we have applied Monte Carlo methods in the space of {\em weights}.
This is very different from the large class of stochastic models known as Boltzmann machines, which apply Monte Carlo methods in the space of {\em states} but update the weights according to a classical gradient descent algorithm or variations thereof.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Bibliography
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{thebibliography}{9}

\bibitem{Bauer} H.~Bauer, {\em Wahrscheinlichkeitstheorie}, de Gruyter, Berlin, New York 1991

\bibitem{Bishop} C.M.~Bishop, {\em Pattern recognition and machine learning}, Springer, New York 2006

\bibitem{Klenke} A.~Klenke, {\em Probability theory}, Springer, London 2008

\bibitem{Cox} D.R.~Cox, {\em The regression analysis of binary sequences}, Journal of the Royal Statistical Society, Series B, Vol. 20, No. 2 (1958), pp. 215--242

\bibitem{Shannon} C.E.~Shannon, {\em A mathematical theory of communication}, The Bell System Technical Journal, Vol. 27, pp. 379--423 and pp. 623--656, July and October 1948

\bibitem{CasellaBerger} G.~Casella, R.L.~Berger, {\em Statistical inference}, Duxbury Press 2002

\bibitem{Schervish} M.J.~Schervish, {\em Random number generation and Monte Carlo Methods}, Springer Verlag, New York, Berlin, Heidelberg 2003

\bibitem{LehmannRomano} E.L.~Lehmann, J.P.~Romano, {\em Testing statistical hypotheses}, Springer, New York 2005

\bibitem{Theano} The Theano development team, {\em Theano: A Python framework for fast computation of mathematical expressions}, arXiv:1605.02688

\bibitem{Tensorflow} M.~Abadi et al., {\em TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems}, arXiv:1603.04467

\bibitem{RHW} D.E.~Rumelhart, G.E.~Hinton, R.J.~Williams, {\em Learning internal representations by error propagation}, in {\em Parallel distributed processing}, edited by D.E.~Rumelhart and J.L.~McClelland, MIT Press, 1986

\bibitem{MCMCHandbook} S.~Brooks, A.~Gelman, G.L.~Jones, X.L.~Meng (eds.), {\em Handbook of Markov chain Monte Carlo}, Chapman \& Hall / CRC Press, Boca Raton 2011

\bibitem{RobertCasella1999} C.P.~Robert, G.~Casella, {\em Monte Carlo Statistical Methods}, Springer, New York 1999

\bibitem{Neal1993} R.M.~Neal, {\em Probabilistic inference using Markov chain Monte Carlo methods}, Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993

\bibitem{Neal1996} R.M.~Neal, {\em Bayesian Learning for Neural Networks}, Springer, New York 1996

\bibitem{MacKay} D.~MacKay, {\em Information Theory, Inference and Learning Algorithms}, Cambridge University Press, Cambridge 2003

\bibitem{Ising1924} E.~Ising, {\em Beitrag zur Theorie des Ferromagnetismus}, Zeitschrift f. Physik, Vol. 31, No. 1 (1924), 253--258

\bibitem{christianb2018} C.~Bohr, {\em Markov chains and Monte Carlo methods - an introduction}, available online at \url{https://github.com/christianb93/MachineLearning/blob/master/doc/MarkovChains/MarkovChainsIntroduction.pdf}

\end{thebibliography}

\end{document}
{ "alphanum_fraction": 0.733209259, "avg_line_length": 75.4122807018, "ext": "tex", "hexsha": "9ed8c051d2e5248ce5a3d04e72b05d932da8c92f", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2020-05-09T18:25:51.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-15T22:10:13.000Z", "max_forks_repo_head_hexsha": "30d3b182d33f19b210aa393208236e626eaf5f6a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dnzengou/MachineLearning-1", "max_forks_repo_path": "doc/FeedForward/FeedForwardNetworks.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "30d3b182d33f19b210aa393208236e626eaf5f6a", "max_issues_repo_issues_event_max_datetime": "2020-05-09T18:27:41.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-09T18:27:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dnzengou/MachineLearning-1", "max_issues_repo_path": "doc/FeedForward/FeedForwardNetworks.tex", "max_line_length": 1590, "max_stars_count": 16, "max_stars_repo_head_hexsha": "2cfc344025bc27f74800973e05e79418420615cf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "christianb93/MachineLearning", "max_stars_repo_path": "doc/FeedForward/FeedForwardNetworks.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-28T08:28:24.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-17T08:15:26.000Z", "num_tokens": 23030, "size": 85970 }
\documentclass{beamer}

\mode<presentation>
{
  \setbeamertemplate{background canvas}[square]
  \pgfdeclareimage[width=6em,interpolate=true]{dsailogo}{../dsai-logo}
  \pgfdeclareimage[width=6em,interpolate=true]{erasmuslogo}{../erasmus-logo}
  \titlegraphic{\pgfuseimage{dsailogo} \hspace{0.2in} \pgfuseimage{erasmuslogo}}
  %\usetheme{default}
  \usetheme{Madrid}
  \usecolortheme{rose}
  \usefonttheme[onlysmall]{structurebold}
}

\usepackage{pgf,pgfarrows,pgfnodes,pgfautomata,pgfheaps,pgfshade}
\usepackage{amsmath,amssymb}
\usepackage{graphics}
\usepackage{ragged2e}
\usepackage{array}
\usepackage[latin1]{inputenc}
\usepackage{colortbl}
\usepackage[absolute,overlay]{textpos}
\setlength{\TPHorizModule}{30mm}
\setlength{\TPVertModule}{\TPHorizModule}
\textblockorigin{10mm}{10mm}
\usepackage[english]{babel}
\usepackage{listings}

\setbeamercovered{dynamic}

\AtBeginSection[]{
  \begin{frame}<beamer>
    \frametitle{Outline}
    \tableofcontents[currentsection]
  \end{frame}
}

\title[Machine Learning]{Machine Learning\\Learning Theory}

\author{dsai.asia}

\institute[]{
  Asian Data Science and Artificial Intelligence Master's Program}

\date{}

% My math definitions

\renewcommand{\vec}[1]{\boldsymbol{#1}}
\newcommand{\mat}[1]{\mathtt{#1}}
\newcommand{\ten}[1]{\mathcal{#1}}
\newcommand{\crossmat}[1]{\begin{bmatrix} #1 \end{bmatrix}_{\times}}
\renewcommand{\null}[1]{{\cal N}(#1)}
\newcommand{\class}[1]{{\cal C}_{#1}}
\def\Rset{\mathbb{R}}
\def\Expec{\mathbb{E}}
\def\Pset{\mathbb{P}}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\sign}{sign}
\DeclareMathOperator*{\cov}{Cov}
\DeclareMathOperator*{\diag}{diag}
\DeclareMathOperator*{\var}{Var}
\def\norm{\mbox{$\cal{N}$}}

\newcommand{\stereotype}[1]{\guillemotleft{{#1}}\guillemotright}

\newcommand{\myfig}[3]{\centerline{\includegraphics[width={#1}]{{#2}}}
  \centerline{\scriptsize #3}}

\begin{document}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% CONTENTS START HERE

%\setbeamertemplate{navigation symbols}{}

\frame{\titlepage}

%--------------------------------------------------------------------
%\part<presentation>{Part name}
%
%\frame{\partpage}

\begin{frame}
  \frametitle{Readings}

  Readings for these lecture notes:

  \begin{itemize}
  \item[-] Bishop, C. (2006), \textit{Pattern Recognition and Machine Learning},
    Springer, Chapter 3.
  \item[-] Hastie, T., Tibshirani, R., and Friedman, J. (2016), \textit{Elements of
    Statistical Learning: Data Mining, Inference, and Prediction}, Springer, Chapter 7.
  \item[-] Ng, A. (2017), \textit{Learning Theory}. Lecture note sets 4, 5 for CS229,
    Stanford University.
  \end{itemize}

  These notes contain material $\copyright$ Bishop (2006), Hastie et al.\ (2016), and
  Ng (2017).

\end{frame}

%======================================================================
\section{Introduction}
%======================================================================

\begin{frame}{Introduction}

  Machine learning is one of those areas where practical rules of thumb tend to emerge
  before the theoretical principles that might lead to those rules of thumb.

  \medskip
  In this series, we cover some of the most important theoretical principles that
  underlie or seek to explain our practical heuristics:
  \begin{itemize}
  \item The bias-variance tradeoff
  \item How are training set error and test set error related?
  \item When we use a high variance model that is prone to overfitting, how can we
    control it?
  \end{itemize}

\end{frame}

%======================================================================
\section{Bias-variance tradeoff}
%======================================================================

\begin{frame}{Bias-Variance tradeoff}{Test set error}

  In machine learning, you have seen by now that the general approach is to \alert{fit}
  a model $h()$ to a \alert{training dataset} then \alert{test} the model on a
  \alert{test dataset}.

  \medskip
  Consider the linear regression case. After estimating $h_{\vec{\theta}}$, we compute
  the mean squared error over a test set:
  \[ \Expec_{(\vec{x},y) \sim \text{test set}} (h_{\vec{\theta}}(\vec{x})-y)^2. \]

  \medskip
  What are the possible causes of the test set error being \alert{too high}?
  \begin{itemize}
  \item \alert{Overfitting}: the model is too closely tailored to the examples in the
    training set and can't generalize to the test set.
  \item \alert{Underfitting}: the model doesn't capture the link between the features
    $\vec{x}$ and the target $y$.
  \item \alert{Noise}: the data are inherently noisy, and no learning method is going to
    give us lower error on a test set.
  \end{itemize}

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Randomness of $h_{\vec{\theta}}()$}

  To formalize this intuition, we first suppose that the training and test data are all
  sampled from a distribution
  \[ y = f(\vec{x}) + \varepsilon, \]
  where $\Expec(\varepsilon) = 0$ and $\var(\varepsilon) = \sigma^2$.

  \medskip
  We use a training set to estimate $h_{\vec{\theta}}()$ then apply it to each pattern
  $j$ in the test set.

  \medskip
  Our \alert{prediction} of the $y^{(j)}$ for $\vec{x}^{(j)}$ (note that $y^{(j)}$ is
  equal to $f(\vec{x}^{(j)})+\varepsilon^{(j)}$) is $h_{\vec{\theta}}(\vec{x}^{(j)})$.

  \medskip
  Clearly, $\vec{x}^{(j)}$ is fixed when we compute $h_{\vec{\theta}}(\vec{x}^{(j)})$,
  but $h_{\vec{\theta}}(\vec{x}^{(j)})$ itself is \alert{random}, since it depends on the
  values $\varepsilon^{(i)}$ in the training set.

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Bias and variance}

  For linear regression, we define the \alert{bias} of $h_{\vec{\theta}}()$ as
  \[ \Expec(h_{\vec{\theta}}(\vec{x})-f(\vec{x})) \]
  and the \alert{variance} of $h_{\vec{\theta}}()$ as
  \[ \Expec \left( (h_{\vec{\theta}}(\vec{x}) - \Expec(h_{\vec{\theta}}(\vec{x})))^2 \right). \]

  \medskip
  There are various formal notions of bias and variance for classifiers, but there is no
  general agreement on which one is best.

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Bias and variance}

  Mean squared error on the test data set can be decomposed as
  \begin{eqnarray}
    \text {Test MSE} & = & \Expec\left( (y - h_{\vec{\theta}}(\vec{x}))^2 \right) \nonumber \\
    & = & \Expec\left( ( f(\vec{x}) + \varepsilon - h_{\vec{\theta}}(\vec{x}) )^2 \right) \nonumber \\
    & = & \Expec (\varepsilon^2) + \Expec\left( (f(\vec{x}) - h_{\vec{\theta}}(\vec{x}))^2 \right) \label{squared-sum-eq} \\
    & = & \sigma^2 + \left(\Expec(f(\vec{x})-h_{\vec{\theta}}(\vec{x}))\right)^2 +
          \var\left(f(\vec{x})-h_{\vec{\theta}}(\vec{x})\right) \label{squared-eq} \\
    & = & \sigma^2 + \left(\text{Bias } h_{\vec{\theta}}(\vec{x})\right)^2 +
          \var\left( h_{\vec{\theta}}(\vec{x}) \right) \nonumber
  \end{eqnarray}

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Bias and variance}

  How to get Equation \eqref{squared-sum-eq}?

  \medskip
  We have $\Expec((X_1+X_2)^2)$ where $X_1 = \varepsilon$ and
  $X_2 = f(\vec{x})-h_{\vec{\theta}}(\vec{x})$.

  \medskip
  $\Expec((X_1+X_2)^2) = \Expec(X_1^2+X_2^2+2X_1X_2) =
  \Expec(X_1^2)+\Expec(X_2^2)+2\Expec(X_1X_2)$.
  \medskip
  Since $X_1$ and $X_2$ are independent we have $\Expec(X_1X_2) = \Expec(X_1)\Expec(X_2)$.
  Since $\Expec(\varepsilon) = 0$, in this special case, we get
  $\Expec((X_1+X_2)^2) = \Expec(X_1^2)+\Expec(X_2^2)$.

  \medskip
  How to get Equation \eqref{squared-eq}? Start with the definition of variance:
  \begin{eqnarray*}
    \var(X) & = & \Expec((X-\Expec(X))^2) \\
    & = & \Expec(X^2-2X\Expec(X)+(\Expec(X))^2) \\
    & = & \Expec(X^2)-2\Expec(X)\Expec(X)+\Expec(X)\Expec(X) \\
    & = & \Expec(X^2) - (\Expec(X))^2 \\
    \Expec(X^2) & = & (\Expec(X))^2 + \var(X)
  \end{eqnarray*}

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Bias and variance}

  We cannot predict the \alert{noise} $\varepsilon$.

  \medskip
  The \alert{bias} term in the MSE is due to \alert{underfitting}.

  \medskip
  The \alert{variance} term of the MSE is related to \alert{overfitting}.

  \medskip
  We could say that our grand goal is to find the \alert{sweet spot} of no underfitting
  and no overfitting, in which bias and variance are both negligible.

  \medskip
  Unfortunately, in practice, when we reduce bias we usually increase variance, and vice
  versa.

\end{frame}

\begin{frame}{Bias-Variance tradeoff}{Bias and variance}

  As an example, consider the case of linear regression with a single input variable.

  \medskip
  We might try models with degree 1, degree 2, and degree 5 in the input variable:

  \medskip
  \myfig{3.5in}{over-under}{Ng (2017), CS229 lecture notes}

  \medskip
  The degree 1 model has high bias, and the degree 5 model has high variance.

\end{frame}

%======================================================================
\section{Probabilistic preliminaries}
%======================================================================

\begin{frame}{Probabilistic preliminaries}{Important bounds}

  This section is directly from Ng (2017), \textit{Learning Theory}.

  \medskip
  To get started in understanding the generalization ability of a learning model, we
  need some preliminary facts useful for the analysis: the \alert{union bound} and the
  \alert{Hoeffding inequality}.

  \medskip
  \begin{block}{The \alert{union bound}}
    Let $A_1, A_2, \ldots, A_k$ be $k$ different events resulting from a probabilistic
    experiment that are not necessarily independent.
    \[ P(A_1 \cup \cdots \cup A_k) \le P(A_1)+\cdots+P(A_k). \]
  \end{block}

  \medskip
  The union bound follows from Kolmogorov's axioms of probability theory and is
  sometimes considered an axiom itself.

\end{frame}

\begin{frame}{Probabilistic preliminaries}{Important bounds}

  \begin{block}{The \alert{Hoeffding inequality} or the \alert{Chernoff bound}}
    \begin{enumerate}
    \item Let $Z_1, \ldots, Z_m$ be $m$ independent and identically distributed random
      variables drawn from a Bernoulli($\phi$) distribution (i.e.,
      $P(Z_i=1)=\phi, P(Z_i=0)=1-\phi$).
    \item Let $\hat{\phi} = \frac{1}{m}\sum_{i=1}^m Z_i$ be the mean of the $Z_i$.
    \item Let $\gamma > 0$ be fixed.
    \end{enumerate}
    Then
    \[ P(|\phi-\hat{\phi}|>\gamma) \le 2 e^{-2\gamma^2m}. \]
  \end{block}

  \medskip
  Essentially, this says that the average of $m$ Bernoulli($\phi$) variables will, with
  high probability, be very close to $\phi$ as long as $m$ is large.

\end{frame}

\begin{frame}{Probabilistic preliminaries}{Empirical risk and generalization error}

  Now, let's restrict ourselves to binary classification with labels $y \in \{0,1\}$.

  \medskip
  We are given a training set $S = \{(\vec{x}^{(i)},y^{(i)})\}_{i \in 1..m }$ of size $m$
  drawn i.i.d.\ from some distribution $\cal D$.
  \medskip
  The \alert{training error} or \alert{empirical risk} or \alert{empirical error} of a
  hypothesis $h$ is defined as
  \[ \hat{\varepsilon}(h) = \frac{1}{m}\sum_{i=1}^m \delta(h(\vec{x}^{(i)}) \ne y^{(i)}). \]

  \medskip
  The \alert{generalization error} is defined as
  \[ \varepsilon(h) = P_{(\vec{x},y)\sim \cal D}(h(\vec{x})\ne y). \]

  \medskip
  The assumption that the \alert{training and test distributions are the same} is one of
  the \alert{PAC} (probably approximately correct) assumptions.

\end{frame}

\begin{frame}{Probabilistic preliminaries}{ERM}

  How to determine the best $h$?

  \medskip
  One way, called \alert{empirical risk minimization} (ERM), is to pick the $h$
  minimizing $\hat{\varepsilon}(h)$.

  \medskip
  For example, for the linear classifier
  \[ h_{\vec{\theta}}(\vec{x}) = \delta(\vec{\theta}^\top \vec{x} \ge 0),\]
  we pick
  \[ \hat{\vec{\theta}} = \argmin_{\vec{\theta}}\hat{\varepsilon}(h_{\vec{\theta}}) \]
  and output $\hat{h}=h_{\hat{\vec{\theta}}}$.

  \medskip
  Note: algorithms such as logistic regression are not exactly ERM but can be thought of
  as approximations to ERM.

\end{frame}

\begin{frame}{Probabilistic preliminaries}{ERM}

  Now let's generalize beyond linear classifiers.

  \medskip
  Any learning algorithm will consider some set of hypotheses $\cal H$.

  \medskip
  Example: linear classifiers are defined by
  \[ {\cal H} = \{ h_{\vec{\theta}} : h_{\vec{\theta}}(\vec{x}) =
     \delta(\vec{\theta}^\top \vec{x} \ge 0), \vec{\theta} \in \Rset^{n+1}\}. \]
  Alternatively, ${\cal H}$ might be the set of all classifiers represented by a
  particular neural network architecture.

  \medskip
  Once we have $\cal H$, we can define ERM as determining
  \[ \hat{h} = \argmin_{h\in\cal H}\hat{\varepsilon}(h). \]

\end{frame}

%======================================================================
\section{Bounds on generalization error for finite $\cal H$}
%======================================================================

\begin{frame}{Bounds on generalization error for finite $\cal H$}

  We begin by assuming ${\cal H} = \{ h_1, \ldots, h_k \}$ is finite with $k$ possible
  hypotheses.

  \medskip
  We have $k$ functions mapping $\cal X$ to $\{0,1\}$.

  \medskip
  ERM selects the $h_i$ minimizing $\hat{\varepsilon}(h_i)$.

  \medskip
  What we can do:
  \begin{itemize}
  \item Show that $\hat{\varepsilon}(h)$ is a reliable estimate of $\varepsilon(h)$ for
    all $h$.
  \item Bound the generalization error of $\hat{h}$.
  \end{itemize}

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Uniform convergence}

  Take a particular hypothesis $h_i \in \cal H$.

  \medskip
  Consider a Bernoulli random variable $Z$ whose distribution is defined as follows:
  \begin{itemize}
  \item Sample $(\vec{x},y) \sim \cal D$.
  \item Set $Z = \delta(h_i(\vec{x}) \ne y)$.
  \end{itemize}

  \medskip
  $Z$ indicates whether $h_i$ misclassifies $(\vec{x},y)$.

  \medskip
  Next, define $Z_j = \delta(h_i(\vec{x}^{(j)}) \ne y^{(j)})$.

  \medskip
  Since the training set $S$ was drawn i.i.d.\ from $\cal D$ and so was $(\vec{x},y)$,
  we know that $Z$ and $Z_j$ have the same distribution.

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Uniform convergence}

  The expected value of $Z$ is the probability of misclassification of a randomly drawn
  example, which is $\varepsilon(h_i)$.

  \medskip
  As $Z$ and $Z_j$ have the same distribution, the expected value of $Z_j$ is also
  $\varepsilon(h_i)$.
  \medskip
  We can write the training error as
  \[ \hat{\varepsilon}(h_i) = \frac{1}{m} \sum_{j=1}^m Z_j .\]
  We have the mean $\hat{\varepsilon}(h_i)$ of $m$ random variables $Z_j$, all drawn
  i.i.d.\ from a Bernoulli distribution with mean $\varepsilon(h_i)$.

  \medskip
  Now we can apply the Hoeffding inequality! We obtain
  \[ P(|\varepsilon(h_i)-\hat{\varepsilon}(h_i)| > \gamma) \le 2 e^{-2\gamma^2m} .\]

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Uniform convergence}

  What have we shown?

  \medskip
  For our $h_i$, \alert{training error will be close to generalization error} with
  probability that increases with $m$.

  \medskip
  What about \alert{all possible $h_i$}?

  \medskip
  Well, let $A_i$ denote the event that
  $|\varepsilon(h_i)-\hat{\varepsilon}(h_i)| > \gamma$.

  \medskip
  For any $A_i$, we have $P(A_i) \le 2 e^{-2\gamma^2m}$.

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Uniform convergence}

  Now, we can use the union bound to bound the probability of a large deviation for
  \alert{any} $h_i$:
  \begin{eqnarray*}
    P(\exists h_i \in {\cal H}.\,|\varepsilon(h_i)-\hat{\varepsilon}(h_i)|>\gamma) & = &
    P(A_1 \cup \cdots \cup A_k) \\
    & \le & \sum_{i=1}^k P(A_i) \\
    & \le & \sum_{i=1}^k 2 e^{-2\gamma^2m} \\
    & = & 2 k e^{-2\gamma^2 m} .
  \end{eqnarray*}
  Subtracting both sides from 1, we obtain
  \begin{eqnarray*}
    P(\neg \exists h_i \in {\cal H}.\,|\varepsilon(h_i)-\hat{\varepsilon}(h_i)| > \gamma)
    & = & P(\forall h_i \in {\cal H}.\,|\varepsilon(h_i) - \hat{\varepsilon}(h_i)| \le \gamma) \\
    & \ge & 1 - 2 k e^{-2\gamma^2 m} .
  \end{eqnarray*}

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Using uniform convergence}

  The previous result is called the \alert{uniform convergence} result.

  \medskip
  The bound holds simultaneously for all $h \in {\cal H}$.

  \medskip
  We can use this, for example, to answer the question, ``Given $\gamma$ and some
  $\delta>0$, find $m$ large enough to guarantee that with probability at least
  $1-\delta$, training error will be within $\gamma$ of generalization error.''

  \medskip
  To do it, let $\delta = 2k e^{-2\gamma^2 m}$ and solve for $m$ to find that we should
  use
  \[ m \ge \frac{1}{2\gamma^2}\log\frac{2k}{\delta} .\]
  Now we know \alert{how many training examples are needed to make a guarantee about
  generalization error}. Note that $m$ increases in the \alert{logarithm of $k$}!!

  \medskip
  The training set size $m$ needed to achieve a level of performance is called the
  algorithm's \alert{sample complexity}.

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Using uniform convergence}

  We can also hold $m$ and $\delta$ fixed and solve for $\gamma$: with probability at
  least $1-\delta$,
  \[ |\hat{\varepsilon}(h) - \varepsilon(h)| \le \sqrt{\frac{1}{2m}\log\frac{2k}{\delta}} .\]

  \medskip
  [Think about how you could use this fact...]

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{The bound}

  Now to prove something about the generalization error obtained by picking
  \[ \hat{h} = \argmin_{h\in \cal H} \hat{\varepsilon}(h) .\]
  Assume uniform convergence holds, i.e.\ that
  $|\varepsilon(h)-\hat{\varepsilon}(h)| \le \gamma$ for all $h \in \cal H$.

  \medskip
  Let $h^* = \argmin_{h \in \cal H} \varepsilon(h)$ be the best possible hypothesis in
  $\cal H$.
  \medskip
  Applying uniform convergence (twice) and the fact that
  $\hat{h} = \argmin_{h \in \cal H}\hat{\varepsilon}(h)$, we can obtain
  \begin{eqnarray*}
    \varepsilon(\hat{h}) & \le & \hat{\varepsilon}(\hat{h}) + \gamma \\
    & \le & \hat{\varepsilon}(h^*) + \gamma \\
    & \le & \varepsilon(h^*)+2\gamma
  \end{eqnarray*}

  \medskip
  \alert{The generalization error of the empirical risk minimizer is at most $2\gamma$
  worse than that of the best possible classifier!}

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Bounded generalization error}

  \textbf{Theorem}. Let $|{\cal H}| = k$, and let any $m, \delta$ be fixed. Then with
  probability at least $1-\delta$, we have that
  \[ \varepsilon(\hat{h}) \le \left( \min_{h\in \cal H} \varepsilon(h) \right) +
     2 \sqrt{\frac{1}{2m}\log\frac{2k}{\delta}} .\]

  \medskip
  Prove this by letting $\gamma = \sqrt{\cdot}$ and applying the previous argument.

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{The bound's implications for bias and variance}

  We have not formally defined bias and variance for a classifier, but the first term in
  the bound plays the role of a bias term for $\hat{h}$, and the second term plays the
  role of a variance term.

  \medskip
  Suppose we have a hypothesis class $\cal H$.

  \medskip
  If we consider a larger hypothesis class ${\cal H}' \supseteq \cal H$:
  \begin{itemize}
  \item We will improve our bias ($\min_{h \in \cal H}\varepsilon(h)$ can only get
    smaller),
  \item We will increase $k$, thus increasing the variance.
  \end{itemize}

\end{frame}

\begin{frame}{Bounds on generalization error for finite $\cal H$}{Bounded sample complexity}

  If we hold $\gamma$ and $\delta$ fixed, we obtain a bound on sample complexity.

  \medskip
  \textbf{Corollary}. Let $|{\cal H}| = k$, and let any $\delta,\gamma$ be fixed. Then
  for $\varepsilon(\hat{h}) \le \min_{h \in \cal H} \varepsilon(h) + 2 \gamma$ to hold
  with probability at least $1-\delta$, it suffices that
  \begin{eqnarray*}
    m & \ge & \frac{1}{2\gamma^2}\log\frac{2k}{\delta} \\
    & = & O\left(\frac{1}{\gamma^2}\log\frac{k}{\delta}\right) .
  \end{eqnarray*}

\end{frame}

%======================================================================
\section{Bounds on generalization error for infinite $\cal H$}
%======================================================================

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{Motivation}

  We've seen some interesting results for $|{\cal H}| = k$.

  \medskip
  But most of the models we've looked at in this class have $|{\cal H}| = \infty$.

  \medskip
  What can we do??

\end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{A not-so-good approach}

  To begin with, imagine our parameter vector $\vec{\theta}$ consists of $d$ real
  numbers represented as double-precision (64 bit) IEEE floating point numbers.

  \medskip
  Our hypothesis class thus consists of at most $k = 2^{64d}$ hypotheses.

  \medskip
  To guarantee $\varepsilon(\hat{h}) \le \varepsilon(h^*)+2\gamma$ to hold with
  probability at least $1-\delta$ we need
  \[ m \ge O\left(\frac{1}{\gamma^2}\log\frac{2^{64d}}{\delta}\right) =
     O\left(\frac{d}{\gamma^2}\log\frac{1}{\delta}\right) = O_{\gamma,\delta}(d) .\]

  \medskip
  So the number of training examples is \alert{linear in the number of parameters in the
  model} if we use finite floating point representations of real numbers and manage to
  implement ERM over this space.
\end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{A better approach} The approach based on finite floating point approximations of $\vec{\theta} \in \Rset^d$ depends on the \alert{parameterization of the hypothesis}. \medskip There is a better approach based on the \alert{characteristics of the hypothesis class in feature space}. \medskip \textbf{Definition}: Given a set $S = \{ \vec{x}^{(1)},\ldots,\vec{x}^{(d)} \}$ of points $\vec{x}^{(i)} \in \cal X$, we say that $\cal H$ \alert{shatters} $S$ if $\cal H$ can realize \alert{any labeling on $S$}. \medskip \textbf{Definition}: Given a hypothesis class $\cal H$, we define the \alert{Vapnik-Chervonenkis dimension} VC($\cal H$) as the size of the largest set that is shattered by $\cal H$. \end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{VC dimension} What is the VC dimension of a linear classifier in $\Rset^2$? \medskip \myfig{1.5in}{shatter-points}{Ng (2017), CS229 lecture notes.} \medskip Can these three points be shattered? \end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{VC dimension} The answer is yes: \medskip \myfig{4.5in}{shattering}{Ng (2017), CS229 lecture notes.} \end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{VC dimension} Note that for the VC dimension to be $d$, we have to be able to shatter \alert{some} point set of size $d$, \alert{not all} point sets: \medskip \myfig{3in}{unshatterable-points}{Ng (2017), CS229 lecture notes.} \end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{Bounded generalization error} \textbf{Theorem} (Vapnik). Let $\cal H$ be given, and let $d = \text{VC}({\cal H})$. Then with probability at least $1-\delta$, we have that for all $h \in \cal H$, \[ |\varepsilon(h)-\hat{\varepsilon}(h)| \le O\left( \sqrt{\frac{d}{m}\log\frac{m}{d}+\frac{1}{m}\log\frac{1}{\delta}} \right) .\] This also means \[ \varepsilon(\hat{h}) \le \varepsilon(h^*) + O\left( \sqrt{\frac{d}{m}\log\frac{m}{d}+\frac{1}{m}\log\frac{1}{\delta}} \right) .\] We have uniform convergence as $m$ grows, so long as $d$ is finite. \medskip This is considered by many \alert{the most important theorem} in learning theory! \end{frame}

\begin{frame}{Bounds on generalization error for infinite $\cal H$}{Bounded sample complexity} \textbf{Corollary}. For $|\varepsilon(h)-\hat{\varepsilon}(h)| \le \gamma$ to hold for all $h \in \cal H$, so that $\varepsilon(\hat{h}) \le \varepsilon(h^*) + 2\gamma$ with probability at least $1-\delta$, it suffices that $m = O_{\gamma,\delta}(d)$. \medskip The number of training examples needed to learn ``well'' using $\cal H$ is linear in the VC dimension of $\cal H$. \medskip For most hypothesis classes, VC dimension is roughly linear in the number of parameters. \medskip This means that for most classifiers, the number of training examples needed is usually roughly linear in the number of parameters of $\cal H$. \end{frame}

%======================================================================
\section{Conclusion on generalization bounds}
%======================================================================

\begin{frame}{Conclusion on generalization bounds} Now we've seen the most important results in learning theory. \medskip Generally, the story is encouraging: \alert{we can get close to the best hypothesis in the class} for most models just by \alert{gathering sufficient data}. \medskip But the devil is in the details.
For example: \begin{itemize} \item SVMs with a finite-dimensional $\phi(\vec{x})$ have finite VC dimension, but if $\phi(\vec{x})$ maps $\vec{x}$ to an infinite dimensional space (like the RBF kernel does) then the VC dimension is \alert{infinite}. \item We find that the $m$ needed to achieve a particular level of generalization error is linear in $\text{VC}({\cal H})$. But what is the constant of proportionality? (Getting data has a cost, and a factor of 100 would be a lot bigger than 10!) \item CNNs with millions of parameters can be trained effectively with thousands of examples. How is this possible? Are deep learning models cheating Vapnik somehow? \end{itemize} \end{frame}

\begin{frame}{Conclusion on generalization bounds} Deep learning is particularly interesting here. \medskip Neural network architectures' VC dimension is $O(|\mat{W}|)$ in the best case. \medskip So a model with millions of parameters needs millions of training examples to \alert{guarantee} good generalization according to Vapnik. \medskip However, note that Vapnik's theorem is a theorem of \alert{sufficiency}, not necessity. Techniques we have studied help us cheat Vapnik: \begin{itemize} \item Stopping training when validation set cost is minimal \item Weight decay and dropout (regularization) \end{itemize} \medskip There is a really interesting take, from the perspective of astrophysicists, by Lin, Tegmark, and Rolnick (2016) at \url{https://arxiv.org/abs/1608.08225}, which offers a theory of how deeper models may perform better due to the hierarchical structure of real-world generative processes. \medskip These are topics needing more research! \end{frame}

%======================================================================
\section{Model selection}
%======================================================================

\begin{frame}{Model selection}{Finding the best model} OK, so now we understand that for a hard problem, the ``right'' model, the one that is guaranteed to generalize to the test set, might be \alert{too biased}. \medskip We also understand that a less biased model may have VC dimension too high for the number of examples we have and will thus be \alert{prone to overfitting} (too much variance). \medskip How can we find the ``best'' model? For example, \begin{itemize} \item In polynomial regression $h_{\vec{\theta}}(x) = g(\theta_0 + \theta_1 x + \theta_2 x^2 + \cdots + \theta_k x^k)$, how do we decide the right $k$? \item In locally-weighted regression, how do we choose the bandwidth parameter $\tau$? \item With SVMs, how do we choose $C$ (and $\gamma$ for the RBF kernel)? \end{itemize} \end{frame}

\begin{frame}{Model selection}{Cross validation} In each of these cases, we have a set of models ${\cal M} = \{ M_1, \ldots, M_d\}$ we would like to select from. \medskip Following ERM, we might train each $M_i$ and pick the one with the lowest training error. \medskip But this will give us a high-variance model that doesn't test well. \medskip Our main alternative (using a validation set) is called \alert{cross validation}: \begin{itemize} \item Split the training set $S$ into $S_{train}$ and $S_{validation}$ using a ratio such as 70\% to 30\% or 80\% to 20\%. \item Train each model $M_i$ on $S_{train}$. \item Select the $M_i$ with lowest error on $S_{validation}$. \item (Optional) Re-train $M_i$ on all of $S$. \item Use $M_i$. \end{itemize} \end{frame}

\begin{frame}{Model selection}{$k$-fold cross validation} When data is scarce, we'd prefer to select the model that is best on \alert{all} the data, not just 30\% of the data!
\medskip Then \alert{$k$-fold cross validation} makes more sense: \begin{enumerate} \item Split $S$ into $k$ disjoint subsets $S_1,\ldots,S_k$. \item For each $M_i$ \begin{itemize} \item For $j = 1, \ldots, k$ \begin{enumerate} \item Train $M_i$ on $S_{train} = S_1 \cup \ldots \cup S_{j-1} \cup S_{j+1} \cup \ldots \cup S_k$ \item Test the trained $M_i$ on $S_{validation} = S_j$ \end{enumerate} \end{itemize} \item Select the $M_i$ with the lowest average error on $S_{validation}$ over the $k$ folds. \item Retrain $M_i$ on all of $S$. \end{enumerate} \medskip Typical values of $k$ are 5, 10, and $m$ (\alert{leave-one-out cross validation}). \end{frame}

\begin{frame}{Model selection}{$k$-fold cross validation} Be careful about \alert{how you split}! \begin{itemize} \item Usually, the partitioning of examples into $k$ folds is done uniformly at random. \item However, sometimes training items are related to each other (e.g.\ multiple crops of the same object in the same image). \item Randomized partitioning will put some related examples in different folds, leading to overestimated performance. \item In this case, it's necessary to ensure that the related examples are \alert{in the same fold}. \end{itemize} \end{frame}

\begin{frame}{Model selection}{Feature selection} A special case of model selection is \alert{feature selection}. \medskip Suppose you have many features, possibly even $n \gg m$.\footnote{This is possible in some domains such as biological sequence analysis, where we might want to find segments of DNA sequences in a particular genome that code for a particular biological function, or to a lesser extent in text classification.} \medskip We know that the VC dimension of a classifier with very large $n$ will typically be too high. \medskip So we treat the problem as model selection: among the $2^n$ possible subsets of features, \alert{find the subset that gives the best cross validation performance}. \medskip Trying all $2^n$ subsets is infeasible unless $n$ is very small. So we need a more efficient approach. \end{frame}

\begin{frame}{Model selection}{Forward feature selection} The \alert{forward search} method of feature selection: \begin{enumerate} \item Initialize ${\cal F} = \emptyset$. \item For $j \in \{ 1, \dots, n \}$ \begin{enumerate} \item For $i \in \{ 1, \ldots, n \} - {\cal F}$ \begin{enumerate} \item Let ${\cal F}_i = {\cal F} \cup \{ i \}$ \item Train on ${\cal F}_i$ with cross validation and obtain the cross validation error \end{enumerate} \item Let ${\cal F} = {\cal F} \cup \{ i \}$, where $i$ is the feature that gave the lowest cross validation error. \end{enumerate} \item Select the feature set that gave the lowest cross validation error over all tests. \end{enumerate} \end{frame}

\begin{frame}{Model selection}{Other wrapper methods} When we have a method that \alert{wraps} our learning algorithm, supplying a different feature set on each iteration, this is called \alert{wrapper model selection}. \medskip Another wrapper method is \alert{backward search}, in which we start with ${\cal F} = \{ 1, \ldots, n \}$ and \alert{iteratively remove the least informative feature}. \medskip Wrapper methods work well but take a lot of time, for example, $O(kn^2)$ calls to the optimization algorithm for forward selection with $k$-fold cross validation. \end{frame}

\begin{frame}{Model selection}{Filter feature selection} \alert{Filter feature selection} methods use a heuristic to decide which subsets of features to try.
\medskip Example: we may \alert{rank} features according to some measure of relatedness to the desired output, then incrementally add features in that order if they improve generalization. \medskip Most common measure of relatedness for discrete features: \alert{mutual information} $MI(X_i,Y)$ between feature $X_i$ and target $Y$: \[ MI(X_i,Y) = \sum_{x_i\in X_i} \sum_{y\in Y} p(x_i,y)\log\frac{p(x_i,y)}{p(x_i)p(y)} \] [Think about the value of $MI(X_i,Y)$ if $X_i$ and $Y$ are independent or identical and why that would be a good indication of relatedness.] \end{frame}

%======================================================================
\section{Regularization}
%======================================================================

\begin{frame}{Regularization}{Frequentist vs.\ Bayesian statistics} We've seen several specific techniques for \alert{regularization} with different models so far. \medskip Here we'll discuss the general framework of Bayesian statistics and how the Bayesian perspective on parameter estimation can be used for regularization. \medskip The \alert{frequentist} perspective in statistics is that our parameter vector $\vec{\theta}$ is \alert{constant but unknown}. \medskip Maximum likelihood is a general technique to estimate such a constant but unknown parameter. \medskip The \alert{Bayesian} perspective in statistics is that $\vec{\theta}$ is itself a \alert{random variable} whose value is unknown. \end{frame}

\begin{frame}{Regularization}{Bayesian estimation} The frequentist approach of maximum likelihood says that given $S = \{ (\vec{x}^{(i)},y^{(i)})\}_{i=1..m}$, we should find \[ \vec{\theta}_{ML} = \argmax_{\vec{\theta}} \prod_{i=1}^m p(y^{(i)} \mid \vec{x}^{(i)}; \vec{\theta}) .\] The Bayesian approach says we should consider the \alert{posterior distribution over the parameters} \begin{eqnarray*} p(\vec{\theta} \mid S) & = & \frac{p(S \mid \vec{\theta}) p(\vec{\theta})}{p(S)} \\ & = & \frac{\left(\prod_{i=1}^m p(y^{(i)}\mid \vec{x}^{(i)}, \vec{\theta})\right) p(\vec{\theta})}{\int_{\vec{\theta}}\left( \prod_{i=1}^m p(y^{(i)}\mid \vec{x}^{(i)},\vec{\theta}) \right) p(\vec{\theta})\, d\vec{\theta}} \end{eqnarray*} when making a decision. \end{frame}

\begin{frame}{Regularization}{Fully Bayesian prediction} Both probabilities $p(y^{(i)}\mid \vec{x}^{(i)}; \vec{\theta})$ (frequentist) and $p(y^{(i)}\mid \vec{x}^{(i)}, \vec{\theta})$ (Bayesian) are given by the model. \medskip For logistic regression, for example, we would use $h_{\vec{\theta}}(\vec{x}^{(i)})^{y^{(i)}}(1-h_{\vec{\theta}}(\vec{x}^{(i)}))^{1-y^{(i)}}$ with $h_{\vec{\theta}}(\vec{x}^{(i)}) = \frac{1}{1+e^{-\vec{\theta}^{\top} \vec{x}^{(i)}}}$. \medskip Decision making would treat $\vec{\theta}$ as random: \[ p(y \mid \vec{x}, S) = \int_{\vec{\theta}} p(y \mid \vec{x},\vec{\theta}) p(\vec{\theta} \mid S) d\vec{\theta} \] [How did we get that??] \medskip Then our answer might be the expected value of $y$ given $\vec{x}$: \[ \Expec[y \mid \vec{x},S] = \int_y y p(y \mid \vec{x},S) dy \] \end{frame}

\begin{frame}{Regularization}{MAP prediction} Usually, the full Bayesian predictor on the previous slide is too hard to compute in practice. \medskip Instead, we can go with a single point estimate for $\vec{\theta}$, leading to the \alert{maximum a posteriori} (MAP) estimate of $\vec{\theta}$: \[ \vec{\theta}_{MAP} = \argmax_{\vec{\theta}} \left( \prod_{i=1}^m p(y^{(i)} \mid \vec{x}^{(i)},\vec{\theta}) \right) p(\vec{\theta}) .
\] Note that this is the same as the ML estimate except for the \alert{prior term} $p(\vec{\theta})$. One possible prior is $\vec{\theta} \sim {\cal N}(\vec{0},\tau^2 \mat{I})$. \medskip Such a prior assigns higher density to parameter vectors with smaller norm, reducing the model's variance. \medskip \alert{Bayesian logistic regression} works well for many problems such as text classification where $n \gg m$. \end{frame} \end{document}
{ "alphanum_fraction": 0.6500667448, "avg_line_length": 29.6502423263, "ext": "tex", "hexsha": "b82012d7a5c9b3bb6b767e25e79cec7f0cd73b4e", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2021-12-29T10:17:14.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-20T03:51:24.000Z", "max_forks_repo_head_hexsha": "f99d49ddeeb838522162e31a29c12f29b8cbb2df", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "dsai-asia/ml", "max_forks_repo_path": "Lectures/04-LearningTheory/04-Learning-Theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f99d49ddeeb838522162e31a29c12f29b8cbb2df", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "dsai-asia/ml", "max_issues_repo_path": "Lectures/04-LearningTheory/04-Learning-Theory.tex", "max_line_length": 222, "max_stars_count": 7, "max_stars_repo_head_hexsha": "f99d49ddeeb838522162e31a29c12f29b8cbb2df", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "dsai-asia/ml", "max_stars_repo_path": "Lectures/04-LearningTheory/04-Learning-Theory.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-20T16:18:38.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-17T12:41:02.000Z", "num_tokens": 11328, "size": 36707 }
% !TeX root = thoughts.tex
\section{Recap of probabilistic inversion and Hamiltonian Mechanics}

\paragraph{Probabilistic inversion and classical sampling}
In probabilistic inversion one does not require regularization, in contrast to deterministic inversion. Instead, one explores the model space by quantifying uncertainties and calculating probabilities for any model. In a sense, this can be considered as the `complete' solution. The posterior is defined using Bayes' Theorem as follows:
\begin{gather}
p(\mathbf{q}|\mathbf{d}) = \frac{p(\mathbf{d}|\mathbf{q})\;p(\mathbf{q})}{ p(\mathbf{d})}
\end{gather}
Generally, we ignore the data evidence ($p(\mathbf{d})$) as it only provides a scaling of the posterior. If the prior is normally distributed in model space, we can define it by
\begin{gather}
p(\mathbf{q}) = k \cdot \exp{\left( -\frac{1}{2} \left[ \mathbf{q} -\mathbf{q_0}\right]^T \MatrixVariable{C}_M ^{-1}\left[ \mathbf{q} - \mathbf{q_0}\right]\right)}
\end{gather}
where $k$ is a scaling constant (i.e.\ for normalization), $\mathbf{q_0}$ the mean of the prior distribution and $\MatrixVariable{C}_M ^{-1}$ the inverse parameter covariance matrix. The parameter covariance matrix itself is given by
\begin{gather}
\left[ \MatrixVariable{C}_M \right]_{ij} = r_{ij} \sigma_i \sigma_j
\end{gather}
with $r_{ij}$ the correlation between parameters $q_i$ and $q_j$ and $\sigma_i$ the standard deviation of parameter $q_i$.

The prior in data space, $p(\mathbf{d}|\mathbf{q})$, can be seen as the likelihood of observing the data given a selected model. This generally incorporates a forward model, measurement uncertainties and forward modeling uncertainties. Because forward modeling uncertainties are generally very hard to quantify, they are usually ignored in actual implementations. With only the forward model and (normally distributed) measurement uncertainties, the prior in data space is as follows:
\begin{gather}
p(\mathbf{d}|\mathbf{q}) = k \cdot \exp{\left( -\frac{1}{2} \left[ G\left(\mathbf{q}\right) -\mathbf{d_0}\right]^T \MatrixVariable{C}_D ^{-1}\left[G\left(\mathbf{q}\right) - \mathbf{d_0}\right]\right)}
\end{gather}
with $k$ again a scaling constant, $\MatrixVariable{C}_D$ the data covariance matrix, $\mathbf{d_0}$ the observed data and $G\left(\mathbf{q}\right)$ the forward modeled data based on parameters $\mathbf{q}$ and forward model $G$. This forward model $G$ can be any non-linear model, but \gls{HMC} inversion algorithms are greatly simplified if it is actually a linear system, as will be illustrated later.

When these two densities are combined, one obtains the (unnormalized) posterior. The negative exponent of the posterior is generally called the misfit and has a special role in many inversions. In deterministic inversion, the aim is usually to find the global minimum of this function. In probabilistic inversion, one tries to map the entire misfit by (pseudo) random sampling. The misfit is given by
\begin{gather}
\chi(\mathbf{q}) = \frac{1}{2} \left[ \mathbf{q} -\mathbf{q_0}\right]^T \MatrixVariable{C}_M ^{-1}\left[ \mathbf{q} - \mathbf{q_0}\right] + \frac{1}{2} \left[ G\left(\mathbf{q}\right) -\mathbf{d_0}\right]^T \MatrixVariable{C}_D ^{-1}\left[G\left(\mathbf{q}\right) - \mathbf{d_0}\right].
\end{gather}
Exploring this misfit is typically done based on prior information.
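As a minimal numerical illustration of this misfit for a linear forward model (a sketch only; the NumPy-based function and variable names below are merely illustrative and not part of the actual inversion code), one could write:
\begin{verbatim}
import numpy as np

def misfit(q, q0, d0, G, Cm_inv, Cd_inv):
    # chi(q) = 0.5 (q - q0)^T Cm^{-1} (q - q0)
    #        + 0.5 (G q - d0)^T Cd^{-1} (G q - d0)
    dq = q - q0          # deviation from the prior mean
    r  = G @ q - d0      # data residual for a linear forward model
    return 0.5 * dq @ Cm_inv @ dq + 0.5 * r @ Cd_inv @ r
\end{verbatim}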
The Metropolis-Hastings algorithm draws new models entirely from the prior in model space, and accepts a proposed model either because it has a lower misfit, or randomly (with a probability that decays exponentially with the increase in misfit) if it has a larger misfit. If the prior model is far off or has large uncertainties, classical samplers might have a hard time finding acceptable models. An example which will be expanded on in the first section is a linear model where the prior data are off by double the forward modeled value, which over 10,000 samples led to fewer than 100 accepted models. Such acceptance rates are very wasteful of computing power. On the other hand, drawing new models is computationally cheap relative to the more intensive \gls{HMC} sampling.

A rather helpful way to write the misfit for a linear forward model with Gaussian uncertainties allows for easy computation of derivatives, as well as keeping the notation clean. I start by rewriting the misfit functional as:
\begin{align}
\chi(\mathbf{q}) =& \frac{1}{2} \left[ \mathbf{q} -\mathbf{q_0}\right]^T \MatrixVariable{C}_M ^{-1}\left[ \mathbf{q} - \mathbf{q_0}\right] + \frac{1}{2} \left[ \MatrixVariable{G}\mathbf{q} -\mathbf{d_0}\right]^T \MatrixVariable{C}_D ^{-1}\left[\MatrixVariable{G}\mathbf{q} - \mathbf{d_0}\right]\nonumber\\
=&\frac{1}{2} \mathbf{q}^T \MatrixVariable{C}_M ^{-1}\mathbf{q} - \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q} - \frac{1}{2} \mathbf{q}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0} + \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0}\nonumber\\
&+ \frac{1}{2}\left( \MatrixVariable{G} \mathbf{q} \right)^T \MatrixVariable{C}_D ^{-1}\left( \MatrixVariable{G} \mathbf{q} \right) -\frac{1}{2} \mathbf{d_0} ^T \MatrixVariable{C}_D ^{-1}\left( \MatrixVariable{G} \mathbf{q} \right)\nonumber\\
&-\frac{1}{2}\left( \MatrixVariable{G} \mathbf{q} \right)^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0} +\frac{1}{2}\mathbf{d_0}^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0}\nonumber\\
=&\frac{1}{2} \mathbf{q}^T \MatrixVariable{C}_M ^{-1}\mathbf{q} - \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q} - \frac{1}{2} \mathbf{q}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0} + \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0}\nonumber\\
&+ \frac{1}{2}\mathbf{q}^T \MatrixVariable{G}^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G} \mathbf{q} -\frac{1}{2} \mathbf{d_0} ^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G} \mathbf{q} -\frac{1}{2} \mathbf{q}^T \MatrixVariable{G}^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0} +\frac{1}{2}\mathbf{d_0}^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0}.
\end{align}
Now the trick is to realize that each of these terms is a scalar, so each one can be individually transposed without altering the equation. The fact that $ \mathbf{a}^T \mathbf{b} = \mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a} = \mathbf{b}^T \mathbf{a}$ should be enough proof. Also using the fact that covariance matrices are always symmetric, we rearrange the equation into second-, first- and zeroth-order terms in $\mathbf{q}$:
\begin{align}
\chi(\mathbf{q}) = &\frac{1}{2} \mathbf{q}^T \left[\MatrixVariable{C}_M^{-1} + \MatrixVariable{G}^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G} \right]\mathbf{q} -\left(\mathbf{q_0}^T \MatrixVariable{C}_M ^{-1} + \mathbf{d_0} ^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G}\right) \mathbf{q} \nonumber\\
& + \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0} +\frac{1}{2}\mathbf{d_0}^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0}.
\end{align}
Or, substituting the different components:
\begin{align}
\chi(\mathbf{q}) = &\frac{1}{2}\mathbf{q}^T \MatrixVariable{A} \; \mathbf{q} -\mathbf{b} \mathbf{q} + c,
\end{align}
where
\begin{align}
\MatrixVariable{A} &= \MatrixVariable{C}_M^{-1} + \MatrixVariable{G}^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G} \label{eq:linear_system.misfit_A}\\
\mathbf{b} &= \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1} + \mathbf{d_0} ^T \MatrixVariable{C}_D ^{-1} \MatrixVariable{G} \label{eq:linear_system.misfit_b}\\
c &= \frac{1}{2} \mathbf{q_0}^T \MatrixVariable{C}_M ^{-1}\mathbf{q_0} +\frac{1}{2}\mathbf{d_0}^T \MatrixVariable{C}_D ^{-1}\mathbf{d_0}.
\end{align}
Note that $\mathbf{b}$ is a row vector, so $\mathbf{b}\mathbf{q}$ is actually the dot product between $\mathbf{b}^T$ and $\mathbf{q}$. Now using this notation, one can easily compute the gradient of the misfit as
\begin{gather}
\frac{\partial \chi}{\partial q_i} = A_{ij} q_j - b_i.
\end{gather}
\index{Algorithm performance}Note that repeated indices are summed, and that using these quantities (which are precomputed once per inversion) sped up propagation by a factor of 10.

\paragraph{Hamiltonian Mechanics and sampling}
One way to propose more acceptable models is to use \gls{HMC} sampling. In this algorithm, the model is considered as a particle in $n$-dimensional space, where $n$ denotes the number of inversion parameters. Instead of assigning new parameters directly, one assigns momenta in each dimension and propagates the model for a certain amount of time in order to propose a new model. The propagation is based on Hamilton's equations. These two equations relate the energy of a system to its position and momentum.\index{Hamilton's equations} Hamilton's equations for a trajectory of a particle are given in this $n$-dimensional space as:
\begin{gather}
\frac{d q_i}{dt} = \frac{\partial H}{\partial p_i},\label{eq:ham1}\\
\frac{d p_i}{dt} = - \frac{\partial H}{\partial q_i}\label{eq:ham2}.
\end{gather}
In these equations, $q_i$ is the position in dimension $i$ and $p_i$ is the momentum in dimension $i$ (given by $p_i = \mu_i \frac{d q_i}{dt}$). $H$ stands for the Hamiltonian, which in physical Hamiltonian mechanics is the sum of potential and kinetic energy. The trick to \gls{HMC} sampling is to consider the misfit functional $\chi$ as a gravitational potential. This allows us to propagate our model over model space as if it were a particle moving under the influence of gravity defined by the posterior distribution. The actual definitions for potential and kinetic energy then become:
\begin{gather}
K(\mathbf{p}) = \frac{1}{2} \sum_{i=1}^{n} \frac{p_i^2}{\mu_i},\label{eq:kineticsimple}\\
U(\mathbf{q}) = \chi(\mathbf{q}).
\end{gather}
There are already many interesting options, remarks and conclusions to draw from this framework, but as it is usually nicely illustrated using a few examples, we will come to that later. It is noteworthy, however, that a simplification is made in the calculation of the momenta. In a later analysis, I extend this definition. See also the note about equation \eqref{eq:hamsimple1}. One thing which is useful to note now is that the derivatives on the right hand sides of equations \eqref{eq:ham1} and \eqref{eq:ham2} now simplify, for the Hamiltonian derivative with respect to momenta only depends on kinetic energy, while the derivative with respect to position only depends on potential energy.
The simplified representation is:
\begin{gather}
\frac{d q_i}{dt} = \frac{p_i}{\mu_i},\label{eq:hamsimple1}\\
\frac{d p_i}{dt} = - \left[\nabla_{q} \chi \right]_i\label{eq:hamsimple2}.
\end{gather}
An extension of this with a \index{Mass matrix}non-diagonal mass matrix will allow us to `link' parameters together in the propagation. This will be analyzed later on. Typically, however, for simple cases one chooses a diagonal positive definite mass matrix, which leads to equation \eqref{eq:hamsimple1}.

\index{Model propagation}The propagation of models is done using a leapfrog scheme, in which one splits up each time step into three separate calculations. First the momentum is propagated half an original time step, then the model parameters are propagated a full step, after which the momentum catches up again. Since I will not alter much on this side of the algorithm, for specifics I refer to section~5.2.3.3 of \cite{neal2011mcmc}.

\index{Acceptance rate}Acceptance of a new proposal works almost the same way in \gls{HMC} as in the Metropolis-Hastings algorithm. The difference is that one does not compare misfit magnitudes, but the actual Hamiltonian, or energy of the system. Mathematically, the acceptance probability can be expressed as
\begin{gather}
\min\left( 1, \frac{\exp\left[-H(\mathbf{q_\tau},\mathbf{p_\tau}) \right]}{\exp\left[-H(\mathbf{q},\mathbf{p}) \right]} \right).
\end{gather}
I like this better in words. As we explore model space, we assign new momenta in each iteration. This will result in a different energy of the system. If the Hamiltonian (the system's energy) has decreased from the previous sample, then the proposal is accepted unconditionally (because $\exp (H - H_\tau)>1$). If, however, the energy has increased, the proposal has a probability of $\exp \left(H - H_\tau\right)$ of being accepted. This can be expressed as exploration of energy levels.

\index{Measure of exploration}What is noteworthy is that, through the conservation of energy, the Hamiltonian will not change over the course of the trajectory. This allows two things: first, a quality control to ensure that propagation of the model is performed correctly; second, the chance of accepting a model is completely determined as soon as the momentum is assigned. This is very important, as it now also follows that propagation time doesn't influence the acceptance rate \textit{at all}. As one will see, trajectory length is more a tuning parameter which determines the algorithm's measure of exploration.

\index{Mass matrix}Mass matrices can be chosen in two ways. A standard, non-optimized option would be to choose the unit matrix. A simple analysis given in Andreas Fichtner's reader also reveals that parameters with different forward model derivatives oscillate unequally during a trajectory. A mitigation is to assign the mass matrix according to the forward model. The mass matrix that would result in equal oscillations would be
\begin{gather}\label{eq:massMatrixForward}
\MatrixVariable{M} = \MatrixVariable{G}^T \MatrixVariable{G}
\end{gather}
where just taking the diagonal of this matrix would `unlink' the parameters again and make oscillations roughly equal. We'll see later that this assumption is not fully correct.

\index{Momenta!Drawing} Momenta are drawn using the mass matrix diagonal (correlations are assumed to be zero): each of the $n$ momenta, one per parameter, is drawn from a zero-mean Gaussian with the square root of the corresponding mass as its standard deviation.
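To make the propagation and acceptance steps above concrete, here is a bare-bones sketch of a single \gls{HMC} iteration for the quadratic misfit $\chi(\mathbf{q})=\frac{1}{2}\mathbf{q}^T \MatrixVariable{A}\,\mathbf{q}-\mathbf{b}\mathbf{q}+c$ with a diagonal mass matrix. It is only an illustrative NumPy sketch (function and variable names are mine), not the implementation used for the timings mentioned earlier:
\begin{verbatim}
import numpy as np

def hmc_step(q, A, b, c, mass, dt, n_steps, rng):
    # Quadratic misfit chi(q) = 0.5 q^T A q - b.q + c, with gradient A q - b.
    chi  = lambda x: 0.5 * x @ A @ x - b @ x + c
    grad = lambda x: A @ x - b
    # Draw momenta: zero mean, standard deviation sqrt(mass) in each dimension.
    p = rng.normal(0.0, np.sqrt(mass))
    H_old = chi(q) + 0.5 * np.sum(p**2 / mass)
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog: half momentum step, full position steps, half momentum step.
    p_new -= 0.5 * dt * grad(q_new)
    for step in range(n_steps):
        q_new += dt * p_new / mass
        if step < n_steps - 1:
            p_new -= dt * grad(q_new)
    p_new -= 0.5 * dt * grad(q_new)
    H_new = chi(q_new) + 0.5 * np.sum(p_new**2 / mass)
    # Accept with probability min(1, exp(H_old - H_new)).
    if rng.uniform() < np.exp(H_old - H_new):
        return q_new
    return q
\end{verbatim}
Because the leapfrog integrator approximately conserves $H$, the acceptance probability is essentially fixed as soon as the momenta are drawn, regardless of the number of leapfrog steps, which is exactly the point made above about trajectory length being a pure exploration parameter.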
{ "alphanum_fraction": 0.7438491775, "avg_line_length": 101.762962963, "ext": "tex", "hexsha": "0962419164b30490458d55570d490168de236d29", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e302375a870359174254cc4e6c0515ef255dea3e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "larsgeb/hmc-documentation", "max_forks_repo_path": "thoughts/recap.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e302375a870359174254cc4e6c0515ef255dea3e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "larsgeb/hmc-documentation", "max_issues_repo_path": "thoughts/recap.tex", "max_line_length": 608, "max_stars_count": null, "max_stars_repo_head_hexsha": "e302375a870359174254cc4e6c0515ef255dea3e", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "larsgeb/hmc-documentation", "max_stars_repo_path": "thoughts/recap.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4127, "size": 13738 }
\section{Discussion}\label{sec:discussion}

% Summary.
In this work, we explored the use of distance learning between pairs of 2D cryo-EM projections from a 3D protein structure to infer the unknown orientation at which each projection was imaged.
Our two-step method relies on the estimation of pairwise distances between unseen projections, followed by the recovery of the orientations from these distances.
%\mdeff{Shall we repeat that? It's in the intro.}
%The benefit of this approach is that it would permit to estimate the unknown orientations in single-particle cryo-EM directly from the acquired dataset, \ie, without the need for an intermediate reconstruction procedure or an initial volume estimate; this has obvious attractive implications in the field.
%At the current stage of development,
The method has been evaluated on synthetic datasets for two different proteins.
The results provide key insights into the viability of the proposed scheme.
First, they demonstrate that an SNN can learn a distance function between projections that estimates the difference in their orientation (\secref{results:distance-estimation:learned}) and that is invariant to shifts and robust to increasing levels of noise (\secref{results:distance-estimation:sensitivity})---an important condition in cryo-EM\@.
Second, they demonstrate that an accurate estimation of distances leads to an accurate recovery of orientations (\secref{results:orientation-recovery:sensitivity}, \secref{results:distance-estimation:sensitivity}).
Finally, our method was able to recover orientations with an error of $0.12$ to $0.25$ radians ($7$ to $14\degree$)---leading to an initial volume with a resolution of $8$ to $15$\AA\ (\secref{results:orientation-recovery:reconstruction}).
%(from a ground-truth of 1 to $3.67$\AA, respectively)
%with a resolution of \banjac{$8.0$ to $9.6$\AA\ for the symmetric protein, and $12.2$ to $15.2$\AA\ for the asymmetric protein when the FSC is $0.5$}
In summary, the more accurate the estimated distances, the more precise the recovered orientations, and, ultimately, the higher the resolution of the reconstructed volume.

% Future work.
While the method is not yet ready to be deployed in practice, we believe that a series of developments could make it relevant for single-particle cryo-EM reconstruction.%
\footnote{Note that the present project will not be further continued by its authors due to other professional occupations. Hence, we strongly encourage anyone interested to build on these ideas and, hopefully, make it a practical tool.}
As previously discussed, the results underline the importance of learning an accurate distance estimator.
% $\widehat{d_p}$.
In this regard, the performance of the SNN could be improved.
% Method gains: mostly distance learning maybe recovery (not alignment).
%A set of additional technical developments could also further improve performance.
First, the architecture of the twin convolutional neural networks should be expanded and tuned.
% While it gave good performance for a reasonable runtime, it could be optimized.
% , as well as the distance metric between the two CNN outputs.
% For instance, one could parametrize the function $d_f$---which compares the similarity of the features outputs (see \figref{schematic:distance-learning})---as a feed-forward neural network instead of the current Euclidean distance, and learn its weights as well.
% \todo{It can definitely learn mirroring / full coverage, albeit not as well as half (because it doesn't exploit that symmetry).
% Motivation to incorporate this physical knowledge into the NN architecture. Motivation (on top of half in-plane being much easier for 5a1a) to predict the direction and in-plane angle separately.}
Second, training could be improved, perhaps by providing more supervision, for instance by separately predicting the differences in direction $(\theta_2,\theta_1)$ and in-plane angle $\theta_3$.
% \mdeff{Let's ignore improvements to recovery and concentrate the story on better distance learning -> better recovery -> better reconstruction.}
%Among others, one shall explore whether reducing the influence of larger distances in orientation recovery could bring further gain in accuracy.
% \mdeff{The following are issues with alignment---not directly related to our method---that we mentioned elsewhere.}
% Angle alignment didn't always work, even when $L_\text{OR}$ was low (examples?): We might miss a transformation in \eqnref{orientation-recovery-error}.
% Why did we need to align with \eqnref{orientation-recovery-error} before reconstructing with ASTRA\@?

% Data gains.
Importantly, the SNN would be better trained on a more diverse cryo-EM dataset.
Indeed, its success as a faithful estimator eventually relies on our capacity to generate a synthetic training dataset whose data distribution is diverse enough to cover that of unseen projection datasets.
Such realistic cryo-EM projections could be generated by relying on a more expressive formulation of the cryo-EM physics and by taking advantage of the thousands of atomic models available in the PDB\@.
% \mdeff{PDB database sounds redundant as PDB stands for protein database.}
In particular, a necessary extension will be to include the effects of the PSF and to evaluate its impact.

% Towards practical use: unseen proteins (also data) and real measurements.
A final phase of tests before deploying the method on real cryo-EM measurements will be to extensively test it on ``unseen proteins'', \ie, proteins whose simulated projections have never been seen by the SNN\@.
% Early experiments indicate the feasibility of this enterprise (see \apxref{unseen-proteins}).
In this regard, an interesting aspect of our method is that the twin networks within the SNN intrinsically predict the \textit{relationship} between projections, allowing the SNN as a whole to abstract away the particular volume.
%Consequently, a well-trained distance estimator could be relatively robust to the ``mismatch'' of volumes within the training set.
%In the same line of thought,
Learning should benefit from the profound structural similarity shared by proteins---after all, they are all derived from the same $21$ building blocks.
% Learning exploits statistical effects, given here by biological building block.

Training our 4.5M parameter model (see Appendices~\ref{apx:siamese-architecture} and~\ref{apx:optimization-settings}) has the following negative environmental impact: it consumes $13$ kWh of energy, which produces $6.36$ lbs of $\text{CO}_2$ on average~\cite{Strubell_Ganesh_McCallum_2020}.
% \mdeff{I propose to omit the following sentence (which is a truism and is stated in the previous paragraph) to finish on a more forceful note.}
% Eventually, the performance of the method on real cryo-EM measurements will provide the real measure of its potential, the imaging conditions being notoriously challenging in single-particle cryo-EM\@.
% Further down the line and still in the real of the hypothetical, new approaches for the training of the SNN able to handle the handling of proteins with multiple conformational states could be explored.
{ "alphanum_fraction": 0.8005602241, "avg_line_length": 115.1612903226, "ext": "tex", "hexsha": "c9dd83c7055c274413479e5f1f6ac3b1082a952a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ebae0e4e28c706c233d88f5c34dd427b4274cb47", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "mdeff/paper-cryoem-orientation-recovery", "max_forks_repo_path": "sections/4_conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ebae0e4e28c706c233d88f5c34dd427b4274cb47", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "mdeff/paper-cryoem-orientation-recovery", "max_issues_repo_path": "sections/4_conclusion.tex", "max_line_length": 345, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ebae0e4e28c706c233d88f5c34dd427b4274cb47", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "mdeff/paper-cryoem-orientation-recovery", "max_stars_repo_path": "sections/4_conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-23T18:20:37.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-23T18:20:37.000Z", "num_tokens": 1546, "size": 7140 }
\section{Preliminaries}

\subsection{Complexity of counting}

\begin{frame}
\begin{definition}[The complexity class $\# \mathsf{P}$]
A function $f \colon \{0,1\}^{*} \to \mathbb{N}$ is in $\# \mathsf{P}$ if there is a nondeterministic polynomial-time Turing machine $M_f$ such that, for each input $w$, $M_f$ has exactly $f(w)$ accepting paths.
\end{definition}
\begin{example}[$\#SAT$]
\textbf{Instance:} A boolean formula $F$ in conjunctive normal form.
\textit{Example:} $F \equiv (x_1 \lor \neg x_2 \lor \neg x_3) \wedge (x_1 \lor x_3) \wedge (\neg x_2 \lor x_4)$.
\textbf{Output:} The number of satisfying assignments of $F$. \\
\end{example}
\begin{definition}[$\# P$-hardness]
A computational problem $C$ is $\# \mathsf{P}$-hard if, for any $f \in \# \mathsf{P}$, the problem ``evaluating $f$'' is Turing reducible to $C$.
%A function $f$ is $\# \mathsf{P}$-complete if $f \in \#P$ and the existence of a polynomial-time algorithm that computes $f$ implies the existence of such an algorithm for any other function in $\# \mathsf{P}$.
\end{definition}
\end{frame}

\begin{frame}
\begin{definition}[$\# P$-completeness]
A function $f \colon \{0,1\}^{*} \to \mathbb{N}$ is $\# \mathsf{P}$-complete if $f \in \# \mathsf{P}$ and evaluating $f$ is $\# \mathsf{P}$-hard.
\end{definition}
\begin{enumerate}
\item {\color{TurkishRose} \textbf{Cook-Levin theorem:}} $\#SAT$ is $\# \mathsf{P}$-complete.
\item There are $\# \mathsf{P}$-complete problems whose decision version is trivial.
\end{enumerate}
\begin{example}[Counting independent sets is $\# \mathsf{P}$-complete]
\begin{minipage}{0.4\textwidth}
\textbf{Instance:} A graph $G$. \\
\textit{Example: }
\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw}]
\node (A) at (0,0) {A};
\node (B) at (1,0.5) {B};
\node (C) at (2,0) {C};
\node (D) at (3.5,0) {D};
\end{scope}
\begin{scope}[>={Stealth[black]}, every edge/.style={draw=black,very thick}]
\path [-] (A) edge node {} (B);
\path [-] (A) edge node {} (C);
\path [-] (B) edge node {} (C);
\path [-] (C) edge node {} (D);
\end{scope}
\end{tikzpicture}
\end{minipage}
\vspace{3mm}
\textbf{Output:} The number of independent sets of $G$.\\
\end{example}
\end{frame}

\subsection{Partition functions}

\begin{frame}
\begin{definition}[Partition function]
Given a family $\mathcal{F}_n$ of subsets of the set $\{1, \ldots, n\}$, we define the partition function of $\mathcal{F}_n$ as the polynomial
\begin{equation*}
P_{\mathcal{F}_n}(x_1, \ldots, x_n) = \sum_{S \in \mathcal{F}_n} \prod_{j \in S} x_j.
\end{equation*}
\vspace*{-2mm}
\end{definition}
\begin{itemize}
\item Hard to compute: enumerating $\mathcal{F}_n$ is usually not feasible.
% \begin{itemize}
% \item \normalsize
% the family $\mathcal{F}$ is exponentially large on $n$;
% \item \normalsize
% enumerating $\mathcal{F}$ is believed to not be feasible in polynomial-time.
% \end{itemize}
\item Many partition functions arise in statistical mechanics.
\end{itemize}
\begin{example}[The independent sets polynomial]
\vspace*{-1mm}
Let $G$ be a graph. The independent sets polynomial of $G$ is
\begin{equation*}
Z(G; y) = \sum_{I \text{ independent set of } G} y^{|I|}.
\end{equation*}
\vspace*{-1mm}
\end{example}
\end{frame}
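\begin{frame}
\begin{example}[A worked instance]
As a quick check of this definition, consider the four-vertex example graph shown earlier (edges $AB$, $AC$, $BC$ and $CD$). Its independent sets are
\begin{equation*}
\emptyset,\ \{A\},\ \{B\},\ \{C\},\ \{D\},\ \{A,D\},\ \{B,D\},
\end{equation*}
so that
\begin{equation*}
Z(G; y) = 1 + 4y + 2y^2 ,
\end{equation*}
and in particular $Z(G; 1) = 7$ counts the independent sets of $G$.
\end{example}
\end{frame}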
{ "alphanum_fraction": 0.6181311018, "avg_line_length": 36.9587628866, "ext": "tex", "hexsha": "7b8d01f54dba7c698d369bb3cd7958d3205cb6cc", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2021-08-15T06:39:01.000Z", "max_forks_repo_forks_event_min_datetime": "2015-10-14T17:54:14.000Z", "max_forks_repo_head_hexsha": "64fdc06ddf76702b9392e871b1fdd0aee6000b30", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andreshp/LatexTemplates", "max_forks_repo_path": "Slides/Sections/preliminaries.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "64fdc06ddf76702b9392e871b1fdd0aee6000b30", "max_issues_repo_issues_event_max_datetime": "2016-04-11T09:21:16.000Z", "max_issues_repo_issues_event_min_datetime": "2016-04-11T09:14:42.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andreshp/PlantillasLatex", "max_issues_repo_path": "Slides/Sections/preliminaries.tex", "max_line_length": 216, "max_stars_count": 23, "max_stars_repo_head_hexsha": "64fdc06ddf76702b9392e871b1fdd0aee6000b30", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andreshp/LatexTemplates", "max_stars_repo_path": "Slides/Sections/preliminaries.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-22T20:02:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-10-14T17:46:47.000Z", "num_tokens": 1249, "size": 3585 }
\documentclass{article}
%\usepackage[utf8]{inputenc}
\usepackage{amssymb,amsmath, hyperref}
\usepackage{fancyhdr}
\usepackage[title]{appendix}
\usepackage{enumitem}
\pagestyle{fancy}
\fancyhf{}
\rhead{Math - 6373}
\lhead{Project 2}
\chead{Autoencoders}
\rfoot{Page \thepage}
\lfoot{A. Radillo}
\usepackage{graphicx}
\usepackage{xcolor}
\title{Autoencoders training for handwritten digits classification {\color{blue}(version 2)} \\ {\large Project 2 - Math-6373 - Prof. Azencott}}
\author{Adrian Radillo - PSID: 1328335}
\newcommand{\bi}{\begin{itemize}}
\newcommand{\ei}{\end{itemize}}
\makeatletter
\g@addto@macro\@floatboxreset\centering
\makeatother
\usepackage{tabularx,ragged2e,booktabs,caption}
\newcolumntype{C}[1]{>{\Centering}m{#1}}
\renewcommand\tabularxcolumn[1]{C{#1}}
\usepackage{listings}
\usepackage{color}
%red, green, blue, yellow, cyan, magenta, black, white
\definecolor{mygreen}{RGB}{28,172,0} % color values Red, Green, Blue
\definecolor{mylilas}{RGB}{170,55,241}
\newcommand{\lp}{\left(}
\newcommand{\rp}{\right)}
%\usepackage{mdframed}
\begin{document}
\lstset{language=Matlab,%
    %basicstyle=\color{red},
    breaklines=true,%
    morekeywords={matlab2tikz},
    keywordstyle=\color{blue},%
    morekeywords=[2]{1}, keywordstyle=[2]{\color{black}},
    identifierstyle=\color{black},%
    stringstyle=\color{mylilas},
    commentstyle=\color{mygreen},%
    showstringspaces=false,%without this there will be a symbol in the places where there is a space
    numbers=left,%
    numberstyle={\tiny \color{black}},% size of the numbers
    numbersep=9pt, % this defines how far the numbers are from the text
    emph=[1]{for,end,break},emphstyle=[1]\color{red}, %some words to emphasise
    %emph=[2]{word1,word2}, emphstyle=[2]{style},
}
\maketitle
{\color{blue}
\begin{description}
\item[Remarks:] Everything that is in blue has been added in version 2.\\
\item[GitHub:] This whole project is available at the public repository,\\ \url{https://github.com/aernesto/autoencoders_MATLAB}
\end{description}
}
% -----------------------------------------------------------------------------------------
% DATABASE
% -----------------------------------------------------------------------------------------
\section{Database}
I use the task and data as in project 1. This is a set of 60,000 14x14 grayscale images of handwritten digits for the training set and 10,000 images for the test set, all taken from the MNIST database and pre-processed by myself in project 1.
% -----------------------------------------------------------------------------------------
% Autoencoder ARCHITECTURE
% -----------------------------------------------------------------------------------------
\section{Autoencoder architecture}
\begin{itemize}
\item The input and output layers have $14\times14=196$ units. The range of each one of these units is $[0,1]$ as the initial pixel intensities ranging from 0 to 255 were rescaled by 1/255.
\item My tentative values for $h$ are $50,100,150$ for the $h<n_1$ case, and $h=250,300,350$ for the $h>n_1$ case.
\end{itemize}
% -----------------------------------------------------------------------------------------
% FIRST EXPERIMENT
% -----------------------------------------------------------------------------------------
\section{First experiment}
\begin{itemize}
\item My {\color{blue}Software Tool (}ST{\color{blue})} is MATLAB, and more specifically, the Neural Network Toolbox from the R2015b version.
I created custom neural networks for the three small {\color{blue}($h<n_1$)} autoencoders with the function create\_NN() presented in appendix~\ref{app:create}.
\item {\color{blue}Below} are short and non-exhaustive descriptions of the options available in this toolbox:
\begin{description}
\item[learning] A neural network object has a property called trainFcn. This object property can be set to many different values, which correspond to a variety of learning algorithms {\color{blue}(see figure~\ref{fig:trainFcn})}. The backpropagation gradient descent algorithm corresponds to the name `traingd'. However, in order to update the weights after the presentation of a whole batch, I believe that the name `trainb' is more appropriate. But I really struggled to find clear documentation on this point. Figure~\ref{fig:trainFcn} below lists all the other options offered by the ST. \begin{figure}[bth!] \centering \fbox{\includegraphics[width=\textwidth]{trainFcn.png}} \caption{Existing built-in training functions in my ST. Taken from~\cite{Demuth2006}.}\label{fig:trainFcn} \end{figure}
\item[initialization] I used the `rands' initFcn property value for both the weights and the biases. This samples independent values from the uniform distribution over the interval $[-1,1]$ for each weight and bias. {\color{blue}See figure~\ref{fig:initFcn} for a list of the other available options.} \begin{figure}[bth!] \centering \fbox{\includegraphics[width=\textwidth]{initFcn.png}} \caption{Existing built-in initializing functions in my ST. Taken from~\cite{Demuth2006}. This picture only concerns the inputWeights property but `rands' exists as well for the layerWeights and biases properties.}\label{fig:initFcn} \end{figure}
\item[batch learning] The ideal batch learning option that would have suited my needs in the MATLAB Neural Network Toolbox is the `trainingOptions' function called for a convolutional neural network object with the stochastic gradient descent with momentum (`sgdm') solver in the R2016b release of MATLAB. However, my version of MATLAB didn't have this option, so I set out to produce my training batches myself (as in project 1). I produced 3,000 batches containing 500 cases each {\color{blue}(this is the batch size)}, taken from the 60,000 cases in the training set. Consecutive batches were constructed with an overlap of 100 cases, and the whole training set was used 20 times in order to produce the batches. Each sweep through the training set was preceded by a shuffling of the order of its elements. I refer you to the make\_batches.m function in the appendix of my project 1 to see the source code of this function. In appendix~\ref{app:train_diabolo} I show how the train() function from my ST was called sequentially for each batch in order to train my autoencoder.
\item[step size] The step size is controlled by the `lr' property of the trainFcn property. The ST offered a learning algorithm based on gradient descent with an adaptive learning rate (`traingda'), but I was not sure that this corresponded to the equations seen in class. The `traingd' learning function, which I used, uses a fixed learning rate. Since I called the train() function sequentially on each batch, I only set the number of epochs in the training properties to 1 for each iteration. Then, in between iterations, I wasn't sure whether I should change the learning rate manually, or if the training algorithm would do it for me. This is why there are a lot of commented lines in appendix~\ref{app:train_diabolo}.
{\color{blue}In any case, in my code, I used the default value of 0.01 for the learning rate property `lr'.}
\item[{\color{blue}stop training}] {\color{blue} When training the `diabolo' autoencoders, my criterion to stop the training was simply to stop after the 20$^\text{th}$ sweep through the training set, or in other words, after the presentation of the last batch, \# 3,000. For the sparse autoencoders with $h>n_1$, I used the `trainscg' function, for which the training was stopped as soon as one of the following criteria was met\footnote{\href{https://www.mathworks.com/help/nnet/ref/trainscg.html?searchHighlight=trainscg&s_tid=doc_srchtitle}{{\color{blue}MATLAB manual reference}}}:
\renewcommand{\labelitemii}{$\bullet$}%options for bullet symbol
\begin{itemize}
\item The maximum number of epochs (sweeps) is reached (I set it to 20).
\item The maximum amount of time is exceeded (I left it at its default value, which is $\infty$).
\item Performance is minimized to the goal (I set it to $MSE=0$).
\item The performance gradient falls below min\_grad (I left it at its default value, which is $\|\text{grad}(MSE(W)) \|=10^{-6}$).
\item Validation performance has increased more than max\_fail times since the last time it decreased (irrelevant for me as I didn't use validation).
\end{itemize}}
\end{description}
\end{itemize}
% -----------------------------------------------------------------------------------------
% Training `small' autoencoders
% -----------------------------------------------------------------------------------------
\section*{Training {\color{blue}`diabolo'} autoencoders}
{ \color{blue}
\subsection*{Training autoencoders and plotting the RMSE}
As mentioned above, appendix~\ref{app:create} contains the code used to create custom autoencoders with respective hidden layer sizes: 50, 100, 150 (recall that $n_1=196$). Each one of these neural networks was subsequently trained according to the source code contained in appendix~\ref{app:train_diabolo}. Figure~\ref{fig:mse_diabolo} shows the resulting RMSE as a function of the batch number for the three distinct hidden layer sizes. The three curves are very similar; I am therefore quite doubtful about the correctness of my algorithm in appendix~\ref{app:train_diabolo}.
\subsection*{RMSE$^*$ of trained autoencoders}
The RMSE for the trained autoencoders was computed by taking the square root of the output of the script presented in appendix~\ref{app:msemat}. The results are presented in figure~\ref{fig:rmse_global}, together with the results from the sparse autoencoders ($h>n_1$). Once again, values of the RMSE$^*$ above 0.5 indicate that my training was inefficient for the `diabolo' autoencoders. Such inefficient training might have come from an erroneous algorithm, from too small a number of sweeps during training, or from too low a learning rate. Also, I obtained very similar RMSE values for the three hidden layer sizes. This probably indicates that my training algorithm was erroneous.
}
\begin{figure}[bth!]
\centering
\fbox{\includegraphics[width=0.65\textwidth]{mse_diabolo.png}}
\caption{{\color{blue}Evolution of the RMSE through training, for the three distinct hidden layer sizes. The abscissa has the same unit for all plots.}}\label{fig:mse_diabolo}
\end{figure}
\subsection*{{\color{blue}Explicit mathematical expression to compute $\text{grad}(MSE(W))$}}
Below is a list of the mathematical notation that I used to derive the backpropagation rule by hand. The final result is expressed in equations~\eqref{one} and~\eqref{two}.
\begin{itemize}
\item Matrix of weights between input and hidden layer:
\[ W=\left(w_{ij}^{(1)}\right)_{ij}, \quad 1\leq i\leq n_2; \quad1\leq j \leq n_1+1, \]
where $w_{in_1+1}^{(1)}$ is always the threshold of unit $i$.
\item Matrix of weights between hidden and output layer:
\[ W=\left(w_{ij}^{(2)}\right)_{ij}, \quad 1\leq i\leq n_3; \quad1\leq j \leq n_2+1, \]
where $w_{in_2+1}^{(2)}$ is always the threshold of unit $i$.
\item The total number of cases in the training set or in the batch is denoted by $M$.
\item The state of input unit $i$ under presentation of case $m$ is denoted $x_i$.
\item The state of hidden unit $j$ under presentation of case $m$ is denoted $h_j(W,m)$.
\item The state of unit $k$ on the output layer under presentation of case $m$ is denoted $o_k(W,m)$. Hence, the output unit states are: $o_1(W,m),\ldots,o_{n_3}(W,m)$.
\item Linear output of hidden unit $i$:
\[ A_i^{(1)}(W,m)=\sum_{j=1}^{n_1+1}x_jw_{ij}^{(1)} \]
\item The logistic function is denoted $\sigma$. We have:
\[ \sigma(v)=\frac{1}{1+e^{-v}}\qquad \sigma'(v)=\frac{e^{-v}}{\lp1+e^{-v}\rp^2} \]
\item Output of unit $j$ in hidden layer, under presentation of case $m$:
\[ h_j(W,m)=\sigma\lp A_j^{(1)}(W,m) \rp \]
\item Linear output of the output layer unit $i$:
\[ A_i^{(2)}(W,m)=\sum_{j=1}^{n_2+1}h_j(W,m)w_{ij}^{(2)} \]
\item Output of unit $i$ in output layer, under presentation of case $m$:
\[ o_i(W,m)=\sigma\lp A_i^{(2)}(W,m) \rp \]
\item Since the network is an autoencoder, the target output $\text{OUT}_m$ is the input itself: the desired value of output unit $k$ under presentation of case $m$ is simply $x_k$ (in particular, $n_3=n_1$).
\item We denote the MSE function by the letter $f$:
\begin{align}
f(W)&=\frac{1}{M}\sum_{m=1}^{M}\left |\left| \widehat{\text{OUT}}_m-\text{OUT}_m \right |\right |_2^2\\
&=\frac{1}{M}\sum_{m=1}^M \sum_{k=1}^{n_3}\lp o_{k}(W,m)-x_k\rp^2
\end{align}
\item The update rule for the weights is:
\[ \Delta W_n=-\text{grad} \lp f(W)\rp\cdot \frac{\gamma}{\epsilon_n} \]
\end{itemize}
The formula for the partial derivative of the MSE with respect to a weight from the H-OUT layer, $w^{(2)}_{ij}$, is:
\begin{equation}
\frac{\partial f}{\partial w^{(2)}_{ij}}(W)=\frac{2}{M}\sum_{m=1}^M\sigma'\left(A_i^{(2)}(W,m)\right)h_j(W,m)\left[o_i(W,m)-x_i\right] \label{one}
\end{equation}
And when it is with respect to a weight from the IN-to-H layer, $w^{(1)}_{ij}$, we get:
\begin{equation}
\frac{\partial f}{\partial w^{(1)}_{ij}}(W)=\frac{2}{M} \sum_{m=1}^M\left[\sigma'\left(A_i^{(1)}(W,m)\right)x_j \sum_{k=1}^{n_3}\sigma'\left(A_k^{(2)}(W,m)\right) w^{(2)}_{ki}\left[o_k(W,m)-x_k\right] \right]\label{two}
\end{equation}
% -----------------------------------------------------------------------------------------
% SECOND EXPERIMENT: `large' autoencoders with sparsity
% -----------------------------------------------------------------------------------------
\section{Second experiment: `large' autoencoders with sparsity}
I trained the `large' autoencoders with the code presented in appendix~\ref{app:sparse}.
\subsection{Results}
The following table {\color{blue}in figure~\ref{fig:rmse_global}} was generated with the script presented in appendix~\ref{app:msemat} after training the 9 autoencoders.
{\color{blue}We observe that the RMSE of the trained sparse autoencoders is generally less than half of the RMSE obtained for the `diabolo' autoencoders. For the sparsity target $\rho=5$\% we observe no difference in RMSE across the hidden layer sizes ($h=250$, $300$, $350$). This might be due to the relatively small number of sweeps used in training (20). For the sparsity target $\rho=15$\%, the RMSE of the trained autoencoders decreases slightly as the hidden layer size increases.}
\begin{figure}[bth!]
\centering
{\color{blue}
\begin{tabular}{|l | c cc|ccc|ccc|}
\hline
$h$& $50$&$100$&$150$& $250$&$300$&$350$& $250$&$300$&$350$\\
$\rho$& &&& $ 0.05$&$0.05$&$ 0.05$& $ 0.15$&$0.15$&$0.15$\\
\hline
training & 0.59 & 0.58 & 0.58 & 0.27 &0.27 & 0.27& 0.27 & 0.26 &0.25\\
test & 0.59&0.58&0.58&0.27&0.27&0.27&0.27&0.26&0.25\\
\hline
\end{tabular}}
\caption{{\color{blue}RMSE (bottom two rows) for the nine \emph{trained} autoencoders (one per column), computed over both the training and the test set. The first two rows of the table designate the hidden layer size $h$ and the sparsity target $\rho$ of each autoencoder.}}\label{fig:rmse_global}
\end{figure}
\section{Detailed analysis of hidden layer structure and efficiency}
I did not have time to do this part of the homework.
{\color{blue}The activations of the hidden layer of each trained autoencoder were computed and projected onto the first three principal components, using the scripts presented in appendix~\ref{app:pca}. The 3D scatter plots of these projections are presented in figure~\ref{fig:pca}.}
\begin{figure}[bth!]
\centering
\fbox{\includegraphics[width=\textwidth]{pca.png}}
\caption{{\color{blue}Projection of the activations of the hidden layers of each of the 9 autoencoders onto the first three principal components. The hidden layer size is $h$, and $k$ is the smallest number of PCA dimensions required to explain 90\% of the variance of the hidden layer activations. The top row of plots is for the cases $h<n_1$, the middle row for the cases $h>n_1$ and $\rho=5$\%, and the bottom row for $\rho=15$\%.}}\label{fig:pca}
\end{figure}
{\color{blue}
For each network, let $k$ be the smallest number of eigenvalues that, jointly, explain more than 90\% of the variance in the activations of the hidden layer. I do not know why $k$ could not be computed for the two sparse autoencoders having hidden layer sizes $h=250$ and $300$, and sparsity target $\rho=5$\%. I observe that the value of $k$ varies greatly between the `diabolo' and the sparse autoencoders. I do not observe any clustering of the points in the PCA projection space. I hypothesize that none of my networks was sufficiently trained.}
{\color{blue}
\section{Autoencoding efficiency}
I did not have time to address this question.}
\bibliographystyle{plain}
\bibliography{auto}
\begin{appendices}
% -----------------------------------------------------------------------------------------
% MATLAB code
% -----------------------------------------------------------------------------------------
\section{MATLAB code}
\subsection{Custom network with NN Toolbox in MATLAB}
\label{app:create}
\lstinputlisting{create_NN.m}
\subsection{Training the `diabolo' autoencoders}
{\color{blue}When training the networks with hidden layer sizes 100 and 150, the string `net\_50' below was replaced by `net\_100' and `net\_150' respectively.
Also, the commented lines were never used in my final results; they are merely vestiges of my trial and error.}
\label{app:train_diabolo}
\lstinputlisting{train_autoencoder.m}
\subsection{Training autoencoders with sparsity constraint}
\label{app:sparse}
\lstinputlisting{train_sparse.m}
\begin{center}
\line(1,0){250}
\end{center}
\lstinputlisting{train_sparse_UH.m}
\subsection{Generating matrix of MSE on training and test sets for all autoencoders}
\label{app:msemat}
\lstinputlisting{mse_perf_small.m}
\subsection{PCA on hidden layers}
\label{app:pca}
\lstinputlisting{pca_hidden.m}
\begin{center}
\line(1,0){250}
\end{center}
\lstinputlisting{create_IH.m}
\begin{center}
\line(1,0){250}
\end{center}
{\color{blue}The following is the script used to produce figure~\ref{fig:pca}.}
\lstinputlisting{pca_global.m}
\end{appendices}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % % ZEBRA DZ - Reference Manual -- LaTeX Source % % % % Front Material: Title page, % % Copyright Notice % % Preliminary Remarks % % Table of Contents % % EPS files : cernlogo.eps, cnastit.eps % % % % Editor: Michel Goossens / CN-AS % % Last Mod.: 27 Jan 1995 9:00 mg % % % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Tile page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \def\Ptitle#1{\special{ps: /Printstring (#1) def} \epsfbox{cnastit.eps}} \begin{titlepage} \vspace*{-23mm} \includegraphics[height=30mm]{cern15.eps}% \hfill \raisebox{8mm}{\Large\bf CERN Program Library Long Writeups Q100/Q101} \hfill\mbox{} \begin{center} \mbox{}\\[6mm] \mbox{\Ptitle{ZEBRA}}\\[2cm] {\LARGE Overview of the ZEBRA System}\\[4mm] {\LARGE MZ -- Memory Management}\\[4mm] {\LARGE FZ -- Sequential Input/Output}\\[4mm] {\LARGE RZ -- Random-access Input/Output}\\[4mm] {\LARGE DZ -- Debugging Tools}\\[4mm] {\LARGE DZDOC -- Bank documentation tools}\\[4mm] {\LARGE TZ -- Title Handling}\\[4mm] {\LARGE JZ91 -- Processor Support}\\[4mm] {\LARGE Error Diagnostics}\\[25mm] \end{center} \vfill \begin{center}\Large CERN Geneva, Switzerland\end{center} \end{titlepage} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Copyright page % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \thispagestyle{empty} \framebox[\textwidth][t]{\hfill\begin{minipage}{0.96\textwidth}% \vspace*{3mm} \begin{center}Copyright Notice\end{center} \setlength{\parskip}{.6\baselineskip} CERN Program Library entries \textbf{Q100} and \textbf{Q101} \textbf{The ZEBRA system} \copyright{} Copyright CERN, Geneva 1995 Copyright and any other appropriate legal protection of these computer programs and associated documentation reserved in all countries of the world. These programs or documentation may not be reproduced by any method without prior written consent of the Director-General of CERN or his delegate. Permission for the usage of any programs described herein is granted apriori to those scientific institutes associated with the CERN experimental program or with whom CERN has concluded a scientific collaboration agreement. Requests for information should be addressed to: \vspace*{-.5\baselineskip} \begin{center}\ttfamily \begin{tabular}{l} CERN Program Library Office \\ CERN-CN Division \\ CH-1211 Geneva 23 \\ Switzerland \\ Tel. +41 22 767 4951 \\ Fax. 
+41 22 767 8630 \\ Internet: [email protected] \end{tabular} \end{center} \vspace*{2mm} \end{minipage}\hfill}%end of minipage in framebox \vspace{6mm} {\bf Trademark notice: All trademarks appearing in this guide are acknowledged as such.} \vfill \begin{tabular}{l@{\quad}l@{\quad}>{\small\tt}l} {\em Contact Persons\/}: general & Jamie Shiers /CN & (shiers\atsign cern.ch) \\ \phantom{\em Contact Persons\/}: FZ, MZ, TZ, JZ91 & Julius Zoll /ECP & (zoll\atsign cern.ch) \\ \phantom{\em Contact Persons\/}: DZDOC & Otto Schaile/PPE-Opal & (o.schaile\atsign cern.ch)\\[1mm] \textem{Technical Realization\/}: & Michel Goossens /CN & (goossens\atsign cern.ch)\\[1cm] \textem{Edition -- February 1995} \end{tabular} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Introductory material % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pagenumbering{roman} \setcounter{page}{1} \section*{Preliminary remarks} This manual consists of several parts: \begin{Itemize} \item An overview of the ZEBRA system. \item A reference section with a description of the DZ, MZ, FZ, JZ91, RZ and TZ packages. \item An example program showing how to use the MZ and DZ routines of ZEBRA. \item A description of the DZDOC documentation system. \item A list of the diagnostics messages generated by the FZ and MZ parts of the ZEBRA system. \end{Itemize} \subsection*{Conventions} In this manual examples are in \texttt{monotype face} and strings to be input by the user are {\Ucom{underlined}}. In the index the page where a routine is defined is in {\bf bold}, page numbers where a routine is referenced are in normal type. This manual flags output parameters in subroutine calls, i.e. parameters which return values to the caller, by an asterisk \Lit{"*"} following the argument's name. If the input value of such a parameter is also significant this is marked by prefixing a second asterisk. A parameter which is a link is marked by an exclamation mark \Lit{"!"}. The types of variables follow from the Fortran default typing convention, except that variables beginning with the letters "{\tt ch}" are of type {\tt CHARACTER}. The Fortran labelled \Lit{COMMON /\QUEST/IQUEST(100)} serves for communication between the Zebra system and the user, and also as scratch area in Zebra. This document has been produced using \LaTeX~\cite{bib-LATEX} with the \Lit{cernman} style option, developed at CERN. A gzipped compressed PostScript file \Lit{zebra.ps.gz}, containing a complete printable version of this manual, can be obtained by anonymous ftp as follows (commands to be typed by the user are underlined)% \footnote{If you do not have the gnu {\tt gunzip} utility on your system you can get the uncompressed PostScript version by typing the command \Ucom{get zebra.ps}, without the {\tt gz} suffix. In order to save Internet bandwidth, you are, however, strongly urged to try and install the {\tt gunzip} utility since gzipped files are about three times smaller than their unzipped equivalents.}: \vspace*{3mm} \begin{XMP} \Ucom{ftp asisftp.cern.ch} Trying 128.141.201.136... Connected to asis01.cern.ch. 220 asis01 FTP server (Version 6.10 ...) ready. Name (asis01:username): \Ucom{anonymous} Password: \Ucom{your\_{}mailaddress} 230 Guest login ok, access restrictions apply. ftp> \Ucom{cd cernlib/doc/ps.dir} ftp> \Ucom{binary} ftp> \Ucom{get zebra.ps.gz} ftp> \Ucom{quit} \end{XMP} \vspace*{3mm} \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Tables of contents ... 
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \tableofcontents \listoffigures \cleardoublepage %\listoftables % Local Variables: % mode: latex % TeX-master: "zebramain" % End:
In this chapter we will combine the specification languages \oz{} and \picalc{} into the combination $\pi$-OZ, and we will study its transformational semantics.
\section{Syntax}
\label{sec_comp_oz_pi_syntax}
\input{chapters/mainpart/the_compination_pi_oz/sections/syntax/syntax}
\section{Transformational semantics}
\label{sec_comp_oz_pi_transformational_semantics}
\input{chapters/mainpart/the_compination_pi_oz/sections/transformational_semantics/transformational_semantics}
% Created 2021-10-15 Fri 07:39 % Intended LaTeX compiler: pdflatex \documentclass[presentation,aspectratio=169]{beamer} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage{grffile} \usepackage{longtable} \usepackage{wrapfig} \usepackage{rotating} \usepackage[normalem]{ulem} \usepackage{amsmath} \usepackage{textcomp} \usepackage{amssymb} \usepackage{capt-of} \usepackage{hyperref} \usepackage{pifont} \newcommand{\cmark}{\textcolor{green!80!black}{\ding{51}}} \usepackage{amssymb} \usepackage{pgfplotstable} \DeclareMathOperator{\shift}{q} \DeclareMathOperator{\diff}{p} \usepackage{khpreamble, euscript, mathtools} \DeclareMathOperator{\atantwo}{atan2} \newcommand*{\ctrb}{\EuScript{C}} \newcommand*{\obsv}{\EuScript{O}} \usetheme{default} \author{Kjartan Halvorsen} \date{\today} \title{State feedback} \hypersetup{ pdfauthor={Kjartan Halvorsen}, pdftitle={State feedback}, pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 26.3 (Org mode 9.4.6)}, pdflang={English}} \begin{document} \maketitle \section{Apollo moon lander} \label{sec:orgf2ea2b4} \begin{frame}[label={sec:org348834a}]{The Apollo lunar module} \begin{center} \includegraphics[width=0.7\linewidth]{../../figures/fig-apollo} \end{center} \pause \alert{Activity} Which is the transfer function of the system? \[1: \; G(s) = \frac{\frac{g}{J} }{s^2}\qquad 2: \; G(s) = \frac{\frac{g}{J} }{s(s^2 + 1)} \qquad 3: \; G(s) = \frac{\frac{g}{J} }{s^3}\] \end{frame} \begin{frame}[label={sec:orgfaa9732}]{State variables} \begin{columns} \begin{column}{0.65\columnwidth} \begin{center} \includegraphics[width=\linewidth]{../../figures/fig-apollo} \end{center} State variables: \(x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T = \begin{bmatrix} \dot{\theta} & \theta & \dot{z} \end{bmatrix}^T\). \end{column} \begin{column}{0.45\columnwidth} With dynamics \[ \begin{cases} \dot{x}_1 = \ddot{\theta} = \frac{1}{J} u\\ \dot{x}_2 = \dot{\theta} = x_1\\ \dot{x}_3 = \ddot{z} = g\theta = gx_2 \end{cases} \] \end{column} \end{columns} \end{frame} \begin{frame}[label={sec:orgc9f7ee6}]{State-space model} State variables: \(x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T = \begin{bmatrix} \dot{\theta} & \theta & \dot{z} \end{bmatrix}^T\). With dynamics \[ \begin{cases} \dot{x}_1 = \ddot{\theta} = \frac{1}{J} u\\ \dot{x}_2 = \dot{\theta} = x_1\\ \dot{x}_3 = \ddot{z} = g\theta = gx_2 \end{cases} \] \alert{Activity} Fill the matrix \(A\) and vector \(B\). \[ \dot{x} = \begin{bmatrix} \dot{x}_1\\\dot{x}_2\\\dot{x}_3\end{bmatrix} = \underbrace{\begin{bmatrix} \textcolor{white}{0} & \textcolor{white}{0} &\textcolor{white}{0} \\\textcolor{white}{1} & \textcolor{white}{0}& \textcolor{white}{0}\\ \textcolor{white}{0}& \textcolor{white}{g} &\textcolor{white}{0} \end{bmatrix}}_{A} \begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix} + \underbrace{\begin{bmatrix} \textcolor{white}{\frac{1}{J}} \\ \textcolor{white}{0} \\\textcolor{white}{0} \end{bmatrix}}_{B} u \] \end{frame} \begin{frame}[label={sec:org0597e2b}]{State-space model} \end{frame} \begin{frame}[label={sec:org2af19a2}]{State-space model} State variables: \(x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T = \begin{bmatrix} \dot{\theta} & \theta & \dot{z} \end{bmatrix}^T\). 
With dynamics \[ \begin{cases} \dot{x}_1 = \ddot{\theta} = \frac{1}{J} u\\ \dot{x}_2 = \dot{\theta} = x_1\\ \dot{x}_3 = \ddot{z} = g\theta = gx_2 \end{cases} \] \[ \dot{x} = \begin{bmatrix} \dot{x}_1\\\dot{x}_2\\\dot{x}_3\end{bmatrix} = \underbrace{\begin{bmatrix} \textcolor{red!60!black}{0} & \textcolor{red!60!black}{0} &\textcolor{red!60!black}{0} \\\textcolor{red!60!black}{1} & \textcolor{red!60!black}{0}& \textcolor{red!60!black}{0}\\ \textcolor{red!60!black}{0}& \textcolor{red!60!black}{g} &\textcolor{red!60!black}{0} \end{bmatrix}}_{A} \begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix} + \underbrace{\begin{bmatrix} \textcolor{red!60!black}{\frac{1}{J}} \\ \textcolor{red!60!black}{0} \\\textcolor{red!60!black}{0} \end{bmatrix}}_{B} u \] \pause \alert{Activity} What are the poles of the system? \end{frame} \begin{frame}[label={sec:org76223fa}]{Sensors} \begin{center} \includegraphics[width=0.8\linewidth]{../../figures/fig-apollo} \end{center} \alert{Activity} What sensors are needed for state feedback? \end{frame} \begin{frame}[label={sec:orgb7c22f6}]{Controllability} \[ \dot{x} = \begin{bmatrix} \dot{x}_1\\\dot{x}_2\\\dot{x}_3\end{bmatrix} = \underbrace{\begin{bmatrix} \textcolor{black!60!black}{0} & \textcolor{black!60!black}{0} &\textcolor{black!60!black}{0} \\\textcolor{black!60!black}{1} & \textcolor{black!60!black}{0}& \textcolor{black!60!black}{0}\\ \textcolor{black!60!black}{0}& \textcolor{black!60!black}{g} &\textcolor{black!60!black}{0} \end{bmatrix}}_{A} \begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix} + \underbrace{\begin{bmatrix} \textcolor{black!60!black}{\frac{1}{J}} \\ \textcolor{black!60!black}{0} \\\textcolor{black!60!black}{0} \end{bmatrix}}_{B} u \] Forming the controllability matrix. Note that \[ A^2 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ g & 0 & 0 \end{bmatrix} \] \[ \mathcal{C} = \begin{bmatrix} B & AB & A^2B\end{bmatrix} = \begin{bmatrix} \frac{1}{J} & 0 & \textcolor{white}{0}\\0 & \frac{1}{J} & \textcolor{white}{0} \\0 & 0 & \textcolor{white}{\frac{1}{J}g} \end{bmatrix} \] \pause \alert{Activity} Is the system controllable? \end{frame} \begin{frame}[label={sec:org827db36}]{Linear state feedback} \begin{columns} \begin{column}{0.6\columnwidth} \[ \dot{x} = \begin{bmatrix} \dot{x}_1\\\dot{x}_2\\\dot{x}_3\end{bmatrix} = \underbrace{\begin{bmatrix} \textcolor{black!60!black}{0} & \textcolor{black!60!black}{0} &\textcolor{black!60!black}{0} \\\textcolor{black!60!black}{1} & \textcolor{black!60!black}{0}& \textcolor{black!60!black}{0}\\ \textcolor{black!60!black}{0}& \textcolor{black!60!black}{g} &\textcolor{black!60!black}{0} \end{bmatrix}}_{A} \begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix} + \underbrace{\begin{bmatrix} \textcolor{black!60!black}{\frac{1}{J}} \\ \textcolor{black!60!black}{0} \\\textcolor{black!60!black}{0} \end{bmatrix}}_{B} u \] Introduce linear state feedback \[ u = -\textcolor{morange}{L}x + \textcolor{mbluegreen}{l_0} r,\] where \(r\) is a reference signal. \end{column} \begin{column}{0.4\columnwidth} Closed-loop system \[\dot{x} = (A-B\textcolor{morange}{L})x + \textcolor{mbluegreen}{l_0}Br\] Since the system is \alert{controllable}, we can find a gain vector \(\textcolor{morange}{L}\) that places the eigenvalues of \(A-B\textcolor{morange}{L}\) (the poles of the closed-loop system) at desired locations. 
\end{column}
\end{columns}
\end{frame}

\begin{frame}[label={sec:org17d03c2}]{Linear state feedback}
\small
The poles of \(\dot{x} = (A-B\textcolor{morange}{L})x + \textcolor{mbluegreen}{l_0}Br\) are given by the solutions to the characteristic equation
\begin{align*}
\det \Big(sI - (A-B\textcolor{morange}{L})\Big) &= 0\\
\det \left(\begin{bmatrix} s & 0 & 0\\ 0 & s & 0\\ 0 & 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\1 & 0 & 0\\0 & g & 0\end{bmatrix} + \begin{bmatrix} \frac{1}{J}\textcolor{morange}{l_1} & \frac{1}{J}\textcolor{morange}{l_2} & \frac{1}{J}\textcolor{morange}{l_3}\\0 & 0 & 0\\0 & 0 & 0\end{bmatrix}\right) &= 0\\
\det \begin{bmatrix} s+\frac{1}{J}\textcolor{morange}{l_1} & \frac{1}{J}\textcolor{morange}{l_2} & \frac{1}{J}\textcolor{morange}{l_3}\\-1 & s & 0\\0 & -g & s \end{bmatrix} &= 0\\
(s+\frac{1}{J}\textcolor{morange}{l_1})s^2 + \frac{1}{J}\textcolor{morange}{l_2}s +\frac{1}{J}g\textcolor{morange}{l_3} &= 0\\
s^3 + \frac{1}{J}\textcolor{morange}{l_1}s^2 + \frac{1}{J}\textcolor{morange}{l_2}s +\frac{1}{J}g\textcolor{morange}{l_3} &= 0
\end{align*}
\end{frame}

\begin{frame}[label={sec:orga0df4f2}]{Where to place the closed-loop poles}
\begin{columns}
\begin{column}{0.35\columnwidth}
\begin{center}
  \begin{tikzpicture}[scale=0.7]
    \pgfmathsetmacro{\wc}{2}
    \pgfmathsetmacro{\rp}{\wc*cos(45)}
    \draw[->] (-4,0) to (2,0) node[below] {Re};
    \draw[->] (0,-3) to (0,3) node[left] {Im};
    \draw[dashed, black!80] (0,\wc) arc[radius=\wc{}cm, start angle=90, end angle=270];
    \node[anchor=center, red!80!black] at (-\rp, \rp) {\Large $\times$ };
    \node[anchor=center, red!80!black] at (-\rp, -\rp) {\Large $\times$ };
    \node[anchor=center, red!80!black] at (-\wc, 0) {\Large $\times$ };
    \draw[thin, <->] (0,0) -- node[above] {$\frac{1}{\tau_c}$} (-\rp, \rp);
  \end{tikzpicture}
\end{center}
\end{column}
\begin{column}{0.65\columnwidth}
Desired closed-loop characteristic polynomial
\begin{align*}
(s-p_1)(s-p_2)(s-p_3) &= (s+\frac{1}{\tau_c})(s^2 + \frac{\sqrt{2}}{\tau_c}s + \frac{1}{\tau_c^2})\\
&= s^3 + \frac{1 + \sqrt{2}}{\tau_c}s^2 + \frac{1+\sqrt{2}}{\tau_c^2}s + \frac{1}{\tau_c^3}
\end{align*}
\end{column}
\end{columns}
\end{frame}

\begin{frame}[label={sec:org6f6ed56}]{Determining the state feedback gain}
By linear state feedback we have characteristic polynomial
\[\det \Big(sI - (A-B\textcolor{morange}{L})\Big) = s^3 + \frac{1}{J}\textcolor{morange}{l_1}s^2 + \frac{1}{J}\textcolor{morange}{l_2}s + \frac{1}{J}g\textcolor{morange}{l_3}.\]
And we want to achieve the characteristic polynomial
\[ s^3 + \frac{1 + \sqrt{2}}{\tau_c}s^2 + \frac{1+\sqrt{2}}{\tau_c^2}s + \frac{1}{\tau_c^3}. \]
\alert{Activity} What do we do next?
\end{frame}

\begin{frame}[label={sec:org2393acf}]{Determining the state feedback gain}
Set the characteristic polynomial obtained from \(\det \Big(sI - (A-B\textcolor{morange}{L})\Big)\) equal to the desired characteristic polynomial
\[ s^3 + \frac{1}{J}\textcolor{morange}{l_1}s^2 + \frac{1}{J}\textcolor{morange}{l_2}s + \frac{1}{J}g\textcolor{morange}{l_3} = s^3 + \frac{1 + \sqrt{2}}{\tau_c}s^2 + \frac{1+\sqrt{2}}{\tau_c^2}s + \frac{1}{\tau_c^3} \]
Solve for the gains by setting corresponding coefficients equal.
\begin{equation*} \begin{rcases} s^2: \quad & \frac{1}{J}\textcolor{morange}{l_1} = \frac{1 + \sqrt{2}}{\tau_c}\\ s^1: \quad & \frac{1}{J}\textcolor{morange}{l_2} = \frac{1 + \sqrt{2}}{\tau_c^2}\\ s^0: \quad & \frac{1}{J}g\textcolor{morange}{l_3} = \frac{1}{\tau_c^3} \end{rcases} \Rightarrow \begin{rcases} \quad \textcolor{morange}{l_1} &= \frac{J(1 + \sqrt{2})}{\tau_c}\\ \quad \textcolor{morange}{l_2} &= \frac{J(1 + \sqrt{2})}{\tau_c^2}\\ \quad \textcolor{morange}{l_3} &= \frac{J}{g\tau_c^3} \end{rcases} \end{equation*} \end{frame} \begin{frame}[label={sec:org8879e69}]{The gain \(l_0\)} \begin{center} \includegraphics[width=0.6\linewidth]{../../figures/block-apollo} \end{center} \[ G(s) = \frac{\frac{g}{J}}{s^3}\] It can be shown that state feedback does not change the numerator of the transfer function, only the denominator, so \[G_c(s) = \textcolor{mbluegreen}{l_0}\frac{\frac{g}{J}}{s^3 + \frac{1 + \sqrt{2}}{\tau_c}s^2 + \frac{1+\sqrt{2}}{\tau_c^2}s + \frac{1}{\tau_c^3}}\] We want unit static gain, \(G_c(0) = 1\) \pause \alert{Activity} Determine the gain \(\textcolor{mbluegreen}{l_0}\) \end{frame} \end{document}
\section{Course program}
The course is structured into four (4) chapters. The four chapters take place during the six weeks of the course.

\subsection{Chapter 1 - statically typed programming languages}
\paragraph*{Topics}
\begin{itemize}
	\item What are types?
	\item (\textbf{Advanced}) Typing and semantic rules: how do we read them?
	\item Introduction to Java and C\# (\textbf{advanced}) with type rules and semantics
	\begin{itemize}
		\item Classes
		\item Fields/attributes
		\item Constructor(s), methods, and static methods
		\item Statements, expressions, and primitive types
		\item Arrays
		\item (\textbf{Advanced}) Lambdas
	\end{itemize}
\end{itemize}

\subsection{Chapter 2 - reuse through polymorphism}
\paragraph*{Topics}
\begin{itemize}
	\item What is code reuse?
	\item Interfaces, abstract classes and implementation
	\item Implicit vs explicit conversion
	\item (\textbf{Advanced}) Implicit and explicit conversion type rules
	\item Runtime type testing
\end{itemize}

%\subsection{Chapter 3 - reuse through generics}
%\paragraph*{Topics}
%\begin{itemize}
%	\item Using generic parameters
%	\item (\textbf{Advanced}) Using covariance and contravariance in the presence of generic parameters
%	\item (\textbf{Advanced}) Designing interfaces and implementation in the presence of generic parameters
%\end{itemize}

\subsection{Chapter 3 - architectural considerations}
\paragraph*{Topics}
\begin{itemize}
	\item Encapsulation
	\item Input controllers
	\item State machines
\end{itemize}

\subsection{Chapter 4 - yet more architectural considerations}
\paragraph*{Topics}
\begin{itemize}
	\item (\textbf{Advanced}) Composition versus inheritance
	\item (\textbf{Advanced}) Entity/component model
\end{itemize}
\documentclass{article} \usepackage{graphicx} \usepackage{listings} \def\lstxml{ \lstset{language=XML, keywordstyle=\ttfamily, identifierstyle=\ttfamily, stringstyle=\ttfamily, showstringspaces=false, columns=[l]flexible, escapeinside={(@*}{*@)}, morekeywords={encoding, mrow,math,mfrac,mi,msqrt,mo,mn,span,nobr,img} } } \def\lstjs{ \lstset{language=Java, keywordstyle=\ttfamily, identifierstyle=\ttfamily\bfseries, stringstyle=\ttfamily, showstringspaces=false, columns=[l]flexible, morekeywords={cvox,Api,Math,defineRule} } } \setlength{\parindent}{0cm} \pagestyle{empty} \begin{document} \section*{Fraction Example} \textbf{Mathematics:} \[ \frac{numerator}{denominator} \] \textbf{MathML representation:} \lstxml \begin{lstlisting} <mfrac> <mrow>Numerator</mrow> <mrow>Denominator</mrow> </mfrac> \end{lstlisting} \textbf{JavaScript rule:} \lstjs \begin{lstlisting} defineRule( 'mfrac', 'default.short', '[t] "Start Frac"; [n] ./*[1]; [t] "Over"; [n] ./*[2]; [t] "End Frac"', 'self::mathml:mfrac'); \end{lstlisting} \textbf{GUI Mock:}\vspace*{1ex} \includegraphics[width=\linewidth]{Editor_Mock} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
\documentclass[a4paper]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{imakeidx} \usepackage[hidelinks]{hyperref} %% A screen friendly geometry: \usepackage[paper=a5paper,scale=0.9]{geometry} %% PPL Setup \newcommand{\assign}{\mathrel{\mathop:}=} \newcommand{\concat}{\mathrel{+\!+}} \newcommand{\f}[1]{\mathsf{#1}} \newcommand{\true}{\top} \newcommand{\false}{\bot} \newcommand{\imp}{\rightarrow} \newcommand{\revimp}{\leftarrow} \newcommand{\equi}{\leftrightarrow} \newcommand{\entails}{\models} \newcommand{\eqdef}{\; \raisebox{-0.1ex}[0mm]{$ \stackrel{\raisebox{-0.2ex}{\tiny \textnormal{def}}}{=} $}\; } \newcommand{\iffdef}{\n{iff}_{\mbox{\scriptsize \textnormal{def}}}} \newcommand{\pplmacro}[1]{\mathit{#1}} \newcommand{\ppldefmacro}[1]{\mathit{#1}} \newcommand{\pplparam}[1]{\mathit{#1}} \newcommand{\pplparamidx}[2]{\mathit{#1}_{#2}} \newcommand{\pplparamplain}[1]{#1} \newcommand{\pplparamplainidx}[2]{#1_{#2}} \newcommand{\pplparamsup}[2]{\mathit{#1}^{#2}} \newcommand{\pplparamsupidx}[3]{\mathit{#1}^{#2}_{#3}} \newcommand{\pplparamplainsup}[2]{#1^{#2}} \newcommand{\pplparamplainsupidx}[3]{#1^{#2}_{#3}} \newcommand{\pplparamnum}[1]{\mathit{X}_{#1}} %% %% We use @startsection just to obtain reduced vertical spacing above %% macro headers which are immediately after other headers, e.g. of sections %% \makeatletter% \newcounter{entry}% \newcommand{\entrymark}[1]{}% \newcommand\entryhead{% \@startsection{entry}{10}{\z@}{12pt plus 2pt minus 2pt}{0pt}{}}% \makeatother \newcommand{\pplkbBefore} {\entryhead*{}% \setlength{\arraycolsep}{0pt}% \pagebreak[0]% \begin{samepage}% \noindent% \rule[0.5pt]{\textwidth}{2pt}\\% \noindent} % \newcommand{\pplkbDefType}[1]{\hspace{\fill}{{[}#1{]}\\}} \newcommand{\pplkbBetween} {\setlength{\arraycolsep}{3pt}% \\\rule[3pt]{\textwidth}{1pt}% \par\nopagebreak\noindent Defined as\begin{center}} \newcommand{\pplkbAfter}{\end{center}\end{samepage}\noindent} \newcommand{\pplkbBodyBefore}{\par\noindent where\begin{center}} \newcommand{\pplkbBodyAfter}{\end{center}} \newcommand{\pplkbFreePredicates}[1]{\f{free\_predicates}(#1)} % \newcommand{\pplkbRenameFreeOccurrences}[3]{\f{rename\_free\_occurrences}(#1,#2,#3)} \newcommand{\pplIsValid}[1]{\noindent This formula is valid: $#1$\par} \newcommand{\pplIsNotValid}[1]{\noindent This formula is not valid: $#1$\par} \newcommand{\pplFailedToValidate}[1]{\noindent Failed to validate this formula: $#1$\par} \newcounter{def} \makeindex \begin{document} % % Doc at position 0 % \title{Definientia} \date{Revision: May 9, 2016; Rendered: \today} \maketitle \noindent Definability in terms of projections, for computing definientia by interpolation. Makes use of scratch\_forgetting. Formalized with the \href{http://cs.christophwernhard.com/pie/}{\textit{PIE}} system. % % Doc at position 339 % \section{Definientia} The following formula is valid if and only if formula $G$ is definable in terms of predicates $S$ within formula $F$. Definientia are exactly the interpolants of its antecedent and consequent. % % Statement at position 563 % \pplkbBefore \index{definiens(G,F,S)@$\ppldefmacro{definiens}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S})$}$\begin{array}{lllll} \ppldefmacro{definiens}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S}) \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{proj}(\pplparamplain{S},(\pplparamplain{F} \land \pplparamplain{G})) \imp \lnot \pplmacro{proj}(\pplparamplain{S},(\pplparamplain{F} \land \lnot \pplparamplain{G})). 
\end{array} $\pplkbAfter % % Doc at position 630 % The following specification based on literal projection allows to restrict the polarity of the predicates in $S$: % % Statement at position 752 % \pplkbBefore \index{definiens_lit(G,F,S)@$\ppldefmacro{definiens\_lit}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S})$}$\begin{array}{lllll} \ppldefmacro{definiens\_lit}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S}) \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{projlit}(\pplparamplain{S},(\pplparamplain{F} \land \pplparamplain{G})) \imp \lnot \pplmacro{projlit}(\pplparamplainidx{S}{1},(\pplparamplain{F} \land \lnot \pplparamplain{G})), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplainidx{S}{1} \assign \mathrm{duals\ of}\; \pplparamplain{S}. \end{array}$\pplkbBodyAfter % % Doc at position 852 % $\mathit{definiens\_lit\_lemma}$ is an incomplete version of $\mathit{definiens\_lit}$ that yields formulas which are more efficient to handle: % % Statement at position 1003 % \pplkbBefore \index{definiens_lit_lemma(G,F,S)@$\ppldefmacro{definiens\_lit\_lemma}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S})$}$\begin{array}{lllll} \ppldefmacro{definiens\_lit\_lemma}(\pplparamplain{G},\pplparamplain{F},\pplparamplain{S}) \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{lemma\_projlit}(\pplparamplain{S},(\pplparamplain{F} \land \pplparamplain{G})) \imp \lnot \pplmacro{lemma\_projlit}(\pplparamplainidx{S}{1},(\pplparamplain{F} \land \lnot \pplparamplain{G})), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplainidx{S}{1} \assign \mathrm{duals\ of}\; \pplparamplain{S}. \end{array}$\pplkbBodyAfter % % Doc at position 1120 % Definability of a single predicate in terms of a given set of predicates: % % Statement at position 1201 % \pplkbBefore \index{predicate_definiens(P,F,S)@$\ppldefmacro{predicate\_definiens}(\pplparamplain{P},\pplparamplain{F},\pplparamplain{S})$}$\begin{array}{lllll} \ppldefmacro{predicate\_definiens}(\pplparamplain{P},\pplparamplain{F},\pplparamplain{S}) \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{definiens}(\pplparamplainidx{P}{X},\pplparamplain{F},\pplparamplain{S}), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplain{N} \assign \mathrm{arity\ of }\; \pplparamplain{P}\; \mathrm{ in }\; \pplparamplain{F},\\ \pplparamplain{X} \assign \mathrm{a\ sequence\ of\ \pplparamplain{N}\ fresh\ symbols},\\ \pplparamplainidx{P}{X} \assign \pplparamplain{P}(\pplparamplain{X}). \end{array}$\pplkbBodyAfter % % Doc at position 1340 % Definability of a single predicate in terms of all other predicates: % % Statement at position 1416 % \pplkbBefore \index{predicate_definiens(P,F)@$\ppldefmacro{predicate\_definiens}(\pplparamplain{P},\pplparamplain{F})$}$\begin{array}{lllll} \ppldefmacro{predicate\_definiens}(\pplparamplain{P},\pplparamplain{F}) \end{array} $\pplkbBetween $\begin{array}{lllll} \exists \pplparamplain{P} \, (\pplparamplain{F} \land \pplparamplainidx{P}{X}) \imp \lnot \exists \pplparamplain{P} \, (\pplparamplain{F} \land \lnot \pplparamplainidx{P}{X}), \end{array} $\pplkbAfter \pplkbBodyBefore $ \begin{array}{l}\pplparamplain{N} \assign \mathrm{arity\ of }\; \pplparamplain{P}\; \mathrm{ in }\; \pplparamplain{F},\\ \pplparamplain{X} \assign \mathrm{a\ sequence\ of\ \pplparamplain{N}\ fresh\ symbols},\\ \pplparamplainidx{P}{X} \assign \pplparamplain{P}(\pplparamplain{X}). 
\end{array}$\pplkbBodyAfter % % Doc at position 1574 % \subsection{Definientia: Examples} % % Statement at position 1617 % \pplkbBefore \index{ex_definiens_1@$\ppldefmacro{ex\_definiens_{1}}$}$\begin{array}{lllll} \ppldefmacro{ex\_definiens_{1}} \end{array} $\pplkbBetween $\begin{array}{lllll} \pplmacro{definiens}(\mathsf{p}\mathsf{a},\\ \hphantom{\pplmacro{definiens}(} \forall \mathit{x} \, (\mathsf{p}\mathit{x} \equi \mathsf{q}\mathit{x}) \land \forall \mathit{x} \, (\mathsf{p}\mathit{x} \equi \mathsf{r}\mathit{x}),\\ \hphantom{\pplmacro{definiens}(} {[}\mathsf{q}{]}). \end{array} $\pplkbAfter \noindent Input: $\pplmacro{ex\_definiens_{1}}.$\\ \noindent Result of interpolation: \[\begin{array}{lllll} \mathsf{q}\mathsf{a}. \end{array} \] \printindex \end{document}
\documentclass[conc-doc]{subfiles}
\begin{document}
	\chapter[Ranges]{Ranges}

Concurnas has native support for numerical ranges. For example:

\begin{lstlisting}
range = 0 to 10 //integer range from 0 to 10 inclusive
\end{lstlisting}

Above, range is now of type \lstinline{IntSequence}. In Concurnas, sequences implement the \lstinline{java.util.Iterable} interface, meaning that they are able to be used within for loops and anywhere else where an iterator is appropriate. Let's extract the values of the above range:

\begin{lstlisting}
result = x for x in range //result == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
//note that the range is inclusive of the start and finishing items specified
\end{lstlisting}

The above example denotes an integer sequence. A long sequence is created when either of the specified range bounds is of type long:

\begin{lstlisting}
range LongSequence = 0L to 10
\end{lstlisting}

\section{Steps}
Sequences can be created with specific increments via the step method:

\begin{lstlisting}
stepped = 0 to 10 step 2
result = x for x in stepped //result == [0, 2, 4, 6, 8, 10]
\end{lstlisting}

\section{Decrementing sequences}
So far we have only explored ascending sequences; we can create descending sequences by inverting the boundary arguments:

\begin{lstlisting}
descending = 10 to 0 step 2
result = x for x in descending //result == [10, 8, 6, 4, 2, 0]
\end{lstlisting}

\section{Reversed sequences}
As an alternative to decrementing sequences, a reversed sequence can be created as follows:

\begin{lstlisting}
norm = 0 to 4
rev = norm reversed
//norm == [0, 1, 2, 3, 4]
//rev == [4, 3, 2, 1, 0]
\end{lstlisting}

\section{Infinite sequences}
Infinite sequences can be created simply by omitting a to argument:

\begin{lstlisting}
infi = 0 to
//infi => 0, 1, 2, 3, ...
\end{lstlisting}

And they can be stepped as follows:

\begin{lstlisting}
infi = 0 to step 10
//infi == 0, 10, 20, 30,...
\end{lstlisting}

Note that adding a step also enables us to create infinitely decreasing sequences:

\begin{lstlisting}
infi = 0 to step -1
//infi == 0, -1, -2, -3,...
\end{lstlisting}

Infinite sequences cannot be reversed.

\section{In}
Sequences have direct support for the in operator (without requiring calculation of the entire contents of the range). For example:

\begin{lstlisting}
range = 0 to 5
cont1 = 4 in range //cont1 resolves to true as 4 is in the range
cont2 = 88 not in range //cont2 resolves to true as 88 is not in the range
\end{lstlisting}

\section{Char, double, float sequences}
Concurnas doesn't have direct support for non \lstinline{int/long} sequences; however, the same effect can easily be achieved. For example, a char sequence:

\begin{lstlisting}
chars = x as char for x in 65 to 70
//chars == [A, B, C, D, E, F]
\end{lstlisting}

\section{Under the hood}
Ranges are implemented via a clever use of extension functions, expression lists, operator overloading and auto importing of the relevant extension functions and sequence classes. See: \lstinline{com/concurnas/lang/ranges.conc} for more details.

\end{document}
% This file contains the content for a main section
\numberedformat
%% Modify below this line %%

\chapter{Notes on SMPTE ST 268:2014}

SMPTE ST 268:2014 -- File Format for Digital Moving Picture Exchange (DPX) is an update to SMPTE ST 268M:2003. The updated standard specifies how image data and metadata should be written to the file format when the image data is in the Academy Density Exchange Encoding (ADX). Those wishing to implement the standard should refer to the SMPTE document, which can be obtained from the Society of Motion Picture and Television Engineers.
%-------------------------
% Resume in Latex
% Initial Author : Sourabh Bajaj
% License : MIT
%------------------------

\documentclass[letterpaper,11pt]{article}

\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[pdftex]{hyperref}
\usepackage{fancyhdr}
\usepackage{fontawesome}
\usepackage{xifthen}

\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}

% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-48pt}
\addtolength{\textheight}{1.2in}

\urlstyle{same}

\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}

% Sections formatting
\titleformat{\section}{
  \vspace{-5pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule\vspace{-3pt}]

%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
  \linespread{1.1}
  \item\small{
    \textbf{#1}{: #2}
  }
}

\newcommand{\resumePoint}[1]{
  \linespread{1.2}
  \item\small{#1}
}

\newcommand{\resumeExperienceSubheading}[5]{
  \vspace{-3pt}\item
    \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textbf{#1} \normalfont{\small#2} - \textbf{#3}\normalfont{#4} & #5 \\
    \end{tabular*}\vspace{-4pt}
}

\newcommand{\resumeSubheadingExtended}[7]{
  \item
    \begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textbf{#1} & #2 \\
      \textbf{\small#3} in \textit{#4}; \textbf{#5} & \textit{\small #6}
    \end{tabular*}
    \ifthenelse{\isempty{#7}}%
    {}% if #1 is empty
    {\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
      \textit{\small#7}
    \end{tabular*}
    \vspace{-5pt}}
}

\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}

\renewcommand{\labelitemii}{$\circ$}

\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}\vspace{-14pt}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-2pt}}
\newcommand{\resumeInnerItemListStart}{\begin{itemize}\vspace{-1pt}}
\newcommand{\resumeInnerItemListEnd}{\end{itemize}\vspace{2pt}}

%-------------------------------------------
%%%%%%  CV STARTS HERE  %%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
  \textbf{\href{https://www.linkedin.com/in/nikhil-sulegaon/}{\Huge Nikhil Sulegaon}} & \faMapMarker \enspace 353 King St APT 622 San Francisco CA USA 94158\\[8pt]
  \href{https://github.com/nikhilsu}{\faGithub \enspace https://github.com/nikhilsu} & \faEnvelopeO \enspace \href{mailto:[email protected]}{[email protected]} \\[3pt]
  \href{https://www.linkedin.com/in/nikhil-sulegaon/}{\faLinkedin \enspace https://www.linkedin.com/in/nikhil-sulegaon/} & \faPhone \enspace \href{tel:+1-720-491-9222}{+1-720-491-9222}
\end{tabular*}

%-----------EXPERIENCE-----------------
\section{Experience}
  \resumeSubHeadingListStart
    \resumeExperienceSubheading
      {Software Engineer}{}{Uber ATG}{, San Francisco, USA}{Oct 2019 - Present}
      \resumeItemListStart
        \resumePoint{Built highly scalable distributed systems that processed large volumes of sensor data generated in an autonomous vehicle to facilitate data analysis.
Analysis of this data allowed us to derive mission-critical intelligence.}
        \resumePoint{Developed complex data pipelines using various Big Data and Data Warehousing technologies like Hadoop, Hive, and Spark. Deployed these pipelines to the cloud using an array of AWS services.}
      \resumeItemListEnd
    \resumeExperienceSubheading
      {Teaching Assistant}{\href{https://www.colorado.edu/cs/csci-3308-software-development-methods-and-tools}{(S/W Dev Methods \& Tools)}}{University of Colorado Boulder}{}{Sep 2017 - Dec 2018}
      \resumeItemListStart
        \resumePoint{Taught 80 students full-stack development and deployment of applications using core Agile principles and TDD.}
        \resumePoint{Trained students on tools like BASH, HTML/CSS, JS, Git, Postgres, NodeJs, TDD, CI, and AWS/Heroku.}
      \resumeItemListEnd
    \resumeExperienceSubheading
      {Data Engineering Intern}{}{Expedia Inc}{, Chicago, USA}{May 2018 - Aug 2018}
      \resumeItemListStart
        \resumeItem
          {RevPlus (Spark, Python, Java, Node.Js, Kafka)}
          {A B2B app helping hoteliers manage their hotel pricing.}
          \resumeInnerItemListStart
            \resumePoint{Built data pipelines to create training datasets. Made the data accessible through APIs.}
            \resumePoint{Worked on creating a platform to train a Machine Learning model on a Spark cluster on AWS.}
          \resumeInnerItemListEnd
      \resumeItemListEnd
      \vspace{-5pt}
    \resumeExperienceSubheading
      {Full-Stack/Backend Developer}{}{ThoughtWorks}{, Bangalore, India}{Aug 2015 - Aug 2017}
      \resumeItemListStart
        \resumeItem
          {Project Management Tool (C\#, ASP.NET, SQL Server)}{}
          \resumeInnerItemListStart
            \resumePoint {Spearheaded the re-architecture of a multi-tenant application, improving its overall performance by 60\%.}
            \resumePoint {Boosted the health of the codebase by making it more extensible. Increased the test coverage from 16\% to 65\%.}
          \resumeInnerItemListEnd
        \resumeItem
          {Danglay (Ruby on Rails, Postgres, Heroku)}{Part of a team that built a carpooling web application by employing the best practices of TDD, CI/CD, Agile methodologies, and other clean-coding techniques.}
      \resumeItemListEnd
  \resumeSubHeadingListEnd

%-----------EDUCATION-----------------
\section{Education}
  \resumeSubHeadingListStart
    \resumeSubheadingExtended
      {University of Colorado Boulder}{Boulder, CO}
      {Master of Science}{Computer Science}{}{Aug 2017 -- Aug 2019}
      {Relevant courses: Big Data, ML, NLP, Probabilistic Models for ML, Computer Vision, and Object Oriented Design.}
    \resumeSubheadingExtended
      {BMS College of Engineering}{Bangalore, India}
      {Bachelor of Engineering}{Information Science and Engineering}{}{Sep 2011 -- May 2015}
      {}
  \resumeSubHeadingListEnd

%-----------PROJECTS-----------------
\section{Projects}
  \resumeSubHeadingListStart
    \resumeSubItem{Mixed-Modal Learning (Python, Tensorflow)}
    {Used conditional generative Neural Networks (modified \href{https://arxiv.org/abs/1703.10135}{\textit{Tacotron model}}) to generate audio samples of bird chirps given a bird's image! Submitted a paper to CVPR-2019.
    \hfill \textbf{\faLink} \href{https://github.com/nikhilsu/Mixed-modal-learning}{\underline{GitHub}}.}
    \resumeSubItem{Opinion Mining Tweets (Spark, Kafka, Redis)}
    {Performed aspect extraction and opinion mining for product reviews on a stream of tweets. These opinions were visualized on a map based on the sentiment score.
    \hfill \textbf{\faLink} \href{https://drive.google.com/file/d/1zUeC4FPc74mw2Z6SB1qXauo8p6Ex2Cc9/view?usp=sharing}{\underline{Demo}}, \href{https://github.com/nikhilsu/Aspect-Extraction-and-Opinion-Mining}{\underline{GitHub}}.}
    \resumeSubItem{Object 3D Pose Estimation (Python, Tensorflow, C\#, Unity)}
    {Used a Convolutional Neural Network to predict an object\rq s 3D pose to generate interactions between real and virtual objects in a HoloLens.
    \hfill \textbf{\faLink} \href{https://drive.google.com/file/d/1kCepKQxR73tUTLuvmd1YL3sIbj1GxDdc/view?usp=sharing}{\underline{Demo}}, \href{https://drive.google.com/file/d/1mRwSJ8p2-g-gtBGl1A8seRB8SojWQphm/view?usp=sharing}{\underline{Paper}}, \href{https://github.com/nikhilsu/Object-location-detection}{\underline{GitHub}}.}
    \resumeSubItem{Agile board (Java, SpringMVC)}
    {A web-based Agile board developed for an OOP design and refactoring exercise. Devised a \href{https://github.com/nikhilsu/Agile-board/blob/master/src/main/java/com/prorg/helper/result/Response.java}{\emph{custom design pattern}} as part of the project implementation.
    \hfill \textbf{\faLink} \href{https://prorg.herokuapp.com}{\underline{App}}, \href{https://github.com/nikhilsu/Agile-board}{\underline{GitHub}}.}
    \resumeSubItem{Navisys (Java, Android, Python, C++, OpenCV)}
    {An embedded system fitted into a jacket that provides turn-by-turn navigation with human and obstacle detection for the visually impaired.
    \hfill \textbf{\faLink} \href{https://drive.google.com/file/d/1bFHeZ7-7uwZ0spir3YQ7r0maWdLteEtu/view?usp=sharing}{\underline{Report}}, \href{https://drive.google.com/file/d/1JWB67U2jjTG7cXZFVjRPVGKsv-rRhgUQ/view?usp=sharing}{\underline{Synopsis}}.}
  \resumeSubHeadingListEnd
\vspace{3pt}

%--------PROGRAMMING SKILLS------------
\section{Programming Skills}
  \resumeSubHeadingListStart
    \setlength\itemsep{0em}
    \item{
      \textbf{Languages}{: Python, C\#, Java, NodeJs, Scala, Ruby, C++, PostgreSQL, MySQL, MSSQL, and MongoDB.}
    }
    \item{
      \textbf{Frameworks}{: Tensorflow, Express.Js, ASP.NET, Flask, Ruby on Rails, SpringMVC, ReactJs with Redux.}
    }
    \item{
      \textbf{Others}{: Spark, Hadoop, Kafka, Redis, AWS (Lambda, EMR, EC2), Storm, Heroku, Qubole, Pig, CI, Docker.}
    }
  \resumeSubHeadingListEnd

\end{document}
\section{M-estimators}
{ "alphanum_fraction": 0.72, "avg_line_length": 6.25, "ext": "tex", "hexsha": "235ed0ee625ce818e594350dcc1f95df73538602", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/M/01-00-M_estimators.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/M/01-00-M_estimators.tex", "max_line_length": 22, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/M/01-00-M_estimators.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9, "size": 25 }
\clearpage
\noindent \hrulefill
\subsection*{{\tt <plot>}}
\hrulefill\newline
Cartesian chart with x and y axes.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[yodomain:]{array of nominal values for discrete y axes (overrides ymin, ymax, yticks)}
\item[xaxis-tick-size:]{tick size for x axis}
\item[xgridlines-visibility:]{visibility for x axis gridlines}
\item[ygridlines-stroke:]{stroke color for y axis gridlines}
\item[yaxis-truncate-ending:]{string to append to end of y axis text truncated due to exceeding yaxis-text-max-width}
\item[ygridlines-fill:]{fill color for y axis gridlines}
\item[margin-right:]{size in pixels of the right margin}
\item[height:]{height in pixels of the plot area}
\item[brush-stroke:]{stroke color for brush}
\item[xgridlines-stroke:]{stroke color for x axis gridlines}
\item[margin-left:]{size in pixels of the left margin}
\item[xmin:]{minimum value of the x axis}
\item[xaxis-label-text:]{x axis label}
\item[ymin:]{minimum value of the y axis}
\item[yaxis-tick-size:]{tick size for y axis}
\item[xgridlines-shape-rendering:]{shape rendering for x axis gridlines}
\item[xgridlines-fill:]{fill color for x axis gridlines}
\item[brush-fill-opacity:]{fill opacity for brush}
\item[ymax:]{maximum value of the y axis}
\item[ytick-format-function:]{formatter for y axis tick labels}
\item[yaxis-shape-rendering:]{shape rendering for y axis}
\item[margin-bottom:]{size in pixels of the bottom margin}
\item[xticks:]{number of tick marks to be shown on continuous x axis}
\item[width:]{width in pixels of the plot area}
\item[brush-fill:]{fill color for brush}
\item[ygridlines-opacity:]{opacity for y axis gridlines}
\item[brush-shape-rendering:]{shape rendering for brush}
\item[plot-label-font-color:]{font color of the main label of the plot}
\item[yaxis-stroke:]{stroke color for y axis}
\item[xaxis-font-color:]{font color for x axis}
\item[yaxis-visibility:]{visibility value for y axis}
\item[plot-background:]{background color of the plot area}
\item[yaxis-font-size:]{font size for y axis}
\item[xaxis-text-max-width:]{maximum width of x axis text in pixels}
\item[xaxis-fill:]{fill color for x axis}
\item[xaxis-visibility:]{visibility value for x axis}
\item[ygridlines-visibility:]{visibility for y axis gridlines}
\item[xaxis-truncate-ending:]{string to append to end of x axis text truncated due to exceeding xaxis-text-max-width}
\item[background:]{background color of the entire element}
\item[xtick-format-function:]{formatter for x axis tick labels}
\item[ygridlines-shape-rendering:]{shape rendering for y axis gridlines}
\item[xaxis-shape-rendering:]{shape rendering for x axis}
\item[yaxis-font-color:]{font color for y axis}
\item[brush-clear-on-redraw:]{set to true if brush should clear when plot is redrawn}
\item[plot-label-font-size:]{font size of the main label of the plot}
\item[yaxis-font-family:]{font family for y axis}
\item[xodomain:]{array of nominal values for discrete x axes (overrides xmin, xmax, xticks)}
\item[xaxis-font-size:]{font size for x axis}
\item[yaxis-label-text:]{y axis label}
\item[xaxis-font-family:]{font family for x axis}
\item[yaxis-fill:]{fill color for y axis}
\item[margin-top:]{size in pixels of the top margin}
\item[plot-label-text:]{text for the main label of the plot}
\item[yaxis-text-max-width:]{maximum width of y axis text in pixels}
\item[xmax:]{maximum value of the x axis}
\item[yticks:]{number of tick marks to be shown on continuous y axis}
\item[xgridlines-opacity:]{opacity for x axis gridlines}
\item[xaxis-stroke:]{stroke color for x axis}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[ybrushstart:]{function that will be called when the vertical brush starts. Will pass the d3.svg.brush element of the plot as the first parameter.}
\item[ybrush:]{function that will be called when the vertical brush is brushed. Will pass the d3.svg.brush element of the plot as the first parameter.}
\item[xbrushend:]{function that will be called when the horizontal brush ends. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables ybrushstart, ybrush, ybrushend.}
\item[xbrushstart:]{function that will be called when the horizontal brush starts. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables ybrushstart, ybrush, ybrushend.}
\item[xbrush:]{function that will be called when the horizontal brush is brushed. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables ybrushstart, ybrush, ybrushend.}
\item[ybrushend:]{function that will be called when the vertical brush ends. Will pass the d3.svg.brush element of the plot as the first parameter.}
\item[brush:]{function that will be called when the two-dimensional brush is brushed. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables xbrushstart, xbrush, xbrushend, ybrushstart, ybrush, ybrushend.}
\item[brushend:]{function that will be called when the two-dimensional brush ends. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables xbrushstart, xbrush, xbrushend, ybrushstart, ybrush, ybrushend.}
\item[brushstart:]{function that will be called when the two-dimensional brush starts. Will pass the d3.svg.brush element of the plot as the first parameter. Setting the function disables xbrushstart, xbrush, xbrushend, ybrushstart, ybrush, ybrushend.}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <paths>}}
\hrulefill\newline
Paths are visual elements that are defined by a series of points with x and y values.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{yfunction}:]{accessor for the y value of an element of the points array}
\item[\uline{points-function}:]{returns an array of JavaScript objects that represent the points of the path.}
\item[\uline{data}:]{the javascript data object to plot}
\item[\uline{xfunction}:]{accessor for the x value of an element of the points array}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke-opacity:]{opacity of object's outline}
\item[fill-opacity:]{fill opacity of object}
\item[interpolate:]{interpolation mode of the object (https://github.com/mbostock/d3/wiki/SVG-Shapes\#line\_interpolate)}
\item[stroke:]{color of object's outline}
\item[stroke-dasharray:]{dashing of object's outline}
\item[stroke-width:]{width of object's outline}
\item[fill:]{fill color of object}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <bars>}}
\hrulefill\newline
Vertical or horizontal bar that is part of a group. The bar's magnitude is its length along the independent dimension (vertical for horizontal bar charts).
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{value-function}:]{accessor for the bar's value (size and direction)}
\item[\uline{position-function}:]{accessor for the bar's position on the nominal axis}
\item[\uline{data}:]{javascript object to plot}
\item[stroke:]{color of the bar's outline}
\item[fill-opacity:]{fill opacity of bar}
\item[thickness:]{the bar's thickness (size parallel to the dependent dimension)}
\item[stroke-opacity:]{opacity of the bar's outline}
\item[fill:]{fill color of the bar}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <cylinders>}}
\hrulefill\newline
Disks defined by a radius and height.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{adjustxfunction}:]{TODO}
\item[\uline{data}:]{javascript data object}
\item[\uline{adjustyfunction}:]{TODO}
\item[\uline{centerxfunction}:]{center function for x position}
\item[\uline{centeryfunction}:]{center function for y position}
\item[width:]{width of the object}
\item[height:]{height of the object}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke-opacity:]{stroke opacity}
\item[fill-opacity:]{fill opacity}
\item[stroke:]{stroke color}
\item[stroke-dasharray:]{stroke dashing}
\item[radius:]{radius of the circle}
\item[fill:]{fill color}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <donut-charts>}}
\hrulefill\newline
Donut charts display data as slices of a circle or arc.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{yfunction}:]{y position function of the object}
\item[\uline{xfunction}:]{x position function of the object}
\item[\uline{data}:]{javascript data object}
\item[\uline{slice-function}:]{function that determines the size of a slice}
\item[fill-function:]{function determining the fill of a slice}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke:]{stroke color of slices}
\item[inner-radius:]{inner radius of slices}
\item[outer-radius:]{outer radius of slices}
\item[stroke-opacity:]{stroke opacity of slices}
\item[fill-opacity:]{fill opacity of slices}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <points>}}
\hrulefill\newline
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{yfunction}:]{accessor for data's y value}
\item[\uline{xfunction}:]{accessor for data's x value}
\item[\uline{data}:]{the javascript data object to be plotted}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke-opacity:]{opacity of the point's outline}
\item[fill-opacity:]{opacity of the point's fill}
\item[cursor:]{hover cursor style}
\item[stroke:]{color of the point's outline}
\item[radius:]{point's radius}
\item[stroke-dasharray:]{dash array for point's outline}
\item[fill:]{point's fill color}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <error-bars>}}
\hrulefill\newline
Error bars are visual elements that provide a representation of uncertainty around measured values. In IVML, these are described by a center location and values describing the uncertainty in the positive and negative x and y directions.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{xcenter-function}:]{accessor for data's center x point}
\item[\uline{data}:]{the javascript object for this plot}
\item[\uline{ycenter-function}:]{accessor for data's center y point}
\item[left-function:]{accessor for data's uncertainty in the positive x direction}
\item[up-function:]{accessor for data's uncertainty in the positive y direction}
\item[down-function:]{accessor for data's uncertainty in the negative y direction}
\item[right-function:]{accessor for data's uncertainty in the negative x direction}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke:]{line color}
\item[stroke-width:]{line width}
\item[stroke-opacity:]{line opacity}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <line-segments>}}
\hrulefill\newline
Line segments are visual elements defined with a starting and ending point.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[\uline{x1-function}:]{accessor for data's x start point}
\item[\uline{y1-function}:]{accessor for data's y start point}
\item[\uline{data}:]{the javascript data object to be plotted}
\item[\uline{x2-function}:]{accessor for data's x end point}
\item[\uline{y2-function}:]{accessor for data's y end point}
\end{description}
\subsection*{\emph{svg attributes}}
\begin{description}
\item[stroke:]{color of the line}
\item[stroke-width:]{width of the line}
\item[stroke-dasharray:]{dashing of the line}
\item[stroke-opacity:]{opacity of the line}
\end{description}
\subsection*{\emph{event attributes}}
\begin{description}
\item[click-e:]{mouse click event}
\item[mouse-over-e:]{mouse over event}
\item[mouse-out-e:]{mouse out event}
\end{description}
\clearpage
\noindent \hrulefill
\subsection*{{\tt <line-group>}}
\hrulefill\newline
Plots a group of {\tt <paths>} elements cumulatively as a stacked area chart.
\clearpage
\noindent \hrulefill
\subsection*{{\tt <bar-group>}}
\hrulefill\newline
Group of {\tt <bars>} elements, intended for bar charts. This directive requires the data to be indexed by a nominal value on the axis.
\subsection*{\emph{ivml attributes}}
\begin{description}
\item[padding:]{pixel spacing between bars}
\item[type:]{specifies a {\tt grouped} or {\tt stacked} chart.}
\item[arrangement:]{specifies a {\tt vertical} or {\tt horizontal} chart.}
\end{description}
{ "alphanum_fraction": 0.763016158, "avg_line_length": 50.2556390977, "ext": "tex", "hexsha": "3e63744b2fa7ac8eb68166e28f543b34b1ff6f14", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2015-06-10T20:14:07.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-15T01:42:34.000Z", "max_forks_repo_head_hexsha": "6f37f71c3eafd62bf8eae8b81906d4dec024a337", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jegentile/IVML", "max_forks_repo_path": "Documentation_generator/manual.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6f37f71c3eafd62bf8eae8b81906d4dec024a337", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jegentile/IVML", "max_issues_repo_path": "Documentation_generator/manual.tex", "max_line_length": 254, "max_stars_count": 8, "max_stars_repo_head_hexsha": "6f37f71c3eafd62bf8eae8b81906d4dec024a337", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jegentile/IVML", "max_stars_repo_path": "Documentation_generator/manual.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-02T22:52:13.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-04T22:24:04.000Z", "num_tokens": 3644, "size": 13368 }
We have already seen the binary logical and operator $\land$ and the unary logical negation operator $\shortsim$.
\subsection{The logical or ($\lor$) operator}
The next operator we will introduce is the logical or operator $\lor$. The motivation for considering this operator is as follows. In colloquial language, a statement of the form ``$A$ or $B$'' is usually taken to mean ``either $A$ or $B$ is true.'' However, this excludes the scenario where $A$ and $B$ are both true. Taking this into account, the truth table for $\lor$ is the following:
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
$p$ & $q$ & $p \lor q$ \\ \hline
$T$ & $T$ & $T$ \\ \hline
$T$ & $F$ & $T$ \\ \hline
$F$ & $T$ & $T$ \\ \hline
$F$ & $F$ & $F$ \\ \hline
\end{tabular}
\end{table}
\subsection{The NAND and NOR operators}
The NAND and NOR operators, $\uparrow$ and $\downarrow$ respectively, can be defined in terms of previously defined operators. The NAND operator $\uparrow$ is defined as
\[p \uparrow q \equiv \shortsim(p \land q)\]
and the NOR operator $\downarrow$ is defined as
\[p \downarrow q \equiv \shortsim(p \lor q).\]
In other words, NAND and NOR are simply the logical negations of the results of the logical and and the logical or, respectively. The reader should create truth tables for these operators if they want more practice in creating truth tables.
\subsection{The XOR and XNOR operators}
As we have stated before, the or operator does not reflect the colloquial use of the word ``or'', which more closely corresponds to ``either-or''. The either-or operator is true whenever exactly one of its arguments is true, and false otherwise. It is denoted by $\oplus$. The truth table for this operator is written below:
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
$p$ & $q$ & $p \oplus q$ \\ \hline
$T$ & $T$ & $F$ \\ \hline
$T$ & $F$ & $T$ \\ \hline
$F$ & $T$ & $T$ \\ \hline
$F$ & $F$ & $F$ \\ \hline
\end{tabular}
\end{table}
The XNOR operator $\odot$ is defined to be the negation of the XOR operator. Later (in Chapter 4) we will see that this operator plays an important role in defining what are called biconditional statements.
\subsection{The implies ($\implies$) operator}
The truth table of the implies operator $\implies$ is defined below.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
$p$ & $q$ & $p \implies q$ \\ \hline
$T$ & $T$ & $T$ \\ \hline
$T$ & $F$ & $F$ \\ \hline
$F$ & $T$ & $T$ \\ \hline
$F$ & $F$ & $T$ \\ \hline
\end{tabular}
\end{table}
The motivation for this operator is to tabulate the situations that can occur given that an implication statement is true. Suppose that the following statement is true no matter what:
\begin{center}
If it is raining outside, then Bob will carry an umbrella.
\end{center}
Then consider the statements ``it is raining outside'' and ``Bob is carrying an umbrella''. Consider the following scenarios.
\begin{itemize}
\item It is possible for it to be both raining outside and for Bob to be carrying an umbrella at the same time, for the statement above is always true. So both statements being true is possible.
\item It is possible for it to be not raining outside and yet for Bob to be carrying an umbrella. For what if Bob carried an umbrella all the time? The statement above does not discount that possibility. So for the first statement to be false and the second to be true is possible.
\item It is not possible for it to be raining outside and Bob to not be carrying an umbrella, as the statement given forbids this possibility.
\item It is possible for it to be not raining outside and for Bob to be not carrying an umbrella, for the statement does not say anything about Bob when it is not raining outside.
\end{itemize}
For every combination of truth values that is possible we let the implication operator record true, and for the one combination that is not possible we let it record false.
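The truth tables above can also be checked mechanically. The short Python sketch below (an illustration only, not part of the formal development) encodes each connective using Python's built-in Boolean operations and prints the table for the implies operator:
\begin{verbatim}
# Truth-table check for the connectives defined above.
def lor(p, q):      return p or q           # logical or
def nand(p, q):     return not (p and q)    # NAND
def nor(p, q):      return not (p or q)     # NOR
def xor(p, q):      return p != q           # XOR: exactly one argument is true
def xnor(p, q):     return p == q           # XNOR: negation of XOR
def implies(p, q):  return (not p) or q     # the implies operator

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
# Only the row p = True, q = False prints False, matching the table.
\end{verbatim}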
{ "alphanum_fraction": 0.7225462846, "avg_line_length": 53.2837837838, "ext": "tex", "hexsha": "79efadbaf0caeb401adaad7fc30d6c45c326f223", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f18413d1eb0ed598b325e5cd8052fcc571337926", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jonlin1000/discr_math", "max_forks_repo_path": "Ch1/ls_op.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f18413d1eb0ed598b325e5cd8052fcc571337926", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jonlin1000/discr_math", "max_issues_repo_path": "Ch1/ls_op.tex", "max_line_length": 387, "max_stars_count": 3, "max_stars_repo_head_hexsha": "f18413d1eb0ed598b325e5cd8052fcc571337926", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jonlin1000/discr_math", "max_stars_repo_path": "Ch1/ls_op.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-14T02:26:40.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-22T03:31:37.000Z", "num_tokens": 1088, "size": 3943 }
\subsection{Platform-as-a-Service (PaaS)} PaaS clouds, which have been experiencing a rapid growth in popularity~\cite{paas-growth,paas-growth2}, typically host web-accessible (HTTP/S) applications, to which they provide high levels of scalability, availability, and sandboxed execution. PaaS clouds provide scalability by automatically allocating resources for applications on the fly (auto scaling), and provide availability through the execution of multiple instances of the application. Applications deployed on a PaaS cloud depend on a number of scalable services intrinsic to the cloud platform. Our goal is to design Roots as another intrinsic service of the PaaS cloud so that it can function automatically and non-intrusively as an APM. %PaaS clouds provide a high level of %abstraction to the application developer that effectively hides all the %infra\-structure-level details such as physical resource allocation (CPU, %memory, disk etc), operating system, and network configuration. This enables %application developers to focus solely on the programming aspects of their %applications, without having to be concerned about deployment issues. %Consequently, viable PaaS technologies as well as PaaS-hosted applications %have been increasing rapidly in number~\cite{paas-growth,paas-growth2}. %But %due to the high level of abstraction, performance monitoring and root cause %analysis is particularly challenging in PaaS clouds. Due to this reason, and %the large number of PaaS applications available for testing, we design Roots %APM to operate within PaaS clouds. \begin{figure} \centering \includegraphics[scale=0.5]{paas_architecture} \caption{PaaS system organization.} \label{fig:paas_architecture} \end{figure} Figure~\ref{fig:paas_architecture} shows the key layers of a typical PaaS cloud. Arrows indicate the flow of data and control in response to application requests. At the lowest level of a PaaS cloud is an infrastructure that consists of the necessary compute, storage and networking resources. How this infrastructure is set up may vary from a simple cluster of physical machines to a comprehensive Infrastructure-as-a-Service (IaaS) cloud. In large scale PaaS clouds, this layer typically consists of many virtual machines and/or containers with the ability to acquire more resources on the fly. On top of the infrastructure layer lies the PaaS kernel. This is a collection of managed, scalable services that high-level application developers can compose into their applications. The provided services may include database services, caching services, queuing services and much more. Some PaaS clouds provide a managed set of APIs (a ``software development kit'' or SDK) for the application developer to access these fundamental services. In that case all interactions between the applications and the PaaS kernel must take place through the cloud provider specified APIs (e.g. Google App Engine~\cite{gae}). One level above the PaaS kernel we find the application servers that are used to deploy and run applications. Application servers provide the necessary integration (linkage) between application code and the underlying PaaS kernel, while sandboxing application code for secure, multi-tenant operation. On top of the application servers layer resides the request front-end and load balancing layer. This layer is responsible for receiving all application requests, filtering them and routing them to an appropriate application server instance for further execution. 
The front-end server is therefore the entry point to PaaS-deployed applications for all application clients.
Each of the above layers can span multiple processes, running over multiple physical or virtual machines. Therefore the execution of a single application request typically involves the cooperation of multiple distributed processes and/or machines. In order to perform comprehensive monitoring and root cause analysis, we need to be able to monitor each of the above layers along with their enclosed components. Further, we need to be able to trace the flow of data and control across different layers and components.

\subsection{Cloud-hosted Web Applications}
%\textcolor{blue}{You might want to move some of this discussion into the
%introduction since it allows the reader to understand the focus of the work we
%present.}
%Added more details about PaaS services to the intro
Our work concentrates on the web applications deployed in PaaS clouds. An application of this nature exposes one or more web application programming interfaces (web APIs) through which clients can interact with it. The web APIs accept HTTP/S requests sent by remote clients, and respond with machine-readable responses (e.g. HTML, JSON, XML, Protocol Buffers~\cite{protobuff}). Applications of this type tend to be highly interactive, and clients typically have strict expectations on the application response time. Studies have shown that poor application response time may even lead to revenue loss~\cite{latency-matters}. Additionally, the PaaS cloud on which an application is running may impose additional constraints on the application response time for scalability reasons~\cite{azure-limits,gae-limits}. For example, Google App Engine requires that no application request takes more than 60 seconds to execute.

PaaS-hosted web applications rely on various PaaS kernel services offered by the underlying cloud platform. By offloading common application functionality such as data storage, caching, user management, and security to a set of managed cloud services, application developers can greatly reduce the amount of code they have to write. It also eliminates the need to install, run, and manage other support software that would otherwise be necessary (e.g. a database server, a message broker etc.). In other words, when developing applications for a PaaS environment, the developer just focuses on implementing his/her application code using the available PaaS services. All deployment and utility services are provisioned by the PaaS cloud itself.

The downside of this approach is that application developers no longer have full runtime visibility into the application execution. Since most of the application functionality is provided by a set of PaaS kernel services that are in the cloud provider's domain, the application developer does not have total control over application performance. If the application becomes too slow, the application developer is not in a position to determine where in the cloud platform the performance bottleneck lies, due to the opacity of the cloud platform's internal implementation. One way to circumvent this limitation is to instrument application code, and continuously monitor the time taken by various parts of the application. But this activity can be tedious for the application developer, may be error prone, thereby misleading those attempting to diagnose a problem, and the additional code instrumentation may slow down or alter the application's performance.
%Also there is no way to enforce correct, and consistent instrumentation of application code.
%That is, the application developers must be careful in gathering and storing the appropriate
%runtime data. Otherwise their analyses will be inaccurate.
Implementing data collection and analysis as a service built into the PaaS cloud allows anomaly detection and bottleneck identification to be a ``curated'' service that is reliably managed by the cloud platform.

\subsection{Performance Anomalies}
In this work we define \textit{anomalies} as application performance change events that cause one or more performance SLOs to be violated. Our model is one in which the clients of a web application have engaged in a ``service-level agreement'' (SLA) with the ``owner'' or operator of the application that is hosted in a PaaS. The SLA stipulates a response-time ``service-level objective'' (SLO) that, if violated, constitutes a breach of the agreement.
%approach associates each application with
%a set of performance SLOs. We consider SLOs concerning the application response time
%(latency), which get violated when an application becomes too slow.
If the performance of an application deteriorates to the point that at least one of its SLOs is violated, we treat it as an anomaly. Moreover, the process of diagnosing the reason for an anomaly is referred to as \textit{root cause analysis}. For a given anomaly, the root cause could be a change in the application workload or a \textit{bottleneck} in the application runtime. Bottlenecks may occur in the application code, or in the PaaS services that the application relies on.

In order to maintain a satisfactory level of user experience and adhere to any previously agreed-upon performance SLOs, application developers and cloud administrators wish to detect performance anomalies as soon as they occur. When an anomaly is detected, they must perform root cause analysis to identify its cause, and take some corrective and/or preventive action. This diagnosis usually occurs as a two-step process. First, one must determine whether the anomaly was caused by a change in the workload (e.g. a sudden increase in the number of client requests). If that is the case, the resolution typically involves allocating more resources to the application or spawning more instances of the application for load balancing purposes. If the anomaly cannot be attributed to a workload change, one must go another step to find the bottleneck component that has given rise to the issue at hand.
%Detecting performance anomalies
%requires continuous monitoring of application performance which could be tedious with
%cloud platforms in use today. It is even more challenging to perform root cause analysis
%due to the complexity and the blackbox nature of the cloud platforms.
Note that there are several third-party cloud monitoring solutions available today that provide some performance anomaly detection support~\cite{newrelic,datadog,dynatrace}. However, they require additional configuration, are expensive, and cannot support root cause analysis across the entire cloud stack since they do not have visibility into all components of the cloud platform.
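As an illustration of the kind of check implied by this definition, the sketch below expresses a response-time SLO as a latency-percentile threshold. This is only one possible formulation, and the function and parameter names are illustrative; they are not part of Roots or of any PaaS API:
\begin{verbatim}
# Illustrative response-time SLO check (names and thresholds are examples only).
def slo_violated(response_times_ms, threshold_ms=1000.0, quantile=0.95):
    """True if the chosen quantile of observed response times exceeds the threshold."""
    ordered = sorted(response_times_ms)
    k = int(quantile * (len(ordered) - 1))
    return ordered[k] > threshold_ms

recent = [120, 150, 980, 2100, 130, 110, 2500, 140]
if slo_violated(recent):
    print("performance anomaly detected: begin root cause analysis")
\end{verbatim}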
{ "alphanum_fraction": 0.8178007409, "avg_line_length": 62.5487804878, "ext": "tex", "hexsha": "531954903d9276a8d9fdcc6b0de164c1860a7d3a", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_forks_event_min_datetime": "2020-05-25T02:59:15.000Z", "max_forks_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_forks_repo_path": "Eager/paper/www17/background.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_issues_repo_path": "Eager/paper/www17/background.tex", "max_line_length": 110, "max_stars_count": 3, "max_stars_repo_head_hexsha": "d58fe64bb867ef58af19c1d84a5e1ec68ecddd3d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "UCSB-CS-RACELab/eager-appscale", "max_stars_repo_path": "Eager/paper/www17/background.tex", "max_stars_repo_stars_event_max_datetime": "2018-07-16T18:20:23.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-12T01:18:49.000Z", "num_tokens": 2097, "size": 10258 }
\section{Problem Statement}
\label{sec:ProblemStatement}

Simply put: fit a smooth function through a sequence of data points, \textit{e.g.} Figure \ref{fig:DataFittingExampleFigure}. \\
More precisely:
\begin{itemize} \setlength\itemsep{0em}
\item \textbf{Given:} a time-stamped set of points in space.
\item \textbf{Find:} a smooth vector function.
\item \textbf{Minimize:} integral of curvature-rate-squared and sum of squared-error.
\item \textbf{Subject to:} value, slope, and curvature at the boundaries.
\end{itemize}
\par
We will use $\bm{x}(t)$ to represent the value as a function of time. The data set is $\{t_i, \bar{\bm{x}}_i\}$, which gives the measured value at each time stamp. The time-domain of the data set is $[0, T]$.

\subsection{Objective Function}
The objective function is a weighted combination of two terms. The first is the sum of the squared-error between the candidate vector function and the points in the data set. The second is a smoothing term, which penalizes the integral of the squared third derivative of the function.
\begin{equation}
\bm{J}\big(\bm{x}(t)\big) \; = \; \frac{T}{N} \sum_{i=0}^N \big( \bm{x}(t_i) - \bar{\bm{x}}_i \big)^2 \; + \; \alpha \! \int_0^T \! \dddot{\bm{x}}^2(t) \, dt
\label{eqn:continuousObjectiveFunction}
\end{equation}

\subsection{Constraints}
The vector function $\bm{x}(t)$ must be smooth: continuous value, slope, and curvature.
\begin{equation}
\bm{x}(t) \in \mathcal{C}^2
\end{equation}
We also require that the value, slope, and curvature be prescribed at the initial and final times. In practice these boundary constraints can be dropped if not required by the end-user.
\begin{align}
\bm{x}(0) = \bm{x}_0 & \quad & \bm{x}(T) = \bm{x}_T \\
\dot{\bm{x}}(0) = \dot{\bm{x}}_0 & \quad & \dot{\bm{x}}(T) = \dot{\bm{x}}_T \\
\ddot{\bm{x}}(0) = \ddot{\bm{x}}_0 & \quad & \ddot{\bm{x}}(T) = \ddot{\bm{x}}_T
\end{align}
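For intuition, the Python sketch below (illustrative only; it is not the solution method) evaluates a discretized version of the objective~(\ref{eqn:continuousObjectiveFunction}) for a candidate trajectory sampled on a uniform time grid, approximating the squared third derivative by finite differences:
\begin{verbatim}
import numpy as np

# Discretized objective function (one component of x shown).
# t_grid, x_grid : uniform time grid and candidate function values on it
# t_data, x_data : time stamps and measured values of the data set
# alpha          : smoothing weight
def objective(x_grid, t_grid, t_data, x_data, alpha):
    T = t_grid[-1] - t_grid[0]
    N = len(t_data)
    # sum-of-squared-error term: sample the candidate at the data time stamps
    x_at_data = np.interp(t_data, t_grid, x_grid)
    fit_term = (T / N) * np.sum((x_at_data - x_data) ** 2)
    # smoothing term: finite-difference estimate of the third derivative
    h = t_grid[1] - t_grid[0]
    third = np.diff(x_grid, n=3) / h**3
    smooth_term = alpha * np.sum(third ** 2) * h
    return fit_term + smooth_term
\end{verbatim}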
{ "alphanum_fraction": 0.6835774059, "avg_line_length": 39.8333333333, "ext": "tex", "hexsha": "bd0e5e90f3e9567ac65ad1cd7c60aed98645f769", "lang": "TeX", "max_forks_count": 19, "max_forks_repo_forks_event_max_datetime": "2022-02-23T14:08:38.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-24T00:15:06.000Z", "max_forks_repo_head_hexsha": "333dcf4891ca05f007590f3a40f67ae46cf2cf6b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Boyang--Li/ME149_Spring2018", "max_forks_repo_path": "supplement/fit-spline-to-data/tex/problemStatement.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "333dcf4891ca05f007590f3a40f67ae46cf2cf6b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Boyang--Li/ME149_Spring2018", "max_issues_repo_path": "supplement/fit-spline-to-data/tex/problemStatement.tex", "max_line_length": 111, "max_stars_count": 45, "max_stars_repo_head_hexsha": "0cd1960cd3699ef4f24f824c89b32a64c73b5b99", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ShaneRozenLevy/ME149_Spring2018", "max_stars_repo_path": "supplement/fit-spline-to-data/tex/problemStatement.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-06T22:54:58.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-10T15:55:58.000Z", "num_tokens": 619, "size": 1912 }
% Syllabus Template from Arman Shokrollahi % https://www.overleaf.com/latex/templates/syllabus-template-course-info/gbqbpcdgvxjs \documentclass[11pt, letterpaper]{article} %\usepackage{geometry} \usepackage[inner=2cm,outer=2cm,top=2.5cm,bottom=2.5cm]{geometry} \pagestyle{empty} \usepackage{graphicx} \usepackage{fancyhdr, lastpage, bbding, pmboxdraw} \usepackage[usenames,dvipsnames]{color} \definecolor{darkblue}{rgb}{0,0,.6} \definecolor{darkred}{rgb}{.7,0,0} \definecolor{darkgreen}{rgb}{0,.6,0} \definecolor{red}{rgb}{.98,0,0} \usepackage[colorlinks,pagebackref,pdfusetitle,urlcolor=darkblue,citecolor=darkblue,linkcolor=darkred,bookmarksnumbered,plainpages=false]{hyperref} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \pagestyle{fancyplain} \fancyhf{} \lhead{ \fancyplain{}{Political Analysis in R} } %\chead{ \fancyplain{}{} } \rhead{ \fancyplain{}{Fall 2021} }%\today %\rfoot{\fancyplain{}{page \thepage\ of \pageref{LastPage}}} \fancyfoot[RO, LE] {page \thepage\ of \pageref{LastPage} } \thispagestyle{plain} %%%%%%%%%%%% LISTING %%% \usepackage{listings} \usepackage{caption} \DeclareCaptionFont{white}{\color{white}} \DeclareCaptionFormat{listing}{\colorbox{gray}{\parbox{\textwidth}{#1#2#3}}} \captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white} \usepackage{verbatim} % used to display code \usepackage{fancyvrb} \usepackage{acronym} \usepackage{amsthm} \VerbatimFootnotes % Required, otherwise verbatim does not work in footnotes! \definecolor{OliveGreen}{cmyk}{0.64,0,0.95,0.40} \definecolor{CadetBlue}{cmyk}{0.62,0.57,0.23,0} \definecolor{lightlightgray}{gray}{0.93} \lstset{ %language=bash, % Code langugage basicstyle=\ttfamily, % Code font, Examples: \footnotesize, \ttfamily keywordstyle=\color{OliveGreen}, % Keywords font ('*' = uppercase) commentstyle=\color{gray}, % Comments font numbers=left, % Line nums position numberstyle=\tiny, % Line-numbers fonts stepnumber=1, % Step between two line-numbers numbersep=5pt, % How far are line-numbers from code backgroundcolor=\color{lightlightgray}, % Choose background color frame=none, % A frame around the code tabsize=2, % Default tab size captionpos=t, % Caption-position = bottom breaklines=true, % Automatic line breaking? breakatwhitespace=false, % Automatic breaks only at whitespace? showspaces=false, % Dont make spaces visible showtabs=false, % Dont make tabls visible columns=flexible, % Column format morekeywords={__global__, __device__}, % CUDA specific keywords } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{center} {\Large \textsc{POLS 3230: Political Analysis in \texttt{R}}} \end{center} \begin{center} {\large Fall 2021} \end{center} \begin{center} \rule{6.5in}{0.4pt} \begin{minipage}[t]{.96\textwidth} \begin{tabular}{llcccll} \textbf{Professor:} & Joe Ornstein & & & & \textbf{Time:} & MWF 1:50 -- 2:40pm \\ \textbf{Email:} & \href{mailto:[email protected]}{[email protected]} & & & & \textbf{Place:} & 101D Baldwin Hall\\ \textbf{Website:} & \href{https://joeornstein.github.io/pols-3230/}{https://joeornstein.github.io/pols-3230/} & & & & & \end{tabular} \end{minipage} \rule{6.5in}{0.4pt} \end{center} \vspace{.15cm} \setlength{\unitlength}{1in} \renewcommand{\arraystretch}{2} \noindent In this course, you will learn the fundamentals of working with data using \texttt{R}, a programming language widely used among professional data scientists and academic researchers. 
You'll learn how to write code, explore new datasets, build visualizations, and think carefully about what conclusions you can and cannot draw from data. \begin{figure}[h] \centering \href{https://xkcd.com/523/}{\includegraphics[width=0.6\textwidth]{img/decline.png}} \end{figure} %\begin{quotation} % \noindent``\textit{You can't really know anything if you just remember isolated facts. If the facts don't hang together on a latticework of theory, you don't have them in a usable form. You've got to have models in your head.}''\\ % \\ % --Charlie Munger (investor, vice chairman of Berkshire Hathaway) %\end{quotation} % \noindent \section*{Course Objectives} %\vskip.15in %\noindent\textbf{Course Objectives:} By the end of this course, you will be able to: \begin{itemize} \item Write \texttt{R} scripts to import, tidy, and summarize datasets \item Create beautiful and informative data visualizations \item Draw thoughtful conclusions from data \item Organize your work so that it is transparent and reproducible % \item Manipulate, wrangle, and clean datasets using the \texttt{R} programming language % \item Create beautiful data visualizations % \item Organize your work so that it is transparent and reproducible % \item Compute derivatives and solve systems of linear equations % \item Explain the properties of probability distributions and expected values % \item Perform hypothesis tests and fit models to data \end{itemize} \section*{Readings} Before each class session, I will assign a reading that walks you through a new \texttt{R} programming skill. I will expect you to read and annotate each assignment using \href{https://joeornstein.github.io/pols-3230/index.html#hypothesis}{Hypothesis}. All the readings will be available free online (including the books listed below!), but if you're the type of person who enjoys reading a hard copy, here is a list of books you can purchase: \begin{itemize} \item \href{https://r4ds.had.co.nz/}{Wickham, H., \& Grolemund, G. (2016). \textit{R For Data Science: import, tidy, transform, visualize, and model data}. O'Reilly Media, Inc.} \item \href{https://clauswilke.com/dataviz/}{Wilke, Clause O. (2019). \textit{Fundamentals of Data Visualization: A Primer on Making Informative and Compelling Figures}} \item \href{https://socviz.co/}{Healy, Kieran (2018). \textit{Data Visualization: A Practical Introduction}. Princeton University Press.} \end{itemize} \section*{Assignments \& Grading} To earn your course grade, I will expect the following: \begin{itemize} \item \textbf{Reading (10\%)}: Read all the assigned texts, and actively contribute to the annotated reading discussions. I will grade this on a four-point scale (check-plus, check, check-minus, frowny face) based on how regularly you post. \item \textbf{Quizzes (30\%)}: There will be three in-class quizzes throughout the semester. I will give you a piece of code with a bunch of errors in it, and your job will be to fix the code so that it works. Points assigned based on how many errors you spot and fix. For Fall 2021, the quiz dates will be \textbf{September 15}, \textbf{October 13}, and \textbf{November 17}. \item \textbf{Team Projects (40\%):} Every day in class, you will work in teams to explore some dataset. Roughly once per week, your team will submit a report on your findings. Reports that are error-free, reproducible, thoughtful, and visually appealing will earn full credit. 
\item \textbf{Final Project (20\%):} To cap off the semester, you will create an original data visualization that explores a topic of your choice. Projects that are error-free, reproducible, thoughtful, and visually appealing will earn full credit, and my 3-5 favorites will receive a prize (your dataviz on a poster or coffee mug)! You can find a copy of the grading rubric \href{https://joeornstein.github.io/pols-3230/syllabus/POLS-3230-final-rubric.xlsx}{here}. \end{itemize} %\vskip.15in %\noindent\textbf{Office Hours:} \section*{Office Hours} I will be available for meetings every Wednesday before and after class, and you can sign up for 15 minute appointments \href{https://calendly.com/jornstein/15min}{here}. My office is Baldwin 304C, but if you prefer Zoom let me know and I'll send you a link. \section*{Tentative Course Outline} Moltke the Elder writes that no battle plan survives first contact with the enemy. The same is true for course outlines. We may need to be flexible, and deviate from the plan if some topics require more or less attention, or we think of something completely unexpected that we want to do, and it takes up a few weeks. Caveats aside, here is what I have planned! %\begin{center} %\begin{minipage}{6in} %\begin{flushleft} %Chapter 1 \dotfill ~$\approx$ 3 days \\ %{\color{darkgreen}{\Rectangle}} ~A little of probability theory and graph theory \subsubsection*{Week 1: Getting Started} \textit{Pre-Class Survey, Overcoming Fear, Setting up Software} \subsubsection*{Week 2: Intro To Data Visualization} \textit{ggplot2, The Grammar of Graphics, Design Principles, Scatterplots} \subsubsection*{Week 3: Fancier Data Visualizations} \textit{Lines, Facets, Histograms, Distributions, Color, Themes} \subsubsection*{Weeks 4-6: Tidying Messy Data} \textit{Making New Variables, Grouping, Summarizing, Importing Filtering, Merging} \subsubsection*{Week 7-8: Space} \textit{Working with geographic data, Drawing maps} \subsubsection*{Week 9-10: Time} \textit{Working with dates, Difference-in-difference} \subsubsection*{Weeks 11-12: Text As Data} \textit{Strings, Twitter, Sentiment Analysis} \subsubsection*{Week 13-15: Final Projects} \textit{Work on whatever you want, then show it off} %\end{flushleft} %\end{minipage} %\end{center} %\vskip.15in %\noindent\textbf{Important Dates:} %\begin{center} \begin{minipage}{3.8in} %\begin{flushleft} %Midterm \#1 \dotfill ~\={A}b\={a}n 16, 1393 \\ %Midterm \#2 \dotfill ~\={A}zar 21, 1393 \\ %%Project Deadline \dotfill ~Month Day \\ %Final Exam \dotfill ~Dey 18, 1393 \\ %\end{flushleft} %\end{minipage} %\end{center} \subsection*{Academic Honesty} Remember that when you joined the University of Georgia community, you agreed to abide by a code of conduct outlined in the academic honesty policy called \href{https://honesty.uga.edu/Academic-Honesty-Policy/Introduction/}{\textit{A Culture of Honesty}}. Team projects may, of course, be completed in teams, but you may not consult other people for help on the quizzes, and I expect your final projects to be your original work. \subsection*{Mental Health and Wellness Resources} \begin{itemize} \item If you or someone you know needs assistance, you are encouraged to contact Student Care and Outreach in the Division of Student Affairs at 706-542-7774 or visit \href{https://sco.uga.edu}{https://sco.uga.edu}. They will help you navigate any difficult circumstances you may be facing by connecting you with the appropriate resources or services. 
\item UGA has several resources for a student seeking \href{https://www.uhs.uga.edu/bewelluga/bewelluga}{mental health services} or \href{https://www.uhs.uga.edu/info/emergencies}{crisis support}.
\item If you need help managing stress, anxiety, relationships, etc., please visit \href{https://www.uhs.uga.edu/bewelluga/bewelluga}{BeWellUGA} for a list of FREE workshops, classes, mentoring, and health coaching led by licensed clinicians and health educators in the University Health Center.
\item Additional resources can be accessed through the UGA App.
\end{itemize}
%%%%%% THE END
\end{document}
{ "alphanum_fraction": 0.7346830986, "avg_line_length": 51.1711711712, "ext": "tex", "hexsha": "9e70d6eaa907385b18873e455938f5a7ac7b6806", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-02-13T18:02:44.000Z", "max_forks_repo_forks_event_min_datetime": "2022-02-13T18:02:44.000Z", "max_forks_repo_head_hexsha": "c36631db75048dde5d62b49df0bad3f307f92f62", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joeornstein/political-analysis-in-R", "max_forks_repo_path": "course-website/syllabus/POLS-3230-syllabus.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36631db75048dde5d62b49df0bad3f307f92f62", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joeornstein/political-analysis-in-R", "max_issues_repo_path": "course-website/syllabus/POLS-3230-syllabus.tex", "max_line_length": 466, "max_stars_count": null, "max_stars_repo_head_hexsha": "c36631db75048dde5d62b49df0bad3f307f92f62", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joeornstein/political-analysis-in-R", "max_stars_repo_path": "course-website/syllabus/POLS-3230-syllabus.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3168, "size": 11360 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Copyright (c) 2003-2018 by The University of Queensland
% http://www.uq.edu.au
%
% Primary Business: Queensland, Australia
% Licensed under the Apache License, version 2.0
% http://www.apache.org/licenses/LICENSE-2.0
%
% Development until 2012 by Earth Systems Science Computational Center (ESSCC)
% Development 2012-2013 by School of Earth Sciences
% Development from 2014 by Centre for Geoscience Computing (GeoComp)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Non-Linear Partial Differential Equations}
\label{APP: NEWTON}

The solution $u_i$ is given as a solution of the nonlinear equation
\begin{equation} \label{APP NEWTON EQU 40}
\int_{\Omega} v_{i,j} \cdot X_{ij} + v_{i} \cdot Y_{i} \; dx + \int_{\partial \Omega} v_{i} \cdot y_{i} \; ds = 0
\end{equation}
for all smooth $v_i$ with $v_i=0$ where $q_i>0$ and
\begin{equation} \label{APP NEWTON EQU 40b}
u_i=r_i \mbox{ where } q_i>0
\end{equation}
where $X_{ij}$ and $Y_i$ are non-linear functions of the solution $u_k$ and its gradient $u_{k,l}$, and $y_i$ is a function of the solution $u_k$. For further convenience we will use the notation
\begin{equation} \label{APP NEWTON EQU 40c}
<F(u),v> :=\int_{\Omega} v_{i,j} \cdot X_{ij} + v_{i} \cdot Y_{i} \; dx + \int_{\partial \Omega} v_{i} \cdot y_{i} \; ds
\end{equation}
for all smooth $v$ on $\Omega$. If one interprets $F(u)$ as defined above as a functional over the set of admissible functions $v$, equation~(\ref{APP NEWTON EQU 40}) can be written in the compact form
\begin{equation} \label{APP NEWTON EQU 40d}
F(u)= 0
\end{equation}

\section{Newton-Raphson Scheme}
This equation is iteratively solved by the Newton-Raphson method\index{Newton-Raphson method}, see \cite{Kelley2004a}. Starting with the initial guess $u^{(0)}$, the sequence
\begin{equation} \label{APP NEWTON EQU 43}
u^{(\nu)}= u^{(\nu-1)} - \delta^{(\nu-1)}
\end{equation}
for $\nu=1,2,\ldots \;$ generates the (general) Newton-Raphson iteration for the solution $u$. The correction $\delta^{(\nu-1)}$ is the solution of the linear problem
\begin{equation} \label{APP NEWTON EQU 43b0}
< \fracp{F}{u^{(\nu-1)}} \delta^{(\nu-1)} ,v > = <F(u^{(\nu-1)}),v>
\end{equation}
for all smooth $v$ on $\Omega$ with $v_i=0$ where $q_i>0$, where
\begin{equation} \label{APP NEWTON EQU 43b}
< \fracp{F}{u} \cdot \delta ,v > = \int_{\Omega} \left( \fracp{X_{ij}}{u_{k,l}} v_{i,j}\delta_{k,l} + \fracp{X_{ij}}{u_{k}} v_{i,j}\delta_{k} + \fracp{Y_{i}}{u_{k,l}} v_{i}\delta_{k,l} + \fracp{Y_{i}}{u_{k}} v_{i}\delta_{k} \right) \; dx + \int_{\partial \Omega} \fracp{y_{i}}{u_{k}} v_{i}\delta_{k} \; ds
\end{equation}
It is assumed that the initial guess $u^{(0)}$ fulfills the constraint~(\ref{APP NEWTON EQU 40b}). The correction $\delta^{(\nu-1)}$ has to fulfill the homogeneous constraint. Notice that the calculation of $\delta^{(\nu-1)}$ requires the solution of a linear PDE as presented in section~\ref{SEC LinearPDE}.

The Newton iteration should be stopped in the $\nu$-th step if for all components of the solution the error of the Newton approximation is lower than the given relative tolerance {\it rtol}:
\begin{equation}\label{APP NEWTON EQU 61}
\| u_{i} - u_{i}^{(\nu)} \|_{\infty} \le \mbox{rtol} \cdot \|u_{i} \|_{\infty} \; ,
\end{equation}
for all components $i$, where $\|. \|_{\infty}$ denotes the $L^{sup}$ norm.
To measure the quality of the solution approximation on the level of the equation, we introduce the weak norm
\begin{equation}\label{APP NEWTON EQU 62}
\| F(u) \|_{i} := \sup_{v , v=0 \mbox{ where } q_{i}>0 } \frac{<F(u), ve_{i}>}{\|v\|_1}
\end{equation}
where $(e_{i})_{j}=\delta_{ij}$ and $\|v\|_1=\int_{\Omega} |v| \,dx$ is the $L^1$ norm of $v$\footnote{In practice a discretization method is applied to compute the update $\delta^{(\nu-1)}$. In this case an approximation of $\| F(u) \|_{i}$ is also calculated, taking the maximum over all basis functions used to represent the solution $u$.}.
The stopping criterion (\ref{APP NEWTON EQU 61}) is moved to the level of the equation. We use the reasonable heuristic, but mathematically incorrect, argument that the change on the level of the solution and the change on the level of the equation are proportional:
\begin{equation}\label{APP NEWTON EQU 64}
\frac{ \| u_i - u^{(\nu)}_i \|_{\infty} }{ \| 0 - F(u^{(\nu)}) \|_{i} } = \frac{ \| \delta^{(\nu)}_i \|_{\infty} }{ \| F(u^{(\nu)}) - F(u^{(\nu-1)}) \|_{i} } \; ,
\end{equation}
where we assume that component $i$ of $F(u)$ is mainly controlled by component $i$ of the solution. We assume that the term $F(u^{(\nu)})$ can be neglected versus $F(u^{(\nu-1)})$, since $u^{(\nu)}$ is a better approximation, and use the stopping criterion in the form:
\begin{equation} \label{APP NEWTON EQU 65}
\| F(u^{(\nu)}) \|_i \le \frac{ \| F(u^{(\nu-1)})\|_{i} \cdot \|u_{i} \|_{\infty} } { \| \delta^{(\nu)}_i \| _{\infty} } \,\mbox{\it rtol}\, =:\, \mbox{\it qtol}_i \; ,
\end{equation}
which has to hold for all components $i$. Now {\it qtol} defines a tolerance on the level of the equation.

This stopping criterion is not free of problems, because a decrease of the defect $F(u^{(\nu)})$ coupled with a constant correction $\delta^{(\nu)}$ suggests a good approximation. But the quality of the approximation $u^{(\nu)}$ is ensured if the Newton iteration converges quadratically. This convergence behavior is given by the error estimation:
\begin{equation}
\| u - u^{(\nu)} \|_{\infty} \le C \; \| u - u^{(\nu-1)} \|_{\infty}^2
\end{equation}
with a positive value $C$, see \cite{Kelley2004a}. Therefore a quadratic convergence of the Newton iteration can be assumed if the corrections of the current and the last step fulfill the following condition:
\begin{equation} \label{APP NEWTON EQU 66}
\max_{i} \frac{\| \delta^{(\nu)}_i \|_{\infty} }{\| \delta^{(\nu-1)}_i \|_{\infty} } < \frac{1}{5} \;,
\end{equation}
where the limit $\frac{1}{5}$ was found by a large number of experiments. The approximation $u^{(\nu)}$ is accepted if the conditions (\ref{APP NEWTON EQU 65}) and (\ref{APP NEWTON EQU 66}) hold. Consequently a safe approximation requires at least two Newton steps.

To stop a divergent iteration, which occurs for a bad initial solution, the norms of the defects for the $(\nu-1)$-th and $\nu$-th Newton step are compared. Here we use the estimation
\begin{equation} \label{APP NEWTON EQU 67}
\| F(u^{(\nu)}) \|_i \le \gamma \| F(u^{(\nu-1)}) \|_i
\end{equation}
for the defects. The value $0<\gamma<1$ depends on $F$ and the distance of the initial guess $u^{(0)}$ to the true solution $u$. Since the constant $\gamma$ is unknown and can be close to one, convergence is assumed if the following (weaker) condition holds for all components $i$:
\begin{equation} \label{APP NEWTON EQU 68}
\| F(u^{(\nu)}) \|_i < \| F(u^{(\nu-1)}) \|_i \; .
\end{equation}
If condition (\ref{APP NEWTON EQU 68}) fails, divergence may begin.
Therefore under-relaxation is started. Beginning with $\omega=1$, the Newton-Raphson iteration is computed by
\begin{equation} \label{APP NEWTON EQU 69}
u^{(\nu)} = u^{(\nu-1)} - \omega \, \delta^{(\nu)}
\end{equation}
instead of (\ref{APP NEWTON EQU 43}). If this new iteration fulfills the condition (\ref{APP NEWTON EQU 68}) of decreasing defect, we accept it. Otherwise we put $\omega \rightarrow \frac{\omega}{2}$, recompute $u^{(\nu)}$ from equation (\ref{APP NEWTON EQU 69}) and retry condition (\ref{APP NEWTON EQU 68}) for the new $u^{(\nu)}$, and so on until either condition (\ref{APP NEWTON EQU 68}) holds or $\omega$ becomes too small ($\omega < \omega_{lim}=0.01$). In the latter case the iteration gives up. The under-relaxation converges only linearly for $\omega<1$, but it is a rather robust procedure, which switches back to $\omega=1$ as soon as possible. The price for the robustness is the additional computation of the defects.

Due to the quadratic convergence near the solution, the error decreases rapidly. Then the solution will not change much and it will not be necessary to mount a new coefficient matrix in each iteration step. This algorithm is called the simplified Newton method. It converges linearly by
\begin{equation}
\| u_i - u^{(\nu)}_i \|_{\infty} \le \gamma^{\nu} \| u_i - u ^{(0)}_i \|_{\infty} \; ,
\end{equation}
where $\gamma$ equals the $\gamma$ in the estimation (\ref{APP NEWTON EQU 67}). If the general iteration converges quadratically (condition (\ref{APP NEWTON EQU 66}) holds) and we have
\begin{equation}
\| F(u^{(\nu)}) \|_i < 0.1 \| F(u^{(\nu-1)}) \|_i
\end{equation}
for all components $i$, we can expect $\gamma \le 0.1$. Then the simplified iteration produces one digit in every step and so we change from the general to the simplified method. The `slow' convergence requires more iteration steps, but the expensive mounting of the coefficient matrix is saved.

The linear PDE~(\ref{APP NEWTON EQU 43b}) is solved with a certain tolerance, namely until the defect of the current approximation of $\delta^{(\nu)}$ relative to the defect of the current Newton iteration is lower than {LINTOL}. To avoid wasting CPU time, the FEMLIN iteration must be controlled by an efficient stopping criterion. We set
\begin{equation}
\mbox{LINTOL} = 0.1 \cdot \max ( \left( \frac{\|\delta^{(\nu)}_i\|_{\infty}}{\|u_i^{(\nu)}\|_{\infty}} \right)^2 ,\min_{i} \frac{\mbox{qtol}_i}{\|F(u^{(\nu)})\|_{i}} )
\end{equation}
but restrict {LINTOL} by
\begin{equation}
10^{-4} \le \mbox{LINTOL} \le 0.1 \quad \mbox{ and } \quad \mbox{LINTOL}=0.1 \; \mbox{ for }\nu=0 \;.
\end{equation}
The first term means that it would be useless to compute digits with the linear solver which are overwritten by the next Newton step. In the region of quadratic convergence the number of significant digits is doubled in each Newton step, i.e.~later digits are overwritten by the following Newton-Raphson correction, see \cite{Schoenauer1981a}. The second means that no digits should be computed which are in significance below the prescribed tolerance {\it rtol}. The number $0.1$ is a `safety factor' to take care of the coarse norm estimations.

Figure~\ref{APP NEWTON PIC 61} shows the workflow of the Newton-Raphson update algorithm.
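For orientation, the sketch below mirrors the damped Newton-Raphson loop described above for a finite-dimensional system $F(u)=0$. The PDE-specific norms, the {\it qtol} criterion and the switch to the simplified Newton method are omitted, and all names are illustrative:
\begin{verbatim}
import numpy as np

# Damped Newton-Raphson iteration with under-relaxation (simplified sketch).
def newton(F, jacobian, u0, rtol=1e-8, max_iter=50, omega_lim=0.01):
    u = np.asarray(u0, dtype=float).copy()
    defect = F(u)
    omega = 1.0
    for _ in range(max_iter):
        # solve the linear problem for the correction delta
        delta = np.linalg.solve(jacobian(u), defect)
        omega = min(2.0 * omega, 1.0)        # try to return to a full step
        while True:
            if omega < omega_lim:
                raise RuntimeError("Newton iteration gives up: omega too small")
            u_new = u - omega * delta        # damped update; omega = 1 is the full step
            new_defect = F(u_new)
            if np.linalg.norm(new_defect, np.inf) < np.linalg.norm(defect, np.inf):
                break                        # defect decreased: accept the step
            omega *= 0.5                     # under-relaxation
        u, defect = u_new, new_defect
        # stop when the last correction is small relative to the solution
        if np.linalg.norm(omega * delta, np.inf) <= rtol * np.linalg.norm(u, np.inf):
            return u
    raise RuntimeError("no convergence within max_iter")
\end{verbatim}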
\begin{figure}
\begin{center}
{
\unitlength0.92mm
\begin{picture}(200,235)
\thicklines
\put(-10,-065){\begin{picture}(170,280)
\thicklines
\newsavebox{\BZar}
\savebox{\BZar}(0,0)
{
\thicklines
\put(0,10){\line(-3,-1){30}}
\put(0,10){\line(3,-1){30}}
\put(0,-10){\line(-3,1){30}}
\put(0,-10){\line(3,1){30}}
\put(0,20){\vector(0,-1){10}}
\put(0,-10){\vector(0,-1){10}}
\put(30,0){\vector(1,0){25}}
}
\newsavebox{\BZal}
\savebox{\BZal}(0,0)
{
\thicklines
\put(0,10){\line(-3,-1){30}}
\put(0,10){\line(3,-1){30}}
\put(0,-10){\line(-3,1){30}}
\put(0,-10){\line(3,1){30}}
\put(0,20){\vector(0,-1){10}}
\put(0,-10){\vector(0,-1){10}}
\put(-30,0){\vector(-1,0){10}}
}
\put(20,285){\framebox(60,20){\parbox{48mm}
{Start: \\ $\nu=0$ , $\omega=1$ \\ calculate $F(u^{(0)})$ }} }
\put(50,285){\vector(0,-1){10}}
\put(20,265){\framebox(60,10){\parbox{48mm}
{next iteration: $\nu \leftarrow \nu+1$}} }
\put(50,265){\vector(0,-1){10}}
\put(20,240){\framebox(60,15){\parbox{54mm}
{\hspace*{2.00mm} Solve \\ $\fracp{F}{ u^{(\nu-1)}} \delta^{(\nu)} = F(u^{(\nu-1)})$}} }
\put(50,240){\vector(0,-1){10}}
\put(20,220){\framebox(60,10){\parbox{48mm}
{$\omega = \min(2\,\omega,1)$}} }
\put(50,220){\vector(0,-1){10}}
\put(120,225){\line(-3,-1){30}}
\put(120,225){\line(3,-1){30}}
\put(120,205){\line(-3,1){30}}
\put(120,205){\line(3,1){30}}
\put(110,214){$\omega \le \omega_{lim}$ ?}
\put(83,219){no}
\put(153,219){yes}
\put(090,215){\vector(-1,0){40}}
\put(150,215){\line(1,0){05}}
\put(20,200){\framebox(60,10){\parbox{48mm}
{$u^{(\nu)} = u^{(\nu-1)} - \omega\delta^{(\nu)}$}} }
\put(50,180){\usebox{\BZar}}
\put(30,179){$F(u^{(\nu)}) < F(u^{(\nu-1)})$ ?}
\put(83,184){no}
\put(55,165){yes}
\put(105,175){\framebox(30,10){\parbox{23mm}
{$\omega=\omega/2$}} }
\put(120,185){\vector(0,1){20}}
\put(20,145){\framebox(60,15){\parbox{48mm}
{evaluate stopping criteria~(\ref{APP NEWTON EQU 65}) \\
and if not simplified mode~(\ref{APP NEWTON EQU 66}) } } }
\put(50,145){\vector(0,-1){10}}
\put(50,125){\usebox{\BZar}}
\put(34,120){\shortstack{stopping criterion \\ satisfied ?}}
\put(83,129){yes}
\put(55,110){no}
\put(105,117.5){\framebox(30,15){\parbox{23mm}
{Newton ends \\ successfully!}} }
\put(135,125){\vector(1,0){20}}
\put(50,095){\usebox{\BZal}}
\put(26,093.5){$F(u^{(\nu)}) < 0.1 F(u^{(\nu-1)})$ ?}
\put(12,099){no}
\put(55,080){yes}
\put(20,065){\framebox(60,10){\parbox{48mm}
{switch to simplified Newton!}} }
\put(20,070){\vector(-1,0){10}}
\put(155,215){\vector(0,-1){60}}
\put(140,145){\framebox(30,10){\parbox{23mm}
{Newton fails!}} }
\put(155,145){\vector(0,-1){070}}
\put(155,070){\oval(40,10)\makebox(0,0){END}}
\put(010,280){\line(0,-1){210}}
\put(010,280){\vector(1,0){40}}
\end{picture}
}
\end{picture}
}
\end{center}
\caption{\label{APP NEWTON PIC 61}Flow diagram of the Newton-Raphson algorithm}
\end{figure}

\section{Local Sensitivity Analysis}
If the coefficients $X_{ij}$, $Y_i$ and $y_{i}$ in equation~(\ref{APP NEWTON EQU 40}) depend on a vector of input
factors $f_i$\index{input factor} and its gradient, one is interested in how the solution $u_i$ changes when the input
factors are changed. This problem is called a local sensitivity analysis \index{local sensitivity analysis}.
Let $u(f)$ denote the solution of equation~(\ref{APP NEWTON EQU 40}) for the input factor $f$, and let
$u(f+\alpha \cdot g)$ denote the solution for the perturbed value $f+\alpha \cdot g$ of the input factor, where $g$
denotes the direction of the perturbation and $\alpha$ is a small scaling factor.
The derivative of the solution in the direction $g$ is defined as
\begin{equation}
\fracp{u}{g} := \lim_{\alpha \rightarrow 0} \frac{ u(f+\alpha \cdot g) - u(f)}{\alpha} \; .
\end{equation}
In practice one needs to distinguish between the cases of a spatially constant and a spatially variable input factor.
In the first case $g$ is set to a unit vector, while in the second case an appropriate function needs to be given for
$g$. The function $\fracp{u}{g}$ is calculated by solving the equation
\begin{align}
\label{APP NEWTON EQU 100}
\int_{\Omega} \left( \fracp{X_{ij}}{u_{k,l}} v_{i,j} \left(\fracp{u_k}{g}\right)_{,l}
+ \fracp{X_{ij}}{u_{k}} v_{i,j}\fracp{u_k}{g}
+ \fracp{Y_{i}}{u_{k,l}} v_{i}\left(\fracp{u_k}{g}\right)_{,l}
+ \fracp{Y_{i}}{u_{k}} v_{i}\fracp{u_k}{g}\right) \; dx
+ \int_{\partial \Omega} \fracp{y_{i}}{u_{k}} v_{i}\fracp{u_k}{g}\; ds \\
+ \int_{\Omega} v_{i,j} \left( \fracp{X_{ij}}{f_{k,l}} g_{k,l} + \fracp{X_{ij}}{f_{k}} g_k \right)
+ v_{i} \left( \fracp{Y_{i}}{f_{k,l}} g_{k,l} + \fracp{Y_{i}}{f_{k}} g_k \right) \; dx
+ \int_{\partial \Omega} v_{i} \fracp{y_{i}}{f_{k}} g_k\; ds = 0
\end{align}
for all smooth $v$ on $\Omega$ with $v_i=0$ where $q_i>0$, for the unknown sensitivity $\fracp{u}{g}$. Notice that this
equation is similar to the equation which needs to be solved for the Newton-Raphson correction $\delta^{(\nu)}$, see
equation~(\ref{APP NEWTON EQU 43b}).
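Formally, equation~(\ref{APP NEWTON EQU 100}) is nothing but implicit differentiation: since the solution satisfies
$F(u(f),f)=0$ for every admissible input factor $f$, differentiating this identity in the direction $g$ gives
\begin{equation}
\fracp{F}{u}\, \fracp{u}{g} \; = \; - \fracp{F}{f}\, g \; ,
\end{equation}
to be read in the weak sense of equation~(\ref{APP NEWTON EQU 100}). The operator on the left-hand side is the same
Jacobian that is used for the Newton-Raphson correction in equation~(\ref{APP NEWTON EQU 43b}), so a solver for the
correction can in principle be reused for the sensitivity with a modified right-hand side built from the explicit
derivatives of $X_{ij}$, $Y_{i}$ and $y_{i}$ with respect to the input factors.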
{ "alphanum_fraction": 0.6499742202, "avg_line_length": 44.8439306358, "ext": "tex", "hexsha": "2c44feae836b81585bd34784a36bec805d0f376a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "markendr/esys-escript.github.io", "max_forks_repo_path": "doc/user/nonlinearPDE.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_issues_repo_issues_event_max_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-14T03:07:43.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "markendr/esys-escript.github.io", "max_issues_repo_path": "doc/user/nonlinearPDE.tex", "max_line_length": 122, "max_stars_count": null, "max_stars_repo_head_hexsha": "0023eab09cd71f830ab098cb3a468e6139191e8d", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "markendr/esys-escript.github.io", "max_stars_repo_path": "doc/user/nonlinearPDE.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5451, "size": 15516 }
% ============================================================================== %
%                        O S C I L L O S C O P E                                %
% ============================================================================== %
\chapter{Oscilloscope} % ----------------------------------------------------- %
\label{ch:app:gui}
% ---------------------------------------------------------------------------- %
Some supplementary information for the technologies underpinning the
oscilloscope application is presented here.

% ============================================================================== %
%                        W E B S O C K E T S                                    %
% ============================================================================== %
\section{WebSockets} % <<< --------------------------------------------------- %
\label{sec:app:gui:websockets}
% ---------------------------------------------------------------------------- %
WebSockets' final RFC 6455\cite{rfc:6455} was released in December 2011 and is
thus still quite young. It is meant to compensate for the lack of raw UDP and
TCP sockets in JavaScript; while those would offer maximum flexibility, they
also pose a significant security risk, and are therefore not available in
JavaScript.

The WebSocket protocol is located in the Application Layer of the OSI
model\footnote{%
    For those not familiar with the OSI model, Wikipedia provides a good
    overview in~\cite{wiki:osi}.%
}.
Instead of directly opening a raw socket, the handshake is done via HTTP(S).
This brings the benefit of communicating through the same ports as the browser
(\num{80} or \num{443}), which enables the protocol to function through most
firewalls. Furthermore, it greatly simplifies the implementation of handshakes
for the programmer. The client sends an upgrade request to the server, which
then opens a WebSocket connection. This allows for a very convenient way to use
TCP sockets without introducing an entirely new standard. Section~1.5,
\emph{Design Philosophy} in RFC~6455~\cite{rfc:6455} explains it well:
\begin{quote}
    Basically it is intended to be as close to just exposing raw TCP to script
    as possible given the constraints of the Web. The only exception is that
    WebSockets adds framing to make it packet rather than stream based and to
    differentiate between binary and text data.
\end{quote}
This differentiation is very useful for this project: instructions to the
server are issued via the text channel, whilst data is sent back through the
binary channel, allowing for very convenient interfacing with close to no
effort.

In summary: WebSockets are close-to-raw TCP sockets whose handshake is carried
out via HTTP(S). JavaScript provides a WebSocket interface that offers
convenient sending and receiving of large amounts of data. As with nearly
everything in JavaScript, this is done using callbacks. There are callbacks
which handle connections, messages and errors. The code snippet in
Listing~\ref{lst:gui:jsws} gives some insight into how WebSockets are used in
JavaScript. For more detailed information, the reader is referred to the
Mozilla documentation~\cite{moz:ws}.
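Before turning to the full listing, a minimal, self-contained sketch of the
callback-based API may be helpful (the URL, the command string and the sample
format below are placeholders, not the actual protocol of the scope server):

\begin{minted}{javascript}
// Open a connection; the URL is a placeholder.
var socket = new WebSocket('ws://example.local:9002');

// Receive binary frames as ArrayBuffer instead of Blob objects.
socket.binaryType = 'arraybuffer';

// Connection established: send a textual instruction to the server.
socket.onopen = function () {
    socket.send('start-acquisition');
};

// Text and binary frames arrive through the same message callback.
socket.onmessage = function (event) {
    if (typeof event.data === 'string') {
        console.log('text frame: ' + event.data);
    } else {
        // Interpret the binary payload as 32 bit floats (placeholder format).
        var samples = new Float32Array(event.data);
        console.log('binary frame with ' + samples.length + ' samples');
    }
};

// Errors and connection teardown are reported via further callbacks.
socket.onerror = function () { console.log('WebSocket error'); };
socket.onclose = function () { console.log('connection closed'); };
\end{minted}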
\vspace{2ex} \begin{tcolorbox}[ skin=alpenlisting, title={ \refstepcounter{listing} \textbf{Listing \thelisting:} Using WebSockets in JavaScript \label{lst:gui:jsws} \addcontentsline{lol}{listing}{\protect\numberline{\thelisting}Using Websockets in JavaScript} } ] \inputminted[ linenos, numbersep=4pt, style=solarizedlight, ]{javascript}{./code/websockets.js} \end{tcolorbox} %>>> \clearpage % ============================================================================== % % S T A T E T R E E % % ============================================================================== \section{State Tree of Oscilloscope} % <<< ----------------------------------- % \label{sec:app:gui:state_tree_of_scope} % ---------------------------------------------------------------------------- % \begin{tcolorbox}[ skin=alpenlisting, title={ \refstepcounter{listing} \textbf{Listing \thelisting:} The state tree of the scope application \label{lst:gui:app_structure} \addcontentsline{lol}{listing}{\protect\numberline{\thelisting}Scope State Tree} }, breakable, title after break={\textbf{Listing \thelisting\ (cont.):} The state tree of the scope application}, ] \inputminted[ linenos, numbersep=4pt, style=solarizedlight, ]{javascript}{./code/statetree.js} \end{tcolorbox} %>>> % ============================================================================== % % M I T H R I L . J S % % ============================================================================== \section{mithril.js} % <<< --------------------------------------------------- % \label{sec:app:gui:mithril} % ---------------------------------------------------------------------------- % The official mithril webpage describes mithril.js in the following way: ``Mithril is a modern client-side JavaScript framework for building Single Page Applications. It's small (< 8kb gzip), fast and provides routing and XHR utilities out of the box.''~\cite{mithril:home} Mithril, like a lot of other frameworks such as React, Angular.js or Vue.js, uses a virtual DOM. This means that it does not modify the DOM which is outlined by the browser, but rather maintains its own DOM. When a new render call is issued, the virtual DOM calculates all the deltas that stem from new content and applies them to the real DOM. This allows mithril.js to calculate and recalculate the DOM based on a descriptive model. The developer does not have to manually modify an object's state but rather has to describe it. A redraw generally happens when an event is triggered by any input element but can also be issued manually. A virtual DOM consists of many vnodes (virtual nodes) and can be mounted on any actual node of the browser's DOM as the example in Listing~\ref{lst:gui:mithrilmount} shows. \vspace{2ex} \begin{tcolorbox}[ skin=alpenlisting, title={ \refstepcounter{listing} \textbf{Listing \thelisting:} Basic creation and usage of mithril components in JavaScript \label{lst:gui:mithrilmount} \addcontentsline{lol}{listing}{\protect\numberline{\thelisting}Basic Usage of \code{mithril} Components} } ] \inputminted[ linenos, numbersep=4pt, style=solarizedlight, ]{javascript}{./code/mithrilmount.js} \end{tcolorbox} A component can be mounted on any DOM node and becomes a vnode in the virtual DOM. The developer can create new components by simply creating an object that holds at least a \code{view()} function that instantiates new vnodes. The new component can then be instantiated via the \code{m()} or \code{m.mount()} command. 
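For instance, a minimal, hypothetical counter component, independent of the
scope application, could be defined and mounted like this (the element id
\code{counter} is a placeholder):

\begin{minted}{javascript}
// A component is simply an object with a view() function returning vnodes.
var Counter = {
    count: 0,
    view: function () {
        return m('button',
            { onclick: function () { Counter.count += 1; } },
            'clicks: ' + Counter.count);
    }
};

// Mount the component on an existing DOM node; mithril recalculates the
// virtual DOM and redraws automatically after the click handler has run.
m.mount(document.getElementById('counter'), Counter);
\end{minted}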
As this section should only give a basic overview of mithril and is not meant
to be a manual, further information on mithril's features and usage can be
obtained on its webpage~\cite{mithril:home}.
%>>>

% ============================================================================== %
%                              W e b G L                                        %
% ============================================================================== %
\section{WebGL} % <<< -------------------------------------------------------- %
\label{sec:app:gui:webgl}
% ---------------------------------------------------------------------------- %
An application uses the \emph{canvas} DOM element, which provides a direct
interface to WebGL. The user can render vertices to the canvas and even apply
shaders or, in the case of our scope application, use simple 2D geometry calls.
These are sufficient for our purposes since the scope basically only requires
the drawing of lines.

Via the canvas one can retrieve a 2D rendering context on which simple geometry
can be drawn. In JavaScript this can be done using the code in
Listing~\ref{lst:js2dcontext}, which shows how a single red line can be drawn
on the canvas.

\vspace{2ex}
\begin{tcolorbox}[
    skin=alpenlisting,
    title={
        \refstepcounter{listing}
        \textbf{Listing \thelisting:}
        Getting a 2D Rendering Context from a Canvas and Drawing on it in JavaScript
        \label{lst:js2dcontext}
        \addcontentsline{lol}{listing}{\protect\numberline{\thelisting}Drawing on Canvas in JavaScript}
    }
]
\inputminted[
    linenos,
    numbersep=4pt,
    style=solarizedlight,
]{javascript}{./code/2dcontext.js}
\end{tcolorbox}
\vspace{2ex}

There is also the possibility to draw rectangles, circles and much more. All of
those elements can be styled easily via properties of the context environment.
All the functionality is documented on the Mozilla Network~\cite{moz:2dcontext}.

Once the rendering context has been acquired, something can be drawn on the
canvas, but only once. For the creation of a moving image, these draw calls
have to be re-issued over and over again. There are various possibilities in
JavaScript to accomplish this, but only one is actually high-performance and
recommended. Instead of simply drawing to the canvas over and over again, it
would be ideal to only do so right before a new frame is pulled from the
framebuffer by the display. JavaScript provides an interface to register a
callback that is called before a new frame is released. This callback will be
called with the same frequency as the display refresh rate, which nowadays
usually is \SI{60}{\hertz}. To make sure that the callback keeps being
executed, it has to be registered again each time it has run. The example in
Listing~\ref{lst:gui:glcallback} shows how this is done. This callback does not
affect the rest of the DOM, which allows JavaScript to handle the redraws of
the DOM at high speed while the callback renders a smooth graph of the data
onto just one of the DOM elements.
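A minimal sketch of such a render loop might look as follows (the element id
and the drawn trace are placeholders; the project's own version is shown in
Listing~\ref{lst:gui:glcallback}):

\begin{minted}{javascript}
// Acquire the 2D rendering context of a canvas element
// (the element id 'scope' is a placeholder).
var canvas = document.getElementById('scope');
var ctx = canvas.getContext('2d');

function render() {
    // Clear the previous frame and draw a horizontal line as a
    // stand-in for the actual data trace.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = 'red';
    ctx.beginPath();
    ctx.moveTo(0, canvas.height / 2);
    ctx.lineTo(canvas.width, canvas.height / 2);
    ctx.stroke();

    // Re-register the callback so that it runs again right before
    // the next frame is displayed.
    window.requestAnimationFrame(render);
}

// Start the loop.
window.requestAnimationFrame(render);
\end{minted}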
\vspace{2ex} \begin{tcolorbox}[ skin=alpenlisting, title={ \refstepcounter{listing} \textbf{Listing \thelisting:} Usage of the requestAnimationFrame callback in JavaScript \label{lst:gui:glcallback} \addcontentsline{lol}{listing}{\protect\numberline{\thelisting}Usage of \code{requestAnimationFrame Callback}} } ] \inputminted[ linenos, numbersep=4pt, style=solarizedlight, ]{javascript}{./code/glcallback.js} \end{tcolorbox} %>>> % ============================================================================== % % W I N D O W T A B L E % % ============================================================================== \section{FFT Windowing Parameters} % <<< -------------------------------------- % \label{sec:app:gui:fft_params} % ---------------------------------------------------------------------------- % \begin{centering} \tabcaption[FFT Windowing Parameters]{% FFT windowing parameters, taken from~\cite{gui:meyer}% } \label{tab:fft_window_params} \begin{tabular}{l>{$}c<{$}ScS} \toprule \parbox[c]{17mm}{Window} & {\parbox[c]{26mm}{Scaling Factor for Quasi-Periodical Signals}} & {\parbox[c]{24mm}{Attenuation of Largest Side Lobe (\si{\dB})}} & {\parbox[c]{22mm}{Number of Lines per Bundle} } & {\parbox[c]{22mm}{Maximum Error in \mbox{Amplitude} (\si{\dB})} } \\ \midrule Rectangle & 1 & 13 & 1 -- 2 & -3.8 \\ Hanning & 1/0.5000 & 31 & 3 -- 4 & -1.5 \\ Hamming & 1/0.5400 & 41 & 3 -- 4 & -1.6 \\ Blackman & 1/0.4200 & 58 & 5 -- 6 & -1.1 \\ Bartlett & 1/0.5000 & 26 & 3 -- 4 & -1.9 \\ Kaiser-Bessel & 1/0.4021 & 67 & 7 -- 8 & -1.0 \\ Flat-Top & 1/0.2155 & 67 & 9 -- 10 & 0 \\ \bottomrule \end{tabular} \end{centering} %>>> %^^A vim: foldenable foldcolumn=4 foldmethod=marker foldmarker=<<<,>>>
{ "alphanum_fraction": 0.5800482703, "avg_line_length": 45.8671586716, "ext": "tex", "hexsha": "0f990881293b2fc39dd8e9c1ef583be0cd02aba5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alpenwasser/pitaya", "max_forks_repo_path": "doc/report/chunks/appgui.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alpenwasser/pitaya", "max_issues_repo_path": "doc/report/chunks/appgui.tex", "max_line_length": 122, "max_stars_count": 4, "max_stars_repo_head_hexsha": "a6ced99408171ffcd96c9444adfe30d2ba699f48", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alpenwasser/pitaya", "max_stars_repo_path": "doc/report/chunks/appgui.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-15T20:19:03.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-22T15:26:34.000Z", "num_tokens": 3007, "size": 12430 }
% section-1 % Copyright © 2018-2019 NextStep IT Training, a division of Smallrock Internet Services, Inc. All rights reserved. % % [Replace me: this is a chapter section, describe what is going on here.] % \providecommand{\main}{..} \documentclass[../workbook]{subfiles} \graphicspath{ {\main/_Images/} } \begin{document} \section{[Insert Section One Title]} \sectiontitles{ \sectiontitleselected{[ Insert Section One Title]} \sectiontitle{[Insert Section Two Title]} \sectiontitle{[Insert Section Three Title]} \sectiontitle{Lab - [Insert Chapter Title]} } %% [Insert Topic] \subsection{[Insert Topic One Title]} % figure - center a figure on the page. \fig{_Workbook/nsbanner} % codeblock - specify the language. \begin{codeblock}{javascript} // add.js let x = 5 let y = 10 let s1 = 'for this is a string with \'embedded quotes\'' let s2 = 'for this is a string with "embedded quotes"' console.log(x + 5) \end{codeblock} \begin{codeblock}{bash} $ node add.js 15 $ \end{codeblock} \begin{displaynote} The \$ character will be used as the command line prompt in all examples. \end{displaynote} % \begin{table}[H] % \resizebox{\tablewidth}{!}{ % \hspace*{10pt}\begin{minipage}{\textwidth} % \def\arraystretch{1.5} % \rowcolors{2}{codeblock}{white} % \begin{tabularx}{\textwidth}{|l|X|} % \hline % \textbf{Header Column One} & \textbf{Header Column Two}\\ % \hline % Row one, column one & Row one, column two\\ % \hline % Row two, column one & Row two, column two\\ % \hline % \end{tabularx} % \end{minipage} % } % \end{table} \topictable{|l|X|}{ \tableheader{Header Column One}{Header Column Two} \tablerow{Row one, column one}{Row one, column two} \tablerow{Row two, column one}{Row two, column two} } % The codeblock, table, or image is usually followed with a list of one to three bullet points. The command bulletlist, and % the environment bullets, close up the space between the items. Using the environment is preferred, because you cannot nest % an enumeration inside the bulletlist command. \bulletlist{ \item This is the first bullet point of the topic \item This is the second bullet point of the topic; use \emph{\\emph} to emphasise keywords in the topics (which should be the same as \emph{\\textit}), and \textbf{\\textbf} if bold is necessary } \begin{bullets} \item This is the first bullet point of the topic \item This is the second bullet point of the topic; use \emph{\\emph} to emphasise keywords in the topics (which should be the same as \emph{\\textit}), and \textbf{\\textbf} if bold is necessary \end{bullets} % The numbers environment closes up the space between the lines a bit over using the enumerate environment. \begin{numbers} \item This is the first bullet point of the topic \item This is the second bullet point of the topic; use \emph{\\emph} to emphasise keywords in the topics (which should be the same as \emph{\\textit}), and \textbf{\\textbf} if bold is necessary \end{numbers} %% [Insert Second Topic..., follow topics with a checkpoint (in section-2) if there is only one section] \subsection{[Insert Topic Two Title]} \bulletlist{ \item Each topic must have content } \end{document}
{ "alphanum_fraction": 0.6773718326, "avg_line_length": 31.7196261682, "ext": "tex", "hexsha": "65d17e7bc234591c9af4d04f2dcded8be23141a3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-28T10:05:03.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-28T10:05:03.000Z", "max_forks_repo_head_hexsha": "8aab8b990acb8f3dcfd3780d861cb753a970842d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nextstepitt/standard-course-template", "max_forks_repo_path": "Workbook/01_Rename-Me/section-1.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8aab8b990acb8f3dcfd3780d861cb753a970842d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nextstepitt/standard-course-template", "max_issues_repo_path": "Workbook/01_Rename-Me/section-1.tex", "max_line_length": 199, "max_stars_count": null, "max_stars_repo_head_hexsha": "8aab8b990acb8f3dcfd3780d861cb753a970842d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nextstepitt/standard-course-template", "max_stars_repo_path": "Workbook/01_Rename-Me/section-1.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 926, "size": 3394 }
\documentclass[10pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\author{Pranav Satheesh}
\title{Relativity Assignments}
\begin{document}
\maketitle
\section{Einstein's velocity addition}
In this problem, you will derive the velocity addition rule in relativity using Lorentz transformations.
Consider a particle A in the frame B. Its velocity w.r.t.\ frame B is $V_{AB}$. Another frame C moves w.r.t.\ B with a
velocity $V_{CB}$, so that B moves w.r.t.\ C with velocity $V_{BC} = -V_{CB}$. Using the \textbf{Lorentz transformation},
show that the velocity of A w.r.t.\ the frame C is
\begin{equation}
V_{AC} = \frac{V_{AB} + V_{BC}}{1 + (V_{AB} V_{BC} / c^2)}
\end{equation}
(\textit{\textbf{Hint}: You can consider the velocity to be $u =\frac{dx}{dt}$ in the frame B and then write the Lorentz
transformation equations for $dx$ and $dt$ to the frame C, where these become $\bar{dx}$ and $\bar{dt}$.})

\section{Superluminal Motion}
Astronomers have observed blobs in radio galaxies apparently moving with velocities exceeding $c$! M87, in the Virgo
cluster, is an example. The distance to this galaxy, M87, is about D = 62 million light years. One can use this distance
to convert angular separations into linear separations across the line of sight.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{lumino.png}
\caption{Blobs}
\end{figure}
\begin{enumerate}
\item Now, look at one of the blobs: the innermost one, which appears most clearly in 1996 and 1997. Approximately
calculate the velocity of the blob. Hold on a minute! Is this against special relativity postulate 2?
\item This calculation is not correct. The blobs are actually moving nearly straight towards us, with a velocity below
$c$. As the cloud rapidly approaches, the distance the light has to travel decreases, and hence the light appears to
reach us sooner. This effect can be understood from the second diagram. The cloud starts at N and moves at an angle
$\theta$ w.r.t.\ the line of sight of the observer O. Let $\vec{V}$ be the velocity of the cloud and let $t_{obs}$ be
the time at which the observer sees the light emitted from the cloud at time $t$. From the distance travelled by the
light, calculate the relation between $t_{obs}$ and $t$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{superlum2.png}
\caption{Apparent motion of the cloud}
\end{figure}
\item Now, using the $t_{obs}$ relation, figure out the transverse speed $V_{T}$ observed by the observer using
$V_{T} = dx/d t_{obs}$.
\item Can you now plot $V_{T}$ for different $\theta$ and see at what angles you see an apparent motion greater than $c$?
\end{enumerate}
\section{Relativistic Doppler Effect}
In this exercise you will derive the Doppler effect using Lorentz transformations. Consider the source to be at rest
while the receiver moves away with some velocity $v$, $v>0$. Use the subscript $s$ for the source and $r$ for the receiver.
\begin{enumerate}
\item Consider the first pulse, received by the receiver at $t_s = 0$ and $x_s = 0$. $\lambda_s$ is the wavelength of
the wave. Now, at time $t_{rs}$, the receiver receives the second pulse. Then show that
$$ c t_{rs} = \lambda_s + v t_{rs} $$
\item We now need to make a Lorentz transformation to the receiver frame and change $t_{rs}$ to $t_{r}$. Show that after
the Lorentz transformation
$$ t_{r} = \frac{t_{rs}}{\gamma}$$
\item Now substitute this in the previous relation and obtain the famous relation between the frequencies.
$$ f_{r} = \sqrt{\frac{1- \frac{v}{c}}{1+ \frac{v}{c}}} f_{s} $$
\end{enumerate}
\end{document}
{ "alphanum_fraction": 0.7446629213, "avg_line_length": 47.4666666667, "ext": "tex", "hexsha": "3221b3f07cf35f2f7cb17f90f5299d23518c043a", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-08-03T17:34:07.000Z", "max_forks_repo_forks_event_min_datetime": "2021-07-04T05:51:58.000Z", "max_forks_repo_head_hexsha": "33b766b378e3d4ee21bb5c0baec8d0eda3ebce10", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Omkar-1401/summer-school-2021", "max_forks_repo_path": "week-03/relativity-and-cosmology/Assignments/TeX/Relativity.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33b766b378e3d4ee21bb5c0baec8d0eda3ebce10", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Omkar-1401/summer-school-2021", "max_issues_repo_path": "week-03/relativity-and-cosmology/Assignments/TeX/Relativity.tex", "max_line_length": 608, "max_stars_count": 2, "max_stars_repo_head_hexsha": "317d3940fff31c43c5c040809bc6bce4a0e8393b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "HorizonIITM/summer-school-2021", "max_stars_repo_path": "week-03/relativity-and-cosmology/Assignments/TeX/Relativity.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-11T04:44:14.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-17T13:11:59.000Z", "num_tokens": 1004, "size": 3560 }
% %$Id: utilIntro.tex,v 1.2 2005-07-06 14:11:27 kbk Exp $ % \section{Utilities}\label{sec:util} \subsection{Introduction} In this section, different utility modules and routines are assembled, such as the {\tt time} module (see {\tt time.F90}), keeping track of all time calculations, the {\tt mtridiagonal} module with a Gaussian solver for systems of equations with tri-diagonal matrices (see {\tt tridiagonal.F90}), and the {\tt eqstate} module (see {\tt eqstate.F90}) with different versions of the equation of state. Also discussed are advection and diffusion routines, such as {\tt diff\_center()} and {\tt adv\_center()} for variables located at the centers of the grid cells, i.e.\ in general mean flow variables.
{ "alphanum_fraction": 0.7489711934, "avg_line_length": 33.1363636364, "ext": "tex", "hexsha": "be24fdcc996278364261d0beb3dfe9737cc04403", "lang": "TeX", "max_forks_count": 51, "max_forks_repo_forks_event_max_datetime": "2022-03-29T15:48:43.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-09T20:59:07.000Z", "max_forks_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lsht312/schism", "max_forks_repo_path": "src/GOTM3.2.5/doc/utilIntro.tex", "max_issues_count": 42, "max_issues_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f", "max_issues_repo_issues_event_max_datetime": "2022-03-03T17:42:01.000Z", "max_issues_repo_issues_event_min_datetime": "2019-08-19T21:57:12.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lsht312/schism", "max_issues_repo_path": "src/GOTM3.2.5/doc/utilIntro.tex", "max_line_length": 77, "max_stars_count": 42, "max_stars_repo_head_hexsha": "36f3ac35351c99f2f16f60f5d9701efed246293f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lsht312/schism", "max_stars_repo_path": "src/GOTM3.2.5/doc/utilIntro.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-03T03:08:10.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-12T21:48:24.000Z", "num_tokens": 193, "size": 729 }
%% Maybe mention the difference between regression and classification problems
\glsreset{GAN}
\chapter{Machine Learning and Neural Networks}
\label{cha:neuralNets}
Machine learning is the discipline of computer science where, instead of laying down the steps for a machine to produce a result given some inputs, the goal is to make the machine find by itself (``learn'') the correct course of action by being shown relevant data. A more rigorous definition is given by \textcite{machineLearning1997}.

\begin{citacao}
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at task T, as measured by P, improves with experience E. \cite[p. 2]{machineLearning1997}
\end{citacao}

In this definition the experience E can be viewed as the data that is used to teach the machine \textbf{--} this could be, for example, a dataset of images that the computer is expected to classify with the correct label (see \autoref{sub:supervised_learning}), or repeated self-play to become better at games like chess or Go, as seen in reinforcement learning (see \autoref{sub:semi-supervised_and_RL}), where AlphaZero is a notable example \cite{alphaZero2017}. The task T is what the machine is ultimately trying to achieve (e.g. classify images, play chess) and P is the measure of success in the task (e.g. percentage of accurately classified images, proportion of wins against an opponent in chess).

There are multiple approaches that fall into the category of machine learning; popular techniques include \gls{KNN}, decision trees, random forests, \gls{SVM}, linear and logistic regression, and others\footnote{
    The paper from \textcite{fashionMNIST2017} gives many performance results for different methods applied to the Fashion MNIST dataset
}. Among the existing methods, neural networks have shown very good results in recent years, especially after the resurgence of deep learning thanks to increased computational power, better use of parallelization, and the development of frameworks like PyTorch and TensorFlow.

Currently, neural networks are at the forefront of many areas like image classification\footnote{
    Big breakthrough in 2012 with AlexNet and the resurgence of Convolutional Neural Networks \cite{alexnet2012}
}, reinforcement learning\footnote{
    AlphaGo was the first ever computer to be able to defeat a human professional player at the game of Go \cite{alphaGo2016} and its successor AlphaZero was able to surpass it by learning entirely through self-play \cite{alphaZero2017}
}, and generative modeling, where \acp{GAN}, which are the main focus of this document, are showing very good results.

This chapter will explain what neural networks are and how they can learn by themselves; this learning process is also often called training. First, however, it is worth giving a brief description of the different types of learning.

\input{chapters/NeuralNets/typesML}
\input{chapters/NeuralNets/ANN}
\input{chapters/NeuralNets/gradDesc}
\input{chapters/NeuralNets/backprop}
\input{chapters/NeuralNets/others}
{ "alphanum_fraction": 0.8026189716, "avg_line_length": 97.84375, "ext": "tex", "hexsha": "e67bf2cbea5eb061ad52760cce95bec00c05148d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0fa63fff9c6a3bbee57af38683c492a8b120e24a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PatrickHoeckler/tcc_gan", "max_forks_repo_path": "Overleaf/chapters/NeuralNets/index.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0fa63fff9c6a3bbee57af38683c492a8b120e24a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PatrickHoeckler/tcc_gan", "max_issues_repo_path": "Overleaf/chapters/NeuralNets/index.tex", "max_line_length": 466, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0fa63fff9c6a3bbee57af38683c492a8b120e24a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PatrickHoeckler/tcc_gan", "max_stars_repo_path": "Overleaf/chapters/NeuralNets/index.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-05T06:19:44.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-20T22:17:18.000Z", "num_tokens": 700, "size": 3131 }
\documentclass[usepdftitle=false,professionalfonts,compress ]{beamer} %Packages to be included \usepackage[latin1]{inputenc} \usepackage{graphics,epsfig, subfigure} \usepackage{url} \usepackage[T1]{fontenc} %\usepackage{listings} \usepackage{hyperref} \usepackage[english]{babel} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%% PDF meta data inserted here %%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \hypersetup{ pdftitle={Semantic Analysis}, pdfauthor={Kenneth Sundberg} } %%%%%% Beamer Theme %%%%%%%%%%%%% \usetheme[]{Warsaw} \title{Semantic Analysis} \subtitle{CS 5300} \author{Kenneth Sundberg} \date{} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%% Begin Document %%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \frame[plain]{ \frametitle{} \titlepage \vspace{-0.5cm} \begin{center} %\frontpagelogo \end{center} } \frame{ \tableofcontents[hideallsubsections] } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%% Content starts here %%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Symbol Table} \subsection{Semantic Questions} { \begin{frame}\frametitle{Given an Identifier} \begin{itemize} \item What kind of value is stored in x \item How big is x \item If x is a procedure what is its signature \item What is the lifetime of x \item Where and how is x allocated \end{itemize} \end{frame}} \subsection{Structure} { \begin{frame}\frametitle{Scope} \begin{itemize} \item Symbol Table needs to accomodate scope rules \item Inner scope names eclipse outer scope names \item Names in disjoint scopes do not conflict \end{itemize} \end{frame}} { \begin{frame}\frametitle{Lexical Scope} \begin{itemize} \item Code is organized into a tree structure \item Tree can be traversed with a stack \item Enter a scope $\rightarrow$ Push a table \item Exit a scope $\rightarrow$ Pop a table \item Lookup names in successive scopes until found \end{itemize} \end{frame}} { \begin{frame}\frametitle{Scope in CPSL} \begin{itemize} \item Three levels - Maximum size of symbol table stack is 3 \item Local \item Global \item Predefined \end{itemize} \end{frame}} \section{Type System} \subsection{Purpose} { \begin{frame}\frametitle{Runtime Safety} \begin{itemize} \item Fail to compile for some types of errors \item Add runtime checks if needed \end{itemize} \end{frame}} { \begin{frame}\frametitle{Expressiveness} \begin{itemize} \item Quantities with units are more expressive \item Type can provide a unit \begin{itemize} \item Especially with general type construction \end{itemize} \end{itemize} \end{frame}} { \begin{frame}\frametitle{Generate Better Code} \begin{itemize} \item Type can inform code generation \item Type can inform optimization \end{itemize} \end{frame}} \subsection{Components} { \begin{frame}\frametitle{Base Types} \begin{itemize} \item Numbers \item Characters \item Boolean \end{itemize} \end{frame}} { \begin{frame}\frametitle{Compound Types} \begin{itemize} \item Arrays \item Strings \item Enumerated Types \item Structures \item Variants \item Pointers \end{itemize} \end{frame}} { \begin{frame}\frametitle{Type Constructors} \begin{itemize} \item Decorators \item Generics \end{itemize} \end{frame}} \subsection{Inference} { \begin{frame}\frametitle{Type Checking} \begin{itemize} \item Central to Semantic analysis \item Strongly typed languages do not compile in presence of type errors \end{itemize} \end{frame}} { \begin{frame}\frametitle{Type Equivalence} \begin{itemize} \item By Name \begin{itemize} \item Types are 
identical only if names are identical \end{itemize} \item By Structure \begin{itemize} \item Types are identical if structure is identical \end{itemize} \end{itemize} \end{frame}} { \begin{frame}\frametitle{Inference Rules} \begin{itemize} \item Operators of a type system \item Implicit v. Explicit \end{itemize} \end{frame}} { \begin{frame}\frametitle{Declarations} \begin{itemize} \item Give names to types \item Give types to variables \item Add to symbol table \end{itemize} \end{frame}} { \begin{frame}\frametitle{Expressions} \begin{itemize} \item Type of expressions inferred from components and operators \end{itemize} \end{frame}} \end{document}
{ "alphanum_fraction": 0.6448087432, "avg_line_length": 13.6160714286, "ext": "tex", "hexsha": "5490f0226311deb51bef088295e0e72c6870747c", "lang": "TeX", "max_forks_count": 91, "max_forks_repo_forks_event_max_datetime": "2021-01-10T14:19:02.000Z", "max_forks_repo_forks_event_min_datetime": "2017-03-29T00:23:21.000Z", "max_forks_repo_head_hexsha": "976552ffca4d9d6e822c74a19bc108f3435bdab1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "McKayRansom/CourseMaterials", "max_forks_repo_path": "cs5300/lectures/05-SemanticAnalysis.tex", "max_issues_count": 20, "max_issues_repo_head_hexsha": "976552ffca4d9d6e822c74a19bc108f3435bdab1", "max_issues_repo_issues_event_max_datetime": "2019-12-01T05:58:12.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-29T22:26:47.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "McKayRansom/CourseMaterials", "max_issues_repo_path": "cs5300/lectures/05-SemanticAnalysis.tex", "max_line_length": 75, "max_stars_count": 6, "max_stars_repo_head_hexsha": "976552ffca4d9d6e822c74a19bc108f3435bdab1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "McKayRansom/CourseMaterials", "max_stars_repo_path": "cs5300/lectures/05-SemanticAnalysis.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-01T22:26:43.000Z", "max_stars_repo_stars_event_min_datetime": "2017-04-07T17:57:48.000Z", "num_tokens": 1324, "size": 4575 }
\chapter{Examples without Weights} \label{chapt:exampleswithoutweights} \section{Introduction} The purpose of this chapter is to present some non-trivial examples that do not involve weights. This is definitely work in progress, and examples will be added, corrected and expanded in future releases. Example contributions from users would be gratefully received. \section{Spanish Morphology} \subsection{Spanish Verbs} \subsubsection{Mind-Tuning} I chose Spanish as an initial example because Spanish is widely studied and extensively documented, and I have some basic familiarity with it. I will start with modeling Spanish verbs---in particular, the highly productive regular first conjugation. The various Spanish verb classes, and their conjugations, are documented in numerous published books\footnote{See books such as \emph{501 Spanish Verbs} and especially the verb-conjugation books published by Larousse and Bescherelle.} and are even available on the Internet.\footnote{See, for example, \url{http://www.conjugacion.es} for the conjugations of Spanish verbs.} I hope eventually to offer a Kleene script, downloadable from \url{www.kleene-lang.org}, that handles all Spanish verbs. I emphasize that my approach in the following script is only one of many. By long lexicographic convention, Spanish verbs are listed in dictionaries under the infinitive form, e.g.\@ the infinitive \emph{amar}, meaning ``to love,'' is the conventional \emph{dictionary citation form}. In Spanish and other languages, the traditional dictionary citation forms should not be confused with baseforms or roots. If anything, the root of the ``love'' verb is just \emph{am}, to which various suffixes are attached. The infinitive citation form, \emph{amar}, is really just \emph{am} with an \emph{ar} suffix; it is just one of the many conjugated forms of the verb. However, in a bow to tradition, we will initially list verbs in their infinitive forms, and analyses of verbs will show the infinitive form to facilitate lookup in traditional dictionaries. The script will use alternation rules to strip the infinitive suffixes from the lower side of the \fsm{}s before adding other suffixes to implement the various conjugated forms. \subsubsection{Modeling Spanish First-conjugation Verbs} A verb class traditionally called the \emph{first conjugation} contains thousands of regular verbs whose infinitive forms end in \emph{-ar}, including \emph{amar} ``to love,'' \emph{cantar} ``to sing,'' and \emph{hablar} ``to speak.'' My Larousse \emph{Conjugación} book calls this verb class number 3, and I will use the Larousse numbers (with some modification) to distinguish the various verb-conjugation classes. We can start by collecting a sampling of this class of verbs in a simple union of the infinitive forms. \begin{Verbatim} // "First Conjugation" class 3 $V3CitationForms = amar | cantar | cortar | hablar | simular ; // and continue to add hundreds more \end{Verbatim} \noindent See \url{http://www.www.conjugacion.es/del/verbo/amar.php}, the Larousse or Bescherelle books, or any of the other sources for a table showing all the conjugated forms. We will limit ourselves for now to the single-word conjugations, ignoring composite multi-word conjugations. Clitic pronouns will also be ignored for the time being. 
These charts contain conjugation groups, almost always of six forms, showing the \begin{Verbatim} 1st person singular 2nd person singular 3rd person singular 1st person plural 2nd person plural 3rd person plural \end{Verbatim} \noindent forms for present indicative, preterite imperfective, preterite perfect, future indicative, conditional, etc. For example, the six present indicative forms of \emph{amar} are \begin{Verbatim} amo amas ama amamos amáis aman \end{Verbatim} \noindent showing that \emph{amo} (`I love') is the first-person singular present indicative form of \emph{amar}, \emph{amas} (`thou (you singular) lovest') is the second-person singular, \emph{ama} (`he/she loves') is the third-person singular, \emph{amamos} (`we love') is the first-person plural, \emph{amáis} (`you (plural) love') is the second-person plural, and \emph{aman} (`they love') is the third-person plural. The root is \emph{am}, and the six suffixes for this group are pretty obviously \emph{o}, \emph{as}, \emph{a}, \emph{amos}, \emph{áis} and \emph{an}. We want to build a morphological analyzer that will eventually accept any orthographical verb string, e.g.\@ \emph{amo}, and return a string that includes the infinitive \emph{amar}, the information that it is a verb, and the information that it is the first-person singular present indicative form. We will invent and employ a number of multi-character symbol tags to convey part-of-speech, person, number, tense, aspect and mood information. Our \fsm{} will include paths like the following, ignoring alignment and epsilons, \begin{Verbatim} Upper: amar[VERB][V3][PresIndic][1P][Sg] Lower: amo \end{Verbatim} \noindent where \texttt{[VERB]}, \texttt{[V3]}, \texttt{[PresIndic]}, \texttt{[1P]} and \texttt{[Sg]} are multi-character symbols. Conversely, we want to be able to apply the same \fsm{} in a downward direction to the upper-side string, and see the output \emph{amo}. While the spellings of the lower-side words in the \fsm{} are determined by the rules of Spanish orthography, the design of the upper-side analysis strings is in our hands, and some care and study should go into that design. This is just one possible design of many.\footnote{Another possibility is to include \init{xml}-like symbols to mark up the analysis strings.} Don't worry too much about the spelling of the multi-character symbols; we will show later that they can be changed trivially at a later time, using alternation rules. Multi-characters symbols like \texttt{[V3]}, identifying the Larousse conjugation class, can also be changed trivially to other tags (e.g.\@ to reflect some other numbering system) or simply deleted. Here is the list that we will use in the script: \begin{center} \begin{tabular}{|l|l|} \hline [VERB] & verb (part of speech) \\ \hline [V1], [V2], [V3], [V4], etc. 
& verb conjugation classes\\ \hline [PresIndic] & present indicative\\ \hline [PretImperf] & preterite imperfect\\ \hline [PretPerf] & preterite perfect\\ \hline [FutIndic] & future indicative\\ \hline [Cond] & conditional\\ \hline [PresSubj] & present subjunctive\\ \hline [ImperfSubj] & imperfect subjunctive\\ \hline [Var1] & variant 1 (of the imperfect subjunctive)\\ \hline [Var2] & variant 2 (of the imperfect subjunctive)\\ \hline [FutSubj] & future subjunctive\\ \hline [Imptv] & imperative\\ \hline [Infin] & infinitive\\ \hline [PresPart] & present participle\\ \hline [PastPart] & past participle\\ \hline \end{tabular} \end{center} Starting from the dictionary citation forms, such as \emph{amar}, we want to leave the full \emph{amar} on the upper side but effectively strip off the \emph{-ar} infinitive ending on the lower side, leaving just the root. This can be accomplished with the following first-draft function, to be expanded and generalized later: \begin{Verbatim} $^stripRegInfinEnding($fst, $classTag) { return ( $fst _o_ ar -> "" / _ # ) ('[VERB]' $classTag):"" ; } \end{Verbatim} \noindent If we call this function on the word \emph{amar}, with the \$classTag \verb![V3]!, \begin{Verbatim} $stem = $^stripRegInfinEnding(amar, '[V3]') ; \end{Verbatim} \noindent the result is an \fsm{} with the following path, ignoring alignment and epsilons. \begin{Verbatim} Upper: amar[VERB][V3] Lower: am \end{Verbatim} Because the conjugations are in groups of six, we can facilitate modeling each conjugation group by defining the following \verb!$^conj6()! function, which takes seven arguments, the last six being the six suffixes for a conjugation group. \begin{Verbatim} $^conj6($tags, $OnePerSg, $TwoPerSg, $ThreePerSg, $OnePerPl, $TwoPerPl, $ThreePerPl) { return $tags:"" ( ( '[1P]' '[SG]' ):$OnePerSg | ( '[2P]' '[SG]' ):$TwoPerSg | ( '[3P]' '[SG]' ):$ThreePerSg | ( '[1P]' '[PL]' ):$OnePerPl | ( '[2P]' '[PL]' ):$TwoPerPl | ( '[3P]' '[PL]' ):$ThreePerPl ) ; } \end{Verbatim} \noindent And we can then use \verb!$^conj6()! 
to define the regular suffixes for the first-conjugation verbs including \emph{amar}: \begin{Verbatim} $regArVerbSuffs = ( $^conj6( '[PresIndic]', o, as, a, amos, áis, an ) | $^conj6( '[PretImperf]', aba, abas, aba, ábamos, abais, aban ) | $^conj6( '[PretPerf]', é, aste, ó, amos, asteis, aron ) | $^conj6( '[FutIndic]', aré, arás, ará, aremos, aréis, arán ) | $^conj6( '[Cond]', aría, arías, aría, aríamos, aríais, arían ) | $^conj6( '[PresSubj]', e, es, e, emos, éis, en ) | $^conj6( '[ImperfSubj]' '[Var1]', ara, aras, ara, áramos, arais, aran ) | $^conj6( '[ImperfSubj]' '[Var2]', ase, ases, ase, ásemos, aseis, asen ) | $^conj6( '[FutSubj]', are, ares, are, áremos, areis, aren ) | $^conj6( '[Imptv]', '[Defective]', a, e, emos, ad, en ) | '[Infin]':(ar) | '[PresPart]':(ando) | '[PastPart]':(ado) ) ; \end{Verbatim} The first-person singular present indicative suffix path will look like this: \begin{Verbatim} Upper: [PresIndic][1P][Sg] Lower: o \end{Verbatim} \noindent When concatenated with the truncated infinitive, the full path (ignoring alignment and epsilons) looks like \begin{Verbatim} Upper: amar[VERB][V3][PresIndic][1P][Sg] Lower: amo \end{Verbatim} We can then build our first verb-conjugating \fsm{}, for first-conjugation verbs, with \begin{Verbatim} $V3 = $^stripRegInfinEnding($V3CitationForms, '[V3]') $regArVerbSuffs ; test $V3 ; \end{Verbatim} If we apply this \fsm{} in an upward direction to the string \texttt{amo}, meaning ``I love,'' the output is the string \texttt{amar[VERB][V3][PresIndic][1P][Sg]}, which we can read as the citation form \emph{amar}, which is a verb of class 3, in the present indicative first-person singular conjugation. And we can do the inverse, applying the same \fsm{} in a downward direction to the input \texttt{amar[VERB][V3][PresIndic][1P][Sg]}, and the result will be \texttt{amo}. The \fsm{} thus \emph{transduces} from strings representing conjugated verbs to strings representing analyses of those verbs, and vice versa. Finally, we can define a convenient function, here called \verb!^conj()!, for testing that prints out all the conjugated forms for a verb, in a canonical order that can be compared to the usual published and online charts. \begin{Verbatim} ^conj($fst, $infin) { pr("\nPresent Indicative") ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresIndic]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nPreterit Imperfect") ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretImperf]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nPreterit Perfect") ; pr($^lowerside( ( $infin '[VERB]' . '[PretPerf]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretPerf]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . 
'[PretPerf]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretPerf]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretPerf]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PretPerf]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nFuture Indictive") ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutIndic]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nConditional") ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Cond]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nPresent Subjunctive") ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[PresSubj]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nImperfect Subjunctive, Var 1") ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var1]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nImperfect Subjunctive, Var 2") ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[ImperfSubj]' '[Var2]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nFuture Subjunctive") ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[1P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[FutSubj]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nImperative") ; pr($^lowerside( ( $infin '[VERB]' . '[Imptv]' '[2P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . 
'[Imptv]' '[3P]' '[SG]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Imptv]' '[1P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Imptv]' '[2P]' '[PL]' ) _o_ $fst)) ; pr($^lowerside( ( $infin '[VERB]' . '[Imptv]' '[3P]' '[PL]' ) _o_ $fst)) ; pr("\nInfinitive") ; pr($^lowerside( ( $infin '[VERB]' . '[Infin]' ) _o_ $fst)) ; pr("\nPresent Participle (Gerund)") ; pr($^lowerside( ( $infin '[VERB]' . '[PresPart]' ) _o_ $fst)) ; pr("\nPast Participle") ; pr($^lowerside( ( $infin '[VERB]' . '[PastPart]' ) _o_ $fst)) ; } \end{Verbatim} \noindent One can then simply call \verb!^conj($fst, $infin)! to test various verbs, printing out the entire conjugation paradigm. \begin{Verbatim} ^conj($V3, amar) ; ^conj($V3, cortar) ; \end{Verbatim} \noindent The coverage of our exampler can be expanded trivially to other verbs of the same first-conjugation class by simply adding more citation forms to the definition of \verb!$V3CitationForms!. \subsubsection{Expanding the Spanish Verb Morphological Analyzer} In addition to the first conjugation, a large class consisting of regular \mbox{\emph{-ar}} verbs, the second conjugation consists of regular \mbox{\emph{-er}} verbs, including \emph{beber} (`to drink'), and the third conjugation consists of regular \mbox{\emph{-ir}} verbs, including \emph{vivir} (`to live'). Both my Larousse and Bescherelle conjugation guides identify a total of 90 verb-conjugation classes for Spanish, though their numbering systems differ, and even a single publisher may not use consistent numbering from one edition to another. The 90 classes include a large number of irregular verbs with idiosyncratic conjugations. The \verb!spanverbs-0.9.4.0.kl! script is offered on the downloads page from \url{www.kleene-lang.org}. It is work in progress and incomplete in several ways: \begin{enumerate} \item It covers only about 2/3 of the 90 verb-conjugation classes. \item The definitions of the various ``CitationForms'' for each class are incomplete, but could be expanded easily. \item The grammar does not yet handle clitic pronouns. \end{enumerate} \noindent The script is in \init{utf}-8 and can be loaded from the Kleene \gui{} by invoking \begin{Verbatim} source "path/to/spanverbs-0.9.4.0.kl", "UTF-8" ; \end{Verbatim} \noindent from the pseudo-terminal, replacing ``path/to'' with the path to where you have stored the script on your file system. The final result of the compilation is \verb!$spanverbs!, and it can be tested using the \verb!^conj($spanverbs, infinitiveform)! function, e.g. \begin{Verbatim} ^conj($spanverbs, hablar) ; \end{Verbatim} You can also test \verb!$spanverbs! in the usual way, invoking \begin{Verbatim} test $spanverbs ; \end{Verbatim} \noindent and then entering forms like \emph{canto}, \emph{cantas} and \emph{cantamos} in the lower-side field of the test window. \subsection{Spanish Nouns} This section is currently empty. \section{Latin Morphology} This section is currently empty. \section{Aymara Morphology} This section is currently empty.
{ "alphanum_fraction": 0.6046965776, "avg_line_length": 37.4460694698, "ext": "tex", "hexsha": "758b356710088598ba9ac5ea05ecb9431e887098", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2019-08-24T12:16:26.000Z", "max_forks_repo_forks_event_min_datetime": "2017-06-20T03:29:18.000Z", "max_forks_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "cscott/kleene-lang", "max_forks_repo_path": "doc/user/kleene/chapt5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "cscott/kleene-lang", "max_issues_repo_path": "doc/user/kleene/chapt5.tex", "max_line_length": 142, "max_stars_count": 8, "max_stars_repo_head_hexsha": "938beb074bcf3706852630881da15e5badb730d5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "cscott/kleene-lang", "max_stars_repo_path": "doc/user/kleene/chapt5.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-08T04:23:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-13T05:56:54.000Z", "num_tokens": 6484, "size": 20483 }
\documentclass[a4paper]{article} \usepackage[inline]{enumitem} \usepackage{tasks} \usepackage{array} \usepackage{tabularx} \settasks{label =•} \usepackage{enumitem} \setenumerate{label = \alph*)} \newcolumntype{Y}[1]{>{\centering\arraybackslash} X{#1}} \input{head} \begin{document} %------------------------------- % TITLE SECTION %------------------------------- \fancyhead[C]{} \hrule \medskip % Upper rule \begin{minipage}{0.295\textwidth} \raggedright \footnotesize Winston Peng \hfill\\ 50364686 \hfill\\ [email protected] \end{minipage} \begin{minipage}{0.4\textwidth} \centering \large Homework 1\\ \normalsize Intro To Discrete Structures, 2021\\ \end{minipage} \begin{minipage}{0.295\textwidth} \raggedleft \today\hfill\\ \end{minipage} \medskip\hrule \bigskip %------------------------------- % CONTENTS %------------------------------- \section{Is or Is Not Proposition?} %\blindtext %\subsection{} %Some equations \begin{enumerate} \item\textbf{CSE 115 is a prerequisite of CSE 191.} Proposition. CSE 115 is not a prerequisite of CSE 191. \item\textbf{Do we have class today?} Not a proposition. Questions are not declarative statements. \item\textbf{Amazon is a rainforest.} Proposition. Amazon is not a rainforest. \item\textbf{All integers are even.} Proposition. Not all integers are even. \item\textbf{Give me a glass of water.} Not a proposition. Commands are not declarative statements. \end{enumerate} % \begin{align*} % y &= \sum\limits_{i,k} m_i \cdot f^k \\ % x &= % \underset{11}{\underbrace{3 + 8}} + 5 + 7 % \end{align*} % \subsection{Second Subtask} % \blindtext \bigskip %------------------------------------------------ \section{Evaluate Truth Value} % \begin{itemize*} % \item p: False (F). % \item q: True (T). % \end{itemize*} \begin{tasks}(4) \task \textit{p}:False(F). \task \textit{q}:True(T) \task \textit{r}:False(F). \task \textit{s}:True(T). 
\end{tasks} \begin{enumerate} \item \boldmath $p \implies q \land r$ \unboldmath $F \implies T \land F$\\ $F \implies F$\\ $T$ \item \boldmath $(p \implies q) \land r$ \unboldmath $(F \implies T) \land F$\\ $T \land F$\\ $F$ \item \boldmath $p \land \neg q \iff q \implies p$ \unboldmath $F \land \neg T \iff T \implies F$\\ $F \iff F$\\ $T$ \item \boldmath $p \land \neg q \oplus r \implies \neg s \lor q$ \unboldmath $F \land \neg T \oplus F \implies \neg T \lor T$\\ $F \oplus F \implies T$\\ $F \implies T$\\ $T$ \item \boldmath $(p \land \neg q) \oplus (r \implies (\neg s \lor q))$ \unboldmath $(F \land \neg T) \oplus (F \implies (\neg T \lor T))$\\ $F \oplus (F \implies T)$\\ $F \oplus T$\\ $T$ \end{enumerate} % \subsection{First Subtask} \bigskip %------------------------------------------------ \section{Truth Tables} % \begin{boldmath} 32 * 15 + p \implies 13 \end{boldmath} \\ % $32 * 15 + p \implies 13$\\ \begin{enumerate} \item Make truth table for $p \lor (\neg q \land r)$ \begin{center} % \begin{tabular}{ | c | c | c |c| } % \begin{tabular}{ | m{1cm} | m{1cm} | m{1cm} |m{5cm}| } \begin{tabularx}{0.8\textwidth}{ | >{\centering\arraybackslash} X | >{\centering\arraybackslash} X | >{\centering\arraybackslash} X | >{\centering\arraybackslash} X | } % \begin{tabularx}{0.8\textwidth}{ | X | X | X | X | } \hline % \begin{centering} \textbf{p} & \textbf{q} & \textbf{r} & $\mathbf{p \boldsymbol{\lor} \boldsymbol{(\neg} q \boldsymbol{\land} r\boldsymbol{)}}$ \\ [1ex] % {\boldmath p} & {\boldmath q} & {\boldmath r} & {\boldmath p \lor (\neg q \land r)} % \bold % \end{centering} \hline T & T & T & T \\ \hline T & T & F & T \\ \hline T & F & T & T \\ \hline T & F & F & T \\ \hline F & T & T & F \\ \hline F & T & F & F \\ \hline F & F & T & T \\ \hline F & F & F & F \\ \hline \end{tabularx} \end{center} \item Find correct logical expression for truth table $\neg p \lor \neg q$ \begin{center} \begin{tabularx}{0.8\textwidth}{| Y | Y | Y |} \hline \textbf{p} & \textbf{q} & \textbf{?} \\ [1ex] \hline F & F & T \\ \hline F & T & T \\ \hline T & F & T \\ \hline T & T & F \\ \hline \end{tabularx} \end{center} \end{enumerate} \bigskip %------------------------------------------------------- \section{Translate Math To English} \textbf{Propositions:} \begin{itemize} \item p: It was below freezing \item q: it was snowing \item r: the road was closed \end{itemize} \textbf{Problems:} \begin{enumerate} \boldmath \item $q \boldsymbol{\implies} p$ If it was snowing, then it was below freezing. \\ \item $p \land q$ It was below freezing and it was snowing. \\ \item $\neg(p \land r)$ It was not the case that it was both below freezing and the road was closed. \\ \item \textbf{$r \implies (p \lor q)$} If the road was closed, it was either below freezing or snowing. \\ \item \textbf{$r \iff (p \land q)$} The road was closed if and only if it was below freezing and snowing. \\ \unboldmath \end{enumerate} \bigskip %----------------------------------------------------------- \section{Translate English To Math} \textbf{Propositions:} \begin{itemize} \item p: The applicant has passed the learner permit test. \item q: The applicant has passed the road test. \item r: The applicant is allowed a driver's licence. \end{itemize} \textbf{Problems:} \begin{enumerate}\bfseries \item The applicant did not pass the road test but passed the learner permit test. $\neg q \land p$ \\ \item If the applicant passes the learner permit test and the road test, then the applicant is allowed a driver's license.
$(p \land q) \implies r$ \\ \item Passing the learner permit test and the road test are necessary for being allowed a driver's license. $r \implies (p \land q)$ \\ \item The applicant passed either the learner permit test or the road test, but not both. $p \oplus q$ \\ \item It is not true that the applicant does not pass the road test and is allowed a driver's license. $\neg(\neg q \land r)$ \end{enumerate} \bigskip %--------------------------------------------------------- \section{Tautology, Contingency, or Contradiction?} \textbf{What is \boldmath $(p \land q) \implies q$}? \unboldmath\\ \begin{center} \begin{tabularx}{0.8\textwidth}{| Y | Y | Y |} \hline \textbf{\textit{p}} & \textbf{\textit{q}} & \boldmath $(p \land q) \implies q$ \unboldmath \\ [1ex] \hline T & T & T \\ \hline T & F & T \\ \hline F & T & T \\ \hline F & F & T \\ \hline \end{tabularx} \end{center} $(p \land q) \implies q$ is a tautology, as it always results in T. \\ \bigskip %----------------------------------------------------------- \section{Extra Credit: Equivalence Laws} Show $\neg(\neg p \land q) \land (p \lor q)$ is logically equivalent to $p$. \\ \begin{tabularx}{0.4\textwidth}{X X} $\neg(\neg p \land q) \land (p \lor q)$ & Hypothesis \\ $(p \lor \neg q) \land (p \lor q)$ & De Morgan's Law \\ $p \lor (\neg q \land q)$ & Distributive Law \\ $p \lor F$ & Negation Law \\ $p$ & Identity Law % $(\neg q \land p) \land (p \lor q)$ & Commutative Law \\ % $\neg q \land (p \land p) \lor q$ & Associative Law \\ % $\neg q \land p \lor q$ & Idempotent Law \\ % $\neg q \lor q \land p$ & Commutative Law \\ % $T \land p$ & Negation Law \\ % $p$ & Identity Law \\ \end{tabularx} \bigskip Show $\neg(p \lor \neg(p \land q))$ is logically equivalent to $F$. \\ \begin{tabularx}{0.4\textwidth}{X X} $\neg(p \lor \neg(p \land q))$ & Hypothesis \\ $\neg(p \lor (\neg p \lor \neg q))$ & De Morgan's Law \\ $\neg p \land (p \land q)$ & De Morgan's Law \\ $(\neg p \land p) \land q$ & Associative Law \\ $F \land q$ & Negation Law \\ $F$ & Domination Law % $(\neg p \land p) \land (\neg p \land q)$ & Distributive Law \\ % $F \land (\neg p \land q)$ & Negation Law \\ % $(F \land \neg p) \land (F \land q)$ & Distributive Law \\ % $F \land F$ & Domination Law \\ % $F$ % $\neg p \lor q$ & Identity Laws \\ % $p \implies q$ & Conditional Identity \end{tabularx} \end{document}
{ "alphanum_fraction": 0.5629217474, "avg_line_length": 29.2283737024, "ext": "tex", "hexsha": "94aed57362c007107e02ae34ad69ac36aacc31e8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "667cf9d51525c93889fed27730c080fb2570b622", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CreativePenguin/buffalo-discrete-structures-cse191", "max_forks_repo_path": "hw01.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "667cf9d51525c93889fed27730c080fb2570b622", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CreativePenguin/buffalo-discrete-structures-cse191", "max_issues_repo_path": "hw01.tex", "max_line_length": 176, "max_stars_count": 1, "max_stars_repo_head_hexsha": "667cf9d51525c93889fed27730c080fb2570b622", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CreativePenguin/buffalo-discrete-structures-cse191", "max_stars_repo_path": "hw01.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-25T12:57:32.000Z", "max_stars_repo_stars_event_min_datetime": "2021-09-25T12:57:32.000Z", "num_tokens": 2806, "size": 8447 }
\chapter{Using Nuance 9} \label{Chapter:Nuance9} \author{Manny Rayner} \section{Overview of Nuance 9} \label{Section:Nuance9Overview} \section{Making your grammars non-recursive} \label{Section:Nuance9Nonrecursive} \section{Strcat semantics} \label{Section:StrcatSemantics}
{ "alphanum_fraction": 0.7964285714, "avg_line_length": 17.5, "ext": "tex", "hexsha": "df7f259e00b8593d929683d8a41b1cb5e01d58b6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5c3e5013a3048da7d68a8a43476ad84d3ea4bb47", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TeamSPoon/logicmoo_nlu", "max_forks_repo_path": "ext/regulus/doc/Cookbook/nuance-9.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "44025b6e389e2f2f7d86b46c1301cab0604bba26", "max_issues_repo_issues_event_max_datetime": "2020-02-02T13:12:34.000Z", "max_issues_repo_issues_event_min_datetime": "2020-02-02T13:12:34.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "logicmoo/old_logicmoo_workspace", "max_issues_repo_path": "pack/logicmoo_nlu/prolog/regulus/doc/Cookbook/nuance-9.tex", "max_line_length": 44, "max_stars_count": 6, "max_stars_repo_head_hexsha": "5c3e5013a3048da7d68a8a43476ad84d3ea4bb47", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TeamSPoon/logicmoo_nlu", "max_stars_repo_path": "ext/regulus/doc/Cookbook/nuance-9.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-28T19:30:28.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-27T12:08:02.000Z", "num_tokens": 85, "size": 280 }
\documentclass{article} \usepackage{parskip} \usepackage[T1]{fontenc} \usepackage{graphicx} \usepackage[a4paper, total={6in, 8in}]{geometry} \usepackage[scale=.97]{sourcecodepro} \usepackage{titlesec} \title{CCE1013 - Computer Logic 1} \author{Giorgio Grigolo\\{\small B.Sc. Mathematics and Computer Science}} \date{Year 1, Semester 1 - 2021/22} % \titleformat{\section} % {\scshape}{\thesection}{1em}{} \titleformat{\section}[block] % section {\Large\scshape} {} {0pt}{\filcenter}[] \titlespacing*{\section} {0em} %left {2em} %before {2em} %after \begin{document} \maketitle \tableofcontents \section{Introduction} The following report describes the operation of a 4-instruction arithmetic logic unit. The instructions are \texttt{ADD, ADDC, SUB, SUBB}, represented by opcodes \texttt{00, 01, 10, 11} respectively. It is to be noted that this implementation of the above ALU outputs the $C_o$ as the MSB by turning on the \textbf{red LED} (left) and outputs the $Y$ as the LSB by turning on the \textbf{green LED} (right). \newpage \section{Proof of Correct Operation} \subsection{Opcode \texttt{00 - ADD}} With this opcode selected, the ALU performs the addition of bits $A$ and $B$, whilst ignoring the carry in represented by $C_i$. \[A+B+0 = C_o Y\] \vspace{9em} \subsubsection{Truth table} \begin{center} \Large \begin{tabular}{c|c|c||c|c||c|c|c} $A$ & $B$ & $C_i$ & $Op_1$ & $Op_2$ & $C_{o}$ & $Y$ \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ \end{tabular} \end{center} \newpage \subsubsection{Pictures} \includegraphics[width=\textwidth]{./figures/00000.jpg} \begin{center} All inputs are off, and thus no LED is on. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/00100.jpg} \begin{center} $ABC_i = 001$ and so $C_o Y= 00$.\\ \end{center} \includegraphics[width=\textwidth]{./figures/01000.jpg} \begin{center} $ABC_i = 010$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/01100.jpg} \begin{center} $ABC_i = 011$ and so $C_o Y= 01$. \end{center} \includegraphics[width=\textwidth]{./figures/10000.jpg} \begin{center} $ABC_i = 100$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/10100.jpg} \begin{center} $ABC_i = 101$ and so $C_o Y= 01$. \end{center} \includegraphics[width=\textwidth]{./figures/11000.jpg} \begin{center} $ABC_i = 110$ and so $C_o Y= 10$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/11100.jpg} \begin{center} $ABC_i = 111$ and so $C_o Y= 10$. \end{center} \subsection{Opcode \texttt{01 - ADDC}} With this opcode selected, the ALU performs the addition of bits $A$ and $B$ while also adding the carry bit represented by $C_i$. \[A+B+C_i = C_o Y\] \vspace{9em} \subsubsection{Truth table} \begin{center} \Large \begin{tabular}{c|c|c||c|c||c|c|c} $A$ & $B$ & $C_i$ & $Op_1$ & $Op_2$ & $C_{o}$ & $Y$ \\ \hline 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 & 1 & 1 \\ \end{tabular} \end{center} \newpage \subsubsection{Pictures} \includegraphics[width=\textwidth]{./figures/00001.jpg} \begin{center} All inputs are off, and thus no LED is on.
\end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/00101.jpg} \begin{center} $ABC_i = 001$ and so $C_o Y= 01$.\\ \end{center} \includegraphics[width=\textwidth]{./figures/01001.jpg} \begin{center} $ABC_i = 010$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/01101.jpg} \begin{center} $ABC_i = 011$ and so $C_o Y= 10$. \end{center} \includegraphics[width=\textwidth]{./figures/10001.jpg} \begin{center} $ABC_i = 100$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/10101.jpg} \begin{center} $ABC_i = 101$ and so $C_o Y= 10$. \end{center} \includegraphics[width=\textwidth]{./figures/11001.jpg} \begin{center} $ABC_i = 110$ and so $C_o Y= 10$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/11101.jpg} \begin{center} $ABC_i = 111$ and so $C_o Y= 11$. \end{center} \subsection{Opcode \texttt{10 - SUB}} With this opcode selected, the ALU performs the subtraction of bits $A$ and $B$, or rather, the addition of $A$, $\bar{B}$ and 1, whilst ignoring the borrow bit $C_{i}$. When a negative value is obtained, the resultant representation will be one in 2's complement, and so $-1_{10}$ would be represented as $11_2 = -2_10 + 1_10$. \[A+\bar{B}+1 = C_o Y\] \vspace{9em} \subsubsection{Truth table} \begin{center} \Large \begin{tabular}{c|c|c||c|c||c|c|c} $A$ & $B$ & $C_i$ & $Op_1$ & $Op_2$ & $C_{o}$ & $Y$ \\ \hline 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ \end{tabular} \end{center} \newpage \subsubsection{Pictures} \includegraphics[width=\textwidth]{./figures/00010.jpg} \begin{center} All inputs are off, and thus no LED is on. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/00110.jpg} \begin{center} $ABC_i = 001$ and so $C_o Y= 00$.\\ \end{center} \includegraphics[width=\textwidth]{./figures/01010.jpg} \begin{center} $ABC_i = 010$ and so $C_o Y= 11$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/01110.jpg} \begin{center} $ABC_i = 011$ and so $C_o Y= 00$. \end{center} \includegraphics[angle=180, width=\textwidth]{./figures/10010.jpg} \begin{center} $ABC_i = 100$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[angle=180, width=\textwidth]{./figures/10110.jpg} \begin{center} $ABC_i = 101$ and so $C_o Y= 01$. \end{center} \includegraphics[angle=180, width=\textwidth]{./figures/11010.jpg} \begin{center} $ABC_i = 110$ and so $C_o Y= 00$. \end{center} \vspace{2em} \includegraphics[angle=180, width=\textwidth]{./figures/11110.jpg} \begin{center} $ABC_i = 111$ and so $C_o Y= 00$. \end{center} \subsection{Opcode \texttt{11 - SUBB}} With this opcode selected, the ALU performs the subtraction of bits $A$ and $B$, or rather, the addition of $A$ and $\bar{B}$, whilst ignoring the borrow bit $C_{i}$. When a negative value is obtained, the resultant representation will be one in 2's complement, and so $-1_{10}$ would be represented as $11_2 = -2_{10} + 1_{10}$. 
\[A+\bar{B}+\bar{C_i} = C_o Y\] \vspace{9em} \subsubsection{Truth table} \begin{center} \Large \begin{tabular}{c|c|c||c|c||c|c|c} $A$ & $B$ & $C_i$ & $Op_1$ & $Op_2$ & $C_{o}$ & $Y$ \\ \hline 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 1 \\% 0 & 1 & 0 & 1 & 1 & 1 & 1 \\% 0 & 1 & 1 & 1 & 1 & 1 & 0 \\% 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ % \end{tabular} \end{center} \newpage \subsubsection{Pictures} \includegraphics[angle=180, width=\textwidth]{./figures/00011.jpg} \begin{center} All inputs are off, and thus no LED is on. \end{center} \vspace{1em} \includegraphics[width=\textwidth]{./figures/00111.jpg} \begin{center} $ABC_i = 001$ and so $C_o Y= 11$. \end{center} \includegraphics[width=\textwidth]{./figures/01011.jpg} \begin{center} $ABC_i = 010$ and so $C_o Y= 11$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/01111.jpg} \begin{center} $ABC_i = 011$ and so $C_o Y= 10$. \end{center} \includegraphics[angle=180, width=\textwidth]{./figures/10011.jpg} \begin{center} $ABC_i = 100$ and so $C_o Y= 01$. \end{center} \vspace{2em} \includegraphics[angle=180, width=\textwidth]{./figures/10111.jpg} \begin{center} $ABC_i = 101$ and so $C_o Y= 00$. \end{center} \includegraphics[angle=180, width=\textwidth]{./figures/11011.jpg} \begin{center} $ABC_i = 110$ and so $C_o Y= 00$. \end{center} \vspace{2em} \includegraphics[width=\textwidth]{./figures/11111.jpg} \begin{center} $ABC_i = 111$ and so $C_o Y= 11$. \end{center} \end{document}
{ "alphanum_fraction": 0.6318472848, "avg_line_length": 22.3121693122, "ext": "tex", "hexsha": "f88f698882b5af1a85c95877110dfd7ab08ae975", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "girogio/university-notes", "max_forks_repo_path": "bachelor-1/semester-1/computer-logic-1/lab03/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "girogio/university-notes", "max_issues_repo_path": "bachelor-1/semester-1/computer-logic-1/lab03/report.tex", "max_line_length": 116, "max_stars_count": null, "max_stars_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "girogio/university-notes", "max_stars_repo_path": "bachelor-1/semester-1/computer-logic-1/lab03/report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3429, "size": 8434 }
\subsection{Discrete uniform distribution} A uniform distribution is defined over a set \(s\): every element of \(s\) has the same probability \(p=\dfrac{1}{|s|}\), and \(P(x\not\in s)=0\). \subsubsection{Moments of the uniform distribution} The mean is the mean of the set \(s\). If the distribution is over the values between two points \(a\) and \(b\) (all real numbers in \([a,b]\) in the continuous case, the integers \(a, a+1, \dots, b\) in the discrete case), then: The mean is \(\dfrac{1}{2}(a+b)\). The variance is \(\dfrac{(b-a)^2}{12}\) in the continuous case. The variance is \(\dfrac{(b-a+1)^2-1}{12}\) in the discrete case.
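As a quick numerical check of the moment formulas above, the short Python snippet below (an addition to this note; the choice of \(a\) and \(b\) is arbitrary) computes the mean and population variance of the integers \(a, a+1, \dots, b\) directly and compares them with \(\frac{1}{2}(a+b)\) and \(\frac{(b-a+1)^2-1}{12}\).
\begin{verbatim}
# Numerical check of the discrete uniform mean and variance formulas.
a, b = 3, 17                     # arbitrary integer endpoints
values = range(a, b + 1)
n = len(values)

mean = sum(values) / n
var = sum((v - mean) ** 2 for v in values) / n   # population variance

print(mean, (a + b) / 2)                  # both give 10.0
print(var, ((b - a + 1) ** 2 - 1) / 12)   # both give 18.666...
\end{verbatim}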
{ "alphanum_fraction": 0.6487068966, "avg_line_length": 21.0909090909, "ext": "tex", "hexsha": "6f6d44506e6fffc5d11c76842616da89083194c9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/probability/distributionsContinous/02-01-uniform.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/probability/distributionsContinous/02-01-uniform.tex", "max_line_length": 85, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/probability/distributionsContinous/02-01-uniform.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 150, "size": 464 }
\documentclass[journal]{./IEEE/IEEEtran} \usepackage{cite,graphicx} \newcommand{\SPTITLE}{Sales and Inventory Management System for a Pawnshop Business} \newcommand{\ADVISEE}{Azha Vianca de Belen} \newcommand{\ADVISER}{Roinand Aguila} \newcommand{\BSCS}{Bachelor of Science in Computer Science} \newcommand{\ICS}{Institute of Computer Science} \newcommand{\UPLB}{University of the Philippines Los Ba\~{n}os} \newcommand{\REMARK}{\thanks{Presented to the Faculty of the \ICS, \UPLB\ in partial fulfillment of the requirements for the Degree of \BSCS}} \markboth{CMSC 190 Special Problem, \ICS}{} \title{\SPTITLE} \author{\ADVISEE~and~\ADVISER% \REMARK } \pubid{\copyright~2017~ICS \UPLB} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} % TITLE \maketitle % ABSTRACT \begin{abstract} A Sales and Inventory Management System for a Small-Scale Pawnshop Business was developed using the PHP language, CodeIgniter framework, PostgreSQL database, Apache2 server and MaterializeCSS. This project was developed to help small-scale pawnshop owners in tracking and monitoring their inventory and sales. It aims to handle all the transactions of the business efficiently and effectively. \end{abstract} % INDEX TERMS % INTRODUCTION \section{Introduction} \subsection{Background of the Study} An inventory is a complete list of stock of some kind of physical commodity and of the total stocks of various kinds held by a firm. It is the quantity of the product that is available to be sold by a merchandising firm at any given time. But inventory does not only refer to physical merchandise. It can also refer to available seats on flights (airline passenger reservation system) or slot availability in each class (university/college registration system). \cite{anigbogu11} An inventory management system, on the other hand, monitors the quantity of available products for sale and the maintenance of proper stock levels. It is very convenient for wholesale and retail stores (like supermarkets) and other businesses to use this kind of system. It can facilitate sales management and decision-making, thus reducing the burden on the business and its managers. \cite{ren13} Also, work efficiency can be improved as it lessens the time and effort of the employees as well as the human errors from manual counting and recording. \cite{rajeswari16} Using an effective and efficient inventory management system is very critical to the overall performance and profitability of many businesses. Sales and inventory management systems are common nowadays. A lot of sales and inventory management systems have been developed over the past years. They have been used in different businesses (especially the big ones) for competitive advantage and for the efficiency and effectiveness of the business. Developing such systems for small-scale and medium-scale businesses will help them improve their operations, management and performance. \subsection{Significance of the Study} There are a lot of pawnshops in the Philippines but not all of them use a management system to handle all their transactions. Most of them, especially the small-scale and medium-scale pawnshop businesses, still transact with their clients and monitor their sales and inventory manually. This consumes a lot of time and effort for the pawnshop owners and their staff. The purpose of the project is to create a system that will handle the transactions of a medium-scale pawnshop business like J and M Pawnshop.
The system will also track and monitor their inventory and sales and generate sales and inventory reports automatically. This will help the pawnshop owners to manage their business efficiently and effectively. The system will make transactions easier and more secure for both the client and the pawnshop owner. All transaction records will be safe once these are encoded and stored in the database system. The system will have good password encryption to protect all of its data. Only the administrator and staff with registered accounts can access the system. It will also be easier for the pawnshop owner (administrator) to generate the Sales and Inventory reports since all computations will be done automatically based on the database records. The system will also have an SMS Notification Feature. It will be easier for the administrator to notify their clients about their transaction details (like the deadline of payment) and other announcements. \subsection{Objectives} Generally, the objective of the project is to develop a system that will cater to the needs of a medium-scale pawnshop business like J and M Pawnshop. It aims to help the pawnshop owner and administrators to easily monitor their inventory and sales and also to efficiently manage their transactions with their clients. Specifically, the project aims to accomplish the following objectives: 1. To develop a system that will monitor the inventory and sales of a medium-scale pawnshop business; 2. To automatically generate the sales and inventory reports; 3. To develop an SMS Notification Feature that will be used to notify the clients; and 4. To develop an effective and efficient protocol in handling the business transactions and records. \subsection{Scope and Limitations} The system will consist of significant features/modules that will satisfy the business’ needs. These include the Transaction, Inventory and Sales Modules. There will only be two kinds of users (the administrator and the staff) who can use the system. The access will be limited depending on the user/account type. Customers/guests cannot access the system. The staff can only View/Update his profile. He can also Add/View/Update/Delete/Search transactions and View/Search the list of Admins and Staff. The administrator can do all of the staff's functionalities. In addition, he can also Add/Update/Delete the list of administrators and staff, manage Sales and Inventory Records and view system logs. The administrator can also generate reports automatically. He can also send SMS notifications to customers but cannot receive SMS from them. % MATERIALS AND METHODS \section{Review of Related Literature} There are already a lot of inventory and sales management systems (and other related systems) that have been created over the past years. For example, in the Institute of Computer Science (ICS) at the University of the Philippines Los Banos (UPLB), many students chose to create and develop systems for their Special Problem course. Those systems contributed to the idea for this project. In 2015, Ruel developed a similar system. She created an inventory and sales information system for a small-scale merchandising business that monitors the inventory and sales of the small grocery store. It also provides the users a convenient way of handling the business transactions and records. It was implemented using PHP 5.5.12, HTML5, CodeIgniter 2.1.3, MySQL 5.6.17, Wamp Server 2.5 and Bootstrap 3.3.2. Co (2012), as cited in Ruel’s paper, created an online accounting system for a drug company.
The system also monitors the inventory and sales of the company efficiently. Ruel also cited in her paper that Caraiga developed an Integrated Sales and Inventory Management System that efficiently handles the transactions of a drugstore in 2005. \cite{ruel15} In 2006, Cariño developed a Sales and Product Inventory Management System for Leads Agricultural Product Corporation. The system was implemented using Microsoft SQL Server 2000 and the .NET Framework. It aimed to improve the efficiency and effectiveness of the business. \cite{carino11} Implemented using PHP, Javascript, PostgreSQL, HTML, CSS and the Yii Framework, Ong (2014) developed a Web-based Clinic Information and Inventory Management System. The system lets the user view, add and update the medicines in the clinic. It also made the managing of records and inventory easier. \cite{ong14} A similar system was also developed by San Buenaventura in 2014. He implemented an Inventory of Supplies and Materials system for Open LGU using Apache Web Server 2.2.22, PHP 5.4.3, Wappstack 5.4.19-0, PostgreSQL 9.0 and Yii 1.1.12. The system keeps track of all the incoming and outgoing resources of the departments. The user can add, view, update and delete supplies and materials. \cite{sanbuenaventura14} % MATERIALS AND METHODS \section{Materials and Methods} \subsection{Materials} The following were used to develop the User Interface, Database and Functionalities of the system: -PHP 7.0 -Laravel Framework -PostgreSQL -Apache2 Server -Bootstrap v3.3.7 -Chikka API \subsection{Security Measures} The user needs a registered account to access the system. They will be asked to enter their username and password when logging in. Depending on his user/account type, the user will be directed to the home page after successfully logging in to the system. Only the Administrator can add new Administrator and new Staff accounts. Each account will be required to have a strong password. The system will require each user to have at least an 8-character password with one number and one special character. The passwords will be encrypted using SHA1. \subsection{System Functionalities} \subsubsection{Administrator} The following are the administrator's functionalities. -View/Update Profile -Add/View/Update/Delete/Search Transaction -Add/View/Update/Delete/Search list of administrator and staff -Manage Sales and Inventory Records -View/Generate Sales and Inventory Reports -View Logs -Send SMS Notification \subsubsection{Staff} The following are the staff's functionalities. -View/Update Profile -Add/View/Update/Delete/Search Transaction -View/Search list of administrator and staff \subsection{System Modules} \subsubsection{Login} The administrator and the staff can access the system using their registered accounts. The administrator can add new administrators and new staff. \subsubsection{Home} After logging in, the administrator or staff will be directed to this page. A summary of the inventory and sales records will be displayed in the administrator’s home page but not in the staff’s home page. It will also display the functionalities of the system depending on the account type. \subsubsection{Administrators and Staff} The list of the administrators and staff (as well as their basic information) will be displayed in this page. The user can also add/view/update/delete/search administrators and staff depending on the account type. \subsubsection{Transactions} All the information about the transactions (as well as the actions that can be done with them) will be displayed in this page.
\subsubsection{Inventory} The Inventory Record will be displayed in this page. The Administrator can also generate the inventory report in this page. \subsubsection{Sales} The Sales Record will be displayed in this page. The Administrator can also generate the sales report in this page. \subsubsection{SMS Notification} The administrator and staff can send SMS notifications to their clients in this page. \subsubsection{About} Information about the system will be displayed in this page. \subsection{System Evaluation} The client (administrators and staff) will be asked to use and test the system. A set of questionnaires will be given to evaluate the user interface, functionalities and overall performance of the system. \subsection{Entity Relationship Diagram} \begin{center} \includegraphics[height=60mm]{./images/erd.eps} Figure 1. ERD \end{center} \subsection{Use Case Diagram} \begin{center} \includegraphics[height=60mm]{./images/useCase/profile.eps} Figure 2. View/Update Profile \includegraphics[height=60mm]{./images/useCase/adminstaff.eps} Figure 3. Add/View/Update/Delete/Search Admin/Staff \includegraphics[height=60mm]{./images/useCase/transaction.eps} Figure 4. Add/View/Update/Delete/Search Transactions \includegraphics[height=60mm]{./images/useCase/salesinventory.eps} Figure 5. Manage/View/Generate Sales and Inventory Records, View Logs, and Send SMS Notifications \end{center} \subsection{Mock-up UI} \begin{center} \includegraphics[width=80mm]{./images/mockupUI/login.eps} Figure 6. Login Page \includegraphics[width=80mm]{./images/mockupUI/home.eps} Figure 7. Home Page \includegraphics[width=80mm]{./images/mockupUI/adminstaff.eps} Figure 8. Admin and Staff Page \includegraphics[width=80mm]{./images/mockupUI/transaction.eps} Figure 9. Transaction Page \includegraphics[width=80mm]{./images/mockupUI/inventory.eps} Figure 10. Inventory Page \includegraphics[width=80mm]{./images/mockupUI/sales.eps} Figure 11. Sales Page \includegraphics[width=80mm]{./images/mockupUI/sendnotif.eps} Figure 12. Send SMS Notifications Page \end{center} % RESULTS AND DISCUSSION \section{Results and Discussion} The Sales and Inventory Management System (SIMS) was presented to the client, the manager of J and M Pawnshop. After the evaluation, the client felt satisfied with the system and its functionalities. Features such as monitoring transactions, generating reports in PDF and Excel, and sending SMS notifications satisfied the client. The Administrator can now track all their business transactions and generate reports, and the Staff can handle the transactions efficiently using the system. The system was developed using the Laravel Framework, PostgreSQL database and Bootstrap. A Chikka API package was used for the SMS Notification feature. The user interface was made simple and user-friendly for easier system navigation and usage. For the evaluation of the system, the user was asked to rate the system from 1 to 10 (10 being the highest) using the following evaluation criteria: -Overall impression of the system -Understanding of the information asked in the forms -User interface -User friendliness/System Navigation -User functionalities -Generating reports -Consistency of the system -SMS Notification % CONCLUSION AND FUTURE WORK \section{Conclusion and Future Work} The Inventory and Sales Management System can be used by pawnshop owners who still handle and record their transactions manually. The system will help them in handling their transactions well and keeping their records organized.
The system is still open for other improvements and development. Additional functionalities can also be added to improve the system. A payroll system can also be integrated to the system's user module to organize the employees' payment. % ACKNOWLEDGMENT \section*{Acknowledgment} Azha Vianca would like to thank her adviser, Sir Roi, for accepting her as his advisee and for allowing her to do this SP. She would like to thank her family for all their love and understanding and her roommates and friends for their continuous support and encouragement while doing this system. Above all, she wants to thank God for giving her the wisdom and guidance she needed to complete this requirement. % BIBLIOGRAPHY \bibliographystyle{./IEEE/IEEEtran} \bibliography{./debelen-cs190-ieee} % \nocite{*} % BIOGRAPHY \begin{biography}[{\includegraphics[width=25mm]{./images/debelen.eps}}]{Azha Vianca N. de Belen} Azha Vianca is the eldest daughter of Alfredo and Verijean de Belen. She is a proud volunteer of UPLB Pelikulab. \end{biography} \end{document}
{ "alphanum_fraction": 0.7987072343, "avg_line_length": 54.7, "ext": "tex", "hexsha": "88eceec0424817ce8b62cc3cfb9d2b581ac15a5e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ef87d5f71ce466910a58a1726b4be1d14363c53c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Mysii/CMSC-190", "max_forks_repo_path": "CS190_LaTex/ICS-template/debelen-cs190-ieee.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ef87d5f71ce466910a58a1726b4be1d14363c53c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Mysii/CMSC-190", "max_issues_repo_path": "CS190_LaTex/ICS-template/debelen-cs190-ieee.tex", "max_line_length": 714, "max_stars_count": null, "max_stars_repo_head_hexsha": "ef87d5f71ce466910a58a1726b4be1d14363c53c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Mysii/CMSC-190", "max_stars_repo_path": "CS190_LaTex/ICS-template/debelen-cs190-ieee.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3309, "size": 15316 }
% !TEX root = ../../main.tex \subsection{The (continuous) master equation for allele frequencies} \label{sec_master_eq} One of the most powerful tools to study stochastic processes is the so-called master equation. Originally devised to study the stochastic time evolution of chemical reactions - hence christened the chemical master equation - this equation has found applications in many areas of chemistry, physics and biology. The equation is a statement about the time evolution of the transition probabilities of a Markov process. That is just a fancy way of saying that the master equation describes how the transition probabilities between states change over time. Let's derive this powerful equation starting from \eref{eq_chapman_kolmogorov}. For our case of study we want to understand how the frequency of a particular allele evolves over time, that is, the stochastic process $X(t)$. Notice again that we distinguish between the stochastic process $X(t)$, formed by the ensemble of all possible realizations, and an individual realization $x(t)$. The Chapman-Kolmogorov equation for this particular case with three time points $t_1 < t_2 < t_3$ takes the form \begin{equation} P(x_3, t_3 \mid x_1, t_1) = \int_0^1 dx_2\; P(x_3, t_3 \mid x_2, t_2) P(x_2, t_2 \mid x_1, t_1), \label{eq_chapman_freq} \end{equation} where the integration limits $[0, 1]$ are the domain of values that an allele frequency can take. Now let us assume that we observe the frequency at time $t$ and it happens to be at a value $x_1$. Then after a very short time $\Dt$ we observe the allele frequency again, and it is now at a value $x_2$. In this short time limit we can approximate the transition probability as \begin{equation} P(x_2, t + \Dt \mid x_1, t) = \delta(x_2 - x_1) \underbrace{\left[ 1 - a^{(0)}(x_1, t) \Dt \right]}_{\text{probability of no transition}} + \underbrace{\phi_t(x_2 \mid x_1)\Dt}_\text{probability of transition} + \mathcal{O}(\Dt^2). \label{eq_transition_short_time} \end{equation} Let's break down this equation. We have split the possible things that can happen in a time window $\Dt$ into two cases. The first one, represented by the first term on the right-hand side, is the possibility that in this small time window no transition actually takes place. The $\delta$-function, which is nonzero only when $x_2 = x_1$, is there to make sure that this term contributes only when there was no transition during that time and $x_2$ remains the same as $x_1$. Inside the square brackets we wrote $1 - a^{(0)}(x_1, t) \Dt$; the reason for writing the term $a^{(0)}$ will become clear later on when we derive the so-called Fokker-Planck equation. Having a term of the form 1 - ``something'' hints at the fact that this ``something'' must be the probability of transitioning somewhere else rather than staying at the same position. For the second term we wrote $\phi_t(x_2 \mid x_1)\Dt$ as the probability of transitioning away from $x_1$ during this time window. Our function $\phi_t(x_2 \mid x_1)$ represents the transition probability per unit time between $x_1$ and $x_2$ at time $t$. When we multiply this rate in time$^{-1}$ units by a small time window, we obtain the probability of transitioning from $x_1$ to $x_2$. In order to better understand the term $a^{(0)}(x_1, t)$ in \eref{eq_transition_short_time}, recall that a probability distribution must be normalized.
That means that if we integrate both sides of \eref{eq_transition_short_time} over all values of $x_2$ it must be true that \begin{equation} \int_0^1 dx_2 \; P(x_2, t + \Dt \mid x_1, t) = \int_0^1 dx_2 \; \delta(x_2 - x_1) \left[ 1 - a^{(0)}(x_1, t) \Dt \right] + \int_0^1 dx_2 \; \phi_t(x_2 \mid x_1)\Dt = 1. \end{equation} Integrating over the $\delta$-function implies that we set $x_2 = x_1$, therefore we obtain \begin{equation} 1 = \left[ 1 - a^{(0)}(x_1, t) \Dt\right] + \int_0^1 dx_2 \; \phi_t(x_2 \mid x_1)\Dt. \end{equation} Solving for $a^{(0)}(x_1, t)$ results in \begin{equation} a^{(0)}(x_1, t) = \int_0^1 dx_2 \; \phi_t(x_2 \mid x_1), \label{eq_a0} \end{equation} proving our previous assertion that $a^{(0)}(x_1, t)$ must be the probability of jumping from $x_1$ to somewhere else. Using this approximation for short time steps we will now derive the differential equation that the transition probability between frequencies must obey. Specifically for three time points $t_o < t < t + \Dt$ with corresponding allele frequencies $x_o, x', f$ we have that \eref{eq_chapman_freq} takes the form \begin{equation} P(x, t + \Dt \mid x_o, t_o) = \int_0^1 dx' \; P(x, t + \Dt \mid x', t) P(x', t \mid x_o, t_o). \end{equation} Substituting \eref{eq_transition_short_time} results in \begin{equation} P(x, t + \Dt \mid x_o, t_o) = \int_0^1 dx' \; \left[ \delta(x - x') \left( 1 - a^{(0)}(x', t)\Dt \right) + \phi_t(x \mid x')\Dt \right] P(x', t \mid x_o, t_o). \end{equation} Evaluating the integral for the first term on the right-hand side gives \begin{equation} P(x, t + \Dt \mid x_o, t_o) = \left[\left( 1 - a^{(0)}(x, t)\Dt \right)\right] P(x, t \mid x_o, t_o) + \Dt \int_0^1 dx' \; \phi_t(x \mid x') P(x', t \mid x_o, t_o). \end{equation} We can then substitute \eref{eq_a0} and rearrange terms to obtain \begin{equation} P(x, t + \Dt \mid x_o, t_o) = P(x, t \mid x_o, t_o) + \int_0^1 dx' \left[ \phi_t(x \mid x') P(x', t \mid x_o, t_o) - \phi_t(x' \mid x) P(x, t \mid x_o, t_o) \right]\Dt. \end{equation} Sending the first term on the right-hand side to the left, dividing both sides by $\Dt$ and taking the limit $\Dt \rightarrow 0$ gives the differential equation we were looking for \begin{equation} \dt{P(x, t \mid x_o, t_o)} = \int_0^1 dx' \; \underbrace{ \left[ \phi_t(x \mid x') P(x', t \mid x_o, t_o) \right.} _{\text{gain } x' \rightarrow f} - \underbrace{ \left. \phi_t(x' \mid x) P(x, t \mid x_o, t_o) \right]} _{\text{loss } x \rightarrow x'}. \label{eq_master_eq_trans} \end{equation} This is the integro-differential equation known as the master equation. This continuous form of the master equation describes the time evolution of the transition probabilities $P(x, t \mid x_o, t_o)$, not the evolution of the probability of being at a specific state $P(x, t)$. However we can use the rules of probability to obtain such description by the following process: Suppose the stochastic process $X(t)$ describes the time evolution of the frequency. Assuming $X(t)$ is a stationary Markov process as described in \secref{sec_stationary_process} means that this process is completely characterized by two functions - the probability of having a particular value for the allele frequency $P_X(x)$ that does not depend on time, and a transition probability $P(x, t \mid x_o, t_o)$. We define a new, non-stationary process $F^*(t)$ for $t \geq t_o$ by setting \begin{equation} P^*(x, t) = P(x, t\mid x_o, t_o), \end{equation} i.e. forcing the initial condition to be a specific value $x_o$ at time $t_o$. 
This is a sub-ensemble of the process $X(t)$ since we demanded that $X(t = t_o) = x_o$. More generally, if instead of setting the initial condition to be a single specific value $P(x, t_o) = \delta(x - x_o)$ we define a probability distribution for the initial state $P(x, t_o) = \rho(x_o)$, we have a sub-ensemble of the form \begin{equation} P^*(x, t) = \int_0^1 dx_o \; P(x, t\mid x_o, t_o) \rho(x_o). \label{eq_subensemble} \end{equation} The interpretation of this sub-ensemble is that the system was initially set in a non-stationary state. The initial state distribution $\rho(x_o)$ does not depend on time, therefore if we take the time derivative on both sides of \eref{eq_subensemble} we find that \begin{equation} \dt{P^*(x, t)} = \int_0^1 dx_o \; \dt{P(x, t\mid x_o, t_o)} \rho(x_o). \label{eq_subensemble_dt} \end{equation} Notice that the term with the time derivative on the right-hand side of \eref{eq_subensemble_dt} is the master equation that we derived in \eref{eq_master_eq_trans}. Substituting this results in \begin{equation} \dt{P^*(x, t)} = \int_0^1 dx_o \; \rho(x_o) \int_0^1 dx' \; \left[ \phi_t(x \mid x') P(x', t \mid x_o, t_o) - \phi_t(x' \mid x) P(x, t \mid x_o, t_o) \right]. \end{equation} Redistributing the integrals gives \begin{equation} \dt{P^*(x, t)} = \int_0^1 dx' \; \phi_t(x \mid x') \overbrace{ \int_0^1 dx_o \; P(x', t \mid x_o, t_o) \rho(x_o)} ^{P^*(x', t)\text{ by definition}} - \int_0^1 dx' \; \phi_t(x' \mid x) \overbrace{ \int_0^1 dx_o \; P(x, t \mid x_o, t_o) \rho(x_o)} ^{P^*(x, t)\text{ by definition}}. \end{equation} Using the definition of the sub-ensembles shown in \eref{eq_subensemble} we arrive at a result of the form \begin{equation} \dt{P^*(x, t)} = \int_0^1 dx' \; \underbrace{ \phi_t(x \mid x') P^*(x', t) }_{x' \rightarrow x \text{ gain}} - \int_0^1 dx' \; \underbrace{ \phi_t(x' \mid x) P^*(x, t) }_{x \rightarrow x' \text{ loss}}. \label{eq_master_eq_full} \end{equation} In this form we can see that the master equation is a balance between gain and loss of probability at each state $x$. Having said that, the truth about the continuous master equation is that it is extremely complicated to work with. In general, integro-differential equations are challenging mathematical objects to deal with. That is why in the next section we'll use the powerful tool of Taylor expansions to simplify the equation. But before that, let's discuss some historic uses of the master equation that might look different from our derivation in \eref{eq_master_eq_full}. \subsubsection{Einstein-Kimura continuous master equations} In 1905, the groundbreaking year of Einstein's scientific career, he published a paper in which he attempted to give a molecular explanation of the phenomenon of Brownian motion. For this he derived Fick's second law from a statistical argument by Taylor expanding a master equation - more on that in the next section. In this classic paper Einstein had one of the very first uses of a continuous master equation applied to a physical problem. The difference from our approach is that Einstein didn't derive the master equation from the Chapman-Kolmogorov property of continuous-time continuous-state Markov processes, but simply proposed its functional form directly. While the problem Einstein was addressing in his paper had to do with a random walker moving in real space, the mathematical tools that he proposed can be directly mapped to the population genetics setup.
As a matter of fact, Motoo Kimura himself used an approach equivalent to Einstein's formulation of the master equation in his own development of diffusion theory. Kimura used the same approach as Einstein of Taylor expanding the master equation, with the main difference being that for population genetics the transition probability $\phi_t(x \mid x')$ is a function of the current position $x'$, while in real space free diffusion is independent of the position. For this short section our objective is to show that our derivation of the master equation is equivalent to Einstein's and Kimura's proposed functional form. We will focus on Kimura's version of the master equation since population genetics is what concerns us in these notes. Kimura's original derivation of the classic diffusion theory begins by stating that the process of allele frequency change can be written as \begin{equation} P(x, t + \Dt) = \int d\varepsilon \; P(x - \varepsilon, t) \phi_t(x \mid x - \varepsilon) \Dt, \label{eq_kimura_master} \end{equation} and that is it. While it took us a while to justify \eref{eq_master_eq_full}, Kimura (and Einstein in the context of Brownian motion) simply stated the master equation as the starting point. There is nothing intrinsically wrong with having \eref{eq_kimura_master} as the starting point, but in this set of extended notes I thought it would be insightful to start from a more fundamental property of Markov processes and have the master equation be a consequence of such a property. Also I would like to highlight that in all of the population genetics literature I have come across so far there has never been an explicit account of the integration limits on the equations. This includes Kimura's original work as well as textbooks. For this particular case the integration limits should go from $x - 1$ to $x$ such that the values of $x - \varepsilon$ range from 0 to 1. So the proper form of this integral is given by \begin{equation} P(x, t + \Dt) = \int_{x - 1}^{x} d\varepsilon \; P(x - \varepsilon, t) \phi_t(x \mid x - \varepsilon) \Dt. \label{eq_kimura_master_lim} \end{equation} Notice that at first glance \eref{eq_master_eq_full} and \eref{eq_kimura_master_lim} don't seem to be equivalent. \eref{eq_master_eq_full} is a differential equation that describes how the probability distribution changes given gains and losses of probability at state $x$, while \eref{eq_kimura_master_lim} only makes a statement about what the probability distribution would look like a tiny time step into the future by adding all the jumps \textbf{into} state $x$, but it doesn't include a term for all the jumps out of state $x$. To show that these equations are equivalent we have to do two things: \begin{enumerate} \item On \eref{eq_master_eq_full} we notice that the second term on the right-hand side can be written as \begin{equation} P^*(x, t)\int_0^1 dx' \; \phi_t(x' \mid x) = P^*(x, t), \label{eq_integral_transition} \end{equation} where we took the term $P^*(x, t)$ outside of the integral and used the fact that the transition probability per unit time $\phi_t(x' \mid x)$ should be normalized regardless of the time window. In other words, the probability of transitioning from $x$ to anywhere else (including staying at $x$) should add up to one regardless of the time window we observe. \mrm{Need to check this statement and that the units make sense} \item Having this result we can rewrite \eref{eq_master_eq_full} as \begin{equation} \dt{P^*(x, t)} = \int_0^1 dx' \; \phi_t(x \mid x') P^*(x', t) - P^*(x, t).
\label{eq_master_eq_rearange} \end{equation} We are almost there! All that is left is to notice that if we were to Taylor expand the left-hand side of \eref{eq_kimura_master_lim} with respect to time up to first order (as is often done for time derivatives) we would obtain \begin{equation} P(x, t + \Dt) = P(x, t) + \dt{P(x, t)}\Dt + \mathcal{O}(\Dt^2). \end{equation} That means we can send the second term on the right-hand side of \eref{eq_master_eq_rearange} to the left and rewrite the equation as \begin{equation} {P^*(x, t + \Dt) \over \Dt} = \int_0^1 dx' \; \phi_t(x \mid x') P^*(x', t), \end{equation} which is equivalent to \eref{eq_kimura_master_lim} where instead of having the integration over the jump size $\varepsilon$ the integration is done over the final position $x'$. \end{enumerate}
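To make the gain-loss structure of \eref{eq_master_eq_full} concrete, the short Python sketch below (an addition to these notes, not part of the original derivation) evolves a discretized version of the equation on a grid of allele frequencies. The transition kernel used here, a narrow Gaussian whose width shrinks near the boundaries, is an arbitrary stand-in chosen only for illustration; what matters is that the gain and loss terms are assembled exactly as in the equation above.
\begin{verbatim}
import numpy as np

# Discretize the allele frequency x on a grid over [0, 1].
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# phi[i, j]: transition rate from frequency x[j] to frequency x[i].
# The Gaussian shape and boundary-dependent width are illustrative only.
width = 0.05 * np.sqrt(x * (1.0 - x) + 1e-4)
phi = np.exp(-0.5 * ((x[:, None] - x[None, :]) / width[None, :]) ** 2)
phi /= phi.sum(axis=0, keepdims=True) * dx   # normalize each column as a density

# Initial distribution concentrated around x = 0.5.
P = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
P /= P.sum() * dx

# Euler integration of dP/dt = gain - loss.
dt = 1.0e-3
for _ in range(5000):
    gain = (phi @ P) * dx                 # integral of phi(x | x') P(x', t) dx'
    loss = P * (phi.sum(axis=0) * dx)     # P(x, t) times integral of phi(x' | x) dx'
    P = P + dt * (gain - loss)

print("total probability:", P.sum() * dx)  # stays at 1: the gain-loss form conserves probability
\end{verbatim}
Because each column of the discretized kernel is normalized, the loss term in this sketch reduces to $P^*(x, t)$ itself, which mirrors the simplification used in \eref{eq_integral_transition}.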
{ "alphanum_fraction": 0.7274000934, "avg_line_length": 51.156996587, "ext": "tex", "hexsha": "731635caa4ca712c07f70261e2a1e678e7c3568a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mrazomej/pop_gen", "max_forks_repo_path": "doc/book_draft/chapters/classic_diffusion/02_master_eq.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_issues_repo_issues_event_max_datetime": "2019-03-05T00:17:26.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-05T00:17:26.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mrazomej/stat_gen", "max_issues_repo_path": "doc/book_draft/chapters/classic_diffusion/02_master_eq.tex", "max_line_length": 90, "max_stars_count": null, "max_stars_repo_head_hexsha": "abafd9ecc63ae8a804c8df5b9658e47cabf951fa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mrazomej/stat_gen", "max_stars_repo_path": "doc/book_draft/chapters/classic_diffusion/02_master_eq.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4513, "size": 14989 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} 
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{dp\_qlbs\_oneset\_m3\_ex3\_v4} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname 
PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname 
PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \subsection{Fitted Q-iteration}\label{fitted-q-iteration} Welcome to your 3rd assignment in Reinforcement Learning in Finance. In this exercise you will take the most popular extension of Q-Learning to a batch RL setting called Fitted Q-Iteration. \textbf{Instructions:} - You will be using Python 3. - Avoid using for-loops and while-loops, unless you are explicitly told to do so. - Do not modify the (\# GRADED FUNCTION {[}function name{]}) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. - After coding your function, run the cell right below it to check if your result is correct. - When encountering \textbf{\texttt{\#\ dummy\ code\ -\ remove}} please replace this code with your own \textbf{After this assignment you will:} - Setup inputs for batch-RL model - Implement Fitted Q-Iteration Let's get started! \subsection{About iPython Notebooks}\label{about-ipython-notebooks} iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the \#\#\# START CODE HERE \#\#\# and \#\#\# END CODE HERE \#\#\# comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter. 
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{stats} \PY{k}{import} \PY{n}{norm} \PY{k+kn}{import} \PY{n+nn}{random} \PY{k+kn}{import} \PY{n+nn}{sys} \PY{n}{sys}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{..}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{k+kn}{import} \PY{n+nn}{grading} \PY{k+kn}{import} \PY{n+nn}{time} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} ONLY FOR GRADING. DO NOT EDIT \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{submissions}\PY{o}{=}\PY{n+nb}{dict}\PY{p}{(}\PY{p}{)} \PY{n}{assignment\PYZus{}key}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{0jn7tioiEeiBAA49aGvLAg}\PY{l+s+s2}{\PYZdq{}} \PY{n}{all\PYZus{}parts}\PY{o}{=}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{wrZFS}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{yqg6m}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{KY5p8}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{BsRWi}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{pWxky}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} ONLY FOR GRADING. DO NOT EDIT \PYZsh{}\PYZsh{}\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{n}{COURSERA\PYZus{}TOKEN} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mxzwbbOi9yVinyJa}\PY{l+s+s1}{\PYZsq{}} \PY{c+c1}{\PYZsh{} the key provided to the Student under his/her email on submission page} \PY{n}{COURSERA\PYZus{}EMAIL} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{[email protected]}\PY{l+s+s1}{\PYZsq{}} \PY{c+c1}{\PYZsh{} the email} \end{Verbatim} \subsection{Parameters for MC simulation of stock prices}\label{parameters-for-mc-simulation-of-stock-prices} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{n}{S0} \PY{o}{=} \PY{l+m+mi}{100} \PY{c+c1}{\PYZsh{} initial stock price} \PY{n}{mu} \PY{o}{=} \PY{l+m+mf}{0.05} \PY{c+c1}{\PYZsh{} drift} \PY{n}{sigma} \PY{o}{=} \PY{l+m+mf}{0.15} \PY{c+c1}{\PYZsh{} volatility} \PY{n}{r} \PY{o}{=} \PY{l+m+mf}{0.03} \PY{c+c1}{\PYZsh{} risk\PYZhy{}free rate} \PY{n}{M} \PY{o}{=} \PY{l+m+mi}{1} \PY{c+c1}{\PYZsh{} maturity} \PY{n}{T} \PY{o}{=} \PY{l+m+mi}{6} \PY{c+c1}{\PYZsh{} number of time steps} \PY{n}{N\PYZus{}MC} \PY{o}{=} \PY{l+m+mi}{10000} \PY{c+c1}{\PYZsh{} 10000 \PYZsh{} 50000 \PYZsh{} number of paths} \PY{n}{delta\PYZus{}t} \PY{o}{=} \PY{n}{M} \PY{o}{/} \PY{n}{T} \PY{c+c1}{\PYZsh{} time interval} \PY{n}{gamma} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}} \PY{n}{r} \PY{o}{*} \PY{n}{delta\PYZus{}t}\PY{p}{)} \PY{c+c1}{\PYZsh{} discount factor} \end{Verbatim} \subsubsection{Black-Sholes Simulation}\label{black-sholes-simulation} Simulate \(N_{MC}\) stock price sample paths with \(T\) steps by the classical Black-Sholes formula. \[dS_t=\mu S_tdt+\sigma S_tdW_t\quad\quad S_{t+1}=S_te^{\left(\mu-\frac{1}{2}\sigma^2\right)\Delta t+\sigma\sqrt{\Delta t}Z}\] where \(Z\) is a standard normal random variable. Based on simulated stock price \(S_t\) paths, compute state variable \(X_t\) by the following relation. 
\[X_t=-\left(\mu-\frac{1}{2}\sigma^2\right)t\Delta t+\log S_t\] Also compute \[\Delta S_t=S_{t+1}-e^{r\Delta t}S_t\quad\quad \Delta\hat{S}_t=\Delta S_t-\Delta\bar{S}_t\quad\quad t=0,...,T-1\] where \(\Delta\bar{S}_t\) is the sample mean of all values of \(\Delta S_t\). Plots of 5 stock price \(S_t\) and state variable \(X_t\) paths are shown below. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}5}]:} \PY{c+c1}{\PYZsh{} make a dataset } \PY{n}{starttime} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{c+c1}{\PYZsh{} Fix random seed} \PY{c+c1}{\PYZsh{} stock price} \PY{n}{S} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{=} \PY{n}{S0} \PY{c+c1}{\PYZsh{} standard normal random numbers} \PY{n}{RN} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randn}\PY{p}{(}\PY{n}{N\PYZus{}MC}\PY{p}{,}\PY{n}{T}\PY{p}{)}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{k}{for} \PY{n}{t} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{p}{(}\PY{n}{mu} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{n}{delta\PYZus{}t} \PY{o}{+} \PY{n}{sigma} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{delta\PYZus{}t}\PY{p}{)} \PY{o}{*} \PY{n}{RN}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{n}{delta\PYZus{}S} \PY{o}{=} \PY{n}{S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{:}\PY{n}{T}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{n}{r} \PY{o}{*} \PY{n}{delta\PYZus{}t}\PY{p}{)} \PY{o}{*} \PY{n}{S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{:}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{delta\PYZus{}S\PYZus{}hat} \PY{o}{=} \PY{n}{delta\PYZus{}S}\PY{o}{.}\PY{n}{apply}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:} \PY{n}{x} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{x}\PY{p}{)}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)} \PY{c+c1}{\PYZsh{} state variable} \PY{n}{X} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{n}{mu} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)} \PY{o}{*} \PY{n}{delta\PYZus{}t} \PY{o}{+} \PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{S}\PY{p}{)} \PY{c+c1}{\PYZsh{} delta\PYZus{}t here is due to their conventions} \PY{n}{endtime} \PY{o}{=} 
\PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{Time Cost:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{endtime} \PY{o}{\PYZhy{}} \PY{n}{starttime}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{seconds}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{c+c1}{\PYZsh{} plot 10 paths} \PY{n}{step\PYZus{}size} \PY{o}{=} \PY{n}{N\PYZus{}MC} \PY{o}{/}\PY{o}{/} \PY{l+m+mi}{10} \PY{n}{idx\PYZus{}plot} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{step\PYZus{}size}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{p}{,} \PY{n}{step\PYZus{}size}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{S}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Stock Price Sample Paths}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{X}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{State Variable}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Time Cost: 0.05500006675720215 seconds \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_1.png} \end{center} { \hspace*{\fill} \\} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_2.png} \end{center} { \hspace*{\fill} \\} Define function \emph{terminal\_payoff} to compute the terminal payoff of a European put option. 
\[H_T\left(S_T\right)=\max\left(K-S_T,0\right)\] \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}6}]:} \PY{k}{def} \PY{n+nf}{terminal\PYZus{}payoff}\PY{p}{(}\PY{n}{ST}\PY{p}{,} \PY{n}{K}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} ST final stock price} \PY{c+c1}{\PYZsh{} K strike} \PY{n}{payoff} \PY{o}{=} \PY{n+nb}{max}\PY{p}{(}\PY{n}{K}\PY{o}{\PYZhy{}}\PY{n}{ST}\PY{p}{,} \PY{l+m+mi}{0}\PY{p}{)} \PY{k}{return} \PY{n}{payoff} \end{Verbatim} \subsection{Define spline basis functions}\label{define-spline-basis-functions} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}7}]:} \PY{k+kn}{import} \PY{n+nn}{bspline} \PY{k+kn}{import} \PY{n+nn}{bspline}\PY{n+nn}{.}\PY{n+nn}{splinelab} \PY{k}{as} \PY{n+nn}{splinelab} \PY{n}{X\PYZus{}min} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{min}\PY{p}{(}\PY{n}{X}\PY{p}{)}\PY{p}{)} \PY{n}{X\PYZus{}max} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{max}\PY{p}{(}\PY{n}{X}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{X.shape = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{X}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{X\PYZus{}min, X\PYZus{}max = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{X\PYZus{}min}\PY{p}{,} \PY{n}{X\PYZus{}max}\PY{p}{)} \PY{n}{p} \PY{o}{=} \PY{l+m+mi}{4} \PY{c+c1}{\PYZsh{} order of spline (as\PYZhy{}is; 3 = cubic, 4: B\PYZhy{}spline?)} \PY{n}{ncolloc} \PY{o}{=} \PY{l+m+mi}{12} \PY{n}{tau} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{linspace}\PY{p}{(}\PY{n}{X\PYZus{}min}\PY{p}{,}\PY{n}{X\PYZus{}max}\PY{p}{,}\PY{n}{ncolloc}\PY{p}{)} \PY{c+c1}{\PYZsh{} These are the sites to which we would like to interpolate} \PY{c+c1}{\PYZsh{} k is a knot vector that adds endpoints repeats as appropriate for a spline of order p} \PY{c+c1}{\PYZsh{} To get meaninful results, one should have ncolloc \PYZgt{}= p+1} \PY{n}{k} \PY{o}{=} \PY{n}{splinelab}\PY{o}{.}\PY{n}{aptknt}\PY{p}{(}\PY{n}{tau}\PY{p}{,} \PY{n}{p}\PY{p}{)} \PY{c+c1}{\PYZsh{} Spline basis of order p on knots k} \PY{n}{basis} \PY{o}{=} \PY{n}{bspline}\PY{o}{.}\PY{n}{Bspline}\PY{p}{(}\PY{n}{k}\PY{p}{,} \PY{n}{p}\PY{p}{)} \PY{n}{f} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} B = bspline.Bspline(k, p) \PYZsh{} Spline basis functions } \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Number of points k = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n+nb}{len}\PY{p}{(}\PY{n}{k}\PY{p}{)}\PY{p}{)} \PY{n}{basis}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Basis\PYZus{}functions.png}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{600}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] X.shape = (10000, 7) X\_min, X\_max = 4.057527970756566 5.162066529170717 Number of points k = 17 \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_12_1.png} \end{center} { \hspace*{\fill} \\} \begin{verbatim} <Figure size 432x288 with 0 Axes> \end{verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}8}]:} \PY{n+nb}{type}\PY{p}{(}\PY{n}{basis}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}8}]:} bspline.bspline.Bspline \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}9}]:} \PY{n}{X}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{shape} \end{Verbatim} 
\begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}9}]:} (10000, 7) \end{Verbatim} \subsubsection{Make data matrices with feature values}\label{make-data-matrices-with-feature-values} "Features" here are the values of basis functions at data points The outputs are 3D arrays of dimensions num\_tSteps x num\_MC x num\_basis \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}10}]:} \PY{n}{num\PYZus{}t\PYZus{}steps} \PY{o}{=} \PY{n}{T} \PY{o}{+} \PY{l+m+mi}{1} \PY{n}{num\PYZus{}basis} \PY{o}{=} \PY{n}{ncolloc} \PY{c+c1}{\PYZsh{} len(k) \PYZsh{}} \PY{n}{data\PYZus{}mat\PYZus{}t} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{(}\PY{n}{num\PYZus{}t\PYZus{}steps}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{p}{,}\PY{n}{num\PYZus{}basis} \PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{num\PYZus{}basis = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{num\PYZus{}basis}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{dim data\PYZus{}mat\PYZus{}t = }\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{c+c1}{\PYZsh{} fill it, expand function in finite dimensional space} \PY{c+c1}{\PYZsh{} in neural network the basis is the neural network itself} \PY{n}{t\PYZus{}0} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{num\PYZus{}t\PYZus{}steps}\PY{p}{)}\PY{p}{:} \PY{n}{x} \PY{o}{=} \PY{n}{X}\PY{o}{.}\PY{n}{values}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{i}\PY{p}{]} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[} \PY{n}{basis}\PY{p}{(}\PY{n}{el}\PY{p}{)} \PY{k}{for} \PY{n}{el} \PY{o+ow}{in} \PY{n}{x} \PY{p}{]}\PY{p}{)} \PY{n}{t\PYZus{}end} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Computational time:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{t\PYZus{}end} \PY{o}{\PYZhy{}} \PY{n}{t\PYZus{}0}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{seconds}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] num\_basis = 12 dim data\_mat\_t = (7, 10000, 12) Computational time: 14.045999765396118 seconds \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}11}]:} \PY{c+c1}{\PYZsh{} save these data matrices for future re\PYZhy{}use} \PY{n}{np}\PY{o}{.}\PY{n}{save}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{data\PYZus{}mat\PYZus{}m=r\PYZus{}A\PYZus{}}\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{n}{N\PYZus{}MC}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}12}]:} \PY{n+nb}{print}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}t}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{c+c1}{\PYZsh{} shape num\PYZus{}steps x N\PYZus{}MC x num\PYZus{}basis} \PY{n+nb}{print}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{k}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] (7, 10000, 12) 17 \end{Verbatim} \subsection{Dynamic Programming solution for QLBS}\label{dynamic-programming-solution-for-qlbs} The MDP problem in this case is to solve the following Bellman optimality equation for the action-value function. 
\[Q_t^\star\left(x,a\right)=\mathbb{E}_t\left[R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\space|\space X_t=x,a_t=a\right],\space\space t=0,...,T-1,\quad\gamma=e^{-r\Delta t}\] where \(R_t\left(X_t,a_t,X_{t+1}\right)\) is the one-step time-dependent random reward and \(a_t\left(X_t\right)\) is the action (hedge). Detailed steps of solving this equation by Dynamic Programming are illustrated below. With this set of basis functions \(\left\{\Phi_n\left(X_t^k\right)\right\}_{n=1}^N\), expand the optimal action (hedge) \(a_t^\star\left(X_t\right)\) and optimal Q-function \(Q_t^\star\left(X_t,a_t^\star\right)\) in basis functions with time-dependent coefficients. \[a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}\quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}\] Coefficients \(\phi_{nt}\) and \(\omega_{nt}\) are computed recursively backward in time for \(t=T−1,...,0\). Coefficients for expansions of the optimal action \(a_t^\star\left(X_t\right)\) are solved by \[\phi_t=\mathbf A_t^{-1}\mathbf B_t\] where \(\mathbf A_t\) and \(\mathbf B_t\) are matrix and vector respectively with elements given by \[A_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)\left(\Delta\hat{S}_t^k\right)^2}\quad\quad B_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left[\hat\Pi_{t+1}^k\Delta\hat{S}_t^k+\frac{1}{2\gamma\lambda}\Delta S_t^k\right]}\] Define function \emph{function\_A} and \emph{function\_B} to compute the value of matrix \(\mathbf A_t\) and vector \(\mathbf B_t\). \subsection{Define the option strike and risk aversion parameter}\label{define-the-option-strike-and-risk-aversion-parameter} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}13}]:} \PY{n}{risk\PYZus{}lambda} \PY{o}{=} \PY{l+m+mf}{0.001} \PY{c+c1}{\PYZsh{} 0.001 \PYZsh{} 0.0001 \PYZsh{} risk aversion} \PY{n}{K} \PY{o}{=} \PY{l+m+mi}{100} \PY{c+c1}{\PYZsh{} } \PY{c+c1}{\PYZsh{} Note that we set coef=0 below in function function\PYZus{}B\PYZus{}vec. This correspond to a pure risk\PYZhy{}based hedging} \end{Verbatim} \subsection{Part 1: Implement functions to compute optimal hedges}\label{part-1-implement-functions-to-compute-optimal-hedges} \textbf{Instructions:} Copy-paste implementations from the previous assignment, i.e. QLBS as these are the same functions \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}14}]:} \PY{c+c1}{\PYZsh{} functions to compute optimal hedges} \PY{k}{def} \PY{n+nf}{function\PYZus{}A\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{data\PYZus{}mat}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}A\PYZus{}vec \PYZhy{} compute the matrix A\PYZus{}\PYZob{}nm\PYZcb{} from Eq. (52) (with a regularization!)} \PY{l+s+sd}{ Eq. (52) in QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t \PYZhy{} time index, a scalar, an index into time axis of data\PYZus{}mat} \PY{l+s+sd}{ delta\PYZus{}S\PYZus{}hat \PYZhy{} pandas.DataFrame of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ data\PYZus{}mat \PYZhy{} pandas.DataFrame of dimension T x N\PYZus{}MC x num\PYZus{}basis} \PY{l+s+sd}{ reg\PYZus{}param \PYZhy{} a scalar, regularization parameter} \PY{l+s+sd}{ } \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ \PYZhy{} np.array, i.e. 
matrix A\PYZus{}\PYZob{}nm\PYZcb{} of dimension num\PYZus{}basis x num\PYZus{}basis} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 5\PYZhy{}6 lines of code)} \PY{c+c1}{\PYZsh{} A\PYZus{}mat = your code goes here ...} \PY{n}{X\PYZus{}mat} \PY{o}{=} \PY{n}{data\PYZus{}mat}\PY{p}{[}\PY{n}{t}\PY{p}{,} \PY{p}{:}\PY{p}{,} \PY{p}{:}\PY{p}{]} \PY{n}{num\PYZus{}basis\PYZus{}funcs} \PY{o}{=} \PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{this\PYZus{}dS} \PY{o}{=} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{p}{]} \PY{n}{hat\PYZus{}dS2} \PY{o}{=} \PY{p}{(}\PY{n}{this\PYZus{}dS} \PY{o}{*}\PY{o}{*} \PY{l+m+mi}{2}\PY{p}{)}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{A\PYZus{}mat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{X\PYZus{}mat} \PY{o}{*} \PY{n}{hat\PYZus{}dS2}\PY{p}{)} \PY{o}{+} \PY{n}{reg\PYZus{}param} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{eye}\PY{p}{(}\PY{n}{num\PYZus{}basis\PYZus{}funcs}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{A\PYZus{}mat} \PY{k}{def} \PY{n+nf}{function\PYZus{}B\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{Pi\PYZus{}hat}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{o}{=}\PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{S}\PY{o}{=}\PY{n}{S}\PY{p}{,} \PY{n}{data\PYZus{}mat}\PY{o}{=}\PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{,} \PY{n}{gamma}\PY{o}{=}\PY{n}{gamma}\PY{p}{,} \PY{n}{risk\PYZus{}lambda}\PY{o}{=}\PY{n}{risk\PYZus{}lambda}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}B\PYZus{}vec \PYZhy{} compute vector B\PYZus{}\PYZob{}n\PYZcb{} from Eq. 
(52) QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t \PYZhy{} time index, a scalar, an index into time axis of delta\PYZus{}S\PYZus{}hat} \PY{l+s+sd}{ Pi\PYZus{}hat \PYZhy{} pandas.DataFrame of dimension N\PYZus{}MC x T of portfolio values } \PY{l+s+sd}{ delta\PYZus{}S\PYZus{}hat \PYZhy{} pandas.DataFrame of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ S \PYZhy{} pandas.DataFrame of simulated stock prices} \PY{l+s+sd}{ data\PYZus{}mat \PYZhy{} pandas.DataFrame of dimension T x N\PYZus{}MC x num\PYZus{}basis} \PY{l+s+sd}{ gamma \PYZhy{} one time\PYZhy{}step discount factor \PYZdl{}exp(\PYZhy{}r \PYZbs{}delta t)\PYZdl{}} \PY{l+s+sd}{ risk\PYZus{}lambda \PYZhy{} risk aversion coefficient, a small positive number} \PY{l+s+sd}{ } \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ B\PYZus{}vec \PYZhy{} np.array() of dimension num\PYZus{}basis x 1} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{} coef = 1.0/(2 * gamma * risk\PYZus{}lambda)} \PY{c+c1}{\PYZsh{} override it by zero to have pure risk hedge} \PY{n}{coef} \PY{o}{=} \PY{l+m+mf}{0.} \PY{c+c1}{\PYZsh{} keep it} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 3\PYZhy{}4 lines of code)} \PY{c+c1}{\PYZsh{} B\PYZus{}vec = your code goes here ...} \PY{n}{tmp} \PY{o}{=} \PY{n}{Pi\PYZus{}hat}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{*} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{p}{]} \PY{n}{X\PYZus{}mat} \PY{o}{=} \PY{n}{data\PYZus{}mat}\PY{p}{[}\PY{n}{t}\PY{p}{,} \PY{p}{:}\PY{p}{,} \PY{p}{:}\PY{p}{]} \PY{c+c1}{\PYZsh{} matrix of dimension N\PYZus{}MC x num\PYZus{}basis} \PY{n}{B\PYZus{}vec} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{tmp}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{B\PYZus{}vec} \end{Verbatim} \subsection{Compute optimal hedge and portfolio value}\label{compute-optimal-hedge-and-portfolio-value} Call \emph{function\_A} and \emph{function\_B} for \(t=T-1,...,0\) together with basis function \(\Phi_n\left(X_t\right)\) to compute optimal action \(a_t^\star\left(X_t\right)=\sum_n^N{\phi_{nt}\Phi_n\left(X_t\right)}\) backward recursively with terminal condition \(a_T^\star\left(X_T\right)=0\). Once the optimal hedge \(a_t^\star\left(X_t\right)\) is computed, the portfolio value \(\Pi_t\) could also be computed backward recursively by \[\Pi_t=\gamma\left[\Pi_{t+1}-a_t^\star\Delta S_t\right]\quad t=T-1,...,0\] together with the terminal condition \(\Pi_T=H_T\left(S_T\right)=\max\left(K-S_T,0\right)\) for a European put option. Also compute \(\hat{\Pi}_t=\Pi_t-\bar{\Pi}_t\), where \(\bar{\Pi}_t\) is the sample mean of all values of \(\Pi_t\). 
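Before the full backward loop in the next cell, here is a minimal sketch of what a single backward step \(t\) looks like, assuming \emph{function\_A\_vec}, \emph{function\_B\_vec}, \texttt{data\_mat\_t}, \texttt{delta\_S\_hat}, \texttt{delta\_S}, \texttt{Pi}, \texttt{Pi\_hat} and \texttt{gamma} as defined above. The coefficient solve uses \texttt{np.linalg.solve} rather than an explicit matrix inverse, which is generally the numerically safer way to apply \(\phi_t=\mathbf A_t^{-1}\mathbf B_t\):

\begin{verbatim}
# Illustrative single backward step t (the next cell runs this for t = T-1, ..., 0)
reg_param = 1e-3                                              # small ridge regularization
A_mat = function_A_vec(t, delta_S_hat, data_mat_t, reg_param) # matrix A_t
B_vec = function_B_vec(t, Pi_hat, delta_S_hat, S, data_mat_t) # vector B_t
phi = np.linalg.solve(A_mat, B_vec)                           # phi_t = A_t^{-1} B_t
a.loc[:, t] = np.dot(data_mat_t[t, :, :], phi)                # optimal hedge on all paths
Pi.loc[:, t] = gamma * (Pi.loc[:, t + 1] - a.loc[:, t] * delta_S.loc[:, t])  # roll back
Pi_hat.loc[:, t] = Pi.loc[:, t] - np.mean(Pi.loc[:, t])       # de-meaned portfolio value
\end{verbatim}

The graded cell below is equivalent, except that it forms \(\mathbf A_t^{-1}\) explicitly with \texttt{np.linalg.inv} and wraps these steps in a loop over \(t\).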
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}15}]:} \PY{n}{starttime} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} portfolio value} \PY{n}{Pi} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{S}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{apply}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:} \PY{n}{terminal\PYZus{}payoff}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{K}\PY{p}{)}\PY{p}{)} \PY{n}{Pi\PYZus{}hat} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Pi\PYZus{}hat}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} optimal hedge} \PY{n}{a} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{a}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{0} \PY{n}{reg\PYZus{}param} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}3} \PY{k}{for} \PY{n}{t} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{n}{A\PYZus{}mat} \PY{o}{=} \PY{n}{function\PYZus{}A\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)} \PY{n}{B\PYZus{}vec} \PY{o}{=} \PY{n}{function\PYZus{}B\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{Pi\PYZus{}hat}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{S}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{)} \PY{c+c1}{\PYZsh{} print (\PYZsq{}t = A\PYZus{}mat.shape = B\PYZus{}vec.shape = \PYZsq{}, t, A\PYZus{}mat.shape, B\PYZus{}vec.shape)} \PY{n}{phi} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{inv}\PY{p}{(}\PY{n}{A\PYZus{}mat}\PY{p}{)}\PY{p}{,} \PY{n}{B\PYZus{}vec}\PY{p}{)} \PY{n}{a}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{[}\PY{n}{t}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{p}{,}\PY{n}{phi}\PY{p}{)} \PY{n}{Pi}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{gamma} \PY{o}{*} 
\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{a}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{*} \PY{n}{delta\PYZus{}S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{n}{Pi\PYZus{}hat}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{Pi}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{n}{a} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{astype}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{float}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{Pi} \PY{o}{=} \PY{n}{Pi}\PY{o}{.}\PY{n}{astype}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{float}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{Pi\PYZus{}hat} \PY{o}{=} \PY{n}{Pi\PYZus{}hat}\PY{o}{.}\PY{n}{astype}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{float}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{endtime} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Computational time:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{endtime} \PY{o}{\PYZhy{}} \PY{n}{starttime}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{seconds}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Computational time: 0.08799982070922852 seconds \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] D:\textbackslash{}application\textbackslash{}Anaconda3\textbackslash{}envs\textbackslash{}pyalgo\textbackslash{}lib\textbackslash{}site-packages\textbackslash{}ipykernel\_launcher.py:21: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape({\ldots}) instead \end{Verbatim} Plots of 5 optimal hedge \(a_t^\star\) and portfolio value \(\Pi_t\) paths are shown below. 
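A side note on the \texttt{FutureWarning} printed above: it is triggered inside \emph{function\_A\_vec}, where \texttt{.reshape} is called directly on a pandas \texttt{Series}. Following the warning message itself, a minimal fix, assuming the implementation from cell \texttt{In [14]}, is to reshape the underlying NumPy array instead:

\begin{verbatim}
# inside function_A_vec: reshape the underlying array rather than the Series itself
hat_dS2 = (this_dS ** 2).values.reshape(-1, 1)  # avoids the deprecated Series.reshape
\end{verbatim}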
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}16}]:} \PY{c+c1}{\PYZsh{} plot 10 paths} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{a}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Optimal Hedge}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Portfolio Value}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_30_0.png} \end{center} { \hspace*{\fill} \\} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_30_1.png} \end{center} { \hspace*{\fill} \\} Once the optimal hedge \(a_t^\star\) and portfolio value \(\Pi_t\) are all computed, the reward function \(R_t\left(X_t,a_t,X_{t+1}\right)\) could then be computed by \[R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=0,...,T-1\] with terminal condition \(R_T=-\lambda Var\left[\Pi_T\right]\). Plot of 5 reward function \(R_t\) paths is shown below. \subsection{Part 2: Compute the optimal Q-function with the DP approach}\label{part-2-compute-the-optimal-q-function-with-the-dp-approach} Coefficients for expansions of the optimal Q-function \(Q_t^\star\left(X_t,a_t^\star\right)\) are solved by \[\omega_t=\mathbf C_t^{-1}\mathbf D_t\] where \(\mathbf C_t\) and \(\mathbf D_t\) are matrix and vector respectively with elements given by \[C_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\Phi_m\left(X_t^k\right)}\quad\quad D_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Phi_n\left(X_t^k\right)\left(R_t\left(X_t,a_t^\star,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}\] Define function \emph{function\_C} and \emph{function\_D} to compute the value of matrix \(\mathbf C_t\) and vector \(\mathbf D_t\). \textbf{Instructions:} Copy-paste implementations from the previous assignment,i.e. QLBS as these are the same functions \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}17}]:} \PY{k}{def} \PY{n+nf}{function\PYZus{}C\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{data\PYZus{}mat}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}C\PYZus{}vec \PYZhy{} calculate C\PYZus{}\PYZob{}nm\PYZcb{} matrix from Eq. (56) (with a regularization!)} \PY{l+s+sd}{ Eq. 
(56) in QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t \PYZhy{} time index, a scalar, an index into time axis of data\PYZus{}mat } \PY{l+s+sd}{ data\PYZus{}mat \PYZhy{} pandas.DataFrame of values of basis functions of dimension T x N\PYZus{}MC x num\PYZus{}basis} \PY{l+s+sd}{ reg\PYZus{}param \PYZhy{} regularization parameter, a scalar} \PY{l+s+sd}{ } \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ C\PYZus{}mat \PYZhy{} np.array of dimension num\PYZus{}basis x num\PYZus{}basis} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 5\PYZhy{}6 lines of code)} \PY{c+c1}{\PYZsh{} C\PYZus{}mat = your code goes here ....} \PY{n}{X\PYZus{}mat} \PY{o}{=} \PY{n}{data\PYZus{}mat}\PY{p}{[}\PY{n}{t}\PY{p}{,} \PY{p}{:}\PY{p}{,} \PY{p}{:}\PY{p}{]} \PY{n}{num\PYZus{}basis\PYZus{}funcs} \PY{o}{=} \PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{C\PYZus{}mat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{X\PYZus{}mat}\PY{p}{)} \PY{o}{+} \PY{n}{reg\PYZus{}param} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{eye}\PY{p}{(}\PY{n}{num\PYZus{}basis\PYZus{}funcs}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{C\PYZus{}mat} \PY{k}{def} \PY{n+nf}{function\PYZus{}D\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{Q}\PY{p}{,} \PY{n}{R}\PY{p}{,} \PY{n}{data\PYZus{}mat}\PY{p}{,} \PY{n}{gamma}\PY{o}{=}\PY{n}{gamma}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}D\PYZus{}vec \PYZhy{} calculate D\PYZus{}\PYZob{}nm\PYZcb{} vector from Eq. (56) (with a regularization!)} \PY{l+s+sd}{ Eq. 
(56) in QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t \PYZhy{} time index, a scalar, an index into time axis of data\PYZus{}mat } \PY{l+s+sd}{ Q \PYZhy{} pandas.DataFrame of Q\PYZhy{}function values of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ R \PYZhy{} pandas.DataFrame of rewards of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ data\PYZus{}mat \PYZhy{} pandas.DataFrame of values of basis functions of dimension T x N\PYZus{}MC x num\PYZus{}basis} \PY{l+s+sd}{ gamma \PYZhy{} one time\PYZhy{}step discount factor \PYZdl{}exp(\PYZhy{}r \PYZbs{}delta t)\PYZdl{}} \PY{l+s+sd}{ } \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ D\PYZus{}vec \PYZhy{} np.array of dimension num\PYZus{}basis x 1} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 2\PYZhy{}3 lines of code)} \PY{c+c1}{\PYZsh{} D\PYZus{}vec = your code goes here ...} \PY{n}{X\PYZus{}mat} \PY{o}{=} \PY{n}{data\PYZus{}mat}\PY{p}{[}\PY{n}{t}\PY{p}{,} \PY{p}{:}\PY{p}{,} \PY{p}{:}\PY{p}{]} \PY{n}{D\PYZus{}vec} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{X\PYZus{}mat}\PY{o}{.}\PY{n}{T}\PY{p}{,} \PY{n}{R}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{+} \PY{n}{gamma} \PY{o}{*} \PY{n}{Q}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{D\PYZus{}vec} \end{Verbatim} Call \emph{function\_C} and \emph{function\_D} for \(t=T-1,...,0\) together with basis function \(\Phi_n\left(X_t\right)\) to compute optimal action Q-function \(Q_t^\star\left(X_t,a_t^\star\right)=\sum_n^N{\omega_{nt}\Phi_n\left(X_t\right)}\) backward recursively with terminal condition \(Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]\). Compare the QLBS price to European put price given by Black-Sholes formula. 
\[C_t^{\left(BS\right)}=Ke^{-r\left(T-t\right)}\mathcal N\left(-d_2\right)-S_t\mathcal N\left(-d_1\right)\] \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}18}]:} \PY{c+c1}{\PYZsh{} The Black\PYZhy{}Scholes prices} \PY{k}{def} \PY{n+nf}{bs\PYZus{}put}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{S0}\PY{o}{=}\PY{n}{S0}\PY{p}{,} \PY{n}{K}\PY{o}{=}\PY{n}{K}\PY{p}{,} \PY{n}{r}\PY{o}{=}\PY{n}{r}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{n}{sigma}\PY{p}{,} \PY{n}{T}\PY{o}{=}\PY{n}{M}\PY{p}{)}\PY{p}{:} \PY{n}{d1} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{S0}\PY{o}{/}\PY{n}{K}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{n}{r} \PY{o}{+} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{/} \PY{n}{sigma} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)} \PY{n}{d2} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{S0}\PY{o}{/}\PY{n}{K}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{n}{r} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{/} \PY{n}{sigma} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)} \PY{n}{price} \PY{o}{=} \PY{n}{K} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{r} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{*} \PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{d2}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{S0} \PY{o}{*} \PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{d1}\PY{p}{)} \PY{k}{return} \PY{n}{price} \PY{k}{def} \PY{n+nf}{bs\PYZus{}call}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{S0}\PY{o}{=}\PY{n}{S0}\PY{p}{,} \PY{n}{K}\PY{o}{=}\PY{n}{K}\PY{p}{,} \PY{n}{r}\PY{o}{=}\PY{n}{r}\PY{p}{,} \PY{n}{sigma}\PY{o}{=}\PY{n}{sigma}\PY{p}{,} \PY{n}{T}\PY{o}{=}\PY{n}{M}\PY{p}{)}\PY{p}{:} \PY{n}{d1} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{S0}\PY{o}{/}\PY{n}{K}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{n}{r} \PY{o}{+} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{/} \PY{n}{sigma} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)} \PY{n}{d2} \PY{o}{=} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{S0}\PY{o}{/}\PY{n}{K}\PY{p}{)} \PY{o}{+} \PY{p}{(}\PY{n}{r} \PY{o}{\PYZhy{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2} \PY{o}{*} \PY{n}{sigma}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{/} \PY{n}{sigma} \PY{o}{/} \PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)} \PY{n}{price} \PY{o}{=} \PY{n}{S0} \PY{o}{*} \PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{d1}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{n}{K} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{r} \PY{o}{*} \PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{n}{t}\PY{p}{)}\PY{p}{)} \PY{o}{*} \PY{n}{norm}\PY{o}{.}\PY{n}{cdf}\PY{p}{(}\PY{n}{d2}\PY{p}{)} \PY{k}{return} \PY{n}{price} \end{Verbatim} \subsection{Hedging and Pricing with Reinforcement Learning}\label{hedging-and-pricing-with-reinforcement-learning} Implement a batch-mode off-policy model-free Q-Learning by Fitted Q-Iteration. 
The only data available is given by a set of \(N_{MC}\) paths for the underlying state variable \(X_t\), hedge position \(a_t\), instantaneous reward \(R_t\), and the next-time value \(X_{t+1}\).

\[\mathcal F_t^k=\left\{\left(X_t^k,a_t^k,R_t^k,X_{t+1}^k\right)\right\}_{t=0}^{T-1}\quad k=1,...,N_{MC}\]

Detailed steps of solving the Bellman optimality equation by Reinforcement Learning are illustrated below.

Expand the Q-function in basis functions with time-dependent coefficients parametrized by a matrix \(\mathbf W_t\).

\[Q_t^\star\left(X_t,a_t\right)=\mathbf A_t^T\mathbf W_t\Phi\left(X_t\right)=\mathbf A_t^T\mathbf U_W\left(t,X_t\right)=\vec{W}_t^T \vec{\Psi}\left(X_t,a_t\right)\]

\[\mathbf A_t=\left(\begin{matrix}1\\a_t\\\frac{1}{2}a_t^2\end{matrix}\right)\quad\mathbf U_W\left(t,X_t\right)=\mathbf W_t\Phi\left(X_t\right)\]

where \(\vec{W}_t\) is obtained by concatenating the columns of the matrix \(\mathbf W_t\), while \(\vec{\Psi}\left(X_t,a_t\right)=\mathrm{vec}\left(\mathbf A_t\otimes\Phi^T\left(X_t\right)\right)\) stands for the vector obtained by concatenating the columns of the outer product of the vectors \(\mathbf A_t\) and \(\Phi\left(X_t\right)\).

Compute the vector \(\mathbf A_t\), then compute \(\vec\Psi\left(X_t,a_t\right)\) for each \(X_t^k\) and store it in a dictionary keyed by path and time \(\left[k,t\right]\).

\subsection{Part 3: Make off-policy data}\label{part-3-make-off-policy-data}

\begin{itemize}
\tightlist
\item
  \textbf{on-policy} data - contains an optimal action and the corresponding reward
\item
  \textbf{off-policy} data - contains a random action and the corresponding reward
\end{itemize}

Given a large enough sample, i.e. \(N_{MC}\) tending to infinity, the Q-Learner will learn an optimal policy from the data in a model-free setting.

In our case, a random action is the optimal action perturbed by multiplicative noise drawn from a uniform distribution:

\[a_t\left(X_t\right) = a_t^\star\left(X_t\right)\cdot u_t,\quad u_t\sim U\left[1-\eta, 1+\eta\right]\]

where \(\eta\) is a disturbance level. In other words, each noisy action is obtained by taking the optimal action computed previously and multiplying it by a uniform random variable in the interval \(\left[1-\eta, 1+\eta\right]\).
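For concreteness, here is a minimal sketch of this perturbation step, assuming the optimal-hedge DataFrame \texttt{a} and the disturbance level \texttt{eta} defined in this notebook; the helper name \texttt{make\_noisy\_action} is introduced purely for illustration, and the graded cell below applies the same idea inside its backward loop:

\begin{verbatim}
import numpy as np

def make_noisy_action(a_opt_t, eta):
    """Perturb the optimal hedge at one time step by multiplicative uniform noise."""
    u = np.random.uniform(1 - eta, 1 + eta, size=len(a_opt_t))  # one draw per MC path
    return a_opt_t * u  # off-policy (randomized) action for this time step

# usage inside a backward loop: a_op.loc[:, t] = make_noisy_action(a.loc[:, t], eta)
\end{verbatim}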
\textbf{Instructions:} In the loop below:

\begin{itemize}
\tightlist
\item
  Compute the optimal policy, and write the result to a\_op
\item
  Now disturb these values by a random noise
  \[a_t\left(X_t\right) = a_t^\star\left(X_t\right)\cdot u_t,\quad u_t\sim U\left[1-\eta, 1+\eta\right]\]
\item
  Compute portfolio values corresponding to observed actions
  \[\Pi_t=\gamma\left[\Pi_{t+1}-a_t^\star\Delta S_t\right]\quad t=T-1,...,0\]
\item
  Compute rewards corresponding to observed actions
  \[R_t\left(X_t,a_t,X_{t+1}\right)=\gamma a_t\Delta S_t-\lambda Var\left[\Pi_t\space|\space\mathcal F_t\right]\quad t=T-1,...,0\]
  with terminal condition
  \[R_T=-\lambda Var\left[\Pi_T\right]\]
\end{itemize}

\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}19}]:} \PY{n}{eta} \PY{o}{=} \PY{l+m+mf}{0.5} \PY{c+c1}{\PYZsh{} 0.5 \PYZsh{} 0.25 \PYZsh{} 0.05 \PYZsh{} 0.5 \PYZsh{} 0.1 \PYZsh{} 0.25 \PYZsh{} 0.15}
\PY{n}{reg\PYZus{}param} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}3}
\PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{c+c1}{\PYZsh{} Fix random seed}
\PY{c+c1}{\PYZsh{} disturbed optimal actions to be computed }
\PY{n}{a\PYZus{}op} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{0}
\PY{c+c1}{\PYZsh{} also make portfolios and rewards}
\PY{c+c1}{\PYZsh{} portfolio value}
\PY{n}{Pi\PYZus{}op} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{S}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{apply}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:} \PY{n}{terminal\PYZus{}payoff}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{K}\PY{p}{)}\PY{p}{)}
\PY{n}{Pi\PYZus{}op\PYZus{}hat} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n}{Pi\PYZus{}op\PYZus{}hat}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}
\PY{c+c1}{\PYZsh{} reward function}
\PY{n}{R\PYZus{}op} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)}
\PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{n}{risk\PYZus{}lambda} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{var}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} The backward loop} \PY{k}{for} \PY{n}{t} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 11\PYZhy{}12 lines of code)} \PY{c+c1}{\PYZsh{} 1. Compute the optimal policy, and write the result to a\PYZus{}op} \PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{p}{]} \PY{c+c1}{\PYZsh{} 2. Now disturb these values by a random noise} \PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{p}{]} \PY{o}{*}\PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{uniform}\PY{p}{(}\PY{l+m+mi}{1} \PY{o}{\PYZhy{}} \PY{n}{eta}\PY{p}{,} \PY{l+m+mi}{1} \PY{o}{+} \PY{n}{eta}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} 3. Compute portfolio values corresponding to observed actions} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{gamma} \PY{o}{*} \PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{*} \PY{n}{delta\PYZus{}S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{n}{Pi\PYZus{}hat}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{}Pi\PYZus{}op\PYZus{}hat.iloc[:,\PYZhy{}1] = Pi\PYZus{}op.iloc[:,\PYZhy{}1] \PYZhy{} np.mean(Pi\PYZus{}op.iloc[:,\PYZhy{}1])} \PY{n}{Pi\PYZus{}op\PYZus{}hat}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} 4. 
Compute rewards corrresponding to observed actions} \PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{gamma} \PY{o}{*} \PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{*} \PY{n}{delta\PYZus{}S}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{risk\PYZus{}lambda} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{var}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{done with backward loop!}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] done with backward loop! \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}20}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{idx\PYZus{}row} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{idx\PYZus{}col} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{part\PYZus{}1} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{k}{try}\PY{p}{:} \PY{n}{part1} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{join}\PY{p}{(}\PY{n+nb}{map}\PY{p}{(}\PY{n+nb}{repr}\PY{p}{,} \PY{n}{part\PYZus{}1}\PY{p}{)}\PY{p}{)} \PY{k}{except} \PY{n+ne}{TypeError}\PY{p}{:} \PY{n}{part1} \PY{o}{=} \PY{n+nb}{repr}\PY{p}{(}\PY{n}{part\PYZus{}1}\PY{p}{)} \PY{n}{submissions}\PY{p}{[}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{]}\PY{o}{=}\PY{n}{part1} \PY{n}{grading}\PY{o}{.}\PY{n}{submit}\PY{p}{(}\PY{n}{COURSERA\PYZus{}EMAIL}\PY{p}{,} \PY{n}{COURSERA\PYZus{}TOKEN}\PY{p}{,} \PY{n}{assignment\PYZus{}key}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{,}\PY{n}{submissions}\PY{p}{)} \PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Submission successful, please check on the coursera grader page for the status \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}20}]:} array([-4.41648229e-02, -1.11627835e+00, -3.26618627e-01, -4.41648229e-02, 1.86629772e-01, -3.26618627e-01, -3.26618627e-01, -4.41648229e-02, -1.91643174e+00, 1.86629772e-01, -4.41648229e-02, -1.15471981e+01, 8.36214406e-03, -4.41648229e-02, -5.19860756e-01, 8.36214406e-03, 8.36214406e-03, 
-4.41648229e-02, -5.82629891e-02, -5.19860756e-01, -4.41648229e-02, -2.93024596e+00, -6.70591047e-01, -4.41648229e-02, 3.38303735e-01, -6.70591047e-01, -6.70591047e-01, -4.41648229e-02, -1.35776224e-01, 3.38303735e-01, -4.41648229e-02, 3.89179538e-02, -2.11256164e+00, -4.41648229e-02, -8.62139383e-01, -2.11256164e+00, -2.11256164e+00, -4.41648229e-02, 1.03931641e+00, -8.62139383e-01, -4.41648229e-02, -3.88581528e+00, -2.78664643e-01, -4.41648229e-02, 1.08026845e+00, -2.78664643e-01, -2.78664643e-01, -4.41648229e-02, -1.59815566e-01, 1.08026845e+00, -4.41648229e-02, 1.34127261e+00, -1.32542466e+00, -4.41648229e-02, -1.75711669e-01, -1.32542466e+00, -1.32542466e+00, -4.41648229e-02, -6.89031647e-01, -1.75711669e-01, -4.41648229e-02, 1.36065847e+00, -4.83656917e-03, -4.41648229e-02, 1.01545031e+00, -4.83656917e-03, -4.83656917e-03, -4.41648229e-02, 1.06509261e+00, 1.01545031e+00, -4.41648229e-02, -5.48069399e-01, 6.69233272e+00, -4.41648229e-02, 2.48031088e+00, 6.69233272e+00, 6.69233272e+00, -4.41648229e-02, -4.96873017e-01, 2.48031088e+00, -4.41648229e-02, 1.05762523e+00, -5.25381441e+00, -4.41648229e-02, -3.93284570e+00, -5.25381441e+00, -5.25381441e+00, -4.41648229e-02, -1.75980494e-01, -3.93284570e+00, -4.41648229e-02, -1.12194921e-01, -2.04245741e-02, -4.41648229e-02, -2.95192215e-01, -2.04245741e-02, -2.04245741e-02, -4.41648229e-02, -1.70008788e+00, -2.95192215e-01]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}21}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{idx\PYZus{}row} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{idx\PYZus{}col} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{part\PYZus{}2} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{k}{try}\PY{p}{:} \PY{n}{part2} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{join}\PY{p}{(}\PY{n+nb}{map}\PY{p}{(}\PY{n+nb}{repr}\PY{p}{,} \PY{n}{part\PYZus{}2}\PY{p}{)}\PY{p}{)} \PY{k}{except} \PY{n+ne}{TypeError}\PY{p}{:} \PY{n}{part2} \PY{o}{=} \PY{n+nb}{repr}\PY{p}{(}\PY{n}{part\PYZus{}2}\PY{p}{)} \PY{n}{submissions}\PY{p}{[}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{]}\PY{o}{=}\PY{n}{part2} \PY{n}{grading}\PY{o}{.}\PY{n}{submit}\PY{p}{(}\PY{n}{COURSERA\PYZus{}EMAIL}\PY{p}{,} \PY{n}{COURSERA\PYZus{}TOKEN}\PY{p}{,} \PY{n}{assignment\PYZus{}key}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{,}\PY{n}{submissions}\PY{p}{)} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)} 
\PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Submission successful, please check on the coursera grader page for the status \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}21}]:} array([ 0. , 1.42884104, 0.33751419, 0. , 1.21733506, 0.33751419, 0.33751419, 0. , 3.11498207, 1.21733506, 0. , 11.42133749, -0.10310673, 0. , 11.86648425, -0.10310673, -0.10310673, 0. , 11.85284966, 11.86648425, 0. , 3.77013248, 0.86748124, 0. , 3.39527529, 0.86748124, 0.86748124, 0. , 3.50140426, 3.39527529, 0. , 2.37907167, 2.45349463, 0. , 3.21159555, 2.45349463, 2.45349463, 0. , 2.143548 , 3.21159555, 0. , 4.22816728, 0.36745282, 0. , 3.10906092, 0.36745282, 0.36745282, 0. , 3.24065673, 3.10906092, 0. , 1.4213709 , 2.79987609, 0. , 1.57224362, 2.79987609, 2.79987609, 0. , 2.24072042, 1.57224362, 9.05061694, 4.48960086, 5.90296866, 9.05061694, 3.43400874, 5.90296866, 5.90296866, 9.05061694, 2.3390757 , 3.43400874, 11.39022164, 5.65090831, 5.15180177, 11.39022164, 3.12466356, 5.15180177, 5.15180177, 11.39022164, 3.59323901, 3.12466356, 0. , 3.05819303, 4.15983366, 0. , 6.95803609, 4.15983366, 4.15983366, 0. , 7.08659999, 6.95803609, 0. , 0.12024876, 0.03147899, 0. , 0.3970914 , 0.03147899, 0.03147899, 0. , 2.08248553, 0.3970914 ]) \end{Verbatim} \subsection{Override on-policy data with off-policy data}\label{override-on-policy-data-with-off-policy-data} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}23}]:} \PY{c+c1}{\PYZsh{} Override on\PYZhy{}policy data with off\PYZhy{}policy data} \PY{n}{a} \PY{o}{=} \PY{n}{a\PYZus{}op}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} distrubed actions} \PY{n}{Pi} \PY{o}{=} \PY{n}{Pi\PYZus{}op}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} disturbed portfolio values} \PY{n}{Pi\PYZus{}hat} \PY{o}{=} \PY{n}{Pi\PYZus{}op\PYZus{}hat}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{n}{R} \PY{o}{=} \PY{n}{R\PYZus{}op}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}24}]:} \PY{c+c1}{\PYZsh{} make matrix A\PYZus{}t of shape (3 x num\PYZus{}MC x num\PYZus{}steps)} \PY{n}{num\PYZus{}MC} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{} number of simulated paths} \PY{n}{num\PYZus{}TS} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{c+c1}{\PYZsh{} number of time steps} \PY{n}{a\PYZus{}1\PYZus{}1} \PY{o}{=} \PY{n}{a}\PY{o}{.}\PY{n}{values}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{num\PYZus{}MC}\PY{p}{,} \PY{n}{num\PYZus{}TS}\PY{p}{)}\PY{p}{)} \PY{n}{a\PYZus{}1\PYZus{}2} \PY{o}{=} \PY{l+m+mf}{0.5} \PY{o}{*} \PY{n}{a\PYZus{}1\PYZus{}1}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{ones\PYZus{}3d} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{ones}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{num\PYZus{}MC}\PY{p}{,} \PY{n}{num\PYZus{}TS}\PY{p}{)}\PY{p}{)} \PY{n}{A\PYZus{}stack} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{vstack}\PY{p}{(}\PY{p}{(}\PY{n}{ones\PYZus{}3d}\PY{p}{,} \PY{n}{a\PYZus{}1\PYZus{}1}\PY{p}{,} \PY{n}{a\PYZus{}1\PYZus{}2}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{A\PYZus{}stack}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] (3, 10000, 7) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}25}]:} \PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx} \PY{o}{=} 
\PY{n}{np}\PY{o}{.}\PY{n}{swapaxes}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{2}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{c+c1}{\PYZsh{} (12, 10000, 25)} \PY{c+c1}{\PYZsh{} expand dimensions of matrices to multiply element\PYZhy{}wise} \PY{n}{A\PYZus{}2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{expand\PYZus{}dims}\PY{p}{(}\PY{n}{A\PYZus{}stack}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)} \PY{c+c1}{\PYZsh{} becomes (3,1,10000,25)} \PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{expand\PYZus{}dims}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)} \PY{c+c1}{\PYZsh{} becomes (1,12,10000,25)} \PY{n}{Psi\PYZus{}mat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{multiply}\PY{p}{(}\PY{n}{A\PYZus{}2}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx}\PY{p}{)} \PY{c+c1}{\PYZsh{} this is a matrix of size 3 x num\PYZus{}basis x num\PYZus{}MC x num\PYZus{}steps} \PY{c+c1}{\PYZsh{} now concatenate columns along the first dimension} \PY{c+c1}{\PYZsh{} Psi\PYZus{}mat = Psi\PYZus{}mat.reshape(\PYZhy{}1, a.shape[0], a.shape[1], order=\PYZsq{}F\PYZsq{})} \PY{n}{Psi\PYZus{}mat} \PY{o}{=} \PY{n}{Psi\PYZus{}mat}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{p}{,} \PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{order}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{F}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Psi\PYZus{}mat}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{c+c1}{\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] (12, 10000, 7) (36, 10000, 7) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}26}]:} \PY{c+c1}{\PYZsh{} make matrix S\PYZus{}t } \PY{n}{Psi\PYZus{}1\PYZus{}aux} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{expand\PYZus{}dims}\PY{p}{(}\PY{n}{Psi\PYZus{}mat}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{Psi\PYZus{}2\PYZus{}aux} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{expand\PYZus{}dims}\PY{p}{(}\PY{n}{Psi\PYZus{}mat}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{Psi\PYZus{}1\PYZus{}aux}\PY{o}{.}\PY{n}{shape}\PY{p}{,} \PY{n}{Psi\PYZus{}2\PYZus{}aux}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{n}{S\PYZus{}t\PYZus{}mat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{sum}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{multiply}\PY{p}{(}\PY{n}{Psi\PYZus{}1\PYZus{}aux}\PY{p}{,} \PY{n}{Psi\PYZus{}2\PYZus{}aux}\PY{p}{)}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n}{S\PYZus{}t\PYZus{}mat}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] (36, 1, 10000, 7) (1, 36, 10000, 7) (36, 36, 7) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}27}]:} \PY{c+c1}{\PYZsh{} clean up some space} \PY{k}{del} \PY{n}{Psi\PYZus{}1\PYZus{}aux}\PY{p}{,} \PY{n}{Psi\PYZus{}2\PYZus{}aux}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}swap\PYZus{}idx}\PY{p}{,} \PY{n}{A\PYZus{}2} \end{Verbatim} \subsection{\texorpdfstring{Part 4: Calculate \(\mathbf S_t\) and \(\mathbf M_t\) marix and vector}{Part 4: Calculate \textbackslash{}mathbf S\_t and \textbackslash{}mathbf M\_t marix and vector}}\label{part-4-calculate-mathbf-s_t-and-mathbf-m_t-marix-and-vector} Vector \(\vec W_t\) could be solved by \[\vec W_t=\mathbf S_t^{-1}\mathbf M_t\] where \(\mathbf S_t\) and \(\mathbf M_t\) are matrix and vector 
respectively with elements given by \[S_{nm}^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Psi_n\left(X_t^k,a_t^k\right)\Psi_m\left(X_t^k,a_t^k\right)}\quad\quad M_n^{\left(t\right)}=\sum_{k=1}^{N_{MC}}{\Psi_n\left(X_t^k,a_t^k\right)\left(R_t\left(X_t,a_t,X_{t+1}\right)+\gamma\max_{a_{t+1}\in\mathcal{A}}Q_{t+1}^\star\left(X_{t+1},a_{t+1}\right)\right)}\] Define function \emph{function\_S} and \emph{function\_M} to compute the value of matrix \(\mathbf S_t\) and vector \(\mathbf M_t\). \textbf{Instructions:} - implement function\_S\_vec() which computes \(S_{nm}^{\left(t\right)}\) matrix - implement function\_M\_vec() which computes \(M_n^{\left(t\right)}\) column vector \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}28}]:} \PY{c+c1}{\PYZsh{} vectorized functions} \PY{k}{def} \PY{n+nf}{function\PYZus{}S\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{S\PYZus{}t\PYZus{}mat}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}S\PYZus{}vec \PYZhy{} calculate S\PYZus{}\PYZob{}nm\PYZcb{} matrix from Eq. (75) (with a regularization!)} \PY{l+s+sd}{ Eq. (75) in QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ num\PYZus{}Qbasis = 3 x num\PYZus{}basis, 3 because of the basis expansion (1, a\PYZus{}t, 0.5 a\PYZus{}t\PYZca{}2)} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t \PYZhy{} time index, a scalar, an index into time axis of S\PYZus{}t\PYZus{}mat } \PY{l+s+sd}{ S\PYZus{}t\PYZus{}mat \PYZhy{} pandas.DataFrame of dimension num\PYZus{}Qbasis x num\PYZus{}Qbasis x T} \PY{l+s+sd}{ reg\PYZus{}param \PYZhy{} regularization parameter, a scalar} \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ S\PYZus{}mat\PYZus{}reg \PYZhy{} num\PYZus{}Qbasis x num\PYZus{}Qbasis} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 4\PYZhy{}5 lines of code)} \PY{c+c1}{\PYZsh{} S\PYZus{}mat\PYZus{}reg = your code goes here ...} \PY{n}{num\PYZus{}Qbasis} \PY{o}{=} \PY{n}{S\PYZus{}t\PYZus{}mat}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{S\PYZus{}mat\PYZus{}reg} \PY{o}{=} \PY{n}{S\PYZus{}t\PYZus{}mat}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{+} \PY{n}{reg\PYZus{}param} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{eye}\PY{p}{(}\PY{n}{num\PYZus{}Qbasis}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{S\PYZus{}mat\PYZus{}reg} \PY{k}{def} \PY{n+nf}{function\PYZus{}M\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{Q\PYZus{}star}\PY{p}{,} \PY{n}{R}\PY{p}{,} \PY{n}{Psi\PYZus{}mat\PYZus{}t}\PY{p}{,} \PY{n}{gamma}\PY{o}{=}\PY{n}{gamma}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZdq{}\PYZdq{}\PYZdq{}} \PY{l+s+sd}{ function\PYZus{}S\PYZus{}vec \PYZhy{} calculate M\PYZus{}\PYZob{}nm\PYZcb{} vector from Eq. (75) (with a regularization!)} \PY{l+s+sd}{ Eq. 
(75) in QLBS Q\PYZhy{}Learner in the Black\PYZhy{}Scholes\PYZhy{}Merton article} \PY{l+s+sd}{ } \PY{l+s+sd}{ num\PYZus{}Qbasis = 3 x num\PYZus{}basis, 3 because of the basis expansion (1, a\PYZus{}t, 0.5 a\PYZus{}t\PYZca{}2)} \PY{l+s+sd}{ } \PY{l+s+sd}{ Arguments:} \PY{l+s+sd}{ t\PYZhy{} time index, a scalar, an index into time axis of S\PYZus{}t\PYZus{}mat } \PY{l+s+sd}{ Q\PYZus{}star \PYZhy{} pandas.DataFrame of Q\PYZhy{}function values of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ R \PYZhy{} pandas.DataFrame of rewards of dimension N\PYZus{}MC x T} \PY{l+s+sd}{ Psi\PYZus{}mat\PYZus{}t \PYZhy{} pandas.DataFrame of dimension num\PYZus{}Qbasis x N\PYZus{}MC} \PY{l+s+sd}{ gamma \PYZhy{} one time\PYZhy{}step discount factor \PYZdl{}exp(\PYZhy{}r \PYZbs{}delta t)\PYZdl{}} \PY{l+s+sd}{ Return:} \PY{l+s+sd}{ M\PYZus{}t \PYZhy{} np.array of dimension num\PYZus{}Qbasis x 1} \PY{l+s+sd}{ \PYZdq{}\PYZdq{}\PYZdq{}} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} START CODE HERE \PYZsh{}\PYZsh{}\PYZsh{} (≈ 2\PYZhy{}3 lines of code)} \PY{c+c1}{\PYZsh{} M\PYZus{}t = your code goes here ...} \PY{n}{M\PYZus{}t} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{Psi\PYZus{}mat\PYZus{}t}\PY{p}{,} \PY{n}{R}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{+} \PY{n}{gamma} \PY{o}{*}\PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{t}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} END CODE HERE \PYZsh{}\PYZsh{}\PYZsh{}} \PY{k}{return} \PY{n}{M\PYZus{}t} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}29}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{reg\PYZus{}param} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}3} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{S\PYZus{}mat\PYZus{}reg} \PY{o}{=} \PY{n}{function\PYZus{}S\PYZus{}vec}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{S\PYZus{}t\PYZus{}mat}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)} \PY{n}{idx\PYZus{}row} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{S\PYZus{}mat\PYZus{}reg}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{seed}\PY{p}{(}\PY{l+m+mi}{42}\PY{p}{)} \PY{n}{idx\PYZus{}col} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{random}\PY{o}{.}\PY{n}{randint}\PY{p}{(}\PY{n}{low}\PY{o}{=}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{high}\PY{o}{=}\PY{n}{S\PYZus{}mat\PYZus{}reg}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{size}\PY{o}{=}\PY{l+m+mi}{10}\PY{p}{)} \PY{n}{part\PYZus{}3} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{S\PYZus{}mat\PYZus{}reg}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{k}{try}\PY{p}{:} \PY{n}{part3} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{join}\PY{p}{(}\PY{n+nb}{map}\PY{p}{(}\PY{n+nb}{repr}\PY{p}{,} \PY{n}{part\PYZus{}3}\PY{p}{)}\PY{p}{)} \PY{k}{except} \PY{n+ne}{TypeError}\PY{p}{:} \PY{n}{part3} \PY{o}{=} \PY{n+nb}{repr}\PY{p}{(}\PY{n}{part\PYZus{}3}\PY{p}{)} \PY{n}{submissions}\PY{p}{[}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{]}\PY{o}{=}\PY{n}{part3} \PY{n}{grading}\PY{o}{.}\PY{n}{submit}\PY{p}{(}\PY{n}{COURSERA\PYZus{}EMAIL}\PY{p}{,} 
\PY{n}{COURSERA\PYZus{}TOKEN}\PY{p}{,} \PY{n}{assignment\PYZus{}key}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{,}\PY{n}{submissions}\PY{p}{)} \PY{n}{S\PYZus{}mat\PYZus{}reg}\PY{p}{[}\PY{n}{idx\PYZus{}row}\PY{p}{,} \PY{n}{idx\PYZus{}col}\PY{p}{]}\PY{o}{.}\PY{n}{flatten}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Submission successful, please check on the coursera grader page for the status \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}29}]:} array([2.22709265e-01, 2.68165972e+02, 4.46911166e+01, 2.00678517e+00, 1.10020457e+03, 8.44758984e-01, 2.29671816e+02, 2.29671816e+02, 3.78571544e-03, 1.41884196e-02]) \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}30}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{Q\PYZus{}RL} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{risk\PYZus{}lambda} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{var}\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{Q\PYZus{}star} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{M\PYZus{}t} \PY{o}{=} \PY{n}{function\PYZus{}M\PYZus{}vec}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{Q\PYZus{}star}\PY{p}{,} \PY{n}{R}\PY{p}{,} \PY{n}{Psi\PYZus{}mat}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{,} \PY{n}{gamma}\PY{p}{)} \PY{n}{part\PYZus{}4} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{M\PYZus{}t}\PY{p}{)} \PY{k}{try}\PY{p}{:} \PY{n}{part4} \PY{o}{=} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{o}{.}\PY{n}{join}\PY{p}{(}\PY{n+nb}{map}\PY{p}{(}\PY{n+nb}{repr}\PY{p}{,} \PY{n}{part\PYZus{}4}\PY{p}{)}\PY{p}{)} \PY{k}{except} \PY{n+ne}{TypeError}\PY{p}{:} \PY{n}{part4} \PY{o}{=} \PY{n+nb}{repr}\PY{p}{(}\PY{n}{part\PYZus{}4}\PY{p}{)} \PY{n}{submissions}\PY{p}{[}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{l+m+mi}{3}\PY{p}{]}\PY{p}{]}\PY{o}{=}\PY{n}{part4} \PY{n}{grading}\PY{o}{.}\PY{n}{submit}\PY{p}{(}\PY{n}{COURSERA\PYZus{}EMAIL}\PY{p}{,} \PY{n}{COURSERA\PYZus{}TOKEN}\PY{p}{,} \PY{n}{assignment\PYZus{}key}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{,}\PY{n}{submissions}\PY{p}{)} \PY{n}{M\PYZus{}t} 
\PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Submission successful, please check on the coursera grader page for the status
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}30}]:} array([-6.03245979e+01, -8.79998437e+01, -2.37497369e+02,
        -5.62543448e+02,  2.09052583e+02, -6.44961368e+02,
        -2.86243249e+03,  2.77687723e+03, -1.85728309e+03,
        -9.40505558e+03,  9.50610806e+03, -5.29328413e+03,
        -1.69800964e+04,  1.61026240e+04, -8.42698927e+03,
        -8.46211901e+03,  6.05144701e+03, -2.62196067e+03,
        -2.12066484e+03,  8.42176836e+02, -2.51624368e+02,
        -3.01116012e+02,  2.57124667e+01, -3.22639691e+00,
        -5.53769815e+01,  1.67390280e+00, -6.79562288e-02,
        -1.61140947e+01,  1.16524075e+00, -1.49934348e-01,
        -9.79117274e+00, -7.22309330e-02, -4.70108927e-01,
        -6.87393130e+00, -2.10244341e+00, -7.70293521e-01])
\end{Verbatim}

Call \emph{function\_S} and \emph{function\_M} for \(t=T-1,...,0\) together with the vector \(\vec\Psi\left(X_t,a_t\right)\) to compute \(\vec W_t\) and learn the Q-function \(Q_t^\star\left(X_t,a_t\right)=\mathbf A_t^T\mathbf U_W\left(t,X_t\right)\) implied by the input data backward recursively, with terminal condition \(Q_T^\star\left(X_T,a_T=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]\).

When the vector \(\vec{W}_t\) is computed as per the above at time \(t\), we can convert it back to a matrix \(\mathbf{W}_t\) by reshaping \(\vec{W}_t\) to the shape \(3 \times M\). We can now calculate the matrix \(\mathbf{U}_t\) at time \(t\) for the whole set of MC paths as follows (this is Eq.~(65) from the paper in matrix form): \[\mathbf U_{W}\left(t,X_t\right) = \left[\begin{matrix} \mathbf U_W^{0,k}\left(t,X_t \right) \\ \mathbf U_W^{1,k}\left(t,X_t \right) \\ \mathbf U_W^{2,k} \left(t,X_t \right) \end{matrix}\right] = \mathbf{W}_t \mathbf{\Phi}_t\left(t,X_t\right)\] Here the matrix \(\mathbf{\Phi}_t\) has the shape \(M \times N_{MC}\). Therefore, their dot product has dimension \(3 \times N_{MC}\), as it should be. Once this matrix \(\mathbf{U}_t\) is computed, the individual vectors \(\mathbf{U}_{W}^{\left(0\right)}, \mathbf{U}_{W}^{\left(1\right)}, \mathbf{U}_{W}^{\left(2\right)}\) for all MC paths are read off as rows of this matrix. From here, we can compute the optimal action and the optimal Q-function \(Q^{\star}(X_t, a_t^{\star})\) at the optimal action for a given step \(t\). This will be used to evaluate \(\max_{a_{t+1} \in \mathcal{A}} Q^{\star}\left(X_{t+1}, a_{t+1}\right)\). The optimal action and the optimal Q-function at the optimal action can be computed as \[a_t^\star\left(X_t\right)=\frac{\mathbb{E}_{t} \left[ \Delta \hat{S}_{t} \hat{\Pi}_{t+1} + \frac{1}{2 \gamma \lambda} \Delta S_{t} \right]}{ \mathbb{E}_{t} \left[ \left( \Delta \hat{S}_{t} \right)^2 \right]}\, , \quad\quad Q_t^\star\left(X_t,a_t^\star\right)=\mathbf U_W^{\left(0\right)}\left(t,X_t\right)+ a_t^\star\, \mathbf U_W^{\left(1\right)}\left(t,X_t\right) +\frac{1}{2}\left(a_t^\star\right)^2\mathbf U_W^{\left(2\right)}\left(t,X_t\right)\] with terminal conditions \(a_T^\star=0\) and \(Q_T^\star\left(X_T,a_T^\star=0\right)=-\Pi_T\left(X_T\right)-\lambda Var\left[\Pi_T\left(X_T\right)\right]\). Plots of 5 paths of the optimal action \(a_t^\star\left(X_t\right)\), of the optimal Q-function at the optimal action \(Q_t^\star\left(X_t,a_t^\star\right)\), and of the implied Q-function \(Q_t^\star\left(X_t,a_t\right)\) are shown below.
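To make the shapes concrete, here is a minimal NumPy sketch of the reshaping and read-off just described. It is only an illustration, not part of the graded code: it assumes a solved coefficient vector \texttt{W\_t} of length \(3M\) (as obtained from \(\mathbf S_t^{-1}\mathbf M_t\)) and the basis-function array \texttt{data\_mat\_t} of shape \((T+1) \times N_{MC} \times M\) used earlier in the notebook; the helper name \texttt{optimal\_action\_and\_q} is ours, and the analytic maximizer of the quadratic in \(a_t\) is used here only for illustration.

\begin{Verbatim}
import numpy as np

def optimal_action_and_q(W_t, data_mat_t, t, num_basis):
    # vec(W_t) of length 3*M  ->  3 x M matrix (column-major, order='F')
    W_mat = W_t.reshape((3, num_basis), order='F')

    # Phi_t has shape M x N_MC, so U_mat = W_mat . Phi_t has shape 3 x N_MC
    Phi_mat = data_mat_t[t, :, :].T
    U_mat = np.dot(W_mat, Phi_mat)

    # rows of U_mat are U_W^(0), U_W^(1), U_W^(2) for all MC paths
    U_W_0, U_W_1, U_W_2 = U_mat[0, :], U_mat[1, :], U_mat[2, :]

    # Q(a) = U_0 + a*U_1 + 0.5*a^2*U_2 is maximized at a* = -U_1/U_2
    # (assuming U_2 < 0; the notebook reuses the DP hedges instead)
    a_star = -U_W_1 / U_W_2
    Q_star = U_W_0 + a_star * U_W_1 + 0.5 * a_star ** 2 * U_W_2
    return a_star, Q_star
\end{Verbatim}

As the comment in the next cell explains, the actual loop substitutes the DP-computed hedges for \(a_t^\star\) so that function-approximation errors do not back-propagate through the recursion, which is why it does not use the analytic maximizer shown in this sketch.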
\subsection{Fitted Q Iteration (FQI)}\label{fitted-q-iteration-fqi} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}33}]:} \PY{n}{starttime} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} implied Q\PYZhy{}function by input data (using the first form in Eq.(68))} \PY{n}{Q\PYZus{}RL} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{\PYZhy{}} \PY{n}{risk\PYZus{}lambda} \PY{o}{*} \PY{n}{np}\PY{o}{.}\PY{n}{var}\PY{p}{(}\PY{n}{Pi}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} optimal action} \PY{n}{a\PYZus{}opt} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{(}\PY{n}{N\PYZus{}MC}\PY{p}{,}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{a\PYZus{}star} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{a\PYZus{}star}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{l+m+mi}{0} \PY{c+c1}{\PYZsh{} optimal Q\PYZhy{}function with optimal action} \PY{n}{Q\PYZus{}star} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{c+c1}{\PYZsh{} max\PYZus{}Q\PYZus{}star\PYZus{}next = Q\PYZus{}star.iloc[:,\PYZhy{}1].values } \PY{n}{max\PYZus{}Q\PYZus{}star} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{p}{(}\PY{n}{N\PYZus{}MC}\PY{p}{,}\PY{n}{T}\PY{o}{+}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{)} \PY{n}{max\PYZus{}Q\PYZus{}star}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]} \PY{o}{=} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{n}{num\PYZus{}basis} \PY{o}{=} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]} \PY{n}{reg\PYZus{}param} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}3} \PY{n}{hyper\PYZus{}param} \PY{o}{=} \PY{l+m+mf}{1e\PYZhy{}1} \PY{c+c1}{\PYZsh{} The backward loop} \PY{k}{for} \PY{n}{t} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{T}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{)}\PY{p}{:} \PY{c+c1}{\PYZsh{} calculate vector W\PYZus{}t} \PY{n}{S\PYZus{}mat\PYZus{}reg} \PY{o}{=} 
\PY{n}{function\PYZus{}S\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{S\PYZus{}t\PYZus{}mat}\PY{p}{,}\PY{n}{reg\PYZus{}param}\PY{p}{)} \PY{n}{M\PYZus{}t} \PY{o}{=} \PY{n}{function\PYZus{}M\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,}\PY{n}{Q\PYZus{}star}\PY{p}{,} \PY{n}{R}\PY{p}{,} \PY{n}{Psi\PYZus{}mat}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{,} \PY{n}{gamma}\PY{p}{)} \PY{n}{W\PYZus{}t} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{inv}\PY{p}{(}\PY{n}{S\PYZus{}mat\PYZus{}reg}\PY{p}{)}\PY{p}{,}\PY{n}{M\PYZus{}t}\PY{p}{)} \PY{c+c1}{\PYZsh{} this is an 1D array of dimension 3M} \PY{c+c1}{\PYZsh{} reshape to a matrix W\PYZus{}mat } \PY{n}{W\PYZus{}mat} \PY{o}{=} \PY{n}{W\PYZus{}t}\PY{o}{.}\PY{n}{reshape}\PY{p}{(}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{n}{num\PYZus{}basis}\PY{p}{)}\PY{p}{,} \PY{n}{order}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{F}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{c+c1}{\PYZsh{} shape 3 x M } \PY{c+c1}{\PYZsh{} make matrix Phi\PYZus{}mat} \PY{n}{Phi\PYZus{}mat} \PY{o}{=} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{[}\PY{n}{t}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{o}{.}\PY{n}{T} \PY{c+c1}{\PYZsh{} dimension M x N\PYZus{}MC} \PY{c+c1}{\PYZsh{} compute matrix U\PYZus{}mat of dimension N\PYZus{}MC x 3 } \PY{n}{U\PYZus{}mat} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{W\PYZus{}mat}\PY{p}{,} \PY{n}{Phi\PYZus{}mat}\PY{p}{)} \PY{c+c1}{\PYZsh{} compute vectors U\PYZus{}W\PYZca{}0,U\PYZus{}W\PYZca{}1,U\PYZus{}W\PYZca{}2 as rows of matrix U\PYZus{}mat } \PY{n}{U\PYZus{}W\PYZus{}0} \PY{o}{=} \PY{n}{U\PYZus{}mat}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{p}{:}\PY{p}{]} \PY{n}{U\PYZus{}W\PYZus{}1} \PY{o}{=} \PY{n}{U\PYZus{}mat}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{p}{:}\PY{p}{]} \PY{n}{U\PYZus{}W\PYZus{}2} \PY{o}{=} \PY{n}{U\PYZus{}mat}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{,}\PY{p}{:}\PY{p}{]} \PY{c+c1}{\PYZsh{} IMPORTANT!!! Instead, use hedges computed as in DP approach:} \PY{c+c1}{\PYZsh{} in this way, errors of function approximation do not back\PYZhy{}propagate. 
} \PY{c+c1}{\PYZsh{} This provides a stable solution, unlike} \PY{c+c1}{\PYZsh{} the first method that leads to a diverging solution } \PY{n}{A\PYZus{}mat} \PY{o}{=} \PY{n}{function\PYZus{}A\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{,} \PY{n}{reg\PYZus{}param}\PY{p}{)} \PY{n}{B\PYZus{}vec} \PY{o}{=} \PY{n}{function\PYZus{}B\PYZus{}vec}\PY{p}{(}\PY{n}{t}\PY{p}{,} \PY{n}{Pi\PYZus{}hat}\PY{p}{,} \PY{n}{delta\PYZus{}S\PYZus{}hat}\PY{p}{,} \PY{n}{S}\PY{p}{,} \PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{)} \PY{c+c1}{\PYZsh{} print (\PYZsq{}t = A\PYZus{}mat.shape = B\PYZus{}vec.shape = \PYZsq{}, t, A\PYZus{}mat.shape, B\PYZus{}vec.shape)} \PY{n}{phi} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{linalg}\PY{o}{.}\PY{n}{inv}\PY{p}{(}\PY{n}{A\PYZus{}mat}\PY{p}{)}\PY{p}{,} \PY{n}{B\PYZus{}vec}\PY{p}{)} \PY{n}{a\PYZus{}opt}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{data\PYZus{}mat\PYZus{}t}\PY{p}{[}\PY{n}{t}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{]}\PY{p}{,}\PY{n}{phi}\PY{p}{)} \PY{n}{a\PYZus{}star}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{a\PYZus{}opt}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{n}{max\PYZus{}Q\PYZus{}star}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{U\PYZus{}W\PYZus{}0} \PY{o}{+} \PY{n}{a\PYZus{}opt}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{*} \PY{n}{U\PYZus{}W\PYZus{}1} \PY{o}{+} \PY{l+m+mf}{0.5} \PY{o}{*} \PY{p}{(}\PY{n}{a\PYZus{}opt}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{o}{*} \PY{n}{U\PYZus{}W\PYZus{}2} \PY{c+c1}{\PYZsh{} update dataframes } \PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{max\PYZus{}Q\PYZus{}star}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{c+c1}{\PYZsh{} update the Q\PYZus{}RL solution given by a dot product of two matrices W\PYZus{}t Psi\PYZus{}t} \PY{n}{Psi\PYZus{}t} \PY{o}{=} \PY{n}{Psi\PYZus{}mat}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{o}{.}\PY{n}{T} \PY{c+c1}{\PYZsh{} dimension N\PYZus{}MC x 3M } \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{dot}\PY{p}{(}\PY{n}{Psi\PYZus{}t}\PY{p}{,} \PY{n}{W\PYZus{}t}\PY{p}{)} \PY{c+c1}{\PYZsh{} trim outliers for Q\PYZus{}RL} \PY{n}{up\PYZus{}percentile\PYZus{}Q\PYZus{}RL} \PY{o}{=} \PY{l+m+mi}{95} \PY{c+c1}{\PYZsh{} 95} \PY{n}{low\PYZus{}percentile\PYZus{}Q\PYZus{}RL} \PY{o}{=} \PY{l+m+mi}{5} \PY{c+c1}{\PYZsh{} 5} \PY{n}{low\PYZus{}perc\PYZus{}Q\PYZus{}RL}\PY{p}{,} \PY{n}{up\PYZus{}perc\PYZus{}Q\PYZus{}RL} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{percentile}\PY{p}{(}\PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{p}{,}\PY{p}{[}\PY{n}{low\PYZus{}percentile\PYZus{}Q\PYZus{}RL}\PY{p}{,}\PY{n}{up\PYZus{}percentile\PYZus{}Q\PYZus{}RL}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} print(\PYZsq{}t = \PYZpc{}s low\PYZus{}perc\PYZus{}Q\PYZus{}RL = \PYZpc{}s up\PYZus{}perc\PYZus{}Q\PYZus{}RL = \PYZpc{}s\PYZsq{} \PYZpc{} (t, low\PYZus{}perc\PYZus{}Q\PYZus{}RL, up\PYZus{}perc\PYZus{}Q\PYZus{}RL))} \PY{c+c1}{\PYZsh{} trim outliers in values of max\PYZus{}Q\PYZus{}star:} \PY{n}{flag\PYZus{}lower} \PY{o}{=} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{o}{\PYZlt{}} 
\PY{n}{low\PYZus{}perc\PYZus{}Q\PYZus{}RL} \PY{n}{flag\PYZus{}upper} \PY{o}{=} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{t}\PY{p}{]}\PY{o}{.}\PY{n}{values} \PY{o}{\PYZgt{}} \PY{n}{up\PYZus{}perc\PYZus{}Q\PYZus{}RL} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{flag\PYZus{}lower}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{low\PYZus{}perc\PYZus{}Q\PYZus{}RL} \PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{loc}\PY{p}{[}\PY{n}{flag\PYZus{}upper}\PY{p}{,}\PY{n}{t}\PY{p}{]} \PY{o}{=} \PY{n}{up\PYZus{}perc\PYZus{}Q\PYZus{}RL} \PY{n}{endtime} \PY{o}{=} \PY{n}{time}\PY{o}{.}\PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{Time Cost:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{endtime} \PY{o}{\PYZhy{}} \PY{n}{starttime}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{seconds}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Time Cost: 0.0989999771118164 seconds \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] D:\textbackslash{}application\textbackslash{}Anaconda3\textbackslash{}envs\textbackslash{}pyalgo\textbackslash{}lib\textbackslash{}site-packages\textbackslash{}ipykernel\_launcher.py:21: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape({\ldots}) instead \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}34}]:} \PY{c+c1}{\PYZsh{} plot both simulations} \PY{n}{f}\PY{p}{,} \PY{n}{axarr} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{l+m+mi}{3}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{subplots\PYZus{}adjust}\PY{p}{(}\PY{n}{hspace}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{5}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{set\PYZus{}figheight}\PY{p}{(}\PY{l+m+mf}{8.0}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{set\PYZus{}figwidth}\PY{p}{(}\PY{l+m+mf}{8.0}\PY{p}{)} \PY{n}{step\PYZus{}size} \PY{o}{=} \PY{n}{N\PYZus{}MC} \PY{o}{/}\PY{o}{/} \PY{l+m+mi}{10} \PY{n}{idx\PYZus{}plot} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{arange}\PY{p}{(}\PY{n}{step\PYZus{}size}\PY{p}{,} \PY{n}{N\PYZus{}MC}\PY{p}{,} \PY{n}{step\PYZus{}size}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{a\PYZus{}star}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Optimal action \PYZdl{}a\PYZus{}t\PYZca{}}\PY{l+s+s1}{\PYZob{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{star\PYZcb{}\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{Q\PYZus{}RL}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Q\PYZhy{}function \PYZdl{}Q\PYZus{}t\PYZca{}}\PY{l+s+s1}{\PYZob{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{star\PYZcb{} (X\PYZus{}t, a\PYZus{}t)\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} 
\PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,} \PY{n}{idx\PYZus{}plot}\PY{p}{]}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{axarr}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Optimal Q\PYZhy{}function \PYZdl{}Q\PYZus{}t\PYZca{}}\PY{l+s+s1}{\PYZob{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{star\PYZcb{} (X\PYZus{}t, a\PYZus{}t\PYZca{}}\PY{l+s+s1}{\PYZob{}}\PY{l+s+s1}{\PYZbs{}}\PY{l+s+s1}{star\PYZcb{})\PYZdl{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{QLBS\PYZus{}FQI\PYZus{}off\PYZus{}policy\PYZus{}summary\PYZus{}ATM\PYZus{}eta\PYZus{}}\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s1}{.png}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+m+mi}{100} \PY{o}{*} \PY{n}{eta}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{600}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_59_0.png} \end{center} { \hspace*{\fill} \\} Compare the optimal action \(a_t^\star\left(X_t\right)\) and optimal Q-function with optimal action \(Q_t^\star\left(X_t,a_t^\star\right)\) given by Dynamic Programming and Reinforcement Learning. Plots of 1 path comparisons are given below. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}35}]:} \PY{c+c1}{\PYZsh{} plot a and a\PYZus{}star} \PY{c+c1}{\PYZsh{} plot 1 path} \PY{n}{num\PYZus{}path} \PY{o}{=} \PY{l+m+mi}{120} \PY{c+c1}{\PYZsh{} 240 \PYZsh{} 260 \PYZsh{} 300 \PYZsh{} 430 \PYZsh{} 510} \PY{c+c1}{\PYZsh{} Note that a from the DP method and a\PYZus{}star from the RL method are now identical by construction} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{a}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{num\PYZus{}path}\PY{p}{]}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{DP Action}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{a\PYZus{}star}\PY{o}{.}\PY{n}{T}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{n}{num\PYZus{}path}\PY{p}{]}\PY{p}{,} \PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{RL Action}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Time Steps}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Optimal Action Comparison Between DP and RL}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_61_0.png} \end{center} { \hspace*{\fill} \\} \subsection{Summary of the RL-based pricing with QLBS}\label{summary-of-the-rl-based-pricing-with-qlbs} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}36}]:} \PY{c+c1}{\PYZsh{} QLBS option price} \PY{n}{C\PYZus{}QLBS} \PY{o}{=} \PY{o}{\PYZhy{}} \PY{n}{Q\PYZus{}star}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} Q\PYZus{}RL \PYZsh{} } 
\PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ QLBS RL Option Pricing }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}\PYZhy{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Initial Stock Price:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{S0}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Drift of Stock:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{mu}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Volatility of Stock:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{sigma}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Risk\PYZhy{}free Rate:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{r}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Risk aversion parameter :}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{risk\PYZus{}lambda}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Strike:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{K}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}25s}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Maturity:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{,} \PY{n}{M}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}26s}\PY{l+s+s1}{ }\PY{l+s+si}{\PYZpc{}.4f}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{The QLBS Put Price 1 :}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{C\PYZus{}QLBS}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{p}{:}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+si}{\PYZpc{}\PYZhy{}26s}\PY{l+s+s1}{ }\PY{l+s+si}{\PYZpc{}.4f}\PY{l+s+s1}{\PYZsq{}} \PY{o}{\PYZpc{}} \PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{Black\PYZhy{}Sholes Put Price:}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{bs\PYZus{}put}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{c+c1}{\PYZsh{} \PYZsh{} plot one path} \PY{c+c1}{\PYZsh{} plt.plot(C\PYZus{}QLBS.T.iloc[:,[200]])} \PY{c+c1}{\PYZsh{} 
plt.xlabel(\PYZsq{}Time Steps\PYZsq{})} \PY{c+c1}{\PYZsh{} plt.title(\PYZsq{}QLBS RL Option Price\PYZsq{})} \PY{c+c1}{\PYZsh{} plt.show()} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] --------------------------------- QLBS RL Option Pricing --------------------------------- Initial Stock Price: 100 Drift of Stock: 0.05 Volatility of Stock: 0.15 Risk-free Rate: 0.03 Risk aversion parameter : 0.001 Strike: 100 Maturity: 1 The QLBS Put Price 1 : 4.7040 Black-Sholes Put Price: 4.5296 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}37}]:} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \PY{n}{part5} \PY{o}{=} \PY{n+nb}{str}\PY{p}{(}\PY{n}{C\PYZus{}QLBS}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n}{submissions}\PY{p}{[}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{]}\PY{o}{=}\PY{n}{part5} \PY{n}{grading}\PY{o}{.}\PY{n}{submit}\PY{p}{(}\PY{n}{COURSERA\PYZus{}EMAIL}\PY{p}{,} \PY{n}{COURSERA\PYZus{}TOKEN}\PY{p}{,} \PY{n}{assignment\PYZus{}key}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{,}\PY{n}{all\PYZus{}parts}\PY{p}{,}\PY{n}{submissions}\PY{p}{)} \PY{n}{C\PYZus{}QLBS}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]} \PY{c+c1}{\PYZsh{}\PYZsh{}\PYZsh{} GRADED PART (DO NOT EDIT) \PYZsh{}\PYZsh{}\PYZsh{}} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Submission successful, please check on the coursera grader page for the status \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}37}]:} 4.703966945077183 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}38}]:} \PY{c+c1}{\PYZsh{} add here calculation of different MC runs (6 repetitions of action randomization)} \PY{c+c1}{\PYZsh{} on\PYZhy{}policy values} \PY{n}{y1\PYZus{}onp} \PY{o}{=} \PY{l+m+mf}{5.0211} \PY{c+c1}{\PYZsh{} 4.9170} \PY{n}{y2\PYZus{}onp} \PY{o}{=} \PY{l+m+mf}{4.7798} \PY{c+c1}{\PYZsh{} 7.6500} \PY{c+c1}{\PYZsh{} QLBS\PYZus{}price\PYZus{}on\PYZus{}policy = 4.9004 +/\PYZhy{} 0.1206} \PY{c+c1}{\PYZsh{} these are the results for noise eta = 0.15} \PY{c+c1}{\PYZsh{} p1 = np.array([5.0174, 4.9249, 4.9191, 4.9039, 4.9705, 4.6216 ])} \PY{c+c1}{\PYZsh{} p2 = np.array([6.3254, 8.6733, 8.0686, 7.5355, 7.1751, 7.1959 ])} \PY{n}{p1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{5.0485}\PY{p}{,} \PY{l+m+mf}{5.0382}\PY{p}{,} \PY{l+m+mf}{5.0211}\PY{p}{,} \PY{l+m+mf}{5.0532}\PY{p}{,} \PY{l+m+mf}{5.0184}\PY{p}{]}\PY{p}{)} \PY{n}{p2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{4.7778}\PY{p}{,} \PY{l+m+mf}{4.7853}\PY{p}{,} \PY{l+m+mf}{4.7781}\PY{p}{,}\PY{l+m+mf}{4.7805}\PY{p}{,} \PY{l+m+mf}{4.7828}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} results for eta = 0.25} \PY{c+c1}{\PYZsh{} p3 = np.array([4.9339, 4.9243, 4.9224, 5.1643, 5.0449, 4.9176 ])} \PY{c+c1}{\PYZsh{} p4 = np.array([7.7696,8.1922, 7.5440,7.2285, 5.6306, 12.6072])} \PY{n}{p3} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{5.0147}\PY{p}{,} \PY{l+m+mf}{5.0445}\PY{p}{,} \PY{l+m+mf}{5.1047}\PY{p}{,} \PY{l+m+mf}{5.0644}\PY{p}{,} \PY{l+m+mf}{5.0524}\PY{p}{]}\PY{p}{)} \PY{n}{p4} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{4.7842}\PY{p}{,}\PY{l+m+mf}{4.7873}\PY{p}{,} \PY{l+m+mf}{4.7847}\PY{p}{,} \PY{l+m+mf}{4.7792}\PY{p}{,} \PY{l+m+mf}{4.7796}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} eta = 0.35 } 
\PY{c+c1}{\PYZsh{} p7 = np.array([4.9718, 4.9528, 5.0170, 4.7138, 4.9212, 4.6058])} \PY{c+c1}{\PYZsh{} p8 = np.array([8.2860, 7.4012, 7.2492, 8.9926, 6.2443, 6.7755])} \PY{n}{p7} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{5.1342}\PY{p}{,} \PY{l+m+mf}{5.2288}\PY{p}{,} \PY{l+m+mf}{5.0905}\PY{p}{,} \PY{l+m+mf}{5.0784}\PY{p}{,} \PY{l+m+mf}{5.0013} \PY{p}{]}\PY{p}{)} \PY{n}{p8} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{4.7762}\PY{p}{,} \PY{l+m+mf}{4.7813}\PY{p}{,}\PY{l+m+mf}{4.7789}\PY{p}{,} \PY{l+m+mf}{4.7811}\PY{p}{,} \PY{l+m+mf}{4.7801}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} results for eta = 0.5} \PY{c+c1}{\PYZsh{} p5 = np.array([4.9446, 4.9894,6.7388, 4.7938,6.1590, 4.5935 ])} \PY{c+c1}{\PYZsh{} p6 = np.array([7.5632, 7.9250, 6.3491, 7.3830, 13.7668, 14.6367 ])} \PY{n}{p5} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{3.1459}\PY{p}{,} \PY{l+m+mf}{4.9673}\PY{p}{,} \PY{l+m+mf}{4.9348}\PY{p}{,} \PY{l+m+mf}{5.2998}\PY{p}{,} \PY{l+m+mf}{5.0636} \PY{p}{]}\PY{p}{)} \PY{n}{p6} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{4.7816}\PY{p}{,} \PY{l+m+mf}{4.7814}\PY{p}{,} \PY{l+m+mf}{4.7834}\PY{p}{,} \PY{l+m+mf}{4.7735}\PY{p}{,} \PY{l+m+mf}{4.7768}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} print(np.mean(p1), np.mean(p3), np.mean(p5))} \PY{c+c1}{\PYZsh{} print(np.mean(p2), np.mean(p4), np.mean(p6))} \PY{c+c1}{\PYZsh{} print(np.std(p1), np.std(p3), np.std(p5))} \PY{c+c1}{\PYZsh{} print(np.std(p2), np.std(p4), np.std(p6))} \PY{n}{x} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{l+m+mf}{0.15}\PY{p}{,} \PY{l+m+mf}{0.25}\PY{p}{,} \PY{l+m+mf}{0.35}\PY{p}{,} \PY{l+m+mf}{0.5}\PY{p}{]}\PY{p}{)} \PY{n}{y1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p1}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p3}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p7}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p5}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{n}{y2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p2}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p4}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p8}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{n}{p6}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{n}{y\PYZus{}err\PYZus{}1} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p1}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p3}\PY{p}{)}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p7}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p5}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{n}{y\PYZus{}err\PYZus{}2} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p2}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p4}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p8}\PY{p}{)}\PY{p}{,} \PY{n}{np}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{n}{p6}\PY{p}{)}\PY{p}{]}\PY{p}{)} \PY{c+c1}{\PYZsh{} plot it } \PY{n}{f}\PY{p}{,} \PY{n}{axs} \PY{o}{=} \PY{n}{plt}\PY{o}{.}\PY{n}{subplots}\PY{p}{(}\PY{n}{nrows}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{ncols}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{sharex}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{subplots\PYZus{}adjust}\PY{p}{(}\PY{n}{hspace}\PY{o}{=}\PY{o}{.}\PY{l+m+mi}{5}\PY{p}{)} 
\PY{n}{f}\PY{o}{.}\PY{n}{set\PYZus{}figheight}\PY{p}{(}\PY{l+m+mf}{6.0}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{set\PYZus{}figwidth}\PY{p}{(}\PY{l+m+mf}{8.0}\PY{p}{)} \PY{n}{ax} \PY{o}{=} \PY{n}{axs}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{ax}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y1}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{axhline}\PY{p}{(}\PY{n}{y}\PY{o}{=}\PY{n}{y1\PYZus{}onp}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{color}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{textstr} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{On\PYZhy{}policy value = }\PY{l+s+si}{\PYZpc{}2.2f}\PY{l+s+s1}{\PYZsq{}}\PY{o}{\PYZpc{}} \PY{p}{(}\PY{n}{y1\PYZus{}onp}\PY{p}{)} \PY{n}{props} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n}{boxstyle}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{round}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{wheat}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{c+c1}{\PYZsh{} place a text box in upper left in axes coords} \PY{n}{ax}\PY{o}{.}\PY{n}{text}\PY{p}{(}\PY{l+m+mf}{0.05}\PY{p}{,} \PY{l+m+mf}{0.15}\PY{p}{,} \PY{n}{textstr}\PY{p}{,} \PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{11}\PY{p}{,}\PY{n}{transform}\PY{o}{=}\PY{n}{ax}\PY{o}{.}\PY{n}{transAxes}\PY{p}{,} \PY{n}{verticalalignment}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{top}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{bbox}\PY{o}{=}\PY{n}{props}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Mean option price}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Noise level}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax} \PY{o}{=} \PY{n}{axs}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{ax}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y2}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{axhline}\PY{p}{(}\PY{n}{y}\PY{o}{=}\PY{n}{y2\PYZus{}onp}\PY{p}{,}\PY{n}{linewidth}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{color}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{textstr} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{On\PYZhy{}policy value = }\PY{l+s+si}{\PYZpc{}2.2f}\PY{l+s+s1}{\PYZsq{}}\PY{o}{\PYZpc{}} \PY{p}{(}\PY{n}{y2\PYZus{}onp}\PY{p}{)} \PY{n}{props} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{n}{boxstyle}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{round}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{wheat}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{alpha}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)} \PY{c+c1}{\PYZsh{} place a text box in upper left in axes coords} \PY{n}{ax}\PY{o}{.}\PY{n}{text}\PY{p}{(}\PY{l+m+mf}{0.35}\PY{p}{,} \PY{l+m+mf}{0.95}\PY{p}{,} \PY{n}{textstr}\PY{p}{,} \PY{n}{fontsize}\PY{o}{=}\PY{l+m+mi}{11}\PY{p}{,}\PY{n}{transform}\PY{o}{=}\PY{n}{ax}\PY{o}{.}\PY{n}{transAxes}\PY{p}{,} \PY{n}{verticalalignment}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{top}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{bbox}\PY{o}{=}\PY{n}{props}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Mean option price}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Noise level}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax} \PY{o}{=} \PY{n}{axs}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{ax}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y\PYZus{}err\PYZus{}1}\PY{p}{)} 
\PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Std of option price}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Noise level}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax} \PY{o}{=} \PY{n}{axs}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{ax}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{y\PYZus{}err\PYZus{}2}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}title}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Std of option price}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}xlabel}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Noise level}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{suptitle}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Mean and std of option price vs noise level}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{Option\PYZus{}price\PYZus{}vs\PYZus{}noise\PYZus{}level.png}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{600}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_65_0.png} \end{center} { \hspace*{\fill} \\} % Add a bibliography block to the postdoc \end{document}
{ "alphanum_fraction": 0.5365915744, "avg_line_length": 78.0648743425, "ext": "tex", "hexsha": "c55da94dcf44cc53c4701b2afb66cb5f00bffb05", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4a7896f3225cb84e2f15770409c1f18bfe529615", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cilsya/coursera", "max_forks_repo_path": "Machine _Learning_and_Reinforcement_Learning_in_Finance/03_Reinforcement_Learning_in_Finance/03_Fitted_Q_Iteration/notebook.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "4a7896f3225cb84e2f15770409c1f18bfe529615", "max_issues_repo_issues_event_max_datetime": "2021-06-01T22:49:40.000Z", "max_issues_repo_issues_event_min_datetime": "2020-03-24T16:17:05.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cilsya/coursera", "max_issues_repo_path": "Machine _Learning_and_Reinforcement_Learning_in_Finance/03_Reinforcement_Learning_in_Finance/03_Fitted_Q_Iteration/notebook.tex", "max_line_length": 624, "max_stars_count": 1, "max_stars_repo_head_hexsha": "4a7896f3225cb84e2f15770409c1f18bfe529615", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cilsya/coursera", "max_stars_repo_path": "Machine _Learning_and_Reinforcement_Learning_in_Finance/03_Reinforcement_Learning_in_Finance/03_Fitted_Q_Iteration/notebook.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-15T13:57:04.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-15T13:57:04.000Z", "num_tokens": 60147, "size": 133569 }
\commentout{
\begin{code}
module HighAssurance where
import Enigma
import Classic
\end{code}
}
%#####################################################################
\chapter{High-assurance programming}
\label{cha:high-assur-progr}
Writing correct software is the holy grail of programming. Bugs inevitably exist, however, even in thoroughly tested projects. One fundamental issue is the lack of support in typical programming languages to let the programmer {\em state} what it means to be correct, let alone formally establish any notion of correctness. To address this shortcoming, Cryptol advocates the high-assurance programming approach: programmers explicitly state correctness properties\indProperty along with their code, which are then checked by the Cryptol toolset. Properties are not comments or mere annotations, so there is no concern that they will become obsolete as your code evolves. The goal of this chapter is to introduce you to these tools, and to the notion of high-assurance programming in Cryptol via examples.
%=====================================================================
\section{Writing properties}
\label{sec:writingproperties}
\sectionWithAnswers{Writing properties}{sec:writingproperties}
Consider the equality:
$$ x^2 - y^2 = (x-y) * (x+y) $$
Let us write two Cryptol functions that capture both sides of this equation:\indTimes\indExponentiate\indMinus\indPlus
\begin{code}
sqDiff1 (x, y) = x^^2 - y^^2
sqDiff2 (x, y) = (x-y) * (x+y)
\end{code}
We would like to express the property that {\tt sqDiff1} and {\tt sqDiff2} are precisely the same functions: Given the same {\tt x} and {\tt y}, they should return exactly the same answer. We can express this property in Cryptol using a {\tt property} declaration:\indProperty
\begin{code}
sqDiffsCorrect : ([8], [8]) -> Bit
property sqDiffsCorrect (x, y) = sqDiff1 (x, y) == sqDiff2 (x, y)
\end{code}
The above declaration reads as follows: {\tt sqDiffsCorrect} is a property stating that for all {\tt x} and {\tt y}, the expression {\tt sqDiff1 (x, y) == sqDiff2(x, y)} evaluates to {\tt True}. Furthermore, the type signature restricts the type of the property to apply to only 8-bit values. As usual, the type signature is optional.\indSignature If not given, Cryptol will infer one for you.
\begin{Exercise}\label{ex:thm:intro}
Put the above property in a file and load it into Cryptol. Then issue the command:
\begin{Verbatim}
  Cryptol> :t sqDiffsCorrect
\end{Verbatim}
What do you see?\indCmdInfo
\end{Exercise}
\begin{Answer}\ansref{ex:thm:intro}
Cryptol will print the name and type of the property. The command {\tt :t} stands for {\tt type}. (The related command {\tt :i} stands for {\tt info}; it provides data about the properties, type-synonyms, etc.\ available at the top-level of your program.)\indCmdInfo
\end{Answer}
\note{It is important to emphasize that the mathematical equality above and the Cryptol property are {\em not} stating precisely the same property. Remember that all Cryptol arithmetic is modular,\indModular while the mathematical equation is over arbitrary numbers, including negative, real, or even complex numbers. The takeaway of this discussion is that we are only using this example for illustration purposes: Cryptol properties relate to Cryptol programs, and should not be used for expressing mathematical theorems (unless, of course, you are stating group theory theorems or theorems in an appropriate algebra)!
In particular, {\tt sqDiffsCorrect} is a property about the Cryptol functions {\tt sqDiff1} and {\tt sqDiff2}, not about the mathematical equation that inspired it.}
\begin{Exercise}\label{ex:thm:0}
Write a property {\tt revRev} stating that {\tt reverse} of a {\tt reverse} returns a sequence unchanged.\indReverse
\end{Exercise}
\begin{Answer}\ansref{ex:thm:0}\indReverse
\begin{code}
property revRev xs = reverse (reverse xs) == xs
\end{code}
\end{Answer}
\begin{Exercise}\label{ex:thm:1}
Write a property {\tt appAssoc} stating that append is an associative operator.\indAppend
\end{Exercise}
\begin{Answer}\ansref{ex:thm:1}\indAppend
\begin{code}
property appAssoc (xs, ys, zs) = xs # (ys # zs) == (xs # ys) # zs
\end{code}
\end{Answer}
\begin{Exercise}\label{ex:thm:2}
Write a property {\tt revApp} stating that appending two sequences and reversing the result is the same as reversing the sequences and appending them in the reverse order, as illustrated in the following expression:\indReverse\indAppend
\begin{Verbatim}
  reverse ("HELLO" # "WORLD") == reverse "WORLD" # reverse "HELLO"
\end{Verbatim}
\end{Exercise}
\begin{Answer}\ansref{ex:thm:2}\indReverse\indAppend
\begin{code}
property revApp (xs, ys) = reverse (xs # ys) == reverse ys # reverse xs
\end{code}
\end{Answer}
\begin{Exercise}\label{ex:thm:3}
Write a property {\tt lshMul} stating that shifting left by $k$ is the same as multiplying by $2^k$.
\end{Exercise}
\begin{Answer}\ansref{ex:thm:3}
\begin{code}
property lshMul (n, k) = n << k == n * 2^^k
\end{code}
\end{Answer}
\nb{A {\tt property} declaration simply introduces a property about your program, which may or may {\em not} actually hold. It is an assertion about your program, without any claim of correctness. In particular, you can clearly write properties that simply do not hold:}
\begin{code}
property bogus x = x != x
\end{code}
It is important to distinguish between {\em stating} a property and actually {\em proving} it. So far, our focus is purely on specification. We will focus on actual proofs in Section~\ref{sec:establishcorrectness}.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Property-function correspondence}\indThmFuncCorr
\label{sec:prop-funct-corr}
In Cryptol, properties can be used just like ordinary definitions:
\begin{Verbatim}
  Cryptol> sqDiffsCorrect (3, 5)
  True
  Cryptol> :t sqDiffsCorrect
  sqDiffsCorrect : ([8],[8]) -> Bit
\end{Verbatim}
That is, a property over {\tt $(x, y)$} is the same as a function over the tuple {\tt (x, y)}. We call this the property-function correspondence. Property declarations, aside from the slightly different syntax, are \emph{precisely} the same as Cryptol functions whose return type is \texttt{Bit}. There is no separate language for writing or working with properties. We simply use the full Cryptol language to write both the programs and the properties that they satisfy.
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Capturing test vectors}
\label{sec:thmvec}
One nice application of Cryptol properties is in capturing test vectors:\indZero
\begin{code}
property inctest = [ f x == y | (x, y) <- testVector ] == ~zero
  where f x = x + 1
        testVector = [(3, 4), (4, 5), (12, 13), (78, 79)]
\end{code}
Notice that the property {\tt inctest} does not have any parameters (no {\em forall} section), and thus acts as a simple {\tt Bit} value that will be true precisely when all the given test cases succeed.
\todo[inline]{Figure out how to re-run Cryptol in this chapter to make sure the tweak to the above example (removal of dummy single formal parameter) works.}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Polymorphic properties}
\label{sec:polythm}
Just like functions, Cryptol properties can be polymorphic as well. If you want to write a property for a polymorphic function, for instance, your properties will naturally be polymorphic too. Here is a simple example:
\begin{code}
property multShift x = x * 2 == x << 1
\end{code}
If we ask Cryptol the type of {\tt multShift}, we get:
\begin{Verbatim}
  Cryptol> :t multShift
  multShift : {a} (fin a, a >= 2) => [a] -> Bit
\end{Verbatim}
That is, it is a property about all words of size at least two. The question is whether this property does indeed hold. In the particular case of {\tt multShift}, that is indeed the case; below are some examples using the property-function correspondence:\indThmFuncCorr
\begin{Verbatim}
  Cryptol> multShift (5 : [8])
  True
  Cryptol> multShift (5 : [10])
  True
  Cryptol> multShift (5 : [16])
  True
\end{Verbatim}
However, this is {\em not} always the case for all polymorphic Cryptol properties! The following example demonstrates:
\begin{code}
property flipNeverIdentity x = x != ~x
\end{code}
The property {\tt flipNeverIdentity} states that complementing\indComplement the bits of a value will always result in a different value: a property we might expect to hold intuitively. Here is the type of {\tt flipNeverIdentity}:
\begin{Verbatim}
  Cryptol> :t flipNeverIdentity
  flipNeverIdentity : {a} (fin a) => a -> Bit
\end{Verbatim}
So, the only requirement on {\tt flipNeverIdentity} is that it receives some finite type.\indFin Let us try some examples:
\begin{Verbatim}
  Cryptol> flipNeverIdentity True
  True
  Cryptol> flipNeverIdentity 3
  True
  Cryptol> flipNeverIdentity [1, 2]
  True
\end{Verbatim}
However:
\begin{Verbatim}
  Cryptol> flipNeverIdentity (0 : [0])
  False
\end{Verbatim}
That is, when given a {\tt 0}-bit wide value, the complement will in fact do nothing and return its argument unchanged! Therefore, the property {\tt flipNeverIdentity} is not valid, since it holds at certain monomorphic types, but not at all types.\indMonomorphism
\begin{Exercise}\label{ex:polythm:1}
Demonstrate another monomorphic type where {\tt flipNeverIdentity} does {\em not} hold.
\end{Exercise}
\begin{Answer}\ansref{ex:polythm:1}
There are many such types, all sharing the property that they do not take any space to represent. Here are a couple of examples:\indZero
\begin{Verbatim}
  Cryptol> flipNeverIdentity (zero : ([0], [0]))
  False
  Cryptol> flipNeverIdentity (zero : [0][8])
  False
\end{Verbatim}
\end{Answer}
\nb{The moral of this discussion is that the notion of polymorphic validity\indThmPolyvalid (i.e., that a given polymorphic property will either hold at all of its monomorphic instances or none) does not hold in Cryptol. A polymorphic property can be valid at some, all, or no instances of it.}
\begin{Exercise}\label{ex:polythm:2}
The previous exercise might lead you to think that it is the 0-bit word type ({\tt [0]}) that is at the root of the polymorphic validity issue. This is not true. Consider the following example:\indWidth\indThmPolyvalid
\begin{code}
property widthPoly x = (w == 15) || (w == 531)
  where w = width x
\end{code}
What is the type of {\tt widthPoly}? At what instances does it hold? Write a property {\tt evenWidth} that holds only at even-width word instances.
\end{Exercise} \begin{Answer}\ansref{ex:polythm:2} \begin{Verbatim} Cryptol> :t widthPoly widthPoly : {a, b} (fin a) => [a]b -> Bit \end{Verbatim} It is easy to see that {\tt widthPoly} holds at the instances: \begin{Verbatim} {b} [15]b -> Bit \end{Verbatim} and \begin{Verbatim} {b} [531]b -> Bit \end{Verbatim} but at no other. Based on this, we can write {\tt evenWidth} as follows:\indWidth \begin{code} property evenWidth x = (width x) ! 0 == False \end{code} remembering that the 0'th bit of an even number is always {\tt False}. We have: \begin{Verbatim} Cryptol> evenWidth (0:[1]) False Cryptol> evenWidth (0:[2]) True Cryptol> evenWidth (0:[3]) False Cryptol> evenWidth (0:[4]) True Cryptol> evenWidth (0:[5]) False \end{Verbatim} \end{Answer} %===================================================================== \section{Establishing correctness} \label{sec:establishcorrectness} \sectionWithAnswers{Establishing correctness}{sec:establishcorrectness} Our focus so far has been using Cryptol to {\em state} properties of our programs, without actually trying to prove them correct. This separation of concerns is essential for a pragmatic development approach. Properties act as contracts that programmers state along with their code, which can be separately checked by the toolset~\cite{erkok-matthews-cryptolEqChecking-09}. This approach allows you to state the properties you want, and then work on your code until the properties are indeed satisfied. Furthermore, properties stay with your program forever, so they can be checked at a later time to ensure changes (improvements/additions/optimizations etc.) did not violate the stated properties. %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \subsection{Formal proofs} \label{sec:formal-proofs} Recall our very first property, {\tt sqDiffsCorrect}, from Section~\ref{sec:writingproperties}. We will now use Cryptol to actually prove it automatically. To prove {\tt sqDiffsCorrect}, use the command {\tt :prove}:\indCmdProve \begin{Verbatim} Cryptol> :prove sqDiffsCorrect Q.E.D. \end{Verbatim} Note that the above might take a while to complete, as a formal proof is being produced behind the scenes. Once Cryptol formally establishes the property holds, it prints ``{\tt Q.E.D.}'' to tell the user the proof is complete.\indQED\indProve \nb{Cryptol uses off-the-shelf SAT\glosSAT and SMT\glosSMT solvers to perform these formal proofs~\cite{erkok-matthews-cryptolEqChecking-09}. By default, Cryptol will use the CVC4 SMT solver~\cite{cvc4WWW} under the hood, but it can be configured to use other SAT/SMT solvers as well, such as SRI's Yices~\cite{YicesWWW}\indYices, or Microsoft Research's Z3~\cite{Z3WWW}\footnote{To do this, first install the package(s) from the URLs provided in the bibliography. Once a prover has been installed you can activate it with, for example, {\tt :set prover=z3}.}. Note that the {\tt :prove} command is a push-button tool: once the proof starts there is no user involvement. Of course, the external tool used may not be able to complete all the proofs in a feasible amount of time, naturally. \todo[inline]{Is this going to be a SAW-only capability, or Cryptol as well? 
Ensure that a ticket exists for forward-porting this functionality.}
% While it is out of the scope of this book, let us also mention that
% Cryptol can also translate properties to Isabelle/HOL\indIsabelleHOL
% for semi-automated theorem proving purposes~\cite{Isabelle-book},
% where more complicated properties can be tackled with the human
% guidance if necessary.
}
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Counterexamples}
\label{sec:counterexamples}
Of course, properties can very well be invalid, due to bugs in code or the specifications themselves. In these cases, Cryptol will always print a counterexample value demonstrating why the property does not hold. Here is an example demonstrating what happens when things go wrong:
\begin{code}
failure : [8] -> Bit
property failure x = x == x+1
\end{code}
We have:
\begin{Verbatim}
  Cryptol> :prove failure
  failure 0 = False
\end{Verbatim}
Cryptol tells us that the property is falsifiable, and then demonstrates a particular value ({\tt 0} in this case) that it fails at. These counterexamples are extremely valuable for debugging purposes.\indCounterExample If you try to prove an invalid property that encodes a test vector (Section~\ref{sec:thmvec}), then you will get a mere indication that you have a contradiction, since there are no universally quantified variables to instantiate to show you a counterexample.\indContradiction If the expression evaluates to {\tt True}, then it will be a trivial proof, as expected:
\begin{Verbatim}
  Cryptol> :prove False
  False = False
  Cryptol> :prove True
  Q.E.D.
  Cryptol> :prove 2 == 3
  2==3 = False
  Cryptol> :prove reverse [1, 2] == [1, 2]
  reverse [1, 2] == [1,2] = False
  Cryptol> :prove 1+1 == 0
  Q.E.D.
\end{Verbatim}
The very last example demonstrates modular arithmetic in operation, as usual.\indModular
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\subsection{Dealing with polymorphism}
\label{sec:deal-with-polym}
As we mentioned before, Cryptol properties can be polymorphic. As we explored, we cannot directly prove polymorphic properties, as they may hold for certain monomorphic instances while not for others. In such cases, we must tell Cryptol what particular monomorphic instance we would like it to prove the property at. Let us demonstrate this with the {\tt multShift} property from Section~\ref{sec:polythm}:
\begin{Verbatim}
  Cryptol> :prove multShift
  Not a monomorphic type: {a} (a >= 2, fin a) => [a] -> Bit
\end{Verbatim}
Cryptol is telling us that it cannot prove a polymorphic property directly. We can, however, give a type annotation to monomorphise it,\indTypeAnnotation and then prove it at a desired instance:
\begin{Verbatim}
  Cryptol> :prove multShift : [16] -> Bit
  Q.E.D.
\end{Verbatim}
In fact, you can use this very same technique to pass any bit-valued function to the {\tt :prove} command:\indCmdProve\indWhere
\begin{Verbatim}
  Cryptol> :prove dbl where dbl x = (x:[8]) * 2 == x+x
  Q.E.D.
\end{Verbatim}
Of course, a \lamex (Section~\ref{sec:lamex}) would work just as well too:\indLamExp
\begin{Verbatim}
  Cryptol> :prove \x -> (x:[8]) * 2 == x+x
  Q.E.D.
\end{Verbatim}
\begin{Exercise}\label{ex:prove:1}
Prove the property {\tt revRev} you wrote in Exercise~\ref{sec:writingproperties}-\ref{ex:thm:0}. Try different monomorphic instantiations.
\end{Exercise} \begin{Answer}\ansref{ex:prove:1} If we try to prove {\tt revRev} directly, we will get an error from Cryptol: \begin{Verbatim} Cryptol> :prove revRev Not a valid predicate type: {a, b} (fin a, Cmp b) => [a]b -> Bit \end{Verbatim} Cryptol is telling us that the property has a polymorphic type, and hence cannot be proven. We can easily prove instances of it, by either creating new properties with fixed type signatures\indSignature, or by monomorphising it via type annotations\indTypeAnnotation. Several examples are given below: \begin{Verbatim} Cryptol> :prove revRev : [10][8] -> Bit Q.E.D. Cryptol> :prove revRev : [100][32] -> Bit Q.E.D. Cryptol> :prove revRev : [0][4] -> Bit Q.E.D. \end{Verbatim} \end{Answer} \begin{Exercise}\label{ex:prove:2} Prove the property {\tt appAssoc} you wrote in Exercise~\ref{sec:writingproperties}-\ref{ex:thm:1}, at several different monomorphic instances. \end{Exercise} \begin{Exercise}\label{ex:prove:3} Prove the property {\tt revApp} you wrote in Exercise~\ref{sec:writingproperties}-\ref{ex:thm:2}, at several different monomorphic instances. \end{Exercise} \begin{Exercise}\label{ex:prove:4} Prove the property {\tt lshMul} you wrote in Exercise~\ref{sec:writingproperties}-\ref{ex:thm:3}, at several different monomorphic instances. \end{Exercise} \begin{Exercise}\label{ex:prove:5} Use the {\tt :prove} command to prove and demonstrate counterexamples for the property {\tt widthPoly} defined in Exercise~\ref{sec:writingproperties}-\ref{ex:polythm:2}, using appropriate monomorphic instances. \end{Exercise} \begin{Answer}\ansref{ex:prove:5} We have: \begin{Verbatim} Cryptol> :prove widthPoly : [15] -> Bit Q.E.D. Cryptol> :prove widthPoly : [531] -> Bit Q.E.D. Cryptol> :prove widthPoly : [8] -> Bit widthPoly:[8] -> Bit 0 = False Cryptol> :prove widthPoly : [32] -> Bit widthPoly:[32] -> Bit 0 = False \end{Verbatim} \end{Answer} %~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \subsection{Conditional proofs} \label{sec:condproof} It is often the case that we are interested in a property that only holds under certain conditions. For instance, in Exercise~\ref{sec:arithmetic}-\ref{ex:arith:5:1} we have explored the relationship between Cryptol's division, multiplication, and modulus operators, where we asserted the following property:\indMod\indDiv\indTimes $$ x = (x / y) \times y + (x\,\%\,y) $$ Obviously, this relationship holds only when $y \not= 0$. The idea behind a conditional Cryptol property\indThmCond is that we would like to capture these side-conditions formally in our specifications. We simply use an ordinary {\tt if-then-else} expression in Cryptol to write conditional properties (at least until we add boolean logic operators to Cryptol). If the condition is invalid, we simply return {\tt True}, indicating that we are not interested in that particular case. Depending on how natural it is to express the side-condition or its negation, you can use one of the following two patterns: % the following looks ODDLY laid out in the source but comes out OK when latex'ed \begin{Verbatim}[commandchars=\\\{\}] if {\em side-condition-holds} if {\em side-condition-fails} then{\em property-expression} then True else True else {\em property-expression} \end{Verbatim} \begin{Exercise}\label{ex:cond:1} Express the relationship between division, multiplication, and modulus using a conditional Cryptol property. 
Prove the property for various monomorphic instances.\indMod\indDiv\indTimes\indThmCond \end{Exercise} \begin{Answer}\ansref{ex:cond:1} \begin{code} property divModMul (x,y) = if y == 0 then True // precondition fails => True else x == (x / y) * y + x % y \end{code} We have: \begin{Verbatim} Cryptol> :prove divModMul : ([4], [4]) -> Bit Q.E.D. Cryptol> :prove divModMul : ([8], [8]) -> Bit Q.E.D. \end{Verbatim} \end{Answer} %% TODO: Pedagogy here without exceptions %% \nb{It is tempting to write a function:} %% \begin{code} %% implies (a, b) = if a then b else True %%\end{code} %%to simplify writing conditional properties. %% However, this approach is flawed as it fails to account correctly for %% exceptions. We shall revisit this topic in detail in %% Sections~\ref{sec:pitfall:evorder} and~\ref{sec:pitfall:thmexceptions}. \paragraph*{Recognizing messages} Our work on classic ciphers (Chapter~\ref{chapter:classic}) and the enigma (Chapter~\ref{chapter:enigma}) involved working with messages that contained the letters {\tt 'A'} .. {\tt 'Z'} only. When writing properties about these ciphers it will be handy to have a recognizer for such messages, as we explore in the next exercise. \begin{Exercise}\label{ex:cond:2} Write a function: \begin{code} validMessage : {n} (fin n) => String n -> Bit \end{code} that returns {\tt True} exactly when the input only consists of the letters {\tt 'A'} through {\tt 'Z'}. \lhint{Use the functions {\tt all} defined in Exercise~\ref{sec:zero}-\ref{ex:zero:1},\indAll and {\tt elem} defined in Exercise~\ref{sec:recandrec}-\ref{ex:recfun:4:1}.\indElem} \end{Exercise} \begin{Answer}\ansref{ex:cond:2} Using {\tt all}\indAll and {\tt elem}\indElem, it is easy to express {\tt validMessage}: \begin{code} validMessage = all (\c -> elem (c, ['A' .. 'Z'])) \end{code} Note the use of a \lamex to pass the function to {\tt all}. Of course, we could have defined a separate function for it in a {\tt where}-clause\indWhere, but the function is short enough to directly write it inline. \end{Answer} \todo[inline]{Should we revisit directly some past examples or solutions in the appendix and show what kinds of properties are appropriate to write and which ones can be verified?} \begin{Exercise}\label{ex:cond:3} Recall the pair of functions {\tt caesar} and {\tt dCaesar} from Section~\ref{sec:caesar}.\indCaesarscipher Write a property, named {\tt caesarCorrect}, stating that {\tt caesar} and {\tt dCaesar} are inverses of each other for all {\tt d} (shift amount) and {\tt msg} (message)'s. Is your property valid? What extra condition do you need to assert on {\tt msg} for your property to hold? Prove the property for all messages of length 10. \end{Exercise} \begin{Answer}\ansref{ex:cond:3} A naive attempt would be to write: \begin{code} property caesarCorrectBOGUS (d,msg) = dCaesar(d, caesar(d, msg)) == msg \end{code} However, this property is not correct for all {\tt msg}'s, since Caesar's cipher only works for messages containing the letters {\tt 'A'} \ldots {\tt 'Z'}, not arbitrary 8-bit values as the above property suggests. We can see this easily by providing a bad input: \begin{Verbatim} Cryptol> caesar (3, "1") invalid sequence index: 240 \end{Verbatim} (240 is the difference between the ASCII value of {\tt '1'}, 49, and the letter {\tt 'A'}, 65, interpreted as an 8-bit offset.) 
We should use the {\tt validMessage} function of the previous exercise to write a conditional property instead:
\begin{code}
property caesarCorrect (d,msg) =
   if validMessage msg
   then dCaesar(d, caesar(d, msg)) == msg
   else True
\end{code}
We have:
\begin{Verbatim}
  Cryptol> :prove caesarCorrect : ([8], String(10)) -> Bit
  Q.E.D.
\end{Verbatim}
\end{Answer}
\begin{Exercise}\label{ex:cond:4}
Write and prove a property for the {\tt modelEnigma} machine (Page~\pageref{def:modelEnigma}), relating the {\tt enigma} and {\tt dEnigma} functions from Section~\ref{enigma:encdec}.
\end{Exercise}
\begin{Answer}\ansref{ex:cond:4}
\begin{code}
property modelEnigmaCorrect pt =
   if validMessage pt
   then dEnigma (modelEnigma, enigma (modelEnigma, pt)) == pt
   else True
\end{code}
We have:
\begin{Verbatim}
  Cryptol> :prove modelEnigmaCorrect : String(10) -> Bit
  Q.E.D.
\end{Verbatim}
\end{Answer}
This may take a long time to prove, depending on the speed of your machine and the prover you choose.
%=====================================================================
\section{Automated random testing}
\label{sec:quickcheck}
\sectionWithAnswers{Automated random testing}{sec:quickcheck}
Cryptol's {\tt :prove} command\indCmdProve constructs rigorous formal proofs using push-button tools.\footnote{While some of the solvers that Cryptol uses are capable of \emph{emitting} proofs, such functionality is not exposed as a Cryptol feature.} The underlying technique used by Cryptol (SAT-\glosSAT and SMT-based\glosSMT equivalence checking) is complete\indProofCompleteness, i.e., it will always either prove the property or find a counterexample.\indCounterExample In the worst case, however, the proof process might take infeasible amounts of resources, potentially running out of memory or taking longer than the amount of time you are willing to wait. What is needed for daily development tasks is a mechanism to gain some confidence in the correctness of the properties without paying the price of formally proving them. This is the goal of Cryptol's {\tt :check}\indCmdCheck command, inspired by Haskell's QuickCheck library~\cite{quickcheck}. Instead of trying to formally prove your property, {\tt :check} tests it at random values to give you quick feedback. This approach is very suitable for rapid development. By using automated testing you get frequent and quick updates from Cryptol regarding the status of your properties, as you work through your code. If you introduce a bug, it is likely (although not guaranteed) that the {\tt :check} command will alert you right away. Once you are satisfied with your code, you can use the {\tt :prove} command to conduct the formal proofs, potentially leaving them running overnight. The syntax of the {\tt :check} command is precisely the same as the {\tt :prove} command. By default, it will run your property over 100 randomly generated test cases.
\begin{Exercise}\label{ex:quick:0}
Use the {\tt :check} command to test the property {\tt caesarCorrect} you have defined in Exercise~\ref{sec:condproof}-\ref{ex:cond:3}, for messages of length 10. Use the command {\tt :set tests=1000}\indQCCount to change the number of test cases to 1,000. Observe the test coverage statistics reported by Cryptol. How is the total number of cases computed?
\end{Exercise}
\begin{Answer}\ansref{ex:quick:0}
Here is the interaction with Cryptol (when you actually run this, you will see the test cases counting up as they are performed):\indQCCount
\begin{Verbatim}
  Cryptol> :check (caesarCorrect : ([8], String 10) -> Bit)
  Using random testing.
  passed 100 tests.
  Coverage: 0.00% (100 of 2^^88 values)
  Cryptol> :set tests=1000
  Cryptol> :check (caesarCorrect : ([8], String 10) -> Bit)
  Using random testing.
  passed 1000 tests.
  Coverage: 0.00% (1000 of 2^^88 values)
\end{Verbatim}
In each case, Cryptol tells us that it checked a minuscule portion of all possible test cases: a good reminder of what {\tt :check} is really doing. The number of test cases is: $2^{8+8\times10} = 2^{88}$. We have 8-bits for the {\tt d} value, and $10*8$ bits total for the 10 characters in {\tt msg}, giving us a total of 88 bits. Since the input is 88 bits wide, we have $2^{88}$ potential test cases. Note how the number of test cases increases exponentially with the size of the message.
\end{Answer}
\begin{Exercise}\label{ex:quick:1}
If the property is {\em small} in size, {\tt :check} might as well prove/disprove it. Try the following commands:
\begin{Verbatim}
  :check True
  :check False
  :check \x -> x==(x:[8])
\end{Verbatim}
\end{Exercise}
\begin{Answer}\ansref{ex:quick:1}
\begin{Verbatim}
  Cryptol> :check True
  Using exhaustive testing.
  passed 1 tests.
  QED
  Cryptol> :check False
  Using exhaustive testing.
  FAILED for the following inputs:
  Cryptol> :check \x -> x == (x:[8])
  Using exhaustive testing.
  passed 256 tests.
  QED
\end{Verbatim}
Note that when Cryptol is able to exhaust all possible inputs, it returns QED, since the property is effectively proven.
\end{Answer}
\begin{Exercise}\label{ex:quick:2}
Write a bogus property that will be very easy to disprove using {\tt :prove}, while {\tt :check} will have a hard time obtaining the counterexample. The moral of this exercise is that you should try {\tt :prove} early in your development and not get too comfortable with the results of {\tt :check}!\indCmdProve\indCmdCheck
\end{Exercise}
\begin{Answer}\ansref{ex:quick:2}
\begin{code}
property easyBug x = x != (76123:[64])
\end{code}
The {\tt :prove} command will find the counterexample almost instantaneously, while {\tt :check} will have a hard time!
\end{Answer}
\paragraph*{Bulk operations} If you use the {\tt :check}\indCmdCheck and {\tt :prove}\indCmdProve commands without any arguments, Cryptol will check and prove all the properties defined in your program. This is a simple means of exercising all your properties automatically.
\todo[inline]{Do we want to reintroduce the \texttt{-n} switch, per \ticket{276}?}
%% \paragraph*{Automatic invocation} You will notice that Cryptol will
%% automatically {\tt :check} the properties in your file when you
%% load the file from the command line, giving you a quick status
%% report. You can turn this behavior off by passing Cryptol the flag
%% {\tt -n}, like this: {\tt cryptol -n myfile.cry}. You can also
%% interrupt a running {\tt :prove}\indCmdProve or {\tt
%% :check}\indCmdCheck command by pressing {\tt
%% Ctrl-C}.\indCmdAutoCheck
%=====================================================================
\section{Checking satisfiability}
\label{sec:sat}
\sectionWithAnswers{Checking satisfiability}{sec:sat}
Closely related to proving properties is the notion of checking satisfiability.
In satisfiability checking,\indCmdSat we would like to find arguments to a bit-valued function such that it will evaluate to {\tt True}, i.e., it will be satisfied.\indSat One way to think about satisfiability checking is {\em intelligently} searching for a solution. Here is a simple example. Let us assume we would like to compute the modular square-root of 9 as an 8-bit value. The obvious solution is {\tt 3}, of course, but we are wondering if there are other solutions to the equation $x^2 \equiv 9 \imod{2^8}$. To get started, let us first define a function that will return {\tt True} if its argument is a square-root of 9:
\begin{code}
isSqrtOf9 : [8] -> Bit
isSqrtOf9 x = x*x == 9
\end{code}
Any square-root of 9 will make the function {\tt isSqrtOf9} return {\tt True}, i.e., it will {\em satisfy} it. Thus, we can use Cryptol's satisfiability checker to find those values of {\tt x} automatically:
\begin{Verbatim}
  Cryptol> :sat isSqrtOf9
  isSqrtOf9 3 = True
\end{Verbatim}
Not surprisingly, Cryptol told us that 3 is one such value. We can search for other solutions by explicitly disallowing 3:
\begin{Verbatim}
  Cryptol> :sat \x -> isSqrtOf9 x && ~(elem (x, [3]))
  \x -> isSqrtOf9 x && ~(elem (x, [3])) 131 = True
\end{Verbatim}
Note the use of the \lamex to\indLamExp indicate the new constraint. (Of course, we could have defined another function {\tt isSqrtOf9ButNot3} for the same effect, but the \lamex is really handy in this case.) We have used the function {\tt elem} you have defined in Exercise~\ref{sec:recandrec}-\ref{ex:recfun:4:1}\indElem to express the constraint that {\tt x} must not be 3. In response, Cryptol told us that {\tt 131} is another solution. Indeed $131 * 131 = 9\imod{2^8}$, as you can verify separately. We can search for more:
\begin{Verbatim}
  Cryptol> :sat \x -> isSqrtOf9 x && ~(elem (x, [3, 131]))
  \x -> isSqrtOf9 x && ~(elem (x, [3, 131])) 253 = True
\end{Verbatim}
Rather than manually adding solutions we have already seen, we can search for other solutions by asking the satisfiability checker for more solutions using the {\tt satNum} setting:
\begin{Verbatim}
  Cryptol> :set satNum = 4
  Cryptol> :sat isSqrtOf9
  isSqrtOf9 3 = True
  isSqrtOf9 131 = True
  isSqrtOf9 125 = True
  isSqrtOf9 253 = True
\end{Verbatim}
By default, {\tt satNum} is set to {\tt 1}, so we only see one solution. When we change it to {\tt 4}, the satisfiability checker will try to find {\em up to} 4 solutions. We can also set it to {\tt all}, which will try to find as many solutions as possible:
\begin{Verbatim}
  Cryptol> :set satNum = all
  Cryptol> :sat isSqrtOf9
  isSqrtOf9 3 = True
  isSqrtOf9 131 = True
  isSqrtOf9 125 = True
  isSqrtOf9 253 = True
\end{Verbatim}
So, we can rest assured that there are exactly four 8-bit square roots of 9; namely 3, 131, 125, and 253. (Note that Cryptol can return the satisfying solutions in any order depending on the backend-solver and other configurations. What is guaranteed is that you will get precisely the same set of solutions at the end.) The whole point of the satisfiability checker is to be able to quickly search for particular values that are solutions to potentially complicated bit-valued functions. In this sense, satisfiability checking can also be considered as an automated way to invert a certain class of functions, going back from results to arguments. Of course, this search is not done blindly, but rather using SAT\glosSAT and SMT\glosSMT solvers to quickly find the satisfying values.
Cryptol's {\tt :sat} command\indCmdSat hides the complexity, allowing the user to focus on the specification of the problem. \todo[inline]{Re-add chapter on verification, or two chapters on \texttt{:sat} and \texttt{:prove}.} % (We will see several larger applications of the satisfiability % checker in Chapter~\ref{chap:usingsat}.) \begin{Exercise}\label{ex:sat:0} Fermat's last theorem states that there are no integer solutions to the equation $a^n + b^n = c^n$ when $a, b, c > 0,$ and $n > 2$. We cannot code Fermat's theorem in Cryptol since we do not have arbitrary integers, but we can code the modular version of it where the exponentiation and addition is done modulo a fixed bit-size. Write a function {\tt modFermat} with the following signature: \begin{code} type Quad a = ([a], [a], [a], [a]) modFermat : {s} (fin s, s >= 2) => Quad s -> Bit \end{code} such that {\tt modFermat (a, b, c, n)} will return {\tt True} if the modular version of Fermat's equation is satisfied by the values of {\tt a}, {\tt b}, {\tt c}, and {\tt n}. Can you explain why you need the constraints {\tt fin s} and {\tt s >= 2}? \end{Exercise} \begin{Answer}\ansref{ex:sat:0} \begin{code} modFermat (a, b, c, n) = (a > 0) && (b > 0) && (c > 0) && (n > 2) && (a^^n + b^^n == c^^n) \end{code} The {\tt fin s} predicate comes from the fact that we are doing arithmetic and comparisons. The predicate {\tt s >= 2} comes from the fact that we are comparing {\tt n} to {\tt 2}, which needs at least $2$ bits to represent. \end{Answer} \begin{Exercise}\label{ex:sat:1} Use the {\tt :sat} command to see if there are any satisfying values for the modular version of Fermat's last theorem for various bit sizes. Surprised? What can you conclude from your observations? \end{Exercise} \begin{Answer}\ansref{ex:sat:1} We can try different instantiations as follows: \begin{Verbatim} Cryptol> :sat modFermat : Quad(2) -> Bit modFermat : Quad(2) -> Bit (1, 2, 1, 3) = True Cryptol> :sat modFermat : Quad(3) -> Bit modFermat : Quad(3) -> Bit (4, 4, 4, 4) = True Cryptol> :sat modFermat : Quad(4) -> Bit modFermat : Quad(4) -> Bit (4, 4, 4, 8) = True \end{Verbatim} The modular form of Fermat's last theorem does not hold for any of the instances up to and including 12-bits wide, when I stopped experimenting myself. It is unlikely that it will hold for any particular bit-size, although the above demonstration is not a proof. (We would need to consult a mathematician for the general result!) Also note that Cryptol takes longer and longer to find a satisfying instance as you increase the bit-size. \end{Answer} %===================================================================== %% \section{Safety checking} %% \label{sec:safetychecking} %% Safety checking is all about run-time exceptions: Division-by-zero %% or index-out-of-bounds being the most infamous ones. Unlike many %% other languages, Cryptol does not suffer from a certain class of %% safety violations, such as trying to put in the number 1000 in a %% buffer that is only 8-bits wide, or trying to multiply a string %% with an integer; but certain run-time exceptions are still %% possible. The goal of {\em safety checking} is to ensure that these %% exceptions cannot happen at run-time.\indSafe\indCmdSafe %% Here is a simple example to demonstrate: %% \begin{Verbatim} %% Cryptol> :safe \(x, y) -> x/y:[8] %% *** 1 safety condition to be checked. 
%% *** Checking for violations of: ``"command line", line 1, col 13: divide by zero'' %% *** Violation detected: %% (\(x, y) -> x/y:[8]) (0, 0) %% = "command line", line 1, col 14: divide by zero %% *** 1 problem found. %%\end{Verbatim} %%Cryptol is telling us that the function {\tt $\backslash$(x, y) -> %% x/y} is {\em not} safe. When given the argument {\tt (0, 0)} it %% will cause a division by zero exception. If we properly guard for %% the condition, then the safety check will indeed pass: %%\begin{Verbatim} %% Cryptol> :safe \(x, y) -> if y == 0 then 0 else x/y:[8] %% *** 1 safety condition to be checked. %% *** Checking for violations of: ``"command line", line 1, col 35: divide by zero'' %% *** Verified safe. %% *** All safety checks pass, safe to execute %%\end{Verbatim} %%By using {\tt :safe}\indCmdSafe on our top-level functions we can %% ensure that our program is guaranteed not to have any run-time %% exceptions. Cryptol's safety-checker will detect the following %% run-time exceptions: %%\begin{itemize} %% \item Division ({\tt /}) and modulus ({\tt \%}) by 0, both modular %% and polynomial versions (See %% Section~\ref{sec:polynomials})\indDiv\indMod\indDivPoly\indModPoly %% \item Index-out-of-bounds errors for {\tt @}, {\tt @@}, {\tt !} and %% {\tt !!},\indIndex\indIndexs\indRIndex\indRIndexs %% \item Calling {\tt lg2} on {\tt 0},\indLg %% \item Data dependence on {\tt error} and the {\tt undefined} %% values,\indError\indUndefined %% \item Failure cases for Cryptol's {\tt ASSERT} %% expression.\indAssert %%\end{itemize} %%\begin{Exercise}\label{ex:safe:0} %%Consider the function: %%\begin{code} %% lkup : ([10][8], [4]) -> [8] %% lkup (xs, i) = xs @ i %%\end{code} %%Is it safe? For what values of {\tt i} does it throw an exception? %%\end{Exercise} %%\begin{Answer}\ansref{ex:safe:0} %% {\tt lkup} will throw an exception whenever {\tt i} is at least %% {\tt 10}. We can use Cryptol to detect this safety violation: %%\begin{Verbatim} %% Cryptol> :safe lkup %% *** 1 safety condition to be checked. %% *** Checking for violations of: ``..: index of symbolic-value is out of bounds %% (valid range is 0 thru 9).'' %% *** Violation detected: %% lkup ([255 255 255 255 255 255 255 255 255 255], 13) %% = .. index of 13 is out of bounds %% (valid range is 0 thru 9). %% *** 1 problem found. %%\end{Verbatim} %%In this case Cryptol identified that there will be an exception when %% {\tt i} is 13. %%\end{Answer} %%\begin{Exercise}\label{ex:safe:1} %%Here is an attempt to make {\tt lkup} safe: %%\begin{code} %% lkup1 : ([10][8], [4]) -> [8] %% lkup1 (xs, i) = if i > 10 then 0 else xs @ i %%\end{code} %%Is the fix successful? Use the {\tt :safe} command to check your %% answer. %%\end{Exercise} %%\begin{Answer}\ansref{ex:safe:1} %%No. {\tt lkup1} suffers from the infamous off-by-1 error! %%\begin{Verbatim} %% Cryptol> :safe lkup1 %% *** 1 safety condition to be checked. %% *** Checking for violations of: ``..: index of symbolic-value is out of bounds %% (valid range is 0 thru 9).'' %% *** Violation detected: %% lkup1 ([255 255 255 255 255 255 255 255 255 255], 10) %% = ..: index of 10 is out of bounds %% (valid range is 0 thru 9). %% *** 1 problem found. %%\end{Verbatim} %%This time Cryptol will pin-point to the only potential error value, %% where {\tt i} is 10. %%\end{Answer} %% %%\begin{Exercise}\label{ex:safe:2} %% Write a version of {\tt lkup} that fixes the safety issue. Use the %% {\tt :safe} command to verify that your fix is good. 
%%\end{Exercise} %%\begin{Answer}\ansref{ex:safe:2} %%\begin{code} %% lkup2 : ([10][8], [4]) -> [8] %% lkup2 (xs, i) = if i > 9 then 0 else xs @ i %%\end{code} %%We have: %%\begin{Verbatim} %% Cryptol> :safe lkup2 %% *** 1 safety condition to be checked. %% *** Checking for violations of: ``..: index of symbolic-value is out of bounds %% (valid range is 0 thru 9).'' %% *** Verified safe. %% *** All safety checks pass, safe to execute. %%\end{Verbatim} %%\end{Answer} %% %%\paragraph*{Over-conservative safety checking} %% Cryptol's safety checker is very conservative: If it says that a %% function is {\em safe to execute}, you are guaranteed that the %% execution can not raise any run-time exception. However, the %% converse is not always true: In certain cases, Cryptol can signal a %% potential safety violation which will actually not cause an error %% when {\em run in the normal interpreter}. %% %% Here is an example to illustrate the idea. (The example is somewhat %% contrived, but similar patterns of coding does arise in Cryptol %% programming.) We will use Cryptol's {\tt undefined} %% value\indUndefined, which would throw an exception if executed: %%\begin{Verbatim} %% Cryptol> (1:[8]) + undefined %% Entered Cryptol 'undefined' value %%\end{Verbatim} %% %%Cryptol's {\tt undefined}\indUndefined is useful when representing %% values that should {\em not} be needed during execution, as %% illustrated below: %%\begin{code} %% choose : [8] -> [8] %% choose c = ([undefined] # [1 ..]) @ index %% where index = if c == 0 then 1 else c %%\end{code} %%Notice that {\tt undefined} will not be referenced: The {\tt index} %% can never be {\tt 0}. (Why?) If we ask {\tt :safe}, though, it will %% tell us the following: %%\begin{Verbatim} %% Cryptol> :safe choose %% *** 1 safety condition to be checked. %% *** Checking for violations of: ``Entered Cryptol 'undefined' value'' %% *** Violation detected: %% choose 0 %% = 1 %% *** ALERT: This safety issue is only relevant for certain target platforms %% *** and hence does not cause an exception in the above run. %% *** C and hardware translations might be subject to stricter semantics. %% *** 1 problem found. %%\end{Verbatim} %%The {\tt :safe} command\indCmdSafe told us that there is a safety %% violation when {\tt c} is {\tt 0}, then it also showed us that if %% we run {\tt choose} with {\tt 0} we get the value {\tt 1}, not an %% exception. At first, this sounds quite counterintuitive. However, %% the {\tt ALERT} given by Cryptol sheds some light into the %% potential problem. When Cryptol is used in the normal interpreter %% mode, this program will indeed not produce any exceptions. However, %% Cryptol programs can also be compiled to other targets, such as C %% or Java, or other hardware platforms. What Cryptol is telling us is %% that these translations are {\em not} guaranteed to be exception %% free, and hence it is worth making sure the generated code will %% behave properly. %% %% While it is important to understand what Cryptol is telling us, the %% user can safely ignore these alerts most of the time, especially %% when one is focusing only at the Cryptol level. (We will revisit %% this topic in Section~\ref{sec:pitfall:thmexceptions}.) %% %%\begin{Exercise}\label{ex:safe:3} %% Rewrite {\tt choose} so that {\tt :safe} will not issue any %% alerts. 
%%\end{Exercise} %%\begin{Answer}\ansref{ex:safe:3} %% One way would be to replace {\tt undefined}\indUndefined with {\tt %% zero}.\indZero Since the result will never be needed, we can %% safely replace it with {\tt 0} without changing the semantics. %%\end{Answer} %%% Local Variables: %%% mode: latex %%% TeX-master: "../main/Cryptol" %%% End:
{ "alphanum_fraction": 0.707318677, "avg_line_length": 40.0553116769, "ext": "tex", "hexsha": "d9ff514dbbe74ea6d392e7507bb05b1808d0dab8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "96bfb0420ad1ff64195b5190db98e3d21166312d", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "anawrocka/CapstoneCryptol", "max_forks_repo_path": "docs/ProgrammingCryptol/highAssurance/HighAssurance.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "96bfb0420ad1ff64195b5190db98e3d21166312d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "anawrocka/CapstoneCryptol", "max_issues_repo_path": "docs/ProgrammingCryptol/highAssurance/HighAssurance.tex", "max_line_length": 120, "max_stars_count": null, "max_stars_repo_head_hexsha": "96bfb0420ad1ff64195b5190db98e3d21166312d", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "anawrocka/CapstoneCryptol", "max_stars_repo_path": "docs/ProgrammingCryptol/highAssurance/HighAssurance.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12991, "size": 45623 }
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{geometry}
\usepackage{enumerate}
\usepackage{natbib}
\usepackage{float}%stabilize figure positions
\usepackage{graphicx}%for including figures
\usepackage[english]{babel}
\usepackage{a4wide}
\usepackage{indentfirst}%indent first paragraph
\usepackage{enumerate}%numbered lists
\usepackage{multirow}%merge table rows
\title{\large UM-SJTU JOINT INSTITUTE\\DISCRETE MATHEMATICS\\(VE203)\\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ \\\ ASSIGNMENT 8\\\ \\\ \\\ \\\ \\\ \\\ }
\author{Name: Pan Chongdan\\ID: 516370910121}
\date{Date: \today}
\begin{document}
\maketitle
\newpage
\section{Q1}
\begin{enumerate}[(i)]
\item \textbf{Input}: $a_1,\cdots,a_n,n$ unsorted elements
\par \textbf{Output}: All the $a_i,1\leq i\leq n$ in an increasing order.
\par \textbf{for} $p=1$ to $n-1$
\par $x=a_{p+1};$
\par \setlength\parindent{2em}\textbf{If} $a_p>a_{p+1}$ \textbf{then}
\par \setlength\parindent{4em}$i=1;j=p;$
\par \textbf{while} $i<j$ \textbf{do}
\par \setlength\parindent{6em}$m\leftarrow\lfloor(i+j)/2\rfloor;$
\par \textbf{if} $x>a_m$ \textbf{then} $i\leftarrow(m+1);$
\par \textbf{else} $j\leftarrow m;$
\par \setlength\parindent{4em}\textbf{end while}
\par \textbf{for} $k=p$ downto $i$
\par \setlength\parindent{6em}$a_{k+1}=a_k;$
\par \setlength\parindent{4em}\textbf{end for}
\par $a_i=x;$
\par \setlength\parindent{2em}\textbf{end if}
\par \setlength\parindent{0em}\textbf{end for}
\par \textbf{return} $(a_1\cdots a_{n});$
\item For the Insertion Sort Algorithm, the number of comparisons is $\sum_{i=1}^{7}i=28$
\par For the Binary Insertion Sort Algorithm, the number of comparisons is $1+1+1+2+2+2+3=12$
\item $f(n)=\sum_{i=1}^{n-1}i=\frac{n^2-n}{2}$, which is of order $n^2$
\item $f(n)$ is $O(n\log_2n)$, which is faster than Insertion Sort.
\end{enumerate}
\section{Q2}
\begin{enumerate}[(i)]
\item Assume $n=b^k,k=\log_bn$ then
$$f(n)=b^d\cdot f(b^{k-1})+cb^{kd}$$
$$f(n)=b^d\cdot[b^d\cdot f(b^{k-2})+cb^{kd-d}]+cb^{kd}=b^{2d}f(b^{k-2})+2cb^{kd}$$
$$f(n)=b^{2d}f(b^{k-2})+2cb^{kd}=b^{3d}f(b^{k-3})+3cb^{kd}$$
$$\cdots$$
$$f(n)=b^{kd}f(1)+kcn^d=f(1)n^d+cn^d\log_bn$$
\item Assume $n=b^k,$ where $k=A+B,A\in\mathbb{Z}$ and $0<B<1$, then similarly to (i)
$$f(n)=f(\frac{n}{b^A})b^{Ad}+Acn^d$$
$$\lim_{n\to\infty}|\frac{f(n)}{n^d\log_bn}|=c$$
$$\therefore f\quad\mathrm{is}\quad O(n^d\log_b(n))$$
\item Assume $n=b^k,k=\log_bn$ then
$$f(n)=a\cdot f(b^{k-1})+cb^{kd}$$
$$f(n)=a\cdot[a\cdot f(b^{k-2})+cb^{kd-d}]+cb^{kd}=a^2f(b^{k-2})+(1+\frac{a}{b^d})cb^{kd}$$
$$\cdots$$
$$f(n)=a^kf(1)+[1+\frac{a}{b^d}+\cdots+(\frac{a}{b^d})^{k-1}]cb^{kd}=a^kf(1)+\frac{a^k-b^{kd}}{ab^{kd-d}-b^{kd}}\cdot cn^d$$
$$f(n)=a^kf(1)+c\cdot\frac{b^d(n^d-a^k)}{b^d-a}=a^k\cdot(f(1)+\frac{cb^d}{a-b^d})+c\cdot\frac{b^dn^d}{b^d-a}$$
$$f(n)=c_1n^d+c_2a^k$$
$$a^k=a^{\log_bn}=n^{\log_ba}$$
$$\therefore f(n)=c_1n^d+c_2n^{\log_ba}$$
\item
$$\lim_{n\to\infty}|\frac{f(n)}{n^d}|=|c_1+c_2n^{\log_ba-d}|$$
$$\because a<b^d\Rightarrow\log_ba-d<0\Rightarrow|c_1+c_2n^{\log_ba-d}|<|c_1+c_2|$$
$$\therefore f\quad\mathrm{ is }\quad O(n^d)$$
\item Similarly to last question,
$$\lim_{n\to\infty}|\frac{f(n)}{n^{\log_ba}}|=|c_1n^{d-\log_ba}+c_2|$$
$$\because a>b^d\Rightarrow\log_ba-d>0\Rightarrow|c_1n^{d-\log_ba}+c_2|<|c_1+c_2|$$
$$\therefore f\quad\mathrm{ is }\quad O(n^{\log_ba})$$
\end{enumerate}
\section{Q3}
Since there is only one real root, we have
$$a_n=2\alpha a_{n-1}-\alpha^2a_{n-2}$$
$$2\alpha a_{n-1}-\alpha^2a_{n-2}=2\alpha[q_1\alpha^{n-1}+q_2(n-1)\alpha^{n-1}]-\alpha^2[q_1\alpha^{n-2}+q_2(n-2)\alpha^{n-2}]=q_1\alpha^n+q_2n\alpha^n=a_n$$
\section{Q4}
$$\lambda^3-2\lambda^2-\lambda+2=0$$
$$\lambda_1=2,\lambda_2=1,\lambda_3=-1$$
$$\therefore
a_n=q_12^n+q_2+q_3(-1)^n$$ $$3=q_1+q_2+q_3$$ $$6=2q_1+q_2-q_3$$ $$0=4q_1+q_2+q_3$$ $$q_1=-1,q_2=6,q_3=-2$$ $$a_n=-2^n+6-2(-1)^n$$ \section{Q5} $$\lambda^2-5\lambda+6=0$$ $$\lambda_1=2,\lambda_2=3$$ $$p_n=bn2^n+cn^2+dn+e$$ $$p_n=5p_{n-1}-6p_{n-2}$$ $$b=-2,c=1,d=\frac{15}{2},e=\frac{67}{4}$$ $$a_n=q_12^n+q_23^n-2n2^n+n^2+\frac{15}{2}n+\frac{67}{4}$$ $$q_1=-33,q_2=\frac{65}{4}$$ $$a_n=-33\cdot2^n+\frac{65}{4}\cdot3^n-2n2^n+n^2+\frac{15}{2}n+\frac{67}{4}$$ \section{Q6} $$\lambda^3-7\lambda^2+16\lambda-12=0$$ $$\lambda_1=3,\lambda_2=\lambda_3=2$$ $$p_n=kn4^n+b4^n$$ $$kn4^n+b4^n=7k(n-1)4^{n-1}-16k(n-2)4^{n-2}+12k(n-3)4^{n-3}+n4^n+3b4^n$$ $$k=16,b=-80$$ $$a_n=q_13^n+q_22^n+q_3n2^n+16n4^n-80\cdot4^n$$ $$a_n=49\cdot3^n+28\cdot2^n+27.5n2^n+16n4^n+\frac{5}{2}4^n$$ \section{Q7} $$a_{n+1}=a_n+(n+1)^4$$ $$a_n=an^5+bn^4+cn^3+dn^2+en+f$$ $$a(n+1)^5+(b-1)(n+1)^4+c(n+1)^3+d(n+1)^2+e(n+1)+f=an^5+bn^4+cn^3+dn^2+en+f$$ $$a=\frac{1}{5},b=\frac{1}{2},c=\frac{1}{3},d=0,e=-\frac{1}{30},f=0$$ $$a_n=\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}$$ \section{Q8} $$a_n-b_n=2a_{n-1}\Rightarrow b_n=a_n-2a_{n-1}\Rightarrow a_{n}=5a_{n-1}-4a_{n-2}$$ $$\lambda^2-5\lambda+4=0$$ $$\lambda_1=4,\lambda_2=1$$ $$a_n=q_14^n+q_2\Rightarrow a_n=2\cdot4^{n}-1$$ $$\therefore b_n=4^n+1$$ \section{Q9} $$\lambda^2-2\lambda+2=0$$ $$\lambda_1=1+i,\lambda_2=1-i$$ $$p_n=k3^n\Rightarrow k=\frac{2}{3}k-\frac{2}{9}k+1\Rightarrow k=\frac{9}{5}\Rightarrow p_n=\frac{9}{5}\cdot3^n$$ $$a_n=(\sqrt{2})^n(q_1\cos\frac{n\pi}{4}+q_2\sin\frac{n\pi}{4})+\frac{9}{5}\cdot 3^n\Rightarrow q_1=-\frac{4}{5},q_2=-\frac{13}{5}$$ $$a_n=(\sqrt{2})^n(-\frac{4}{5}\cos\frac{n\pi}{4}-\frac{13}{5}\sin\frac{n\pi}{4})+\frac{9}{5}\cdot 3^n$$ \end{document}
{ "alphanum_fraction": 0.6110371075, "avg_line_length": 41.3779527559, "ext": "tex", "hexsha": "8261329df7b5ed52ed38cf3067be3462d47ca6e9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "PANDApcd/Algorithm", "max_forks_repo_path": "VE203DiscreteMaths/Assignment/Assignment 8/8.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "PANDApcd/Algorithm", "max_issues_repo_path": "VE203DiscreteMaths/Assignment/Assignment 8/8.tex", "max_line_length": 149, "max_stars_count": null, "max_stars_repo_head_hexsha": "3017c3abd6f3df227addaa924ff5b1a6d28d9b7c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "PANDApcd/Algorithm", "max_stars_repo_path": "VE203DiscreteMaths/Assignment/Assignment 8/8.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2743, "size": 5255 }
\documentclass[9pt,twocolumn,twoside]{../../styles/osajnl}
\usepackage{fancyvrb}
\journal{i524}
\title{Analysis of Pentaho}
\author[1,*]{Bhavesh Reddy Merugureddy}
\affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.}
\affil[*]{Corresponding authors: [email protected]}
\dates{Paper 1, \today}
\ociscodes{Pentaho, Data Integration, Big Data, Community, ETL, MapReduce, SQL, Hadoop, OLAP}
% replace this with your url in github/gitlab
\doi{\url{https://github.com/cloudmesh/sp17-i524/blob/master/paper1/S17-IR-2018/report.pdf}}
\begin{abstract}
Pentaho is a business analytics and data integration tool that provides a qualified open source-based platform to assist a variety of big data deployments. It enables different organizations to utilize their data, which helps them in delivering their services efficiently with minimum risk, and it can also be used for embedded analytics.
\newline
\end{abstract}
\setboolean{displaycopyright}{true}
\begin{document}
\maketitle
\section{Introduction}
Pentaho is a business intelligence suite that provides data mining, reporting, dashboarding and data integration capabilities. Organizations generally seek to obtain meaningful relationships and useful information from the data they hold, but data inconsistency and the handling of large amounts of data are factors that obstruct them from doing so. Pentaho addresses these obstacles \cite{pentaho_wikipedia}. The platform includes a wide range of tools that analyze, explore, visualize and predict data. It simplifies data blending, which is the process of combining data from multiple sources into a functioning dataset. Being open source and extensible, Pentaho provides big data tools to extract, prepare and blend any data.
\section{Pentaho community}
Pentaho provides two different editions, the Community edition and the Enterprise edition. As the name suggests, the Enterprise edition includes more packages and additional support. The Community edition enables developers or users to create complex solutions for the problems pertaining to their business \cite{pentaho-community}. The Pentaho Community is a group of users, and it helps newcomers become a part of that group and benefit from the open source contributions. The Community includes all kinds of users, such as developers, testers and managers. Generally, the Community edition platform enables developers to sketch their design and develop a rough version of their product, after which they can upgrade to the Enterprise edition for final production. This shows the flexibility that the Community edition provides to innovate, and to find and experiment with the solution that the developers like.
\section{Architecture}
The Pentaho architecture can be considered as a set of four components: the presentation layer, the business intelligence platform, data and application integration, and third party applications. Data can be provided to the presentation layer by reporting, analysis or process management. This data can then be accessed through a web service, a portal or a browser \cite{pentaho-architecture}. Security and repository issues are dealt with by the business intelligence platform. Data integration and third party applications are, respectively, the integration layer and applications with databases from various sources. The architecture includes a data layer, a server layer and a client layer. The data layer allows an application to connect to a data source. The server layer serves as a middle layer, and several applications run on the server.
Dashboards are provided to the end users by deploying them on the server along with the required reports. A user console is also provided for security and configuration purposes. The client layer comes in two forms, thin client and thick client. A thin client generally runs on a server; the analyzer and the dashboard editor can be considered as examples. The report designer and data integration are thick clients, which act as standalone applications.

\section{Plugins and applications}

Pentaho provides an interactive console to its users. With a few clicks of the mouse, users can interact with new data models and data. The platform hides the database connections and the underlying application server and provides access to various data sources \cite{pentaho-enterprise}. It provides metadata management capabilities and a dashboard that allows administrators to set security levels, monitor servers and set user access. Pentaho provides many server plugins and desktop applications.

\subsection{Server applications}

The Business Intelligence platform is a basic Pentaho service that provides reports, displays dashboards, reports business rules and performs OLAP analysis. It generally runs in an Apache Java application server and can be embedded in any other Java application server \cite{pentaho_wikipedia}. The Pentaho analysis service is another Pentaho server application, written in Java, which primarily focuses on online analytical processing. It aggregates data into a memory cache by performing read operations from data sources such as SQL databases. It comes with the Pentaho platform in both editions. These are some of the server applications provided by Pentaho.

\subsection{Desktop applications}

Pentaho data mining is a desktop application that searches for patterns in data by performing knowledge analysis. Common data mining techniques such as classification, clustering, regression and visualization are employed by this application, along with some machine learning algorithms. This helps users in predicting future trends. The Pentaho metadata editor is an application that acts as an abstraction layer over the underlying data sources and helps users create business models which can be used by other applications to generate reports for analytics. There are many more useful desktop applications, such as the Pentaho report designer, design studio and aggregate designer.

\subsection{Server plugins}

Pentaho provides certain core services in the form of server plugins. Some of the important server plugins are community data access and the community data browser. Community data access is a Pentaho server plugin that provides a common layer on the business analytics server for easy data access. It runs on the server by providing a REST interface and returns the results in various forms such as XML, CSV or JSON. The community data browser is a plugin that helps R in performing analytics on the data. It supplies queries to R by using an online analytical processing browser.

\section{Data Integration}

Extract, transform and load (ETL) are the basic operations that act as a tool for transforming data from one database and placing it in another database. These processes can be carried out in Pentaho with the help of a component called Pentaho Data Integration, which is also referred to as Kettle \cite{pentaho-kettle}.
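To make the ETL terminology concrete, the following is a minimal, generic sketch of an extract--transform--load flow. It is written in plain Python and does not use the Pentaho/Kettle API; the file names, column names and cleansing rule are hypothetical and chosen only for illustration.

\begin{verbatim}
import csv

def extract(path):
    # Read the source rows (here: a CSV file) into memory.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Example cleansing rule: drop rows without an id, tidy up names.
    return [{**r, "name": r["name"].strip().title()}
            for r in rows if r.get("id")]

def load(rows, path):
    # Write the cleaned rows to the target (here: another CSV file).
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

load(transform(extract("customers_raw.csv")), "customers_clean.csv")
\end{verbatim}

In Pentaho Data Integration the same three roles are played by the input, transformation and output steps described below.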
The most useful functions of Pentaho Data Integration include massive loading of data into databases, data cleansing, migrating data between applications and integrating several applications. It is metadata oriented and can be used as a standalone application. Various input and output formats, such as datasheets and text files, are supported by Pentaho Data Integration. The transformation process consists of three steps: an input step, a transformation step and an output step. In the input step, data is imported \cite{pentaho-dataIntegration}. The data is then processed within Pentaho Data Integration, and the transformed data is given out in the output step. All these steps are carried out in parallel. The throughput of the transformation process is limited by the speed of its slowest step, which is often referred to as the bottleneck. To improve the performance of the transformation process, two steps are run in a loop: identification of the bottleneck and continued improvement of that step until it is no longer a bottleneck. Bottlenecks are eliminated by data conversion logic and character code-page conversion. For encoding a set of characters, a character set is used along with a set of control characters. A code page is a table of values that describes this character set.

Pentaho Data Integration has a set of components that contribute to its functionality: `Spoon', `Kitchen', `Pan' and `Carte' \cite{pentaho-dataTransformation}. Spoon can be considered a desktop application that creates simple and even complex extract, transform and load (ETL) jobs without requiring users to write or read code. It is used for building transformations and jobs with the help of an editor, and it is the component used in most cases, such as editing, debugging or running a transformation or a job. Once the transformations are created in Spoon, they can be executed with the help of a standalone command-line process called Pan. It is an engine that reads data, manipulates it and loads it into various data sources. Kitchen is another standalone command-line process, used for executing jobs; it schedules different jobs to run at regular intervals. Carte provides remote execution capabilities and a medium for setting up a remote ETL server.

\section{Big Data Use cases}

Cyber security analysis helps end users such as data scientists and security analysts in quickly detecting threats. Cyber security analytics allows the users to make the most of staff resources via automation \cite{pentaho-usecase}. It enables data scientists to perform predictive analytics with the help of machine learning tools. It also provides automation of blending and reporting on a variety of data. The Pentaho platform can be utilized for data processing, data ingestion and delivery of threat calls with minimal cost and complexity.

Pentaho optimizes the data warehouse and speeds up the development and deployment processes. It employs a simplified process for offloading to Hadoop; the offloaded data is usually less frequently used data. Hand coding of MapReduce jobs and SQL can be avoided by using visual integration tools. Pentaho provides access to data sources ranging from relational to operational to NoSQL technologies. Pentaho MapReduce helps in achieving high performance in a cluster environment and provides graphical and intuitive big data integration. Another use case identified by Pentaho is the streamlined data refinery.
Pentaho data integration processes and refines different data sets by using Hadoop as its data processing platform. It provides modelled, delivered and published data sets to the users for visual analytics just by a mouse click. It can be seen as an integration process that blends huge volumes of highly diversified data. It also supplies tools for in-cluster simplified data processing and is regarded as a highly practical approach. Pentaho’s big data support extends the 360-degree view to internal and external customer related data. Customer service teams are provided with time-sensitive and blended streams of data. The presence of an adaptive big data layer relieves several organizations from evolving technologies. Customers are given access to customizable and interactive dashboards. Data scientists are provided with predictive analytics and data mining tools. \section{Comparison} Pentaho products compete with that of popular IT corporations like SAP, IBM and Oracle. Pentaho provides open source solutions and is considered to be much cheaper than the proprietary equivalents. Jaspersoft is an established open source rival of Pentaho. Though both Pentaho and Jaspersoft offer similar features with similar costs, Pentaho has got wider online presence and more followers in social media \cite{pentaho-comparison}. \section{Licensing} Pentaho Community edition is a free open source product licensed under the General Public License version 2.0 and Mozilla Public License 1.1. The Enterprise edition is available to the users on a commercial license. It needs to be purchased under a subscription model which includes services and support \cite{pentaho_wikipedia}. \section{Conclusion} Pentaho is an open source based platform for diverse big data deployments. It enables analytics in any environment by delivering data with availability, usability, integrity and security. It provides unified data integration and analytics components which are embeddable. The server plugins and desktop applications provided by Pentaho play a major role in enabling data analytics. Pentaho Data Integration is the Pentaho component responsible for data transformation and loading the data. It helps organizations in harnessing the value from their data in order to make their operations efficient and consistent. \bibliography{references} \newpage \appendix \end{document}
{ "alphanum_fraction": 0.8219285212, "avg_line_length": 47.7126865672, "ext": "tex", "hexsha": "5c572f334bf0c5089ce7eeac5b75ee67b1e54478", "lang": "TeX", "max_forks_count": 294, "max_forks_repo_forks_event_max_datetime": "2018-07-13T01:32:24.000Z", "max_forks_repo_forks_event_min_datetime": "2017-01-09T13:18:39.000Z", "max_forks_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "cloudmesh/sp17-i524", "max_forks_repo_path": "paper1/S17-IR-2018/report.tex", "max_issues_count": 98, "max_issues_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412", "max_issues_repo_issues_event_max_datetime": "2017-10-27T11:30:50.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-19T04:24:02.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "cloudmesh/sp17-i524", "max_issues_repo_path": "paper1/S17-IR-2018/report.tex", "max_line_length": 93, "max_stars_count": 2, "max_stars_repo_head_hexsha": "42dd11b914c03c741dad8a8505c3e091dc6ec412", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "suunni/sp17-i524", "max_stars_repo_path": "paper1/S17-IR-2018/report.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-14T19:13:18.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-30T09:54:25.000Z", "num_tokens": 2636, "size": 12787 }
\documentclass[12pt]{article} \usepackage{sigsam, bm, bbm, amsmath} \usepackage{amssymb,stmaryrd} \usepackage{color} \usepackage{hyperref} \usepackage[american]{babel} \usepackage[utf8]{inputenc} \usepackage[OT1]{fontenc} \usepackage{fancyvrb} \usepackage{tikz} % Tikz style \tikzset{arrow/.style={->, >=latex}} \usetikzlibrary{positioning} \usetikzlibrary{calc} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{matrix} \usetikzlibrary{intersections} \usetikzlibrary{backgrounds} \usetikzlibrary{fit} \usetikzlibrary{arrows} % Math Operators \DeclareMathOperator{\Card}{Card} \DeclareMathOperator{\Gal}{Gal} \DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\Img}{Im} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Minpoly}{Minpoly} \DeclareMathOperator{\Mod}{mod} \DeclareMathOperator{\Ord}{Ord} \DeclareMathOperator{\Vect}{Vect} % Shortcuts \newcommand{\dE}{\partial(E)} \newcommand{\dF}{\partial(F)} \newcommand{\dG}{\partial(G)} \newcommand{\eg}{\emph{e.g. }} \newcommand{\emb}{\hookrightarrow} \newcommand{\embed}[2]{\phi_{#1\hookrightarrow#2}} \newcommand{\ie}{\emph{i.e. }} \newcommand{\FF}{\mathbb{F}} \issue{TBA} \articlehead{TBA} \titlehead{Lattices of compatibly embedded finite fields in Nemo/Flint} \authorhead{Luca De Feo, Hugues Randriambololona and Édouard Rousseau} \setcounter{page}{1} \title{Lattices of compatibly embedded finite fields in Nemo/Flint} \author{Luca De Feo\footnote{ Université de Versailles and Inria, Paris Saclay, \url{[email protected]} }, Hugues Randriambololona\footnote{ Télécom ParisTech, \url{[email protected]} }, Édouard Rousseau\footnote{ Télécom ParisTech and Université de Versailles, \url{[email protected]} } } \begin{document} \maketitle \begin{abstract} We present a software that allows the user to compute embeddings between arbitrary finite fields. It is written using Julia and C, as part of Nemo and Flint\cite{Nemo, Flint}. The software ensures compatibility between the embeddings, so that the user can work with multiple fields and have coherent results independently of where the computations are made and of where the results are expressed. \end{abstract} Finite fields are widely used in mathematics and their applications. They are often the building block of more complicated algebraic structure used in cryptology or coding theory. As a consequence, several computer algebra systems or libraries have been written in order to work in arbitrary finite fields. Among them, we find Magma~\cite{Magma}, Sage~\cite{Sagemath}, Flint~\cite{Flint}, NTL~\cite{NTL} or PARI~\cite{Pari}. Many problems require not only to work in a given finite field $K$, but also in finite extensions of $K$, as represented in Figure~\ref{fig:alg-closure}, that are again finite fields, or in the algebraic closure of $K$. In particular, it is desirable to be able to freely \emph{move} from a field to a subfield or an extension. Given two finite fields $E$ and $F$ with cardinalities $\#E=p^{m}$ and $\#F=p^{n}$, we know that $E$ can be embedded in $F$ if and only if $m\,|\,n$. In other words, $E$ is in that case isomorphic to a subfield $E'\subset F$ of $F$ with cardinality $\#E'=p^{m}$. There are $m=[E:\mathbb{F}_p]=\#\Gal(E/\mathbb{F}_p)$ distinct embeddings from $E$ to $F$ (the degree of $E$ over $\mathbb{F}_p$ will also be denoted by $\partial(E)$). 
Indeed, the Galois group of the extension $E$ over $\mathbb{F}_p$ acts on the embeddings and, given two different embeddings $\embed{E}{F}$ and $\embed{E}{F}'$ and an element $x\in E$, the images $\embed{E}{F}(x)$ and $\embed{E}{F}'(x)$ must be conjugate. There is no canonical embedding from $E$ to $F$. Furthermore, the proof of the fact that $E$ can be embedded in $F$ if and only if $\dE\,|\,\dF$ does not give an efficient algorithm. Finding such an algorithm is an interesting problem, and there exists a variety of solutions, see for example the survey paper~\cite{BDDFS17}. \begin{figure} \begin{minipage}[b]{.46\linewidth} \centering \tikzset{ dotstyle/.style={circle, inner sep = 1.2pt, outer sep = 4pt, fill = gray}, edgetower/.style={thick}, edgecomp/.style={thick, lightgray} } \begin{tikzpicture} [scale = 0.8] %, every node/.style={inner sep = 2pt, scale=0.8}] \coordinate (T2) at (-2, 0.5); \node (Fp) at (0, 0) {$\FF_p$}; \node (Fp2) at ($(Fp) + (T2)$) {$\FF_{p^2}$}; \node (Fp4) at ($(Fp2) + (T2)$) {$\FF_{p^4}$}; \node (Fp2l) at ($(Fp4) + (T2)$) {};% {$\FF_p^{(2)}$}; % --------------------- \coordinate (T3) at (-0.7, 2); \node (Fp3) at ($(Fp) + (T3)$) {$\FF_{p^3}$}; \node (Fp9) at ($(Fp3) + (T3)$) {$\FF_{p^9}$}; \node (Fp3l) at ($(Fp9) + (T3)$) {};% {$\FF_p^{(3)}$}; % --------------------- \coordinate (T5) at (0.7, 2); \node (Fp5) at ($(Fp) + (T5)$) {$\FF_{p^5}$}; \node (Fp25) at ($(Fp5) + (T5)$) {$\FF_{p^{25}}$}; \node (Fp5l) at ($(Fp25) + (T5)$) {};% {$\FF_p^{(5)}$}; % --------------------- \coordinate (Tl) at (2, .5); \node[blue] (Fpl) at ($(Fp) + (Tl)$) {$\FF_{p^\ell}$}; \node[blue] (Fpl2) at ($(Fpl) + (Tl)$) {$\FF_{p^{\ell^2}}$}; \node[blue] (Fpll) at ($(Fpl2) + (Tl)$) {};% {$\FF_p^{(\ell)}$}; % --------------------- \node[dotstyle] (dot1) at ($(Fp2) + (Fp3) - (Fp)$) {}; \node[dotstyle] (dot2) at ($(Fp4) + (dot1) - (Fp2)$) {}; \node[dotstyle] (dot3) at ($(Fp2) + (Fp5) - (Fp)$) {}; \node[dotstyle] (dot4) at ($(Fp3) + (Fp5) - (Fp)$) {}; \node[dotstyle] (dot5) at ($(Fp3) + (Fpl) - (Fp)$) {}; \node[dotstyle] (dot6) at ($(Fp5) + (Fpl) - (Fp)$) {}; \node[dotstyle] (dot7) at ($(Fpl2) + (dot6) - (Fpl)$) {}; % --------------------- \draw (Fp) edge[edgetower] (Fp2) edge[edgetower] (Fp3) edge[edgetower] (Fp5) edge[edgetower, blue] (Fpl) (Fp2) edge[edgetower] (Fp4) edge[edgecomp] (dot1) (Fp4) edge[edgetower, dotted] (Fp2l) edge[edgecomp] (dot2) (dot1) edge[edgecomp] (dot2) (Fp3) edge[edgetower] (Fp9) edge[edgecomp] (dot1) edge[edgecomp] (dot4) (Fp9) edge[edgetower, dotted] (Fp3l) (Fp5) edge[edgetower] (Fp25) edge[edgecomp] (dot4) edge[edgecomp] (dot6) (Fp25) edge[edgetower, dotted] (Fp5l) (Fpl) edge[edgetower, blue] (Fpl2) edge[edgecomp] (dot6) (Fpl2) edge[edgetower, blue, dotted] (Fpll) edge[edgecomp] (dot7) (dot3) edge[edgecomp] (Fp2) edge[edgecomp] (Fp5) (dot5) edge[edgecomp] (Fp3) edge[edgecomp] (Fpl) (dot6) edge[edgecomp] (dot7); \end{tikzpicture} \caption{Extensions of $\mathbb{F}_p$.} \label{fig:alg-closure} \end{minipage} \hfill \begin{minipage}[b]{.46\linewidth} \centering \begin{tikzpicture} \node (E) at (0, 0) {$E$}; \node (F) at (1.5, 1) {$F$}; \node (G) at (0.5, 2) {$G$}; \draw[arrow] (E) -- (F); \draw[arrow] (E) -- (G); \draw[arrow] (F) -- (G); \node (f12) at (1.25, 0.25) {$\embed{E}{F}$}; \node (f13) at (-0.35, 1) {$\embed{E}{G}$}; \node (f23) at (1.6, 1.65) {$\embed{F}{G}$}; \end{tikzpicture} \caption{Embeddings between finite fields.} \label{fig:compatibility} \end{minipage} \end{figure} Additionaly, we want the embeddings to be compatible. 
Given three finite fields $E$, $F$, and $G$, such that $\dE\,|\,\dF$ and $\dF\,|\,\dG$, and three embeddings $\embed{E}{F}$, $\embed{F}{G}$, and $\embed{E}{G}$, we say that the embeddings are \emph{compatible} if \[ \embed{E}{G}=\embed{F}{G}\circ\embed{E}{F}. \] In other words, we want the diagram of Figure~\ref{fig:compatibility} to commute. In the context of a computer algebra system, this condition is important in order to give the user coherent answers when performing operations in different fields. This is the case when computing in the algebraic closure of a finite field, because the ambient field may change often. There are also applications in isogeny-based~\cite{Defeo17} or pairing-based cryptography. We note $E\emb F$ if $E$ is explicitly embedded in $F$, \ie if we have computed an embedding $\embed{E}{F}$. \paragraph{Compatibility} Compatibility can be achieved using \emph{Conway polynomials}~\cite{Parker90, HL98}, a set $\left\{ P_d \right\}_{d\in D}$ of irreducible polynomials over $\mathbb{F}_p$ such that any root $\beta_d$ of $P_d$ is primitive in $\mathbb{F}_{p}[X]/(P_d(X))\cong \mathbb{F}_{p^d}$ and such that if $d_1, d_2\in D$ and $d_1|d_2$, then $N_{\mathbb{F}_{p^{d_2}}/\mathbb{F}_{p^{d_1}}}(\beta_{d_2})$ is a root of $P_{d_1}$, where $N_{\mathbb{F}_{p^{d_2}}/\mathbb{F}_{p^{d_1}}}$ denotes the norm of $\mathbb{F}_{p^{d_2}}$ over $\mathbb{F}_{p^{d_1}}$. A compatible embedding of $\mathbb{F}_{p^{d_1}}$ in $\mathbb{F}_{p^{d_2}}$ is then obtained by sending $\beta_{d_1}$ to $\beta_{d_2}^k$, with $k=(p^{d_2}-1)/(p^{d_1}-1)$. By adding a minimality condition with respect to some ordering, this provides a way of constructing standardized finite field extensions. However Conway polynomials are hard to compute, so in practice this technique can only be used with rather small fields. Bosma, Cannon and Steel suggested another scheme~\cite{BCS97} in which they give an axiomatic characterization of a lattice of compatibly embedded finite fields. Their method, implemented in Magma, does not require any precomputation and enables one to work in any user-defined finite field. However, embedding a new finite field requires polynomial factorization and leads to a cost strictly larger than quadratic in the extension degree~\cite{BDDFS17}. Following \cite{BCS97}, we say that a pair $\mathfrak L=(L, \Phi)$, where $L$ is a set of finite fields and $\Phi$ is a set of embeddings between elements of $L$, is a \emph{lattice of compatibly embedded finite fields} if \begin{enumerate} \item[CE1] (unicity) for each pair $(E, F)$ of elements in $L$, there exists at most one corresponding embedding $\embed{E}{F}\in\Phi$. \item[CE2] (reflexivity) For each $E\in L$, the identity map $\Id_E=\embed{E}{E}$ is in $\Phi$. \item[CE3] (prime subfield) There is exactly one $P\in L$ such that $\partial (P) = 1$, and for all $F\in L$, there exists $\embed{P}{F}\in\Phi$ \item[CE4] (invertibility) If $E\emb F$ and $\dE=\dF$, then $F\emb E$ and $\embed{F}{E}=\embed{E}{F}^{-1}$. \item[CE5] (transitivity) For any triple $(E, F, G)$ of elements in $L$, if $E\emb F\emb G$ then $E\emb G$ and $\embed{E}{G}=\embed{F}{G}\circ\embed{E}{F}$. \item[CE6](intersections) For each $E, F, G\in L$ such that $F\emb G$ and $E\emb G$, there exists $S\in L$ such that $\partial(S)=\gcd(\dE, \dF)$ and $S\emb E$, $S\emb F$. \end{enumerate} Under those conditions, we can prove~\cite{BCS97} that we are able to add a finite field in $L$ or an embedding that is not yet in $\Phi$ without altering the compatibility of the lattice $\mathfrak L$. 
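To fix ideas, here is a small worked instance of the Conway polynomial construction recalled above; the prime and the degrees are chosen purely for illustration. Take $p=5$, $d_1=2$ and $d_2=4$. The compatible embedding of $\mathbb{F}_{5^{2}}$ in $\mathbb{F}_{5^{4}}$ sends a root $\beta_{2}$ of $P_{2}$ to $\beta_{4}^{k}$, with
\[
k=\frac{p^{d_2}-1}{p^{d_1}-1}=\frac{5^{4}-1}{5^{2}-1}=\frac{624}{24}=26,
\]
which is consistent with the norm condition, since $N_{\mathbb{F}_{5^{4}}/\mathbb{F}_{5^{2}}}(\beta_{4})=\beta_{4}^{\,1+5^{2}}=\beta_{4}^{26}$ must be a root of $P_{2}$.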
\paragraph{Our contribution}

We implemented the Bosma, Cannon and Steel framework using Nemo/Flint~\cite{Nemo, Flint}, following conditions CE1 to CE6. Most of these conditions are very natural. Condition CE3 is technical but does not imply any additional work in our implementation, because finite field elements in Nemo/Flint are represented by polynomials over $\mathbb{F}_p$, so the embedding of $\mathbb{F}_p$ into an extension is trivial. Finally, condition CE6 ensures that the implicit isomorphisms between subfields are made explicit. In order to meet this last condition, when embedding a finite field $F$ in $G$, for each subfield $S$ of $G$, we check that the finite field $S\cap F$ is embedded in $S$ and $F$, and if not, we embed it. If there is no finite field of degree $d=\gcd(\partial(S), \dF)$, we compute an arbitrary finite field $I$ of degree $d$ using Flint and we embed $I$ in $S$ and $F$, resulting in a recursive call to our embedding algorithm.

Our code is available as an experimental branch of Nemo\footnote{\url{https://github.com/erou/Nemo.jl/tree/embeddings}}, that is itself based on an experimental branch of Flint\footnote{\url{https://github.com/defeo/flint2/tree/embeddings}}. Critical routines (\eg polynomial factorization, matrix computations) are written in C and computed by Flint, whereas high level tasks (\eg checking conditions CE5 and CE6) are written in Julia using Nemo. Our goal is to first include the C code inside the standard Flint library and then the Julia code in Nemo. With our experimental code, it is possible to define arbitrary finite fields and to compute compatible embeddings between them; it is also possible to compute a section of a field to a subfield, \ie a map that sends an element to its inverse image when the element is in the subfield, and throws an error otherwise. All these computations are automatic and transparent to the user, except if he or she wants to work with the maps themselves. All the computed embeddings are kept in memory so that the same work is not done twice. Together with conditions CE5 and CE6, this leads to many embeddings being stored behind the scenes. Here is an example of a minimal session using our code\footnote{A longer and interactive one is available at \url{https://mybinder.org/v2/gh/defeo/Nemo-embeddings-demo/master?filepath=demo.ipynb}.}.

\begin{minipage}{0.45\textwidth}
\begin{Verbatim}[commandchars=\\\{\}]
\textcolor{red}{In:} p = 5
\textcolor{gray}{# We create k2 = GF(25) = GF(5)[x2]}
k2, x2 = FiniteField(p, 2, "x2")
k4, x4 = FiniteField(p, 4, "x4")
k8, x8 = FiniteField(p, 8, "x8")
\textcolor{gray}{# We compute the embedding k2->k4}
f2_4 = embed(k2, k4)
y = f2_4(x2)
\textcolor{blue}{Out:} x4^3+x4^2+x4+3
\textcolor{red}{In:} \textcolor{gray}{# We check that y is in GF(25)}
y^(p^2) == y
\textcolor{blue}{Out:} true
\end{Verbatim}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\begin{Verbatim}[commandchars=\\\{\}]
\textcolor{red}{In:} f2_8 = embed(k2, k8)
f4_8 = embed(k4, k8)
\textcolor{gray}{# We check the compatibility}
f2_8(x2) == f4_8(f2_4(x2))
\textcolor{blue}{Out:} true
\textcolor{red}{In:} \textcolor{gray}{# We can directly see x4^2+1}
\textcolor{gray}{# as an element of k8}
z = k8(x4^2+1)
\textcolor{blue}{Out:} 3*x8^6+4*x8^5+3*x8^3+4*x8+2
\textcolor{red}{In:} k4(z)
\textcolor{blue}{Out:} x4^2+1
\end{Verbatim}
\end{minipage}

\paragraph{Future work}

The code has yet to be optimised: condition CE5 could be fulfilled lazily, by computing the embeddings only when needed by the user.
This would prevent the storage of useless embeddings. Moreover, the embedding algorithm used by Bosma, Cannon and Steel (the naive algorithm) is not optimal either. Replacing the naive algorithm by a more sophisticated one requires both theoretical work and some new implementations. An important question is whether a ``standardized'' construction can be found for these algorithms, similar to the one obtained using Conway polynomials. Contrary to the Bosma, Cannon and Steel framework, Conway polynomials also make it simple to obtain the generator of a field from the generator of another field using norms. We aim to be able to do the same thing, by memorizing a little more than the generators of our finite fields.

\bibliographystyle{plain}
\bibliography{erou}

\end{document}

% Local Variables:
% ispell-local-dictionary:"american"
% End:

% LocalWords: isomorphism
{ "alphanum_fraction": 0.677400582, "avg_line_length": 40.2734375, "ext": "tex", "hexsha": "6ecd517209ae2ec69f25c8033e93672fa76262ac", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5f73dc37587c61fec494562d7d23795bdaf7d37f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "erou/compatible-embeddings", "max_forks_repo_path": "software-abstract.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5f73dc37587c61fec494562d7d23795bdaf7d37f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "erou/compatible-embeddings", "max_issues_repo_path": "software-abstract.tex", "max_line_length": 106, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5f73dc37587c61fec494562d7d23795bdaf7d37f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "erou/compatible-embeddings", "max_stars_repo_path": "software-abstract.tex", "max_stars_repo_stars_event_max_datetime": "2019-05-17T16:33:19.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-17T16:33:19.000Z", "num_tokens": 5223, "size": 15465 }
\providecommand{\main}{..}
\documentclass[\main/thesis.tex]{subfiles}

\begin{document}
\graphicspath{{img/}{02_theory/img/}}

% \chapterimage{chapter_head_1.pdf}

\chapter{Ultra-Wideband Signals}

This chapter provides a quick explanation of UWB along with its advantages and regulations. The last two sections present a modulation scheme and the IEEE standard for UWB.

\section{Definition of Ultra-Wideband System}

Let $f$\textsubscript{c} be the frequency at which the system has its maximum power spectral density, and let $f$\textsubscript{H} and $f$\textsubscript{L} be the frequencies at which the power spectral density is 10 dB below its value at $f$\textsubscript{c}. The fractional bandwidth $B$\textsubscript{frac} is then defined as
\begin{equation}
B\textsubscript{frac}=\frac{B}{f\textsubscript{c}}
\end{equation}
where $B$ is the bandwidth of the system. In terms of the high and low frequencies, we have
\begin{equation}
B=(f\textsubscript{H} - f\textsubscript{L})
\end{equation}
and
\begin{equation}
f\textsubscript{c} = \frac{f\textsubscript{H} + f\textsubscript{L}}{2}
\end{equation}
so
\begin{equation}
B\textsubscript{frac} = \frac{2(f\textsubscript{H} - f\textsubscript{L})}{f\textsubscript{H} + f\textsubscript{L}}
\end{equation}

The Federal Communications Commission (FCC) has defined UWB systems as those which have an absolute bandwidth larger than 500 MHz and $f$\textsubscript{c} larger than 2.5 GHz, or a $B$\textsubscript{frac} larger than 0.2 for systems with $f$\textsubscript{c} lower than 2.5 GHz \cite{ultra_wideband_positioning_systems}.

Ultra-Wideband radio is also referred to as `impulse', `short-pulse', `non-sinusoidal', `carrierless', `time domain', `super wideband', `fast frequency chirp' and `mono-pulse' radio. Very short pulse transmissions that result in an ultra-wideband spectrum are utilized by both impulse radio communication systems and impulse radars. Since data modulation is introduced by pulse position modulation (PPM), this communication scheme is also classified as a pulse modulation technique. Due to its low power spectral density, the UWB signal is noise-like, which makes interception and detection quite difficult. On the other hand, this makes UWB cause very little interference with existing narrow-band radio systems. Moreover, UWB might operate without a license, depending on the attitude of national and international regulatory bodies. One result of the large bandwidth of UWB is that, due to the inverse relationship between time and frequency, the transmitted pulses are extremely short in duration. Figure \ref{fig:uwb_versus_other_radio_communication_systems} shows the differences between UWB and other radio systems in terms of frequency band and power spectral density.

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{uwb_versus_other_radio_communication_systems.png}
\caption{UWB versus other radio communication systems}
\label{fig:uwb_versus_other_radio_communication_systems}
\end{figure}

\section{Advantages of Ultra-Wideband}

Listed below \cite{uwb_theory_and_applications} are some advantages that make UWB attractive for consumer communication applications:
\begin{itemize}
\item Potentially low complexity and low cost
\item Noise-like signal
\item Resistance to multi-path and jamming
\item Good time-domain resolution allowing for localization and tracking applications.
\end{itemize}

The base-band nature of signal transmission reduces the complexity and cost of UWB systems. Unlike conventional radio systems, the very short time-domain pulse produced by the UWB transmitter is able to propagate without the need for an additional RF mixing stage.
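To put rough numbers on how short these pulses are (the figures below are order-of-magnitude estimates only): since the pulse duration scales roughly as the inverse of the occupied bandwidth, the minimum UWB bandwidth of 500 MHz corresponds to pulses of the order of
\begin{equation}
\tau \approx \frac{1}{B} = \frac{1}{500\ \mathrm{MHz}} = 2\ \mathrm{ns},
\end{equation}
while a signal occupying the full 3.1--10.6 GHz band ($B = 7.5$ GHz, $f$\textsubscript{c} = 6.85 GHz, hence $B$\textsubscript{frac} $= 2 \times 7.5/13.7 \approx 1.1$) corresponds to sub-nanosecond pulses of roughly 0.13 ns.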
% The RF mixing stage takes a base-band signal and `injects' a carrier frequency, or translates the signal to a frequency which has desirable propagation characteristics.
% The very nature of the UWB signal means it spans frequencies commonly used as carrier frequencies.
% The signal will propagate well without the need of additional up-conversion and amplification.
There is no need for additional up-conversion and amplification on the sender side. The reverse process of down-conversion is also not required in the UWB receiver. This means the omission of a local oscillator in the receiver, and the removal of the associated complex delay and phase tracking loops. Consequently, a UWB system can be implemented in a low-cost, low-power integrated circuit process.

One of the most important topics in current UWB research is the interference phenomenon between impulse radio and existing radio systems. While there is some debate in the literature, it appears that low-power, noise-like UWB transmission does not significantly interfere with existing radio systems. The low energy and the pseudo-random (PR) characteristics of the UWB signal, however, make it noise-like, thus making it quite difficult for unintended receivers to detect.
% Time-modulation systems offer a possibility for high data rates for communication. Hundreds of Mbps have been reported for communication links.

Using UWB technology, a very high multi-path resolution is achieved because of the large bandwidth of the transmitted signal. The large bandwidth offers huge frequency diversity which, together with the discontinuous transmission, makes the UWB signal resistant to severe multi-path propagation and jamming/interference.

The very narrow time-domain pulse of UWB radios means that they are potentially able to offer a timing precision much better than other radio systems. Together with good material penetration properties, UWB signals offer opportunities for short-range radar applications such as rescue and anti-crime operations, as well as surveying and the mining industry. UWB does not provide precise timing and extreme penetration at the same time, but UWB waveforms present a better choice than conventional radio systems do.

\section{International Regulations for UWB Signals}

The effective isotropic radiated power (EIRP), which can be defined as the product of the transmitter gain and its input power, is an important characteristic of a radar or communications transmitter. Figure \ref{fig:fcc_spectral_mask_for_indoor_uwb_systems} shows the FCC spectral mask of the indoor UWB EIRP emission level. It is evident that the maximum signal power is limited to $-41.3$ dBm per MHz throughout the whole UWB frequency range from 3.1 to 10.6 GHz. To comply with the FCC standards and regulations, all UWB systems and devices must work within this spectral mask for legal operation.

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{fcc_spectral_mask_for_indoor_uwb_systems}
\caption{FCC spectral mask for indoor UWB systems}
\label{fig:fcc_spectral_mask_for_indoor_uwb_systems}
\end{figure}

\section{Relationship of Bandwidth with Data Rate and Power Consumption}

As indicated by the Shannon-Hartley theorem, there is a direct relationship between capacity and bandwidth and an inverse relationship between bandwidth and power consumption. The theorem states:
\begin{equation}
C=B\log_2\left(1+\frac{S}{N}\right)
\end{equation}
where $C$ is the capacity (in bits per second), $B$ is the bandwidth, $S$ is the average received signal power over $B$ and $N$ is the average noise power over $B$.
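As a quick numerical illustration (the numbers are chosen purely for the example): a channel with $B = 25$ MHz and $\frac{S}{N} = 15$ offers
\begin{equation}
C = 25\ \mathrm{MHz} \times \log_2(1+15) = 100\ \mathrm{Mbps},
\end{equation}
whereas the same 100 Mbps can be reached over $B = 500$ MHz with $\frac{S}{N} \approx 0.15$, i.e.\ with the received signal well below the noise level.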
For a given capacity, a system with a larger bandwidth consumes less power. Since $\frac{S}{N}$ is inside a logarithm, it is easier to increase the capacity by increasing the bandwidth rather than $\frac{S}{N}$. It is common to refer to $\frac{S}{N}$ as the SNR, the Signal to Noise Ratio. Figure \ref{fig:capacity_versus_bandwidth_curves_for_uwb_systems_over_awgn_channels} \cite{ultra_wideband_positioning_systems} shows the relationship between the bandwidth and the capacity for five different SNRs.

\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{capacity_versus_bandwidth_curves_for_uwb_systems_over_awgn_channels}
\caption{Capacity versus bandwidth curves for UWB systems over AWGN channels}
\label{fig:capacity_versus_bandwidth_curves_for_uwb_systems_over_awgn_channels}
\end{figure}

\section{A UWB Modulation Scheme: Impulse Radio Scheme}

Time-modulated ultra-wideband (TM-UWB) communication is based on the discontinuous emission of very short Gaussian pulses or other types of pulse waveforms (monocycles). As shown in figure \ref{fig:an_ir_uwb_signal} \cite{ultra_wideband_positioning_systems}, data is transmitted by low duty-cycle UWB signals, and the information of a symbol is conveyed by the position and/or the polarity of the signals. Each symbol corresponds to one or more signals. In the following example, two consecutive IR signals represent one symbol. An IR signal can occupy one of the chip intervals ($T\textsubscript{$c$}$) within a frame ($T\textsubscript{$f$}$). A time-hopping (TH) code is used for determining the exact position of a signal within its time frame, in order to decrease the chance of interference between UWB systems.
% In the following example, the TH codes for the symbols are \{2,1\}, \{2,3\} and \{1,0\} respectively,
% In this example, the information corresponds to the polarity of the signals, so the IR stream represents the binary data `101'.
This technique is commonly known as Binary Phase Shift Keying (BPSK).

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{an_ir_uwb_signal}
\caption{An IR UWB signal}
\label{fig:an_ir_uwb_signal}
\end{figure}

Figure \ref{fig:an_ir_uwb_signal} shows an IR UWB signal in which two pulses are transmitted per information symbol and the information is conveyed by the polarities of the pulses (BPSK). Hence, +1, -1 and +1 are being transmitted in the example shown in the figure. Note that each pulse resides in an interval of $T\textsubscript{$f$}$ seconds, called a ``frame'', and the positions of the pulses in different frames are determined by a TH (time-hopping) code, which is \{2, 1, 2, 3, 1, 0\} in the example, so the first and the second signals are shifted by two and one chip intervals, respectively, and so on.

\section{IEEE 802.15.4a Standard}

A new physical layer concept for low data rate applications utilizing UWB technology has been defined by the 802.15.4a study group \cite{IEEE_Std_802_15_4_2015}. Applications that require only moderate data throughput but long battery life, such as low-rate wireless personal area networks, sensors, and small networks, are well suited to UWB technology. The IEEE 802.15.4a standard specifies two optional signaling formats based on impulse-radio Ultra-Wideband (IR-UWB) and chirp spread spectrum (CSS). The IR-UWB option can use the 250--750 MHz, 3.244--4.742 GHz, or 5.944--10.234 GHz bands, whereas the CSS uses the 2.4--2.4835 GHz band. For IR-UWB there is an optional ranging capability, whereas the CSS signals can only be used for communication purposes. Only the IR-UWB option of the IEEE 802.15.4a standard is studied in this thesis.
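To make the time-hopping description of the previous section more concrete, the short sketch below computes the transmission instant and polarity of every pulse for the example of figure \ref{fig:an_ir_uwb_signal}. It is plain Python, and the frame and chip durations are invented for the illustration; they are not values taken from the IEEE 802.15.4a standard.

\begin{verbatim}
# Pulse instants and polarities for the IR-UWB example in the text.
T_f = 8e-9                       # frame duration in seconds (assumed)
T_c = 2e-9                       # chip duration in seconds (assumed)
th_code = [2, 1, 2, 3, 1, 0]     # time-hopping code, one entry per frame
symbols = [+1, -1, +1]           # BPSK symbols
pulses_per_symbol = 2            # two consecutive frames carry one symbol

for j, c in enumerate(th_code):
    polarity = symbols[j // pulses_per_symbol]
    t = j * T_f + c * T_c        # pulse position inside frame j
    print(f"frame {j}: pulse at {t * 1e9:.0f} ns, polarity {polarity:+d}")
\end{verbatim}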
\subsubsection{Channel allocations} According to the IEEE 802.15.4 standard, a UWB device can transmit in one or more of the following bands: \begin{itemize} \item Sub-GHz: 250 - 750 MHz \item Low band: 3.244- 4.742 GHz \item High band: 5.944 - 10.234 GHz. \end{itemize} Over these three bands, there are 16 channels supported for the UWB PHY: 1 in the sub-GHz band, 4 in the low band and 11 in the high band. These channels, their center frequencies as well as bandwidths are listed in table \ref{tab:uwb_channels_for_the_IEEE_802_15_4a_standard} together with mandatory channels in each band. \begin{table}[H] \centering \begin{tabular}{ |c|c|c|c|c| } \hline Channel No. & Center freq. (MHz) & Bandwidth (MHz) & UWB band & Mandatory \\\hline 0& 499.2& 499.2& Sub-GHz& Yes \\\hline 1& 3494.4& 499.2& Low band& No \\\hline 2& 3993.6& 499.2& Low band& No \\\hline 3& 4492.8& 499.2& Low band& Yes \\\hline 4& 3993.6& 1331.2& Low band& No \\\hline 5& 6489.6& 499.2& High band& No \\\hline 6& 6988.8& 499.2& High band& No \\\hline 7& 6489.6& 1081.6& High band& No \\\hline 8& 7488.0& 499.2& High band& No \\\hline 9& 7987.2& 499.2& High band& Yes \\\hline 10& 8486.4& 499.2& High band& No \\\hline 11& 7987.2& 1331.2& High band& No \\\hline 12& 8985.6& 499.2& High band& No \\\hline 13& 9484.8& 499.2& High band& No \\\hline 14& 9984.0& 499.2& High band& No \\\hline 15& 9484.8& 1354.97& High band& No \\\hline \end{tabular} \caption{UWB channels for the IEEE 802.15.4a standard} \label{tab:uwb_channels_for_the_IEEE_802_15_4a_standard} \end{table} The DWM1001 module has a built-in antenna for UWB channel 5. \bib \end{document}
{ "alphanum_fraction": 0.7711245235, "avg_line_length": 77.2515337423, "ext": "tex", "hexsha": "3c3d4d2a0add065135bda46dbf8eee5602d3432d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-01-14T02:31:16.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-14T02:31:16.000Z", "max_forks_repo_head_hexsha": "656725bfbb45eae48127733b7c4a96736bfacf84", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "tuannv0898/uwb-ips", "max_forks_repo_path": "reports/02_theory/ultra_wideband.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "656725bfbb45eae48127733b7c4a96736bfacf84", "max_issues_repo_issues_event_max_datetime": "2021-01-13T16:22:55.000Z", "max_issues_repo_issues_event_min_datetime": "2021-01-13T16:22:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "tuannv0898/uwb-ips", "max_issues_repo_path": "reports/02_theory/ultra_wideband.tex", "max_line_length": 615, "max_stars_count": null, "max_stars_repo_head_hexsha": "656725bfbb45eae48127733b7c4a96736bfacf84", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "tuannv0898/uwb-ips", "max_stars_repo_path": "reports/02_theory/ultra_wideband.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3310, "size": 12592 }
\documentclass[12pt]{article} \usepackage{natbib} \usepackage[]{hyperref} \usepackage{bm} \usepackage{amsfonts} \usepackage{graphicx} \oddsidemargin 0.0mm \evensidemargin 0.0mm \textwidth 160mm \topmargin -10mm \textheight 230mm % \pagestyle{empty} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\threej}[6] { \left(\begin{array}{ccc} #1&#2&#3\\ #4&#5&#6 \end{array}\right) } \newcommand{\sixj}[6] { \left\{\begin{array}{ccc} #1&#2&#3\\ #4&#5&#6 \end{array}\right\} } \def\separation {0.5cm} \def\non{\nonumber \\} \def\DnuD {\hbox{$\Delta\nu_D$}} \def\Jbar {\hbox{$\bar J$}} \def\j {\hbox{$\jmath$}} \def\N {\hbox{$\cal N$}} \def\Ie {\hbox{$I_e$}} \def\Ji {\hbox{$\bar J^i_\mathrm{ext}$}} \def\about {\hbox{$\sim$}} \def\x {\hbox{$\times$}} \def\half {\hbox{$1\over2$}} \def\Ncr {\hbox{$N'_{\rm cr}$}} \def\mic {\hbox{$\mu$m}} \def\ion#1#2 {#1\,{\small {#2}} } \def\tot {\tau_t} \def\t(#1){\tau^{#1}} \def\a(#1){\alpha^{#1}} \def\H{\textsc{Hazel}} \def\HM{\textsc{P-Hazel}} \def\LVG {\texttt{LVG}} \def\slab {\texttt{slab}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % For generation of the HTML manual with tth: \def\tthdump#1{#1} % For generating TeX source; ignored by tth % Redefine symbols problematic for the browser: %%tth:\def\ga{\hbox{$>\sim$}} %%tth:\def\la{\hbox{$<\sim$}} %%tth:\def\Mo{\hbox{$M_o$}} %%tth:\def\Lo{\hbox{$L_o$}} %%tth:\def\Mdot{\hbox{$M^{dot}$}} %%tth:\def\Ivezic{Ivezic} %%tth:\begin{html}<TITLE>User Manual for MOLPOP-CEP</TITLE>\end{html} %%tth: This HTML file was generated from the TeX source by %%tth: the translator TTH, which uses symbol fonts. These fonts are %%tth: not normally enabled for Netscape running under X, because of %%tth: the way Netscape groups its fonts. If your browser has problems %%tth: displaying the math symbols in this manual, an easy fix can be found %%tth: on the TTH website at %%tth:\begin{html}<A HREF="http://hutchinson.belmont.ma.us/tth/Xfonts.html">http://hutchinson.belmont.ma.us/tth/Xfonts.html</A>\end{html} %%tth:\begin{html}<HR>\end{html} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title {\sc User Manual for \H\ and \HM\footnote{\H\ (an acronym for HAnle and ZEeman Light) is one of the IAC computer programs for the synthesis and inversion of Stokes profiles resulting from the joint action of the Hanle and Zeeman effects.}} \author{ A. Asensio Ramos \\ Instituto de Astrof\'{\i}sica de Canarias\\ 38205, La Laguna, Tenerife, Spain\\ \\ % J. Trujillo Bueno\footnote{Consejo Superior de Investigaciones Cient\'{\i}ficas (Spain)}\\ % Instituto de Astrof\'{\i}sica de Canarias\\ % 38205, La Laguna, Tenerife, Spain\\ \\ % E. Landi Degl'Innocenti \\ % Universit\`a degli Studi di Firenze \\ % Dipartimento di Astronomia e Scienza dello Spazio\\ % Largo Enrico Fermi 2, I-50125 Florence, Italy \\[0.5in] \today} \date{} \maketitle \newpage \tableofcontents \newpage \section*{Disclaimer} This software is distributed ``as is'' and the authors do not take any responsability for possible errors derived from its use by others. Apply it with care and never trust the output without a careful meditation. \H\ can be freely used provided that its origin is properly acknowledged and the reference Asensio Ramos, Trujillo Bueno \& Landi Degl'Innocenti (2008; ApJ 683, 542) is cited and acknowledged in any publication achieved with it. Before using \H\ we recommend the user to read carefully this paper and the previous one by Trujillo Bueno \& Asensio Ramos (2007; ApJ 655, 642). 
Please, send us bug reports, comments and suggestions of possible improvements. We point out that \H\ will be improved over the years (e.g., by extending it to more realistic radiative transfer problems), but it is now ready for a number of interesting applications in solar and stellar physics. \newpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} \subsection{Description} \H\ (an acronym for HAnle and ZEeman Light) is a computer program for the synthesis and inversion of Stokes profiles caused by the joint action of atomic level polarization and the Hanle and Zeeman effects. It is based on the quantum theory of spectral line polarization, which takes into account rigorously all the relevant physical mechanisms and ingredients: optical pumping, atomic level polarization, level crossings and repulsions, Zeeman, Paschen-Back and Hanle effects. The code is written in standard Fortran 90. Its parameters are passed using four configuration files that can be manually edited. These configuration files are heavily commented, so that their edition should be an easy task. In any case, two front-ends coded in IDL are given as a part of the distribution in order to facilitate a user-friendly execution of the program. A parallel version of the code using Message Passing Interface (MPI) is also available. This manual considers both distributions. \subsection{Credits} The code has grown since the first version thanks to the suggestions of many people. We thank Rebecca Centeno Elliot, Yukio Katsukawa, Marian Mart\'{\i}nez Gonz\'alez, Rafael Manso Sainz and Tom Schad for their help on testing the code and proposing (and partially coding, in some cases) some of the options of the code. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Uncompressing and compiling \H} \subsection{Serial version} The package comes in a single compressed file \texttt{hazel.tar.gz}. After unpacking with \texttt{tar zxvf hazel.tar.gz}, the \H\ directory will contain the following subdirectories: \begin{enumerate} \item {\tt Source} contains the Fortran 90 sources and a makefile that can be used to build the binary file. \item {\tt Run} contains a directory tree structure with the appropriate configuration files to run the code in command line mode. \item {\tt Widget\_Synth} contains all the files that are needed to run the IDL front-end for the synthesis problem. \item {\tt Widget\_Inv} contains all the files that are needed to run the IDL front-end for the inversion problem. \item {\tt IDL\_routines} contains some IDL routines that are needed by the front-ends. \item {\tt Manual} contains this manual. \end{enumerate} The code has been tested on Linux platforms using the Intel Fortran Compiler (\texttt{ifort}) and the free GFortran compiler. The source code is in the \texttt{Source/} directory. The compilation is performed with the supplied \texttt{makefile}. It is quite simple and easy to modify, and contains additional comments about compiling. The default compiler is the \texttt{ifort}, although you can use any other compiler through the variable \texttt{COMPILER}. In order to obtain the executable file, just type: \begin{verbatim} make all \end{verbatim} After compiling and linking, the executable is copied to the \H\ \texttt{Run/}, \texttt{Widget\_Synth/} and \texttt{Widget\_Inv/} directories. 
Running the program in the \texttt{Run/} directory should produce the correct output, depending on the exact form of the input files. The generated object and module files can be cleaned by typing:
\begin{verbatim}
make clean
\end{verbatim}

\subsection{Parallel version}

The package also decompresses the \HM\ directory tree, which contains the following subdirectories:
\begin{enumerate}
\item {\tt SourceMPI} contains the Fortran 90 sources and a makefile that can be used
to build the binary file.
\item {\tt RunMPI} contains a directory tree structure with the appropriate configuration
files to run the code in command line mode.
\end{enumerate}

The source code is in the
\texttt{SourceMPI/} directory. The compilation depends on the precompiled library
NetCDF\footnote{\texttt{http://www.unidata.ucar.edu/software/netcdf/}} for reading and writing output files.
NetCDF is a standard for platform-independent binary files that you need to have installed on
your system. The compilation is performed with the supplied \texttt{makefile}. It is quite simple and easy to modify, and contains additional comments about compiling. The default compiler is \texttt{mpif90}, although you can use any
other compiler through the variable \texttt{COMPILER}. The variables \texttt{NETCDF\_INCLUDE} and
\texttt{NETCDF\_LIB} have to point to the \texttt{include} and \texttt{lib} directories of the NetCDF
distribution. The code makes use of the MPI package for parallelization, so it has to be installed on your system.

In order to obtain the executable file (for instance for the Intel compiler), just type:
\begin{verbatim}
make -f makefile.Intel
\end{verbatim}
Modify the \texttt{makefile} to point the variables to the correct libraries and include files. After compiling and linking, the executable is copied to the \HM\ \texttt{RunMPI/} directory, where the code is run. Running the program in the \texttt{RunMPI/} directory should produce the correct output, depending on the exact form of the input files. The generated object and module files can be cleaned by typing:
\begin{verbatim}
make clean
\end{verbatim}

The code is run from the \texttt{RunMPI} directory. Use your MPI launcher to select the number of processors. For example:
\begin{verbatim}
mpiexec -n 50 hazel_mpi config_inversion.dat 2000 5000
\end{verbatim}
The code admits up to three command line parameters:
\begin{itemize}
\item Filename with the main configuration file.
\item Starting pixel of the inversion. This is used if you want to rerun the inversion of some pixels.
\item Final pixel of the inversion. This is used if you want to rerun the inversion of some pixels.
\end{itemize}
See \S\ref{sec:phazel_files} for details on the input files.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{New input file}

In previous versions, the code was controlled with four configuration files. The updated version is now controlled by a single human-readable configuration file. The code can still be run using the old configuration files, but this option will be discontinued in the future, so it is advisable to shift to the new configuration file. In order to use this option, you need to have the \texttt{configparser} package installed in your system. It can be downloaded from \texttt{http://www.voidspace.org.uk/python/configobj.html}.
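As an aside, the new configuration file uses the ConfigObj syntax documented at the link above (nested sections are written with doubled square brackets), so its contents can also be inspected directly from Python. The following is a minimal sketch, assuming the \texttt{configobj} package is installed and that a \texttt{conf.ini} such as the example reproduced below is present; the keys used here are taken from that example.

\begin{verbatim}
# Minimal sketch: read a Hazel-style conf.ini with configobj.
from configobj import ConfigObj

cfg = ConfigObj("conf.ini")

# Scalar values are returned as strings ...
action = cfg["Working mode"]["Action"]            # e.g. 'inversion'

# ... and comma-separated values as lists of strings.
angles = [float(x) for x in
          cfg["General parameters"]["Line-of-sight angles"]]

# Nested sections such as [[Slab 1]] behave like nested dictionaries.
field = float(cfg["Synthesis"]["Slab 1"]["B"])

print(action, angles, field)
\end{verbatim}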
The serial code is now run using: \begin{verbatim} ./run.py conf.ini \end{verbatim} and the parallel code is run with \begin{verbatim} ./run.py conf.ini nProcessors \end{verbatim} An example of the file, that is self-explanatory, is: \begin{verbatim} # Hazel configuration File ##################### # General information ##################### [Files] Input model file = 'ATOMS/helium.mod' File with observations = 'OBSERVATION/test_2comp.prof' File with inverted profiles = 'test.inversion' File with inverted parameters = 'test.parameters' [Working mode] Action = 'inversion' # 'synthesis' or 'inversion' Verbose = no Linear system solver = 'LU' # 'LU' or 'CG' Stopping volume for DIRECT = 0.001 [General parameters] Synthesis mode = 'exact' # 'thin' or 'exact' Include stimulated emission = yes Include magnetic field = yes Include Paschen-Back effect = yes Include atomic level polarization = yes Include magneto-optical effects in the RT = yes Include stimulated emission in the RT = yes Multiplet = 10830 # 10830, 5876, 7065, 3889 A Line-of-sight angles = 0.0, 0.0, 90.0 # theta, chi, gamma deg Wavelength axis = -3.0, 2.5, 200 # Minimum, maximum and number of grid points ##################### # Synthesis parameters ##################### [Synthesis] Number of slabs = '1' # '1' -> single slab, '1+1' -> two slabs with same field, '1+1B' -> 2 slabs with different field, '2' -> two slabs added with a filling factor Boundary condition = 4.098e-5, 0.0, 0.0, 0.0 # I0, Q0, U0, V0 a = 0.0 height = 3.0 # Real height if positive, apparent height if negative arcsec ff = 0.0 [[Slab 1]] B = 0.0 # G thetaB = 0.0 # deg chiB = 0.0 # deg vdopp = 8.0 # km/s tau = 1.0 vmac = 0.0 # Positive is redshift km/s beta = 1.0 [[Slab 2]] B = 0.0 # G thetaB = 0.0 # deg chiB = 0.0 # deg vdopp = 0.0 # km/s tau = 0.0 vmac = 0.0 # Positive is redshift km/s beta = 1.0 ##################### # Ranges for the DIRECT method [min, max] ##################### [Ranges] a = 0,0.5 ff = 0.0,1.0 [[Slab 1]] B = 800,1100 thetaB = 0,180 chiB = 0,180 vdopp = 2,12 tau = 0.1,2 vmac = -5,5 beta = 0.5,2 [[Slab 2]] B = 800,1100 thetaB = 0,180 chiB = 0,180 vdopp = 2,12 tau = 0.1,2 vmac = -5,5 beta = 0.5,2 ##################### # Parameters to invert ##################### [Inversion] Iterations in LM = 20 Number of cycles = 4 Inversion modes = 'DIRECT', 'LM', 'DIRECT', 'LM' # 'DIRECT' for DIRECT algorithm and 'LM' for Levenberg-Marquardt [[Cycles]] a = 1, 1, 0, 0 ff = 0, 0, 0, 0 [[[Slab 1]]] B = 0, 0, 1, 1 thetaB = 0, 0, 1, 1 chiB = 0, 0, 1, 1 vdopp = 1, 1, 0, 0 tau = 1, 1, 0, 0 vmac = 1, 1, 0, 0 beta = 0, 0, 0, 0 [[[Slab 2]]] B = 0, 0, 0, 0 thetaB = 0, 0, 0, 0 chiB = 0, 0, 0, 0 vdopp = 0, 0, 0, 0 tau = 0, 0, 0, 0 vmac = 0, 0, 0, 0 beta = 0, 0, 0, 0 [[Weights]] Stokes I = 1.0, 1.0, 1.0, 1.0 Stokes Q = 0.0, 0.0, 1.0, 1.0 Stokes U = 0.0, 0.0, 1.0, 1.0 Stokes V = 0.0, 0.0, 1.0, 1.0 \end{verbatim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Input files} \H\ is controlled via four configuration files. All configuration files are fully commented, so that changing any parameter should be an easy task. In the following, we describe them step by step. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{\texttt{config\_inversion.dat}} This file can be considered as the main configuration file and it is the only one that has to have a fixed name. 
This file is used to indicate the names of the input files, the names of the output files, verbosity level and to decide whether \H\ is to be applied to work in synthesis or inversion mode. Using the example included in the present version of \H, we analyze one by one all the inputs. \begin{verbatim} # Input model file 'ATOMS/helium.mod' \end{verbatim} Definition of the file with the atomic model. See \S\ref{sec:atomic_model} for an explanation of the file format. \begin{verbatim} # Initial parameters file 'init_parameters.dat' \end{verbatim} Definition of the file with the initial parameters of the problem. The values of the parameters in this file are taken as initial values for the inversion or for the synthesis. See \S\ref{sec:init_parameters} for a detailed description of the file. \begin{verbatim} # Range of parameters for the DIRECT method 'direct_range.dat' \end{verbatim} This file is used to define the lower and upper limits of the intervals inside which the DIRECT method searches for the minimum of the $\chi^2$ function. See \S\ref{sec:direct_range} for details. \begin{verbatim} # Output for the upper level rho^K_Q(J,J') in the vertical reference frame 'ATOMIC_POL/vertical_upper.rho' # Output for the lower level rho^K_Q(J,J') in the vertical reference frame 'ATOMIC_POL/vertical_lower.rho' # Output for the upper level rho^K_Q(J,J') in the mag. field reference frame 'ATOMIC_POL/magnetic_upper.rho' # Output for the lower level rho^K_Q(J,J') in the mag. field reference frame 'ATOMIC_POL/magnetic_lower.rho' \end{verbatim} The previous lines define the output files where the spherical tensor components of the density matrix are saved. Note that the code stores only the density matrix elements of the upper and lower level of the desired transition. The elements of the atomic density matrix depend on the chosen reference system, and the two most desired reference systems are the one in which the quantization axis is chosen along the solar local vertical direction and the one in which the quantization axis is chosen along the magnetic field vector. \begin{verbatim} # Output absorption/emission coefficients 'INVERTED/rtcoef.emer' # Output absorption/emission coefficients neglecting atomic polarization 'INVERTED/rtcoef_noatompol.emer' \end{verbatim} The emission coefficients $\epsilon_{I,Q,U,V}$, the absorption coefficients $\eta_{I,Q,U,V}$ and the anomalous dispersion coefficients $\rho_{Q,U,V}$ for each wavelength point are saved in these files. The first file includes the effects of atomic level polarization, while the second one neglects its influence. \begin{verbatim} # File with the observed profiles 'OBSERVATION/test.prof' \end{verbatim} When using the code in the inversion mode, this file is the one used for the input of the observed Stokes profiles. The format of this file depends on which version of the code is used. For \H, it is very simple. The first line indicates the number of wavelength points and the normalization (use 'cont' or 'peak'). Then, a table with nine columns gives the value of the wavelength shift with respect to the center of the multiplet, the Stokes vector at each wavelength normalized to the maximum intensity, and an estimation of the noise standard deviation at each wavelength normalized to the maximum intensity. See the example file contained in the \H\ distribution for more details. Note that these lines have to be present in the input file even if \H\ is used in synthesis mode. 
When using \HM, the input file is more complicated and is described in \S\ref{sec:phazel_files}.
\begin{verbatim}
# File with the inverted profiles
'test.inversion'
# File with the parameters from the inversion
'test.parameters'
\end{verbatim}
The final Stokes profiles resulting from the synthesis or inversion are saved in the file indicated in the first line. The format is the same as that explained for the file containing the observation. When \H\ is run in inversion mode, the final inferred parameters of the model are saved in the file indicated in the second line. Again, for \HM\ the output files are described in \S\ref{sec:phazel_files}.
\begin{verbatim}
# File that sets the parameters to invert
'invert_parameters.dat'
\end{verbatim}
This file defines which parameters to invert in the inversion mode, together with the algorithm to be used in each cycle and the weight used for each Stokes parameter.
\begin{verbatim}
# Verbose mode (0-> no, 1-> yes)
0
\end{verbatim}
Flag to enable or disable the verbose mode. For the inversion of Stokes profiles affected by atomic level polarization it is sometimes useful to turn the verbose mode on to monitor the progress of the calculation.
\begin{verbatim}
# Linear system solver (0-> LU, 1-> CG)
0
\end{verbatim}
This flag is used to choose the algorithm that solves the linear system of statistical equilibrium equations. For relatively simple models, the LU decomposition does a very good job in terms of speed. If the number of unknowns (i.e., of $\rho^K_Q(J,J')$ elements) turns out to be of the order of or larger than $10^3$, conjugate gradient (CG) methods are a much better option. We recommend using the LU decomposition when possible and moving to the CG solution only when necessary. The CG solver is based on routines developed by Dr. Mark K. Seager from Lawrence Livermore National Lab.
\begin{verbatim}
# Optically thin (0), slab no-MO (1), M-E (2), slab DELOPAR (3), simplified slab (4), exact slab (5)
5
\end{verbatim}
This flag is used to choose the level of approximation for the solution of the radiative transfer equation. The meaning of each option is explained below in \S\ref{sec:radiative_transfer}.
\begin{verbatim}
# Synthesis mode -> 0 , Inversion mode -> 1
0
\end{verbatim}
This flag controls the working mode of the code (synthesis or inversion).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\texttt{init\_parameters.dat}}
\label{sec:init_parameters}
This important file establishes the parameters of the model, together with the definition of the scattering geometry. It also includes flags to turn on or off different physical mechanisms. In the synthesis mode, the values in this file are used to carry out the synthesis. In the inversion mode, the values in this file are chosen as initial conditions for those parameters that are left free. For those that are kept fixed, the code uses the values defined in this file. We explain them step by step.
\begin{verbatim}
# Include stimulated emission (0-> no, 1-> yes)
1
\end{verbatim}
This flag is used to take into account or discard the effect of stimulated emission in the emergent Stokes profiles. Although stimulated emission is negligible for most solar applications, it can be of importance for very strong radiation fields. We recommend always setting it to 1, since the computational time is barely affected by this flag.
\begin{verbatim}
# Include magnetic field (0-> no, 1-> yes)
1
\end{verbatim}
This flag can be used to slightly reduce the computational work in the non-magnetic case because, if set to zero, the magnetic kernel [see Eq. (\ref{eq:see})] is not calculated.
\begin{verbatim}
# Include depolarization rates (0-> no, 1-> yes)
0
# Value of delta if depol. rates are included (not used if prev. value = 0)
1.d14
\end{verbatim}
In the present version of \H\ it is possible to include the effect of depolarizing collisions only in the ground level of the atomic system. In case the effect of collisions is to be accounted for, set the first parameter to 1 and give the collisional rate in the next parameter in units of s$^{-1}$.
\begin{verbatim}
# Include Paschen-Back effect (0-> no, 1-> yes)
1
\end{verbatim}
The effect of a magnetic field on the energy levels of the atomic system can be calculated under the approximation of the linear Zeeman effect or in the general case of the intermediate Paschen-Back effect. If this flag is set to 0, the approximation of the linear Zeeman effect is used and no perturbations between different $J$ levels of a term are taken into account. If the flag is set to 1, the general theory of the Paschen-Back effect is used to calculate the wavelength positions and the strengths of the $\pi$ and $\sigma$ components. The difference in the computational work between both approaches is rather small.
\begin{verbatim}
# Number of slabs (1-> 1 slab, 2-> 2 slabs with same B, 3-> 2 slabs with different B, -2 -> 2 slabs with filling factor)
\end{verbatim}
\H\ can be run with one slab of constant physical properties (option 1) or with two slabs (options 2, 3 and -2). The difference between options 2 and 3 is that option 2 considers both slabs to have exactly the same field, while option 3 considers two different fields. As a consequence, the computing time is smaller for option 2. In both options, the second slab is placed in front of the first one, so that the boundary condition of the second slab is the emergent radiation from the first. In option -2, the radiation emerging from both slabs is added, weighted with a filling factor, which is indicated below.
\begin{verbatim}
# Magnetic field strength [G], thetaB [degrees], chiB [degrees]
0.3d0 90.d0 90.d0
\end{verbatim}
The magnetic field vector is defined here: the strength in G and the inclination and azimuth angles in degrees. The angles are defined with respect to the vertical direction in the atmosphere, as shown in Fig. \ref{fig:geometry}. Note that, if the azimuth of the field is set to 999, the random-azimuth solution is obtained following the strategy explained in Appendix C of \cite{belluzzi07}. If two slabs are used (setting option 3 or -2 above), put the two field vectors next to each other in the format $(B,\theta_B,\chi_B)_1 (B,\theta_B,\chi_B)_2$.
\begin{verbatim}
# Apparent height (if <0) or real height (if >0) of the atoms in arcsec
3.d0
\end{verbatim}
The tensors $J^0_0$ and $J^2_0$ that quantify the mean intensity of the radiation field and its anisotropy are calculated assuming a standard solar center-to-limb variation (CLV) and taking into account geometrical effects. This parameter gives the height at which the slab of atoms is placed with respect to the surface of the Sun.
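For orientation (this conversion is ours and is not used anywhere by the code), an angular distance of $1''$ corresponds to roughly $725$ km on the Sun at the mean Sun--Earth distance, so the default value of 3 arcsec places the slab approximately $2200$ km above the solar surface.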
\begin{small}
\begin{verbatim}
# Optical depth of the slab in the maximum of I (slab) or strength of the line (ME)
1.0d0
\end{verbatim}
\end{small}
This quantity is the optical depth of the slab at the wavelength position of the maximum absorption or emission in Stokes $I$. For example, for the 10830 \AA\ multiplet of He \textsc{i}, this is the position of the red blended component. If two slabs with option 2 or 3 are used, put the two optical depths together. If option -2 is used, then add the filling factor as a third number.
\begin{verbatim}
# Source function increase
1.d0
\end{verbatim}
The source function of the slab will be multiplied by this number. This is a way to generate lines in emission even when the slab is seen on the solar disk. If two components (one after the other) are used, this number only modifies the source function of the second component. This allows us to simulate self-absorption in the code.
\begin{verbatim}
# Boundary Stokes parameters (I0,Q0,U0,V0)
4.098d-5 0.d0 0.d0 0.d0
\end{verbatim}
Boundary conditions for the Stokes vector used in the solution of the radiative transfer equation. If the radiation field is the photospheric continuum, the IDL routine \texttt{IDL\_routines/solar\_field.pro} can be used to return an estimation.
\begin{verbatim}
# Transition where to compute the emergent Stokes profiles
1
\end{verbatim}
From the transitions defined in the atomic model, the code calculates the emergent Stokes profiles for the chosen transition. For the moment, only one transition at a time is allowed. We plan to extend this to synthesize several lines.
\begin{verbatim}
# Include atomic level polarization? (0-> no, 1-> yes)
1
\end{verbatim}
The synthesis or inversion options can be used taking into account or neglecting the presence of atomic level polarization. This flag controls it.
\begin{verbatim}
# Observation angle with respect to the local solar vertical theta,chi,gamma [degrees]
0.d0 0.d0 90.d0
\end{verbatim}
The line-of-sight direction is defined using the angles described in Fig. \ref{fig:geometry}. All angles are given in degrees.
\begin{verbatim}
# Wavelength axis: minimum, maximum and number of grid points
-3.d0 2.5d0 200
\end{verbatim}
In case the code is run in synthesis mode, this line is used to set the lower and upper limits (in cm$^{-1}$) of the wavelength axis. The last parameter gives the number of wavelength points to be used. In the inversion mode, the wavelength axis is chosen automatically from the observation and these numbers are overridden.
\begin{verbatim}
# Line wavelength [A], Doppler velocity [km/s] and damping [a]
10829.0911d0 6.5d0 0.d0
\end{verbatim}
This line is used to define the wavelength of the multiplet (wavelength of the $(L,S) \to (L',S')$ transition), the Doppler width of the line in km s$^{-1}$ and the reduced damping constant. If two slabs (through options 3 or -2) are used, add the Doppler width of the second component next to the first one. Concerning the reduced damping constant, if its value is negative, it is computed from the natural damping and the Doppler broadening, and the absolute value of the input is then used as an enhancement factor (so you should use $-1$ if you want the natural width).
\begin{verbatim}
# Macroscopic velocity [km/s] (>0 is a redshift)
0.d0
\end{verbatim}
This defines the wavelength shift produced by the presence of a bulk motion of the plasma. Note that positive velocities imply redshifts. If two components (options 2, 3 or -2) are used, put the two bulk velocities.
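As a concrete example (the numbers are ours and not part of the input file), a bulk velocity of $+5$ km s$^{-1}$ shifts the He \textsc{i} 10830 \AA\ multiplet by $\Delta\lambda = \lambda_0\, v_\mathrm{mac}/c \simeq 10829.0911 \times 5/(2.998\times 10^{5}) \simeq 0.18$ \AA\ towards the red.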
\begin{verbatim}
# Include magneto-optical effects in the RT
1
\end{verbatim}
It is possible to include (1) or neglect (0) the influence of the anomalous dispersion coefficients $\rho_{Q,U,V}$ in the calculation of the emergent Stokes profiles.
\begin{verbatim}
# Include stimulated emission in the RT
1
\end{verbatim}
This flag controls whether we include (1) or neglect (0) the influence of stimulated emission in the calculation of the emergent Stokes profiles.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\texttt{direct\_range.dat}}
\label{sec:direct_range}
The DIRECT global optimization method is used to give a first estimation of the parameters, from which the Levenberg-Marquardt method is applied to locate the minimum of the $\chi^2$ surface. The behavior of the DIRECT method is controlled with this file, in which we must specify the upper and lower limits of the model parameters, together with details about the stopping criterion. In the following, we describe all the options in detail.
\begin{verbatim}
# Output file
'direct.location'
\end{verbatim}
The DIRECT method tries to evaluate the merit function $\chi^2$ as few times as possible. The code saves in this file the values of the parameters at which the algorithm has evaluated the merit function. This can be useful for analyzing the presence of ambiguities. In this case, the method will clearly mark the position of the possible solutions by evaluating the merit function more often in the surroundings of the compatible solutions. Note that these lines are absent from the \HM\ configuration file.
\begin{verbatim}
# Maximum number of function evaluations (<0 -> don't use this criteria)
-1
# Reduction in the volume (<0 -> don't use this criteria, typically 0.01)
0.001
\end{verbatim}
The previous two lines are used to indicate the stopping criterion for the DIRECT method. An early stop will probably give a first estimation of the solution that is far from the final result. Letting the code run for many iterations may degrade the computing time too much because of the poor local convergence properties of the DIRECT scheme. The first option permits the user to stop after a fixed number of evaluations of the merit function. The second option permits the user to stop when the ratio between the hypervolume where the global minimum is located and the original hypervolume is smaller than the given threshold. We have verified that 0.001 gives very good results. Setting one of the two parameters to values $< 0$ disables it.
\begin{verbatim}
# Magnetic field (0-Bmax)
800.d0 1100.d0
# thetab (0 .. 180)
30.d0 180.d0
# chib (0 .. 180)
-180.d0 0.d0
# vdopp (0 .. 20)
2.d0 7.d0
# dtau (0 .. 5)
0.d0 1.d0
# delta_collision (0 .. 18)
0.d0 18.d0
# vmacro (-10 .. 10)
-10.d0 10.d0
# damping (0 .. 4)
0.d0 4.d0
# beta (0 .. 10)
0.d0 1.d0
# height (0 .. 100)
0.d0 100.d0
# dtau2 (0 .. 5)
0.d0 2.d0
# vmacro2 (-10 .. 10)
25.d0 35.d0
# Magnetic field 2 (0-Bmax)
800.d0 1100.d0
# thetab 2 (0 .. 180)
30.d0 180.d0
# chib 2 (0 .. 180)
-180.d0 0.d0
# vdopp 2 (0 .. 20)
2.d0 12.d0
\end{verbatim}
The previous lines define the space of parameters where the DIRECT method will look for the global minimum.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\texttt{invert\_parameters.dat}}
\label{sec:invert_parameters}
This file is used to set the behavior of the inversion mode: the structure of the inversion cycles and which parameters are free and which are kept fixed.
\begin{verbatim}
# Maximum number of iterations
20
\end{verbatim}
This parameter sets the maximum number of Levenberg-Marquardt (LM) iterations to be carried out in each cycle. Sometimes the LM scheme stops before reaching the maximum number of iterations because the relative change in the parameters from one iteration to the next is below 10$^{-4}$.
\begin{verbatim}
# Number of cycles
2
\end{verbatim}
The optimal iteration scheme is composed of combinations of cycles. In the first cycle, the DIRECT method is used to give a first estimation of the solution. In the second cycle, the LM method is used to refine the solution until arriving at the final one. This parameter sets the number of cycles used.
\begin{verbatim}
# Invert the magnetic field strength
1 1 1 1
# Invert the magnetic field inclination
1 1 1 1
# Invert the magnetic field azimuth
1 1 0 0
# Invert the Doppler width
0 0 0 0
# Invert the optical depth or strength of the line
0 0 0 0
# Invert the D^2 of the lower level
0 0 0 0
# Invert the macroscopic velocity
0 0 0 0
# Invert the damping
0 0 0 0
# Invert the source function gradient
0 0 0 0
# Invert the height of the He atoms
0 0 0 0
# Invert the optical depth or strength of the line of component 2
0 0 0 0
# Invert the macroscopic velocity of component 2
0 0 0 0
# Invert the magnetic field strength of component 2
0 0 1 1
# Invert the magnetic field inclination of component 2
0 0 1 1
# Invert the magnetic field azimuth of component 2
0 0 1 1
# Invert the Doppler width of component 2
0 0 0 0
\end{verbatim}
Depending on the number of cycles, the previous lines define whether a parameter is inverted (setting a 1 in the corresponding cycle) or kept fixed to the value given in the \texttt{init\_parameters.dat} file (setting a 0 in the corresponding cycle). The number of 0s/1s in each line has to be larger than or equal to the number of cycles.
\begin{verbatim}
# Weights for Stokes I in each cycle
1.d0 1.d0 1.d0 1.d0
# Weights for Stokes Q in each cycle
1.d0 1.d0 1.d0 1.d0
# Weights for Stokes U in each cycle
1.d0 1.d0 1.d0 1.d0
# Weights for Stokes V in each cycle
1.d0 1.d0 1.d0 1.d0
\end{verbatim}
Since the inversion is based on the gradient-descent minimization of the $\chi^2$ merit function and not on sampling methods, it is sometimes important to modify the weight of each Stokes parameter in order to increase the sensitivity of the $\chi^2$-function to some model parameters. The code allows the user to change the relative weight of each Stokes parameter in each cycle.
\begin{verbatim}
# Inversion modes (1-> LM, 2-> DIRECT, 3-> PIKAIA)
2 1 2 1
\end{verbatim}
The optimization method used in each cycle is set in this line. Note that the DIRECT+LM scheme has empirically proved to work very well. The possibility of using genetic optimization based on the Pikaia algorithm is still in a preliminary phase. However, the large number of function evaluations that any genetic algorithm needs makes it difficult to beat the DIRECT+LM combination.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Atomic models}
\label{sec:atomic_model}
Atomic models have to be defined in \H\ in order to carry out a calculation. This section describes the model atom file in detail by using the example \texttt{helium.mod} that is included in the present version of \H.
\begin{verbatim}
2
5
\end{verbatim}
The previous two numbers define the general properties of the atom. The first line of the file is equal to $2S$, where $S$ is the value of the spin of the terms.
In the example, $S=1$. At present, the code does not treat transitions between terms of different multiplicity, which are, in any case, of reduced importance due to their small transition probability. The second line contains the number of terms included in the model atom. This example represents the triplet system of He \textsc{i} with the lowest five terms: 2s$^3$S, 3s$^3$S, 2p$^3$P, 3p$^3$P and 3d$^3$D.
\begin{verbatim}
1 0
0.00
2 2
0.00 -0.987913 -1.064340
3 0
0.00
4 2
0.00 -0.270647 -0.292616
5 4
0.00 -0.044187 -0.046722
\end{verbatim}
The previous lines define the terms included in the model. The information for each term consists of a line with an index (1,2,\ldots) that is used just to label each term, together with the value of $2L$, where $L$ is the value of the electronic orbital angular momentum. Then, for each term, we must supply a list containing the energy separation in cm$^{-1}$ between each $J$-level and the level with the smallest absolute value of $J$. In case only one value of $J$ is possible in the term, just put 0 in the energy difference.
\begin{verbatim}
4
1 1 2 1.022d7 10829.0911 1.0000000 1.0000000 0.0000000
2 1 4 9.478d6 3888.6046 0.2000000 1.0000000 0.0000000
3 2 3 2.780d7 7065.7085 1.0000000 1.0000000 0.0000000
4 2 5 7.060d7 5875.9663 1.0000000 1.0000000 0.0000000
\end{verbatim}
Finally, the list of transitions has to be supplied. The first number indicates the number of radiative transitions included in the model. Then, the list contains the following numbers for each transition: index number, index of the lower term, index of the upper term, Einstein coefficient for spontaneous emission $A_{ul}$ of the transition, wavelength of the transition in \AA, modification factor $f(\bar{n})$, modification factor $f(w)$ and value of $J^1_0/J^0_0$. The modification factors $f(\bar{n})$ and $f(w)$ are multiplied by the mean number of photons per mode $\bar{n}$ and the anisotropy factor $w$, respectively. Since \H\ uses the values of $\bar{n}$ and $w$ calculated from the tabulated solar CLV and taking into account geometrical effects, these factors can be used to analyze the behavior of the emergent Stokes profiles when, for some reason, the anisotropy or the intensity of the radiation field is increased or decreased by an arbitrary factor. Finally, if the radiation illuminating the atoms has non-zero net circular polarization, it is possible to include its effect in the statistical equilibrium equations by giving the value of $J^1_0/J^0_0$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\includegraphics[width=\columnwidth]{f5.eps}
\caption{Screen dump of the graphical front-end used for the synthesis. \label{fig:synthesis_GUI}}
\end{figure}
\section{Graphical front-ends}
Although the code can be run from the command line by modifying the input files by hand, \H\ also contains two user-friendly graphical front-ends (GUIs) for the simple execution and analysis of the results. Note that the directory \texttt{IDL\_routines} has to be in your IDL path.
\begin{figure}[!t]
\includegraphics[width=0.48\columnwidth]{inv1.eps}%
\hspace{0.1cm}
\includegraphics[width=0.48\columnwidth]{inv2.eps}
\includegraphics[width=0.48\columnwidth]{inv3.eps}%
\hspace{0.5cm}
\includegraphics[width=0.48\columnwidth]{inv4.eps}
\caption{Screen dump of the graphical front-end used for the inversion.
\label{fig:inversion_GUI}}
\end{figure}
\subsection{Synthesis}
The synthesis front-end is located in the directory \texttt{Widget\_Synth} and is invoked with the following commands:
\begin{verbatim}
IDL> .r hazel
IDL> hazel
\end{verbatim}
Figure \ref{fig:synthesis_GUI} shows the GUI for the synthesis mode. All the parameters explained in the previous sections (fundamentally those in \S\ref{sec:init_parameters}) are present in the GUI. All the parameters are very simple to modify (when changing numerical values in the GUI, always remember to press \texttt{Return} to activate the change) and, after clicking on \textbf{Calculate}, the window is updated with the new Stokes profiles. The GUI also shows the value of the solar radiation field when the inclination of the line-of-sight or the wavelength of the multiplet is changed. This value (which can be introduced as the boundary condition $I_0$) is given next to the height of the slab and indicated with the label ``Allen''. In case of crashes, the GUI can be restarted with the following commands:
\begin{verbatim}
IDL> .r hazel
IDL> hazel, /reset
\end{verbatim}
\subsection{Inversion}
The inversion front-end is located in the directory \texttt{Widget\_Inv} and is invoked with the following commands:
\begin{verbatim}
IDL> .r hazel_inv
IDL> hazel_inv
\end{verbatim}
Again, in case of crashes, the GUI can be restarted with the following commands:
\begin{verbatim}
IDL> .r hazel_inv
IDL> hazel_inv, /reset
\end{verbatim}
The GUI for the inversion is more complex because of the large number of parameters that have to be changed. For this reason, the GUI is composed of four pages, as indicated in Fig. \ref{fig:inversion_GUI}. The first page is used to select the output file, together with the atomic system and multiplet to be used; the button \textbf{Run inversion} calls \H\ and updates the state of the best model in the plot window. The second page is used simply to load the file with the observed Stokes profiles. A button is also available to plot the observed data. The third page controls the behavior of the DIRECT algorithm. It is essentially a graphical representation of the \texttt{direct\_range.dat} file. Finally, the fourth page controls the behavior of the cycles, the value of the fixed parameters, the weights for each Stokes parameter and the level of physical realism introduced in the simulation.
\section{\HM\ input/output files}
\label{sec:phazel_files}
Both input and output files for \HM\ are NetCDF files.
\subsection{Input files}
The input file contains the observations and information about the observing position and boundary condition. The file consists of the following variables:
\begin{itemize}
\item lambda: vector of size \textit{nlambda} containing the wavelength axis with respect to the center of the multiplet.
\item map: array of size \textit{(npixel,8,nlambda)} containing the Stokes vector $(I,Q,U,V)$ and the associated standard deviation of the noise $(\sigma_I,\sigma_Q,\sigma_U,\sigma_V)$.
\item boundary: array of size \textit{(npixel,4)} containing the boundary condition for every inverted pixel.
\item height: vector of size \textit{npixel} which contains the height of the slabs for every pixel.
\item obs\_theta: vector of size \textit{npixel} which contains the observing angle $\theta$ for every pixel.
\item obs\_gamma: vector of size \textit{npixel} which contains the observing angle $\gamma$ that defines the positive reference for Stokes $Q$ for every pixel.
\item mask: array of size \textit{(nx,ny)} which tells whether a given pixel will be inverted.
\item normalization: variable indicating whether the profiles are normalized to the peak amplitude or the continuum of Stokes $I$.
\item pars: array of size \textit{(npixel,npars)} which contains the initial values of the model parameters. These will be used to reinvert some pixels or, for instance, to refine the ambiguous solutions.
\end{itemize}
The routine \texttt{gen\_netcdf.pro} in the directory \texttt{IDL\_routines} and the routine \texttt{genNetCDF.py} in \texttt{pyRoutines} provide functions that generate such a file from all the variables passed as parameters. The order of pars is the following, depending on the number of slabs:
\begin{itemize}
\item 1-component (vector of size 8): $B$, $\theta_B$, $\chi_B$, $\tau$, $v_\mathrm{dop}$, $a$, $v_\mathrm{mac}$, $\beta$
\item 2-component 1+1 with same field (vector of size 11): $B$, $\theta_B$, $\chi_B$, $\tau_1$, $\tau_2$, $v_\mathrm{dop}$, $a$, $v_\mathrm{mac1}$, $v_\mathrm{mac2}$, $\beta$, $\beta_2$
\item 2-component 1+1 with different field (vector of size 15): $B_1$, $\theta_{B1}$, $\chi_{B1}$, $B_2$, $\theta_{B2}$, $\chi_{B2}$, $\tau_1$, $\tau_2$, $v_\mathrm{dop}$, $v_\mathrm{dop2}$, $a$, $v_\mathrm{mac1}$, $v_\mathrm{mac2}$, $\beta$, $\beta_2$
\item 2-component 2 with different field with filling factor (vector of size 16): $B_1$, $\theta_{B1}$, $\chi_{B1}$, $B_2$, $\theta_{B2}$, $\chi_{B2}$, $\tau_1$, $\tau_2$, $v_\mathrm{dop}$, $v_\mathrm{dop2}$, $a$, $v_\mathrm{mac1}$, $v_\mathrm{mac2}$, $\mathrm{ff}$, $\beta$, $\beta_2$
\end{itemize}
\subsection{Output files}
The results of the inversion are saved in two files defined in the \texttt{config\_inversion.dat} configuration file. The file with the inverted profiles contains the following variables:
\begin{itemize}
\item lambda: vector of size \textit{nlambda} containing the wavelength axis with respect to the center of the multiplet.
\item map: array of size \textit{(npixel,4,nlambda)} containing the synthetic Stokes vector $(I,Q,U,V)$ for every pixel.
\end{itemize}
The file with the inverted parameters contains the following variable:
\begin{itemize}
\item map: array of size \textit{(npixel,ncolumns)} containing the parameters of the inversion.
\end{itemize}
The number of columns depends on the selected model:
\begin{itemize}
\item One-slab: nine columns with the vector $(B,\theta_B,\chi_B,h,\tau,v_\mathrm{th},a,v_\mathrm{mac},\beta)$.
\item Two-slab with same magnetic field: eleven columns with the vector \\ $(B,\theta_B,\chi_B,h,[\tau]_1,[\tau]_2,v_\mathrm{th},a,[v_\mathrm{mac}]_1, [v_\mathrm{mac}]_2,\beta)$.
\item Two-slab with different magnetic field: fifteen columns with the vector \\ $([B]_1,[\theta_B]_1,[\chi_B]_1,[B]_2,[\theta_B]_2,[\chi_B]_2, h,[\tau]_1,[\tau]_2,[v_\mathrm{th}]_1,[v_\mathrm{th}]_2,a,[v_\mathrm{mac}]_1,[v_\mathrm{mac}]_2,\beta)$.
\end{itemize}
The file \texttt{read\_results.pro} in the \texttt{RunMPI} directory shows how to read the files from IDL.
\subsection{Ambiguities}
Remember that the results of \H\ are potentially affected by ambiguities, which have to be taken into account. There is a utility written in IDL that, given an inverted map, obtains all the other solutions which are ambiguous in the saturation regime. This can be called, including the appropriate paths and discarding the final \texttt{.nc} extension, by:
\begin{verbatim}
IDL> disamb, 'file_with_inversions', 'file_with_observations', angleObs
\end{verbatim}
where \texttt{angleObs} is the observation angle $\theta$ (so that it is 90$^\circ$ for an observation exactly at the limb).
This program can be called with the additional keyword \texttt{/gen\_files\_inversion}, which then generates a set of observations, configuration files and a file to run \HM. This is useful in case the line is not in the saturation regime. In this case, the ambiguous solutions found by the code are not strictly valid and one should refine them with a final LM cycle in which $B$, $\theta_B$ and $\chi_B$ are left free. The solution to the ambiguities in the saturation regime is described in Section \ref{sec:ambiguities}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Calling Hazel from Python}
We have developed a wrapper that allows the user to call the synthesis routines of \H\ from Python. To do so, just enter the directory \texttt{SourcePy} and type
\begin{verbatim}
python setup.py build_ext --inplace
\end{verbatim}
and a library \texttt{pyhazel.so} will be generated (and also copied to the directory \texttt{RunPy}). In this very same directory you can find the \texttt{test.py} file, which shows how to call the wrapper.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Basic Equations}
We consider a constant-property slab of atoms, located at a height $h$ above the visible solar ``surface", in the presence of a deterministic magnetic field of arbitrary strength $B$, inclination $\theta_B$ and azimuth $\chi_B$ (see Fig. \ref{fig:geometry}). The slab's optical thickness at the wavelength and line of sight under consideration is $\tau$. We assume that all the atoms inside this slab are illuminated from below by the photospheric solar continuum radiation field, whose center-to-limb variation has been tabulated by \cite{pierce00}. The ensuing anisotropic radiation pumping produces population imbalances and quantum coherences between pairs of magnetic sublevels, even among those pertaining to the different $J$-levels of the adopted atomic model. This atomic level polarization and the Zeeman-induced wavelength shifts between the $\pi$ ($\Delta{M}=M_u-M_l=0$), $\sigma_{\rm blue}$ ($\Delta{M}=+1$) and $\sigma_{\rm red}$ ($\Delta{M}=-1$) transitions produce polarization in the emergent spectral line radiation.

In order to facilitate the understanding of the code, in the following we summarize the basic equations which allow us to calculate the spectral line polarization taking rigorously into account the joint action of atomic level polarization and the Hanle and Zeeman effects. To this end, we have applied the quantum theory of spectral line polarization, which is described in great detail in the monograph by \cite{landi_landolfi04}. We have also applied several methods of solution of the Stokes-vector transfer equation, some of which can be considered as particular cases of the two general methods explained in \S6 of \cite{trujillo03}.
\begin{figure}
\includegraphics[width=\columnwidth]{f1.eps}
\caption{The geometry for the scattering event. The $Z$-axis is placed along the vertical to the solar atmosphere. The magnetic field vector, $\mathbf{B}$, is characterized by its modulus $B$, the inclination angle $\theta_B$ and the azimuth $\chi_B$. The line-of-sight, indicated by the unit vector $\mathbf{\Omega}$, is characterized by the two angles $\theta$ and $\chi$. The reference direction for Stokes $Q$ is defined by the vector $\mathbf{e}_1$ on the plane perpendicular to the line-of-sight.
This vector makes an angle $\gamma$ with respect to the plane formed by the vertical and the line-of-sight. In the figures showing examples of the emergent Stokes profiles, our choice for the positive reference direction for Stokes $Q$ is $\gamma=90^\circ$, unless otherwise stated. For off-limb observations, we have $\theta=90^\circ$, while for observations on the solar disk, we have $\theta<90^\circ$. Note also that $\chi$ is generally taken to be $0^\circ$. \label{fig:geometry}} \end{figure} \subsection{The radiative transfer approach} \label{sec:radiative_transfer} The emergent Stokes vector $\mathbf{I}(\nu,\mathbf{\Omega})=(I,Q,U,V)^{\dag}$ (with $\dag$=transpose, $\nu$ the frequency and $\mathbf{\Omega}$ the unit vector indicating the direction of propagation of the ray) is obtained by solving the radiative transfer equation \begin{equation} \frac{d}{ds}\mathbf{I}(\nu,\mathbf{\Omega}) = \bm{\epsilon}(\nu,\mathbf{\Omega}) - \mathbf{K}(\nu,\mathbf{\Omega}) \mathbf{I}(\nu,\mathbf{\Omega}), \label{eq:rad_transfer} \end{equation} where $s$ is the geometrical distance along the ray under consideration, $\bm{\epsilon}(\nu,\mathbf{\Omega})=({\epsilon}_I,{\epsilon}_Q,{\epsilon }_U,{\epsilon}_V)^{\dag}$ is the emission vector and \begin{equation} \mathbf{K} = \left( \begin{array}{cccc} \eta_I & \eta_Q & \eta_U & \eta_V \\ \eta_Q & \eta_I & \rho_V & -\rho_U \\ \eta_U & -\rho_V & \eta_I & \rho_Q \\ \eta_V & \rho_U & -\rho_Q & \eta_I \end{array} \right) \label{eq:propagation} \end{equation} is the propagation matrix. Alternatively, introducing the optical distance along the ray, ${\rm d}{\tau}=-{\eta_I}{\rm d}s$, one can write the Stokes-vector transfer Eq. (\ref{eq:rad_transfer}) in the following two ways: \begin{itemize} \item The first one, whose formal solution requires the use of the evolution operator introduced by \cite{landi_landi85}, is \begin{equation} {{d}\over{d{\tau}}}{\bf I}\,=\,{\bf K}^{*} {\bf I}\,-\,{\bf S}, \label{eq:rad_transfer_peo} \end{equation} where ${\bf K}^{*}={\bf K}/{\eta_I}$ and ${\bf S}=\bm{\epsilon}/{\eta_I}$. The formal solution of this equation can be seen in eq. (23) of \cite{trujillo03}. \item The second one, whose formal solution does not require the use of the above-mentioned evolution operator is \citep[e.g.,][]{rees_delo89} \begin{equation} {{d}\over{d{\tau}}}{\bf I}\,=\,{\bf I}\,-\,{\bf S}_{\rm eff}, \label{eq:rad_transfer_delo} \end{equation} where the effective source-function vector $\,{\bf S}_{\rm eff}\,=\,{\bf S}\,-\, {\bf K}^{'}{\bf I},\,\,\,$ being $\,{\bf K}^{'}={\bf K}^{*}-{\bf 1}$ (with $\bf 1$ the unit matrix). The formal solution of this equation can be seen in eq. (26) of \cite{trujillo03}. \end{itemize} Once the coefficients $\epsilon_I$ and $\epsilon_X$ (with $X=Q,U,V$) of the emission vector and the coefficients $\eta_I$, $\eta_X$, and $\rho_X$ of the $4\times4$ propagation matrix are known at each point within the medium it is possible to solve formally Eq. (\ref{eq:rad_transfer_peo}) or Eq. (\ref{eq:rad_transfer_delo}) for obtaining the emergent Stokes profiles for any desired line of sight. Our computer program considers the following levels of sophistication for the solution of the radiative transfer equation: \begin{itemize} \item {\em Numerical Solutions}. The most general case, where the properties of the slab vary along the ray path, has to be solved numerically. To this end, two efficient and accurate methods of solution of the Stokes-vector transfer equation are those proposed by \cite{trujillo03} (see his eqs. 
(24) and (27), respectively). The starting points for the development of these two numerical methods were Eq. (\ref{eq:rad_transfer_peo}) and Eq. (\ref{eq:rad_transfer_delo}), respectively. Both methods can be considered as generalizations, to the Stokes-vector transfer case, of the well-known short characteristics method for the solution of the standard (scalar) transfer equation. \item {\em Exact analytical solution of the problem of a constant-property slab including the magneto-optical terms of the propagation matrix}. For the general case of a constant-property slab of arbitrary optical thickness we actually have the following analytical solution, which can be easily obtained as a particular case of eq. (24) of \cite{trujillo03}: \begin{equation} {\bf I}={\rm e}^{-{\mathbf{K}^{*}}\tau}\,{\bf I}_{\rm sun}\,+\,\left[{\mathbf{K}^{*}}\right]^{-1}\, \left( \mathbf{1} - {\rm e}^{-{\mathbf{K}^{*}}\tau} \right) \,\mathbf{S}, \label{eq:slab_peo} \end{equation} where $\mathbf{I}_{\rm sun}$ is the Stokes vector that illuminates the slab's boundary that is most distant from the observer. We point out that the exponential of the propagation matrix ${\mathbf{K}^{*}}$ has an analytical expression similar to eq. (8.23) in \cite{landi_landolfi04}. \item {\em Approximate analytical solution of the problem of a constant-property slab including the magneto-optical terms of the propagation matrix}. An approximate analytical solution to the constant-property slab problem can be easily obtained as a particular case of eq. (27) of \cite{trujillo03}: \begin{equation} \mathbf{I} = \left[ \mathbf{1}+\Psi_0 \mathbf{K}' \right]^{-1} \left[ \left( e^{-\tau} \mathbf{1} - \Psi_M \mathbf{K}' \right) \mathbf{I}_{\rm sun} + (\Psi_M+\Psi_0) \mathbf{S} \right], \label{eq:slab_delo} \end{equation} where the coefficients $\Psi_M$ and $\Psi_0$ depend only on the optical thickness of the slab at the frequency and line-of-sight under consideration, since their expressions are: \begin{eqnarray} \Psi_M&=& \frac{1-e^{-\tau}}{\tau} - e^{-\tau},\nonumber \\ \Psi_0 &=&1-\frac{1-e^{-\tau}}{\tau}. \end{eqnarray} Note that Eq. (\ref{eq:slab_delo}) for the emergent Stokes vector is the one used by \cite{trujillo_asensio07} for investigating the impact of atomic level polarization on the Stokes profiles of the He {\sc i} 10830 \AA\ multiplet. We point out that, strictly speaking, it can be considered only as the exact analytical solution of the optically-thin constant-property slab problem\footnote{More precisely, when the optical thickness of the slab is small in comparison with the eigenvalues of the matrix $\mathbf{K}'$.}. The reason why Eq. (\ref{eq:slab_delo}) is, in general, an approximate expression for calculating the emergent Stokes vector is because its derivation assumes that the Stokes vector within the slab varies linearly with the optical distance. However, it provides a fairly good approximation to the emergent Stokes profiles (at least for all the problems we have investigated in this paper). Moreover, the results of fig. 2 of \cite{trujillo_asensio07} remain also virtually the same when using instead the exact Eq. (\ref{eq:slab_peo}), which from a computational viewpoint is significantly less efficient than the approximate Eq. (\ref{eq:slab_delo}). \item {\em Exact analytical solution of the problem of a constant-property slab when neglecting the second-order terms of the Stokes-vector transfer equation}. 
Simplified expressions for the emergent Stokes vector can be obtained when $\epsilon_I{\gg}\epsilon_X$ and $\eta_I{\gg}(\eta_X,\rho_X)$, which justifies neglecting the second-order terms of Eq. (\ref{eq:rad_transfer}). The resulting approximate formulae for the emergent Stokes parameters are given by eqs. (9) and (10) of \cite{trujillo_asensio07}, which are identical to those used by \cite{trujillo_merenda05} for modeling the Stokes profiles observed in solar chromospheric spicules. We point out that there is a typographical error in the sentence that introduces such eqs. (9) and (10) in \cite{trujillo_asensio07}, since they are obtained only when the above-mentioned second-order terms are neglected in Eq. (\ref{eq:rad_transfer}), although it is true that there are no magneto-optical terms in the resulting equations.
\item {\em Optically thin limit}. Finally, the simplest solution is obtained when taking the optically thin limit ($\tau{\ll}1$) in the equations reported in the previous point, which leads to eqs. (11) and (12) of \cite{trujillo_asensio07}. Note that if $\mathbf{I}_{\rm sun}=0$ (i.e., $I_0=X_0=0$), then such optically thin equations imply that ${X/I}\,{\approx}\,{\epsilon_X}/{\epsilon_I}$.
\end{itemize}
The coefficients of the emission vector and of the propagation matrix depend on the multipolar components, $\rho^K_Q(J,J^{'})$, of the atomic density matrix. Let us recall now the meaning of these physical quantities and how to calculate them in the presence of an arbitrary magnetic field under given illumination conditions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The multipolar components of the atomic density matrix}
We quantify the atomic polarization of the atomic levels using the multipolar components of the atomic density matrix. We assume that the atom can be correctly described under the framework of $L$-$S$ coupling \citep[e.g.,][]{condon_shortley35}. The different $J$-levels are grouped in terms with well defined values of the electronic angular momentum $L$ and the spin $S$. We neglect the influence of hyperfine structure and assume that the energy separation between the $J$-levels pertaining to each term is very small in comparison with the energy difference between different terms. Therefore, we allow for coherences between different $J$-levels pertaining to the same term but not between the $J$-levels pertaining to different terms. As a result, we can represent the atom under the formalism of the multi-term atom discussed by \cite{landi_landolfi04}.

In the absence of magnetic fields the energy eigenvectors can be written using Dirac's notation as $|\beta L S J M\rangle$, where $\beta$ indicates a set of inner quantum numbers specifying the electronic configuration. In general, if a magnetic field of arbitrary strength is present, the vectors $|\beta L S J M\rangle$ are no longer eigenfunctions of the total Hamiltonian and $J$ is no longer a good quantum number. In this case, the eigenfunctions of the full Hamiltonian can be written as the following linear combination:
\begin{equation}
\label{eq:eigenfunctions_total_hamiltonian}
|\beta L S j M\rangle = \sum_J C_J^j(\beta L S, M) |\beta L S J M\rangle,
\end{equation}
where $j$ is a pseudo-quantum number which is used for labeling the energy eigenstates belonging to the subspace corresponding to assigned values of the quantum numbers $\beta$, $L$, $S$, and $M$, and where the coefficients $C_J^j$ can be chosen to be real.
In the presence of a magnetic field sufficiently weak so that the magnetic energy is much smaller than the energy intervals between the $J$-levels, the energy eigenvectors are still of the form $|\beta L S J M\rangle$ ($C_J^j(\beta L S, M) \approx \delta_{Jj}$), and the splitting of the magnetic sublevels pertaining to each $J$-level is linear with the magnetic field strength. For stronger magnetic fields, we enter the incomplete Paschen-Back effect regime, in which the energy eigenvectors are of the general form given by Eq. (\ref{eq:eigenfunctions_total_hamiltonian}), and the splitting among the various $M$-sublevels is no longer linear with the magnetic strength. If the magnetic field strength is further increased we eventually reach the so-called complete Paschen-Back effect regime, where the energy eigenvectors are of the form $|L S M_L M_S\rangle$ and each $L$-$S$ term splits into a number of components, each of which corresponds to particular values of $(M_L+2M_S)$.

Within the framework of the multi-term atom model the atomic polarization of the energy levels is described with the aid of the density matrix elements
\begin{equation}
\rho^{\beta L S}(jM,j'M') = \langle \beta L S j M | \rho | \beta L S j' M'\rangle,
\end{equation}
where $\rho$ is the atomic density matrix operator. Using the expression of the eigenfunctions of the total Hamiltonian given by Eq. (\ref{eq:eigenfunctions_total_hamiltonian}), the density matrix elements can be rewritten as:
\begin{equation}
\rho^{\beta L S}(jM,j'M') = \sum_{JJ'} C_J^j(\beta L S, M) C_{J'}^{j'}(\beta L S, M') \rho^{\beta L S}(JM,J'M'),
\end{equation}
where $\rho^{\beta L S}(JM,J'M')$ are the density matrix elements on the basis of the eigenvectors $| \beta L S J M\rangle$. Following \cite{landi_landolfi04}, it is helpful to use the spherical statistical tensor representation, which is related to the previous one by the following linear combination:
\begin{eqnarray}
{^{\beta LS}\rho^K_Q(J,J')} &=& \sum_{jj'MM'} C_J^j(\beta L S, M) C_{J'}^{j'}(\beta L S, M') \nonumber \\
&\times& (-1)^{J-M} \sqrt{2K+1} \threej{J}{J'}{K}{M}{-M'}{-Q} \rho^{\beta L S}(jM,j'M'),
\end{eqnarray}
where the 3-j symbol is defined as indicated in any suitable textbook on Racah algebra.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Statistical equilibrium equations}
In order to obtain the ${^{\beta LS}\rho^K_Q(J,J')}$ elements we have to solve the statistical equilibrium equations.
These equations, written in a reference system in which the quantization axis ($Z$) is directed along the magnetic field vector and neglecting the influence of collisions, can be written as \citep{landi_landolfi04}: \begin{eqnarray} \frac{d}{dt} {^{\beta LS}\rho^K_Q(J,J')} &=& -2\pi \mathrm{i} \sum_{K' Q'} \sum_{J'' J'''} N_{\beta LS}(KQJJ',K'Q'J''J''') {^{\beta LS}\rho^{K'}_{Q'}(J'',J''')} \nonumber \\ &+& \sum_{\beta_\ell L_\ell K_\ell Q_\ell J_\ell J_\ell'} {^{\beta_\ell L_\ell S}\rho^{K_\ell}_{Q_\ell}(J_\ell,J_\ell')} \mathbb{T}_A(\beta L S K Q J J', \beta_\ell L_\ell S K_\ell Q_\ell J_\ell J_\ell') \nonumber \\ &+& \sum_{\beta_u L_u K_u Q_u J_u J_u'} {^{\beta_u L_u S}\rho^{K_u}_{Q_u}(J_u,J_u')} \Big[ \mathbb{T}_E(\beta L S K Q J J', \beta_u L_u S K_u Q_u J_u J_u') \nonumber \\ & &\qquad \qquad \qquad \qquad \qquad + \mathbb{T}_S(\beta L S K Q J J', \beta_u L_u S K_u Q_u J_u J_u') \Big] \nonumber \\ &-& \sum_{K' Q' J'' J'''} {^{\beta L S}\rho^{K'}_{Q'}(J'',J''') } \Big[ \mathbb{R}_A(\beta L S K Q J J' K' Q' J'' J''') \nonumber \\ & & + \mathbb{R}_E(\beta L S K Q J J' K' Q' J'' J''') + \mathbb{R}_S(\beta L S K Q J J' K' Q' J'' J''') \Big]. \label{eq:see} \end{eqnarray} The first term in the right hand side of Eq. (\ref{eq:see}) takes into account the influence of the magnetic field on the atomic level polarization. This term has its simplest expression in the chosen magnetic field reference frame \citep[see eq. 7.41 of][]{landi_landolfi04}. In any other reference system, a more complicated expression arises. The second, third and fourth terms account, respectively, for coherence transfer due to absorption from lower levels ($\mathbb{T}_A$), spontaneous emission from upper levels ($\mathbb{T}_E$) and stimulated emission from upper levels ($\mathbb{T}_S$). The remaining terms account for the relaxation of coherences due to absorption to upper levels ($\mathbb{R}_A$), spontaneous emission to lower levels ($\mathbb{R}_E$) and stimulated emission to lower levels ($\mathbb{R}_S$), respectively. The stimulated emission and absorption transfer and relaxation rates depend explicitly on the radiation field properties \citep[see eqs. 7.45 and 7.46 of][]{landi_landolfi04}. The symmetry properties of the radiation field are accounted for by the spherical components of the radiation field tensor: \begin{equation} J^K_Q(\nu) = \oint \frac{d\Omega}{4\pi} \sum_{i=0}^3 \mathcal{T}^K_Q(i,\mathbf{\Omega}) S_i(\nu,\mathbf{\Omega}). \label{eq:jkq} \end{equation} The quantities $\mathcal{T}^K_Q(i,\mathbf{\Omega})$ are spherical tensors that depend on the reference frame and on the ray direction $\mathbf{\Omega}$. They are given by \begin{equation} \mathcal{T}^K_Q(i,\mathbf{\Omega}) = \sum_P t^K_P(i) \mathcal{D}^K_{PQ}(R'), \label{eq:tkq} \end{equation} where $R'$ is the rotation that carries the reference system defined by the line-of-sight $\mathbf{\Omega}$ and by the polarization unit vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ into the reference system of the magnetic field, while $\mathcal{D}^K_{PQ}(R')$ is the usual rotation matrix \citep[e.g.,][]{edmonds60}. Table 5.6 in \cite{landi_landolfi04} gives the $\mathcal{T}^K_Q(i,\mathbf{\Omega})$ values for each Stokes parameter $S_i$ (with $S_0=I$, $S_1=Q$, $S_2=U$ and $S_3=V$). 
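As a simple illustration of Eq. (\ref{eq:jkq}) (this worked example is ours and is not part of the code), consider unpolarized radiation of intensity $I$ coming from a single direction that forms an angle $\theta$ with the quantization axis and subtends a small solid angle $\Delta\Omega$. Using $\mathcal{T}^0_0(0,\mathbf{\Omega})=1$ and $\mathcal{T}^2_0(0,\mathbf{\Omega})=(3\cos^2\theta-1)/(2\sqrt{2})$, one obtains
\begin{equation}
J^0_0 = \frac{\Delta\Omega}{4\pi}\,I, \qquad J^2_0 = \frac{\Delta\Omega}{4\pi}\,\frac{3\cos^2\theta-1}{2\sqrt{2}}\,I,
\end{equation}
so that the ratio $J^2_0/J^0_0$, which measures the anisotropy of the pumping radiation, vanishes when $3\cos^2\theta=1$ (the Van Vleck angle, $\theta \simeq 54.7^\circ$) and is largest for illumination along the quantization axis.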
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Emission and absorption coefficients}
Once the multipolar components ${^{\beta L S}\rho^{K}_{Q}(J,J') }$ are known, the coefficients $\epsilon_I$ and $\epsilon_X$ (with $X=Q,U,V$) of the emission vector and the coefficients $\eta_I$, $\eta_X$, and $\rho_X$ of the propagation matrix for a given transition between an upper term $(\beta L_u S)$ and a lower term $(\beta L_\ell S)$ can be calculated with the expressions of \S7.6.b in \cite{landi_landolfi04}. These radiative transfer coefficients are proportional to the number density of \ion{He}{1} atoms, $\mathcal{N}$. Their defining expressions also contain the Voigt profile and the Faraday-Voigt profile \citep[see \S5.4 in][]{landi_landolfi04}, which involve the following parameters: $a$ (i.e., the reduced damping constant), $v_\mathrm{th}$ (i.e., the velocity that characterizes the thermal motions, which broaden the line profiles), and $v_\mathrm{mac}$ (i.e., the velocity of possible bulk motions in the plasma, which produce a Doppler shift).

It is important to emphasize that the expressions for the emission and absorption coefficients and those of the statistical equilibrium equations are written in the reference system whose quantization axis is parallel to the magnetic field. The following equation indicates how to obtain the density matrix elements in a new reference system:
\begin{equation}
\left[ {^{\beta L S}\rho^{K}_{Q}(J,J') } \right]_\mathrm{new} = \sum_{Q'} \left[ {^{\beta L S}\rho^{K}_{Q'}(J,J') } \right]_\mathrm{old} \mathcal{D}^K_{Q' Q}(R)^*,
\end{equation}
where $\mathcal{D}^K_{Q' Q}(R)^*$ is the complex conjugate of the rotation matrix for the rotation $R$ that carries the old reference system into the new one.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Inversion}
Our inversion strategy is based on the minimization of a merit function that quantifies how well the Stokes profiles calculated in our atmospheric model reproduce the observed Stokes profiles. To this end, we have chosen the standard $\chi^2$-function, defined as:
\begin{equation}
\chi^2 = \frac{1}{4N_\lambda} \sum_{i=1}^4 \sum_{j=1}^{N_\lambda} \frac{\left[S_i^\mathrm{syn}(\lambda_j)-S_i^\mathrm{obs}(\lambda_j) \right]^2}{ \sigma_i^2(\lambda_j)} ,
\end{equation}
where $N_\lambda$ is the number of wavelength points and $\sigma_i^2(\lambda_j)$ is the variance associated with the $j$-th wavelength point of the $i$-th Stokes profile. The minimization algorithm tries to find the values of the parameters of our model that lead to synthetic Stokes profiles $S_i^\mathrm{syn}$ with the best possible fit to the observations. For our slab model, the number of parameters (number of dimensions of the $\chi^2$ hypersurface) lies between 5 and 7, the maximum value corresponding to the optically thick case. The magnetic field vector ($B$, $\theta_B$ and $\chi_B$), the thermal velocity ($v_\mathrm{th}$) and the macroscopic velocity ($v_\mathrm{mac}$) are always required. This set of parameters is enough for the case of an optically thin slab. In order to account for radiative transfer effects, we need to define the optical depth of the slab along its normal direction and at a suitable reference wavelength (e.g., the central wavelength of the red blended component for the \ion{He}{1} 10830 \AA\ multiplet).
In addition, we may need to include the damping parameter ($a$) of the Voigt profile if the wings of the observed Stokes profiles cannot be fitted using Gaussian line profiles.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Global Optimization techniques}
In order to avoid the possibility of getting trapped in a local minimum of the $\chi^2$ hypersurface, global optimization methods have to be used. We have chosen the DIRECT algorithm \citep{Jones_DIRECT93}, whose name derives from one of its main features: \emph{di}viding \emph{rect}angles. The idea is to recursively sample parts of the space of parameters, improving in each iteration the location of the part of the space where the global minimum is potentially located. The decision algorithm is based on the assumption that the function is Lipschitz continuous \citep[see][for details]{Jones_DIRECT93}. The method works very well in practice and can indeed find the minimum in functions that do not fulfill the condition of Lipschitz continuity. The reason is that the DIRECT algorithm does not require the explicit calculation of the Lipschitz constant, but uses all possible values of such a constant to determine if a region of the parameter space should be broken into subregions because of its potential interest \citep[see][for details]{Jones_DIRECT93}.

Since the intensity profile is not very sensitive to the presence of a magnetic field (at least for magnetic field strengths of the order of or smaller than 1000 G), we have decided to estimate the optical thickness of the slab, the thermal and the macroscopic velocity of the plasma and the damping constant by using only the Stokes $I$ profile, and then to determine the magnetic field vector by using the polarization profiles. The full inversion scheme begins by applying the DIRECT method to obtain a first estimation of the indicated four parameters by using only Stokes $I$. Afterwards, some LM iterations are carried out to refine the initial values of the model's parameters obtained in the previous step. Once the LM method has converged, the inferred values of $v_\mathrm{th}$ and $v_\mathrm{mac}$ (together with $a$ and $\Delta \tau$, when these are parameters of the model) are kept fixed in the next steps, in which the DIRECT method is used again to obtain an initial approximation of the magnetic field vector ($B$,$\theta_B$,$\chi_B$). According to our experience, the first estimate of the magnetic field vector given by the DIRECT algorithm is typically very close to the final solution. Nevertheless, some iterations of the LM method are performed to refine the value of the magnetic field strength, inclination and azimuth. In any case, although we have found very good results with this procedure, the specific inversion scheme is fully configurable and can be tuned for specific problems. Our experience has shown that the following strategy is appropriate for inverting prominences: two initial DIRECT+LM cycles with weights $(1,0,0,0)$ to invert the thermodynamical parameters; then, two DIRECT+LM cycles in which $B$, $\theta_B$ and $\chi_B$ are left free, with weights $(0,0.1,0.1,1)$, which tries to set the correct polarity of the field given by Stokes $V$; an additional LM cycle in which we fit only $\theta_B$ and $\chi_B$ with weights $(0,1,1,0.3)$; and a last LM cycle with weights $(0,0.3,0.3,1)$ leaving the full magnetic field vector free.
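To make the role of the cycle weights explicit, the following Python sketch evaluates a weighted version of the $\chi^2$ merit function defined above. The weight handling and normalization shown here are our own illustration and may differ in detail from the internal implementation of \H:
\begin{verbatim}
import numpy as np

def weighted_chi2(stokes_syn, stokes_obs, sigma, weights):
    """Weighted chi^2 between synthetic and observed profiles.

    stokes_syn, stokes_obs, sigma : arrays of shape (4, n_lambda)
    weights : four weights, one per Stokes parameter (I, Q, U, V)
    """
    n_lambda = stokes_obs.shape[1]
    w = np.asarray(weights)[:, None]            # broadcast over wavelength
    residual = (stokes_syn - stokes_obs) / sigma
    return np.sum(w * residual**2) / (4.0 * n_lambda)

# Example: weights of the first cycle of the prominence strategy above
# chi2 = weighted_chi2(syn, obs, sigma, weights=(1.0, 0.0, 0.0, 0.0))
\end{verbatim}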
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Convergence}
\label{sec:convergence}
We let the DIRECT algorithm locate the global minimum in a region whose hypervolume is $V$. This hypervolume is obtained as the product of the length $d_i$ of each dimension associated with each of the $N$ parameters:
\begin{equation}
V = \prod_i^N d_i.
\end{equation}
When the hypervolume decreases by a factor $f$ after the DIRECT algorithm has discarded some of the hyperrectangles, its size along each dimension is approximately decreased by a factor $f^{1/N}$. In order to end up with a small region where the global minimum is located, many subdivisions are necessary, thus requiring many function evaluations. The most time consuming part of any optimization procedure is the evaluation of the merit function. The DIRECT algorithm needs only a reduced number of evaluations of the merit function to find the region where the global minimum is located. For this reason, we have chosen it as the initialization part of the LM method. Since the initialization point is close to the global minimum, the LM method, thanks to its quadratic behavior, rapidly converges to the minimum.
\subsection{Stopping criterion}
We have used two stopping criteria for the DIRECT algorithm. The first one is to stop when the ratio between the hypervolume where the global minimum is located and the original hypervolume is smaller than a given threshold. This criterion has been chosen when using the DIRECT algorithm as an initialization for the LM method, giving very good results. The other good option, suggested by \cite{Jones_DIRECT93}, is to stop after a fixed number of evaluations of the merit function.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Ambiguities in the Hanle effect in the saturation regime}
\label{sec:ambiguities}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In the saturation regime of the Hanle effect, Stokes $Q$ and $U$ are insensitive to the field strength, but are sensitive to the geometry of the field. For a $J=0 \to J=1$ transition, the linear polarization can be written as:
\begin{eqnarray}
Q &=& \frac{q}{2} \left( 3 \cos^2 \theta_B-1 \right) \sin^2\Theta_B \cos 2\Phi_B \nonumber \\
U &=& \frac{q}{2} \left( 3 \cos^2 \theta_B-1 \right) \sin^2\Theta_B \sin 2\Phi_B.
\end{eqnarray}
These expressions contain a mixture of angles to make it clear that the polarization amplitude depends both on the angle between the vertical and the magnetic field and on the angle between the magnetic field and the line-of-sight (LOS). The coordinates of the magnetic field vector $\mathbf{B}$ in the reference system of the vertical and the reference system of the LOS are:
\begin{eqnarray}
\mathbf{B} &=& B \left(\sin \theta_B \cos \phi_B \mathbf{i}+\sin \theta_B \sin \phi_B \mathbf{j}+\cos \theta_B \mathbf{k} \right) \nonumber \\
\mathbf{B} &=& B \left(\sin \Theta_B \cos \Phi_B \mathbf{i}'+\sin \Theta_B \sin \Phi_B \mathbf{j}'+\cos \Theta_B \mathbf{k}' \right),
\end{eqnarray}
where the unit vectors are related by a simple rotation:
\begin{eqnarray}
\mathbf{i}' &=& \cos \theta \mathbf{i} - \sin \theta \mathbf{k} \nonumber \\
\mathbf{k}' &=& \sin \theta \mathbf{i} + \cos \theta \mathbf{k}.
\end{eqnarray}
Introducing these relations into the expression for the magnetic field, and given that the magnetic field vector is the same in both reference systems, the following has to be fulfilled:
\begin{eqnarray}
\sin \theta_B \cos \phi_B &=& \sin \Theta_B \cos \Phi_B \cos \theta + \cos \Theta_B \sin \theta \nonumber \\
\sin \theta_B \sin \phi_B &=& \sin \Theta_B \sin \Phi_B \nonumber \\
\cos \theta_B &=& \cos \Theta_B \cos \theta - \sin \Theta_B \cos \Phi_B \sin \theta.
\end{eqnarray}
Solving the previous three equations in the two directions, we find the following transformations between the angles in the vertical reference system and the LOS reference system:
\begin{eqnarray}
\cos \Theta_B &=& \cos\theta \cos\theta_B + \sin\theta \sin\theta_B \cos\phi_B \nonumber \\
\sin \Theta_B &=& +\sqrt{1-\cos^2\Theta_B} \nonumber \\
\cos \Phi_B &=& \frac{\cos\theta \sin\theta_B \cos\phi_B - \cos\theta_B \sin\theta}{\sin \Theta_B} \nonumber \\
\sin \Phi_B &=& \frac{\sin\theta_B \sin\phi_B}{\sin\Theta_B}
\end{eqnarray}
and
\begin{eqnarray}
\cos \theta_B &=& \cos\theta \cos\Theta_B - \sin\theta \sin\Theta_B \cos\Phi_B \nonumber \\
\sin \theta_B &=& +\sqrt{1-\cos^2\theta_B} \nonumber \\
\cos \phi_B &=& \frac{\cos\theta \sin\Theta_B \cos\Phi_B + \cos\Theta_B \sin\theta}{\sin \theta_B} \nonumber \\
\sin \phi_B &=& \frac{\sin\Theta_B \sin\Phi_B}{\sin\theta_B}.
\end{eqnarray}
Note that, since $\Theta_B \in [0,\pi]$, we can safely take the positive value of the square root. In order to transform from one reference system to the other, the inclination can be computed easily by inverting either the sine or the cosine. The situation is different for the azimuth, however, because its range of variation is $[-\pi,\pi]$. Therefore, one has to compute the cosine and the sine separately and then decide which is the correct quadrant for the angle from the signs of both quantities.
% \begin{center}
% \input{figure_geometry}
% \end{center}
Four possible kinds of ambiguities can exist for the Stokes $Q$ and $U$ parameters. The idea is that $\Phi_B$ can be modified and still produce the same $Q$ and $U$ by properly adjusting the value of $\Theta_B$. Given that the term that can be used to compensate for the change in the azimuth in the LOS reference system is the same for Stokes $Q$ and $U$, we can only compensate for changes in sign. Therefore, we have the following potential ambiguities:
\begin{eqnarray}
\Phi_B' &=& \Phi_B \nonumber \\
\Phi_B' &=& \Phi_B -\pi/2 \nonumber \\
\Phi_B' &=& \Phi_B + \pi/2 \nonumber \\
\Phi_B' &=& \Phi_B + \pi.
\end{eqnarray}
For each case, we have to compute the value of $\Theta_B'$ that keeps the values of $Q$ and $U$ unchanged. Therefore, once we find a solution to the inversion problem in the form of the pair $(\theta_B,\phi_B)$, we can find the remaining solutions in the saturation regime following the recipes that we present now. Remember that, unless one knows the polarity of the field (in other words, the sign of $\cos\Theta_B$), the number of potentially ambiguous solutions is 8. If the polarity of the field is known, the number is typically reduced to 4 (or 2 if no 90$^\circ$ ambiguity is present).

\section{\texorpdfstring{$\Phi_B' = \Phi_B$}{PhiB'=PhiB}}
Under this change, we have that
\begin{equation}
\cos 2\Phi_B' = \cos 2\Phi_B, \quad \sin 2\Phi_B' = \sin 2\Phi_B, \quad \cos \Phi_B' = \cos \Phi_B, \quad \sin \Phi_B' = \sin \Phi_B.
\end{equation}
Making use of the previous relations between the angles with respect to the vertical and to the LOS, we have to solve the following equation:
\begin{equation}
\left( 3 \cos^2\theta_B'-1 \right) \sin^2 \Theta_B' = \left( 3 \cos^2\theta_B-1 \right) \sin^2 \Theta_B,
\end{equation}
which can be written as:
\begin{equation}
\left[ 3 \left( \cos \Theta_B' \cos \theta - \sin\theta \sin\Theta_B' \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B' =
\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{equation}
After some algebra, and making the substitution $t=\sin\Theta_B'$, we end up with the following equation to be solved:
\begin{equation}
A t^4 + Bt^2 + C t^3 \sqrt{1-t^2} = K,
\end{equation}
where
\begin{eqnarray}
A &=& -3\cos^2 \theta + 3\sin^2 \theta \cos^2 \Phi_B \nonumber \\
B &=& 3\cos^2 \theta - 1 \nonumber \\
C &=& -6 \cos\theta \sin\theta \cos \Phi_B \nonumber \\
K &=& \left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{eqnarray}
The previous equation can be solved by making the change of variables $t=\pm \sqrt{Z}$, resulting in:
\begin{equation}
(C^2+A^2) Z^4 + (-C^2+2AB) Z^3 + (-2AK+B^2) Z^2 - 2BKZ + K^2 = 0.
\end{equation}
This fourth-order polynomial can have four different solutions. Of these, we have to keep only the real solutions that lie in the range allowed by $\Theta_B$:
\begin{equation}
t \in \mathbb{R}, \qquad 0 \leq t \leq 1.
\end{equation}
Once the solutions for $t$ are found, we set $\Theta_B' = \arcsin t$. Note that, for a fixed value of $t$, two values of $\Theta_B'$ are possible. We choose the correct one by evaluating the expressions for $Q$ and $U$ and testing which of the two possible choices gives values equal (or very similar) to the original ones. The angles $(\theta_B,\phi_B)$ are obtained by transforming $(\Theta_B',\Phi_B)$ back to the vertical reference system.

\section{\texorpdfstring{$\Phi_B' = \Phi_B+\pi$}{PhiB'=PhiB+pi}}
Under this change, we have:
\begin{equation}
\cos 2\Phi_B' = \cos 2\Phi_B, \quad \sin 2\Phi_B' = \sin 2\Phi_B, \quad \cos \Phi_B' = -\cos \Phi_B, \quad \sin \Phi_B' = -\sin \Phi_B.
\end{equation}
Following the same approach, we have to solve for $\Theta_B'$ in
\begin{equation}
\left[ 3 \left( \cos \Theta_B' \cos \theta + \sin\theta \sin\Theta_B' \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B' =
\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{equation}
The solutions are obtained as the roots of the same equation as before, but now with
\begin{eqnarray}
A &=& -3\cos^2 \theta + 3\sin^2 \theta \cos^2 \Phi_B \nonumber \\
B &=& 3\cos^2 \theta - 1 \nonumber \\
C &=& 6 \cos\theta \sin\theta \cos \Phi_B \nonumber \\
K &=& \left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{eqnarray}
The angles $(\theta_B,\phi_B)$ are obtained by transforming $(\Theta_B',\Phi_B+\pi)$ back to the vertical reference system.

\section{\texorpdfstring{$\Phi_B' = \Phi_B+\pi/2$}{PhiB'=PhiB+pi/2}}
Under this change, we have:
\begin{equation}
\cos 2\Phi_B' = -\cos 2\Phi_B, \quad \sin 2\Phi_B' = -\sin 2\Phi_B, \quad \cos \Phi_B' = -\sin \Phi_B, \quad \sin \Phi_B' = \cos \Phi_B.
\end{equation}
Following the same approach, we have to solve for $\Theta_B'$ in
\begin{equation}
\left[ 3 \left( \cos \Theta_B' \cos \theta + \sin\theta \sin\Theta_B' \sin\Phi_B\right)^2-1 \right] \sin^2 \Theta_B' =
\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{equation}
The solutions are obtained as the roots of the same equation as before, but now with
\begin{eqnarray}
A &=& -3\cos^2 \theta + 3\sin^2 \theta \sin^2 \Phi_B \nonumber \\
B &=& 3\cos^2 \theta - 1 \nonumber \\
C &=& 6 \cos\theta \sin\theta \sin \Phi_B \nonumber \\
K &=& -\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{eqnarray}
The angles $(\theta_B,\phi_B)$ are obtained by transforming $(\Theta_B',\Phi_B+\pi/2)$ back to the vertical reference system.

\section{\texorpdfstring{$\Phi_B' = \Phi_B-\pi/2$}{PhiB'=PhiB-pi/2}}
Under this change, we have:
\begin{equation}
\cos 2\Phi_B' = -\cos 2\Phi_B, \quad \sin 2\Phi_B' = -\sin 2\Phi_B, \quad \cos \Phi_B' = \sin \Phi_B, \quad \sin \Phi_B' = -\cos \Phi_B.
\end{equation}
Following the same approach, we have to solve for $\Theta_B'$ in
\begin{equation}
\left[ 3 \left( \cos \Theta_B' \cos \theta + \sin\theta \sin\Theta_B' \sin\Phi_B\right)^2-1 \right] \sin^2 \Theta_B' =
\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{equation}
The solutions are obtained as the roots of the same equation as before, but now with
\begin{eqnarray}
A &=& -3\cos^2 \theta + 3\sin^2 \theta \sin^2 \Phi_B \nonumber \\
B &=& 3\cos^2 \theta - 1 \nonumber \\
C &=& -6 \cos\theta \sin\theta \sin \Phi_B \nonumber \\
K &=& -\left[ 3 \left( \cos \Theta_B \cos \theta - \sin\theta \sin\Theta_B \cos\Phi_B\right)^2-1 \right] \sin^2 \Theta_B.
\end{eqnarray}
The angles $(\theta_B,\phi_B)$ are obtained by transforming $(\Theta_B',\Phi_B-\pi/2)$ back to the vertical reference system.
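All four ambiguities reduce to the same numerical task: build the coefficients $A$, $B$, $C$ and $K$ of the corresponding case, find the roots of the quartic polynomial in $Z=\sin^2\Theta_B'$, and keep the physically meaningful ones. The following sketch (using \verb+numpy+; the function name is illustrative, not part of any existing code) shows this for the $\Phi_B'=\Phi_B$ case. The other cases differ only in the signs of $C$ and $K$ and in the use of $\sin\Phi_B$ instead of $\cos\Phi_B$, as given above.
\begin{verbatim}
import numpy as np

def ambiguous_theta(theta, Theta_B, Phi_B):
    """Candidate values of Theta_B' for the Phi_B' = Phi_B ambiguity.

    theta is the LOS inclination, (Theta_B, Phi_B) the original solution
    in the LOS reference system; all angles in radians."""
    A = -3 * np.cos(theta)**2 + 3 * np.sin(theta)**2 * np.cos(Phi_B)**2
    B = 3 * np.cos(theta)**2 - 1
    C = -6 * np.cos(theta) * np.sin(theta) * np.cos(Phi_B)
    K = (3 * (np.cos(Theta_B) * np.cos(theta)
              - np.sin(theta) * np.sin(Theta_B) * np.cos(Phi_B))**2
         - 1) * np.sin(Theta_B)**2

    # Quartic in Z = sin^2(Theta_B'):
    # (A^2+C^2) Z^4 + (2AB-C^2) Z^3 + (B^2-2AK) Z^2 - 2BK Z + K^2 = 0
    roots = np.roots([A**2 + C**2, 2*A*B - C**2, B**2 - 2*A*K,
                      -2*B*K, K**2])

    candidates = []
    for Z in roots:
        # keep only real roots with 0 <= Z <= 1 (so that 0 <= t <= 1)
        if abs(Z.imag) > 1e-8 or not (0.0 <= Z.real <= 1.0):
            continue
        t = np.sqrt(Z.real)
        # two possible inclinations for a given t = sin(Theta_B')
        for Tp in (np.arcsin(t), np.pi - np.arcsin(t)):
            # keep the branch that reproduces the original Q and U
            # (left-hand side of the Phi_B' = Phi_B case)
            lhs = (3 * (np.cos(Tp) * np.cos(theta)
                        - np.sin(theta) * np.sin(Tp) * np.cos(Phi_B))**2
                   - 1) * np.sin(Tp)**2
            if np.isclose(lhs, K, atol=1e-6):
                candidates.append(Tp)
    return candidates
\end{verbatim}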
\section*{Acknowledgements}
We would like to thank Egidio Landi Degl'Innocenti and Marco Landolfi for sharing with us their deep knowledge of the physics of spectral line polarization, which they have described in great detail in their rigorous monograph ``Polarization in Spectral Lines''. Financial support by the Spanish Ministry of Education and Science through project AYA2007-63881 and by the European Commission through the SOLAIRE network (MTRN-CT-2006-035484) is gratefully acknowledged.

\bibliographystyle{apj}
% \bibliography{apjmnemonic,/scratch/Dropbox/biblio}

\begin{thebibliography}{11}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi

\bibitem[{{Belluzzi} {et~al.}(2007){Belluzzi}, {Trujillo Bueno}, \& {Landi Degl'Innocenti}}]{belluzzi07}
{Belluzzi}, L., {Trujillo Bueno}, J., \& {Landi Degl'Innocenti}, E. 2007, ApJ, 666, 588

\bibitem[{{Condon} \& {Shortley}(1935)}]{condon_shortley35}
{Condon}, E.~U., \& {Shortley}, G.~H. 1935, The Theory of Atomic Spectra (Cambridge: Cambridge University Press)

\bibitem[{{Edmonds}(1960)}]{edmonds60}
{Edmonds}, A.~R. 1960, Angular Momentum in Quantum Mechanics (Princeton: Princeton University Press)

\bibitem[{{Jones} {et~al.}(1993){Jones}, {Perttunen}, \& {Stuckmann}}]{Jones_DIRECT93}
{Jones}, D.~R., {Perttunen}, C.~D., \& {Stuckmann}, B.~E. 1993, Journal of Optimization Theory and Applications, 79, 157

\bibitem[{{Landi Degl'Innocenti} \& {Landi Degl'Innocenti}(1985)}]{landi_landi85}
{Landi Degl'Innocenti}, E., \& {Landi Degl'Innocenti}, M. 1985, Sol.~Phys., 97, 239

\bibitem[{{Landi Degl'Innocenti} \& {Landolfi}(2004)}]{landi_landolfi04}
{Landi Degl'Innocenti}, E., \& {Landolfi}, M. 2004, Polarization in Spectral Lines (Dordrecht: Kluwer Academic Publishers)

\bibitem[{{Pierce}(2000)}]{pierce00}
{Pierce}, K. 2000, in Allen's Astrophysical Quantities, ed. A.~N. Cox (New York: Springer Verlag and AIP Press)

\bibitem[{{Rees} {et~al.}(1989){Rees}, {Durrant}, \& {Murphy}}]{rees_delo89}
{Rees}, D.~E., {Durrant}, C.~J., \& {Murphy}, G.~A. 1989, ApJ, 339, 1093

\bibitem[{{Trujillo Bueno}(2003)}]{trujillo03}
{Trujillo Bueno}, J. 2003, in Stellar Atmosphere Modeling, ed. I.~{Hubeny}, D.~{Mihalas}, \& K.~{Werner}, ASP Conf. Ser. 288 (San Francisco: ASP), 551

\bibitem[{{Trujillo Bueno} \& {Asensio Ramos}(2007)}]{trujillo_asensio07}
{Trujillo Bueno}, J., \& {Asensio Ramos}, A. 2007, ApJ, 655, 642

\bibitem[{{Trujillo Bueno} {et~al.}(2005){Trujillo Bueno}, {Merenda}, {Centeno}, {Collados}, \& {Landi Degl'Innocenti}}]{trujillo_merenda05}
{Trujillo Bueno}, J., {Merenda}, L., {Centeno}, R., {Collados}, M., \& {Landi Degl'Innocenti}, E. 2005, ApJ, 619, L191

\end{thebibliography}

\end{document}
{ "alphanum_fraction": 0.7219290264, "avg_line_length": 47.1926999463, "ext": "tex", "hexsha": "fde7fb2da369788c7c414e4c056fcbf781684be7", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2018-10-01T17:12:52.000Z", "max_forks_repo_forks_event_min_datetime": "2016-02-25T19:35:07.000Z", "max_forks_repo_head_hexsha": "4121df2fa6bf96bf8f193f287bbf11c70c5a519e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fluxtransport/hazel2", "max_forks_repo_path": "docs/manual.tex", "max_issues_count": 26, "max_issues_repo_head_hexsha": "4121df2fa6bf96bf8f193f287bbf11c70c5a519e", "max_issues_repo_issues_event_max_datetime": "2021-05-27T10:10:45.000Z", "max_issues_repo_issues_event_min_datetime": "2018-04-03T15:09:21.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fluxtransport/hazel2", "max_issues_repo_path": "docs/manual.tex", "max_line_length": 747, "max_stars_count": 17, "max_stars_repo_head_hexsha": "4121df2fa6bf96bf8f193f287bbf11c70c5a519e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fluxtransport/hazel2", "max_stars_repo_path": "docs/manual.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-12T02:30:56.000Z", "max_stars_repo_stars_event_min_datetime": "2018-08-31T11:13:59.000Z", "num_tokens": 24807, "size": 87920 }
%\documentclass[a4paper,9pt,fleqn,notoc]{diss} %% \renewcommand{\includegraphics}[1][1]{} %\begin{document} \chapter{Evolution of spatial conceptualization strategies}\is{alignment} \label{s:strategies} In this section I research the question how conceptualization strategies can form autonomously and align in a population of agents. The previous chapter studied one component of spatial conceptualization, namely categorization, in isolation. In this section I take a broader look at the prerequisites for building category systems. Every category system is part of a particular strategy of conceptualizing reality which encompasses a particular choice of reference objects but also frames of reference and perspective on the scene. Consequently, which spatial relation system emerges is governed most importantly by the strategies available to agents. In the languages of the world different strategies for the conceptualization of spatial reality have been attested. This is most strikingly the case for frames of reference. Some languages feature only absolute systems, an example being Tenejapan \citep{levinson2003space}\oldindex{Levinson, S. C.}, while others have developed strong intrinsic and relative systems like German \citep{tenbrink2007space}\oldindex{Tenbrink, T.}. But conceptualization strategies go further. Which objects can function as landmarks? How are spatial relations applied? What is the role of perspective reversal? These are all choices which are manifested in conceptualization strategies and shape the way a population conceptualizes reality. Consequently, the evolution of spatial language is intricately linked to the emergence and evolution of conceptualization strategies which together with learning and adaptation operators orchestrate the development of the language system. To understand the process of building conceptualization strategies this section details models of how agents invent strategies and how they become aligned in the population. The important claim in this section is that conceptualization strategies are negotiated in a cultural process, similar to how the lexicon is negotiated, through local interactions by agents in a community. The negotiation process is fueled by the general cognitive capabilities of agents, in other words, the cognitive building blocks. Conceptualization strategies package the usage of certain types of spatial relations with landmarks and perspective. For instance, a conceptualization strategy might involve a set of spatial relations pertaining to a particular global landmark. Strategies are represented technically by chunks which are combinations of cognitive operations into particular semantic structures that allow agents to conceptualize reality. Which strategies are built and which strategies a population agrees on in the cultural process is subject to selective pressures that influences the preference for a particular strategy over others and drives the population to align on a particular way of construing reality. Factors influencing invention and alignment\is{alignment} of strategies include primarily environmental conditions such as the spatial layout and the kind of objects agents face, but they can also include factors such as cognitive complexity of a particular strategy and expressivity of the language systems developed using that particular strategy. Furthermore, already established strategies and language systems upon invention of new strategies influence how and in which way new strategies develop. 
The idea is that a particular strategy survives when it is relevant to an agent because it is efficient and useful in discriminating objects and it contributes to the communicative success of\is{measures!communicative success} an agent at least in a few spatial contexts. If a new strategy is potentially useful in certain spatial contexts, but these spatial situations are already handled by another strategy, then the new strategy has almost no chance of taking over the system unless it performs better with respect to other factors such as cognitive complexity. In this chapter the focus is on one particular factor governing both invention and alignment\is{alignment} of strategies: discriminative power. Discriminative power for strategies refers to the ability of a strategy to distinguish and single out objects in the environment. Each strategy known to an agent necessarily starts out in a single context. Which strategy is invented in a particular context is based on its discriminative power in comparison with other strategies available at that moment. New strategies are packaged into chunks and the success of the new strategies is tracked by updating the score of the corresponding chunk. So the invention, alignment\is{alignment} and interaction of strategies are structurally organized very similarly to categories. This chapter argues that discriminative power is an important factor driving the development of strategies, similar to how it drove the invention and alignment\is{alignment} of spatial categories. In fact, the success of a strategy is intricately linked to the success of the category system it builds. For instance, if an agent is building a language system with an absolute strategy, this entails that the absolute relations built using the strategy and the strategy itself are subject to the same selective pressure. It is the success of this overall system, the spatial relations together with the overall performance of the strategy, that drives the organization of the linguistic and semantic repository of the agent.

Strategies are implemented using the chunking mechanism of IRL. A particular chunk represents a particular way of conceptualizing reality. Chunks are invented by assembling cognitive operations into ready-made semantic structure; they are scored and their scores are used to represent the success of the conceptualization strategy over many interactions. Invention operators orchestrate how chunks are built, and alignment\is{alignment} operators update the score of chunks after interactions, tracking the long-term success of a particular chunk. Spatial conceptualization strategies rely on spatial relations. Invention of spatial relations is tied to particular cognitive operations. For instance, if an absolute strategy is developed, the cognitive operation doing the categorization has an associated invention operator that invents spatial categories if needed. These are the same operations as discussed in the last section. Important for the purpose of this chapter is that this connects the invented spatial relations to the strategies that incorporate the particular cognitive operation responsible for invention. Before I turn to grammar and other linguistic means of expressing strategies, this section focusses entirely on alignment\is{alignment} and invention of conceptualization strategies without explicit marking in language.
In other words, the systems described in this section are expressed purely through the naming of spatial relations, re-using all of the mechanisms of invention, adoption and alignment\is{alignment} of spatial categories detailed in the previous section. I apply these insights to two components of spatial language separately. First, I study strategies for different reference objects, followed by a discussion of the interaction of different frame of reference strategies. Finally, the chapter turns to the invention of strategies. The results presented in this chapter have been published in \cite{spranger2011recruitment,spranger2013evolving}.\oldindex{Spranger, M.}

\section{Alignment for landmark strategies}\is{alignment}
% reference is unavoidable
Landmarks are an integral part of spatial language because every spatial relation is implicitly or explicitly related to a reference object. Typically, many different reference objects are present in the world and agents face choices as to which of the reference objects should be used in the particular communicative situation, but also with respect to the invention and development of language. Environmental conditions can vary. In some environments a landmark such as a mountain might be visible in every communicative encounter between agents of a population, which makes it a successful strategy to base the spatial language system on this landmark. In other environments landmarks might only be available locally, and their usage is, therefore, bound to a particular communicative encounter. One way of dealing with such choices is that agents agree on the usage of a particular conceptualization strategy that is bound to a particular reference object.

This section explores how the use of certain reference objects might be aligned in a population of agents. I look at scenes which feature different landmarks and, hence, exhibit a certain amount of choice, and I study under which circumstances agents are able to align on always using the same reference object across spatial scenes. Specifically, I compare allocentric strategies that use objects such as boxes with egocentric speaker and egocentric hearer strategies.

The key claim in this section is that conventionalizing a particular strategy of conceptualizing reality allows agents to be successful in communication. Technically, this claim is studied by setting up systems in which an alignment\is{alignment} strategy is implemented that scores semantic structure, more specifically chunks of semantic structure, and updates the score of semantic structure based on success in communication. Chunks that were used in production and interpretation are rewarded if the interaction was successful and punished if the interaction was a failure. Moreover, chunks that were not used in the interaction are punished slightly, which drives the alignment\is{alignment} towards a single, most comprehensive strategy that works in many contexts. This, in essence, implements frequency-based dynamics in which structure that is used successfully survives and structure that is never used or always used unsuccessfully is removed. Strictly speaking, however, structure is never removed; it just gets a score of zero, which eliminates it from routine processing but leaves it available for recruitment\is{recruitment} should it become necessary.

The scoring of semantic structure has important consequences. First, the score of a chunk impacts the choices agents make in production and interpretation. The discrimination score of a chunk, i.e.
its discriminatory power given the current context, is multiplied by its score. Consequently, a speaker might choose to use a less discriminating structure over another because it is more conventional, i.e. the score of the chunk is so high as to overrule the discriminative power of the competing chunk. Second, the score of a chunk not only governs its usage in a particular spatial context, but also influences which categories will be invented. The world from the viewpoint of the speaker and the hearer might be different from the allocentric viewpoint and, consequently, the sensory distinctions, i.e. the categories required to be successful using an allocentric strategy, can be different from those required when the same set of scenes is construed from the viewpoint of the hearer. The competition between the conceptualization strategies thus impacts the particular category system that will emerge. Because the chunks are under selective pressure, this creates a positive feedback loop in which categories might be invented using a particular conceptualization\enlargethispage{1\baselineskip} strategy such as allocentric, which in turn makes this strategy more successful in discrimination, which reinforces the strategy, and so on.

\begin{figure}
\includegraphics[width=1.0\columnwidth]{figs/chunk-alignment-chunks}
\caption[Conceptualization strategies]{The three conceptualization strategies given to agents. Top: allocentric; middle: egocentric speaker; and bottom: egocentric hearer.}
\label{f:chunk-alignment-chunks}
\end{figure}

\subsection{Experimental setup and measures}
I claim that chunk alignment\is{alignment} is an effective mechanism that allows agents to conventionalize the choice for a particular conceptualization strategy alongside forming a category system. The claim is tested by running experiments in which agents are given different conceptualization strategies: an allocentric strategy in which they can use a reference object available in each context, and two egocentric strategies, one for using themselves and one for using the interlocutor as reference object (see Figure \ref{f:chunk-alignment-chunks} for the semantic structures agents are given). At the same time as they are aligning their conceptualization strategy, agents develop a category system including names for category distinctions. The category systems and conceptualization strategies are tightly coupled. Every category is invented as part of a strategy and can only be used within the strategy it was created in. Success of a particular category in communication, therefore, directly impacts the conceptualization strategy. Figures \ref{f:chunk-alignment-acquisition} and \ref{f:chunk-alignment-formation} show that the mechanism of chunk alignment\is{alignment} works both in acquisition and in formation. Agents can successfully negotiate both categories and the conceptualization strategy at the same time.

% extra monitoring
The alignment\is{alignment} of conceptualization strategies is measured using the \textsc{conceptualization strategy similarity}\is{measures!conceptualization strategy similarity} which is computed for a population of agents by averaging the {agent conceptualization strategy similarity} of every agent to every other agent. The agent conceptualization strategy similarity ({\footnotesize\tt acss}) is computed by comparing the score of each strategy.
Since strategies are never removed but merely reduced to a score of $0.0$, one can compute a distance of scores between the chunks in each agent and envelope the result using an exponential decay function which results in the following formula. \begin{equation} \operatorname{acss}(a_1,a_2,S):= \operatorname{exp} \left( -1 \cdot \sum_{s \in S} \left|\operatorname{score}(s,a_1) - \operatorname{score}(s,a_2)\right|\right) \end{equation} In this formula $a_1,a_2$ are the agents whose similarity score is computed, $S$ is the set of strategies given to agents and $\operatorname{score}(s,a_1)$ is the score agent $a_1$ gives to strategy $s$. The conceptualization strategy similarity ({\footnotesize\tt css}) for the population $P$ is defined as the average $\operatorname{acss}$ for every two agents. Since $\operatorname{acss}$ is symmetric, all combinations of two agents are considered. This measure is only one way to understand the dynamics of a particular chunk alignment \is{alignment} experiment. Since all agents start out equipped with the same set of strategies this measure is equal to $1.0$ in the beginning. However, when considered over many interactions, the measure provides important insights into how similar the development is. Particularly, large drops in similarity diagnose significant divergence in strategy use. Most information from $\operatorname{css}$ is drawn by analyzing its dynamics together with a second measure which tracks the average number of chunks in the population ignoring chunks with a score of $0$. If the average number of chunks in a population of agents drops from 3 to 1 and $\operatorname{css}$ stays high, we can conclude that the population has agreed on a single conceptualization strategy (see Figure \ref{f:chunk-alignment-formation} for such developments). \subsection{Results} Figure \ref{f:chunk-alignment-different-outcomes} shows three different outcomes of the chunk alignment\is{alignment} experiments on different data sets. All three graphs show the average score of the three different strategies over the first 1000 interactions of one particular experimental run. All strategies start with the same score $0.5$ and for some time nothing happens, because agents have not started to invent categories yet. If the first category was invented using a particular strategy in a particular context, the strategy that was used to invent spreads in the population. Categories invented much later will be invented using the dominant strategy, which essentially has already been established when the second, third and fourth category spread in the population. \begin{figure} \begin{centering} \includegraphics[width=0.9\columnwidth]{figs/chunk-alignment-chunks-acquisition} \caption[Results strategy acquisition experiments]{% Acquisition experiment in which agents not only learn the category system from a tutor (projective in this case), but also the underlying conceptualization strategy of the tutor (allocentric).} \label{f:chunk-alignment-acquisition} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.85\columnwidth]{figs/chunk-alignment-formation-projective-space-game-5-success} \includegraphics[width=0.85\columnwidth]{figs/chunk-alignment-formation-projective-space-game-5-alignment} \caption[Results for conceptual alignment]{Experimental results for conceptual alignment\is{alignment}. In these experiments 10 agents are negotiating conceptualization strategies while at the same time agreeing on a system of spatial relations. 
The top shows the development of categories and communicative success\is{measures!communicative success}. The dynamics is quite similar to previous experiments using a dampened invention approach to categorization. The bottom figure shows that for this particular data set one specific strategy always unanimously wins the competition, which can be seen both by the drop to a single chunk and by the corresponding high strategy similarity. Strategy similarity starts out high because all agents start with the three strategies, but, most importantly, it also stays high, suggesting high similarity between the conceptualization strategies agents are using.
}
\label{f:chunk-alignment-formation}
\end{centering}
\end{figure}

\begin{figure}
\begin{centering}
%\includegraphics[width=0.65\columnwidth]{figs/chunk-alignment-formation-projective-space-game-4-run-1}
%\includegraphics[width=0.65\columnwidth]{figs/chunk-alignment-formation-projective-space-game-4-run-2}
%\includegraphics[width=0.65\columnwidth]{figs/chunk-alignment-formation-projective-space-game-4-run-3}
\includegraphics[width=0.95\columnwidth]{figs/chunk-alignment-formation-projective-space-game-4-together}
\caption[Different outcomes of chunk alignment experiments]{%
Different outcomes of the chunk alignment\is{alignment} experiments on different data sets. All three graphs show the average score of the three different strategies over the first 1000 interactions of one particular experimental run. Top: the speaker egocentric strategy survives; middle: the allocentric strategy wins; and bottom: the hearer egocentric strategy wins.}
\label{f:chunk-alignment-different-outcomes}
\end{centering}
\end{figure}

The choices in alignment\is{alignment} of strategies depend on the discriminative advantage of the strategy. This is a subtle effect which can easily lead to unaligned populations, particularly because agents simultaneously develop categories, which is a powerful and adaptive mechanism. The problem lies in adoption. If the hearer observes a new term, the only information given to him in order to decide which strategy to use for adoption is the current context and its particular spatial layout. Let us suppose the speaker thinks the conventional strategy in the population is allocentric and, consequently, he uses an allocentric spatial category. If the context is indeed one that favors the allocentric strategy, there is no problem and the hearer will correctly adopt the category as allocentric. If, however, the context favors a conceptualization strategy from the viewpoint of the hearer and the hearer does not share the strong preference for the allocentric strategy with the speaker, he will adopt the term as part of a hearer strategy. Even though the category is essentially unaligned, because different agents see it as part of different strategies, it might still be quite successful, particularly in contexts where there are few objects. If the success of the category is above $50\si{\percent}$, it is very hard for the alignment\is{alignment} mechanisms to remove it, since success is rewarded much more than failure is punished to allow the system to get off the ground. In such cases, misalignment can appear while retaining success rates of above $50\si{\percent}$ in communication. In other words, agents might get stuck in local optima and, without additional mechanisms, might be unable to get out of such conditions.
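To make the bookkeeping behind these dynamics concrete, the following sketch shows one possible implementation of the chunk score update and of the similarity measures defined earlier in this section. It is written in Python; the reward and punishment constants are illustrative assumptions and not the exact values used in the experiments.
\begin{verbatim}
import math

DELTA_SUCCESS = 0.1    # reward for chunks used in a successful interaction
DELTA_FAILURE = 0.05   # punishment after a failure (smaller: success counts more)
DELTA_UNUSED  = 0.01   # slight punishment for competing chunks that were not used

def update_chunk_scores(agent, used, success):
    """Update strategy (chunk) scores after one interaction.

    agent maps chunk names to scores in [0, 1]; chunks are never removed,
    a score of 0.0 merely takes them out of routine processing."""
    for chunk in agent:
        if chunk in used:
            delta = DELTA_SUCCESS if success else -DELTA_FAILURE
        else:
            delta = -DELTA_UNUSED
        agent[chunk] = min(1.0, max(0.0, agent[chunk] + delta))

def acss(agent1, agent2, strategies):
    """Agent conceptualization strategy similarity: exp of minus the
    summed score distances, as in the formula given above."""
    return math.exp(-sum(abs(agent1[s] - agent2[s]) for s in strategies))

def css(population, strategies):
    """Population-level similarity: average acss over all pairs of agents."""
    pairs = [(a, b) for i, a in enumerate(population) for b in population[i + 1:]]
    return sum(acss(a, b, strategies) for a, b in pairs) / len(pairs)
\end{verbatim}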
%Lastly, we can consider one particular parameter of the system which is the close coupling
%of categories and conceptualization strategies. So the question is what if that coupling
%is less strict. What is clear is that typically language systems are developed for a very specific
%purpose and then potentially over time they grow to fulfill different functions widening their
%area of application. For spatial language paths from body parts to egocentric to allocentric
%developments have been sketched and attested in the literature... So does not seem implausible
%to assume a strong coupling between the strategy and the categorization system it invents.
%But how these two components of conceptualization precisely interact is far from being
%determined conclusively which leaves the question of modeling...
%
%In the most extreme case of no coupling, categories invented in one strategy will
%be immediately available for use in other strategies just because they rely
%on the same cognitive operations.

\section{Alignment for frame of reference strategies}\is{alignment}
The same mechanisms explaining the alignment of reference object conceptualization strategies can be used to explain the alignment of other components of spatial language such as frames of reference, e.g. intrinsic and absolute conceptualization strategies. The claim is that agents can deal with the problem of aligning frames of reference by aligning chunks based on environmental conditions that favor one over the other. In particular, I revisit the problem discussed in Sections \ref{s:category-acquisition} and \ref{s:category-formation}, which argued that agents who are equipped at the same time with projective and absolute strategies are unable to develop category systems without clear preferences for one of the two strategies. Here, I turn to the question of how such preferences can develop from scratch based on the long-term tracking of the success of each strategy. Just as for reference objects, I study environmental conditions and their impact on the success of strategies.

% absolute and intrinsic revisited
To understand the impact of environmental conditions, we have to understand the prerequisites for the two strategies at hand. The absolute strategy requires the environment to exhibit absolute features such as a global landmark. The global landmark must be present in some contexts and it must help to discriminate objects in the context. That is to say, in environmental conditions where there are no absolute landmarks or where the direction to the absolute landmark is not discriminating, agents do not develop an absolute system. For intrinsic systems, the environment, or more specifically the landmark objects that are used in conceptualization, has to have properties that allow a direction to be conceptualized with respect to the reference object. So one can conclude that in environments where landmarks do not have an orientation, or where the direction of objects in each context is not discriminating with respect to the orientation of landmark objects, no intrinsic system will develop.

A word of caution is in order here: I will talk about the environment as having intrinsic or absolute features. This is in many ways loose talk, as it is never the environment that has such features; rather, the environment is conceptualized by humans or robots as having such features, and it is never the environment itself that has an absolute landmark or a landmark with intrinsic features.
Now, a mountain range or any other feature of some environment may license or in some ways encourage agents to use the object as a global feature, but, certainly, the decision as to what counts as a global feature is still part of an active cognitive process. This process is simulated, here, by manipulating the spatial context to include an absolute landmark or a landmark that has an inherent orientation. Hence, I talk loosely about environmental conditions pertaining to absolute and intrinsic features, but readers have to keep in mind that this really is scaffolding much more complex processes. % interaction of strategies The strategies an agent possesses always interact in local communicative interactions. Usage of a particular strategy in production, interpretation and invention of spatial relations is exclusively governed by the discriminative power of each strategy. In cases where the discriminative power of two strategies is equal, this leads to a problem in the sense that the agent cannot decide which strategy to use. This problem is especially pressing when agents have not started inventing spatial relations yet and need to decide which strategy to use. This sort of situation is precisely the problem occurring when intrinsic and absolute systems interact in conditions that license both. In order for agents to successfully develop a category system, the symmetry of equal discriminative power of intrinsic and absolute categories, must be broken. The mechanisms for alignment\is{alignment} of conceptualization strategies can help break this symmetry by tracking the success of each strategy. Even if only in a few contexts there is a clear advantage for one of the strategies, the scoring of conceptualization strategies allows agents to track this advantage at which point the success can carry over to contexts that license no particular preference. For instance, if one context out of many features only an absolute landmark and no intrinsic features, agents use this context to start an absolute category system at the same time rewarding the absolute conceptualization strategy. This initial reward carries over to other contexts which feature both intrinsic and absolute features. The head start of the absolute strategy then leads to additional absolute spatial relations being developed which in turn make the absolute system more successful in communication rewarding both the individual absolute relations as well as the overall strategy. In this way the local discriminative power leads to consensus on the population level over time as to which strategy to use. The alignment\is{alignment} on the population level can override the particular discriminative power of strategies in some specific context. That is to say, once there is an established strategy within the population, it gets chosen even in cases where another strategy would be more discriminating or equally discriminating. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic} \includegraphics[width=0.9\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-alignment} \end{center} \caption[Results category formation and frame of reference alignment\is{alignment}]{% Dynamics of a category formation experiment in which 10 agents align the frame of reference used in conceptualization. The environment has a clear preference for the absolute frame of reference in that it is the only frame of reference available in certain context. 
In $50\si{\percent}$ of the contexts both intrinsic and absolute frame of reference are available, in the remaining $50\si{\percent}$ of the contexts only an absolute frame of reference is available. The strong preference of the environment drives agents to develop an absolute system which dominates across 25 runs of the same experiment. The graph shows that together with the category system agents align their conceptualization strategy.} \label{f:chunk-alignment-frames-absolute-vs-intrinsic} \end{figure} \subsection{Experimental setup} I test the power of chunk alignment\is{alignment} using contexts which can be manipulated to feature absolute and intrinsic properties. More specifically, I manipulate the distribution of intrinsic and absolute properties in the environment. Figure \ref{f:chunk-alignment-frames-absolute-vs-intrinsic} shows the dynamics of an experiment where agents start equipped with two strategies: an absolute and an intrinsic one. The environment is such that it favors absolute systems. In $50\si{\percent}$ of the scenes both intrinsic and absolute features are present. In the remaining $50\si{\percent}$ of the contexts only absolute features are present and no intrinsic ones. The environmental conditions have a strong effect on the development of the system in that all 25 populations agree on using an absolute strategy. What is important is that the contexts where only absolute features are present reward the absolute strategy and punish the intrinsic conceptualization strategy. Consequently, even in contexts where intrinsic and absolute features are present, the absolute strategy is preferred. The development of such a preference has important effects on the invention of categories. Because of the preference for the absolute strategy, invention of categories shifts to producing only absolute categories. The successful use of these categories enforces the absolute strategy and leads to further punishment of the intrinsic strategy. The effect is that only the absolute strategy survives. \subsection{Results} The influence of the distribution of features in the environment and its impact on the developing system are shown in Figures \ref{f:chunk-alignment-frames-absolute-vs-intrinsic-bar-plot} and \ref{f:chunk-alignment-frames-absolute-vs-intrinsic-detail}. Both figures show results for different experimental conditions. In all conditions $50\si{\percent}$ of the scenes feature both intrinsic and absolute properties. The conditions differ only in how the remaining $50\si{\percent}$ of scenes are divided. The following table overviews the conditions. \begin{center} \begin{tabular}{SSSS} \lsptoprule {condition} & {both (\si{\percent})} & {intrinsic only (\si{\percent})} & {absolute only (\si{\percent})} \\ \midrule%\hline\hline 0.0 & 50 & 0 & 50\\ %\hline 0.25 & 50 & 12.5 & 37.5 \\ % \hline 0.50 & 50 & 25 & 25 \\ %\hline 0.75 & 50 & 37.5 & 12.5 \\ %\hline 1.0 & 50 & 50 & 0\\ % \hline \lspbottomrule \end{tabular} \label{t:conditions} \end{center} Conditions are named after the percentage of intrinsic only scenes in the $50\si{\percent}$ of scenes that feature either only intrinsic or only absolute properties. Figure \ref{f:chunk-alignment-frames-absolute-vs-intrinsic-detail} shows the average score of the absolute and intrinsic strategies (projective). In all three cases, $50\si{\percent}$ of the scenes feature both intrinsic and absolute properties. On the top results are shown for an environment that in the remaining $50\si{\percent}$ has only absolute features. 
The middle figure shows results where $25\si{\percent}$ have only absolute features and $25\si{\percent}$ only intrinsic features. The bottom figure shows results for $50\si{\percent}$ intrinsic features. Clearly, agents in all cases align strategies reflecting environmental conditions. If the environment clearly supports an absolute strategy, the absolute strategy wins. If there are clear advantages for having an intrinsic strategy (bottom), the intrinsic strategy takes over and suppresses the absolute strategy.

Figure \ref{f:chunk-alignment-frames-absolute-vs-intrinsic-bar-plot} compares communicative success as well as the resulting average scores of each strategy. Interestingly, the middle condition $0.5$ exhibits a significant drop in communicative success to around $80\si{\percent}$. The reason for this can be seen in Figure \ref{f:chunk-alignment-frames-absolute-vs-intrinsic-detail} (middle), which shows the dynamics of a single run of such an experiment. In this condition, no strategy is particularly favored and, to be reasonably successful, both strategies are necessary. This leads to both strategies having high scores. At the moment of inventing a new category, the two agents might have slightly different preferences for strategies based on their respective recent history of interaction. One agent might have just used the intrinsic strategy successfully, whereas the other has just used the absolute strategy. In this situation, invention happens wrongly in the sense that one agent might invent an absolute category and the other might adopt it as part of an intrinsic strategy. Such categories, which have different types across the population, are the cause of the drop in success. Overall, however, the system is able to self-organize under different conditions and successful communication systems emerge.

\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-bar-plot}
\end{center}
\caption[Comparison for different distributions of intrinsic and absolute features]{%
Results for experiments with different distributions of intrinsic and absolute features. Each condition was tested with 25 experiments of populations with 10 agents each. The figure shows the communicative success\is{measures!communicative success} and the scores of the absolute and intrinsic strategy averaged over the multiple runs.}
\label{f:chunk-alignment-frames-absolute-vs-intrinsic-bar-plot}
\end{figure}

\begin{figure}
\begin{center}
%\includegraphics[width=0.7\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-00.pdf}
%\includegraphics[width=0.7\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-05.pdf}
%\includegraphics[width=0.7\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-10.pdf}
\includegraphics[width=0.8\columnwidth]{figs/chunk-alignment-frames-absolute-vs-intrinsic-together}
\end{center}
\caption[Dynamics of alignment\is{alignment} for different environmental conditions]{%
Dynamics of alignment\is{alignment} for different environmental conditions. The graphs show how the average scores of the absolute and intrinsic strategies unfold over time.
The top figure shows condition 0.0 (see Table \ref{t:conditions}), the middle figure shows condition 0.5, and the bottom figure shows condition 1.0.}
\label{f:chunk-alignment-frames-absolute-vs-intrinsic-detail}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{figs/conceptualization-strategy-invention-1.png}
\end{center}
\caption[Search for new conceptualization strategies]{%
When agents are unable to conceptualize, they search for new conceptualization strategies by assembling cognitive operations into new chunks. Every node is a particular semantic structure that is immediately tested using the current context and the current topic. Here, different possible new strategies (green nodes) were found. Other possible strategies did not work on the current context (blue nodes).}
\label{f:strategy-invention-1}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width=1.0\columnwidth]{figs/conceptualization-strategy-invention-2.png}
\end{center}
\caption[Effect of new conceptualization strategies on the search process]{%
The new strategies assembled by the agent (Figure \ref{f:strategy-invention-1}) are immediately stored in new chunks. This has the immediate effect of condensing the search process. The new strategies are now in competition with each other, as well as with strategies already invented by the agent earlier. Which of these new strategies survives depends on the context and topic that agents are processing at the moment of invention.}
\label{f:strategy-invention-2}
\end{figure}

\section{Invention of conceptualization strategies}
The question of how conceptualization strategies can align in a population is important. However, another important ingredient is, of course, how conceptualization strategies come into existence in the first place. Invention is a necessary prerequisite for the usage of conceptualization strategies and their alignment\is{alignment} in a population. Invention of conceptualization strategies is based on the recruitment\is{recruitment} of basic cognitive operations which are assembled into chunks. Once a chunk is invented, it immediately extends the conceptualization capabilities of agents.

Invention of a particular conceptualization strategy is always based on a specific communicative situation, a specific context and a specific topic. The starting point for invention is a problem in communication. The agent is unable to conceptualize a meaning for some topic, or the meaning that he was able to conceptualize is not discriminative enough. To solve the problem, the agent starts an elaborate search process (see Figures \ref{f:strategy-invention-1} and \ref{f:strategy-invention-2}) which combines basic cognitive operations and the set of conceptualization strategies already established by the agent into new strategies. This process leads to new strategies, which are tested on the current context based on the current communicative goal. The discriminative power of the new strategies decides whether and which strategy is stored for future use. Strategy invention is deeply integrated into the processing of agents. Agents unable to conceptualize, or unable to conceptualize with a sufficient discrimination score, diagnose a problem which is fixed by a repair that starts the search for new conceptualization strategies.
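The following sketch illustrates this diagnose-and-repair loop. It is a schematic rendering in Python, not the actual implementation: \verb+discrimination_score+ and \verb+compose_candidates+ stand in for the IRL evaluation and chunking machinery, and the threshold value is an illustrative assumption.
\begin{verbatim}
SUFFICIENT = 0.9   # illustrative discrimination threshold for skipping invention

def conceptualize_or_invent(agent, context, topic):
    """Try the agent's existing strategies first; invent a new chunk only
    when none of them discriminates the topic well enough."""
    scored = [(s, discrimination_score(s, context, topic) * s.score)
              for s in agent.strategies if s.score > 0.0]
    if scored:
        best, value = max(scored, key=lambda pair: pair[1])
        if value >= SUFFICIENT:
            return best                    # no problem diagnosed, no invention

    # Repair: search over new combinations of cognitive operations and
    # already established chunks, testing each candidate on the current scene.
    candidates = compose_candidates(agent.cognitive_operations, agent.strategies)
    best_new = max(candidates,
                   key=lambda c: discrimination_score(c, context, topic),
                   default=None)
    if best_new is not None:
        agent.strategies.append(best_new)  # stored as a new chunk for future use
    return best_new
\end{verbatim}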
The reason for this integration with other invention mechanisms, such as category invention, is that agents, when inventing new strategies, also immediately have to invent new categories with these strategies, because the success of a spatial strategy is tightly connected with the spatial categories that are part of it. This sort of dual invention is especially important in the beginning of experiments, when agents have neither developed strategies nor categories. But there is a second reason for the deep integration of strategy invention. When an agent has already developed a strategy, he might also solve a particular communicative problem by inventing new categories for these established strategies. Such decisions, whether to use a new category with an existing strategy, a new strategy with an existing category, or even a newly invented strategy with a newly invented category, are made based on the discriminative power of each of these possibilities. So, for instance, if an existing strategy has a low score, the probability of inventing a new strategy increases, whereas if the current topic can be sufficiently discriminated using an existing strategy, no invention occurs.

Figure \ref{f:strategy-invention-dynamics} shows the process of invention and alignment of conceptualization strategies in a population of agents. In the experiment generating these results, agents have a large repository of basic cognitive operations from which they can draw new building blocks whenever there are problems in communication. They can choose different landmarks (the robot or the box) and different category systems (absolute and intrinsic projective, as well as proximal). The agents manage to agree on one particular strategy while at the same time developing a category system and a lexicon from scratch. The process, however, does not show the same overall success as in the previously discussed experiments. The reason is that conceptual alignment is a difficult process which is complicated by the number of choices in strategies, the population size (10 agents) and the variety of different contexts and discriminative situations, which might all favor different strategies. In some contexts proximal is the best strategy, while others allow absolute and/or intrinsic categories to be invented. Nevertheless, agents do come to an agreement: here, they agree on average on a single conceptualization strategy.

\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{figs/chunk-alignment-category-invention-success}
\includegraphics[width=0.9\columnwidth]{figs/chunk-alignment-category-invention-alignment}
\end{center}
\caption[Results for strategy invention, alignment and category development]{%
Results for strategy invention, alignment and category development. A population of 10 agents develops conceptualization strategies as well as lexical systems for the spatial categories corresponding to these strategies.}
\label{f:strategy-invention-dynamics}
\end{figure}

\section{Discussion}
% is good can explain 1) emergence of single reference systems 2) self-organization of conceptualization
% strategies
Invention and alignment of conceptualization strategies are powerful processes that together allow agents to develop successful communication systems in the face of varied environmental conditions.
In turn, this allows agents to be more adaptive and ultimately more successful than the systems relying exclusively on categorization without taking into account reference objects, frames of reference and the different ways of conceptualizing space. The study of conceptualization strategies is necessarily an important cornerstone in every theory of linguistic selection\is{selection} that has meaning as an important part of the theory. The mechanisms proposed in this section are general enough that they can, in principle, be applied to many different components of language, spatial language being only one of them. For the particular theory of linguistic selection\is{selection} pursued in this book, this section provides substantial evidence in the form of concrete computational experiments. Language systems and in particular the conceptualization strategies underlying language systems, are the product of a cultural process based on the recruitment\is{recruitment} of cognitive operations and the environmental conditions the agents face. This section shows that alignment of conceptualization strategies based on invention and selection\is{selection} can be successful if the environment exhibits strong incentives for developing certain conceptualization strategies rather than others. If this condition is met, discriminative power together with tracking the success of strategies, which is the organizing principle used in this chapter, can successfully orchestrate the self-organization\is{self-organization} of a complete lexical communication system including the conceptualization strategy that gives rise to the communication system. However, despite its success conceptual alignment has its limits, particularly in the approach presented here which relies exclusively on discrimination. For instance, from a theoretical standpoint the relative conceptualization strategy is the most dominant of the absolute, intrinsic and relative strategies. All these three strategies have in common that they rely on angular relationships between objects and reference objects. The relative system in comparison to the other two angle based strategies however does not require additional intrinsic or absolute features, hence, in theory it is applicable in every spatial scene that features some landmark. This leads to a complete takeover of the relative system in experiments where relative, intrinsic and absolute systems compete and where all scenes include a box landmark. Table \ref{t:absolute-vs-intrinsic-vs-relative} summarizes results in which certain spatial scenes have neither intrinsic nor absolute features. \begin{table} \caption[Results absolute, intrinsic and relative strategy competition]{% Results for absolute, intrinsic and relative conceptualization strategy competition in different environmental conditions. This table shows communicative success and the final scores of the relative, intrinsic and absolute strategy (all allocentric) after 10000 interactions (10 agents, 25 runs averaged). 
Table \ref{t:experimental-conditions-absolute-vs-intrinsic-vs-relative} explains the different conditions.}
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{SSSSS}
\lsptoprule
{condition} & \multicolumn{1}{p{.2\textwidth}}{communicative success (\si{\percent})} & \multicolumn{1}{p{.2\textwidth}}{score relative\newline strategy} & \multicolumn{1}{p{.2\textwidth}}{score intrinsic\newline strategy} & \multicolumn{1}{p{.2\textwidth}}{score absolute\newline strategy}\\
\midrule
0.25 & 100 & 1.0 & 0.0 & 0.0\\ %\hline
0.50 & 100 & 1.0 & 0.0 & 0.0\\ %\hline
0.75 & 100 & 1.0 & 0.0 & 0.0\\ %\hline
1.0 & 100 & 1.0 & 0.0 & 0.0\\ %\hline
\lspbottomrule
\end{tabular}
}
\end{center}
\label{t:absolute-vs-intrinsic-vs-relative}
\end{table}

The strong advantage of relative frames of reference over the others hints at why they might have emerged, but it cannot explain why certain languages seem to prefer other types of frames of reference over the relative one. Findings in natural language suggest, for example, that English speakers prefer the use of intrinsic frames of reference over relative frames of reference. The favored usage of intrinsic systems in English hints at the influence of important additional factors besides discrimination that govern the success of a particular strategy. One such additional factor, which was not studied in this section, is cognitive complexity. For instance, relative systems are generally considered to be cognitively more demanding because they require tracking of perspective. It is relatively easy to add such constraints to the current system, but running such experiments essentially requires one to put a number on how much the cognitive complexity of the relative system is thought to differ from that of the other strategies. To avoid such ad hoc quantities I did not pursue this idea. Nevertheless, based on the experimental evidence described in this section, one can predict what will happen when factors additional to discriminative advantage are incorporated into the system. The mechanisms presented in this section function by packaging the successful conceptualization of a single context into strategies and tracking the success of these strategies over many interactions, amplifying patterns in environmental conditions and their effects over time and punishing rival strategies. Consequently, additional factors favoring a particular strategy lead to faster and more stable alignment in the population. Cognitive complexity, for instance, might be such a factor influencing alignment; in the best case it leads to more robust alignment.

\begin{table}
\caption[Statistical distribution of features in the environment]{
Statistical distribution of features in the environment for the different experimental conditions compared in Table \ref{t:absolute-vs-intrinsic-vs-relative}.}
\begin{center}
\begin{tabular}{SSSSS}
\lsptoprule
{condition} & {relative} & {intrinsic +} & {absolute +} & {absolute + intrinsic +} \\
 & {only (\si{\percent})} & {relative (\si{\percent})} & {relative (\si{\percent})} & {relative (\si{\percent})} \\
\midrule
0.25 & 25 & 30 & 7.5 & 37.5\\ %\hline
0.50 & 50 & 20 & 5 & 25 \\ %\hline
0.75 & 75 & 10 & 2.5 & 12.5 \\ %\hline
1.0 & 100 & 0 & 0 & 0\\
\lspbottomrule
\end{tabular}
\end{center}
\label{t:experimental-conditions-absolute-vs-intrinsic-vs-relative}
\end{table}

Despite its success, conceptual alignment as presented in this section is a process which requires the right conditions in order to flourish.
Because agents not only develop strategies but also build category systems at the same time, the system is very powerful, and even in cases where there is almost no strategy alignment agents can reach medium levels of communicative success\is{measures!communicative success}. The concurrent development of categories makes the system so powerful that in some cases alignment of strategies is prevented. Consequently, agents can be rather successful even though the strategies they use are not entirely the same.

This brings us to another point: the role of exaptation. The underlying assumption in all of the experiments in this section is that strategies co-evolve with the categories and lexical systems from scratch. But oftentimes new conceptualization strategies can re-use existing material, including categories as well as lexical and grammatical constructions, and extend it to work within the new conceptual space that a strategy spans. This has been attested in spatial language, which is often thought to originate in language for body parts (see for example \citealt{maclaury1989zapotec}\oldindex{MacLaury, R. E.}). For the results in this section this means that the harsh condition that agents start from virtually nothing can be relaxed. Applying this insight to exaptation, one can predict that when existing systems are exapted and there are strong incentives in the population that direct the invention of new strategies, this has positive effects on success and on alignment in conceptualization. A more detailed account of this phenomenon is, however, deferred to later sections.

The results presented in this section are interesting because they show that agents can negotiate conceptualization strategies without marking them explicitly in language. Indirect feedback via the spatial relations and the lexical system associated with a particular spatial strategy is enough to allow the system to organize itself. However, an important factor of linguistic systems was deliberately avoided in the discussion so far: the role of syntax. Many of the problems which cannot be solved by discrimination alone can be solved by agents when they are able to mark and express their strategies in more complex ways than studied so far. The next sections pick up on this theme and gradually introduce additional invention and learning mechanisms particular to the mapping from conceptualization strategies to syntactic structure.

% \bibliographystyle{diss}
% \bibliography{papers,space}
% \end{document}
Comma-separated value (CSV) files store data, both numeric and text, as plain text. Because of this, CSV files provide two main benefits: they can be opened using a wide variety of software, including free and open-source software, and they are not tied to any particular version of software. This flexibility makes CSV files particularly well suited for collaboration and for data sharing. For example, Dryad, the online data repository for data underlying peer-reviewed publications, prefers the use of plain-text formats such as CSV. As the use of such public repositories is increasingly required by journals such as Evolution, Molecular Ecology, and American Naturalist, developing a workflow using CSV files can greatly facilitate the publishing process.

\section{Structure of CSV Files}

While there is no official structure to CSV files, there is a common format often followed when dealing with data. In particular, data should be organized as a table, with one record per row, where each record has the same number of elements. Each element in a record is separated from the next with a specified \emph{delimiter}. Although the use of commas as delimiters is common, as the name implies, other delimiters such as tabs and semicolons are also frequently used. Each element can be text, numeric data, or a combination of the two. For example, the following is a valid CSV entry:

\begin{lstlisting}
Wild Type, 2, 3.0, 8/2
\end{lstlisting}

This example has four elements per entry. The first element, \emph{Wild Type}, is text. The second and third elements are numeric data. Finally, the fourth element contains a combination of numeric and text data. Whenever text data are included, that element is interpreted as text.

Because CSV files arrange data in a tabular format, each element shares some relationship with the corresponding element in other records. In other words, the elements along each column should contain data corresponding to the same aspect of whatever is being recorded in the dataset. In the example data above, the first element in each record represents the phenotype of the organism for which the measurement was taken.

\subsection{Annotating CSV Files}

To make datasets easier to understand, \emph{metadata}, or additional information about the data, can be added through the use of \emph{headers} and \emph{comments}. A header row is used to describe the data stored in each column of a dataset. Although there is no official specification, many software packages that support headers expect them to be on the first line of the file. Comments allow CSV files to contain additional notes about the data, such as a description of when and where the data were acquired, how the dataset was obtained, or any remarks about a specific data point. Comments are identified by a single character, typically \lstinline!#!, at the beginning of a line, and specify that all subsequent text on that line should be ignored. For comments that span multiple lines, a comment character must be included at the beginning of each line. The following dataset includes both headers and comments:

\begin{lstlisting}
Temperature,Row,Column,Luminescence
# Luminescence of evolved V. harveyi
# Eric Bruger - 2012/06/27
26.3,0,0,7444.945
26.3,0,1,4845.375
26.3,0,2,4056.362
# Look at this luminescence!!!
26.3,0,3,4883.137
26.3,0,4,3593.289
26.3,0,5,2645.281
26.3,1,2,10507.588
\end{lstlisting}

It is important to note, however, that not all programs that support CSV-formatted files support headers or comments.
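Where that is the case, the metadata can be stripped out before the file is opened. As a rough sketch (the input and output file names here are only placeholders, and the comment character is assumed to be \lstinline!#!), a few lines of Python are enough to write a copy of the file without its comment lines:

\begin{lstlisting}[language=Python]
# Copy the file, skipping any lines that start with the comment character
with open('luminescence.csv', 'r') as infile, \
     open('luminescence-stripped.csv', 'w') as outfile:
    for line in infile:
        if not line.lstrip().startswith('#'):
            outfile.write(line)
\end{lstlisting}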
The same clean-up can also be done by hand, or with several common tools available on the Unix command line, which will be introduced later in this chapter.

\subsection{Including Replicates}

In most cases, datasets will contain measurements from multiple replicates. For example, the luminescence data might contain data from reads of multiple plates. Since these data describe the same thing, it makes sense for them to be stored in the same file. However, if we just added these data to the end of the file, it would not be possible to differentiate between the data for row 0, column 0 of one plate and the same well on any other plate if we kept the Temperature-Row-Column-Luminescence format. To handle replicates, we can add a new column to each entry that specifies the plate from which each data point was acquired.

\begin{lstlisting}
Plate,Temperature,Row,Column,Luminescence
# Luminescence of evolved V. harveyi
# Eric Bruger - 2012/06/27
Plate1,26.3,0,0,7444.945
Plate1,26.3,0,1,4845.375
Plate1,26.3,0,2,4056.362
Plate1,26.3,0,3,4883.137
Plate1,26.3,0,4,3593.289
Plate1,26.3,0,5,2645.281
Plate2,30.0,0,0,5713.744
Plate2,30.0,0,1,3491.94
Plate2,30.0,0,2,2851.252
Plate2,30.0,0,3,3872.232
Plate2,30.0,0,4,2632.069
Plate2,30.0,0,5,1594.228
\end{lstlisting}

\subsection{Time Series Data}

Similarly, time series can be thought of as measurements replicated over time. To augment our dataset to show multiple reads of the plates over time, we can simply add a column that indicates when the measurement was taken:

\begin{lstlisting}
Plate,Time,Temperature,Row,Column,Luminescence
# Luminescence of evolved V. harveyi
# Eric Bruger - 2012/06/27
Plate1,0:00,26.3,0,0,7444.945
Plate1,0:00,26.3,0,1,4845.375
Plate1,15:00,30.1,0,0,6088.0
Plate1,15:00,30.1,0,1,3976.694
Plate1,30:00,30.0,0,0,6563.678
Plate1,30:00,30.0,0,1,4188.048
Plate2,0:00,30.0,0,0,6716.929
Plate2,0:00,30.0,0,1,4153.633
Plate2,15:00,30.0,0,0,6672.662
Plate2,15:00,30.0,0,1,4167.991
Plate2,30:00,30.0,0,0,5810.844
Plate2,30:00,30.0,0,1,3652.258
\end{lstlisting}

As another example, the data below show reaction counts in one Avida population over 1,000 updates:

\begin{lstlisting}
Update,NOT,NAND,AND,ORN,OR,ANDN,NOR,XOR,EQU
# Reaction counts
# Brian Connelly - 2012/03/03
98000.0,172.0,2.0,33.0,35.0,2167.0,1007.0,4377.0,0.0,0.0
98100.0,195.0,4.0,40.0,28.0,2185.0,1085.0,4408.0,0.0,0.0
98200.0,191.0,2.0,37.0,31.0,2147.0,1004.0,4278.0,0.0,0.0
98300.0,177.0,5.0,32.0,27.0,2239.0,904.0,4363.0,0.0,0.0
98400.0,191.0,6.0,45.0,30.0,2285.0,986.0,4390.0,0.0,0.0
98500.0,187.0,11.0,34.0,22.0,2277.0,1072.0,4485.0,0.0,0.0
98600.0,205.0,6.0,38.0,38.0,2417.0,956.0,4449.0,0.0,0.0
98700.0,158.0,7.0,48.0,21.0,2461.0,930.0,4501.0,0.0,0.0
98800.0,176.0,8.0,58.0,20.0,2265.0,931.0,4267.0,0.0,0.0
98900.0,150.0,5.0,45.0,31.0,2199.0,1030.0,4440.0,0.0,0.0
\end{lstlisting}

Avida output data can be converted to CSV using the \lstinline!avida2csv.py! script included with BEACONToolkit.

\section{Excel and CSV files}

Support for reading and writing data in CSV format is included in Microsoft Excel and each of the Excel-like spreadsheet programs (e.g., Numbers, Google Docs, OpenOffice Calc). As with the native formats, CSV files can be opened with the \textbf{Open} item in the \textbf{File} menu. To save data as a CSV file in Excel, the \textbf{Save As} item in the \textbf{File} menu is used.
As shown below, the \emph{Format} should be set to \emph{Comma Separated Values (.csv)}. Menu options for other spreadsheets vary slightly.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{../csv/doc/figures/excel-saveas.png}
\caption{Saving data as CSV with Excel}
\end{figure}

It should be noted, though, that formulas included in spreadsheets will not be saved in the resulting CSV files, only their values.

\subsection{Transposing Column-Based Data}

CSV data is intended to be row-based, with each row representing a data point. To export data that have been arranged in a column-based layout (see example below), the data must first be transposed.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{../csv/doc/figures/excel-horizdata.png}
\caption{Column-based data in Excel}
\end{figure}

The easiest way to accomplish this is to select the data and copy it. Then, select the cell that will be at the upper left of the transposed data, select \textbf{Paste Special\ldots{}} from the \textbf{Edit} menu, and choose the \emph{Transpose} option before selecting the \textbf{OK} button.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\columnwidth]{../csv/doc/figures/excel-paste_special.png}
\caption{Excel's \textit{Paste Special} Dialog Window}
\end{figure}

Now that the data are arranged in rows, the other data can be deleted, and the spreadsheet can be saved as a CSV file as described previously. This method of copying data and pasting it transposed is only supported in Excel and OpenOffice Calc.

In Google Docs (as well as Excel), data can be transposed using the \textbf{TRANSPOSE} function. To do this, first select a region of empty cells that is equal in size to the data to be transposed. For example, if the column-based data occupies 3 rows by 9 columns as in the picture above, select an area that is 9 rows by 3 columns. Once the target region has been selected, enter:

\begin{lstlisting}
=TRANSPOSE(A1:I3)
\end{lstlisting}

This takes the data from the region bounded by cell A1 in the upper left and cell I3 in the lower right, transposes it, and pastes it into the selected region. Excel users should conclude entering this formula with Control-Shift-Enter instead of Enter.

Unfortunately, Numbers does not provide any easy way to transpose data. The best plan for these situations would be to export the column-based data as a CSV file, read that file using the Python tools described later in this chapter, and transpose the data in Python with a function like \emph{transpose} in NumPy.

\section{R and CSV files}

Excel is a great tool for creating CSV files and for doing quick analyses, but often using another tool built specifically for data manipulation and analysis will prove useful. Using R (or Python) for your data needs has many advantages, including the ability to save analysis scripts so they can be applied to new or different datasets easily, and a large open-source community constantly contributing new packages. The R language is particularly well suited for data analysis since it was originally written by statisticians, and they still make up a large part of its user base.

\subsection{Reading CSV files}

Since data is so central to R, dealing with CSV files is remarkably well incorporated into the language's base functionality. The most common way to read CSVs, and the one we will use here, is \lstinline!read.csv!. You can see the R help page for this function by typing \lstinline!?read.csv!.
The help page also describes several other functions you can use to import data.

First, we should tell R which directory we'll be working in so it knows where to load files from. We can do that with the \lstinline!setwd! function:

\begin{lstlisting}[language=R]
setwd('~/BEACONToolkit/csv/data')
\end{lstlisting}

Don't worry too much about the \lstinline!~/! in the path; it is a Unix way of addressing relative directories. If you're using Windows, you may run into a few gotchas with directories. The easiest way to get around all of them is to always use full paths (i.e., ignoring the \lstinline!~/!) and to always use forward slashes instead of backslashes. For example, if your data were located in \lstinline!C:\MyFiles\BEACONToolkit\csv\data!, you should instead type:

\begin{lstlisting}[language=R]
setwd('C:/MyFiles/BEACONToolkit/csv/data')
\end{lstlisting}

Now that we have set R's working directory, we can ask R what files are in there using the \lstinline!list.files! function. R is not meant to be a replacement for the terminal or file browser, but this quick way of viewing the contents of a directory is helpful when you forget the exact name of the file you want to load. If we run this function with our working directory set to the \lstinline!BEACONToolkit/csv/data! directory, we should see our two example datasets:

\begin{lstlisting}[language=R]
list.files()

# output
[1] "avida_reactions.csv" "luminescence.csv"
\end{lstlisting}

We can now import this data using the \lstinline!read.csv! function. Calling this function will return a \lstinline!data.frame! object, which is R's way of internally representing tabular data. We want to store this dataframe in a variable so we can use it over and over without having to load the data every time. To do this in R, we simply run the following command:

\begin{lstlisting}[language=R]
lum_data <- read.csv('luminescence.csv')
\end{lstlisting}

But if we take a look at this data, we can see there is a problem. The \lstinline!head! function will list the first few rows of a dataset, and it is often useful to run it just to make sure there were no problems importing the data. Running \lstinline!summary! will give you some statistical summaries of the data, which is good for making sure the minimum and maximum values of your data make sense and for seeing whether there are any missing values. We can run these two functions, passing lum\_data as a parameter (i.e., \lstinline!head(lum_data)! or \lstinline!summary(lum_data)!). Another way of looking at the data is using the \lstinline!edit! function. We will talk more about this function in the Writing CSV files section.

\begin{lstlisting}
head(lum_data)

# output
                                 Plate        Time Temperature Row Column Luminescence
1 # Luminescence of evolved V. harveyi          NA          NA  NA     NA
2           # Eric Bruger - 2012/06/27          NA          NA  NA     NA
3                               Plate1 00:00:00:00        26.3   0      0     7444.945
4                               Plate1 00:00:00:00        26.3   0      1     4845.375
5                               Plate1 00:00:00:00        26.3   0      2     4056.362
6                               Plate1 00:00:00:00        26.3   0      3     4883.137
\end{lstlisting}

It looks like R thinks the comments are actually entries, and tried to fit them into the dataframe. R does support comments in CSV files, but by default the \lstinline!read.csv! function isn't expecting them. Instead, we can be a little more explicit with our call to \lstinline!read.csv!:

\begin{lstlisting}[language=R]
lum_data <- read.csv('luminescence.csv', header=TRUE, sep=',', comment.char='#')
\end{lstlisting}

The \lstinline!header! and \lstinline!sep!
parameters, which let you specify whether the data contain a header row and which delimiter is used to separate entries, already work correctly by default, but now you can see how easily they can be modified. Now if we take a look at our data, we see that it is being imported correctly by R.

\begin{lstlisting}
head(lum_data)

# output
   Plate        Time Temperature Row Column Luminescence
1 Plate1 00:00:00:00        26.3   0      0     7444.945
2 Plate1 00:00:00:00        26.3   0      1     4845.375
3 Plate1 00:00:00:00        26.3   0      2     4056.362
4 Plate1 00:00:00:00        26.3   0      3     4883.137
5 Plate1 00:00:00:00        26.3   0      4     3593.289
6 Plate1 00:00:00:00        26.3   0      5     2645.281
\end{lstlisting}

\subsection{Data Subsets and Selection}

Now that we have our dataframe, we can start pulling out data and manipulating it using R. The simplest way to access data is by pulling out an entire column. In R, we can access particular columns using the \lstinline!$! operator, which extracts data from objects. To pull out the Luminescence column from our dataset, we simply run:

\begin{lstlisting}[language=R]
lum_data$Luminescence
\end{lstlisting}

Another way we can access data is by indexing into the dataframe, which is the same as indexing into a matrix in R. That means we can pull out particular rows or columns using the \lstinline!data.frame[row,column]! notation. For example, to pull out the Luminescence column (column 6) we could also run:

\begin{lstlisting}[language=R]
lum_data[,6]
\end{lstlisting}

Leaving the \lstinline!row! spot blank tells R to return all of the rows from column 6. We could also ask for particular rows instead of columns by specifying the \lstinline!row! but not the \lstinline!column!, and even ask for a single value by specifying both. R also lets you pass vectors as the indices. For example, maybe we wanted both Time and Luminescence (columns 2 and 6) but didn't care about the rest. We could run the following, which would return a new \lstinline!data.frame! object with only Time and Luminescence as columns:

\begin{lstlisting}[language=R]
lum_data[, c(2,6)]
\end{lstlisting}

The \lstinline!c! function is short for \emph{combine}, and it simply creates a vector out of the values passed in. Allowing vectors as indexing arguments is very powerful and lets us use indexing as a way to subset the data more flexibly. Perhaps we were only interested in the data where Luminescence was at least 500,000 units. By asking R to return only the rows that meet the criterion \lstinline!lum_data$Luminescence >= 500000!, we can get a new dataframe containing a subset of our original data. Under the hood, this actually creates a \emph{masking vector} where each position is TRUE if Luminescence is at least 500,000 and FALSE otherwise, and then returns only the rows corresponding to TRUE values.

\begin{lstlisting}[language=R]
lum_data[lum_data$Luminescence >= 500000, ]
\end{lstlisting}

If we were only interested in particularly luminous data points from row 1, we can add another logical condition to further subset the dataframe:

\begin{lstlisting}[language=R]
lum_data[lum_data$Luminescence >= 500000 & lum_data$Row == 1, ]
\end{lstlisting}

\subsection{Writing CSV files}

Because most of R revolves around \lstinline!data.frame! objects, it is very simple to write new CSV files. Often it will be useful to programmatically add columns or rows to data, and doing so with an R script lets you easily re-create your changes should they be lost, apply them to future datasets, and keep a record of exactly how you changed your data.
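As a small sketch of what that might look like (the derived column and the output file name below are purely illustrative), one could append a new column to \lstinline!lum_data! and write the result back out with \lstinline!write.csv!:

\begin{lstlisting}[language=R]
# Add an illustrative derived column, then save the modified dataframe
lum_data$LogLuminescence <- log10(lum_data$Luminescence)
write.csv(lum_data, 'luminescence-log.csv', row.names=FALSE)
\end{lstlisting}

Setting \lstinline!row.names=FALSE! keeps \lstinline!write.csv! from writing R's row names as an extra first column, which is usually what you want for tabular data like this.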
\section{Python and CSV files}

This section introduces three ways in which CSV files are commonly read, manipulated, and written using Python. The first is Python's \lstinline!csv! module, which is included in all Python installations. The NumPy project, which provides a large collection of objects and functions for scientific computing, also includes functionality for working with CSV files. Finally, we introduce the CSV capabilities of Pandas, a package aimed at providing tools for data analysis in Python.

\subsection{Python's csv Module}

Python's \lstinline!csv! module provides a number of objects that can be used to read and write CSV files. Although these objects are generally more bare-bones than those described later in this section, they still make working with CSV files in Python quite easy.

\subsubsection{Reading}

First, the following Python code opens and reads a CSV file named \lstinline!luminescence.csv!:

\begin{lstlisting}[language=Python]
import csv
myreader = csv.reader(open('luminescence.csv', 'r'))
\end{lstlisting}

where \lstinline!'r'! specifies that the file will be opened for reading. With this new object called \lstinline!myreader!, we can now iterate through the contents of the CSV file line-by-line:

\begin{lstlisting}[language=Python]
for row in myreader:
    print "Read a row:", row
\end{lstlisting}

For each iteration of this loop, the \lstinline!row! variable contains a list of values from that row. Because Python's list indices are zero-based, the value in the second column of the current row is accessed as \lstinline!row[1]!. This csv reader provides no functionality for dealing with headers, comments, or empty lines, so each \lstinline!row! may contain header information, comments, data, or nothing. These shortcomings can be compensated for with additional code that checks the beginning of the file and looks for the presence of the comment character.

Python reads each row as a collection of strings. To convert a value to a numeric value, the \lstinline!int! and \lstinline!float! functions can be used:

\begin{lstlisting}[language=Python]
for row in myreader:
    as_pct = float(row[2])/100
\end{lstlisting}

As a precaution, \lstinline!int! should only be used when you are sure that the row contains integers and not decimal numbers. If the values of all fields are numeric, they can all be converted at once:

\begin{lstlisting}[language=Python]
for row in myreader:
    floatvals = map(float, row)
\end{lstlisting}

Here, \lstinline!floatvals! will be a list containing the numeric values of each field. With these readers, we can create lists of values from particular columns. For example, to calculate the average luminescence for our data:

\begin{lstlisting}[language=Python]
import csv

myreader = csv.reader(open('luminescence.csv', 'r'))
luminescence = []

for row in myreader:
    luminescence.append(float(row[5]))

avg_luminescence = sum(luminescence)/len(luminescence)
\end{lstlisting}

Using techniques like this, we can easily generate statistics for, or create plots of, the columns in a dataset.

\subsubsection{Writing}

The \lstinline!csv! module also supports writing data to CSV files. To create an object that writes data to the file \lstinline!luminescence_modified.csv!:

\begin{lstlisting}[language=Python]
import csv
mywriter = csv.writer(open('luminescence_modified.csv', 'w'))
\end{lstlisting}

where \lstinline!'w'! specifies that we will be opening the file for writing. We can now write rows of data to the file using the \lstinline!writerow!
method:

\begin{lstlisting}[language=Python]
data = [0.2, 0.3, 1.4]
mywriter.writerow(data)
\end{lstlisting}

As a complete example of using the \lstinline!csv! module, let's use the \lstinline!avida_reactions.csv! dataset, which contains the total number of times that each of nine reactions has been completed over the last 100 updates. For each record, we will add a new column that contains the total number of reactions completed in that period of time, and a column that indicates the change in reactions completed from the previous period of time. We will save the new dataset as \lstinline!avida_reactions_modified.csv!.

\begin{lstlisting}[language=Python]
import csv
import re

myreader = csv.reader(open('avida_reactions.csv', 'r'))
mywriter = csv.writer(open('avida_reactions_modified.csv', 'w'))

line_number = 0
prev_total = 0

for row in myreader:
    if len(row) == 0:
        continue  # skip empty rows

    m = re.match("^\s*#", row[0])
    if m:
        continue  # skip comments

    line_number += 1
    if line_number == 1:
        continue  # skip the header

    new_row = map(float, row)

    # Append the total number of tasks completed
    # We skip the first column, because that contains the update
    total = sum(new_row[1:])
    new_row.append(total)

    # Append the change in total tasks completed
    new_row.append(total - prev_total)
    prev_total = total

    # Write the new row
    mywriter.writerow(new_row)
\end{lstlisting}

\subsection{Working with CSVs in NumPy}

Although Python's \lstinline!csv! module makes it fairly easy to read and write CSV files, it requires a lot of code for common tasks like dealing with headers, stripping comments, and extracting data from columns. \href{http://numpy.scipy.org/}{NumPy} is a frequently-used package for scientific computing in Python. Although it is not included with the Python distribution, it is easy to install on most platforms. NumPy and the related \href{http://www.scipy.org/}{SciPy} package provide a large collection of powerful tools for working with collections of data. Because these tools are based around the use of arrays of data, they are particularly well-suited for working with CSV data.

\subsubsection{Reading CSV Files}

One of the most powerful methods for reading CSV files is the \lstinline!genfromtxt! function, which is demonstrated below:

\begin{lstlisting}[language=Python]
import numpy as np
mydata = np.genfromtxt('luminescence.csv', delimiter=',', comments='#')
\end{lstlisting}

This command will read from \lstinline!luminescence.csv!, expecting fields to be delimited by commas and comments to begin with the \# character. Like most tools, it expects the header to be the first row of the file, although this can be relaxed a bit by the optional \lstinline!skip_header! argument, which specifies the number of lines to skip at the beginning of the file. It is also important to note that the comment character cannot appear in a record, even as part of a string. The resulting \lstinline!mydata! will be an array containing a row for each record in the dataset and a column for each field. Using indices, we can then obtain specific rows, columns, or cells:

\begin{lstlisting}[language=Python]
col3 = mydata[:,3]      # Extract the fourth column (NumPy arrays are 0-based)
row500 = mydata[499,:]  # Extract the 500th row
value = mydata[1299, 4] # Get the value of the 5th element of the 1300th row
\end{lstlisting}

NumPy arrays contain only numeric data, so the values associated with text fields will be \lstinline!nan!. To limit the columns included in the array, the optional \lstinline!usecols!
argument can be used to specify the list of columns to include:

\begin{lstlisting}[language=Python]
mydata = np.genfromtxt('luminescence.csv', delimiter=',', comments='#', usecols=(2,3,4,5))
\end{lstlisting}

Sometimes, it is easier to indicate a column by name rather than by number. This can be done using the \lstinline!names! argument, which allows columns to be referred to by the name set in the header. The following example uses this to quickly calculate the average luminescence of the dataset:

\begin{lstlisting}[language=Python]
mydata = np.genfromtxt('luminescence.csv', delimiter=',', comments='#', names=True)
avg_luminescence = np.mean(mydata['Luminescence'])
\end{lstlisting}

By using options such as these and several others, \lstinline!genfromtxt! makes reading CSV files extremely easy. Once the data are loaded, NumPy and SciPy offer tremendous power for working with the resulting arrays.

\subsubsection{Data Subsets and Selection}

We've already seen how to extract individual columns, rows, and elements using indices. Indices can also be used to find subsets of data that match certain criteria. For example, to find all records where the temperature (the third column) of the reading is greater than 30:

\begin{lstlisting}[language=Python]
hitemp = mydata[mydata[:,2] > 30]
\end{lstlisting}

\lstinline!mydata[:,2] > 30! returns a list containing \lstinline!True! and \lstinline!False! values indicating whether or not the criterion is met. By using this list to index the dataset, we find the entries for which the condition is satisfied. Subsets are a very powerful way to look at different pieces of the dataset.

We've already seen how specific rows, columns, and elements can be addressed when using indices. The process is similar when using named columns:

\begin{lstlisting}[language=Python]
mydata['Temperature']     # The temperature column of data
mydata['Temperature'][4]  # The temperature in the 5th row of data
\end{lstlisting}

Named columns can be very useful for selecting subsets. The process of specifying criteria is the same as with indices. To get all readings from the luminescence data when the temperature was above 30 using column names:

\begin{lstlisting}[language=Python]
hitemp = mydata[mydata['Temperature'] > 30]
\end{lstlisting}

For multiple criteria, two steps are usually required. Let's say we want to find all of the readings from the luminescence data for wells on the fourth row where the temperature was above 30:

\begin{lstlisting}[language=Python]
condition = (mydata['Row'] == 3) & (mydata['Temperature'] > 30)
row_hitemp = mydata[condition]
\end{lstlisting}

Multiple criteria can be specified this way using both indices and named columns. In both cases, criteria can be specified using \lstinline!<!, \lstinline!<=!, \lstinline!==!, \lstinline!>=!, and \lstinline!>!.

\subsubsection{Writing CSV Files}

NumPy arrays can easily be saved as CSV files using the \lstinline!savetxt! function. For example, to save the data stored in the \lstinline!platedata! array to the file \lstinline!platedata.csv!:

\begin{lstlisting}[language=Python]
np.savetxt('platedata.csv', platedata, delimiter=',')
\end{lstlisting}

\lstinline!savetxt! does not provide a way to write a header row, so this can be done afterwards using a text editor.

\subsection{Pandas}

\href{http://pandas.pydata.org}{Pandas} is a Python library designed to support data reading, writing, and manipulation. In Pandas, data are organized in \emph{DataFrame} objects much like those used in R.
Pandas is a fairly new project, so it is changing very rapidly. However, it already contains a number of features that make it a powerful tool for reading, writing, and working with CSV data.

\subsubsection{Reading CSV Files}

Like NumPy, Pandas provides a fully-featured function for reading CSV files. When a file is read with Pandas' \lstinline!read_csv! function, the data are stored in a \emph{DataFrame} object.

\begin{lstlisting}[language=Python]
import pandas as p
data = p.read_csv('luminescence.csv', header=0, comment='#')
\end{lstlisting}

In the above example, the \lstinline!comment! argument indicates that all text following the \lstinline!#! character is skipped. Pandas currently does not support line comments, so it is best to strip commented lines from files before reading. Otherwise, line comments produce empty records in the DataFrame. Individual columns in this \lstinline!data! DataFrame can be accessed using their names, which are gathered from the header in the CSV file:

\begin{lstlisting}[language=Python]
print data['Temperature']
\end{lstlisting}

\subsubsection{Data Subsets and Selection}

Similar to NumPy, subsets can also be selected based on some criteria. For example, we can find the records in our luminescence data where luminescence readings were greater than 40:

\begin{lstlisting}[language=Python]
data[data['Luminescence'] > 40]
\end{lstlisting}

Multiple criteria can also be given. If we wanted to find the records for which luminescence was between 40 and 100, we could combine these criteria with an ampersand:

\begin{lstlisting}[language=Python]
data[(data['Luminescence'] > 40) & (data['Luminescence'] < 100)]
\end{lstlisting}

\subsubsection{Grouping}

Another extremely useful feature that Pandas provides is the ability to group data by a given column or columns. For example, the luminescence data could be grouped by wells (rows and columns) so that we could see the average luminescence of each well over time:

\begin{lstlisting}[language=Python]
bywells = data.groupby(['Row', 'Column'])
bywells['Luminescence'].mean()
\end{lstlisting}

Similarly, multiple functions can be applied to the grouped data using the \lstinline!aggregate! function. This could be used to find the sum, mean, and standard deviation of the luminescence values in each well using the \lstinline!sum!, \lstinline!mean!, and \lstinline!std! functions in NumPy:

\begin{lstlisting}[language=Python]
bywells['Luminescence'].aggregate([np.sum, np.mean, np.std])
\end{lstlisting}

\subsubsection{Writing CSV Files}

A DataFrame can be written to a CSV file using the \lstinline!to_csv! function, as shown below:

\begin{lstlisting}[language=Python]
data.to_csv('modified_data.csv')
\end{lstlisting}

By default, a header row is created, and commas are used as separators. Additionally, the DataFrame's index (the row number by default) is included as the first column of every row.

\section{CSV files and the Unix Shell}

Although the Unix command line is a very intimidating place for many people, it contains many programs that can be used to manipulate CSV files very quickly and easily. This is especially useful for those who perform computational work in Unix-based environments such as high-performance computing centers, Linux workstations, and even Apple computers. This section presents a number of these programs, cookbook style, showing how to use commands to accomplish different tasks. As shown in a few examples, the real power of the Unix shell is in connecting several of these commands to perform multiple operations at once.
\subsection{Replacing Newlines}

Before we begin, though, we need to first talk about \emph{newlines}. Although you normally don't see them, the end of each line in a text file contains a newline character. For some software, newlines can be a source of problems, because the specific symbols that Windows computers and Unix computers (including Mac OS X and Linux) use for newlines are different. To use any of the commands shown in this section on files which were created on a Windows machine, the newlines first need to be converted to the Unix format. For example, the \lstinline!dos2unix! command can be used to easily convert \lstinline!myfile.csv!, which was created on a Windows machine:

\begin{lstlisting}[language=bash]
dos2unix myfile.csv
\end{lstlisting}

\subsection{Stripping Comments}

Some programs, such as Apple Numbers, do not support comments in CSV files. Fortunately, comments can very easily be stripped using the \lstinline!grep! command. The following command strips out all lines that begin with the \lstinline!#! character from the file \lstinline!luminescence.csv!:

\begin{lstlisting}[language=bash]
grep -v ^# luminescence.csv
\end{lstlisting}

This command will print the new contents. To save them to a new file called \lstinline!luminescence-nocomments.csv!:

\begin{lstlisting}[language=bash]
grep -v ^# luminescence.csv > luminescence-nocomments.csv
\end{lstlisting}

In the Unix shell, the \lstinline!>! character means to place the output of the previous command into the given file. If a file already exists with that name, it will be replaced by the new one.

\subsection{Removing Headers}

Headers can also be easily removed. To remove the first line from the \lstinline!luminescence.csv! file:

\begin{lstlisting}[language=bash]
cat luminescence.csv | sed "1 d"
\end{lstlisting}

This uses the Unix \emph{pipe} (\lstinline!|!) symbol to connect the output of the \lstinline!cat! program, which just prints the contents of the file, to the \lstinline!sed! program, which we use to filter out the first line. The \lstinline!1! in the parameters to \lstinline!sed! can be changed to allow a different number of lines to be skipped. As before, the results of this command are printed. To save these results as a new file, the output can be redirected:

\begin{lstlisting}[language=bash]
cat luminescence.csv | sed "1 d" > luminescence-noheader.csv
\end{lstlisting}

This command won't quite have the effect we want, though, because the first line in \lstinline!luminescence.csv! is a comment, not a header. We can combine these two commands using pipes to first strip out comments, remove the first line of the resulting data, and save the rest to a new file:

\begin{lstlisting}[language=bash]
cat luminescence.csv | grep -v ^# | sed "1 d" > luminescence-data.csv
\end{lstlisting}

\subsection{Combining Multiple Files}

The \lstinline!cat! program, which we've previously used to output the contents of a file, can also be used to combine two or more files. To add the contents of \lstinline!data2.csv! after the contents of \lstinline!data1.csv!:

\begin{lstlisting}[language=bash]
cat data2.csv >> data1.csv
\end{lstlisting}

Note that we used \lstinline!>>! instead of \lstinline!>!, which would have replaced \lstinline!data1.csv! with the contents of \lstinline!data2.csv!. In the Unix shell, \lstinline!>>! means to add the output to the bottom of the given file if it exists. Otherwise, a new file will be created. Now, \lstinline!data1.csv! has the contents of both files. If \lstinline!data2.csv!
had a header, this could cause some confusion, as there would be a second header in the middle of the file. Combining what we did before, we can remove the first line of \lstinline!data2.csv! before adding it to \lstinline!data1.csv!:

\begin{lstlisting}[language=bash]
cat data2.csv | sed "1 d" >> data1.csv
\end{lstlisting}

Similarly, we can combine multiple files into one by listing all of them as arguments to \lstinline!cat!:

\begin{lstlisting}[language=bash]
cat data1.csv data2.csv data3.csv > combined.csv
\end{lstlisting}

Here, the contents of \lstinline!data1.csv!, \lstinline!data2.csv!, and \lstinline!data3.csv! were combined and placed in a new file called \lstinline!combined.csv!. Doing this while also stripping the headers from the files is a bit more complicated.

Files can also be combined side-by-side, which is useful for adding columns from one file to another. This is accomplished with the \lstinline!paste! command. As an example, to add the data from \lstinline!cols2.csv! to \lstinline!cols1.csv! and save the results as \lstinline!combined.csv!, the paste command is used as follows:

\begin{lstlisting}[language=bash]
paste -d , cols1.csv cols2.csv > combined.csv
\end{lstlisting}

Here, \lstinline!-d ,! specifies that a comma will be used to separate the contents of each row.

\subsection{Extracting Specific Columns}

The \lstinline!cut! command can be extremely useful for extracting columns from a CSV file. For example, to extract the third column (Temperature) from \lstinline!luminescence.csv!:

\begin{lstlisting}[language=bash]
cut -f 3 -d , luminescence.csv
\end{lstlisting}

where \lstinline!-f 3! specifies that we want the third field, and \lstinline!-d ,! indicates that the fields are separated by commas. We can specify multiple columns as well, so to get the Time and Temperature of each entry in the data file and save them to a new file called \lstinline!timetemp.csv!:

\begin{lstlisting}[language=bash]
cut -f 2,3 -d , luminescence.csv > timetemp.csv
\end{lstlisting}

\subsection{Extracting Specific Rows}

There are a number of ways in which specific rows can be extracted from CSV files. The \lstinline!head! and \lstinline!tail! commands output the first and last rows of a file, respectively. For example, to output the first 12 rows of \lstinline!luminescence.csv!:

\begin{lstlisting}[language=bash]
head -n 12 luminescence.csv
\end{lstlisting}

Similarly, the last 8 rows of \lstinline!luminescence.csv! can be shown with \lstinline!tail!:

\begin{lstlisting}[language=bash]
tail -n 8 luminescence.csv
\end{lstlisting}

The \lstinline!sed! command can be used to extract a range of lines. Here, we extract lines 50 through 97:

\begin{lstlisting}[language=bash]
sed -n "50,97 p" luminescence.csv
\end{lstlisting}

Often, though, we want to extract rows based on specific criteria. For these queries, the \lstinline!grep! command can be used, which searches a file for a particular pattern. Using \lstinline!grep!, we can extract all data from Plate2:

\begin{lstlisting}[language=bash]
grep "Plate2" luminescence.csv
\end{lstlisting}

\lstinline!grep! can support much more complex searches using \emph{regular expressions}. Many good references for regular expressions can be found online.
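Finally, the commands above really pay off when they are chained together with pipes into small one-line pipelines. As one illustrative example (the output file name is arbitrary), the following keeps only the Plate2 rows (which also discards the header and comment lines, since neither contains the text \lstinline!Plate2!) and then extracts the Temperature and Luminescence columns:

\begin{lstlisting}[language=bash]
# Select the Plate2 rows, then extract fields 3 (Temperature) and 6 (Luminescence)
grep "Plate2" luminescence.csv | cut -f 3,6 -d , > plate2-temp-lum.csv
\end{lstlisting}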
\SetAPI{J-C} \section{ambeth.merge.fieldbased.active} \label{configuration:AmbethMergeFieldbasedActive} \ClearAPI \TODO %% GENERATED USAGE REFERENCE - DO NOT EDIT \begin{longtable}{ l l } \hline \textbf{Used in bean} & \textbf{Module} \ \endhead \hline \type{com.koch.ambeth.merge.MergeHandle} & \prettyref{module:Merge} \\ \hline \type{com.koch.ambeth.merge.MergeHandle} & \prettyref{module:Merge} \\ \hline \end{longtable} %% GENERATED USAGE REFERENCE END \type{com.koch.ambeth.merge.config.MergeConfigurationConstants.FieldBasedMergeActive} \begin{lstlisting}[style=Props,caption={Usage example for \textit{ambeth.merge.fieldbased.active}}] ambeth.merge.fieldbased.active=true \end{lstlisting}
\chapter{Conclusion} <Conclusion here>
%% LyX 2.1.4 created this file. For more info, see http://www.lyx.org/. %% Do not edit unless you really know what you are doing. \documentclass[ruled]{article} \usepackage{courier} \usepackage[latin9]{inputenc} \usepackage[letterpaper]{geometry} \geometry{verbose} \usepackage{url} \usepackage{algorithm2e} \usepackage{amsmath} \usepackage{amsthm} \usepackage[unicode=true, bookmarks=false, breaklinks=false,pdfborder={0 0 1},backref=section,colorlinks=false] {hyperref} \makeatletter %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands. \providecommand{\LyX}{\texorpdfstring% {L\kern-.1667em\lower.25em\hbox{Y}\kern-.125emX\@} {LyX}} %% Special footnote code from the package 'stblftnt.sty' %% Author: Robin Fairbairns -- Last revised Dec 13 1996 \let\SF@@footnote\footnote \def\footnote{\ifx\protect\@typeset@protect \expandafter\SF@@footnote \else \expandafter\SF@gobble@opt \fi } \expandafter\def\csname SF@gobble@opt \endcsname{\@ifnextchar[%] \SF@gobble@twobracket \@gobble } \edef\SF@gobble@opt{\noexpand\protect \expandafter\noexpand\csname SF@gobble@opt \endcsname} \def\SF@gobble@twobracket[#1]#2{} %% Because html converters don't know tabularnewline \providecommand{\tabularnewline}{\\} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands. \theoremstyle{definition} \newtheorem*{defn*}{\protect\definitionname} \theoremstyle{plain} \newtheorem*{thm*}{\protect\theoremname} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands. \title{Machine Learning and Computational Statistics, Spring 2016\\ Homework 4: Kernels, Duals, and Trees} \date{} \usepackage{amsfonts}\usepackage{capt-of} %\usepackage{url} \usepackage{graphicx} \usepackage{color} \usepackage{bbm} \usepackage{enumerate} \newcommand{\carlos}[1]{\textcolor{red}{Carlos: #1}} \newcommand{\field}[1]{\mathbb{#1}} \newcommand{\hide}[1]{#1} \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} \providecommand{\m}[1]{\mathbf{#1}} \providecommand{\norm}[1]{\left\|#1\right\|} \providecommand{\sign}[1]{\text{sign}\left(#1\right)} \DeclareMathOperator*{\argmin}{arg\,min} \providecommand{\what}{\m{\hat{w}}} \providecommand{\dw}{\Delta w} \providecommand{\dmw}{\Delta \m{w}} \providecommand{\hy}{\hat{y}} \makeatother \providecommand{\definitionname}{Definition} \providecommand{\theoremname}{Theorem} \begin{document} \global\long\def\reals{\mathbf{R}} \global\long\def\integers{\mathbf{Z}} \global\long\def\naturals{\mathbf{N}} \global\long\def\rationals{\mathbf{Q}} \global\long\def\ca{\mathcal{A}} \global\long\def\cb{\mathcal{B}} \global\long\def\cc{\mathcal{C}} \global\long\def\cd{\mathcal{D}} \global\long\def\ce{\mathcal{E}} \global\long\def\cf{\mathcal{F}} \global\long\def\cg{\mathcal{G}} \global\long\def\ch{\mathcal{H}} \global\long\def\ci{\mathcal{I}} \global\long\def\cj{\mathcal{J}} \global\long\def\ck{\mathcal{K}} \global\long\def\cl{\mathcal{L}} \global\long\def\cm{\mathcal{M}} \global\long\def\cn{\mathcal{N}} \global\long\def\co{\mathcal{O}} \global\long\def\cp{\mathcal{P}} \global\long\def\cq{\mathcal{Q}} \global\long\def\calr{\mathcal{R}} \global\long\def\cs{\mathcal{S}} \global\long\def\ct{\mathcal{T}} \global\long\def\cu{\mathcal{U}} \global\long\def\cv{\mathcal{V}} \global\long\def\cw{\mathcal{W}} \global\long\def\cx{\mathcal{X}} \global\long\def\cy{\mathcal{Y}} \global\long\def\cz{\mathcal{Z}} \global\long\def\ind#1{1(#1)} \global\long\def\pr{\mathbb{P}} \global\long\def\predsp{\cy} \global\long\def\outsp{\cy} \global\long\def\prxy{P_{\cx\times\cy}} \global\long\def\prx{P_{\cx}} 
\global\long\def\prygivenx{P_{\cy\mid\cx}} \global\long\def\ex{\mathbb{E}} \global\long\def\var{\textrm{Var}} \global\long\def\cov{\textrm{Cov}} \global\long\def\sgn{\textrm{sgn}} \global\long\def\sign{\textrm{sign}} \global\long\def\kl{\textrm{KL}} \global\long\def\law{\mathcal{L}} \global\long\def\eps{\varepsilon} \global\long\def\as{\textrm{ a.s.}} \global\long\def\io{\textrm{ i.o.}} \global\long\def\ev{\textrm{ ev.}} \global\long\def\convd{\stackrel{d}{\to}} \global\long\def\eqd{\stackrel{d}{=}} \global\long\def\del{\nabla} \global\long\def\loss{\ell} \global\long\def\risk{R} \global\long\def\emprisk{\hat{R}_{\ell}} \global\long\def\lossfnl{L} \global\long\def\emplossfnl{\hat{L}} \global\long\def\empminimizer#1{\hat{#1}_{\ell}} \global\long\def\minimizer#1{#1_{*}} \global\long\def\etal{\textrm{et. al.}} \global\long\def\tr{\operatorname{tr}} \global\long\def\trace{\operatorname{trace}} \global\long\def\diag{\text{diag}} \global\long\def\rank{\text{rank}} \global\long\def\linspan{\text{span}} \global\long\def\proj{\text{Proj}} \global\long\def\argmax{\operatornamewithlimits{arg\, max}} \global\long\def\argmin{\operatornamewithlimits{arg\, min}} \global\long\def\bfx{\mathbf{x}} \global\long\def\bfy{\mathbf{y}} \global\long\def\bfl{\mathbf{\lambda}} \global\long\def\bfm{\mathbf{\mu}} \global\long\def\calL{\mathcal{L}} \global\long\def\vw{\boldsymbol{w}} \global\long\def\vx{\boldsymbol{x}} \global\long\def\vxi{\boldsymbol{\xi}} \global\long\def\valpha{\boldsymbol{\alpha}} \global\long\def\vbeta{\boldsymbol{\beta}} \global\long\def\vsigma{\boldsymbol{\sigma}} \global\long\def\vmu{\boldsymbol{\mu}} \global\long\def\vtheta{\boldsymbol{\theta}} \global\long\def\vd{\boldsymbol{d}} \global\long\def\vs{\boldsymbol{s}} \global\long\def\vt{\boldsymbol{t}} \global\long\def\vh{\boldsymbol{h}} \global\long\def\ve{\boldsymbol{e}} \global\long\def\vf{\boldsymbol{f}} \global\long\def\vg{\boldsymbol{g}} \global\long\def\vz{\boldsymbol{z}} \global\long\def\vk{\boldsymbol{k}} \global\long\def\va{\boldsymbol{a}} \global\long\def\vb{\boldsymbol{b}} \global\long\def\vv{\boldsymbol{v}} \global\long\def\vy{\boldsymbol{y}} \global\long\def\hil{\ch} \global\long\def\rkhs{\hil} \maketitle \textbf{Due: Tuesday, March 22, 2016, at 6pm (Submit via NYU Classes)} \textbf{Instructions}: Your answers to the questions below, including plots and mathematical work, should be submitted as a single file, either HTML or PDF. You may include your code inline or submit it as a separate file. You may either scan hand-written work or, preferably, write your answers using software that typesets mathematics (e.g. \LaTeX, \LyX{}, or MathJax via iPython). \section{Introduction} This problem set is entirely written -- we'll return to coding problems on the next problem set. The problem set begins with a review of some important linear algebra concepts that we routinely use in machine learning and statistics. The solutions to each of these problems is at most a few lines long, and we've tried to give helpful hints. These aren't meant to be very challenging problems -- just the opposite, in fact -- we'd like this material to be second nature to you. We next have a couple problems on kernel methods: the first explores what geometric information about the data is stored in the kernel matrix, and the second revisits kernel ridge regression with a direct approach, rather than using the Representer Theorem. The last required problem has some exercises related to decision trees. We also have three optional problems this week. 
The first completes the proof of the Representer Theorem that we discussed in lecture. The second applies Lagrangian duality to show the equivalence of Tikhonov and Ivanov regularization. And the third introduces an approach to ``novelty'' or ``anomaly'' detection as an exercise in the machinery of Lagrangian duality.
\section{Positive Semidefinite Matrices}
In statistics and machine learning, we use positive semidefinite matrices a lot. Let's recall some definitions from linear algebra that will be useful here:
\begin{defn*}
A set of vectors $\left\{ x_{1},\ldots,x_{n}\right\} $ is \textbf{orthonormal} if $\left\langle x_{i},x_{i}\right\rangle =1$ for any $i\in\left\{ 1,\ldots,n\right\} $ (i.e. $x_{i}$ has unit norm), and for any $i,j\in\left\{ 1,\ldots,n\right\} $ with $i\neq j$ we have $\left\langle x_{i},x_{j}\right\rangle =0$ (i.e. $x_{i}$ and $x_{j}$ are orthogonal). Note that if the vectors are column vectors in a Euclidean space, we can write this as $x_{i}^{T}x_{j}=\ind{i=j}$ for all $i,j\in\left\{ 1,\ldots,n\right\} $.
\end{defn*}
\begin{defn*}
A matrix is \textbf{orthogonal} if it is a square matrix with orthonormal columns. It follows from the definition that if a matrix $M\in\reals^{n\times n}$ is orthogonal, then $M^{T}M=I$, where $I$ is the $n\times n$ identity matrix. Thus $M^{T}=M^{-1}$, and so $MM^{T}=I$ as well.
\end{defn*}
\begin{defn*}
A matrix $M$ is \textbf{symmetric} if $M=M^{T}$.
\end{defn*}
\begin{defn*}
For a square matrix $M$, if $Mv=\lambda v$ for some nonzero column vector $v$ and scalar $\lambda$, then $v$ is called an \textbf{eigenvector} of $M$ and $\lambda$ is the corresponding \textbf{eigenvalue}.
\end{defn*}
\begin{thm*}
[Spectral Theorem]A real, symmetric matrix $M\in\reals^{n\times n}$ can be diagonalized as $M=Q\Sigma Q^{T}$, where $Q\in\reals^{n\times n}$ is an orthogonal matrix whose columns are a set of orthonormal eigenvectors of $M$, and $\Sigma$ is a diagonal matrix of the corresponding eigenvalues.
\end{thm*}
\begin{defn*}
A real, symmetric matrix $M\in\reals^{n\times n}$ is \textbf{positive semidefinite (psd)} if for any $x\in\reals^{n}$,
\[
x^{T}Mx\ge0.
\]
Note that unless otherwise specified, when a matrix is described as positive semidefinite, we are implicitly assuming it is real and symmetric (or complex and Hermitian in certain contexts, though not here). As an exercise in matrix multiplication, note that for any matrix $A$ with columns $a_{1},\ldots,a_{d}$, that is
\[
A=\begin{pmatrix}| &  & |\\
a_{1} & \cdots & a_{d}\\
| &  & |
\end{pmatrix}\in\reals^{n\times d},
\]
we have
\[
A^{T}MA=\begin{pmatrix}a_{1}^{T}Ma_{1} & a_{1}^{T}Ma_{2} & \cdots & a_{1}^{T}Ma_{d}\\
a_{2}^{T}Ma_{1} & a_{2}^{T}Ma_{2} & \cdots & a_{2}^{T}Ma_{d}\\
\vdots & \vdots & \ddots & \vdots\\
a_{d}^{T}Ma_{1} & a_{d}^{T}Ma_{2} & \cdots & a_{d}^{T}Ma_{d}
\end{pmatrix}.
\]
So $M$ is psd if and only if for any $A\in\reals^{n\times d}$, we have $\diag(A^{T}MA)=\left(a_{1}^{T}Ma_{1},\ldots,a_{d}^{T}Ma_{d}\right)^{T}\succeq0$, where $\succeq$ is elementwise inequality, and $0$ is a $d\times1$ column vector of $0$'s.
\end{defn*}
\begin{enumerate}
\item Give an example of an orthogonal matrix that is not symmetric. (Hint: You can use a $2\times2$ matrix with only $0$'s and $1$'s.)
\item Use the definition of a psd matrix and the spectral theorem to show that all eigenvalues of a positive semidefinite matrix $M$ are non-negative. {[}Hint: By the Spectral Theorem, $\Sigma=Q^{T}MQ$ for some $Q$.
What if you take $A=Q$ in the ``exercise in matrix multiplication'' described above?{]}
\item In this problem we show that a psd matrix is a matrix version of a non-negative scalar, in that they both have a ``square root''. Show that a symmetric matrix $M$ can be expressed as $M=BB^{T}$ for some matrix $B$, if and only if $M$ is psd. {[}Hint: To show $M=BB^{T}$ implies $M$ is psd, use the fact that for any vector $v$, $v^{T}v\ge0$. To show that $M$ psd implies $M=BB^{T}$ for some $B$, use the Spectral Theorem.{]}
\end{enumerate}
\section{Positive Definite Matrices}
\begin{defn*}
A real, symmetric matrix $M\in\reals^{n\times n}$ is \textbf{positive definite} (spd, short for symmetric positive definite) if for any $x\in\reals^{n}$ with $x\neq0$,
\[
x^{T}Mx>0.
\]
\end{defn*}
\begin{enumerate}
\item Show that all eigenvalues of a symmetric positive definite matrix are positive. {[}Hint: You can use the same method as you used for psd matrices above.{]}
\item Let $M$ be a symmetric positive definite matrix. By the spectral theorem, $M=Q\Sigma Q^{T}$, where $\Sigma$ is a diagonal matrix of the eigenvalues of $M$. By the previous problem, all diagonal entries of $\Sigma$ are positive. If $\Sigma=\diag\left(\sigma_{1},\ldots,\sigma_{n}\right)$, then $\Sigma^{-1}=\diag\left(\sigma_{1}^{-1},\ldots,\sigma_{n}^{-1}\right)$. Show that the matrix $Q\Sigma^{-1}Q^{T}$ is the inverse of $M$.
\item Since positive semidefinite matrices may have eigenvalues that are zero, we see by the previous problem that not all psd matrices are invertible. Show that if $M$ is a psd matrix and $I$ is the identity matrix, then $M+\lambda I$ is symmetric positive definite for any $\lambda>0$, and give an expression for the inverse of $M+\lambda I$.
\item Let $M$ and $N$ be symmetric matrices, with $M$ positive semidefinite and $N$ positive definite. Use the definitions of psd and spd to show that $M+N$ is symmetric positive definite. Thus $M+N$ is invertible. (Hint: For any $x\neq0$, show that $x^{T}(M+N)x>0$. Also note that $x^{T}(M+N)x=x^{T}Mx+x^{T}Nx$.)
\end{enumerate}
\section{Kernel Matrices}
The following problem will give us some additional insight into what information is encoded in the kernel matrix.
\begin{enumerate}
\item Consider a set of vectors $S=\{x_{1},\ldots,x_{m}\}$. Let $X$ denote the matrix whose rows are these vectors. Form the Gram matrix $K=XX^{T}$. Show that knowing $K$ is equivalent to knowing the set of pairwise distances among the vectors in $S$ as well as the vector lengths. {[}Hint: The distance between $x$ and $y$ is given by $d(x,y)=\|x-y\|$, and the norm of a vector $x$ is defined as $\|x\|=\sqrt{\left\langle x,x\right\rangle }=\sqrt{x^{T}x}$.{]}
\end{enumerate}
\section{Kernel Ridge Regression}
In lecture, we discussed how to kernelize ridge regression using the Representer Theorem. Here we pursue a bare-hands approach. Suppose our input space is $\cx=\reals^{d}$ and our output space is $\cy=\reals$. Let $\cd=\left\{ \left(x_{1},y_{1}\right),\ldots,\left(x_{n},y_{n}\right)\right\} $ be a training set from $\cx\times\cy$. We'll use the ``design matrix'' $X\in\reals^{n\times d}$, which has the input vectors as rows:
\[
X=\begin{pmatrix}-x_{1}-\\
\vdots\\
-x_{n}-
\end{pmatrix}.
\]
Recall the ridge regression objective function:
\[
J(w)=||Xw-y||^{2}+\lambda||w||^{2},
\]
for $\lambda>0$.
\begin{enumerate}
\item Show that for $w$ to be a minimizer of $J(w)$, we must have $X^{T}Xw+\lambda Iw=X^{T}y$. Show that the minimizer of $J(w)$ is $w=(X^{T}X+\lambda I)^{-1}X^{T}y$.
Justify that the matrix $X^{T}X+\lambda I$ is invertible, for $\lambda>0$. (The last part should follow easily from the earlier exercises on psd and spd matrices.)
\item Rewrite $X^{T}Xw+\lambda Iw=X^{T}y$ as $w=\frac{1}{\lambda}(X^{T}y-X^{T}Xw)$. Based on this, show that we can write $w=X^{T}\alpha$ for some $\alpha$, and give an expression for $\alpha$.
\item Based on the fact that $w=X^{T}\alpha$, explain why we say $w$ is ``in the span of the data.''
\item Show that $\alpha=(\lambda I+XX^{T})^{-1}y$. Note that $XX^{T}$ is the kernel matrix for the standard vector dot product. (Hint: Replace $w$ by $X^{T}\alpha$ in the expression for $\alpha$, and then solve for $\alpha$.)
\item Give a kernelized expression for $Xw$, the predicted values on the training points. (Hint: Replace $w$ by $X^{T}\alpha$ and $\alpha$ by its expression in terms of the kernel matrix $XX^{T}$.)
\item Give an expression for the prediction $f(x)=x^{T}w^{*}$ for a new point $x$, not in the training set. The expression should only involve $x$ via inner products with other $x$'s. {[}Hint: It is often convenient to define the column vector
\[
k_{x}=\begin{pmatrix}x^{T}x_{1}\\
\vdots\\
x^{T}x_{n}
\end{pmatrix}
\]
to simplify the expression.{]}
\end{enumerate}
\section{Decision Trees}
\subsection{Building Trees by Hand\protect\footnote{Based on Homework \#4 from David Sontag's DS-GA 1003, Spring 2014.}}
In this problem we're going to build a small decision tree by hand for predicting whether or not a mushroom is poisonous. The training dataset is given below:
\begin{center}
\begin{tabular}{lrll}
\hline 
\multicolumn{1}{c}{Poisonous} & \multicolumn{1}{c}{Size} & \multicolumn{1}{c}{Spots} & \multicolumn{1}{c}{Color}\tabularnewline
\hline 
N & $5$ & N & White\tabularnewline
N & $2$ & Y & White\tabularnewline
N & $2$ & N & Brown\tabularnewline
N & $3$ & Y & Brown\tabularnewline
N & $4$ & N & White\tabularnewline
N & $1$ & N & Brown\tabularnewline
Y & $5$ & Y & White\tabularnewline
Y & $4$ & Y & Brown\tabularnewline
Y & $4$ & Y & Brown\tabularnewline
Y & $1$ & Y & White\tabularnewline
Y & $1$ & Y & Brown\tabularnewline
\hline 
\end{tabular}
\par\end{center}
We're going to build a binary classification tree using the Gini index as the node impurity measure. The feature ``Size'' should be treated as numeric (i.e. we should find real-valued split points). For a given split, let $R_{1}$ and $R_{2}$ be the sets of data indices in each of the two regions of the split. Let $\hat{p}_{1}$ be the proportion of poisonous mushrooms in $R_{1}$, and let $\hat{p}_{2}$ be the proportion in $R_{2}$. Let $N_{1}$ and $N_{2}$ be the total number of training points in $R_{1}$ and $R_{2}$, respectively. Then the Gini index for the first region is $Q_{1}=2\hat{p}_{1}(1-\hat{p}_{1})$, and the Gini index for the second region is $Q_{2}=2\hat{p}_{2}(1-\hat{p}_{2})$. When choosing our splitting variable and split point, we're looking to minimize the weighted impurity measure:
\[
N_{1}Q_{1}+N_{2}Q_{2}.
\]
(If you would like to sanity-check your hand calculations, see the optional scripting note after question 2 below.)
\begin{enumerate}
\item What is the first split for a binary classification tree on this data, using the Gini index? Work this out ``by hand'', and show your calculations. {[}Hint: This should only require calculating 6 weighted impurity measures.{]}
\item The first split partitions the data into two parts. Make another split so that the space is partitioned into 3 regions. Determine the predicted ``probability of poisonous'' for each of those regions.
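{[}Optional sanity check: none of this requires code, but if you would like to double-check your hand calculations, the weighted impurity measure above is easy to script. The short sketch below is only illustrative (the function names and the toy labels are ours, not part of the assignment); it simply encodes $N_{1}Q_{1}+N_{2}Q_{2}$ for a candidate split of binary labels, coding poisonous as 1 and not poisonous as 0.{]}
\begin{verbatim}
def gini(labels):
    # Gini index Q = 2*p*(1-p) for one region, where p is the
    # fraction of positive (poisonous) labels in the region.
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def weighted_impurity(left, right):
    # Weighted impurity N1*Q1 + N2*Q2 for the two regions of a split.
    return len(left) * gini(left) + len(right) * gini(right)

# Toy example with made-up labels (not the mushroom data above):
left, right = [0, 1, 1], [0, 0, 0, 1]
print(weighted_impurity(left, right))  # 3*(4/9) + 4*(3/8) = 2.8333...
\end{verbatim}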
\item Suppose we build a binary tree on the dataset given below using the Gini criterion and we build it so deep that all terminal nodes are either pure or cannot be split further. (To think about: How could we have a node that is not pure, but cannot be split further?) What would the training error be, given as a percentage? Why? {[}Hint: You can do this by inspection, without any significant calculations.{]}
\end{enumerate}
\begin{center}
\begin{tabular}{rrrr}
\hline 
\multicolumn{1}{c}{Y} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B} & \multicolumn{1}{c}{C}\tabularnewline
\hline 
$0$ & $0$ & $0$ & $0$\tabularnewline
$0$ & $0$ & $0$ & $1$\tabularnewline
$0$ & $0$ & $1$ & $0$\tabularnewline
$0$ & $0$ & $1$ & $0$\tabularnewline
$0$ & $0$ & $1$ & $1$\tabularnewline
$1$ & $0$ & $1$ & $1$\tabularnewline
$0$ & $1$ & $0$ & $0$\tabularnewline
$1$ & $1$ & $0$ & $1$\tabularnewline
$1$ & $1$ & $1$ & $0$\tabularnewline
$0$ & $1$ & $1$ & $1$\tabularnewline
$1$ & $1$ & $1$ & $1$\tabularnewline
\hline 
\end{tabular}
\par\end{center}
\subsection{Investigating Impurity Measures\protect\footnote{From Bishop's \emph{Pattern Recognition and Machine Learning}, Problem 14.11.}}
\begin{enumerate}
\item Consider a data set with $400$ data points from class $C_{1}$ and $400$ data points from class $C_{2}$. Suppose that a tree model $A$ splits these into $(300,100)$ at the first leaf node and $(100,300)$ at the second leaf node, where $(n,m)$ denotes that $n$ points are assigned to $C_{1}$ and $m$ points are assigned to $C_{2}$. Similarly, suppose that a second tree model $B$ splits them into $(200,400)$ and $(200,0)$. Show that the misclassification rates for the two trees are equal, but that the cross-entropy and Gini impurity measures are both lower for tree $B$ than for tree $A$.
\end{enumerate}
\section{Representer Theorem {[}Optional{]}}
Recall the following theorem from lecture:
\begin{thm*}
[Representer Theorem] Let
\[
J(w)=R\left(\|w\|\right)+L\left(\left\langle w,\psi(x_{1})\right\rangle ,\ldots,\left\langle w,\psi(x_{n})\right\rangle \right),
\]
where $R:\reals^{\ge0}\to\reals$ is nondecreasing (the \textbf{regularization} term) and $L:\reals^{n}\to\reals$ is arbitrary (the \textbf{loss} term). If $J(w)$ has a minimizer, then it has a minimizer of the form
\[
w^{*}=\sum_{i=1}^{n}\alpha_{i}\psi(x_{i}).
\]
Furthermore, if $R$ is strictly increasing, then all minimizers have this form.
\end{thm*}
Note: There is nothing in this theorem that guarantees $J(w)$ has a minimizer at all. If there is no minimizer, then this theorem does not tell us anything. In the lecture slides we proved the first part of the theorem. Now we will prove the part beginning with ``Furthermore.''
\begin{enumerate}
\item Let $M$ be a closed subspace of a Hilbert space $\ch$. For any $x\in\ch$, let $m_{0}=\proj_{M}x$ be the projection of $x$ onto $M$. By the Projection Theorem, we know that $x-m_{0}\perp M$. Then by the Pythagorean Theorem, we know $\|x\|^{2}=\|m_{0}\|^{2}+\|x-m_{0}\|^{2}$. From this we concluded in lecture that $\|m_{0}\|\le\|x\|$. Show that we have $\|m_{0}\|=\|x\|$ only when $m_{0}=x$. (Hint: Use the positive-definiteness of the inner product: $\left\langle x,x\right\rangle \ge0$ and $\left\langle x,x\right\rangle =0\iff x=0$, and the fact that we're using the norm derived from such an inner product.)
\item Continue the proof of the Representer Theorem from the lecture slides to show that if $R$ is strictly increasing, then all minimizers have this form.
(Hint: Consider separately the case $\|w\|<\|w^{*}\|$ and the case $\|w\|=\|w^{*}\|$.)
\item Suppose that $R:\reals^{\ge0}\to\reals$ and $L:\reals^{n}\to\reals$ are both convex functions. Use properties of convex functions to show that $w\mapsto L\left(\left\langle w,\psi(x_{1})\right\rangle ,\ldots,\left\langle w,\psi(x_{n})\right\rangle \right)$ is a convex function of $w$, and then that $J(w)$ is also a convex function of $w$. For simplicity, you may assume that our feature space is $\reals^{d}$, rather than a generic Hilbert space. You may also use the fact that the composition of a convex function and an affine function is convex. That is, suppose $f:\reals^{n}\to\reals,\ A\in\reals^{n\times m}$ and $b\in\reals^{n}$. Define $g:\reals^{m}\to\reals$ by $g(x)=f\left(Ax+b\right)$. If $f$ is convex, then so is $g$. From this exercise, we can conclude that if $L$ and $R$ are convex, then $J$ does have a minimizer of the form $w^{*}=\sum_{i=1}^{n}\alpha_{i}\psi(x_{i})$, and if $R$ is also strictly increasing, then all minimizers of $J$ have this form.
\end{enumerate}
\section{Ivanov and Tikhonov Regularization {[}Optional{]}}
In lecture there was a claim that the Ivanov and Tikhonov forms of ridge and lasso regression are equivalent. We will now prove a more general result.
\subsection{Tikhonov optimal implies Ivanov optimal}
Let $\phi:\cf\to\reals$ be any performance measure of $f\in\cf$, and let $\Omega:\cf\to\reals$ be any complexity measure. For example, for ridge regression over the linear hypothesis space $\cf=\left\{ f_{w}(x)=w^{T}x\mid w\in\reals^{d}\right\} $, we would have $\phi(f_{w})=\frac{1}{n}\sum_{i=1}^{n}\left(w^{T}x_{i}-y_{i}\right)^{2}$ and $\Omega(f_{w})=w^{T}w$.
\begin{enumerate}
\item Suppose that for some $\lambda>0$ we have the Tikhonov regularization solution
\begin{equation}
f^{*}=\argmin_{f\in\cf}\left[\phi(f)+\lambda\Omega(f)\right].\label{eq:tikhonovReg}
\end{equation}
Show that $f^{*}$ is also an Ivanov solution. That is, $\exists r>0$ such that
\begin{equation}
f^{*}=\argmin_{f\in\cf}\phi(f)\mbox{ subject to }\Omega(f)\le r.\label{eq:ivanovReg}
\end{equation}
(Hint: Start by figuring out what $r$ should be. If you're stuck on this, ask for help. Then one approach is proof by contradiction: suppose $f^{*}$ is not the optimum in \eqref{eq:ivanovReg} and show that this contradicts the fact that $f^{*}$ solves \eqref{eq:tikhonovReg}.)
\end{enumerate}
\subsection{Ivanov optimal implies Tikhonov optimal}
For the converse, we will restrict our hypothesis space to a parametric set. That is,
\[
\cf=\left\{ f_{w}(x):\cx\to\reals\mid w\in\reals^{d}\right\} .
\]
So we will now write $\phi$ and $\Omega$ as functions of $w\in\reals^{d}$. Let $w^{*}$ be a solution to the following Ivanov optimization problem:
\begin{eqnarray*}
\textrm{minimize} &  & \phi(w)\\
\textrm{subject to} &  & \Omega(w)\le r.
\end{eqnarray*}
Assume that strong duality holds for this optimization problem and that the dual solution is attained. Then we will show that there exists a $\lambda\ge0$ such that $w^{*}=\argmin_{w\in\reals^{d}}\left[\phi(w)+\lambda\Omega(w)\right]$.
\begin{enumerate}
\item Write the Lagrangian $L(w,\lambda)$ for the Ivanov optimization problem.
\item Write the dual optimization problem in terms of the dual objective function $g(\lambda)$, and give an expression for $g(\lambda)$.
{[}Writing $g(\lambda)$ as an optimization problem is expected -- don't try to solve it.{]}
\item We assumed that the dual solution is attained, so let $\lambda^{*}=\argmax_{\lambda\ge0}g(\lambda)$. We also assumed strong duality, which implies $\phi(w^{*})=g(\lambda^{*})$. Show that the minimum in the expression for $g(\lambda^{*})$ is attained at $w^{*}$. {[}Hint: You can use the same approach we used when we derived that strong duality implies complementary slackness\footnote{See \url{https://davidrosenberg.github.io/ml2015/docs/3b.convex-optimization.pdf} slide 24.}.{]} \textbf{Conclude the proof} by showing that for the choice of $\lambda=\lambda^{*}$, we have $w^{*}=\argmin_{w\in\reals^{d}}\left[\phi(w)+\lambda\Omega(w)\right]$.
\item {[}Optional{]} The conclusion of the previous problem allows $\lambda=0$, which means we're not actually regularizing at all. To ensure we get a proper Ivanov regularization problem, we need an additional assumption. The one below is taken from \cite{kloft2009efficient}:
\[
\inf_{w\in\reals^{d}}\phi(w)<\inf_{\substack{w\in\reals^{d}\\
\Omega(w)\le r
}
}\phi(w)
\]
Note that this is a rather intuitive condition: it is simply saying that we can fit the training data better {[}strictly better{]} if we don't use any regularization. With this additional condition, show that $w^{*}=\argmin_{w\in\reals^{d}}\left[\phi(w)+\lambda\Omega(w)\right]$ for some $\lambda>0$.
\end{enumerate}
\subsection{Ivanov implies Tikhonov for Ridge Regression}
To show that Ivanov implies Tikhonov for the ridge regression problem (square loss with $\ell_{2}$ regularization), we need to demonstrate strong duality and that the dual optimum is attained. Both of these things are implied by Slater's constraint qualifications.
\begin{enumerate}
\item Show that the Ivanov form of ridge regression is a convex optimization problem with a strictly feasible point.
\end{enumerate}
\section{Novelty Detection {[}Optional{]}}
(Problem derived from Michael Jordan's Stat 241b Problem Set \#2, Spring 2004)
A novelty detection algorithm can be based on finding the smallest possible sphere containing the data in feature space.
\begin{enumerate}
\item Let $\phi:\cx\to\ch$ be our feature map, mapping elements of the input space into our ``feature space'' $\ch$, which is a Hilbert space (and thus has an inner product). Formulate the novelty detection algorithm described above as an optimization problem.
\item Give the Lagrangian for this problem, and write an equivalent, unconstrained ``$\inf\sup$'' version of the optimization problem.
\item Show that we have strong duality and thus we will have an equivalent optimization problem if we swap the inf and the sup. {[}Hint: Use Slater's qualification conditions.{]}
\item Solve the inner minimization problem and give the dual optimization problem. {[}Note: You may find it convenient to define the kernel function $k(x_{i},x_{j})=\left\langle \phi(x_{i}),\phi(x_{j})\right\rangle $ and to write your final problem in terms of the corresponding kernel matrix $K$ to simplify notation.{]}
\item Write an expression for the optimal sphere in terms of the solution to the dual problem.
\item Write down the complementary slackness conditions for this problem, and characterize the points that are the ``support vectors''.
\item Briefly explain how you would apply this algorithm in practice to detect ``novel'' instances.
\item {[}More Optional{]} Redo this problem allowing some of the data to lie outside the sphere, where the number of points outside the sphere can be increased or decreased by adjusting a parameter. (Hint: Use slack variables.)
\end{enumerate}
\end{document}
{ "alphanum_fraction": 0.7105859043, "avg_line_length": 42.4384384384, "ext": "tex", "hexsha": "791d8dbdc3a86d9429a2773e84097cef00ae7e82", "lang": "TeX", "max_forks_count": 248, "max_forks_repo_forks_event_max_datetime": "2022-03-12T00:45:41.000Z", "max_forks_repo_forks_event_min_datetime": "2016-01-31T04:11:57.000Z", "max_forks_repo_head_hexsha": "f5af0db001bf5e2fb153d381c10b35d34a491ebf", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "LBJ-Wade/mlcourse", "max_forks_repo_path": "Archive/2016/Homework/hw4-kernels/hw4.tex", "max_issues_count": 76, "max_issues_repo_head_hexsha": "f5af0db001bf5e2fb153d381c10b35d34a491ebf", "max_issues_repo_issues_event_max_datetime": "2020-06-20T19:52:59.000Z", "max_issues_repo_issues_event_min_datetime": "2016-12-25T19:14:21.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "LBJ-Wade/mlcourse", "max_issues_repo_path": "Archive/2016/Homework/hw4-kernels/hw4.tex", "max_line_length": 151, "max_stars_count": 484, "max_stars_repo_head_hexsha": "f5af0db001bf5e2fb153d381c10b35d34a491ebf", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "LBJ-Wade/mlcourse", "max_stars_repo_path": "Archive/2016/Homework/hw4-kernels/hw4.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-28T21:31:34.000Z", "max_stars_repo_stars_event_min_datetime": "2016-01-29T18:44:38.000Z", "num_tokens": 9286, "size": 28264 }
\section{Russian Government Links to and Contacts with The Trump Campaign} \markboth{Russian Government Links to and Contacts with The Trump Campaign}{Russian Government Links to and Contacts with The Trump Campaign} The Office identified multiple contacts---``links,'' in the words of the Appointment Order---between Trump Campaign officials and individuals with ties to the Russian government. The Office investigated whether those contacts constituted a third avenue of attempted Russian interference with or influence on the 2016 presidential election. In particular, the investigation examined whether these contacts involved or resulted in coordination or a conspiracy with the Trump Campaign and Russia, including with respect to Russia providing assistance to the Campaign in exchange for any sort of favorable treatment in the future. Based on the available information, the investigation did not establish such coordination. This Section describes the principal links between the Trump Campaign and individuals with ties to the Russian government, including some contacts with Campaign officials or associates that have been publicly reported to involve Russian contacts. Each subsection begins with an overview of the Russian contact at issue and then describes in detail the relevant facts, which are generally presented in chronological order, beginning with the early months of the Campaign and extending through the post-election, transition period. \subsection{Campaign Period (September 2015--November~8, 2016)} Russian-government-connected individuals and media entities began showing interest in Trump's campaign in the months after he announced his candidacy in June 2015.% 288 \footnote{For example, on August~18, 2015, on behalf of the editor-in-chief of the internet newspaper \textit{Vzglyad}, Georgi Asatryan emailed campaign press secretary Hope Hicks asking for a phone or in-person candidate interview. 8/18/15 Email, Asatryan to Hicks. One day earlier, the publication's founder (and former Russian parliamentarian) Konstantin Rykov had registered two Russian websites---\UseVerb{Trump2016ru} and \UseVerb{DonaldTrump2016ru}. No interview took place.} Because Trump's status as a public figure at the time was attributable in large part to his prior business and entertainment dealings, this Office investigated whether a business contact with Russia-linked individuals and entities during the campaign period---the Trump Tower Moscow project, \textit{see} \hyperlink{subsubsection.1.4.1.1}{Volume~I, Section~IV.A.1}, \textit{infra}---led to or involved coordination of election assistance. Outreach from individuals with ties to Russia continued in the spring and summer of 2016, when Trump was moving toward---and eventually becoming---the Republican nominee for President. 
As set forth below, the Office also evaluated a series of links during this period: outreach to two of Trump's then-recently named foreign policy advisors, including a representation that Russia had ``dirt'' on Clinton in the form of thousands of emails (\hyperlink{subsubsection.1.4.1.2}{Volume~I, Sections~IV.A.2} \& \hyperlink{subsubsection.1.4.1.3}{IV.A.3}); dealings with a D.C.-based think tank that specializes in Russia and has connections with its government (\hyperlink{subsubsection.1.4.1.4}{Volume~I, Section~IV.A.4}); a meeting at Trump Tower between the Campaign and a Russian lawyer promising dirt on candidate Clinton that was ``part of Russia and its government's support for [Trump]'' (\hyperlink{subsubsection.1.4.1.5}{Volume~I, Section~IV.A.5}); events at the Republican National Convention (\hyperlink{subsubsection.1.4.1.6}{Volume~I, Section~IV.A.6}); post-Convention contacts between Trump Campaign officials and Russia's ambassador to the United States (\hyperlink{subsubsection.1.4.1.7}{Volume~I, Section~IV.A.7}); and contacts through campaign chairman Paul Manafort, who had previously worked for a Russian oligarch and a pro-Russian political party in Ukraine (\hyperlink{subsubsection.1.4.1.8}{Volume~I, Section~IV.A.8}). \subsubsection{Trump Tower Moscow Project} The Trump Organization has pursued and completed projects outside the United States as part of its real estate portfolio. Some projects have involved the acquisition and ownership (through subsidiary corporate structures) of property. In other cases, the Trump Organization has executed licensing deals with real estate developers and management companies, often local to the country where the project was located.% 289 \footnote{\textit{See, e.g., Interview of Donald J Trump, Jr, Senate Judiciary Committee}, 115th Cong.~151--52 (Sept.~7, 2017) (discussing licensing deals of specific projects).} Between at least 2013 and~2016, the Trump Organization explored a similar licensing deal in Russia involving the construction of a Trump-branded property in Moscow. The project, commonly referred to as a ``Trump Tower Moscow'' or ``Trump Moscow'' project, anticipated a combination of commercial, hotel, and residential properties all within the same building. Between 2013 and June 2016, several employees of the Trump Organization, including then-president of the organization Donald J. Trump, pursued a Moscow deal with several Russian counterparties. From the fall of 2015 until the middle of 2016, Michael Cohen spearheaded the Trump Organization's pursuit of a Trump Tower Moscow project, including by reporting on the project's status to candidate Trump and other executives in the Trump Organization.% 290 \footnote{As noted in \hyperlink{subsubsection.1.3.4.1}{Volume~I, Section~III.D.1}, \textit{supra}, in November 2018, Cohen pleaded guilty to making false statements to Congress concerning, among other things, the duration of the Trump Tower Moscow project. \textit{See} Information \P~7(a), \textit{United States~v.\ Michael Cohen}, 1:18-cr-850 (S.D.N.Y. 
Nov.~29, 2018), Doc.~2 (``\textit{Cohen} Information'').} \paragraph{Trump Tower Moscow Venture with the Crocus Group~(2013--2014)} The Trump Organization and the Crocus Group, a Russian real estate conglomerate owned and controlled by Aras Agalarov, began discussing a Russia-based real estate project shortly after the conclusion of the 2013 Miss Universe pageant in Moscow.% 291 \footnote{\textit{See Interview of Donald J Trump, Jr, Senate Judiciary Committee}, 115th Cong.~13 (Sept.~7, 2017) (``Following the pageant the Trump Organization and Mr.~Agalarov's company, Crocus Group, began preliminarily discussion [sic] potential real estate projects in Moscow.''). As has been widely reported, the Miss Universe pageant---which Trump co-owned at the time---was held at the Agalarov-owned Crocus City Hall in Moscow in November 2013. Both groups were involved in organizing the pageant, and Aras Agalarov's son Emin was a musical performer at the event, which Trump attended.} Donald J. Trump~Jr.\ served as the primary negotiator on behalf of the Trump Organization; Emin Agalarov (son of Aras Agalarov) and Irakli ``Ike'' Kaveladze represented the Crocus Group during negotiations,% 292 \footnote{Kaveladze 11/16/17 302, at~2, 4--6; \blackout{Grand Jury} OSC-KAV\_00385 (12/6/13 Email, Trump~Jr.\ to Kaveladze \& E.~Agalarov).} with the occasional assistance of Robert Goldstone.% 293 \footnote{\blackout{Grand Jury}} In December 2013, Kaveladze and Trump~Jr.\ negotiated and signed preliminary terms of an agreement for the Trump Tower Moscow project.% 294 \footnote{\blackout{Grand Jury}} On December~23, 2013, after discussions with Donald J. Trump, the Trump Organization agreed to accept an arrangement whereby the organization received a flat 3.5\%~commission on all sales, with no licensing fees or incentives.% 295 \footnote{OSC-KAV\_00452 (12/23/13 Email, Trump~Jr.\ to Kaveladze \& E.~Agalarov).} The parties negotiated a letter of intent during January and February 2014.% 296 \footnote{\textit{See, e.g.}, OSC-KAV\_01158 (Letter agreement signed by Trump~Jr.\ \& E.~Agalarov); OSC-KAV\_01147 (1/20/14 Email, Kaveladze to Trump~Jr.\ et~al.).} From January 2014 through November 2014, the Trump Organization and Crocus Group discussed development plans for the Moscow project. Some time before January~24, 2014, the Crocus Group sent the Trump Organization a proposal for a 800-unit, 194-meter building to be constructed at an Agalarov-owned site in Moscow called ``Crocus City,'' which had also been the site of the Miss Universe pageant.% 297 \footnote{\textit{See, e.g.}, OSC-KAV\_00972 (10/14/14 Email, McGee to Khoo et~al.) 
(email from Crocus Group contractor about specifications); OSC-KAV\_00540 (1/24/14 Email, McGee to Trump~Jr.\ et~al.).} In February 2014, Ivanka Trump met with Emin Agalarov and toured the Crocus City site during a visit to Moscow.% 298 \footnote{\textit{See} OSC-KA V 00631 (2/5/14 Email, E.~Agalarov to Ivanka Trump~Jr.\ \& Kaveladze); Goldstone Facebook post, 2/4/14 (8:01~a.m) \blackout{Investigative Technique}} From March 2014 through July 2014, the groups discussed ``design standards'' and other architectural elements.% 299 \footnote{\textit{See, e.g.}, OSC-KAV\_00791 (6/3/14 Email, Kaveladze to Trump~Jr.\ et~al.[)]; OSC-KAV\_00799 (6/10/14 Email, Trump~Jr.\ to Kaveladze et~al.); OSC-KAV\_00817 (6/16/14 Email, Trump~Jr.\ to Kaveladze et~al.).} For example, in July 2014, members of the Trump Organization sent Crocus Group counterparties questions about the ``demographics of these prospective buyers'' in the Crocus City area, the development of neighboring parcels in Crocus City, and concepts for redesigning portions of the building.% 300 \footnote{OSC-KAV\_00870 (7/17/14 Email, Khoo to McGee et~al.).} In August 2014, the Trump Organization requested specifications for a competing Marriott-branded tower being built in Crocus City.% 301 \footnote{OSC-KAV\_00855 (8/4/14 Email, Khoo to McGee et~al.).} Beginning in September 2014, the Trump Organization stopped responding in a timely fashion to correspondence and proposals from the Crocus Group.% 302 \footnote{OSC-KAV\_00903 (9/29/14 Email, Tropea to McGee \& Kaveladze (noting last response was on August~26, 2014)); OSC-KAV\_00906 (9/29/14 Email, Kaveladze to Tropea \& McGee (suggesting silence ``proves my fear that those guys are bailing out of the project'')); OSC-KAV\_00972 (10/14/14 Email, McGee to Khoo et~al.) (email from Crocus Group contractor about development specifications)[].} Communications between the two groups continued through November 2014 with decreasing frequency; what appears to be the last communication is dated November~24, 2014.% 303 \footnote{OSC-KAV\_01140 (11/24/14 Email, Khoo to McGee et~al.).} The project appears not to have developed past the planning stage, and no construction occurred. \paragraph{Communications with I.C.~Expert Investment Company and Giorgi Rtskhiladze (Summer and Fall 2015)} In the late summer of 2015, the Trump Organization received a new inquiry about pursuing a Trump Tower project in Moscow. In approximately September 2015, Felix Sater, a New York-based real estate advisor, contacted Michael Cohen, then-executive vice president of the Trump Organization and special counsel to Donald J. Trump.% 304 \footnote{Sater provided information to our Office in two 2017 interviews conducted under a proffer agreement. \blackout{Grand Jury}} Sater had previously worked with the Trump Organization and advised it on a number of domestic and international projects. 
Sater had explored the possibility of a Trump Tower project in Moscow while working with the Trump Organization and therefore knew of the organization's general interest in completing a deal there.% 305 \footnote{\blackout{Grand Jury}} Sater had also served as an informal agent of the Trump Organization in Moscow previously and had accompanied Ivanka Trump and Donald Trump~Jr.\ to Moscow in the mid-2000s.% 306 \footnote{Sater 9/19/17 302, at~1--2, 5.} Sater contacted Cohen on behalf of I.C.~Expert Investment Company (I.C.~Expert), a Russian real-estate development corporation controlled by Andrei Vladimirovich Rozov.% 307 \footnote{Sater 9/19/17 302, at~3.} Sater had known Rozov since approximately 2007 and, in 2014, had served as an agent on behalf of Rozov during Rozov's purchase of a building in New York City.% 308 \footnote{Rozov 1/25/18 302, at~1.} Sater later contacted Rozov and proposed that I.C.~Expert pursue a Trump Tower Moscow project in which I.C.~Expert would license the name and brand from the Trump Organization but construct the building on its own. Sater worked on the deal with Rozov and another employee of I.C.~Expert.% 309 \footnote{Rozov 1/25/18 302, at~1; \textit{see also} 11/2/15 Email, Cohen to Rozov et~al.\ (sending letter of intent).} Cohen was the only Trump Organization representative to negotiate directly with I.C.~Expert or its agents. In approximately September 2015, Cohen obtained approval to negotiate with I.C.~Expert from candidate Trump, who was then president of the Trump Organization. Cohen provided updates directly to Trump about the project throughout 2015 and into 2016, assuring him the project was continuing.% 310 \footnote{Cohen 9/12/18 302, at~1--2, 4--6.} Cohen also discussed the Trump Moscow project with Ivanka Trump as to design elements (such as possible architects to use for the project% 311 \footnote{Cohen 9/12/18 302, at~5.}% ) and Donald J. Trump~Jr.\ (about his experience in Moscow and possible involvement in the project% 312 \footnote{Cohen 9/12/18 302, at~4--5.}% ) during the fall of 2015. Also during the fall of 2015, Cohen communicated about the Trump Moscow proposal with Giorgi Rtskhiladze, a business executive who previously had been involved in a development deal with the Trump Organization in Batumi, Georgia.% 313 \footnote{Rtskhiladze was a U.S.-based executive of the Georgian company Silk Road Group. In approximately 2011, Silk Road Group and the Trump Organization entered into a licensing agreement to build a Trump-branded property in Batumi, Georgia. Rtskhiladze was also involved in discussions for a Trump-branded project in Astana, Kazakhstan. 
The Office twice interviewed Rtskhiladze, \blackout{Grand Jury}} Cohen stated that he spoke to Rtskhiladze in part because Rtskhiladze had pursued business ventures in Moscow, including a licensing deal with the Agalarov-owned Crocus Group.% 314 \footnote{Cohen 9/12/18 302, at~12; \textit{see also} Rtskhiladze 5/10/18 302, at~1.} On September~22, 2015, Cohen forwarded a preliminary design study for the Trump Moscow project to Rtskhiladze, adding ``I look forward to your reply about this spectacular project in Moscow.'' Rtskhiladze forwarded Cohen's email to an associate and wrote, ``[i]f we could organize the meeting in New York at the highest level of the Russian Government and Mr.~Trump this project would definitely receive the worldwide attention.''% 315 \footnote{9/22/15 Email, Rtskhiladze to Nizharadze.} On September~24, 2015, Rtskhiladze sent Cohen an attachment that he described as a proposed ``[l]etter to the Mayor of Moscow from Trump org,'' explaining that ``[w]e need to send this letter to the Mayor of Moscow (second guy in Russia) he is aware of the potential project and will pledge his support.''% 316 \footnote{9/24/15 Email, Rtskhiladze to Cohen.} In a second email to Cohen sent the same day, Rtskhiladze provided a translation of the letter, which described the Trump Moscow project as a ``symbol of stronger economic, business and cultural relationships between New York and Moscow and therefore United States and the Russian Federation.''% 317 \footnote{9/24/15 Email, Rtskhiladze to Cohen.} On September~27, 2015, Rtskhiladze sent another email to Cohen, proposing that the Trump Organization partner on the Trump Moscow project with ``Global Development Group LLC,'' which he described as being controlled by Michail Posikhin, a Russian architect, and Simon Nizharadze.% 318 \footnote{9/27/15 Email, Rtskhiladze to Cohen.} Cohen told the Office that he ultimately declined the proposal and instead continued to work with I.C.~Expert, the company represented by Felix Sater.% 319 \footnote{Cohen 9/12/18 302, at~12.} \paragraph{Letter of Intent and Contacts to Russian Government (October 2015--January 2016)} \subparagraph{Trump Signs the Letter of Intent on behalf of the Trump Organization} Between approximately October~13, 2015 and November~2, 2015, the Trump Organization (through its subsidiary Trump Acquisition, LLC) and I.C.~Expert completed a letter of intent (LOI) for a Trump Moscow property. 
The LOI, signed by Trump for the Trump Organization and Rozov on behalf of I.C.~Expert, was ``intended to facilitate further discussions'' in order to ``attempt to enter into a mutually acceptable agreement'' related to the Trump-branded project in Moscow.% 320 \footnote{11/2/15 Email, Cohen to Rozov et~al.\ (attachment) (hereinafter ``LOI''); \textit{see also} 10/13/15 Email, Sater to Cohen \& Davis (attaching proposed letter of intent).} The LOI contemplated a development with residential, hotel, commercial, and office components, and called for ``[a]pproximately 250 first class, luxury residential condominiums,'' as well as ``[o]ne first class, luxury hotel consisting of approximately 15 floors and containing not fewer than 150 hotel rooms.''% 321 \footnote{LOI, p.~2.} For the residential and commercial portions of the project, the Trump Organization would receive between 1\% and 5\% of all condominium sales,% 322 \footnote{The LOI called for the Trump Organization to receive 5\% of all gross sales up to \$100~million; 4\% of all gross sales from \$100~million to \$250~million; 3\% of all gross sales from \$250~million to \$500~million; 2\% of all gross sales from \$500~million to \$1~billion; and 1\% of all gross sales over \$1~billion. LOI, Schedule 2.} plus 3\% of all rental and other revenue.% 323 \footnote{LOI, Schedule 2.} For the project's hotel portion, the Trump Organization would receive a base fee of 3\% of gross operating revenues for the first five years and 4\% thereafter, plus a separate incentive fee of 20\% of operating profit.% 324 \footnote{LOI, Schedule 1.} Under the LOI, the Trump Organization also would receive a \$4 million ``up-front fee'' prior to groundbreaking.% 325 \footnote{LOI, Schedule 2.} Under these terms, the Trump Organization stood to earn substantial sums over the lifetime of the project, without assuming significant liabilities or financing commitments.% 326 \footnote{Cohen 9/12/18 302, at~3.} On November~3, 2015, the day after the Trump Organization transmitted the LOI, Sater emailed Cohen suggesting that the Trump Moscow project could be used to increase candidate Trump's chances at being elected, writing: \begin{quote} Buddy our boy can become President of the USA and we can engineer it. I will get all of Putins team to buy in on this, I will manage this process\dots. Michael, Putin gets on stage with Donald for a ribbon cutting for Trump Moscow, and Donald owns the republican nomination. And possibly beats Hillary and our boy is in\dots. We will manage this process better than anyone. You and I will get Donald and Vladimir on a stage together very shortly. That the game changer.% 327 \footnote{11/3/15 Email, Sater to Cohen (12:14~p.m.).} \end{quote} Later that day, Sater followed up: \begin{quote} Donald doesn't stare down, he negotiates and understands the economic issues and Putin only wants to deal with a pragmatic leader, and a successful business man is a good candidate for someone who knows how to negotiate. ``Business, politics, whatever it all is the same for someone who knows how to deal'' I think I can get Putin to say that at the Trump Moscow press conference. If he says it we own this election. America's most difficult adversary agreeing that Donald is a good guy to negotiate\dots. We can own this election. Michael my next steps are very sensitive with Putin's very very close people, we can pull this off. Michael lets go. 2 boys from Brooklyn getting a USA president elected. 
This is good really good.% 328 \footnote{11/3/15 Email, Sater to Cohen (12:40~p.m.).} \end{quote} According to Cohen, he did not consider the political import of the Trump Moscow project to the 2016 U.S. presidential election at the time. Cohen also did not recall candidate Trump or anyone affiliated with the Trump Campaign discussing the political implications of the Trump Moscow project with him. However, Cohen recalled conversations with Trump in which the candidate suggested that his campaign would be a significant ``infomercial'' for Trump-branded properties.% 329 \footnote{Cohen 9/12/18 302, at~3--4; Cohen 8/7/18 302, at~15.} \subparagraph{Post-LOI Contacts with Individuals in Russia} Given the size of the Trump Moscow project, Sater and Cohen believed the project required approval (whether express or implicit) from the Russian national government, including from the Presidential Administration of Russia.% 330 \footnote{\blackout{Grand Jury}} Sater stated that he therefore began to contact the Presidential Administration through another Russian business contact.% 331 \footnote{Sater 12/15/17 302, at~3--4.} In early negotiations with the Trump Organization, Sater had alluded to the need for government approval and his attempts to set up meetings with Russian officials. On October~12, 2015, for example, Sater wrote to Cohen that ``all we need is Putin on board and we are golden,'' and that a ``meeting with Putin and top deputy is tentatively set for the 14th [of October].''% 332 \footnote{10/12/15 Email, Sater to Cohen (8:07~a.m.).} \blackout{Grand Jury} this meeting was being coordinated by associates in Russia and that he had no direct interaction with the Russian government.% 333 \footnote{\blackout{Grand Jury}} Approximately a month later, after the LOI had been signed, Lana Erchova emailed Ivanka Trump on behalf of Erchova's then-husband Dmitry Klokov, to offer Klokov's assistance to the Trump Campaign.% 334 \footnote{Ivanka Trump received an email from a woman who identified herself as ``Lana E. Alexander,'' which said in part, ``If you ask anyone who knows Russian to google my husband Dmitry Klokov, you'll see who he is close to and that he has done Putin's political campaigns.'' 11/16/15 Email, Erchova to I.~Trump.} Klokov was at that time Director of External Communications for PJSC Federal Grid Company of Unified Energy System, a large Russian electricity transmission company, and had been previously employed as an aide and press secretary to Russia's energy minister. Ivanka Trump forwarded the email to Cohen.% 335 \footnote{11/16/15 Email, I.~Trump to Cohen.} He told the Office that, after receiving this inquiry, he had conducted an internet search for Klokov's name and concluded (incorrectly) that Klokov was a former Olympic weightlifter.% 336 \footnote{Cohen 8/7/18 302, at~17. During his interviews with the Office, Cohen still appeared to believe that the Klokov he spoke with was that Olympian. The investigation, however, established that the email address used to communicate with Cohen belongs to a different Dmitry Klokov, as described above.} Between November~18 and~19, 2015, Klokov and Cohen had at least one telephone call and exchanged several emails. Describing himself in emails to Cohen as a ``trusted person'' who could offer the Campaign ``political synergy'' and ``synergy on a government level,'' Klokov recommended that Cohen travel to Russia to speak with him and an unidentified intermediary. 
Klokov said that those conversations could facilitate a later meeting in Russia between the candidate and an individual Klokov described as ``our person of interest.''% 337 \footnote{11/18/15 Email, Klokov to Cohen (6:51~a.m.).} In an email to the Office, Erchova later identified the ``person of interest'' as Russian President Vladimir Putin.% 338 \footnote{In July 2018, the Office received an unsolicited email purporting to be from Erchova, in which she wrote that ``[a]t the end of 2015 and beginning of 2016 I was asked by my ex-husband to contact Ivanka Trump \dots\ and offer cooperation to Trump's team on behalf of the Russian officials.'' 7/27/18 Email, Erchova to Special Counsel's Office. The email claimed that the officials wanted to offer candidate Trump ``land in Crimea among other things and unofficial meeting with Putin.'' \textit{Id.} In order to vet the email's claims, the Office responded requesting more details. The Office did not receive any reply.} In the telephone call and follow-on emails with Klokov, Cohen discussed his desire to use a near-term trip to Russia to do site surveys and talk over the Trump Moscow project with local developers. Cohen registered his willingness also to meet with Klokov and the unidentified intermediary, but was emphatic that all meetings in Russia involving him or candidate Trump---including a possible meeting between candidate Trump and Putin---would need to be ``in conjunction with the development and an official visit'' with the Trump Organization receiving a formal invitation to visit.% 339 \footnote{11/18/15 Email, Cohen to Klokov (7:15~a.m.).} (Klokov had written previously that ``the visit [by candidate Trump to Russia] has to be informal.'')% 340 \footnote{11/18/15 Email, Klokov to Cohen (6:51~a.m.).} Klokov had also previously recommended to Cohen that he separate their negotiations over a possible meeting between Trump and ``the person of interest'' from any existing business track.% 341 \footnote{11/18/15 Email, Klokov to Cohen (6:51~a.m.) (``I would suggest separating your negotiations and our proposal to meet. I assure you, after the meeting level of projects and their capacity can be completely different, having the most important support.'').} Re-emphasizing that his outreach was not done on behalf of any business, Klokov added in second email to Cohen that, if publicized well, such a meeting could have ``phenomenal'' impact ``in a business dimension'' and that the ``person of interest['s]'' ``most important support'' could have significant ramifications for the ``level of projects and their capacity.'' Klokov concluded by telling Cohen that there was ``no bigger warranty in any project than [the] consent of the person of interest.''% 342 \footnote{11/19/15 Email, Klokov to Cohen (7:40~a.m.).} Cohen rejected the proposal, saying that ``[c]urrently our LOI developer is in talks with VP's Chief of Staff and arranging a formal invite for the two to meet.''% 343 \footnote{11/19/15 Email, Cohen to Klokov (12:56~p.m.).} This email appears to be their final exchange, and the investigation did not identify evidence that Cohen brought Klokov's initial offer of assistance to the Campaign's attention or that anyone associated with the Trump Organization or the Campaign dealt with Klokov at a later date. 
Cohen explained that he did not pursue the proposed meeting because he was already working on the Moscow Project with Sater, who Cohen understood to have his own connections to the Russian government.% 344 \footnote{Cohen 9/18/18 302, at~12.} By late December 2015, however, Cohen was complaining that Sater had not been able to use those connections to set up the promised meeting with Russian government officials. Cohen told Sater that he was ``setting up the meeting myself.''% 345 \footnote{FS00004 (12/30/15 Text Message, Cohen to Sater (6:17~p.m.)).} On January~11, 2016, Cohen emailed the office of Dmitry Peskov, the Russian government's press secretary, indicating that he desired contact with Sergei Ivanov, Putin's chief of staff. Cohen erroneously used the email address ``\UseVerb{PrpeskovaATprpressgofru}'' instead of ``\UseVerb{PrpeskovaATprpressgovru},'' so the email apparently did not go through.% 346 \footnote{1/11/16 Email, Cohen to \UseVerb{prpeskovaATprpressgofru} (9:12~a.m.).} On January~14, 2016, Cohen emailed a different address (\UseVerb{infoATprpressgovru}) with the following message: \begin{quote} Dear Mr.~Peskov, Over the past few months, I have been working with a company based in Russia regarding the development of a Trump Tower-Moscow project in Moscow City. Without getting into lengthy specifics, the communication between our two sides has stalled. As this project is too important, I am hereby requesting your assistance. I respectfully request someone, preferably you; contact me so that I might discuss the specifics as well as arranging meetings with the appropriate individuals. I thank you in advance for your assistance and look forward to hearing from you soon.% 347 \footnote{1/14/16 Email, Cohen to \UseVerb{infoATprpressgovru} (9:21~a.m.).} \end{quote} Two days later, Cohen sent an email to \UseVerb{PrpeskovaATprpressgovru}, repeating his request to speak with Sergei Ivanov.% 348 \footnote{1/16/16 Email, Cohen to \UseVerb{prpeskovaATprpressgovru} (10:28~a.m.).} Cohen testified to Congress, and initially told the Office, that he did not recall receiving a response to this email inquiry and that he decided to terminate any further work on the Trump Moscow project as of January 2016. Cohen later admitted that these statements were false. In fact, Cohen had received (and recalled receiving) a response to his inquiry, and he continued to work on and update candidate Trump on the project through as late as June 2016.% 349 \footnote{\textit{Cohen} Information \P\P~4,~7. Cohen's interactions with President Trump and the President's lawyers when preparing his congressional testimony are discussed further in \hyperref[chap:volume-2]{Volume~II}\null. \textit{See} \hyperlink{subsubsection.2.2.11.3}{Vol.~II, Section~II.K.3}, \textit{infra}.} On January~20, 2016, Cohen received an email from Elena Poliakova, Peskov's personal assistant. Writing from her personal email account, Poliakova stated that she had been trying to reach Cohen and asked that he call her on the personal number that she provided.% 350 \footnote{1/20/16 Email, Poliakova to Cohen (5:57~a.m.) (``Mr.~Cohen[,] I can't get through to both your phones. Pls, call me.'').} Shortly after receiving Poliakova's email, Cohen called and spoke to her for 20~minutes.% 351 \footnote{Telephone records show a 20-minute call on January~20, 2016 between Cohen and the number Poliakova provided in her email. 
Call Records of Michael Cohen \blackout{Grand Jury} After the call, Cohen saved Poliakova's contact information in his Trump Organization Outlook contact list. 1/20/16 Cohen Microsoft Outlook Entry (6:22~a.m.).} Cohen described to Poliakova his position at the Trump Organization and outlined the proposed Trump Moscow project, including information about the Russian counterparty with which the Trump Organization had partnered. Cohen requested assistance in moving the project forward, both in securing land to build the project and with financing. According to Cohen, Poliakova asked detailed questions and took notes, stating that she would need to follow up with others in Russia.% 352 \footnote{Cohen 9/12/18 302, at~2--3.} Cohen could not recall any direct follow-up from Poliakova or from any other representative of the Russian government, nor did the Office identify any evidence of direct follow-up. However, the day after Cohen's call with Poliakova, Sater texted Cohen, asking him to ``[c]all me when you have a few minutes to chat \dots\ It's about Putin they called today.''% 353 \footnote{FS00011 (1/21/16 Text Messages, Sater to Cohen).} Sater then sent a draft invitation for Cohen to visit Moscow to discuss the Trump Moscow project,% 354 \footnote{The invitation purported to be from Genbank, a Russian bank that was, according to Sater, working at the behest of a larger bank, VTB, and would consider providing financing. FS00008 (12/31/15 Text Messages, Sater \& Cohen). Additional information about Genbank can be found \textit{infra}.} along with a note to ``[t]ell me if the letter is good as amended by me or make whatever changes you want and send it back to me.''% 355 \footnote{FSO0011 (1/21/16 Text Message, Sater to Cohen (7:44~p.m.)); 1/21/16 Email, Sater to Cohen (6:49~p.m.).} After a further round of edits, on January~25, 2016, Sater sent Cohen an invitation---signed by Andrey Ryabinskiy of the company MHJ---to travel to ``Moscow for a working visit'' about the ``prospects of development and the construction business in Russia,'' ``the various land plots available suited for construction of this enormous Tower,'' and ``the opportunity to co-ordinate a follow up visit to Moscow by Mr.~Donald Trump.''% 356 \footnote{1/25/16 Email, Sater to Cohen (12:01~p.m.) (attachment).} According to Cohen, he elected not to travel at the time because of concerns about the lack of concrete proposals about land plots that could be considered as options for the project.% 357 \footnote{Cohen 9/12/18 302, at~6--7.} \paragraph{Discussions about Russia Travel by Michael Cohen or Candidate Trump (December 2015--June 2016)} \subparagraph{Sater's Overtures to Cohen to Travel to Russia} The late January communication was neither the first nor the last time that Cohen contemplated visiting Russia in pursuit of the Trump Moscow project. Beginning in late 2015, Sater repeatedly tried to arrange for Cohen and candidate Trump, as representatives of the Trump Organization, to travel to Russia to meet with Russian government officials and possible financing partners. In December 2015, Sater sent Cohen a number of emails about logistics for traveling to Russia for meetings.% 358 \footnote{\textit{See, e.g.}, 12/1/15 Email, Sater to Cohen (12:41~p.m.) 
(``Please scan and send me a copy of your passport for the Russian Ministry of Foreign Affairs.'').} On December~19, 2015, Sater wrote: \begin{quote} Please call me I have Evgeney [Dvoskin] on the other line.% 359 \footnote{Toll records show that Sater was speaking to Evgeny Dvoskin. Call Records of Felix Sater \blackout{Grand Jury} Dvoskin is an executive of Genbank, a large bank with lending focused in Crimea, Ukraine. At the time that Sater provided this financing letter to Cohen, Genbank was subject to U.S. government sanctions, \textit{see Russia/Ukraine-related Sanctions and Identifications}, Office of Foreign Assets Control (Dec.~22, 2015), \textit{available at} \url{https://www.treasury.gov/resource-center/sanctions/OFACEnforcement/Pages/20151222.aspx}. Dvoskin, who had been deported from the United States in 2000 for criminal activity, was under indictment in the United States for stock fraud under the aliases Eugene Slusker and Gene Shustar. \textit{See United States~v.\ Rizzo, et~al.}, 2:03-cr-63 (E.D.N.Y. Feb.~6, 2003).} He needs a copy of your and Donald's passports they need a scan of every page of the passports. Invitations \& Visas will be issued this week by VTB Bank to discuss financing for Trump Tower Moscow. Politically neither Putins office nor Ministry of Foreign Affairs cannot issue invite, so they are inviting commercially/business. VTB is Russia's 2 biggest bank and VTB Bank CEO Andrey Kostin, will be at all meetings with Putin so that it is a business meeting not political. We will be invited to Russian consulate this week to receive invite \& have visa issued.% 360 \footnote{12/19/15 Email, Sater to Cohen (10:50~a.m.); FS00002 (12/19/15 Text Messages, Sater to Cohen, (10:53~a.m.)[)].} \end{quote} In response, Cohen texted Sater an image of his own passport.% 361 \footnote{FS00004 (12/19/15 Text Message, Cohen to Sater); ERT\_0198-256 (12/19/15 Text Messages, Cohen \& Sater).} Cohen told the Office that at one point he requested a copy of candidate Trump's passport from Rhona Graff, Trump's executive assistant at the Trump Organization, and that Graff later brought Trump's passport to Cohen's office.% 362 \footnote{Cohen 9/12/18 302, at~5.} The investigation did not, however, establish that the passport was forwarded to Sater.% 363 \footnote{On December~21, 2015, Sater sent Cohen a text message that read, ``They need a copy of DJT passport,'' to which Cohen responded, ``After I return from Moscow with you with a date for him.'' FS00004 (12/21/15 Text Messages, Cohen \& Sater).} Into the spring of 2016, Sater and Cohen continued to discuss a trip to Moscow in connection with the Trump Moscow project. On April~20, 2016, Sater wrote Cohen, ``[t]he People wanted to know when you are coming?''% 364 \footnote{FS00014 (4/20/16 Text Message, Sater to Cohen (9:06~p.m.)).} On May~4, 2016, Sater followed up: \begin{quote} I had a chat with Moscow. ASSUMING the trip does happen the question is before or after the convention. I said I believe, but don't know for sure, that it's probably after the convention. Obviously the pre-meeting trip (you only) can happen anytime you want but the 2 big guys where [sic] the question. I said I would confirm and revert\dots. Let me know about If I was right by saying I believe after Cleveland and also when you want to speak to them and possibly fly over.% 365 \footnote{FS00015 (5/4/16 Text Message, Sater to Cohen (7:38~p.m.)).} \end{quote} Cohen responded, ``My trip before Cleveland. 
Trump once he becomes the nominee after the convention.''% 366 \footnote{FS00015 (5/4/16 Text Message, Cohen to Sater (8:03~p.m.)).} The day after this exchange, Sater tied Cohen's travel to Russia to the St.~Petersburg International Economic Forum (``Forum''), an annual event attended by prominent Russian politicians and businessmen. Sater told the Office that he was informed by a business associate that Peskov wanted to invite Cohen to the Forum.% 367 \footnote{Sater 12/15/17 302, at~4.} On May~5, 2016, Sater wrote to Cohen: \begin{quote} Peskov would like to invite you as his guest to the St.~Petersburg Forum which is Russia's Davos it's June~16--19. He wants to meet there with you and possibly introduce you to either Putin or Medvedev, as they are not sure if 1 or both will be there. This is perfect. The entire business class of Russia will be there as well. He said anything you want to discuss including dates and subjects are on the table to discuss[.]% 368 \footnote{FS00016 (5/5/16 Text Messages, Sater to Cohen (6:26 \& 6:27~a.m.)).} \end{quote} The following day, Sater asked Cohen to confirm those dates would work for him to travel; Cohen wrote back, ``[w]orks for me.''% 369 \footnote{FS00016 (5/6/16 Text Messages, Cohen \& Sater).} On June~9, 2016, Sater sent Cohen a notice that he (Sater) was completing the badges for the Forum, adding, ``Putin is there on the 17th very strong chance you will meet him as well.''% 370 \footnote{FS00018 (6/9/16 Text Messages, Sater \& Cohen).} On June~13, 2016, Sater forwarded Cohen an invitation to the Forum signed by the Director of the Roscongress Foundation, the Russian entity organizing the Forum.% 371 \footnote{6/13/16 Email, Sater to Cohen (2:10~p.m.).} Sater also sent Cohen a Russian visa application and asked him to send two passport photos.% 372 \footnote{FS00018 (6/13/16 Text Message, Sater to Cohen (2:20~p.m.)); 6/13/16 Email, Sater to Cohen.} According to Cohen, the invitation gave no indication that Peskov had been involved in inviting him. Cohen was concerned that Russian officials were not actually involved or were not interested in meeting with him (as Sater had alleged), and so he decided not to go to the Forum.% 373 \footnote{Cohen 9/12/18 302, at~6--8.} On June~14, 2016, Cohen met Sater in the lobby of the Trump Tower in New York and informed him that he would not be traveling at that time.% 374 \footnote{FS00019 (6/14/16 Text Messages, Cohen \& Sater (12:06 and 2:50~p.m.)).} \subparagraph{Candidate Trump's Opportunities to Travel to Russia} The investigation identified evidence that, during the period the Trump Moscow project was under consideration, the possibility of candidate Trump visiting Russia arose in two contexts. First, in interviews with the Office, Cohen stated that he discussed the subject of traveling to Russia with Trump twice: once in late 2015; and again in spring 2016.% 375 \footnote{Cohen 9/12/18 302, at~2.} According to Cohen, Trump indicated a willingness to travel if it would assist the project significantly. On one occasion, Trump told Cohen to speak with then-campaign manager Corey Lewandowski to coordinate the candidate's schedule. Cohen recalled that he spoke with Lewandowski, who suggested that they speak again when Cohen had actual dates to evaluate. 
Cohen indicated, however, that he knew that travel prior to the Republican National Convention would be impossible given the candidate's preexisting commitments to the Campaign.% 376 \footnote{Cohen 9/12/18 302, at~7.} Second, like Cohen, Trump received and turned down an invitation to the St.~Petersburg International Economic Forum. In late December 2015, Mira Duma---a contact of Ivanka Trump's from the fashion industry---first passed along invitations for Ivanka Trump and candidate Trump from Sergei Prikhodko, a Deputy Prime Minister of the Russian Federation.% 377 \footnote{12/21/15 Email, Mira to Ivanka Trump (6:57~a.m.) (attachments); TRUMPORG\_16\_000057 (1/7/16 Email, I.~Trump to Graff (9:18~a.m.)).} On January~14, 2016, Rhona Graff sent an email to Duma stating that Trump was ``honored to be asked to participate in the highly prestigious'' Forum event, but that he would ``have to decline'' the invitation given his ``very grueling and full travel schedule'' as a presidential candidate.% 378 \footnote{1/14/16 Email, Graff to Mira.} Graff asked Duma whether she recommended that Graff ``send a formal note to the Deputy Prime Minister'' declining his invitation; Duma replied that a formal note would be ``great.''% 379 \footnote{1/15/16 Email, Mira to Graff.} It does not appear that Graff prepared that note immediately. According to written answers from President Trump,% 380 \footnote{As explained in \hyperref[chap:volume-2]{Volume~II} and \hyperlink{section.3.3}{Appendix~C}, on September~17, 2018, the Office sent written questions to the President's counsel. On November~20, 2018, the President provided written answers to those questions through counsel.} Graff received an email from Deputy Prime Minister Prikhodko on March~17, 2016, again inviting Trump to participate in the 2016 Forum in St.~Petersburg.% 381 \footnote{Written Responses of Donald J. Trump (Nov.~20, 2018), at~17 (Response to Question~IV, Part~(e)) (``[D]ocuments show that Ms.~Graff prepared for my signature a brief response declining the invitation.'').} Two weeks later, on March~31, 2016, Graff prepared for Trump's signature a two-paragraph letter declining the invitation.% 382 \footnote{Written Responses of Donald J. Trump (Nov.~20, 2018), at~17 (Response to Question~IV, Part~(e)); \textit{see also} TRUMPORG\_16\_000134 (unsigned letter dated March~31, 2016).} The letter stated that Trump's ``schedule has become extremely demanding'' because of the presidential campaign, that he ``already ha[d] several commitments in the United States'' for the time of the Forum, but that he otherwise ``would have gladly given every consideration to attending such an important event.''% 383 \footnote{TRUMPORG\_16\_000134 (unsigned letter).} Graff forwarded the letter to another executive assistant at the Trump Organization with instructions to print the document on letterhead for Trump to sign.% 384 \footnote{TRUMPORG\_16\_000133 (3/31/16 Email, Graff to Macchia).} At approximately the same time that the letter was being prepared, Robert Foresman---a New York-based investment banker---began reaching out to Graff to secure an in-person meeting with candidate Trump.
According to Foresman, he had been asked by Anton Kobyakov, a Russian presidential aide involved with the Roscongress Foundation, to see if Trump could speak at the Forum.% 385 \footnote{Foresman 10/17/18 302, at~3--4.} Foresman first emailed Graff on March~31, 2016, following a phone introduction brokered through Trump business associate Mark Burnett (who produced the television show \textit{The Apprentice}). In his email, Foresman referenced his long-standing personal and professional expertise in Russia and Ukraine, his work setting up an early ``private channel'' between Vladimir Putin and former U.S. President George W. Bush, and an ``approach'' he had received from ``senior Kremlin officials'' about the candidate. Foresman asked Graff for a meeting with the candidate, Corey Lewandowski, or ``another relevant person'' to discuss this and other ``concrete things'' Foresman felt uncomfortable discussing over ``unsecure email.''% 386 \footnote{\textit{See} TRUMPORG\_16\_00136 (3/31/16 Email, Foresman to Graff); \textit{see also} Foresman 10/17/18 302, at~3--4.} On April~4, 2016, Graff forwarded Foresman's meeting request to Jessica Macchia, another executive assistant to Trump.% 387 \footnote{\textit{See} TRUMPORG\_16\_00136 (4/4/16 Email, Graff to Macchia).} With no response forthcoming, Foresman twice sent reminders to Graff---first on April~26 and again on April~30, 2016.% 388 \footnote{\textit{See} TRUMPORG\_16\_00137 (4/26/16 Email, Foresman to Graff); TRUMPORG\_16\_00141 (4/30/16 Email, Foresman to Graff).} Graff sent an apology to Foresman and forwarded his April~26 email (as well as his initial March 2016 email) to Lewandowski.% 389 \footnote{\textit{See} TRUMPORG\_16\_00139 (4/27/16 Email, Graff to Foresman); TRUMPORG\_16\_00137 (4/27/16 Email, Graff to Lewandowski).} On May~2, 2016, Graff forwarded Foresman's April~30 email---which suggested an alternative meeting with Donald Trump~Jr.\ or Eric Trump so that Foresman could convey to them information that ``should be conveyed to [the candidate] personally or [to] someone [the candidate] absolutely trusts''---to policy advisor Stephen Miller.% 390 \footnote{TRUMPORG\_16\_00142 (5/2/16 Email, Graff to S.~Miller); \textit{see also} TRUMPORG\_16\_00143 (5/2/16 Email, Graff to S.~Miller) (forwarding March 2016 email from Foresman).} No communications or other evidence obtained by the Office indicate that the Trump Campaign learned that Foresman was reaching out to invite the candidate to the Forum or that the Campaign otherwise followed up with Foresman until after the election, when he interacted with the Transition Team as he pursued a possible position in the incoming Administration.% 391 \footnote{Foresman's contacts during the transition period are discussed further in \hyperlink{subsubsection.1.4.2.3}{Volume~I, Section~IV.B.3}, \textit{infra}.} When interviewed by the Office, Foresman denied that the specific ``approach'' from ``senior Kremlin officials'' noted in his March~31, 2016 email was anything other than Kobyakov's invitation to Roscongress. According to Foresman, the ``concrete things'' he referenced in the same email were a combination of the invitation itself, Foresman's personal perspectives on the invitation and Russia policy in general, and details of a Ukraine plan supported by a U.S. think tank (EastWest Institute). Foresman told the Office that Kobyakov had extended similar invitations through him to another Republican presidential candidate and one other politician. 
Foresman also said that Kobyakov had asked Foresman to invite Trump to speak after that other presidential candidate withdrew from the race and the other politician's participation did not work out.% 392 \footnote{Foresman 10/17/18 302, at~4.} Finally, Foresman claimed to have no plans to establish a back channel involving Trump, stating the reference to his involvement in the Bush--Putin back channel was meant to burnish his credentials to the Campaign. Foresman commented that he had not recognized any of the experts announced as Trump's foreign policy team in March 2016, and wanted to secure an in-person meeting with the candidate to share his professional background and policy views, including that Trump should decline Kobyakov's invitation to speak at the Forum.% 393 \footnote{Foresman 10/17/18 302, at~8--9.} \subsubsection{George Papadopoulos} George Papadopoulos was a foreign policy advisor to the Trump Campaign from March 2016 to early October 2016.% 394 \footnote{Papadopoulos met with our Office for debriefings on several occasions in the summer and fall of 2017, after he was arrested and charged in a sealed criminal complaint with making false statements in a January 2017 FBI interview about, \textit{inter alia}, the timing, extent, and nature of his interactions and communications with Joseph Mifsud and two Russian nationals: Olga Polonskaya and Ivan Timofeev. Papadopoulos later pleaded guilty, pursuant to a plea agreement, to an information charging him with making false statements to the FBI, in violation of 18~U.S.C. \S~1001(a).} In late April 2016, Papadopoulos was told by London-based professor Joseph Mifsud, immediately after Mifsud's return from a trip to Moscow, that the Russian government had obtained ``dirt'' on candidate Clinton in the form of thousands of emails. One week later, on May~6, 2016, Papadopoulos suggested to a representative of a foreign government that the Trump Campaign had received indications from the Russian government that it could assist the Campaign through the anonymous release of information that would be damaging to candidate Clinton. Papadopoulos shared information about Russian ``dirt'' with people outside of the Campaign, and the Office investigated whether he also provided it to a Campaign official. Papadopoulos and the Campaign officials with whom he interacted told the Office that they did not recall that Papadopoulos passed them the information. Throughout the relevant period of time and for several months thereafter, Papadopoulos worked with Mifsud and two Russian nationals to arrange a meeting between the Campaign and the Russian government. That meeting never came to pass. 
\paragraph{Origins of Campaign Work} In March 2016, Papadopoulos became a foreign policy advisor to the Trump Campaign.% 395 \footnote{\textit{A Transcript of Donald Trump's Meeting with the Washington Post Editorial Board}, Washington Post (Mar.~21, 2016).} As early as the summer of 2015, he had sought a role as a policy advisor to the Campaign but, in a September~30, 2015 email, he was told that the Campaign was not hiring policy advisors.% 396 \footnote{7/15/15 LinkedIn Message, Papadopoulos to Lewandowski (6:57~a.m.); 9/30/15 Email, Glassner to Papadopoulos (7:42:21~a.m.).} In late 2015, Papadopoulos obtained a paid position on the campaign of Republican presidential candidate Ben Carson.% 397 \footnote{Papadopoulos 8/10/17 302, at~2.} Although Carson remained in the presidential race until early March 2016, Papadopoulos had stopped actively working for his campaign by early February 2016.% 398 \footnote{Papadopoulos 8/10/17 302, at~2; 2/4/16 Email, Papadopoulos to Idris.} At that time, Papadopoulos reached out to a contact at the London Centre of International Law Practice (LCILP), which billed itself as a ``unique institution \dots\ comprising high-level professional international law practitioners, dedicated to the advancement of global legal knowledge and the practice of international law.''% 399 \footnote{London Centre of International Law Practice, at \url{https://www.lcilp.org/} (via \url{web.archive.org}).} Papadopoulos said that he had finished his role with the Carson campaign and asked if LCILP was hiring.% 400 \footnote{2/4/16 Email, Papadopoulos to Idris.} In early February, Papadopoulos agreed to join LCILP and arrived in London to begin work.% 401 \footnote{2/5/16 Email, Idris to Papadopoulos (6:11:25~p.m.); 2/6/16 Email, Idris to Papadopoulos (5:34:15~p.m.).} As he was taking his position at LCILP, Papadopoulos contacted Trump campaign manager Corey Lewandowski via LinkedIn and emailed campaign official Michael Glassner about his interest in joining the Trump Campaign.% 402 \footnote{2/4/16 LinkedIn Message, Papadopoulos to Lewandowski (1:28~p.m.); 2/4/16 Email, Papadopoulos to Glassner (2:10:36~p.m.).} On March~2, 2016, Papadopoulos sent Glassner another message reiterating his interest.% 403 \footnote{3/2/16 Email, Papadopoulos to Glassner (11:17:23~a.m.).} Glassner passed along word of Papadopoulos's interest to another campaign official, Joy Lutes, who notified Papadopoulos by email that she had been told by Glassner to introduce Papadopoulos to Sam Clovis, the Trump Campaign's national co-chair and chief policy advisor.% 404 \footnote{3/2/16 Email, Lutes to Papadopoulos (10:08:15~p.m.).} At the time of Papadopoulos's March~2 email, the media was criticizing the Trump Campaign for lack of experienced foreign policy or national security advisors within its ranks.% 405 \footnote{Clovis 10/3/17 302 (1 of 2), at~4.} To address that issue, senior Campaign officials asked Clovis to put a foreign policy team together on short notice.% 406 \footnote{Clovis 10/3/17 302 (1 of 2), at~4.} After receiving Papadopoulos's name from Lutes, Clovis performed a Google search on Papadopoulos, learned that he had worked at the Hudson Institute, and believed that he had credibility on energy issues.% 407 \footnote{\blackout{Grand Jury}; 3/3/16 Email, Lutes to Clovis \& Papadopoulos (6:05:47~p.m.).} On March~3, 2016, Clovis arranged to speak with Papadopoulos by phone to discuss Papadopoulos joining the Campaign as a foreign policy advisor, and on March~6, 2016, the two spoke.% 408 
\footnote{3/6/16 Email, Papadopoulos to Clovis (4:24:21~p.m.).} Papadopoulos recalled that Russia was mentioned as a topic, and he understood from the conversation that Russia would be an important aspect of the Campaign's foreign policy.% 409 \footnote{Statement of Offense \P~4, \textit{United States~v. George Papadopoulos}, 1:17-cr-182 (D.D.C. Oct.~5, 2017), Doc.~19 (``\textit{Papadopoulos} Statement of Offense'').} At the end of the conversation, Clovis offered Papadopoulos a role as a foreign policy advisor to the Campaign, and Papadopoulos accepted the offer.% 410 \footnote{Papadopoulos 8/10/17 302, at~2.} \paragraph{Initial Russia-Related Contacts} Approximately a week after signing on as a foreign policy advisor, Papadopoulos traveled to Rome, Italy, as part of his duties with LCILP.% 411 \footnote{Papadopoulos 8/10/17 302, at~2--3; \textit{Papadopoulos} Statement of Offense \P~5.} The purpose of the trip was to meet officials affiliated with Link Campus University, a for-profit institution headed by a former Italian government official.% 412 \footnote{Papadopoulos 8/10/17 302, at~2--3; Stephanie Kirchgaessner et~al., \textit{Joseph Mifsud: more questions than answers about mystery professor linked to Russia}, The Guardian (Oct.~31, 2017) (``Link Campus University \dots\ is headed by a former Italian interior minister named Vincenzo Scotti.'').} During the visit, Papadopoulos was introduced to Joseph Mifsud. Mifsud is a Maltese national who worked as a professor at the London Academy of Diplomacy in London, England.% 413 \footnote{\textit{Papadopoulos} Statement of Offense \P~5.} Although Mifsud worked out of London and was also affiliated with LCILP, the encounter in Rome was the first time that Papadopoulos met him.% 414 \footnote{Papadopoulos 8/10/17 302, at~3.} Mifsud maintained various Russian contacts while living in London, as described further below. Among his contacts was \blackout{Investigative Technique},% 415 \footnote{\textit{See, e.g.} \blackout{Investigative Technique}. \blackout{Harm to Ongoing Matters}} a one-time employee of the IRA, the entity that carried out the Russian social media campaign (\textit{see} \hyperlink{section.1.2}{Volume~I, Section~II}, \textit{supra}). In January and February 2016, Mifsud and \blackout{Investigative Technique} discussed \blackout{Investigative Technique} possibly meeting in Russia. The investigation did not identify evidence of them meeting. Later, in the spring of 2016, \blackout{Investigative Technique} was also in contact \blackout{Investigative Technique} that was linked to an employee of the Russian Ministry of Defense, and that account had overlapping contacts with a group of Russian military-controlled Facebook accounts that included accounts used to promote the DCLeaks releases in the course of the GRU's hack-and-release operations (\textit{see} \hyperlink{subsubsection.1.3.2.1}{Volume~I, Section~III.B.1}, \textit{supra}). 
According to Papadopoulos, Mifsud at first seemed uninterested in Papadopoulos when they met in Rome.% 416 \footnote{\textit{Papadopoulos} Statement of Offense \P~5.} After Papadopoulos informed Mifsud about his role in the Trump Campaign, however, Mifsud appeared to take greater interest in Papadopoulos.% 417 \footnote{\textit{Papadopoulos} Statement of Offense \P~5.} The two discussed Mifsud's European and Russian contacts and had a general discussion about Russia; Mifsud also offered to introduce Papadopoulos to European leaders and others with contacts to the Russian government.% 418 \footnote{Papadopoulos 8/10/17 302, at~3; Papadopoulos 8/11/17 302, at~2.} Papadopoulos told the Office that Mifsud's claim of substantial connections with Russian government officials interested Papadopoulos, who thought that such connections could increase his importance as a policy advisor to the Trump Campaign.% 419 \footnote{\textit{Papadopoulos} Statement of Offense \P~5.} On March~17, 2016, Papadopoulos returned to London.% 420 \footnote{Papadopoulos 8/10/17 302, at~2.} Four days later, candidate Trump publicly named him as a member of the foreign policy and national security advisory team chaired by Senator Jeff Sessions, describing Papadopoulos as ``an oil and energy consultant'' and an ``[e]xcellent guy.''% 421 \footnote{Phillip Rucker \& Robert Costa, \textit{Trump Questions Need for NATO, Outlines Noninterventionist Foreign Policy}, Washington Post (Mar.~21, 2016).} On March~24, 2016, Papadopoulos met with Mifsud in London.% 422 \footnote{Papadopoulos 8/10/17 302, at~3; 3/24/16 Text Messages, Mifsud \& Papadopoulos.} Mifsud was accompanied by a Russian female named Olga Polonskaya. Mifsud introduced Polonskaya as a former student of his who had connections to Vladimir Putin.% 423 \footnote{Papadopoulos 8/10/17 302, at~3.} Papadopoulos understood at the time that Polonskaya may have been Putin's niece but later learned that this was not true.% 424 \footnote{Papadopoulos 8/10/17 302, at~3; Papadopoulos 2/10/17 302, at~2--3; Papadopoulos Internet Search History (3/24/16) (revealing late-morning and early-afternoon searches on March~24, 2016 for ``putin's niece,'' ``olga putin,'' and ``russian president niece olga,'' among other terms).} During the meeting, Polonskaya offered to help Papadopoulos establish contacts in Russia and stated that the Russian ambassador in London was a friend of hers.% 425 \footnote{Papadopoulos 8/10/17 302, at~3.} Based on this interaction, Papadopoulos expected Mifsud and Polonskaya to introduce him to the Russian ambassador in London, but that did not occur.% 426 \footnote{\textit{Papadopoulos} Statement of Offense \P~8 n.1.} Following his meeting with Mifsud, Papadopoulos sent an email to members of the Trump Campaign's foreign policy advisory team. The subject line of the message was ``Meeting with Russian leadership -- including Putin.''% 427 \footnote{3/24/16 Email, Papadopoulos to Page et~al. (8:48:21~a.m.).} The message stated in pertinent part: \begin{quote} I just finished a very productive lunch with a good friend of mine, Joseph Mifsud, the director of the London Academy of Diplomacy -- who introduced me to both Putin's niece and the Russian Ambassador in London -- who also acts as the Deputy Foreign Minister.% 428 \footnote{Papadopoulos's statements to the Campaign were false. 
As noted above, the woman he met was not Putin's niece, he had not met the Russian Ambassador in London, and the Ambassador did not also serve as Russia's Deputy Foreign Minister.} The topic of the lunch was to arrange a meeting between us and the Russian leadership to discuss U.S.--Russia ties under President Trump. They are keen to host us in a ``neutral'' city, or directly in Moscow. They said the leadership, including Putin, is ready to meet with us and Mr.~Trump should there be interest. Waiting for everyone's thoughts on moving forward with this very important issue.% 429 \footnote{3/24/16 Email, Papadopoulos to Page et~al.\ (8:48:21~a.m.).} \end{quote} Papadopoulos's message came at a time when Clovis perceived a shift in the Campaign's approach toward Russia---from one of engaging with Russia through the NATO framework and taking a strong stance on Russian aggression in Ukraine, \blackout{Grand Jury}% 430 \footnote{\blackout{Grand Jury}} Clovis's response to Papadopoulos, however, did not reflect that shift. Replying to Papadopoulos and the other members of the foreign policy advisory team copied on the initial email, Clovis wrote: \begin{quote} This is most informative. Let me work it through the campaign. No commitments until we see how this plays out. My thought is that we probably should not go forward with any meetings with the Russians until we have had occasion to sit with our NATO allies, especially France, Germany and Great Britain. We need to reassure our allies that we are not going to advance anything with Russia until we have everyone on the same page. More thoughts later today. Great work.% 431 \footnote{3/24/16 Email, Clovis to Papadopoulos et~al.\ (8:55:04~a.m.).} \end{quote} \paragraph{March~31 Foreign Policy Team Meeting} The Campaign held a meeting of the foreign policy advisory team with Senator Sessions and candidate Trump approximately one week later, on March~31, 2016, in Washington, D.C.% 432 \footnote{Papadopoulos 8/10/17 302, at~4; Papadopoulos 8/11/17 302, at~3.} The meeting---which was intended to generate press coverage for the Campaign% 433 \footnote{Sessions 1/17/18 302, at~16--17.}% ---took place at the Trump International Hotel.% 434 \footnote{Papadopoulos 8/10/17 302, at~4.} Papadopoulos flew to Washington for the event. At the meeting, Senator Sessions sat at one end of an oval table, while Trump sat at the other. As reflected in the photograph below (which was posted to Trump's Instagram account), Papadopoulos sat between the two, two seats to Sessions's left[.] 
\begin{figure}[t] \vspace{-20pt} \begin{center} \includegraphics[width=5in]{images/p-86-foreign-policy-team.png}% \end{center} \vspace{-20pt} \caption*{March~31, 2016 Meeting of Foreign Policy Team, with Papadopoulos (Fourth from Right of Candidate Trump)} \vspace{-10pt} \label{fig:foreign-policy-team} \end{figure} During the meeting, each of the newly announced foreign policy advisors introduced themselves and briefly described their areas of experience or expertise.% 435 \footnote{Papadopoulos 8/10/17 302, at~4.} Papadopoulos spoke about his previous work in the energy sector and then brought up a potential meeting with Russian officials.% 436 \footnote{Papadopoulos 8/10/17 302, at~4.} Specifically, Papadopoulos told the group that he had learned through his contacts in London that Putin wanted to meet with candidate Trump and that these connections could help arrange that meeting.% 437 \footnote{\textit{Papadopoulos} Statement of Offense \P~9; \textit{see} Gordon 8/29/17 302, at~14; Carafano 9/12/17 302, at~2; Hoskins 9/14/17 302, at~1.} Trump and Sessions both reacted to Papadopoulos's statement. Papadopoulos and Campaign advisor J.D.~Gordon---who told investigators in an interview that he had a ``crystal clear'' recollection of the meeting---have stated that Trump was interested in and receptive to the idea of a meeting with Putin.% 438 \footnote{Papadopoulos 8/10/17 302, at~4--5; Gordon 9/7/17 302, at~4--5.} Papadopoulos understood Sessions to be similarly supportive of his efforts to arrange a meeting.% 439 \footnote{Papadopoulos 8/10/17 302, at~5; Papadopoulos 8/11/17 302, at~3.} Gordon and two other attendees, however, recall that Sessions generally opposed the proposal, though they differ in their accounts of the concerns he voiced or the strength of the opposition he expressed.% 440 \footnote{Sessions 1/17/18 302, at~17; Gordon 9/7/17 302, at~5; Hoskins 9/14/17 302, at~1; Carafano 9/12/17 302, at~2.} \paragraph{George Papadopoulos Learns That Russia Has ``Dirt'' in the Form of Clinton Emails} Whatever Sessions's precise words at the March~31 meeting, Papadopoulos did not understand Sessions or anyone else in the Trump Campaign to have directed that he refrain from making further efforts to arrange a meeting between the Campaign and the Russian government. To the contrary, Papadopoulos told the Office that he understood the Campaign to be supportive of his efforts to arrange such a meeting.% 441 \footnote{Papadopoulos 8/10/17 302, at~4--5; Papadopoulos 8/11/17 302, at~3; Papadopoulos 9/20/17 302, at~2.} Accordingly, when he returned to London, Papadopoulos resumed those efforts.% 442 \footnote{\textit{Papadopoulos} Statement of Offense \P~10.} Throughout April 2016, Papadopoulos continued to correspond with, meet with, and seek Russia contacts through Mifsud and, at times, Polonskaya.% 443 \footnote{\textit{Papadopoulos} Statement of Offense \P\P~10--15.} For example, within a week of her initial March~24 meeting with him, Polonskaya attempted to send Papadopoulos a text message---which email exchanges show to have been drafted or edited by Mifsud---addressing Papadopoulos's ``wish to engage with the Russian Federation.''% 444 \footnote{3/29/16 Emails, Mifsud to Polonskaya (3:39~a.m.
and 5:36~a.m.).} When Papadopoulos learned from Mifsud that Polonskaya had tried to message him, he sent her an email seeking another meeting.% 445 \footnote{4/10/16 Email, Papadopoulos to Polonskaya (2:45:59~p.m.).} Polonskaya responded the next day that she was ``back in St.~Petersburg'' but ``would be very pleased to support [Papadopoulos's] initiatives between our two countries'' and ``to meet [him] again.''% 446 \footnote{4/11/16 Email, Polonskaya to Papadopoulos (3:11:24~a.m.).} Papadopoulos stated in reply that he thought ``a good step'' would be to introduce him to ``the Russian Ambassador in London,'' and that he would like to talk to the ambassador, ``or anyone else you recommend, about a potential foreign policy trip to Russia.''% 447 \footnote{4/11/16 Email, Papadopoulos to Polonskaya (9:21:56~a.m.).} Mifsud, who had been copied on the email exchanges, replied on the morning of April~11, 2016. He wrote, ``This is already been agreed. I am flying to Moscow on the 18th for a Valdai meeting, plus other meetings at the Duma. We will talk tomorrow.''% 448 \footnote{4/11/16 Email, Mifsud to Papadopoulos (11:43:53).} The two bodies referenced by Mifsud are part of or associated with the Russian government: the Duma is a Russian legislative assembly,% 449 \footnote{\textit{Papadopoulos} Statement of Offense \P~10(c).} while ``Valdai'' refers to the Valdai Discussion Club, a Moscow-based group that ``is close to Russia's foreign-policy establishment.''% 450 \footnote{Anton Troianovski, \textit{Putin Ally Warns of Arms Race as Russia Considers Response to U.S. Nuclear Stance}, Washington Post (Feb.~10, 2018).} Papadopoulos thanked Mifsud and said that he would see him ``tomorrow.''% 451 \footnote{4/11/16 Email, Papadopoulos to Mifsud (11:51:53~a.m.).} For her part, Polonskaya responded that she had ``already alerted my personal links to our conversation and your request,'' that ``we are all very excited [about] the possibility of a good relationship with Mr.~Trump,'' and that ``[t]he Russian Federation would love to welcome him once his candidature would be officially announced.''% 452 \footnote{4/12/16 Email, Polonskaya to Papadopoulos (4:47:06~a.m.).} Papadopoulos's and Mifsud's mentions of seeing each other ``tomorrow'' referenced a meeting that the two had scheduled for the next morning, April~12, 2016, at the Andaz Hotel in London. Papadopoulos acknowledged the meeting during interviews with the Office,% 453 \footnote{Papadopoulos 9/19/17 302, at~7.} and records from Papadopoulos's UK cellphone and his internet-search history all indicate that the meeting took place.% 454 \footnote{4/12/16 Email, Mifsud to Papadopoulos (5:44:39~a.m.) (forwarding Libya-related document); 4/12/16 Email, Mifsud to Papadopoulos \& Obaid (10:28:20~a.m.); Papadopoulos Internet Search History (Apr.~11, 2016 10:56:49~p.m.) 
(search for ``andaz hotel liverpool street''); 4/12/16 Text Messages, Mifsud \& Papadopoulos.} Following the meeting, Mifsud traveled as planned to Moscow.% 455 \footnote{\textit{See, e.g.}, 4/18/16 Email, Mifsud to Papadopoulos (8:04:54~a.m.).} On April~18, 2016, while in Russia, Mifsud introduced Papadopoulos over email to Ivan Timofeev, a member of the Russian International Affairs Council (RIAC).% 456 \footnote{Papadopoulos 8/10/17 302, at~5.} Mifsud had described Timofeev as having connections with the Russian Ministry of Foreign Affairs (MFA),% 457 \footnote{\textit{Papadopoulos} Statement of Offense \P~11.} the executive entity in Russia responsible for Russian foreign relations.% 458 \footnote{During the campaign period, Papadopoulos connected over LinkedIn with several MFA-affiliated individuals in addition to Timofeev. On April~25, 2016, he connected with Dmitry Andreyko, publicly identified as a First Secretary at the Russian Embassy in Ireland. In July 2016, he connected with Yuriy Melnik, the spokesperson for the Russian Embassy in Washington and with Alexey Krasilnikov, publicly identified as a counselor with the MFA\null. And on September~16, 2016, he connected with Sergei Nalobin, also identified as an MFA official. \textit{See} Papadopoulos LinkedIn Connections \blackout{Investigative Technique}} Over the next several weeks, Papadopoulos and Timofeev had multiple conversations over Skype and email about setting ``the groundwork'' for a ``potential'' meeting between the Campaign and Russian government officials.% 459 \footnote{\textit{Papadopoulos} Statement of Offense \P~11.} Papadopoulos told the Office that, on one Skype call, he believed that his conversation with Timofeev was being monitored or supervised by an unknown third party, because Timofeev spoke in an official manner and Papadopoulos heard odd noises on the line.% 460 \footnote{Papadopoulos 8/10/17 302, at~5; Papadopoulos 9/19/17 302, at~10.} Timofeev also told Papadopoulos in an April~25, 2016 email that he had just spoken ``to Igor Ivanov[,] the President of RIAC and former Foreign Minister of Russia,'' and conveyed Ivanov's advice about how best to arrange a ``Moscow visit.''% 461 \footnote{4/25/16 Email, Timofeev to Papadopoulos (8:16:35~a.m.).} After a stop in Rome, Mifsud returned to England on April~25, 2016.% 462 \footnote{4/22/16 Email, Mifsud to Papadopoulos (12:41:01~a.m.).} The next day, Papadopoulos met Mifsud for breakfast at the Andaz Hotel (the same location as their last meeting).% 463 \footnote{\textit{Papadopoulos} Statement of Offense \P~14; 4/25/16 Text Messages, Mifsud \& Papadopoulos.} During that meeting, Mifsud told Papadopoulos that he had met with high-level Russian government officials during his recent trip to Moscow. Mifsud also said that, on the trip, he learned that the Russians had obtained ``dirt'' on candidate Hillary Clinton. 
As Papadopoulos later stated to the FBI, Mifsud said that the ``dirt'' was in the form of ``emails of Clinton,'' and that they ``have thousands of emails.''% 464 \footnote{\textit{Papadopoulos} Statement of Offense \P~14.} On May~6, 2016, 10 days after that meeting with Mifsud, Papadopoulos suggested to a representative of a foreign government that the Trump Campaign had received indications from the Russian government that it could assist the Campaign through the anonymous release of information that would be damaging to Hillary Clinton.% 465 \footnote{\label{fnFourSixFive}This information is contained in the FBI case-opening document and related materials. \sout{The information is law enforcement sensitive (LES) and must be treated accordingly in any external dissemination.} The foreign government conveyed this information to the U.S. government on July~26, 2016, a few days after WikiLeaks's release of Clinton-related emails. The FBI opened its investigation of potential coordination between Russia and the Trump Campaign a few days later based on the information.} \paragraph{Russia-Related Communications With The Campaign} While he was discussing with his foreign contacts a potential meeting of campaign officials with Russian government officials, Papadopoulos kept campaign officials apprised of his efforts. On April~25, 2016, the day before Mifsud told Papadopoulos about the emails, Papadopoulos wrote to senior policy advisor Stephen Miller that ``[t]he Russian government has an open invitation by Putin for Mr.~Trump to meet him when he is ready,'' and that ``[t]he advantage of being in London is that these governments tend to speak a bit more openly in `neutral' cities.''% 466 \footnote{4/25/16 Email, Papadopoulos to S.~Miller (8:12:44~p.m.).} On April~27, 2016, after his meeting with Mifsud, Papadopoulos wrote a second message to Miller stating that ``some interesting messages [were] coming in from Moscow about a trip when the time is right.''% 467 \footnote{4/27/16 Email, Papadopoulos to S.~Miller (6:55:58~p.m.).} The same day, Papadopoulos sent a similar email to campaign manager Corey Lewandowski, telling Lewandowski that Papadopoulos had ``been receiving a lot of calls over the last month about Putin wanting to host [Trump] and the team when the time is right.''% 468 \footnote{4/27/16 Email, Papadopoulos to Lewandowski (7:15:14~p.m.).} Papadopoulos's Russia-related communications with Campaign officials continued throughout the spring and summer of 2016. On May~4, 2016, he forwarded to Lewandowski an email from Timofeev raising the possibility of a meeting in Moscow, asking Lewandowski whether that was ``something we want to move forward with.''% 469 \footnote{5/4/16 Email, Papadopoulos to Lewandowski (8:14:49~a.m.).} The next day, Papadopoulos forwarded the same Timofeev email to Sam Clovis, adding to the top of the email ``Russia update.''% 470 \footnote{5/5/16 Email, Papadopoulos to Clovis (7:15:21~p.m.).} He included the same email in a May~21, 2016 message to senior Campaign official Paul Manafort, under the subject line ``Request from Russia to meet Mr.~Trump,'' stating that ``Russia has been eager to meet Mr.~Trump for quite sometime and have been reaching out to me to discuss.''% 471 \footnote{5/21/16 Email, Papadopoulos to Manafort (2:30:14~p.m.).} Manafort forwarded the message to another Campaign official, without including Papadopoulos, and stated: ``Let[']s discuss. We need someone to communicate that [Trump] is not doing these trips. 
It should be someone low level in the Campaign so as not to send any signal.''% 472 \footnote{\textit{Papadopoulos} Statement of Offense \P~19 n.2.} On June~1, 2016, Papadopoulos replied to an earlier email chain with Lewandowski about a Russia visit, asking if Lewandowski ``want[ed] to have a call about this topic'' and whether ``we were following up with it.''% 473 \footnote{6/1/16 Email, Papadopoulos to Lewandowski (3:08:18~p.m.).} After Lewandowski told Papadopoulos to ``connect with'' Clovis because he was ``running point,'' Papadopoulos emailed Clovis that ``the Russian MFA'' was asking him ``if Mr.~Trump is interested in visiting Russia at some point.''% 474 \footnote{6/1/16 Email, Lewandowski to Papadopoulos (3:20:03~p.m.); 6/1/16 Email, Papadopoulos to Clovis (3:29:14~p.m.).} Papadopoulos wrote in an email that he ``[w]anted to pass this info along to you for you to decide what's best to do with it and what message I should send (or to ignore).''% 475 \footnote{6/1/16 Email, Papadopoulos to Clovis (3:29:14~p.m.). Papadopoulos's email coincided in time with another message to Clovis suggesting a Trump--Putin meeting. First, on May~15, 2016, David Klein---a distant relative of then-Trump Organization lawyer Jason Greenblatt---emailed Clovis about a potential Campaign meeting with Berel Lazar, the Chief Rabbi of Russia. The email stated that Klein had contacted Lazar in February about a possible Trump--Putin meeting and that Lazar was ``a very close confidante of Putin.'' DJTFP00011547 (5/15/16 Email, Klein to Clovis (5:45:24~p.m.)). The investigation did not find evidence that Clovis responded to Klein's email or that any further contacts of significance came out of Klein's subsequent meeting with Greenblatt and Rabbi Lazar at Trump Tower. Klein 8/30/18 302, at~2.} After several email and Skype exchanges with Timofeev,% 476 \footnote{\textit{Papadopoulos} Statement of Offense \P~21(a).} Papadopoulos sent one more email to Lewandowski on June~19, 2016, Lewandowski's last day as campaign manager.% 477 \footnote{\blackout{Grand Jury}} The email stated that ``[t]he Russian ministry of foreign affairs'' had contacted him and asked whether, if Mr.~Trump could not travel to Russia, a campaign representative such as Papadopoulos could attend meetings.% 478 \footnote{6/19/16 Email, Papadopoulos to Lewandowski (1:11:11~p.m.).} Papadopoulos told Lewandowski that he was ``willing to make the trip off the record if it's in the interest of Mr.~Trump and the campaign to meet specific people.''% 479 \footnote{6/19/16 Email, Papadopoulos to Lewandowski (1:11:11~p.m.).} Following Lewandowski's departure from the Campaign, Papadopoulos communicated with Clovis and Walid Phares, another member of the foreign policy advisory team, about an off-the-record meeting between the Campaign and Russian government officials or with Papadopoulos's other Russia connections, Mifsud and Timofeev.% 480 \footnote{\textit{Papadopoulos} Statement of Offense \P~21; 7/14/16 Email, Papadopoulos to Timofeev (11:57:24~p.m.); 7/15/16 Email, Papadopoulos to Mifsud; 7/27/16 Email, Papadopoulos to Mifsud (2:14:18~p.m.).} Papadopoulos also interacted directly with Clovis and Phares in connection with the summit of the Transatlantic Parliamentary Group on Counterterrorism (TAG), a group for which Phares was co-secretary general.% 481 \footnote{Papadopoulos 9/19/17 302, at~16--17; \textit{9th TAG Summit in Washington DC}, Transatlantic Parliament Group on Counter Terrorism.} On July~16, 2016, Papadopoulos attended the TAG summit in 
Washington, D.C., where he sat next to Clovis (as reflected in the photograph below).% 482 \footnote{\textit{9th TAG Summit in Washington DC}, Transatlantic Parliament Group on Counter Terrorism.} \begin{figure}[t] \vspace{-20pt} \begin{center} \includegraphics[width=5in]{images/p-91-papadopolous-clovis.png}% \end{center} \vspace{-20pt} \caption*{George Papadopoulos (far right) and Sam Clovis (second from right)} \vspace{-10pt} \label{fig:papadopolous-clovis} \end{figure} Although Clovis claimed to have no recollection of attending the TAG summit,% 483 \footnote{\blackout{Grand Jury}} Papadopoulos remembered discussing Russia and a foreign policy trip with Clovis and Phares during the event.% 484 \footnote{Papadopoulos 9/19/17 302, at~16--17.} Papadopoulos's recollection is consistent with emails sent before and after the TAG summit. The pre-summit messages included a July~11, 2016 email in which Phares suggested meeting Papadopoulos the day after the summit to chat,% 485 \footnote{7/11/16 Email, Phares to Papadopoulos.} and a July~12 message in the same chain in which Phares advised Papadopoulos that other summit attendees ``are very nervous about Russia. So be aware.''% 486 \footnote{7/12/16 Email, Phares to Papadopoulos (14:52:29).} Ten days after the summit, Papadopoulos sent an email to Mifsud listing Phares and Clovis as other ``participants'' in a potential meeting at the London Academy of Diplomacy.% 487 \footnote{7/27/16 Email, Papadopoulos to Mifsud (14:14:18).} Finally, Papadopoulos's recollection is also consistent with handwritten notes from a journal that he kept at the time.% 488 \footnote{Papadopoulos 9/20/17 302, at~3.} Those notes, which are reprinted in part below, appear to refer to potential September 2016 meetings in London with representatives of the ``office of Putin,'' and suggest that Phares, Clovis, and Papadopoulos (``Walid/Sam me'') would attend without the official backing of the Campaign (``no official letter/no message from Trump'').% 489 \footnote{Papadopoulos declined to assist in deciphering his notes, telling investigators that he could not read his own handwriting from the journal. Papadopoulos 9/19/17 302, at~21. The notes, however, appear to read as listed in the column to the left of the image above.} \begin{wrapfigure}{tr}{3.1in} \vspace{-20pt} \begin{center} \includegraphics[width=3in]{images/p-92-papadopolous-notes.png}% \end{center} \vspace{-20pt} \caption*{} \vspace{-10pt} \label{fig:papadopolous-notes} \end{wrapfigure} \begin{quote} September: Have an exploratory meeting \sout{to} or lose.\\ In September -- if allowed they will blast Mr.~Trump.\\ We want the meeting in London/England\\ Walid/Sam me\\ No official letter/no message from Trump\\ They are talking to us.\\ -It is a lot of risk.\\ -Office of Putin.\\ -Explore: we are a campaign.\\ off Israel! EGYPT\\ Willingness to meet the FM\\ sp with Walid/Sam\\ -FM coming\\ -Useful to have a session with him. \end{quote} Later communications indicate that Clovis determined that he (Clovis) could not travel.
On August~15, 2016, Papadopoulos emailed Clovis that he had received requests from multiple foreign governments, ``even Russia[],'' for ``closed door workshops/consultations abroad,'' and asked whether there was still interest for Clovis, Phares, and Papadopoulos ``to go on that trip.''% 490 \footnote{8/15/16 Email, Papadopoulos to Clovis (11:59:07~a.m.).} Clovis copied Phares on his response, which said that he could not ``travel before the election'' but that he ``would encourage [Papadopoulos] and Walid to make the trips, if it is feasible.''% 491 \footnote{8/15/16 Email, Clovis to Papadopoulos (12:01:45~p.m.).} Papadopoulos was dismissed from the Trump Campaign in early October 2016, after an interview he gave to the Russian news agency Interfax generated adverse publicity.% 492 \footnote{\textit{George Papadopoulos: Sanctions Have Done Little More Than to Turn Russia Towards China}, Interfax (Sept.~30, 2016).} \paragraph{Trump Campaign Knowledge of ``Dirt''} Papadopoulos admitted telling at least one individual outside of the Campaign---specifically, the then-Greek foreign minister---about Russia's obtaining Clinton-related emails.% 493 \footnote{Papadopoulos 9/19/17 302, at~14--15; Def.~Sent.~Mem., \textit{United States~v.\ George Papadopoulos}, 1:17-cr-182 (D.D.C. Aug.~31, 2018), Doc.~45.} In addition, a different foreign government informed the FBI that, 10 days after meeting with Mifsud in late April 2016, Papadopoulos suggested that the Trump Campaign had received indications from the Russian government that it could assist the Campaign through the anonymous release of information that would be damaging to Hillary Clinton.% 494 \footnote{\textit{See} footnote \ref{fnFourSixFive} of \hyperlink{paragraph.1.4.1.2.4}{Volume~I, Section~IV.A.2.d}, \textit{supra}.} (This conversation occurred after the GRU spearphished Clinton Campaign chairman John Podesta and stole his emails, and the GRU hacked into the DCCC and DNC, \textit{see} \hyperlink{subsection.1.3.1}{Volume~I, Sections~III.A} \&~\hyperlink{subsection.1.3.2}{III.B}, \textit{supra}.) Such disclosures raised questions about whether Papadopoulos informed any Trump Campaign official about the emails. When interviewed, Papadopoulos and the Campaign officials who interacted with him told the Office that they could not recall Papadopoulos's sharing the information that Russia had obtained ``dirt'' on candidate Clinton in the form of emails or that Russia could assist the Campaign through the anonymous release of information about Clinton. Papadopoulos stated that he could not clearly recall having told anyone on the Campaign and wavered about whether he accurately remembered an incident in which Clovis had been upset after hearing Papadopoulos tell Clovis that Papadopoulos thought ``they have her emails.''% 495 \footnote{Papadopoulos 8/10/17 302, at~5; Papadopoulos 8/11/17 302, at~5; Papadopoulos 9/20/17 302, at~2.} The Campaign officials who interacted or corresponded with Papadopoulos have similarly stated, with varying degrees of certainty, that he did not tell them. Senior policy advisor Stephen Miller, for example, did not remember hearing anything from Papadopoulos or Clovis about Russia having emails of or dirt on candidate Clinton.% 496 \footnote{S.
Miller 12/14/17 302, at~10.} Clovis stated that he did not recall anyone, including Papadopoulos, having given him non-public information that a foreign government might be in possession of material damaging to Hillary Clinton.% 497 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 498 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 499 \footnote{\blackout{Grand Jury}} No documentary evidence, and nothing in the email accounts or other communications facilities reviewed by the Office, shows that Papadopoulos shared this information with the Campaign. \paragraph{Additional George Papadopoulos Contact} The Office investigated another Russia-related contact with Papadopoulos. The Office was not fully able to explore the contact because the individual at issue---Sergei Millian---remained out of the country since the inception of our investigation and declined to meet with members of the Office despite our repeated efforts to obtain an interview. Papadopoulos first connected with Millian via LinkedIn on July~15, 2016, shortly after Papadopoulos had attended the TAG Summit with Clovis.% 500 \footnote{7/15/16 LinkedIn Message, Millian to Papadopoulos.} Millian, an American citizen who is a native of Belarus, introduced himself ``as president of [the] New York-based Russian American Chamber of Commerce,'' and claimed that through that position he had ``insider knowledge and direct access to the top hierarchy in Russian politics.''% 501 \footnote{7/15/16 LinkedIn Message, Millian to Papadopoulos.} Papadopoulos asked Timofeev whether he had heard of Millian.% 502 \footnote{7/22/16 Facebook Message, Papadopoulos to Timofeev (7:40:23~p.m.); 7/26/16 Facebook Message, Papadopoulos to Timofeev (3:08:57~p.m.).} Although Timofeev said no,% 503 \footnote{7/23/16 Facebook Message, Timofeev to Papadopoulos (4:31:37~a.m.); 7/26/16 Facebook Message, Timofeev to Papadopoulos (3:37:16~p.m.).} Papadopoulos met Millian in New York City.% 504 \footnote{7/16/16 Text Messages, Papadopoulos \& Millian (7:55:43~p.m.).} The meetings took place on July~30 and August~1, 2016.% 505 \footnote{7/30/16 Text Messages, Papadopoulos \& Millian (5:38 \& 6:05~p.m.); 7/31/16 Text Messages, Millian \& Papadopoulos (3:48 \& 4:18~p.m.); 8/1/16 Text Message, Millian to Papadopoulos (8:19~p.m.).} Afterwards, Millian invited Papadopoulos to attend---and potentially speak at---two international energy conferences, including one that was to be held in Moscow in September 2016.% 506 \footnote{8/2/16 Text Messages, Millian \& Papadopoulos (3:04 \& 3:05~p.m.); 8/3/16 Facebook Messages, Papadopoulos \& Millian (4:07:37~a.m. \& 1:11:58~p.m.).} Papadopoulos ultimately did not attend either conference. 
On July~31, 2016, following his first in-person meeting with Millian, Papadopoulos emailed Trump Campaign official Bo Denysyk to say that he had been contacted ``by some leaders of Russian-American voters here in the US about their interest in voting for Mr.~Trump,'' and to ask whether he should ``put you in touch with their group (US--Russia chamber of commerce).''% 507 \footnote{7/31/16 Email, Papadopoulos to Denysyk (12:29:59~p.m.).} Denysyk thanked Papadopoulos ``for taking the initiative,'' but asked him to ``hold off with outreach to Russian-Americans'' because ``too many articles'' had already portrayed the Campaign, then-campaign chairman Paul Manafort, and candidate Trump as ``being pro-Russian.''% 508 \footnote{7/31/16 Email, Denysyk to Papadopoulos (21:54:52).} On August~23, 2016, Millian sent a Facebook message to Papadopoulos promising that he would ``share with you a disruptive technology that might be instrumental in your political work for the campaign.''% 509 \footnote{8/23/16 Facebook Message, Millian to Papadopoulos (2:55:36~a.m.).} Papadopoulos claimed to have no recollection of this matter.% 510 \footnote{Papadopoulos 9/20/17 302, at~2.} On November~9, 2016, shortly after the election, Papadopoulos arranged to meet Millian in Chicago to discuss business opportunities, including potential work with Russian ``billionaires who are not under sanctions.''% 511 \footnote{11/10/16 Facebook Message, Millian to Papadopoulos (9:35:05~p.m.).} The meeting took place on November~14, 2016, at the Trump Hotel and Tower in Chicago.% 512 \footnote{11/14/16 Facebook Message, Millian to Papadopoulos (1:32:11~a.m.).} According to Papadopoulos, the two men discussed partnering on business deals, but Papadopoulos perceived that Millian's attitude toward him changed when Papadopoulos stated that he was only pursuing private-sector opportunities and was not interested in a job in the Administration.% 513 \footnote{Papadopoulos 9/19/17 302, at~19.} The two remained in contact, however, and had extended online discussions about possible business opportunities in Russia.% 514 \footnote{\textit{E.g.}, 11/29/16 Facebook Messages, Papadopoulos \& Millian (5:09--5:11~p.m.); 12/7/16 Facebook Message, Millian to Papadopoulos (5:10:54~p.m.).} The two also arranged to meet at a Washington, D.C. bar when both attended Trump's inauguration in late January 2017.% 515 \footnote{1/20/17 Facebook Messages, Papadopoulos \& Millian (4:37--4:39~a.m.).} \subsubsection{Carter Page} Carter Page worked for the Trump Campaign from January 2016 to September 2016. He was formally and publicly announced as a foreign policy advisor by the candidate in March 2016.% 516 \footnote{Page was interviewed by the FBI during five meetings in March 2017, before the Special Counsel's appointment. \blackout{Grand Jury}} Page had lived and worked in Russia, and he had been approached by Russian intelligence officers several years before he volunteered for the Trump Campaign. During his time with the Campaign, Page advocated pro-Russia foreign policy positions and traveled to Moscow in his personal capacity. Russian intelligence officials had formed relationships with Page in 2008 and~2013, and Russian officials may have focused on Page in 2016 because of his affiliation with the Campaign. However, the investigation did not establish that Page coordinated with the Russian government in its efforts to interfere with the 2016 presidential election.
\paragraph{Background} Before he began working for the Campaign in January 2016, Page had substantial prior experience studying Russian policy issues and living and working in Moscow. From 2004 to 2007, Page was the deputy branch manager of Merrill Lynch's Moscow office.% 517 \footnote{\textit{Testimony of Carter Page, Hearing Before the U.S. House of Representatives, Permanent Select Committee on Intelligence,} 115th Cong.~40 (Nov.~2, 2017) (exhibit).} There, he worked on transactions involving the Russian energy company Gazprom and came to know Gazprom's deputy chief financial officer, Sergey Yatsenko.% 518 \footnote{Page 3/30/17 302, at~10.} In 2008, Page founded Global Energy Capital LLC (GEC), an investment management and advisory firm focused on the energy sector in emerging markets.% 519 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 520 \footnote{\blackout{Grand Jury}} The company otherwise had no sources of income, and Page was forced to draw down his life savings to support himself and pursue his business venture.% 521 \footnote{\blackout{Grand Jury}} Page asked Yatsenko to work with him at GEC as a senior advisor on a contingency basis, \blackout{Grand Jury}% 522 \footnote{Page 3/30/17 302, at~10; \blackout{Grand Jury}} In 2008, Page met Alexander Bulatov, a Russian government official who worked at the Russian Consulate in New York.% 523 \footnote{\blackout{Grand Jury}} Page later learned that Bulatov was a Russian intelligence officer, \blackout{Grand Jury}% 524 \footnote{\blackout{Grand Jury}} In 2013, Victor Podobnyy, another Russian intelligence officer working covertly in the United States under diplomatic cover, formed a relationship with Page.% 525 \footnote{\blackout{Grand Jury}; Complaint \P\P~22, 24, 32, \textit{United States~v.\ Buryakov}, 1:15-mj-215 (S.D.N.Y. Jan.~23, 2015), Doc.~1 (``\textit{Buryakov} Complaint'').} Podobnyy met Page at an energy symposium in New York City and began exchanging emails with him.% 526 \footnote{\textit{Buryakov} Complaint \P~34.} Podobnyy and Page also met in person on multiple occasions, during which Page offered his outlook on the future of the energy industry and provided documents to Podobnyy about the energy business.% 527 \footnote{\textit{Buryakov} Complaint \P~34.} In a recorded conversation on April~8, 2013, Podobnyy told another intelligence officer that Page was interested in business opportunities in Russia.% 528 \footnote{\textit{Buryakov} Complaint \P~32.} In Podobnyy's words, Page ``got hooked on Gazprom thinking that if they have a project, he could \dots\ rise up. Maybe he can\dots. [I]t's obvious that he wants to earn lots of money.''% 529 \footnote{\textit{Buryakov} Complaint.} Podobnyy said that he had led Page on by ``feed[ing] him empty promises'' that Podobnyy would use his Russian business connections to help Page.% 530 \footnote{\textit{Buryakov} Complaint.} Podobnyy told the other intelligence officer that his method of recruiting foreign sources was to promise them favors and then discard them once he obtained relevant information from them.% 531 \footnote{\textit{Buryakov} Complaint.} In 2015, Podobnyy and two other Russian intelligence officers were charged with conspiracy to act as an unregistered agent of a foreign government.% 532 \footnote{\textit{See Buryakov} Complaint; \textit{see also} Indictment, \textit{United States~v.\ Buryakov}, 1:15-cr-73 (S.D.N.Y.
Feb.~9, 2015), Doc.~10; \blackout{Grand Jury}} The criminal complaint detailed Podobnyy's interactions with and conversations about Page, who was identified only as ``\hbox{Male-1}.''% 533 \footnote{\textit{Buryakov} Complaint \P\P~32--34; \blackout{Grand Jury}} Based on the criminal complaint's description of the interactions, Page was aware that he was the individual described as ``\hbox{Male-1}.''% 534 \footnote{\blackout{Grand Jury}} Page later spoke with a Russian government official at the United Nations General Assembly and identified himself so that the official would understand he was ``\hbox{Male-1}'' from the Podobnyy complaint. % 535 \footnote{Page 3/16/17 302, at~4; \blackout{Grand Jury}} Page told the official that he ``didn't do anything'' \blackout{Grand Jury}% 536 \footnote{Page 3/16/17 302, at~4; \blackout{Grand Jury}} In interviews with the FBI before the Office's opening, Page acknowledged that he understood that the individuals he had associated with were members of the Russian intelligence services, but he stated that he had only provided immaterial non-public information to them and that he did not view this relationship as a backchannel.% 537 \footnote{Page 3/30/17 302, at~6; Page 3/31/17 302, at~1.} Page told investigating agents that ``the more immaterial non-public information I give them, the better for this country.''% 538 \footnote{Page 3/31/17 302, at~1.} \paragraph{Origins of and Early Campaign Work} In January 2016, Page began volunteering on an informal, unpaid basis for the Trump Campaign after Ed Cox, a state Republican Party official, introduced Page to Trump Campaign officials.% 539 \footnote{Page 3/16/17 302, at~1; \blackout{Grand Jury}} Page told the Office that his goal in working on the Campaign was to help candidate Trump improve relations with Russia.% 540 \footnote{Page 3/10/17 302, at~2.} To that end, Page emailed Campaign officials offering his thoughts on U.S.--Russia relations, prepared talking points and briefing memos on Russia, and proposed that candidate Trump meet with President Vladimir Putin in Moscow.% 541 \footnote{\textit{See, e.g.}, 1/30/16 Email, Page to Glassner et~al.; 3/17/16 Email, Page to Clovis (attaching a ``President's Daily Brief\thinspace'' prepared by Page that discussed the ``severe degradation of U.S.--Russia relations following Washington's meddling'' in Ukraine); \blackout{Grand Jury}} In communications with Campaign officials, Page also repeatedly touted his high-level contacts in Russia and his ability to forge connections between candidate Trump and senior Russian governmental officials. For example, on January~30, 2016, Page sent an email to senior Campaign officials stating that he had ``spent the past week in Europe and ha[d] been in discussions with some individuals with close ties to the Kremlin'' who recognized that Trump could have a ``game-changing effect \dots\ in bringing the end of the new Cold War.''% 542 \footnote{1/30/16 Email, Page to Glassner et~al.} The email stated that ``[t]hrough [his] discussions with these high level contacts,'' Page believed that ``a direct meeting in Moscow between Mr[.] Trump and Putin could be arranged.''% 543 \footnote{1/30/16 Email, Page to Glassner et~al.} Page closed the email by criticizing U.S. 
sanctions on Russia.% 544 \footnote{1/30/16 Email, Page to Glassner et~al.} \blackout{Grand Jury}% 545 \footnote{\blackout{Grand Jury}} On March~21, 2016, candidate Trump formally and publicly identified Page as a member of his foreign policy team to advise on Russia and the energy sector.% 546 \footnote{\textit{A Transcript of Donald Trump's Meeting with the Washington Post Editorial Board}, Washington Post (Mar.~21, 2016); \blackout{Grand Jury}} Over the next several months, Page continued providing policy-related work product to Campaign officials. For example, in April 2016, Page provided feedback on an outline for a foreign policy speech that the candidate gave at the Mayflower Hotel,% 547 \footnote{\blackout{Grand Jury}} \textit{see} \hyperlink{subsubsection.1.4.1.4}{Volume~I, Section~IV.A.4}, \textit{infra}. In May~2016, Page prepared an outline of an energy policy speech for the Campaign and then traveled to Bismarck, North Dakota, to watch the candidate deliver the speech.% 548 \footnote{\blackout{Grand Jury}} Chief policy advisor Sam Clovis expressed appreciation for Page's work and praised his work to other Campaign officials.% 549 \footnote{\textit{See, e.g.}, 3/28/16 Email, Clovis to Lewandowski et~al.\ (forwarding notes prepared by Page and stating, ``I wanted to let you know the type of work some of our advisors are capable of.'').} \paragraph{Carter Page's July 2016 Trip To Moscow} Page's affiliation with the Trump Campaign took on a higher profile and drew the attention of Russian officials after the candidate named him a foreign policy advisor. As a result, in late April 2016, Page was invited to give a speech at the July 2016 commencement ceremony at the New Economic School (NES) in Moscow.% 550 \footnote{Page 3/16/17 302, at~2--3; Page 3/10/17 302, at~3.} The NES commencement ceremony generally featured high-profile speakers; for example, President Barack Obama delivered a commencement address at the school in 2009.% 551 \footnote{S. Weber 7/28/17 302, at~3.} NES officials told the Office that the interest in inviting Page to speak at NES was based entirely on his status as a Trump Campaign advisor who served as the candidate's Russia expert.% 552 \footnote{Y. Weber 6/1/17 302, at~4--5; S. Weber 7/28/17 302, at~3.} Andrej Krickovic, an associate of Page's and assistant professor at the Higher School of Economics in Russia, recommended that NES rector Shlomo Weber invite Page to give the commencement address based on his connection to the Trump Campaign.% 553 \footnote{\textit{See} Y[.]~Weber 6/1/17 302, at~4; S. Weber 7/28/17 302, at~3.} Denis Klimentov, an employee of NES, said that when Russians learned of Page's involvement in the Trump Campaign in March 2016, the excitement was palpable.% 554 \footnote{De[.]~Klimentov 6/9/17 302, at~2.} Weber recalled that in summer 2016 there was substantial interest in the Trump Campaign in Moscow, and he felt that bringing a member of the Campaign to the school would be beneficial.% 555 \footnote{S. 
Weber 7/28/17 302, at~3.} Page was eager to accept the invitation to speak at NES, and he sought approval from Trump Campaign officials to make the trip to Russia.% 556 \footnote{\textit{See} 5/16/16 Email, Page to Phares et~al.\ (referring to submission of a ``campaign advisor request form'').} On May~16, 2016, while that request was still under consideration, Page emailed Clovis, J.D.~Gordon, and Walid Phares and suggested that candidate Trump take his place speaking at the commencement ceremony in Moscow.% 557 \footnote{\blackout{Grand Jury}; 5/16/16 Email, Page to Phares et~al.} On June~19, 2016, Page followed up again to request approval to speak at the NES event and to reiterate that NES ``would love to have Mr.~Trump speak at this annual celebration'' in Page's place.% 558 \footnote{6/19/16 Email, Page to Gordon et~al.} Campaign manager Corey Lewandowski responded the same day, saying, ``If you want to do this, it would be out side [sic] of your role with the DJT for President campaign. I am certain Mr.~Trump will not be able to attend.''% 559 \footnote{6/19/16 Email, Lewandowski to Page et~al.} In early July 2016, Page traveled to Russia for the NES events. On July~5, 2016, Denis Klimentov, copying his brother, Dmitri Klimentov,% 560 \footnote{Dmitri Klimentov is a New York-based public relations consultant.} emailed Maria Zakharova, the Director of the Russian Ministry of Foreign Affairs' Information and Press Department, about Page's visit and his connection to the Trump Campaign.% 561 \footnote{7/5/16 Email, Klimentov to Zakharova (translated).} Denis Klimentov said in the email that he wanted to draw the Russian government's attention to Page's visit in Moscow.% 562 \footnote{7/5/16 Email, Klimentov to Zakharova (translated).} His message to Zakharova continued: ``Page is Trump's adviser on foreign policy. He is a known businessman; he used to work in Russia\dots. If you have any questions, I will be happy to help contact him.''% 563 \footnote{7/5/16 Email, Klimentov to Zakharova (translated).} Dmitri Klimentov then contacted Russian Press Secretary Dmitry Peskov about Page's visit to see if Peskov wanted to introduce Page to any Russian government officials.% 564 \footnote{De.~Klimentov 11/27/18 302, at~1--2.} The following day, Peskov responded to what appears to have been the same Denis Klimentov--Zakharova email thread. Peskov wrote, ``I have read about [Page]. Specialists say that he is far from being the main one. So I better not initiate a meeting in the Kremlin.''% 565 \footnote{7/6/16 Email, Peskov to Klimentov (translated).} On July~7, 2016, Page delivered the first of his two speeches in Moscow at NES.% 566 \footnote{Page 3/10/17 302, at~3.} In the speech, Page criticized the U.S. government's foreign policy toward Russia, stating that ``Washington and other Western capitals have impeded potential progress through their often hypocritical focus on ideas such as democratization, inequality, corruption and regime change.''% 567 \footnote{\textit{See} Carter W. Page, \textit{The Lecture of Trump's Advisor Carter Page in Moscow}, YouTube Channel Katehon Think Tank, Posted July~7, 2016, \textit{available at} \url{https://www.youtube.com/watch?time\_continue=28\&v=1CYF29saA9w}. Page also provided the FBI with a copy of his speech and slides from the speech.
\textit{See} Carter Page, ``The Evolution of the World Economy: Trends and Potential,'' Speech at N[ew] Economic S[chool] (July~7, 2016).} On July~8, 2016, Page delivered a speech during the NES commencement.% 568 \footnote{Page 3/10/17 302, at~3.} After Page delivered his commencement address, Russian Deputy Prime Minister and NES board member Arkady Dvorkovich spoke at the ceremony and stated that the sanctions the United States had imposed on Russia had hurt the NES\null.% 569 \footnote{Page 3/16/17 302, at~3.} Page and Dvorkovich shook hands at the commencement ceremony, and Weber recalled that Dvorkovich made statements to Page about working together in the future.% 570 \footnote{S. Weber 7/28/17 302, at~4.} \blackout{Grand Jury}% 571 \footnote{\blackout{Grand Jury}} Page said that, during his time in Moscow, he met with friends and associates he knew from when he lived in Russia, including Andrey Baranov, a former Gazprom employee who had become the head of investor relations at Rosneft, a Russian energy company.% 572 \footnote{Page 3/10/17 302, at~3; Page 3/30/17 302, at~3; Page 3/31/17 302, at~2.} Page stated that he and Baranov talked about ``immaterial non-public'' information.% 573 \footnote{Page 3/30/17 302, at~3.} Page believed he and Baranov discussed Rosneft president Igor Sechin, and he thought Baranov might have mentioned the possibility of a sale of a stake in Rosneft in passing.% 574 \footnote{Page 3/30/17 302, at~9. \blackout{Grand Jury}} Page recalled mentioning his involvement in the Trump Campaign with Baranov, although he did not remember details of the conversation.% 575 \footnote{\blackout{Grand Jury} Page 3/30/17 302, at~3.} Page also met with individuals from Tatneft, a Russian energy company, to discuss possible business deals, including having Page work as a consultant.% 576 \footnote{Page 3/10/17 302, at~3; Page 3/30/17 302, at~7; Page 3/31/17 302, at~2.} On July~8, 2016, while he was in Moscow, Page emailed several Campaign officials and stated he would send ``a readout soon regarding some incredible insights and outreach I've received from a few Russian legislators and senior members of the Presidential Administration here.''% 577 \footnote{\blackout{Grand Jury} 7/8/16 Email, Page to Dahl \& Gordon.} On July~9, 2016, Page emailed Clovis, writing in pertinent part: \begin{quote} Russian Deputy Prime minister and NES board member Arkady Dvorkovich also spoke before the event. In a private conversation, Dvorkovich expressed strong support for Mr.~Trump and a desire to work together toward devising better solutions in response to the vast range of current international problems. Based on feedback from a diverse array of other sources close to the Presidential Administration, it was readily apparent that this sentiment is widely held at all levels of government.% 578 \footnote{\blackout{Grand Jury} 7/9/16 Email, Page to Clovis.} \end{quote} Despite these representations to the Campaign, \blackout{Grand Jury}% 579 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 580 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 581 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 582 \footnote{\blackout{Grand Jury}} The Office was unable to obtain additional evidence or testimony about who Page may have met or communicated with in Moscow; thus, Page's activities in Russia---as described in his emails with the Campaign---were not fully explained. 
\paragraph{Later Campaign Work and Removal from the Campaign} In July 2016, after returning from Russia, Page traveled to the Republican National Convention in Cleveland.% 583 \footnote{Page 3/10/17 302, at~4; Page 3/16/17 302, at~3.} While there, Page met Russian Ambassador to the United States Sergey Kislyak; that interaction is described in \hyperlink{paragraph.1.4.1.6.1}{Volume~I, Section~IV.A.6.a}, \textit{infra}.% 584 \footnote{Page 3/10/17 302, at~4; Page 3/16/17 302, at~3.} Page later emailed Campaign officials with feedback he said he received from ambassadors he had met at the Convention, and he wrote that Ambassador Kislyak was very worried about candidate Clinton's world views.% 585 \footnote{\blackout{Grand Jury}; 7/23/16 Email, Page to Clovis; 7/25/16 Email, Page to Gordon \& Schmitz.} \blackout{Grand Jury}% 586 \footnote{\blackout{Grand Jury}} Following the Convention, Page's trip to Moscow and his advocacy for pro-Russia foreign policy drew the media's attention and began to generate substantial press coverage. The Campaign responded by distancing itself from Page, describing him as an ``informal foreign policy advisor'' who did ``not speak for Mr.~Trump or the campaign.''% 587 \footnote{\textit{See, e.g.}, Steven Mufson \& Tom Hamburger, \textit{Trump Advisor's Public Comments, Ties to Moscow Stir Unease in Both Parties}, Washington Post (Aug.~5, 2016).} On September~23, 2016, Yahoo!\ News reported that U.S. intelligence officials were investigating whether Page had opened private communications with senior Russian officials to discuss U.S. sanctions policy under a possible Trump Administration.% 588 \footnote{Michael Isikoff, \textit{U.S. Intel Officials Probe Ties Between Trump Adviser and Kremlin}, Yahoo!\ News (Sept.~23, 2016).} A Campaign spokesman told Yahoo!\ News that Page had ``no role'' in the Campaign and that the Campaign was ``not aware of any of his activities, past or present.''% 589 \footnote{Michael Isikoff, \textit{U.S. Intel Officials Probe Ties Between Trump Adviser and Kremlin}, Yahoo!\ News (Sept.~23, 2016); \textit{see also} 9/25/16 Email, Hicks to Conway \& Bannon (instructing that inquiries about Page should be answered with ``[h]e was announced as an informal adviser in March. Since then he has had no role or official contact with the campaign. We have no knowledge of activities past or present and he now officially has been removed from all lists etc.'').} On September~24, 2016, Page was formally removed from the Campaign.% 590 \footnote{Page 3/16/17 302, at~2; \textit{see, e.g.}, 9/23/16 Email, J.~Miller to Bannon \& S.~Miller (discussing plans to remove Page from the campaign).} Although Page had been removed from the Campaign, after the election he sought a position in the Trump Administration.% 591 \footnote{\blackout{Grand Jury}, ``Transition Online Form,'' 11/14/16 \blackout{Grand Jury}} On November~14, 2016, he submitted an application to the Transition Team that inflated his credentials and experiences, stating that in his capacity as a Trump Campaign foreign policy advisor he had met with ``top world leaders'' and ``effectively responded to diplomatic outreach efforts from senior government officials in Asia, Europe, the Middle East, Africa, [and] the Americas.''% 592 \footnote{\blackout{Grand Jury} ``Transition Online Form,'' 11/14/16 \blackout{Grand Jury}} Page received no response from the Transition Team. When Page took a personal trip to Moscow in December 2016, he met again with at least one Russian government official. 
That interaction and a discussion of the December trip are set forth in \hyperlink{subsubsection.1.4.2.6}{Volume~I, Section~IV.B.6}, \textit{infra}. \subsubsection{Dimitri Simes and the Center for the National Interest} Members of the Trump Campaign interacted on several occasions with the Center for the National Interest (CNI), principally through its President and Chief Executive Officer, Dimitri Simes. CNI is a think tank with expertise in and connections to the Russian government. Simes was born in the former Soviet Union and immigrated to the United States in the 1970s. In April 2016, candidate Trump delivered his first speech on foreign policy and national security at an event hosted by the \textit{National Interest}, a publication affiliated with CNI\null. Then-Senator Jeff Sessions and Russian Ambassador Kislyak both attended the event and, as a result, it gained some attention in relation to Sessions's confirmation hearings to become Attorney General. Sessions had various other contacts with CNI during the campaign period on foreign-policy matters, including Russia. Jared Kushner also interacted with Simes about Russian issues during the campaign. The investigation did not identify evidence that the Campaign passed or received any messages to or from the Russian government through CNI or Simes. \paragraph{CNI and Dimitri Simes Connect with the Trump Campaign} CNI is a Washington-based non-profit organization that grew out of a center founded by former President Richard Nixon.% 593 \footnote{Simes 3/8/18 302, at~1--2.} CNI describes itself ``as a voice for strategic realism in U.S. foreign policy,'' and publishes a bi-monthly foreign policy magazine, the \textit{National Interest}.% 594 \footnote{\textit{About the Center}, CNI, \textit{available at} \url{https://cftni.org/about/}.} CNI is overseen by a board of directors and an advisory council that is largely honorary and whose members at the relevant time included Sessions, who served as an advisor to candidate Trump on national security and foreign policy issues.% 595 \footnote{\textit{Advisory Counsel}, CNI, \textit{available at} \url{https://web.archive.org/web/20161030025331/http://cftni.org/about/advisory-council/}; Simes 3/8/18 302, at~3--4; Saunders 2/15/18 302, at~4; Sessions 1/17/18 302, at~16.} Dimitri Simes is president and CEO of CNI and the publisher and CEO of the \textit{National Interest}.% 596 \footnote{Simes 3/8/18 302, at~2.} Simes was born in the former Soviet Union, emigrated to the United States in the early 1970s, and joined CNI's predecessor after working at the Carnegie Endowment for International Peace.% 597 \footnote{ Simes 3/8/18 302, at~1--2; Simes 3/27/18 302, at~19.} Simes personally has many contacts with current and former Russian government officials,% 598 \footnote{Simes 3/27/18 302, at~10--15.} as does CNI collectively. As CNI stated when seeking a grant from the Carnegie Corporation in 2015, CNI has ``unparalleled access to Russian officials and politicians among Washington think tanks,''% 599 \footnote{C00011656 (\textit{Rethinking U.S.--Russia Relations}, CNI (Apr.~18, 2015)).} in part because CNI has arranged for U.S. 
delegations to visit Russia and for Russian delegations to visit the United States as part of so-called ``Track~II'' diplomatic efforts.% 600 \footnote{Simes 3/8/18 302, at~5; Saunders 2/15/18 302, at~29--30; Zakheim 1/25/18 302, at~3.} On March~14, 2016, CNI board member Richard Plepler organized a luncheon for CNI and its honorary chairman, Henry Kissinger, at the Time Warner Building in New York.% 601 \footnote{Simes 3/8/18 302, at~6; CO0006784 (3/11/16 Email, Gilbride to Saunders (3:43:12~p.m.)[)]; \textit{cf.}~Zakheim 1/25/18 302, at~1 (Kissinger was CNI's ``Honorary Chairman of the Board''); Boyd 1/24/18 302, at~2; P. Sanders 2/15/18 302, at~5.} The idea behind the event was to generate interest in CNI's work and recruit new board members for CNI\null.% 602 \footnote{Simes 3/8/18 302, at~5--6; Simes 3/27/18 302, at~2.} Along with Simes, attendees at the event included Jared Kushner, son-in-law of candidate Trump.% 603 \footnote{Simes 3/8/18 302, at~6; Kushner 4/11/18 302, at~2.} Kushner told the Office that the event came at a time when the Trump Campaign was having trouble securing support from experienced foreign policy professionals and that, as a result, he decided to seek Simes's assistance during the March~14 event.% 604 \footnote{Kushner 4/11/18 302, at~2.} Simes and Kushner spoke again on a March~24, 2016 telephone call,% 605 \footnote{Simes 3/8/18 302, at~6--7.} three days after Trump had publicly named the team of foreign policy advisors that had been put together on short notice.% 606 \footnote{\blackout{Grand Jury} \textit{see} \hyperlink{subsubsection.1.4.1.2}{Volume~I, Section~IV.A.2}, \textit{supra}.} On March~31, 2016, Simes and Kushner had an in-person, one-on-one meeting in Kushner's New York office.% 607 \footnote{Simes 3/8/18 302, at~7--9.} During that meeting, Simes told Kushner that the best way to handle foreign-policy issues for the Trump Campaign would be to organize an advisory group of experts to meet with candidate Trump and develop a foreign policy approach that was consistent with Trump's voice.% 608 \footnote{Simes 3/8/18 302, at~7--8.} Simes believed that Kushner was receptive to that suggestion.% 609 \footnote{Simes 3/8/18 302, at~8; \textit{see also} Boyd 1/24/18 302, at~2.} Simes also had contact with other individuals associated with the Trump Campaign regarding the Campaign's foreign policy positions. For example, on June~17, 2016, Simes sent J.D.~Gordon an email with a ``memo to Senator Sessions that we discussed at our recent meeting'' and asked Gordon to both read it and share it with Sessions. 
The memorandum proposed building a ``small and carefully selected group of experts'' to assist Sessions with the Campaign, operating under the assumption ``that Hillary Clinton is very vulnerable on national security and foreign policy issues.'' The memorandum outlined key issues for the Campaign, including a ``new beginning with Russia.''% 610 \footnote{C00008187 (6/17/16 Email, Simes to Gordon (3:35:45~p.m.)).} \paragraph{National Interest Hosts a Foreign Policy Speech at the Mayflower Hotel} During both their March~24 phone call and their March~31 in-person meeting, Simes and Kushner discussed the possibility of CNI hosting a foreign policy speech by candidate Trump.% 611 \footnote{Simes 3/8/18 302, at~7.} Following those conversations, Simes agreed that he and others associated with CNI would provide behind-the-scenes input on the substance of the foreign-policy speech and that CNI officials would coordinate the logistics of the speech with Sessions and his staff, including Sessions's chief of staff, Rick Dearborn.% 612 \footnote{Simes 3/8/18 302, at~8--11; C00008923 (4/6/16 Email, Simes to Burt (2:22:28~p.m.)); Burt 2/9/18 302, at~7.} In mid-April~2016, Kushner put Simes in contact with senior policy advisor Stephen Miller and forwarded to Simes an outline of the foreign-policy speech that Miller had prepared.% 613 \footnote{C00008551 (4/17/16 Email, Kushner to Simes (2:44:25~p.m.)); C00006759 (4/14/16 Email Kushner to Simes \& S.~Miller (12:30~p.m.)).} Simes sent back to the Campaign bullet points with ideas for the speech that he had drafted with CNI Executive Director Paul Saunders and board member Richard Burt.% 614 \footnote{Burt 2/9/18 302, at~7; Saunders 2/15/18 302, at~7--8.} Simes received subsequent draft outlines from Miller, and he and Saunders spoke to Miller by phone about substantive changes to the speech.% 615 \footnote{Simes 3/8/18 302, at~13; Saunders 2/15/18 302, at~7--8.} It is not clear, however, whether CNI officials received an actual draft of the speech for comment; while Saunders recalled having received an actual draft, Simes did not, and the emails that CNI produced to this Office do not contain such a draft.% 616 \footnote{Simes 3/8/18 302, at~13; Saunders 2/15/18 302, at~7--8.} After board members expressed concern to Simes that CNI's hosting the speech could be perceived as an endorsement of a particular candidate, CNI decided to have its publication, the \textit{National Interest}, serve as the host and to have the event at the National Press Club.% 617 \footnote{Saunders 2/15/18 302, at~8; Simes 3/8/18 302, at~12; CO00003834--43 (4/22/16 Email, Simes to Boyd et~al.\ (8:47~a.m.)).} Kushner later requested that the event be moved to the Mayflower Hotel, which was another venue that Simes had mentioned during initial discussions with the Campaign, in order to address concerns about security and capacity.% 618 \footnote{Simes 3/8/18 302, at~12, 18; Saunders 2/15/18 302, at~11.} On April~25, 2016, Saunders booked event rooms at the Mayflower to host both the speech and a VIP reception that was to be held beforehand.% 619 \footnote{Saunders 2/15/18 302, at~11--12; CO0006651--57 (Mayflower Group Sales Agreement).} Saunders understood that the reception---at which invitees would have the chance to meet candidate Trump---would be a small event.% 620 \footnote{Saunders 2/15/18 302, at~12--13.} Saunders decided who would attend by looking at the list of CNI's invitees to the speech itself and then choosing a subset for the reception.% 621 \footnote{Saunders 
2/15/18 302, at~12.} CNI's invitees to the reception included Sessions and~Kislyak.% 622 \footnote{C00002575 (Attendee List); C00008536 (4/25/16 Email, Simes to Kushner (4:53:45~p.m.)).} The week before the speech Simes had informed Kislyak that he would be invited to the speech, and that he would have the opportunity to meet Trump.% 623 \footnote{Simes 3/8/18 302, at~19--20.} When the pre-speech reception began on April~27, a receiving line was quickly organized so that attendees could meet Trump.% 624 \footnote{Simes 3/8/18 302, at~21.} Sessions first stood next to Trump to introduce him to the members of Congress who were in attendance.% 625 \footnote{Simes 3/8/18 302, at~21.} After those members had been introduced, Simes stood next to Trump and introduced him to the CNI invitees in attendance, including Kislyak.% 626 \footnote{Simes 3/8/18 302, at~21.} Simes perceived the introduction to be positive and friendly, but thought it clear that Kislyak and Trump had just met for the first time.% 627 \footnote{Simes 3/8/18 302, at~21.} Kislyak also met Kushner during the pre-speech reception. The two shook hands and chatted for a minute or two, during which Kushner recalled Kislyak saying, ``we like what your candidate is saying \dots\ it's refreshing.''% 628 \footnote{Kushner 4/11/18 302, at~4.} Several public reports state that, in addition to speaking to Kushner at the pre-speech reception, Kislyak also met or conversed with Sessions at that time.% 629 \footnote{\textit{See, e.g.}, Ken Dilanian, \textit{Did Trump, Kushner, Sessions Have an Undisclosed Meeting With Russian?}, NBC News (June~1, 2016); Julia Ioffe, \textit{Why Did Jeff Sessions Really Meet With Sergey Kislyak}, The Atlantic (June~13, 2017).} Sessions stated to investigators, however, that he did not remember any such conversation.% 630 \footnote{Sessions 1/17/18 302, at~22.} Nor did anyone else affiliated with CNI or the \textit{National Interest} specifically recall a conversation or meeting between Sessions and Kislyak at the pre-speech reception.% 631 \footnote{Simes 3/8/18 302, at~21; Saunders 2/15/18 302, at~14, 21; Boyd 1/24/18 302, at~3--4; Heilbrunn 2/1/18 302, at~6; \textit{Statement Regarding President Trump's April~27, 2016 Foreign Policy Speech at the Center for the National Interest}, CNI (Mar.~8, 2017).} It appears that, if a conversation occurred at the pre-speech reception, it was a brief one conducted in public view, similar to the exchange between Kushner and~Kislyak. The Office found no evidence that Kislyak conversed with either Trump or Sessions after the speech, or would have had the opportunity to do so. Simes, for example, did not recall seeing Kislyak at the post-speech luncheon,% 632 \footnote{Simes 3/8/18 302, at~22; Heilbrunn 2/1/18 302, at~7.} and the only witness who accounted for Sessions's whereabouts stated that Sessions may have spoken to the press after the event but then departed for Capitol Hill.% 633 \footnote{Luff 1/30/18 302, at~4.} Saunders recalled, based in part on a food-related request he received from a Campaign staff member, that Trump left the hotel a few minutes after the speech to go to the airport.% 634 \footnote{Saunders 2/15/18 302, at~15.} \paragraph{Jeff Sessions's Post-Speech Interactions with CNI} In the wake of Sessions's confirmation hearings as Attorney General, questions arose about whether Sessions's campaign-period interactions with CNI apart from the Mayflower speech included any additional meetings with Ambassador Kislyak or involved Russian-related matters. 
With respect to Kislyak contacts, on May~23, 2016, Sessions attended CNI's Distinguished Service Award dinner at the Four Seasons Hotel in Washington, D.C.% 635 \footnote{Sessions 1/17/18 302, at~22; Saunders 2/15/18 302, at~17.} Sessions attended a pre-dinner reception and was seated at one of two head tables for the event.% 636 \footnote{Saunders 2/15/18 302, at~17; C00004779--80 (5/23/16 Email, Cantelmo to Saunders \& Hagberg (9:30:12~a.m.)[)]; C00004362 (5/23/16 Email, Bauman to Cantelmo et~al.\ (2:02:32~a.m.)[)].} A seating chart prepared by Saunders indicates that Sessions was scheduled to be seated next to Kislyak, who appears to have responded to the invitation by indicating he would attend the event.% 637 \footnote{C00004362 (5/23/16 Email Bauman to Cantelmo et~al.\ (2:02:32~a.m.)[)].} Sessions, however, did not remember seeing, speaking with, or sitting next to Kislyak at the dinner.% 638 \footnote{Sessions 1/17/18 302, at~22.} Although CNI board member Charles Boyd said he may have seen Kislyak at the dinner,% 639 \footnote{Boyd 1/24/18 302, at~4.} Simes, Saunders, and Jacob Heilbrunn---editor of the \textit{National Interest}---all had no recollection of seeing Kislyak at the May~23 event.% 640 \footnote{Simes 3/8/18 302, at~23; Saunders 2/15/18 302, at~18; Heilbrunn 2/1/18 302, at~7.} Kislyak also does not appear in any of the photos from the event that the Office obtained. In the summer of 2016, CNI organized at least two dinners in Washington, D.C. for Sessions to meet with experienced foreign policy professionals.% 641 \footnote{Simes 3/8/18 302, at~31; Saunders 2/15/18 302, at~19; Burt 2/9/18 302, at~9--10; Khalilzad 1/9/18 302, at~5.} The dinners included CNI-affiliated individuals, such as Richard Burt and Zalmay Khalilzad, a former U.S. ambassador to Afghanistan and Iraq and the person who had introduced Trump before the April~27, 2016 foreign-policy speech.% 642 \footnote{Burt 2/9/18 302, at~9--10; Khalilzad 1/9/18 302, at~1--2, 5.} Khalilzad also met with Sessions one-on-one separately from the dinners.% 643 \footnote{Khalilzad 1/9/18 302, at~5--6.} At the dinners and in the meetings, the participants addressed U.S. relations with Russia, including how U.S. relations with NATO and European countries affected U.S. 
policy toward Russia.% 644 \footnote{Simes 3/8/18 302, at~31; Burt 2/9/18 302, at~9--10; Khalilzad 1/9/18 302, at~5.} But the discussions were not exclusively focused on Russia.% 645 \footnote{Saunders 2/15/18 302, at~20.} Khalilzad, for example, recalled discussing ``nation-building'' and violent extremism with Sessions.% 646 \footnote{Khalilzad 1/9/18 302, at~6.} In addition, Sessions asked Saunders (of CNI) to draft two memoranda not specific to Russia: one on Hillary Clinton's foreign policy shortcomings and another on Egypt.% 647 \footnote{Saunders 2/15/18 302, at~19--20.} \paragraph{Jared Kushner's Continuing Contacts with Simes} Between the April 2016 speech at the Mayflower Hotel and the presidential election, Jared Kushner had periodic contacts with Simes.% 648 \footnote{Simes 3/8/18 302, at~27.} Those contacts consisted of both in-person meetings and phone conversations, which concerned how to address issues relating to Russia in the Campaign and how to move forward with the advisory group of foreign policy experts that Simes had proposed.% 649 \footnote{Simes 3/8/18 302, at~27.} Simes recalled that he, not Kushner, initiated all conversations about Russia, and that Kushner never asked him to set up back-channel conversations with Russians.% 650 \footnote{Simes 3/8/18 302, at~27.} According to Simes, after the Mayflower speech in late April, Simes raised the issue of Russian contacts with Kushner, advised that it was bad optics for the Campaign to develop hidden Russian contacts, and told Kushner both that the Campaign should not highlight Russia as an issue and should handle any contacts with Russians with care.% 651 \footnote{Simes 3/8/18 302, at~27. During this period of time, the Campaign received a request for a high level Campaign official to meet with an officer at a Russian state-owned bank ``to discuss an offer [that officer] claims to be carrying from President Putin to meet with'' candidate Trump. NOSC00005653 (5/17/16 Email, Dearborn to Kushner (8:12~a.m.)). Copying Manafort and Gates, Kushner responded, ``Pass on this. A lot of people come claiming to carry messages. Very few are able to verify. For now I think we decline such meetings. Most likely these people go back home and claim they have special access to gain importance for themselves. Be careful.'' NOSC00005653 (5/17/16 Email, Kushner to Dearborn).} Kushner generally provided a similar account of his interactions with Simes.% 652 \footnote{Kushner 4/11/18 302, at~11--13.} Among the Kushner--Simes meetings was one held on August~17, 2016, at Simes's request, in Kushner's New York office. The meeting was to address foreign policy advice that CNI was providing and how to respond to the Clinton Campaign's Russia-related attacks on candidate Trump.% 653 \footnote{Simes 3/8/18 302, at~29--30; Simes 3/27/18 302, at~6; Kushner 4/11/18 302, at~12; C00007269 (8/10/16 Meeting Invitation, Vargas to Simes et~al.); DJTFP00023484 (8/11/16 Email, Hagan to Manafort (5:57:15~p.m.)).} In advance of the meeting, Simes sent Kushner a ``Russia Policy Memo'' laying out ``what Mr.~Trump may want to say about Russia.''% 654 \footnote{C00007981--84 (8/9/16 Email, Simes to Kushner (6:09:21~p.m.)). The memorandum recommended ``downplaying Russia as a U.S. foreign policy priority at this time'' and suggested that ``some tend to exaggerate Putin's flaws.'' The memorandum also recommended approaching general Russian related questions in the framework of ``how to work with Russia to advance important U.S. 
national interests'' and that a Trump Administration ``not go abroad in search of monsters to destroy.'' The memorandum did not discuss sanctions but did address how to handle Ukraine-related questions, including questions about Russia's invasion and annexation of Crimea.} In a cover email transmitting that memo and a phone call to set up the meeting, Simes mentioned ``a well-documented story of highly questionable connections between Bill Clinton'' and the Russian government, ``parts of [which]'' (according to Simes) had even been ``discussed with the CIA and the FBI in the late 1990s and shared with the [Independent Counsel] at the end of the Clinton presidency.''% 655 \footnote{C00007981 (8/9/16 Email, Simes to Kushner (6:09:21~p.m.)).} Kushner forwarded the email to senior Trump Campaign officials Stephen Miller, Paul Manafort, and Rick Gates, with the note ``suggestion only.''% 656 \footnote{DJTFP00023459 (8/10/16 Email, Kushner to S.~Miller et~al.\ (11:30:13~a.m.)).} Manafort subsequently forwarded the email to his assistant and scheduled a meeting with Simes.% 657 \footnote{DJTFP00023484 (8/11/16 Email, Hagan to Manafort (5:57:15~p.m.)).} (Manafort was on the verge of leaving the Campaign by the time of the scheduled meeting with Simes, and Simes ended up meeting only with Kushner[.]) During the August~17 meeting, Simes provided Kushner the Clinton-related information that he had promised.% 658 \footnote{Simes 3/8/18 302, at~29--30; Simes 3/27/18 302, at~6; Kushner 4/11/18 302, at~12.} Simes told Kushner that, \blackout{Personal Privacy}% 659 \footnote{Simes 3/8/18 302, at~30; Simes 3/27/18 302, at~6.} Simes claimed that he had received this information from former CIA and Reagan White House official Fritz Ermarth, who claimed to have learned it from U.S. intelligence sources, not from Russians.% 660 \footnote{Simes 3/8/18 302, at~30.} Simes perceived that Kushner did not find the information to be of interest or use to the Campaign because it was, in Simes's words, ``old news.''% 661 \footnote{Simes 3/8/18 302, at~30; Simes 3/27/18 302, at~6.} When interviewed by the Office, Kushner stated that he believed that there was little chance of something new being revealed about the Clintons given their long career as public figures, and that he never received from Simes information that could be ``operationalized'' for the Trump Campaign.% 662 \footnote{Kushner 4/11/18 302, at~12.} Despite Kushner's reaction, Simes believed that he provided the same information at a small group meeting of foreign policy experts that CNI organized for Sessions.% 663 \footnote{Simes 3/8/18 302, at~30.} \subsubsection{June~9, 2016 Meeting at Trump Tower} On June~9, 2016, senior representatives of the Trump Campaign met in Trump Tower with a Russian attorney expecting to receive derogatory information about Hillary Clinton from the Russian government. The meeting was proposed to Donald Trump~Jr.\ in an email from Robert Goldstone, at the request of his then-client Emin Agalarov, the son of Russian real-estate developer Aras Agalarov. Goldstone relayed to Trump~Jr.\ that the ``Crown prosecutor of Russia \dots\ offered to provide the Trump Campaign with some official documents and information that would incriminate Hillary and her dealings with Russia'' as ``part of Russia and its government's support for Mr.~Trump.'' Trump~Jr.\ immediately responded that ``if it's what you say I love it,'' and arranged the meeting through a series of emails and telephone calls.
Trump~Jr.\ invited campaign chairman Paul Manafort and senior advisor Jared Kushner to attend the meeting, and both attended. Members of the Campaign discussed the meeting before it occurred, and Michael Cohen recalled that Trump~Jr.\ may have told candidate Trump about an upcoming meeting to receive adverse information about Clinton, without linking the meeting to Russia. According to written answers submitted by President Trump, he has no recollection of learning of the meeting at the time, and the Office found no documentary evidence showing that he was made aware of the meeting---or its Russian connection---before it occurred. The Russian attorney who spoke at the meeting, Natalia Veselnitskaya, had previously worked for the Russian government and maintained a relationship with that government throughout this period of time. She claimed that funds derived from illegal activities in Russia were provided to Hillary Clinton and other Democrats. Trump~Jr.\ requested evidence to support those claims, but Veselnitskaya did not provide such information. She and her associates then turned to a critique of the origins of the Magnitsky Act, a 2012 statute that imposed financial and travel sanctions on Russian officials and that resulted in a retaliatory ban on adoptions of Russian children. Trump~Jr.\ suggested that the issue could be revisited when and if candidate Trump was elected. After the election, Veselnitskaya made additional efforts to follow up on the meeting, but the Trump Transition Team did not engage. \paragraph{Setting Up the June~9 Meeting} \subparagraph{Outreach to Donald Trump~Jr} Aras Agalarov is a Russian real-estate developer with ties to Putin and other members of the Russian government, including Russia's Prosecutor General, Yuri Chaika.% 664 \footnote{\blackout{Grand Jury} Goldstone 2/8/18 302, at~4.} Aras Agalarov is the president of the Crocus Group, a Russian enterprise that holds substantial Russian government construction contracts and that---as discussed above, \hyperlink{subsubsection.1.4.1.1}{Volume~I, Section~IV.A.1}, \textit{supra}---worked with Trump in connection with the 2013 Miss Universe pageant in Moscow and a potential Trump Moscow real-estate project.% 665 \footnote{\blackout{Grand Jury} Kaveladze 11/16/17 302, at~3; Shugart 9/25/17 302, at~2--3; \blackout{Grand Jury}} The relationship continued over time, as the parties pursued the Trump Moscow project in 2013--2014 and exchanged gifts and letters in 2016.% 666 \footnote{\blackout{Grand Jury} Goldstone 2/8/18 302, at~10; \blackout{Grand Jury} Kaveladze 11/16/17 302, at~5--6; 4/25/16 Email, Graff to Goldstone.} For example, in April 2016, Trump responded to a letter from Aras Agalarov with a handwritten note.% 667 \footnote{RG000033--34 (4/25/16 Email, Graff to Goldstone (attachment)[)].} Aras Agalarov expressed interest in Trump's campaign, passed on ``congratulations'' for winning in the primary and---according to one email drafted by Goldstone---an ``offer'' of his ``support and that of many of his important Russian friends and colleagues[,] especially with reference to U.S./Russian relations.''% 668 \footnote{DJTJROO008 (2/29/16 Email, Goldstone to Trump~Jr.\ et~al.); \blackout{Grand Jury}} On June~3, 2016, Emin Agalarov called Goldstone, Emin's then-publicist.% 669 \footnote{Call Records of Robert Goldstone \blackout{Grand Jury} Goldstone 2/8/18 302, at~6.} Goldstone is a music and events promoter who represented Emin Agalarov from approximately late 2012 until late 2016.% 670 \footnote{Goldstone 
2/8/18 302, at~1--2; \blackout{Grand Jury} Beniaminov 1/6/18 302, at~3.} While representing Emin Agalarov, Goldstone facilitated the ongoing contact between the Trumps and the Agalarovs---including an invitation that Trump sent to Putin to attend the 2013 Miss Universe Pageant in Moscow.% 671 \footnote{Goldstone 2/8/18 302, at~1--5; \blackout{Grand Jury} DJTJRO0008 (2/29/19 Email, Goldstone to Trump~Jr.); Beniaminov 1/6/18 302, at~3; Shugart 9/25/17 302, at~2; TRUMPORG\_18\_001325 (6/21/13 Email, Goldstone to Graff); TRUMPORG\_18\_001013 (6/24/13 Email, Goldstone to Graff); TRUMPORG\_18\_001014 (6/24/13 Email, Graff to Shugart); TRUMPORG\_18\_001018 (6/26/13 Email, Graff to Goldstone); TRUMPORG\_18\_001022 (6/27/13 Email, Graff to L.~Kelly); TRUMPORG\_18\_001333 (9/12/13 Email, Goldstone to Graff, Shugart); MU000004289 (7/27/13 Email, Goldstone to Graff, Shugart).} \blackout{Grand Jury}% 672 \footnote{\blackout{Grand Jury} \textit{see} Goldstone 2/8/18 302, at~6--7.} Goldstone understood \blackout{Grand Jury} a Russian political connection, and Emin Agalarov indicated that the attorney was a prosecutor.% 673 \footnote{\blackout{Grand Jury}} Goldstone recalled that the information that might interest the Trumps involved Hillary Clinton \blackout{Grand Jury}% 674 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 675 \footnote{\blackout{Grand Jury}} The \blackout{Grand Jury} mentioned by Emin Agalarov was Natalia Veselnitskaya.% 676 \footnote{In December 2018, a grand jury in the Southern District of New York returned an indictment charging Veselnitskaya with obstructing the Prevezon litigation discussed in the text above. \textit{See} Indictment, \textit{United States~v.\ Natalia Vladimirovna Veselnitskaya}, No.~18-cr-904 (S.D.N.Y.). The indictment alleges, among other things, that Veselnitskaya lied to the district court about her relationship to the Russian Prosecutor General's Office and her involvement in responding to a U.S. document request sent to the Russian government.} From approximately 1998 until 2001, Veselnitskaya worked as a prosecutor for the Central Administrative District of the Russian Prosecutor's Office,% 677 \footnote{Veselnitskaya 11/20/17 Statement to the Senate Committee on the Judiciary, at~2; \blackout{Grand Jury} } and she continued to perform government-related work and maintain ties to the Russian government following her departure.% 678 \footnote{Testimony of Natalia Veselnitskaya Before the Senate Committee on Judiciary (Nov.~20, 2017) at~33; Keir Simmons \& Rachel Elbaum, \textit{Russian Lawyer Veselnitskaya Says She Didn't Give Trump~Jr.\ Info on Clinton}, NBC News (July~11, 2017); Maria Tsvetkova \& Jack Stubbs, \textit{Moscow Lawyer Who Met Trump~Jr.\ Had Russian Spy Agency As Client}, Reuters (July~21, 2017); Andrew E. Kramer \& Sharon LaFraniere, \textit{Lawyer Who Was Said to Have Dirt on Clinton Had Closer Ties to Kremlin than She Let On}, New York Times (Apr.~27, 2018).} She lobbied and testified about the Magnitsky Act, which imposed financial sanctions and travel restrictions on Russian officials and which was named for a Russian tax specialist who exposed a fraud and later died in a Russian prison.% 679 \footnote{\textit{See} Pub.\ L.~No.~112-208 \S\S~402, 404(a)(1), 126 Stat.~1502, 1502--1506. Sergei Magnitsky was a Russian tax specialist who worked for William Browder, a former investment fund manager in Russia. 
Browder hired Magnitsky to investigate tax fraud by Russian officials, and Magnitsky was charged with helping Browder embezzle money. After Magnitsky died in a Russian prison, Browder lobbied Congress to pass the Magnitsky Act. \textit{See, e.g.}, Andrew E. Kramer, \textit{Turning Tables in Magnitsky Case, Russia Accuses Nemesis of Murder}, New York Times (Oct.~22, 2017); Testimony of Natalia Veselnitskaya Before the Senate Committee on Judiciary (Nov.~20, 2017), Exhibits at~1--4; Rosie Gray, \textit{Bill Browder's Testimony to the Senate Judiciary Committee}, The Atlantic (July~25, 2017).} Putin called the statute ``a purely political, unfriendly act,'' and Russia responded by barring a list of current and former U.S. officials from entering Russia and by halting the adoption of Russian children by U.S. citizens.% 680 \footnote{Ellen Barry, \textit{Russia Bars 18 Americans After Sanctions by US}, New York Times (Apr.~13, 2013); Tom Porter, \textit{Supporters of the Magnitsky Act Claim They've Been Targets of Russian Assassination and Kidnapping Bids}, Newsweek (July~16, 2017).} Veselnitskaya performed legal work for Denis Katsyv,% 681 \footnote{Testimony of Natalia Veselnitskaya Before the Senate Committee on Judiciary (Nov.~20, 2017), at~21.} the son of Russian businessman Peter Katsyv, and for his company Prevezon Holdings Ltd., which was a defendant in a civil-forfeiture action alleging the laundering of proceeds from the fraud exposed by Magnitsky.% 682 \footnote{\textit{See} Veselnitskaya Decl., \textit{United States~v.\ Prevezon Holdings, Ltd.}, No.~13-cv-6326 (S.D.N.Y.); \textit{see Prevezon Holdings}, Second Amended Complaint; \textit{Prevezon Holdings}, Mem.\ and Order; \textit{Prevezon Holdings}, Deposition of Oleg Lurie.} She also appears to have been involved in an April 2016 approach to a U.S. congressional delegation in Moscow offering ``confidential information'' from ``the Prosecutor General of Russia'' about ``interactions between certain political forces in our two countries.''% 683 \footnote{\textit{See} Gribbin 8/31/17 302, at~1--2 \& 1A (undated one-page document given to congressional delegation). The Russian Prosecutor General is an official with broad national responsibilities in the Russian legal system. \textit{See Federal Law on the Prosecutor's Office of the Russian Federation} (1992, amended 2004).} Shortly after his June~3 call with Emin Agalarov, Goldstone emailed Trump~Jr.% 684 \footnote{RG000061 (6/3/16 Email, Goldstone to Trump~Jr.); DJTJRO0446 (6/3/16 Email, Goldstone to Donald Trump~Jr.); \UseVerb{DJTJ} 07/11/17 (11:00) Tweet.} The email stated: \begin{quote} Good morning Emin just called and asked me to contact you with something very interesting. The Crown prosecutor of Russia met with his father Aras this morning and in their meeting offered to provide the Trump campaign with some official documents and information that would incriminate Hillary and her dealings with Russia and would be very useful to your father. This is obviously very high level and sensitive information but is part of Russia and its government's support for Mr.~Trump -- helped along by Aras and Emin. What do you think is the best way to handle this information and would you be able to speak to Emin about it directly? I can also send this info to your father via Rhona, but it is ultra sensitive so wanted to send to you first. Best Rob Goldstone \end{quote} Within minutes of this email, Trump~Jr.\ responded, emailing back: ``Thanks Rob I appreciate that.
I am on the road at the moment but perhaps I just speak to Emin first. Seems we have some time and if it's what you say I love it especially later in the summer. Could we do a call first thing next week when I am back?''% 685 \footnote{DJTJROO446 (6/3/16 Email, Trump~Jr.\ to Goldstone); \UseVerb{DJTJ} 07/11/17 (11:00) Tweet; RG000061 (6/3/16 Email, Trump~Jr.\ to Goldstone).} Goldstone conveyed Trump~Jr.'s interest to Emin Agalarov, emailing that Trump~Jr.\ ``wants to speak personally on the issue.''% 686 \footnote{\blackout{Grand Jury} RG000062 (6/3/16 Email, Goldstone \& Trump~Jr.).} On June~6, 2016, Emin Agalarov asked Goldstone if there was ``[a]ny news,'' and Goldstone explained that Trump~Jr.\ was likely still traveling for the ``final elections \dots\ where [T]rump will be `crowned' the official nominee.''% 687 \footnote{RG000063 (6/6/16 Email, A.~Agalarov to Goldstone); RG000064 (6/6/16 Email, Goldstone to A.~Agalarov).} On the same day, Goldstone again emailed Trump~Jr.\ and asked when Trump~Jr.\ was ``free to talk with Emin about this Hillary info.''% 688 \footnote{RG000065 (6/6/16 Email, Goldstone to Trump~Jr.); DJTJR00446 (6/6/16 Email, Goldstone to Trump~Jr.).} Trump~Jr.\ asked if they could ``speak now,'' and Goldstone arranged a call between Trump~Jr.\ and Emin Agalarov.% 689 \footnote{DJTJRO0445 (6/6/16 Email, Goldstone and Trump~Jr.); RG000065--67 (6/6/16 Email, Goldstone and Trump~Jr.); \blackout{Grand Jury}} On June~6 and June~7, Trump~Jr.\ and Emin Agalarov had multiple brief calls.% 690 \footnote{DJTJRO0499 (Call Records of Donald Trump~Jr.\ \blackout{Grand Jury}); Call Records of Donald Trump~Jr.\ \blackout{Grand Jury}.} Also on June~6, 2016, Aras Agalarov called Ike Kaveladze and asked him to attend a meeting in New York with the Trump Organization.% 691 \footnote{Kaveladze 11/16/17 302, at~6; \blackout{Grand Jury}} Kaveladze is a Georgia-born, naturalized U.S. citizen who worked in the United States for the Crocus Group and reported to Aras Agalarov.% 692 \footnote{Kaveladze 11/16/17 302, at~1--2; \blackout{Grand Jury} Beniaminov 1/6/18 302, at~2--3; \blackout{Grand Jury}} Kaveladze told the Office that, in a second phone call on June~6, 2016, Aras Agalarov asked Kaveladze if he knew anything about the Magnitsky Act, and Aras sent him a short synopsis for the meeting and Veselnitskaya's business card. 
According to Kaveladze, Aras Agalarov said the purpose of the meeting was to discuss the Magnitsky Act, and he asked Kaveladze to translate.% 693 \footnote{Kaveladze 11/16/17 302, at~6.} \subparagraph{Awareness of the Meeting Within the Campaign} On June~7, Goldstone emailed Trump~Jr.\ and said that ``Emin asked that I schedule a meeting with you and [t]he Russian government attorney who is flying over from Moscow.''% 694 \footnote{DJTJROO467 (6/7/16 Email, Goldstone to Trump~Jr.); \UseVerb{DJTJ} 07/11/17 (11:00) Tweet; RG000068 (6/7/16 Email, Goldstone to Trump~Jr.); \blackout{Grand Jury}} Trump~Jr.\ replied that Manafort (identified as the ``campaign boss''), Jared Kushner, and Trump~Jr.\ would likely attend.% 695 \footnote{DJTJROO469 (6/7/16 Email, Trump~Jr.\ to Goldstone); \UseVerb{DJTJ} 07/11/17 (11:00) Tweet; RG000071 (6/7/16 Email, Trump~Jr.\ to Goldstone); OSC-KAV\_00048 (6/7/16 Email, Goldstone to Kaveladze); \blackout{Grand Jury}} Goldstone was surprised to learn that Trump~Jr., Manafort, and Kushner would attend.% 696 \footnote{Goldstone 2/8/18 302, at~7; \blackout{Grand Jury}} Kaveladze \blackout{Grand Jury} ``puzzled'' by the list of attendees and that he checked with one of Emin Agalarov's assistants, Roman Beniaminov, who said that the purpose of the meeting was for Veselnitskaya to convey ``negative information on Hillary Clinton.''% 697 \footnote{\blackout{Grand Jury} \textit{see} Kaveladze 11/16/17 302, at~7; OSC-KAV\_00048 (6/7/16 Email, Goldstone to Kaveladze).} Beniaminov, however, stated that he did not recall having known or said that.% 698 \footnote{Beniaminov 1/6/18 302, at~3.} Early on June~8, 2016 Kushner emailed his assistant, asking her to discuss a 3:00~p.m. meeting the following day with Trump~Jr.% 699 \footnote{NOSC0000007--08 (6/8/18 Email, Kushner to Vargas).} Later that day, Trump~Jr.\ forwarded the entirety of his email correspondence regarding the meeting with Goldstone to Manafort and Kushner, under the subject line ``FW: Russia -- Clinton---private and confidential,'' adding a note that the ``[m]eeting got moved to 4 tomorrow at my offices.''% 700 \footnote{NOSC00000039--42 (6/8/16 Email, Trump~Jr.\ to Kushner \& Manafort); DJTJR00485 (6/8/16 Email, Trump~Jr.\ to Kushner \& Manafort).} Kushner then sent his assistant a second email, informing her that the ``[m]eeting with don jr is 4pm now.''% 701 \footnote{NOSC0000004 (6/8/16 Email, Kushner to Vargas).} Manafort responded, ``See you then.~P.''% 702 \footnote{6/8/16 Email, Manafort to Trump~Jr.} Rick Gates, who was the deputy campaign chairman, stated during interviews with the Office that in the days before June~9, 2016 Trump~Jr.\ announced at a regular morning meeting of senior campaign staff and Trump family members that he had a lead on negative information about the Clinton Foundation.% 703 \footnote{Gates 1/30/18 302, at~7; Gates 3/1/18 302, at~3--4. Although the March~1 302 refers to ``June~19,'' that is likely a typographical error; external emails indicate that a meeting with those participants occurred on June~6. \textit{See} NOSC00023603 (6/6/16 Email, Gates to Trump~Jr.\ et~al.).} Gates believed that Trump~Jr.\ said the information was coming from a group in Kyrgyzstan and that he was introduced to the group by a friend.% 704 \footnote{Gates 1/30/18 302, at~7. Aras Agalarov is originally from Azerbaijan, and public reporting indicates that his company, the Crocus Group, has done substantial work in Kyrgyzstan. 
\textit{See} Neil MacFarquhar, \textit{A Russian Developer Helps Out the Kremlin on Occasion. Was He a Conduit to Trump?}, New York Times (July~16, 2017).} Gates recalled that the meeting was attended by Trump~Jr., Eric Trump, Paul Manafort, Hope Hicks, and, joining late, Ivanka Trump and Jared Kushner. According to Gates, Manafort warned the group that the meeting likely would not yield vital information and they should be careful.% 705 \footnote{Gates 3/1/18 302, at~3--4.} Hicks denied any knowledge of the June~9 meeting before 2017,% 706 \footnote{Hicks 12/7/17 302, at~6.} and Kushner did not recall if the planned June~9 meeting came up at all earlier that week.% 707 \footnote{Kushner 4/11/18 302, at~8.} Michael Cohen recalled being in Donald J. Trump's office on June~6 or~7 when Trump~Jr.\ told his father that a meeting to obtain adverse information about Clinton was going forward.% 708 \footnote{Cohen 8/7/18 302, at~4--6.} Cohen did not recall Trump~Jr.\ stating that the meeting was connected to Russia.% 709 \footnote{Cohen 8/7/18 302, at~4--5.} From the tenor of the conversation, Cohen believed that Trump~Jr.\ had previously discussed the meeting with his father, although Cohen was not involved in any such conversation.% 710 \footnote{Cohen 9/12/18 302, at~15--16.} In an interview with the Senate Judiciary Committee, however, Trump~Jr.\ stated that he did not inform his father about the emails or the upcoming meeting.% 711 \footnote{\textit{Interview of: Donald J. Trump, Jr., Senate Judiciary Committee}, 115th Cong.~28--29, 84, 94--95 (Sept.~7, 2017). The Senate Judiciary Committee interview was not under oath, but Trump~Jr.\ was advised that it is a violation of 18~U.S.C. \S~1001 to make materially false statements in a congressional investigation. \textit{Id.}~at~10--11.} Similarly, neither Manafort nor Kushner recalled anyone informing candidate Trump of the meeting, including Trump~Jr.% 712 \footnote{Manafort 9/11/18 302, at~3--4; Kushner 4/11/18 302, at~10.} President Trump has stated to this Office, in written answers to questions, that he has ``no recollection of learning at the time'' that his son, Manafort, or ``Kushner was considering participating in a meeting in June 2016 concerning potentially negative information about Hillary Clinton.''% 713 \footnote{Written Responses of Donald J. Trump (Nov.~20, 2018), at~8 (Response to Question~I, Parts~(a)--(c)). We considered whether one sequence of events suggested that candidate Trump had contemporaneous knowledge of the June~9 meeting. On June~7, 2016 Trump announced his intention to give ``a major speech'' ``probably Monday of next week''---which would have been June~13---about ``all of the things that have taken place with the Clintons.'' \textit{See, e.g.}, Phillip Bump, \textit{What we know about the Trump Tower meeting}, Washington Post (Aug.~7, 2018). Following the June~9 meeting, Trump changed the subject of his planned speech to national security. But the Office did not find evidence that the original idea for the speech was connected to the anticipated June~9 meeting or that the change of topic was attributable to the failure of that meeting to produce concrete evidence about Clinton. Other events, such as the Pulse nightclub shooting on June~12, could well have caused the change. The President's written answers to our questions state that the speech's focus was altered ``[i]n light of\thinspace'' the Pulse nightclub shooting. \textit{See} Written Responses, \textit{supra}. 
As for the original topic of the June~13 speech, Trump has said that ``he expected to give a speech referencing the publicly available, negative information about the Clintons,'' and that the draft of the speech prepared by Campaign staff ``was based on publicly available material, including, in particular, information from the book \textit{Clinton Cash} by Peter Schweizer.'' Written Responses, \textit{supra}. In a later June~22 speech, Trump did speak extensively about allegations that Clinton was corrupt, drawing from the \textit{Clinton Cash} book. \textit{See Full Transcript: Donald Trump NYC Speech on Stakes of the Election}, \UseVerb{politicocom} (June~22, 2016).} \paragraph{The Events of June~9, 2016} \subparagraph{Arrangements for the Meeting} Veselnitskaya was in New York on June~9, 2016, for appellate proceedings in the \textit{Prevezon} civil forfeiture litigation.% 714 \footnote{Testimony of Natalia Veselnitskaya Before the Senate Committee on Judiciary (Nov.~20, 2017) at~41, 42; Alison Frankel, \textit{How Did Russian Lawyer Veselnitskaya Get into U.S. for Trump Tower Meeting?}, Reuters (Nov.~6, 2017); Michael Kranish et~al., \textit{Russian Lawyer who Met with Trump~Jr.\ Has Long History Fighting Sanctions}, Washington Post (July~11, 2017); \textit{see} OSC-KAV\_00113 (6/8/16 Email, Goldstone to Kaveladze); RG000073 (6/8/16 Email, Goldstone to Trump~Jr.); Lieberman 12/13/17 302, at~5; \textit{see also Prevezon Holdings} Order (Oct.~17, 2016).} That day, Veselnitskaya called Rinat Akhmetshin, a Soviet-born U.S. lobbyist, \blackout{Grand Jury} and when she learned that he was in New York, invited him to lunch.% 715 \footnote{\blackout{Grand Jury}} Akhmetshin told the Office that he had worked on issues relating to the Magnitsky Act and had worked on the \textit{Prevezon} litigation.% 716 \footnote{Akhmetshin 11/14/17 302, at~4--6; \blackout{Grand Jury}} Kaveladze and Anatoli Samochornov, a Russian-born translator who had assisted Veselnitskaya with Magnitsky-related lobbying and the \textit{Prevezon} case, also attended the lunch.% 717 \footnote{Kaveladze 11/16/17 302, at~7; \blackout{Grand Jury}; Samochornov 7/13/17 302, at~2, 4; \blackout{Grand Jury}} \blackout{Grand Jury} Veselnitskaya said she was meeting \blackout{Grand Jury} and asked Akhmetshin what she should tell him.% 718 \footnote{\blackout{Grand Jury}} According to several participants in the lunch, Veselnitskaya showed Akhmetshin a document alleging financial misconduct by Bill Browder and the Ziff brothers (Americans with business in Russia), and those individuals' subsequent political donations to the DNC\null.% 719 \footnote{\blackout{Grand Jury}; Kaveladze 11/16/17 302, at~7; \blackout{Grand Jury}; Samochornov did not recall the planned subject matter of the Trump Tower meeting coming up at lunch. \blackout{Grand Jury} Samochornov 7/12/17 302, at~4. 
In her later Senate statement and interactions with the press, Veselnitskaya produced what she claimed were the talking points that she brought to the June~9 meeting.} \blackout{Grand Jury}% 720 \footnote{\blackout{Grand Jury}} The group then went to Trump Tower for the meeting.% 721 \footnote{\textit{E.g.}, Samochornov 7/12/17 302, at~4.} \subparagraph{Conduct of the Meeting} Trump~Jr., Manafort, and Kushner participated on the Trump side, while Kaveladze, Samochornov, Akhmetshin, and Goldstone attended with Veselnitskaya.% 722 \footnote{\textit{E.g.}, Samochornov 7/12/17 302, at~4.} The Office spoke to every participant except Veselnitskaya and Trump~Jr., the latter of whom declined to be voluntarily interviewed by the Office \blackout{Grand Jury} The meeting lasted approximately 20~minutes.% 723 \footnote{\textit{E.g.}, Samochornov 7/12/17 302, at~4; Goldstone 2/8/18 302, at~9.} \blackout{Grand Jury}% 724 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} Goldstone recalled that Trump~Jr.\ invited Veselnitskaya to begin but did not say anything about the subject of the meeting.% 725 \footnote{\blackout{Grand Jury}} Participants agreed that Veselnitskaya stated that the Ziff brothers had broken Russian laws and had donated their profits to the DNC or the Clinton Campaign.% 726 \footnote{\blackout{Grand Jury}} She asserted that the Ziff brothers had engaged in tax evasion and money laundering in both the United States and Russia,% 727 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 728 \footnote{\blackout{Grand Jury}} According to Akhmetshin, Trump~Jr.\ asked follow-up questions about how the alleged payments could be tied specifically to the Clinton Campaign, but Veselnitskaya indicated that she could not trace the money once it entered the United States.% 729 \footnote{\blackout{Grand Jury}; Akhmetshin 11/14/17 302, at~12.} Kaveladze similarly recalled that Trump~Jr.\ asked what they have on Clinton, and Kushner became aggravated and asked ``[w]hat are we doing here?''% 730 \footnote{Kaveladze 11/16/17 302, at~8; \blackout{Grand Jury}} Akhmetshin then spoke about U.S. sanctions imposed under the Magnitsky Act and Russia's response prohibiting U.S. adoption of Russian children.% 731 \footnote{Samochornov 7/13/17 302, at~3; \blackout{Grand Jury}} Several participants recalled that Trump~Jr.\ commented that Trump is a private citizen, and there was nothing they could do at that time.% 732 \footnote{\textit{E.g.}, Akhmetshin 11/14/17 302, at~12--13; \blackout{Grand Jury}} Trump~Jr.\ also said that they could revisit the issue if and when they were in government.% 733 \footnote{Akhmetshin 11/14/17 302, at~12--13; \blackout{Grand Jury} Samochornov 7/13/17 302, at~3. Trump~Jr.\ confirmed this in a statement he made in July 2017 after news of the June 2016 meeting broke. \textit{Interview of: Donald J. Trump, Jr., Senate Judiciary Committee U.S. 
Senate Washington DC}, 115th Cong.~57 (Sept.~7, 2017).} Notes that Manafort took on his phone reflect the general flow of the conversation, although not all of its details.% 734 \footnote{Manafort's notes state: \begin{quote} Bill Browder \\ Offshore -- Cyprus \\ 133m shares \\ Companies \\ Not invest -- loan \\ Value in Cyprus as inter \\ Illici \\ Active sponsors of RNC \\ Browder hired Joanna Glover \\ Tied into Cheney \\ Russian adoption by American families \end{quote} PJM-SJC-00000001--02 (Notes Produced to Senate Judiciary Committee).} At some point in the meeting, Kushner sent an iMessage to Manafort stating ``waste of time,'' followed immediately by two separate emails to assistants at Kushner Companies with requests that they call him to give him an excuse to leave.% 735 \footnote{NOSC00003992 (6/9/16 Text Message, Kushner to Manafort); Kushner 4/11/18 302, at~9; Vargas 4/4/18 302, at~7; NOSC00000044 (6/9/16 Email, Kushner to Vargas); NOSC00000045 (6/9/16 Email, Kushner to Cain).} Samochornov recalled that Kushner departed the meeting before it concluded; Veselnitskaya recalled the same when interviewed by the press in July 2017.% 736 \footnote{Samochornov 7/12/17 302, at~4; \blackout{Grand Jury}; Kushner 4/11/18 302, at~9--10; \textit{see also Interview of: Donald J. Trump, Jr., Senate Judiciary Committee}, 115th Cong.~48--49 (Sept.~7, 2017).} Veselnitskaya's press interviews and written statements to Congress differ materially from other accounts. In a July 2017 press interview, Veselnitskaya claimed that she has no connection to the Russian government and had not referred to any derogatory information concerning the Clinton Campaign when she met with Trump Campaign officials.% 737 \footnote{\textit{Russian Lawyer Veselnitskaya Says She Didn't Give Trump~Jr.\ Info on Clinton}, NBC News (July~11, 2017).} Veselnitskaya's November 2017 written submission to the Senate Judiciary Committee stated that the purpose of the June~9 meeting was not to connect with ``the Trump Campaign'' but rather to have ``a private meeting with Donald Trump~Jr.---a friend of my good acquaintance's son on the matter of assisting me or my colleagues in informing the Congress members as to the criminal nature of manipulation and interference with the legislative activities of the US Congress.''% 738 \footnote{\textit{Testimony of Natalia Veselnitskaya before the United States Senate Committee on the Judiciary}, 115th Cong.~10 (Nov.~20, 2017).} In other words, Veselnitskaya claimed her focus was on Congress and not the Campaign. No witness, however, recalled any reference to Congress during the meeting. Veselnitskaya also maintained that she ``attended the meeting as a lawyer of Denis Katsyv,'' the previously mentioned owner of Prevezon Holdings, but she did not ``introduce [her]self in this capacity.''% 739 \footnote{\textit{Testimony of Natalia Veselnitskaya before the United States Senate Committee on the Judiciary}, 115th Cong.~10 (Nov.~20, 2017).} In a July 2017 television interview, Trump~Jr.\ stated that while he had no way to gauge the reliability, credibility, or accuracy of what Goldstone had stated was the purpose of the meeting, if ``someone has information on our opponent \dots\ maybe this is something. 
I should hear them out.''% 740 \footnote{Sean Hannity, \textit{Transcript---Donald Trump~Jr}, Fox News (July~11, 2017).} Trump~Jr.\ further stated in September 2017 congressional testimony that he thought he should ``listen to what Rob and his colleagues had to say.''% 741 \footnote{\textit{Interview of: Donald J. Trump, Jr, Senate Judiciary Committee}, 115th Cong.~16 (Sept.~7, 2017).} Depending on what, if any, information was provided, Trump~Jr.\ stated he could then ``consult with counsel to make an informed decision as to whether to give it any further consideration.''% 742 \footnote{\textit{Interview of: Donald J. Trump, Jr, Senate Judiciary Committee}, 115th Cong.~16--17 (Sept.~7, 2017).} After the June~9 meeting, Goldstone apologized to Trump~Jr.% 743 \footnote{Kaveladze 11/16/17 302, at~8; \blackout{Grand Jury}; Goldstone 2/8/18 302, at~9; \blackout{Grand Jury}} According to Goldstone, he told Trump~Jr. \blackout{Grand Jury}% 744 \footnote{\blackout{Grand Jury}} and told Emin Agalarov in a phone call that the meeting was about adoption \blackout{Grand Jury}% 745 \footnote{\blackout{Grand Jury}; The week after the June~9 meeting, a cybersecurity firm and the DNC announced the Russian hack of the DNC\null. \textit{See} \hyperlink{subsubsection.1.3.2.2}{Volume~I, Section~III.B.2}, \textit{supra}. \blackout{Grand Jury} (and one text message shows) that, shortly after the DNC announcement, Goldstone made comments connecting the DNC hacking announcement to the June~9 meeting. \blackout{Grand Jury}; OSC-KAV\_00029 (6/14/16 Email, Goldstone to E.~Agalarov \& Kaveladze (10:09~a.m.)). The investigation did not identify evidence connecting the events of June~9 to the GRU's hack-and-dump operation. OSC-KAV\_00029--30 (6/14/16 Email, Goldstone to E.~Agalarov).} \blackout{Grand Jury}% 746 \footnote{\blackout{Grand Jury}} Aras Agalarov asked Kaveladze to report in after the meeting, but before Kaveladze could call, Aras Agalarov called him.% 747 \footnote{Kaveladze 11/16/17 302, at~8; Call Records of Ike Kaveladze \blackout{Grand Jury}} With Veselnitskaya next to him, Kaveladze reported that the meeting had gone well, but he later told Aras Agalarov that the meeting about the Magnitsky Act had been a waste of time because it was not with lawyers and they were ``preaching to the wrong crowd.''% 748 \footnote{Kaveladze 11/16/17 302, at~8; Call Records of Ike Kaveladze \blackout{Grand Jury}. On June~14, 2016 Kaveladze's teenage daughter emailed asking how the June~9 meeting had gone, and Kaveladze responded, ``meeting was boring. 
The Russians did not have any bad info on Hilary [sic].'' OSC-KAV\_00257 (6/14/16 Email, I.~Kaveladze to A.~Kaveladze; \blackout{Grand Jury}).} \paragraph{Post-June~9 Events} Veselnitskaya and Aras Agalarov made at least two unsuccessful attempts after the election to meet with Trump representatives to convey similar information about Browder and the Magnitsky Act.% 749 \footnote{Goldstone 2/8/18 302, at~11; \blackout{Grand Jury}} On November~23, 2016, Kaveladze emailed Goldstone about setting up another meeting ``with T people'' and sent a document bearing allegations similar to those conveyed on June~9.% 750 \footnote{OSC-KAV\_00138 (11/23/16 Email, Goldstone to Kaveladze); \blackout{Grand Jury}} Kaveladze followed up with Goldstone, stating that ``Mr.~A,'' which Goldstone understood to mean Aras Agalarov, called to ask about the meeting.% 751 \footnote{RG000196 (11/26--29/16 Text Messages, Goldstone \& Kaveladze); \blackout{Grand Jury}} Goldstone emailed the document to Rhona Graff, saying that ``Aras Agalarov has asked me to pass on this document in the hope it can be passed on to the appropriate team. If needed, a lawyer representing the case is in New York currently and happy to meet with any member of his transition team.''% 752 \footnote{Goldstone 2/8/18 302, at~11; \blackout{Grand Jury}; 57700118 (11/28/16 Email, Goldstone to Graff).} According to Goldstone, around January 2017, Kaveladze contacted him again to set up another meeting, but Goldstone did not make the request.% 753 \footnote{\blackout{Grand Jury}} The investigation did not identify evidence of the transition team following up. Participants in the June~9, 2016 meeting began receiving inquiries from attorneys representing the Trump Organization starting in approximately June 2017.% 754 \footnote{\blackout{Grand Jury}} On approximately June~2, 2017, Goldstone spoke with Alan Garten, general counsel of the Trump Organization, about his participation in the June~9 meeting.% 755 \footnote{\blackout{Grand Jury}} The same day, Goldstone emailed Veselnitskaya's name to Garten, identifying her as the ``woman who was the attorney who spoke at the meeting from Moscow.''% 756 \footnote{RG000256 (6/2/17 Email, Goldstone to Garten).} Later in June 2017, Goldstone participated in a lengthier call with Garten and Alan Futerfas, outside counsel for the Trump Organization (and, subsequently, personal counsel for Trump~Jr.).% 757 \footnote{\blackout{Grand Jury}} On June~27, 2017, Goldstone emailed Emin Agalarov with the subject ``Trump attorneys'' and stated that he was ``interviewed by attorneys'' about the June~9 meeting who were ``concerned because it links Don Jr.\ to officials from Russia---which he has always denied meeting.''% 758 \footnote{RG000092 (6/27/17 Email, Goldstone to E.~Agalarov).} Goldstone stressed that he ``did say at the time this was an awful idea and a terrible meeting.''% 759 \footnote{RG000092 (6/27/17 Email, Goldstone to E.~Agalarov). \blackout{Grand Jury}} Emin Agalarov sent a screenshot of the message to Kaveladze.% 760 \footnote{OSC-KAV\_01190 (6/27/17 Text Message, E.~Agalarov to Kaveladze).} The June~9 meeting became public in July 2017. 
In a July~9, 2017 text message to Emin Agalarov, Goldstone wrote ``I made sure I kept you and your father out of [t]his story,''% 761 \footnote{RG000286--87 (7/9/17 Text Messages, E.~Agalarov \& Goldstone); \blackout{Grand Jury}} and ``[i]f contacted I can do a dance and keep you out of it.''% 762 \footnote{\blackout{Investigative Technique}} Goldstone added, ``FBI now investigating,'' and ``I hope this favor was worth for your dad---it could blow up.''% 763 \footnote{\blackout{Investigative Technique} \blackout{Grand Jury}} On July~12, 2017 Emin Agalarov complained to Kaveladze that his father, Aras, ``never listens'' to him and that their relationship with ``mr T has been thrown down the drain.''% 764 \footnote{OSC-KAV\_01197 (7/11--12/17 Text Messages, Kaveladze \& E.~Agalarov); \blackout{Grand Jury}} The next month, Goldstone commented to Emin Agalarov about the volume of publicity the June~9 meeting had generated, stating that his ``reputation [was] basically destroyed by this dumb meeting which your father insisted on even though Ike and Me told him would be bad news and not to do.''% 765 \footnote{\blackout{Investigative Technique}} Goldstone added, ``I am not able to respond out of courtesy to you and your father. So am painted as some mysterious link to Putin.''% 766 \footnote{\blackout{Investigative Technique}} After public reporting on the June~9 meeting began, representatives from the Trump Organization again reached out to participants. On July~10, 2017, Futerfas sent Goldstone an email with a proposed statement for Goldstone to issue, which read: \begin{quote} As the person who arranged the meeting, I can definitively state that the statements I have read by Donald Trump~Jr.\ are 100\% accurate. The meeting was a complete waste of time and Don was never told Ms.~Veselnitskaya's name prior to the meeting. Ms.~Veselnitskaya mostly talked about the Magnitsky Act and Russian adoption laws and the meeting lasted 20 to 30~minutes at most. There was never any follow up and nothing ever came of the meeting.% 767 \footnote{7/10/17 Email, Goldstone to Futerfas \& Garten.} \end{quote} \blackout{Grand Jury} the statement drafted by Trump Organization representatives was \blackout{Grand Jury}% 768 \footnote{\blackout{Grand Jury}} He proposed a different statement, asserting that he had been asked ``by [his] client in Moscow---Emin Agalarov---to facilitate a meeting between a Russian attorney (Natalia Veselnitzkaya [sic]) and Donald Trump~Jr. The lawyer had apparently stated that she had some information regarding funding to the DNC from Russia, which she believed Mr.~Trump~Jr.\ might find interesting.''% 769 \footnote{7/10/17 Email, Goldstone to Futerfas \& Garten.} Goldstone never released either statement.% 770 \footnote{\blackout{Grand Jury}} On the Russian end, there were also communications about what participants should say about the June~9 meeting. 
Specifically, the organization that hired Samochornov---an anti-Magnitsky Act group controlled by Veselnitskaya and the owner of Prevezon---offered to pay \$90,000 of Samochornov's legal fees.% 771 \footnote{Samochornov 7/13/17 302, at~1; \blackout{Grand Jury}} At Veselnitskaya's request, the organization sent Samochornov a transcript of a Veselnitskaya press interview, and Samochornov understood that the organization would pay his legal fees only if he made statements consistent with Veselnitskaya's.% 772 \footnote{\blackout{Grand Jury} Samochornov 7/13/17 302, at~1.} Samochornov declined, telling the Office that he did not want to perjure himself.% 773 \footnote{Samochornov 7/13/17 302, at~1.} The individual who conveyed Veselnitskaya's request to Samochornov stated that he did not expressly condition payment on following Veselnitskaya's answers but, in hindsight, recognized that by sending the transcript, Samochornov could have interpreted the offer of assistance to be conditioned on his not contradicting Veselnitskaya's account.% 774 \footnote{\blackout{Grand Jury}} \hyperlink{subsection.2.2.7}{Volume~II, Section~II.G}, \textit{infra}, discusses interactions between President Trump, Trump~Jr., and others in June and July 2017 regarding the June~9 meeting. \subsubsection{Events at the Republican National Convention} Trump Campaign officials met with Russian Ambassador Sergey Kislyak during the week of the Republican National Convention. The evidence indicates that those interactions were brief and non-substantive. During platform committee meetings immediately before the Convention, J.D.~Gordon, a senior Campaign advisor on policy and national security, diluted a proposed amendment to the Republican Party platform expressing support for providing ``lethal'' assistance to Ukraine in response to Russian aggression. Gordon requested that platform committee personnel revise the proposed amendment to state that only ``appropriate'' assistance be provided to Ukraine. The original sponsor of the ``lethal'' assistance amendment stated that Gordon told her (the sponsor) that he was on the phone with candidate Trump in connection with his request to dilute the language. Gordon denied making that statement to the sponsor, although he acknowledged it was possible he mentioned having previously spoken to the candidate about the subject matter. The investigation did not establish that Gordon spoke to or was directed by the candidate to make that proposal. Gordon said that he sought the change because he believed the proposed language was inconsistent with Trump's position on Ukraine. \paragraph{Ambassador Kislyak's Encounters with Senator Sessions and J.D.~Gordon the Week of the RNC} In July 2016, Senator Sessions and Gordon spoke at the Global Partners in Diplomacy event, a conference co-sponsored by the State Department and the Heritage Foundation held in Cleveland, Ohio the same week as the Republican National Convention (RNC or ``Convention'').% 775 \footnote{Gordon 8/29/17 302, at~9; Sessions 1/17/18 302, at~22; Allan Smith, \textit{We Now Know More About why Jeff Sessions and a Russian Ambassador Crossed Paths at the Republican Convention}, Business Insider (Mar.~2, 2017).} Approximately 80 foreign ambassadors to the United States, including Kislyak, were invited to the conference.% 776 \footnote{Gordon 8/29/17 302, at~9; Laura DeMarco, \textit{Global Cleveland and Sen.~Bob Corker Welcome International Republican National Convention Guests}, Cleveland Plain Dealer (July~20, 2016). 
} On July~20, 2016, Gordon and Sessions delivered their speeches at the conference.% 777 \footnote{Gordon 8/29/17 302, at~9; Sessions 1/17/18 302, at~22.} In his speech, Gordon stated in pertinent part that the United States should have better relations with Russia.% 778 \footnote{Gordon 8/29/17 302, at~9.} During Sessions's speech, he took questions from the audience, one of which may have been asked by~Kislyak.% 779 \footnote{Sessions 1/17/18 302, at~22; Luff 1/30/18 302, at~3.} When the speeches concluded, several ambassadors lined up to greet the speakers.% 780 \footnote{Gordon 8/29/17 302, at~9; Luff 1/30/18 302, at~3.} Gordon shook hands with Kislyak and reiterated that he had meant what he said in the speech about improving U.S.--Russia relations.% 781 \footnote{Gordon 8/29/17 302, at~9.} Sessions separately spoke with between six and 12 ambassadors, including Kislyak.% 782 \footnote{Sessions 1/17/18 302, at~22; Luff 1/30/18 302, at~3; \textit{see also} \hyperlink{paragraph.1.4.1.4.2}{Volume~I, Section~IV.A.4.b}, \textit{supra} (explaining that Sessions and Kislyak may have met three months before this encounter during a reception held on April~26, 2016, at the Mayflower Hotel).} Although Sessions stated during interviews with the Office that he had no specific recollection of what he discussed with Kislyak, he believed that the two spoke for only a few minutes and that they would have exchanged pleasantries and said some things about U.S.--Russia relations.% 783 \footnote{Sessions 1/17/18 302, at~22.} Later that evening, Gordon attended a reception as part of the conference.% 784 \footnote{Gordon 8/29/17 302, at~9--10.} Gordon ran into Kislyak as the two prepared plates of food, and they decided to sit at the same table to eat.% 785 \footnote{Gordon 8/29/17 302, at~9--10.} They were joined at that table by the ambassadors from Azerbaijan and Kazakhstan, and by Trump Campaign advisor Carter Page.% 786 \footnote{Gordon 8/29/17 302, at~10; \textit{see also} \hyperlink{paragraph.1.4.1.3.4}{Volume~I, Section~IV.A.3.d}, \textit{supra} (explaining that Page acknowledged meeting Kislyak at this event). } As they ate, Gordon and Kislyak talked for what Gordon estimated to have been three to five minutes, during which Gordon again mentioned that he meant what he said in his speech about improving U.S.--Russia relations.% 787 \footnote{Gordon 8/29/17 302, at~9--10.} \paragraph{Change to Republican Party Platform} In preparation for the 2016 Convention, foreign policy advisors to the Trump Campaign, working with the Republican National Committee, reviewed the 2012 Convention's foreign policy platform to identify divergence between the earlier platform and candidate Trump's positions.% 788 \footnote{Gordon 8/29/17 302, at~9--10.} The Campaign team discussed toning down language from the 2012 platform that identified Russia as the country's number one threat, given the candidate's belief that there needed to be better U.S. 
relations with Russia.% 789 \footnote{Gordon 8/29/17 302, at~9--10.} The RNC Platform Committee sent the 2016 draft platform to the National Security and Defense Platform Subcommittee on July~10, 2016, the evening before its first meeting to propose amendments.% 790 \footnote{Gordon 8/29/17 302, at~10; Hoff 5/26/17 302, at~1--2.} Although only delegates could participate in formal discussions and vote on the platform, the Trump Campaign could request changes, and members of the Trump Campaign attended committee meetings.% 791 \footnote{Hoff 5/26/17 302, at~1; Gordon 9/7/17 302, at~10.} John Mashburn, the Campaign's policy director, helped oversee the Campaign's involvement in the platform committee meetings.% 792 \footnote{Mashburn 6/25/18 302, at~4; Manafort 9/20/18 302, at~7--8.} He told the Office that he directed Campaign staff at the Convention, including J.D.~Gordon, to take a hands-off approach and only to challenge platform planks if they directly contradicted Trump's wishes.% 793 \footnote{Mashburn 6/25/18 302, at~4; Gordon 8/29/17 302, at~10.} On July~11, 2016, delegate Diana Denman submitted a proposed platform amendment that included provision of armed support for Ukraine.% 794 \footnote{DENMAN 000001--02, DENMAN 000012, DENMAN 000021--22; Denman 12/4/17 302, at~1; Denman 6/7/17 302, at~2.} The amendment described Russia's ``ongoing military aggression'' in Ukraine and announced ``support'' for ``maintaining (and, if warranted, increasing) sanctions against Russia until Ukraine's sovereignty and territorial integrity are fully restored'' and for ``providing lethal defensive weapons to Ukraine's armed forces and greater coordination with NATO on defense planning.''% 795 \footnote{DENMAN 000001--02, DENMAN 000012, DENMAN 000021--22.} Gordon reviewed the proposed platform changes, including Denman's.% 796 \footnote{Gordon 8/29/17 302, at~10--11.} Gordon stated that he flagged this amendment because of Trump's stated position on Ukraine, which Gordon personally heard the candidate say at the March~31 foreign policy meeting---namely, that the Europeans should take primary responsibility for any assistance to Ukraine, that there should be improved U.S.--Russia relations, and that he did not want to start World War~III over that region.% 797 \footnote{Gordon 8/29/17 302, at~11; Gordon 9/7/17 302, at~11; Gordon 2/14/19 302, at~1--2, 5--6.} Gordon told the Office that Trump's statements on the campaign trail following the March meeting underscored those positions to the point where Gordon felt obliged to object to the proposed platform change and seek its dilution.% 798 \footnote{Gordon 2/14/19 302, at~5--6.} On July~11, 2016, at a meeting of the National Security and Defense Platform Subcommittee, Denman offered her amendment.% 799 \footnote{Denman 6/7/17 302, at~2; \textit{see} DENMAN 000014.} Gordon and another Campaign staffer, Matt Miller, approached a committee co-chair and asked him to table the amendment to permit further discussion.% 800 \footnote{Denman 6/7/17 302, at~2; Denman 12/4/17 302, at~2; Gordon 9/7/17 302, at~11--12; \textit{see} Hoff 5/26/17 302, at~2.} Gordon's concern with the amendment was the language about providing ``lethal defensive weapons to Ukraine.''% 801 \footnote{Denman 6/7/17 302, at~3.} Miller did not have any independent basis to believe that this language contradicted Trump's views and relied on Gordon's recollection of the candidate's views.% 802 \footnote{M. 
Miller 10/25/17 302, at~3.} According to Denman, she spoke with Gordon and Matt Miller, and they told her that they had to clear the language and that Gordon was ``talking to New York.''% 803 \footnote{Denman 12/4/17 302, at~2; Denman 6/7/17 302, at~2.} Denman told others that she was asked by the two Trump Campaign staffers to strike ``lethal defens[iv]e weapons'' from the proposal but that she refused.% 804 \footnote{Hoff 5/26/17 302, at~2.} Denman recalled Gordon saying that he was on the phone with candidate Trump, but she was skeptical whether that was true.% 805 \footnote{Denman 6/7/17 302, at~2--3, 3--4; Denman 12/4/17 302, at~2.} Gordon denied having told Denman that he was on the phone with Trump, although he acknowledged it was possible that he mentioned having previously spoken to the candidate about the subject matter.% 806 \footnote{Gordon 2/14/19 302, at~7.} Gordon's phone records reveal a call to Sessions's office in Washington that afternoon, but do not include calls directly to a number associated with Trump.% 807 \footnote{Call Records of J.D.~Gordon \blackout{Grand Jury}. Gordon stated to the Office that his calls with Sessions were unrelated to the platform change. Gordon 2/14/19 302, at~7.} And according to the President's written answers to the Office's questions, he does not recall being involved in the change in language of the platform amendment.% 808 \footnote{Written Responses of Donald J. Trump (Nov.~20, 2018), at~17 (Response to Question~IV, Part~(f)).} Gordon stated that he tried to reach Rick Dearborn, a senior foreign policy advisor, and Mashburn, the Campaign policy director. Gordon stated that he connected with both of them (he could not recall if by phone or in person) and apprised them of the language he took issue with in the proposed amendment. 
Gordon recalled no objection by either Dearborn or Mashburn and that all three Campaign advisors supported the alternative formulation (``appropriate assistance'').% 809 \footnote{Gordon 2/14/19 302, at~6--7; Gordon 9/7/17 302, at~11--12; \textit{see} Gordon 8/29/17 302, at~11.} Dearborn recalled Gordon warning them about the amendment, but not weighing in because Gordon was more familiar with the Campaign's foreign policy stance.% 810 \footnote{Dearborn 11/28/17 302, at~7--8.} Mashburn stated that Gordon reached him, and he told Gordon that Trump had not taken a stance on the issue and that the Campaign should not intervene.% 811 \footnote{Mashburn 6/25/18 302, at~4.} When the amendment came up again in the committee's proceedings, the subcommittee changed the amendment by striking the ``lethal defens[iv]e weapons'' language and replacing it with ``appropriate assistance.''% 812 \footnote{Hoff 5/26/17 302, at~2--3; \textit{see} Denman 12/4/17 302, at~2--3; Gordon 8/29/17 302, at~11.} Gordon stated that he and the subcommittee co-chair ultimately agreed to replace the language about armed assistance with ``appropriate assistance.''% 813 \footnote{Gordon 8/29/17 302, at~11; Gordon 9/7/17 302, at~12.} The subcommittee accordingly approved Denman's amendment but with the term ``appropriate assistance.''% 814 \footnote{Hoff 5/26/17 302, at~2--3.} Gordon stated that, to his recollection, this was the only change sought by the Campaign.% 815 \footnote{Gordon 2/14/19 302, at~6.} Sam Clovis, the Campaign's national co-chair and chief policy advisor, stated he was surprised by the change and did not believe it was in line with Trump's stance.% 816 \footnote{Clovis 10/3/17 302, at~10--11.} Mashburn stated that when he saw the word ``appropriate assistance,'' he believed that Gordon had violated Mashburn's directive not to intervene.% 817 \footnote{Mashburn 6/25/18 302, at~4.} \subsubsection{Post-Convention Contacts with Kislyak} Ambassador Kislyak continued his efforts to interact with Campaign officials with responsibility for the foreign-policy portfolio---among them Sessions and Gordon---in the weeks after the Convention. The Office did not identify evidence in those interactions of coordination between the Campaign and the Russian government. \paragraph{Ambassador Kislyak Invites J.D.~Gordon to Breakfast at the Ambassador's Residence} On August~3, 2016, an official from the Embassy of the Russian Federation in the United States wrote to Gordon ``[o]n behalf of\thinspace'' Ambassador Kislyak inviting Gordon ``to have breakfast/tea with the Ambassador at his residence'' in Washington, D.C. the following week.% 818 \footnote{DJTFP00004828 (8/3/16 Email, Pchelyakov [\UseVerb{russianembassyorg}] to Gordon).} Gordon responded five days later to decline the invitation. He wrote, ``[t]hese days are not optimal for us, as we are busily knocking down a constant stream of false media stories while also preparing for the first debate with HRC\null. Hope to take a raincheck for another time when things quiet down a bit. Please pass along my regards to the Ambassador.''% 819 \footnote{DJTFP00004953 (8/8/16 Email, Gordon to \UseVerb{russianembassyorg}).} The investigation did not identify evidence that Gordon made any other arrangements to meet (or met) with Kislyak after this email. 
\paragraph{Senator Sessions's September 2016 Meeting with Ambassador Kislyak} Also in August 2016, a representative of the Russian Embassy contacted Sessions's Senate office about setting up a meeting with~Kislyak.% 820 \footnote{Luff 1/30/18 302, at~5.} At the time, Sessions was a member of the Senate Foreign Relations Committee and would meet with foreign officials in that capacity.% 821 \footnote{Sessions 1/17/18 302, at~23--24; Luff 1/30/18 302, at~5.} But Sessions's staff reported, and Sessions himself acknowledged, that meeting requests from ambassadors increased substantially in 2016, as Sessions assumed a prominent role in the Trump Campaign and his name was mentioned for potential cabinet-level positions in a future Trump Administration.% 822 \footnote{Sessions 1/17/18 302, at~23--24; Luff 1/30/18 302, at~5; Landrum 2/27/18 302, at~3--5.} On September~8, 2016, Sessions met with Kislyak in his Senate office.% 823 \footnote{Sessions 1/17/18 302, at~23.} Sessions said that he believed he was doing the Campaign a service by meeting with foreign ambassadors, including Kislyak.% 824 \footnote{Sessions 1/17/18 302, at~23.} He was accompanied in the meeting by at least two of his Senate staff: Sandra Luff, his legislative director; and Pete Landrum, who handled military affairs.% 825 \footnote{Sessions 1/17/18 302, at~23; Luff 1/30/18 302, at~5--6; Landrum 2/27/18 302, at~4--5 (stating he could not remember if election was discussed).} The meeting lasted less than 30~minutes.% 826 \footnote{Luff 1/30/18 302, at~6; Landrum 2/27/18 302, at~5.} Sessions voiced concerns about Russia's sale of a missile-defense system to Iran, Russian planes buzzing U.S. military assets in the Middle East, and Russian aggression in emerging democracies such as Ukraine and Moldova.% 827 \footnote{Luff 1/30/18 302, at~6; Landrum 2/27/18 302, at~4--5.} Kislyak offered explanations on these issues and complained about NATO land forces in former Soviet-bloc countries that border Russia.% 828 \footnote{Luff 1/30/18 302, at~6; Landrum 2/27/18 302, at~4--5.} Landrum recalled that Kislyak referred to the presidential campaign as ``an interesting campaign,''% 829 \footnote{Landrum 2/27/18 302, at~5.} and Sessions also recalled Kislyak saying that the Russian government was receptive to the overtures Trump had laid out during his campaign.% 830 \footnote{Sessions 1/17/18 302, at~23. Sessions also noted that ambassadors came to him for information about Trump and hoped he would pass along information to Trump. Sessions 1/17/18 302, at~23--24.} None of the attendees, though, remembered any discussion of Russian election interference or any request that Sessions convey information from the Russian government to the Trump Campaign.% 831 \footnote{Sessions 1/17/18 302, at~23; Luff 1/30/18 302, at~6; Landrum 2/27/18 302, at~5.} During the meeting, Kislyak invited Sessions to further discuss U.S.--Russia relations with him over a meal at the ambassador's residence.% 832 \footnote{Luff 1/30/18 302, at~5; Landrum 2/27/18 302, at~4.} Sessions was non-committal when Kislyak extended the invitation. 
After the meeting ended, Luff advised Sessions against accepting the one-on-one meeting with Kislyak, whom she assessed to be an ``old school KGB guy.''% 833 \footnote{Luff 1/30/18 302, at~5.} Neither Luff nor Landrum recalled that Sessions followed up on the invitation or made any further effort to dine or meet with Kislyak before the November 2016 election.% 834 \footnote{Luff 1/30/18 302, at~6; Landrum 2/27/18 302, at~4--5.} Sessions and Landrum recalled that, after the election, some efforts were made to arrange a meeting between Sessions and~Kislyak.% 835 \footnote{Sessions 1/17/18 302, at~23.} According to Sessions, the request came through CNI and would have involved a meeting between Sessions and Kislyak, two other ambassadors, and the Governor of Alabama.% 836 \footnote{Sessions 1/17/18 302, at~23.} Sessions, however, was in New York on the day of the anticipated meeting and was unable to attend.% 837 \footnote{Sessions 1/17/18 302, at~23.} The investigation did not identify evidence that the two men met at any point after their September~8 meeting. \subsubsection{Paul Manafort} Paul Manafort served on the Trump Campaign, including a period as campaign chairman, from March to August 2016.% 838 \footnote{On August~21, 2018, Manafort was convicted in the Eastern District of Virginia on eight tax, Foreign Bank Account Registration (FBAR), and bank fraud charges. On September~14, 2018, Manafort pleaded guilty in the District of Columbia to (1)~conspiracy to defraud the United States and conspiracy to commit offenses against the United States (money laundering, tax fraud, FBAR, Foreign Agents Registration Act (FARA), and FARA false statements), and (2)~conspiracy to obstruct justice (witness tampering). Manafort also admitted criminal conduct with which he had been charged in the Eastern District of Virginia, but as to which the jury hung. The conduct at issue in both cases involved Manafort's work in Ukraine and the money he earned for that work, as well as crimes after the Ukraine work ended. On March~7, 2019, Manafort was sentenced to 47 months of imprisonment in the Virginia prosecution. On March~13, the district court in D.C. sentenced Manafort to a total term of 73 months: 60 months on the Count 1 conspiracy (with 30 of those months to run concurrent to the Virginia sentence), and 13 months on the Count 2 conspiracy, to be served consecutive to the other two sentences. The two sentences resulted in a total term of 90 months. } Manafort had connections to Russia through his prior work for Russian oligarch Oleg Deripaska and later through his work for a pro-Russian regime in Ukraine. Manafort stayed in touch with these contacts during the campaign period through Konstantin Kilimnik, a longtime Manafort employee who previously ran Manafort's office in Kiev and who the FBI assesses to have ties to Russian intelligence. Manafort instructed Rick Gates, his deputy on the Campaign and a longtime employee,% 839 \footnote{As noted in \hyperlink{paragraph.1.3.4.1.2}{Volume~I, Section~III.D.1.b}, \textit{supra}, Gates pleaded guilty to two criminal charges in the District of Columbia, including making a false statement to the FBI, pursuant to a plea agreement. He has provided information and in-court testimony that the Office has deemed to be reliable. \textit{See also} Transcript at~16, \textit{United States~v.\ Paul J. Manafort, Jr.}, 1:17-cr-201 (D.D.C. 
Feb.~13, 2019), Doc.~514 (``\textit{Manafort} 2/13/19 Transcript'') (court's explanation of reasons to credit Gates's statements in one instance).} to provide Kilimnik with updates on the Trump Campaign---including internal polling data, although Manafort claims not to recall that specific instruction. Manafort expected Kilimnik to share that information with others in Ukraine and with Deripaska. Gates periodically sent such polling data to Kilimnik during the campaign. Manafort also twice met Kilimnik in the United States during the campaign period and conveyed campaign information. The second meeting took place on August~2, 2016, in New York City. Kilimnik requested the meeting to deliver in person a message from former Ukrainian President Viktor Yanukovych, who was then living in Russia. The message was about a peace plan for Ukraine that Manafort has since acknowledged was a ``backdoor'' means for Russia to control eastern Ukraine. Several months later, after the presidential election, Kilimnik wrote an email to Manafort expressing the view---which Manafort later said he shared---that the plan would require U.S. support to succeed: ``all that is required to start the process is a very minor `wink' (or slight push) from [Donald Trump].''% 840 \footnote{The email was drafted in Kilimnik's DMP email account (in English) \blackout{Investigative Technique}} The email also stated that if Manafort were designated as the U.S. representative and started the process, Yanukovych would ensure his reception in Russia ``at the very top level.'' Manafort communicated with Kilimnik about peace plans for Ukraine on at least four occasions after their first discussion of the topic on August~2: December 2016 (the Kilimnik email described above); January 2017; February 2017; and again in the spring of 2018. The Office reviewed numerous Manafort email and text communications, and asked President Trump about the plan in written questions.% 841 \footnote{According to the President's written answers, he does not remember Manafort communicating to him any particular positions that Ukraine or Russia would want the United States to support. Written Responses of Donald J. Trump (Nov.~20, 2018), at~16--17 (Response to Question~IV, Part~(d)).} The investigation did not uncover evidence of Manafort's passing along information about Ukrainian peace plans to the candidate or anyone else in the Campaign or the Administration. The Office was not, however, able to gain access to all of Manafort's electronic communications (in some instances, messages were sent using encryption applications). And while Manafort denied that he spoke to members of the Trump Campaign or the new Administration about the peace plan, he lied to the Office and the grand jury about the peace plan and his meetings with Kilimnik, and his unreliability on this subject was among the reasons that the district judge found that he breached his cooperation agreement.% 842 \footnote{Manafort made several false statements during debriefings. Based on that conduct, the Office determined that Manafort had breached his plea agreement and could not be a cooperating witness. The judge presiding in Manafort's D.C. criminal case found by a preponderance of the evidence that Manafort intentionally made multiple false statements to the FBI, the Office, and the grand jury concerning his interactions and communications with Kilimnik (and concerning two other issues). 
Although the report refers at times to Manafort's statements, it does so only when those statements are sufficiently corroborated to be trustworthy, to identify issues on which Manafort's untruthful responses may themselves be of evidentiary value, or to provide Manafort's explanations for certain events, even when we were unable to determine whether that explanation was credible.} The Office could not reliably determine Manafort's purpose in sharing internal polling data with Kilimnik during the campaign period. Manafort \blackout{Grand Jury} did not see a downside to sharing campaign information, and told Gates that his role in the Campaign would be ``good for business'' and potentially a way to be made whole for work he previously completed in the Ukraine. As to Deripaska, Manafort claimed that by sharing campaign information with him, Deripaska might see value in their relationship and resolve a ``disagreement''---a reference to one or more outstanding lawsuits. Because of questions about Manafort's credibility and our limited ability to gather evidence on what happened to the polling data after it was sent to Kilimnik, the Office could not assess what Kilimnik (or others he may have given it to) did with it. The Office did not identify evidence of a connection between Manafort's sharing polling data and Russia's interference in the election, which had already been reported by U.S. media outlets at the time of the August~2 meeting. The investigation did not establish that Manafort otherwise coordinated with the Russian government on its election-interference efforts. \paragraph{Paul Manafort's Ties to Russia and Ukraine} Manafort's Russian contacts during the campaign and transition periods stem from his consulting work for Deripaska from approximately 2005 to 2009 and his separate political consulting work in Ukraine from 2005 to 2015, including through his company DMP International LLC (DMI). Kilimnik worked for Manafort in Kiev during this entire period and continued to communicate with Manafort through at least June 2018. Kilimnik, who speaks and writes Ukrainian and Russian, facilitated many of Manafort's communications with Deripaska and Ukrainian oligarchs. \subparagraph{Oleg Deripaska Consulting Work} In approximately 2005, Manafort began working for Deripaska, a Russian oligarch who has a global empire involving aluminum and power companies and who is closely aligned with Vladimir Putin.% 843 \footnote{Dinchuk et~al., \textit{Russian Tycoon Deripaska in Putin Delegation to China}, Reuters (June~8, 2018).} A memorandum describing work that Manafort performed for Deripaska in 2005 regarding the post-Soviet republics referenced the need to brief the Kremlin and the benefits that the work could confer on ``the Putin Government.''% 844 \footnote{6/23/05 Memo, Manafort \& Davis to Deripaska \& Rothchild.} Gates described the work Manafort did for Deripaska as ``political risk insurance,'' and explained that Deripaska used Manafort to install friendly political officials in countries where Deripaska had business interests.% 845 \footnote{Gates 2/2/18 302, at~7.} Manafort's company earned tens of millions of dollars from its work for Deripaska and was loaned millions of dollars by Deripaska as well.% 846 \footnote{Manafort 9/20/18 302, at~2--5; Manafort Income by Year, 2005--2015; Manafort Loans from Wire Transfers, 2005--2015.} In 2007, Deripaska invested through another entity in Pericles Emerging Market Partners L.P. 
(``Pericles''), an investment fund created by Manafort and former Manafort business partner Richard Davis. The Pericles fund was established to pursue investments in Eastern Europe.% 847 \footnote{Gates 3/12/18 302, at~5.} Deripaska was the sole investor.% 848 \footnote{Manafort 12/16/15 Dep., at~157:8--11.} Gates stated in interviews with the Office that the venture led to a deterioration of the relationship between Manafort and Deripaska.% 849 \footnote{Gates 2/2/18 302, at~9.} In particular, when the fund failed, litigation between Manafort and Deripaska ensued. Gates stated that, by 2009, Manafort's business relationship with Deripaska had ``dried up.''% 850 \footnote{Gates 2/2/18 302, at~6.} According to Gates, various interactions with Deripaska and his intermediaries over the past few years have involved trying to resolve the legal dispute.% 851 \footnote{Gates 2/2/18 302, at~9--10.} As described below, in 2016, Manafort, Gates, Kilimnik, and others engaged in efforts to revive the Deripaska relationship and resolve the litigation. \subparagraph{Political Consulting Work} Through Deripaska, Manafort was introduced to Rinat Akhmetov, a Ukrainian oligarch who hired Manafort as a political consultant.% 852 \footnote{Manafort 7/30/14 302, at~1; Manafort 9/20/18 302, at~2.} In 2005, Akhmetov hired Manafort to engage in political work supporting the Party of Regions,% 853 \footnote{Manafort 9/11/18 302, at~5--6.} a political party in Ukraine that was generally understood to align with Russia. Manafort assisted the Party of Regions in regaining power, and its candidate, Viktor Yanukovych, won the presidency in 2010. Manafort became a close and trusted political advisor to Yanukovych during his time as President of Ukraine. Yanukovych served in that role until 2014, when he fled to Russia amidst popular protests.% 854 \footnote{Gates 3/16/18 302, at~1; Davis 2/8/18 302, at~9; Devine 7/6/18 302, at~2--3.} \subparagraph{Konstantin Kilimnik} Kilimnik is a Russian national who has lived in both Russia and Ukraine and was a longtime Manafort employee.% 855 \footnote{Patten 5/22/18 302, at~5; Gates 1/29/18 302, at~18--19; 10/28/97 Kilimnik Visa Record, U.S. Department of State.} Kilimnik had direct and close access to Yanukovych and his senior entourage, and he facilitated communications between Manafort and his clients, including Yanukovych and multiple Ukrainian oligarchs.% 856 \footnote{Gates 1/29/18 302, at~18--19; Patten 5/22/18 302, at~8; Gates 1/31/18 302, at~4--5; Gates 1/30/18 302, at~2; Gates 2/2/18 302, at~11.} Kilimnik also maintained a relationship with Deripaska's deputy, Viktor Boyarkin,% 857 \footnote{Gates 1/29/18 302, at~18; Patten 5/22/18 302, at~8.} a Russian national who previously served in the defense attaché office of the Russian Embassy to the United States.% 858 \footnote{Boyarkin Visa Record, U.S. Department of State.} Manafort told the Office that he did not believe Kilimnik was working as a Russian ``spy.''% 859 \footnote{Manafort 9/11/18 302, at~5.} The FBI, however, assesses that Kilimnik has ties to Russian intelligence.% 860 \footnote{The Office has noted Kilimnik's assessed ties to Russian intelligence in public court filings. \textit{E.g.}, Gov't Opp.\ to Mot.\ to Modify, \textit{United States~v.\ Paul J. Manafort, Jr.}, 1:17-cr-201 (D.D.C. Dec.~4, 2017), Doc.~73, at~2 (``\textit{Manafort} (D.D.C.) 
Gov't Opp.\ to Mot.\ to Modify'').} Several pieces of the Office's evidence---including witness interviews and emails obtained through court-authorized search warrants---support that assessment: \begin{itemize} \item Kilimnik was born on April~27, 1970, in Dnipropetrovsk Oblast, then of the Soviet Union, and attended the Military Institute of the Ministry of Defense from 1987 until 1992.% 861 \footnote{12/17/16 Kilimnik Visa Record, U.S. Department of State.} Sam Patten, a business partner to Kilimnik,% 862 \footnote{In August 2018, Patten pleaded guilty pursuant to a plea agreement to violating the Foreign Agents Registration Act, and admitted in his Statement of Offense that he also misled and withheld documents from the Senate Select Committee on Intelligence in the course of its investigation of Russian election interference. Plea Agreement, \textit{United States~v.\ W.~Samuel Patten}, 1:18-cr-260 (D.D.C. Aug.~31, 2018), Doc.~6; Statement of Offense, \textit{United States~v.\ W.~Samuel Patten}, 1:18-cr-260 (D.D.C. Aug.~31, 2018), Doc.~7.} stated that Kilimnik told him that he was a translator in the Russian army for seven years and that he later worked in the Russian armament industry selling arms and military equipment.% 863 \footnote{Patten 5/22/18 302, at~5--6.} \item U.S. government visa records reveal that Kilimnik obtained a visa to travel to the United States with a Russian diplomatic passport in 1997.% 864 \footnote{10/28/97 Kilimnik Visa Record, U.S. Department of State.} \item Kilimnik worked for the International Republican Institute's (IRI) Moscow office, where he did translation work and general office management from 1998 to 2005.% 865 \footnote{Nix 3/30/18 302, at~1--2.} While another official recalled the incident differently,% 866 \footnote{Nix 3/30/18 302, at~2.} one former associate of Kilimnik's at IRI told the FBI that Kilimnik was fired from his post because his links to Russian intelligence were too strong. The same individual stated that it was well known at IRI that Kilimnik had links to the Russian government.% 867 \footnote{Lenzi 1/30/18 302, at~2.} \item Jonathan Hawker, a British national who was a public relations consultant at FTI Consulting, worked with DMI on a public relations campaign for Yanukovych. After Hawker's work for DMI ended, Kilimnik contacted Hawker about working for a Russian government entity on a public-relations project that would promote, in Western and Ukrainian media, Russia's position on its 2014 invasion of Crimea.% 868 \footnote{Hawker 1/9/18 302, at~13; 3/18/14 Email, Hawker \& Tulukbaev.} \item Gates suspected that Kilimnik was a ``spy,'' a view that he shared with Manafort, Hawker, and Alexander van der Zwaan,% 869 \footnote{van der Zwaan pleaded guilty in the U.S. District Court for the District of Columbia to making false statements to the Special Counsel's Office. Plea Agreement, \textit{United States~v.\ Alex van der Zwaan}, 1:18-cr-31 (D.D.C. Feb.~20, 2018), Doc.~8.} an attorney who had worked with DMI on a report for the Ukrainian Ministry of Foreign Affairs.% 870 \footnote{Hawker 6/9/18 302, at~4; van der Zwaan 11/3/17 302, at~22. Manafort said in an interview that Gates had joked with Kilimnik about Kilimnik's going to meet with his KGB handler. Manafort 10/16/18 302, at~7.} \end{itemize} \blackout{Investigative Technique} \paragraph{Contacts during Paul Manafort's Time with the Trump Campaign} \subparagraph{Paul Manafort Joins the Campaign} Manafort served on the Trump Campaign from late March to August~19, 2016. 
On March~29, 2016, the Campaign announced that Manafort would serve as the Campaign's ``Convention Manager.''% 871 \footnote{\textit{Press Release---Donald J. Trump Announces Campaign Convention Manager Paul J. Manafort}, The American Presidency Project -- U.C. Santa Barbara (Mar.~29, 2016).} On May~19, 2016, Manafort was promoted to campaign chairman and chief strategist, and Gates, who had been assisting Manafort on the Campaign, was appointed deputy campaign chairman.% 872 \footnote{Gates 1/29/18 302, at~8; Meghan Keneally, \textit{Timeline of Manafort's role in the Trump Campaign}, ABC News (Oct.~20, 2017).} Thomas Barrack and Roger Stone both recommended Manafort to candidate Trump.% 873 \footnote{Gates 1/29/18 302, at~7--8; Manafort 9/11/18 302, at~1--2; Barrack 12/12/17 302, at~3.} In early 2016, at Manafort's request, Barrack suggested to Trump that Manafort join the Campaign to manage the Republican Convention.% 874 \footnote{Barrack 12/12/17 302, at~3; Gates 1/29/18 302, at~7--8.} Stone had worked with Manafort from approximately 1980 until the mid-1990s through various consulting and lobbying firms. Manafort met Trump in 1982 when Trump hired the Black, Manafort, Stone and Kelly lobbying firm.% 875 \footnote{Manafort 10/16/18 302, at~6.} Over the years, Manafort saw Trump at political and social events in New York City and at Stone's wedding, and Trump requested VIP status at the 1988 and~1996 Republican conventions worked by Manafort.% 876 \footnote{Manafort 10/16/18 302, at~6.} According to Gates, in March 2016, Manafort traveled to Trump's Mar-a-Lago estate in Florida to meet with Trump. Trump hired him at that time.% 877 \footnote{Gates 2/2/18 302, at~10.} Manafort agreed to work on the Campaign without pay. Manafort had no meaningful income at this point in time, but resuscitating his domestic political campaign career could be financially beneficial in the future. Gates reported that Manafort intended, if Trump won the Presidency, to remain outside the Administration and monetize his relationship with the Administration.% 878 \footnote{Gates 1/30/18 302, at~4.} \subparagraph{Paul Manafort's Campaign-Period Contacts} Immediately upon joining the Campaign, Manafort directed Gates to prepare for his review separate memoranda addressed to Deripaska, Akhmetov, Serhiy Lyovochkin, and Boris Kolesnikov,% 879 \footnote{Gates 2/2/18 302, at~11.} the last three being Ukrainian oligarchs who were senior Opposition Bloc officials.% 880 \footnote{\textit{See} Sharon LaFraniere, \textit{Manafort's Trial Isn't About Russia, but It Will Be in the Air}, New York Times (July~30, 2018); Tierney Sneed, \textit{Prosecutors Believe Manafort Made \$60 Million Consulting in Ukraine}, Talking Points Memo (July~30, 2018); Mykola Vorobiov, \textit{How Pro-Russian Forces Will Take Revenge on Ukraine}, Atlantic Council (Sept.~23, 2018); Sergii Leshchenko, \textit{Ukraine's Oligarchs Are Still Calling the Shots}, Foreign Policy (Aug.~14, 2014); Interfax-Ukraine, \textit{Kolesnikov: Inevitability of Punishment Needed for Real Fight Against Smuggling in Ukraine}, Kyiv Post (June~23, 2018); Igor Kossov, \textit{Kyiv Hotel Industry Makes Room for New Entrants}, Kyiv Post (Mar.~7, 2019); Markian Kuzmowycz, \textit{How the Kremlin Can Win Ukraine's Elections}, Atlantic Council (Nov.~19, 2018). The Opposition Bloc is a Ukrainian political party that largely reconstituted the Party of Regions. 
} The memoranda described Manafort's appointment to the Trump Campaign and indicated his willingness to consult on Ukrainian politics in the future. On March~30, 2016, Gates emailed the memoranda and a press release announcing Manafort's appointment to Kilimnik for translation and dissemination.% 881 \footnote{3/30/16 Email, Gates to Kilimnik.} Manafort later followed up with Kilimnik to ensure his messages had been delivered, emailing on April~11, 2016 to ask whether Kilimnik had shown ``our friends'' the media coverage of his new role.% 882 \footnote{4/11/16 Email, Manafort \& Kilimnik.} Kilimnik replied, ``Absolutely. Every article.'' Manafort further asked: ``How do we use to get whole. Has Ovd [Oleg Vladimirovich Deripaska] operation seen?'' Kilimnik wrote back the same day, ``Yes, I have been sending everything to Victor [Boyarkin, Deripaska's deputy], who has been forwarding the coverage directly to OVD.''% 883 \footnote{4/11/16 Email, Manafort \& Kilimnik.} Gates reported that Manafort said that being hired on the Campaign would be ``good for business'' and increase the likelihood that Manafort would be paid the approximately \$2 million he was owed for previous political consulting work in Ukraine.% 884 \footnote{Gates 2/2/18 302, at~10.} Gates also explained to the Office that Manafort thought his role on the Campaign could help ``confirm'' that Deripaska had dropped the Pericles lawsuit, and that Gates believed Manafort sent polling data to Deripaska (as discussed further below) so that Deripaska would not move forward with his lawsuit against Manafort.% 885 \footnote{Gates 2/2/18 302, at~11; Gates 9/27/18 302 (serial 740), at~2.} Gates further stated that Deripaska wanted a visa to the United States, that Deripaska could believe that having Manafort in a position inside the Campaign or Administration might be helpful to Deripaska, and that Manafort's relationship with Trump could help Deripaska in other ways as well.% 886 \footnote{Gates 2/2/18 302, at~12.} Gates stated, however, that Manafort never told him anything specific about what, if anything, Manafort might be offering Deripaska.% 887 \footnote{Gates 2/2/18 302, at~12.} Gates also reported that Manafort instructed him in April 2016 or early May~2016 to send Kilimnik Campaign internal polling data and other updates so that Kilimnik, in turn, could share it with Ukrainian oligarchs.% 888 \footnote{Gates 1/31/18 302, at~17; Gates 9/27/18 302 (serial 740), at~2. In a later interview with the Office, Gates stated that Manafort directed him to send polling data to Kilimnik after a May~7, 2016 meeting between Manafort and Kilimnik in New York, discussed in \hyperlink{subparagraph.1.4.1.8.2.3}{Volume~I, Section~IV.A.8.b.iii}, \textit{infra}. 
Gates 11/7/18 302, at~3.} Gates understood that the information would also be shared with Deripaska, \blackout{Grand Jury}.% 889 \footnote{Gates 9/27/18 302, Part~II, at~2; \blackout{Grand Jury}} Gates reported to the Office that he did not know why Manafort wanted him to send polling information, but Gates thought it was a way to showcase Manafort's work, and Manafort wanted to open doors to jobs after the Trump Campaign ended.% 890 \footnote{Gates 2/12/18 302, at~10; Gates 1/31/18 302, at~17.} Gates said that Manafort's instruction included sending internal polling data prepared for the Trump Campaign by pollster Tony Fabrizio.% 891 \footnote{Gates 9/27/18 302 (serial 740), at~2; Gates 2/7/18 302, at~15.} Fabrizio had worked with Manafort for years and was brought into the Campaign by Manafort. Gates stated that, in accordance with Manafort's instruction, he periodically sent Kilimnik polling data via WhatsApp; Gates then deleted the communications on a daily basis.% 892 \footnote{Gates 1/31/18 302, at~17.} Gates further told the Office that, after Manafort left the Campaign in mid-August, Gates sent Kilimnik polling data less frequently and that the data he sent was more publicly available information and less internal data.% 893 \footnote{Gates 2/12/18 302, at~11--12. According to Gates, his access to internal polling data was more limited because Fabrizio was himself distanced from the Campaign at that point.} Gates's account about polling data is consistent \blackout{Grand Jury}% 894 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} with multiple emails that Kilimnik sent to U.S. associates and press contacts between late July and mid-August of~2016. Those emails referenced ``internal polling,'' described the status of the Trump Campaign and Manafort's role in it, and assessed Trump's prospects for victory.% 895 \footnote{8/18/16 Email, Kilimnik to Dirkse; 8/18/16 Email, Kilimnik to Schultz; 8/18/16 Email, Kilimnik to Marson; 7/27/16 Email, Kilimnik to Ash; 8/18/16 Email, Kilimnik to Ash; 8/18/16 Email, Kilimnik to Jackson; 8/18/16 Email, Kilimnik to Mendoza-Wilson; 8/19/16 Email, Kilimnik to Patten.} Manafort did not acknowledge instructing Gates to send Kilimnik internal data, \blackout{Grand Jury}% 896 \footnote{\blackout{Grand Jury}} The Office also obtained contemporaneous emails that shed light on the purpose of the communications with Deripaska and that are consistent with Gates's account. For example, in response to a July~7, 2016, email from a Ukrainian reporter about Manafort's failed Deripaska-backed investment, Manafort asked Kilimnik whether there had been any movement on ``this issue with our friend.''% 897 \footnote{7/7/16 Email, Manafort to Kilimnik.} Gates stated that ``our friend'' likely referred to Deripaska,% 898 \footnote{Gates 2/2/18 302, at~13.} and Manafort told the Office that the ``issue'' (and ``our biggest interest,'' as stated below) was a solution to the Deripaska--Pericles issue.% 899 \footnote{Manafort 9/11/18 302, at~6.} Kilimnik replied: \begin{quote} I am carefully optimistic on the question of our biggest interest. Our friend [Boyarkin] said there is lately significantly more attention to the campaign in his boss' [Deripaska's] mind, and he will be most likely looking for ways to reach out to you pretty soon, understanding all the time sensitivity. 
I am more than sure that it will be resolved and we will get back to the original relationship with V.'s boss [Deripaska].% 900 \footnote{7/8/16 Email, Kilimnik to Manafort.} \end{quote} Eight minutes later, Manafort replied that Kilimnik should tell Boyarkin's ``boss,'' a reference to Deripaska, ``that if he needs private briefings we can accommodate.''% 901 \footnote{7/8/16 Email, Kilimnik to Manafort; Gates 2/2/18 302, at~13.} Manafort has alleged to the Office that he was willing to brief Deripaska only on public campaign matters and gave an example: why Trump selected Mike Pence as the Vice-Presidential running mate.% 902 \footnote{Manafort 9/11/18 302, at~6.} Manafort said he never gave Deripaska a briefing.% 903 \footnote{Manafort 9/11/18 302, at~6.} Manafort noted that if Trump won, Deripaska would want to use Manafort to advance whatever interests Deripaska had in the United States and elsewhere.% 904 \footnote{Manafort 9/11/18 302, at~6.} \subparagraph{Paul Manafort's Two Campaign-Period Meetings with Konstantin Kilimnik in the United States} Manafort twice met with Kilimnik in person during the campaign period---once in May and again in August 2016. The first meeting took place on May~7, 2016, in New York City.% 905 \footnote{\blackout{Investigative Technique}} In the days leading to the meeting, Kilimnik had been working to gather information about the political situation in Ukraine. That included information gleaned from a trip that former Party of Regions official Yuriy Boyko had recently taken to Moscow---a trip that likely included meetings between Boyko and high-ranking Russian officials.% 906 \footnote{4/26/16 Email, Kilimnik to Purcell, at~2; Gates 2/2/18 302, at~12; Patten 5/22/18 302, at~6--7; Gates 11/7/18 302, at~3.} Kilimnik then traveled to Washington, D.C. on or about May~5, 2016; while in Washington, Kilimnik had pre-arranged meetings with State Department employees.% 907 \footnote{5/7/16 Email, Kilimnik to Charap \& Kimmage; 5/7/16 Email, Kansanof to Kilimnik.} Late on the evening of May~6, Gates arranged for Kilimnik to take a 3:00~a.m.\ train to meet Manafort in New York for breakfast on May~7.% 908 \footnote{5/6/16 Email, Manafort to Gates; 5/6/16 Email, Gates to Kilimnik.} According to Manafort, during the meeting, he and Kilimnik talked about events in Ukraine, and Manafort briefed Kilimnik on the Trump Campaign, expecting Kilimnik to pass the information back to individuals in Ukraine and elsewhere.% 909 \footnote{Manafort 10/11/18 302, at~1.} Manafort stated that Opposition Bloc members recognized Manafort's position on the Campaign was an opportunity, but Kilimnik did not ask for anything.% 910 \footnote{Manafort 10/11/18 302, at~1.} Kilimnik spoke about a plan of Boyko to boost election participation in the eastern zone of Ukraine, which was the base for the Opposition Bloc.% 911 \footnote{Manafort 10/11/18 302, at~1.} Kilimnik returned to Washington, D.C. right after the meeting with Manafort. Manafort met with Kilimnik a second time at the Grand Havana Club in New York City on the evening of August~2, 2016. The events leading to the meeting are as follows. 
On July~28, 2016, Kilimnik flew from Kiev to Moscow.% 912 \footnote{7/25/16 Email, Kilimnik to \UseVerb{katrinyana} (2:17:34~a.m.).} The next day, Kilimnik wrote to Manafort requesting that they meet, using coded language about a conversation he had that day.% 913 \footnote{7/29/16 Email, Kilimnik to Manafort (10:51~a.m.).} In an email with a subject line ``Black Caviar,'' Kilimnik wrote: \begin{quote} I met today with the guy who gave you your biggest black caviar jar several years ago. We spent about 5 hours talking about his story, and I have several important messages from him to you. He asked me to go and brief you on our conversation. I said I have to run it by you first, but in principle I am prepared to do it\dots. It has to do about the future of his country, and is quite interesting.% 914 \footnote{7/29/16 Email, Kilimnik to Manafort (10:51~a.m.).} \end{quote} Manafort identified ``the guy who gave you your biggest black caviar jar'' as Yanukovych. He explained that, in 2010, he and Yanukovych had lunch to celebrate the recent presidential election. Yanukovych gave Manafort a large jar of black caviar that was worth approximately \$30,000 to \$40,000.% 915 \footnote{Manafort 9/12/18 302, at~3.} Manafort's identification of Yanukovych as ``the guy who gave you your biggest black caviar jar'' is consistent with Kilimnik being in Moscow---where Yanukovych resided---when Kilimnik wrote ``I met today with a guy,'' and with a December 2016 email in which Kilimnik referred to Yanukovych as ``BG,'' \blackout{Grand Jury}% 916 \footnote{7/29/16 Email, Manafort to Kilimnik; \blackout{Investigative Technique}; \blackout{Grand Jury}} Manafort replied to Kilimnik's July~29 email, ``Tuesday [August~2] is best \dots\ Tues or weds in NYC.''% 917 \footnote{7/29/16 Email, Manafort to Kilimnik.} Three days later, on July~31, 2016, Kilimnik flew back to Kiev from Moscow, and on that same day, wrote to Manafort that he needed ``about 2 hours'' for their meeting ``because it is a long caviar story to tell.''% 918 \footnote{7/31/16 Email, Manafort to Kilimnik.} Kilimnik wrote that he would arrive at JFK on August~2 at~7:30~p.m., and he and Manafort agreed to a late dinner that night.% 919 \footnote{7/31/16 Email, Manafort to Kilimnik.} Documentary evidence---including flight, phone, and hotel records, and the timing of text messages exchanged% 920 \footnote{Kilimnik 8/2/16 CBP Record; Call Records of Konstantin Kilimnik \blackout{Grand Jury}; Call Records of Rick Gates \blackout{Grand Jury}; 8/2--3/16, Kilimnik Park Lane Hotel Receipt.}% ---confirms the dinner took place as planned on August~2.% 921 \footnote{Deripaska's private plane also flew to Teterboro Airport in New Jersey on the evening of August~2, 2016. According to Customs and Border Protection records, the only passengers on the plane were Deripaska's wife, daughter, mother, and father-in-law, and separate records obtained by our Office confirm that Kilimnik flew on a commercial flight to New York.} As to the contents of the meeting itself, the accounts of Manafort and Gates---who arrived late to the dinner---differ in certain respects. But their versions of events, when assessed alongside available documentary evidence and what Kilimnik told business associate Sam Patten, indicate that at least three principal topics were discussed.
First, Manafort and Kilimnik discussed a plan to resolve the ongoing political problems in Ukraine by creating an autonomous republic in its more industrialized eastern region of Donbas,% 922 \footnote{The Luhansk and Donetsk People's Republics, which are located in the Donbas region of Ukraine, declared themselves independent in response to the popular unrest in 2014 that removed President Yanukovych from power. Pro-Russian Ukrainian militia forces, with backing from the Russian military, have occupied the region since 2014. Under the Yanukovych-backed plan, Russia would assist in withdrawing the military, and Donbas would become an autonomous region within Ukraine with its own prime minister. The plan emphasized that Yanukovych would be an ideal candidate to bring peace to the region as prime minister of the republic, and facilitate the reintegration of the region into Ukraine with the support of the U.S. and Russian presidents. As noted above, according to \blackout{Grand Jury} the written documentation describing the plan, for the plan to work, both U.S. and Russian support were necessary. \blackout{Grand Jury} 2/21/18 Email, Manafort, Ward, \& Fabrizio, at~3--5.} and having Yanukovych, the Ukrainian President ousted in 2014, elected to head that republic.% 923 \footnote{Manafort 9/11/18 302, at~4; \blackout{Grand Jury}} That plan, Manafort later acknowledged, constituted a ``backdoor'' means for Russia to control eastern Ukraine.% 924 \footnote{\blackout{Grand Jury}} Manafort initially said that, if he had not cut off the discussion, Kilimnik would have asked Manafort in the August~2 meeting to convince Trump to come out in favor of the peace plan, and Yanukovych would have expected Manafort to use his connections in Europe and Ukraine to support the plan.% 925 \footnote{Manafort 9/11/18 302, at~4.} Manafort also initially told the Office that he had said to Kilimnik that the plan was crazy, that the discussion ended, and that he did not recall Kilimnik asking Manafort to reconsider the plan after their August~2 meeting.% 926 \footnote{Manafort 9/11/18 302, at~4.} Manafort said \blackout{Grand Jury} that he reacted negatively to Yanukovych sending---years later---an ``urgent'' request when Yanukovych needed him.% 927 \footnote{\blackout{Grand Jury} Manafort 9/11/18 302, at~5; Manafort 9/12/18 302, at~4.} When confronted with an email written by Kilimnik on or about December~8, 2016, however, Manafort acknowledged Kilimnik raised the peace plan again in that email.% 928 \footnote{Manafort 9/12/18 302, at~4; \blackout{Investigative Technique}} Manafort ultimately acknowledged Kilimnik also raised the peace plan in January and February 2017 meetings with Manafort \blackout{Grand Jury}% 929 \footnote{\blackout{Grand Jury} Documentary evidence confirms the peace-plan discussions in 2018. 2/19/18 Email, Fabrizio to Ward (forwarding email from Manafort); 2/21/18 Email, Manafort to Ward \& Fabrizio.} Second, Manafort briefed Kilimnik on the state of the Trump Campaign and Manafort's plan to win the election.% 930 \footnote{Manafort 9/11/18 302, at~5.} That briefing encompassed the Campaign's messaging and its internal polling data. 
According to Gates, it also included discussion of ``battleground'' states, which Manafort identified as Michigan, Wisconsin, Pennsylvania, and Minnesota.% 931 \footnote{Gates 1/30/18 302, at~3, 5.} Manafort did not refer explicitly to ``battleground'' states in his telling of the August~2 discussion, \blackout{Grand Jury}% 932 \footnote{\blackout{Grand Jury}} Third, according to Gates and what Kilimnik told Patten, Manafort and Kilimnik discussed two sets of financial disputes related to Manafort's previous work in the region. Those consisted of the unresolved Deripaska lawsuit and the funds that the Opposition Bloc owed to Manafort for his political consulting work and how Manafort might be able to obtain payment.% 933 \footnote{Gates 1/30/18 302, at~2--4; Patten 5/22/18 302, at~7.} After the meeting, Gates and Manafort both stated that they left separately from Kilimnik because they knew the media was tracking Manafort and wanted to avoid media reporting on his connections to Kilimnik.% 934 \footnote{Gates 1/30/18 302, at~5; Manafort 9/11/18 302, at~5.} \paragraph{Post-Resignation Activities} Manafort resigned from the Trump Campaign in mid-August~2016, approximately two weeks after his second meeting with Kilimnik, amidst negative media reporting about his political consulting work for the pro-Russian Party of Regions in Ukraine. Despite his resignation, Manafort continued to offer advice to various Campaign officials through the November election. Manafort told Gates that he still spoke with Kushner, Bannon, and candidate Trump,% 935 \footnote{Gates 2/12/18 302, at~12.} and some of those post-resignation contacts are documented in emails. For example, on October~21, 2016, Manafort sent Kushner an email and attached a strategy memorandum proposing that the Campaign make the case against Clinton ``as the failed and corrupt champion of the establishment'' and that ``Wikileaks provides the Trump campaign the ability to make the case in a very credible way---by using the words of Clinton, its campaign officials and DNC members.''% 936 \footnote{NOSC00021517--20 (10/21/16 Email, Manafort to Kushner).} Later, in a November~5, 2016 email to Kushner entitled ``Securing the Victory,'' Manafort stated that he was ``really feeling good about our prospects on Tuesday and focusing on preserving the victory,'' and that he was concerned the Clinton Campaign would respond to a loss by ``mov[ing] immediately to discredit the [Trump] victory and claim voter fraud and cyber-fraud, including the claim that the Russians have hacked into the voting machines and tampered with the results.''% 937 \footnote{NOSC00021573--75 (11/5/16 Email, Manafort to Kushner).} Trump was elected President on November~8, 2016. Manafort told the Office that, in the wake of Trump's victory, he was not interested in an Administration job. Manafort instead preferred to stay on the ``outside'' and monetize his campaign position to generate business given his familiarity and relationship with Trump and the incoming Administration.% 938 \footnote{Manafort 9/12/18 302, at~1, 4--5; Gates 1/30/18 302, at~4.} Manafort appeared to follow that plan, as he traveled to the Middle East, Cuba, South Korea, Japan, and China and was paid to explain what a Trump presidency would entail.% 939 \footnote{Manafort 9/12/18 302, at~1.} Manafort's activities in early 2017 included meetings relating to Ukraine and Russia. The first meeting, which took place in Madrid, Spain in January 2017, was with Georgiy Oganov.
Oganov, who had previously worked at the Russian Embassy in the United States, was a senior executive at a Deripaska company and was believed to report directly to Deripaska.% 940 \footnote{Kalashnikova 5/17/18 302, at~4; Gary Lee, Soviet Embassy's Identity Crisis, Washington Post (Dec.~20, 1991); Georgy S. Oganov Executive Profile \& Biography, Bloomberg (Mar.~12, 2019).} Manafort initially denied attending the meeting. When he later acknowledged it, he claimed that the meeting had been arranged by his lawyers and concerned only the Pericles lawsuit.% 941 \footnote{Manafort 9/11/18 302, at~7.} Other evidence, however, provides reason to doubt Manafort's statement that the sole topic of the meeting was the Pericles lawsuit. In particular, text messages to Manafort from a number associated with Kilimnik suggest that Kilimnik and Boyarkin---not Manafort's counsel---had arranged the meeting between Manafort and Oganov.% 942 \footnote{Text Message, Manafort \& Kilimnik.} Kilimnik's message states that the meeting was supposed to be ``not about money or Pericles'' but instead ``about recreating [the] old friendship''---ostensibly between Manafort and Deripaska---``and talking about global politics.''% 943 \footnote{Text Message, Manafort \& Kilimnik; Manafort 9/12/18 302, at~5.} Manafort also replied by text that he ``need[s] this finished before Jan.~20,''% 944 \footnote{Text Message, Manafort \& Kilimnik.} which appears to be a reference to resolving Pericles before the inauguration. On January~15, 2017, three days after his return from Madrid, Manafort emailed K.T. McFarland, who was at that time designated to be Deputy National Security Advisor and was formally appointed to that position on January~20, 2017.% 945 \footnote{1/15/17 Email, Manafort, McFarland, \& Flynn.} Manafort's January~15 email to McFarland stated: ``I have some important information I want to share that I picked up on my travels over the last month.''% 946 \footnote{1/15/17 Email, Manafort, McFarland, \& Flynn.} Manafort told the Office that the email referred to an issue regarding Cuba, not Russia or Ukraine, and that he had traveled to Cuba in the past month.% 947 \footnote{Manafort 9/11/18 302, at~7.} Either way, McFarland---who was advised by Flynn not to respond to the Manafort inquiry---appears not to have responded to Manafort.% 948 \footnote{1/15/17 Email, Manafort, McFarland, \& Flynn; McFarland 12/22/17 302, at~18--19.} Manafort told the Office that around the time of the Presidential Inauguration in January, he met with Kilimnik and Ukrainian oligarch Serhiy Lyovochkin at the Westin Hotel in Alexandria, Virginia.% 949 \footnote{\blackout{Grand Jury} Manafort 9/11/18 302, at~7; Manafort 9/21/18 302, at~3; 1/19/17 \& 1/22/17 Kilimnik CBP Records; 2016--17 Text Messages, Kilimnik \& Patten, at~1--2.} During this meeting, Kilimnik again discussed the Yanukovych peace plan that he had broached at the August~2 meeting and in a detailed December~8, 2016 message found in Kilimnik's DMP email account.% 950 \footnote{\blackout{Investigative Technique}} In that December~8 email, which Manafort acknowledged having read,% 951 \footnote{Manafort 9/11/18 302, at~6; \blackout{Grand Jury}} Kilimnik wrote, ``[a]ll that is required to start the process is a very minor `wink' (or slight push) from DT''---an apparent reference to President-elect Trump---``and a decision to authorize you to be a `special representative' and manage this process.'' Kilimnik assured Manafort that, with that authority, he ``could
start the process and within 10 days visit Russia [Yanukovych] guarantees your reception at the very top level,'' and that ``DT could have peace in Ukraine basically within a few months after inauguration.''% 952 \footnote{\blackout{Investigative Technique}} As noted above, \blackout{Grand Jury} and statements to the Office, Manafort sought to qualify his engagement on and support for the plan. \blackout{Grand Jury}% 953 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 954 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 955 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} On February~26, 2017, Manafort met Kilimnik in Madrid, where Kilimnik had flown from Moscow.% 956 \footnote{9/21/17 Email, Zatynaiko to Kilimnik.} In his first two interviews with the Office, Manafort denied meeting with Kilimnik on his Madrid trip and then---after being confronted with documentary evidence that Kilimnik was in Madrid at the same time he was---acknowledged that he had met Kilimnik in Madrid. Manafort said that Kilimnik had updated him on a criminal investigation into so-called ``black ledger'' payments to Manafort that was being conducted by Ukraine's National Anti-Corruption Bureau.% 957 \footnote{Manafort 9/13/18 302, at~1.} \blackout{Grand Jury}% 958 \footnote{\blackout{Grand Jury} In resolving whether Manafort breached his cooperation plea agreement by lying to the Office, the district court found that Manafort lied about, among other things, his contacts with Kilimnik regarding the peace plan, including the meeting in Madrid.} Manafort remained in contact with Kilimnik throughout 2017 and into the spring of 2018. Those contacts included matters pertaining to the criminal charges brought by the Office,% 959 \footnote{\textit{Manafort} (D.D.C.) Gov't Opp.\ to Mot.\ to Modify, at~2; Superseding Indictment \P\P~48--51, \textit{United States~v.\ Paul J. Manafort, Jr.}, 1:17-cr-201 (D.D.C. June~8, 2018), Doc.~318.} and the Ukraine peace plan. In early 2018, Manafort retained his longtime polling firm to craft a draft poll in Ukraine, sent the pollsters a three-page primer on the plan sent by Kilimnik, and worked with Kilimnik to formulate the polling questions.% 960 \footnote{9/12/18 Email, Fabrizio to Manafort \& Ward; 2/16/18 Email, Fabrizio to Manafort; 2/19/18 Email, Fabrizio to Ward; 2/21/18 Email, Manafort to Ward \& Fabrizio.} The primer sent to the pollsters specifically called for the United States and President Trump to support the Autonomous Republic of Donbas with Yanukovych as Prime Minister,% 961 \footnote{9/21/18 Email, Manafort to Ward \& Fabrizio (7:16:49~a.m.) (attachment).} and a series of questions in the draft poll asked for opinions on Yanukovych's role in resolving the conflict in Donbas.% 962 \footnote{3/9/18 Email, Ward to Manafort \& Fabrizio (attachment).} (The poll was not solely about Donbas; it also sought participants' views on leaders apart from Yanukovych as they pertained to the 2019 Ukraine presidential election.) The Office has not uncovered evidence that Manafort brought the Ukraine peace plan to the attention of the Trump Campaign or the Trump Administration. Kilimnik continued his efforts to promote the peace plan to the Executive Branch (\textit{e.g.}, U.S. Department of State) into the summer of 2018.% 963 \footnote{\blackout{Investigative Technique}} \subsection{Post-Election and Transition-Period Contacts} Trump was elected President on November~8, 2016.
Beginning immediately after the election, individuals connected to the Russian government started contacting officials on the Trump Campaign and Transition Team through multiple channels---sometimes through Russian Ambassador Kislyak and at other times through individuals who sought reliable contacts through U.S. persons not formally tied to the Campaign or Transition Team. The most senior levels of the Russian government encouraged these efforts. The investigation did not establish that these efforts reflected or constituted coordination between the Trump Campaign and Russia in its election-interference activities. \subsubsection{Immediate Post-Election Activity} As soon as news broke that Trump had been elected President, Russian government officials and prominent Russian businessmen began trying to make inroads into the new Administration. They appeared not to have preexisting contacts and struggled to connect with senior officials around the President-Elect. As explained below, those efforts entailed both official contact through the Russian Embassy in the United States and outreaches---sanctioned at high levels of the Russian government---through business rather than political contacts. \paragraph{Outreach from the Russian Government} At approximately 3~a.m.\ on election night, Trump Campaign press secretary Hope Hicks received a telephone call on her personal cell phone from a person who sounded foreign but was calling from a number with a DC area code.% 964 \footnote{Hicks 12/8/17 302, at~3.} Although Hicks had a hard time understanding the person, she could make out the words ``Putin call.''% 965 \footnote{Hicks 12/8/17 302, at~3.} Hicks told the caller to send her an email.% 966 \footnote{Hicks 12/8/17 302, at~3.} The following morning, on November~9, 2016, Sergey Kuznetsov, an official at the Russian Embassy to the United States, emailed Hicks from his Gmail address with the subject line, ``Message from Putin.''% 967 \footnote{NOSC00044381 (11/9/16 Email, Kuznetsov to Hicks (5:27~a.m.)).} Attached to the email was a message from Putin, in both English and Russian, which Kuznetsov asked Hicks to convey to the President-Elect.% 968 \footnote{NOSC00044381--82 (11/9/16 Email, Kuznetsov to Hicks (5:27~a.m.)).} In the message, Putin offered his congratulations to Trump for his electoral victory, stating he ``look[ed] forward to working with [Trump] on leading Russian--American relations out of crisis.''% 969 \footnote{NOSC00044382 (11/9/16 Letter from Putin to President-Elect Trump (Nov.~9, 2016) (translation)).} Hicks forwarded the email to Kushner, asking, ``Can you look into this? Don't want to get duped but don't want to blow off Putin!''% 970 \footnote{NOSC00044381 (11/9/16 Email, Hicks to Kushner (10:26~a.m.)).} Kushner stated in Congressional testimony that he believed that it would be possible to verify the authenticity of the forwarded email through the Russian Ambassador, whom Kushner had previously met in April 2016.% 971 \footnote{Statement of Jared C. 
Kushner to Congressional Committees, at~4 (Jul.~24, 2017).} Unable to recall the Russian Ambassador's name, Kushner emailed Dimitri Simes of CNI, whom he had consulted previously about Russia, \textit{see} \hyperlink{subsubsection.1.4.1.4}{Volume~I, Section~IV.A.4}, \textit{supra}, and asked, ``What is the name of Russian ambassador?''% 972 \footnote{NOSC00000058 (11/9/16 Email, Kushner to Simes (10:28~a.m.)); Statement of Jared Kushner to Congressional Committees, at~4 (Jul.~24, 2017).} Kushner forwarded Simes's response---which identified Kislyak by name---to Hicks.% 973 \footnote{NOSC00000058 (11/9/16 Email, Kushner to Hicks (11:05:44~a.m.)).} After checking with Kushner to see what he had learned, Hicks conveyed Putin's letter to transition officials.% 974 \footnote{Hicks 12/8/17 302, at~3--4.} Five days later, on November~14, 2016, Trump and Putin spoke by phone in the presence of Transition Team members, including incoming National Security Advisor Michael Flynn.% 975 \footnote{Flynn 11/16/17 302, at~8--10; \textit{see} Doug G. Ware, \textit{Trump, Russia's Putin Talk about Syria, Icy Relations in Phone Call}, UPI (Nov.~14, 2016).} \paragraph{High-Level Encouragement of Contacts through Alternative Channels} As Russian officials in the United States reached out to the President-Elect and his team, a number of Russian individuals working in the private sector began their own efforts to make contact. Petr Aven, a Russian national who heads Alfa-Bank, Russia's largest commercial bank, described to the Office interactions with Putin during this time period that might account for the flurry of Russian activity.% 976 \footnote{Aven provided information to the Office in an interview and through an attorney proffer, \blackout{Grand Jury}} Aven told the Office that he is one of approximately 50 wealthy Russian businessmen who regularly meet with Putin in the Kremlin; these 50 men are often referred to as ``oligarchs.''% 977 \footnote{Aven 8/2/18 302, at~7.} Aven told the Office that he met on a quarterly basis with Putin, including in the fourth quarter (Q4) of 2016, shortly after the U.S. presidential election.% 978 \footnote{\blackout{Grand Jury}} Aven said that he took these meetings seriously and understood that any suggestions or critiques that Putin made during these meetings were implicit directives, and that there would be consequences for Aven if he did not follow through.% 979 \footnote{Aven 8/2/18 302, at~2--3.} As was typical, the 2016 Q4 meeting with Putin was preceded by a preparatory meeting with Putin's chief of staff, Anton Vaino.% 980 \footnote{\blackout{Grand Jury} and interview with the Office, Aven referred to the high-ranking Russian government officials using numbers (\textit{e.g.}, Official~1, Official~2). Aven separately confirmed through an attorney proffer that Official~1 was Putin and Official~2 was Putin's chief of staff, Vaino. \textit{See} Affidavit of Ryan Junck (Aug.~2, 2018) (hard copy on file).} According to Aven, at his Q4 2016 one-on-one meeting with Putin,% 981 \footnote{At the time of his Q4 2016 meeting with Putin, Aven was generally aware of the press coverage about Russian interference in the U.S. election. According to Aven, he did not discuss that topic with Putin at any point, and Putin did not mention the rationale behind the threat of new sanctions. 
Aven 8/2/18 302, at~5--7.} Putin raised the prospect that the United States would impose additional sanctions on Russian interests, including sanctions against Aven and/or Alfa-Bank.% 982 \footnote{\blackout{Grand Jury}} Putin suggested that Aven needed to take steps to protect himself and Alfa-Bank.% 983 \footnote{\blackout{Grand Jury}} Aven also testified that Putin spoke of the difficulty faced by the Russian government in getting in touch with the incoming Trump Administration.% 984 \footnote{\blackout{Grand Jury}} According to Aven, Putin indicated that he did not know with whom formally to speak and generally did not know the people around the President-Elect.% 985 \footnote{\blackout{Grand Jury}} Aven \blackout{Grand Jury} told Putin he would take steps to protect himself and the Alfa-Bank shareholders from potential sanctions, and one of those steps would be to try to reach out to the incoming Administration to establish a line of communication.% 986 \footnote{\blackout{Grand Jury}} Aven described Putin responding with skepticism about Aven's prospect for success.% 987 \footnote{\blackout{Grand Jury} Aven 8/2/18 302, at~6.} According to Aven, although Putin did not expressly direct him to reach out to the Trump Transition Team, Aven understood that Putin expected him to try to respond to the concerns he had raised.% 988 \footnote{Aven 8/2/18 302, at~4--8; \blackout{Grand Jury}} Aven's efforts are described in \hyperlink{subsubsection.1.4.2.5}{Volume~I, Section~IV.B.5}, \textit{infra}. \subsubsection{Kirill Dmitriev's Transition-Era Outreach to the Incoming Administration} Aven's description of his interactions with Putin is consistent with the behavior of Kirill Dmitriev, a Russian national who heads Russia's sovereign wealth fund and is closely connected to Putin. Dmitriev undertook efforts to meet members of the incoming Trump Administration in the months after the election. Dmitriev asked a close business associate who worked for the United Arab Emirates (UAE) royal court, George Nader, to introduce him to Trump transition officials, and Nader eventually arranged a meeting in the Seychelles between Dmitriev and Erik Prince, a Trump Campaign supporter and an associate of Steve Bannon.% 989 \footnote{Nader provided information to the Office in multiple interviews, all but one of which were conducted under a proffer agreement \blackout{Grand Jury}. The investigators also interviewed Prince under a proffer agreement. Bannon was interviewed by the Office, \blackout{Grand Jury} under a proffer agreement.} In addition, the UAE national security advisor introduced Dmitriev to a hedge fund manager and friend of Jared Kushner, Rick Gerson, in late November 2016. In December 2016 and January 2017, Dmitriev and Gerson worked on a proposal for reconciliation between the United States and Russia, which Dmitriev implied he cleared through Putin. Gerson provided that proposal to Kushner before the inauguration, and Kushner later gave copies to Bannon and Secretary of State Rex Tillerson. \paragraph{Background} Dmitriev is a Russian national who was appointed CEO of Russia's sovereign wealth fund, the Russian Direct Investment Fund (RDIF), when it was founded in 2011.% 990 \footnote{Kirill Dmitriev Biography, Russian Direct Investment Fund, \textit{available at} \url{https://rdif.ru/Eng\_person\_dmitriev\_kirill/}. 
\textit{See also} Overview, Russian Direct Investment Fund, \textit{available at} \url{https://rdif.ru/Eng\_About/}.} Dmitriev reported directly to Putin and frequently referred to Putin as his ``boss.''% 991 \footnote{Gerson 6/15/18 302, at~1. \textit{See also, e.g.}, 12/14/16 Text Message, Dmitriev to Gerson; 1/9/17 Text Message, Dmitriev to Gerson.} RDIF has co-invested in various projects with UAE sovereign wealth funds.% 992 \footnote{\blackout{Grand Jury}} Dmitriev regularly interacted with Nader, a senior advisor to UAE Crown Prince Mohammed bin Zayed (Crown Prince Mohammed), in connection with RDIF's dealings with the UAE\null.% 993 \footnote{Nader 1/22/18 302, at~1--2; Nader 1/23/18 302, at~2--3; 5/3/16 Email, Nader to Phares; \blackout{Grand Jury}} Putin wanted Dmitriev to be in charge of both the financial and the political relationship between Russia and the Gulf states, in part because Dmitriev had been educated in the West and spoke English fluently.% 994 \footnote{Nader 1/22/18 302, at~1--2.} Nader considered Dmitriev to be Putin's interlocutor in the Gulf region, and would relay Dmitriev's views directly to Crown Prince Mohammed.% 995 \footnote{Nader 1/22/18 302, at~3.} Nader developed contacts with both U.S. presidential campaigns during the 2016 election, and kept Dmitriev abreast of his efforts to do so.% 996 \footnote{Nader 1/22/18 302, at~3; \blackout{Grand Jury}} According to Nader, Dmitriev said that his and the government of Russia's preference was for candidate Trump to win and asked Nader to assist him in meeting members of the Trump Campaign.% 997 \footnote{Nader 1/22/18 302, at~3; \blackout{Grand Jury}} \blackout{Grand Jury}% 998 \footnote{\blackout{Grand Jury}} Nader did not introduce Dmitriev to anyone associated with the Trump Campaign before the election.% 999 \footnote{Nader 1/22/18 302, at~3.} \blackout{Grand Jury}% 1000 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1001 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1002 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1003 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1004 \footnote{\blackout{Grand Jury}} Erik Prince is a businessman who had relationships with various individuals associated with the Trump Campaign, including Steve Bannon, Donald Trump~Jr., and Roger Stone.% 1005 \footnote{Prince 4/4/18 302, at~1--5; Bannon 2/14/18 302, at~21.} Prince did not have a formal role in the Campaign, although he offered to host a fundraiser for Trump and sent unsolicited policy papers on issues such as foreign policy, trade, and Russian election interference to Bannon.% 1006 \footnote{Prince 4/4/18 302, at~1, 3--4; Prince 5/3/18 302, at~2; Bannon 2/14/18 302, at~19--20; 10/18/16 Email, Prince to Bannon.} After the election, Prince frequently visited transition offices at Trump Tower, primarily to meet with Bannon but on occasion to meet Michael Flynn and others.% 1007 \footnote{Flynn 11/20/17 302, at~6; Flynn 1/11/18 302, at~5; Flynn 1/24/18 302, at~5--6; Flynn 5/1/18 302, at~11; Prince 4/4/18 302, at~5, 8; Bannon 2/14/18 302, at~20--21; 11/12/16 Email, Prince to Corallo.} Prince and Bannon would discuss, \textit{inter alia}, foreign policy issues and Prince's recommendations regarding who should be appointed to fill key national security positions.% 1008 \footnote{Prince 4/4/18 302, at~5; Bannon 2/14/18 302, at~21.} Although Prince was not formally affiliated with the transition, Nader \blackout{Grand Jury} received assurances \blackout{Grand Jury} that the incoming 
Administration considered Prince a trusted associate.% 1009 \footnote{\blackout{Grand Jury}} \paragraph{Kirill Dmitriev's Post-Election Contacts With the Incoming Administration} Soon after midnight on election night, Dmitriev messaged \blackout{Investigative Technique} who was traveling to New York to attend the 2016 World Chess Championship. \blackout{Investigative Technique} Dmitry Peskov, the Russian Federation's press secretary, who was also attending the World Chess Championship.% 1010 \footnote{\blackout{Investigative Technique} Nader 1/22/18 302, at~5--6; \blackout{Grand Jury}} \blackout{Investigative Technique}% 1011 \footnote{\blackout{Investigative Technique}} \blackout{Investigative Technique}% 1012 \footnote{\blackout{Investigative Technique}} \blackout{Investigative Technique}% 1013 \footnote{\blackout{Investigative Technique}} At approximately 2:40~a.m.\ on November~9, 2016, news reports stated that candidate Clinton had called President-Elect Trump to concede. At \blackout{Investigative Technique}% 1014 \footnote{\blackout{Investigative Technique}} \blackout{Investigative Technique} wrote to Dmitriev, ``Putin has won.''% 1015 \footnote{\blackout{Investigative Technique}} Later that morning, Dmitriev contacted Nader, who was in New York, to request a meeting with the ``key people'' in the incoming Administration as soon as possible in light of the ``[g]reat results.''% 1016 \footnote{11/9/16 Text Message, Dmitriev to Nader (9:34~a.m.); Nader 1/22/18 302, at~4.} He asked Nader to convey to the incoming Administration that ``we want to start rebuilding the relationship in whatever is a comfortable pace for them. We understand all of the sensitivities and are not in a rush.''% 1017 \footnote{11/9/16 Text Message, Dmitriev to Nader (11:58~p.m.).} Dmitriev and Nader had previously discussed Nader introducing him to the contacts Nader had made within the Trump Campaign.% 1018 \footnote{Nader 1/22/18 302, at~3.} Dmitriev also told Nader that he would ask Putin for permission to travel to the United States, where he would be able to speak to media outlets about the positive impact of Trump's election and the need for reconciliation between the United States and Russia.% 1019 \footnote{11/9/16 Text Message, Dmitriev to Nader (10:06~a.m.); 11/9/16 Text Message, Dmitriev to Nader (10:10~a.m.); \blackout{Grand Jury}} Later that day, Dmitriev flew to New York, where Peskov was separately traveling to attend the chess tournament.% 1020 \footnote{11/9/16 Text Message, Dmitriev to Nader (10:08~a.m.); 11/9/16 Text Message, Dmitriev to Nader (3:40~p.m.); Nader 1/22/18 302, at~5.} Dmitriev invited Nader to the opening of the tournament and noted that, if there was ``a chance to see anyone key from Trump camp,'' he ``would love to start building for the future.''% 1021 \footnote{11/9/16 Text Message, Dmitriev to Nader (7:10~p.m.).} Dmitriev also asked Nader to invite Kushner to the event so that he (Dmitriev) could meet him.% 1022 \footnote{11/10/16 Text Message, Dmitriev to Nader (5:20~a.m.).} Nader did not pass along Dmitriev's invitation to anyone connected with the incoming Administration.% 1023 \footnote{Nader 1/22/18 302, at~5--6.} Although one World Chess Federation official recalled hearing from an attendee that President-Elect Trump had stopped by the tournament, the investigation did not establish that Trump or any Campaign or Transition Team official attended the event.% 1024 \footnote{Marinello 5/31/18 302, at~2--3; Nader 1/22/18 302, at~5--6.} And the President's written answers 
denied that he had.% 1025 \footnote{Written Responses of Donald J. Trump (Nov.~20, 2018), at~17--18 (Response to Question~V, Part~(a)).} Nader stated that Dmitriev continued to press him to set up a meeting with transition officials, and was particularly focused on Kushner and Trump~Jr.% 1026 \footnote{Nader 1/22/18 302, at~6; \blackout{Grand Jury}} Dmitriev told Nader that Putin would be very grateful to Nader and that a meeting would make history.% 1027 \footnote{Nader 1/22/18 302, at~6; \blackout{Grand Jury}} \blackout{Grand Jury}% 1028 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1029 \footnote{\blackout{Grand Jury}} According to Nader, Dmitriev was very anxious to connect with the incoming Administration and told Nader that he would try other routes to do so besides Nader himself.% 1030 \footnote{Nader 1/22/18 302, at~6.} Nader did not ultimately introduce Dmitriev to anyone associated with the incoming Administration during Dmitriev's post-election trip to New York.% 1031 \footnote{Nader 1/22/18 302, at~5--7.} In early December 2016, Dmitriev again broached the topic of meeting incoming Administration officials with Nader in January or February.% 1032 \footnote{12/8/16 Text Messages, Dmitriev to Nader (12:10:31~a.m.); Nader 1/22/18 302, at~11.} Dmitriev sent Nader a list of publicly available quotes of Dmitriev speaking positively about Donald Trump ``in case they [were] helpful.''% 1033 \footnote{12/8/16 Text Message, Dmitriev to Nader (12:10:31~a.m.); 12/8/16 Text Message, Dmitriev to Nader (12:10:57~a.m.).} \paragraph{Erik Prince and Kirill Dmitriev Meet in the Seychelles} \subparagraph{George Nader and Erik Prince Arrange Seychelles Meeting with Dmitriev} Nader traveled to New York in early January 2017 and had lunchtime and dinner meetings with Erik Prince on January~3, 2017.% 1034 \footnote{Prince 4/4/18 302, at~8.} Nader and Prince discussed Dmitriev.% 1035 \footnote{Prince 5/3/18 302, at~3; \blackout{Grand Jury}} Nader informed Prince that the Russians were looking to build a link with the incoming Trump Administration.% 1036 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} he told Prince that Dmitriev had been pushing Nader to introduce him to someone from the incoming Administration \blackout{Grand Jury}.% 1037 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} Nader suggested, in light of Prince's relationship with Transition Team officials, that Prince and Dmitriev meet to discuss issues of mutual concern.% 1038 \footnote{\blackout{Grand Jury}} Prince told Nader that he needed to think further about it and to check with Transition Team officials.% 1039 \footnote{\blackout{Grand Jury}} After his dinner with Prince, Nader sent Prince a link to a Wikipedia entry about Dmitriev, and sent Dmitriev a message stating that he had just met ``with some key people within the family and inner circle''---a reference to Prince---and that he had spoken at length and positively about Dmitriev.% 1040 \footnote{1/4/17 Text Message, Nader to Prince; 1/4/17 Text Messages, Nader to Dmitriev (5:24~a.m.--5:26~a.m.); Nader 1/22/18 302, at~8--9 \blackout{Grand Jury}} Nader told Dmitriev that the people he met had asked for Dmitriev's bio, and Dmitriev replied that he would update and send it.% 1041 \footnote{1/4/17 Text Messages, Nader \& Dmitriev (7:24:27~a.m.).} Nader later received from Dmitriev two files concerning Dmitriev: one was a two-page biography, and the other was a list of Dmitriev's positive quotes about Donald Trump.% 1042 \footnote{1/4/17 Text Messages,
Dmitriev to Nader (7:25--7:29~a.m.).} The next morning, Nader forwarded the message and attachments Dmitriev had sent him to Prince.% 1043 \footnote{1/4/17 Text Messages, Nader to Prince.} Nader wrote to Prince that these documents were the versions ``to be used with some additional details for them'' (with ``them'' referring to members of the incoming Administration).% 1044 \footnote{1/4/17 Text Messages, Nader to Prince; \blackout{Grand Jury}} Prince opened the attachments at Trump Tower within an hour of receiving them.% 1045 \footnote{Prince 5/3/18 302, at~1--3.} Prince stated that, while he was at Trump Tower that day, he spoke with Kellyanne Conway, Wilbur Ross, Steve Mnuchin, and others while waiting to see Bannon.% 1046 \footnote{Prince 5/3/18 302, at~2--3.} Cell-site location data for Prince's mobile phone indicates that Prince remained at Trump Tower for approximately three hours.% 1047 \footnote{Cell-site location data for Prince's mobile phone \blackout{Investigative Technique}} Prince said that he could not recall whether, during those three hours, he met with Bannon and discussed Dmitriev with him.% 1048 \footnote{Prince 5/3/18 302, at~3.} \blackout{Grand Jury}.% 1049 \footnote{\blackout{Grand Jury}} Prince booked a ticket to the Seychelles on January~7, 2017.% 1050 \footnote{1/5/17 Email, Kasbo to Prince.} The following day, Nader wrote to Dmitriev that he had a ``pleasant surprise'' for him, namely that he had arranged for Dmitriev to meet ``a Special Guest'' from ``the New Team,'' referring to Prince.% 1051 \footnote{1/8/17 Text Messages, Nader to Dmitriev (6:05--6:10~p.m.).} Nader asked Dmitriev if he could come to the Seychelles for the meeting on January~12, 2017, and Dmitriev agreed.% 1052 \footnote{1/8/17 Text Messages, Nader \& Dmitriev (6:10--7:27~p.m.).} The following day, Dmitriev sought assurance from Nader that the Seychelles meeting would be worthwhile.% 1053 \footnote{1/9/17 Text Message, Dmitriev to Nader.} \blackout{Grand Jury} Dmitriev was not enthusiastic about the idea of meeting with Prince, and that Nader assured him that Prince wielded influence with the incoming Administration.% 1054 \footnote{\blackout{Grand Jury}} Nader wrote to Dmitriev, ``This guy [Prince] is designated by Steve [Bannon] to meet you! I know him and he is very very well connected and trusted by the New Team. 
His sister is now a Minister of Education.''% 1055 \footnote{1/9/17 Text Message, Nader to Dmitriev (2:12:59~p.m.); Nader 1/19/18 302, at~13; \blackout{Grand Jury}} According to Nader, Prince had led him to believe that Bannon was aware of Prince's upcoming meeting with Dmitriev, and Prince acknowledged that it was fair for Nader to think that Prince would pass information on to the Transition Team.% 1056 \footnote{Nader 1/19/18 302, at~13; \blackout{Grand Jury} Prince 5/3/18 302, at~3.} Bannon, however, told the Office that Prince did not tell him in advance about his meeting with Dmitriev.% 1057 \footnote{Bannon 2/14/18 302, at~25--26.} \subparagraph{The Seychelles Meetings} Dmitriev arrived with his wife in the Seychelles on January~11, 2017, and checked into the Four Seasons Resort where Crown Prince Mohammed and Nader were staying.% 1058 \footnote{1/10/17 Text Messages, Dmitriev \& Nader (2:05:54--3:30:25~p.m.); 1/11/17 Text Messages, Dmitriev \& Nader (2:16:16--5:17:59~p.m.).} Prince arrived that same day.% 1059 \footnote{1/7/17 Email, Kasbo to Prince.} Prince and Dmitriev met for the first time that afternoon in Nader's villa, with Nader present.% 1060 \footnote{1/11/17 Text Messages, Nader \& Dmitriev (5:18:24--5:37:14~p.m.); \blackout{Grand Jury}} The initial meeting lasted approximately 30--45~minutes.% 1061 \footnote{Prince 5/3/18 302, at~4; \blackout{Grand Jury}} \blackout{Grand Jury}% 1062 \footnote{\blackout{Grand Jury}} Prince described the eight years of the Obama Administration in negative terms, and stated that he was looking forward to a new era of cooperation and conflict resolution.% 1063 \footnote{\blackout{Grand Jury}} According to Prince, he told Dmitriev that Bannon was effective if not conventional, and that Prince provided policy papers to Bannon.% 1064 \footnote{Prince 5/3/18 302, at~4.} \blackout{Grand Jury}% 1065 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1066 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1067 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1068 \footnote{\blackout{Grand Jury}} The topic of Russian interference in the 2016 election did not come up.% 1069 \footnote{Prince 5/3/18 302, at~4--5.} \blackout{Grand Jury}% 1070 \footnote{\blackout{Grand Jury}} Prince added that he would inform Bannon about his meeting with Dmitriev, and that if there was interest in continuing the discussion, Bannon or someone else on the Transition Team would do so.% 1071 \footnote{Prince 5/3/18 302, at~4; \blackout{Grand Jury}} \blackout{Grand Jury}% 1072 \footnote{\blackout{Grand Jury}} Afterwards, Prince returned to his room, where he learned that a Russian aircraft carrier had sailed to Libya, which led him to call Nader and ask him to set up another meeting with Dmitriev.% 1073 \footnote{Prince 4/4/18 302, at~10; Prince 5/3/18 302, at~4; \blackout{Grand Jury}} According to Nader, Prince called and said he had checked with his associates back home and needed to convey to Dmitriev that Libya was ``off the table.''% 1074 \footnote{Nader 1/22/18 302, at~14; \blackout{Grand Jury}} Nader wrote to Dmitriev that Prince had ``received an urgent message that he needs to convey to you immediately,'' and arranged for himself, Dmitriev, and Prince to meet at a restaurant on the Four Seasons property.% 1075 \footnote{\blackout{Grand Jury} 1/11/17 Text Messages, Dmitriev \& Nader (9:13:54--10:24:25~p.m.).} At the second meeting, Prince told Dmitriev that the United States could not accept any Russian involvement in Libya, because it would make 
the situation there much worse.% 1076 \footnote{\blackout{Grand Jury} Prince, however, denied that and recalled that he was making these remarks to Dmitriev not in an official capacity for the transition but based on his experience as a former naval officer. Prince 5/3/18 302, at~4.} \blackout{Grand Jury}% 1077 \footnote{\blackout{Grand Jury}} After the brief second meeting concluded, Nader and Dmitriev discussed what had transpired.% 1078 \footnote{Nader 1/22/18 302, at~15; \blackout{Grand Jury}} Dmitriev told Nader that he was disappointed in his meetings with Prince for two reasons: first, he believed the Russians needed to be communicating with someone who had more authority within the incoming Administration than Prince had.% 1079 \footnote{Nader 1/22/18 302, at~9, 15; \blackout{Grand Jury}} Second, he had hoped to have a discussion of greater substance, such as outlining a strategic roadmap for both countries to follow.% 1080 \footnote{Nader 1/22/18 302, at~15; \blackout{Grand Jury}} Dmitriev told Nader that \blackout{Grand Jury} Prince's comments \blackout{Grand Jury} were insulting \blackout{Grand Jury}% 1081 \footnote{\blackout{Grand Jury} Nader 1/22/18 302, at~15.} Hours after the second meeting, Prince sent two text messages to Bannon from the Seychelles.% 1082 \footnote{Call Records of Erik Prince \blackout{Grand Jury}} As described further below, investigators were unable to obtain the content of these or other messages between Prince and Bannon, and the investigation also did not identify evidence of any further communication between Prince and Dmitriev after their meetings in the Seychelles. \subparagraph{Erik Prince's Meeting with Steve Bannon after the Seychelles Trip} After the Seychelles meetings, Prince told Nader that he would inform Bannon about his discussion with Dmitriev and would convey that someone within the Russian power structure was interested in seeking better relations with the incoming Administration.% 1083 \footnote{Prince 4/4/18 302, at~10; Prince 5/3/18 302, at~4; \blackout{Grand Jury}} On January~12, 2017, Prince contacted Bannon's personal assistant to set up a meeting for the following week.% 1084 \footnote{1/12/17 Text Messages, Prince to Preate.} Several days later, Prince messaged her again asking about Bannon's schedule.% 1085 \footnote{1/15/17 Text Message, Prince to Preate.} Prince said that he met Bannon at Bannon's home after returning to the United States in mid-January and briefed him about several topics, including his meeting with Dmitriev.% 1086 \footnote{Prince 4/4/18 302, at~11; Prince 5/3/18 302, at~5.} Prince told the Office that he explained to Bannon that Dmitriev was the head of a Russian sovereign wealth fund and was interested in improving relations between the United States and Russia.% 1087 \footnote{Prince 4/4/18 302, at~11; Prince 5/3/18 302, at~5.} Prince had on his cellphone a screenshot of Dmitriev's Wikipedia page dated January~16, 2017, and Prince told the Office that he likely showed that image to Bannon.% 1088 \footnote{Prince 5/3/18 302, at~5; 1/16/17 Image on Prince Phone (on file with the Office).} Prince also believed he provided Bannon with Dmitriev's contact information.% 1089 \footnote{Prince 5/3/18 302, at~5.} According to Prince, Bannon instructed Prince not to follow up with Dmitriev, and Prince had the impression that the issue was not a priority for Bannon.% 1090 \footnote{Prince 5/3/18 302, at~5.} Prince related that Bannon did not appear angry, just relatively uninterested.% 1091 \footnote{Prince 5/3/18 
302, at~5.} Bannon, by contrast, told the Office that he never discussed with Prince anything regarding Dmitriev, RDIF, or any meetings with Russian individuals or people associated with Putin.% 1092 \footnote{Bannon 10/26/18 302, at~10--11.} Bannon also stated that had Prince mentioned such a meeting, Bannon would have remembered it, and Bannon would have objected to such a meeting having taken place.% 1093 \footnote{Bannon 10/26/18 302, at~10--11.} The conflicting accounts provided by Bannon and Prince could not be independently clarified by reviewing their communications, because neither one was able to produce any of the messages they exchanged in the time period surrounding the Seychelles meeting. Prince's phone contained no text messages prior to March 2017, though provider records indicate that he and Bannon exchanged dozens of messages.% 1094 \footnote{Call Records of Erik Prince \blackout{Grand Jury}} Prince denied deleting any messages but claimed he did not know why there were no messages on his device before March 2017.% 1095 \footnote{Prince 4/4/18 302, at~6.} Bannon's devices similarly contained no messages in the relevant time period, and Bannon also stated he did not know why messages did not appear on his device.% 1096 \footnote{Bannon 10/26/18 302, at~11; Bannon 2/14/18 302, at~36.} Bannon told the Office that, during both the months before and after the Seychelles meeting, he regularly used his personal Blackberry and personal email for work-related communications (including those with Prince), and he took no steps to preserve these work communications.% 1097 \footnote{Bannon 10/26/18 302, at~11.} \paragraph{Kirill Dmitriev's Post-Election Contact with Rick Gerson Regarding U.S.--Russia Relations} Dmitriev's contacts during the transition period were not limited to those facilitated by Nader. 
In approximately late November 2016, the UAE national security advisor introduced Dmitriev to Rick Gerson, a friend of Jared Kushner who runs a hedge fund in New York.% 1098 \footnote{Gerson 6/5/18 302, at~1, 3; 11/26/16 Text Message, Dmitriev to Gerson; 1/25/17 Text Message, Dmitriev to Nader.} Gerson stated he had no formal role in the transition and had no involvement in the Trump Campaign other than occasional casual discussions about the Campaign with Kushner.% 1099 \footnote{Gerson 6/5/18 302, at~1.} After the election, Gerson assisted the transition by arranging meetings for transition officials with former UK prime minister Tony Blair and a UAE delegation led by Crown Prince Mohammed.% 1100 \footnote{Gerson 6/5/18 302, at~1--2; Kushner 4/11/18 302, at~21.} When Dmitriev and Gerson met, they principally discussed potential joint ventures between Gerson's hedge fund and RDIF\null.% 1101 \footnote{Gerson 6/5/18 302, at~3--4; \textit{see, e.g.}, 12/2/16 Text Messages, Dmitriev \& Gerson; 12/14/16 Text Messages, Dmitriev \& Gerson; 1/3/17 Text Message, Gerson to Dmitriev; 12/2/16 Email, Tolokonnikov to Gerson.} Dmitriev was interested in improved economic cooperation between the United States and Russia and asked Gerson who he should meet with in the incoming Administration who would be helpful towards this goal.% 1102 \footnote{Gerson 6/5/18 302, at~3; 12/14/16 Text Message, Dmitriev to Gerson.} Gerson replied that he would try to figure out the best way to arrange appropriate introductions, but noted that confidentiality would be required because of the sensitivity of holding such meetings before the new Administration took power, and before Cabinet nominees had been confirmed by the Senate.% 1103 \footnote{12/14/16 Text Message, Gerson to Dmitriev.} Gerson said he would ask Kushner and Michael Flynn who the ``key person or people'' were on the topics of reconciliation with Russia, joint security concerns, and economic matters.% 1104 \footnote{12/14/16 Text Message, Gerson to Dmitriev.} Dmitriev told Gerson that he had been tasked by Putin to develop and execute a reconciliation plan between the United States and Russia. 
He noted in a text message to Gerson that if Russia was ``approached with respect and willingness to understand our position, we can have Major Breakthroughs quickly.''% 1105 \footnote{12/14/16 Text Messages, Dmitriev \& Gerson; Gerson 6/15/18 302, at~1.} Gerson and Dmitriev exchanged ideas in December 2016 about what such a reconciliation plan would include.% 1106 \footnote{12/14/16 Text Messages, Dmitriev \& Gerson.} Gerson told the Office that the Transition Team had not asked him to engage in these discussions with Dmitriev, and that he did so on his own initiative and as a private citizen.% 1107 \footnote{Gerson 6/15/18 302, at~1.} On January~9, 2017, the same day he asked Nader whether meeting Prince would be worthwhile, Dmitriev sent his biography to Gerson and asked him if he could ``share it with Jared (or somebody else very senior in the team)---so that they know that we are focused from our side on improving the relationship and my boss asked me to play a key role in that.''% 1108 \footnote{1/9/17 Text Messages, Dmitriev to Gerson; 1/9/17 Text Message, Dmitriev to Nader.} Dmitriev also asked Gerson if he knew Prince, and if Prince was somebody important or worth spending time with.% 1109 \footnote{Gerson 6/5/18 302, at~4.} After his trip to the Seychelles, Dmitriev told Gerson that Bannon had asked Prince to meet with Dmitriev and that the two had had a positive meeting.% 1110 \footnote{1/18/17 Text Messages, Dmitriev \& Gerson.} On January~16, 2017, Dmitriev consolidated the ideas for U.S.--Russia reconciliation that he and Gerson had been discussing into a two-page document that listed five main points: (1)~jointly fighting terrorism; (2)~jointly engaging in anti-weapons of mass destruction efforts; (3)~developing ``win-win'' economic and investment initiatives; (4)~maintaining an honest, open, and continual dialogue regarding issues of disagreement; and (5)~ensuring proper communication and trust by ``key people'' from each country.% 1111 \footnote{1/16/17 Text Messages, Dmitriev \& Gerson.} On January~18, 2017, Gerson gave a copy of the document to Kushner.% 1112 \footnote{Gerson 6/5/18 302, at~3; Gerson 6/15/18 302, at~2.} Kushner had not heard of Dmitriev at that time.% 1113 \footnote{Gerson 6/5/18 302, at~3.} Gerson explained that Dmitriev was the head of RDIF, and Gerson may have alluded to Dmitriev's being well connected.% 1114 \footnote{Gerson 6/5/18 302, at~3; Gerson 6/15/18 302, at~1--2; Kushner 4/11/18 302, at~22.} Kushner placed the document in a file and said he would get it to the right people.% 1115 \footnote{Gerson 6/5/18 302, at~3.} Kushner ultimately gave one copy of the document to Bannon and another to Rex Tillerson; according to Kushner, neither of them followed up with Kushner about it.% 1116 \footnote{Kushner 4/11/18 302, at~32.} On January~19, 2017, Dmitriev sent Nader a copy of the two-page document, telling him that this was ``a view from our side that I discussed in my meeting on the islands and with you and with our friends. 
Please share with them---we believe this is a good foundation to start from.''% 1117 \footnote{1/19/17 Text Message, Dmitriev to Nader (11:11:56~a.m.).} Gerson informed Dmitriev that he had given the document to Kushner soon after delivering it.% 1118 \footnote{1/18/17 Text Message, Gerson to Dmitriev; Gerson 6/15/18 302, at~2.} On January~26, 2017, Dmitriev wrote to Gerson that his ``boss''---an apparent reference to Putin---was asking if there had been any feedback on the proposal.% 1119 \footnote{1/26/17 Text Message, Dmitriev to Gerson.} Dmitriev said, ``[w]e do not want to rush things and move at a comfortable speed. At the same time, my boss asked me to try to have the key US meetings in the next two weeks if possible.''% 1120 \footnote{1/26/17 Text Message, Dmitriev to Gerson.} He informed Gerson that Putin and President Trump would speak by phone that Saturday, and noted that that information was ``very confidential.''% 1121 \footnote{1/26/17 Text Message, Dmitriev to Gerson.} The same day, Dmitriev wrote to Nader that he had seen his ``boss'' again yesterday who had ``emphasized that this is a great priority for us and that we need to build this communication channel to avoid bureaucracy.''% 1122 \footnote{1/26/17 Text Message, Dmitriev to Nader (10:04:41~p.m.).} On January~28, 2017, Dmitriev texted Nader that he wanted ``to see if I can confirm to my boss that your friends may use some of the ideas from the 2 pager I sent you in the telephone call that will happen at~12 EST,''% 1123 \footnote{1/28/17 Text Message, Dmitriev to Nader (11:05:39~a.m.).} an apparent reference to the call scheduled between President Trump and Putin. Nader replied, ``Definitely paper was so submitted to Team by Rick and me. They took it seriously!''% 1124 \footnote{1/28/17 Text Message, Nader to Dmitriev (11:11:33~a.m.).} After the call between President Trump and Putin occurred, Dmitriev wrote to Nader that ``the call went very well. My boss wants me to continue making some public statements that us [sic] Russia cooperation is good and important.''% 1125 \footnote{1/29/17 Text Message, Dmitriev to Nader (11:06:35~a.m.).} Gerson also wrote to Dmitriev to say that the call had gone well, and Dmitriev replied that the document they had drafted together ``played an important role.''% 1126 \footnote{1/28/17 Text Message, Gerson to Dmitriev; 1/29/17 Text Message, Dmitriev to Gerson.} Gerson and Dmitriev appeared to stop communicating with one another in approximately March 2017, when the investment deal they had been working on together showed no signs of progressing.% 1127 \footnote{Gerson 6/15/18 302, at~4; 3/21/17 Text Message, Gerson to Dmitriev.} \subsubsection{Ambassador Kislyak's Meeting with Jared Kushner and Michael Flynn in Trump Tower Following the Election} On November~16, 2016, Catherine Vargas, an executive assistant to Kushner, received a request for a meeting with Russian Ambassador Sergey Kislyak.% 1128 \footnote{\textit{Statement of Jared C. Kushner to Congressional Committees} (``Kushner Stmt.''), at~6 (7/24/17) (written statement by Kushner to the Senate Judiciary Committee).} That same day, Vargas sent Kushner an email with the subject, ``MISSED CALL: Russian Ambassador to the US, Sergey Ivanovich Kislyak\dots.''% 1129 \footnote{NOSC00004356 (11/16/16 Email, Vargas to Kushner (6:44~p.m.)).} The text of the email read, ``RE: setting up a time to meet w/you on 12/1.
LMK how to proceed.'' Kushner responded in relevant part, ``I think I do this one -- confirm with Dimitri [Simes of CNI] that this is the right guy.''% 1130 \footnote{NOSC00004356 (11/16/16 Email, Kushner to Vargas (9:54~p.m.)).} After reaching out to a colleague of Simes at CNI, Vargas reported back to Kushner that Kislyak was ``the best go-to guy for routine matters in the US,'' while Yuri Ushakov, a Russian foreign policy advisor, was the contact for ``more direct/substantial matters.''% 1131 \footnote{11/17/16 Email, Brown to Simes (10:41~a.m.); Brown 10/13/17 302, at~4; 11/17/16 Email, Vargas to Kushner (12:31:18).} Bob Foresman, the UBS investment bank executive who had previously tried to transmit to candidate Trump an invitation to speak at an economic forum in Russia, \textit{see} \hyperlink{subparagraph.1.4.1.1.4.2}{Volume~I, Section~IV.A.1.d.ii}, \textit{supra}, may have provided similar information to the Transition Team. According to Foresman, at the end of an early December 2016 meeting with incoming National Security Advisor Michael Flynn and his designated deputy (K.T. McFarland) in New York, Flynn asked Foresman for his thoughts on~Kislyak. Foresman had not met Kislyak but told Flynn that, while Kislyak was an important person, Kislyak did not have a direct line to Putin.% 1132 \footnote{Foresman 10/17/18 302, at~17.} Foresman subsequently traveled to Moscow, inquired of a source he believed to be close to Putin, and heard back from that source that Ushakov would be the official channel for the incoming U.S. national security advisor.% 1133 \footnote{Foresman 10/17/18 302, at~17--18.} Foresman acknowledged that Flynn had not asked him to undertake that inquiry in Russia but told the Office that he nonetheless felt obligated to report the information back to Flynn, and that he worked to get a face-to-face meeting with Flynn in January 2017 so that he could do so.% 1134 \footnote{Foresman 10/17/18 302, at~18.} Email correspondence suggests that the meeting ultimately went forward,% 1135 \footnote{RMF-SCO-00000015 (1/5/17 Email, Foresman to Atencio \& Flaherty); RMF-SCO-00000015 (1/5/17 Email, Flaherty to Foresman \& Atencio).} but Flynn has no recollection of it or of the earlier December meeting.% 1136 \footnote{9/26/18 Attorney Proffer from Covington \& Burling LLP (reflected in email on file with the Office).} (The investigation did not identify evidence of Flynn or Kushner meeting with Ushakov after being given his name.% 1137 \footnote{Vargas 4/4/18 302, at~5.}% ) In the meantime, although he had already formed the impression that Kislyak was not necessarily the right point of contact,% 1138 \footnote{Kushner 11/1/17 302, at~4.} Kushner went forward with the meeting that Kislyak had requested on November~16. It took place at Trump Tower on November~30, 2016.% 1139 \footnote{AKIN\_GUMP\_BERKOWITZ\_0000016--019 (11/29/16 Email, Vargas to Kuznetsov).} At Kushner's invitation, Flynn also attended; Bannon was invited but did not attend.% 1140 \footnote{Flynn 1/11/18 302, at~2; NOS00004240 (Calendar Invite, Vargas to Kushner \& Flynn).} During the meeting, which lasted approximately 30~minutes, Kushner expressed a desire on the part of the incoming Administration to start afresh with U.S.--Russian relations.% 1141 \footnote{Kushner Stmt. 
at~6.} Kushner also asked Kislyak to identify the best person (whether Kislyak or someone else) with whom to direct future discussions---someone who had contact with Putin and the ability to speak for him.% 1142 \footnote{Kushner Stmt.\ at~6; Kushner 4/11/18 302, at~18.} The three men also discussed U.S. policy toward Syria, and Kislyak floated the idea of having Russian generals brief the Transition Team on the topic using a secure communications line.% 1143 \footnote{Kushner Stmt.\ at~7; Kushner 4/11/18 302, at~18; Flynn 1/11/18 302, at~2.} After Flynn explained that there was no secure line in the Transition Team offices, Kushner asked Kislyak if they could communicate using secure facilities at the Russian Embassy.% 1144 \footnote{Kushner 4/11/18 302, at~18.} Kislyak quickly rejected that idea.% 1145 \footnote{Kushner 4/11/18 302, at~18.} \subsubsection{Jared Kushner's Meeting with Sergey Gorkov} On December~6, 2016, the Russian Embassy reached out to Kushner's assistant to set up a second meeting between Kislyak and Kushner.% 1146 \footnote{Kushner Stmt.\ at~7; NOSC00000123 (12/6/16 Email, Vargas to Kushner (12:11:40~p.m.)).} Kushner declined several proposed meeting dates, but Kushner's assistant indicated that Kislyak was very insistent about securing a second meeting.% 1147 \footnote{Kushner 4/11/18 302, at~19; NOSC00000130 (12/12/16 Email, Kushner to Vargas (10:41~p.m.)).} Kushner told the Office that he did not want to take another meeting because he had already decided Kislyak was not the right channel for him to communicate with Russia, so he arranged to have one of his assistants, Avi Berkowitz, meet with Kislyak in his stead.% 1148 \footnote{Kushner 4/11/18 302, at~19; Kushner Stmt.\ at~7; DJTFP\_SCO\_01442290 (12/6/16 Email, Berkowitz to \blackout{Personal Privacy}} Although embassy official Sergey Kuznetsov wrote to Berkowitz that Kislyak thought it ``important'' to ``continue the conversation with Mr.~Kushner in person,''% 1149 \footnote{DJTFP\_SCO\_01442290 (12/7/16 Email \blackout{Personal Privacy} to Berkowitz (12:31:39~p.m.)).} Kislyak nonetheless agreed to meet instead with Berkowitz once it became apparent that Kushner was unlikely to take a meeting. Berkowitz met with Kislyak on December~12, 2016, at Trump Tower.% 1150 \footnote{Berkowitz 1/12/18 302, at~7; AKIN\_GUMP\_BERKOWITZ\_000001--04 (12/12/16 Text Messages, Berkowitz \& 202-701-8532).} The meeting lasted only a few minutes, during which Kislyak indicated that he wanted Kushner to meet someone who had a direct line to Putin: Sergey Gorkov, the head of the Russian-government-owned bank Vnesheconombank (VEB). 
Kushner agreed to meet with Gorkov.% 1151 \footnote{Kushner 4/11/18 302, at~19; NOSC00000130--135 (12/12/16 Email, Kushner to Berkowitz).} The one-on-one meeting took place the next day, December~13, 2016, at the Colony Capital building in Manhattan, where Kushner had previously scheduled meetings.% 1152 \footnote{Kushner 4/11/18 302, at~19; NOSC00000130--135 (12/12/16 Email, Kushner to Berkowitz).} VEB was (and is) the subject of Department of Treasury economic sanctions imposed in response to Russia's annexation of Crimea.% 1153 \footnote{\textit{Announcement of Treasury Sanctions on Entities Within the Financial Services and Energy Sectors of Russia, Against Arms or Related Materiel Entities, and those Undermining Ukraine's Sovereignty}, United States Department of the Treasury (Jul.~16, 2014).} Kushner did not, however, recall any discussion during his meeting with Gorkov about the sanctions against VEB or sanctions more generally.% 1154 \footnote{Kushner 4/11/18 302, at~20.} Kushner stated in an interview that he did not engage in any preparation for the meeting and that no one on the Transition Team even did a Google search for Gorkov's name.% 1155 \footnote{Kushner 4/11/18 302, at~19. Berkowitz, by contrast, stated to the Office that he had googled Gorkov's name and told Kushner that Gorkov appeared to be a banker. Berkowitz 1/12/18 302, at~8.} At the start of the meeting, Gorkov presented Kushner with two gifts: a painting and a bag of soil from the town in Belarus where Kushner's family originated.% 1156 \footnote{Kushner 4/11/18 302, at~19--20.} The accounts from Kushner and Gorkov differ as to whether the meeting was diplomatic or business in nature. Kushner told the Office that the meeting was diplomatic, with Gorkov expressing disappointment with U.S.--Russia relations under President Obama and hopes for improved relations with the incoming Administration.% 1157 \footnote{Kushner Stmt.\ at~8.} According to Kushner, although Gorkov told Kushner a little bit about his bank and made some statements about the Russian economy, the two did not discuss Kushner's companies or private business dealings of any kind.% 1158 \footnote{Kushner Stmt.\ at~8.} (At the time of the meeting, Kushner Companies had a debt obligation coming due on the building it owned at 666~Fifth Avenue, and there had been public reporting both about efforts to secure lending on the property and possible conflicts of interest for Kushner arising out of his company's borrowing from foreign lenders.% 1159 \footnote{\textit{See, e.g.}, Peter Grant, \textit{Donald Trump Son-in-Law Jared Kushner Could Face His Own Conflict-of-Interest Questions}, Wall Street Journal (Nov.~29, 2016).}% ) In contrast, in a 2017 public statement, VEB suggested Gorkov met with Kushner in Kushner's capacity as CEO of Kushner Companies for the purpose of discussing business, rather than as part of a diplomatic effort. 
In particular, VEB characterized Gorkov's meeting with Kushner as part of a series of ``roadshow meetings'' with ``representatives of major US banks and business circles,'' which included ``negotiations'' and discussion of the ``most promising business lines and sectors.''% 1160 \footnote{Patrick Reevell \& Matthew Mosk, \textit{Russian Banker Sergey Gorkov Brushes off Questions About Meeting with Jared Kushner}, ABC News (June~1, 2017).} Foresman, the investment bank executive mentioned in \hyperlink{subsubsection.1.4.1.1}{Volume~I, Sections~IV.A.1} and~\hyperlink{subsubsection.1.4.2.3}{IV.B.3}, \textit{supra}, told the Office that he met with Gorkov and VEB deputy chairman Nikolay Tsekhomsky in Moscow just before Gorkov left for New York to meet Kushner.% 1161 \footnote{Foresman 10/17/18 302, at~14--15.} According to Foresman, Gorkov and Tsekhomsky told him that they were traveling to New York to discuss post-election issues with U.S. financial institutions, that their trip was sanctioned by Putin, and that they would be reporting back to Putin upon their return.% 1162 \footnote{Foresman 10/17/18 302, at~15--16.} The investigation did not resolve the apparent conflict in the accounts of Kushner and Gorkov or determine whether the meeting was diplomatic in nature (as Kushner stated), focused on business (as VEB's public statement indicated), or whether it involved some combination of those matters or other matters. Regardless, the investigation did not identify evidence that Kushner and Gorkov engaged in any substantive follow-up after the meeting. Rather, a few days after the meeting, Gorkov's assistant texted Kushner's assistant, ``Hi, please inform your side that the information about the meeting had a very positive response!''% 1163 \footnote{AKIN\_GUMP\_BERKOWITZ\_0000011 (12/19/16 Text Message, Ivanchenko to Berkowitz (9:56~a.m.)).} Over the following weeks, the two assistants exchanged a handful of additional cordial texts.% 1164 \footnote{AKIN\_GUMP\_BERKOWITZ\_0000011--15 (12/19/16--2/16/17 Text Messages, Ivanchenko \& Berkowitz).} On February~8, 2017, Gorkov's assistant texted Kushner's assistant (Berkowitz) to try to set up another meeting, and followed up by text at least twice in the days that followed.% 1165 \footnote{AKIN\_GUMP\_BERKOWITZ\_0000015 (2/8/17 Text Message, Ivanchenko to Berkowitz (10:41~a.m.)).} According to Berkowitz, he did not respond to the meeting request in light of the press coverage regarding the Russia investigation, and did not tell Kushner about the meeting request.% 1166 \footnote{Berkowitz 3/22/18 302, at~4--5.} \subsubsection{Petr Aven's Outreach Efforts to the Transition Team} In December 2016, weeks after the one-on-one meeting with Putin described in \hyperlink{paragraph.1.4.2.1.2}{Volume~I, Section~IV.B.1.b}, \textit{supra}, Petr Aven attended what he described as a separate ``all-hands'' oligarch meeting between Putin and Russia's most prominent businessmen.% 1167 \footnote{Aven 8/2/18 302, at~7; \blackout{Grand Jury}} As in Aven's one-on-one meeting, a main topic of discussion at the oligarch meeting in December 2016 was the prospect of forthcoming U.S. economic sanctions.% 1168 \footnote{\blackout{Grand Jury}} After the December 2016 all-hands meeting, Aven tried to establish a connection to the Trump team. Aven instructed Richard Burt to make contact with the incoming Trump Administration. 
Burt was on the board of directors for LetterOne (L1), another company headed by Aven, and had done work for Alfa-Bank.% 1169 \footnote{\blackout{Grand Jury} Aven 8/2/18 302, at~6.} Burt had previously served as U.S. ambassador to Germany and Assistant Secretary of State for European and Canadian Affairs, and one of his primary roles with Alfa-Bank and L1 was to facilitate introductions to business contacts in the United States and other Western countries.% 1170 \footnote{\blackout{Grand Jury} Aven 8/2/18 302, at~6; Burt 2/9/18 302, at~2.} While at a L1 board meeting held in Luxembourg in late December 2016, Aven pulled Burt aside and told him that he had spoken to someone high in the Russian government who expressed interest in establishing a communications channel between the Kremlin and the Trump Transition Team.% 1171 \footnote{Burt 2/9/18 302, at~2; \blackout{Grand Jury}} Aven asked for Burt's help in contacting members of the Transition Team.% 1172 \footnote{\blackout{Grand Jury}} Although Burt had been responsible for helping Aven build connections in the past, Burt viewed Aven's request as unusual and outside the normal realm of his dealings with Aven.% 1173 \footnote{Burt 2/9/18 302, at~4.} Burt, who is a member of the board of CNI (discussed at \hyperlink{subsubsection.1.4.1.4}{Volume~I, Section~IV.A.4}, \textit{supra}),% 1174 \footnote{Burt 2/9/18 302, at~5.} decided to approach CNI president Dimitri Simes for help facilitating Aven's request, recalling that Simes had some relationship with Kushner.% 1175 \footnote{Burt 2/9/18 302, at~3.} At the time, Simes was lobbying the Trump Transition Team, on Burt's behalf, to appoint Burt U.S. ambassador to Russia.% 1176 \footnote{Burt 2/9/18 302, at~3.} Burt contacted Simes by telephone and asked if he could arrange a meeting with Kushner to discuss setting up a high-level communications channel between Putin and the incoming Administration.% 1177 \footnote{Burt 2/9/18 302, at~3; Simes 3/27/18 302, at~4.} Simes told the Office that he declined and stated to Burt that setting up such a channel was not a good idea in light of the media attention surrounding Russian influence in the U.S. presidential election.% 1178 \footnote{Burt 2/9/18 302, at~3; Simes 3/27/18 302, at~4.} According to Simes, he understood that Burt was seeking a secret channel, and Simes did not want CNI to be seen as an intermediary between the Russian government and the incoming Administration.% 1179 \footnote{Simes 3/27/18 302, at~5.} Based on what Simes had read in the media, he stated that he already had concerns that Trump's business connections could be exploited by Russia, and Simes said that he did not want CNI to have any involvement or apparent involvement in facilitating any connection.% 1180 \footnote{Simes 3/27/18 302, at~5.} In an email dated December~22, 2016, Burt recounted for Aven his conversation with Simes: \begin{quote} Through a trusted third party, I have reached out to the very influential person I mentioned in Luxembourg concerning Project~A\null. There is an interest and an understanding for the need to establish such a channel. But the individual emphasized that at this moment, with so much intense interest in the Congress and the media over the question of cyber-hacking (and who ordered what), Project~A was too explosive to discuss. The individual agreed to discuss it again after the New Year. I trust the individual's instincts on this. 
If this is unclear or you would like to discuss, don't hesitate to call.% 1181 \footnote{12/22/16 Email, Burt to Aven (7:23~p.m.).} \end{quote} According to Burt, the ``very influential person'' referenced in his email was Simes, and the reference to a ``trusted third party'' was a fabrication, as no such third party existed. ``Project~A'' was a term that Burt created for Aven's effort to help establish a communications channel between Russia and the Trump team, which he used in light of the sensitivities surrounding what Aven was requesting, especially in light of the recent attention to Russia's influence in the U.S. presidential election.% 1182 \footnote{Burt 2/9/18 302, at~3.} According to Burt, his report that there was ``interest'' in a communications channel reflected Simes's views, not necessarily those of the Transition Team, and in any event, Burt acknowledged that he added some ``hype'' to that sentence to make it sound like there was more interest from the Transition Team than may have actually existed.% 1183 \footnote{Burt 2/9/18 302, at~3--4.} Aven replied to Burt's email on the same day, saying ``Thank you. All clear.''% 1184 \footnote{12/22/16 Email, Aven to Burt (4:58:22~p.m.).} According to Aven, this statement indicated that he did not want the outreach to continue.% 1185 \footnote{Aven 8/2/18 302, at~7.} Burt spoke to Aven some time thereafter about his attempt to make contact with the Trump team, explaining that the current environment made it impossible, \blackout{Grand Jury}% 1186 \footnote{\blackout{Grand Jury}} Burt did not recall discussing Aven's request with Simes again, nor did he recall speaking to anyone else about the request.% 1187 \footnote{Burt 2/9/18 302, at~3--4.} In the first quarter of 2017, Aven met again with Putin and other Russian officials.% 1188 \footnote{\blackout{Grand Jury}} At that meeting, Putin asked about Aven's attempt to build relations with the Trump Administration and Aven recounted his lack of success.% 1189 \footnote{\blackout{Grand Jury} Aven 8/2/18 302, at~7.} \blackout{Grand Jury}% 1190 \footnote{\blackout{Grand Jury}} Putin continued to inquire about Aven's efforts to connect to the Trump Administration in several subsequent quarterly meetings.% 1191 \footnote{\blackout{Grand Jury}} Aven also told Putin's chief of staff that he had been subpoenaed by the~FBI\null.% 1192 \footnote{Aven 8/2/18 302, at~8.} As part of that conversation, he reported that he had been asked by the FBI about whether he had worked to create a back channel between the Russian government and the Trump Administration.% 1193 \footnote{Aven 8/2/18 302, at~8; \blackout{Grand Jury}} According to Aven, the official showed no emotion in response to this report and did not appear to care.% 1194 \footnote{Aven 8/2/18 302, at~8; \blackout{Grand Jury}} \subsubsection{Carter Page Contact with Deputy Prime Minister Arkady Dvorkovich} In December 2016, more than two months after he was removed from the Trump Campaign, former Campaign foreign policy advisor Carter Page again visited Moscow in an attempt to pursue business opportunities.% 1195 \footnote{Page 3/10/17 302, at~4; Page 3/16/17 302, at~3; \blackout{Grand Jury} Among other meetings, Page contacted Andrey Baranov, head of investor relations at Rosneft, and they discussed the sale of Rosneft and meetings Baranov had attended with Rosneft CEO Igor Sechin. 
\blackout{Grand Jury}} \blackout{Grand Jury}% 1196 \footnote{\blackout{Grand Jury}} According to Konstantin Kilimnik, Paul Manafort's associate, Page also gave some individuals in Russia the impression that he had maintained his connections to President-Elect Trump. In a December~8, 2016 email intended for Manafort, Kilimnik wrote, ``Carter Page is in Moscow today, sending messages he is authorized to talk to Russia on behalf of DT on a range of issues of mutual interest, including Ukraine.''% 1197 \footnote{\blackout{Investigative Technique}} On December~9, 2016, Page went to dinner with NES employees Shlomo Weber and Andrej Krickovic.% 1198 \footnote{Page 3/16/17 302, at~3; Page 3/30/17 302, at~8.} Weber had contacted Dvorkovich to let him know that Page was in town and to invite him to stop by the dinner if he wished to do so, and Dvorkovich came to the restaurant for a few minutes to meet with Page.% 1199 \footnote{Weber 7/28/17 302, at~4; Page 3/16/17 302, at~3; \blackout{Grand Jury}} Dvorkovich congratulated Page on Trump's election and expressed interest in starting a dialogue between the United States and Russia.% 1200 \footnote{Page 3/16/17 302, at~3; \blackout{Grand Jury}} Dvorkovich asked Page if he could facilitate connecting Dvorkovich with individuals involved in the transition to be in a discussion of future cooperation.% 1201 \footnote{Page 3/16/17 302, at~3; \blackout{Grand Jury}} \blackout{Grand Jury}% 1202 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1203 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury} Dvorkovich separately discussed working together in the future by forming an academic partnership.% 1204 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1205 \footnote{\blackout{Grand Jury}} \blackout{Grand Jury}% 1206 \footnote{\blackout{Grand Jury}} \subsubsection{Contacts With and Through Michael T. Flynn} Incoming National Security Advisor Michael Flynn was the Transition Team's primary conduit for communications with the Russian Ambassador and dealt with Russia on two sensitive matters during the transition period: a United Nations Security Council vote and the Russian government's reaction to the United States's imposition of sanctions for Russian interference in the 2016 election.% 1207 \footnote{As discussed further in \hyperlink{subsubsection.1.5.3.4}{Volume~I, Section~V.C.4}, \textit{infra}, Flynn pleaded guilty to making false statements to the FBI, in violation of 18 U.S.C. \S~1001, about these communications with Ambassador Kislyak. Plea Agreement, \textit{United States~v.\ Michael T. Flynn}, No.~1:17-cr-232 (D.D.C. Dec.~1, 2017), Doc.~3. Flynn's plea agreement required that he cooperate with this Office, and the statements from Flynn in this report reflect his cooperation over the course of multiple debriefings in 2017 and~2018.} Despite Kushner's conclusion that Kislyak did not wield influence inside the Russian government, the Transition Team turned to Flynn's relationship with Kislyak on both issues. As to the sanctions, Flynn spoke by phone to K.T. McFarland, his incoming deputy, to prepare for his call to Kislyak; McFarland was with the President-Elect and other senior members of the Transition Team at Mar-a-Lago at the time. Although transition officials at Mar-a-Lago had some concern about possible Russian reactions to the sanctions, the investigation did not identify evidence that the President-Elect asked Flynn to make any request to~Kislyak. Flynn asked Kislyak not to escalate the situation in response to U.S. 
sanctions imposed on December~29, 2016, and Kislyak later reported to Flynn that Russia acceded to that request. \paragraph{United Nations Vote on Israeli Settlements} On December~21, 2016, Egypt submitted a resolution to the United Nations Security Council calling on Israel to cease settlement activities in Palestinian territory.% 1208 \footnote{Karen DeYoung, \textit{How the U.S. Came to Abstain on a U.N. Resolution Condemning Israeli Settlements}, Washington Post (Dec.~28, 2016).} The Security Council, which includes Russia, was scheduled to vote on the resolution the following day.% 1209 \footnote{Karen DeYoung, \textit{How the U.S. Came to Abstain on a U.N. Resolution Condemning Israeli Settlements}, Washington Post (Dec.~28, 2016).} There was speculation in the media that the Obama Administration would not oppose the resolution.% 1210 \footnote{Michelle Nichols \& Lesley Wroughton, \textit{U.S. Intended to Allow Passage of U.N. Draft Critical of Israel}, Reuters (Dec.~21, 2016).} According to Flynn, the Transition Team regarded the vote as a significant issue and wanted to support Israel by opposing the resolution.% 1211 \footnote{Flynn 11/16/17 302, at~12; Flynn 11/17/17 302, at~2.} On December~22, 2016, multiple members of the Transition Team, as well as President-Elect Trump, communicated with foreign government officials to determine their views on the resolution and to rally support to delay the vote or defeat the resolution.% 1212 \footnote{Flynn 11/16/17 302, at~12--14; Flynn 11/17/17 302, at~2.} Kushner led the effort for the Transition Team; Flynn was responsible for the Russian government.% 1213 \footnote{Flynn 11/16/17 302, at~12--14; Flynn 11/17/17 302, at~2; Kushner 11/1/17 302, at~3; 12/22/16 Email, Kushner to Flynn; 12/22/16 Email, McFarland to \blackout{Personal Privacy} et~al.} Minutes after an early morning phone call with Kushner on December~22, Flynn called Kislyak.% 1214 \footnote{Flynn 11/16/17 302, at~13; Call Records of Michael T. Flynn \blackout{Grand Jury}} According to Flynn, he informed Kislyak about the vote and the Transition Team's opposition to the resolution, and requested that Russia vote against or delay the resolution.% 1215 \footnote{Statement of Offense \P~3(d), \textit{United States~v.\ Michael T. Flynn}, No.~1:17-cr-232 (D.D.C. Dec.~1, 2017), Doc.~4 (``\textit{Flynn} Statement of Offense''); Flynn 11/16/17 302, at~12--13.} Later that day, President-Elect Trump spoke with Egyptian President Abdel Fattah al-Sisi about the vote.% 1216 \footnote{Flynn 11/17/17 302, at~2; Flynn 11/16/17 302, at~13.} Ultimately, Egypt postponed the vote.% 1217 \footnote{\textit{U.N. Vote on Israeli Settlement Postponed, ``Potentially Indefinitely''}, Reuters (Dec.~22, 2016).} On December~23, 2016, Malaysia, New Zealand, Senegal, and Venezuela resubmitted the resolution.% 1218 \footnote{Somini Sengupta \& Rick Gladstone, \textit{Rebuffing Israel, U.S. 
Allows Censure Over Settlements}, New York Times (Dec.~23, 2016).} Throughout the day, members of the Transition Team continued to talk with foreign leaders about the resolution, with Flynn continuing to lead the outreach with the Russian government through Kislyak.% 1219 \footnote{Flynn 11/16/17 302, at~12--14; Kushner 11/1/17 302, at~3; 12/23/16 Email, Flynn to Kushner et~al.} When Flynn again spoke with Kislyak, Kislyak informed Flynn that if the resolution came to a vote, Russia would not vote against it.% 1220 \footnote{\textit{Flynn} Statement of Offense \P~3(g).} The resolution later passed 14--0, with the United States abstaining.% 1221 \footnote{\textit{Israel's Settlements Have No Legal Validity, Constitute Flagrant Violation of International Law, Security Council Reaffirms}, 7853rd Meeting (PM), United Nations Security Council (Dec.~23, 2016).} \paragraph{U.S. Sanctions Against Russia} Flynn was also the Transition Team member who spoke with the Russian government when the Obama Administration imposed sanctions and other measures against Russia in response to Russia's interference in the 2016 presidential election. On December~28, 2016, then-President Obama signed Executive Order 13757, which took effect at 12:01~a.m. the following day and imposed sanctions on nine Russian individuals and entities.% 1222 \footnote{\textit{Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities}, The White House, Office of the Press Secretary (Dec.~29, 2016).} On December~29, 2016, the Obama Administration also expelled 35 Russian government officials and closed two Russian government-owned compounds in the United States.% 1223 \footnote{\textit{Statement by the President on Actions in Response to Russian Malicious Cyber Activity and Harassment}, The White House, Office of the Press Secretary (Dec.~29, 2016).} During the rollout of the sanctions, President-Elect Trump and multiple Transition Team senior officials, including McFarland, Steve Bannon, and Reince Priebus, were staying at the Mar-a-Lago club in Palm Beach, Florida. Flynn was on vacation in the Dominican Republic,% 1224 \footnote{Flynn 11/16/17 302, at~14; McFarland 12/22/17 302, at~3--8; Bannon 2/12/18 302, at~5.} but was in daily contact with McFarland.% 1225 \footnote{Flynn 11/17/17 302, at~5; Flynn 1/19/18 302, at~1; McFarland 11/22/17 302, at~3--9.} The Transition Team and President-Elect Trump were concerned that these sanctions would harm the United States's relationship with Russia.% 1226 \footnote{Flynn 11/17/17 302, at~3.} Although the details and timing of sanctions were unknown on December~28, 2016, the media began reporting that retaliatory measures from the Obama Administration against Russia were forthcoming.% 1227 \footnote{Christine Wang, \textit{US to announce new sanctions against Russia in response to election hacking}, CNBC (Dec.~28, 2016).} When asked about imposing sanctions on Russia for its alleged interference in the 2016 presidential election, President-Elect Trump told the media, ``I think we ought to get on with our lives.''% 1228 \footnote{John Wagner, \textit{Trump on alleged election interference by Russia: ``Get on with our lives''}, Washington Post (Dec.~29, 2016).} Russia initiated the outreach to the Transition Team. 
On the evening of December~28, 2016, Kislyak texted Flynn, ``can you kindly call me back at your convenience.''% 1229 \footnote{SF000006 (12/28/16 Text Message, Kislyak to Flynn).} Flynn did not respond to the text message that evening. Someone from the Russian Embassy also called Flynn the next morning, at 10:38~a.m., but they did not talk.% 1230 \footnote{Call Records of Michael T. Flynn \blackout{Grand Jury}} The sanctions were announced publicly on December~29, 2016.% 1231 \footnote{Flynn 11/17/17 302, at~2--3; McFarland 12/22/17 302, at~4--5.} At 1:53~p.m.\ that day, McFarland began exchanging emails with multiple Transition Team members and advisors about the impact the sanctions would have on the incoming Administration.% 1232 \footnote{12/29/16 Email, McFarland to O'Brien et~al.; 12/29/16 Email, McFarland to Flynn et~al.} At 2:07~p.m., a Transition Team member texted Flynn a link to a New York Times article about the sanctions.% 1233 \footnote{SF000001 (12/29/16 Text Message, Flaherty to Flynn).} At 2:29~p.m., McFarland called Flynn, but they did not talk.% 1234 \footnote{Call Records of K.T. McFarland \blackout{Grand Jury}} Shortly thereafter, McFarland and Bannon discussed the sanctions.% 1235 \footnote{McFarland 12/22/17 302, at~5--6.} According to McFarland, Bannon remarked that the sanctions would hurt their ability to have good relations with Russia, and that Russian escalation would make things more difficult.% 1236 \footnote{McFarland 12/22/17 302, at~5--6.} McFarland believed she told Bannon that Flynn was scheduled to talk to Kislyak later that night.% 1237 \footnote{McFarland 12/22/17 302, at~6.} McFarland also believed she may have discussed the sanctions with Priebus, and likewise told him that Flynn was scheduled to talk to Kislyak that night.% 1238 \footnote{McFarland 12/22/17 302, at~6.} At 3:14~p.m., Flynn texted a Transition Team member who was assisting McFarland, ``Time for a call???''% 1239 \footnote{SF000001 (12/29/16 Text Message, Flynn to Flaherty).} The Transition Team member responded that McFarland was on the phone with Tom Bossert, a Transition Team senior official, to which Flynn responded, ``Tit for tat w Russia not good. Russian AMBO reaching out to me today.''% 1240 \footnote{SF000001 (12/29/16 Text Message, Flynn to Flaherty).} Flynn recalled that he chose not to communicate with Kislyak about the sanctions until he had heard from the team at Mar-a-Lago.% 1241 \footnote{Flynn 11/20/17 302, at~3.} He first spoke with Michael Ledeen,% 1242 \footnote{Michael Ledeen is married to Barbara Ledeen, the Senate staffer whose 2016 efforts to locate Hillary Clinton's missing emails are described in \hyperlink{subsubsection.1.3.4.2}{Volume~I, Section~III.D.2}, \textit{supra}.} a Transition Team member who advised on foreign policy and national security matters, for 20~minutes.% 1243 \footnote{Flynn 11/17/17 302, at~3; Call Records of Michael Ledeen \blackout{Grand Jury}} Flynn then spoke with McFarland for almost 20~minutes to discuss what, if anything, to communicate to Kislyak about the sanctions.% 1244 \footnote{Flynn 11/17/17 302, at~3--4; \textit{Flynn} Statement of Offense \P~3(c); Call Records of K.T. McFarland \blackout{Grand Jury}; Call Records of Michael T. Flynn \blackout{Grand Jury}. 
} On that call, McFarland and Flynn discussed the sanctions, including their potential impact on the incoming Trump Administration's foreign policy goals.% 1245 \footnote{Flynn 11/17/17 302, at~3--4.} McFarland and Flynn also discussed that Transition Team members in Mar-a-Lago did not want Russia to escalate the situation.% 1246 \footnote{Flynn 11/17/17 302, at~3--4; \textit{Flynn} Statement of Offense \P~3(c); McFarland 12/22/17 302, at~6--7.} They both understood that Flynn would relay a message to Kislyak in hopes of making sure the situation would not get out of hand.% 1247 \footnote{Flynn 11/17/17 302, at~4; McFarland 12/22/17 302, at~6--7.} Immediately after speaking with McFarland, Flynn called and spoke with~Kislyak.% 1248 \footnote{\textit{Flynn} Statement of Offense \P~3(d).} Flynn discussed multiple topics with Kislyak, including the sanctions, scheduling a video teleconference between President-Elect Trump and Putin, an upcoming terrorism conference, and Russia's views about the Middle East.% 1249 \footnote{Flynn 11/17/17 302, at~3--4; \textit{Flynn} Statement of Offense \P~3(c); 12/30/16 Email, Flynn to McFarland.} With respect to the sanctions, Flynn requested that Russia not escalate the situation, not get into a ``tit for tat,'' and only respond to the sanctions in a reciprocal manner.% 1250 \footnote{Flynn 11/17/17 302, at~1; \textit{Flynn} Statement of Offense \P~3(d).} Multiple Transition Team members were aware that Flynn was speaking with Kislyak that day. In addition to her conversations with Bannon and Reince Priebus, at 4:43~p.m., McFarland sent an email to Transition Team members about the sanctions, informing the group that ``Gen [F]lynn is talking to russian ambassador this evening.''% 1251 \footnote{12/29/16 Email, McFarland to Flynn et~al.} Less than an hour later, McFarland briefed President-Elect Trump. 
Bannon, Priebus, Sean Spicer, and other Transition Team members were present.% 1252 \footnote{12/29/16 Email, Westerhout to Flaherty; McFarland 12/22/17 302, at~7.} During the briefing, President-Elect Trump asked McFarland if the Russians did ``it,'' meaning the intrusions intended to influence the presidential election.% 1253 \footnote{McFarland 12/22/17 302, at~7.} McFarland said yes, and President-Elect Trump expressed doubt that it was the Russians.% 1254 \footnote{McFarland 12/22/17 302, at~7.} McFarland also discussed potential Russian responses to the sanctions, and said Russia's response would be an indicator of what the Russians wanted going forward.% 1255 \footnote{McFarland 12/22/17 302, at~7.} President-Elect Trump opined that the sanctions provided him with leverage to use with the Russians.% 1256 \footnote{McFarland 12/22/17 302, at~7.} McFarland recalled that at the end of the meeting, someone may have mentioned to President-Elect Trump that Flynn was speaking to the Russian ambassador that evening.% 1257 \footnote{McFarland 12/22/17 302, at~7.} After the briefing, Flynn and McFarland spoke over the phone.% 1258 \footnote{McFarland 12/22/17 302, at~7.} Flynn reported on the substance of his call with Kislyak, including their discussion of the sanctions.% 1259 \footnote{Flynn 11/17/17 302, at~4; \textit{Flynn} Statement of Offense \P~3(e).} According to McFarland, Flynn mentioned that the Russian response to the sanctions was not going to be escalatory because they wanted a good relationship with the incoming Administration.% 1260 \footnote{McFarland 12/22/17 302, at~7.} McFarland also gave Flynn a summary of her recent briefing with President-Elect Trump.% 1261 \footnote{McFarland 12/22/17 302, at~7.} The next day, December~30, 2016, Russian Foreign Minister Sergey Lavrov remarked that Russia would respond in kind to the sanctions.% 1262 \footnote{\textit{Comment by Foreign Minister Sergey Lavrov on recent US sanctions and the expulsion of Russian diplomats, Moscow, December~30, 2016}, The Ministry of Foreign Affairs of the Russian Federation (Dec.~30, 2016 (5:32~a.m.)).} Putin superseded that comment two hours later, releasing a statement that Russia would not take retaliatory measures in response to the sanctions at that time.% 1263 \footnote{\textit{Statement of the President of the Russian Federation}, Kremlin, Office of the President (Dec.~30, 2016 (7:15~a.m.)).} Hours later President-Elect Trump tweeted, ``Great move on delay (by V.~Putin).''% 1264 \footnote{\UseVerb{DJT} 12/30/16 (11:41~a.m.) Tweet.} Shortly thereafter, Flynn sent a text message to McFarland summarizing his call with Kislyak from the day before, which she emailed to Kushner, Bannon, Priebus, and other Transition Team members.% 1265 \footnote{12/30/16 Email, Flynn to McFarland; 12/30/16 Email, McFarland to Kushner et~al.} The text message and email did not include sanctions as one of the topics discussed with~Kislyak.% 1266 \footnote{12/30/16 Email, McFarland to Kushner et~al.} Flynn told the Office that he did not document his discussion of sanctions because it could be perceived as getting in the way of the Obama Administration's foreign policy.% 1267 \footnote{Flynn 11/17/17 302, at~4.} On December~31, 2016, Kislyak called Flynn and told him the request had been received at the highest levels and that Russia had chosen not to retaliate to the sanctions in response to the request.% 1268 \footnote{Call Records of Michael T.
Flynn \blackout{Grand Jury}; Flynn 11/17/17 302, at~1; Flynn 1/19/17 302, at~3; \textit{Flynn} Statement of Offense \P~3(g).} Two hours later, Flynn spoke with McFarland and relayed his conversation with~Kislyak.% 1269 \footnote{Call Records of Michael T. Flynn \blackout{Grand Jury}; Flynn 11/17/17 302, at~5; Flynn 1/19/17 302, at~3; McFarland 12/22/17 302, at~10.} According to McFarland, Flynn remarked that the Russians wanted a better relationship and that the relationship was back on track.% 1270 \footnote{McFarland 12/22/17 302, at~10.} Flynn also told McFarland that he believed his phone call had made a difference.% 1271 \footnote{McFarland 12/22/17 302, at~10.} McFarland recalled congratulating Flynn in response.% 1272 \footnote{McFarland 12/22/17 302, at~10.} Flynn spoke with other Transition Team members that day, but does not recall whether they discussed the sanctions.% 1273 \footnote{Flynn 11/17/17 302, at~5--6.} Flynn recalled discussing the sanctions with Bannon the next day and that Bannon appeared to know about Flynn's conversation with~Kislyak.% 1274 \footnote{Flynn 11/21/17 302, at~1; Flynn 11/20/17 302, at~3; Flynn 1/19/17 302, at~5; \textit{Flynn} Statement of Offense \P~3(h).} Bannon, for his part, recalled meeting with Flynn that day, but said that he did not remember discussing sanctions with him.% 1275 \footnote{Bannon 2/12/18 302, at~9.} Additional information about Flynn's sanctions-related discussions with Kislyak, and the handling of those discussions by the Transition Team and the Trump Administration, is provided in \hyperref[chap:volume-2]{Volume~II} of this report. \hr In sum, the investigation established multiple links between Trump Campaign officials and individuals tied to the Russian government. Those links included Russian offers of assistance to the Campaign. In some instances, the Campaign was receptive to the offer, while in other instances the Campaign officials shied away. Ultimately, the investigation did not establish that the Campaign coordinated or conspired with the Russian government in its election-interference activities.
{ "alphanum_fraction": 0.7813495264, "avg_line_length": 88.3445096396, "ext": "tex", "hexsha": "2e617057a4ad9c54484ede257f10a9ffb0982b19", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2019-12-10T19:38:44.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-20T21:02:20.000Z", "max_forks_repo_head_hexsha": "d8fac96fa2d04aa31516d7079533b20703d8dfee", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "mds08011/multi-publish", "max_forks_repo_path": "src/volume-1/russian-links-to-trump.tex", "max_issues_count": 64, "max_issues_repo_head_hexsha": "3aa16a20104f48623ce8e12c8502ecb1867a40f8", "max_issues_repo_issues_event_max_datetime": "2019-09-02T03:01:19.000Z", "max_issues_repo_issues_event_min_datetime": "2019-04-20T13:38:54.000Z", "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "ascherer/mueller-report", "max_issues_repo_path": "src/volume-1/russian-links-to-trump.tex", "max_line_length": 876, "max_stars_count": 57, "max_stars_repo_head_hexsha": "3aa16a20104f48623ce8e12c8502ecb1867a40f8", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "ascherer/mueller-report", "max_stars_repo_path": "src/volume-1/russian-links-to-trump.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-16T13:32:17.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-20T13:29:36.000Z", "num_tokens": 85341, "size": 316185 }
% language=uk % mp solve_path \environment luatex-style \startcomponent luatex-graphics \startchapter[reference=graphics,title={The graphic libraries}] \startsection[title={The \type {img} library}][library=img] \topicindex {images} \topicindex {images+library} \topicindex {graphics} The \type {img} library can be used as an alternative to \orm {pdfximage} and \orm {pdfrefximage}, and the associated \quote {satellite} commands like \type {\pdfximagebbox}. Image objects can also be used within virtual fonts via the \type {image} command listed in~\in {section} [virtualfonts]. \subsection{\type {new}} \libindex{new} \startfunctioncall <image> var = img.new() <image> var = img.new(<table> image_spec) \stopfunctioncall This function creates a userdata object of type \quote {image}. The \type {image_spec} argument is optional. If it is given, it must be a table, and that table must contain a \type {filename} key. A number of other keys can also be useful; these are explained below. You can either say \starttyping a = img.new() \stoptyping followed by \starttyping a.filename = "foo.png" \stoptyping or you can put the file name (and some or all of the other keys) into a table directly, like so: \starttyping a = img.new({filename='foo.pdf', page=1}) \stoptyping The generated \type {<image>} userdata object allows access to a set of user|-|specified values as well as a set of values that are normally filled in and updated automatically by \LUATEX\ itself. Some of those are derived from the actual image file; others are updated to reflect the \PDF\ output status of the object. There is one required user-specified field: the file name (\type {filename}). It can optionally be augmented by the requested image dimensions (\type {width}, \type {depth}, \type {height}), user|-|specified image attributes (\type {attr}), the requested \PDF\ page identifier (\type {page}), the requested boundingbox (\type {pagebox}) for \PDF\ inclusion, the requested color space object (\type {colorspace}). The function \type {img.new} does not access the actual image file; it just creates the \type {<image>} userdata object and initializes some memory structures. The \type {<image>} object and its internal structures are automatically garbage collected. Once the image is scanned, all the values in the \type {<image>} except \type {width}, \type {height} and \type {depth} become frozen, and you cannot change them any more. You can use \type {pdf.setignoreunknownimages(1)} (or at the \TEX\ end the \lpr {pdfvariable} \type {ignoreunknownimages}) to get around a quit when no known image type is found (based on name or preamble). Beware: this will not catch invalid images and we cannot guarantee side effects. A zero dimension image is still included when requested. No special flags are set. A proper workflow will not rely on such a catch but make sure that images are valid. \subsection{\type {fields}} \libindex{fields} \startfunctioncall <table> keys = img.fields() \stopfunctioncall This function returns a list of all the possible \type {image_spec} keys, both user-supplied and automatic ones.
\starttabulate[|l|l|p|] \DB field name \BC type \BC description \NC \NR \TB \NC \type{attr} \NC string \NC the image attributes for \LUATEX \NC \NR \NC \type{bbox} \NC table \NC table with 4 boundingbox dimensions \type {llx}, \type {lly}, \type {urx} and \type {ury} overruling the \type {pagebox} entry \NC \NR \NC \type{colordepth} \NC number \NC the number of bits used by the color space \NC \NR \NC \type{colorspace} \NC number \NC the color space object number \NC \NR \NC \type{depth} \NC number \NC the image depth for \LUATEX \NC \NR \NC \type{filename} \NC string \NC the image file name \NC \NR \NC \type{filepath} \NC string \NC the full (expanded) file name of the image\NC \NR \NC \type{height} \NC number \NC the image height for \LUATEX \NC \NR \NC \type{imagetype} \NC string \NC one of \type {pdf}, \type {png}, \type {jpg}, \type {jp2} or \type {jbig2} \NC \NR \NC \type{index} \NC number \NC the \PDF\ image name suffix \NC \NR \NC \type{objnum} \NC number \NC the \PDF\ image object number \NC \NR \NC \type{page} \NC number \NC the identifier for the requested image page \NC \NR \NC \type{pagebox} \NC string \NC the requested bounding box, one of \type {none}, \type {media}, \type {crop}, \type {bleed}, \type {trim}, \type {art} \NC \NR \NC \type{pages} \NC number \NC the total number of available pages \NC \NR \NC \type{rotation} \NC number \NC the image rotation from included \PDF\ file, in multiples of 90~deg. \NC \NR \NC \type{stream} \NC string \NC the raw stream data for an \type {/Xobject} \type {/Form} object\NC \NR \NC \type{transform} \NC number \NC the image transform, integer number 0..7 \NC \NR \NC \type{orientation} \NC number \NC the (jpeg) image orientation, integer number 1..8 (0 for unset) \NC \NR \NC \type{width} \NC number \NC the image width for \LUATEX \NC \NR \NC \type{xres} \NC number \NC the horizontal natural image resolution (in \DPI) \NC \NR \NC \type{xsize} \NC number \NC the natural image width \NC \NR \NC \type{yres} \NC number \NC the vertical natural image resolution (in \DPI) \NC \NR \NC \type{ysize} \NC number \NC the natural image height \NC \NR \NC \type{visiblefilename} \NC string \NC when set, this name will find its way in the \PDF\ file as \type {PTEX} specification; when an empty string is assigned nothing is written to file; otherwise the natural filename is taken \NC \NR \NC \type{userpassword} \NC string \NC the userpassword needed for opening a \PDF\ file \NC \NR \NC \type{ownerpassword} \NC string \NC the ownerpassword needed for opening a \PDF\ file \NC \NR \NC \type{keepopen} \NC boolean \NC keep the \PDF\ file open \NC \NR \NC \type{nobbox} \NC boolean \NC don't add a boundingbox specification for streams \NC \NR \NC \type{nolength} \NC boolean \NC don't add length key nor compress for streams \NC \NR \NC \type{nosize} \NC boolean \NC don't add size fields for streams \NC \NR \LL \stoptabulate A running (undefined) dimension in \type {width}, \type {height}, or \type {depth} is represented as \type {nil} in \LUA, so if you want to load an image at its \quote {natural} size, you do not have to specify any of those three fields. The \type {stream} parameter allows to fabricate an \type {/XObject} \type {/Form} object from a string giving the stream contents, e.g., for a filled rectangle: \startfunctioncall a.stream = "0 0 20 10 re f" \stopfunctioncall When writing the image, an \type {/Xobject} \type {/Form} object is created, like with embedded \PDF\ file writing. The object is written out only once. 
The \type {stream} key requires that also the \type {bbox} table is given. The \type {stream} key conflicts with the \type {filename} key. The \type {transform} key works as usual also with \type {stream}. The \type {bbox} key needs a table with four boundingbox values, e.g.: \startfunctioncall a.bbox = { "30bp", 0, "225bp", "200bp" } \stopfunctioncall This replaces and overrules any given \type {pagebox} value; with given \type {bbox} the box dimensions coming with an embedded \PDF\ file are ignored. The \type {xsize} and \type {ysize} dimensions are set accordingly, when the image is scaled. The \type {bbox} parameter is ignored for non-\PDF\ images. The \type {transform} allows to mirror and rotate the image in steps of 90~deg. The default value~$0$ gives an unmirrored, unrotated image. Values $1-3$ give counterclockwise rotation by $90$, $180$, or $270$~degrees, whereas with values $4-7$ the image is first mirrored and then rotated counterclockwise by $90$, $180$, or $270$~degrees. The \type {transform} operation gives the same visual result as if you would externally preprocess the image by a graphics tool and then use it by \LUATEX. If a \PDF\ file to be embedded already contains a \type {/Rotate} specification, the rotation result is the combination of the \type {/Rotate} rotation followed by the \type {transform} operation. \subsection{\type {scan}} \libindex{scan} \startfunctioncall <image> var = img.scan(<image> var) <image> var = img.scan(<table> image_spec) \stopfunctioncall When you say \type {img.scan(a)} for a new image, the file is scanned, and variables such as \type {xsize}, \type {ysize}, image \type {type}, number of \type {pages}, and the resolution are extracted. Each of the \type {width}, \type {height}, \type {depth} fields is set up according to the image dimensions, if they were not given an explicit value already. An image file will never be scanned more than once for a given image variable. With all subsequent \type {img.scan(a)} calls only the dimensions are again set up (if they have been changed by the user in the meantime). For ease of use, you can do right away a \starttyping <image> a = img.scan { filename = "foo.png" } \stoptyping without a prior \type {img.new}. Nothing is written yet at this point, so you can do \type {a=img.scan}, retrieve the available info like image width and height, and then throw away \type {a} again by saying \type {a=nil}. In that case no image object will be reserved in the \PDF\ file, and the used memory will be cleaned up automatically. \subsection{\type {copy}} \libindex{copy} \startfunctioncall <image> var = img.copy(<image> var) <image> var = img.copy(<table> image_spec) \stopfunctioncall If you say \type {a = b}, then both variables point to the same \type {<image>} object. If you want to write out an image with different sizes, you can do \type {b = img.copy(a)}. Afterwards, \type {a} and \type {b} still reference the same actual image dictionary, but the dimensions for \type {b} can now be changed from their initial values that were just copies from \type {a}.
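For instance, the following sketch (assuming that a file \type {foo.png} is present, using \type {img.write} from the next subsection, and meant to run from inside \type {\directlua} while a page is being built) ships out the same image twice at different sizes:

\starttyping
local a = img.scan { filename = "foo.png" }  -- scan once, dimensions get their natural values
local b = img.copy(a)                        -- same image dictionary, independent dimensions

b.width  = tex.sp("3cm")                     -- resize only the copy; dimensions are scaled points
b.height = tex.sp("2cm")

img.write(a)                                 -- placed at its natural size
img.write(b)                                 -- same image data, placed at 3cm by 2cm
\stoptyping

Both \type {img.write} calls refer to the same image object in the \PDF\ file; only the placement dimensions differ.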
\subsection{\type {write}, \type {immediatewrite}, \type {immediatewriteobject}} \topicindex {images+injection} \topicindex {images+immediate} \topicindex {images+object} \libindex{write} \libindex{immediatewrite} \libindex{immediatewriteobject} \startfunctioncall <image> var = img.write(<image> var) <image> var = img.write(<table> image_spec) \stopfunctioncall By \type {img.write(a)} a \PDF\ object number is allocated, and a rule node of subtype \type {image} is generated and put into the output list. By this the image \type {a} is placed into the page stream, and the image file is written out into an image stream object after the shipping of the current page is finished. Again you can do a terse call like \starttyping img.write { filename = "foo.png" } \stoptyping The \type {<image>} variable is returned in case you want it for later processing. You can also write an object. By \type {img.immediatewrite(a)} a \PDF\ object number is allocated, and the image file for image \type {a} is written out immediately into the \PDF\ file as an image stream object (like with \prm {immediate}\orm {pdfximage}). The object number of the image stream dictionary is then available by the \type {objnum} key. No \type {pdf_refximage} whatsit node is generated. You will need an \type {img.write(a)} or \type {img.node(a)} call to let the image appear on the page, or reference it by another trick; else you will have a dangling image object in the \PDF\ file. \startfunctioncall <image> var = img.immediatewrite(<image> var) <image> var = img.immediatewrite(<table> image_spec) \stopfunctioncall Also here you can do a terse call like \starttyping a = img.immediatewrite { filename = "foo.png" } \stoptyping The \type {<image>} variable is returned and you will most likely need it. The next function is kind of special as it copies an object from a (\PDF) image file. This features is experimental and might disappear. \startfunctioncall <integer> objnum = img.immediatewriteobject(<image> var, <integer> objnum) <integer> objnum = img.immediatewriteobject(<table> image_spec, <integer> objnum) \stopfunctioncall \subsection{\type {node}} \libindex{node} \startfunctioncall <node> n = img.node(<image> var) <node> n = img.node(<table> image_spec) \stopfunctioncall This function allocates a \PDF\ object number and returns a whatsit node of subtype \type {pdf_refximage}, filled with the image parameters \type {width}, \type {height}, \type {depth}, and \type {objnum}. Also here you can do a terse call like: \starttyping n = img.node ({ filename = "foo.png" }) \stoptyping This example outputs an image: \starttyping node.write(img.node{filename="foo.png"}) \stoptyping \subsection{\type {types}} \topicindex {images+types} \libindex{types} \startfunctioncall <table> types = img.types() \stopfunctioncall This function returns a list with the supported image file type names, currently these are \type {pdf}, \type {png}, \type {jpg}, \type {jp2} (JPEG~2000), and \type {jbig2}. \subsection{\type {boxes}} \libindex{boxes} \startfunctioncall <table> boxes = img.boxes() \stopfunctioncall This function returns a list with the supported \PDF\ page box names, currently these are \type {media}, \type {crop}, \type {bleed}, \type {trim}, and \type {art}, all in lowercase. The \PDF\ file is kept open after its properties are determined. After inclusion, which happens when the page that references the image is flushed, the file is closed. 
This means that when you have thousands of images on one page, your operating system might decide to abort the run. When you include more than one page from a \PDF\ file you can set the \type {keepopen} flag when you allocate an image object, or pass the \type {keepopen} directive when you refer to the image with \lpr {useimageresource}. This only makes sense when you embed many pages. An \prm {immediate} applied to \lpr {saveimageresource} will also force a close after inclusion. \starttyping \immediate\useimageresource{foo.pdf}% \saveimageresource \lastsavedimageresourceindex % closed \useimageresource{foo.pdf}% \saveimageresource \lastsavedimageresourceindex % kept open \useimageresource{foo.pdf}% \saveimageresource keepopen\lastsavedimageresourceindex % kept open \directlua{img.write(img.scan{ file = "foo.pdf" })} % closed \directlua{img.write(img.scan{ file = "foo.pdf", keepopen = true })} % kept open \stoptyping \stopsection \startsection[title={The \type {mplib} library}][library=mplib] \topicindex {\METAPOST} \topicindex {\METAPOST+mplib} \topicindex {images+mplib} \topicindex {images+\METAPOST} \libindex{version} The \MP\ library interface registers itself in the table \type {mplib}. It is based on \MPLIB\ version \ctxlua {context(mplib.version())}. \subsection{\type {new}} \libindex{new} To create a new \METAPOST\ instance, call \startfunctioncall <mpinstance> mp = mplib.new({...}) \stopfunctioncall This creates the \type {mp} instance object. The argument hash can have a number of different fields, as follows: \starttabulate[|l|l|pl|pl|] \DB name \BC type \BC description \BC default \NC \NR \TB \NC \type{error_line} \NC number \NC error line width \NC 79 \NC \NR \NC \type{print_line} \NC number \NC line length in ps output \NC 100 \NC \NR \NC \type{random_seed} \NC number \NC the initial random seed \NC variable \NC \NR \NC \type{math_mode} \NC string \NC the number system to use: \type {scaled}, \type {double} or % \type {binary} or \type {decimal} \NC \type {scaled} \NC \NR \NC \type{interaction} \NC string \NC the interaction mode: \type {batch}, \type {nonstop}, \type {scroll} or \type {errorstop} \NC \type {errorstop} \NC \NR \NC \type{job_name} \NC string \NC \type {--jobname} \NC \type {mpout} \NC \NR \NC \type{find_file} \NC function \NC a function to find files \NC only local files \NC \NR \LL \stoptabulate The binary mode is no longer available in the \LUATEX\ version of \MPLIB. It offers no real advantage and brings a ton of extra libraries with platform specific properties that we can now avoid. We might introduce a high resolution scaled variant at some point but only when it pays of performance wise. The \type {find_file} function should be of this form: \starttyping <string> found = finder (<string> name, <string> mode, <string> type) \stoptyping with: \starttabulate[|l|p|] \DB name \BC the requested file \NC \NR \TB \NC \type{mode} \NC the file mode: \type {r} or \type {w} \NC \NR \NC \type{type} \NC the kind of file, one of: \type {mp}, \type {tfm}, \type {map}, \type {pfb}, \type {enc} \NC \NR \LL \stoptabulate Return either the full path name of the found file, or \type {nil} if the file cannot be found. Note that the new version of \MPLIB\ no longer uses binary mem files, so the way to preload a set of macros is simply to start off with an \type {input} command in the first \type {execute} call. 
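A minimal sketch of such a setup (the finder below simply delegates to the
\type {kpse} library, which is assumed to be initialized and to understand the
given type names, and \type {mymacros.mp} is a made-up file name):

\starttyping
local mp = mplib.new {
    math_mode   = "scaled",
    interaction = "errorstop",
    find_file   = function(name, mode, ftype)
        if mode == "w" then
            return name                     -- keep files opened for writing local
        end
        return kpse.find_file(name, ftype)  -- assumption: kpse accepts these types
    end,
}
-- there is no implied input, so load the wanted macros in the first chunk
local rettable = mp:execute("input mymacros ;")
\stoptyping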
When you are processing a snippet of text starting with \type {btex} and ending with either \type {etex} or \type {verbatimtex}, the \METAPOST\ \type {texscriptmode} parameter controls how spaces and newlines get honoured. The default value is~1. Possible values are:

\starttabulate[|l|p|]
\DB name \BC meaning \NC \NR
\TB
\NC \type {0} \NC no newlines \NC \NR
\NC \type {1} \NC newlines in \type {verbatimtex} \NC \NR
\NC \type {2} \NC newlines in \type {verbatimtex} and \type {etex} \NC \NR
\NC \type {3} \NC no leading and trailing strip in \type {verbatimtex} \NC \NR
\NC \type {4} \NC no leading and trailing strip in \type {verbatimtex} and \type {btex} \NC \NR
\LL
\stoptabulate

That way the \LUA\ handler (assigned to \type {make_text}) can do what it likes. An \type {etex} has to be followed by a space or \type {;} (or be at the end of a line) and has to be preceded by a space (or be at the beginning of a line).

\subsection{\type {statistics}}

\libindex{statistics}

You can request statistics with:

\startfunctioncall
<table> stats = mp:statistics()
\stopfunctioncall

This function returns the vital statistics for an \MPLIB\ instance. There are four fields, giving the maximum number of used items in each of four allocated object classes:

\starttabulate[|l|l|p|]
\DB field \BC type \BC explanation \NC \NR
\TB
\NC \type{main_memory} \NC number \NC memory size \NC \NR
\NC \type{hash_size} \NC number \NC hash size \NC \NR
\NC \type{param_size} \NC number \NC simultaneous macro parameters \NC \NR
\NC \type{max_in_open} \NC number \NC input file nesting levels \NC \NR
\LL
\stoptabulate

Note that in the new version of \MPLIB, this is informational only. The objects are all allocated dynamically, so there is no chance of running out of space unless the available system memory is exhausted.

\subsection{\type {execute}}

\libindex{execute}

You can ask the \METAPOST\ interpreter to run a chunk of code by calling

\startfunctioncall
<table> rettable = execute(mp,"metapost code")
\stopfunctioncall

for various bits of \METAPOST\ language input. Be sure to check the \type {rettable.status} (see below) because when a fatal \METAPOST\ error occurs the \MPLIB\ instance will become unusable thereafter. Generally speaking, it is best to keep your chunks small, but beware that all chunks have to obey proper syntax, as if each of them were a small file. For instance, you cannot split a single statement over multiple chunks. In contrast with the normal stand-alone \type {mpost} command, there is \notabene {no} implied \quote{input} at the start of the first chunk.

\subsection{\type {finish}}

\libindex{finish}

\startfunctioncall
<table> rettable = finish(mp)
\stopfunctioncall

If for some reason you want to stop using an \MPLIB\ instance while processing is not yet actually done, you can call \type {finish}. Eventually, used memory will be freed and open files will be closed by the \LUA\ garbage collector, but an explicit \type {finish} is the only way to capture the final part of the output streams.

\subsection{Result table}

\libindex {fields}

The return value of \type {execute} and \type {finish} is a table with a few possible keys (only \type {status} is always guaranteed to be present).
\starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{log} \NC string \NC output to the \quote {log} stream \NC \NR \NC \type{term} \NC string \NC output to the \quote {term} stream \NC \NR \NC \type{error} \NC string \NC output to the \quote {error} stream (only used for \quote {out of memory}) \NC \NR \NC \type{status} \NC number \NC the return value: \type {0} = good, \type {1} = warning, \type {2} = errors, \type {3} = fatal error \NC \NR \NC \type{fig} \NC table \NC an array of generated figures (if any) \NC \NR \LL \stoptabulate When \type {status} equals~3, you should stop using this \MPLIB\ instance immediately, it is no longer capable of processing input. If it is present, each of the entries in the \type {fig} array is a userdata representing a figure object, and each of those has a number of object methods you can call: \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{boundingbox} \NC function \NC returns the bounding box, as an array of 4 values \NC \NR \NC \type{postscript} \NC function \NC returns a string that is the ps output of the \type {fig}. this function accepts two optional integer arguments for specifying the values of \type {prologues} (first argument) and \type {procset} (second argument) \NC \NR \NC \type{svg} \NC function \NC returns a string that is the svg output of the \type {fig}. This function accepts an optional integer argument for specifying the value of \type {prologues} \NC \NR \NC \type{objects} \NC function \NC returns the actual array of graphic objects in this \type {fig} \NC \NR \NC \type{copy_objects} \NC function \NC returns a deep copy of the array of graphic objects in this \type {fig} \NC \NR \NC \type{filename} \NC function \NC the filename this \type {fig}'s \POSTSCRIPT\ output would have written to in stand alone mode \NC \NR \NC \type{width} \NC function \NC the \type {fontcharwd} value \NC \NR \NC \type{height} \NC function \NC the \type {fontcharht} value \NC \NR \NC \type{depth} \NC function \NC the \type {fontchardp} value \NC \NR \NC \type{italcorr} \NC function \NC the \type {fontcharit} value \NC \NR \NC \type{charcode} \NC function \NC the (rounded) \type {charcode} value \NC \NR \LL \stoptabulate Note: you can call \type {fig:objects()} only once for any one \type {fig} object! When the boundingbox represents a \quote {negated rectangle}, i.e.\ when the first set of coordinates is larger than the second set, the picture is empty. Graphical objects come in various types that each has a different list of accessible values. The types are: \type {fill}, \type {outline}, \type {text}, \type {start_clip}, \type {stop_clip}, \type {start_bounds}, \type {stop_bounds}, \type {special}. There is a helper function (\type {mplib.fields(obj)}) to get the list of accessible values for a particular object, but you can just as easily use the tables given below. All graphical objects have a field \type {type} that gives the object type as a string value; it is not explicit mentioned in the following tables. In the following, \type {number}s are \POSTSCRIPT\ points represented as a floating point number, unless stated otherwise. Field values that are of type \type {table} are explained in the next section. 
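As an illustration of how these pieces fit together, the following sketch runs one
chunk and turns any resulting figures into \POSTSCRIPT\ code (it assumes that the
plain macros were already loaded in an earlier \type {input} chunk):

\starttyping
local rettable = mp:execute("beginfig(1) ; draw fullcircle scaled 100 ; endfig ;")
if rettable.status > 1 then
    print(rettable.term or rettable.log) -- 2 means errors, 3 means give up on mp
elseif rettable.fig then
    for i, fig in ipairs(rettable.fig) do
        local bb = fig:boundingbox()     -- array of 4 numbers
        local ps = fig:postscript()      -- or fig:svg(), or fig:objects()
        -- hand ps over to whatever backend is in use
    end
end
\stoptyping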
\subsubsection{fill} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{path} \NC table \NC the list of knots \NC \NR \NC \type{htap} \NC table \NC the list of knots for the reversed trajectory \NC \NR \NC \type{pen} \NC table \NC knots of the pen \NC \NR \NC \type{color} \NC table \NC the object's color \NC \NR \NC \type{linejoin} \NC number \NC line join style (bare number)\NC \NR \NC \type{miterlimit} \NC number \NC miterlimit\NC \NR \NC \type{prescript} \NC string \NC the prescript text \NC \NR \NC \type{postscript} \NC string \NC the postscript text \NC \NR \LL \stoptabulate The entries \type {htap} and \type {pen} are optional. \subsubsection{outline} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{path} \NC table \NC the list of knots \NC \NR \NC \type{pen} \NC table \NC knots of the pen \NC \NR \NC \type{color} \NC table \NC the object's color \NC \NR \NC \type{linejoin} \NC number \NC line join style (bare number) \NC \NR \NC \type{miterlimit} \NC number \NC miterlimit \NC \NR \NC \type{linecap} \NC number \NC line cap style (bare number) \NC \NR \NC \type{dash} \NC table \NC representation of a dash list \NC \NR \NC \type{prescript} \NC string \NC the prescript text \NC \NR \NC \type{postscript} \NC string \NC the postscript text \NC \NR \LL \stoptabulate The entry \type {dash} is optional. \subsubsection{text} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{text} \NC string \NC the text \NC \NR \NC \type{font} \NC string \NC font tfm name \NC \NR \NC \type{dsize} \NC number \NC font size \NC \NR \NC \type{color} \NC table \NC the object's color \NC \NR \NC \type{width} \NC number \NC \NC \NR \NC \type{height} \NC number \NC \NC \NR \NC \type{depth} \NC number \NC \NC \NR \NC \type{transform} \NC table \NC a text transformation \NC \NR \NC \type{prescript} \NC string \NC the prescript text \NC \NR \NC \type{postscript} \NC string \NC the postscript text \NC \NR \LL \stoptabulate \subsubsection{special} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{prescript} \NC string \NC special text \NC \NR \LL \stoptabulate \subsubsection{start_bounds, start_clip} \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{path} \NC table \NC the list of knots \NC \NR \LL \stoptabulate \subsubsection{stop_bounds, stop_clip} Here are no fields available. \subsection{Subsidiary table formats} \subsubsection{Paths and pens} Paths and pens (that are really just a special type of paths as far as \MPLIB\ is concerned) are represented by an array where each entry is a table that represents a knot. 
\starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{left_type} \NC string \NC when present: endpoint, but usually absent \NC \NR \NC \type{right_type} \NC string \NC like \type {left_type} \NC \NR \NC \type{x_coord} \NC number \NC X coordinate of this knot \NC \NR \NC \type{y_coord} \NC number \NC Y coordinate of this knot \NC \NR \NC \type{left_x} \NC number \NC X coordinate of the precontrol point of this knot \NC \NR \NC \type{left_y} \NC number \NC Y coordinate of the precontrol point of this knot \NC \NR \NC \type{right_x} \NC number \NC X coordinate of the postcontrol point of this knot \NC \NR \NC \type{right_y} \NC number \NC Y coordinate of the postcontrol point of this knot \NC \NR \LL \stoptabulate There is one special case: pens that are (possibly transformed) ellipses have an extra string-valued key \type {type} with value \type {elliptical} besides the array part containing the knot list. \subsubsection{Colors} A color is an integer array with 0, 1, 3 or 4 values: \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{0} \NC marking only \NC no values \NC \NR \NC \type{1} \NC greyscale \NC one value in the range $(0,1)$, \quote {black} is $0$ \NC \NR \NC \type{3} \NC \RGB \NC three values in the range $(0,1)$, \quote {black} is $0,0,0$ \NC \NR \NC \type{4} \NC \CMYK \NC four values in the range $(0,1)$, \quote {black} is $0,0,0,1$ \NC \NR \LL \stoptabulate If the color model of the internal object was \type {uninitialized}, then it was initialized to the values representing \quote {black} in the colorspace \type {defaultcolormodel} that was in effect at the time of the \type {shipout}. \subsubsection{Transforms} Each transform is a six|-|item array. \starttabulate[|l|l|p|] \DB index \BC type \BC explanation \NC \NR \TB \NC \type{1} \NC number \NC represents x \NC \NR \NC \type{2} \NC number \NC represents y \NC \NR \NC \type{3} \NC number \NC represents xx \NC \NR \NC \type{4} \NC number \NC represents yx \NC \NR \NC \type{5} \NC number \NC represents xy \NC \NR \NC \type{6} \NC number \NC represents yy \NC \NR \LL \stoptabulate Note that the translation (index 1 and 2) comes first. This differs from the ordering in \POSTSCRIPT, where the translation comes last. \subsubsection{Dashes} Each \type {dash} is two-item hash, using the same model as \POSTSCRIPT\ for the representation of the dashlist. \type {dashes} is an array of \quote {on} and \quote {off}, values, and \type {offset} is the phase of the pattern. \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{dashes} \NC hash \NC an array of on-off numbers \NC \NR \NC \type{offset} \NC number \NC the starting offset value \NC \NR \LL \stoptabulate \subsection{Pens and \type {pen_info}} \libindex{pen_info} There is helper function (\type {pen_info(obj)}) that returns a table containing a bunch of vital characteristics of the used pen (all values are floats): \starttabulate[|l|l|p|] \DB field \BC type \BC explanation \NC \NR \TB \NC \type{width} \NC number \NC width of the pen \NC \NR \NC \type{sx} \NC number \NC $x$ scale \NC \NR \NC \type{rx} \NC number \NC $xy$ multiplier \NC \NR \NC \type{ry} \NC number \NC $yx$ multiplier \NC \NR \NC \type{sy} \NC number \NC $y$ scale \NC \NR \NC \type{tx} \NC number \NC $x$ offset \NC \NR \NC \type{ty} \NC number \NC $y$ offset \NC \NR \LL \stoptabulate \subsection{Character size information} These functions find the size of a glyph in a defined font. 
The \type {fontname} is the same name as the argument to \type {infont}; the \type {char} is a glyph id in the range 0 to 255; the returned \type {w} is in AFM units.

\subsubsection{\type {char_width}}

\libindex{char_width}

\startfunctioncall
<number> w = char_width(mp,<string> fontname, <number> char)
\stopfunctioncall

\subsubsection{\type {char_height}}

\libindex{char_height}

\startfunctioncall
<number> w = char_height(mp,<string> fontname, <number> char)
\stopfunctioncall

\subsubsection{\type {char_depth}}

\libindex{char_depth}

\startfunctioncall
<number> w = char_depth(mp,<string> fontname, <number> char)
\stopfunctioncall

\subsubsection{\type {get_[boolean|numeric|string|path]}}

\libindex{get_boolean}
\libindex{get_numeric}
\libindex{get_path}
\libindex{get_string}

When a script call brings you from the \METAPOST\ run (temporarily) back to \LUA\ you can access variables, but only if they are known (so, for instance, anonymous capsules like loop variables are not accessible).

\startfunctioncall
<boolean> w = get_boolean(mp,<string> name)
<number>  n = get_numeric(mp,<string> name)
<string>  s = get_string (mp,<string> name)
<table>   p = get_path   (mp,<string> name)
\stopfunctioncall

The path is returned as a table with subtables that each hold six numbers: the coordinates of the point and of its pre- and postcontrol points. A \type {cycle} field indicates whether the path is cyclic.

\stopsection

\stopchapter
{ "alphanum_fraction": 0.6791619071, "avg_line_length": 39.9831730769, "ext": "tex", "hexsha": "3f12762b68b8499be3c863c98571f11d8d7a5d74", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_forks_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-graphics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_issues_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-graphics.tex", "max_line_length": 103, "max_stars_count": null, "max_stars_repo_head_hexsha": "52fe6e0cd5af6b4610fd344a7392cca11bc5a72e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "waqas4afzal/LatexUrduBooksTools", "max_stars_repo_path": "Texlive_Windows_x32/2020/texmf-dist/doc/luatex/base/luatex-graphics.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9343, "size": 33266 }
\documentclass[a4paper]{article} %% Language and font encodings \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage[T1]{fontenc} \usepackage{makecell} %% Sets page size and margins \usepackage[a4paper,top=3cm,bottom=2cm,left=3cm,right=3cm,marginparwidth=1.75cm]{geometry} %% Useful packages \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \usepackage[shortlabels]{enumitem} \usepackage{graphicx} \usepackage{tikz} \usetikzlibrary{arrows.meta} \usepackage[colorinlistoftodos]{todonotes} \usepackage[colorlinks=true, allcolors=blue]{hyperref} \usepackage{color} \usepackage{url} \title{Exam 1 - Practice Problems} \author{CS 182 - Artificial Intelligence} \begin{document} \maketitle \section{Search} \begin{center} \begin{tabular}{cc} \parbox[c]{10cm}{\includegraphics[width =0.75\linewidth]{figs/search_graph.pdf} } & \begin{tabular}[h]{|c|c|c|c|} \hline Node &$h_1$ & $h_2$ & $h_3$ \\ \hline A & 9.5 & 10 & 14\\ B & 9 & 12 & 13 \\ C & 8 & 10 & 10 \\ D & 7 & 8 & 8 \\ E & 1.5 & 1 & 2 \\ F & 4 & 4.5 & 4 \\ G & 0 & 0 & 0 \\ \hline \end{tabular} \end{tabular} \end{center} Consider the state space graph shown above. A is the start state and G is the goal state. The costs for each edge are shown on the graph. Each edge can be traversed in both directions. The table on the right shows three heuristics $h_1$, $h_2$ and $h_3$. \begin{enumerate}[(a)] \item Are $h_1$, $h_2$ and $h_3$ consistent? Are they admissible? \vspace{4em} % \textcolor{blue}{$h_1$ is both admissible and consistent. $h_2$ is not consistent (e.g. it increases by 2 from A to B). $h_3$ is consistent, but not admissible ($h_3(A)$ is 14 when the total minimal path cost is 13).} \item For each of the following graph search strategies, mark which, if any, of the listed paths it could return. Note that for some search strategies the specific path returned might depend on tie-breaking behavior. In any such cases, make sure to mark \emph{all} paths that could be returned under some tie-breaking scheme. \vspace{.1in} \begin{table}[!h] \centering \begin{tabular}{|l|c|c|c|} \hline Search Algorithm & \textbf{A-B-D-G} & \textbf{A-C-D-G} & \textbf{A-B-C-D-F-G} \\ \hline Depth first search & & & \\ \hline Breadth first search & & & \\ \hline Uniform cost search & & & \\ \hline A* search with heuristic $h_1$ & & & \\ \hline A* search with heuristic $h_2$ & & & \\ \hline A* search with heuristic $h_3$ & & & \\ \hline \end{tabular} \end{table} \item Suppose you are completing the new heuristic function $h_4$ shown below. All the values are fixed except $h_4(B)$. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Node & A & B & C & D & E & F & G \\ \hline $h_4$& 10 & ? & 9 & 7 & 1.5 & 4.5& 0 \\ \hline \end{tabular} \end{center} For each of the following conditions, write the set of values that are possible for $h_4(B)$. (1) What values of $h_4(B)$ make $h_4$ admissible? (2) What values of $h_4(B)$ make $h_4$ consistent? \newpage % \textcolor{blue}{ (1) To make $h_4$ admissible, $h_4(B)$ has to be less than or equal to the actual optimal cost from B to goal G, which is the cost of path B-C-D-F-G, i.e. 12. The answer is $0 \leq h_3(B) \leq 12$} % \textcolor{blue}{ % (2) All the other nodes except node B satisfy the consistency conditions. 
The consistency conditions that do % involve the state $B$ are: % \begin{align*} % h(A) \leq c(A, B) + h(B) \ \ \ & \ \ \ h(B) \leq c(B, A) + h(A)\\ % h(C) \leq c(C, B) + h(B) \ \ \ & \ \ \ h(B) \leq c(B, C) + h(C)\\ % h(D) \leq c(D, B) + h(B) \ \ \ & \ \ \ h(B) \leq c(B, D) + h(D) % \end{align*} % Filling in the numbers shows this results in the condition: % $9 \leq h_3(B) \leq 10$} \vspace{4em} \item What values of $h_3(B)$ will cause A* graph search to expand node A, then node C, then node B, then node D in order? \vspace{10em} % \textcolor{blue}{ % In order to make A* graph search expand node A, then node C, then node B, suppose $h_3(B) = x$, we need % \begin{align*} % &1+x>13 \\ % &5+x<14 ~~~(expand~ B') ~~~ or ~~~ 1+x<14 ~~~ (expand~ B) % \end{align*} % so we can get $12<h_3(B)<13$} \end{enumerate} % ------------------------------ADVERSARIAL \section{Adversarial Search} \begin{enumerate}[(a)] \item Below you can see a search tree. Fill out the values of the nodes according to minimax search. What action would the agent take at the root? \begin{center} \begin{figure}[h!] \centering \includegraphics[width=1.25\textwidth]{figs/minmax} \end{figure} \end{center} \newpage % \textcolor{blue}{\textbf{Solution:}} % \begin{center} % \begin{figure}[h!] % \centering % \includegraphics[width=1\textwidth]{figs/alphabeta-sol} % \end{figure} % \end{center} \item Mark the edges that would be pruned when using Alpha-Beta pruning? Also write the alpha and beta value as well as the value on each node. \begin{center} \begin{figure}[h!] \centering \includegraphics[width=1.25\textwidth]{figs/minmax} \end{figure} \end{center} \newpage \item After running the algorithm, you notice that the other player is not playing optimally. Instead, the player picks their action uniformly at random. Recompute the values of the nodes. What is the action at the root now? \begin{center} \begin{figure}[h!] \centering \includegraphics[width=1.25\textwidth]{figs/expectimax} \end{figure} \end{center} \item Your computer can only use 1MB of memory while running the algorithm. A single node takes 10B to store. Given a branching factor of 3, Compute an upper bound on the depth of a tree that your computer can conduct AB-pruning on. \vspace{10em} % \textcolor{blue}{AB-pruning has at worst the same complexity as Minimax. For Minimax, time complexity is $O(b^m)$ and space complexity is $O(bm)$. To compute the space limit, we have to solve % \begin{align*} % 1024^2\text{B} &=3*10\text{B}*m \\ % m &= 69905 % \end{align*} % Therefore, the maximum depth is 69905. % } \item Your computer can check 10,000 nodes per second Is it feasible to solve the problem for the maximum depth within 2 minutes? If not, what strategy do you recommend instead? \vspace{10em} % \textcolor{blue}{The time complexity for this tree is $3 * 10^{33398}$. Even while checking $10000$ nodes a second, it is impossible to solve. Therefore, we need to do depth-limited search. You can compute the maximum depth the algorithm can search through in 2 minutes by solving % \begin{align*} % 3^m &= 10000 * 120 % \end{align*} % This gives an answer $10 < m < 11$. To fully traverse the last layer, we therefore need to choose a depth of 10 for the depth-limited search. % } \newpage \item True or false. For every game tree, the utility obtained by MAX using minimax decisions against a suboptimal MIN will be never be lower than the utility obtained playing against an optimal MIN. Justify your response. 
\vspace{15em} % \textcolor{blue}{Consider a MIN node whose children are terminal nodes. If MIN plays suboptimally, then the value of the node is greater than or equal to the value it would have if MIN played optimally. Hence, the value of the MAX node that is the MIN node’s parent can only be increased. This argument can be extended by a simple induction all the way to the root. If the suboptimal play by MIN is predictable, then one can do better than a minimax strategy. For example, if MIN always falls for a certain kind of trap and loses, then setting the trap guarantees a win even if there is actually a devastating response for MIN. Consider the tree shown below: If we know for sure that B would pick $b_1$ or $b_2$ over $b_3$, then picking $a_1$ over $a_2$ would be the correct choice.} Can you come up with a game tree in which MAX can do still better using a suboptimal strategy against a suboptimal MIN? \vspace{10em} % \begin{center} % \begin{figure}[h!] % \centering % \includegraphics[width=.6\textwidth]{figs/suboptimal-min} % \end{figure} % \end{center} \end{enumerate} \section{CSP} You are designing a menu for a special event. There are several choices, each represented as a variable: (A)ppetizer, (B)everage, main (C)ourse, and (D)essert. The domains of the variables are as follows: \begin{itemize} \item A: (v)eggies, (e)scargot \item B: (w)ater, (s)oda, (m)ilk \item C: (f)ish, (b)eef, (p)asta \item D: (a)pple pie, (i)ce cream, (ch)eese \end{itemize} Because all of your guests get the same menu, it must obey the following dietary constraints: \begin{itemize} \item (i) Vegetarian options: The appetizer must be veggies or the main course must be pasta or fish (or both). \item (ii) Total budget: If you serve the escargot, you cannot afford any beverage other than water. \item (iii) Calcium requirement: You must serve at least one of milk, ice cream, or cheese. \end{itemize} \begin{enumerate}[(a)] \item Draw the constraint graph over the variables \newpage % \textcolor{blue}{\textbf{Solution:}} % \begin{tikzpicture}[ % shorten >=1pt, auto, thick, % node distance=3cm, % main node/.style={circle,draw,fill=blue!20,font=\sffamily\Large\bfseries} % ] % \node[main node] (a) at (0,0) {A}; % \node[main node] (b) at (2,0) {B}; % \node[main node] (d) at (2,-2) {D}; % \node[main node] (c) at (0,-2) {C}; % \path[every node/.style={font=\sffamily\small}] % (a) edge node {} (b) % (a) edge node {} (c) % (b) edge node {} (d); % \end{tikzpicture} \item Imagine we assign $A=e$. Cross out the eliminated values to show the domains of the variable after forward checking \begin{center} \begin{tabular}{|c|c|c|c|} \hline A & & e & \\ \hline B& w & s & m \\ \hline C& f & b & p \\ \hline D& a & i & ch \\ \hline \end{tabular} \end{center} % \textcolor{blue}{The values s, m, and b should be crossed off. "s" and "m" due to constraint (ii), and "b" due to (i).} \item Imagine again that $A=e$. Cross out the eliminated values after enforcing arc consistency. \begin{center} \begin{tabular}{|c|c|c|c|} \hline A & & e & \\ \hline B& w & s & m \\ \hline C& f & b & p \\ \hline D& a & i & ch \\ \hline \end{tabular} \end{center} % \textcolor{blue}{The values s, m, b, and a should be crossed out. The first three are crossed off for the same reasons as above, and “a” is eliminated because there is no value for (B) that is compatible with “a” (based on constraint (iii)).} \item Give a solution for this CSP or show that none exists. \vspace{5em} % \textcolor{blue}{There are multiple solutions. 
One of them is A=e, B=w, C=f, and D=i.} \item Define in your own words the terms constraint, backtracking search, arc consistency, backjumping, min-conflicts, and cycle cutset. \newpage % \textcolor{blue}{A \textbf{constraint} is a restriction on the possible values of two or more variables. For example, a constraint might say that A = a is not allowed in conjunction with B = b. \\ % \textbf{Backtracking search} is a form of depth-first search in which there is a single representation of the state that gets updated for each successor, and then must be restored when a dead end is reached. \\ % A directed arc from variable A to variable B in a CSP is \textbf{arc consistent} if, for every value in the current domain of A, there is some consistent value of B. \\ % \textbf{Backjumping} is a way of making backtracking search more efficient, by jumping back more than one level when a dead end is reached.\\ % \textbf{Min-conflicts} is a heuristic for use with local search on CSP problems. The heuristic says that, when given a variable to modify, choose the value that conflicts with the fewest number of other variables. \\ % A \textbf{cycle cutset} is a set of variables which when removed from the constraint graph make it acyclic (i.e., a tree). When the variables of a cycle cutset are instantiated the remainder of the CSP can be solved in linear time.} % SK--this one might be more trouble than it's worth % \item For general CSPs, will enforcing arc consistency after an assignment always prune at least as % many domain values as forward checking? Briefly explain why or why not. % \textcolor{blue}{Two answers are possible:\\ % Yes. The first step of arc consistency is equivalent to forward checking, so arc consistency removes all values % that forward checking does.\\ % No. While forward checking is a subset of arc consistency, after any assignment, arc consistency may have % already eliminated values in a previous step that are eliminated in that step by forward checking. Thus, % enforcing arc consistency will never leave more domain values than enforcing forward checking, but on a given step the forward checking could prune more values as arc consistency.} \end{enumerate} \section{MDP and RL} \begin{enumerate}[(a)] \item Suppose that we define the utility of a state sequence to be the maximum reward obtained in any state in the sequence. Show that this utility function does not result in stationary preferences between state sequences. Is it still possible to define a utility function on states such that MEU decision making gives optimal behavior? \vspace{25em} % \textcolor{blue}{Stationarity requires the agent to have identical preferences between the sequence pair [$s_0, s_1, s_2, \ldots$], [$s'_0, s'_1, s'_2, \ldots$] and between the sequence pair [$s_1, s_2, \ldots$], [$s'_1, s'_2, \ldots$]. If the utility of a sequence is its maximum reward, we can easily violate stationarity. For example, % \begin{equation*} % [4, 3, 0, 0, 0, \ldots] \sim [4, 0, 0, 0, \ldots] % \end{equation*} % but % \begin{equation*} % [3, 0, 0, 0, \ldots] \succ [0, 0, 0, \ldots] % \end{equation*} % We can still define $V^\pi(s)$ as the expected maximum reward obtained by executing $\pi$ starting in $s$. The agent’s preferences seem peculiar, nonetheless. 
For example, if the current state $s$ has reward $R_{\max}$, the agent will be indifferent among all actions, but once the action is executed and the agent is no longer in $s$, it will suddenly start to care about what happens % } \item Can all MDPs be solved using expectimax search? Justify your answer \vspace{8em} % \textcolor{blue}{No, MDPs with self loops lead to infinite expectimax trees. Unlike search problems, this issue cannot be addressed with a graph-search variant.} \item Let's consider a two-player MDP that correspond to a zero-sum, turn-taking game. Let the players be $A$ and $B$, and let $R(s)$ be the reward for player $A$ in state $s$. (The reward for $B$ is always equal and opposite.). Let $V^*_A(s)$ be the utility of state $s$ when it is $A$’s turn to move in $s$, and let $V^*_B(s)$ be the utility of state $s$ when it is $B$’s turn to move in $s$. All rewards and utilities are calculated from $A$’s point of view (just as in a minimax game tree). Write down the definitions of $V^*_A(s)$ and $V^*_B(s)$ in terms of expected future utility. \vspace{8em} % \textcolor{blue}{\begin{align*} % V^*_A(s) &= \max_{a} \sum_{s'} P(s'|s,a)[R(s,a,s') + \gamma V^*_B(s')]\\ % V^*_B(s) &= \min_{a} \sum_{s'} P(s'|s,a)[R(s,a,s') + \gamma V^*_A(s')] % \end{align*}} \item Explain how to do two-player value iteration with these equation. Additionally, state how to check whether the algorithm has converged. \newpage % \textcolor{blue}{To do value iteration, we simply keep track of twice as many values - once for $A$ and once for $B$. We can use the equations from the previous task and apply them in alternation. The process terminates when the utility vector for one player is the same as the previous utility vector for the same player (i.e., two steps earlier). (Note that the policies and utilities for the two players might not be the same.)} \begin{center} \begin{figure}[h!] \centering \includegraphics[width=.25\textwidth]{figs/small-2-player} \caption{The starting position for a simple game. The two players take turns moving, and each player must move his token to an open adjacent space in either direction. If the opponent occupies an adjacent space, then a player may jump over the opponent to the next open space if any. (For example, if $A$ is on $3$ and $B$ is on $2$, then A may move back to 1.) The game ends when one player reaches the opposite end of the board. If player A reaches space 4 first, then the value of the game to $A$ is $+1$; if player $B$ reaches space 1 first, then the value of the game to $A$ is $−1$.} \end{figure} \end{center} \item Consider the game described in the figure above. Draw the state space (rather than the game tree), showing the moves by $A$ as solid lines and moves by $B$ as dashed lines. Mark each state with $R(s)$. You will find it helpful to arrange the states $(s_A,s_B)$ on a two-dimensional grid, using $s_A$ and $s_B$ as “coordinates.” \vspace{15em} % \textcolor{blue}{\textbf{Solution:}} % \begin{center} % \begin{figure}[h!] % \centering % \includegraphics[width=.25\textwidth]{figs/state-space} % \end{figure} % \end{center} \item Now apply two-player value iteration to solve this game, and derive the optimal policy. Use a $\gamma$ of 1 for the derivation. \newpage % \textcolor{blue}{\textbf{Solution:}} % \begin{center} % \begin{figure}[h!] 
% \centering % \includegraphics[width=.8\textwidth]{figs/value-it} % \end{figure} % \end{center} \item When using features to represent the Q-function is it guaranteed that the feature-based $Q$-learning finds the same optimal $Q^∗$ as would be found when using a tabular representation for the $Q$-function? \vspace{8em} % \textcolor{blue}{No, if the optimal $Q$-function $Q^∗$ cannot be represented as a weighted combination of features, then the feature-based representation would not have the expressive power to find it.} \item Why is temporal difference (TD) learning of Q-values (Q-learning) superior to TD learning of values? \vspace{8em} % \textcolor{blue}{Because if you use temporal difference learning on the values, it is hard to extract a policy from the learned values. Specifically, you would need to know the transition model $T$. For TD learning of Q-values, the policy can be extracted directly by taking $\pi(s) = \text{argmax}_a Q(s, a)$.} \end{enumerate} \end{document}
{ "alphanum_fraction": 0.7092253366, "avg_line_length": 44.750617284, "ext": "tex", "hexsha": "0ae364a7ae193829de4b9832fa7e668aceb2f60e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Harvard-CS182-F18/courseware", "max_forks_repo_path": "Section_07/tex/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Harvard-CS182-F18/courseware", "max_issues_repo_path": "Section_07/tex/main.tex", "max_line_length": 785, "max_stars_count": null, "max_stars_repo_head_hexsha": "b1c5cc83dd45091c0ab74e0252405bc79ce51718", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Harvard-CS182-F18/courseware", "max_stars_repo_path": "Section_07/tex/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5477, "size": 18124 }
\documentclass[]{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{graphicx} \usepackage{hyperref} \usepackage{appendix} \hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, } \usepackage{mathtools} \usepackage{float} %this package is for placing graphs and tables in where the TeX is \graphicspath{{images/}{../images/}} \usepackage{blindtext} \usepackage{subfiles} \usepackage{verbatim} \providecommand{\EqDir}{Equations} \providecommand{\RefDir}{References} \usepackage{apacite} \title{Replication of Aiyagari(1994)} \author{Zixuan Huang and Mingzuo Sun} \date{} \begin{document} \linespread{2} \maketitle The paper we replicated is "Uninsured Idiosyncratic Risk and Aggregate Saving" by S. Rao \cite{1994}. The paper was published in The Quaterly Journal of Economics, Vol. 109, No. 3 (Aug., 1994). Besides this document, you can also find the corresponding notebook \href{run:../Aiyagari1994QJE.ipynb}{here}, which contains concrete steps of our replication. \section{Abstract} The paper modifies standard growth model to include precautionary saving motives and liquidity constraints.The paper also examines the impact of the introduction of a particular kind of uninsurable idiosyncratic risk on the aggregate saving rate; the importance of asset trading to individuals; and the relative inequality of the wealth and income distributions. \section{Introduction} The paper wants to provide an exposition of models whose aggregate behavior is the result of market interaction among a large number of agents subject to idiosyncratic shocks. Moreover, another goal of this paper is to use such a model to study the quantitative importance of individual risk for aggregate saving.\\ The paper mainly has five features: Endogenous heterogeneity, aggregation, infinite horizons, exogenous borrowing constraint, and general equilibrium. i.e. interest rate is endogenously determined since in a steady state equilibrium the capital per capita must equal the per capita asset holdings of consumers, and the interest rate must equal the net marginal product of capital. \section{Related Literature} The model in Aiyagari(1994) originates from Bewley model and a subsequent literature Zeldes (1989), Deaton (1991), Carroll (1992), and puts these kinds of models into a general equilibrium context. These models all share the same key components as mentioned in the previous part. And they are used to study the following topics: \begin{itemize} \item How much of observed wealth inequality does a particular choice of uninsurable idiosyncratic income uncertainty explain? \item In this model, what is the fraction of aggregate savings due to the precautionary motive? \item In this model, what are the re-distributional implications of various policies? \end{itemize} \section{Model} \subsection{Individual's Problem} \input{\EqDir/IndividualsProblem.tex} where $\phi$ (if positive) is the limit on borrowing; $l_t$ is assumed to be i.i.d with bounded support given by $[l_{min},l_{max}]$, with $l_{min}>0$; $w$ and $r$ represent wage and interest rate respectively. Let $\hat{a}_t \equiv a_t+\phi$ and $z_t \equiv wl_t+(1+r)\hat{a}_t-r\phi$, where $z_t$ can be interpreted as total resources of the agent at date $t$ respectively. 
Then the Bellman equation is as follows:
\input{\EqDir/BellmanEquation.tex}
Consequently, the Euler equation is:
\input{\EqDir/EulerEquation.tex}
Solving the model, the decision rule can be written as:
\begin{align}
\hat{a}_{t+1}=A(z_t,\phi,w,r)
\end{align}
And the law of transition would be:
\begin{align}
z_{t+1}=wl_{t+1}+(1+r)A(z_t,\phi,w,r)-r\phi
\end{align}
\subsection{Firm's Problem}
\input{\EqDir/FirmsProblem}
where $K$ is aggregate capital, $L$ is aggregate labor, and $F(K,L)$ is the production function.
\subsection{General Equilibrium}
In the steady state, variables are time invariant and all markets clear, i.e.,
\begin{itemize}
	\item $F_K(K,L) = r+\delta$
	\item $F_L(K,L) = w$
	\item $\int l_i di = L$
	\item $\int a_i di = K$
\end{itemize}
\section{Model Specification, Parameters and Computation}
\subsection{Model specification and parameters}
We follow the parameters in Aiyagari(1994) for calibration. Parameters are listed in the table below.
\subfile{Tables/Table_Parameters}
The production function is Cobb-Douglas, with the capital share taken to be $\alpha$:
\begin{align}
F(K,L) = K^\alpha L^{1-\alpha}
\end{align}
The utility function is CRRA, with relative risk aversion coefficient $\mu$.\\
Finally, labor endowment shocks follow an AR process:
\begin{align}
\log(l_t)=\rho\log(l_{t-1})+\sigma(1-\rho^2)^{\frac{1}{2}}\epsilon_{t}, \ \epsilon_t \sim N(0,1)
\end{align}
\subsection{Computation}
This notebook uses the \href{http://www.econforge.org/dolark/}{EconForge/Dolark} toolkit to describe the results and reproduce the tables in the linked paper. You can find our application of this toolkit for this paper in the \href{run:../Aiyagari1994QJE.ipynb}{notebook} we created.
\section{Key Results}
\subsection{Aggregate Saving Rates}
Table \ref{table:2} shows the comparison between the aggregate saving rates calculated by us and those in Aiyagari(1994). The results in Aiyagari(1994) are shown in section \ref{Appendix} of this document.\\
Our results are highly similar to, but a bit different from, those in Aiyagari(1994). This is very likely because we use a different discrete-valued Markov chain (MC) to approximate the AR process of individuals' idiosyncratic income shocks. In our replication, three grid points (i.e. three MC states), which is the default in $\texttt{Dolo}$ and $\texttt{Dolark}$, are used to simulate the AR process, whereas this number is seven in Aiyagari(1994).
\begin{table}[H]
	\scalebox{.7}{\input{Tables/Table_SavingRate.tex}}
	\caption{Aggregate Saving Rate}
	\label{table:2}
\end{table}
\subsection{Wealth Distribution}
The following figure shows the wealth distribution, where the x-axis represents the level of wealth and the y-axis represents the share of the population.
\begin{figure}[H]
	\centering
	\includegraphics{Figures/Figure_WealthDistribution}
	\label{figure:1}
\end{figure}
\newpage
\section{Appendix}{\label{Appendix}}
\subfile{Appendix/Appendix}
\newpage
\bibliographystyle{apacite}
\bibliography{References/Aiyagari1994}
\end{document}
{ "alphanum_fraction": 0.7723933094, "avg_line_length": 44.1172413793, "ext": "tex", "hexsha": "24f34ecfb9fdde49a01937fc1133e5be60ac14df", "lang": "TeX", "max_forks_count": 50, "max_forks_repo_forks_event_max_datetime": "2021-10-05T20:20:26.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-01T16:33:06.000Z", "max_forks_repo_head_hexsha": "92c057a93a7d10a890696db55f874d5fde394b91", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "ngkratts/REMARK", "max_forks_repo_path": "REMARKs/AiyagariIdiosyncratic/Tex/main.tex", "max_issues_count": 89, "max_issues_repo_head_hexsha": "92c057a93a7d10a890696db55f874d5fde394b91", "max_issues_repo_issues_event_max_datetime": "2021-08-30T13:30:48.000Z", "max_issues_repo_issues_event_min_datetime": "2019-01-06T19:32:34.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "ngkratts/REMARK", "max_issues_repo_path": "REMARKs/AiyagariIdiosyncratic/Tex/main.tex", "max_line_length": 449, "max_stars_count": 18, "max_stars_repo_head_hexsha": "92c057a93a7d10a890696db55f874d5fde394b91", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "ngkratts/REMARK", "max_stars_repo_path": "REMARKs/AiyagariIdiosyncratic/Tex/main.tex", "max_stars_repo_stars_event_max_datetime": "2021-09-10T16:29:55.000Z", "max_stars_repo_stars_event_min_datetime": "2019-01-28T13:17:39.000Z", "num_tokens": 1698, "size": 6397 }
\chapter*{Acknowledgement}
\label{ch:Acknowledgements}
\addcontentsline{toc}{chapter}{\nameref{ch:Acknowledgements}}
\vspace{10mm}
I would first like to thank my supervisors, \textbf{Professor Malhar Kulkarni} and \textbf{Professor Swapneel Mahajan}, whose expertise was invaluable in formulating the research questions and methodology. Your insightful feedback pushed me to sharpen my thinking and brought my work to a higher level.\\
I would like to acknowledge my colleagues and friends \textbf{Keshav Melnad}, \textbf{Jayashree Gajjam}, \textbf{Dhanashree Lele}, \textbf{Sayli Khare} and \textbf{Revati Kulkarni} for their constant love and support. \\
I would also like to thank my RPC members, \textbf{Professor Ratikanta Panda} and \textbf{Professor Sudhasheel Sen}, for their valuable guidance throughout my studies. You provided me with the tools that I needed to choose the right direction and successfully complete my dissertation.\\
In addition, I would like to thank my dear parents \textbf{Mr. Arun Aggarwal} and \textbf{Mrs. Sarita Aggarwal}, my sister \textbf{Swati} and my husband \textbf{Manmohan} for their wise counsel and constant support.
{ "alphanum_fraction": 0.7946963216, "avg_line_length": 129.8888888889, "ext": "tex", "hexsha": "1b1528771610fe09e749596a0c3cba9329f54f3e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ba84393c540b1d6c12f8ddbf3b62ec5ca10f325", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "manmohansingh192/Thesis_anupriya", "max_forks_repo_path": "Acknowledgement.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ba84393c540b1d6c12f8ddbf3b62ec5ca10f325", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "manmohansingh192/Thesis_anupriya", "max_issues_repo_path": "Acknowledgement.tex", "max_line_length": 304, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ba84393c540b1d6c12f8ddbf3b62ec5ca10f325", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "manmohansingh192/Thesis_anupriya", "max_stars_repo_path": "Acknowledgement.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 309, "size": 1169 }
%!TEX root = TTK4215-Summary.tex \section{Parameter estimation} \subsection{SPR Lyapunov method} Based on choosing an adaptive law so that a \emph{Lyapunov-like} function guarantees $\tilde{\theta} \rightarrow 0$. The parametric model $z = W(s) \theta^{*\T} \psi$ is rewritten $z = W(s) L(s) \theta^{*\T} \phi$, with $L(s)$ a proper stable t.f., and $W(s)L(s)$ a proper SPR t.f. \begin{gather} z = W(s) L(s) \theta^{*\T} \phi \\ \hat{z} = W(s)L(s) \theta\T \phi \\ \epsilon = z - \hat{z} - W(s) L(s) \epsilon n_s^2 \\ \dot{\theta} = \Gamma \epsilon \phi \end{gather} \subsection{Gradient method} \begin{gather} z = \theta^{*\T} \phi \\ \hat{z} = \theta\T \phi \\ \epsilon = \frac{z - \hat{z}}{m^2} \end{gather} \subsubsection{Instantaneous cost} \begin{gather} \dot{\theta} = \Gamma \epsilon \phi \end{gather} \subsubsection{Integral cost} \begin{gather} \dot{\theta} = - \Gamma (R \theta + Q) \\ \dot{R} = - \beta R + \frac{\phi \phi\T}{m^2} \\ \dot{Q} = - \beta Q - \frac{z \phi}{m^2} \end{gather} \subsection{With projection} \begin{gather} \dot{\theta} = \begin{cases} \Gamma \epsilon \phi & \mbox{if } \theta \in \mathcal{S}^0 \\ \Gamma \epsilon \phi - \Gamma \frac{\nabla g \nabla g\T}{\nabla g\T \Gamma \nabla g} \Gamma \epsilon \phi & \mbox{otherwise} \end{cases} \end{gather} \subsection{Least squares} \begin{gather} z = \theta^{*\T} \phi \\ \hat{z} = \theta\T \phi \\ \epsilon = \frac{z - \hat{z}}{m^2} \end{gather} \subsubsection{Pure least squares} \begin{gather} \dot{\theta} = P \epsilon \phi \\ \dot{P} = - P \frac{\phi \phi\T}{m^2} P \end{gather} \subsubsection{With covariance resetting} \begin{gather} \dot{\theta} = P \epsilon \phi \\ \dot{P} = - P \frac{\phi \phi\T}{m^2} P, \quad P(t_r^+) = P_0 = \rho_0 I \end{gather} \subsubsection{With forgetting} \begin{gather} \dot{\theta} = P \epsilon \phi \\ \dot{P} = \begin{cases} \beta P - P \frac{\phi \phi\T}{m^2} P & \mbox{if } ||P(t)|| \leq R_0 \\ 0 & \mbox{otherwise} \end{cases} \end{gather}
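As a quick check on where these update laws come from (a standard derivation, sketched here only for reference): the instantaneous-cost gradient law is steepest descent on
\begin{gather}
J(\theta) = \frac{(z - \theta\T \phi)^2}{2 m^2} = \frac{\epsilon^2 m^2}{2}
\end{gather}
since $\nabla J(\theta) = -\frac{(z - \theta\T \phi)\phi}{m^2} = -\epsilon \phi$, so that $\dot{\theta} = -\Gamma \nabla J(\theta) = \Gamma \epsilon \phi$ as above.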
{ "alphanum_fraction": 0.6043095005, "avg_line_length": 29.5942028986, "ext": "tex", "hexsha": "7a1cb12ef49b3bf8dbc992b445542d1d43a409a0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jakoblover/ntnu-course-summaries", "max_forks_repo_path": "TTK4215 System identification and adaptive control/sec-estimators.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jakoblover/ntnu-course-summaries", "max_issues_repo_path": "TTK4215 System identification and adaptive control/sec-estimators.tex", "max_line_length": 281, "max_stars_count": 2, "max_stars_repo_head_hexsha": "8ba859de2349b93c5079ca10a4cf2ec49c1f5dc0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jakoblover/ntnu-course-summaries", "max_stars_repo_path": "TTK4215 System identification and adaptive control/sec-estimators.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-11T02:42:40.000Z", "max_stars_repo_stars_event_min_datetime": "2018-05-30T09:19:22.000Z", "num_tokens": 790, "size": 2042 }
\documentclass[12pt]{article} % input \bibliographystyle{ieeetr} \usepackage[utf8]{inputenc} %\usepackage{times} % font \usepackage{lmodern} % scalable font \usepackage{graphicx} % include graphics \usepackage{amsmath} % align, nobreakdash \usepackage[pdf,tmpdir]{graphviz} % digraph \usepackage{fullpage} % book margins -> std margins \usepackage{wrapfig} % wrapfigure %\usepackage{moreverb} % verbatimtabinput \usepackage{subcaption} % subcaptionbox \usepackage[colorlinks]{hyperref} % pdf links \usepackage{url} % url support %\usepackage{comment} % comment % code doesn't wrap \usepackage[table]{xcolor} \definecolor{light-gray}{gray}{0.95} \newcommand{\code}[1]{\colorbox{light-gray}{\texttt{#1}}} % create new commands %\def\^#1{\textsuperscript{#1}} %\def\!{\overline} %\def\degree{\ensuremath{^\circ}} \def\Scale{0.5} % colourize titles \definecolor{ilrblue}{RGB}{79,166,220} \usepackage{titling} \pretitle{\vspace{-3em}\fontfamily{\sfdefault}\fontsize{18bp}{18bp}\color{ilrblue}\selectfont} \posttitle{\par\vspace{18bp}} \preauthor{\normalfont\bfseries\selectfont\MakeUppercase} \postauthor{\par\vspace{4bp}} \predate{\normalfont\selectfont} \postdate{\par\vspace{-8bp}} \usepackage{titlesec} \titleformat{\section}{\fontfamily{\sfdefault}\selectfont\normalsize\bfseries\color{ilrblue}\MakeUppercase}{\thesection}{1em}{} \titleformat{\subsection}{\fontfamily{\sfdefault}\normalsize\bfseries\color{ilrblue}}{\thesubsection}{1em}{} \titleformat{\subsubsection}{\fontfamily{\sfdefault}\normalsize\bfseries\color{ilrblue}\it}{\thesubsubsection}{1em}{} % Ewww \makeatletter \renewenvironment{abstract}{% \if@twocolumn \section*{\abstractname}% \else \small % \begin{center}% {\bfseries\color{ilrblue} \abstractname\vspace{\z@}\vspace{-12bp}}% \end{center}% \quotation \fi} {\if@twocolumn\else\endquotation\fi} \makeatother % for hyperref \hypersetup{ linkcolor=ilrblue, % internal (figure) links urlcolor=ilrblue, filecolor=ilrblue, citecolor=ilrblue, % bibliography links pdfauthor={\@author}, pdftitle={\@title}, pdfsubject={\@title}, pdfpagemode=UseNone } \author{Neil A. Edelman} \title{Compact Radix B-trees as an Efficient Index} \date{2021-10-20} \begin{document} \maketitle \abstract{A data structure based on a compact radix trees is introduced, with an index on key strings bits stored semi-implicitly. B-tree methods are used to group data in a sequential conglomeration, called a tree in a trie-forest. This increases cache-coherence, and presents an efficient ordered string set or map with prefix-matching capabilities. Tests comparing it to a hash map show the structure performs for . . .} \section{Introduction} A trie is, ``a multiway tree structure that stores sets of strings by successively partitioning them [de la Briandais 1959; Fredkin 1960; Jacquet and Szpankowski 1991],''\cite{askitis2011redesigning} so that, ``INSTEAD OF BASING a search method on comparisons between keys, we can make use of their representation as a sequence of digits or alphabetic characters.''\cite{knuth1997sorting} It is ordered, and allows prefix range queries. In practice, we talk about a string always terminated by a sentinel; this is an easy way to allow a string and it's prefix in the same trie\cite{fredkin1960trie}(but not really). For proper ordering, we traverse with most-significant bit first and the sentinel should be less-than the other numeric value of all the other characters. We use null-terminated, Modified~\mbox{UTF-8} strings, like in Figure~\ref{star-3:bits}. 
Keys are sorted in lexicographic order, but not by any collation algorithm. show a diagram % star-3 %\begin{wrapfigure}{r}{0.5\textwidth} %[!ht] \begin{figure} \centering \subcaptionbox{Bit view.\label{star-3:bits}}{% \digraph[scale=\Scale]{star3bits}{ node [shape = none]; tree0x1028049c0branch0 [shape = box, style = filled, fillcolor="Grey95" label = < <TABLE BORDER="0" CELLBORDER="0"> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="0">Algieba</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="White" BORDER="1">0</TD> </TR> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="1">Regulus</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> <TD>0</TD> <TD BGCOLOR="White" BORDER="1">0</TD> </TR> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="2">Vega</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> <TD>0</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> </TR> </TABLE>>]; } } \subcaptionbox{Logical view.\label{star-3:logic}}{ \digraph[scale=\Scale]{star3logic}{ node [shape = none]; tree0x1028049c0branch0 [label = "3", shape = circle, style = filled, fillcolor = Grey95]; tree0x1028049c0branch0 -> tree0x1028049c0leaf0 [style = dashed, color = Gray]; tree0x1028049c0branch0 -> tree0x1028049c0branch1; tree0x1028049c0branch1 [label = "1", shape = circle, style = filled, fillcolor = Grey95]; tree0x1028049c0branch1 -> tree0x1028049c0leaf1 [style = dashed, color = Gray]; tree0x1028049c0branch1 -> tree0x1028049c0leaf2 [color = Gray]; tree0x1028049c0leaf0 [label = "Algieba"]; tree0x1028049c0leaf1 [label = "Regulus"]; tree0x1028049c0leaf2 [label = "Vega"]; }} \subcaptionbox{Memory view.\label{star-3:mem}}{ \digraph[scale=\Scale]{star3mem}{ node [shape = none]; tree0x1028049c0branch0 [shape = box, style = filled, fillcolor = Gray95, label = < <TABLE BORDER="0"> <TR> <TD ALIGN="right" BORDER="0">left</TD> <TD BGCOLOR="Gray90">0</TD> <TD BGCOLOR="Gray90">0</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">skip</TD> <TD>3</TD> <TD>1</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">leaves</TD> <TD BGCOLOR="Grey90">Algieba</TD> <TD BGCOLOR="Grey90">Regulus</TD> <TD BGCOLOR="Grey90">Vega</TD> </TR> </TABLE>>]; }} \caption{A full 3-trie.\label{star-3}} \end{figure} % star-4 \begin{figure} \centering \subcaptionbox{Bit view.\label{star-4:bits}}{ \digraph[scale=\Scale]{star4bits}{ node [shape = none]; tree0x1028049f0branch0 [shape = box, style = filled, fillcolor="Grey95" label = < <TABLE BORDER="0" CELLBORDER="0"> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="0">\detokenize{↓}<FONT COLOR="Gray">Algieba</FONT></TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="White" BORDER="1">0</TD> </TR> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="1">\detokenize{↓}<FONT COLOR="Gray">Regulus</FONT></TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> </TR> </TABLE>>]; tree0x1028049f0branch0:0 -> tree0x1028049c0branch0 [color = "Black:invis:Black" style = dashed]; tree0x1028049f0branch0:1 -> tree0x102804a20branch0 [color = "Black:invis:Black"]; tree0x1028049c0branch0 [shape = box, style = filled, fillcolor="Grey95" label = < <TABLE BORDER="0" CELLBORDER="0"> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="0">Algieba</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD BORDER="0">&nbsp;</TD> <TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD BORDER="0">&nbsp;</TD> <TD>0</TD> 
<TD>1</TD> <TD>1</TD> <TD BGCOLOR="White" BORDER="1">0</TD> </TR> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="1">Alpheratz</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD BORDER="0">&nbsp;</TD> <TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD BORDER="0">&nbsp;</TD> <TD>0</TD> <TD>1</TD> <TD>1</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> </TR> </TABLE>>]; tree0x102804a20branch0 [shape = box, style = filled, fillcolor="Grey95" label = < <TABLE BORDER="0" CELLBORDER="0"> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="0">Regulus</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="White" BORDER="1">0</TD> </TR> <TR> <TD ALIGN="LEFT" BORDER="0" PORT="1">Vega</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD BGCOLOR="Black" COLOR="White" BORDER="1"><FONT COLOR="White">1</FONT></TD> </TR> </TABLE>>]; }} \subcaptionbox{Logical view.\label{star-4:logic}}{ \digraph[scale=\Scale]{star4logic}{ node [shape = none]; tree0x1028049f0branch0 [label = "3", shape = circle, style = filled, fillcolor = Grey95]; tree0x1028049f0branch0 -> tree0x1028049c0branch0 [style = dashed, color = "Black:invis:Black"]; tree0x1028049f0branch0 -> tree0x102804a20branch0 [color = "Black:invis:Black"]; tree0x1028049c0branch0 [label = "15", shape = circle, style = filled, fillcolor = Grey95]; tree0x1028049c0branch0 -> tree0x1028049c0leaf0 [style = dashed, color = Gray]; tree0x1028049c0branch0 -> tree0x1028049c0leaf1 [color = Gray]; tree0x1028049c0leaf0 [label = "Algieba"]; tree0x1028049c0leaf1 [label = "Alpheratz"]; tree0x102804a20branch0 [label = "1", shape = circle, style = filled, fillcolor = Grey95]; tree0x102804a20branch0 -> tree0x102804a20leaf0 [style = dashed, color = Gray]; tree0x102804a20branch0 -> tree0x102804a20leaf1 [color = Gray]; tree0x102804a20leaf0 [label = "Regulus"]; tree0x102804a20leaf1 [label = "Vega"]; }}\\ \subcaptionbox{Memory view.\label{star-4:mem}}{ \digraph[scale=\Scale]{star4mem}{ node [shape = none]; tree0x1028049f0branch0 [shape = box, style = filled, fillcolor = Gray95, label = < <TABLE BORDER="0"> <TR> <TD ALIGN="right" BORDER="0">left</TD> <TD BGCOLOR="Gray90">0</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">skip</TD> <TD>3</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">leaves</TD> <TD PORT="0" BGCOLOR="Gray90">...</TD> <TD PORT="1" BGCOLOR="Gray90">...</TD> </TR> </TABLE>>]; tree0x1028049f0branch0:0 -> tree0x1028049c0branch0 [color = "Black:invis:Black" style = dashed]; tree0x1028049f0branch0:1 -> tree0x102804a20branch0 [color = "Black:invis:Black"]; tree0x1028049c0branch0 [shape = box, style = filled, fillcolor = Gray95, label = < <TABLE BORDER="0"> <TR> <TD ALIGN="right" BORDER="0">left</TD> <TD BGCOLOR="Gray90">0</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">skip</TD> <TD>15</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">leaves</TD> <TD BGCOLOR="Grey90">Algieba</TD> <TD BGCOLOR="Grey90">Alpheratz</TD> </TR> </TABLE>>]; tree0x102804a20branch0 [shape = box, style = filled, fillcolor = Gray95, label = < <TABLE BORDER="0"> <TR> <TD ALIGN="right" BORDER="0">left</TD> <TD BGCOLOR="Gray90">0</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">skip</TD> <TD>1</TD> </TR> <TR> <TD ALIGN="right" BORDER="0">leaves</TD> <TD BGCOLOR="Grey90">Regulus</TD> <TD BGCOLOR="Grey90">Vega</TD> </TR> </TABLE>>]; }} \caption{Split the 3-trie to put four items.\label{star-4}} \end{figure} \section{Implementation} A compact representation is a Patrica trie\cite{morrison1968patricia}: 
that is, a binary radix tree with skip values where the bits offer no difference. In Figure~\ref{star-3:logic}, each branch implies a \code{do not care} for all the bits skipped before it, corresponding to the columns of Figure~\ref{star-3:bits} that offer no delineation. Because a query may differ from the stored keys precisely in the skipped bits, one should also check the final result of a query for agreement with the query string.

Instead of adding everything to one big trie, whose size would be limited by the range of the index data type, we have pointers to sub-blocks; this is illustrated in Figure~\ref{star-4}. This allows us to shrink the data type significantly: a non-empty complete binary tree that has $n$ leaves (order $n$) has $n - 1$ internal nodes, or branches. The left branch count is greatest when the right node of the root is a leaf, that is, $n - 2$. If we set this left-branch maximum to the maximum of the data type, we can use its whole range. Practically, we set it to 254 instead of 255: the branches then take up 255 bytes and the number holding the branch count takes 1, giving 256 for alignment. This is a fixed-maximum-size trie within a B-tree.

Because of overlapping terminology, some care must be used in describing the structure. We use the notion that a trie is a forest of trees. What B-tree terminology calls nodes are now trees, such that a complete path through the forest always visits the same number of trees. However, since it is always the root, rather than the middle item, that is promoted, we are not guaranteed fullness. Depending on the implementation, a complete path might be truncated; for example, \code{Sol} might require fewer trees than \code{Betelgeuse}, because it is shorter.[ref b-tree]

\subsection{Trees}

A Patricia trie is a full, non-empty binary tree. Keeping it non-empty avoids alternating between zero and one items, which would otherwise always need to allocate and free memory. Fixme: store the order instead of the branches.

\subsection{Structure in Memory}

Like a node in a B-tree, a tree structure is a contiguous chunk of memory; as an example, refer to Figure~\ref{star-3:mem}. A tree's $\mathit{size}$, the number of nodes, is composed of internal nodes, $\mathit{branch}$, and external nodes, $\mathit{leaf}$: $\mathit{size} = \mathit{branch} + \mathit{leaf}$. As a non-empty complete binary tree, $\mathit{leaf} = \mathit{branch} + 1 \rightarrow \mathit{size} = 2\mathit{branch} + 1$. We arrange the branches pre-order in an array; thus a forward read visits them in topologically sorted order. If the number of branches remaining is initialized to $b = \mathit{branch}$, we can make the tree semi-implicit by storing only the $\mathit{left}$ branch counts and calculating $\mathit{right} = b - \mathit{left}$; when $b = 0$, the accumulator, $l$, is the position of the leaf. This is a succinct encoding[?]. (A C sketch of this lookup is given below.)

The above wastes half of $\mathit{left}$'s dynamic range, in the best case, because a branch that is $e$ entries from the end of the array can have at most $\mathit{left} = e$ (so the final entry is always zero); the extreme is a tree with all left branches. The leaves, which are pointers to data or to another tree, are kept separate from the branches, which encode the decision bits of the key string being looked up. Only one leaf lookup is performed per tree.

\subsection{Machine Considerations}

The data layout has been engineered for maximum cache effectiveness in reading and traversing: the tree structure and the string decisions have each been reduced to a byte and placed at the top of the tree's data structure, and traversal always moves forward in memory.
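The following C sketch realises the semi-implicit lookup described in the previous subsection (the field names and fixed sizes are illustrative assumptions, not necessarily those of the implementation); it uses \texttt{key\_bit} from the sketch in the introduction.
\begin{verbatim}
#include <stddef.h>

struct tree {                /* one fixed-order tree, order 256 */
    unsigned char bsize;     /* number of branches              */
    unsigned char skip[255]; /* do-not-care bits, per branch    */
    unsigned char left[255]; /* branches in the left sub-tree   */
    const char *leaf[256];   /* keys (in the full structure,
                                possibly links to further trees) */
};

/* Returns the single candidate leaf for key; because skipped bits are
   "do not care", the caller must still compare key with the result. */
static const char *tree_match(const struct tree *t, const char *key) {
    size_t b = t->bsize;     /* branches remaining in this sub-tree */
    size_t br = 0, lf = 0;   /* branch index and leaf accumulator   */
    size_t bit = 0;          /* current decision bit of the key     */
    while (b) {
        size_t left = t->left[br];
        bit += t->skip[br];         /* jump the do-not-care bits     */
        if (!key_bit(key, bit)) {   /* 0: go left                    */
            b = left;
            br++;
        } else {                    /* 1: go right                   */
            br += left + 1;         /* skip the left sub-tree        */
            lf += left + 1;         /* ... and its leaves            */
            b  -= left + 1;
        }
        bit++;                      /* the decision bit is consumed  */
    }
    return t->leaf[lf];
}
\end{verbatim}
On the trie of Figure~\ref{star-3}, for example, a query for \code{Vega} takes the right branch at bits 3 and 5 and accumulates the leaf index 2. Note that the lookup only ever moves forward through the \texttt{skip}, \texttt{left}, and \texttt{leaf} arrays, which is the forward-in-memory traversal described above.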
The size of this sub-structure should be a multiple of the cache line size, while also maximizing the dynamic range of $\mathit{left}$; a trie (also a B-tree) of order 256 is an obvious choice \cite{sinha2004cache}.

\subsection{Running Time}

The $O(\log n)$ running time holds only when the strings placed in the trie are bounded; the worst case is a set such as $\{ a, aa, aaa, aaaa, \ldots \}$. In the case where the strings are bounded, as the tree grows we can guarantee . . .

Tries are a fairly succinct encoding at small sizes, but at large sizes the internal trees add up: in the limit, with bounded strings, do we need $n \log_{\text{order}} n$ space to store $n$ items? In practice, if one sets the order high enough, this does not appear to be an issue.

\subsection{Limits}

The skip value is limited by its range: in this case 255 bits, nearly 32 bytes ($8 \times 32 = 256$) of key in which no branching decision occurs. A key set whose adjacent decision bits are further apart than this cannot be represented; keys such as $\{ dictator, dictionary, dictionaries \}$ are nowhere near the limit. There are several modifications that would remove the restriction, but they are out of this scope.

Insertion: any leaf in the queried sub-tree will do; this implementation favours the left side. On a split, do we have to look locally and see whether neighbouring trees can be joined? We do not need to store any extra data if the leaf-trees are a different type from the branch-trees?

Candidate layouts under consideration:
\begin{verbatim}
trie:         root, height
branch tree:  bsize, +branchtree, branch[o-1], leaf[o]
leaf tree:    bsize, ?, branch[o-1], is_recursive_bmp[o/8], leaf[o]
or leaf tree: bsize, ?, branch[o-1], leaf[o] { skip, union{ data, trie } }
or leaf tree: bsize, ?, branch[o-1], leaf[o] { 32:skip, 32:height, 64:root / 64:data }
\end{verbatim}

ASCII and other encodings do not split the glyphs evenly between bit patterns; therefore, even with a uniform distribution of strings, it is very unlikely that the trie will be balanced. In our trie, this results in wasted space.

\bibliography{trie}
\end{document}
{ "alphanum_fraction": 0.6995924822, "avg_line_length": 42.264781491, "ext": "tex", "hexsha": "0306fcc6f4979d8c9bfb67cb6a7fc94bd1ede686", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "38623fb7ede9fa5725df30654d89af8e8ad0f049", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "neil-edelman/Trie", "max_forks_repo_path": "web/trie.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "38623fb7ede9fa5725df30654d89af8e8ad0f049", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "neil-edelman/Trie", "max_issues_repo_path": "web/trie.tex", "max_line_length": 838, "max_stars_count": null, "max_stars_repo_head_hexsha": "38623fb7ede9fa5725df30654d89af8e8ad0f049", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "neil-edelman/Trie", "max_stars_repo_path": "web/trie.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5593, "size": 16441 }
\documentclass[13pt,onlymath]{beamer} \usefonttheme{serif} \usepackage{graphicx,amsmath,amssymb,tikz,psfrag,epstopdf,fancyvrb} \usepackage[lighttt]{lmodern} %\usepackage{graphicx,psfrag} \input defs.tex %% formatting \mode<presentation> { \usetheme{default} } \setbeamertemplate{navigation symbols}{} \usecolortheme[rgb={0.13,0.28,0.59}]{structure} \setbeamertemplate{itemize subitem}{--} \setbeamertemplate{frametitle} { \begin{center} {\large\bf \insertframetitle} \end{center} } \newcommand\footlineon{ \setbeamertemplate{footline} { \begin{beamercolorbox}[ht=2.5ex,dp=1.125ex,leftskip=.8cm,rightskip=.6cm]{structure} \footnotesize \insertsection \hfill {\insertframenumber} \end{beamercolorbox} \vskip 0.45cm } } \footlineon \AtBeginSection[] { \begin{frame}<beamer> \frametitle{Outline} \tableofcontents[currentsection,currentsubsection] \end{frame} } %% begin presentation \title{\large \bfseries Dynamic Programming} \author{Jaehyun Park\\[3ex] CS 97SI\\ Stanford University} \date{\today} \begin{document} \frame{ \thispagestyle{empty} \titlepage } \section{Dynamic Programming} \begin{frame}{What is DP?} \BIT \item Wikipedia definition: ``method for solving complex problems by breaking them down into simpler subproblems'' \vfill \item This definition will make sense once we see some examples \BIT \item Actually, we'll only see problem solving examples today \EIT \EIT \end{frame} \begin{frame}{Steps for Solving DP Problems} \begin{enumerate} \item Define subproblems \item Write down the recurrence that relates subproblems \item Recognize and solve the base cases \end{enumerate} \vfill \BIT \item Each step is very important! \EIT \end{frame} \section{1-dimensional DP} \begin{frame}{1-dimensional DP Example} \BIT \item Problem: given $n$, find the number of different ways to write $n$ as the sum of 1, 3, 4 \item Example: for $n=5$, the answer is 6 \BEAS 5 &=& 1+1+1+1+1 \\ &=& 1+1+3 \\ &=& 1+3+1 \\ &=& 3+1+1 \\ &=& 1+4 \\ &=& 4+1 \EEAS \EIT \end{frame} \begin{frame}{1-dimensional DP Example} \BIT \item Define subproblems \BIT \item Let $D_n$ be the number of ways to write $n$ as the sum of 1, 3, 4 \EIT \item Find the recurrence \BIT \item Consider one possible solution $n = x_1 + x_2 + \cdots + x_m$ \item If $x_m = 1$, the rest of the terms must sum to $n-1$ \item Thus, the number of sums that end with $x_m=1$ is equal to $D_{n-1}$ \item Take other cases into account ($x_m=3$, $x_m=4$) \EIT \EIT \end{frame} \begin{frame}{1-dimensional DP Example} \BIT \item Recurrence is then \[ D_n = D_{n-1} + D_{n-3} + D_{n-4} \] \item Solve the base cases \BIT \item $D_0 = 1$ \item $D_n = 0$ for all negative $n$ \item Alternatively, can set: $D_0 = D_1 = D_2 = 1$, and $D_3 = 2$ \EIT \vfill \item We're basically done! \EIT \end{frame} \begin{frame}[fragile]{Implementation} \begin{Verbatim}[xleftmargin=25pt] D[0] = D[1] = D[2] = 1; D[3] = 2; for(i = 4; i <= n; i++) D[i] = D[i-1] + D[i-3] + D[i-4]; \end{Verbatim} \BIT \item Very short! 
\item Extension: solving this for huge $n$, say $n \approx 10^{12}$ \BIT \item Recall the matrix form of Fibonacci numbers \EIT \EIT \end{frame} \begin{frame}{POJ 2663: Tri Tiling} \BIT \item Given $n$, find the number of ways to fill a $3 \times n$ board with dominoes \vfill \item Here is one possible solution for $n=12$ \begin{center} \includegraphics[height=0.3\textheight]{figures/tritiling} \end{center} \EIT \end{frame} \begin{frame}{POJ 2663: Tri Tiling} \BIT \item Define subproblems \BIT \item Define $D_n$ as the number of ways to tile a $3 \times n$ board \EIT \vfill \item Find recurrence \BIT \item Uuuhhhhh... \EIT \EIT \end{frame} \begin{frame}{Troll Tiling} \begin{center} \includegraphics[height=0.7\textheight]{figures/trolltiling} \end{center} \end{frame} \begin{frame}{Defining Subproblems} \BIT \item Obviously, the previous definition didn't work very well \item $D_n$'s don't relate in simple terms \vfill \item What if we introduce more subproblems? \EIT \end{frame} \begin{frame}{Defining Subproblems} \begin{center} \includegraphics[height=0.7\textheight]{figures/tritiling_sub1} \end{center} \end{frame} \begin{frame}{Finding Recurrences} \begin{center} \includegraphics[height=0.7\textheight]{figures/tritiling_sub2} \end{center} \end{frame} \begin{frame}{Finding Recurrences} \BIT \item Consider different ways to fill the $n$th column \BIT \item And see what the remaining shape is \EIT \item Exercise: \BIT \item Finding recurrences for $A_n$, $B_n$, $C_n$ \item Just for fun, why is $B_n$ and $E_n$ always zero? \EIT \vfill \item Extension: solving the problem for $n \times m$ grids, where $n$ is small, say $n \le 10$ \BIT \item How many subproblems should we consider? \EIT \EIT \end{frame} \section{2-dimensional DP} \begin{frame}[fragile]{2-dimensional DP Example} \BIT \item Problem: given two strings $x$ and $y$, find the longest common subsequence (LCS) and print its length \item Example: \BIT \item $x$: {\ttfamily A{\bfseries BC}BD{\bfseries AB}} \item $y$: {\ttfamily {\bfseries B}D{\bfseries CAB}C} \item ``\texttt{BCAB}'' is the longest subsequence found in both sequences, so the answer is 4 \EIT \EIT \end{frame} \begin{frame}{Solving the LCS Problem} \BIT \item Define subproblems \BIT \item Let $D_{ij}$ be the length of the LCS of $x_{1\ldots i}$ and $y_{1\ldots j}$ \EIT \item Find the recurrence \BIT \item If $x_i = y_j$, they both contribute to the LCS \BIT \item $D_{ij} = D_{i-1, j-1} + 1$ \EIT \item Otherwise, either $x_i$ or $y_j$ does not contribute to the LCS, so one can be dropped \BIT \item $D_{ij} = \max\{D_{i-1,j}, D_{i, j-1}\}$ \EIT \item Find and solve the base cases: $D_{i0} = D_{0j} = 0$ \EIT \EIT \end{frame} \begin{frame}[fragile]{Implementation} \begin{Verbatim}[xleftmargin=25pt] for(i = 0; i <= n; i++) D[i][0] = 0; for(j = 0; j <= m; j++) D[0][j] = 0; for(i = 1; i <= n; i++) { for(j = 1; j <= m; j++) { if(x[i] == y[j]) D[i][j] = D[i-1][j-1] + 1; else D[i][j] = max(D[i-1][j], D[i][j-1]); } } \end{Verbatim} \end{frame} \section{Interval DP} \begin{frame}{Interval DP Example} \BIT \item Problem: given a string $x = x_{1\ldots n}$, find the minimum number of characters that need to be inserted to make it a palindrome \vfill \item Example: \BIT \item $x$: \texttt{Ab3bd} \item Can get ``\texttt{dAb3bAd}'' or ``\texttt{Adb3bdA}'' by inserting 2 characters (one `\texttt d', one `\texttt A') \EIT \EIT \end{frame} \begin{frame}{Interval DP Example} \BIT \item Define subproblems \BIT \item Let $D_{ij}$ be the minimum number of characters that need to be inserted to 
make $x_{i\ldots j}$ into a palindrome
\EIT
\item Find the recurrence
\BIT
\item Consider a shortest palindrome $y_{1\ldots k}$ containing $x_{i\ldots j}$
\item Either $y_1 = x_i$ or $y_k = x_j$ (why?)
\item $y_{2 \ldots k-1}$ is then an optimal solution for $x_{i+1 \ldots j}$ or $x_{i \ldots j-1}$ or $x_{i+1 \ldots j-1}$
\BIT
\item Last case possible only if $y_1 = y_k = x_i = x_j$
\EIT
\EIT
\EIT
\end{frame}

\begin{frame}{Interval DP Example}
\BIT
\item Find the recurrence
\[ D_{ij} = \begin{cases} 1 + \min\{D_{i+1, j}, D_{i, j-1}\} & x_i \ne x_j \\ D_{i+1, j-1} & x_i = x_j \end{cases} \]
\item Find and solve the base cases: $D_{ii} = D_{i, i-1} = 0$ for all $i$
\vfill
\item The entries of $D$ must be filled in increasing order of $j-i$
\EIT
\end{frame}

\begin{frame}[fragile]{Interval DP Example}
\begin{Verbatim}[xleftmargin=25pt]
// fill in base cases here
for(t = 2; t <= n; t++)
    for(i = 1, j = t; j <= n; i++, j++)
        // fill in D[i][j] here
\end{Verbatim}
\vfill
\BIT
\item Note how we use an additional variable \verb.t. to fill the table in the correct order
\item And yes, for loops can work with multiple variables
\EIT
\end{frame}

\begin{frame}{An Alternate Solution}
\BIT
\item Reverse $x$ to get $x^R$
\item The answer is $n-L$, where $L$ is the length of the LCS of $x$ and $x^R$
\vfill
\item Exercise: Think about why this works
\EIT
\end{frame}

\section{Tree DP}

\begin{frame}{Tree DP Example}
\BIT
\item Problem: given a tree, color as many nodes black as possible without coloring two adjacent nodes
\vfill
\item Subproblems:
\BIT
\item First, we arbitrarily pick the root node $r$
\item $B_v$: the optimal solution for a subtree having $v$ as the root, where we color $v$ black
\item $W_v$: the optimal solution for a subtree having $v$ as the root, where we don't color $v$
\item Answer is $\max\{B_r, W_r\}$
\EIT
\EIT
\end{frame}

\begin{frame}{Tree DP Example}
\BIT
\item Find the recurrence
\BIT
\item Crucial observation: once $v$'s color is determined, subtrees can be solved independently
\item If $v$ is colored, its children must not be colored
\[ B_v = 1 + \sum_{u \in \mathrm{children}(v)} W_u \]
\item If $v$ is not colored, its children can have any color (and $v$ itself contributes nothing)
\[ W_v = \sum_{u \in \mathrm{children}(v)} \max\{B_u, W_u\} \]
\EIT
\vfill
\item Base cases: leaf nodes
\EIT
\end{frame}

\section{Subset DP}

\begin{frame}{Subset DP Example}
\BIT
\item Problem: given a weighted graph with $n$ nodes, find the shortest path that visits every node exactly once (Traveling Salesman Problem)
\vfill
\item Wait, isn't this an NP-hard problem?
\BIT
\item Yes, but we can solve it in $O(n^2 2^n)$ time
\item Note: brute force algorithm takes $O(n!)$ time
\EIT
\EIT
\end{frame}

\begin{frame}{Subset DP Example}
\BIT
\item Define subproblems
\BIT
\item $D_{S, v}$: the length of the optimal path that visits every node in the set $S$ exactly once and ends at $v$
\item There are approximately $n 2^n$ subproblems
\item Answer is $\min_{v \in V} D_{V, v}$, where $V$ is the given set of nodes
\EIT
\vfill
\item Let's solve the base cases first
\BIT
\item For each node $v$, $D_{\{v\}, v} = 0$
\EIT
\EIT
\end{frame}

\begin{frame}{Subset DP Example}
\BIT
\item Find the recurrence
\BIT
\item Consider a path that visits all nodes in $S$ exactly once and ends at $v$
\item Right before arriving at $v$, the path comes from some $u$ in $S - \{v\}$
\item And that subpath has to be the optimal one that covers $S-\{v\}$, ending at $u$
\item We just try all possible candidates for $u$
\EIT
\EIT
\vfill
\[ D_{S, v} = \min_{u \in S-\{v\}} \left(D_{S-\{v\}, u} + \mathrm{cost}(u, v) \right) \]
\end{frame}

\begin{frame}{Working with Subsets}
\BIT
\item When working with subsets, it's good to have a nice representation of sets
\item Idea: Use an integer to represent a set
\BIT
\item Concise representation of subsets of small integers $\{0, 1, \ldots\}$
\item If the $i$th (least significant) digit is 1, $i$ is in the set
\item If the $i$th digit is 0, $i$ is not in the set
\item \eg, $19 = \mathtt{010011}_{(2)}$ in binary represents the set $\{0, 1, 4\}$
\EIT
\EIT
\end{frame}

\begin{frame}[fragile]{Using Bitmasks}
\BIT
\item Union of two sets \verb,x, and \verb,y,: \verb,x | y,
\item Intersection: \verb,x & y,
\item Symmetric difference: \verb,x ^ y,
\item Singleton set $\{i\}$: \verb,1 << i,
\item Membership test: \verb,(x & (1 << i)) != 0, (note the parentheses: \verb,!=, binds tighter than \verb,&,)
\EIT
\end{frame}

\begin{frame}{Conclusion}
\BIT
\item Wikipedia definition: ``a method for solving complex problems by breaking them down into simpler subproblems''
\BIT
\item Does this make sense now?
\EIT
\vfill
\item Remember the three steps!
\begin{enumerate}
\item Defining subproblems
\item Finding recurrences
\item Solving the base cases
\end{enumerate}
\EIT
\end{frame}

\end{document}
{ "alphanum_fraction": 0.688722067, "avg_line_length": 25.289183223, "ext": "tex", "hexsha": "2695193bc4b89ecf824e58dac5602414c67526a5", "lang": "TeX", "max_forks_count": 598, "max_forks_repo_forks_event_max_datetime": "2022-03-15T20:25:05.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-03T10:50:20.000Z", "max_forks_repo_head_hexsha": "1cc79c15e8e0e9c27e1470c7400cdb50aaa6bb82", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Charleo85/stanfordacm", "max_forks_repo_path": "97si_slides/dynamic_programming.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "1cc79c15e8e0e9c27e1470c7400cdb50aaa6bb82", "max_issues_repo_issues_event_max_datetime": "2019-04-26T01:54:14.000Z", "max_issues_repo_issues_event_min_datetime": "2015-05-03T17:12:19.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Charleo85/stanfordacm", "max_issues_repo_path": "97si_slides/dynamic_programming.tex", "max_line_length": 141, "max_stars_count": 1624, "max_stars_repo_head_hexsha": "1cc79c15e8e0e9c27e1470c7400cdb50aaa6bb82", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Charleo85/stanfordacm", "max_stars_repo_path": "97si_slides/dynamic_programming.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T17:26:03.000Z", "max_stars_repo_stars_event_min_datetime": "2015-08-11T03:23:37.000Z", "num_tokens": 3957, "size": 11456 }
% Some basic packages \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[a4paper, total={6in, 8in}]{geometry} \usepackage{tikz} \usepackage{textcomp} \usepackage{url} \usepackage{graphicx} \usepackage{float} \usepackage{booktabs} \usepackage{enumitem} \usepackage{arydshln} \delimitershortfall=0pt \setlength{\dashlinegap}{2pt} \usepackage{enumerate} \usepackage{amsmath,amssymb,fancyhdr,array} \pagestyle{fancy} \usepackage{xintfrac, xintbinhex, xinttools} \rhead{1608/1609 WS18/19} \makeatletter \def\DtoB@get@ND #1/#2[#3]{% #2 must be = 1 here, if input was in % decimal notation \edef\DtoB@s{\xintiiSgn{#1}}% \edef\DtoB@A{\xintiiAbs{#1}}% \def\DtoB@L{#3}% \ifnum#3<\z@ \let\DtoB@N\DtoB@A \edef\DtoB@D{\xintiiE{1}{-#3}}% \else \edef\DtoB@N{\xintiiE{\DtoB@A}{#3}}% \def\DtoB@D{1}% \fi }% \newcommand\ParseFromDecimalToIEEEBinary[1]{% % assume #1 is a decimal number (I will write code for fractions % another day) \expandafter\DtoB@get@ND\romannumeral0\xintrez{#1}% % we should here handle \DtoB@s = 0 but no time tonight, assume input non-zero \edef\DtoB@S@bit{\the\numexpr(1-\DtoB@s)/2}% 1 if number < 0, 0 if number > 0 \edef\DtoB@U{\xintDecToBin{\DtoB@N}}% \edef\DtoB@V{\xintDecToBin{\DtoB@D}}% \edef\DtoB@Uk{\the\numexpr\expandafter\xintLength\expandafter{\DtoB@U}-\@ne}% \edef\DtoB@Vl{\the\numexpr\expandafter\xintLength\expandafter{\DtoB@V}-\@ne}% % next step should perhaps compare k and l first % important that we are comparing here two strings of 1s and 0s of % exact same length \ifnum\pdfstrcmp{\DtoB@U\romannumeral\xintreplicate{\DtoB@Vl}{0}} {\DtoB@V\romannumeral\xintreplicate{\DtoB@Uk}{0}}=\m@ne \edef\DtoB@E{\the\numexpr\DtoB@Uk-\DtoB@Vl-\@ne}% \else \edef\DtoB@E{\the\numexpr\DtoB@Uk-\DtoB@Vl}% \fi \edef\DtoB@Eshifted@bits{\expandafter\@gobble \romannumeral0\xintdectobin{\the\numexpr \DtoB@E + 127 + 256\relax}}% \ifnum\DtoB@E>23 \edef\DtoB@f@frac{\DtoB@A/\xintiiPow{2}{\DtoB@E-23}[\DtoB@L]}% % use rather bintodec conversion of 10000...000 in above? \else \edef\DtoB@f@frac{\xintiiMul{\DtoB@A}{\xintiiPow{2}{23-\DtoB@E}}/1[\DtoB@L]}% \fi \edef\DtoB@f@int{\xintNum{\DtoB@f@frac}}% truncates to an int \edef\DtoB@M@bits{\expandafter\@gobble \romannumeral0\xintdectobin{\DtoB@f@int}}% %\edef\DtoBresult{\DtoB@S@bit\DtoB@Eshifted@bits\DtoB@M@bits}% \let\IEEEsign\DtoB@S@bit \let\IEEEexponent\DtoB@Eshifted@bits \let\IEEEmantissa\DtoB@M@bits }% \makeatother \newcommand\ieee[1]{% \ParseFromDecimalToIEEEBinary{#1}% \begin{center} {\footnotesize\setlength{\tabcolsep}{1pt}\begin{tabular}[t]{|*{32}{w{c}{1em}|}} \firsthline \xintListWithSep{&}{\xintSeq[-1]{31}{0}}\\ \hline S \romannumeral\xintreplicate{8}{&E}\romannumeral\xintreplicate{23}{&M}\\ \hline \IEEEsign & \xintListWithSep{&}{\IEEEexponent} & \xintListWithSep{&}{\IEEEmantissa}\\ \hline \end{tabular}\par}% \end{center} } % Don't indent paragraphs, leave some space between them \usepackage{parskip} \usepackage{subcaption} \usepackage{multicol} \usepackage{xcolor} % Other font I sometimes use. 
% \usepackage{cmbright} % Math stuff \usepackage{amsmath, amsfonts, mathtools, amsthm, amssymb, braket} % Fancy script capitals \usepackage{mathrsfs} \usepackage{cancel} % Bold math \usepackage{bm} %Make implies and impliedby shorter \let\implies\Rightarrow \let\impliedby\Leftarrow \let\iff\Leftrightarrow \let\epsilon\varepsilon % Add \contra symbol to denote contradiction \usepackage{stmaryrd} % for \lightning \newcommand\contra{\scalebox{1.5}{$\lightning$}} % \let\phi\varphi % Command for short corrections % Usage: 1+1=\correct{3}{2} \definecolor{correct}{HTML}{009900} \newcommand\correct[2]{\ensuremath{\:}{\color{red}{#1}}\ensuremath{\to }{\color{correct}{#2}}\ensuremath{\:}} \newcommand\green[1]{{\color{correct}{#1}}} % horizontal rule \newcommand\hr{ \noindent\rule[0.5ex]{\linewidth}{0.5pt} } % hide parts \newcommand\hide[1]{} % Environments \makeatother % For box around Definition, Theorem, \ldots \usepackage{mdframed} \mdfsetup{skipabove=1em,skipbelow=0em} \theoremstyle{definition} \newtheorem*{definition}{Definition} \newtheorem*{recall}{Recall} \newtheorem*{property}{Property} \newtheorem*{consequence}{Consequence} \newtheorem*{lemma}{Lemma} \newtheorem*{proposition}{Proposition} \newtheorem*{theorem}{Theorem} \newmdtheoremenv[nobreak=true]{law}{Law} \newmdtheoremenv[nobreak=true]{corollary}{Corollary} \newmdtheoremenv{conclusion}{Conclusion} \newmdtheoremenv{assumption}{Assumption} \newtheorem*{intermezzo}{Intermezzo} \newtheorem*{observation}{Observation} \newtheorem*{exercise}{Exercise} \newtheorem*{remark}{Remark} \newtheorem*{problem}{Problem} \newtheorem*{terminology}{Terminology} \newtheorem*{application}{Application} \newtheorem*{question}{Question} \newtheorem*{example}{Example} \newtheorem*{notation}{Notation} \newtheorem*{previouslyseen}{As previously seen} \newcommand{\matr}[1]{\mathbf{#1}} % undergraduate algebra version \newcommand{\matri}[1]{$\mathbf{#1}$} % undergraduate algebra version \renewcommand{\mod}[1]{\left|\matr{#1}\right|} \newcommand\p[2]{\ensuremath\frac{\partial #1}{\partial #2}} \newcommand{\rank}[1]{\text{rank}\left(\mathbf{#1}\right)} \newcommand{\inverse}{^{-1}} \newcommand{\augmented}[2]{\begin{array}{c:c}\mathbf{#1} & \mathbf{#2}\end{array}} \newcommand{\rowequiv}{\mathrel{\underset{\sim}{R}}} \newtheorem*{note}{Note} \newmdtheoremenv[nobreak=true]{prop}{Proposition} % End example and intermezzo environments with a small diamond (just like proof % environments end with a small square) \usepackage{etoolbox} \AtEndEnvironment{vb}{\null\hfill$\diamond$}% \AtEndEnvironment{intermezzo}{\null\hfill$\diamond$}% % \AtEndEnvironment{opmerking}{\null\hfill$\diamond$}% % Fix some spacing % http://tex.stackexchange.com/questions/22119/how-can-i-change-the-spacing-before-theorems-with-amsthm \makeatletter \def\thm@space@setup{% \thm@preskip=\parskip \thm@postskip=0pt } \renewcommand*\env@matrix[1][*\c@MaxMatrixCols c]{% \hskip -\arraycolsep \let\@ifnextchar\new@ifnextchar \array{#1}} % Exercise % Usage: % \oefening{5} % \suboefening{1} % \suboefening{2} % \suboefening{3} % gives % Oefening 5 % Oefening 5.1 % Oefening 5.2 % Oefening 5.3 \newcommand{\oefening}[1]{% \def\@oefening{#1}% \subsection*{Oefening #1} } \newcommand{\suboefening}[1]{% \subsubsection*{Oefening \@oefening.#1} } \newcommand{\prob}{\mathbb P} % \lecture starts a new lecture (les in dutch) % % Usage: % \lecture{1}{di 12 feb 2019 16:00}{Inleiding} % % This adds a section heading with the number / title of the lecture and a % margin paragraph with the date. 
% I use \dateparts here to hide the year (2019). This way, I can easily parse % the date of each lecture unambiguously while still having a human-friendly % short format printed to the pdf. \usepackage{xifthen} \def\testdateparts#1{\dateparts#1\relax} \def\dateparts#1 #2 #3 #4 #5\relax{ \marginpar{\small\textsf{\mbox{#1 #2 #3 #5}}} } \def\@lecture{}% \newcommand{\lecture}[3]{ \ifthenelse{\isempty{#3}}{% \def\@lecture{Lecture #1}% }{% \def\@lecture{Lecture #1: #3}% }% \subsection*{\@lecture} %No date % \marginpar{\small\textsf{\mbox{#2}}} } % These are the fancy headers % LE: left even % RO: right odd % CE, CO: center even, center odd % My name for when I print my lecture notes to use for an open book exam. % \fancyhead[LE,RO]{Gilles Castel} \usepackage{fancyhdr} \pagestyle{fancy} \fancyhf{} \fancyhead[RO,LE]{Giorgio Grigolo} % Right odd, Left even \fancyfoot[LE, RO]{\thepage} % Todonotes and inline notes in fancy boxes \usepackage{todonotes} \usepackage{tcolorbox} % Make boxes breakable \tcbuselibrary{breakable} % Figure support as explained in my blog post. \usepackage{import} \usepackage{xifthen} \usepackage{pdfpages} \usepackage{transparent} \newcommand{\incfig}[1]{% \def\svgwidth{\columnwidth} \import{./figures/}{#1.pdf_tex} } %http://tex.stackexchange.com/questions/76273/multiple-pdfs-with-page-group-included-in-a-single-page-warning \pdfsuppresswarningpagegroup=1 % My name \author{Giorgio Grigolo}
{ "alphanum_fraction": 0.7061799783, "avg_line_length": 28.3310580205, "ext": "tex", "hexsha": "5ae726233ebb2161ae9a0c8eebcbffaadb1645db", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "girogio/university-notes", "max_forks_repo_path": "bachelor-1/semester-1/preamble.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "girogio/university-notes", "max_issues_repo_path": "bachelor-1/semester-1/preamble.tex", "max_line_length": 110, "max_stars_count": null, "max_stars_repo_head_hexsha": "52c6db020148fa3636c01f1e343d2172b424cd13", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "girogio/university-notes", "max_stars_repo_path": "bachelor-1/semester-1/preamble.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3025, "size": 8301 }
\PassOptionsToPackage{hyphens}{url} \documentclass[acmsmall]{acmart} \usepackage{hyperref} \def\httpURL#1{\href{http://#1}{\textcolor{blue}{#1}}} \def\LONGhttpURL#1#2{\href{http://#1}{\textcolor{blue}{#2}}} \setcopyright{acmcopyright} \copyrightyear{2021} \acmYear{2021} \acmDOI{} \makeatletter \@printpermissionfalse \def\@copyrightowner{Harold Thimbleby} \def\@acmPrice{} %\def\do@url@hyp{\do\-} \def\do@url@hyp{\do/} \usepackage{moreverb,url} \acmJournal{TOSEM} \def\@journalName{Submitted ACM Transactions on Software Engineering and Methodology}% \def\@journalNameShort{Submitted ACM Trans. Softw. Eng. Methodol.}% \def\@permissionCodeOne{1049-331X}%\def\tablename{Listing} \renewcommand{\thefootnote}{\arabic{footnote}} \makeatother \input paper-seb-macros.tex \IfFileExists{NOPEpaper-seb-generated-info-for-main.tex}{% \input paper-seb-generated-info-for-main.tex }{\typeout{No paper-seb-generated-info-for-main.tex - error ignored, but you need to latex paper-seb-supplementary-material.tex to generate it}} \begin{document} \title{\mytitle} \author{Harold Thimbleby} \renewcommand{\shortauthors}{DRAFT | Thimbleby} \affiliation{\institution{See Change Fellow in Digital Health} \country{Wales}} \email{[email protected]} \orcid{0000-0003-2222-4243} % To make various LaTeX processors do the right thing with page size. \def\pprw{8.5in} \def\pprh{11in} \special{papersize=\pprw,\pprh} \setlength{\paperwidth}{\pprw} \setlength{\paperheight}{\pprh} \setlength{\pdfpagewidth}{\pprw} \setlength{\pdfpageheight}{\pprh} \begin{abstract} \noindent \emph{Background:} Computer code underpins modern science, and at the present time has a crucial role in leading our response to the COVID-19 pandemic. While models are routinely criticised for their assumptions, the algorithms and the quality of code implementing them often avoid scrutiny and, hence, scientific conclusions cannot be rigorously justified. \hskip 1em\noindent\emph{Problem:} Assumptions in programs are hard to scrutinise as they are rarely explicit in published work. In addition, both algorithms and code have bugs, effectively unknown assumptions that have unwanted effects, undermining the rigour of claims. Code is fallible, so any model interpretation that relies on code is therefore fallible, and if the code is not published with adequate documentation, the code cannot be usefully scrutinised. In turn, the scientific claims cannot be properly scrutinised. \hskip 1em\noindent\emph{Solutions:} Code can be made \emph{much\/} more reliable using software engineering good practice. Three specific solutions are proposed. First, professional software engineers can help and should be involved in critical research. Secondly, ``Software Engineering Boards'' (supplementing and analogous to Ethics or Institutional Review Boards) must be instigated and used. Thirdly, code, when used, must be considered an intrinsic part of any publication, and therefore must be formally reviewed by competent software engineers. \hskip 1em\noindent\emph{Readership:} This paper reviews software practice in scientific modelling, and is therefore interdisciplinary. It evidences a call to action with an anticipated dual readership bridging the scientific modelling and software engineering communities. This paper's \supplement\ includes a summary of professional software engineering best practice, particularly as applied to scientific research and publication, however \emph{enabling\/} best practice is also addressed. 
\end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010405.10010444</concept_id> <concept_desc>Applied computing~Life and medical sciences</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010405.10010432</concept_id> <concept_desc>Applied computing~Physical sciences and engineering</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011074.10011081.10011082</concept_id> <concept_desc>Software and its engineering~Software development methods</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010405.10010497</concept_id> <concept_desc>Applied computing~Document management and text processing</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003456</concept_id> <concept_desc>Social and professional topics</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Applied computing~Life and medical sciences} \ccsdesc[500]{Applied computing~Physical sciences and engineering} \ccsdesc[500]{Software and its engineering~Software development methods} \ccsdesc[500]{Applied computing~Document management and text processing} \ccsdesc[500]{Social and professional topics} %% Keywords. The author(s) should pick words that accurately describe %% the work being presented. Separate the keywords with commas. \keywords{Scientific modelling; Dependability; Publishing} %% %% This command processes the author and affiliation and title %% information and builds the first part of the formatted document. \maketitle %\begin{quote} \hfill``Criticism is the mother of methodology.\hbox to 0em{''} \hfill Robert P Abelson, \emph{Statistics as Principled Argument}, 1995 %\end{quote} \section{Introduction}\label{problems} {I}{n good science}, all statistics, methods and results are reported very carefully and in detail. For example, a statistical claim might be summarised in words as follows: \begin{quote}\raggedright \setbox0=\hbox{``} \sf\hskip -\wd0\box0 %Random intercept linear mixed model suggesting significant time by intervention-arm interaction effect. \ldots\ Bonferroni adjusted estimated mean difference between intervention-arms at 8-weeks 2.52 ~($\sf 95\% \mbox{\sf ~CI~} 0.78, 4.27, p = 0.0009$). Between group effect size $\sf d = 0.55 ~(95\% \mbox{\sf ~CI~} 0.32, 0.77)$.'' \cite{example-stats}\end{quote} This may look dauntingly technical, but it is written in the standard and widely accepted form for summarising statistics. The sentence briefly summarises confidence intervals, $p$ levels, and so on, to present the statistics so the paper's claims can be seen to be complete, easy to interpret, and easy to scrutinise. It is a \emph{lingua franca}. Scientists write like this --- and conferences and journals require it --- because statistical claims need to be properly accountable and documented in a clear way. Speigelhalter \cite{Speigelhalter} says statistical information needs to be accessible, intelligible, assessable, and usable; he also suggests probing questions to help assess statistical quality (see \supplement\ section~\ref{supplementary-Speigelhalter-section}). Results should not be uncritically accepted just because they are claimed. The skill and effort required to do statistics so it can be communicated clearly and correctly, as above, is not to be taken for granted. 
Scientists work closely with competent, often specialist, statisticians who engage with the research from experiment design through to analysis and publication. Further, it is assumed that statistics will be peer reviewed, and that review will often lead to improvement. Moreover, we accept that statistics is a professional science and is itself a subject of research and continual improvement. Among other implications of the depth and progress of the field of statistics, undergraduate statistics options for general scientists are insufficient training for rigorous work in science --- their main value, perhaps, is to help scientists to collaborate with specialist statisticians. Except in the most trivial of cases, all numbers and graphs, and the statistics themselves in scientific research, will have been generated by computer. Indeed, computers are now very widely used, not just to calculate statistics, but to run the models (the term is defined below), do the data sampling or operate the sensors that generate the original data that is analysed. Computers are used to run web-based surveys. The data --- including the databases and bibliographic sources --- and code to analyse it is all stored and manipulated on computers. Computers, data, and computer code have become central to modern science. Using code raises many critical questions: formats, backup, cyber-vulnerability, version control, integrity checking (e.g., managing human error), auditing, debugging and testing, and more. All are non-trivial concerns requiring expertise to do reliably. Failure to properly document and explain computer code undermines the scientific value of the models and the results they generate, in the same way as failure to properly articulate statistics undermines the value of scientific claims. Indeed, as few papers use code that is as well-understood and as well-researched as standard statistical procedures (such as Bonferroni corrections), the scientific problems of poorly reported code are worse. Results claimed, then, however nicely they may be expressed, are no more reliable than the code and data that was used to generate them. A common oversight in scientific papers is to present a mathematical model, such as a set of differential equations, but omit how that model is transformed into code that generates the results the paper summarises. %When relevant details of models are omitted (or are not described in recognisable ways), it is impossible to properly scrutinise claims. Describing a model used, perhaps mathematically, or providing the code (e.g., to download) that implements it is as inadequate as just providing raw data without details of the statistics analysis supporting the claim. Just as in statistics, code and results from code, need to be discussed and presented in a way that ground belief in claims derived from using them. Specifically, code must be developed in a sufficiently professional and rigorous way that is able to support clear presentation and scrutiny (developing justifiably reliable code is the concern of the field of software engineering, which is discussed further below in this paper and more substantially in this paper's \supplement). To make critical claims, to inform public policy or to support further research, models need to be run under varying assumptions \cite{whitty}. Being able to understand (at least in principle) the exact code used in implementing a model is critical to having confidence in the results that rely on it. 
Unfortunately code is rarely considered a substantial part of the science to which it contributes. In contrast, we would not believe a statistical result that was obtained from some \emph{ad hoc\/} analysis with a new fangled method devised just for one project --- instead, we demand statistics that is recognisable, even traditional, so we are assured we understand what has been done and how reliable results were obtained. Importantly, with statistics we understand, at least in principle, how it works --- or, more precisely, how results \emph{should\/} have been obtained and why or to what extent we can depend on them. This paper will show that the problems raised above about scientific models are widespread and serious. Furthermore, code is rarely published in any useful form or professionally scrutinised, and therefore does not contribute to furthering science, for instance through reproduction. The quality of much science is undermined because the code it relies on is rarely of adequate quality \emph{for the uses to which it is put}, particularly when it influences health or public health. This paper will explore the extent of these problems and reasons why they persist. The paper additionally suggests some ways forward. The \supplement\ is an integral part of the suggested solutions. In particular, the \supplement\ section~\ref{supplementary-Speigelhalter-section} summarises Speigelhater's uncontroversial statistics probes and draws explicit analogies for the critical assessment of code. \subsection{The role of code in science and scientific publication} %Yet without code, models could not be run. For the purposes of this paper, models map theory and parameters to describe phenomena, typically to make predictions or to test and refine the models. With the possible exception of theoretical research, all but the simplest models require computers to evaluate; indeed even theoretical mathematics is now routinely performed by computer. Whereas the mathematical form of a model may be concise and readily explained, even a basic computational representation of a model can easily run to thousands of lines of code, and its parameters --- its data --- may also be extensive. The chance that a thousand lines of hand-written code is error free is negligible, and therefore good practice demands that checks and constraints must be applied to improve its reliability. This issue is a concern of software engineering, which is discussed throughout this paper. While scientific research may rely on relatively easily-scrutinised mathematical models, or models that seem in principle easy to mathematise, the models that are run on computers to obtain the results published are sometimes not disclosed, and even when they are (at least in all cases reviewed here) they are inscrutable --- and therefore the models are very likely to be unreliable \emph{in principle}. If code is not well-documented, this is not only a problem for reviewers and scientists reading the research to understand the intention of the code, but it also causes problems for the original researchers themselves: how can they understand its thinking well enough (e.g., a few weeks or months later) to maintain it correctly if has not been clearly documented? Without documentation, including a reasoned case to assure that the approach taken is sound \cite{assurance-case}, how do researchers, let alone reviewers, know exactly what they are doing? Without substantial documentation it is impossible to scrutinise code properly. 
Consider just the single line ``\texttt{y = k*exp(x)}'': there can be \emph{no\/} concept of its correctness \emph{unless\/} there is also an explicitly stated relation between the code and the mathematical specifications. What does it mean? Does it mean what was intended? What are the assumptions on \texttt{x}, and do they hold invariantly? Moreover, as code generally consists of thousands of such lines, with numerous inter-dependencies, plus calling on many libraries of support code, it is inevitable that the \emph{collective\/} meaning will be unknown. Without an explicit link to the relevant mathematics (depending on the claims), it is impossible to reason whether the code is correct, and in turn it is impossible to scientifically scrutinise results obtained from using the code.

Not providing code and documentation, providing partial code, or providing code without the associated reasoning is analogous to claiming ``the results are significant'' without any discussion of the relevant statistical methods and details that justify making such a claim. If such an unjustified pseudo-statistical claim was made in a scientific paper, a reviewer would be justified in wondering whether a competent experiment had even been performed --- it would be generous to ask the author to provide the missing details so the paper could be better reviewed on resubmission. {The same applies to code, and to its methods and reasoning.}

Clearly, like statistics, programming (coding) can be done poorly and reported poorly, or it can be done well and reported well --- and any mix between these extremes. The question is whether it matters, and if so, \emph{when\/} it matters and, when it does, \emph{what\/} can be done to \emph{appropriately\/} help improve the quality of code (and discussions about the code) in scientific work.

%This paper explores the role of code in scientific research.
Initially focusing on examples from pandemic modelling (because of its relevance to serious matters of public health), this paper shows that scientific culture, including editorial processes, has not adapted to the increasingly dominant role of code and to managing the risks of its fallibility. The paper then suggests how to improve, including building mutual engagement across the science and software engineering communities.

\subsection{Bugs, code and programming}\label{knowledge}

Critiques of data and model assumptions are very common \cite{critiques,diagnosis-reviews} but program code is rarely mentioned. Yet data and program are formally equivalent (see further discussion in \supplement, section \ref{on-code-data-publication}). Program code has as great an effect on results as the data. Code is harder to scrutinise, which means errors in code can have more subtle effects on results. It should be noted that almost all code contains ``magic numbers'' --- that is, data masquerading as code. This common practice ensures that published data is very rarely all of the data, because it omits the magic numbers. This emphasises the need for repositories to require the inclusion of code so all data is actually available. This is another reason why journal data policies should be applied to code too, so all data embedded in code is covered by the policies (see the longer discussion in the \supplement, section \ref{on-code-data-publication}).

Bugs can be understood as discrepancies between what code is intended to do and what it actually does.
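To illustrate (a minimal, hypothetical sketch: the names \texttt{K}, \texttt{X\_MAX} and \texttt{grow} are invented here and are not taken from any of the models discussed), assertions can turn the implicit assumptions behind a line such as \texttt{y = k*exp(x)} into explicit, checked documentation:
\begin{verbatim}
#include <assert.h>
#include <math.h>

#define K 0.35                      /* a named "magic number": the
                                       coefficient is now visible data */
static const double X_MAX = 700.0;  /* exp() overflows an IEEE double
                                       above roughly 709               */

double grow(double x)
{
    /* The contract documents, and checks, the assumptions on x. */
    assert(x >= 0.0 && x <= X_MAX);
    double y = K * exp(x);
    assert(isfinite(y));            /* terminate rather than deliver an
                                       incorrect result                */
    return y;
}
\end{verbatim}
A violated assumption then stops the program at the point of failure --- the ``fail safe'' behaviour discussed below --- and the named constant makes the data embedded in the code visible to reviewers.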
Many bugs cause erroneous results, but bugs may be ``fail safe'' by causing a program to crash so no incorrect result can be delivered. Better, contracts and assertions are essential defensive programming techniques that block compilation, or block execution with incorrect results; they turn bugs into safe termination. None of the code examined for this paper includes either. If code is not documented it cannot be clear what it is intended to do, so it is not even possible to detect and eliminate bugs. Indeed, even with good documentation, \emph{intentional bugs\/} will remain, that is, code that correctly implements the wrong things. For instance, in numerical modelling, using an inappropriate method can introduce errors that are not ``bugs'' in the sense of incorrectly implementing what was wanted (e.g., ill-conditioning), but are bugs in the sense of producing incorrect results --- that is, what was wanted was na\"\i ve. Similarly, misuse of random numbers (e.g., using standard libraries without testing them) is a common cause of bugs \cite{knuth}.

Following the Dunning-Kruger Effect \cite{dunning-kruger,dunning-kruger-numeracy}, unqualified programmers over-estimate their programming skills because they do not have the skills to recognise their lack of knowledge. Dunning and Kruger say,
\begin{quote}\sf\raggedright\setbox0=\hbox{``}\hskip -\wd0\copy0 People usually choose what they think is the most reasonable and optimal option \hbox{[\hskip 2pt\ldots]} The failure to recognise that one has performed poorly will instead leave one to assume that one has performed well; as a result, the incompetent will tend to grossly overestimate their skills and abilities. \hbox{[\hskip 2pt\ldots]} Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realise it.''\end{quote}

Unlike many skills (skating, brain surgery, \ldots), programming, typical of much engineering, is one where errors can go unnoticed for long periods of time --- things seem to work nicely right up to the moment they fail. The worse programmers are, the more trivial bugs they tend to make, but trivial bugs are easy to find so, ironically, being a poor programmer \emph{increases\/} one's self-assessment because debugging seems very productive. It is easy for poor programmers and their associates to believe they are better than they actually are.
%This is fertile ground for the better-than-average bias \cite{dunning-kruger}.
It sounds harsh to call any programmers incompetent, but challenged with the complexity of programs and the complexity of the domains programs are applied in, we are all incompetent and succumb to the limitations of our cognitive resources, creating blindspots in our thinking \cite{fixit}. We \emph{all\/} make mistakes we are unaware of. If we do not have the benefit of professional qualifications that have assessed us objectively, we therefore generally have a higher opinion of our own competence than is justified.

{Everyone is subject to Human Factors (including the author of the present paper, e.g., as discussed in \cite{enigma}): for instance, the standard cognitive bias of confirmation bias encourages us to look for bugs when code fails to do what is expected and then debug it to produce better results, but if code generates expected results not to bother to debug it further.
This of course tends to make code increasingly conform to prior expectations, whether or not those expectations are scientifically justified. Typically, there was no prior specification of the code, so the code must be right, especially after all the debugging to make it ``correct''! Thus coding routinely suffers from HARKing (Hypothesising After the Results are Known \cite{harking}), a methodological trap widely recognised in statistics.} Computers themselves are also a part of the problem. Na\"\i vely modifying a program (as may occur during debugging) can make it more complex, and hence less scrutable. In theory programs can be written so that it is not possible to determine what they do or how they do it (whether deliberate obfuscation or accidentally), except by running them, if indeed it is possible to exactly reproduce the necessary context to do so \cite{framework}. The point is, introducing bugs should be avoided so far as possible in the first place, and programs should routinely have assertions and other methods to detect those bugs that are introduced (see this paper's \supplement\ for more discussion of standard programming methodologies). \section{State of the art in pandemic modelling} A review of epidemic modelling \cite{science-review} says, ``we use the words `computational modelling' loosely,'' and then, curiously, the review discusses exclusively mathematical modelling, implying that for the authors, and for the peer reviewers, there is no conscious role for code or computation as such. It appears that the new insights, advances, rigour, and problems that computers bring to research are not considered relevant. A systematic review \cite{diagnosis-reviews} of published COVID models for individual diagnosis and prognosis in clinical care, including apps and online tools, noted the common failure to follow standard TRIPOD guidelines \cite{tripod}. (TRIPOD guidelines ignore code completely, let alone its quality.) The review \cite{diagnosis-reviews} ignored the mapping from models to their implementation, yet if code is unreliable, the model \emph{cannot\/} be reliably used, and cannot be reliably interpreted. It should be noted that flowcharts, which the review did consider, are programs intended for direct human use. Flowcharts, too, must be designed as carefully as code, for exactly the same reason: it is hard to program reliably. A high-profile 2020 COVID-19 model \cite{nature-summary,ICmodel} uses a modified 2005 computer program \cite{avianFluModel,originalICmodel} for H5N1 in Thailand, which did not model air travel or other factors required for later western COVID-19 modelling. The 2020 model forms part of a series of papers \cite{ICmodel,avianFluModel,originalICmodel} none of which provide details of their code. A co-author disclosed \cite{tweet} that the code was thousands of lines long and was undocumented C code. A more recent version of the system has been made open source (\url{https://github.com/mrc-ide/covid-sim} version dated 19 July 2021), and is approximately: \newcount\u \newcount\v \newcount\w \newcount\frac \def\kloc#1{%[#1] \u=#1 \v=#1 \divide \u by 1000 \w=\u \multiply \w by 1000 \advance \v by -\w \divide \v by 100 \the\u.\the\v } \input models/covidsimSummary.tex kLOC (though this version is C++ with Python, R, sh, YML/JSON, etc, but not C)\@. As Ferguson, the code author, noted in an interview, \begin{quote}\raggedright \setbox0=\hbox{``} \sf\hskip -\wd0\box0 For me the code is not a mess, but it's all in my head, completely undocumented. 
Nobody would be able to use it~\ldots '' \cite{ferguson-interview}\end{quote}
This admission is tantamount to saying that the published scientific findings are not reproducible. (A constructive introductory discussion of software engineering approaches to reproducibility can be found in \cite{basic-reproducibilty}.) This is problematic, especially as the code would have required many non-trivial modifications to update it for COVID-19 with different assumptions; moreover, the code would have had to be updated very rapidly in response to the urgent COVID-19 crisis. If this C code had been made available for review, the reviewers would not have known how to evaluate it without the relevant documentation. It is, in fact, hard to imagine how a large undocumented program could have been repeatedly modified over fifteen years without becoming incoherent.

If code is undocumented, there would be an understandable temptation to modify it arbitrarily to get desired results; worse, without documentation and proper commenting, it is methodologically impossible to distinguish legitimate attempts at debugging from merely fudging the results. In contrast, if code is properly documented, the documentation defines the original intentions (including formally using mathematics to do so), and therefore any modifications will need to be justified and explained --- or the theory revised.

The programming language C, which was used, is not a dependable language; developing reliable code in C requires professional tools and skills. Moreover, C code is not portable, which limits making it available for other scientists to use safely (C notoriously gets different results on different compilers, libraries, and hardware). The \supplement\ discusses these issues further.

Ferguson, author of the code, says of the criticisms of his code, ``However, none of the criticisms of the code affects the mathematics or science of the simulation'' \cite{thumbs-up}. Indeed. The problem the current paper is raising is that the epidemiology may be fine, but if it is not mapped into code that correctly implements the models, then the program's output should not be relied on without independent evidence. In science, this is the normal requirement of \emph{reproducibility}.

The code in \cite{nature-summary,ICmodel} has been reproduced \cite{codecheck,thumbs-up} (published in \emph{Nature\/}), but this ``reproduction'' merely confirmed the code can be run again and produce comparable results (compared, apparently, to an Excel spreadsheet!). That ``reproduction'' can be achieved at this low level of sophistication is hardly surprising, regardless of the quality of the code. Instead, if reproducibility is to be a useful scientific criterion, an \emph{independently\/} developed model needs to produce equivalent results --- this is called $N$-version programming, which is a standard software engineering practice in critical areas \cite{NVP}, \emph{like public health surely ought to be} --- as Ferguson's own US influenza (2008) example \cite{nvp-ferguson} argues. Meanwhile, it is a shame that using software for scientific models has enabled the bar to reproducibility, as understood by journals such as \emph{Nature} in 2020, to be lowered to a trivial level, which is only sufficient to detect dishonesty, not mistakes.

Because of the recognised importance of the paper, a project has been started to document its code \cite{refactoring}.
Documenting the code now may describe what it does, \emph{including\/} its bugs, but it is unlikely to explain what it was intended to have done. If nothing else, as the code is documented, bugs will be found, which will then be fixed (refactoring), and so the belatedly-documented code will not be the code that was used in the published models. It is well-known that documenting code helps improve it, so it is surprising to find an undocumented model being used in the first place. The revised code has now been published, and it has been heavily criticised \citeeg{bad-code}, supporting the concerns expressed in the present paper.

Some epidemiology papers \citeeg{pseudo} publish models in pseudo-code, a simplified form of programming. Pseudo-code looks deceptively like real code that might be copied to try to reproduce it, but pseudo-code introduces invisible simplifications. Pseudo-code, properly used, can give a helpful impression of the overall approach of an algorithm, certainly, but pseudo-code alone is not a surrogate for code: using it is arguably even worse than not publishing code at all. Pseudo-code is not precise enough to help anyone scrutinise a model; copying pseudo-code introduces avoidable bugs. An extensive criticism of pseudo-code, and a discussion of tools for reliable publication of code, can be found elsewhere \cite{relit}.

{The \supplement\ provides further discussion of reproducibility.}

\subsection{Beyond epidemiology}

Code needs to be carefully documented and explained because all code has tacit assumptions, bugs and cybersecurity vulnerabilities \cite{Ben} that, if not articulated, can affect results in unknown ways that may undermine any claims. People reading the code will not know how to obtain good, let alone better results, because they do not know exactly what was intended in the first place. The problem is analogous to the problem of failing to elaborate statistical claims properly: failure to do so suggests that the claims may have unknown limitations or even downright flaws.

Even good quality code has, on average, a defect every 100 lines --- and {such a low} rate is only achieved by experienced industrial software developers \cite{ourReview}. World-class software can attain maybe 1 bug per 1,000 lines of code. Code developed for experimental research purposes will have higher rates of bugs than professional industrial software, because the code is less well-defined and evolves as the researchers gain new insights into their models. In addition, and perhaps more widely recognised, code --- especially but not exclusively mathematical code --- can be subject to numerical errors \cite{hamming}. It is therefore inevitable that typical modelling code has many bugs (reference \cite{NVP} is a slightly-dated but very insightful discussion). Such bugs undermine confidence in model results.

Only if there is access to the \emph{actual\/} code and data (in the specific version that was used for preparing the paper) does anyone know what researchers have done, but merely making code available (for instance, providing it in their \supplement\ with papers, putting it in repositories, or using open source) is not sufficient. It is noteworthy that some papers disclosed that they had access to special hardware.

Some COVID-19 papers \citeeg{unfinished} make unfinished, incomplete code available.
While some \citeeg{unfinished,lancet-unfinished} do make what they call ``documented'' code available, they provide no more than superficial comments: this is \emph{not\/} documentation as properly understood. Such comments do not explain code, explain contracts, nor explain algorithms. Some readers of the present paper may not recognise these technical software terms; contracts, for instance, originated in work in the 1960s \cite{hoare}, and are now well-established practice in reliable programming {(see the \supplement\ for a checklist of many relevant, professional software engineering concepts and resources)}. If full code is made available, it may be technically ``reproducible,'' but the scientific point is to be able to understand and challenge, potentially refute, the findings; to do that, much more is required than merely being able to run the code \cite{notebooks,popper}. Even if a computer can run it, badly-written code (as found in \emph{all\/} the research reviewed in the present paper, {and indeed in computer science research itself \cite{machine-learning-reproducibility}}) is inscrutable, even if its original programmers think otherwise. Only if there is access to \emph{adequate\/} documentation can anyone know what the researchers \emph{intended\/} to do. Without all three (code, data, adequate documentation), there are dangers that a paper simplifies or exaggerates the results reported, and that omissions, bugs and errors in the code or data, generally unnoticed by the paper's authors and reviewers, will have affected the results they report \cite{relit}. Making outline code (including pseudo-code) available without proper documentation and without discussing its limitations is unethical: it encourages others to reproduce and build on poor work. \makeatletter \long\def\@makecaption#1#2{% \vskip\abovecaptionskip \sbox\@tempboxa{#1: #2}% \ifdim \wd\@tempboxa >\hsize \textbf{#1}: #2\par \else \global \@minipagefalse \hb@xt@\hsize{\hfil\box\@tempboxa\hfil}% \fi \vskip\belowcaptionskip} \makeatother \begin{table*} \begin{center}\normalsize \IfFileExists{paper-seb-generated-summary-table.tex}{% \input paper-seb-generated-summary-table.tex }{Missing file\typeout{No paper-seb-generated-summary-table.tex - error ignored, but you need to latex paper-seb-supplementary-material.tex to generate it}} \end{center} \caption{Summary of the exploratory survey. Note that code quality was assessed (by reading it) for human use --- due to the paper authors' complex and/or narrative interpretation of data in most papers, code, data and hardware/operating system dependencies, no assessment was made whether the code provided was sufficient to reproduce a paper's specific claims. 
Details of the survey methodology and the breakdown of measures into individual papers can be found in the \supplement.} \label{table-summary} \end{table*} \begin{table*}[t] \def\bytes#1{%[#1] \u=#1 \frac=0 \let\unit=\ \ifnum \u>999 \frac=\u \divide \u by 1000 \global\let\unit k \w=\u \multiply \w by 1000 \advance \frac by -\w %(\the\frac) \fi \ifnum \u>999 \frac=\u \divide \u by 1000 \global\let\unit M \w=\u \multiply \w by 1000 \advance \frac by -\w %(\the\frac) \fi \ifnum \u>999 \frac=\u \divide \u by 1000 \global\let\unit G \w=\u \multiply \w by 1000 \advance \frac by -\w %(\the\frac) \fi \v=\frac \divide \v by 100 \the\u.\the\v & \unit b } \def\reponame#1#2{{\tt #1} \csname cite-#2\endcsname} \expandafter\def\csname cite-covid-sim\endcsname{\cite{ICmodel}} \begin{center} \input models/repos.tex \\ \small Citation numbers $>$ \MaxMainPaperCitationNumber\ can be found in the Supplementary Material \\ \small Repository clones downloaded and summarised \input models/paper-seb-generated-clone-date.tex \end{center} \caption{Sizes of repositories, with approximate sizes of code (in thousands of lines of code, kLOC) and data (in bytes) for all available GitHub repositories reviewed in the survey. Sizes are approximate because in many repositories some code and data are conceptually interchangeable (for example, some XML, JSON and text files are used as either) so choices were made to avoid double-counting. } \label{table-repo-summary} \end{table*} %%%%%\xlink{bg}{Go to background} \section{A pilot survey of the wider peer-reviewed research relying on code} The problems of unreliable code are not limited to COVID-19 modelling papers, which, understandably, were perhaps rushed into publication. For instance, a 2009 paper reporting a model of H5N1 pandemic mitigation strategies \cite{flu-model} provides no details of its code. Its \supplement, which might have provided code, no longer exists. Almost all scientific papers rely on generic computer code (for statistics or for plotting graphs, and so on) but what is of interest here is whether, and, if so, to what extent, code developed \emph{specifically\/} as part of or to support a research contribution can be understood by colleagues and scientifically scrutinised. \subsection{Methodology} A sample of \plural{\dataN}{recent paper}, not including the motivating papers discussed above, and covering a broad range of science were selected from the online editions (in July 2020) of \journalBreakdown. {The survey sampled the recent work of \plural{\countAuthors}{published author}, perhaps directly involving around \newcount \total \total=\dataN \multiply \total by 4 % three reviewers, one editor \advance \total by \countAuthors \the\total\ leading scientists in all (authors, editors, reviewers --- ignoring multiple roles).}\ The survey was not intended to be a formal, representative sample of all scientific research in general, but it is hoped it will be sufficient to dispel the possibility that the issues described above in earlier sections of this paper are isolated practice, perhaps an idiosyncrasy of a few authors or of a particular field, or perhaps due to a chance selection bias (e.g., the Ferguson papers were reviewed above because of Ferguson's public profile and the importance of dependable pandemic research, but they somehow might have just happened to be software engineering outliers). 
\emph{It is important that this survey is not taken as an evaluation, whether praise or criticism, of any specific paper: any measurement is subject to error, and individual paper assessments are inevitably noisy. Instead, by evaluating a set of diverse papers, the noise will tend to average out. Mean scores, as summarised in table \ref{table-summary}, are more reliable measurements.}

The full methodology, data and analysis, as well as many software engineering and Human Factors techniques to help improve reliability and reproducibility used in the preparation of the present paper, are provided in the \supplement, which is also available from a repository (see details at the end of this paper).

\subsection{Summary of results}

Scientific models are substantial, complex software engineering projects. On the whole, scientists do not make their code available and rarely provide adequate documentation (see table \ref{table-summary}), yet it is noteworthy that the papers, typically a few published pages, are \emph{much\/} smaller --- and represent less work --- than the code and data that they rely on for their claims and correctness (see table \ref{table-repo-summary}).

With one minor exception (which may have been accidental), no papers reported anything on software engineering methodologies, which is perhaps astonishing given the scale of some of the software involved in the papers (table \ref{table-repo-summary}). None of the papers used any specific software engineering methods, such as open source \cite{open-source} and other standard methodologies provided in the \supplement, to help manage their processes and help improve quality. Although software stability \cite{stability} is a relatively new concept, understood as methodologies, such as portability, that preserve the long-term value of software, it is curious that none of the papers made any attempt at stability (however understood), despite the irony that all the papers were published in archival journals.\footnote{Reasons this paper does not directly assess the quality of software in the surveyed papers include: many papers did not provide complete software; it was not possible to find correct versions of all software systems to run the models; also, no papers provided adequate test suites so that correct operation of software could be confirmed objectively.}

\emph{Nature Digital Medicine\/} and \emph{Royal Society Open Science\/} have clear data and code policies (see \supplement\ section \ref{supplementary-journal-policies-section}), but actual practice falls short: \the\countHasBreach\ out of the \plural{\countHasPolicy}{paper} (\pc{\countHasBreach}{\countHasPolicy}) published in them and sampled in the survey manifestly breach their code policies. In contrast, \emph{Lancet Digital Health\/}, despite substantial data policies, has no code policy at all to breach. The implication of these results is that the fields, and the editorial expertise of leading journals, misunderstand and dismiss code policies, or they are technically unable to assess them --- and, if so, they also fail to say they are unable to assess them. This lack of expertise is consistent with the limited awareness of software engineering best practice that is manifest in the published papers (and resources) themselves.
Code repositories were used by \plural{\countUsesVersionControlRepository}{paper} (\pc{\countUsesVersionControlRepository}{\dataN}), though \plural{\countNoCodeInRepo}{paper} in the survey claimed to have code on GitHub when there was no code in the repository, only the comment ``Code coming soon\ldots'' (checked at the time of doing the review, then double-checked as detailed in the references in the \supplement, as well as most recently on \input models/paper-seb-generated-clone-date.tex while checking table \ref{table-repo-summary}): in other words, the repository had never been used and the code could not be looked at, let alone reviewed.\footnote{GitHub records show that it had not been deleted after paper submission.} This is a pity because GitHub provides help and targeted warnings and hints like ``No description, website, or topics provided [\ldots] no releases published.''

The lack of code is ironic: the paper concerned \csname cite-PostoperativeOutcomes.RiskNet\endcsname\ has as its title ``\emph{Development and validation\/} of a deep neural network model [\ldots]'' (our emphasis), yet it provides no code or development processes for the runnable model it claims to validate, so nobody else (including referees) can check any of the paper's specific claims.

Sizes of all GitHub repositories are summarised in table \ref{table-repo-summary} (since many papers not using GitHub do not have all code available, non-GitHub code sizes are not easily compared and are not listed).

Overall, there was no evidence that any code had been developed carefully, {let alone by using recognised professional software engineering methods}. In particular, \ifnum \countCodetested=0 no papers \else only \plural{\countCodetested}{paper} \fi in the survey provide any claims or evidence of effective testing, for instance with evidence that tests were run on clean builds. {While it may sound unrealistic to ask for evidence on software quality in a paper written for another field of science, the need is no less than the need for standard levels of rigour in statistics reporting, as discussed in the opening of this paper.}

Data repositories (the Dryad Digital Repository, Figshare or similar) were used by \plural{\counthasDataRepository}{paper} to provide structured access to their data. Unlike GitHub, which is a general purpose repository, Dryad has scientifically-informed guidelines on handling data, and all papers that used Dryad provided more than just their raw data --- they provided a little, sometimes substantial, documentation for their data. At the time of writing, Dryad is not helpful for managing code --- its model appears to be founded on the requirement that, once published, papers must refer to exactly the data they used, so further refinements on the data (or code) are taboo, even with version control.

Key quantitative findings from the survey are summarised in tables \ref{table-summary} and \ref{table-repo-summary}, and are discussed in greater detail in the \supplement.

\section{Discussion}

Effective access to quality code is not routine in science. Using structured repositories that provide suggestions for, and encourage, good practice (such as Dryad and GitHub), and requiring their use, would be a lever to improve the quality and value of code and documentation in published papers.
The evidence suggests that, generally, some but not all manually developed code is uploaded to a repository just before submitting the paper in order to ``go through the motions.'' In the surveyed papers there is no clear evidence (over the survey period) that any published code was maintained using the repositories. The \supplement\ provides a summary of the analysis and full citations for all papers in the sample. \subsection{Suggestions for further work} Although this study of software engineering in scientific research (specifically, in peer reviewed publications) was necessarily interpretive in nature, the corpus of data (namely, the selected \the\dataN\ papers) was rigorously gathered so that it could be later updated or extended in size or scope using comparable criteria. However, the insights appear to be quite general, and it is not clear that routine generalisation is required, except, perhaps, to explore specialist subfields. Further work to extend the scope of the survey and challenge beyond the basic requirements of the present paper is of course worthwhile, but the following cases (listed in the next few paragraphs) suggest that the problem \emph{is\/} broadly established. We argue, then, that we should be focusing effort on avoiding problems and considering proposed solutions (see section \ref{summary}), not just assessing the problems highlighted in this paper with increasing scale or rigour. \label{jvs-policy} \begin{enumerate}\raggedright \item The journal \emph{The Lancet\/} published and then subsequently retracted a paper on using hydroxychloroquine as a treatment for COVID \cite{lancet-retracted}. The paper was found to rely on fraudulent data \cite{science-lancet1,science-lancet2}. \emph{The Lancet\/} subsequently tightened its data policies \cite{lancet-learning}, for instance to require that more than one author must have directly accessed and verified the data reported in the manuscript. Curiously, the original (now retracted) paper declares \begin{quote}\sf \setbox0=\hbox{``}\hskip -\wd0\copy0 \ldots\ all authors participated in critical revision of the manuscript for important intellectual content. MRM and ANP supervised the study. All authors approved the final manuscript and were responsible for the decision to submit for publication.'' \end{quote} which seems to suggest that several original authors of the paper would have been happy to make the new declarations --- and, of course, if there is fraud (as was established in this case) it seems likely that authors who make the new declarations of accessing and verifying data are unlikely to make reliable declarations. \hskip 1em\emph{The Lancet\/} still has no code publication policy (see the \supplement), and for more than one author to have ``direct access'' to the data they are very likely to access the data through the same code. If the code is faulty or fraudulent, an additional author's confirmation of the data is insufficient, and there is at least as much reason for code to be fraudulent (not least because code is much harder to scrutinise than data). Code needs more than one author to check it, and ideally reviewers independent of the authors so they do not share the same assumptions and systems (for instance shared libraries, let alone potential collusion in fraud). 
\item In 2020 the \emph{Journal of Vascular Surgery\/} published a controversial research paper \cite{jvs1}, which had to be retracted on ethical grounds \cite{jvs2,jvs3}: it was a na\"\i ve study and the editorial process was unaware of digital norms. Notably, the paper fails to provide access to its anonymised data (with or without qualification), and fails to define the data anonymisation algorithm, and also fails to even mention the code that it developed and used to perform its study. The journal's data policy is itself very weak (the authors ``should consider'' including a footnote to offer limited access to the data) and, despite basic statistics policies, it has no policy at all for code (see \supplement\ section \ref{supplementary-journal-policies-section}). Ironically, the retracted article \cite{jvs1} is still online (as of August 2020) with no reference to any editorial statement to the effect that it has been retracted, despite this being trivial --- and necessary --- to achieve in the widely-accessed online medium. \item Medical research often aims to establish a formula to define a clinical parameter (such as body mass index, BMI) or to specify an optimal drug dose or other intervention for treatment. These formulas, for which there is conventional clinical evidence, are often used as the basis for computer code that provides advice or even directly controls interventions. Unfortunately a simple formula as may be published in a medical paper is \emph{never\/} sufficient to specify code to implement it safely. For example, clinical papers do not need to evaluate or manage user error when operating apps, and therefore the statistical results of the research will be idealistic compared to the outcomes using an app under real conditions --- which is what the clinical research is supposedly for. A widespread bug (and its fix) that is often overlooked is discussed in \cite{numerals}; the paper includes an example of a popular clinical calculator (based on published clinical research) that calculated nonsense, and potentially dangerous, results. The paper \cite{fda} summarises evidence that such bugs, ignored by the clinical research literature, are commonplace in medical systems and devices. \end{enumerate} \subsection{A call to action}\label{summary} Computer programs are the laboratories of modern scientists, and should be used with a comparable level of care that virologists use in their laboratories --- lab books and all \cite{notebooks} --- and for the same reasons: computer bugs accidentally cultured in one laboratory can infect research and policy worldwide. Given the urgency of rigorously understanding COVID-19, any epidemic for that matter, it is essential that epidemiologists engage professional software engineers to help develop reliable laboratory methodologies. For instance, code lab books can be generated and signed easily. Software used for informing public health policy, medical research or other medical applications is \emph{critical software}. Professional critical software development, as used in aviation and the nuclear power industry, is (put briefly) based on \emph{correct by construction}: \cite{cbc} effectively, design it right first time, {supported by numerous rigorous techniques, such as formal methods, to manage error. (See extensive discussion in this paper's \supplement.)}\ Not coincidentally, these are \emph{exactly\/} the right methods to ensure code is both dependable and scrutable. 
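To make the idea of contracts and assertions concrete for readers outside software engineering, here is a minimal sketch. It is written in Python rather than C for brevity, it is not taken from any of the code discussed in this paper, and the calculation is only a toy example; the point is that the stated precondition and postcondition are checked every time the code runs, so a violated assumption stops the program safely instead of silently producing a wrong number.
\begin{verbatim}
from math import log

def doubling_time(daily_growth_rate):
    """Days for a quantity growing at daily_growth_rate per day to double."""
    # Precondition (contract): the calculation is meaningless otherwise.
    assert daily_growth_rate > 0.0, "growth rate must be positive"
    days = log(2.0) / log(1.0 + daily_growth_rate)
    # Postcondition (contract): catch internal mistakes before they spread.
    assert days > 0.0, "doubling time must be positive"
    return days
\end{verbatim}
Documentation of the intended behaviour, together with checks of this kind throughout a large model, is what allows a reviewer to distinguish what the code was meant to do from what it happens to do.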
Conversely, not following these practices undermines the rigour of the science. An analogous situation arises in ethics. There are some sorts of research that are ethically unacceptable, but few people have the objectivity and ethical expertise to make sound ethical judgements, particularly when it comes to assessing their own work. Misuse of data, exploiting vulnerable people, and not obtaining informed consent are typical ethical problems. National funders, and others, therefore require Ethics Boards to formally review ethical quality. Medical journals will not publish research that has not undergone appropriate ethical review. Analogously, and supplementing Ethics Boards, Software Engineering Boards would authorise as well as provide advice to guide the implementation of high-quality software engineering. Just as journals require conflicts of interest statements, data availability statements, and ethics board clearance, we must move to epidemic modelling papers --- and in due course, all scientific papers --- being required to include Software Engineering Board clearance statements as appropriate. {Software Engineers have a code of ethics that applies to \emph{their\/} work in epidemic modelling \cite{ethics-code}.} {The present paper did not explore data, because in almost all cases the code and data were explained so poorly and archived so haphazardly it would be impossible to know whether the author's intentions were being followed.\footnote{{For the present paper, all the code, data, analysis and documents are available for download in a single zip file.}} Some journals have policies that code is available (see the \supplement), but they should require that code is not just available in principle but \emph{actually works\/} on the relevant data. Ideally, the authors should test a clean deployed build of their code and save the results. Presumably a paper's authors must have run their code successfully on some data (if any, but see section \ref{on-code-data-publication} in the \supplement) at least once, so preparing the code and data in a way that is reproducible should be a routine and uncontentious part of the rigorous development of code underpinning \emph{any\/} scientific claims. This requirement is no more unreasonable than requesting good statistics, as discussed in the opening of the paper. And the solution is the same: relevant experts --- whether statisticians or software engineers --- need to be routinely available and engaged with the science. SEBs would be a straight forward way of helping achieve this.} There need to be many Software Engineering Boards (SEBs) to ensure convenient access and oversight, potentially at least one per university. Active, professional software engineers should be on these SEBs; this is not a job for people who are not qualified and experienced in the area or who are not actively connected with the true state of the art. There are many high-quality software companies (especially those in safety-critical areas like aviation and nuclear power) who would be willing and competent to help. Open Source generally improves the quality of software. SEBs will take account of the fact that open source software enables contributors to be located anywhere, typically without a formal contractual relationship with the leading researchers. 
Where appropriate, then, SEBs might require \emph{local\/} version control, unit testing, static analysis {and other quality control methods for which the lead scientist and software engineer remain responsible, and may even need to sign off (and certainly be signed off by the SEB\@).}\ {Software engineering publishers are already developing rigorous badging initiatives to indicate the level of formal review of the quality of software for use in peer reviewed publications \cite{acm-artifacts}.}\ {See this paper's \supplement\ for further suggestions.}

A potential argument against SEBs is that they may become onerous: onerous to run and onerous to comply with their requirements. A more mature view is that SEBs need their processes to be adaptable and proportionate. If software being developed is of low risk, then less stringent engineering is required than if the software could cause frequent and critical outcomes, say in their impact on public health policy for a nation. Hence SEB processes are likely to follow a risk analysis, perhaps starting with a simple checklist. {There are standard ways to do this, such as following IEC 61508:2010 \cite{redmill,iec61508} or similar. Following a risk analysis (based on safety assurance cases, controlled documents and so on, if appropriate to the domain), the Board would focus scrutiny where it is beneficial without obstructing routine science.}

A professional organisation, such as the UK Royal Academy of Engineering, ideally working in collaboration with other national and international bodies such as IFIP, should be asked to develop and support a framework for SEBs. SEBs could be quickly established to provide direct access to mature software engineering expertise for both researchers and for journals seeking competent peer reviewers. In addition, particularly during a pandemic, SEBs would provide direct access to their expertise for Governments and public policy organisations. Given the urgency, this paper recommends that \emph{ad hoc\/} SEBs should be established for this purpose.

SEBs are a new suggestion, providing a supportive, collaborative process. Methodological suggestions already made in the literature include open source and specific software engineering methodologies to improve reproducibility \cite{basic-reproducibilty,open-source}. While \cite{ABCs-SE} provides an insightful framework to conceptualise approaches and compare their merits, there is clearly scope for much further research to provide an evidence base to motivate and assess appropriate interventions to help scientists do more rigorous and effective software engineering to support their research and publishing. These and further issues are discussed at greater length in the \supplement.

\section{Conclusions}

{While this paper was originally motivated by Ferguson's surprising public statements \citeeg{tweet,ferguson-interview}, the broader evidence reviewed in this paper suggests that coding practice makes for poor science generally, not just in epidemiology --- and many other studies (e.g., the classic \cite{NVP}) support this view.
Certainly, with such high stakes, we need to improve the quality of software engineering that supports public health policies, both in research and in public policy issues informed by research, such as track and trace \cite{excel-fiasco} and in balancing COVID mutation pressures against vaccine shortages and delays between vaccinations \cite{science-delays}.}

The challenge to epidemic modelling, and to scientific research more generally, is to manage software development to reduce the unnoticed impact of bugs and poor programming practices. Computer code should be explicit and properly documented. Papers should be explicit on their software methodologies, limitations and weaknesses, just as Whitty expressed more generally about the standards of science \cite{whitty}. Professional software methodologies should not be ignored. Unfortunately, while programming is easy, and is often taken for granted, programming \emph{well\/} is very difficult \cite{fixit}. We know from software research that ordinary programming is very buggy and unreliable. Programming well is essential for informing public health policy.

Simply put, without quality code and data, research is not open to scrutiny, let alone proper review, and its quality should be suspect. Many would argue that availability of code and data ensures research is reproducible, but that is a very weak criterion: both need to be documented and clear enough to be open to informed scrutiny \cite{reproducibility,relit,popper}. We must prioritise getting appropriate professional software engineering skills and resources particularly to bear on the COVID-19 pandemic without delay. Software Engineering Boards (introduced in this paper) are a straightforward, constructive and practical way to support and improve computer-based modelling. This paper's \supplement\ summarises the relevant professional software engineering practice that Software Engineering Boards would use, including discussing how and why software engineering helps improve code dependability and quality.

Poor software engineering has life-and-death consequences for scientific research informing public health decisions or informing healthcare decisions \cite{fixit,devices}. What is likely to be the UK's largest miscarriage of justice ever hinges on inexcusably poor software engineering, resulting in prosecution, conviction and many prison sentences for over 700 people on the basis of faulty computer evidence \cite{ourReview}. Future work, led by the software engineering community, should extend the improvement of professional software engineering standards into other areas, including informing legislators and regulators, so that software can be more tightly controlled in areas where its dependability matters --- as we like to think it does.

{\subsection*{Notes [not all required by ACM TSEM, but provided for refereeing]}
\newcount\csrefcount \csrefcount=0
\def\csref{\global\advance\csrefcount by 1}
\textbf{Acknowledgements}.
{The author is grateful for comments from: \csref Ross Anderson, \csref Nicholas Beale, \csref Ann Blandford, \csref Paul Cairns, \csref Rod Chapman, \csref Jos\'e Corr\'ea de~S\`a, \csref Paul Curzon, \csref Jeremy Gibbons, \csref Richard Harvey, \csref Will Hawkins, \csref Ben Hocking, \csref Daniel Jackson, \csref Peter Ladkin, \csref Bev Littlewood, \csref Paolo Masci, Stephen Mason, \csref Robert Nachbar, \csref Martin Newby, \csref Patrick Oladimeji, \csref Claudia Pagliari, \csref Simon Robinson, \csref Jonathan Rowanhill, \csref John Rushby, \csref Susan Stepney, Prue Thimbleby, \csref Will Thimbleby, \csref Martyn Thomas, and \csref Ben Wilson, all of whom provided very helpful comments.}
%\the\csrefcount\ computer science reviewers.

\textbf{Ethics}. {This article presents research with ethical considerations but does not fall within the usual scope of ethics policies.}

\textbf{Code and data access}. {There is a full discussion of the methodology of this paper and its benefits in the \supplement, section \ref{on-code-data-publication}. All material is available for download at \url{github.com/haroldthimbleby/Software-Enginering-Boards} (which has been tested in a clean build, etc). The data is encoded in JSON\@. JavaScript code checks and converts the JSON data into \LaTeX\ number registers and summary tables, etc, thus making it trivial to typeset results reliably in the paper (e.g., table \ref{table-summary} in this paper). In addition, a standard CSV file is generated in case this is more convenient to use, for instance to browse in Excel or another spreadsheet application. All data, analysis, and details of the methods used can be found in the \supplement.}

\textbf{Author's contributions}. {Harold Thimbleby is the sole author. An early, short, draft of this paper, containing no supplementary material or data, was submitted to the UK Parliamentary Science and Technology Select Committee's inquiry into UK Science, Research and Technology Capability and Influence in Global Disease Outbreaks, under reference LAS905222, 7 April, 2020. The evidence, which was not peer reviewed and is only available after an explicit search, very briefly summarises the case for Software Engineering Boards but without the detailed analysis and case studies of the literature, etc, that are in the present paper. It is now available to the public \cite{parliamentary-evidence,my-parliamentary-evidence}.}

\textbf{Competing interests}. {The author declares no competing interests.}

\textbf{Funding}. {This work was jointly supported by See Change (M\&RA-P), Scotland (an anonymous funder), by the Engineering and Physical Sciences Research Council [grant EP/M022722/1], and by the Royal Academy of Engineering through the Engineering~X Pandemic Preparedness programme [grant EXPP2021\textbackslash 1\textbackslash 186]. The funders had no involvement in the research or in the writing of this paper.}
}

{\raggedright
\bibliographystyle{ACM-Reference-Format}
\initialiseBibliography{References}{1}{}
\bibliography{paper-seb-main.bib}
% Note that the supplementary material calculates how many references there are,
% because it reads in this .tex file's .aux file so that it can cite the references
% above with the same number sequence.
%
% It simply redefines \bibitem to work it out while it
% defines all the citations from the bibliography above.
%
% The same technique allows the supplementary material to have 2 more bibliographies,
% with consecutive numbering.
} \end{document} %\immediate\write\@auxout{\string \newcount \string \savedlastReferenceNumber %\string \savedlastReferenceNumber=\the\savedlastReferenceNumber} %\def \allInOne {don't include header in supplemental material} %\input paper-seb-supplementary-material.tex {\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{HT}}]{Harold Thimbleby PhD, FRCP (Edinburgh), Hon.\ FRSA, Hon.\ FRCP} is See Change Fellow in Digital Health at Swansea University, Wales. His research focuses on human error and computer system design, particularly for healthcare. In addition to over 340 peer reviewed and 188 invited publications, Harold has written several books, including \emph{Press On\/} (MIT Press, 2007), which was winner of the American Association of Publishers best book in computer science award; his latest book \emph{Fix IT: How to solve the problems of digital healthcare\/} (OUP, 2021) is in press. Harold won the British Computer Society Wilkes Medal. He is emeritus Gresham Professor of Geometry (a chair founded in 1597), and has been a Royal Society-Leverhulme Trust Senior Research Fellow and a Royal Society-Wolfson Research Merit Award holder. He has been a member of the UK Engineering and Physical Sciences (EPSRC) research council Peer Review College since 1994. See his web site, \url{www.harold.thimbleby.net}, for more details. \label{LastPage} \end{document}
{ "alphanum_fraction": 0.8009909166, "avg_line_length": 139.4328358209, "ext": "tex", "hexsha": "bce1c849144cf0f76725f84e3fa0d49ba945c2d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "41402ca05534372cb901e484560c45b9ea0bea42", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "haroldthimbleby/Software-Enginering-Boards", "max_forks_repo_path": "paper-seb-main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "41402ca05534372cb901e484560c45b9ea0bea42", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "haroldthimbleby/Software-Enginering-Boards", "max_issues_repo_path": "paper-seb-main.tex", "max_line_length": 1614, "max_stars_count": null, "max_stars_repo_head_hexsha": "41402ca05534372cb901e484560c45b9ea0bea42", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "haroldthimbleby/Software-Enginering-Boards", "max_stars_repo_path": "paper-seb-main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14101, "size": 65394 }
\pageId{upconversion}

SnuggleTeX 1.1.0 introduced experimental and limited support for generating ``semantically richer'' outputs than its usual display-oriented Presentation MathML, which some users might find interesting to play around with. This notion is often referred to as ``semantic enrichment''. (SnuggleTeX usually calls it ``up-conversion'', which has connotations of ``swimming against the tide'', which is quite difficult, whereas ``down-conversion'' is much easier!)

These features were added as part of SnuggleTeX's involvement in the JISC-funded \href[MathAssess]{http://mathassess.ecs.soton.ac.uk/} project, which was concerned with enhancing existing computer-based assessment tools aimed at the educational sector for ``service level'' mathematics teaching up to early undergraduate level in the United Kingdom.

As of SnuggleTeX 1.2.0, these features are still to be considered experimental, but they might be interesting to some people, and feedback/contact about this is most welcome.

\subsection*{The \verb|snuggletex-upconversion| module}

The \verb|snuggletex-upconversion| module, which is part of the \href[full SnuggleTeX distribution]{docs://download}, makes it possible to:
\begin{itemize}
\item \href[Generate semantically richer Presentation MathML]{docs://pmathmlEnhancement} (from certain LaTeX inputs)
\item \href[Convert LaTeX to Content MathML]{docs://cmathml} (for certain LaTeX inputs)
\item \href[Convert LaTeX to Maxima notation]{docs://maxima} (for certain LaTeX inputs)
\item Generate the above forms from certain \href[ASCIIMathML]{http://www1.chapman.edu/~jipsen/asciimath.html} inputs
\end{itemize}

It is important to qualify all of the above with phrases like ``limited'' and ``experimental'', as it is of course generally not possible to derive meaning from arbitrary LaTeX math inputs or arbitrary Presentation MathML expressions. This is not difficult to demonstrate. For example, written mathematical notations tend to be reused for different purposes, and their correct interpretation relies on an understanding of the underlying context. A simple example is that the symbol $e$ might refer to the exponential number in some cases, whereas it could also be the identity element in a group or have a number of other meanings. Similarly, the expression $f(x+1)$ could well represent the application of the function $f$ to $x+1$, whereas it could also be the product of $f$ and $x+1$. A related difficulty is that mathematical notations and conventions are often localised, with different countries favouring particular notations and symbolic conventions over others.

\subsection*{Supported Input Forms}

The SnuggleTeX up-conversion processes are aimed at LaTeX expressions using:
\begin{itemize}
\item ``Traditional'' UK notation and conventions
\item Numbers and identifiers
\item Familiar symbols for basic arithmetic operators
\item Basic unary trigonometric, hyperbolic, logarithmic and exponential functions
\item Some basic n-ary functions like \verb|min|, \verb|max|, \verb|gcd| etc.
\item Implicit products and function arguments
\item Factorials
\item Basic set theory and logic operators
\item Basic relation operators
\item Greek letters
\item Basic symbols (e.g. infinity)
\item A certain amount of configurable ``assumptions'' (e.g.\ treat $f$ as a function, treat $e$ as the exponential number\ldots)
\end{itemize}

\subsection*{Try For Yourself}

You can play around with these ideas yourself using our \href[MathML Semantic Enrichment Demo]{docs://upConversionDemo}.
The pages describing the up-conversion process in more detail also have some pop-up examples showing the process in action.
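To give a flavour of what ``semantically richer'' means here, the sketch below shows the kind of difference between display-oriented and semantic representations for the simple input \verb|\sin x|. (This is a hand-written illustration of the general idea of these output formats, not necessarily the literal markup that SnuggleTeX produces.)
\begin{verbatim}
LaTeX input:           \sin x

Presentation MathML    <mrow><mi>sin</mi><mo>&#x2061;</mo><mi>x</mi></mrow>
(how it looks)

Content MathML         <apply><sin/><ci>x</ci></apply>
(what it means)

Maxima notation        sin(x);
\end{verbatim}
The Presentation form only records layout (the letters ``sin'' followed by ``x''), whereas the Content MathML and Maxima forms record that the sine function is being applied to the identifier $x$, which is what makes further computation possible.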
{ "alphanum_fraction": 0.7974302898, "avg_line_length": 46.3037974684, "ext": "tex", "hexsha": "a4cb5fe1b17a43c75f0bc7f6d4d68830b9717ffe", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-17T07:29:24.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-11T17:01:19.000Z", "max_forks_repo_head_hexsha": "f2464e26386ad70e51ee0c2b3eda544e21e7a4da", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "bsmith-n4/snuggletex", "max_forks_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/semantic-enrichment.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f2464e26386ad70e51ee0c2b3eda544e21e7a4da", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "bsmith-n4/snuggletex", "max_issues_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/semantic-enrichment.tex", "max_line_length": 131, "max_stars_count": null, "max_stars_repo_head_hexsha": "f2464e26386ad70e51ee0c2b3eda544e21e7a4da", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "bsmith-n4/snuggletex", "max_stars_repo_path": "snuggletex-webapp/src/main/webapp/WEB-INF/docs/semantic-enrichment.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 837, "size": 3658 }
\renewcommand{\columnseprule}{1.5pt}
\begin{multicols*}{2}
\noindent \rule[0.5\baselineskip]{0.5\textwidth}{1pt}
\noindent \subsection{Families of Functions}

\noindent 1. Sketch the graph of $f(x)=\frac{1}{2}x-3$ and label the coordinates of the $x$ and $y$ intercepts. Describe the obvious etymology of the adjective \textbf{linear} in a whole sentence.
\vspace{3cm}

\noindent 2. Sketch the \textbf{quadratic} function $f(x)=-x^2+2x+8$, labelling its intercepts, \textbf{vertex}, and axis of symmetry. Look up the etymology of the word ``quadratic'' and explain why the Latin root for `four' is in our word for an equation with a power of two.
\vspace{4cm}

\noindent 3. Sketch the graph of the \textbf{power} function $f(x)=2x^{\frac{3}{4}}$. Where is the next lattice point after $(1,2)$?
\vspace{3cm}

\noindent 4. Sketch the graph of the \textbf{exponential} function $f(x)=5\cdot 0.9^x$. Why do you think it is called an ``exponential'' function, and how does it contrast with the previous problem (algebraically and graphically)?
\vspace{3cm}

\noindent 5. Sketch the graph of the \textbf{logarithmic} function $f(x)=2\ln(2x)-6$. Does the $y$ value ever exceed 0? Is there a vertical asymptote at $x=0$? Why does your calculator stop graphing around $(0.15,-8.38)$?
\vspace{3cm}

\noindent 6. Not all numbers make sense, either as outputs or inputs to your function. Conjecture a reasonable \textbf{domain} and \textbf{range} and tell how you arrived at the intervals you did. Record them below.
\vspace{3cm}

\noindent 7. When using a \textbf{mathematical model}, there are two words to describe estimating values you did not actually record: \textbf{interpolate} and \textbf{extrapolate}. Which prefix means `within' and which means `without'? Which word is appropriate to describe finding the missing data point you skipped over in your observations? Which word is appropriate to describe finding how long 100 objects would be? Explain your choice.
\vspace{5cm}

\noindent 8. Describe what you think the point of this problem set is, using technical vocabulary in complete sentences.
\end{multicols*}
{ "alphanum_fraction": 0.7514367816, "avg_line_length": 40.1538461538, "ext": "tex", "hexsha": "fc9c762059ad3a1c660703fbbe3bdd9580fc2aa9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "aquatiki/AnalysisTextbook", "max_forks_repo_path": "ch01/0102problems.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "aquatiki/AnalysisTextbook", "max_issues_repo_path": "ch01/0102problems.tex", "max_line_length": 111, "max_stars_count": 2, "max_stars_repo_head_hexsha": "011c16427ada1b1e3df8e66c02566a5d5ac8abcf", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "aquatiki/AnalysisTextbook", "max_stars_repo_path": "ch01/0102problems.tex", "max_stars_repo_stars_event_max_datetime": "2019-07-07T12:32:53.000Z", "max_stars_repo_stars_event_min_datetime": "2017-10-08T15:05:17.000Z", "num_tokens": 624, "size": 2088 }
\documentclass[../main.tex]{subfiles}
\begin{document}
\chapter{Introduction}

%Introduce the problem
A universal issue that affects software developers is the presence of software bugs. Bugs in software can be defined as unintentional errors that cause software to perform incorrectly. Defects in software can have a serious financial impact on businesses and reduce the time available for developers to create new features. Bugs in software can also introduce security vulnerabilities that could lead to the leakage of user data as well as a loss of reputation \cite{briski2008minimizing}.

There have been several large-scale incidents in the past involving defective software. For example, the Mars Climate Orbiter, a space probe launched by NASA in 1998, was lost at Mars due to a bug in which imperial units were used where metric units were expected \cite{sauser2009projects}. Another example is the Heartbleed bug, a security vulnerability in the OpenSSL library which could potentially expose confidential information such as passwords and credit card numbers \cite{durumeric2014matter}. In 1994, the Pentium FDIV bug incident caused Intel processors to produce faulty values when dividing floating point values, which led to a mass recall of defective processors \cite{pratt1995anatomy}. These examples highlight some of the large scale impacts that bugs can have, and how challenging it can be to identify the underlying vulnerabilities in the first place.

A study by IBM found that the cost associated with resolving a bug in software increases depending on the phase of the development lifecycle in which the bug is identified \cite{briski2008minimizing}. Moreover, software bugs that are discovered in the production phase are more costly because they can directly affect customers, as opposed to bugs that are caught early on in development. Therefore there is a justification for being able to detect bugs as early as possible.

%What is currently being done today and limitations
Software developers identify bugs by performing regular testing and reduce their likelihood by improving development practices. For example, unit testing involves verifying the validity of individual methods within code, and these tests can be run every time a new build is created. Development practices have also shifted from traditional waterfall methods to Agile methodologies in order to keep up with the fast pace of development cycles. Agile methodologies allow for more frequent testing as well as code reviews before integrating new features into the main product. However, a challenge with identifying bugs is that as a project grows in size, it becomes more complex to maintain. It becomes impractical to try out all possible executions of a program, and running tests can become time consuming.

%How can we solve it today
To assist developers with identifying bugs, recent research has suggested the usage of software defect prediction models \cite{kamei2013large}. These models aim to identify regions in code that are most vulnerable to defects. Using these models, developers would be able to prioritize their testing efforts and receive continuous feedback in an automated fashion. The proposed benefits of such models are reduced time spent reviewing code and the ability to provide immediate feedback to a developer once a change is made.

%Using machine learning to assist us
In order to create such a defect prediction model, one could leverage the capabilities of machine learning.
Machine learning is the field of study concerned with algorithms that can learn from historical samples in order to predict values for unseen samples. Although machine learning has been around since the 1960s, its viability has grown immensely in recent years due to advances in hardware, easy-to-implement open source models and the large availability of data. Machine learning algorithms are able to spot complex patterns in data to solve various tasks such as facial recognition or translation of text, and are now being tested on the task of identifying bugs \cite{nayrolles2018clever}. In the context of identifying risky portions of code, one could use a supervised learning approach. With this technique, an algorithm is shown examples of risky and clean files and aims to predict whether a new file contains a defect.

%Just-in-time defect prediction
A proposed method for identifying risky code is Just-in-time (JIT) defect prediction. Unlike previous works, which aim to make predictions at the file level, JIT defect prediction aims to classify individual commits as being risky or not \cite{kamei2013large}. The justification for these models is that commits tend to be smaller to review than files, which can make the reviewing process simpler. Also, a problem with identifying risky files is that files can have multiple authors, so it is unclear who should be assigned to resolve the issue. Commits, on the other hand, only have one author, so one could assign the commit's author to resolve any potential issues. Finally, JIT prediction models can make predictions rapidly because the metrics used in the model can be extracted as soon as the commit is made.

%SZZ
When applying supervised learning techniques for this particular task, labelled data is required. The problem is that the majority of commits do not have any labels that indicate whether or not they caused a defect. In order to automatically label commits as \textit{risky} or \textit{not risky}, the Śliwerski Zimmermann Zeller (SZZ) algorithm can be used to obtain large datasets for building defect prediction models \cite{sliwerski2005changes}. However, a limitation of the original algorithm is that it requires the project being analyzed to have a bug tracking database that confirms if certain commits resolve bugs. Some projects might not have such a database or, if they do, the information it stores might not be available on a commit-by-commit basis. Instead, one could rely on the approximate SZZ (ASZZ) algorithm, which removes this restriction.

%Semi-supervised learning
Since labelling commits can be quite time consuming, it would be useful to have a technique that could leverage the information contained in unlabelled commits. Semi-supervised learning is an example of a subset of machine learning algorithms that can train on both labelled and unlabelled data \cite{zhu2005semi}. The self-training algorithm is a simple semi-supervised technique that can be applied to any classification method that outputs probabilities for its labels \cite{zhu2007semi}; a generic sketch of this procedure is shown below.

%Contributions of this work performing a qualitative study to as well as performing a case study to investigate the viability of JIT defect prediction models in practice.

\section{Research Questions}
The purpose of this thesis is to investigate the viability of the self-training algorithm for software risk prediction. It will also perform a qualitative study to understand how software developers identify and resolve bugs.
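As background, the self-training loop referred to above can be sketched as follows. This is a generic illustration only: the function and parameter names are illustrative, it is not the implementation evaluated in this thesis, and it assumes a scikit-learn-style classifier exposing \texttt{fit} and \texttt{predict\_proba}.
\begin{verbatim}
import numpy as np

def self_training(clf, X_labeled, y_labeled, X_unlabeled,
                  threshold=0.9, max_iterations=10):
    """Generic self-training loop for any classifier with predict_proba."""
    X_l, y_l = np.asarray(X_labeled), np.asarray(y_labeled)
    X_u = np.asarray(X_unlabeled)
    for _ in range(max_iterations):
        if len(X_u) == 0:
            break
        clf.fit(X_l, y_l)               # train on the currently labelled data
        proba = clf.predict_proba(X_u)  # predict the unlabelled commits
        confident = proba.max(axis=1) >= threshold
        if not confident.any():         # stop if nothing is predicted confidently
            break
        pseudo = clf.classes_[proba.argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[confident]])          # add pseudo-labelled commits
        y_l = np.concatenate([y_l, pseudo[confident]])  # to the training set
        X_u = X_u[~confident]
    clf.fit(X_l, y_l)
    return clf
\end{verbatim}
At each iteration, only the unlabelled samples whose predicted class probability exceeds the threshold receive pseudo-labels and are moved into the training set, which is how the unlabelled commits come to influence the final model.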
%Contributions of this work
The contributions of this work are an evaluation of the self-training algorithm for Just-in-time defect prediction, a qualitative study of how software developers identify and resolve bugs, as well as a case study investigating the viability of JIT defect prediction models in practice.

\section{Research Questions}
The purpose of this thesis is to investigate the viability of the self-training algorithm for software risk prediction. It will also perform a qualitative study to understand how software developers identify and resolve bugs. Finally, a case study will be performed to see how practical JIT defect prediction models are in an industrial environment such as King\footnote{https://king.com/}, a game development company. The research questions that will be answered are:
\begin{itemize}
    \item \textbf{RQ1:} When comparing the semi-supervised method of self-training with supervised learning techniques, how well does the semi-supervised approach perform when applied to Just-in-time defect prediction?
    \item \textbf{RQ2:} How do software developers identify bugs, and what are the characteristics of code that contains a bug?
    \item \textbf{RQ3:} When provided with a Just-in-time defect prediction model, how accurate are the model's predictions according to software developers?
\end{itemize}

\section{Scope}
This project focuses on Just-in-time defect prediction models: models that predict at the commit level and rely only on commit metadata rather than analyzing the source code itself. New data is collected using the approximate SZZ algorithm, with a smaller set of features than is typically found in JIT defect prediction datasets.

\section{Outline of Report}
Chapter 2 provides relevant background theory on the field of software defect prediction as well as achievements in related work. Chapter 3 describes the technical contribution that was created for this thesis project. Chapter 4 covers the methodology used to answer the specified research questions. Chapter 5 presents the results of these experiments and, finally, Chapter 6 analyzes and discusses the key findings.

\end{document}
%contribution section
{ "alphanum_fraction": 0.82, "avg_line_length": 159.2592592593, "ext": "tex", "hexsha": "021b993c4f022d465dd2195f95bf563d5f97f7c8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Arsalan-Syed/Master_Thesis", "max_forks_repo_path": "report/chapters/1_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Arsalan-Syed/Master_Thesis", "max_issues_repo_path": "report/chapters/1_introduction.tex", "max_line_length": 1088, "max_stars_count": 2, "max_stars_repo_head_hexsha": "69098becf961483ba7ebdd526cf9213de1a75545", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Arsalan-Syed/Master_Thesis", "max_stars_repo_path": "report/chapters/1_introduction.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-19T18:01:36.000Z", "max_stars_repo_stars_event_min_datetime": "2019-08-29T17:45:10.000Z", "num_tokens": 1658, "size": 8600 }
\subsection{Heapsort}
\begin{frame}{Heapsort - Algorithm 1 / 10}
  \textbf{Heapsort:}
  \begin{itemize}
    \item The principle stays the same
    \item A better structure for finding the smallest element more quickly
  \end{itemize}
  \vspace{1em}
  \onslide<2- |handout:1>{\textbf{Binary heap:}}
  \begin{itemize}
    \item<2- |handout:1> Preferably a complete binary tree
    \item<2- |handout:1> \textbf{Heap property:} Each child is {\color{MainA}smaller} (larger) than the parent element
  \end{itemize}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 2 / 10}
  \textbf{Min heap:}
  \begin{itemize}
    \item<1- |handout:1> \textbf{Heap property:} Each child is {\color{MainA}smaller} (larger) than the parent element
    \item<2- |handout:1> A valid heap fulfills the property at each node
  \end{itemize}
  \vspace{-1em}
  \begin{columns}%
    \begin{column}[b]{0.45\textwidth}%
      \begin{figure}[!h]%
        \begin{adjustbox}{height=0.75\linewidth}%
          \input{Images/Heap/MinHeap_Valid.tikz}
        \end{adjustbox}
        \caption{Valid min heap}
        \label{fig:minheap_valid}
      \end{figure}
    \end{column}%
    \hspace*{0.1em}%
    \begin{column}[b]{0.45\textwidth}%
      \begin{figure}[!h]%
        \begin{adjustbox}{height=0.75\linewidth}%
          \input{Images/Heap/MinHeap_Invalid.tikz}
        \end{adjustbox}
        \caption{Invalid min heap}
        \label{fig:minheap_invalid}
      \end{figure}
    \end{column}
  \end{columns}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 3 / 10}
  \textbf{How to store the heap?}\\[0.25em]
  \begin{itemize}
    \item We number all nodes from top to bottom and left to right starting at {\color{MainA}0}
    \begin{itemize}
      \item The children of node {\color{MainA}$i$} are {\color{MainA}$2i + 1$} and {\color{MainA}$2i + 2$}
      \vspace*{0.5em}
      \item The parent node of node {\color{MainA}$i$} is {\color{MainA}$\mathrm{floor}\left(\frac{i-1}{2}\right)$}
    \end{itemize}
  \end{itemize}%
  \vspace{-2em}%
  \begin{columns}%
    \begin{column}{0.5\textwidth}
      \begin{figure}[!h]%
        \begin{adjustbox}{width=\linewidth}
          \input{Images/Heap/MinHeap_Numbered.tikz}%
        \end{adjustbox}
        \vspace*{-0.5em}%
        \caption{Min heap}%
        \label{fig:minheap_numbered}%
      \end{figure}%
    \end{column}
    \begin{column}{0.5\textwidth}
      \begin{table}[!h]
        \caption{Elements can be stored in array}
        \label{tab:minheap_numbered}
        \begin{tabular}{ccccccc}
          \onslide<2- |handout:1>{\color{MainB}0}&%
          \onslide<3- |handout:1>{\color{MainB}1}&%
          \onslide<4- |handout:1>{\color{MainB}2}&%
          \onslide<5- |handout:1>{\color{MainB}3}&%
          \onslide<6- |handout:1>{\color{MainB}4}&%
          \onslide<7- |handout:1>{\color{MainB}5}&%
          \onslide<8- |handout:1>{\color{MainB}6}\\
          \hline
          \multicolumn{1}{|c}{\onslide<2- |handout:1>{2}}&%
          \multicolumn{1}{|c}{\onslide<3- |handout:1>{3}}&%
          \multicolumn{1}{|c}{\onslide<4- |handout:1>{4}}&%
          \multicolumn{1}{|c}{\onslide<5- |handout:1>{11}}&%
          \multicolumn{1}{|c}{\onslide<6- |handout:1>{7}}&%
          \multicolumn{1}{|c}{\onslide<7- |handout:1>{5}}&%
          \multicolumn{1}{|c|}{\onslide<8- |handout:1>{8}}\\
          \hline
        \end{tabular}
      \end{table}
    \end{column}
  \end{columns}
\end{frame}
%-------------------------------------------------------------------------------
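% Illustrative sketch (not from the lecture's reference code): the index
% arithmetic from the previous slide written out in Python. The function
% names and the example array are example choices.
\begin{frame}[fragile]{Heapsort - Index arithmetic (sketch)}
  \textbf{One way to express the array view of the heap in Python:}
\begin{verbatim}
# children and parent of node i in the array representation
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2   # floor((i - 1) / 2)

heap = [2, 3, 4, 11, 7, 5, 8]        # the min heap from the table
assert heap[left(0)] == 3 and heap[right(0)] == 4
assert parent(5) == 2                # node 5 (value 5) sits below node 2 (value 4)
\end{verbatim}
\end{frame}
%-------------------------------------------------------------------------------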
\begin{frame}{Heapsort - Algorithm 4 / 10}
  \textbf{Repairing after taking the smallest element:} \texttt{heap.pop()}
  \begin{itemize}
    \item<2- |handout:1> Remove the smallest element (root node)
    \item<3- |handout:1> Replace the root with the last node
    \item<4- |handout:1> {\color{MainA}Sift} the new root node down until the {\color{MainA}heap property} is satisfied
  \end{itemize}
  \onslide<5- |handout:1>{
    \begin{figure}[!h]%
      \begin{columns}%
        \begin{column}{0.3\textwidth}%
          \begin{adjustbox}{width=\linewidth}
            \input{Images/HeapSort/MinHeap_Repair_First.tikz}%
          \end{adjustbox}%
        \end{column}%
        \hspace*{0.05em}%
        \begin{column}{0.3\textwidth}<7- |handout:1>%
          \begin{adjustbox}{width=\linewidth}
            \input{Images/HeapSort/MinHeap_Repair_Second.tikz}%
          \end{adjustbox}
        \end{column}%
        \hspace*{0.05em}%
        \begin{column}{0.3\textwidth}<9- |handout:1>%
          \begin{adjustbox}{width=\linewidth}
            \input{Images/HeapSort/MinHeap_Repair_Third.tikz}%
          \end{adjustbox}%
        \end{column}%
      \end{columns}%
      \caption{Repairing a min heap via sifting}%
      \label{fig:minheap_repair}%
    \end{figure}
  }
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 5 / 10}
  \textbf{Heapsort:}
  \begin{itemize}
    \item Organize the {\color{MainA}$n$} elements as a heap
    \item While the heap still contains elements
    \begin{itemize}
      \item Take the smallest element
      \item Move the last node to the root
      \item Repair the heap as described
    \end{itemize}
    \item<2- |handout:1>
      % Output: {\color{MainB}4}%
      % \onslide<6- |handout:1>{, {\color{MainB}5}, {\color{MainB}\ldots}}
      Output: {\color{MainB}2}%
      \onslide<9- |handout:1>{, {\color{MainB}3}, {\color{MainB}\ldots}}
  \end{itemize}
  \vspace*{-0.5em}
  \onslide<2- |handout:1>{
    \begin{center}
      \begin{figure}[!h]%
        \begin{columns}%
          \begin{column}{0.33\textwidth}%
            \begin{centering}
              \begin{adjustbox}{height=8em}
                \input{Images/HeapSort/HeapSort_First.tikz}%
              \end{adjustbox}%
            \end{centering}
          \end{column}%
          \begin{column}{0.33\textwidth}<4- |handout:1>%
            \begin{centering}
              \begin{adjustbox}{height=8em}
                \input{Images/HeapSort/HeapSort_Second.tikz}%
              \end{adjustbox}%
            \end{centering}
          \end{column}%
          \begin{column}{0.33\textwidth}<8- |handout:1>%
            \begin{centering}
              \begin{adjustbox}{height=8em}
                \input{Images/HeapSort/HeapSort_Third.tikz}%
              \end{adjustbox}%
            \end{centering}
          \end{column}%
        \end{columns}%
        \caption{One iteration of Heapsort}%
        \label{fig:heapsort_repair}%
      \end{figure}
    \end{center}
  }
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 6 / 10}
  \textbf{Creating a heap:}
  \begin{itemize}
    \item This operation is called {\color{MainA}heapify}
    \item<2- |handout:1> The {\color{MainA}$n$} elements are already stored in an array
    \item<3- |handout:1> Interpret the array as a binary heap in which the {\color{MainA}heap property} is not yet satisfied
    \item<4- |handout:1> We repair the heap from the bottom up (in layers) with {\color{MainA}sifting}
  \end{itemize}
\end{frame}
%-------------------------------------------------------------------------------
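% Illustrative sketch only (not the lecture's reference implementation):
% sifting, heapify and the sorting loop from the previous slides written
% in Python; the function names and the example array are example choices.
\begin{frame}[fragile]{Heapsort - A sketch in Python}
  \textbf{Putting sifting, heapify and the sorting loop together (sketch):}
\begin{verbatim}
def sift_down(a, i, n):
    # let a[i] sink until the min-heap property holds again
    while True:
        l, r, smallest = 2*i + 1, 2*i + 2, i
        if l < n and a[l] < a[smallest]: smallest = l
        if r < n and a[r] < a[smallest]: smallest = r
        if smallest == i: return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # heapify: repair bottom up
        sift_down(a, i, n)
    out = []
    while n > 0:
        out.append(a[0])                  # take the smallest element
        n -= 1
        a[0] = a[n]                       # move the last node to the root
        sift_down(a, 0, n)                # repair the heap
    return out   # e.g. [11, 7, 8, 3, 2, 5, 4] -> [2, 3, 4, 5, 7, 8, 11]
\end{verbatim}
\end{frame}
%-------------------------------------------------------------------------------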
\begin{frame}{Heapsort - Algorithm 7 / 10}
  \vspace{-1.0em}
  \begin{table}[!h]%
    \caption{Input in array}%
    \label{tab:heapify_numbers}%
    \begin{tabular}{ccccccc}
      {\color{MainB}0}&
      {\color{MainB}1}&
      {\color{MainB}2}&
      {\color{MainB}3}&
      {\color{MainB}4}&
      {\color{MainB}5}&
      {\color{MainB}6}\\
      \hline
      \multicolumn{1}{|c}{11}&%
      \multicolumn{1}{|c}{7}&%
      \multicolumn{1}{|c}{8}&%
      \multicolumn{1}{|c}{3}&%
      \multicolumn{1}{|c}{2}&%
      \multicolumn{1}{|c}{5}&%
      \multicolumn{1}{|c|}{4}\\
      \hline
    \end{tabular}
  \end{table}
  \vspace*{-0.5em}
  \begin{centering}
    \begin{figure}[!h]%
      \begin{columns}%
        \begin{column}{0.425\textwidth}%
          \begin{adjustbox}{width=\linewidth}%
            \input{Images/HeapSort/Heapify_First.tikz}%
          \end{adjustbox}%
        \end{column}%
        \begin{column}{0.425\textwidth}<2- |handout:1>%
          \begin{adjustbox}{width=\linewidth}%
            \input{Images/HeapSort/Heapify_Second.tikz}%
          \end{adjustbox}%
        \end{column}%
      \end{columns}%
      \caption{Heapify lower layer}%
      \label{fig:heapify_lower}%
    \end{figure}
  \end{centering}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 8 / 10}
  \begin{centering}
    \begin{figure}[!h]%
      \begin{columns}%
        \begin{column}{0.425\textwidth}%
          \begin{adjustbox}{width=\linewidth}%
            \input{Images/HeapSort/Heapify_Third.tikz}%
          \end{adjustbox}%
        \end{column}%
        \begin{column}{0.425\textwidth}<2- |handout:1>%
          \begin{adjustbox}{width=\linewidth}%
            \input{Images/HeapSort/Heapify_Fourth.tikz}%
          \end{adjustbox}%
        \end{column}%
      \end{columns}%
      \caption{Heapify upper layer}%
      \label{fig:heapify_upper}%
    \end{figure}
  \end{centering}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 9 / 10}
  \begin{centering}
    \begin{figure}[!h]
      \begin{adjustbox}{width=0.425\linewidth}%
        \input{Images/HeapSort/Heapify_Fifth.tikz}%
      \end{adjustbox}%
      \caption{Resulting heap}%
      \label{fig:heapify_upper_final}%
    \end{figure}
  \end{centering}
\end{frame}
%-------------------------------------------------------------------------------
\begin{frame}{Heapsort - Algorithm 10 / 10}
  \textbf{Finding the minimum is intuitive:}
  \begin{itemize}
    \item \textbf{Minsort:} Iterate through all non-sorted elements
    \item \textbf{Heapsort:} Finding the minimum is trivial (concept)
    \begin{center}
      \textit{Just take the root of the heap}
    \end{center}
  \end{itemize}
  \vspace*{1.5em}
  \onslide<2- |handout:1>{
    \textbf{Removing the minimum in Heapsort:}
    \begin{itemize}
      \item Repair the heap and restore the {\color{MainA}heap property}
      \begin{itemize}
        \item We don't have to repair the whole heap
      \end{itemize}
      \item More of this in the next lecture
    \end{itemize}
  }
\end{frame}
%%% ===================================================================
%%% This should be at the END of the file !!!!!!
%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../Lecture.tex"
%%% End:
%%% ===================================================================
{ "alphanum_fraction": 0.5337728012, "avg_line_length": 31.8218390805, "ext": "tex", "hexsha": "1bf3ef0dcd917fa2c0e34a9a262d024b5b03e6f1", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2021-04-05T08:36:38.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-07T11:55:23.000Z", "max_forks_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TobiOnline/AlgoDat", "max_forks_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_issues_count": 23, "max_issues_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_issues_repo_issues_event_max_datetime": "2019-10-20T15:40:10.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-08T09:27:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TobiOnline/AlgoDat", "max_issues_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_line_length": 100, "max_stars_count": 17, "max_stars_repo_head_hexsha": "565a9f03a9ed7ef354cb4f143959df77df89b726", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TobiOnline/AlgoDat", "max_stars_repo_path": "Lecture-1/Chapter/eng/70_HeapSort.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-26T11:07:16.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-16T17:42:34.000Z", "num_tokens": 3506, "size": 11074 }