\section{Background}
\subsection{Intel SGX: A TEE Implementation}
Intel SGX~\cite{sgxdoc} is a popular implementation of a TEE. It runs code inside a special ``Enclave'' so that the execution of the code is deterministic, i.e., not affected by other processes or the underlying operating system, and its intermediate states are not leaked. In a properly set up system, Intel SGX can defend against attacks from both the OS layer and the hardware layer.
\begin{figure}
\centering \footnotesize
\includegraphics[width=.7\columnwidth]{img/pLIBRA-sgxra}
\caption{Intel SGX remote attestation procedure.}
\label{fig:sgx-ra}
\end{figure}
To show that an execution has finished as expected inside an enclave, a proof can be generated according to a protocol called \textbf{Remote Attestation}. The hardware generates an \textit{attestation quote} based on the details of the hardware, the firmware, the code being executed inside the enclave, and other user-defined data produced by that code. The quote is signed by the trusted hardware with credentials embedded during the manufacturing process.
Next, the generated attestation quote is sent to the Intel Remote Attestation Service. Intel will sign the quote iff the signing credentials are valid. As each credential is uniquely bound to an Intel CPU unit, fake attestation quotes will never pass the Remote Attestation Service check.
Finally, the attestation quote signed by Intel serves as the proof of a successful execution. It proves that specific code has been run inside an SGX enclave and has produced a certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.
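As an illustration only, the sketch below shows the shape of the check a quote verifier performs; the \texttt{Quote} type and the signature-verification callback are simplified placeholders rather than the actual SGX data structures or APIs, and details such as validating Intel's certificate chain are omitted.
\begin{verbatim}
import Data.ByteString (ByteString)

-- Hypothetical, simplified quote; a real SGX quote has many more fields.
data Quote = Quote
  { enclaveMeasurement :: ByteString  -- hash of the enclave code
  , userReportData     :: ByteString  -- output committed by the code
  , quoteBody          :: ByteString  -- the signed portion of the quote
  , iasSignature       :: ByteString  -- signature from Intel's service
  }

-- Accept a quote only if Intel's signature over the body verifies and
-- the measurement matches the code we expect to have been run.
acceptQuote :: (ByteString -> ByteString -> ByteString -> Bool)
            -> ByteString   -- Intel's public key
            -> ByteString   -- expected enclave measurement
            -> Quote -> Bool
acceptQuote verify intelKey expected q =
  verify intelKey (quoteBody q) (iasSignature q)
    && enclaveMeasurement q == expected
\end{verbatim}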
Intel SGX and the Remote Attestation protocol are the foundation of confidential contracts. Besides Intel SGX, there are also alternative TEE implementations such as AMD SEV~\cite{amdsev} and ARM TrustZone~\cite{armtrustzone}.
\subsection{Event Sourcing and CQRS}
Event Sourcing is a software design pattern. Instead of storing the latest state of the data, the events causing state transitions are recorded in an append-only log. The events are timestamped and can be replayed to reconstruct the state at any point in time; because the events are ordered by their timestamps, the reconstructed state is deterministic. Command Query Responsibility Segregation (CQRS) is a design pattern in which read operations and write operations are handled separately. In a combined Event Sourcing and CQRS system, write operations are recorded as events and read operations are served from the current view of the state. This pattern makes a system easy to scale and avoids conflicts.
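As a minimal sketch of the pattern (the account-balance domain and every name below are purely illustrative and not taken from any particular framework), the state is never stored directly; it is reconstructed by folding over the ordered event log:
\begin{verbatim}
import qualified Data.Map.Strict as Map

type Account = String
type State   = Map.Map Account Integer

-- Events are the only thing persisted, in an append-only log.
data Event
  = Deposited Account Integer
  | Transferred Account Account Integer

apply :: State -> Event -> State
apply s (Deposited a n)     = Map.insertWith (+) a n s
apply s (Transferred a b n) =
  Map.insertWith (+) b n (Map.adjust (subtract n) a s)

-- Replaying the same ordered log always yields the same state,
-- which is what makes the reconstructed state deterministic.
replay :: [Event] -> State
replay = foldl apply Map.empty
\end{verbatim}
A read-side view can then be maintained separately by consuming the same log, which is the CQRS half of the pattern.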
For native CPU performance and better security, each confidential contract is bound to a single TEE or a small set of TEEs as its executor. Under this design, the states of different contracts are isolated from each other without any consistency guarantee, which makes cross-contract and even cross-chain interoperability difficult.
While it is hard to maintain strong consistency across these isolated states, contracts can still communicate by passing messages to each other, on the premise that state transitions remain deterministic. In an Event Sourcing / CQRS design, commands can be initiated by users or by contracts and are strictly ordered by their timestamps on the blockchain. This guarantees that the global state is deterministic and therefore enables message passing between contracts. Message passing is a primitive for implementing higher-level interoperability such as contract invocation and token transfers. Read-only operations are not timestamped, for better performance.
%!TEX root = ../thesis.tex
\chapter{Introduction}
\label{chap:i}
\section{Motivation}
\label{sec:motivation}
Constructing and maintaining an up-to-date graph-like road network on the national level has a range of firmly established uses. Owing to its structure, it can be used efficiently for modelling and simulation purposes, such as traffic flow simulations, passenger transport modelling, construction and upgrade impact modelling (to pinpoint optimal locations and types of investment), and traffic noise load modelling (\cite{bell_lida_1997, zhu_li_2007, zhang_2011, duran_santos_2014, peng_etal_2020}). It can also be used for navigation; a graph-like road network representation is at the heart of most road navigation services (\cite{yue_etal_2008}). Combined with other datasets, we can mention an even wider range of use cases: complemented by ecological statistics and models, it can offer insight into the impact of the presence of roads, and planned road construction on the flora and fauna in their vicinity.
Or to mention a different type of example, a digital road network may be used as a shared working space when aggregating geospatial data relating to road infrastructure from various sources. It makes it possible for geographical road locations, topographical relationships, and arbitrary semantic information to reside in the same network-type data model, making analysis techniques more straightforward, enforcing consistency and saving effort for data providers who would otherwise all need to maintain their own road models (\cite{ekpenyong_etal_2007}). This example is closely related to the ambitions of the provider of the Dutch digital road network that is the primary subject of this research.
One may remark that a two-dimensional representation with \textit{approximate} geographical locations may suffice for many of the purposes I listed as examples above, topology being the main concern in network analysis. For instance, \ac{gnss} navigation software often uses snapping methods to ensure that the navigating vehicle always traverses the road graph – so that navigation remains continuous even when positioning has poor accuracy (\cite{fouque_bonnifait_2008}). Traffic flow simulations are primarily concerned with traffic loads, road properties, and how roads are subdivided by intersections. \textit{Mostly}, they are not concerned with the exact geographical locations of roads – as long as the topology is relatively accurate, any geographical permutation of the network will yield largely invariant results (\cite{thomson_richardson_1995}).
However, some applications are concerned with the road network in the context of its surroundings, which makes the accuracy of its georeferencing important. Noise modelling is such an application, because it requires deriving the noise load affecting various objects in the vicinity of the roads. This also involves considering objects that may impact the propagation of the noise, such as noise barriers, terrain, and buildings (\cite{ishiyama_etal_1991, bennett_1997, guarnaccia_quartieri_2012}).
A realistic noise propagation model mainly takes into account the terrain and the 3D geometry of the surrounding objects. However, the position of roads \textit{relative to the terrain} should also be taken into account. For instance, consider a hill with a building on one side and a road on the other, as shown in Figure \ref{fig:justification_illu}. The hill (representing the terrain) in this case acts as a noise barrier affecting the noise load received by the building. Unless the assumption holds that roads are always found \textit{exactly on} the terrain, this is not yet enough information to derive the noise load incident on the building. Roads may be elevated, sunken into the ground, or built in tunnels – meaning that the assumption does not always hold. For instance, if the road in my example were built on a bridge of a similar height to the hill, then the hill would suppress the road noise much less effectively, and ``snapping'' the road to the terrain by ignoring its elevation would yield incorrect noise modelling results. One way to handle such scenarios is to take into account the absolute elevation of the road surfaces, in other words, to use a 3D road network.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{final_report/figs/justification_illu.pdf}
\caption[Illustration of the 3D conversion justification]{This illustration shows a justification for the 3D conversion of digital road networks. Assuming that the roads always lie on the terrain (pink road) allows one to model the propagation of noise with the surrounding terrain and 3D objects taken into account. However, roads above or below the terrain will result in faults in the model. For instance, the noise from an arbitrarily elevated version of this road (yellow road) would reach the building. ``Snapping'' it to the terrain suggests incorrectly that the hill blocks the noise.}
\label{fig:justification_illu}
\end{figure}
2D-projected digital road models with mediocre accuracy have attracted great scientific and commercial attention since the advent of digital cartography and satellite navigation (\cite{taylor_etal_2001, fouque_bonnifait_2008, yue_etal_2008}). However, \textit{accurate 3D representations} are still atypical, owing to factors such as the increased cost of generation and maintenance, the increased complexity of visualisation and analysis, and a lack of significant use cases (\cite{zhu_li_2007, wang_etal_2014}). As a result, 2D road models are common among both public and private geospatial providers, whereas accurate 3D road models are comparatively rare.
When a use case arises and an accurate 3D model is needed, providers generally have two options: to produce a new model, or to enrich an existing 2D model with elevation data. The decision generally depends on the quality of the available 2D data set relative to the requirements for the 3D model, as well as that of the dataset(s) available as sources of elevation data, among other factors (\cite{zhu_li_2007, zhu_li_2008, wang_etal_2014}). In the geospatial field, data acquisition is far more expensive than re-using existing datasets, especially openly available ones. As a result, many providers first attempt to convert their datasets to 3D using existing data in this more cost-effective manner.
\section{The NDW commission}
\label{sec:commission}
In certain projects the accuracy requirement and restrictions on the modelling procedure may be prescribed legally. Such is the case for the client of the present research, \ac{ndw} (National Road Traffic Data Portal), a division of \ac{rws} (Directorate-General for Public Works and Water Management). This Dutch government organisation is in the process of enriching their pre-existing open data 2D road model \ac{nwb} (National Road Database) with 3D data, to attain compliance with the new version of the Dutch noise legislation or \textit{geluidwetgeving}, coming into effect on the 1\textsuperscript{st} of January 2022. The new version of the legislation prescribes, among other things, a horizontal \textit{and vertical} accuracy of 20 centimetres for the road model underlying the noise simulations. Due to cost considerations and reasons related to \ac{ndw}’s data acquisition pipeline, the pre-existing 2D version of \ac{nwb} will be converted into a 3D dataset (dubbed \textit{3D-NWB}) primarily using open data geospatial datasets. They have produced a prototype implementation themselves, and subsequently contracted the consultant firm \ac{rhdhv} to create a commercial implementation based on their experience with the prototype. The development of this tool was concluded in December 2020, with a preliminary version of 3D-NWB already publicly available on their website in addition to the original 2D version.
Thus for \ac{ndw}, the next year will be about assessing the quality of their new product and improving it as they see necessary. In particular, they wish to assess how it fares in terms of the requirements set by law. This dissertation research attempts to contribute to this assessment by presenting an original system design and implementation that favours scientific correctness, and in which output accuracy can be qualitatively and quantitatively described. By comparing the results of the academic system design and implementation to the commercial one, it becomes possible to evaluate the commercial results' general quality and accuracy in an indirect manner. My work also explores various related topics that are not directly intended for this purpose, which I refer to as the solely academic aims of this project.
My research was carried out in consultation with the above parties. In fact, the stages leading up to the submission of my dissertation proposal involved thorough consultation with personnel both at \ac{ndw} and \ac{rhdhv} while the commercial implementation was still being developed, to ensure that my research fits well with \ac{ndw}'s plans and the commercial implementation.
\section{Field and relevance}
\label{sec:relevance}
For reasons that will later become clear (see Section \ref{sec:input}), I primarily focused on a Lidar point cloud and a 3D topographical line dataset as elevation sources. In both of these datasets, it is clearly evidenced that roads are occasionally in complex three-dimensional relationships with one another and with their environment. For instance, they cross above and below other roads and are also frequently occluded by other objects such as vegetation and buildings. Already in the planning phase of the project, the question had arisen of how such real-world geometries should be dealt with in the conversion process – evidently they will require special treatment relative to well-exposed road surfaces. The answer to this question is closely linked with which field of the geosciences my project is positioned in.
The likely candidates in the context of digital road network modelling are \ac{gis} and geomatics (also called geoinformatics). It is thus worth discussing briefly how each typically treats 3D objects. One of the reasons why 2D road models are popular is that their geometry and network properties can be analysed using a multitude of well-proven \ac{gis} methods and software kits. However, in \ac{gis} models, even if elevation measurements exist, they are generally only present as an \textit{elevation attribute} (i.e. a semantic data field, like street names), because \ac{gis} geometrical models do not typically support true 3D operations. This is conceptually identical to projecting the geometries onto the horizontal plane. Geometric models that treat the vertical dimension explicitly are more common in geomatics; namely 2.5D and 3D models. While using 2.5D models restricts the types of physical entities that can be modelled, it also greatly simplifies certain types of analysis conceptually and computationally. This makes it ideal for working on similar scales to \ac{gis}; i.e. on the national scale for instance, as in this research.
While 2.5D modelling initially appears to be a good candidate for this project, we may observe that it is by definition unsuitable for handling the 3D relationships that roads have with each other and with their environment. However, much like how the concept of divide-and-conquer works in computer science, it is possible to decompose three-dimensional geometrical problems into smaller sub-problems until they become natively compatible with 2.5D methods, which are simpler to solve individually than the 3D problem as a whole. This research is positioned in the field of geomatics because my system design is specifically intended to explore how 2.5D methods can be applied in a way that enables the \textit{piecewise} modelling of a national road network. The divide-and-conquer concept is applied to decompose the road network into segments that can be individually, locally regarded as \textit{terrain} (i.e. a mathematical surface) and hence be modelled in 2.5D.
Geomatics is comprised of a wide range of disciplines, several of which are relevant to the present research. As I focused on 2.5D methods to a great extent, it overlaps with the geomatics field of \textit{digital terrain modelling} in terms of how it generates and stores the digital representations of roads, and as a consequence, the manner in which it derives elevations from them: using \textit{spatial interpolation}. As the overview of the methods in Section \ref{sec:methodsoverview} reveals, it also strongly overlaps with the geomatics discipline of \textit{feature extraction} (and to a lesser extent, \textit{photogrammetry}), because of the intermediate steps used by the pipeline to derive the 2.5D road surface models from our input datasets.
This is in line with my goal to study how a combination of mainly geomatics-based tools and methods can be used to accomplish the tasks required by \ac{ndw}, and to also assess their accuracy and suitability when used in this way. However, I also use \ac{gis} methods ``under the hood'' in all parts of the project – for instance, 2D geometry intersection tests, orientation tests and spatial queries are pervasive in the implementation, and are thus often mentioned in the detailed description of the processing steps found in Section \ref{sec:methods}. Furthermore, my research often touches on mathematics and statistics; for instance, I use polynomial fitting and \ac{mle} throughout the implementation (as examples of the former) and metrics such as standard deviation and \ac{rmse} (as examples of the latter).
\section{Research questions}
\label{sec:rq}
My main research question is \textit{"How can we achieve a 3D conversion of the \ac{nwb} dataset using Dutch open geospatial data and a primarily 2.5D-based surface modelling methodology, while guaranteeing optimal and quantifiable accuracy and completeness?"}. It was distilled from the main areas of interest that we settled on during the preparatory stages of the project, while planning the project with my academic mentors, \ac{ndw}, and \ac{rhdhv}. The question is comprised of two halves, which I initially intended to devote equal amounts of attention to – the question of devising a system design and implementing it, and that of assessing its effectiveness and the accuracy and completeness of the output it generates (as well as comparing it with the commercial results).
I eventually settled on focusing more time and effort on the system design and integration (i.e. \textit{performing} the elevation-enrichment of \ac{nwb}), both because it required more time than initially anticipated, and because the quality and accuracy assessment of the results (and their comparison with the commercial results) was more straightforward than expected. Furthermore, I found that many of the accuracy-related questions depend strongly on the exact specifications of the system design and its implementation, meaning that not all the work necessary to answer those questions was clearly separated into a dedicated accuracy assessment process – much of it needed to be considered and evaluated during development.
The two halves of the main research question were created by collecting my sub-questions into two categories; pipeline design and implementation, and accuracy assessment. Below I present some of these sub-questions, to characterise in somewhat more detail what specifically I focused on in this research project.
\begin{enumerate}
\item Sub-questions related to \textit{performing} the elevation-enrichment of \ac{nwb} using Dutch open geospatial data and predominantly 2.5D geomatics methods
\begin{enumerate}
\item What are the exact methods of the commercial implementation and what do we suspect its theoretical shortcomings to be?
\item Does the literature suggest any methods that are suitable to this research? If so, can we make use of them in our own methods?
\item How can we best make use of the combined information content of the datasets used by the commercial implementation?
\item How should the road network be subdivided into parts that each represent a 2.5D problem? Could they be processed individually to facilitate easy parallel processing?
\item As we are using Lidar data, can we produce an accurate and complete \ac{tin} surface model for each 2.5D ``unit''? Can we interpolate elevations for \ac{nwb} through these models?
\item How do we ``stitch'' the results of the individual 2.5D procedures back together into a 3D road network with correct topology?
\item Can the implementation be made robust enough to handle all (or \textit{most}) challenging road layouts correctly, such as complex motorway junctions?
\item How can we make the implementation perform well in areas where elevation data is scarce or missing over longer distances, such as in tunnels?
\item Can the computational complexity of the program be kept low enough to be suitable for processing all the relevant roads?
\item While solving \ac{ndw}'s specific problem, can we ensure that our solution generalises well to other, similar problems?
\end{enumerate}
\item Sub-questions related to the \textit{assessment} of overall quality, completeness and accuracy of the output and its similarity to the commercial results
\begin{enumerate}
\item In related work, what methods were typically used to measure empirical and theoretical output accuracy?
\item According to related work, what typically defines output accuracy? Do local factors also play a role, or is it reasonable to estimate it for the procedure globally?
\item What is the accuracy of our elevation data sources? Can we structure the pipeline in a way that their input accuracy can be propagated to the output in a straightforward manner?
\item To facilitate the above, can we derive our output directly from input data points, despite the large number of processing steps that are potentially necessary?
\item What is the effect of uncertainty in the \textit{horizontal} position of \ac{nwb} centrelines on the effectiveness of our methods, and on the output accuracy?
\item The road surface model \ac{tin}s are also important products of the pipeline. How can we assess the overall quality and completeness of these?
\item Can we indicate in the output which input elevation source each output elevation estimate was derived from, and use this to derive the output accuracy from the appropriate input dataset's accuracy?
\item How are temporal inconsistencies between the datasets manifested in the output? Can these be detected by the processing steps?
\item What physical features or sensing issues do drops in accuracy correspond to? If this corresponds to problems with the input, what aspects of the input datasets should be improved?
\item How good is the agreement between the commercial and the academic results? What could be the reason for global/local differences between them?
\end{enumerate}
\end{enumerate}
"alphanum_fraction": 0.8054009054,
"avg_line_length": 218.3977272727,
"ext": "tex",
"hexsha": "2981c6fe2ac02a4e0ba5f3f87d1f4590649daf82",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "kriskenesei/geo2020-tex",
"max_forks_repo_path": "final_report/chapters/introduction.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "kriskenesei/geo2020-tex",
"max_issues_repo_path": "final_report/chapters/introduction.tex",
"max_line_length": 1427,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a0708fd31c9a71e224de6ed643e9380d15fa26ec",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "kriskenesei/geo2020-tex",
"max_stars_repo_path": "final_report/chapters/introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4010,
"size": 19219
} |
\documentclass[12pt]{scrartcl}
%\usepackage[printwatermark,disablegeometry]{xwatermark}
%\usepackage{epsfig,amssymb}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{multirow}
\usepackage{colortbl}
\definecolor{steelblue}{RGB}{70, 130, 180}
\definecolor{darkred}{rgb}{0.5,0,0}
\definecolor{darkgreen}{rgb}{0,0.5,0}
\usepackage{hyperref}
\hypersetup{
letterpaper,
colorlinks,
linkcolor=red,
citecolor=darkgreen,
menucolor=darkred,
urlcolor=blue,
pdfpagemode=none,
pdftitle={Syllabus},
pdfauthor={Christopher M. Bourke},
pdfkeywords={}
}
\usepackage{fullpage}
\usepackage{tikz}
\pagestyle{empty} %
\usepackage{subfigure}
\definecolor{MyDarkBlue}{rgb}{0,0.08,0.45}
\definecolor{MyDarkRed}{rgb}{0.45,0.08,0}
\definecolor{MyDarkGreen}{rgb}{0.08,0.45,0.08}
\definecolor{mintedBackground}{rgb}{0.95,0.95,0.95}
\definecolor{mintedInlineBackground}{rgb}{.90,.90,1}
\usepackage[newfloat=true]{minted}
\setminted{mathescape,
linenos,
autogobble,
frame=none,
framesep=2mm,
framerule=0.4pt,
%label=foo,
xleftmargin=2em,
xrightmargin=0em,
%startinline=true, %PHP only, allow it to omit the PHP Tags *** with this option, variables using dollar sign in comments are treated as latex math
numbersep=10pt, %gap between line numbers and start of line
style=default} %syntax highlighting style, default is "default"
\setmintedinline{bgcolor={mintedBackground}}
%doesn't work with the above workaround:
\setminted{bgcolor={mintedBackground}}
\setminted[text]{bgcolor={mintedBackground},linenos=false,autogobble,xleftmargin=1em}
%\setminted[php]{bgcolor=mintedBackgroundPHP} %startinline=True}
\SetupFloatingEnvironment{listing}{name=Code Sample}
\SetupFloatingEnvironment{listing}{listname=List of Code Samples}
\setlength{\parindent}{0pt} %
\setlength{\parskip}{.25cm}
\newcommand{\comment}[1]{}
\usepackage{amsmath}
\usepackage{algorithm2e}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
%NOTE: you can embed algorithms in solutions, but they cannot be floating objects; use [H] to make them non-floats
\usepackage{lastpage}
%\usepackage{titling}
\usepackage{fancyhdr}
\renewcommand*{\titlepagestyle}{fancy}
\pagestyle{fancy}
%\renewcommand*{\titlepagestyle}{fancy}
%\fancyhf{}
%\rhead{~}
%\lhead{~}
\renewcommand{\headrulewidth}{0.0pt}
\renewcommand{\footrulewidth}{0.4pt}
\lhead{~}
\chead{~}
\rhead{~}
\lfoot{\Title\ -- Syllabus}
\cfoot{~}
\rfoot{\thepage\ / \pageref*{LastPage}}
\makeatletter
\title{Computer Science III}\let\Title\@title
\subtitle{Data Structures \& Algorithms\\Syllabus\\
{\small
\vskip1cm
Department of Computer Science \& Engineering \\
University of Nebraska--Lincoln}
\vskip-1cm}
%\author{Dr.\ Chris Bourke}
\date{CSCE 310 -- Summer 2021}
\makeatother
\begin{document}
\maketitle
%\newwatermark[allpages=true,scale=5,textmark=Draft]{},
\hrule
\begin{quote}
``Computer Science is no more about computers than astronomy is about telescopes.''\hfill --Edsger Dijkstra
\end{quote}
\begin{quote}
``If you want to be a good programmer you just program every day for two years.
If you want to be a world class programmer you can program every day for ten
years, or you could program every day for two years and take an algorithms
class.''
\hfill ---Charles E. Leiserson
\end{quote}
\section{Course Info}
\textbf{Prerequisites}: CSCE 156 (Computer Science II) and CSCE 235 (Discrete Math)
\textbf{Description}: A review of algorithm analysis, asymptotic notation,
and solving recurrence relations. Advanced data structures and their
associated algorithms, heaps, priority queues, hash tables, trees, binary
search trees, and graphs. Algorithmic techniques, divide and conquer,
transform and conquer space-time trade-offs, greedy algorithms, dynamic
programming, randomization, and distributed algorithms. Introduction to
computability and NP-completeness.
\textbf{Credit Hours}: 3
\textbf{Textbook}: The \emph{recommended} text book for this course is
\emph{Introduction to the Design and Analysis of Algorithms} (any edition)
by Anany Levitin. However, no text book is required as there are plenty
of free online Data Structures and Algorithms resources:
\begin{itemize}
\item My lecture notes: \url{cse.unl.edu/~cbourke/ComputerScienceThree.pdf}
\item Open DSA: \url{https://opendatastructures.org/}
\item \emph{Algorithms} by Jeff Erickson \url{http://jeffe.cs.illinois.edu/teaching/algorithms/}
\end{itemize}
\textbf{Postrequisites}: If you are a Computer Science or Computer
Engineering major you will need to receive a C or better in this course
to continue in the major.
\section{Course Overview}
Computer Science is not programming. Rather, Computer Science is the
mathematical modeling and study of what computation is--what
problems have a computational solution and how efficient that solution
can be. Thus, a strong foundation in mathematics is essential to your
success as a computer scientist. At the heart of computer science
are fundamental, discrete structures which we will study in this
course. Specifically, you will learn many of the mathematical
definitions, techniques, and ways of thinking that will be useful
in Computer Science.
\subsection{Topics}
\begin{itemize}
\item A review of algorithms, algorithm analysis and asymptotics
\item Brute Force algorithms, backtracking, generating combinatorial objects
\item Divide \& conquer techniques, repeated squaring, Karatsuba multiplication, Strassen's matrix multiplication, etc.
\item Algorithms for linear systems
\item Greedy Algorithms: Huffman coding
\item Balanced Trees: Heaps, AVL Trees, 2-3 Trees
\item Hash-based data structures
\item Graph algorithms: DFS, BFS, MSTs, path finding, shortest path
\item Dynamic Programming
\item Computation and computability
\end{itemize}
\section{Schedule}
See Canvas
\section{Course Delivery}
For summer sessions this course is delivered online only in a (more-or-less)
asynchronous manner.
\begin{itemize}
\item Daily lectures will be live streamed via
YouTube (\url{https://www.youtube.com/c/ChrisBourkeUNL/live}), however
the time is yet to be determined.
\begin{itemize}
\item Recordings of the lectures will be available
immediately following so you can watch/rewatch at your convenience
\item During the live broadcast, Piazza will be used for
questions/answers
\end{itemize}
\item For assignments that allow collaboration, you may use any medium
you choose.
\begin{itemize}
\item You can establish your own Zoom rooms to talk back and forth
and share a screen
\item You may use discord or slack instead
\item You can (in fact should) be using git to share code (but
only use private repos)
\item There are (free) online IDEs that allow you both to type in the same
editor at the same time: \url{https://repl.it}, \url{https://ide.cs50.io/};
a more extensive list: \url{https://gist.github.com/rouzbeh84/4bafc9fe4fe02edf506d11997c4674b0}
\end{itemize}
\item Written solutions must be submitted through canvas as PDFs
\item Live office hours will be held online via zoom
\item Exams will still be run, but asynchronously as ``take home'' exams
that will be released the day-of. You will have a limited but
\emph{flexible} time period to complete them and submit it electronically.
No collaboration will be allowed on the exams.
\end{itemize}
\section{Accommodations for Students with Disabilities}
%updated from https://www.unl.edu/ssd/content/syllabus-statement-faculty
% 2020/07/01
The University strives to make all learning experiences as
accessible as possible. If you anticipate or experience
barriers based on your disability (including mental health,
chronic or temporary medical conditions), please let me know
immediately so that we can discuss options privately. To
establish reasonable accommodations, I may request that you
register with Services for Students with Disabilities (SSD).
If you are eligible for services and register with their
office, make arrangements with me as soon as possible to
discuss your accommodations so they can be implemented in a
timely manner. SSD contact information: 117 Louise Pound
Hall.; 402-472-3787
\section{Grading}
Grading will be based on assignments (both written and programming
portions) as well as two exams.
\begin{table}[h]
\centering
{\small
\setlength{\tabcolsep}{0.5em} % for the horizontal padding
\renewcommand{\arraystretch}{1.2}% for the vertical padding
\begin{tabular}{lrrr}
\hline
\rowcolor{steelblue!50} Category & Number & Points Each & Total \\
\hline
\rowcolor{steelblue!5} Assignments & 4 & 200 & 800 \\
\rowcolor{steelblue!10} Midterm & 1 & 100 & 100 \\
\rowcolor{steelblue!5} Final & 1 & 100 & 100 \\
\hline
Total & & & 1,000
\end{tabular}
}
\end{table}
\subsection{Scale}
Final letter grades will be awarded based on the following
standard scale. This scale may be adjusted upwards if the
instructor deems it necessary based on the final grades only.
No scale will be made for individual assignments or exams.
\begin{table}[h]
\centering
\begin{tabular}{p{1cm}c}
Letter Grade & Percent \\
\hline\hline
A+ & $\geq 97$ \\
A & $\geq 93$ \\
A- & $\geq 90$ \\
B+ & $\geq 87$ \\
B & $\geq 83$ \\
B- & $\geq 80$ \\
C+ & $\geq 77$ \\
C & $\geq 73$ \\
C- & $\geq 70$ \\
D+ & $\geq 67$ \\
D & $\geq 63$ \\
D- & $\geq 60$ \\
F & $<60$ \\
\end{tabular}
\end{table}
\subsection{Assignments}
There will be 4 assignments that will consist of both written
exercises as well as \emph{substantial} programming problems.
You will be expected to follow all instructions on the
assignments. Clarity and legibility are of great importance.
If homework is sloppy or unclear, points may be deducted. You
are not required to typeset your written solutions; however,
it is strongly recommended that you do so using \LaTeX,
markdown or similar typesetting system. Resources for \LaTeX\
are available on the course web page. Source code and all
relevant files for programming portions must be handed in
using the CSE web handin program. Each assignment will
have a fixed deadline based on CSE's server time. No late
assignments will be accepted.
Further, programming solutions will be graded using our online
webgrader system. Failure to submit compilable or runnable
code may result in a zero. You are expected to do your own
substantial testing (and to submit valid, working test cases
as well), but it is essential that your submissions work on
the webgrader.
\subsection{Exams}
There will be two exams, both of which will be open-book, open-note,
open-computer but you may \emph{not} collaborate with anyone
in or outside the class on the solutions.
%\subsection{Quizzes}
%
%There will be 4 quizzes each of equal weight. They will generally
%be short and will cover recent topics.
\subsection{Grading Policy}
If you have questions about grading or believe that points were
deducted unfairly, you must first address the issue with the
individual who graded it to see if it can be resolved. Such
questions should be made within a reasonable amount of time
after the graded assignment has been returned. No further
consideration will be given to any assignment a week after
its grades have been posted. It is important to emphasize that
the goal of grading is consistency. A grade on any given
assignment, even if it is low for the entire class, should
not matter that much. Rather, students who do comparable
work should receive comparable grades (see the subsection
on the scale used for this course).
\subsection{Late Work Policy}
In general, there will be no make-up exams or late work
accepted. Exceptions may be made in certain circumstances
such as health or emergency, but you must make every effort
to get prior permission. Documentation may also be required.
Homework assignments have a strict due date/time as defined by
the CSE server's system clock. All program files must be handed
in using CSE's webhandin as specified in individual assignment
handouts. Programs that are even a few seconds past the due
date/time will be considered late and you will be locked out
of handing anything in after that time.
\subsection{Webgrader Policy}
Failure to adhere to the requirements of an assignment in such
a manner that makes it impossible to grade your program via
the webgrader means that a disproportionate amount of time
would be spent evaluating your assignment. For this reason,
we will not grade any assignment that does not compile and
run through the webgrader.
\subsection{Academic Integrity}
All homework assignments, programs, and exams must represent
your own work unless otherwise stated. No collaboration with
fellow students, past or current, is allowed unless otherwise
permitted on specific assignments or problems. The Department of
Computer Science \& Engineering has an Academic Integrity Policy.
All students enrolled in any computer science course are bound
by this policy. You are expected to read, understand, and follow
this policy. Violations will be dealt with on a case by case
basis and may result in a failing assignment or a failing grade
for the course itself. The most recent version of the Academic
Integrity Policy can be found at \url{http://cse.unl.edu/academic-integrity}
\section{Summer Session Policy}
As a summer session, the course pace and presentation are
accelerated. What would normally be covered over 15 weeks
is compressed into less than five. Your success in this
course depends on your acceptance of this fact and a
commitment to putting in the extra work necessary to
understand this material in the time that we do have.
This means extensive daily review of materials outside
of lecture and a diligent attitude toward completing
assignments. As such, no late work will be accepted and
no makeup quizzes or exams will be given. The compressed
time period and logistics of offering summer courses make
it extraordinarily difficult to make such considerations.
In addition, the summer version of this course lacks the
same resources that would be available during the regular
academic year. In particular, the Student Resource Center
is closed and there is no recitation section. This may
make getting additional help more difficult and you should
make the appropriate adjustments or reconsider taking this
course during the regular academic year.
\section{Communication \& Getting Help}
The primary means of communication for this course is Piazza, an online
forum system designed for college courses. We have established a Piazza
group for this course and you should have received an invitation to join.
If you have not, contact the instructor immediately. With Piazza you
can ask questions anonymously, remain anonymous to your classmates, or
choose to be identified. Using this open forum system the entire class
benefits from the instructor and TA responses. In addition, you and
other students can also answer each other's questions (again you may
choose to remain anonymous or identify yourself to the instructors or
everyone). You may still email the instructor or TAs, but more than
likely you will be redirected to Piazza for help.
In addition, there are two anonymous suggestion boxes that you may
use to voice your concerns about any problems in the course if you
do not wish to be identified. My personal box is available at
\url{https://cse.unl.edu/~cbourke/email/}. The department also
maintains an anonymous suggestion box available at
\url{https://cse.unl.edu/contact-form}.
\subsection{Getting Help}
Your success in this course is ultimately your responsibility. Your
success in this course depends on how well you utilize the opportunities
and resources that we provide. There are numerous outlets for learning
the material and getting help in this course:
\begin{itemize}
\item Lectures: attend lectures regularly and when you do use the
time appropriately. Do not distract yourself with social media or other
time wasters. Actively take notes (electronic or hand written). It is
well-documented that good note taking directly leads to understanding and
retention of concepts.
\item Required Reading: do the required reading on a regular basis. The
readings provide additional details and depth that you may not necessarily
get directly in lecture.
\item Piazza: if you have questions ask them on Piazza. It is the best and
likely fastest way to get help with your questions. Also, be sure to read
other students' posts and questions and feel free to answer them yourself!
\item Office Hours: the instructor and GTA(s) hold regular office
hours throughout the week as posted on the
course website. Attend office hours if you have questions or want to
review material.
\item Don't procrastinate. The biggest reason students fail this course
is because they do not give themselves enough opportunities to learn the
material. Don't wait to the last minute to start your assignments. Many
people wait to the last minute and flood the TAs and SRC, making it difficult
to get help as the due date approaches. Don't underestimate how much time
your assignment(s) will take and don't wait to the week before hand to get
started. Ideally, you should be working on the problems as we are covering
them.
\item Get help in the \emph{right way}: when you go to the instructor or
TA for help, you must demonstrate that you have put forth a good faith
effort toward understanding the material. Asking questions that clearly
indicate you have failed to read the required material, have not been
attending lecture, etc.\ is \emph{not acceptable}. Don't ask generic
questions like ``I'm lost, I don't know what I'm doing''. Instead,
explain what you have tried so far. Explain why you think what you
have tried doesn't seem to be working. Then the TA will have an
easier time to help you identify misconceptions or problems. This
is known as ``Rubber Duck Debugging'', wherein if you try to explain
a problem to someone (or, lacking a live person, a rubber duck),
then you can usually identify the problem yourself. Or, at the very
least, get some insight as to what might be wrong.
\end{itemize}
\end{document}
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\begin{document}
\section*{Inline (within text) formulas}
The equation $x + x = 2x$ is inside a text, which uses the \TeX{} shorthand.
In addition, an equation can be written like \(x \cdot x = x^2\) as well, using the \LaTeX{} shorthand.
Finally, the environment \emph{math} can be used like this \begin{math}a^2 + b + c = 0\end{math}.
We can force symbols to be displayed as in a displayed formula. For example, the formula $\sum_{k=0}^{10}k$ can be written $\displaystyle\sum_{k=0}^{10}k$ as well. The sum symbol is rendered taller within a sentence when the command \emph{\textbackslash displaystyle} is used.
\section*{Displayed equations}
The recommended syntax for rendering a displayed (unnumbered) equation is shown below.
\[a \cdot x = ax \]
The syntax \emph{\$\$$\cdots$\$\$} should be avoided, because it modifies the vertical spacing around equations, rendering it inconsistent.
Finally, the environment \emph{displaymath} produces the same effect.
\begin{displaymath}
x^2 \cdot x^2 = x^4
\end{displaymath}
\section*{Equation numbering}
The environment \emph{equation} automatically numbers the equations.
\begin{equation}
f(x)=(x+a)(x+b)
\end{equation}
\end{document}
"alphanum_fraction": 0.7339667458,
"avg_line_length": 33.2368421053,
"ext": "tex",
"hexsha": "388052b80982c418a07fafa45ba51b193935635d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ZenLulz/LatexCompendium",
"max_forks_repo_path": "compendium/mathematics/environments.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ZenLulz/LatexCompendium",
"max_issues_repo_path": "compendium/mathematics/environments.tex",
"max_line_length": 256,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "cc623a88ab05ca90430338333003293baea00f8c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ZenLulz/LatexCompendium",
"max_stars_repo_path": "compendium/mathematics/environments.tex",
"max_stars_repo_stars_event_max_datetime": "2019-09-23T20:16:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-07-30T21:43:55.000Z",
"num_tokens": 358,
"size": 1263
} |
\subsection{Semantic consequence}
A formula \(A\) semantically implies another formula \(B\) if every interpretation that makes \(A\) true also makes \(B\) true.
We show this with:
\(A\vDash B\)
Formula \(B\) is satisfiable if there is some interpretation under which it is true.
For example:
\(A\land B \vDash A\)
Formula \(B\) is a tautology if \(A\vDash B\) holds for every \(A\). We can also write this as \(\vDash B\).
\subsection{Logical equivalence}
If \(A\vDash B\) and \(B\vDash A\) we say that \(A\) and \(B\) are logically equivalent.
This is shown as \(A \Leftrightarrow B\).
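For example, De Morgan's law gives the logical equivalence \(\lnot(A\land B) \Leftrightarrow \lnot A \lor \lnot B\): any interpretation that makes one side true also makes the other true, so each side semantically implies the other.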
\section{The \vtyui\ Event Loop}
\label{sec:event_loop}
\vtyui\ manages the user input event loop for you, and once you have
created and populated a \fw{Collection}, you can invoke the main
\vtyui\ event loop:
\begin{haskellcode}
runUi c defaultContext
\end{haskellcode}
The first parameter is the \fw{Collection} you have created; the
second parameter is a \fw{Ren\-der\-Con\-text}. Here we use the
``default'' rendering context provided by the library. The
``rendering context'' provides four key pieces of functionality:
\begin{itemize}
\item The "skin" to use when rendering ASCII lines, corners, and
intersections
\item The default ``normal'' (unfocused) attribute
\item The default ``focused'' attribute
\item The current ``override'' attribute
\end{itemize}
The event loop will run until one of two conditions occurs:
\begin{itemize}
\item An exception of any kind is thrown; if an exception is thrown,
the event loop will shut down Vty cleanly and re-throw the
exception.
\item An event handler or thread calls \fw{shutdownUi}; the
\fw{shutdownUi} function sends a signal to stop the event loop, at
which point control will be returned to your program. The shutdown
signal goes into a queue with all of the other signals processed by
the event loop, such as key input events and scheduled actions (see
Section \ref{sec:concurrency}), but it will preempt them. Note that
there is no \textit{guarantee} that there won't be some other signal
placed into the queue before you run \fw{shutdownUi}, such as when
another thread is running in parallel with an event handler which
calls \fw{shutdownUi}. A minimal example of ending the loop from an
event handler is sketched after this list.
\end{itemize}
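The sketch below shows one way to do this from a key event handler. It
assumes a focus group \fw{fg} created with \fw{newFocusGroup}; the widget
you actually attach the handler to will depend on your interface.
\begin{haskellcode}
fg <- newFocusGroup
fg `onKeyPressed` \_ key _ ->
  if key == KEsc
    then shutdownUi >> return True  -- request a clean shutdown
    else return False               -- let other handlers see the key
\end{haskellcode}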
\subsection{Skinning}
\label{sec:skinning}
Some widgets, such as the \fw{Table} widget (see Section
\ref{sec:tables}) and the horizontal and vertical border widgets
\fw{VBorder} and \fw{HBorder} (see Section \ref{sec:borders}), use
line-drawing characters to draw borders between interface elements.
Some terminal emulators are capable of drawing Unicode characters,
which make for nicer-looking line-drawing. Other terminal emulators
work best only with ASCII. The default rendering context uses a
Unicode line-drawing skin, which you can change to any other skin (or
your own) as follows:
\begin{haskellcode}
runUi c $ defaultContext { skin = asciiSkin }
\end{haskellcode}
The library provides \fw{Skin}s in the \fw{Skins} module.
\subsection{Attributes}
\label{sec:attributes}
An attribute may consist of one or more settings of foreground and
background color and text style, such as underline or blink. The
default attributes specified in the \fw{Render\-Context} control how
widgets appear.
Every widget has the ability to store its own normal and focused
attributes. When widgets are rendered, they use these attributes; if
they are not set, the widgets default to using those specified by the
rendering context. The only exception is the ``override'' attribute.
Instead of ``falling back'' to this attribute, the presence of this
attribute requires widgets to use it. For example, this attribute is
used in the \fw{List} widget so that the currently-selected list item
can be highlighted, which requires the \fw{List} to override the
item's default attribute configuration.
Widgets provide an API for setting these attributes using the
\fw{Has\-Normal\-Attr} and \fw{Has\-Focus\-Attr} type classes. The
reason we use type classes to provide this API is so that third-party
widgets may also provide this functionality. The API is defined in
the \fw{Core} module and is as follows:
\begin{haskellcode}
setNormalAttribute w attr
setFocusAttribute w attr
\end{haskellcode}
Convenience combinators also exist:
\begin{haskellcode}
w <- someWidget
>>= withNormalAttribute attr
>>= withFocusAttribute attr
\end{haskellcode}
The \fw{attr} value is a Vty attribute. A Vty attribute may provide
any (but not necessarily all!) of the settings that make up an
attribute; any setting not specified (e.g. background color) can fall
back to the default. As a result, the attribute of a widget is the
\textit{combination} of its attribute and the attribute from the
rendering context. The widget's settings will take precedence, but
any setting not provided will default to the rendering context.
Consider this example:
\begin{haskellcode}
w <- someWidget
setNormalAttribute w (fgColor white)
runUi c $ defaultContext { normalAttr = yellow `on` blue }
\end{haskellcode}
In this example, the widget \fw{w} will use a normal attribute of
white on a blue background, since it specified only a foreground color
as its normal attribute. This kind of precedence facilitates visual
consistency across your entire interface.
In addition, container widgets are designed to pass their normal and
focused attributes onto their children during the rendering process;
this way, unless a child specifies a default with
\fw{setNormalAttribute} or similar, it uses its parent's attributes.
Again, this facilitates consistency across the interface while only
requiring the you to specify attributes where you want to deviate from
the default.
You can create attributes with varying levels of specificity by using
the \vtyui\ API:
\begin{tabular}{|l|l|} \hline
Expression & Resulting attribute \\ \hline
\fw{fgColor blue} & foreground only \\ \hline
\fw{bgColor blue} & background only \\ \hline
\fw{style underline} & style only \\ \hline
\fw{blue `on` red} & foreground and background \\ \hline
\fw{someAttr `withStyle` underline} & adding a style \\ \hline
\end{tabular}
The Vty \fw{defAttr} value's default configuration is used as a
basis for all partially-specified attributes. The functions described
above are defined in the \fw{Util} module.
\subsection{\vtyui\ and Concurrency}
\label{sec:concurrency}
So far we have only seen programs which modify widget state when user
input events occur. Such changes in widget state are safe, because
they are triggered by the \vtyui\ event loop.\footnote{``Unsafe''
updates are those that are not guaranteed to be reflected in the
most-recently-rendered interface.} However, your program will more
than likely need to trigger some widget state changes due to other
external events -- such as network events -- and \vtyui\ provides a
mechanism for doing this in a safe way.
\vtyui\ provides a function in the \fw{Core} module called
\fw{schedule} which takes an \fw{IO} action and ``schedules'' it to be
run by the main event loop. It will be run as soon as possible, i.e.,
once the program control flow has returned to the event loop. Since
the scheduled action will be run by the event loop, it's important
that the action not take very long; if it's important to block (e.g.,
by calling \fw{Control.Concurrent.threadDelay}), you should do that in
a thread and only call \fw{schedule} when you have work to do.
Consider this example, in which a text widget called \fw{timeText}
gets updated with the current time every second:
\begin{haskellcode}
forkIO $
forever $ do
schedule $ do
t <- getCurrentTime
setText timeText $
formatTime defaultTimeLocale rfc822DateFormat t
threadDelay 1000000
\end{haskellcode}
In this example the blocking occurs outside of the scheduled code, and
only when we have an update for the clock display do we schedule an
action to run.
Some built-in widgets will almost always be used in this way; for an
example, take a look at the \fw{ProgressBar} widget in the
\fw{ProgressBar} module (see Section \ref{sec:progress_bars}).
\subsection{Handling Resize Events}
When \vtyui\ renders a widget, you can be notified if its size changes. This
might be useful if, for example, you want to change the visual style or state
of a widget when its size crosses a threshold. To do this, register a resize
event handler on the widget with \fw{onResize} as follows:
\begin{haskellcode}
w <- someWidget
w `onResize` \(oldSize, newSize) -> do
...
\end{haskellcode}
The resize handler will be given the old size before the change and the new
size after the change. Initially every widget has size \fw{(0, 0)} so your
handler will always run at least once with an ``old'' size of \fw{(0, 0)} and a
``new'' size of the widget's initial size.
The \fw{onResize} mechanism has a serious caveat. Consider a resize handler
which itself causes another size change, such as a call to \fw{setText} which
makes a text widget larger. Such a change would trigger another resize event
and result in a non-terminating sequence of calls, which would probably crash
your program or, failing that, consume all of your CPU. To avoid this, ensure
that your resize handlers do not cause size changes; you can safely change the
visual style, attributes, etc., of your widget, but if your handler changes the
widget's contents enough to alter its size, a resize handler loop will result.
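As a minimal sketch of a safe handler -- assuming, as in the snippet
above, that sizes are plain \fw{(width, height)} pairs -- you might
adjust only the widget's attributes when its width crosses a
threshold:
\begin{haskellcode}
w `onResize` \(_, (newWidth, _)) ->
  -- Attribute changes never alter the widget's size, so this
  -- handler cannot trigger further resize events.
  setNormalAttribute w $ if newWidth < 20
                           then fgColor red
                           else fgColor green
\end{haskellcode}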
Finally, since resize handlers run during the rendering process, any changes to
widgets which require a redraw to be visible on the screen will need to use the
\fw{schedule} function (see Section \ref{sec:concurrency}). This will ensure
that visual changes to a widget made during rendering will force another
rendering.
\subsection{Deoxyribonucleic acid (DNA)}
DNA is composed of nucleotides.
There are four types of nucleotides, which differ in their nucleobase:
\begin{itemize}
\item Cytosine (C);
\item Guanine (G);
\item Adenine (A); and
\item Thymine (T).
\end{itemize}
\subsection{Ribonucleic acid (RNA)}
\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\title{Yet another derivation of the KLT update rule}
\author{}
\begin{document}
\maketitle
We want to optimize this function with respect to the displacement d,
\begin{equation}
E(d) = \sum_{(x,y)\in W} (I(x,y) - J(x+d_x,y+d_y))^2 \ .
\end{equation}
We will do this iteratively.
Assume that we have a previous guess $\tilde d$ of the displacement, and we only want to compute the update $\delta d$ such that
\begin{equation}
d = \tilde d + \delta d \ .
\end{equation}
Using the notation
\begin{equation}
\tilde J(x,y) = J(x + \tilde d_x, y + \tilde d_y) \ ,
\end{equation}
the functional to be optimized at each iteration is
\begin{equation}
E(\delta d) = \sum_{(x,y)\in W} (I(x,y) - \tilde J(x + \delta d_x, y + \delta d_y))^2 \ .
\end{equation}
The first order approximation of $\tilde J$ with respect to $\delta d$ is
\begin{equation}
\begin{split}
\tilde J(x + \delta d_x, y + \delta d_y)
& = \tilde J(x,y) + \partial_x \tilde J(x,y) \delta d_x + \partial_y \tilde J(x,y) \delta d_y
\\ & = \tilde J(x,y) + \nabla \tilde J(x,y)^T \delta d \ .
\end{split}
\end{equation}
Inserting this into the functional gives
\begin{equation}
E(\delta d) \approx \sum_{(x,y)\in W} (I(x,y) - \tilde J(x,y) - \nabla \tilde J(x,y)^T \delta d)^2 \ .
\end{equation}
Differentiating with respect to $\delta d$ and setting the result equal to zero gives
\begin{equation}
0 =
\sum_{(x,y)\in W} (I(x,y) - \tilde J(x,y)
- \nabla \tilde J(x,y)^T \delta d)
\nabla \tilde J(x,y) \ ,
\end{equation}
which is a 2-vector equality. Factoring out $\delta d$,
\begin{equation}
\sum_{(x,y)\in W} \nabla \tilde J(x,y) \nabla \tilde J(x,y)^T \ \delta d
= \sum_{(x,y)\in W} (I(x,y) - \tilde J(x,y)) \nabla \tilde J(x,y)
\end{equation}
which is a system of two linear equations in $\delta d$.
We define now the gradient matrix as
\begin{equation}
Z = \sum_{(x,y)\in W} \nabla \tilde J(x,y) \nabla \tilde J(x,y)^T
= \sum_{(x,y)\in W}
\begin{pmatrix}
(\partial_x \tilde J)^2
& \partial_x \tilde J \partial_y\tilde J
\\ \partial_x \tilde J \partial_y\tilde J
& (\partial_y \tilde J)^2
\end{pmatrix}
\end{equation}
and error vector as
\begin{equation}
\mathbf e = \sum_{(x,y)\in W} (I(x,y) - \tilde J(x,y)) \nabla \tilde J(x,y) \ .
\end{equation}
The system to solve at each iteration is
\begin{equation}
Z \delta d = \mathbf e \ .
\end{equation}
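Putting the pieces together, each iteration solves this $2\times2$
system and updates the displacement estimate (a sketch, assuming $Z$ is
invertible, i.e.\ the window contains gradient information in both
directions),
\begin{equation}
\delta d = Z^{-1} \mathbf e \ , \qquad \tilde d \leftarrow \tilde d + \delta d \ ,
\end{equation}
repeated until $\|\delta d\|$ falls below a tolerance or a maximum number
of iterations is reached.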
\section{Implementation Details}
\subsection{Smoothing the Images}
Before tracking, the images should be slightly smoothed. There are two reasons for this:
\begin{enumerate}
\item The linearization of $J$ only makes sense if $J$ is smooth.
\item The tracking equations require computing the gradient of $J$. Given a sampling of $J$ at a certain scale, there is no way to compute its gradient at the same scale with accurate precision. We can, however, compute the gradient at a smoother scale by convolving $J$ with the derivatives of a Gaussian kernel. This is
\begin{equation}
\bar J = G \star J \quad \text{and} \quad \nabla\bar J = \nabla G \star J \ .
\end{equation}
When doing this, we are computing the derivatives of the smooth version $\bar J$, not the derivatives of $J$. Therefore, we have to use the smooth version $\bar J$ in all the equations instead of $J$, if we want the derivatives to be coherent with the image.
\end{enumerate}
Since we smooth $J$, it seems reasonable to also smooth $I$ so that they are at the same scale.
To sum up, given the two images $I$ and $J$, tracking has to be done over their smoothed versions
\begin{equation}
\bar I = G \star I \ , \quad
\bar J = G \star J \quad \text{and} \quad
\nabla\bar J = \nabla G \star J \ .
\end{equation}
\end{document}
\documentclass[10pt,letterpaper]{article}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{tcolorbox}
\pagestyle{fancyplain}
% Put watermark on
\usepackage{draftwatermark}
\SetWatermarkText{Draft}
\SetWatermarkScale{7}
\lhead{}
\chead{Central Massachusetts Amateur Radio Association}
\rhead{}
\lfoot{\texttt{https://github.com/mide/cmara-meeting-minutes/}}
\cfoot{}
\rfoot{Page \thepage\ of \pageref{LastPage}}
\begin{document}
\begin{center}
{\huge October 2018 Business Meeting}\\
\emph{of the}\\
{\Large Central Massachusetts Amateur Radio Association}\\
\emph{Submitted by Mark Ide \texttt{W1IDE}, Secretary}
\end{center}
\section{Meeting Called to Order}
The CMARA October 2018 business meeting was called to order on October 18, 2018 at 7:06 PM by CMARA president Bob Peloquin (\texttt{W1TAB}).
\section{Attendance}
\noindent
Below is the list of members and guests that attended the meeting. If you are not caught up on your dues, you'll be listed as a guest. If you feel that's in error, please reach out to the club treasurer.
\subsection{Officers Present}
\begin{tabular}{|l|l|l|c|}
\hline
\textbf{Position} & \textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline
President & Bob Peloquin & \texttt{W1TAB} & Yes \\
Vice President & Brian Loverro & \texttt{K1BML} & No \\
Secretary & Mark Ide & \texttt{W1IDE} & Yes \\
Treasurer & Randolph Dore & \texttt{W4FEB} & Yes \\
Webmaster & Lyn Glagowski & \texttt{WB1CCL} & No \\
\hline
\end{tabular}
\subsection{Board of Directors Present}
\begin{tabular}{|l|l|c|}
\hline
\textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline
Adrian Zeffert & \texttt{AB2IX} & Yes \\
George Gumbrell & \texttt{KA3RLZ} & Yes \\
L. Greg Algieri & \texttt{WA1JXR} & Yes \\
Terry Glagowski & \texttt{W1TR} & Yes \\
Dan Rau & \texttt{K1RAU} & Yes \\
Scott Olsen & \texttt{KB1EZF} & Yes \\
\hline
\end{tabular}
\subsection{Attendance}
\texttt{AB1ZW},
\texttt{AB2IX},
\texttt{K1CCS},
\texttt{K1RAU},
\texttt{K1SAC},
\texttt{K1VEA},
\texttt{K1YYT},
\texttt{KA1YOO},
\texttt{KA3RLZ},
\texttt{KB1EZF},
\texttt{KB1NIP},
\texttt{KB1VXY},
\texttt{KB1YLA},
\texttt{KC1BHD},
\texttt{KC1ETB},
\texttt{KC1GIB},
\texttt{KC1IOK},
\texttt{KC1JCB},
\texttt{KC1SDL},
\texttt{KM1D},
\texttt{N1EFR},
\texttt{N1EKO},
\texttt{N1GEX},
\texttt{N1IEX},
\texttt{N1ZC},
\texttt{NE1O},
\texttt{W1AHM},
\texttt{W1GD},
\texttt{W1IDE},
\texttt{W1LB},
\texttt{W1LEB},
\texttt{W1PA},
\texttt{W1RAU},
\texttt{W1REJ},
\texttt{W1TAB},
\texttt{W1TR},
\texttt{W4FEB},
\texttt{WA1JXR},
\texttt{WA1MDD},
\texttt{WA1RCQ},
\texttt{WK1H},
\texttt{WW2JS},
Chris Wentworth,
Rebecca Ide
% \subsection{Guests \& Visitors}
% \emph{(None)}
% \noindent
% \textasteriskcentered{} Entered as a guest. Voted in as new member. See \S{} \ref{new-cmara-members} for details.
\section{Reports}
\subsection{Secretary's Report}
CMARA Secretary (Mark, \texttt{W1IDE}) is waiting for the minutes from September's meeting. At this time, there were no minutes to present.
\newpage
\subsection{Treasurer's Report}
CMARA Treasurer (Randy, \texttt{W4FEB}) accidentally forgot the report at home and was unable to present. Report tabled until next month.
% The summary of May's treasurer's report was read by Randy (\texttt{W4FEB}). The report was accepted as read by voice vote.
% \subsubsection{May 2018 Treasurer's Summary}
% \noindent
% \begin{tabular}{|l|r|}
% \hline
% Beginning Checking Balance & \texttt{\$6265.30} \\
% Expense: Badges & \texttt{-\$6.25} \\
% Ending Checking Balance & \texttt{\$6259.05} \\
% \hline
% \hline
% LCU Certificate of Deposit & \texttt{\$4,825.32} \\
% \hline
% \hline
% \textbf{Total Club Account} & \texttt{\$11,084.37} \\
% \hline
% \end{tabular}
\subsection{Committee Reports}
\subsubsection{Membership Committee Report}
CMARA has 207 known members, only 92 of which are paid. The remaining have lapsed or left the club. Nationally, the hobby has approximately 248,000 members.
\subsubsection{Repeater Trustees Report}
The CMARA repeater trustees (\texttt{WA1JXR}, \texttt{W1EPH}, \texttt{N1VX}, and \texttt{W1BNC}) prepared a presentation regarding a potential repeater upgrade. The report was extremely detailed and the general summary is as follows: \\
\noindent
\begin{tabular}{|l|r|}
\hline
\textbf{Component} & \textbf{Cost} \\ \hline
Repeater & \texttt{\$2000} \\
Controller & \texttt{\$500} \\
Amplifier & \texttt{\$1300} \\
Power Supply & \texttt{\$500} \\
Isolator & \texttt{\$500} \\ \hline
\textbf{Total} & \texttt{\$4800} \\ \hline
\end{tabular}
\subsubsection{Field Day Committee Report}
There was nothing to report for field day.
\section{Unfinished Business}
There was no known unfinished business.
\section{New Business}
\subsection{New CMARA Members}
\label{new-cmara-members}
% There were no new CMARA members voted in this month.
We would like to welcome the following members to CMARA! All members were voted in by voice vote.
\begin{enumerate}
\item Christopher Wentworth \emph{(Not yet a ham, but wants to support CMARA)}
\item \texttt{K1DX} George Woods
\end{enumerate}
\subsection{Other New Business}
\begin{enumerate}
\item We had the first of our board and officer nominations. If you would like to run for any position, you still can. Let someone know before or during the November meeting. Nominations are listed below.\\
\begin{tabular}{|ll|ll|}
\hline
\textbf{Position} & \textbf{Seats} & \textbf{Nominee} & \textbf{Call} \\ \hline
President & 1 & Bob Peloquin & \texttt{W1TAB} \\ \hline
Vice President & 1 & \emph{(None)} & \\ \hline
Secretary & 1 & \emph{(None)} & \\ \hline
Treasurer & 1 & Randolph Dore & \texttt{W4FEB} \\ \hline
Webmaster & 1 & Lyn Glagowski & \texttt{WB1CCL} \\ \hline
Board of Directors & 6 & Adrian Zeffert & \texttt{AB2IX} \\
& & George Gumbrell& \texttt{KA3RLZ} \\
& & L. Greg Algieri & \texttt{WA1JXR} \\
& & Terry Glagowski & \texttt{W1TR} \\
& & Dan Rau & \texttt{K1RAU} \\
& & Scott Olsen & \texttt{KB1EZF} \\
& & Albert Hayeck & \texttt{N1EFR} \\ \hline
\end{tabular}
\item In order to be overly verbose and clear, the following statements are true regarding the nominations.
\begin{enumerate}
\item Bob Peloquin (\texttt{W1TAB}) has been nominated for the position of club president.
\item There has been no nomination for the position of vice president, but incumbent Brian Loverro (\texttt{K1BML}) has not been asked if he'd like a nomination. The intention is to ask him at the November meeting.
\item There has been no nomination for the position of secretary. Mark Ide (\texttt{W1IDE}) has voiced that he's unable to balance his increased workload and family time with the club's needs.
\item Randolph Dore (\texttt{W4FEB}) has been nominated for the position of club treasurer.
\item Lyn Glagowski (\texttt{WB1CCL}) has been nominated for the position of webmaster. Her husband, Terry (\texttt{W1TR}), accepted the nomination on her behalf.
\item Several people have been nominated for the board of directors (they are listed below). Since there are more than six nominees, only the six with the highest number of votes will be elected.
\item Adrian Zeffert (\texttt{AB2IX}) has been nominated for a seat on the board.
\item George Gumbrell (\texttt{KA3RLZ}) has been nominated for a seat on the board.
\item L. Greg Algieri (\texttt{WA1JXR}) has been nominated for a seat on the board.
\item Terry Glagowski (\texttt{W1TR}) has been nominated for a seat on the board. Terry has shared he is willing to step down in order to share the board and club with new interested members.
\item Dan Rau (\texttt{K1RAU}) has been nominated for a seat on the board.
\item Scott Olsen (\texttt{KB1EZF}) has been nominated for a seat on the board.
\item Albert Hayeck (\texttt{N1EFR}) has been nominated for a seat on the board.
\item We will have another round of nominations in November. If you're interested in running for any position, please let someone who can nominate you know, or, if you'll be there, you can nominate yourself.
\item Anyone can run for any position.
\item In December, people can continue to run as a Write-In.
\item It is our intention that prior to the ballots being issued at the December meeting, every nominated candidate will be able to respond to any questions and/or statements and give a final word prior to votes being cast.
\end{enumerate}
\end{enumerate}
\section{For the Good of the Club}
\subsection{New Hams \& Upgrades}
There were no new hams or upgrades.
\subsection{Other News}
\begin{enumerate}
\item We'd like to thank Dan Pedtke (\texttt{KW2T}) for his wonderful presentation last month.
\item Mark Richards (\texttt{K1MGY}) is always looking for volunteers for all sorts of events. Volunteering as an operator is a great way to give back to the community. Reach out to Mark if you find yourself with some extra time and you'd like to help out. Thanks to Dan (\texttt{K1RAU}) for bringing this news to the club.
\item Selina (\texttt{KC1SDL}) is looking to have an antenna party on October 28 around 1PM. Dan (\texttt{K1RAU}) is helping to organize this and can be reached via email if you're able to help.
\item The Nutmeg Hamfest in Meriden CT is on October 21st. It's approximately 75 miles from Worcester, MA. Visit \texttt{nutmeghamfest.com} for more details.
\end{enumerate}
\section{Next Meeting}
\begin{enumerate}
\item The next meeting will be November 15, 2018. It will be located at the Oakdale United Methodist Church, 15 North Main Street, West Boylston, MA 01583.
\item The meeting will begin promptly at 7:00 PM. Social time will start around 6:30 PM, so be sure to come early if you would like to mingle.
\end{enumerate}
\section{Meeting Adjourned}
The meeting was adjourned on October 18, 2018 by CMARA president Bob Peloquin (\texttt{W1TAB}) at approximately 7:28 PM.
\section{Post-Meeting Presentation}
\emph{(No notes taken)}
\end{document}
\documentclass[10pt,oneside,letterpaper]{article}
%
% TABLE OF CONTENTS STUFF
%
\usepackage{tocloft}
\tocloftpagestyle{empty}
\renewcommand{\cftsubsecfont}{\it}
\renewcommand{\cftsecleader}{\hspace{0.25cm}}%\textbullet\hspace{0.25cm}}
\renewcommand{\cftsecafterpnum}{\cftparfillskip}
\renewcommand{\cftsubsecleader}{\hspace{0.25cm}}%$\cdot$\hspace{0.25cm}}
\renewcommand{\cftsubsecafterpnum}{\cftparfillskip}
\renewcommand{\cftsubsubsecleader}{\hspace{0.25cm}}%$\cdot$\hspace{0.25cm}}
\renewcommand{\cftsubsubsecafterpnum}{\cftparfillskip}
\renewcommand{\cftpnumalign}{l}
%
% INDEXING STUFF
%
\usepackage{makeidx}
\usepackage[columns=1,initsep=20pt,totoc=true]{idxlayout}
\makeindex
\newcommand{\indexcommand}[1]{
\index{\texttt{#1}}%
\marginpar{\texttt{\textbackslash#1}}%
}
\renewcommand\indexname{Index of commands}
%
% TIKZ
%
\usepackage{tikz}
\usetikzlibrary{decorations.pathreplacing}
%
% HEADER AND FOOTER STYLING
%
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\fancyfoot{}
\fancyhead{}
\fancyhead[R]{\it The Student Handouts Package $\cdot$ \thepage}
\newcommand{\pdf}{\textsc{pdf} }
\newcommand{\Latex}{La\textsc{t}e\textsc{x} }
%
% FONT STUFF
%
\frenchspacing
\usepackage[oldstyle]{libertine}
\usepackage[T1]{fontenc}
\linespread{1.1}
\usepackage{verbatim}
%The mono spaced font
\usepackage{inconsolata}
\title{\vspace{-2\baselineskip}The Student Handouts Package}
\author{James Fennell\\{\normalsize [email protected]}}
\begin{document}
\maketitle
\noindent
The Student Handouts package, \texttt{studenthandouts}, is used to generate a single master document that contains a set of individual student handouts.
The package has two main functions.
First, it provides a simple framework for organizing handout source code, and supplies a set of
import management tools for selectively importing a subset of the handouts into the master document.
Selective import is convenient when compilation of all of the handouts is unnecessary, for example when working on a new handout.
As a secondary feature, the package defines a basic visual style for handouts.
This style can be easily changed.
\tableofcontents
%\section{Managing handouts with the studenthandouts package}
\vfill
\noindent
[\emph{Student handouts version 1.0; this documentation compiled \today.}]
\section{Basic usage of the package}
This package is used as an aid in managing and compiling student handouts.
When using this package, the \Latex source for the handouts project as a whole is divided between a
single master file and individual handout files, one for each handout.
This section describes the details of this structure and the basic method for importing handouts into the master document.
All of these features can be seen in the sample code that is distributed with the package.
The handouts master file, into which all of the handouts are imported during compilation, is a standard \Latex file that includes this
package.
Place the usual command in the preamble of this file:
\begin{verbatim}
\usepackage{studenthandouts}
\end{verbatim}
For options that may be passed through \texttt{usepackage}, see section \ref{sec:options} below.
The source for the handouts themselves is stored in a subdirectory of the directory containing the master document.
By default, this subdirectory is \texttt{./handouts/}.
This directory can be changed; see subsection \ref{sec:variables} below.
The package imposes a one-level organizational structure on the handouts.
Each handout is a member of a numbered \emph{unit} and within that unit has a unique handout number.
Handout 1.1 is the first handout in unit one.
The source for handout 1.1 is, by default, stored in
\begin{verbatim}
./handouts/handout-1-1.tex
\end{verbatim}
Handout 3.4 is the fourth handout in unit three and its source is stored in
\begin{verbatim}
./handouts/handout-3-4.tex
\end{verbatim}
The word `unit' is intentionally generic.
When working on a specific project, units may represent, for example, individual lessons, or chapters in the textbook the handouts are based on.
The organizational structure is loose.
For example, there is no requirement that handouts begin at the beginning and proceed sequentially: the first handout can be handout 10.521 if desired.
The source file\indexcommand{sethandouttitle}%
for a specific handout begins with a \texttt{sethandouttitle} command and is followed by the \Latex code for the handout proper.
Place this line at the start:
\begin{verbatim}
\sethandouttitle{<Title of the handout>}
\end{verbatim}
If the handout does not have a title, execute the command with an empty argument:
\begin{verbatim}
\sethandouttitle{}
\end{verbatim}
%The command \texttt{handouttitle} must be executed in all cases.
Once\indexcommand{importhandout}%
there is a source file for handout \emph{n.m}, it is imported into the master document through the \texttt{importhandout} command. % in the master document.
Place this command in the master document at the point the handout is to appear:
\begin{verbatim}
\importhandout{n}{m}
\end{verbatim}
Again, \emph{n} is the unit number and \emph{m} is the unique number of the handout within the unit.
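Putting these pieces together, a minimal master file might look like the
following sketch (the \texttt{article} class and the handout numbers are
arbitrary choices for illustration):
\begin{verbatim}
\documentclass{article}
\usepackage{studenthandouts}
\begin{document}
% Imports ./handouts/handout-1-1.tex and ./handouts/handout-1-2.tex
\importhandout{1}{1}
\importhandout{1}{2}
\end{document}
\end{verbatim}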
\section{Handout import management}
After writing a number of handouts, the master document will have many handout import commands.
However at a given time it may not be desirable to compile all of the handouts in the project.
For example, if working on a new handout it is faster to only compile the new work when error checking.
If collaborating, it may be necessary to compile just a certain subset of the handouts to share.
The import management tools are designed for these situations.
The basic mechanism is that when handout \emph{n.m} is set to be imported the package first checks, based on previous instructions, whether
unit \emph{n} is to be imported, and if so whether handout \emph{n.m} specifically is to be imported.
By default all units and handouts are imported.
The instructions for which handouts are to be imported are given through the commands that follow.
The instructions may be changed at any point in the master file, even after some of the handouts have already been imported.
\begin{itemize}
\item \verb$\importall$ \indexcommand{importall}
Import all handouts.
This is the default behavior.
The \texttt{importall} command is used to reset to the default behavior after other instructions have been given.
\item \verb$\importnone$ \indexcommand{importnone}
Import none of the handouts.
\item \verb$\importonlyunits{$\emph{<unit numbers>}\verb$}$ \indexcommand{importonlyunits}
Import only those units whose unit numbers appear in \emph{<unit numbers>}. The argument \emph{<unit numbers>} is a comma separated list of numbers with no spaces.
Example usage:
\begin{verbatim}
\importonlyunits{1,2,4}
\end{verbatim}
\item \verb$\importallunits$ \indexcommand{importallunits}
Reverse the last command by removing any unit importing restrictions.
The difference between this command and \texttt{importall} is that if any specific handout restrictions have been imposed then
those restrictions still stand.
\item \verb$\importonlyhandouts{$\emph{<handout numbers>}\verb$}$ \indexcommand{importonlyhandouts}
Import only those handouts whose full handout number \emph{n.m} appears in \emph{<handout numbers>}.
The argument \emph{<handout numbers>} is a comma separated list of handout numbers with no spaces.
Example usage:
\begin{verbatim}
\importonlyhandouts{5.2,6.5,6.6}
\end{verbatim}
\item \verb$\importallhandouts$ \indexcommand{importallhandouts}
Reverse the last command by removing any specific handout importing restrictions.
The difference between this command and \texttt{importall} is that if any specific unit restrictions have been imposed then
those restrictions still stand.
\end{itemize}
An important feature is that the import instructions can be changed on the fly.
For example with the following code
\begin{verbatim}
\importnone
\importhandout{1}{1}
\importhandout{1}{2}
\importonlyhandouts{1.3,2.1}
\importhandout{1}{3}
\importhandout{2}{1}
\importhandout{2}{2}
\importhandout{2}{3}
\importall
\importhandout{2}{4}
\end{verbatim}
handouts 1.3, 2.1 and 2.4 will be imported.
In particular, in the import context the \texttt{importnone} and \texttt{importall} commands work similarly to
\verb$\begin{comment}$ and \verb$\end{comment}$ respectively.
\section{Options and variables}
\subsection{The blanks option}
\label{sec:options}
The package has one option: \texttt{blanks} or \texttt{noblanks}, with \texttt{blanks} as default.
The \texttt{blanks} option places a blank page after every handout that has an odd number of pages.
That way, the compiled handouts document can be printed double sided, and handouts with an odd number of pages will still be on their own sheet.
To see how the \texttt{blanks} option achieves this, see Figure \ref{fig:blanks}.
In this example there are three handouts: 1.1 with one page, 1.2 with two pages, and 1.3 with one page.
It is required that 1.2 be distributed double-sided.
In the first case in Figure \ref{fig:blanks}, the handouts are compiled one after another with the \texttt{noblanks} option.
If the resulting document is printed at once double-sided,
the back of handout 1.1 will contain the first page of handout 1.2, and the next sheet will contain the second page of handout 1.2 with handout 1.3 on the reverse.
In order to print the handouts correctly with this \pdf file, separate print calls for each of the three handouts are required.
On the other hand, with the \texttt{blanks} option, the resulting \pdf document can just be printed double sided at once.
The first sheet will contain handout 1.1 on the front, and the blank page after 1.1 on the back.
The next sheet will have the two pages of handout 1.2 back-to-back, and the last sheet will have handout 1.3 by itself.
\newcommand{\handoutimage}[3]{
\draw[white] #1 -- ++(-0.05,0.15) coordinate(hib) -- ++(-0.03,1) coordinate(hia) {};
\draw[white] #1 -- ++(0.5,0.8) coordinate (hic) {};
\node[anchor=west] at (hia) {\footnotesize #2};
\node[anchor=west] at (hib) {\tiny p.#3};
%\draw (hic) rectangle ++ (0.5,0.5);
\draw #1 rectangle ++ (1,1.3);
}
\newcommand{\handoutimageb}[3]{
\draw[white] #1 -- ++(-0.05,0.15) coordinate(hib) -- ++(-0.03,1) coordinate(hia) {};
\draw[white] #1 -- ++(0.5,0.8) coordinate (hic) {};
%\draw (hic) rectangle ++ (0.5,0.5);
\draw #1 rectangle ++ (1,1.3);
}
\newcommand{\handoutimagedbl}[5]{
\draw[white] #1 -- ++(-0.05,0.15) coordinate(hib) -- ++(-0.03,1) coordinate(hia) {};
\draw[white] #1 -- ++(0.5,1.3) coordinate (hic) {};
\node[anchor=west] at (hia) {\footnotesize #2};
\node[anchor=west] at (hib) {\tiny p.#3};
\node[anchor=west,rotate around={65:(hic)}] at (hia) {\footnotesize #4};
%\node[anchor=west,rotate around={65:(hic)}] at (hib) {\tiny p.#5};
\fill[white] (hic) -- ++(0.5,-0.75) -- ++(0,0.75) -- cycle;
\draw (hic) -- ++(0.5,-0.75);
%\draw #1 -- ++(1,0) -- ++(0,0.1) -- ++(-0.85,0.85) -- ++(0.35,0.35) -- ++(-0.5,0) -- cycle;
\draw #1 -- ++(1,0) -- ++(0,0.55) -- ++(-0.725,0.30) -- ++(0.225,0.45) -- ++(-0.5,0) -- cycle;
}
\begin{figure}
\begin{tikzpicture}[scale=0.95]
%\draw[blue] (0,0) grid (12,5);
\handoutimage{(0,3)}{1.1}{1}
\handoutimage{(1.2,3)}{1.2}{1}
\handoutimagedbl{(0.6,1)}{1.1}{1}{1.2}{1}
\handoutimage{(2.4,3)}{1.2}{2}
\handoutimage{(3.6,3)}{1.3}{1}
\handoutimagedbl{(3,1)}{1.2}{2}{1.3}{1}
\handoutimage{(5.4,3)}{1.1}{1}
\handoutimageb{(6.6,3)}{}{1}
\handoutimagedbl{(6,1)}{1.1}{1}{}{}
\handoutimage{(7.8,3)}{1.2}{1}
\handoutimage{(9,3)}{1.2}{2}
\handoutimagedbl{(8.4,1)}{1.2}{1}{1.2}{2}
\handoutimage{(10.2,3)}{1.3}{1}
\handoutimageb{(11.4,3)}{}{}
\handoutimagedbl{(10.8,1)}{1.3}{1}{}{}
\draw [decorate,decoration={brace,amplitude=6pt}] (2.2,2.8) -- (0,2.8) node {};
\draw [decorate,decoration={brace,amplitude=6pt}] (4.6,2.8) -- (2.4,2.8) node {};
\draw [decorate,decoration={brace,amplitude=6pt}] (7.6,2.8) -- (5.4,2.8) node {};
\draw [decorate,decoration={brace,amplitude=6pt}] (10,2.8) -- (7.8,2.8) node {};
\draw [decorate,decoration={brace,amplitude=6pt}] (12.4,2.8) -- (10.2,2.8) node {};
\node at (2.3,4.8){\texttt{noblanks}};
\node at (8.8,4.8){\texttt{blanks}};
\end{tikzpicture}
\caption{Comparison of the outcome of printing the compiled handouts document double-sided.
On the left is the handouts document compiled with the option \texttt{noblanks}; on the right
is the same document compiled with the option \texttt{blanks}.}
\label{fig:blanks}
\end{figure}
\subsection{Variables}
\subsubsection{The handouts subdirectory}
\label{sec:variables}
By default
\indexcommand{thehandoutsdirectory}%
the handouts are stored in the subdirectory \texttt{handouts/} of the directory that contains the master document.
Change this by redefining
% the variable
\texttt{thehandoutsdirectory}:
\begin{verbatim}
\renewcommand{\thehandoutsdirectory}{worksheets/}
\end{verbatim}
It is necessary to place a forward slash at the end.
If the handouts files are to be stored in the same directory as the master file, set the variable \texttt{thehandoutsdirectory} to be empty:
\begin{verbatim}
\renewcommand{\thehandoutsdirectory}{}
\end{verbatim}
\subsubsection{The handouts label}
\label{sec:label}
By default
\indexcommand{thehandoutslabel}%
the package deals with `handouts', and uses that word in the handouts output itself.
For example, the default style prints `Handout 1.1' in the header of handout 1.1.
A different term like `Student worksheet' might be desired instead.
To achieve this, redefine the variable \texttt{thehandoutslabel}:
\begin{verbatim}
\renewcommand{\thehandoutslabel}{Student worksheet}
\end{verbatim}
\subsubsection{The credit/copyright line}
\label{sec:credit}
\indexcommand{thehandoutscredit}
The default handouts style contains a space in left side of the footer where a credit or copyright line can appear.
By default it is empty.
Set it by redefining the variable \texttt{thehandoutscredit}:
\begin{verbatim}
\renewcommand{\thehandoutscredit}{NYU Calculus I; Summer 2015}
\end{verbatim}
\section{Auxiliary package features}
\subsection{A table of contents for the compiled document}
While each handout has its own page numbering -- the first page of a handout is always page one --
having a table of contents which references the page numbers of the handouts as they appear in the compiled \pdf document
can be convenient for determining which pages of the \pdf document need to be printed to get certain handouts.
To generate this type of table of contents in the compiled document, place the usual command in the master file:
\begin{verbatim}
\tableofcontents
\end{verbatim}
The resulting table of contents will have some custom styling defined by the package.
\subsection{Setting unit titles}
When
using this package handouts are organized under generic `units'.
In a given project these units might represent something concrete;
for example, each unit might refer to a specific chapter in a book.
In this case individual units might also have titles, like
`Introduction to Differentiation',
and it may be useful to use these titles in the handouts.
\indexcommand{setunittitle}
Set the title of a unit with the command \texttt{setunittitle}:
\begin{verbatim}
\setunittitle{<unit number>}{<unit title>}
\end{verbatim}
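For instance, to give unit one the title mentioned above, write:
\begin{verbatim}
\setunittitle{1}{Introduction to Differentiation}
\end{verbatim}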
When changing the handout style, or writing a specific handout, the title of the current unit may be printed with the command \verb$\theunittitle$.
If the current unit's title is not set, \verb$\theunittitle$ will do nothing.
In addition, if the title of a unit is set, the title will appear in the compiled document's table of contents.
This will likely make the compiled \pdf easier to navigate, and for this reason alone it is worth setting unit titles if they're available.
\section{Modifying the handout style}
The package defines a basic handouts style.
The style is implemented by changing the page margins (using the \texttt{geometry} package)
and editing the headers and footers of the handouts (using the \texttt{fancyhdr} package).
\subsection{Editing the page layout}
When
\indexcommand{thehandoutsgeometry}%
a new handout is loaded the page layout is temporarily changed using the \texttt{geometry} package.
In general handouts do not have large paragraphs of text, so by default the margins are made smaller so that the whole page can be more efficiently used.
When the package needs to change the page layout to that of the handouts it calls the command
\begin{verbatim}
\thehandoutsgeometry
\end{verbatim}
By default, the command \texttt{thehandoutsgeometry} calls the command \texttt{newgeometry} \linebreak(from the \texttt{geometry} package)
which implements the desired changes.
The exact command that the command \texttt{thehandoutsgeometry} executes by default is
\begin{verbatim}
\newgeometry{top=3cm,left=2cm,right=2cm,bottom=2.5cm}
\end{verbatim}
Any desired geometry for the handouts can thus be achieved by setting the command \texttt{thehandoutsgeometry}
to execute the \texttt{newgeometry} command with appropriate options.
The possible options that can be sent to \texttt{newgeometry} are extensive and
may be discovered in the documentation for the \texttt{geometry} package.
For example, to have uniform margins of four centimeters, use the following:
\begin{verbatim}
\renewcommand{\thehandoutsgeometry}{
  \newgeometry{margin=4cm}
}
\end{verbatim}
To make no changes to the geometry -- that is, to have the handouts use the same layout as the rest of the document --
just set the command to do nothing:
\begin{verbatim}
\renewcommand{\thehandoutsgeometry}{}
\end{verbatim}
\subsection{Editing the header and footers}
The design of the handouts is implemented by editing the headers and footers using the \texttt{fancyhdr} package.
All of the handouts are within the \texttt{studenthandout} fancy page style.
This way, the headers and footers of the handouts can be styled without affecting the rest of the document.
To change the default handout style, use the appropriate commands from the \texttt{fancyhdr} package.
Make the changes to the fancy page style \texttt{studenthandout}.
For example, the following code removes the horizontal lines of the headers and footers and removes
all of the handout information, except for the handout's full title which is placed in the center of the header.
\begin{verbatim}
\fancypagestyle{studenthandout}{
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\fancyhf{}
\fancyhead[C]{\thehandoutfulltitle}
}
\end{verbatim}
When changing the handout style the handout output commands in the next section will likely be needed.
\subsection{Handout output commands}
When editing the style there are a number of commands available that print information such as the current handout's title.
None of these commands take an argument.
As is commonplace in \Latex{}, using a command with no argument can result in the whitespace that follows being gobbled up.
If this happens use the command with braces after; for example \verb$\thehandoutnumber{}$.
\vspace{\baselineskip}
\noindent
The commands that print information specific to a given handout are as follows.
\begin{itemize}
\item \verb$\thehandoutnumber$ \indexcommand{thehandoutnumber}
Print the number of the current handout.
This is the unique number of the handout within the unit.
To print the full handout number, write
\begin{verbatim}
\theunitnumber.\thehandoutnumber
\end{verbatim}
\item \verb$\thehandouttitle$ \indexcommand{thehandouttitle}
Print the title of the current handout.
\item \verb$\thehandoutfulltitle$ \indexcommand{thehandoutfulltitle}
Print the full title of the current handout.
This will read something like `Handout 3.4: Handout Title', or, if the handout has no title, simply `3.4'.
\item \verb$\thehandoutpage$ \indexcommand{thehandoutpage}
Print the current handout page number.
This is distinct from the document page number as each handout begins at Page 1.
To print the document page number use the usual command \verb$\thepage$.
\end{itemize}
\vspace{\baselineskip}
\noindent
The commands that print information about the current unit or the handouts as a whole are as follows.
\begin{itemize}
\item \verb$\theunitnumber$ \indexcommand{theunitnumber}
Print the number of the current unit.
\item \verb$\theunittitle$ \indexcommand{theunittitle}
Print the title of the current unit, as set through the \texttt{setunittitle} command.
If the current unit's title has not been set, this command will do nothing.
\item \verb$\theunitfulltitle$ \indexcommand{theunitfulltitle}
Print the full title of the current unit.
This will read something like `1: Unit Title'.
If the unit's title has not been set, the full title will just read `1'.
\item \verb$\thehandoutslabel$ \indexcommand{thehandoutslabel}
Print the handouts label -- that is, the word or phrase that says what the handouts are.
By default this is `Handout' but it can be changed;
see section \ref{sec:label} above.
\item \verb$\thehandoutscredit$ \indexcommand{thehandoutscredit}
Print the handouts credit.
This is by default blank but may be set to include information like the teacher's name or the institution's name; see section \ref{sec:credit} above.
\end{itemize}
\vspace{\baselineskip}
\noindent
Finally, the package provides a command
\indexcommand{fullhandoutinfo}
\begin{verbatim}
\fullhandoutinfo
\end{verbatim}
which outputs a table containing all nine pieces of information from above, as well as the output of the command \verb$\thepage$.
This command is intended for debugging and is not designed to be used in the final document.
\newpage
\printindex
\end{document}
\chapter{Neural Networks: Representation}
\label{chap:neural_net_repr}
Let's start by discussing the motivation for neural networks. We have already seen and coded two powerful machine learning algorithms, so why do we need another?
\section{Non-linear Hypotheses}
If we have a fairly messy dataset with three terms, $x_1$, $x_2$, and $x_3$, we can classify them using logistic regression, but we'll probably need to introduce polynomial terms to get an accurate classifier. This would give us a hypothesis in the following form:
$$
h_\theta\left(x\right) = g\left(\theta_0 + \theta_1 x_1^2 + \theta_2 x_1 x_2 + \theta_3 x_1 x_3 + \theta_4 x_2^2 + \theta_5 x_2 x_3 + \theta_6 x_3^2\right)
$$
Simply by including quadratic terms, we created six features. We can determine this number of features mathematically from combinatorics, and we can model it after sampling with replacement:
\begin{equation}
\text{num. quadratic features } = \binom{n+k-1}{k} = \frac{\left( n + k - 1\right) !}{k! \left(n-1\right)!} = \frac{\left(3 + 2 - 1\right)!}{2! \cdot \left(3-1\right)!} = \frac{4!}{4} = 6
\end{equation}
If we think back to our housing example, and want to perform classification instead of regression using 100 features, that would give us 5050 polynomial terms to include, in addition to the 100 linear terms. We can approximate the growth of the number of new features we get with all quadratic terms with $\mathcal{O}\left(n^2 / 2\right)$. If we wanted to include cubic terms in our hypothesis too, the features would grow asymptotically as $\mathcal{O}\left(n^3\right)$. Since the number of features grows so rapidly, the number of quadratic and cubic features very quickly becomes impractical.
Consider a collection of $50\times 50$ pixel black-and-white photographs, where we want to determine which photographs are of cars. Then the length of our feature vector is $2500$\footnote{If we were using RGB values, this would be $7500$.}, since we have $2500$ individual pixels. Each feature here represents the brightness of the pixel. Now if we want to include quadratic features, we have approximately $2500^2 / 2 = 3,125,000$ features.
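For concreteness, the exact count from the same sampling-with-replacement formula is
$$
\binom{2500 + 2 - 1}{2} = \frac{2501 \cdot 2500}{2} = 3,126,250 \approx \frac{2500^2}{2} = 3,125,000
$$
so the figure of roughly three million quadratic features is the leading-order approximation.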
\section{Neurons and the Brain}
Neural networks originated when people thought to build algorithms to mimic how the human brain learns. They were popular in the 1980s, but somewhat fell out of use in the 90s; however, there has been a pretty big surge in neural network use lately due to the massive advances in computer hardware and processing speed.
While it might seem like the human brain learns different things in different brain regions, there is a hypothesis that the brain only uses a single learning algorithm for all its different functions. This was motivated by an experiment where scientists rewired the optical nerve to the auditory cortex in animals, and the auditory cortex actually learned to see. This was repeated with other areas of the brain as well. The principle behind this is called neuroplasticity.
\section{Model Representation I}
Neural networks were developed to simulate neurons and networks of neurons in the brain. Very simplistically, the neuron takes inputs via the dendrites as electrical inputs (called spikes) and then channels the output via the axon.
For an artificial neural network, we'll use a very simple model of what a neuron does: we'll model a neuron as a logistic unit. In our model, our inputs are the input features $x_1$, $x_2$, etc. and the output is the result of our hypothesis function. Just like with logistic regression, we have an input vector $\vec{x}$ and a parameter vector $\vec{\theta}$
$$
\vec{x} = \left[\begin{array}{c}x_0 \\ x_1 \\ x_2 \\ x_3 \end{array}\right] ~~\mbox{ \;\;\;\;\;\;\;\;\;\; }~~ \vec{\theta} = \left[\begin{array}{c}\theta_0 \\ \theta_1 \\ \theta_2 \\ \theta_3 \end{array}\right]
$$
where $x_0 = 1$ is the bias term. When representing neural networks, we always have the $\theta_0$ bias term, but we sometimes omit it for notational convenience. Additionally, when representing neural networks, we'll typically use $3$ features, though in reality the number of features is a parameter of the problem.
$$
\left[\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \end{array}\right] \to \left[ {\;\;} \right] \to h_\theta\left(x\right)
$$
We use the same logistic function in the hypothesis as logistic regression. However, in neural networks, it is often called the sigmoid function, or the sigmoid/logistic activation function.
\begin{equation}
\frac{1}{1 + e^{-\theta^{{}^\intercal} x}}
\end{equation}
We sometimes call the $\theta$ parameters \textbf{weights} for neural networks, as is traditional in the neural network literature, so now we might refer to $\vec{\theta}$ as either parameters or weights. Now, let's look at a very simple model of a neural network. The first layer, $\vec{x}$, is called the \textbf{input layer}. The output of the hypothesis function is called the \textbf{output layer}, which gives our final value for the hypothesis. In between the input layer and the output layer, there are one or more hidden layers.
\begin{figure}[h] % figure placement: here, top, bottom, or page
\centering
\graphicspath{{./Figures/}} %Use this to import an image from a subfolder.
\includegraphics[scale=0.8]{nn_repr_3_layer_neural_netw.pdf}
\caption[]{A sample artificial neural network, with three inputs and one hidden layer.}
\label{nn_repr_3_layer_neural_netw.pdf}
\end{figure}
The hidden layer nodes are labeled $a_1^{(2)}$, $a_2^{(2)}$, etc. and called activation units, where $a_i^{(j)}$ is the activation of unit $i$ in layer $j$. The matrix $\Theta^{(j)}$ is the matrix of weights controlling the function mapping from layer $j$ to layer $j + 1$. Mathematically, we might represent this as
$$
\left[\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \end{array}\right] \to \left[\begin{array}{c} a_1^{(2)} \\ a_2^{(2)} \\ a_3^{(2)} \end{array}\right] \to h_\Theta\left(x\right)
$$
Now, let's break out the computations that are represented by this diagram
\begin{subequations}
\begin{align}
a_1^{(2)} &= g\left( \Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3 \right) \\
a_2^{(2)} &= g\left( \Theta_{20}^{(1)} x_0 + \Theta_{21}^{(1)} x_1 + \Theta_{22}^{(1)} x_2 + \Theta_{23}^{(1)} x_3 \right) \\
a_3^{(2)} &= g\left( \Theta_{30}^{(1)} x_0 + \Theta_{31}^{(1)} x_1 + \Theta_{32}^{(1)} x_2 + \Theta_{33}^{(1)} x_3 \right) \\
h_\Theta\left(x\right) = a_1^{(3)} &= g\left( \Theta_{10}^{(2)} a_0^{(2)} + \Theta_{11}^{(2)} a_1^{(2)} + \Theta_{12}^{(2)} a_2^{(2)} + \Theta_{13}^{(2)} a_3^{(2)} \right)
\end{align}
\label{chapnnrepr-sectmodelrepr1-definehiddenlayeractivations}
\end{subequations}
This is saying that we compute our activation nodes using a $3\times 4$ matrix of parameters. We apply each row of parameters to our inputs to obtain the value for one activation node. Our hypothesis output is the sigmoid function applied to the sum of the values from the activation nodes, which have been multiplied by yet another parameter matrix, $\Theta^{(2)}$, containing the weights for our second layer of nodes.
More generally, the dimension of the matrix of weights $\Theta^{(j)}$ is given by the following: if a network has $s_j$ units in layer $j$, and $s_{j+1}$ units in layer $j+1$, then $\Theta^{(j)}$ will have dimensions
\begin{equation}
\| \Theta^{(j)} \| = \left(s_{j+1}\right)\times\left(s_j + 1\right)
\end{equation}
The $+1$ for layer $j$ comes from the bias nodes, $x_0$ and $\Theta_0^{(j)}$, and it's only applied to the input nodes since the output of a layer doesn't include a bias node.
For example, if layer one has $2$ input nodes and layer two has $4$ activation nodes, then $\Theta^{(1)}$ will be a $4\times 3$ matrix, since $s_j = s_1 = 2$ and $s_{j+1} = s_2 = 4$.
\section{Model Representation II}
Now, we're going to go through the neural network model again, but this time with a vectorized implementation. We begin by defining a new variable $z_k^{(j)}$ that encompasses the parameters inside of our sigmoid function $g$. As such, we can now rewrite equations \ref{chapnnrepr-sectmodelrepr1-definehiddenlayeractivations} as
\begin{align*}
a_1^{(2)} &= g\left( \Theta_{10}^{(1)} x_0 + \Theta_{11}^{(1)} x_1 + \Theta_{12}^{(1)} x_2 + \Theta_{13}^{(1)} x_3 \right) & &\implies & a_1^{(2)} &= g\left(z_1^{(2)}\right) \\
a_2^{(2)} &= g\left( \Theta_{20}^{(1)} x_0 + \Theta_{21}^{(1)} x_1 + \Theta_{22}^{(1)} x_2 + \Theta_{23}^{(1)} x_3 \right) & &\implies & a_2^{(2)} &= g\left(z_2^{(2)}\right) \\
a_3^{(2)} &= g\left( \Theta_{30}^{(1)} x_0 + \Theta_{31}^{(1)} x_1 + \Theta_{32}^{(1)} x_2 + \Theta_{33}^{(1)} x_3 \right) & &\implies & a_3^{(2)} &= g\left(z_3^{(2)}\right)
\end{align*}
So the $z$ values are just a weighted linear combination of the input values $x_0$, $x_1$, etc. going to a particular neuron. In other words, for layer $j=2$ and node $k$,
\begin{equation}
z_k^{(2)} = \Theta_{k, 0}^{(1)} x_0 + \Theta_{k, 1}^{(1)} x_1 + \cdots + \Theta_{k, n}^{(1)} x_n
\end{equation}
The vector representations of $x$ and $z^{(j)}$ are
$$
x = \left[\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \end{array}\right] ~~\mbox{ \;\;\;\;\;\;\;\;\;\; }~~ z^{(j)} = \left[\begin{array}{c} z_1^{(j)} \\ z_2^{(j)} \\ z_3^{(j)} \end{array}\right]
$$
From these vectors, we have $z^{(2)} = \Theta^{(1)} x$ and $a^{(2)} = g\left(z^{(2)}\right)$, where $z^{(2)} \in \mathbb{R}^3$ and $a^{(2)} \in \mathbb{R}^3$. We define $x$ to be $a^{(1)}$, which makes sense because $x$ is our input vector and $a^{(1)}$ implies that we're looking at our first layer, which is the input layer. Then, we can write the general definition for $z$ as
\begin{equation}
z^{(j)} = \Theta^{(j-1)} a^{(j-1)}
\end{equation}
Here, we are multiplying our matrix $\Theta^{(j-1)}$, with dimensions $s_j \times (n+1)$, by our vector $a^{(j-1)}$, with length $(n+1)$. This yields our vector $z^{(j)}$ with length $s_j$.\footnote{Recall that $s_j$ is the number of activation nodes.}
From this, we can create a vector of our activation nodes for layer $j$ as
\begin{equation}
a^{(j)} = g\left(z^{(j)}\right)
\end{equation}
where the sigmoid function $g$ is applied element-wise to $z^{(j)}$. Next, to get our hypothesis, we need to add a bias term to layer $j=2$: $a_0^{(2)} = 1$. In fact, going forward we will always need to add these bias terms, and they all equal one: $a_0^{(j)} = 1$. Now we have $a^{(2)} \in \mathbb{R}^4$, since we just added the bias term to the previously length-three vector. Now we can compute
$$
z^{(3)} = \Theta^{(2)} a^{(2)}
$$
and
$$
h_\Theta\left(x\right) = a^{(3)} = g\left(z^{(3)}\right)
$$
This process for computing $h_\Theta\left(x\right)$ is called \textbf{forward propagation}, because we start off with the activations of the input units, then forward propagate that to compute the activations of the hidden layer, then again forward propagate that to compute the activations of the output layer.
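Collecting the steps for the three-layer network above, forward propagation is simply
\begin{align*}
a^{(1)} &= x \\
z^{(2)} &= \Theta^{(1)} a^{(1)} & a^{(2)} &= g\left(z^{(2)}\right) \\
z^{(3)} &= \Theta^{(2)} a^{(2)} & h_\Theta\left(x\right) &= a^{(3)} = g\left(z^{(3)}\right)
\end{align*}
with the bias unit $a_0^{(2)} = 1$ appended to $a^{(2)}$ before computing $z^{(3)}$.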
Let's step back for a minute. What we're doing here is very similar to logistic regression, though it might not seem like it. Previously, we had the input features feeding directly into the logistic regression; now, instead, we have the nodes from layer $j=2$ (the hidden layer) feeding into the logistic regression. However, those nodes $a_k^{(2)}$ are themselves learned from the input data.
We've been specifically talking about the neural network architecture described in Figure \ref{nn_repr_3_layer_neural_netw.pdf}, but there can be other neural network architectures too. Consider the example shown in Figure \ref{nn_repr_4_layer_neural_netw.pdf}.
\begin{figure}[h] % figure placement: here, top, bottom, or page
\centering
\graphicspath{{./Figures/}} %Use this to import an image from a subfolder.
\includegraphics[scale=1]{nn_repr_4_layer_neural_netw.pdf}
\caption[]{A sample artificial neural network, with three inputs and two hidden layers.}
\label{nn_repr_4_layer_neural_netw.pdf}
\end{figure}
Here, we have the same input layer, but there are two hidden layers. The first hidden layer has three hidden units, which are computed as some complex function of the input layer. The second hidden layer can take the first hidden layer's features and compute even more complex features, so the output layer can have very complex features.
\section{Examples and Intuitions}
Let's say we have inputs $x_1, x_2 \in \{0, 1\}$. In this case, our target label $y = x_1 \text{ AND } x_2$. This is a logical \textit{and}. Can we make a neural network that can recreate this \textit{and} operator? The graph of our function will look something like this
$$
\left[\begin{array}{c} x_0 \\ x_1 \\ x_2 \end{array}\right] \to \left[g\left(z^{(2)}\right)\right] \to h_\Theta\left(x\right)
$$
where $x_0 = 1$ is our bias variable. For this example, let's define our first $\Theta^{(1)}$ matrix as
$$
\Theta^{(1)} = \left[\begin{array}{ccc}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} & \Theta_{1,2}^{(1)} \end{array}\right] = \left[\begin{array}{ccc}-30 & 20 & 20 \end{array}\right]
$$
This means our hypothesis is given by
$$
h_\Theta\left(x\right) = g\left( -30 + 20x_1 + 20x_2 \right)
$$
Let's figure out what our hypothesis evaluates to for different combinations of $x_1$ and $x_2$\footnote{Keep in mind that the sigmoid function evaluates to about $0.99$ for an input of $4.6$, and about $0.01$ for an input value of $-4.6$.}
\begin{center}
\begin{tabular}{c c | r}
$x_1$ & $x_2$ & $h_\Theta\left(x\right)$ \\
\hline
$0$ & $0$ & $g\left(-30\right) \approx 0$ \\
$0$ & $1$ & $g\left(-10\right) \approx 0$ \\
$1$ & $0$ & $g\left(-10\right) \approx 0$ \\
$1$ & $1$ & $g\left(10\right) \approx 1$
\end{tabular}
\end{center}
This is exactly the truth table for the logical \textit{and}, so $h_\Theta\left(x\right) \approx x_1 \text{ AND } x_2$. Using a small neural network, we have just constructed one of the most fundamental operations in computing: the \textit{and} gate.
\subsection{Building Logical Gates Using Neural Networks}
We are also able to build neural networks to simulate all other logical gates. Let's start with a super-simple example. If we have a single input variable $x_1$, let's use a neural network to build the logical \textit{not} gate. We could do this with
$$
\Theta^{(1)} = \left[\begin{array}{cc}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} \end{array}\right] = \left[\begin{array}{cc}10 & -20 \end{array}\right]
$$
giving us a hypothesis of
$$
h_\Theta\left(x\right) = g\left(10 - 20x\right)
$$
If we fill out the table of values for this, we get
\begin{center}
\begin{tabular}{c | r}
$x_1$ & $h_\Theta\left(x\right)$ \\
\hline
$0$ & $g\left(10\right) \approx 1$ \\
$1$ & $g\left(-10\right) \approx 0$
\end{tabular}
\end{center}
As a reminder, here is a truth table for some additional logic gates.
\begin{center}
\begin{tabular}{| c | c | c | c | c | c | c | c |} \hline
\multicolumn{2}{| c |}{\textbf{Input}} & \multicolumn{6}{ c |}{\textbf{Output}} \\ \hline
A & B & A and B & A or B & A nand B & A nor B & A xor B & A xnor B \\ \hline \hline
0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ \hline
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ \hline
1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ \hline
1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ \hline
\end{tabular}
\end{center}
We can similarly construct $\Theta$ for the \textit{or} gate as
$$
\Theta^{(1)} = \left[\begin{array}{ccc}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} & \Theta_{1,2}^{(1)} \end{array}\right] = \left[\begin{array}{ccc}-10 & 20 & 20 \end{array}\right]
$$
and $\Theta$ for the \textit{nor} gate as
$$
\Theta^{(1)} = \left[\begin{array}{ccc}\Theta_{1,0}^{(1)} & \Theta_{1,1}^{(1)} & \Theta_{1,2}^{(1)} \end{array}\right] = \left[\begin{array}{ccc}10 & -20 & -20 \end{array}\right]
$$
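Following the same recipe, one workable choice of weights for the \textit{nand} gate (not given above, but easy to check against the truth table) is
$$
\Theta^{(1)} = \left[\begin{array}{ccc} 30 & -20 & -20 \end{array}\right]
$$
since the four input combinations give $g\left(30\right) \approx 1$, $g\left(10\right) \approx 1$, $g\left(10\right) \approx 1$ and $g\left(-10\right) \approx 0$, matching the \textit{nand} column of the truth table.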
\subsection{Logical XNOR Gate}
Having defined the \textit{not}, \textit{and}, \textit{or}, and \textit{nor} gates, let's try and build a logical \textit{xnor} gate. We'll start by building a hidden layer with two nodes, one built with the \textit{and} gate and the other with the \textit{nor} gate. Using
$$
\Theta^{(1)} = \left[\begin{array}{ccc} -30 & 20 & 20 \\ 10 & -20 & -20 \end{array}\right]
$$
we can build $a_1^{(2)}$ from the \textit{and} gate and build $a_2^{(2)}$ from the \textit{nor} gate. This gives us the following
\begin{center}
\begin{tabular}{c c | c c | c}
$x_1$ & $x_2$ & $ a_1^{(2)}$ & $a_2^{(2)}$ & $h_\Theta\left(x\right)$ \\ \hline
0 & 0 & 0 & 1 & {} \\
0 & 1 & 0 & 0 & {} \\
1 & 0 & 0 & 0 & {} \\
1 & 1 & 1 & 0 & {}
\end{tabular}
\end{center}
Now, to finish our \textit{xnor} gate, we can use the $\textit{or}$ gate between our two existing nodes on the second layer.
$$
\Theta^{(2)} = \left[\begin{array}{ccc} -10 & 20 & 20 \end{array}\right]
$$
Writing this out formally, we find
\begin{align*}
a^{(2)} &= g\left(\Theta^{(1)} x\right) \\
h_\Theta\left(x\right) = a^{(3)} &= g\left(\Theta^{(2)} a^{(2)}\right)
\end{align*}
Filling in the rest of our table, we find we've built the \textit{xnor} gate!
\begin{center}
\begin{tabular}{c c | c c | c}
$x_1$ & $x_2$ & $ a_1^{(2)}$ & $a_2^{(2)}$ & $h_\Theta\left(x\right)$ \\ \hline
0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 1
\end{tabular}
\end{center}
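As a sanity check, let's trace the forward propagation for one input, say $x_1 = 0$ and $x_2 = 0$:
\begin{align*}
a^{(2)} &= g\left(\Theta^{(1)} \left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right]\right) = \left[\begin{array}{c} g\left(-30\right) \\ g\left(10\right) \end{array}\right] \approx \left[\begin{array}{c} 0 \\ 1 \end{array}\right] \\
h_\Theta\left(x\right) &= g\left(-10 + 20 \cdot 0 + 20 \cdot 1\right) = g\left(10\right) \approx 1
\end{align*}
which matches the first row of the table.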
\section{Multiclass Classification}
Similar to logistic regression, we can do multiclass classification with neural networks, and the way we do it is essentially an extension of the one-vs-all method. Let's say we have a computer vision example, where we're trying to classify an image as a pedestrian, a car, a motorcycle, or a truck. We would do this by building a neural network with an output of four numbers, meaning the output $h_\Theta$ will actually be a $4$-vector. In our example, when we have a pedestrian or car, we'd want our output to be
$$
\text{(pedestrian) } h_\Theta\left(x\right) \approx \left[\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right] ~~\mbox{ \;\;\;\;\;\;\;\;\;\; }~~ \text{(car)} h_\Theta\left(x\right) \approx \left[\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right]
$$
Our training set will look the same as before
$$
\left(x^{(1)}, y^{(1)}\right), \left(x^{(2)}, y^{(2)}\right), \left(x^{(3)}, y^{(3)}\right), \cdots, \left(x^{(m)}, y^{(m)}\right)
$$
but instead of representing $y \in \{1, 2, 3, 4\}$, we'll represent $y^{(i)}$ as one of the following:
$$
y^{(i)} \in \left\{ \left[\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right],
\left[\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}
$$
where both $h_\Theta\left(x\right)$ and $\vec{y}$ will be in $\mathbb{R}^4$.
Let's write this out a bit. Sticking with the image classification problem with four output classes, our artificial neural network can be represented by
$$
\left[\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n\end{array}\right] \to
\left[\begin{array}{c} a_0^{(2)} \\ a_1^{(2)} \\ a_2^{(2)} \\ a_3^{(2)} \\ \vdots \end{array}\right] \to
\left[\begin{array}{c} a_0^{(3)} \\ a_1^{(3)} \\ a_2^{(3)} \\ a_3^{(3)} \\ \vdots \end{array}\right] \to
\left[\begin{array}{c} h_\Theta\left(x\right)_1 \\ h_\Theta\left(x\right)_2 \\ h_\Theta\left(x\right)_3 \\ h_\Theta\left(x\right)_4 \end{array}\right]
$$
The final hidden layer of nodes, when multiplied by its $\Theta$ matrix, will result in another vector, on which we can apply the sigmoid function $g$ to get a vector of hypothesis values, which will be approximately equal to one of the four $y^{(i)}$ vectors.
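In practice, to read off a single predicted class from this vector of hypothesis values, we would typically just take the position of the largest entry,
$$
\text{prediction} = \arg\max_{k} \; h_\Theta\left(x\right)_k
$$
which recovers the one-vs-all decision rule we used for logistic regression.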
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{txfonts}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{stmaryrd}
\usepackage{color}
\usepackage{bussproofs}
\usepackage{multicol}
\usepackage{setspace}
\graphicspath{ {./} }
\author{J. O'Connor}
\date{December 2021}
\setlength\parindent{0pt}
\setlength{\parskip}{1em}
\setlength{\columnsep}{0.5cm}
\lstset{
language=ML,
basicstyle=\small\sffamily,
frame=single,
tabsize=4,
columns=fixed,
showstringspaces=false,
showtabs=false,
keepspaces,
commentstyle=\color{red},
keywordstyle=\color{blue},
xleftmargin=.05\textwidth,
xrightmargin=.05\textwidth
}
\newcommand{\centerbox}[1]{
\begin{center}
\fbox{
\begin{minipage}{\dimexpr\textwidth-2.5cm}
#1
\end{minipage}
}
\end{center}
}
\newcommand{\centerboxtitled}[2]{
\centerbox {
\begin{center}
\textbf{#1}
\end{center}
#2
}
}
\newcommand{\SafeRightLabel}[1]{
\RightLabel{$\textrm{#1}$}
}
\newcommand{\SequentAxiom}[2]{
\SafeRightLabel{#1}
\SequentAxiomNoLabel{#2}
}
\newcommand{\SequentAxiomNoLabel}[1]{
\AxiomC{}
\UnaryInfC{#1}
\DisplayProof
\hspace{10px}
}
\newcommand{\SequentUnary}[3]{
\SafeRightLabel{#1}
\SequentUnaryNoLabel{#2}{#3}
}
\newcommand{\SequentUnaryNoLabel}[2]{
\AxiomC{#1}
\UnaryInfC{#2}
\DisplayProof
\hspace{10px}
}
\newcommand{\SequentBinary}[4]{
\SafeRightLabel{#1}
\SequentBinaryNoLabel{#2}{#3}{#4}
}
\newcommand{\SequentBinaryNoLabel}[3]{
\AxiomC{#1}
\AxiomC{#2}
\BinaryInfC{#3}
\DisplayProof
\hspace{10px}
}
\newcommand{\SequentTrinary}[5]{
\SafeRightLabel{#1}
\SequentTrinaryNoLabel{#2}{#3}{#4}{#5}
}
\newcommand{\SequentTrinaryNoLabel}[4]{
\AxiomC{#1}
\AxiomC{#2}
\AxiomC{#3}
\TrinaryInfC{#4}
\DisplayProof
\hspace{10px}
}
\newcommand{\SequentBox}[1]{
\centerbox{
\vspace{15px}
\begin{center}
\begin{spacing}{3.0}
#1
\hspace{-10px}
\end{spacing}
\end{center}
\vspace{-30px}
}
}
\newcommand{\inlineeq}[1]{
\vspace{-2em}
\begin{gather*}
#1
\end{gather*}
\vspace{-2em}
}
\newcommand{\quasi}[0]{\; \sim \!}
\setcounter{secnumdepth}{0}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{1cm}
\Huge
\textbf{Unofficial Types Notes}
\vspace{0.5cm}
\LARGE
Student written notes for the type theory course, based on the 2021 lectures
\vspace{1.5cm}
\textbf{J. O'Connor}
jo429
\vfill
THIS IS NOT AN OFFICIAL DOCUMENT! This was written by students to collate and reinforce their own understanding and should not be used as a substitute for the official documents. There may be missing content, things may be written incorrectly or misunderstood and entire points may be missed.
\vspace{0.8cm}
\Large
Thanks to the following people for proofreading this document:
R. Laine
K. Druciarek
\end{center}
\end{titlepage}
\setlength{\parskip}{-0.5em}
\tableofcontents
\setlength{\parskip}{1em}
\newpage
\section{Types}
Adding types to a programming language's design seems like an obvious choice\footnote{except to Lisp programmers} to make; we can add some guarantees to the format and uses of data, which can be checked at compile time via Type Checking, allowing for Type Safety to be guaranteed before the program is even run. Type systems are often also used as a form of documentation, wherein the names of types may indicate the function of the data stored within instances of the types. This leads naturally to Object Oriented Programming, where the types not only represent the format and guarantees on the data stored within the type, but also the functions that are allowed to be called on the data. Type systems also lead to faster code, as the compilers for type-safe languages can make strong assumptions about the data stored in a memory location based on the type of the variable that the memory location corresponds to.
However, type systems lead a double life. Once we begin to formalise some of the above notions of 'Type Safety' and 'Typing judgements', we begin to see a direct parallel between \textbf{Types}, and \textbf{Logic \& Proof}.
\section{Semantics of Programming Languages Revision}
In IB Semantics of Programming Languages we saw how we can define a grammar of a language, then an Operational Semantics and Typing Judgement for terms within this grammar. In II Types we start with an understanding of these concepts.
For example, we may have a simple language of Booleans and Integers. Its grammar may look like the following:
\inlineeq{
e ::= true \; | \; false \; | \; n \; | \; e_1 \leq e_2 \; | \; e_1 + e_2 \; | \; e_1 \land e_2 \; | \; \lnot e
}
From this grammar, we can begin to build terms:
\inlineeq{
3 + 4 \leq 5 \\
(3 + 4 \leq 7) \land (7 \leq 3 + 4)
}
Excellent! However, we can also build some terms that don't make sense:
\inlineeq{
4 \land true \\
false + 7
}
The obvious thing to do is to modify our grammar to only allow for valid terms. In this language, we can do this by splitting the terms into three rules:
\inlineeq{
e_1 ::= n \; | \; e_1 + e_1 \\
e_2 ::= true \; | \; false \; | \; e_1 \leq e_1 \; | \; e_2 \land e_2 \; | \; \lnot e_2 \\
e ::= e_1 \; | \; e_2
}
By doing this, we have actually introduced a basic notion of types. We can see that $e_1$ is all of the expressions with type \textit{number} and $e_2$ is all of the expressions with type \textit{boolean}.
Unfortunately, then, we have managed to entangle our concepts of types and expressions. To disentangle these ideas, we introduce \textit{typing judgements} on expressions, where a judgement only exists if an expression is typed correctly, or \textit{well-typed}. This means that we can keep a single, simple grammar of expressions and treat typing as a separate layer of judgements on top of it.
Returning to our original language of Booleans and Integers:
\inlineeq{
e ::= true \; | \; false \; | \; n \; | \; e_1 \leq e_2 \; | \; e_1 + e_2 \; | \; e_1 \land e_2 \; | \; \lnot e
}
We can add judgement rules for every expression with a type as follows:
\SequentBox{
\SequentAxiom{Num}{$n : \mathbb{N}$}
\SequentAxiom{True}{$true : bool$}
\SequentAxiom{False}{$false : bool$}
\SequentUnary{Neg}{$e : bool$}{$\lnot e : bool$}
\SequentBinary{Plus}{$ e : \mathbb{N} $}{$ e' : \mathbb{N} $}{$e + e' : \mathbb{N}$}
\SequentBinary{And}{$ e : bool $}{$ e' : bool $}{$e \land e' : bool$}
\SequentBinary{LEQ}{$ e : \mathbb{N} $}{$ e' : \mathbb{N} $}{$e \leq e' : bool$}
}
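For instance, here is one derivation built from these rules, showing that $3 + 4 \leq 5$ is well-typed (every leaf is an instance of the Num axiom):
\begin{center}
\AxiomC{}
\SafeRightLabel{Num}
\UnaryInfC{$3 : \mathbb{N}$}
\AxiomC{}
\SafeRightLabel{Num}
\UnaryInfC{$4 : \mathbb{N}$}
\SafeRightLabel{Plus}
\BinaryInfC{$3 + 4 : \mathbb{N}$}
\AxiomC{}
\SafeRightLabel{Num}
\UnaryInfC{$5 : \mathbb{N}$}
\SafeRightLabel{LEQ}
\BinaryInfC{$3 + 4 \leq 5 : bool$}
\DisplayProof
\end{center}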
Note that we have not yet defined how these expressions actually behave when we step through their execution. Typing is a process done on expressions before they are run, and does not change the path that the expression takes when it is executed. Typing judgements simply tell us whether an expression is \textit{well-typed}, and we can show that well-typed expressions within some languages have nice properties like \textbf{Termination}.
\newpage
But now comes the question of variables. For example, the following statement should only be valid if the variable $x$ stores a number:
\inlineeq{
(x + x) \leq 10
}
To do this we add a context which holds the type of every variable in the expression. This context propagates through the proof tree as we build it from the bottom up, allowing us to see the types of variables within the local context that they occupy.
\SequentBox{
$\textrm{Contexts} \; \Gamma ::= \cdot \; | \; \Gamma, x: \tau$
\SequentAxiom{Num}{$\Gamma \vdash n : \mathbb{N}$}
\SequentAxiom{True}{$\Gamma \vdash true : bool$}
\SequentAxiom{False}{$\Gamma \vdash false : bool$}
\SequentBinary{Plus}{$\Gamma \vdash e : \mathbb{N} $}{$\Gamma \vdash e' : \mathbb{N} $}{$\Gamma \vdash e + e' : \mathbb{N}$}
\SequentBinary{And}{$\Gamma \vdash e : bool $}{$\Gamma \vdash e' : bool $}{$\Gamma \vdash e \land e' : bool$}
\SequentBinary{LEQ}{$\Gamma \vdash e : \mathbb{N} $}{$\Gamma \vdash e' : \mathbb{N} $}{$\Gamma \vdash e \leq e' : bool$}
\SequentUnary{Var}{$x:\tau \in \Gamma$}{$\Gamma \vdash x : \tau$}
\SequentBinary{Let}{$\Gamma \vdash e : \tau $}{$\Gamma, x:\tau \vdash e' : \tau' $}{$\Gamma \vdash \textrm{let} \; x = e \; \textrm{in} \; e' : \tau '$}
}
\newpage
\section{Structural Properties and Substitution}
We have introduced variables into our language, so we should introduce a notion of substitution as well:
\inlineeq{
\begin{split}
[e/x]true &= true \\
[e/x]false &= false \\
[e/x]n &= n \\
[e/x](e_1 + e_2) &= [e/x]e_1 + [e/x]e_2 \\
[e/x](e_1 \leq e_2) &= [e/x]e_1 \leq [e/x]e_2 \\
[e/x](e_1 \land e_2) &= [e/x]e_1 \land [e/x]e_2 \\
[e/x]x &= e \\
[e/x]z &= z \\
[e/x](\textrm{let} \; z = e_1 \; \textrm{in }e_2) &= \textrm{let }z = [e/x]e_1 \; \textrm{in} \; [e/x]e_2 \\
(\textrm{assuming} \; z &\notin \mathrm{fv}(e) \; \textrm{and} \; z \neq x)
\end{split}
}
These rules are akin to $\beta$-reduction in Lambda Calculus, and should be rules that you are very familiar with. Note that we did not need to define these substitution rules when defining the sequents for types in the previous section - we only use substitution when evaluating an expression, or when describing properties that hold for well-typed expressions.
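As a rough sketch (not part of the lecture material), here is how these substitution rules might be written as a function over a small ML datatype for the language; the type and function names are invented for this example:
\begin{lstlisting}[language=ML]
type exp =
  | True
  | False
  | Num of int
  | Plus of exp * exp
  | Leq of exp * exp
  | And of exp * exp
  | Not of exp
  | Var of string
  | Let of string * exp * exp;;

(* subst e x e' computes [e/x]e', mirroring the rules above; as in the
   notes, we assume the let-bound variable is distinct from x and not
   free in e *)
let rec subst e x e' =
  match e' with
  | True | False | Num _ -> e'
  | Plus (a, b) -> Plus (subst e x a, subst e x b)
  | Leq (a, b) -> Leq (subst e x a, subst e x b)
  | And (a, b) -> And (subst e x a, subst e x b)
  | Not a -> Not (subst e x a)
  | Var z -> if z = x then e else Var z
  | Let (z, a, b) -> Let (z, subst e x a, subst e x b);;
\end{lstlisting}
For example, applying it with $[3/x]$ to $(x + x) \leq 10$ gives $(3 + 3) \leq 10$.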
There are three properties that we like to have hold for any type system. These are as follows:
\begin{enumerate}
\item (\textbf{Weakening})
If a term typechecks in a context, then it will still typecheck in a bigger context.
$\Gamma, \Gamma' \vdash e : \tau \implies \Gamma, x : \tau'', \Gamma' \vdash e : \tau$
\item (\textbf{Exchange})
If a term typechecks in a context, then it will still typecheck after reordering the variables in the context.
$\Gamma, x_1 : \tau_1, x_2 : \tau_2, \Gamma' \vdash e : \tau \implies
\Gamma, x_2 : \tau_2, x_1 : \tau_1, \Gamma' \vdash e : \tau$
\item (\textbf{Substitution})
Subsituting a type-correct term for a variable will preserve type correctness.
$(\Gamma \vdash e : \tau) \land (\Gamma, x : \tau \vdash e' : \tau') \implies
\Gamma \vdash [e/x]e' : \tau'$
\end{enumerate}
These properties are all proven by structural induction in the lectures. That is, by proving that each property holds for each possible structure of expression independently, we prove that all expressions must have these properties.
\newpage
\section{Operational Semantics}
We have a language and type system. We therefore have an idea of which programs are valid within our language and also what the type of the computed value will be. How do we say what value a program computes? With an \textit{Operational Semantics}: a two-place relation on terms $e \rightsquigarrow e'$, pronounced ``$e$ steps to $e'$''.
\SequentBox{
$\textrm{Values} \; v ::= n \; | \; true \; | \; false$
\SequentUnary{AndCong}{$e_1 \rightsquigarrow e_1'$}{$e_1 \land e_2 \rightsquigarrow e_1' \land e_2$}
\SequentAxiom{AndTrue}{$true \land e \rightsquigarrow e$}
\SequentAxiom{AndFalse}{$false \land e \rightsquigarrow false$}
\SequentUnary{LEQCong1}{$e_1 \rightsquigarrow e_1'$}{$e_1 \leq e_2 \rightsquigarrow e_1' \leq e_2$}
\SequentUnary{LEQCong2}{$e \rightsquigarrow e'$}{$v \leq e \rightsquigarrow v \leq e'$}
\SequentUnary{LEQTrue}{$n_1 \leq n_2$}{$n_1 \leq n_2 \rightsquigarrow true$}
\SequentUnary{LEQFalse}{$n_1 > n_2$}{$n_1 \leq n_2 \rightsquigarrow false$}
\SequentUnary{AddCong1}{$e_1 \rightsquigarrow e_1'$}{$e_1 + e_2 \rightsquigarrow e_1' + e_2$}
\SequentUnary{AddCong2}{$e \rightsquigarrow e'$}{$v + e \rightsquigarrow v + e'$}
\SequentUnary{AddStep}{$n_1 + n_2 = n_3$}{$n_1 + n_2 \rightsquigarrow n_3$}
\SequentUnary{LetCong}{$e_1 \rightsquigarrow e_1'$}{$\textrm{let} \; z = e_1 \; \textrm{in} \; e_2 \rightsquigarrow \textrm{let} \; z = e_1' \; \textrm{in} \; e_2$}
\SequentAxiom{LetStep}{$\textrm{let} \; z = v \; \textrm{in} \; e \rightsquigarrow [v / z] e$}
}
A reduction sequence is a sequence of transitions $e_1 \rightsquigarrow e_2 \rightsquigarrow \cdots \rightsquigarrow e_n$, which we abbreviate as $e_1 \rightsquigarrow^* e_n$.
A term $e$ is stuck if it is not a value and there is no $e'$ such that $e \rightsquigarrow e'$.
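As a quick worked example of these rules and of the $\rightsquigarrow^*$ notation:
\inlineeq{
\textrm{let} \; x = 3 + 4 \; \textrm{in} \; x \leq 10 \rightsquigarrow \textrm{let} \; x = 7 \; \textrm{in} \; x \leq 10 \rightsquigarrow 7 \leq 10 \rightsquigarrow true
}
The three steps are justified by AddStep (under LetCong), LetStep and LEQTrue respectively, and the final term $true$ is a value, so this program finishes rather than getting stuck.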
\newpage
Stuck terms are erroneous programs with no defined behaviour, so we like to avoid them. Therefore we have two properties that we like to have hold:
\begin{enumerate}
\item (\textbf{Progress})
Well-typed programs are not stuck: they can always take a step of progress (or are done).
$\cdot \vdash e : \tau \implies (e \in v) \lor (\exists e'. e \rightsquigarrow e')$
\item (\textbf{Preservation})
If a well-typed program takes a step, it will stay well-typed.
$(\cdot \vdash e : \tau) \land (e \rightsquigarrow e') \implies \cdot \vdash e' : \tau$
\end{enumerate}
All five of these key properties of \textbf{Weakening}, \textbf{Exchange}, \textbf{Substitution}, \textbf{Progress} and \textbf{Preservation} are collectively known as \textit{Type Safety}.
These properties are also proven by structural induction in the lectures.
\newpage
\section{Typed Lambda Calculus}
The above language is fine, but it has a lot of fluff with numbers and addition and so on. We've seen that we can represent computability with Lambda Calculus, which doesn't come with any fancy numbers or booleans. We can add types to lambda calculus exactly as you'd expect:
\begin{equation*}
\begin{split}
X ::= 1 &| X \times Y \; | \; 0 \; | \; X + Y \; | \; X \to Y \\
e ::= x &| \langle \rangle \; | \; \langle e, e'\rangle \; | \; \textrm{fst} \; e \; | \; \textrm{snd} \; e \; | \; \textrm{abort} \; e \\
&| L e \; | \; R e \; | \; \textrm{case}(e, L x \to e', R y \to e'') \\
&| \lambda x : X. e \; | \; e e' \\
\Gamma ::= \cdot &| \Gamma, x : X
\end{split}
\end{equation*}
Where the only `base' types are the unit type $1$, whose only value is $\langle \rangle$, and the empty type $0$ (later written $\bot$), which has no values at all. See the next section to understand why the empty type is in our system.
Note that we have inherently restricted the expressions that we can form with this new typed calculus. This was actually our aim - we want to prevent the formation of expressions that do not have type safety. However our notion of type safety at this point is restrictive, and there are some expressions within lambda calculus which terminate but cannot be expressed here. For example, you cannot create the Ackermann function within STLC, but we could within untyped Lambda Calculus.
Also of interest is that we've added product and sum types to our simply typed lambda calculus, which weren't present in the original lambda calculus. This is because in LC we can represent these ideas purely through functions and function applications, however our type system is restrictive and prevents us from encoding these structures (see Polymorphic Lambda Calculus later on for more detail). Therefore we add product and sum expressions, just to preserve some of the computability of Lambda Calculus. In other typed lambda calculi we can add other combinations of these expressions, but our STLC has function applications, products, sums and the empty type $\bot$.
\newpage
The typing derivations are as follows:
\SequentBox{
\SequentAxiom{1I}{$\Gamma\vdash\langle\rangle : 1$}
\SequentBinary{$\times$I}{$\Gamma\vdash e : X$}{$\Gamma\vdash e' : Y$}{$\Gamma\vdash \langle e,e' \rangle : X \times Y$}
\SequentUnary{$\times $E$_1$}{$\Gamma\vdash e : X \times Y$}{$\Gamma\vdash \textrm{fst} \; e : X$}
\SequentUnary{$\times $E$_2$}{$\Gamma\vdash e : X \times Y$}{$\Gamma\vdash \textrm{snd} \; e : Y$}
\SequentUnary{HYP}{$x : X \in \Gamma$}{$\Gamma\vdash x : X$}
\SequentUnary{$\to$I}{$\Gamma, x:X \vdash e:Y$}{$\Gamma \vdash \lambda x : X . e : X \to Y$}
\SequentBinary{$\to$E}{$\Gamma \vdash e : X \to Y$}{$\Gamma \vdash e' : X$}{$\Gamma \vdash e e' : Y$}
\SequentUnary{$+$I$_1$}{$\Gamma \vdash e : X$}{$\Gamma \vdash L e : X + Y$}
\SequentUnary{$+$I$_2$}{$\Gamma \vdash e : Y$}{$\Gamma \vdash R e : X + Y$}
\SequentTrinary{$+$E}{$\Gamma \vdash e : X + Y$}{$\Gamma, x: X \vdash e' : Z$}{$\Gamma, y: Y \vdash e'' : Z$}{$\Gamma \vdash \textrm{case}(e, Lx \to e', Ry \to e'') : Z$}
\SequentUnary{0E}{$\Gamma \vdash e : 0$}{$\Gamma \vdash \textrm{abort} \; e : Z$}
}
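As a small example of a term these rules accept, a function that swaps the two components of a pair can be typed as
\inlineeq{
\cdot \vdash \lambda p : X \times Y . \langle \textrm{snd} \; p, \textrm{fst} \; p \rangle : (X \times Y) \to (Y \times X)
}
using HYP for $p$, $\times$E$_2$ and $\times$E$_1$ for the projections, $\times$I for the pair and $\to$I for the abstraction.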
Note that we cannot build any expression of type 0, but if we could then we could abort it to create any other type we want. We will see that this is akin to being able to prove anything from falsehood.
\newpage
We then add the operational semantics:
\SequentBox{
$\textrm{Values }v ::= \langle \rangle \; | \; \langle v, v'\rangle \; | \; \lambda x : A. e \; | \; L v \; | \; R v$
\SequentUnaryNoLabel{$e_1 \rightsquigarrow e_1'$}{$\langle e_1, e_2 \rangle \rightsquigarrow \langle e_1', e_2 \rangle$}
\SequentUnaryNoLabel{$e_2 \rightsquigarrow e_2'$}{$\langle v_1, e_2 \rangle \rightsquigarrow \langle v_1, e_2' \rangle$}
\SequentAxiomNoLabel{$\textrm{fst }\langle v_1, v_2 \rangle \rightsquigarrow v_1$}
\SequentAxiomNoLabel{$\textrm{snd }\langle v_1, v_2 \rangle \rightsquigarrow v_2$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{fst} \; e \rightsquigarrow \textrm{fst} \; e'$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{snd} \; e \rightsquigarrow \textrm{snd} \; e'$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{abort} \; e \rightsquigarrow \textrm{abort} \; e'$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$Le \rightsquigarrow Le'$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$Re \rightsquigarrow Re'$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{case}(e, Lx \to e_1, Ry \to e_2) \rightsquigarrow \textrm{case}(e', Lx \to e_1, Ry \to e_2)$}
\SequentAxiomNoLabel{$\textrm{case}(Lv, Lx \to e_1, Ry \to e_2) \rightsquigarrow [v / x]e_1$}
\SequentAxiomNoLabel{$\textrm{case}(Rv, Lx \to e_1, Ry \to e_2) \rightsquigarrow [v / y]e_2$}
\SequentUnaryNoLabel{$e_1 \rightsquigarrow e_1'$}{$e_1 e_2 \rightsquigarrow e_1' e_2$}
\SequentUnaryNoLabel{$e_2 \rightsquigarrow e_2'$}{$v_1 e_2 \rightsquigarrow v_1 e_2'$}
\SequentAxiomNoLabel{$(\lambda x : X . e) v \rightsquigarrow [v / x] e$}
}
The five key properties of \textbf{Weakening}, \textbf{Exchange}, \textbf{Substitution}, \textbf{Progress} and \textbf{Preservation} all hold for these rules.
\newpage
\section{The Curry-Howard Correspondence}
Imagine that we have a program that has type $A \to B$. Then we know that if we have an instance of the type $A$ then we can feed it in to the program and get an instance of the type $B$.
This looks suspiciously like Modus Ponens. That is, if we assume that having an instance of a type is equivalent to proving that the type is 'true', then we have derived the following rule:
\begin{center}
\SequentBinary{}{A}{A $\to$ B}{B}
\end{center}
This shows a mapping between a function of type $A\to B$ and the implication $A' \implies B'$, where $A'$ and $B'$ are the propositions that the types $A$ and $B$ correspond to.
We can also see that, if having an instance of a type is equivalent to proving that the corresponding proposition is 'true', then the type that corresponds to falsehood must be the type that has no instances. We call this type $\bot$.
Continuing in this line of reasoning, we can imagine Conjunction as product types, since we must have evidence for both elements to generate the product, and Disjunction as unions, since we only need evidence for one of the elements to generate the union.
It is also useful to establish the concept of a normal form for a proof. For example, take the following expression:
\inlineeq{
\lambda x:A.\lambda y:A\to B.yx
}
This expression has the type $A \to (A \to B) \to B$
We could also consider a slightly more complicated function:
\inlineeq{
\lambda x:A.\lambda y:A\to B.((\lambda c:A\to B.c)y)((\lambda b:A.b)x)
}
This expression also has the type $A \to (A \to B) \to B$
Therefore both expressions are an instance of the type corresponding to Modus Ponens, so both functions can be interpreted as proofs for the same thing.
However if you study the expressions for a bit you'll notice that there is some fluff in the second expression that will be evaluated away immediately: $(\lambda b:A.b)x \rightsquigarrow [x/b]b = x$. In fact, the second expression will reduce down to the first.
You'll also notice that the first expression is a value. That is, it cannot be reduced any further. Therefore we can assert that the first expression is a normal form, and proof normalisation is equivalent to stepping the expression to a value.
\newpage
This gives us the following table:
\begin{center}
\begin{tabular}{ c|c} \;
Logic & Programming \\
\hline
Formulas & Types \\
Proofs & Programs \\
Truth & Unit \\
Falsehood & Empty Type \\
Conjunction & Products \\
Disjunction & Unions \\
Implication & Functions \\
Normal Form & Value \\
Proof Normalization & Evaluation \\
Normalization Strategy & Evaluation Order \\
\end{tabular}
\end{center}
We can construct an expression of Negation using our idea of Falsehood, of $\bot$. Since a type being false corresponds to there being no object that inhabits the type, then proving that a statement is false is equivalent to proving that if the statement were true, we would be able to inhabit $\bot$.
Therefore the logical statement $\lnot A$ equates to the type $A \to \bot$.
Interestingly, we find that we cannot prove some statements within this system that we would normally take as valid. For example, we cannot construct a function of the following type:
\inlineeq{((A \to \bot) \to \bot) \to A}
Which means that, within this system of proving statements by finding instances of types, we cannot prove that $\lnot \lnot A \implies A$ (Double Negation Elimination). We also find that we cannot prove $A \lor \lnot A$ (Law of Excluded Middle).
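The other direction does hold, though: we can write a term witnessing $A \implies \lnot \lnot A$, namely
\inlineeq{
\lambda x : A . \lambda k : A \to \bot . \; k \, x \; : \; A \to ((A \to \bot ) \to \bot )
}
so intuitionistic logic can introduce double negations, it just cannot eliminate them.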
Overall, the set of logical statements that we are able to prove with the Curry-Howard Correspondence does not contain every logical statement that we can prove to be true in Classical Logic. This system of logic is instead called Intuitionistic Propositional Logic, and is in fact Classical Logic without DNE (equivalently, without the Law of Excluded Middle) as an axiom.
\newpage
\section{Not Halting and Falsehood}
We said before that there should be no way to generate an instance of $\bot$, as it should equate to falsehood and since any instance of a type equates to the type being true, then $\bot$ should have no instances. If we were able to generate a proof (i.e an example program) which suggests from its type that we can generate a term of type $\bot$ then our logic is inconsistent.
Since we know that there is no value of type $\bot$, and also that every well-typed program in STLC is either a value or can take a step of progress, we know that the only hope we have of creating a term of type $\bot$ is a term that loops forever and never halts.
However if we try to create a function which loops forever (and so could be given type $\bot$) we find that we cannot type it. For example the infinite looping function $\Omega$:
\inlineeq{
\Omega = (\lambda x. x x)(\lambda x. x x)
}
We know that $\Omega$ loops forever, but what would its type be? Well if we try to start figuring it out, we can see that for each half of $\Omega$, the parameter $x$ must have a type. This type must be a function, since $x$ is applied to something, and the parameter of the function must be the type of $x$. So we get that $x: A\to B$ and also that $A=A \to B$. This infinite type is not allowed within STLC so $\Omega$ is not a well-typed expression.
This is valid intuition as to why STLC cannot loop forever (and so we cannot build expressions that have type $\bot$), but for a proof we need to be more rigorous.
\newpage
\section{A Proof of Termination}
So far in this course we have used structural induction to prove most of the properties that we want to hold. Unfortunately, as seen in the slides, proving termination using structural induction is not possible. This is because the result of a function application may itself be a function application, and so we cannot use the inductive hypothesis to assume that the result of function application terminates while we are still in the process of proving whether function application terminates!
Instead we first build a collection of sets of terms named \textit{Halt}, where the following hold:
\begin{itemize}
\item $Halt_0 = \emptyset $
i.e. for all $e$, $e \notin Halt_0$
\item $e \in Halt_1$ when $e$ halts
\item $e \in Halt_{X \to Y}$ when:
\begin{itemize}
\item $e \in Halt_1$ (i.e. $e$ halts)
\item $\forall e' \in Halt_X. (e e') \in Halt_Y$
\end{itemize}
\end{itemize}
We can therefore read a set \textit{Halt} in the following way:
\begin{itemize}
\item $Halt_1$ is the set of expressions that halt.
\item $Halt_{1 \to 1}$ is the set of expressions that halt when applied to an expression that also halts.
\item $Halt_{(1 \to 1) \to 1}$ is the set of expressions that halt when applied to an expression $e'$ that halts when applied to an expression that halts.
\item $Halt_{1 \to (1 \to 1)}$ is the set of expressions that, when applied to an expression that halts, result in an expression which preserves halting.
\item $Halt_{(1 \to 1) \to (1 \to 1)}$ is the set of expressions that, when applied to an expression which preserves halting, result in an expression which preserves halting.
\end{itemize}
and so on.
So, essentially, for all expressions in a language to always halt, every expression $e$ in that language must be in $Halt_1$. This is not a property that we will attempt to prove directly, since we only really care whether a term of type $\bot$ can be formed, and that can be settled using the fundamental lemma.
\newpage
\section{Closure Lemma}
Before we prove the \textit{fundamental lemma} we first prove the \textit{closure lemma}. That is:
\inlineeq{
e \rightsquigarrow e' \implies (e' \in Halt_X \iff e \in Halt_X)
}
Or, in English: if one expression steps to another, then either both are in $Halt_X$ or neither is; a single step neither creates nor destroys the halting behaviour described by $X$.
We prove the statement by induction on $X$ and this can be seen in the lecture slides.
\section{Fundamental Lemma}
The fundamental lemma is as follows:
\inlineeq{
x_1 : X_1, \ldots, x_n : X_n \vdash e : Z\\
\quad \land \;\; \forall i \in \{1, \ldots, n\}. \; (\cdot \vdash v_i : X_i) \land (v_i \in Halt_{X_i}) \\
\implies [v_1/x_1, \ldots, v_n/x_n]e \in Halt_Z
}
That is, if we have an expression $e$ which has type $Z$ and free variables $x_1$ to $x_n$ with types $X_1$ to $X_n$, and we can show that there are values $v_1$ to $v_n$ with types $X_1$ to $X_n$ that all preserve halting as described by their respective type $X_i$, then if we substitute all of these values in to the variables in $e$ we get an expression that preserves halting as described by the type $Z$.
(Recall from the closure lemma that haltingness is invariant under the $\rightsquigarrow$ relation; that fact is what lets the proof move freely between an expression and the terms it steps to.)
This is interesting because we get the halting property for $e$ seemingly from nowhere. The clever part is that when we apply an expression to a value, we add the value to the set of substitutions, and so we can refer back to the inductive hypothesis (step 8 for $Case \to l$).
And from this we can prove \textbf{Consistency}:
\inlineeq{
\textrm{There are no terms }\cdot \vdash e : 0
}
\begin{enumerate}
\item Assume $\cdot \vdash e : 0$
\item $e \in Halt_0$ by Fundamental lemma
\item $Halt_0 = \emptyset $ by definition
\end{enumerate}
which gives a contradiction.
\newpage
\section{The Halting Problem}
Since every closed program reduces to a value, and there are no values of empty type, there are no programs of the empty type. But the only programs of the empty type are the ones that do not halt. So have we avoided the halting problem?
\inlineeq{
e \; \textrm{well-formed} \land \cdot \vdash e : \tau \implies e \; \textrm{Halts}\\
e \in \textrm{Simply Typed Lambda Calculus} \implies e \; \textrm{Halts}
}
The thing to notice is that this isn't a bi-implication! There are programs within LC that do halt but are not accepted by STLC, e.g. \textit{ack}. So then how can we make STLC stronger?
\section{Loops}
We know from Foundations of Computer Science that while loops can be represented using unbounded recursion. Unfortunately, adding unbounded recursion runs us straight into the issue that we were trying to avoid above; we can construct terms that loop forever and so typecheck to $0$. For example:
\inlineeq{
\cdot \vdash (fun_{1\to 0} f x. f x) \langle \rangle : 0
}
However we do know that recursion with a base case that will be hit, or a for loop with a bounded upper bound, will always terminate as long as the code being run within that loop also terminates. This leads us to adding bounded recursion, and inventing Gödel’s T.
\newpage
\section{Gödel’s T}
Gödel’s T begins as STLC, but we add integers and bounded iteration over those integers. This gives us the following grammar:
\inlineeq{
\begin{split}
X ::= 1 \; &| \; X \times Y \; | \; 0 \; | \; X + Y \; | \; X \to Y \; | \; \mathbb{N} \\
e ::= x \; &| \; \langle \rangle \; | \; \langle e, e'\rangle \; | \; \textrm{fst} \; e \; | \; \textrm{snd} \; e \; | \; \textrm{abort} \; e \\
&| \; L e \; | \; R e \; | \; \textrm{case}(e, L x \to e', R y \to e'') \\
&| \; \lambda x : X. e \; | \; e e' \\
&| \; z \; | \; s(e) \; | \; \textrm{iter}(e, z \to e', s(x) \to e'') \\
\Gamma ::= \cdot \; &| \; \Gamma, x : X
\end{split}
}
And the following rules \textbf{in addition to the ones within STLC}:
\SequentBox{
\SequentAxiom{$\mathbb{N}I_z$}{$\Gamma \vdash z : \mathbb{N}$}
\SequentUnary{$\mathbb{N}I_s$}{$\Gamma \vdash e : \mathbb{N}$}{$\Gamma \vdash s(e) : \mathbb{N}$}
\SequentTrinary{$\mathbb{N}E$}{$\Gamma \vdash e_0 : \mathbb{N}$}{$\Gamma \vdash e_1 : X$}{$\Gamma, x:X \vdash e_2 : X$}{$\Gamma \vdash \textrm{iter}(e_0, z \to e_1, s(x) \to e_2) : X$}
\SequentUnaryNoLabel{$e_0 \rightsquigarrow e_0'$}{$\textrm{iter}(e_0, z \to e_1, s(x) \to e_2) \rightsquigarrow \textrm{iter}(e_0', z \to e_1, s(x) \to e_2)$}
\SequentAxiomNoLabel{$\textrm{iter}(z, z \to e_1, s(x) \to e_2) \rightsquigarrow e_1$}
\SequentAxiomNoLabel{$\textrm{iter}(s(v), z \to e_1, s(x) \to e_2) \rightsquigarrow [\textrm{iter}(v, z \to e_1, s(x) \to e_2)/x]e_2$}
}
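As an example of programming with $\textrm{iter}$, one natural way to define addition is
\inlineeq{
\textrm{plus} \triangleq \lambda m : \mathbb{N} . \lambda n : \mathbb{N} . \; \textrm{iter}(m, z \to n, s(x) \to s(x)) \; : \; \mathbb{N} \to \mathbb{N} \to \mathbb{N}
}
which applies the successor $s$ to $n$ once for every $s$ in $m$, i.e. computes $m + n$.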
We can see that the above language is at least as powerful as primitive recursion; in fact it is a little bit more powerful than that. However, it can't be as powerful as partial recursion, since:
\begin{enumerate}
\item we have still preserved our property of every well-typed expression halting
\item being as powerful as partial recursion would mean the ability to compute all computable functions
\end{enumerate}
Therefore being as powerful as partial recursion would violate The Halting Problem. But it sits somewhere in-between. For example, \textit{ack} is computable with Gödel’s T but not by primitive recursion.
\newpage
\section{More Data Structures}
We have just added integers within Gödel’s T, but we might want to go further and add some more useful structures like lists.
The naive approach is to add these structures to our language through sequents:
\SequentBox{
\SequentAxiom{ListNil}{$\Gamma \vdash [] : \textrm{list} \; X$}
\SequentBinary{ListCons}{$\Gamma \vdash e : X$}{$\Gamma \vdash e' : \textrm{list} \; X$}{$\Gamma \vdash e :: e' : \textrm{list} \; X$}
\SequentTrinary{ListFold}{$\Gamma \vdash e_0 : \textrm{list} \; X$}{$\Gamma \vdash e_1 : Z$}{$\Gamma, x:X, r:Z \vdash e_2:Z$}{$\Gamma \vdash \textrm{fold}(e_0, [] \to e_1, x::r \to e_2 ): Z$}
\SequentUnaryNoLabel{$e_0 \rightsquigarrow e_0'$}{$e_0 :: e_1 \rightsquigarrow e_0' :: e_1$}
\SequentUnaryNoLabel{$e_1 \rightsquigarrow e_1'$}{$v_0 :: e_1 \rightsquigarrow v_0 :: e_1'$}
\SequentUnaryNoLabel{$e_0 \rightsquigarrow e_0'$}{$\textrm{fold}(e_0, [] \to e_1, x::r \to e_2 ) \rightsquigarrow \textrm{fold}(e_0', [] \to e_1, x::r \to e_2 )$}
\SequentAxiomNoLabel{$\textrm{fold}([], [] \to e_1, x::r \to e_2 ) \rightsquigarrow e_1$}
\SequentUnaryNoLabel{$R \triangleq \textrm{fold}(v', [] \to e_1, x::r \to e_2 )$}{$\textrm{fold}(v::v', [] \to e_1, x::r \to e_2 ) \rightsquigarrow [v/x, R/r] e_2$}
}
However the biggest issue here is that now when we define functions over these lists within this Typed Lambda Calculus we must define the type of the list within the function signature. For example, consider the map function:
\inlineeq{
\lambda f:A \to B. \lambda xs: \textrm{List} \; A.\\ \; \textrm{fold}(xs, [] \to [], x :: r \to (f x) :: r)
}
We must fix the specific $A$ and $B$ that this map function will use at the time of definition, even though the same definition would work for any choice of element types.
The solution is polymorphism.
\newpage
\section{Polymorphic Lambda Calculus (AKA System F)}
To add polymorphism, we need to extend our type representation to be able to represent type variables, i.e. types that have not yet been filled in. At this stage, we only want to think about polymorphism that accepts all types, i.e. universal quantification over types. This is because universal quantification is the one that solves our polymorphic map issue above: we don't care about the type of the elements in our list; as long as the function we apply matches the element type, the map function will work. Later we will see the concept of existential quantification, but it serves a different purpose.
\inlineeq{
\textrm{Types} \; A ::= \alpha \; | \; A \to B \; | \; \forall \alpha . A
}
You might notice that we have removed both the unit and empty types. We will show later that we can represent data using polymorphism alone, without base data types like unit.
We also need to extend our terms definition since we need to be able to now also abstract over types. We do this by a sort of lambda abstraction, just like we do in regular basic lambda calculus over variables. These lambda abstractions, written with a big lambda $\Lambda$, take a type rather than a value to substitute in.
\inlineeq{
\textrm{Terms} \; e ::= x \; | \; \lambda x : A. e \; | \; e e' \; | \; \Lambda \alpha . e \; | \; e A
}
This gives us some power to represent the map function that we wanted:
\inlineeq{
map : \forall \alpha . \forall \beta . (\alpha \to \beta ) \to \textrm{list} \; \alpha \to \textrm{list} \; \beta \\
map = \Lambda \alpha.\Lambda \beta. \lambda f:\alpha \to \beta. \lambda xs: \textrm{List} \; \alpha.\\ \; \textrm{fold}(xs, [] \to [], x :: r \to (f x) :: r)
}
Well-formedness of types for these expressions is more tricky since we need to keep track of what type variables are currently being abstracted over - a type is not valid if it contains $\alpha$ while $\alpha$ is not currently abstracted over.
We introduce a type context $\Theta$ that holds all of the currently abstracted-over variables. From this set we can deduce if a type is well-formed. We can think of this as adding an extra step: it is no longer good enough to type an expression $e:T$, we must also ensure that $\Theta \vdash T \; \textrm{type}$.
\newpage
The rules for checking $\Theta$ are as follows:
\SequentBox{
\begin{spacing}{1.0}
\inlineeq{
\begin{split}
\textrm{Type Contexts} \; \Theta &::= \cdot \; | \; \Theta, \alpha \\
\textrm{Term Contexts} \; \Gamma &::= \cdot \; | \; \Gamma, x:A\\
\end{split}
}
\end{spacing}
\vspace{20px}
\SequentUnaryNoLabel{$\alpha \in \Theta$}{$\Theta \vdash \alpha \; \textrm{type}$}
\SequentBinaryNoLabel{$\Theta \vdash A \; \textrm{type}$}{$\Theta \vdash B \; \textrm{type}$}{$\Theta \vdash A \to B \; \textrm{type}$}
\SequentUnaryNoLabel{$\Theta, \alpha \vdash A \; \textrm{type}$}{$\Theta \vdash \forall \alpha.A \; \textrm{type}$}
\SequentUnaryNoLabel{$x:A \in \Gamma$}{$\Theta; \Gamma \vdash x : A$}
\SequentBinaryNoLabel{$\Theta \vdash A \; \textrm{type}$}{$\Theta ; \Gamma, x : A \vdash e : B$}{$\Theta ; \Gamma \vdash \lambda x : A. e : A \to B$}
\SequentBinaryNoLabel{$\Theta ; \Gamma \vdash e : A \to B$}{$\Theta ; \Gamma \vdash e' : A$}{$\Theta ; \Gamma \vdash e e' : B$}
\SequentUnaryNoLabel{$\Theta, \alpha ; \Gamma \vdash e : B$}{$\Theta ; \Gamma \vdash \Lambda \alpha . e : \forall \alpha . B$}
\SequentBinaryNoLabel{$\Theta ; \Gamma \vdash e : \forall \alpha . B$}{$\Theta \vdash A \; \textrm{type}$}{$\Theta ; \Gamma \vdash e A : [A / \alpha] B$}
}
Note the presence of substitution in the typing rules! In regular Lambda Calculus we substitute in the operational semantics when $\lambda$ terms are applied to values, whereas in PLC we substitute in the typing rules when the $\Lambda$ terms are applied to types.
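A tiny example of these rules in action is the polymorphic identity function:
\inlineeq{
\cdot ; \cdot \vdash \Lambda \alpha . \lambda x : \alpha . x \; : \; \forall \alpha . \alpha \to \alpha
}
and for any well-formed type $A$, the type application rule gives $(\Lambda \alpha . \lambda x : \alpha . x) \, A : [A / \alpha ](\alpha \to \alpha ) = A \to A$.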
For these rules, we'd like to ensure that \textbf{Weakening}, \textbf{Exchange} and \textbf{Substitution} apply to the type well-formedness $\Theta$ rules:
\begin{enumerate}
\item (\textbf{Type Weakening})
$\Theta, \Theta' \vdash A \; \textrm{type} \implies \Theta, \beta, \Theta' \vdash A \; \textrm{type}$
\item (\textbf{Type Exchange})
$\Theta, \beta, \gamma, \Theta' \vdash A \; \textrm{type} \implies \Theta, \gamma, \beta, \Theta' \vdash A \; \textrm{type}$
\item (\textbf{Type Substitution})
$(\Theta \vdash A \; \textrm{type}) \land (\Theta, \alpha \vdash B \; \textrm{type}) \implies
\Theta \vdash [A/\alpha ]B \; \textrm{type}$
\end{enumerate}
These all follow essentially the same format as \textbf{Weakening}, \textbf{Exchange} and \textbf{Substitution} for the $\Gamma$ well-typedness equivalents, both in the way they are presented above and the inductive proofs.
\newpage
We can lift these up a level and say that an entire context is well formed if each type in the context is well formed under $\Theta$:
\SequentBox{
\SequentAxiomNoLabel{$\Theta \vdash \cdot \; \textrm{ctx}$}
\SequentBinaryNoLabel{$\Theta \vdash \Gamma \; \textrm{ctx}$}{$\Theta \vdash \tau \; \textrm{type}$}{$\Theta \vdash \Gamma, x : \tau \; \textrm{ctx}$}
}
A well-formed context has the three key properties:
\begin{enumerate}
\item (\textbf{Context Weakening})
$\Theta, \Theta' \vdash \Gamma \; \textrm{ctx} \implies \Theta, \beta, \Theta' \vdash \Gamma \; \textrm{ctx}$
\item (\textbf{Context Exchange})
$\Theta, \beta, \gamma, \Theta' \vdash \Gamma \; \textrm{ctx} \implies \Theta, \gamma, \beta, \Theta' \vdash \Gamma \; \textrm{ctx}$
\item (\textbf{Context Substitution})
$(\Theta \vdash A \; \textrm{type}) \land (\Theta, \alpha \vdash \Gamma \; \textrm{ctx}) \implies
\Theta \vdash [A/\alpha ]\Gamma \; \textrm{ctx}$
\end{enumerate}
These properties are all proven by a straightforward induction on the size of $\Gamma$.
We also have a property of \textbf{Regularity}:
\inlineeq{
(\Theta \vdash \Gamma \; \textrm{ctx}) \land (\Theta ; \Gamma \vdash e : A) \implies \Theta \vdash A \; \textrm{type}
}
In English, this just says if typechecking succeeds, and the context is well-formed, then we found a well-formed type.
We also still want to prove \textbf{Weakening}, \textbf{Exchange} and \textbf{Substitution} for terms, not just types and contexts. This gives us six more variants, since the hypothesis being weakened, exchanged or substituted can live in either $\Theta$ or $\Gamma$ in the antecedent:
\begin{enumerate}
\item (\textbf{Type Weakening of Terms})
$(\Theta, \Theta' \vdash \Gamma \; \textrm{ctx}) \land (\Theta, \Theta '; \Gamma \vdash e : A) \implies \Theta, \alpha, \Theta '; \Gamma \vdash e : A$
\item (\textbf{Type Exchange of Terms})
$(\Theta, \alpha, \beta, \Theta' \vdash \Gamma \; \textrm{ctx}) \land
(\Theta, \alpha, \beta, \Theta '; \Gamma \vdash e : A) \implies \Theta, \beta, \alpha, \Theta '; \Gamma \vdash e : A$
\item (\textbf{Type Substitution of Terms})
$(\Theta, \alpha \vdash \Gamma \; \textrm{ctx}) \land (\Theta \vdash A \; \textrm{type})
\land (\Theta, \alpha ; \Gamma \vdash e : B) \implies \Theta ; [A/\alpha ]\Gamma \vdash [A/\alpha ]e : [A/\alpha ]B$
\item (\textbf{Weakening of Terms})
$(\Theta \vdash \Gamma, \Gamma' \; \textrm{ctx}) \land (\Theta \vdash B \; \textrm{type}) \land (\Theta ; \Gamma, \Gamma' \vdash e : A) \implies \Theta ; \Gamma, y: B, \Gamma' \vdash e : A$
\item (\textbf{Exchange of Terms})
$(\Theta \vdash \Gamma, y : B, z : C, \Gamma' \; \textrm{ctx}) \land
(\Theta ; \Gamma, y : B, z : C, \Gamma' \vdash e : A) \implies \Theta ; \Gamma, z : C, y : B, \Gamma' \vdash e : A$
\item (\textbf{Substitution of Terms})
$(\Theta \vdash \Gamma, x : A \; \textrm{ctx}) \land (\Theta ; \Gamma \vdash e : A)
\land (\Theta ; \Gamma, x : A \vdash e' : B) \implies \Theta ; \Gamma \vdash [e/x]e' : B$
\end{enumerate}
This brings us to a total of 4 variants of \textbf{Weakening}, \textbf{Exchange} and \textbf{Substitution}, plus a single \textbf{Regularity} rule. To prove some of the rules we have to assume well-formedness conditions, but the proofs are all otherwise similar to STLC.
And then the operational semantics:
\SequentBox{
$\textrm{Values} \; v ::= \lambda x : A. e \; | \; \Lambda \alpha . e$
\SequentUnary{CongFun}{$e_0 \rightsquigarrow e_0'$}{$e_0 e_1 \rightsquigarrow e_0' e_1$}
\SequentUnary{CongFunArg}{$e_1 \rightsquigarrow e_1'$}{$v_0 e_1 \rightsquigarrow v_0 e_1'$}
\SequentAxiom{FunEval}{$(\lambda x : X . e) v \rightsquigarrow [v / x] e$}
\SequentUnary{CongForAll}{$e \rightsquigarrow e'$}{$e A \rightsquigarrow e' A$}
\SequentAxiom{ForAllEval}{$(\Lambda \alpha . e) A \rightsquigarrow [A / \alpha] e$}
}
The first three sequents are ripped straight from STLC, and the last two are added just to deal with our new type lambdas. I find it interesting that we only need these five operational semantics rules, rather than the fifteen in STLC.
And as per usual we want the two type safety rules:
\begin{enumerate}
\item (\textbf{Progress})
$\cdot ; \cdot \vdash e : \tau \implies (e \in v) \lor (\exists e'. e \rightsquigarrow e')$
\item (\textbf{Preservation})
$(\cdot ; \cdot \vdash e : \tau) \land (e \rightsquigarrow e') \implies \cdot ; \cdot \vdash e' : \tau$
\end{enumerate}
But where did the data go?
\newpage
\section{Data and System F}
The idea, discovered by Alonzo Church in 1941, rests on the following observations:
\begin{enumerate}
\item Data is used to make choices
\item Based on the choice, you perform different results
\item So we can encode data as functions which take different possible results, and return the right one
\end{enumerate}
For an example, take a boolean. The only reason that a boolean is useful is if we can run an if statement on it. There is no use being able to store a boolean if we can't print it out, run a branching statement or modify other data based on the boolean's value.
We can imagine then that a boolean and a function which either executes the first branch or the second branch are equivalent. That is, if the only way that we can use a boolean is to either do a first thing or a second thing, then we may as well just encode the boolean as doing either the first or the second thing.
This gives us a type for a boolean of $\forall \alpha . \alpha \to \alpha \to \alpha $. Or in English, if you give me two $\alpha$ values, the boolean will either give the first value or the second value.
We can then encode true as $\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . x$, getting the first value, and false as $\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . y$, getting the second value.
We also get the if statement for free:
\inlineeq{
\textrm{if} \; e \; \textrm{then} \; e' \; \textrm{else} \; e'' : X \implies e X e' e''
}
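We can check that this does the right thing for the true case: taking $e$ to be the encoding of true, and assuming the branches are already values $v'$ and $v''$, the operational semantics give
\inlineeq{
(\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . x) \, X \, v' \, v'' \rightsquigarrow (\lambda x : X . \lambda y : X . x) \, v' \, v'' \rightsquigarrow (\lambda y : X . v') \, v'' \rightsquigarrow v'
}
which is exactly the `then' branch.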
Another way to see this is from a functional point of view. In functional languages we define new types as tagged unions. These tagged unions can then be acted on using a match statement. Since the only way that a tagged union can be interrogated is this match statement, we can combine the two and encode the tagged union \textit{as} the match statement, that takes a set of values and gives back the value that the tagged union contains.
\begin{lstlisting}[language=ML]
type bool = True | False;;
let encode (b: bool) (t: 'a) (f: 'a) =
  match b with
  | True -> t
  | False -> f;;
(* true/false are the built-in booleans, so we name our encodings tru and fls *)
let tru : 'a -> 'a -> 'a = fun t f -> encode True t f;;
let fls : 'a -> 'a -> 'a = fun t f -> encode False t f;;
\end{lstlisting}
We see that \textit{tru} and \textit{fls} both have type $\forall \alpha . \alpha \to \alpha \to \alpha $ as expected.
When we have a tagged union that contains some data, we must make sure that the encoded data is managed. For example, imagine a tagged union that holds either an X or a Y:
\begin{lstlisting}[language=ML]
type union = L of x | R of y;;
\end{lstlisting}
Then we need to manage where these values go when passed to the match function. The answer is to enforce that the values passed to the match function are functions that take X or Y:
\begin{lstlisting}[language=ML]
let encode (u: union) (l: x -> 'a) (r: y -> 'a) =
match u with
| L(v) -> l v
| R(v) -> r v;;
let left: (x -> 'a) -> (y -> 'a) -> 'a = encode (L x1);;
let right: (x -> 'a) -> (y -> 'a) -> 'a = encode (R y1);;
\end{lstlisting}
And from these types we can read off our encodings of unions within PLC:
\inlineeq{
\begin{split}
X + Y &\implies \forall \alpha . (X \to \alpha ) \to (Y \to \alpha ) \to \alpha \\
L e &\implies \Lambda \alpha . \lambda f : X \to \alpha . \lambda g : Y \to \alpha . f e\\
R e &\implies \Lambda \alpha . \lambda f : X \to \alpha . \lambda g : Y \to \alpha . g e
\end{split}
}
The Case statement is encoded with a bit more fluff but follows the same idea:
\inlineeq{
\textrm{case}(e, L x \to e_1, R y \to e_2) : Z \implies e \, Z \, (\lambda x : X. \; e_1) \, (\lambda y : Y. \; e_2)
}
We can encode product types in the same way, except far easier since we don't actually have a choice in the types stored:
\begin{lstlisting}[language=ML]
type prod = Some of (x * y);
\end{lstlisting}
Which gives us the following:
\inlineeq{
\begin{split}
X \times Y &\implies \forall \alpha . (X \to Y \to \alpha ) \to \alpha \\
\langle e, e'\rangle &\implies \Lambda \alpha . \lambda k : X \to Y \to \alpha . k e e'\\
\textrm{fst} \; e &\implies e X (\lambda x : X. \lambda y : Y. x)\\
\textrm{snd} \; e &\implies e Y (\lambda x : X. \lambda y : Y. y)
\end{split}
}
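Again we can sanity-check the encoding: for values $v$ and $v'$,
\inlineeq{
\textrm{fst} \; \langle v, v'\rangle \implies (\Lambda \alpha . \lambda k : X \to Y \to \alpha . k \, v \, v') \, X \, (\lambda x : X. \lambda y : Y. x) \rightsquigarrow^* v
}
so the first projection really does return the first component.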
Where this becomes more complicated is when the data that we add to the tagged union is recursive. For example, imagine a representation of integers. Then we might have the following type:
\begin{lstlisting}[language=ML]
type int = Zero | Succ of int;;
\end{lstlisting}
Then the match statement for the Succ branch must take the variable stored, meaning the Succ value must be a function that takes an int:
\begin{lstlisting}[language=ML]
let encode (b: int) (z: 'a) (s: int -> 'a) =
match b with
| Zero -> z
| Succ(i) -> s i;;
\end{lstlisting}
This doesn't really help us to encode int since we still need to know the encoding of an int within our definition for the type of s.
The trick is to push the recursion into the encoding function itself, as follows:
\begin{lstlisting}[language=ML]
let rec encode (b: int) (z: 'a) (s: 'a -> 'a) =
match b with
| Zero -> z
| Succ(i) -> s (encode i z s);;
\end{lstlisting}
This gives us the following encodings:
\inlineeq{
\begin{split}
N &\implies \forall \alpha . \alpha \to (\alpha \to \alpha ) \to \alpha \\
z &\implies \Lambda \alpha . \lambda z : \alpha . \lambda s : \alpha \to \alpha . z\\
s(e) &\implies \Lambda \alpha . \lambda z : \alpha . \lambda s : \alpha \to \alpha . s (e \alpha z s)\\
\textrm{iter}(e, z \to e_z, s(x) \to e_s) : X &\implies e X e_z (\lambda x : X. e_s)
\end{split}
}
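These encodings can also be played with directly in ML, where the type abstraction is implicit. Here is a small sketch (the names zero, succ, to\_int and two are invented for this example):
\begin{lstlisting}[language=ML]
let zero = fun z s -> z;;
let succ n = fun z s -> s (n z s);;
(* interpret an encoded number by iterating the real successor *)
let to_int n = n 0 (fun x -> x + 1);;
let two = succ (succ zero);;
(* to_int two evaluates to 2 *)
\end{lstlisting}
The type ML infers for a numeral such as \texttt{two} is, up to renaming and the value restriction, \texttt{'a -> ('a -> 'a) -> 'a}, which is exactly the shape of $\forall \alpha . \alpha \to (\alpha \to \alpha ) \to \alpha$ above.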
And we can also do lists recursively too:
\begin{lstlisting}[language=ML]
type 'x list = Nil | Cons of 'x * 'x list;;
let rec encode (l: 'x list) (n: 'a) (c: 'x -> 'a -> 'a) =
  match l with
  | Nil -> n
  | Cons(v, l) -> c v (encode l n c);;
\end{lstlisting}
Which gives the following:
\inlineeq{
\begin{split}
list X &\implies \forall \alpha . \alpha \to (X \to \alpha \to \alpha ) \to \alpha \\
[] &\implies \Lambda \alpha . \lambda n : \alpha . \lambda c : X \to \alpha \to \alpha . n\\
e :: e' &\implies \Lambda \alpha . \lambda n : \alpha . \lambda c : X \to \alpha \to \alpha . c e (e' \alpha n c)
\end{split}
}
And we can define the three main list functions on this representation
\inlineeq{
\textrm{fold}(e, [] \to e_n, x :: r \to e_c) : Z = e \, Z \, e_n \, (\lambda x : X. \lambda r : Z. e_c)\\
\textrm{map}(e, f): \textrm{list} \; Z = \textrm{fold}(e, [] \to [], x::r \to (f \, x) :: r) \\
\textrm{filter}(e, f): \textrm{list} \; X = \textrm{fold}(e, [] \to [], x::r \to \textrm{if}(f \, x, \; x :: r, \; r)) \\
}
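In the same spirit as the numerals sketch above, the list encoding can be tried out in ML too (nil, cons and sum are again invented names):
\begin{lstlisting}[language=ML]
let nil = fun n c -> n;;
let cons x xs = fun n c -> c x (xs n c);;
(* folding is just application: sum the elements of an encoded list *)
let sum xs = xs 0 (fun x acc -> x + acc);;
(* sum (cons 1 (cons 2 nil)) evaluates to 3 *)
\end{lstlisting}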
\newpage
\section{Existential types}
Within modern programming languages, we use polymorphism not only to allow methods to run on any number of types, as we saw in the list example above, but we also use polymorphism to define interfaces behind which we hide implementation. For example, in Java we might define an interface as follows:
\begin{lstlisting}[language=Java]
interface Bool {
public Bool getTrue();
public Bool getFalse();
public <T> T eval(T ifTrue, T ifFalse);
}
\end{lstlisting}
We can then require our argument to be any implementation of this interface, but we do not care about the actual way that the boolean is stored. We might have some obvious implementation backed by a boolean:
\begin{lstlisting}[language=Java]
class BoolBool implements Bool {
boolean value;
private BoolBool(boolean value) {
this.value = value;
}
public Bool getTrue() {
return new BoolBool(true);
}
public Bool getFalse() {
return new BoolBool(false);
}
public <T> T eval(T ifTrue, T ifFalse) {
return value ? ifTrue : ifFalse;
}
}
\end{lstlisting}
\newpage
Or an integer:
\begin{lstlisting}[language=Java]
class IntBool implements Bool {
private int value;
private IntBool(int value) {
this.value = value;
}
public Bool getTrue() {
return new IntBool(1);
}
public Bool getFalse() {
return new IntBool(0);
}
public <T> T eval(T ifTrue, T ifFalse) {
return value == 1 ? ifTrue : ifFalse;
}
}
\end{lstlisting}
Or any number of other implementations. The only important addition, beyond being able to query the value, is being able to construct values of the type in the first place.
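Before looking at the formal rules, it may help to see the same idea in the ML family: a module signature with an abstract type plays the role of the existentially quantified implementation type, and each structure matching the signature is a different `pack'. This is only an analogy sketch; the names BOOL, BoolBool, IntBool and if\_ are invented for it.
\begin{lstlisting}[language=ML]
module type BOOL = sig
  type t
  val tru : t
  val fls : t
  val if_ : t -> 'a -> 'a -> 'a
end;;

module BoolBool : BOOL = struct
  type t = bool
  let tru = true
  let fls = false
  let if_ b x y = if b then x else y
end;;

module IntBool : BOOL = struct
  type t = int
  let tru = 1
  let fls = 0
  let if_ b x y = if b = 1 then x else y
end;;

(* clients cannot tell the two implementations apart *)
let demo (module B : BOOL) = B.if_ B.tru "yes" "no";;
\end{lstlisting}
In both cases demo returns \texttt{"yes"}, and nothing outside the module can observe whether a bool or an int is being used underneath.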
The way that we can capture this concept within our type system is existential types, written $\cdot ; \cdot \vdash e : \exists \alpha . A$ to mean `expression $e$ packages up some concrete implementation type $\alpha$, together with operations described by $A$, without revealing which $\alpha$ was chosen'.
We add the grammar for these implementations as follows:
\SequentBox{
\begin{spacing}{1.0}
\inlineeq{
\begin{split}
\textrm{Types} \; A &::= . . . \; | \; \exists \alpha . A\\
\textrm{Terms} \; e &::= . . . \; | \; \textrm{pack}_{\alpha .B}(A, e) \; | \; \textrm{let pack}(\alpha, x) = e \; \textrm{in} \; e'\\
\textrm{Values} \; v &::= . . . \; | \; \textrm{pack}_{\alpha .B}(A, v)
\end{split}
}
\end{spacing}
\vspace{30px}
\SequentTrinary{$\exists$I}{$\Theta, \alpha \vdash B \; \textrm{type}$}{$\Theta \vdash A \; \textrm{type}$}{$\Theta; \Gamma \vdash e : [A/\alpha]B$}{$\Theta ; \Gamma \vdash \textrm{pack}_{\alpha . B}(A, e) : \exists \alpha . B$}
\SequentTrinary{$\exists$E}{$\Theta ; \Gamma \vdash e : \exists \alpha . A$}{$\Theta, \alpha ; \Gamma, x : A \vdash e' : C$}{$\Theta \vdash C \; \textrm{type}$}{$\Theta ; \Gamma \vdash \textrm{let pack}(\alpha, x) = e \; \textrm{in} \; e' : C$}
}
\newpage
The concept here is that:
\begin{itemize}
\item We can 'pack' both a type $A$ and an expression $e$ together to form an expression of type $\exists \alpha.B$, where $\alpha$ can appear within $B$, as long as $e$ has type $[A/\alpha]B$
\item We can use this packed value in an expression $e'$ which has both a free type $\alpha$ and a free variable $x$ whose type is the body of the existential (the introduction rule above calls this body $B$ while the elimination rule calls it $A$; they play the same role).
\end{itemize}
When we use a packed value in an expression, we want to substitute both the type and the value in at once. This gives us the operational semantics:
\SequentBox{
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{pack}_{\alpha.B}(A, e) \rightsquigarrow \textrm{pack}_{\alpha.B}(A, e')$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\textrm{let pack}(\alpha, x) = e \; \textrm{in} \; t \rightsquigarrow \textrm{let pack}(\alpha, x) = e' \; \textrm{in} \; t$}
\SequentAxiomNoLabel{$\textrm{let pack}(\alpha, x) = \textrm{pack}_{\alpha.B}(A, v) \; \textrm{in} \; e \rightsquigarrow [A/\alpha, v/x]e$}
}
To see how this works in practice, consider our boolean example. A boolean interface needs to provide a true instance, a false instance, and an if statement.
If $\alpha$ encodes a boolean value, then an if statement (taking a boolean and two branches of some result type $\delta$) would have the signature
\inlineeq{
\textrm{if} : \forall \delta . \alpha \to \delta \to \delta \to \delta
}
Therefore the boolean existential type might look something like:
\inlineeq{
\exists \alpha. (\alpha \times \alpha \times (\forall \delta . \alpha \to \delta \to \delta \to \delta))
}
Remember that we encode a pair in System F as $\forall \beta. (X \to Y \to \beta) \to \beta$, and a triple analogously as $\forall \beta. (X \to Y \to Z \to \beta) \to \beta$, so we can expand the tuple:
\inlineeq{
\exists \alpha . \forall \beta. (\alpha \to \alpha \to (\forall \delta . \alpha \to \delta \to \delta \to \delta) \to \beta) \to \beta
}
This gives us the type that we want for the boolean pack, which we can validate by working from the pack.
\newpage
To build a specific pack implementation we need a type and a value. The type that we had for our booleans before was $\forall \alpha . \alpha \to \alpha \to \alpha $. We also have true as $\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . x$ and false as $\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . y$. This means that our implementation \textbf{value} would be:
\inlineeq{
(\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . x, \\\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . y, \\ \Lambda A. \lambda b: \forall \alpha . \alpha \to \alpha \to \alpha . \lambda x: A. \lambda y: A. b x y)
}
AKA a true value, a false value, and an if statement. Using the tuple encoding we get a System F value:
\begin{equation*}
\begin{split}
&\Lambda B.\lambda c: (\forall \alpha . \alpha \to \alpha \to \alpha )\to \\
&\quad(\forall \alpha . \alpha \to \alpha \to \alpha )\to \\
&\quad(\forall \delta. (\forall \alpha . \alpha \to \alpha \to \alpha ) \to \delta \to \delta \to \delta)\\
&\quad\to B. \\
&\qquad c (\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . x) \\
&\qquad(\Lambda \alpha . \lambda x : \alpha . \lambda y : \alpha . y) \\
&\qquad(\Lambda A. \lambda b: \forall \alpha . \alpha \to \alpha \to \alpha . \lambda x: A. \lambda y: A. b x y)
\end{split}
\end{equation*}
Unpleasant, but it's all there. If you want to try to read this and understand where it all comes from, imagine that $c$ is a selector function that can return either its first argument (a value representing \textbf{True}), its second argument (a value representing \textbf{False}) or its third argument (a function representing \textbf{if}). If we type this value we get the following:
\inlineeq{
\forall \beta. ((\forall \alpha . \alpha \to \alpha \to \alpha )\to (\forall \alpha . \alpha \to \alpha \to \alpha )\to \\(\forall \delta . (\forall \alpha . \alpha \to \alpha \to \alpha )\to \delta \to \delta \to \delta) \to \beta) \to \beta
}
Which has a lot of $\forall \alpha . \alpha \to \alpha \to \alpha $ in it. Luckily this is also the type of our boolean implementation, and the idea of the pack expression is that we hide our specific implementation type and value. So we observe that this type is the same as:
\inlineeq{
[\forall \alpha . \alpha \to \alpha \to \alpha /\gamma ](\forall \beta. (\gamma \to \gamma \to \\(\forall \delta . \gamma \to \delta \to \delta \to \delta) \to \beta) \to \beta)
}
And voilà we have arrived at our boolean pack type again.
It is worth noting that we haven't actually extended the computational power of System F since we can encode existential types using type polymorphism:
\begin{equation*}
\begin{split}
\exists \alpha . B &\implies \forall \beta . (\forall \alpha. B \to \beta) \to \beta\\
\textrm{pack}_{\alpha.B}(A, e) &\implies \Lambda \beta . \lambda k : \forall \alpha . B \to \beta . k A e\\
\textrm{let pack}(\alpha, x) = e \; \textrm{in} \; e' : C &\implies e C (\Lambda \alpha . \lambda x : B . e')
\end{split}
\end{equation*}
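As a quick sanity check (this derivation is mine and not spelled out in the notes; $C$ is the type of $e'$, as in the encoding above), the encoding of $\textrm{let pack}(\alpha, x) = \textrm{pack}_{\alpha.B}(A, v) \; \textrm{in} \; e'$ reduces exactly as the direct operational rule says it should:
\begin{equation*}
\begin{split}
&(\Lambda \beta . \lambda k : \forall \alpha . B \to \beta . k A v) \; C \; (\Lambda \alpha . \lambda x : B . e')\\
&\quad\rightsquigarrow (\lambda k : \forall \alpha . B \to C . k A v) \; (\Lambda \alpha . \lambda x : B . e')\\
&\quad\rightsquigarrow (\Lambda \alpha . \lambda x : B . e') \; A \; v\\
&\quad\rightsquigarrow (\lambda x : [A/\alpha]B . [A/\alpha]e') \; v\\
&\quad\rightsquigarrow [A/\alpha, v/x]e'
\end{split}
\end{equation*}
which is precisely the result of the direct rule $\textrm{let pack}(\alpha, x) = \textrm{pack}_{\alpha.B}(A, v) \; \textrm{in} \; e \rightsquigarrow [A/\alpha, v/x]e$.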
\newpage
\section{System F Termination}
We already know that structural and rule induction were non-starters when it came to proving termination within STLC, and we can assume that the same applies here. Instead, System F can be proven to terminate using similar methods to when we proved termination for STLC.
We define a \textit{Semantic Type} to be a similar concept to the \textit{Halt} sets when we were proving termination for STLC - A \textit{Semantic Type $X$} is a set of terms such that:
\begin{itemize}
\item (\textbf{Halting})
If some expression is in the semantic type then it halts
$e \in X \implies \exists v.e \rightsquigarrow^* v$
\item (\textbf{Closure})
If some expression in the semantic type steps to or is stepped to by another expression, that other expression is also in the semantic type
$e \rightsquigarrow e' \implies (e \in X \iff e' \in X)$
\end{itemize}
The reason that we can't use the same \textit{Halt} sets as in STLC is that not every type expression is well formed here (a type may mention type variables that aren't in scope), so we can't simply take a \textit{Halt} set for every type. The solution is to include $\Theta$ and only define Semantic Types over well-formed types.
From the two conditions above, just like our definition of Halt, we can see that each Semantic Type X with at least one expression has an associated type T. This can be seen by the type safety of stepping within the second point - if an expression with type T is in the Semantic Type then the normal form of that expression must also be in the Semantic Type, which means that all expressions that reduce to that normal form must be in the semantic type. We can't use this property to assume that all expressions of a type are in a Semantic Type since we are currently proving that System F actually terminates, so we haven't yet proven that all expressions of a type have a normal form.
Recall that we use $\Theta$ to represent a type variable context. That is a set of type variables $\alpha, \beta, ...$.
We can define $\theta$ to be a specific substitution of the generic types to specific semantic types, i.e.
\inlineeq{\theta ::= \cdot \; | \; (\theta, X/\alpha)}
\newpage
Then we define a function $\llbracket - \rrbracket$ such that:
\inlineeq{\llbracket - \rrbracket \in WellFormedType \to VarInterpretation \to SemanticType}
Since $\theta$ is a mapping from type variables to semantic types, we can define the following:
\inlineeq{\llbracket \Theta \vdash \alpha \; \textrm{Type} \rrbracket = \theta (\alpha)}
We then know that all other valid types follow one of two other forms, either $A \to B$ or $\forall \alpha . B$
The $A \to B$ case is very similar to Halt; $e$ is in $\llbracket \Theta \vdash A \to B \; \textrm{Type} \rrbracket \theta$ if:
\begin{itemize}
\item $e$ halts
\item For all $e'$ in $\llbracket \Theta \vdash A \; \textrm{Type} \rrbracket \theta$ we have that $(e e')$ is in $\llbracket \Theta \vdash B \; \textrm{Type} \rrbracket \theta$
\end{itemize}
That is, applying $e$ to any expression in the interpretation of $A$ yields an expression in the interpretation of $B$, and in particular a halting one. This too is similar to the interpretation of \textit{Halt}.
The new type form of System F is $\forall \alpha . B$; $e$ is in $\llbracket \Theta \vdash \forall \alpha . B \; \textrm{Type} \rrbracket \theta$ if:
\begin{itemize}
\item $e$ halts
\item For all Types $A$ and Semantic Types $X$ we have that $(e A)$ is in
$\llbracket \Theta, \alpha \vdash B \; \textrm{Type} \rrbracket (\theta, X/\alpha )$
\end{itemize}
This can be interpreted as $e$ halting no matter what type it is instantiated with. Note that the type $A$ and semantic type $X$ don't have to be linked. The reason that we can do this is that technically type applications are only used for bookkeeping - they never modify the behaviour of a program. This is similar logic to type erasure in Java in that once we have proven that an expression is well-typed, the types can be stripped away and the program can be run without any of the type information. Therefore we can allow for any type $A$ without checking that the semantic type $X$ corresponds to the same type, since the type $A$ that we choose will not modify whether or not the program halts; only the semantic type $X$ could possibly modify the halting behaviour of the program.
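As a tiny concrete instance (my own, not from the notes), consider the polymorphic identity $\Lambda \alpha . \lambda x : \alpha . x$. It is already a value, so it halts, and for any type $A$ and any Semantic Type $X$ the instantiation $(\Lambda \alpha . \lambda x : \alpha . x) A$ steps to $\lambda x : A . x$. Applying that to any $e' \in X$ gives a term that reduces to the same value as $e'$ does, so by the closure property it lies in $X$. Hence $(\Lambda \alpha . \lambda x : \alpha . x) A \in \llbracket \alpha \vdash \alpha \to \alpha \; \textrm{Type} \rrbracket (\cdot, X/\alpha)$ for every $A$ and $X$, and so the identity is in $\llbracket \cdot \vdash \forall \alpha . \alpha \to \alpha \; \textrm{Type} \rrbracket \cdot$.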
\newpage
We have a few properties that we can prove on these Semantic Types:
\begin{itemize}
\item (\textbf{Closure})
If $\theta$ is an interpretation for $\Theta$, then $\llbracket \Theta \vdash A \; \textrm{type} \rrbracket \theta$ is a semantic type.
\item (\textbf{Exchange})
$\llbracket \Theta, \alpha, \beta, \Theta' \vdash A \; \textrm{type}\rrbracket = \llbracket \Theta, \beta, \alpha, \Theta' \vdash A \; \textrm{type}\rrbracket$
\item (\textbf{Weakening})
If $\Theta \vdash A \; \textrm{type}$, then $\llbracket \Theta, \alpha \vdash A \; \textrm{type}\rrbracket (\theta, X/\alpha) = \llbracket \Theta \vdash A \; \textrm{type}\rrbracket \theta$
\item (\textbf{Substitution})
If $\Theta \vdash A \; \textrm{type}$ and $\Theta, \alpha \vdash B \; \textrm{type}$ then
$\llbracket \Theta \vdash [A/\alpha ]B \; \textrm{type}\rrbracket \theta = \llbracket \Theta, \alpha \vdash B \; \textrm{type}\rrbracket (\theta, \llbracket \Theta \vdash A \; \textrm{type}\rrbracket \theta/\alpha)$
\end{itemize}
These properties can be proven by induction on the three structures of well formed types.
Which leads us to the fundamental lemma: If we have that
\begin{itemize}
\item $\Theta = \alpha_1, ..., \alpha_k$
\item $\Gamma = x_1: A_1, ..., x_n: A_n$
\item $\Theta \vdash \Gamma$ ctx
\item $\Theta; \Gamma \vdash e : B$
\item $\theta$ interprets $\Theta$
\item For all $x_i: A_i$ in $\Gamma$ we have some $e_i \in \llbracket \Theta \vdash A_i \; \textrm{type} \rrbracket \theta$
\end{itemize}
Then we can substitute all free type variables with arbitrary types and all free variables with the expressions and gain the properties given by the semantic type set:
$$[C_1/\alpha_1, ..., C_k/\alpha_k][e_1/x_1, ..., e_n/x_n]e\in \llbracket \Theta \vdash B \; \textrm{type} \rrbracket \theta$$
And so, since all values within System F halt by definition, and every well-typed expression can be obtained by substituting values (and types) into a well-typed term as in the fundamental lemma, all well-typed expressions must be in their respective Semantic Type. And since all expressions in a Semantic Type halt, all well-typed expressions in System F must halt.
\newpage
\section{Second Order Intuitionistic Propositional Logic}
We have just seen that we can introduce both Universal and Existential quantifiers to our type system. This would imply that we can show that Second Order Intuitionistic propositions are true by finding instances of higher order types.
For example, imagine that we have propositions $P(x)$ and $Q(x)$ with corresponding types $\alpha \vdash T_P$ type and $\alpha \vdash T_Q$ type. Note the free type variables $\alpha$ modelling the parameter $x$. Then we may want to prove the following second order statement:
\inlineeq{(\exists x. P(x)) \land (\forall x. (P(x) \implies Q(x))) \implies \exists x. Q(x)}
This has the following type:
\inlineeq{(\exists \alpha. T_P) \to (\forall \alpha. T_P \to T_Q) \to \exists \alpha. T_Q}
We can build the following term in System F with the above type:
\begin{equation*}
\begin{split}
&\lambda a : \exists \alpha. T_P. (\\
&\quad\lambda b : \forall \alpha. T_P \to T_Q. (\\
&\qquad\textrm{let pack}(\alpha, x) = a \; \textrm{in pack}_{\alpha.T_Q}(\alpha, b \alpha x)\\
&\quad)\\
&)
\end{split}
\end{equation*}
We will discuss what this means further when we introduce Classical Logic.
\newpage
\section{State and Stores}
Sometimes it'd be useful to actually harvest output from our code. The most intuitive way to do this is with a store and state, as if modelling a file system or memory block.
Borrowing from IB Semantics we can imagine a language with references to state locations that we can update. We might want to create a fresh reference (akin to malloc) using an instruction like \textbf{new e}, read a reference with \textbf{!e} and update a reference with \textbf{e := e'}.
\centerboxtitled{Small Aside:}{
Something not mentioned on the slides is that we will also need a sequence operator, as we will want to run instructions in sequence while we modify the state. However our operational semantics state that function application only occurs once both the function and the argument are values, so we can encode sequence in the following way:
$$e_1 ; e_2 \implies (\lambda x. e_2) e_1$$
Here $e_1$ must be fully evaluated before being substituted, and then $e_2$ will be fully evaluated and returned.
}
We first define our language grammar:
\begin{equation*}
\begin{split}
\textrm{Types X} &::= 1 \; | \; N \; | \; X \to Y \; | \; \textrm{ref} \; X\\
\textrm{Terms e} &::= \langle \rangle \; | \; n \; | \; \lambda x : X. e \; | \; e e' \; | \; \textrm{new} \; e \; | \; !e \; | \; e := e' \; | \; l\\
\textrm{Stores} \; \sigma &::= \cdot \; | \; \sigma, l : v\\
\textrm{Contexts} \; \Gamma &::= \cdot \; | \; \Gamma, x : X
\end{split}
\end{equation*}
\newpage
We then get the following operational semantics. Note that they are mostly just copied and pasted from STLC, but we keep a state $\sigma$ along for the ride. The only exceptions are the instructions which modify the state:
\SequentBox{
$\textrm{Values v} ::= \langle \rangle \; | \; n |\lambda x : X. e \; | \; l$
\SequentUnaryNoLabel{$\langle \sigma; e_0 \rangle \rightsquigarrow \langle \sigma'; e_0' \rangle$}{$\langle \sigma; e_0 e_1 \rangle \rightsquigarrow \langle \sigma'; e_0' e_1 \rangle$}
\SequentUnaryNoLabel{$\langle \sigma; e_1 \rangle \rightsquigarrow \langle \sigma'; e_1' \rangle$}{$\langle \sigma; v_0 e_1 \rangle \rightsquigarrow \langle \sigma'; v_0 e_1' \rangle$}
\SequentAxiomNoLabel{$\langle \sigma; (\lambda x : X . e) v \rangle \rightsquigarrow \langle \sigma; [v / x] e \rangle$}
\SequentUnaryNoLabel{$\langle \sigma; e \rangle \rightsquigarrow \langle \sigma'; e' \rangle$}{$\langle \sigma; \textrm{new} \; e \rangle \rightsquigarrow \langle \sigma'; \textrm{new} \; e' \rangle$}
\SequentUnaryNoLabel{$\langle \sigma; e \rangle \rightsquigarrow \langle \sigma'; e' \rangle$}{$\langle \sigma; ! e \rangle \rightsquigarrow \langle \sigma'; ! e' \rangle$}
\SequentUnaryNoLabel{$l \notin dom(\sigma)$}{$\langle \sigma; \textrm{new} \; v \rangle \rightsquigarrow \langle (\sigma, l : v); l \rangle$}
\SequentUnaryNoLabel{$l:v \in \sigma$}{$\langle \sigma; !l \rangle \rightsquigarrow \langle \sigma; v \rangle$}
\SequentUnaryNoLabel{$\langle \sigma; e_0 \rangle \rightsquigarrow \langle \sigma'; e_0' \rangle$}{$\langle \sigma; e_0 := e_1 \rangle \rightsquigarrow \langle \sigma'; e_0' := e_1 \rangle$}
\SequentUnaryNoLabel{$\langle \sigma; e_1 \rangle \rightsquigarrow \langle \sigma'; e_1' \rangle$}{$\langle \sigma; v_0 := e_1 \rangle \rightsquigarrow \langle \sigma'; v_0 := e_1' \rangle$}
\SequentAxiomNoLabel{$\langle (\sigma, l:v, \sigma'); l := v' \rangle \rightsquigarrow \langle (\sigma, l:v', \sigma'); \langle \rangle \rangle$}
}
When it comes to typing this language, we need to make sure that we can type the store. We keep this information in an extra variable $\Sigma$ that we pass around our type derivations.
\newpage
The first five typing rules are taken from STLC and $\Sigma$ is just added to the things we keep track of. The last four deal with references, but only the last one actually uses the contents of $\Sigma$.
\SequentBox{
$\textrm{Store Typings} \; \Sigma ::= \cdot \; | \; \Sigma, l : X$
\SequentUnary{HYP}{$x : X \in \Gamma$}{$\Sigma;\Gamma\vdash x : X$}
\SequentAxiom{1l}{$\Sigma;\Gamma\vdash\langle\rangle : 1$}
\SequentAxiom{$\mathbb{N}$l}{$\Sigma;\Gamma\vdash n : \mathbb{N}$}
\SequentUnary{$\to$l}{$\Sigma;\Gamma, x:X \vdash e:Y$}{$\Sigma;\Gamma \vdash \lambda x : X . e : X \to Y$}
\SequentBinary{$\to$E}{$\Sigma;\Gamma \vdash e : X \to Y$}{$\Sigma;\Gamma \vdash e' : X$}{$\Sigma;\Gamma \vdash e e' : Y$}
\SequentUnary{RefL}{$\Sigma ; \Gamma \vdash e : X$}{$\Sigma ; \Gamma \vdash \textrm{new} \; e : \textrm{ref }X$}
\SequentUnary{RefGet}{$\Sigma ; \Gamma \vdash e : \textrm{ref }X$}{$\Sigma ; \Gamma \vdash !e : X$}
\SequentBinary{RefSet}{$\Sigma ; \Gamma \vdash e : \textrm{ref} \; X$}{$\Sigma ; \Gamma \vdash e' : X$}{$\Sigma ; \Gamma \vdash e := e' : 1$}
\SequentUnary{RefBar}{$l:X \in \Sigma$}{$\Sigma ; \Gamma \vdash l : \textrm{ref} \; X$}
}
We now would want to talk about this language having type safety, but to do this we need to be able to describe that the store at any point of execution is well typed. We can't show that an expression is type safe if it's possible for it to load something from the store whose type doesn't match the type the expression expects.
We define that a store is well typed recursively:
\SequentBox{
\SequentAxiom{StoreNil}{$\Sigma \vdash \cdot : \cdot$}
\SequentBinary{StoreCons}{$\Sigma \vdash \sigma' : \Sigma'$}{$\Sigma ; \cdot \vdash v : X$}{$\Sigma \vdash (\sigma', l:v) : (\Sigma', l : X)$}
\SequentBinary{ConfigOK}{$\Sigma \vdash \sigma : \Sigma$}{$\Sigma ; \cdot \vdash e : X$}{$\langle \sigma ; e \rangle : \langle \Sigma ; X \rangle$}
}
\newpage
We can interpret this in the following way:
\begin{itemize}
\item (\textbf{Store Nil})
An empty store is well typed
\item (\textbf{Store Cons})
A store (and its store typing) can be grown by a single location $l$ holding a value $v$ of type $X$, provided it can be shown that $v$ has type $X$
\item (\textbf{Config OK})
A configuration $\langle \sigma ; e \rangle$ is well typed if the store $\sigma$ matches the store typing $\Sigma$ and the expression $e$ is well typed under $\Sigma$
\end{itemize}
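As a small worked instance (my own, not from the notes), take the store $\sigma = (\cdot, l : 3)$ and the store typing $\Sigma = (\cdot, l : \mathbb{N})$. StoreNil gives $\Sigma \vdash \cdot : \cdot$, and $\Sigma ; \cdot \vdash 3 : \mathbb{N}$ by $\mathbb{N}$l, so StoreCons gives $\Sigma \vdash \sigma : \Sigma$. Since $\Sigma ; \cdot \vdash \; !l : \mathbb{N}$ by RefBar followed by RefGet, ConfigOK then gives $\langle \sigma ; !l \rangle : \langle \Sigma ; \mathbb{N} \rangle$.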
Unfortunately if we were to now try to prove type safety naively we'd find ourselves stuck when we try to do structural induction on \textit{new}. This is because the store grows, and so the typing for the store is no longer valid. This leads us to the idea of \textbf{Store Monotonicity}, or that The Store Only Grows.
We define $\Sigma \leq \Sigma'$ to mean there is some other $\Sigma''$ such that $\Sigma' = \Sigma, \Sigma''$. Note that this $\Sigma''$ may just be $\cdot$, hence the less than \textit{or equal to} relation.
We can show that if the store grows then the typing of a store and an expression are still valid. That is,
\begin{equation*}
\begin{split}
\Sigma ; \Gamma \vdash e : X &\implies \Sigma' ; \Gamma \vdash e : X\\
\Sigma \vdash \sigma_0 : \Sigma_0 &\implies \Sigma' \vdash \sigma_0 : \Sigma_0
\end{split}
\end{equation*}
With these rules we can define progress and preservation:
\begin{enumerate}
\item (\textbf{Progress})
Well-typed programs and stores are not stuck: they can always take a step of progress (or are done).
$\langle \sigma; e\rangle : \langle \Sigma; X\rangle \implies (e \in v) \lor (\exists \; \sigma', e'. \langle \sigma; e\rangle \rightsquigarrow \langle \sigma'; e'\rangle)$
\item (\textbf{Preservation})
If a well-typed program and store take a step, the program will stay well-typed and the store will only grow.
$(\langle \sigma ; e \rangle : \langle \Sigma ; X \rangle) \land (\langle \sigma ; e \rangle \rightsquigarrow \langle \sigma' ; e' \rangle) \implies \exists \; \Sigma' \geq \Sigma.\langle \sigma' ; e' \rangle : \langle \Sigma';X\rangle$
\end{enumerate}
\newpage
\section{Accidental Looping}
Using our store, we can store references to functions. This is an issue, because functions can themselves read those references when they are run. This lets us create recursion by creating a function that:
\begin{enumerate}
\item Creates a function that loads a reference and calls the reference
\item Stores the function in a memory location
\item Runs the function with the memory location
\end{enumerate}
This can be illustrated with Landin's Knot:
\begin{lstlisting}
let knot : ((int -> int) -> int -> int) -> int -> int =
fun f ->
let r = ref (fun n -> 0) in
let recur = fun n -> !r n in
let () = r := fun n -> f recur n in recur
\end{lstlisting}
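The same knot can be tied in any language with first-class functions and mutable references. As a hedged illustration (the class \textbf{Knot} and the factorial below are mine, not from the notes), here is the construction in Java, with an \textbf{AtomicReference} playing the role of the \textbf{ref} cell:
\begin{lstlisting}[language=Java]
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;
import java.util.function.UnaryOperator;

class Knot {
	// Landin's knot: a mutable cell holding a function smuggles in recursion.
	static Function<Integer, Integer> knot(
			UnaryOperator<Function<Integer, Integer>> f) {
		// r = ref (fun n -> 0)
		AtomicReference<Function<Integer, Integer>> r =
			new AtomicReference<Function<Integer, Integer>>(n -> 0);
		// recur = fun n -> !r n
		Function<Integer, Integer> recur = n -> r.get().apply(n);
		// r := fun n -> f recur n
		r.set(f.apply(recur));
		return recur;
	}

	public static void main(String[] args) {
		// Factorial with no explicit recursion anywhere in sight.
		Function<Integer, Integer> fact =
			knot(self -> n -> n == 0 ? 1 : n * self.apply(n - 1));
		System.out.println(fact.apply(5)); // prints 120
	}
}
\end{lstlisting}
Running \textbf{fact} produces the expected answers even though no function is declared recursively: the recursion lives entirely in the mutable cell.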
This means that we can build programs that loop forever and so our type system does not result in \textbf{Termination}. This is a shame and it would be nice if we could separate our effectful code from our pure code in a way that prevents infinite loops...
\newpage
\section{Monads}
To stop infinite loops, we need to prevent the formation of structures like Landin's Knot. Landin's Knot was able to be formed because we could both dereference a value and apply that value as a function to an argument within the same program.
Monads introduce two separate types of instructions and two separate environments in which these instructions execute, called Pure and Impure.
Pure terms operate like traditional lambda calculus - we have values, functions and function compositions. The only thing that we add is that impure terms can be passed around like values. We can't interrogate or execute these impure terms within a pure environment, instead they are opaque and remain unevaluated.
Impure terms \textit{have effects}. They may read from or write to a store, perform IO, or read and write from a terminal. They may also execute Pure terms.
This means that Pure terms may not execute their containing Impure terms, but Impure terms may execute their Pure terms.
One interesting result of this is that a program that has a Pure term as its top level structure must not have any effects, since the Pure term cannot initiate execution of an Impure term.
An initial grammar for a monadic language might be as follows:
\begin{equation*}
\begin{split}
\textrm{Types} \; X &::= 1 \; | \; \mathbb{N} \; | \; X \to Y \; | \; \textrm{ref} \; X \; | \; TX\\
\textrm{Pure Terms} \; e &::= \langle \rangle \; | \; n \; | \; \lambda x : X . e \; | \; e e' \; | \; l \; | \; \{t\}\\
\textrm{Impure Terms} \; t &::= \textrm{new} \; e \; | \; !e \; | \; e := e' \; | \; \textrm{let} \; x = e ; t \; | \; \textrm{return }e\\
\textrm{Values} \; v &::= \langle \rangle \; | \; n \; | \; \lambda x : X. e \; | \; l \; | \; \{t\}\\
\textrm{Stores} \; \sigma &::= \cdot \; | \; \sigma, l : v\\
\textrm{Contexts} \; \Gamma &::= \cdot \; | \; \Gamma, x : X\\
\textrm{Store Typings} \; \Sigma &::= \cdot \; | \; \Sigma, l : X
\end{split}
\end{equation*}
The differences to notice are:
\begin{enumerate}
\item We have two new types, $\textrm{ref} \; X$ and $TX$. The first describes the type of a store location, and the second describes the type of the result of an Impure expression, i.e. the Monad type. For example, $3$ has the type $\mathbb{N}$ while $\{\textrm{return} \; 3\}$ has the type $T\mathbb{N}$
\item We have two new Pure terms, $l$ and $\{t\}$. The first represents a location which should be generated by new $e$ under normal circumstances, but should be able to be typechecked so that we can show \textbf{Type Safety} later on. The second is how we embed Impure expressions within pure ones, by wrapping the impure statement in curly braces. This links into one of our new value types, as Impure expressions wrapped in curly braces are values and so when encountered in a Pure context are not reduced at all.
\item We have a new category called Impure terms, which consists of everything that can manipulate the store, as well as the \textbf{let} and \textbf{return} expressions.
We can see the let expression as temporarily exposing the content of a monad. That is, the let expression is a bit like a function of the type $T\alpha \to (\alpha \to T \beta) \to T \beta$, except its second argument isn't given as a function but instead an expression with a free variable.
The \textbf{return} expression is the counterpart to let and allows us to create a new monad, that exposes whatever is returned to the let expression.
One way of viewing these two commands (and monads in general) is as a many-layered object like a burrito or an onion. When an object is wrapped in curly brackets we wrap the monad with an extra layer, obscuring the mess inside. This wrapped object is itself inert and cannot be queried or evaluated. However a \textbf{let} expression allows us to peel back one layer of the monad and access the value inside, so long as the expression that uses the value wraps the result back up at the end.
\end{enumerate}
Since it can be quite hard to read this grammar, here are some examples of programs:
$$(\lambda x: T1 . x)\{\textrm{let} \; x = \{\textrm{new} \; 1\}; x := 2\}$$
This first expression reduces to $\{\textrm{let} \; x = \{\textrm{new} \; 1\}; x := 2\}$ and then halts. This is because an Impure term wrapped in curly braces is a value, said to be a \textbf{Suspended Computation}, and is not itself evaluated. This means that any expression for which the 'top level' component is Pure must have no side effects.
$$\textrm{let} \; y = \{\textrm{return} \; \lambda x: 1. x\}; \textrm{let} \; z = \{\textrm{new} \; y\}; \textrm{return} \; z$$
This second expression displays that we can still store functions in the store. We could even load the function back out of the store and run it. The difference is that the stored function itself cannot load from the store, since functions must be Pure. This is how we avoid the formation of Landin's Knot.
$$\textrm{let} \; x = (\textrm{if True then }\{ \textrm{return} \; 1\}\; \textrm{else }\{ \textrm{return} \; 2\}); \textrm{return} \; x$$
This program illustrates the nesting of Pure and Impure programs. The outer layer is Impure with a \textbf{let} statement, then within this is a Pure \textbf{if} expression, which contains two Impure \textbf{return} statements wrapped up as Suspended Computations.
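To make the $\{t\}$ / \textbf{let} / \textbf{return} trio concrete in a mainstream language, here is a hedged Java sketch of a suspended-computation type (the names \textbf{Suspended}, \textbf{pure}, \textbf{effect}, \textbf{bind} and \textbf{run} are mine, and Java of course cannot stop pure code from calling \textbf{run}; the point is only the shape of the API):
\begin{lstlisting}[language=Java]
import java.util.function.Function;
import java.util.function.Supplier;

// A suspended, effectful computation: the Java stand-in for the type TX.
final class Suspended<A> {
	private final Supplier<A> thunk; // the braces { t }: nothing has run yet
	private Suspended(Supplier<A> thunk) { this.thunk = thunk; }

	// return e : a computation that does nothing except yield e
	static <A> Suspended<A> pure(A value) {
		return new Suspended<A>(() -> value);
	}

	// a raw effect (e.g. reading or writing a reference), wrapped up unevaluated
	static <A> Suspended<A> effect(Supplier<A> effect) {
		return new Suspended<A>(effect);
	}

	// let x = this ; t : peel one layer, use the value, wrap the result back up
	<B> Suspended<B> bind(Function<A, Suspended<B>> t) {
		return new Suspended<B>(() -> t.apply(thunk.get()).thunk.get());
	}

	// only an (impure) top level should ever call this
	A run() {
		return thunk.get();
	}
}
\end{lstlisting}
Here \textbf{pure} plays the role of \textbf{return}, \textbf{bind} plays the role of \textbf{let} (note its $T\alpha \to (\alpha \to T\beta) \to T\beta$ shape), and nothing at all happens until some impure top level finally calls \textbf{run}.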
\newpage
More formally, here are the typing rules. Note the two different forms of typing derivations, using $\Sigma ; \Gamma \vdash e : \tau$ for Pure expressions and $\Sigma ; \Gamma \vdash t \div \tau$ for Impure expressions.
Also notice that HYP, 1l, $\mathbb{N}$l, $\to$l and $\to$E are exactly the same as STLC just with a $\Sigma$ passed through.
\SequentBox{
\SequentUnary{HYP}{$x : X \in \Gamma$}{$\Sigma;\Gamma\vdash x : X$}
\SequentAxiom{1l}{$\Sigma;\Gamma\vdash\langle\rangle : 1$}
\SequentAxiom{$\mathbb{N}$l}{$\Sigma;\Gamma\vdash n : \mathbb{N}$}
\SequentUnary{$\to$l}{$\Sigma;\Gamma, x:X \vdash e:Y$}{$\Sigma;\Gamma \vdash \lambda x : X . e : X \to Y$}
\SequentBinary{$\to$E}{$\Sigma;\Gamma \vdash e : X \to Y$}{$\Sigma;\Gamma \vdash e' : X$}{$\Sigma;\Gamma \vdash e e' : Y$}
\SequentUnary{RefBar}{$l:X \in \Sigma$}{$\Sigma ; \Gamma \vdash l : \textrm{ref }X$}
\SequentUnary{Tl}{$\Sigma ; \Gamma \vdash t \div X$}{$\Sigma ; \Gamma \vdash \{ t \} : TX$}
\SequentUnary{RefL}{$\Sigma ; \Gamma \vdash e : X$}{$\Sigma ; \Gamma \vdash \textrm{new} \; e \div \textrm{ref }X$}
\SequentUnary{RefGet}{$\Sigma ; \Gamma \vdash e : \textrm{ref }X$}{$\Sigma ; \Gamma \vdash !e \div X$}
\SequentBinary{RefSet}{$\Sigma ; \Gamma \vdash e : \textrm{ref} \; X$}{$\Sigma ; \Gamma \vdash e' : X$}{$\Sigma ; \Gamma \vdash e := e' \div 1$}
\SequentUnary{TRet}{$\Sigma ; \Gamma \vdash e : X$}{$\Sigma ; \Gamma \vdash \textrm{return} \; e \div X$}
\SequentBinary{TLet}{$\Sigma ; \Gamma \vdash e : TX$}{$\Sigma ; \Gamma, x:X \vdash t \div Z$}{$\Sigma ; \Gamma \vdash \textrm{let} \; x = e ; t \div Z$}
}
\newpage
We then define our operational semantics on two levels. We have our very basic function application semantics for Pure expressions, and then our operational semantics for Impure expressions which carries with it a store. Note that we could extend our monad system however we wanted to include any other kinds of effects, and these would all be tracked inside this impure semantics.
\SequentBox{
\SequentUnaryNoLabel{$e_0 \rightsquigarrow e_0'$}{$e_0 e_1 \rightsquigarrow e_0' e_1$}
\SequentUnaryNoLabel{$e_1 \rightsquigarrow e_1'$}{$v_0 e_1 \rightsquigarrow v_0 e_1'$}
\SequentAxiomNoLabel{$(\lambda x : X . e) v \rightsquigarrow [v / x] e$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\langle \sigma ; \textrm{new} \; e \rangle \rightsquigarrow \langle \sigma ; \textrm{new} \; e' \rangle$}
\SequentUnaryNoLabel{$l \notin dom(\sigma)$}{$\langle \sigma ; \textrm{new} \; v \rangle \rightsquigarrow \langle (\sigma, l:v);\textrm{return} \; l \rangle$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\langle \sigma ; ! e \rangle \rightsquigarrow \langle \sigma ; ! e' \rangle$}
\SequentUnaryNoLabel{$l : v \in \sigma$}{$\langle \sigma ; ! l \rangle \rightsquigarrow \langle \sigma;\textrm{return} \; v \rangle$}
\SequentUnaryNoLabel{$e_0 \rightsquigarrow e_0'$}{$\langle \sigma ; e_0 := e_1 \rangle \rightsquigarrow \langle \sigma ; e_0' := e_1 \rangle$}
\SequentUnaryNoLabel{$e_1 \rightsquigarrow e_1'$}{$\langle \sigma ; v_0 := e_1 \rangle \rightsquigarrow \langle \sigma ; v_0 := e_1' \rangle$}
\SequentAxiomNoLabel{$\langle ( \sigma, l:v, \sigma');l:=v'\rangle \rightsquigarrow \langle (\sigma, l:v', \sigma');\textrm{return} \; \langle \rangle \rangle$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\langle \sigma ; \textrm{return} \; e \rangle \rightsquigarrow \langle \sigma ; \textrm{return} \; e' \rangle$}
\SequentUnaryNoLabel{$e \rightsquigarrow e'$}{$\langle \sigma ; \textrm{let} \; x = e ; t \rangle \rightsquigarrow \langle \sigma ; \textrm{let} \; x = e' ; t \rangle$}
\SequentAxiomNoLabel{$\langle \sigma ; \textrm{let }x = \{\textrm{return} \; v\} ; t_1\rangle \rightsquigarrow \langle \sigma ; [v/x]t_1\rangle$}
\SequentUnaryNoLabel{$\langle \sigma ; t_0 \rangle \rightsquigarrow \langle \sigma' ; t_0' \rangle$}{$\langle \sigma ; \textrm{let }x = \{ t_0 \} ; t_1 \rangle \rightsquigarrow \langle \sigma' ; \textrm{let} \; x = \{t_0'\} ; t_1 \rangle$}
}
Things that are worth noting here:
\begin{itemize}
\item Since all effectful code is strictly linear, and the only way to deviate from the chosen path of execution is function application which is done on the Pure level, we have preserved our halting properties from before.
\item Also, since all effectful code is strictly linear, the only rule that uses progress of Impure code is the final \textbf{let} rule. This rule propagates the execution step into the argument to be assigned to the variable until the argument is a value. Every other Impure rule either steps a contained Pure argument (and so doesn't change the store), or does some work on itself, possibly changing the store. This demonstrates the separation of Pure and Impure code in that any contained Pure code is not handed the store and cannot change the store, so must be independent of the result of any Impure code.
\item As seen in the lectures, we can see that the Operational Semantics for Pure statements are the first three rules and no more. If we add other effects such as \textbf{print} or \textbf{read} then we may need to change the store that is threaded around the Impure rules, but the Pure part of this system always stays the same.
\end{itemize}
We can prove \textbf{Weakening}, \textbf{Exchange} and \textbf{Substitution} for the context $\Gamma$ for both Pure and Impure typing judgements. Since the proof of each property for Pure expressions requires the property to hold for Impure expressions and vice versa, we prove each property for Pure and Impure expressions at the same time (mutually inductively); structural induction then covers every rule for every possible expression form, and the inductive hypothesis is available for both Pure and Impure expressions. \textbf{Progress} and \textbf{Preservation} can be proven mutually inductively in a similar way, giving us \textbf{Type Safety}.
And as we discussed earlier, we can see that Pure expressions cannot depend on the results of Impure expressions, so Pure expressions must still have the property of \textbf{Termination} that we showed for STLC.
\centerboxtitled{Extra Information:}{
Often, this \textbf{let} expression as described above is instead called \textbf{bind}, and has exactly the signature that we described before:
$$bind: m\alpha \to (\alpha \to m\beta) \to m\beta$$
We also have the \textbf{return} function, which has the type
$$return: \alpha \to m\alpha$$
However there is a simpler function, called \textbf{join}, which takes a monadic value wrapped in two layers of 'black box' and flattens them into one:
$$join: m(m\alpha) \to m\alpha$$
We can represent this join function in our language using two let expressions (here $v$ is the doubly-wrapped value, of type $T(TX)$):
$$\textrm{let} \; x = v; \textrm{let} \; y = x; \textrm{return} \; y$$
}
\newpage
\section{Classical Logic}
We saw with the Curry Howard correspondence that Simply Typed lambda calculus can be used to create instances of types that prove corresponding First Order Intuitionistic Propositional statements. We then saw that we could further introduce Universal and Existential quantifiers through System F to gain more proof power, allowing us to create instances of types that prove corresponding \textit{Second Order} Intuitionistic Propositional statements.
However we are still one Double Negation Elimination axiom away from being able to prove any true statement within Classical logic.
The issue is that within Intuitionistic Logic we have a slightly different notion of Falsehood. We write $\lnot P$ to mean that $P \implies \bot$, or that $T_P \to \bot$. In other words, if we are able to give an instance of type $T_P$, then we are also able to give an instance of type $\bot$, however since we cannot ever have a type of $\bot$ we must not be able to give an instance of $T_P$.
This means that, within Intuitionistic Logic, we have ideas of \textit{Proven} and \textit{Unprovable}, rather than Classical Logic's \textit{True} and \textit{False}.
We could therefore begin to build up to Classical Logic by treating Refutations, or Proofs of Unprovability, as a first-class notion of Falsehood. This moves away from STLC where the only first-class object is a type and instances of a type represent proofs of a true proposition. By having two different first-class objects, \textit{true} and \textit{false}, we find that an instance of a type is now either a proof that a proposition is true or that it is false.
To expand on this, imagine first we want to prove that some proposition $P$ holds. Within STLC, it is sufficient to show that some expression with type $T_P$ exists. However within our new system, we will want to show that some expression with type $T_P \; true$ exists. This allows us to prove that a proposition $P$ is false by showing that some expression in our system has the type $T_P \; false$.
We now must establish what exactly these expressions with these types are. We know that our types now have an extra tag stating \textit{true} or \textit{false}, but we need to devise a type of expression which has these types.
The trick is in going backwards; we know that we must be able to use our expressions as proofs of the proposition that their type corresponds to, so we can let our expressions be the encoding of proofs of statements within Classical Logic.
That is, if we can establish firstly a system for proving all true or false statements in Classical Logic, and then secondly a way to encode these proofs as expressions, then each expression can have the type that corresponds to the proven true or false proposition.
\newpage
We know from IB Logic and Proof that we can use sequents to prove propositions within Classical Logic:
\SequentBox{
\begin{spacing}{1.0}
\inlineeq{
\begin{split}
\textrm{Propositions} \; A &::= \top \; | \; A \land B \; | \; \bot \; | \; A \lor B \; | \; \lnot A\\
\textrm{True contexts} \; \Gamma &::= \cdot \; | \; \Gamma, A\\
\textrm{False contexts} \; \Delta &::= \cdot \; | \; \Delta, A\\
\textrm{Typing Judgements} \; &::= \Gamma; \Delta \vdash A \; \textrm{true} \; | \; \Gamma; \Delta \vdash A \; \textrm{false} \; | \; \Gamma; \Delta \vdash \textrm{contr}
\end{split}
}
\end{spacing}
\vspace{30px}
\SequentUnary{HypP}{$A \in \Gamma$}{$\Gamma ; \Delta \vdash A$ true}
\SequentUnary{HypR}{$A \in \Delta$}{$\Gamma ; \Delta \vdash A$ false}
\SequentAxiom{$\top$P}{$\Gamma ; \Delta \vdash \top$ true}
\SequentAxiom{$\bot$R}{$\Gamma ; \Delta \vdash \bot$ false}
\SequentBinary{$\land$P}{$\Gamma ; \Delta \vdash A$ true}{$\Gamma ; \Delta \vdash B$ true}{$\Gamma ; \Delta \vdash A \land B$ true}
\SequentBinary{$\lor$R}{$\Gamma ; \Delta \vdash A$ false}{$\Gamma ; \Delta \vdash B$ false}{$\Gamma ; \Delta \vdash A \lor B$ false}
\SequentUnary{$\lor$P$_1$}{$\Gamma ; \Delta \vdash A$ true}{$\Gamma ; \Delta \vdash A \lor B$ true}
\SequentUnary{$\lor$P$_2$}{$\Gamma ; \Delta \vdash B$ true}{$\Gamma ; \Delta \vdash A \lor B$ true}
\SequentUnary{$\land$R$_1$}{$\Gamma ; \Delta \vdash A$ false}{$\Gamma ; \Delta \vdash A \land B$ false}
\SequentUnary{$\land$R$_2$}{$\Gamma ; \Delta \vdash B$ false}{$\Gamma ; \Delta \vdash A \land B$ false}
\SequentUnary{$\lnot$P}{$\Gamma ; \Delta \vdash A$ false}{$\Gamma ; \Delta \vdash \lnot A$ true}
\SequentUnary{$\lnot$R}{$\Gamma ; \Delta \vdash A$ true}{$\Gamma ; \Delta \vdash \lnot A$ false}
\SequentUnaryNoLabel{$\Gamma ; \Delta, A \vdash $contr}{$\Gamma ; \Delta \vdash A$ true}
\SequentUnaryNoLabel{$\Gamma, A ; \Delta \vdash $contr}{$\Gamma ; \Delta \vdash A$ false}
\SequentBinary{Contr}{$\Gamma ; \Delta \vdash A$ true}{$\Gamma ; \Delta \vdash A$ false}{$\Gamma ; \Delta \vdash $contr}
}
\newpage
With these rules we've gained the ability to prove the Double Negation Elimination rule:
\begin{center}
\AxiomC{}
\SafeRightLabel{HYP}
\UnaryInfC{$A ; \cdot \vdash A$ true}
\SafeRightLabel{$\lnot$R}
\UnaryInfC{$A ; \cdot \vdash \lnot A$ false}
\SafeRightLabel{$\lnot$P}
\UnaryInfC{$A ; \cdot \vdash \lnot \lnot A$ true}
\DisplayProof
\hspace{3em}
\AxiomC{}
\UnaryInfC{$\lnot \lnot A ; A \vdash \lnot \lnot A$ true}
\AxiomC{}
\UnaryInfC{$\lnot \lnot A ; A \vdash A$ false}
\UnaryInfC{$\lnot \lnot A ; A \vdash \lnot A$ true}
\UnaryInfC{$\lnot \lnot A ; A \vdash \lnot \lnot A$ false}
\BinaryInfC{$\lnot \lnot A ; A \vdash$ contr}
\UnaryInfC{$\lnot \lnot A ; \cdot \vdash A$ true}
\DisplayProof
\end{center}
We can now devise a way to encode these rules as terms within our new language. There is a pleasing parity between the rules: every proposition form has exactly two rules, one for when the proposition is true and one for when it is false. We must therefore include in our encoding whether an expression witnesses truth or falsehood, which we do by introducing an alternative variant of each value, dubbed a Continuation.
\newpage
This gives us the following grammar:
\inlineeq{
\begin{split}
\textrm{Values} \; e &::= \langle\rangle \; | \; \langle e, e' \rangle \; | \; L e \; | \; R e \; | \; \textrm{not}(k) \; | \; \mu u : A. c\\
\textrm{Continuations} \; k &::= [] \; | \; [k, k'] \; | \; \textrm{fst} \; k \; | \; \textrm{snd} \; k \; | \; \textrm{not}(e) \; | \; \mu x : A. c\\
\textrm{Contradictions} \; c &::= \langle e |_A k\rangle
\end{split}
}
Note the correspondence between Values and Continuations.
We can then type these expressions with the types that we established earlier:
\SequentBox{
\SequentUnary{HypP}{$x: A \in \Gamma$}{$\Gamma ; \Delta \vdash x : A$ true}
\SequentUnary{HypR}{$x: A \in \Delta$}{$\Gamma ; \Delta \vdash x : A$ false}
\SequentAxiom{$\top$P}{$\Gamma ; \Delta \vdash \langle \rangle : \top$ true}
\SequentAxiom{$\bot$R}{$\Gamma ; \Delta \vdash [] : \bot$ false}
\SequentBinary{$\land$P}{$\Gamma ; \Delta \vdash e: A$ true}{$\Gamma ; \Delta \vdash e': B$ true}{$\Gamma ; \Delta \vdash \langle e, e' \rangle : A \land B$ true}
\SequentBinary{$\lor$R}{$\Gamma ; \Delta \vdash k : A$ false}{$\Gamma ; \Delta \vdash k' : B$ false}{$\Gamma ; \Delta \vdash [k, k'] : A \lor B$ false}
\SequentUnary{$\lor$P$_1$}{$\Gamma ; \Delta \vdash e: A$ true}{$\Gamma ; \Delta \vdash \textrm{L}e : A \lor B$ true}
\SequentUnary{$\lor$P$_2$}{$\Gamma ; \Delta \vdash e: B$ true}{$\Gamma ; \Delta \vdash \textrm{R}e : A \lor B$ true}
\SequentUnary{$\land$R$_1$}{$\Gamma ; \Delta \vdash k: A$ false}{$\Gamma ; \Delta \vdash \textrm{fst} \; k : A \land B$ false}
\SequentUnary{$\land$R$_2$}{$\Gamma ; \Delta \vdash k: B$ false}{$\Gamma ; \Delta \vdash \textrm{snd} \; k : A \land B$ false}
\SequentUnary{$\lnot$P}{$\Gamma ; \Delta \vdash k: A$ false}{$\Gamma ; \Delta \vdash \textrm{not}(k) : \lnot A$ true}
\SequentUnary{$\lnot$R}{$\Gamma ; \Delta \vdash e: A$ true}{$\Gamma ; \Delta \vdash \textrm{not}(e) : \lnot A$ false}
\SequentUnaryNoLabel{$\Gamma ; \Delta, u: A \vdash c$ contr}{$\Gamma ; \Delta \vdash \mu u : A.c: A$ true}
\SequentUnaryNoLabel{$\Gamma, x: A ; \Delta \vdash c$ contr}{$\Gamma ; \Delta \vdash \mu x : A.c: A$ false}
\SequentBinary{Contr}{$\Gamma ; \Delta \vdash e: A$ true}{$\Gamma ; \Delta \vdash k: A$ false}{$\Gamma ; \Delta \vdash \langle e |_A k \rangle$ contr}
}
Note that the set of things that we have assumed to be true $\Gamma$ and the set of things we have assumed to be false $\Delta$ now become our variable contexts. This gives added meaning to an expression having no free variables - it must correspond to a tautology.
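As a worked example (the term is mine, read off the right-hand Double Negation Elimination derivation from before), with $x : \lnot \lnot A$ as our only assumption we can type the term
\inlineeq{
x : \lnot \lnot A ; \cdot \vdash \mu u : A. \langle x \; |_{\lnot \lnot A} \; \textrm{not}(\textrm{not}(u)) \rangle : A \; \textrm{true}
}
Reading it off: $u : A$ in the false context gives $A$ false, so $\textrm{not}(u) : \lnot A$ true and $\textrm{not}(\textrm{not}(u)) : \lnot \lnot A$ false, which is contradicted by $x : \lnot \lnot A$ true; the $\mu$ then discharges $u$ and concludes $A$ true.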
\newpage
As with the Curry Howard Correspondence, we can imagine that evaluation of an expression within this new language would be akin to normalising the proof for it. Since most proofs within this language will take the form of a contradiction, we can observe a few things about proving with contradictions:
\begin{itemize}
\item If a contradiction exists with the type $A \land B$, and the evidence that $A \land B$ is false is of the form fst $k$, then there must actually be a contradiction specifically with $A$.
\item If a contradiction exists with the type $\lnot A$ and the two pieces of evidence are of the form not$(e)$ and not$(k)$, then a contradiction must exist with $A$, with the evidence being $e$ and $k$.
\item If a contradiction exists with the type $A$, and the evidence $\mu u: A.c$ that $A$ is true is itself built from a contradiction $c$, then we can substitute the evidence that $A$ is false for $u$ and use $c$ directly as the contradiction.
\end{itemize}
To see that last point more clearly, consider the following two proof trees:
\begin{center}
\AxiomC{$\Gamma ; \Delta, A \vdash B$ true}
\AxiomC{$\Gamma ; \Delta, A \vdash B$ false}
\BinaryInfC{$\Gamma ; \Delta, A \vdash$ contr}
\UnaryInfC{$\Gamma ; \Delta \vdash A$ true}
\AxiomC{$\Gamma ; \Delta \vdash A$ false}
\BinaryInfC{$\Gamma ; \Delta \vdash$ contr}
\DisplayProof
\vspace{1em}
\AxiomC{$\Gamma ; \Delta \vdash B$ true}
\AxiomC{$\Gamma ; \Delta \vdash B$ false}
\BinaryInfC{$\Gamma ; \Delta \vdash$ contr}
\DisplayProof
\end{center}
Both show the same thing, which is that $\Gamma ; \Delta$ leads to a contradiction. We can transform the first into the second by taking the left subtree of the first and substituting its proof that $A$ is false (the right branch) for every use of the assumption that $A$ is false within that subtree.
This gives us the following operational semantics:
\SequentBox{
\begin{spacing}{1.0}
\inlineeq{
\begin{split}
\langle \langle e_1, e_2\rangle \; |_{A \land B} \; \textrm{fst } k\rangle &\rightsquigarrow \langle e_1 \; |_A k\rangle \\
\langle \langle e_1, e_2\rangle \; |_{A \land B} \; \textrm{snd } k\rangle &\rightsquigarrow \langle e_2 \; |_B \; k\rangle \\
\langle L e \; |_{A \lor B} \; [k_1, k_2]\rangle &\rightsquigarrow \langle e \; |_A \; k_1\rangle \\
\langle R e \; |_{A \lor B} \; [k_1, k_2]\rangle &\rightsquigarrow \langle e \; |_B \; k_2\rangle \\
\langle \textrm{not}(k) \; |_{\lnot A} \; \textrm{not}(e)\rangle &\rightsquigarrow \langle e \; |_A \; k\rangle \\
\langle \mu u : A. c \; |_A \; k\rangle &\rightsquigarrow [k/u]c\\
\langle e \; |_A \; \mu x : A. c\rangle &\rightsquigarrow [e/x]c\\
\end{split}
}
\end{spacing}
\vspace{-10px}
}
\newpage
\section{The Computational Ability of Classical Logic}
When inventing our new Classical Logic computation system, we've removed the lambda expressions. Luckily, if you squint hard enough, our new contradiction terms look a bit like lambda expressions. In fact, we have the operational semantics $\langle \mu u : A. c \; |_A \; k\rangle \rightsquigarrow [k/u]c$, which looks to me like $(\lambda u: A. c) k \rightsquigarrow [k/u]c$.
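To see this analogy in action (a worked step of mine, not from the notes), cut the Double Negation Elimination term from the previous section against an arbitrary continuation $k$ of type $A$:
\inlineeq{
\langle \mu u : A. \langle x \; |_{\lnot\lnot A} \; \textrm{not}(\textrm{not}(u)) \rangle \; |_A \; k \rangle \rightsquigarrow \langle x \; |_{\lnot\lnot A} \; \textrm{not}(\textrm{not}(k)) \rangle
}
The continuation $k$ that the term was cut against gets captured and handed, doubly wrapped, to $x$; this is very reminiscent of control operators such as call/cc, which is the kind of computational content classical axioms are usually said to carry.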
\textbf{Progress} can be written that if $\cdot ; \cdot \vdash c$ contr then $c \rightsquigarrow c'$ (or $c$ final). Unfortunately, if Classical Logic is consistent (which it is), we know that there do not exist any such expressions that are both closed and contradictions. Therefore our language is functionally useless.
We can try to fix this if we want by introducing extra values, called \textit{halt}, \textit{ans} and \textit{done} to be able to form contradictions where there are none and so allow for arbitrary computations. Unfortunately, by doing this we completely break all of the consistency of Classical Logic; if any contradiction can be formed at any time, we can prove that anything holds.
This appears to give us a choice: we can either use our Classical Logic in a consistent state, or in a computationally useful way. We will see that there is a different approach that we can take towards this whole thing that allows us to have both.
\newpage
\section{Classical Embedding}
We introduce the concept of Embedding Classical logic with an observation:
\inlineeq{\lambda k : ((X \to \bot) \to \bot) \to \bot . \lambda a: X. k (\lambda q: X \to \bot. q a)\\
\textrm{has the type}\\
(((X \to \bot) \to \bot) \to \bot) \to (X \to \bot)}
If we use our previous observation that we can equate the proposition $\lnot A$ to the type $A \to \bot$, we have proven that:
\inlineeq{\lnot\lnot\lnot X \implies \lnot X}
This is called Triple Negation Elimination, and it holds within Intuitionistic logic.
\centerboxtitled{Small Aside}{Why is it then that Double Negation Elimination doesn't hold but Triple Negation Elimination does?
The trick is in the Intuitionistic interpretation of False as 'Not Able to be Proven'.
Imagine that you and a group of researchers find some ancient cave writings. These writings might be written by aliens, making the statements written True, or they might be scribbles written by children, making them Nonsensical and Unable to be proven True. The children's scribbles may be completely True statements about the meaning of Life, the Universe and Everything, however unfortunately the children's handwriting is not good enough for us to ever decode these statements.
Now imagine that I told you that the writing is \textit{Not} the children's. This means that the statements contained must be True, and follows the classical idea of Double Negation Elimination.
Imagine instead that I told you that we are \textit{Unable to Prove} that the writing is the Children's. Then we do not know anything more about the validity of the contained statements, and so Double Negation Elimination does not hold.
Now imagine that I told you that we would \textit{Never be Able to Prove} whether or not we are \textit{Unable to Prove} that the writing is the Children's. Then we will also \textit{Never be Able to Prove} that we are \textit{\textbf{Able} to Prove} that the writing is the Alien's, since showing that the writing is the Alien's proves that we are unable to show that the writing is the Children's. Therefore we will never know whether the contained statements are True or not, which means that we are functionally in the same place as if we knew that the writing was the Children's. This extra layer of Unprovability is what allows Triple Negation Elimination to go through even though Double Negation Elimination does not.
}
\newpage
Note that we can define a far more relaxed version of negation where Triple Negation Elimination still holds, called \textbf{quasi-negation}.
We pick any type $p$ (the type $T_P$ of some fixed proposition $P$) and define quasi-negation $\quasi X$ as $X \to p$. This type corresponds to the proposition $X \implies P$. This logically makes sense since we know that $\bot \implies$ \textit{anything} by the Principle of Explosion ("\textit{ex falso quodlibet}").
Using this quasi-negation, we define a translation for Classical Statements into STLC as follows:
\inlineeq{
\begin{split}
(\lnot A)^{\circ} &= \quasi A^{\circ}\\
\top^{\circ} &= 1\\
(A \land B)^{\circ} &= A^{\circ} \times B^{\circ}\\
\bot^{\circ} &= p\\
(A \lor B)^{\circ} &= \quasi\quasi (A^{\circ} + B^{\circ})
\end{split}
}
Note the extra double quasi-negation on the union term. This is only one possible encoding of Classical Logic into a form for which Double Negation Elimination holds. For example, we could place a double negation in front of every term (known as the Kolmogorov Translation), or use the de Morgan dual for disjunction, $\quasi (\quasi A^{\circ} \times \quasi B^{\circ})$, to avoid unions altogether.
We can prove that Double Negation Elimination holds for all of these encoded types within Intuitionistic logic by creating functions that have the following types:
\inlineeq{
\begin{split}
\cdot \vdash \textrm{dne}_A&: \quasi \quasi A^{\circ} \to A^{\circ}\\
&\\
\cdot \vdash \textrm{dne}_\top &: \quasi \quasi 1 \to 1 \\
\textrm{dne}_\top &= \lambda q. \langle \rangle \\
&\\
\cdot \vdash \textrm{dne}_{A \land B}&: \quasi \quasi (A^{\circ} \times B^{\circ}) \to A^{\circ} \times B^{\circ}\\
\textrm{dne}_{A \land B} &= \lambda q. \langle \substack{\textrm{dne}_A (\lambda k. q(\lambda p. k(\textrm{fst } p))),\\\textrm{dne}_B (\lambda k. q(\lambda p. k(\textrm{snd } p)))} \rangle\\
&\\
\cdot \vdash \textrm{dne}_\bot &: \quasi \quasi p \to p \\
\textrm{dne}_\bot &= \lambda q. q (\lambda x. x) \\
&\\
\cdot \vdash \textrm{dne}_{A \lor B} &: \quasi \quasi \quasi \quasi (A^{\circ} + B^{\circ}) \to \quasi \quasi (A^{\circ} + B^{\circ}) \\
\textrm{dne}_{A \lor B} &= \lambda q. (\lambda k. \lambda a. k (\lambda q. q a)) q \\
&\\
\cdot \vdash \textrm{dne}_{\lnot A} &: \quasi \quasi (\quasi A^{\circ}) \to \quasi A^{\circ} \\
\textrm{dne}_{\lnot A} &= \lambda q. (\lambda k. \lambda a. k (\lambda q. q a)) q
\end{split}
}
Note the similarity between the terms for dne$_{A \lor B}$ and dne$_{\lnot A}$, and the Triple Negation Elimination term $\lambda k. \lambda a. k (\lambda q. q a)$. We call this term \textit{tne} for short.
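As a further hedged example of the classical strength this translation buys us (the term is my own, not from the notes; L and R are the injections of the $+$ type), the translation of the Law of Excluded Middle, $(A \lor \lnot A)^{\circ} = \quasi\quasi(A^{\circ} + \quasi A^{\circ})$, is inhabited in plain Intuitionistic STLC by
\inlineeq{
\lambda k : (A^{\circ} + \quasi A^{\circ}) \to p . \; k (\textrm{R} \; (\lambda a : A^{\circ}. k (\textrm{L} \; a)))
}
The subterm $\lambda a. k(\textrm{L} \; a)$ manufactures the `proof of $\lnot A$' by promising to hand any future proof of $A$ back to the same $k$, the same continuation-capturing flavour we saw in the classical operational semantics.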
\newpage
Now that we have established an embedded form of Intuitionistic logic for which Double Negation Elimination holds, we can begin mapping our encoding of Classical logic into this embedding.
\end{document}
\chapter{Magnetic Inversion}\label{Chp:cook:magnetic inversion}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{QLDWestMagneticDataPlot.png}
\caption{Magnetic anomaly data in $nT$ from Western Queensland, Australia
(file \examplefile{data/QLDWestMagnetic.nc}). Data obtained from Geoscience Australia.}
\label{FIG:P1:MAG:0}
\end{figure}
Magnetic data report the observed magnetic flux density over a region above the
surface of the Earth.
Similar to the gravity case the data are given as deviation from an expected
background magnetic flux density $B^b$ of the Earth.
Example data in units of $nT$ (nano Tesla) are shown in Figure~\ref{FIG:P1:MAG:0}.
It is the task of the inversion to recover the susceptibility distribution $k$
from the magnetic data collected. The approach for inverting magnetic data is
almost identical to the one used for gravity data.
In fact the \downunder script~\ref{code: magnetic1} used for the magnetic
inversion is very similar to the script~\ref{code: gravity1} for gravity inversion.
\begin{pyc}\label{code: magnetic1}
\
\begin{python}
# Header:
from esys.downunder import *
from esys.weipa import *
from esys.escript import unitsSI as U
# Step 1: set up domain
dom=DomainBuilder()
dom.setVerticalExtents(depth=40.*U.km, air_layer=6.*U.km, num_cells=25)
dom.setFractionalPadding(pad_x=0.2, pad_y=0.2)
B_b = [2201.*U.Nano*U.Tesla, 31232.*U.Nano*U.Tesla, -41405.*U.Nano*U.Tesla]
dom.setBackgroundMagneticFluxDensity(B_b)
dom.fixSusceptibilityBelow(depth=40.*U.km)
# Step 2: read magnetic data
source0=NetCdfData(NetCdfData.MAGNETIC, 'MagneticSmall.nc',
scale_factor=U.Nano * U.Tesla)
dom.addSource(source0)
# Step 3: set up inversion
inv=MagneticInversion()
inv.setSolverTolerance(1e-4)
inv.setSolverMaxIterations(50)
inv.fixMagneticPotentialAtBottom(False)
inv.setup(dom)
# Step 4: run inversion
inv.getCostFunction().setTradeOffFactorsModels(0.1)
k = inv.run()
# Step 5: write reconstructed susceptibility to file
saveVTK("result.vtu", susceptibility=k)
\end{python}
\end{pyc}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{QLDMagContourMu01.png}
\caption{Contour plot of the susceptibility from a three-dimensional magnetic inversion (with $\mu=0.1$).
Colours represent values of susceptibility where high values are represented by
blue and low values are represented by red.}
\label{FIG:P1:MAG:1}
\end{figure}
The structure of the script is identical to the gravity case.
Following the header section importing the necessary modules the domain of the
inversion is defined in step one.
In step two the data are read and added to the domain builder.
Step three sets up the inversion and step four runs it.
Finally in step five the result is written to the result file, here
\file{result.vtu} in the \VTK format.
Results are shown in Figure~\ref{FIG:P1:MAG:1}.
Although scripts for magnetic and gravity inversion are largely identical there
are a few small differences which we are going to highlight now.
The magnetic inversion requires data about the background magnetic flux density
over the region of interest which is added to the domain by the statements
\begin{verbatim}
B_b = [2201.*U.Nano*U.Tesla, 31232.*U.Nano*U.Tesla,
-41405.*U.Nano*U.Tesla]
dom.setBackgroundMagneticFluxDensity(B_b)
\end{verbatim}
Here it is assumed that the background magnetic flux density is constant across
the domain and is given as the list
\begin{verbatim}
B_b= [ B_E, B_N, B_V ]
\end{verbatim}
in units of Tesla (T) where
\member{B_E}, \member{B_N} and \member{B_V} refer to the east, north and
vertical component of the magnetic flux density, respectively.
Values for the magnetic flux density can be obtained by the International
Geomagnetic Reference Field (IGRF)~\cite{IGRF} (or the Australian Geomagnetic
Reference Field (AGRF)~\cite{AGRF} via \url{http://www.ga.gov.au/oracle/geomag/agrfform.jsp}).
Similar to the gravity case susceptibility below a certain depth can be set to
zero via the statement
\begin{verbatim}
dom.fixSusceptibilityBelow(depth=40.*U.km)
\end{verbatim}
where here the susceptibility below $40km$ is prescribed (this has no effect as
the depth of the domain is $40km$)\footnote{Notice that the method called is
different from the one in the case of gravity inversion.}.
Magnetic data are read and added to the domain with the following statements:
\begin{verbatim}
source0=NetCdfData(NetCdfData.MAGNETIC, 'MagneticSmall.nc', \
scale_factor=U.Nano * U.Tesla)
dom.addSource(source0)
\end{verbatim}
The first argument \member{NetCdfData.MAGNETIC} identifies the data read from
file \file{MagneticSmall.nc} (second argument) as magnetic data. The argument
\file{scale_factor} specifies the units (here $nT$) of the magnetic flux
density data in the file.
If scalar data are given it is assumed that the magnetic flux density anomalies
are measured in the direction of the background magnetic flux density\footnote{The
default for \file{scale_factor} for magnetic data is $nT$.}.
Finally the inversion is created and run:
\begin{verbatim}
inv=MagneticInversion()
inv.fixMagneticPotentialAtBottom(False)
k = inv.run()
\end{verbatim}
The result for the susceptibility is named \member{k}. In this case the magnetic potential is
not fixed at the bottom of the domain. The magnetic potential is still set to zero at the top of the domain.
We then write the result
to a \VTK file using
\begin{verbatim}
saveVTK("result.vtu", susceptibility=k)
\end{verbatim}
where the result of the inversion is tagged with the name \member{susceptibility}
as an identifier for the visualization software.
\begin{figure}
\begin{center}
\subfigure[$\mu=0.001$]{%
\label{FIG:P1:MAG:10 MU0001}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagContourMu0001.png}}
}%
\subfigure[$\mu=0.01$]{%
\label{FIG:P1:MAG:10 MU001}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagContourMu001.png}}
}\\ % ------- End of the first row ----------------------%
\subfigure[$\mu=0.1$]{%
\label{FIG:P1:MAG:10 MU01}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagContourMu01.png}}
}%
\subfigure[$\mu=1.$]{%
\label{FIG:P1:MAG:10 MU1}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagContourMu1.png}}
}\\ % ------- End of the second row ----------------------%
\subfigure[$\mu=10.$]{%
\label{FIG:P1:MAG:10 MU10}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagContourMu10.png}}
}%
\end{center}
\caption{3-D contour plots of magnetic inversion results with data from
Figure~\ref{FIG:P1:MAG:0} for various values of the model trade-off
factor $\mu$. Visualization has been performed in \VisIt.}
\label{FIG:P1:MAG:10}
\end{figure}
\begin{figure}
\begin{center}
\subfigure[$\mu=0.001$]{%
\label{FIG:P1:MAG:11 MU0001}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagDepthMu0001.png}}
}%
\subfigure[$\mu=0.01$]{%
\label{FIG:P1:MAG:11 MU001}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagDepthMu001.png}}
}\\ % ------- End of the first row ----------------------%
\subfigure[$\mu=0.1$]{%
\label{FIG:P1:MAG:11 MU01}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagDepthMu01.png}}
}%
\subfigure[$\mu=1.$]{%
\label{FIG:P1:MAG:11 MU1}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagDepthMu1.png}}
}\\ % ------- End of the second row ----------------------%
\subfigure[$\mu=10.$]{%
\label{FIG:P1:MAG:11 MU10}
\scalebox{0.95}{\includegraphics[width=0.45\textwidth]{QLDMagDepthMu10.png}}
}%
\end{center}
\caption{3-D slice plots of magnetic inversion results with data from
Figure~\ref{FIG:P1:MAG:0} for various values of the model trade-off
factor $\mu$. Visualization has been performed in \VisIt.}
\label{FIG:P1:MAG:11}
\end{figure}
Figures~\ref{FIG:P1:MAG:10} and~\ref{FIG:P1:MAG:11} show results from the
inversion of the magnetic data shown in Figure~\ref{FIG:P1:MAG:0}.
In Figure~\ref{FIG:P1:MAG:10} surface contours are used to represent the
susceptibility while Figure~\ref{FIG:P1:MAG:11} uses contour lines
on a lateral plane intercept and two vertical plane intercepts.
The images show the strong impact of the trade-off factor $\mu$ on the result.
Larger values give more emphasis to the misfit term in the cost function
leading to rougher susceptibility distributions.
The result for $\mu=0.1$ seems to be the most realistic.
\input{preamble.tex}
\begin{document}
\begin{center}
{\LARGE \bf Spencer H. Bryngelson} \\
\medskip
Compiled on: \today
\end{center}
\section{Basic information}
\begin{itemize}
\item Title: Assistant Professor of Computational Science \& Engineering
\item Institution: \GIT
\item Address: S1313 CODA, 756 W Peachtree St NW, Atlanta, GA 30308
\item Email: \href{mailto:[email protected]}{\texttt{[email protected]}}
\item Website: \href{https://bryngelson-research.com}{\texttt{https://bryngelson-research.com}}
\end{itemize}
\section{Education}
\begin{itemize}
\item \UIUC
\begin{itemize}
\item (2017) Doctor of Philosophy, Theoretical \& Applied Mechanics
\item (2015) Master of Science, Theoretical \& Applied Mechanics
\item (2015) Graduate Certificate, Computational Science \& Engineering
\end{itemize}
\item \UMD
\begin{itemize}
\item (2013) Bachelor of Science, Mechanical Engineering
\item (2013) Bachelor of Science, Engineering Mathematics
\end{itemize}
\end{itemize}
\section{Research positions}
\begin{itemize}
\item (2021--Present) Assistant Professor, College of Computing, \GIT
\item (2018--2021) Senior Postdoctoral Scholar, \CIT, with Tim Colonius
\item (2019) Visiting Researcher, \MIT, with Themis Sapsis
\item (2017--8) Postdoctoral Researcher, XPACC, with Carlos Pantano, Dan Bodony, Jon Freund
\item (2013--7) Graduate Research Fellow, \UIUC, with Jon Freund
\item (2012--3) Undergraduate Research Assistant, \UMD, with Eric Ratts
\end{itemize}
\section{Teaching}
\begin{itemize}
\item (2015) Fundamentals of Fluid Dynamics, \UIUC
\item (2013) Design and Analysis of Machine Elements, \UMD
\item (2012) Probability, Statistics, and Reliability in Design, \UMD
\item (2012) Statics and Mechanics of Materials, \UMD
\end{itemize}
\section{Students}
\subsection{Current}
\begin{itemize}
\item Anand Radhakrishnan, \GIT
\item Scott Simms, \GIT
\item Jose Chreim, \CIT
\item Jean-Sebastien Spratt, \CIT
\item Ben Stevens, \CIT
\item Qifan Wang, \CIT
\item Alexis Charalampopoulos, \MIT
\end{itemize}
\subsection{Past}
\begin{itemize}
\item David Mittelstein, \CIT, Ph.D. (2020)
\item Theresa Trummler, TU Munich, Ph.D. (2020)
\item Franz O'Meally, Johns Hopkins University, B.S. (2020) -- Now: Caltech Ph.D. student
\end{itemize}
\section{Awards}
\begin{itemize}
\item (2017) Stanley Weiss Outstanding Dissertation Award, \UIUC
\item (2016) Hassan Aref Award (research in fluid mechanics), \UIUC
\item (2015) Alumni Teaching Fellowship, \UIUC
\item (2010--2013) Dean's List, \UMD
\item (2011) Pi Tau Sigma (honor society, member), \UMD
\end{itemize}
\section{Grants}
\subsection{Funded grants}
\begin{itemize}
\item (2019-20) co-PI: XSEDE CTS120005, $\$1.35$M valuation, 9M CPU Hours
\end{itemize}
\subsection{Grants supported}
\begin{itemize}
\item (2019-21) NIH 2P01-DK04881, with T. Colonius
\item (2018-21) ONR MURI N0014-17-1-2676, with T. Colonius
\item (2018-21) ONR BRC N0014-17-1-2625, with T. Colonius
\item (2017-18) DOE PSAAP DE-NA0002374, with J. B. Freund and W. Gropp
\item (2013-17) NSF CBET 13-36972, with J. B. Freund
\end{itemize}
\section{Professional activity}
\subsection{Referee}
\begin{itemize}
\item AIAA Journal
\item Fluids
\item International Journal of Multiphase Flow
\item Journal of Fluid Mechanics
\item Journal of Computational Physics
\item Theoretical and Computational Fluid Dynamics
\end{itemize}
\subsection{Affiliations}
\begin{itemize}
\item American Physical Society
\item Society of Industrial and Applied Mathematics
\end{itemize}
\subsection{Service}
\begin{itemize}
\item (2021) Organizer, Mini-symposium, ``Machine learning for multiphase flows'', IACM Conference on Mechanistic Machine Learning and Digital Twins for Computational Science, Engineering \& Technology
\item (2015-16) Judge, Illinois State-wide Math Competition
\item (2014) Organizer, Science Night, Illinois Middle Schools
\end{itemize}
\section{Publications}
\nocite{*}
\newrefcontext[labelprefix=P]
\printbibliography[type=unpublished,title={Preprints},resetnumbers=true,heading=subbibnumbered]
\newrefcontext[labelprefix=J]
\printbibliography[type=article,title={Journal papers},resetnumbers=true,heading=subbibnumbered]
\newrefcontext[labelprefix=C]
\printbibliography[type=inproceedings,title={Refereed proceedings},resetnumbers=true,heading=subbibnumbered]
\newrefcontext[labelprefix=O]
\printbibliography[title={Other publications},resetnumbers=true,filter=other,heading=subbibnumbered]
\section{Talks}
\newrefcontext[labelprefix=I]
\printbibliography[title={Invited talks},resetnumbers=true,filter=invited,heading=subbibnumbered]
\newrefcontext[labelprefix=T]
\printbibliography[title={Conference talks},resetnumbers=true,filter=talk,heading=subbibnumbered]
% Software
\newrefcontext[labelprefix=S]
\printbibliography[type=software,title={Software developed},resetnumbers=true,heading=bibnumbered]
\end{document}
% Due Date: 6/25/14
\chapter{Logical Dependence Analysis}
\label{chapter:logical}
The first stage in the Legion task pipeline
is logical dependence analysis. Given a stream
of sub-task launches from a common parent task, the
dependence analysis stage is responsible for computing which
tasks are {\em interfering} and therefore must have
dependences (we give a formal definition of non-interference
in Section~\ref{sec:noninterference}). Unlike other
programming systems that compute dependences between
tasks based on inferred data usage, either statically (e.g.
by doing pointer analysis), or dynamically (e.g. transactional
memory), Legion has specific names for the
sets of data being accessed by tasks in the form
of logical regions. The concrete names for different
sets of data provided by logical regions will
considerably simplify the dependence analysis.
A na\"{i}ve implementation of the Legion dependence
analysis stage would perform a pairwise test for
non-interference between a sub-task and all of the
tasks launched before it in the same parent task.
For a stream of $N$ tasks, dependence analysis is
known to require $O(N^2)$ non-interference tests to
be performed\footnote{In Legion, non-interference
tests are actually performed between the regions
used by tasks. If tasks on average have $R$ region
requirements, then dependence analysis is actually
$O(N^2R^2)$.}. In practice, the size of $N$ is
bounded by a sliding window reflecting tasks that
have yet to complete. A task only needs to perform
non-interference tests against tasks within this
window. Figure~\ref{fig:taskwindow} shows a
representative example. Task $t_8$ only needs to
perform dependence tests against tasks in the
stream $S$ that remain within the window and
therefore not complete. However, the
task window is usually on the order of a few hundred
to a thousand tasks in many applications. While
finding an asymptotically superior algorithm for dependence
analysis is unlikely, we can introduce data structures
that can significantly improve the constant factors
associated with the dependence analysis algorithm.
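To make the cost model concrete, the sketch below (Python, purely
illustrative; it is not the Legion implementation) performs the na\"{i}ve
pairwise analysis over a sliding window of tasks that have yet to complete.
The pairwise test itself is passed in as a parameter; a sketch of it is
given in Section~\ref{sec:noninterference}.
\begin{verbatim}
from collections import deque

def analyze_stream(stream, interferes):
    """Naive dependence analysis over a sliding window of tasks
    that have not yet completed.  `stream` yields tasks in program
    order; `interferes(a, b)` is the pairwise test."""
    window = deque()
    dependences = []
    for task in stream:
        for prior in window:        # O(N) tests per task, O(N^2) overall
            if interferes(prior, task):
                dependences.append((prior, task))
        window.append(task)
        # In the real runtime, tasks are retired from the window as
        # they complete; that bookkeeping is omitted here.
    return dependences
\end{verbatim}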
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{figs/TaskWindow.pdf}
\caption{Example Task Window for Dependence Analysis\label{fig:taskwindow}}
\end{figure}
In this chapter, we begin by giving a precise
definition of when two tasks are non-interfering
(Section~\ref{sec:noninterference}). Based on
the properties of the non-interference test,
we design an algorithm that leverages the logical
region tree data structures maintained by the
runtime to accelerate non-interference tests
(Section~\ref{sec:logtraversal}). We then
elaborate on how storage for the meta-data
associated with dependence analysis is
efficiently maintained
(Section~\ref{sec:logicaltree}). In
Section~\ref{sec:mapdepgraph}, we describe
the mapping dependence graph produced by
the logical dependence analysis.
Finally, we describe an optimization
for memoizing the results of the dependence
analysis in Section~\ref{sec:tracing}.
\section{Task Non-Interference}
\label{sec:noninterference}
Before proceeding with our discussion of how
we implement the dependence analysis stage, we
first need to give a precise definition of
what it means for two tasks to be non-interfering.
Two tasks are non-interfering if all pairs of region
requirements between the two tasks are
non-interfering. A pair of region requirements
is non-interfering if any one of the following
three disjunctive non-interference conditions
is met (a minimal sketch follows the list):
\begin{itemize}
\item {\bf Region Disjointness} - the logical
regions in the two region requirements are disjoint
(e.g. there are no shared rows).
\item {\bf Field Disjointness} - the sets of fields
requested by each of the two region requirements
are independent (i.e. there are no shared columns).
\item {\bf Privilege Non-Interference} - either
both region requirements are requesting read-only
privileges, or both region requirements are
requesting reduction privileges with the same
reduction operator.
\end{itemize}
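As a minimal illustration of the three tests, the disjunctive check can be
written as follows (Python, not the runtime's representation; a region
requirement is reduced here to a set of rows, a set of fields, a privilege
and, for reductions, an operator name, all of which are illustrative).
Representing a logical region by an explicit row set is only for clarity;
the runtime instead exploits the region tree structure described in
Section~\ref{sec:logtraversal}.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionRequirement:
    rows: frozenset        # rows named by the logical region
    fields: frozenset      # requested fields (columns)
    privilege: str         # 'read-only', 'read-write', or 'reduce'
    redop: str = None      # reduction operator if privilege == 'reduce'

def non_interfering(r1, r2):
    if r1.rows.isdisjoint(r2.rows):          # region disjointness
        return True
    if r1.fields.isdisjoint(r2.fields):      # field disjointness
        return True
    if r1.privilege == 'read-only' and r2.privilege == 'read-only':
        return True                          # privilege non-interference
    if (r1.privilege == 'reduce' and r2.privilege == 'reduce'
            and r1.redop == r2.redop):
        return True                          # same reduction operator
    return False

def tasks_non_interfering(reqs1, reqs2):
    # Two tasks are non-interfering iff all pairs of their region
    # requirements are non-interfering.
    return all(non_interfering(a, b) for a in reqs1 for b in reqs2)
\end{verbatim}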
Figure~\ref{fig:noncases} gives a visual depiction
of the three different non-interference criteria
from the node's logical region in the circuit
simulation from Chapter~\ref{chapter:model}.
The red and blue rectangles illustrate the data
requested from two different region requirements.
Non-interference can be proven in any of the three
dimensions: if the two logical regions access
disjoint sets of rows, if the sets of fields requested
are disjoint, or if the privileges are non-interfering.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{figs/NonInterfering.pdf}
\caption{Example Non-Interference Criteria from the Circuit Simulation
\label{fig:noncases}}
\end{figure}
Due to the disjunctive nature of the three conditions,
they can be applied in any order. If any of the
three non-interference criteria are met, then the
pair of region requirements are non-interfering
and testing the remaining conditions can be skipped.
Consequently, the order in which these criteria are
tested can have a significant performance impact.
It is therefore important that we pick an order for
testing these conditions that minimizes the number
of non-interference criteria tested.
The ordering that we select is region disjointness,
field disjointness, and finally privilege
non-interference. This ordering stems from the
observation that Legion programs commonly express
data parallelism at several different granularities
both across nodes and within nodes. It is therefore
most likely that two tasks will be proven to be
non-interfering using the disjointness of their
logical regions. After this, field set disjointness
is most likely. Finally, privilege non-interference
is a more complicated test and is therefore placed
last where it is least likely to be performed.
While it is possible to write Legion applications that
perform better with a different test ordering, it
has been our experience that the performance of this
ordering is sufficient for achieving high
non-interference test throughput. As we show in
Section~\ref{sec:logtraversal}, this ordering also
lends itself to a natural implementation based
on logical region trees.
\newcommand{\appnonint}[7]{
\begin{tikzpicture}[every node/.style={inner sep=0,outer sep=0}]
\node (n1) at (0,0) {\footnotesize\begin{tabular}{c}no tests \\ $0.0 \%$\end{tabular}};
\node (n2) at (2,1) [minimum width=0.4in] {\footnotesize\begin{tabular}{c}regions \\ $#1 \%$\end{tabular}};
\node (n3) at (2,0) [minimum width=0.4in] {\footnotesize\begin{tabular}{c}fields \\ $#2 \%$\end{tabular}};
\node (n4) at (2,-1) [minimum width=0.4in] {\footnotesize\begin{tabular}{c}privileges \\ $#3 \%$\end{tabular}};
\draw [->] (n1.north east) -- (n2.west);
\draw [->] (n1) -- (n3);
\draw [->] (n1.south east) -- (n4.west);
\node (n5) at (4.5,1) [minimum width=0.7in] {\footnotesize\begin{tabular}{c}regions+fields \\ $#4 \%$\end{tabular}};
\node (n6) at (4.5,0) [minimum width=0.7in] {\footnotesize\begin{tabular}{c}regions+privileges \\ $#5 \%$\end{tabular}};
\node (n7) at (4.5,-1) [minimum width=0.7in] {\footnotesize\begin{tabular}{c}fields+privileges \\ $#6 \%$\end{tabular}};
\draw [->] (n2) -- (n5);
\draw [->] (n2) -- (n6);
\draw [->] (n3) -- (n5);
\draw [->] (n3) -- (n7);
\draw [->] (n4) -- (n6);
\draw [->] (n4) -- (n7);
\node (n8) at (7,0) {\footnotesize\begin{tabular}{c}all tests \\ $#7 \%$\end{tabular}};
\draw [->] (n5.east) -- (n8.north west);
\draw [->] (n6) -- (n8);
\draw [->] (n7.east) -- (n8.south west);
\end{tikzpicture}
}
\begin{figure}
\centering
\begin{tabular}{c}
\subfloat[Circuit]{
\appnonint{69.0}{43.6}{25.7}{81.1}{76.3}{60.5}{85.5}
} \\
\subfloat[Fluid]{
\appnonint{99.2}{43.2}{20.5}{99.4}{99.7}{54.1}{99.8}
} \\
\subfloat[S3D]{
\appnonint{33.1}{98.4}{32.2}{98.6}{52.7}{99.4}{99.5}
} \\
\end{tabular}
\caption{Non-Interference Test Success Rates by Application\label{fig:nonint_venn}}
\end{figure}
To provide empirical evidence for our chosen ordering
of these tests, Figure~\ref{fig:nonint_venn} shows decision
diagrams for potential orderings of non-interference
tests for three real world applications: a circuit
simulation, a fluid flow simulation, and the combustion
simulation S3D discussed in Chapter~\ref{chapter:s3d}.
At each node of the decision diagrams, we show the
percentage of non-interference tests that would
succeed with that subset of tests. The percentage at
the end shows the overall percentage of non-interference
tests that succeed for a given application.
Ideally, we want to minimize the overall cost of the
non-interference tests, which favors the early use
of cheaper and/or more efficient tests. Although
there is considerable variability between applications,
region disjointness is the most effective test
overall. The use of region tree data structures
is essential to making this test inexpensive, and it
is the clear choice for the first test. Field disjointness
is the next obvious test as it finds significant
parallelism, especially in S3D. By using {\em field
masks} (discussed in Section~\ref{subsec:fieldmasks}) this
test can also be made inexpensive which justifies
performing it second. Finally, the more expensive
privilege non-interference test is placed last to
minimize the number of invocations.
\section{Logical Region Tree Traversal Algorithm}
\label{sec:logtraversal}
The goal of the logical region tree traversal algorithm
is to improve the efficiency of the dependence
analysis for a stream of sub-tasks $S$ within a
parent task $P$. To perform the dependence
analysis for any sub-task $T$ in $S$, we
need to find all other tasks that come before
$T$ in $S$ that interfere with at least one of
the region requirements requested by $T$. We therefore
need to perform a separate analysis for each
region requirement of $T$.
%We say that all the sub-tasks
%in $S$ are analyzed within the {\em context} of
%the parent task $P$. A context denotes the subset
%of privileges that the parent task owns on the
%region tree as part of its execution. (A context
%will also be given a technical definition in
%Section~\ref{subsec:logicalctx}.)
To detect interference on region requirements,
our analysis operates over the logical region
tree forest. As part of our analysis, we ensure
that after each task in $S$ has performed its
non-interference tests, it registers itself as a
{\em user} of the region tree node for each logical
region on which it requests privileges. A user
record stores information about the task including
its requested fields, privileges, and coherence.
User records allow later tasks to determine whether
dependences should be registered from later tasks
in $S$. By registering users at the region tree
nodes on which they requested privileges, we will
be able to easily elide non-interference tests based
on logical region disjointness (the first
non-interference condition from
Section~\ref{sec:noninterference}). In practice,
we do not need to store all the users on each logical
region tree node from previous tasks in the
stream $S$. Instead, we can aggressively prune tasks
that have finished executing or for which we can prove
there exists transitive dependences. We discuss these
optimizations further in Section~\ref{sec:logicaltree}.
To perform the dependence analysis for a sub-task $T$,
we need to traverse the region tree forest for each region
requirement in $T$ to find any potential tasks from
$S$ that are not disjoint on region usage. We first
compute the path from the logical region requested
to the corresponding logical region on which the
parent task $P$ owns privileges. This path is
guaranteed to exist because of the restriction
enforced by the Legion programming model that
all sub-tasks can only request privileges for
a subset of the privileges held by the parent task
(see Section~\ref{subsec:subtasks}). Using this
path we can immediately determine the nodes in the
logical region tree that must be visited because
they are interfering on region disjointness. This
set of nodes includes all nodes in the region tree
along the path, as well as any sub-trees of nodes
along that path that may contain interfering nodes.
By computing this path and any potential interfering
sub-trees, we immediately elide any region requirements
of previous tasks that are disjoint on region usage,
including region requirements in different region
trees within the region tree forest.
As an example of the region tree path, consider a
sub-task launched inside of the {\tt simulate\_circuit}
task from the example in Chapter~\ref{chapter:model}.
Figure~\ref{fig:interferencepath} illustrates both the
computed path and other logical regions that would need
to be analyzed for interferences for a region requirement
requesting privileges on the {\tt r\_all\_shared} logical
region. The interference path would run from the root node
of the logical region tree where the {\tt simulate\_circuit}
task has privileges to the {\tt r\_all\_shared} logical region
where the task is requesting privileges. In addition, the task
would also need to visit all the sub-trees of the
{\tt r\_all\_shared} logical region. All of the nodes that
would need to be analyzed for interference are colored in
red. Nodes that can be skipped due to being non-interfering
are shown in blue. It is important to note that the two-level
partitioning scheme that we chose in Chapter~\ref{chapter:model}
is what allows this analysis to directly omit all logical
regions in {\tt r\_all\_private} nodes from consideration
when analyzing region requirements that request privileges
on logical regions in the {\tt r\_all\_shared} logical region
sub-tree.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{figs/NonInterferencePath.pdf}
\caption{Example Non-Interference Path for Dependence Analysis
\label{fig:interferencepath}}
\end{figure}
Having determined the set of region tree nodes that
must be examined because they are interfering on
logical region usage, we can now perform the second
dimension of the non-interference test based on
field usage. For each node that interferes based on
region usage, we iterate through the
list of users registered at the node.
We then check whether the set of fields requested
by each user is disjoint from the
set of fields in the region requirement being
tested from $T$. If they are disjoint, then the
two region requirements are non-interfering, otherwise
we perform the non-interference test on privileges.
If both the field sets are interfering and the
privileges are interfering, then we record a
dependence between the two tasks, otherwise that
pair of region requirements is non-interfering.
Finally, after finishing the analysis for a
region requirement, we register the region
requirement at the destination node of the
path, ensuring that any future sub-tasks in
the stream $S$ will be able to determine
the necessary dependences.
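The traversal for a single region requirement can be summarized with the
following illustrative sketch (Python, not the runtime's data structures).
The \verb|disjoint_from| predicate stands in for the disjointness
information described in Section~\ref{subsec:logicalshape}, and the field
and privilege tests are passed in as parameters; all names are illustrative.
\begin{verbatim}
class RegionNode:
    """A node in a logical region tree (illustrative sketch only)."""
    def __init__(self, name, children=(), disjoint_from=None):
        self.name = name
        self.children = list(children)
        self.users = []                # users registered at this node
        # Stand-in for the disjointness information kept by index
        # space tree nodes: True if this sub-tree cannot alias `region`.
        self.disjoint_from = disjoint_from or (lambda region: False)

    def aliased_nodes(self, region):
        """All nodes in this sub-tree that may alias `region`."""
        if self.disjoint_from(region):
            return []
        out = [self]
        for child in self.children:
            out.extend(child.aliased_nodes(region))
        return out

def analyze_requirement(req, path, fields_disjoint, privileges_compatible):
    """Dependence analysis for one region requirement.  `path` runs from
    the node on which the parent task holds privileges down to the node
    on which `req` requests privileges."""
    deps = []
    to_visit = []
    for node in path:                  # nodes interfering on regions:
        to_visit.append(node)          # every node along the path ...
        for child in node.children:    # ... plus aliased sub-trees
            if child not in path:
                to_visit.extend(child.aliased_nodes(req.region))
    for node in to_visit:
        for user in node.users:
            if fields_disjoint(user, req):
                continue               # second non-interference test
            if privileges_compatible(user, req):
                continue               # third non-interference test
            deps.append((user, req))   # interfering: record a dependence
    path[-1].users.append(req)         # register at the destination node
    return deps
\end{verbatim}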
While this discussion has primarily described
sub-tasks as the elements within the stream $S$,
because the region tree analysis is performed
based on region requirements, the analysis is
easily generalized to all other operations within
the Legion programming model, including inline
mappings and explicit copy operations that also
use region requirements to describe their data usage.
\section{Logical Region Tree Data Structures}
\label{sec:logicaltree}
While the correctness of the traversal algorithm
described in the previous section is easy to
determine, achieving high-performance requires
optimization of both the algorithm as well as the data
structures used for the traversal. We first give
a brief description of how region tree data structures
are stored in the Legion runtime in
Section~\ref{subsec:logicalshape} and then we describe
each of the optimizations in subsequent sections.
\subsection{Region Tree Shape}
\label{subsec:logicalshape}
The shapes of region tree data structures in the
region tree forest are determined by the runtime calls
made by the application for creating index spaces and
partitioning them. Figure~\ref{fig:regforest} gives one
example of a region tree forest that might be instantiated
for an application. There are two index space trees,
rooted by $I_0$ and $I_1$, and two field spaces
$A$ and $B$. Every region tree in the region tree
forest is associated with an index space tree
and a field space. For example, the region tree
rooted by logical region $R_0$ is associated with
index space $I_0$ and field space $A$. Note that
the creation of region trees is done dynamically
and therefore multiple region trees can be created
with the same index space and field space (e.g.
both region trees rooted by $R_1$ and $R_2$ are
associated with the same index space and field
space, but are distinct logical region trees). Region
trees are not automatically created
for each pair of a field space with an index space
tree as is the case with index space $I_1$ and field
space $B$.
In both index space trees and region trees, the
node type alternates between levels.
Even numbered levels in the tree (starting with
zero at the root) consist of index space nodes
for index space trees and logical region nodes for
logical region trees. Odd levels in the trees
contain partition nodes. In the internal representation
of these data structures, every node in both index space
trees and region trees maintains pointers both to its
parent node and to all of its child nodes. Every region
tree node also maintains a pointer back to its corresponding
index space tree node.
\begin{figure}
\centering
\includegraphics[scale=0.7]{figs/RegionTreeForest}
\caption{Example Relationships Between Index Space Trees,
Field Spaces, and Logical Region Trees\label{fig:regforest}}
\end{figure}
Index space trees are eagerly instantiated when
runtime calls are made by the application. However,
to save space, logical region trees are lazily
instantiated from index space trees. When a new
top-level logical region is created, only a node
for that root region is created. The other nodes
in the region tree are only created when they are
requested as part of a dependence analysis traversal.
Consequently, on many nodes in the machine, only a subset
of the full region tree forest is ever instantiated for a
real Legion application, thereby saving considerable
space\footnote{To further save memory, the migration of region trees to other nodes
is also done lazily as we describe in
Chapter~\ref{chapter:distributed}.}.
Index space tree nodes (and therefore by proxy region
tree nodes) also keep track of two important properties
of their child nodes: disjointness and completeness.
Disjointness records which children of a node are entirely
independent of each other. Disjointness information
is used by the dependence analysis to determine which
nodes in the region tree must be visited. All logical
regions in sub-trees rooted by a region tree node that
is potentially aliased with another region tree node
along the interference path must be analyzed for
interference. In some cases, disjointness information is
provided by the application, such as when disjoint partitions
are created, indicating that all pairs of children within a
partition are disjoint. Alternatively, if enabled, the runtime
can dynamically test whether pairs of children are disjoint
(e.g. testing whether two specific logical regions in an
aliased partition are independent). In the case of the
dynamic disjointness tests, the results can be memoized to
help amortize the cost of performing the (sometimes
expensive) tests.
The second property recorded by all index space tree
nodes is completeness. A child is considered to be
complete if every row in the index space node is
contained in at least one child index space. Completeness
will be used to perform several important copy reduction
optimizations described in Chapter~\ref{chapter:physical}.
The disjointness and completeness properties of child
nodes behave differently under dynamic allocation of
rows in index spaces permitted by the Legion programming
model (see Section~\ref{subsec:indexspace}). The disjointness
of two children is invariant under dynamic allocation.
If a new row is allocated in one child node, then it is
impossible for it to be allocated in the other child
node. However, completeness of a child node is impacted
by dynamic allocation. Consider an index space $I$ with
two initially complete index partitions $P_1$ and $P_2$.
If a new entry is allocated in one of the index sub-spaces
of $P_1$ it is therefore also allocated in index space $I$.
While $P_1$ is still complete, $P_2$ can no longer be
considered complete. Therefore, while disjointness information
can be safely memoized under dynamic allocation, completeness
can only be cached and must always be invalidated whenever
dynamic allocation is performed.
\subsection{Epoch Lists}
\label{subsec:epochlists}
The first optimization that we perform is
designed to reduce the number of users that
must be considered on each node that we
traverse. The critical insight is that we
don't need to record all dependences that
exist between a task $T$ and earlier tasks
in the stream $S$ if $T$ has transitive
dependences through other tasks in $S$.
While there are many ways that we could detect
transitive dependences, we focus on a subset
of transitive dependences that are easy
to detect: those dependences that exist
through the same field with the same
logical region. The crucial insight
is that tasks within the same node using
the same fields will form {\em epochs}
of tasks with the same privileges. For
example, there may be an epoch of tasks
performing reductions to a field, followed
by an epoch of tasks that read the field
(note there can also be epochs containing
multiple tasks with read-write privileges
by using relaxed coherence modes that
are discussed in Chapter~\ref{chapter:relaxed}).
By only storing the most recent epochs of
tasks for each field, we can reduce the
number of tasks that must be considered
when traversing a region tree node.
Figure~\ref{fig:epochlist} shows example
epochs that might be generated from a stream
of tasks (accessing only a single field).
Epoch zero consists of tasks that are all
accessing the field with read-only privileges.
The next epoch contains a single read-write
task that records mapping dependences on all
tasks from epoch zero. Epoch two is a reduction
epoch, while epoch three is again a read-only
epoch. The important observation is that each
operation in an epoch needs to record a dependence
on all operations from the previous epoch.
Therefore, at most two epochs need to be tracked
at a time.
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{figs/EpochLists}
\caption{Epoch Lists Example\label{fig:epochlist}}
\end{figure}
To track epochs, we create two {\em epoch lists}
that are used to store all the tasks
necessary for performing dependence analysis
in the given node. The first epoch list stores
a set of {\em current} epoch users while the
second list stores the set of {\em previous}
epoch users. The important observation is that when
performing dependence analysis, only one of
two things will occur: either a user
will register dependences on all the users
in the current epoch (for a field), in which
case it will start a new current epoch, or it
will join the current epoch and must therefore
record dependences on all the users
in the previous epoch list (for a field).
These two epoch lists eliminate the need for
storing all previous users from the stream of
tasks not in the two most recent epochs, thereby
reducing the number of tasks for which we need
to perform non-interference tests when
registering a user within a region tree node.
When traversing a given region tree node for
the dependence analysis we need to detect which
of the two scenarios apply to the user
being analyzed. To detect the first scenario,
we test the region requirement from $T$ being
analyzed against all of the region requirements
in the current epoch list to see if it interferes
with all the users for a specific set of fields.
If the region requirement does interfere with all
users for some set of fields, we say that the
task has {\em dominated} those fields, and should
start a new epoch for those fields. For all the
dominated fields, the users of those fields in the
previous epoch list are filtered out, and the users
from the current epoch list are moved from the
current epoch list back to the previous epoch
list. If the region requirement also has non-dominated
fields, then it must also traverse the previous
epoch list to search for interfering region
requirements. It is important to note
that because we only add region requirements
at the destination node of the traversal algorithm,
it is possible not to observe any users of
specific fields in the current epoch list.
In these cases, the unobserved fields must
be considered non-dominating fields. The
alternative of considering them dominated
would result in premature filtering of region
requirements from the previous epoch list.
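The two-list mechanism can be sketched as follows for a single field
(Python, illustrative only; the per-field bookkeeping of the real
implementation is collapsed into one list pair, and \verb|interferes|
covers only the field and privilege tests, region disjointness having
already been handled by the tree traversal).
\begin{verbatim}
class EpochLists:
    """Current and previous epoch lists of one region tree node,
    shown here for a single field (illustrative sketch)."""
    def __init__(self):
        self.current = []         # users of the current epoch
        self.previous = []        # users of the previous epoch

    def register(self, user, interferes):
        """Register `user` and return the users it depends on."""
        deps = [u for u in self.current if interferes(u, user)]
        if deps and len(deps) == len(self.current):
            # The user dominates the field: start a new epoch.  Users
            # of the old previous epoch are filtered out entirely.
            self.previous = self.current
            self.current = [user]
        else:
            # Non-dominated (including the case of an empty current
            # epoch): join the current epoch and also check the
            # previous epoch for interfering users.
            deps.extend(u for u in self.previous if interferes(u, user))
            self.current.append(user)
        return deps
\end{verbatim}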
One important design decision associated with
epoch lists is whether to store previous
users in different lists for every field
or to maintain a single list of tasks and
filter based on field set disjointness as
part of the traversal. We opted to implement
the latter, as there can often be hundreds
or thousands of fields that could result
in large list overheads for only a few
users. Furthermore, most users request multiple
fields that would result in duplicated
meta-data for the same user over many lists.
Instead we maintain a single current epoch
list and a single previous epoch list and
rely on a fast mechanism for testing for
field set disjointness that we describe
in Section~\ref{subsec:fieldmasks}.
\subsection{Field Masks}
\label{subsec:fieldmasks}
For many Legion applications, including
the combustion simulation S3D discussed
in Chapter~\ref{chapter:s3d}, field spaces
can contain on the order of hundreds to
thousands of fields. In order to quickly
evaluate the field set disjointness condition
for non-interference, we need an efficient
representation of a set of fields. We use
{\em field masks} that store fields as
bit masks.
Field masks present a very compact way of
representing the potentially large space of
fields that can be used by region requirements.
It is important to note that we can only
implement field masks efficiently because of
the static upper bound on the number of fields
in a field space, which was described as part
of the Legion programming model in
Section~\ref{subsec:fieldspace}. Having unbounded
field masks would require dynamic memory allocation
that invalidates many of the important compiler
optimizations that make operations on field
masks fast.
Field masks support all of the common operations
associated with bit masks including conjunction
and disjunction as well as set subtraction. The
three most important operations that are supported
by field masks are conjunction, testing for an
empty set, and testing for disjointness with
another field mask. These operations are useful
for detecting the field set disjointness condition
of non-interference and serve as the basis of
region tree traversal algorithms for both logical
state in this chapter as well as physical state
that we discuss in Chapter~\ref{chapter:physical}.
To accelerate these important operations on field
masks we employ a simple optimization technique:
two-level field masks. Two-level field masks
contain a 64-bit summary representation of the
field mask. A bit set at index $i$ in the summary
mask indicates the presence of at least one set
bit at an index $j$ in the field mask, where
$j \bmod 64 = i$. Before testing
for conjunction, emptiness, or disjointness,
the summary masks for the field masks are checked
first. Since testing the summary masks only involves
executing a single instruction (on 64-bit architectures),
it can easily reduce the work associated with important
field mask operations, especially for field spaces
with large upper bounds on the number of fields.
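A two-level field mask can be sketched as follows (Python integers stand in
for the fixed-width bit masks of the real implementation; the summary is
formed by OR-ing the 64-bit words of the mask together, which yields
exactly the property described above).
\begin{verbatim}
WORD = 64

def summarize(bits):
    """Summary bit i is set iff some field bit j with j % 64 == i
    is set; formed by OR-ing the 64-bit words of the mask."""
    s = 0
    while bits:
        s |= bits & ((1 << WORD) - 1)
        bits >>= WORD
    return s

class FieldMask:
    """Two-level field mask sketch (the real implementation relies on
    a static upper bound on the number of fields)."""
    def __init__(self, fields=()):
        self.bits = 0
        for f in fields:
            self.bits |= 1 << f
        self.summary = summarize(self.bits)

    def disjoint(self, other):
        # The single-word summary test can prove disjointness cheaply;
        # only if it fails is the full mask comparison needed.
        if self.summary & other.summary == 0:
            return True
        return self.bits & other.bits == 0

    def empty(self):
        return self.bits == 0

    def __and__(self, other):
        result = FieldMask()
        if self.summary & other.summary != 0:  # otherwise provably empty
            result.bits = self.bits & other.bits
            result.summary = summarize(result.bits)
        return result
\end{verbatim}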
In the general case, field masks are implemented
using 64-bit unsigned integers. However, where
possible, our implementation of field masks also
takes advantage of the underlying hardware by
using SSE and AVX vector intrinsics for performing
field mask operations. It is important to note that
these vectorized bit operations do not conflict
with any of the underlying paths for vectorized
floating point hardware in most processors. Therefore
our field masks do not cause any performance degradation
when floating point units are shared between
multiple cores within a chip (as is the case on
some target architectures such as the AMD
Interlagos architecture).
\subsection{Region Tree States}
\label{subsec:logicalstate}
The next optimization that we perform for the
traversal algorithm adds some additional state
to each node in the region tree to
reduce the number of sub-trees in the region
tree that must be traversed when checking for
region disjointness in the non-interference test.
In our original version of the algorithm described
in Section~\ref{sec:logtraversal}, we computed
a path from where the parent task $P$ had privileges
to where the region requirement from sub-task $T$
was requesting privileges. In addition to traversing
all of the nodes along this path, we also described
that we needed to check all sub-trees that contain
regions that might not be disjoint from the logical
region in the target region requirement. To avoid
the traversal of many sub-trees, we
add state to every region tree node that records
which sub-trees are {\em open}. A sub-tree is
open if there exists at least one region requirement
from another sub-task from the stream $S$ that has
been registered in the sub-tree.
To further add to our efficiency, we also track the
fields that are open for different sub-trees, as well
as the privileges of the open fields. For example, a
sub-tree might only be open for one field or a small
number of fields. If those fields are disjoint from the
fields in the region requirement of $T$, then we
do not need to traverse the sub-tree. Similarly,
a sub-tree might be open for a set of fields with
read-only privileges which indicates that all
sub-tasks registered in the sub-tree are requesting
read-only privileges. If the region requirement
from $T$ being analyzed is also requesting read-only
privileges then dependence analysis for the sub-tree
can be skipped because we know all users are
non-interfering on privileges.
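The resulting skip test for an open sub-tree is simple (illustrative
sketch only; the field sets would be field masks in practice).
\begin{verbatim}
def can_skip_subtree(open_fields, open_privilege,
                     req_fields, req_privilege):
    """Return True if an open sub-tree need not be traversed for the
    region requirement being analyzed (illustrative sketch)."""
    if open_fields.isdisjoint(req_fields):
        return True     # open fields disjoint from requested fields
    if open_privilege == 'read-only' and req_privilege == 'read-only':
        return True     # every registered user is non-interfering
    return False
\end{verbatim}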
Care must be taken to maintain the proper state at
each node of the region tree. Therefore, as we perform the
traversal along the path from where the parent task $P$
has privileges to where the region requirement of $T$
is requesting privileges, we {\em open} the appropriate
child nodes and mark which fields are open with
specific privileges. In some cases the correct
sub-tree might already be open with the same or
a super-set of the privileges necessary (see the
semi-lattice from Figure~\ref{fig:privlattice}).
However, when privileges are different, there are
two possible solutions. First, we could elevate
privileges in the semi-lattice to soundly represent
the set of region requirements in a sub-tree; this
approach is easy to implement, but results in a
lack of precision. Alternatively,
we could {\em close} all potentially conflicting
sub-trees and then open the sub-tree we intend
to traverse with the precise privileges. In order to
maintain the fidelity of the dependence analysis, we
chose to implement the latter option.
A close operation on a sub-tree mutates the state
of the region tree in two ways. First, a close
operation on a field is the equivalent of dominating
all the users in the current epoch of the node at
the root of the close operation.
We therefore siphon all the users for the
field(s) being closed from the previous epoch list
and filter any users from the current epoch list
into the previous epoch list. We then hoist all
of the tasks in the current epoch lists in the
sub-tree, for the given field(s), up to the previous
epoch list of the node at the root of the sub-tree
being closed. While this reduces the precision of
the tasks for region analysis, it is always sound
as they now appear to be using a region node that
is a superset of the region node they were
originally using\footnote{Ultimately the loss in
precision is inconsequential because extra mapping
dependences need to be computed between users accessing
logical regions in different partitions in order to
correctly perform the pre-mapping traversals described
in Section~\ref{sec:premapping}.}. Furthermore, this
loss of precision is minimal since the primary reason
for performing close operations is to open a
different sub-tree whose regions all conflict with
the sub-tree being closed (e.g. closing one
partition sub-tree to open another). After the
close operation is complete, the traversal
can continue by opening up the sub-tree of
the next node in the appropriate mode.
Since close operations effectively mutate the
state of the epoch lists for the fields that
are closed, we record the close operations that
need to be performed between the previous
epoch and the current epoch in a {\em close
list}. It is important that each user added to
the current epoch list of a set of fields also
record dependences on all close operations that
must be performed for those fields. The reason
for this is that these close operations must
be performed prior to mapping any of the region
requirements for a task.
Since multiple tasks in the same epoch are
able to map in parallel, it is the
responsibility of the first task that maps
in an epoch to perform the close operations in
the physical state of the tree. We discuss the
details of how close operations are performed in the
physical tree in Chapter~\ref{chapter:physical}.
The close list is also filtered along with the
previous epoch list when one or more fields begins
a new epoch.
\begin{figure}[t]
\centering
\subfloat[State After CNC]{
\label{fig:cktstate_a}
\includegraphics[scale=0.5]{figs/CNC_State}
}
\subfloat[State After DC]{
\label{fig:cktstate_b}
\includegraphics[scale=0.5]{figs/DC_State}
}
\subfloat[State After UV]{
\label{fig:cktstate_c}
\includegraphics[scale=0.5]{figs/UV_State}
}
\caption{Example Circuit Region Tree States\label{fig:cktstate}}
\end{figure}
Figure~\ref{fig:cktstate} shows the state of the
logical partitions in the node logical region tree
after different sets of tasks perform their
dependence analysis. The first set of tasks to
map are the {\tt calculate\_new\_currents} (CNC)
tasks. In Figure~\ref{fig:cktstate_a}, all of the
CNC tasks have completed their analysis and registered
themselves at the appropriate logical region nodes
based upon their region requirements. In the process
of traversing from the root node, each task has opened
up partitions with the appropriate privileges\footnote{In
practice all nodes (both partitions and logical regions)
must be opened, but we only show the partitions
for simplicity.}. In this example, we show the steady-state
behavior in the middle of the loop, therefore intermediate
partitions are already open with read-write privileges
indicating the existence of modified state from earlier
loop iterations.
Figure~\ref{fig:cktstate_b} shows the
state of the region tree after the {\tt distribute\_charge}
(DC) tasks have finished their dependence analysis.
Note that all of the CNC tasks in the shared sub-tree have
been moved back to the {\tt all\_shared} logical region
as the result of a close operation necessary to transition
from the partition being open with read-only privileges
to reduction privileges. The {\tt all\_private} sub-region
did not require such a transition as it was already open
with read-write privileges that subsume reduction privileges.
Figure~\ref{fig:cktstate_b} also demonstrates the benefits
of reduction privileges: both the {\tt p\_shr\_nodes} and
{\tt p\_ghost\_nodes} partitions can be open at the same
time with reduction privileges.
Lastly, Figure~\ref{fig:cktstate_c} shows the state of
the region tree after the {\tt update\_voltages} (UV)
tasks have completed their analysis. Another close operation
was necessary to close up the {\tt p\_shr\_nodes} and
{\tt p\_ghost\_nodes} partitions to transition from
reduction privileges to the read-write privileges needed
by the UV tasks on the {\tt p\_shr\_nodes} partition.
Each of the UV tasks records a dependence on this close
operation as part of their dependence analysis.
\subsection{Privilege State Transitions}
\label{subsec:statetrans}
Figure~\ref{fig:childstate} shows the state transition
diagram for child nodes based on privileges. For
a given field in a specific state, arcs
show the transition that must occur based on the
privileges being requested by the region requirement
of $T$. For cases where the state of the privileges
move down or across the privilege semi-lattice, close
operations must first be performed before the next
sub-tree can be opened. One interesting observation
about this state diagram is that it is very similar
in many ways to the state diagram of directory-based
cache coherence protocols, with the only differences
being that the privilege state diagram has more potential
states (involving reductions), and that the granularity
of logical regions is much larger than individual cache
lines. By operating at a coarser granularity, our Legion
implementation is able to amortize the cost of
having a more complex state graph and maintain state
for each field of a logical region in software instead
of in hardware.
% Multiple reductions
% Reduction Optimization
Figure~\ref{fig:childstate} has an interesting property
when it comes to reductions. Note that there exists
an intermediate state called {\em single-reduce}. In
this state, only a single sub-tree is permitted to be
open for a specific kind of reduction. This state acts
as an intermediate state for deferring the decision of
whether a sub-tree from a node should be opened in
read-write or reduction mode. The decision is ambiguous
when the first reduction region requirement is registered
in a sub-tree: the runtime cannot be sure whether more
reductions of the same kind will open other sub-trees, or
if other operations such as reads or write will be
performed in the same sub-tree. If a different kind of
reduction or a read or a write task proceeds down the
same sub-tree, then the state is changed to be in
read-write mode. However, if a later sub-task
attempts to open a different sub-tree with the same
reduction mode, then the state is changed to be
in multiple-reduce mode. Deferring the decision about
whether to be in multiple-reduce or read-write mode
for reduction is important because it avoids guessing
by the runtime about the proper way to open the sub-tree,
which could result in unnecessary close operations
being performed.
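The transition function of Figure~\ref{fig:childstate} can be rendered as
the following sketch (illustrative only; the close operations that must
accompany transitions down or across the semi-lattice are not modelled, and
a reduction with a different operator is treated here as forcing read-write
mode).
\begin{verbatim}
def next_state(state, event, same_op=False, same_child=False):
    """Privilege state of an open child after `event`, one of 'read',
    'write', 'reduce' or 'close'.  For reductions, `same_op` and
    `same_child` indicate whether the operator and the target sub-tree
    match the one already open (illustrative sketch)."""
    if event == 'close':
        return 'closed'
    if state in ('closed', 'read-only'):
        return {'read': 'read-only',
                'write': 'read-write',
                'reduce': 'single-reduce'}[event]
    if state == 'read-write':
        return 'read-write'
    if state == 'single-reduce':
        if event == 'reduce' and same_op:
            return 'single-reduce' if same_child else 'multiple-reduce'
        return 'read-write'   # read, write, or a different reduction
    if state == 'multiple-reduce':
        if event == 'reduce' and same_op:
            return 'multiple-reduce'
        raise ValueError("close operation required first")
    raise ValueError("unknown state: " + state)
\end{verbatim}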
\begin{figure}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm,thick]
\node[initial,state,initial text=] (A) {Closed};
\node[state] (B) [below=3cm of A] {Read-Only};
\node[state] (C) [above right=1cm and 4.5cm of A] {Multiple-Reduce};
\node[state] (D) [below=2cm of C] {Single-Reduce};
\node[state] (E) [below=2cm of D] {Read-Write};
% Arrows from close
\path[->,anchor=east] (A) edge [bend right] node {read} (B);
\path[->,anchor=north] (A) edge [bend left] node {reduce} (D);
\path[->,anchor=east] (A) edge [bend right,pos=0.75] node {write} (E);
% Arrows from read-only
\path[->,anchor=west] (B) edge [bend right] node {close} (A);
\path[->,anchor=north] (B) edge [bend right] node {write} (E);
\path[->,anchor=east] (B) edge [bend left,right,pos=0.25] node {reduce} (D);
\path[->,anchor=east] (B) edge [loop below] node {read} (B);
% Arrows from read-write
\path[->,anchor=west] (E) edge [bend right,pos=0.15] node {close} (A);
\path[->,anchor=east] (E) edge [loop right] node {read,write,reduce} (E);
% Arrows for single-reduce
\path[->,anchor=east] (D) edge [bend left,right] node[align=center] {read,\\reduce (diff. op. or\\diff. child),\\write} (E);
\path[->,anchor=east] (D) edge [bend right,right] node[align=center] {reduce (same op. and\\diff child)} (C);
\path[->,anchor=east] (D) edge [loop right,right] node[align=center] {reduce (same op. and\\same child)} (D);
\path[->,anchor=north] (D) edge [bend left,pos=0.3] node {close} (A);
% Arrows for multi-reduce
\path[->,anchor=south] (C) edge [bend right,below] node {close} (A);
\path[->,anchor=north] (C) edge [loop right] node {reduce (same op.)} (C);
\end{tikzpicture}
\caption{Open Child Privilege State Diagram\label{fig:childstate}}
\end{figure}
\subsection{Logical Contexts}
\label{subsec:logicalctx}
The final optimization that we perform on
region trees is designed to reduce the memory
usage required for storing state associated
with region trees. Recall from
Section~\ref{subsec:logicalshape} that region
tree data structures are de-duplicated across
runtime instances. Since there may be many
tasks executing simultaneously within a runtime
instance, all of these tasks will be generating
different streams of sub-tasks. We therefore
need a mechanism for differentiating the state
that must be stored in the region tree forest
from different streams of tasks.
This is achieved by associating each
stream of sub-tasks with a {\em logical
context}. On every node in the region tree
forest there exists an array of {\em logical
state} objects that store the necessary meta-data
for supporting the logical region tree traversal
algorithms described in this chapter (e.g.
epoch lists, close lists, open children). The
array of logical state objects is indexed by a
logical context ID. Before a parent task $P$
begins to execute, it requests a logical context
ID from the runtime. When $P$ executes,
it generates a stream of sub-tasks all of which
are analyzed within the logical context allocated
to the parent, meaning they use the parent's
logical context ID to index into the logical
state arrays on the region tree nodes when
performing their dependence analysis.
In order to ensure correctness of Legion programs,
before executing, each task is allocated a logical
context by the runtime instance. This is necessary
since any task can launch arbitrary sub-tasks.
However, logical contexts can be an expensive
resource to allocate because of the implied memory
usage required on all instantiated nodes of a region
tree. To reduce context usage, Legion programmers
can indicate that certain task variants are
actually leaf task variants
(see Section~\ref{subsec:qualifiers}). Leaf task
variants are guaranteed not to perform any sub-task
or other operation launches.
The knowledge that no sub-tasks will be launched
by an executing task is sufficient to allow the
runtime to elide the allocation of a context to
the executing task, thereby reducing both context
and memory usage.
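A sketch of this arrangement is shown below (Python, illustrative only):
each region tree node keeps an array of per-context logical states, and the
runtime skips context allocation for leaf task variants.
\begin{verbatim}
class LogicalState:
    """Dependence-analysis state of one node for one logical context."""
    def __init__(self):
        self.current_epoch = []
        self.previous_epoch = []
        self.open_children = {}   # child -> (open fields, privilege)

class RegionTreeNode:
    def __init__(self, num_contexts):
        # Indexed by the logical context ID allocated to the parent task.
        self.logical_states = [LogicalState() for _ in range(num_contexts)]

    def state(self, context_id):
        return self.logical_states[context_id]

class ContextAllocator:
    """Context allocation sketch; leaf task variants need no context."""
    def __init__(self, num_contexts):
        self.free = list(range(num_contexts))

    def allocate(self, is_leaf_variant):
        return None if is_leaf_variant else self.free.pop()

    def release(self, context_id):
        if context_id is not None:
            self.free.append(context_id)
\end{verbatim}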
\section{Dynamic Dependence Graph}
\label{sec:mapdepgraph}
The result of the dependence analysis stage
is a {\em dynamic dependence graph} for a
given stream of operations generated by a parent
task. The dynamic dependence graph is a directed
acyclic graph where nodes represent operations
(e.g. sub-tasks, inline mappings, explicit region
copies) and edges represent dependences that
result from interfering region requirements.
It is impossible for there to exist cycles in
this graph as the dependence analysis stage
is performed serially for all sub-tasks
within the stream $S$ generated by parent
task $P$. While there are no
cycles within the dynamic dependence graph,
there can be multiple edges between nodes as
sub-tasks may have multiple interfering
region requirements. Exactly one dynamic
dependence graph is computed for the execution
of each parent task. While the entire graph
needs to be computed, we describe below how the
dynamic dependence graph does not
need to persist throughout the lifetime of
the parent task $P$.
Figure~\ref{fig:s3ddg} shows an example dynamic
dependence graph from the S3D application
described in detail in Chapter~\ref{chapter:s3d}.
Boxes represent operations that are performed as
part of the computation while edges represent
computed dependences between operations as a
result of the logical dependence analysis.
Edges primarily point from left to right.
The vertical span of the graph is therefore
indicative of the amount of task-level parallelism
available in S3D. It is important to realize
that this graph was generated from a fraction of
a time step in S3D for the smallest chemical
mechanism possible. Production S3D runs generate
graphs that would consume many pages if
depicted here.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.06]{figs/s3d_h2_smart_mapper}
\caption{Example Dynamic Dependence Graph from S3D\label{fig:s3ddg}}
\end{figure}
The directed edges indicating interference
between sub-tasks serve a dual purpose. First,
dependences between sub-tasks are used to determine
when it is safe for a sub-task to progress to
the mapping stage of the pipeline. A task is
only permitted to progress to the mapping stage
once all of the sub-tasks on which it has a
dependence have finished the mapping stage of
the pipeline. Under these circumstances, we
refer to the edges as {\em mapping dependences}.
By requiring all mapping dependences be
satisfied for a sub-task before it can map,
we ensure that it will properly compute any
dependences (either true or anti-dependences)
as part of its physical analysis (see
Chapter~\ref{chapter:physical}). To know when
mapping dependences have been satisfied, sub-tasks
register themselves with the source sub-tasks
of all their dependence edges. If the source
sub-task has yet to map, it will accept the
registration, otherwise it will indicate that
it has already finished the mapping stage
of the pipeline and indicate that no mapping
dependence is necessary.
Each sub-task records how many successful
registrations it makes; this is the number
of other sub-tasks that must finish the
mapping stage before it can progress to the
mapping stage itself. When a task finishes
the mapping stage, it notifies all its
registered waiters to indicate that it has
finished mapping. If any of the waiters no
longer have any outstanding registrations, then
they are placed on the {\em ready queue} of
tasks that are ready to map, and the mapper
is queried to choose which tasks in the
ready queue should be mapped next (see
Section~\ref{sec:mapbasic} for more details
on mapper queries and the ready queue).
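The registration protocol can be sketched as follows (single-threaded
Python for illustration; the real runtime must of course perform these
updates concurrently, and all names are illustrative).
\begin{verbatim}
from collections import deque

ready_queue = deque()    # tasks ready to map; consulted by the mapper

class Operation:
    """Mapping-dependence bookkeeping for one graph node (sketch)."""
    def __init__(self):
        self.mapped = False
        self.waiters = []     # operations waiting on us to finish mapping
        self.outstanding = 0  # successful registrations we have made

    def register_with(self, source):
        # Register with the source of a dependence edge; the source only
        # accepts the registration if it has not finished mapping yet.
        if source.accept_registration(self):
            self.outstanding += 1

    def accept_registration(self, waiter):
        if self.mapped:
            return False      # already past the mapping stage
        self.waiters.append(waiter)
        return True

    def finish_mapping(self):
        self.mapped = True
        for w in self.waiters:
            w.outstanding -= 1
            if w.outstanding == 0:
                ready_queue.append(w)  # all mapping dependences satisfied
        self.waiters.clear()
\end{verbatim}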
The second purpose of the edges in the
dynamic dependence graph is to act as
{\em commit edges}. Edges which represent
true data dependences (those on which the
privilege of the source is a reduction or
a write, and the privilege of the
destination is either read-only or read-write)
and for which the destination logical region
is at least as large as the source logical
region are considered commit edges. Commit
edges help to govern the reclamation of the
dynamic dependence graph so that the entire
graph need not persist throughout
the duration of the execution of the parent
task. Long running parent tasks can generate
hundreds or even thousands of sub-tasks and
operations, resulting in very large dynamic
dependence graphs. As tasks finish, commit edges
govern the parts of the dynamic dependence
graph that can be reclaimed.
The direction of commit edges is the
opposite of the direction for dependences
in the dynamic dependence graph. Every
node $N$ in the dynamic dependence graph
registers itself with all of the other
nodes that are sources on commit edges
that point to $N$. As tasks commit, they
notify all tasks that have registered
commit dependences on them. If at any
point all of the commit edges pointing
at a node have been satisfied then the
node can be safely reclaimed. The
intuition is that once all the commit
edges have been satisfied, then there
are no later tasks in the stream of
sub-tasks $S$ that can trigger a
roll-back that might require a
re-execution of the task at node $N$.
There are two other ways in which tasks
can pass the commit stage and thereby
become eligible for reclamation; they are
discussed in Chapters~\ref{chapter:mapping} and
\ref{chapter:resilience} respectively.
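Commit edges follow the same registration pattern, but in the reverse
direction. The sketch below (again with hypothetical names) reclaims a
node once every commit edge pointing at it has been satisfied:
\begin{verbatim}
class GraphNode:
    def __init__(self, name):
        self.name = name
        self.commit_waiters = []   # nodes that registered a commit dependence on us
        self.pending_commits = 0   # unsatisfied commit edges pointing at us
        self.reclaimed = False

    def register_commit_edge(self, source):
        # Record a commit edge source -> self.
        source.commit_waiters.append(self)
        self.pending_commits += 1

    def commit(self):
        # Notify every waiter; reclaim those whose commit edges are all satisfied.
        for waiter in self.commit_waiters:
            waiter.pending_commits -= 1
            if waiter.pending_commits == 0:
                # No later task in the stream can trigger a roll-back of this node.
                waiter.reclaimed = True
\end{verbatim}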
Due to the number of places in both the
region tree forest and the dynamic
dependence graph that contain references
to objects that represent sub-task and
other operations, it is expensive to
go through all these data structures
and prune out references. Instead, we
borrow an idea from \cite{Realm14} and
recycle the objects that represent
sub-tasks and other operations. We
assign each use of these objects a
{\em generation} that identifies a
specific operation that the task is
using. We then update all references
to these objects to include the generation.
All methods on the object (e.g. for
performing registration) require that
the generation be passed as well. If
the generation is older than the current
generation, then the method will return
that the version of the operation
being referenced has already committed.
Once a task has committed, it increments
its generation, and then adds itself
back to the pool of available objects
managed by the runtime to be recycled.
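A minimal sketch of the generation check (hypothetical names, following
the description above): every reference carries the generation it was
created against, and a call made with a stale generation simply reports
that the referenced operation has already committed.
\begin{verbatim}
class RecyclableOperation:
    def __init__(self):
        self.generation = 0
        self.registrations = []

    def register(self, generation, waiter):
        # Return False if the referenced version of this operation has committed.
        if generation < self.generation:
            return False
        self.registrations.append(waiter)
        return True

    def commit_and_recycle(self, pool):
        self.generation += 1          # invalidate all outstanding references
        self.registrations.clear()
        pool.append(self)             # hand the object back to the runtime's pool
\end{verbatim}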
\section{Memoizing Logical Dependence Traces}
\label{sec:tracing}
One very important optimization that the
dependence analysis stage supports is the ability
to capture {\em traces} of task executions.
Capturing traces of sub-tasks is inspired by
and has a direct analogy to a common feature
in hardware out-of-order processors: trace
caches. The idea behind capturing traces of
execution is that the dependence analysis
stage is the only inherently sequential
computation in the Legion task pipeline. All
other stages permit multiple tasks or operations
to be processed in parallel. The sequential
nature of this stage means that it can be
very expensive to do for large numbers of
tasks. By capturing traces of a subset
of operations in a stream, we can memoize
the dependence analysis results and replay
them later when the trace is encountered again.
The motivation for incorporating trace capture
as an important optimization is that many
Legion applications employ
large loops of repetitive task execution. For
example, most scientific simulations execute
for a large number of time steps, executing
either the same or similar sets of tasks
for each time step. Similarly, iterative
solvers execute the same set of tasks until
they converge. Owing to the multitude of
Legion applications that share this structure,
we deemed it prudent to include tracing as
an optimization.
Unlike traditional hardware trace caches that
are invisible to the application, the Legion
programming model requires applications to
explicitly indicate the beginning and end of
a trace with a runtime call. Both the start
and end runtime call
take a trace ID parameter for identifying
the trace. The first time a dynamic trace ID
is encountered within a context, the runtime
initializes an object for capturing the trace.
Each sub-task or other operation that is
launched during the trace is recorded. All
dependences in the dynamic dependence graph
between operations in the trace are also
recorded. Dependences between an operation
inside the trace and a second
operation outside of the trace are not
recorded. As we will see, we rely on a second
mechanism to handle these dependences when
the trace is re-executed. It is important to
note that we cannot eagerly prune dependences
for tasks that have already finished executing
when capturing a trace. Even though some operations
in a trace may have already committed, which
commonly occurs in larger traces, we still must
record the dependences since there is no
guarantee that the same scenario will occur
when the trace is replayed. Once the trace
capture is complete, the trace is frozen and
can be re-used. Trace objects only exist within
the context of a parent task and are automatically
reclaimed when the parent task completes.
After a trace has been captured, the next
time the trace is encountered, the Legion
runtime can automatically replay the results
of the dependence analysis stage instead
of having to recompute the dependences.
In order to avoid missing dependences between
operations within the trace and operations
that come before and after the trace, the
runtime inserts a mapping fence (see
Section~\ref{subsec:fences}) both before
the trace begins and after the trace ends.
These fences ensure that any missing
dependences across the trace boundary are
included. While these fences do add additional
dependences to the dynamic dependence graph
which may constrain execution, the assumption
is that the size of the trace will be
sufficiently large to amortize the additional
cost of the mapping fences.
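The capture-and-replay mechanism can be summarised with the Python sketch
below (a hypothetical interface, not the actual Legion runtime API): the
first execution of a trace ID records the results of the logical analysis,
and later executions replay them between the two mapping fences.
\begin{verbatim}
class TraceContext:
    # Stand-in for a parent task's context (illustrative only).
    def issue_mapping_fence(self):
        pass                              # orders the trace against outside operations

    def logical_analysis(self, src, dst):
        return (src, dst)                 # placeholder for the sequential analysis

class TraceCache:
    def __init__(self):
        self.traces = {}                  # trace id -> frozen list of dependences
        self.recording = None

    def begin(self, ctx, tid):
        ctx.issue_mapping_fence()
        self.recording = [] if tid not in self.traces else None

    def analyze(self, ctx, src, dst):
        if self.recording is not None:    # first execution: capture
            dep = ctx.logical_analysis(src, dst)
            self.recording.append(dep)
            return dep
        return None                       # replay: results are applied in end()

    def end(self, ctx, tid):
        if self.recording is not None:
            self.traces[tid] = self.recording   # freeze the captured trace
            self.recording = None
            replayed = []
        else:
            replayed = list(self.traces[tid])   # memoized dependences, no re-analysis
        ctx.issue_mapping_fence()
        return replayed
\end{verbatim}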
Another important requirement of the tracing
feature is that the re-execution of the trace
executes an identical stream of operations
to the one that was captured. The onus is
currently on the user to guarantee this
property, and any failure to maintain this
invariant will result in a runtime error. In the
future, we hope that additional logic can be
added to both the Legion compiler and runtime to
automatically add support for tracing. It
may be possible for the Legion runtime to
recognize long streams of isomorphic tasks
that are constantly replayed and should
therefore be traced. Furthermore, it should
be possible for the Legion compiler to
recognize loops in Legion tasks that generate
the same stream of sub-tasks with no
control statements and insert the proper
tracing calls.
\chapter{Quantitative Finance}
\label{ch:quantitative-finance}
\index{Quantitative Finance}
\newthought{This section deals with some applications of quantitative finance,
both computational and theoretical. }
\section{Equity Mathematical Models}
\index{Equity Mathematical Models}
Equities and finance are heavily influenced by random events (see \cite{fooledbyrandomness} for a detailed philosophical discussion). As such, the analysis of financial instruments lends itself to treatment as a stochastic process.
A {\em Geometric Brownian Motion}\index{Geometric Brownian Motion} (GBM) equation (a form of stochastic differential equation, or SDE, known as the Black-Scholes-Merton equation\cite{pyfinanceoreilly}) can be used to approximate a random process over time. In its continuous form this looks like \cite{advancedquantcpp}:
\begin{equation}
dS = \mu S\,dt+\sigma S\,dW
\end{equation}
This breaks the movement of a stock price down into two key effects:
\begin{itemize}
\item a deterministic effect (the left of the plus sign)
\item a stochastic effect (the right of the plus sign)
\end{itemize}
In the equation, $\mu$ is known as {\it drift}, $\sigma$ is {\it volatility}, $S$ is the stock price, $dt$ is the change in time and $dW$ is an increment in a {\it Wiener process}.
As equity markets are a discrete process, this equation must be transformed into a discrete equation.
\begin{equation}
S_{t+1}=S_t(1+r\Delta{}t+\sigma\varepsilon_t \sqrt{\Delta{}t})
\end{equation}
Here $\varepsilon_t$ is a sample from a Gaussian distribution with zero mean and a standard deviation of 1 (i.e. $N(0,1)$) and $r$ is the risk-free rate of return. This equation can be solved iteratively for a given time period if $r$, $\sigma$, $\varepsilon_t$ and $S_0$ are provided.
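A minimal NumPy sketch of this iteration (the parameter values are illustrative only):
\begin{verbatim}
import numpy as np

def gbm_path(S0, r, sigma, T, N, seed=42):
    # Simulate one stock price path with the discretised GBM recursion.
    rng = np.random.default_rng(seed)
    dt = T / N
    S = np.empty(N + 1)
    S[0] = S0
    for t in range(N):
        eps = rng.standard_normal()      # sample from N(0, 1)
        S[t + 1] = S[t] * (1 + r * dt + sigma * eps * np.sqrt(dt))
    return S

path = gbm_path(S0=100.0, r=0.05, sigma=0.2, T=1.0, N=252)
print(path[-1])      # simulated price after one year of daily steps
\end{verbatim}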
Another formulation, used for pricing European call options, gives a similar result for the discretised SDE\cite{pyfinanceoreilly}:
\begin{equation}
S_t=S_{t-\Delta t}e^{(r-\frac{1}{2}\sigma^2)\Delta t+\sigma\sqrt{\Delta t}z_t}
\end{equation}
In this instance, $z_t$ represents a standard normal random variable. Similar formulations can be determined for foreign exchange:
\begin{equation}
X_{t+1} = X_t(1+(r_d-r_f)\Delta{}t+\sigma\varepsilon_t\sqrt{\Delta{}t})
\label{eq:foreignexchangemodel}
\end{equation}
where $r_d$ and $r_f$ are the domestic and foreign risk-free rates of return.
\section{Structural and Intensity Models}
\TODO See \cite{advancedquantcpp} chapter 2.
\section{Monte Carlo Simulation}
\index{Monte Carlo Simulation}
Monte Carlo simulation is a method of estimating a probabilistic outcome through a large number of simulated trials. In a quantitative finance context, we may wish to estimate the future price of an equity by simulating a GBM equation $M$ times.
For instance, the process for valuing a call option by estimating the price of the underlying stock with Monte Carlo simulation could be as follows:
\begin{enumerate}
\item Generate $M$ different ``trajectories'' for the stock using a GBM simulation from time $t=0$ to time $t=T$. This generates a set of $N$ price estimates for $M$ different simulations, with the notation:
\begin{equation}
\{S_i^j\}\qquad i = 0...N,\qquad j=1...M
\end{equation}
This produces a vector of $M$ values for $S_T$,
\begin{equation}
\{S_T^i\}\qquad i = 1...M
\end{equation}
\item We next compute the pay off for each stock value. This is given by
\begin{equation}
H(S_T^i)\qquad i=1...M
\end{equation}
Where
\begin{equation}
H(S_T)=\max(S_T-K,0)
\end{equation}
and $K$ is the strike price of the option. The expected payoff can then be computed as the average of all payoffs:
\begin{equation}
E[H(S_T)]=\frac{1}{M}\sum_{i=1}^{M}H(S_T^i)
\end{equation}
\item The expected payoff should then be discounted to present value, either by applying a discount factor $DF_T$ or via
\begin{equation}
\pi = e^{-rT}\times E[H(S_T)]
\end{equation}
where $\pi$ is the value of the derivative.
\end{enumerate}
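Putting the three steps together, a Monte Carlo estimate of the call value can be sketched as follows (using the exact-discretisation formula above; the parameter values are purely illustrative):
\begin{verbatim}
import numpy as np

def mc_european_call(S0, K, r, sigma, T, N, M, seed=0):
    # Monte Carlo price of a European call from M simulated GBM trajectories.
    rng = np.random.default_rng(seed)
    dt = T / N
    S = np.full(M, S0, dtype=float)
    for _ in range(N):
        z = rng.standard_normal(M)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    payoff = np.maximum(S - K, 0.0)         # H(S_T) = max(S_T - K, 0)
    return np.exp(-r * T) * payoff.mean()   # discount the expected payoff

print(mc_european_call(S0=100, K=105, r=0.05, sigma=0.2, T=1.0, N=50, M=100000))
\end{verbatim}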
\section{Binomial Trees \cite{advancedquantcpp}}
\index{Binomial Trees}
This approach builds a tree of possible prices. At each stage, the underlying can be assumed to go up or down by a given amount. The amount of change up ($u$) or down ($d$) is described by
\begin{equation}
u=e^{\sigma\sqrt{\Delta{}t}}
\end{equation}
\begin{equation}
d=e^{-\sigma\sqrt{\Delta{}t}}
\end{equation}
The probability of an equity going up $p$ is
\begin{equation}
p=\frac{e^{r\Delta{}t}-d}{u-d}
\end{equation}
The probability of the underlying going down is $1-p$. The binomial tree is then built in the following phases:
\begin{enumerate}
\item Construct a tree where each level corresponds to a time step in the simulation period from $t=0$ to $t=T$. For example, in two simulation steps:
\begin{equation}
t=0:\quad S = S_0
\end{equation}
\begin{equation}
t=t_1:\quad S = uS_0,\qquad dS_0
\end{equation}
\begin{equation}
t=t_2:\quad S = u^2S_0,\qquad udS_0,\qquad duS_0,\qquad d^2S_0
\end{equation}
Note that the central value at the last period is shared between adjacent nodes, hence there are only three distinct estimates after $N=2$ steps.
This produces a number of prices at each time step. The notation below indexes the estimates at time $T$ by $k$; after $N$ time steps there are $N+1$ of them:
\begin{equation}
\{S_T^k\}\qquad k=1...N+1
\end{equation}
\item Once the tree has been built, the payoff $H(S_T^k)$ should be calculated for each $S_T^k$
\item Finally, the tree is traversed back up towards the root node, calculating the discounted, probability-weighted value of each node. Consider a given node that is not a leaf (i.e. $t\neq T$): it has two children, one for the up price (denoted $S_T^u$) and one for the down price, $S_T^d$. The value for the parent node is given by
\begin{equation}
V_{T-1}^k = e^{-r\Delta{}t}[pH(S_T^u) + (1-p)H(S_T^d)]
\end{equation}
In this case $V_T^k$ is shorthand for $H(S_T^k)$ at the leaf nodes.
\item Once the tree has been traversed back to the top, the value of the derivative is $\pi=V_1^1$.
\end{enumerate}
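These phases translate into a few lines of Python (a plain Cox-Ross-Rubinstein style implementation for a European call; names and values are illustrative):
\begin{verbatim}
import numpy as np

def binomial_call(S0, K, r, sigma, T, N):
    # European call value from an N-step binomial tree.
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = np.exp(-sigma * np.sqrt(dt))
    p = (np.exp(r * dt) - d) / (u - d)
    # Terminal prices S_T^k for k = 0..N (N+1 distinct values).
    S_T = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(S_T - K, 0.0)            # payoff at the leaves
    for _ in range(N):                      # traverse back towards the root
        V = np.exp(-r * dt) * (p * V[:-1] + (1 - p) * V[1:])
    return V[0]

print(binomial_call(S0=100, K=105, r=0.05, sigma=0.2, T=1.0, N=500))
\end{verbatim}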
\section{Finite Difference Method}
\index{Finite Difference Method}
The finite difference method is a technique for discretising a differential equation\cite{advancedquantcpp}. In the quantitative finance setting, we want to discretise partial differential equations (PDEs). The finite difference method is based on the relationship:
\begin{equation}
f'(x) = \frac{df}{dx} \approx \frac{\Delta f}{\Delta x} = \frac{f_{i+1}-f_i}{\Delta x}
\end{equation}
The most important PDE in finance is the {\em Black-Scholes PDE}
\index{Black-Scholes PDE}, which is given by:
\begin{equation}
\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2S^2\frac{\partial^2V}{\partial S^2}+rS\frac{\partial V}{\partial S}-rV=0
\label{eq:blackscholeseq}
\end{equation}
This is usually solved in the $S$ and $t$ axes, where $S\in [a,b]$ and $t\in [0,T]$. The domain of this equation is said to be
\begin{equation}
\Omega=\{(S,t) : S\in [a,b],\ t\in [0,T]\}
\end{equation}
In other words, as the finite difference method is solving some partial differential equation in $S$ and $t$, the solution space is the rectangular domain defined by the ranges of $S$ and $t$. For a European call,
\begin{equation}
V(S,T) = \max(S-K,0)
\end{equation}
The boundary conditions are $V(a,t)=0$ and $V(b,t)=S$. This equation can be transformed with some variable substitution so that
\begin{equation}
\frac{\partial u}{\partial\tau}=\frac{\partial^2 u}{\partial x^2}\qquad -\infty<x<\infty,\ \tau>0
\end{equation}
This is a dimensionless PDE with a new solution domain $\Omega=\{(x,\tau)\}$. The payoff relationship therefore becomes
\sidenote{Where $k=\frac{r}{0.5\times\sigma^2}$}
\begin{equation}
u(x,0) = \max(e^{\frac{1}{2}(k+1)x}-e^{\frac{1}{2}(k-1)x},0)
\end{equation}
Using finite differences, the explicit update relation can be written as\sidenote{Where $\alpha=\frac{\Delta\tau}{(\Delta x)^2}$}
\begin{equation}
u_{i,j+1}=\alpha u_{i+1,j}+(1-2\alpha)u_{i,j}+\alpha u_{i-1,j}
\end{equation}
This relationship can be solved iteratively, using the following steps:
\begin{enumerate}
\item Discretise the domain into N space divisions of $dS$ and M time divisions of $dT$. Use these to determine the steps $\Delta\tau,\Delta x$.
\item Use finite differences to approximate the derivatives
\item Calculate the results of the equation iteratively for each time step
\end{enumerate}
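A compact sketch of the explicit scheme for the transformed heat-equation form is given below (the grid parameters are illustrative; note that the explicit scheme is only stable for $\alpha \le 1/2$):
\begin{verbatim}
import numpy as np

def explicit_step(u, alpha):
    # One explicit finite-difference step u_{i,j} -> u_{i,j+1}.
    u_next = u.copy()
    u_next[1:-1] = alpha * u[2:] + (1 - 2 * alpha) * u[1:-1] + alpha * u[:-2]
    return u_next                     # boundary values are left untouched

# Illustrative grid on x in [-3, 3], with k = r / (0.5 * sigma^2).
x = np.linspace(-3.0, 3.0, 301)
dx = x[1] - x[0]
dtau = 0.4 * dx**2                    # keeps alpha = dtau / dx^2 below 1/2
alpha = dtau / dx**2
k = 0.05 / (0.5 * 0.2**2)
u = np.maximum(np.exp(0.5 * (k + 1) * x) - np.exp(0.5 * (k - 1) * x), 0.0)
for _ in range(200):                  # march forward in tau
    u = explicit_step(u, alpha)
\end{verbatim}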
For a worked example, see \cite{advancedquantcpp}, end of chapter 3.
\documentclass[12pt,openright,twoside]{book}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{geometry}
\usepackage{pdflscape}
\usepackage{natbib}
\usepackage{amsmath}
\DeclareMathOperator{\sgn}{sgn}
\usepackage{amssymb}
\usepackage{pdflscape}
%%\usepackage{tabu}
\usepackage{caption}
%%\usepackage{floatrow}
\usepackage{enumerate}
\usepackage{enumitem}
\usepackage{titlesec}
\usepackage{wrapfig}
\usepackage[toc,page]{appendix}
\usepackage{lmodern} % for bold teletype font
\usepackage{amsmath} % for \hookrightarrow
%\usepackage{xcolor} % for \textcolor
\usepackage[usenames,dvipsnames]{color}
\usepackage{listings}
\lstset{
columns=fullflexible,
frame=single,
breaklines=true,
postbreak=\mbox{\textcolor{red}{$\hookrightarrow$}\space},
basicstyle=\ttfamily,
numbers=left,
numberstyle=\small\ttfamily\color{Gray},
stepnumber=1,
numbersep=10pt,
numberfirstline=true,
numberblanklines=true,
tabsize=4,
lineskip=-1.5pt,
extendedchars=true,
keywordstyle=\color{Blue}\bfseries,
identifierstyle=, % using emph or index keywords
commentstyle=\sffamily\color{OliveGreen},
stringstyle=\color{Maroon},
showstringspaces=false,
showtabs=false,
upquote=false,
%texcl=true % interpet comments as LaTeX
}
\lstdefinelanguage{julia}
{
keywordsprefix=\@,
morekeywords={
exit,whos,edit,load,is,isa,isequal,typeof,tuple,ntuple,uid,hash,finalizer,convert,promote,
subtype,typemin,typemax,realmin,realmax,sizeof,eps,promote_type,method_exists,applicable,
invoke,dlopen,dlsym,system,error,throw,assert,new,Inf,Nan,pi,im,begin,while,for,in,return,
break,continue,macro,quote,let,if,elseif,else,try,catch,end,bitstype,ccall,do,using,module,
import,export,importall,baremodule,immutable,local,global,const,Bool,Int,Int8,Int16,Int32,
Int64,Uint,Uint8,Uint16,Uint32,Uint64,Float32,Float64,Complex64,Complex128,Any,Nothing,None,
function,type,typealias,abstract
},
sensitive=true,
morecomment=[l]{\#},
morestring=[b]',
morestring=[b]"
}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\usepackage{hyperref}
\hypersetup{
colorlinks=false
}
\titleformat{\chapter}
{\Large\bfseries} % format
{} % label
{0pt} % sep
{\LARGE} % before-code
\renewcommand{\figurename}{\textbf{Figure}}
\renewcommand{\tablename}{\textbf{Table}}
\newcommand{\HRule}{\rule{\linewidth}{0.4mm}}
\renewcommand{\contentsname}{{\Huge Table of Contents}}
\renewcommand{\chaptername}{{\Huge }}
%\renewcommand{\thebibliography}{\Bibliografie}
\geometry{
a4paper,
total={210mm,297mm},
left=25.0mm,
right=25.0mm,
top=25.0mm,
bottom=30.0mm,
}
\setcounter{tocdepth}{3}
%level -1: part, 0: chapter, 1: section, etc.
\begin{document}
\begin{titlepage}
\begin{center}
\includegraphics[width=16cm]{./header.png}
\vspace{4cm}
\HRule \\[0.3cm]
{\Large \textsc {Simulation of Potts Model on a Dynamically Rewired Network}}\\
\HRule \\[1.1cm]
\textsc{\large Bachelor's Thesis}\\[4cm]
\begin{flushleft} \large
{Author} \\[0.1cm]
Luca Mircea MIHĂILESCU
\end{flushleft}
\begin{flushright} \large
{Scientific coordinator} \\[0.1cm]
Conf. Univ. Dr. Alexandru NICOLIN
\end{flushright}
\vfill
% Partea de jos a paginii
%% {\large \today}
{\large Bucharest, 2020}
\end{center}
\end{titlepage}
\newpage
\vspace*{\fill}
\thispagestyle{empty}
\newpage
\thispagestyle{plain} \pagenumbering{roman}
\vspace*{36pt}
\begin{center}
{\LARGE \textbf{Acknowledgements}}
\end{center}
\vspace{36pt}
I would like to acknowledge and thank the following important people who have supported me, not only during the course of this project, but throughout my Bachelor's degree.\\
Firstly, I would like to express my gratitude to my supervisor, Assistant Professor Dr. Alexandru Nicolin, for his support, guidance in the field of numerical methods and simulations, and insight throughout this research project.\\
I would also like to thank my colleague, Sebastian
Micluță-Câmpeanu. Without his expertise in the Julia programming language, I would not have been able to run my simulations at the scale I did.\\
And finally, I would like to thank all my close friends and family. You have all helped me to focus on what has been a hugely rewarding and enriching process.\\
\vspace{40pt}
\vspace*{\fill}
\newpage
\thispagestyle{empty} \vspace*{\fill} \tableofcontents
\setlength\parindent{0pt}
\setcounter{page}{0}
%% this comand remove indent
\newpage
\pagenumbering{arabic}
\chapter{Introduction}
\label{intro}
The understanding of the laws which govern the behaviour of social masses is one of the outstanding challenges of modern research. It is now more important than ever as our democratic societies are threatened by the rise of \textit{en masse} data mining and nontransparent social media algorithms\cite{o'neil_2016}. Fortunately, physics can lend a hand in building such an understanding. The idea of a physical modeling of social phenomena is not at all a new one. In an 1825 essay, French philosopher Auguste Comte defines social physics as:\\
\begin{displayquote}
"that science which occupies itself with social phenomena, considered in the same light as astronomical, physical, chemical, and physiological phenomena, that is to say as being subject to natural and invariable Laws the discovery of which is the special object of its researches." \cite{iggers_1959,comte_1825}
\end{displayquote}
\vspace{14pt}
Ten years later, Belgian statistician Adolphe Quetelet publishes his \textit{Essay on Social Physics}\cite{quetelet_1835}, which proposes characterising social statistics using the concept of the 'average man' which would be built on measured variables that follow a normal distribution\cite{jahoda_2015}. After this essay, Comte would go on to refer to his new field as sociology, out of fear of being regarded as a follower.\\
The developments in the fields of social statistics were well known to Maxwell and Boltzmann and played a role in their embracing a statistical description of gases rather than deriving the macroscopic laws of gases from the individual motions of particles, thus laying the foundations of modern statistical physics\cite{porter}. In an 1873 lecture to the British Association, Maxwell argues that physicists have started to employ the methods already used at the time by social statisticians\cite{maxwell_1873}. Boltzmann, in the introduction to a scientific paper published by the Vienna Academy of Science a few years earlier, similarly states that the connection between the theory of heat and 'the principle of living forces' has been "known for a long time already"\cite{boltzmann_1866}.\\
In recent years, with the increased accessibility of computational resources and large databases (mostly thanks to the Internet), the field of social dynamics has made the transition from philosophical thought experiments to concrete research efforts worldwide\cite{castellano_fortunato_loreto_2009}. However, to apply the tools and concepts of thermodynamics to the study of society, one needs to understand their meaning in a social context. One such interpretation of thermodynamic concepts in sociology was produced by J. Mimkes in 1995 \cite{mimkes_1995}. According to his study, a binary multicultural society can be understood using the model of regular solutions, which is applied to metal alloys. Members of two communities can manifest sympathy towards members outside the community (attractive interaction), be indifferent to them (ideal solution), or manifest antipathy towards them (repulsive interaction). In this interpretation the Gibbs free energy $G$ describes the general happiness of the society, and the temperature $T$ can be understood as tolerance, which can make society more united despite the differences between the two communities. Table~\ref{mimkes} shows all the conclusions drawn regarding the equivalence between thermodynamics and social science.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Equivalence of thermodynamics terms to social science according to Mimkes' model of regular mixtures}}
\begin{tabular}{ccc}
\hline\hline
Abbreviation & Natural Science & Social Science \\
\hline
A-B & Alloys & Societies \\
$x$ & atomic percentage & size of minority (\%) \\
& \underline{Functions} & \underline{Feelings} \\
$G$ & free enthalpy & general happiness \\
$T$ & temperature & tolerance \\
$E_{AA}$ & cohesive energy & tradition, heritage\\
$E_{AB}>0$ & cohesive energy & curiosity, love \\
$E_{AB}<0$ & repelling energy & distrust, hate \\
$E=0$ & no cohesion & apathy \\
$\epsilon>0$ & attractive interaction & sympathy \\
$\epsilon=0$ & ideal solution & indifference \\
$\epsilon<0$ & repelling interaction & antipathy \\
& \underline{State of alloys} & \underline{State of Society} \\
& disorder, solubility & integration \\
& solubility limit & segregation \\
& phase diagram & intermarriage diagram \\
\hline \label{mimkes}
\end{tabular}
\end{small}
\end{table}
\section{The Ising paradigm}
Whether we're focusing on opinions, social status, cultural and linguistic features, or human kinematics, models can be devised in terms of small sets of variables. Of course, they would be oversimplifications, but qualitative properties of large-scale phenomena do not necessarily depend on the microscopic details of the process. As such, simplified models can offer valuable information about macroscopic features such as symmetries, dimensionality, conservation laws, etc. One of the most relevant models in physics for this kind of analysis is the Ising model for ferromagnets\cite{binney_2002}. Beyond its physical significance, the Ising ferromagnet can also serve as a simple model for opinion dynamics: spins can be seen as agents under the influence of the majority of their interacting partners.\\
Let us consider a collection of $N$ points (i.e. agents) with a spin (i.e. opinion) $s_i=\pm1$. For any two neighbouring points $i,j$ there is an interaction $J_{ij}$. Energetically, this interaction determines each spin to be aligned with its nearest neighbours. When no external magnetic field is present, the total energy of the system is equal to the Hamiltonian function
\begin{equation}
H=-\frac{1}{2}\sum_{(i,j)}J_{ij}s_is_j
\label{ising-hamiltonian}
\end{equation}
\vspace{14pt}
where the sum runs over the pairs of neighbours. The most common implementation of the Ising dynamics is the Metropolis algorithm\cite{landau_binder_2015}. In it, at each step of the simulation a randomly chosen spin is flipped with probability $\min(1,\exp(-\Delta E/k_BT))$, where $\Delta E$ is the change in energy, $k_B$ is the Boltzmann constant, and $T$ is the temperature. The interactions driven by (\ref{ising-hamiltonian}) should lead to a completely homogeneous state: either all spins are positive, or all are negative. However, this holds only at low temperatures. At temperatures above a critical temperature $T_c$, fluctuations injected by thermal noise destroy order. By definition, the average magnetization is\\
\begin{equation}
M=\frac{1}{N}\sum_j\langle s_j \rangle
\label{ising-magnetization}
\end{equation}
\vspace{14pt}
where the brackets denote the average over different iterations. For $T<T_c$ the magnetization will be $M(T)>0$, while for $T>T_c$, $M(T)=0$. Also worth mentioning is the Potts model\cite{wu_1982}, which will be relevant later on for the model I decided to employ. The difference in its case is that spins can take one out of $q$ values. Identical neighbouring spins are energetically favored. The Potts model has found uses such as the simulation of sorting in a mixture of biological cells\cite{graner_glazier_1992}, or computer vision and image restoration\cite{boykov_veksler_zabih_2001}. The Ising model corresponds to the Potts special case $q=2$.\\
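As a concrete illustration, a minimal Metropolis sweep for the two-dimensional Ising model (setting $J=1$ and $k_B=1$; the lattice size and temperature are arbitrary) can be written as follows:
\begin{lstlisting}[language=Python]
import numpy as np

def metropolis_sweep(spins, T, rng):
    # One Metropolis sweep over an L x L Ising lattice (J = 1, k_B = 1).
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                      spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * neighbours          # energy change of flipping s_ij
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1                      # accept the flip
    return spins

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    spins = metropolis_sweep(spins, T=2.0, rng=rng)
print(spins.mean())                                # magnetization per spin
\end{lstlisting}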
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/t<t_c.PNG}
\caption{$T<T_c$}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/t=t_c.PNG}
\caption{$T\sim T_c$}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/t>t_c.PNG}
\caption{$T>T_c$}
\end{subfigure}
\caption{\textit{{\small Spatial configurations in the Ising model. Black squares represent spins with $\sigma=+1$ and white one correspond to $\sigma=-1$. (Reprinted from "Behavior of Early Warnings near the Critical Temperature in the Two-Dimensional Ising Model," by Morales I.O., Landa E., Angeles C.C., Toledo J.C., Rivera A.L., et al., 2015, PLOS ONE, 10(6), doi:10.1371/journal.pone.0130751. Copyright 2015 by Morales et al. Distributed under terms of the Creative Commons Attribution License.)}}}
\label{quench}
\end{figure}
Yet another variation to the Ising model can be found in Axelrod's model of dissemination of culture\cite{axelrod_1997}. From the point of view of statistical physics it is a vectorial generalization of the Ising-like models (here culture refers to a vector of variables denoting a set of inextricable 'cultural' characteristics). The model consists of individuals located on a network (or lattice) endowed with a vector of F integer variables $(\sigma_1,...,\sigma_F)$. Each variable, or 'cultural features', can assume values $\sigma_f=0,1,...,q$. These cultural features are supposed to model different “beliefs, attitudes, and behavior”. Each step, an individual $i$ together with a neighbor $j$ are selected and the similarity between them is calculated:\\
\begin{equation}
\omega_{i,j}=\frac{1}{F}\sum^F_{f=1}\delta_{\sigma_f(i),\sigma_f(j)}
\label{axelrod-probability}
\end{equation}
\vspace{14pt}
where $\delta_{i,j}$ is the Kronecker delta. Then, with probability $\omega_{i,j}$ one of the features for which traits are different is set equal to the neighbor's ($\sigma_f(i)=\sigma_f(j)$). The phenomenology these dynamics determine is not trivial, however, and predicts the emergence of polarization despite the tendency of interacting people to become more alike.
The Ising model has been applied to describe business confidence, segregation, and language change\cite{stauffer_2008}. In the last two cases\cite{schelling_1971,nettle_1999}, the authors were not aware of the Ising model, and designed more complex simulations that were less flexible. In the case of business confidence\cite{hohnisch_pittnauer_solomon_stauffer_2005} good news or bad news can lead to a uniform optimist or pessimist overlook in the population, if the news follow in too quick succession (i.e. the field oscillates too much), people will start ignoring them and adopting random opinions \cite{hohnisch_stauffer_pittnauer_2008}. In the study of segregation, temperature T emerges as a measure of tolerance, with individual agents possibly having their own T which might change over time. In the case of language change, it seems that the rate of change for certain characteristics decays as the population gets larger. If the agents only exchange characteristics with their neighbours, this influence is weak.\\
\section{Scale-free distributions}
A very important role in the theory of critical phase transitions (such as the one reviewed earlier in the case of the Ising model) is played by critical point exponents\cite{yeomans_1992}. To arrive to critical exponents, we need to define, for convenience, a measure of the deviation in temperature from the critical temperature $T_c$:\\
\begin{equation}
\tau=\frac{T-T_c}{T_c}
\label{deviation-critical}
\end{equation}
\vspace{14pt}
The critical exponent associated with a function will then be:\\
\begin{equation}
\lambda=\lim_{\tau\to 0}\frac{\ln|F(\tau)|}{\ln|\tau|}
\label{critical-exponent}
\end{equation}
\vspace{14pt}
More usually, we will encounter the following, equivalent, relation:\\
\begin{equation}
F(\tau)\sim |\tau|^{-\lambda}
\label{critical-exponent-relation}
\end{equation}
\vspace{14pt}
Here, the $\sim$ sign is used instead of an equality because (\ref{critical-exponent-relation}) only captures the asymptotic behaviour of the function $F(\tau)$ as $\tau \to 0$. In magnetic systems, critical exponents are defined for several functions, some of the most common of which are listed in table~\ref{magnetic-functions}.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Commonly used critical exponents for magnetic and fluid systems}}
\begin{tabular}{ccc}
\hline \hline
Magnetic system\\
\hline
Zero-field specific heat & $C_H\sim|\tau|^{-\alpha}$ \\
Zero-field magnetization & $M\sim(-\tau)^{\beta}$ \\
Zero-field isothermal susceptibility & $\chi_T\sim|\tau|^{-\gamma}$\\
Critical isotherm ($t=0$) & $H\sim|M|^{\delta}\sgn (M)$ \\
Correlation length & $\xi\sim|\tau|^{-\nu}$ \\
Pair correlation function at $T_C$ & $G(\Vec{r})\sim 1/r^{d-2+\eta}$ \\
\hline
Fluid system\\
\hline
Specific heat at constant volume & $C_V\sim |t|^{-\alpha}$\\
Liquid-gas density difference & $(\rho_l-\rho_g)\sim(-t)^\beta$\\
Isothermal compressibility & $k_T\sim|t|^{-\gamma}$\\
Critical isotherm ($t=0$) & $P-P_c\sim|\rho_l-\rho_g|^\delta\sgn(\rho_l-\rho_g)$\\
Correlation length & $\xi\sim|\tau|^{-\nu}$\\
Pair correlation function at $T_C$ & $G(\Vec{r})\sim 1/r^{d-2+\eta}$ \\
\hline
\label{magnetic-functions}
\end{tabular}
\end{small}
\end{table}
We call a distribution scale-free when it follows a power law like the one at (\ref{critical-exponent-relation}). Essentially, when measuring something, if the distribution of results follows a power-law it means that the measured phenomenon exhibits features at all scales. This occurrence is not at all limited to critical phase transitions, however\cite{pinto_lopes_machado_2012}. For example, the distribution of earthquake magnitudes seems to obey a $k^{-\gamma}$ function with $\gamma\approx3.04$, number of hits on web sites, $\gamma\approx2.40$, intensity of wars, $\gamma\approx1.80$, and intensity of solar flares, $\gamma\approx1.83$ \cite{newman_2005}. Studies have also been done on the rank-frequency distribution of words in various languages, of which I will just mention Romanian, with $\gamma\approx1$ \cite{cocioceanu_raportaru_nicolin_jakimovski_2017}, and English, with $\gamma\approx2.20$ \cite{newman_2005}.\\
Scale-free distributions do not need to follow strictly a $k^{-\gamma}$ form. Take, for instance, figure \ref{power-law-distribs}, which comes from a study calculating the distribution of income in the United States\cite{toda_2012}. The distribution in this case is a double power law. More complex distributions exist, but they are beyond the scope of this thesis.\\
\begin{figure}[!htb]
\centering
\includegraphics[width=0.65\linewidth]{figures/income_distribution.png}
\caption{\textit{{\small Laplace plot by age group. Cyan plus (+): under 30, magenta circle (o): 30s, red asterisk (*): 40s, green cross (x): 50s, blue dot (·): over 60. Data from the U.S. Census Bureau Current Population Survey 2000-2009. Reprinted from Journal of Economic Behavior \& Organization, 84(1), Alexis Akira Toda, "The double power law in income distribution: Explanations and evidence," pages 364-381, Copyright 2012, with permission from Elsevier.}}}
\label{power-law-distribs}
\end{figure}
\section{Scale-free networks}
Often, natural and man-made systems (e.g. the Internet, citation networks, social networks) have the tendency to exhibit a structure closely resembling scale-free networks\cite{barabasi_2002}. These are networks with a scale-free degree distribution (i.e. the probability that a randomly chosen node will have $k$ connections). This means that the fraction $P(k)$ of nodes of degree $k$ has the following form:\\
\begin{equation}
P(k) \sim k^{-\gamma}
\label{scale-free}
\end{equation}
\vspace{14pt}
In a famous study about the World Wide Web, University of Notre-Dame researchers mapped all incoming and outgoing links in the university's \textit{nd.edu} domain\cite{barabasi_albert_jeong_2000}. Sure enough, the distribution they found was a power-law like the one at (\ref{scale-free}), meaning it was a scale-free network. The resulting exponent for outgoing links was then checked against the ones obtained by independently mapping \textit{whitehouse.gov}, \textit{yahoo.com}, and \textit{snu.ac.kr}. All the three domains exhibited a distribution with the same exponent as with \textit{nd.edu}, $\gamma\approx2.45$.\\
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/1000px-Barabasi_albert_graph.png}
\caption{\textit{{\small Display of three graphs generated with the Barabasi-Albert (BA) model. Each has 20 nodes and a parameter of attachment $m$ as specified. The color of each node is dependant upon its degree (same scale for each graph). (Wikipedia user HeMath / Creative Commons Attribution-Share Alike 4.0 International license (\url{https://commons.wikimedia.org/wiki/File:Barabasi_albert_graph.svg}))}}}
\label{barabasi-graph}
\end{figure}
In an attempt to explain the occurrence of scale-free networks in real life, Albert-László Barabási and Réka Albert proposed an algorithm for generating random scale-free networks (see figure~\ref{barabasi-graph}) using a preferential attachment mechanism\cite{barabasi_albert_1999}. In the Barabási-Albert model, the network starts with $m_0$ initially connected nodes. Then, new nodes are added to the network one by one. Each new node is then attached to $m \le m_0$ of the already existing nodes with the following probability:\\
\begin{equation}
p_i=\frac{k_i}{\sum_jk_j}
\label{attachment-probability}
\end{equation}
\vspace{14pt}
where $p_i$ is the probability that the new node is attached to node $i$, $k_i$ is the degree of node $i$, and the sum is calculated over all pre-existing nodes $j$.\\
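A short stand-alone sketch of preferential attachment (returning the network as an edge list; the function name and parameters are illustrative) is shown below; in practice a library generator can be used instead.
\begin{lstlisting}[language=Python]
import random

def barabasi_albert(n, m, seed=0):
    # Undirected BA network on n nodes, m edges per new node, as an edge list.
    random.seed(seed)
    m0 = max(m, 2)
    edges = [(i, j) for i in range(m0) for j in range(i + 1, m0)]  # initial clique
    targets = [node for edge in edges for node in edge]  # node repeated once per degree
    for new in range(m0, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))   # probability proportional to degree
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])
    return edges

edges = barabasi_albert(n=1000, m=3)
\end{lstlisting}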
An evolving Barabási-Albert network can be mapped to a Bose gas, with nodes corresponding to energy levels, and links to particles\cite{bianconi_barabasi_2001}. For each new node, $2m$ particles are added: $m$ particles on the energy level corresponding to the node's fitness, and m particles distributed to the other energy levels, corresponding to the outgoing links. There are 3 possible behaviours:\\
\begin{enumerate}
\item Scale-free phase: occurs when all nodes have the same fitness. The fraction of links attached to the oldest node decays to zero in the thermodynamic limit.
\item Fit-get-rich: occurs when nodes have different fitnesses and $I(\beta,\mu)=1$ has a solution. Eventually, the system evolves to a configuration of a few very connected nodes along with many less connected ones.
\item Bose-Einstein condensation: occurs under a critical temperature where $I(\beta,\mu)=1$ has no solution. Under this circumstance a winner-takes-all scenario occurs (the biggest hub also maintains a finite share of the links throughout the expansion of the system).
\end{enumerate}
\vspace{14pt}
\section{Social networks}
At the most basic level, a social network is a structure made up of a set of social actors (be they individuals or organizations), links between pairs of people (dyadic ties), and other social interactions\cite{hancean_2014,carrington_scott_2014}. Besides dyads, which are links between two people and the simplest possible feature in a social network, we can also encounter repeating structures made up of three or more people. All these are called network motifs: recurrent and statistically significant patterns in a network.\\
\begin{figure}
\centering
\includegraphics[width=7cm]{figures/network_motifs.png}
\caption{\textit{{\small Types of network motifs.}}}
\label{fig1}
\end{figure}
Heider's Balance Theory\cite{heider_1958} is one of the most famous theories concerned with the analysis of triads (i.e. structures emerging between three agents). According to it, a balanced state over a dyad occurs if the two like each other or dislike each other. Meanwhile, should the two have different sentiments regarding each other (one positive and one negative), the dyad is in imbalance. In the case of triads, a balanced state between the three can be found if the algebraic multiplication of signs in the triad relations has a positive sign. As it is immediately apparent, this theory holds many similarities with the aforementioned Ising dynamics, in which the energetic tendency is towards what Heider would call balance.\\
Another aspect that has been studied in social networks is the degree of separation, or average path length\cite{newman_barabasi_2006}. Arguably the most famous experiment in this area is Milgram's small-world experiment\cite{milgram_1967}. Long before the appearance of Internet social networks, this experiment aimed to find the average path length between any two persons by making use of the postal system. The procedure was as follows:\\
\begin{enumerate}
\item Individuals in the U.S. city of Boston, Massachusetts, were chosen to be end points of the experiment.
\item Information packets containing the instructions for participating in the experiment were initially sent to randomly selected individuals in Omaha or Wichita.
\item Recipients were asked to sign the roster included in the packet and resend the packet to the target person if they knew the target on a first name basis. If not, they were asked to send it to the person they deemed most likely to be able to reach them in this manner.
\item Upon arrival, the target would inspect the roster and count how many times the package had been forwarded.
\end{enumerate}
\vspace{14pt}
Using this methodology, Milgram was able to calculate that the median number of intermediate persons was only $5.5$ \cite{barabasi_six_degrees_2002}.\\
\section{Justification for a two-layered model}
All the social simulation models reviewed thus far concerned themselves with the evolution of one observable in the system, be it opinion, cultural vector, or connectivity. Some of them equip the agents with a certain 'fitness' value, which determines the probability that the agent will influence another agent or not. But this too is arbitrary. In real-life scenarios such as elections\cite{luca_ispas_teaca_iavita_andreescu}, there is one rapidly evolving question of opinion (e.g. candidate to vote for) while the structure of the social network is given by other, more long-term, features of the individuals (e.g. political values). In other words, one will be more likely to copy the opinion of people sharing the same values as oneself. This effect has been especially observed in the case of social media, where the followed-follower dynamic makes it easier to select the people sharing similar values\cite{holone_2016}.\\
In terms of a simulation, these considerations translate into a two-layered model, with two different dynamics: one layer is concerned with rewiring the network so that agents with similar values connect with each other, and another with the evolution of the opinion of interest on said network.\\
\chapter{Model Setup}
In this thesis, I model opinion dynamics by using a Potts-like agent-based model, and the evolution of the network structure is performed by using a slightly modified version of Axelrod's model of dissemination of culture. This chapter provides the reader with a detailed description of the proposed model.\\
\section{Agent characteristics}
A directional network consisting of N agents is used. Each agent is represented by a vertex in the network, accompanied by a set of three features. These features are:\\
\begin{enumerate}
\item \textbf{Vote:} a variable in the range of $v=0..n$, representing the agent's opinion. \textit{Votes} can be understood literally, as voting intention in an election, or, more generally, as an opinion subject to quick change in a social network.
\item \textbf{Cultural vector:} a vector $\sigma=(\sigma_1,...,\sigma_F)$, with $\sigma_f=0,1,...,q$, representing an immutable set of cultural characteristics. Here culture is understood as, in Axelrod's words, "\textit{the set of individual attributes that are subject to social influence}".
\item \textbf{Energy:} defined as $\epsilon_{i}=-J\sum_{j}\delta_{v_i,v_j}$, where $j$ runs over $i$'s inneighbours, and $\delta$ is the Kronecker delta symbol. Note that this definition is akin to the Potts model Hamiltonian, with the difference that in this case spins are replaced by \textit{votes}.
\end{enumerate}
\vspace{14pt}
\section{Network rewiring}
Each step, an agent is selected for which, with a certain probability, an inneighbour will be removed and another added. Here, I referred to the transition probability defined by Axelrod for his model of dissemination of culture:\\
\begin{equation}
\omega_{i,j}=\frac{1}{F}\sum_{f=1}^{F}\delta_{\sigma_f(i),\sigma_f(j)}
\label{ax_trans_prob}
\end{equation}
\vspace{14pt}
However, this probability only grows when two cultural characteristics are \textit{identical}. In reality, beliefs are usually on a spectrum of intensity. For instance, a person with a belief $\sigma_f=3$ will find themselves more likely to interact with another with $\sigma_f=4$ rather than one with $\sigma_f=9$. Taking this into consideration, a new function $\eta$ can be devised:\\
\begin{equation}
\eta_{i,j}=1-\frac{1}{F} \cdot \frac{1}{q}\sum_{f=1}^{F}|\sigma_f(i)-\sigma_f(j)|
\label{trans_prob}
\end{equation}
\vspace{14pt}
Using this revised probability, this stage in the time-step is defined by the following activities:\\
\begin{itemize}
\item One agent $i$ is selected at random.
\item Another agent $j$ that is not an inneighbour is selected randomly.
\item With a probability $\eta_{i,j}$ an edge from $j$ to $i$ is added, and $i$'s energy $\epsilon_i$ is reevaluated.
\item Now an agent $k$ is selected randomly from $i$'s inneighbours.
\item With a probability $1-\eta_{i,j}$ the edge from $k$ to $i$ is removed, and $i$'s energy $\epsilon_i$ is reevaluated.
\end{itemize}
\vspace{14pt}
It is immediately apparent that this rewiring procedure will lead to similar agents becoming more connected, and eventually forming hubs. This behaviour mirrors the echo-chamber effect observed in social media.\\
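A simplified stand-alone sketch of one rewiring move, using $\eta_{i,j}$ from (\ref{trans_prob}) and following the steps above (this is a Python illustration, not the Julia code used for the actual simulations):
\begin{lstlisting}[language=Python]
import random

def eta(agent_i, agent_j, q):
    # Similarity-based attachment probability eta_{i,j}.
    F = len(agent_i["culture"])
    diff = sum(abs(a - b) for a, b in zip(agent_i["culture"], agent_j["culture"]))
    return 1.0 - diff / (F * q)

def rewire(i, agents, in_neighbours, q):
    # One rewiring move on agent i's in-neighbourhood.
    candidates = [j for j in range(len(agents))
                  if j != i and j not in in_neighbours[i]]
    if not candidates:
        return
    j = random.choice(candidates)
    if random.random() < eta(agents[i], agents[j], q):
        in_neighbours[i].add(j)                     # add edge j -> i
    if in_neighbours[i]:
        k = random.choice(list(in_neighbours[i]))
        # Removal uses 1 - eta_{i,j}, mirroring the step list above.
        if random.random() < 1.0 - eta(agents[i], agents[j], q):
            in_neighbours[i].discard(k)             # remove edge k -> i
\end{lstlisting}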
\section{Opinion dynamics}
Having rewired the selected agent's connections, the agent will now reconsider its \textit{vote}. This happens by attributing a new random vote to the agent, and reevaluating the agent's energy with the new vote. If the energy is lower, i.e. the agent's opinion is more in line with its influencers', then the new vote is kept. Otherwise, it will keep its new opinion with a certain probability, which is dependent on temperature (tolerance) $T$ and the difference in energy $\Delta\epsilon=\epsilon_{new}-\epsilon_{old}$:\\
\begin{equation}
p=\exp \left( -\frac{\Delta\epsilon}{T} \right)
\label{switch_prob}
\end{equation}
\vspace{14pt}
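The opinion update is then a Metropolis-style acceptance test on the agent's local energy (again a simplified Python illustration):
\begin{lstlisting}[language=Python]
import math
import random

def local_energy(vote_i, in_votes, J=1.0):
    # Potts-like local energy: epsilon_i = -J * sum_j delta(v_i, v_j).
    return -J * sum(1 for v in in_votes if v == vote_i)

def update_vote(agent, in_votes, n_opinions, T, J=1.0):
    # Propose a random new vote and keep it with the probability above.
    old = agent["vote"]
    new = random.randrange(n_opinions)
    d_eps = local_energy(new, in_votes, J) - local_energy(old, in_votes, J)
    if d_eps <= 0 or random.random() < math.exp(-d_eps / T):
        agent["vote"] = new
\end{lstlisting}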
\section{Overall execution procedure}
Putting the network rewiring part and the opinion dynamics parts together, the algorithm will go through the following procedure:\\
\textbf{Step 1: Generating initial population.} A data vector containing $N$ data structures is created, where each data structure contains the features described at section 2.1, with random votes and cultural vectors.\\
\textbf{Step 2: Network initialization.} A Barabási–Albert graph is initialized, growing by adding new vertices to $N_0$ initial vertices. Each new vertex is attached to $k$ different vertices already present in the system by preferential attachment.\\
\textbf{Step 3: Compute initial energy and vote distribution.} Compute energy for each agent, count vote distribution, then store the data in the log.\\
\textbf{Step 4: Select random agent.}\\
\textbf{Step 5: Network rewiring.} Perform the network rewiring procedure on the selected agent, recalculating $\epsilon$ and $E$ afterwards.\\
\textbf{Step 6: Opinion dynamics.} Perform the opinion dynamics procedure on the selected agent, recalculating $\epsilon$ and $E$ afterwards.\\
\textbf{Step 7: Advance to next step.} Advance the time step and go back to Step 4 until desired number of steps is reached.\\
\textbf{Step 8: Data export.} The generated data containing the $E$ time series, vote distribution over time and the final network form is exported.\\
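For illustration, the whole procedure can be composed from the helpers sketched above (\texttt{barabasi\_albert}, \texttt{rewire}, \texttt{update\_vote}, \texttt{local\_energy}); this is a toy Python driver, whereas the actual simulations were implemented in Julia:
\begin{lstlisting}[language=Python]
import random

def total_energy(agents, in_neighbours, J=1.0):
    # System energy E = sum_i epsilon_i, reusing local_energy from above.
    return sum(local_energy(agents[i]["vote"],
                            [agents[j]["vote"] for j in in_neighbours[i]], J)
               for i in range(len(agents)))

def run_simulation(N=100, F=3, q=9, n_opinions=5, steps=10000, T=1.0, m=3):
    # Step 1: initial population with random votes and cultural vectors.
    agents = [{"vote": random.randrange(n_opinions),
               "culture": [random.randrange(q + 1) for _ in range(F)]}
              for _ in range(N)]
    # Step 2: initial Barabasi-Albert network, stored as in-neighbour sets.
    in_neighbours = [set() for _ in range(N)]
    for a, b in barabasi_albert(N, m):
        in_neighbours[a].add(b)
        in_neighbours[b].add(a)
    log = [total_energy(agents, in_neighbours)]          # Step 3
    for t in range(steps):                               # Steps 4-7
        i = random.randrange(N)                          # Step 4
        rewire(i, agents, in_neighbours, q)              # Step 5
        in_votes = [agents[j]["vote"] for j in in_neighbours[i]]
        update_vote(agents[i], in_votes, n_opinions, T)  # Step 6
        log.append(total_energy(agents, in_neighbours))
    return agents, in_neighbours, log                    # Step 8: export/inspect
\end{lstlisting}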
\chapter{Results}
To examine the general characteristics exhibited by this model, I ran a series of simulations, varying, one at a time, population, temperature, initial network, and the way probability $\eta_{i,j}$ is calculated. In what follows I will present each variation together with its result.\\
The main simulation run was done over a population of 1000 agents initialized in a Barabási–Albert network, for a number of steps $t=2\cdot 10^7$, at temperature $T=1$. The full specifications can be seen in table ~\ref{final-model-specs}.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for main simulation run}}
\begin{tabular}{ccc}
\hline
Initial network & Barabási–Albert network\\
Opinion dynamics & Potts model \\
Rewiring probability & $\eta_{i,j}=1-\frac{1}{F\cdot q}\sum_{f=1}^{F}|\sigma_f(i)-\sigma_f(j)|$\\
Run time & $t=2\cdot 10^7$\\
Temperature & $T=1$ \\
Coupling constant & $J=1$ \\
Population & $N=1000$\\
\hline
\label{final-model-specs}
\end{tabular}
\end{small}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{figures/pop1000/energy_evolution_1-2_10e7.png}
\caption{\textit{{\small Total system energy evolution over time for a simulation of final model ran for $t=2\cdot 10^7$ over a population of 1000}}}
\label{pop1000:evol}
\end{figure}
Figure \ref{pop1000:evol} shows the evolution of the total system energy for this simulation. Energetically, the system reaches the ground state at $t\approx5\cdot 10^6$. However, consensus is reached much earlier ($t\approx10^5$), as figure \ref{pop1000:partition} reveals. This indicates that the network rewiring dynamics bring their own contribution to the total energy. The issue of different rewiring frequencies will be analysed in section 3.2. Refer to figure \ref{pop1000} for all the plots for this simulation.\\
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{figures/pop1000/opinion_evolution_1-15_10e4.png}
\caption{\textit{{\small Opinion partition over time, $t \in \{1..1.5\cdot 10^5\}$}}}
\label{pop1000:partition}
\end{figure}
\section{Fixed rewiring probability}
When the rewiring probability $\eta$ is fixed, consensus is reached more slowly, even if the population itself is smaller, as revealed by figure \ref{2_2:partition}, where consensus wasn't yet reached by the end of the simulation at $t=1.8\cdot 10^6$.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for fixed rewiring probability simulation run}}
\begin{tabular}{ccc}
\hline
Initial network & Barabási–Albert network\\
Opinion dynamics & Potts model \\
Rewiring probability & $\eta_{i,j}=0.5$\\
Run time & $t=1.8\cdot 10^6$\\
Temperature & $T=1$ \\
Coupling constant & $J=1$ \\
Population & $N=100$\\
\hline
\label{fixed-rewiring-specs}
\end{tabular}
\end{small}
\end{table}
A possible explanation for the slower opinion dynamics is that the randomness of the rewiring caused by the fixed $\eta$ makes it harder for the agents to have the same neighbors for too long, thus adding more noise to the dynamics.\\
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{figures/2_2/vote_evolution.png}
\caption{\textit{{\small Opinion partition over time for simulation ran for $t=1.8\cdot 10^6$ over a population of 100}}}
\label{2_2:partition}
\end{figure}
Refer to figure \ref{2_2} for all the plots for this simulation.
\subsection{Majority rule}
A variant of the fixed rewiring probability simulation is the majority rule simulation. In this scenario, the opinion dynamics step behaves as follows:\\
\begin{enumerate}
\item An agent is selected at random.
\item The opinions of its inneighbors are counted.
\item The agent copies the opinion held by the majority of its inneighbors.
\end{enumerate}
\vspace{14pt}
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for majority rule run}}
\begin{tabular}{ccc}
\hline
Initial network & Barabási–Albert network\\
Opinion dynamics & Majority rule \\
Rewiring probability & $\eta_{i,j}=0.5$\\
Run time & $t=10^6$\\
Temperature & $T=1$ \\
Coupling constant & $J=1$ \\
Population & $N=100$\\
\hline
\label{majority-rule-specs}
\end{tabular}
\end{small}
\end{table}
Unsurprisingly, in this case consensus is reached almost immediately, despite the noise generated by network rewiring (see figure \ref{2_1:partition}). This opinion dynamics rule is even stronger than a Potts rule with $T=0$, since in that case there is still a chance that the randomly proposed opinion increases the system energy and is therefore dropped.\\
\begin{figure}[!ht]
\centering
\includegraphics[width=0.65\linewidth]{figures/2_1/vote_evolution.png}
\caption{\textit{{\small Opinion partition over time for majority rule simulation ran for $t=10^6$ over a population of 100}}}
\label{2_1:partition}
\end{figure}
Refer to figure \ref{2_1} for all the plots for this simulation.
\subsection{Random initial network}
Another tested variant of the fixed rewiring probability simulation is one where the network is initialized using an
Erdős–Rényi random network\cite{bollobas_bela_2004}.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for Erdős–Rényi initial network run}}
\begin{tabular}{ccc}
\hline
Initial network & Erdős–Rényi network\\
Opinion dynamics & Potts model \\
Rewiring probability & $\eta_{i,j}=0.5$\\
Run time & $t=2\cdot 10^6$\\
Temperature & $T=1$ \\
Coupling constant & $J=1$ \\
Population & $N=100$\\
\hline
\label{er-specs}
\end{tabular}
\end{small}
\end{table}
In this case, the results (figure \ref{2_1_er:partition}) are rather similar to the results obtained from using a Barabási–Albert network. This confirms that the network rewiring dynamics are strong enough to be independent of the topology of the initial network.\\
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/vote_evolution.png}
\caption{\textit{{\small Opinion partition over time}}}
\label{2_1_er:partition}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/OutneighborHistogram.png}
\caption{\textit{{\small Outneighbor histogram}}}
\label{2_1_er:histogram}
\end{subfigure}
\caption{\textit{{\small Opinion partition and outneighbor histogram for random initial network simulation ran for $t=2\cdot 10^6$ over a population of 100}}}
\end{figure}
Refer to figure \ref{2_2_er} for all the plots for this simulation.
\section{Varying rewiring frequency}
Keeping the same fixed rewiring probability $\eta=0.5$, a series of simulations experimented with different rewiring frequencies. Instead of running the rewiring procedure every step of the simulation, the procedure was run every 10, 50, 100, and 1000 steps, respectively, as well as not at all (see table \ref{rew-specs}).\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for different rewiring frequencies simulation runs}}
\begin{tabular}{ccc}
\hline
Initial network & Barabási–Albert network\\
Opinion dynamics & Potts model \\
Rewiring probability & $\eta_{i,j}=0.5$\\
Run time & $t=1.4\cdot 10^5, 2\cdot 10^6, 4.2\cdot 10^6, 2\cdot 10^6, 6.5\cdot 10^6$\\
Temperature & $T=1$ \\
Coupling constant & $J=1$ \\
Population & $N=100$\\
Rewiring frequency & $1/10, 1/50, 1/100, 1/1000, 0$ \\
\hline
\label{rew-specs}
\end{tabular}
\end{small}
\end{table}
Analysing the results (figure \ref{2_2_rew:all}), it appears, however, that even with a fixed $\eta$, the network rewiring is involved in reaching the energy ground state. The lower the rewiring frequency is, the higher the noise and the higher the minimal system energy.\\
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_10rew/energy_evolution.png}
\caption{Rewiring every 10 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_50rew/energy_evolution.png}
\caption{Rewiring every 50 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_100rew/energy_evolution.png}
\caption{Rewiring every 100 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_1000rew/energy_evolution.png}
\caption{Rewiring every 1000 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/no_rew/Energy_norewire2.png}
\caption{No rewiring}
\end{subfigure}
\caption{\textit{{\small Simulations run with different rewiring frequencies over a population of 100}}}
\label{2_2_rew:all}
\end{figure}
\section{Varying temperature}
Similarly to the previous section, simulations were also run with varying temperature $T$. Again, $\eta$ was fixed and the population was 100 (see table \ref{T-specs}).\\
Figure \ref{2_2_t1:ener_inne} shows the energy evolution and the opinion partition over time for a system at $T=1$. Figures \ref{2_2_t10:ener_inne} and \ref{2_2_t100:ener_inne} show the same information for simulations run at $T=10$ and $T=100$, respectively. Additional plots, such as energy distribution plots, can be found in the appendix figures \ref{2_2_t1}, \ref{2_2_t10}, and \ref{2_2_t100}.\\
\begin{table}[!ht]
\centering
\begin{small}
\caption{\textit{Specifications for different temperatures simulation runs}}
\begin{tabular}{ccc}
\hline
Initial network & Barabási–Albert network\\
Opinion dynamics & Potts model \\
Rewiring probability & $\eta_{i,j}=0.5$\\
Run time & $t=6\cdot 10^6$\\
Temperature & $T=1,10,100$ \\
Coupling constant & $J=1$ \\
Population & $N=100$ \\
\hline
\label{T-specs}
\end{tabular}
\end{small}
\end{table}
Here, three trends are apparent. First, higher temperature is correlated with higher noise. This should come as no surprise, especially when considering the expression for the opinion switch probability $p$ (\ref{switch_prob}). Indeed, this mirrors the classic Ising behaviour presented in section 1.1. It is still an important confirmation that temperature behaves as expected in this model, and can therefore readily be related to tolerance.\\
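For reference, the acceptance rule implemented in \texttt{procedure2} (see the appendix) corresponds to the standard Metropolis form,
\[
p_{\mathrm{switch}} \;=\; \min\!\left(1,\; e^{-\Delta E / T}\right),
\]
so at higher $T$ energetically unfavourable opinion changes are accepted more often, which appears as noise in the energy and opinion plots.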
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=1$, over a population of 100}}}
\label{2_2_t1:ener_inne}
\end{figure}
Secondly, the evolution of opinions over time exhibits chaotic behaviour for $T=10$ and $T=100$, while for $T=1$ it appears that it may be convergent, though a longer run would have been necessary to confirm this beyond all doubt. However, it can still be observed that for $T=1$ the share of opinions in the network is more evenly divided, hence the smaller values in the opinion partition plot (figure \ref{2_2_t1:ener_inne}).\\
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=10$, over a population of 100}}}
\label{2_2_t10:ener_inne}
\end{figure}
Thirdly, and most interestingly, the higher the temperature, the lower the system energy ground state seems to be (the values quoted here are absolute values of the negative system energy): for $T=1$ it is $|\langle E_{ground} \rangle|\approx490$, for $T=10$ it is $|\langle E_{ground} \rangle|\approx610$, and for $T=100$ it is $|\langle E_{ground} \rangle|\approx650$. This has interesting implications, as higher tolerance could mean, in this context, a more stable configuration of the social network.\\
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=100$, over a population of 100}}}
\label{2_2_t100:ener_inne}
\end{figure}
\chapter{Conclusions}
The most important contribution of this thesis is to provide an alternative and, in my opinion, more realistic approach to modelling opinion dynamics compared to single-layered models. The new approach is based on the postulate that people are influenced by others with already compatible views. The process of selecting the persons one is willing to listen to can ultimately be the factor that leads to consensus, polarization, or fragmentation. In the two-layered model, agents change their opinions in order to be more in line with their neighbours, while at the same time the network is constantly changing in order to link agents with similar values.\\
I show that the outcome depends strongly on the manner and frequency in which the network is updated. A random rewiring dynamic causes high noise and prolonged fragmentation, while a dynamic based on similarity between agents ultimately leads to consensus. A lower network update rate leads to a slower evolution towards system stability and to higher noise in the evolution of opinions. Furthermore, temperature variations evoke the behaviour expected from an Ising-like simulation.\\
Admittedly, the model is still a long way from maturity, with analytical descriptions of its behaviour still needing to be worked out. The exact relationship between the time required to reach the system energy ground state and the rewiring frequency is still unknown, as is the value of the critical temperature $T_C$ under default conditions. Besides a better description of the model phenomenology, future steps could include experimenting with different distributions of cultural values. While it might be interesting to study models in themselves, ultimately they have to work for the reasons they were created; in this regard, it would be a good idea to test the model on networks extracted from empirical data.\\
\renewcommand{\contentsname}{References}
\renewcommand\bibname{References} %
\addcontentsline{toc}{chapter}{References}
\markboth{References}{References} %
\bibliographystyle{unsrt}
%
%\bibliographystyle{plain}
\bibliography{References}
\begin{appendices}
\chapter{Simulation Code}
\section{SocialSim.jl}
\begin{lstlisting}[language=julia]
module SocialSim
using Dates
using Random
using LightGraphs
using DataFrames
using CSV
using GraphIO
using Plots
using StatsBase
using ProgressMeter
export run_sim
const J = 1
include("init.jl")
include("analysis.jl")
include("dynamics.jl")
include("storage.jl")
#Runs the simulation at a determined temperature for a determined number of steps
function run_sim(T,population,steps; exports_number=10)
#Random.seed!(1234)
#Get time of simulation and prepare folders
exportTime = Dates.format(Dates.now(), "yyyy-mm-ddTHH-MM-SS")
mkdir("Data/$exportTime")
mkdir("Data/$exportTime/Network")
#Initializing network and data frame
data = createDataFrame()
network, nodes = initNetwork(data, population)
#Export network at each nx steps
nx = div(steps, exports_number)
#Executing steps
@showprogress 5 "Computing..." for i in 1:steps
ni = rand(1:population)
procedure2(nodes, network, data, ID=ni, N=population, T=T)
if mod(i, nx) == 0
exportNetwork(exportTime, network, nodes, i)
end
end
#Export data to a new folder
exportData(exportTime, data, network, nodes)
plotAnalysis(steps, exportTime, data, nodes, network)
end
#Runs the simulation at T for a determined time interval (in hours)
#=function set_run(T,population,duration; exports_number=10)
#Get time of simulation and prepare folders
exportTime = Dates.format(Dates.now(), "yyyy-mm-ddTHH-MM-SS")
mkdir("Data/$exportTime")
mkdir("Data/$exportTime/Network")
#Initializing network and data frame
data = createDataFrame()
network, nodes = initNetwork(data, population)
#Export network at each nx steps
nx = div(steps, exports_number)
timelimit = Dates.now() + Dates.Minute(duration)
#Executing steps
while Dates.now() < timelimit #&& length(Data.E) <= 2000000
ni = rand(1:population)
procedure2(nodes, network, data, ID=ni, N=population, T=T)
if mod(i, nx) == 0
exportNetwork(exportTime, network, nodes, i)
end
end
#Export data to a new folder
exportData(exportTime)
plotAnalysis(length(data.E)-1, exportTime)
end=#
end
\end{lstlisting}
\vspace{14pt}
\section{init.jl}
\begin{lstlisting}[language=julia]
struct Agent{T,E}
id::Int
values::Vector{T}
vote::Ref{Int}
energy::Ref{E}
end
function initNetwork(data, N)
#Generate a Barabasi-Albert graph with N nodes: 10 initial nodes, 3 connections per added node
network = barabasi_albert(N, 10, 3, #=seed=1,=# is_directed=true)
#global Network = erdos_renyi(N, 4, is_directed=true)
#Initialize agents
nodes = Agent{Int,Int}[]
for i in 1:N
push!(nodes, Agent(i, rand(0:10, 5), Ref(rand(1:10)), Ref(0)))
end
#Initialize energy
for i in 1:N
dE!(nodes, i, network)
end
computeEnergy!(data, nodes)
#Count initial preferences
trackPreference!(data, nodes)
return network, nodes
end
\end{lstlisting}
\vspace{14pt}
\section{dynamics.jl}
\begin{lstlisting}[language=julia]
function rewire!(network, nodes, ID, options; remove=false)
isempty(options) && return
target = rand(options)
# number of values
val_range = axes(nodes[ID].values, 1)
val_norm = 1 / (val_range[end] * 10)
p = val_norm * sum(
i->abs(nodes[ID].values[i]-nodes[target].values[i]),
val_range)
if !remove
p = 1 - p
end
if rand() <= p
#Add or remove edge
if remove
rem_edge!(network, target, ID)
else
add_edge!(network, target, ID)
end
#Adjust energy
if nodes[target].vote[] == nodes[ID].vote[]
j = remove ? J : -J
nodes[ID].energy[] += j
end
end
end
function procedure2(nodes, network, data; ID, N, T)
#Add new row to data frame
push!(data, zeros(10+1))
#Select unconnected node, connect with probability w
options = Int[]
for i in 1:N
if has_edge(network, i, ID) == false
push!(options, i)
end
end
rewire!(network, nodes, ID, options, remove=false)
#Select an in-neighbor, disconnect with probability given by the normalized dissimilarity
options = inneighbors(network, ID)
rewire!(network, nodes, ID, options, remove=true)
#Select new random preference & track preference change
oldVal = nodes[ID].vote[]
newVal = rand(1:10)
nodes[ID].vote[] = newVal
#Subtract the stored node energy from the newly recomputed one to obtain deltaE
deltaE = dE(nodes, ID, network) - nodes[ID].energy[]
p = -(deltaE)/T
#If deltaE<0 apply it:
if deltaE < 0
nodes[ID].energy[] += deltaE
for i in outneighbors(network, ID)
nodes[i].energy[] = dE(nodes, i, oldVal, newVal)
end
#If deltaE>0 apply it with following probability:
elseif rand() < exp(p)
nodes[ID].energy[] += deltaE
for i in outneighbors(network, ID)
nodes[i].energy[] = dE(nodes, i, oldVal, newVal)
end
else
nodes[ID].vote[] = oldVal
end
#Log preferences
trackPreference!(data, oldVal, nodes[ID].vote[])
#Compute system energy
computeEnergy!(data, nodes)
#Optional: change core opinions
end
\end{lstlisting}
\vspace{14pt}
\section{analysis.jl}
\begin{lstlisting}[language=julia]
"""
computeEnergy!(data, nodes)
Compute the total system energy by summing the node energies into the last row of the data frame
"""
function computeEnergy!(data, nodes)
for i in 1:length(nodes)
data.E[end] += nodes[i].energy[]
end
end
"""
dE(nodes, ID, oldVal, newVal)
Computes the node energy after an inneighbor changes its preference from oldVal to newVal
"""
function dE(nodes, ID, oldVal, newVal)
if newVal == nodes[ID].vote[] && oldVal == nodes[ID].vote[]
epsilon = nodes[ID].energy[]
elseif newVal == nodes[ID].vote[] && oldVal != nodes[ID].vote[]
epsilon = nodes[ID].energy[] - J
elseif newVal != nodes[ID].vote[] && oldVal == nodes[ID].vote[]
epsilon = nodes[ID].energy[] + J
else
epsilon = nodes[ID].energy[]
end
return epsilon
end
function dE(nodes, ID, network)
#Goes through inneighbors and computes Potts node energy (epsilon)
options = inneighbors(network,ID)
epsilon = 0
for i in 1:length(options)
if nodes[ID].vote[] == nodes[options[i]].vote[]
epsilon += -J
end
end
return epsilon
end
#Slower: recomputes node energy thoroughly
function dE!(nodes, ID, network)
epsilon = dE(nodes, ID, network)
#Assign node energy
nodes[ID].energy[] = epsilon
end
function datacol(data, i)
getproperty(data, Symbol("c$i"))
end
"""
trackPreference!(data, nodes)
Count popularity of each candidate
"""
function trackPreference!(data, nodes)
for i in 1:length(nodes)
v = nodes[i].vote[]
# data.c$v[end] += 1
datacol(data, v)[end] += 1
end
end
function trackPreference!(data, old, new)
#Copy previous distribution
for i=1:10
datacol(data, i)[end] = datacol(data, i)[end-1]
end
#Remove old value
datacol(data, old)[end] -= 1
#Add new value
datacol(data, new)[end] += 1
end
#Plotting
function plotAnalysis(t, dir, data, nodes, network)
#Energy evolution
#Normalization by the initial energy is currently disabled, so the raw energy is plotted
plot1=plot(1:t+1, data.E[1:t+1]#=/data.E[1]=#, legend=false)
xlabel!("Time")
ylabel!("E")
title!("Energy")
png("Data/$dir/Energy")
#Energy distribution
energyDistribution=counts(-data.E)
plot2=plot(1:length(energyDistribution), energyDistribution, legend=false)
title!("Energy distribution")
png("Data/$dir/EnergyDistribution")
#Inneighbor histogram
noInneighbors = []
for i in 1:length(nodes)
push!(noInneighbors, length(inneighbors(network, i)))
end
plot3=histogram(noInneighbors, bins = 15)
title!("Inneighbor histogram")
png("Data/$dir/InneighborHistogram")
end
\end{lstlisting}
\vspace{14pt}
\section{storage.jl}
\begin{lstlisting}[language=julia]
function createDataFrame()
DataFrame(E=[0], c1=[0], c2=[0], c3=[0], c4=[0], c5=[0], c6=[0], c7=[0], c8=[0], c9=[0], c10=[0])
end
function exportData(dir, data, network, nodes)
#Save log
CSV.write("Data/$dir/data.csv", data)
#Save nodes table
nodes_df = DataFrame(ID = Int[], Vote = Int[], sigma_1 = Int[], sigma_2 = Int[], sigma_3 = Int[],
sigma_4 = Int[], sigma_5 = Int[], E = Int[])
for i in 1:length(nodes)
push!(nodes_df, (nodes[i].id, nodes[i].vote[], nodes[i].values[1], nodes[i].values[2],
nodes[i].values[3], nodes[i].values[4], nodes[i].values[5], nodes[i].energy[]))
end
CSV.write("Data/$dir/nodes.csv", nodes_df)
#Exporting the graph
savegraph("Data/$dir/graph.net", network, "Network", GraphIO.NET.NETFormat())
end
function exportNetwork(dir, network, nodes, step)
#Save nodes table
nodes_df = DataFrame(ID = Int[], Vote = Int[], sigma_1 = Int[], sigma_2 = Int[], sigma_3 = Int[],
sigma_4 = Int[], sigma_5 = Int[], E = Int[])
for i in 1:length(nodes)
push!(nodes_df, (nodes[i].id, nodes[i].vote[], nodes[i].values[1], nodes[i].values[2],
nodes[i].values[3], nodes[i].values[4], nodes[i].values[5], nodes[i].energy[]))
end
CSV.write("Data/$dir/Network/nodes_$step.csv", nodes_df)
#Exporting the graph
savegraph("Data/$dir/Network/graph_$step.net", network, "Network", GraphIO.NET.NETFormat())
end
\end{lstlisting}
\vspace{14pt}
\chapter{Plots}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_10rew/energy_evolution.png}
\caption{Rewiring every 10 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_50rew/energy_evolution.png}
\caption{Rewiring every 50 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_100rew/energy_evolution.png}
\caption{Rewiring every 100 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_1000rew/energy_evolution.png}
\caption{Rewiring every 1000 steps}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/no_rew/Energy_norewire2.png}
\caption{No rewiring}
\end{subfigure}
\caption{\textit{{\small Simulations run with different rewiring frequencies over a population of 100}}}
\label{2_2_rew}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/pop1000/energy_evolution_1-2_10e7.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/pop1000/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/pop1000/opinion_evolution_1-15_10e4.png}
\caption{Opinion partition over time, $t \in \{1..1.5\cdot 10^5\}$}
\end{subfigure}
\caption{\textit{{\small Simulation of the final model run for $t=2\cdot 10^7$ over a population of 1000}}}
\label{pop1000}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_2/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_2/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_2/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Fixed rewiring probability simulation run for $t=1.8\cdot 10^6$ over a population of 100}}}
\label{2_2}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_1/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_1/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figures/2_1/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Majority rule simulation run for $t=10^6$ over a population of 100}}}
\label{2_1}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_er/OutneighborHistogram.png}
\caption{Outneighbor histogram}
\end{subfigure}
\caption{\textit{{\small Random initial network simulation run for $t=2\cdot 10^6$ over a population of 100}}}
\label{2_2_er}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/InneighborHistogram_temp=1.png}
\caption{Inneighbor histogram}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t1/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=1$, over a population of 100}}}
\label{2_2_t1}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/InneighborHistogram_temp=10.png}
\caption{Inneighbor histogram}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t10/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=10$, over a population of 100}}}
\label{2_2_t10}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/energy_evolution.png}
\caption{Total system energy evolution over time}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/energy_distribution.png}
\caption{Distribution of the absolute values of energy}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/InneighborHistogram_temp=100.png}
\caption{Inneighbor histogram}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics[width=\linewidth]{figures/2_2_t100/vote_evolution.png}
\caption{Opinion partition over time}
\end{subfigure}
\caption{\textit{{\small Simulation with fixed rewiring probability, run for $t=6\cdot 10^6$ at $T=100$, over a population of 100}}}
\label{2_2_t100}
\end{figure}
\end{appendices}
\end{document}
\documentclass[14pt]{report}
\usepackage[utf8]{inputenc}
\usepackage{blindtext}
\usepackage{amsmath}
\usepackage[margin=1in]{geometry}
\usepackage{graphicx,changepage}
\usepackage{xcolor}
\usepackage{hyperref}
\usepackage{tcolorbox}
\usepackage{subfig}
\usepackage{fancyvrb}
\usepackage{listings}
\usepackage{svg}
\usepackage{csvsimple}
\newcommand\todo[1]{\textcolor{red}{#1}}
\title{Legend Of Bluespec}
\author{kr469 }
\date{October 2021}
\tcbset{colback=pink!7,colframe=pink!90,coltitle=black}
\begin{document}
\maketitle
\tableofcontents
\chapter{Introduction}
Making hardware is hard, and, as with other problems in computer science, we make it easier by adding layers of abstraction. There are many layers involved in making hardware, abstracting from transistors to gates to logical operations to functions to modules to chips to multi-chip modules to whole devices. The layer this project focuses on is going from modules to chips (TODO: this probably needs renaming). At this layer we already have code for modules like memory, CPU cores, interconnects and so on. What we need now is the ability to compose them in such a way that they create a whole device. On paper this should be a simple task, but in reality it is a lot more complicated.
\section{Reality}
Creating custom hardware is getting more expensive, both in terms of money and in terms of the man-hours needed to create it.
This creates barriers to entry for new players and also reinforces the strong positions of already adopted standards.
The two monoliths that I will focus on are the Verilog language and the design software Intel Quartus Prime (IQP).
In this work I will use the Bluespec language, which can be compiled down to Verilog.
\begin{tcolorbox}[title=Market share and justification for focusing entirely on comparisons with Intel Quartus Prime]
It is a bit difficult to accurately judge the market share of IQP, but it is produced by one of the biggest chip manufacturers and designers in the world, and it is also used by Qualcomm according to \href{https://discovery.hgdata.com/product/intel-quartus-prime}{HG Insights}. Using the Google Trends tool we can see that, while in recent years (since 2016) the competitor Xilinx Vivado has overtaken IQP in interest at the global scale, in most developed regions such as the US, Europe, and parts of Asia there is roughly a 50/50 split between IQP and Vivado. My personal observations suggest that Xilinx Vivado has been more popular in India, a large country that is currently developing rapidly. Therefore, I am going to assume that IQP is still one of the two biggest hardware design platforms, and that it is fair not to investigate Xilinx's software during evaluation (TODO: check if this has changed).
\end{tcolorbox}
My project will try to tackle a subset of the functionality provided by a tool called Platform Designer, which is part of the Intel Quartus Prime package. Platform Designer is a GUI tool for connecting modules; it is capable of saving and loading designs from a proprietary plain-text file format. Unfortunately, those files are not exactly what one might call ``human readable'', as they have a tendency to be megabytes long (millions of characters). This tool also has some other pain points that will be explained later.
\section{Solution}
During my project I will try to create an alternative file format that is much simpler and allows for editing by a human. To do this I will harness the power of types in the Bluespec language. My tool will operate on Bluespec packages, but thanks to other tools that allow for interoperability between Bluespec and Verilog, it should still be technically possible to use my tool with the wider Verilog ecosystem.
\chapter{Preparation}
\section{Understanding Bluespec}
To begin working with Bluespec we first need to understand the language.
\subsection{Rules}
In Bluespec all computation is done in the form of rules. Each cycle, a subset of all rules is selected for execution in that cycle; a rule is fired (executed) in a cycle only if it is ready (or will become ready) and it does not conflict with other rules (if a conflict happens, the compiler must issue a warning and picks an arbitrary rule to fire from the subset of conflicting rules). Each rule can fire at most once per cycle. For a rule to be ready to fire, its implicit and explicit conditions must be true. A rule can be fired in a cycle even if it is not ready at the start of that cycle; for example, an item can be enqueued onto an empty queue and dequeued again within the same cycle.\\
\includegraphics[width=\textwidth]{Rulemapping.png}
(TODO: add reference to the bsv reference document from which this image was taken)
\begin{verbatim}
TODO: piece of Bluespec with module using fifo and other rule
explaining types of conditions.
\end{verbatim}
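A minimal illustrative sketch of such a module (invented for this draft, not taken from an existing design) could look as follows; the \verb!produce! rule carries an explicit condition, while the FIFO methods contribute implicit conditions.
\begin{verbatim}
import FIFO::*;

module mkExample (Empty);
   FIFO#(Bit#(8)) q       <- mkFIFO;
   Reg#(Bit#(8))  counter <- mkReg(0);

   // Explicit condition: this rule is ready only while counter < 10.
   // Implicit condition: q.enq requires the FIFO not to be full.
   rule produce (counter < 10);
      q.enq(counter);
      counter <= counter + 1;
   endrule

   // No explicit condition; the implicit conditions of q.first and
   // q.deq require the FIFO to be non-empty.
   rule consume;
      let x = q.first();
      q.deq();
      $display("consumed %0d", x);
   endrule
endmodule
\end{verbatim}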
\subsection{Modules and interfaces}
A module is something (TODO: I have no idea what to call it). Modules don't have types; instead, they implement some interface. This means that you can have multiple different modules that implement the same interface, which makes interoperability much easier. Interfaces are made up of two things:
\begin{itemize}
\item Methods - these allow for interaction with the module.
\item Subinterfaces - these allow for more generalization; for example, if you need to connect one more thing, you don't need to change an interface already used by other modules: you can just create a new interface that contains two subinterfaces and pass them accordingly.
\end{itemize}
\subsection{Types}
Welp, there is a ton of them.
\subsection{Typeclasses}
Type classes in Bluespec are used to group types for which specific functions are implemented. TODO:
\section{Creating a grammar}
\subsection{Why is a grammar needed?}
I had effectively 3 choices for a language to write this grammar in:
\begin{itemize}
\item Haskell - This would require me to directly tap into the compiler for information about modules etc. in packages, and I would also need to learn Haskell effectively from scratch. I hope that I don't need to explain why learning Haskell while trying to understand the compiler of another language is a bad idea.
\item Tcl (pronounced ``tickle'') - This language is used as the scripting language in both Intel and Xilinx tools. I will be reading packages using Tcl scripts provided by the creators of the Bluespec compiler (BSC), but my understanding is that those scripts are just handy wrappers around some Haskell code. This is also a foreign language to me, with a minimal presence and negligible learning resources.
\item Python - Firstly, this is a language I have experience working with; secondly, it is widely supported, and there is extensive tooling for it. Its flexible typing system allows for rapid experimentation.
\end{itemize}
I have chosen Python for this project, as I didn't want to dabble in the BSC: I was advised that this is dangerous for a Part II project, since compilers are overwhelmingly complex and difficult to understand. This means that I need to parse the output of Bluetcl (a compiler script for inspecting packages), and to do this I am going to need a grammar.
\subsection{What grammar is needed?}
Bluetcl produces many outputs, but the two that we are going to focus on are descriptions of functions and descriptions of types. To simplify parsing, I will use one grammar capable of parsing both outputs, as some grammar structures are reused in both.
\subsection{Where to find this grammar?}
Unfortunately this grammar is not documented anywhere, so I needed to reverse engineer it. The resulting grammar might not be perfect and might not cover every input, but if it is created carefully to allow for as much flexibility as possible, and a large body of test input in the form of the standard library is used during its creation, we should be exposed to enough examples to be able to parse a decently large subset of future inputs.
Here are other reasons to justify this approach:
\begin{itemize}
\item Heaps' law suggests that the number of unique words in a given body of text is roughly proportional to the square root of the number of words in the text. I think it is fair to assume that something similar will hold if we consider the number of unique grammar rules.
\item This grammar, while different from the grammar of the Bluespec language, maps a subset of the Bluespec grammar, so we can supplement our deductions with cases that we expect to arise.
\item We don't need to understand everything just to connect modules, and because things at the module level need to be less abstract (they have to be synthesizable), we don't expect highly exotic constructs to appear in higher-level modules.
\item This approach is probably the best way for me anyway.
\end{itemize}
\subsection{Technical aspects}
This is an EBNF grammar. I parse it using the Lark library for Python with the Earley parser, as it is capable of arbitrary-length lookahead. The grammar I created contains roughly 90 rules; I won't include all of them here, but I will show a few examples to give a feel for what is happening.
\begin{tcolorbox}[title = Parsing a position (TODO: maybe find a better example with a shorter line)]
\begin{verbatim}
tcl_position: "{" "position" "{" tcl_path NUMBER NUMBER ["{" "Library" identifier_u "}" ]"}""}"
// todo check paths with spaces
tcl_path: ["%/"] /(([.]{1,2})|(\w+))/ ["/" /(([.][.])|([.])|(\w+))/]* "." /\w+/
-------- Text to parse ---------
{position {%/Libraries/Connectable.bs 25 1 {Library Connectable}}}
\end{verbatim}
\includegraphics[width=0.4\textwidth]{images/TCLPath.png}
\end{tcolorbox}
A nice feature supported by Lark is the ability to embed regular expressions in the grammar; I mention this because it effectively means having a parser inside of a parser. A handy tool for debugging and developing the grammar was this website: \href{https://www.lark-parser.org/ide/}{https://www.lark-parser.org/ide/}; it can run the parser online and show the output tree.
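As an illustration of the setup, the snippet below shows roughly how a grammar like this can be loaded and driven with Lark's Earley parser. It is a simplified sketch, not the project's actual grammar or driver code: the toy grammar, terminal names, and input string are invented for the example.
\begin{verbatim}
# Illustrative sketch only: driving Lark with an Earley parser.
from lark import Lark

# A toy grammar in Lark's EBNF dialect, standing in for the real ~90-rule one.
GRAMMAR = r"""
    start: "{" "position" "{" PATH NUMBER NUMBER "}" "}"
    PATH: /%[\/.\w]+/
    %import common.NUMBER
    %import common.WS
    %ignore WS
"""

parser = Lark(GRAMMAR, parser="earley")  # Earley allows arbitrary lookahead
tree = parser.parse("{position {%/Libraries/Connectable.bs 25 1}}")
print(tree.pretty())                     # inspect the resulting parse tree
\end{verbatim}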
\chapter{Implementation}
\section{Reading packages}
\subsection{Bluetcl}
As mentioned before, bluetcl is a tool written in Tcl that behaves like a library. To interact with this tool I have written a script using the pexpect library. This script works by creating a bluetcl subprocess, and it behaves like a library with functions that allow for performing certain queries. The core of this script is a function called \verb!fancy_call! that takes as input a string containing a command and returns its output stripped of warnings, or raises an error if one occurred (for example, when a package was not found). To remove the warnings I make some assumptions (a sketch of this idea follows the list below).
\begin{itemize}
\item I only care about supporting a fixed set of commands.
\item For those commands, the output that I care about is always the last line (this was checked empirically).
\item The output of a command is always followed by a $\%$ character, which never occurs in the rest of the output and marks the end of the output.
\end{itemize}
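The following is a minimal sketch of the \verb!fancy_call! idea using pexpect. It is illustrative rather than the project's actual script: the prompt handling, class name, and error behaviour are simplified versions of the assumptions listed above.
\begin{verbatim}
# Illustrative sketch only: drive a bluetcl subprocess with pexpect.
import pexpect

class BluetclSession:
    def __init__(self):
        # Spawn bluetcl and wait for its first prompt.
        self.child = pexpect.spawn("bluetcl", encoding="utf-8")
        self.child.expect("% ")

    def fancy_call(self, command: str) -> str:
        """Send one command and return its last output line, warnings stripped."""
        self.child.sendline(command)
        self.child.expect("% ")          # the % prompt marks the end of the output
        lines = self.child.before.strip().splitlines()
        if not lines:
            raise RuntimeError("no output for command: " + command)
        return lines[-1]                  # warnings, if any, precede the last line
\end{verbatim}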
This simple script allows me to:
\begin{itemize}
\item Initialize the subprocess
\item Add a folder to the bluetcl search path
\item Load a package (bluetcl takes care of loading dependencies)
\item Get the list of loaded packages
\item List the functions in a package
\item List the types in a package
\item Get information about the types and functions in a package
\end{itemize}
\section{Parsing}
TODO, I might want to clean this up a bit before I write about it.
\section{Synthesizing}
The goal of this project is to synthesize a
\end{document}
\input{../utils/slide-preamble1.tex}
\input{../utils/slide-preamble2.tex}
\input{../utils/macros.tex}
\bibliography{../bib/references}
\input{../utils/title-info.tex}
\title[Professional Development II]{Professional Development II: Career
planning (and graduate/professional school preparation)}
% \date{\today}
\date{April 16, 2015}
\begin{document}
\begin{noheadline}
\maketitle
\end{noheadline}
\nopost{
\begin{noheadline}
\begin{frame}[c]
\vspace{-1.3cm}
\begin{center}
\includegraphics[height=1.3\textheight]{../images/seating-chart.pdf}
\end{center}
\end{frame}
\end{noheadline}
}
\begin{noheadline}
\begin{frame}
\frametitle{Today's questions:}
\tableofcontents[subsectionstyle=hide]
\end{frame}
\end{noheadline}
\section{What careers can you pursue with a biology-related degree?}
\clickerslide{
\begin{noheadline}
\begin{frame}[t]
\textcolor{blue}{Q 1:} What is the \#1 career choice of Bio 180 students?
\begin{table}%[htbp]
\begin{flushleft}
\begin{tabular}{ l l r }
\textcolor{red}{1)} & Dentist (DDS) & \\
\textcolor{red}{2)} & Ecology/conservation & \\
\textcolor{red}{3)} & Law/business & \\
\textcolor{red}{4)} & Pharmacy & \\
\textcolor{red}{5)} & PT/OT/allied health & \\
\textcolor{red}{6)} & Physician (MD) & \\
\textcolor{red}{7)} & Plant science & \\
\textcolor{red}{8)} & Public/global health & \\
\textcolor{red}{9)} & Research: biomed/biotech & \\
\textcolor{red}{10)} & Teaching & \\
\textcolor{red}{11)} & Undecided & \\
\textcolor{red}{12)} & Other & \\
\textcolor{red}{13)} & Computer Science/Engineering & \\
\textcolor{red}{14)} & Veterinary Medicine & \\
\end{tabular}
\end{flushleft}
\end{table}
\end{frame}
\end{noheadline}
}
\clickerslide{
\begin{noheadline}
\begin{frame}[t]
\textcolor{blue}{Q 2:} What is the LEAST popular choice of Bio 180 students?
\begin{table}%[htbp]
\begin{flushleft}
\begin{tabular}{ l l r }
\textcolor{red}{1)} & Dentist (DDS) & \uncover<2->{5.5\%} \\
\textcolor{red}{2)} & Ecology/conservation & \uncover<3->{7.1\%} \\
\textcolor{red}{3)} & Law/business & \uncover<4->{2.9\%} \\
\textcolor{red}{4)} & Pharmacy & \uncover<5->{6.4\%} \\
\textcolor{red}{5)} & PT/OT/allied health & \uncover<6->{9.5\%} \\
\textcolor{red}{6)} & Physician (MD) & \uncover<7->{31.2\%} \\
\textcolor{red}{7)} & Plant science & \uncover<8->{1.0\%} \\
\textcolor{red}{8)} & Public/global health & \uncover<9->{5.4\%} \\
\textcolor{red}{9)} & Research: biomed/biotech & \uncover<10->{12.3\%} \\
\textcolor{red}{10)} & Teaching & \uncover<11->{1.2\%} \\
\textcolor{red}{11)} & Undecided & \uncover<12->{6.9\%} \\
\textcolor{red}{12)} & Other & \uncover<13->{4.0\%} \\
\textcolor{red}{13)} & Computer Science/Engineering & \uncover<14->{5.3\%} \\
\textcolor{red}{14)} & Veterinary Medicine & \uncover<15->{1.2\%} \\
\end{tabular}
\end{flushleft}
\end{table}
\note[item]{Data from Fall 2014}
\note[item]{70.2\% clinical medicine or clinical research}
\note[item]{Irony: \#1 public health issue in developed countries is
obesity, and \#1 issue in developing countries is food security}
\note[item]{The biggest challenge in health is food, and no one wants to
work on it}
\note[item]{Demand for biomed research is very low\ldots crisis with so
many in/entering field}
\note[item]{Demand for plant researchers expected to be extremely high}
\end{frame}
\end{noheadline}
}
{
\usebackgroundtemplate{\includegraphics[page=5,width=\paperwidth]{./scotts-slides.pdf}}
\begin{frame}[plain]
\note[item]{What do Husky bio grads actually do?}
\end{frame}
}
{
\usebackgroundtemplate{\includegraphics[page=6,width=\paperwidth]{./scotts-slides.pdf}}
\begin{frame}[plain]
\end{frame}
}
{
\usebackgroundtemplate{\includegraphics[page=7,width=\paperwidth]{./scotts-slides.pdf}}
\begin{frame}[plain]
\end{frame}
}
{
\usebackgroundtemplate{\includegraphics[page=8,width=\paperwidth]{./scotts-slides.pdf}}
\begin{frame}[plain]
\end{frame}
}
{
\usebackgroundtemplate{\includegraphics[page=9,width=\paperwidth]{./scotts-slides.pdf}}
\begin{frame}[plain]
\end{frame}
}
{
\usebackgroundtemplate{\includegraphics[page=1,width=\paperwidth]{./last-jobs-slide.pdf}}
\begin{frame}[plain]
\end{frame}
}
\section{How should you prepare for graduate school, professional school, or a
job in industry?}
\begin{noheadline}
\begin{frame}[t]
\frametitle{How to prepare for graduate school, professional school, or a job
in industry?}
\vspace{-4mm}
\begin{adjustwidth}{-1.5em}{-1.5em}
\begin{flushleft}
\includegraphics[page=1,width=0.96\textwidth]{../images/us-worker-task-plot.png}
\end{flushleft}
\end{adjustwidth}
\note[item]{Study by economists at MIT}
\note[item]{Label lines---show parents (late 1970s)---what's the
punchline---what does it mean for you versus your parents}
\note[item]{Top non-routine analytical and non-routine interpersonal}
\note[item]{Routine and manual}
\note[item]{TAKE HOME: Non-routine tasks becoming much more
prevalent---Bloom's 1-2 = routine; Bloom's 3-6 = non-routine}
\end{frame}
\end{noheadline}
\begin{noheadline}
\begin{frame}[t]
\begin{adjustwidth}{-1.5em}{-1.5em}
From the sheets we passed out, what are the top 3 attributes of
successful applicants?
\nbox{Problem solving/intellectual potential; Ability to work with
others; Maturity/emotional stability}
\vspace{2cm}
How will you get better at them?
\nbox{In an interview, you have to have \highlight{data}!}
\nbox{Seek out leadership positions}
\end{adjustwidth}
\end{frame}
\end{noheadline}
\begin{noheadline}
\begin{frame}[t]
\begin{adjustwidth}{-1.5em}{-1.5em}
\begin{itemize}
\item What is the role of your numbers (GPA, scores on
DAT/MCAT/PCAT/GRE, etc.)?
\nbox{They are baseline---It shows that you are good at school}
\vspace{2cm}
\item Why is the average age of matriculation to med school 25? Why
has it been increasing recently?
\nbox{Looking for individuals with greater emotional maturity,
perspective, and interpersonal skills}
\end{itemize}
\end{adjustwidth}
\end{frame}
\end{noheadline}
\begin{noheadline}
\begin{frame}[t]
\begin{adjustwidth}{-1.5em}{-1.5em}
The three most revealing interview questions:
\begin{enumerate}
\item What are you going to do to change the world?
\nbox{Are you an original thinker and intellectual leader? Have
you thought deeply about the broader context of problems in
your field?}
\vspace{1cm}
\item Tell me about a book you have read recently and how it has
changed you?
\nbox{Do you immerse yourself in what you do and think
critically? Or are you just going through the motions? Are
you creative?}
\vspace{1cm}
\item Do you enjoy your own mind? And, if so, how?
\nbox{Do you value independent intellectual growth and
curiosity}
\end{enumerate}
\nbox{It helps to have experiences (data!) to talk about!}
\end{adjustwidth}
\end{frame}
\end{noheadline}
\begin{noheadline}
\begin{frame}[t]
\begin{adjustwidth}{-1.5em}{-1.5em}
How should you go about getting letters of recommendation?
\nbox{Letter writers have to describe how long they have known the
applicant and in what capacity? You have to establish professional
relationships with potential letter writers; you have to take the
initiative to establish these relationships}
\end{adjustwidth}
\end{frame}
\end{noheadline}
\end{document}
\clickerslide{
\begin{frame}
\begin{clickerquestion}
\item
\begin{clickeroptions}
\item
\item
\item
\item
\end{clickeroptions}
\end{clickerquestion}
\end{frame}
}
\documentclass{beamer}
\usepackage{comment}
\usepackage[utf8]{inputenc}
% \usecolortheme{}
\usetheme{Warsaw}
\begin{comment}
Antibes Bergen Berkeley Berlin Copenhagen
Darmstadt Dresden Frankfurt Goettingen Hannover
Ilmenau JuanLesPins Luebeck Madrid Malmoe
Marburg Montpellier PaloAlto Pittsburgh Rochester
Singapore Szeged Warsaw boxes CambridgeUS
albatross beaver beetle crane dolphin
dove fly lily orchid rose seagull
seahorse whale wolverine
\end{comment}
\title{CS2309 Project Presentation: Web Crawling}
\author{Lim Jia Yee}
\begin{document}
\frame{\titlepage}
\section{Problem}
\begin{frame}
\frametitle{Problem: All thanks to you.}
\begin{itemize}
\item It is easy to create content on the web.
\item It is easier for the web to expand now thanks to you.
\item It is easiest to take web search for granted.
\end{itemize}
\end{frame}
\subsection{Motivation}
\begin{frame}
\frametitle{Motivation}
\begin{itemize}
\item Ensure web search remains optimised; prevent world destruction.
\item Vested interest in learning from data (i.e. machine learning).
\item Brainchild: Exploring the possibility of enhancing web crawling with decisions based on data.
\end{itemize}
\end{frame}
\subsection{Relevance}
\begin{frame}
\frametitle{Relevance}
\begin{itemize}
\item Educate on the considerations of designing a web crawler, or equivalent systems.
\item Increasing the scope of machine learning as a solution.
\item Target Audience: People who maintain focused web crawlers
\end{itemize}
\end{frame}
\subsection{Solution}
\begin{frame}
\frametitle{Three-Part Solution}
\begin{enumerate}
\item Requesting
\item Deciding
\item Parsing
\end{enumerate}
\end{frame}
\section{Web Crawling}
\begin{frame}
\frametitle{Single Web Crawler Algorithm}
\begin{enumerate}
\item Decide on a good hyperlink to begin crawling from.
\item Fetch the corresponding web page of the hyperlink in (1).
\item Parse for all hyperlinks and store them.
\item Process the contents of the web page.
\item From the storage of hyperlinks, extract an unvisited one.
\item Repeat from (2).
\end{enumerate}
\end{frame}
\subsection{Context}
\begin{frame}
\frametitle{Graph Problem}
\begin{itemize}
\item The World Wide Web is the graph.
\item Directed, unweighted, unknown.
\item Vertices: Web pages and their contents
\item Edges: Hyperlinks
\end{itemize}
\end{frame}
\subsection{Graph Algorithms}
\begin{frame}
\frametitle{Graph Exploration}
\begin{itemize}
\item Breadth-first search
\item Depth-first search
\item Iterative deepening depth-first search
\item Beam search (i.e. enhanced best-first search)
\end{itemize}
\end{frame}
\subsection{Algorithm Analysis}
\begin{frame}
\frametitle{Complexity Analysis}
\begin{itemize}
\item Breadth-first search: Memory
\item Depth-first search: Narrow scope
\item Iterative deepening depth-first search: Data structure
\item Beam search: Accuracy of the heuristic
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Solution Part I: Graph Exploration}
\begin{itemize}
\item Modified depth-first search
\item Does not always push to the stack
\end{itemize}
\end{frame}
\section{Decision Making}
\begin{frame}
\frametitle{Alert: Another Performance Bottleneck}
\begin{itemize}
\item Fetching the web page and parsing it.
\item Why not decide beforehand whether we should even do it?
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Solution Part II: Reinforcement Learning}
\begin{itemize}
\item Why?
\item No training data.
\item Unknown until experienced.
\end{itemize}
\end{frame}
\subsection{Reinforcement Learning and MDP}
\begin{frame}
\frametitle{Markov Decision Process (MDP)}
\begin{enumerate}
\item State
\item Action: To parse or not to parse, that is the question.
\item Reward
\item Policy
\end{enumerate}
\end{frame}
\subsection{Components of MDP}
\begin{frame}
\frametitle{Reinforcement Learning: State}
\begin{enumerate}
\item How relevant the current host is.
\item How relevant the previous host was.
\item How many web pages belonging to the current host were actually parsed, and not skipped.
\item How relevant the URL is.
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{Reinforcement Learning: State}
\begin{itemize}
\item Real-valued states $ \Rightarrow $ infinite states.
\item Reduce dimension via intervals.
\item Result: $ 2^4 = 16 $ (high and low features) states.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Reinforcement Learning: Action}
\begin{itemize}
\item Parse or not.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Reinforcement Learning: Reward}
\begin{itemize}
\item Number of ``high" features in the state.
\item No additional penalisation of ``low" features in the state.
\end{itemize}
\end{frame}
\subsection{Finite State MDP}
\begin{frame}
\frametitle{Reinforcement Learning: Policy}
\begin{itemize}
\item Value Iteration Algorithm
\item Policy Iteration Algorithm
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Value Iteration Algorithm}
\begin{enumerate}
\item Assign random \textbf{\underline{true values}} to each of the states.
\item For every state, calculate a new true value based on its neighbours' current true values.
\item Terminate if none of the true values in (2) changed by more than a user-specified $ \delta $. Else, repeat from (2).
\end{enumerate}
\end{frame}
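% For reference: the standard Bellman backup behind step (2) of value iteration.
% Generic textbook form, not notation specific to this project.
\begin{frame}
\frametitle{Value Iteration: The Update Step}
For reference, step (2) above is the standard Bellman backup (generic textbook form, not project-specific notation):
\[
V_{k+1}(s) \;=\; \max_{a} \sum_{s'} P(s' \mid s, a)\left[ R(s, a, s') + \gamma\, V_k(s') \right]
\]
\begin{itemize}
\item $ \gamma $ is the discount factor.
\item Iteration stops once $ \max_{s} |V_{k+1}(s) - V_k(s)| < \delta $.
\end{itemize}
\end{frame}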
\begin{frame}
\frametitle{Value Iteration Algorithm: Limitations}
\begin{itemize}
\item Slow convergence.
\item Do we want true value or policy?
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Policy Iteration Algorithm}
\begin{enumerate}
\item Create a random \textbf{\underline{policy}} (i.e. assign a random action for each state).
\item Calculate the true value of each state given the policy in (1).
\item Based on these new true values, choose the optimal action for each state.
\item Terminate if none of the actions in (3) is changed. Else, repeat from (2).
\end{enumerate}
\end{frame}
\section{Parsing}
\begin{frame}
\frametitle{Solution Part III: Parsing}
\begin{itemize}
\item Not the main focus, but needed for testing.
\item Define ``relevant": Keywords in the web page match the list of ``search words" prepared beforehand.
\end{itemize}
\end{frame}
\subsection{RAKE}
\begin{frame}
\frametitle{Rapid Automatic Keyword Extraction (RAKE)}
\begin{enumerate}
\item Remove punctuation and special characters.
\item Remove stop words.
\item Stem the remaining words or phrases.
\item Find the degree of each word or phrase in (3).
\item Count the frequency of each word or phrase in (3).
\item Compute \textit{score = $ \frac{degree}{frequency} $} for each word or phrase in (3).
\end{enumerate}
\end{frame}
\begin{frame}
\frametitle{RAKE: Specifications}
\begin{itemize}
\item Assumes input is in standard English.
\item Numbers are also extracted.
\end{itemize}
\end{frame}
\subsection{word2vec}
\begin{frame}
\frametitle{Similarity Measure: word2vec}
\begin{itemize}
\item Words as vectors.
\item Similarity $ \Rightarrow $ distance between words.
\end{itemize}
\end{frame}
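% For reference: the usual cosine-similarity measure for word vectors.
% Standard formula, not a claim about the exact metric used in this project.
\begin{frame}
\frametitle{word2vec: Similarity as a Formula}
The usual measure is cosine similarity between word vectors (standard formula, shown for reference; not necessarily the exact metric used in the project):
\[
\mathrm{sim}(u, v) \;=\; \frac{u \cdot v}{\|u\| \, \|v\|}
\]
\begin{itemize}
\item Values close to 1 $ \Rightarrow $ the words appear in similar contexts.
\item A ``distance'' can then be taken as $ 1 - \mathrm{sim}(u, v) $.
\end{itemize}
\end{frame}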
\begin{frame}
\frametitle{word2vec: Limitations}
\begin{itemize}
\item Bias towards exact words.
\item Resolve by classifying a range as ``similar''.
\end{itemize}
\end{frame}
\section{Experiments}
\begin{frame}
\frametitle{Three-Part Experiment}
\begin{itemize}
\item Modified DFS
\item Policy Iteration
\item RAKE and word2vec
\end{itemize}
\end{frame}
\subsection{Modified DFS}
\begin{frame}
\frametitle{Modified DFS versus BFS}
\begin{itemize}
\item Reduced execution time (approx. 40\%)
\item Reduced memory usage (approx. 40\%)
\item Not verified: ``Quality'' of the visited web pages
\item Can be verified via logic, but the algorithm will have to be even more accurate in targeting particular kinds of URLs.
\end{itemize}
\end{frame}
\subsection{Policy Iteration}
\begin{frame}
\frametitle{Final Policy}
\begin{itemize}
\item Policy was ``False'' for every state $ \Rightarrow $ nothing will ever be parsed.
\item Policy remains the same despite increasing $ \gamma $, the number of iterations, and the penalty for skipping.
\item Conclusion:
\begin{itemize}
\item Insufficient domain knowledge,
\item Should not define parameters by hand, and/or
\item Unsuitable library
\end{itemize}
\item Try model-free learning instead.
\end{itemize}
\end{frame}
\subsection{Phrase Similarity}
\begin{frame}
\frametitle{Phrase Similarity: Generality}
\begin{itemize}
\item Keywords which are more general than search words can still score very high in similarity.
\item Discovered: ``Relevance'' and ``similarity'' are not the same.
\end{itemize}
\end{frame}
\subsection{Future Work}
\begin{frame}
\frametitle{Future Work: Beam Search and URL Analysis}
\begin{enumerate}
\item Rank vertices by their edges (URLs).
\item Add only the first $ N $ ranked edges to the heap.
\end{enumerate}
\end{frame}
\section{}
\begin{frame}
\frametitle{Conclusion}
\begin{itemize}
\item Failure: Nope. Instead, you have \textit{succeeded} in proving that this failed. Try something else.
\item Failure: Is when you give up.
\end{itemize}
\end{frame}
\end{document}
\chapter{iSHELL Immersion Gratings}
The Immersion Grating Echelle Spectrograph is a high resolution near infrared spectrograph for the 3.5 m NASA Infrared Telescope Facility (IRTF) located at the summit of Mauna Kea, Hawaii. The spectrograph has two modules with different immersion gratings serving as the high resolution dispersing optical elements. The two modules have wavelength ranges 1.2-2.5 $\mu$m and 3.0-5.0 $\mu$m. The short wavelength module will have a resolving power $R=80000$. The long wavelength module will have a resolving power $R=67000$. We are producing the immersion gratings at the University of Texas at Austin.
\section{Specifications}
The design and specifications of iSHELL have been described elsewhere (Rayner et al. in press). The table below lists the basic properties of iSHELL and the specifications for the immersion grating.
\begin{tabular}{llr}
\hline
& \multicolumn{2}{c}{Module} \\
\cline{2-3}
& JHK & LM \\
\hline
\multicolumn{3}{c}{Basic Properties} \\
\hline
$\lambda$ range ($\mu$m) & 1.15-2.5 & 3.0-5.0 \\
$\lambda/n$ ($\mu$m) & 0.34-0.73 & 0.88-1.5 \\
resolution, $R$ & 80k & 67k \\
slit width, $\phi$ ('') & \multicolumn{2}{c}{$\ge$0.375} \\
$\lambda/D$ ('') & 0.07-0.15 & 0.18-0.3 \\
$\phi \lambda/D$ & 5.3-2.5 & 2.0-1.25 \\
\hline
\multicolumn{3}{c}{Grating Properties} \\
\hline
pitch, $\sigma$ ($\mu$m) & 48.5 & 80 \\
top, $t$ ($\mu$m) & TBD & 30 \\
fill factor & TBD & 37.5\% \\
blaze, $\delta$ ($^\circ$) & \multicolumn{2}{c}{71.5} \\
Si apex, $a$ ($^\circ$) & \multicolumn{2}{c}{70.53+$\Delta a$} \\
$\Delta a$ ($^\circ$) & \multicolumn{2}{c}{0.8} \\
Method & e-beam & UV mask \\
\hline
\end{tabular}
A key figure of merit is the quantity $\phi \lambda/D$ listed in the table above: the number of diffraction-limited resolution elements that fit inside the seeing disk. It is a measure of the performance demand on the immersion grating in the following way. The total delivered spot is the convolution of the PSF of the seeing disk with the PSF of the immersion grating, ignoring all other optics. If the seeing disk is much larger than the diffraction limit, then the grating need not be diffraction limited. The ``coherence length'', that is, the length over which the grating must be in phase, is equal to the grating length divided by the factor $\phi \lambda/D$.
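As a quick worked example using the table values (an illustration only, not a formal tolerance analysis): for the LM module at $\lambda = 5$ $\mu$m, the slit width $\phi = 0.375''$ and $\lambda/D = 0.3''$ give
\[
\frac{\phi}{\lambda/D} \;=\; \frac{0.375''}{0.3''} \;\approx\; 1.25 ,
\]
matching the LM entry in the table, while at the blue end of the JHK module the same ratio is $0.375''/0.07'' \approx 5.3$. The larger this number, the shorter the length over which the grating must remain in phase compared to its full illuminated length.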
\section{iSHELL grating fabrication}
\subsection{Substrate properties and preparation}
The iSHELL grating fabrication has followed our heritage of grating production \cite{2010SPIE.7739E.146W}. Specifically, the technique is identical to the one used to produce the immersion grating for IGRINS. The iSHELL diffraction gratings will be larger than that for IGRINS. IGRINS has one 30 $\times$ 90 mm$^2$ immersion grating with a 30 $\times$ 30 mm square entrance face. iSHELL will have a 30 $\times$ 40 mm$^2$ entrance face, with a grating area of 40 $\times$ 95 mm$^2$. Otherwise, the substrate preparation and patterning will be identical for both gratings. Specifically, we are using identical 30 mm thick R3 Si substrates. These substrates have been bias cut from the (100) silicon plane, like a loaf of bread cut at an angle. The cutting angle is $\theta=17.6^\circ$ which, after wet anisotropic chemical etching, produces V-grooves on the Si surface with angles $a, b, c =$ 71.5$^\circ$, 70.5$^\circ$, 39$^\circ$.
\section{Expected performance of iSHELL immersion gratings}
%Outline-
%Blaze, number of orders, etc.
%Performance in backside illumination with red HeNe
\subsection{Orders, blaze peaks, and wavelength range}
The LM module will have about 70 diffraction orders, from order 173 at 3.0 $\mu$m to order 104 at 5.0 $\mu$m. Figure \ref{fig:LMbandcalc} shows blaze envelopes calculated from scalar diffraction theory using the wavelength- and temperature-dependent Si refractive index from (cite CHARMS group XX). The JHK module will cover 158 orders from 1.14 to 2.5 $\mu$m, orders 285 to 127. The blaze curves show samples at 1 nm increments, which is the resolution of the DK480 monochromator we use to evaluate the efficiency. The J-band is undersampled at 1 nm increments; it will be a challenge to accurately measure the efficiency in the J-band, since there will only be a few spectral resolution elements across the entire free spectral range.
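As a rough consistency check on these order numbers (using a single approximate Si index \(n \approx 3.4\) rather than the full wavelength- and temperature-dependent index), the Littrow grating equation in immersion gives, for the LM module,
\[
m \lambda = 2 n \sigma \sin\delta \approx 2 \times 3.4 \times 80~\mu\mathrm{m} \times \sin 71.5^{\circ} \approx 516~\mu\mathrm{m},
\]
so order \(m = 173\) is blazed near 3.0 \(\mu\)m and order \(m = 104\) near 5.0 \(\mu\)m, in agreement with the stated wavelength range.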
\begin{figure}[htb]
\begin{center}
\subfloat[Example M band blaze envelope]{\ \psfig{file=chIshell/ishell_Mband.pdf,height=3.8in,width=5.6in}}
\
\subfloat[Example L band blaze envelope]{\ \psfig{file=chIshell/ishell_Lband.pdf,height=3.8in,width=5.6in}}
\caption[Calculated $L-$ and $M-$ band blaze envelopes for iSHELL]{ L and M band blaze envelopes calculated from scalar diffraction theory in immersion. The refractive index is based on the (cite XX) Sellmeier equations for Si at room temperature (T$=295$ K).}
\label{fig:LMbandcalc}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\subfloat[Example K band blaze envelope]{\ \psfig{file=chIshell/ishell_Kband.pdf,height=3.8in,width=5.6in}}
\
\subfloat[Example J band blaze envelope]{\ \psfig{file=chIshell/ishell_Jband.pdf,height=3.8in,width=5.6in}}
\caption[Calculated $J-$ and $K-$ band blaze envelopes for iSHELL]{ J and K band blaze envelopes calculated from scalar diffraction theory in immersion. The refractive index is based on the (cite XX) Sellmeier equations for Si at room temperature (T$=295$ K).}
\label{fig:JKbandcalc}
\end{center}
\end{figure}
\subsection{Optical evaluation}
Metrology is not possible in immersion until after the costly step of cutting the Si pucks into prisms. We perform optical metrology on the grating surface after KOH etching but before cutting. The optical measurements are made at $\lambda=632$ nm in air ($n=1$). We measure the PSF over a 25 mm beam in shallow images. We construct deeper images by summing hundreds of read-noise limited CCD frames.
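As a rough guide to the depth gained by co-addition: if the individual frames are read-noise limited, summing \(N\) frames improves the signal-to-noise ratio by a factor of \(\sqrt{N}\), so a stack of a few hundred frames goes roughly an order of magnitude deeper than a single exposure.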
\section{Prototype Implementation}
\subsection*{The Prototype Architecture}
\subsection*{Validation of the Prototype}
"alphanum_fraction": 0.8376068376,
"avg_line_length": 39,
"ext": "tex",
"hexsha": "23435f16d8a75c24cd0699268f52b756a16bd589",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-05-16T10:39:00.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-07-18T13:38:25.000Z",
"max_forks_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da",
"max_forks_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_forks_repo_name": "MitchellTesla/decentralized-software-updates",
"max_forks_repo_path": "papers/working-document/prototype.tex",
"max_issues_count": 120,
"max_issues_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da",
"max_issues_repo_issues_event_max_datetime": "2021-06-24T10:20:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-06T18:29:25.000Z",
"max_issues_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_issues_repo_name": "MitchellTesla/decentralized-software-updates",
"max_issues_repo_path": "papers/working-document/prototype.tex",
"max_line_length": 41,
"max_stars_count": 10,
"max_stars_repo_head_hexsha": "89f5873f82c0ff438e2cd3fff83cc030a46e29da",
"max_stars_repo_licenses": [
"ECL-2.0",
"Apache-2.0"
],
"max_stars_repo_name": "MitchellTesla/decentralized-software-updates",
"max_stars_repo_path": "papers/working-document/prototype.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-06T02:08:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-01-25T19:38:49.000Z",
"num_tokens": 23,
"size": 117
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Untitled},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{longtable,booktabs}
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\title{Untitled}
\author{}
\date{\vspace{-2.5em}}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{2}
\tableofcontents
}
\hypertarget{table-including--symbol-escapef-not-set}{%
\section{Table including \%-symbol, escape=F not set}\label{table-including--symbol-escapef-not-set}}
\begin{table}[!h]
\caption{\label{tab:unnamed-chunk-1}Caption}
\centering
\begin{threeparttable}
\begin{tabular}[t]{lcccc}
\toprule
Variable & \textbackslash{}makecell[c]\{All\textbackslash{}\textbackslash{}(n = 300)\} & \textbackslash{}makecell[c]\{Group1\textbackslash{}\textbackslash{}(n = 120)\} & \textbackslash{}makecell[c]\{Group2\textbackslash{}\textbackslash{}(n = 100)\} & \textbackslash{}makecell[c]\{Group3\textbackslash{}\textbackslash{}(n = 80)\}\\
\midrule
Var1 & 31\% & 79\% & 51\% & 14\%\\
Var2 & 67\% & 42\% & 50\% & 43\%\\
Var3 & 14\% & 25\% & 90\% & 91\%\\
Var4 & 69\% & 91\% & 57\% & 92\%\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item \textit{Note:}
\item * This is a note to show what * shows in this table plus some additional words to make this string a bit longer. Still a bit more
\end{tablenotes}
\end{threeparttable}
\end{table}
\hypertarget{table-including--symbol-escapef}{%
\section{Table including \%-symbol, escape=F}\label{table-including--symbol-escapef}}
\begin{verbatim}
This leads to an error; see the picture below
\end{verbatim}
\begin{table}[!h]
\caption{\label{tab:unnamed-chunk-2}Caption}
\centering
\begin{threeparttable}
\begin{tabular}[t]{lcccc}
\toprule
Variable & \makecell[c]{All\\(n = 300)} & \makecell[c]{Group1\\(n = 120)} & \makecell[c]{Group2\\(n = 100)} & \makecell[c]{Group3\\(n = 80)}\\
\midrule
Var1 & 31\% & 79\% & 51\% & 14\%\\
Var2 & 67\% & 42\% & 50\% & 43\%\\
Var3 & 14\% & 25\% & 90\% & 91\%\\
Var4 & 69\% & 91\% & 57\% & 92\%\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item \textit{Note:}
\item * This is a note to show what * shows in this table plus some additional words to make this string a bit longer. Still a bit more
\end{tablenotes}
\end{threeparttable}
\end{table}
\end{document}
% +-======-+
% Copyright (c) 2003-2007 United States Government as represented by
% the Admistrator of the National Aeronautics and Space Administration.
% All Rights Reserved.
%
% THIS OPEN SOURCE AGREEMENT ("AGREEMENT") DEFINES THE RIGHTS OF USE,
% REPRODUCTION, DISTRIBUTION, MODIFICATION AND REDISTRIBUTION OF CERTAIN
% COMPUTER SOFTWARE ORIGINALLY RELEASED BY THE UNITED STATES GOVERNMENT AS
% REPRESENTED BY THE GOVERNMENT AGENCY LISTED BELOW ("GOVERNMENT AGENCY").
% THE UNITED STATES GOVERNMENT, AS REPRESENTED BY GOVERNMENT AGENCY, IS AN
% INTENDED THIRD-PARTY BENEFICIARY OF ALL SUBSEQUENT DISTRIBUTIONS OR
% REDISTRIBUTIONS OF THE SUBJECT SOFTWARE. ANYONE WHO USES, REPRODUCES,
% DISTRIBUTES, MODIFIES OR REDISTRIBUTES THE SUBJECT SOFTWARE, AS DEFINED
% HEREIN, OR ANY PART THEREOF, IS, BY THAT ACTION, ACCEPTING IN FULL THE
% RESPONSIBILITIES AND OBLIGATIONS CONTAINED IN THIS AGREEMENT.
%
% Government Agency: National Aeronautics and Space Administration
% Government Agency Original Software Designation: GSC-15354-1
% Government Agency Original Software Title: GEOS-5 GCM Modeling Software
% User Registration Requested. Please Visit http://opensource.gsfc.nasa.gov
% Government Agency Point of Contact for Original Software:
% Dale Hithon, SRA Assistant, (301) 286-2691
%
% +-======-+
\section{Timing}
%
The module {\tt m\_zeit} is a multi-timer of process times and wall-clock
times. For a particular subroutine, this module produces the wall-clock
time per PE or for all the PEs. The timing data are automatically written
at the end of the run in the standard output file.
There are four main procedures:
\noindent {\tt zeit\_ci:} Pushes a new name to the timer.
It receives as argument the name of the subroutine to be measured.
\noindent {\tt zeit\_co:} Pops the current name off the timer. It has a
variable that returns (1) the NET timing data charged under the account
name only, and (2) the SCOPE timing data since the last zeit\_ci() call with
the same account name at the outermost level.
Its argument is the same as zeit\_ci().
\noindent {\tt zeit\_flush:} Prints the timing data (per PE) to the logical
unit provided as an argument. It is called once at the end of the code.
zeit\_flush has two integer arguments: the first is the logical
unit for the output, and the second (optional) describes the type of
output to be presented.
\noindent {\tt zeit\_allflush:} Prints the timing for all PEs.
It has four integer arguments.
The first two are the MPI communicator and the PE number of the master
processor, respectively, and the last two are the same as in zeit\_flush.
\newcommand{\tb}{\overline{t}}
We can obtain a summary (obtained by calling {\tt zeit\_allflush:}) of the
timing data of all PEs with quantified load balancing measures:
%
\begin{eqnarray*}
x &=& \frac{\max(t) - \tb}{N\tb} \times 100\% \\
i &=& \frac{\max(t) - \tb}{\max(t)} \times 100\% \\
r &=& \frac{1}{N\tb} \sum_{t > \tb}{(t-\tb)} \times 100\%
\end{eqnarray*}
%
where
%
\begin{center}
\begin{tabular}{rl}
$t$: & time by any process element \\
$\tb$: & mean time by all process elements \\
$x$: & the ma{\bf x}imum percentage load deviation \\
$i$: & percentage {\bf i}dle process-time or load {\bf i}mbalance \\
$r$: & percentage {\bf r}elocatable loads \\
$N$: & {\bf n}umber of process elements
\end{tabular}
\end{center}
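As an illustrative sketch (not part of {\tt m\_zeit} itself), the three statistics
can be reproduced from a list of per-PE times; the sum in \(r\) is taken over
the PEs whose time exceeds the mean:
\begin{verbatim}
# Sketch (Python, for illustration only) of the load-balance statistics
# reported by zeit_allflush(), given a list t of per-PE times.
def load_balance(t):
    N = len(t)
    tbar = sum(t) / N                            # mean time over all PEs
    x = (max(t) - tbar) / (N * tbar) * 100.0     # maximum % load deviation
    i = (max(t) - tbar) / max(t) * 100.0         # % idle process-time
    r = sum(ti - tbar for ti in t if ti > tbar) / (N * tbar) * 100.0
    return x, i, r                               # r: % relocatable loads

# Example: three PEs taking 1, 2 and 3 seconds -> (16.7, 33.3, 16.7) approx.
print(load_balance([1.0, 2.0, 3.0]))
\end{verbatim}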
\begin{verbatim}
EXAMPLE 1:
Assume that we want to record the timing numbers from a code running
on three processors. Each processor (PE) calls its own subroutine:
PE 0 calls subA
PE 1 calls subB
PE 2 calls subC
We measure the time spent by each PE while executing its own section.
The output per PE is obtained from the call of zeit_flush. In addition,
PE 0 gives a summary of the timing results on all the PEs (call of
zeit_allflush).
Code:
program runzeit
use m_mpout,only : mpout
use m_mpout,only : mpout_open
use m_mpout,only : mpout_log
use m_mpout,only : mpout_ison
use m_mpout,only : mpout_close
use m_mpif90,only : MP_init
use m_mpif90,only : MP_finalize
use m_mpif90,only : MP_comm_world
use m_mpif90,only : MP_comm_rank
use m_zeit,only : zeit_ci
use m_zeit,only : zeit_co
use m_zeit,only : zeit_flush
use m_zeit,only : zeit_allflush
use m_zeit,only : MWTIME ! MPI_Wtime() wall-clock time
use m_zeit,only : PUTIME ! times() process user time
use m_zeit,only : PSTIME ! times() process system time
use m_die ,only : MP_die
implicit none
character(len=*),parameter :: myname='runzeit'
integer :: ier,myID
call MP_init(ier)
if(ier/=0) call MP_die(myname,'MP_init()',ier)
call zeit_ci(myname)
call MP_comm_rank(MP_comm_world,myID,ier)
if(ier/=0) call MP_die(myname,'MP_comm_rank()',ier)
select case(mod(myID,3))
case(0)
call zeit_ci('subA')
call subA(mpout)
call zeit_co('subA')
case(1)
call zeit_ci('subB')
call subB(mpout)
call zeit_co('subB')
case(2)
call zeit_ci('subC')
call subC(mpout)
call zeit_co('subC')
end select
call zeit_co(myname)
call mpout_open(pfix='zeit',mask=0)
call zeit_flush(lu=mpout,umask=MWTIME+PUTIME+PSTIME)
call zeit_allflush(comm=MP_comm_world,root=0,lu=mpout, &
umask=MWTIME+PUTIME+PSTIME)
call mpout_close()
call MP_finalize(ier)
if(ier/=0) call MP_die(myname,'MP_init()',ier)
write(mpout,'(2a)') myname,': normal termination'
stop
end program runzeit
Output:
*****Timing on PE 0*****
Summary from zeit_flush()
------------------------------------------------------------------------
[MWTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 57.9+ 0.0 00:00 100.0+
runzeit 1 0.0 0.0 00:00 0.7% 0.0 00:00 42.1%
subA 1 0.0 0.0 00:00 41.4% 0.0 00:00 41.4%
------------------------------------------------------------------------
[PUTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subA 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
[PSTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subA 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
Summary from zeit_allflush()
------------------------------------------------------------------------
[MWTIME]x5 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
..zeit. 0.0 0.0 001 5 7 19 0.0 0.0 001 3 4 13
runzeit 0.0 0.0 004 2 6 2 0.0 0.0 000 28 28 58
subA 0.0 0.0 000 62 62 75 0.0 0.0 000 62 62 75
subB 0.0 0.0 004 35 60 63 0.0 0.0 004 35 60 63
subC 0.0 0.0 002 80 80 80 0.0 0.0 002 80 80 80
------------------------------------------------------------------------
[PUTIME]x5 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
..zeit. 0.0 0.0 004 5 20 20 0.0 0.0 004 5 20 20
runzeit 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subA 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subB 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subC 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
------------------------------------------------------------------------
[PSTIME]x5 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
..zeit. 0.0 0.0 002 80 80 80 0.0 0.0 002 80 80 80
runzeit 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subA 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subB 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
subC 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
------------------------------------------------------------------------
*****Timing on PE 1*****
------------------------------------------------------------------------
[MWTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 85.9+ 0.0 00:00 100.0+
runzeit 1 0.0 0.0 00:00 0.5% 0.0 00:00 14.1%
subB 1 0.0 0.0 00:00 13.5% 0.0 00:00 13.5%
------------------------------------------------------------------------
[PUTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subB 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
[PSTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subB 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
*****Timing on PE 2*****
------------------------------------------------------------------------
[MWTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 83.3+ 0.0 00:00 100.0+
runzeit 1 0.0 0.0 00:00 1.0% 0.0 00:00 16.7%
subC 1 0.0 0.0 00:00 15.7% 0.0 00:00 15.7%
------------------------------------------------------------------------
[PUTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subC 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
[PSTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
..zeit. 1s 0/ 2 0.0 00:00 0.0+ 0.0 00:00 0.0+
runzeit 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
subC 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
\end{verbatim}
\begin{verbatim}
EXAMPLE 2:
Here we present only the timing results obtained by executing
(on 12 PEs) another sample code (not shown). We only show the output
from PE 0 where zeit_flush() and zeit_allflush() are called.
1) Output of an extended example of zeit_flush():
------------------------------------------------------------------------
[MWTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
.zeit. 1s 0/ 5 0.5 00:00 0.3+ 154.5 02:34 100.0+
run_ 1 0.0 0.0 00:00 0.0% 154.0 02:34 99.7%
rootedObs_get 1 46.1 46.1 00:46 29.8% 46.1 00:46 29.8%
solve 1 0.5 0.5 00:00 0.3% 104.6 01:45 67.7%
rootedAIGrid 1 2.4 2.4 00:02 1.6% 11.9 00:12 7.7%
AIGrid_distr_3d 1 9.5 9.5 00:09 6.1% 9.5 00:09 6.1%
AE_solve 1 31.3 31.3 00:31 20.3% 91.4 01:31 59.2%
distribute_ob 1 2.7 2.7 00:03 1.8% 2.7 00:03 1.8%
distribute_ai 1 3.2 3.2 00:03 2.1% 3.2 00:03 2.1%
CGSolver_solve 1 18.0 18.0 00:18 11.6% 46.5 00:47 30.1%
localSolve_ 9 0.2 2.2 00:02 1.4% 2.2 00:02 1.4%
sMatxU_agat 11 0.2 2.5 00:03 1.6% 2.5 00:03 1.6%
sMatxUxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxU_rscat 11 0.2 2.0 00:02 1.3% 2.0 00:02 1.3%
sMatxO_agat 11 0.2 2.1 00:02 1.4% 2.1 00:02 1.4%
sMatxOxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxO_rscat 11 0.2 2.0 00:02 1.3% 2.0 00:02 1.3%
sMatxF_agat 33 0.2 5.7 00:06 3.7% 5.7 00:06 3.7%
sMatxFxpy 33 0.1 4.0 00:04 2.6% 4.0 00:04 2.6%
sMatxF_rscat 33 0.2 8.0 00:08 5.2% 8.0 00:08 5.2%
FcstErrCovMatx_Cx
1 0.4 0.4 00:00 0.3% 7.7 00:08 5.0%
rMatxF_agat 3 0.1 0.3 00:00 0.2% 0.3 00:00 0.2%
rMatxFxpy 3 1.1 3.3 00:03 2.1% 3.3 00:03 2.1%
rMatxF_rscat 3 1.2 3.7 00:04 2.4% 3.7 00:04 2.4%
rootedAIGrid_intp
1 0.8 0.8 00:01 0.5% 0.8 00:01 0.5%
wGrADS 1 3.4 3.4 00:03 2.2% 3.4 00:03 2.2%
------------------------------------------------------------------------
[PUTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
.zeit. 1s 0/ 5 0.0 00:00 0.0+ 80.5 01:20 100.0+
run_ 1 0.0 0.0 00:00 0.0% 80.4 01:20 100.0%
rootedObs_get 1 23.0 23.0 00:23 28.6% 23.0 00:23 28.6%
solve 1 0.1 0.1 00:00 0.1% 57.4 00:57 71.4%
rootedAIGrid 1 0.8 0.8 00:01 1.0% 6.3 00:06 7.8%
AIGrid_distr_3d 1 5.4 5.4 00:05 6.8% 5.4 00:05 6.8%
AE_solve 1 16.5 16.5 00:16 20.5% 50.5 00:51 62.8%
distribute_ob 1 1.5 1.5 00:02 1.9% 1.5 00:02 1.9%
distribute_ai 1 1.8 1.8 00:02 2.3% 1.8 00:02 2.3%
CGSolver_solve 1 10.5 10.5 00:10 13.1% 26.2 00:26 32.6%
localSolve_ 9 0.1 1.1 00:01 1.4% 1.1 00:01 1.4%
sMatxU_agat 11 0.1 1.3 00:01 1.7% 1.3 00:01 1.7%
sMatxUxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxU_rscat 11 0.1 1.3 00:01 1.6% 1.3 00:01 1.6%
sMatxO_agat 11 0.1 1.0 00:01 1.2% 1.0 00:01 1.2%
sMatxOxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxO_rscat 11 0.1 1.1 00:01 1.3% 1.1 00:01 1.3%
sMatxF_agat 33 0.1 3.4 00:03 4.3% 3.4 00:03 4.3%
sMatxFxpy 33 0.1 2.3 00:02 2.8% 2.3 00:02 2.8%
sMatxF_rscat 33 0.1 4.2 00:04 5.2% 4.2 00:04 5.2%
FcstErrCovMatx_Cx
1 0.2 0.2 00:00 0.3% 4.5 00:05 5.6%
rMatxF_agat 3 0.1 0.2 00:00 0.2% 0.2 00:00 0.2%
rMatxFxpy 3 0.6 1.9 00:02 2.3% 1.9 00:02 2.3%
rMatxF_rscat 3 0.8 2.3 00:02 2.8% 2.3 00:02 2.8%
rootedAIGrid_intp
1 0.5 0.5 00:01 0.7% 0.5 00:01 0.7%
wGrADS 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
------------------------------------------------------------------------
[PSTIME] counts period NET m:s % SCOPE m:s %
------------------------------------------------------------------------
.zeit. 1s 0/ 5 0.0 00:00 0.8+ 1.2 00:01 100.0+
run_ 1 0.0 0.0 00:00 0.8% 1.2 00:01 99.2%
rootedObs_get 1 0.3 0.3 00:00 21.0% 0.3 00:00 21.0%
solve 1 0.0 0.0 00:00 0.0% 0.9 00:01 73.9%
rootedAIGrid 1 0.1 0.1 00:00 9.2% 0.3 00:00 26.1%
AIGrid_distr_3d 1 0.2 0.2 00:00 16.8% 0.2 00:00 16.8%
AE_solve 1 0.4 0.4 00:00 33.6% 0.5 00:01 44.5%
distribute_ob 1 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
distribute_ai 1 0.1 0.1 00:00 5.9% 0.1 00:00 5.9%
CGSolver_solve 1 0.0 0.0 00:00 0.0% 0.0 00:00 2.5%
localSolve_ 9 0.0 0.0 00:00 0.8% 0.0 00:00 0.8%
sMatxU_agat 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxUxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxU_rscat 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxO_agat 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxOxpy 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxO_rscat 11 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxF_agat 33 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
sMatxFxpy 33 0.0 0.0 00:00 1.7% 0.0 00:00 1.7%
sMatxF_rscat 33 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
FcstErrCovMatx_Cx
1 0.0 0.0 00:00 0.0% 0.0 00:00 2.5%
rMatxF_agat 3 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
rMatxFxpy 3 0.0 0.0 00:00 0.0% 0.0 00:00 0.0%
rMatxF_rscat 3 0.0 0.0 00:00 2.5% 0.0 00:00 2.5%
rootedAIGrid_intp
1 0.0 0.0 00:00 3.4% 0.0 00:00 3.4%
wGrADS 1 0.0 0.0 00:00 3.4% 0.0 00:00 3.4%
------------------------------------------------------------------------
2) Output of an extended example of 12 PE zeit_allflush():
------------------------------------------------------------------------
[MWTIME]x12 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
.zeit. 4.9 5.4 006 1 7 9 155.4 155.5 008 0 0 0
run_ 0.0 0.0 002 40 55 83 150.5 154.0 000 0 0 2
rootedObs_get 46.6 46.8 008 0 0 0 46.6 46.8 008 0 0 0
solve 0.1 0.5 000 32 42 79 103.6 104.6 000 0 0 1
rootedAIGrid 1.0 2.4 000 13 13 60 11.4 11.9 000 0 0 4
AIGrid_distr_3d 10.5 10.7 002 0 1 2 10.5 10.7 002 0 1 2
AE_solve 31.8 32.4 008 0 0 2 91.9 92.2 003 0 0 0
distribute_ob 2.6 2.7 000 0 1 4 2.6 2.7 000 0 1 4
distribute_ai 3.1 3.3 006 1 1 6 3.1 3.3 006 1 1 6
CGSolver_solve 17.7 21.4 007 2 6 17 46.6 46.8 006 0 0 0
localSolve_ 2.8 8.9 00B 18 34 68 2.8 8.9 00B 18 34 68
sMatxU_agat 2.2 2.6 003 1 4 13 2.2 2.6 003 1 4 13
sMatxUxpy 0.0 0.1 002 17 38 67 0.0 0.1 002 17 38 67
sMatxU_rscat 2.0 2.3 009 1 3 12 2.0 2.3 009 1 3 12
sMatxO_agat 2.1 2.5 003 2 4 17 2.1 2.5 003 2 4 17
sMatxOxpy 0.0 0.0 003 2 14 21 0.0 0.0 003 2 14 21
sMatxO_rscat 2.0 2.6 004 2 4 22 2.0 2.6 004 2 4 22
sMatxF_agat 5.7 6.8 00B 2 4 16 5.7 6.8 00B 2 4 16
sMatxFxpy 3.2 4.0 000 2 7 20 3.2 4.0 000 2 7 20
sMatxF_rscat 8.8 10.5 001 2 3 16 8.8 10.5 001 2 3 16
FcstErrCovMatx_Cx 0.3 0.4 00B 5 13 39 7.8 7.9 006 0 0 2
rMatxF_agat 0.7 1.0 007 3 12 28 0.7 1.0 007 3 12 28
rMatxFxpy 3.3 3.8 006 1 4 14 3.3 3.8 006 1 4 14
rMatxF_rscat 3.5 4.0 00A 1 4 12 3.5 4.0 00A 1 4 12
rootedAIGrid_intp 0.2 0.8 000 34 34 80 0.2 0.8 000 34 34 80
wGrADS 0.3 3.4 000 92 92 92 0.3 3.4 000 92 92 92
------------------------------------------------------------------------
[PUTIME]x12 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
.zeit. 2.7 3.2 009 2 8 17 85.6 86.5 002 0 1 1
run_ 0.0 0.0 000 0 0 0 82.9 83.5 007 0 0 1
rootedObs_get
26.5 27.0 00B 0 1 2 26.5 27.0 00B 0 1 2
solve 0.0 0.1 000 50 50 86 56.4 57.4 000 0 0 2
rootedAIGrid 0.5 0.8 000 5 7 36 4.8 6.3 000 3 3 24
AIGrid_distr_3d 4.2 5.4 000 2 3 22 4.2 5.4 000 2 3 22
AE_solve 17.4 17.7 005 0 1 2 51.6 52.0 009 0 0 1
distribute_ob 1.5 1.6 002 1 2 8 1.5 1.6 002 1 2 8
distribute_ai 1.7 1.8 000 1 2 8 1.7 1.8 000 1 2 8
CGSolver_solve 10.0 12.4 007 2 6 19 26.5 26.8 002 0 0 1
localSolve_ 1.6 4.8 00B 17 33 67 1.6 4.8 00B 17 33 67
sMatxU_agat 1.3 1.4 003 1 4 11 1.3 1.4 003 1 4 11
sMatxUxpy 0.0 0.1 002 11 39 57 0.0 0.1 002 11 39 57
sMatxU_rscat 1.1 1.3 00A 1 5 14 1.1 1.3 00A 1 5 14
sMatxO_agat 1.2 1.4 00B 1 4 15 1.2 1.4 00B 1 4 15
sMatxOxpy 0.0 0.0 00A 92 92 92 0.0 0.0 00A 92 92 92
sMatxO_rscat 1.2 1.4 004 1 4 13 1.2 1.4 004 1 4 13
sMatxF_agat 3.3 4.1 00B 2 4 19 3.3 4.1 00B 2 4 19
sMatxFxpy 2.0 2.5 009 2 7 19 2.0 2.5 009 2 7 19
sMatxF_rscat 4.8 5.9 001 2 4 19 4.8 5.9 001 2 4 19
FcstErrCovMatx_Cx 0.2 0.3 00B 2 6 19 4.4 4.6 003 0 1 4
rMatxF_agat 0.4 0.7 007 5 14 37 0.4 0.7 007 5 14 37
rMatxFxpy 1.8 2.0 006 1 2 6 1.8 2.0 006 1 2 6
rMatxF_rscat 2.0 2.3 000 1 3 13 2.0 2.3 000 1 3 13
rootedAIGrid_intp 0.1 0.5 000 37 41 82 0.1 0.5 000 37 41 82
wGrADS 0.0 0.0 000 92 92 92 0.0 0.0 000 92 92 92
------------------------------------------------------------------------
[PSTIME]x12 NET avg max imx x% r% i% SCP avg max imx x% r% i%
------------------------------------------------------------------------
.zeit. 0.0 0.0 00A 14 33 63 2.0 2.4 00A 2 5 16
run_ 0.0 0.0 00B 25 75 75 2.0 2.4 00A 2 5 16
rootedObs_get 0.2 0.3 000 4 11 35 0.2 0.3 000 4 11 35
solve 0.0 0.0 000 0 0 0 1.8 2.3 00A 2 6 20
rootedAIGrid 0.0 0.1 000 92 92 92 1.1 1.4 00A 3 9 24
AIGrid_distr_3d 1.1 1.4 00A 3 10 25 1.1 1.4 00A 3 10 25
AE_solve 0.5 0.6 00B 1 4 15 0.7 0.9 00B 2 5 15
distribute_ob 0.0 0.0 006 22 44 73 0.0 0.0 006 22 44 73
distribute_ai 0.1 0.1 001 4 10 31 0.1 0.1 001 4 10 31
CGSolver_solve 0.0 0.0 00B 8 33 50 0.1 0.1 001 7 22 46
localSolve_ 0.0 0.1 001 11 33 58 0.0 0.1 001 11 33 58
sMatxU_agat 0.0 0.0 005 42 83 83 0.0 0.0 005 42 83 83
sMatxUxpy 0.0 0.0 005 42 83 83 0.0 0.0 005 42 83 83
sMatxU_rscat 0.0 0.0 006 92 92 92 0.0 0.0 006 92 92 92
sMatxO_agat 0.0 0.0 00B 92 92 92 0.0 0.0 00B 92 92 92
sMatxOxpy 0.0 0.0 000 0 0 0 0.0 0.0 000 0 0 0
sMatxO_rscat 0.0 0.0 002 92 92 92 0.0 0.0 002 92 92 92
sMatxF_agat 0.0 0.0 006 42 83 83 0.0 0.0 006 42 83 83
sMatxFxpy 0.0 0.0 000 32 67 79 0.0 0.0 000 32 67 79
sMatxF_rscat 0.0 0.0 003 25 75 75 0.0 0.0 003 25 75 75
FcstErrCovMatx_Cx 0.0 0.0 002 25 67 75 0.0 0.1 002 11 26 56
rMatxF_agat 0.0 0.0 002 42 83 83 0.0 0.0 002 42 83 83
rMatxFxpy 0.0 0.0 005 8 50 50 0.0 0.0 005 8 50 50
rMatxF_rscat 0.0 0.1 004 16 46 65 0.0 0.1 004 16 46 65
rootedAIGrid_intp 0.0 0.0 000 92 92 92 0.0 0.0 000 92 92 92
wGrADS 0.0 0.0 000 92 92 92 0.0 0.0 000 92 92 92
------------------------------------------------------------------------
\end{verbatim}
\documentclass[review]{elsarticle}
\usepackage{amsmath}
\usepackage{subcaption}
\usepackage[usenames]{xcolor}
\usepackage{lineno,hyperref}
\modulolinenumbers[5]
\journal{TBA}
%%%%%%%%%%%%%%%%%%%%%%%
%% Elsevier bibliography styles
%%%%%%%%%%%%%%%%%%%%%%%
%% To change the style, put a % in front of the second line of the current style and
%% remove the % from the second line of the style you would like to use.
%%%%%%%%%%%%%%%%%%%%%%%
%% Numbered
%\bibliographystyle{model1-num-names}
%% Numbered without titles
%\bibliographystyle{model1a-num-names}
%% Harvard
%\bibliographystyle{model2-names.bst}\biboptions{authoryear}
%% Vancouver numbered
%\usepackage{numcompress}\bibliographystyle{model3-num-names}
%% Vancouver name/year
%\usepackage{numcompress}\bibliographystyle{model4-names}\biboptions{authoryear}
%% APA style
%\bibliographystyle{model5-names}\biboptions{authoryear}
%% AMA style
%\usepackage{numcompress}\bibliographystyle{model6-num-names}
%% `Elsevier LaTeX' style
\bibliographystyle{elsarticle-num}
%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{frontmatter}
\title{Convergence issues of the FEM solution to the fiber/matrix interface crack problem}
%\tnotetext[mytitlenote]{Fully documented templates are available in the elsarticle package on \href{http://www.ctan.org/tex-archive/macros/latex/contrib/elsarticle}{CTAN}.}
%% Group authors per affiliation:
%\author{Luca Di Stasio\fnref{myfootnote}}
%\address{Radarweg 29, Amsterdam}
%\fntext[myfootnote]{Since 1880.}
%% or include affiliations in footnotes:
\author[nancy,lulea]{Luca Di Stasio}
\author[lulea]{Janis Varna}
\author[nancy]{Zoubir Ayadi}
%\ead[url]{www.elsevier.com}
%\author[mysecondaryaddress]{Global Customer Service\corref{mycorrespondingauthor}}
%\cortext[mycorrespondingauthor]{Corresponding author}
%\ead{[email protected]}
\address[nancy]{Universit\'e de Lorraine, EEIGM, IJL, 6 Rue Bastien Lepage, F-54010 Nancy, France}
\address[lulea]{Lule\aa\ University of Technology, University Campus, SE-97187 Lule\aa, Sweden}
\begin{abstract}
\noindent
\textcolor{purple}{{\em Priority}: 4}\\
\textcolor{purple}{{\em Target journal(s)}: Computer Methods in Applied Mechanics and Engineering, European Journal of Computational Mechanics, Acta Mechanica, Computational Mechanics, Engineering Fracture Mechanics, Theoretical and Applied Fracture Mechanics, International Journal of Fracture}\\
\end{abstract}
%\begin{keyword}
%\texttt{elsarticle.cls}\sep \LaTeX\sep Elsevier \sep template
%\MSC[2010] 00-01\sep 99-00
%\end{keyword}
\end{frontmatter}
\linenumbers
\section{Introduction}
\section{The fiber/matrix interface crack problem}
\section{Finite Element Discretization}
\textcolor{blue}{Comparison of different formulations in a bottom-up approach to FEM modeling:
\begin{enumerate}
\item applied displacement vs applied stress
\item small displacement vs finite displacement formulation
\item surface-to-surface vs node-to-surface formulation
\item small sliding vs finite sliding
\item Abaqus built-in vs in-house VCCT
\end{enumerate}
}
\section{Convergence of the VCCT solution}
\section{Path independence of the J-integral}
\section{Accuracy of contact zone estimation}
\section{Conclusions \& Outlook}
\end{document}
\documentclass[a4paper]{article}
\def\npart{III}
\def\ntitle{Introduction to Approximate Groups}
\def\nlecturer{M.\ Tointon}
\def\nterm{Lent}
\def\nyear{2019}
\input{header}
\begin{document}
\input{titlepage}
\tableofcontents
\section*{Lecture 1: missed}
\clearpage
\section*{Lecture 2: Covering and higher sum and product sets}
We introduce two techniques that we'll use repeatedly: covering and bounding higher sum and product sets. A nice way to do this is by proving the following theorem.
\begin{theorem}[Ruzsa]\index{Ruzsa's theorem}
Suppose \(A \subseteq \F_p^r\) satisfies \(|A + A| \leq K |A|\). Then there exists \(H \leq \F_p^r\) with \(|H| \leq p^{K^4} K^2 |A|\) such that \(A \subseteq H\).
\end{theorem}
So again, as in Theorem 1.1, \(A\) is a large proportion of a finite subgroup.
\begin{remark}
It is not ideal that \(|A|/|H|\) depends on \(p\). We'll remove this dependence in a few lectures' time.
\end{remark}
We'll start by proving the following weaker version:
\begin{proposition}
Suppose \(A \subseteq \F_p^r\) satisfies \(|2A - 2A| \leq K |A|\). Then there exists \(H \leq \F_p^r\) with \(|H| \leq p^K|A - A|\) (so \(\leq p^K K|A|\)) such that \(A \subseteq H\).
\end{proposition}
We'll prove this using ``covering'', encapsulated by the following lemma:
\begin{lemma}[Ruzsa's covering lemma]\index{Ruzsa's covering lemma}
Suppose \(A, B \subseteq G\) and \(|AB| \leq K |B|\). Then there exists \(X \subseteq A\) with \(|X| \leq K\) such that \(A \subseteq XBB^{-1}\). Indeed we may take \(X \subseteq A\) maximal such that the sets \(xB\), \(x \in X\) are disjoint.
\end{lemma}
The term ``covering'' refers to the conclusion \(A \subseteq XBB^{-1}\), which says that \(A\) can be covered by a few left translates of \(BB^{-1}\).
\begin{proof}
First disjointness of \(xB\) implies that \(|XB| = |X||B|\). Since \(X \subseteq A\),
\[
|XB| \leq |AB| \leq K|B|
\]
so \(|X| \leq K\). Maximality implies that for all \(a \in A\) there exists \(x \in X\) such that \(aB \cap xB \neq \emptyset\), and hence \(a \in xBB^{-1}\). Hence \(A \subseteq XBB^{-1}\) as required.
\end{proof}
\begin{lemma}
Suppose \(A \subseteq G\) satisfies
\[
|A^{-1}A^2 A^{-1}| \leq K |A|.
\]
Then there exists \(X \subseteq A^{-1}A^2\) with \(|X| \leq K\) such that
\[
A^{-1}A^n \subseteq X^{n - 1}A^{-1}A
\]
for all \(n \in \N\).
\end{lemma}
\begin{proof}
By Ruzsa's covering lemma there exists \(X \subseteq A^{-1}A^2\) with \(|X| \leq K\) such that
\[
A^{-1}A^2 \subseteq XA^{-1}A.
\]
We then have
\begin{align*}
A^{-1}A^n
&= A^{-1}A^{n - 1}A \\
&\subseteq X^{n - 2}A^{-1}A^2 \text{ by induction} \\
&\subseteq X^{n - 1} A^{-1}A
\end{align*}
\end{proof}
\begin{proof}[Proof of proposition]
The lemma above implies that there exists \(X\) with \(|X| \leq K\) such that
\[
nA - A \subseteq (n - 1) X + A - A
\]
for all \(n \in \N\). Since \(\F_p^r\) is a vector space,
\[
\langle A \rangle \subseteq \langle X \rangle + A - A
\]
so
\[
|\langle A \rangle| \leq |\langle X \rangle| |A - A| \leq p^K|A - A|
\]
as required.
\end{proof}
To strengthen the proposition to the theorem, we use the second technique: bounding higher sum/product sets. The key result, at least in the abelian case, is the following:
\begin{theorem}[Plünnecke-Ruzsa]
Suppose \(A \subseteq G\) where \(G\) is an abelian group and \(|A - A| \leq K |A|\). Then
\[
|mA - nA| \leq K^{m + n} |A|
\]
for all \(m, n \geq 0\).
\end{theorem}
This is proved in III Introduction to Discrete Analysis. We won't redo the whole proof, but we will reprove some parts of it.
\begin{proof}[Proof of Ruzsa's theorem]
Plünnecke-Ruzsa implies that \(|2A - 2A| \leq K^4 |A|\) and \(|A - A| \leq K^2 |A|\). Then the result follows from prop 2.2.
\end{proof}
We'll spend the rest of the lecture discussing Plünnecke-Ruzsa and variants of it.
We've seen it's useful, at least in one context. To see philosophically why it's useful, let's think about what the genuine closure of subgroups under products and inverses means. One useful feature is that it can be iterated: given \(h_1, h_2, \dots \in H\) a subgroup, we have \(h_1^{\varepsilon_1} \cdots h_m^{\varepsilon_m} \in H\) for all \(\varepsilon_i = \pm 1\), for all \(m\) and all \(h_i \in H\). The theorem allows us to ``iterate'' the ``approximate closure'' of a set of small doubling.
\[
a_1 + \dots + a_m - a_1' - \dots - a_n'
\]
may not belong to \(A\) but at least it belongs to \(mA - nA\), which is ``not too large'' (\(|mA - nA| \leq K^{m + n} |A|\)), and is itself a set of small doubling (\(|2mA - 2nA| \leq K^{2m + 2n} |mA - nA|\)). This is an important part of why the theory works so well.
It is therefore unfortunate that the theorem does not hold for non-abelian groups.
\begin{eg}
Let \(x\) generate an infinite cyclic group \(\langle x \rangle\) and let \(H\) be a finite group. Set \(G = H * \langle x \rangle\) (the key point is that \(x^{-1}Hx \neq H\)). Set \(A = H \cup \{x\}\). Then
\[
A^2 = H \cup xH \cup Hx \cup \{x^2\}
\]
so \(|A^2| \leq 3 |A|\). But \(A^3\) contains \(HxH\), which has size \(|H|^2 \sim |A|^2\). So as \(|H| \to \infty\), the theorem cannot hold.
\end{eg}
Nevertheless, if we strengthen small doubling slightly we can recover a form of the theorem. One way is to replace small doubling with \emph{small tripling}\index{small tripling}, i.e.\ \(|A^3| \leq K |A|\).
\begin{proposition}[2.7]
Suppose \(A \subseteq G\) and \(|A^3| \leq K |A|\). Then
\[
|A^{\varepsilon_1} \cdots A^{\varepsilon_m}| \leq K^{3(m - 2)} |A|
\]
for all \(\varepsilon_i = \pm 1\) for all \(m \geq 3\).
\end{proposition}
The key ingredient is the following:
\begin{lemma}[Ruzsa's triangle inequality]\index{Ruzsa's triangle inequality}
Given \(U, V, W \subseteq G\), all finite, we have
\[
|U| |V^{-1} W| \leq |UV| |UW|.
\]
\end{lemma}
\begin{proof}
We'll define an injection \(\varphi: U \times V^{-1}W \to UV \times UW\). First for \(x \in V^{-1}W\), set \(v(x) \in V\), \(w(x) \in W\) such that \(x = v(x)^{-1}w(x)\). Set
\[
\varphi(u, x) = (uv(x), uw(x)).
\]
To see injectivity, first notice that
\[
(uv(x))^{-1}(uw(x)) = x
\]
so \(x\) is determined by \(\varphi(u, x)\), and then \((uv(x)) v(x)^{-1} = u\) so \(u\) is also determined by \(\varphi(u, x)\).
\end{proof}
\begin{proof}[Proof of proposition 2.7]
First do the case \(m = 3\):
\[
|A^3| = |A^{-3}| \leq K|A|.
\]
Apply triangle inequality with \(U = W = A, V = A^2\). Get
\[
|A||A^{-2}A| \leq |A^3| |A^2| \leq K^2 |A|^2
\]
so
\[
|A^{-2}A| \leq K^2 |A|.
\]
Next note that \((A^{-1}A^2)^{-1} = A^{-2}A\) so
\[
|A^{-1}A^2| = |A^{-2}A| \leq K^2|A|.
\]
Replacing \(A\) by \(A^{-1}\) we get
\[
|AA^{-2}| = |A^2A^{-1}| \leq K^2 |A|.
\]
Finally, use triangle inequality with \(U = V = A, W = AA^{-1}\) gives
\[
|A| |A^{-1}AA^{-1}| \leq |A^2| |A^2A^{-1}| \leq K^3 |A|^2
\]
so
\[
|A^{-1}AA^{-1}| \leq K^3 |A|.
\]
For the last case swap \(A\) and \(A^{-1}\).
For general \(m\), triangle inequality implies that
\[
|A| |A^{\varepsilon_1} \cdots A^{\varepsilon_m}|
\leq |AA^{-\varepsilon_2}A^{-\varepsilon_1}| |AA^{\varepsilon_3} \cdots A^{\varepsilon_m}|
    \leq K^3 |A| \cdot K^{3(m - 3)} |A|
  \]
  by induction; dividing through by \(|A|\) gives \(|A^{\varepsilon_1} \cdots A^{\varepsilon_m}| \leq K^{3(m - 2)} |A|\) as claimed.
\end{proof}
\section*{Lecture 3: Approximate groups}
Last time we saw that assuming small tripling instead of small doubling allowed us to control higher product sets of the form \(A^{\varepsilon_1} \cdots A^{\varepsilon_m}\). In this lecture we'll see another possible strengthening of small doubling. We also saw, in the proof of theorem 2.1 and proposition 2.2, an advantage of having a ``covering'' condition in place of a size bound. This motivates in part the following definition.
\begin{definition}[approximate group]\index{approximate group}
A set \(A \subseteq G\) is called a \emph{\(K\)-approximate group} or \emph{\(K\)-approximate subgroup} if \(1 \in A, A^{-1} = A\) and exists \(X \subseteq G\) with \(|X| \leq K\) such that \(A^2 \subseteq XA\).
\end{definition}
\begin{remark}
Note that \(A\) need not be finite, although in this course it almost always will be. Also, if \(A\) is finite then \(|A^2| \leq K|A|\).
\end{remark}
The conditions \(1 \in A\) and \(A^{-1} = A\) are convenient notationally: for example we can write \(A^m\) instead of \(A^{\varepsilon_1} \cdots A^{\varepsilon_m}\), and \(1 \in A\) implies that \(A \subseteq A^2 \subseteq A^3 \subseteq \dots\), which is also convenient at times. It is the condition \(A^2 \subseteq XA\) that is more important.
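A simple example to keep in mind (written additively, and not from the lectures): the interval \(A = \{-N, \dots, N\} \subseteq \Z\) satisfies \(0 \in A\), \(-A = A\) and
\[
A + A = \{-2N, \dots, 2N\} = (-N + A) \cup (N + A) \subseteq X + A, \qquad X = \{-N, N\},
\]
so \(A\) is a \(2\)-approximate group, despite being nothing like a subgroup of \(\Z\).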
For approximate groups, bounding higher product is easy:
\begin{lemma}[lemma 3.1]
If \(A\) is a finite \(K\)-approximate group then
\[
|A^m| \leq K^{m - 1}|A|.
\]
\end{lemma}
\begin{proof}
Let \(X\) be as in the definition of approximate group. In fact we have \(A^m \subseteq X^{m - 1}A\):
\[
A^m
= A^{m - 1} A
\subseteq X^{m - 2} A^2
\subseteq X^{m - 1} A
\]
by induction.
\end{proof}
Another advantage is that if \(\pi: G \to H\) is a homomorphism and \(A\) is a \(K\)-approximate group then \(\pi(A)\) is also trivially a \(K\)-approximate group (although we'll see that there exists a version of this for small tripling).
It turns out that sets of small tripling and approximate groups are essentially equivalent, in the following sense:
\begin{proposition}[proposition 3.2]
Let \(A \subseteq G\) be finite. If \(A\) is a \(K\)-approximate group then \(|A^3| \leq K^2 |A|\). Conversely if \(|A^3| \leq K |A|\) then there exists an \(O(K^{12})\)-approximate group \(B\) with \(A \subseteq B\) and \(|B| \leq 7K^3 |A|\). In fact, we may take \(B = (A \cup \{1\} \cup A^{-1})^2\).
\end{proposition}
The interesting direction of the proposition says that \(A\) is a large proportion of an approximate group.
\begin{proof}
The first part is just lemma 3.1. For the converse, set
\[
\hat A = A \cup \{1\} \cup A^{-1}
\]
and note that
\[
B = \hat A^2 = \{1\} \cup A \cup A^{-1} \cup A^2 \cup A^{-1}A \cup AA^{-1} \cup A^{-2}.
\]
Each set in this union has size \(\leq K^3 |A|\) by proposition 2.7 so \(|B| \leq 7K^3|A|\) as claimed. Similarly
\[
\hat A^4 = \bigcup_{\varepsilon_i = \pm 1, 0 \leq m \leq 4} A^{\varepsilon_1} \cdots A^{\varepsilon_m}
\]
and all the sets in this union have size \(\leq K^6 |A|\). It follows that \(|\hat A^4| \leq O(K^6) |\hat A|\).
Lemma 2.4 implies that there exists \(X \subseteq G\), \(|X| \leq O(K^6)\), such that \(\hat A^n \subseteq X^{n - 1} \hat A^2\) for every \(n \geq 2\). In particular \(|X^2| \leq O(K^{12})\) and \(\hat A^4 \subseteq X^2 \hat A^2\), so \(\hat A^2\) is an \(O(K^{12})\)-approximate group as claimed.
\end{proof}
This is all well and good, but what if we are faced with a set like that from example 2.6, which only has small doubling? In that specific example, a large proportion of \(A\) was a set of small tripling, namely \(H\). Rather helpfully, that turns out to be a general phenomenon.
\begin{theorem}[theorem 3.3]
If \(A \subseteq G\) satisfies \(|A^2| \leq K|A|\) then there exists \(U \subseteq A\) with \(|U| \geq \frac{1}{K}|A|\) such that
\[
|U^m| \leq K^{m - 1}|U|
\]
for all \(m \in \N\).
\end{theorem}
Thus small doubling reduces to small tripling, which reduces to approximate groups. In example sheet 1, we'll see a direct reduction from small doubling to approximate groups.
Tao proved a version of theorem 3.3 when he introduced the definition of approximate groups. We'll use instead a lemma of Petridis, which he proved when proving the Plünnecke-Ruzsa inequalities.
\begin{lemma}[lemma 3.4][Petridis]
Suppose \(A, B \subseteq G\) are finite. Let \(U \subseteq A\) be non-empty, chosen to minimise the ratio \(|UB|/|U|\) and write \(R = |UB|/|U|\). Then for all finite \(C \subseteq G\) we have
\[
|CUB| \leq R |CU|.
\]
\end{lemma}
\begin{proof}
This is trivial if \(C = \emptyset\), so we may assume there exists \(x \in C\). Define \(C' = C \setminus \{x\}\); by induction on \(|C|\) we may assume that \(|C'UB| \leq R|C'U|\). We are going to write \(CU = C'U \cup xU\) and deal with the overlap. Set
\[
W = \{u \in U: xu \in C'U\}.
\]
Then
\[
CU = C'U \cup (xU \setminus xW)
\]
is a disjoint union so in particular
\[
|CU| = |C'U| + |U| - |W|.
\]
We also have \(xWB \subseteq C'UB\) by definition of \(W\) so
\[
CUB \subseteq C'UB \cup (xUB \setminus xWB)
\]
and hence
\[
|CUB| \leq |C'UB| + |UB| - |WB|.
\]
We have \(|C'UB| \leq R|C'U|\) by the induction hypothesis. We have \(|UB| = R|U|\) by definition of \(R\), and \(|WB| \geq R|W|\) by minimality in the definition of \(U\). So
\[
|CUB|
\leq R(|C'U| + |U| - |W|)
= R|CU|.
\]
\end{proof}
\begin{proof}[Proof of theorem 3.3]
Set \(U \subseteq A\) to be non-empty minimising \(|UA|/|U|\) and write \(R = |UA|/|U|\). Note that \(R \leq K\) by minimality (comparing with \(U = A\)). Also \(U\) is non-empty, so \(|UA| \geq |A|\) and hence \(|U| \geq |UA|/K \geq |A|/K\) as required. Lemma 3.4 also implies that
\[
|U^mA| \leq K|U^m|
\]
for all \(m\) (taking \(C = U^{m - 1}\)) and since \(U \subseteq A\), this gives
\[
|U^{m + 1}| \leq K |U^m|
\]
for all \(m\), so \(|U^m| \leq K^{m - 1} |U|\).
\end{proof}
A bit of non-examinable information:
The reason \(A\) in example 2.6 failed to have small tripling was the existence of \(x \in A\) with \(AxA\) large. It turns out that this is the only obstruction to small doubling having small tripling.
\begin{theorem}[theorem 3.5][Tao, Petridis]
If \(|A^2| \leq K|A|\) and \(|AxA| \leq K|A|\) for all \(x \in A\) then \(|A^m| \leq K^{O(m)}|A|\) for all \(m \geq 3\).
\end{theorem}
\section*{Lecture 4: Stability of approximate closure under basic operations}
Two familiar properties of genuine subgroups are that they behave well under quotients and intersections: if \(H \leq G\) and \(\pi: G \to \Gamma\) is a homomorphism then \(\pi(H) \leq \Gamma\), and if \(H_1, H_2 \leq G\) then \(H_1 \cap H_2 \leq G\). In this lecture we'll see versions of these properties for approximate groups and set of small tripling.
It's trivial that if \(A \subseteq G\) is a \(K\)-approximate group then \(\pi(A)\) is also a \(K\)-approximate group. The following is the corresponding result for sets of small tripling.
\begin{proposition}[prop 4.1][stability of small tripling under homomorphisms]
Let \(A \subseteq G\) be finite, symmetric, containing the idenity. Suppose \(\pi: G \to \Gamma\) is a homomorphism. Then
\[
\frac{|\pi(A)^m|}{|\pi(A)|} \leq \frac{|A^{m + 2}|}{|A|}
\]
for all \(m \in \N\).
In particular if \(|A^3| \leq K |A|\) then
\[
|\pi(A)^3| \leq K^9 |\pi(A)|
\]
by prop 2.7.
\end{proposition}
We prove this using an argument of Helfgott. We'll start with a simple observation that we'll use repeatedly in this course.
\begin{lemma}[lemma 4.2]
Let \(H \leq G\). Let \(A \subseteq G\) be finite and let \(x \in G\). Then
\[
|A^{-1}A \cap H| \geq |A \cap xH|.
\]
\end{lemma}
\begin{proof}
We have
\[
(A \cap xH)^{-1} (A \cap xH) \subseteq A^{-1}A \cap H.
\]
\end{proof}
\begin{remark}
Most of the lemmas and propositions in this lecture will have familiar/trivial analogues for genuine subgroups. It is a useful exercise to think about what they are.
\end{remark}
\begin{lemma}[lemma 4.3]
Let \(H \leq G\). Write \(\pi: G \to G/H\) for the quotient map. Let \(A \subseteq G\) be finite. Then
\[
|A^{-1}A \cap H| \geq \frac{|A|}{|\pi(A)|}.
\]
\end{lemma}
Note that \(H\) is not assumed to be normal, so \(G/H\) is just the space of left cosets \(xH\), not necessarily a group.
\begin{proof}
By pigeonhole principle, there exists \(x \in G\) such that
\[
|A \cap x H| \geq \frac{|A|}{|\pi(A)|}.
\]
Then apply lemma 4.2.
\end{proof}
\begin{lemma}[lemma 4.4]
Let \(H \leq G\). Write \(\pi: G \to G/H\) for the quotient map and let \(A \subseteq G\) be finite. Then
\[
|\pi(A^m)| |A^n \cap H| \leq |A^{m + n}|
\]
for all \(m, n \geq 0\).
\end{lemma}
\begin{proof}
Define \(\varphi: \pi(A^m) \to A^m\) by picking arbitrarily for each \(x \in \pi(A^m)\) some \(\varphi(x)\) such that \(\pi(\varphi(x)) = x\). Then the cosets \(\varphi(x)H\) for \(x \in \pi(A^m)\) are all distinct by definition, so
\[
|\varphi(\pi(A^m)) (A^n \cap H)| = |\pi(A^m)| |A^n \cap H|.
\]
But also,
\[
\varphi(\pi(A^m)) (A^n \cap H) \subseteq A^{m + n}.
\]
\end{proof}
\begin{proof}[Proof of prop 4.1]
Write \(H = \ker \pi\). By lemma 4.4,
\[
|\pi(A^m)| \leq \frac{|A^{m + 2}|}{|A^2 \cap H|}.
\]
Then by lemma 4.3
\[
|A^2 \cap H| \geq \frac{|A|}{|\pi(A)|}.
\]
The proposition then follows.
\end{proof}
Now we'll look at intersections.
\begin{proposition}[prop 4.5][stability of small tripling under intersections with subgroups]
Let \(A \subseteq G\) be finite, symmetric and containing identity. Let \(H \leq G\). Then
\[
\frac{|A^m \cap H|}{|A^2 \cap H|} \leq \frac{|A^{m + 1}|}{|A|}.
\]
In particular by prop 2.7 if \(|A^3| \leq K|A|\) then
\[
|(A^m \cap H)^3| \leq K^{9m} |A^m \cap H|
\]
for all \(m \geq 2\).
\end{proposition}
\begin{remark}
We'll see in example sheet 1 that even if \(A\) has small tripling, \(A \cap H\) need not. So \(m \geq 2\) really is important for this last condition.
\end{remark}
\begin{proof}
By lemma 4.4
\[
|A^m \cap H| \leq \frac{|A^{m + 1}|}{|\pi(A)|}
\]
where \(\pi: G \to G/H\) is the quotient map as before. By lemma 4.3,
\[
|A^2 \cap H| \geq \frac{|A|}{|\pi(A)|}.
\]
Just combine these two inequalities.
\end{proof}
\begin{proposition}[prop 4.6][stability of approximate groups under intersections with subgroups]
Let \(H \leq G\). Let \(A \subseteq G\) be a \(K\)-approximate group. Then \(A^m \cap H\) is covered by \(\leq K^{m - 1}\) left translates of \(A^2 \cap H\). In particular \(A^m \cap H\) is a \(K^{2m - 1}\)-approximate subgroup (since \(A^2 \cap H \subseteq A^m \cap H\) and \((A^m \cap H)^2 \subseteq A^{2m} \cap H\)).
\end{proposition}
\begin{proof}
By definition, there exists \(X \subseteq G\) with \(|X| \leq K^{m - 1}\) such that \(A^m \subseteq XA\). In particular
\[
A^m \cap H \subseteq \bigcup_{x \in X} (xA \cap H).
\]
For each \(xA \cap H\) that is not empty, there exists \(h = xa \in H\) for some \(a \in A\). This means that
\[
xA \cap H \subseteq h(a^{-1}A \cap H) \subseteq h(A^2 \cap H).
\]
Hence each set \(xA \cap H\) in this union is contained in a single left translate of \(A^2 \cap H\).
\end{proof}
In III Introduction to Discrete Analysis, you saw that when studying small doubling/tripling, there is a more general notion of homomorphism that comes into play: the Freiman homomorphisms. To motivate this, consider two sets
\begin{align*}
A &= \{-n, \dots, n\} \subseteq \Z/p\Z \\
B &= \{-n, \dots, n\} \subseteq \Z/q\Z
\end{align*}
for \(p, q\) two large primes, \(\geq 10 n\) say. These two sets are intuitively ``isomorphic'' from the perspective of \(A + A\) and \(B + B\), but there is no way of encoding this with a group homomorphism \(\Z/p\Z \to \Z/q\Z\).
\begin{definition}[Freiman homomorphism]\index{Freiman homomorphism}\index{Freiman homomorphism!centred}
Let \(m \in \N\). Let \(A, B\) be subsets of groups. Then a map \(\varphi: A \to B\) is a \emph{Freiman \(m\)-homomorphism} if for all \(x_1, \dots, x_m, y_1, \dots, y_m \in A\) with \(x_1\cdots x_m = y_1 \cdots y_m\) we have
\[
\varphi(x_1) \cdots \varphi(x_m) = \varphi(y_1) \cdots \varphi(y_m).
\]
If \(1 \in A\) and \(\varphi(1) = 1\) then we say that \(\varphi\) is \emph{centred}. If \(\varphi\) is injective and its inverse \(\varphi(A) \to A\) is also a Freiman \(m\)-homomorphism we say \(\varphi\) is a \emph{Freiman \(m\)-isomorphism}.
\end{definition}
\begin{remark}\leavevmode
\begin{enumerate}
\item Every map is trivially a \(1\)-homomorphism so we only care about the case \(m \geq 2\).
\item This definition gets stronger as \(m\) increases: assume \(A \neq \emptyset\) and pick \(a \in A\). If \(x_1 \cdots x_k = y_1 \cdots y_k\) for \(k \leq m\) then \(x_1 \cdots x_k a \cdots a = y_1 \cdots y_k a \cdots a\), padding with \(m - k\) copies of \(a\).
\item If \(\varphi\) is centred and \(a, a^{-1} \in A\) then exercise to check that \(\varphi(a^{-1}) = \varphi(a)^{-1}\) (for \(m \geq 2\)).
\end{enumerate}
\end{remark}
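Returning to the motivating example above (a quick check, not spelled out in the lectures): writing the groups additively, define \(\varphi: A \to B\) by sending the residue of \(k\) mod \(p\) to the residue of \(k\) mod \(q\), for each representative \(k \in \{-n, \dots, n\}\). If \(x_1 + x_2 = y_1 + y_2\) in \(\Z/p\Z\) with all four representatives in \(\{-n, \dots, n\}\), then the integer \(x_1 + x_2 - y_1 - y_2\) is divisible by \(p\) and has absolute value at most \(4n < p\), so it is \(0\); hence the same relation holds in \(\Z\), and therefore in \(\Z/q\Z\). The same argument applies to \(\varphi^{-1}\), so \(\varphi\) is a centred Freiman \(2\)-isomorphism, even though there need be no group homomorphism \(\Z/p\Z \to \Z/q\Z\) extending it.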
From now on when we say \(\varphi\) is a Freiman homomorphism we mean it is a \(2\)-homomorphism.
\begin{lemma}[lemma 4.7]
Suppose \(\varphi: A \to \Gamma\) is a Freiman \(m\)-homomorphism. Then
\[
|\varphi(A)^m| \leq |A^m|.
\]
In particular if \(\varphi\) is injective then
\[
\frac{|\varphi(A)^m|}{|\varphi(A)|} \leq \frac{|A^m|}{|A|},
\]
and if \(\varphi\) is a Freiman \(m\)-isomorphism then this is an equality.
\end{lemma}
\begin{proof}
Exercise.
\end{proof}
\begin{lemma}[lemma 4.8]
Let \(A \subseteq G\) be a \(K\)-approximate group. Suppose \(\varphi: A^3 \to \Gamma\) is a centred Freiman \(2\)-homomorphism. Then \(\varphi(A)\) is also a \(K\)-approximate group.
\end{lemma}
\begin{proof}
By definition there exists \(X \subseteq G, |X| \leq K\) such that \(A^2 \subseteq XA\). So given \(a_1, a_2 \in A\), there exists \(x \in X, a_3 \in A\) such that \(a_1a_2 = xa_3\). In particular, \(x \in A^3\) so \(\varphi(x)\) is defined and
\[
\varphi(a_1) \varphi(a_2) = \varphi(x) \varphi(a_3).
\]
Hence \(\varphi(A)^2 \subseteq \varphi(X \cap A^3) \varphi(A)\). Also as \(\varphi\) is centred, \(\varphi(A)\) is symmetric and contains \(1\).
\end{proof}
\section*{Lecture 5: Coset progressions, Bohr sets and the Freiman-Green-Ruzsa theorem}
We'll introduce some non-trivial examples of sets of small doubling in abelian groups.
\begin{definition}[coset progression]\index{coset progression}
Let \(G\) be an abelian group, \(x_1, \dots, x_r \in G, L_1, \dots, L_r \in \N\). Then the set
\[
P(x; L) = P(x_1, \dots, x_r; L_1, \dots, L_r) = \{\ell_1x_1 + \dots + \ell_rx_r: \ell_i \in \Z,\ |\ell_i| \leq L_i \text{ for all } i\}
\]
is called a \emph{progression of rank \(r\)}. If in addition \(H \leq G\) is finite then \(H + P(x; L)\) is called a \emph{coset progression of rank \(r\)}.
\end{definition}
It is useful to think of \(P(x; L)\) as a homomorphic image of a box in \(\Z^r\). For example if \(G = \Z\) and \(r = 2\) (picture)
It's easy to see that such a box \(B\) in \(\Z^r\) is a \(2^r\)-approximate group. For example in \(r = 2\) (picture)
Hence \(P(x; L)\) is also a \(2^r\)-approximate group, as is \(H + P(x; L)\).
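Since the pictures are omitted here, let us record the computation behind the box claim (a routine check, not verbatim from the lectures). Let \(B = \{-L_1, \dots, L_1\} \times \dots \times \{-L_r, \dots, L_r\} \subseteq \Z^r\). Then \(B\) is symmetric and contains \(0\), and in each coordinate \(\{-2L_i, \dots, 2L_i\} \subseteq (-L_i + \{-L_i, \dots, L_i\}) \cup (L_i + \{-L_i, \dots, L_i\})\), so
\[
B + B \subseteq X + B \qquad \text{where } X = \{(\varepsilon_1 L_1, \dots, \varepsilon_r L_r): \varepsilon_i \in \{\pm 1\}\}, \quad |X| = 2^r.
\]
Applying the homomorphism \(\Z^r \to G\), \(e_i \mapsto x_i\), the image of this covering gives the corresponding covering for \(P(x; L)\).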
Remarkably, these are essentially the only examples:
\begin{theorem}[Freiman (\(G = \Z\)), Green-Ruzsa (arbitrary abelian \(G\))]
Suppose \(A \subseteq G\) abelian satisfies \(|A + A| \leq K |A|\). Then there exists a coset progression \(H + P\) of rank \(\leq O(K^{O(1)})\) such that
\[
A \subseteq H + P \subseteq O(K^{O(1)}) (A \cup \{0\} \cup (-A)).
\]
In particular theorem 2.5 (Plünnecke-Ruzsa inequality) implies that
\[
|H + P| \leq \exp(O(K^{O(1)})) |A|
\]
so \(A\) is a large proportion of \(H + P\).
\end{theorem}
A substantial part of this result was proved in III Introduction to Discrete Analysis, but with a slightly less explicit notion of coset progression.
\begin{definition}[Bohr set]\index{Bohr set}
Let \(G\) be a finite abelian group. Let
\[
\Gamma = \{\gamma_1, \dots, \gamma_r\}
\subseteq \hat G = \Hom(G, \R/\Z)
\]
and let \(\rho \in [0, \frac{1}{2}]\). Then the set
\[
B(\Gamma, \rho) = \{g \in G: \norm{\gamma_i(g)}_{\R/\Z} \leq \rho \text{ for all } i\}
\]
is called a \emph{Bohr set} of rank \(r\). Here, given \(x \in \R/\Z\) with representative \(\hat x \in (-\frac{1}{2}, \frac{1}{2}]\), we write
\[
\norm x_{\R/\Z} = |\hat x|.
\]
\end{definition}
We'll see in example sheet 1 that \(B(\Gamma, \rho)\) is a \(4^r\)-approximate group. Whereas progressions were homomorphic images of boxes, \(B(\Gamma, \rho)\) is the pullback of \([-\rho, \rho]^r\) under \((\gamma_1, \dots, \gamma_r) \in \hat G^r\).
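For a concrete example (not from the notes): take \(G = \Z/N\Z\) and \(\Gamma = \{\gamma\}\) with \(\gamma(x) = x/N \pmod 1\). Then
\[
B(\Gamma, \rho) = \{x \in \Z/N\Z: \norm{x/N}_{\R/\Z} \leq \rho\} = \{x: x \equiv j \pmod N \text{ for some integer } |j| \leq \rho N\},
\]
which is exactly the rank-\(1\) progression \(P(1; \floor*{\rho N})\) in \(\Z/N\Z\).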
It turns out that the notions of coset progression and Bohr set are essentially equivalent. In example sheet 2 we'll see that every coset progression is a Freiman image of a Bohr set of the same rank. Moreover, every Freiman image of a Bohr set is a large proportion of some coset progression. We'll see a special case of that shortly.
\begin{proposition}[from III Introduction to Discrete Analysis]
Suppose \(A \subseteq G\) abelian with \(|A + A| \leq K|A|\). Then there exist \(B \subseteq 2A - 2A\), a finite abelian group \(Z\) with \(|Z| \geq |A|\), a set \(\Gamma \subseteq \hat Z\) with \(|\Gamma| \leq O(K^{O(1)})\), some \(\rho \geq \frac{1}{O(K^{O(1)})}\) and a centred Freiman \(2\)-isomorphism \(\varphi: B(\Gamma, \rho) \to B\).
\end{proposition}
This is saying if \(A\) has small doubling then \(2A - 2A\) contains a large set isomorphic to a Bohr set of bounded rank.
In III Introduction to Discrete Analysis we saw this in the special case where \(G\) is torsion-free. The general case is harder, but nonetheless conceptually very similar, so we'll assume this result from now on.
To pass from prop 5.2 to theorem 5.1, we use the following results:
\begin{proposition}[prop 5.3]
Suppose \(Z\) is a finite abelian group, \(\Gamma \subseteq \hat Z\) is of size \(r\), \(\rho < \frac{1}{10}\). Then there exists a coset progression \(H + P \subseteq B(\Gamma, \rho)\) with rank \(r\) and \(|H + P| \geq (\rho/r)^r |Z|\).
\end{proposition}
\begin{lemma}[lemma 5.4]
Suppose \(H + P\) is a coset progression of rank \(r\) and \(\varphi: H + P \to G\), where \(G\) abelian, is a centred Freiman \(2\)-homomorphism. Then \(\varphi(H + P)\) is also a coset progression of rank \(r\).
\end{lemma}
We'll prove proposition 5.3 in the next couple of lectures.
\begin{proof}[Proof of lemma 5.4]
Exercise: if \(H\) is a group and \(\varphi: H \to G\) is a centred Freiman \(2\)-homomorphism then \(\varphi\) is also a group homomorphism.
In particular in this lemma \(\varphi(H)\) is a finite subgroup. Therefore it suffices to show that
\[
\varphi(H + P(x; L)) = \varphi(H) + P(\varphi(x_1), \dots, \varphi(x_r); L_1, \dots, L_r).
\]
In fact, we'll show that for all \(h \in H, |\ell_i| \leq L_i\) we have
\[
\varphi(h + \ell_1 x_1 + \dots + \ell_r x_r) = \varphi(h) + \ell_1 \varphi(x_1) + \dots + \ell_r \varphi(x_r).
\tag{5.1}
\]
Since \(\varphi\) is centred, \(\varphi(-x_i) = - \varphi(x_i)\), so we may assume that \(\ell_i \geq 0\) for all \(i\). Also (5.1) is trivial if \(\ell_i = 0\) for all \(i\). So we may assume that there exists \(\ell_j > 0\). Then
\begin{align*}
\varphi(h + \ell_1 x_1 + \dots + \ell_r x_r)
&= \varphi(h + \ell_1x_1 + \dots + \ell_r x_r) + \varphi(0) \\
&= \varphi(h + \ell_1x_1 + \dots + (\ell_j - 1) x_j + \dots + \ell_rx_r) + \varphi(x_j)
\end{align*}
so the lemma follows by induction on \(\sum_i \ell_i\).
\end{proof}
\begin{proof}[Proof of theorem 5.1]
By proposition 5.2 and 5.3 and lemma 5.4, there exists \(H + P\) coset progression of rank \(\leq O(K^{O(1)})\) such that
\begin{align*}
H + P &\subseteq 2A - 2A \\
|H + P| &\geq \exp(-O(K^{O(1)})) |A|
\end{align*}
We'll now apply a version of Ruzsa's covering lemma due to Chang. Define recursively sets \(S_1, S_2, \dots \subseteq A\) such that \(S_i\) is a maximal subset of \(A\) of size \(\leq 2K\) with the property that the translates \(x + S_{i - 1} + \dots + S_1 + H + P\) for \(x \in S_i\) are all disjoint. If ever \(|S_i| < 2K\) we stop. Now suppose we get as far as \(S_1, \dots, S_t\). Then
\[
S_t + \dots + S_1 + H + P \subseteq 2A - 2A + tA
\]
so by proposition 2.5
\[
|S_t + \dots + S_1 + H + P| \leq K^{4 + t}|A|.
\]
On the other hand, disjointness of the translates in the definition of \(S_i\) means that
\[
|S_t + \dots + S_1 + H + P| \geq (2K)^{t - 1} \exp (-O(K^{O(1)})) |A|.
\]
Putting these together, we have
\[
2^{t - 1} \leq K^5 \exp (O(K^{O(1)})),
\]
hence \(t \leq O(K^{O(1)})\). In particular this process terminates, at \(S_t\), say.
But also, since \(S_t\) is therefore maximal among \emph{all} subsets of \(A\) such that \(x + S_{t - 1} + \dots + S_1 + H + P\) are disjoint for \(x \in S_t\), Ruzsa's covering lemma from lecture 2 implies that
\[
A \subseteq H + 2P + S_1 - S_1 + \dots + S_{t - 1} - S_{t - 1} + S_t.
\]
Enumerating \(\bigcup_i S_i\) as \(s_1, \dots, s_d\), we have \(d \leq O(K^{O(1)})\) and
\[
A \subseteq H + 2P + P(s_1, \dots, s_d; 1, \dots, 1) \subseteq O(K^{O(1)}) (A \cup \{0\} \cup (-A))
\]
as claimed.
\end{proof}
\begin{ex}
See what bounds you can get if you apply Ruzsa's covering lemma directly, instead of Chang's argument.
\end{ex}
\section*{Lecture 6: Geometry of numbers}
\begin{proposition}[prop 5.3]
Let \(G\) be a finite abelian group. Suppose \(\Gamma \subseteq \hat G\) with \(|\Gamma| = r\) and let \(\rho < \frac{1}{2}\). Then there exists coset progression \(H + P \subseteq B(\Gamma, \rho)\) of rank \(r\) with
\[
|H + P| \geq (\rho/r)^r|G|.
\]
\end{proposition}
To prove this, we'll use a field called the \emph{geometry of numbers}, which is concerned with lattices in \(\R^d\). For us, a \emph{lattice}\index{lattice} \(\Lambda \subseteq \R^d\) will simply be the additive subgroup (not the subspace) generated by some basis \(x_1, \dots, x_d\) for \(\R^d\), so
\[
\Lambda = \{\sum \ell_i x_i: \ell_i \in \Z\}.
\]
If \(\Gamma \subseteq \Lambda\) is another lattice then we say it is a \emph{sublattice}, and write \(\Gamma \leq \Lambda\). It is an exercise (example sheet 2) to check that if \(\Gamma \leq \Lambda\) with basis \(y_1, \dots, y_d\), say, then
\[
\frac{|\det (y_1, \dots, y_d)|}{|\det (x_1, \dots, x_d)|} = [\Lambda:\Gamma].
\]
In particular if \(x_1, \dots, x_d\) and \(x_1', \dots, x_d'\) are bases for the same lattice \(\Lambda\) then
\[
|\det (x_1, \dots, x_d)| = |\det (x_1', \dots, x_d')|.
\]
We define this common value to be \(\det (\Lambda)\).
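For example (a quick check, not from the lectures): in \(\R^2\) take \(\Lambda = \Z^2\) with basis \(x_1 = (1, 0), x_2 = (0, 1)\), and let \(\Gamma \leq \Lambda\) be the sublattice with basis \(y_1 = (1, 0), y_2 = (1, 2)\), i.e.\ \(\Gamma = \{(a, b) \in \Z^2: b \text{ even}\}\). Then
\[
\frac{|\det (y_1, y_2)|}{|\det (x_1, x_2)|} = \frac{2}{1} = 2 = [\Lambda : \Gamma],
\]
as the formula predicts.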
The relevance of lattices to prop 5.3 is the following:
\begin{lemma}[lemma 6.1]
Let \(G, \Gamma\) be as in prop 5.3 and define \(\gamma: G \to \R^d/\Z^d\) by enumerating \(\Gamma\) as \(\{\gamma_1, \dots, \gamma_d\}\) and setting \(\gamma = (\gamma_1, \dots, \gamma_d)\). Then
\[
\Lambda = \gamma(G) + \Z^d
\]
is a lattice with determinant \(|\ker \gamma|/|G|\).
\end{lemma}
\begin{proof}
\(\Lambda\) is finitely generated as \(G\) is finite, and torsion-free as it sits inside \(\R^d\), so isomorphic to \(\Z^k\) for some \(k\). Also \(\Lambda\) has \(\Z^d\) as a finite-index subgroup. So \(k = d\) and \(\text{span}_\R(\Lambda) = \R^d\). So we may take a generating set for \(\Lambda\) of size \(d\), which is then a basis for \(\R^d\). The determinant formula follows from the index formula above, because \(\det \Z^d = 1\) and \([\Lambda : \Z^d] = |\gamma(G)| = |G|/|\ker \gamma|\).
\end{proof}
We'll investigate the interaction of \([-\rho, \rho]^d\) with \(\Lambda\). To do this we introduce another definition.
\begin{definition}[convexity]\index{convex}
A set \(A \subseteq \R^d\) is \emph{convex} if for all \(x \in \R^d \setminus \ocirc A\), there exists a hyperplane \(h_x\) with \(x \in h_x\), \(h_x \cap \ocirc A = \emptyset\), and \(\ocirc A\) contained in one of the two half spaces into which \(h_x\) divides \(\R^d\).
\end{definition}
\begin{definition}[convex body]\index{convex body}
A set \(B \subseteq \R^d\) is a \emph{convex body} if it is bounded and convex and \(\ocirc B \neq \emptyset\). It is \emph{symmetric} if for all \(x \in B\), \(-x \in B\).
\end{definition}
Given a symmetric convex body \(B\) and a lattice \(\Lambda\), define the \emph{successive minima}\index{successive minima} \(\lambda_1 \leq \dots \leq \lambda_d\) of \(B\) with respect to \(\Lambda\) via
\[
\lambda_i = \inf \{\lambda > 0: \dim \text{span}_\R(\lambda \cdot B \cap \Lambda) \geq i\}.
\]
We may then inductively define linearly independent vectors \(v_1, \dots, v_d \in \Lambda\) such that \(v_1, \dots, v_i \in \lambda_i \cl B\). We will call such a set a \emph{directional basis}\index{directional basis} for \(\Lambda\) with respect to \(B\). Note that it is not unique, and not necessarily a basis for \(\Lambda\) in the earlier sense. See example sheet 2.
\begin{theorem}[theorem 6.2][Minkowski's second theorem]
Suppose \(B\) is a symmetric convex body, \(\Lambda\) a lattice in \(\R^d\) and \(\lambda_1, \dots, \lambda_d\) its successive minima. Then
\[
\lambda_1 \cdots \lambda_d \operatorname{vol}(B) \leq 2^d \det(\Lambda).
\]
\end{theorem}
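As a quick sanity check of the theorem (not from the lectures): take \(d = 2\), \(B = [-1, 1]^2\) and \(\Lambda\) the lattice with basis \((\frac{1}{2}, 0)\) and \((0, 3)\). Then \(\lambda_1 = \frac{1}{2}\) (witnessed by \((\frac{1}{2}, 0)\)), \(\lambda_2 = 3\) (witnessed by \((0, 3)\)), \(\operatorname{vol}(B) = 4\) and \(\det \Lambda = \frac{3}{2}\), so
\[
\lambda_1 \lambda_2 \operatorname{vol}(B) = 6 = 2^2 \det \Lambda,
\]
and the inequality can be sharp.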
\begin{lemma}[lemma 6.3][Blichfeldt]
Suppose \(A \subseteq \R^d\) is a measurable set, \(\Lambda\) a lattice and for all \(a, b \in A\) distinct we have \(a - b \notin \Lambda\). Then
\[
\operatorname{vol}(A) \leq \det \Lambda.
\]
\end{lemma}
\begin{proof}
Fix a basis \(x_1, \dots, x_d\) for \(\Lambda\) and define the \emph{fundamental parallelopiped}\index{fundamental parallelopiped} \(P\) with respect to \(x_1, \dots, x_d\) as
\[
P = \{\sum \ell_i x_i: \ell_i \in [0, 1)\}.
\]
Since \(x_1, \dots, x_d\) is a basis for \(\R^d\), for all \(v \in \R^d\) there exists unique \(x_v \in \Lambda, p_v \in P\) such that \(v = x_v + p_v\). Define a map
\begin{align*}
\varphi: \R^d &\to P \\
v &\mapsto p_v
\end{align*}
This cuts \(A\) into countably many measurable pieces and translates these pieces to \(P\). It is injective by hypothesis, hence volume preserving, and so
\[
\operatorname{vol}(A) = \operatorname{vol}(\varphi(A)) \leq \operatorname{vol}(P) = \det \Lambda.
\]
\end{proof}
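Note in passing (not in the notes) that the fundamental parallelopiped \(P\) itself shows the lemma is sharp: two distinct points of \(P\) differ by \(\sum (\ell_i - \ell_i') x_i\) with each coefficient in \((-1, 1)\) and not all zero, so their difference is not in \(\Lambda\), while \(\operatorname{vol}(P) = |\det (x_1, \dots, x_d)| = \det \Lambda\).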
\begin{proof}[Proof of theorem 6.2]
Let \(v_1, \dots, v_d\) be a directional basis for \(\Lambda\) with respect to \(B\). Set \(V_i = \text{span}(v_1, \dots, v_i)\) (with \(V_0 = 0\)) and set
\[
\Lambda_i = \Lambda \cap (V_i \setminus V_{i - 1}).
\]
Then \(\Lambda \setminus \{0\} = \bigcup_{i = 1}^d \Lambda_i\) as a disjoint union.
Claim 1: we have
\[
\lambda_d \ocirc B \cap (\lambda_d \ocirc B + \alpha x) = \emptyset
\]
whenever \(x \in \Lambda_j\) and \(\alpha \geq \frac{2\lambda_d}{\lambda_j}\).
\begin{proof}
Given \(x \in \Lambda_j\), by definition \(x \notin \lambda_j \ocirc B\), so by convexity there exists a hyperplane \(h_x\) such that \(x \in h_x\) and \(h_x \cap \lambda_j \ocirc B = \emptyset\). By symmetry, we may take \(h_{-x} = -h_x\). Note, however, that
\[
-h_x = h_x - 2x.
\]
That means that \(\lambda_j \ocirc B\) is contained in the slice of space \(S_x\) between the two parallel hyperplanes \(h_x\) and \(h_x - 2x\). Clearly
\[
S_x \cap (S_x + \alpha x ) = \emptyset
\]
for all \(\alpha \geq 2\), so in particular
\[
\lambda_j \ocirc B \cap (\lambda_j \ocirc B + \alpha x) = \emptyset
\]
for all such \(\alpha\) as well. Scaling by \(\lambda_d/\lambda_j\), we see that
\[
\lambda_d \ocirc B \cap (\lambda_d \ocirc B + \alpha x) = \emptyset
\]
whenever \(\alpha \geq \frac{2\lambda_d}{\lambda_j}\).
\end{proof}
Claim 2: there exists sets
\[
B_1 \subseteq B_2 \subseteq \dots \subseteq B_d = \lambda_d \ocirc B
\]
such that
\begin{enumerate}
\item \(\operatorname{vol}(B_i) = \left( \frac{\lambda_i}{\lambda_{i + 1}} \right)^i \operatorname{vol}(B_{i + 1})\) for all \(i\).
\item We have \(B_i \cap (B_i + \alpha x) = \emptyset\) whenever \(x \in \Lambda_j\) and \(\alpha \geq 2 \max \{ \frac{\lambda_i}{\lambda_j}, 1\}\).
\end{enumerate}
\begin{proof}
Define operations \(\sigma_1, \dots, \sigma_{d - 1}\) on suitable subsets of \(\R^d\) as follows. Given \(L\) bounded and open, define \(\sigma_i\) separately for each affine subspace \(z + V_i\) with \(z \in L\)\footnote{Correction by lecturer: assume \(z\) is the centre of mass of \(L \cap (z + V_i)\), so that the \(\sigma\)'s depend continuously on \(z\). Also, to be on the safe side, in the statement of Minkowski assume that \(B\) is a polytope.}. For each such affine subspace, fix a particular \(z \in L\) and define
\[
\sigma_i(z + v) = z + \frac{\lambda_i}{\lambda_{i + 1}} v
\]
for all \(v \in V_i\). (On each slice, \(\sigma_i\) scales \(L\) by a factor of \(\frac{\lambda_i}{\lambda_{i + 1}}\), centred at \(z\), parallel to \(V_i\).) Note the following properties:
\begin{enumerate}
\item \(\operatorname{vol}(\sigma_i(L)) = (\lambda_i/\lambda_{i + 1})^i \operatorname{vol}(L)\) (by Fubini)
\item If \(L \cap (z + V_i)\) is open and convex for all \(z\) then \(\sigma_i(L) \subseteq L\) because \(z \in L\).
\item If \(L \cap (z + V_i)\) is open and convex then so is \(\sigma_i(L) \cap (z + V_i)\), and indeed so is
\[
\sigma_i(L) \cap (z + V_j) \quad \text{for } j < i.
\]
\end{enumerate}
Set \(B_d = \lambda_d \ocirc B\) and \(B_i = \sigma_i(B_{i + 1})\). Conclusion 1 is immediate from property 1. Conclusion 2 follows from claim 1 when \(i = d\). For \(i < d\), it follows by induction and repeated application of properties 2 and 3. Indeed, 2 for \(i\) follows from 2 for \(i + 1\) because \(\sigma_i\) scales by \(\frac{\lambda_i}{\lambda_{i + 1}}\) in the direction of \(x\). For \(i < j\), it follows from \(B_i \subseteq B_{i + 1}\).
\end{proof}
To prove the theorem, note that
\[
\operatorname{vol}(B_1) = \lambda_1 \cdots \lambda_d \operatorname{vol}(B)
\]
and by property 2 (with \(i = 1\)), \(a - b \notin 2 \cdot \Lambda\) for all distinct \(a, b \in B_1\), so by Blichfeldt applied to the lattice \(2\Lambda\), which has determinant \(2^d \det \Lambda\),
\[
\operatorname{vol}(B_1) \leq 2^d \det \Lambda.
\]
\end{proof}
\begin{proof}[Proof of prop 5.3]
Write \(\gamma = (\gamma_1, \dots, \gamma_r) \in \hat G^r\). Define \(\Lambda = \gamma(G) + \Z^r\), which is a lattice of determinant \(|\ker \gamma|/|G|\) by lemma 6.1. Let \(\lambda_1, \dots, \lambda_r\) be the successive minima of \([-1, 1]^r\) with respect to \(\Lambda\), and \(v_1, \dots, v_r\) a directional basis. Set \(L_i = \floor*{\frac{\rho}{r\lambda_i}}\) for each \(i\). Then
\[
P(v_1, \dots, v_r; L_1, \dots, L_r) \subseteq [-\rho, \rho]^r.
\]
For each \(i\), pick \(x_i \in G\) such that \(\gamma(x_i) = v_i\) and set \(H = \ker \gamma\). Write \(P = P(x_1, \dots, x_r; L_1, \dots, L_r)\). Then \(H + P \subseteq B(\Gamma, \rho)\).
Claim that if \(\ell_1, \dots, \ell_r\) and \(\ell_1', \dots, \ell_r'\) satisfy \(|\ell_i|, |\ell_i'| \leq L_i\), and
\[
\ell_1 x_1 + \dots + \ell_r x_r \in \ell_1' x_1 + \dots + \ell_r' x_r + H
\]
then in fact \(\ell_i = \ell_i'\) for all \(i\). Indeed, the equation implies that
\[
(\ell_1 - \ell_1') v_1 + \dots + (\ell_r - \ell_r') v_r \in \Z^r \cap [-2\rho, 2\rho]^r
\]
but since \(\rho < \frac{1}{2}\) this last intersection is just \(\{0\}\).
Then
\begin{align*}
|H + P|
&\geq |H| (L_1 + 1) \cdots (L_r + 1) \\
&\geq |H| \left( \frac{\rho}{r} \right)^r \frac{1}{\lambda_1 \cdots \lambda_r} \\
&\geq |G| \left( \frac{\rho}{r} \right)^r
\end{align*}
by Minkowski's 2nd theorem.
\end{proof}
\section*{Lecture 7: Progressions in the Heisenberg group}
Define the \emph{Heisenberg group}\index{Heisenberg group}
\[
H(\Z) =
\begin{pmatrix}
1 & \Z & \Z \\
0 & 1 & \Z \\
0 & 0 & 1
\end{pmatrix}
= \left\{
\begin{pmatrix}
1 & n_2 & n_3 \\
0 & 1 & n_1 \\
0 & 0 & 1
\end{pmatrix}
, n_i \in \Z
\right\}
\]
Set
\[
u_1 =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 1 \\
0 & 0 & 1
\end{pmatrix}
,u_2 =
\begin{pmatrix}
1 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
,u_3 =
\begin{pmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\]
and note that any element of \(H(\Z)\) can be expressed in the form
\[
\begin{pmatrix}
1 & n_2 & n_3 \\
0 & 1 & n_1 \\
0 & 0 & 1
\end{pmatrix}
= u_1^{n_1} u_2^{n_2} u_3^{n_3},
\]
and we have the following formula for multiplying elements in this form:
\[
(u_1^{n_1}u_2^{n_2}u_3^{n_3}) (u_1^{n_1'}u_2^{n_2'}u_3^{n_3'}) = u_1^{n_1 + n_1'} u_2^{n_2 + n_2'} u_3^{n_3 + n_3' + n_1'n_2}.
\tag{7.1}
\]
This is easy to verify by multiplying matrices, but there is a more abstract reason for it. To see this, given \(x, y \in G\), define the commutator \([x, y] = x^{-1}y^{-1}xy\). In light of the identity \(xy = yx [x, y]\), we can view the commutator as being the ``error'' or ``cost'' incurred when interchanging two elements. For example the fact that commutators are trivial in abelian groups can be viewed as capturing the notion that elements can be interchanged freely. The \(n_1'n_2\) term arises because we swap the order of \(n_1'n_2\) pairs of elements \(u_1\) and \(u_2\).
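For instance (a one-line check using (7.1), not spelled out in the notes): taking \((n_1, n_2, n_3) = (0, 1, 0)\) and \((n_1', n_2', n_3') = (1, 0, 0)\) gives \(u_2 u_1 = u_1 u_2 u_3\), so
\[
[u_2, u_1] = (u_1 u_2)^{-1}(u_2 u_1) = u_3.
\]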
Now let's see one possible generalisation of progression to non-abelian groups.
\begin{definition}[ordered progression]\index{ordered progression}
Given \(x_1, \dots, x_r \in G\) and \(L_1, \dots, L_r \geq 0\), we define the \emph{ordered progression} of rank \(r\)
\[
P_{\text{ord}}(x; L)
= P_{\text{ord}}(x_1, \dots, x_r; L_1, \dots, L_r)
= \{x_1^{\ell_1} \cdots x_r^{\ell_r}: \ell_i \in \Z,\ |\ell_i| \leq L_i\}
\]
\end{definition}
Now consider \(P = P_{\text{ord}}(u_1, u_2; L_1, L_2)\) for \(u_1, u_2 \in H(\Z)\) as before and \(L_1, L_2 \geq 0\). We have
\[
(u_1^{\ell_1} u_2^{\ell_2}) (u_1^{\ell_1'} u_2^{\ell_2'}) = u_1^{\ell_1 + \ell_1'} u_2^{\ell_2 + \ell_2'} u_3^{\ell_1'\ell_2}
\]
and it is then easy to check that \(|P^2|/|P| \to \infty\) as \(L_1, L_2 \to \infty\), essentially because by varying \(\ell_1, \ell_1', \ell_2, \ell_2'\) within their ranges one can change \(\ell_1'\ell_2\) without changing \(\ell_1 + \ell_1'\) or \(\ell_2 + \ell_2'\). This can be thought of as an extra freedom in \(P^2\) compared to \(P\).
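One crude way to make this quantitative (a rough count, not from the notes): take \(\ell_2' = 0\) in the formula above, so that \(P^2\) contains \(u_1^{\ell_1 + \ell_1'} u_2^{\ell_2} u_3^{\ell_1' \ell_2}\) for all \(|\ell_1|, |\ell_1'| \leq L_1\) and \(|\ell_2| \leq L_2\). Since the exponents in the normal form are just the matrix entries, distinct triples of exponents give distinct elements, and for \(\ell_2 \neq 0\) the triple \((\ell_1 + \ell_1', \ell_2, \ell_1'\ell_2)\) determines \((\ell_1, \ell_1', \ell_2)\). Hence \(|P^2| \geq 2L_2(2L_1 + 1)^2\), while \(|P| \leq (2L_1 + 1)(2L_2 + 1)\), so \(|P^2|/|P| \geq \frac{2L_2(2L_1 + 1)}{2L_2 + 1} \to \infty\) as \(L_1, L_2 \to \infty\).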
Coming back to commutators and recalling that \(u_3 = [u_2, u_1]\), we see that this corresponds to the freedom to interchange the order of some of the \(u_1, u_2\) in \(P^2\), as seen in the LHS of
\[
(u_1^{\ell_1} u_2^{\ell_2}) (u_1^{\ell_1'} u_2^{\ell_2'}) = u_1^{\ell_1 + \ell_1'} u_2^{\ell_2 + \ell_2'} u_3^{\ell_1'\ell_2}
\]
This is a freedom that the definition of ordered progression explicitly denies us.
It turns out that if we introduce this freedom into the definition of \(P\) above, then \(P\) is indeed forced to have small tripling.
\begin{definition}[nonabelian progression]\index{nonabelian progression}
Given \(x_1, \dots, x_r \in G, L_1, \dots, L_r \geq 0\), the \emph{nonabelian progression} \(P(x; L)\) of rank \(r\) is defined to consist of those elements of \(G\) expressible as products of \(x_1^{\pm 1}, \dots, x_r^{\pm 1}\) with each \(x_i, x_i^{-1}\) appearing at most \(L_i\) times between them.
\end{definition}
Note that for abelian groups all three notions coincide.
It turns out that \(P(u_1, u_2; L_1, L_2)\) does have small tripling (see example sheet 2). A note of caution: nonabelian progressions don't always have small tripling. Consider \(P(x_1, x_2; L_1, L_2)\) for \(x_1, x_2\) generators of a nonabelian free group. In the case of \(H(\Z)\), the formula (7.1) is simplified by the fact that \(u_3 = [u_2, u_1]\) is central in \(H(\Z)\). If this were not the case, we'd end up with elements of the form \([[u_2, u_1], u_1]\), for example. This is in fact a specific example of a property called \emph{nilpotence}.
To define nilpotence, first define a \emph{normal series}\index{normal series} for a group \(G\) to be a sequence
\[
G = G_1 > G_2 > \cdots
\]
of normal subgroups \(G_i \normal G\), and a \emph{central series}\index{central series} to be such a normal series in which each \(G_i/G_{i + 1}\) is central in \(G/G_{i + 1}\).
\begin{definition}[nilpotent group]\index{nilpotent group}\index{nilpotency class}\index{step}
A group \(G\) is \emph{nilpotent} if there exists a finite central series
\[
G = G_1 > \dots > G_{s + 1} = \{1\}.
\]
The smallest \(s\) for which such a series exists is called the \emph{step} or \emph{nilpotency class} of \(G\).
\end{definition}
\begin{ex}
\(H(\Z)\) is \(2\)-step nilpotent.
\end{ex}
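(A sketch, in case it helps: by (7.1) the exponents of \(u_1\) and \(u_2\) simply add under multiplication, so they cancel in any commutator; hence \([H(\Z), H(\Z)] \leq \langle u_3 \rangle\), with equality since \(u_3 = [u_2, u_1]\). As \(u_3\) is central, \(H(\Z) > \langle u_3 \rangle > \{1\}\) is a central series, and \(H(\Z)\) is not abelian, so the step is exactly \(2\).)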
\section*{Lecture 8: Nilpotent groups}
Last time, we said \(G\) is \emph{nilpotent} if there exists a finite central series
\[
G = H_1 > H_2 > \dots > H_{s + 1} = \{1\}
\]
and defined the smallest \(s\) for which such a series exists to be the \emph{step} of \(G\). Today we'll look in more detail at nilpotent groups.
The reasons we focus on this setting are twofold: there is a clean generalisation of Freiman-Green-Ruzsa to nilpotent groups, and a deep theorem of Breuillard, Green and Tao essentially reduces the general case to the nilpotent case.
Given \(x_1, \dots, x_k \in G\), define the \emph{simple commutator}\index{simple commutator} \([x_1, \dots, x_k] = [x_1, \dots, x_k]_k\) recursively as follows:
\begin{align*}
[x_1] &= x_1 \\
[x_1, \dots, x_k] &= [[x_1, \dots, x_{k - 1}], x_k]
\end{align*}
(Recall that \([x, y] = x^{-1}y^{-1}xy\)) Given subgroups \(H, N \leq G\), define
\[
[H, N] = \langle [h, n]: h \in H, n \in N \rangle
\]
and then given \(H_1, \dots, H_k \leq G\), set
\begin{align*}
[H_1] &= H_1 \\
[H_1, \dots, H_k] &= [[H_1, \dots, H_{k - 1}], H_k]
\end{align*}
Note that
\[
[H, N] = [N, H]
\tag{8.1}
\]
since \([h, n] = [n, h]^{-1}\).
\begin{lemma}[lemma 8.1]
Let \(H_1, \dots, H_k, N \normal G\). Let \(S_i\) be a generating set for \(H_i\) for each \(i\). Suppose \([s_1, \dots, s_k] \in N\) whenever \(s_i \in S_i\) for all \(i\). Then
\[
[H_1, \dots, H_k] \leq N.
\]
\end{lemma}
\begin{proof}
Induction on \(k\). \(k = 1\) is trivial so assume \(k > 1\). If \([s_1, \dots, s_k] \in N\) for all \(s_i \in S_i\) then we have \([[s_1, \dots, s_{k - 1}], s_k] \in N\) for all \(s_i \in S_i\), hence
\[
[s_1, \dots, s_{k - 1}] \in C_{G/N} (H_k) = \{g \in G: [g, h] \in N \text{ for all } h \in H_k\}
\]
The centraliser of a normal subgroup is itself normal, so by induction we have \([H_1, \dots, H_{k - 1}] \leq C_{G/N}(H_k)\), and hence \([H_1, \dots, H_k] \leq N\) as claimed.
\end{proof}
\begin{definition}[lower central series]\index{lower central series}
Given a group \(G\), we define the \emph{lower central series}
\[
G = G_1 > G_2 > \cdots
\]
of \(G\) via
\[
G_k = \langle [g_1, \dots, g_k]: g_i \in G \rangle.
\]
\end{definition}
Note that \(G_k \geq G_{k + 1}\) as
\[
[g_1, \dots, g_{k + 1}] = [[g_1, g_2], g_3, \dots, g_{k + 1}].
\]
Also since
\[
[g_1, \dots, g_k]^h = [g_1^h, \dots, g_k^h]
\]
each \(G_k\) is normal in \(G\), where \(x^y = y^{-1}xy\) for all \(x, y \in G\). The fact that this is a \emph{central series} (i.e.\ \(G_k/G_{k + 1}\) is central in \(G/G_{k + 1}\) for all \(k\)) follows from the following result.
\begin{proposition}[prop 8.2]
We have \(G_{k + 1} = [G_k, G]\) for all \(k\). In particular
\[
G_k = [G, \dots, G]_k.
\]
\end{proposition}
\begin{proof}
First, \(G_{k + 1} \leq [G_k, G]\) by definition. The fact that \([G_k, G] \leq G_{k + 1}\) follows from lemma 8.1 since the simple commutators \([g_1, \dots, g_k]\) generate \(G_k\) and \(G, G_k, G_{k + 1}\) are normal.
\end{proof}
\begin{proposition}[prop 8.3]
Let \(G\) be a group generated by \(S\). Then
\[
G_k = \langle [s_1, \dots, s_k] G_{k + 1}: s_i \in S \text{ for all } i \rangle.
\]
\end{proposition}
``\(G_k\) is generated, mod \(G_{k + 1}\), by simple commutators of generators''
\begin{proof}
Note that \([s_1, \dots, s_k]^g \in [s_1, \dots, s_k] G_{k + 1}\) by definition of \(G_{k + 1}\), so \(\langle [s_1, \dots, s_k] G_{k + 1}: s_i \in S \rangle\) is normal in \(G\). Moreover \([s_1, \dots, s_k] \in \langle [t_1, \dots, t_k] G_{k + 1}: t_i \in S \rangle\) whenever \(s_i \in S\) for all \(i\), so lemma 8.1 implies that
\[
[G, \dots, G]_k \subseteq \langle [s_1, \dots, s_k] G_{k + 1}: s_i \in S \rangle.
\]
Proposition 8.2 implies that \([G, \dots, G]_k = G_k\), so we have
\[
G_k \subseteq \langle [s_1, \dots, s_k] G_{k + 1}: s_i \in S \rangle.
\]
The reverse inclusion is immediate.
\end{proof}
\begin{proposition}[prop 8.4]
We have
\[
[G_i, G_j] \subseteq G_{i + j}
\]
for all \(i, j\).
\end{proposition}
For this we'll use the following commutator identity, which you can check directly:
\[
[x, y^{-1}, z]^y [y, z^{-1}, x]^z [z, x^{-1}, y]^x = 1.
\tag{8.2}
\]
\begin{proof}
Case \(j = 1\) follows from proposition 8.2, so we assume \(j > 1\) and, by induction, that for all \(k\)
\[
[G_k, G_{j - 1}] \subseteq G_{k + j - 1}
\tag{8.3}
\]
Now note that
\[
[G_i, G_j] = [G_i, [G_{j - 1}, G]] = [[G, G_{j - 1}], G_i]
\tag{8.4}
\]
by proposition 8.2 and (8.1). We also have
\[
[[G_{j - 1}, G_i], G] = [[G_i, G_{j - 1}], G] \subseteq [G_{i + j - 1}, G] = G_{i + j}.
\tag{8.5}
\]
by (8.1), (8.3) and proposition 8.2, and
\[
[[G_i, G], G_{j - 1}] = [G_{i + 1}, G_{j - 1}] \subseteq G_{i + j}
\tag{8.6}
\]
by prop 8.2 and (8.3). Given \(x \in G\), \(y \in G_{j - 1}\) and \(z \in G_i\), we therefore have
\[
[x, y, z] = (([y^{-1}, z^{-1}, x]^z [z, x^{-1}, y^{-1}]^x)^{-1})^y
\]
by (8.2), which is contained in \(G_{i + j}\) by (8.5) and (8.6).
The proposition follows from (8.4) and lemma 8.1.
\end{proof}
\begin{definition}
Given a group \(G\), the \emph{upper central series}
\[
1 = Z_0(G) \leq Z_1(G) \leq Z_2(G) \leq \cdots
\]
is defined recursively by setting \(Z_{i + 1}(G)\) so that \(Z_{i + 1}(G)/Z_i(G)\) is the centre of \(G/Z_i(G)\). Note that each \(Z_i(G)\) is normal by induction, since the centre of any group is normal.
\end{definition}
\begin{proposition}[prop 8.5]
Let \(G = H_1 > H_2 > \dots > H_{r + 1} = \{1\}\) be a finite central series for \(G\) (so \(G\) is nilpotent). Then we have \(H_i \supseteq G_i\) for all \(i = 1, \dots, r + 1\), and \(H_{r + 1 - i} \subseteq Z_i(G)\) for all \(i = 0, \dots, r\).
\end{proposition}
This justifies the name \emph{upper} and \emph{lower} central series:
\begin{corollary}
If \(G\) is \(s\)-step nilpotent then both the upper and lower central series have length \(s - 1\).
\end{corollary}
\begin{proof}[Proof of prop 8.5]
\(H_1 \supseteq G_1\) by definition, so we may assume \(i > 1\), and then we have
\begin{align*}
H_i
&\supseteq [H_{i - 1}, G] \quad \text{central series} \\
&\supseteq [G_{i - 1}, G] \quad \text{by induction} \\
&= G_i \quad \text{by prop 8.2}
\end{align*}
We also have \(Z_0(G) \supseteq H_{r + 1}\) by definition, so we may assume \(i > 0\) and, by induction, that \(H_{r + 2 - i} \subseteq Z_{i - 1}(G)\). But then
\[
G/Z_{i - 1}(G) = \frac{G/H_{r + 2 - i}}{Z_{i - 1}(G)/H_{r + 2 - i}}.
\]
Because \((H_j)\) is a central series, \(H_{r + 1 - i}/H_{r + 2 - i}\) is central in \(G/H_{r + 2 - i}\), so its image in \(G/Z_{i - 1}(G)\) in the above quotient is central. But the centre of \(G/Z_{i - 1}(G)\) is \(Z_i(G)/Z_{i - 1}(G)\), so \(H_{r + 1 -i} \subseteq Z_i(G)\) as required.
\end{proof}
These results say
\begin{enumerate}
\item \(G\) is nilpotent of step \(\leq s\) if and only if \(G_{s + 1} = \{1\}\) if and only if \(Z_s(G) = G\).
\item If \(G = \langle S \rangle\), we can verify that \(G\) is nilpotent of step \(\leq s\) just by checking that \([t_1, \dots, t_{s + 1}] = 1\) for all \(t_i \in S\).
\item If \(G\) is nilpotent of step \(\leq s\), then any iterated commutator, such as
\[
[[[q_1, q_2], q_3], [q_4, q_5]]
\]
with more than \(s\) entries is trivial (see the example after this list).
\end{enumerate}
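For instance, in the \(2\)-step nilpotent group \(H(\Z)\) from lecture 7, any commutator with more than two entries is trivial: e.g.\ \([[u_1, u_2], u_1] = [u_3^{-1}, u_1] = 1\), since \(u_3\) is central.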
\section*{Lecture 9: Torsion-free nilpotent approximate groups -- an overview}
Recall from lecture 7 that if \(x_1, \dots, x_r \in G\) and \(L_1, \dots, L_r \geq 0\) then the \emph{nonabelian progression} \(P(x; L)\) consists of those elements of \(G\) expressible as products of \(x_1^{\pm 1}, \dots, x_r^{\pm 1}\) in which each \(x_i\) and \(x_i^{-1}\) appear at most \(L_i\) times between them.
\begin{definition}[nilprogression]\index{nilprogression}
If \(\langle x_1, \dots, x_r \rangle\) is \(s\)-step nilpotent then \(P(x; L)\) is called a \emph{nilprogression} of rank \(r\) and step \(s\). In this case we'll write \(P_{\text{nil}}(x; L)\) instead of \(P(x; L)\).
\end{definition}
\begin{proposition}[prop 9.1*]
Given \(r, s \in \N\), there exists \(\lambda = \lambda_{r, s}\) such that if \(x_1, \dots, x_r\) generate an \(s\)-step nilpotent group and \(L_1, \dots, L_r \geq \lambda_{r, s}\) then \(P_{\text{nil}}(x; L)\) is an \(O_{r, s}(1)\)-approximate group.
\end{proposition}
We won't have time to prove this, but we'll do a special case on example sheet 2, where we'll also see that it's necessary to have \(L_i \geq \lambda_{r, s}\).
As in the abelian setting, it turns out that these are essentially the only examples of finite nilpotent approximate groups, apart from genuine subgroups.
\begin{theorem}[theorem 9.2]
Let \(G\) be \(s\)-step nilpotent, \(A \subseteq G\) a finite \(K\)-approximate group. Then there exists \(H \normal \langle A \rangle\) and a nilprogression \(P_{\text{nil}}\) of rank \(\leq K^{O_s(1)}\) such that
\[
A \subseteq HP_{\text{nil}} \subseteq A^{K^{O_s(1)}}.
\]
In particular
\[
|H P_{\text{nil}}| \leq \exp (K^{O_s(1)}) |A|.
\]
\end{theorem}
\begin{remark}
If \(K < 2\) then the theorem is trivial. For \(K \geq 2\) we have \(O(K^{O(1)}) = K^{O(1)}\), i.e.\ multiplicative constants can be absorbed into exponents. So we're not cheating when we write \(K^{O(1)}\) instead of \(O(K^{O(1)})\).
\end{remark}
Unfortunately we won't have time to prove this in full, but in the next few lectures we'll prove some special cases that contain most of the main ideas. We'll start with the case where \(G\) is torsion-free, where theorem 9.2 is originally due to Breuillard and Green (although we'll give a different proof).
We shall start with the following weakened version:
\begin{theorem}[theorem 9.3]
Let \(G\) be torsion-free \(s\)-step nilpotent, \(A \subseteq G\) a finite \(K\)-approximate group. Then there exists an ordered progression \(P_{\text{ord}}\) of rank \(\leq K^{O_s(1)}\) such that
\[
A \subseteq P_{\text{ord}} \subseteq A^{K^{O_s(1)}}.
\]
\end{theorem}
The basic idea is to write \(A\) as a product of approximate groups of step \(< s\) and then apply induction to reduce to the step-\(1\) case, i.e.\ the abelian case, and apply the Freiman-Green-Ruzsa theorem (FGR).
The result we use to do this is as follows:
\begin{proposition}[prop 9.4]
Let \(G\) be torsion-free \(s\)-step nilpotent, \(A \subseteq G\) a finite \(K\)-approximate group. Then there exist \(k \leq K^{O(1)}\) and \(K^{O(1)}\)-approximate subgroups \(A_1, \dots, A_k \subseteq A^{O(1)}\) such that
\[
A \subseteq A_1 \cdots A_k \subseteq A^{K^{O(1)}},
\]
and \(\langle A_i \rangle\) is of step \(< s\) for all \(i\).
\end{proposition}
\begin{proof}[Proof of thm 9.3]
An easy induction gives \(K^{O_s(1)}\)-approximate groups \(B_1, \dots, B_m \subseteq A^{O_s(1)}\) with \(m \leq K^{O_s(1)}\). FGR then gives abelian progressions --- in particular ordered progressions --- \(P_1, \dots, P_m\), each of rank \(\leq K^{O_s(1)}\), such that
\[
B_i \subseteq P_i \subseteq B_i^{K^{O_s(1)}}
\]
and hence
\[
A \subseteq P_1 \cdots P_m \subseteq A^{K^{O_s(1)}}.
\]
\(P_1 \cdots P_m\) is an ordered progression of rank \(\leq m K^{O_s(1)} \leq K^{O_s(1)}\), so we are done.
\end{proof}
Recall that in proving FGR, we wanted ``\(A \subseteq \) small progression'', but we first proved ``\(A^c \supseteq\) large progression''. We then used Chang's covering argument to get what we wanted. We'll use a similar approach here, starting with the following:
\begin{proposition}[prop 9.5]
Suppose \(G\) is torsion-free \(s\)-step nilpotent and \(A \subseteq G\) is a finite \(K\)-approximate group. Then there exist \(r \leq K^{O(1)}\) and \(K^{O(1)}\)-approximate groups \(A_0, A_1, \dots, A_r \subseteq A^{O(1)}\), each generating a group of step \(< s\), such that
\[
|A_0A_1 \cdots A_r| \geq \exp (-K^{O(1)}) |A|.
\]
\end{proposition}
Next time we'll see that passing from proposition 9.5 to proposition 9.4 is very similar to the Chang covering part of the proof of FGR.
In proving prop 9.5, we actually use the preliminary version of FGR in which \(A^c\) contains a large progression. As we noted in that proof, combining prop 5.2 and 5.3 and lemma 5.4 gives the following result:
\begin{theorem}[theorem 9.6][Green-Ruzsa]
Let \(G\) be abelian and \(A \subseteq G\) be a finite \(K\)-approximate group. Then there exist \(H \leq G\) and \(x_1, \dots, x_r \in G\) with \(r \leq K^{O(1)}\), and \(L_1, \dots, L_r \in \N\), such that \(H P(x; L) \subseteq A^4\) and
\[
|H P(x; L)| \geq \exp (-K^{O(1)}) |A|.
\]
\end{theorem}
We'll apply this to prove prop 9.5, via the following result:
\begin{proposition}[proposition 9.7]
Let \(G\) be \(s\)-step nilpotent and \(A \subseteq G\) a finite \(K\)-approximate group. Write \(\pi: G \to G/[G, G]\) for the quotient homomorphism, noting that \(G/[G, G]\) is abelian and that \(\pi(A)\) is a \(K\)-approximate group. Let \(H \leq G/[G, G]\) and \(x_1, \dots, x_r \in G/[G, G]\) be as given by applying theorem 9.6 to \(\pi(A)\). Then
\[
\Big| (A^{24} \cap \pi^{-1}(H)) \prod_{i = 1}^r (A^{24} \cap \pi^{-1}(\langle x_i \rangle)) \Big|
\geq \exp(-K^{O(1)}) |A|.
\]
\end{proposition}
We'll prove prop 9.7 next time. For now, let's see how this implies prop 9.5. Proposition 4.6 immediately tells us that \(A^{24} \cap \pi^{-1}(H)\) and \(A^{24} \cap \pi^{-1}( \langle x_i \rangle)\) are \(K^{O(1)}\)-approximate groups. It turns out they also generate subgroups of step \(< s\), at least when \(G\) is torsion-free.
\begin{lemma}[lemma 9.8]
Let \(G\) be \(s\)-step nilpotent, and write \(\pi: G \to G/[G, G]\) as before. Then
\begin{enumerate}
\item for all \(x \in G/[G, G]\), \(\pi^{-1}(\langle x \rangle)\) is of step \(< s\).
\item if \(H \leq G/[G, G]\) is a finite subgroup and \(G\) is torsion-free then \(\pi^{-1}(H)\) is of step \(< s\).
\end{enumerate}
\end{lemma}
\begin{lemma}[lemma 9.9]
Let \(G\) be an arbitrary group. Then the simple commutator map
\begin{align*}
[\cdot, \cdots, \cdot]_k: G^k &\to G_k \\
(x_1, \dots, x_k) &\mapsto [x_1, \dots, x_k]
\end{align*}
is a homomorphism in each variable mod \(G_{k + 1}\). Moreover \([G, G]\) is contained in the kernel of each of these homomorphisms.
\end{lemma}
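For instance, when \(k = 2\) the first part of lemma 9.9 amounts to the identity \([xy, z] = [x, z]^y [y, z]\), which can be checked directly; since \([x, z]^y = [x, z]\,[[x, z], y]\) and \([[x, z], y] \in [G_2, G] = G_3\), this gives \([xy, z] \equiv [x, z][y, z] \pmod{G_3}\). Moreover if \(x \in [G, G] = G_2\) then \([x, z] \in [G_2, G] = G_3\), so \([G, G]\) lies in the kernel mod \(G_3\). (This is only a sketch of the case \(k = 2\); the general case is similar.)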
\printindex
\end{document}
% tointon.neocities.org
% !TEX program = pdflatex
\documentclass{tufte-handout}
\title{\centering Course: Course Name}
\author{I'm the author}
\date{\today} % without \date command, current date is supplied
%\geometry{showframe} % display margins for debugging page layout
\usepackage{graphicx} % allow embedded images
\setkeys{Gin}{width=\linewidth,totalheight=\textheight,keepaspectratio}
\usepackage{amsmath} % extended mathematics
\usepackage{booktabs} % book-quality tables
\usepackage{units} % non-stacked fractions and better unit spacing
\usepackage{multicol} % multiple column layout facilities
\usepackage{lipsum} % filler text
\usepackage{fancyvrb} % extended verbatim environments
\fvset{fontsize=\normalsize}% default font size for fancy-verbatim environments
% Standardize command font styles and environments
\newcommand{\doccmd}[1]{\texttt{\textbackslash#1}}% command name -- adds backslash automatically
\newcommand{\docopt}[1]{\ensuremath{\langle}\textrm{\textit{#1}}\ensuremath{\rangle}}% optional command argument
\newcommand{\docarg}[1]{\textrm{\textit{#1}}}% (required) command argument
\newcommand{\docenv}[1]{\textsf{#1}}% environment name
\newcommand{\docpkg}[1]{\texttt{#1}}% package name
\newcommand{\doccls}[1]{\texttt{#1}}% document class name
\newcommand{\docclsopt}[1]{\texttt{#1}}% document class option name
\newenvironment{docspec}{\begin{quote}\noindent}{\end{quote}}% command specification environment
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% add numbers to chapters, sections, subsections
\setcounter{secnumdepth}{2}
\usepackage{xcolor}
\definecolor{g1}{HTML}{077358}
\definecolor{g2}{HTML}{00b096}
% chapter format %(if you use tufte-book class)
%\titleformat{\chapter}%
%{\huge\rmfamily\itshape\color{red}}% format applied to label+text
%{\llap{\colorbox{red}{\parbox{1.5cm}{\hfill\itshape\huge\color{white}\thechapter}}}}% label
%{2pt}% horizontal separation between label and title body
%{}% before the title body
%[]% after the title body
% section format
\titleformat{\section}%
{\normalfont\Large\itshape\color{g1}}% format applied to label+text
{\llap{\colorbox{g1}{\parbox{1.5cm}{\hfill\color{white}\thesection}}}}% label
{1em}% horizontal separation between label and title body
{}% before the title body
[]% after the title body
% subsection format
\titleformat{\subsection}%
{\normalfont\large\itshape\color{g2}}% format applied to label+text
{\llap{\colorbox{g2}{\parbox{1.5cm}{\hfill\color{white}\thesubsection}}}}% label
{1em}% horizontal separation between label and title body
{}% before the title body
[]% after the title body
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{color-tufte}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle% this prints the handout title, author, and date
\begin{abstract}
\noindent
A simple notes template, inspired by the Tufte-\LaTeX{} class and the beautiful notes at \begin{verbatim*}
https://github.com/abrandenberger/course-notes
\end{verbatim*}
\end{abstract}
%\printclassoptions
\section{Page Layout}\label{sec:page-layout}
\lipsum[1][1-8]\footnote[1]{Footnotes will appear in the margin}
\begin{definition}%% [can be kept empty]
Here's is the beautiful Schr\"odinger equation
\[ i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=
\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t)\]
\end{definition}
\subsection{Headings}\label{sec:headings}
\marginnote{\begin{proof}[Proof (Theorem 1.1)]
\lipsum[1][1-3]\end{proof}}
\begin{theorem}%% [can be kept empty]
\lipsum[1][1-3] %% for dummy text
\end{theorem}
\begin{lemma}%% [can be kept empty]
\lipsum[1][1-3] %% for dummy text
\end{lemma}
\begin{proof}
\lipsum[1][1-5]
\end{proof}
%\marginnote{\begin{proof}\lipsum[1][1-3]\end{proof}}
\begin{corollary}%% [can be kept empty]
\lipsum[1][1-3] %% for dummy text
\end{corollary}
\begin{proposition}
\lipsum[1][1-3] %% for dummy text
\end{proposition}
\begin{problem}
\lipsum[1][1-2]
\end{problem}
\begin{proof}
\lipsum*[1]
\end{proof}
\end{document}
\section{Paper 7}
\subsection{\emph{"Variational-Based Mixed Noise Removal With CNN Deep Learning Regularization"}}
\begin{frame}{INTRODUCTION}
Random image noise typically follows standard probability distributions,
such as the Gaussian and Poisson distributions. Many methods attempt to
clean up noisy images; among them, variational methods have been widely used.
These methods minimize a cost functional consisting of a data fidelity
term, which measures the difference between the true and the observed
data, plus regularization terms. In the
proposed article, the EM (Expectation-Maximization) algorithm is used to
estimate and remove the noise, with a CNN integrated as the regularization,
resulting in a new variational method.
\end{frame}
\begin{frame}{RELATED WORK - What types of image noise?}
The noise models analyzed in this paper are mainly two categories of
Gaussian-based noise:
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.45\linewidth}
\begin{block}{GAUSSIAN MIXED-NOISE}
$$
\small
n = \left\{
\begin{array}{cc}
n_1, & with~probability~r_1\\
n_2, & with~probability~r_2\\
\cdots, & \cdots\\
n_k, & with~probability~r_k\\
\end{array}
\right\}
$$
\end{block}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.47\linewidth}
\begin{block}{GAUSSIAN RANDOM NOISE}
$$
\small
n = \left\{
\begin{array}{cc}
n_1, & with~probability~1-r\\
n_2, & with~probability~r\\
\end{array}
\right\}
$$
\end{block}
\end{minipage}
\end{minipage}
where $ n_k $ is the \emph{k}-th noise component with probability density function (PDF)
$ p_k $, and the $ r_k $ are the unknown mixture ratios, which sum to
1. Each noise component has a standard deviation ($\sigma$) which indicates the
amount of noise it contributes.
\end{frame}
\begin{frame}{RELATED WORK - Variational Method Approach}
To obtain the clean image it is necessary to minimize a cost functional (\ref{cost}).
\begin{block}{COST-FUNCTIONAL}
\begin{equation}\label{cost}
F(u) = E(u)+\lambda\mathcal{J}(u)
\end{equation}
\end{block}
where $E(u)$ is the collection of data fidelity terms, one per pixel of the
image $u$, which measure the discrepancy between the true and the observed
data and can be derived from the maximum likelihood estimation
of the noise, a task performed by the EM algorithm \footfullcite{0884882814}. $\mathcal{J}$ is a regularizer, while $ \lambda $
controls the balance between the terms.
\end{frame}
\begin{frame}{RELATED WORK - EM Algorithm (An Optimization problem)}
The EM algorithm estimates the noise and classifies it
by minimizing the terms of (\ref{HFunction}). The minimization over each
weight $w$ leads to a continuous update of this parameter, which translates
into greater precision in determining the noise on each individual pixel.
\begin{equation}\label{HFunction}
(u^*, \Theta^*, w^*) = \argmin_x\left\{H(u,\Theta,w) + \lambda_1\mathcal{J}(u)\right\}
\end{equation}
where $\Theta^*$ is a set of statistical parameters containing the noise parameters, such as the mixture ratios ($r$), means and variances ($\sigma^2$).
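As an illustration of the kind of update performed at each EM iteration (a generic sketch
for a two-component zero-mean Gaussian mixture fitted to the residual $v = f - u$, where $f$
is the observed image, $u$ the current estimate and $N$ the number of pixels; the exact
update rules used in the paper may differ):
\begin{align*}
w_{k,i} &= \frac{r_k\,\mathcal{N}(v_i;\,0,\sigma_k^2)}{\sum_{j} r_j\,\mathcal{N}(v_i;\,0,\sigma_j^2)} && \text{(E-step: per-pixel membership weights)}\\
r_k &= \frac{1}{N}\sum_i w_{k,i}, \qquad \sigma_k^2 = \frac{\sum_i w_{k,i}\,v_i^2}{\sum_i w_{k,i}} && \text{(M-step: mixture ratios and variances)}
\end{align*}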
\end{frame}
\begin{frame}{THE PROPOSED METHOD (EM-CNN) - Architecture}
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.45\linewidth}
The EM-CNN architecture performs the following tasks
\begin{enumerate}
\item Smoothness (Denoiser)
\item Regularization (TV)
\item Synthesis (Best Choice)
\item Parameters estimation (EM)
\item Noise classification (EM)
\item Output (Restored image and Noise Estimation)
\end{enumerate}
\end{minipage}
\hspace{0.05\linewidth}
\begin{minipage}{0.47\linewidth}
\begin{figure}[h!]
\centering
\includegraphics[width = 1 \linewidth]{images/paper7/flowchart.png}
\centering
\label{fig:EM-CNN}
\end{figure}
\end{minipage}
\end{minipage}
\end{frame}
\begin{frame}{EXPERIMENTAL RESULTS - Gaussian mixed-noise}
The indices used to estimate the quality of the restored images are
the \emph{Peak Signal-to-Noise Ratio} (PSNR) and the \emph{Structural Similarity Index}
(SSIM). The results obtained by comparing different methods with the
proposed one, on image reconstruction and Gaussian mixed-noise removal,
are visible in figure \ref{fig:GMNComp} and table \ref{indexCompare}. The test set images were taken from
the BSDS100 dataset.
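For reference, the standard definitions (not specific to this paper) are
\[
\mathrm{PSNR} = 10\log_{10}\frac{MAX^2}{\mathrm{MSE}}, \qquad
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(u_i - \hat{u}_i)^2,
\]
where $MAX$ is the peak pixel value (e.g.\ 255), $u$ the reference image and $\hat{u}$ the restored one; higher PSNR (in dB) is better. SSIM instead compares local means, variances and covariances of the two images and lies in $[-1, 1]$, with $1$ meaning a perfect match.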
\begin{figure}[h!]
\centering
\includegraphics[width = 0.8\linewidth]{images/paper7/GMN comparison.png}
\centering
\caption{Image reconstruction under Gaussain mixture noise comparison.}
\label{fig:GMNComp}
\end{figure}
\begin{table}[htbp]
\centering
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{|c||ccc|ccc|ccc|}
\hline
& \multicolumn{3}{c||}{$\sigma_1=5 ~~~\sigma_2=30 $} & \multicolumn{3}{c||}{$\sigma_1=10 ~~~\sigma_2=50 $} & \multicolumn{3}{c||}{$\sigma_1=15 ~~~\sigma_2=75 $}\\
\hline
$r_1:r_2\rightarrow$ & 0.3:0.7 & 0.5:0.5 & 0.7:0.3 & 0.3:0.7 & 0.5:0.5 & 0.7:0.3 & 0.3:0.7 & 0.5:0.5 & 0.7:0.3\\
\hline
\hline
\multirow{2}{*}{PGPD\cite{0884882815}} & 29.42 & 30.16 & 31.19 & 27.23 & 27.88 & 28.78 &25.64 & 26.26 & 27.14\\
& 0.8132 & 0.8349 & 0.8544 & 0.7404 & 0.7634 & 0.7874 & 0.6790 & 0.7044 & 0.7336\\
\hline
\multirow{2}{*}{IRCNN\cite{0884882819}} & 29.83 & 30.53 & 31.48 & 27.66 & 28.25 & 29.04 & 19.37 & 25.22 & 27.32\\
& 0.8262 & 0.8451 & 0.8664 & 0.7611 & 0.7789 & 0.7993 & 0.2948 & 0.6423 & 0.7426\\
\hline
\multirow{2}{*}{Proposed} & \bfseries{29.88} & \bfseries{31.26} & \bfseries{32.72} & \bfseries{28.01} & \bfseries{29.00} & \bfseries{30.37} & \bfseries{26.23} & \bfseries{27.41} & \bfseries{28.84}\\
& \bfseries{0.8273} & \bfseries{0.8785} & \bfseries{0.9094} & \bfseries{0.7801} & \bfseries{0.8112} & \bfseries{0.8528} & \bfseries{0.6931} & \bfseries{0.7613} & \bfseries{0.8077}\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Average PSNR and SSIM values on BSD100 datasets of some methods.}
\label{indexCompare}
\end{table}
\end{frame}
\begin{frame}{EXPERIMENTAL RESULTS - Gaussian random-noise}
As for the reconstruction of an image affected by Gaussian noise plus
random-valued (impulse) noise, the results obtained from the comparison of
different methods are visible in figure \ref{fig:saltComp} and in table \ref{GNPINIndex}. As in the previous
experiment, the indices are calculated while varying the mixture ratio $r$ and
standard deviation $\sigma$ values.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.7\linewidth]{images/paper7/salt.png}
\centering
\caption{Image reconstruction under Gaussain noise plus random-valued noise comparison.}
\label{fig:saltComp}
\end{figure}
\begin{table}[h!]
\centering
\begin{adjustbox}{max width=0.7\textwidth}
\begin{tabular}{|c||ccc|ccc|ccc|}
\hline
& \multicolumn{3}{c||}{$\sigma_1=5$} & \multicolumn{3}{c||}{$\sigma_1=10$} & \multicolumn{3}{c||}{$\sigma_1=15$}\\
\hline
$r\rightarrow$ & 0.1 & 0.2 & 0.3 & 0.1 & 0.2 & 0.3 & 0.1 & 0.2 & 0.3\\
\hline
\hline
\multirow{2}{*}{Noisy} & 18.76 & 15.76 & 14.04 & 18.43 & 15.61 & 13.95 & 17.94 & 15.38 & 13.81\\
& 0.3545 & 0.2290 & 0.1662 & 0.3313 & 0.2216 & 0.1626 & 0.3040 & 0.2109 & 0.1571\\
\hline
\multirow{2}{*}{Two-phase\cite{0884882828}} & 25.40 & 24.77 & 24.13 & 24.34 & 23.94 & 23.45 & 23.32 & 23.02 & 22.67\\
& 0.6599 & 0.6313 & 0.5957 & 0.5116 & 0.4914 & 0.4655 & 0.4224 & 0.4058 & 0.3854\\
\hline
\multirow{2}{*}{ACWMF+K-SVD \cite{0884882844}\cite{0884882813}} & 26.07 & 25.27 & 24.54 & 25.50 & 24.91 & 23.13 & 24.64 & 24.19 & 23.67\\
& 0.7130 & 0.6761 & 0.6333 & 0.5902 & 0.5613 & 0.5291 & 0.4855 & 0.4625 & 0.4414\\
\hline
\multirow{2}{*}{LSM-NLR \cite{0884882826}} & 29.48 & 26.97 & 24.55 & 29.43 & 27.08 & 24.43 & 28.51 & 26.06 & 23.98\\
& 0.6548 & 0.5774 & 0.4982 & 0.6420 & 0.5646 & 0.4770 & 0.6282 & 0.5458 & 0.4643\\
\hline
\multirow{2}{*}{$l_1-l_0$\cite{0884882829}} & 30.45 & 27.75 & 25.95 & 28.45 & 26.59 & 25.34 & 27.33 & 25.69 & 24.55\\
& 0.8962 & 0.7911 & 0.7312 & 0.7539 & 0.6748 & 0.6293 & 0.6793 & 0.6010 & 0.5744\\
\hline
\multirow{2}{*}{Proposed} & \bfseries{31.06} & \bfseries{30.44} & \bfseries{27.66} & \bfseries{30.97} & \bfseries{29.5} & \bfseries{27.41} & \bfseries{29.74} & \bfseries{28.09} & \bfseries{25.73}\\
& \bfseries{0.9214} & \bfseries{0.9157} & \bfseries{0.8497} & \bfseries{0.8943} & \bfseries{0.8749} & \bfseries{0.8316} & \bfseries{0.8530} & \bfseries{0.8222} & \bfseries{0.7750}\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Average PSNR and SSIM values on one image of some methods.}
\label{GNPINIndex}
\end{table}
\end{frame}
\begin{frame}{CONCLUSION}
Despite reaching good performance, the proposed method does not always
manage to obtain better results than those of competing methods.
However, the average PSNR it achieves is the highest (Fig. \ref{fig:cropRes}).
\begin{figure}[h!]
\centering
\includegraphics[width = \linewidth]{images/paper7/crop.png}
\centering
\caption{Cropped real noisy images.}
\label{fig:crop}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width = 0.6\linewidth]{images/paper7/crop result.png}
\centering
\caption{PSNR result of different methods on cropped real noisy images.}
\label{fig:cropRes}
\end{figure}
\end{frame}
%-------------------------
% Resume in Latex
% Author : Solaiman Mansyur
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
%-------------------------
% Custom commands
\newcommand{\resumeItem}[2]{
\item\small{
\textbf{#1}{: #2 \vspace{-2pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-1pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-5pt}
}
\newcommand{\resumeSubItem}[2]{\resumeItem{#1}{#2}\vspace{-4pt}}
\renewcommand{\labelitemii}{$\circ$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=*]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% CV STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING-----------------
\begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
\textbf{\href{https://solaiman.me/}{\Large Solaiman Mansyur}} & Email : \href{mailto:[email protected]}{[email protected]}\\
\href{https://solaiman.me}{https://www.solaiman.me} & Mobile : +62-811-412-8485 \\
\end{tabular*}
%-----------EXPERIENCE-----------------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
{Confidential Clients}{Confidential Locations, Malaysia/Germany/Romania/UK}
{Senior Web Developer (Independent Contractor) - Remote}{Mar 2019 - Present}
\resumeItemListStart
\resumeItem{Prize Competition Platform}
{Stabilize, Enhance, Migrate the Technologies that backed the platform.}
\resumeItem{Project Management Tool}
{Identity Service Migration (integrating the ORY ecosystem into the stack).}
\resumeItem{Global Supply and Demand Platform for Medical Supplies}
{Initiated during the Covid-19 pandemic; a platform matching supply and demand for medical supplies around the globe.}
\resumeItem{Leave Management}
{Leave Management for Multi National Company.}
\resumeItem{Bug Prediction and Software Entropy}
          {Bug prediction using the Bug Caching methodology and software entropy calculation.}
\resumeItemListEnd
\resumeSubheading
{Metacloud Sdn Bhd (Sunway Technology Group)}{Petaling Jaya, Malaysia}
{Senior Software Engineer}{Jan 2017 - Feb 2019}
\resumeItemListStart
\resumeItem{3 Way Invoice Matching Integration}
{ERPs, e-Procurement and Invoice Matching system integration.}
\resumeItem{Microservice Architecture}
          {Migrated the e-Procurement system from a monolith to a containerized microservice architecture.}
\resumeItem{SRE Implementation}
          {Moved from bi-weekly deployments to multiple deployments per day without downtime.}
\resumeItemListEnd
\resumeSubheading
{Logistics Consulting Asia Sdn Bhd}{Petaling Jaya, Malaysia}
{Application Developer}{Sep 2015 - Jan 2017}
\resumeItemListStart
        \resumeItem{Distributed Data Synchronization}
          {Distributor Management System running in more than 300 instances across South-East Asia.}
\resumeItem{Data Warehousing}
{Maintenance and operation of OLTP databases.}
\resumeItem{Data ETL Processing}
          {Processing OLTP data sets into OLAP data sets.}
\resumeItem{Data ETL Monitoring}
          {Monitoring the ETL processes, writing logs, and showing progress.}
\resumeItem{BI Dashboard}
{Business Intelligence Dashboard.}
\resumeItem{SSO Security Enhancement}
          {Single sign-on security enhancement.}
\resumeItemListEnd
\resumeSubheading
{CloudApps Technologies Sdn Bhd}{Kuala Lumpur (remote), Malaysia}
{Senior Software Engineer}{Feb 2015 - Aug 2015}
\resumeItemListStart
\resumeItem{Cloud-Based Accounting Application}
{SaaS accounting application.}
\resumeItemListEnd
\resumeSubheading
{CV. Unggul Visi Utama - EwakooLabs}{Makassar, Indonesia}
{Software Engineer}{July 2012 - Jan 2015}
\resumeItemListStart
\resumeItem{CBT Simulator}
          {An application for exam practice using a computer-based test methodology.}
\resumeItem{GladiResik}
{A simulation of online registration of universities in Indonesia. Used by more than 30,000 users.}
\resumeItem{TaxIS}
{An Information system for a taxi cab.}
\resumeItem{TohaPutraPOS}
{Point of Sales Application for the biggest Quran Publisher in Indonesia, Toha Putra.}
\resumeItem{iSILK}
{Medical Laboratory Information System.}
\resumeItemListEnd
\resumeSubheading
{Jakarta Intensive Learning Center}{Makassar, Indonesia}
{Desktop Software Engineer and Branch Manager}{Sept 2007 - June 2012}
\resumeItemListStart
\resumeItem{Online Employee Attendance}
          {Managing attendance for about 200 employees.}
\resumeItem{Student Information System}
          {Integrated a multi-tenancy Student Information System serving more than 30 tenants/branches.}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------Skills-----------------
\section{Skills}
\resumeSubHeadingListStart
\resumeSubItem{Fundamental Technical Skills}
      {Design Patterns, Clean Code, Clean Architecture, CI/CD, Infrastructure as Code}
\resumeSubItem{Software Development}
{Modern Fullstack Web Development, Hexagonal Architecture, TDD, Agile methodology}
\resumeSubItem{Soft and Communication Skills}
{Bahasa Indonesia, English, Arabic}
\resumeSubHeadingListEnd
%-----------EDUCATION-----------------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{Hasanuddin University}{Makassar, Indonesia}
{Diploma in Electrical Engineering and Bachelor Degree in Information Technology}{Aug. 2004 -- Dec. 2012}
\resumeSubHeadingListEnd
%-------------------------------------------
\end{document}
\chapter{Running PyLith}
\label{cha:running}
\section{Organization of Simulation Components}
The components in a PyLith simulation generally fall into four main
categories:
\begin{description}
\item[Topology] Components associated with the spatial discretization
of the domain, such as the finite-element mesh;
\item[Physics] Components specifying the physics to be solved, such as
materials associated with a governing equation, bulk rheologies,
boundary conditions, and fault interface conditions;
\item[Physics Implementation] Components that perform the
finite-element operations, such as integration of the residual and
system Jacobian; and
\item[Observers] Components that get notified of updates to the
solution and state variables, such as writers for saving the
solution to a file.
\end{description}
The physics components provide the point-wise functions (kernels) used
by the physics implementation components, the auxiliary field, and the
layout of the derived field (subfields computed from the auxiliary
field and solution, such as stress and strain).
Figure \vref{fig:pylith:workflow} shows the workflow for running PyLith.
The user supplies:
\begin{enumerate}
\item Mesh information. This includes the topology of the
finite-element mesh (coordinates of vertices and how the vertices
are connected into cells), a material identifier for each cell, and
sets of vertices associated with boundary conditions, faults, and
output (for subsets of the mesh). This information can be provided
using the PyLith mesh ASCII format (see Chapter \vref{cha:examples}
for examples and Section \vref{sec:format:MeshIOAscii} for the format
specification) or by importing the information from the LaGriT or
CUBIT meshing packages (see Chapter \vref{cha:examples} for
examples).
\item A set of parameters describing the problem. These parameters
describe the type of problem to be run, solver information,
time-stepping information, boundary conditions, materials, etc. This
information can be provided from the command-line or by using a
\filename{cfg} file.
\item Spatial databases specifying the values for the material
properties and boundary conditions. Arbitrarily complex spatial
variations in boundary and fault conditions and material properties
may be given in the spatial database (see Chapter
\vref{cha:examples} for examples and Appendix
\vref{sec:format:SimpleIOAscii} for the format specification).
\end{enumerate}
PyLith writes solution information, such as solution fields and state
variables, to either VTK files or HDF5/Xdmf files using the observer
components. ParaView and VisIt can read both types of
files. Post-processing of output is generally performed using HDF5
files accessed via a Python script and the h5py package or a Matlab
script.
\begin{figure}[htbp]
\includegraphics[width=5in]{runpylith/figs/runpylith}
\caption{PyLith requires a finite-element mesh (three different
mechanisms for generating a mesh are currently supported),
simulation parameters, and spatial databases (defining the spatial
variation of various parameters). PyLith writes the solution
output to either VTK or HDF5/Xdmf files, which can be visualized
    with ParaView or VisIt. Post-processing is generally done using
the HDF5 files with Python or Matlab scripts.}
\label{fig:pylith:workflow}
\end{figure}
% ----------------------------------------------------------------------
\input{./runpylith/definesim.tex}
\input{./runpylith/pylithapp.tex}
\input{./runpylith/problems.tex}
\input{./runpylith/databases.tex}
\input{./runpylith/labels.tex}
\input{./runpylith/output.tex}
\input{./runpylith/utils.tex}
\input{./runpylith/parametersgui.tex}
\input{./runpylith/troubleshooting.tex}
% End of file
\chapter{Formalisms for language systems and language strategies}
\label{s:formalisms}
Modelling a language strategy encompasses defining semantic and
syntactic templates and applying realised templates that make up a
language system. Moreover, the language strategy needs to define
adoption, alignment and invention operators. This imposes hard
requirements on the formalisms that are needed to model a language
strategy.
Standard first-order formalisms in logic that are commonly used in
artificial language evolution research, such as predicate logic, are
insufficient to represent the semantic templates of some of the
strategies outlined in the previous chapter. For example, the meaning
of a realisation of the graded membership strategy, such as \textit{very
red}, cannot be expressed using any first-order logical formalism in
a satisfactory way, as the adverb \textit{very} modifies the meaning of
the adjective \textit{red}.
The syntactic templates require a grammar formalism, as the word order
seems to have an impact on the resulting focal colour that is
intended. This is for example the case in the compounding strategy in
Russian, where \textit{zel\"eno-\v z\"eltyj} (`green-yellow') is different
from \textit{\v z\"elto-zel\"enyj} (`yellow-green'). This difference implies
that the lexical approach in which the lexicon captures a direct
association between terms and colour categories is no longer sufficient.
In this book, I have chosen to use Incremental Recruitment
Language (IRL) to represent semantic templates and Fluid
Construction Grammar (FCG) to represent syntactic templates. Both
formalisms have been especially designed to support experiments in
artificial language evolution \citep{loetzsch09understanding}.
This chapter provides a short introduction to both systems that
introduces the design principles behind these formalisms and that
should enable the reader to understand the models of language
strategies that will be presented in future chapters. Readers can
choose to skip this chapter and return to it when needed.
\section{Embodied cognitive semantics using IRL}
\label{s:irl}
\is{Incremental Recruitment Language|see{IRL}}
\is{IRL}
\subsection{Theoretical foundations}
Although research on the emergence of communication systems with
similar features as human natural language has shown important
progress, the complexity of the meanings considered so far remains
limited. Experiments either use simple categories
\citep{steels05coordinating, belpaeme05explaining}, conjunctive
combinations of categories \citep{wellens08flexible} or
predicate-argument expressions \citep{batali02negotiation,
smith03iterated, debeule08emergence}. Natural languages are clearly
capable of expressing second order semantics
\citep{dowty1981introduction}. For example, the adverb \textit{very} in
\textit{very big} modifies the meaning of the adjective, it is not just a
simple conjunction of the predicates \textit{very} and \textit{big}. Moreover the
same predicate (e.g. \textit{big}) can often be used in different ways, for
example to further restrict the set of possible referents of a noun
(as in \textit{the big ball}), to state a property of an object (as in
\textit{the ball is big}), to reify the predicate itself and make a
statement about it (as in \textit{big says something about size}), to
compare the elements of a set (as in \textit{this ball is bigger than the
others}), etc. The specific usage of a predicate in a particular
utterance is clearly conveyed by the grammar, so any theory on the
origins and evolution of grammar must address second order semantics.
The semantics of the utterances in this book are not represented in
a standard logic, but in an alternative framework, Incremental
Recruitment Language or IRL \citep{steels00emergence,
steels05planning, vandenbroeck07constraintbased,
vandenbroeck08constraintbased}. In this framework the meaning of a
sentence is a \textsc{semantic constraint network} that the speaker
wants the hearer to evaluate in order to achieve the communicative
goal selected by the speaker. This approach resonates with earlier
work in AI on procedural semantics \citep{winograd72understanding}.
The IRL framework has been especially designed for experiments on \enlargethispage{\baselineskip}
artificial language evolution and therefore supports key features that
have been proven successful in this field of research. It is
\emph{omni-directional}: not only can it be used for both
conceptualisation and interpretation but also to complete partial
semantic constraint networks. This feature not only enables both
speaker and hearer to use the same formalism, but has also proven
to be crucial when writing adoption, alignment and invention
operators. The speaker can use it to diagnose potential problems in
communication by interpreting its own utterance to detect potential
ambiguities \citep{steels03reentrance}. The hearer can try to
reproduce a partially understood meaning together with the
communicative goal, revealed by the speaker in a failed interaction,
to infer which parts it misinterpreted or did not know yet. On a\enlargethispage{\baselineskip}
technical level, this strongly suggests a constraint-propagation
language \citep{marriott98programming}.
Another key feature of IRL is its \emph{open-endedness} towards the
cognitive operations it can represent. Previous research has deployed
a wide range of such operations including discrimination trees
\citep{steels96perceptually}, event feature detectors
\citep{siskind01grounding}, nearest neighbour classification
\citep{belpaeme05explaining} and radial basis function networks
\citep{steels05coordinating}. IRL aims to be an overarching formalism
which can support any cognitive operation for which a tractable
implementation on a computer exists. It can be used for rich semantics
in which any of these operations can be combined and also for
experiments in which the choice of the cognitive operation is not
predetermined by the experimenter.
Finally, IRL is designed to support world models which are
\emph{grounded} in the sensory-motor system of the agent. These world
models are non-symbolic and are based on the operation of their
sensorimotor apparatus. Often \citep[e.g.][]{batali02negotiation,
smith03iterated, wellens08flexible} it is assumed that there is a
simple straightforward mapping of the non-symbolic world model onto a
categorial situation model, which is a representation of the world in
the form of facts in some variant of predicate calculus. But as
different languages conceptualise the world in different ways, this
mapping function is clearly nontrivial.
\subsection{Semantic constraint network}
\label{s:semantic-constraint-network}
\is{semantic constraint network}
\is{semantic network|see{semantic constraint network}}
The meaning of an utterance will be viewed as a \textsc{semantic
constraint network}, or \textsc{semantic network} for short. The basic
nodes of these networks are \textsc{primitive
constraints}\is{primitive constraint} which reflect cognitive
operations and which are provided by the experimenter. Each
constraint has a number of arguments which can be bound to a certain
variable. Variables are denoted using a question mark prefix. If a
variable appears as an argument to more than one constraint, it means
the value for this variable is constrained by more than one
constraint. Some variables can be bound to a certain \textsc{semantic
entity}\is{semantic entity} by means of a bind
statement. Semantic entities are marked by square brackets.
An example network for an utterance like \textit{the block} is shown in
\figref{f:context-and-network} to identify the block within a
hypothetical context. The {\sc Equal-to-Context} primitive (primitives
will always be printed in small capitals) binds all entities in the
context to $?s1$. The {\sc Filter-Set-Proto\-type} primitive takes
this entity-set as input, computes all entities that are similar to
the prototype of a block (provided by the bind statement through $?p1$)
and binds the resulting set to $?s2$. Finally, the {\sc
Select-Element}, of which the selector is specified as [unique],
checks whether this set contains only one element and binds this
element to $?t$.
\begin{figure}
\centering
\subfigure[]{
\includegraphics[width=0.45\textwidth]{./frameworks/figures/context.pdf}
\label{f:context}
}
\subfigure[]{
\includegraphics[width=\textwidth]{./frameworks/figures/network.pdf}
\label{f:network}
}
\caption[Example semantic constraint network for \textit{the block}]{\subref{f:context} a hypothetical context \subref{f:network}
an example of a semantic constraint network for \textit{the block} to
identify the topic within (a) (marked in grey for clarity)}
\label{f:context-and-network}
\end{figure}
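The network in \figref{f:network} can also be written down textually
as a flat list of primitive constraints and bind statements. The exact
surface syntax of IRL is not shown in this chapter, so the rendering
below is only an illustrative sketch: the argument order follows the
convention that the first argument is the target, and the exact form of
the bind statements is an assumption.

\footnotesize
\ltitle{Sketch of the network for "the block" in a bracketed notation}
\begin{lstlisting}
((equal-to-context ?s1)
 (filter-set-prototype ?s2 ?s1 ?p1)
 (bind prototype ?p1 [block])
 (select-element ?t ?s2 ?sr1)
 (bind selector ?sr1 [unique]))
\end{lstlisting}
\normalsize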
\newpage The more complex the world becomes (for example, by adding a second block), the
more complex the semantic constraint network will need to be in order
to achieve this goal (for example extending the previous one with
another filter operation based on size). An example of such a context
and such a network is shown in \figref{f:more-complex-context-and-network}. This network could represent
the meaning of an utterance like \textit{the big block}. The entity-set of
all blocks in $?s2$ is now further filtered to contain only big blocks
using the {\sc Filter-Set-Category} primitive, which binds the
resulting set to $?s3$ which is passed on to the {\sc Select-Element}
primitive. Note that the previous network in \figref{f:network}
would fail in this context, as the {\sc Select-Element} primitive with
a [unique] selector requires the filtered set of blocks to contain
exactly one element.
\begin{figure}[htbp]
\centering
\subfigure[]{
\includegraphics[width=.45\textwidth]{./frameworks/figures/more-complex-context.pdf}
\label{f:more-complex-context}
}
\subfigure[]{
\includegraphics[width=\textwidth]{./frameworks/figures/more-complex-network.pdf}
\label{f:more-complex-network}
}
\caption[Example semantic constraint network for \textit{the big
block}]{\subref{f:more-complex-context} a more complex hypothetical
world \subref{f:more-complex-network} a more complex semantic
constraint network to identify \textit{the big block} (marked in grey for
clarity)}
\label{f:more-complex-context-and-network}
\end{figure}
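The same notation extends to the network in
\figref{f:more-complex-network}: the additional {\sc
Filter-Set-Category} primitive and its bind statement are inserted
between the prototype filter and the selection. Again, this is only an
illustrative sketch, not actual IRL syntax.

\footnotesize
\ltitle{Sketch of the network for "the big block" in a bracketed notation}
\begin{lstlisting}
((equal-to-context ?s1)
 (filter-set-prototype ?s2 ?s1 ?p1)
 (bind prototype ?p1 [block])
 (filter-set-category ?s3 ?s2 ?c1)
 (bind category ?c1 [big])
 (select-element ?t ?s3 ?sr1)
 (bind selector ?sr1 [unique]))
\end{lstlisting}
\normalsize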
\subsection{Evaluation}
\is{semantic constraint network!evaluation}
The evaluation of a semantic constraint network involves cycling
through the primitives of the network until each primitive has been
successfully revised. The revision of a primitive has three possible
outcomes: (1) validation with possible bindings for one or more of its
arguments, (2) rejection, or (3) suspension. Whenever a primitive
returns more than one possible solution, the evaluation tree, which
keeps track of possible bindings for each variable in the network,
splits. This especially occurs during conceptualisation when the
semantic entities of the primitive constraints are still
unknown. Whenever a primitive rejects a particular set of bindings,
that particular branch in the evaluation tree can not be explored any
further. Whenever a primitive is not specified for a certain pattern \enlargethispage{\baselineskip}
of bound or open arguments, it is suspended and revised at a later
moment.
During conceptualisation, the topic is typically known but the
semantic entities of the cognitive operators (like for example which
prototype or which category to use) are not. During interpretation,
the opposite is true: the semantic entities of the cognitive operators
have been passed on in the utterance, but the topic has not. A typical
network during interpretation is shown in \figref{f:more-complex-network}. The same network during
conceptualisation is shown in \figref{f:network-conceptualisation}.
The evaluation process of this network is shown in \figref{f:evaluation-process}. The context consists of four objects: a
big block (b-bk), a small block (s-bk), a ball (bl) and a pyramid (pd)
and the goal is to identify the big block in this context, so we have
a binding for $?t$. The only primitive that can be revised is {\sc
Equal-to-Context} which can bind $?s1$ to the context: \{b-bk, s-bk,
bl, pd\}. The next primitive that can be revised is {\sc
Filter-Set-Prototype} and let us suppose it knows the prototypes for
block and ball. This will cause a split in the evaluation tree: one in
which $?p1$ is bound to [block] and $?s2$ is bound to \{b-bk, s-bk\}
(node 2) and another branch in which $?p1$ is bound to [ball] and
$?s2$ is bound to \{bl\} (node 3). The next primitive that can be
revised is {\sc Filter-Set-Category}. Let us suppose this primitive is
only defined when its second argument contains at least two
entities. This will lead to a rejection of the branch of node 3 and to
a further split of the branch of node 2: one in which $?c1$ is bound
to [small] and $?s3$ is bound to \{s-bk\} (node 4) and another branch
in which $?c1$ is bound to [big] and $?s3$ is bound to \{b-bk\} (node
5). The final primitive that needs to be revised is {\sc
Select-Element}, which checks whether $?s3$ contains only one entity
that is equal to the big block. This leads to a rejection of the
branch of node 4 but also to a successful evaluation of the branch of
node 5 in which $?sr1$ is bound to [unique].
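The bindings along the successful branch of this evaluation can be
summarised as follows. This trace is merely a compact restatement of
the description above, not actual IRL output.

\footnotesize
\ltitle{Bindings along the successful branch of the evaluation tree}
\begin{lstlisting}
goal: ?t = b-bk (known during conceptualisation)
node 1: ?s1 = {b-bk, s-bk, bl, pd} (Equal-to-Context)
node 2: ?p1 = [block], ?s2 = {b-bk, s-bk} (Filter-Set-Prototype)
node 5: ?c1 = [big], ?s3 = {b-bk} (Filter-Set-Category)
node 5: ?sr1 = [unique] (Select-Element validates the branch)
\end{lstlisting}
\normalsize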
\begin{figure}[htbp]
\centering
\subfigure[]{
\includegraphics[width=.60\textwidth]{./frameworks/figures/network-conceptualisation.pdf}
\label{f:network-conceptualisation}
}
\subfigure[]{
\includegraphics[width=.8\textwidth]{./frameworks/figures/evaluation.pdf}
\label{f:evaluation-process}
}
\caption[Evaluation process of an example constraint
network]{\subref{f:evaluation-process} The evaluation process of an
example network \subref{f:network-conceptualisation} during
conceptualisation in a context consisting of four objects: a big
block (b-bk), a small block (s-bk), a ball (bl) and a pyramid
(pd). The communicative goal is to identify the big block.}
\label{f:evaluation}
\end{figure}
During acquisition of new semantic entities, the hearer will have been
able to reconstruct the intended semantic constraint network for a
large part. This network will be extended by the communicative goal
that is revealed by the speaker and will be revised in order to
acquire the semantic entity that fulfills the need in the current
network.
An example of such a network is shown in \figref{f:network-learning}, which could have been parsed after hearing a
sentence like \textit{the wabado ball}. Due to the omni-directional\-ity of
IRL, the first two arguments of the {\sc Filter-Set-Category}
primitive, $?s3$ and $?s2$, can be completely determined, which allows
IRL to come up with either a category that is already known or with an
entirely new category that would perform the correct filtering.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.9\textwidth]{./frameworks/figures/network-learning.pdf}
\caption[Example constraint network during learning]{A partial
network that could be reconstructed by combining information
from parsing \textit{the wabado ball} and the communicative goal
revealed by the speaker. Due to the omni-directionality of IRL,
the first two arguments of the {\sc Filter-Set-Category}
primitive are sufficient to allow IRL to deduce a valid category
for $?c1$.}
\label{f:network-learning}
\end{center}
\end{figure}
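In the same illustrative bracketed notation as before, the partial
network of \figref{f:network-learning} can be sketched as follows.
Note that no bind statement for $?c1$ is present: this is exactly the
piece of information the hearer still has to supply, either by re-using
a known category or by creating a new one.

\footnotesize
\ltitle{Sketch of the partial network parsed from "the wabado ball"}
\begin{lstlisting}
((equal-to-context ?s1)
 (filter-set-prototype ?s2 ?s1 ?p1)
 (bind prototype ?p1 [ball])
 (filter-set-category ?s3 ?s2 ?c1)
 (select-element ?t ?s3 ?sr1)
 (bind selector ?sr1 [unique]))
\end{lstlisting}
\normalsize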
\subsection{Conceptualisation and chunking}
\label{s:irl-conceptualisation}
\is{conceptualisation}
\is{semantic constraint network!generation}
% \largerpage %long distance to get exp
Conceptualisation can now be viewed as a search process in which a
semantic constraint network that is suitable to achieve the
communicative goal it selected \citep{steels05planning} needs to be
constructed. Agents start with a library of primitive
constraints. These primitives are combined using heuristics to
construct networks that become more and more complex. In general,
these heuristics exploit the typical structure of the arguments of a
primitive and the type information of these arguments. The typical
structure is that the first argument is the target variable which can
be computed based on the values of the other arguments. Type
information is used to ensure that arguments that are linked are of
compatible type. More elaborate heuristics, which for example avoid
duplicate primitive constraints in one network, are also available.
An example of such a search process to identify a single object is
shown in \figref{f:conceptualisation} which starts from a library
of four primitive constraints: {\sc Equal-to-Context}, {\sc
Filter-Set-Prototype}, {\sc Filter-Set-Category} and {\sc
Select-Element}. The search process starts from a variable bound to
the topic. This variable is considered to be an open variable for
which a primitive with a compatible target argument needs to be
found. Only one primitive in the library fulfils this requirement:
{\sc Select-Element}. This primitive again introduces an open variable
for its second argument which is of type entity-set. In the next
expansion step of the search tree, three primitives are considered:
{\sc Equal-to-Context}, {\sc Filter-Set-Prototype} and {\sc
Filter-Set-Category} (nodes 2--4). Node 2 already contains a complete
network and can be evaluated. If the context contains only one object
this network succeeds and the conceptualisation process terminates. If
this is not the case, nodes 3 and 4 will be further expanded as they
again have an open variable of type entity-set. Both nodes can be
expanded with the three primitive constraints that have a compatible
target argument (nodes 5--10). If the topic can be identified using a
single {\sc Filter-Set-Prototype} (node 5) or {\sc
Filter-Set-Category} (node 8), conceptualisation has been
completed. If not, the search process continues.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.6\textwidth]{./frameworks/figures/conceptualisation.pdf}
\caption[Example of the conceptualisation process]{Example search
tree during conceptualisation, starting from a library of four
primitives: {\sc Equal-to-Context}, {\sc Filter-Set-Prototype},
{\sc Filter-Set-Category} and {\sc Select-Element}. The number
in each node of this tree reflects the order in which they are
    expanded, following a standard breadth-first heuristic. Node 5
corresponds to the network shown in \figref{f:network}, node
14 to the one in \figref{f:more-complex-network}.}
\label{f:conceptualisation}
\end{center}
\end{figure}
%\subsubsection{Chunking}
\label{s:irl-chunking}
\is{IRL!chunking}
\is{chunking|see{IRL}}
Earlier research on automatic programming in knowledge systems
\citep[see e.g.][]{barstow79knowledge} has shown that complex programs
can only be derived fast enough if there is a set of powerful building
blocks, and if the system progressively develops a library of rich
subprograms and templates that are re-used or further extended,
possibly aided by heuristics.
\newpage We have followed a similar strategy that stores previous solutions
(like nodes 2, 5, 8 and 14 in \figref{f:conceptualisation}) as a
\textsc{chunk} which can later be re-used like any other primitive in
the library. Some variables will be considered to be internal to the
chunk but one variable will have the special state of target argument
and the other external variables will become arguments to this
chunk. Thanks to chunking, the search for a solution becomes
progressively more efficient because more complex components are
readily available.
\subsection{Implementation of a primitive}
\is{primitive constraint!implementation}
Implementing a primitive involves the specification of its typed
arguments and a set of revision specifications which specify how to
deal with a particular pattern of open and bound arguments. In
general, all open arguments will need to get bound simultaneously, but
some patterns can be left unspecified so the primitive will get
suspended until more slots are bound. An example of a semantic
primitive is given below for {\sc Filter-Set-Prototype}.
\definition{Semantic primitive}{Filter-Set-Prototype}
\begin{explanation}{description}
Filters the entities in a source-set according to their similarity
to a certain prototype. Constrains the filtered-set to contain all
the elements from source-set that are similar to the prototype.
\end{explanation}
\begin{explanation}{arguments}
\verb+?filtered-set+ (of type entity-set) \\
\verb+?source-set+ (of type entity-set) \\
\verb+?prototype+ (of type prototype)
\end{explanation}
\begin{explanation}{revision specs}
\verb+?filtered-set ?source-set ?prototype+: recomputes the filtering using the provided prototype and validates or rejects the bindings accordingly \\
\verb+?filtered-set ?source-set+: tries to find a stored prototype that could perform the correct filtering and binds it to \verb+?prototype+ \\
\verb+?source-set+: computes the subsets of \verb+?source-set+ that
are similar to each stored prototype and returns pairwise bindings
for \verb+?prototype+ and \verb+?filtered-set+
\end{explanation}
\section{Construction Grammar using FCG}
\label{s:fcg}
\is{Fluid Construction Grammar|see{FCG}}
\is{FCG}
\subsection{Theoretical foundations}
The main linguistic theory that we adopt is the one of Construction
Grammar \citep{goldberg95constructions, goldberg03constructions}. This
theory assumes that each unit of linguistic knowledge is a
\textsc{construction} which is specified both in the syntactic and the
semantic domain. This contrasts sharply with a generative constituent
structure grammar which focusses only on syntax, and in which
semantics is supposed to be defined separately by translation rules
\citep{chomsky57syntactic}. Several variations of the theory
of Construction Grammar have been proposed, each focusing on a different
linguistic aspect. Radical Construction Grammar argues that
syntactical relations can not be studied autonomously and can only be
understood in relation to the constructions they appear in
\citep{croft01radical}. Embodied Construction Grammar focusses on the
semantic content of constructions, especially relating it to
embodiment and sensorimotor experiences \citep{bergen03embodied}.
In this book I will use another variation of construction grammar as the main linguistic framework:
Fluid Construction Grammar (FCG). FCG
is a fully operational implementation of construction grammar. It is
unification-based, similar to the widely used Head-Driven Phrase
Structure Grammar (HPSG) frameworks \citep{pollard94hpsg}. FCG is
designed to support experiments in artificial language evolution and
hence supports some unique features: reversibility and fluidity.
\textsc{Reversibility} refers to the idea that the same set of
constructions can be used for both production and parsing. This
feature not only allows the agents to use the same formalism and
set of constructions in both production and interpretation, but also
has proven crucial to writing invention operators for grammar. Before
uttering an utterance, a speaker can re-enter the utterance he is
about to say and check whether potential ambiguities arise. This can
be used as a trigger to add some additional grammar or syntax to the
language \citep{steels06how}.
Another feature that makes FCG suitable for experiments in artificial
language evolution is its \textsc{fluidity}, which states that agents
will produce and parse as much information as possible, even if their
linguistic knowledge is incomplete or conflicting. Incomplete
knowledge might lead to the invention of a new construction in which
the semantic information that could not be produced is associated with
the syntactic information that could not be parsed. Conflicting
knowledge might lead to multiple hypotheses about how to produce a
certain meaning or how to parse a certain utterance.
\subsection{Language processing}
During language processing, a \textsc{linguistic
structure}\is{linguistic structure} is being built up by applying
a series of rules to it. The application process is organised as a
search process in which each node consists of the linguistic structure
so far and the children of each node are the result of applying a rule
to the linguistic structure it contains. When more than one rule could
apply or a rule could apply in more than one way, this results in a
split in the application tree. This could for example occur when there
are some homonyms or synonyms in the linguistic knowledge of the
agent. Processing typically happens in a depth-first fashion and
continues until no rule can be applied to the structure built up so
far. An additional test might be provided to check whether this
structure is satisfactory. Heuristics could be used to favour one
branch over the other. An example of an application tree is shown in
\figref{f:fcg-search}.
\begin{figure}
\begin{center}
\includegraphics[width=.75\textwidth]{./frameworks/figures/fcg-search.pdf}
\caption[Application of a rule-set]{Typical language processing in
FCG is organised as a search process. A linguistic structure is
being built up by applying a series of rules to it. When two
(conflicting) rules could apply to the same structure, this
leads to a split in the application tree.}
\label{f:fcg-search}
\end{center}
\end{figure}
\subsection{Coupled feature structures}
\label{s:coupled-feature-structures}
\is{coupled feature structure}
The linguistic structure that is being built up is represented as
\textsc{coupled feature structures}. Each coupled feature structure
consists of two feature structures or \textsc{poles}: one is defined in
the semantic domain and the other in the syntactic domain. Each
feature structure consists of a list of units, which are typically
reflected in both poles. Each unit consists of a list of feature-value
pairs which represent linguistic information. Special features,
{\footnotesize\tt sem-subunits} in the semantic pole and {\footnotesize\tt syn-subunits}
in the syntactic pole, make it possible to specify hierarchical
relations between units and thus to build tree-like structures.
FCG is open-ended to the features it can handle, but the features that
are typically used are: {\footnotesize\tt meaning}, {\footnotesize\tt referent} and
{\footnotesize\tt sem-cat} in the semantic pole and {\footnotesize\tt form} and {\footnotesize\tt syn-cat}
in the syntactic pole. The {\footnotesize\tt meaning} feature refers to the
conceptual meaning of a certain unit, which can be expressed in any
formalism, including predicate logic or a semantic network in IRL. The
{\footnotesize\tt referent} is typically represented as a unique variable which is
bound to the (physical) entity that a unit (including all its
subunits) refers to. The {\footnotesize\tt form} feature contains all possible form
constraints, such as particular strings or word-order constraints
between its subunits. The {\footnotesize\tt syn-} and {\footnotesize\tt sem-cat} are
categories, either in the semantic or syntactic domain, that allow
other rules to specify which units to select for.
An example of a simplified linguistic structure for an utterance like
\textit{le ballon} is shown in \figref{f:cfs-le-ballon} and its
bracketed notation is shown below. The semantic pole is shown on the
left and the syntactic pole on the right to show the structural
similarity between both poles.
\footnotesize
\ltitle{Example linguistic structure for "le ballon"}
\begin{lstlisting}
((top-unit ((top-unit
(sem-subunits (det-np-unit))) (syn-subunits (det-np-unit))
(det-np-unit (det-np-unit
(sem-subunits (syn-subunits
(ballon-unit le-unit)) (ballon-unit le-unit))
(referent x) (form ((meets le-unit ballon-unit)))
(meaning ((grounded x))) (syn-cat
(sem-cat (object))) (determined-nounphrase)))
(le-unit (le-unit
(referent x) (form
(meaning ((unique x))) ((string le-unit "le")))
(sem-cat (selector))) (syn-cat (determiner)))
(ballon-unit (ballon-unit
(referent x) (form
(meaning ((ball x))) ((string ballon-unit "ballon")))
(sem-cat (prototype)))) (syn-cat (noun))))
\end{lstlisting}
\normalsize
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.8\textwidth]{./frameworks/figures/cfs-le-ballon.pdf}
\caption[Example coupled feature structure for \textit{le ballon}]{A
graphical representation of the linguistic structure for \textit{le
ballon}. The semantic pole is shown on the left, the syntactic
pole on the right. Both poles are structurally very similar, but
only contain features that are relevant in their domain.}
\label{f:cfs-le-ballon}
\end{center}
\end{figure}
\subsection{Application of a construction}
\largerpage
Now that we know how a linguistic structure is represented in FCG, we
can turn to the application of a construction to build up such a
structure. Like a linguistic structure, a construction is also
represented as a coupled-feature structure. The semantic pole of a
construction specifies how meaning has to be built up in parsing or
decomposed in production, and the syntactic pole how the form has to
be analysed in parsing or built in production. A construction also
typically contains more variables than an instantiated linguistic
structure, as it should be applicable to a wide range of such structures.
A construction is applied in three steps: a \textsc{matching phase}, a
\textsc{first merging phase} and a \textsc{second merging phase}. In
general, the matching phase checks whether the rule is applicable and
the two merging phases add new information to the linguistic structure
that is being built up. Although the matching phase is the strictest
one, the other phases can also block the application of a rule if
conflicting information is already present in the current
structure. More details on how exactly matching and merging are
implemented can be found in a background article
\citep{steels06unify}.
In production, it is the syntactic pole that is matched to the
syntactic pole of the current structure; in interpretation it is the
semantic pole that is matched to the current semantic pole. When the
matching phase has been successful, both poles of the rule are merged
into the current structure. The application of a rule is illustrated
in \figref{f:fcg-rule-application}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\textwidth]{./frameworks/figures/fcg-rule-application.pdf}
\caption[Application of a rule]{Rule application in FCG. Applying
a rule consists of three phases: a matching phase and two
merging phases. In production (left), the semantic pole is
matched to check whether a rule is applicable. In interpretation
(right), it is the syntactic pole that is matched. When the
matching phase has been successful, both poles are merged into
the coupled feature structure.}
\label{f:fcg-rule-application}
\end{center}
\end{figure}
\subsection{Structure building}
\is{FCG!structure building}
\is{structure building|see{FCG}}
\largerpage[-1]
The merging phases during the application of a construction can be
used to add new features to a unit or to add new values to a
particular feature of a particular unit, but more powerful structure
building operations are also possible. These operations can be used to
add new units to the structure, to change the hierarchy relations
between units and to move feature-value pairs from one unit to
another. All these operations are achieved through two operators: the
J-operator \citep{debeule05hierarchy} and the TAG-operator. The syntax
of these operators is shown below.
\footnotesize
\ltitle{Syntax of the TAG-Operator}
\begin{lstlisting}
(?unit
(TAG ?tag-variable (feature-name feature-value)))
\end{lstlisting}
\ltitle{Syntax of the J-operator}
\begin{lstlisting}
((J ?focus-unit ?parent-unit (?child-unit-1 ... ?child-unit-n))
?tag-variable-1
...
?tag-variable-n
(feature-name-1 feature-value-1)
...
(feature-name-n feature-value-n))
\end{lstlisting}
\normalsize
\newpage
Units that are marked by the J-operator, or \textsc{J-units} for short,
are ignored in the matching phase, but receive special treatment
during the merging phase. The TAG-operator allows a construction to
bind a certain variable, {\footnotesize\tt ?tag-variable}, to a certain
feature-value pair. Whenever this variable appears inside a J-unit of
the same rule, the bound feature-value pair will be moved to this
J-unit. The special treatment of a J-unit in the merging phase is as
follows:
\begin{enumerate}
\item If {\footnotesize\tt ?focus-unit} is bound to a unit-name in the current
structure, it will consider this unit to be in focus; if this
variable is unbound, a new unit is created that will be in focus of
this J-unit.
\item The focus-unit will become a subunit of {\footnotesize\tt ?parent-unit} and
the optional \linebreak{\footnotesize\tt ?child-units} will become children of the
focus-unit.
  \item The listed feature-value pairs will be merged into the
focus-unit.
\item The feature-value pairs that are bound to the
{\footnotesize\tt ?tag-variables} will be moved from their original unit to the
focus-unit.
\end{enumerate}
\subsection{Linking through variable equalities}
\is{linking|see{FCG}}
\is{FCG!linking}
Once relations between several entities can be expressed in language,
hearers face an additional problem in figuring out what these
relations are. This is typically considered to be conveyed through
grammar. For example in a sentence like \textit{Jack hits Jill} English
grammar clearly conveys it is Jack who is the agent and Jill the
unfortunate recipient of the event, unlike the sentence \textit{Jill hits
Jack} in which the roles are reversed. Another example would be \textit{the
big block and the red ball} in which the hearer would need to figure
out it is the block which is big and the ball which is red. This
problem has been identified as the linking problem
\citep{steels05linking}.
In FCG this problem has been solved by first assuming that variables
introduced by different rules are different, but can be made equal
during the application of other grammatical rules. Let us consider the
phrase \textit{red ball} and assume the meaning is represented in predicate
logic. In parsing, the lexical constructions would introduce two
predicates, ``red(?x)'' and ``ball(?y)'', each introducing a different
variable. Another grammatical construction, which specifies that all
predicates referred to by adjectives and nouns that are part of the
same noun phrase should share the same variable, will make these
variables equal. The application of this rule transforms the
interpreted meaning in ``red(?x)'' and ``ball(?x)'' and hence solves the
linking problem for this small example.
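Schematically, the effect of this grammatical construction on the
parsed meaning of \textit{red ball} can be summarised as follows, using
the predicate notation of the example above:

\footnotesize
\ltitle{Meaning of "red ball" before and after linking}
\begin{lstlisting}
before linking: red(?x), ball(?y)
after linking: red(?x), ball(?x)
\end{lstlisting}
\normalsize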
\subsection{Application of an example construction}
I will now show the application of an example
Noun-Adjective-construction in interpretation. It illustrates both the
structure building operators and the linking problem. It will search
for two units, one of syntactical category ``noun'' and the other
of syntactical category ``adjective'' that occur next to each other
in the utterance parsed so far. When it has found two such units, it
introduces an intermediary unit and makes the variables of the
predicates in these two units equal through the referents of their
units. The coupled feature structure before and after application of
the construction is shown in \figref{f:nounadj-application}.
\begin{figure}
\begin{center}
\includegraphics[width=.7\textwidth]{./frameworks/figures/nounadj-application.pdf}
\caption[Coupled feature structures before and after the
application of an example construction]{Coupled feature structures
before (top) and after (bottom) the application of the
Noun-Adjective construction in interpretation. It introduces a
new unit which combines a Noun and an Adjective unit and makes
the variables of their predicates equal through their
referents. The coupled feature structures are now mapped on one
structure which both contains semantic and syntactic
information.}
\label{f:nounadj-application}
\end{center}
\end{figure}
I will now step through the application of this rule in more
detail. In interpretation the utterance is de-rendered into a set of
form constraints: a set of strings, one for each word, and a set of
\emph{meets} constraints between each consecutive pair of words. An example
for \textit{le ballon rouge} is given in the initial structure below. Note
that the semantic pole and syntactic pole are now shown under each
other and separated by a double arrow.
\footnotesize
\ltitle{Initial Structure}
\begin{lstlisting}
((top-unit))
<-->
((top-unit
(form
((string le-unit "le")
(string ballon-unit "ballon")
(string rouge-unit "rouge")
(meets le-unit ballon-unit)
(meets ballon-unit rouge-unit)))))
\end{lstlisting}
\normalsize
Next the lexical constructions apply, which introduce for each string
a different unit (using the J-operator), which on the semantic side
introduces the corresponding meaning predicate together with their
referent and semantical category and on the syntactic side introduces
the appropriate syntactical categories. The coupled feature structure
after application of lexical constructions is shown below in bracketed
notation and in \figref{f:nounadj-application} (top).
\footnotesize
\ltitle{Structure before application of the Noun-Adjective construction}
\begin{lstlisting}
((top-unit
(sem-subunits (rouge-unit le-unit ballon-unit)))
(rouge-unit
(meaning ((red ?x)))
(referent ?x)
(sem-cat ((pom category))))
(ballon-unit
(meaning ((ball ?y)))
(referent ?y)
(sem-cat ((pom prototype))))
(le-unit
(meaning ((unique ?z)))
(referent ?z)
(sem-cat ((pom selector)))))
<-->
((top-unit
(syn-subunits (rouge-unit le-unit ballon-unit))
(form
((meets ballon-unit rouge-unit)
(meets le-unit ballon-unit))))
(rouge-unit
(form ((string rouge-unit "rouge")))
(syn-cat ((pos adjective))))
(le-unit
(form ((string le-unit "le")))
(syn-cat ((pos determiner))))
(ballon-unit
(form ((string ballon-unit "ballon")))
(syn-cat ((pos noun)))))
\end{lstlisting}
\normalsize
We are now ready to apply the Noun-Adjective construction which is
shown below. The first phase is the matching phase and as we are in
interpretation, this means the syntactic pole of the construction will
be matched against the syntactic pole of the current coupled feature
structure, which succeeds. There is only one adjective-unit and one
noun-unit, so this results one set of possible bindings:
{\footnotesize\tt ((?parent-unit . top-unit) (?noun-unit . ballon-unit)
(?adjective-unit . rouge-unit))}.
Matching succeeded, so we can now continue to the first merge phase,
in which the syntactic pole of the construction is merged into the
current feature structure. The syntactic pole contains only one J-unit
which will now be applied. As there is no binding for
{\footnotesize\tt ?adj-noun-unit} yet, it will create a new unit that will be a
child of {\footnotesize\tt ?parent-unit} (which in this application will be {\footnotesize\tt top-unit} as can be seen in the bindings of the unification
phase) and which will have two children: {\footnotesize\tt ballon-unit} and
{\footnotesize\tt rouge-unit}. The feature-value pairs specified in the J-unit,
namely the syntactical category noun, will be added to this
unit. Finally the tag-variable {\footnotesize\tt ?form} will be handled, which
moves the meets constraint between the {\footnotesize\tt ballon-unit} and the
{\footnotesize\tt rouge-unit} from {\footnotesize\tt top-unit} to the newly created unit.
\footnotesize
\ltitle{The Noun-Adjective construction}
\begin{lstlisting}
((?parent-unit
(sem-subunits (== ?noun-unit ?adjective-unit)))
(?noun-unit
(referent ?x)
(sem-cat (==1 (pom prototype))))
(?adjective-unit
(referent ?x)
(sem-cat (==1 (pom category))))
((J ?adj-noun-unit ?parent-unit (?noun-unit ?adjective-unit))
(referent ?x)
(sem-cat (==1 (pom prototype)))))
<-->
((?parent-unit
(syn-subunits (== ?noun-unit ?adjective-unit))
(tag
?form
(form (== (meets ?noun-unit ?adjective-unit)))))
(?noun-unit
(syn-cat (==1 (pos noun))))
(?adjective-unit
(syn-cat (==1 (pos adjective))))
((J ?adj-noun-unit ?parent-unit (?noun-unit ?adjective-unit))
?form
(syn-cat (==1 (pos noun)))))
\end{lstlisting}
\normalsize
In the final merging phase, the semantic pole of the construction is
merged into the semantic pole of the current feature structure. In
addition to creating a new unit similar to the one created in the syntactic
pole, it ensures the variables of the predicates for ball and red will
be made equal. In the current structure they are available as the
referent of the {\footnotesize\tt ballon-unit} and the {\footnotesize\tt rouge-unit}, which
are equal in the {\footnotesize\tt ?adjective-unit} and the {\footnotesize\tt ?noun-unit} of
the Noun-Adjective construction. The merging phase will ensure that
these variables are equalised in the resulting feature structure,
which is shown below and in \figref{f:nounadj-application}
(bottom).
\footnotesize
\ltitle{Structure after application in interpretation}
\begin{lstlisting}
((top-unit
(sem-subunits (noun-adj-unit le-unit)))
(noun-adj-unit
(sem-subunits (rouge-unit ballon-unit))
(referent ?y)
(sem-cat ((pom prototype))))
(le-unit
(meaning ((unique ?z)))
(referent ?z)
(sem-cat ((pom selector))))
(ballon-unit
(meaning ((ball ?y)))
(sem-cat ((pom prototype)))
(referent ?y))
(rouge-unit
(meaning ((red ?y)))
(sem-cat ((pom category)))
(referent ?y)))
<-->
((top-unit
(syn-subunits (noun-adj-unit le-unit))
(form ((meets le-unit noun-adj-unit))))
(noun-adj-unit
(form ((meets ballon-unit rouge-unit)))
(syn-subunits (rouge-unit ballon-unit))
(syn-cat ((pos noun))))
(rouge-unit
(form ((string rouge-unit "rouge")))
(syn-cat ((pos adjective))))
(le-unit
(form ((string le-unit "le")))
(syn-cat ((pos determiner))))
(ballon-unit
(form ((string ballon-unit "ballon")))
(syn-cat ((pos noun)))))
\end{lstlisting}
\normalsize
\documentclass[12pt,english,ignorenonframetext,]{beamer}
%%%%%%%%%%%%%%%
%% Beamer theme
% choose one from http://deic.uab.es/~iblanes/beamer_gallery/
% or http://www.hartwork.org/beamer-theme-matrix/
% \usetheme{Warsaw}
\usetheme{CambridgeUS}
%%%%%%%%%%%%%%%%%%%%%%
%% Beamer color theme
%% default albatross beaver beetle crane dolphin dove fly lily
%% orchid rose seagull seahorse whale wolverine
%\usecolortheme{seahorse} %% very lighty
\usecolortheme{dolphin} %% nice blue
\usecolortheme{orchid} %% dark red ?
\usecolortheme{whale} %% black and blue as Warsaw
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Define your own colors
\definecolor{blackblue}{RGB}{19,19,59}  % rgb(19,19,59)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Change the theme
%\setbeamercolor{alerted text}{fg=orange}
%\setbeamercolor{background canvas}{bg=white}
%\setbeamercolor{block body alerted}{bg=normal text.bg!90!black}
%\setbeamercolor{block body}{bg=normal text.bg!90!black}
%\setbeamercolor{block body example}{bg=normal text.bg!90!black}
%\setbeamercolor{block title alerted}{use={normal text,alerted text},fg=alerted text.fg!75!normal text.fg,bg=normal text.bg!75!black}
%\setbeamercolor{block title}{bg=blue}
%\setbeamercolor{block title example}{use={normal text,example text},fg=example text.fg!75!normal text.fg,bg=normal text.bg!75!black}
%\setbeamercolor{fine separation line}{}
\setbeamercolor{frametitle}{fg=black}
%\setbeamercolor{item projected}{fg=black}
%\setbeamercolor{normal text}{bg=black,fg=yellow}
%\setbeamercolor{palette sidebar primary}{use=normal text,fg=normal text.fg}
%\setbeamercolor{palette sidebar quaternary}{use=structure,fg=structure.fg}
%\setbeamercolor{palette sidebar secondary}{use=structure,fg=structure.fg}
%\setbeamercolor{palette sidebar tertiary}{use=normal text,fg=normal text.fg}
%\setbeamercolor{section in sidebar}{fg=brown}
%\setbeamercolor{section in sidebar shaded}{fg= grey}
\setbeamercolor{separation line}{}
%\setbeamercolor{sidebar}{bg=red}
%\setbeamercolor{sidebar}{parent=palette primary}
%\setbeamercolor{structure}{bg=black, fg=green}
%\setbeamercolor{subsection in sidebar}{fg=brown}
%\setbeamercolor{subsection in sidebar shaded}{fg= grey}
%\setbeamercolor{title}{fg=blackblue}
%\setbeamercolor{titlelike}{fg=blackblue}
%%%%%%%%%%%%%%%%%%%%%%%
%% Other beamer options
%\setbeamercovered{transparent}
% Leaves not-yet-revealed text in grey (when using overlay specifications such as <1,2> or <4-9>).
%\setbeamercolor{normal text}{fg=black,bg=white}
%%%%%%%%%%%%%%%%%%%%%%%
%% Change Beamer fonts
% \usefonttheme{default}
% \usefonttheme[onlymath]{serif}
\usefonttheme{serif}
\setbeamerfont{title}{family=\rm}
\setbeamerfont{titlelike}{family=\rm}
\setbeamerfont{frametitle}{family=\rm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% innertheme
%% rectangles circles inmargin rounded
% \useinnertheme{rounded} % XXX My preference
\useinnertheme{circles} % XXX
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% outertheme
%% infolines miniframes shadow sidebar smoothbars smoothtree split tree
%\useoutertheme{infolines}
%% No navigation symbol.
\setbeamertemplate{navigation symbols}{}
\beamertemplatenavigationsymbolsempty
% XXX Add a background image to the slides
\usepackage{tikz}
% \setbeamertemplate{background}{\includegraphics[width=\paperwidth,height=\paperheight,keepaspectratio]{IETR.jpg}}
% \setbeamertemplate{background}{{\centering\begin{tikzpicture}\node[opacity=0.15]{\includegraphics[width=0.98\paperwidth]{IETR_et_partenaires_IETR.png}};\end{tikzpicture}}}
% Other options
%\setbeamertemplate{footline}[page number]
\beamertemplateballitem
\setbeamertemplate{itemize item}[square]
\setbeamertemplate{caption}[numbered]
\setbeamertemplate{caption label separator}{: }
\setbeamercolor{caption name}{fg=normal text.fg}
\beamertemplatenavigationsymbolsempty
\usepackage{lmodern}
\usepackage{color}
\newcommand{\urlb}[1]{\textcolor{blue}{\url{#1}}}
%% Color definition
\usepackage{xcolor}
\definecolor{bleu}{RGB}{0,0,204} % rgb(0,0,204)
\definecolor{violet}{RGB}{102,0,204} % rgb(102,0,204)
\definecolor{darkgreen}{RGB}{0,100,0} % rgb(0,100,0)
\definecolor{gold}{RGB}{255,184,0} % rgb(255,184,0)
\definecolor{rouge}{RGB}{204,0,0} % rgb(204,0,0)
\usepackage{amssymb,amsmath}
\usepackage{bbm,bm} % bold maths symbols
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\usepackage{macrosText} % FIXME remove
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[shorthands=off,main=english]{babel}
\else
\usepackage{polyglossia}
\setmainlanguage[]{}
\fi
\newif\ifbibliography
\hypersetup{
pdftitle={MAB Learning in IoT Networks},
pdfauthor={Lilian Besson, Christophe Moy, Émilie Kaufmann},
pdfborder={0 0 0},
breaklinks=true}
% \urlstyle{same} % don't use monospace font for urls
% Code embedding.
\usepackage{palatino} % Use the Palatino font % XXX remove if it is ugly ?
% Prevent slide breaks in the middle of a paragraph:
\widowpenalties 1 10000
\raggedbottom
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% https://tex.stackexchange.com/a/2559/
\newcommand{\backupbegin}{
\newcounter{framenumberappendix}
\setcounter{framenumberappendix}{\value{framenumber}}
}
\newcommand{\backupend}{
\addtocounter{framenumberappendix}{-\value{framenumber}}
\addtocounter{framenumber}{\value{framenumberappendix}}
}
\title{MAB Learning in IoT Networks}
\subtitle{Decentralized Multi-Player Multi-Arm Bandits}
\author[Lilian Besson]{\textbf{Lilian Besson} \newline \emph{Advised by} \and Christophe Moy
\and Émilie Kaufmann}
\institute[CentraleSupélec \& Inria]{PhD Student \newline Team SCEE, IETR, CentraleSupélec, Rennes
\newline \& Team SequeL, CRIStAL, Inria, Lille}
\date[SCEE Seminar - 23/11/17]{SCEE Seminar - 23 November 2017}
% For \justifying command, see https://tex.stackexchange.com/a/148696/
\usepackage{ragged2e}
\addtobeamertemplate{frame begin}{}{\justifying}
\addtobeamertemplate{block begin}{}{\justifying}
\addtobeamertemplate{block alerted begin}{}{\justifying}
\addtobeamertemplate{block example begin}{}{\justifying}
\addtobeamertemplate{itemize body begin}{}{\justifying}
\addtobeamertemplate{itemize item}{}{\justifying}
\addtobeamertemplate{itemize subitem}{}{\justifying}
\addtobeamertemplate{itemize subsubitem}{}{\justifying}
\addtobeamertemplate{enumerate body begin}{}{\justifying}
\addtobeamertemplate{enumerate item}{}{\justifying}
\addtobeamertemplate{enumerate subitem}{}{\justifying}
\addtobeamertemplate{enumerate subsubitem}{}{\justifying}
\addtobeamertemplate{description body begin}{}{\justifying}
\addtobeamertemplate{description item}{}{\justifying}
\begin{document}
\justifying
\begin{frame}[plain]
\titlepage
% XXX manual inclusion of logos
\begin{center}
\includegraphics[height=0.16\textheight]{../common/LogoIETR.png}
\includegraphics[height=0.16\textheight]{../common/LogoCS.png}
\includegraphics[height=0.16\textheight]{../common/LogoInria.jpg}
\end{center}
\end{frame}
\section*{\hfill{}CentraleSupélec Rennes \& Inria Lille\hfill{}}
\subsection*{\hfill{}Team {:} SCEE @ IETR \& SequeL @ CRIStAL\hfill{}}
\section{\hfill{}1. Introduction and motivation\hfill{}}
\subsection{\hfill{}1.a. Objective\hfill{}}
\begin{frame}{Motivation: \emph{Internet of Things} problem}
A \emph{lot} of IoT devices want to access a single base station.
\begin{itemize}
\tightlist
\item
Insert them in a possibly \textbf{crowded wireless network}.
\item
With a protocol \textbf{slotted in both time and frequency}.
\item
Each device has a \textbf{low duty cycle} (a few messages per day).
\end{itemize}
\pause
\begin{block}{Goal}
\begin{itemize}
\tightlist
\item
Maintain a \textbf{good Quality of Service}.
\item
\textbf{Without} centralized supervision!
\end{itemize}
\pause
\end{block}
\begin{block}{How?}
\begin{itemize}
\tightlist
\item
Use \textbf{learning algorithms}: devices will learn on which
frequency they should talk!
\end{itemize}
\end{block}
\end{frame}
\subsection{\hfill{}1.b. Outline and references\hfill{}}
\begin{frame}{Outline and references}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Introduction and motivation
\item
Model and hypotheses
\item
  Baseline algorithms: a naive approach and an efficient centralized
  approach to compare against
\item
Two Multi-Armed Bandit algorithms : UCB, TS
\item
Experimental results
\item
An easier model with theoretical results
\item
Perspectives and future works
\end{enumerate}
\vfill{}
\begin{footnotesize}
Main references are my recent articles (on HAL):
\begin{itemize}
\item \emph{Multi-Armed Bandit Learning in IoT Networks and non-stationary settings}, Bonnefoi, Besson, Moy, Kaufmann, Palicot. CrownCom 2017,
\item \emph{Multi-Player Bandits Models Revisited}, Besson, Kaufmann. arXiv:1711.02317,
\end{itemize}
\end{footnotesize}
\end{frame}
\section{\hfill{}2. Model and hypotheses\hfill{}}
\subsection{\hfill{}2.a. First model\hfill{}}
\begin{frame}{First model}
\begin{itemize}
\tightlist
\item
Discrete time \(t\geq1\) and \(K\) radio channels (\emph{e.g.}, 10)
\hfill{} (\emph{known})
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[height=0.35\textheight]{crowncom/protocol.eps}
\caption{\small{Protocol in time and frequency, with an \textcolor{darkgreen}{\emph{Acknowledgement}}.}}
\end{figure}
\begin{itemize}
\tightlist
\item
\(D\) \textbf{dynamic} devices try to access the network
\emph{independently}
\item
\(S=S_1+\dots+S_{K}\) \textbf{static} devices occupy the network :
\newline
\(S_1,\dots,S_{K}\) in each channel \hfill{} (\emph{unknown})
\end{itemize}
\end{frame}
\subsection{\hfill{}2.b. Hypotheses\hfill{}}
\begin{frame}[fragile,allowframebreaks]{Hypotheses}
\begin{block}{Emission model}
\begin{itemize}
\tightlist
\item
Each device has the same \emph{low} emission probability: \newline
each step, each device sends a packet with probability \(p\).
\newline
\hfill{}\small{(this gives a duty cycle proportional to $1/p$)}
\end{itemize}
\end{block}
\begin{block}{Background traffic}
\begin{itemize}
\tightlist
\item
Each static device uses only one channel.
\item
  Their distribution over the channels is fixed in time.
\end{itemize}
\begin{quote}
\(\implies\) Background traffic, bothering the dynamic devices!
\end{quote}
\end{block}
\begin{block}{Dynamic radio reconfiguration}
\begin{itemize}
\tightlist
\item
Each \textbf{dynamic device decides the channel it uses to send every
packet}.
\item
  It has the memory and computational capacity to implement a simple
  \textbf{decision algorithm}.
\end{itemize}
\end{block}
\begin{block}{Problem}
\begin{itemize}
\tightlist
\item
\emph{Goal} : \emph{minimize packet loss ratio} (\(=\) maximize number
of received \texttt{Ack}) in a \emph{finite-space discrete-time
Decision Making Problem}.
\item
\emph{Solution ?} \textbf{Multi-Armed Bandit algorithms},
\textbf{decentralized} and used \textbf{independently} by each device.
\end{itemize}
\end{block}
\end{frame}
\section{\hfill{}3. Baseline algorithms\hfill{}}
\subsection{\hfill{}3.a. A naive strategy : uniformly random access\hfill{}}
\begin{frame}{A naive strategy : uniformly random access}
\begin{itemize}
\item
  \textbf{Uniformly random access}: dynamic devices choose their channel
  uniformly at random from the pool of \(K\) channels.
\item
Natural strategy, dead simple to implement.
\item
  Simple analysis, in terms of \textbf{successful transmission
probability} (for every message from dynamic devices) :
\end{itemize}
\begin{small} \begin{align*}
\mathbb{P}(\text{success}|\text{sent}) = \sum_{i=1}^{K} \underbrace{(1 - p / K)^{D-1}}_{\text{No other dynamic device}} \times \underbrace{(1-p)^{S_i}}_{\text{No static device}} \times\; \frac{1}{K}.
\end{align*} \end{small}
\pause
\begin{block}{No learning}
\begin{itemize}
\tightlist
\item
Works fine only if all channels are similarly occupied,\newline
but \textbf{it cannot learn} to exploit the best (more free)
channels.
\end{itemize}
\end{block}
\end{frame}
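\begin{frame}[fragile]{Uniformly random access: a numerical sketch}
\footnotesize
A minimal Python sketch of the successful transmission probability above.
The parameter values correspond to the $10\%$ scenario of Section 5, but the
even split of the static devices is an illustrative assumption only.
\scriptsize
\begin{verbatim}
def p_success_random(p, K, D, S):
    """S = [S_1, ..., S_K]: static devices per channel."""
    return sum((1 - p / K) ** (D - 1)   # no other dynamic device
               * (1 - p) ** S[i]        # no static device on channel i
               * (1.0 / K)              # channel i chosen w.p. 1/K
               for i in range(K))

# example: K = 10 channels, 1000 dynamic + 9000 static devices, p = 1e-3
K, D, p = 10, 1000, 1e-3
S = [900] * K                           # even split (assumption)
print(p_success_random(p, K, D, S))
\end{verbatim}
\end{frame}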
\subsection{\hfill{}3.b. Optimal centralized strategy\hfill{}}
\begin{frame}[allowframebreaks]{Optimal centralized strategy}
\begin{itemize}
\tightlist
\item
If an oracle can decide to affect \(D_i\) dynamic devices to channel
\(i\), the \textbf{successful transmission probability} is:
\vspace*{-10pt}
\begin{small} \begin{align*}
\mathbb{P}(\text{success}|\text{sent}) = \sum_{i=1}^{K} \underbrace{(1 - p)^{D_i - 1}}_{\;\;D_i - 1 \;\text{others}\;\;} \times \underbrace{(1 - p)^{S_i}}_{\;\;\text{No static device}\;\;} \times \underbrace{ D_i / D }_{\;\;\text{Sent in channel}\; i}.
\end{align*} \end{small}
\item
The oracle has to solve this \textbf{optimization problem}:
\vspace*{-5pt}
\begin{small} \begin{equation*} \begin{cases}
\underset{D_1,\dots,D_{K}}{\arg\max}\;\;\; & \sum_{i=1}^{K} D_i (1 - p)^{S_i + D_i -1}\\
\text{such that}\;\;\; & \sum_{i=1}^{K} D_i = D \; \text{and} \; D_i \geq 0, \; \; \forall 1 \leq i \leq K .
\end{cases} \end{equation*} \end{small}
\item
We solved this quasi-convex optimization problem with \emph{Lagrange
multipliers}, only numerically.
\item
\(\implies\) Very good performance, maximizing the transmission rate
of all the \(D\) dynamic devices
\end{itemize}
\begin{block}{But unrealistic}
But \textbf{not achievable in practice}: no centralized control and no
oracle!
\end{block}
\begin{block}{Now let us see \emph{realistic decentralized approaches}}
\(\hookrightarrow\) Machine Learning ? \newline
\hspace*{30pt}\(\hookrightarrow\) Reinforcement Learning ? \newline
\hspace*{60pt} \(\hookrightarrow\) \emph{Multi-Armed Bandit} !
\end{block}
\end{frame}
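\begin{frame}[fragile]{Oracle allocation: a numerical sketch}
\footnotesize
One simple way to reproduce such a numerical solution: solve the continuous
relaxation of the problem above with \texttt{scipy} (an assumption of this
sketch; the slides' own results were obtained with Lagrange multipliers).
The values of \(p\), \(S_i\) and \(D\) below are illustrative only.
\scriptsize
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def oracle_allocation(p, S, D):
    S = np.asarray(S, dtype=float)
    K = len(S)
    def neg_obj(Di):                 # - sum_i D_i (1-p)^(S_i + D_i - 1)
        return -np.sum(Di * (1 - p) ** (S + Di - 1))
    cons = ({'type': 'eq', 'fun': lambda Di: Di.sum() - D},)
    res = minimize(neg_obj, np.full(K, D / K), method='SLSQP',
                   bounds=[(0, D)] * K, constraints=cons)
    return res.x                     # real-valued allocation D_1..D_K

print(oracle_allocation(p=1e-3, S=[1800, 1200, 900, 600, 300], D=1200))
\end{verbatim}
\end{frame}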
\section{\hfill{}4. Two Multi-Armed Bandit algorithms : UCB, TS\hfill{}}
\subsection{\hfill{}4.1. Multi-Armed Bandit formulation\hfill{}}
\begin{frame}[fragile]{Multi-Armed Bandit formulation}
A dynamic device tries to collect \emph{rewards} when transmitting :
\begin{itemize}
\tightlist
\item
it transmits following a Bernoulli process \newline
(probability \(p\) of transmitting at each time step \(t\)),
\item
chooses a channel \(A(\tau) \in \{1,\dots,K\}\),
\begin{itemize}
\tightlist
\item
if \texttt{Ack} (no collision) \hspace*{10pt} \(\implies\) reward
\(r_{A(\tau)} = 1\),
\item
if collision (no \texttt{Ack}) \hspace*{10pt} \(\implies\) reward
\(r_{A(\tau)} = 0\).
\end{itemize}
\end{itemize}
\begin{block}{Reinforcement Learning interpretation}
Maximize transmission rate \(\equiv\) \textbf{maximize cumulated
rewards}
\[\max_{\text{algorithm}\;A} \;\; \sum_{\tau=1}^{\text{horizon}} r_{A(\tau)}.\]
\end{block}
\end{frame}
\subsection{\hfill{}4.2. Upper Confidence Bound algorithm : UCB\hfill{}}
\begin{frame}{Upper Confidence Bound algorithm (\(\mathrm{UCB}_1\))}
Each dynamic device keeps track of \(\tau\), its number of sent packets;
\(T_k(\tau)\), the number of selections of channel \(k\); and \(X_k(\tau)\),
the number of successful transmissions in channel \(k\).
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
For the first \(K\) steps (\(\tau=1,\dots,K\)), try each channel
\emph{once}.
\item
  Then for the next steps \(\tau > K\):
\begin{itemize}
\tightlist
\item
Compute the index
\(g_k(\tau) := \underbrace{\frac{X_k(\tau)}{T_k(\tau)}}_{\text{Mean}\; \widehat{\mu_k}(\tau)} + \underbrace{\sqrt{\frac{\log(\tau)}{2 T_k(\tau)}},}_{\text{Upper Confidence Bound}}\)
\item
Choose channel
\(A(\tau) = \mathop{\arg\max}\limits_{k} \; g_k(\tau)\),
\item
Update \(T_k(\tau+1)\) and \(X_k(\tau+1)\).
\end{itemize}
\end{enumerate}
\vfill{}\hfill{}\tiny{\textcolor{gray}{References: [Lai \& Robbins, 1985], [Auer et al, 2002], [Bubeck \& Cesa-Bianchi, 2012]}}
\end{frame}
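\begin{frame}[fragile]{$\mathrm{UCB}_1$: a minimal sketch in Python}
\footnotesize
A minimal sketch of the index policy above, for one dynamic device
(variable names are illustrative, not those of the actual simulator).
\scriptsize
\begin{verbatim}
import math

class UCB1:
    def __init__(self, K):
        self.K, self.t = K, 0
        self.T = [0] * K     # T_k: selections of channel k
        self.X = [0] * K     # X_k: successful transmissions on k

    def choose(self):
        self.t += 1
        if self.t <= self.K:             # try each channel once first
            return self.t - 1
        g = [self.X[k] / self.T[k]
             + math.sqrt(math.log(self.t) / (2 * self.T[k]))
             for k in range(self.K)]
        return max(range(self.K), key=lambda k: g[k])

    def update(self, k, ack):            # ack = 1 if Ack received, else 0
        self.T[k] += 1
        self.X[k] += ack
\end{verbatim}
\end{frame}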
\subsection{\hfill{}4.3. Thompson Sampling : Bayesian index policy\hfill{}}
\begin{frame}[fragile]{Thompson Sampling : Bayesian approach}
A dynamic device assumes a stochastic hypothesis on the background
traffic, modeled as Bernoulli distributions.
\begin{itemize}
\item
Rewards \(r_k(\tau)\) are assumed to be \emph{i.i.d.} samples from a
Bernoulli distribution \(\mathrm{Bern}(\mu_k)\).
\item
  A \textbf{Beta Bayesian posterior} is kept on the mean
  availability \(\mu_k\):
  \(\mathrm{Beta}(1 + X_k(\tau), 1 + T_k(\tau) - X_k(\tau))\).
\item
  Starts with a \emph{uniform prior}:
  \(\mathrm{Beta}(1, 1) \sim \mathcal{U}([0,1])\).
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  At each step \(\tau \geq 1\), draw a sample from each posterior
  \(i_k(\tau) \sim \mathrm{Beta}(a_k(\tau), b_k(\tau))\),
\item
Choose channel \(A(\tau) = \mathop{\arg\max}\limits_k \; i_k(\tau)\),
\item
Update the posterior after receiving \texttt{Ack} or if collision.
\end{enumerate}
\vfill{}\hfill{}\tiny{\textcolor{gray}{References: [Thompson, 1933], [Kaufmann et al, 2012]}}
\end{frame}
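\begin{frame}[fragile]{Thompson Sampling: a minimal sketch in Python}
\footnotesize
The same device, now with a Beta posterior per channel (again an
illustrative sketch, not the code used for the experiments of Section 5).
\scriptsize
\begin{verbatim}
import random

class ThompsonSampling:
    def __init__(self, K):
        self.a = [1] * K      # Beta(1, 1) = uniform prior on mu_k
        self.b = [1] * K

    def choose(self):
        K = len(self.a)
        samples = [random.betavariate(self.a[k], self.b[k])
                   for k in range(K)]
        return max(range(K), key=lambda k: samples[k])

    def update(self, k, ack):  # ack = 1 if Ack received, 0 if collision
        self.a[k] += ack
        self.b[k] += 1 - ack
\end{verbatim}
\end{frame}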
\section{\hfill{}5. Experimental results\hfill{}}
\subsection{\hfill{}5.1. Experiment setting\hfill{}}
\begin{frame}{Experimental setting}
\begin{block}{Simulation parameters}
\begin{itemize}
\tightlist
\item
\(K = 10\) channels,
\item
\(S + D = 10000\) devices \textbf{in total}. Proportion of dynamic
devices \(D/(S+D)\) varies,
\item
\(p = 10^{-3}\) probability of emission, for all devices,
\item
Horizon \(= 10^6\) time slots, \hfill{} (\(\simeq 1000\) messages
\(/\) device)
\item
  Various settings for the distribution \((S_1,\dots,S_{K})\) of the static devices over the channels.
\end{itemize}
\end{block}
\begin{block}{What do we show \hfill{} (for static \(S_i\))}
\begin{itemize}
\tightlist
\item
After a short learning time, MAB algorithms are almost as efficient as
the oracle solution !
\item
Never worse than the naive solution.
\item
Thompson sampling is more efficient than UCB.
\item
Stationary alg. outperform adversarial ones (UCB \(\gg\) Exp3).
\end{itemize}
\end{block}
\end{frame}
\subsection{\hfill{}5.2. First result: $10\%$\hfill{}}
\begin{frame}{\(10\%\) of dynamic devices}
\begin{figure}[h!]
\centering
\includegraphics[height=0.74\textheight]{crowncom/10intelligent.eps}
\caption{\small{$10\%$ of dynamic devices: $7\%$ gain.}}
\end{figure}
\end{frame}
\subsection{\hfill{}5.2. Second result: $30\%$\hfill{}}
\begin{frame}{\(30\%\) of dynamic devices}
\begin{figure}[h!]
\centering
\includegraphics[height=0.74\textheight]{crowncom/30intelligent.eps}
\caption{\small{$30\%$ of dynamic devices: $3\%$ gain, but not much more is possible.}}
\end{figure}
\end{frame}
\subsection{\hfill{}5.3. Growing proportion of dynamic devices\hfill{}}
\begin{frame}{Dependence on \(D/(S+D)\)}
\begin{figure}[h!]
\centering
\includegraphics[height=0.65\textheight]{crowncom/perf_learning.eps}
\caption{\small{\emph{Almost optimal}, for any proportion of dynamic devices, \emph{after a short learning time}. Up to $16\%$ gain over the naive approach!}}
\end{figure}
\end{frame}
\section{\hfill{}6. An easier model\hfill{}}
\begin{frame}{Section 6}
\begin{center}
A brief presentation of a different approach...
Theoretical results for an easier model
\end{center}
\end{frame}
\subsection{\hfill{}6.1. Presentation of the model\hfill{}}
\begin{frame}[fragile]{An easier model}
\begin{block}{Easy case}
\begin{itemize}
\tightlist
\item
\(M \leq K\) dynamic devices \textbf{always communicating} (\(p=1\)).
\item
Still interesting: many mathematical and experimental results!
\end{itemize}
\pause
\end{block}
\begin{block}{Two variants}
\begin{itemize}
\item
  \emph{With sensing}: the device first senses for the presence of Primary Users
  (background traffic), then uses the \texttt{Ack} to detect collisions.
  \small{Models the ``classical'' Opportunistic Spectrum Access problem. Not exactly suited for IoT networks like LoRa or SigFox, but can model ZigBee, and can be analyzed mathematically...}
\hfill{}{\small{\textcolor{gray}{(\emph{cf} Wassim's and Navik's theses, 2012, 2017)}}}
\item
\emph{Without sensing}: like our IoT model but smaller scale. Still
very hard to analyze mathematically.
\end{itemize}
\end{block}
\end{frame}
\subsection{\hfill{}6.2. Notations\hfill{}}
\begin{frame}[fragile]{Notations for this second model}
\begin{block}{Notations}
\begin{itemize}
\tightlist
\item
\(K\) channels, modeled as Bernoulli (\(0/1\)) distributions of mean
\(\mu_k\) \(=\) background traffic from \emph{Primary Users},
\item
\(M\) devices use channel \(A^j(t) \in \{1,\dots,K\}\) at each time
step,
\item
Reward:
\(r^j(t) := Y_{A^j(t),t} \times \mathbbm{1}(\overline{C^j(t)}) = \mathbbm{1}(\)uplink
\& \texttt{Ack}\()\)
\begin{itemize}
\tightlist
\item
with sensing information \(Y_{k,t} \sim \mathrm{Bern}(\mu_k)\),
\item
collision for device \(j\)
    \(C^j(t) = \mathbbm{1}(\)\emph{not alone on arm $A^j(t)$}\()\).
\end{itemize}
\end{itemize}
\pause
\end{block}
\begin{block}{Goal : \emph{decentralized} reinforcement learning
optimization!}
\begin{itemize}
\tightlist
\item
Each player wants to \textbf{maximize its cumulated reward},
\item
With no central control, and no exchange of information,
\item
Only possible if : each player converges to one of the \(M\) best
arms, orthogonally (without collisions)
\end{itemize}
\end{block}
\end{frame}
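\begin{frame}[fragile]{One time step of this model: a sketch}
\footnotesize
A small Python sketch of the reward definition above, for the variant with
sensing (the arm choices and names are illustrative only).
\scriptsize
\begin{verbatim}
import random
from collections import Counter

def one_step(mu, choices):
    """mu: the K channel means; choices[j]: arm A^j chosen by device j."""
    Y = [1 if random.random() < m else 0 for m in mu]  # sensing samples
    counts = Counter(choices)
    rewards = []
    for A in choices:
        collision = counts[A] > 1        # C^j(t) = 1 iff not alone on A^j
        rewards.append(Y[A] if not collision else 0)
    return rewards

mu = [0.1, 0.5, 0.9]
print(one_step(mu, choices=[2, 2, 0]))   # devices 0 and 1 collide on arm 2
\end{verbatim}
\end{frame}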
\subsection{\hfill{}6.2. Centralized regret\hfill{}}
\begin{frame}{Centralized regret}
\begin{block}{New measure of success}
\begin{itemize}
\tightlist
\item
Not the network throughput or collision probability,
\item
Now we study the \textbf{centralized regret} \vspace*{-5pt}
\[ R_T(\boldsymbol{\mu}, M, \rho) := \left(\sum_{k=1}^{M}\mu_k^*\right) T - \E_{\mu}\left[\sum_{t=1}^T\sum_{j=1}^M r^j(t)\right]. \]
\end{itemize}
\pause
\end{block}
\begin{block}{Two directions of analysis}
\begin{itemize}
\tightlist
\item
Clearly \(R_T = \mathcal{O}(T)\), but we want a sub-linear regret
\item
\emph{What is the best possible performance of a decentralized
algorithm in this setting?} \newline
\hfill{} \(\hookrightarrow\) \textbf{Lower Bound} on regret for
\textbf{any} algorithm !
\item
\emph{Is this algorithm efficient in this setting?} \newline
\hfill{} \(\hookrightarrow\) \textbf{Upper Bound} on regret for
\textbf{one} algorithm !
\end{itemize}
\end{block}
\end{frame}
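\begin{frame}{Centralized regret: a small numerical illustration}
For instance, on the problem used later for the \Selfish{} failure cases
($K=3$, $\boldsymbol{\mu} = [0.1, 0.5, 0.9]$, $M=2$ players, horizon $T=5000$):
\[ \sum_{k=1}^{M}\mu_k^* = 0.9 + 0.5 = 1.4, \]
\[ R_T = 1.4 \times 5000 - \E_{\mu}\Big[\sum_{t=1}^{T}\sum_{j=1}^{M} r^j(t)\Big]
       = 7000 - \E_{\mu}[\text{total reward}]. \]
A regret of order $T$ (e.g., $R_T \geq 5000$) therefore means that the players
essentially failed to settle on the two best arms without collisions.
\end{frame}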
\subsection{\hfill{}6.3. Lower Bound on regret\hfill{}}
\begin{frame}[allowframebreaks]{Asymptotic Lower Bound on regret}
For any algorithm, decentralized or not, we have \vspace*{-20pt}
\begin{small}\begin{align*}
R_T(\boldsymbol{\mu}, M, \rho) &= \sum_{k \in \Mworst} (\mu_M^* - \mu_k) \E_{\mu}[T_k(T)] \\
&+ \sum_{k \in \Mbest} (\mu_k - \mu_M^*) (T - \E_{\mu}[T_k(T)]) + \sum_{k=1}^{K} \mu_k \E_{\mu}[\mathcal{C}_k(T)].
\end{align*}\end{small}
\begin{block}{Small regret can be attained if\ldots{}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Devices can quickly identify the bad arms \(\Mworst\), and not play
them too much (\emph{number of sub-optimal selections}),
\item
Devices can quickly identify the best arms, and most surely play them
(\emph{number of optimal non-selections}),
\item
Devices can use orthogonal channels (\emph{number of collisions}).
\end{enumerate}
\end{block}
\begin{block}{Lower-bounds}
\begin{itemize}
\tightlist
\item
The first term \(\E_{\mu}[T_k(T)]\), for sub-optimal arms selections,
is lower-bounded, using technical information theory tools
(Kullback-Leibler divergence, entropy),
\item
And we lower-bound collisions by\ldots{} \(0\) : hard to do better!
\end{itemize}
\end{block}
\begin{block}{Theorem 1
\hfill{}\textcolor{gray}{[Besson \& Kaufmann, 2017]}}
\begin{itemize}
\tightlist
\item
For any uniformly efficient decentralized policy, and any
non-degenerated problem \(\boldsymbol{\mu}\), \vspace*{-10pt}
\[ \mathop{\lim\inf}\limits_{T \to +\infty} \frac{R_T(\boldsymbol{\mu}, M, \rho)}{\log(T)} \geq M \times \left( \sum_{k \in \Mworst} \frac{(\mu_M^* - \mu_k)}{\kl(\mu_k, \mu_M^*)} \right) . \]
\footnotetext{\tiny Where $\kl(x,y) := x \log(\frac{x}{y}) + (1 - x) \log(\frac{1-x}{1-y})$ is the binary Kullback-Leibler divergence.}
\end{itemize}
\end{block}
\end{frame}
\begin{frame}[plain]{Illustration of the Lower Bound on regret}
\begin{figure}[h!]
\centering
\includegraphics[height=0.75\textheight]{alt/figures/main_RegretCentralized____env3-4_2092905764868974160.pdf}
\caption{\footnotesize{Any such lower-bound is very asymptotic, usually not satisfied for small horizons. We can see the importance of the collisions!}}
\end{figure}
\end{frame}
\subsection{\hfill{}6.4. Algorithms\hfill{}}
\begin{frame}{Algorithms for this easier model}
\begin{block}{Building blocks : separate the two aspects}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{MAB policy} to learn the best arms (use sensing
\(Y_{A^j(t),t}\)),
\item
\textbf{Orthogonalization scheme} to avoid collisions (use
\(C^j(t)\)).
\end{enumerate}
\pause
\end{block}
\begin{block}{Many different proposals for \emph{decentralized} learning
policies}
\begin{itemize}
\tightlist
\item
Recent: \MEGA{} and \MusicalChair{},
\hfill{}{\tiny \textcolor{gray}{[Avner \& Mannor, 2015], [Shamir et al, 2016]}}
\item
State-of-the-art: \textbf{RhoRand policy} and variants,
\hfill{}{\tiny \textcolor{gray}{[Anandkumar et al, 2011]}}
\item
\textbf{Our proposals}:
\hfill{}{\tiny \textcolor{gray}{[Besson \& Kaufmann, 2017]}}
\begin{itemize}
\tightlist
\item
With sensing: \RandTopM{} and \MCTopM{} are sort of mixes between
RhoRand and \MusicalChair{}, using UCB indexes or more efficient
index policy (\klUCB),
\item
    Without sensing: \Selfish{} uses a UCB index directly on the reward
    \(r^j(t)\): like the first IoT model!
\end{itemize}
\end{itemize}
\end{block}
\end{frame}
\begin{frame}[plain]{Illustration of different algorithms}
\begin{figure}[h!]
\centering
\includegraphics[height=0.75\textheight]{alt/figures/MP__K9_M6_T5000_N500__4_algos/all_RegretCentralized____env1-1_8318947830261751207.pdf}
\caption{\footnotesize{Regret, $M=6$ players, $K=9$ arms, horizon $T=5000$, against $500$ problems $\boldsymbol{\mu}$ uniformly sampled in $[0,1]^K$. \newline \textcolor{blue}{\rhoRand{}} < \textcolor{red}{\RandTopM{}} < \textcolor{darkgreen}{\Selfish{}} < \textcolor{gold}{\MCTopM{}} in most cases.}}
\end{figure}
\end{frame}
\subsection{\hfill{}6.5. Regret upper-bound\hfill{}}
\begin{frame}{Regret upper-bound for \MCTopM-\klUCB}
\begin{block}{Theorem 2
\hfill{}\textcolor{gray}{[Besson \& Kaufmann, 2017]}}
\begin{itemize}
\tightlist
\item
If all \(M\) players use \MCTopM-\klUCB, then for any non-degenerated
problem \(\boldsymbol{\mu}\), \[
R_T(\boldsymbol{\mu}, M, \rho) \leq G_{M,\boldsymbol{\mu}} \log(T) + \smallO{\log T}.
\]
\end{itemize}
\end{block}
\begin{block}{Remarks}
\begin{itemize}
\tightlist
\item
  Hard to prove: we had to carefully design the \MCTopM{} algorithm to
  conclude the proof,
\item
For the suboptimal selections, we \emph{match our lower-bound} !
\item
  We also \emph{minimize the number of channel switches}: interesting
as it costs energy,
\item
  Not yet known what the best possible control of
  collisions is\ldots{}
\end{itemize}
\end{block}
\end{frame}
\subsection{\hfill{}6.6. Problems with \Selfish\hfill{}}
\begin{frame}{In this model}
The \Selfish{} decentralized approach: devices don't use sensing, they just
learn from the received acknowledgements,
\begin{itemize}
\tightlist
\item
Like our first IoT model,
\item
It works fine in practice!
\item
Except\ldots{} when it fails drastically!
\item
  In small problems with \(M\) and \(K = 2\) or \(3\), we found a small
  probability of failure (\emph{i.e.}, linear regret), and this
  prevents us from having a generic upper-bound on the regret of \Selfish.
  Sadly\ldots{}
\end{itemize}
\end{frame}
\begin{frame}[plain]{Illustration of failing cases for
\(\mathrm{Selfish}\)}
\begin{figure}[h!]
\centering
\includegraphics[height=0.60\textheight]{alt/figures/MP__K3_M2_T5000_N1000__4_algos/all_HistogramsRegret____env1-1_5016720151160452442.pdf}
\caption{\footnotesize{Regret for $M=2$ players, $K=3$ arms, horizon $T=5000$, $1000$ repetitions and $\boldsymbol{\mu} = [0.1, 0.5, 0.9]$. The $x$ axis shows the regret (with a different scale for each algorithm), and \textcolor{darkgreen}{\Selfish{}} has a small probability of failure ($17$ cases of $R_T \geq T$ out of $1000$). The regret for the three other algorithms is very small for this ``easy'' problem.}}
\end{figure}
\end{frame}
\section{\hfill{}7. Perspectives and future work\hfill{}}
\subsection{\hfill{}7.1. Perspectives\hfill{}}
\begin{frame}{Perspectives}
\begin{block}{Theoretical results}
\begin{itemize}
\tightlist
\item
MAB algorithms have guarantees for \emph{i.i.d. settings},
\item
  But here the collisions break the \emph{i.i.d.} hypothesis,
\item
Not easy to obtain guarantees in this mixed setting \newline
(\emph{i.i.d.} emissions process, ``game theoretic'' collisions).
\item
For OSA devices (always emitting), we obtained strong theoretical
results,
\item
But harder for IoT devices with low duty-cycle\ldots{}
\end{itemize}
\end{block}
\begin{block}{Real-world experimental validation ?}
\begin{itemize}
\tightlist
\item
Radio experiments will help to validate this.
\hspace*{40pt}\hfill{}\textcolor{red}{Hard !}
\end{itemize}
\end{block}
\end{frame}
\subsection{\hfill{}7.2. Future work\hfill{}}
\begin{frame}{Other directions of future work}
\begin{itemize}
\item
\emph{More realistic emission model}: maybe driven by number of
packets in a whole day, instead of emission probability.
\item
Validate this on a \emph{larger experimental scale}.
\item
Extend the theoretical analysis to the large-scale IoT model, first
with sensing (\emph{e.g.}, models ZigBee networks), then without
sensing (\emph{e.g.}, LoRaWAN networks).
\item
  And also conclude the Multi-Player OSA analysis (remove the hypothesis
  that objects know \(M\), allow arrival/departure of objects,
  non-stationarity of the background traffic, etc.).
\end{itemize}
\end{frame}
\section{\hfill{}7. Conclusion\hfill{}}\subsection{\hfill{}7.3 Thanks!\hfill{}}
\begin{frame}[allowframebreaks]{Conclusion}
\begin{block}{We showed}
\begin{itemize}
\tightlist
\item
  Simple Multi-Armed Bandit algorithms, used in a Selfish approach by
  IoT devices in a crowded network, help to quickly learn the best
  possible allocation of the dynamic devices to the channels, in a fully
  decentralized and automatic way,
\item
  For devices with sensing, smarter algorithms can be designed and
  analyzed carefully.
\item
  Empirically, even if the collisions break the \emph{i.i.d.} hypothesis,
stationary MAB algorithms (UCB, TS, \klUCB) outperform more generic
algorithms (adversarial, like Exp3).
\end{itemize}
\end{block}
\begin{block}{But more work is still needed\ldots{}}
\begin{itemize}
\tightlist
\item
\textbf{Theoretical guarantees} are still missing for the IoT model,
and can be improved (slightly) for the OSA model.
\item
Maybe study \textbf{other emission models}.
\item
Implement this on \textbf{real-world radio devices}
(\textcolor{rouge}{\emph{TestBed}}).
\end{itemize}
\end{block}
\begin{block}{\textbf{Thanks!}}
\begin{center}\begin{Large}
\emph{Any question?}
\end{Large}\end{center}
\end{block}
\end{frame}
\end{document}
\section{Installing the development environment}
\subsection{Unreal engine requirements}
The \textit{minimal requirements}\footnote{\url{https://docs.unrealengine.com/en-US/GettingStarted/RecommendedSpecifications/index.html}} for installing the Unreal Engine are quite low, as can be seen in figure \ref{fig:unrealminspecs} \cite{UnrealEngineSpecs}. These requirements make it seem as if development with the Unreal Engine were actually possible on such a machine.\\
I have tried to install the development environment on a computer which just barely satisfied these requirements. From that experience I can say that, while the program actually starts, it is completely unusable: the graphics card and the CPU are absolutely overwhelmed.
\setlength{\fboxsep}{0pt}
\setlength{\fboxrule}{0pt}
\begin{figure}[h]
\centering
\fbox{\includegraphics[width=\textwidth]{./fig/unreal_specs_minimal.png}}
\caption[Minimal system requirements unreal engine]{Minimal system requirements for the unreal engine}
\label{fig:unrealminspecs}
\end{figure}
At the very bottom of the same requirements page, Epic Games lists the specifications of the PCs which they themselves use for their game development, as shown in figure \ref{fig:unrealgoodspecs}.\\
These should be considered the \textit{actual} minimal requirements for a smooth development experience.
\setlength{\fboxsep}{0pt}
\setlength{\fboxrule}{0pt}
\begin{figure}[h]
\centering
\fbox{\includegraphics[width=0.4\textwidth]{./fig/unreal_specs_good.png}}
\caption[Reasonable system requirements unreal engine]{Reasonable system requirements for the unreal engine}
\label{fig:unrealgoodspecs}
\end{figure}
In the end, a machine with the above or comparable hardware specifications \textit{should} be used to guarantee a lag-free experience with the Unreal Editor software.
\section{Installing the Unreal Engine}
The installation of the Unreal Engine is managed through the \textit{Epic Games Launcher}. This launcher is primarily used to manage and launch games developed by Epic Games, but it can also be used to download the Unreal Engine itself. Using the Epic Games Launcher provides the benefit that multiple different versions of the engine can easily be downloaded; all of these versions can be installed and managed on the same computer at once.\\
The installation of the Unreal Engine is rather well documented on the following web page:
\url{https://docs.unrealengine.com/en-US/GettingStarted/Installation/index.html}
\section{Additional programs}
When creating a game, a developer has to deal with many different things:
\begin{itemize}
\item Creating the 3D models for the characters and the environment
\item Creating surface material textures for the 3D models
\item Creating animation sequences
\end{itemize}
Just to name a few.\\
While all these steps can be done within the Unreal Editor itself, doing so is strongly discouraged. Each of these processes has its own specialized application programs, which provide a superior environment and better functionality.\\
The problem is that most of these programs require a license fee.
\section{Installing Blender}
\textit{Blender}~\cite{Blender} is an open-source program which is mainly used for 3D modeling and VFX animation sequences, but it can also be used for the other tasks listed above. Even though Blender is not the best tool for every one of these steps either, it is still better than using the built-in functionality of the Unreal Engine.
Blender is installed by downloading an installer from the following page:
\url{https://www.blender.org/download/}
Running the installer should install the Blender program, which will then be executable through a desktop icon.
"alphanum_fraction": 0.8049175898,
"avg_line_length": 62.7288135593,
"ext": "tex",
"hexsha": "1e9be36926f0a48aa4694aee749fb8a36ceeb638",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f410f3e4850c4617d8e05a5d459910b8f93f4b15",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "the16thpythonist/vr-unreal-starting-guide",
"max_forks_repo_path": "chap/chapter2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f410f3e4850c4617d8e05a5d459910b8f93f4b15",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "the16thpythonist/vr-unreal-starting-guide",
"max_issues_repo_path": "chap/chapter2.tex",
"max_line_length": 445,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f410f3e4850c4617d8e05a5d459910b8f93f4b15",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "the16thpythonist/vr-unreal-starting-guide",
"max_stars_repo_path": "chap/chapter2.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-22T01:45:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-09-22T01:45:04.000Z",
"num_tokens": 818,
"size": 3701
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Wenneker Assignment
% LaTeX Template
% Version 2.0 (12/1/2019)
%
% This template originates from:
% http://www.LaTeXTemplates.com
%
% Authors:
% Vel ([email protected])
% Frits Wenneker
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[11pt]{scrartcl} % Font size
\input{structure.tex} % Include the file specifying the document structure and custom commands
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\title{
\normalfont\normalsize
\textsc{University of Trento}\\ % Your university, school and/or department name(s)
\vspace{25pt} % Whitespace
\rule{\linewidth}{0.5pt}\\ % Thin top horizontal rule
\vspace{20pt} % Whitespace
{\huge Evaluating the Convergence Speed of Anti-Entropy Protocols}\\ % The assignment title
\vspace{12pt} % Whitespace
\rule{\linewidth}{2pt}\\ % Thick bottom horizontal rule
\vspace{12pt} % Whitespace
}
\author{\LARGE Name Surname, Pinco Pallino and Leslie Lamport} % Your name
\date{\normalsize\today} % Today's date (\today) or a custom date
\begin{document}
\maketitle % Print the title
\todo[inline]{This a suggested report template. You are free to edit it and change its organization in the way you consider
more appropriate to present your work.}
\begin{abstract}
\bfseries
%
\textsc{Abstract.} Application-level broadcast/multicast is an important building
block to create modern distributed applications.
%
Epidemic protocols are proposed in the literature to support broadcast in distributed systems.
The main differences among such protocols can be evaluated in terms of efficiency, robustness and speed when scaling.
In this assignment we have focused on anti-entropy protocols and implemented three fundamental, well-known schemes, i.e.,
\textit{Push}, \textit{Pull} and \textit{Push-pull}. Our implementation is based on \textit{python-MESA}
and lets us conduct a comparative study in
simulation which confirms that, as the theory suggests, the push-pull protocol achieves the fastest convergence.
\end{abstract}
\section{Introduction}
Provide a gentle introduction to the implemented/studied protocols. Basic example follows:
\begin{itemize}
\item Problem statement: e.g., ``fast and reliable diffusion of content in a distributed system.''
\item General/brief discussion of known approaches in the literature: e.g., short discussion about PROs and CONs of flooding, tree-based diffusion and gossip.
\item Narrow down to your chosen protocols: introduce a bit more push/pull protocols and briefly discuss advantages and disadvantages.
\item Declare the goal and content of the assignment: e.g. \emph{We wanted to study the properties of push/pull protocols and verify that,
compared to flooding approaches, they enable a considerable reduction of the number of messages necessary to complete the diffusion
process of a file in distributed systems. Moreover, we verified that they grant a higher degree of tolerance to failures compared with tree-based approaches. Finally, we have studied in simulation the convergence properties of 3 different anti-entropy protocols, namely \textit{Push}, \textit{Pull} and \textit{Push-pull}, with our simulation results confirming the expectation that the push-pull
protocol ensures shorter convergence times in all our simulated scenarios of file diffusion within
distributed systems with a varying number of nodes $N$.}
\end{itemize}
\section{Theory Background}\label{background}
Minimal but more detailed background reporting the essential notions needed to understand the rest of the report.
For example, this section can be a good place for explaining why, with flooding, the efficiency in terms of the number of messages
is known to be $O(n^2)$, and why, with tree-based protocols, this efficiency improves to $O(n)$ at the cost of reliability issues.
Describe the main principle of anti-entropy, with pseudocode;
describe the distributed system model: crash and network failure models/assumptions.
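For concreteness, the following is a minimal, self-contained Python sketch of synchronous push, pull and push-pull rounds
over a fully connected network, together with a direct measurement of the convergence-speed metric defined in \cref{expSetup};
it is an illustration of the principle only, not the MESA-based simulator described in \cref{architecture}.
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
import random

def gossip_round(informed, n, mode="push-pull"):
    """One synchronous round; informed = ids of nodes holding the update."""
    new_informed = set(informed)
    for node in range(n):
        peer = random.choice([p for p in range(n) if p != node])
        if mode in ("push", "push-pull") and node in informed:
            new_informed.add(peer)   # push: informed node sends the update
        if mode in ("pull", "push-pull") and peer in informed:
            new_informed.add(node)   # pull: node asks its peer for the update
    return new_informed

def convergence_speed(n, mode, seed=0):
    """Number of rounds until every node holds the update."""
    random.seed(seed)
    informed, rounds = {0}, 0        # node 0 initially holds the update
    while len(informed) < n:
        informed = gossip_round(informed, n, mode)
        rounds += 1
    return rounds

for mode in ("push", "pull", "push-pull"):
    print(mode, convergence_speed(n=1000, mode=mode))
\end{lstlisting}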
\section{Simulator/Implementation Architecture}\label{architecture}
Describe how you have developed an implementation of your target protocols. Describe whether this implementation is tailored for being
tested under simulation (Discrete Event / Agent-based modeling, etc.) or whether it is a minimal real-world implementation that has been
tested/studied by creating an appropriate test-bench / emulation framework.
Essentially, describe your code and your design choices. Provide your instructor with the necessary information to understand
your codebase and evaluate your design choices. Moreover, remember that... system model and assumptions matter!
If your simulation framework is responsible for the simulation of channel losses etc., then this section is the place where you should
document the modeling assumptions relevant for the interpretation of your results (these latter should be reported and commented in \cref{results}).
\begin{figure}[h]
\caption{A piece of code shown here and described in \cref{architecture}}
\inputpython{code/agent.py}{1}{150}
\end{figure}
\section{Experiment Setup}\label{expSetup}
If your effort consists in implementing a particularly complicated protocol / distributed system for the sake of proving
your ability to implement it, and for the academic purpose of showing that you master the theoretical and practical
skills necessary for completing this implementation... then SKIP THIS SECTION and rather write a DEMO section, where you describe
how to run your code to visualize, and thus appreciate, all the implemented mechanisms. Include screenshots if appropriate.
Otherwise, this section should be the classic section where, once that Background and Architecture (\cref{background,architecture})
are already clear, you document the fixed and varying parameters describing the experiment you did to measure the performance of your protocol/system.
Also document/define the performance metrics measured during the experiments. Using MESA jargon, metrics could be defined by
the \textit{model-level} or \textit{agent-level reporters}. Example of a metric definition:
\begin{definition*}[Convergence Speed]
The pure number indicating, for a MESA experiment, the step index after which all processes have received the diffused file.
\end{definition*}
\section{Results}\label{results}
Report here your experimental results, make use of figures and tables if appropriate.
\begin{figure}[h] % [h] forces the figure to be output where it is defined in the code (it suppresses floating)
\centering
\includegraphics[width=0.9\columnwidth]{terminationTime.png} % Example image
\caption{Termination Time for network size $N = 10000$}
\end{figure}
\begin{table}[h] % [h] forces the table to be output where it is defined in the code (it suppresses floating)
\centering % Centre the table
\begin{tabular}{l l l l}
\toprule
\textit{Network Size} & \textbf{Push} & \textbf{Pull} & \textbf{Push-Pull} \\
\midrule
10 & X & Y & Z\\
50 & ... & ... & ...\\
100 & ... & ... & ...\\
500 & ... & ... & ...\\
1000 & ... & ... & ...\\
10000 & ... & ... & ...\\
\bottomrule
\end{tabular}
\caption{Termination Time for tested protocols and for growing number of nodes in the network.}
\end{table}
\section{Conclusion}
Tell me how clever you have been! :)
\iffalse
\newpage
SOME EXAMPLES OF LATEX IMAGES, BULLET POINTS, EQUATIONS, TABLES AND CODE LISTINGS THAT MAY BE USEFUL
\subsection{What is the airspeed velocity of an unladen swallow?}
While this question leaves out the crucial element of the geographic origin of the swallow, according to Jonathan Corum, an unladen European swallow maintains a cruising airspeed velocity of \textbf{11 metres per second}, or \textbf{24 miles an hour}. The velocity of the corresponding African swallows requires further research as kinematic data is severely lacking for these species.
%----------------------------------------------------------------------------------------
% TEXT EXAMPLE
%----------------------------------------------------------------------------------------
\section{Understanding Text}
\subsection{How much wood would a woodchuck chuck if a woodchuck could chuck wood?}
%------------------------------------------------
\subsubsection{Suppose ``chuck" implies throwing.}
According to the Associated Press (1988), a New York Fish and Wildlife technician named Richard Thomas calculated the volume of dirt in a typical 25--30 foot (7.6--9.1 m) long woodchuck burrow and had determined that if the woodchuck had moved an equivalent volume of wood, it could move ``about \textbf{700 pounds (320 kg)} on a good day, with the wind at his back".
%------------------------------------------------
\subsubsection{Suppose ``chuck" implies vomiting.}
A woodchuck can ingest 361.92 cm\textsuperscript{3} (22.09 cu in) of wood per day. Assuming immediate expulsion on ingestion with a 5\% retainment rate, a woodchuck could chuck \textbf{343.82 cm\textsuperscript{3}} of wood per day.
%------------------------------------------------
\paragraph{Bonus: suppose there is no woodchuck.}
Fusce varius orci ac magna dapibus porttitor. In tempor leo a neque bibendum sollicitudin. Nulla pretium fermentum nisi, eget sodales magna facilisis eu. Praesent aliquet nulla ut bibendum lacinia. Donec vel mauris vulputate, commodo ligula ut, egestas orci. Suspendisse commodo odio sed hendrerit lobortis. Donec finibus eros erat, vel ornare enim mattis et.
%----------------------------------------------------------------------------------------
% EQUATION EXAMPLES
%----------------------------------------------------------------------------------------
\section{Interpreting Equations}
\subsection{Identify the author of Equation \ref{eq:bayes} below and briefly describe it in English.}
\begin{align}
\label{eq:bayes}
\begin{split}
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
\end{split}
\end{align}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent porttitor arcu luctus, imperdiet urna iaculis, mattis eros. Pellentesque iaculis odio vel nisl ullamcorper, nec faucibus ipsum molestie. Sed dictum nisl non aliquet porttitor. Etiam vulputate arcu dignissim, finibus sem et, viverra nisl. Aenean luctus congue massa, ut laoreet metus ornare in. Nunc fermentum nisi imperdiet lectus tincidunt vestibulum at ac elit. Nulla mattis nisl eu malesuada suscipit.
%------------------------------------------------
\subsection{Try to make sense of some more equations.}
\begin{align}
\begin{split}
(x+y)^3 &= (x+y)^2(x+y)\\
&=(x^2+2xy+y^2)(x+y)\\
&=(x^3+2x^2y+xy^2) + (x^2y+2xy^2+y^3)\\
&=x^3+3x^2y+3xy^2+y^3
\end{split}
\end{align}
Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
\begin{align}
A =
\begin{bmatrix}
A_{11} & A_{21} \\
A_{21} & A_{22}
\end{bmatrix}
\end{align}
Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem.
%----------------------------------------------------------------------------------------
% LIST EXAMPLES
%----------------------------------------------------------------------------------------
\section{Viewing Lists}
\subsection{Bullet Point List}
\begin{itemize}
\item First item in a list
\begin{itemize}
\item First item in a list
\begin{itemize}
\item First item in a list
\item Second item in a list
\end{itemize}
\item Second item in a list
\end{itemize}
\item Second item in a list
\end{itemize}
%------------------------------------------------
\subsection{Numbered List}
\begin{enumerate}
\item First item in a list
\item Second item in a list
\item Third item in a list
\end{enumerate}
%----------------------------------------------------------------------------------------
% TABLE EXAMPLE
%----------------------------------------------------------------------------------------
\section{Interpreting a Table}
\begin{table}[h] % [h] forces the table to be output where it is defined in the code (it suppresses floating)
\centering % Centre the table
\begin{tabular}{l l l}
\toprule
\textit{Per 50g} & \textbf{Pork} & \textbf{Soy} \\
\midrule
Energy & 760kJ & 538kJ\\
Protein & 7.0g & 9.3g\\
Carbohydrate & 0.0g & 4.9g\\
Fat & 16.8g & 9.1g\\
Sodium & 0.4g & 0.4g\\
Fibre & 0.0g & 1.4g\\
\bottomrule
\end{tabular}
\caption{Sausage nutrition.}
\end{table}
%------------------------------------------------
\subsection{The table above shows the nutritional consistencies of two sausage types. Explain their relative differences given what you know about daily adult nutritional recommendations.}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent porttitor arcu luctus, imperdiet urna iaculis, mattis eros. Pellentesque iaculis odio vel nisl ullamcorper, nec faucibus ipsum molestie. Sed dictum nisl non aliquet porttitor. Etiam vulputate arcu dignissim, finibus sem et, viverra nisl. Aenean luctus congue massa, ut laoreet metus ornare in. Nunc fermentum nisi imperdiet lectus tincidunt vestibulum at ac elit. Nulla mattis nisl eu malesuada suscipit.
%----------------------------------------------------------------------------------------
% CODE LISTING EXAMPLE
%----------------------------------------------------------------------------------------
\section{Reading a Code Listing}
\lstinputlisting[
caption=Luftballons Perl Script., % Caption above the listing
label=lst:luftballons, % Label for referencing this listing
language=Perl, % Use Perl functions/syntax highlighting
frame=single, % Frame around the code listing
showstringspaces=false, % Don't put marks in string spaces
numbers=left, % Line numbers on left
numberstyle=\tiny, % Line numbers styling
language=Perl
]{code/luftballons.pl}
\inputpython{fft.py}{1}{150}
%------------------------------------------------
\subsection{How many luftballons will be output by the Listing \ref{lst:luftballons} above?}
Aliquam arcu turpis, ultrices sed luctus ac, vehicula id metus. Morbi eu feugiat velit, et tempus augue. Proin ac mattis tortor. Donec tincidunt, ante rhoncus luctus semper, arcu lorem lobortis justo, nec convallis ante quam quis lectus. Aenean tincidunt sodales massa, et hendrerit tellus mattis ac. Sed non pretium nibh. Donec cursus maximus luctus. Vivamus lobortis eros et massa porta porttitor.
%------------------------------------------------
\subsection{Identify the regular expression in Listing \ref{lst:luftballons} and explain how it relates to the anti-war sentiments found in the rest of the script.}
Fusce varius orci ac magna dapibus porttitor. In tempor leo a neque bibendum sollicitudin. Nulla pretium fermentum nisi, eget sodales magna facilisis eu. Praesent aliquet nulla ut bibendum lacinia. Donec vel mauris vulputate, commodo ligula ut, egestas orci. Suspendisse commodo odio sed hendrerit lobortis. Donec finibus eros erat, vel ornare enim mattis et.
%----------------------------------------------------------------------------------------
\fi
\end{document}
\documentclass[letterpaper, 11pt]{article}
%=================================================
\usepackage{fullpage, parskip}
\usepackage{fancyhdr}
\usepackage{amsmath, mathtools,amssymb}
\usepackage{graphicx}
\usepackage{tabularx}
\usepackage{xspace}
\usepackage{natbib}
%% Journal control sequences
\usepackage{aas_macros}
%--------------------------------------------------
\def\humvi{{\sc HumVI}\xspace}
%--------------------------------------------------
%% Header and Footer
\pagestyle{fancy}
\fancyhead{}
\renewcommand{\headrulewidth}{0.0pt}
\rfoot{HumVI}
\lfoot{December 2012}
%% Top matter
\title{HumVI: Image Stretching and Scaling Tests}
\author{Phil Marshall\thanks{\texttt{[email protected]}},
Cato Sandford, David Hogg, Amit Kapadia}
\date{\today}
%%-------------------------------------------------
\begin{document}
\maketitle
\vspace{1cm}
\begin{abstract}
We investigate some useful input parameters for
use when composing color images from the data in the CFHTLS.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Suppose we have a large set of imaging data from a given sky survey. We would
like to make color representations of the images in this set, such that they
can be a) compared with each other, to build intuition about the data quality
and the appearance of objects in the survey, and b) searched for low surface
brightness, color-contrasting features. We use the \humvi python
implementation of the \citet[][hereafter L04]{Lup++04} algorithm, with some simplifications and
extensions. For simplicity we use a single filter for each RGB channel,
choosing the {\it i}, {\it r} and {\it g} bands respectively.
\section{Scaling and Stretching the Images}
\label{sec:stretch}
The scaling and stretching of the input images is controlled by three
parameters. The three \texttt{scales} parameters, one for each channel, are
used to multiply the channel images before any other operations are performed.
The \texttt{scales} account for any difference in units between the images,
and also the sensitivity of the detector in that filter, the exposure time
used, and so on. We denote this scale parameter by $s_{X}$ where $X$ is one of
the channel identifiers, $R,G,B$. For convenience, we normalize the scales to
have unit mean, since it will often be the case that the images taken in
different filters will have the same units and approximately equal pixel
values. Crucially, these scales can be chosen to be the same for all images in
a survey, allowing different images to be compared with each other; likewise
they can be used to account for variations in, for example, exposure time
between images.
After scaling, the total intensity image is
computed, and used to compute the stretch factor, which is governed by two
parameters, $Q$ and $\alpha$ as given by L04:
\begin{align}
I &= r s_R + g s_G + b s_B \\
X(I) &= \frac{1}{QI}\cdot\mathrm{arcsinh}(\alpha Q I) \\
R &= r s_R X \\
G &= g s_G X \\
B &= b s_B X
\end{align}
For small values of $\alpha Q I$, $X \approx \alpha$, and constant: at low
intensity, each channel image is simply rescaled by $\alpha$. Low surface
brightness features are made more visible by increasing $\alpha$.
At higher values,
the arcsinh function reduces this scale factor, making high intensity regions
less saturated. The onset of this behavior occurs when $\alpha Q I \approx 1$,
or when $I \approx 1 / (\alpha Q)$.
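For concreteness, a minimal \texttt{numpy} sketch of this scaling and
stretch is given below; it is an illustration of the equations above rather
than the actual \humvi code, and the function and variable names are ours.
\begin{verbatim}
import numpy as np

def scale_and_stretch(r, g, b, scales=(1.0, 1.0, 1.0),
                      alpha=0.03, Q=1.0):
    """Apply the channel scales and the arcsinh stretch of L04."""
    sR, sG, sB = scales
    r, g, b = r * sR, g * sG, b * sB
    I = r + g + b                            # total intensity image
    I = np.where(I == 0.0, 1e-12, I)         # X(I) -> alpha as I -> 0
    X = np.arcsinh(alpha * Q * I) / (Q * I)
    return r * X, g * X, b * X
\end{verbatim}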
In order to make a PNG image, \humvi works to make three channel images whose
values are clipped at zero and one. This choice of 1 as the maximum pixel
value allows us to choose $Q$ and $\alpha$ sensibly. For example, suppose we
have a set of scaled images with approximately zero mean, unit
rms and brightest pixel
value $10^4$. If we want a pixel with value 3 times the rms to have
normalized value 0.1 in each channel of
the final image, then we need $X(9)=1/30$ ($I\approx9$ if each channel image
pixel value is about $3\sigma$). We'd
like this to still be in the linear regime, so we need $9 \alpha Q \ll 1$,
and also $\alpha \approx 1/30 \approx 0.03$. Combining these two requirements,
we find that we need $Q
\ll 3$. The algorithm is not very sensitive to the value of $Q$, as long as
it is very much less than $1/\alpha$.
Drawing a parallel with television controls,
$Q$ behaves like the brightness, while $\alpha$ is like the contrast.
In Figure~\ref{fig:stretch} we show, for a fixed set of scales, the effect of
varying $\alpha$ and $Q$ when displaying an image that has approximately unit
rms pixel value in each channel. The values $\alpha = 0.03$ and $Q = 1$
provide a good representation of the image.
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{Images/CFHTLS_27_Q-alpha_gallery.png}
\caption{The effect of the non-linearity parameters $Q$ and $\alpha$ on an
example image from the CFHTLS survey. Left to right, $\alpha$ increases
through the set $\{0.01,0.03,0.1,0.3,1.0\}$. Top to bottom, $Q$ increases
through the set $\{0.01,0.1,1.0,10,100\}$.}
\label{fig:stretch}
\end{figure}
\section{Saturation and Thresholding}
\label{sec:saturation}
The scaled and stretched pixel values of the previous section have to be
mapped onto a unit range for encoding in a PNG image. How we deal with pixels
that fall outside that range will affect the appearance of the composite.
At low brightness, we have to decide which pixels we want to appear black.
Background-subtracted images will have negative pixel values in the ``blank''
sky regions. One option is to set all pixels with value less than zero to
zero. This leads to a large number of black pixels (approximately half of
them!) and a strong impression of dark sky. If we want to retain the
information that those negative pixels contain (about the noise level), we can
add an offset $\delta$ to each scaled and stretched channel image, such that
pixels with value $-\delta$ appear black in the final composite. This has the
effect of making the blank regions of the image appear dark gray instead of
black. Choosing $\delta$ to be negative has the opposite effect, making the
sky look blacker than black... Figure~\ref{fig:offset} shows the effect of
various offset values on the appearance of the test image from
Figure~\ref{fig:stretch}. We find that actually an offset of zero is a good
compromise between ``seeing the noise'' and achieving a nice dark background
against which low surface brightness features can be seen.
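The offset operation itself is simple; a sketch (again not the actual
\humvi implementation) is:
\begin{verbatim}
import numpy as np

def apply_offset(channel, delta=0.0):
    """Shift the stretched channel so that value -delta maps to black,
    then clip below at zero; delta = 0 reproduces our default choice."""
    return np.maximum(channel + delta, 0.0)
\end{verbatim}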
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{Images/CFHTLS_27_offset-alpha_gallery.png}
\caption{The effect of the offset parameter $\delta$ and contrast
$\alpha$ on an
example image from the CFHTLS survey. Left to right, $\alpha$ increases
through the set $\{0.01,0.03,0.1,0.3,1.0\}$. Top to bottom, $\delta$ increases
through the set $\{-0.2,-0.1,0.0,0.1,0.2\}$.}
\label{fig:offset}
\end{figure}
At the bright end we have a different choice to make: what to do with pixels
whose value is greater than 1 in any channel? In Figures~\ref{fig:stretch}
and~\ref{fig:offset} we simply snapped these pixel values to one in that
channel, a procedure that leads to ``saturation to white'' in the case where
all three channel pixel values exceed 1. An alternative is to snap the highest
pixel value of an RGB triplet to 1, and then rescale the other two so as to
preserve the {\it color} of the pixel. This was advocated in L04. The two
approaches are shown in Figure~\ref{fig:saturation}, again as a function of
$\alpha$. If the image is not stretched too hard, saturation to white is not
an issue, as all pixels remain in the required 0:1 range. Note that the faint
objects in the low $\alpha$ frames in Figure~\ref{fig:saturation} look very
similar between the two saturation schemes. With
color-preserving saturation we see some odd effects: red central cores and
ring-like artifacts which, while providing more information about the images,
may provide distractions during a search for low surface brightness features.
The contrasting (e.g.\ yellow) rings around faint (e.g.\ red) objects are likely to
be a result of PSF mismatch between these images: if the resolutions of the
three channels' images are not well matched, confusing artifacts will arise.
Saturation to white seems to be an {\it easy way to hide this problem}.
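The two saturation schemes can be sketched in a few lines of Python (again
NumPy assumed, and not the \humvi code itself; the offset $\delta$ of
Figure~\ref{fig:offset} is included for completeness):
\begin{verbatim}
import numpy as np

def to_unit_range(R, G, B, delta=0.0, preserve_color=False):
    # Map stretched channel images onto [0, 1] for PNG encoding.
    # delta shifts the zero point: pixels at -delta appear black.
    R, G, B = R + delta, G + delta, B + delta
    if preserve_color:
        # L04-style: rescale each RGB triplet by its maximum, preserving hue.
        peak = np.maximum(np.maximum(R, G), B)
        factor = 1.0 / np.maximum(peak, 1.0)
        R, G, B = R * factor, G * factor, B * factor
    # Clip to [0, 1]; without color preservation this is saturation to white.
    return (np.clip(R, 0.0, 1.0),
            np.clip(G, 0.0, 1.0),
            np.clip(B, 0.0, 1.0))
\end{verbatim}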
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{Images/CFHTLS_27_saturation_gallery.png}
\caption{The effect of the saturation scheme on an
example image from the CFHTLS survey. Left to right, $\alpha$ increases
through the set $\{0.01,0.03,0.1,0.3,1.0\}$. Top row: saturation to white;
Bottom row: color-preserving saturation, as proposed by L04.}
\label{fig:saturation}
\end{figure}
\section{Color Balance}
\label{sec:color}
Finally, we return to the choice of scales for a composite image, which is
best made after setting the stretch and saturation parameters well. The
scales should reflect the quality of each channel's image, including
the sensitivity of the instrumentation, the filter transmission, the exposure
time and so on. However, the relative scales can also be chosen to change the
balance of color in the image, in order to improve the color contrast between
different objects. In Figure~\ref{fig:color} we show the effect of varying the
scales by small amounts around their natural (unit) values.
A good strategy when looking for contrasting features around massive galaxies
could be to choose scales such that the massive galaxies appear as bright
yellow as possible, and the objects around them as different as possible. The
set $\{0.8,1.0,1.0\}$ seems to work reasonably well in this example.
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{Images/CFHTLS_27_scales_gallery.png}
\caption{The effect of changing the relative image scales on the color balance
in an example image from the CFHTLS survey. We keep the G channel image scale
fixed at 1.0. Left to right, the R channel image scale increases through the
set $\{0.6,0.8,1.0,1.2,1.4\}$. Top to bottom, the B channel image scale increases
through the same set. $Q=1.0$ and $\alpha = 0.03$.}
\label{fig:color}
\end{figure}
\section{Conclusions}
\label{sec:conclude}
From these explorations we conclude:
\begin{itemize}
\item $\alpha$ is the key ``contrast'' parameter needed to bring out low
surface brightness features in an image.
\item If all images in a set were taken under the same conditions, only the
relative scales need be specified.
\item These relative scales determine the color balance of the image, and
should be chosen by experimentation.
\item The $Q$ parameter is a ``brightness'' control, and has less effect on an
image than $\alpha$; however, they do need to be set together.
\item Future work using images with varying conditions may need unnormalized
scales, in which case $Q$ may become somewhat, although not completely,
redundant.
\item While color-preserving saturation retains more information in the image,
this information may be distracting during a low surface brightness feature
search.
\item Matching the resolutions of the input channel images may mitigate
some of the artifacts highlighted by the saturate-to-color algorithm.
\end{itemize}
Further work should include allowing unnormalized scales, one per image, to
allow for images with different units, as concluded above. Then we should
enable combinations of $N > 3$ images, breaking the one image per channel
paradigm set up here. Input images will need to be scaled first, then combined
into channel images, and then stretched as described in
Section~\ref{sec:stretch}.
\section{References}
\bibliographystyle{mn2e}
\bibliography{humvi}
\end{document}
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{json}
\label{subr:json}
\subsection*{json: extract a single value from a JSON string}
\subsubsection*{Calling convention}
\begin{description}
\item[\registerop{rd}] A string containing the value or NULL
\item[\registerop{arg0}] JSON formatted string
\item[\registerop{arg1}] Key, or NULL-separated list of keys, to search for
\end{description}
\subsubsection*{Description}
The \subroutine{json} subroutine extracts a value from a JSON string
based on one or more keys supplied via a list in which NULL is used as
a key separator, e.g. \verb|"name" NULL "age" NULL| where the keys are
\emph{name} and \emph{age} and the number of elements in the list
(\verb|nelems|) is equal to two (2).
\subsubsection*{Failure modes}
This subroutine has no run-time failure modes beyond its constraints.
\subsection{Attributes}
\label{character-attributes}
\begin{description}
\item[Accuracy (0--100\%)] Likeliness to hit a still target.
\item[Critical Hit Chance (0--100\%)]
  Likeliness for each attack to be a critical hit (see~\ref{critical-hit}).
\item[Damage Modifier (0--300\%)] Multiplier for the damage.
\item[Dodge Chance (0--175\%)] Likeliness to dodge an attacker targeting you as if
  you were standing still.
\item[Double Hit Chance (0--100\%)] Likeliness to perform a follow-up attack.
\item[Health Points (1--500)] Maximum health points.
\item[Movement Points (8--200)] Amount of movement points available every turn.
\item[Parry Chance (0--100\%)]
Likeliness to perform a parry, should the circumstances allow it.
\end{description}
%% Richard Wen
%% [email protected]
% *** APPENDICES ***
\begin{appendices}
\section{Literature Review Methods} \label{appendix:literature-review-methods}
The paper selection process involved identifying reputable digital libraries using the Journal Impact Factor (JIF) measure \citep{Garfield:2006b}, followed by using automatic search queries to produce an initial list of potential papers. The potential papers were then further filtered by manual selection criteria to produce a list of selected papers for review. The literature review process is shown in Figure \ref{figure:litreview_process}.
\begin{figure}[!htb]
\centering
\includegraphics[width=6in]{litreview_process}
\caption{\textbf{Literature Review Methods.}}
\label{figure:litreview_process}
\end{figure}
\subsection{Digital Library Selection} \label{appendix:digital-library-selection}
The papers for the literature review were found with the search engines available in the Association for Computing Machinery (ACM) \citep{ACM:2017} and Institute of Electrical and Electronics Engineers (IEEE) Xplore \citep{IEEE:2017} digital libraries. A search for the top journals in computer science by journal impact factor \citep{Garfield:2006b} was done using the InCites journal citation reports web tool \citep{Clarivate:2017a}. A majority of ACM and IEEE journals were found to be in the first quartile of journal impact factor values for the computer science category. A visualization of the top 25 journals in computer science by journal impact factor in 2016 is shown in Figure \ref{figure:incites_top25jifcs}.
\begin{figure}[!htb]
\centering
\includegraphics[width=6in]{incites_top25jifcs}
\caption{\textbf{Top 25 Computer Science Journals by Journal Impact Factor from InCites Journal Citation Report in 2016.} Gray circles represent the Journal Impact Factor, where higher Journal Impact Factor values are represented by larger sizes. Connected lines represent the citation relationships between each journal, where thicker lines mean stronger relationships.}
\label{figure:incites_top25jifcs}
\end{figure}
The search for the top 25 computer science journals was based on the Journal Impact Factor (JIF) \citep{Garfield:2006b} measure, and was done using the InCites Journal Citation Reports (JCR) web tool \citep{Clarivate:2017a}. The search used the following options available on InCites:
\begin{itemize}
\item \textbf{Categories}:
\begin{itemize}
\item COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
\item COMPUTER SCIENCE, CYBERNETICS
\item COMPUTER SCIENCE, HARDWARE \& ARCHITECTURE
\item COMPUTER SCIENCE, INFORMATION SYSTEMS
\item COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
\item COMPUTER SCIENCE, SOFTWARE ENGINEERING
\item COMPUTER SCIENCE, THEORY \& METHODS
\end{itemize}
\item \textbf{JCR Year}: 2016
\item \textbf{Edition}: Science Citation Index Expanded (SCIE) \citep{Garfield:2006a} and Social Sciences Citation Index (SSCI) \citep{Klein:2004}
\item \textbf{Category Schema}: Web of Science \citep{Clarivate:2017b}
\item \textbf{JIF Quartile}: Quarter 1 (Q1)
\end{itemize}
\subsection{Automatic Search Queries} \label{appendix:automatic-search-queries}
Potential papers were found using search engine queries in the ACM \citep{ACM:2017} and IEEE Xplore \citep{IEEE:2017} digital libraries identified in Appendix \ref{appendix:digital-library-selection}. Search queries were modified from the defaults, and the results were sorted by relevance. Each search query was defined to filter for potential papers with the following requirements:
\begin{enumerate}[label=(\alph*)]
\item \textbf{Publication}: Published in ACM or IEEE
\item \textbf{Year}: Published from 2012 to December 2, 2017
\item \textbf{Keywords}: Contains the keywords \textit{"real time"} and \textit{"social media"} in the paper title, and \textit{"prediction"}, \textit{"predict"}, \textit{"detection"}, or \textit{"detect"} anywhere in the text
\end{enumerate}
The query syntax in the ACM digital library was accessed through the advanced search page by clicking \textit{"show query syntax"}. The \textit{"+"} symbol includes each keyword in the title. \textit{"gte"} and \textit{"lte"} represent \textit{"greater than or equal to"} and \textit{"less than or equal to"} respectively. The publication date query syntax must be manually generated using the web interface. The full advanced query syntax used for the ACM digital library to return potential papers is shown below:
\lstinputlisting{data/acm_querysyntax.txt}
The command search in the IEEE Xplore digital library was accessed through the advanced search page by clicking \textit{"command search"}. Refinements were manually applied using the web interface to filter command search results for the years 2012 to 2017 and to search in \textit{"Full Text \& Metadata"}. The command search used for the IEEE Xplore digital library to return potential papers is shown below:
\lstinputlisting{data/ieeexplore_commandsearch.txt}
\subsection{Manual Selection Criteria} \label{appendix:manual-selection-criteria}
The potential papers from Appendix \ref{appendix:automatic-search-queries} were further filtered based on their abstracts and paper length. The abstracts were inspected for relevance to the topic: \textit{"real-time geosocial media event detection and prediction"}. This included mentions of methods that deal with detecting or predicting real-world events in real-time using geosocial media data. After inspection of the abstracts, each paper was further evaluated for practicality by searching for mentions of event prediction or detection applications, benchmarks, and experiments in the results sections. The manual selection criteria sought to find papers with the following characteristics:
\begin{enumerate}[label=(\alph*)]
\item \textbf{Detailed}: Paper contained sufficient details and explanations to obtain a general understanding of the methods and results
\item \textbf{Relevant}: Paper had mentions of real-time geosocial media event detection or prediction
\item \textbf{Practical}: Paper had conducted experiments, benchmarks, or applications using described event detection or prediction methods
\end{enumerate}
\subsection{Review Procedure} \label{appendix:review-procedure}
A literature review of the papers selected using the methods in Appendix \ref{appendix:manual-selection-criteria} was done with the following procedure:
\begin{enumerate}
\item \textbf{Identify} methods used for real-time geosocial media event detection or prediction
\item \textbf{Summarize} methods in (1)
\item \textbf{Summarize} applications and results for the methods in (1)
\item \textbf{Discuss} limitations, possible improvements, and future directions relative to the summaries from (2) and (3)
\end{enumerate}
\end{appendices}
%-------------------------
% Resume in Latex
% Author : Mateo Wang
% Based off of: https://github.com/jakegut/resume
% License : MIT
%------------------------
\documentclass[letterpaper,11pt]{article}
\usepackage{latexsym}
\usepackage[empty]{fullpage}
\usepackage{titlesec}
\usepackage{marvosym}
\usepackage[usenames,dvipsnames]{color}
\usepackage{verbatim}
\usepackage{enumitem}
\usepackage[hidelinks]{hyperref}
\usepackage{fancyhdr}
\usepackage[english]{babel}
\usepackage{tabularx}
\input{glyphtounicode}
%----------FONT OPTIONS----------
% sans-serif
% \usepackage[sfdefault]{FiraSans}
% \usepackage[sfdefault]{roboto}
% \usepackage[sfdefault]{noto-sans}
% \usepackage[default]{sourcesanspro}
% serif
% \usepackage{CormorantGaramond}
% \usepackage{charter}
\pagestyle{fancy}
\fancyhf{} % clear all header and footer fields
\fancyfoot{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
% Adjust margins
\addtolength{\oddsidemargin}{-0.5in}
\addtolength{\evensidemargin}{-0.5in}
\addtolength{\textwidth}{1in}
\addtolength{\topmargin}{-.5in}
\addtolength{\textheight}{1.0in}
\urlstyle{same}
\raggedbottom
\raggedright
\setlength{\tabcolsep}{0in}
% Sections formatting
\titleformat{\section}{
\vspace{-4pt}\scshape\raggedright\large
}{}{0em}{}[\color{black}\titlerule \vspace{-5pt}]
% Ensure that generate pdf is machine readable/ATS parsable
\pdfgentounicode=1
%-------------------------
% Custom commands
\newcommand{\resumeItem}[1]{
\item\small{
{#1 \vspace{-2pt}}
}
}
\newcommand{\resumeSubheading}[4]{
\vspace{-2pt}\item
\begin{tabular*}{0.97\textwidth}[t]{l@{\extracolsep{\fill}}r}
\textbf{#1} & #2 \\
\textit{\small#3} & \textit{\small #4} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeSubSubheading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\textit{\small#1} & \textit{\small #2} \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeProjectHeading}[2]{
\item
\begin{tabular*}{0.97\textwidth}{l@{\extracolsep{\fill}}r}
\small#1 & #2 \\
\end{tabular*}\vspace{-7pt}
}
\newcommand{\resumeSubItem}[1]{\resumeItem{#1}\vspace{-4pt}}
\renewcommand\labelitemii{$\vcenter{\hbox{\tiny$\bullet$}}$}
\newcommand{\resumeSubHeadingListStart}{\begin{itemize}[leftmargin=0.15in, label={}]}
\newcommand{\resumeSubHeadingListEnd}{\end{itemize}}
\newcommand{\resumeItemListStart}{\begin{itemize}}
\newcommand{\resumeItemListEnd}{\end{itemize}\vspace{-5pt}}
%-------------------------------------------
%%%%%% RESUME STARTS HERE %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%----------HEADING----------
% \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}r}
% \textbf{\href{http://sourabhbajaj.com/}{\Large Sourabh Bajaj}} & Email : \href{mailto:[email protected]}{[email protected]}\\
% \href{http://sourabhbajaj.com/}{http://www.sourabhbajaj.com} & Mobile : +1-123-456-7890 \\
% \end{tabular*}
\begin{center}
\textbf{\Huge \scshape Mateo Wang} \\ \vspace{1pt}
\small(408) 913-6163 $|$ \href{mailto:[email protected]}{\underline{[email protected]}} $|$
\href{https://linkedin.com/in/mateo-wang}{\underline{linkedin.com/in/mateo-wang}} $|$
\href{https://github.com/SwiftWinds}{\underline{github.com/SwiftWinds}} $|$
    Bay Area, CA
\end{center}
%-----------EDUCATION-----------
\section{Education}
\resumeSubHeadingListStart
\resumeSubheading
{UC Santa Barbara}{3.77 GPA $|$ Aug. 2020 -- Jun. 2023}
{Bachelor of Science in Computer Science, Technology Management Certificate}{Santa Barbara, CA}
\resumeSubHeadingListEnd
%-----------EXPERIENCE-----------
\section{Experience}
\resumeSubHeadingListStart
\resumeSubheading
{Software/Data Science Intern}{Jun. 2021 -- Sep. 2021}
{Tascent, Inc.}{Los Gatos, CA}
\resumeItemListStart
\resumeItem{Trained CNN and tuned SVM kernel to get \textbf{96\%} and
\textbf{96.5\%} accuracy for motion detection on low-resolution thermal imaging dataset, respectively}
\resumeItem{Set up automatic data labeling of thermal images with DeepSORT on Google Coral}
\resumeItem{Implemented simultaneous RGB/thermal/depth recording, automatically triggered with object detection}
\resumeItemListEnd
% -----------Multiple Positions Heading-----------
% \resumeSubSubheading
% {Software Engineer I}{Oct 2014 - Sep 2016}
% \resumeItemListStart
% \resumeItem{Apache Beam}
% {Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines}
% \resumeItemListEnd
% \resumeSubHeadingListEnd
%-------------------------------------------
\resumeSubheading
{Data Center Technician Intern}{Jun. 2021 -- Sep. 2021}
{Vofo Corp}{Sunnyvale, CA}
\resumeItemListStart
\resumeItem{Wrote scripts to automate IPMI server IP changes}
\resumeItem{Documented knowledge base of server troubleshooting standard procedures}
\resumeItem{Communicated with managers, engineers, and coworkers in Chinese to set up dozens of new server nodes}
\resumeItemListEnd
\resumeSubheading
{Software Developer}{Oct. 2020 -- Jan. 2021}
{UnMesh, LLC.}{New York, NY}
\resumeItemListStart
\resumeItem{Wrote Express.js endpoints to consume Spoonacular API and make MongoDB queries to send recipe list to users}
\resumeItem{Wrote React Native UI for the user profile statistics screen, fetching data with REST API from Node.js backend}
\resumeItem{Migrated deployment from EC2 to microservices/serverless with MongoDB Atlas and AWS lambda}
\resumeItemListEnd
\resumeSubheading
{Full-stack Web Developer}{Jun. 2020 -- Oct. 2020}
{Electify Technologies}{Palo Alto, CA}
\resumeItemListStart
\resumeItem{Built React.js UI for tracking prize redemptions, fetching and writing data to the Firestore database}
\resumeItem{Wrote backend Express.js code to automatically update user statistics/metadata every day}
\resumeItem{Solved long-standing DEADLINE\_EXCEEDED issue, reducing runtime length by about \textbf{99.2\%}}
\resumeItemListEnd
\resumeSubheading
{Software Engineer Intern}{Jun. 2019 -- Aug. 2019}
{Denali System Co. Ltd}{Mountain View, CA}
\resumeItemListStart
\resumeItem{Developed an internal React.js website for uploading and watching employee training videos}
\resumeItem{Wrote a Flask backend and used Tus protocol and FFmpeg to upload, transcode, store, and serve video files}
\resumeItemListEnd
\resumeSubHeadingListEnd
%-----------PROJECTS-----------
\section{Projects}
\resumeSubHeadingListStart
\resumeProjectHeading
{\textbf{Recommeddit.tech} $|$ \emph{Python, NLTK, Flask, Svelte, Tailwind CSS, Google Cloud Functions}}{Jan. 2021 - Jun. 2021}
\resumeItemListStart
\resumeItem{Developed a full-stack web app with Flask serving a REST API to a Svelte/Tailwind CSS as the frontend}
\resumeItem{Aggregated Reddit comments via Pushshift API and PRAW library and scraped websites with BeautifulSoup4}
\resumeItem{Analyzed Reddit comments with MonkeyLearn API for NER and NLTK's Vader for sentiment analysis}
\resumeItem{Deployed frontend with Netlify, Cloudflare, Domain.com and backend with Google Cloud Functions}
\resumeItemListEnd
\resumeProjectHeading
{\textbf{Other projects on \href{https://github.com/SwiftWinds?tab=repositories}{\underline{GitHub}}}}{}
\resumeSubHeadingListEnd
%
%-----------PROGRAMMING SKILLS-----------
\section{Technical Skills}
\begin{itemize}[leftmargin=0.15in, label={}]
\small{\item{
     \textbf{Languages}{: JavaScript, HTML, CSS, Python, TypeScript, C/C++, C\#, Java, Bash, Kotlin, Dart} \\
\textbf{Frameworks}{: React, Vue, Node.js, Express.js, Unity, Arduino, Bootstrap, Tailwind CSS, Svelte} \\
\textbf{Developer Tools}{: Git, Docker, Kubernetes, Google Cloud Platform, AWS, Azure} \\
\textbf{Libraries}{: PyTorch, fast.ai, TensorFlow, Keras, NumPy, OpenCV, pandas, Matplotlib, Flask} \\
\textbf{Other technologies}{: GraphQL, REST API, Firebase, MongoDB}
}}
\end{itemize}
%-------------------------------------------
\section{Hackathon Awards}
\begin{itemize}[leftmargin=0.15in, label={}]
\small{\item{
\textbf{HackMIT 2020}{: latent-space.tech wins 3rd place of 1000+ competitors} \\
\textbf{LAHacks 2020}{: Archiscape wins 1st place of 192 projects} \\
\textbf{CoVIDathon 2020}{: Centivize wins 3rd place of 100 projects} \\
\textbf{MissionHacks 2019}{: Bookworm wins 1st place of 52 projects} \\
}}
\end{itemize}
\end{document}
\section{Conversion from raw format}
\label{sec:conversion}
Once the data is captured, the second problem is parsing and
converting that data to an easily usable format. The raw packet format
contains a large amount of unnecessary data, and would require
repeated, expensive parsing to be used for NFS analysis. There are four main
challenges in conversion: representation, storage, performance and
anonymization. {\it Data representation} is the challenge of deciding the
logical structure of the converted data. {\it Storage format} is the challenge of
picking a suitable physical structure for the converted data.
{\it Conversion performance} is the challenge of making the conversion run quickly,
ideally faster than the capture stage. {\it Trace anonymization} is the
challenge of hiding sensitive information present in the data and is
necessary for being able to release traces.
One lesson we learned after conversion is that the converter's version
number should be included in the trace. As with most programs, there
can be bugs. Having the version number in the trace makes it easy to
determine which flaws need to be handled. For systems such as
subversion or git, we recommend the atomic check-in ID as a suitable
version number.
A second lesson was
preservation of data. An NFS parser will discard data both for space
reasons and for anonymization. Keeping underlying information, such as
per packet conversion in addition to per NFS-request conversion can
enable cross checking between analysis. We caught an early bug in our
converter that failed to record packet fragments by comparing the
packet rates and the NFS rates.
\subsection{Data representation}
One option for the representation is the format used in the
Ellard~\cite{ellardTraces} traces: one line per request or reply in a text
file with field names to identify the different parameters in the RPC.
This format is slow to parse, and works poorly for representing
readdir, which has an arbitrary number of response fields.
Therefore, we chose to use a more relational data
structuring~\cite{codd70relational}.
We have a primary data table with the common fields present in every
request or reply, and an identifier for each RPC. We then have
secondary tables that contain request-type specific information, such
as a single table for RPCs that include attributes, and a single
table for read and write information. We then join the common table
to the other tables when we want to perform an analysis that uses
information in both. Because of this structure, a single RPC request
or reply will have a single entry in the common table. However, a
request/reply pair will have zero (no entry in the read/write table
unless the operation is a read/write) or more entries (multiple
attribute entries for readdir+) in other tables.
The relational structuring improves flexibility, and avoids reading unnecessary data for
analyses that only need a subset of the data.
For example, an analysis only looking at operation
latency can simply scan the common table.
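As a toy illustration of this structuring (a Python sketch with made-up rows,
not the converter's actual schema), the common table holds one row per request
or reply keyed by a record identifier, and type-specific tables are joined
back on that identifier only when an analysis needs them:
\begin{verbatim}
# Hypothetical, simplified rows illustrating the relational split.
common = [
    # (record_id, time, operation, is_request)
    (1, 1000.0, "read",    True),
    (2, 1000.4, "read",    False),
    (3, 1001.2, "getattr", True),
    (4, 1001.3, "getattr", False),
]
read_write = [
    # (record_id, offset, nbytes): only read/write RPCs have rows here
    (1, 4096, 8192),
    (2, 4096, 8192),
]

# A latency analysis scans only the common table.
times = {rid: t for rid, t, op, is_req in common}

# A bandwidth analysis joins the read/write table back on record_id.
for rid, offset, nbytes in read_write:
    print(rid, times[rid], offset, nbytes)
\end{verbatim}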
\subsection{Storage format}
Having decided to use a relational structuring for our data, we next
needed to decide how to physically store the data. Three
options were available to us: text, SQL, and DataSeries, our custom
binary format~\cite{DataSeriesOSR2009} for storing trace data.
Text is a traditional way of storing trace data, however, we were
concerned that a text representation would be too large and too slow.
Having later converted the Ellard traces to our format, we found that
the analysis distributed with the traces used 25$\times$ less CPU time when
the traces and analysis used DataSeries, and ran 100$\times$ faster on a 4
core machine. This disparity confirmed our intuition that text is a
poor format for trace data.
% cat .../*-log | perl .../DataSeries/doc/fast2008-nfs-analysis/scripts/compression-sum.pl
% last column is size it happened to be compressed to, but used lzf, gzip, bzip2
% nfs-2/set-0: 4260280689888 43738379636 1064156793204 -> 576848706584; 7.46x / 9.23x
% nfs-2/set-1: 3882851266232 42353445184 958237524264 -> 525549755192; 7.47x / 9.21x
% nfs-2/set-2: 4447266037744 51364690388 1127969564332 -> 692186154292; 6.50x / 8.05x
% nfs-2/set-3: 5560956128368 173211639312 1469393844368 -> 1053382989256; 5.44x / 6.67x
% nfs-2/set-4: 3372576162216 76584618188 757255149580 -> 667308820144; 5.17x / 6.19x
% nfs-2/set-5: 3993210407120 16925899320 829238508600 -> 661073679548; 6.07x / 7.29x
% du -b (for entirely gzip compression) ; (a + b / du) / (a + c / du)
% 841832166984 set-0/ ; 5.1x / 6.3x
% 737388264364 set-1/ ; 5.3x / 6.5x
% 852201228772 set-2/ ; 5.2x / 6.5x
% 1106440058916 set-3/ ; 5.2x / 6.4x
% 675664608564 set-4/ ; 5.1x / 6.1x
% 807115490052 set-5/ ; 5.0x / 6.0x
SQL databases support a relational structure. However, the lack of
extensive compression means that our datasets would
consume a huge amount of space. We also expected that many complex
queries would not benefit from SQL and would require extracting
the entire tables through the slow SQL connection.
Therefore, we selected DataSeries as an efficient and compact format
for storing traces. It uses a relational data model, so
there are rows of data, with each row comprised of the same typed
columns. A column can be nullable, in which case there is a hidden
boolean field for storing whether the value is null. Groups of rows
are compressed as a unit. Prior to compression,
various transforms are applied to reduce the size of the data. First,
duplicate strings are collapsed down to a single
string. Second, values are delta compressed relative to either the
same value in the previous row or another value in the same row. For
example, the packet time values are delta compressed, making them
more compressible by a general purpose compression algorithm.
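The benefit of the delta transform on packet times is easy to see in a small
sketch (Python with made-up timestamps, purely illustrative; DataSeries
applies such transforms to packed row groups before handing them to the
compressor):
\begin{verbatim}
import struct
import zlib

# Hypothetical microsecond timestamps: large absolute values, small increments.
times = [1_200_000_000 + 40 * i for i in range(10_000)]
deltas = [times[0]] + [b - a for a, b in zip(times, times[1:])]

def compressed_size(values):
    return len(zlib.compress(struct.pack("<%dq" % len(values), *values)))

print("raw:  ", compressed_size(times))
print("delta:", compressed_size(deltas))   # far smaller: the deltas repeat
\end{verbatim}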
DataSeries is designed for efficient access. Values are packed so that
once a group of rows is read in, an analysis can iterate over them
simply by increasing a single counter, as with a C++ vector.
Individual values are accessed by an offset from that
counter and a C++ cast. Byte swapping is automatically
performed if necessary. The offset is not fixed, so the same analysis can read
different versions of the data, provided the meaning of the fields
has not changed. Efficient access to subsets of the data is supported
by an automatically generated index.
DataSeries is designed for generality. It supports versioning on the
table types so that an analysis can properly interpret data that may
have changed in meaning. It has special support for time fields so
that analysis can convert to and from different raw formats.
DataSeries is designed for integrity. It has internal checksums on
both the compressed and the uncompressed data to validate that the
data has been processed appropriately. Additional details on the
format, additional transforms, and comparisons to a wide variety of
alternatives can be found in the technical
report~\cite{DSTechnicalReportSnapshot}.
\subsection{Conversion performance}
To perform the conversion in parallel, we divide the collected files
into groups and process each group separately. We make two passes
through the data. First, we parse the data and count the number of
requests or replies. Second, we use those counts to determine the
first record-id for each group, and convert the files. Since NFS
parsing requires the request to parse the reply, we currently do not
parse any request-reply pairs that cross a group boundary. Similarly,
we do not do full TCP reconstruction, so for NFS over TCP, we parse
multiple requests or replies if the first one starts at the beginning of the packet.
These limitations are similar to earlier work, so we found
them acceptable.
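A sketch of the record-id bookkeeping (Python, with illustrative counts only):
the first pass yields a per-group count of requests and replies, and a running
prefix sum over those counts gives each group its starting record-id for the
second, parallel pass.
\begin{verbatim}
# Hypothetical per-group counts from the first (counting) pass.
group_counts = [12_000_000, 11_500_000, 12_300_000, 9_800_000]

first_record_id = []
next_id = 0
for count in group_counts:
    first_record_id.append(next_id)   # group starts after all earlier records
    next_id += count

print(first_record_id)   # [0, 12000000, 23500000, 35800000]
\end{verbatim}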
We run the conversion locally on the 8-way tracing
machine rather than a cluster because conversion runs faster than the
1Gbit LAN connection we had at the customer site (the tracing card
does not act as a normal NIC). Conversion of a full data set (30TiB)
takes about 3 days.
We do offline conversion from trace files, rather than online conversion, primarily for simplicity.
However, a side benefit was that our converter could be
paranoid and conservative, rather than have it try to recover from
conversion problems, since we could fix the converter when it was
mis-parsing or was too conservative. The next time we trace, we plan
to do more on-the-fly conversion by converting early groups and
deleting those trace files during capture so that we can capture
longer traces.
\subsection{Trace anonymization}
In order to release the traces, we have to obscure private data such
as filenames. There are three primary ways to map values in order to
anonymize them:
\begin{enumerate}
\item {\bf unique integers}. This option results in the
most compact identifiers ($\leq$ 8 bytes), but is difficult to
calculate in parallel and requires a large translation table to
maintain persistent mappings and to convert back to the original data.
\item {\bf hash/HMAC}. This option results in larger identifiers
(16-20 bytes), but enables parallel conversion. A keyed
HMAC~\cite{Bellare96keyinghash} instead of a hash protects against
dictionary attacks. Reversing this mapping requires preserving a
large translation table.
\item {\bf encrypted values}. This option results in
the longest identifiers since the encrypted value will be at least as
large as the original value. It is parallelizable and easily reversible
provided the small keys are maintained.
\end{enumerate}
We chose the last approach because it preserved the maximum
flexibility, and allowed us to easily have discussions with the
customer about unexpected issues such as writes to what should have
been a read-only filesystem. Our encryption includes a self-check, so
we can convert back to real filenames by decrypting all hexadecimal
strings and keeping the ones that validate. We have also used the
reversibility to verify for a colleague that they properly identified
the `.' and `..' filenames.
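The decrypt-and-validate trick can be sketched as follows (Python, using the
third-party cryptography package's Fernet construction as a stand-in for the
converter's actual encryption scheme, and a made-up filename): because each
token carries an authentication tag, decrypting every hexadecimal string and
keeping only the ones that validate recovers exactly the encrypted filenames.
\begin{verbatim}
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()        # held only by the tracing team
f = Fernet(key)

def anonymize(filename):
    # Encrypt the whole filename and emit it as a hexadecimal string.
    return f.encrypt(filename.encode()).hex()

def try_deanonymize(hex_string):
    # The self-check: anything that is not one of our tokens fails to decrypt.
    try:
        return f.decrypt(bytes.fromhex(hex_string)).decode()
    except (ValueError, InvalidToken):
        return None

token = anonymize("shot_042/frame_0001.ext")
print(try_deanonymize(token))        # the original filename
print(try_deanonymize("deadbeef"))   # None: fails the self-check
\end{verbatim}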
We chose to encrypt entire filenames since the suffixes are specific
to the animation process and are unlikely to be useful to people.
This choice also simplified the discussions about publishing the
traces. Since we can decrypt, we could in the future change this
decision.
The remaining values were semi-random (IP addresses in the 10.*
network, filehandles selected by the NFS servers), so we pass those
values through unchanged. We decided that the filehandle content,
which includes for our NFS servers the filesystem containing the file,
could be useful for analysis. Filehandles could also be anonymized.
All jobs in the customers' cluster were being run as a common user, so
we did not capture user identifiers. Since they are transitioning
away from that model, future traces would include unchanged user
identifiers and group identifiers. If there were public values in the
traces, then we would have had to apply more sophisticated
anonymization~\cite{ruoming07anonymization}.
% LocalWords: Gbit tcpdump Gb Leung pg lindump driverdump endacedump pcap mmap
% LocalWords: filesystem DL pps tmpfs gzip NIC IP MiB Endace timestamps lzf du
% LocalWords: Ghz Opterons TiB PCI Gbps CIFS iSCSI anonymization chunked RPC
% LocalWords: hashtable Veitch Keeton Ellard readdir RPC's analyses SQL perl
% LocalWords: DataSeries bzip nfs nullable versioning LAN offline mis HMAC
% LocalWords: anonymize parallizable filehandles filehandle anonymized
% Created 2016-04-02 Sat 19:10
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\usepackage[version=3]{mhchem}
\author{Benjamin Bass}
\date{18 February 2016}
\title{Earth Materials: Intro to Silicates}
\hypersetup{
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 24.4.1 (Org mode 8.2.10)}}
\begin{document}
\maketitle
\tableofcontents
\pagebreak
\section{Marching Throught the Silicates}
\label{sec-1}
Most Polymerized -> Least Polymerized.
\subsection{Most Polymerized: Tectosilicates}
\label{sec-1-1}
Make up 2/3rds of the Crust
Simplest = \ce{SiO2} Group (silica group)
We find that the \ce{SiO2} group has many forms:
polymorphs (same chemical composition, different crystal structures)
\subsubsection{Silica Polymorphs}
\label{sec-1-1-1}
Alpha Quartz
Coesite
A really important point: the environment of growth of quartz.
Environment:
\begin{itemize}
\item 2nd most abundant
\item can grow or be found:
\begin{itemize}
\item igneous
\item Metamorphic
\item sedimentary
\item hydrothermal
\item These need silica saturated chemistry
\item they're Felsic (high in silica)
\end{itemize}
\item Felsic
\item Igneous rocks
\begin{itemize}
\item granite -> pegmatite (beautiful and economic) (ultra-felsic igneous rock)
\item rhyolite
\end{itemize}
\item Metamorphic
\begin{itemize}
\item almost any schist, gneiss
\item Sedimentary
\begin{itemize}
\item as common and detrital grains
\item as a chemical cement
\end{itemize}
\end{itemize}
\end{itemize}
\uline{Natural Fluids}
\begin{itemize}
\item quartz precipitate
\item can be very fine grained
\begin{itemize}
\item "cryptocrystalline"
\begin{itemize}
\item agate
\item chalcedony
\item chert or flint
\end{itemize}
\end{itemize}
\end{itemize}
Fulgurite: if lightning hits silica-rich soil.
Opal:
\begin{itemize}
\item Not quite a mineral because it's amorphous
\item SiO$_{\text{2}}$ + H$_{\text{2}}$O
\item low T fluids
\item also biomineral
\item ex: plant phytoliths
\end{itemize}
\textbf{Whats the most abundant mineral in the crust: Feldspar}
\textbf{Quartz is in second place}
Quartz: Crystal shape, hexagonal prism with degraded symmetry. Glassy luster.
Color is not diagnostic.
What causes color in Quartz:
\begin{itemize}
\item clear
\item smoky (because of aluminum, if aluminum is present in the quartz's growth environment)
\end{itemize}
\uline{Feldspars}
most abundant in the crust
(Si + Al)
\rule{\linewidth}{0.5pt}
O = 1/2
Structure
<classPic>
hole = 9-fold distorted site: K, Na, Ca
T$_{\text{1}}$, T$_{\text{2}}$ = Si or Al
Specific feldspar minerals are distinguished by the 9-fold site cation
and \uline{Al, Si content ordering}
\uline{Feldspars}
Feldspar composition
(K,Na)$_{\text{1-x}}$ Ca$_{\text{x}}$ Al$_{\text{1+x}}$ Si$_{\text{3-x}}$ O$_{\text{8}}$
where x = 0 to 1
\textbf{Ternary Diagram}
3 Polymorphs @ Kspar
differ only in their ordering of Al, Si
Sanidine: Complete disorder -> monoclinic, $> 900^{\circ}$C
Orthoclase: somewhere in the middle -> monoclinic
Microcline: completely ordered -> triclinic (low symmetry), $< 500^{\circ}$C
Plag:
\begin{enumerate}
\item albite
\item oligoclase
\item andesine
\item labradorite
\item bytownite
\item anorthite
\end{enumerate}
recall \uline{exsolution}
\begin{itemize}
\item refers to chemical unmixing upon cooling below the \uline{solvus}.
\end{itemize}
perthitic texture in K-spar
\uline{Twinning}
3 types of twins
\begin{enumerate}
\item Contact Twins: shares a plane
\item Interpenetration twin: grown together; might share a rotational axis
\item Polysynthetic Twins: many repeated crystals
\end{enumerate}
Twins get pink highlighter.
% Emacs 24.4.1 (Org mode 8.2.10)
\end{document}
\chapter{Hypothesis Testing and p-values}
\begin{ex}
Suppose that the true value of $\theta$ is $\theta_\star\neq \theta_0$. Then
\begin{align*}
W
=\frac{\thetahat-\theta_0}{\sehat}
=\frac{\thetahat-\theta_\star+\theta_\star-\theta_0}{\sehat}
=\frac{\thetahat-\theta_\star}{\sehat}-\frac{\theta_0-\theta_\star}{\sehat},
\end{align*}
and therefore
\begin{align*}
\P{|W|>z_{\alpha/2}}
& =\P{\frac{\thetahat-\theta_\star}{\sehat}-\frac{\theta_0-\theta_\star}{\sehat}>z_{\alpha/2}}
+\P{\frac{\thetahat-\theta_\star}{\sehat}-\frac{\theta_0-\theta_\star}{\sehat}<-z_{\alpha/2}} \\
& =\P{\frac{\thetahat-\theta_\star}{\sehat}>\frac{\theta_0-\theta_\star}{\sehat} + z_{\alpha/2}}
+\P{\frac{\thetahat-\theta_\star}{\sehat}<\frac{\theta_0-\theta_\star}{\sehat}-z_{\alpha/2}} \\
& =1-\P{\frac{\thetahat-\theta_\star}{\sehat}<\frac{\theta_0-\theta_\star}{\sehat} + z_{\alpha/2}}
+\P{\frac{\thetahat-\theta_\star}{\sehat}<\frac{\theta_0-\theta_\star}{\sehat}-z_{\alpha/2}},
\end{align*}
which, since $(\thetahat-\theta_\star)/\sehat \approx N(0, 1)$, implies that
\[
\P{|W|>z_{\alpha/2}}\approx
1-\Phi\left(\frac{\theta_0-\theta_\star}{\sehat} + z_{\alpha/2}\right)
+\Phi\left(\frac{\theta_0-\theta_\star}{\sehat}-z_{\alpha/2}\right).
\]
\end{ex}
\begin{ex}
By Theorem 10.12,
\[
p=\mathbb{P}_{\theta_0}{(T(X^n)\geq T(x^n))}=1-F(T(x^n)),
\]
where $T(X^n)\sim F$ under $H_0:\theta=\theta_0$. Therefore,
\[
\P{p<y}
=\P{1-F(T(Y^n))<y},
\]
where $T(Y^n)\sim F$ and therefore by Exercise 2.15,
$F(T(Y^n))\sim\text{Uniform}(0, 1)$. Finally, note that if
$U\sim\text{Uniform}(0,1)$ then $1-U\sim\text{Uniform}(0,1)$ as well, and that
therefore
\[
\P{p<y}=yI_{[0,1]}(y),
\]
the CDF of a $\text{Uniform}(0,1)$ distribution.
\end{ex}
\begin{ex}
Note that $\theta_0\not\in C$, where
\[
C=(\thetahat-\sehat \, z_{\alpha/2}, \thetahat+\sehat \, z_{\alpha/2}),
\]
if and only if
\[
\theta_0> \thetahat+\sehat\, z_{\alpha/2}\text{, or }
\theta_0< \thetahat-\sehat\, z_{\alpha/2}.
\]
This is equivalent to
\[
|\theta_0-\thetahat|> \sehat\, z_{\alpha/2},
\]
which is equivalent to
\[
\frac{|\thetahat-\theta_0|}{\sehat} > z_{\alpha/2},
\]
but this is precisely the size $\alpha$ Wald test.
\end{ex}
\begin{ex}
Note that
\[
p\text{-value}
=\inf\left\{\alpha \mid T(X^n)\geq c_\alpha\right\},
\]
where $\alpha=\sup_{\theta\in\Theta_0}\mathbb{P}_{\theta}(T(X^n)\geq
c_\alpha)$ is a decreasing function of $c_\alpha$, and that therefore,
having observed $x^n$, the smallest value of $\alpha$ will be obtained for
the largest $c_\alpha$ such that we still reject the null, $c_\alpha=T(x^n)$.
The size of the test for which we reject the null is then
\[
\sup_{\theta\in\Theta_0}\mathbb{P}_{\theta}(T(X^n)\geq T(x^n))
\]
and therefore
\[
p\text{-value}
=\sup_{\theta\in\Theta_0}\mathbb{P}_{\theta}(T(X^n)\geq T(x^n)),
\]
or, in the case where $\Theta_0=\{\theta_0\}$, the supremum is over only a
single element and therefore
\[
p\text{-value}
=\mathbb{P}_{\theta_0}(T(X^n)\geq T(x^n)).
\]
\end{ex}
% 5
\begin{ex}
Let $X_1,\ldots,X_n\sim\text{Uniform}(0,\theta)$ and
$Y=\max\{X_1,\ldots,X_n\}$.
\begin{enumerate}[(a)]
\item We have
\begin{align*}
\beta(\theta_0)
& =\mathbb{P}_{\theta_0}(Y>c) \\
& =1-\mathbb{P}_{\theta_0}(Y\leq c) \\
    & =1-\mathbb{P}_{\theta_0}(X_1\leq c)\cdots\mathbb{P}_{\theta_0}(X_n\leq c) \\
& =1-(c/\theta_0)^n
\end{align*}
for $c\in[0,\theta_0]$.
\item Solving
\[
0.05=1-(c/\theta_0)^n
\]
for $c$ we get
\[
c=0.95^{1/n}\theta_0,
\]
or, substituting $0.5$ for $\theta_0$,
\[
c=0.5\cdot 0.95^{1/n}.
\]
\item We have
\[
p=1-(0.48/0.5)^{n},
\]
        which for $n=20$ gives $p\approx 0.558$; the test therefore does
        not provide any evidence against $H_0$ (a numerical check follows this list).
\item Note that $Y=0.52$ is outside the range $[0,\theta_0]$, and therefore
$\mathbb{P}_{\theta_0}(Y>0.52)=0$. The $p$-value is therefore $0$, and
we can thus reject $H_0$ at any level.
\end{enumerate}
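  A quick numerical check of (b)--(d) (a sketch assuming the exercise's sample
  size $n=20$, the value implied by $p\approx 0.558$ in part~(c)):
  \begin{minted}{python}
# Numerical check, assuming n = 20 and theta_0 = 0.5.
n, theta0 = 20, 0.5

c = theta0 * 0.95 ** (1 / n)                 # part (b): critical value
p_c = 1 - (0.48 / theta0) ** n               # part (c): p-value for Y = 0.48
p_d = 0.0 if 0.52 > theta0 else 1 - (0.52 / theta0) ** n   # part (d)

print(round(c, 4), round(p_c, 3), p_d)       # 0.4987 0.558 0.0
  \end{minted}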
\end{ex}
\begin{ex}
Since we will need to construct a confidence interval, we will use the Wald
test for a Binomial distribution. Let $H_0: p=1/2$ and $H_1:p\neq 1/2$. By
Exercise 10.15, $\phat=X/n=922/1919$ with
  $\sehat(\phat)=\sqrt{\phat(1-\phat)/n}$. The test statistic is
\[
\frac{|\phat-p_0|}{\sehat(\phat)}
\]
and thus by Theorem 10.13 the $p$-value is given by
\[
\P{|Z|>\frac{|\phat-p_0|}{\sqrt{\phat(1-\phat)/n}}}
=2\Phi\left(
-\frac{|\phat-p_0|}{\sqrt{\phat(1-\phat)/n}}
\right).
\]
\inputminted{python}{../code/10-06.py}
\inputminted{text}{../output/10-06.txt}
The test indicates weak evidence against $H_0$.
\end{ex}
\begin{ex}~
\begin{enumerate}[(a)]
\item Let $X_1$ be the distribution of the proportion of three-letter words
in the Twain essays, and let $X_2$ be the distribution in the
Snodgrass essays. Recall from Example 10.8 that the plug-in estimator
for a difference of means $d=\mu_1-\mu_2$ is given by the difference
of the plug-in estimators of the means, the sample averages. Thus,
\[
\widehat{d}=\Xbar_1-\Xbar_2,
\]
with
\[
\sehat(\widehat{d})
=\sqrt{(\sehat(\muhat_1))^2+(\sehat(\muhat_2))^2},
\]
where
\[
\sehat(\muhat_j)=\frac{1}{\sqrt{n_j}}
\sqrt{\frac{1}{n_j}\sum_{i=1}^{n_j}(X_{j,i}-\Xbar_j)^2}.
\]
Note that $H_0: d=0$, and therefore the size $\alpha$ Wald test for
this hypothesis is obtained by checking whether
\[
\frac{\left|\Xbar_1-\Xbar_2\right|}{
\sqrt{(\sehat(\Xbar_1))^2+(\sehat(\Xbar_2))^2}}
> z_{\alpha/2}.
\]
\inputminted{python}{../code/10-07.py}
\inputminted{text}{../output/10-07.txt}
We may conclude that there is very strong evidence to reject the null
hypothesis, i.e.\ that the proportion of three-letter words is
similarly distributed in both sets of essays.
\item We obtain the same result from the permutation test: we have very
            strong evidence that the samples come from sets with different means.
\end{enumerate}
\end{ex}
\begin{ex}
Let $X_1,\ldots,X_n\sim N(\theta, 1)$. We are testing $H_0:\theta=0$ versus
$H_1:\theta=1$, using the rejection region $R=\{x^n \mid T(x^n)>c\}$ where
$T(x^n)=\Xbar$.
\begin{enumerate}[(a)]
\item Note that
\begin{align*}
\alpha
& = \mathbb{P}_{\theta=0}\left(\Xbar > c \right) \\
& = \mathbb{P}_{\theta=0}\left(\sqrt{n}\Xbar > \sqrt{n}c \right) \\
& = \mathbb{P}_{\theta=0}\left(Z > \sqrt{n}c \right) \\
& = 1-\Phi(\sqrt{n}c),
\end{align*}
and that therefore $c=z_{\alpha}/\sqrt{n}$.
\item We have
\begin{align*}
\beta(1)
& = \mathbb{P}_{\theta=1}\left(\Xbar > c \right) \\
& = \mathbb{P}_{\theta=1}\left(\Xbar-1 > c-1 \right) \\
& = \mathbb{P}_{\theta=1}\left(\sqrt{n}(\Xbar-1) > \sqrt{n}(c-1) \right) \\
            & = \mathbb{P}_{\theta=1}\left(Z > \sqrt{n}(c-1) \right) \\
& = 1-\Phi(\sqrt{n}({c-1})) \\
& = 1-\Phi(z_\alpha-\sqrt{n}).
\end{align*}
\item Note that
\begin{align*}
\lim_{n\to\infty}\beta_n(1)
& =\lim_{n\to\infty}[1-\Phi(z_\alpha-\sqrt{n})] \\
& =1-\lim_{u\to-\infty}\Phi(u) \\
& =1.
\end{align*}
\end{enumerate}
\end{ex}
\begin{ex}
Note that
\begin{align*}
\beta(\theta_1)
& =\mathbb{P}_{\theta_1}\left(\left|\frac{\thetahat-\theta_0}{\sehat}\right| > z_{\alpha/2} \right) \\
& =\mathbb{P}_{\theta_1}\left(\frac{\thetahat-\theta_0}{\sehat} > z_{\alpha/2} \right)
+\mathbb{P}_{\theta_1}\left(\frac{\thetahat-\theta_0}{\sehat} < -z_{\alpha/2} \right) \\
& =1
-\mathbb{P}_{\theta_1}\left(\frac{\thetahat-\theta_0}{\sehat} < z_{\alpha/2} \right)
+\mathbb{P}_{\theta_1}\left(\frac{\thetahat-\theta_0}{\sehat} < -z_{\alpha/2} \right) \\
& =1
-\mathbb{P}_{\theta_1}
\left(\frac{\thetahat-\theta_1}{\sehat}
+\frac{\theta_1-\theta_0}{\sehat}
< z_{\alpha/2} \right)
+\mathbb{P}_{\theta_1}\left(
\frac{\thetahat-\theta_1}{\sehat}
+\frac{\theta_1-\theta_0}{\sehat}
< -z_{\alpha/2} \right) \\
& =1
-\mathbb{P}_{\theta_1}
\left(\frac{\thetahat-\theta_1}{\sehat}
< z_{\alpha/2}
-\frac{\theta_1-\theta_0}{\sehat}
\right)
+\mathbb{P}_{\theta_1}\left(
\frac{\thetahat-\theta_1}{\sehat}
< -z_{\alpha/2}
-\frac{\theta_1-\theta_0}{\sehat}
\right) \\
    & \approx 1-\Phi\left(z_{\alpha/2}-\sqrt{nI(\thetahat)}(\theta_1-\theta_0)\right)
    +\Phi\left(-z_{\alpha/2}-\sqrt{nI(\thetahat)}(\theta_1-\theta_0)\right).
  \end{align*}
  Since $\theta_1>\theta_0$, both arguments tend to $-\infty$ as $n\to\infty$, so
  \[
    \beta(\theta_1)
    \to 1 - \lim_{u\to-\infty}\Phi(u)+\lim_{u\to-\infty}\Phi(u)
    = 1.
\]
\end{ex}
% 10
\begin{ex}
We will use the Wald test to check whether there is a difference between the
proportion of deaths before the Chinese Harvest Moon Festival between the two
groups.
\inputminted{python}{../code/10-10.py}
\inputminted{text}{../output/10-10.txt}
We find that there is little to no evidence against $H_0$.
\end{ex}
\begin{ex}~
\begin{enumerate}[(a)]
\item
\inputminted{python}{../code/10-11.py}
\inputminted{text}{../output/10-11.txt}
Since only the $p$-value for Chlorpromazine is less than $0.05$, the
only null hypothesis that we may reject is that the placebo is
similarly effective as Chlorpromazine.
\item Our finding remains statistically significant at the $0.05$ level
under both the Bonferroni or the Benjamini-Hochberg multiple
testing method corrections.
\end{enumerate}
\end{ex}
\begin{ex}~
\begin{enumerate}[(a)]
\item Let $X_1,\ldots,X_n\sim\text{Poisson}(\lambda)$. We begin by computing
the maximum likelihood estimator for $\lambda$. Recall that then
\[
f(x;\lambda)=e^{-\lambda}\frac{\lambda^x}{x!},
\]
and that therefore
\[
\ell_n(\lambda)=\sum_{i=1}^n-\lambda+X_i\log(\lambda)+\log(X_i!).
\]
Hence,
\[
\frac{\d\ell_n(\lambda)}{\d\lambda}
=\sum_{i=1}^n\left[-1+\frac{X_i}{\lambda}\right]
=\frac{n(\Xbar-\lambda)}{\lambda}
\text{ implies }
\lambda=\Xbar,
\]
and it is clear by the second derivative test that this is a maximum.
Hence, $\widehat{\lambda}=\Xbar$, and
$\sehat(\widehat{\lambda})=\sqrt{\Xbar/n}$.
Let $H_0:\lambda=\lambda_0$. Then, the size $\alpha$ Wald test is
given by rejecting $H_0$ when
\[
\frac{\sqrt{n}|\Xbar-\lambda_0|}{\sqrt{\Xbar}}>z_{\alpha/2}.
\]
\item
\inputminted{python}{../code/10-12.py}
\inputminted{text}{../output/10-12.txt}
\end{enumerate}
\end{ex}
\begin{ex}
Let $X_1,\ldots, X_n\sim N(\mu, \sigma^2)$. Recall from Example 9.11 that then
\[
\ell(\mu, \sigma)
=-n\log\sigma -\frac{nS^2}{2\sigma^2}-\frac{n(\Xbar-\mu)^2}{2\sigma^2}.
\]
Therefore,
\[
\frac{\pd \ell(\mu, \sigma)}{\pd \mu}
=\frac{n(\Xbar-\mu)}{\sigma^2}
\text{ implies }
\muhat = \overline{X},
\]
and
\[
\frac{\pd \ell(\Xbar, \sigma)}{\pd \sigma}=-\frac{n}{\sigma}+\frac{nS^2}{\sigma^3}
\text{ implies }
\sigmahat = S.
\]
We have $\Theta_0=\{(\mu, \sigma) \mid \mu=\mu_0\}$ and so for the likelihood
ratio test we have
\begin{align*}
\lambda
& = 2\log\left(\frac{\sup_{\theta\in\Theta}\L(\theta)}{\sup_{\theta_0\in\Theta_0}\L(\theta_0)} \right) \\
& =2\ell(\Xbar, S)-2\ell(\mu_0, S) \\
& =\frac{n(\Xbar-\mu_0)^2}{S^2},
\end{align*}
where, under $H_0$, we expect $\lambda\rightsquigarrow \chi^2$, and
therefore have $p$-value
\[
\P{\chi^2>\frac{n(\Xbar-\mu_0)^2}{S^2}}.
\]
Recall that $\se(\muhat)=\sigma/\sqrt{n}$, and that therefore the $p$-value
for the Wald test is given by
\[
\P{|Z|>\frac{\sqrt{n}|\Xbar -\mu_0|}{S}},
\]
or, by squaring both sides,
\[
\P{Z^2>\frac{n(\Xbar -\mu_0)^2}{S^2}},
\]
which is equivalent to the likelihood-ratio test since $Z^2$ has a chi-squared
distribution.
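As a quick numerical check of this equivalence (the simulated data and
parameter values below are arbitrary, and this listing is not part of the
repository's code):
\begin{minted}{python}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, mu0 = 100, 0.0
x = rng.normal(loc=0.2, scale=1.0, size=n)

xbar, s = x.mean(), x.std()  # MLEs, so s matches S above
wald = np.sqrt(n) * abs(xbar - mu0) / s
lrt = n * (xbar - mu0) ** 2 / s ** 2

print(wald ** 2, lrt)            # identical
print(2 * stats.norm.sf(wald))   # Wald p-value
print(stats.chi2.sf(lrt, df=1))  # LRT p-value, the same number
\end{minted}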
\end{ex}
\begin{ex}
Let $X_1,\ldots, X_n\sim N(\mu, \sigma^2)$. Recall that
\[
\ell(\mu, \sigma)=-n\log\sigma -\frac{nS^2}{2\sigma^2}-\frac{n(\Xbar-\mu)^2}{2\sigma^2},
\]
and by an identical argument to the previous problem,
\[
\muhat=\Xbar\text{ and }\sigmahat=S.
\]
We then have $\Theta_0=\{(\mu, \sigma) \mid \sigma=\sigma_0\}$, and therefore
\begin{align*}
\lambda
& =2\log\left(\frac{\sup_{\theta\in\Theta}\L(\theta)}{\sup_{\theta_0\in\Theta_0}\L(\theta_0)} \right) \\
& =2\ell(\Xbar, S)-2\ell(\Xbar, \sigma_0) \\
& = n\left[2\log\frac{\sigma_0}{S}+\frac{S^2}{\sigma_0^2}-1\right] \\
& = 2n\log\frac{\sigma_0}{S}+\frac{n(S^2-\sigma_0^2)}{\sigma_0^2},
\end{align*}
where, under $H_0$, we expect $\lambda\rightsquigarrow \chi^2$, and
therefore have $p$-value
\[
\P{\chi^2>
2n\log\frac{\sigma_0}{S}+\frac{n(S^2-\sigma_0^2)}{\sigma_0^2}
}.
\]
Recall that $\se(\sigmahat)=\sigma/\sqrt{2n}$, and that therefore the
$p$-value for the Wald test is given by
\[
\P{|Z|>\frac{\sqrt{2n}|S -\sigma_0|}{S}},
\]
or,
\[
\P{Z^2>\frac{2n(S -\sigma_0)^2}{S^2}}.
\]
\end{ex}
% 15
\begin{ex}
Let $X \sim \text{Binomial}(n, p)$. Note that then
\[
\ell(n, p)=\log\binom{n}{X}+X\log{p}+(n-X)\log(1-p),
\]
and therefore
\[
\frac{\pd\ell(n, p)}{\pd{p}}
=\frac{X}{p}-\frac{n-X}{1-p}
\text{ implies }
\phat =X/n.
\]
We have $\Theta_0=\{p_0\}$. Therefore,
\[
\lambda
=2\log\left(\frac{\sup_{\theta\in\Theta}\L(\theta)}{\sup_{\theta_0\in\Theta_0}\L(\theta_0)} \right)
=2\ell(n, \phat)-2\ell(n, p_0)
=2X\log\left(\frac{\phat}{p_0}\right)+2(n-X)\log\left(\frac{1-\phat}{1-p_0}\right),
\]
where, under $H_0$, we expect $\lambda\rightsquigarrow \chi^2$.
Note that by Exercise 9.7, $\sehat(\phat)=\sqrt{\phat(1-\phat)/n}$, and
therefore the $p$-value of the Wald test is given by
\[
\P{|Z|>\frac{\sqrt{n}|\phat-p_0|}{\sqrt{\phat(1-\phat)}}}.
\]
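As a quick numerical illustration (the counts and $p_0$ below are
hypothetical, and this listing is not part of the repository's code):
\begin{minted}{python}
import numpy as np
from scipy import stats

n, X, p0 = 200, 90, 0.5  # hypothetical counts
phat = X / n

lrt = (2 * X * np.log(phat / p0)
       + 2 * (n - X) * np.log((1 - phat) / (1 - p0)))
wald = np.sqrt(n) * abs(phat - p0) / np.sqrt(phat * (1 - phat))

print(stats.chi2.sf(lrt, df=1))  # likelihood ratio p-value
print(2 * stats.norm.sf(wald))   # Wald p-value (close, not equal)
\end{minted}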
\end{ex}
\begin{ex}
Let $\ell(\theta)$ be a log-likelihood. Note that then the second degree
Taylor polynomial expansion of $\ell$ around the MLE $\thetahat$ is given by
\[
\ell(\theta)
\approx
\ell(\thetahat)+\ell'(\thetahat)(\theta-\thetahat)
+\frac{1}{2}\ell''(\thetahat)(\theta-\thetahat)^2,
\]
and therefore,
\[
\lambda
=2\ell(\thetahat)-2\ell(\theta_0)
\approx -\ell''(\thetahat)(\thetahat-\theta_0)^2
= I(\thetahat)(\thetahat-\theta_0)^2
= \frac{(\thetahat-\theta_0)^2}{\sehat^2(\thetahat)},
\]
but this is precisely $W^2$.
To complete the proof and show that $\frac{W^2}{\lambda}\xrightarrow{P} 1$, we
would need to control the error of this quadratic approximation to the
log-likelihood as the sample size grows, which does not hold without further
regularity conditions, and it does not look like we have the relevant results
in the book.
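Numerically, however, the claim is easy to illustrate. A small Monte Carlo
sketch under $H_0$ for a Poisson model (the sample sizes, seed, and
$\lambda_0$ below are arbitrary, and this listing is not part of the
repository's code):
\begin{minted}{python}
import numpy as np

rng = np.random.default_rng(0)
lam0 = 2.5
for n in (51, 501, 5001, 50001):
    x = rng.poisson(lam0, size=n)
    xbar = x.mean()
    W2 = n * (xbar - lam0) ** 2 / xbar
    lam = 2 * n * (lam0 - xbar + xbar * np.log(xbar / lam0))
    print(n, W2 / lam)  # the ratio approaches 1 as n grows
\end{minted}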
\end{ex}
\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.
\usepackage[square]{natbib}
\setcitestyle{numbers}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{cleveref}
\usepackage{diagbox}
\usepackage{xcolor}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\title{Social Conventions Developed by Multi Agent Systems Using Reinforcement Learning}
\author{\IEEEauthorblockN{Daniel Aaron Salwerowicz}
\IEEEauthorblockA{\textit{Institutt for datateknologi og beregningsorienterte ingeniørfag} \\
\textit{Arctic University of Norway, Narvik}\\
Narvik, Norway \\
[email protected]}
}
\maketitle
\begin{abstract}
In this report I describe my research on developing a system of self-driving taxis that operate in an imaginary city, picking up passengers after the cinema, opera, and restaurant close. Before that, I simulated simpler models to learn how to handle similar problems and to study the emergent social conventions.
\end{abstract}
\begin{IEEEkeywords}
Reinforcement learning, Erev-Roth model, Q-learning, Multi Agent Systems
\end{IEEEkeywords}
\section{Introduction}
This research project revolved around studying emerging social conventions developed by agents in a Multi Agent System (MAS) trained using reinforcement learning methods such as Q-learning and the Erev-Roth model. My goal was to see whether the agents would be able to reach a form of social convention, whether it was stable, and whether it was better than a convention developed by zero intelligence agents.
To begin with, I focused on a simple problem where I taught two cars to cross a bridge that only one of them could cross at a time. Then I moved on to study a group of cars crossing a bridge that could support only 5 cars at any given point before collapsing. My last task was the most interesting: there I studied the behaviour of self-driving taxis that pick up passengers from three different locations and haggle over the price.
\section{Problem specification}
The first problem is quite simple: one car is placed on each end of the bridge. The bridge only allows one car to cross it at a time, and the crossing takes ten minutes. The cars cannot see each other, and as such they need to decide whether to drive or not without knowing what the other did. If both of them decide to cross the bridge in the same time slot then they will crash, while waiting costs them $10$ minutes extra. In my simulation both of them need to cross the bridge before they can start again. This is of course a continuous play, so they will cross the bridge many times.
The second problem was a bit more advanced: there were more cars ($15$ in my case, though this can be easily changed) trying to cross the bridge. This time they all started from the same end of the bridge, so crashing into each other was not a problem; however, the bridge can only support 5 cars at the same time. As such, they need to learn when to cross the bridge, knowing the number of cars currently on it but not its capacity.
The last task revolved around an auction system where the self-driving cars were used as taxis, driving around and picking up people from the cinema, opera, and restaurant. Normally this happens smoothly, with each passenger paying a fixed price per minute driven. However, there was often a surge of passengers when one of these establishments closed, and if there were too few cars to meet the demand they would initiate an auction. The auction itself is a simple English auction where each passenger has an upper limit on how much they will bid, while each taxi has a lower limit. They are randomly paired, and if the taxi asks for less than what the passenger is willing to pay then they are matched and drive off; if they don't strike a deal, they try again with other agents. Of course, up to $4$ passengers can ride a taxi at the same time, which reduces the demand faster.
\section{Methods applied}
In the first case I used a simple Q-learning algorithm, where I set up a reward matrix (R-matrix) with the values shown in \cref{tab:RTablePart1}. These were used to build a Q-matrix that informs the cars' further actions. I also set up a check so that if both cars chose to drive in the same time slot they would be punished by a deduction of $200$ points, instead of receiving the reward of $100$ points given for driving.
The formula for updating the values in the Q-matrix is shown in \cref{eq:qAlgortihm}, where $\alpha$ stands for the learning factor, which can vary between $0$ and $1$, $\gamma$ stands for the discount factor with the same bounds as $\alpha$, $Q(s_{t+1}, a)$ is taken to be the maximal Q-value reachable from the new state, and $r_t$ stands for the reward given by the R-matrix or the rules.
\begin{equation}
Q_{new}(s_t,a_t) = Q_{old}(s_t,a_t) + \alpha \left(r_t + \gamma \max_{a} Q(s_{t+1}, a) - Q_{old}(s_t,a_t) \right)
\label{eq:qAlgortihm}
\end{equation}
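For concreteness, a minimal sketch of this tabular update rule is shown below; the two-state encoding, parameter values, and function names are illustrative only and not the exact code used in my simulations.
\begin{verbatim}
import numpy as np

# Tabular Q-learning update for the
# bridge problem (0 = wait, 1 = drive).
ALPHA, GAMMA = 0.9, 0.5
Q = np.zeros((2, 2))  # Q[state, action]

def update(s, a, reward, s_next):
    target = reward + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (target - Q[s, a])

# Example: the car drove while the other
# waited, earning +100 from the R-matrix.
update(s=0, a=1, reward=100, s_next=1)
print(Q)
\end{verbatim}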
\begin{table}[htbp]
\centerline{
\begin{tabular}{|l|l|l|}
\hline
\diagbox{Action}{State}& Wait & Drive \\ \hline
Wait & -10 & -10 \\ \hline
Drive & 100 & 100 \\ \hline
\end{tabular}}
\caption{Reward matrix for cars in first problem.}
\label{tab:RTablePart1}
\end{table}
For the second problem I chose to employ the Q-learning algorithm once again. In this case the reward structure is made up of three values: $-1$ point for waiting one minute, $100$ points for driving over, and $-200$ points for collapsing the bridge. The Q-matrix itself is a vector of length six for a bridge that can only support five cars. I have, however, set up the problem in such a way that I can change many of the parameters involved: the number of cars in the simulation, the number of cars approaching the bridge in a given timestep (decided by $\lambda$ in a Poisson distribution), the speed of the cars, the length of the bridge, and the maximum number of cars on the bridge.
In this case cars drive up to the bridge and decide each minute whether they want to cross or not, knowing how many cars are on the bridge at that moment. I chose to allow every car waiting in the queue to decide to cross, even if the car before it decided to wait. I also chose to punish only the car that caused the bridge to collapse, not the other cars that were on the bridge during the collapse.
In the last problem I employed the Erev-Roth model to train my cars instead of Q-learning, as I was tasked with using it. Since I lacked time, I went for a simple solution in which I assume that all taxis and passengers take part in the auction, regardless of demand. Each taxi and passenger starts with a set propensity for choosing to lower or raise their demand/offer. Propensities are updated by the simple formula shown in \cref{eq:ErevRoth}, where $q_j(t)$ represents the propensity for action $j$ at time $t$, $\phi$ is a recency parameter, and $r$ is the reward. After each update the propensities are normalized to always sum up to 1, and are then used as probabilities for choosing whether to raise or lower the bids.
\begin{equation}
q_j(t+1) = (1-\phi) \cdot q_j(t) + r
\label{eq:ErevRoth}
\end{equation}
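A minimal sketch of one possible reading of this update, in which only the chosen action is reinforced before renormalization, is shown below; the variable names and values are illustrative only and not my exact implementation.
\begin{verbatim}
import numpy as np

PHI = 0.25  # recency parameter

def update(prop, action, reward):
    # Decay all propensities, reinforce the
    # chosen action, then renormalize so the
    # propensities sum to 1 (rewards are
    # assumed non-negative in this sketch).
    prop = (1 - PHI) * prop
    prop[action] += reward
    return prop / prop.sum()

def choose(prop, rng):
    # Propensities act as probabilities.
    return rng.choice(len(prop), p=prop)

rng = np.random.default_rng(0)
prop = np.array([0.5, 0.5])  # lower, raise
prop = update(prop, action=1, reward=0.3)
print(prop, choose(prop, rng))
\end{verbatim}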
In all cases the zero intelligence agents choose their actions randomly.
\section{Results}
In the case of the first problem, both cars quickly learned to reach a mutual agreement where one car simply drives first, while the other one waits for it to cross and then drives afterwards. This is far from a perfect convention, as one car becomes the ``alpha'' car, never waiting for the other and thus getting a better score than the ``beta'' car, which always waits first and loses ten minutes.
However, it is a social convention nonetheless, and quite a stable one at that. The cars learn quite quickly not to crash and always end up with an overall increasing score, as shown in \cref{fig:ScoresPart1}. One can also see the scores for the zero intelligence agents: their scores are overall decreasing, even if sometimes by pure chance they manage to increase them for a while. The reason for their close relation is the way I set up my simulation: instead of each decision being an isolated encounter, I wait until both cars have crossed the bridge before letting them drive again.
\begin{figure}[htbp]
\centerline{\includegraphics[width=.4\textwidth]{Part1_QLearning}}
\centerline{\includegraphics[width=.4\textwidth]{Part1_Random}}
\caption{Scores for the learning agents and the zero intelligence agents (respectively) in the first problem.}
\label{fig:ScoresPart1}
\end{figure}
In \cref{tab:QTablePart1} I have provided one of the possible Q-matrices formed after $1000$ turns, with each car having a learning factor of $0.9$ and a discount factor of $0.5$.
\begin{table}[htbp]
\centerline{
\begin{tabular}{|l|l|l|}
\hline
\diagbox{A}{S}& Wait & Drive \\ \hline
Wait & -16.67 & 126.67 \\ \hline
Drive & 53.33 & -203.87 \\ \hline
\end{tabular}
\begin{tabular}{|l|l|l|}
\hline
\diagbox{A}{S}& Wait & Drive \\ \hline
Wait & -13.95 & -203.87 \\ \hline
Drive & -13.95 & 200.00 \\ \hline
\end{tabular}}
\caption{Q-matrix for each of the cars in the simulation: ``submissive'' car on the left, ``dominant'' one on the right.}
\label{tab:QTablePart1}
\end{table}
Using Q-learning I have also been able to develop a very effective social convention where each car learns not to drive if the bridge is too full, and as such their scores always manage to increase. Increasing the $\lambda$ value leads to more cars waiting in line, which causes them to earn a lower score, but it does not affect the overall result. However, increasing the number of cars and lowering the bridge's capacity leads to a surprising result where there is a very visible split among the scores. The values used in the simulation are: $\lambda: \, 1.5$, number of cars: $15$, $\alpha: \, 0.9$, $\phi: \, 0.66$. Scores for the cars in the normal situation and with an increased number of cars and lowered bridge capacity can be seen in \cref{fig:ScoresPart2}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=.4\textwidth]{Part2_QLearning}}
\centerline{\includegraphics[width=.4\textwidth]{Part2_Curiosity}}
\caption{Scores for normal case and situation with more cars and lower capacity.}
\label{fig:ScoresPart2}
\end{figure}
In \cref{tab:QTablePart2} I have provided one of the possible Q-matrices formed after $10000$ rounds by one of the cars. This is of course highly variable, and some cars end up with slightly negative values in places other than the last state, which corresponds to collapsing the bridge.
\begin{table}[htbp]
  \centerline{
  \begin{tabular}{|l|}
    \hline
    Q-values:  \\ \hline
     3.9617887930067868 \\ \hline
     128.97605980893587 \\ \hline
     68.83724816000003 \\ \hline
     132.72264 \\ \hline
     88.04689076791496 \\ \hline
     -180.0 \\ \hline
  \end{tabular}}
  \caption{Q-matrix of one of the cars in my simulation.}
  \label{tab:QTablePart2}
\end{table}
In the case of the cars driving as taxis, I computed the mean price after each day of the simulation, which ran for $1000$ days, for both the learning agents and the zero intelligence agents. The learning agents representing passengers can be considered as fairly random, since I haven't set up my project to re-use them. One can clearly see that the taxi drivers get the upper hand: prices quickly rise and stay far above the prices generated by the zero intelligence agents. They also deviate much less than in the case of the ZI-agents. Prices are shown in \cref{fig:ScoresPart3}.
The value used in the simulation is $\phi=0.25$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=.4\textwidth]{Average_Price_Roth_Erev}}
\centerline{\includegraphics[width=.4\textwidth]{Average_Price_Zero_Int}}
\caption{Average prices generated by intelligent and random agents over a span of $1000$ days.}
\label{fig:ScoresPart3}
\end{figure}
The resulting propensities always seemed to converge to similar values, which are shown in \cref{tab:QTablePart3}.
\begin{table}[htbp]
  \centerline{
  \begin{tabular}{|l|l|}
    \hline
    Lower & Raise \\ \hline
    0.41017003892640636 & 0.5898299610735938 \\ \hline
  \end{tabular}}
  \caption{Propensities for a taxi after $1000$ days; the opposite for the passengers.}
  \label{tab:QTablePart3}
\end{table}
\section{Discussion}
As discussed above, for part 1 my cars were able to reach a type of social convention with an ``alpha \& beta'' mechanic where one car always dominates the other. I think that if I had set up my simulation in another way, where each decision would be independent of the others, or had employed another learning algorithm, I might have reached a fairer situation.
As it is right now, with simple maximizing Q-learning, the cars will always try to reach the best score and in this way learn to cooperate, even if one of them loses out to the other. There are some cases where the cars take a while to learn to cooperate, but after $100$ rounds they are guaranteed to start earning points instead of losing them.
In the case of several cars on the bridge, they too learn to stop driving if there are too many cars on the bridge. Given enough time, they learn to use the bridge to the fullest without collapsing it. In the example shown before, where the capacity was low and there were many cars, we can clearly see a clustering of scores with a few outliers. Some cars in this situation are simply lucky and manage to amass a bigger score at the start and keep it, while others are not so lucky, lose points instead, and stay poor for the remainder of the simulation. However, this is quite random, as I choose the cars randomly.
Now, if there was another car that wanted to cross the bridge in the other direction, it would have a tough time learning if the bridge was still a one-way bridge, as it would most likely collapse it or crash into incoming cars. However, if it was a two-way bridge with a set capacity, I see no problem in it learning to do so, had I only had time to implement it. If we are looking at a situation with a two-lane bridge, I see no problem in the cars learning to drive without any trouble, as they do now. However, if this bridge remained a one-way bridge, then the situation would be harder to solve. In this case the lack of communication is highly detrimental, and therefore I would suggest using a learning algorithm that lets agents communicate with each other and cooperate that way.
In the case of the last assignment, I did not have time to implement it properly, and I doubt that any of us would have had a chance to solve both assignments if we had not cooperated on it. In my case I lack a way for my passengers to learn properly, and therefore what I discuss here is highly influenced by those missing features.
What I can say for certain is that the prices reached by the learning agents are much more consistent than those reached by absolutely random agents, even if the passengers are essentially random agents that learn nothing or almost nothing. If both the passengers and the taxis had more time to learn, I am sure that the average prices per day would be much more stable, and on a daily basis they would increase with demand and drop when there is not much call for taxis. The taxis would most likely benefit from finding out the maximum price they can ask for given the demand from passengers, and then aiming a bit lower so that they will be chosen more often.
If the number of taxis is not highly disproportional to the number of passengers, both the ZI-agents and the learning agents are able to clear the waiting lines. However, the learning agents do it faster, as one can see in \cref{fig:TakenTaxis}, where $60$ taxis were let loose in the simulation. It is also worth noting that when demand is high, the learning agents are able to utilize it better: almost all taxis were driving when there were a lot of passengers waiting to go home.
\begin{figure}[htbp]
\centerline{\includegraphics[width=.4\textwidth]{Taken_Taxis_Roth_Erev}}
\centerline{\includegraphics[width=.4\textwidth]{Taken_Taxis_Zero_Int}}
\caption{Number of taken taxis plotted throughout the day for both intelligent agents and ZI-agents.}
\label{fig:TakenTaxis}
\end{figure}
Given enough time, I am certain that the system would balance itself out and be able to react to changes like more or fewer taxis/passengers, different closing hours, etc.
\section*{Acknowledgments}
I would like to thank Christopher Kragebøl Hagerup, Kent Arne Larsen, Hans Victor Andersson Lindbäck, and Olav Kjartan Larseng for helping me along the way. We brainstormed a lot of ideas on how to solve this problem, and I got a lot of help with task 1.3 from Victor.
\end{document}
\section{Margins and spacing}
Example fonts in LaTeX can be found here\cite{latex-font}.\\
\begin{itemize}
\item Left margin: 1.5” from left edge of page
\item Right margin: 1.0” from right edge of page
\item Top margin: 1.0” from top edge of page
\item Bottom margin: 1.25” from bottom edge of page
\end{itemize}
\begin{table}[]
\centering
\caption{Example of a table.}
\label{tab:complexitylocal}
\begin{tabular}{|l|l|}
\hline
\textbf{Year} & \textbf{Name} \\ \hline
\end{tabular}
\end{table}
\section{Fonts}
Fonts should be easy to read. Times New Roman, Arial or a similarly clear font is preferred; type size must be 10, 11, or 12 point. Script and italic typefaces are not acceptable except where absolutely necessary, i.e., in Latin designations of species, etc. \\
In preparing your dissertation or thesis for electronic submission, you must embed all fonts. In Microsoft Word 2013, this is done by accessing the FILE menu; select OPTIONS, select SAVE. From the SAVE menu check the box by “Embed fonts in the file”. If the file size is a concern, check the box next to “Do NOT embed common system fonts”. \\
Large tables, charts, etc., may be reduced to conform to page size, but the print must remain clear enough to be readable. You can also attach a PDF for electronic submissions.
\section{Page numbering}
Every page, with the exception of the title page, the copyright page, and the committee approval page is numbered in the upper right hand corner, one half inch from the top of the page and one inch from the right edge of the page. Do not underline or place a period after the number. Do not use a running header. \\
\begin{itemize}
\item The prefatory materials (abstract, acknowledgements, table of contents, etc.) are numbered in lower case Roman numerals (i, ii, iii, iv…). Insert a section break after the Roman numerals to create different page numbering styles.
\item The first page of the main text and all subsequent pages are continuously numbered in Arabic numerals beginning with 1 until the final page number (1, 2, 3, 4…).
\item Do NOT number appendices or pages of additional material with numbers such as 4a or A-1.
\end{itemize}
\section{Tables and appendices}
Tables and appendices are part of the document and must conform to the same margin and page numbering requirements.
\section{Sequence of pages}
Assemble pages in the following order:
\begin{itemize}
\item Title page *no page number* (create according to example provided)
\item Copyright Notice *no page number* (optional - see example)
\item Committee Approval Page *no page number* (use \cite{unr-2020-forms}; NO SIGNATURES on this page)
\item Abstract (begins lower case Roman numerals i, ii, iii…)
\item Dedication (optional)
\item Acknowledgments (optional)
\item Table of Contents
\item List of Tables
\item List of Figures
\item Body of Manuscript (begins Arabic numbering 1, 2, 3…)
\item Back Matter (appendices, notes, bibliography, etc.)
\end{itemize}
\section{Title page}
\begin{itemize}
\item Do not number the title page
\item Center each line of type
\item Use BOLD text type for the manuscript title
\item The date listed is the month and year in which you will graduate. The only acceptable months are May, August, and December (graduation cycles).
\end{itemize}
\section{Copyright page}
No page number on this page. Although not required, we strongly recommend you insert a copyright notice in your manuscript following the title page. Essential components of the copyright notice are: copyright symbol, full legal name of author, and year of first publication. Follow the format of the sample provided below.
\section{Committee approval page}
\begin{itemize}
\item No page number on this page
\item Use the electronic PDF template provided below. This page will list the advisory committee members and graduate dean but will NOT include committee signatures.
\end{itemize}
\section{Abstract}
Lower case Roman numeral ``i'' page number. \\
Abstracts are required for all theses and dissertations. ProQuest no longer has a word limit on the abstract, ``as this constrains your ability to describe your research in a section that is accessible to search engines, and therefore would constrain potential exposure of your work.'' ProQuest does publish print indices that include citations and abstracts of all dissertations and theses published by ProQuest/UMI. These print indices require word limits of 350 words for doctoral dissertations and 150 words for master’s theses (only text will be included in the abstract). You may wish to limit the length of your abstract if this concerns you. The abstracts as you submit it will NOT be altered in your published manuscript.
\section{Instructions for completing dissertation committee approval page}
Please follow the forms shared in: \href{https://www.unr.edu/grad/student-resources/filing-guidelines}{unr-grad/filing-guidelines}
\documentclass[12pt, a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{fullpage}
\title{Title}
\author{Weipeng He}
\date{\today}
\begin{document}
\maketitle
\section{One}
test
\end{document}
%
%
%
%
%
%
\chapter{Perturbation Treatment in Molecular Quantum Mechanics}
%
%
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%
% what does the perturbation treatment do in the quantum chemistry
%
%
In quantum chemistry, the perturbation treatment is usually employed
to tackle weak interactions in molecular systems. Such weak
interactions come in a variety of kinds, such as the vibration of the
atomic skeleton, the magnetic susceptibility and
shielding\cite{stevens:550}, or even time-dependent changes of an
external field. In general, such physical quantities are very small,
so they are suitable to be treated as the perturbation operator, and
the perturbation treatment can usually yield good results.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The perturbed Hartree-Fock equation in differential form}
%
% 1 start from the Fock equation, the orbital is involved. we
% do not consider its expansion
% 2 perturbation expansion for the orbital and the orbital energy
%
%
Let us first derive the general procedure for the perturbation
treatment in the Hartree-Fock framework\cite{PhysRev.118.167,
  Peng_H_W, RevModPhys.32.455}.
As shown in the Hartree-Fock chapter, the general
Hartree-Fock equation in molecular orbital form can be expressed as:
\begin{multline}\label{PTIMQMeq:1}
\hat{H}_{1}(1)\varphi_{i}(1) + \sum_{j}\left[
\left\{\int\varphi^{*}_{j}(2)\varphi_{j}(2)d\tau(2)\frac{1}{r_{12}}\right\}\varphi_{i}(1)
\right] \\
-
\sum_{j}\left[
\left\{\int\varphi^{*}_{j}(2)\varphi_{i}(2)d\tau(2)\frac{1}{r_{12}}\right\}\varphi_{j}(1)
\right] \\
= \epsilon_{i}\varphi_{i}(1)
\end{multline}
Here $j$ runs over all the occupied orbitals. For simplicity we only
consider the differential form of the Hartree-Fock equation; the
involvement of basis sets is left for later. Suppose there is some
perturbation in the Fock operator, and assume it is a one-electron
operator, an assumption that covers most perturbation cases. In the
presence of the perturbing operator, the molecular orbitals and their
energy levels change correspondingly; we can express such changes as
expansions:
\begin{align}\label{PTIMQMeq:2}
\varphi_{i} &= \varphi^{(0)}_{i} + \lambda\varphi^{(1)}_{i} +
\lambda^{2}\varphi^{(2)}_{i} + \cdots \nonumber \\
\epsilon_{i} &= \epsilon^{(0)}_{i} + \lambda\epsilon^{(1)}_{i} +
\lambda^{2}\epsilon^{(2)}_{i} + \cdots
\end{align}
The new Fock operator can be expressed as:
\begin{equation}\label{PTIMQMeq:3}
\hat{F}^{'} = \hat{F} + \lambda \hat{V}
\end{equation}
Now let us see how to obtain the perturbed HF equations.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{First order perturbed Hartree-Fock equation}
%
% to get the first order Fock equation
%
By substituting (\ref{PTIMQMeq:2}) and (\ref{PTIMQMeq:3}) into the
original Hartree-Fock equation (\ref{PTIMQMeq:1}) and gathering the
terms with the same power of $\lambda$, we obtain a series of
perturbation equations. The equation that scales with $\lambda^{0}$ is:
\begin{equation}\label{PTIMQMeq:4}
\hat{F}\varphi^{(0)}_{i} = \epsilon^{(0)}_{i}\varphi^{(0)}_{i}
\end{equation}
This is the equation for the zero order approximation.
By gathering the terms proportional to $\lambda$, we get the
first order approximation:
\begin{multline}\label{PTIMQMeq:6}
\hat{H}_{1}\varphi^{(1)}_{i} + \hat{V}\varphi^{(0)}_{i} \\
+\sum_{j}\Bigg\{
\int\varphi^{*(0)}_{j}\varphi^{(1)}_{j}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{i}
+
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{j}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{i}
+
\int\varphi^{*(0)}_{j}\varphi^{(0)}_{j}d\tau\frac{1}{r_{12}}\varphi^{(1)}_{i}
\Bigg\} \\
-\sum_{j}\Bigg\{
\int\varphi^{*(0)}_{j}\varphi^{(1)}_{i}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{j}
+
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{j}
+
\int\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}\varphi^{(1)}_{j}
\Bigg\} \\
=\epsilon^{(0)}_{i}\varphi^{(1)}_{i} +
\epsilon^{(1)}_{i}\varphi^{(0)}_{i}
\end{multline}
We can rearrange the above equation into a more convenient
form:
\begin{multline}\label{PTIMQMeq:5}
\hat{F}\varphi^{(1)}_{i} - \epsilon^{(0)}_{i}\varphi^{(1)}_{i} =
\epsilon^{(1)}_{i}\varphi^{(0)}_{i} - \hat{V}\varphi^{(0)}_{i} \\
-\sum_{j}\Bigg\{
\int\varphi^{*(0)}_{j}\varphi^{(1)}_{j}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{i}
+
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{j}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{i}
\\
-
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}\varphi^{(0)}_{j}
-
\int\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}\varphi^{(1)}_{j}
\Bigg\}
\end{multline}
Here the terms involving $\varphi^{(1)}_{i}$ are put on the left side
of the equation, and the remaining terms are put on the right side.
Furthermore, $\hat{F}\varphi^{(1)}_{i}$ is:
\begin{multline}\label{PTIMQMeq:18}
\hat{F}\varphi^{(1)}_{i} = \hat{H}_{1}\varphi^{(1)}_{i} +
\sum_{j}\left[
\left\{\int\varphi^{*(0)}_{j}\varphi^{(0)}_{j}d\tau\frac{1}{r_{12}}\right\}
\varphi^{(1)}_{i}
\right. \\
-
\left.
\left\{\int\varphi^{*(0)}_{j}\varphi^{(1)}_{i}d\tau\frac{1}{r_{12}}\right\}
\varphi^{(0)}_{j} \right]
\end{multline}
Compared with (\ref{PTIMQMeq:1}), we have omitted all explicit
electron labels for simplicity. The higher order equations can be
obtained in a like manner, but here we concentrate on the first order
perturbation equation.
By multiplying the equation by $\varphi^{*(0)}_{i}$ and integrating,
we obtain the expression for the first order energy of the molecular
orbitals:
\begin{multline}\label{PTIMQMeq:13}
\epsilon^{(1)}_{i}=
\int\varphi^{*(0)}_{i}\hat{V}\varphi^{(0)}_{i}d\tau + \\
\sum_{j}\Bigg\{
\int\varphi^{*(0)}_{j}\varphi^{(1)}_{j}d\tau\frac{1}{r_{12}}
\varphi^{*(0)}_{i}\varphi^{(0)}_{i}d\tau +
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{j}d\tau\frac{1}{r_{12}}
\varphi^{*(0)}_{i}\varphi^{(0)}_{i}d\tau
\\
-
\int\varphi^{*(1)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}
\varphi^{*(0)}_{i}\varphi^{(0)}_{j}d\tau
-
\int\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau\frac{1}{r_{12}}
\varphi^{*(0)}_{i}\varphi^{(1)}_{j}d\tau
\Bigg\}
\end{multline}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Relations between the perturbed molecular
orbitals}\label{PTIMQM:1}
%
% consideration about the relations between the wave functions
%
Next we consider the relations between $\varphi^{(0)}_{i}$,
$\varphi^{(1)}_{i}$, etc. For $\varphi^{(0)}_{i}$, it is known
that it equals the unperturbed HF orbital:
\begin{align}\label{}
\varphi^{(0)}_{i} &= \varphi_{i} \nonumber \\
\epsilon^{(0)}_{i} &= \epsilon_{i}
\end{align}
Furthermore, these orbitals can safely be assumed to be orthogonal
to each other:
\begin{equation}\label{}
\int \varphi^{*(0)}_{i}\varphi^{(0)}_{j} d\tau = \delta_{ij}
\end{equation}
For $\varphi^{(0)}_{i}$ and $\varphi^{(1)}_{i}$, we can prove that
they are orthogonal to each other:
\begin{equation}\label{PTIMQMeq:19}
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{i} d\tau = 0
\end{equation}
This relation can be obtained via the normalization condition of
$\varphi_{i}$. Suppose that $\varphi_{i}$ is approximated to first
order; then we have:
\begin{align}\label{}
\int \varphi^{*}_{i}\varphi_{i} d\tau &= 1 \Rightarrow \nonumber \\
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{i} d\tau + \int
\varphi^{*(1)}_{i}\varphi^{(0)}_{i} d\tau &= 0
\end{align}
Since we have:
\begin{equation}\label{}
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{i} d\tau = \left\{\int
\varphi^{*(1)}_{i}\varphi^{(0)}_{i} d\tau\right\}^{*}
\end{equation}
the integral must be purely imaginary, and by a suitable choice of
phase for $\varphi^{(1)}_{i}$ it can be set to zero without loss of
generality. Therefore the orthogonality condition is proved.
For $\varphi^{(0)}_{i}$ and $\varphi^{(1)}_{k}$ with $k\neq i$, we
have the relation:
\begin{equation}\label{PTIMQMeq:9}
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{k} d\tau + \int
\varphi^{*(1)}_{i}\varphi^{(0)}_{k} d\tau = 0
\end{equation}
This can be obtained from (\ref{PTIMQMeq:5}). If we take the complex
conjugate of the equation in (\ref{PTIMQMeq:5}), multiply by
$\varphi^{(0)}_{k}$, and integrate, we get the following
equation:
\begin{multline}\label{PTIMQMeq:7}
\langle\varphi^{(1)}_{i}|\hat{F}|\varphi^{(0)}_{k}\rangle -
\epsilon^{(0)}_{i}\langle\varphi^{(1)}_{i}|\varphi^{(0)}_{k}\rangle
=
- \langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{k}\rangle \\
-\sum_{j}\Bigg\{
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{k}\right)
+
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{k}\right)
\\
-
\left(\varphi^{(0)}_{i}\varphi^{(1)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{k}\right)
-
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{k}\right)
\Bigg\}
\end{multline}
Here we use Dirac notation for the one-electron integrals; the
two-electron integrals, on the other hand, are abbreviated
as:
\begin{equation}\label{}
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{k}\right)
= \int \int
\varphi^{*(1)}_{j}(1)\varphi^{(0)}_{j}(1)\frac{1}{r_{12}}
\varphi^{*(0)}_{i}(2)\varphi^{(0)}_{k}(2)d\tau_{1}d\tau_{2}
\end{equation}
This abbreviation is used to keep the derivation as clear as possible.
On the other hand, if the equation in (\ref{PTIMQMeq:5}) is written for
$\varphi^{(1)}_{k}$ and we multiply by $\varphi^{*(0)}_{i}$ and
integrate, we get an equation similar to
(\ref{PTIMQMeq:7}):
\begin{multline}\label{PTIMQMeq:8}
\langle\varphi^{(0)}_{i}|\hat{F}|\varphi^{(1)}_{k}\rangle -
\epsilon^{(0)}_{k}\langle\varphi^{(0)}_{i}|\varphi^{(1)}_{k}\rangle
=
- \langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{k}\rangle \\
-\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{k}\right)
+
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{k}\right)
\\
-
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{k}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{k}|\varphi^{(0)}_{i}\varphi^{(1)}_{j}\right)
\Bigg\}
\end{multline}
Hence it is easy to see that the right side of (\ref{PTIMQMeq:7}) is
the same as the right side of (\ref{PTIMQMeq:8}); therefore we have:
\begin{align}\label{PTIMQMeq:27}
\langle\varphi^{(0)}_{i}|\hat{F}|\varphi^{(1)}_{k}\rangle -
\epsilon^{(0)}_{k}\langle\varphi^{(0)}_{i}|\varphi^{(1)}_{k}\rangle
&=\langle\varphi^{(1)}_{i}|\hat{F}|\varphi^{(0)}_{k}\rangle -
\epsilon^{(0)}_{i}\langle\varphi^{(1)}_{i}|\varphi^{(0)}_{k}\rangle
\nonumber \\
(\epsilon^{(0)}_{i}-\epsilon^{(0)}_{k})
\langle\varphi^{(0)}_{i}|\varphi^{(1)}_{k}\rangle &=
(\epsilon^{(0)}_{k}-\epsilon^{(0)}_{i})
\langle\varphi^{(1)}_{i}|\varphi^{(0)}_{k}\rangle
\end{align}
Then we obtain the conclusion in (\ref{PTIMQMeq:9}).
Nevertheless, if the energy levels are degenerate
(which means that $\epsilon^{(0)}_{k}=\epsilon^{(0)}_{i}$
for some $k\neq i$), we cannot obtain the conclusion of
(\ref{PTIMQMeq:9}) directly from (\ref{PTIMQMeq:27}). However,
there is another way to derive the relation shown in
(\ref{PTIMQMeq:9}), which is similar to the way we obtained the relation
in (\ref{PTIMQMeq:19}).
Once again we use the orthogonality condition for the $\varphi_{i}$.
Suppose that $\varphi_{i}$ is approximated to first order;
then we have:
\begin{align}\label{}
\int \varphi^{*}_{i}\varphi_{k} d\tau = 0 \quad i\neq k &
\Rightarrow \nonumber \\
\int (\varphi^{*(0)}_{i} + \lambda\varphi^{*(1)}_{i})
(\varphi^{(0)}_{k} + \lambda\varphi^{(1)}_{k}) d\tau &=0 \nonumber
\\
\int\varphi^{*(0)}_{i}\varphi^{(0)}_{k} d\tau + \lambda(\int
\varphi^{*(0)}_{i}\varphi^{(1)}_{k} d\tau + \int
\varphi^{*(1)}_{i}\varphi^{(0)}_{k} d\tau) + O(\lambda^{2}) &= 0
\end{align}
Because the unperturbed orbitals are orthogonal to each other,
we have $\int\varphi^{*(0)}_{i}\varphi^{(0)}_{k} d\tau = 0$.
Furthermore, since $\lambda$ is an arbitrary number, it is
required that:
\begin{equation}
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{k} d\tau + \int
\varphi^{*(1)}_{i}\varphi^{(0)}_{k} d\tau = 0
\end{equation}
This is just the conclusion in (\ref{PTIMQMeq:9}). We note that
this proof does not rely on the absence of energy degeneracy, so the
conclusion in (\ref{PTIMQMeq:9}) holds in all cases.
All in all, we obtain the relation between the first order
corrected orbital $\varphi^{(1)}_{i}$ and the unperturbed
orbital $\varphi^{(0)}_{j}$:
\begin{equation}\label{}
\int \varphi^{*(0)}_{i}\varphi^{(1)}_{j} d\tau + \int
\varphi^{*(1)}_{i}\varphi^{(0)}_{j} d\tau = 0
\end{equation}
Here the labels $i$ and $j$ may refer to any orbitals.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The first order total energy expression}
%
% consider the total energy expression under the perturbation framework
%
Next we turn to the total energy. According to the results obtained
in the HF chapter, the total energy for an arbitrary $n$-electron
system is:
\begin{multline}\label{PTIMQMeq:10}
E =
\sum_{i}^{n}\langle\varphi_{i}(1)|\hat{H}_{1}(1)|\varphi_{i}(1)\rangle
+
\\
\frac{1}{2}\sum_{i}\sum_{j} \left\{
\int\int\varphi^{*}_{i}(1)\varphi_{i}(1)\frac{1}{r_{12}}
\varphi^{*}_{j}(2)\varphi_{j}(2)d\tau_{1}d\tau_{2}- \right. \\
\left.
\int\int\varphi^{*}_{i}(1)\varphi_{j}(1)\frac{1}{r_{12}}
\varphi^{*}_{j}(2)\varphi_{i}(2)d\tau_{1}d\tau_{2} \right\}
\end{multline}
Similarly, the total energy can also be expanded in powers of
$\lambda$:
\begin{equation}\label{}
E = E^{(0)} + \lambda E^{(1)} + \lambda^{2}E^{(2)} + \cdots
\end{equation}
Now, by substituting the expansion for $\varphi$ defined in
(\ref{PTIMQMeq:2}) as well as the total energy expansion into
(\ref{PTIMQMeq:10}), we obtain the perturbation equations for $E$.
The zero order approximation for the total energy is:
\begin{multline}\label{PTIMQMeq:11}
E^{(0)} =
\sum_{i}^{n}\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
+
\\
\frac{1}{2}\sum_{i}\sum_{j} \left\{
\int\int\varphi^{*(0)}_{i}\varphi^{(0)}_{i}\frac{1}{r_{12}}
\varphi^{*(0)}_{j}\varphi^{(0)}_{j}d\tau_{1}d\tau_{2}- \right. \\
\left. \int\int\varphi^{*(0)}_{i}\varphi^{(0)}_{j}\frac{1}{r_{12}}
\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau_{1}d\tau_{2} \right\}
\end{multline}
On the other hand, by using the orbital energy we can express it as:
\begin{multline}\label{PTIMQMeq:12}
E^{(0)} = \sum_{i}^{n}\epsilon_{i}^{(0)} -
\frac{1}{2}\sum_{i}\sum_{j} \left\{
\int\int\varphi^{*(0)}_{i}\varphi^{(0)}_{i}\frac{1}{r_{12}}
\varphi^{*(0)}_{j}\varphi^{(0)}_{j}d\tau_{1}d\tau_{2}- \right. \\
\left. \int\int\varphi^{*(0)}_{i}\varphi^{(0)}_{j}\frac{1}{r_{12}}
\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau_{1}d\tau_{2} \right\}
\end{multline}
The first order approximation scales with $\lambda$, and we
can express this term as:
\begin{multline}\label{PTIMQMeq:14}
E^{(1)} =\sum_{i}^{n}
\Bigg\{\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
+\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle +
\langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{i}\rangle\Bigg\}
\\
+\frac{1}{2}\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{i}|\varphi^{(0)}_{j}\varphi^{(0)}_{j}\right)
+
\left(\varphi^{(0)}_{i}\varphi^{(1)}_{i}|\varphi^{(0)}_{j}\varphi^{(0)}_{j}\right)
\\
+
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{i}|\varphi^{(1)}_{j}\varphi^{(0)}_{j}\right)
+
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{i}|\varphi^{(0)}_{j}\varphi^{(1)}_{j}\right)
\Bigg\} \\
-\frac{1}{2}\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
+
\left(\varphi^{(0)}_{i}\varphi^{(1)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
\\
+
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{i}\right)
+
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(1)}_{i}\right)
\Bigg\} \\
\end{multline}
This expression is rather complicated. Therefore, using the
relations obtained before, we try to reduce it to a simpler
form. First, from (\ref{PTIMQMeq:13}) we can see that the
two-electron integrals can be expressed as:
\begin{multline}\label{PTIMQMeq:16}
\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
+
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
\\
-
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(1)}_{j}\right)
\Bigg\} \\
= \sum_{i}\Bigg\{\epsilon^{(1)}_{i} -
\langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{i}\rangle\Bigg\}
\end{multline}
By exchanging the labels $i$ and $j$ in the above equation (an
operation that changes nothing), we get:
\begin{multline}\label{PTIMQMeq:15}
\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{i}\varphi^{(1)}_{i}|\varphi^{(0)}_{j}\varphi^{(0)}_{j}\right)
+
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{i}|\varphi^{(0)}_{j}\varphi^{(0)}_{j}\right)
\\
-
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(1)}_{i}\right)
\Bigg\} \\
= \sum_{i}\Bigg\{\epsilon^{(1)}_{i} -
\langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{i}\rangle\Bigg\}
\end{multline}
This is half of the two-electron integrals in
(\ref{PTIMQMeq:14}).
On the other hand, in (\ref{PTIMQMeq:16}) we can exchange the
electron labels in the two-electron integrals, that is:
\begin{equation}\label{}
(ii|jj) \Rightarrow (jj|ii)
\end{equation}
Such an operation certainly does not change the integrals. Thus we can
transform (\ref{PTIMQMeq:16}) into:
\begin{multline}\label{PTIMQMeq:17}
\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{i}|\varphi^{(0)}_{j}\varphi^{(1)}_{j}\right)
+
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{i}|\varphi^{(1)}_{j}\varphi^{(0)}_{j}\right)
\\
-
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(0)}_{i}\varphi^{(1)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
\Bigg\} \\
= \sum_{i}\Bigg\{\epsilon^{(1)}_{i} -
\langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{i}\rangle\Bigg\}
\end{multline}
Now we can see that this is the other half of the two-electron
integrals in (\ref{PTIMQMeq:14}).
All in all, by the expressions in (\ref{PTIMQMeq:15}) and
(\ref{PTIMQMeq:17}), (\ref{PTIMQMeq:14}) can be transformed into:
\begin{equation}\label{PTIMQMeq:22}
E^{(1)} =\sum_{i}^{n} \left\{\epsilon^{(1)}_{i} +
\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
+\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle\right\}
\end{equation}
So far we have obtained a simpler expression for $E^{(1)}$;
however, the concrete expression for $\epsilon^{(1)}_{i}$ is not
yet known. Therefore we need to make an additional transformation.
First, let us evaluate the integrals below according to
(\ref{PTIMQMeq:18}):
\begin{multline}\label{PTIMQMeq:20}
\sum_{i}\Bigg\{\langle\varphi^{(0)}_{i}|\hat{F}|\varphi^{(1)}_{i}\rangle
+\langle\varphi^{(1)}_{i}|\hat{F}|\varphi^{(0)}_{i}\rangle\Bigg\}
= \\
\sum_{i}\Bigg\{
\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle +
\sum_{j}\Big[
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(1)}_{i}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{i}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
\Big] \Bigg\} \\
+ \sum_{i}\Bigg\{
\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle +
\sum_{j}\Big[
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{j}|\varphi^{(1)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
\Big] \Bigg\}
\end{multline}
Because of the orthogonality between the $\varphi^{(0)}_{i}$ and
$\varphi^{(1)}_{i}$ defined in (\ref{PTIMQMeq:19}), we have:
\begin{equation}\label{}
\langle\varphi^{(0)}_{i}|\hat{F}|\varphi^{(1)}_{i}\rangle =
\epsilon^{(0)}_{i}\langle\varphi^{(0)}_{i}|\varphi^{(1)}_{i}\rangle
= 0
\end{equation}
Therefore the equation of (\ref{PTIMQMeq:20}) can be rearranged
into:
\begin{multline}\label{PTIMQMeq:20a}
\sum_{i}\Bigg\{
\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle +
\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
\Bigg\}= \\
-\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(1)}_{i}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{i}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
+ \\
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{j}|\varphi^{(1)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(1)}_{i}\varphi^{(0)}_{j}|\varphi^{(0)}_{j}\varphi^{(0)}_{i}\right)
\Bigg\}
\end{multline}
Here we can exchange the labels $i$ and $j$ and then exchange the
electron labels; such operations keep the integrals invariant.
Then we have:
\begin{multline}\label{PTIMQMeq:20b}
\sum_{i}\Bigg\{
\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle +
\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
\Bigg\}= \\
-\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(1)}_{j}\right)
+ \\
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{i}\right)
\Bigg\}
\end{multline}
For the integral of
$\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{i}\right)$,
if we exchange the electron label again, it will be:
\begin{equation}\label{}
\left(\varphi^{(0)}_{i}\varphi^{(0)}_{j}|\varphi^{(1)}_{j}\varphi^{(0)}_{i}\right)=
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
\end{equation}
Then finally (\ref{PTIMQMeq:20b}) becomes:
\begin{multline}\label{PTIMQMeq:21}
\sum_{i}\Bigg\{
\langle\varphi^{(0)}_{i}|\hat{H}_{1}|\varphi^{(1)}_{i}\rangle +
\langle\varphi^{(1)}_{i}|\hat{H}_{1}|\varphi^{(0)}_{i}\rangle
\Bigg\}= \\
-\sum_{i}\sum_{j}\Bigg\{
\left(\varphi^{(0)}_{j}\varphi^{(1)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(0)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(1)}_{j}\right)
+ \\
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{j}|\varphi^{(0)}_{i}\varphi^{(0)}_{i}\right)
-
\left(\varphi^{(1)}_{j}\varphi^{(0)}_{i}|\varphi^{(0)}_{i}\varphi^{(0)}_{j}\right)
\Bigg\}
\end{multline}
Now we can transform (\ref{PTIMQMeq:22}) by using the conclusion
obtained in (\ref{PTIMQMeq:21}), as well as the expression for
$\epsilon^{(1)}_{i}$ in (\ref{PTIMQMeq:13}). Consequently, it
turns out that $E^{(1)}$ is finally:
\begin{equation}\label{}
E^{(1)} = \sum_{i}
\langle\varphi^{(0)}_{i}|\hat{V}|\varphi^{(0)}_{i}\rangle
\end{equation}
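As an aside, in a finite basis this first order energy can be evaluated
either as a sum over the occupied orbitals or, equivalently, as a trace
with the unperturbed density matrix; a small numerical sketch (with
random, purely illustrative matrices) is:
\begin{verbatim}
import numpy as np

# Illustrative matrices only: V_pq = <chi_p|V|chi_q> in a basis of
# m functions, and C0 holds n_occ orthonormal occupied orbitals.
rng = np.random.default_rng(0)
m, n_occ = 6, 3
V = rng.normal(size=(m, m))
V = (V + V.T) / 2
C0 = np.linalg.qr(rng.normal(size=(m, n_occ)))[0]

# E^(1) as an orbital sum and as a trace with the density matrix P.
E1_sum = sum(C0[:, i] @ V @ C0[:, i] for i in range(n_occ))
P = C0 @ C0.T
E1_trace = np.trace(P @ V)
print(E1_sum, E1_trace)  # identical
\end{verbatim}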
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Matrix form of the perturbed Hartree-Fock equation}
%
%
%
%
In the above, we have derived the first order perturbed
Hartree-Fock equation in a very detailed way. However, since in
quantum chemistry we usually use the matrix form of the Hartree-Fock
equation in terms of basis set functions, it is necessary
to recast the perturbed equations into matrix form based on the
above results\cite{stevens:550}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{General procedures}
%
% general preparation before the derivation,
% specify the ground states and HF equation as well as how to express the
% first order correlated molecular orbitals
%
Now we begin to investigate how to carry out the perturbation
treatment on the basis of a linear combination of basis set functions.
In this case, the unperturbed molecular orbitals are usually expressed as:
\begin{equation}\label{PTIMQMeq:23}
\varphi_{q} = \sum_{p}c_{pq}\chi_{p}
\end{equation}
where $\chi_{p}$ denotes the basis set functions. Here we follow the
convention of using $i, j$, etc. to designate the occupied orbitals,
$a, b$, etc. to designate the virtual orbitals, and $p, q$, etc. to
refer to general orbitals. It is convenient to rewrite this in matrix
form, that is:
\begin{equation}\label{PTIMQMeq:30}
\varphi_{q} =\begin{bmatrix}
\chi_{1} & \chi_{2} & \cdots & \chi_{n} \\
\end{bmatrix}
\begin{bmatrix}
c_{1q} \\
c_{2q} \\
\cdots \\
c_{nq} \\
\end{bmatrix}
\end{equation}
According to representation theory (\ref{REPRESENTATION:1}), such an
expression is called the coefficient ($C$) representation on the
selected basis function space $\chi$.
Now we assume that there is a one-to-one correspondence between the
coefficient representation and the total wave function (in my
opinion this assumption does not hurt the generality; in other
words, it always holds true). Then, by taking the vector
form of the molecular orbitals in (\ref{PTIMQMeq:30}) back into the
original Hartree-Fock equation (\ref{PTIMQMeq:1}), multiplying by
$\chi_{p}^{*}(1)$, and integrating, we get the
corresponding matrix form:
\begin{equation}\label{PTIMQMeq:24}
FC_{q} = SC_{q}\epsilon_{q}
\end{equation}
Furthermore, since we can use a transformation matrix $U$ (with
$U^{+}SU=I$) to remove the overlap matrix, the Hartree-Fock equation
finally becomes:
\begin{equation}\label{PTIMQMeq:34}
FC_{q} = \epsilon C_{q}
\end{equation}
This equation has been fully discussed in the Hartree-Fock chapter
\ref{HFT}. Here $C_{q}$ denotes the vector of coefficients $c_{pq}$ in
(\ref{PTIMQMeq:23}), and $\epsilon$ is an $n\times n$ diagonal
matrix containing the orbital energies. In the following we will
focus the discussion on (\ref{PTIMQMeq:34}).
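In practical calculations, the equation above (in its form with the
overlap matrix, (\ref{PTIMQMeq:24})) is solved as a generalized
eigenvalue problem; a minimal numerical sketch (with random symmetric
matrices standing in for $F$ and $S$) is:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Random stand-ins for the Fock matrix F and the overlap matrix S
# of an m-function basis (S is made symmetric positive definite).
rng = np.random.default_rng(0)
m = 6
F = rng.normal(size=(m, m))
F = (F + F.T) / 2
A = rng.normal(size=(m, m))
S = A @ A.T + m * np.eye(m)

# Solve F C = S C eps: the columns of C are the coefficient
# vectors C_q and eps contains the orbital energies.
eps, C = eigh(F, S)
print(np.allclose(F @ C, S @ C * eps))      # True
print(np.allclose(C.T @ S @ C, np.eye(m)))  # True
\end{verbatim}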
Moreover, the total energy is given as:
\begin{equation}\label{PTIMQMeq:25}
E = \frac{1}{2}\sum_{q}(C_{q}^{+}FC_{q} + \epsilon_{q})
\end{equation}
Equation (\ref{PTIMQMeq:25}) can be obtained directly by adding
(\ref{PTIMQMeq:11}) and (\ref{PTIMQMeq:12}). For simplicity,
the orbitals are still assumed to be orthonormal,
that is, $\langle\varphi_{p}|\varphi_{q}\rangle = \delta_{pq}$.
Now we assume that there is some perturbation in the whole system,
causing corresponding changes in the molecular orbitals,
the orbital energies, and the total energy:
\begin{align}\label{}
\hat{F}^{'} &= \hat{F} + \lambda \hat{V} \Rightarrow \nonumber \\
\varphi_{q} &= \varphi^{(0)}_{q} + \lambda\varphi^{(1)}_{q} +
\lambda^{2}\varphi^{(2)}_{q} + \cdots \nonumber \\
\epsilon_{q} &= \epsilon^{(0)}_{q} + \lambda\epsilon^{(1)}_{q} +
\lambda^{2}\epsilon^{(2)}_{q} + \cdots \nonumber \\
E &= E^{(0)} + \lambda E^{(1)} + \lambda^{2}E^{(2)} + \cdots
\end{align}
We note that this is the same as the derivation in the last section;
in a like manner, we still concentrate on the first order
perturbation effects.
Furthermore, how do we express the first order molecular orbital?
Since all the molecular orbitals are expressed in the space
spanned by the basis set, it is natural to represent it
by perturbed coefficients:
\begin{align}\label{PTIMQMeq:26}
\varphi^{(0)}_{q} &= \sum_{p}c^{(0)}_{pq}\chi_{p} \nonumber \\
\varphi^{(1)}_{q} &= \sum_{p}c^{(1)}_{pq}\chi_{p}
\end{align}
Here the index $p$ runs over all orbitals, both occupied and
virtual. Next, we consider the relations between the perturbed
molecular orbitals.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Relation for the perturbed orbitals}
%
% derive the relation between the perturbed orbitals, in terms of the
% basis sets functions
%
%
In section (\ref{PTIMQM:1}) we derived a general relation between
$\varphi^{(1)}_{p}$ and $\varphi^{(0)}_{q}$:
\begin{equation}\label{PTIMQMeq:31}
\int\varphi^{*(1)}_{p}\varphi^{(0)}_{q}d\tau
+\int\varphi^{*(0)}_{p}\varphi^{(1)}_{q}d\tau = 0
\end{equation}
Furthermore, for the case $p=q$ we have shown that the integral can
be set to $0$:
\begin{equation}\label{}
\int\varphi^{*(1)}_{p}\varphi^{(0)}_{q}d\tau = 0
\end{equation}
Substituting the expansion (\ref{PTIMQMeq:26}) into
(\ref{PTIMQMeq:31}) gives:
\begin{align}\label{PTIMQMeq:33}
\sum_{r}\sum_{s}(c^{*(1)}_{rp}c^{(0)}_{sq} +
c^{*(0)}_{rp}c^{(1)}_{sq})\langle\chi_{r}|\chi_{s}\rangle &= 0
\Rightarrow\nonumber \\
C^{+(1)}_{p}SC^{(0)}_{q} + C^{+(0)}_{p}SC^{(1)}_{q} &= 0
\end{align}
However, in (\ref{PTIMQMeq:34}) the overlap matrix has been
``dropped'' by a transformation matrix $U$ satisfying $U^{+}SU = I$,
which means:
\begin{equation}\label{}
C^{'}_{q} = UC_{q} \quad U^{+}SU = I \Rightarrow
C^{+(1)'}_{p}SC^{(0)'}_{q} = C^{+(1)}_{p}U^{+}SUC^{(0)}_{q} =
C^{+(1)}_{p}C^{(0)}_{q}
\end{equation}
Therefore (\ref{PTIMQMeq:33}) reduces to:
\begin{equation}\label{PTIMQMeq:35}
C^{+(1)}_{p}C^{(0)}_{q} + C^{+(0)}_{p}C^{(1)}_{q} = 0
\end{equation}
Now let us consider the expectation value of a single-electron
operator $\hat{A}$. Based on the perturbed molecular orbital
expansion, the expectation value is:
\begin{equation}\label{}
\sum_{i=1}^{n}\langle\varphi_{i}|\hat{A}|\varphi_{i}\rangle =
\sum_{i=1}^{n}\langle\varphi^{(0)}_{i}|\hat{A}|\varphi^{(0)}_{i}\rangle
+ \lambda\sum_{i=1}^{n}\Bigg\{
\langle\varphi^{(1)}_{i}|\hat{A}|\varphi^{(0)}_{i}\rangle +
\langle\varphi^{(0)}_{i}|\hat{A}|\varphi^{(1)}_{i}\rangle \Bigg\}
\end{equation}
Here we assume there are $n$ occupied orbitals and $m$ basis
functions in total. The sum runs over all occupied orbitals, and the
expectation value of $\hat{A}$ is approximated to first order.
On the other hand, we can also expand the expectation value order by
order:
\begin{equation}\label{}
\langle A \rangle = \langle A \rangle^{(0)} + \lambda\langle A
\rangle^{(1)} + \cdots
\end{equation}
Obviously, the zeroth-order term is:
\begin{equation}\label{}
\langle A \rangle^{(0)} =
\sum_{i=1}^{n}\langle\varphi^{(0)}_{i}|\hat{A}|\varphi^{(0)}_{i}\rangle
\end{equation}
and the first-order term is:
\begin{equation}\label{}
\langle A \rangle^{(1)} = \sum_{i=1}^{n}\Bigg\{
\langle\varphi^{(1)}_{i}|\hat{A}|\varphi^{(0)}_{i}\rangle +
\langle\varphi^{(0)}_{i}|\hat{A}|\varphi^{(1)}_{i}\rangle \Bigg\}
\end{equation}
Let us further evaluate the first-order term. Substituting the basis
expansion (\ref{PTIMQMeq:26}) into the equation above, it becomes:
\begin{align}\label{}
\langle A \rangle^{(1)} &= \sum_{i=1}^{n}\sum_{k}\sum_{j}
\Big(c^{*(1)}_{ki}c^{(0)}_{ji} + c^{*(0)}_{ki}c^{(1)}_{ji}\Big)
\langle\chi_{k}|\hat{A}|\chi_{j}\rangle
\end{align}
The question that now arises is how to express the relations between
the perturbed orbitals in terms of the expansion coefficients.
Substituting the expansion of the first-order orbitals into relation
(\ref{PTIMQMeq:31}), we obtain:
\begin{equation}\label{}
\sum_{p}c^{*(1)}_{pi}\langle\varphi^{(0)}_{p}|\varphi^{(0)}_{j}\rangle
+
\sum_{q}c^{(1)}_{qj}\langle\varphi^{(0)}_{i}|\varphi^{(0)}_{q}\rangle
= 0
\end{equation}
Since the unperturbed orbitals satisfy the orthonormality relation
$\langle\varphi^{(0)}_{p}|\varphi^{(0)}_{q}\rangle =
\delta_{pq}$, we have:
\begin{equation}\label{PTIMQMeq:28}
c^{*(1)}_{ji} + c^{(1)}_{ij} = 0
\end{equation}
In particular, if $i=j$, then $c^{*(1)}_{ii} + c^{(1)}_{ii} = 0$;
that means $c^{(1)}_{ii}$ is purely imaginary, and it can safely be
set to $0$.
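To make this relation concrete, the following small numerical check (an illustration
added here, not part of the original text; all names are placeholders) verifies that a
first-order coefficient matrix obeying (\ref{PTIMQMeq:28}), i.e. an anti-Hermitian
matrix, keeps the perturbed orbitals orthonormal to first order in $\lambda$.
\begin{verbatim}
# If c*(1)_ji + c(1)_ij = 0, the overlap of the perturbed orbitals
# deviates from the identity only at second order in lambda.
import numpy as np

rng = np.random.default_rng(1)
n, lam = 5, 1e-4                          # number of orbitals, perturbation strength

M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
C1 = (M - M.conj().T) / 2                 # anti-Hermitian: C1^+ = -C1

C = np.eye(n) + lam * C1                  # column q holds the coefficients of phi_q
overlap = C.conj().T @ C                  # <phi_p|phi_q> in the orthonormal 0th-order basis
print(np.max(np.abs(overlap - np.eye(n))))   # of order lambda^2 (~1e-8), not lambda
\end{verbatim}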
Let us now see how the conclusion in (\ref{PTIMQMeq:28}) can be used
to obtain further information about the density. In the
perturbation treatment, the density can be expanded as:
\begin{equation}\label{}
\rho = \rho^{(0)} + \lambda\rho^{(1)} +\lambda^{2}\rho^{(2)} +
\cdots
\end{equation}
For the orbitals approximated to first order, in analogy to
(\ref{PTIMQMeq:26}) we can write:
\begin{align}\label{}
\varphi_{i} &= \varphi^{(0)}_{i} +
\lambda\sum_{p}c^{(1)}_{pi}\varphi^{(0)}_{p} \nonumber \\
\varphi^{*}_{i} &= \varphi^{*(0)}_{i} +
\lambda\sum_{q}c^{*(1)}_{qi}\varphi^{*(0)}_{q}
\end{align}
Then the electron density can be expressed as (assuming there are
$m$ occupied orbitals, each occupied by one electron; the remaining
orbitals are virtual):
\begin{align}\label{}
\rho &= \rho^{(0)} + \lambda\rho^{(1)} \Rightarrow \nonumber \\
\sum_{i=1}^{m}\varphi^{*}_{i}\varphi_{i}d\tau &=
\sum_{i=1}^{m}\varphi^{*(0)}_{i}\varphi^{(0)}_{i}d\tau + \nonumber \\
&\lambda\Bigg\{\sum_{i=1}^{m}\sum_{p}c^{(1)}_{pi}
\varphi^{*(0)}_{i}\varphi^{(0)}_{p}d\tau +
\sum_{i=1}^{m}\sum_{q}c^{*(1)}_{qi}
\varphi^{*(0)}_{q}\varphi^{(0)}_{i}d\tau \Bigg\}
\end{align}
Here the unperturbed density $\rho^{(0)}$ is:
\begin{equation}\label{}
\rho^{(0)} = \sum_{i=1}^{m}\varphi^{*(0)}_{i}\varphi^{(0)}_{i}d\tau
\end{equation}
Therefore the first-order perturbed density $\rho^{(1)}$ is:
\begin{equation}\label{PTIMQMeq:29}
\rho^{(1)} = \sum_{i=1}^{m}\sum_{p}c^{(1)}_{pi}
\varphi^{*(0)}_{i}\varphi^{(0)}_{p}d\tau +
\sum_{i=1}^{m}\sum_{q}c^{*(1)}_{qi}
\varphi^{*(0)}_{q}\varphi^{(0)}_{i}d\tau
\end{equation}
Here the index $i$ runs over all occupied orbitals, while the
indices $p$ and $q$ run over all orbitals (occupied as well as
virtual).
Using the relation (\ref{PTIMQMeq:28}), we can further simplify the
expression for $\rho^{(1)}$. Splitting the sums in
(\ref{PTIMQMeq:29}) into occupied-occupied and occupied-virtual
contributions gives (assuming there are $n$ basis functions, so that
$n$ orbitals are generated by the Hartree-Fock equation):
\begin{multline}\label{}
\rho^{(1)} = \sum_{i=1}^{m}\sum_{j=1}^{m}c^{(1)}_{ji}
\varphi^{*(0)}_{i}\varphi^{(0)}_{j}d\tau +
\sum_{i=1}^{m}\sum_{a=m+1}^{n}c^{(1)}_{ai}
\varphi^{*(0)}_{i}\varphi^{(0)}_{a}d\tau + \\
\sum_{i=1}^{m}\sum_{j=1}^{m}c^{*(1)}_{ji}
\varphi^{*(0)}_{j}\varphi^{(0)}_{i}d\tau +
\sum_{i=1}^{m}\sum_{a=m+1}^{n}c^{*(1)}_{ai}
\varphi^{*(0)}_{a}\varphi^{(0)}_{i}d\tau
\end{multline}
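A short remark added for clarity (not in the original text): if relation
(\ref{PTIMQMeq:28}) holds for every pair of occupied orbitals, the two
occupied-occupied double sums above cancel each other, so that only the
occupied-virtual terms contribute to $\rho^{(1)}$. A minimal numerical check of this
cancellation (all names are placeholders, the ``grid'' is hypothetical):
\begin{verbatim}
# With c*(1)_ji = -c(1)_ij for occupied i, j, the occupied-occupied
# part of rho^(1) vanishes up to rounding error.
import numpy as np

rng = np.random.default_rng(2)
m, npts = 4, 50                               # occupied orbitals, grid points

M = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
c1 = (M - M.conj().T) / 2                     # enforces relation (28)
phi = rng.normal(size=(m, npts)) + 1j * rng.normal(size=(m, npts))

rho1_occ_occ = np.zeros(npts, dtype=complex)
for i in range(m):
    for j in range(m):
        rho1_occ_occ += c1[j, i] * phi[i].conj() * phi[j]            # first sum
        rho1_occ_occ += c1[j, i].conj() * phi[j].conj() * phi[i]     # third sum

print(np.max(np.abs(rho1_occ_occ)))           # ~1e-15: the occupied block cancels
\end{verbatim}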
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../main"
%%% End:
% !TeX spellcheck = de_DE
\chapter{Methodologies}
\label{chap:k2}
The following sections introduce the principles of the 3D lane marking reconstruction method of this work, based on the workflow shown in \cref{fig:FlowChart}.
\cref{sec:LineExtraction} describes the applied standard line detection algorithm for labeling the lane markings.
To relate the object coordinates of a point with its image coordinates, \cref{sec:Geometry} introduces the imaging properties of aerial images and their mathematical models, including the collinearity equation and lens distortion correction.
\cref{sec:LineFitting} introduces the principle of line fitting and further presents the orthogonal regression model with line equations in two-point form.
A non-linear LS model is derived and estimated in \cref{sec:LSadj}, combining line fitting with the collinearity condition for 3D lane marking reconstruction.
In \cref{sec:LineProjectionOnDSM} the generation of approximate 3D line segments is described, since initial values of the unknown quantities are required in the nonlinear LS estimation.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{workflow.pdf}
\caption{\small The work flow}
\label{fig:FlowChart}
\end{figure}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Lane Marking Properties and Automatic Extraction}
\label{sec:LineExtraction}
The appearance of lane markings on German roads, including line type, color and width, is specified depending on the road type. Different line types of lane markings are shown in \cref{fig:LaneMarkingTypes} and their line widths are defined in \cref{tab:LaneMarkingWidths}. As shown in \cref{tab:DashedLaneMarkingLengths}, the dashed lane markings on motorways have a length of 6 meters.
Because of this appearance, the problem of lane marking detection can be treated as a line detection problem. We restrict the proposed framework to lane markings consisting of single white lines (dashed or continuous) of 0.3 meter width. Other types, such as markings in restricted zones, double lines, parking areas, and temporary yellow lines in construction sites, are excluded.% https://www.transchool.lee.army.mil/adso/documents/zeichen.pdf
\begin{figure}
\centering
\subfloat[\small Continuous line]{\fbox{\includegraphics[width=0.3\textwidth, trim={0 150 0 150},clip=true]{Laengsmarkierung_durchgehend.pdf}}}
\quad
\subfloat[\small Dashed line]{\fbox{\includegraphics[width=0.3\textwidth, trim={0 150 0 150},clip=true]{Laengsmarkierung_unterbrochen.pdf}}}
\newline
\subfloat[\small Continuous and dashed double lines]{\fbox{\includegraphics[width=0.3\textwidth, trim={0 150 0 150},clip=true]{Laengsmarkierung_unterbrochen_durchgehend.pdf}}}
\quad
\subfloat[\small Continuous double lines]{\fbox{\includegraphics[width=0.3\textwidth, trim={0 150 0 150},clip=true]{Laengsmarkierung_durchgehend_doppelt.pdf}}}
\quad
\subfloat[\small Dashed double lines]{\fbox{\includegraphics[width=0.3\textwidth, trim={0 150 0 150},clip=true]{Laengsmarkierung_unterbrochen_doppelt.pdf}}}
\caption{\small Line types of lane markings \cite{RMS1}}
\label{fig:LaneMarkingTypes}
\end{figure}
%\setlength{\floatsep}{16pt plus 1.0pt minus 2.0pt}
\begin{table} [h!]
\centering
\begin{tabular}{l|cc}
\toprule
& motorways\footnote{\label{motorway}and corresponding roads in the sense of the VwV-StVO to § 42 to mark 330 (motorway) II} & other roads\\
\midrule
narrow lines & $0.15$ [m] & $0.12$ [m] \\
wide lines & $0.30$ [m] & $0.25$ [m] \\
\bottomrule
\end{tabular}
\caption{\small Widths of lane markings \cite{RMS1}}
\label{tab:LaneMarkingWidths}
% Der Deutsche Verkehrssicherheitsrat
% https://www.dvr.de/download/publikationen-schriftenreihe-17.pdf
% Richtlinien für die Markierung von Straßen (RMS) Teil 1
\end{table}
\setlength{\floatsep}{16pt plus 1.0pt minus 2.0pt}
\begin{table} [h!]
\centering
\begin{tabular}{l|ccc}
\toprule
& motorways\textsuperscript{\ref{motorway}} & \multicolumn{2}{c}{other roads}\\
\cline{3-4}
& & in town & out of town\\
\midrule
line / gap & $6$ [m] / $12$ [m] & $3$ [m] / $6$ [m] & $4$ [m] / $8$ [m]\\
\bottomrule
\end{tabular}
\caption{\small Lengths of dashed lane markings with ratio 1:2 \cite{RMS1}}
\label{tab:DashedLaneMarkingLengths}
% Der Deutsche Verkehrssicherheitsrat
% https://www.dvr.de/download/publikationen-schriftenreihe-17.pdf
% Richtlinien für die Markierung von Straßen (RMS) Teil 1
\end{table}
\clearpage
% https://en.wikipedia.org/wiki/Edge_detection#Approaches
There are many algorithms for line detection. The Prewitt line detector uses two orthogonal gradient operators in which all pixels have the same weight. The Sobel detector also uses two orthogonal gradient operators, but the pixel weights are not equal: the closer a pixel lies to the center of the operator, the higher its weight. The Canny edge detector searches for local extrema of the gradient to locate line features and is still a state-of-the-art edge detector. Edge detectors that perform better than the Canny detector usually require higher computational complexity or a larger number of parameters. Edge drawing \cite{Topal2012} first spots anchors along rows and columns with the Sobel detector and then joins these anchors to extract line features.
In this work, line features are extracted by first deriving the line direction for each pixel using the partial derivatives of a Gaussian smoothing kernel. Pixels that exhibit a local maximum of the second directional derivative perpendicular to the line direction are marked as line points. After thresholding the second directional derivative values, the accepted line points are linked and connected \cite{Steger1998}.
The resulting connected points composing a line are of sub-pixel precision. \cref{fig:LineExtraction} shows the extracted lines on part of the masked original image.
% http://www.mvtec.com/doc/halcon/11/en/lines_gauss.html
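As an illustration only (an assumption-laden sketch, not the implementation used in this
work), the ridge-point criterion behind the applied extractor can be written down
compactly: the Hessian of the Gaussian-smoothed image provides the line direction, and
pixels with a strong second directional derivative perpendicular to that direction are
kept. The sub-pixel localisation and the linking step of \cite{Steger1998} are omitted
here; all function and parameter names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def line_point_candidates(img, sigma=2.0, threshold=5.0):
    img = img.astype(float)
    # second-order Gaussian derivatives (axis 0 = row, axis 1 = column)
    r_yy = gaussian_filter(img, sigma, order=(2, 0))
    r_xx = gaussian_filter(img, sigma, order=(0, 2))
    r_xy = gaussian_filter(img, sigma, order=(1, 1))

    # Hessian eigenvalue with the largest absolute value; its eigenvector
    # is perpendicular to the line direction
    tmp = np.sqrt(((r_xx - r_yy) * 0.5) ** 2 + r_xy ** 2)
    lam1 = (r_xx + r_yy) * 0.5 + tmp
    lam2 = (r_xx + r_yy) * 0.5 - tmp
    strength = np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)

    # bright lines on a dark background yield strongly negative values
    return strength < -threshold
\end{verbatim}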
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth, trim=750 260 200 360,clip]{ML1234_extlines_rsz.png}
\caption{\small Lane marking extraction. The extracted long lane lines are marked in green and the dashed ones in yellow. Note that both cases are reconstructed in 3D with the same framework; the different colors are for illustration only.}
\label{fig:LineExtraction}
\end{figure}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Imaging Properties of Aerial Photographs}
\label{sec:Geometry}
This section describes the geometric model of the projection of 3D points into the image generated by a real camera. We first restrict the discussion in \cref{subsec:Collinearity} to the central perspective projection, from which the collinearity equations originate. We then model deviations from this ideal model, addressing real cameras with imperfect lenses, in \cref{subsec:LensDistortion}.
\subsection{Collinearity Equations}
\label{subsec:Collinearity}
We assume frame photography, i.e. photographs exposed on a frame chip in one instant, and assume the central projection model with cameras that have a single viewpoint, a planar sensor, and are straight-line-preserving. Collinearity denotes the condition that the image point (on the sensor plane of the camera), the observed point (in object space) and the projection center of the camera were aligned at the moment the picture was taken. Every measured point leads to two collinearity equations, describing the transformation from object space to image coordinates:
\begin{equation} \label{eq:collinearity}
\begin{split}
x = x_0 -c \dfrac {r_{11}(X-X_0) + r_{21}(Y-Y_0) + r_{31}(Z-Z_0)} {r_{13}(X-X_0) + r_{23}(Y-Y_0) + r_{33}(Z-Z_0)} \\
y = y_0 -c \dfrac {r_{12}(X-X_0) + r_{22}(Y-Y_0) + r_{32}(Z-Z_0)} {r_{13}(X-X_0) + r_{23}(Y-Y_0) + r_{33}(Z-Z_0)}
\end{split}
\end{equation}
where\newline
$(x, y)$: image coordinates of the point \newline
$(x_0, y_0)$: image coordinates of principal point \newline
$c$: principal distance; focal length \newline
$(X, Y, Z)$: object coordinates of the point \newline
$(X_0, Y_0, Z_0)$: object coordinates of projection center \newline
$r_{11},...,r_{33}$: elements of the rotation matrix R (orthogonal 3$\times$3-matrix from object space to image space, with 3 independent angles $\omega$, $\phi$ and $\kappa$)
\subsection{Lens Distortion Correction}
\label{subsec:LensDistortion}
An original image deviates to some degree from a perspective mapping due to lens distortion, lens refraction or non-planarity of the sensor surface. There are several models that describe these perturbing effects and can be used to undistort the images, resulting in rectified images which are straight-line-preserving.
A subset of the physical distortion model \cite{Fraser1997} is chosen, with two radially symmetric distortion parameters $A_1$ and $A_2$, two asymmetric parameters $B_1$ and $B_2$, a scaling parameter $C_1$ and an affine shearing parameter $C_2$. Assuming $x\prime$ and $y\prime$ to be the distorted image coordinates, the corrections $\Delta x$ and $\Delta y$ are calculated by the following equations:
\begin{equation} \label{eq:LensDistortion}
\begin{split}
\Delta x &= x_p + A_1x_*(r^2-R_0^2) + A_2x_*(r^4-R_0^4) + B_1(r^2+2x_*^2) + B_22x_*y+C_2y \\
\Delta y &= y_p + A_1y (r^2-R_0^2) + A_2y (r^4-R_0^4) + B_1(r^2+2y^2) + B_22x_*y
\end{split}
\end{equation}
with $r=\sqrt{x_*^2+y^2}$, $x_*=\dfrac{x}{C_1}$ and radius\footnote{At the radius $R_0$ the radial symmetric distortion is zero by definition, which avoids too high distortion values at the edges and reduces the correlation with the focal length.} $R_0$ being set %to 0.014m which corresponds
to a third of the sensor diagonal.
The undistorted image coordinates $x$ and $y$ are then calculated by
\begin{equation} \label{eq:undistortedimgcoord}
\begin{split}
x=x\prime+\Delta x \\
y=y\prime+\Delta y
\end{split}
\end{equation}
\subsection{Extended Collinearity Equation}
\label{subsec:ExtendedCollinearity}
As real cameras only approximate the perspective camera model, lens distortion correction can be additionally included in the collinearity model, attempting to correct the pixel positions so that they obey the perspective model with sufficient accuracy.%[W. Förstner et al. 2016] % The reference is not the original one
By inserting \eqref{eq:collinearity} and \eqref{eq:LensDistortion} into \eqref{eq:undistortedimgcoord}, the relationship between a 3D point $\mathbf{P}(X, Y, Z)$ and its corresponding distorted image coordinates $\mathbf{p}(x\prime,y\prime)$ can be described as
\begin{equation} \label{eq:expandedcollinearity}
\begin{split}
x\prime =& x_0-c\dfrac{r_{11}(X-X_0)+r_{21}(Y-Y_0)+r_{31}(Z-Z_0)}{r_{13}(X-X_0)+r_{23}(Y-Y_0)+r_{33}(Z-Z_0)} \\
&-(x_p + A_1x_*(r^2-R_0^2) + A_2x_*(r^4-R_0^4) + B_1(r^2+2x_*^2) + B_22x_*y+C_2y)\\
y\prime =& y_0-c\dfrac{r_{12}(X-X_0)+r_{22}(Y-Y_0)+r_{32}(Z-Z_0)}{r_{13}(X-X_0)+r_{23}(Y-Y_0)+r_{33}(Z-Z_0)} \\
&-(y_p + A_1y (r^2-R_0^2) + A_2y (r^4-R_0^4) + B_1(r^2+2y^2) + B_22x_*y)
\end{split}
\end{equation}
To express \eqref{eq:expandedcollinearity} shortly, a function $\mathcal{G}$ is defined as
\begin{equation} \label{eq:Gfunction}
\mathbf{p} = \mathcal{G}(\mathbf{q},\mathbf{P})
\end{equation}
which takes the interior and exterior orientations as well as the lens distortion parameters of a camera $\mathbf{q}(x_0,y_0,c,X_0,Y_0,Z_0,r_{11},...,r_{33},A_1,A_2,B_1,B_2,C_1,C_2)$ and the position of a 3D point $\mathbf{P}(X, Y, Z)$, and returns the corresponding distorted image coordinates $\mathbf{p}(x\prime,y\prime)$.
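For illustration, a hedged sketch of the mapping $\mathcal{G}$ is given below: it applies
the collinearity equations \eqref{eq:collinearity} and then subtracts the distortion
terms as in \eqref{eq:expandedcollinearity}. The dictionary layout of the parameter set
\texttt{q} and all identifiers are assumptions made for this sketch only.
\begin{verbatim}
import numpy as np

def project_G(q, P):
    X, Y, Z = P
    dX, dY, dZ = X - q["X0"], Y - q["Y0"], Z - q["Z0"]
    R = q["R"]                                   # 3x3 rotation matrix with elements r_ij
    denom = R[0, 2] * dX + R[1, 2] * dY + R[2, 2] * dZ
    x = q["x0"] - q["c"] * (R[0, 0] * dX + R[1, 0] * dY + R[2, 0] * dZ) / denom
    y = q["y0"] - q["c"] * (R[0, 1] * dX + R[1, 1] * dY + R[2, 1] * dZ) / denom

    # distortion terms, evaluated at the undistorted coordinates as in the text
    xs = x / q["C1"]
    r2 = xs**2 + y**2
    dx = (q["xp"] + q["A1"] * xs * (r2 - q["R0"]**2) + q["A2"] * xs * (r2**2 - q["R0"]**4)
          + q["B1"] * (r2 + 2 * xs**2) + q["B2"] * 2 * xs * y + q["C2"] * y)
    dy = (q["yp"] + q["A1"] * y * (r2 - q["R0"]**2) + q["A2"] * y * (r2**2 - q["R0"]**4)
          + q["B1"] * (r2 + 2 * y**2) + q["B2"] * 2 * xs * y)
    return np.array([x - dx, y - dy])            # distorted image coordinates (x', y')
\end{verbatim}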
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Line Fitting}
\label{sec:LineFitting}
Line fitting is the process of constructing an infinite straight line that best fits a 2D dataset. One approach is linear regression, which attempts to find the linear function that ``best'' predicts the dependent variable as a function of the independent variable. In this work, ``best'' prediction is understood in the \gls{ls} sense: minimization of the sum of squared residuals (the differences between the measured and the estimated values of the dependent variable).
In the case of standard linear regression, the regressor $x$ is assumed to be error-free; inconsistencies\footnote{The word ``inconsistencies'' refers to the unobserved random errors, also called measurement errors.} are attributed only to the dependent variable $y$. Geometrically this means that the vertical distances from the observed data to the fitted line are minimized. To minimize the perpendicular distances from the data points to the regression line instead, an orthogonal regression model is derived in \cref{subsec:OrthogonalRegression}.
For a later combination with the point-wise extended collinearity equation \eqref{eq:Gfunction} in the next section, the aim is to fit the line equation in two-point form to the observed dataset. For such a nonlinear functional relation between the variables, a nonlinear LS model is derived in \cref{subsec:NonLinear}.
A functional model is unsolvable when the assumed ``dependent'' variable is in fact not a function of the independent variable, i.e. the assumed functional relation does not exist. Take, for example, an observed set of 2D points with Cartesian coordinates $\{x_i,y_i\}^n_{i=1}$ on a vertical line $x=\mathrm{const}$. Their $y$ values have no dependency on their $x$ values, i.e. knowledge of $x$ tells nothing about $y$. Therefore, for this dataset, the functional model $y=f(x)$ is singular. In such cases, however, $x$ is a function of $y$ (in fact a constant function), and the equation system which models the dependent variable $x$ as a function of the independent variable $y$ becomes solvable. Regarding the dataset used in this work (described in more detail in \cref{sec:Materials}), where the observed 2D points scatter mainly in the column direction in image space, the functional relation between the variables $x$ and $y$ is therefore set up as $x=f(y)$ to avoid a weakly determined equation system.
%\subsection{Simple Linear Regression}
%\label{subsec:LinearRegression}
%A simple linear regression model describes the linear relationship between a dependent variable and a regressors (an independent variable). By assuming the regressor $y$ being exactly measured without errors, it accounts only for errors $e_x$ in the dependent variable $x$.
%Given a dataset $\{x_i,y_i\}^n_{i=1}$ of $n$ points on a 2D plane, the model takes the form:
%\begin{equation} \label{eq:SimpleLinearRegression}
%x_i - e_{x_i} = a_0 + a_1y_i
%\end{equation}
%where the regression coefficients $a_0$ and $a_1$ are the unknown parameters to be estimated; the error variable $e_x$ is an unobserved random variable that adds noise to the linear relationship between the dependent variable $x$ and regressor $y$.
%https://en.wikipedia.org/wiki/Linear_regression#Assumptions
\subsection{Orthogonal Regression}
\label{subsec:OrthogonalRegression}
A linear regression model describes a dependent variable as a linear function of the regressor (an independent variable). Given a dataset $\{x_i,y_i\}^n_{i=1}$ of $n$ points in a 2D plane, in the case where both the dependent variable $x_i$ and the regressor $y_i$ are measured with errors, the linear regression model takes the form:
\begin{equation} \label{eq:MixModel1-1}
x_i - e_{x_i} = %a_0 + a_1(y_i-e_{y_i}) =
a_0 + a_1\bar{y_i}
\end{equation}
where the regression coefficients $a_0$ and $a_1$ are the unknown parameters to be estimated, $\bar{y_i}$ denotes the true but unobserved regressor, and the error variable $e_{x_i}$ is an unobserved random variable that adds noise to the linear relationship between the dependent variable $x$ and the true regressor $\bar{y_i}$. The true regressor $\bar{y_i}$ is in turn observed with an error $e_{y_i}$ in the pseudo-observation equation:
\begin{equation} \label{eq:MixModel1-2}
y_i-e_{y_i} = \bar{y_i}
\end{equation}
Such models, like the combination of \eqref{eq:MixModel1-1} and \eqref{eq:MixModel1-2}, which take into account the measurement errors of both the dependent variable and the regressor, are errors-in-variables models. Furthermore, in the case of equal error variances, i.e. when $\delta=\dfrac{\sigma_{e_x}}{\sigma_{e_y}}=1$, it is an orthogonal regression model, which minimizes the perpendicular distances from the data points to the regression line.
\subsection{Orthogonal Regression in Two-point Form}
\label{subsec:NonLinear}
The two-point form of an infinite line in the Cartesian plane passing through the points $(x_1,y_1)$ and $(x_2,y_2)$ is given by:
\begin{equation} \label{eq:LineInTwoPointForm}
(x-x_1) = \dfrac{(x_2-x_1)}{(y_2-y_1)}\times (y-y_1)
\end{equation}
with $y_2\neq y_1$, where $(x,y)$ is any point on the line.
Let the unknown coordinates of two different points on a line in 2D space be $(x_1,y_1)$ and $(x_2,y_2)$ and the observed 2D points be $\{x_i,y_i\}^n_{i=1}$ with measurement errors $e_{x_i}$ and $e_{y_i}$ in both variables. The orthogonal regression model in two-point form is:
\begin{equation} \label{eq:MixModel2-1}
x_i - e_{x_i}= (x_1-\dfrac{(x_2-x_1)}{(y_2-y_1)}\times y_1) + \dfrac{(x_2-x_1)}{(y_2-y_1)}\times \bar{y_i}
\end{equation}
\begin{equation} \tag{\ref{eq:MixModel1-2} revisited}
y_i-e_{y_i} = \bar{y_i}
\end{equation}
To express \eqref{eq:MixModel2-1} and \eqref{eq:MixModel1-2} shortly, a function $\mathcal{F}$ is defined as
\begin{equation} \label{eq:Ffunction}
\hat{\mathbf{p}} = \mathcal{F}(\mathbf{p}_s,\mathbf{p}_e,y)
\end{equation}
which takes the 2D coordinates of a start-point $\mathbf{p_s}(x_s,y_s)$ and an end-point $\mathbf{p_e}(x_e,y_e)$ that define an infinite line, together with the measured y-coordinate $y$ of an image point $\mathbf{p}(x,y)$, and returns the estimated image coordinates $\mathbf{\hat{p}}(\hat{x},\hat{y})$, which lie on the infinite line $\overline{\mathbf{p_s}\mathbf{p_e}}$.
Note that as a combination of \eqref{eq:MixModel2-1} and \eqref{eq:MixModel1-2}, function $\mathcal{F}$ is actually composed of
\begin{equation} \label{eq:Ffunction_xy}
\begin{split}
\hat{x} = \mathcal{F}^x(\mathbf{p}_s,\mathbf{p}_e,y)\\
\hat{y} = \mathcal{F}^y(\mathbf{p}_s,\mathbf{p}_e,y)
\end{split}
\end{equation}
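A minimal sketch of $\mathcal{F}$ (illustrative only; the function name is a placeholder)
simply evaluates the two-point form at the given y-coordinate:
\begin{verbatim}
def fit_point_F(p_s, p_e, y):
    x_s, y_s = p_s
    x_e, y_e = p_e
    slope = (x_e - x_s) / (y_e - y_s)     # requires y_e != y_s (two-point form)
    return (x_s + slope * (y - y_s), y)   # estimated (x_hat, y_hat) on the line
\end{verbatim}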
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{3D Line Reconstruction with Nonlinear LS Adjustment}
\label{sec:LSadj}
This section describes the process of refining the position of a 3D line segment in object space such that its back-projection into each image best fits the extracted line in image space, as illustrated in \cref{fig:mainidea}.
\begin{figure}
\centering
\subfloat[Before optimization]{\includegraphics[width=0.9\textwidth]{2-4_1_1.png} \label{fig:beforeoptimization}}
\subfloat[After optimization]{\includegraphics[width=0.9\textwidth]{2-4_1_2.png} \label{fig:afteroptimization}}
\caption{\small Before reconstruction, the back-projection of the initial approximate 3D line segment does not fit the extracted 2D lines in the covering images well. After optimization, the back-projection of the reconstructed line segment best fits the extracted 2D lines in all covering images.}
\label{fig:mainidea}
\end{figure}
%%%%%%%%%%%%%%%%%% introduce the subsections
\cref{subsec:LSmodel} introduces the non-linear LS adjustment model with constraints. The observation equations for the LS adjustment are set up in \cref{subsec:ObsEqua}. They describe the fitting of a straight line to the measurements, i.e. the extracted lines, in all covering images, where the fitting lines in the different images are obtained from a single 3D straight line segment through the extended collinearity equation \eqref{eq:expandedcollinearity}. Since collinearity is a point-wise condition, a line segment is represented by its two endpoints. Correspondingly, the observation equations in the LS model are line equations in two-point form.
Through the collinearity condition, a single fitting line in an image spans a plane in $\mathbb{R}^3$. The corresponding lines from different views span several planes in $\mathbb{R}^3$, which are non-parallel to each other and should intersect in a line in $\mathbb{R}^3$. As this infinite line is the solution space of the LS estimation, at least two constraints on the locations of the two endpoints of the targeted 3D line segment are necessary to prevent them from lying at arbitrary positions on this infinite line. The constraint equations are modeled in \cref{subsec:ConEqua}.
With this, the non-linear LS model is set up. In \cref{subsec:LSadj} it is further linearized and the substitute linear LS model is estimated.
%\vspace{20pt}
%%%%% introduce the sliding window
To simplify the problem, a long lane marking is reconstructed piecewise through a sliding window in object space. Each segment is approximated by a straight line, taking into account the maximum curvature of the highway.
In each sliding window, one segment is reconstructed, i.e. a complete non-linear LS adjustment is performed. Only the middle point of the reconstructed line segment is recorded. The sliding window then moves one step size forward, and the 3D reconstruction is performed again, starting from the recorded middle point of the previous line segment. Another line segment is then reconstructed, its middle point is recorded, and so on. These recorded middle points finally form the nodes of the reconstructed line. The process is illustrated in \cref{fig:slidingwindow}.
\begin{figure}
\centering
\subfloat[The first line segment of "sliding window length" is reconstructed, with its starting point and its middle point of "step size" from the starting point being recorded.]{\includegraphics[width=0.9\textwidth]{2-4_2_1.png} \label{fig:slidingwindow1}}
\subfloat[Starting from the recorded node of last process, another line segment of "sliding window length" will be reconstructed. i.e. the sliding window has moved "step size" forward.]{\includegraphics[width=0.9\textwidth]{2-4_2_2.png} \label{fig:slidingwindow2}}
\subfloat[The point of "step size" length from the starting point on the reconstructed segment is recorded.]{\includegraphics[width=0.9\textwidth]{2-4_2_3.png} \label{fig:slidingwindow3}}
\caption{\small 3D reconstruction of a lane marking by a sliding window.}
\label{fig:slidingwindow}
\end{figure}
%%%%% describe the collection of measurements
The measurements for each reconstruction process are collected correspondingly, as shown in \cref{fig:measurementscollection}: the initial line segment is back-projected into image space and buffered by 10 pixels on each side. In this way, all extracted 2D line segments within this region are collected. As shown in \cref{fig:overlappingregion}, reconsidering measurements in the overlapping region of successive sliding windows makes the reconstruction more robust.
\begin{figure}
\centering
\subfloat[The pink points represent all the extracted lines (in the form of sets of points). The green points are the endpoints of the back projected initial approximate 3D line segment.]{\includegraphics[width=0.45\textwidth]{2-4_3_1.png} \label{fig:measurementscollection1}}
\subfloat[The points in the buffering area are collected as the measurements for LS adjustment.]{\includegraphics[width=0.45\textwidth]{2-4_3_2.png} \label{fig:measurementscollection2}}
\caption{\small Measurements collection in image space.}
\label{fig:measurementscollection}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{2-4_3_3.png}
\caption{\small The red circle points out the reconsidered measurements in successive sliding windows.}
\label{fig:overlappingregion}
\end{figure}
\clearpage
\subsection{The Gauss-Markov Model with Constraints}
\label{subsec:LSmodel}
Adjustment theory deals with the optimal combination of redundant measurements/observations, together with the estimation of unknown parameters \cite{Teunissen2000}.
Given are the $N$ observations $\boldsymbol l=[l_n],\,n=1,2,...,N$, from which the $U$ unknown parameters $\boldsymbol x=[x_u],\,u=1,2,...,U$ are to be determined, with generally $U\leq N$.%, and the fixed $H$ constraints $\boldsymbol c_h=[h_h],\,h=1,2,...,H$
The Gauss-Markov model with $N$ nonlinear functions $\boldsymbol f(\boldsymbol x)=[f_n(\boldsymbol x)],\,n=1,2,...,N$ and the $H$ nonlinear constraints $\boldsymbol h(\boldsymbol x)=[h_\eta(\boldsymbol x)],\,\eta=1,2,...,H$ % !!!!!!!!!!!!$\boldsymbol c=[c_h],\,h=1,2,...,H$
($H<U$) between the unknowns can be written as:
\begin{equation} \label{eq:GM-ObsEq}
\boldsymbol l+\widehat{\boldsymbol v}=\boldsymbol f(\widehat{\boldsymbol x})\quad \textnormal{or}\quad \widehat{\boldsymbol l}=\underset{N\times 1}{\boldsymbol f}(\widehat{\boldsymbol x})
\end{equation}
\begin{equation} \label{eq:GM-ConEq}
\underset{H\times 1}{\boldsymbol h}(\widehat{\boldsymbol x})=\mathbf{0}
\end{equation}
where the observations $\boldsymbol l$ are explicit functions of the unknowns $\boldsymbol x$, with the additive residuals $\boldsymbol v$ introduced to the observations $\boldsymbol l$ to achieve consistency.
Assuming that the deviations between the observed values $\boldsymbol l$ and the true values $\widehat{\boldsymbol l}$ are of random nature and have normal (or Gaussian) distribution, the uncertain observations $\boldsymbol l$ are modeled with first and second moments:
\begin{equation}
\boldsymbol l\sim\mathcal{N}(\boldsymbol f(\widehat{\boldsymbol x}) ,\mathsf{\Sigma}_{ll})
\end{equation}
where $\mathsf{\Sigma}_{ll}$ is the variance-covariance matrix of observations $\boldsymbol l$.%, i.e. the observational errors.
The optimal estimate results from minimizing the Mahalanobis distance with given constraints
\begin{equation}
\widehat{\boldsymbol x}=
\operatorname*{argmin}_{\boldsymbol x\,|\,\mathsf{H^T}\boldsymbol x=\boldsymbol c_h}
\boldsymbol v^\mathsf{T}(\boldsymbol x)\:
\mathsf{W}_{ll}\:
\boldsymbol v(\boldsymbol x)
\end{equation}
Using Lagrange multipliers $\boldsymbol \lambda$ we aim to minimize
\begin{equation}\label{eq:targetequation}
\Phi(\boldsymbol x,\boldsymbol \lambda)=
\dfrac{1}{2}
(\boldsymbol l-\boldsymbol f(\boldsymbol x))^\mathsf{T}\:
\mathsf{W}_{ll}\:
(\boldsymbol l-\boldsymbol f(\boldsymbol x))+
\boldsymbol \lambda^\mathsf{T}
(\mathsf{H^T}\boldsymbol x-\boldsymbol c_h)
\end{equation}
with respect to $\boldsymbol x$ and $\boldsymbol \lambda$.
%%%%%%%%%%%%%%%%%%
%%The residuals have the same statistical dispersion as
%%The observational errors:
%%\begin{equation}
%%\mathsf{\Sigma}_{ll}=\mathbb{D}(\boldsymbol l)=\mathbb{D}(\boldsymbol v)=\mathbb{D}(\widehat{\boldsymbol l})
%%\end{equation}
%
%The task is to minimize the weighted sum of residuals:
%\begin{equation}\label{eq:targetfunction}
%\boldsymbol v^\mathsf{T}(\boldsymbol x)\:
%\mathsf{W}_{ll}\:
%\boldsymbol v(\boldsymbol x)
%\end{equation}
%such that
%\begin{equation}\tag{\ref{eq:targetfunction} revisited}
%\quad\boldsymbol h(\boldsymbol x)=\boldsymbol 0
%\end{equation}
%%Using Lagrangian multipliers $\lambda$, with the assumption of equal-weighted observations, the target function has to be minimized:
%%\begin{equation}
%%\mathcal{L}_{\mathsf{A}}(x,\lambda)=\dfrac{1}{2}(\underbrace{\boldsymbol l-\boldsymbol f(\widehat{\boldsymbol x}^a)}_{\Delta\boldsymbol l}-\underbrace{\mathsf{A}\,\widehat{\Delta\boldsymbol x}}_{\widehat{\Delta\boldsymbol l}})^T
%%(\boldsymbol l-\boldsymbol f(\widehat{\boldsymbol x}^a)-\mathsf{A}\,\widehat{\Delta\boldsymbol x})+\lambda^T(\mathsf{H}^T\boldsymbol x-\boldsymbol c_h)
%%\end{equation}
\subsection{Observation Equations}
\label{subsec:ObsEqua}
Let a start-point $\mathbf{P}_s(X_s,Y_s,Z_s)$ and an end-point $\mathbf{P}_e(X_e,Y_e,Z_e)$ of a line segment $L$ in object space be given, together with the camera parameters $\mathbf{q}^j$ of camera $j$, and consider the case where $J$ images cover this line segment. With the extended collinearity model \eqref{eq:expandedcollinearity}, the start- and end-points of this line segment's back-projection into image $j$ have the image coordinates $\mathbf{p}^j_s(x^j_s,y^j_s)$ and $\mathbf{p}^j_e(x^j_e,y^j_e)$:
\begin{equation} \label{eq:obsmodel-collinearity}
\begin{split}
\mathbf{p}^j_s = \mathcal{G}(\mathbf{q}^j,\mathbf{P}_s)\\
\mathbf{p}^j_e = \mathcal{G}(\mathbf{q}^j,\mathbf{P}_e)
\end{split}
\qquad
\begin{split}
\forall j=1,2,...J
\end{split}
\end{equation}
%%%%%%%%%%%%%%% !!!!!!!!!!!!!!!!!! %%%%%%%%%%%%%%%%
%Let $l^j$ be the corresponding line segment of $L$ being extracted (observed) on image $j$. Given a dataset $\{x^j_{l,i},y^j_{l,i}\}^{N^j_l}_{i=1}$ of $N^j_l$ points on line segment $l^j$. Rewriting the orthogonal regression model \cref{eq:MixModel2-1} and \cref{eq:MixModel1-2} in the structure of the Gauss-Markov model gives the observation equations in vector form:
%\begin{equation} \label{eq:Ffunction}
%\mathbf{l}+\hat{\mathbf{v}}=\mathbf{f}(\hat{\mathbf{x}}):\quad
%\begin{bmatrix}
% x^j_{l,i} + \hat{e}_{x^j_{l,i}}\\[0.3em]
% y^j_{l,i} + \hat{e}_{y^j_{l,i}}\\[0.3em]
%\end{bmatrix}
%=
%\begin{bmatrix}
%(\hat{x}^j_s-\dfrac{(\hat{x}^j_e-\hat{x}^j_s)}{(\hat{y}^j_e-\hat{y}^j_s)}\times \hat{y}^j_s) + \dfrac{(\hat{x}^j_e-\hat{x}^j_s)}{(\hat{y}^j_e-\hat{y}^j_s)}\times \hat{\bar{y}}_{l,i}\\
%\hat{\bar{y}}_{l,i}
%\end{bmatrix}
%\end{equation}
%which estimates a start-point $\mathbf{p_s}(x_s,y_s)$ and an end-point $\mathbf{p_e}(x_e,y_e)$ that define a infinite line in image space so that the observed image coordinates $\hat{\mathbf{p}}^j_{l,i}(\hat{x}^j_{l,i},\hat{y}^j_{l,i})$ on defined by . % !!!!!!!!!!!!!!!!!
%%%%%%%%%%%%%%% !!!!!!!!!!!!!!!!!! %%%%%%%%%%%%%%%%
Let $l^j$ be the corresponding line segment of $L$ being extracted (observed) on image $j$. Given a dataset $\{x^j_{l,i},y^j_{l,i}\}^{N^j_l}_{i=1}$ of $N^j_l$ points on line segment $l^j$, their estimated image coordinates $\hat{\mathbf{p}}^j_{l,i}(\hat{x}^j_{l,i},\hat{y}^j_{l,i})$ on the infinite line $\overline{\mathbf{p}^j_s,\mathbf{p}^j_e}$ computed from the orthogonal regression model \eqref{eq:Ffunction} are:
\begin{equation} \label{eq:obsmodel-linefitting}
\hat{\mathbf{p}}^j_{l,i} = \mathcal{F}(\mathbf{p}^j_s,\mathbf{p}^j_e,y^j_{l,i})
\qquad
\forall i=1,2,...N^j_l
\end{equation}
Combining \eqref{eq:obsmodel-collinearity} with \eqref{eq:obsmodel-linefitting} gives function $\mathcal{H}$:
\begin{equation} \label{eq:Hfunction}
\begin{split}
\hat{\mathbf{p}}^j_{l,i} &= \mathcal{F}(\mathcal{G}(\mathbf{q}^j,\mathbf{P}_s),\mathcal{G}(\mathbf{q}^j,\mathbf{P}_e),y^j_{l,i})\\
&=\mathcal{H}(\mathbf{q}^j,\mathbf{P}_s,\mathbf{P}_e,y^j_{l,i})
\qquad
\forall i=1,2,...N^j_l,\quad\forall j=1,2,...J
\end{split}
\end{equation}
which takes the camera parameters $\mathbf{q}^j(x_0,y_0,c,X_0,Y_0,Z_0,r_{11},...,r_{33},A_1,A_2,B_1,B_2,C_1,C_2)$, the object coordinates of $\mathbf{P}_s$ and $\mathbf{P}_e$ which define a line $\overline{\mathbf{P}_s,\mathbf{P}_e}$, and the observed y-coordinate of the point $\mathbf{p}^j_{l,i}$ in image space, and returns the estimated image coordinates $\hat{\mathbf{p}}^j_{l,i}$ on the back-projected line of $\overline{\mathbf{P}_s,\mathbf{P}_e}$.
Corresponding to \cref{eq:Ffunction_xy}, function $\mathcal{H}$ is actually composed of
\begin{equation} \label{eq:Hfunction_xy}
\begin{split}
\hat{x}^j_{l,i} = \mathcal{H}^x(\mathbf{q}^j,\mathbf{P}_s,\mathbf{P}_e,y^j_{l,i})\\
\hat{y}^j_{l,i} = \mathcal{H}^y(\mathbf{q}^j,\mathbf{P}_s,\mathbf{P}_e,y^j_{l,i})
\end{split}
\qquad
\begin{split}
\forall i=1,2,...N^j_l,\quad\forall j=1,2,...J
\end{split}
\end{equation}
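Reusing the illustrative sketches of $\mathcal{G}$ and $\mathcal{F}$ from above, the
composition $\mathcal{H}$ can be sketched as follows (names are placeholders):
\begin{verbatim}
def observe_H(q_j, P_s, P_e, y):
    p_s = project_G(q_j, P_s)             # back-projected start-point in image j
    p_e = project_G(q_j, P_e)             # back-projected end-point in image j
    return fit_point_F(p_s, p_e, y)       # estimated (x_hat, y_hat) on that line
\end{verbatim}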
Since the 3D line reconstruction is done ``segment-wise'', i.e. for one pair of $P_s(X_s,Y_s,Z_s)$ and $P_e(X_e,Y_e,Z_e)$ of interest the measurements $(x^j_{l,i},y^j_{l,i})$ are collected correspondingly, the subscript $l$ representing a specific line segment is omitted in the following.
Each image gives $2\times N^j$ observation equations. These equations are often stacked together and written in vector form as:
\begin{equation} \label{eq:obsvec-singleimage}
\begin{bmatrix}
x^j_1\\[0.3em]
x^j_2\\[0.3em]
\vdots\\[0.3em]
x^j_{N^j}\\[0.5em]
\arrayrulecolor{lightgray} \hline
y^j_1\\[0.3em]
y^j_2\\[0.3em]
\vdots\\[0.3em]
y^j_{N^j}\\[0.5em]
\end{bmatrix}
\doteq
\begin{bmatrix}
\mathcal{H}^x(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{1})\\[0.3em]
\mathcal{H}^x(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{2})\\[0.3em]
\vdots\\[0.3em]
\mathcal{H}^x(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{N^j})\\[0.5em]
\arrayrulecolor{lightgray} \hline
\mathcal{H}^y(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{1})\\[0.3em]
\mathcal{H}^y(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{2})\\[0.3em]
\vdots\\[0.3em]
\mathcal{H}^y(\mathbf{q}^j,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^j_{N^j})\\[0.5em]
\end{bmatrix}
\begin{array}{@{\kern-\nulldelimiterspace}l@{}}
\left.\begin{array}{@{}c@{}}\\[0.3em] \\[0.3em] \\[0.3em] \\[0.5em] \end{array}\right\} N^j\\
\left.\begin{array}{@{}c@{}}\\[0.3em] \\[0.3em] \\[0.3em] \\[0.5em] \end{array}\right\} N^j\\
\end{array}
\end{equation}
where the dotted equality sign indicates inconsistencies between the measured values, $x^j_i$ and $y^j_i$, and the computed values, $\mathcal{H}^x(q^j,P_s,P_e,y^j_i)$ and $\mathcal{H}^y(q^j,P_s,P_e,y^j_i)$.
For all covering images $j=1,2,...J$, there are $2\times\displaystyle\sum_{j=1}^{J}N^j$ observation equations. Written in the structure of the \textit{Gauss-Markov model}, corresponding to \cref{eq:GM-ObsEq}, they are expressed as:
\begin{equation} \label{eq:obsvec-allcam}
\boldsymbol l+\widehat{\boldsymbol v}=\boldsymbol f(\widehat{\boldsymbol x}):\quad
\begin{bmatrix}
x^1_1\\[0.3em]
\vdots\\
x^1_{N^1}\\[0.3em]
\arrayrulecolor{lightgray} \hline
y^1_1\\[0.3em]
\vdots\\
y^1_{N^1}\\[0.3em]
\arrayrulecolor{lightgray} \hline
\vdots\\
\arrayrulecolor{lightgray} \hline
x^J_1\\[0.3em]
\vdots\\
x^J_{N^J}\\[0.3em]
\arrayrulecolor{lightgray} \hline
y^J_1\\[0.3em]
\vdots\\
y^J_{N^J}\\[0.3em]
\end{bmatrix}
+
\begin{bmatrix}
\hat{v}_{x^1_1}\\[0.3em]
\vdots\\
\hat{v}_{x^1_{N^1}}\\[0.3em]
\arrayrulecolor{lightgray} \hline
\hat{v}_{y^1_1}\\[0.3em]
\vdots\\
\hat{v}_{y^1_{N^1}}\\[0.3em]
\arrayrulecolor{lightgray} \hline
\vdots\\
\arrayrulecolor{lightgray} \hline
\hat{v}_{x^J_1}\\[0.3em]
\vdots\\
\hat{v}_{x^J_{N^J}}\\[0.3em]
\arrayrulecolor{lightgray} \hline
\hat{v}_{y^J_1}\\[0.3em]
\vdots\\
\hat{v}_{y^J_{N^J}}\\[0.3em]
\end{bmatrix}
=
\begin{bmatrix}
\mathcal{H}^x(\mathbf{q}^1,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^1_1)\\[0.3em]
\vdots\\
\mathcal{H}^x(\mathbf{q}^1,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^1_{N^1})\\[0.3em]
\arrayrulecolor{lightgray} \hline
\mathcal{H}^y(\mathbf{q}^1,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^1_1)\\[0.3em]
\vdots\\
\mathcal{H}^y(\mathbf{q}^1,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^1_{N^1})\\[0.3em]
\arrayrulecolor{lightgray} \hline
\vdots\\
\arrayrulecolor{lightgray} \hline
\mathcal{H}^x(\mathbf{q}^J,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^J_1)\\[0.3em]
\vdots\\
\mathcal{H}^x(\mathbf{q}^J,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^J_{N^J})\\[0.3em]
\arrayrulecolor{lightgray} \hline
\mathcal{H}^y(\mathbf{q}^J,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^J_1)\\[0.3em]
\vdots\\
\mathcal{H}^y(\mathbf{q}^J,\hat{\mathbf{P}}_s,\hat{\mathbf{P}}_e,y^J_{N^J})\\[0.3em]
\end{bmatrix}
\begin{array}{@{\kern-\nulldelimiterspace}l@{}}
\left.\begin{array}{@{}c@{}}\\ \\ \\ \\ \\ \\[18pt] \end{array}\right\}2\times N^1\\
\left.\begin{array}{@{}c@{}}\\ \end{array}\right. \vdots \\
\left.\begin{array}{@{}c@{}}\\ \\ \\ \\ \\ \\[18pt] \end{array}\right\}2\times N^J\\
\end{array}
\end{equation}
with the amount of observations:
\begin{equation}
N=2\times\displaystyle\sum_{j=1}^{J}N^j
\end{equation}
The unknown parameters in the \textit{Gauss-Markov model} are
\begin{equation}
\boldsymbol x:\quad
\begin{bmatrix}
X_s\\
Y_s\\
Z_s\\
X_e\\
Y_e\\
Z_e\\
y^1_1\\[0.3em]
\vdots\\
y^J_{N^J}\\[0.3em]
\end{bmatrix}
\end{equation}
with the amount of unknowns:
\begin{equation}
U=6+\displaystyle\sum_{j=1}^{J}N^j
\end{equation}
\clearpage
\subsection{Constraint Equations}
\label{subsec:ConEqua}
There are three constraints on the unknown parameters used in this work:
\begin{itemize}
\item Fixing the X-, Y-coordinates of the start-point using the approximate values:
\item [] \begin{equation} \label{eq:constraint1}
\hat{X}_s-{X_s}^0=0
\end{equation}
\begin{equation} \label{eq:constraint2}
\hat{Y}_s-{Y_s}^0=0
\end{equation}
\item Fixing the length of the line segment (i.e. constraining the relative location of the end-point):
\begin{equation} \label{eq:constraint3}
\sqrt{(\hat{X}_s-\hat{X}_e)^2+(\hat{Y}_s-\hat{Y}_e)^2+(\hat{Z}_s-\hat{Z}_e)^2}-S=0
\end{equation}
\end{itemize}
Only for the very first line segment of a long lane marking are the fixed values ${X_s}^0$ and ${Y_s}^0$ taken from the initial parameter estimates derived in \cref{sec:LineProjectionOnDSM}. From the second line segment onwards, the fixed values ${X_s}^0$ and ${Y_s}^0$ depend on the previously determined values.
The constraint equations \eqref{eq:constraint1}, \eqref{eq:constraint2} and \eqref{eq:constraint3} can be stacked together and written in the structure of the \textit{Gauss-Markov model with constraints}, corresponding to \cref{eq:GM-ConEq}:
\begin{equation} \label{eq:convec}
\boldsymbol h(\widehat{\boldsymbol x})=\mathbf{0}:\quad
\begin{bmatrix}
\hat{X}_s-{X_s}^0\\[0.3em]
\hat{Y}_s-{Y_s}^0\\[0.3em]
\sqrt{(\hat{X}_s-\hat{X}_e)^2+(\hat{Y}_s-\hat{Y}_e)^2+(\hat{Z}_s-\hat{Z}_e)^2}-S\\[0.3em]
\end{bmatrix}
=
\begin{bmatrix}
0\\[0.3em]
0\\[0.3em]
0\\[0.5em]
\end{bmatrix}
\end{equation}
with the amount of constraints:
\begin{equation}
H=3
\end{equation}
\subsection{Least-Squares Estimation for 3D Line Reconstruction}
\label{subsec:LSadj}
The nonlinear equation system is approximated as locally linear for small increments of the unknown quantities. The linearized form is expressed as:
\begin{equation} \label{eq:GM-ObsEq-linear}
\widehat{\Delta\boldsymbol l}=\Delta\boldsymbol l+\widehat{\boldsymbol v}=\underset{N\times U}{\mathsf{A}}\,\widehat{\Delta\boldsymbol x}
\end{equation}
\begin{equation} \label{eq:GM-ConEq-linear}
\boldsymbol c_h=\underset{H\times U}{\mathsf{H^T}}\widehat{\Delta\boldsymbol x}
\end{equation}
where\newline
the $N\times U$ design matrix is the Jacobian of the function evaluated at the approximate values of the unknown parameters
\begin{equation*}
\mathsf{A}=\left.\dfrac{\partial\boldsymbol f(\boldsymbol x)}{\partial\boldsymbol x}\right|_{\boldsymbol x=\widehat{\boldsymbol x}^a}
\end{equation*}
the $U\times H$ constraint matrix is the Jacobian of the constraints
\begin{equation*}
\mathsf{H}=\left.\left(\dfrac{\partial\boldsymbol h(\boldsymbol x)}{\partial\boldsymbol x}\right)^\mathsf{T}\right|_{\boldsymbol x=\widehat{\boldsymbol x}^a}
\end{equation*}
and the residual constraints are % !!!!!!!!!!!!!!!!!!!!!!!!!
\begin{equation*}
\boldsymbol c_h=-\boldsymbol h(\widehat{\boldsymbol x}^a)
\end{equation*}
with the corrections
\begin{equation} \label{eq:GM-ObsEq-linear-l}
\Delta\boldsymbol l=\boldsymbol l-\boldsymbol f(\widehat{\boldsymbol x}^a)=:\widehat{\boldsymbol v}^a
\end{equation}
%and the unknown corrections to the parameters:
\begin{equation} \label{eq:GM-ObsEq-linear-x}
\widehat{\Delta\boldsymbol x}=\widehat{\boldsymbol x}-\widehat{\boldsymbol x}^a
\end{equation}
where $\widehat{\boldsymbol x}^a$ is the approximate values for the estimates of the unknown parameters.
In the \textit{linearized substitute model}, as shown in \eqref{eq:GM-ObsEq-linear} and \eqref{eq:GM-ConEq-linear}, one solves for the increments of the unknowns $\widehat{\Delta\boldsymbol x}$ instead of the unknowns themselves.
%%%%%%%%%%%%%%%%%%%%%%%%
As the lines were extracted independently and with the same procedure, the measurements are assumed to be independent of each other and equally weighted. That is, the weight matrix is an identity matrix:
\begin{equation}
\mathsf{W}_{ll}=
\begin{bmatrix}
1&0&0&\cdots &0\\
0&1&0&\cdots &0\\
0&0&1&\cdots &0\\
\vdots&&&\ddots&\\
0&0&0&\cdots &1
\end{bmatrix}
\end{equation}
The unknown increments $\widehat{\Delta\boldsymbol x}$ of the linearized model can be determined from the extended normal equation system
\begin{equation}
\begin{bmatrix}
\mathsf{A^T\mathsf{W}_{ll}A} & \mathsf{H}\\
\mathsf{H^T} & 0
\end{bmatrix}
\begin{bmatrix}
\widehat{\Delta\boldsymbol x}\\
\lambda
\end{bmatrix}
=
\begin{bmatrix}
\mathsf{A^T}\Delta\boldsymbol l\\
\boldsymbol c_h
\end{bmatrix}
\end{equation}
With the iteration index $\nu$ and the approximate values in the first iteration $\widehat{\boldsymbol x}^{(1)}=\widehat{\boldsymbol x}^{(a)}$, we have
\begin{equation}\label{eq:xiter}
\widehat{\boldsymbol x}^{(\nu+1)}=
\widehat{\boldsymbol x}^{(\nu)}+
\widehat{\Delta\boldsymbol x}^{(\nu)}
\end{equation}
By updating the parameters using \cref{eq:xiter}, the LS estimation is applied iteratively until convergence is achieved.
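For illustration, one iteration of this constrained estimation can be sketched as
follows; the callables \texttt{f}, \texttt{h} and their Jacobians stand for the
observation and constraint functions of this chapter, and all identifiers are
assumptions of this sketch only.
\begin{verbatim}
import numpy as np

def constrained_gn_step(x_a, l_obs, f, jac_f, h, jac_h, W=None):
    A = jac_f(x_a)                        # N x U design matrix (Jacobian of f)
    H = jac_h(x_a).T                      # U x H constraint matrix (transposed Jacobian of h)
    dl = l_obs - f(x_a)                   # reduced observations Delta l
    ch = -h(x_a)                          # residual constraints c_h
    if W is None:
        W = np.eye(A.shape[0])            # equally weighted, uncorrelated observations

    nH = H.shape[1]
    N_mat = A.T @ W @ A
    K = np.block([[N_mat, H],
                  [H.T, np.zeros((nH, nH))]])   # extended (bordered) normal matrix
    rhs = np.concatenate([A.T @ W @ dl, ch])
    sol = np.linalg.solve(K, rhs)
    dx = sol[:N_mat.shape[0]]
    lam = sol[N_mat.shape[0]:]            # Lagrange multipliers
    return x_a + dx, dx, lam

# repeat the step until the increments dx become negligible (convergence)
\end{verbatim}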
The redundancy of the problem is
\begin{equation}
R=N+H-U=\displaystyle\sum_{j=1}^{J}N^j-3
\end{equation}
The matrix $\mathsf{A}$ does not need to have full rank, but the block matrix $[\mathsf{A}^\mathsf{T},\mathsf{H}]$ must have full rank in order to guarantee that the estimation problem has a unique solution.
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Two kinds of singular cases can occur. First, a configuration defect in object space: since the 3D reconstruction approach relies on the intersection of multiple projection rays from different views, the problem is not solvable if only one image covers the targeted line segment, or if the targeted line segment lies in the epipolar planes of every pair of its covering images. Second, a configuration defect in image space: as mentioned in \cref{sec:LineFitting}, the problem is also not solvable when the extracted line lies (nearly) in row direction in all covering images. If the targeted line segment lies only in the epipolar planes of some of the stereo pairs, the problem remains solvable, but those stereo pairs do not contribute to the solution. Similarly, if the extracted line segments lie (nearly) in row direction only in some of the images, the problem is solvable, but those images do not contribute measurements to the estimation.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
After LS estimation, the estimated variance-covariance matrix of the estimated parameters is
\begin{equation} \label{eq:postSigmaXX}
\begin{split}
\hat{\Sigma}_{\hat{X}\hat{X}}&=
\hat{\sigma}_0^2
(
(\mathsf{A^TW_{ll}A})^{-1}
-
(\mathsf{A^TW_{ll}A})^{-1}
\mathsf{H}
(
\mathsf{H^T}
(\mathsf{A^TW_{ll}A})^{-1}
\mathsf{H}
)^{-1}
\mathsf{H^T}
(\mathsf{A^TW_{ll}A})^{-1}
)\\
&=
\begin{bmatrix}
\hat{\sigma}_{\hat{X}}^2 && \hat{\sigma}_{\hat{X}\hat{Y}} && \hat{\sigma}_{\hat{X}\hat{Z}} \\
\hat{\sigma}_{\hat{Y}\hat{X}} && \hat{\sigma}_{\hat{Y}}^2 && \hat{\sigma}_{\hat{Y}\hat{Z}} \\
\hat{\sigma}_{\hat{Z}\hat{X}} && \hat{\sigma}_{\hat{Z}\hat{Y}} && \hat{\sigma}_{\hat{Z}}^2
\end{bmatrix}
\end{split}
\end{equation}
which depends on both the design matrix $\mathsf{A}$ (i.e. the configuration) and the posterior standard deviation of the measurements $\hat{\sigma}_0$ (i.e. the posterior measurement quality) under the constraints $\mathsf{H}$.
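An illustrative transcription of \eqref{eq:postSigmaXX} into code (assuming
$\mathsf{A^TW_{ll}A}$ is invertible; the function name is a placeholder) reads:
\begin{verbatim}
import numpy as np

def posterior_covariance(A, W, H, sigma0_sq):
    Ninv = np.linalg.inv(A.T @ W @ A)
    corr = Ninv @ H @ np.linalg.inv(H.T @ Ninv @ H) @ H.T @ Ninv
    return sigma0_sq * (Ninv - corr)      # full U x U covariance of the estimates
\end{verbatim}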
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Line Projection on the DSM (Determination of Initial Parameter Estimates)}
\label{sec:LineProjectionOnDSM}
As the target function \eqref{eq:targetequation} introduced in \cref{sec:LSadj} may exhibit multiple local minima, a ``correct'' initial approximation of the unknowns is required for convergence to the correct solution. To provide such an initial 3D line segment, the extracted line features derived in \cref{sec:LineExtraction} can be projected onto a DSM based on the bundle-adjusted exterior and interior orientations. An example is shown in \cref{fig:DSMprofile}.
\begin{figure}%[!h]
\centering
\includegraphics[width=\textwidth]{Test_3D_DSM.png}
\caption{\small The projected line on DSM.}
\label{fig:DSMprofile}
\end{figure}
Given the image coordinates $\mathbf{p}(x,y)$ of a point and the (bundle-adjusted) image orientations $\mathbf{q}$, one degree of freedom remains in the extended collinearity equation \eqref{eq:Gfunction} when solving for the object coordinates $\mathbf{P}(X,Y,Z)$. Combined with a DSM, which provides the height $Z_{DSM}$ for a given position $(X,Y)$, the corresponding object coordinates can be solved iteratively until the increment $\Delta Z$ is small enough, i.e. convergence is achieved. The iterative scheme is illustrated in \cref{fig:ProjectiononDSM} and \cref{alg:LineProjectionOnDSM}.
Since the DSM is a raster (discrete) whereas $X$ and $Y$ are continuous, the DSM height $Z_{DSM}$ for the given point $(X,Y)$ is bilinearly interpolated.
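For completeness, a sketch of such a bilinear interpolation is given below; it assumes a
north-up DSM raster with origin $(X_{\min}, Y_{\max})$ and square cells of size
\texttt{gsd}, which is an assumption of this sketch rather than a property of the data
used here.
\begin{verbatim}
import numpy as np

def dsm_height_bilinear(dsm, X, Y, X_min, Y_max, gsd):
    col = (X - X_min) / gsd               # continuous column index
    row = (Y_max - Y) / gsd               # continuous row index (Y decreases with row)
    c0, r0 = int(np.floor(col)), int(np.floor(row))
    dc, dr = col - c0, row - r0
    z00, z01 = dsm[r0, c0],     dsm[r0, c0 + 1]
    z10, z11 = dsm[r0 + 1, c0], dsm[r0 + 1, c0 + 1]
    return ((1 - dr) * ((1 - dc) * z00 + dc * z01)
            + dr * ((1 - dc) * z10 + dc * z11))
\end{verbatim}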
\begin{figure}%[!h]
\centering
\includegraphics[width=\textwidth]{ProjectionOnDSM.png}
\caption{\small Iterative scheme of single projection ray intersecting DSM.}
\label{fig:ProjectiononDSM}
\end{figure}
\begin{Algorithmus}
\caption{Single Point Projection on DSM\newline
[$X$,$Y$,$Z_{DSM}$]=\texttt{PointProjectionOnDSM}($p(x,y)$,$\mathbf{q}$,$DSM$)\newline
\textbf{Input}: image coordinates of a point $p(x,y)$,
camera parameters $\mathbf{q}$ and
surface model $DSM$\newline
\textbf{Output}: object coordinates of the projected point $P(X,Y,Z_{DSM})$ on DSM
}
\label{alg:LineProjectionOnDSM}
\begin{algorithmic}
\State $\Theta=0.5$
\Comment unit: [meter], the convergence threshold
\State $Z_{initial}=500$
\Comment unit: [meter], an initial height value
\State $\Delta Z=9999$
\Comment unit: [meter], initialization with an arbitrarily large value
\While{$\Delta Z>\Theta$}
\State ($X$,$Y$) =$\mathbf{q}$.img2geo($p(x,y)$,$Z_{initial}$)
\State $Z_{DSM}$ = $DSM$.GetHeight($X$,$Y$)
\State $\Delta Z=|Z_{DSM}-Z_{initial}|$
\State $Z_{initial}=Z_{DSM}$
\EndWhile
\State return ($X$,$Y$,$Z_{DSM}$)
\end{algorithmic}
\end{Algorithmus}
\section{Introduction}
\begin{frame}{Introduction}
\end{frame}
%************************************************
\chapter{Pervasive Authentication Schemes}\label{ch:review}
%************************************************
In this chapter, we present a semi-structured review of pervasive authentication schemes. We evaluate the schemes by rating how well they offer different benefits that we consider relevant for any human-to-computer authentication scheme. The evaluation is based on the comparative evaluation framework for web authentication sch\-emes presented by \citet{bonneau2012quest} (and introduced in the previous chapter). The purpose of this evaluation is to identify the benefits of pervasive authentication and to highlight potential shortcomings. We evaluate the schemes of both \citet{bardram2003context} and \citet{ojala2008wearable}. The Pico authentication scheme \cite{stajano2011pico} and passwords have both already been evaluated in the original paper \cite{bonneau2012quest}. The ratings are shown in table~\ref{table:pre_property_table}.
We have attempted to rate the schemes fairly, by adhering to the property definitions and rating criteria specified by the framework. Additionally, we have attempted to document and describe the rating process in the same way as done in the supplementary technical report \cite{UCAM-CL-TR-817}.
\begin{table}[t]
\centering
\begin{wide}
\resizebox{\linewidth}{!}{
%\rotatebox{270}{
\setlength\tabcolsep{1.8pt}
\begin{tabular}{r|c|cccccccc|cccccc|ccccccccccc}
\multicolumn{2}{c}{} &
\multicolumn{8}{c}{\textbf{Usability}} &
\multicolumn{6}{c}{\textbf{Deployability}} &
\multicolumn{11}{c}{\textbf{Security}}\\
\multicolumn{27}{c}{}
\\
& \rot{\textit{Reference}} &
\rot{\textit{Memorywise-Effortless}} &
\rot{\textit{Scalable-for-Users}} &
\rot{\textit{Nothing-to-Carry}} &
\rot{\textit{Physically-Effortless}} &
\rot{\textit{Easy-to-Learn}} &
\rot{\textit{Efficient-to-Use}} &
\rot{\textit{Infrequent-Errors}} &
\rot{\textit{Easy-Recovery-from-Loss}} &
\rot{\textit{Accessible}} &
\rot{\textit{Negligible-Cost-per-User}} &
\rot{\textit{Server-Compatible}} &
\rot{\textit{Browser-Compatible}} &
\rot{\textit{Mature}} &
\rot{\textit{Non-Proprietary}} &
\rot{\textit{Resilient-to-Physical-Observation}} &
\rot{\textit{Resilient-to-Targeted-Impersonation}} &
\rot{\textit{Resilient-to-Throttled-Guessing}} &
\rot{\textit{Resilient-to-Unthrottled-Guessing}} &
\rot{\textit{Resilient-to-Internal-Observation}} &
\rot{\textit{Resilient-to-Leaks-from-Other-Verifiers}} &
\rot{\textit{Resilient-to-Phishing}} &
\rot{\textit{Resilient-to-Theft}} &
\rot{\textit{No-Trusted-Third-Party}} &
\rot{\textit{Requiring-Explicit-Consent}} &
\rot{\textit{Unlinkable}} \\ \hline
Passwords & &
& %Memorywise-Effortless
& %Scalable-for-Users
\CIRCLE & %Nothing-to-Carry
& %Physically-Effortless
\CIRCLE & %Easy-to-Learn
\CIRCLE & %Efficient-to-Use
\Circle & %Infrequent-Errors
\CIRCLE & %Easy-Recovery-from-Loss
\CIRCLE & %Accessible
\CIRCLE & %Negligible-Cost-per-User
\CIRCLE & %Server-Compatible
\CIRCLE & %Browser-Compatible
\CIRCLE & %Mature
\CIRCLE & %Non-Proprietary
& %Resilient-to-Physical-Observation
\Circle & %Resilient-to-Targeted-Impersonation
& %Resilient-to-Throttled-Guessing
& %Resilient-to-Unthrottled-Guessing
& %Resilient-to-Internal-Observation
& %Resilient-to-Leaks-from-Other-Verifiers
& %Resilient-to-Phishing
\CIRCLE & %Resilient-to-Theft
\CIRCLE & %No-Trusted-Third-Party
\CIRCLE & %Requiring-Explicit-Consent
\CIRCLE %Unlinkable
\\ \hline
Context-Aware Auth~~& \cite{bardram2003context} &
\Circle & %Memorywise-Effortless
\CIRCLE & %Scalable-for-Users
& %Nothing-to-Carry
\Circle & %Physically-Effortless
\CIRCLE & %Easy-to-Learn
\CIRCLE & %Efficient-to-Use
\CIRCLE & %Infrequent-Errors
& %Easy-Recovery-from-Loss
\CIRCLE & %Accessible
& %Negligible-Cost-per-User
& %Server-Compatible
& %Browser-Compatible
& %Mature
\CIRCLE & %Non-Proprietary
\Circle & %Resilient-to-Physical-Observation
\CIRCLE & %Resilient-to-Targeted-Impersonation
& %Resilient-to-Throttled-Guessing
& %Resilient-to-Unthrottled-Guessing
\Circle & %Resilient-to-Internal-Observation
& %Resilient-to-Leaks-from-Other-Verifiers
\Circle & %Resilient-to-Phishing
\Circle & %Resilient-to-Theft
& %No-Trusted-Third-Party
\CIRCLE & %Requiring-Explicit-Consent
%Unlinkable
\\ \hline
Wearable Auth & \cite{ojala2008wearable} &
\CIRCLE & %Memorywise-Effortless
\CIRCLE & %Scalable-for-Users
\Circle & %Nothing-to-Carry
\Circle & %Physically-Effortless
\CIRCLE & %Easy-to-Learn
\CIRCLE & %Efficient-to-Use
\CIRCLE & %Infrequent-Errors
& %Easy-Recovery-from-Loss
\CIRCLE & %Accessible
& %Negligible-Cost-per-User
& %Server-Compatible
& %Browser-Compatible
& %Mature
\CIRCLE & %Non-Proprietary
\CIRCLE & %Resilient-to-Physical-Observation
\CIRCLE & %Resilient-to-Targeted-Impersonation
\CIRCLE & %Resilient-to-Throttled-Guessing
\CIRCLE & %Resilient-to-Unthrottled-Guessing
? & %Resilient-to-Internal-Observation
? & %Resilient-to-Leaks-from-Other-Verifiers
\CIRCLE & %Resilient-to-Phishing
\CIRCLE & %Resilient-to-Theft
? & %No-Trusted-Third-Party
\Circle & %Requiring-Explicit-Consent
? %Unlinkable
\\ \hline
Pico & \cite{stajano2011pico} &
\CIRCLE & %Memorywise-Effortless
\CIRCLE & %Scalable-for-Users
& %Nothing-to-Carry
\CIRCLE & %Physically-Effortless
& %Easy-to-Learn
\Circle & %Efficient-to-Use
\Circle & %Infrequent-Errors
& %Easy-Recovery-from-Loss
& %Accessible
& %Negligible-Cost-per-User
& %Server-Compatible
& %Browser-Compatible
& %Mature
\CIRCLE & %Non-Proprietary
\CIRCLE & %Resilient-to-Physical-Observation
\CIRCLE & %Resilient-to-Targeted-Impersonation
\CIRCLE & %Resilient-to-Throttled-Guessing
\CIRCLE & %Resilient-to-Unthrottled-Guessing
\CIRCLE & %Resilient-to-Internal-Observation
\CIRCLE & %Resilient-to-Leaks-from-Other-Verifiers
\CIRCLE & %Resilient-to-Phishing
\Circle & %Resilient-to-Theft
\CIRCLE & %No-Trusted-Third-Party
\CIRCLE & %Requiring-Explicit-Consent
\CIRCLE %Unlinkable
\\ \hline
\multicolumn{27}{r}{
\CIRCLE~=~offers the benefit
\quad \Circle~=~almost offers the benefit
\quad ?~=~not known}
\quad \\
\end{tabular}}
\end{wide}
\caption[Overview of benefits of related work]{Comparing the benefits of related work in Pervasive Authentication.}
\label{table:pre_property_table}
\end{table}
\section{Context-Aware Authentication}
%\todo[inline]{Revisit this section}
\citet{bardram2003context} presents a prototype and authentication protocol for secure and usable authentication for physicians in hospitals. The system comprises a personal smart-card that can be inserted into hospital computers to access them, and a context-aware subsystem that is, at a minimum, location-aware. If a practitioner tries to access a computer using their key-card and their location matches the workstation's, they are authenticated without further interaction. If the location differs, they are asked to type their password.
When a new key-card is initialized, it generates a public-private key pair and sends the public key to a central server. The key-card uses a one-way authentication protocol, and the user's password is known only to the key-card.
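To make the key-card exchange concrete, the following sketch shows a minimal one-way challenge--response login of the kind described above. It is a sketch of ours, not code from \citet{bardram2003context}: it assumes an Ed25519 key pair and the Python \texttt{cryptography} package, and all class and identifier names are illustrative.

\begin{verbatim}
# Minimal sketch of a one-way challenge-response login with a card-held
# key pair. Names and message formats are illustrative only.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class KeyCard:
    """Holds the private key; only the public key ever leaves the card."""
    def __init__(self):
        self._private_key = Ed25519PrivateKey.generate()

    def public_key(self):
        return self._private_key.public_key()

    def sign_challenge(self, challenge: bytes) -> bytes:
        return self._private_key.sign(challenge)

class AuthServer:
    """Central server that stores each card's public key at enrolment."""
    def __init__(self):
        self._registered = {}          # card_id -> public key

    def enrol(self, card_id: str, public_key):
        self._registered[card_id] = public_key

    def authenticate(self, card_id: str, card: KeyCard) -> bool:
        challenge = os.urandom(32)     # fresh nonce per attempt
        signature = card.sign_challenge(challenge)
        try:
            self._registered[card_id].verify(signature, challenge)
            return True
        except (KeyError, InvalidSignature):
            return False

card, server = KeyCard(), AuthServer()
server.enrol("physician-42", card.public_key())
assert server.authenticate("physician-42", card)
\end{verbatim}

In the actual system, the context-aware subsystem decides whether such an exchange alone suffices or whether the user must additionally type the key-card password.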
We grant the system \textit{Quasi-Memorywise-Effortless} as the user is still required to remember the keycard password.
It is \textit{Scalable-for-Users} as the card could easily submit the same public-key to many verifiers.
It is not \textit{Nothing-to-Carry}; although, in the hospital setup where it is applied, the staff are already required to carry their identity card, so it could qualify as \textit{Quasi-Nothing-to-Carry} in some scenarios.
It is \textit{Easy-to-Learn}, \textit{Efficient-to-Use} and \textit{Infrequent-Errors} (assuming that the context-aware service works most of the time).
It is not \textit{Easy-Recovery-from-Loss} as a new card needs to be issued, and a new public-private key pair needs to be created and submitted to verifiers.
As it is a prototype, deployability benefits are not well documented. However, we grant it \textit{Accessible} and \textit{Non-Proprietary}.
The system is not \textit{Negligible-Cost-per-User} as the setup is very infrastructure-heavy. It requires all employees to have both a key-card and an RF-`badge' that the context server uses to estimate a user's indoor location.
The system is not built to access web services and is, therefore, neither \textit{Browser-Compatible} nor \textit{Server-Compatible}.
However, it could easily be adapted to web services by transmitting the user's public key to every verifier, or even generating a new key pair per verifier. It would, however, still not be \textit{Server-Compatible} or \textit{Browser-Compatible} as defined by the framework.
On the security aspects, we deem it to be \textit{Quasi-Resilient-to-Physical-Observation} as the user only rarely types the password.
However, if the key-card is stolen and the password is known, the adversary has full access, and we, therefore, grant it \textit{Quasi-Resilient-to-Theft}.
It is not \textit{Resilient-to-Phishing} as man-in-the-middle (MITM) attacks are possible.
It is neither \textit{Resilient-to-Throttled-Guessing} nor \textit{Resilient-to-Unthrottled-Guessing}; however, the adversary would have to steal the key-card before they could start guessing. It is not \textit{Unlinkable}.
\section{Wearable Authentication}
\citet{ojala2008wearable} presents a prototype for transparent and continuous authentication with workstations. The system comprises three components: a ZigBee-enabled wearable wrist device that monitors the wearer's vitals, a ZigBee receiver, and the workstation. When the user puts on the watch, it starts to monitor his vitals. The user can then use a fingerprint reader to authenticate with the system. The user remains authenticated for as long as he is wearing the watch. If he takes off the watch, or his vitals stop, then he is logged out after 10 seconds. While the user is authenticated he can approach any workstation that has a receiver and, without further interaction, start using the machine. As soon as he leaves the machine, he is logged out.
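The session logic just described can be summarised in a few lines. The sketch below is ours, not code from the prototype of \citet{ojala2008wearable}: the class name and sampling interface are illustrative, while the 10-second logout delay follows the description above.

\begin{verbatim}
# Minimal sketch of the continuous-authentication session logic described
# above. Class and attribute names are illustrative only.
LOGOUT_DELAY_S = 10.0   # logged out 10 s after vitals stop being sensed

class WearableSession:
    def __init__(self):
        self.authenticated = False
        self.last_vitals_time = None

    def fingerprint_ok(self, now: float):
        """Explicit consent step: the fingerprint reader starts the session."""
        self.authenticated = True
        self.last_vitals_time = now

    def vitals_sample(self, now: float, vitals_present: bool):
        """Called periodically by the wrist device."""
        if vitals_present:
            self.last_vitals_time = now

    def is_authenticated(self, now: float) -> bool:
        if not self.authenticated or self.last_vitals_time is None:
            return False
        if now - self.last_vitals_time > LOGOUT_DELAY_S:
            self.authenticated = False   # watch removed or vitals stopped
        return self.authenticated

s = WearableSession()
s.fingerprint_ok(now=0.0)
s.vitals_sample(now=5.0, vitals_present=True)
assert s.is_authenticated(now=6.0)
assert not s.is_authenticated(now=20.0)   # >10 s without vitals: logged out
\end{verbatim}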
We grant the system \textit{Memorywise-Effortless}, \textit{Scalable-for-Users}, \textit{Easy-to-Learn}, \textit{Efficient-to-Use} and \textit{Infrequent-Errors}. We deem it \textit{Quasi-Noth\-ing-to-Carry}, as a watch is something most users always carry, much like a smartphone for some people. It is not \textit{Easy-Recovery-from-Loss}, as losing the watch means having to get a new one, which must then be linked to the user's existing identities.
As the system, much like `Context-Aware Authentication' \cite{bardram2003context}, is a prototype that is not built for web services, we grant it the same scores for deployability.
On the security side it is \textit{Resilient-to-Physical-Observation}, \textit{Resilient-to-Targeted-Impersonation}, \textit{Resilient-to-Throttled-Guessing}, \textit{Resilient-to-Un\-throttled-Guessing} and \textit{Resilient-to-Theft}. It is \textit{Quasi-Requiring-Explicit-Consent} as the user only gives explicit consent once when using the fingerprint reader.
Other security aspects are not known due to the simplicity of the prototype and are therefore left out of consideration, although we deem them feasible to include.
\subsection{1D Analysis}
\label{ssec:1d-analysis}
\todo{Axel}
One of the most common analysis cases in gamma-ray astronomy is measuring the
spectrum of a source in a given region defined on the sky, which in conventional
astronomy is also called aperture photometry.
The spectrum is typically measured in two steps: first a parametric
spectral model is fitted to the data and secondly flux points are computed
in a pre-defined set of energy bins.
The result of such an analysis performed on three simulated CTA observations
is shown in Fig.~\ref{fig:cta_galactic_center}.
In this case the spectrum was measured in a circular aperture centered
on the Galactic Center, often called the ``on region'' in \gammaray~astronomy.
For such an analysis the user first chooses a region of interest
and an energy binning, both defined by a `RegionGeom`.
In a second step the events and instrument response are binned into
maps of this geometry by the `SpectrumDatasetMaker`.
All the data and reduced instrument response are bundled into a
`SpectrumDataset`.
To estimate the expected background in the on region a "reflected regions"
background method was used~\cite{Berge07}, represented in \gammapy
by the `ReflectedRegionsBackgroundMaker` class.
The resulting reflected regions are illustrated for all three observations
on top of the map of counts.
After reduction, the data were modelled using a forward-folding
method and a power-law spectral shape, using the `PowerLawSpectralModel`
and `Fit` classes.
Based on this best-fit model the final flux points and
corresponding log-likelihood profiles are computed using
the `FluxPointsEstimator`.
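The listing below condenses these steps into a single script, as a minimal sketch assuming the \gammapy~v1.0 high-level interface; the region string, the energy binning and the `observations` list (obtained beforehand from a data store) are placeholders rather than the exact configuration used for Fig.~\ref{fig:cta_galactic_center}.

\begin{verbatim}
# Condensed sketch of the 1D analysis steps above (gammapy v1.0 API);
# the region, energy binning and `observations` list are placeholders.
from gammapy.maps import MapAxis, RegionGeom
from gammapy.datasets import Datasets, SpectrumDataset
from gammapy.makers import (ReflectedRegionsBackgroundMaker,
                            SpectrumDatasetMaker)
from gammapy.modeling import Fit
from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
from gammapy.estimators import FluxPointsEstimator

# Region of interest ("on region") and energy binning
energy_axis = MapAxis.from_energy_bounds("0.1 TeV", "100 TeV", nbin=20)
energy_axis_true = MapAxis.from_energy_bounds(
    "0.05 TeV", "300 TeV", nbin=40, name="energy_true")
geom = RegionGeom.create("galactic;circle(0, 0, 0.2)", axes=[energy_axis])
empty = SpectrumDataset.create(geom=geom, energy_axis_true=energy_axis_true)

# Data reduction: bin events and responses, estimate the background
dataset_maker = SpectrumDatasetMaker(selection=["counts", "exposure", "edisp"])
bkg_maker = ReflectedRegionsBackgroundMaker()
datasets = Datasets()
for obs in observations:  # `observations` obtained beforehand from a DataStore
    dataset = dataset_maker.run(empty.copy(name=str(obs.obs_id)), obs)
    datasets.append(bkg_maker.run(dataset, obs))

# Forward-folding fit of a power-law spectral model
model = SkyModel(spectral_model=PowerLawSpectralModel(), name="gc")
datasets.models = [model]
result = Fit().run(datasets=datasets)

# Flux points and likelihood profiles in pre-defined energy bins
fpe = FluxPointsEstimator(energy_edges=energy_axis.edges, source="gc",
                          selection_optional="all")
flux_points = fpe.run(datasets=datasets)
\end{verbatim}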
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{figures/cta_galactic_center.pdf}
\caption{
Example spectral analysis of the Galactic Center for three simulated CTA observations.
The left image shows the maps of counts with the measurement region and background
regions overlaid in different colors. The right image shows the resulting spectral
points and their corresponding log-likelihood profiles.
}
\label{fig:cta_galactic_center}
\end{figure*}
\documentclass[twoside]{article}
\usepackage{epsfig}
\usepackage{amssymb}
\usepackage{amsmath}
\setlength{\oddsidemargin}{0.25 in}
\setlength{\evensidemargin}{-0.25 in}
\setlength{\topmargin}{-0.6 in}
\setlength{\textwidth}{6.5 in}
\setlength{\textheight}{8.5 in}
\setlength{\headsep}{0.75 in}
\setlength{\parindent}{0 in}
\setlength{\parskip}{0.1 in}
\newtheorem{thm}{Theorem}[section]
\newtheorem{Defn}{Definition}[section]
\newcommand{\lecture}[3]{
\pagestyle{myheadings}
\thispagestyle{plain}
\newpage
\setcounter{page}{1}
\noindent
\begin{center}
\framebox{
\vbox{\vspace{2mm}
\hbox to 6.28in { {\bf ~Probabilistic Graphical Models 10-708 Notes with Koller and Friedman Textbook\hfill} }
\vspace{6mm}
\hbox to 6.28in { {\Large \hfill #1 \hfill} }
\vspace{6mm}
\hbox to 6.28in { {\it Lecturer: #2 \hfill Scribes: #3} }
\vspace{2mm}}
}
\end{center}
\markboth{#1}{#1}
\vspace*{4mm}
}
\begin{document}
\lecture{2 : Directed GMs: Bayesian Networks}{Eric P. Xing}{Xing JunJie} % Lecture name, Lecturer, Scribes
\section{Introduction}
The goal of establishing GMs (Graphical Models) is to represent a joint distribution \(P\) over some set of random variables \(\mathbf{\chi} = \{X_1,\ldots,X_n\}\). Consider the simplest case where each variable is binary-valued: a joint distribution then requires a total of \(2^n - 1\) numbers (the minus 1 comes from the sum-to-one constraint). This explicit representation of the joint distribution is unmanageable from every perspective\footnote{The following mainly quotes from the Koller and Friedman textbook, Ch.~3.}.
\begin{itemize}
\item \textbf{Computationally}, it's very expensive to manipulate and too large to store in memory.
\item \textbf{Cognitively}, it is impossible to acquire so many numbers from a human expert, and the numbers are very small and do not correspond to events that people can reasonably contemplate.
\item \textbf{Statistically}, if we want to learn the distribution from data, we would need ridiculously large amounts of data to estimate this many parameters robustly.
\end{itemize}
However, \textbf{Bayesian Networks} are able to represent compact representations by exploiting \textbf{Independence Properties}.
\section{The \emph{student} Example}
We'll introduce perhaps the simplest example to see how \textbf{independence assumptions} produce a very compact representation of a high-dimensional distribution.
We now assume that a company would like to hire some graduates. The company's goal is to hire intelligent employees, but there is no way to test intelligence directly. However, the company has access to students' SAT scores and course grades. Thus, our probability space is induced by three relevant random variables \(I, S\) and \(G\). We assume that \(G\) takes on three values \(g^1,g^2,g^3\), representing grades \(A, B\) and \(C\); \(I\) takes on two values \(i^0\) (low intelligence) and \(i^1\) (high intelligence); and \(S\) takes on two values \(s^0\) (low score) and \(s^1\) (high score).
We can identify some intuitive dependences and independences in this example. The student's intelligence is clearly correlated with both his SAT score and his grade. The SAT score and grade are also not independent: if we condition on the fact that the student received a high score on his SAT, the chances that he gets a high grade in his class also increase. Thus, we assume that, for our distribution \(P\),\[P(g^1\ |\ s^1) > P(g^1\ |\ s^0)\]
However, it's quite plausible that our distribution \(P\) satisfies a \textbf{conditional independence property}. If we know that the student has high intelligence, a high score on the SAT no longer gives us information about the student's performance in the class. That is:
\[P(g\ |\ i^1,s^1) = P(g\ |\ i^1)\]
Generally, we may assume that \[P\models(S\perp G\ |\ I)\]
Note that this independence holds only if we assume that the student's intelligence is the only reason why his grade and SAT score might be correlated; in other words, it assumes that there are no correlations due to other factors. These assumptions are also not ``true'' in any formal sense of the word, and they are often only approximations of our true beliefs.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{assets/student_nb.png}
\caption{\label{fig:student_nb}Simple Bayesian networks for the \emph{student} example}
\end{figure}
As in the case of marginal independence, conditional independences allows us to provide a compact specification of the joint distribution. The compact representation is based on a very natural alternative parameterization. By simple probabilistic reasoning, we have that
\(P(I,S,G) = P(S,G \ |\ I)P(I).\)
But now, the \textbf{conditional independence assumption} implies
\(P(S,G\ |\ I) = P(S\ |\ I)P(G\ |\ I).\)
Hence, we have that
\(P(I,S,G) = P(S\ |\ I)P(G\ |\ I)P(I)\)
Thus, we have factorized the joint distribution \(P(I,S,G)\) as a product of three conditional probability distributions (CPDs). This factorization immediately leads us to the desired alternative parameterization. Using \(P(I)\), \(P(S\ |\ I)\) and \(P(G\ |\ I)\), we can specify the joint distribution. For example, \(P(i^1,s^1,g^2) = P(i^1)P(s^1\ |\ i^1)P(g^2\ |\ i^1)\).
We note that this probabilistic model would be represented using the Bayesian network shown in Figure \ref{fig:student_nb}.
In this case, the alternative parameterization is more compact than the joint. We now have three binomial distributions --- \(P(I)\), \(P(S\ |\ i^1)\) and \(P(S\ |\ i^0)\), and two three-valued multinomial distributions --- \(P(G\ |\ i^1)\) and \(P(G\ |\ i^0)\). Each of the binomials requires one independent parameter, and each three-valued multinomial requires two independent parameters, for a total of \textbf{seven} (\(3 * (2 - 1) + 2 * (3 - 1)\)). By contrast, our joint distribution has twelve entries and therefore \textbf{eleven} independent parameters.
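As a concrete illustration of this parameterization, the short script below builds the full joint from three CPD tables. The probability values are invented purely for illustration; only the factorization \(P(I,S,G) = P(I)P(S\ |\ I)P(G\ |\ I)\) comes from the text.

\begin{verbatim}
# Building the student-example joint from its three CPDs.
# The probability values are made up; only the structure is from the text.
P_I = {"i0": 0.7, "i1": 0.3}
P_S_given_I = {"i0": {"s0": 0.95, "s1": 0.05},
               "i1": {"s0": 0.20, "s1": 0.80}}
P_G_given_I = {"i0": {"g1": 0.20, "g2": 0.34, "g3": 0.46},
               "i1": {"g1": 0.74, "g2": 0.17, "g3": 0.09}}

def joint(i, s, g):
    """P(I=i, S=s, G=g) under the conditional-independence assumption."""
    return P_I[i] * P_S_given_I[i][s] * P_G_given_I[i][g]

# Seven free parameters specify all 12 joint entries:
total = sum(joint(i, s, g) for i in P_I for s in ("s0", "s1")
            for g in ("g1", "g2", "g3"))
assert abs(total - 1.0) < 1e-12       # the product is a valid distribution
print(joint("i1", "s1", "g2"))        # P(i1, s1, g2) = P(i1)P(s1|i1)P(g2|i1)
\end{verbatim}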
\section{Bayesian Networks}
Bayesian networks build on the same intuition as the naive Bayes model by exploiting conditional independence properties in order to allow a compact and natural representation. However, they are not restricted to the strong independence assumptions the naive Bayes model makes.
The core of the Bayesian network representation is a directed acyclic graph (DAG), whose nodes are the random variables in our domain and whose edges correspond, intuitively, to direct influence of one node on another.
We can view the graph in two ways:
\begin{itemize}
\item a data structure that provides the skeleton for representing \textbf{a joint distribution} compactly in a \emph{factorized} way.
\item a compact representation for \textbf{a set of conditional independence assumptions} about a distribution.
\end{itemize}
\subsection{Factorization Theorem}
Given a DAG, the most general form of the probability distribution that is \textbf{consistent} with the graph factors according to ``\textbf{node given its parents}'':\[P(X) = \prod_{i=1}^{d}{P(X_i\ |\ X_{\pi_i})}\] where \(X_{\pi_i}\) is the set of parents of \(X_i\), and \(d\) is the number of nodes. See Figure \ref{fig:factorize_example} for an example.
This graph can be factorized and represented as follows:
\[
\begin{split}
&P(X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_8) = \\
&P(X_1)P(X_2)P(X_3\ |\ X_1)P(X_4\ |\ X_2)P(X_5\ |\ X_2)P(X_6\ |\ X_3, X_4)P(X_7\ |\ X_6)P(X_8\ |\ X_5, X_6)
\end{split}
\]
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{assets/factorize_example.png}
\caption{\label{fig:factorize_example}Factorize example graph}
\end{figure}
\subsection{Local Structures and Independences}
Graphical models have three fundamental local structures that compose into bigger structures.
\begin{itemize}
\item \textbf{Common parent} Fixing \(B\) decouples \(A\) and \(C\). When two variables \(A\) and \(C\) have a common parent \(B\), conditional independence \(A\perp C\ |\ B\) holds.
\item \textbf{Cascade} Knowing \(B\) decouples \(A\) and \(C\). When the middle node of a cascade of three random variables is known, the conditional independence \(A\perp C\ |\ B\) holds.
\item \textbf{V-structure} If \(C\) is not observed, then \(A\) and \(B\) are independent. However, if it is given, then the independence is lost. (\(A\) and \(B\) are not independent given \(C\)). In this case, \(A\) and \(B\) are \emph{marginally independent}.
\end{itemize}
The unintuitive V-structure can be illustrated by a simple example. Suppose \(A = \) the clock on the tower is accurate, \(B = \) there is a traffic jam on Eric's way to campus, and \(C = \) Eric is on time for class. If Eric is not on time and we learn that the clock is accurate, then our belief that \(B\) occurred is higher.
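The following small calculation (ours, not from the lecture) illustrates the V-structure numerically; all probability values are invented purely to show the ``explaining away'' effect.

\begin{verbatim}
# Numerical illustration of the v-structure A -> C <- B (explaining away).
# A: the tower clock is accurate is replaced here by A: the clock is wrong,
# B: traffic jam, C: Eric is on time.  All numbers are invented.
from itertools import product

P_A = {True: 0.1, False: 0.9}                 # clock wrong?
P_B = {True: 0.2, False: 0.8}                 # traffic jam?
P_C_given_AB = {                              # P(on time | A, B)
    (False, False): 0.95, (True, False): 0.40,
    (False, True): 0.30,  (True, True): 0.10,
}

def joint(a, b, c):
    p_c = P_C_given_AB[(a, b)]
    return P_A[a] * P_B[b] * (p_c if c else 1 - p_c)

def cond_prob_B(evidence):
    """P(B = True | evidence); evidence is a dict over a subset of {'a','c'}."""
    num = den = 0.0
    for a, b, c in product([True, False], repeat=3):
        if any({'a': a, 'c': c}[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, b, c)
        den += p
        if b:
            num += p
    return num / den

print(cond_prob_B({}))                        # 0.200: marginal P(B)
print(cond_prob_B({'a': False}))              # 0.200: A alone says nothing about B
print(cond_prob_B({'c': False}))              # ~0.632: Eric is late
print(cond_prob_B({'c': False, 'a': False}))  # ~0.778: late AND clock was right
\end{verbatim}

Observing \(C\) (Eric is late) makes \(A\) and \(B\) informative about each other, while without it they remain independent.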
\section{I-maps}
\begin{Defn}
Let \(P\) be a distribution over \(X\). We define \(I(P)\) to be the set of independence assertions of the form \((X \perp Y\ |\ Z)\) that hold in P.
\end{Defn}
\begin{Defn}
Let \(K\) be any graph object associated with a set of independences \(I(K)\). Then \(K\) is an I-map for a set of independences \(I\) if \(I(K) \subseteq I\).
\end{Defn}
For example, if a graph \(K\) is totally connected, then it asserts no independences at all; more formally, \(I(K) = \emptyset \subseteq I(P)\) for every distribution \(P\). A complete graph is thus ``useless'': it is always an I-map, but it gives no knowledge about the structure of the distribution.
\subsection{Facts about I-maps}
For \(G\) to be an I-map of \(P\), it is necessary that \(G\) does not mislead us regarding independences in \(P\). In other words, any independence that \(G\) asserts must also hold in \(P\), but conversely, \(P\) may have additional independences that are not reflected in \(G\).
\begin{figure}[!bph]
\centering
\includegraphics[width=0.4\textwidth]{assets/imap_example.png}
\caption{\label{fig:imap_example}I-map example}
\end{figure}
Example:
Consider a joint probability space over two independent random variables \(X\) and \(Y\). There are three possible graphs (as shown in Figure \ref{fig:imap_example}) over these two nodes: \(G_\emptyset\), which is a disconnected pair \(X\) \(Y\); \(G_{X\rightarrow Y}\), which has the edge \(X\rightarrow Y\); and \(G_{Y\rightarrow X}\), which contains \(Y\rightarrow X\). The graph \(G_\emptyset\) encodes the assumption that \((X \perp Y)\). The latter two encode no independence assumptions.
Consider following two distributions:
\begin{table}[!htbp]
\parbox{.3\linewidth}{
\begin{tabular}{cc|c}
\(X\) & \(Y\) & \(P(X, Y)\)\\ \hline
\(x^0\) & \(y^0\) & \(0.08\)\\
\(x^0\) & \(y^1\) & \(0.32\)\\
\(x^1\) & \(y^0\) & \(0.12\)\\
\(x^1\) & \(y^1\) & \(0.48\)\\
\end{tabular}
}
\hfill
\parbox{.3\linewidth}{
\begin{tabular}{cc|c}
\(X\) & \(Y\) & \(P(X, Y)\)\\ \hline
\(x^0\) & \(y^0\) & \(0.4\)\\
\(x^0\) & \(y^1\) & \(0.3\)\\
\(x^1\) & \(y^0\) & \(0.2\)\\
\(x^1\) & \(y^1\) & \(0.1\)\\
\end{tabular}
}
\end{table}
In the example on the left, \(X\) and \(Y\) are independent in \(P\); for example, \(P(x^1) = 0.48 + 0.12 = 0.6\), \(P(y^1) = 0.8\), and \(P(x^1, y^1) = 0.48 = 0.6 \cdot 0.8\). Thus, \((X \perp Y) \in I(P)\), and we have that \(G_\emptyset\) is an I-map of \(P\). In fact, all three graphs are I-maps of \(P\): \(I(G_{X\rightarrow Y})\) is empty, so that trivially \(P\) satisfies all the independences in it (similarly for \(G_{Y\rightarrow X}\)). In the example on the right, \((X \perp Y) \not\in I(P)\), so that \(G_\emptyset\) is not an I-map of \(P\). Both other graphs are I-maps of \(P\).
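As a quick sanity check (ours, not from the notes), the numbers in the left-hand table can be verified programmatically:

\begin{verbatim}
# Checking the left-hand distribution: X and Y are independent, so the
# empty graph is an I-map of P.
P = {("x0", "y0"): 0.08, ("x0", "y1"): 0.32,
     ("x1", "y0"): 0.12, ("x1", "y1"): 0.48}

P_X = {x: sum(p for (xx, _), p in P.items() if xx == x) for x in ("x0", "x1")}
P_Y = {y: sum(p for (_, yy), p in P.items() if yy == y) for y in ("y0", "y1")}

# Every joint entry factors as the product of its marginals.
assert all(abs(P[(x, y)] - P_X[x] * P_Y[y]) < 1e-12
           for x in ("x0", "x1") for y in ("y0", "y1"))
\end{verbatim}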
\subsection{Local independences}
\begin{Defn}
A Bayesian network structure \(G\) is a directed acyclic graph whose nodes represent random variables \(X_1,\ldots,X_n\) . Let \(Pa_{X_i}\) denote the parents of \(X_i\) in \(G\), and \(NonDescendants_{X_i}\) denote the variables in the graph that are not descendants of \( X_i\) . Then \(G\) encodes the following set of \textbf{local conditional independence assumptions} \(I_l (G)\):
\[\textrm{For each variable } X_i:\ (X_i \perp NonDescendants_{X_i}\ |\ Pa_{X_i}).\]
\end{Defn}
In other words, a node \(X_i\) is independent of its non-descendants given its parents.
\section{D-separation}
\textbf{Direct connection}\quad The simple case is that \(X\) and \(Y\) are directly connected via an edge, say \(X \rightarrow Y\). For any network structure \(G\) that contains the edge \(X \rightarrow Y\) , it is possible to construct a distribution where \(X\) and \(Y\) are correlated regardless of any evidence about any of the other variables in the network. In other words, if \(X\) and \(Y\) are directly connected, we can always get examples where they influence each other, regardless of \(Z\).
\begin{figure}[!bth]
\centering
\includegraphics[width=.6\linewidth]{assets/xyz_trail.png}
\caption{\label{fig:xyz_trail} The four possible two-edge trails from \(X\) to \(Y\) via \(Z\)}
\end{figure}
\textbf{Indirect connection}\quad Now consider the more complicated case when \(X\) and \(Y\) are not directly connected, but there is a trail between them in the graph. We begin by considering the simplest such case: a three-node network, where \(X\) and \(Y\) are not directly connected, but where there is a trail between them via \(Z\). It is clear that there are four cases where \(X\) and \(Y\) are connected via \(Z\), as shown in Figure \ref{fig:xyz_trail}.
\begin{itemize}
\item Causal trail \(X \rightarrow Z \rightarrow Y\), and evidential trail \(X \leftarrow Z \leftarrow Y\): active iff \(Z\) is not observed. These two are shown in Figure \ref{fig:xyz_trail}~(a),(b).
\item Common cause \(X \leftarrow Z \rightarrow Y\) : active iff \(Z\) is not observed.
\item Common effect \(X \rightarrow Z \leftarrow Y\) : active iff \(Z\) or one of its descendants is observed.
\end{itemize}
\begin{Defn}
Let \(\mathbf{X}\), \(\mathbf{Y}\), \(\mathbf{Z}\) be three sets of nodes in \(G\). We say that \(\mathbf{X}\) and \(\mathbf{Y}\) are d\textrm{-}separated given \(\mathbf{Z}\), denoted \(d\textrm{-}sep_G(\mathbf{X} ; \mathbf{Y} \ |\ \mathbf{Z})\), if there is no active trail between any node \(X \in \mathbf{X}\) and \(Y \in \mathbf{Y}\) given \(\mathbf{Z}\). We use \(I(G)\) to denote the set of independences that correspond to d-separation:
\[I(G) = \{(\mathbf{X}\perp{\mathbf{Y}}\ |\ \mathbf{Z})\ :\ d\textrm{-}sep_G(\mathbf{X} ; \mathbf{Y} \ |\ \mathbf{Z})\}.\]
This set is also called the set of \textbf{global Markov independences}.
\end{Defn}
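For completeness, the following self-contained sketch (ours, not part of the original notes) decides d-separation using the standard ancestral moral graph criterion: \(\mathbf{X}\) and \(\mathbf{Y}\) are d-separated by \(\mathbf{Z}\) iff \(\mathbf{Z}\) separates \(\mathbf{X}\) from \(\mathbf{Y}\) in the moralized graph of the ancestral subgraph of the three sets together.

\begin{verbatim}
# d-separation via the ancestral moral graph. `dag` must map every node
# (including leaves) to the set of its children.
from collections import deque

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` themselves."""
    parents = {v: set() for v in dag}
    for u, children in dag.items():
        for v in children:
            parents[v].add(u)
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, X, Y, Z):
    keep = ancestors(dag, set(X) | set(Y) | set(Z))
    # Moralize the ancestral subgraph: undirect edges, marry co-parents.
    undirected = {v: set() for v in keep}
    for u in keep:
        for v in dag[u] & keep:
            undirected[u].add(v)
            undirected[v].add(u)
    for v in keep:
        pars = [u for u in keep if v in dag[u]]
        for i, a in enumerate(pars):
            for b in pars[i + 1:]:
                undirected[a].add(b)
                undirected[b].add(a)
    # Remove Z, then test reachability from X to Y with BFS.
    frontier = deque(set(X) - set(Z))
    seen = set(frontier)
    while frontier:
        u = frontier.popleft()
        if u in Y:
            return False
        for v in undirected[u] - set(Z):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return True

# Two of the two-edge trails from the figure, with and without Z observed:
cascade = {"X": {"Z"}, "Z": {"Y"}, "Y": set()}
vstruct = {"X": {"Z"}, "Y": {"Z"}, "Z": set()}
print(d_separated(cascade, {"X"}, {"Y"}, {"Z"}))   # True: observing Z blocks it
print(d_separated(cascade, {"X"}, {"Y"}, set()))   # False
print(d_separated(vstruct, {"X"}, {"Y"}, set()))   # True: v-structure, Z hidden
print(d_separated(vstruct, {"X"}, {"Y"}, {"Z"}))   # False: observing Z activates
\end{verbatim}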
\section{Soundness and completeness}
\textbf{Soundness}\quad If a distribution \(P\) factorizes according to a graph \(G\), then \(I(G) \subseteq I(P)\).
\textbf{Completeness}\quad d-separation detects all possible independences.
However, it is important to note that if \(X\) and \(Y\) are not d-separated given \(Z\) in \(G\), it is not the case that \(X\) and \(Y\) are dependent given \(Z\) in all distributions that factorize over \(G\). For example, consider the graph \(A \rightarrow B\). Clearly, \(A\) and \(B\) can be dependent. Note that every distribution over \(A\) and \(B\) factorizes according to this graph, since it is always true that \(P(A, B) = P(A)P(B\ |\ A)\). But if we consider the specific distribution given in Table~\ref{table:completeness}, then \(A \perp B\). However, we can assert that if \(X\) and \(Y\) are not d-separated given \(Z\), then there is at least one distribution which factorizes according to the graph, and where \(X\) is not independent of \(Y\) given \(Z\). Combining this with the above theorems gives us an important result.
\begin{table}[!hbt]
\centering
\begin{tabular}{c|cc}
& \(b^0\) & \(b^1\)\\ \hline
\(a^0\) & 0.4 & 0.6\\
\(a^1\) & 0.4 & 0.6\\
\end{tabular}
\caption{\label{table:completeness} The distribution specified in this table factorizes according to the graph \(A \rightarrow B\) but \(A\) is independent of \(B\).}
\end{table}
\section{Uniqueness of BN}
Very different BN graphs can actually be equivalent, in that they encode precisely the same set of conditional independence assertions. For example, the three networks in Figure \ref{fig:xyz_trail}(a),(b),(c) encode precisely the same independence assumption: \(X\perp Y\ |\ Z\). Note that the v-structure network in Figure \ref{fig:xyz_trail}(d) induces a very different set of d-separation assertions, and hence it does not fall into the same I-equivalence class as the first three.
\begin{Defn}
Two graph structures \(K^1\) and \(K^2\) over \(X\) are I-equivalent if \(I(K^1) = I(K^2)\). The set of all graphs over \(X\) is partitioned into a set of mutually exclusive and exhaustive I-equivalence classes, which are the set of equivalence classes induced by the I-equivalence relation.
\end{Defn}
\begin{Defn}
The skeleton of a Bayesian network graph \(\mathcal{G}\) over \(X\) is an undirected graph over \(X\) that contains an edge \(\{X, Y\}\) for every edge \((X, Y)\) in \(\mathcal{G}\).
\end{Defn}
\begin{thm}
Let \(\mathcal{G}^1\) and \(\mathcal{G}^2\) be two graphs over \(X\). If \(\mathcal{G}^1\) and \(\mathcal{G}^2\) have the same skeleton and the same set of v-structures then they are I-equivalent.
\end{thm}
\section{Minimum I-Map}
A complete graph is a trivial I-map for any distribution over all the variables, since it does not reveal any of the independence structure in the distribution.
\begin{Defn}
A graph \(\mathcal{K}\) is a minimal I-map for a set of independences \(\mathcal{I}\) if it is an I-map for \(\mathcal{I}\), and if the removal of even a single edge from \(\mathcal{K}\) renders it not an I-map.
\end{Defn}
\section{Perfect Maps}
\begin{Defn}
We say that a graph \(\mathcal{K}\) is a perfect map (P-map) for a set of independences \(I\) if we have that \(I(\mathcal{K}) = I\). We say that \(\mathcal{K}\) is a perfect map for \(P\) if \(I(\mathcal{K}) = I(P)\).
\end{Defn}
Note that not every distribution has a perfect map.
\section{Summary}
\begin{itemize}
\item \begin{Defn}
A Bayesian network is a pair \(B = (G, P)\) where \(P\) factorizes over \(G\), and where \(P\) is specified as a set of CPDs associated with \(G\)'s nodes. The distribution \(P\) is often annotated \(P_B\).
\end{Defn}
\item BN utilizes local and global independences to give a compact representation of the joint distribution.
\item Joint likelihood is computed by multiplying CPDs.
\item Local and global independences are identifiable via d-separation.
\end{itemize}
\end{document}
\section{Implementation}
FST for CM similarity searches has been implemented in
\textsc{infernal} version 1.01 \citep{Nawrocki09}. The filtering
algorithm is the HMM Forward algorithm with a reimplementation of
Weinberg/Ruzzo's ML HMMs \citep{WeinbergRuzzo06}. FST thresholds are
determined for two different main algorithms, the CYK and Inside CM
search algorithms. \textsc{infernal} also implements approximate
E-values for HMM Forward and CM CYK and Inside scores. E-values are
integral to the FST implementation because they allow a predicted
survival fraction $S$ to be calculated from a bit score $T$. FST is
executed using $N$ sampled sequences for a single target sensitivity,
$F$, by \textsc{infernal}'s \texttt{cmcalibrate} program for both
local and globally configured CMs. By default, $N=10,000$ and
$F=0.993$ but both values can be changed by the user. The resulting
pairs of survival thresholds $T$ and reporting thresholds $C$ are
stored in the CM save file and read by the \texttt{cmsearch} program
when a database search is executed. (To avoid storing $N=10,000$
points, a representative set of a few hundred $(T,C)$ pairs is saved
in which no two $C$ values $C_1$, $C_2$ ($C_1 < C_2$) with E-values
$E_1$ and $E_2$ ($E_1 > E_2$) follow $E_2 - E_1 < (0.1 * E_1)$.) For
a search with final algorithm CYK or Inside with reporting threshold
$C'$, $T$ from the CYK/Inside $(T,C)$ pair in the CM file with the
maximum $C < C'$ is selected and $T$ is set as the filter surviving
threshold. The search proceeds by scanning each target sequence in the
database with the filter. For any subsequence $i..j$ that scores above
$T$, the subsequence $j-W+1..i+W-1$ is flagged as a surviving
subsequence. ($W$ is the maximum hit length defined as $dmax(0)$ from
the band calculation algorithm using $\beta=10^{-7}$
\citep{NawrockiEddy07}a). If a second round of filtering is used (as
discussed below), it is used in the same manner, but only on the
surviving subsequences from the first filter. The final algorithm is
used to rescore subsequences that survive all filtering stages. The
complete \textsc{infernal} ANSI C source code is included in the
Supplementary Material.
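The following sketch (ours, not code from \textsc{infernal}) illustrates how a survival threshold is looked up from the stored $(T,C)$ pairs for a given reporting threshold $C'$; the bit-score values are hypothetical.

\begin{verbatim}
# Sketch of the filter-threshold lookup described above: a search with
# reporting threshold C' uses the T of the stored pair whose C is the
# largest value still below C'.  All numbers are hypothetical.
def select_survival_threshold(tc_pairs, c_report):
    """tc_pairs: list of (T, C) bit-score pairs sorted by increasing C."""
    best_t = None
    for t, c in tc_pairs:
        if c < c_report:
            best_t = t          # remember the T of the largest C < C'
        else:
            break
    return best_t

pairs = [(10.2, 15.0), (14.7, 25.0), (20.1, 40.0)]
print(select_survival_threshold(pairs, 30.0))   # -> 14.7 (pair with C = 25.0)
\end{verbatim}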
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UNUSED
\begin{comment}
The following is a description of
how FST is executed and filter thresholds are employed in a database
search using default settings. Many of the parameters can be changed
from their defaults by the user as explained in \citep{infguide03}.
\textsc{infernal}'s \texttt{cmcalibrate} program reads in a CM save
file and executes FST for a pre-defined target sensitivity $F=0.99$ by
sampling $N=10000$ sequences and scoring them with the HMM Forward,
CM Inside and CYK search algorithms. The scores are sorted, and HMM survival
thresholds $T$ are determined for the lowest $90\%$ CM score reporting
thresholds $C$ as defined previously in step 3 of the FST procedure.
%(The calculation
%stops at $9000$ to avoid fitting $C$ to
%stops at score $9000$ lowest scoring threshold because at that point there
%are only $N_C = 999$ sequences with scores greater than $C$.
This procedure generates a single list of ($T,C$) pairs in which both
$T$ and $C$ monotonically increase. A representative set of these
pairs is selected such that each pair of consecutive $C_x$ and
$C_{x+1}$ values obey $C_x < 0.9 C_{x+1}$, and saved to the CM save
file. Determination of $(T,C)$ pairs takes place twice, once for
Inside CM scores, and once for CYK CM scores. The saved $(T,C)$ pairs
are used to set the filter thresholds by the \texttt{cmsearch} program
as follows. As a search with reporting threshold $C'$ is executed
with final algorithm CYK or Inside, $T$ from the CYK or Inside $(T,C)$
pair in the CM file with the maximum $C < C'$ is selected.
$T$ is then converted to an approximate $E$ value using the
empirically fit Gumbel distribution parameters for the Forward
algorithm stored in the CM file. The average length $L$ of a surviving
chunk of the database is calculated using the band calculation
algorithm as $L = 2 * W - \sum_{d = \mbox{dmin}(0)}^{\mbox{dmax(0)}}
\gamma_0(d) * d$ with $dmin$ and $dmax$ with
$\beta=10^{-7}$ \citep{NawrockiEddy07}. The predicted survival
fraction $S$ is calculated as $S=\frac{EL}{Z}$ using target database
size $Z$. If $S$ is less than $S_{max}$ ($0.02$ by default), the
filter threshold $T'$ value that gives a $S_{max}$ is used instead of $T$.
The search proceeds by scanning each target sequence in the database
with the HMM Forward algorithm using an ML HMM constructed from the CM
\citep{WeinbergRuzzo07}. For any subsequence $i..j$ that scores above
$T$, the subsequence $j-W+1..i+W-1$ is flagged as a surviving
subsequence. ($W$ is the maximum hit length defined as $dmax(0)$
from the band calculation algorithm using $\beta=10^{-7}$).
The surviving subsequences are then searched with a second round of
filtering with the banded CYK algorithm using
$\beta=10^{-7}$. FST is not used to calibrate thresholds for this
filter, instead the CYK bit score that corresponds to an E-value of
100*$E$
The CM Inside
or CYK algorithm is then used to search the surviving subsequences
(after merging together any that overlap). Any hit that scores above
the reporting score threshold $C$ is saved and output.
HERE HERE HERE (EXPLAIN THRESHOLDING FOR CYK)
The most computationally expensive step is scoring the sampled
sequences with the CM CYK and Inside algorithm. For an average sized
model, searching a single sequence takes roughly $1$ or $0.3$ seconds
for Inside or CYK respectively, so scoring $10,000$ sequences takes
about $3$ or $1$ hours. To accelerate we implemented a constrained
dynamic programming technique developed by citet{Brown00} that uses
sequence specific bands derived from a first pass HMM alignment of the
target sequence. The technique works poorly when searching random
sequences (and therefore works poorly for accelerating database
searching), but offers significant speedups when there are high
scoring stretches of primary sequence in the sequence being
searched, which are common in the sampled sequences. Empirically, this
technique accelerates the algorithm roughly $10$ to $50$ fold with a
negligible impact on the final score returned by the algorithm.
The \texttt{cmcalibrate} and \texttt{cmsearch} programs are
implemented in coarse-grained MPI for use on clusters.
\end{comment}
\begin{comment}
Though generally applicable to any search method, we developed FST for
accelerating CM searches and decided to test it's performance in that
context.
%We decided to test the performance of the FST strategy for CM
%similarity search using a
We used the the Forward HMM algorithm with ML HMMs as our filtering
method as described by citet{WeinbergRuzzo06}, mainly because of
their previously demonstrated utility for the task, implemented in
version 1.0 of \textsc{infernal} \citep{Nawrocki09}.
\textsc{Infernal}'s \texttt{cmcalibrate} program executes FST for a
pre-defined target sensitivity $F=0.99$ by sampling $N=10000$
sequences and scoring them with both the HMM Forward and CM Inside
search algorithms. (Both $F$ and $N$ can be set to different values by
the user). The scores are sorted, and survival thresholds $T$ are
determined for the $9000$ lowest scoring reporting thresholds $C$ as
defined previously in step 3 of the FST procedure. This determination
stops at score $9000$ lowest scoring threshold because at that point there
are only $N_C = 999$ sequences with scores greater than $C$.
This procedure generates a list of $T$,$C$ pairs ranked by increasing
$C$. A representative set of these pairs, for which no two consecutive
$C$ values differ by less than 90\%, is saved to the CM save
file. Those $T$,$C$ values are then used to set the filter thresholds
in the \texttt{cmsearch} program, when a search with reporting
threshold $C'$ is executed, the $T$ from the $T$,$C$ pair in the CM
file with the maximum $C$ that is less than or equal to $C'$ is
selected and employed as a survival threshold for an HMM filter.
The most computationally expensive step is scoring the sampled
sequences with the CM Inside algorithm. For an average sized model,
searching a single sequence takes roughly $1$ second, so $10,000$
sequences takes about $3$ hours. To accelerate
Inside algorithm is by far the most computationally expensive step
of the pipeline.
A heuristic is used to accelerate the scoring of the $N$ sequences
with the Inside algorithm.
\end{comment}
\chapter*{}
\thispagestyle{empty}
\poemtitle*{Eternity}
% use longest verse
\settowidth{\versewidth}{But he who kisses the joy as it flies}
\begin{verse}[\versewidth]
He who binds to himself a joy \\
Does the winged life destroy; \\
But he who kisses the joy as it flies \\
Lives in eternity's sun rise.
\end{verse}
\attrib{William Blake (1757--1827)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS622: Theory of Formal Languages
% Copyright 2014 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/beacon
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Question 2}
Prove that the context-free grammar $ G = (\{S\},\{a,b,c\},S,\{S\rightarrow SS, S\rightarrow \lambda, S\rightarrow aSb, S\rightarrow bSa, S\rightarrow aSc, S\rightarrow cSa\}) $
generates the language $ L = \{x \in \{a,b,c\}^* | n_a(x) = n_b(x) + n_c(x) \}$.
\subsection*{Solution}
Proof is given in two steps. First we prove $L(G) \subseteq L$. Second, by proving $L \subseteq L(G)$ we convert inclusion to equality.
\begin{enumerate}
\item
It is claimed that for any word $w \in L(G)$, $n_a = n_b + n_c$.
It is clear that using the first two productions cannot violate the condition, for they do not generate terminal symbols.
However, using any of the latter four productions increments $n_a$ by one while incrementing either $n_b$ or $n_c$ by one.
Thus, informally, the statement $w \in L$ holds true.
A more formal solution can be given by induction on length of $w$.
\begin{itemize}[label={}]
\item
Clearly, if $|w| = 0$, $w = \lambda$, $n_a = n_b + n_c = 0$.
If $|w| = 2$, either $S\rightarrow aSx$ or $S\rightarrow xSa$ where $x \in \{b,c\}$ and then $S\rightarrow \lambda$.
In which case, $w \in L$.
Note that we cannot have a word generated by grammar $G$ which is of odd length.
Nor can we have a word $w$ with odd length whose $n_a(w) = n_b(w) + n_c(w)$.
\item
We take as the induction hypothesis that, for $w \in L(G)$ such that $|w| = p$, $w \in L$.
It is shown that for any $w^\prime \in L(G)$ such that $|w^\prime| = p + 2$, $w^\prime \in L$.
Starting from $S$, $w^\prime$ is generated by first generating $w$ such that $|w| = p$, then applying one of the four productions in $G$ that generate terminal symbols, and finally using $S\rightarrow \lambda$.
As discussed previously, applying any one of the four productions increments $n_a$ by one.
As there is no production that increments both $n_b$ and $n_c$ at the same time, the equality $n_a = n_b + n_c$ still holds and therefore $w^\prime \in L$.
\end{itemize}
Thus $L(G) \subseteq L$.
\item
Now we prove $L \subseteq L(G)$, that is, for any $w \in L$, $w$ can be generated from $S$.
It is claimed that grammar $G$ is indifferent to the position and order of the symbols of $w$, as long as they satisfy $n_a(w) = n_b(w) + n_c(w)$.
Proof is given again by induction on length of $w$.
\begin{itemize}[label={}]
\item
Clearly if $|w| = 0$, $w = \lambda \in L$ and $S\xRightarrow[G]{} \lambda = w$, thus $w \in L(G)$.
If $|w| = 2$, $w \in L$ either begins with an $a$ or ends with an $a$.
In the first case, $S\xRightarrow[G]{} aSx \xRightarrow[G]{} ax$ and in the second case, $S\xRightarrow[G]{} xSa \xRightarrow[G]{} xa$, where $x$ is either $b$ or $c$.
Both cases prove $w \in L(G)$.
\item
We take the induction hypothesis that $S\xRightarrow[G]{*}w$ for $|w| = k$.
Let $w = uv$ such that $u,v \in A^*$.
Based on hypothesis, $S \xRightarrow[G]{*} uSv \xRightarrow[G]{} uv = w$.
For any $|w^\prime| = k + 2$, $S \xRightarrow[G]{*} uSv$.
Then either $uSv \xRightarrow[G]{} uaSxv \xRightarrow[G]{} uaxv = w^\prime$ or $uSv \xRightarrow[G]{} uxSav \xRightarrow[G]{} uxav = w^\prime$ where $x$ is $b$ or $c$.
Therefore $w^\prime \in L(G)$.
\end{itemize}
Thus $L \subseteq L(G)$.
\end{enumerate}
Based on the equality, grammar $G$ can be said to generate language $L$ as defined.
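For illustration (not part of the proof), consider the word $abca \in L$, for which $n_a = 2 = n_b + n_c$; one derivation in $G$ uses the productions $S\rightarrow SS$, $S\rightarrow aSb$, $S\rightarrow \lambda$, $S\rightarrow cSa$ and $S\rightarrow \lambda$, in that order:
\[S \xRightarrow[G]{} SS \xRightarrow[G]{} aSbS \xRightarrow[G]{} abS \xRightarrow[G]{} abcSa \xRightarrow[G]{} abca.\]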
\documentclass[12pt]{article}
\usepackage[top=1in, bottom=1in, left=1in, right=1in]{geometry}
\title{Week One: Model Intergrated Computing}
\author{Umesh Timalsina}
\begin{document}
\maketitle
\section*{Prologue}
A little late this week.
Will update soon.
\end{document}
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{book}
\title{Scientific Writing for Health Research}
\author{Ehsan Karim \& Dahn Jeong \& Fardowsa Yusuf}
\date{Last update: 2022-01-23}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{iftex}
\ifPDFTeX
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Scientific Writing for Health Research},
pdfauthor={Ehsan Karim \& Dahn Jeong \& Fardowsa Yusuf},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\newlength{\cslhangindent}
\setlength{\cslhangindent}{1.5em}
\newlength{\csllabelwidth}
\setlength{\csllabelwidth}{3em}
\newlength{\cslentryspacingunit} % times entry-spacing
\setlength{\cslentryspacingunit}{\parskip}
\newenvironment{CSLReferences}[2] % #1 hanging-ident, #2 entry spacing
{% don't indent paragraphs
\setlength{\parindent}{0pt}
% turn on hanging indent if param 1 is 1
\ifodd #1
\let\oldpar\par
\def\par{\hangindent=\cslhangindent\oldpar}
\fi
% set entry spacing
\setlength{\parskip}{#2\cslentryspacingunit}
}%
{}
\usepackage{calc}
\newcommand{\CSLBlock}[1]{#1\hfill\break}
\newcommand{\CSLLeftMargin}[1]{\parbox[t]{\csllabelwidth}{#1}}
\newcommand{\CSLRightInline}[1]{\parbox[t]{\linewidth - \csllabelwidth}{#1}\break}
\newcommand{\CSLIndent}[1]{\hspace{\cslhangindent}#1}
\usepackage{booktabs}
\usepackage{amsthm}
\makeatletter
\def\thm@space@setup{%
\thm@preskip=8pt plus 2pt minus 4pt
\thm@postskip=\thm@preskip
}
\makeatother
\usepackage{tcolorbox}
\newtcolorbox{blackbox}{colback=black,colframe=orange,coltext=white,boxsep=5pt,arc=4pt}
\usepackage{color}
\usepackage{framed}
\setlength{\fboxsep}{.8em}
\ifLuaTeX
\usepackage{selnolig} % disable illegal ligatures
\fi
\usepackage[]{natbib}
\bibliographystyle{apalike}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\newenvironment{blackbox}{
\definecolor{shadecolor}{rgb}{0, 0, 0} % black
\color{white}
\begin{shaded}}
{\end{shaded}}
\hypertarget{project-summary}{%
\chapter*{Project Summary}\label{project-summary}}
\addcontentsline{toc}{chapter}{Project Summary}
This website focuses on scientific communication and manuscript writing: specifically, on communicating with the scientific community about an epidemiological study that has been designed to answer a particular health research question, and on appropriately interpreting the study results. Students often write research articles suitable for submission to an academic peer-reviewed health journal for publication. However, a general challenge of teaching scientific communication is that most of the associated teaching materials and references are either overly general (not discipline-specific and not accounting for applications to healthcare) or not openly accessible (expensive textbooks). Also, students have varying academic backgrounds, and many are not familiar with new scientific writing collaboration tools that (i) are helpful for managing collaborative projects and (ii) can aid in conducting reproducible research. This OER project allows us to make the course content open to health researchers who struggle with scientific writing and to introduce them to tools that will help them manage collaborative group research projects.
The project aims to create and share openly accessible high-quality OER content through a step-by-step educational guide on how to write scientific articles for peer-reviewed journals, with a specific focus on communicating with the health research community. Specific topics include:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Scientific writing basics, components of a good research question
\item
Writing the `Introduction' section, identifying a gap in the literature
\item
Writing the `Methods' section, describing the data sources/collection, study design, and statistical analysis
\item
Presenting tables and figures, and writing the `Results' section
\item
Writing the `Discussion' section, interpreting results, stating the implications, strengths and limitations of the study, and future research
\item
Introducing tools for managing collaborative scientific writing projects and reproducible research (e.g., RMarkdown, GitHub).
\end{enumerate}
\hypertarget{project-goals}{%
\section*{Project Goals}\label{project-goals}}
\addcontentsline{toc}{section}{Project Goals}
\textbf{Background}: The materials on this website have been used for teaching a UBC course (\href{https://ehsank.com/teaching/}{SPPH 504-007} by \href{https://ehsank.com/}{Dr.~Ehsan Karim}). In this course, one of the components is the communication of scientific research findings through scientific writing. The specific goals are stated below.
\textbf{Goals}: The goals of the project are to (1) create and update course content and materials as open educational resources (OERs), (2) openly share a step-by-step guide (i.e., written OER content, primarily text) on how to write scientific articles for peer-reviewed journals (in the ``IMRaD'' format, structured in the following sections: Introduction, Methods, Results, and Discussion), and (3) introduce some cutting-edge tools helpful for managing collaborative scientific writing projects and conducting reproducible research (e.g., RMarkdown, GitHub).
\textbf{Implication}: One obvious implication is increased accessibility to students. Currently, the course SPPH 504-007 is restricted to SPPH Ph.D.~students. This content has been made suitable for open access and will potentially help health researchers across campus interested in publishing manuscripts on their health data research. Examples of people who will directly benefit from this open access project include: master's students (e.g., MSc, MPH, MHA, who cannot currently register for this course), researchers from all the Departments in the UBC Faculty of Medicine, and any other Departments with researchers who conduct healthcare and biomedical data analysis and research.
\textbf{Format}: Content is primarily text, placed on an openly accessible GitHub page. Short videos are embedded to summarize important sections of the content.
\hypertarget{funding}{%
\section*{Funding}\label{funding}}
\addcontentsline{toc}{section}{Funding}
\begin{itemize}
\tightlist
\item
\href{https://oerfund.open.ubc.ca/oer-rapid-innovation-grants/}{UBC OER Rapid Innovation}
\item
\href{https://facultystaff.students.ubc.ca/student-engagement/centre-student-involvement-careers/work-learn}{UBC Work Learn}
\end{itemize}
\hypertarget{version-history}{%
\section*{Version history}\label{version-history}}
\addcontentsline{toc}{section}{Version history}
Initial outlines were created for SPPH 504-007 during 2018-2021. In 2021, through the OER Fund Rapid Innovation Grant (together with \href{https://forestry.ubc.ca/faculty-profile/suborna-ahmed/}{Dr.~Suborna Ahmed}) and the Work Learn program, a GRA was hired to convert the outlines into drafts, which were revised and updated by other contributors.
\hypertarget{contributor-list}{%
\section*{Contributor list}\label{contributor-list}}
\addcontentsline{toc}{section}{Contributor list}
\begin{longtable}[]{@{}ll@{}}
\toprule
\endhead
Dahn Jeong & SPPH, UBC \\
Fardowsa Yusuf & SPPH, UBC \\
\href{https://ehsank.com/}{Ehsan Karim} & SPPH, UBC \\
\bottomrule
\end{longtable}
\hypertarget{license}{%
\section*{License}\label{license}}
\addcontentsline{toc}{section}{License}
\includegraphics[width=0.25\linewidth]{images/by}
The online version of this book is licensed under the \href{https://creativecommons.org/licenses/by/4.0/}{Attribution 4.0 International (CC BY 4.0)} license. You may share and adapt the content.
\begin{rmdcomment}
\textbf{How to cite}
Karim ME, Jeong D, Yusuf F (2021) Scientific Writing for Health
Research. URL:
\url{https://ehsanx.github.io/Scientific-Writing-for-Health-Research/}
\end{rmdcomment}
\hypertarget{feedback-form}{%
\section*{Feedback form}\label{feedback-form}}
\addcontentsline{toc}{section}{Feedback form}
Feel free to \href{https://ehsank.com/}{reach out} for any comments, corrections, suggestions.
Visitors to this website are encouraged to provide feedback on how to improve it and make it more accessible to medical researchers in general. Here is the \href{https://forms.gle/tzJ3YYP7P4edtgnW9}{feedback form}.
\hypertarget{research-question}{%
\chapter{Research Question}\label{research-question}}
The first step in conducting a scientific study is to develop a research question; however, this can be a difficult process. Research questions organize and direct the study, communicate the research study's goal to the readers, define the study's boundaries and limitations, and inform researchers on how to conduct the study. A good research question can serve all of these purposes, but developing a ``good'' research question can be difficult and time-consuming. A good research question should address the clinical or population health problem under investigation. Furthermore, the significance of a study's findings is determined by how well it addresses the research question; therefore, not asking the ``right'' research question can jeopardize the validity of the whole study.
Generally, the process of formulating a new research question begins with a public health or clinical problem that needs to be addressed. In health sciences research, the rationale for conducting a research study is typically to address at least one of three issues: the existing evidence is scarce, the current literature contains conflicting evidence, or the evidence base can be improved \citep{fandino2019formulating}. As a result, conducting a thorough literature search on the topic of interest is frequently required in order to formulate a good research question. In addition, the answers to the research question should address an aspect of the specific problem that was identified, and be supportive of the study's rationale. In many instances, this requires narrowing and specifying the research question from a broader, more general question. It has previously been suggested that using a PICOT (population, intervention, comparator, outcome and time frame) framework can help researchers formulate a good research question and can ensure higher quality reporting in studies \citep{thabane2009posing, rios2010association}. A well-structured research question will guide the implementation of the study as well as the reporting of the results. The FINER criteria may also be used to assess the quality of the research question and refine it as needed \citep{thabane2009posing}. In this chapter, we discuss the PICOT framework and FINER criteria for developing a good research question in population and public health research.
\hypertarget{framing-a-research-question-using-the-picot-framework}{%
\section{Framing a research question using the PICOT framework}\label{framing-a-research-question-using-the-picot-framework}}
Formulating and refining the research question using the PICOT framework can inform which study design is most appropriate, what types of data should be collected and what types of analytical methods are most suitable to answer the research question. The framework has many different variations, but the general framework for studies in health research is as follows:
Table 1: Key elements and guiding questions for the PICOT framework
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.50}}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.50}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Element
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Description and guiding questions
\end{minipage} \\
\midrule
\endhead
\textbf{P}opulation & \textbf{Specify the target population and study sample} \\
& \emph{What is the target population of the data source?} \\
& \emph{How broad or narrow is the target population?} \\
& \emph{To whom will your research question be generalizable?} \\
& \emph{Who is included in the study sample that you are trying to make inferences about?} \\
\textbf{I}ntervention, treatment or exposure & \textbf{Specify the intervention, treatment or exposure} \\
& \emph{What is the primary experimental condition that you want to test?} \\
& \emph{What is the intervention / treatment / diagnostic test / procedure?} \\
& \emph{What is the exposure or the explanatory variable of interest?} \\
\textbf{C}omparator & \textbf{Specify the ``control'' group (e.g., standard of care, control, no exposure)} \\
& \emph{Who is included in the comparison group to contrast with the exposed group?} \\
& \emph{What is the standard of care to which the intervention/treatment is compared?} \\
& \emph{Are there multiple comparison groups?} \\
\textbf{O}utcome & \textbf{Specify the outcome of interest} \\
& \emph{Is the outcome variable objective or subjective?} \\
& \emph{How is the outcome variable measured? Is the outcome quantifiable?} \\
& \emph{Is the measurement tool validated?} \\
& \emph{Is the outcome measurement reproducible? How precise is the measurement?} \\
& \emph{Can temporality be established to avoid the possibility of reverse causality?} \\
& \emph{Are there any secondary outcomes of interest?} \\
\textbf{T}ime frame & \textbf{Specify the time frame in which recruitment, follow-up and data collection will take place} \\
& \emph{When did follow-up happen?} \\
& \emph{When were the measurements taken?} \\
& \emph{How often are the outcomes measured?} \\
\textbf{S}etting (optional) & \textbf{Identify the setting of the study sample to understand the generalizability of findings and provide appropriate interpretations} \\
& \emph{What are the inclusion/exclusion criteria?} \\
\bottomrule
\end{longtable}
\hypertarget{evaluating-research-questions-using-finer-criteria}{%
\section{Evaluating research questions using FINER criteria}\label{evaluating-research-questions-using-finer-criteria}}
Once the research question is developed using the PICOT framework, the FINER criteria can be used to assess the quality of the research question and determine if the research is feasible.
Table 2: Key elements and guiding questions for the FINER criteria
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.50}}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.50}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Element
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Description and guiding questions
\end{minipage} \\
\midrule
\endhead
\textbf{F}easible? & \textbf{The feasibility criterion should ensure that the research is doable and the results will be generated in a reasonable time frame} \\
& \emph{Is the research feasible?} \\
& \emph{Is the exposure/treatment or outcome rare?} \\
& \emph{Will it be possible to obtain an adequate sample size?} \\
& \emph{Is the study population representative of the target population?} \\
& \emph{Is there an appropriate study design to address the research question?} \\
& \emph{Is the scope of the study manageable?} \\
\textbf{I}nteresting? & \textbf{This criterion encourages researchers to think about who the target audience of the research study would be} \\
& \emph{Who would be interested in this research?} \\
& \emph{Who is the target audience?} \\
& \emph{Who would be the knowledge users of this research?} \\
& \emph{How will you make it ``interesting'' to the target audience?} \\
\textbf{N}ovel? & \textbf{The research question should generate evidence that adds to the existing literature} \\
& \emph{Is the research original and novel?} \\
& \emph{Is the research question already answered in the literature?} \\
& \emph{What does this research add?} \\
\textbf{E}thical? & \textbf{It is critical to think about the ethical implications of the proposed study} \\
& \emph{How will the research process and dissemination of findings affect the study participants or the target population?} \\
& \emph{Is this research question ethical?} \\
& \emph{Will the findings of the study harm anyone? Create or exacerbate any stigma?} \\
& \emph{Will this study meet the evaluation criteria of the ethics review board?} \\
\textbf{R}elevant? & \textbf{The proposed research should generate knowledge that is relevant to the topic of interest} \\
& \emph{Will answering the research question provide relevant information for the clinical or public health problem identified?} \\
& \emph{How is this research relevant for the topic in question?} \\
& \emph{Will the findings of this study contribute to the existing literature?} \\
& \emph{Does this research address a current need?} \\
& \emph{Would this research generate further investigations in the future?} \\
\bottomrule
\end{longtable}
\hypertarget{tips-for-formulating-a-good-research-question}{%
\section{Tips for formulating a good research question}\label{tips-for-formulating-a-good-research-question}}
A research question needs to be aligned with the data, methods and results. In addition, a good research question should have the following characteristics: clarity, specificity, empirical support, and relevance. Questions in population and public health research typically ask about phenomena related to health and may focus on comparisons, associations, relationships, or descriptions of variables \citep{creswell2017research}. Once you have a broad, general idea of the question you want to investigate, try to describe the goal of the research study as precisely as possible, for example, the gap in knowledge you want to fill or the new evidence you want to generate for a question previously considered in the literature \citep{vandenbroucke2018ideas}. Determining this objective can be helpful when deciding what types of results you need to present. \citet{vandenbroucke2018ideas} advise describing what table or figure would be required to achieve the goal, for example, the table or figure needed to fill the knowledge gap. Following this process, the questions will become clearer and guide what types of study design and methods are required to achieve the study objective and attain results.
The most common pitfalls when developing research questions are that the questions incorporate the methods or the study's expected outcomes \citep{mayo2013research}. Furthermore, the clarity of the research question can be impeded by the lack of a clear parameter to assess the relationship or association between exposure and outcome \citep{mayo2013research}.
We propose the following overall roadmap for developing a good research question:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Gain an understanding of the research context
\item
Experiment with a few different PICOT(S) combinations
\item
Choose the best set of combinations and narrow the research question
\item
Use the FINER criteria to evaluate the research question's quality
\item
``Prune'' the research question by removing any extraneous details \citep{vandenbroucke2018ideas}
\end{enumerate}
A good research question can inform the study objective, data collection, methodology and the relevance of the findings. Not having a good research question can create confusion for readers and reviewers, make the research aimless, and render the interpretation of the results difficult or pointless. Therefore, developing a clear, well-structured research question is a critical step in any scientific investigation.
\hypertarget{statistical-analysis-plans}{%
\section{Statistical analysis plans}\label{statistical-analysis-plans}}
Statistical analysis plans (SAPs) are also known as data analysis plans (DAPs) or reporting analysis plans (RAPs). A statistical analysis plan describes the study variables and the plan for analyzing the data \textbf{before} conducting the analysis; this is essentially the strategy for connecting the study objective to the data analysis that will answer the research question. SAPs have been used in biomedical research and in clinical trials for many years; statistical analysis plans for clinical trials are registered and made publicly available in repositories such as \href{http://www.clinicaltrials.gov/}{ClinicalTrials.gov}. In fact, the National Institutes of Health (NIH) in the United States established policies for reporting NIH-funded clinical trials in 2016, requiring researchers to report the full protocol and statistical analysis plan, along with levels of specification for outcome measures, information about adverse events and collection method, and baseline information and characteristics associated with primary outcome measures \citep{zarin2016trial}. Pre-registering SAPs can prevent \emph{``P-value hacking''}, which can occur when researchers \emph{``shop around for a statistical test to give them the P-value that they love''} \citep{yuan2019guide}. By registering pre-specified SAPs, researchers can help improve study reproducibility and reduce bias \citep{kahan2020public}.
In observational studies, SAPs are much less widely adopted than in clinical trials \citep{thor2020registering}; however, the discussion around their use and value has been growing. In this chapter, we discuss the use of SAPs for observational studies and propose some key components of an SAP for observational studies.
\hypertarget{the-value-of-statistical-analysis-plans-in-observational-studies}{%
\subsection{The value of statistical analysis plans in observational studies}\label{the-value-of-statistical-analysis-plans-in-observational-studies}}
Many observational studies are based on large datasets, or ``big data,'' defined as heterogeneous datasets linked into a single dataset with a large number of observations and variables, updated either in real time or frequently \citep{ehrenstein2017clinical}. With such big data and powerful statistical software and methods, finding statistically significant associations without pre-established study objectives, research questions and hypotheses has become easier \citep{yuan2019guide}. These types of analyses can produce statistically significant findings that lack clinical relevance or justification. SAPs can be useful in ensuring that the analytical methods are planned ahead of time in relation to the research question and objectives, and that this procedure is transparent.
As the findings from observational studies may have an impact on public health policies, guidelines and decision-making, it is critical to ensure that these studies are of high standard, that analyses are pre-specified based on relevance to public health, and that they are replicable. When there is no pre-established SAP specifying the primary outcome variable, outcome reporting bias can occur \citep{cafri2018mitigating}. Many efforts have been made to reduce reporting bias in observational studies, such as the STROBE guidelines \citep{von2007strengthening}. It has also been suggested that SAPs be used and that only the variables researchers pre-specified as variables of interest be made available to them, to limit post hoc analyses \citep{thomas2012value, williams2010registration}. Some even argue that SAPs should be required before obtaining data, at the application stage of data access \citep{trinh2013statistical, hiemstra2019debate}. In fact, to obtain access to big data, it is often required to submit a data request form that contains some key elements of a SAP \citep{dars2021, popdata2021}.
SAPs also have an important role in identifying potential biases, such as selection bias (based on the inclusion/exclusion criteria) or measurement bias, and can help researchers plan how to minimize and address these biases.
\hypertarget{guide-on-writing-an-sap-for-observational-studies}{%
\subsection{Guide on writing an SAP for observational studies}\label{guide-on-writing-an-sap-for-observational-studies}}
Based on the guidelines for SAPs for clinical trials \citep{gamble2017guidelines} and literature suggesting their adaptation for observational studies \citep{yuan2019guide, thomas2012value, hiemstra2019debate}, we suggest the following four key components for writing an SAP for observational studies in health sciences research:
\begin{itemize}
\item
\textbf{Study objectives and hypotheses}
\begin{itemize}
\tightlist
\item
Broad research area, study background and rationale
\item
Research question (e.g.~using PICOT framework and FINER criteria)
\item
Hypothesis and aims
\end{itemize}
\item
\textbf{Study population}
\begin{itemize}
\tightlist
\item
Study design (e.g.~cross-sectional, prospective cohort)
\item
Study sample and inclusion/exclusion criteria
\item
Study period (time points under consideration in the data source)
\item
Baseline characteristics of study population
\end{itemize}
\item
\textbf{Study variables: definitions, types, how they are measured}
\begin{itemize}
\tightlist
\item
Outcome variables
\item
Explanatory/exposure variables
\item
Covariates (e.g.~mediators, colliders, confounders)
\item
Derived variables
\end{itemize}
\item
\textbf{Statistical analysis methods} (see the sketch after this list)
\begin{itemize}
\tightlist
\item
Defined level for statistical significance
\item
Plans for handling missing data, correlation, bias and confounding, and repetitive analyses
\item
Details on model building and variable selection
\item
Details on additional methods if model assumptions do not hold (e.g.~normality, proportional hazards)
\item
Strategies for interaction or subgroup analysis and sensitivity analyses
\end{itemize}
\end{itemize}
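To make the ``statistical analysis methods'' component above concrete, the sketch below shows one way a pre-specified primary analysis could be written down in R before the data are touched. The dataset and variable names (\texttt{analytic\_data}, \texttt{outcome}, \texttt{exposure}, and the covariates) are hypothetical placeholders, simulated here only so the example runs; they are not taken from any study discussed in this guide.
\begin{verbatim}
# Hypothetical sketch of a pre-specified primary analysis in R.
# All names and the simulated data are placeholders for illustration.
set.seed(2021)
analytic_data <- data.frame(
  outcome  = rbinom(200, 1, 0.3),
  exposure = rbinom(200, 1, 0.5),
  age      = rnorm(200, 45, 12),
  sex      = rbinom(200, 1, 0.5),
  income   = rnorm(200, 50, 15)
)

alpha <- 0.05   # pre-specified significance level

# Primary analysis: logistic regression of the pre-specified binary
# outcome on the exposure, adjusting for pre-specified confounders
primary_model <- glm(outcome ~ exposure + age + sex + income,
                     data = analytic_data, family = binomial())
summary(primary_model)

# Planned sensitivity analyses (e.g., multiple imputation of missing
# covariates) would be coded here, exactly as stated in the SAP.
\end{verbatim}
Writing even a short skeleton like this at the planning stage documents the model, covariates and significance level in advance, which is precisely the transparency that a registered SAP is meant to provide.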
Finally, SAPs can be useful in observational studies because they encourage detailed and rigorous planning of the study rather than disorganized and spontaneous data analysis. They can also optimize the resources to focus on the right methods for the research questions, and ensure methodological transparency and replicability of findings. We propose the following two broad questions that can be used to determine whether the SAP is appropriate:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Does the SAP help in answering the research question or achieving the original study objective?
\item
Are the planned analyses appropriate in the context of the research question?
\end{enumerate}
\hypertarget{introduction-section}{%
\chapter{Introduction Section}\label{introduction-section}}
A scientific research paper generally follows the format of IMRaD (Introduction, Methods, Results and Discussion). The introduction section sets the stage for the entire paper and introduces the topic of interest to the audience. This first section provides a broad context of the issue under investigation, summarizes what is known and unknown, and tries to convince the readers that this particular study will be a valuable addition to current knowledge.
There are several guidelines suggesting how to best structure the introduction section for a research paper \citep[p.84-88]{cals2013effective, bahadoran2018principles, heard2016scientist}. Typically, a well-written introduction section will contain broader background information on the topic, a summary of key existing knowledge relevant to the specific problem, the gap in the current knowledge (rationale), and the research question and/or the hypothesis. Though not essential, some authors may opt to briefly describe the study design and methods.
\hypertarget{funnel-shape}{%
\section{Funnel shape}\label{funnel-shape}}
To better organize the main components of the introduction, it may be useful to build an outline or a skeleton of the section. One approach could be adopting a ``funnel shape'' or an inverted pyramid shape to organize the components. Based on the funnel shape, the introduction section has five key elements going from broad to narrow: big picture, what is known, what is unknown, research question and methods/design (see Figure \ref{fig:funnel}).
\begin{figure}
{\centering \includegraphics[width=0.25\linewidth]{images/funnel}
}
\caption{The typical funnel shape of an Introduction section.}\label{fig:funnel}
\end{figure}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{Big picture}: the introduction starts with the big picture, represented by the broad opening of the funnel shape. The big picture introduces the general context of the research area and provides an overview of ``why this topic or issue is important.'' For a research paper in population and public health, it is a good idea to present the broader background information on the health-related topic. This may include the magnitude of the problem and/or the burden of disease (e.g., incidence, prevalence or cost). The big picture should provide the audience with an understanding of the study outcome or explanatory variable from a public health perspective.
\item
\textbf{What is known}: from the big picture, the author narrows down to a more specific research area under investigation. This part should outline the existing knowledge of the research area by providing a summary of the evidence, including the landmark and recent studies. This summary should cite the most current and comprehensive knowledge on the subject. Remember that the evidence cited should be directly relevant to your specific study and inform your research question. These summaries should focus on the particular exposure or disease of interest (e.g., intervention or outcome elements of the PICOT framework) \citep{thabane2009posing}.
\item
\textbf{What is unknown}: as the funnel further narrows, this part should present a synthesis of the reasons why the issue is important (in the big picture), what is already known, and what is unknown, to convince the audience that there is a need to conduct your specific study. This part can include the gaps in current knowledge, any inconsistencies in the literature, gaps in the methodology or the need for different or better methodology. When describing what is unknown, the author should highlight the importance of conducting the present study and persuade the readers that this analysis was needed (rationale). Who would likely benefit from this study should also be highlighted. For example, if there is a previous study that answered the same research question, a clear and compelling argument on the need for the updated study should be included.
\item
\textbf{Research question}: following the identification of the gap in current knowledge, this part outlines the specific purpose of the study. It should include the study objective and/or hypothesis that will address the identified gap in current knowledge.
\item
\textbf{Methods/design}: as the last stage of the funnel, this part can briefly introduce the approach used to answer the research question. This can include the study design or methods; however, a brief summary is sufficient, as the methodological approach will be described in depth in the methods section.
\end{enumerate}
\hypertarget{examples}{%
\section{Examples}\label{examples}}
\hypertarget{example-1}{%
\subsection{Example 1}\label{example-1}}
The first example is taken from \citet{nisingizwe2020perceived}. You can download the open access PDF from \href{https://bmcpregnancychildbirth.biomedcentral.com/track/pdf/10.1186/s12884-020-2775-8.pdf}{here}.
Table 1: A study about the association between perceived barriers to health care access and inadequate antenatal care visits \citep{nisingizwe2020perceived}
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.15}}
>{\centering\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.25}}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.60}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Elements
\end{minipage} & \begin{minipage}[b]{\linewidth}\centering
Location in the introduction section
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Comments
\end{minipage} \\
\midrule
\endhead
Big picture & 1st paragraph (\emph{``Maternal and neonatal mortality \ldots{}''}) & Authors introduce the public health problem of interest, maternal and neonatal mortality, by presenting the magnitude of the problem and the burden of disease. This paragraph highlights the importance of the public health problem. \\
What is known & 2nd paragraph (\emph{``Timely and frequency of ANC \ldots{}''}) & The authors describe the more specific research area: the relationship between receiving adequate antenatal care (ANC) and barriers to healthcare. \\
& 3rd paragraph (\emph{``However, the country's maternal and neonatal death rates \ldots{}''}) & The authors provide a summary of the existing knowledge relevant to the study that informs the research question. \\
What is unknown & 4th paragraph (\emph{``To date, there is a paucity \ldots{}''}) & The authors present what is unknown: the relationship between perceived barriers to health care and inadequate ANC visits in Rwanda. In addition, they identify previous studies and the gaps in the current knowledge. The clear identification of a research gap supports the stated rationale for conducting this particular study. \\
Research question & 4th paragraph (\emph{``Therefore, this study aims \ldots{}''}) & Following the identification of the gap in current knowledge, the authors present the specific purpose of the study. \\
& 5th paragraph (\emph{``We hypothesized that \ldots{}''}) & The authors present the hypothesis for the research question. \\
& 5th paragraph (\emph{``This study will contribute to \ldots{}''}) & The authors include a brief summary of the key study implications to convince the audience that this research paper will add value to the field of study. \\
Methods/design & & The authors have not included a specific section summarizing the methodological approach used in the study to answer the research question. However, in the 4th paragraph, they indicated that the study will use a ``country representative sample'' from ``2015 DHS data''. \\
\bottomrule
\end{longtable}
\hypertarget{example-2}{%
\subsection{Example 2}\label{example-2}}
The second example is taken from \citet{basham2019multimorbidity}. You can download the open access PDF from \href{https://www.tandfonline.com/doi/epub/10.1080/22423982.2019.1607703}{here}.
Table 2: A study about prevalence of multimorbidity in northern vs.~southern Canada \citep{basham2019multimorbidity}
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.15}}
>{\centering\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.25}}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.60}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Elements
\end{minipage} & \begin{minipage}[b]{\linewidth}\centering
Location in the introduction section
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Comments
\end{minipage} \\
\midrule
\endhead
Big picture & 1st paragraph (\emph{``Multimorbidity is common among \ldots{}''}) & The public health problem of interest, multimorbidity, is introduced along with the magnitude of the problem and the burden of disease. \\
What is known and unknown & 2nd paragraph (\emph{``Northern Canada, which \ldots{}''}) and 3rd paragraph (\emph{``The equivocacy of findings \ldots{}''}) & The more specific research area, multimorbidity in Canadian provinces and territories, is contextualized. The authors provide a summary of relevant previous studies and highlight the gaps within these studies. The synthesis of the ``big picture,'' ``what is known'' and ``what is unknown'' elements supports the rationale for this particular study. \\
Research question & 3rd paragraph (\emph{``The primary aim of this study \ldots{}''}) & Once the need for this particular study is identified, the authors present the specific research question and the hypothesis. \\
Methods/design & 3rd paragraph (\emph{``This study describes multimorbidity \ldots{}''}) & The authors briefly mention the methodological approach used in the study. \\
\bottomrule
\end{longtable}
Through these two examples, we have looked at the key elements of an introduction section of a scientific article in population and public health research. The introduction section provides the general context of the topic (big picture), the narrower research area and what is known, the gap in the existing knowledge, the specific purpose of the study and a summary of the methods and design.
\hypertarget{importance-of-a-hook}{%
\subsection{Importance of a `hook'}\label{importance-of-a-hook}}
As the author and researcher, you have the knowledge of the ``whole story'' of your study from start to end. Hence, you can write the introduction strategically. The introduction section introduces the public health problem to the audience and tries to capture their interest to continue reading. In a newspaper or magazine article, the writer aims to grab the readers' attention with a ``hook'' at the beginning. In a scientific article, although you don't necessarily want to give out all the findings and study implications initially, you should utilize the introduction section to incite the readers' and reviewers' interest. By clearly outlining the key components, the introduction section should convince the audience that the population/public health issue under investigation is critical to address and that your particular study is novel and valuable.
\hypertarget{common-pitfalls}{%
\section{Common pitfalls}\label{common-pitfalls}}
\begin{itemize}
\tightlist
\item
Common pitfalls in the introduction section include \emph{incomplete, inaccurate or outdated reviews of the literature} on the topic. For example, including literature that is tangentially related or within the same field but not directly related to the problem, may result in an incomplete or confusing review of the background knowledge. Including inadequate, incomplete or outdated information may result in the rejection of the paper.
\item
Not adequately explaining the \emph{importance or the relevance of the current knowledge in relation to the study aims} is another pitfall. This can lead to an introduction section that is less effective in communicating the relevance and novelty of your study.
\end{itemize}
\hypertarget{tips}{%
\section{Tips}\label{tips}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Arguably, incorporating \emph{clear study aim(s)} and \emph{rationales for the study objectives} are the most important aspects of the introduction section. (i) Aims should be clearly articulated, and the design of the study should be planned accordingly. (ii) Take time to think about the justification of the current study.
\item
Provide only the \emph{key references} that are needed to describe the background knowledge, as well as what is known and unknown about the topic of interest. Including an excessive amount of literature in the introduction can be distracting. Be mindful that you will have an opportunity to contextualize your research in the literature by comparing your findings with other studies in the Discussion section. The introduction should be focused on setting the tone for what is coming next.
\item
A lengthy introduction can also make the readers lose interest. A general suggestion is that the introduction section should be about 10-15\% of the whole paper \citep{cals2013effective}.
\item
If you already have a general idea of the journals that you would like to submit your article to, the introduction can be tailored to the audience of the target journal. For example, if you are interested in submitting to journals with a heavier focus on methodology or epidemiology, you may want to highlight the novelties in the design or methods. If you are interested in submitting to clinician-focused or subject-specific journals, you may emphasize the clinical or public health implications of the study.
\end{enumerate}
\hypertarget{methods-section}{%
\chapter{Methods Section}\label{methods-section}}
Based on the IMRaD (Introduction, Methods, Results and Discussion) format discussed previously, the second section of a scientific research paper is the methods. The purpose of the methods section is to provide a comprehensive picture of the methodological approach of the study in a straightforward and transparent manner. Broadly, the methods section should allow the readers to understand the dataset under consideration and the analytical steps undertaken in the study, and give a sense of reproducibility of the study \citep{annesley2010and, kotz2013effective3}.
\hypertarget{the-bridge}{%
\section{The ``Bridge''}\label{the-bridge}}
Following the introduction section, the methods section connects the previously developed research question to the results section and equips the readers with all methodological details necessary to interpret the findings that will be presented in the next section. As such, the methods section is the ``bridge'' between the introduction and the results section. Typically, the main elements of a methods section are: study design and data collection; setting, analytic sample and variables; data and statistical analysis.
Furthermore, breaking the methods section into sub-sections can be a helpful way to present the methodological approach in an organized fashion. The main elements of the methods section can be presented as sub-sections.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{Statement of study design and data collection:} the study design (e.g.~cohort study, cross-sectional study, etc.) and other key elements of the study design should be mentioned early in the methods. The data source(s) and how the data were collected should be detailed.
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\tightlist
\item
Sampling strategy: for cross-sectional studies, it is recommended to describe the sampling design for the data source.
\end{enumerate}
\item
\textbf{Setting, analytic sample and variables:} if relevant, the setting, locations and timeframe (e.g.~recruitment, follow-up, data collection) can be described. For the analytic sample, how it was obtained from the target population should be clearly described, including inclusion and exclusion criteria. The key outcome and explanatory variables should be defined, including case definitions or algorithms. You can also present the details of methods of measurement or assessment if they are relevant, or if they are derived from other variables or measures. It is important to identify which covariates were included and why they were included or not included (variable selection).
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\item
For the study population, if there are any differences between people who were included in the study population compared to those who were excluded from the source population, these differences should be described.
\item
Ethics approval: a statement about ethics approval from a research ethics board or other ethical considerations should be included.
\end{enumerate}
\item
\textbf{Data/statistical analysis:} all statistical methods used for descriptive and inferential analyses should be described, including the methods used to control for confounding. If you have conducted sensitivity analyses, these should be described. The methods section should contain descriptions of the statistical approach used for all findings presented in the results section. In general, the statistical program or software and the packages used are also stated in this section.
\end{enumerate}
When writing a scientific paper in epidemiology or in population and public health, the STROBE checklist \citep{von2007strengthening} can be a helpful tool to ensure that you are reporting all necessary components in the methods section. Below we will have a look at two examples of papers presenting survey-based analysis with the STROBE checklist.
\hypertarget{examples-1}{%
\section{Examples}\label{examples-1}}
\hypertarget{example-1-1}{%
\subsection{Example 1}\label{example-1-1}}
This example is taken from \citet{nikiforuk2021influence}, which was based on the NHANES data. You can download the open access PDF from \href{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278694/pdf/12889_2021_Article_11267.pdf}{here}.
Table 1: A study about chronic hepatitis C infection and monocyte-to-platelet ratio \citep{nikiforuk2021influence}
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.15}}
>{\centering\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.25}}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.60}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Elements
\end{minipage} & \begin{minipage}[b]{\linewidth}\centering
Location in the methods section
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Comments
\end{minipage} \\
\midrule
\endhead
Study design & The sub-section titled ``data, design and study population'' & The authors briefly state that this was a cross-sectional study. \\
Setting & The sub-section titled ``data, design and study population'' & The data source for the study, NHANES, is described including the target population, location, timeframe and sampling design. \\
Participants & The second paragraph of the sub-section titled ``analytic sample and variable selection'' & The authors describe how the analytic dataset was created from the data source. Exclusion criteria are also included. Figure 1 in the paper illustrates who were excluded at each stage and helps readers to understand potential sources of selection bias and how these might affect the generalizability. \\
Variables & The first and second paragraphs of the sub-section titled ``analytic sample and variable selection'' & The exposure and outcome variables of interest are clearly defined, including the explanation of how the outcome variable ``monocyte-to-platelet ratio'' was derived from two variables, complete blood count measures of monocyte count and platelet count. The authors include an additional file for a more detailed description. The authors also provide the justification for how covariates were identified, with a directed acyclic graph included as an additional file. \\
Data sources/measurement & The sub-section titled ``analytic sample and variable selection'' & The authors provide information on how the key variables were measured, and further provide a reference for additional information on the data source, NHANES. \\
Bias & Throughout the sub-section titled ``statistical analysis'' & The authors describe all methods applied to adjust for potential confounding, such as in the sub-section ``transformation of the monocyte-to-platelet ratio'' and other methods including missing data and propensity score analyses. \\
Study size & Figure 1 & Figure 1 clearly presents all people who were excluded at each stage and the remaining study sample size. \\
Quantitative variables & The sub-sections titled ``analytic sample and variable selection'' and ``statistical analysis'' & The authors describe how the continuous variables were derived for the outcome variable, and further, how the resulting outcome variable was dichotomized. \\
Statistical methods & Throughout the sub-section titled ``statistical analysis'' & The authors provide adequate information about descriptive, inferential, missing data and sensitivity analyses, including the statistical software and packages. The level of information provided ensures the reproducibility of the results if needed. The authors also include citations for relevant methods or analyses. \\
\bottomrule
\end{longtable}
\hypertarget{example-2-1}{%
\subsection{Example 2}\label{example-2-1}}
This example is taken from \citet{nethery2019household}, which was based on the CCHS data. You can download the open access PDF from \href{https://www.cmajopen.ca/content/cmajo/7/4/E646.full.pdf}{here}.
Table 2: A study about the household income and contraceptive methods \citep{nethery2019household}
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.15}}
>{\centering\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.25}}
>{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.60}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
Elements
\end{minipage} & \begin{minipage}[b]{\linewidth}\centering
Location in the methods section
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Comments
\end{minipage} \\
\midrule
\endhead
Study design & The sub-section titled ``data source, design and study population'' & The authors state that this was a cross-sectional study. \\
Setting & The sub-section titled ``data source, design and study population'' & The data source for the study, CCHS, is described including the target population, location, timeframe, response rates and sampling design. \\
Participants & The sub-section titled ``analytic sample'' & The authors describe how the analytic dataset was drawn from the CCHS data including the inclusion and exclusion criteria. As in the first example, the authors include a flowchart as Figure 1 to illustrate who were included/excluded at each stage, resulting in the analytic sample. \\
Variables & The sub-sections titled ``outcome variables'' and ``exposure variable'' & The definitions of outcome and exposure variables are stated clearly. The authors provide additional details on the outcome variables, including some potential sources of bias, which enhances the readers' ability to interpret the findings. The authors also explain how the exposure variable was dichotomized. \\
Data sources/measurement & The sub-sections titled ``outcome variables'' and ``exposure variable'' & The authors outline specific questions from the CCHS (data source) that were used to derive the outcome and exposure variables. \\
Bias & The sub-sections titled ``statistical analysis'' and ``sensitivity analysis'' & The authors describe some of the methods that were used to mitigate potential confounding (e.g.~use of survey weights, multiple imputation, sensitivity analyses). The authors also provide the explanation behind the selection of confounders, providing background references. \\
Study size & Figure 1 & As in the first example, Figure 1 presents all people who were excluded at each stage and the remaining analytic sample size. \\
Quantitative variables & N/A & The key outcome and explanatory variables in this study were binary. In the ``statistical analysis'' sub-section, it is mentioned that the ``age'' variable is included as a confounder, and it is not clear whether this was a continuous or a categorical variable. However, age is neither the key outcome nor the explanatory variable, so this level of detail is perhaps not required. \\
Statistical methods & The sub-sections titled ``statistical analysis'' and ``sensitivity analysis'' & The authors provide details on the descriptive, inferential, missing data and sensitivity analyses, including the statistical software and packages. \\
Other & The sub-section titled ``ethics approval'' & The ethics approval for the use of data source is stated briefly. \\
\bottomrule
\end{longtable}
\hypertarget{significance-of-the-methods-section-in-population-and-public-health-research-paper}{%
\section{Significance of the methods section in population and public health research paper}\label{significance-of-the-methods-section-in-population-and-public-health-research-paper}}
In population and public health, the methods section of the research paper should enable readers to assess the internal and external validity of the study. It should provide all the necessary information to understand in which ways the authors have made efforts to address their research question as accurately as possible. As Rothman et al.~describe, generally the goal of epidemiologic research is to ``obtain a valid and precise estimate of the frequency of a disease or of the effect of an exposure on the occurrence of a disease in the source population of the study'' \citep[p.128--47]{rothman2008validity}. In addition, as the introduction section would have described the importance of the research question in relation to the population and public health, researchers often aim to generalize the study findings to the relevant population groups. By providing clear and adequate information about the target population and how the study sample was selected from the source population, the methods section allows the readers to assess the generalizability of the study findings.
As such, a well-written methods section in a research paper in population and public health should contain necessary information about the analytical steps undertaken in the study, and sufficient description of the study design and target population to enable readers to understand what some of the potential sources of bias in the study are, and what measures were taken to minimize the bias. This allows the readers to interpret the internal and external validity of the study. A well-written methods section provides credibility and validity of the results and conclusions of the study.
After reading the methods section, the readers should have an understanding of the dataset which is under consideration in the study, who the target population was, how the data was collected, and how the analytic data was created (who are included and excluded). Based on the analytic steps described in the methods section, the readers should also be able to understand how the results and conclusions in the next sections of the paper derive from the statistical analyses. Finally, the readers should be able to replicate the study if needed and reproduce the results if they had access to the data source.
\hypertarget{common-pitfalls-1}{%
\section{Common pitfalls}\label{common-pitfalls-1}}
\begin{itemize}
\tightlist
\item
Common pitfalls in the methods section largely arise from not reviewing its key elements carefully and, in consequence, missing some of them.
\item
Missing crucial elements providing information on the potential sources of selection or information bias can result in losing the reviewers' and editors' confidence in the validity and credibility of the study findings.
\item
Another common problem is not providing any explanations regarding the variable selection (i.e.~how the covariates were included or not included in the model).
\item
Be mindful about plagiarism and self-plagiarism, especially if you are using the same data source or similar methods from previously published work.
\end{itemize}
\hypertarget{tips-1}{%
\section{Tips}\label{tips-1}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
As in the introduction section, excessive and unnecessary details are discouraged. Unless the focus of your paper is the methodological approach, only discuss enough details to enable reproducibility and to highlight the elements that are relevant for the interpretation of the results. We talked about presenting the research as a ``story''; any details that are unnecessary or redundant to understanding this story should be avoided.
\item
The methods section should also include a statement on the ethical approval or any other ethical considerations.
\item
It's often a good idea to include a citation for the statistical test or method, especially if it is relatively unknown to the readers. This can also increase the credibility of the methods presented.
\item
Some journals focused on patient-oriented research encourage researchers to include any processes or efforts that were made to include patients or the community members during research. If patients or community members were involved in the study at any step, it can be a good idea to describe what their involvement was.
\end{enumerate}
\hypertarget{tables-and-figures}{%
\chapter{Tables and Figures}\label{tables-and-figures}}
So far, we have looked at the components of the introduction and the methods sections of a scientific research paper. Before looking at all the components of the results section, we will discuss the most crucial elements of the results section, and perhaps of the entire research paper, the tables and figures. Tables and figures are an integral part of a scientific article as they give an overview of the results obtained from the analyses. For many readers, reviewers and editors, these are the first elements they will look at when reading a paper. Therefore, well designed tables and figures are an effective tool to convey findings and can make a great impression on the readers even before they dive into the paper.
It is widely known that tables and figures are an efficient way to present the key study findings \citep{kotz2013effective4}. \citet{kotz2013effective4} suggest planning which findings to portray in the tables and figures early in the writing process. Once this is determined, the most important aspect when designing the tables and figures is that they should be `standalone', or self-explanatory. The readers should gain a complete understanding of the message you are trying to convey just by looking at the table or the figure. To help with this, there are a few recommended guidelines for designing tables and figures.
Below we discuss the components that can optimize the tables and figures in a scientific research article in the field of population and public health.
\hypertarget{designing-a-table}{%
\section{Designing a table}\label{designing-a-table}}
As discussed, tables in a scientific paper should be concise, but standalone. Tables must be accompanied by an informative title, descriptive column headers and descriptive row labels. A footnote should also be included that explains any abbreviations used and the analyses that were applied to produce the results in the table.
\begin{itemize}
\tightlist
\item
\textbf{Titles:} titles should be self-contained and informative. According to the American Journal of Epidemiology, titles should include details on the location of the study, the time period over which the study was conducted and the study population \citep{aje2021instructions}. Even if including these details lengthens the title, it is important to include all the necessary information so that readers don't have to go back to the manuscript to find the information they need to understand the table.
\item
\textbf{Column headers:} column headers should be succinct but descriptive as these introduce the table to the readers, along with the title.
\item
\textbf{Row labels:} when variables have multiple categories consider using a hierarchical representation (i.e., via indenting) and specify which category was used as the reference.
\item
\textbf{Footnote:} footnotes should include the definitions of the abbreviations and details on the statistical tests and analyses used.
\end{itemize}
Furthermore, rather than duplicating information in the main text, the tables should be used to complement it, and vice versa.
You should try to organize the tables in a way that ensures the reader can easily and quickly digest and comprehend the information within. Some formatting tips include:
\begin{itemize}
\tightlist
\item
Designing the table layout with three horizontal lines in total: a line at the top of the table, a line below the column headers and a line at the bottom of the table. If the table is particularly large, shading can be used to demarcate the rows. Drawing horizontal and vertical lines inside the table is highly discouraged.
\item
The number of columns should be kept to a minimum. For example, instead of having a column for the p-values, you should consider using significance codes to indicate the level of statistical significance. For example, different numbers of asterisks can be used (*P\textless0.05, **P\textless0.01, ***P\textless0.001) \citep{kotz2013effective4}.
\item
Whether the p-values are one-sided or two-sided should be specified. Some journals suggest only reporting two-sided p-values unless a one-sided test is required by the study design. As for decimal places for the p-values, the NEJM recommends reporting p-values larger than 0.01 with two decimal places, those between 0.01 and 0.001 with three decimal places, and those smaller than 0.001 as P\textless0.001 \citep{nejm2021author} (a small formatting sketch follows this list).
\item
When reporting p-values or confidence intervals, the statistical test used to obtain the values needs to be described.
\item
Do not report numbers with more decimal places than needed.
\item
The units of measurement for the study variables should be reported, for example, age in years (or 10-year increments), BMI in kg/m\textsuperscript{2} and physical activity in minutes per day.
\item
When displaying data in a table, do not include too many significant figures.
\item
Do not over-stylize the data using the bold, italic, and underline options \citep{franzblau2012graphs}.
\end{itemize}
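To make these guidelines concrete, below is a minimal LaTeX sketch of a results table laid out with only three horizontal rules, indented row labels with a marked reference category, significance codes in place of a p-value column, and a standalone footnote. It assumes the \texttt{booktabs} package (which provides the top, middle and bottom rules used for the tables in this book); the variable names and estimates are placeholders rather than real results.

\begin{verbatim}
% Hedged sketch only: a three-rule table with indented row labels,
% significance codes and an explanatory footnote (booktabs required).
% Variable names and estimates are placeholders, not real data.
\begin{table}
  \centering
  \caption{Adjusted odds ratios (OR) for the outcome by exposure level,
    hypothetical survey, Country X, 2015--2020.}
  \begin{tabular}{lcc}
    \toprule
    Variable                    & Adjusted OR (95\% CI) &     \\
    \midrule
    Exposure group              &                       &     \\
    \quad Unexposed (reference) & 1.00                  &     \\
    \quad Exposed               & x.xx (x.xx--x.xx)     & **  \\
    Age (per 10 years)          & x.xx (x.xx--x.xx)     &     \\
    \bottomrule
  \end{tabular}

  \footnotesize OR = odds ratio; CI = confidence interval. Estimates are
  from a multivariable logistic regression adjusted for age and sex.
  *P<0.05, **P<0.01, ***P<0.001.
\end{table}
\end{verbatim}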
\hypertarget{designing-a-figure}{%
\section{Designing a figure}\label{designing-a-figure}}
A figure can be useful when you have a lot of data that you'd like to describe or when you'd like to convey a key message. Deciding between a table and a figure depends on what kind of information you're looking to present. If your data represent a trend, pattern or association (in particular, if there is a non-linear trend), a figure may be able to convey the information more efficiently than a table. However, if it is important to present the exact numbers for the results, consider using a table rather than a figure.
Similar to the tables, figures must be accompanied by a detailed title, descriptive labels for the x- and y-axes, a legend explaining any symbols, colours and line types, and a footnote. Generally, the title for a figure is placed under the image, and the footnote is placed under the title.
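As a rough LaTeX sketch of this layout (assuming the \texttt{graphicx} package, a placeholder image file and hypothetical caption text), the caption command can be placed after the image so that the title appears below it, followed by a short note in a smaller font acting as the footnote:

\begin{verbatim}
% Sketch only: caption below the image, footnote below the caption.
% The file name and caption text are placeholders.
\begin{figure}
  \centering
  \includegraphics[width=.8\columnwidth]{img/forest-plot}
  \caption{Adjusted hazard ratios (HR) and 95\% confidence intervals (CI)
    for the outcome by exposure level, hypothetical cohort, 2010--2020.}
  \footnotesize Note: HR = hazard ratio; CI = confidence interval.
  The vertical reference line indicates HR = 1.0.
\end{figure}
\end{verbatim}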
Keep in mind the main message you are aiming to convey with the figure. It can take some time to design a figure that meets your needs. Some general guidelines include:
\begin{itemize}
\item
Some journals may charge additional fees to print coloured figures. Therefore, it will likely be less costly to design the initial figure in black and white.
\item
If you decide to make figures with colours, you should consider colour combinations and palettes that are accessible or colour-blind friendly. For example, the R `ggplot2' package offers some colour-blind friendly palettes \citep{chang}. You can access the chapter \href{http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/\#a-colorblind-friendly-palette}{here}.
\item
Avoid using unnecessary grid lines inside the graph or plot, except for a reference line when necessary (e.g., a line indicating a Hazard Ratio (HR) or Odds Ratio (OR) of 1.0 in a forest plot).
\item
When placing graphs and plots side-by-side, be mindful of the scale of the graphs to avoid misleading the readers. It's always a good idea to use the same scale across the graphs and figures.
\item
Avoid complicated, confusing figures such as 3-dimensional graphs \citep{franzblau2012graphs}.
\end{itemize}
\hypertarget{tables-and-figures-in-population-and-public-health-research-papers}{%
\section{Tables and figures in population and public health research papers}\label{tables-and-figures-in-population-and-public-health-research-papers}}
In research articles in epidemiology or in population and public health, Table 1 generally presents the sociodemographic and/or clinical characteristics of the study population stratified by the levels of the explanatory variable. For research papers based on large survey datasets, Table 1 includes descriptive statistics such as counts (the number of people in the study sample in each category), proportions (accounting for the survey design), as well as means and standard deviations (also accounting for the survey design). Table 1 allows readers to compare the variables of interest across the levels of the explanatory/exposure variable of interest and may help them to gain an understanding of the research question. Sometimes, the authors test for differences and include a column for the p-values to indicate whether the variables presented are distributed differentially across the levels of the exposure.
Table 2 usually presents the results of the primary inferential analysis assessing the research question, i.e., the association between the main explanatory and outcome variables under investigation. If there are multiple models used or additional outcomes considered, multiple columns can be included in this table. The footnote for the table usually includes information on the analytical model, the variable selection approach and the relevant tests of significance. This table generally reports the effect of the exposure or the specific level of the explanatory variable on the outcome after adjustment for potential confounders.
Additional tables can be included that present the results of the effect modification analyses or any other sensitivity analyses that provide additional clarity to the study.
Figure 1 is most often a flowchart showing how the authors obtained the final study sample after going through the inclusion and exclusion criteria. The inclusion and exclusion criteria should be detailed and the exact number of people excluded at each step should be provided. Other figures in epidemiological studies or studies in population and public health can include: directed acyclic graphs, survival curves, forest plots, histograms, boxplots and maps.
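If you prefer to draw such a flowchart directly in LaTeX, the sketch below shows one possible approach using TikZ; it assumes the \texttt{tikz} package with the \texttt{positioning} library, and the box text and the symbolic counts ($n_0$, $n_1$) are placeholders to be replaced with the study's actual criteria and numbers.

\begin{verbatim}
% Sketch only: a minimal inclusion/exclusion flowchart with TikZ.
% Requires \usepackage{tikz} and \usetikzlibrary{positioning}.
\begin{figure}
  \centering
  \begin{tikzpicture}[box/.style={draw, align=center, text width=7cm}]
    \node[box] (all)
      {Participants in the source data set ($n_0$)};
    \node[box, below=8mm of all] (excl)
      {Excluded: did not meet eligibility criteria or
       missing exposure/outcome data ($n_1$)};
    \node[box, below=8mm of excl] (final)
      {Final analytic sample ($n_0 - n_1$)};
    \draw[->] (all) -- (excl);
    \draw[->] (excl) -- (final);
  \end{tikzpicture}
  \caption{Sketch of a study sample selection flowchart (placeholder counts).}
\end{figure}
\end{verbatim}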
Figures and tables are useful tools for highlighting the important findings of the study. Designing great tables and figures during the manuscript preparation phase can also be helpful for other knowledge translation activities, for example, when making conference posters or oral presentations. Finally, there should be coherence between the text in the manuscript and the tables and figures. That is, the tables and figures should have a meaningful connection to the text in the results section and the story you are trying to tell in your manuscript.
\hypertarget{common-pitfalls-2}{%
\section{Common pitfalls}\label{common-pitfalls-2}}
\begin{itemize}
\tightlist
\item
\textbf{Having too many tables and figures:} Many journals restrict the number of tables and figures that can be included in the main article. Having too many figures and tables can be distracting. In addition, each table or figure needs to be explained in the text of the manuscript. Even when you have opted to include some tables and figures in the supplementary materials, they need to be explained in the main text. As for the other sections, you need to find a balance between too much and too little information.
\item
\textbf{Duplicating information:} Sometimes authors present the same data in both a table and a figure. Tables and figures should be supplementary to each other and to the text in the article, emphasizing the key findings and conveying a continuous story.
\end{itemize}
\hypertarget{tips-2}{%
\section{Tips}\label{tips-2}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
When describing concepts in your article, be consistent in your use of keywords throughout the text, tables and figures in the article. This is particularly important if there are many terms used in the literature to describe a concept.
\item
All tables and figures must be referenced in the text and should be numbered in the order in which they are cited. There shouldn't be a table or a figure that is not cited in the text.
\item
Sometimes, you may have too much information in the manuscript, which causes it to exceed the word limit, even though this information is essential. In some instances, you may be able to create a table to summarize some of the methodological details rather than describing them all in the text. In addition, some tables can include both text descriptions and results.
\item
If you have a target journal in mind, review the journal instructions. Formatting requirements for figures and tables may differ by journal.
\end{enumerate}
\hypertarget{results-section}{%
\chapter{Results Section}\label{results-section}}
The results section is the link between the methods and the discussion section. It should provide a clear and concise summary of the findings from the research study. The authors should report the findings that were estimated using the approaches presented in the methods section, in the same order as in the methods section, and the writing should be free of interpretation.
It may appear that the results section is the simplest to write. However, authors must carefully prepare this section to maintain the readers' attention and interest, and to facilitate understanding of the study. Although the results section generally only contains quantitative information (except for qualitative and mixed-methods studies), it should still be easy to follow and understand. Presenting the findings in the results section in the same sequence as the procedures were presented in the methods section can improve the flow of the paper. In general, the results section has the following order: (i) the study sample or population characteristics, (ii) findings from the primary analysis, (iii) findings from the secondary analyses, and (iv) any additional findings that may be important to the understanding of the readers.
Below are the key characteristics and typical organization of a results section in a scientific paper based on an observational study in population and public health, along with some common pitfalls and tips.
\hypertarget{key-characteristics-and-organization-of-a-results-section}{%
\section{Key characteristics and organization of a results section}\label{key-characteristics-and-organization-of-a-results-section}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
\textbf{Study sample characteristics and descriptive statistics}
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\item
Generally, the characteristics of the study sample are described first in the results section. This can be accompanied by a flowchart detailing the inclusion and exclusion criteria that were applied to generate the final analytic sample.
\item
Rather than being overly detailed, the sample characteristics should be summarized using key information. A table containing more detailed information can be referenced to ensure the results are concise.
\item
The results section can also include the descriptive statistics for the outcome and explanatory variables, including the prevalence of a categorical outcome and/or exposure, the incidence rates of a categorical outcome in prospective studies, and the mean and standard deviation of a continuous outcome and/or exposure. Typically, descriptive statistics are presented in Table 1.
\end{enumerate}
\item
\textbf{Findings of the primary analysis}
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\item
The findings of the main research question, based on the primary analysis, are presented in this section.
\item
Estimates from the crude/unadjusted and the adjusted models evaluating the relationship between the exposure and outcome can be reported. Authors should note whether the relationship changed or remained the same following adjustment.
\item
Generally, the findings from the adjusted model for the main relationship under investigation are presented in table 2. When reporting the estimates in table 2, use caution not to introduce the table 2 fallacy \citep{westreich2013table}.
\end{enumerate}
\item
\textbf{Findings from the secondary analyses}
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\item
If interaction, effect modification or sub-group analyses were performed, the findings from these analyses should be presented in relation to the primary research question. No new interaction analysis should be introduced here without proper justification in the earlier sections.
\item
The relationships between the confounders and the outcome can also be described (but, not interpreted), in order to help readers understand the relationship under investigation.
\end{enumerate}
\item
\textbf{Any additional findings to highlight}
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\tightlist
\item
Additional findings from sensitivity analyses or other interesting findings can also be highlighted.
\end{enumerate}
\end{enumerate}
The results section should describe the study findings without being unduly pedantic or repetitive. Authors are discouraged from detailing every step of the analysis when presenting the corresponding results and from describing redundant findings. As mentioned in the previous chapter, the results section should describe all the tables and figures included in the article. However, it is unnecessary to describe them in great detail. As the tables and figures are used to portray a large amount of data in an efficient manner, the text in the results section should summarize the key findings that are important to convey to the readers. These can include trends or patterns present in the data, group comparisons, or key estimates.
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
Although, ideally, the results section presents the study findings in a clear and objective manner, it should also be presented as a `story'. That is, the information should have a logical flow. Some authors may opt to write the results section before writing the other sections. Regardless of the order in which you write your scientific article, you should always keep in mind the `whole story' of your study and what it is you are trying to convey in your paper. Based on the research question that you posed in the introduction section and the statistical procedures you explained in the methods section, the results section provides readers with the findings from your study. You'll have the opportunity to provide explanations for these results in the discussion section. Whichever section you choose to write first, they should all work together to tell the research `story'.
\hypertarget{common-pitfalls-3}{%
\section{Common pitfalls}\label{common-pitfalls-3}}
\begin{itemize}
\tightlist
\item
The most common pitfall is including components of the methods and discussion sections in the results section. There should be a distinct separation between the methods, results and discussion sections. This separation has been encouraged as readers may be interested in different sections; some may be more interested in the methods because they wish to replicate them, some may be only interested in reading the results to include them in a meta-analysis, and some may skim through the results as they are more concerned with the interpretation in the discussion section \citep[p.99-119]{heard2016scientist}. Having a clear separation of the sections can help the readers quickly find the elements they are most interested in.
\item
Not reporting clearly which findings are from the primary, secondary or sensitivity analyses can be confusing to the reader.
\item
Focusing too heavily on the crude or unadjusted analysis should be avoided, as most unadjusted estimates from observational studies are likely confounded.
\item
Perhaps most importantly, it is crucial to report the findings objectively, without trying to influence the readers. For example, it is misleading to write an effect size was ``marginally significant'' when it was not statistically significant at a 5\% significance level.
\item
In addition, Kotz and Cals argue that only presenting p-values can be misleading. They encourage the inclusion of 95\% confidence intervals as they provide additional information such as the direction of the effect size, the size of the effect estimate and the degree of precision \citep{kotz2013effective5}.
\end{itemize}
\hypertarget{tips-3}{%
\section{Tips}\label{tips-3}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
The results are generally presented in the past tense. However, when referencing a table or a figure, they may be described in the present tense.
\item
Avoid reporting redundant numbers, as this can make the paragraphs dense. For example, if you are reporting the 95\% confidence intervals of your estimates, it may not be necessary to report the p-values.
\item
Only present the results and data that are necessary for readers to understand why the conclusion you are drawing about the research question is justified. As with all the other sections, excess information can distract from the central point of your article.
\item
The results section should be free of any interpretation of the findings or data. You will have an opportunity to interpret the results in the discussion section.
\item
Having sub-section headings can improve the readability and flow of the results section.
\item
Always be consistent in the way you report the study findings, including the order of presentation (e.g., the exposed group followed by the control group), the number of decimals, the terms used and the units of measurement. In addition, it is encouraged to include the absolute numbers when presenting relative measures (e.g., if reporting proportions, provide the counts in parentheses).
\end{enumerate}
\hypertarget{discussion-section}{%
\chapter{Discussion Section}\label{discussion-section}}
The discussion section is the final component of a scientific paper based on the format of IMRad (Introduction, Methods, Results and Discussion). In comparison to the other sections, the discussion section allows the authors the most freedom in writing and structuring the text, making it both liberating and challenging to write.
The objective of a discussion section is to answer the research questions that were raised in the introduction by interpreting the findings presented in the previous results section. Further, the discussion section gives the authors the opportunity to explain the study findings and their implications to a broader audience. The introduction and the discussion sections are intrinsically connected; the questions posed in the introduction are addressed in the discussion, and the key problems that are discussed in detail in the discussion were introduced earlier in the introduction. In the introduction section, the author starts with broad background information on the topic and gradually narrows down to present the specific research question(s) and the reasons supporting the need to conduct the study. The discussion section is where the author gets to address all the components touched upon in the introduction, turn the data presented in the results section into ``knowledge'' and provide a compelling argument on how the findings of the study contribute to existing knowledge, and why it was worthwhile to do the study.
\hypertarget{components-of-a-discussion-section}{%
\section{Components of a discussion section}\label{components-of-a-discussion-section}}
Keeping the connection between the introduction and the discussion section in mind, the discussion section follows an inverted funnel shape that gradually broadens, as opposed to the narrowing funnel shape that describes the components of the introduction section (Chapter 1). The discussion starts narrow, with responses to the specific research questions, and gradually broadens as the study findings are compared to previous studies and placed in the wider context of the topic. The strengths and limitations of the study are then discussed, and future directions are provided. Below, we discuss the components of the discussion section in detail.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{Summary of the main findings}
\end{enumerate}
\begin{itemize}
\tightlist
\item
The summary of the main findings provides answers to the specific research question that was posed in the introduction section. If there were many research questions or if the question was very complex, the author can briefly remind the readers what the research question was at a high level. The key findings for the main research question should be interpreted first, followed by findings from any secondary analyses.
\item
The authors should explicitly and concisely state the verdict on the research question when summarizing the main findings, as this is the key information that readers are looking for.
\item
Overreaching conclusions or overgeneralizations of the study findings should be avoided. The results should be interpreted based solely on the people who were included in the study sample. Though it is reasonable for authors to make speculations based on the results, they should refrain from misrepresenting speculation as inference \citep[p.120-125]{heard2016scientist}.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
\textbf{Contextualizing the study findings within the existing literature}
\end{enumerate}
\begin{itemize}
\tightlist
\item
In the introduction section, the objective of presenting the background knowledge was to introduce the readers to the topic and to set the tone for the rest of the manuscript. In the discussion section, the results of the study are interpreted. Therefore, in this section, the author can expand on the high-level information introduced in the beginning of the manuscript and contextualize the findings of the study within the literature.
\item
The study findings are compared to previous studies to contextualize them in the existing literature. By comparing the study findings to similar previous studies, the author can determine whether their findings agree with or contradict the existing evidence.
\item
If the findings of the study were similar, the author should explain why their new study was needed, and how it contributes to the literature. If the findings were conflicting, the author can explore the reasons for this discrepancy.
\item
While relating the study findings to other studies and placing the current study in the literature are important components of a discussion section, its main focus should be on how the analysis and findings of this study add to the existing body of evidence.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
\textbf{Clinical or public health relevance of the study findings}
\end{enumerate}
\begin{itemize}
\tightlist
\item
The target audience of a scientific article in population and public health research may include clinicians, other researchers, patient communities and members of the general public with an interest in the specific health topic. It is critical to stress the relevance of the study findings to the target audience. The author should keep the specific target audience in mind when explaining why the study findings are significant. For example, if the study's goal was to evaluate the effectiveness of a drug in treating a particular condition, the author may emphasize how the findings are important for patients, and how they may affect clinical decisions for healthcare providers.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
\textbf{Strengths and limitations}
\end{enumerate}
\begin{itemize}
\tightlist
\item
In this sub-section, the author explains the study's specific strengths and limitations. This is a particularly challenging and important section to compose as reviewers and readers will scrutinize it closely.
\item
The strengths and limitations that are unique to the study should be discussed. For example, authors should address how the study has done something new, how it adds to current knowledge, or how it supplements, reinforces, or contradicts previous research.
\item
Similarly, rather than discussing limitations that are generic or applicable to most studies, the limitations that are specific to the current study should be highlighted. Limitations can be related to the data, design or the methods used in the study. Unmeasured confounding specific to the study, untestable assumptions related to the study, and lost-to-follow-up or missing data are all examples of limitations. In addition, it is crucial to describe not only the limitations, but also the measures taken to mitigate them.
\item
It is critical to clearly state the limitations of the study. However, the discussion section is also where the author can defend their work and demonstrate to readers and reviewers what steps were taken to mitigate the limitations and to strengthen the robustness of the findings.
\item
It can be good practice to approach the strengths and limitations section as if it was a critical appraisal or a peer-review. Authors can ask themselves: what kinds of issues might the reviewers raise about the study? What are the potential biases? The author can anticipate these questions in advance and prepare the responses accordingly. For example, when discussing potential sources of bias or imprecision, the author can discuss the direction and magnitude of the bias or imprecision, as well as how this may affect the study findings, and what efforts were made to minimize the bias.
\item
When discussing the strengths and limitations, the author can comment on the robustness and the generalizability of the study findings, as well as the reproducibility of the study.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{4}
\tightlist
\item
\textbf{Future directions and implications}
\end{enumerate}
\begin{itemize}
\tightlist
\item
Depending on the focus of the paper, the discussion section should describe some of the potential study implications for clinical practice and/or research. Cals and Kotz also suggest that simply stating ``further research is needed'' is insufficient \citep{cals2013effective5}. Based on the study findings, what are the next steps for the research in this area? The author should make recommendations for future research, based on the remaining unanswered questions or unavailable variables, measures or outcomes in the data. The description of future directions should be short and specific.
\item
It may be a good idea to conclude the paper with an overall `big-picture' of the study; what is the take-home message now that you've presented the `story' of your study to the readers? The summary of the study implications should be clear and concise, written in such a way that general readers can easily grasp the key message of the study.
\end{itemize}
\hypertarget{additional-tips}{%
\section{Additional tips}\label{additional-tips}}
The purpose of the discussion section is: to provide readers with a summary of the main findings and, based on these, the answers to the central research questions posed in the introduction section, to contextualize the study findings by comparing them to previous work, and to analyze the study's specific strengths and limitations. Through this process, the author expands on the findings of the study in order to provide the most persuasive interpretations and find the broadest significance, as well as describe the study implications and offer future directions in the specific research area. The following are some additional do's and don'ts for a discussion section.
Table 1: Do's and Don'ts of writing a discussion section
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.42}}
>{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.58}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedright
\textbf{Do's}
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
\textbf{Don'ts}
\end{minipage} \\
\midrule
\endhead
It is okay to make speculations based on the research findings, however, make sure that you present these as speculations and not as causal inferences. Also, be clear about the study limitations & Do not present in detail all of the possible sensitivity analyses or all of the sensitivity analyses that you conducted. Only discuss those that contribute to the robustness of your interpretation of the study findings \\
Discuss if there was anything surprising about the findings or if the results went against the initial hypothesis & Do not omit any potential alternative explanations of the findings of your study. In the discussion section, reviewers will mainly be focused on whether the interpretations and conclusions drawn were supported by the evidence presented, and whether there were any alternative explanations that may be plausible \\
Present the key limitations of the study and how they may have impacted the results & Do not restate the findings too repetitively. The paper should end with a clear summarizing statement of the manuscript's storyline. \\
Provide the answer to the research questions at the beginning of the discussion section. Ensure that your answer is in line with the research question presented in the introduction section. & Do not interpret the absence of statistical significance as the absence of association. The findings should not be interpreted solely based on the p-values of the statistical tests \citep{lederer2019control} \\
Learn to criticize your own work and acknowledge the limitations transparently. But, also use the discussion section as an opportunity to defend your work and to talk about the steps that were taken to mitigate limitations & \\
\bottomrule
\end{longtable}
\hypertarget{title-and-abstract}{%
\chapter{Title and Abstract}\label{title-and-abstract}}
A well composed title and abstract are critical to a good scientific article because they are the two components that most people will read first. During the submission process to journals, the abstract is the first part of the manuscript that editors will read to decide whether to send the manuscript for review. Once the article is published, the abstract is the first part most readers will read, and sometimes, the only part of the article that the readers are able to consume. The abstract is also an important indication of the quality of the research. For example, most scientific conferences will base their selection of presentations solely on the abstract. Therefore, a well-written title and abstract are essential for the publication of a scientific article as well as for conference submissions. They are also crucial for conveying a clear, informative story to the readers who may only get to read the title and the abstract. In this chapter, we discuss characteristics of a good abstract and title as well as some tips and examples.
\hypertarget{components-of-an-abstract}{%
\section{Components of an abstract}\label{components-of-an-abstract}}
As the abstract gives editors, reviewers and readers the first impression of the study, it is critical that it contains all the necessary information, including the study aims, main findings and interpretations. Generally, abstracts need to be short, between 200 and 300 words, and are structured in subsections. The headings of the subsections depend on the journal or the conference, but the essential components tend to be consistent. Below we present four subsections that are often used to compose a structured abstract: background, methods, results and discussion.
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
\textbf{Background -- ``What is known in the literature, why is the current study needed?''}
\end{enumerate}
\begin{itemize}
\tightlist
\item
The background should contain a very brief summary of the corresponding background section of the manuscript, laying out the ``key'' evidence in the literature known to date and the gap in knowledge that justifies the present study. This subsection should clearly identify the rationale for conducting the study. Following the rationale, the study aim or the main research question must be clarified.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\tightlist
\item
\textbf{Methods -- ``What did you do?''}
\end{enumerate}
\begin{itemize}
\tightlist
\item
The abstract should describe what methods were used to answer the scientific question or to accomplish the study aim. This section should have a brief explanation of the data source used in the study, the study time frame and the statistical methods used to conduct the primary analysis.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{2}
\tightlist
\item
\textbf{Results -- ``What did you find?''}
\end{enumerate}
\begin{itemize}
\tightlist
\item
In the abstract, key findings from the primary analysis should be presented. The analytic sample or the study population should be clearly described, as well as the effect size estimates such as the odds ratios, hazard ratios and the 95\% confidence intervals. Reporting 95\% confidence intervals rather than p-values can inform the readers about the strength, direction and variability of the estimates.
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{3}
\tightlist
\item
\textbf{Discussion -- ``What do the results mean? And so what?''}
\end{enumerate}
\begin{itemize}
\tightlist
\item
The main findings should be interpreted clearly and accurately. It is crucial to interpret the results carefully as to not mislead the readers. The discussion section also needs to convey what the key implications of the findings are. These may be implications for future research and for the general understanding of the topic under investigation. It is also a good idea to discuss the main limitations of the study. Particularly for observational studies, it is important to be transparent about any major limitations that might limit the internal validity and generalizability of the study.
\end{itemize}
\hypertarget{characteristics-of-a-good-abstract}{%
\section{Characteristics of a good abstract}\label{characteristics-of-a-good-abstract}}
A well-structured abstract, even without the subsections, should \textbf{describe} the research questions, aims and methods; be \textbf{critical} of the key findings and major limitations; and be insightful, discussing the key contribution to the literature and implications of the study.
The title and abstract of the study are the components that are indexed in the literature databases and are openly accessible to all readers, even when the study is published in a subscription-based journal. Therefore, some readers may only be able to access the abstract, which is why it's crucial for the abstract to be \textbf{standalone}. It should summarize all the important aspects of the study, and just by reading the abstract, the readers should be able to gain an understanding of what was done, how it was done and what was found. As abstracts are required to be short and succinct, it should avoid having too many technical details or nuanced discussion. Writing a good abstract involves \textbf{highlighting the study aim} and gap in the literature, and describing how the study tried to address the gap or to contribute to current knowledge. Finally, by reading the abstract, it should be clear to readers what the \textbf{key take home message} is.
\hypertarget{tips-for-writing-an-abstract}{%
\section{Tips for writing an abstract}\label{tips-for-writing-an-abstract}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
Write the paper first; then, re-read the paper and think about the purpose of the study, and the implications of the main finding. Keep in mind what the key messages you want to convey to readers are. It may help to write down the keywords or key messages in a list before starting to write the abstract.
\item
Take time to write, revise and re-write the abstract. The abstract is important; journal editors, conference abstract reviewers and other researchers will judge the quality of the research based on the abstract. A well-written abstract can leave a good impression.
\item
The abstract should be written in an active voice; many journals support the use of active voice over the use of passive voice (with the exception of the methods section). Most concepts and results are conveyed more succinctly and clearly when written in an active voice \citep{nature2021portfolio, thebmj2021authors}.
\item
When using subsections, each section should have two to three sentences. Additionally, the language should be simple and free of jargon/terms that are too discipline-specific.
\item
The abstract should be free of any citations or references, and the use of abbreviations should be minimal. The use of Greek letters or special characters should be avoided, as it might cause formatting issues in some indexing databases.
\item
While re-reading the abstract, note if all the sections flow coherently. Do the reported results match the research question? Are the implications specific to the results presented?
\item
With the limited word count allocated to the abstract, it is likely that there will be no space to discuss the results of the sensitivity analyses. If space permits, it is acceptable to simply report that the sensitivity analyses were performed to test the robustness of the main findings, without necessarily presenting all the results.
\item
The EQUATOR Network provides \href{https://www.strobe-statement.org/download/strobe-checklist-conference-abstracts}{guidelines for reporting observational studies in a conference abstract} \citep{strobe2021abstract}. Some of the key elements of the guidelines include:
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\tightlist
\item
Title should include the study's design, such as cohort study, case-control or cross-sectional study.
\item
In the abstract, study design and specific objectives should be clarified.
\item
The methods section should include the study setting, the timeframe, and describe the study participants, including the eligibility criteria and the statistical methods.
\item
The results section should include the number of participants included in the analytic sample, the main results, including the estimates of associations, and the measures of variability and uncertainty.
\item
The conclusions section should include the general interpretation of the study results.
\end{enumerate}
\end{enumerate}
\hypertarget{title-of-a-scientific-article}{%
\section{Title of a scientific article}\label{title-of-a-scientific-article}}
Even before the abstract, the title is the first thing readers look at, hence it is a very important element of a scientific article. Though the title shouldn't be ``click-bait'', it should still pique the interest of the readers. Moreover, it should be brief and straightforward while still conveying the central story of the paper or the main research question. The title should be coherent and consistent with the abstract, but not completely copy the main text. As the title is indexed in the databases, it should contain all the important keywords to facilitate the literature search. In population and public health research, we propose three types of titles: descriptive titles, informative/assertive titles, and inquisitive titles. \textbf{Descriptive titles} are used most often, and usually describe the main topic or association that is under investigation. The \textbf{informative or ``assertive sentence'' titles} \citep[p.79--83]{heard2016scientist} are not always preferred by all readers since they may be perceived as being too conclusive without supporting arguments or explanations, but they are catchy and summarize the key message of the study. Finally, \textbf{inquisitive titles} actually present the main scientific question that is addressed in the research paper. Table 1 presents some examples in the literature of different types of titles in epidemiology and health sciences research.
Table 1: Titles of scientific articles in epidemiology, population and public health research
\begin{longtable}[]{@{}
>{\raggedright\arraybackslash}p{(\columnwidth - 0\tabcolsep) * \real{1.00}}@{}}
\toprule
\endhead
\href{https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(21)01210-1/fulltext}{Study of mirtazapine for agitated behaviours in dementia (SYMBAD): a randomised, double-blind, placebo-controlled trial} This is an example of a descriptive title which presents the name of the trial presented, SYMBAD, and also describes the design of the study, which was a randomised controlled trial. \\
\href{https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(21)01919-X/fulltext}{Obesity management as a primary treatment goal for type 2 diabetes: time to reframe the conversation} This is another example of a descriptive title. We can assume that the keywords of this study are ``obesity management'' and ``treatment for type 2 diabetes''. The latter part of the title ``time to reframe\ldots{}'' also suggests that the study might be presenting a novel aspect to existing clinical guidelines for treatment of diabetes. \\
\href{https://www.bmj.com/content/375/bmj.n2364}{Effect of dietary sources of calcium and protein on hip fractures and falls in older adults in residential care: cluster randomised controlled trial} This is also a descriptive title; in this title the authors present the main association under investigation (``effect of dietary sources of calcium and protein'' on ``hip fractures and falls''), and the population group of interest (older adults in residential care), as well as the study design (cluster randomised controlled trial). \\
\href{https://www.sciencedirect.com/science/article/abs/pii/S0168827819304623}{Sustained virological response from interferon-based hepatitis C regimens is associated with reduced risk of extrahepatic manifestations} This is an example of an assertive sentence title; the title indicates the main finding of the study, which was that the sustained virological response from a treatment was associated with reduced risk of extrahepatic manifestations. \\
\href{https://link.springer.com/article/10.1186/s12954-020-00436-6}{Convenience and comfort: reasons reported for using drugs alone among clients of harm reduction sites in British Columbia, Canada} This is an example of an informative title which gives the readers the answer to their main research question ``what were the reasons for using drugs alone?'' as ``Convenience and comfort'' among the study population of interest (clients using harm reduction sites in British Columbia, Canada). \\
\href{https://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/s12884-020-2775-8}{Are perceived barriers to accessing health care associated with inadequate antenatal care visits among women of reproductive age in Rwanda?} This is an example of an inquisitive title, which in itself is the main research question that the paper is trying to answer. \\
\href{https://link.springer.com/article/10.1007/s00420-020-01592-9}{Are US adults with low-exposure to methylmercury at increased risk for depression? A study based on 2011--2016 National Health and Nutrition Examination Surveys (NHANES)} This is another inquisitive title, which also indicates the data source for the study (2011-2016 National Health and Nutrition Examination Surveys). \\
\href{https://link.springer.com/article/10.1186/s12939-021-01420-7}{``I want to get better, but\ldots{}'': identifying the perceptions and experiences of people who inject drugs with respect to evolving hepatitis C virus treatments} This is an example of a qualitative study in population and public health research, which uses a part of a quote from an interview with a study participant to present the central aim of the study, which was to identify the perception and experiences of people who inject drugs, with respect to evolving hepatitis C treatments \\
\bottomrule
\end{longtable}
\hypertarget{authorship}{%
\chapter{Authorship}\label{authorship}}
In this chapter, we discuss authorship in scientific publications. There are some differences in the authorship of scientific publications depending on the field or discipline of research. In health sciences research, it is very common to see many people listed as authors on published scientific papers, primarily because most researchers work in established research teams and collaborate with colleagues, mentees, and mentors, and research studies are rarely the work of a single person nowadays. The importance of authorship in scientific publications cannot be overstated because it has significant academic and scientific implications for researchers. Authorship of scientific papers confers the researcher's academic credibility and importance. However, authorship also holds researchers accountable and responsible for their publications. Researchers must understand that authorship manipulation, such as ``gift authorships'' or failure to include meritable authors, is considered scientific misconduct and fraud, and may result in the retraction of published papers as well as serious academic consequences \citep{callaway2016publisher}. As a result, guidelines and processes for determining authorship may be useful for researchers in determining who should be included as an author, as well as understanding what responsibility and accountability come with authorship. Many journals now have authorship and contributorship policies, and they may even ask what specific contribution(s) each author made to the study. To determine authorship in health sciences research, the International Committee of Medical Journal Editors (ICMJE) guidelines are most commonly used \citep{international2019icmje}. The ICMJE recommends that authorship be determined by four criteria, which we outline below along with some additional suggestions.
\hypertarget{suggested-criteria-for-authorship-and-contributorship}{%
\section{Suggested criteria for authorship and contributorship}\label{suggested-criteria-for-authorship-and-contributorship}}
The ICMJE suggests the following criteria to ensure that all included authors contributed substantially to the publication and accept accountability and responsibility for the published paper. They recommend that all four of the following criteria be met before someone can be listed as an author on the publication.
\textbf{Four criteria for authorship}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
The individual has made substantial contributions to the project: a ``substantial'' contribution can be defined as meeting one or more of the following criteria.
\begin{enumerate}
\def\labelenumii{\alph{enumii}.}
\tightlist
\item
The person conceptualized the study, devised the design of the study, or significantly improved the conception or the design of the study.
\item
The person was a key member in the acquisition of the data used for the study.
\item
The person was responsible for the data analysis of the study, or significantly improved the data analysis.
\item
The person made significant contributions to the data interpretation.
\end{enumerate}
\item
The person wrote the first draft of the paper or made critical revisions in the subsequent versions of the paper.
\item
The person has reviewed and approved the final version of the manuscript for publication.
\item
The person acknowledged and agreed to accept accountability and responsibility for all aspects of the published work.
\end{enumerate}
These four criteria must be met in order for someone to be listed as an author. However, these criteria are meant to be used carefully as a guide to determine who merits credit as authors and can accept responsibility as such; they should not be used as loopholes to deny authorship to deserving colleagues. The ICMJE suggests that individuals who meet the first criterion of substantial contribution should be given adequate opportunity and time to contribute to the revision and review of the manuscript in order to meet the rest of the criteria. It is not uncommon for people to meet only one or two of the four criteria. In these cases, even if they are not listed as authors, these individuals should be acknowledged as contributors to the research study in the acknowledgments.
The corresponding author is responsible for all communication with the journal, including manuscript submission, responding to reviewers' comments, communicating the review and revisions with the co-authors, re-submitting revisions, as well as responding to any inquiries about the study after it has been published. Furthermore, the corresponding author is also responsible for ensuring that all individuals deserving of authorship are given an appropriate opportunity to contribute to the publication's authorship.
In terms of the order of the authors in the authorship list, the order varies by discipline and field, but in epidemiology and public health, the person who writes the first draft of the manuscript is usually the first author. Although it is uncommon, some journals allow two co-first authors who contributed equally to the research and writing of the manuscript. In the field of biomedical and health sciences research, the principal investigator is usually the last author. Sometimes, the research study is the work of a large working group or research team; in these instances, the group or team may be named in the author list rather than each individual's name. For example, de Vries et al. \citep{de2021perceived} include the WHO World Mental Health Survey collaborators in the authorship list of their paper, and the authors' contributions section specifies which individuals are included as collaborators. Similarly, CREDENCE Trial Investigators are listed as co-authors by Zhou et al. \citep{zhou2021effect}, and all investigators are listed in the appendix.
Lastly, as briefly mentioned above, individuals who do not meet all four criteria for authorship should still be acknowledged as contributors. These individuals may be those who contributed to the study in general, but their contributions alone do not justify authorship; their specific contributions should be described and acknowledged. These individuals could be members of patient advocacy or community-based groups who contributed their time and insight to the research, those who served as liaisons for community engagement or knowledge translation, those who contributed writing assistance, technical or language editing, or those who assisted with funding acquisition. The corresponding author is responsible for acknowledging all contributors and obtaining permission from them to include their names in the acknowledgments section of the manuscript.
\hypertarget{tips-for-navigating-co-authorships}{%
\section{Tips for navigating co-authorships}\label{tips-for-navigating-co-authorships}}
Cals and Kotz \citep{cals2013effective8} suggest that preparing a written agreement describing each author's roles and responsibilities and making sure that the agreement is accepted by all co-authors can ensure the clarity of the co-authoring process, and clearly set the expectations even before starting the writing of the manuscript. Stephen Heard also offers a few tips for facilitating the writing process with co-authors \citep[p.247-259]{heard2016scientist}:
\begin{itemize}
\tightlist
\item
Use collaborative writing software or tools, such as shared documents saved and updated on the Web (e.g., Google Docs) or the ``tracked changes'' tool in Microsoft Word, to track any changes made to the original document.
\item
Designate a lead writer and keep track of the master version as well as the revised versions of the manuscript throughout the writing process.
\item
In order to communicate with the co-authors, leave comments and questions directly in the body of the manuscript.
\end{itemize}
\hypertarget{peer-review}{%
\chapter{Peer-review}\label{peer-review}}
Peer-reviewing, institutionalized over the past couple of decades, has now become an essential part of biomedical research and scientific writing \citep{rennie2003editorial}. First and foremost, scientific writing requires extensive self-reviewing of the authors' own work. Then there are informal, or ``friendly'' reviews where the researchers review the work of colleagues, mentees, co-authors and/or collaborators. There are also more ``formal'' reviews which are part of the publication process in peer-reviewed journals. Researchers are at both ends of these peer-reviewing processes, sometimes as the writer, and other times as the reviewer providing the feedback. As such, having the ability to write a helpful and relevant review aids with scientific writing. In this chapter, we discuss the review process in peer-reviewed journals, as well as some tips and guidelines for reviewing, relevant to both the reviewer and the one receiving the feedback.
\hypertarget{review-in-the-peer-reviewed-journals}{%
\section{Review in the peer-reviewed journals}\label{review-in-the-peer-reviewed-journals}}
In peer reviewed journals, once a manuscript is submitted, generally the editorial managers check the formatting of the manuscript and verify that it meets the formatting requirements set by the journal. Then, the manuscript is passed along to the editor who reviews it quickly to decide if the manuscript is promising and deemed appropriate to be sent to external reviewers for peer-reviewing. After the initial screening, the editor finds external reviewers for the manuscript. Depending on the journal, the authors may be asked to provide the names and contact information of suggested reviewers during the submission process. Some journals do not ask for suggested reviewers. The majority of journals are in-between: suggested reviewers are contacted if they are provided; otherwise, the editor finds reviewers from the journal's database of reviewers. The reviewers receive an invitation to review the manuscript and if they agree, they review the paper and send their recommendations to the editor. The reviewers must give an objective, honest and unbiased appraisal of the manuscript's strengths and areas for improvement. It is good practice for reviewers to provide supporting evidence with references when appropriate. Some suggest that peer-reviews should be ``standardized'' and based on up-to-date evidence \citep{moher2003peer}. Finally, once the editor receives the reviews, the editor makes a decision on the manuscript based on the reviewers' suggestions. In general, most editors rely on the reviewers' suggestions when making decisions and will try to accommodate the reviewers' suggestions and questions as best as possible. When the authors receive the reviewers' comments and suggestions, it is critical that they carefully consider each one. Organizing the revisions in a ``response to reviews'' format with point-by-point responses can be a clear and concise approach to explain how each of the reviewers' comments and concerns were considered and addressed \citep[p.222-230]{heard2016scientist}.
There are different types of blinding used in peer-reviewed journals. In single-blinded reviews, the reviewers know who the authors are, but the authors don't know who the reviewers are. In double-blinded reviews, both the authors and the reviewers do not know the identity of each other. In open peer review, both the authors' and the reviewers' names are disclosed. Furthermore, some open-access journals provide a peer-review history containing information about the authors and the reviewers, as well as the complete history of the reviews, including the reviewers' feedback and the authors' responses.
Peer-reviewing can be conducted even after the publication of a manuscript. Formal reviews of published scientific papers are also accepted in many journals. These post-publication reviews take the form of editorials, letters to the editor, responses to authors, or rapid responses. They usually raise issues that were not addressed during the initial review process before publication. Common issues include technical problems or subject-matter concerns. After these reviews, the authors generally have the opportunity to respond and try to address the raised concerns.
\hypertarget{guidelines-for-providing-a-constructive-review}{%
\section{Guidelines for providing a constructive review}\label{guidelines-for-providing-a-constructive-review}}
\begin{itemize}
\item
Avoid vague comments. Provide feedback and comments about the specific issues or problems that the authors can address and improve on.
\item
When writing a review, try to focus on the big picture of the work. Start your review with comments on what were the good things about the paper. Then provide feedback on what were the major issues of the work while being specific and constructive. Minor issues can be pointed out, but copy-editing is not needed. Journals usually provide space for reviewers to leave feedback for only the editors to read. When leaving comments for the editors, be upfront and report what you think are the major issues with the article.
\item
Follow a systematic process to conduct the review. For example, you can use established checklists and guidelines for critical appraisal of scientific articles, such as the reporting guidelines provided by the EQUATOR network \citep{altman2008equator}, the CONSORT guidelines for reporting randomised controlled trials \citep{schulz2010consort}, the STROBE guidelines for reporting observational studies \citep{von2007strengthening}, and the PRISMA guidelines for reporting systematic reviews \citep{page2021prisma}.
\item
Peer-reviewed journals generally provide specific guidelines for their reviewers. For example, the BMJ gives guiding questions for the reviewers to consider while reading the manuscript \citep{theBMJ2021resources}:
\begin{itemize}
\tightlist
\item
``Is the article important?''
\item
``Will it help the readers to make better decisions?'' This is a question pertinent to the audience of the journal. The BMJ's main audience is clinicians and researchers in the medical sciences. It is the responsibility of the reviewer to assess whether the article will be providing important knowledge to the readers of the journal.
\item
``Will the article add enough to existing knowledge?''
\item
``Does the article read well and make sense? Does it have a clear message?''
\item
For research articles they further include questions such as ``Does the work add enough to what is already in the published literature? If so, what does it add?''
\item
``Is the research question clearly defined and appropriately answered?''
\item
``(Is the overall design of the study) appropriate and adequate to answer the research question?''
\item
``(Are the methods) adequately described? Main outcome measure clear? Is the study fully reported in line with the appropriate reporting statement or checklist? Was the study ethical?''
\item
``(Do the results) answer the research question? Credible? Well presented?''
\item
``(Are the interpretation and conclusions) warranted by and sufficiently derived from/focused on the data? Discussed in the light of previous evidence? Is the message clear?''
\item
``(Are the references) up to date and relevant? Any glaring omissions?''
\item
``(Do abstracts, summary, key messages) reflect accurately what the paper says?''
\end{itemize}
\end{itemize}
\hypertarget{tips-for-getting-better-feedback-from-co-authors-and-collaborators}{%
\section{Tips for getting better feedback from co-authors and collaborators}\label{tips-for-getting-better-feedback-from-co-authors-and-collaborators}}
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Take some time to polish your manuscript. Try to catch any grammatical errors and polish the writing through careful proofreading.
\item
Set specific deadlines for reviews, with a reasonable amount of allotted time.
\item
Use software that allows for tracked changes and comments so that co-authors can leave comments and questions, and you can keep track of the changes and revisions.
\item
Generally, peer-reviewed journals have specific guidelines on how to resubmit the revised manuscript. Ensure that you are following the journal instructions.
\end{enumerate}
\hypertarget{responding-to-reviewer-comments}{%
\chapter{Responding to reviewer comments}\label{responding-to-reviewer-comments}}
Responding to peer review feedback is an integral part of the peer-reviewed publication process. Chapter 5 discussed the peer-review process in biomedical research and scientific writing in detail. We also mentioned in Chapter 5 how critical it is to consider the peer review feedback, and briefly described one way to organize the responses to review comments: a point-by-point response format. In this chapter, we go over how to respond to reviewer comments in greater detail and offer some pointers.
Firstly, once the scientific article is submitted to a peer-reviewed journal, there are generally three possible outcomes. The editors decide whether to accept the paper for publication, reject it (either as a ``desk rejection'' because the article is deemed incompatible with the journal's objectives and readership interests, or as a rejection after review), or ask for resubmission after revision. Almost all published papers in peer-reviewed journals have gone through this revision process; thus, receiving a ``revise and resubmit'' decision is a good sign, as addressing the reviewers' comments and concerns satisfactorily is likely to result in the paper being accepted for publication. The peer review feedback may include minor and major comments, as well as comments from reviewers, associate editors, and editors, and the requested revision may be minor or major. After the authors resubmit with revisions, the reviewers are usually asked to go over the revisions again and decide whether they are satisfactory. The editors may also provide feedback and request further revisions as necessary. Sometimes this revision process takes multiple rounds of peer review.
\hypertarget{tips-for-revisions-and-responding-to-review-comments}{%
\section*{Tips for revisions and responding to review comments}\label{tips-for-revisions-and-responding-to-review-comments}}
\addcontentsline{toc}{section}{Tips for revisions and responding to review comments}
Similar to the original submission package, the resubmission package includes a response letter to the reviewers, which generally begins by thanking the reviewers for their commentary and briefly describes the major changes made to the revised manuscript in response to the feedback received; it also includes the responses to the reviewers' comments, which we recommend organizing in a point-by-point format. Each journal has its own set of rules, but in general, authors are asked to resubmit a revised manuscript and appendix with tracked changes.
As suggested previously, organizing the response to reviewer comments in a point-by-point format is a clear and concise approach to showing the reviewers how the authors considered and addressed each of the reviewers' comments and concerns. We propose tips for responding to reviewer comments below, adapted from William Noble's rules for writing a response to reviewers \citep{noble2017ten} and resource for authors provided by PLOS \citep{plos2021peerfeedback}.
\begin{itemize}
\item
\textbf{Make a plan for revisions and responses when reading the reviewer comments}
\begin{itemize}
\tightlist
\item
Revision suggestions from reviewers and editors may include essential and non-essential (but nice to have) suggestions. While reading the comments, make a note of the essential revisions and set priority for these.
\item
Consider whether you will need to conduct additional analyses in order to respond to the comments, and plan time to address these accordingly.
\end{itemize}
\item
\textbf{Respond to everything in a point-by-point format}
\begin{itemize}
\tightlist
\item
To make the response to reviewer comments clear and comprehensive, list all comments provided by reviewers in a document and respond to all comments, even minor ones. Firstly, respond to the comment briefly, then indicate what changes/revisions/additions were made to the manuscript in response to the comment.
\item
Responding to comments in a point-by-point format can save time during subsequent rounds of review.
\item
When responding to comments, make sure your responses are \textbf{specific, direct, and concise}. Excessively long responses can be both unnecessary and unhelpful.
\item
It can be beneficial to use the same language and terminology as the reviewers.
\item
If you believe that the comment cannot be addressed in the current manuscript, it can be listed as a limitation and/or future work to demonstrate to the reviewers that you have considered the reviewer's concern.
\end{itemize}
\item
\textbf{Keep track-changes in the main manuscript}
\begin{itemize}
\tightlist
\item
Journals will generally ask authors to resubmit the revised manuscript as a Word document with tracked changes enabled. Keep all revisions and changes tracked.
\end{itemize}
\item
\textbf{Revise the manuscript to improve the clarity of the writing}
\begin{itemize}
\tightlist
\item
\emph{``Keep calm and take stock''} \citep{plos2021peerfeedback}. Remember that the goal of the peer review process is to improve science communication. Assume that reviewers have the best intentions for your manuscript, and ultimately, you should make the best of the feedback in order to improve the manuscript's quality.
\item
If the reviewer misunderstood the point you are trying to make, assume that this may have happened with the readers and try to clarify the point.
\item
When responding to reviewer comments, the first reaction should be to make a change in the manuscript rather than just addressing it in response to reviewers. If the reviewer expressed concern about something, the readers might have a similar concern.
\item
Try to address as many of the reviewers' requests as possible, even if you believe they are unnecessary. For example, the reviewers may request additional information on the background or methods that you believe are unnecessary. These requests, however, are harmless and will most likely add another layer to the manuscript. If the reviewers request lengthy additional information, it is possible to include this as an appendix.
\end{itemize}
\item
\textbf{Try to keep the response to the reviewer comments document as self-contained as possible}
\begin{itemize}
\tightlist
\item
When you revise or add new sentences or paragraphs to the manuscript in response to reviewer comments, include these revisions or additions in the response-to-reviewers document (e.g.~in quotes, a different font, or a different colour), so reviewers do not have to go back to the tracked manuscript to find where the revision was made.
\item
Referring to a specific subsection of the manuscript (e.g.~in the ``Sensitivity analysis'' subsection under the ``Methods'' section) may be more helpful than referring to a specific page number of the document because the reviewer may receive a different version of the manuscript.
\end{itemize}
\item
\textbf{Make time to compose the response to the reviewer comments document}
\begin{itemize}
\tightlist
\item
It is easier to be defensive and reactive to the comments than it is to maintain objectivity and clarity when responding to feedback. It is advised to \emph{``let it sink in before writing the response''} \citep{kotz2014effective10} once you have received the review comments.
\item
We propose that responding to the comments immediately may result in a dismissive and defensive attitude towards the comments, and we recommend that responses to reviewers' comments be written with time and revisions.
\item
Sometimes, it can be helpful to address the ``easy'' comments first, and then go through multiple rounds of responses to the remaining comments. Take your time revising the response to the reviewer comments document.
\end{itemize}
\item
\textbf{Be mindful of the tone}
\begin{itemize}
\tightlist
\item
Always be constructive, respectful, and polite in your response to reviewers' comments.
\item
It is possible to disagree with a comment. However, if you disagree with a comment, explain why: be respectful and provide evidence from previous literature or your own work to support your point of view. Without a clear and sufficient explanation, the reviewers may make the same suggestion again in the subsequent round of review.
\item
Set aside time to re-read your responses to ensure that you have written them in a calm and professional manner.
\end{itemize}
\end{itemize}
The reviewers' comments provide insight into the minds of potential readers and serve as an opportunity to improve the manuscript. Revisions almost always strengthen a paper, so take advantage of this process to improve your paper and make your science communication more effective and thoughtful. A well-written resubmission package, including a response letter and point-by-point responses to reviewers' comments, can help to improve the manuscript and ultimately lead to its publication.
\hypertarget{collaborative-tools}{%
\chapter{Collaborative Tools}\label{collaborative-tools}}
\hypertarget{rmarkdown}{%
\section{Rmarkdown}\label{rmarkdown}}
Here is a brief tutorial on how to use R Markdown to mix code and text to create a reproducible article with citations, tables and figures, i.e., how to use R Markdown as a tool to write scientific documents.
\hypertarget{github}{%
\section{GitHub}\label{github}}
Here is a video about how to use GitHub together with RStudio and R to write and update scientific documents. In the description of the video, you can find a table of contents and click through to the respective sections.
\hypertarget{refs}{}
\begin{CSLReferences}{0}{0}
\end{CSLReferences}
\bibliography{book.bib,packages.bib}
\end{document}
\chapter{Internet Data Handling \label{netdata}}
This chapter describes modules which support handling data formats
commonly used on the Internet.
\localmoduletable
\documentclass[12pt]{report}
\usepackage{geometry}
\geometry{letterpaper}
%%%%%%%%%%%%%%%%%%%%
\newcommand{\hide}[1]{}
%\usepackage{natbib}
\usepackage{xcolor}
\usepackage{url}
\usepackage{hyperref}
\usepackage{mathtools}
\usepackage{comment}
\usepackage{amscd}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{cases}
\usepackage{cutwin}
\usepackage{enumerate}
\usepackage{epstopdf}
\usepackage{graphicx}
\usepackage{ifthen}
\usepackage{lipsum}
\usepackage{mathrsfs}
\usepackage{multimedia}
\usepackage{placeins}
\usepackage{subcaption}
\usepackage{wrapfig}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{multicol}
\usepackage{titlesec}
\usepackage{chngcntr}
\usepackage{listings}
\counterwithout{figure}{chapter}
\counterwithout{equation}{section}
\counterwithout{equation}{chapter}
\counterwithout{table}{chapter}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.8,0.8,0.8}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\renewcommand{\lstlistingname}{Code}% Listing -> Algorithm
\lstset{style=mystyle}
\renewcommand*{\arraystretch}{1.5}
\usepackage[square,numbers]{natbib}
\bibliographystyle{abbrvnat}
%\input{/usr/local/LATEX/Lee_newcommands.tex}
\newcommand{\itemlist}[1]{\begin{itemize}#1\end{itemize}}
\newcommand{\enumlist}[1]{\begin{enumerate}#1\end{enumerate}}
\newcommand{\desclist}[1]{\begin{description}#1\end{description}}
\newcommand{\Answer}[1]{\begin{quote}{\color{blue}#1}\end{quote}}
\newcommand{\AND}{\wedge}
\newcommand{\OR}{\vee}
\newcommand{\ra}{\rightarrow}
\newcommand{\lra}{\leftrightarrow}
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\title {gNet (v0.2.1)}
\author{Mehmet Gökçay KABATAŞ - MGokcayK \\ github.com/MGokcayK \\ \texttt{[email protected]}
}
%\date{DD.MM.YYYY} % Activate to display a given date or no date
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[theorem]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}{Definition}[section]
\newtheorem{proposition}{Proposition}[section]
%\titleformat{\chapter}[display] {\normalfont\bfseries}{}{10pt}{\Large}
\titleformat{\chapter}[hang] {\normalfont\huge\bfseries}{\chaptertitlename\ \thechapter:}{1em}{}
\begin{document}
\maketitle
\tableofcontents
\chapter{What is gNet?}
\paragraph{}
gNet is a mini Deep Learning (DL) library. It was written to understand how DL works. It runs on the CPU. It is written in Python and uses:
\begin{itemize}
\item Numpy for linear algebra calculations
\item Matplotlib for plottings
\item Texttable for proper printing of model summary in cmd
\item wget for download MNIST data
\item idx2numpy for load MNIST data
\end{itemize}
as third-party libraries.
\paragraph{}
During development, Tensorflow, Keras, PyTorch and some other libraries were examined, and the Keras end-user approach was adopted. Therefore, if you are familiar with Keras, you can use gNet easily.
\paragraph{}
gNet does not have many functions and methods for now, because features are added as the author needs to learn them. gNet is also a personal project, so its development depends on the author's learning process.
\section{Installation}
Installation can be done with pip, or by cloning the Git repository and using it locally in your workspace.
To install with pip (\url{https://pypi.org}):
\begin{lstlisting}[language=bash, numbers=none, caption={Install with pip}, label={ex:install}]
pip install gNet
\end{lstlisting}
\chapter{Differences from previous versions}
\section{What are the difference in v0.2.1 from previous version v0.2?}
\paragraph{}
There are several additions and changes compared to the previous version, v0.2. These differences can be listed as:
\begin{itemize}
\item Added the Functional Layer Connection (FLC) property. gNet now supports non-sequential model creation, similar to PyTorch.
\item Layers have a `get\_layers` method for finding all layers reachable from the input layer. This method can be called on any layer.
\item The `zero\_grad` method can be called on any layer in the model, as well as through the Model class' related method. It zeroes the gradients of the related parameters in all layers of the model.
\item The `save\_model` implementation moved from the neuralnetwork module to the base layer class. The method is still available in neuralnetwork and simply delegates to it.
\item The `load\_model` implementation moved from the neuralnetwork module to the base layer class. The method is still available in neuralnetwork and simply delegates to it.
\item The `get\_model\_summary` implementation moved from the neuralnetwork module to the base layer class. The method is still available in neuralnetwork and simply delegates to it. It can show the previous layer with its layer number or name.
\item The loss function's loss method's `model\_params` argument changed to `output\_layer`.
\item New printing options during training and in the save\_model/load\_model functions.
\item Added new functionality to the `get\_loss\_plot`, `get\_accuracy\_plot` and evaluate methods.
\item Added REGISTER\_* functionality. gNet now supports registering custom Initializer, Loss and Optimizer classes.
\end{itemize}
\section{What are the difference in v0.2 from previous version v0.1.2?}
\paragraph{}
There are several additions and changes compared to the previous version, v0.1.2. These differences can be listed as:
\begin{itemize}
\item Addition of SimpleRNN, LSTM, GRU, TimeDistributed and RepeatVector layers.
\item Addition of tensor.append, tensor.tanh and elementwise tensor.maximum operations.
\item Addition of orthogonal initialization for RNN layers.
\item Fixed an error where a custom activation function could not be called directly (i.e., without using its string name).
\item Fixed the gradient calculation of the tensor.power operation for negative powers.
\item Fixed the broadcasting of the tensor\_sum operation.
\item Changed 2D matrix multiplication operations from the `dot` to the `matmul` tensor op.
\item Changed the epoch\_loss calculation for multiple loss outputs, such as when the output layer is TimeDistributed(Dense(2)).
\item Removing tensor.dot operation which is unnecessary for now.
\end{itemize}
\section{What are the differences in v0.1.2 from previous version v0.1?}
\paragraph{}
There are several additions and changes compared to the previous version, v0.1. These differences can be listed as:
\begin{itemize}
\item Addition of Conv1D, Conv3D, MaxPool1D, MaxPool3D, AveragePool1D, AveragePool3D layers.
\item Changed the dtype of operations and values to float32 to increase calculation speed and reduce memory usage.
\item Changed the padding approach and added padding to several layers which did not have it before, such as MaxPool2D.
\item Fixed Batch Normalization, which was not applicable with even kernel sizes, and made the layer work with 1D and 3D layers.
\item Added assertions on some layer input parameters to avoid wrong inputs and make errors understandable for the end user.
\end{itemize}
\chapter{Usage with Examples}
\paragraph{}
gNet is also a mini computation library based on numpy; details can be found in the next chapters. This chapter is an abstract of the other chapters, with examples. One of the abilities of gNet is taking the gradient (derivative) of a function. This is used internally by gNet, but the user can also calculate gradients of custom functions. Let's look at it.
\section{How calculate gradient?}
\paragraph{}
Gradient calculation has two rules. First, the tensors that are inputs of the function should have have\_grad = True. Second, the backward method must be called on the result. As an example, consider the function $f(x,y) = 3x^2 + 2y$, where the user wants to find $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ at $x=2, y=5$.
\paragraph{}
Analytically, $\frac{\partial f}{\partial x} = 6x, \frac{\partial f(2,5)}{\partial x} = 12$ and $\frac{\partial f}{\partial y} = 2, \frac{\partial f(2,5)}{\partial y} = 2$.
\begin{lstlisting}[language=Python, numbers=none, caption={Calculation of gradient.}, label={ex:grad-calc}]
import gNet.tensor as T
x = T.Tensor(2., have_grad=True)
y = T.Tensor(5., have_grad=True)
f = 3 * T.power(x, 2) + 2 * y # also 3 * x**2 + 2 * y can be used.
f.backward() # calculate derivatives.
print(f, '\n', x.grad, '\n', y.grad)
-----------------------------------------------
Tensor(22.0,shape=(()), have_grad=True)
Tensor(12.0,shape=(()), have_grad=False)
Tensor(2.0,shape=(()), have_grad=False)
\end{lstlisting}
\section{How load MNIST Dataset?}
\paragraph{}
To load the MNIST dataset, gNet has a class for it: the `MNIST\_Downloader` class in the `utils` module does the job. Usage is easy. Let's look at it.
\begin{lstlisting}[language=Python, numbers=none, caption={Load MNIST Dataset.}, label={ex:mnist-load}]
from gNet import utils
mnist = utils.MNIST_Downloader()
x_train, y_train = mnist.load_train()
x_test, y_test = mnist.load_test()
# to normalize data
x_train, x_test = x_train / 255.0, x_test /255.0
print('SHAPES :', x_train.shape, y_train.shape, x_test.shape, y_test.shape)
-----------------------------------------------
SHAPES : (60000, 28, 28) (60000,) (10000, 28, 28) (10000,)
\end{lstlisting}
\section{How make one-hot vector of label of MNIST Dataset?}
\paragraph{}
gNet works with one-hot vectors for label values. If the dataset's labels are not one-hot encoded, gNet has a function for it: the `make\_one\_hot` function in the `utils` module does the job. Usage is easy. Let's look at it.
\begin{lstlisting}[language=Python, numbers=none, caption={Making one-hot vector of label of dataset.}, label={ex:make-one-hot}]
from gNet import utils
num_classes = 10 # for MNIST dataset
y_train = utils.make_one_hot(y_train, num_classes)
y_test = utils.make_one_hot(y_test, num_classes)
print('SHAPES :', x_train.shape, y_train.shape, x_test.shape, y_test.shape)
-----------------------------------------------
SHAPES : (60000, 28, 28) (60000, 10) (10000, 28, 28) (10000, 10)
\end{lstlisting}
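For reference, one-hot encoding itself is simple. The following numpy sketch illustrates the idea; it is an illustration only, not gNet's actual `make\_one\_hot` implementation.
\begin{lstlisting}[language=Python, numbers=none, caption={Illustrative one-hot encoding sketch (not gNet's implementation).}, label={ex:one-hot-sketch}]
import numpy as np

def one_hot_sketch(labels, num_classes):
    # each label index becomes a row with a single 1 at that index
    out = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

print(one_hot_sketch(np.array([0, 2, 1]), 3))
-----------------------------------------------
[[1. 0. 0.]
 [0. 0. 1.]
 [0. 1. 0.]]
\end{lstlisting}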
\section{How create MLP model?}
\paragraph{}
An MLP model contains only Dense layers. Let's create a model which has one hidden layer and one output layer. \textbf{Note that a Flatten layer should be used before the first Dense layer; in this structure the Flatten layer is the input layer of the model and therefore takes the `input\_shape` parameter.}
\begin{lstlisting}[language=Python, numbers=none, caption={Create MLP model.}, label={ex:create-mlp-model}]
from gNet import model as gModel
from gNet import layer
model = gModel.Model()
# add first layer as flatten layer with input_shape
model.add(layer.Flatten(input_shape=x_train[0].shape))
# add hidden layer with ReLU activation function
model.add(layer.Dense(128,'relu'))
# add output layer with Softmax activaion function
model.add(layer.Dense(10, 'softmax' ))
\end{lstlisting}
\section{How create CNN model?}
\paragraph{}
A CNN model contains Conv2D, Activation, Pooling (Max or Average), Flatten and Dense layers. Let's create a model which has one Conv2D layer. \textbf{Note that a Flatten layer should still be used before the first Dense layer, but without the `input\_shape` parameter as in the MLP model. Here the Conv2D layer is the input layer of the model and therefore takes the `input\_shape` parameter.}
\paragraph{}
\textbf{Note that the input of Conv2D should be 3-dimensional and `channel first`}. For the MNIST dataset, this means a third (channel) dimension should be added.
\begin{lstlisting}[language=Python, numbers=none, caption={Create CNN model.}, label={ex:create-cnn-model}]
from gNet import model as gModel
from gNet import layer
# make it channel first 3D data.
x_test = x_test[:, None, :, :]
x_train = x_train[:, None, :, :]
model = gModel.Model()
# add first layer as Conv2D layer with input_shape
model.add(layer.Conv2D(filter=5, kernel=(9,9),stride=(1,1),padding='valid', input_shape=x_train[0].shape, use_bias=True))
# activate output
model.add(layer.Activation('relu'))
# pool output
model.add(layer.MaxPool2D())
# flat output for dense layer
model.add(layer.Flatten())
# add hidden layer with ReLU activation function
model.add(layer.Dense(128,'relu'))
# add output layer with Softmax activaion function
model.add(layer.Dense(10, 'softmax' ))
\end{lstlisting}
\section{How create model with Dropout layer?}
\paragraph{}
A Dropout layer can be added anywhere except as the input layer. It is generally used after hidden Dense layers.
\begin{lstlisting}[language=Python, numbers=none, caption={Create model with Dropout.}, label={ex:create-dropout-model}]
from gNet import model as gModel
from gNet import layer
model = gModel.Model()
# add first layer as flatten layer with input_shape
model.add(layer.Flatten(input_shape=x_train[0].shape))
# add hidden layer with ReLU activation function
model.add(layer.Dense(128,'relu'))
# add dropout with 0.5 probability
model.add(layer.Dropout(0.5))
# add output layer with Softmax activaion function
model.add(layer.Dense(10, 'softmax' ))
\end{lstlisting}
\section{How create model with Batch Normalization layer?}
\paragraph{}
A Batch Normalization layer can be placed after Conv2D, Activation (sometimes before it) and Dense layers. The general suggestion for using a BN layer in a CNN is to place it after the Activation layer.
\begin{lstlisting}[language=Python, numbers=none, caption={Create model with Batch Normalization.}, label={ex:create-bn-model}]
from gNet import model as gModel
from gNet import layer
model = gModel.Model()
# add first layer as Conv2D layer with input_shape
model.add(layer.Conv2D(filter=5, kernel=(9,9),stride=(1,1),padding='valid', input_shape=x_train.shape[1:], use_bias=True))
# activate output
model.add(layer.Activation('relu'))
# add Batch Normalization
model.add(layer.BatchNormalization())
# pool output
model.add(layer.MaxPool2D())
# flat output for dense layer
model.add(layer.Flatten())
# add hidden layer with ReLU activation function
model.add(layer.Dense(128,'relu'))
# add output layer with Softmax activaion function
model.add(layer.Dense(10, 'softmax' ))
\end{lstlisting}
\section{How create supervised learning structure and setup?}
\paragraph{}
After model creation, the training structure should be created and then trained. Before training, the loss function and optimizer should be set. Let's do it.
\begin{lstlisting}[language=Python, numbers=none, caption={Set loss function and optimizer.}, label={ex:setup}]
from gNet import neuralnetwork as NN
# create structure and put created model into structure
net = NN.NeuralNetwork(model)
# setup structure
net.setup(loss_function='cce', optimizer='adam')
\end{lstlisting}
\section{How create supervised learning structure and setup with custom parameters?}
\paragraph{}
If the user wants to use a built-in loss function, optimizer, or both with custom parameters, the needed classes have to be created from the corresponding modules. Creating your own loss function class is explained in Chapter \ref{ch:loss}. Creating your own optimizer class is explained in Chapter \ref{ch:optimizer}.
\begin{lstlisting}[language=Python, numbers=none, caption={Set loss function and optimizer with custom parameters.}, label={ex:setup-custom}]
from gNet import neuralnetwork as NN
from gNet import loss_functions
from gNet import optimizer
# create structure and put created moden into structure
net = NN.NeuralNetwork(model)
# create loss function
loss = loss_functions.CategoricalCrossEntropy()
# create optimizer
opt = optimizer.Adam(lr=0.0001)
# setup structure
net.setup(loss_function=loss, optimizer=opt)
\end{lstlisting}
\section{How train and evaluate model?}
\paragraph{}
After setting up the structure, training and evaluating the model is easy.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate model.}, label={ex:train}]
# train model
net.train(x_train, y_train, batch_size=32, epoch=10, printing=['loss', 'accuracy'])
# evaluate model
net.evaluate(x_test, y_test)
\end{lstlisting}
\section{How train and evaluate model with validation?}
\paragraph{}
If the model is trained with validation, there are two ways to do it. The first way is to assign a validation rate. Printing the validation loss and accuracy is done by editing the printing list.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate model with validation rate.}, label={ex:train-val-rate}]
# train model with 20% validation.
net.train(x_train, y_train, batch_size=32, epoch=10, val_rate = 0.2, printing=['loss', 'accuracy', 'val_loss', 'val_acc'])
# evaluate model
net.evaluate(x_test, y_test)
\end{lstlisting}
The second way is to assign validation data directly.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate model with validation data.}, label={ex:train-val-data}]
# train model with validation data where valid_x is sample validation data and valid_y is labels of them.
net.train(x_train, y_train, batch_size=32, epoch=10, val_x= valid_x, val_y=valid_y, printing=['loss', 'accuracy', 'val_loss', 'val_acc'])
# evaluate model
net.evaluate(x_test, y_test)
\end{lstlisting}
More details can be found in Chapter \ref{pr:validation}.
\section{How train and evaluate model one batch?}
\paragraph{}
After setting up the structure, training batch by batch is also easy. This approach is suggested when the data is too big to load into memory. Usage is the same as the `train` method.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate model on batch.}, label={ex:train-batch}]
batch_size = 32
epoch = 10
for e in range(epoch):
# get start index of data for each epoch.
_starts = np.arange(0, x_train.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for each epoch.
np.random.shuffle(_starts)
# run batchs
print("\nEpoch : ", e + 1)
for _start in _starts:
# find last index of batch and iterate other parameters.
_end = _start + batch_size
_x_batch = x_train[_start:_end]
_y_batch = y_train[_start:_end]
net.train_one_batch(_x_batch, _y_batch, printing=['loss', 'accuracy'], single_batch=False)
# set new epoch
net.new_epoch()
# get start index of data for evaluation
_starts_test = np.arange(0, x_test.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for evaluation
np.random.shuffle(_starts_test)
# run evaluation
print("\nEvaluate:")
for _start_t in _starts_test:
# find last index of batch and iterate other parameters.
_end_t = _start_t + batch_size
_x_batch_t = x_test[_start_t:_end_t]
_y_batch_t = y_test[_start_t:_end_t]
net.evaluate_one_batch(_x_batch_t, _y_batch_t, single_batch=False)
\end{lstlisting}
\section{How train and evaluate model with validation one batch?}
\paragraph{}
If the model is trained one batch at a time with validation, there are two ways to do it. The first way is to assign a validation rate. Printing the validation loss and accuracy is done by editing the printing list.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate batch with validation rate.}, label={ex:train-batch-val-rate}]
batch_size = 32
epoch = 10
for e in range(epoch):
# get start index of data for each epoch.
_starts = np.arange(0, x_train.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for each epoch.
np.random.shuffle(_starts)
# run batchs
print("\nEpoch : ", e + 1)
for _start in _starts:
# find last index of batch and iterate other parameters.
_end = _start + batch_size
_x_batch = x_train[_start:_end]
_y_batch = y_train[_start:_end]
# train batch with 20% validation
net.train_one_batch(_x_batch, _y_batch, val_rate = 0.2, printing=['loss', 'accuracy', 'val_loss', 'val_acc'], single_batch=False)
# set new epoch
net.new_epoch()
# get start index of data for evaluation
_starts_test = np.arange(0, x_test.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for evaluation
np.random.shuffle(_starts_test)
# run evaluation
print("\nEvaluate:")
for _start_t in _starts_test:
# find last index of batch and iterate other parameters.
_end_t = _start_t + batch_size
_x_batch_t = x_test[_start_t:_end_t]
_y_batch_t = y_test[_start_t:_end_t]
net.evaluate_one_batch(_x_batch_t, _y_batch_t, single_batch=False)
\end{lstlisting}
The second way is to assign validation data directly.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate batch with validation data.}, label={ex:train-batch-val-data}]
batch_size = 32
epoch = 10
for e in range(epoch):
# get start index of data for each epoch.
_starts = np.arange(0, x_train.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for each epoch.
np.random.shuffle(_starts)
# run batchs
print("\nEpoch : ", e + 1)
for _start in _starts:
# find last index of batch and iterate other parameters.
_end = _start + batch_size
_x_batch = x_train[_start:_end]
_y_batch = y_train[_start:_end]
# train model with validation data where valid_x is sample validation data and valid_y is labels of them.
net.train_one_batch(_x_batch, _y_batch, val_x= valid_x, val_y=valid_y, printing=['loss', 'accuracy', 'val_loss', 'val_acc'], single_batch=False)
# set new epoch
net.new_epoch()
# get start index of data for evaluation
_starts_test = np.arange(0, x_test.shape[0], batch_size)
# if data willing to shuffled, shuffle it by shuffling index for evaluation
np.random.shuffle(_starts_test)
# run evaluation
print("\nEvaluate:")
for _start_t in _starts_test:
# find last index of batch and iterate other parameters.
_end_t = _start_t + batch_size
_x_batch_t = x_test[_start_t:_end_t]
_y_batch_t = y_test[_start_t:_end_t]
net.evaluate_one_batch(_x_batch_t, _y_batch_t, single_batch=False)
\end{lstlisting}
More details can be found in Chapter \ref{pr:validation}.
\section{How predict of model?}
\paragraph{}
After training, prediction on unseen data can be run with the `predict` method.
\begin{lstlisting}[language=Python, numbers=none, caption={Predict data on trained model.}, label={ex:predict}]
# prediction for x, which is unseen data.
net.predict(x)
\end{lstlisting}
\section{How save model?}
\paragraph{}
After training, save the trainable parameters under the name 'gNet\_usage\_save\_model'.
\begin{lstlisting}[language=Python, numbers=none, caption={Save model with name 'gNet\_usage\_save\_model'.}, label={ex:save-model}]
# save model with name 'gNet_usage_save_model'.
net.save_model('gNet_usage_save_model')
\end{lstlisting}
\section{How load model?}
\paragraph{}
After training, the trainable parameters saved under the name 'gNet\_usage\_save\_model' can be loaded with the `load\_model` method.
\begin{lstlisting}[language=Python, numbers=none, caption={Load model with name 'gNet\_usage\_save\_model'.}, label={ex:load-model}]
# load model with name 'gNet_usage_save_model'.
net.load_model('gNet_usage_save_model')
\end{lstlisting}
\section{How get loss and accuracy plots of training?}
\paragraph{}
After training, the loss and accuracy plots of the training can be shown and saved with custom names.
\begin{lstlisting}[language=Python, numbers=none, caption={Plots of training.}, label={ex:plot-model}]
# show loss plot of training and saved as 'gNet_usage_loss.png'
net.get_loss_plot(show=True, save=True, figure_name='gNet_usage_loss.png')
# show accuracy plot of training and saved as 'gNet_usage_acc.png'
net.get_accuracy_plot(show=True, save=True, figure_name='gNet_usage_acc.png')
\end{lstlisting}
\section{How train model with Functional Layer Connection (FLC) approach?}
\paragraph{}
After preparing the data for training, the model can be created in a PyTorch-like way.
\begin{lstlisting}[language=Python, numbers=none, caption={Train and evaluate model with FLC.}, label={ex:train_FLC}]
# declare Trainer
class MnistTrainer():
def __init__(self) -> None:
self.batchSize = 32
self.epoch = 10
self.createModel()
self.loss = LF.CategoricalCrossEntropy()
self.acc = self.loss.get_metric()
self.layers = self.output.get_layers() # get all connectec layer from input layer.
self._optimizer = optimizer.Adam()
self.output.get_model_summary() # get model summary
def createModel(self):
self.flatten = layer.Flatten(input_shape=x_train[0].shape)
self.flatten() # calculate layer properties as input layer.
self.h1 = layer.Dense(128,'relu')
self.h1(self.flatten) # connect the hidden layer to flatten layer as previous layer.
self.output = layer.Dense(10, 'softmax')
self.output(self.h1)
# compute model layer by layer
def compute(self, inputs, train=True):
x = self.flatten.compute(inputs, train)
x = self.h1.compute(x, train)
return self.output.compute(x, train)
def train(self):
for e in range(self.epoch):
self._ite = 0
self.acc.reset()
self._starts = np.arange(0, x_train.shape[0], self.batchSize)
self._epoch_loss = 0
for _start in self._starts:
self._ite += 1
_end = _start + self.batchSize
_x_batch = T.make_tensor(x_train[_start:_end])
_y_batch = T.make_tensor(y_train[_start:_end])
self.output.zero_grad() # zeroing all layers' grad by calling `zero_grad`
_pred = self.compute(_x_batch, True)
_loss = self.loss.loss(_y_batch, _pred, self.output)
_loss.backward()
self._epoch_loss += np.mean(_loss.value)
self._accVal = self.acc.accuracy(_y_batch,_pred)
self._optimizer.step(self.layers)
printing = 'Epoch : %d / %d ' % (e + 1, self.epoch)
printing += ' Loss : %.4f ' % (np.round(self._epoch_loss / self._ite, 4))
printing += ' Accuracy : %.4f ' % (np.round(self._accVal, 4))
print(printing, end='\r')
print("")
# create and train the Trainer
net = MnistTrainer()
net.train()
\end{lstlisting}
\chapter{Functional Layer Connection}
\label{ch:FLC}
\paragraph{}
The main purpose of adding the Functional Layer Connection (FLC) property to gNet is the ability to create non-sequential models. The basic approach uses Python's `\_\_call\_\_` functionality to connect the layers. An important point of model creation is creating the first layer of the model: to mark a layer as the first layer, the user just calls it as a callable object.
\paragraph{}
In code \ref{ex:train_FLC}, the method called `createModel` handles the creation of the model. The model has 3 layers: Flatten, Dense (hidden) and Dense (output). To assign the Flatten layer as the input layer, the layer is simply called as a callable object in the `self.flatten()` line. After that, the hidden layer h1 is connected to the Flatten layer by calling `self.h1(self.flatten)`. With this structure, the h1 layer learns that the Flatten layer is its previous layer, and the Flatten layer learns that h1 is one of its next layers. The user can also add another hidden layer connected to the Flatten layer with `self.hNew(self.flatten)`.
\paragraph{}
A layer can have multiple next layers, but it has only one previous layer.
\paragraph{}
To handle the layer connection property, the base Layer class has a `\_connect\_layer` function which takes a Layer-based class as its argument. The function connects the layers to each other through `preLayer` and `\_nextLayers`.
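The following is a minimal, simplified sketch of this `\_\_call\_\_`-based connection mechanism. It illustrates the idea rather than gNet's actual Layer implementation; apart from `preLayer` and `\_nextLayers`, which are named in the text above, all names are illustrative.
\begin{lstlisting}[language=Python, numbers=none, caption={Simplified sketch of \_\_call\_\_-based layer connection.}, label={ex:flc-sketch}]
class SketchLayer:
    def __init__(self):
        self.preLayer = None      # a layer has at most one previous layer
        self._nextLayers = []     # ... but can feed several next layers

    def __call__(self, pre_layer=None):
        self._connect_layer(pre_layer)
        return self

    def _connect_layer(self, pre_layer):
        if pre_layer is not None:
            self.preLayer = pre_layer
            pre_layer._nextLayers.append(self)

# usage mirroring the createModel example
flatten, h1, output = SketchLayer(), SketchLayer(), SketchLayer()
flatten()        # input layer: no previous layer
h1(flatten)      # hidden layer connected to flatten
output(h1)       # output layer connected to h1
\end{lstlisting}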
\chapter{tensor, tensor\_ops modules and Tensor class}
\paragraph{}
A tensor is basically an n-dimensional array. gNet uses numpy as its linear algebra library; thus, a tensor is basically numpy's n-dimensional array, ndarray. Yet there is a reason for creating a Tensor class instead of using ndarray directly: the calculation of gradients (derivatives).
\paragraph{}
In DL, gradients are needed for backpropagation (BP). BP finds the gradients of trainable parameters, such as weights and biases, and updates the trainable parameters with them. To calculate gradients, gNet uses Automatic Differentiation (AD) \cite{AD_savine}, like other libraries (Tensorflow, PyTorch, etc.). The implementation of AD is taken from Joel Grus's autograd video series and code \cite{Joel_Grus}. It is similar to PyTorch; if you are familiar with PyTorch, you will get used to it easily.
\paragraph{}
AD will not be explained in detail here, but it is basically an application of the chain rule. AD has two modes, forward and reverse; gNet uses reverse-mode AD. AD here is based on operator overloading, which is why the Tensor class was created. It has 4 instance attributes: value, have\_grad, depends\_on and grad. \textbf{value} is the value of the Tensor. \textbf{have\_grad} is a boolean which shows whether the Tensor has a gradient or not; if have\_grad = True, the Tensor has a gradient. \textbf{depends\_on} stores the dependent derivative functions of the Tensor, i.e., the derivative functions of the tensor operations that produced it; it will be explained later. Lastly, \textbf{grad} is the gradient of the Tensor.
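As a rough sketch only (simplified, not gNet's actual class), the four attributes can be pictured like this:
\begin{lstlisting}[language=Python, numbers=none, caption={Simplified sketch of the four Tensor attributes.}, label={lis:tensor-sketch}]
import numpy as np

class TensorSketch:
    def __init__(self, value, have_grad=False, depends_on=None):
        self.value = np.asarray(value, dtype=np.float32)  # the data itself
        self.have_grad = have_grad          # should a gradient be calculated for this tensor?
        self.depends_on = depends_on or []  # derivative functions of the operations that produced it
        self.grad = None                    # filled in when backward() is called
\end{lstlisting}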
\section{tensor module}
\paragraph{}
The tensor module contains the Tensor class and the tensor operations. The tensor operations' descriptions are written in the tensor module, while their implementations live in the tensor\_ops module. When \textbf{calling tensor operations, use them from the tensor module}.
\section{tensor\_ops module}
\paragraph{}
In the tensor\_ops module, the implementations of many basic operations can be found. Each operation should have 4 variables: value, have\_grad, ops\_name and depends\_on. These variables are the same as the Tensor attributes, because the operations return a newly calculated Tensor. The difference is ops\_name, which is just a string with the operation name, used for debugging BP during development.
\paragraph{}
To explain a tensor operation, let's look at the exp operation, which calculates the exponential of a tensor.
\begin{lstlisting}[language=Python, numbers=none, caption={exp operation.}, label={lis:exp-ops}]
def exp(t: 'Tensor') -> 'Tensor':
'''
Exponent calculation of tensor. Also it is calculate its gradient of operation
if tensor have_grad = True.
'''
value = np.exp(t._value)
have_grad = t.have_grad
ops_name = '_exp'
if have_grad:
def grad_fn_exp(grad: np.ndarray) -> np.ndarray:
grad = value * grad
return grad
depends_on = [T.Dependency(t, grad_fn_exp, ops_name)]
else:
depends_on = []
return T.Tensor(value, have_grad, depends_on)
\end{lstlisting}
In \ref{lis:exp-ops}, `t` is the input of the operation, which is also a Tensor. value is the calculated value of the result. Be aware that np.exp is not applied to t directly; it is calculated from \textbf{t.\_value} (t.value also works). This means the Tensor object is not used directly in numpy, because it is not just an array.
\paragraph{}
have\_grad is the boolean of the Tensor which shows whether Tensor `t` is differentiable (its gradient can be calculated) or not. It is needed because if Tensor `t` is not differentiable, its gradient will not be calculated and it will have zero gradients. If a basic operation has more than one input, like multiplication with `t1` and `t2`, and one of the inputs has have\_grad, the returned Tensor also has have\_grad.
\paragraph{}
ops\_name is the string of the operation name, used when debugging BP. depends\_on is another class which stores the dependent Tensor, the gradient function and ops\_name. depends\_on is important because, during the calculation of gradients, depends\_on is traversed recursively until all dependencies have been calculated.
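To make this recursion concrete, the following simplified sketch (not gNet's actual code; the attribute names on the dependency object are illustrative) shows how a reverse-mode backward pass walks the depends\_on list:
\begin{lstlisting}[language=Python, numbers=none, caption={Simplified sketch of the recursive backward pass.}, label={lis:backward-sketch}]
import numpy as np

def backward_sketch(tensor, grad=None):
    # seed gradient of the final result (all ones, same shape as the value)
    if grad is None:
        grad = np.ones_like(tensor.value)
    # accumulate the incoming gradient on this tensor
    tensor.grad = grad if tensor.grad is None else tensor.grad + grad
    # recurse into every dependency until the whole graph has been visited
    for dep in tensor.depends_on:
        backward_sketch(dep.tensor, dep.grad_fn(grad))
\end{lstlisting}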
\paragraph{}
There are two important things. Firstly, if the Tensor has have\_grad, depends\_on will be stored; otherwise it will be empty. Therefore, if an operation has multiple Tensor inputs, like multiplication with `t1` and `t2`, a depends\_on entry is created for each input Tensor which has a gradient.
\begin{lstlisting}[language=Python, numbers=none, caption={mul operation.}, label={lis:mul-ops}]
def mul(t1: 'Tensor', t2: 'Tensor') -> 'Tensor':
'''
Element wise multiplication of two `Tensor`. Also it is calculate its
gradient of operation if tensor have_grad = True.
'''
value = t1._value * t2._value
have_grad = t1.have_grad or t2.have_grad
ops_name = '_mul'
depends_on: List[Dependency] = []
if t1.have_grad:
def grad_fn_mul1(grad: np.ndarray) -> np.ndarray:
grad = grad * t2._value
# to handle broadcast, add dimension
ndims_added = grad.ndim - t1._value.ndim
for _ in range(ndims_added):
grad = grad.sum(axis=0)
for i, dim in enumerate(t1.shape):
if dim == 1:
grad = grad.sum(axis=i, keepdims=True)
return grad
depends_on.append(T.Dependency(t1, grad_fn_mul1, ops_name))
if t2.have_grad:
def grad_fn_mul2(grad: np.ndarray) -> np.ndarray:
grad = grad * t1._value
ndims_added = grad.ndim - t2._value.ndim
for _ in range(ndims_added):
grad = grad.sum(axis=0)
for i, dim in enumerate(t2.shape):
if dim == 1:
grad = grad.sum(axis=i, keepdims=True)
return grad
depends_on.append(T.Dependency(t2, grad_fn_mul2, ops_name))
return T.Tensor(value, have_grad, depends_on)
\end{lstlisting}
\paragraph{}
Lastly, the most important part of a tensor operation is its gradient function. To create a proper gradient, the shape of the gradient should be assigned carefully, thinking in reverse. It can be explained this way: when taking the gradient of a function analytically, the gradient shape is derived in the forward direction. For example, for the mean operation on a 3x3 tensor (one can call it a matrix, but it is also a Tensor), the result is a 1x1 tensor. Yet, because reverse-mode AD starts from the result to calculate the gradient of the function, the return of the gradient function of the mean operation should be a 3x3 tensor. This is the reverse thinking. If the operation is elementwise, it is the same as the analytical gradient calculation. On the other hand, if the operation is non-elementwise, this reverse thinking should be taken into account.
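As a concrete illustration of this reverse thinking, the gradient function of a mean-style operation over a 3x3 tensor might look like the following sketch (illustrative only, not gNet's implementation):
\begin{lstlisting}[language=Python, numbers=none, caption={Illustrative gradient function for a mean operation.}, label={lis:mean-grad-sketch}]
import numpy as np

def grad_fn_mean(grad: np.ndarray, input_shape=(3, 3)) -> np.ndarray:
    # forward: mean maps a 3x3 tensor to a scalar (1x1)
    # reverse: the incoming scalar gradient must be expanded back to 3x3,
    #          and every element receives an equal 1/N share of it
    n = np.prod(input_shape)
    return np.ones(input_shape, dtype=np.float32) * grad / n

print(grad_fn_mean(np.array(1.0)))   # 3x3 array filled with 1/9
\end{lstlisting}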
\paragraph{}
With these notes, the user can create custom tensor operations. Make sure that the inputs of operations are Tensors; to ensure that, the user can call the `make\_tensor` method in the tensor module. The structure is the same for all operations in gNet.
\paragraph{}
Note that the Tensor class supports most of the magic methods.
\section{How calculate gradient?}
\paragraph{}
Gradient calculation has two rules. First, the tensors that are inputs of the function should have have\_grad = True. Second, the backward method must be called on the result. As an example, consider the function $f(x,y) = 3x^2 + 2y$, where the user wants to find $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ at $x=2, y=5$.
\paragraph{}
Analytically, $\frac{\partial f}{\partial x} = 6x, \frac{\partial f(2,5)}{\partial x} = 12$ and $\frac{\partial f}{\partial y} = 2, \frac{\partial f(2,5)}{\partial y} = 2$.
\begin{lstlisting}[language=Python, numbers=none, caption={Calculation of gradient.}, label={lis:grad-calc}]
import gNet.tensor as T
x = T.Tensor(2., have_grad=True)
y = T.Tensor(5., have_grad=True)
f = 3 * T.power(x, 2) + 2 * y # also 3 * x**2 + 2 * y can be used.
f.backward() # calculate derivatives.
print(f, '\n', x.grad, '\n', y.grad)
-----------------------------------------------
Tensor(22.0,shape=(()), have_grad=True)
Tensor(12.0,shape=(()), have_grad=False)
Tensor(2.0,shape=(()), have_grad=False)
\end{lstlisting}
As seen in \ref{lis:grad-calc}, the variables x and y have have\_grad = True, which means that when f.backward() is called, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are calculated. The results are stored in the grad attribute. If tensor x had have\_grad = False, x.grad would be a zero tensor. So the differentiable tensors (x and y in this example) should be assigned carefully.
\chapter{neuralnetwork module}
\paragraph{}
The neuralnetwork module is the module of the Neural Network (NN) structure. gNet can create supervised learning structures for now; other structures, such as unsupervised or reinforcement learning, are not implemented. This module has one class, NeuralNetwork. The NeuralNetwork class has one input, the model. Models can be created with the model module, which will be explained later.
\paragraph{}
The NeuralNetwork class has 15 methods, not counting `\_\_init\_\_`. Some of these methods are hidden from the end user; most of them are for the end user. Let's explain each method and some details about it.
\paragraph{\_\_init\_\_}
Initialization of NeuralNetwork. It assigns the model from the input, sets the layers from the model, and sets the optimizer and loss function callers. A caller is a dictionary for the related module's classes: it stores the calling strings of the classes, so that the developer can call classes with strings. \ref{lis:isistance} is an example of how a caller is used.
\paragraph{\_feedForward}
This method is hidden from the end user (a leading \_ in a function name indicates that it is hidden). This method computes each layer, as will be explained in Chapter \ref{ch:layer}. It takes two inputs. The first input is `inp`, the input tensor of the first layer of the model. The second input is `train`, a boolean that indicates whether `\_feedForward` is being called during training or not. This boolean is needed because some layers, namely Dropout and Batch Normalization, compute differently during training; to make this difference, the layer should know whether the feed forward method is being called during training or not.
\paragraph{\_regularizer}
This method is also hidden from the end user. It calculates the regularizer of each layer. What a regularizer is will be explained in Chapter \ref{ch:regularizer}, and how it is implemented in the layers will be explained in the related chapter.
\paragraph{\_train\_print}
This method is also hidden from the end user. The method helps print different parameters during training and is called by the related methods. It uses two flags, boolean values that decide whether something is printed under certain conditions. For example, `TRAIN\_ONE\_BATCH\_FLAG` shows whether the method was called from the `train\_one\_batch` method or not; this knowledge leads to different printing.
\paragraph{}
Printing parameters can be listed as:
\begin{itemize}
\item Loss as 'loss',
\item Epoch loss as 'epoch\_loss',
\item Accuracy as 'accuracy',
\item Validation Loss as 'val\_loss',
\item Validation Accuracy as 'val\_acc',
\item ETA as 'ETA',
\item Nothing will print as 'no-print'.
\end{itemize}
Note that the printed loss is the average loss of the training so far. It works in this way: the epoch loss is the sum of the batch losses during the epoch, and the printed loss is calculated as $\frac{epoch\_loss}{passed\_number\_of\_batch}$. This gives the average loss of the training, similar to Keras and Tensorflow. Accuracy is calculated by the metric class, which will be explained in Chapter \ref{ch:metric}.
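In plain Python terms, the printed loss is the running average described above. The following is a small sketch with made-up numbers, not gNet internals:
\begin{lstlisting}[language=Python, numbers=none, caption={Sketch of the running-average loss printed during training.}, label={lis:avg-loss-sketch}]
epoch_loss = 0.0
for batch_no, batch_loss in enumerate([0.9, 0.7, 0.6], start=1):
    epoch_loss += batch_loss                 # epoch_loss: sum of batch losses so far
    printed_loss = epoch_loss / batch_no     # loss: average over the batches passed so far
    print('batch %d: loss = %.4f' % (batch_no, printed_loss))
\end{lstlisting}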
\paragraph{train}
This method is the heart of supervised learning: it trains the model. Its input arguments can be found in the function description or via the help function. The method has two loops, over epochs and over batches. One note about this method: for each batch, before calculating the gradients of the required parameters, the gradients of those parameters must be zeroed. If they were not zeroed, the gradients would be wrong; the reason is the structure of the Tensor class. To zero the gradients of the trainable parameters, the model has a `zero\_grad` function. Another note is that when the epoch changes, some parameters, such as the gradients, should be reset; accuracy is also one of them.
\paragraph{}
Another note: when creating a convolutional NN, the training data should have `have\_grad=True`. If the model has only a Multi-Layer Perceptron (MLP) structure, this is not needed. To handle this, there is a condition which checks whether the first layer is `Conv2D` or not. If the condition were removed, or `have\_grad` were always True, the training speed would decrease. To handle it, there is an inline condition.
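A minimal sketch of the kind of inline condition described above follows; the function and its arguments are illustrative, not gNet's exact internals.
\begin{lstlisting}[language=Python, numbers=none, caption={Illustrative condition for enabling gradients on the input batch.}, label={lis:convgrad-sketch}]
from gNet import layer
import gNet.tensor as T

def make_input_tensor(model_layers, x_batch_np):
    # illustrative only: gradients on the input are only needed when the first layer is Conv2D
    first_is_conv = isinstance(model_layers[0], layer.Conv2D)
    return T.Tensor(x_batch_np, have_grad=first_is_conv)
\end{lstlisting}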
\paragraph{train\_one\_batch}
This method does the same job as train; the difference is how it does it. This method trains only one batch of data. If the user wants to train on just one batch, they should call this method. There is also another reason for adding this method: if there is a memory problem during training (lots of data or very large data), the user can use this method.
\paragraph{}
An important point of this method is the `single\_batch` boolean variable. If single\_batch is True, only that batch's parameters are calculated; if it is False, the parameters are calculated up to that point in time. An example of the difference is the loss: when single\_batch=True, the loss equals that batch's loss, whereas when single\_batch=False, the loss equals the average loss up to that time.
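The difference between the two modes can be written out in plain Python with illustrative numbers:
\begin{lstlisting}[language=Python, numbers=none, caption={Difference between single\_batch=True and single\_batch=False.}, label={lis:single-batch-sketch}]
batch_losses = [0.9, 0.7, 0.6]        # losses of the batches trained so far
# single_batch=True  : the printed loss is only the current batch's loss
loss_single = batch_losses[-1]        # 0.6
# single_batch=False : the printed loss is the average loss up to that time
loss_running = sum(batch_losses) / len(batch_losses)   # ~0.733
\end{lstlisting}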
\paragraph{}
\label{pr:validation}
The `train` and `train\_one\_batch` methods can calculate \textbf{validation}. There are two ways to do it. The first way is to assign the validation rate parameter `val\_rate` to a value between 0 and 1; it splits the data into separate validation and training parts. The second way is to assign the `val\_x` and `val\_y` data directly. An important point for the second way is that the validation data should not also be part of the training data.
\paragraph{}
Another note about validation: when running `train\_one\_batch` with `val\_rate`, it will print different results than running train with `val\_rate`. The reason is that when `val\_rate` is used with `train\_one\_batch`, the validation data is split from the batch, while when it is used with train, the validation data is split from the whole data set. This makes the difference in the validation results. Yet, if the user passes `val\_x` and `val\_y` into train and `train\_one\_batch`, they give the same results because the data is the same for both methods. Also, `val\_rate` cannot be used together with `val\_x` and `val\_y` at the same time.
\paragraph{}
If there is a memory problem, using `train\_one\_batch` instead of `train` is suggested.
\paragraph{new\_epoch}
When the `train\_one\_batch` method is used for training over more than one epoch, this method should be called to reset some parameters and to calculate the validation if there is a validation case. The important point is where it is used: call the method at the end of the epoch loop. An example can be found in \ref{ex:train-batch}.
\paragraph{setup}
Setup is one of the base functions of the NeuralNetwork class. It assigns the loss function and the optimizer of the model. Built-in functions or classes can be selected by their string names. To handle this, the pattern in \ref{lis:isistance} is used; this approach is used a lot in gNet. \ref{lis:isistance} shows how the optimizer is assigned. Firstly, it checks whether the input is a string or not; if it is a string, the class is looked up in the optimizer caller. A caller is a dictionary created in some modules, which stores the strings of the related classes. For example, in the optimizer module there is a hidden dictionary called `\_\_optimizerDecleration`. This dictionary stores strings for the related classes, such as `'adam' : Adam`. When the user passes `optimizer='adam'` to the setup function, the built-in Adam optimizer is called from the optimizer module with its default parameters. Usage and creation of custom optimizer classes is explained in Chapter \ref{ch:optimizer}.
\begin{lstlisting}[language=Python, numbers=none, caption={Assigning optimizer.}, label={lis:isistance}]
if isinstance(optimizer, str):
    _opt = optimizer.lower()
    self._optimizer = self._optimizerCaller[_opt]()
else:
    self._optimizer = optimizer
\end{lstlisting}
\paragraph{predict}
Predict calls `\_feedForward` on its input. This method was created for the end user: after training, the user can predict unseen examples with the `predict` method.
\paragraph{evaluate}
Evaluation of the model. After training, the user can call the evaluate method to test the model with unseen test data. As of v0.2.1, the evaluate method can return the evaluation loss and accuracy.
\paragraph{evaluate\_one\_batch}
Does the same job as evaluate; the difference is that it evaluates only one batch. The one-batch approach is the same as in the `train\_one\_batch` method. If the model is single\_batch, only that batch's parameters are evaluated; if not, the model evaluates the parameters accumulated up to that time. An example of the difference between single\_batch=True and single\_batch=False is the loss: when single\_batch=True, the loss equals that batch's loss, whereas when single\_batch=False, the loss equals the average loss up to that time.
\paragraph{save\_model}
This method saves the trainable variables with the numpy save method w.r.t. the `file\_name` input. Generally, this method builds a list of the layers' trainable variables to save. For now, only the Batch Normalization layer is an exception: it has two additional parameters, the running mean and the running variance, which should also be saved. To handle this, there is a condition which expands the list with these parameters. To know which layer is a Batch Normalization layer, the `layer\_name` parameter stores all layer names in the model class; therefore, gNet knows which class is Batch Normalization. This method's implementation was moved (in v0.2.1) to the base Layer class to handle the FLC (\ref{ch:FLC}) property, yet the functionality still works as before.
\paragraph{load\_model}
This method loads the trainable variables with the numpy load method w.r.t. the `file\_name` input. It is the opposite of the `save\_model` method. It also takes care of the Batch Normalization layer to handle the additionally stored parameters. This method's implementation was moved (in v0.2.1) to the base Layer class to handle the FLC (\ref{ch:FLC}) property, yet the functionality still works as before.
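A minimal, hedged usage sketch follows; the file name is arbitrary, and only the method names and the `file\_name` argument are taken from the descriptions above.
\begin{lstlisting}[language=Python, numbers=none, caption={Hedged sketch of saving and loading a model.}, label={lis:save-load-sketch}]
...
net = NeuralNetwork(...)
...
net.save_model(file_name='my_model')  # store trainable variables with numpy
...
net.load_model(file_name='my_model')  # restore them into a model with the same architecture
...
\end{lstlisting}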
\paragraph{get\_loss\_plot and get\_accuracy\_plot}
These methods plot the loss and the accuracy w.r.t. the iterations. The Matplotlib library is used for them. `figure\_title`, which sets the title of the plot, `x\_label`, which sets the x label, and `y\_label`, which sets the y label, can be given as input arguments. Usage can be found in their descriptions.
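A minimal, hedged usage sketch is given below; the titles and labels are arbitrary, while the method and argument names are taken from the description above.
\begin{lstlisting}[language=Python, numbers=none, caption={Hedged sketch of the plotting methods.}, label={lis:plot-usage-sketch}]
...
net.get_loss_plot(figure_title='Training loss', x_label='Iteration', y_label='Loss')
net.get_accuracy_plot(figure_title='Training accuracy', x_label='Iteration', y_label='Accuracy')
...
\end{lstlisting}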
\paragraph{get\_model\_summary}
This method writes the model summary. The Texttable library is used for it. If a developer wants to adjust it, they should be careful with row addition: the first row should be a list of one variable, which is the list of column names. Usage can be found in its description. This method's implementation was moved (in v0.2.1) to the base Layer class to handle the FLC (\ref{ch:FLC}) property, yet the functionality still works as before.
\chapter{model module}
\paragraph{}
The model module has one class, called Model. The Model class has a `\_params` variable, which is a dictionary that stores some parameters about the NN model. To understand what is done in this class, let's look at its methods.
\paragraph{\_\_init\_\_}
Initialization of the Model class. It creates the `\_params` dict, which has 7 entries:
\begin{itemize}
\item `layer\_number` : number of layers. It increases when a layer is added to the model.
\item `layer\_name` : list of layer names.
\item `activation` : list of the activation functions of the layers.
\item `model\_neuron` : list of the neuron numbers of the layers. This number depends on the layer.
\item `layers` : list of layers. It stores the layers' addresses (handled by Python). Through this list, each layer of the model can be called.
\item `layer\_output\_shape` : list of the output shapes of the layers. It is needed by some layers, such as `Conv2D`, to calculate their own output shape. The layer output shape does not contain the batch size.
\item `\#parameters` : list of the number of parameters of the layers. It is used in the model summary.
\end{itemize}
\paragraph{add}
The `add` method is used to add a layer to the model. It calls the layer's `\_\_call\_\_` function, adds the layer to the `layers` list and increases `layer\_number`.
\paragraph{get\_layers and get\_params}
These methods return the layers of the model and the params of the model, respectively.
\paragraph{zero\_grad}
It zeroes the calculated gradient values of each layer in the `layers` list.
%\enlargethispage{\baselineskip}
\chapter{initializer module}
\paragraph{}
The initializer module contains built-in initializer classes. These classes are based on the base class called `Initializer`. The built-in initializer classes, with their calling strings, are:
\begin{itemize}
\item Ones init ('ones\_init')
\item Zeros init ('zeros\_init')
\item He's normal ('he\_normal')
\item He's uniform ('he\_uniform')
\item Normal init ('normal\_init')
\item Uniform init ('uniform\_init')
\item Xavier's normal ('xavier\_normal')
\item Xavier's uniform ('xavier\_uniform')
\item Orthogonal ('orthogonal').
\end{itemize}
Calling strings are used by callers. Layers also have an initializer caller. If the user wants to use a built-in initializer, it is enough to set the proper input argument with its calling string. For example, the default `initialize\_method` of the Dense layer is 'xavier\_uniform'.
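A brief, hedged sketch in the style of the other listings is given below; the Dense arguments other than `initialize\_method` are placeholders, and 'he\_normal' is one of the calling strings listed above.
\begin{lstlisting}[language=Python, numbers=none, caption={Hedged sketch of selecting a built-in initializer by its calling string.}, label={lis:initializer-calling-string-sketch}]
...
net = NeuralNetwork(...)
...
# built-in initializer selected by its calling string
net.add(Dense(100, 'relu', initialize_method='he_normal'))
...
\end{lstlisting}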
\paragraph{}
If the user wants to use a built-in initializer with different parameters, the user needs to create the built-in class with custom parameters and then pass the created instance. \ref{lis:initializer-built-in-custom-params} shows a Dense layer initialized with the normal initializer with mean 0.2 and standard deviation 0.8. By default, the mean equals 0.0 and the standard deviation equals 1.0.
\begin{lstlisting}[language=Python, numbers=none, caption={Built-in initializer with custom parameters.}, label={lis:initializer-built-in-custom-params}]
from gNet import initializer
...
init2 = initializer.Normal_init(mean=0.2, stdDev=0.8)
net = NeuralNetwork(...)
...
net.add(Dense(100, initialize_method=init2))
...
\end{lstlisting}
\paragraph{}
The initializer module has a declarator dictionary (called a `caller` in other modules) named `\_\_initializeDeclaretion`. It stores the built-in class addresses with their calling strings. For example, He\_normal is stored as `'he\_normal': He\_normal`.
\section{Creating Custom Initializer Class}
\paragraph{}
gNet supports creating a custom initializer class. First of all, \textbf{the class should inherit from the Initializer base class and should have a `get\_init` method}. Without these, the class cannot be called by gNet. The `get\_init` method should have a `shape` input argument and should return an initialized array (not a tensor) w.r.t. the shape input. Also, the base class has a `\_get\_fans` method which calculates the fan parameters `fan\_in` and `fan\_out`.
\paragraph{}
`fan\_in` and `fan\_out` differ w.r.t. the dimension of the `shape` parameter. If the length of the shape is 2 (i.e., the shape is 2D), `fan\_in` will be `shape[0]` and `fan\_out` will be `shape[1]`. If the length of the shape is not 2, `fan\_in` will be the product of `shape[1:]` and `fan\_out` will be `shape[0]`. The reason for this kind of implementation is the structure of the im2col algorithm, which uses the channel-first approach; details are given in Chapter \ref{ch:layer}. \textbf{Using the `\_get\_fans` method is not mandatory}; therefore, a custom class can also use its own methods.
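The following plain-numpy sketch reproduces the fan rule described above; it is a conceptual illustration, not gNet's exact `\_get\_fans` implementation.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of the fan rule.}, label={lis:fan-in-out-sketch}]
import numpy as np

def get_fans(shape):
    # 2D shape -> (fan_in, fan_out) = (shape[0], shape[1]);
    # otherwise -> fan_in = prod(shape[1:]), fan_out = shape[0] (channel-first / im2col style).
    if len(shape) == 2:
        return shape[0], shape[1]
    return int(np.prod(shape[1:])), shape[0]

print(get_fans((128, 10)))      # dense weight shape: (128, 10)
print(get_fans((32, 3, 3, 3)))  # conv kernel (filters, C, H, W): (27, 32)
\end{lstlisting}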
\paragraph{}
To give an example of a custom initializer class, let's create one.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom initializer class.}, label={lis:initializer-custom-class}]
import numpy as np

from gNet import initializer

class myInitializer(initializer.Initializer):
    def __init__(self, scale=0.05, **kwargs):
        self._scale = scale

    def get_init(self, shape=None, **kwargs) -> np.ndarray:
        return np.random.uniform(-1.0, 1.0, size=shape) * self._scale
\end{lstlisting}
In \ref{lis:initializer-custom-class}, a custom initializer class is created which draws from a uniform distribution between -1 and 1 with a scale factor.
\paragraph{}
To call `myInitializer` from layers, there are two ways to do it. The first way is to pass the class into the input argument directly. The second way is to register the custom class by calling the module's `REGISTER\_INITIALIZER` method and then call it by its calling string. \ref{lis:initializer-calling-custom-class} shows both ways of calling a custom initializer class.
\begin{lstlisting}[language=Python, numbers=none, caption={Calling custom initializer class.}, label={lis:initializer-calling-custom-class}]
# FIRST WAY
...
net = NeuralNetwork(...)
...
net.add(Dense(100, 'relu', initialize_method=myInitializer))
...
# SECOND WAY
...
initializer.REGISTER_INITIALIZER(myInitializer, 'myinit')
...
net = NeuralNetwork(...)
...
net.add(Dense(100, 'relu', initialize_method='myinit'))
...
\end{lstlisting}
\textbf{An important note for the second way: the calling string of the custom class should be all lower-case letters.}
\chapter{layer module}
\label{ch:layer}
\paragraph{}
The layer module contains built-in layer classes. These classes are based on the base class called `Layer`. The built-in layer classes are:
\begin{itemize}
\item Dense
\item Flatten
\item Activation
\item Conv1D
\item Conv2D
\item Conv3D
\item MaxPool1D
\item MaxPool2D
\item MaxPool3D
\item AveragePool1D
\item AveragePool2D
\item AveragePool3D
\item Dropout
\item Batch Normalization
\item SimpleRNN
\item LSTM
\item GRU
\item TimeDistributed
\item RepeatVector
\end{itemize}
Layer classes should be used through the model's `add` method. The user can also create a custom layer w.r.t. some rules.
\section{Creating Custom Layer Class}
\paragraph{}
gNet supports creating a custom layer class. First of all, \textbf{the class should inherit from the Layer base class and should have some methods}. Without these, the class cannot be called by gNet. There are several methods the user should implement to create a custom layer class: the `\_\_call\_\_`, `\_init\_trainable`, `compute` and `regularize` methods.
\paragraph{\_\_call\_\_}
This method is called when the layer is added to the model. It is one of Python's special (dunder) methods, invoked when the layer instance is called. It generally computes the model parameters; which model parameters are updated can differ from layer to layer, so be careful with them. Also, to handle FLC (\ref{ch:FLC}), the `self.\_connect\_layer(Layer)` function should be called.
\paragraph{\_init\_trainable}
This method is called at the end of the `\_\_call\_\_` method. It initializes the proper parameters such as the trainables. Even if the layer has no trainable parameter, the class should have an `\_init\_trainable` method that just `pass`es. Which parameters are initialized depends on the layer; thus, the implementation should be chosen carefully and the method should be called in the `\_\_call\_\_` method.
\paragraph{compute}
This method is called by the `\_feedForward` method. The compute method is the core of the layer's computation, so the implementation should be done carefully. Without a compute method, the layer cannot be called by the NN structure. The return value should be a tensor, and the input arguments are `inputs` and `train`: `inputs` is the input of the layer, and `train` is a boolean variable which shows whether compute is called during training or not.
\paragraph{regularize}
This method is called by the `\_regularizer` method. The regularize method is the core of the computation of the layer's regularization; with this design, each layer can have a different regularization. If regularization is not needed in a layer, such as Dropout or Flatten, return zero. The implementation should be done carefully. Without a regularize method, the layer cannot be called by the NN structure. The return value should be a tensor, and there are no input arguments because all regularization is done on the trainable parameters of the layer.
\paragraph{}
The base Layer class also has other methods. If the layer has only weights and biases as trainable variables, the `\_set\_initializer` and `\_get\_inits` methods are useful. `\_set\_initializer` sets the initializer methods for the layer parameters; if one of the layer's parameters has an initializer, this function should be called at the end of the layer's `\_\_init\_\_` method. This method also has a 'bias initializer' which can initialize the bias separately. The `\_get\_inits` method is called after the initializer is set. The method has 2 arguments which pass the shapes of the parameters, and it returns the initialized W and B, respectively.
\paragraph{}
Also, the base Layer class has a `zero\_grad` method which zeroes the trainable parameters' grad values. For the trainable parameters, the base Layer class has a getter and a setter, which are used a lot in the optimizer module.
\paragraph{\_\_init\_\_}
The init method of the base Layer class has several variables, such as the activation function caller `\_actFuncCaller` and the trainable variables list `\_trainable`. The activation function caller is used when activation functions need to be called. The trainable list is used a lot: trainable parameters of a layer should be appended to the `\_trainable` list because these parameters will be updated during the optimization step. Therefore, the base layer has a getter and a setter for the trainable parameters to get and set them easily. To understand the general structure, the Dense layer implementation is given in \ref{lis:dense-layer-imp}.
\begin{lstlisting}[language=Python, numbers=none, caption={Dense Layer implementation.}, label={lis:dense-layer-imp}]
class Dense(Layer):
    def __init__(self,
                 neuron_number = None,
                 activation_function = None,
                 initialize_method = 'xavier_uniform',
                 bias_initializer = 'zeros_init',
                 kernel_regularizer = None,
                 bias_regularizer = None,
                 **kwargs):
        super(Dense, self).__init__(**kwargs)
        if activation_function is None:
            activation_function = 'none'
        self._activation = activation_function
        self._neuronNumber = neuron_number
        self._initialize_method = initialize_method
        self._bias_initialize_method = bias_initializer
        self._set_initializer()
        self._kernel_regularizer = kernel_regularizer
        self._bias_regularizer = bias_regularizer

    def __call__(self, params) -> None:
        '''
        Update of some of model parameters and class parameters.
        '''
        # connect layer to this layer
        self._connect_layer(Layer)
        # if activation function is `str` create caller.
        if isinstance(self._activation, str):
            self._actCaller = self._actFuncCaller[self._activation]()
        else:
            self._actCaller = self._activation
        self._init_trainable()

    def _init_trainable(self, params=None):
        '''
        Initialization of dense layer's trainable variables.
        '''
        if self._layerNo == 0:
            row = 1
            col = 1
        else:
            #row = params['layer_output_shape'][self._thisLayer-1]
            row = self._preLayer._layer_output_shape
            if type(row) == tuple:
                #row = params['layer_output_shape'][self._thisLayer-1][0]
                row = self._preLayer._layer_output_shape[0]
            #col = params['layer_output_shape'][self._thisLayer]
            col = self._layer_output_shape
            if type(col) == tuple:
                #col = params['layer_output_shape'][self._thisLayer][0]
                col = self._layer_output_shape[0]
        _w_shape = (row, col)
        _b_shape = [col]
        #params['#parameters'].append(row*col+col)
        self._numOfParams = row * col + col
        # get initialized values of weight and biases
        _w, _b = self._get_inits(_w_shape, _b_shape)
        # append weight and biases into trainable as `tensor`.
        self._trainable.append(T.Tensor(_w.astype(np.float32), have_grad=True))
        self._trainable.append(T.Tensor(_b.astype(np.float32), have_grad=True))

    def compute(self, inputs: T.Tensor, train: bool, **kwargs) -> T.Tensor:
        '''
        Computation of dense layer.
        '''
        _z_layer = inputs @ self._trainable[0] + self._trainable[1]
        return self._actCaller.activate(_z_layer)

    def regularize(self) -> T.Tensor:
        """
        Regularization of layer.
        """
        _res = T.Tensor(0.)
        if self._kernel_regularizer:
            _res += self._kernel_regularizer.compute(self._trainable[0])
        if self._bias_regularizer:
            _res += self._bias_regularizer.compute(self._trainable[1])
        return _res
\end{lstlisting}
\section{Dense Layer}
\paragraph{}
The Dense layer is one of the basic and important layers of DL. It forms the MLP structure. It has two trainable parameters, weights and biases, which change during training.
\paragraph{}
The Dense layer's input should be 2D, with the first dimension being the batch size. If the previous layer's output is not 2D, a Flatten layer should be used. A Dense layer cannot be used as the first layer of a model; even if the data is 1D, a Flatten layer should be added first, because the Dense layer needs the previous layer's output shape.
\paragraph{}
The Dense layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Dense : `activation function`' of layer,
\item `activation` as `activation function` of layer,
\item `model\_neuron` as `neuron\_number` of layer,
\item `layer\_output\_shape` as `neuron\_number` of layer,
\item `\#parameters` as $\#w_{row}*\#w_{col} + \#w_{col}$ where w is weight of layer.
\end{itemize}
The Dense layer's `model\_neuron` and `layer\_output\_shape` are the same because the output of the Dense layer is 1D (without the batch dimension). In some other layers, these parameters are different.
\section{Flatten Layer}
\paragraph{}
The Flatten layer flattens the input data to make it 1D. The Flatten layer calls the flatten operation, which is implemented as a tensor operation because its gradient needs to be implemented. For example, when a model is created with a convolution layer, a Flatten layer should be added to the model before a Dense layer can be added. To calculate the gradients of the convolution layer's trainable parameters, the gradient of the flatten operation has to be calculated as well, because of the reverse-AD structure.
\paragraph{}
The flatten operation has a second input argument, `batching`. If the flatten operation is called during training with batches, the flattening should be done per sample of the batch; this input argument handles it. If `batching=False`, the flatten operation flattens all data into 1D. If `batching=True`, the flatten operation flattens the data per sample of the batch; therefore, the flattened data will be 2D. In the layer, `batching=True` is used.
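The plain-numpy sketch below illustrates the difference between the two modes conceptually; gNet implements the operation on tensors so that gradients can flow, which plain numpy does not show.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of flatten with and without batching.}, label={lis:flatten-batching-sketch}]
import numpy as np

x = np.arange(2 * 3 * 4 * 4).reshape(2, 3, 4, 4)   # (batch, C, H, W)

flat_all   = x.reshape(-1)              # batching=False: everything becomes 1D, shape (96,)
flat_batch = x.reshape(x.shape[0], -1)  # batching=True : flatten per sample, shape (2, 48)

print(flat_all.shape, flat_batch.shape)
\end{lstlisting}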
\paragraph{}
The Flatten layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'flatten',
\item `activation` as 'none',
\item `model\_neuron` as output dimension of layer (without batch dimension),
\item `layer\_output\_shape` as output dimension of layer (without batch dimension),
\item `\#parameters` as '0'.
\end{itemize}
\section{Activation Layer}
\paragraph{}
The Activation layer activates the previous layer's output. It just applies the activation function to the `inputs` of the compute method. In the Activation layer there is no regularization, as in the Flatten layer.
\paragraph{}
The Activation layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Activation Layer : `activation of layer`',
\item `activation` as 'activation of layer',
\item `model\_neuron` as 'previous layer neuron number',
\item `layer\_output\_shape` as 'previous layer output shape',
\item `\#parameters` as '0'.
\end{itemize}
\section{Dropout Layer}
\paragraph{}
Dropout is one of the regularization mechanisms. It kills/deactivates neurons temporarily to reduce the possibility of overfitting. The implementation follows the Stanford lecture notes \cite{cs231}. One of the reasons the `compute` method has an input argument called `train` is the Dropout layer, because it acts differently during training and evaluation.
\paragraph{}
During training, the Dropout layer kills some neurons (i.e., multiplies them by 0 at that moment). During evaluation, on the other hand, the layer does not kill any neurons. Therefore, the `\_feedForward` call behaves differently during training, which is controlled by the boolean value called `train`. There is no regularization, as in the Flatten layer.
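The plain-numpy sketch below illustrates the train/evaluation difference in the ``inverted dropout'' style of the cited Stanford notes; whether gNet uses exactly this variant is an assumption, and the sketch is only a conceptual illustration.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of dropout during training and evaluation.}, label={lis:dropout-concept-sketch}]
import numpy as np

def dropout_forward(x, drop_prob, train):
    # Conceptual sketch, not gNet's exact implementation.
    if train:
        keep_prob = 1.0 - drop_prob
        mask = (np.random.rand(*x.shape) < keep_prob) / keep_prob  # kill and rescale
        return x * mask
    return x  # during evaluation no neuron is dropped

x = np.ones((4, 5))
print(dropout_forward(x, drop_prob=0.5, train=True))
print(dropout_forward(x, drop_prob=0.5, train=False))
\end{lstlisting}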
\paragraph{}
The Dropout layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Dropout : `dropout probability of layer`',
\item `activation` as 'none',
\item `model\_neuron` as 'previous layer neuron number',
\item `layer\_output\_shape` as 'previous layer neuron output shape',
\item `\#parameters` as '0'.
\end{itemize}
\section{Batch Normalization Layer}
\paragraph{}
The Batch Normalization (BN) layer has some differences from other layers. First of all, it has two trainable values, yet it has four saved values. The reason is that, as with the Dropout layer, the calculation of BN during training and during evaluation is not the same.
\paragraph{}
During training, the running mean and variance parameters are not used, they are only updated; during evaluation, they are used. Therefore, when the trainable parameters are saved, the running mean and variance are also saved. This is why `save\_model` and `load\_model` have conditions for BN.
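The plain-numpy sketch below illustrates this behaviour conceptually; it is not gNet's exact code, and the momentum value is an assumption.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of BN statistics during training and evaluation.}, label={lis:bn-concept-sketch}]
import numpy as np

def batch_norm_forward(x, gamma, beta, running_mean, running_var,
                       train, momentum=0.9, eps=1e-5):
    # Training uses the batch statistics and only *updates* the running ones;
    # evaluation uses the stored running statistics.
    if train:
        mean, var = x.mean(axis=0), x.var(axis=0)
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var  = momentum * running_var  + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta, running_mean, running_var

x = np.random.randn(8, 3)
out, rm, rv = batch_norm_forward(x, np.ones(3), np.zeros(3),
                                 np.zeros(3), np.ones(3), train=True)
\end{lstlisting}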
\paragraph{}
The two trainable parameters and the two running parameters each have their own initializer method; thus, the `\_init\_trainable` method has four initializer callers. Also, the previous layer affects the dimensions of these four parameters; a condition handles this.
\paragraph{}
The Batch Normalization layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Batch Normalization',
\item `activation` as 'none',
\item `model\_neuron` as 'previous layer neuron number',
\item `layer\_output\_shape` as 'previous layer neuron output shape',
\item `\#parameters` as a value that depends on which parameters are used.
\end{itemize}
\section{Conv1D Layer}
\paragraph{}
The Conv1D layer is the convolution of 1D data. The implementation is done by \cite{MGK}. The im2col approach is used in gNet; therefore, gNet uses the channel-first convention, which is different from Tensorflow. The compute method flattens the local patches of the input and the kernels and stores them as a 2D array; the whole convolution is then calculated with a dot product, and the result is reshaped to the proper size.
\paragraph{}
The Conv1D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Conv1D',
\item `activation` as 'none',
\item `model\_neuron` as '\#filter',
\item `layer\_output\_shape` as '(filter, Channel, Width)',
\item `\#parameters` as '\#filter * kernel size + \#filter'.
\end{itemize}
\section{Conv2D Layer}
\paragraph{}
The Conv2D layer is the convolution of 2D data. The implementation follows the Stanford lecture notes \cite{cs231}. The im2col approach is used in gNet; therefore, gNet uses the channel-first convention, which is different from Tensorflow. The compute method flattens the local patches of the input and the kernels and stores them as a 2D array; the whole convolution is then calculated with a dot product, and the result is reshaped to the proper size.
\paragraph{}
The Conv2D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Conv2D',
\item `activation` as 'none',
\item `model\_neuron` as '\#filter',
\item `layer\_output\_shape` as '(filter, H\_out, W\_out)',
\item `\#parameters` as '\#filter * H\_kernel * W\_kernel + \#filter'.
\end{itemize}
\section{Conv3D Layer}
\paragraph{}
The Conv3D layer is the convolution of 3D data. The implementation is done by \cite{MGK}. The im2col approach is used in gNet; therefore, gNet uses the channel-first convention, which is different from Tensorflow. The compute method flattens the local patches of the input and the kernels and stores them as a 2D array; the whole convolution is then calculated with a dot product, and the result is reshaped to the proper size.
\paragraph{}
The Conv3D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'Conv3D',
\item `activation` as 'none',
\item `model\_neuron` as '\#filter',
\item `layer\_output\_shape` as '(filter, Channel, Depth, Height , Width)',
\item `\#parameters` as '\#filter * kernel\_D * kernel\_H * kernel\_W * Channel size + \#filter'.
\end{itemize}
\section{MaxPool1D Layer}
\paragraph{}
The MaxPool1D layer is the maximum pooling of 1D data. The implementation is done by \cite{MGK}. The compute method flattens the local patches of the input, takes their maximum values, and stores the values and their indexes; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The MaxPool1D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'MaxPool1D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{MaxPool2D Layer}
\paragraph{}
The MaxPool2D layer is the maximum pooling of 2D data. The implementation follows the Stanford lecture notes \cite{cs231}. The compute method flattens the local patches of the input, takes their maximum values, and stores the values and their indexes; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The MaxPool2D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'MaxPool2D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, H\_out, W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{MaxPool3D Layer}
\paragraph{}
The MaxPool3D layer is the maximum pooling of 3D data. The implementation is done by \cite{MGK}. The compute method flattens the local patches of the input, takes their maximum values, and stores the values and their indexes; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The MaxPool3D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'MaxPool3D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, D\_out, H\_out W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{AveragePool1D Layer}
\paragraph{}
The AveragePool1D layer is the average pooling of 1D data. The implementation is done by \cite{MGK}. The compute method flattens the local patches of the input and takes their average values; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The AveragePool1D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'AveragePool1D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{AveragePool2D Layer}
\paragraph{}
The AveragePool2D layer is the average pooling of 2D data. The implementation follows the Stanford lecture notes \cite{cs231}. The compute method flattens the local patches of the input and takes their average values; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The AveragePool2D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'AveragePool2D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, H\_out, W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{AveragePool3D Layer}
\paragraph{}
The AveragePool3D layer is the average pooling of 3D data. The implementation is done by \cite{MGK}. The compute method flattens the local patches of the input and takes their average values; the output is then created as a properly shaped tensor. There is no regularization, as in the Flatten layer.
\paragraph{}
The AveragePool3D layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'AveragePool3D',
\item `activation` as 'none',
\item `model\_neuron` as '0',
\item `layer\_output\_shape` as '(C, D\_out, H\_out W\_out)',
\item `\#parameters` as '0'.
\end{itemize}
\section{SimpleRNN Layer}
\paragraph{}
The SimpleRNN layer is the basic RNN implementation. The implementation is done by \cite{MGK}, based on the Keras \cite{Keras} implementation.
\paragraph{}
The SimpleRNN layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'SimpleRNN' ,
\item `activation` as 'tanh',
\item `model\_neuron` as \#cell,
\item `layer\_output\_shape` as '(sequence\_length, \#cell)',
\item `\#parameters` as '\#cell * ( \#cell + \#input\_size + 1)'.
\end{itemize}
\section{LSTM Layer}
\paragraph{}
The LSTM layer is one of the RNN implementations. The implementation is done by \cite{MGK}, based on the Keras \cite{Keras} implementation.
\paragraph{}
The LSTM layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'LSTM',
\item `activation` as 'tanh' + 'sigmoid' ,
\item `model\_neuron` as \#cell,
\item `layer\_output\_shape` as '(sequence\_length, \#cell)',
\item `\#parameters` as '4 * \#cell * ( \#cell + \#input\_size + 1)'.
\end{itemize}
\section{GRU Layer}
\paragraph{}
The GRU layer is one of the RNN implementations. The implementation is done by \cite{MGK}, based on the Keras \cite{Keras} implementation.
\paragraph{}
The GRU layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'GRU',
\item `activation` as 'tanh' + 'sigmoid' ,
\item `model\_neuron` as \#cell,
\item `layer\_output\_shape` as '(sequence\_length, \#cell)',
\item `\#parameters` as '3 * \#cell * ( \#cell + \#input\_size + 1)'.
\end{itemize}
\section{TimeDistributed Layer}
\paragraph{}
The TimeDistributed (TD) layer is one of the RNN-related layers; it applies the same layer along the time dimension. The implementation is done by \cite{MGK}, based on the Keras \cite{Keras} implementation.
\paragraph{}
The TD layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'TimeDistributed',
\item `activation` as 'added layer's activation' ,
\item `model\_neuron` as 'added layer's model\_neuron',
\item `layer\_output\_shape` as '(time, added layer's output)',
\item `\#parameters` as 'added layers's parameters'.
\end{itemize}
\section{RepeatVector Layer}
\paragraph{}
The RepeatVector layer is one of the RNN-related layers; it repeats the previous layer's output along the time dimension. The implementation is done by \cite{MGK}, based on the Keras \cite{Keras} implementation.
\paragraph{}
The RepeatVector layer appends to the model's parameters as follows:
\begin{itemize}
\item `layer\_name' as 'RepeatVector',
\item `activation` as 'none' ,
\item `model\_neuron` as 'repeat\_number',
\item `layer\_output\_shape` as '(repeat\_number, added layer's output)',
\item `\#parameters` as '0'.
\end{itemize}
\chapter{loss\_functions module}
\label{ch:loss}
\paragraph{}
The loss\_functions module contains built-in loss function classes. These classes are based on the base class called `Loss`. The built-in loss function classes, with their calling strings, are:
\begin{itemize}
\item Categorical Cross Entropy ('categoricalcrossentropy', 'cce')
\item Binary Cross Entropy ('binarycrossentropy', 'bce')
\item Mean Square Error ('meansquareerror', 'mse')
\end{itemize}
Calling strings are used by callers. If the user wants to use a built-in loss function, it is enough to set the proper input argument with its calling string in the setup method.
\paragraph{}
If the user wants to use a built-in loss function with different parameters, the user needs to create the built-in class with custom parameters and then pass the created instance. \ref{lis:loss-built-in-custom-params} shows a loss function with logits. ``From logits'' means that, before the loss is calculated, the model output needs to be normalized with a proper method such as softmax. If the last layer is not a softmax layer, `from\_logits=True` applies the softmax to it; if the last layer is not softmax and `from\_logits=False`, the output is normalized without softmax. The normalization depends on the loss function: CCE uses softmax and BCE uses sigmoid as its logits normalization. A conceptual numpy sketch of this idea is given after \ref{lis:loss-built-in-custom-params}.
\begin{lstlisting}[language=Python, numbers=none, caption={Built-in loss function with custom parameters.}, label={lis:loss-built-in-custom-params}]
from gNet import loss_functions
...
loss = loss_functions.CategoricalCrossEntropy(from_logits=True)
net = NeuralNetwork(...)
...
net.setup(loss_function=loss, optimizer='adam')
...
\end{lstlisting}
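The following plain-numpy sketch illustrates the ``from logits'' idea conceptually; it is not gNet's implementation, and the softmax and categorical cross entropy below are the standard textbook forms.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of the `from logits` normalization.}, label={lis:from-logits-sketch}]
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cce(y_true, y_out, from_logits):
    # If the network output is raw ("logits"), normalize it with softmax first.
    p = softmax(y_out) if from_logits else y_out
    return -np.mean(np.sum(y_true * np.log(p + 1e-12), axis=-1))

y_true = np.array([[0., 1., 0.]])
logits = np.array([[1.0, 3.0, 0.5]])
print(cce(y_true, logits, from_logits=True))
\end{lstlisting}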
\paragraph{}
The loss\_functions module has a declarator dictionary (called a `caller` in other modules) named `\_\_lossFunctionsDecleration`. It stores the built-in class addresses with their calling strings. For example, Categorical Cross Entropy is stored as `'categoricalcrossentropy' : CategoricalCrossEntropy` and `'cce' : CategoricalCrossEntropy`.
\section{Creating Custom Loss Class}
\paragraph{}
gNet supports creating a custom loss function class. First of all, \textbf{the class should inherit from the Loss base class and should have `loss` and `get\_metric` methods}. Without these, the class cannot be called by gNet.
\paragraph{}
The calculation of the loss function is implemented in the `loss` method. The `loss` method should have `y\_true`, `y\_pred` and `output\_layer` as input arguments. `y\_true` is the true value of the label (which is the y input during training). `y\_pred` is the prediction of the model, calculated by the `\_feedForward` method. `output\_layer` is the output layer of the model. The `loss` method should return a one-value tensor, i.e., the shape of the tensor should be 1x1.
\paragraph{}
The `get\_metric` method has no input argument. It should return the proper metric class from the `metric` module. It is needed for the accuracy calculation and is explained in Chapter \ref{ch:metric}.
\paragraph{}
To give an example of a custom loss function class, let's create a mean square error loss.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom loss function class.}, label={lis:loss-custom-class}]
import gNet.metric as mt
from gNet import loss_functions

class myMSE(loss_functions.Loss):
    def loss(self, y_true, y_pred, output_layer):
        y_true = T.make_tensor(y_true)
        y_pred = T.make_tensor(y_pred)
        error = (y_pred - y_true) ** 2
        return T.mean(T.tensor_sum(error, axis=-1), axis=-1)

    def get_metric(self) -> mt.Metric:
        return mt.CategoricalAccuracy()
\end{lstlisting}
In \ref{lis:loss-custom-class}, a custom loss function class is created which implements mean square error.
\paragraph{}
To call `myMSE` from gNet, there are two ways to do it. The first way is to pass the class into the input argument directly. The second way is to register the custom class by calling the module's `REGISTER\_LOSS\_FUNCTION` method and then call it by its calling string. Code \ref{lis:loss-calling-custom-class} shows both ways of calling a custom loss function class.
\begin{lstlisting}[language=Python, numbers=none, caption={Calling custom loss function class.}, label={lis:loss-calling-custom-class}]
# FIRST WAY
...
net = NeuralNetwork(...)
...
net.setup(loss_function=myMSE, optimizer='adam')
...
# SECOND WAY
...
loss_functions.REGISTER_LOSS_FUNCTION(myMSE, 'mymse')
...
net = NeuralNetwork(...)
...
net.setup(loss_function='mymse', optimizer='adam')
...
\end{lstlisting}
\textbf{An important note for the second way: the calling string of the custom class should be all lower-case letters.}
\chapter{activation\_functions module}
\paragraph{}
The activation\_functions module contains built-in activation function classes. These classes are based on the base class called `ActivationFunction`. The built-in activation function classes, with their calling strings, are:
\begin{itemize}
\item Rectified Linear Unit Function, Relu ('relu')
\item Leaky Rectified Linear Unit Function, LRelu ('lrelu')
\item Sigmoid, ('sigmoid')
\item Softmax, ('softmax')
\item Softplus ('softplus')
\item Tanh, ('tanh')
\end{itemize}
Calling strings are used by callers. If the user wants to use a built-in activation function, it is enough to set the proper input argument with its calling string in a layer such as the Dense layer.
\begin{lstlisting}[language=Python, numbers=none, caption={Built-in activation function.},
label={lis:activation-built-in}]
...
net = NeuralNetwork(...)
...
net.add(Dense(100, activation_function='relu'))
...
\end{lstlisting}
\paragraph{}
The activation\_functions module has a declarator dictionary (called a `caller` in other modules) named `\_\_activationFunctionsDecleration`. It stores the built-in class addresses with their calling strings. For example, ReLU is stored as `'relu' : Relu`.
\section{Creating Custom ActivationFunction Class}
\paragraph{}
gNet supports creating a custom activation function class. First of all, \textbf{the class should inherit from the ActivationFunction base class and should have a \textit{static} `activate` method}. Without these, the class cannot be called by gNet.
\paragraph{}
An important note about a custom activation function is the \textbf{static} `activate` method. The reason for `@staticmethod` is that activation function objects are not instantiated in gNet; the class exists to make the code more structured, yet the `activate` method can be called directly on the class.
\paragraph{}
The calculation of the activation function is implemented in the `activate` method. The `activate` method should have `x` as its input argument; `x` is a tensor, and the return value of `activate` is also a tensor.
\paragraph{}
To give an example of a custom activation function class, let's create ReLU and Sigmoid activation functions.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom activation function classes.}, label={lis:activation-function-custom-class}]
class myRelu(ActivationFunction):
    @staticmethod
    def activate(x):
        x = T.where(x, x.value > 0, x, 0)
        return x

class mySigmoid(ActivationFunction):
    @staticmethod
    def activate(x):
        return 1.0 / (1.0 + T.exp(-x))
\end{lstlisting}
\paragraph{}
To call `myRelu` or `mySigmoid` from layers, there are two ways to do it. The first way is to pass the class into the input argument directly. The second way is to add it into `\_\_activationFunctionsDecleration` and then call it by its calling string. \ref{lis:activation-calling-custom-class} shows both ways of calling a custom activation function class.
\begin{lstlisting}[language=Python, numbers=none, caption={Calling custom activation function class.}, label={lis:activation-calling-custom-class}]
from gNet import activation_functions
# FIRST WAY
...
net = NeuralNetwork(...)
...
net.add(Dense(100, activation_function= myRelu))
...
# SECOND WAY
...
activation_functions.__activationFunctionsDecleration['mysigmoid'] = mySigmoid
...
net = NeuralNetwork(...)
...
net.add(Dense(100, activation_function= 'mysigmoid'))
...
\end{lstlisting}
\textbf{An important note for the second way: the calling string of the custom class should be all lower-case letters.}
\chapter{metric module}
\label{ch:metric}
\paragraph{}
The metric module contains built-in metric classes. These classes are based on the base class called `Metric`. The built-in metric classes are:
\begin{itemize}
\item Categorical Accuracy
\item Binary Accuracy
\end{itemize}
Metric classes are not called with calling strings. In gNet, the metric class is created in the loss function class's `get\_metric` method; therefore, calling strings are not needed.
\section{Creating Custom Metric Class}
\paragraph{}
gNet supports creating a custom metric class. First of all, \textbf{the class should inherit from the Metric base class and should have an `accuracy` method}. Without this, the class cannot be called by gNet.
\paragraph{}
The calculation of the accuracy is implemented in the `accuracy` method. The `accuracy` method should have `y\_true` and `y\_pred` as input arguments; these are the true and predicted labels, respectively.
\paragraph{}
An important note about a custom metric is the `\_count` and `\_total` variables. These are integers used to calculate the accuracy, and they are created in the base Metric class. When `y\_true` and `y\_pred` satisfy the intended condition, `\_count` should be increased by the number of satisfying samples, and `\_total` is increased by the batch size. Also, the base Metric class has a reset function which sets `\_count` and `\_total` to 0; this method is called when a new epoch is started.
\paragraph{}
To give an example of a custom metric class, let's create a binary accuracy metric which has a threshold.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom metric classes.}, label={lis:metric-custom-class}]
import numpy as np

from gNet.metric import Metric

class myBinaryAccuracy(Metric):
    def __init__(self, threshold=0.5) -> None:
        # initialize `_count` and `_total` of the base Metric class
        super(myBinaryAccuracy, self).__init__()
        self._threshold = threshold

    def accuracy(self, y_true, y_pred) -> float:
        # set values which are bigger than threshold to 1.
        argmax_pred = np.where(y_pred.value > self._threshold, 1., 0.)
        # find maximum values indexes
        argmax_true = np.argmax(y_true.value, axis=-1).reshape(-1,1)
        argmax_pred = np.argmax(argmax_pred, axis=-1).reshape(-1,1)
        # check whether max indexes are equal.
        # if equal, add to count
        self._count += np.equal(argmax_true, argmax_pred).sum()
        # add how many items were validated
        self._total += argmax_pred.shape[0]
        return self._count / self._total
\end{lstlisting}
\paragraph{}
To call `myBinaryAccuracy`, a custom loss function class should be created, or the `get\_metric` method of a built-in loss function class should be changed. Let's change our `myMSE` class's metric to `myBinaryAccuracy` with a threshold of 0.7.
\begin{lstlisting}[language=Python, numbers=none, caption={Calling custom metric class.}, label={lis:metric-binary-custom-class}]
class myMSE(loss_functions.Loss):
    ...
    def get_metric(self) -> mt.Metric:
        return myBinaryAccuracy(threshold=0.7)
    ...
\end{lstlisting}
\chapter{optimizer module}
\label{ch:optimizer}
\paragraph{}
The optimizer module contains built-in optimizer classes. These classes are based on the base class called `Optimizer`. The built-in optimizer classes, with their calling strings, are:
\begin{itemize}
\item SGD ('sgd')
\item Adagrad ('adagrad')
\item RMSprop ('rmsprop')
\item AdaDelta ('adadelta')
\item Adam ('adam')
\end{itemize}
Calling strings are used by callers. If the user wants to use a built-in optimizer, it is enough to set the proper input argument with its calling string in the setup method.
\paragraph{}
If the user wants to use a built-in optimizer with different parameters, the user needs to create the built-in class with custom parameters and then pass the created instance. \ref{lis:optimizer-built-in-custom-params} shows the `Adam` optimizer with a custom learning rate, where the default learning rate equals 0.001.
\begin{lstlisting}[language=Python, numbers=none, caption={Built-in optimizer with custom parameters.}, label={lis:optimizer-built-in-custom-params}]
from gNet import optimizer
...
opt = optimizer.Adam(lr=0.0001)
net = NeuralNetwork(...)
...
net.setup(loss_function='cce', optimizer=opt)
...
\end{lstlisting}
\paragraph{}
The optimizer module has a declarator dictionary (called a `caller` in other modules) named `\_\_optimizerDecleration`. It stores the built-in class addresses with their calling strings. For example, the `Adam` optimizer is stored as `'adam': Adam`.
\section{Creating Custom Optimizer Class}
\paragraph{}
gNet supports creating a custom optimizer class. First of all, \textbf{the class should inherit from the Optimizer base class and should have a `step` method}. Without these, the class cannot be called by gNet.
\paragraph{}
The calculation of the optimization step is implemented in the `step` method. The `step` method should have `layers` as an input argument; `layers` is the list of layers of the model, which can be obtained by the model's `get\_layers` method. The `step` method does not return anything because it optimizes the trainable values of the layers in place.
\paragraph{}
There are two points to note when implementing the `step` method. Firstly, each layer has its own trainable parameters; thus, these parameters are updated separately. To do that, there are two for loops: the first loops over the layers and the second loops over the trainable parameters of that layer. Secondly, if there is a variable which is not trainable but is updated during training, it should be stored as a list of arrays, one entry per layer. To initialize such a parameter, the `step` method should have an `init` condition.
\paragraph{}
To give an example, let's create a custom Adam optimizer class.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom optimizer class.}, label={lis:optimizer-custom-class}]
import numpy as np

from gNet import optimizer

class myAdam(optimizer.Optimizer):
    def __init__(self) -> None:
        self.lr = 0.0001
        self.beta1 = 0.9
        self.beta2 = 0.999
        self.eps = 1e-7  # small value to get rid of division-by-zero error.
        self.t = 1
        self.init = True
        self.m = []
        self.v = []
        self.mhat = []
        self.vhat = []

    def step(self, layers) -> None:
        # on the first call of step, initialize parameters w.r.t layer size and trainable variable size.
        if self.init:
            for layer in layers:
                self.m.append(np.zeros_like(layer.trainable))
                self.v.append(np.zeros_like(layer.trainable))
                self.mhat.append(np.zeros_like(layer.trainable))
                self.vhat.append(np.zeros_like(layer.trainable))
            self.init = False
        # loop over layers
        for ind, layer in enumerate(layers):
            # loop over trainables
            for ind_tra, trainable in enumerate(layer.trainable):
                # update momentum
                self.m[ind][ind_tra] = self.beta1 * self.m[ind][ind_tra] + (1 - self.beta1) * trainable.grad.value
                # update velocity
                self.v[ind][ind_tra] = self.beta2 * self.v[ind][ind_tra] + (1 - self.beta2) * (trainable.grad.value ** 2)
                # update momentum hat
                self.mhat[ind][ind_tra] = self.m[ind][ind_tra] / (1 - self.beta1 ** self.t)
                # update velocity hat
                self.vhat[ind][ind_tra] = self.v[ind][ind_tra] / (1 - self.beta2 ** self.t)
                # update weights and biases
                trainable.value -= self.lr * self.mhat[ind][ind_tra] / (np.sqrt(self.vhat[ind][ind_tra]) + self.eps)
        self.t += 1
\end{lstlisting}
In \ref{lis:optimizer-custom-class}, a custom Adam optimizer class is shown. There is a `self.init` condition to initialize some optimization parameters. The reason behind this structure is the dynamic structure of gNet: the layers can change dynamically, so initializing these optimization parameters in the `\_\_init\_\_` method would not fit the layers. To handle this, there is a boolean value called `self.init`. It is set to True in the `\_\_init\_\_` method; when the step function is called for the first time, it is set to False. With this approach, each layer's trainable parameters have their own optimization parameters.
\paragraph{}
To call `myAdam` from gNet, there are two ways to do it. The first way is to pass the class into the input argument directly. The second way is to register the custom class by calling the module's `REGISTER\_OPTIMIZER` method and then call it by its calling string. \ref{lis:optimizer-calling-custom-class} shows both ways of calling a custom optimizer class.
\begin{lstlisting}[language=Python, numbers=none, caption={Calling custom optimizer class.}, label={lis:optimizer-calling-custom-class}]
# FIRST WAY
...
net = NeuralNetwork(...)
...
net.setup(loss_function='cce', optimizer=myAdam)
# SECOND WAY
...
optimizer.REGISTER_OPTIMIZER(myAdam, 'myadam')
...
net = NeuralNetwork(...)
...
net.setup(loss_function='cce', optimizer='myadam')
...
\end{lstlisting}
\textbf{An important note for the second way: the calling string of the custom class should be all lower-case letters.}
\chapter{regularizer module}
\label{ch:regularizer}
\paragraph{}
The regularizer module contains built-in regularizer classes. These classes are based on the base class called `Regularizer`. The built-in regularizer classes are:
\begin{itemize}
\item L1
\item L2
\item L1L2 (Containing two of them at the same time)
\end{itemize}
If the user wants to use a built-in regularizer, it is enough to pass it as the proper input argument of the relevant layer.
\paragraph{}
If the user wants to use a built-in regularizer with different parameters, the user needs to create the built-in class with custom parameters and then pass the created instance. \ref{lis:regularizer-built-in-custom-params} shows the `L2` regularizer with a custom $\lambda$ value.
\begin{lstlisting}[language=Python, numbers=none, caption={Built-in L2 regularizer with custom parameters.}, label={lis:regularizer-built-in-custom-params}]
from gNet import regularizer
...
regu2 = regularizer.L2(Lmb=0.001)
net = NeuralNetwork(...)
...
net.add(Dense(100, 'relu', kernel_regularizer=regu2))
...
# or
...
net = NeuralNetwork(...)
...
net.add(Dense(100, 'relu', kernel_regularizer=regularizer.L2(Lmb=0.001)))
...
\end{lstlisting}
\paragraph{}
The regularizer module does not have a declarator dictionary.
\section{Creating Custom Regularizer Class}
\paragraph{}
gNet supports creating a custom regularizer class. First of all, \textbf{the class should inherit from the Regularizer base class and should have a `compute` method, like a layer}. Without these, the class cannot be called by gNet.
\paragraph{}
The calculation of the regularization is implemented in the `compute` method. The `compute` method should have `parameters` as its input argument; `parameters` is a tensor. Thus, the regularization can be calculated for any tensor of gNet.
\paragraph{}
To give an example, let's create a custom L2 regularizer class.
\begin{lstlisting}[language=Python, numbers=none, caption={Custom regularizer class.}, label={lis:regularizer-custom-class}]
class myL2(Regularizer):
    def __init__(self):
        self._lmb = 0.001

    def compute(self, parameter: 'Tensor') -> 'Tensor':
        self._regularizer = self._lmb * T.power(T.make_tensor(parameter), 2).sum()
        return self._regularizer
\end{lstlisting}
In \ref{lis:regularizer-custom-class}, a custom L2 regularizer class is created whose $\lambda$ value is 0.001.
\paragraph{}
To call `myL2` from gNet, there is only one way to do it: pass an instance of the class into the input argument directly, as in
\ref{lis:regularizer-built-in-custom-params}.
\paragraph{}
Also note that gNet has layer-wise regularizers: every layer can have its own regularization if the user wants it. Therefore, different combinations of regularization are possible.
\chapter{conv\_utils module and utils module}
\paragraph{}
These modules contain some utility functions of gNet. conv\_utils has utility functions for the convolution operations.
\paragraph{}
utils has the `make\_one\_hot` function, which converts sparse label values into one-hot vector labels. Also, utils has `MNIST\_Downloader`, which helps to download and load the MNIST data \cite{MNIST}.
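The following plain-numpy sketch shows conceptually what a one-hot conversion does; the signature of gNet's own `make\_one\_hot` may differ, so this is only an illustration.
\begin{lstlisting}[language=Python, numbers=none, caption={Conceptual sketch of a one-hot conversion.}, label={lis:one-hot-sketch}]
import numpy as np

def one_hot(labels, num_classes):
    # Conceptual illustration, not necessarily gNet's `make_one_hot` signature.
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot(np.array([0, 2, 1]), num_classes=3))
\end{lstlisting}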
\paragraph{}
The conv\_utils module has helper functions to find the indexes of the convolution operations. For each of the convolution layers Conv1D, Conv2D and Conv3D, the indexes are found differently, yet with the same approach. Therefore, this module has 3 subfunctions.
\bibliography{mgk}
\end{document}
\chapter{Standardization}
\section{Problem Statement}
Standardization is the process of converting instrumental values to physical values. In a CCD, what is being read is the potential ($ \propto $ number of electrons $ \propto $ flux). But what does a CCD pixel value mean? $ 1 $ ADU can mean $ \SI{1}{Jy} $ at one CCD, but at a different one it can mean $ \SI{5}{Jy} $ because, for some reason, it is designed to be less sensitive to photons. Thus, what astronomers do is
\begin{enumerate}
\item Make a list of objects which have known flux (e.g., star A has spectrum of blahblah, and it has $ V_\mathrm{std} $ magnitude or flux $ I_\mathrm{std} $ in the V-band). These stars are called \textbf{standard stars}.
\item Observe the target and the standard stars simultaneously. If they cannot be in the same field of view, observe them on the same night, when the airmasses are not too different and the weather has not changed much.
\end{enumerate}
Now the power of the CCD comes in: it is highly linear, i.e., the pixel counts $ N $ (of the target of interest) and $ N_\mathrm{std} $ are very much proportional to the original fluxes, $ I $ and $ I_\mathrm{std} $. Thus, you can use Pogson's formula, because it requires only the ratio of the fluxes:
\begin{equation}\label{eq: Pogson}
V - V_\mathrm{std}
= -2.5 \lg \frac{I}{I_\mathrm{std}}
= -2.5 \lg \frac{N}{N_\mathrm{std}} ~.
\end{equation}
\begin{ex}[Simplest Standardization]
If the aperture photometry gave pixel count of $ 1000 $ for a standard star of $ V_\mathrm{std} = \m{10.00} $ and the object had pixel count of $ 500 $, the above formula will give $ V = \m{10.75} $.
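Explicitly, \cref{eq: Pogson} gives
\begin{equation*}
V = V_\mathrm{std} - 2.5 \lg \frac{N}{N_\mathrm{std}}
= \m{10.00} - 2.5 \lg \frac{500}{1000}
\approx \m{10.00} + \m{0.75}
= \m{10.75} ~.
\end{equation*}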
\end{ex}
In practice\footnote{I extensively referred to Ch. 6 of ``A Practical Guide to Lightcurve Photometry and Analysis'' by Brian D. Warner, 2e in this subsection.}, especially when stars with catalogued magnitudes are not in the same field of view as the target of interest, it is very difficult to use these formulae. In such a case, we have two FITS images to compare: one for the target and one for the standard star (usually far away from the target). A direct comparison of $ N $ and $ N_\mathrm{std} $ is difficult because
\begin{enumerate}
\item The atmosphere exists. The magnitude we observe on the ground is different from the one we would have observed outside of the atmosphere (in space). This gives the $ k' $ and $ k'' $ terms in \cref{eq: std} below.
\item The CCD is not ideally simple. For example, it may be more sensitive to redder wavelengths, making red stars appear brighter than they should be. This gives the $ k $ term in \cref{eq: std} below.
\end{enumerate}
If all these are considered with proper approximations (derived later in this chapter), we can obtain the following second-order approximation of the standard magnitude of an object seen on CCD:
\begin{equation}\label{eq: std}
\begin{aligned}
M_f &= m_f + (\mathrm{effect\ of\ atmosphere}) + (\mathrm{effect\ of\ CCD}) \\
&\approx m_f - k_f' X - k_f''XC + z_f + k_f C \\
&\equiv m_{0f} + z_f + k_f C ~,
\end{aligned}
\end{equation}
where
\begin{equation}
m_{f} \equiv m_{0f} + k_f'X + k_f'' XC
\end{equation}
and
\begin{itemize}
\item $ f $: The filter (V, B, g', etc).
\item $ X $: airmass (the simplest approximation is the secant of zenith angle, $ \sec Z $).
\item $ M_f $: The \emph{standard} apparent magnitude (or the \emph{true} apparent magnitude) at filter $ f $.
\item $ m_f $: The \emph{instrumental} magnitude ($ m_f = -2.5 \lg N $).
\item $ m_{0f} $: The extra-atmospheric magnitude ($ m_f $ value we would have obtained if we were in space $ X = 0 $).
\item $ C $: The \emph{true} color index\footnote{Not necessarily include filter $ f $, but it is better that the wavelength ranges of the selected two filters ``contain'' the range of $ f $ for interpolation purpose. Also in many classical literatures, you will see the $ C $ in this equation is the \textit{observed} color, not the true color. This is just a matter of preference, because anyway it's used as a crude approximation factor, as we will see soon.}, e.g., $ B - V $ or $ r' - i' $.
\item $ k_f' $: The first order extinction coefficient at filter $ f $.
\item $ k_f'' $: The second order extinction coefficient at filter $ f $.
\item $ z_f $: The zero point at filter $ f $.
\item $ k_f $: The system transform coefficient at filter $ f $.
\item Note: lower- and upper-cased letters are used for the \emph{instrumental} and \emph{true} magnitudes, respectively\footnote{For example, $ v $, $ b $, $ m_{g'} $ are instrumental magnitudes of an object and $ V $, $ B $, and $ M_{g'} $ are true appparent magnitudes of it.}.
\end{itemize}
%The example I showed with \cref{eq: Pogson} is the case when $ k_f = k_f' = k_f'' = 0 $, or equivalently $ k_f = 0 $ and $ X = 0 $. In such a case, the true apparent magnitude is $ V = v + z_V = -2.5 \lg N + z_V $ where $ v $ is the instrumental magnitude and $ z_V $ is just a constant\footnote{}. Same goes true for the standard star, so $ V_0 = v_0 + z_V = -2.5 \lg N_0 + Z_V $. Then \cref{eq: Pogson} makes sense and $ V- V_0 = v - v_0 $. If the coefficients are non-zero, you can see that
If the subscript std again means the value for the standard star, from \cref{eq: std}:
\begin{equation}
\begin{aligned}
V - V_\mathrm{std}
&= (v - v_\mathrm{std})
- k_V'(X - X_\mathrm{std})
- k_V''(X C - X_\mathrm{std} C_\mathrm{std})
+ k_V(C - C_\mathrm{std})
+ \Delta z_V\\
&\neq v - v_\mathrm{std} ~.
\end{aligned}
\end{equation}
So the calculation given in the example is true only if the airmass of the object and standard star are identical AND the true color indices of them are identical. Otherwise, we cannot simply equate the right hand side of \cref{eq: Pogson} ($ = v - v_\mathrm{std} $) to the left hand side ($ = V - V_\mathrm{std} $). In space, we can remove all the atmosphere related terms since $ X = 0 $. Also $ \Delta z \approx 0 $ is assumed (discussed in \cref{ss: zeropt}), so only the $ k_f $ term remains. This is why space observation is powerful.
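As a purely illustrative numerical example of \cref{eq: std} (the coefficient values below are assumed round numbers, not measurements), take $ k_V' = 0.2 $, $ k_V'' = -0.03 $, $ k_V = 0.05 $, $ \Delta z_V = 0 $, $ X = 1.2 $, $ X_\mathrm{std} = 1.5 $, $ C = 0.8 $, $ C_\mathrm{std} = 0.5 $, and an instrumental difference $ v - v_\mathrm{std} = \m{0.75} $. Then
\begin{equation*}
V - V_\mathrm{std}
= 0.75 - 0.2\,(1.2 - 1.5) - (-0.03)\,(1.2 \times 0.8 - 1.5 \times 0.5) + 0.05\,(0.8 - 0.5)
\approx \m{0.83} ~,
\end{equation*}
i.e., about $ \m{0.08} $ away from the naive $ v - v_\mathrm{std} $ of \cref{eq: Pogson}.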
\section{Understanding the Standardization Formula}
In this section, I will discuss the terms in \cref{eq: std} with some realistic data and plots. At the same time, I will give a derivation of the equation. Many textbooks give only the former; I do not want to go against that trend, but I also wanted more quantitative explanations and justifications.
\subsection{Atmospheric Extinction}
The atmospheric extinction is dependent on the wavelength as in \cref{fig:air-ext-and-filter}. The extinction is expressed as mag/airmass, i.e., the extincted magnitude when airmass $ X = 1 $, i.e., ``$ m_f - m_{0f} $ at $ X = 1 $'' or $ k_f' + k_f''C $ in the language of \cref{eq: std}. The extinction is severe at shorter wavelengths, and that is why the Sun looks redder when it rises or sets (i.e., when airmass is larger).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{figs/air-ext-and-filter}
\caption{The atmospheric extinction as a function of wavelength at Mauna Kea, \textit{based on some 4285 standard star spectra obtained on 478 nights spread over a period of 7 years obtained by the Nearby SuperNova Factory using the SuperNova Integral Field Spectrograph.} (excerpt from The Nearby SuperNova Factory+ 2013, A\&A, 549, 8). The SDSS and Johnson-Cousins filters' filter profiles are overplotted.}
\label{fig:air-ext-and-filter}
\end{figure}
Consider an object with spectrum $ S_0(\lambda) $, measured in space, observed at an airmass of $ X $. The intensity at a filter with profile $ f_f(\lambda) $, without the atmosphere, is
\begin{equation}
\int_{0}^{\infty} S_0(\lambda) f_f(\lambda) d\lambda
= \int_{\lambda_1}^{\lambda_2} S_0(\lambda) f_f(\lambda) d\lambda ~.
\end{equation}
This is because $ f_f(\lambda) $ can be taken as non-zero for $ \lambda \in (\lambda_1, \lambda_2) $ and 0 otherwise. If the spectrum undergoes atmospheric extinction described by an optical depth $ \tau(\lambda) $, the intensity after the filter throughput is
\begin{equation}
\int_{\lambda_1}^{\lambda_2}
S_0(\lambda) f_f(\lambda) e^{-\int \tau(\lambda) dX} d\lambda
\approx
\int_{\lambda_1}^{\lambda_2}
S_0(\lambda) f_f(\lambda) e^{-\tau(\lambda) X} d\lambda ~.
\end{equation}
Here, $ e^{-\int \tau(\lambda) dX} \approx e^{-\tau(\lambda) X} $ is used, and the integration is along the optical path through the atmosphere. When the extinction is not severe ($ \tau(\lambda) X \ll 1 $ for all $ \lambda \in (\lambda_1, \lambda_2)$), $ e^{-\tau(\lambda) X} \approx 1 - \tau(\lambda) X $ (say ``approx 1''). Also, when $ A \ll 1 $, $ \lg (1 - A) \approx - A / \ln 10 $ (say ``approx 2''). Combining these approximations with Pogson's formula,
\begin{equation}
\begin{aligned}
m_f - m_{0f}
&= -2.5 \lg
\qty( \frac{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) e^{-\tau(\lambda) X} d\lambda}
{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) d\lambda} ) \\
\mathrm{(using\ approx\ 1)}
&\approx -2.5 \lg
\qty( 1 - \frac{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) \tau(\lambda) d\lambda}
{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) d\lambda} X ) \\
\mathrm{(using\ approx\ 2)}
&\approx \frac{2.5}{\ln 10}
\frac{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) \tau(\lambda) d\lambda}
{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) d\lambda} X ~.
\end{aligned}
\end{equation}
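It is easy to check numerically how good ``approx 1'' and ``approx 2'' are. Below is a minimal Python sketch (entirely my own toy model: a $ \SI{6000}{K} $ black body, a flat filter profile over 400--550 nm, and a Rayleigh-like $ \tau(\lambda) $ are assumed) comparing the exact $ m_f - m_{0f} $ with the linearized expression of the last line:
\begin{verbatim}
import numpy as np

# Toy model (assumptions): blackbody source, flat filter over 400-550 nm,
# Rayleigh-like optical depth.  The uniform grid lets plain sums stand in
# for the integrals, since the d(lambda) factors cancel in the ratios.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
lam = np.linspace(400e-9, 550e-9, 2000)          # wavelength [m]
T = 6000.0                                        # blackbody temperature [K]
S0 = lam**-5 / (np.exp(h*c/(lam*kB*T)) - 1.0)     # Planck shape (arbitrary units)
f = np.ones_like(lam)                             # flat filter profile (assumed)
tau = 0.25 * (lam / 450e-9)**-4                   # assumed tau(lambda)
X = 1.5                                           # airmass

exact = -2.5*np.log10(np.sum(S0*f*np.exp(-tau*X)) / np.sum(S0*f))
lin   = (2.5/np.log(10)) * np.sum(S0*f*tau)/np.sum(S0*f) * X
print(f"exact m_f - m_0f = {exact:.3f} mag, linearized = {lin:.3f} mag")
\end{verbatim}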
A short digression here. Remembering $ I(\lambda) = I_0(\lambda) e^{-\tau(\lambda) X} $, we have extinction (magnitude)
\begin{equation*}
\Delta m(\lambda)
= -2.5 \lg \qty( \frac{I(\lambda)}{I_0(\lambda)} )
= 1.086 \tau(\lambda) X ~.
\end{equation*}
Then the $ y $-axis of \cref{fig:air-ext-and-filter} is $ 1.086 \tau(\lambda) \approx \tau(\lambda) $, so you can roughly read the $ y $-axis as $ \tau(\lambda) $. Now you can visually confirm that $ \tau(\lambda) X \ll 1 $ (``approx 1'') is reasonable. For short-wavelength (shorter than $ B $ or $ g $) and high-airmass observations, this assumption may break down\footnote{Also for longer wavelengths ($ \lambda \gtrsim 1 \,\mathrm{\mu m} $, including the JHK bands), this approximation breaks down severely.}. This is why classical photometric observers dislike observations at airmass $ X \gtrsim 1.5\mathrm{-}2 $ (elevation $ \lesssim 42^\circ \mathrm{-} 30^\circ$). For polarimetry, however, only the \emph{ratio} of the two (orthogonal) electric field intensities is important, so airmass does not matter\footnote{For example, \href{https://ui.adsabs.harvard.edu/abs/2018NatCo...9.2486I/abstract}{\texttt{ItoT+2018, NatCo, 9, 2486}} demonstrated that the polarization degree is not seriously affected by airmass ($ X = 1.03 \mathrm{-} 7 $). The change in the polarization between $ X = 1 $ and 7 is at most 0.05 \%p. This is because atmospheric scattering is basically \emph{forward} scattering, which should not induce any additional polarization degree (except for multiple scatterings at very small scattering angles), although the total intensity does decrease.}. The error due to the approximation, however, may not be severe compared to other error sources (e.g., changing weather).
Now, coming back to the original logic, we further assume that $ \tau(\lambda) \approx \tilde{c}_1 + \tilde{c}_2 \lambda $ for $ \lambda \in (\lambda_1, \lambda_2) $. This amounts to approximating the black markers in \cref{fig:air-ext-and-filter} within each filter as a straight line, because the $ y $-axis there is nothing but $ 1.086 \tau \approx \tau $. Then
\begin{equation}
m_f - m_{0f}
\approx 2.5
\qty (c_1 + c_2\frac{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) \lambda d\lambda}
{\int_{\lambda_1}^{\lambda_2} S_0(\lambda) f(\lambda) d\lambda} ) X ~.
\end{equation}
Here, $ c_1 $ and $ c_2 $ are also constants. If the filter is fixed (e.g., the $ V $-band or SDSS $ g' $ filter, etc.), the only unknown in the second term in the parentheses is $ S_0(\lambda) $, i.e., the spectral shape. If it is a black body spectrum, the shape of $ S_0 $ is determined uniquely once the color index $ C $ is known. Even if it is not a perfect black body, it is reasonable to assume the spectral shape, $ S_0(\lambda) $, and the color index, $ C $, have a \textit{nearly} one-to-one relationship\footnote{For example, if you look at the color-color diagram of stars, A0, F0, and G0 stars all share similar $ U - B $ colors, i.e., color index and spectral shape are not one-to-one. This happens because (1) star spectra are not perfect black bodies and (2) the filter profile is not ``flat'' as a function of $ \lambda $. But as a first-order approximation, it is acceptable.}. Fortunately, the most widely used color indices (such as $ B - V $ or gri colors) are close to one-to-one for \emph{many} (but not all) cases. Thus, $ C $ is an indicator of $ S_0(\lambda) $, so the second term is roughly a function of $ C $, say $ c_2 \tilde{S}(C) $. Note that this logic does not change even if $ C $ were the \textit{observed} color index (not the \textit{true} color index), as I mentioned earlier. In a theoretical development like the one here, choosing the true color index makes the concept easier to understand. In practical observation, however, it is beneficial to use the observed color index (as you may see in much of the literature), because that is what you get from the observation, while the true value is the unknown.
The final assumption we make here is that the second term is $ c_2 \tilde{S} (C) \approx c_{3f} + c_{4f} C $ as the first-order approximation. Here $ c_{3f} $ and $ c_{4f} $ have subscript $ f $ because they depend on the \textit{filter} profile, but not on the spectral shape under our simplifying assumptions, because all the dependency from the spectral shape is absorbed into $ C $. Then
\begin{equation}
m_f - m_{0f}
\approx 2.5 (c_1 + c_{3f} + c_{4f} C) X
\equiv k_f' X + k_f'' CX ~.
\end{equation}
These are the origins of $ k_f' $ and $ k_f'' $ in \cref{eq: std}.
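A minimal numerical sketch of this functional form (entirely synthetic numbers; in practice $ k_f' $ and $ k_f'' $ are determined from standard stars as described later, but here I simply pretend $ m_f - m_{0f} $ were directly measurable) shows how the two coefficients can be recovered by least squares:
\begin{verbatim}
import numpy as np

# Synthetic demonstration (made-up numbers): recover k' and k'' from
# "measurements" of the form  dm = k'*X + k''*C*X + noise.
rng = np.random.default_rng(1)
k1_true, k2_true = 0.20, -0.02                   # assumed "true" coefficients
X = rng.uniform(1.0, 2.0, 50)                    # airmasses
C = rng.uniform(-0.2, 1.5, 50)                   # colors
dm = k1_true*X + k2_true*C*X + rng.normal(0, 0.005, 50)   # extinction [mag]

A = np.column_stack([X, C*X])                    # design matrix for (k', k'')
k1_fit, k2_fit = np.linalg.lstsq(A, dm, rcond=None)[0]
print(f"k' = {k1_fit:.3f}, k'' = {k2_fit:.3f}")  # should come out near the input
\end{verbatim}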
To illustrate the result, I used the SDSS filter system as shown in \cref{fig:air-ext-bbrad} and calculated, in the following table, how much extinction (in magnitudes) happens at airmass $ X = 1 $ depending on the black body temperature. Note that the color $ C $ can be any gri color, such as $ r'-i' $, and it is determined once the blackbody temperature is given.
\begin{table}[ht!]
\centering
\begin{tabular}{c||ccc}
& \multicolumn{3}{c}{Extinction magnitude} \\
  Blackbody & $ m_{g'} - m_{0 g'} $ & $ m_{r'} - m_{0 r'} $ & $ m_{i'} - m_{0 i'} $ \\
Temperature & $ = k_{g'}' + k_{g'}'' C $ & $ = k_{r'}' + k_{r'}'' C $ & $ = k_{i'}' + k_{i'}'' C $ \\
\hline
$ \SI{3000}{K} $ & $ \m{0.142} $ & $ \m{0.081} $ & $ \m{0.034} $ \\
$ \SI{6000}{K} $ & $ \m{0.158} $ & $ \m{0.084} $ & $ \m{0.035} $ \\
$ \SI{20000}{K} $ & $ \m{0.171} $ & $ \m{0.087} $ & $ \m{0.036} $ \\
\end{tabular}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{figs/air-ext-bbrad}
\caption{A black body radiation spectrum ($ T = \SI{6000}{K} $) before and after the extinction at SDSS bands. Note that the y-axis is changed to \% per airmass (cf. \cref{fig:air-ext-and-filter}) by $ 10^{-0.4 \Delta m} $.}
\label{fig:air-ext-bbrad}
\end{figure}
As can be seen, \emph{the extinction is stronger for higher temperature (lower color index)}, so you can guess $ k_f'' < 0 $. However, the difference in extinction between the objects gets smaller as we look at longer wavelengths. Thus, you can also guess $ k_f'' \sim 0 $ (except at short wavelengths, e.g., $ B $ or $ u $).
These facts can also be understood qualitatively. The higher the temperature, the larger the fraction of energy the spectrum has at shorter wavelengths. Considering that the atmospheric extinction is stronger at shorter wavelengths (see the black markers in \cref{fig:air-ext-bbrad}), a high-temperature object gets more ``penalty'' when its light passes through the atmosphere. That is why a high-temperature object is more strongly extincted, i.e., $ k_f'' $ is negative. At the same time, the amount of extinction drops significantly at the wavelengths of the $ r $ and $ i $ bands, which makes $ k_f'' $ itself very small there.
\texttt{SmithJA+(2002)}\footnote{\url{https://ui.adsabs.harvard.edu/abs/2002AJ....123.2121S/abstract}} determined the coefficients for the SDSS filters at the 1.0 m Ritchey--Chr\'{e}tien telescope of the USNO Flagstaff Station, from observations in 1998--2000 (see \cref{tab: SDSS ext}). The $ k_f' $ value fluctuates considerably from night to night (see Fig.\ 6 of the original publication), so I just took representative values from visual inspection. They mention that the $ k_f'' $ values were obtained by two independent pipelines, \texttt{superExcal} (method 1) and \texttt{solve\_network} (method 2). The two are quite consistent except for the $ u $ filter. The color $ C $ used for each filter is the color with the neighboring filter: $ u'-g' $ for $ u' $, $ g'-r' $ for $ g' $, etc., and $ i' - z' $ for both $ i' $ and $ z' $. The atmospheric extinction coefficients $ (k_f',\, k_f'') $ all change as a function of time; we just hope they are reasonably constant during the observation of our targets and standard stars. From experience, we know $ k_f'' $ is always very small for filters redder than the u or B band, but it is not necessarily ignorable, because $ k_f'' XC $ might be larger than the accuracy you want to attain.
\begin{table}[ht!]
\caption{The extinction coefficients of SDSS from \texttt{SmithJA+(2002)}.}
\label{tab: SDSS ext}
\centering
\begin{tabular}{c||lllll}
Parameter & u$ ' $ & g$ ' $ & r$ ' $ & i$ ' $ & z$ ' $ \\
\hline
$ k_f' $ & $ > +0.5 $ & $ +0.20 \pm 0.05 $ & $ +0.10 \pm 0.05 $ & $ +0.05 \pm 0.05 $ & $ +0.05 \pm 0.05 $\\
$ k_f'' $ method 1
& $ -0.021 \pm 0.003 $
& $ -0.016 \pm 0.003 $
& $ -0.004 \pm 0.003 $
& $ +0.006 \pm 0.003 $
& $ +0.003 \pm 0.003 $ \\
$ k_f'' $ method 2
& $ -0.032 $
& $ -0.015 $
& $ 0.000 $
& $ +0.005 $
& $ +0.006 $
\end{tabular}
\end{table}
%Atmosphere diminishes the flux of the object. For an optical depth of $ \tau $, an object with initial flux $ I_0 $ will be observed as $ I = I_0 e^{-\tau} $ and its magnitude will be increased (because flux is decreased) by $ \Delta m = -2.5 \lg (I / I_0) = \frac{2.5}{\lg e} \tau = 1.086 \tau $. But $ \tau = \int n \sigma dl $ for the number density of particles $ n $, the extinction cross-section $ \sigma $, and traveling distance $ l $. The total traveling distance, $ L $, is $ L_0 / \cos z = L_0 X $ where $ X $ is the airmass. Thus, a simple approximation that $ \tau \propto L $ will result in $ \Delta m \propto X $.
%The real story is more complicated: This extinction is of course wavelength-dependent (\cref{fig:air-ext-and-filter}). The black markers show the extinction magnitude per airmass. At zenith, $ X = 1 $ by definition, so this means that $ \Delta m \sim \m{0.2} $ at $ \lambda = \SI{4000}{\AA} $ but is $ \Delta m \ll \m{0.1} $ for $ \lambda = \SI{8000}{\AA} $. Therefore, the extinction is not simply proportional to $ X $, but the proportionality is a function of wavelength.
%\begin{thm}[Atmospheric Extinction: 1st Order]
%The extinction due to the atmosphere has the following 1st order term:
%\begin{equation*}
% \Delta m(\lambda) = k'(\lambda) X ~,
%\end{equation*}
%where $ k'(\lambda) $ is the first-order extinction coefficient and is a function of wavelength $ \lambda $. In photometry, we are dealing with filters rather than each single wavelength, so normally we denote
%\begin{equation}\label{eq: air-ext 1st ord}
% \Delta m_f = k_f' X ~.
%\end{equation}
%\end{thm}
%Consider a black body spectrum is underwent this atmospheric extinction: \cref{fig:air-ext-bbrad}. The extinction is of course a function of wavelength.
%In photometry, we are interested in the total number of photons \textit{after} multiplied with the filter profile: $ \int_{\lambda_1}^{\lambda_2} S(\lambda) f(\lambda) d\lambda $ where $ S(\lambda) $ is the spectrum and $ f(\lambda) $ is the filter profile. Thus, the magnitude change before and after the atmospheric extinction is, if the extinction is $ E(\lambda) $,
%\begin{equation}
% \Delta m =
% -2.5 \lg \qty (\frac{\int_{\lambda_1}^{\lambda_2} E(\lambda) S(\lambda) f (\lambda) d\lambda}{\int_{\lambda_1}^{\lambda_2} S(\lambda) f(\lambda) d\lambda} )
%\end{equation}
%Because it contains a ratio of the integration, it is not simple to calculate $ \Delta m $. From target to target, what changes is, except for the airmass (Note that $ E(\lambda) = e^{-\tau} \propto \sim e^X $), the $ S(\lambda) $. Thus, $ \Delta m = \Delta m (X, S) $. For a black body, this is a function of the temperature, and the temperature has (roughly) one-to-one counterpart of the color index. Thus, if the spectral shape is assumed as black body, we can write $ \Delta m = \Delta m (X, C) $ where $ C $ is the true color index. Even if it is not a black body, it is sufficient if the spectral shape and color index have \textit{nearly} one-to-one relationship. The extinction due to the spectral shape should also be proportional to the airmass to the first order approximation. Thus, as an approximation, we add a term proportion to $ CX $ to \cref{eq: air-ext 1st ord}:
%\begin{thm}[Atmospheric Extinction: 2nd Order]
%The extinction due to the atmosphere is:
%\begin{equation}
% \Delta m(\lambda) = k'(\lambda) X + k''(\lambda) C X ~,
%\end{equation}
%where $ k''(\lambda) $ is the second order atmospheric extinction coefficient.
%In photometry, we are dealing with filters rather than each single wavelength, so normally we denote
%\begin{equation}\label{eq: air-ext 2nd ord}
% \Delta m_f = k_f' X + k_f'' C X ~.
%\end{equation}
%\end{thm}
\begin{ex}[Ignoring the Second Order Extinction Term]
Consider an observation at $ X = 2 $ of a $ C = 0.2 $ star. If you ignore the second-order extinction term, you are making an error of $ k_f'' XC = 0.4 k_f'' $. According to \cref{tab: SDSS ext}, this is most likely smaller than 0.01 magnitude.
The Sun has $ C = g' - r' \sim 0.5 $, so the error for a Sun-like star becomes $ 1.0 k_f'' $: it is $ < \m{0.01} $ for the r, i, and z bands, and about $ 0.02\mathrm{-}0.03 $ mag for $ u' $.
Red M0 stars have $ C = g' - r' \lesssim 1.5 $, and the error is now up to $ 3.0 k_f'' $. The accuracy of $ \m{0.01} $ can still be achieved in the riz bands, but it is risky.
\end{ex}
The calculation above is only for the SDSS system at an observatory at an altitude of 2.3 km. Fortunately, \texttt{BuchheimB(2005)}\footnote{\url{https://ui.adsabs.harvard.edu/abs/2005SASS...24..111B/abstract}} found that $ k_f'' \lesssim 0.005 $ for the V band even at many low-altitude observatories (including the Bochum observatory at an altitude of 200 m and the Vainu Bappu Observatory at an altitude of 700 m), so we can expect the correction from the second-order extinction term to be small enough in most cases.
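The error-budget arithmetic of the example above is simple enough to script. The following sketch uses rough, representative $ k_f'' $ values read off \cref{tab: SDSS ext} (treat them as order-of-magnitude numbers, not the published values):
\begin{verbatim}
# Rough error budget |k'' * X * C| for ignoring the second-order term.
# The k'' values below are representative round numbers, not exact.
kpp = {"u'": -0.03, "g'": -0.016, "r'": -0.004, "i'": 0.006, "z'": 0.003}
X = 2.0
for C in (0.2, 0.5, 1.5):        # bluish star, Sun-like star, red M0-ish star
    budget = {filt: round(abs(k * X * C), 3) for filt, k in kpp.items()}
    print(f"C = {C}: {budget}")
\end{verbatim}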
\subsection{Transformation Coefficient}
The sensitivity of the optics, such as the filter, lens, mirror, and/or CCD cover glass, is also a function of $ \lambda $. The argument is identical to that for atmospheric extinction, but there is no $ X $ (a similar parameter would be something like the optical depth of the materials covering a CCD pixel, but that should be a device-dependent constant). The same logic then leads us to the conclusion that there should be a color term which tunes the final output of the CCD count, and that is the $ \tilde{k}_f \times c $ (``transformation'') term. Here, $ c $ is the color index of the object after all the atmospheric extinction, i.e., the true color just before the light enters the telescope optics. Once we assume there exists a one-to-one function $ f_c $ such that $ f_c(C) = c $ and linearly approximate $ c = f_c(C) \approx \tilde{c}_5 + \tilde{c}_6 C $ for constants $ \tilde{c}_5 $ and $ \tilde{c}_6 $, the transformation term becomes $ \tilde{k}_f c = k_f C + \tilde{c}_{7f} $ for a filter-dependent constant $ \tilde{c}_{7f} $. This constant is finally absorbed into another filter-dependent constant, called the zero point, $ z_f $. Therefore we reach \cref{eq: std}. As before, the logic holds identically if we had selected $ C $ to be the observed color rather than the true color.
The transformation coefficient, which I denoted $ k_f $, is fortunately nearly constant for a given device. Warner argues that it is enough to update the $ k_f $ (Warner uses the notation $ T_f $) value only about 2--4 times a year, unless you physically change the device elements (e.g., filter, CCD, lens, etc.). Moreover, from experience, we know that it is nearly zero: $ |k_f| \lesssim 0.1 $, and in many cases $ |k_f| \lesssim 0.01 $. Since the range of color indices is $ \mathrm{max}(\Delta C) \lesssim \m{1} $, we have $ |k_f C| \lesssim 0.1 $, and in many cases $ |k_f C| \lesssim 0.01 $.
\subsection{A Note on Linearity} \label{ss: linearity note}
As we noted at the beginning of this chapter, a CCD is highly linear. That means $ N = g N_e = \alpha N_\gamma $, where $ N $ is the pixel count (after \emph{bias} and \emph{dark} subtraction), $ N_e $ is the total number of photo-electrons, $ g $ is the electronic gain of the CCD (a constant; unit of counts per electron), and $ N_\gamma $ is the number of photons incident on the CCD out of the true photon number $ N_\mathrm{\gamma 0} $. There are no higher-order terms and no other constants. There is no CCD as of 2021 that returns $ N = \alpha' N_\gamma^2 $, etc. The $ \alpha $ value may differ from pixel to pixel due to the inhomogeneity of the optics or the CCD pixels, but they are homogenized by so-called \emph{flat fielding}, so here I can safely say it is strictly constant over all the pixels.
Any kind of extinction (atmosphere or optics in front of the CCD) is multiplicative and acts only on the linear term, i.e., $ N_\gamma = N_{\gamma 0} \times \mathrm{something_1} $ (e.g., $ e^{-\tau} $). There are neither higher-order terms like $ N_{\gamma 0}^2 $ nor an additive constant. Therefore, $ N = \mathrm{something_2} \times N_\mathrm{\gamma 0} $ and the instrumental magnitude is
\begin{equation}
m_f
:= -2.5 \lg N
= \mathrm{something_3} - 2.5 \lg N_{\gamma 0}
\equiv \mathrm{something_3} + M_f ~.
\end{equation}
Thus, thanks to the linearity of the CCD, we have \textbf{no additional coefficient \emph{multiplied}} in front of $ m_f $ or $ M_f $. If, for example, $ N $ were $ \alpha N_\gamma + \alpha' $, or if there were other terms in the extinction (proportional to $ N^2 $ or a constant radiation from the optics), this simple relation would not hold.
%In reality, the broad-band photometry (the has some tricky problem. For a simple illustration, assume (1) we don't have atmosphere, (2) have a flat throughput: $ f(\lambda) = 1 $ for $ \lambda = [\lambda_1, \lambda_2] $ and zero otherwise, (3) CCD and optics has no wavelength-dependency in its sensitivity. We observed two stellar black-body spectra of temperatures $ T_a $ and $ T_b $ ($ T_a < T_b $). Because of different radii of or distance to the stars, consider they are shooting identical number of photons per time to Earth, i.e., $ \int_{\lambda_1}^{\lambda_2} S(\lambda) / (hc / \lambda) d\lambda $ is the same for two stars. The CCD will produce the same number of photo-electrons when it is hit by, e.g., 100 photons of $ \lambda_1 $ or 100 photons of $ \lambda_2 $. Thus, the two stars will appear to have identical brightness.
To emphasize: \emph{you should not worry about whether to \textbf{multiply} something in front of $ M_f $ or $ m_f $ to satisfy \cref{eq: std}}. Their coefficients \emph{must be unity}. From our experience, most observational experts and electrical engineers would say you only need to worry about this if you are sure that some part of the optics has a serious problem (e.g., your CCD has been damaged and shows non-linearity).
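The whole argument boils down to the fact that a purely multiplicative chain only shifts the instrumental magnitude by a constant. A tiny sketch (arbitrary numbers) makes this explicit:
\begin{verbatim}
import numpy as np

# A purely multiplicative chain N = alpha * attenuation * N_gamma0 shifts
# the instrumental magnitude by the same constant for every source
# (all numbers below are arbitrary).
N_gamma0 = np.array([1e3, 1e4, 1e5, 1e6])     # "true" photon numbers
alpha, attenuation = 0.8, 0.7                 # assumed constants
N = alpha * attenuation * N_gamma0

m_inst = -2.5 * np.log10(N)
M_true = -2.5 * np.log10(N_gamma0)
print(m_inst - M_true)   # the same constant ("something_3") for every source
\end{verbatim}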
\subsection{A Note on Zero Point} \label{ss: zeropt}
The zero point $ z_{f} $ is a constant that converts the instrumental magnitude (which is nothing but $ -2.5 $ multiplied by $ \lg (\mathrm{count}) $) to a realistic standard magnitude system astronomers have been using. In the intensity sense, this ``addition of a constant'' in magnitude represents a ``multiplication of a constant'' to the instrumental count to recover the intensity.
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% add photon/sec for 0-mag star from Ishiguro's notebook and do realistic calculation
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
% !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
What is a typical value of the zero point? For telescopes with diameter $ \gtrsim \SI{1}{m} $ in Seoul, let's say we are observing objects of around $ \m{15} $ with a 100-second exposure. Say the sky-subtracted peak value of our target is\footnote{Because CCDs are usually operated in 16-bit unsigned integer mode, which represents $ 0 $ to $ 65,535 $, and non-linearity appears around $ 40,000\mathrm{-}60,000 $ ADU, the sky-subtracted peak pixel value of an intermediate-brightness object in the FOV is of order $ \sim 10^4 $.} $ \sim 10^4 $, the integral of the profile gives an intensity of $ \sim 10^{5} $, and the intensity per unit time is $ 10^3 $ (dividing by the exposure time). The instrumental magnitude is $ m \sim -2.5 \lg 10^3 = -7.5 $. This means $ z_f \sim 22.5 $. If we do a realistic calculation, $ z_f $ mostly falls within a range of 20 to 25. This is why, in IRAF, the default initial guess of $ z_f $ is 25 mag.
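The same back-of-the-envelope estimate in code form (all numbers are the order-of-magnitude assumptions from the paragraph above):
\begin{verbatim}
import numpy as np

# Order-of-magnitude zero point, following the numbers in the text.
counts = 1.0e5          # integrated sky-subtracted counts of a ~15 mag star
exptime = 100.0         # seconds
M_catalog = 15.0        # catalogued (true) magnitude

m_inst = -2.5 * np.log10(counts / exptime)   # about -7.5
z_f = M_catalog - m_inst                     # about 22.5
print(f"m_inst = {m_inst:.2f}, z_f = {z_f:.2f}")
\end{verbatim}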
The zero point must be a constant, unless the device is affected by external disturbances, in the simple model dealt with in this chapter. In reality, the zero point does fluctuate at each exposure because of the imperfect readout process of the CCD electronics. $ \Delta z_V \approx 0 $ is assumed in this chapter. You may be uncomfortable with this assumption, but the same assumption is made even in professional space telescope data reduction processes, if that makes you feel any better.
\section{Standardization Applied to Photometry}
Now that we have justified the use of \cref{eq: std}, let's look at where it is applied. The simplest case is \textit{differential photometry}.
If there are many celestial objects (of course including your target) in the field of view with known standard magnitudes, we can use them as standard stars. Although there can be variable stars and galaxies\footnote{Galaxies can have spectra significantly different from those of black bodies. Therefore, the coefficients $ k_f $ and $ k_f'' $, which are approximations of the spectral shape (i.e., not $ k_f' $), should differ from those derived from standard \textit{stars}, which are black bodies to first order. But mostly this effect is not serious because, as we discussed before, both the $ k_f C $ and $ k_f'' X C $ terms are very small anyway. For this reason, people use $ k_f $ and $ k_f'' $ derived from standard stars for their target galaxies (or any non-black-body-like spectra). If you really worry about this, you must conduct spectroscopic observations, not broad-band photometry.}, if most of the objects with known magnitudes are non-variable stars, those \textit{outliers} will be smoothed out. Thus, we simply treat all the celestial objects in the field of view with known standard magnitudes as ``standard stars''.
This technique is very widely used in variable star and asteroidal light curve observations. It is more widely used than absolute photometry, because it is tedious and difficult to observe standard stars at different airmasses while observing your target, which requires telescope time and human labor. In asteroidal science, even single-filter differential photometry is frequently used.
\subsection{Differential Photometry: Single-Filter}
Consider there are many stars in the FOV with known (catalogued) magnitudes in multiple filters, so that the true apparent magnitude $ M_f $ and color $ C $ are known. Say, from the photometry, we determined the instrumental magnitudes $ m_f $ of the stars. Rearranging \cref{eq: std}:
\begin{equation} \label{eq: zeropt}
M_f - m_f = (z_f - k_f' X) + (k_f - k_f'' X) C \equiv a_f (X) + b_f (X) C
\end{equation}
The first term on the RHS, $ a_f (X) $, is a constant for all the stars in the same FITS frame, as they have nearly identical airmass\footnote{If you observed near the horizon, $ \Delta X $ across the FOV may not be negligible. For accurate photometry, this should also be taken into account.} $ X $ and an identical zero point $ z_f $. $ M_f - m_f = a_f(X) $ for $ C = 0 $, i.e., a perfectly flat spectrum. The second term is color-dependent, but as we discussed before, it is likely to be very small. Sometimes people just call this value (the LHS) the ``zero point'', although I will stick to using this term for $ z_f $ for clarity.
\begin{ex}[How small is the second term?]
Consider observations made in wavelength ranges around the V or gri bands. From \cref{tab: SDSS ext}, $ |k_f''| \sim 0.000 \mathrm{-} 0.020 $, and we expect $ k_f \lesssim 0.02 $ for most optics. Since $ k_f'' $ is mostly negative, $ k_f - k_f'' X $ is likely to be positive, but not always (see \cref{tab: SDSS ext}). Also, we have $ 1 < X \lesssim 2 $ in most cases. Then, if you play with many combinations of numbers, $ b_f(X) C = (k_f - k_f'' X) C \lesssim 0.05 C $, and most likely much smaller than that. Note that the color index is mostly $ -1 \lesssim C \lesssim +1 $.
\end{ex}
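In practice, determining $ a_f(X) $ and $ b_f(X) $ for one frame is a one-line linear fit. A minimal sketch with synthetic ``catalogued'' stars (the numbers below are invented for illustration):
\begin{verbatim}
import numpy as np

# Fit  M_f - m_f = a_f + b_f * C  for the field stars of a single frame.
rng = np.random.default_rng(2)
n = 40
C   = rng.uniform(0.0, 1.2, n)                   # catalogued colors
M_f = rng.uniform(13.5, 15.0, n)                 # catalogued magnitudes
a_true, b_true = 22.4, 0.03                      # assumed "truth" for the frame
m_f = M_f - (a_true + b_true*C) + rng.normal(0, 0.02, n)   # instrumental mags

b_fit, a_fit = np.polyfit(C, M_f - m_f, 1)       # slope, intercept
print(f"a_f = {a_fit:.3f}, b_f = {b_fit:.3f}")
\end{verbatim}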
What we do now is to draw several test plots as in \cref{fig:zeropt}. It is better if the colors of the stars span a range wider than about 0.5 ($ \mathrm{max}(C) - \mathrm{min} (C) \gtrsim 0.5 $). On the left side of the figure, I plotted $ M_f $ vs.\ $ m_f $ for $ f = R $ (Johnson--Cousins $ R_C $ filter). The fitted slope is $ \sim 1.015 $, near unity as we expect from linearity; sometimes it becomes as high as 1.05 over the night. Below it is a residual plot. Normally, since the uncertainty in the magnitude measurement increases for larger magnitudes (fainter stars), the scatter of the residuals should increase at large magnitudes. The right panel of the figure shows the residual as a function of the catalogued color of the stars. The color-dependent slope is 0.02, and it was $ \lesssim 0.05 $ throughout the night; this is indeed small, as we expected above. As we do not know our target's color, assuming a color uncertainty of $ \pm 0.5 $ gives around a $ \pm 0.03 $ \textbf{\emph{systematic offset to the magnitude}} of our target. Note that this is a systematic parallel shift in magnitude, \textbf{not a random error}. Therefore, the \emph{shape} of the light curve is not affected by it, although the magnitude \emph{values} may be offset by a constant.
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{figs/SNUO_STX16803-2005UD-1-1-20181012-190829_zeropoint.pdf}
\caption{A typical linearity curve with residual (left) and color-zero plot (right) on 2018-10-12 (UT 19:08:29) observation at SNUO 1-m telescope for the field stars near an asteroid (155140) 2005 UD. I used PS1 DR1 catalog, within 13.5 to 15 mag, removed any object with flag of quasar, variable, galaxy, etc, and dropped any pairs of stars if there's any nearby object in PS1 DR1 catalog.}
\label{fig:zeropt}
\end{figure}
A summary of the three plots:
\begin{table}[ht!]
\caption{Differential Photometry Diagnostic Plots}
\label{tab: 1-filter diff phot}
\centering
\begin{tabular}{c|cc|c|l}
Name & $ x $-axis & $ y $-axis & Appearance & Comments \\
\hline
Linearity & $ M_f $ & $ m_f $ & straight line &
$ |\mathrm{slope} - 1| \gtrsim 0.01 $ means you need to \\
Curve & & & slope of unity & check your photometry and/or device linearity \\
\hline
Linearity & $ M_f $ & $ M_f - m_f $ & constant value &
Scatter is larger for large $ M_f $ objects \\
Residual & & & regardless of $ M_f $ & ($ \because $ faint object's mag is less accurate) \\
\hline
Color & $ C $ & $ M_f - m_f $ & $ \sim $ constant &
If you can see a clear trend, \\
Dependency & & & regardless of $ C $ & the color terms in \cref{eq: zeropt} are not negligible.\\
\end{tabular}
\end{table}
If the graphs do not look as expected, that means (1) many field objects are variable or highly non-blackbody-like galaxies, so they are not suitable as ``standard stars'' and the basic assumptions of \cref{eq: std} break down, (2) the airmass is so high that the 2nd-order approximations do not hold, (3) the catalog $ M_f $ suffers from unknown uncertainties or systematic errors, so the catalog values are not reliable, (4) $ m_f $ was measured incorrectly, or (5) many other possibilities. Check these before going further, because your final results may be affected.
Consider you have determined $ (a_f, b_f) $ for all the FITS files you obtained over a night, where each FITS file has a different airmass. Then, if you
\begin{itemize}
\item plot $ a_f $ against $ X $, the intercept and slope are $ z_f $ and $ -k_f' $,
\item plot $ b_f $ against $ X $, the intercept and slope are $ k_f $ and $ -k_f'' $.
\end{itemize}
Note, however, the following assumptions should be met:
\begin{enumerate}
\item Atmospheric conditions (the extinction coefficients $ k' $ and $ k'' $) remain constant over the night
\item The zero point $ z_f $ remains constant over the night
\item The approximations used for \cref{eq: std} (2nd-order approximation) must hold
\end{enumerate}
Because not all of these are met under the Seoul sky, the plots you make may not show a linear trend: they can be so scattered that a linear fit does not seem to make sense, a clear non-linear trend may appear (mostly because the third assumption breaks down at large airmass), etc.
Therefore, as a simple yet reasonable approximation, I assumed that $ M_f - m_f $ for all the objects in a single FITS file should be a constant, found this value, and then added it to the instrumental magnitude $ m_f $ of the target of interest.
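A minimal sketch of this per-frame constant-offset approach (the magnitudes below are hypothetical, and the sigma clipping is just one reasonable choice of robust averaging):
\begin{verbatim}
import numpy as np

# Take M_f - m_f of the field stars in one FITS frame as a single constant
# (here via a crudely sigma-clipped mean) and add it to the target.
def frame_offset(M_field, m_field, sigma=3.0):
    d = np.asarray(M_field) - np.asarray(m_field)
    med, std = np.median(d), np.std(d)
    keep = np.abs(d - med) <= sigma * std          # crude sigma clipping
    return np.mean(d[keep])

# Hypothetical numbers:
M_field = [14.10, 14.80, 13.90, 15.00]     # catalogued magnitudes
m_field = [-8.30, -7.60, -8.50, -7.45]     # instrumental magnitudes
m_target = -8.00
print("target magnitude ~", m_target + frame_offset(M_field, m_field))
\end{verbatim}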
\subsection{Differential Photometry: Multi-Filter}
When we have observations in more than one filter, we can even eliminate the systematic offset due to the color uncertainty of the object. Just write the equation for two filters, $ x $ and $ y $:
\begin{equation}
\begin{cases}
M_x - m_x &= (z_x - k_x' X_x) + (k_x - k_x''X_x) C_{xy} = a_x(X_x) + b_x(X_x) C_{xy} \\
M_y - m_y &= (z_y - k_y' X_y) + (k_y - k_y''X_y) C_{xy} = a_y(X_y) + b_y(X_y) C_{xy}
\end{cases}
\end{equation}
Note here that, for field stars with known standard magnitudes, $ M_x $, $ M_y $, and thus $ C_{xy} = M_x - M_y $ are all known. The $ m_x $ and $ m_y $ are known from photometry of the image. Although the $ z $ values are assumed to be constant throughout the night for the same detector, I explicitly write $ z_x $ and $ z_y $ for generality. Following the logic of the single-filter case, if we plot $ M_x - m_x $ as the ordinate and $ C_{xy} $ as the abscissa for $ N $ field stars, the $ y $-intercept is $ a_x $ and the slope is $ b_x $. The same goes for the $ y $ filter. So $ (a_x, b_x) $ and $ (a_y, b_y) $ are determined with their uncertainties.
Then for the target of interest, for which $ M_x $ and $ M_y $ are unknown,
\begin{equation}
\begin{cases}
M_x^\mathrm{target} - m_x^\mathrm{target} &= a_x(X_x) + b_x(X_x) C_{xy}^\mathrm{target} \\
M_y^\mathrm{target} - m_y^\mathrm{target} &= a_y(X_y) + b_y(X_y) C_{xy}^\mathrm{target}
\end{cases}
\end{equation}
or since $ C_{xy}^\mathrm{target} = M_x^\mathrm{target} - M_y^\mathrm{target} $, (dropping $ X_x $ and $ X_y $ for brevity)
\begin{equation}
C_{xy}^\mathrm{target} = \frac{(m_x^\mathrm{target} - m_y^\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)} ~.
\end{equation}
Putting this back to the original equation,
\begin{equation}\label{eq: 2-filter diff phot sol}
\begin{cases}
M_x^\mathrm{target}
= m_x^\mathrm{target} + a_x + b_x \frac{(m_x^\mathrm{target} - m_y^\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)} \\
M_y^\mathrm{target}
= m_y^\mathrm{target} + a_y + b_y \frac{(m_x^\mathrm{target} - m_y^\mathrm{target}) + (a_x - a_y)}{1 - (b_x - b_y)}
\end{cases}
\end{equation}
Now we have the standard magnitudes of the target in both the $ x $ and $ y $ filters. Since we have taken the color of the target into account, \textbf{\emph{there is no systematic offset}}, unlike in the single-filter case.
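The solution \cref{eq: 2-filter diff phot sol} is easy to wrap into a small helper; the numbers in the usage example below are made up:
\begin{verbatim}
# Direct implementation of the two-filter solution above; a_x, b_x, a_y, b_y
# come from the per-frame field-star fits (the values below are made up).
def two_filter_solution(m_x, m_y, a_x, b_x, a_y, b_y):
    C = ((m_x - m_y) + (a_x - a_y)) / (1.0 - (b_x - b_y))
    M_x = m_x + a_x + b_x * C
    M_y = m_y + a_y + b_y * C
    return M_x, M_y, C

M_x, M_y, C = two_filter_solution(m_x=-8.00, m_y=-8.35,
                                  a_x=22.40, b_x=0.03,
                                  a_y=22.15, b_y=0.01)
print(f"M_x = {M_x:.3f}, M_y = {M_y:.3f}, C_xy = {C:.3f}")
\end{verbatim}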
What if we have more than 2 filters, say $ x $, $ y $, and $ z $? Solve for $ x $ and $ y $ as above using $ (a_x, b_x) $ and $ (a_y, b_y) $. Then solve for $ y $ and $ z $, but this time using the color $ C_{yz} $, not $ C_{xy} $, obtaining $ (a_y', b_y') $. Theoretically $ a_y = a_y' $, because the $ a $ term contains no color term. However, $ b_y \neq b_y' $, because $ k_y'' $ and $ k_y $ are defined with respect to $ C_{xy} $ in the first case but with respect to $ C_{yz} $ in the second. In reality, because what we get are only best-fit values with uncertainties, $ a_y $ and $ a_y' $ can also differ (likely within the error bars).
\section{Photometry Using Standard Stars}
When we need accurate absolute photometry, photometric standard star observations are essential. Unlike ``field stars with known magnitudes,'' standard stars are confirmed non-variable stars with very accurately known magnitudes. By observing them, we determine the coefficients ($ k_f $, $ k_f' $, and $ k_f'' $) and the zero point ($ z_f $), mostly for at least two filters, and obtain the magnitude of our target of interest with high accuracy. Hence, I will only talk about the multi-filter observation of standard stars.
What is different for standard star frames is that there is only one star with known magnitude in the FOV, so there will be only one point in the right panel of \cref{fig:zeropt}. Therefore, we select at least two standard stars with different colors, sometimes called a \textbf{\emph{blue--red pair}}. Observe one standard star at some airmass; when the other standard star reaches a similar airmass, observe it, so that you have at least two stars at (nearly) the same airmass. Then you have two points in the right panel of \cref{fig:zeropt} at that airmass. Repeat this at many airmasses, so that you obtain $ a_f $ and $ b_f $ for many airmasses in each filter. Following the logic of the multi-filter case, you can then determine all the coefficients.
If you are not interested in the zero point and the transformation coefficient $ k_f $, you can follow this calculation: Consider a standard star with ID $= i $ observed $ N_i $ times; denote each observation as $ j $ $ (j = 1, \cdots, N_i) $, and write the parameters of \cref{eq: std} as $ M_{f, i} $, $ C_{i} $ ($ M $ and $ C $ are fixed values for a standard star, so no $ j $ is needed), $ m_{f, i}^{(j)} $, $ X_{f, i}^{(j)} $, etc. We assume here that the zero point, $ z_f $, and the coefficients $ k_f $, $ k_f' $, and $ k_f'' $ are almost constant over the night\footnote{Even the SDSS standard stars were observed and analyzed under this assumption. See SmithJA (2002, AJ, 123, 2121).}. Then
\begin{equation}
\begin{cases}
M_{f, i} - m_{f, i}^{(1)}
&= z_f + k_f' X_{f, i}^{(1)} + k_f'' X_{f, i}^{(1)} C_{i} + k_f C_{i} ~, \\
M_{f, i} - m_{f, i}^{(2)}
&= z_f + k_f' X_{f, i}^{(2)} + k_f'' X_{f, i}^{(2)} C_{i} + k_f C_{i} ~, \\
\qquad\vdots & \qquad\vdots \\
M_{f, i} - m_{f, i}^{(N_i)}
&= z_f + k_f' X_{f, i}^{(N_i)} + k_f'' X_{f, i}^{(N_i)} C_{i} + k_f C_{i} ~.
\end{cases}
\end{equation}
For the first two observations of the same standard star (ID $= i $) at filter $ f $, subtracting the two,
\begin{equation}
\Delta m_{f, i}^{(1, 2)}
= (k_f' + k_f'' C_i) \Delta X_{f, i}^{(1, 2)}
\quad\rightarrow\quad
\qty( \frac{\Delta m }{\Delta X} )_{f, i}^{(1, 2)}
= k_f' + k_f'' C_i ~.
\end{equation}
Of course $ \Delta m_{f, i}^{(1, 2)} = m_{f, i}^{(1)} - m_{f, i}^{(2)} $ and $ \Delta X_{f, i}^{(1, 2)} = X_{f, i}^{(1)} - X_{f, i}^{(2)} $.
Therefore, if we plot $ \qty ( \frac{\Delta m }{\Delta X} )_{f, i} $ as a function of the color $ C_i $ of standard stars with different color indices (all their colors are known), a linear fit will give an intercept of $ k_f' $ and a slope of $ k_f'' $. For star $ i $, we have $ N_i $ observations, so we have $ \binom{N_i}{2} = N_i (N_i - 1) / 2 $ points at the single $ C_i $ value. If we have $ N $ standard stars spanning a wide color range, we have $ \sum_{i=1}^{N} \binom{N_i}{2} $ points to fit the line. For a simple blue--red pair, $ N = 2 $, so you have two $ x $-axis values (color indices), but many points at each $ x $-axis value.
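A compact sketch of this reduction with synthetic observations of a blue--red pair (all magnitudes, airmasses, and colors below are invented):
\begin{verbatim}
import numpy as np
from itertools import combinations

# For each standard star i, form all pairwise (dm/dX) values and fit them
# linearly against C_i:  intercept -> k_f',  slope -> k_f''.
def fit_k1_k2(obs):
    """obs: dict  star_id -> (C_i, [(m, X), (m, X), ...])"""
    colors, ratios = [], []
    for C_i, samples in obs.values():
        for (m1, X1), (m2, X2) in combinations(samples, 2):
            if abs(X1 - X2) > 0.05:              # skip near-equal airmasses
                colors.append(C_i)
                ratios.append((m1 - m2) / (X1 - X2))
    k2, k1 = np.polyfit(colors, ratios, 1)       # slope = k'', intercept = k'
    return k1, k2

obs = {                                           # hypothetical blue--red pair
    "blue": (0.1, [(-7.90, 1.10), (-7.81, 1.55), (-7.72, 2.00)]),
    "red":  (1.3, [(-8.40, 1.15), (-8.32, 1.60), (-8.24, 2.05)]),
}
print(fit_k1_k2(obs))     # roughly (0.20, -0.02) for these synthetic numbers
\end{verbatim}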
| {
"alphanum_fraction": 0.6916304469,
"avg_line_length": 107.5899280576,
"ext": "tex",
"hexsha": "fcb7d86ca152007204296a8e0da198c60055c8c5",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-07-14T09:18:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-05-10T14:19:34.000Z",
"max_forks_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "ysBach/SNU_AOclass",
"max_forks_repo_path": "Books/chaps/04_Std.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527",
"max_issues_repo_issues_event_max_datetime": "2021-05-24T11:41:55.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-05-04T17:21:49.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "ysBach/SNU_AOclass",
"max_issues_repo_path": "Books/chaps/04_Std.tex",
"max_line_length": 1643,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "e2e364b08c2e6e129c267db9cbd76cfd0ab77527",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "ysBach/SNU_AOclass",
"max_stars_repo_path": "Books/chaps/04_Std.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-14T01:49:51.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-03-23T06:14:52.000Z",
"num_tokens": 13287,
"size": 44865
} |
% Created: Enze Chen, July 2017
% Last edited: Enze Chen, December 2017
%
% Chapter 9 of the MSE 142 coursereader. This chapter summarizes the content of the course, and provides some further topics of exploration. Some other related courses in the materials science curriculum are also discussed.
% Uncomment the following three lines and last line to individually compile this chapter
%\documentclass[12pt, english]{book}
%\usepackage{142crstyle}
%\begin{document}
\chapter{What's Next?} \label{ch:next}
%{ \doublespacing
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{dilbert-quantum}
\caption{Image courtesy of \href{http://dilbert.com/strip/2012-04-17}{Dilbert}.}
\end{figure}
Congratulations! It's been a long quarter, and you should feel proud for everything you've accomplished in this class. This short, final chapter is meant to summarize some of the key concepts and learning objectives from this course, and provide motivation for where to go from here.
\section[Recap]{Recap of content and learning objectives}
My goal with this course was to provide a basic introduction to the world of quantum mechanics, which is an immensely rich, exceedingly vast, and nefariously complicated subject. Along the way, I hope you felt sufficiently challenged to think in abstract and quantitative ways that often broke with intuition. Recall that we began with the Stern-Gerlach experiment as a shock treatment that led into a narrative of the development of quantum theory. Although many of these ideas are almost a century old, they are still hotly debated and continue to feature in scientific discoveries! \par
As you've seen, just about every problem in quantum mechanics involves formulating some version of the \Sch\ equation and then solving it, so the \Sch\ equation was a natural place for us to start. We introduced the concept of a wave function to represent a quantum state and used the statistical interpretation to arrive at the probability density function. Then, we saw our first and perhaps simplest quantum mechanical system, which was that of the particle in a box. Using this simple but powerful model, we solved the time-independent \Sch\ equation and arrived at the quantization of energy levels, which was just the first of many surprising behaviors at the nanoscale. \par
From there, we experimented with different types of rectangular potential barriers, and saw how it was possible for a particle to tunnel through a potential barrier that it could not classically surmount. This is another phenomenon that is only observed at the quantum scale, but appears in a wide variety of contexts. In particular, we used it to analyze the Kronig-Penney model for periodic potentials, and surprisingly we found entire bands of energies that were forbidden for electrons. \par
In order to analyze the quantum harmonic oscillator, we went one step further to parabolic potentials and used the formalism of operators to solve this instance of the \Sch\ equation. Those concepts provided the foundations for quantum field theory, which we developed to explain second quantization and phonons. Finally, we introduced a time-dependent term into the Hamiltonian and developed the framework of time-dependent perturbation theory to model the interaction of matter with electromagnetic fields. \par
We have learned \emph{a ton} since this course first started ten weeks ago! You now have basic fluency in the language of quantum mechanics---which many schools don't teach until the graduate level---and you're now better equipped to solve problems that originate from the nanoscale. There was quite a bit of advanced mathematics in this course, and I hope you saw those as tools to facilitate learning rather than barriers to approaching the subject.
In addition to the theory, another major goal of this course was to show the application of quantum mechanics in nanoscale devices and physical systems, many of which you might encounter in your later studies. My attempt to match an application to the topic of each chapter was both to demonstrate the utility of the knowledge and strengthen your understanding. Quantum mechanics is truly ubiquitous, and focusing only on the theory would miss out on this exciting opportunity to demonstrate its manifestations in the real world, which is important for everyone to understand, especially materials scientists. From devices like the laser and the scanning tunneling microscope to particles like quantum dots and phonons, I hope you had as much fun learning about the applications as I had sharing them.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Further topics} \label{sec:further}
As I mentioned just now, this course serves only as a brief introduction to quantum mechanics---there's so much more left to explore! Because we surveyed a wide variety of topics, you're well-prepared for further studies in this field should you choose to do so. A non-exhaustive list of interesting and important topics include:
\begin{itemize}
\item \textbf{Quantum entanglement}. This physical phenomenon occurs when groups of particles have quantum states that cannot be described independently of the states of other particles. We saw this in the case of the quantum eraser when knowing the state of one photon allowed us to immediately know the state of the coupled pair. This strange coupling effect that holds across both space and time shocked the physics community when it was first proposed and it was dubbed ``spooky action at a distance'' by none other than Einstein himself.\footnote{Check out the following article by L. Sanders, \href{https://www.sciencenews.org/article/everyday-entanglement}{\emph{Science News}} \textbf{178}, 11 (2010).}
\item \textbf{Hydrogen atom}. The list of quantum mechanical systems with analytical solutions is short,\footnote{See \href{https://en.wikipedia.org/wiki/List_of_quantum-mechanical_systems_with_analytical_solutions}{Wikipedia} for a list of solvable systems.} but it notably features the hydrogen atom, making this a great model for developing quantum theory. Analyzing the hydrogen atom gives one a better understanding of the spin and angular momentum operators, multi-particle behavior, and quantization of energy levels where the spacing gets successively \emph{smaller} with increasing energy.
\item \textbf{Dirac equation}. Some of the finer details regarding the hydrogen atom can only be solved with this equation developed by Paul Dirac, which also implies the existence of antimatter. It combines elements of quantum mechanics and special relativity, and should be approachable after our introduction to QFT.
\item \textbf{Quantum computing}. This field studies the performance of quantum computers by leveraging superposition and entanglement to perform calculations using quantum bits (qubits). Unlike regular bits, which have a value of 0 \emph{or} 1, qubits can exist in a superposition state of both 0 \emph{and} 1. Though quantum computers are still in their infancy, they may be able to solve problems that are unfeasible on classical computers and even crack state-of-the-art encryption techniques.\footnote{For some general reading on quantum computing and security, see N. Kobie, \href{http://www.wired.co.uk/article/quantum-computers-quantum-security-encryption}{\emph{Wired}}, 2016.}
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Coursework}
There are also some really great courses here at Stanford that build on top of this course, covering both theoretical and applied concepts. \par
\begin{itemize}
\item \textbf{MATSCI 195} (with Jennifer Dionne) is a graduate-level course that takes many of the concepts in this course and applies them to waves and diffraction. You will definitely see a lot more of the math surrounding Fourier transforms, dispersion relations, and phonons.
\item \textbf{MATSCI 152} (with Jennifer Dionne) is another undergraduate-level course that applies quantum mechanics to electronic materials. You will learn a lot more about band gaps and semiconductors as well as electron transport through solids. \textbf{MATSCI 199} (with Mark Brongersma) is a graduate-level course that covers similar topics as 152 but with greater depth and rigor.
\item \textbf{MATSCI 331} (with Evan Reed) is a graduate-level course that focuses on using numerical computation to solve quantum mechanical problems. You will use computational methods to compute the electronic and optical properties of various interfaces and nanostructures.
\item I do not have knowledge of specific offerings in other departments, but the ubiquitous nature of quantum mechanics means that there are plenty of courses in Applied Physics (APPPHYS), Physics (PHYSICS), Electrical Engineering (EE), and Chemistry (CHEM) that touch upon these concepts, and it's good for students to get exposure to different schools of thinking.
\end{itemize}
%} % for doublespacing | {
"alphanum_fraction": 0.7878654649,
"avg_line_length": 144.4126984127,
"ext": "tex",
"hexsha": "f70436c68fe80470f3ce8b610f280ad8d96861a1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Enze-Chen/mse_142_cr",
"max_forks_repo_path": "tex/chapter_9.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Enze-Chen/mse_142_cr",
"max_issues_repo_path": "tex/chapter_9.tex",
"max_line_length": 802,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "a98585b32f26f6c189b96345d9cc1e9727156268",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Enze-Chen/mse_142_cr",
"max_stars_repo_path": "tex/chapter_9.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-13T17:08:24.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-13T17:08:24.000Z",
"num_tokens": 1892,
"size": 9098
} |
%% Current version: August 10, 1993
\typeout{Authors! Before beginning to work, please ftp the README file from}
\typeout{the directory /Kluwer/styles/journals at world.std.com}
\typeout{to see if you have the current version of this file!}
\typeout{This version is dated August 10, 1993}
\typeout{\space\space\space\space\space\space\space\space\space}
\typeout{\space\space\space\space\space\space\space\space\space}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Template File for Small Journal Format %%
%% Kluwer Academic Publishers %%
%% %%
%% Prepared by Amy Hendrickson, TeXnology Inc. %%
%% %%
%% Inquiries to Suzanne M. Rumsey, net address: [email protected] %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Use this for Computer Modern Fonts
\documentstyle{smjrnl}
\begin{document}
%% To be entered at Kluwers: ==>>
\journame{Small Journal Name}
\volnumber{??}
\issuenumber{??}
\issuemonth{??}
\volyear{??}
%%%% Issue Table of Contents %%%%
%% \begin{issuetoc}
%% \TOCarticle{<article title> }{<author or authors> }{<starting page no.> }
%% \end{issuetoc}
%%%% End of Issue Table of Contents %%%%
%% Individual article commands:
\begin{article}
\authorrunninghead{Author Name or Names}
\titlerunninghead{Article Title}
%\setcounter{page}{275} %% This command is optional.
%% May set page number only for first page in
%% issue, if desired.
%% <<== End of commands to be entered at Kluwers
%% Authors, start here ==>>
\title{ }
%% Author name and email address
%% Please enter your email address if you have one.
\authors{ }
\email{}
%% Affiliation address
\affil{ }
%% Article editor
\editor{ }
%% Abstract
\abstract{ }
%% Keywords
\keywords{}
\section{ }
%% End matter:
% \acknowledgements
% \appendix
% \notes
% \begin{references}
% \bibitem{xxx}
% \end{references}
%% Do not delete this! ===>>>
\end{article}
\end{document}
% Samples of commands you may use:
\section{ }
\subsection{ }
\subsubsection{ }
Our theory of the structure of the environment has been focused the
structure of living things (arguably, the largest portion of the
objects in the world) because of the aid biology gives in objectively
specifying the organization of these objects.
%% If you want to make a wide equation, precede \begin{equation}
%% with \begin{wideequation} and follow \end{equation} with
%% \end{wideequation}:
% \begin{wideequation}
% \begin{equation}
% (equation)
% \end{equation}
% \end{wideequation}
% In this wide equation the `array' command is used to split
% the math into two lines, moving the top half to the left
% and the bottom to the right.
% \begin{wideequation}
% \begin{equation}
% \begin{array}{lr}
% \sum_k P(k) \sum_i \sum_y f_i(y|k)^2\\
% &\sum_k P(k) \sum_i \sum_y f_i(y|k)^2
% \sum_k P(k) \sum_i \sum_y f_i(y|k)^2
% \end{array}
% \end{equation}
% \end{wideequation}
% To indent text, use the following commands:
% \begin{itemize}
% \item[]
% text...
% \end{itemize}
%% Algorithm for exhibiting code. Indent lines with one or more `\ '.
% \begin{algorithm}
% \ start line here
% \ \ indent line here
% \end{algorithm}
% Sample figure
% \begin{figure}[h]
% \vspace*{.5in}
% \caption{This is a figure caption.
% This is a figure caption.
% This is a figure caption.}
% \end{figure}
% Sample table, this kind of table preamble will spread
% table out to the width of the page:
% \begin{table}[h]
% \caption{This is an example table caption. As you can
% see, it will be as wide as the table that it captions.}
% \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcr}
% \hline
% $\alpha\beta\Gamma\Delta$ One&Two&Three\cr
% \hline
% one&two&three\cr
% one&two&three\cr
% \hline
% \end{tabular*}
% \end{table}
%% Make examples:
% \begin{example}
% text...
% \end{example}
%% Make theorems:
% \begin{proclaim}{Theorem <number>}
% text...
% \end{proclaim}
%% Make proof:
% \begin{proof}
% text...
% \end{proof}
% If proof ends with math, please use \inmathqed at the end of the
% equation:
% \begin{proof}
% ....
% \[
% \alpha\beta\Gamma\Delta\inmathqed
% \]
% \end{proof}
%% Proof with a title. Enter name of proof in square brackets:
% \begin{proof}[Proof of Theorem A.1]
% ...
% \end{proof}
%% Assumption or similar kind of environment
% \begin{demo}{Assumption <number>}
% text...
% \end{demo}
%% End Matter:
%%%%%%%
%% Acknowledgements here
% \acknowledgements
% text...
% Appendices:
% \appendix
%%%%%%%
%% Endnotes
%\notes
%%%%%%%
%% Make references as standard Latex.
%% for example:
%% Trying `cite', \cite{jacobs}, \cite{francis}.
% \begin{references}
% \bibitem{jacobs}Jacobs, E., ``Design Method Optimizes Scanning
% Phased Array,'' Microwaves, April 1982, pp.\ 69--70.
% \bibitem{francis} Francis, M., ``Out-of-band response of array antennas,''
% Antenna Meas. Tech. Proc., September 28--October 2, 1987, Seattle, p.~14.
% \end{references}
% Or, to make alphabetical references, with no number preceding entries:
% Maude Francis, (Francis, 87) showed important new results
% with array antennas.
% \begin{alphareferences}
% Francis, M., ``Out-of-band response of array
% antennas,'' Antenna Meas. Tech. Proc., September 28--October 2,
% 1987, Seattle, p.~14.
% Jacobs, E., ``Design Method Optimizes Scanning
% Phased Array,'' Microwaves, April 1982, pp.\ 69--70.
% \end{alphareferences}
% See smjrnl.doc for documentation on using Bibtex.
| {
"alphanum_fraction": 0.6339880744,
"avg_line_length": 22.7171314741,
"ext": "tex",
"hexsha": "05e9babcb655c6fb1d75468e8488f72183e577d2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "685d5143204e3292777d2f283fe3977a1737905b",
"max_forks_repo_licenses": [
"MIT-CMU"
],
"max_forks_repo_name": "ashok-khanna/cmu-ai-repository",
"max_forks_repo_path": "util/pubs/publishers/Kluwer/styles/journals/smjtmpl.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "685d5143204e3292777d2f283fe3977a1737905b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT-CMU"
],
"max_issues_repo_name": "ashok-khanna/cmu-ai-repository",
"max_issues_repo_path": "util/pubs/publishers/Kluwer/styles/journals/smjtmpl.tex",
"max_line_length": 76,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "685d5143204e3292777d2f283fe3977a1737905b",
"max_stars_repo_licenses": [
"MIT-CMU"
],
"max_stars_repo_name": "ashok-khanna/cmu-ai-repository",
"max_stars_repo_path": "util/pubs/publishers/Kluwer/styles/journals/smjtmpl.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1615,
"size": 5702
} |
\subsection{Singularities}
| {
"alphanum_fraction": 0.7931034483,
"avg_line_length": 7.25,
"ext": "tex",
"hexsha": "7075bb87fe6b1815a5224dabd9c8f1345cb4469a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/analysis/complexAnalysis/04-01-Singularities.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/analysis/complexAnalysis/04-01-Singularities.tex",
"max_line_length": 26,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/analysis/complexAnalysis/04-01-Singularities.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8,
"size": 29
} |
\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage[margin=1in,nohead,nofoot]{geometry}
\usepackage{wrapfig}
\usepackage{xcolor}
\usepackage{enumitem}
\setlist[itemize]{noitemsep}
\usepackage{hyperref}
\hypersetup{linkbordercolor=blue}
\usepackage{sidecap}
\usepackage{cite}
\usepackage{setspace}
\usepackage[backgroundcolor = blue!0,bordercolor = red]{todonotes}
\usepackage{xcolor}
\usepackage{amsmath}
\usepackage{fancyhdr}
\setlength{\headheight}{60pt}
\setlength{\topmargin}{-50pt}
\setlength{\headsep}{10pt}
\pagestyle{fancy}
\rhead{\textit{Robot to Camera Calibration}}
\lhead{}
\chead{}
\lfoot{March 3, 2016}
\cfoot{\thepage}
\rfoot{Georgia Tech - Michael Sobrepera}
\renewcommand{\headrulewidth}{0.1pt}
\renewcommand{\footrulewidth}{0.1pt}
\usepackage[font=footnotesize,labelfont=bf]{caption}
\usepackage{subcaption}
\setlength{\footskip}{20pt}
%Eat whitespace
\singlespacing
\usepackage[subtle]{savetrees}
\begin{document}
\section{Motivation}
When using a robot and a fixed camera together, it is often necessary to calibrate the camera's coordinate frame to that of the robot base. This allows what is seen in the camera to be turned into meaningful commands for the robot.
In order to do this, I propose to use a single grid, arbitrarily mounted to the robot past its final degree of freedom. By moving the grid to a number of different positions and capturing those positions kinematically from the robot and visually from the camera, a series of equations can be generated and then solved to determine the transformation from the camera to the robot.
\section{Setup}
In order to perform this procedure, the robot and camera should be fixed, with the camera viewing the robot. A checkerboard grid should be attached to the robot past the final joint; it is easiest to attach the grid to the tool flange. The grid should be asymmetrical and should have a white border.
\section{Math}
At the center of this problem is finding a transformation given a set of parallel transformations and many sample points. This can be seen in \autoref{fig:transformations}.
\begin{figure}
\centering
\def\svgwidth{\textwidth}
\input{transformatin_diagram.pdf_tex}
\caption{\textbf{Transformations}. The transformation $C$ can be measured by tracking the grid, so long as the camera intrinsics are well calibrated. The transformation $G$ is unknown; however, it does not change, as the grid is rigidly attached to the tool center point (TCP). The transformation $T$ can be gathered directly from the robot's forward kinematics. The goal is to solve for the transformation $R$, which is unknown but constant.}
\label{fig:transformations}
\end{figure}
Given \autoref{fig:transformations}, we can develop the math, using dual quaternions, to solve for $R$ given a number of matched pairs of samples of $T$ and $C$.
First, we redefine $T$ as a more useful version of itself, its conjugate:
\begin{equation}
T = T^*
\end{equation}
We then define the transformation from the camera to the TCP using dual quaternions, where a given dual quaternion $q$ can be decomposed into a real and a dual quaternion part, $q_r + q_d\epsilon$, which can be further decomposed into the individual elements $q_{r_w} + q_{r_x} i + q_{r_y} j + q_{r_z} k + (q_{d_w} + q_{d_x} i + q_{d_y} j + q_{d_z} k)\epsilon$.
We can now describe, in steps, the transformation from the camera to the UR Base. We begin by defining the transformation from the Camera to the Grid:
\begin{equation}
C_r G_r+(C_r G_d + C_d G_r)\epsilon
\end{equation}
We then extend this from the camera to the UR Base, which is equivalent to $R$:
\begin{equation}
R = C_r G_r T_r + (C_r G_r T_d + (C_r G_d + C_d G_r)T_r)\epsilon
\end{equation}
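Before expanding this product symbolically, the following sketch illustrates the same composition numerically. It is a minimal example and not part of the original derivation: it assumes NumPy is available, that a dual quaternion is stored as a pair of 4-element arrays $(q_r, q_d)$ in $(w, x, y, z)$ order, and the helper names are chosen only for this illustration.
\begin{verbatim}
import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def dualquat_mul(a, b):
    # (a_r + a_d e)(b_r + b_d e) = a_r b_r + (a_r b_d + a_d b_r) e,
    # since the dual unit e satisfies e^2 = 0
    a_r, a_d = a
    b_r, b_d = b
    return quat_mul(a_r, b_r), quat_mul(a_r, b_d) + quat_mul(a_d, b_r)

def compose_R(C, G, T):
    # R = (C G) T, mirroring the composition in the equation above
    return dualquat_mul(dualquat_mul(C, G), T)
\end{verbatim}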
We then expand this into its individual components using standard arithmetic:
\begin{equation}
\begin{split}
R_{r_w} = \\&
(C_{r_w} G_{r_w}-C_{r_x} G_{r_x}-C_{r_y} G_{r_y}-C_{r_z} G_{r_z}) T_{r_w}+\\&(C_{r_x} G_{r_w}+C_{r_w} G_{r_x}-C_{r_z} G_{r_y}+C_{r_y} G_{r_z}) T_{r_x}+\\&(C_{r_y} G_{r_w}+C_{r_z} G_{r_x}+C_{r_w} G_{r_y}-C_{r_x} G_{r_z}) T_{r_y}+\\&(C_{r_z} G_{r_w}-C_{r_y} G_{r_x}+C_{r_x} G_{r_y}+C_{r_w} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rrw}
\end{equation}
\begin{equation}
\begin{split}
R_{r_x} = \\&
(C_{r_x} G_{r_w}+C_{r_w} G_{r_x}-C_{r_z} G_{r_y}+C_{r_y} G_{r_z}) T_{r_w}-\\&(C_{r_w} G_{r_w}-C_{r_x} G_{r_x}-C_{r_y} G_{r_y}-C_{r_z} G_{r_z}) T_{r_x}+\\&(C_{r_z} G_{r_w}-C_{r_y} G_{r_x}+C_{r_x} G_{r_y}+C_{r_w} G_{r_z}) T_{r_y}-\\&(C_{r_y} G_{r_w}+C_{r_z} G_{r_x}+C_{r_w} G_{r_y}-C_{r_x} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rrx}
\end{equation}
\begin{equation}
\begin{split}
R_{r_y} = \\&
(C_{r_y} G_{r_w}+C_{r_z} G_{r_x}+C_{r_w} G_{r_y}-C_{r_x} G_{r_z}) T_{r_w}-\\&(C_{r_z} G_{r_w}-C_{r_y} G_{r_x}+C_{r_x} G_{r_y}+C_{r_w} G_{r_z}) T_{r_x}-\\&(C_{r_w} G_{r_w}-C_{r_x} G_{r_x}-C_{r_y} G_{r_y}-C_{r_z} G_{r_z}) T_{r_y}+\\&(C_{r_x} G_{r_w}+C_{r_w} G_{r_x}-C_{r_z} G_{r_y}+C_{r_y} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rry}
\end{equation}
\begin{equation}
\begin{split}
R_{r_z} = \\&
(C_{r_z} G_{r_w}-C_{r_y} G_{r_x}+C_{r_x} G_{r_y}+C_{r_w} G_{r_z}) T_{r_w}+\\&(C_{r_y} G_{r_w}+C_{r_z} G_{r_x}+C_{r_w} G_{r_y}-C_{r_x} G_{r_z}) T_{r_x}-\\&(C_{r_x} G_{r_w}+C_{r_w} G_{r_x}-C_{r_z} G_{r_y}+C_{r_y} G_{r_z}) T_{r_y}-\\&(C_{r_w} G_{r_w}-C_{r_x} G_{r_x}-C_{r_y} G_{r_y}-C_{r_z} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rrz}
\end{equation}
\begin{equation}
\begin{split}
R_{d_w} = \\&
(C_{r_w} G_{r_w} - C_{r_x} G_{r_x} - C_{r_y} G_{r_y} -
C_{r_z} G_{r_z}) T_{d_w} +\\& (C_{r_x} G_{r_w} +
C_{r_w} G_{r_x} - C_{r_z} G_{r_y} +
C_{r_y} G_{r_z}) T_{d_x} + \\&(C_{r_y} G_{r_w} +
C_{r_z} G_{r_x} + C_{r_w} G_{r_y} -
C_{r_x} G_{r_z}) T_{d_y} +\\& (C_{r_z} G_{r_w} -
C_{r_y} G_{r_x} + C_{r_x} G_{r_y} +
C_{r_w} G_{r_z}) T_{d_z} +\\& (C_{r_w} G_{d_w} -
C_{r_x} G_{d_x} - C_{r_y} G_{d_y} - C_{r_z} G_{d_z} +
C_{d_w} G_{r_w} - C_{d_x} G_{r_x} - C_{d_y} G_{r_y} -
C_{d_z} G_{r_z}) T_{r_w} -\\& (C_{r_x} G_{d_w} +
C_{r_w} G_{d_x} - C_{r_z} G_{d_y} + C_{r_y} G_{d_z} +
C_{d_x} G_{r_w} + C_{d_w} G_{r_x} - C_{d_z} G_{r_y} +
C_{d_y} G_{r_z}) T_{r_x} -\\& (C_{r_y} G_{d_w} +
C_{r_z} G_{d_x} + C_{r_w} G_{d_y} - C_{r_x} G_{d_z} +
C_{d_y} G_{r_w} + C_{d_z} G_{r_x} + C_{d_w} G_{r_y} -
C_{d_x} G_{r_z}) T_{r_y} - \\&(C_{r_z} G_{d_w} -
C_{r_y} G_{d_x} + C_{r_x} G_{d_y} + C_{r_w} G_{d_z} +
C_{d_z} G_{r_w} - C_{d_y} G_{r_x} + C_{d_x} G_{r_y} +
C_{d_w} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rdw}
\end{equation}
\begin{equation}
\begin{split}
R_{d_x} = \\&
(C_{r_x} G_{r_w} +
C_{r_w} G_{r_x} - C_{r_z} G_{r_y} +
C_{r_y} G_{r_z}) T_{d_w} -\\& (C_{r_w} G_{r_w} -
C_{r_x} G_{r_x} - C_{r_y} G_{r_y} -
C_{r_z} G_{r_z}) T_{d_x} +\\& (C_{r_z} G_{r_w} -
C_{r_y} G_{r_x} + C_{r_x} G_{r_y} +
C_{r_w} G_{r_z}) T_{d_y} - \\&(C_{r_y} G_{r_w} +
C_{r_z} G_{r_x} + C_{r_w} G_{r_y} -
C_{r_x} G_{r_z}) T_{d_z} +\\& (C_{r_x} G_{d_w} +
C_{r_w} G_{d_x} - C_{r_z} G_{d_y} + C_{r_y} G_{d_z} +
C_{d_x} G_{r_w} + C_{d_w} G_{r_x} - C_{d_z} G_{r_y} +
C_{d_y} G_{r_z}) T_{r_w} + \\&(C_{r_w} G_{d_w} -
C_{r_x} G_{d_x} - C_{r_y} G_{d_y} - C_{r_z} G_{d_z} +
C_{d_w} G_{r_w} - C_{d_x} G_{r_x} - C_{d_y} G_{r_y} -
C_{d_z} G_{r_z}) T_{r_x} - \\&(C_{r_z} G_{d_w} -
C_{r_y} G_{d_x} + C_{r_x} G_{d_y} + C_{r_w} G_{d_z} +
C_{d_z} G_{r_w} - C_{d_y} G_{r_x} + C_{d_x} G_{r_y} +
C_{d_w} G_{r_z}) T_{r_y} + \\&(C_{r_y} G_{d_w} +
C_{r_z} G_{d_x} + C_{r_w} G_{d_y} - C_{r_x} G_{d_z} +
C_{d_y} G_{r_w} + C_{d_z} G_{r_x} + C_{d_w} G_{r_y} -
C_{d_x} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rdx}
\end{equation}
\begin{equation}
\begin{split}
R_{d_y} = \\&
(C_{r_y} G_{r_w} +
C_{r_z} G_{r_x} + C_{r_w} G_{r_y} -
C_{r_x} G_{r_z}) T_{d_w} -\\& (C_{r_z} G_{r_w} -
C_{r_y} G_{r_x} + C_{r_x} G_{r_y} +
C_{r_w} G_{r_z}) T_{d_x} -\\& (C_{r_w} G_{r_w} -
C_{r_x} G_{r_x} - C_{r_y} G_{r_y} -
C_{r_z} G_{r_z}) T_{d_y} +\\& (C_{r_x} G_{r_w} +
C_{r_w} G_{r_x} - C_{r_z} G_{r_y} +
C_{r_y} G_{r_z}) T_{d_z} + \\&(C_{r_y} G_{d_w} +
C_{r_z} G_{d_x} + C_{r_w} G_{d_y} - C_{r_x} G_{d_z} +
C_{d_y} G_{r_w} + C_{d_z} G_{r_x} + C_{d_w} G_{r_y} -
C_{d_x} G_{r_z}) T_{r_w} +\\& (C_{r_z} G_{d_w} -
C_{r_y} G_{d_x} + C_{r_x} G_{d_y} + C_{r_w} G_{d_z} +
C_{d_z} G_{r_w} - C_{d_y} G_{r_x} + C_{d_x} G_{r_y} +
C_{d_w} G_{r_z}) T_{r_x} +\\& (C_{r_w} G_{d_w} -
C_{r_x} G_{d_x} - C_{r_y} G_{d_y} - C_{r_z} G_{d_z} +
C_{d_w} G_{r_w} - C_{d_x} G_{r_x} - C_{d_y} G_{r_y} -
C_{d_z} G_{r_z}) T_{r_y} -\\& (C_{r_x} G_{d_w} +
C_{r_w} G_{d_x} - C_{r_z} G_{d_y} + C_{r_y} G_{d_z} +
C_{d_x} G_{r_w} + C_{d_w} G_{r_x} - C_{d_z} G_{r_y} +
C_{d_y} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rdy}
\end{equation}
\begin{equation}
\begin{split}
R_{d_z} = \\&
(C_{r_z} G_{r_w} -
C_{r_y} G_{r_x} + C_{r_x} G_{r_y} +
C_{r_w} G_{r_z}) T_{d_w} + \\&(C_{r_y} G_{r_w} +
C_{r_z} G_{r_x} + C_{r_w} G_{r_y} -
C_{r_x} G_{r_z}) T_{d_x} - \\&(C_{r_x} G_{r_w} +
C_{r_w} G_{r_x} - C_{r_z} G_{r_y} +
C_{r_y} G_{r_z}) T_{d_y} - \\&(C_{r_w} G_{r_w} -
C_{r_x} G_{r_x} - C_{r_y} G_{r_y} -
C_{r_z} G_{r_z}) T_{d_z} + \\&(C_{r_z} G_{d_w} -
C_{r_y} G_{d_x} + C_{r_x} G_{d_y} + C_{r_w} G_{d_z} +
C_{d_z} G_{r_w} - C_{d_y} G_{r_x} + C_{d_x} G_{r_y} +
C_{d_w} G_{r_z}) T_{r_w} -\\& (C_{r_y} G_{d_w} +
C_{r_z} G_{d_x} + C_{r_w} G_{d_y} - C_{r_x} G_{d_z} +
C_{d_y} G_{r_w} + C_{d_z} G_{r_x} + C_{d_w} G_{r_y} -
C_{d_x} G_{r_z}) T_{r_x} +\\& (C_{r_x} G_{d_w} +
C_{r_w} G_{d_x} - C_{r_z} G_{d_y} + C_{r_y} G_{d_z} +
C_{d_x} G_{r_w} + C_{d_w} G_{r_x} - C_{d_z} G_{r_y} +
C_{d_y} G_{r_z}) T_{r_y} +\\& (C_{r_w} G_{d_w} -
C_{r_x} G_{d_x} - C_{r_y} G_{d_y} - C_{r_z} G_{d_z} +
C_{d_w} G_{r_w} - C_{d_x} G_{r_x} - C_{d_y} G_{r_y} -
C_{d_z} G_{r_z}) T_{r_z}
\end{split}
\label{eq:rdz}
\end{equation}
We now have eight equations which, put together, generate one unique equation set per sample. Given enough samples, and knowing that $R$ and $G$ never change, this system becomes solvable. To do this, we will reformat the equations into a computationally friendly matrix form.
Let us now arrange this into a matrix equation of the form:
\begin{equation}
\begin{bmatrix}
a
\end{bmatrix}
\begin{bmatrix}
coeff
\end{bmatrix}
=
\begin{bmatrix}
b
\end{bmatrix}
\label{eq:generalFormMatrix}
\end{equation}
\begin{equation}
\begin{bmatrix}
a_{0,0} & \cdots & a_{0,15} \\
\vdots & \ddots & \vdots \\
a_{n,0} & \cdots & a_{n,15}
\end{bmatrix}
\begin{bmatrix}
G_{r_w} \\
G_{r_x} \\
G_{r_y} \\
G_{r_z} \\
G_{d_w} \\
G_{d_x} \\
G_{d_y} \\
G_{d_z} \\
R_{r_w} \\
R_{r_x} \\
R_{r_y} \\
R_{r_z} \\
R_{d_w} \\
R_{d_x} \\
R_{d_y} \\
R_{d_z}
\end{bmatrix}
=
\begin{bmatrix}
0
\end{bmatrix}
\label{eq:generalMatrix}
\end{equation}
Each sample will generate a set of 8 equations ($R_{r_w}$, $R_{r_x}$, $R_{r_y}$, $R_{r_z}$, $R_{d_w}$, $R_{d_x}$, $R_{d_y}$, $R_{d_z}$) to add to the matrix. We group the equations by unknowns.
From \autoref{eq:rrw} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_w} T_{r_w}+C_{r_x} T_{r_x}+C_{r_y} T_{r_y}+C_{r_z} T_{r_z})+\\&
G_{r_x} (-C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z})+\\&
G_{r_y} (-C_{r_y} T_{r_w}-C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{r_z} (-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}+C_{r_w} T_{r_z})+\\&
R_{r_w}(-1)
\end{split}
\end{equation}
This in turn generates a row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_w} T_{r_w}+C_{r_x} T_{r_x}+C_{r_y} T_{r_y}+C_{r_z} T_{r_z}\\
-C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z}\\
-C_{r_y} T_{r_w}-C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}+C_{r_w} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rrx} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_y} (-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}-C_{r_w} T_{r_z})+\\&
G_{r_z} (C_{r_y} T_{r_w}+C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{r_w} (C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z})+\\&
G_{r_x} (C_{r_w} T_{r_w}+C_{r_x} T_{r_x}-C_{r_y} T_{r_y}-C_{r_z} T_{r_z})+\\&
R_{r_x}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}-C_{r_w} T_{r_z}\\
C_{r_y} T_{r_w}+C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z}\\
C_{r_w} T_{r_w}+C_{r_x} T_{r_x}-C_{r_y} T_{r_y}-C_{r_z} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rry} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_y} T_{r_w}-C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{r_x} (C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}+C_{r_w} T_{r_z})+\\&
G_{r_y} (C_{r_w} T_{r_w}-C_{r_x} T_{r_x}+C_{r_y} T_{r_y}-C_{r_z} T_{r_z})+\\&
G_{r_z} (-C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z})+\\&
R_{r_y}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_y} T_{r_w}-C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}+C_{r_w} T_{r_z}\\
C_{r_w} T_{r_w}-C_{r_x} T_{r_x}+C_{r_y} T_{r_y}-C_{r_z} T_{r_z}\\
-C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rrz} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}-C_{r_w} T_{r_z})+\\&
G_{r_x} (-C_{r_y} T_{r_w}+C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{r_y} (C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z})+\\&
G_{r_z} (C_{r_w} T_{r_w}-C_{r_x} T_{r_x}-C_{r_y} T_{r_y}+C_{r_z} T_{r_z})+\\&
R_{r_z}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}-C_{r_w} T_{r_z}\\
-C_{r_y} T_{r_w}+C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z}\\
C_{r_w} T_{r_w}-C_{r_x} T_{r_x}-C_{r_y} T_{r_y}+C_{r_z} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rdw} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_w} T_{d_w}+C_{r_x} T_{d_x}+C_{r_y} T_{d_y}+C_{r_z} T_{d_z}+C_{d_w} T_{r_w}+C_{d_x} T_{r_x}+C_{d_y} T_{r_y}+C_{d_z} T_{r_z})+\\&
G_{r_x} (-C_{r_x} T_{d_w}+C_{r_w} T_{d_x}+C_{r_z} T_{d_y}-C_{r_y} T_{d_z}-C_{d_x} T_{r_w}+C_{d_w} T_{r_x}+C_{d_z} T_{r_y}-C_{d_y} T_{r_z})+\\&
G_{r_y} (-C_{r_y} T_{d_w}-C_{r_z} T_{d_x}+C_{r_w} T_{d_y}+C_{r_x} T_{d_z}-C_{d_y} T_{r_w}-C_{d_z} T_{r_x}+C_{d_w} T_{r_y}+C_{d_x} T_{r_z})+\\&
G_{r_z} (-C_{r_z} T_{d_w}+C_{r_y} T_{d_x}-C_{r_x} T_{d_y}+C_{r_w} T_{d_z}-C_{d_z} T_{r_w}+C_{d_y} T_{r_x}-C_{d_x} T_{r_y}+C_{d_w} T_{r_z})+\\&
G_{d_w} (C_{r_w} T_{r_w}+C_{r_x} T_{r_x}+C_{r_y} T_{r_y}+C_{r_z} T_{r_z})+\\&
G_{d_x} (-C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z})+\\&
G_{d_y} (-C_{r_y} T_{r_w}-C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{d_z} (-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}+C_{r_w} T_{r_z})+\\&
R_{d_w}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_w} T_{d_w}+C_{r_x} T_{d_x}+C_{r_y} T_{d_y}+C_{r_z} T_{d_z}+C_{d_w} T_{r_w}+C_{d_x} T_{r_x}+C_{d_y} T_{r_y}+C_{d_z} T_{r_z}\\
-C_{r_x} T_{d_w}+C_{r_w} T_{d_x}+C_{r_z} T_{d_y}-C_{r_y} T_{d_z}-C_{d_x} T_{r_w}+C_{d_w} T_{r_x}+C_{d_z} T_{r_y}-C_{d_y} T_{r_z}\\
-C_{r_y} T_{d_w}-C_{r_z} T_{d_x}+C_{r_w} T_{d_y}+C_{r_x} T_{d_z}-C_{d_y} T_{r_w}-C_{d_z} T_{r_x}+C_{d_w} T_{r_y}+C_{d_x} T_{r_z}\\
-C_{r_z} T_{d_w}+C_{r_y} T_{d_x}-C_{r_x} T_{d_y}+C_{r_w} T_{d_z}-C_{d_z} T_{r_w}+C_{d_y} T_{r_x}-C_{d_x} T_{r_y}+C_{d_w} T_{r_z}\\
C_{r_w} T_{r_w}+C_{r_x} T_{r_x}+C_{r_y} T_{r_y}+C_{r_z} T_{r_z}\\
-C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z}\\
-C_{r_y} T_{r_w}-C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}+C_{r_w} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rdx} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_x} T_{d_w}-C_{r_w} T_{d_x}+C_{r_z} T_{d_y}-C_{r_y} T_{d_z}+C_{d_x} T_{r_w}-C_{d_w} T_{r_x}+C_{d_z} T_{r_y}-C_{d_y} T_{r_z})+\\&
G_{r_x} (C_{r_w} T_{d_w}+C_{r_x} T_{d_x}-C_{r_y} T_{d_y}-C_{r_z} T_{d_z}+C_{d_w} T_{r_w}+C_{d_x} T_{r_x}-C_{d_y} T_{r_y}-C_{d_z} T_{r_z})+\\&
G_{r_y} (-C_{r_z} T_{d_w}+C_{r_y} T_{d_x}+C_{r_x} T_{d_y}-C_{r_w} T_{d_z}-C_{d_z} T_{r_w}+C_{d_y} T_{r_x}+C_{d_x} T_{r_y}-C_{d_w} T_{r_z})+\\&
G_{r_z} (C_{r_y} T_{d_w}+C_{r_z} T_{d_x}+C_{r_w} T_{d_y}+C_{r_x} T_{d_z}+C_{d_y} T_{r_w}+C_{d_z} T_{r_x}+C_{d_w} T_{r_y}+C_{d_x} T_{r_z})+\\&
G_{d_w} (C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z})+\\&
G_{d_x} (C_{r_w} T_{r_w}+C_{r_x} T_{r_x}-C_{r_y} T_{r_y}-C_{r_z} T_{r_z})+\\&
G_{d_y} (-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}-C_{r_w} T_{r_z})+\\&
G_{d_z} (C_{r_y} T_{r_w}+C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
R_{d_x}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_x} T_{d_w}-C_{r_w} T_{d_x}+C_{r_z} T_{d_y}-C_{r_y} T_{d_z}+C_{d_x} T_{r_w}-C_{d_w} T_{r_x}+C_{d_z} T_{r_y}-C_{d_y} T_{r_z}\\
C_{r_w} T_{d_w}+C_{r_x} T_{d_x}-C_{r_y} T_{d_y}-C_{r_z} T_{d_z}+C_{d_w} T_{r_w}+C_{d_x} T_{r_x}-C_{d_y} T_{r_y}-C_{d_z} T_{r_z}\\
-C_{r_z} T_{d_w}+C_{r_y} T_{d_x}+C_{r_x} T_{d_y}-C_{r_w} T_{d_z}-C_{d_z} T_{r_w}+C_{d_y} T_{r_x}+C_{d_x} T_{r_y}-C_{d_w} T_{r_z}\\
C_{r_y} T_{d_w}+C_{r_z} T_{d_x}+C_{r_w} T_{d_y}+C_{r_x} T_{d_z}+C_{d_y} T_{r_w}+C_{d_z} T_{r_x}+C_{d_w} T_{r_y}+C_{d_x} T_{r_z}\\
C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}-C_{r_y} T_{r_z}\\
C_{r_w} T_{r_w}+C_{r_x} T_{r_x}-C_{r_y} T_{r_y}-C_{r_z} T_{r_z}\\
-C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}-C_{r_w} T_{r_z}\\
C_{r_y} T_{r_w}+C_{r_z} T_{r_x}+C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rdy} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_y} T_{d_w}-C_{r_z} T_{d_x}-C_{r_w} T_{d_y}+C_{r_x} T_{d_z}+C_{d_y} T_{r_w}-C_{d_z} T_{r_x}-C_{d_w} T_{r_y}+C_{d_x} T_{r_z})+\\&
G_{r_x} (C_{r_z} T_{d_w}+C_{r_y} T_{d_x}+C_{r_x} T_{d_y}+C_{r_w} T_{d_z}+C_{d_z} T_{r_w}+C_{d_y} T_{r_x}+C_{d_x} T_{r_y}+C_{d_w} T_{r_z})+\\&
G_{r_y} (C_{r_w} T_{d_w}-C_{r_x} T_{d_x}+C_{r_y} T_{d_y}-C_{r_z} T_{d_z}+C_{d_w} T_{r_w}-C_{d_x} T_{r_x}+C_{d_y} T_{r_y}-C_{d_z} T_{r_z})+\\&
G_{r_z} (-C_{r_x} T_{d_w}-C_{r_w} T_{d_x}+C_{r_z} T_{d_y}+C_{r_y} T_{d_z}-C_{d_x} T_{r_w}-C_{d_w} T_{r_x}+C_{d_z} T_{r_y}+C_{d_y} T_{r_z})+\\&
G_{d_w} (C_{r_y} T_{r_w}-C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{d_x} (C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}+C_{r_w} T_{r_z})+\\&
G_{d_y} (C_{r_w} T_{r_w}-C_{r_x} T_{r_x}+C_{r_y} T_{r_y}-C_{r_z} T_{r_z})+\\&
G_{d_z} (-C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z})+\\&
R_{d_y}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_y} T_{d_w}-C_{r_z} T_{d_x}-C_{r_w} T_{d_y}+C_{r_x} T_{d_z}+C_{d_y} T_{r_w}-C_{d_z} T_{r_x}-C_{d_w} T_{r_y}+C_{d_x} T_{r_z}\\
C_{r_z} T_{d_w}+C_{r_y} T_{d_x}+C_{r_x} T_{d_y}+C_{r_w} T_{d_z}+C_{d_z} T_{r_w}+C_{d_y} T_{r_x}+C_{d_x} T_{r_y}+C_{d_w} T_{r_z}\\
C_{r_w} T_{d_w}-C_{r_x} T_{d_x}+C_{r_y} T_{d_y}-C_{r_z} T_{d_z}+C_{d_w} T_{r_w}-C_{d_x} T_{r_x}+C_{d_y} T_{r_y}-C_{d_z} T_{r_z}\\
-C_{r_x} T_{d_w}-C_{r_w} T_{d_x}+C_{r_z} T_{d_y}+C_{r_y} T_{d_z}-C_{d_x} T_{r_w}-C_{d_w} T_{r_x}+C_{d_z} T_{r_y}+C_{d_y} T_{r_z}\\
C_{r_y} T_{r_w}-C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
C_{r_z} T_{r_w}+C_{r_y} T_{r_x}+C_{r_x} T_{r_y}+C_{r_w} T_{r_z}\\
C_{r_w} T_{r_w}-C_{r_x} T_{r_x}+C_{r_y} T_{r_y}-C_{r_z} T_{r_z}\\
-C_{r_x} T_{r_w}-C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1 \\ 0
\end{bmatrix}^T
\end{equation}
From \autoref{eq:rdz} we generate:
\begin{equation}
\begin{split}
0= \\&
G_{r_w} (C_{r_z} T_{d_w}+C_{r_y} T_{d_x}-C_{r_x} T_{d_y}-C_{r_w} T_{d_z}+C_{d_z} T_{r_w}+C_{d_y} T_{r_x}-C_{d_x} T_{r_y}-C_{d_w} T_{r_z})+\\&
G_{r_x} (-C_{r_y} T_{d_w}+C_{r_z} T_{d_x}-C_{r_w} T_{d_y}+C_{r_x} T_{d_z}-C_{d_y} T_{r_w}+C_{d_z} T_{r_x}-C_{d_w} T_{r_y}+C_{d_x} T_{r_z})+\\&
G_{r_y} (C_{r_x} T_{d_w}+C_{r_w} T_{d_x}+C_{r_z} T_{d_y}+C_{r_y} T_{d_z}+C_{d_x} T_{r_w}+C_{d_w} T_{r_x}+C_{d_z} T_{r_y}+C_{d_y} T_{r_z})+\\&
G_{r_z} (C_{r_w} T_{d_w}-C_{r_x} T_{d_x}-C_{r_y} T_{d_y}+C_{r_z} T_{d_z}+C_{d_w} T_{r_w}-C_{d_x} T_{r_x}-C_{d_y} T_{r_y}+C_{d_z} T_{r_z})+\\&
G_{d_w} (C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}-C_{r_w} T_{r_z})+\\&
G_{d_x} (-C_{r_y} T_{r_w}+C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z})+\\&
G_{d_y} (C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z})+\\&
G_{d_z} (C_{r_w} T_{r_w}-C_{r_x} T_{r_x}-C_{r_y} T_{r_y}+C_{r_z} T_{r_z})+\\&
R_{d_z}(-1)
\end{split}
\end{equation}
This in turn generates another row of the $a$ matrix in \autoref{eq:generalMatrix}:
\begin{equation}
\begin{bmatrix}
C_{r_z} T_{d_w}+C_{r_y} T_{d_x}-C_{r_x} T_{d_y}-C_{r_w} T_{d_z}+C_{d_z} T_{r_w}+C_{d_y} T_{r_x}-C_{d_x} T_{r_y}-C_{d_w} T_{r_z}\\
-C_{r_y} T_{d_w}+C_{r_z} T_{d_x}-C_{r_w} T_{d_y}+C_{r_x} T_{d_z}-C_{d_y} T_{r_w}+C_{d_z} T_{r_x}-C_{d_w} T_{r_y}+C_{d_x} T_{r_z}\\
C_{r_x} T_{d_w}+C_{r_w} T_{d_x}+C_{r_z} T_{d_y}+C_{r_y} T_{d_z}+C_{d_x} T_{r_w}+C_{d_w} T_{r_x}+C_{d_z} T_{r_y}+C_{d_y} T_{r_z}\\
C_{r_w} T_{d_w}-C_{r_x} T_{d_x}-C_{r_y} T_{d_y}+C_{r_z} T_{d_z}+C_{d_w} T_{r_w}-C_{d_x} T_{r_x}-C_{d_y} T_{r_y}+C_{d_z} T_{r_z}\\
C_{r_z} T_{r_w}+C_{r_y} T_{r_x}-C_{r_x} T_{r_y}-C_{r_w} T_{r_z}\\
-C_{r_y} T_{r_w}+C_{r_z} T_{r_x}-C_{r_w} T_{r_y}+C_{r_x} T_{r_z}\\
C_{r_x} T_{r_w}+C_{r_w} T_{r_x}+C_{r_z} T_{r_y}+C_{r_y} T_{r_z}\\
C_{r_w} T_{r_w}-C_{r_x} T_{r_x}-C_{r_y} T_{r_y}+C_{r_z} T_{r_z}\\
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ -1
\end{bmatrix}^T
\end{equation}
These rows can then be put into a solver, such as \texttt{numpy.linalg.lstsq}, with the $b$ matrix set to a single column of all zeros, equal in length to the height of the resulting $a$ matrix.
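As an illustration of how the system might be assembled and solved in practice, the sketch below builds the row corresponding to \autoref{eq:rrw} for each sample, stacks the rows into the $a$ matrix, and extracts the 16-element unknown vector. It is a minimal example under the assumption that each sample supplies $C$ and $T$ as 8-element arrays ordered $(r_w, r_x, r_y, r_z, d_w, d_x, d_y, d_z)$; the remaining seven row types would be added in the same way. Because the right-hand side is all zeros, the non-trivial solution is recovered here from the singular vector with the smallest singular value, which plays the same role as the least-squares solve described above; the function names are illustrative only.
\begin{verbatim}
import numpy as np

def row_rrw(C, T):
    # Row of the `a` matrix generated from the R_rw equation.
    # Columns are ordered (G_r, G_d, R_r, R_d), matching the unknown vector.
    Crw, Crx, Cry, Crz = C[:4]
    Trw, Trx, Try, Trz = T[:4]
    row = np.zeros(16)
    row[0] =  Crw*Trw + Crx*Trx + Cry*Try + Crz*Trz   # coefficient of G_rw
    row[1] = -Crx*Trw + Crw*Trx + Crz*Try - Cry*Trz   # coefficient of G_rx
    row[2] = -Cry*Trw - Crz*Trx + Crw*Try + Crx*Trz   # coefficient of G_ry
    row[3] = -Crz*Trw + Cry*Trx - Crx*Try + Crw*Trz   # coefficient of G_rz
    row[8] = -1.0                                     # coefficient of R_rw
    return row

def solve_calibration(samples):
    # samples: list of (C, T) pairs; add the other seven row types analogously.
    a = np.vstack([row_rrw(C, T) for C, T in samples])
    # Homogeneous system a x = 0: the right singular vector belonging to the
    # smallest singular value gives the (scaled) non-trivial solution.
    _, _, vt = np.linalg.svd(a)
    return vt[-1]
\end{verbatim}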
\end{document} | {
"alphanum_fraction": 0.592672688,
"avg_line_length": 47.166,
"ext": "tex",
"hexsha": "86e95d25cd2d388abfb5d68989c8754b7b672aa5",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2022-01-26T02:31:37.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-01-26T02:31:37.000Z",
"max_forks_repo_head_hexsha": "74520563ad5a729f3289a11fc6f17f351018177f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ancy13428281619/robot2camera-calibration",
"max_forks_repo_path": "maths/robot2cam_calib.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74520563ad5a729f3289a11fc6f17f351018177f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ancy13428281619/robot2camera-calibration",
"max_issues_repo_path": "maths/robot2cam_calib.tex",
"max_line_length": 444,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "74520563ad5a729f3289a11fc6f17f351018177f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mjsobrep/robot2camera-calibration",
"max_stars_repo_path": "maths/robot2cam_calib.tex",
"max_stars_repo_stars_event_max_datetime": "2019-04-04T01:43:47.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-04T01:43:47.000Z",
"num_tokens": 12326,
"size": 23583
} |
\documentclass{article}
\begin{document}
\pagenumbering{gobble}
\section{Hello, World!}
This is \LaTeX!
\end{document} | {
"alphanum_fraction": 0.7711864407,
"avg_line_length": 19.6666666667,
"ext": "tex",
"hexsha": "ed2d703e6232165920e59eac23ddb0cddac3c396",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-01-14T02:42:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-02T00:15:44.000Z",
"max_forks_repo_head_hexsha": "f9d86ecbf1ab1b59668bee8af6c3709daa26c481",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "classabbyamp/rtex",
"max_forks_repo_path": "tests/samples/basic/minimal.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "f9d86ecbf1ab1b59668bee8af6c3709daa26c481",
"max_issues_repo_issues_event_max_datetime": "2021-01-21T01:59:45.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-11-15T21:10:36.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "classabbyamp/rtex",
"max_issues_repo_path": "tests/samples/basic/minimal.tex",
"max_line_length": 23,
"max_stars_count": 11,
"max_stars_repo_head_hexsha": "f9d86ecbf1ab1b59668bee8af6c3709daa26c481",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "classabbyamp/rtex",
"max_stars_repo_path": "tests/samples/basic/minimal.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-29T10:29:22.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-07-09T07:14:43.000Z",
"num_tokens": 38,
"size": 118
} |
% !Mode:: "TeX:UTF-8"
\chapter{Conclusion}
In this paper, we focused on the topic of accelerating FDTD with hardware. First, we discussed accelerating FDTD with a built-in CPU component, the VP; both the theory and a practical scheme for using the VP were discussed in detail. Furthermore, we modified the traditional data-parallelism scheme by removing some unnecessary discrete field points. To test the advantage of our modified scheme, the 2D FDTD in TM mode was taken as an example, and after the experiments we analyzed the profiling reports. We then did the same for executing FDTD with CUDA. The conclusion we obtained is that the modified data-parallelism scheme is better than the traditional scheme; however, compared to CUDA, it is still very inefficient. CUDA is a powerful source of computational power waiting to be utilized in the future.
Nevertheless, there are still some aspects that could be enhanced, for example, using constant memory to store some constants and dividing the simulation area into sub-areas.
Through the research on this topic, I have learned a great deal about the FDTD algorithm, CUDA, and vector processors, and gained valuable experience in dealing with a project of complex organization and large size. I also learned the practice of doing research, including how to collect previous research contributions, find points to enhance, and examine new ideas, which is a solid foundation for future growth. | {
"alphanum_fraction": 0.8063186813,
"avg_line_length": 161.7777777778,
"ext": "tex",
"hexsha": "9b1c0070d1ac16a09a832c512344051b8b1e2e02",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "445351447c95a48b5f8af4b1081c3dcf0018045c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "obserthinker/bachelorgraduatethesis",
"max_forks_repo_path": "latex-en/chapters/chapter5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "445351447c95a48b5f8af4b1081c3dcf0018045c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "obserthinker/bachelorgraduatethesis",
"max_issues_repo_path": "latex-en/chapters/chapter5.tex",
"max_line_length": 820,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "445351447c95a48b5f8af4b1081c3dcf0018045c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "obserthinker/bachelorgraduatethesis",
"max_stars_repo_path": "latex-en/chapters/chapter5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 291,
"size": 1456
} |
\documentclass[bigger]{beamer}
\input{header-beam} % change to header-handout for handouts
% ====================
\title[Lecture 15]{Logic I F13 Lecture 15}
\date{Hallowe'en 2013}
% ====================
\include{header}
\setlength{\fitchprfwidth}{5em}
\section{Multiple Uses of $\forall$, $\exists$}
\subsec{Multiple Uses of $\forall$}{
\bits
\item $\sf \forall x\forall y\, A(x, y)$ \dots every pair $\langle \alpha,\beta\rangle$ satisfies A(x, y)
\item NB: $\sf \forall x\forall y\, A(x, y)$ true only if pairs $\langle \alpha, \alpha\rangle$ satisfy $\sf A(x, y)$
\item Does $\sf\forall x\forall y\, Adjoins(x, y)$ mean ``any \emph{two} objects adjoin each other''?
\item \dots true iff any pair $\langle\alpha, \beta\rangle$ satisfies $\sf Adjoins(x, y)$
\item \emph{including pairs where $\alpha = \beta$}
\item Hence, $\sf\forall x\forall y\, Adjoins(x, y)$ is \emph{always false} since no block adjoins itself!
\item Correct: $\sf \forall x\forall y(x \neq y \to Adjoins(x, y))$
\eit
}
\subsec{Multiple Uses of $\exists$}{
\bits
\item $\sf \exists x\exists y\, A(x, y)$ \dots at least one pair $\langle \alpha,\beta\rangle$ satisfies A(x, y)
\item NB: $\sf \exists x\exists y\, A(x, y)$ is already true even if a single pair $\langle \alpha, \alpha\rangle$ satisfies $\sf A(x, y)$
\item Does $\sf\exists x\exists y\, SameRow(x, y)$ mean ``there are \emph{two} objects in the same row''?
\item \dots true iff at least one pair $\langle\alpha, \beta\rangle$ satisfies $\sf SameRow(x, y)$
\item \emph{including pairs where $\alpha = \beta$}
\item Hence, $\sf\exists x\exists y\, SameRow(x, y)$ is \emph{always true} since any block is in the same row as itself!
\item Correct: $\sf \exists x\exists y(x \neq y \land SameRow(x, y))$
\eit
}
\subsec{Restricted Quantification: All}{
\bit
\item All cubes are left of all tetrahedra.
\item Any pair $\langle \alpha, \beta\rangle$ where $\alpha$ is a cube and $\beta$ is a tetrahedron is so that $\alpha$ is left of $\beta$.\pauses
\[\sf
\forall x\forall y((Cube(x) \land Tet(y)) \to LeftOf(x, y))
\]
\eit
}
\subsec{Restricted Quantification: Some}{
\bit
\item A cube is left of a tetrahedron.
\item At least one pair $\langle \alpha, \beta\rangle$ where $\alpha$ is a cube and $\beta$ is a tetrahedron is so that x is left of y.\pauses
\[\sf
\exists x\exists y(Cube(x) \land Tet(y) \land LeftOf(x, y))
\]
\eit
}
\subsec{Restricted Quantification: No}{
\bit
\item No two cubes are the same size.
\item Any pair $\langle \alpha, \beta\rangle$ where $\alpha$ and $\beta$ are different cubes is so that $\alpha$ is not the same size as $\beta$.\pauses
\[\sf
\forall x\forall y((Cube(x) \land Cube(y) \land x \neq y) \to \lnot SameSize(x, y))
\]
\item There is no pair $\langle \alpha, \beta\rangle$ where $\alpha$ and $\beta$ are different cubes of the same size.\pauses
\[\sf
\lnot \exists x\exists y(Cube(x) \land Cube(y) \land x \neq y \land SameSize(x, y))
\]
\eit
}
\section{Multiple Quantification}
\subsec{Alternating Quantifiers}{
\ben
\item $\forall x \forall y \, Hates(x, y)$
\bit\items{2-} Everyone hates everyone
\eit
\item $\exists y \exists x \, Hates(x, y)$
\bit\items{3-} Someone hates someone
\eit
\item $\forall x \exists y \, Hates(x, y)$
\bit\items{4-} Everyone hates someone
\eit
\item $\forall y \exists x \, Hates(x, y)$
\bit\items{5-} Everyone is hated by someone
\eit
\item $\exists x \forall y \, Hates(x, y)$
\bit\items{6-} Someone hates everyone
\eit
\item $\exists y \forall x \, Hates(x, y)$
\bit\items{7-} Someone is hated by everyone
\eit
\een
}
\subsec{Something/Everything Else}{
\bit
\item Remember: different variables $\neq$ different objects
\item Everyone hates someone \emph{else}:
\[\sf
\forall x\exists y(x \neq y \land Hates(x, y))
\]
\item Someone hates everyone \emph{else}:
\[\sf
\exists x\forall y(x \neq y \to Hates(x, y))
\]
\eit
}
\subsec{Continuity vs. Uniform Continuity}{
\bit
\item A function $f$ is \emph{pointwise continuous} if
\[
\forall \epsilon\forall x\exists \delta\forall y(\left|x - y\right| < \delta \to \left|f(x) - f(y)\right| < \epsilon)
\]
\item A function $f$ is \emph{uniformly continuous} if
\[
\forall \epsilon\exists \delta\forall x\forall y(\left|x - y\right| < \delta \to \left|f(x) - f(y)\right| < \epsilon)
\]
\eit
}
\subsec{Mary Astell, 1666--1731}{
\begin{columns}
\begin{column}{3cm}
\pgfimage[height=4cm]{astell}
\end{column}
\begin{column}{7cm}
\bit
\item British political philosopher
\item \textit{Some Reflections upon Marriage, Occasion'd by the Duke and Duchess of Mazarine's Case; which is also considered} (1700)
\item In preface to 3rd ed. 1706 reacts to William Nicholls' claim (in \textit{The Duty of Inferiors
towards their Superiors, in Five Practical Discourses} (London 1701), Discourse IV: The Duty of Wives to their
Husbands), that women are naturally inferior to men.
\eit
\end{column}
\end{columns}
}
\subsec{Astell Taking Down Nicholls}{
'Tis true, thro' Want of Learning, and of that Superior Genius which
Men as Men lay claim to, she [the author] was ignorant of the
\textit{Natural Inferiority} of our Sex, which our Masters lay down as
a Self-Evident and Fundamental Truth. She saw nothing in the Reason of
Things, to make this either a Principle or a Conclusion, but much to
the contrary; it being Sedition at least, if not Treason to assert it
in this Reign.
}
\subsec{Astell Taking Down Nicholls}{
For if by the Natural Superiority of their Sex, they
mean that \textit{every} Man is by Nature superior to \textit{every}
Woman, which is the obvious meaning, and that which must be stuck to
if they would speak Sense, it wou'd be a Sin in \textit{any} Woman to
have Dominion over \textit{any} Man, and the greatest Queen ought not
to command but to obey her Footman, because no Municipal Laws can
supersede or change the Law of Nature; so that if the Dominion of the
Men be such, the \textit{Salique Law,} as unjust as \textit{English
Men} have ever thought it, ought to take place over all the Earth,
and the most glorious Reigns in the \textit{English, Danish,
Castilian}, and other Annals, were wicked Violations of the Law of
Nature!
}
\subsec{Astell Taking Down Nicholls}{
If they mean that \textit{some} Men are superior to \textit{some}
Women this is no great Discovery; had they turn'd the Tables they
might have seen that \textit{some} Women are Superior to \textit{some}
Men. Or had they been pleased to remember their Oaths of Allegiance
and Supremacy, they might have known that \textit{One} Women is
superior to \textit{All} the Men in these Nations, or else they have
sworn to very little purpose. And it must not be suppos'd, that their
Reason and Religion wou'd suffer them to take Oaths, contrary to the
Laws of Nature and Reason of things.
\bigskip
\begin{raggedleft}\small
(Mary Astell, \textit{Reflections upon Marriage}, 1706 Preface,
iii--iv, and Mary Astell, \textit{Political Writings}, ed. Patricia
Springborg, Cambridge University Press, 1996, 9--10)
\end{raggedleft}
}
\section{Translating Step-by-Step}
\subsec{Step-by-Step Method of Translation}{
\bit
\item What if your sentence contains more than one determiner phrase?
\item Deal with each DP separately
\item Think of DP as replaced with name or variable---result has one less DP
\item When you're down to one DP, apply known methods for single quantifiers
\item This results in wffs that express properties or relations, but themselves contain quantifiers
\eit
}
\subsec{Example}{
\bit
\item {\color{red}All cubes} are left of {\color{blue}a tetrahedon}
\item {\color{red}All cubes} satisfy ``x is left of {\color{blue}a tetrahedon}'' \[
\sf {\color{red}\forall x(Cube(x) \to{}} \text{``x is left of {\color{blue}a tetrahedron}''})
\]
\item x is left of {\color{blue}a tetrahedron}\[
\sf {\color{blue}\exists y(Tet(y) \land{}} LeftOf(x, y))
\]
\item Together:
\[
\sf{\color{red}\forall x(Cube(x) \to {}} {\color{blue}\exists y(Tet(y) \land{}} LeftOf(x, y)))
\]
\eit
}
\subsec{Determiner within Determiner Phrase}{
\bit
\item {\color{red}All cubes that adjoin {\color{blue} a tet}} are large
\item All blocks that satisfy ``x is a cube that adjoins {\color{blue}a tet}'' are large
\[
\sf \forall x (\text{``x is a cube that adjoins a tet''} \to Large(x))
\]
\item x is a cube that adjoins {\color{blue}a tet}
\[
\sf Cube(x) \land \exists y(Tet(y) \land Adjoins(x, y))
\]
\item Together:
\[
\sf\forall x((Cube(x) \land \exists y(Tet(y) \land Adjoins(x, y))) \to Large(x))
\]
\eit
}
\subsec{Formalizing Astell}{
\bit
\item {\color{red} Some woman} is superior to {\color{blue} every man}
\item {\color{red} Some woman} satisfies ``x is superior to every man''
\[\sf{\color{red}\exists x(Woman(x) \land {}}\text{``x is superior to every man''})\]
\item x is superior to {\color{blue} every man}
\[\sf
{\color{blue}\forall y(Man(y) \to {}}Superior(x, y))
\]
\item Together:
\[\sf
{\color{red}\exists x(Woman(x) \land{}} {\color{blue}\forall y(Man(y) \to {}}Superior(x, y))
\]
\eit
}
\subsec{Formalizing Astell}{
\bits
\item Some woman is superior to some man
\item[] \(
\sf
\exists x(Woman(x) \land \exists y(Man(y) \land Superior(x, y)))
\)
\item Every woman is superior to every man
\item[] \(
\sf
\forall x(Woman(x) \to \forall y(Man(y) \to Superior(x, y)))
\)
\item Every woman is superior to some man
\item[]\(
\sf
\forall x(Woman(x) \to \exists y(Man(y) \land Superior(x, y)))
\)
\item Some woman is superior to every man
\item[] \(
\sf
\exists x(Woman(x) \land \forall y(Man(y) \to Superior(x, y)))
\)
\eit
}
\section{Expressive Power of Quantifiers}
\subsec{Expressing Complex Properties and Relations}{
\bit
\item Language for familial relations:
\bit
\item Parent(x, y) \dots x is a parent of y
\item F(x) \dots x is female
\eit
\item Easy properties:
\bit
\item M(x) \dots x is male
\bit\items{2-} $\sf\lnot F(x)$\eit
\item Father(x, y) \dots x is father of y
\bit\items{3-} $\sf Parent(x, y) \land \lnot F(x)$\eit
\eit
\eit
}
\subsec{Quantifiers and Expressive Power}{
\bit
\item Sibling(x, y) \dots x and y are siblings
\bit\items{2-} $\sf x \neq y \land \exists z (Parent(z, x) \land Parent(z, y))$\eit
\item OnlyChild(x) \dots x is an only child
\bit\items{3-} $\sf \lnot \exists y\, Sibling(x, y)$, i.e.,
\items{4-} $\sf \lnot\exists y(x \neq y \land \exists z (Parent(z, x) \land Parent(z, y)))$ \eit
\item Aunt(x, y) \dots x is y's aunt
\bit\items{5-} $\sf F(x) \land \exists z (Parent(z, y) \land Sibling(x, z))$, ie,
\items{6-} $\sf F(x) \land \exists z (Parent(z, y) \land {}$\\$\sf\quad x \neq z \land \exists w(Parent(w, x) \land Parent(w, z))$\eit
\eit}
\subsec{Quantifiers and Expressive Power}{
\bit
\item x is even\dots\pauses
\begin{align*}
& \exists y\, (y \times (1 + 1)) = x\\
& \exists y\, (y+ y) = x
\end{align*}
\item x evenly divides y ($x \mid y$)\pauses
\[
\exists z\, (x \times z) = y
\]
\eit
}
\subsec{Expressing ``Prime''}{
\bit
\item no numbers other than 1 and $x$ evenly divide $x$\\ (and $x$ is not 1)
\begin{align*}
\uncover<2->{\lnot\exists y&(y \neq 1 \land y \neq x \land y \mid x) \land x \neq 1} \\
\uncover<4->{\lnot\exists y&(y \neq 1 \land y \neq x \land \exists z\,(y \times z) = x) \land x \neq 1} \\
\uncover<3->{\forall y&(y \mid x \to (y = 1 \lor y = x)) \land x \neq 1}\\
\uncover<4->{\forall y&(\exists z (y \times z) = x \to (y = 1 \lor y = x)) \land x \neq 1}
\end{align*}
\item if $x \mid (y \times z)$, then $x \mid y$ or $x \mid z$ (and $x$ is neither 0 nor 1)
\begin{align*}
\uncover<6->{\forall y\forall z(x \mid (y \times z) \to (x \mid y \lor x \mid z)) \land x \neq 0 \land x \neq 1}\\
\uncover<7->{\forall y\forall z(\exists u\, (x \times u) = (y \times z) \to (\exists u\,(x \times u) = y \lor \\ \qquad \exists u\,(x \times u) = z)) \land x \neq 0 \land x \neq 1}
\end{align*}
\eit
}
\end{document}
| {
"alphanum_fraction": 0.6785290629,
"avg_line_length": 30.734375,
"ext": "tex",
"hexsha": "8d9e99df39ec6e888c3e57db8a10fbd5b3f1eb1d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "rzach/phil279",
"max_forks_repo_path": "279-lec15.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "rzach/phil279",
"max_issues_repo_path": "279-lec15.tex",
"max_line_length": 180,
"max_stars_count": 5,
"max_stars_repo_head_hexsha": "722ec82ae7a4593d40c72083d830c4e3e4864dc0",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "rzach/phil279",
"max_stars_repo_path": "279-lec15.tex",
"max_stars_repo_stars_event_max_datetime": "2020-06-21T10:48:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-09-23T13:42:54.000Z",
"num_tokens": 4034,
"size": 11802
} |
% !TeX document-id = {7f59beb7-c93a-43bd-a036-7847a50120d2}
% !TeX spellcheck = en_US
% !TeX program = make
% Dieses Dokument muss mit PDFLatex gesetzt werden
% Vorteil: Grafiken koennen als jpg, png, ... verwendet werden
% und die Links im Dokument sind auch gleich richtig
%
%Ermöglicht \\ bei der Titelseite (z.B. bei supervisor)
%Siehe https://github.com/latextemplates/uni-stuttgart-cs-cover/issues/4
\RequirePackage{kvoptions-patch}
%English:
\let\ifdeutsch\iffalse
\let\ifenglisch\iftrue
%German:
%\let\ifdeutsch\iftrue
%\let\ifenglisch\iffalse
%
\ifenglisch
\PassOptionsToClass{numbers=noenddot}{scrbook}
\else
%()Aus scrguide.pdf - der Dokumentation von KOMA-Script)
%Nach DUDEN steht in Gliederungen, in denen ausschließlich arabische Ziffern für die Nummerierung
%verwendet werden, am Ende der Gliederungsnummern kein abschließender Punkt
%(siehe [DUD96, R3]). Wird hingegen innerhalb der Gliederung auch mit römischen Zahlen
%oder Groß- oder Kleinbuchstaben gearbeitet, so steht am Ende aller Gliederungsnummern ein
%abschließender Punkt (siehe [DUD96, R4])
\PassOptionsToClass{numbers=autoendperiod}{scrbook}
\fi
%Warns about outdated packages and missing caption delcarations
%See https://www.ctan.org/pkg/nag
\RequirePackage[l2tabu, orthodox]{nag}
%Neue deutsche Trennmuster
%Siehe http://www.ctan.org/pkg/dehyph-exptl und http://projekte.dante.de/Trennmuster/WebHome
%Nur für pdflatex, nicht für lualatex
\RequirePackage{ifluatex}
\ifluatex
%do not load anything
\else
\ifdeutsch
\RequirePackage[ngerman=ngerman-x-latest]{hyphsubst}
\fi
\fi
\documentclass[
fontsize=12pt, %Default: 11pt, bei Linux Libertine zu klein zum Lesen
% BEGINN: Optionen für typearea
paper=a4,
twoside, % fuer die Betrachtung am Schirm ungeschickt
BCOR=3mm, % Bindekorrektur
DIV=13, % je höher der DIV-Wert, desto mehr geht auf eine Seite. Gute werde sind zwischen DIV=12 und DIV=15
headinclude=true,
footinclude=false,
% ENDE: Optionen für typearea
% titlepage,
bibliography=totoc,
% idxtotoc, %Index ins Inhaltsverzeichnis
% liststotoc, %List of X ins Inhaltsverzeichnis, mit liststotocnumbered werden die Abbildungsverzeichnisse nummeriert
headsepline,
cleardoublepage=empty,
parskip=half,
% draft % um zu sehen, wo noch nachgebessert werden muss - wichtig, da Bindungskorrektur mit drin
final % ACHTUNG! - in pagestyle.tex noch Seitenstil anpassen
]{scrbook}
\input{preambel/packages_and_options}
%Der untere Rand darf "flattern"
\raggedbottom
%%%
% Wie tief wird das Inhaltsverzeichnis aufgeschlüsselt
% 0 --\chapter
% 1 --\section % fuer kuerzeres Inhaltsverzeichnis verwenden - oder minitoc benutzen
% 2 --\subsection
% 3 --\subsubsection
% 4 --\paragraph
\setcounter{tocdepth}{1}
%
%%%
\makeindex
%Angaben in die PDF-Infos uebernehmen
\makeatletter
\hypersetup{
pdftitle={}, %Titel der Arbeit
pdfauthor={}, %Author
pdfkeywords={}, % CR-Klassifikation und ggf. weitere Stichworte
pdfsubject={}
}
\makeatother
\input{content/abkuerzungen}
\usepackage{titlesec}
\titlespacing*{\subsubsection}{7pt}{0ex}{0ex}
\begin{document}
%tex4ht-Konvertierung verschönern
\iftex4ht
% tell tex4ht to create picures also for formulas starting with '$'
% WARNING: a tex4ht run now takes forever!
\Configure{$}{\PicMath}{\EndPicMath}{}
%$ % <- syntax highlighting fix for emacs
\Css{body {text-align:justify;}}
%conversion of .pdf to .png
\Configure{graphics*}
{pdf}
{\Needs{"convert \csname Gin@base\endcsname.pdf
\csname Gin@base\endcsname.png"}%
\Picture[pict]{\csname Gin@base\endcsname.png}%
}
\fi
%Tipp von http://goemonx.blogspot.de/2012/01/pdflatex-ligaturen-und-copynpaste.html
%siehe auch http://tex.stackexchange.com/questions/4397/make-ligatures-in-linux-libertine-copyable-and-searchable
%
%ONLY WORKS ON MiKTeX
%On other systems, download glyphtounicode.tex from http://pdftex.sarovar.org/misc/
%
\input glyphtounicode.tex
\pdfgentounicode=1
%\VerbatimFootnotes %verbatim text in Fußnoten erlauben. Geht normalerweise nicht.
\input{macros/commands}
\pagenumbering{arabic}
\Titelblatt
%Eigener Seitenstil fuer die Kurzfassung und das Inhaltsverzeichnis
\deftripstyle{preamble}{}{}{}{}{}{\pagemark}
%Doku zu deftripstyle: scrguide.pdf
\pagestyle{preamble}
\renewcommand*{\chapterpagestyle}{preamble}
\renewcommand{\chapterheadstartvskip}{\vspace{0em}}
%Kurzfassung / abstract
%auch im Stil vom Inhaltsverzeichnis
\ifdeutsch
\section*{Kurzfassung}
\else
\section*{Abstract}
\fi
In recent years, Cloud Computing has been gaining more and more popularity.
%An application designed according to the principle of Cloud Computing are executed on an extern platform and is called a Cloud Application.
But anyone who tries to create a Cloud Application that is suitable for several different platforms will face a problem.
The problem is that each platform provides its own \textbf{A}pplication \textbf{P}rogramming \textbf{I}nterface (API) to interact with Cloud Applications.
It is therefore difficult to create one unified application that functions properly on various platforms.
\textbf{T}opology and \textbf{O}rchestration \textbf{S}pecification for \textbf{C}loud \textbf{A}pplication (TOSCA) provides a solution for this problem.
%%This standard adds an additional level of abstraction to a Cloud Applications, in other words, a layer between the external interfaces of the Cloud Application and a Cloud service provider's API.
With the help of TOSCA, it is possible to define several models of interaction with many different APIs within one TOSCA Application.
A TOSCA runtime environment is responsible for choosing and processing the right model and serves as a layer between the external interfaces of a TOSCA application and the API of a platform.
This makes it possible to automate the migration of TOSCA applications between platforms that use completely different APIs.
The description of a TOSCA Application is stored in a \textbf{C}loud \textbf{S}ervice \textbf{AR}chive (CSAR), which contains all components necessary for the application life-cycle. \\
%The University of Stuttgart implemented this specification in the runtime environment named OpenTOSCA. \\
Cloud Applications are often defined in such a way that, during their deployment, some external packages, programs, and files need to be downloaded via the Internet.
These downloads can slow down the deployment, and when Internet access is limited, unstable, or missing, they can prevent the installation altogether.
In addition, downloads from external sources can compromise the security of applications.
%If a Cloud Application consists of a single virtual server with one operating system, this can slightly slow down the deployment.
%But in composite Cloud Computing, a large number of identical operating systems can download a huge amount of the same data, which can significantly increase the time needed for deployment.\\
%During this work a software solution which will eliminate external dependencies in CSAR, resupply them with all packages necessary for deployment and also change the internal structure to display the achieved self-containment will be developed and implemented.
%For example, all commonly used "apt-get install" commands, which download and install packages, must be removed.
%Appropriate package must be downloaded and integrated into CSAR structure.
%Furthermore, all depended packages needed for new packages must also be added recursively.\\
This document considers the development of a solution to this problem based on predownloading the necessary data.
Different methods of encapsulating CSARs will be defined.
The architecture of a software solution will be described that can recognize external dependencies in a CSAR, eliminate them, resupply the CSAR with all the data necessary for deployment, and also change the internal structure of the CSAR to reflect the achieved self-containment.
A prototype of the software will be implemented and validated.
%In addition some aspects of implementation will be described and explained.
\cleardoublepage
% BEGIN: Verzeichnisse
\iftex4ht
\else
\microtypesetup{protrusion=false}
\fi
%%%
% Literaturverzeichnis ins TOC mit aufnehmen, aber nur wenn nichts anderes mehr hilft!
% \addcontentsline{toc}{chapter}{Literaturverzeichnis}
%
% oder zB
%\addcontentsline{toc}{section}{Abkürzungsverzeichnis}
%
%%%
%Produce table of contents
%
%In case you have trouble with headings reaching into the page numbers, enable the following three lines.
%Hint by http://golatex.de/inhaltsverzeichnis-schreibt-ueber-rand-t3106.html
%
%\makeatletter
%\renewcommand{\@pnumwidth}{2em}
%\makeatother
%
\tableofcontents
% Bei einem ungünstigen Seitenumbruch im Inhaltsverzeichnis, kann dieser mit
% \addtocontents{toc}{\protect\newpage}
% an der passenden Stelle im Fließtext erzwungen werden.
\listoffigures
%\listoftables
%Wird nur bei Verwendung von der lstlisting-Umgebung mit dem "caption"-Parameter benoetigt
%\lstlistoflistings
%ansonsten:
\ifdeutsch
\listof{Listing}{Verzeichnis der Listings}
\else
\listof{Listing}{List of Listings}
\fi
%mittels \newfloat wurde die Algorithmus-Gleitumgebung definiert.
%Mit folgendem Befehl werden alle floats dieses Typs ausgegeben
\ifdeutsch
\listof{Algorithmus}{Verzeichnis der Algorithmen}
\else
%\listof{Algorithmus}{List of Algorithms}
\fi
%\listofalgorithms %Ist nur für Algorithmen, die mittels \begin{algorithm} umschlossen werden, nötig
% Abkürzungsverzeichnis
\printnoidxglossaries
\iftex4ht
\else
%Optischen Randausgleich und Grauwertkorrektur wieder aktivieren
\microtypesetup{protrusion=true}
\fi
% END: Verzeichnisse
\renewcommand*{\chapterpagestyle}{scrplain}
\pagestyle{scrheadings}
\input{preambel/pagestyle}
%
%
% ** Hier wird der Text eingebunden **
%
\input{content/einleitung}
%\input{...weitere Kapitel...}
\input{content/Basis}
\input{content/Requirements}
\input{content/concept_and_architecture}
\input{content/implementation}
\input{content/add_pm}
\input{content/check}
\input{content/zusammenfassung_und_ausblick}
%
%\input{content/latex-tipps}
%\input{content/end_listings}
%
%\renewcommand{\appendixtocname}{Anhang}
%\renewcommand{\appendixname}{Anhang}
%\renewcommand{\appendixpagename}{Anhang}
\appendix
\clearpage
%\printindex
\printbibliography
\ifdeutsch
Alle URLs wurden zuletzt am 17.\,03.\,2008 geprüft.
\else
All links were last followed on July 30, 2017.
\fi
\pagestyle{empty}
\renewcommand*{\chapterpagestyle}{empty}
\Versicherung
\end{document}
| {
"alphanum_fraction": 0.7655816534,
"avg_line_length": 37.8111888112,
"ext": "tex",
"hexsha": "546482ee80c090679185c7eacdf850b97b0af988",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e71538c15d4224916f14240ff7a5ad7cf447cca5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Jery77/BA",
"max_forks_repo_path": "ausarbeitung.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e71538c15d4224916f14240ff7a5ad7cf447cca5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Jery77/BA",
"max_issues_repo_path": "ausarbeitung.tex",
"max_line_length": 285,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e71538c15d4224916f14240ff7a5ad7cf447cca5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Jery77/BA",
"max_stars_repo_path": "ausarbeitung.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2966,
"size": 10814
} |
\chapter{Implementation} \label{sec:impl}
\import{./}{data_acquisition}
\import{./}{n-back}
\import{./}{experimental_setup}
\import{./}{dataset}
\import{./}{clf_models}
| {
"alphanum_fraction": 0.7,
"avg_line_length": 21.25,
"ext": "tex",
"hexsha": "5c8198573a43ab703f68e511eed14b6ae89f51a8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "JLysberg/thesis-NTNU",
"max_forks_repo_path": "chapters/implementation/..tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "JLysberg/thesis-NTNU",
"max_issues_repo_path": "chapters/implementation/..tex",
"max_line_length": 41,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c0a9631f89a0112b2ade27d05c22818745706fb8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "JLysberg/thesis-NTNU",
"max_stars_repo_path": "chapters/implementation/..tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 52,
"size": 170
} |
% This is an example for a chapter, additional chapter can be added in the
% skeleton-thesis.
% To generate the final document, run latex, build and quick build commands
% on the skeleton-thesis file not this one.
\chapter{DENSE ARRAY DESIGN FOR OPTIMAL TRANSMISSION USING SURFACE LOOPS}\label{chapters:chapter_2}
\vspace{-7mm}
%% Section
\section{Design Process}\label{sec:ch_2_sec_1}
With the advantages of parallel transmission (pTx) and the push for high channel counts, it is natural to consider the upper bound on the number of radiating elements in a dense array.
No published work has yet demonstrated the viability of more than 100 degrees of freedom, or elements, for static RF shimming,
in part because the question itself is convoluted.
Many assumptions must be made before answering anything close to the question of ``How many radiating elements does it take to RF shim a region of interest like the brain?''
RF elements come in many shapes and designs; coupling may remove the advantage of densely packed elements if not properly dealt with;
brain volumes no doubt vary across the patient population; power constraints are unique
to each scanner; and you want the patient to come out of the scanner without a fried brain! This work attempts to address all of these issues for a particular scanner,
specifically the Philips Achieva 7T with Multix capabilities.
The surface loop is used across virtually all forms of MR and was chosen as the base element for the dense array.
Surface loops are often used in dense Rx/Tx arrays due to the ease of decoupling neighboring elements by overlapping the loops to cancel their mutual inductance.
A combination of decoupling strategies is used in order to pack many elements into an array. Self-decoupled coils work well to decouple neighboring loops in one direction,
yet coupling
Power constraints
The upper limit on the number of transmission elements depends entirely on the available channel count or power and on the coupling between neighboring elements.
%% Subsection
\subsection{Subsection 1}\label{subsec:subsec_2.1.1}
\clearpage
| {
"alphanum_fraction": 0.8079692751,
"avg_line_length": 63.1212121212,
"ext": "tex",
"hexsha": "e4f697119c253262a92b9087d1493d95e7e6dc6c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a738a5e27af78d8f5ddb7138ba84f3696fb66fcd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "benjhardy/Dissertation",
"max_forks_repo_path": "Chapters/chapter_2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a738a5e27af78d8f5ddb7138ba84f3696fb66fcd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "benjhardy/Dissertation",
"max_issues_repo_path": "Chapters/chapter_2.tex",
"max_line_length": 171,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a738a5e27af78d8f5ddb7138ba84f3696fb66fcd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "benjhardy/Dissertation",
"max_stars_repo_path": "Chapters/chapter_2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 449,
"size": 2083
} |
\input{includes/lab_preamble}
\usepackage[utf8]{inputenc}
\def\LabCourse{AP Computer Science A}
\def\LabNumber{05}
\def\LabTitle{Calculator Lab}
\newcommand\QBlankBox[1]{
\stepcounter{QuestionCounter}
\colorbox{black!10}{\parbox{0.9875\textwidth}{
\raggedright
\textbf{Question \#\theQuestionCounter:} #1
}}
}
\newcommand\QFilledBox[2]{
\stepcounter{QuestionCounter}
\colorbox{black!10}{\parbox{0.9875\textwidth}{
\raggedright
\textbf{Question \#\theQuestionCounter:} #1
}}
\colorbox{black!5}{\parbox{0.9875\textwidth}{
\raggedright
#2
}}
}
\begin{document}
\begin{coverpages}
\ \\[2cm]
\begin{center}
\huge
\textbf{\LabTitle}
\Large
\LabCourse
\end{center}
\vspace{1.5cm}
\begin{center}
\includegraphics[scale=0.45]{graphics/logo_black}
\vspace{2.5cm}
\Large
Name: \rule{11.5cm}{0.1pt}
\end{center}
\end{coverpages}
\blankpage
\thispagestyle{empty}
\tableofcontents
\pagebreak
\section{Background}
Modern calculators and even many search engines have become adept at reading strings of mathematical symbols and being able to calculate the value of the given expression. Type into a Google search bar the expression: \code{3 + 4 * 8 - 16 / 32} and Google will correctly return the value \code{34.5}. In this lab, you will be implementing methods that will take in strings of characters, like the example above, and return the correctly calculated value of the expression.\\[\baselineskip]
Before we begin, we first need to explore a number of different ways to write a given mathematical expression. Each method differs in the abilities of humans and computers to read and process them. In fact, our tasks will primarily hinge on teaching a computer to read and process the most ``computer-readable'' version and then convert the others to that version.\\[\baselineskip]
Note that for the purposes of this lab, only the four basic, binary arithmetic operators ($+$, $-$, $\times$, $\div$) are going to be considered; however, all of the concepts explored here can be extended to handle more complex mathematical expressions and operations.
\subsection{Three Different Notations}
\EBox{Infix Notation}{
\emph{Infix notation} is the notation you are likely familiar with. When using infix notation, the binary operators are placed \emph{between} the numbers to be operated on (called its \emph{operands}). Here are a few examples:
\begin{center}
\begin{tabular}{p{0.3\textwidth} p{0.3\textwidth} p{0.3\textwidth}}
$5 + 7$ & $12 + 6 - 3$ & $3 + 4 \times 8 - 16 \div 32$
\end{tabular}
\end{center}
Note that the order of operations (\emph{PEMDAS}/\emph{BODMAS}) dictates how complex expressions are evaluated.
}
\ \\[9pt]
\EBox{Prefix Notation/Polish Notation}{
First described by Polish logician Jan Łukasiewicz in 1924, \emph{prefix notation} places each operator \emph{before} its operands. Here are the same examples as above, this time in prefix notation:
\begin{center}
\begin{tabular}{p{0.3\textwidth} p{0.3\textwidth} p{0.3\textwidth}}
$+\ 5\ 7$ & $-\ +\ 12\ 6\ 3$ & $-\ +\ 3\ \times\ 4\ 8\ \div\ 16\ 32$
\end{tabular}
\end{center}
        Prefix notation has the benefit of allowing complex expressions to follow an explicit order of operations \emph{without parentheses or brackets}. A properly written expression can be evaluated from left to right. In this way, the order of operations dictates how an expression is written, rather than how it is evaluated.\\[\baselineskip]
Note that for non-commutative operations ($-$ or $\div$), the order of the operands follows their order in the written expression. That is: $\div\ 16\ 32$ will always evaluate to $0.5$ and never to $2$.
}
\ \\[9pt]
\EBox{Postfix Notation/Reverse-Polish Notation}{
        Although prefix notation was first described in the 1920s, \emph{postfix notation} was only introduced in the 1950s. It was later described by the famed computer scientist Edsger Dijkstra as a way of evaluating mathematical expressions that requires fewer accesses to computer memory. Unsurprisingly, postfix notation places each operator \emph{after} its operands. Here, again, are the same examples as above, this time in postfix notation:
\begin{center}
\begin{tabular}{p{0.3\textwidth} p{0.3\textwidth} p{0.3\textwidth}}
$5\ 7\ +$ & $12\ 6\ +\ 3\ -$ & $3\ 4\ 8\ \times\ +\ 16\ 32\ \div\ -$
\end{tabular}
\end{center}
As with prefix notation, the order of the operands for non-commutative operations follows their order in the written expression. Thus, $16\ 32\ \div$ evaluates to $0.5$.
}
\ \\[18pt]
    Although infix notation is most likely the easiest for us to read, it is actually fairly difficult for a computer to process correctly, particularly once additional operators and symbols, such as parentheses or brackets, are added to it. Because of this, Activity \#1 will ask you to first implement a postfix notation calculator.
\section{Applications}
    \QFilledBox{Although parentheses are not explicitly required by prefix or postfix notation, they do help with human readability and make it easier to understand how these expressions are evaluated. Consider the following:
    \[ -\ +\ 12\ 6\ 3\ \to -\ (+\ 12\ 6)\ 3 \]
    The parentheses make it clearer that the $+$ operator uses the two immediate operands that follow it. It then follows that the $-$ operator has operands $(+\ 12\ 6)$ and $3$, or $18$ and $3$.\\[\baselineskip]
    Add parentheses to each of the following prefix or postfix expressions.
}{
\ \\[24pt]
\begin{minipage}{0.25\textwidth}
$+\ \times\ 3\ 2\ -\ 6\ 4$
\end{minipage}\begin{minipage}{0.25\textwidth}
$+\ +\ 5\ 7\ -\ 3\ \div\ 10\ 5$
\end{minipage}\begin{minipage}{0.25\textwidth}
$7\ 4\ -\ 6\ 5\ +\ \times$
\end{minipage}\begin{minipage}{0.25\textwidth}
$3\ 4\ 8\ \times\ +\ 16\ 32\ \div\ -$
\end{minipage}
\ \\[24pt]
}
\ \\[9pt]
\QBox{Evaluate each of the expressions in Question \#1.}{4cm}
\ \\[9pt]
\QFilledBox{Convert the following infix expression to a prefix and a postfix one.
\[ 5 \times 3 + 6 + (4 - 2) \]
}{
\begin{minipage}{0.5\textwidth}
\textbf{Prefix:}
\end{minipage}\begin{minipage}{0.5\textwidth}
\textbf{Postfix:}
\end{minipage}
\ \\[3.9cm]
}
\ \\[9pt]
\QBox{Why do you think a computer has an easier time handling prefix and postfix notations instead of infix notation?}{4cm}
\pagebreak
\section{Activity \#1}
\subsection{Introduction}
    As noted in the background, postfix notation has been suggested as desirable for the evaluation of mathematical expressions within a computer system due to its lower memory requirements compared to prefix notation. The reason for this is simple: when processing a string of characters representing a mathematical expression in postfix, an operator can be evaluated immediately once it is encountered, because all of its operands will already have been processed and stored in memory.\\[\baselineskip]
    For example, an expression like \code{2 3 4 + -} will have the \code{3} and \code{4} in memory as soon as the operator \code{+} is encountered. The result, \code{7}, can then be calculated and stored in anticipation of future operators. Compare that to the prefix expression \code{- + 3 4 2}. Here, the computer must process and store the \code{-} operator before it can be evaluated because its operands are not yet in memory, and must then process and store the \code{+} operator because its operands, too, are not in memory. Although alternative methods for processing prefix notation have since been developed, during the 1960s the benefits of postfix notation were clear.\\[\baselineskip]
Because postfix notation has been the dominant form of writing expressions for computer evaluation since then, implementing a method for processing it will be your first exercise in this Activity. You will then process prefix notation, hopefully using some of the techniques learned while working on the \code{evalPostfix()} method. Each of these methods will rely heavily on the stack data structure for storing and accessing processed information.
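    For illustration only (the template class in the appendix is what you should actually complete), one possible sketch of such a stack-based postfix evaluator in Java is shown below. It assumes space-separated tokens and only the four operators of this lab; the class name \code{PostfixSketch} and the helper \code{apply()} are invented for this example.
\begin{lstlisting}[language=Java,basicstyle=\small\ttfamily,tabsize=2]
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only -- not the provided template class. Assumes
// space-separated tokens and only the operators +, -, * and /.
public class PostfixSketch {

    public static double evalPostfix(String expression) {
        Deque<Double> operands = new ArrayDeque<>();   // stack of processed operands
        for (String token : expression.split(" ")) {
            if (isOperator(token)) {
                // An operator can be evaluated immediately: both of its
                // operands are already on the stack.
                double right = operands.pop();         // pushed most recently
                double left  = operands.pop();
                operands.push(apply(token, left, right));
            } else {
                operands.push(Double.parseDouble(token));
            }
        }
        return operands.pop();                         // the single remaining value
    }

    private static boolean isOperator(String token) {
        return token.equals("+") || token.equals("-")
            || token.equals("*") || token.equals("/");
    }

    private static double apply(String op, double a, double b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            default:  return a / b;                    // "/"
        }
    }

    public static void main(String[] args) {
        System.out.println(evalPostfix("2 3 4 + -"));  // prints -5.0
        System.out.println(evalPostfix("16 32 /"));    // prints 0.5
    }
}
\end{lstlisting}
    Note how the stack mirrors the description above: operands accumulate until an operator arrives, at which point the two most recent operands are popped and the result is pushed back.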
\subsection{Exercises}
\begin{enumerate}
\item Implement the \code{evalPostfix()} method, which will take as a parameter a \code{String} holding a mathematical expression in postfix notation and return the result of evaluating that expression.\\
{\small\textbf{Note:} Remember that, for our purposes, $\times \to *$ and $\div \to /$.}\\
{\small\textbf{Hint:} Use a stack to hold the operands.}
\item Implement the \code{evalPrefix()} method, which will take as a parameter a \code{String} holding a mathematical expression in prefix notation and return the result of evaluating that expression.\\
{\small\textbf{Hint:} You may want to use two stacks.}
\end{enumerate}
\subsection{Questions}
    \QBox{Explain how you determined whether a given character or group of characters was an operator or an operand.}{4cm}
\ \\[9pt]
\QBox{Why does evaluation of \emph{prefix} notation benefit from the use of two stacks?}{4cm}
\pagebreak
\section{Activity \#2}
%Shunting Yard Algorithm
\subsection{Introduction}
    In this activity, you will be implementing a method for processing and evaluating expressions in infix notation. Remember, for the purposes of this lab, we are only considering the four basic operators ($+, -, \times, \div$) and no parentheses or brackets. Even so, infix notation has proven so challenging for computers to process that it is normally converted to either prefix or postfix notation first, then processed using the evaluation methods for the given notation.
\subsection{The Shunting-Yard Algorithm}
    In addition to encouraging the use of postfix notation, Edsger Dijkstra also invented an algorithm for converting from infix notation to postfix notation. Here is a simplified version of that algorithm, handling only the operators selected for this exercise. Note that the term \emph{token} is used to represent a small part of the expression being processed, which could be either an operator or an operand. For our purposes, each token is separated by a single space (`` '').
\begin{enumerate}
\item Create a postfix string and operator stack.
\item While there are tokens to read:
\begin{enumerate}
\item Read the next token.
\item If the token is a number, append it to the postfix string.
\item If the token is an operator:
\begin{enumerate}
            \item While there is an operator on top of the operator stack \emph{and} the current token has precedence \emph{less than or equal to} that of the operator on top of the stack:
            \begin{enumerate}
                \item Pop the operator off the stack and append it to the postfix string.
            \end{enumerate}
\item Push the token on top of the operator stack.
\end{enumerate}
\end{enumerate}
    \item Pop any operators remaining on the stack and append them to the postfix string.
    \item Return the postfix string.
\end{enumerate}
    This algorithm was named the ``shunting-yard algorithm'' because it rearranges symbols in much the same way that a railway shunting yard rearranges rail cars.
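    For illustration only (and not as the expected solution to the exercises below), here is one possible Java sketch of the simplified algorithm above. It assumes space-separated tokens and only the four operators of this lab; the class name and the \code{precedence()} helper are invented for this example.
\begin{lstlisting}[language=Java,basicstyle=\small\ttfamily,tabsize=2]
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the simplified shunting-yard conversion. Assumes
// space-separated tokens and only the operators +, -, * and /.
public class ShuntingYardSketch {

    private static int precedence(String op) {
        return (op.equals("*") || op.equals("/")) ? 2 : 1;
    }

    private static boolean isOperator(String token) {
        return token.equals("+") || token.equals("-")
            || token.equals("*") || token.equals("/");
    }

    public static String infix2postfix(String infix) {
        StringBuilder postfix = new StringBuilder();     // 1. postfix string
        Deque<String> operators = new ArrayDeque<>();    //    and operator stack
        for (String token : infix.split(" ")) {          // 2. while there are tokens
            if (!isOperator(token)) {
                postfix.append(token).append(' ');       // 2b. numbers go to the output
            } else {
                // 2c. pop operators of higher or equal precedence, then push the token
                while (!operators.isEmpty()
                        && precedence(operators.peek()) >= precedence(token)) {
                    postfix.append(operators.pop()).append(' ');
                }
                operators.push(token);
            }
        }
        while (!operators.isEmpty()) {                   // 3. flush remaining operators
            postfix.append(operators.pop()).append(' ');
        }
        return postfix.toString().trim();                // 4. return the postfix string
    }

    public static void main(String[] args) {
        System.out.println(infix2postfix("3 + 4 * 8 - 16 / 32"));
    }
}
\end{lstlisting}
    Tracing the expression from the background, \code{3 + 4 * 8 - 16 / 32}, by hand reproduces the postfix form \code{3 4 8 * + 16 32 / -} given earlier.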
\subsection{Exercises}
\begin{enumerate}
\item Implement the \code{infix2postfix()} method, which will take as a parameter a \code{String} in infix notation and return a \code{String} in postfix notation.\\
{\small\textbf{Hint:} Use the shunting-yard algorithm!}
\item Implement the \code{evalInfix()} method, which will take as a parameter a \code{String} holding a mathematical expression in infix notation and return the result of evaluating that expression.
\end{enumerate}
\subsection{Questions}
    \QBox{Why do you think Dijkstra offered an algorithm for converting from infix to postfix notation after advocating for postfix notation's use in computing?}{4cm}
\pagebreak
\QBlankBox{Explain how you might evaluate an expression in infix notation without first converting it to postfix notation. Do you believe the conversion process is more or less costly than the method you developed? Explain why.}
\pagebreak
\section{Final Analysis}
\QBox{Why is it important for computer scientists and programmers to consider methods for expressing information that differs from the way humans might express it?}{4cm}
\ \\[9pt]
\QBox{During the late 1960's, Hewlett Packard began designing and producing lines of engineering and financial calculators that used postfix notation. Even today, Hewlett Packard offers a (diminished) line of calculators using postfix notation. Why do you think Hewlett Packard continues to produce a few models using this notation? Why do you think few other calculators are produced using postfix notation?}{4cm}
\ \\[9pt]
\QBox{What part of the implementation of \code{evalPostfix()}, \code{evalPrefix()}, or \code{evalInfix()} did you find most challenging? How did you overcome this challenge?}{4cm}
\ \\[9pt]
\QBox{What new programming techniques or knowledge did you learn as a result of this lab?}{4cm}
\pagebreak
\blankpage
\pagebreak
\section{Template Class \& Test Cases}
\lstinputlisting[basicstyle=\small\ttfamily,tabsize=2]{files/Calculator.java}
\pagebreak
\blankpage
%Scoring Matrix
\section*{Scoring Matrix}
\vspace{0.25cm}
\renewcommand{\arraystretch}{2}
\begin{tabular} {*{4}{*{3}{| >{\bfseries\centering}p{0.0575\textwidth}}}|}
\hline
\multicolumn{12}{| c |}{\bfseries\Large\LabTitle}\\
\hline
\multicolumn{3}{| c |}{\bfseries Applications} & \multicolumn{3}{| c |}{\bfseries Activity \#1} & \multicolumn{3}{| c |}{\bfseries Activity \#2} & \multicolumn{3}{| c |}{\bfseries Final Analysis}\\
\hline
Q01 & 1 & \ & EX1 & 4 & \ & EX1 & 7 & \ & Q09 & 1 & \ \tabularnewline
\hline
Q02 & 1 & \ & EX2 & 5 & \ & EX2 & 2 & \ & Q10 & 1 & \ \tabularnewline
\hline
Q03 & 1 & \ & Q05 & 1 & \ & Q07 & 1 & \ & Q11 & 1 & \ \tabularnewline
\hline
Q04 & 1 & \ & Q06 & 1 & \ & Q08 & 1 & \ & Q12 & 1 & \ \tabularnewline
\hline
\end{tabular}
\vspace{0.25cm}
\textbf{Comments:}
\end{document}
| {
"alphanum_fraction": 0.7248665955,
"avg_line_length": 59.0546218487,
"ext": "tex",
"hexsha": "7fbe19ea47f374049e3daa1f81478b3ced326cf9",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2022-01-05T13:04:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-04-21T09:36:46.000Z",
"max_forks_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jmscsedu/csedu",
"max_forks_repo_path": "labs/apcsa/05_calculator/lab_calculator.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jmscsedu/csedu",
"max_issues_repo_path": "labs/apcsa/05_calculator/lab_calculator.tex",
"max_line_length": 691,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "4ad037bf3ee413daeab55a52725c15e17e6a31b3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jmscsedu/csedu",
"max_stars_repo_path": "labs/apcsa/05_calculator/lab_calculator.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-24T14:34:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-01-28T21:31:22.000Z",
"num_tokens": 4023,
"size": 14055
} |
\unnumberedchapter{Abbreviations}
\chapter*{Abbreviations}
All abbreviations used in the thesis must be listed, with their definitions, in alphabetical order. This includes trivial and commonly used abbreviations, at your discretion, but not words that have entered into general English usage (laser, for example, or DNA). In particular, non-standard abbreviations should be presented.
\begin{longtable}{rl}
PPT & positive partial transpose\\
SRPT & Schr\"odinger-Robertson partial transpose
\end{longtable} | {
"alphanum_fraction": 0.8031189084,
"avg_line_length": 57,
"ext": "tex",
"hexsha": "1ba66e5b56dc2debb865370d3b53057e214c3c20",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0484c0ffb89fe90464242d04082c35e92fbefb7a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "HannibalWangLecter/LaTeX-templates",
"max_forks_repo_path": "PhD Thesis/Preamble/abbreviations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0484c0ffb89fe90464242d04082c35e92fbefb7a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "HannibalWangLecter/LaTeX-templates",
"max_issues_repo_path": "PhD Thesis/Preamble/abbreviations.tex",
"max_line_length": 328,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "658b7f8745cc4d1ae157c1b75bc197fb4fa146b4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Pradeep20oist/LaTeX-templates",
"max_stars_repo_path": "PhD Thesis/Preamble/abbreviations.tex",
"max_stars_repo_stars_event_max_datetime": "2020-11-01T08:42:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-11-01T08:42:20.000Z",
"num_tokens": 122,
"size": 513
} |
\section{A Worklist Algorithm for Polymorphic Subtyping}\label{algorithmic_subtyping}
This section presents our algorithm for polymorphic
subtyping. A novel aspect of our algorithm is the use of worklist
judgments: a form of judgment that facilitates the propagation
of information.
%-------------------------------------------------------------------------------
\subsection{Syntax and Well-Formedness of the Algorithmic System}
Figure~\ref{fig:ITP:alg:syntax} shows the
syntax and the well-formedness judgment.
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\paragraph{Existential Variables}
In order to solve the unknown types $\tau$, the algorithmic system extends the
declarative syntax of types with \emph{existential variables} $\al$. They
behave like unification variables, but are not globally defined. Instead, the
ordered \emph{algorithmic context}, inspired by \citet{dunfield2013complete},
defines their scope. Thus
the type $\tau$ represented by the corresponding existential variable is
always bound in the corresponding declarative context $\Psi$.
%- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\paragraph{Worklist judgments} The form of our algorithmic judgments is
non-standard.
%tracks the (partial) solutions of existential variables
%in the algorithmic context; they denote a delayed substitution that is
%incrementally applied to outstanding work as it is encoutered.
%Instead of reifying the substitution,
Our algorithm keeps track of an explicit list of
outstanding work: the list $\Omega$ of (reified) \emph{algorithmic judgments}
of the form $A \leq B$,
to which a substitution can be applied once and for all to propagate the solution
of an existential variable.
\begin{figure}[t]
\[
\begin{array}{l@{\qquad}lcl}
\text{Type variables} & a, b\\
\text{Existential variables} & \al, \bt\\[3mm]
\text{Algorithmic types} &A, B, C &::=&\quad 1 \mid a \mid \al \mid \forall a. A \mid A\to B\\
\text{Algorithmic context}&\Gamma &::=&\quad \cdot \mid \Gamma, a \mid \Gamma, \al\\
\text{Algorithmic judgments}&\exps &::=&\quad \cdot \mid \jcons{A \le B}{\exps}
\end{array}
\]
\centering \framebox{$\Gamma \vdash A$}
\begin{gather*}
\inferrule*[right=$\mathtt{{wf_a}unit}$]
{~}
{\Gamma \vdash 1}
\qquad
\inferrule*[right=$\mathtt{{wf_a}var}$]
{a\in\Gamma}
{\Gamma\vdash a}
\qquad
\inferrule*[right=$\mathtt{{wf_a}exvar}$]
{\al\in\Gamma}
{\Gamma\vdash \al} \\
\inferrule*[right=$\mathtt{{wf_a}{\to}}$]
{\Gamma\vdash A \\ \Gamma\vdash B}
{\Gamma\vdash A\to B}
\qquad
\inferrule*[right=$\mathtt{{wf_a}\forall}$]
{\Gamma, a\vdash A}
{\Gamma\vdash \forall a. A}
\end{gather*}
\caption{Syntax and Well-Formedness judgment for the Algorithmic System.}\label{fig:ITP:alg:syntax}
\end{figure}
\begin{comment}
\begin{figure}[t]
\centering \framebox{$\Gamma \vdash A$}
\begin{gather*}
\inferrule*[right=$\mathtt{{wf_a}unit}$]
{~}
{\Gamma \vdash 1}
\qquad
\inferrule*[right=$\mathtt{{wf_a}var}$]
{a\in\Gamma}
{\Gamma\vdash a}
\qquad
\inferrule*[right=$\mathtt{{wf_a}exvar}$]
{\al\in\Gamma}
{\Gamma\vdash \al} \\
\inferrule*[right=$\mathtt{{wf_a}{\to}}$]
{\Gamma\vdash A \\ \Gamma\vdash B}
{\Gamma\vdash A\to B}
\qquad
\inferrule*[right=$\mathtt{{wf_a}\forall}$]
{\Gamma, a\vdash A}
{\Gamma\vdash \forall a. A}
\end{gather*}
\caption{Well-Formedness judgment of the Algorithmic System}\label{fig:alg:wf}
\end{figure}
\end{comment}
\paragraph{Hole Notation}
To facilitate context manipulation, we use the syntax $\Gamma[\Gamma_M]$ to
denote a context of the form $\Gamma_L, \Gamma_M, \Gamma_R$ where $\Gamma$ is
the context $\Gamma_L, \bullet, \Gamma_R$ with a hole ($\bullet$).
Hole notations with the same name implicitly share the same $\Gamma_L$ and $\Gamma_R$. A multi-hole notation like $\Gamma[\al][\bt]$ means $\Gamma_1,\al,\Gamma_2,\bt,\Gamma_3$.
%-------------------------------------------------------------------------------
\subsection{Algorithmic Subtyping}
The algorithmic subtyping judgment, defined in Figure~\ref{fig:ITP:alg}, has the form $\Gamma\vdash\exps$, where
$\exps$ collects multiple subtyping judgments $A\le B$.
% \bruno{Text comparing to Dunfield. Maybe mention in RW instead?:
% In contrast to the
% original formulation---which features 3 interdepenent judgments---our
% algorithmic rules are all part of the same judgment. This is better for
% formalization in proof assistants and avoids mutual dependencies.
% }
The algorithm treats $\exps$ as a worklist. In every step
it takes one task from the worklist for processing, possibly
pushes some new tasks on the worklist, and repeats this
process until the list is empty. This last and single base case
is handled by Rule~$\mathtt{a\_nil}$.
The remaining rules all deal with the first task in the worklist.
Logically we can discern 3 groups of rules.
\begin{figure}[t]
\centering \framebox{$\Gamma \vdash \exps$}
\begin{gather*}
\inferrule*[right=$\mathtt{{\le_a}nil}$]
{~}
{\Gamma \vdash \cdot}
\\ \\
\inferrule*[right=$\mathtt{{\le_a}unit}$]
{\Gamma \vdash \exps}
{\Gamma \vdash \jcons{1\le 1}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}var}$]
{a\in\Gamma \\ \Gamma \vdash \exps}
{\Gamma \vdash \jcons{a\le a}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}exvar}$]
{\al\in\Gamma \\ \Gamma \vdash \exps}
{\Gamma \vdash \jcons{\al\le \al}{\exps}}
\\
\inferrule*[right=$\mathtt{{\le_a}{\to}}$]
{\Gamma \vdash \jcons{B_1\le A_1}{\jcons{A_2\le B_2}{\exps}}}
{\Gamma \vdash \jcons{A_1\to A_2\le B_1\to B_2}{\exps}}
\\
\inferrule*[right=$\mathtt{{\le_a}\forall L}$]
{\al \text{ fresh} \\ \Gamma,\al \vdash \jcons{[\al/a]A\le B}{\exps}}
{\Gamma\vdash \jcons{\forall a. A\le B}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}\forall R}$]
{b \text{ fresh} \\ \Gamma,b \vdash \jcons{A\le B}{\exps}}
{\Gamma\vdash \jcons{A\le \forall b. B}{\exps}}
\\
\\
\inferrule*[right=$\mathtt{{\le_a}instL}$]
{\al\notin \mathit{FV}(A)\cup FV(B)\quad
\Gamma[\al[1], \al[2]]\vdash \jcons{\al[1]\to \al[2]\le A\to B}{ [\al[1]\to \al[2]/\al]\exps}}
{\Gamma[\al] \vdash \jcons{\al\le A\to B}{\exps}}
\\
\inferrule*[right=$\mathtt{{\le_a}instR}$]
{\al\notin FV(A)\cup FV(B)\quad
\Gamma[\al[1], \al[2]]\vdash \jcons{A\to B\le \al[1]\to \al[2]}{ [\al[1]\to \al[2]/\al]\exps}}
{\Gamma[\al] \vdash \jcons{A\to B\le \al}{\exps}}
\\
\\
\inferrule*[right=$\mathtt{{\le_a}solve\_ex}$]
{\Gamma[\al][]\vdash [\al/\bt]\exps}
{\Gamma[\al][\bt]\vdash \jcons{\al\le \bt}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}solve\_ex'}$]
{\Gamma[\al][]\vdash [\al/\bt]\exps}
{\Gamma[\al][\bt]\vdash \jcons{\bt\le \al}{\exps}}
\\
\inferrule*[right=$\mathtt{{\le_a}solve\_var}$]
{\Gamma[a][]\vdash [a/\bt]\exps}
{\Gamma[a][\bt]\vdash \jcons{a\le \bt}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}solve\_var'}$]
{\Gamma[a][]\vdash [a/\bt]\exps}
{\Gamma[a][\bt]\vdash \jcons{\bt\le a}{\exps}}
\\
\inferrule*[right=$\mathtt{{\le_a}solve\_unit}$]
{\Gamma[]\vdash [1/\al]\exps}
{\Gamma[\al]\vdash \jcons{\al\le 1}{\exps}}
\qquad
\inferrule*[right=$\mathtt{{\le_a}solve\_unit'}$]
{\Gamma[]\vdash [1/\al]\exps}
{\Gamma[\al]\vdash \jcons{1\le \al}{\exps}}
\end{gather*}
\caption{Algorithmic Subtyping}
\label{fig:ITP:alg}
\end{figure}
Firstly, we have five rules that are similar to those in the declarative
system, mostly just adapted to the worklist style. For instance, Rule
$\mathtt{{\le_a}{\to}}$ consumes one judgment and pushes two to the
worklist. A notable difference from the declarative Rule $\mathtt{{\le}\forall
L}$ is that Rule $\mathtt{{\le_a}\forall L}$ requires no guessing of a type $\tau$ to instantiate
the polymorphic type $\forall a. A$; instead, it
introduces an existential variable $\al$ into the context and substitutes it for $a$ in $A$. In
accordance with the declarative system, where
the monotype $\tau$ should be bound in the context $\Psi$, here $\al$ should only
be solved to a monotype bound in $\Gamma$. More generally, for any algorithmic context $\Gamma[\al]$, the algorithmic variable $\al$
can only be solved to a monotype that is well-formed with respect to $\Gamma_L$.
Secondly, Rules $\mathtt{{\le_a}instL}$ and $\mathtt{{\le_a}instR}$ partially
instantiate existential variables $\al$ to function types. The domain and range
of the new function type are undetermined: they are set to two
fresh existential variables $\al[1]$ and $\al[2]$. To make sure that
$\al[1] \to \al[2]$ has the same scope as $\al$, the new variables
$\al[1]$ and $\al[2]$ are inserted at the same position in the context
where the old variable $\al$ was. To propagate the instantiation to the remainder
of the worklist, $\al[1] \to \al[2]$ is substituted for $\al$ in $\Omega$.
The \emph{occurs-check} side-condition is necessary to prevent a diverging
infinite instantiation. For example
$1 \to \al \le \al$ would diverge with no such check.
Note that the algorithm does not instantiate $\al$ directly with
$A \to B$, since that type is not guaranteed to be a monotype,
and such an instantiation would be inconsistent with our predicative declarative system.
Thirdly, in the remaining six rules an existential variable can be immediately
solved. Each of these six similar rules removes an existential variable from the
context, performs a substitution on the remainder of the worklist, and
continues.
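To make the worklist discipline concrete, the following is a small Java sketch, given for illustration only; it is not part of the formal development, assumes Java 17 records, and implements only the first group of rules. The instantiation and solve rules, which additionally substitute into the remaining judgments and manipulate positions in the ordered context, are omitted.
\begin{verbatim}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: types and subtyping judgments for the worklist.
interface Ty {}
record Unit() implements Ty {}
record TVar(String name) implements Ty {}            // type variable a
record ExVar(String name) implements Ty {}           // existential variable ^a
record Arrow(Ty dom, Ty cod) implements Ty {}        // A -> B
record Forall(String var, Ty body) implements Ty {}  // forall a. A
record Judgment(Ty left, Ty right) {}                // A <= B

class WorklistSketch {
  private final Deque<Judgment> work = new ArrayDeque<>(); // the worklist Omega
  private final List<String> context = new ArrayList<>();  // ordered context Gamma
  private int fresh = 0;

  boolean run(Judgment goal) {
    work.push(goal);
    while (!work.isEmpty()) {           // rule a_nil: succeed on the empty worklist
      Judgment j = work.pop();          // take the first judgment
      if (j.left() instanceof Unit && j.right() instanceof Unit) continue; // unit
      if (j.left() instanceof TVar a && j.right() instanceof TVar b
          && a.name().equals(b.name()) && context.contains(a.name())) continue; // var
      if (j.left() instanceof ExVar a && j.right() instanceof ExVar b
          && a.name().equals(b.name()) && context.contains(a.name())) continue; // exvar
      if (j.left() instanceof Arrow f && j.right() instanceof Arrow g) {   // arrow
        work.push(new Judgment(f.cod(), g.cod()));   // A2 <= B2
        work.push(new Judgment(g.dom(), f.dom()));   // B1 <= A1, processed first
        continue;
      }
      if (j.right() instanceof Forall fa) {          // forall R (fa.var() assumed fresh)
        context.add(fa.var());
        work.push(new Judgment(j.left(), fa.body()));
        continue;
      }
      if (j.left() instanceof Forall fa) {           // forall L: open with a fresh ^a
        String alpha = "^a" + fresh++;
        context.add(alpha);
        work.push(new Judgment(subst(fa.body(), fa.var(), new ExVar(alpha)), j.right()));
        continue;
      }
      return false; // the instL/instR and solve rules would be needed here
    }
    return true;
  }

  private static Ty subst(Ty t, String var, Ty by) {  // [by / var] t
    if (t instanceof TVar v) return v.name().equals(var) ? by : t;
    if (t instanceof Arrow a)
      return new Arrow(subst(a.dom(), var, by), subst(a.cod(), var, by));
    if (t instanceof Forall f && !f.var().equals(var))
      return new Forall(f.var(), subst(f.body(), var, by));
    return t; // Unit, ExVar, or a shadowing Forall
  }
}
\end{verbatim}
The essential shape matches Figure~\ref{fig:ITP:alg}: each iteration removes the first judgment from the worklist and may push new judgments, until the worklist is empty.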
The algorithm on the judgment list is designed to share a single context across all judgments.
The declarative system, however, does not share one context across its derivation.
This gap is bridged by strengthening and weakening lemmas for both systems,
most of which are straightforward to prove,
except for the strengthening lemma of the declarative system, which is a little trickier.
\begin{figure}[t]
$$
\inferrule*[Right=$\mathtt{{\le_a}\forall L}$]
{\inferrule*[Right=$\mathtt{{\le_a}{\to}}$]
{\inferrule*[Right=$\mathtt{{\le_a}\forall L}$]
{\inferrule*[Right=$\mathtt{{\le_a}instR}$]
{\inferrule*[Right=$\mathtt{{\le_a}{\to}}$]
{\inferrule*[Right=$\mathtt{{\le_a}solve\_ex}$]
{\inferrule*[Right=$\mathtt{{\le_a}solve\_ex}$]
{\inferrule*[Right=$\mathtt{{\le_a}unit}$]
{\inferrule*[Right=$\mathtt{a\_nil}$]
{~}
{\al[1] \vdash \cdot}
}
{\al[1] \vdash \jcons{1 \le 1}{\cdot}}
}
{\al[1], \al[2] \vdash \jcons{\al[1] \le \al[2]}{\jcons{1 \le 1}{\cdot}}}
}
{\al[1], \al[2], \bt \vdash \jcons{\al[1] \le \bt}{\jcons{\bt \le \al[2]}{\jcons{1 \le 1}{\cdot}}}}
}
{\al[1], \al[2], \bt \vdash \jcons{\bt \to \bt \le \al[1] \to \al[2]}{\jcons{1 \le 1}{\cdot}}}
}
{\al, \bt \vdash \jcons{\bt \to \bt \le \al}{\jcons{1 \le 1}{\cdot}}}
}
{\al \vdash \jcons{\forall a.\ a \to a \le \al}{\jcons{1 \le 1}{\cdot}}}
}
{\al \vdash \jcons{\al \to 1 \le (\forall a.\ a \to a) \to 1}{\cdot}}
}
{\cdot \vdash \jcons{\forall a.\ a\to 1\le (\forall a.\ a\to a)\to 1}{\cdot }}
$$
%\label{fig:alg_sample}
\caption{A Success Derivation for the Algorithmic Subtyping Relation}
\label{fig:alg_sample_success}
\end{figure}
\begin{figure}[t]
$$
\inferrule*[Right=$\mathtt{{\le_a}\forall L}$]
{\inferrule*[Right=$\mathtt{{\le_a}{\to}}$]
{\inferrule*[Right=$\mathtt{{\le_a}unit}$]
{\inferrule*[Right=$\mathtt{{\le_a}\forall R}$]
{\inferrule*[Right=$\mathtt{?}$]
{stuck
}
{\al, b \vdash \jcons{\al \le b}{\cdot}}
}
{\al \vdash {\jcons{\al \le \forall b.\ b}{\cdot}}}
}
{\al \vdash \jcons{1\le 1}{\jcons{\al \le \forall b.\ b}{\cdot}}}
}
{\al \vdash \jcons{1 \to \al \le 1\to \forall b.\ b}{\cdot}}
}
{\cdot\vdash \jcons{\forall a.\ 1\to a \le 1\to \forall b.\ b}{\cdot }}
$$
\caption{A Failing Derivation for the Algorithmic Subtyping Relation}
\label{fig:alg_sample_fail}
\end{figure}
\paragraph{Example}
We illustrate the subtyping rules through a sample derivation in
Figure~\ref{fig:alg_sample_success},
which shows that $\forall a.\ a\to 1\le (\forall a.\ a\to
a)\to 1$. Thus the derivation starts with an empty context and a
judgment list with only one element.
% \begin{center}
% \begin{tabular}{|c|c|c|c|}\hline
% \# & Context & Worklist & Rule\\\hline
% 1&$\cdot$ & $\forall x.\ x\to 1\le (\forall x.\ x\to x)\to 1$ & $\mathtt{{\le_a}\forall L}$\\\hline
% 2&$\al$ & $\al\to 1\le (\forall x.\ x\to x)\to 1$ & $\mathtt{{\le_a}{\to}}$\\\hline
% 3&$\al$ & $\forall x.\ x\to x\le \al : 1\le 1$ & $\mathtt{{\le_a}\forall L}$\\\hline
% 4&$\al,\bt$ & $\bt\to \bt\le \al : 1\le 1$ & $\mathtt{instR}$\\\hline
% 5&$\al[1],\al[2], \bt$ & $\bt\to \bt\le \al[1]\to \al[2] : 1\le 1$ & $\mathtt{{\le_a}{\to}}$\\\hline
% 6&$\al[1],\al[2], \bt$ & $\al[1]\le \bt : \bt\le \al[2] : 1\le 1$ & $\mathtt{{\le_a}solve\_ex (\bt\leftarrow \al[1])}$\\\hline
% 7&$\al[1],\al[2]$ & $\al[1]\le \al[2] : 1\le 1$ & $\mathtt{{\le_a}solve\_ex (\al[2]\leftarrow \al[1])}$\\\hline
% 8&$\al[1]$ & $1\le 1$ & $\mathtt{{\le_a}unit}$\\\hline
% \end{tabular}
% \end{center}
In step 1, we have only one judgment, and that one has a top-level $\forall$ on
the left hand side. So the only choice is Rule $\mathtt{{\le_a}\forall L}$, which
opens the universally quantified type with an unknown existential variable
$\al$. Variable $\al$ will be solved later to some monotype that is well-formed
within the context before $\al$. That is, the empty context $\cdot$ in this
case.
In step 2, Rule $\mathtt{{\le_a}{\to}}$ is applied to the worklist,
splitting the first judgment into two.
Step 3 is similar to step 1, where the left-hand-side $\forall$ of the first
judgment is opened according to Rule $\mathtt{{\le_a}\forall L}$ with a fresh
existential variable.
In step 4, the first judgment has an arrow on the left hand side, but the
right-hand-side type is an existential variable. It is obvious
that $\al$ should be solved to a monotype of the form
$\sigma \to \tau$. Rule $\mathtt{{\le_a}instR}$ implements this, but avoids
guessing $\sigma$ and $\tau$ by ``splitting'' $\al$ into two existential
variables, $\al[1]$ and $\al[2]$, which will be solved to some $\sigma$ and
$\tau$ later.
Step 5 applies Rule $\mathtt{{\le_a}{\to}}$ again. Notice that after the
split, $\bt$ appears in two judgments. When the first $\bt$ is solved
during any step of the derivation, the next $\bt$ will be substituted by that
solution. This propagation mechanism ensures the consistent solution of the
variables, while keeping the context as simple as possible.
Steps 6 and 7 solve existential variables. The existential
variable that is right-most in the context is always solved in terms of the other. Therefore in step 6,
$\bt$ is solved in terms of $\al[1]$, and in step 7, $\al[2]$ is solved in terms of $\al[1]$.
Additionally, in step 6, when $\bt$ is solved, the substitution $[\al[1] /
\bt]$ is propagated to the rest of the judgment list, and thus the second
judgment becomes $\al[1]\le\al[2]$.
Steps 8 and 9 trivially finish the derivation. Notice that $\al[1]$ is not
instantiated at the end. This means that any well-scoped instantiation is fine.
\paragraph{A Failing Derivation} We illustrate the role of ordered contexts through another example: $\forall a.\ 1\to a \le 1\to \forall b.\ b$. From the declarative perspective, $a$ should be instantiated to some $\tau$ first, then $b$ is introduced to the context, so that $b\notin FV(\tau)$. As a result, we cannot find $\tau$ such that $\tau \le b$. Figure~\ref{fig:alg_sample_fail} shows the algorithmic derivation, which also fails due to the scoping---$\al$ is introduced earlier than $b$, thus it cannot be solved to $b$.
| {
"alphanum_fraction": 0.663881211,
"avg_line_length": 44.4485714286,
"ext": "tex",
"hexsha": "ab407142f0016be8fdd2c66342df6391862f5bf0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "JimmyZJX/Dissertation",
"max_forks_repo_path": "Sources/ITP/sec3.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "JimmyZJX/Dissertation",
"max_issues_repo_path": "Sources/ITP/sec3.tex",
"max_line_length": 530,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "823bfe90e4b5cc5b7d90c045670bdf4b087877cf",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "JimmyZJX/Dissertation",
"max_stars_repo_path": "Sources/ITP/sec3.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5364,
"size": 15557
} |
\chapter{IPython Notebooks}
This section contains a copy of each of the IPython notebooks used in this project showing how features were created and plots were generated for each dataset.
\section*{Blob Shape Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}95}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
Warning: Cannot change to a different GUI toolkit: qt. Using osx instead.
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the hologic and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}56}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{hologic\PYZus{}blobs.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{synthetic\PYZus{}blobs.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
Loading the meta data for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create blob features from distribution of blobs
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{hologic\PYZus{}blob\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{features\PYZus{}from\PYZus{}blobs}\PY{p}{(}\PY{n}{hologic}\PY{p}{)}
\PY{n}{phantom\PYZus{}blob\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{features\PYZus{}from\PYZus{}blobs}\PY{p}{(}\PY{n}{phantom}\PY{p}{)}
\end{Verbatim}
    Take a random subset of the real mammograms. This is important so that
each patient is not over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{hologic\PYZus{}blob\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}blob\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}blob\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
    Take a random subset of the phantom mammograms. This is important so
that each case is not over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}61}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}blob\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}blob\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}blob\PYZus{}features}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}blob\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}blob\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}62}]:} blob\_count avg\_radius std\_radius min\_radius \textbackslash{}
p214-010-60001-cr.png 78 19.054538 17.506086 8
p214-010-60005-cl.png 97 20.132590 23.255605 8
p214-010-60008-ml.png 310 18.777405 24.388840 8
p214-010-60012-cr.png 185 13.947419 7.884251 8
p214-010-60013-cr.png 141 21.821567 30.666648 8
max\_radius small\_radius\_count med\_radius\_count \textbackslash{}
p214-010-60001-cr.png 90.509668 68 4
p214-010-60005-cl.png 181.019336 94 2
p214-010-60008-ml.png 181.019336 299 5
p214-010-60012-cr.png 45.254834 152 28
p214-010-60013-cr.png 181.019336 134 1
large\_radius\_count density upper\_dist\_count 25\% \textbackslash{}
p214-010-60001-cr.png 6 40.749811 22 8
p214-010-60005-cl.png 1 41.456308 27 8
p214-010-60008-ml.png 6 37.667995 68 8
p214-010-60012-cr.png 5 47.292289 63 8
p214-010-60013-cr.png 6 44.586708 38 8
50\% 75\%
p214-010-60001-cr.png 11.313708 22.627417
p214-010-60005-cl.png 11.313708 22.627417
p214-010-60008-ml.png 11.313708 16.000000
p214-010-60012-cr.png 11.313708 16.000000
p214-010-60013-cr.png 11.313708 22.627417
\end{Verbatim}
Filter some features, such as the min, to remove noise.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{min\PYZus{}radius}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}
\end{Verbatim}
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
    Compare the distributions of features detected from the real mammograms
and the phantoms using the Kolmogorov-Smirnov two-sample test.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic\PYZus{}blob\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom\PYZus{}blob\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/blob\PYZus{}features\PYZus{}ks.tex}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}64}]:} KS p-value
blob\_count 0.341667 2.774753e-07
avg\_radius 0.847222 1.360953e-42
std\_radius 0.711111 3.929143e-30
max\_radius 0.363889 3.327680e-08
small\_radius\_count 0.319444 2.024353e-06
med\_radius\_count 0.338889 3.583275e-07
large\_radius\_count 0.733333 5.112303e-32
density 0.169444 4.114705e-02
upper\_dist\_count 0.345833 1.883393e-07
25\% 0.358333 5.726005e-08
50\% 0.743056 7.334764e-33
75\% 0.777778 5.796838e-36
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
    Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}blob\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}blob\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}66}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{30}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}67}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.641093
[t-SNE] Error after 72 iterations with early exaggeration: 12.697723
[t-SNE] Error after 144 iterations: 0.495631
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/blob\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
    Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}98}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.641093
[t-SNE] Error after 100 iterations with early exaggeration: 14.272421
[t-SNE] Error after 300 iterations: 2.104165
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}99}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}99}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10cca73d0>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
    Running Isomap to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}71}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}72}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}73}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/blob\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}74}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}100}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}100}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10c306810>
\end{Verbatim}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
    Running locally linear embedding to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}76}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}77}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}78}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/blob\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}79}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}101}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,}\PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}101}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10d9d7810>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
    Assess the quality of the dimensionality reduction against measurements from the
co-ranking matrices. First, create co-ranking matrices for each of the
dimensionality reduction mappings.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}81}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{50}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}82}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
    \subsubsection{2D Mappings}\label{2d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}83}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}84}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}85}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}86}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}88}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{blob-3d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}89}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}90}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}91}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}92}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}93}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)} \PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}94}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/blob\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{blob-analysis_files/blob-analysis_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\section*{Blob Intensity Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}131}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the Hologic (real) and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}82}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{real\PYZus{}intensity.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{synthetic\PYZus{}intensity.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{phantom}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\end{Verbatim}
Loading the metadata for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}83}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,}
\PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}84}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create intensity features from the distribution of detected blobs.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}85}]:} \PY{n}{hologic\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{group\PYZus{}by\PYZus{}scale\PYZus{}space}\PY{p}{(}\PY{n}{hologic}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{group\PYZus{}by\PYZus{}scale\PYZus{}space}\PY{p}{(}\PY{n}{phantom}\PY{p}{)}
\end{Verbatim}
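The feature construction above is handled by
\texttt{mia.analysis.group\_by\_scale\_space}. Judging from the columns of the
feature table shown below (count, mean, standard deviation, quartiles, skew and
kurtosis, with one suffix per scale), it summarises the blob measurements of
each image at every scale. A rough pandas sketch of that idea, using
hypothetical column names rather than the \texttt{mia} implementation, is:
\begin{Verbatim}
import pandas as pd

def summarise_per_scale(blobs, value_col='intensity',
                        image_col='img_name', scale_col='scale'):
    """Per-image descriptive statistics of a blob measurement at each scale."""
    grouped = blobs.groupby([image_col, scale_col])[value_col]
    stats = grouped.agg(['count', 'mean', 'std', 'min', 'max',
                         'skew', pd.Series.kurt])
    # Pivot the scale level into the columns: one row per image,
    # one block of statistics per scale.
    return stats.unstack(scale_col)
\end{Verbatim}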
Take a random subset of the real mammograms. This is important so that
no patient is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}86}]:} \PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
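\texttt{mia.analysis.create\_random\_subset} presumably draws one image per
patient at random so that patients with many images do not dominate the
analysis. A minimal pandas sketch of that idea, under that assumption, is:
\begin{Verbatim}
def one_per_group(features, group_col, random_state=None):
    """Keep a single randomly chosen row per group (e.g. per patient_id)."""
    sampled = (features.groupby(group_col, group_keys=False)
                       .apply(lambda g: g.sample(n=1,
                                                 random_state=random_state)))
    # Drop the grouping column so only the feature columns remain.
    return sampled.drop(group_col, axis=1)
\end{Verbatim}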
Take a random subset of the phantom mammograms. This is important so
that no phantom case is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset} \PYZbs{}
\PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}88}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}88}]:} count mean std min 25\% \textbackslash{}
p214-010-60001-cr.png 256 0.558904 0.087279 0.328763 0.510450
p214-010-60005-ml.png 256 0.579815 0.090863 0.314655 0.530318
p214-010-60008-cr.png 256 0.493326 0.074813 0.296470 0.447373
p214-010-60012-ml.png 256 0.469238 0.081322 0.254982 0.415496
p214-010-60013-mr.png 256 0.524458 0.087562 0.285158 0.467286
50\% 75\% max skew kurtosis \textbackslash{}
p214-010-60001-cr.png 0.570363 0.622144 0.724731 -0.406150 -0.127657
p214-010-60005-ml.png 0.591799 0.644746 0.759436 -0.575184 0.634554
p214-010-60008-cr.png 0.496014 0.544188 0.679322 -0.100948 0.021890
p214-010-60012-ml.png 0.473583 0.527386 0.660004 -0.204323 -0.192516
p214-010-60013-mr.png 0.534160 0.588593 0.704526 -0.407108 0.041762
\ldots count\_9 mean\_9 std\_9 min\_9 \textbackslash{}
p214-010-60001-cr.png \ldots 131044 0.541212 0.118465 0.141845
p214-010-60005-ml.png \ldots 131044 0.541212 0.118465 0.141845
p214-010-60008-cr.png \ldots 131044 0.523970 0.099598 0.148544
p214-010-60012-ml.png \ldots 131044 0.541212 0.118465 0.141845
p214-010-60013-mr.png \ldots 131044 0.554975 0.131461 0.138075
25\%\_9 50\%\_9 75\%\_9 max\_9 skew\_9 \textbackslash{}
p214-010-60001-cr.png 0.462988 0.546140 0.624509 0.921247 -0.150691
p214-010-60005-ml.png 0.462988 0.546140 0.624509 0.921247 -0.150691
p214-010-60008-cr.png 0.455340 0.521359 0.589320 0.936893 -0.017659
p214-010-60012-ml.png 0.462988 0.546140 0.624509 0.921247 -0.150691
p214-010-60013-mr.png 0.463040 0.552301 0.645746 0.949791 0.038416
kurtosis\_9
p214-010-60001-cr.png 0.148522
p214-010-60005-ml.png 0.148522
p214-010-60008-cr.png 0.898368
p214-010-60012-ml.png 0.148522
p214-010-60013-mr.png -0.374100
[5 rows x 100 columns]
\end{Verbatim}
Features such as the per-scale minimum could be filtered out here to remove noise; in this run all features are retained.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}89}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
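If filtering were applied at this point, noisy columns such as the per-scale
minimum could be dropped with a simple column filter. A hypothetical example
(not part of the recorded analysis, which keeps every column) is:
\begin{Verbatim}
# Drop the 'min', 'min_1', ..., 'min_9' columns across all scales.
noisy_columns = selected_features.filter(regex=r'^min(_\d+)?$').columns
selected_features = selected_features.drop(noisy_columns, axis=1)
\end{Verbatim}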
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
Compare the distributions of features detected from the real mammograms
and the phantoms using the two-sample Kolmogorov-Smirnov test.
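For reference, the two-sample Kolmogorov-Smirnov statistic is the largest
vertical distance between the two empirical distribution functions,
\[
D_{n,m} = \sup_x \left| F_{\mathrm{real},n}(x) - F_{\mathrm{syn},m}(x) \right|,
\]
so a value close to one, together with a very small p-value, indicates that the
real and synthetic distributions of that feature barely overlap.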
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}90}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/intensity\PYZus{}features\PYZus{}ks.tex}\PY{l+s}{\PYZdq{}}\PY{p}{,} \PY{n}{longtable}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}90}]:} KS p-value
count 0.000000 1.000000e+00
mean 1.000000 3.587622e-59
std 0.876389 1.515491e-45
min 1.000000 3.587622e-59
25\% 1.000000 3.587622e-59
50\% 1.000000 3.587622e-59
75\% 1.000000 3.587622e-59
max 1.000000 3.587622e-59
skew 0.891667 3.923764e-47
kurtosis 0.568056 2.209941e-19
count\_1 0.000000 1.000000e+00
mean\_1 1.000000 3.587622e-59
std\_1 0.981944 4.539903e-57
min\_1 1.000000 3.587622e-59
25\%\_1 1.000000 3.587622e-59
50\%\_1 1.000000 3.587622e-59
75\%\_1 1.000000 3.587622e-59
max\_1 0.997222 7.598385e-59
skew\_1 0.784722 1.335838e-36
kurtosis\_1 0.466667 3.216681e-13
count\_2 0.000000 1.000000e+00
mean\_2 1.000000 3.587622e-59
std\_2 1.000000 3.587622e-59
min\_2 1.000000 3.587622e-59
25\%\_2 1.000000 3.587622e-59
50\%\_2 1.000000 3.587622e-59
75\%\_2 1.000000 3.587622e-59
max\_2 0.994444 1.605940e-58
skew\_2 0.501389 3.410114e-15
kurtosis\_2 0.436111 1.342503e-11
\ldots \ldots \ldots
count\_7 0.000000 1.000000e+00
mean\_7 1.000000 3.587622e-59
std\_7 0.955556 4.577742e-54
min\_7 1.000000 3.587622e-59
25\%\_7 1.000000 3.587622e-59
50\%\_7 1.000000 3.587622e-59
75\%\_7 1.000000 3.587622e-59
max\_7 0.740278 1.280685e-32
skew\_7 0.319444 2.024353e-06
kurtosis\_7 0.304167 7.344875e-06
count\_8 0.000000 1.000000e+00
mean\_8 1.000000 3.587622e-59
std\_8 0.929167 3.823290e-51
min\_8 1.000000 3.587622e-59
25\%\_8 1.000000 3.587622e-59
50\%\_8 1.000000 3.587622e-59
75\%\_8 1.000000 3.587622e-59
max\_8 0.736111 2.943248e-32
skew\_8 0.547222 5.120902e-18
kurtosis\_8 0.395833 1.248634e-09
count\_9 0.000000 1.000000e+00
mean\_9 1.000000 3.587622e-59
std\_9 0.956944 3.195999e-54
min\_9 1.000000 3.587622e-59
25\%\_9 1.000000 3.587622e-59
50\%\_9 1.000000 3.587622e-59
75\%\_9 0.997222 7.598385e-59
max\_9 0.794444 1.674259e-37
skew\_9 0.702778 1.934059e-29
kurtosis\_9 0.891667 3.923764e-47
[100 rows x 2 columns]
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}91}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}92}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{20}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}93}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 2.693481
[t-SNE] Error after 65 iterations with early exaggeration: 12.552178
[t-SNE] Error after 136 iterations: 1.190411
\end{Verbatim}
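The verbose log above matches the output of scikit-learn's \texttt{TSNE}, which
\texttt{mia.analysis.tSNE} presumably wraps. Under that assumption, a rough
stand-alone equivalent of this step would be:
\begin{Verbatim}
from sklearn.manifold import TSNE

# Assumes mia.analysis.tSNE forwards these keyword arguments to scikit-learn
# and re-indexes the resulting array like the input DataFrame.
tsne = TSNE(n_components=2, learning_rate=200, perplexity=20, verbose=1)
SNE_mapping_2d = tsne.fit_transform(selected_features.values)
\end{Verbatim}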
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}94}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/intensity\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}95}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 2.693481
[t-SNE] Error after 100 iterations with early exaggeration: 16.755029
[t-SNE] Error after 314 iterations: 2.633436
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}127}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}127}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x108bf72d0>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
Running Isomap to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}97}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}98}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}99}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/intensity\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}100}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}129}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}129}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x108ea9090>
\end{Verbatim}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
Running locally linear embedding to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}102}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}103}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}104}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/intensity\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}105}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}130}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}130}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10a9fe950>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
Assess the quality of the dimensionality reduction using measures derived
from the co-ranking matrices. First, create a co-ranking matrix for each of
the dimensionality reduction mappings.
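Trustworthiness, continuity and the LCMC are all read off the co-ranking
matrix $Q$, whose entry $Q_{kl}$ counts the point pairs that have rank $l$ in
the original space and rank $k$ in the embedding. As a sketch of the simplest
of these measures, and assuming that indexing convention (the
\texttt{mia.coranking} implementation may differ), the LCMC at neighbourhood
size $K$ can be computed as:
\begin{Verbatim}
import numpy as np

def lcmc(Q, K):
    """Local Continuity Meta-Criterion from an (N-1)x(N-1) co-ranking matrix."""
    N = Q.shape[0] + 1
    # Fraction of the K nearest neighbours preserved by the mapping.
    q_nx = Q[:K, :K].sum() / float(K * N)
    # Subtract the overlap expected from a random embedding.
    return q_nx - K / float(N - 1)
\end{Verbatim}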
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}107}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{50}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}108}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
\subsubsection{2D Mappings}\label{intensity-2d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}109}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}110}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}111}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}112}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}113}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}114}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{intensity-3d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}115}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}116}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}117}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}118}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}119}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}120}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/intensity\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis_files/intensity-analysis_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\section*{Blob Texture Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}97}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the Hologic (real) and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}56}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{real\PYZus{}texture.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{synthetic\PYZus{}texture.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{phantom}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\end{Verbatim}
Loading the metadata for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,}
\PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create texture features from the distribution of detected blobs.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{hologic\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{group\PYZus{}by\PYZus{}scale\PYZus{}space}\PY{p}{(}\PY{n}{hologic}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{group\PYZus{}by\PYZus{}scale\PYZus{}space}\PY{p}{(}\PY{n}{phantom}\PY{p}{)}
\end{Verbatim}
Take a random subset of the real mammograms. This is important so that
no patient is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Take a random subset of the phantom mammograms. This is important so
that no phantom case is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}61}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}62}]:} contrast dissimilarity homogeneity energy \textbackslash{}
p214-010-60001-ml.png 217.490041 11.182093 0.089156 0.068559
p214-010-60005-cr.png 153.433967 9.793844 0.097550 0.069263
p214-010-60008-cl.png 278.832070 12.986904 0.077585 0.069229
p214-010-60012-cl.png 228.203678 11.830961 0.083644 0.068756
p214-010-60013-ml.png 233.480009 12.147959 0.081455 0.067923
contrast\_1 dissimilarity\_1 homogeneity\_1 energy\_1 \textbackslash{}
p214-010-60001-ml.png 255.275936 11.383161 0.091460 0.052581
p214-010-60005-cr.png 258.457447 9.835271 0.120798 0.066882
p214-010-60008-cl.png 277.870310 12.970713 0.079270 0.052845
p214-010-60012-cl.png 236.048146 11.893532 0.084507 0.050689
p214-010-60013-ml.png 243.073751 12.150686 0.083336 0.051553
contrast\_2 dissimilarity\_2 \ldots homogeneity\_7 \textbackslash{}
p214-010-60001-ml.png 223.895510 10.859934 \ldots 0.094498
p214-010-60005-cr.png 148.101614 9.619357 \ldots 0.133482
p214-010-60008-cl.png 277.135918 12.925121 \ldots 0.077955
p214-010-60012-cl.png 205.001806 11.381168 \ldots 0.108649
p214-010-60013-ml.png 221.442525 11.778895 \ldots 0.082859
energy\_7 contrast\_8 dissimilarity\_8 homogeneity\_8 \textbackslash{}
p214-010-60001-ml.png 0.014957 171.293827 10.346745 0.094648
p214-010-60005-cr.png 0.019024 206.110004 7.723766 0.155662
p214-010-60008-cl.png 0.019611 262.540916 12.908503 0.077737
p214-010-60012-cl.png 0.025046 188.258062 10.873996 0.089765
p214-010-60013-ml.png 0.014600 230.437848 12.089526 0.081013
energy\_8 contrast\_9 dissimilarity\_9 homogeneity\_9 \textbackslash{}
p214-010-60001-ml.png 0.014245 157.454233 9.781551 0.105151
p214-010-60005-cr.png 0.028688 132.380141 9.052372 0.108430
p214-010-60008-cl.png 0.018621 257.790604 12.788839 0.078467
p214-010-60012-cl.png 0.014025 157.454233 9.781551 0.105151
p214-010-60013-ml.png 0.015532 203.846013 11.353827 0.086053
energy\_9
p214-010-60001-ml.png 0.019975
p214-010-60005-cr.png 0.015630
p214-010-60008-cl.png 0.019016
p214-010-60012-cl.png 0.019975
p214-010-60013-ml.png 0.013723
[5 rows x 40 columns]
\end{Verbatim}
Features such as the per-scale minimum could be filtered out here to remove noise; in this run all features are retained.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
Compare the distributions of features detected from the real mammograms
and the phantoms using the two-sample Kolmogorov-Smirnov test.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/texture\PYZus{}features\PYZus{}ks.tex}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}64}]:} KS p-value
contrast 0.381944 5.383186e-09
dissimilarity 1.000000 3.587622e-59
homogeneity 1.000000 3.587622e-59
energy 1.000000 3.587622e-59
contrast\_1 0.586111 1.318746e-20
dissimilarity\_1 1.000000 3.587622e-59
homogeneity\_1 1.000000 3.587622e-59
energy\_1 1.000000 3.587622e-59
contrast\_2 0.863889 2.874007e-44
dissimilarity\_2 1.000000 3.587622e-59
homogeneity\_2 1.000000 3.587622e-59
energy\_2 1.000000 3.587622e-59
contrast\_3 0.923611 1.538596e-50
dissimilarity\_3 1.000000 3.587622e-59
homogeneity\_3 1.000000 3.587622e-59
energy\_3 1.000000 3.587622e-59
contrast\_4 0.845833 1.870608e-42
dissimilarity\_4 1.000000 3.587622e-59
homogeneity\_4 1.000000 3.587622e-59
energy\_4 1.000000 3.587622e-59
contrast\_5 0.979167 9.485680e-57
dissimilarity\_5 1.000000 3.587622e-59
homogeneity\_5 1.000000 3.587622e-59
energy\_5 1.000000 3.587622e-59
contrast\_6 1.000000 3.587622e-59
dissimilarity\_6 1.000000 3.587622e-59
homogeneity\_6 1.000000 3.587622e-59
energy\_6 0.994444 1.605940e-58
contrast\_7 0.969444 1.230285e-55
dissimilarity\_7 1.000000 3.587622e-59
homogeneity\_7 1.000000 3.587622e-59
energy\_7 1.000000 3.587622e-59
contrast\_8 0.986111 1.497318e-57
dissimilarity\_8 1.000000 3.587622e-59
homogeneity\_8 1.000000 3.587622e-59
energy\_8 0.997222 7.598385e-59
contrast\_9 1.000000 3.587622e-59
dissimilarity\_9 1.000000 3.587622e-59
homogeneity\_9 1.000000 3.587622e-59
energy\_9 1.000000 3.587622e-59
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}66}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{20}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}67}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.192292
[t-SNE] Error after 83 iterations with early exaggeration: 11.373997
[t-SNE] Error after 141 iterations: 0.460456
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/texture\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}69}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.192292
[t-SNE] Error after 100 iterations with early exaggeration: 16.345359
[t-SNE] Error after 301 iterations: 2.602024
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}98}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}98}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10d6cc350>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
Running Isomap to obtain a two-dimensional mapping.
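Like the t-SNE wrapper above, the Isomap call here and the locally linear embedding call in the following subsection appear to be thin wrappers around scikit-learn's manifold module. A hypothetical sketch, in which the wrapper behaviour and return types are assumptions:
\begin{verbatim}
# Hypothetical sketch, assuming mia.analysis.isomap and mia.analysis.lle wrap
# sklearn.manifold.Isomap and sklearn.manifold.LocallyLinearEmbedding.
import pandas as pd
from sklearn.manifold import Isomap, LocallyLinearEmbedding

def isomap_sketch(features, n_components=2, **kwargs):
    model = Isomap(n_components=n_components, **kwargs)
    return pd.DataFrame(model.fit_transform(features.values),
                        index=features.index)

def lle_sketch(features, n_components=2, **kwargs):
    model = LocallyLinearEmbedding(n_components=n_components, **kwargs)
    return pd.DataFrame(model.fit_transform(features.values),
                        index=features.index)
\end{verbatim}
The \texttt{n\_neighbors} keyword argument controls the size of the neighbourhood graph in both cases.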
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}71}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}72}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}73}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/texture\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}74}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}101}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}101}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10ee15750>
\end{Verbatim}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
Running locally linear embedding to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}76}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{5}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}77}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}78}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/texture\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}79}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}100}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}100}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10c0ef1d0>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
Assess the quality of the dimensionality reduction using measures computed from the
co-ranking matrices. First, create a co-ranking matrix for each of the
dimensionality reduction mappings.
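The measures used below are assumed to follow the standard definitions based on the co-ranking matrix $Q$, where $Q_{kl}$ counts the point pairs whose neighbour rank is $k$ in the original feature space and $l$ in the embedding. With $N$ points, $\rho_{ij}$ the rank of point $j$ with respect to point $i$ in the original space and $r_{ij}$ the corresponding rank in the embedding, trustworthiness and continuity are
\[
T(k) = 1 - \frac{2}{Nk(2N - 3k - 1)} \sum_{i=1}^{N} \sum_{j \in U_k(i)} \left(\rho_{ij} - k\right),
\qquad
C(k) = 1 - \frac{2}{Nk(2N - 3k - 1)} \sum_{i=1}^{N} \sum_{j \in V_k(i)} \left(r_{ij} - k\right),
\]
where $U_k(i)$ contains the points among the $k$ nearest neighbours of $i$ in the embedding but not in the original space, and $V_k(i)$ contains those among the $k$ nearest neighbours in the original space but not in the embedding. The local continuity meta-criterion is
\[
\mathrm{LCMC}(k) = \frac{1}{kN} \sum_{p=1}^{k} \sum_{q=1}^{k} Q_{pq} - \frac{k}{N-1}.
\]
Higher values indicate better preservation of the local neighbourhoods; the implementations in \texttt{mia.coranking} are assumed to match these definitions.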
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}81}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{10}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}82}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
\subsubsection{2D Mappings}\label{texture-2d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}83}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}84}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}85}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}86}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}88}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{texture-3d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}89}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}90}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}91}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}92}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}93}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}94}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/texture\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis_files/texture-analysis_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\section*{Line Shape Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}86}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the Hologic and synthetic line datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{real\PYZhy{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{phantom\PYZhy{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\end{Verbatim}
Loading the metadata for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}43}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,}
\PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}44}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create line features from the distribution of detected lines.
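The definition of \texttt{features\_from\_lines} is not shown in this notebook. Judging by the feature columns that appear later in this section (count, mean, standard deviation, quartiles, maximum, skew and kurtosis), it summarises the distribution of a per-line measurement for each image. A hypothetical sketch, in which the measurement column name \texttt{area} and the helper itself are assumptions:
\begin{verbatim}
# Hypothetical sketch of per-image distribution features. The real
# mia.analysis.features_from_lines may use a different measurement column and
# adds further columns such as upper_dist_count.
import pandas as pd

def features_from_lines_sketch(lines, column='area'):
    grouped = lines.groupby(lines.index)[column]
    feats = grouped.agg(['count', 'mean', 'std', 'min', 'max', 'skew'])
    for q in (0.25, 0.50, 0.75):
        feats['%d%%' % int(q * 100)] = grouped.quantile(q)
    feats['kurtosis'] = grouped.apply(pd.Series.kurt)
    return feats
\end{verbatim}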
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{hologic\PYZus{}line\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{features\PYZus{}from\PYZus{}lines}\PY{p}{(}\PY{n}{hologic}\PY{p}{)}
\PY{n}{phantom\PYZus{}line\PYZus{}features} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{features\PYZus{}from\PYZus{}lines}\PY{p}{(}\PY{n}{phantom}\PY{p}{)}
\end{Verbatim}
Take a random subset of the real mammograms. This is important so that no
single patient is over-represented.
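\texttt{create\_random\_subset} is assumed to draw one image per patient (or per phantom case below) at random, roughly along the lines of this sketch; the exact sampling strategy is an assumption:
\begin{verbatim}
# Hypothetical sketch: sample one row per group so that no patient or phantom
# case dominates the combined data set.
def create_random_subset_sketch(frame, group_column):
    sampled = (frame.groupby(group_column, group_keys=False)
                    .apply(lambda group: group.sample(n=1)))
    return sampled.drop(group_column, axis=1)
\end{verbatim}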
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{n}{hologic\PYZus{}line\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}line\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}line\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Take a random subset of the phantom mammograms. This is important so that no
single case is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}line\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}line\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}line\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}line\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}line\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}48}]:} count mean std min 25\% 50\% 75\% \textbackslash{}
p214-010-60001-cl.png 72 161.791667 245.194659 1 61.5 94.5 137.25
p214-010-60005-ml.png 124 177.153226 252.405502 1 57.0 91.5 174.75
p214-010-60008-cl.png 105 99.695238 77.001889 1 57.0 76.0 110.00
p214-010-60012-mr.png 213 163.037559 315.235968 1 55.0 79.0 158.00
p214-010-60013-cr.png 225 155.368889 180.493203 1 68.0 95.0 162.00
max skew kurtosis upper\_dist\_count
p214-010-60001-cl.png 1744 4.786025 26.793023 16
p214-010-60005-ml.png 1725 3.323887 13.873128 31
p214-010-60008-cl.png 454 2.537117 7.788977 33
p214-010-60012-mr.png 3677 7.384801 74.429175 52
p214-010-60013-cr.png 1285 3.584626 15.784599 60
\end{Verbatim}
Filter out noisy features, such as the minimum, and replace any missing values with zero.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}49}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{min}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)}
\PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{fillna}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\end{Verbatim}
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
Compare the distributions of features detected from the real mammograms
and the phantoms using the Kolmogorov-Smirnov two-sample test.
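For two samples with empirical distribution functions $F_{1,n}$ and $F_{2,m}$, the statistic reported in the \texttt{KS} column is
\[
D_{n,m} = \sup_x \left| F_{1,n}(x) - F_{2,m}(x) \right|,
\]
and the p-value is the probability of observing a statistic at least this large under the null hypothesis that both samples come from the same distribution, so a large statistic with a small p-value indicates that the real and synthetic feature distributions differ. \texttt{scipy.stats.ks\_2samp} returns exactly this pair of values.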
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}50}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic\PYZus{}line\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom\PYZus{}line\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/line\PYZus{}features\PYZus{}ks.tex}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}50}]:} KS p-value
count 0.933333 1.338265e-51
mean 0.593056 4.356206e-21
std 0.654167 1.450514e-25
25\% 0.143056 1.255126e-01
50\% 0.304167 7.344875e-06
75\% 0.508333 1.320817e-15
max 0.737500 2.231475e-32
skew 0.480556 5.426886e-14
kurtosis 0.506944 1.598384e-15
upper\_dist\_count 0.913889 1.724254e-49
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}line\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}line\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}52}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{20}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}53}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.347012
[t-SNE] Error after 65 iterations with early exaggeration: 12.391945
[t-SNE] Error after 132 iterations: 0.662618
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}54}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}55}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 1.347012
[t-SNE] Error after 100 iterations with early exaggeration: 16.628509
[t-SNE] Error after 302 iterations: 2.665706
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}83}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}83}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x110b11910>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
Running Isomap to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}87}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x1128091d0>
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_35_1.png}
\end{center}
{ \hspace*{\fill} \\}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
Running locally linear embedding to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}85}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}85}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x10902cf90>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
Assess the quality of the dimensionality reduction using measures computed from the
co-ranking matrices. First, create a co-ranking matrix for each of the
dimensionality reduction mappings.
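For reference, a minimal sketch of how a co-ranking matrix can be built from the pairwise distance ranks in the high- and low-dimensional spaces; \texttt{mia.coranking.coranking\_matrix} is assumed to follow the same standard construction.
\begin{verbatim}
# Minimal sketch of the standard co-ranking matrix construction: Q[k-1, l-1]
# counts point pairs with neighbour rank k in the original space and rank l in
# the embedding. Assumes mia.coranking.coranking_matrix does the same.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rank_matrix(X):
    # Rank every point with respect to every other point by distance
    # (rank 0 is the point itself).
    D = squareform(pdist(X))
    return D.argsort(axis=1).argsort(axis=1)

def coranking_matrix_sketch(high_data, low_data):
    n = len(high_data)
    high_ranks = rank_matrix(np.asarray(high_data, dtype=float))
    low_ranks = rank_matrix(np.asarray(low_data, dtype=float))
    Q, _, _ = np.histogram2d(high_ranks.ravel(), low_ranks.ravel(),
                             bins=n, range=[[0, n], [0, n]])
    return Q[1:, 1:]  # drop rank 0, i.e. each point paired with itself
\end{verbatim}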
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}67}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{50}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
\subsubsection{2D Mappings}\label{line-2d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}69}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}70}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}71}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}72}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}73}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}74}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{line-3d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}75}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}76}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}77}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}78}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}79}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}80}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{line-analysis_files/line-analysis_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\section*{Line Intensity Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}83}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the Hologic and synthetic line intensity datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{real\PYZus{}intensity\PYZus{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{synthetic\PYZus{}intensity\PYZus{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{n}{phantom}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{p}{:}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\end{Verbatim}
Loading the metadata for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}43}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,}
\PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}44}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create intensity features by averaging the per-line intensity statistics for each image.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{hologic\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{hologic}\PY{p}{[}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{:}\PY{p}{]}\PY{p}{]}
\PY{n}{hologic\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{n}{hologic}\PY{o}{.}\PY{n}{index}\PY{p}{)}\PY{o}{.}\PY{n}{agg}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{phantom}\PY{p}{[}\PY{n}{phantom}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{l+m+mi}{4}\PY{p}{:}\PY{p}{]}\PY{p}{]}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features} \PY{o}{=} \PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{n}{phantom}\PY{o}{.}\PY{n}{index}\PY{p}{)}\PY{o}{.}\PY{n}{agg}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{)}
\end{Verbatim}
Take a random subset of the real mammograms. This is important so that
no single patient is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
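The body of \texttt{mia.analysis.create\_random\_subset} is not shown in this appendix. A minimal pandas sketch of the per-group sampling idea (one randomly chosen row per \texttt{patient\_id}; the real implementation may sample differently) is:
\begin{Verbatim}
def random_subset(df, group_col, seed=0):
    """Keep one randomly sampled row per group to avoid over-representation."""
    sampled = (df.groupby(group_col, group_keys=False)
                 .apply(lambda g: g.sample(n=1, random_state=seed)))
    return sampled.drop(group_col, axis=1)

# e.g. hologic_subset = random_subset(hologic_intensity_features, 'patient_id')
\end{Verbatim}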
Take a random subset of the phantom mammograms. This is important so
that no single case is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset} \PYZbs{}
\PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}48}]:} mean std min 25\% 50\% \textbackslash{}
p214-010-60001-cl.png 0.362584 0.099563 0.176311 0.290825 0.350695
p214-010-60005-cr.png 0.390749 0.115338 0.178942 0.308598 0.382695
p214-010-60008-mr.png 0.380682 0.068131 0.252233 0.335284 0.374149
p214-010-60012-cr.png 0.310309 0.084479 0.157582 0.252724 0.301992
p214-010-60013-cl.png 0.328995 0.077626 0.185815 0.275454 0.321237
75\% max skew kurtosis
p214-010-60001-cl.png 0.425325 0.651941 0.554375 0.564813
p214-010-60005-cr.png 0.464941 0.671299 0.285397 -0.168101
p214-010-60008-mr.png 0.418809 0.586883 0.533288 0.537177
p214-010-60012-cr.png 0.359711 0.574846 0.591287 0.876965
p214-010-60013-cl.png 0.373890 0.553304 0.527032 0.352804
\end{Verbatim}
Optionally filter out noisy features, such as the min. Here all of the features are retained.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}49}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
Compare the distributions of features detected from the real mammograms
and the phantoms using the Kolmogorov-Smirnov two-sample test.
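For reference, the statistic returned by \texttt{scipy.stats.ks\_2samp} is the largest absolute difference between the two empirical distribution functions,
\[
D_{n,m} = \sup_x \bigl| F_{1,n}(x) - F_{2,m}(x) \bigr| ,
\]
so a value near 1 means the real and synthetic feature distributions barely overlap, while the accompanying p-value tests the null hypothesis that both samples were drawn from the same distribution.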
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}50}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom\PYZus{}intensity\PYZus{}features}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{selected\PYZus{}features}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/line\PYZus{}intensity\PYZus{}features\PYZus{}ks.tex}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}50}]:} KS p-value
mean 1.000000 3.587622e-59
std 0.966667 2.546525e-55
min 1.000000 3.587622e-59
25\% 1.000000 3.587622e-59
50\% 1.000000 3.587622e-59
75\% 1.000000 3.587622e-59
max 1.000000 3.587622e-59
skew 1.000000 3.587622e-59
kurtosis 0.213889 4.106586e-03
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}intensity\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}52}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{20}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
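Here \texttt{perplexity} sets the effective number of neighbours used to build the input similarities and \texttt{learning\_rate} sets the gradient step size. t-SNE itself chooses the low-dimensional points by minimising the Kullback-Leibler divergence between the pairwise similarities $p_{ij}$ in the feature space and $q_{ij}$ in the embedding,
\[
C = \mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}} ,
\]
which is the quantity reported as the ``Error'' in the verbose output below.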
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}53}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 0.714076
[t-SNE] Error after 65 iterations with early exaggeration: 12.750609
[t-SNE] Error after 130 iterations: 0.743695
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}54}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}intensity\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}55}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 0.714076
[t-SNE] Error after 100 iterations with early exaggeration: 16.315305
[t-SNE] Error after 297 iterations: 2.612050
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}84}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}84}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x115f941d0>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
Running Isomap to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
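If \texttt{mia.analysis.isomap} is a thin wrapper around scikit-learn (an assumption; the wrapper may also relabel or rescale its output), the call above corresponds roughly to:
\begin{Verbatim}
from sklearn.manifold import Isomap

# Build a 4-nearest-neighbour graph, approximate geodesic distances
# along it, and embed the points in two dimensions.
iso = Isomap(n_neighbors=4, n_components=2)
iso_mapping_2d = iso.fit_transform(selected_features.values)
\end{Verbatim}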
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}intensity\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}86}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}86}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x117134a90>
\end{Verbatim}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
Running locally linear embedding (LLE) to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
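With \texttt{n\_neighbors = 4}, locally linear embedding first expresses every feature vector $x_i$ as a weighted combination of its four nearest neighbours by minimising the reconstruction error
\[
\varepsilon(W) = \sum_i \Bigl\| x_i - \sum_j W_{ij} x_j \Bigr\|^2 ,
\qquad \sum_j W_{ij} = 1 ,
\]
and then finds low-dimensional points $y_i$ that minimise the same cost with the weights $W_{ij}$ held fixed.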
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/line\PYZus{}intensity\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}87}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x115c09d90>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
Assess the quality of the dimensionality reduction using measures derived
from the co-ranking matrix. First, create co-ranking matrices for each of
the dimensionality reduction mappings.
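The co-ranking matrix $Q$ compares neighbourhood ranks before and after the mapping: entry $Q_{kl}$ counts the pairs whose rank is $k$ in the original feature space and $l$ in the embedding,
\[
Q_{kl} = \bigl| \{ (i,j) : \rho_{ij} = k \ \text{and} \ r_{ij} = l \} \bigr| .
\]
Trustworthiness penalises points that intrude into a $K$-neighbourhood only in the embedding, continuity penalises points that leave it, and the local continuity meta-criterion (LCMC) measures the average overlap of the two $K$-neighbourhoods,
\[
\mathrm{LCMC}(K) = \frac{1}{KN} \sum_{k=1}^{K} \sum_{l=1}^{K} Q_{kl} \;-\; \frac{K}{N-1} ,
\]
where $N$ is the number of samples.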
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}67}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{50}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
\subsubsection{2D Mappings}\label{d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}69}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}70}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}71}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}72}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}73}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}74}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}75}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}76}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}77}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}78}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}79}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}80}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/line\PYZus{}intensity\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{intensity-analysis-lines_files/intensity-analysis-lines_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
\section*{Line Texture Analysis}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}89}]:} \PY{o}{\PYZpc{}}\PY{k}{matplotlib} \PY{n}{inline}
\PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{as} \PY{n+nn}{pd}
\PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k+kn}{as} \PY{n+nn}{np}
\PY{k+kn}{import} \PY{n+nn}{scipy.stats} \PY{k+kn}{as} \PY{n+nn}{stats}
\PY{k+kn}{import} \PY{n+nn}{matplotlib.pyplot} \PY{k+kn}{as} \PY{n+nn}{plt}
\PY{k+kn}{import} \PY{n+nn}{mia}
\end{Verbatim}
\section{Loading and Preprocessing}\label{loading-and-preprocessing}
Loading the Hologic and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}42}]:} \PY{n}{hologic} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{real\PYZus{}texture\PYZus{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{c}{\PYZsh{} hologic.drop(hologic.columns[:2], axis=1, inplace=True)}
\PY{n}{hologic}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\PY{n}{phantom} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{o}{.}\PY{n}{from\PYZus{}csv}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{synthetic\PYZus{}texture\PYZus{}lines.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{c}{\PYZsh{} phantom.drop(phantom.columns, axis=1, inplace=True)}
\PY{n}{phantom}\PY{o}{.}\PY{n}{drop}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{breast\PYZus{}area}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{inplace}\PY{o}{=}\PY{n+nb+bp}{True}\PY{p}{)}
\end{Verbatim}
Loading the metadata for the real and synthetic datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}43}]:} \PY{n}{hologic\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}hologic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{hologic}\PY{p}{,} \PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/real\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}synthetic\PYZus{}meta\PYZus{}data}\PY{p}{(}\PY{n}{phantom}\PY{p}{,}
\PY{l+s}{\PYZdq{}}\PY{l+s}{meta\PYZus{}data/synthetic\PYZus{}meta.csv}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{phantom\PYZus{}meta}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZsq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZsq{}}
\end{Verbatim}
Prepare the BI-RADS/VBD labels for both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}44}]:} \PY{n}{hologic\PYZus{}labels} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{BIRADS}
\PY{n}{phantom\PYZus{}labels} \PY{o}{=} \PY{n}{phantom\PYZus{}meta}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{VBD.1}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{class\PYZus{}labels} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}labels}\PY{p}{,} \PY{n}{phantom\PYZus{}labels}\PY{p}{]}\PY{p}{)}
\PY{n}{class\PYZus{}labels}\PY{o}{.}\PY{n}{index}\PY{o}{.}\PY{n}{name} \PY{o}{=} \PY{l+s}{\PYZdq{}}\PY{l+s}{img\PYZus{}name}\PY{l+s}{\PYZdq{}}
\PY{n}{labels} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{class\PYZus{}labels}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}
\end{Verbatim}
\section{Creating Features}\label{creating-features}
Create texture features from the distribution of texture measures computed over the detected lines.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}45}]:} \PY{n}{hologic\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{hologic}\PY{p}{[}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{:}\PY{p}{]}\PY{p}{]}
\PY{n}{hologic\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{n}{hologic}\PY{o}{.}\PY{n}{index}\PY{p}{)}\PY{o}{.}\PY{n}{agg}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{phantom}\PY{p}{[}\PY{n}{phantom}\PY{o}{.}\PY{n}{columns}\PY{p}{[}\PY{l+m+mi}{5}\PY{p}{:}\PY{p}{]}\PY{p}{]}
\PY{n}{phantom\PYZus{}texture\PYZus{}features} \PY{o}{=} \PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{o}{.}\PY{n}{groupby}\PY{p}{(}\PY{n}{phantom}\PY{o}{.}\PY{n}{index}\PY{p}{)}\PY{o}{.}\PY{n}{agg}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{mean}\PY{p}{)}
\end{Verbatim}
Take a random subset of the real mammograms. This is important so that
no single patient is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}46}]:} \PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{hologic\PYZus{}meta}\PY{o}{.}\PY{n}{drop\PYZus{}duplicates}\PY{p}{(}\PY{p}{)}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{]}
\PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{hologic\PYZus{}texture\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{patient\PYZus{}id}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Take a random subset of the phantom mammograms. This is important so
that no single case is over-represented.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}47}]:} \PY{n}{syn\PYZus{}feature\PYZus{}meta} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{remove\PYZus{}duplicate\PYZus{}index}\PY{p}{(}\PY{n}{phantom\PYZus{}meta}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{syn\PYZus{}feature\PYZus{}meta}\PY{o}{.}\PY{n}{phantom\PYZus{}name}\PY{o}{.}\PY{n}{tolist}\PY{p}{(}\PY{p}{)}
\PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{create\PYZus{}random\PYZus{}subset}\PY{p}{(}\PY{n}{phantom\PYZus{}texture\PYZus{}features}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{phantom\PYZus{}name}\PY{l+s}{\PYZsq{}}\PY{p}{)}
\end{Verbatim}
Combine the features from both datasets.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}48}]:} \PY{n}{features} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{p}{,} \PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{p}{]}\PY{p}{)}
\PY{k}{assert} \PY{n}{features}\PY{o}{.}\PY{n}{shape}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{o}{==} \PY{l+m+mi}{96}
\PY{n}{features}\PY{o}{.}\PY{n}{head}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}48}]:} contrast dissimilarity homogeneity energy
p214-010-60001-cr.png 141.645681 8.313746 0.112722 0.079576
p214-010-60005-ml.png 160.060707 9.000738 0.107945 0.087887
p214-010-60008-cl.png 100.450716 7.380523 0.131506 0.069471
p214-010-60012-mr.png 147.359186 8.009818 0.097230 0.083729
p214-010-60013-ml.png 114.488132 7.682750 0.124403 0.068205
\end{Verbatim}
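The columns above are standard grey-level co-occurrence matrix (GLCM) statistics. For a normalised co-occurrence matrix $P(i,j)$ they are defined (following the scikit-image conventions, which is an assumption about how these features were computed) as
\[
\text{contrast} = \sum_{i,j} P(i,j)\,(i-j)^2 , \qquad
\text{dissimilarity} = \sum_{i,j} P(i,j)\,\lvert i-j \rvert ,
\]
\[
\text{homogeneity} = \sum_{i,j} \frac{P(i,j)}{1 + (i-j)^2} , \qquad
\text{energy} = \sqrt{\sum_{i,j} P(i,j)^2} .
\]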
Optionally filter out noisy features. Here all of the features are retained.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}49}]:} \PY{n}{selected\PYZus{}features} \PY{o}{=} \PY{n}{features}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)}
\end{Verbatim}
\section{Compare Real and Synthetic
Features}\label{compare-real-and-synthetic-features}
Compare the distributions of features detected from the real mammograms
and the phantoms using the Kolmogorov-Smirnov two-sample test.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}50}]:} \PY{n}{ks\PYZus{}stats} \PY{o}{=} \PY{p}{[}\PY{n+nb}{list}\PY{p}{(}\PY{n}{stats}\PY{o}{.}\PY{n}{ks\PYZus{}2samp}\PY{p}{(}\PY{n}{hologic}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{,}
\PY{n}{phantom}\PY{p}{[}\PY{n}{col}\PY{p}{]}\PY{p}{)}\PY{p}{)}
\PY{k}{for} \PY{n}{col} \PY{o+ow}{in} \PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{]}
\PY{n}{ks\PYZus{}test} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{n}{ks\PYZus{}stats}\PY{p}{,} \PY{n}{columns}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{KS}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{p\PYZhy{}value}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{index}\PY{o}{=}\PY{n}{hologic}\PY{o}{.}\PY{n}{columns}\PY{p}{)}
\PY{n}{ks\PYZus{}test}\PY{o}{.}\PY{n}{to\PYZus{}latex}\PY{p}{(}\PY{l+s}{\PYZdq{}}\PY{l+s}{tables/texture\PYZus{}features\PYZus{}ks\PYZus{}lines.tex}\PY{l+s}{\PYZdq{}}\PY{p}{)}
\PY{n}{ks\PYZus{}test}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}50}]:} KS p-value
area 0.079551 9.106412e-12
min\_row 0.896401 0.000000e+00
min\_col 0.232212 4.582395e-97
max\_row 0.883959 0.000000e+00
max\_col 0.219444 9.914914e-87
contrast 0.897291 0.000000e+00
dissimilarity 0.923237 0.000000e+00
homogeneity 0.928761 0.000000e+00
energy 0.554395 0.000000e+00
\end{Verbatim}
\section{Dimensionality Reduction}\label{dimensionality-reduction}
\subsection{t-SNE}\label{t-sne}
Running t-SNE to obtain a two-dimensional representation.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}51}]:} \PY{n}{real\PYZus{}index} \PY{o}{=} \PY{n}{hologic\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\PY{n}{phantom\PYZus{}index} \PY{o}{=} \PY{n}{phantom\PYZus{}texture\PYZus{}features\PYZus{}subset}\PY{o}{.}\PY{n}{index}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}52}]:} \PY{n}{kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{learning\PYZus{}rate}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{200}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{perplexity}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{20}\PY{p}{,}
\PY{l+s}{\PYZsq{}}\PY{l+s}{verbose}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{1}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}53}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 0.380925
[t-SNE] Error after 65 iterations with early exaggeration: 11.566220
[t-SNE] Error after 129 iterations: 0.686186
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}54}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/lines\PYZus{}texture\PYZus{}SNE\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_26_0.png}
\end{center}
{ \hspace*{\fill} \\}
Running t-SNE to obtain a three-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}55}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{tSNE}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
[t-SNE] Computing pairwise distances\ldots
[t-SNE] Computed conditional probabilities for sample 96 / 96
[t-SNE] Mean sigma: 0.380925
[t-SNE] Error after 100 iterations with early exaggeration: 16.559494
[t-SNE] Error after 296 iterations: 2.686845
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}85}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}85}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x111f55b90>
\end{Verbatim}
\subsection{Isomap}\label{isomap}
Running Isomap to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}57}]:} \PY{n}{iso\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}58}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}59}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/lines\PYZus{}texture\PYZus{}iso\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_33_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}60}]:} \PY{n}{iso\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{isomap}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{iso\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}87}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}87}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x112437c10>
\end{Verbatim}
\subsection{Locally Linear Embedding}\label{locally-linear-embedding}
Running locally linear embedding (LLE) to obtain a two-dimensional mapping.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}62}]:} \PY{n}{lle\PYZus{}kwargs} \PY{o}{=} \PY{p}{\PYZob{}}
\PY{l+s}{\PYZsq{}}\PY{l+s}{n\PYZus{}neighbors}\PY{l+s}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{4}\PY{p}{,}
\PY{p}{\PYZcb{}}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}63}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}2d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{2}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}64}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}2d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/mappings/lines\PYZus{}texture\PYZus{}lle\PYZus{}mapping\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_39_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}65}]:} \PY{n}{lle\PYZus{}mapping\PYZus{}3d} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{analysis}\PY{o}{.}\PY{n}{lle}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,} \PY{n}{n\PYZus{}components}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,} \PY{o}{*}\PY{o}{*}\PY{n}{lle\PYZus{}kwargs}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}88}]:} \PY{n}{mia}\PY{o}{.}\PY{n}{plotting}\PY{o}{.}\PY{n}{plot\PYZus{}mapping\PYZus{}3d}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{,} \PY{n}{real\PYZus{}index}\PY{p}{,} \PY{n}{phantom\PYZus{}index}\PY{p}{,} \PY{n}{labels}\PY{p}{)}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{outcolor}Out[{\color{outcolor}88}]:} <matplotlib.axes.\_subplots.Axes3DSubplot at 0x11209ca10>
\end{Verbatim}
\subsection{Quality Assessment of Dimensionality
Reduction}\label{quality-assessment-of-dimensionality-reduction}
Assess the quality of the dimensionality reduction using measures derived
from the co-ranking matrix, as defined earlier. First, create co-ranking
matrices for each of the dimensionality reduction mappings.
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}67}]:} \PY{n}{max\PYZus{}k} \PY{o}{=} \PY{l+m+mi}{50}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}68}]:} \PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}2d}\PY{p}{)}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{SNE\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{iso\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm} \PY{o}{=} \PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{coranking\PYZus{}matrix}\PY{p}{(}\PY{n}{selected\PYZus{}features}\PY{p}{,}
\PY{n}{lle\PYZus{}mapping\PYZus{}3d}\PY{p}{)}
\end{Verbatim}
\subsubsection{2D Mappings}\label{d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}69}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}70}]:} \PY{n}{trustworthiness\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}trustworthiness\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_48_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}71}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}72}]:} \PY{n}{continuity\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}continuity\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_50_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}73}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}2d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}74}]:} \PY{n}{lcmc\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}2d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}2d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}lcmc\PYZus{}2d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_52_0.png}
\end{center}
{ \hspace*{\fill} \\}
\subsubsection{3D Mappings}\label{d-mappings}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}75}]:} \PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{trustworthiness}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}76}]:} \PY{n}{trustworthiness3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}trustworthiness\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{trustworthiness3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}trustworthiness\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_55_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}77}]:} \PY{n}{SNE\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{continuity}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}78}]:} \PY{n}{continuity3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}continuity\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}continuity\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{continuity3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}continuity\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_57_0.png}
\end{center}
{ \hspace*{\fill} \\}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}79}]:} \PY{n}{SNE\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{SNE\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{iso\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d} \PY{o}{=} \PY{p}{[}\PY{n}{mia}\PY{o}{.}\PY{n}{coranking}\PY{o}{.}\PY{n}{LCMC}\PY{p}{(}\PY{n}{lle\PYZus{}mapping\PYZus{}3d\PYZus{}cm}\PY{p}{,} \PY{n}{k}\PY{p}{)}
\PY{k}{for} \PY{n}{k} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{2}\PY{p}{,} \PY{n}{max\PYZus{}k}\PY{p}{)}\PY{p}{]}
\end{Verbatim}
\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor}80}]:} \PY{n}{lcmc3d\PYZus{}df} \PY{o}{=} \PY{n}{pd}\PY{o}{.}\PY{n}{DataFrame}\PY{p}{(}\PY{p}{[}\PY{n}{SNE\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{iso\PYZus{}lcmc\PYZus{}3d}\PY{p}{,}
\PY{n}{lle\PYZus{}lcmc\PYZus{}3d}\PY{p}{]}\PY{p}{,}
\PY{n}{index}\PY{o}{=}\PY{p}{[}\PY{l+s}{\PYZsq{}}\PY{l+s}{SNE}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{Isomap}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{l+s}{\PYZsq{}}\PY{l+s}{LLE}\PY{l+s}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{o}{.}\PY{n}{T}
\PY{n}{lcmc3d\PYZus{}df}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{p}{)}
\PY{n}{plt}\PY{o}{.}\PY{n}{savefig}\PY{p}{(}\PY{l+s}{\PYZsq{}}\PY{l+s}{figures/quality\PYZus{}measures/lines\PYZus{}texture\PYZus{}lcmc\PYZus{}3d.png}\PY{l+s}{\PYZsq{}}\PY{p}{,} \PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{300}\PY{p}{)}
\end{Verbatim}
\begin{center}
\adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{texture-analysis-lines_files/texture-analysis-lines_59_0.png}
\end{center}
{ \hspace*{\fill} \\}
| {
"alphanum_fraction": 0.5442553172,
"avg_line_length": 78.9853046595,
"ext": "tex",
"hexsha": "ba34ece98f09cc4dbdf6de6aec50c217284da4b1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "samueljackson92/major-project",
"max_forks_repo_path": "documents/final-report/ipython-appendix.tex",
"max_issues_count": 64,
"max_issues_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1",
"max_issues_repo_issues_event_max_datetime": "2015-05-03T15:46:49.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-02-05T06:34:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "samueljackson92/major-project",
"max_issues_repo_path": "documents/final-report/ipython-appendix.tex",
"max_line_length": 377,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "5d82b875944fcf1f001f9beb5e5419ba60be3bf1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "samueljackson92/major-project",
"max_stars_repo_path": "documents/final-report/ipython-appendix.tex",
"max_stars_repo_stars_event_max_datetime": "2020-03-17T00:57:42.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-26T16:23:29.000Z",
"num_tokens": 92458,
"size": 220369
} |
\documentclass[11pt,letterpaper]{article}
\usepackage[T1]{fontenc}
\usepackage{tgtermes}
\usepackage[hang,flushmargin]{footmisc}
\usepackage{titlesec}
\usepackage{lipsum}% just to generate text for the example
\titlespacing*{\section}
{0pt}{5ex plus 1ex minus .2ex}{1.1ex plus .2ex}
%\usepackage{mathptmx}
\usepackage{eso-pic}
%\setlength\parindent{0pt}
\AddToShipoutPictureBG{%
\ifnum\value{page}>1{
\AtTextUpperLeft{
\makebox[18.5cm][r]{
\raisebox{-2.3cm}{%
{\transparent{0.3}{\includegraphics[width=0.29\textwidth]{e-logo.png}} }} } }
}\fi
}
\AddToShipoutPicture{%
{
{\color{blGreen!70!red}\transparent{0.9}{\put(0,0){\rule{3pt}{\paperheight}}}}%
{\color{darkRed!70!purple}\transparent{1}\put(3,0){{\rule{4pt}{\paperheight}}}}
% {\color{logoPeach!80!cyan}\transparent{0.5}{\put(0,700){\rule{1cm}{.6cm}}}}%
% {\color{darkRed!60!cyan}\transparent{0.7}\put(0,706){{\rule{1cm}{.6cm}}}}
% \put(18,726){\thepage}
% \transparent{0.8}
}
}
\AddToShipoutPicture{%
\ifnum\value{page}=1
\put(257.5,956){%
\transparent{0.7}{
\includegraphics[width=0.2\textwidth]{logo.png}}}
\fi
}
\AddToShipoutPicture{%
\ifnum\value{page}>1
{\color{blGreen!70!red}\transparent{0.9}{\put(300,8){\rule{0.5\paperwidth}{.3cm}}}}%
{\color{inOne}\transparent{0.8}{\put(300,10){\rule{0.5\paperwidth}{.3cm}}}}%
{\color{inTwo}\transparent{0.3}\put(300,13){{\rule{0.5\paperwidth}{.3cm}}}}
\put(301,16){%
\transparent{0.7}{
\includegraphics[width=0.2\textwidth]{logo.png}} }
{\color{blGreen!70!red}\transparent{0.9}{\put(5.6,5){\rule{0.5\paperwidth}{.4cm}}}}%
{\color{inOne}\transparent{1}{\put(5.6,10){\rule{0.5\paperwidth}{.4cm}}}}%
{\color{inTwo}\transparent{0.3}\put(5.6,15){{\rule{0.5\paperwidth}{.4cm}}}}
\fi
}
%\pagestyle{empty} % no page number
%\parskip 7.2pt % space between paragraphs
%\parindent 12pt % indent for new paragraph
%\textwidth 4.5in % width of text
%\columnsep 0.8in % separation between columns
\setlength{\footskip}{7pt}
\usepackage[paperheight=14.5in,paperwidth=8.5in]{geometry}
\geometry{left=.78in,top=1.1in,right=.6in,bottom=1.75in} %margins
\renewcommand{\thepage}{\raisebox{-3em}{\arabic{page}}}
\usepackage[hyphens]{url}
\newcommand{\biburl}[1]{ {\fontfamily{gar}\selectfont{\textcolor[rgb]{.2,.6,0}%
{\scriptsize {\url{#1}}}}}}
%\linespread{1.3}
\newcommand{\sectsp}{\vspace{12pt}}
\usepackage{graphicx}
\usepackage{color,framed}
\usepackage{textcomp}
\usepackage{float}
\usepackage{mdframed}
\usepackage{setspace}
\newcommand{\rpdfNotice}[1]{\begin{onehalfspacing}{
\Large #1
}\end{onehalfspacing}}
\usepackage{xcolor}
\usepackage[hyphenbreaks]{breakurl}
\usepackage[hyphens]{url}
\usepackage{hyperref}
\newcommand{\rpdfLink}[1]{\href{#1}{\small{#1}}}
\newcommand{\dblHref}[1]{\href{#1}{\small{\burl{#1}}}}
\newcommand{\browseHref}[2]{\href{#1}{\Large #2}}
\colorlet{blCyan}{cyan!50!blue}
\definecolor{darkRed}{rgb}{.2,.0,.1}
\definecolor{blGreen}{rgb}{.2,.7,.3}
\definecolor{darkBlGreen}{rgb}{.1,.3,.2}
\definecolor{oldBlColor}{rgb}{.2,.7,.3}
\definecolor{blColor}{rgb}{.1,.3,.2}
\definecolor{elColor}{rgb}{.2,.1,0}
\definecolor{flColor}{rgb}{0.7,0.3,0.3}
\definecolor{logoOrange}{RGB}{108, 18, 30}
\definecolor{logoGreen}{RGB}{85, 153, 89}
\definecolor{logoPurple}{RGB}{200, 208, 30}
\definecolor{logoBlue}{RGB}{4, 2, 25}
\definecolor{logoPeach}{RGB}{255, 159, 102}
\definecolor{logoCyan}{RGB}{66, 206, 244}
\definecolor{logoRed}{rgb}{.3,0,0}
\newcommand{\colorq}[1]{{\color{logoOrange!70!black}{\q{\small\textbf{#1}}}}}
\definecolor{inOne}{rgb}{0.122, 0.435, 0.698}% Rule colour
\definecolor{inTwo}{rgb}{0.122, 0.698, 0.435}% Rule colour
\definecolor{outOne}{rgb}{0.435, 0.698, 0.122}% Rule colour
\definecolor{outTwo}{rgb}{0.698, 0.435, 0.122}% Rule colour
\usepackage[many]{tcolorbox}% http://ctan.org/pkg/tcolorbox
\usepackage{transparent}
\newlength{\bsep}
\setlength{\bsep}{-1pt}
\let\xbibitem\bibitem
\renewcommand{\bibitem}[2]{\vspace{\bsep}\xbibitem{#1}{#2}}
\newenvironment{cframed}{\begin{mdframed}[linecolor=logoPeach,linewidth=0.4mm]}{\end{mdframed}}
\newenvironment{ccframed}{\begin{mdframed}[backgroundcolor=logoGreen!5,linecolor=logoCyan!50!black,linewidth=0.4mm]}{\end{mdframed}}
\usepackage{aurical}
\usepackage[T1]{fontenc}
\usepackage{relsize}
\newcommand{\bref}[1]{\hspace*{1pt}\textbf{\ref{#1}}}
\newcommand{\pseudoIndent}{
\vspace{10pt}\hspace*{12pt}}
\newcommand{\YPDFI}{{\fontfamily{fvs}\selectfont YPDF-Interactive}}
%
\newcommand{\deconum}[1]{{\protect\raisebox{-1pt}{{\LARGE #1}}}}
\newcommand{\visavis}{vis-\`a-vis}
\newcommand{\VersatileUX}{{\color{red!85!black}{\Fontauri Versatile}}%
{{\fontfamily{qhv}\selectfont\smaller UX}}}
\newcommand{\NDPCloud}{{\color{red!15!black}%
{\fontfamily{qhv}\selectfont {\smaller NDP C{\smaller LOUD}}}}}
\newcommand{\MThreeK}{{\color{blGreen!45!black}%
{\fontfamily{qhv}\fontsize{10}{8}\selectfont {M3K}}}}
\newcommand{\lfNDPCloud}{{\color{red!15!black}%
{\fontfamily{qhv}\selectfont N{\smaller DP C{\smaller LOUD}}}}}
\newcommand{\textds}[1]{{\fontfamily{lmdh}\selectfont{%
\raisebox{-1pt}{#1}}}}
\newcommand{\dsC}{{\textds{ds}{\fontfamily{qhv}\selectfont \raisebox{-1pt}
{\color{red!15!black}{C}}}}}
\definecolor{tcolor}{RGB}{24,52,61}
\newcommand{\CCpp}{\resizebox{!}{7pt}{\AcronymText{C}}/\Cpp{}}
\newcommand{\NoSQL}{\resizebox{!}{7pt}{\AcronymText{NoSQL}}}
\newcommand{\SQL}{\resizebox{!}{7pt}{\AcronymText{SQL}}}
\newcommand{\NCBI}{\resizebox{!}{7pt}{\AcronymText{NCBI}}}
\newcommand{\HTXN}{\resizebox{!}{7pt}{\AcronymText{HTXN}}}
\newcommand{\lHTXN}{\resizebox{!}{8.5pt}{\AcronymText{HTXN}}}
\newcommand{\lsHTXN}{\resizebox{!}{9.5pt}{\AcronymText{\textcolor{tcolor}{HTXN}}}}
\usepackage{mdframed}
\newcommand{\cframedboxpanda}[1]{\begin{mdframed}[linecolor=yellow!70!blue,linewidth=0.4mm]#1\end{mdframed}}
\newcommand{\PVD}{\resizebox{!}{7pt}{\AcronymText{PVD}}}
\newcommand{\THQL}{\resizebox{!}{7pt}{\AcronymText{THQL}}}
\newcommand{\lTHQL}{\resizebox{!}{7.5pt}{\AcronymText{THQL}}}
\newcommand{\SDK}{\resizebox{!}{7pt}{\AcronymText{SDK}}}
\newcommand{\NLP}{\resizebox{!}{7pt}{\AcronymText{NLP}}}
%\newcommand{\API}{\resizebox{!}{7pt}{\AcronymText{API}}}
\newcommand{\IJST}{\resizebox{!}{7pt}{\AcronymText{IJST}}}
\newcommand{\BioC}{\resizebox{!}{7pt}{\AcronymText{BioC}}}
\newcommand{\CoNLL}{\resizebox{!}{7pt}{\AcronymText{CoNLL}}}
\newcommand{\sapp}{\resizebox{!}{7pt}{\AcronymText{Sapien+}}}
\newcommand{\lsapp}{\resizebox{!}{8.5pt}{\AcronymText{Sapien+}}}
\newcommand{\lssapp}{\resizebox{!}{9.5pt}{\AcronymText{Sapien+}}}
\newcommand{\ePub}{\resizebox{!}{7pt}{\AcronymText{ePub}}}
%\lsLPF
\newcommand{\GIT}{\resizebox{!}{7pt}{\AcronymText{GIT}}}
\newcommand{\LPF}{\resizebox{!}{7pt}{\AcronymText{LPF}}}
\newcommand{\lLPF}{\resizebox{!}{8.5pt}{\AcronymText{LPF}}}
\newcommand{\lsLPF}{\resizebox{!}{9.5pt}{\AcronymText{LPF}}}
\makeatletter
\newcommand*\getX[1]{\expandafter\getX@i#1\@nil}
\newcommand*\getY[1]{\expandafter\getY@i#1\@nil}
\def\getX@i#1,#2\@nil{#1}
\def\getY@i#1,#2\@nil{#2}
\makeatother
\newcommand{\rectann}[9]{%
\path [draw=#1,draw opacity=#2,line width=#3, fill=#4, fill opacity = #5, even odd rule] %
(#6) rectangle(\getX{#6}+#7,\getY{#6}+#8)
({\getX{#6}+((#7-(#7*#9))/2)},{\getY{#6}+((#8-(#8*#9))/2)}) rectangle %
({\getX{#6}+((#7-(#7*#9))/2)+#7*#9},{\getY{#6}+((#8-(#8*#9))/2)+#8*#9});}
\definecolor{pfcolor}{RGB}{94, 54, 73}
\newcommand{\EPF}{\resizebox{!}{7pt}{\AcronymText{ETS{\color{pfcolor}pf}}}}
\newcommand{\lEPF}{\resizebox{!}{8.5pt}{\AcronymText{ETS{\color{pfcolor}pf}}}}
\newcommand{\lsEPF}{\resizebox{!}{9.5pt}{\AcronymText{ETS{\color{pfcolor}pf}}}}
\newcommand{\XPDF}{\resizebox{!}{7pt}{\AcronymText{XPDF}}}
\newcommand{\GRE}{\resizebox{!}{8.5pt}{\AcronymText{GRE}}}
\newcommand{\lMOSAIC}{\resizebox{!}{8.5pt}{\AcronymText{MOSAIC}}}
\newcommand{\XML}{\resizebox{!}{7pt}{\AcronymText{XML}}}
\newcommand{\RDF}{\resizebox{!}{7pt}{\AcronymText{RDF}}}
\newcommand{\DOM}{\resizebox{!}{7pt}{\AcronymText{DOM}}}
\newcommand{\Covid}{\resizebox{!}{7pt}{\AcronymText{Covid-19}}}
\newcommand{\CLang}{\resizebox{!}{7pt}{\AcronymText{C}}}
\newcommand{\HNaN}{\resizebox{!}{7pt}{\AcronymText{HN%
\textsc{a}N}}}
\newcommand{\JSON}{\resizebox{!}{7pt}{\AcronymText{JSON}}}
\newcommand{\MeshLab}{\resizebox{!}{7pt}{\AcronymText{MeshLab}}}
\newcommand{\IQmol}{\resizebox{!}{7pt}{\AcronymText{IQmol}}}
\newcommand{\SGML}{\resizebox{!}{7pt}{\AcronymText{SGML}}}
\newcommand{\GUI}{\resizebox{!}{7pt}{\AcronymText{GUI}}}
\newcommand{\API}{\resizebox{!}{7pt}{\AcronymText{API}}}
\newcommand{\SDI}{\resizebox{!}{7pt}{\AcronymText{SDI}}}
\newcommand{\IDE}{\resizebox{!}{7pt}{\AcronymText{IDE}}}
\newcommand{\ThreeD}{\resizebox{!}{7pt}{\AcronymText{3D}}}
\newcommand{\FAIR}{\resizebox{!}{7pt}{\AcronymText{FAIR}}}
\newcommand{\QNetworkManager}{\resizebox{!}{7pt}{\AcronymText{QNetworkManager}}}
\newcommand{\QTextDocument}{\resizebox{!}{7pt}{\AcronymText{QTextDocument}}}
\newcommand{\QWebEngineView}{\resizebox{!}{7pt}{\AcronymText{QWebEngineView}}}
\newcommand{\HTTP}{\resizebox{!}{7pt}{\AcronymText{HTTP}}}
\newcommand{\lAcronymTextNC}[2]{{\fontfamily{fvs}\selectfont {\Large{#1}}{\large{#2}}}}
\newcommand{\AcronymTextNC}[1]{{\fontfamily{fvs}\selectfont {\large #1}}}
\colorlet{orr}{orange!60!red}
\newcommand{\textscc}[1]{{\color{orr!35!black}{{%
\fontfamily{Cabin-TLF}\fontseries{b}\selectfont{\textsc{\scriptsize{#1}}}}}}}
\newcommand{\textsccserif}[1]{{\color{orr!35!black}{{%
\scriptsize{\textbf{#1}}}}}}
\newcommand{\iXPDF}{\resizebox{!}{7pt}{\textsccserif{%
\textit{XPDF}}}}
\newcommand{\iEPF}{\resizebox{!}{7pt}{\textsccserif{%
\textit{ETSpf}}}}
\newcommand{\iSDI}{\resizebox{!}{7pt}{\textsccserif{%
\textit{SDI}}}}
\newcommand{\iHTXN}{\resizebox{!}{7pt}{\textsccserif{%
\textit{HTXN}}}}
\newcommand{\AcronymText}[1]{{\textscc{#1}}}
\newcommand{\AcronymTextser}[1]{{\textsccserif{#1}}}
\newcommand{\mAcronymText}[1]{{\textscc{\normalsize{#1}}}}
\newcommand{\FASTA}{{\resizebox{!}{7pt}{\AcronymText{FASTA}}}}
\newcommand{\SRA}{{\resizebox{!}{7pt}{\AcronymText{SRA}}}}
\newcommand{\DNA}{{\resizebox{!}{7pt}{\AcronymText{DNA}}}}
\newcommand{\MAP}{{\resizebox{!}{7pt}{\AcronymText{MAP}}}}
\newcommand{\EPS}{{\resizebox{!}{7pt}{\AcronymText{EPS}}}}
\newcommand{\CSV}{{\resizebox{!}{7pt}{\AcronymText{CSV}}}}
\newcommand{\PDB}{{\resizebox{!}{7pt}{\AcronymText{PDB}}}}
\newcommand{\TeXMECS}{\resizebox{!}{7pt}{\AcronymText{TeXMECS}}}
\newcommand{\NGML}{\resizebox{!}{7pt}{\AcronymText{NGML}}}
\newcommand{\Cpp}{\resizebox{!}{7pt}{\AcronymText{C++}}}
\newcommand{\WhiteDB}{\resizebox{!}{7pt}{\AcronymText{WhiteDB}}}
\colorlet{drp}{darkRed!70!purple}
%\newcommand{\MOSAIC}{{\color{drp}{\AcronymTextNC{\scriptsize{MOSAIC}}}}}
\newcommand{\MOSAIC}{\resizebox{!}{7pt}{\AcronymText{MOSAIC}}}
\newcommand{\mMOSAIC}{{\color{drp}{\AcronymTextNC{\normalsize{MOSAIC}}}}}
\newcommand{\MOSAICVM}{\mMOSAIC-\mAcronymText{VM}}
\newcommand{\sMOSAICVM}{\resizebox{!}{7pt}{\MOSAICVM}}
\newcommand{\sMOSAIC}{\resizebox{!}{7pt}{\MOSAIC}}
\newcommand{\LDOM}{\resizebox{!}{7pt}{\AcronymText{LDOM}}}
\newcommand{\Cnineteen}{\resizebox{!}{7pt}{\AcronymText{CORD-19}}}
\newcommand{\LXCR}{\resizebox{!}{7pt}{\AcronymText{LXCR}}}
\newcommand{\lLXCR}{\resizebox{!}{8.5pt}{\AcronymText{LXCR}}}
\newcommand{\lsLXCR}{\resizebox{!}{9.5pt}{\AcronymText{LXCR}}}
%\newcommand{\lMOSAIC}{{\color{drp}{\lAcronymTextNC{M}{OSAIC}}}}
\newcommand{\lfMOSAIC}{\resizebox{!}{9pt}{{\color{drp}{\lAcronymTextNC{M}{OSAIC}}}}}
\newcommand{\Mosaic}{\resizebox{!}{7pt}{\MOSAIC}}
\newcommand{\MosaicPortal}{{\color{drp}{\AcronymTextNC{MOSAIC Portal}}}}
\newcommand{\RnD}{\resizebox{!}{7pt}{\AcronymText{R\&D}}}
\newcommand{\QtCpp}{\resizebox{!}{8.5pt}{\AcronymText{Qt/C++}}}
\newcommand{\Qt}{\resizebox{!}{7pt}{\AcronymText{Qt}}}
\newcommand{\QtSQL}{\resizebox{!}{7pt}{\AcronymText{QtSQL}}}
\newcommand{\HTML}{\resizebox{!}{7pt}{\AcronymText{HTML}}}
\newcommand{\PDF}{\resizebox{!}{7pt}{\AcronymText{PDF}}}
\newcommand{\R}{\resizebox{!}{7pt}{\AcronymText{R}}}
\newcommand{\lGRE}{\resizebox{!}{7pt}{\AcronymText{GRE}}}
\newcommand{\p}[1]{
\vspace{.75em}#1}
\newcommand{\q}[1]{{\fontfamily{qcr}\selectfont ``}#1{\fontfamily{qcr}\selectfont ''}}
%\newcommand{\deconum}[1]{{\textcircled{#1}}}
\renewcommand{\thesection}{\protect\mbox{\deconum{\Roman{section}}}}
\renewcommand{\thesubsection}{\arabic{section}.\arabic{subsection}}
\newcommand{\llMOSAIC}{\mbox{{\LARGE MOSAIC}}}
%\newcommand{\lfMOSAIC}{\mbox{M\small{OSAIC}}}
\newcommand{\llMosaic}{\llMOSAIC}
\newcommand{\lMosaic}{\lMOSAIC}
\newcommand{\lfMosaic}{\lfMOSAIC}
\newcommand{\llWC}{\mbox{{\LARGE WhiteCharmDB}}}
\newcommand{\llwh}{\mbox{{\LARGE White}}}
\newcommand{\llch}{\mbox{{\LARGE CharmDB}}}
\usepackage{enumitem}
\colorlet{dsl}{purple!20!brown}
\colorlet{dslr}{dsl!50!blue}
\setlist[description]{%
topsep=10pt,
labelsep=12pt,
itemsep=12pt, % space between items
%font={\bfseries\sffamily}, % set the label font
font=\normalfont\bfseries\color{dslr!50!black}, % if colour is needed
}
\setlist[enumerate]{%
topsep=3pt, % space before start / after end of list
itemsep=-2pt, % space between items
font={\bfseries\sffamily}, % set the label font
% font={\bfseries\sffamily\color{red}}, % if colour is needed
}
%\usepackage{tcolorbox}
\newcommand{\slead}[1]{%
\noindent{\raisebox{2pt}{\relscale{1.15}{{{%
\fcolorbox{logoCyan!50!black}{logoGreen!5}{#1}
}}}}}\hspace{.5em}}
\let\OldLaTeX\LaTeX
\renewcommand{\LaTeX}{\resizebox{!}{7pt}{\color{orr!35!black}{\OldLaTeX}}}
\let\OldTeX\TeX
\renewcommand{\TeX}{\resizebox{!}{7pt}{\color{orr!35!black}{\OldTeX}}}
\newcommand{\LargeLaTeX}{\resizebox{!}{8.5pt}{\color{orr!35!black}{\OldLaTeX}}}
\setlength\parindent{0pt}
%\setlength\parindent{24pt}
%\input{commands}
\newcommand{\lun}[1]{\raisebox{-4pt}{\fontfamily{qcr}\selectfont{%
\LARGE{\textbf{\textcolor{tcolor}{#1}}}}}\vspace{-2pt}}
\newcommand{\inditem}{\itemindent10pt\item}
\usepackage{soul}
\definecolor{hlcolor}{RGB}{114, 54, 203}
\colorlet{hlcol}{hlcolor!35}
\sethlcolor{hlcol}
\makeatletter
\def\SOUL@hlpreamble{%
\setul{}{3ex}% !!!change this value!!! default is 2.5ex
\let\SOUL@stcolor\SOUL@hlcolor
\SOUL@stpreamble
}
\makeatother
\usepackage{scrextend}
%\vspace*{3em}
\newenvironment{mldescription}{\vspace{1em}%
\begin{addmargin}[4pt]{1em}
\setlength{\parindent}{-1em}%
\newcommand*{\mlitem}[1][]{\vspace{5pt}\par\medskip%
%\colorbox{hlcolor}{\textbf{##1}}\quad}\indent
\hl{ \textbf{##1} }\quad}\indent
}{%
\end{addmargin}
\medskip
}
\usepackage{marginnote}
\newcommand{\mnote}[1]{%
\vspace*{-2em}
\reversemarginpar
\raisebox{1em}{\marginnote{\parbox{4em}{%
\begin{mdframed}[innerleftmargin=4pt,
innerrightmargin=1pt,innertopmargin=1pt,
linecolor=red!20!cyan,userdefinedwidth=4em,
topline=false,
rightline=false]
{{\fontfamily{ppl}\fontsize{12}{0}\selectfont
\textit{#1}}}
\end{mdframed}}
}[3em]}}
\newcommand{\mnotel}[1]{%
\vspace*{-2em}
\reversemarginpar
\raisebox{-4em}{\marginnote{\parbox{4em}{%
\begin{mdframed}[innerleftmargin=4pt,
innerrightmargin=1pt,innertopmargin=1pt,
linecolor=red!20!cyan,userdefinedwidth=4em,
topline=false,
rightline=false]
{{\fontfamily{ppl}\fontsize{12}{0}\selectfont
\textit{#1}}}
\end{mdframed}}
}[3em]}}
\newcommand{\mnoteh}[3]{%
\vspace*{#1}
\reversemarginpar
\raisebox{#2}{\marginnote{\parbox{4em}{%
\begin{mdframed}[innerleftmargin=4pt,
innerrightmargin=1pt,innertopmargin=1pt,
linecolor=red!20!cyan,userdefinedwidth=4em,
topline=false,
rightline=false]
{{\fontfamily{ppl}\fontsize{12}{0}\selectfont
\textit{#3}}}
\end{mdframed}}
}[3em]}}
\newcommand{\mnoteb}[1]{%
\vspace*{1em}
\reversemarginpar
\raisebox{1em}{\marginnote{\parbox{4em}{%
\begin{mdframed}[innerleftmargin=4pt,
innerrightmargin=1pt,innertopmargin=1pt,
linecolor=red!20!cyan,userdefinedwidth=4em,
topline=false,
rightline=false]
{{\fontfamily{ppl}\fontsize{12}{0}\selectfont
\textit{#1}}}
\end{mdframed}}
}[3em]}}
\usepackage{wrapfig}
\usetikzlibrary{arrows, decorations.markings}
\usetikzlibrary{shapes.arrows}
\newcommand{\curicon}[2]{%
\node at (#1,#2) [
draw=black,
%minimum width=2ex,
inner sep=.7pt,
fill=white,
single arrow,
single arrow head extend=3pt,
single arrow head indent=1.5pt,
single arrow tip angle=45,
line join=bevel,
minimum height=4.6mm,
rotate=115
] {};
}
\makeatletter
\def\@cite#1#2{[\textbf{#1\if@tempswa , #2\fi}]}
\def\@biblabel#1{[\textbf{#1}]}
\makeatother
\hypersetup{
colorlinks=true,
citecolor=blCyan!40!green,
filecolor=magenta,
urlcolor=blue,
}
\renewcommand{\thefootnote}{\textcolor{logoGreen!80!logoBlue}{{\fontfamily{qcr}\fontseries{b}\fontsize{10}{4}\selectfont\arabic{footnote}}}}
\urlstyle{same}
%\setmainfont{QTChanceryType}
\begin{document}
{\linespread{1.2}\selectfont
\vspace*{1em}
\begin{center}
%{\relscale{1.2}{\fontfamily{qcr}\fontseries{b}\selectfont
%{\colorbox{black}{\color{blue}{\llWC{} Database Engine \\and
%\llMOSAIC{} Native Application Toolkit}}}}}
\colorlet{ctmp}{logoPeach!20!gray}
\colorlet{ctmpp}{ctmp!90!yellow}
\colorlet{ctmppp}{ctmpp!50!black}
\colorlet{ctmpppp}{ctmppp!90!logoRed}
\colorlet{ctmcyan}{ctmpppp!70!cyan}
%\vspace{2em}
%{\colorbox{darkBlGreen!30!darkRed}{%
\begin{tcolorbox}
[
%%enhanced,
%%frame hidden,
%interior hidden
arc=2pt,outer arc=0pt,
enhanced jigsaw,
width=.92\textwidth,
colback=ctmcyan!50,
colframe=logoRed!30!darkRed,
drop shadow=logoPurple!50!darkRed,
%boxsep=0pt,
%left=0pt,
%right=0pt,
%top=2pt,
]
\hspace{22pt}
\begin{minipage}{.95\textwidth}
\begin{center}
{\setlength{\fboxsep}{21pt}
\relscale{1.3}{{\fontfamily{qcr}\fontseries{b}\selectfont%
{Proposing a CORD-19 Software Development
Kit \\\vspace{.25em}to Improve Machine Readability}
}}}
\end{center}
\end{minipage}
\end{tcolorbox}
\end{center}
\vspace*{1em}
\begin{center}
\parbox{.8\textwidth}{%
{\fontfamily{pzc}\selectfont
LTS is founded by Amy Neustein, PhD,
series editor of {\bf Speech Technology and
Text Mining in Health Care} (de Gruyter),
and Editor of {\bf Advances in Ubiquitous Computing:
Cyber-Physical Systems, Smart Cities,
and Ecological Monitoring}
(Elsevier, forthcoming).}}
\end{center}
\vspace*{1em}
\p{\Cnineteen{} (the \q{Covid-19 Open Research Dataset})
is a new coronavirus data collection which was released in conjunction with a White House initiative to
spur \hspace{-1pt} Covid-19 \hspace{-1pt} research.
This initiative is described as a
\q{call to action ... to develop new text and data mining techniques that can help the science community answer high-priority scientific questions related to \Covid{}} (see \href{https://www.whitehouse.gov/briefings-statements/call-action-tech-community-new-machine-readable-covid-19-dataset/}{https://www.whitehouse.gov/briefings-statements/call-action-tech-community-new-machine-readable-covid-19-dataset/}). The White House
spearheaded a consortium of industry and academic
institutions, led by the Allen Institute for AI Research,
who curated a \q{machine-readable Coronavirus literature collection}
which includes article metadata and (in most cases)
publication text
for over 44,000 coronavirus research papers. This
corpus is paired with links to publisher portals
(including Springer Nature,
Wiley, Elsevier, the American Society for Microbiology, and the New England Journal of Medicine), providing full open access to \Covid{}-related
literature; these resources collectively constitute
\Cnineteen{} (see \cite{CORD}).}
\p{Linguistic Technology Systems (LTS)
would like to create a Software
Development Kit (\SDK{}) to
help scientists utilize \Cnineteen{}. This \SDK{}
would include new code libraries explicitly
implemented for data-management operations
specific to \Cnineteen{}. The
\SDK{} would also include a package of applications,
modified to support \Covid{} research, that
would collectively create an integrated
and self-contained computing environment.
These two parts of the \SDK{} --- the new
code libraries and the application package ---
are outlined in this paper.}
\section{New Code Libraries within the Proposed SDK}
\p{The \Cnineteen{}
collection was formulated with the explicit goal
of promoting both \textit{text mining} and
\textit{data mining} solutions
to advance coronavirus research. This means that
\Cnineteen{} is intended to be used both as a document
archive for text mining and as a repository for
finding and obtaining coronavirus data for subsequent
research. Because the White House announcement requests
institutions to develop additional technologies
which would help scientists and jurisdictions to
take advantage of \Cnineteen{},
the collection was released with the
anticipation that industry and academia would
augment the underlying data by layering on additional
software. Our proposed \Cnineteen{} \SDK{} would
do just that: this \SDK{} would serve as a component
that would provide analytic capabilities
to make the raw \Cnineteen{} data
more valuable; it would also serve as a toolkit through which
other developers could create
new solutions targeting the \Cnineteen{} repository.}
\p{To accomplish these goals, our
proposed \SDK{} would include a collection of
new code libraries to aid programmers in the
implementation of algorithms to investigate the
\Cnineteen{} corpus. These code libraries would
enhance the underlying data by providing the following
useful features:
%\vspace{-2em}
\begin{description}[leftmargin=9pt,itemsep=4pt]
\item[Tools for Correcting Transcription Errors]
Transcription errors can cause the machine-readable
text archive to misrepresent the structure
and content of documents. For instance,
there are cases in \Cnineteen{}
of scientific notation and terminology
being improperly encoded. As a concrete example, \colorq{2{\textquotesingle}-C-ethynyl} is encoded in one \Cnineteen{} file
as \makebox{\colorq{2 0 -C-ethynyl}} (see \cite{Eyer} for
the human-readable publication where this error is
observed; the corresponding index in the corpus is \textcolor{blGreen!45!black}{9555f44156bc5f2c6ac191dda2fb651501a7bd7b.json}).
To help address these sorts of errors ---
which could stymie text searches
against the \Cnineteen{} corpus ---
our \SDK{} would
augment the \Cnineteen{} corpus by providing
alternate machine-readable encodings
of the corpus documents in formats such as \XML{},
whenever they are available,
as a supplement to \Cnineteen{}'s
\JSON{} representation.
Compared to article content obtained indirectly
by \q{scraping} text from \HTML{} or \PDF{}
files, these \XML{} representations
(derived from the structured documents used
for editing prior to publication) would not be subject to transcription
errors. The \SDK{} would then provide tools to
cross-reference multiple versions of each document,
so as to correct errors in the original \JSON{} encodings.
\item[Tools for Converting Between Data Formats]
Although the \Cnineteen{} corpus is published
as \JSON{} files, many text-mining tools such
as those reviewed in \cite{NeusteinText} recognize
input in alternative formats, such as \XML{},
\BioC{}, or \JSON{} trees with different
schema than \Cnineteen{}. Our proposed
\SDK{} would provide libraries to read
\Cnineteen{}'s \JSON{} files and output
data in one of these alternative formats,
so as to initiate a text mining workflow
(a minimal sketch of such a conversion appears
just after this list).
The \SDK{} would also include tools for
manipulating the \textit{results} of
text mining algorithms, which are
often represented in formats such as
\XML{} and \CoNLL{} (Conference on Natural
Language Learning).
\item[Tools for Enhanced Annotation]
Currently \Cnineteen{} does not
directly provide a mechanism for asserting
annotations related to text mining,
such as Named Entity Recognition or
formally recognized biomedical concepts.
However, because the archival schema supports standoff
annotation for intra-document references,
our \SDK{} can provide code for additional
standoff annotation categories of the kinds
commonly used in biomedical text mining.
As a concrete example, the corrected
text segment \colorq{2{\textquotesingle}-C-ethynyl} mentioned
earlier can be annotated as a molecular component.
\item[Tools for Research Data-Mining] Even though
many papers in \Cnineteen{} are paired with
published data sets, there is currently no tool for
locating research \textit{data}
through \Cnineteen{}.
For example, the collection of manuscripts available
through the Springer Nature portal linked
from \Cnineteen{} includes over 30 \Covid{} data sets,
but researchers can only discover that these data
sets exist by looking for a \q{supplemental materials} or
\q{data availability} addendum near the end of each article.
These Springer Nature data sets encompass a wide array of file types
and formats, including \FASTA{} (which stands for Fast-All,
a genomics format), \SRA{} (Sequence Read Archive, for
\DNA{} sequencing), \PDB{} (Protein Data Bank,
representing the \ThreeD{} geometry of protein
molecules), \MAP{} (Electron Microscopy Map), \EPS{}
(Embedded Postscript), \CSV{} (comma-separated values),
and tables represented in Microsoft Word
and Excel formats. To promote data mining
in the context of \Cnineteen{}, our
\SDK{} would (1) maintain an index of
data sets linked to \Cnineteen{} articles
and (2) merge these resources into a common representation
(such as \XML{}) wherever possible.
\item[Wrappers for Network Requests] Scientific
use of \Cnineteen{} will often require communicating
with remote servers. For example, genomics
information in the \Covid{} data sets (such as
those mentioned above that are available through
Springer Nature) is generally
provided in the form of accession numbers which
are used to query online genomics services.
Similarly, text mining algorithms often
rely on dedicated servers to perform
Natural Language Processing; these services
might take requests in \BioC{} format and respond
with \CoNLL{} data. As another case study, epidemiological
studies of \Covid{} may need to access \API{}s or data
sets such as the Johns Hopkins University \q{dashboard}
(see \href{https://coronavirus.jhu.edu/map.html}{https://coronavirus.jhu.edu/map.html}, which is paired with a \GIT{} archive updated almost daily). To reduce the amount
of \q{boilerplate code} which developers need
for these networking requirements, our
company's \SDK{}
would provide code libraries based on the
\Qt{} Networking Module to manage networking
requests and responses. Programmers would
therefore have a unified framework with which
to construct remote queries and route responses,
a framework which could be used across
disparate scientific disciplines
(genomics, \NLP{}, epidemiology, and so forth).
\end{description}}
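\p{To make the format-conversion idea above more concrete, the following
minimal sketch reads one \Cnineteen{} \JSON{} file and emits a bare-bones
\XML{} rendering of its title and body text. It is written in Python purely
for illustration (the \SDK{} itself need not use Python); the field names
reflect the \Cnineteen{} schema as we understand it, and the function and
file names are hypothetical.}
\begin{verbatim}
# Illustrative sketch: convert one CORD-19 JSON article to minimal XML.
import json
import xml.etree.ElementTree as ET

def cord19_json_to_xml(json_path, xml_path):
    with open(json_path, "r", encoding="utf-8") as fh:
        doc = json.load(fh)

    article = ET.Element("article", id=doc.get("paper_id", ""))
    title = ET.SubElement(article, "title")
    title.text = doc.get("metadata", {}).get("title", "")

    body = ET.SubElement(article, "body")
    for passage in doc.get("body_text", []):   # list of text passages
        paragraph = ET.SubElement(body, "p")
        paragraph.text = passage.get("text", "")

    ET.ElementTree(article).write(xml_path, encoding="utf-8")

# Hypothetical usage, with the corpus index mentioned earlier:
# cord19_json_to_xml(
#     "9555f44156bc5f2c6ac191dda2fb651501a7bd7b.json",
#     "9555f44156bc5f2c6ac191dda2fb651501a7bd7b.xml")
\end{verbatim}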
\p{In short, the code libraries described above would
augment the value of \Cnineteen{} by providing
tools out-of-the-box to help scientists
(and their codewriters) leverage \Cnineteen{} data.
Although we can expect that numerous code libraries
will be implemented so that researchers can
use \Cnineteen{}, a \Cnineteen{} \SDK{}
would be beneficial because it would integrate
\textit{multiple} libraries into a single package,
designed to be easily interoperable.
In particular, these libraries would be implemented
in a manner which prioritizes rapid development:
the \SDK{} would comprise a \textit{standalone}
and \textit{self-contained} development
environment with minimal external dependencies.
This priority would extend also to software
tools that would be bundled together with
the new code libraries. These software
tools are discussed next.}
\section{The Software Application Package within the Proposed SDK}
\p{In addition to the code libraries described
above, whose purpose would be to manipulate
\Cnineteen{} data to prepare for text mining and
data mining operations, our proposed \SDK{} would
bundle numerous applications used for database
storage, data visualization, and scripting.
The goal of this application package would be to
provide researchers with a self-contained computing
platform optimized for scientific research
and findings related to
\Covid{}. The components within this application
package would be selected with an emphasis on
tools that could be distributed in source-code
fashion, and then compiled within the \SDK{}'s
development framework with few, if any, external
dependencies. In short, the \SDK{} would try
to eliminate almost all scenarios where
programmers would need to perform a \q{system
install}; for the most part, the entire
computing platform (including scripting
and database capabilities) could be compiled
from source \q{out-of-the-box}. The
\SDK{} would also modify the applications
included in the package so as
to enhance their interoperability and their
usefulness for \Covid{} research.}
\p{The applications bundled with the \SDK{} would
likely include the following components:
\begin{itemize}
\item \XPDF{}: A \PDF{} viewer for reading full-text articles
(augmented with \Cnineteen{} features, such as integration
with biomedical ontologies);
\item AngelScript: An embeddable scripting engine
that could be used for analytic processing
of data generated by text and data mining operations
on \Cnineteen{} (see \cite{AS});
\item WhiteDB: A persistent database
engine that supports both relational
and \NoSQL{}-style architectures
(see \cite{EnarReilent});
\item IQmol: Molecular Visualization software
that can be used to study chemical data
presented in formats such as \PDB{} which
are employed by some \Covid{} data sets;
\item MeshLab: A general-purpose \ThreeD{} graphics
viewer;
\item UDPipe: a \Cpp{} library for manipulating
\CoNLL{} data;
\item LaTeXML: a \LaTeX{}-to-\XML{} converter;
\item PositLib: a library for use in high-precision computations based on the \q{Universal Number} format,
which is more accurate than traditional floating-point
encoding in some scientific contexts
(see \cite{JohnGustafson}).
\end{itemize}}
\p{It is worth noting that a data-mining platform requires
\textit{machine-readable} open-access research data
(which is a more stringent requirement than simply
pairing publications with data that can only
be understood by domain-specific
software). For example, radiological imaging can be a source
of \Covid{} data insofar as patterns of lung
scarring, such as \q{ground-glass opacity,} are leading
indicators of the disease. Consequently, diagnostic
images of \Covid{} patients are a relevant kind of
content for inclusion in a \Covid{} data set
(see \cite{Shi} as a case-study). However,
diagnostic images are not in themselves
\q{machine readable.} When medical imaging is
used in a quantitative context (e.g., applying
Machine Learning for diagnostic pathology), it is necessary to perform Image Analysis to convert the raw data
--- in this case, radiological graphics --- into
quantitative aggregates. For instance, by using image
segmentation to demarcate geometric boundaries one
is able to define diagnostically relevant features (such
as opacity) represented as a scalar field over the segments.
In short, even after research data is openly published,
it may be necessary to perform
additional analysis on the data for it to be
a full-fledged component of a
machine-readable information space.\footnote{%
This does not mean that diagnostic images (or
other graphical data) should not be placed in a
data set; only that computational reuse of such
data will usually involve certain numeric
processing, such as image segmentation.
Insofar as this subsequent analysis is performed,
the resulting data should wherever possible
be added to the underlying image data as a
supplement to the data set.} To
deal with this sort of situation, our
proposed \SDK{} would include a \textit{procedural data-modeling vocabulary} that would identify the
interrelationships between data representations
and would define the workflows needed to
convert \Cnineteen{}-linked research data
into machine-readable data sets.}
\p{Another concern in developing an integrated \Cnineteen{}
data collection is that of indexing \Covid{} data
for both text mining \textit{and} data mining.
In particular,
our proposed \SDK{} would introduce a
system of \textit{microcitations} that apply
to portions of manuscripts \textit{as well as} data sets.
In the publishing context, a microcitation is defined as a
reference to a partially isolated fragment of a larger
document, such as a table or figure illustration, or a
sentence or paragraph defining a technical term,
or (in mathematics) the statement/proof of a definition, axiom,
or theorem. In data publishing, \q{data citations} are
unique references to data sets in their entirety or to
smaller parts of data sets. A data microcitation is then a
fine-grained reference into a data set. For example,
a data microcitation can consist of one
column in a spreadsheet,
one statistical parameter in a quantitative analysis,
or \q{the precise data records actually used in a study}
(in the words used by the Federation of Earth Science Information Partners to define microcitations;
see \cite{ESIP}).}
\p{The unique feature we propose
for our \SDK{} would be to combine the text-mining and
data-mining notions of microcitation into a \textit{unified}
framework. In particular, text-based searches
against the \Cnineteen{} corpus
would try to find matches in the
data sets indexed by our \SDK{} alongside matches within textual content. As a concrete example,
a concept such as \q{expiratory flow} appears in \Cnineteen{}
both as a table column in research data and as a medical concept
discussed in research papers; a unified microcitation framework
should therefore map \textit{\color{drp}{expiratory flow}} as a keyphrase
to both textual locations and data set parameters.
Similarly, a concept such as
\textit{\color{drp}{2{\textquotesingle}-C-ethynyl}} (mentioned earlier in the context of transcription errors)
should be identified both as a phrase in
article texts and as a molecular component
present within compounds whose scientific
properties are investigated through \Cnineteen{}
research data. In so doing, a search for this
concept would then trigger both publication and
data-set matches.}
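\p{Purely as an illustration of this unified indexing idea (and not as a
specification of the \SDK{}'s eventual data model), such an index can be
pictured as a mapping from a keyphrase to both its textual locations and
its data-set parameters; every identifier below other than the keyphrase
itself is a hypothetical placeholder.}
\begin{verbatim}
# Hypothetical unified microcitation index (illustrative placeholders).
unified_microcitations = {
    "expiratory flow": {
        # microcitations into article text
        "text_locations": [
            {"paper_id": "<document-id>", "span": "<character offsets>"},
        ],
        # microcitations into research data
        "dataset_parameters": [
            {"dataset_id": "<data-set-id>", "column": "expiratory flow"},
        ],
    },
}

def lookup(keyphrase):
    """Return publication matches and data-set matches for a keyphrase."""
    entry = unified_microcitations.get(keyphrase, {})
    return (entry.get("text_locations", []),
            entry.get("dataset_parameters", []))
\end{verbatim}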
\section{Conclusion}
\p{The vision of a \textit{standalone} and \textit{self-contained}
\Covid{} data-set collection is consistent with
new publishing initiatives such as
Research Objects (see \cite{KhalidBelhajjame})
and \FAIR{} (\q{Findable, Accessible,
Interoperable, Reusable}; see \cite{TrifanOliveira}).
Indeed, our \Cnineteen{} \SDK{} would
function as a macro-scale Research Object,
which would be (1) \textit{self-contained} (with few or no external
dependencies); (2) \textit{transparent} (meaning that
all computing operations should be implemented by
source code within the bundle that can be examined
as code files and within a debugging session);
and (3) \textit{interactive} (meaning that the bundle does not
only include raw data but also software to interactively
view and manipulate this data). Research Objects which
embrace these priorities attempt to provide data visualization,
persistence, and analysis through \GUI{}, database, and
scripting engines that can be embedded as source
code in the Research Object itself.
Our proposed \SDK{} would be based on the
same paradigm, but instead of applying it
to a single data set, it would
translate it to a larger data space integrating
the information contained in multiple
\Covid{} data sets as well as the entire
corpus of \Cnineteen{} articles.}
\vspace{-.5em}
%\noindent\lun{ETS\textsc{pf} for Scientific and Technical Applications}
\setlength{\bsep}{-2pt}
%\setlength{\parskip}{0pt}
%\setlength{\itemsep}{-2pt}
\makeatletter
\clubpenalty10000
\@clubpenalty \clubpenalty
\widowpenalty10000
\makeatother
\begin{thebibliography}{99}
\vspace{1em}
\bibitem{KhalidBelhajjame}{%
Khalid Belhajjame, \textit{et al.},
\q{Workflow-centric research objects: First class citizens in scholarly discourse}. \biburl{https://pages.semanticscholar.org/coronavirus-research}}
\bibitem{CORD}{%
\q{COVID-19 Open Research Dataset (CORD-19)}. 2020. Version 2020-03-13. Retrieved from https://pages.semanticscholar.org/coronavirus-research. Accessed 2020-03-20. doi:10.5281/zenodo.3715506
\biburl{https://pages.semanticscholar.org/coronavirus-research}}
\bibitem{Eyer}{%
Lud\"ek Eyer, \textit{et. al.},
\q{Nucleoside analogs as a rich source of antiviral agents active against arthropod-borne flaviviruses}.
\biburl{https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5890575/}}
\bibitem{JohnGustafson}{%
John Gustafson,
\q{Beating Floating Point at its Own Game: Posit Arithmetic},
\biburl{http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf}}
\bibitem{AS}{%
Andreas J\"onsson,
\q{AngelCode Scripting Library},
\biburl{www.AngelCode.com/AngelScript/}}
\bibitem{NeusteinText}{%
Amy Neustein, \textit{et al.},
\q{Application of Text Mining to Biomedical Knowledge Extraction: Analyzing Clinical Narratives and Medical Literature},
\biburl{https://www.researchgate.net/publication/262372604_Application_of_Text_Mining_to_Biomedical_Knowledge_Extraction_Analyzing_Clinical_Narratives_and_Medical_Literature}}
\bibitem{ESIP}{%
Mark A. Parsons and Ruth Duerr,
\q{Data Identifiers, Versioning, and Micro-citation},
\biburl{https://www.thelancet.com/action/showPdf?pii=S1473-3099\%2820\%2930086-4}}
\bibitem{EnarReilent}{%
Enar Reilent,
\q{Whiteboard Architecture for the Multi-agent Sensor Systems},
\biburl{https://www.thelancet.com/action/showPdf?pii=S1473-3099\%2820\%2930086-4}}
\bibitem{Shi}{%
Heshui Shi, \textit{et al.},
\q{Radiological findings from 81 patients with COVID-19
pneumonia in Wuhan, China: a descriptive study}.
\biburl{https://www.thelancet.com/action/showPdf?pii=S1473-3099\%2820\%2930086-4}}
\bibitem{TrifanOliveira}{%
Alina Trifan and Jos\'e Lu\'\i{}s Oliveira,
\q{FAIRness in Biomedical Data Discovery}.
\biburl{https://www.researchgate.net/publication/331775411_FAIRness_in_Biomedical_Data_Discovery}}
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7374205646,
"avg_line_length": 34.1297777778,
"ext": "tex",
"hexsha": "697b6866f4677d31981a08e97cdb30d6dcbf3522",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "ScignScape-RZ/ntxh",
"max_forks_repo_path": "NA3/htxn/gates-shorter.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "ScignScape-RZ/ntxh",
"max_issues_repo_path": "NA3/htxn/gates-shorter.tex",
"max_line_length": 428,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "ScignScape-RZ/ntxh",
"max_stars_repo_path": "NA3/htxn/gates-shorter.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12295,
"size": 38396
} |
\documentclass[a4paper]{book}
\usepackage{graphicx}
\usepackage[breaklinks=true]{hyperref}
\usepackage{listings}
\usepackage{color}
\usepackage{makeidx}
\usepackage{rotating}
\usepackage{tocbibind}
\usepackage{fancyhdr}
\usepackage{natbib}
\renewcommand{\familydefault}{\sfdefault}
\lstset{
breakatwhitespace=true,
language=tcl,
columns=fullflexible,
keepspaces=true,
breaklines=true,
tabsize=3,
showstringspaces=true,
extendedchars=true,
basicstyle=\small\ttfamily,
frame=lrtb,
numbers=left,
keywordstyle=\color{blue}
}
% Add mmonca keywords here
\lstset{emph={%
anneal, cascade, extract, init, insert, lowmsg, param, profile, report, save, test%
},emphstyle={\color{red}\small\ttfamily}%
}%
\pagestyle{fancy}
\fancyhf{}
\fancyhead[RO]{\includegraphics[width=1cm]{images/logo} user guide}
\fancyhead[LE]{\includegraphics[width=4mm]{images/AMM}\leftmark}
\fancyfoot[LE,RO]{\thepage}
\newcommand{\specialcell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
\newcommand{\param}[1]{{\tt #1}\index{#1}}
\newcommand{\idx}[1]{#1\index{#1}}
\newcommand{\MMonCa}{\includegraphics[width=1cm]{images/logo}}
\renewcommand{\cite}{\citet}
\makeindex
\author{Originally developed at the IMDEA Materials Institute\\
Currently supported at github.com/imartinbragado/MMonCa}
\title{\includegraphics[width=12cm]{images/logo}}
\begin{document}
\maketitle
\tableofcontents
\newpage
\chapter{Preliminaries}
\input{preliminaries}
\input{mechanics}
\input{testing}
\chapter{Running MMonCa}
\input{running}
\input{parallelism}
\chapter{Syntax}
\index{syntax}
\input{defects/syntaxis}
\chapter{Object KMC: defects and particles}
\input{defects/introduction}
\section{Description of defects and parameters}
\input{defects/MobileParticle}
\input{defects/Cluster}
\input{defects/Interfaces}
\section{Amorphization}
\input{materials/amorphization}
\chapter{Lattice KMC: Lattice atoms}
\input{defects/LKMC}
\chapter{Output}
\input{output/snapshots}
\input{output/extractinfo.tex}
\chapter{Commands}
\label{chap:commands}
All the commands can use the generic option \param{no.print} when you do not want \MMonCa\ to print out the command line.
\input{commands/anneal}
\input{commands/cascade}
\input{commands/extract}
\input{commands/init}
\input{commands/insert}
\input{commands/lowmsg}
\input{commands/param}
\input{commands/profile}
\input{commands/report}
\input{commands/restart}
\input{commands/save}
\input{commands/test}
\chapter{Limitations}
\section{extract diffusivities}
Only tracks \idx{diffusivity} of impurities, i.e., not of I or V.
\chapter{Appendix}
\section{Binding energies}
\index{binding}
There is no {\tt Particle(binding) \{ pref\_b ener\_b \}}; instead, there are two parameters: {\tt Dopant(formation) \{ pref\_d ener\_d \}} and {\tt Particle(formation) \{ pref\_p ener\_p \}}. The expressions for the change of variables for energies and prefactors are shown below:
\begin{tabular}{lc}
$pref_{Dop} = 1$ & $ener_{Dop} = 0$ \\
\end{tabular}
\begin{equation}
ener_{Part} = ener_{Dop} + ener_{IorV} - ener_{Bind}
\end{equation}
\begin{equation}
pref_{Part} = \frac{pref_{Dop}}{pref_{Bind}} \cdot pref_{IorV} \cdot pref_{migIorV} \cdot v_{capt}
\end{equation}
Where $ener_{IorV}$ is the \idx{formation energy} of interstitials or vacancies, $pref_{IorV}$ is the initial concentration of interstitials or vacancies, $pref_{migIorV}$ is the migration prefactor of interstitials or vacancies ({\tt IorV(migration)}), and $v_{capt}$ is the capture volume defined as:
\begin{equation}
v_{capt} = 3.65 \cdot \lambda^3
\end{equation}
Where $\lambda$ is the \idx{migration jump}.
\subsection{Example}
\begin{itemize}
\item $E_f(C_i) = E_f(I) + E_f(C) - E_b(C_i)$
\item $\left.\frac{C_{C_i}}{C_{C_{ref}}}\right|_0 = \left.\frac{C_{C}}{C_{C_{ref}}}\right|_0 \cdot C_{I_0} \cdot \nu_{mI_0} \cdot v_{capt} \cdot \frac{1}{\nu_{bin}(C_i)}$
\end{itemize}
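The variable change above can also be scripted for convenience. The following short Python sketch is \emph{not} part of \MMonCa{} (all function and argument names are ours); it simply implements the expressions for $ener_{Part}$, $pref_{Part}$, and $v_{capt}$ given above:
\begin{lstlisting}[language=Python]
# Helper sketch (not part of MMonCa): convert binding parameters into
# the Particle(formation) prefactor and energy, following the
# expressions above.  lambda_jump is the migration jump.

def capture_volume(lambda_jump):
    return 3.65 * lambda_jump**3

def particle_formation(ener_dop, ener_iorv, ener_bind,
                       pref_dop, pref_bind, pref_iorv,
                       pref_mig_iorv, lambda_jump):
    ener_part = ener_dop + ener_iorv - ener_bind
    pref_part = ((pref_dop / pref_bind) * pref_iorv
                 * pref_mig_iorv * capture_volume(lambda_jump))
    return pref_part, ener_part
\end{lstlisting}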
\input{Copyrights}
\bibliographystyle{apalike}
%\bibliographystyle{newapa}
\bibliography{articles}
\listoffigures
\listoftables
\printindex
\end{document}
| {
"alphanum_fraction": 0.7408228628,
"avg_line_length": 24.4518072289,
"ext": "tex",
"hexsha": "cefaad52bd7074fc33b56c0a1bb3269d55af391a",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2022-03-09T10:38:14.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-12-04T03:28:14.000Z",
"max_forks_repo_head_hexsha": "df279c2103484e89898ff4e81b45fb9ad43bcb9e",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Warmshawn/MMonCa",
"max_forks_repo_path": "doc/MMonCa.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "df279c2103484e89898ff4e81b45fb9ad43bcb9e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Warmshawn/MMonCa",
"max_issues_repo_path": "doc/MMonCa.tex",
"max_line_length": 302,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "126744a90253d7d7884c6dc7ec100db00a106a66",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "imartinbragado/MMonCa",
"max_stars_repo_path": "doc/MMonCa.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-15T09:13:49.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-23T16:20:09.000Z",
"num_tokens": 1329,
"size": 4059
} |
%!TEX root = ../dokumentation.tex
\chapter{Validation}\label{sec:Validation}
This chapter reflects on the sub-syntaxes generated by \ac{Synplifier}. It demonstrates that sub-syntaxes are compatible with the \ac{TPTP} syntax format and compares the sizes of several sub-syntaxes of interest.
%\section{Comment association}\label{sec:ValidationCommentAssociation}
%
%todo comment association
\section{Automated parser generation}\label{sec:ValidationAutomatedParserGeneration}
A goal of using \ac{Synplifier} is to be able to use an extracted sub-syntax with the automated parser generator for the \ac{TPTP} syntax \cite{VS06}.
To ensure compatibility, it is possible to export an extracted sub-syntax and add the part of the syntax concerning comments, even though it is not reachable in the original syntax (see section \ref{sec:ConceptAutomatedParserGenerator}). If this part of the syntax were not included in the output sub-syntax file, automated parser generation would fail because this syntax part is expected to be present.\\
Also, the automated parser generator is used to check if the output sub-syntax follows the original \ac{TPTP} syntax format.
\subsection{Building a basic parser}\label{sec:ValidationAutomatedParserGenerationBuildingBasicParser}
To demonstrate the usability and capability of \ac{Synplifier}, a parser that accepts only \ac{CNF} and counts the number of \ac{CNF} clauses is used.
The creation of the parser can be divided in the following steps:
\begin{enumerate}%[noitemsep]
\item Extract the \ac{CNF} sub-syntax from the original \ac{TPTP} syntax using \ac{Synplifier}.
\item Generate Lex and Yacc file based on the sub-syntax using the automated parser generator.
\item Modify the generated Yacc parser to count \ac{CNF} clauses.
\end{enumerate}
\subsubsection{\ac{CNF} sub-syntax extraction}\label{sec:ValidationAutomatedParserGenerationBuildingBasicParserSubSyntax}
The following listing \ref{lst:ValidationParserControlFile} contains the control file content, that extracts \ac{CNF} from the \ac{TPTP} syntax version 7.3.0.0.
The start symbol is \textit{TPTP\textunderscore file}.
\begin{lstlisting}[language = None, caption= Control file to extract \ac{CNF}, label= lst:ValidationParserControlFile]
<TPTP_file>
<annotated_formula>,::=,0,1,2,3,5
<annotations>,::=,0
\end{lstlisting}
All productions except the \textit{<cnf\textunderscore annotated>} are disabled from the \textit{<annotated\textunderscore formula>} grammar rule (line 2 of listing \ref{lst:ValidationParserControlFile}).
The \textit{<annotated\textunderscore formula>} grammar rule can be seen in the following listing \ref{lst:ValidationParserAnnotatedFormulaProductions}.
Annotations are also disabled (line 3 in listing \ref{lst:ValidationParserControlFile}).
\begin{lstlisting}[language = None,caption= \textit{<annotated\textunderscore formula>} production rule, label= lst:ValidationParserAnnotatedFormulaProductions]
<annotated_formula> ::= <thf_annotated> | <tff_annotated> | <tcf_annotated> |
<fof_annotated> | <cnf_annotated> | <tpi_annotated>
\end{lstlisting}
To extract the \ac{CNF} sub-syntax using the control file content from listing \ref{lst:ValidationParserControlFile} either the command-line interface or the GUI can be used.
\subsubsection{Lex and Yacc file generation}\label{sec:ValidationAutomatedParserGenerationBuildingBasicParserGenerateFiles}
The generated sub-syntax is input to the automated parser generator.
The automated parser generates a Lex and Yacc file, and also the corresponding C-files from that.
\subsubsection{\ac{CNF} clause counter implementation}\label{sec:ValidationAutomatedParserGenerationBuildingBasicParserClauseCounter}
To count the \ac{CNF} clauses the counter $cnf\textunderscore counter$ has been added to the parser.
Also, a main function is added, that can be seen in listing \ref{lst:ValidationParserMainFunction}.
Using this function, either a file can be passed to the parser or the input can be provided via the command-line.
After parsing is complete, the total number of \ac{CNF} clauses is output to the console.
\begin{lstlisting}[language=c, basicstyle=\scriptsize ,caption= \ac{CNF} parser main-function,label= lst:ValidationParserMainFunction]
int main(int argc, char **argv ){
++argv, --argc; /* skip over program name */
if(argc>0){
yyin = fopen(argv[0],"r");
}
else{
yyin = stdin;
}
yyparse();
printf("Total count of cnf clauses: %d\n", cnf_counter);
}
\end{lstlisting}
The incrementation of the \textit{cnf\textunderscore counter} is done in the Yacc rule-action, that has been created by the automated parser generator, for the \textit{cnf\textunderscore annotated} symbol.
\subsection{Testing the generated parser}\label{sec:ValidationAutomatedParserGenerationTesting}
The generated parser has been successfully tested on \ac{TPTP} problems.
It returns an error if logics other than \ac{CNF} are present and correctly counts the number of \ac{CNF} clauses.
The automated parser generator can be used with any sub-syntax generated by \ac{Synplifier}, for example sub-syntaxes with \ac{FOF}, or \ac{FOF} and \ac{CNF}.
\section{Syntax size comparison}\label{sec:ValidationSyntaxSizeComparison}
Based on the extracted \ac{CNF} and \ac{FOF} sub-syntaxes, the size of the sub-syntaxes in comparison to the full \ac{TPTP} syntax is analyzed.
Table \ref{tbl:EvaluationSyntaxSize} contains the number of rule statements and rules of the original \ac{TPTP} syntax and extracted sub-syntaxes.\\
A rule statement in this case means the combination of left-hand side nonterminal symbol, rule type and rule alternatives.
Since rule alternatives are just a convenient way to display multiple rules with the same nonterminal symbol on the left-hand side and the same rule type, the number of rules is compared in addition to the number of rule statements.\\
The reachable part of the \ac{TPTP} syntax in the second row of the table contains only the reachable symbols counted from the start symbol \textit{\textless TPTP\textunderscore file\textgreater}. Rule statements are counted by counting the nodes in the grammar graph, and the number of rules is counted by counting the productions list entries of all nodes.\\
Table \ref{tbl:EvaluationSyntaxSize} shows that the \ac{FOF} sub-syntax contains only 35 \% of the rule statements of the full syntax.
The \ac{CNF} sub-syntax contains only circa 30 \% of the rule statements of the full syntax.
Compared with the total number of reachable rules in the \ac{TPTP} syntax, \ac{FOF} contains circa 33 \% of the total reachable rules and \ac{CNF} contains circa 28 \% of the total reachable rules.
This shows a significant syntax reduction, which was one of the goals of using the tool.
\begin{table}[H]
\centering
\caption{TPTP syntax size comparison}
\begin{tabular}{lll}
\textbf{Syntax} & \textbf{Rule statements} & \textbf{Rules}\\\hline
TPTP syntax v. 7.3.0 & 313 & 635\\
Reachable part of TPTP syntax v. 7.3.0 & 285 & 595\\
FOF & 108 & 198\\
CNF & 92 & 165\\
\end{tabular}
\label{tbl:EvaluationSyntaxSize}
\end{table}
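For reference, the percentages quoted above follow directly from the counts in Table \ref{tbl:EvaluationSyntaxSize}. The following short snippet (illustrative only, not part of \ac{Synplifier}) reproduces them; as in the text, rule statements are compared against the full syntax and rules against the reachable part.
\begin{lstlisting}[language=Python, caption= Reproducing the syntax size percentages, label= lst:ValidationSizePercentages]
# Counts taken from the table above.
full_rule_statements = 313
reachable_rules = 595
sub_syntaxes = {"FOF": (108, 198), "CNF": (92, 165)}

for name, (rule_statements, rules) in sub_syntaxes.items():
    statement_share = 100 * rule_statements / full_rule_statements
    rule_share = 100 * rules / reachable_rules
    print(f"{name}: {statement_share:.1f}% of rule statements, "
          f"{rule_share:.1f}% of reachable rules")
\end{lstlisting}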
| {
"alphanum_fraction": 0.7831951665,
"avg_line_length": 69.0970873786,
"ext": "tex",
"hexsha": "31b3ecbcdb1c2e90e035051ac22ce942ce997afe",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4d4fb68583f4c01a26fa05f1807c3e618acd9ea2",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "nahku/CLDTSDocumentation",
"max_forks_repo_path": "content/Evaluation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4d4fb68583f4c01a26fa05f1807c3e618acd9ea2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "nahku/CLDTSDocumentation",
"max_issues_repo_path": "content/Evaluation.tex",
"max_line_length": 421,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4d4fb68583f4c01a26fa05f1807c3e618acd9ea2",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nahku/CLDTSDocumentation",
"max_stars_repo_path": "content/Evaluation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1789,
"size": 7117
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx} % Allows including images
\usepackage{subcaption}
\title{Weekly Report 726}
\author{Junior Team }
\date{July 2020}
\begin{document}
\maketitle
\section{Introduction}
This week we had a smaller group than usual as Natalie and Joah had family obligations. We continued to look into the square data set as well as begin to examine their corresponding simulated trajectories.
\section{Improving the Square Data Set}
\subsection{Pre-Impact Velocity}
Since the object's velocities are derived from position measurements, noise in the position can lead to extremely inaccurate velocity estimates, because the noise is amplified by dividing by the small time step ($v_1 = \frac{x_1 - x_0}{dt}$).
To combat this, a filter is often applied to the data to smooth out the noise. The issue with the filter, however, is that it also tends to smooth out instantaneous events (like impacts). To address this, we decided to fit a line to a range of positions before and after the impact. We then took the slopes of those two lines as the pre- and post-impact velocities. This also checks out theoretically, because the only external force acting on the object is gravity, meaning $\dot{x}$ and $\dot{\theta}$ should be constant while $\dot{y}$ decreases linearly.\\
\noindent As you can see in the figures below (Fig. \ref{fig:xt6}, Fig. \ref{fig:yt6}), the filter sometimes leads to an inaccurate intermediate velocity. The fitting method also helps reduce the effect of any inaccurate measurements by averaging over the fit window.
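The following Python sketch illustrates the fitting idea under assumed names (\texttt{t}, \texttt{x}, \texttt{impact\_idx}, \texttt{window}); it is a minimal sketch of the method, not the code used for this report.
\begin{verbatim}
# Estimate pre- and post-impact velocity from the slopes of linear fits to the
# position samples on either side of the impact, instead of differentiating
# the noisy positions directly.
import numpy as np

def impact_velocities(t, x, impact_idx, window=10):
    pre = slice(impact_idx - window, impact_idx)
    post = slice(impact_idx + 1, impact_idx + 1 + window)
    v_pre, _ = np.polyfit(t[pre], x[pre], 1)     # slope of the pre-impact fit
    v_post, _ = np.polyfit(t[post], x[post], 1)  # slope of the post-impact fit
    return v_pre, v_post

# Synthetic check: slope 2.0 before the "impact" at t = 0.5, slope -1.5 after.
t = np.linspace(0, 1, 101)
x = np.where(t < 0.5, 2.0 * t, 1.0 - 1.5 * (t - 0.5))
print(impact_velocities(t, x, impact_idx=50))   # -> approximately (2.0, -1.5)
\end{verbatim}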
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.125]{xTrial6.jpg}
\caption{X Position and Velocity}
\label{fig:xt6}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[scale=0.125]{yTrial6.jpg}
\caption{Y Position and Velocity}
\label{fig:yt6}
\end{subfigure}
\caption{Fitting Position Curves to Find Velocity for Trial 6}
\end{figure}
\subsection{Outliers}
The next step towards improving the square data set was to re-run all of the outliers on the error graph through our data visualizer to reinspect those cases. There were some double bounces which we had missed or accidentally mislabeled, and there were some cases where, although it wasn't a double impact, two impacts occurred in quick succession. This creates issues for the method described in the section above, since it requires points both before and after the impact for the linear fit. These cases were also discarded. In addition, Dr. Posa suggested we discard cases where the initial rotational velocity was extremely high (over 50 rad/s).
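A minimal sketch of such a filter is shown below; the record fields and the minimum-gap threshold are assumptions made for illustration, and only the 50 rad/s limit comes from the discussion above.
\begin{verbatim}
# Illustrative filter: drop double bounces, impacts too close together for a
# clean linear fit, and trials with initial angular velocity over 50 rad/s.
MIN_GAP = 15        # assumed: samples required between consecutive impacts
MAX_OMEGA = 50.0    # rad/s, the threshold mentioned above

def keep_trial(trial):
    if trial["double_bounce"]:
        return False
    idx = trial["impact_indices"]
    if any(b - a < MIN_GAP for a, b in zip(idx, idx[1:])):
        return False
    return abs(trial["initial_omega"]) <= MAX_OMEGA

trials = [
    {"double_bounce": False, "impact_indices": [120, 260], "initial_omega": 12.0},
    {"double_bounce": False, "impact_indices": [100, 104], "initial_omega": 8.0},
    {"double_bounce": True,  "impact_indices": [90],       "initial_omega": 5.0},
]
clean_trials = [tr for tr in trials if keep_trial(tr)]
print(len(clean_trials))   # -> 1 (only the first trial survives)
\end{verbatim}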
\subsection{Improvement in Results}
Next, we re-ran the updated data set through the IRB model to see how it compared to the original data set. The mean error for the original square data set was around 0.21; the updated data set shows a significant decrease (over 50\%).
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{IRBError.jpg}
\caption{Error Plot for Classical IRB with Updated Square Data Set}
\label{fig:IRBError}
\end{figure}
\section{Simulated Trajectories}
\subsection{Visualizer}
We adapted our visualizer to show both the actual and simulated trajectories simultaneously in subplots. To our surprise, the simulated trajectories were quite different from the observed trajectories. Oftentimes, the deviations began before any impact even occurred. This led us to wonder how Dr. Fazeli had simulated them in the first place. Will helped us learn that they were simulated using pybullet.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Captura de pantalla 2020-08-03 a la(s) 13.51.25.png}
\caption{Live comparison of the Real vs Simulated Trajectory}
\label{fig:RealvsSim}
\end{figure}
\subsection{Comparing the Real vs Simulated Errors}
After going through the simulated data in a similar way to the real data, we manually selected the impacts that we thought were acceptable. In the vast majority of cases, the impacts we considered ``good'' in the real data set did not directly coincide with those we considered ``good'' in the simulated data set. Regardless, we took 260 cases from each group and made the plot shown below (Fig. \ref{fig:RealvsSimError}). In essence, we found that, on average, the simulated data gives a velocity error roughly four times lower than the real data set. This difference persists even when we remove the outliers from the real data set.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.15]{Simulated vs Real.jpg}
\caption{Error comparison of the Real vs Simulated Trajectory}
\label{fig:RealvsSimError}
\end{figure}
\end{document}
\chapter{Transformation framework}
\label{chapter:transformation_framework}
The previous chapter introduced formalisations for GROOVE graphs and Ecore models. These formalisations allow us to reason about the GROOVE graphs and Ecore models in a formal way. In this chapter, these formalisations will be the foundation of a framework that allows for the formalisation of model transformations between Ecore and GROOVE.
Creating a formal transformation between Ecore models and GROOVE graphs is non-trivial, first and foremost because Ecore provides more instrumentation for expressing individual elements, some of which GROOVE cannot express directly. For example, Ecore models can directly express enumeration types and values, whereas GROOVE cannot. The same holds for properties related to relations and attributes, as well as user-defined data types and constants. This difference in instrumentation can be solved using encodings, which the transformation framework should support.
Another complexity of a formal transformation between Ecore models and GROOVE graphs is the infinite number of possible transformation functions: because there exist infinitely many models and graphs, there are also infinitely many possible model transformations. Since it is impractical to prove the correctness of each transformation function individually, a more systematic solution is needed. As discussed in \cref{sec:introduction:approach}, the transformation framework will be structured such that individual transformation functions can be composed while preserving correctness. This composability allows the user to combine simple transformations into more substantial transformations, without having to prove the composed transformations separately.
\cref{sec:transformation_framework:encodings} explains how encodings are used to deal with the elements that GROOVE cannot express directly. \cref{sec:transformation_framework:structure} explains the structure of the transformation framework, which is set up to allow the composability of transformation functions. The remaining sections in this chapter further explain how to apply this structure.
\input{tex/04_transformation_framework/01_encodings.tex}
\input{tex/04_transformation_framework/02_structure.tex}
\input{tex/04_transformation_framework/03_type_models_and_type_graphs.tex}
\input{tex/04_transformation_framework/04_instance_models_and_instance_graphs.tex}
\chapter{Applications II: \alphatap}\label{alphatapchapter}
In this chapter we examine a second application of nominal logic
programming, a declarative theorem prover for first-order classical
logic. We call this prover \alphatap, since it is based on the
\leantapsp\cite{beckert95leantap} prover and written in
\alphakanren. Our prover is a relation, without mode restrictions;
given a logic variable as the theorem to be proved, \alphatapsp
\textit{generates} valid theorems.
\leantapsp is a lean tableau-based theorem prover for first-order
logic due to \citet{beckert95leantap}. Written in
Prolog, it is extremely concise and is capable of a high rate of
inference. \leantapsp uses Prolog's cut (\texttt{!}) in three of its
five clauses in order to avoid nondeterminism, and uses
\mbox{\texttt{copy\_term/2}} to make copies of universally quantified
formulas. Although Beckert and Posegga take advantage of Prolog's
unification and backtracking features, their use of the impure cut and
\mbox{\texttt{copy\_term/2}} makes \leantapsp non-declarative.
% : reordering goals within the prover may cause divergence.
%% new definition of nondeclarative?
In this chapter we translate \leantapsp from Prolog to impure
miniKanren, using \scheme|match-a| to mimic Prolog's cut, and
\scheme|copy-termo| to mimic \mbox{\texttt{copy\_term/2}}. We then show how
to eliminate these impure operators from our translation. To eliminate the
use of \scheme|match-a|, we introduce a tagging scheme that makes our
formulas unambiguous. To eliminate the use of \scheme|copy-termo|, we
use substitution instead of copying terms. Universally quantified
formulas are used as templates, rather than instantiated directly;
instead of representing universally quantified variables with logic
variables, we use the noms of nominal logic. We then use nominal
unification to write a substitution relation that replaces quantified
variables with logic variables, leaving the original template
untouched.
The resulting declarative theorem prover is interesting for two
reasons. First, because of the technique used to arrive at its
definition: we use declarative substitution rather than
\scheme|copy-termo|. To our knowledge, there is no method for
copying arbitrary terms declaratively. Our solution is not completely
general but is useful when a term is used as a template for copying,
as in the case of \leantap. Second, because of the flexibility of the
prover itself: \alphatapsp is capable of instantiating non-ground
theorems during the proof process, and accepts non-ground
\textit{proofs}, as well. Whereas \leantapsp is fully automated and
either succeeds or fails to prove a given theorem, \alphatapsp can
accept guidance from the user in the form of a partially-instantiated
proof, regardless of whether the theorem is ground.
We present an implementation of \alphatapsp in
section~\ref{implementation}, demonstrating our technique for
eliminating cut and \mbox{\texttt{copy\_term/2}} from \leantap. Our
implementation demonstrates our contributions: first, it illustrates a
method for eliminating common impure operators, and demonstrates the
use of nominal logic for representing formulas in first-order logic;
second, it shows that the tableau process can be represented as a
relation between formulas and their tableaux; and third, it
demonstrates the flexibility of relational provers to mimic the full
spectrum of theorem provers, from fully automated to fully dependent
on the user.
This chapter is organized as follows. In section~\ref{tableau} we
describe the concept of tableau theorem proving. In
section~\ref{alphatap} we motivate our declarative prover by examining
its declarative properties and the proofs it returns. In
section~\ref{implementation} we present the implementation of
\alphatap, and in section~\ref{performance} we briefly examine
\alphatap's performance. Familiarity with tableau theorem proving
would be helpful; for more on this topic, see the references given in
section~\ref{tableau}. In addition, a reading knowledge of Prolog
would be useful, but is not necessary; for readers unfamiliar with
Prolog, carefully following the miniKanren and \alphakanrensp code
should be sufficient for understanding all the ideas in this chapter.
\section{Tableau Theorem Proving}\label{tableau}
We begin with an introduction to tableau theorem proving and its
implementation in \leantap.
Tableau is a method of proving first-order theorems that works by
refuting the theorem's negation. In our description we assume basic
knowledge of first-order logic; for coverage of this subject and a
more complete description of tableau proving, see
\citet{fitting1996fol}. For simplicity, we consider only
formulas in Skolemized \textit{negation normal form} (NNF).
Converting a formula to this form requires removing existential
quantifiers through Skolemization, reducing logical connectives so
that only $\wedge$, $\vee$, and $\neg$ remain, and pushing negations
inward until they are applied only to literals---see section~3 of
\citet{beckert95leantap} for details.
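As a rough illustration of the last step, the following Python sketch pushes negations inward for the propositional fragment only; quantifiers and Skolemization are omitted, and the tuple representation is an assumption made for this example.
\begin{verbatim}
# Formulas are nested tuples ("and", p, q), ("or", p, q), ("not", p), or atoms.
def nnf(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    g = f[1]                                    # op == "not"
    if isinstance(g, str):
        return ("not", g)                       # negation of a literal stays put
    if g[0] == "not":
        return nnf(g[1])                        # double negation elimination
    dual = "or" if g[0] == "and" else "and"     # De Morgan
    return (dual, nnf(("not", g[1])), nnf(("not", g[2])))

print(nnf(("not", ("or", "p", ("not", "q")))))  # -> ('and', ('not', 'p'), 'q')
\end{verbatim}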
To form a tableau, a compound formula is expanded into branches
recursively until no compound formulas remain. The leaves of this
tree structure are referred to as \textit{literals}. \leantapsp forms
and expands the tableau according to the following rules. When the
prover encounters a conjunction $x \wedge y$, it expands both $x$ and
$y$ on the same branch. When the prover encounters a disjunction $x
\vee y$, it splits the tableau and expands $x$ and $y$ on separate
branches. Once a formula has been fully expanded into a tableau, it
can be proved unsatisfiable if on each branch of the tableau there
exist two complementary literals $a$ and $\neg a$ (each branch is
\textit{closed}). In the case of propositional logic, syntactic
comparison is sufficient to find complementary literals; in
first-order logic, sound unification must be used. A closed tableau
represents a proof that the original formula is unsatisfiable.
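The following Python sketch, using the same tuple representation as in the sketch above, mirrors these expansion and closure rules for the propositional case only (so syntactic comparison suffices and no unification is needed); it is an illustrative analogy, not part of \leantapsp or \alphatap.
\begin{verbatim}
# `unexp` is the stack of formulas still to expand on the current branch and
# `lits` the literals seen so far. Returns True iff every branch closes,
# i.e. the input formula is unsatisfiable.
def refute(fml, unexp=(), lits=()):
    if not isinstance(fml, str) and fml[0] == "and":
        return refute(fml[1], (fml[2],) + unexp, lits)      # expand both conjuncts
    if not isinstance(fml, str) and fml[0] == "or":
        return refute(fml[1], unexp, lits) and \
               refute(fml[2], unexp, lits)                  # split the tableau
    # fml is a literal: close the branch, or save it and continue
    neg = fml[1] if not isinstance(fml, str) else ("not", fml)
    if neg in lits:
        return True                                         # complementary literal found
    return bool(unexp) and refute(unexp[0], unexp[1:], (fml,) + lits)

print(refute(("and", "p", ("not", "p"))))               # -> True (unsatisfiable)
print(refute(("and", ("or", "p", "q"), ("not", "p"))))  # -> False (satisfiable via q)
\end{verbatim}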
The addition of universal quantifiers makes the expansion process more
complicated. To prove a universally quantified formula \mbox{$\forall x. M$},
\leantapsp generates a logic variable $v$ and expands $M$,
replacing all occurrences of $x$ with $v$ (i.e., it expands $M^{\prime}$ where
$M^{\prime} = M[v/x]$). If \leantapsp is unable to close the current branch
after this expansion, it has the option of generating another logic
variable and expanding the original formula again. When the prover
expands the universally quantified formula \mbox{$\forall x. F(x) \wedge ( \neg F({\sf a})
\vee \neg F({\sf b}) )$}, for example, \mbox{$\forall x. F(x)$}
must be expanded twice, since $x$ cannot be instantiated to both
\textsf{a} and \textsf{b}.
\section{Introducing \alphatap}\label{alphatap}
We begin by presenting some examples of \alphatap's abilities, both in
proving ground theorems and in generating theorems. We also explore
the proofs generated by \alphatap, and show how passing
partially-instantiated proofs to the prover can greatly improve its
performance.
\subsection{Running Forwards}\label{forwards}
Both \leantapsp and \alphatapsp can prove ground theorems; in
addition, \alphatap\ produces a proof. This proof is a list
representing the steps taken to build a closed tableau for the
theorem; \citet{paulson99generic} has shown that translation to
a more standard format is possible. Since a closed tableau represents
an unsatisfiable formula, such a list of steps proves that the
negation of the formula is valid. If the list of steps is ground, the
proof search becomes deterministic, and \alphatapsp acts as a proof
checker.
\leantapsp encodes first-order formulas using Prolog terms. For
example, the term \mbox{\texttt{(p(b),all(X,(-p(X);p(s(X)))))}}
represents \mbox{$p($\textsf{b}$) \wedge \forall x . \neg p(x) \vee
p(s(x))$}. In our prover, we represent formulas using Scheme lists
with extra tags:
%, and in our final version we adopt a more extensive tagging
%scheme. The \schemeresult|forall| binder is represented by
%\alphakanren's \scheme|tie|, and variables are represented by noms.
%Our example formula is represented by the ground list:
\schemedisplayspace
\begin{schemeresponse}
(and-tag (pos (app p (app b))) (forall (tie anom (or-tag (neg (app p (var-tag anom)))
(pos (app p (app s (var-tag anom))))))))
\end{schemeresponse}
% The Prolog query \mbox{\texttt{prove(Fml,[],[],[],VarLim)}} succeeds
% if the formula \texttt{Fml} is unsatisfiable. Similarly, the
% \alphakanrensp goal \mbox{\scheme|(proveo fml '() '() '() proof)|}
% succeeds if \scheme|fml| can be shown to be unsatisfiable via the
% proof \scheme|proof|.
Consider Pelletier Problem 18~\cite{pelletier1986sfp}: \mbox{$\exists
y. \forall x. F(y) \Rightarrow F(x)$}. To prove this theorem in
\alphatap, we transform it into its \textit{negation} in NNF:
\schemedisplayspace
\begin{schemeresponse}
(forall (tie anom (and-tag (pos (app f (var-tag anom))) (neg (app f (app g1 (var-tag anom)))))))
\end{schemeresponse}
\noindent where \schemeresult|`(app ,g1 (var-tag anom))| represents the
application of a Skolem function to the universally quantified
variable $a$. Passing this formula to the prover, we obtain the proof
\schemeresult|`(univ conj savefml savefml univ conj close)|. This proof
lists the steps the prover (presented in section~\ref{matcha}) follows to close
the tableau. Because both conjuncts of the formula contain the nom
$a$, we must expand the universally quantified formula more than once.
Partially instantiating the proof helps \alphatapsp prove theorems
with similar subparts. We can create a non-ground proof that describes
in general how to prove the subparts and have \alphatapsp fill in the
trivial differences. This can speed up the search for a proof
considerably. By inspecting the negated NNF of Pelletier Problem~21,
for example, we can see that there are at least two portions of the
theorem that will have the same proof. By specifying the structure of
the first part of the proof and constraining the identical portions by
using the same logic variable to represent both, we can give the
prover some guidance without specifying the whole proof. We pass the
following non-ground proof to \alphatap:
\schemedisplayspace
\vspace{-2pt}
\begin{centering}
\begin{schemeresponse}
(conj univ split (conj savefml savefml conj split Xvar Xvar)
(conj savefml savefml conj split (close) (savefml split Yvar Yvar)))
\end{schemeresponse}
\end{centering}
\vspace{-2pt}
\noindent On our test machine, our prover solves the original problem
with no help in 68 milliseconds (ms); given the knowledge that the
later parts of the proof will be duplicated, the prover takes only 27
ms. This technique also yields improvement when applied to Pelletier
Problem 43: inspecting the negated NNF of the formula, we see two
parts that look nearly identical. The first part of the negated
NNF---the part representing the theorem itself---has the following
form:
\schemedisplayspace
\vspace{-2pt}
\begin{centering}
\begin{schemeresponse}
(and-tag (or-tag (and-tag (neg (app Q (app g4) (app g3)))
(pos (app Q (app g3) (app g4))))
(and-tag (pos (app Q (app g4) (app g3)))
(neg (app Q (app g3) (app g4))))) ...)
\end{schemeresponse}
\end{centering}
\vspace{-2pt}
\noindent Since we suspect that the same proof might suffice for both
branches of the theorem, we give the prover the partially-instantiated
proof \mbox{\schemeresult|`(conj split Xvar Xvar)|}. Given just this
small amount of help, \alphatapsp proves the theorem in 720 ms,
compared to 1.5 seconds when the prover has no help at all. While
situations in which large parts of a proof are identical are rare,
this technique also allows us to handle situations in which different
parts of a proof are merely similar by instantiating as much or as
little of the proof as necessary.
\subsection{Running Backwards}\label{backwards}
% \begin{figure}[H]
% \begin{centering}
% \begin{tabular}{| r | c | c | c | c |}
% \hline
% Problem & \thinspace \leantap \thinspace\footnotemark[4] \thinspace &
% Translation\footnotemark[3] & \thinspace \alphatap\footnotemark[3]
% \thinspace & \thinspace \alphatap$\!_G$\footnotemark[4]$^,$\footnotemark[6] \\
% \hline
% 1 & ? &
% \hline
% \end{tabular}
% \caption{\alphatap's Performance on Pelletier's Problems\protect\footnotemark[2]
% \label{fig:performance}}
% \end{centering}
% \end{figure}
%\vspace{-6pt}
% Testing our prover on
% several of Pelletier's 75 problems~\cite{pelletier1986sfp} shows that
% \alphatapsp is about three to five times slower than our translation
% of \leantap. The translation solves problem 32, for example, in about
% one second, while \alphatapsp takes about three
% seconds; problem 26 takes our translation of \leantapsp about 13
% seconds, while \alphatapsp needs 36 seconds.
Unlike \leantap, \alphatapsp can generate valid theorems. Some
interpretation of the results is required since the theorems generated
are negated formulas in NNF.\footnote{The full implementation of
\alphatapsp includes a simple declarative translator from negated
NNF to a positive form.} In the example
\smallskip
\scheme|(run1 (q) (exist (x) (proveo q '() '() '() x)))|
\hspace{0.1cm}$\Rightarrow$
\schemeresult|`((and-tag (pos (app _.0)) (neg (app _.0))))|
\smallskip
\noindent
the reified logic variable \schemeresult|_.0| represents any
first-order formula $p$, and the entire answer represents the formula
$p \wedge \neg p$. Negating this formula yields the original theorem:
$\neg p \vee p$, or the law of excluded middle. We can also generate
more complicated theorems; here we use the ``generate and test'' idiom
to find the first theorem matching the negated NNF of the inference
rule {\it modus ponens}:
\schemedisplayspace
\begin{schemedisplay}
(run1 (q)
(exist (x)
(proveo x '() '() '() q)
(== `(and-tag (and-tag (or-tag (neg (app a)) (pos (app b))) (pos (app a))) (neg (app b)))
x)))
\end{schemedisplay}
\vspace{-.1cm}
\noindent $\Rightarrow$ \schemeresult|`((conj conj split (savefml close) (savefml savefml close)))|
\smallskip
\noindent This process takes about 5.1 seconds; {\it modus ponens} is the
173rd theorem to be generated, and the prover also generates a proof
of its validity. When this proof is given to \alphatap, {\it modus ponens}
is the sixth theorem generated, and the process takes only 20 ms.
Thus the declarative nature of \alphatapsp is useful both for
generating theorems and for producing proofs. Due to this flexibility,
\alphatapsp could become the core of a larger proof system. Automated
theorem provers like \leantapsp are limited in the complexity of the
problems they can solve, but given the ability to accept assistance
from the user, more problems become tractable.
%can solve more difficult problems.
%\footnotetext[7]{\alphatap$\!_G$ uses the unique name and preprocessor
% approach described in section 4.2.}
As an example, consider Pelletier Problem 47: Schubert's Steamroller.
This problem is difficult for tableau-based provers like \leantapsp
and \alphatap, and neither can solve it
automatically~\cite{beckert95leantap}. Given some help, however,
\alphatapsp can prove the Steamroller. Our approach is to prove a
series of smaller lemmas that act as stepping stones toward the final
theorem; as each lemma is proved, it is added as an assumption in
proving the remaining ones. The proof process is automated---the user
need only specify which lemmas to prove and in what order. Using this
strategy, \alphatapsp proves the Steamroller in about five seconds;
the proof requires twenty lemmas.
\alphatapsp thus offers an interesting compromise between large proof
assistants and smaller automated provers. It achieves some of the
capabilities of a larger system while maintaining the lean deduction
philosophy introduced by \leantap. Like an automated prover, it is
capable of proving simple theorems without user guidance. Confronted
with a more complex theorem, however, the user can provide a
partially-instantiated proof; \alphatapsp can then check the proof and
fill in the trivial parts the user has left out. Because \alphatapsp
is declarative, the user may even leave required axioms out of the
theorem to be proved and have the system derive them. This flexibility
comes at no extra cost to the user---the prover remains both concise
and reasonably efficient.
%% New
The flexibility of \alphatapsp means that it could be made interactive
through the addition of a read-eval-print loop and a simple proof
translator between \alphatap's proofs and a more human-readable
format. Since the proof given to \alphatapsp may be partially
instantiated, such an interface would allow the user to conveniently
guide \alphatapsp in proving complex problems. With the addition of
equality and the ability to perform single beta steps, this
flexibility would become more interesting---in addition to reasoning
about programs and proving properties about them, \alphatapsp would
instantiate non-ground programs during the proof process.
\section{Implementation}\label{implementation}
We now present the implementation of \alphatap. We begin with a
translation of \leantapsp from Prolog into \alphakanren. We then show
how to eliminate the translation's impure features through a
combination of substitution and tagging.
\leantapsp implements both expansion and closing of the tableau. When
the prover encounters a conjunction, it uses its argument
\texttt{UnExp} as a stack (Figure~\ref{fig:translation}): \leantapsp
expands the first conjunct, pushing the second onto the stack for
later expansion. If the first conjunct cannot be refuted, the second
is popped off the stack and expansion begins again. When a
disjunction is encountered, the split in the tableau is reflected by
two recursive calls. When a universal quantifier is encountered, the
quantified variable is replaced by a new logic variable, and the
formula is expanded. The \texttt{FreeV} argument is used to avoid
replacing the free variables of the formula. \leantapsp keeps a list
of the literals it has encountered on the current branch of the
tableau in the argument \texttt{Lits}. When a literal is encountered,
\leantapsp attempts to unify its negation with each literal in
\texttt{Lits}; if any unification succeeds, the branch is closed.
Otherwise, the current literal is added to \texttt{Lits} and expansion
continues with a formula from \texttt{UnExp}.
\subsection{Translation to \alphakanren}\label{translation}
While \alphakanrensp is similar to Prolog with the addition of nominal
unification, \alphakanrensp uses a variant of interleaving
depth-first search~\cite{backtracking}, so the order of
\scheme|conde| or \scheme|match-e| clauses in \alphakanrensp is irrelevant. Because of
Prolog's depth-first search, \leantapsp must use \texttt{VarLim} to
limit its search depth; in \alphakanren, \texttt{VarLim} is not
necessary, and thus we omit it.
In Figure~\ref{fig:translation} we present mK\leantap, our translation
of \leantapsp into \alphakanren; we label two clauses (\onet, \twot),
since we will modify these clauses later. To express Prolog's cuts,
our definition uses \scheme|match-a|. The final two clauses of
\leantapsp do not contain Prolog cuts; in mK\leantap, they are
combined into a single clause containing a \scheme|conde|. In place
of \leantap\thinspace's recursive call to \texttt{prove} to check the
membership of \texttt{Lit} in \texttt{Lits}, we call \scheme|membero|,
which performs a membership check using sound unification.\footnote{We define \scheme|membero| in Figure~\ref{fig:ending}; \scheme|membero| \emph{must} use sound unification, and cannot use \scheme|==-no-check|.} % Prolog's \texttt{copy\_term/2} is
% not built into \alphakanren; this addition is available as part of the
% mK\leantapsp source code.
%\begin{figure}[ht]
\begin{figure}[H]
%\vspace{-.3in}
\begin{tabular}{l l}
&
\begin{minipage}{2.3in}
\begin{schemedisplay}
(define proveo
(lambda (fml unexp lits freev)
(match-a fml
\end{schemedisplay}
\end{minipage} \\
\begin{minipage}{2.3in}
\begin{verbatim}
prove((E1,E2),UnExp,Lits,
FreeV,VarLim) :- !,
prove(E1,[E2|UnExp],Lits,
FreeV,VarLim).
\end{verbatim}
\end{minipage}
&
\begin{minipage}{2in}
\begin{schemedisplay}
(`(and-tag ,e1 ,e2)
(proveo e1 `(,e2 . ,unexp) lits freev))
\end{schemedisplay}
\end{minipage}
\\
\begin{minipage}{2in}
\begin{verbatim}
prove((E1;E2),UnExp,Lits,
FreeV,VarLim) :- !,
prove(E1,UnExp,Lits,FreeV,VarLim),
prove(E2,UnExp,Lits,FreeV,Varlim).
\end{verbatim}
\end{minipage}
&
\begin{minipage}{2in}
\vspace{1mm}
\begin{schemedisplay}
(`(or-tag ,e1 ,e2)
(proveo e1 unexp lits freev)
(proveo e2 unexp lits freev))
\end{schemedisplay}
\vspace{1mm}
\end{minipage}
\\
\begin{minipage}{2in}
\begin{verbatim}
prove(all(X,Fml),UnExp,Lits,
FreeV,VarLim) :- !,
\+ length(FreeV,VarLim),
copy_term((X,Fml,FreeV),
(X1,Fml1,FreeV)),
append(UnExp,[all(X,Fml)],UnExp1),
prove(Fml1,UnExp1,Lits,
[X1|FreeV],VarLim).
\end{verbatim}
\end{minipage}
&
\begin{minipage}{2in}
\begin{schemedisplay}
$\onet$(`(forall ,x ,body)
(exist (x1 body1 unexp1)
(copy-termo `(,x ,body ,freev)
`(,x1 ,body1 ,freev))
(appendo unexp `(,fml) unexp1)
(proveo body1 unexp1 lits
`(,x1 . ,freev))))
\end{schemedisplay}
\end{minipage}
\\
\begin{minipage}{2in}
\begin{verbatim}
prove(Lit,_,[L|Lits],_,_) :-
(Lit = -Neg; -Lit = Neg) ->
(unify(Neg,L);
prove(Lit,[],Lits,_,_)).
\end{verbatim}
\end{minipage}
&
\begin{minipage}{2in}
\begin{schemedisplay}
$\twot$(fml
(conde
((match-a `(,fml ,neg)
(`((not ,neg) ,neg))
(`(,fml (not ,fml))))
(membero neg lits))
\end{schemedisplay}
\end{minipage}
\\
\begin{minipage}{2in}
\begin{verbatim}
prove(Lit,[Next|UnExp],Lits,
FreeV,VarLim) :-
prove(Next,UnExp,[Lit|Lits],
FreeV,VarLim).
\end{verbatim}
\end{minipage}
&
\begin{minipage}{2in}
\begin{schemedisplay}
((exist (next unexp1)
(== `(,next . ,unexp1) unexp)
(proveo next unexp1 `(,fml . ,lits)
freev))))))))
\end{schemedisplay}
\end{minipage}
\\
\end{tabular}
\caption{\leantapsp and mK\leantap\thinspace: a translation from Prolog to \alphakanren
\label{fig:translation}}
%\vspace{-.3in}
\end{figure}
\subsection{Eliminating \copytermo}\label{copytermo}
\enlargethispage{1\baselineskip} %
Since \scheme|copy-termo| is an impure operator, its use makes
\scheme|proveo| non-declarative: reordering the goals in the prover
can result in different behavior. For example, moving the call to
\scheme|copy-termo| after the call to \scheme|proveo| causes the
prover to diverge when given any universally quantified formula. To
make our prover declarative, we must eliminate the use of
\scheme|copy-termo|.
Tagging the logic variables that represent universally quantified
variables allows the use of a declarative technique that creates two
pristine copies of the original term: one copy may be expanded and the
other saved for later copying. Unfortunately, this copying examines
the entire body of each quantified formula and instantiates the
original term to a potentially invalid formula.
Another approach is to represent quantified variables with symbols or
strings. When a new instantiation is needed, a new variable name can
be generated, and the new name can be substituted for the old without
affecting the original formula. This solution does not destroy the
prover's input, but it is difficult to ensure that the provided data
is in the correct form declaratively: if the formula to be proved is
non-ground, then the prover must generate unique names. If the
formula \textit{does} contain these names, however, the prover must
\textit{not} generate new ones. This problem can be solved with a
declarative preprocessor that expects a logical formula
\textit{without} names and puts them in place. If the preprocessor is
passed a non-ground formula, it instantiates the formula to the
correct form. %We have implemented this strategy in a Prolog prover we
%call \alphatap$\!_G$;
The requirement of a preprocessor, however,
means the prover itself is not declarative.
We use nominal logic to solve the \scheme|copy-termo| problem.
Nominal logic is a good fit for this problem, as it is designed to
handle the complexities of dealing with names and binders
declaratively.
%Using
%noms to represent universally quantified variables and the
%\scheme|tie| operator to represent the $\forall$ binder allows us to
%avoid the use of logic variables to represent quantified variables.
Since noms represent unique names, we achieve the benefits of the
symbol or string approach without the use of a preprocessor. We can
generate unique names each time we encounter a universally quantified
formula, and use nominal unification to perform the renaming of the
quantified variable. If the original formula is uninstantiated, our
newly-generated name is unique and is put in place correctly; we no
longer need a preprocessor to perform this function.
Using the tools of nominal logic, we can modify mK\leantapsp to
represent universally quantified variables using noms and to perform
substitution instead of copying. When the prover reaches a literal,
however, it must replace each nom with a logic variable, so that
unification may successfully compare literals. To accomplish this, we
associate a logic variable with each unique nom, and replace every nom
with its associated variable before comparing literals. These
variables are generated each time the prover expands a quantified
formula.
To implement this strategy, we change our representation of formulas
slightly. Instead of representing $\forall x. F(x)$ as
\mbox{\schemeresult|`(forall Xvar (f Xvar))|}, we use a nom wrapped in
a \scheme|var-tag| tag to represent a variable reference, and the
term constructor \scheme|tie| to represent the $\forall$ binder:
\mbox{\schemeresult|`(forall (tie anom (f (var-tag anom))))|}, where $a$ is
a nom. The \scheme|var-tag| tag allows us to distinguish noms
representing variables from other formulas. We now write a relation
\scheme|subst-lito| to perform substitution of logic variables for
tagged noms in a literal, and we modify the literal case of
\scheme|proveo| to use it. We also replace the clause handling
\schemeresult|forall| formulas and define \scheme|lookupo|. The two
clauses of \scheme|lookupo| overlap, but since each mapping in the
environment is from a unique nom to a logic variable, a particular nom
will never appear twice.
We present the changes needed to eliminate \scheme|copy-termo| from
mK\leantapsp in Figure~\ref{fig:changes}. Instead of copying the body
of each universally quantified formula, we generate a logic variable
\scheme|x| and add an association between the nom representing the
quantified variable and \scheme|x| to the current environment. When we
prepare to close a branch of the tableau, we call \scheme|subst-lito|,
replacing the noms in the current literal with their associated logic
variables.
\begin{figure}[H]
\noindent \begin{tabular}{l l}
\begin{minipage}{2.5in}
\small
\begin{schemedisplay}
$\onet$(`(forall (tie-tag ,@a ,body))
(exist (x unexp1)
(appendo unexp `(,fml) unexp1)
(proveo body unexp1 lits
`((,a . ,x) . ,env))))
$\twot$(fml
(exist (lit)
(subst-lito fml env lit)
(conde
((match-a `(,lit ,neg)
(`((not ,neg) ,neg))
(`(,lit (not ,lit))))
(membero neg lits))
((exist (next unexp1)
(== `(,next . ,unexp1) unexp)
(proveo next unexp1 `(,lit . ,lits)
env))))))
\end{schemedisplay}
\vspace{.1cm}
\end{minipage}
&
\begin{minipage}{1.2in}
\small
%\schemeinput{code/lookupo}
\begin{schemedisplay}
(define lookupo
(lambda (a env out)
(match-e env
(`((,a . ,out) . ,rest))
(`(,first . ,rest)
(lookupo a rest out)))))
\end{schemedisplay}
\begin{schemedisplay}
(define subst-lito
(lambda (fml env out)
(match-a `(,fml ,out)
(`((var-tag ,a) ,out)
(lookupo a env out))
(`((,e1 . ,e2) (,r1 . ,r2))
(subst-lito e1 env r1)
(subst-lito e2 env r2))
(`(,fml ,fml)))))
\end{schemedisplay}
\end{minipage}
\end{tabular}
\caption{Changes to mK\leantapsp to eliminate \protect\scheme|copy-termo|
\label{fig:changes}}
%\vspace{-.2in}
\end{figure}
The original \mbox{\texttt{copy\_term/2}} approach used by \leantapsp and
mK\leantapsp avoids replacing free variables by copying the list
\scheme|`(,x ,body ,freev)|. The copied version is unified with the list
\scheme|`(,x1 ,body1 ,freev)|, so that \textit{only} the variable
\scheme|x| will be replaced by a new logic variable---the free
variables will be copied, but those copies will be unified with the
original variables afterwards. Since our substitution strategy does
not affect free variables, the \scheme|freev| argument is no longer
needed, and so we have eliminated it.
\subsection{Eliminating \matchasymbol}\label{matcha}
Both \scheme|proveo| and \scheme|subst-lito| use \scheme|match-a|
because the clauses that recognize literals overlap with the other
clauses. To solve this problem, we have designed a tagging scheme that
ensures that the clauses of our substitution and \scheme|proveo|
relations do not overlap. To this end, we tag both positive and
negative literals, applications, and variables. Constants are
represented by applications of zero arguments. Our prover thus accepts
formulas of the following form:
% \begin{center}
% \begin{tabular}{lcl}
% $<$Fml$>$ & $\rightarrow$ & $($\textsf{or} $<$Fml$>$ $<$Fml$>)$
% % \\ & $|$ &
% $|$ $($\textsf{and} $<$Fml$>$ $<$Fml$>)$
% \\ & $|$ &
% $($\textsf{forall} $<$nom$>$ $<$Fml$>)$
% % \\ & $|$ &
% $|$ $($\textsf{lit} $<$Lit$>)$
% \\
% $<$Lit$>$ & $\rightarrow$ & $($\textsf{pos} $<$Term$>)$
% % \\ & $|$ &
% $|$ $($\textsf{neg} $<$Term$>)$
% \\
% $<$Term$>$ & $\rightarrow$ & $($\textsf{sym} $<$symbol$>)$
% % \\ & $|$ &
% $|$ $($\textsf{var} $<$nom$>)$
% % \\ & $|$ &
% $|$ $($\textsf{app} $<$symbol$>$ $<$Term$>$*$)$ \\
% \end{tabular}
% \end{center}
% \begin{center}
% \begin{tabular}{lcl}
% Fml & $\rightarrow$ & $($\textsf{and} Fml Fml$)$
% % \\ & $|$ &
% $|$ $($\textsf{or} Fml Fml$)$
% % \\ & $|$ &
% $|$ $($\textsf{forall} $($\scheme|tie| nom Fml$))$
% % \\ & $|$ &
% $|$ Lit
% \\
% Lit & $\rightarrow$ & $($\textsf{pos} Term$)$
% % \\ & $|$ &
% $|$ $($\textsf{neg} Term$)$
% \\
% Term & $\rightarrow$ & %$($\textsf{sym} symbol$)$
% % \\ & $|$ &
% %$|$
% $($\textsf{var} nom$)$
% % \\ & $|$ &
% $|$ $($\textsf{app} symbol Term*$)$ \\
% \end{tabular}
% \end{center}
%\vspace{-.2cm}
\begin{center}
\begin{tabular}{lcl}
\textit{Fml} & $\rightarrow$ & $($\textsf{and} \textit{Fml} \textit{Fml}$)$
$|$ $($\textsf{or} \textit{Fml} \textit{Fml}$)$
$|$ $($\textsf{forall} $($\scheme|tie| \textit{Nom} \textit{Fml}$))$
$|$ \textit{Lit}
\\
\textit{Lit} & $\rightarrow$ & $($\textsf{pos} \textit{Term}$)$
$|$ $($\textsf{neg} \textit{Term}$)$
\\
\textit{Term} & $\rightarrow$ &
$($\textsf{var} \textit{Nom}$)$
$|$ $($\textsf{app} \textit{Symbol} \textit{Term}*$)$ \\
\end{tabular}
\end{center}
%\vspace{-.2cm}
This scheme has been chosen carefully to allow unification to compare
literals. In particular, the tags on variables \textit{must} be
discarded before literals are compared. Consider the two non-ground
literals \mbox{\schemeresult|`(not (f Xvar))|} and
\mbox{\schemeresult|`(f (p Yvar))|}. These literals are complementary:
the negation of one unifies with the other, associating $x$ with
\mbox{\schemeresult|`(p Yvar)|}. When we apply our tagging scheme,
however, these literals become \mbox{\schemeresult|`(neg (app f (var-tag Xvar)))|} and \mbox{\schemeresult|`(pos (app f (app p (var-tag Yvar))))|}, respectively, and are no longer complementary: their
subexpressions \mbox{\schemeresult|`(var-tag Xvar)|} and
\mbox{\schemeresult|`(app p (var-tag Yvar))|} do not unify. To avoid this
problem, our substitution relation discards the \textsf{var} tag when
it replaces noms with logic variables.
\begin{figure}[H]
%\vspace{-.2in}
\hspace{-.1in}
\begin{tabular}{l l}
\begin{minipage}{1.8in}
%\schemeinput{code/alphatapleft}
\begin{schemedisplay}
(define proveo
(lambda (fml unexp lits env proof)
(match-e `(,fml ,proof)
(`((and-tag ,e1 ,e2) (conj . ,prf))
(proveo e1 `(,e2 . ,unexp)
lits env prf))
(`((or-tag ,e1 ,e2) (split ,prf1 ,prf2))
(proveo e1 unexp lits env prf1)
(proveo e2 unexp lits env prf2))
(`((forall (tie-tag ,@a ,body)) (univ . ,prf))
(exist (x unexp1)
(appendo unexp `(,fml) unexp1)
(proveo body unexp1 lits
`((,a . ,x) . ,env) prf)))
(`(,fml ,proof)
(exist (lit)
(subst-lito fml env lit)
(conde
((== `(close) proof)
(match-e `(,lit ,neg)
(`((pos ,tm) (neg ,tm)))
(`((neg ,tm) (pos ,tm))))
(membero neg lits))
((exist (next unexp1 prf)
(== `(,next . ,unexp1) unexp)
(== `(savefml . ,prf) proof)
(proveo next unexp1 `(,lit . ,lits)
env prf)))))))))
\end{schemedisplay}
%\vspace{1.3cm}
\end{minipage}
&
\hspace{-0.3in}
\begin{minipage}{1.8in}
%\schemeinput{code/alphatapright}
\begin{schemedisplay}
(define appendo
(lambda-e (ls s out)
(`(() ,s ,s))
(`((,a . ,d) ,s (,a . ,r))
(appendo d s r))))
(define subst-lito
(lambda-e (fml env out)
(`((pos ,l) ,env (pos ,r))
(subst-termo l env r))
(`((neg ,l) ,env (neg ,r))
(subst-termo l env r))))
(define subst-termo
(lambda-e (fml env out)
(`((var-tag ,a) ,env ,out)
(lookupo a env out))
(`((app ,f . ,d) ,env (app ,f . ,r))
(subst-term* d env r))))
(define subst-term*
(lambda-e (tm* env out)
(`(() __ ()))
(`((,e1 . ,e2) ,env (,r1 . ,r2))
(subst-termo e1 env r1)
(subst-term* e2 env r2))))
(define membero
(lambda (x ls)
(exist (a d)
(== `(,a . ,d) ls)
(conde
((== a x))
((membero x d))))))
\end{schemedisplay}
%\vspace{1.0cm}
\end{minipage}
\end{tabular}
\caption{Final definition of \alphatap
\label{fig:ending}}
%\vspace{-.3in}
\end{figure}
Given our new tagging scheme, we can easily rewrite our substitution
relation without the use of \scheme|match-a|. We simply follow the
production rules of the grammar, defining a relation to recognize
each.
Finally, we modify \scheme|proveo| to take advantage of the same tags.
We also add a \scheme|proof| argument to \scheme|proveo|. We call
this version of the prover \alphatap, and present its definition in
Figure~\ref{fig:ending}. It is declarative, since we have eliminated
the use of \scheme|copy-termo| and every use of \scheme|match-a|. In
addition to being a sound and complete theorem prover for first-order
logic, \alphatapsp can now generate valid first-order theorems.
\section{Performance}\label{performance}
\enlargethispage{1\baselineskip} %
Like the original \leantap, \alphatapsp can prove many theorems in
first-order logic. Because it is declarative, \alphatapsp is generally
slower at proving ground theorems than mK\leantap, which is slower
than the original \leantap. Figure~\ref{fig:performance} presents a
summary of \alphatap's performance on the first 46 of Pelletier's 75
problems~\cite{pelletier1986sfp}, showing it to be roughly twice as
slow as mK\leantap.
These performance numbers suggest that while there is a penalty to be
paid for declarativeness, it is not so severe as to cripple the
prover. The advantage mK\leantapsp enjoys over the original \leantapsp
in Problem 34 is due to \alphakanren's interleaving search strategy;
as the result for mK\leantapsp shows, the original \leantapsp is faster
than \alphatapsp for any given search strategy.
Many automated provers now use the TPTP problem
library~\cite{stucliffe1994tpl} to assess performance. Even though it
is faster than \alphatap, \leantapsp solves few of the TPTP
problems. The Pelletier Problems, on the other hand, fall into the
class of theorems \leantapsp was designed to prove, and so we feel
they provide a better set of tests for the comparison between
\leantapsp and \alphatap.
\begin{figure}[h]
%\vspace{-.2in}
\begin{centering}
\begin{tabular}{l l}
\hspace{-.1in}
\begin{minipage}{2.7in}
\begin{tabular}{| r | c | c | c | } %c |
\hline
\thinspace \thinspace \# & \thinspace \leantap \thinspace &
mK\leantap \thinspace & \thinspace \alphatap
\thinspace %& \thinspace \alphatap$\!_G$\footnotemark[5]$^,$\footnotemark[7]
\\
\hline
1 & 0.1 & 0.7 & 2.0 \\
2 & 0.0 & 0.1 & 0.3 \\
3 & 0.0 & 0.2 & 0.5 \\
4 & 0.0 & 1.0 & 1.7 \\
5 & 0.1 & 1.2 & 2.5 \\
6 & 0.0 & 0.1 & 0.2 \\
7 & 0.0 & 0.1 & 0.2 \\
8 & 0.0 & 0.3 & 0.8 \\
9 & 0.1 & 4.3 & 9.7 \\
10 & 0.3 & 5.5 & 10.2 \\
11 & 0.0 & 0.3 & 0.6 \\
12 & 0.6 & 17.7 & 31.9 \\
13 & 0.1 & 3.7 & 8.2 \\
14 & 0.1 & 4.2 & 9.7 \\
15 & 0.0 & 0.8 & 1.9 \\
16 & 0.0 & 0.2 & 0.6 \\
17 & 1.1 & 9.2 & 18.1 \\
18 & 0.1 & 0.5 & 1.2 \\
19 & 0.3 & 15.1 & 33.5 \\
20 & 0.5 & 8.1 & 12.7 \\
21 & 0.4 & 22.1 & 38.7 \\
22 & 0.1 & 3.4 & 6.4 \\
23 & 0.1 & 2.5 & 5.4 \\
\hline
\end{tabular}
\end{minipage}
&
\begin{minipage}{2.5in}
\begin{tabular}{| r | c | c | c |} %c |
\hline
\# & \thinspace \leantap \thinspace &
mK\leantap \thinspace & \thinspace \alphatap
\thinspace %& \thinspace \alphatap$\!_G$\footnotemark[5]$^,$\footnotemark[7]
\\
\hline
24 & 1.7 & 31.9 & 60.3 \\
25 & 0.2 & 7.5 & 14.1 \\
26 & 0.8 & 130.9 & 187.5 \\
27 & 2.3 & 40.4 & 79.3 \\
28 & 0.3 & 19.1 & 29.6 \\
29 & 0.1 & 27.9 & 57.0 \\
30 & 0.1 & 4.2 & 9.6 \\
31 & 0.3 & 13.2 & 23.1 \\
32 & 0.2 & 23.9 & 42.4 \\
33 & 0.1 & 15.9 & 39.2 \\
34 & 199129.0 & 7272.9 & 8493.5 \\
35 & 0.1 & 0.5 & 1.1 \\
36 & 0.2 & 6.7 & 12.4 \\
37 & 0.8 & 123.3 & 169.2 \\
38 & 8.9 & 4228.8 & 8363.8 \\
39 & 0.0 & 1.1 & 2.8 \\
40 & 0.2 & 8.1 & 19.2 \\
41 & 0.1 & 6.9 & 17.0 \\
42 & 0.4 & 15.0 & 32.1 \\
43 & 43.2 & 668.4 & 1509.6 \\
44 & 0.3 & 15.1 & 35.7 \\
45 & 3.4 & 145.3 & 239.7 \\
46 & 7.7 & 505.5 & 931.2 \\
\hline
\end{tabular}
\end{minipage}
\end{tabular}
\caption{Performance of \leantap, mK\leantap, and \alphatapsp on the
first 46 Pelletier Problems.
All times are in milliseconds, averaged over 100 trials.
All tests were run \mbox{under} Debian
Linux on an IBM Thinkpad
X40 with a 1.1GHz Intel Pentium-M processor and 768MB RAM.
\leantapsp tests were run under SWI-Prolog 5.6.55;
mK\leantapsp and \alphatapsp tests were run under Ikarus Scheme
0.0.3+.
\label{fig:performance}}
\end{centering}
%\vspace{-.2in}
\end{figure}
\section{Applicability of These Techniques}
To avoid the use of \scheme|copy-termo|, we have represented
universally quantified variables with noms rather than logic
variables, allowing us to perform substitution instead of copying. To
eliminate \scheme|match-a|, we have enhanced the tagging scheme for
representing formulas.
Both of these transformations are broadly applicable. When
\scheme|match-a| is used to handle overlapping clauses, a carefully
crafted tagging scheme can often be used to eliminate
overlapping. When terms must be copied, substitution can often be used
instead of \scheme|copy-termo|---in the case of \alphatap, we use a
combination of nominal unification and substitution.
\chapter{Topic\label{chapter:thema}}
\lhead{Topic}
\begin{refsection}
\chapterauthor{Hans Muster and Heiri Meier}
\printbibliography[heading=subbibliography]
\end{refsection}
\chapter{Nestable Engines}\label{nestable-engines}
Our implementation of ferns in Chapter~\ref{fernsimpl} requires
nestable engines \cite{RDybvi89,hieb94subcontinuations}, which we
present here with minimal comment. The implementation uses a global
variable, \scheme|state|, which holds two values: the number of ticks
available to the currently running engine or \scheme|#f| representing
infinity; and a continuation. \scheme|make-engine| makes an engine out
of a thunk. \scheme|engine| is a macro that makes an engine from an
expression. \scheme|timed-lambda| is like \scheme|lambda| except that
it passes its body as a thunk to \scheme|expend-tick-to-call|, which
ensures a tick is spent before the body is evaluated and passes the
suspended body to the continuation if no ticks are available. Programs
that use this embedding of nestable engines (and by extension our
embedding of \scheme|frons|) should not use \scheme|call/cc|, because
the uses of \scheme|call/cc| in the nestable engines implementation
may interact with other uses in ways that are difficult for the
programmer to predict.
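As a loose Python analogy only (it does not model nesting or the \scheme|call/cc|-based implementation presented here, and every name in it is invented for the example), the following sketch shows the tick-budget idea using generators: an ``engine'' yields once per tick, and running it either completes within its budget or returns the suspended generator for later resumption.
\begin{verbatim}
def run_engine(gen, ticks):
    try:
        for _ in range(ticks):
            next(gen)                  # spend one tick
        return ("suspended", gen)      # budget exhausted before completion
    except StopIteration as done:
        return ("done", done.value)    # engine finished within its budget

def countdown(n):
    while n > 0:
        yield                          # one tick per step
        n -= 1
    return "finished"

result = run_engine(countdown(5), 3)   # -> ("suspended", <generator>)
if result[0] == "suspended":
    print(run_engine(result[1], 10))   # -> ('done', 'finished')
\end{verbatim}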
%\newpage
%\enlargethispage{30pt}
\schemedisplayspace
\schemeinput{fernscode/coaxappendix.ss}
\schemeinput{fernscode/appendix-extra.ss}
\section{\module{array} ---
Efficient arrays of uniformly typed numeric values.}
\declaremodule{builtin}{array}
\modulesynopsis{Efficient arrays of uniformly typed numeric values.}
\index{arrays}
This module defines a new object type which can efficiently represent
an array of basic values: characters, integers, floating point
numbers. Arrays are sequence types and behave very much like lists,
except that the type of objects stored in them is constrained. The
type is specified at object creation time by using a \dfn{type code},
which is a single character. The following type codes are defined:
\begin{tableiii}{c|l|c}{code}{Type code}{C Type}{Minimum size in bytes}
\lineiii{'c'}{character}{1}
\lineiii{'b'}{signed int}{1}
\lineiii{'B'}{unsigned int}{1}
\lineiii{'h'}{signed int}{2}
\lineiii{'H'}{unsigned int}{2}
\lineiii{'i'}{signed int}{2}
\lineiii{'I'}{unsigned int}{2}
\lineiii{'l'}{signed int}{4}
\lineiii{'L'}{unsigned int}{4}
\lineiii{'f'}{float}{4}
\lineiii{'d'}{double}{8}
\end{tableiii}
The actual representation of values is determined by the machine
architecture (strictly speaking, by the \C{} implementation). The actual
size can be accessed through the \member{itemsize} attribute. The values
stored for \code{'L'} and \code{'I'} items will be represented as
Python long integers when retrieved, because Python's plain integer
type cannot represent the full range of \C{}'s unsigned (long) integers.
The module defines the following function and type object:
\begin{funcdesc}{array}{typecode\optional{, initializer}}
Return a new array whose items are restricted by \var{typecode}, and
initialized from the optional \var{initializer} value, which must be a
list or a string. The list or string is passed to the new array's
\method{fromlist()} or \method{fromstring()} method (see below) to add
initial items to the array.
\end{funcdesc}
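For example, an illustrative interactive session (exact output formatting may vary between Python versions):
\begin{verbatim}
>>> from array import array
>>> a = array('l', [1, 2, 3])      # typecode 'l', initialized from a list
>>> a.append(4)
>>> a.typecode, a.tolist()
('l', [1, 2, 3, 4])
>>> array('c', 'hello')            # character array initialized from a string
array('c', 'hello')
\end{verbatim}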
\begin{datadesc}{ArrayType}
Type object corresponding to the objects returned by
\function{array()}.
\end{datadesc}
Array objects support the following data items and methods:
\begin{memberdesc}[array]{typecode}
The typecode character used to create the array.
\end{memberdesc}
\begin{memberdesc}[array]{itemsize}
The length in bytes of one array item in the internal representation.
\end{memberdesc}
\begin{methoddesc}[array]{append}{x}
Append a new item with value \var{x} to the end of the array.
\end{methoddesc}
\begin{methoddesc}[array]{buffer_info}{}
Return a tuple \code{(\var{address}, \var{length})} giving the current
memory address and the length in bytes of the buffer used to hold
array's contents. This is occasionally useful when working with
low-level (and inherently unsafe) I/O interfaces that require memory
addresses, such as certain \cfunction{ioctl()} operations. The returned
numbers are valid as long as the array exists and no length-changing
operations are applied to it.
\end{methoddesc}
\begin{methoddesc}[array]{byteswap}{}
``Byteswap'' all items of the array. This is only supported for
integer values. It is useful when reading data from a file written
on a machine with a different byte order.
\end{methoddesc}
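For example, assuming the common case of two-byte \code{'h'} items (an illustrative session):
\begin{verbatim}
>>> from array import array
>>> a = array('h', [1, 256])   # 0x0001 and 0x0100
>>> a.byteswap()
>>> a.tolist()
[256, 1]
\end{verbatim}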
\begin{methoddesc}[array]{fromfile}{f, n}
Read \var{n} items (as machine values) from the file object \var{f}
and append them to the end of the array. If less than \var{n} items
are available, \exception{EOFError} is raised, but the items that were
available are still inserted into the array. \var{f} must be a real
built-in file object; something else with a \method{read()} method won't
do.
\end{methoddesc}
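The following illustrative session writes an array to a file with the \method{tofile()} method described below and reads it back with \method{fromfile()}; the file name is arbitrary:
\begin{verbatim}
>>> from array import array
>>> a = array('d', [1.0, 2.0, 3.5])
>>> f = open('data.bin', 'wb')
>>> a.tofile(f)
>>> f.close()
>>> b = array('d')
>>> f = open('data.bin', 'rb')
>>> b.fromfile(f, 3)
>>> f.close()
>>> b.tolist()
[1.0, 2.0, 3.5]
\end{verbatim}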
\begin{methoddesc}[array]{fromlist}{list}
Append items from the list. This is equivalent to
\samp{for x in \var{list}:\ a.append(x)}
except that if there is a type error, the array is unchanged.
\end{methoddesc}
\begin{methoddesc}[array]{fromstring}{s}
Appends items from the string, interpreting the string as an
array of machine values (i.e. as if it had been read from a
file using the \method{fromfile()} method).
\end{methoddesc}
\begin{methoddesc}[array]{insert}{i, x}
Insert a new item with value \var{x} in the array before position
\var{i}.
\end{methoddesc}
\begin{methoddesc}[array]{read}{f, n}
\deprecated {1.5.1}
{Use the \method{fromfile()} method.}
Read \var{n} items (as machine values) from the file object \var{f}
and append them to the end of the array. If less than \var{n} items
are available, \exception{EOFError} is raised, but the items that were
available are still inserted into the array. \var{f} must be a real
built-in file object; something else with a \method{read()} method won't
do.
\end{methoddesc}
\begin{methoddesc}[array]{reverse}{}
Reverse the order of the items in the array.
\end{methoddesc}
\begin{methoddesc}[array]{tofile}{f}
Write all items (as machine values) to the file object \var{f}.
\end{methoddesc}
\begin{methoddesc}[array]{tolist}{}
Convert the array to an ordinary list with the same items.
\end{methoddesc}
\begin{methoddesc}[array]{tostring}{}
Convert the array to an array of machine values and return the
string representation (the same sequence of bytes that would
be written to a file by the \method{tofile()} method.)
\end{methoddesc}
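For example, starting from an empty character array,
\method{fromstring()} and \method{tostring()} act as (roughly) inverse
operations:
\begin{verbatim}
from array import array

a = array('c')
a.fromstring('hello')
print a.tostring()              # prints: hello
print a                         # prints: array('c', 'hello')
\end{verbatim}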
\begin{methoddesc}[array]{write}{f}
\deprecated {1.5.1}
{Use the \method{tofile()} method.}
Write all items (as machine values) to the file object \var{f}.
\end{methoddesc}
When an array object is printed or converted to a string, it is
represented as \code{array(\var{typecode}, \var{initializer})}. The
\var{initializer} is omitted if the array is empty; otherwise it is a
string if the \var{typecode} is \code{'c'}, and a list of numbers
otherwise. The string representation is guaranteed to convert back to
an array with the same type and value when evaluated using reverse
quotes (\code{``}). Examples:
\begin{verbatim}
array('l')
array('c', 'hello world')
array('l', [1, 2, 3, 4, 5])
array('d', [1.0, 2.0, 3.14])
\end{verbatim}
\begin{seealso}
\seemodule{struct}{Packing and unpacking of heterogeneous binary data.}
\end{seealso}
\section{Cigánypecsenye}
\label{ciganypecsenye}
Time: 1 \( \frac{1}{2} \) hours (10 minutes prep, 45 minutes making fries, 30+ minutes making bacon, 5 minutes cooking pork chops)
Serves: 4
\begin{multicols}{2}
\subsection*{Ingredients}
\begin{itemize}
\item 4 thin pork chops, bone in or out (about 1 \( \frac{1}{2} \) pounds)
\item 6 cloves garlic, diced
\item 1 \( \frac{1}{2} \) cups oil (vegetable works)
\item 4 teaspoons Hungarian spicy paprika
	\item 2 teaspoons marjoram
	\item 4 teaspoons ground mustard seed
	\item 2 teaspoons coarse salt
\item 1 teaspoon fresh cracked pepper
\item 4 servings of French Fries (Recipe to be added someday)
\item 4 slices of Szalonna (Hungarian style bacon), very thick cut, with rind
\item Additional Hungarian spicy paprika to taste
\end{itemize}
\subsection*{Hardware}
\begin{itemize}
\item Skillet
\item Medium mixing bowl
\end{itemize}
\clearpage
\subsection*{Instructions}
\begin{enumerate}
	\item Put the 6 cloves of garlic, 1 \( \frac{1}{2} \) cups oil, 4 teaspoons of paprika, 2 teaspoons marjoram, 4 teaspoons ground mustard seed, 2 teaspoons salt, and 1 teaspoon of pepper into the mixing bowl.
	\item Mix thoroughly.
\item Place the pork chops into the bowl, and make sure that there is oil and spices on all sides, and covering the pork chops.
\item Allow the pork chops to soak in the spice-oil while preparing bacon, occasionally spooning spices from the bottom back on top.
\item Take the 4 slices of thick cut szalonna, ensure that they have slits down the non-rind side, about every 1 inch.
\item Place the slices in the skillet at low heat.
	\item Carefully raise the heat a small amount (between \( \frac{1}{2} \) and 1 notch) every three minutes, while flipping the szalonna.
\item Once the bacon is somewhat crispy, remove and set aside.
	\item Cook the pork chops (probably about 2 at a time) for about 2 minutes per side in the fat.
\item Place all pork chops in a dish in the oven, covered, at low heat.
\item Fry the french fries in the fat.
\item Plate the fries, then place a pork chop on each pile.
\item Sprinkle additional paprika to taste over the pork chop.
	\item Place a szalonna slice on each pork chop.
\end{enumerate}
\subsection*{Notes}
\begin{itemize}
\item Based partly on a trip to Budapest, partly on this recipe: \url{http://www.nosalty.hu/recept/egyszeru-ciganypecsenye}, with the following differences:
\begin{itemize}
\item Less oil is used.
\item More paprika is added after cooking.
		\item Hard for me to say, as my Hungarian is basically nil.
\end{itemize}
\item FOR NEXT TIME: Try instead of ground mustard seed, cooking the chops without paprika, then spreading mustard and paprika on after cooking. This may be more authentic...
\item "Cigánypecsenye" translates to "Gypsy Steak"
\item "Szallona" is just Hungarian for "bacon", however when I use that word in this recipe I am referring to a particular style of bacon I cannot hardly find in the states. It it still made from the pork belly, but is basically entirely just fat and rind, with none of the bits of meat strips typical in American bacon. I recommend googling "ciganypecsenye bacon" for examples. Regular American bacon can be used instead, made similar to the \nameref{crispyBacon} recipe, but make it less crispy, so it cute with teh pork chop easier.
\item The Szallona is cut along an edge to allow it to curl while cooking, and get more crispy edges, please see pictures online for examples.
\item Be very careful cooking the mostly-fat bacon, as it can burn very quickly.
	\item If you manage to use the mostly-fat style szalonna, I don't recommend eating the rind; instead eat each little slice off the rind.
	\item Concerning paprika: Hungarian paprika is (I'm quite convinced now) the best paprika in the world. While you can find it online, it's no substitute for the real thing. If you do not know where to get good paprika, please befriend a Hungarian (all of whom I've met are extraordinarily nice people) and ask them.
	\item I recommend "spicy" paprika, which is not terribly spicy really; it just means don't use "sweet" paprika, which is also not terribly sweet. They just taste different.
\end{itemize}
\end{multicols}
\clearpage
\section{Third Phase: Validation}
This segment of the document describes the results of the third and final phase of the project.
Unimplemented functions and bugs from the second phase are reviewed and solutions explained.
In addition test cases and their results are described, to validate the correctness of the implementation.
At the end the whole project and its success is critically evaluated.
\clearpage
\subsection{Testing}
In order to validate the implemented algorithms, the following tests were written.
\subsubsection{Bad Channel Filtering}
To eliminate the influence of bad channels (\ref{badChannel}), they are assumed to have an infinite distance to every other vertex. The test checks if all entries of columns that were identified as bad channels were set to infinity.\\
The success of this test showed that the function \textit{GeometryInfo::filterBadChannels} indeed overwrites all relevant entries.
\subsubsection{Empty Inputs}
Because robustness is an important criterion for the program, empty inputs must not lead to a crash of the software.
\begin{aims}
\item[\hspace*{11mm} Sensor Projecting] If the function for sensor projecting gets passed an empty vector of sensor coordinates, it should return an empty vector of vertex IDs. \\
The success of this test showed that the function \textit{GeometryInfo::projectSensors} indeed does not crash upon empty inputs.
\end{aims}
\begin{aims}
\item[\hspace*{11mm} SCDC] If the function for surface constrained distance calculations gets passed an empty subset of vertices, it should calculate the full distance table for the passed mesh. This is verified by comparing dimensions of the output matrix. \\
The success of this test showed that the function \textit{GeometryInfo::scdc} indeed returns the full distance table.
\end{aims}
\begin{aims}
\item[\hspace*{11mm} Weight Matrix Creation] If the function for creating a weight matrix gets passed an empty vector of sensor indices, it should return an empty matrix. This is verified by testing the size of the output object.\\
If the function gets passed an empty distance table, it should give out a warning and return a null pointer.
This is verified by testing if the output really is a null pointer.\\
The success of this test showed that the function \textit{GeometryInfo::createInterpolationMat} indeed does not crash upon empty inputs and returns empty outputs.
\end{aims}
\subsubsection{Matrix Dimensions}
Because the subsequent live interpolation is achieved by multiplying a vector of sensor signals with the precalculated interpolation matrix, matching dimensions are crucial.\\
The number of rows of the output matrix of the SCDC must be equal to the number of vertices of the used mesh, and the number of columns must be equal to the number of projected sensors. \\
The output matrix of the weight matrix creation must have the same dimensions as the passed distance table.\\
The result of a multiplication of sensor signals and the interpolation matrix must be a column vector that contains as many values as the number of rows inside the interpolation matrix.\\
The success of these tests showed that the program only produces matching dimensions and thus does not create arithmetic exceptions.
\subsubsection{Coefficients of Interpolation Matrix}
Since an interpolated vector of signals must not amplify the signal itself, the coefficients of a row inside the interpolation matrix must add up to one.
This is verified by adding all entries of each row and checking if the result is one.\\
The success of this test showed that the calculated interpolation matrix indeed does not amplify or weaken the interpolated signal.
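In symbols, writing $W$ for the interpolation matrix and $s$ for the vector of sensor signals (this notation is chosen here for illustration and does not appear in the project code), the interpolation step and the verified constraint read
\[
	\hat{s} = W s \qquad \mbox{and} \qquad \sum_{j} w_{ij} = 1 \quad \mbox{for every row } i,
\]
so a spatially constant sensor signal is reproduced exactly and the interpolated signal is neither amplified nor weakened.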
\documentclass[aps,rmp,reprint,superscriptaddress,notitlepage,10pt]{revtex4-1}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath,amsthm,amsfonts,amssymb,amscd}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{enumerate}
\usepackage[final]{hyperref}
\graphicspath{{../figs/}}
\begin{document}
\title{SeqSpace: positional information of scRNAseq data}
\author{Nicholas Noll}
\affiliation{Kavli Institute for Theoretical Physics, University of California, Santa Barbara \looseness=-5}
\author{Madhav Mani}
\affiliation{Department of Applied Math, Northwestern University \looseness=-5}
\author{Boris Shraiman}
\affiliation{Kavli Institute for Theoretical Physics, University of California, Santa Barbara \looseness=-5}
\begin{abstract}
\end{abstract}
\maketitle
\section{Introduction}
\section{Data normalization}
\section{Spatial inference using BDTNP}
\section{De novo manifold learning}
\section{Discussion}
\section{Supplementary Information}
\section*{Acknowledgement}
We are grateful to Eric Wieschaus, Sebastian Streichan, and Eyal Karzbrun for stimulating discussions.
This study was funded by [].
\bibliography{cite}{}
\end{document}
% language=uk
\startcomponent mk-luafitsin
\environment mk-environment
\chapter{How \LUA\ fits in}
\subject{introduction}
Here I will discuss a few of the experiments that drove the
development of \LUATEX. It describes the state of affairs around
the time that we were preparing for \TUG\ 2006. This development
was pretty demanding for Taco and me but also much fun. We were in
a kind of permanent Skype chat session, with binaries flowing in
one direction and \TEX\ and \LUA\ code the other way. By gradually
replacing (even critical) components of \CONTEXT\ we had a real
test bed and torture tests helped us to explore and debug at the
same time. Because Taco uses \LINUX\ as platform and I mostly use
\MSWINDOWS, we could investigate platform dependent issues
conveniently. While reading this text, keep in mind that this is
just the beginning of the game.
I will not provide sample code here. When possible, the \MKIV\
code transparently replaces \MKII\ code and users will seldom
notice that something happens in a different way. Of course the
potential is there and future extensions may be unique to \MKIV.
\subject{compatibility}
The first experiments, already conducted with the experimental
versions involved runtime conversion of one type of input into
another. An example of this is the (TI) calculator math input
handler that converts a rather natural math sequence into \TEX\
and feeds that back into \TEX. This mechanism eventually will
evolve into a configurable math input handler. Such applications
are unique to \MKIV\ code and will not be backported to \MKII. The
question is where downward compatibility will become a problem. We
don't expect many problems, apart from occasional bugs that result
from splitting the code base, mostly because new features will not
affect older functionality. Because we have to reorganize the code
base a bit, we also use this opportunity to start making a variant
of \CONTEXT\ which consists of building blocks: \METATEX. This is
less interesting for the average user, but may be of interest for
those using \CONTEXT\ in workflows where only part of the
functionality is needed.
\subject{metapost}
Of course, when I experiment with such new things, I cannot let
\METAPOST\ leave untouched. And so, in the early stage of \LUATEX\
development I decided to play with two \METAPOST\ related
features: conversion and runtime processing.
Conversion from \METAPOST\ output to \PDF\ is currently done in
pure \TEX\ code. Apart from convenience, this has the advantage
that we can let \TEX\ take care of font inclusions. The tricky
part of this conversion is that \METAPOST\ output has some weird
aspects, like \DVIPS\ specific linewidth snapping. Another nasty
element in the conversion is that we need to transform paths when
pens are used. Anyhow, the converter has reached a rather stable
state by now.
One of the ideas with \METAPOST\ version 1\high{+} is that we will
have an alternative output mode. In the perspective of \LUATEX\ it
makes sense to have a \LUA\ output mode. Whatever converter we
use, it needs to deal with \METAFUN\ specials. These are
responsible for special features like transparency, graphic
inclusion, shading, and more. Currently we misuse colors to signal
such features, but the new pre|/|post path hooks permit more
advanced implementations. Experimenting with such new features is
easier in \LUA\ than in \TEX.
The \MKIV\ converter is a multi||pass converter. First we clean up the
\METAPOST\ output, next we convert the \POSTSCRIPT\ code into \LUA\
calls. We assume that this \LUA\ code eventually can be output directly
from \METAPOST. We then evaluate this converted \LUA\ blob, which results
in \TEX\ commands. Think of:
\starttyping
1.2 setlinejoin
\stoptyping
turned into:
\starttyping
mp.setlinejoin(1.2)
\stoptyping
becoming:
\starttyping
\PDFcode{1.2 j}
\stoptyping
which is, when the \PDFTEX\ driver is active, equivalent to:
\starttyping
\pdfliteral{1.2 j}
\stoptyping
Of course, when paths are involved, more things happen behind the
scenes, but in the end an \type {mp.path} enters the \LUA\
machinery.
When the \MKIV\ converter reached a stable state, tests
demonstrated that the code was up to 20\% slower than the pure
\TEX\ alternative on average graphics, but faster when many
complex path transformations (due to penshapes) need to be done.
This slowdown was due to the cleanup (using expressions) and
intermediate conversion. Because Taco develops \LUATEX\ as well as
maintains and extends \METAPOST, we conducted experiments that
combine features of these programs. As a result of this, shortcuts
found their way into the \METAPOST\ output.
\useMPlibrary[mis]
\placefigure
[]
[fig:mptopdf]
{converter test figure}
{\scale[width=\hsize]{\useMPgraphic{mptopdf-test}}}
Cleaning up the \METAPOST\ output using \LUA\ expressions takes
a relatively long time. However, starting with version 0.970
\METAPOST\ uses a preamble, which permits not only short commands,
but also gets rid of the weird linewidth and filldraw related
\POSTSCRIPT\ constructs. The moderately complex graphic that we
use for testing (\in {figure} [fig:mptopdf]) takes over 16 seconds
when converted 250 times. When we enable shortcuts we can avoid
part of the cleanup and runtime goes down to under 7.5 seconds.
This is significantly faster than the \MKII\ code. We did experiments
with simulated \LUA\ output from \METAPOST\ and then the \MKIV\
converter really flies. The values on Taco's system are given
between parenthesis.
\starttabulate[|||||]
\HL
\NC \bf prologues/mpprocset \NC \bf 1/0 \NC \bf 1/1 \NC \bf 2/0 \NC \bf 2/1 \NC \NR
\HL
\NC \MKII \NC ~8.5 (~5.7) \NC ~8.0 (5.5) \NC ~8.8 \NC ~8.5 \NC \NR
\NC \MKIV \NC 16.1 (10.6) \NC ~7.2 (4.5) \NC 16.3 \NC ~7.4 \NC \NR
\HL
\stoptabulate
The main reason for the huge difference in the \MKIV\ times is
that we do a rigorous cleanup of the older \METAPOST\ output
in order to avoid the messy (but fast) code that we use in
the \MKII\ converter. Think of:
\starttyping
0 0.5 dtransform truncate idtransform setlinewidth pop
closepath gsave fill grestore stroke
\stoptyping
In the \MKII\ converter, we push every number or keyword on a
stack and use keywords as trigger points. In the \MKIV\ code
we convert the stack based \POSTSCRIPT\ calls to \LUA\
function calls. Lines as shown are converted to single calls
first. When \type {prologues} is set to~2, such lines no longer
show up and are replaced by simple calls accompanied by
definitions in the preamble. Not only that, instead of verbose
keywords, one or two character shortcuts are used. This means
that the \MKII\ code can be faster when procsets are used
because shorter strings end up in the stack and comparison
happens faster. On the other hand, when no procsets are used,
the runtime is longer because of the larger preamble.
Because the converter is used outside \CONTEXT\ as well, we
support all combinations in order not to get error messages, but
the converter is supposed to work with the following settings:
\starttyping
prologues := 1 ;
mpprocset := 1 ;
\stoptyping
We don't need to set \type {prologues} to~2 (font encodings
in file) or~3 (also font resources in file). So, in the end, the
comparison in speed comes down to 8.0 seconds for \MKII\ code and
7.2 seconds for the \MKIV\ code when using the latest greatest
\METAPOST. When we simulate \LUA\ output from \METAPOST, we end
up with 4.2 seconds runtime and when \METAPOST\ could produce the
converter's \TEX\ commands, we need only 0.3 seconds for embedding
the 250 instances. This includes \TEX\ taking care of handling the
specials, some of which demand building moderately complex \PDF\
data structures.
But, conversion is not the only factor in convenient \METAPOST\
usage. First of all, runtime \METAPOST\ processing takes time. The
actual time spent on handling embedded \METAPOST\ graphics is also
dependent on the speed of starting up \METAPOST, which in turn
depends on the size of the \TEX\ trees used: the bigger these are,
the more time \KPSE\ spends on loading the \type {ls-R} databases.
Eventually this bottleneck may go away when we have \METAPOST\ as
a library. (In \CONTEXT\ one can also run \METAPOST\ between runs.
Which method is faster, depends on the amount and complexity of
the graphics.)
Another factor in dealing with \METAPOST, is the usage of text in
a graphic (\type {btex}, \type {textext}, etc.). Taco Hoekwater,
Fabrice Popineau and I did some experiments with a persistent
\METAPOST\ session in the background in order to simulate a
library. The results look very promising: the overhead of embedded
\METAPOST\ graphics goes to nearly zero, especially when we also
let the parent \TEX\ job handle the typesetting of texts. A side
effect of these experiments was a new mechanism in \CONTEXT\ (and
\METAFUN) where \TEX\ did all typesetting of labels, and
\METAPOST\ only worked with an abstract representation of the
result. This way we can completely avoid nested \TEX\ runs (the
ones triggered by \METAPOST). This also works ok in \MKII\ mode.
Using a persistent \METAPOST\ run and piping data into it is not
the final solution if only because the terminal log becomes messed
up too much, and also because intercepting errors is real messy.
In the end we need a proper library approach, but the experiments
demonstrated that we needed to go this way: handling hundreds of
complex graphics that hold typeset paragraphs (being slanted and
rotated and more by \METAPOST), took mere seconds compared to
minutes when using independent \METAPOST\ runs for each job.
\subject{characters}
Because \LUATEX\ is \UTF\ based, we need a different way to deal with
input encoding. For this purpose there are callbacks that intercept
the input and convert it as needed. For \CONTEXT\ this means that the
regime related modules get \LUA\ based counterparts. As a prelude to
advanced character manipulations, we already load extensive unicode
and conversion tables, with the benefit of being able to do case
handling in \LUA.
The character tables are derived from unicode tables and \MKII\
\CONTEXT\ data files and generated using \MTXTOOLS. The main
character table is pretty large, and this made us experiment a bit
with efficiency. It was in this stage that we realized that it
made sense to use precompiled \LUA\ code (using \type {luac}).
During format generation we let \CONTEXT\ keep track of used \LUA\
files and compiled them on the fly. For a production run, the
compiled files were loaded instead.
Because at that stage \LUATEX\ was already a merge between
\PDFTEX\ and \ALEPH, we had to deal with pretty large format
files. About that moment the \CONTEXT\ format with the english
user interface amounted to:
\starttabulate[|c|c|c|c|c|]
\NC \bf date \NC \bf luatex \NC \bf pdftex \NC \bf xetex \NC \bf aleph \NC \NR
\NC 2006-09-18 \NC 9 552 042 \NC 7 068 643 \NC 8 374 996 \NC 7 942 044 \NC \NR
\stoptabulate
One reason for the large size of the format file is that the
memory footprint of a 32 bit \TEX\ is larger than that of good old
\TEX, even with some of the clever memory allocation techniques as
used in \LUATEX. After some experiments where size and speed were
measured Taco decided to compress the format using a level~3 \ZIP\
compression. This brilliant move lead to the following size:
\starttabulate[|c|c|c|c|c|]
\NC \bf date \NC \bf luatex \NC \bf pdftex \NC \bf xetex \NC \bf aleph \NC \NR
\NC 2006-10-23 \NC 3 135 568 \NC 7 095 775 \NC 8 405 764 \NC 7 973 940 \NC \NR
\stoptabulate
The first zipped versions were smaller (around 2.3 meg), but in
the meantime we moved the \LUA\ code into the format and the
character related tables take some space.
\start \it How stable are the mentioned numbers? Ten months after writing the
previous text we get the following numbers: \stop
\starttabulate[|c|c|c|c|c|]
\NC \bf date \NC \bf luatex \NC \bf pdftex \NC \bf xetex \NC \bf aleph \NC \NR
\NC 2007-08-16 \NC 5 603 676 \NC 7 505 925 \NC 8 838 538 \NC 8 369 206 \NC \NR
\stoptabulate
They are all some 400K larger, which is probably the result of changes in
hyphenation patterns (we now load them all, some several times depending on the
font encodings used). Also, some extra math support has been brought in the kernel
and we predefine a few more things. However, \LUATEX's format has become much
larger! Partly this is the result of more \LUA\ code, especially \OPENTYPE\ font
handling and attributes related code. The extra \TEX\ code is probably compensated
by the removal of obsolete (at least for \MKIV) code. However, the significantly
larger number is mostly there because a different compression algorithm is used:
speed is now favoured over efficiency.
\subject{debugging}
In the process of experimenting with callbacks I played a bit with
handling \TEX\ error information. An option is to generate an
\HTML\ page instead of spitting out the usual blob of info on the
terminal. In \in {figure} [fig:error] and \in {figure} [fig:debug]
you can see an example of this.
\placefigure[][fig:error]{An example error screen.}{\externalfigure[mk-error.png][width=\textwidth]}
\placefigure[][fig:debug]{An example debug screen.}{\externalfigure[mk-debug.png][width=\textwidth]}
Playing with such features gives us an impression of what kind of
access we need to \TEX's internals. It also formed a starting
point for conversion routines and a mechanism for embedding \LUA\
code in \HTML\ pages generated by \CONTEXT.
\subject{file io}
Replacing \TEX's in- and output handling is non||trivial. Not only
is the code quite interwoven in the \WEBC\ source, but there is also
the \KPSE\ library to deal with. This means that quite some callbacks
are needed to handle the different types of files. Also, there is output
to the log and terminal to take care of.
Getting this done took us quite some time and testing and
debugging was good for some headaches. The mechanisms changed a
few times, and \TEX\ and \LUA\ code was thrown away as soon as
better solutions came around. Because we were testing on real
documents, using a fully loaded \CONTEXT\ we could converge to a
stable version after a while.
Getting this \IO\ stuff done is tightly related to generating the
format and starting up \LUATEX. If you want to overload the file
searching and \IO\ handling, you need to do so as soon as possible.
Because \LUATEX\ is also supposed to work with the existing \KPSE\
library, we still have that as fallback, but in principle one
could think of a \KPSE\ free version, in which case the default
file searching is limited to the local path and memory
initialization also reverts to the hard coded defaults. A
complication is that the source code has \KPSE\ calls and
references to \KPSE\ variables all over the place, so occasionally
we run into interesting bugs.
Anyhow, while Taco hacked his way around the code, I converted my
existing \RUBY\ based \KPSE\ variant into \LUA\ and started working
from that point. The advantage of having our own \IO\ handler is
that we can go beyond \KPSE. For instance, since \LUATEX\ has,
among a few others, the \ZIP\ libraries linked in, we can read from
\ZIP\ files, and keep all \TEX\ related files in \TDS\ compliant \ZIP\
files as well. This means that one can say:
\starttyping
\input zip:///somezipfile.zip?name=/somepath/somefile.tex
\stoptyping
and use similar references to access files. Of course we had to make
sure that \KPSE\ like searching in the \TDS\ (standardized \TEX\ trees)
works smoothly. There are plans to link the curl library into \LUATEX,
so that we can go beyond this and access repositories.
Of course, in order to be more or less \KPSE\ and \WEBC\
compliant, we also need to support this paranoid file handling, so
we provide mechanisms for that as well. In addition, we provide
ways to create sandboxes for system calls.
Getting to intercept all log output (well, most log output) was
a problem in itself. For this I used a (preliminary) \XML\ based
log format, which will make log parsing easier. Because we have
full control over file searching, opening and closing, we can
also provide more information about what files are loaded. For
instance we can now easily trace what \TFM\ files \TEX\ reads.
Implementing additional methods for locating and opening files is
not that complex because the library that ships with \CONTEXT\
is already prepared for this. For instance, implementing support
for:
\starttyping
\input http://www.someplace.org/somepath/somefile.tex
\stoptyping
involved a few lines of code, most of which deals with caching the
files. Because we overload the whole \IO\ handling, this means that
the following works ok:
% \bgroup \loggingall
\startbuffer
\placefigure
[][]
{http handling}
{\externalfigure
[http://www.pragma-ade.com/show-gra.pdf]
[page=1,width=\textwidth]}
\stopbuffer
\typebuffer \ifx\ctxlua \undefined \else \getbuffer \fi
% \egroup
Other protocols, like \FTP\ are also supported, so one can say:
\starttyping
\typefile {ftp://anonymous:@ctan.org/tex-archive/systems\
/knuth/lib/plain.tex}
\stoptyping
On the agenda is playing with database, but by the time that we enter
that stage linking the \type {curl} libraries into \LUATEX\ should
have taken place.
\subject{verbatim}
The advance of \LUATEX\ also permitted us to play with a long
standing wish of catcode tables, a mechanism to quickly switch
between different ways of treating input characters. An example of
a place where such changes take place is verbatim (and in \CONTEXT\
also when dealing with \XML\ input).
We already had encountered the phenomena that when piping back
results from \LUA\ to \TEX, we needed to take care of catcodes so
that \TEX\ would see the input as we wished. Earlier experiments
with applying \type {\scantokens} to a result and thereby
interpreting the result conforming the current catcode regime was
not sufficient or at least not handy enough, especially in the
perspective of fully expandable \LUA\ results. To be honest, the \type
{\scantokens} command was rather useless for this purpose due to its
pseudo file nature and its end||of||file handling but in \LUATEX\
we now have a convenient \type {\scantextokens} which has no side
effects.
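A schematic example (made up for this discussion, not actual \CONTEXT\
code):

\starttyping
\scantextokens{text handed back from the \LUA\ end is reread
  here with the catcodes that are active at this very spot}
\stoptyping

The argument behaves as if it had been typed at that place, without the
pseudo file and end||of||file overhead of \type {\scantokens}.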
Once catcode tables were in place, and the relevant \CONTEXT\ code
adapted, I could start playing with one of the trickier parts of
\TEX\ programming: typesetting \TEX\ using \TEX, or verbatim.
Because in \CONTEXT\ verbatim is also related to buffering and
pretty printing, all these mechanism were handled at once. It
proved to be a pretty good testcase for writing \LUA\ results back
to \TEX, because anything you can imagine can and will interfere
(line endings, catcode changes, looking ahead for arguments, etc).
This is one of the areas where \MKIV\ code will make things look
more clean and understandable, especially because we could move
all kind of postprocessing (needed for pretty printing, i.e.\
syntax highlighting) to \LUA. Interestingly, the resulting
code is not necessarily faster.
Pretty printing 1000 small (one line) buffers and 5000 simple
\type {\type} commands perform as follows:
\starttabulate[|l|c|c|c|c|]
\NC \NC \TEX\ normal \NC \TEX\ pretty \NC \LUA\ normal \NC \LUA\ pretty \NC \NR
\NC buffer \NC 2.5 (2.35) \NC ~4.5 (3.05) \NC 2.2 (1.8) \NC ~2.5 (2.0) \NC \NR
\NC inline \NC 7.7 (4.90) \NC 11.5 (7.25) \NC 9.1 (6.3) \NC 10.9 (7.5) \NC \NR
\stoptabulate
Between braces the runtime on Taco's more modern machine is shown.
It's not that easy to draw conclusions from this because \TEX\
uses files for buffers and with \LUA\ we store buffers in memory.
For inline verbatim, \LUA\ calls bring some overhead, but with
more complex content, this becomes less noticeable. Also, the \LUA\
code is probably less optimized than the \TEX\ code, and we don't
know yet what benefits a Just In Time \LUA\ compiler will bring.
\subject{xml}
Interestingly, the first experiments with \XML\ processing
don't show the expected gain in speed. This is due to the fact
that the \CONTEXT\ \XML\ parser is highly optimized. However, if
we want to load a whole \XML\ file, for instance the formal
\CONTEXT\ interface specification \type {cont-en.xml}, then we can
bring down loading time (as well as \TEX\ memory usage) down from
multiple seconds to a blink of the eyes. Experiments with internal
mappings and manipulations demonstrated that we may not so much
need an alternative for the current parser, but can add additional,
special purpose ones.
We may consider linking \XSLTPROC\ into \LUATEX, but this is yet
undecided. After all, the problem of typesetting does not really
change, so we may as well keep the process of manipulating and
typesetting separated.
\subject{multipass data}
Those who know \CONTEXT\ a bit will know that it may need multiple
passes to typeset a document. \CONTEXT\ not only keeps track of
index entries, list entries, cross references, but also optimizes
some of the output based on information gathered in previous
passes. Especially so called two||pass data and positional
information puts some demands on memory and runtime. Two||pass
data is collapsed in lists because otherwise we would run out of
memory (at least this was true years ago when these mechanisms
were introduced). Positional information is stored in hashes and
has always put a bit of a burden on the size of a so called
utility file (\CONTEXT\ stores all information in one auxiliary
file).
These two datatypes were the first we moved to a \LUA\ auxiliary
file and eventually all information will move there. The advantage
is that we can use efficient hashes (without limitations) and only
need to run over the file once. And \LUA\ is incredibly fast in
loading the tables where we keep track of these things. For
instance, a test file storing and reading 10.000 complex positions
takes 3.2 seconds runtime with \LUATEX\ but 8.7 seconds with
traditional \PDFTEX. Imagine what this will save when dealing with
huge files (400 page 300 Meg files) that need three or more passes
to be typeset. And, now we can without problems bump position
tracking to millions of positions.
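To give an impression (the actual table layout and field names in
\CONTEXT\ differ), such positional data might be stored like this:

\starttyping
-- loaded in one go from the utility file at the start of a run
local positions = {
  ["fig:mptopdf"] = { page = 12, x = 236735, y = 5242880 },
  ["tab:timings"] = { page = 13, x = 118367, y = 2621440 },
}
\stoptyping

Loading such a table amounts to executing one chunk of \LUA\ code
instead of interpreting a long series of macro calls in a \TEX\
utility file.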
\subject{resources}
Finding files is somewhat tricky and has a history in the \TEX\
community and its distributions. For reasons of packaging and
searching files are organized in a tree and there are rules for
locating files of given types in this tree. When we say
\starttyping
\input blabla.tex
\stoptyping
\TEX\ will look for this file by consulting the path specification
associated with the filetype. When we say
\starttyping
\input blabla
\stoptyping
\TEX\ will add the \type {.tex} suffix itself. Most other filetypes
are not seen by users but are dealt with in a similar way internally.
As mentioned before, we support reading from other resources than
the standard file system, for instance we can input files from
websites or read from \ZIP\ archives. Although this works quite well,
we need to keep in mind that there are some conflicting interests:
structured search based on type related specifications versus more
or less explicit requests.
\starttyping
\input zip:///archive.zip?name=blabla.tex
\input zip:///archive.zip?name=/somepath/blabla.tex
\stoptyping
Here we need to be rather precise in defining the file location. We can
of course build rather complex mechanisms for locating files here, but
at some point that may backfire and result in unwanted matches.
If you want to treat a \ZIP\ archive as a \TEX\ tree, then you need
to register the file:
\starttyping
\usezipfile[archive.zip]
\usezipfile[tex.zip][texmf-local]
\usezipfile[tex.zip?tree=texmf-local]
\stoptyping
The first variant registers all files in the archive, but the
next two are equivalent and only register a subtree. The registered
tree is prepended to the \type {TEXMF} specification and thereby
may overload existing trees.
If an archive is not a real \TEX\ tree, you can access files anywhere
in the tree by using wildcards:
\starttyping
\input */blabla.tex
\input */somepath/blabla.tex
\stoptyping
These mechanisms evolve over time and it may take a while before they
stabilize. For instance, the syntax for the \ZIP\ inclusion has been
adapted more than a year after this chapter was written (which is
why this section is added).
\stopcomponent
\addcontentsline{toc}{section}{Abbreviations}
\section*{Abbreviations}
% \thispagestyle{plain} % suppress header on first page
\hspace{20cm}
\begin{table}[H]
% \centering
\begin{tabular}{ll}
\toprule
% Term\phantom{space} & Meaning \\ \midrule
$\textbf{ANOVA}$ & Analysis of Variance \\
$\textbf{FP}$ & Factors Prioritisation \\
$\textbf{FF}$ & Factors Fixing \\
$\textbf{VC}$ & Variance Cutting \\
$\textbf{FM}$ & Factors Mapping \\
$\textbf{CI}$ & Confidence interval \\
\bottomrule
\end{tabular}
\end{table}
\input{"/Users/brandonwilliams/Documents/LaTeX Includes/notespreamble.tex"}
\input{"/Users/brandonwilliams/Documents/LaTeX Includes/extrapackages.tex"}
\input{"/Users/brandonwilliams/Documents/LaTeX Includes/extracommands.tex"}
\input{"/Users/brandonwilliams/Documents/LaTeX Includes/knots.tex"}
\begin{document}
\title{\Large Knot Theory}
\author{Brandon Williams \\ \texttt{[email protected]}}
\maketitle
\tableofcontents
\newpage
\section{Knots, Links, Braids and Tangles}
\label{Knots, Links, Braids and Tangles}
In this section we develop the necessary combinatorial machinery needed for a serious study of knots, links, braids and tangles. We are motivated to study knots and links because they are intuitive, easy to visualize, and in a sense control the topology of 3- and 4-manifolds. Braids are also appealing since they have clear geometric meaning, yet turn out to be entirely algebraic. Then, one might wonder how much of the theory of knots, links and braids can be carried over to the more general setting of tangles, in which case one has opened a can of worms.
\subsection{Knots and Links}
\label{Knots and Links}
A classical knot $k$ is a smooth embedding of the circle $S^1$ in the 3-sphere $S^3$. We will almost always use $k$ to denote the image of the knot. We have chosen to use smooth embeddings, as opposed to topological embeddings, to prevent so called \emph{wild} knots from occurring. Equivalently we can consider PL embeddings. Two knots $k,k'$ in $S^3$ are said to be \textbf{isotopic} if they are ambiently isotopic, i.e. there is an isotopy $F : S^3 \times I \rightarrow S^3$ of the 3-sphere such that $F(-,0) = \id$ and $F(k,1) = k'$. The \textbf{unknot} is the standardly embedded circle in the 3-sphere (one can think of it as an equator). We can extend the concept of knot in the obvious way by defining a \textbf{link} to be a smooth embedding of finitely many disjoint copies of $S^1$. Our primary goal, at least in the beginning stages of the theory, will be to construct invariants that allow us to determine when two knots are not isotopic. This will be taken up in \cref{Classical Knot Invariants}.
The most common method of constructing these invariants is from a knot diagram. Slightly perturbing a knot $k$ (if necessary) to miss the point at infinity in $S^3$, we can think of $k$ as being embedded in $\mathbb R^3$. If $\Pi$ is a plane in $\mathbb R^3$ disjoint from $k$, then we can orthogonally project $k$ onto $\Pi$ to obtain an immersed circle $D$ in $\Pi$. If $\Pi$ is chosen generically then $D$ will only have double point singularities, and so we can decorate these points with over and under crossings such that the strand in $k$ closest to $\Pi$ passes ``under'' the other strand. This decorated, immersed circle in the plane is called a \textbf{knot diagram} for $k$. Of course, a knot can have many different diagrams, and each diagram can look very different.
In order for these diagrams to be of mathematical interest to us we need to come up with a set of ``moves'' that can be performed so that any two diagrams for a knot can be related via a finite sequence of these moves. There are three moves shown in \cref{reidemeister-moves} that clearly change the diagram of a knot, but do not change the isotopy type of the knot. These moves are called the first, second and third Reidemeister moves, and amazingly these moves generate all isotopies.
\begin{thm}[Redemeister's Theorem]
\label{Reidemeister's Theorem}
Two knots $k,k'$ are isotopic if and only if any of their diagrams can be connected via a finite sequence of Reidemeister moves of the first, second and third kind.
\end{thm}
\begin{figure}[tb]
\centering
\includegraphics{"\graphicspath/reidemeister-moves"}
\caption{Reidemeister Moves}
\label{reidemeister-moves}
\end{figure}
This theorem is crucial to defining invariants of knots, for it essentially tells that we can define invariants by first defining them on diagrams, and then showing the quantity remains unchanged under the Reidemeister moves.
Many times it will be useful to give a link an orientation, which we can simply think of as an arrow on each component of the link. A link $L$ with $n$ components has $2^n$ orientations. Two oriented links are said to be isotopic if there is an isotopy between the links that is compatible with the orientations. If $L$ is an oriented link, then $-L$ denotes the same link with the orientation of each component reversed, and is called the \textbf{reverse}. A link isotopic to its reverse is called \textbf{invertible}. The trefoil is an example of an invertible knot.
For an oriented knot $k$ and diagram $D$, let $w(D)$ be the sum of the signs of all crossings in the diagram $D$, where the sign of a crossing is shown in \cref{positive-negative-crossings}. This value is called the \textbf{writhe} of the diagram, and it is invariant under only the second and third Reidemeister moves, but not the first. If $k'$ is another oriented knot disjoint from $k$, and $D$ is a diagram for $k \cup k'$, then we define the linking number of $k$ and $k'$, denoted by $\lk(k,k')$, to be half of the sum of the signs of all crossings in $D$ at which a strand of $k$ crosses a strand of $k'$. One can easily check that this value remains invariant under the Reidemeister moves, hence it is an invariant of two component, oriented links. Clearly if one component, say $k$, is contained in a 3-ball disjoint from $k'$, then $\lk(k,k')=0$. Further, we have the relations $\lk(k,k')=\lk(k',k)$ and $\lk(-k,k')=-\lk(k,k')$.
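For example, in the standard two-crossing diagram of the Hopf link, oriented so that both crossings are positive, we get $\lk(k,k') = \frac{1}{2}(1+1) = +1$; reversing the orientation of one component changes this to $-1$.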
Let $N(k)$ denote a tubular neighborhood of $k$ in $S^3$. A simple, closed curve $\mu$ in $\partial N(k)$ is called a \textbf{meridian} if it bounds a disc in $N(k)$. Similarly, a simple, closed curve $\ell$ in $\partial N(k)$ is called a \textbf{canonical longitude} if it bounds a surface in $S^3 \backslash k$. These curves are uniquely defined, up to isotopy. Further, if $k$ is oriented, then we orient $\mu$ so that $\lk(k,\mu)=+1$ and orient $\ell$ so that it points in the same direction as $k$, i.e. the algebraic intersection number $\mu \cdot \ell$ is also $+1$.
\begin{figure}
\centering
\includegraphics[scale=3]{"\graphicspath/up-cross-pos"} \ \ \ \ \ \ \ \ \ \ \ \ \includegraphics[scale=3]{"\graphicspath/up-cross-neg"}
\caption{Positive and negative crossings}
\label{positive-negative-crossings}
\end{figure}
\subsection{Braids}
\label{Braids}
An \textbf{$n$-stranded braid} is an embedding $\sigma$ of $n$ copies of $[0,1]$ in $\mathbb R^2 \times [0,1]$ such that the boundary points of the $k$-th strand are mapped to $((k,0),0)$ and $((k,0),1)$ in $\mathbb R^2 \times [0,1]$, and the composition $\pi_{[0,1]} \circ \sigma$ has no critical points (where $\pi_{[0,1]}$ is projection onto the second factor). For each $i=1,\ldots,n-1$ let $\sigma_i$ denote the braid whose strands are all trivial vertical segments except for the $i$-th and $(i+1)$-st, which cross each other exactly once, with the $i$-th strand passing over the $(i+1)$-st.
\subsection{Tangles}
\label{Tangles}
\newpage
\section{Classical Knot Invariants}
\label{Classical Knot Invariants}
\subsection{Introduction}
\label{Introduction}
From our definition of equivalence of knots we can easily see that the homeomorphism type of the knot complement $S^3 \backslash k$ is an invariant of the knot. In fact, by a difficult theorem of Gordon and Luecke this is a \emph{complete} knot invariant (but not link invariant). We can get more computable invariants of the knot by taking common topological invariants of $S^3 \backslash k$, such as the fundamental group or homology groups. While we will discuss the fundamental group in \cref{The Knot Group}, the next proposition says that the homology groups do not give us anything interesting.
\begin{prop}
If $L$ is an $n$ component link, then $H_1(S^3 \backslash L) \cong \mathbb Z^n$ is generated by the meridians of the components of $L$.
\end{prop}
\begin{proof}
We prove this for the case of a knot ($n=1$), and the general case will follow easily. The 3-sphere can be written as the union of the open sets $U = \nu k$ and $V = S^3 \backslash k$, so a part of the Mayer-Vietoris sequence gives us
\[ 0 = H_2(S^3) \longrightarrow H_1(U \cap V) \longrightarrow H_1(U) \oplus H_1(V) \longrightarrow H_1(S^3) = 0 \]
Clearly $U \cap V$ is homotopy equivalent to the 2-torus, and $U$ is homotopy equivalent to a circle, so $H_1(U \cap V) \cong \mathbb Z^2$ and $H_1(U) \cong \mathbb Z$. The middle map must be an isomorphism, so we have $H_1(S^3 \backslash k) \cong \mathbb Z$. Further, we can find a basis $\mu,\ell$ of simple, closed, oriented curves for $H_1(U \cap V)$ such that $\mu=0$ in $H_1(U)$ (i.e. $\mu$ is a meridian) and $\ell=0$ in $H_1(V)$ (i.e. $\ell$ is a longitude). Since the middle map can be defined as $(i_*,-j_*)$, where $i_*,j_*$ are the maps induced by the inclusions of $U \cap V$ into $U$ and $V$ respectively, we see that $j_*\mu$ generates $H_1(V)$.
\end{proof}
A \textbf{Seifert surface} for an oriented link $L$ is an oriented, smoothly embedded surface $F$ in $S^3$ such that $\partial F = L$ and the orientation induced on $L$ by $F$ agrees with the fixed orientation on $L$. These surfaces exist for all knots in $S^3$; in fact, there is an explicit algorithm we can use to get a picture for a Seifert surface from a diagram. The algorithm is as follows. Resolve all the crossings in the diagram in a manner that preserves orientations, resulting in a collection of closed, non-overlapping circles. Fill these circles with discs, but now layered in 3-dimensions, and for each crossing glue in a twisted band between corresponding discs. The result will be an orientable surface with boundary precisely $L$. If $L$ has $n$ components, and the diagram used in the above algorithm has $c$ crossings and $d$ circles in the oriented resolution, then the constructed Seifert surface will have Euler characteristic and genus
\begin{equation}
\label{euler characteristic genus seifert surface}
\chi(F) = d - c \ \ \ \ \ \ \ \ \ \ g(F) = 1+\frac{c-d-n}{2}
\end{equation}
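For example, the standard three-crossing diagram of the trefoil has $c=3$ and its oriented resolution consists of $d=2$ circles, so the Seifert surface produced by the algorithm has $\chi(F) = 2-3 = -1$ and $g(F) = 1+\frac{3-2-1}{2} = 1$.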
\begin{prop}
Let $k,k'$ be disjoint, oriented knots, and $F'$ an oriented Seifert surface for $k'$. Then the linking number $\lk(k,k')$ can be computed as
\begin{enumerate}
\item the algebraic intersection number $k \cdot F'$.
\item the integer $n$ such that as homology classes $[k] = n[\mu']$ in $H_1(S^3 \backslash k')$, where $\mu'$ is a meridian of $k'$.
\end{enumerate}
\end{prop}
\begin{proof}
\end{proof}
A basic question is the following: how are two Seifert surfaces for isotopic knots related? Let us consider ``moves'' we can perform on Seifert surfaces to produce new Seifert surfaces. First, if $G$ is an isotopy of a knot $k$, then we can also isotope a Seifert surface $F$ with $G$. Next, we can cut two disjoint discs $D_1,D_2$ out of $F$, and glue in a smoothly embedded cylinder $S^1 \times I$ that is disjoint from $F$. This increases the genus of the Seifert surface by one, but does not change the boundary, so it is still a Seifert surface. This new surface is called the \textbf{stabilization} of $F$. Conversely, if we can find a simple closed curve $c$ embedded in $F$ such that there is a disc in $S^3 \backslash F$ bounded by $c$, then we can cut $F$ open along $c$, and close the surface up by gluing in two discs. We have now decreased the genus of the surface by one without changing the boundary. This new surface is called the \textbf{destabilization} of $F$.
\begin{prop}
\label{Seifert surface moves}
If $k,k'$ are isotopic knots with Seifert surfaces $F,F'$ respectively, then $F$ and $F'$ are related by a finite sequence of isotopies, stabilizations and destabilizations.
\end{prop}
\begin{proof}
Let $G : S^3 \times I \rightarrow S^3$ be an isotopy with $G(-,0)=\id$ and $G(k,1)=k'$. The set of points $M' = \lcb (G(x,t),t) \st x \in k,t\in I \rcb$ forms a smoothly embedded surface in $S^3 \times I$ with boundary precisely $k \times 0 \cup k' \times 1$. Let $M$ be the closed surface $F \times 0 \cup M' \cup F' \times 1$. There is a compact, smoothly embedded 3-manifold $W$ in $S^3 \times I$ such that $\partial W = M$. By perturbing $W$ slightly we can assume that the time slices $W_t := S^3 \times t \cap W$ are embedded surfaces for all but finitely many $t$. This description induces a Morse function $h : W \rightarrow \mathbb R$. The level sets $h^{-1}(t) = W_t$ change by a $k-1$ surgery when they cross a critical point of index $k$. We can arrange so that there are only critical points of index $1$ and $2$ for $0 < t < 1$, and so the level sets change by 0- and 1-surgeries. These surgeries are precisely stabilizations and destabilizations, hence $W$ provides a finite sequence of isotopies, stabilizations and destabilizations to obtain $F'$ from $F$.
\end{proof}
We will now describe many operations that can be performed on knots to obtain more complicated knots from simpler pieces. The first operation is the \textbf{connect sum} operation, which consists of first placing two knots $k_1,k_2$ in $\mathbb R^3$ such that they are separated by a plane $\Pi$. Remove a small open segment from each knot, and place the resulting boundary points on the separating plane such that the 4 points overlap in pairs. The resulting knot is denoted by $k_1 \# k_2$, and depends only on the isotopy class of $k_1$ and $k_2$. A knot is said to be \textbf{prime} if it cannot be written as a connect sum of non-trivial knots.
\todo{satellites, doubles, ...}
\subsection{Scalar Invariants}
\label{Scalar Invariants}
In this section we will derive some computable invariants from Seifert matrices. These invariants depend on the following result describing a pairing of the homology of a surface with the homology of the complement of the surface.
\begin{prop}
\label{pairing of H_1(F) and H_1(S^3 - F)}
Let $F$ be a compact, orientable surface of genus $g$ with $n$ boundary components. Then there is a bilinear, non-singular pairing $\beta : H_1(F) \times H_1(S^3 \backslash F) \rightarrow \mathbb Z$ such that for homology classes $[c] \in H_1(F)$ and $[d] \in H_1(S^3 \backslash F)$ represented by simple, closed, oriented curves $c,d$ we have $\beta([c],[d])=\lk(c,d)$.
\end{prop}
\begin{proof}
Recall that $H_1(F)$ is a free abelian group of rank $2g+n-1$. Let $U = \nu F$ be a closed tubular neighborhood of $F$; then $U$ is a handlebody, i.e. a 3-ball with $2g+n-1$ 1-handles attached. Let $V$ be the closure of the complement of $U$ in $S^3$, so that $V$ is a deformation retract of $S^3 \backslash F$ and $U \cap V = \partial U$ is a closed surface of genus $2g+n-1$. Then, part of the Mayer-Vietoris sequence gives us
\[ 0 = H_2(S^3) \longrightarrow H_1(U \cap V) \longrightarrow H_1(U) \oplus H_1(V) \longrightarrow H_1(S^3) = 0 \]
Since the middle map is an isomorphism we must have $H_1(V) \cong \mathbb Z^{2g+n-1}$. Let $\lcb x_i \rcb_{i=1}^{2g+n-1}$ and $\lcb y_j \rcb_{j=1}^{2g+n-1}$ be bases represented by simple, closed, oriented curves for $H_1(F)$ and $H_1(S^3 \backslash F)$ such that $\lk(x_i,y_j) = \delta_{ij}$. This can be done because we can choose the first basis so that the curve $x_i$ runs once through the $i$-th handle, and then we can simply choose $y_i$ to be a meridian of the $i$-th handle, oriented coherently. Define $\beta : H_1(F) \times H_1(S^3 \backslash F) \rightarrow \mathbb Z$ on this basis by
\[ \beta(x_i,y_j) = \delta_{ij} \]
and extend linearly. If $[c] \in H_1(F), [d] \in H_1(S^3 \backslash F)$ are homology classes represented by simple, closed, oriented curves $c,d$, then we can write $[c] = \sum c_i x_i$ and $[d] = \sum d_i y_i$ for some integers $c_i,d_i$. Now $\lk(x_i,d)$ is the integer $m$ such that $[d]=m[\mu_i]$ in $H_1(S^3 \backslash x_i)$, where $\mu_i$ is a meridian of $x_i$. Under the map $H_1(S^3 \backslash F) \rightarrow H_1(S^3 \backslash x_i)$ induced by inclusion, the class $y_j$ is sent to $\lk(x_i,y_j)[\mu_i] = \delta_{ij}[\mu_i]$, hence
\[ [d] = \sum_j d_j \delta_{ij} [\mu_i] = d_i [\mu_i] \]
and so $\lk(x_i,d) = d_i$. Similarly, in $H_1(S^3 \backslash d)$ we have
\[ [c] = \sum c_i [x_i] = \sum c_i \lk(x_i,d) [\mu] = \sum c_i d_i [\mu] \]
where $\mu$ is a meridian of $d$. Therefore $\lk(c,d)=\sum c_i d_i$, which is precisely $\beta([c],[d])$, and so the proposition is proved.
\end{proof}
Using this pairing we define another pairing $\alpha : H_1(F) \times H_1(F) \rightarrow \mathbb Z$ called the \textbf{Seifert form} of $F$. Since $F$ has an orientation we can take a tubular neighborhood $F \times [-1,1]$ of $F$ in $S^3$ such that the normal vector of $F$ points towards $F \times 1$. Let $i^\pm : F \rightarrow S^3 \backslash F$ denote the inclusion of $F$ into the $F \times \pm 1$ slice of the tubular neighborhood. For a homology class $x \in H_1(F)$ we will use the notation $x^\pm$ for the class $i_*^\pm x$ in $H_1(S^3 \backslash F)$. We now define the pairing by $\alpha(x,y) = \beta(x,y^+)$ (note that this pairing is not necessarily symmetric). If we have a basis $\lcb x_i \rcb_{i=1}^{2g+n-1}$ for $H_1(F)$, then the matrix $S = (\alpha(x_i,x_j))_{ij}$ is called the \textbf{Seifert matrix} of $F$ with respect to the basis $\lcb x_i \rcb$. If the basis elements are represented by simple, closed, oriented curves in $F$, then $S = (\lk(x_i,x_j^+))_{ij} = (\lk(x_i^-,x_j))_{ij}$. Let $\lcb y_i \rcb_{i=1}^{2g+n-1}$ be a basis for $H_1(S^3 \backslash F)$ dual to $\lcb x_i \rcb$ with respect to $\beta$; that is, $\beta(x_i,y_j) = \delta_{ij}$. If the entries of the Seifert matrix are written as $S = (s_{ij})$, then $x_i^\pm$ can be written in the basis $\lcb y_i \rcb$ as follows
\begin{equation}
\label{x_i^+ in y basis}
x_i^+ = \sum_j s_{ji} y_j
\end{equation}
\begin{equation}
\label{x_i^+- in y basis}
x_i^- = \sum_j s_{ij} y_j
\end{equation}
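The simplest example worth recording is an annulus $F$ embedded in $S^3$ with $k$ full twists (a surface that does not appear in our figures, but is standard): here $g=0$ and $n=2$, so $H_1(F) \cong \mathbb Z$ is generated by the core curve $x_1$, and with a suitable sign convention for the twisting $\lk(x_1,x_1^+) = k$. The Seifert matrix is therefore the $1 \times 1$ matrix
\[ S = \begin{pmatrix} k \end{pmatrix}. \]
For $k=1$ the boundary of this annulus is the Hopf link, which we will use later as a small test case.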
If we choose a different basis $\lcb x_i' \rcb$ for $H_1(F)$, and let $S'$ be the associated Seifert matrix, then $S$ and $S'$ are related by $S' = U^T S U$, where $U$ is an invertible, integral matrix (i.e. $\det U = \pm 1$). The process of transforming a Seifert matrix $S$ to $U^T S U$ will be called an $S_1$ move on Seifert matrices.
By \cref{Seifert surface moves} we have a set of moves that can be performed on Seifert surfaces that generate \emph{all} Seifert surfaces for a knot. We want to see how these moves change the Seifert matrix. Clearly isotopy leaves the Seifert matrix unchanged since we can isotope our basis along with the surface. Let $F'$ be a new Seifert surface obtained by stabilizing $F$ once. Then we can form a basis for $H_1(F')$ by adding two new generators to the basis $\lcb x_i \rcb$ for $H_1(F)$. Let $y_2$ be a meridian of the cylinder glued in during the stabilization process, and let $y_1$ be a longitude. Then $y_2$ does not link with any of the $x_i^+$'s, $y_1$ and $y_2^+$ link either positively or negatively exactly once while $y_2$ and $y_1^+$ do not link, and $y_1$ links with the $x_i^+$'s in an unpredictable manner. Therefore the Seifert matrix $S'$ of $F'$ is of the form
\begin{equation}
\label{S_2 move}
S' = \begin{pmatrix} & & & * & 0 \\ & S & & \vdots & \vdots \\ & & & * & 0 \\ * & \cdots & * & * & \pm 1 \\ 0 & \cdots & 0 & 0 & 0 \end{pmatrix}
\end{equation}
where the $*$ entries can be arbitrary integers. Transforming a Seifert matrix $S$ to a matrix in the above form will be called an $S_2$ move. From \cref{Seifert surface moves} we now know that any function on the isotopy classes of oriented links defined in terms of Seifert matrices will be well-defined if the function remains invariant under $S_1$ and $S_2$ moves.
Before defining such invariants, let us look at how Seifert matrices change under other operations. For example, if we change the orientation of the link (this means changing the orientation of \emph{every} component), then the orientation of the Seifert surface $F$ is reversed. Pushing curves off the surface in the normal direction will also be reversed, hence the Seifert matrix of $-F$ is simply $S^T$. On the other hand, let $\overline L$ denote the \textbf{mirror} of $L$, i.e. the reflection of $L$ through any plane in $\mathbb R^3$. In a diagram we can take this to be switching all crossings. We can also reflect the Seifert surface $F$ to obtain a surface $\overline F$ for $\overline L$, as well as the basis curves to obtain $\lcb \overline x_i \rcb$. Clearly the Seifert matrix with respect to this basis will simply be $-S$.
\begin{prop}
Let $S$ be a Seifert matrix for an oriented link $L$. Then $|\det(S+S^T)|$ is invariant under the $S_1$ and $S_2$ moves on Seifert matrices.
\end{prop}
\begin{proof}
Let $S' = U^T S U$, where $U$ is an invertible integral matrix, then
\[ \det(S'+S'^T) = \det(U^T(S+S^T)U) = \det(S+S^T) \]
so this quantity is invariant under $S_1$ moves. Next let $S'$ be the Seifert matrix obtained from applying the $S_2$ move (as in \cref{S_2 move}), then
\[ \det(S'+S'^T) = \det \begin{pmatrix} & & & * & 0 \\ & S+S^T & & \vdots & \vdots \\ & & & * & 0 \\ * & \cdots & * & * & \pm 1 \\ 0 & \cdots & 0 & \pm 1 & 0 \end{pmatrix} \]
By performing a cofactor expansion along the last column, and then another expansion along the last row, we are left with $\det(S+S^T)$, up to sign. Therefore $|\det(S+S^T)|$ is invariant under the $S_2$ moves as well.
\end{proof}
This proposition gives us our first link invariant: the \textbf{determinant} of an oriented link $L$, denoted by $\det(L)$, is the determinant of the symmetrization of a Seifert matrix for $L$. One can easily check that the determinant satisfies $\det(L)=\det(-L)$, hence it can be considered an invariant of unoriented knots, but it can behave unpredictably with respect to reversing orientations on specific components of links. Further, we have $\det(L)=\det(\overline L)$, so the determinant cannot detect when mirrors are non-isotopic.
\begin{prop}
Let $S$ be a Seifert matrix for an oriented link $L$ and $\omega \neq 1$ a complex number of unit modulus. Then $\sigma((1-\omega)S+(1-\overline\omega)S^T)$ is invariant under the $S_1$ and $S_2$ moves on Seifert matrices, where $\sigma$ denotes the signature of a Hermitian matrix.
\end{prop}
\begin{proof}
Since signature is invariant under transformations $U^T S U$ we clearly have invariance under $S_1$ moves. Next let $S'$ be the Seifert matrix obtained from applying the $S_2$ move (as in \cref{S_2 move}), then
\[ (1-\omega)S' + (1-\overline\omega)S'^T = \begin{pmatrix} & & & * & 0 \\ & (1-\omega)S+(1-\overline\omega)S^T & & \vdots & \vdots \\ & & & * & 0 \\ * & \cdots & * & * & \pm (1-\omega) \\ 0 & \cdots & 0 & \pm (1-\overline\omega) & 0 \end{pmatrix} \]
By adding multiples of the last column and last row to the other columns and rows we obtain a matrix with the same signature of the form
\[ \begin{pmatrix} & & & 0 & 0 \\ & (1-\omega)S+(1-\overline\omega)S^T & & \vdots & \vdots \\ & & & 0 & 0 \\ 0 & \cdots & 0 & 0 & \pm (1-\omega) \\ 0 & \cdots & 0 & \pm (1-\overline\omega) & 0 \end{pmatrix} \]
The signature of this matrix is the sum of the signatures of the block matrices. The bottom block matrix has signature zero, therefore we have invariance under the $S_2$ move.
\end{proof}
We can now define the $\omega$-signature of an oriented link $L$ to be the signature of the Hermitian matrix $(1-\omega)S+(1-\overline\omega)S^T$, and we denote this number by $\sigma_\omega(L)$. We will usually consider signatures with $\omega=-1$, which we call \emph{the} signature and write it simply as $\sigma(L)$. Again we have $\sigma_\omega(L)=\sigma_\omega(-L)$, so signature can be considered an unoriented knot invariant, however this time we have $\sigma_\omega(L) = -\sigma_\omega(\overline L)$. Therefore if a link is isotopic to its mirror (also known as \textbf{amphichiral}), then its signature is zero.
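To make the specialization at $\omega = -1$ explicit, note that in this case
\[ (1-\omega)S + (1-\overline\omega)S^T = 2(S+S^T), \]
and scaling a symmetric matrix by a positive number does not change its signature, so $\sigma(L) = \sigma(S+S^T)$. For example, the unknot bounds a disc, whose Seifert matrix is the empty ($0 \times 0$) matrix, so $\sigma(\text{unknot}) = 0$ and, with the usual convention that the empty determinant is $1$, $\det(\text{unknot}) = 1$.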
\begin{prop}
Signature is additive with respect to connect sum: $\sigma(k_1 \# k_2) = \sigma(k_1) + \sigma(k_2)$.
\end{prop}
\begin{proof}
Let $F_1,F_2$ be Seifert surfaces for $k_1,k_2$ respectively, and let $\lcb x_i \rcb_{i=1}^{2g_1}$ and $\lcb y_i \rcb_{i=1}^{2g_2}$ be bases for the groups $H_1(F_1)$ and $H_1(F_2)$. Then $\lcb x_1,\ldots,x_{2g_1},y_1,\ldots,y_{2g_2} \rcb$ forms a basis for $H_1(F_1 \natural F_2)$, and the $x_i$'s do not link with any of the $y_i$'s, hence the Seifert matrix is of the form
\[ S = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix} \]
where $S_1,S_2$ are the Seifert matrices for $k_1,k_2$. The signature of the symmetrization of this is just $\sigma(S_1+S_1^T)+\sigma(S_2+S_2^T)$, so the proposition follows.
\end{proof}
Although the genus of a Seifert surface is not a link invariant (since we can stabilize and destabilize the surfaces), we can take the minimum genus over all Seifert surfaces. The \textbf{genus} of a link $L$ is defined to be the minimum value of $g(F)$, where $F$ ranges over all Seifert surfaces for $L$, and is denoted by $g(L)$. Clearly isotopic links have equal genus, and the only knot with genus 0 is the unknot.
\begin{prop}
The genus of a knot is additive with respect to connect sum: $g(k_1 \# k_2) = g(k_1) + g(k_2)$.
\end{prop}
\begin{proof}
If $F_1,F_2$ are Seifert surfaces for $k_1,k_2$ respectively, then we can form the boundary sum $F_1 \natural F_2$ to obtain a surface of genus $g(F_1)+g(F_2)$ bounding $k_1 \# k_2$, hence $g(k_1 \# k_2) \leq g(k_1) + g(k_2)$.
On the other hand, let $F$ be a Seifert surface for $k_1 \# k_2$. We can arrange $k_1 \# k_2$ in $S^3$ such that there is a sphere $\Sigma$ intersecting $k_1 \# k_2$ in precisely two points, and the arc of $k_1 \# k_2$ running between these two points inside the ball bounded by $\Sigma$ is the part coming from $k_2$ in the connect sum. By slightly perturbing $F$ and $\Sigma$, if necessary, we can assume that $F \cap \Sigma = \beta \cup c_1 \cup \cdots \cup c_n$, where $\beta$ is an arc with boundary $k_1 \# k_2 \cap \Sigma$, and the $c_i$'s are simple, closed curves. Let $c$ be one of these curves such that the disc it bounds in $\Sigma$ does not contain any other $c_i$'s. If we cut out an annular neighborhood of $c$ in $F$, then the resulting surface has two new boundary components. Glue in two parallel discs to close up this surface, which we will denote by $F'$. This can be done since we assume the spanning disc of $c$ in $\Sigma$ did not contain any other curves. We claim that $F'$ has genus less than or equal to the genus of $F$. If $c$ does not separate $F$, then $F'$ is connected and has genus $g(F)-1$ since we cut open a handle and capped it off. If $c$ separates $F$, then $F'$ has two components, one of which is closed and one of which has boundary $k_1 \# k_2$. Let $F''$ be the latter component; then clearly $F''$ has genus less than or equal to the genus of $F$. We now have a Seifert surface with genus no greater than $g(F)$, but now it has one less intersection with $\Sigma$. Repeating this we obtain a Seifert surface $F'''$ which intersects $\Sigma$ only at $\beta$, and which has genus no greater than the genus of the original surface $F$. Let $F_1$ and $F_2$ be the parts of $F'''$ lying on either side of $\Sigma$, then $F_1$ and $F_2$ have boundary $k_1 \cup \beta$ and $k_2 \cup \beta$, which are isotopic to $k_1$ and $k_2$. Therefore
\[ g(k_1) + g(k_2) \leq g(F_1) + g(F_2) = g(F''') \leq g(F) \]
and since this works for any initial surface $F$, we have $g(k_1)+g(k_2) \leq g(k_1 \# k_2)$.
\end{proof}
Similar to the genus of a link is the \textbf{4-ball genus} of a link. This is the minimum of $g(F)$, where $F$ ranges over all smoothly embedded, compact, orientable surfaces in $D^4$ with boundary $\partial F = L \subset \partial D^4$, and is denoted by $g^*(L)$. If we relax the requirement of smoothly embedded to be only a locally flat, topological embedding, then we obtain the \textbf{topological 4-ball genus} denoted by $g^T(L)$. We clearly have $g^T(L) \leq g^*(L) \leq g(L)$. A link $L$ with $g^*(L)=0$ is called \textbf{smoothly slice}, or usually just \textbf{slice}.
\begin{prop}
If $k$ is a slice knot, then there is a $2g \times 2g$ Seifert matrix for $k$ of the form
\[ \begin{pmatrix} 0 & P \\ Q & R \end{pmatrix} \]
where $P,Q,R$ are $g \times g$ integral matrices.
\end{prop}
\begin{cor}
If $k$ is a slice knot, then $\sigma(k) = 0$.
\end{cor}
\begin{proof}
By the previous proposition we have that the symmetrization of a Seifert matrix for $k$ is of the form
\[ \begin{pmatrix} 0 & P \\ P^T & Q \end{pmatrix} \]
where $P$ is an integral matrix and $Q$ is symmetric. By a result we will prove later (\todo{insert reference}) we have that $\det(k) \neq 0$, hence the above matrix is a symmetric, non-degenerate bilinear form which vanishes on a half-dimensional subspace. The signature of such forms is zero, hence $\sigma(k)=0$.
\end{proof}
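The smallest case makes the last step concrete: for $g=1$ the symmetrized matrix is
\[ \begin{pmatrix} 0 & a \\ a & b \end{pmatrix} \]
with $a \neq 0$ by non-degeneracy, and its determinant is $-a^2 < 0$, so it has one positive and one negative eigenvalue and hence signature $0$. The general case is the same argument applied to a form vanishing on a $g$-dimensional subspace of a $2g$-dimensional space.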
Two knots $k_1,k_2$ in $S^3$ are said to be \textbf{concordant} if we can smoothly embed the cylinder $S^1 \times I$ in $S^3 \times I$ such that the $t=0$ slice of the cylinder is precisely $k_1$ in $S^3 \times 0$, and the $t=1$ slice is precisely $k_2$ in $S^3 \times 1$. If $k$ is a slice knot, then we can cut a small disc out of a slicing disc to obtain a concordance of $k$ with the unknot. Conversely, if $k$ is concordant to the unknot, then we can cap off the cylinder to obtain a slicing disc for $k$, hence $k$ is slice. We can compose concordances by stacking them (just like cobordisms), so we have that concordance defines an equivalence relation on the set of knots in $S^3$. Let $\mathcal C_1$ denote the set of concordance classes. We claim that $\mathcal C_1$ forms a group under the connect sum operation. First we show that the connect sum operation descends to a well-defined function on $\mathcal C_1$. If $k_i,k_i'$ ($i=1,2$) are knots such that $k_i$ is concordant to $k_i'$, then we can splice together the cylinder between $k_1$ and $k_1'$ with the cylinder between $k_2$ and $k_2'$ to obtain a concordance between $k_1\# k_2$ and $k_1'\# k_2'$. The concordance class of the unknot serves as the identity. Finally, the inverse of a class $[k]$ is the class of $k$'s mirror, $[\overline k]$. To see this we only need to prove that for any knot $k$, $k \# \overline k$ is slice.
\begin{prop}
For any knot $k$, the connect sum $k \# \overline k$ is slice.
\end{prop}
\begin{proof}
We construct a slicing disc explicitly. Arrange $k \# \overline k$ in $\mathbb R^3$ such that the plane $\mathbb R^2 \times 0 \subset \mathbb R^3$ intersects it in precisely two points, those points being on the connect sum band. Let $k_+$ be the part of $k \# \overline k$ that lies in the half space $\mathbb R^2 \times \lcb z \geq 0 \rcb$. We will spin this part of the knot through $\mathbb R^4$ about the plane $\mathbb R^2 \times 0 \times 0$ by $\pi$. In particular, let $D$ be the set of points
\[ D = \lcb (x,y,z \cos \theta,z \sin \theta) \in \mathbb R^4 \st (x,y,z) \in k_+, 0 \leq \theta \leq \pi \rcb \]
Then $D$ is a disc embedded in $\mathbb R^4$ with boundary precisely $k \# \overline k$.
\end{proof}
\begin{cor}
The set of concordance classes of knots $\mathcal C_1$ is a group under $\#$.
\end{cor}
\begin{cor}
Signature descends to a homomorphism $\sigma_\omega : \mathcal C_1 \rightarrow \mathbb Z$.
\end{cor}
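This homomorphism already shows that $\mathcal C_1$ is infinite. If $\sigma(k) \neq 0$ (as will be the case for the trefoil in the example of \cref{The Alexander Polynomial I}), then by the additivity of signature proved above
\[ \sigma(\underbrace{k \# \cdots \# k}_{n}) = n\,\sigma(k) \neq 0 \]
for all $n \geq 1$, so no non-trivial multiple of $[k]$ is slice, and $[k]$ has infinite order in $\mathcal C_1$.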
\subsection{The Alexander Polynomial I}
\label{The Alexander Polynomial I}
The Alexander polynomial is an invariant of oriented links that takes values in the formal Laurent polynomials $\mathbb Z[t^{1/2},t^{-1/2}]$. There are many constructions of this invariant, two of which we will describe in the current section, and others will be discussed later on. Let $L$ be an oriented link with oriented Seifert surface $F$.
First we describe the construction of an infinite cyclic covering space $p : X_\infty \rightarrow X$ of the link complement $X = S^3 \backslash \nu L$; that is, the group of deck transformations of $p$ is isomorphic to $\mathbb Z$. Cut open $X$ along $F$ to obtain a new space $Y$ whose boundary contains two copies $F^+ \coprod F^-$ of $F$. Take a countable collection of copies of this space $Y_i = Y \times \lcb i \rcb$ ($i \in \mathbb Z$), and form the quotient space $X_\infty$ from $\coprod_i Y_i$ by identifying $F_i^-$ in $Y_i$ with $F_{i+1}^+$ in $Y_{i+1}$. Define the map $p : X_\infty \rightarrow X$ on the piece $Y_i$ by $p(x,i) = x$. This map is clearly well-defined and continuous, and turns $X_\infty$ into a covering space. Let $t : X_\infty \rightarrow X_\infty$ be the map defined on $Y_i$ by $t(x,i) = (x,i+1)$. Then $t$ is a deck transformation (i.e. a homeomorphism commuting with the projection map), and $t$ generates the infinite cyclic group of deck transformations of $p$.
\begin{prop}
Let $p : X_\infty \rightarrow X$ and $p' : X_\infty' \rightarrow X$ be the infinite cyclic coverings of the link complement associated to two Seifert surfaces $F$ and $F'$, respectively. Then $p$ is isomorphic to $p'$ via a homeomorphism that is equivariant with respect to the $\mathbb Z$ actions on each space.
\end{prop}
\begin{proof}
A loop $\gamma$ in $X$ lifts to a loop $\widetilde\gamma$ in $X_\infty$ if and only if $\widetilde\gamma(0)$ and $\widetilde\gamma(1)$ are in the same copy of $Y$ in $X_\infty$. This means that each time $\widetilde\gamma$ passes through the wall $F_i^-=F_{i+1}^+$ between $Y_i$ and $Y_{i+1}$ in one direction, it must also pass through the same wall in the other direction, i.e. $\gamma$ algebraically intersects $F$ zero times. This happens if and only if $\gamma$ links with $L$ zero times; in other words, the sum of the linking numbers of $\gamma$ with each component of $L$ is zero. So, the notion of lifting paths in $p$ and $p'$ is independent of $F$, hence $p_*(\pi_1(X_\infty))$ and $p_*'(\pi_1(X_\infty'))$ are equal, and therefore $p$ is isomorphic to $p'$.
Let $h : X_\infty \rightarrow X_\infty'$ be the homeomorphism such that $p' \circ h = p$, as constructed above. We want to show that $th(x)=h(tx)$ for all $x \in X_\infty$. Let $\widetilde\gamma$ be a path in $X_\infty$ from $x$ to $tx$. Then $\gamma=p\circ\widetilde\gamma$ is a path in $X$ that links exactly $+1$ with $L$. We clearly have $p' \circ h \circ \widetilde\gamma = p \circ \widetilde\gamma = \gamma$, hence $h \circ \widetilde\gamma$ is a lift of $\gamma$ with respect to $p'$. This implies that $h \circ \widetilde\gamma$ is a path in $X_\infty'$ from $h(x)$ to $th(x)$, hence we have $th(x)=h(tx)$.
\end{proof}
This proposition tells us that the isomorphism class of the covering space constructed from a Seifert surface is actually an invariant of the oriented link $L$, and so all of its computable invariants will also be invariants for $L$. For example, the abelian group $H_1(X_\infty)$ is an invariant of $L$. In fact, we can give this abelian group more structure by extending the action of $t$ on $H_1(X_\infty)$ to an action of $\mathbb Z[t,t^{-1}]$, where $t$ also denotes the automorphism of $H_1(X_\infty)$ induced by the deck transformation $t$. This $\mathbb Z[t,t^{-1}]$-module is called the \textbf{Alexander module}. We can extract useful invariants from this module, but first we recall some basic ideas concerning presentation matrices.
Let $R$ be a commutative ring with unit, and let $M$ be an $R$-module. A \textbf{presentation} of this module is an exact sequence
\[ E \stackrel{\alpha}{\longrightarrow} F \stackrel{\phi}{\longrightarrow} M \longrightarrow 0 \]
of $R$-modules, where $E$ and $F$ are free. If $\lcb x_i \rcb$ is a basis for $E$ and $\lcb y_j \rcb$ a basis for $F$, then there are elements $a_{ij} \in R$ such that $\alpha(x_i) = \sum_j a_{ji} y_j$. The matrix $A = (a_{ij})$ of elements in $R$ is called a presentation matrix for $M$. If this matrix is $m \times n$, then the \textbf{$r$-th elementary ideal} of $M$ is defined to be the ideal $\mathcal I_r \subset R$ generated by the determinants of all $(m-r+1) \times (m-r+1)$ submatrices of $A$. One can show that these ideals do not depend on the presentation of $M$, and that they form an ascending chain $\mathcal I_1 \subseteq \mathcal I_2 \subseteq \cdots$. If $m=n$, then $\mathcal I_1$ is just the principal ideal generated by $\det A$.
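A toy example over $R = \mathbb Z$ may help fix the conventions. The module $M = \mathbb Z/2 \oplus \mathbb Z/4$ has the square presentation matrix
\[ A = \begin{pmatrix} 2 & 0 \\ 0 & 4 \end{pmatrix}, \]
so $m=n=2$, $\mathcal I_1 = (\det A) = (8)$, and $\mathcal I_2$ is generated by the $1 \times 1$ minors: $\mathcal I_2 = (2,4) = (2)$. Below, the first elementary ideal of the Alexander module will play the role of $(\det A)$.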
\begin{thm}
\label{presentation matrix of H_1 of infinite cyclic covering}
Let $F$ be a Seifert surface, and $S$ an associated Seifert matrix, for an oriented link $L$. Then $tS - S^T$ is a presentation matrix for $H_1(X_\infty)$ as a $\mathbb Z[t,t^{-1}]$-module.
\end{thm}
\begin{proof}
We can write $X_\infty = U \cup V$ where $U = \cup_i Y_{2i}$ and $V = \cup_i Y_{2i+1}$. Then we have $U \cap V = \cup_i F_i$, where $F_i$ denotes the copy of $F$ in $X_\infty$ along which $Y_i$ and $Y_{i+1}$ are glued (that is, $F_i^-$ identified with $F_{i+1}^+$). Part of the Mayer-Vietoris sequence gives us
\[ H_1(X_\infty) \longrightarrow H_0(U \cap V) \stackrel{(-i_*,j_*)}{\longrightarrow} H_0(U) \oplus H_0(V) \]
where $i_*,j_*$ are the maps induced by inclusion of $U \cap V$ into $U$ and $V$, respectively. Note that $H_0(U)$ and $H_0(V)$ are not $\mathbb Z[t,t^{-1}]$-modules but their direct sum is, so the above maps are $\mathbb Z[t,t^{-1}]$-module homomorphisms. We can identify $H_0(U \cap V)$ with $H_0(F) \otimes_{\mathbb Z} \mathbb Z[t,t^{-1}]$ generated by $1 \otimes 1$, as well as $H_0(U) \oplus H_0(V)$ with $H_0(Y) \otimes_{\mathbb Z} \mathbb Z[t,t^{-1}]$. The second map sends the generator $1 \otimes 1$ to $-(1 \otimes 1) + (1 \otimes t)$. Therefore this map is injective, and so the first map in the above is the zero map. This means another part of the Mayer-Vietoris sequence can be written as
\[ H_1(U \cap V) \stackrel{(-i_*,j_*)}{\longrightarrow} H_1(U) \oplus H_1(V) \longrightarrow H_1(X_\infty) \longrightarrow 0 \]
hence we have a presentation of $H_1(X_\infty)$. To compute the presentation matrix let $\lcb x_i \rcb$ be a basis for $H_1(F)$, and let $\lcb y_i \rcb$ be the dual basis for $H_1(S^3 \backslash F)$ with respect to the pairing $\beta$ defined in \cref{pairing of H_1(F) and H_1(S^3 - F)}. Then we can identify $H_1(U \cap V)$ with $H_1(F) \otimes_{\mathbb Z} \mathbb Z[t,t^{-1}]$, which has basis $\lcb x_i \otimes 1 \rcb$, and we can identify $H_1(U) \oplus H_1(V)$ with $H_1(Y) \otimes_{\mathbb Z} \mathbb Z[t,t^{-1}]$, which has basis $\lcb y_i \otimes 1 \rcb$. Under the first map the generator $x_i \otimes 1$ is mapped to $-(x_i^- \otimes 1) + (x_i^+ \otimes t)$, which according to \cref{x_i^+ in y basis,x_i^+- in y basis} can be written as
\[ -\sum_j s_{ij} \, (y_j \otimes 1) + \sum_j s_{ji} \, (y_j \otimes t) \]
Therefore the matrix of this map is precisely $tS - S^T$.
\end{proof}
The polynomial $\det(tS-S^T)$ is a generator of the first elementary ideal of $H_1(X_\infty)$. It is called the \textbf{Alexander polynomial} of the link $L$, and is denoted by $\Delta_L(t)$. Note that right now $\Delta_L(t)$ is defined only up to multiplication by a unit $\pm t^{n}$, $n \in \mathbb Z$. We will see soon that there is a naturally chosen normalization so that $\Delta_L(t)$ is a symmetric Laurent polynomial, except now we have to allow Laurent polynomials in $\mathbb Z[t^{1/2},t^{-1/2}]$. Since for the time being we can only talk about the Alexander polynomial up to multiplication by a unit in $\mathbb Z[t,t^{-1}]$ we introduce the notation $p(t) \dotequal q(t)$ to mean the polynomials $p,q \in \mathbb Z[t,t^{-1}]$ are equal up to multiplication by a unit.
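For instance, taking the Seifert matrix $S = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, which we will see arises from a genus one Seifert surface for the trefoil, we obtain the presentation matrix and determinant
\[ tS - S^T = \begin{pmatrix} t-1 & -1 \\ t & t-1 \end{pmatrix}, \ \ \ \ \ \det(tS - S^T) = (t-1)^2 + t = t^2 - t + 1, \]
so $\Delta_{\text{trefoil}}(t) \dotequal t^2 - t + 1$, in agreement with the computations later in this section.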
\begin{prop}
\sloppyspace
\begin{enumerate}
\item For any oriented link $L$, $\Delta_L(t) \dotequal \Delta_L(t^{-1})$.
\item For any knot $k$, $\Delta_k(1) = \pm 1$.
\item For any oriented link $L$, $\Delta_L(1) = 0$.
\item For an oriented link $L$, $|\Delta_L(-1)| = \det(L)$.
\item For a slice knot $k$, $\Delta_k(t) \dotequal p(t)p(t^{-1})$ for some polynomial $p \in \mathbb Z[t]$.
\item If $L$ is a split link, then $\Delta_L(t)=0$.
\end{enumerate}
\end{prop}
\begin{proof}
\sloppyspace
\begin{enumerate}
\item We have
\[ \Delta_L(t^{-1}) = \det(t^{-1} S - S^T) = \det(t^{-1} (S - t S^T)) = \pm t^{-n} \det(t S^T - S) \dotequal \det(tS-S^T) \dotequal \Delta_L(t) \]
where $n$ is the dimension of the Seifert matrix $S$.
\item When evaluating $\Delta_k(t)$ at 1, it does not matter which representative of $\Delta_k(t)$ we choose (at least up to sign). We have
\[ \Delta_k(1) = \det(S-S^T) = \det\left( \lk(x_i,x_j^+) - \lk(x_i^-,x_j) \right) \]
This latter matrix is easily seen to be the intersection form on $F$, which is an anti-symmetric, non-degenerate, bilinear form. As such, it can be written as a direct sum of the matrices $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, hence has determinant equal to 1.
\item We can use similar reasoning as shown above, except now the intersection form on a surface with more than one boundary component is degenerate, and hence its determinant is zero.
\item We have
\[ |\Delta_L(-1)| = |\det(-S-S^T)| = |\pm \det(S+S^T)| = \det(L) \]
\item Since $k$ is slice there is a Seifert matrix for $k$ of the form
\[ \begin{pmatrix} 0 & P \\ Q & R \end{pmatrix} \]
Therefore we have
\[ \Delta_k(t) \dotequal \det(tP-Q^T)\det(tQ-P^T) \]
Setting $p(t) = \det(tP-Q^T)$ we have $\det(tQ-P^T) = \det(tQ^T-P) \dotequal \det(t^{-1}P-Q^T) = p(t^{-1})$, and the proposition follows.
\item If $L$ is split then it can be written as $L = L_1 \cup L_2$, where $L_1$ and $L_2$ can be separated by a plane. If $F_1,F_2$ are Seifert surfaces of $L_1,L_2$ respectively, then $F = F_1 \# F_2$ is a Seifert surface for $L$. However, if we choose one of the generators of $H_1(F)$ to be the meridian of the cylinder used to perform the connect sum, then the associated Seifert matrix will have a column and row of all zeros (since this meridian does not link with any other generators). Therefore $\det(tS-S^T)=0$.
\end{enumerate}
\end{proof}
We now pin down the indeterminacy in the definition of $\Delta_L(t)$. We do this by defining it absolutely by
\[ \Delta_L(t) = \det(t^{1/2}S-t^{-1/2}S^T) \]
where $S$ is a Seifert matrix for the oriented link $L$. This is called the \textbf{Conway normalization} of the Alexander polynomial. If $F$ is the Seifert surface associated to $S$, then $S$ is a $(2g+n-1) \times (2g+n-1)$ matrix, where $g$ is the genus of $F$ and $n$ is the number of components in $L$. It is clear now that $\Delta_L(t)$ consists only of integer powers of $t$ if $n$ is odd; otherwise, it consists only of \emph{odd} powers of $t^{\pm 1/2}$. Of course, we need to check that this polynomial is well-defined.
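As a small example with $n$ even, take the once-twisted annulus from \cref{Scalar Invariants}, whose boundary is the Hopf link and whose Seifert matrix is the $1 \times 1$ matrix $(1)$ (up to the sign conventions fixed there). Then
\[ \Delta_{\text{Hopf}}(t) = \det\big(t^{1/2}(1) - t^{-1/2}(1)\big) = t^{1/2} - t^{-1/2}, \]
which involves only odd powers of $t^{1/2}$, as predicted for $n=2$, and which vanishes at $t=1$, as it must for a link with more than one component.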
\begin{prop}
The Laurent polynomial $\det(t^{1/2}S-t^{-1/2}S^T)$ is an isotopy invariant of oriented links.
\end{prop}
\begin{proof}
We just need to check that this polynomial is invariant under the $S_1$ and $S_2$ moves on Seifert matrices. We clearly have
\[ \det(t^{1/2}U^TSU - t^{-1/2}U^TS^TU) = \det(U^T)\det(t^{1/2}S-t^{-1/2}S^T)\det(U) = \det(t^{1/2}S-t^{-1/2}S^T) \]
so it remains invariant under $S_1$ moves. If $S'$ is a Seifert matrix for $L$ obtained from applying an $S_2$ move to $S$, then we have
\[ \det(t^{1/2}S'-t^{-1/2}S'^T) = \det\begin{pmatrix} & & & * & 0 \\ & t^{1/2}S-t^{-1/2}S^T & & \vdots & \vdots \\ & & & * & 0 \\ * & \cdots & * & * & \pm t^{1/2} \\ 0 & \cdots & 0 & \pm t^{-1/2} & 0 \end{pmatrix} \]
If we perform a cofactor expansion along the last column and then on the last row of the result matrix we see that the above determinant is precisely
\[ \pm t^{1/2} \left( \pm t^{-1/2} \left( \det(t^{1/2}S - t^{-1/2}S^T) \right) \right) = \det(t^{1/2}S - t^{-1/2}S^T) \]
Therefore the polynomial is invariant under the $S_2$ moves.
\end{proof}
This allows us to prove many fundamental properties of the invariants we have defined so far.
\begin{cor}
\sloppyspace
\begin{enumerate}
\item For a knot $k$, $\det(k)$ is an odd integer.
\item For a knot $k$, $\sigma(k)$ is an even integer.
\item For a knot $k$, $\deg \Delta_k(t) \leq 2g(k)$.
\end{enumerate}
\end{cor}
\begin{proof}
\sloppyspace
\begin{enumerate}
\item Since $\Delta_k(t) = \Delta_k(t^{-1})$ for the Conway-normalized polynomial of a knot, we can write $\Delta_k(t) = a_0+\sum_{j=1}^n a_j(t^j+t^{-j})$. Then $1=\Delta_k(1) = a_0+2a_1+\cdots+2a_n$, hence $a_0$ is odd. This implies $\Delta_k(-1)=a_0+\sum_{j=1}^n (-1)^j 2a_j$ is odd, and therefore $\det(k)=|\Delta_k(-1)|$ is odd.
\item Suppose $S$ is a $2g \times 2g$ Seifert matrix for $k$. By part (1) we have $\det(S+S^T) \neq 0$, so $b^++b^-=2g$, where $b^\pm$ is the dimension of a maximal positive/negative-definite subspace for the symmetric form $S+S^T$ on $\mathbb R^{2g}$. It follows that $\sigma(k)=b^+-b^-$ is even.
\item Let $F$ be a Seifert surface of minimal genus $g$, and let $S$ be a $2g \times 2g$ Seifert matrix associated to $F$. This means that the highest positive power of $t$ that can appear in $\Delta_k(t)$ is $g$, hence $\deg \Delta_k(t) \leq 2g = 2g(k)$.
\end{enumerate}
\end{proof}
\begin{example}
Let us compute all the invariants we have constructed above for the trefoil knot $k$ shown in \cref{left-handed-trefoil}. Applying Seifert's algorithm to this diagram produces a surface $F$ from two discs with three twisted bands attached. Using the counting formulas from \cref{euler characteristic genus seifert surface} shows that $F$ is of genus 1. Choose generators $x_1,x_2$ of $H_1(F)$ as shown in \cref{left-trefoil-seifert-algorithm}. One can easily compute the associated Seifert matrix to be
\[ S = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \]
From this we can compute
\[ \sigma(S+S^T) = \sigma\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = 2 \]
\[ \det(S+S^T) = 3 \]
\[ \Delta_k(t) = t - 1 + t^{-1} \]
Since we produced a Seifert surface of genus 1, and clearly $k$ is not the unknot, we have $g(k) = 1$. Further, $k$ is not a slice knot since $\sigma(k) \neq 0$. This can also be seen by noticing that $\Delta_k(t)$ does not factor as $p(t)p(t^{-1})$. This shows that $g^*(k)=1$.
\begin{figure}
\centering
\includegraphics{"\graphicspath/left-trefoil"}
\caption{The left-handed trefoil}
\label{left-handed-trefoil}
\end{figure}
\begin{figure}
\centering
\includegraphics{"graphics/left-trefoil-seifert-algorithm"}
\caption{Generating curves on the Seifert surface for the trefoil}
\label{left-trefoil-seifert-algorithm}
\end{figure}
\end{example}
\begin{example}
For odd integers $p,q,r$, the $(p,q,r)$ pretzel knot is shown in \cref{pretzel-knot}, and is denoted by $P(p,q,r)$. Even though all the crossings in \cref{pretzel-knot} are drawn to be positive, we are taking the convention that if $p>0$, then the twists are positive, and if $p<0$, then the twists are negative. Note that $P(1,1,1)$ is the right-handed trefoil. Applying Seifert's algorithm to this diagram produces a genus 1 surface, and we can pick generators $x_1,x_2$ for $H_1(F)$ in the same way as above. With a suitable choice of orientations for these generators, the Seifert matrix is
\[ S = \frac{1}{2} \begin{pmatrix} p+q & q+1 \\ q-1 & q+r \end{pmatrix} \]
Therefore the Alexander polynomial is
\[ \Delta_{P(p,q,r)}(t) = \frac{1}{4} \left( t(pq+pr+qr+1) -2(pq+pr+qr-1) + t^{-1}(pq+pr+qr+1) \right) \]
If we choose $(p,q,r)$ such that $pq+pr+qr=-1$, then we have $\Delta_{P(p,q,r)}(t)=1$, which is the same as the unknot. An example of such a triple is $(p,q,r)=(-3,5,7)$. It is interesting to note that by a theorem of Freedman any knot whose Alexander polynomial is 1 is topologically slice.
\begin{figure}
\centering
\includegraphics[scale=1]{"\graphicspath/pretzel-knot"}
\caption{The $(p,q,r)$ pretzel knot}
\label{pretzel-knot}
\end{figure}
\end{example}
\subsection{The Knot Group}
\label{The Knot Group}
Since the homeomorphism type of the link complement $S^3 \backslash L$ is an invariant of the link we can use the tools from algebraic topology to obtain other, more computable, invariants for $L$. However, as we saw before, the homology of $S^3 \backslash L$ does not provide anything interesting. It turns out the fundamental group of $S^3 \backslash L$ gives a wealth of information, and we call it the \textbf{knot group} of $L$.
We will begin with describing how to obtain a presentation of $\pi_1(S^3 \backslash L)$ from a diagram of $L$ called the \textbf{Wirtinger presentation}. Orient $L$ in any way (this is just for convenience), and fix a diagram for $L$. Let $\ell_1,\ldots,\ell_m$ be the arcs in the diagram that run from one undercrossing to the very next undercrossing. For example, the diagram for the trefoil in \cref{left-handed-trefoil} has three such arcs. Each arc $\ell_i$ gives a generator $x_i$ of $\pi_1(S^3 \backslash L)$ in the following way. Let $x_i$ be the loop that starts at the point at infinity (which we think of as the viewer's eye in the diagram), travels around the arc $\ell_i$ once, and then goes back to the point at infinity. We orient $x_i$ so that it links positively with the arc $\ell_i$. Each crossing in the diagram will give a relation in $\pi_1(S^3 \backslash L)$. Let $\ell_i$ be the over strand at a crossing, and $\ell_j$ and $\ell_k$ the strands that travel into and out of the crossing, respectively. If the crossing is positive, then we add the relation $x_ix_j=x_kx_i$, and if the crossing is negative we add the relation $x_jx_i=x_ix_k$. This presentation is by no means the most efficient, as we will see soon, but it is useful as a theoretic tool and for simple computations.
\begin{example}
Consider the trefoil diagram of \cref{left-handed-trefoil}. It has three arcs, giving generators $x,y,z$, and three crossings, giving relations of the form $z = xyx^{-1}$, $x = yzy^{-1}$ and $y = zxz^{-1}$ (up to the signs of the crossings, which do not affect the end result). Using the first relation to eliminate $z$, the second relation becomes $xyx = yxy$, and the third is then a consequence of the other two. Therefore $\pi_1(S^3 \backslash k) = \< x,y \st xyx=yxy \>$, where $k$ denotes the trefoil.
\end{example}
\begin{example}
The Wirtinger presentation can give very inefficient presentations of the knot group. For example, for the standard diagram of the $(p,q)$ torus knot the Wirtinger presentation would have $pq+p+q-1$ generators and $(q-1)p$ relations. However, we will show that this knot group has a very simple presentation: $\< x,y \st x^p=y^q \>$. Give $S^3$ the standard genus one Heegaard splitting $H_1 \cup H_2$, and let $T_{p,q}$ be embedded in the torus $H_1 \cap H_2 = \partial H_1 = \partial H_2$ so that it wraps $p$ times around the meridian of $\partial H_1$ and $q$ times around the longitude of $\partial H_1$. Let $U = H_1 \backslash T_{p,q}$ and $V = H_2 \backslash T_{p,q}$. These spaces are homotopy equivalent to $H_1$ and $H_2$, hence their fundamental groups are free on one generator, say $x$ and $y$ respectively. Their intersection is an annulus, and this fundamental group is also free on one generator $z$. By the Seifert-van Kampen theorem we have that $\pi_1(S^3 \backslash T_{p,q})$ is generated by $x$ and $y$, and the relation comes from setting $z$ represented as a word in $\pi_1(U)$ equal to $z$ represented as a word in $\pi_1(V)$. We can easily see that including $z$ into $\pi_1(U)$ becomes the word $x^p$, and including $z$ into $\pi_1(V)$ becomes the word $y^q$, and so we have computed a presentation of $\pi_1(S^3 \backslash T_{p,q})$.
\end{example}
It is possible to compute the Alexander polynomial of a knot (up to multiplication by units in $\mathbb Z[t,t^{-1}]$) from a presentation of its knot group. To describe this we introduce Fox's free differential calculus. Let $X$ be a bouquet of $n$ circles, $F = \< x_1,\ldots,x_n \st \ \>$ the fundamental group of $X$, and $\tilde X$ the universal cover of $X$. We can give $\tilde X$ a simplicial decomposition by thinking of it as a $2n$-valent tree. Let $X_i$ denote the simplex that is the lift of the loop $x_i$ at a fixed base vertex. Let $G$ be a group that is finitely presented as $\< x_1,\ldots,x_n \st r_1,\ldots,r_m \>$, and let $M$ be the $\mathbb Z[G]$-module freely generated by the simplices $X_i$. Define a map $\Delta : F \rightarrow M$ (where elements of $F$ act on $M$ through their images in $G$) by the following properties:
\begin{enumerate}
\item $\Delta(x_i) = X_i$
\item $\Delta(x_i^{-1}) = -x_i^{-1} X_i$
\item $\Delta(x_ix_j) = X_i + x_i X_j$
\end{enumerate}
These properties allow us to recursively compute $\Delta(w)$ for any word $w$ in the $x_i$'s. Given a word $w$ in the $x_i$'s, we define the \textbf{Fox derivative} of $w$ with respect to the generator $x_i$ to be the coefficient of $X_i$ in $\Delta(w)$, and we denote it by $\pfrac{w}{x_i}$. This is an element of $\mathbb Z[G]$. With these definitions we have
\[ \Delta(w) = \pfrac{w}{x_1} X_1 + \cdots + \pfrac{w}{x_n} X_n \]
and the following properties
\begin{enumerate}
\item $\displaystyle\pfrac{x_i}{x_j} = \delta_{ij}$
\item $\displaystyle\pfrac{(x^k)}{x} = \begin{cases} 1+x+\cdots+x^{k-1} & k \geq 0 \\ -(x^k+x^{k+1}+\cdots+x^{-1}) & k<0 \end{cases}$
\item $\displaystyle\pfrac{(w_1w_2)}{x} = \pfrac{w_1}{x} + w_1 \pfrac{w_2}{x}$
\end{enumerate}
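As a quick illustration of these rules, take $w = xyx^{-1}$:
\begin{align*}
\pfrac{(xyx^{-1})}{x} &= \pfrac{x}{x} + x\pfrac{(yx^{-1})}{x} = 1 + x\left( 0 + y \pfrac{(x^{-1})}{x} \right) = 1 - xyx^{-1}, \\
\pfrac{(xyx^{-1})}{y} &= 0 + x\left( \pfrac{y}{y} + y \pfrac{(x^{-1})}{y} \right) = x.
\end{align*}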
The Jacobian of this presentation of $G$ is the $n \times m$ matrix $\left( \pfrac{r_j}{x_i} \right)_{ij}$ of elements of $\mathbb Z[G]$.
We now apply this strange formalism to the knot group of a knot. Suppose $\pi_1(S^3 \backslash k)$ is presented as $\< x_1,\ldots,x_n \st r_1,\ldots,r_m \>$, and let $J$ be the Jacobian of this presentation. The group $H_1(S^3 \backslash k)$ is isomorphic to $\mathbb Z$ and generated by a meridian, which we will denote by $t$, and we will write the group structure multiplicatively. The abelianization map $\phi : \pi_1(S^3 \backslash k) \rightarrow H_1(S^3 \backslash k)$ sends each generator $x_i$ to a power of $t$, and can be extended to a map $\phi : \mathbb Z[\pi_1(S^3 \backslash k)] \rightarrow \mathbb Z[t,t^{-1}]$. Let $J^\phi$ denote the $n \times m$ matrix with entries in $\mathbb Z[t,t^{-1}]$ obtained by applying $\phi$ to each entry. Amazingly, we can compute the Alexander polynomial from $J^\phi$.
\begin{prop}
The Alexander polynomial $\Delta_k(t)$ of $k$ is, up to multiplication by units in $\mathbb Z[t,t^{-1}]$, the greatest common divisor of the $(n-1) \times (n-1)$ minors of $J^\phi$.
\end{prop}
\begin{example}
Let $k$ be the left-handed trefoil. We already saw that $\pi_1(S^3 \backslash k) = \< x,y \st xyx = yxy \>$. Let $r = xyxy^{-1}x^{-1}y^{-1}$, then we can easily compute the derivatives of this relation with respect to $x$ and $y$ to get the following Jacobian
\[ J = \begin{pmatrix} 1+xy-xyxy^{-1}x^{-1} & x-xyxy^{-1}-xyxy^{-1}x^{-1}y^{-1} \end{pmatrix} \]
Since our presentation of $\pi_1(S^3 \backslash k)$ came from the Wirtinger presentation we have that $x$ and $y$ are meridians of $k$, hence $\phi(x)=\phi(y)=t$. Then $J^\phi$ can be seen to be
\[ J^\phi = \begin{pmatrix} 1-t+t^2 & t-t^2-1 \end{pmatrix} \]
This matrix has two $1 \times 1$ minors, which agree up to sign, and so we see (again) that the Alexander polynomial of the trefoil is $1-t+t^2$ (up to units).
\end{example}
\begin{example}
Let $p,q \geq 1$ be relatively prime integers. We saw that the knot group of the torus knot can be presented as $\pi_1(S^3 \backslash T_{p,q}) = \< x,y \st x^p=y^q \>$. Let $r=x^py^{-q}$, then the Jacobian of this presentation is
\[ J = \begin{pmatrix} 1+x+x^2 + \cdots + x^{p-1} & -x^p(y^{-q}+y^{-q+1}+\cdots+y^{-1}) \end{pmatrix} \]
With this presentation we have $\phi(x)=t^q$ and $\phi(y)=t^p$, hence $J^\phi$ is
\[ J^\phi = \begin{pmatrix} 1+t^q+t^{2q}+\cdots+t^{q(p-1)} & -t^{pq}(t^{-pq}+t^{-p(q-1)}+\cdots+t^{-p}) \end{pmatrix} \]
We need to find the greatest common divisor of these two polynomials. We can drop the factor $-t^{pq}$ in the second polynomial since it is a unit in $\mathbb Z[t,t^{-1}]$. Then we can rewrite these polynomials as
\[ \frac{1-t^{pq}}{1-t^q}, \ \ \ \ \ \frac{1-t^{pq}}{1-t^p} \]
To compute the $\gcd$ of these polynomials we notice that the following basic identity holds
\[ \gcd\left( \frac{a}{b},\frac{a}{c} \right) = \frac{a \cdot \gcd(b,c)}{bc} \]
Since $p$ and $q$ are relatively prime we have $\gcd(1-t^p,1-t^q)=1-t$, hence
\[ \Delta_k(t) \dotequal \gcd\left(\frac{1-t^{pq}}{1-t^q},\frac{1-t^{pq}}{1-t^p}\right) = \frac{(1-t)(1-t^{pq})}{(1-t^p)(1-t^q)} \]
If we normalize this polynomial so that it is a symmetric Laurent polynomial in $\mathbb Z[t,t^{-1}]$, then we see that its degree is $(p-1)(q-1)$, hence the genus of $T_{p,q}$ is \emph{at least} $\frac{(p-1)(q-1)}{2}$. On the other hand, applying Seifert's algorithm to the standard diagram of $T_{p,q}$ produces a Seifert surface of genus $\frac{(p-1)(q-1)}{2}$, hence we have
\[ g(T_{p,q}) = \frac{(p-1)(q-1)}{2} \]
\end{example}
\newpage
\section{Polynomial Invariants}
\label{Polynomial Invariants}
\subsection{The Kauffman Bracket}
\label{The Kauffman Bracket}
\subsection{The Alexander Polynomial II}
\label{The Alexander Polynomial II}
We will show that the Conway normalization of the Alexander polynomial satisfies what is known as a skein relation. This relation fits into a more modern framework for knot polynomials that generalizes to an infinite family of polynomial invariants. For an oriented link $L$, let $L_+,L_-,L_0$ be the three links which are identical to $L$ outside a neighborhood of a crossing, but at the crossing differ by changing the crossing to be positive, negative, or resolving the crossing in an orientation preserving fashion. A \textbf{skein relation} is a formula that relates the Alexander polynomials of the links $L_+,L_-$ and $L_0$.
\begin{thm}
For an oriented link $L$ the Alexander polynomial $\Delta_L(t)$ satisfies the skein relation
\[ \Delta_{L_+}(t) - \Delta_{L_-}(t) = (t^{1/2}-t^{-1/2}) \Delta_{L_0}(t) \]
\end{thm}
\begin{proof}
Let $F_0$ be a Seifert surface for $L_0$. We can form Seifert surfaces $F_+,F_-$ for $L_+,L_-$ by attaching a twisted band to $F_0$. This band adds one generator to $H_1$, so let $\lcb x_i \rcb_{i=1}^n$ be a basis for $H_1(F_0)$ and $x_{n+1}$ a curve that runs once through this band. Then the associated Seifert matrices $S_+,S_-$ take the form
\[ S_+ = \begin{pmatrix} & & & a_1 \\ & S_0 & & \vdots \\ & & & a_n \\ b_1 & \cdots & b_n & N \end{pmatrix} \ \ \ \ \ \ \ S_- = \begin{pmatrix} & & & a_1 \\ & S_0 & & \vdots \\ & & & a_n \\ b_1 & \cdots & b_n & N-1 \end{pmatrix} \]
for some integer $N$. Then $\Delta_{L_+}(t)$ and $\Delta_{L_-}(t)$ are of the form
\begin{align*}
\Delta_{L_+}(t) &= \det\begin{pmatrix} & & & t^{1/2}a_1-t^{-1/2}b_1 \\ & t^{1/2}S_0-t^{-1/2}S_0^T & & \vdots \\ & & & t^{1/2}a_n-t^{-1/2}b_n \\ t^{1/2}b_1-t^{-1/2}a_1 & \cdots & t^{1/2}b_n-t^{-1/2}a_n & (t^{1/2}-t^{-1/2})N \end{pmatrix} \\
\Delta_{L_-}(t) &= \det\begin{pmatrix} & & & t^{1/2}a_1-t^{-1/2}b_1 \\ & t^{1/2}S_0-t^{-1/2}S_0^T & & \vdots \\ & & & t^{1/2}a_n-t^{-1/2}b_n \\ t^{1/2}b_1-t^{-1/2}a_1 & \cdots & t^{1/2}b_n-t^{-1/2}a_n & (t^{1/2}-t^{-1/2})(N-1) \end{pmatrix}
\end{align*}
If we compute these determinants by performing a cofactor expansion along the last column, then we see that all the terms in $\Delta_{L_+}(t)-\Delta_{L_-}(t)$ cancel except for the last, which is just
\[ (t^{1/2}-t^{-1/2}) \det(t^{1/2}S_0-t^{-1/2}S_0^T) = (t^{1/2}-t^{-1/2})\Delta_{L_0}(t) \]
This verifies the skein relation.
\end{proof}
The skein relation allows us to calculate $\Delta_L(t)$ in a recursive fashion, although in practice it can be quite complicated and unwieldy.
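To illustrate the recursion, we can recover the trefoil computation from the example in \cref{The Alexander Polynomial I}. Take the trefoil diagram with all crossings positive as $L_+$ at one of its crossings; with our conventions $L_-$ is then the unknot and $L_0$ is the Hopf link. Applying the skein relation again at a crossing of the Hopf link, where $L_-$ is the two-component unlink and $L_0$ is the unknot, and using $\Delta_{\text{unknot}}(t) = 1$ (empty Seifert matrix) and $\Delta_{\text{unlink}}(t) = 0$ (a split link), we get
\begin{align*}
\Delta_{\text{Hopf}}(t) &= 0 + (t^{1/2}-t^{-1/2}) \cdot 1 = t^{1/2}-t^{-1/2}, \\
\Delta_{\text{trefoil}}(t) &= 1 + (t^{1/2}-t^{-1/2})\,\Delta_{\text{Hopf}}(t) = t - 1 + t^{-1},
\end{align*}
in agreement with the Seifert matrix computation.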
\subsection{The Jones Polynomial}
\label{The Jones Polynomial}
\subsection{The HOMFLYPT Polynomial}
\label{The HOMFLYPT Polynomial}
\newpage
\section{Quantum Invariants}
\label{Quantum Invariants}
There is an interesting way to view tangles and their invariants from the perspective of quantum field theory, and this leads to a large collection of invariants known as \emph{quantum invariants}. Before getting to this let us see how quantum mechanics might lead us to invariants of \emph{flat tangles} (i.e. 1-dimensional manifolds). Dirac introduced the \textbf{bra} $\<a|\right.$ and \textbf{ket} $\left.|b\>$ notation for quantum mechanics. Mathematically $\<a|\right.$ is just a vector in some complex Hilbert space $V$ (the space of states), which we will take to be finite dimensional, and $\left.|b\>$ is just a covector in the dual space $V^*$. By $\<a|b\>$ we mean to evaluate the covector on the vector, and so this is a complex number. Physically $a$ and $b$ are states of some process, and the number $\<a|b\>$ is known as the \textbf{amplitude}. The amplitude behaves similarly to a probability measure so that if a process $a \rightarrow b$ can be factored as $a \rightarrow c \rightarrow b$, then $\<a|b\>=\<a|c\>\<c|b\>$, and if a process $a \rightarrow b$ consists of two disjoint processes $a_1 \rightarrow b_1$ and $a_2 \rightarrow b_2$, then $\<a|b\>=\<a_1|b_1\>\<a_2|b_2\>$. For a process $a \rightarrow b$ we have $\left.|b\> \in V^*$ by definition, but we can also think of $\<a|\right.$ as the map $\mathbb C \rightarrow V$ that sends $1$ to $\<a|\right.$. Then we can think of the process $a \rightarrow b$ as the map $\left.|b\>\<a|\right. : \mathbb C \rightarrow \mathbb C$, and the amplitude is just the evaluation of this map at $1$.
We now consider something similar to the above, but now in one spatial dimension and one time dimension (a $(1+1)$ quantum field theory). In this theory we start at time $t=0$ with a finite collection of particles (possibly empty) on a line, and as time travels to $t=1$ these particles will trace out a flat tangle in $\mathbb R \times [0,1]$. During the evolution of this system some particles may come together and annihilate (corresponding to maxima in the tangle), or there may be a creation of new particles (corresponding to minima). Drawing our inspiration from quantum field theory we describe this process as a linear map $V^{\otimes i} \rightarrow V^{\otimes j}$, where there are $i$ particles at time $t=0$ and $j$ particles at time $t=1$. We decide now that we want this theory to depend only on the topological data of the system, so that if we can deform one tangle into another, then the associated linear maps are the same. In order to do this we need to know how we can deform one tangle into another.
By slightly perturbing the tangle, if necessary, we can assume that its projection onto $[0,1]$ is a Morse function such that all critical points occur at distinct heights. Basic Morse theory tells us that all flat tangles are generated by cups, caps and vertical line segments, and their deformations are generated by the isotopy that creates a minimum/maximum pair from a vertical line segment, or the reverse. With this we can think of the tangle as being a stack of many simpler tangles, each of which consists of a bunch of vertical line segments and at most one cup or cap. The vertical line segments correspond to the identity map $V \rightarrow V$, and let $\alpha = \<a|\right. : \mathbb C \rightarrow V \otimes V$ be the map associated to the simple creation of two particles, and $\beta = \left.|b\> : V \otimes V \rightarrow \mathbb C$ the map from a simple annihilation of two particles (see ??). Now the linear map $V^{\otimes i} \rightarrow V^{\otimes j}$ can be thought of as a composition of the maps induced by the elementary tangles that form our tangle. We cannot just pick any maps $\alpha,\beta$ and expect this to give us something well-defined. However, since we can get between any two diagrams of our tangle via a finite sequence of maximum/minimum pair creations and deletions we just need to check that the map associated to a maximum/minimum pair is the identity (see ??).
The two ways of creating a minimum/maximum pair from a vertical line segment correspond to the maps $(\id \otimes \beta) \circ (\alpha \otimes \id)$ and $(\beta \otimes \id) \circ (\id \otimes \alpha)$. Choose a basis $\lcb e^1,\ldots,e^n \rcb$ for $V$, and let $\lcb c_{ij} \rcb$ and $\lcb c^{ij} \rcb$ be complex numbers such that $\alpha(1) = c_{ij} e^i \otimes e^j$ and $\beta(e^i\otimes e^j)=c^{ij}$. Then $(\id \otimes \beta) \circ (\alpha \otimes \id) = \id$ implies that
\begin{align*}
e^k &= (\id \otimes \beta) \circ (\alpha \otimes \id)(1 \otimes e^k) \\
&= (\id \otimes \beta) \left( c_{ij} e^i \otimes e^j \otimes e^k \right) \\
&= c_{ij} c^{jk} e^i
\end{align*}
therefore $c_{ij} c^{jk} = \delta_i^k$, hence the matrices $(c_{ij})$ and $(c^{ij})$ are inverses of each other. The other map $(\beta \otimes \id) \circ (\id \otimes \alpha)$ gives the same restriction on $\alpha$ and $\beta$, so all we need in order to get a well defined topological quantum field theory is for $\beta$ to be a non-degenerate bilinear form on $V$, and $\alpha$ to be the inverse.
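One concrete choice (by no means the only one) is to take $\beta$ to be the standard pairing $c^{ij} = \delta^{ij}$, so that $c_{ij} = \delta_{ij}$ as well. With this choice a single closed circle, which is a cap stacked on top of a cup, is sent to the scalar
\[ \beta(\alpha(1)) = \sum_{i,j} c_{ij} c^{ij} = \dim V, \]
so a flat tangle consisting of $k$ disjoint circles is sent to multiplication by $(\dim V)^k$ on $\mathbb C$.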
A mathematically succinct way of describing our work above is that we found a ``monoidal representation'' of the category of flat tangles. Let us describe this loaded statement.
%Consider the category whose objects are non-negative integers, and a morphism between $m$ and $n$ is an isotopy class of 1-dimensional manifolds embedded in $\mathbb R \times [0,1]$ whose boundary is precisely $\lcb 1,\ldots,m \rcb \times \lcb 0 \rcb \cup \lcb 1,\ldots,n \rcb \times \lcb 1 \rcb$. The identity morphism on $n$ is simply $\lcb 1,\ldots,n \rcb \times [0,1]$, and composition of morphisms is done by stacking the flat tangles vertically. This is called the \textbf{category of (unoriented, unframed) flat tangles}, and it forms a strict monoidal category, where $m \otimes n = m+n$ and the product of morphisms is the horizontal ``stacking'' of tangles. This category is freely generated by the object 1 and the morphism that look like a cup and cap modulo the planar move relation. Then the invariant constructed above can be thought as the monoidal functor $F$ from the category of flat tangles to $\catvect{\mathbb C}$, the category of complex vector spaces, that sends $1$ to $V$, the cup morphism to $\alpha$ and the cap morphism to $\alpha$. If $T$ is a flat tangle, then the invariant constructed above is simply $F(T)$.
\subsection{Operator Invariants}
\label{Operator Invariants}
%We can use ideas from quantum field theory, as we did above, to define invariants of arbitrary tangles (not just flat tangles), and indeed arbitrary manifolds. Mathematically this
%Again using inspiration from quantum field theory we will try to repeat the above program for tangles. We start with a complex vector space $V$. Let $T$ be a tangle embedded in $\mathbb R^2 \times [0,1]$ whose endpoints lie in $\mathbb R \times \lcb 0 \rcb \times \lcb 0,1 \rcb$, and let $D$ be a diagram of this tangle in $\mathbb R \times [0,1]$. If $T$ has $i$ lower end points and $j$ upper end points, then we will define a linear map $\< D \> \in \Hom(V^{\otimes i},V^{\otimes j})$, called the \textbf{bracket} of $D$, and show that it is an invariant of the tangle.
%Recall that
%Recall that we can choose the diagram $D$ such that there are finitely many points $a_0=0 < a_1 < \ldots < a_{n-1} < a_n = 1$ in $[0,1]$ with the property that the part of the diagram in $[a_{i-1},a_i]$ is an elementary tangle.
\newpage
\appendix
\section{Appendices}
\subsection{Flavors of Algebras}
\label{Flavors of Algebras}
\subsubsection{Hopf Algebras}
\label{Regular Hopf Algebras}
\subsubsection{Quasi-Triangular Hopf Algebras}
\label{Quasi-Triangular Hopf Algebras}
\subsubsection{Ribbon Hopf Algebras}
\label{Ribbon Hopf Algebras}
\subsubsection{A Diagrammatic Calculus}
\label{A Diagrammatic Calculus}
\newpage
\subsection{Some Category Theory}
\label{Some Category Theory}
Here we introduce the necessary category theory background for doing knot theory, but we assume the basics of category theory (categories, functors, natural families of morphisms, adjoints) are already known to the reader. This section starts by defining the important concept of a monoidal category, and then systematically adds additional structure piece by piece until we arrive at the concept of a ribbon category. One can consider these definitions to be motivated by one of the simplest objects in low dimensional topology (1-dimensional manifolds), and it turns out that many invariants of knots, links and tangles arise as functors into these special categories. At the end of the section we will develop a graphical calculus for dealing with ribbon categories that is inspired by the theory of tangles.
\subsubsection{Monoidal Categories}
\label{Monoidal Categories}
A \textbf{(strict) monoidal category} is a category $\mathscr C$ equipped with a bifunctor $\otimes : \mathscr C \times \mathscr C \rightarrow \mathscr C$ and distinguished object $1$ such that the following axioms hold:
\begin{enumerate}
\item $\otimes$ is associative in the sense that $(- \otimes -) \otimes - = - \otimes (- \otimes -)$ as functors $\mathscr C \times \mathscr C \times \mathscr C \rightarrow \mathscr C$.
\item $1$ is a unit for $\otimes$ in the sense that $- \otimes 1 = 1 \otimes - = \id_{\mathscr C}$ as functors $\mathscr C \rightarrow \mathscr C$.
\end{enumerate}
This definition may seem a little strict, hence the name, since in category theory we usually consider objects only up to isomorphism type and formulas involving functors up to natural isomorphism. So, suppose we require associativity to hold only up to natural isomorphism; that is, there is a natural family of isomorphisms $a$ between the functors $(- \otimes -) \otimes -,- \otimes (- \otimes -) : \mathscr C \times \mathscr C \times \mathscr C \rightarrow \mathscr C$. However, when taking the product of more than three objects we do not want the resulting object to depend on the way we distribute the parentheses, and so $a$ must satisfy extra conditions. Saunders Mac Lane found exactly what conditions $a$ must satisfy, using ideas on higher associativity laws in homotopy theory developed by James Stasheff, in what has become known as Mac Lane's coherence theorem. We will not discuss this theorem, but we will state now the axioms of a relaxed monoidal category.
A \textbf{(relaxed) monoidal category} is a category $\mathscr C$ with a bifunctor $\otimes : \mathscr C \times \mathscr C \rightarrow \mathscr C$, a distinguished object $1$, and natural families of isomorphisms $a : - \otimes (- \otimes -) \rightarrow (- \otimes -) \otimes -, l : 1 \otimes - \rightarrow \id_{\mathscr C}, r : - \otimes 1 \rightarrow \id_{\mathscr C}$ that satisfy the following axioms
\begin{enumerate}
\item The pentagonal diagram
\[
\xymatrix
@C=2pc
@R=2pc
{
& (A \otimes B) \otimes (C \otimes D) \ar[rd]^{a_{A\otimes B,C,D}} & \\
A \otimes (B \otimes (C \otimes D)) \ar[d]_{\id_A \otimes a_{B,C,D}} \ar[ur]^{a_{A,B,C \otimes D}} & & ((A \otimes B) \otimes C) \otimes D \\
A \otimes ((B \otimes C) \otimes D) \ar[rr]_{a_{A,B\otimes C,D}} & & (A \otimes (B \otimes C)) \otimes D \ar[u]_{a_{A,B,C} \otimes \id_D}
}
\]
commutes for all objects $A,B,C,D$ in $\mathscr C$.
\item The triangular diagram
\[
\xymatrix
@C=2pc
@R=2pc
{
& A \otimes B & \\
A \otimes (1 \otimes B) \ar[rr]_{a_{A,1,B}} \ar[ur]^{\id_A \otimes l_B} & & (A \otimes 1) \otimes B \ar[ul]_{r_A \otimes \id_B}
}
\]
commutes for all objects $A,B$ in $\mathscr C$.
\item The morphisms $l_1, r_1 : 1 \otimes 1 \rightarrow 1$ are equal.
\end{enumerate}
Note that a strict monoidal category is nothing but a relaxed one for which the natural families of isomorphisms $a,l,r$ are the identity.
\begin{example}
The category of vector spaces over a field $k$ is the prototypical example of a monoidal category. Products of objects and linear maps are given by tensor products, and the 1-dimensional vector space $k$ serves as the unit object. This is not a strict monoidal category, but there is an obvious natural family of isomorphisms $A \otimes (B \otimes C) \rightarrow (A \otimes B) \otimes C$ making all the above diagrams commute.
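Concretely, on elementary tensors this associator is given by
\[
a_{A,B,C} : A \otimes (B \otimes C) \longrightarrow (A \otimes B) \otimes C, \qquad u \otimes (v \otimes w) \longmapsto (u \otimes v) \otimes w,
\]
and the unit isomorphisms are $l_A(\lambda \otimes v) = \lambda v$ and $r_A(v \otimes \lambda) = \lambda v$; one checks that the pentagonal and triangular diagrams commute because both paths act in the same way on elementary tensors.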
\end{example}
% CATEGORY OF FLAT TANGLES
%The prototypical example of a monoidal category is the \textbf{category of (unoriented) flat tangles}. The objects of this category are finite collections of points in $[0,1]$, and a morphism between objects $A$ and $B$ is an ambient isotopy class (relative boundary) of smooth embeddings of a 1-manifold in $[0,1] \times [0,1]$ whose boundary is precisely $A \times \lcb 0 \rcb \cup B \times \lcb 1 \rcb$. Composition of morphisms $f : A \rightarrow B, g : B \rightarrow C$ is done by stacking the 1-manifold representing $g$ on top of $f$ in $[0,1] \times [0,2]$, and then shrinking this square to fit in $[0,1] \times [0,1]$. The identity morphism on an object $A$ is $A \times I$, and these constructions clearly give us a category. Note that the isomorphism class of an object is completely determined by the size $|A|$ of the set. The product $A \otimes B$ of two objects is the set $1/2 A \cup 1/2(1+B)$, where the arithmetic on a set is done element-wise, i.e. set the points of $B$ on the right of $A$ in $[0,2]$, and then shrink the interval to fit in $[0,1]$. If $f : A \rightarrow B, g : C \rightarrow D$ are morphism, then $f \otimes g$ is the morphism obtained by placing $g$ on the right of $f$ in $[0,2] \times [0,1]$, and then shrinking the square to fit in $[0,1] \times [0,1]$. The identity object for this product is the empty set $\emptyset$, and these constructions make our category into a strict monoidal category. We can also find an explicit set of generators and relations for this category. All objects (except for the identity) are generated by the object $1$ with one element by taking products. Let $| : 1 \rightarrow 1$ denote the identity morphism on $1$, $\cup : \emptyset \rightarrow 1 \otimes 1$ be the morphism that is a simple cup and $\cap : 1 \otimes 1 \rightarrow \emptyset$ the morphism that is a simple cap. Then every morphism in this category can be constructed by taking compositions and products of $\cup$ and $\cap$. There are relations among these generating morphisms given by Morse theory. In particular, we know that
%\[ (| \otimes \cap) \circ (\cup \otimes |) = | = (\cap \otimes |) \circ (| \otimes \cup) \]
%This gives a complete set of generators and relations for the category of flat tangles.
We want to define a monoidal functor as a functor between monoidal categories that preserves the monoidal structure. However, since the domain and target of the functor can be any combination of strict and relaxed, and the functor itself can be strict or relaxed, there are many definitions for monoidal functor that could be made. We will give the definition in the most general form (relaxed functor and relaxed categories) and one can get the stricter versions by requiring the appropriate family of natural morphisms to be isomorphisms. A \textbf{monoidal functor} between monoidal categories $(\mathscr C,\otimes,1,a,l,r)$ and $(\mathscr C',\otimes',1',a',l',r')$ consists of a functor $F : \mathscr C \rightarrow \mathscr C'$, a natural family of \emph{morphisms} $\varphi : (F-) \otimes' (F-) \rightarrow F(-\otimes-)$ and a \emph{morphism} $f : 1' \rightarrow F(1)$ such that the following axioms hold:
\begin{enumerate}
\item The hexagonal diagram
\[
\xymatrix
@R=2pc
@C=1pc
{
& F(A) \otimes' (F(B) \otimes' F(C)) \ar[dl]_{\id_{F(A)} \otimes \varphi_{B,C}} \ar[dr]^{a'} & \\
F(A) \otimes' (F(B \otimes C)) \ar[d]_{\varphi_{A,B\otimes C}} & & (F(A) \otimes' F(B)) \otimes' F(C) \ar[d]^{\varphi_{A,B} \otimes \id_{F(C)}} \\
F(A \otimes (B \otimes C)) \ar[dr]_{F \circ a} & & (F(A \otimes B)) \otimes' F(C) \ar[dl]^{\varphi_{A\otimes B,C}} \\
& F((A \otimes B) \otimes C) &
}
\]
for all objects $A,B,C$ in $\mathscr C$.
\item The rectangular diagrams
\[
\xymatrix
@R=3pc
@C=3pc
{
F(A) \otimes' 1' \ar[r]^{r_{F(A)}'} \ar[d]_{\id_{F(A)} \otimes f} & F(A) \\
F(A) \otimes' F(1) \ar[r]_{\varphi_{A,1}} & F(A \otimes 1) \ar[u]_{F \circ r_A}
}
\ \ \ \ \ \ \
\xymatrix
@R=3pc
@C=3pc
{
1' \otimes' F(A) \ar[r]^{l_{F(A)}'} \ar[d]_{f \otimes \id_{F(A)}} & F(A) \\
F(1) \otimes' F(A) \ar[r]_{\varphi_{1,A}} & F(1 \otimes A) \ar[u]_{F \circ l_A}
}
\]
\end{enumerate}
A monoidal functor is said to be \textbf{strong} if the family of morphisms $\varphi$ and morphism $f$ are \emph{isomorphisms}, and called \textbf{strict} if these morphisms are the \textbf{identity}. We note that the distinction between strict and relaxed monoidal categories is not necessary because one can show that all relaxed monoidal categories are monoidally equivalent to a strict monoidal category.
Since the category of vector spaces over $k$ is our prototypical example of a monoidal category it is natural to wonder if the notion of a dual vector space can be generalized to monoidal categories. Recall that given a finite dimensional vector space $V$, there is a natural map $\cap_V : V^* \otimes V \rightarrow k$ given by $\lambda \otimes v \mapsto \lambda(v)$. If we fix a basis $\lcb e_i \rcb$ for $V$, then there is also a map $\cup_V : k \rightarrow V \otimes V^*$ given by $1 \mapsto \sum e_i \otimes e^i$, where $\lcb e^i \rcb$ is the dual basis for $V^*$. One can show that $\cup_V$ does not depend on the basis used to define it, and these maps satisfy $(\id_V \otimes \cap_V)(\cup_V \otimes \id_V) = \id_V$ and $(\cap_V \otimes \id_{V^*})(\id_{V^*} \otimes \cup_V) = \id_{V^*}$. These are the properties we will use to generalize the idea of duality.
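As a quick check of the first identity, suppressing the canonical isomorphisms $k \otimes V \cong V \cong V \otimes k$, we compute
\[
(\id_V \otimes \cap_V)\big((\cup_V \otimes \id_V)(v)\big) = (\id_V \otimes \cap_V)\Big(\sum_i e_i \otimes e^i \otimes v\Big) = \sum_i e^i(v)\, e_i = v,
\]
and the second identity is verified in the same way.
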
The category of vector spaces has the property that there is a natural family of isomorphisms $V \otimes W \rightarrow W \otimes V$ that is compatible with the associativity of tensor products. The generalization of this property to monoidal categories is known as a braiding. In particular, a \textbf{braiding} in a monoidal category $(\mathscr C,\otimes,1,a,l,r)$ is a natural family of isomorphisms $c_{A,B} : A \otimes B \rightarrow B \otimes A$ which satisfies the following axioms:
\begin{enumerate}
\item The triangular diagram
\[
\xymatrix
@R=2pc
@C=2pc
{
& A & \\
1 \otimes A \ar[ru]^{l_A} \ar[rr]_{c_{1,A}} & & A \otimes 1 \ar[ul]_{r_A}
}
\]
commutes for all objects $A$ in $\mathscr C$.
\item Both of the hexagonal diagrams
\[
\xymatrix
@R=2pc
@C=0pc
{
& (A \otimes B) \otimes C \ar[ld]_{a_{A,B,C}^{-1}} \ar[rd]^{c_{A \otimes B,C}} & \\
A \otimes (B \otimes C) \ar[d]_{\id_A \otimes c_{B,C}} & & C \otimes (A \otimes B) \ar[d]^{a_{C,A,B}} \\
A \otimes (C \otimes B) \ar[dr]_{a_{A,C,B}} & & (C \otimes A) \otimes B \ar[dl]^{c_{C,A} \otimes \id_B} \\
& (A \otimes C) \otimes B &
}
\ \ \ \ \ \ \
\xymatrix
@R=2pc
@C=0pc
{
& A \otimes (B \otimes C) \ar[ld]_{a_{A,B,C}} \ar[rd]^{c_{A,B \otimes C}} & \\
(A \otimes B) \otimes C \ar[d]_{c_{A,B} \otimes \id_C} & & (B \otimes C) \otimes A \ar[d]^{a_{B,C,A}^{-1}} \\
(B \otimes A) \otimes C \ar[dr]_{a_{B,A,C}^{-1}} & & B \otimes (C \otimes A) \ar[dl]^{\id_B \otimes c_{C,A}} \\
& B \otimes (A \otimes C) &
}
\]
commute for all objects $A,B,C$ in $\mathscr C$.
\end{enumerate}
Note that the inverse $c^{-1}$ of a braiding is also a braiding. A braided monoidal category $(\mathscr C,\otimes,1,a,l,r,c)$ is said to be \textbf{symmetric} if $c_{B,A} \circ c_{A,B} = \id_{A \otimes B}$ for all objects $A,B$.
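For example, the category of vector spaces over $k$ is braided, and in fact symmetric, via the flip maps
\[
c_{A,B} : A \otimes B \longrightarrow B \otimes A, \qquad a \otimes b \longmapsto b \otimes a,
\]
for which $c_{B,A} \circ c_{A,B} = \id_{A \otimes B}$ holds on elementary tensors and hence everywhere.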
\newpage
\bibliography{knot-theory-bib}
\bibliographystyle{plain}
\addcontentsline{toc}{section}{\refname}
\end{document}
| {
"alphanum_fraction": 0.7040239059,
"avg_line_length": 93.7808383234,
"ext": "tex",
"hexsha": "d9c7e09b1e67f435d456167b77e2f006585c232f",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-05-13T16:46:16.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-07-11T13:27:57.000Z",
"max_forks_repo_head_hexsha": "d208af0e6edd6293bbc939aa033bb9b5c0bedbce",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mbrandonw/my-math-notes",
"max_forks_repo_path": "knot-theory/knot-theory.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d208af0e6edd6293bbc939aa033bb9b5c0bedbce",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mbrandonw/my-math-notes",
"max_issues_repo_path": "knot-theory/knot-theory.tex",
"max_line_length": 2070,
"max_stars_count": 42,
"max_stars_repo_head_hexsha": "d208af0e6edd6293bbc939aa033bb9b5c0bedbce",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mbrandonw/my-math-notes",
"max_stars_repo_path": "knot-theory/knot-theory.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-12T03:01:32.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-20T15:25:24.000Z",
"num_tokens": 24846,
"size": 78307
} |
\subsection{The function \stackequal}
\Label{sec:stackequal}
The function \stackequal in the following listing
is the runtime counterpart for the \logicref{StackEqual} predicate.
Note that this specifications explicitly refers to valid stacks.
\input{Listings/stack_equal.h.tex}
The implementation of \stackequal in the next listing
compares two stacks according to the same rules of predicate \StackEqual.
\input{Listings/stack_equal.c.tex}
| {
"alphanum_fraction": 0.8106904232,
"avg_line_length": 28.0625,
"ext": "tex",
"hexsha": "23b9e42950cfe61a49193cc6a738a2abd1b4c759",
"lang": "TeX",
"max_forks_count": 19,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z",
"max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fraunhoferfokus/acsl-by-example",
"max_forks_repo_path": "Informal/stack/stack_equal.tex",
"max_issues_count": 22,
"max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fraunhoferfokus/acsl-by-example",
"max_issues_repo_path": "Informal/stack/stack_equal.tex",
"max_line_length": 73,
"max_stars_count": 90,
"max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fraunhoferfokus/acsl-by-example",
"max_stars_repo_path": "Informal/stack/stack_equal.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z",
"num_tokens": 102,
"size": 449
} |
\paragraph{Numerical Experiment of Restriction}
We test two different restriction operators: one is a 7-point stencil (based on linear interpolation) and the other is a 9-point stencil (based on bilinear interpolation).
The kernel of the 7-point restriction is
\begin{equation}
\begin{pmatrix}
0 &0.5 &0.5\\
0.5 &1 &0.5\\
0.5 &0.5 &0
\end{pmatrix}
\end{equation}
The kernel of the 9-point restriction is
\begin{equation}
\begin{pmatrix}
0.25 &0.5 &0.25\\
0.5 &1 &0.5\\
0.25 &0.5 &0.25
\end{pmatrix}
\end{equation}
Suppose the prolongation is $P$. Then one iteration of the two-level multigrid method can be written as
\begin{equation}
x^{t+1}-x^*=[I-(D-U)^{-1}A][I-(D-L)^{-1}A][I-P(P^T A P)^{-1}P^T A](x^t-x^*).
\end{equation}
Write $I-BA=[I-(D-U)^{-1}A][I-(D-L)^{-1}A][I-P(P^T A P)^{-1}P^T A]$.
Some numerical experiments are briefly listed below.
Our model problem is a Poisson equation on an $n\times n$ conforming triangular grid, with either Dirichlet or Neumann boundary conditions.
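Before the tables, one remark that helps interpret the iteration above: since $A$ is symmetric for this problem, the coarse-grid correction factor $I-P(P^T A P)^{-1}P^T A$ is the $A$-orthogonal projection onto the $A$-orthogonal complement of the range of $P$. Indeed, writing $\Pi = P(P^T A P)^{-1}P^T A$, we have
\begin{equation}
\Pi^2 = P(P^T A P)^{-1}(P^T A P)(P^T A P)^{-1}P^T A = \Pi,
\qquad (A\Pi)^T = A P (P^T A P)^{-1} P^T A = A\Pi,
\end{equation}
so $\Pi$ is a projection that is self-adjoint in the $A$-inner product.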
\begin{table}[!htp]
\caption{Dirichlet Boundary}
\begin{tabular}{l |c}
n& 32\\ \hline
7-point & 0.2308 \\
9-point & 0.1984 \\
\end{tabular}
\end{table}
\begin{table}[!htp]
\caption{Neumann Boundary}
\begin{tabular}{l |c}
n& 32\\ \hline
7-point & 0.2325 \\
9-point & 0.2232 \\
\end{tabular}
\end{table}
For the multilevel method, we choose $A_i = P_i^T A P_i$; then
\begin{equation}
x^{t+1}-x^*=[\Pi_{i=1}^{J}(I-T_i)](x^t-x^*)
\end{equation}
where $T_i=\Pi_i R_i \Pi_i^T A$ and $R_i$ is the Gauss-Seidel (GS) smoother for $A_i$.
\begin{table}[!htp]
\caption{Dirichlet Boundary}
\begin{tabular}{l |c c}
Layer& 2&3 \\ \hline
7-point & 0.3234 & 0.7433\\
9-point & 0.2870 & 0.6256\\
\end{tabular}
\end{table}
\begin{table}[!htp]
\caption{Neumann Boundary}
\begin{tabular}{l |c c}
Layer& 2&3 \\ \hline
7-point & 0.3289 & 0.7167\\
9-point & 0.3155 & 0.6283\\
\end{tabular}
\end{table}
| {
"alphanum_fraction": 0.6359032077,
"avg_line_length": 23.6933333333,
"ext": "tex",
"hexsha": "79b5654ac4728a2600e7525880b3dcb1bf5d7615",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "liuzhengqi1996/math452",
"max_forks_repo_path": "6DL/amg-p.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "liuzhengqi1996/math452",
"max_issues_repo_path": "6DL/amg-p.tex",
"max_line_length": 120,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "liuzhengqi1996/math452",
"max_stars_repo_path": "6DL/amg-p.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 783,
"size": 1777
} |
\chapter{String Processing}
In addition to its groundbreaking expression evaluation, Unicon combines
compelling string processing features from its ancestors Icon and
\index{SNOBOL4}SNOBOL4 to provide some of the most flexible and
readable built-in string processing facilities found in any language.
If you are used to string processing in a mainstream language, hold on
to your hat: things are about to get interesting.
In this chapter you will learn
\begin{itemize}
\item How to manipulate strings and sets of characters
\item the string scanning \index{control structure}control structure,
used to match patterns directly in code
\item the pattern type, used to match patterns constructed as data
\item How to write custom \index{pattern matching}pattern matching
primitives, with \index{backtracking}backtracking
\item techniques for matching regular expressions and context free grammars
\end{itemize}
\section{The String and Cset Types}
All mainstream programming languages have a string type, but the
details of Unicon's string type set it apart from other languages.
And almost no other mainstream languages feature a data type
dedicated to character sets, which are quite useful.
\subsection{String Indexes}
You have already seen string literals delimited by double quotes, and
the most common operators that work on strings: the size of a string is
given by the unary \texttt{*} operator, substrings can be picked out
with square-bracketed indexes, and two strings can be concatenated with the
\texttt{{\textbar}{\textbar}} operator. It is time for a
deeper explanation of the meaning of indexes as they are used with
strings and lists.
\index{string!indexes 1 based}Indexes in a string refer to the positions
{\em between\/} characters. The positions are numbered starting from 1. The
index 0 refers to the position after the last character in the string,
and negative indices count from the right side of the string:
\begin{center}
\includegraphics[width=3.6075in,height=1.0417in]{ub-img/ub-img7.png}
\end{center}
\vspace{-0.25cm}{\sffamily\bfseries Figure 3-1:}
{\sffamily Positive and Negative String Indices}
\bigskip
The expression \index{slice!string s[i:j]}\texttt{s[i:j]} refers to the
\index{substring}substring of \texttt{s} that lies between positions
\texttt{i} and \texttt{j}. If either \texttt{i} or \texttt{j} is not a valid
index into \texttt{s}, the expression fails. The expression
\texttt{s[k]} is short for \texttt{s[k:k+1]} and refers to a single
\index{character}character at position \texttt{k}. The expression
\texttt{s[k+:n]} is the substring of length \texttt{n} starting at
position \texttt{k}. If \texttt{s} is the string
\texttt{"hello, world!"} then the
expressions
\iconcode{
s[7] := " puny " \\
s[13:18] := "earthlings"
}
\noindent
change \texttt{s} into \texttt{"hello, puny
earthlings!"}, illustrating the ease with which
insertions and substitutions are made. The first assignment changes the
string to \texttt{"hello, puny world!"},
replacing a single character with six characters and thereby increasing
its length. The second assignment operates on the modified string,
replacing \texttt{"world"} with
\texttt{"earthlings"}.
Strings are values, just like numbers; if you copy a string and then
work on the copy, the original will be left unchanged:
\iconcode{
s := "string1" \\
new\_s := s \\
new\_s[7] := "2"
}
Now the value of \texttt{new\_s} is
"string2" but \texttt{s} is left unchanged.
As mentioned in Chapter 1, strings can be compared with string
\index{comparison operator!string}comparison operators such as
\texttt{==}.
\iconcode{
if line[1] == "\#" then ...}
If you find you are writing many such tests, the string processing you
are doing may be more cleanly handled using the string scanning
facilities, described below. But first, here is some more detail on the
character set data type, which is used in many of the string scanning
functions.
\subsection{Character Sets}
A cset is a set of characters. It has the usual properties of sets:
order is not significant, and a character can only occur once in a
cset. A \index{cset literal}cset literal is represented with single
quotes:
\iconcode{
c := 'aeiou'}
Since characters can only occur once in a cset, duplicates in a cset
literal are ignored; for example,
\texttt{'aaiiee'} is equivalent to
\texttt{'aie'}. Strings can be
converted to csets and vice versa. Since csets do not contain
duplicates, when a string is converted to a cset, all the duplicates
are removed.
Therefore to see if a string is composed of all the vowels and no
consonants:
\iconcode{
if cset(s) == 'aeiou' then ...}
Or, to find the number of distinct characters in a string:
\iconcode{
n := *cset(s)}
The \texttt{!} operator generates the members of a cset in sorted order;
this is also useful in some situations.
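For instance, the following lines illustrate both of these points; the results noted in the comments are what the definitions above imply.

\iconcode{
write(*cset("banana")) \ \ \ \ \ \ \ \# writes 3 \\
every writes(!cset("banana")) \ \# writes abn \\
write()
}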
\subsection{Character Escapes}
Both strings and csets rely on the backslash as an escape character
within string literals. A backslash followed by an \index{escape
codes}\textit{escape code} of one or more characters specifies a
non-printable or control character. Escape codes may be specified by a
numeric value given in hex or octal format - for example,
\texttt{"{\textbackslash}x41"}.
Alternatively, any control character may be specified with an escape
code consisting of the caret (\texttt{\^{}}) followed by the alphabetic
letter of the control character. A cset containing control-C,
control-D, and control-Z could be specified as
\texttt{'{\textbackslash}\^{}c{\textbackslash}\^{}d{\textbackslash}\^{}z'}.
For the most common character escapes, a single-letter code is defined,
such as \texttt{"{\textbackslash}t"} for
the tab character, or
\texttt{"{\textbackslash}n"} for the
newline. For all other characters, the character following the
backslash is the character; this is how quotes or backslashes are
included in literals. The escape codes are summarized in Table 3-1.
\medskip
% \pagebreak
\begin{center}
{\sffamily\bfseries
Table 3-1
Escape Codes and Characters
}
\end{center}
\begin{center}
\begin{xtabular}{|m{0.38in}|m{0.8in}|m{0.4in}|m{0.855in}|m{0.38in}|m{1.14in}|m{0.38in}|m{0.68in}|}
\hline
\sffamily\bfseries Code &
\sffamily\bfseries Character &
\sffamily\bfseries Code &
\sffamily\bfseries Character &
\sffamily\bfseries Code &
\sffamily\bfseries Character &
\sffamily\bfseries Code &
\sffamily\bfseries Character\\\hline
\ \ {\textbackslash}b &
backspace &
\ \ {\textbackslash}d &
delete &
\ \ {\textbackslash}e &
escape &
\ \ {\textbackslash}f &
form feed\\\hline
\ \ {\textbackslash}l &
line feed &
\ \ {\textbackslash}n &
newline &
\ \ {\textbackslash}r &
carriage return &
\ \ {\textbackslash}t &
tab\\\hline
\ \ {\textbackslash}v &
vertical tab &
\ {\textbackslash}' &
quote &
\ \ {\textbackslash}" &
double quote &
\ \ {\textbackslash}{\textbackslash} &
backslash\\\hline
\ {\textbackslash}\textit{ooo} &
octal &
{\textbackslash}x\textit{hh} &
hexadecimal &
\ {\textbackslash}\^{}\textit{x} &
Control-\textit{x} &
~
&
~
\\\hline
\end{xtabular}
\end{center}
\section{String Scanning}
Strings are ordered sequences of symbols. A string's vital information
is conveyed both in its individual elements and in
the number and order in which the symbols appear.
There is a fundamental duality between writing the code to analyze a
string, and writing down some data that describes or abstracts
that string. The same duality is seen in Unicon's string scanning
control structure described in this section, and the pattern data type
used in matching operators, which is described in the next section.
Unicon's main building block for string analysis is a control
structure called \index{string!scanning}\textit{string scanning}. A
\index{scanning!environment}scanning environment consists of a string
\index{subject string}\index{string!subject}\textit{subject} and an
integer \index{position,
string}\index{string!position}\textit{position} within the subject at
which scanning is to be performed. These values are held by the keyword
variables \texttt{\&subject} and \texttt{\&pos}. Scanning environments
are created by an expression of the form
\iconcode{
\textit{s} ? \textit{expr}}
The binary \texttt{?} operator sets the subject to its left argument and
initializes the position to 1; then it executes the expression on the
right side.
The expression usually has an interesting combination of various
\index{matching functions}\textit{matching
functions} in it. Matching functions change the position, and return
the substring between the old and new positions. For example:
\texttt{move(j)} moves the position \texttt{j} places to the
right and returns the substring between the old and new position. This
string will have exactly \texttt{j} characters in it. When the position
cannot move as directed, for example because there are less than
\texttt{j} characters to the right, \index{move(i)}\texttt{move()}
fails. Here is a simple example:
\iconcode{
text ? \{ \\
\> while move(1) do \\
\> \ \ \ write(move(1)) \\
\> \}
}
This code writes out every other character of the string in variable
\texttt{text}.
Another function is \index{tab(i)}\texttt{tab(i)}, which sets the
position \texttt{\&pos} to its argument and returns the substring that
it passed over. So the expression \texttt{tab(0)} will return the
substring from the current position to the end of the string, and set
the position to the end of the string.
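For instance, combining \texttt{move()} and \texttt{tab()} in one scan behaves as the comments indicate:

\iconcode{
"unicon" ? \{ \\
\> write(move(3)) \ \ \# writes uni \\
\> write(tab(0)) \ \ \ \# writes con \\
\> \}
}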
Several string scanning functions examine a string and generate the interesting
positions in it. We have already seen \index{find()}\texttt{find()},
which looks for substrings. In addition to the other parameters that
define what the function looks for, these string functions end with
three optional parameters: a string to examine and two integers. These
functions \index{default!scanning parameters}default their string
parameter to \texttt{\&subject}, the string being scanned. The two
integer positions specify where in the string the processing will be
performed; they default to 1 and 0 (the entire string), or
\texttt{\&pos} and 0 if the string defaulted to \texttt{\&subject}.
Here is a \index{generator}generator that produces the words from the
input:
\iconcode{
procedure getword() \\
local wchar, line \\
\> wchar := \&letters ++ \&digits ++
'{\textbackslash}'-' \\
\> while line := read() do \\
\> \ \ \ line ? while tab(upto(wchar)) do \{ \\
\> \ \ \ \ \ \ word := tab(many(wchar)) \\
\> \ \ \ \ \ \ suspend word \\
\> \ \ \ \ \ \ \} \\
end
}
Variable \texttt{wchar} is a cset of characters that are
allowed in words, including apostrophe (which is escaped) and hyphen
characters. \index{upto(c)}\texttt{upto(c)} returns the next position
at which a character from the cset \texttt{c} occurs. The
\index{many(c)}\texttt{many(c)} function returns the position after a
sequence of characters from \texttt{c}, if one or more of them occur at
the current position. The expression \texttt{tab(upto(wchar))} advances
the position to a character from \texttt{wchar}; then
\texttt{tab(many(wchar))} moves the position to the end of the word and
returns the word that is found. This is a generator, so when it is
resumed, it takes up execution from where it left off and continues to
look for words (reading the input as necessary).
Notice the first line: the cset \texttt{wchar} is the set union of the
upper- and lowercase letters (the value of the keyword
\texttt{\&letters}) and the digits (the keyword \texttt{\&digits}).
This cset union is performed each time \texttt{getword()} is called,
which is inefficient if \texttt{getword()} is called many times. The
procedure could instead calculate the value once and store it for all
future calls to \texttt{getword()}.
Declaring the variable to be static will cause its value to
persist across calls to the procedure. Normal local variables are
initialized to the null value each time a procedure is entered. To do
this, add these two lines to the beginning of the procedure:
\iconcode{
static wchar \\
initial wchar := \&letters ++ \&digits ++
'{\textbackslash}'-'
}
The \index{match(s)}\texttt{match(s)} function takes a string argument
and succeeds if \texttt{s} is found at the current position in the
subject. If it succeeds, it produces the position at the end of the
matched substring. This expression
\iconcode{
if tab(match("-")) then sign := -1 else sign
:= 1}
\noindent
looks to see if there is a minus sign at the current position; if one is
found, \texttt{\&pos} is moved past it and the variable \texttt{sign}
is assigned a -1; otherwise, it gets a 1. The expression
\texttt{tab(match(s))} occurs quite often in string scanning, so it is
given a shortcut: \texttt{=s}. The section on pattern matching
later in this chapter will explain that this
``unary equals'' operator has an additional, more powerful use.
The last two string scanning functions to round out
Icon's built-in repertoire are
\index{any(c)}\texttt{any(c)} and \index{bal()}\texttt{bal(c1,c2,c3)}.
\texttt{any(c)} is similar to \texttt{many()}, but only tests a single
character being scanned to see if it is in cset \texttt{c}. The
\texttt{bal()} function produces positions at which a character in
\texttt{c1} occurs, similar to \texttt{upto()}, with the added
stipulation that the string up to those positions is \textit{balanced}
with respect to characters in \texttt{c2} and \texttt{c3}. A string is
balanced if it has the same number of characters from \texttt{c2} as
from \texttt{c3} and there are at no point more \texttt{c3} characters
present than \texttt{c2} characters. The \texttt{c1} argument defaults
to \texttt{\&cset}. Since \texttt{c2} and \texttt{c3} default to
\texttt{'('} and
\texttt{')'}, \texttt{bal()} defaults
to find balanced parentheses.
The restriction that \texttt{bal()} only returns positions at which a
character in \texttt{c1} occurs is a bit strange. Consider what you
would need to do in order to write an expression that tells whether a
string \texttt{s} is balanced or not.
You might want to write it as \texttt{s ? (bal() = *s+1)} but
\texttt{bal()} will never return that position. Concatenating an extra
character solves this problem:
\iconcode{
procedure isbalanced(s) \\
\> return (s {\textbar}{\textbar} " ") ?
(bal() = *s+1) \\
end
}
If string \texttt{s} is very large, this solution is not cheap, since it
creates a new copy of string \texttt{s}. You might write a version of
\texttt{isbalanced()} that doesn't use the
\texttt{bal()} function, and see if you can make it run faster than
this version. An example later in this chapter shows how to use
\texttt{bal()} in a more elegant manner.
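One possible \texttt{bal()}-free version is sketched below; it simply tracks the nesting
depth while generating the characters of the string, and it assumes that the element
generation operator \texttt{!} applied to a string produces its one-character substrings.

\iconcode{
procedure isbalanced2(s) \\
\> local c, depth \\
\> depth := 0 \\
\> every c := !s do \{ \\
\> \ \ \ if c == "(" then depth +:= 1 \\
\> \ \ \ else if c == ")" then depth -:= 1 \\
\> \ \ \ if depth {\textless} 0 then fail \ \ \# too many right parentheses \\
\> \ \ \ \} \\
\> if depth = 0 then return \\
end
}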
\subsection*{File Completion}
Consider the following gem, attributed to Jerry \index{Nowlin,
Jerry}Nowlin and Bob \index{Alexander, Bob}Alexander. Suppose you want
to obtain the full name of a file, given only the first few letters of
a filename and a list of complete \index{filename completion}filenames.
The following one line procedure does the trick:
\iconcode{
procedure complete(prefix, filenames) \\
\> suspend match(prefix, p := !filenames) \& p \\
end
}
This procedure works fine for lists with just a few members and also for
cases where \texttt{prefix} is fairly large.
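For example, with a small hypothetical list of names, the call below should write
\texttt{code}, \texttt{compile}, and \texttt{core}, one per line:

\iconcode{
every write(complete("co", ["code", "compile", "data", "core"]))
}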
\subsubsection*{Backtracking}
\index{backtracking}The matching functions we have seen so far,
(\texttt{tab()} and \texttt{move()}), are actually
\index{generator}generators. That is, even though they only produce one
value, they suspend instead of returning. If expression evaluation ever
resumes one of these functions, they restore the old value of
\texttt{\&pos}. This makes it easy to try alternative matches starting
from the same position in the string:
\iconcode{
s ? (="0x" \& tab(many(\&digits ++
'abcdefABCDEF'))) {\textbar} \\
\> tab(many(\&digits))
}
This expression will match either a hexadecimal string in the format
used by C or a decimal integer. Suppose \texttt{s} contains the string
\texttt{"0xy"}. The first part of the
expression succeeds and matches the
\texttt{"0x"}; but then the expression
\texttt{tab(many(\&digits ++
'abcdef'))} fails; this causes Unicon
to resume the first \texttt{tab()}, which resets the position to the
beginning of the string and fails. Unicon then evaluates the expression
\texttt{tab(many(\&digits))} which succeeds (matching the string
\texttt{"0"}); therefore the entire
expression succeeds and leaves \texttt{\&pos} at 2.
{\sffamily\bfseries
Warning}
Be careful when using \texttt{tab()} or \texttt{move()} in a
surrounding expression that can fail! The fact that \texttt{tab()}
and \texttt{move()} reset \texttt{\&pos} upon expression
failure causes confusion and bugs when it happens accidentally.
\subsection*{Concordance Example}
Listing 3-1 illustrates the above concepts and introduces a few more.
Here is a program to read a file, and generate a
\index{concordance}concordance that prints each word followed by a list
of the lines on which it occurs. Short words like
\texttt{"the"} aren't
interesting, so the program only counts words longer than three
characters.
\bigskip
{\sffamily\bfseries Listing 3-1}
{\sffamily\bfseries A simple concordance program}
\iconcode{
procedure main(args) \\
\> (*args = 1) {\textbar} stop("Need a
file!") \\
\> f := open(args[1]) {\textbar}
stop("Couldn't open ",
args[1]) \\
\> wordlist := table() \\
\> lineno := 0 \\
\ \\
\> while line := map(read(f)) do \{ \\
\> \ \ \ lineno +:= 1 \\
\> \ \ \ every word := getword(line) do \\
\> \ \ \ \ \ \ if *word {\textgreater} 3 then \{ \\
\> \ \ \ \ \ \ \ \ \ \# if word isn't in the table,
set entry to empty list \\
\> \ \ \ \ \ \ \ \ \ /wordlist[word] := list() \\
\> \ \ \ \ \ \ \ \ \ put(wordlist[word], lineno) \\
\> \ \ \ \ \ \ \ \ \ \} \\
\> \ \ \ \} \\
\> L := sort(wordlist) \\
\> every l := !L do \{ \\
\> \ \ \ writes(l[1],
"{\textbackslash}t") \\
\> \ \ \ linelist := "" \\
\> \ \ \ \# Collect line numbers into a string \\
\> \ \ \ every linelist {\textbar}{\textbar}:= (!l[2]
{\textbar}{\textbar} ", ") \\
\> \ \ \ \# trim the final ", " \\
\> \ \ \ write(linelist[1:-2]) \\
\> \ \ \ \} \\
end \\
\ \\
procedure getword(s) \\
\> s ? while tab(upto(\&letters)) do \{ \\
\> \ \ \ word := tab(many(\&letters)) \\
\> \ \ \ suspend word \\
\> \ \ \ \} \\
end
}
\noindent If we run this program on this input:
\iconcode{
Half a league, half a league, \\
Half a league onward, \\
All in the valley of Death \\
Rode the six hundred.
}
\noindent the program writes this output:
\iconcode{
death \ \ 3 \\
half \ \ \ 1, 2 \\
hundred 4 \\
league \ 1, 1, 2 \\
onward \ 2 \\
rode \ \ \ 4 \\
valley \ 3
}
First, note that the \texttt{main()} procedure requires a command-line
argument, the name of a file to open. Also, we pass all the lines read
through the function \texttt{map()}. This is a function that takes
three arguments, the first being the string to map; and the second and
third specifying how the string should be mapped on a character by
character basis. The defaults for the second and third arguments are
the uppercase letters and the lowercase letters, respectively;
therefore, the call to \texttt{map()} converts the line just read in to
all \index{lower case}lowercase.
\section{Pattern Matching}
Pattern matching in Unicon is like string scanning on steroids.
Patterns encode as data what sort of strings to match, instead of writing
string scanning code to perform the match directly. A
pattern data type allows complex patterns to be composed from pieces.
The patterns can then be used in the middle of string scans to give
that notation a boost, or used on their own. Arguably, you don't need
patterns because anything that can be done to strings, can be done
using string scanning. But when the pattern solution is usually
shorter, more readable, and runs faster, why wouldn't everyone use them?
Patterns are understood in terms of two different points in time, the
point when the pattern is constructed, and the times at which it is
used to match a string. Most of the programming work for patterns
involves formulating the pattern construction, but most of the
computation occurs later on during the pattern matches.
The next two subsections describe ways to
create many and various complex patterns, while the two notations for
using patterns are relatively simple and require little space. All of
this only becomes clear with numerous examples that will follow.
\subsection{Regular Expressions}
The literal values of the pattern type are regular expressions,
enclosed in less than ($<$) and greater than ($>$) symbols. The
notation of regular expressions is very old and very famous in
computer science, and readers already familiar with them may wish
to skim this section.
Within $<$ and $>$ symbols, the normal Unicon interpretation of
operators does not apply; instead a set of regular expression
operators is used to express simple string patterns concisely. The
following are examples of regular expressions.
\bigskip
\begin{tabular}{|l|l|} \hline
regular expression & is a pattern that... \\ \hline
\texttt{$<$abc$>$} & matches abc \\
\texttt{$<$a$|$b$|$c$>$}& matches a or b or c \\
\texttt{$<$[a-c]$>$} & matches a or b or c \\
\texttt{$<$ab?c$>$} & matches a followed optionally by b followed by c \\
\texttt{$<$ab*c$>$} & matches a followed by 0 or more b's followed by c \\
\texttt{$<$a*b*c*$>$} & matches a's followed by b's followed by c's \\ \hline
\end{tabular}
\subsection{Pattern Composition}
Regular expressions are awesome, but there are many patterns that they
cannot express. Unicon has many pattern functions and operators that
construct new patterns, often from existing pattern
arguments. Sometimes, they simply make it convenient to store parts of
patterns in variables that can then be used at various places in
larger patterns. Other times, they make it possible to write patterns
that are not easily written as regular expressions.
\bigskip
\begin{tabular}{|l|l|} \hline
composer & constructs a pattern that... \\ \hline
\texttt{p1 {\textbar\textbar} p2} & matches if pattern p1 is followed by pattern p2 \\
\texttt{p1 .{\textbar} p2} & matches if pattern p1 or pattern p2 matches \\
\texttt{p -$>$ v} & assigns the substring matched by p to v if the entire pattern match succeeds \\
\texttt{p =$>$ v} & assigns the substring matched by p to v as soon as the match reaches this point \\
\texttt{.$>$ v} & assigns the current position in the match to v \\
\texttt{`v`} & evaluates v at pattern match time \\
\texttt{Abort()} & causes an entire pattern match to fail immediately \\
\texttt{Any(c)} & matches a character in cset c \\
\texttt{Arb()} & matches anything \\
\texttt{Arbno(p)} & matches pattern p as few (zero or more) times as possible \\
\texttt{Bal()} & matches the shortest non-null substring with balanced parentheses\\
\texttt{Break(c)} & matches the substring until a character in cset c occurs \\
\texttt{Breakx(c)}& like Break(c), but on backtracking may extend the match past the break character and continue \\
\texttt{Fail()} & fails to match, triggering any alternative(s) \\
\texttt{Fence()} & fails to match, preventing alternative(s) \\
\texttt{Len(i)} & matches any i characters \\
\texttt{NotAny(c)} & matches any one character not in cset c \\
\texttt{Nspan(c)} & matches 0 or more characters in cset c, as many as possible \\
\texttt{Pos(i)} & sets the cursor to position i \\
\texttt{Rem()} & matches the remainder of the string \\
\texttt{Span(c)} & matches 1 or more characters in cset c, as many as possible \\
\texttt{Succeed()}& causes the entire pattern match to succeed immediately \\
\texttt{Tab(i)} & matches from the current position to position i, moving to
that location \\
\texttt{Rpos(i)} & sets the position i characters from the right end of the
string \\
\texttt{Rtab(i)} & matches from the current position to i
characters from the right end. \\ \hline
\end{tabular}
\bigskip
This table summarizes a facility for which an entire chapter could be
written. Besides what extra information you find on these functions in
this chapter and in Appendix A, Unicon Technical Report 18 covers
these constructors in more detail. The concepts generally are
translated directly from SNOBOL4, so consulting SNOBOL4 books may also
be of use.
Most operands and arguments are required to be of type pattern, with the
exception of those marked as type integer (i) or cset (c), and those
which are variables (v). If a pattern is required, a cset may be
supplied, with semantics equivalent to the pattern which will match
any member of the cset. Otherwise if the argument is not a pattern
it will be converted to a string; strings are converted to patterns
that match the string.
Variable operands may be simple variables or references with a
subscript or field operator. The translator may not currently handle
arbitrarily complex variable references within patterns. The
unevaluated expression (backquotes) operator does handle
function calls and simple method invocations in addition to
variables.
\subsection{Pattern Match Operators}
A pattern match is performed within a string scanning environment
using the unary equals operator, \texttt{=p}. If \texttt{p}
is matched at the current position, \texttt{=p} produces the
substring matched and moves the position by that amount.
There is also a pattern match control structure, \texttt{s ?? p}, which creates
a new string scanning environment for \texttt{s} and looks for pattern
\texttt{p} within \texttt{s}, working from left to right.
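As a small sketch of the unary \texttt{=} operator in action, the following fragment
builds two patterns from constructors in the table of the previous section and then
uses them inside an ordinary string scan; it should write \texttt{width=640}:

\iconcode{
word := Span(\&letters) \\
digits := Span(\&digits) \\
"width=640" ? \{ \\
\> attr := =word \\
\> ="=" \\
\> val := =digits \\
\> write(attr, "=", val) \\
\> \}
}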
\subsection{Scopes of Unevaluated Variables}
Since a pattern can be passed as a parameter, variables used in
patterns might get used outside of the scope where the pattern was
constructed, potentially anywhere in the program. In SNOBOL4 this was
not an issue mostly because all variables were global. In Unicon
variables are not global by default, and the variables used
during pattern matching are evaluated in the scope of the pattern match,
not references to locals that existed back during pattern construction
time.
To make things more fun, it is impractical to apply the usual rules
for implicit variable declaration to variables that do not appear in a
procedure body because they are referenced in a pattern that was
constructed elsewhere. If you use a variable in a pattern and pass
that pattern into a different scope, you must declare that variable
explicitly, either as a global or in the scope where it is used in a
pattern match.
\section{String Scanning and Pattern Matching Miscellany}
Many topics related to string scanning and pattern matching do not
easily fit into one of the preceding sections, but are nevertheless
important.
\subsection{Grep}
Grep, an acronym defined variously, is one of the oldest UNIX
utilities, which searches files for occurrences of a pattern defined
by a regular expression.
\bigskip
{\sffamily\bfseries Listing 3-2}
{\sffamily\bfseries A simple grep-like program}
\iconcode{
link regexp \\
procedure main(av) \\
\> local f, re, repl \\
\> every (f{\textbar}re{\textbar}repl) := pop(av) \\
\> f := open(f) {\textbar} stop("can't open file named: ", f) \\
\> while line := read(f) do \\
\> \ \ \ write(re\_sub(line, re, repl)) \\
end \\
procedure re\_sub(str, re, repl) \\
\> result := "" \\
\> str ? \{ \\
\> \ \ \ while j := ReFind(re) do \{ \\
\> \ \ \ \ \ \ result {\textbar}{\textbar}:= tab(j)
{\textbar}{\textbar} repl \\
\> \ \ \ \ \ \ tab(ReMatch(re)) \\
\> \ \ \ \ \ \ \} \\
\> \ \ \ result {\textbar}{\textbar}:= tab(0) \\
\> \ \ \ \} \\
\> return result \\
end
}
To replace all occurrences of
\texttt{"read{\textbar}write"} with
\texttt{"IO operation"} you could type
\iconcode{
igrep mypaper.txt "read{\textbar}write"
"IO Operation"}
Since the program has access to the pattern matching operation at a
finer grain, more complex operations are possible; this
search-and-replace is just an example.
\subsection{Grammars}
\index{grammar}Grammars are collections of rules that describe
\index{syntax}\textit{syntax}, the combinations of words allowed in a
language. Grammars are used heavily both in linguistics and in computer
science. \index{pattern matching}Pattern matching using a grammar is
often called \index{parse}\textit{parsing}, and is one way to match
patterns more complex than regular expressions can handle. This section
presents some simple programming techniques for parsing context free
grammars. Context free grammars utilize a \index{stack}stack to
recognize a fundamentally more complex category of patterns than
regular expressions can; they are defined below.
For linguists, this treatment is elementary, but introduces useful
programming techniques.
%% Chapter 18 refers to lexyacc.tex, which is no longer in the book.
%% For writers of programming language compilers,
%% an automatic parser generator tool that you can use with Unicon or Icon
%% is described in Chapter 18.
If you are not interested in grammars, you
can skip the rest of this chapter.
A \index{context-free grammar}context-free grammar or CFG is a set of
rules or \textit{productions}. Here is an example:
\iconcode{
S -{\textgreater} S S \\
\> {\textbar} ( S ) \\
\> {\textbar} ( )
}
This grammar has three productions. There are two kinds of symbols,
\textit{non-terminals} like \texttt{S} that can be replaced by the
string on the right side of a rule, and \textit{terminals} like
\texttt{(} and \texttt{)}. An application of a production rule is called
a derivation. One special non-terminal is called the
\textit{start symbol}; a string is accepted by the grammar if there is a
sequence of derivations from the start symbol that leads to the string.
By convention the start symbol is the first non-terminal in the
definition of the grammar. (This grammar only has one non-terminal, and
it is also the start symbol.)
This grammar matches all strings of balanced parentheses. The string
\texttt{(()(()()))} can be matched by this derivation:
\iconcode{
S -{\textgreater} (S) -{\textgreater} (SS) -{\textgreater} (()S)
-{\textgreater} (()(S)) -{\textgreater} \\
\> \ \ (()(SS)) -{\textgreater} (()(()S)) -{\textgreater} (()(()()))
}
\subsection*{Parsing}
This section is a discussion of parsers written by hand in Unicon.
It would not be right to talk about parsing context free grammars
without mentioning the standard tool, iyacc, that the Unicon language
translater itself is written in. Iyacc is an industrial strength
parser generator, derived from the open source "Berkely yacc", that
generates parsers as .icn source files compatible with Icon and Unicon.
Iyacc comes with Unicon distributions and is documented in Unicon Technical
Report 3 at http://unicon.org/utr/utr3.pdf.
Unicon can parse grammars in a natural way using matching
functions. A production
\iconcode{
A -{\textgreater} B a D \\
\> {\textbar} C E b
}
\noindent
can be mapped to this matching function:
\iconcode{
procedure A() \\
\> suspend (B() \& ="a" \& D())
{\textbar} (C() \& E() \& ="b") \\
end
}
\noindent
This procedure first tries to match a string matched by \texttt{B},
followed the character \texttt{a}, followed by a string matched by
\texttt{D}. If \texttt{D} \index{expression failure}fails, execution
backtracks across the \texttt{="a"}
(resetting \texttt{\&pos}) and resumes \texttt{B()}, which will attempt
the next match.
If the sub-expression to the left of the \index{alternation operator (
{\textbar} )}alternation fails, then execution will try the
sub-expression on the right, \texttt{C() \& E() \&
="b"} until something matches - in which
case \texttt{A} succeeds, or nothing matches - which will cause it to
fail.
Parsers for any CFG can be written in this way. However, this is an
expensive way to do it! Unicon's
expression evaluation will try all possible derivations
trying to match a string. This is not a good way to parse, especially
if the grammar is amenable to lookahead methods.
%% The chapter 18 referred to here is lexyacc.tex (which was removed
%% in January 2013).
%% A more efficient
%% method is given in the next section. For serious parsing jobs, Chapter
%% 18 shows how to use the Unicon versions of the standard
%% industrial-strength lexical analyzer and parser generation tools, lex
%% and yacc.
\subsection*{Doing It Better}
Many grammars can be parsed more efficiently using
well-known techniques - consult a book on compilers for details. Here
is one way of parsing a grammar using some of the built-in
functions. Consider this grammar for an arithmetic expression:
\iconcode{
E -{\textgreater} T {\textbar} T + E \\
T -{\textgreater} F {\textbar} F * T \\
F -{\textgreater} a {\textbar} b {\textbar} c {\textbar} ( E )
}
\noindent
Listing 3-3 is an Unicon program that recognizes strings produced by
this grammar:
\bigskip
{\sffamily\bfseries Listing 3-3}
{\sffamily\bfseries Expression parser}
\iconcode{
procedure main() \\
\> while line := read() do \\
\> \ \ \ if expr(line) == line then write("Success!") \\
\> \ \ \ else write("Failure.") \\
end \\
procedure expr(s) \\
\> s ? \{ \\
\> \ \ \ while t := tab(bal('+'))
do \{ \\
\> \ \ \ \ \ \ term(t) {\textbar} \index{fail}fail ;
="+" \\
\> \ \ \ \ \ \ \} \\
\> \ \ \ term(tab(0)) {\textbar} fail \\
\> \ \ \ \} \\
\> return s \\
end \\
procedure term(s) \\
\> s ? \{ \\
\> \ \ \ while f := tab(bal('*'))
do \{ \\
\> \ \ \ \ \ \ factor(f) {\textbar} fail ;
="*" \\
\> \ \ \ \ \ \ \} \\
\> \ \ \ factor(tab(0)) {\textbar} fail \\
\> \ \ \ \} \\
\> return s \\
end \\
procedure factor(s) \\
\> s ? suspend ="a" {\textbar}
="b" {\textbar}
="c" {\textbar} ( ="("
{\textbar}{\textbar}
expr(tab(bal(')')))
{\textbar}{\textbar} =")" ) \\
end
}
The interesting procedure here is \index{bal()}\texttt{bal()}. With
\texttt{')'} as its first argument,
\texttt{bal()} scans to the closing parenthesis, skipping over any
parentheses in nested subexpressions, which is exactly what is needed
here.
The procedure \texttt{factor()} is written according to the rule in the
previous section. The procedures \texttt{expr()} and \texttt{term()}
have the same structure. The \texttt{expr()} procedure skips any
subexpressions (with balanced parentheses) and looks for a \texttt{+}.
We know that this substring is a well-formed expression that is not a
sum of terms, therefore, it must be a term. Similarly \texttt{term()}
looks for \texttt{*} and it knows that the expression does not contain
any \texttt{*} operators at the same nesting level; therefore it must
be a factor.
Notice that the procedures return the strings that they matched. This
allows us to check if the whole line matched the grammar rather than
just an initial substring. Also, notice that \texttt{factor()} uses
string concatenation instead of conjunction, so that it can return the
matched substring.
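For instance, tracing through the procedures above, if the compiled program reads the
input line \texttt{a+b*c} then \texttt{expr(line)} returns the whole line and the program
writes \texttt{Success!}; for an input such as \texttt{a++b} the empty string between the
two plus signs cannot be matched by \texttt{term()}, so the program writes \texttt{Failure.}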
\section*{Summary}
Unicon's string processing facilities are extensive.
Simple operations are very easy, while more complex string analysis has
the support of a special control structure, string scanning. String
scanning is not as concise as regular expression pattern matching, but
it is fundamentally more general because the code and patterns are
freely intermixed.
| {
"alphanum_fraction": 0.7326352332,
"avg_line_length": 37.9514038877,
"ext": "tex",
"hexsha": "5217124630b50224c9d2756b08a09fc3f76b0dfe",
"lang": "TeX",
"max_forks_count": 16,
"max_forks_repo_forks_event_max_datetime": "2022-03-01T06:01:00.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-10-14T04:32:36.000Z",
"max_forks_repo_head_hexsha": "df79234dc1b8a4972f3908f601329591c06bd141",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "jschnet/unicon",
"max_forks_repo_path": "doc/book/string.tex",
"max_issues_count": 83,
"max_issues_repo_head_hexsha": "29f68fb05ae1ca33050adf1bd6890d03c6ff26ad",
"max_issues_repo_issues_event_max_datetime": "2022-03-22T11:32:35.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-11-03T20:07:12.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "MatthewCLane/unicon",
"max_issues_repo_path": "doc/book/string.tex",
"max_line_length": 98,
"max_stars_count": 35,
"max_stars_repo_head_hexsha": "29f68fb05ae1ca33050adf1bd6890d03c6ff26ad",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "MatthewCLane/unicon",
"max_stars_repo_path": "doc/book/string.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-01T06:00:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-11-29T13:19:55.000Z",
"num_tokens": 9330,
"size": 35143
} |
%!TEX root = ../notes.tex
\section{March 17, 2022}
\subsection{Midterm Review}
\subsubsection*{General Advice}
\begin{itemize}
\item 5-7pm. Location: Barus \& Holley 168.
\item There are 5 problems:
\begin{itemize}
\item Each are weighted equally, some have multiple sections in them.
\item There is a bonus problem for a \emph{token} number of points.
\end{itemize}
\item Think about problems before starting! Don't begin immediately.
\end{itemize}
\subsubsection*{Key Topics}
\begin{enumerate}[1)]
\item
Unique factorization in $\ZZ$ (\cref{thm:unique-factorization}). Key points:
\begin{itemize}
\item Existence (using well-ordering of $\ZZ_+$)
\item Uniqueness (using prime elements being irreducible elements in $\ZZ$)
\end{itemize}
\item
$\ZZ$ is a Euclidean domain with Euclidean function $\mathsf{abs}$ (absolute value) (\cref{cor:z-euclidean}).
\begin{itemize}
\item Argument uses well-ordering of $\ZZ_+$ applied to the set $S = \{a - bq\mid b\in \ZZ\}$ when trying to divide $a$ by $b$.
\item Repeated application of this property yields the Euclidean algorithm for finding $\gcd$'s.
\end{itemize}
\item
Bezout's Identity (\emph{not} Bezout's Theorem):
If $a, b\in\ZZ$ are integers (not both $0$) and $c\in \ZZ$, then there exists $x, y\in\ZZ$ such that
\[ax + by = c\]
if and only if $\gcd(a, b)\mid c$.
\begin{itemize}
\item We take the set $S = \{ax + by\mid x, y\in\ZZ\}$ and use well-ordering to show that the smallest \emph{positive} element of $S$ is $\gcd(a, b)$; every element of $S$ is then a multiple of $\gcd(a, b)$, which gives the stated criterion. (A short worked example follows this list.)
\end{itemize}
\item
From Bezout to solving linear congruences in $1$ variable, the linear congruence
\[ax\equiv b\pmod{m}\]
is equivalent to
\[ax - my = b\]
for some $y\in \ZZ$. Applying Bezout's identity tells us that this equation is solvable if and only if $d = \gcd(a, m)$ divides $b$. When a solution exists, there are exactly $d$ solutions modulo $m$ (see the worked example after this list).
\begin{itemize}
\item Showing there are $d$ solutions: you divide $a, b, m$ by $\gcd(a, m)$, then you have a modulus $\frac{m}{\gcd(a, m)}$ where we have a unique solution. We lift up to solutions modulo $m$.
\end{itemize}
\item
Sunzi's theorem (\cref{thm:crt}). For $m, n\in\ZZ_+$ with $(m, n) = 1$. And $a, b\in\ZZ$, then the simultaneous congruences
\begin{align*}
x\equiv a\pmod{m} \\
x\equiv b\pmod{n}
\end{align*}
have a \emph{unique} solution modulo $mn$.
\begin{itemize}
\item We have $\pi : \ZZ/mn\ZZ\to \ZZ/m\ZZ\times \ZZ/n\ZZ$ the natural projection; since $(m, n) = 1$ we get $\ker(\pi) = \{0\}$, so $\pi$ is injective, and comparing cardinalities shows it is a bijection.
\end{itemize}
\item
Structure of the group of units (\cref{cor:cyclicity-of-unit-groups}). $U(m)$ is cyclic $\iff$ $m = 1, 2, 4, p^e, 2p^e$ for an odd prime $p$.
\end{enumerate}
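As a quick worked illustration of items 2--4 above (filling in numbers not taken from the lecture itself): to compute $\gcd(34,19)$ and a Bezout identity, the Euclidean algorithm gives
\begin{align*}
34 &= 1\cdot 19 + 15, & 19 &= 1\cdot 15 + 4, & 15 &= 3\cdot 4 + 3, & 4 &= 1\cdot 3 + 1,
\end{align*}
so $\gcd(34,19) = 1$, and back-substituting yields $1 = 9\cdot 19 - 5\cdot 34$. Similarly, for the linear congruence $6x \equiv 9 \pmod{15}$ we have $d = \gcd(6,15) = 3$ and $3 \mid 9$, so solutions exist; dividing through by $3$ gives $2x \equiv 3 \pmod{5}$, whose unique solution is $x \equiv 4 \pmod{5}$, and lifting back we obtain the $d = 3$ solutions $x \equiv 4, 9, 14 \pmod{15}$.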
\subsubsection*{Practice Problems}
\begin{problem}
Find the integer $0\leq a\leq 36$ such that
\[3777^{\left(1144523^{56245501}\right)} \equiv a\pmod{37}\]
\end{problem}
We can reduce the base $3777\equiv 3\pmod{37}$. We reduce $1144523\equiv 11\pmod{\phi(37)}$. We can reduce the upper power $56245501\equiv 1\pmod{\phi(\phi(37))}$. This reduces to
\[3^{11}\equiv a\pmod{37}\]
which gives $a\equiv 28\pmod{37}$.
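Filling in the last bit of arithmetic by repeated squaring modulo $37$: $3^2 = 9$, $3^4 = 81 \equiv 7$, $3^8 \equiv 7^2 = 49 \equiv 12$, so
\[3^{11} = 3^8 \cdot 3^2 \cdot 3 \equiv 12 \cdot 9 \cdot 3 = 324 \equiv 28 \pmod{37}.\]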
\begin{problem}
Let $p\in\ZZ$ be a prime and let $g$ be a primitive root mod $p$. Describe the set
\[\{g^k\mid g^k \text{ is a primitive root mod $p$}\}\]
\end{problem}
\begin{proof}
We claim the set is $\{g^k \mid \gcd(k, p-1) = 1\}$. Indeed, $g^k$ is a primitive root exactly when every element $a = g^\alpha$ can be written as a power of $g^k$, i.e.\ when for every $\alpha$ there is a $\beta$ with $(g^k)^\beta = g^\alpha$. Since $g^{p-1}\equiv 1$, this amounts to solving $k\beta \equiv \alpha \pmod{p-1}$, which by Bezout's identity is possible for every $\alpha$ (in particular $\alpha = 1$) if and only if $\gcd(k, p-1) = 1$.
\end{proof}
\begin{lemma*}
Prove that for any finite group $G$ of order $n$ and any $g\in G$, the cyclic group $\langle g^k\rangle$ for $k$ such that $\gcd(k, \ord(g)) = 1$ equals $\langle g\rangle$.
\end{lemma*}
\begin{proof}
Let $d = (k, \ord(g))$. Then there exists $x, y\in\ZZ$ such that
\[\ord(g)\cdot x + k\cdot y = d\]
so
\begin{align*}
g^d & = g^{\ord(g)\cdot x + ky} \\
& = g^{\ord(g)\cdot x}\cdot g^{ky} \\
& = g^{ky}
\end{align*}
so $g^d\in \langle g^k\rangle \implies \langle g^d\rangle \subseteq \langle g^k\rangle$. We have $\langle g^k\rangle\subseteq \langle g^d\rangle$ since $d\mid k$. Thus $\langle g^d\rangle = \langle g^k\rangle$.
We also have that $(g^k)^{\ord(g)/d} = (g^{\ord(g)})^{k/d} = 1$, so if $d = (k, \ord(g))>1$ then $\ord(g^k)< \ord(g)$.
So together we have that $\langle g\rangle = \langle g^k\rangle$ if and only if $(k, \ord(g))=1$.
\end{proof}
\begin{problem}
Prove
\begin{proposition*}
If $f : \ZZ_+\to \CC$ is a nonzero multiplicative function, then $f^{-1}$ (the Dirichlet inverse) exists and is multiplicative.
\end{proposition*}
\end{problem}
\begin{proof}
Let $h$ be given by
\begin{align*}
h(p^k) & = f^{-1}(p^k)\qquad\text{prime powers $p^k$} \\
h(n) & = h(p_1^{e_1})\cdots h(p_k^{e_k})
\end{align*}
then $(f\star h)(p^k) = I(p^k)$. Both $f\star h$ and $I$ are multiplicative, so
\[(f\star h)(n) = I(n)\quad\forall n\in\ZZ\]
and $h = f^{-1}$.
(For existence: $f(1) = 1 \neq 0$ for any nonzero multiplicative function, and any arithmetic function $f$ with $f(1)\neq 0$ has a Dirichlet inverse, defined recursively from $f \star f^{-1} = I$.)
\end{proof}
\begin{problem}
Define $\lambda : \ZZ_+\to \CC$ by
\[\lambda(n) = (-1)^{e_1 + e_2 + \cdots}\]
where the $e_i$'s are the exponents on the prime factorization of $n$. Let
\[g(n) = \sum_{d\mid n}\lambda(d)\]
Prove that
\[g(n) = \begin{cases}
1 & \text{if $n$ is square} \\
0 & \text{otherwise}
\end{cases}\]
\end{problem}
\begin{proof}
We note that $\lambda$ is multiplicative, so its divisor sum $g$ is also multiplicative, and it suffices to compute $g$ on prime powers. For $n = p^e$ the divisors are $1, p, \dots, p^e$, so $g(p^e) = \sum_{j=0}^{e}\lambda(p^j) = \sum_{j=0}^{e}(-1)^j$, which equals $1$ when $e$ is even and $0$ when $e$ is odd. Hence $g(n) = 1$ exactly when every exponent in the factorization of $n$ is even, i.e.\ when $n$ is a perfect square, and $g(n) = 0$ otherwise.
\end{proof} | {
"alphanum_fraction": 0.5960575214,
"avg_line_length": 47.976744186,
"ext": "tex",
"hexsha": "69fd627bca7affa10279daf0541310fb8e277100",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4",
"max_forks_repo_licenses": [
"BSL-1.0"
],
"max_forks_repo_name": "jchen/math1560-notes",
"max_forks_repo_path": "lectures/2022-03-17.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSL-1.0"
],
"max_issues_repo_name": "jchen/math1560-notes",
"max_issues_repo_path": "lectures/2022-03-17.tex",
"max_line_length": 303,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "a3605894c69d4e3dd7f90829523ff3ec3c73a6f4",
"max_stars_repo_licenses": [
"BSL-1.0"
],
"max_stars_repo_name": "jchen/math1560-notes",
"max_stars_repo_path": "lectures/2022-03-17.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-03T20:28:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-02-02T15:41:56.000Z",
"num_tokens": 2116,
"size": 6189
} |
%%%%%%%%%%%%%%%%%%%%%%%%% preamble %%%%%%%%%%%%%%%%%%
\documentclass{article}
\usepackage[utf8]{inputenc} % input encoding
\usepackage{graphicx} % to insert figures
\usepackage{amsmath,amsfonts} %to insert complex equations
\usepackage{natbib} % bibliography package
\bibliographystyle{dinat} % bibliography style
\usepackage{hyperref} %
\usepackage[english]{babel} % to insert text in English
\usepackage{blindtext} % to insert sample text
\title{Intro to \LaTeX}
\author{Dr Patricia Ternes\thanks{[email protected]}}
\date{\today}
%%%%%%%%%%%%%%%%%%%%%%%%% preamble %%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle
\begin{abstract}
\blindtext[1]
\end{abstract}
\tableofcontents
\listoftables
\listoffigures
%%%%%%%%%%%%%%%%%%%%%%%%% sessions and subsections! %%%%%%%%%%%%%%%%%%
\section{Introduction}
\blindtext[1]
\subsection{My first subsection}
\blindtext[1]
\subsection{Another subsection}
\blindtext[1]
\subsection{Another subsection}
\blindtext[1]
\subsubsection{Sub sub section}
\blindtext[1]
\paragraph{Does it work?}
\blindtext[1]
%%%%%%%%%%%%%%%%%%%%%%%%% Lists! %%%%%%%%%%%%%%%%%%
\section{Lists}
\subsection{Unordered List}
\begin{itemize}
\item red
\item blue
\item green
\end{itemize}
\blindtext[1]
\subsection{Description List}
\begin{description}
\item[apple] red
\item[sky] blue
\item[leaf] green
\item yellow
\end{description}
\blindtext[1]
\subsection{Ordered List}
\begin{enumerate}
\item green
\item red
\item yellow
\item blue
\end{enumerate}
\blindtext[1]
\subsection{Nested list}
\begin{enumerate}
\item green
\begin{itemize}
\item leaf
\end{itemize}
\item red
\begin{itemize}
\item apple
\end{itemize}
\item yellow
\item blue
\end{enumerate}
%%%%%%%%%%%%%%%%%%%%%%%%% Figures! %%%%%%%%%%%%%%%%%%
\section{Figure}\label{sec:fig}
Figure \ref{fig:sample-fig} shows a sample figure from the graphicx package.
\blindtext[1]
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{example-grid-100x100bp}
\caption{Sample figure from graphicx package.}
\label{fig:sample-fig} % used to reference the figure in the text
\end{figure}
Figure \ref{fig:uni-pic} shows a picture of the University of Leeds. This picture is part of Section \ref{sec:fig}.
\blindtext[1]
\begin{figure}[h] % [h] = try to put float "here"
\centering
\includegraphics[width=0.4\textwidth]{uni.jpg}
\caption{University of Leeds picture.}
\label{fig:uni-pic} % used to reference the figure in the text
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%% Table! %%%%%%%%%%%%%%%%%%
\section{Table}\label{sec:tab}
\blindtext[1] See Table \ref{tab:col_fruit} in Section \ref{sec:tab} for a fruit of each colour.
\begin{table}[h]
\centering
\begin{tabular}{|c||c|}\hline % "\hline" = draws a horizontal line
Colour & Fruit \\ \hline \hline % "&" = change column, "\\" = line break
Red & Apple \\
Yellow & Banana \\
Green & Grape \\
Blue & Blueberry \\ \hline
\end{tabular}
\caption{Table of colours and fruits.}
\label{tab:col_fruit}
\end{table}
\blindtext[1]
%%%%%%%%%%%%%%%%%%%%%%%%% Math! %%%%%%%%%%%%%%%%%%
\section{Math}\label{sec:math}
You can insert math inline by using \$ delimiters, like: $y = \alpha + \beta x$.
You can add the equation in a new paragraph:
\[
\text{May the } \frac{d p}{d t} \text{ be with you!}
\]
Or you can add the equation in a numbered math environment:
According to \cite{newton1850}, the force is defined as:
\begin{equation}
F = ma = m\frac{dv}{dt} = \frac{dp}{dt},
\end{equation}
where $p$ is the momentum.
It is also possible to combine the equation environment with an array environment:
\begin{equation}
A = \left(
\begin{array}{ccc}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
\end{array}
\right)
\end{equation}
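It is also possible to typeset multi-line derivations with the \texttt{align} environment from the \texttt{amsmath} package (already loaded in the preamble), for example:
\begin{align}
(a+b)^2 &= a^2 + 2ab + b^2 \\ % "&" = alignment point, "\\" = line break
\sin^2\theta + \cos^2\theta &= 1
\end{align}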
\bibliography{sample}
\end{document}
"alphanum_fraction": 0.6232350756,
"avg_line_length": 24.7668711656,
"ext": "tex",
"hexsha": "46cf8e2122f9ff22aa1fbad7ac0ccd8ba1c30495",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6b78c848d9011a236c6a369a2431d99c09075ca8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "patricia-ternes/LIDA-LaTeX-workshop",
"max_forks_repo_path": "inputs/tex_project/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6b78c848d9011a236c6a369a2431d99c09075ca8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "patricia-ternes/LIDA-LaTeX-workshop",
"max_issues_repo_path": "inputs/tex_project/main.tex",
"max_line_length": 115,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6b78c848d9011a236c6a369a2431d99c09075ca8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "patricia-ternes/LIDA-LaTeX-workshop",
"max_stars_repo_path": "inputs/tex_project/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1195,
"size": 4037
} |
\documentclass{article}
% If you're new to LaTeX, here's some short tutorials:
% https://www.overleaf.com/learn/latex/Learn_LaTeX_in_30_minutes
% https://en.wikibooks.org/wiki/LaTeX/Basics
% Formatting
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage[titletoc,title]{appendix}
\usepackage{hyperref}
% Math
% https://www.overleaf.com/learn/latex/Mathematical_expressions
% https://en.wikibooks.org/wiki/LaTeX/Mathematics
\usepackage{amsmath,amsfonts,amssymb,mathtools}
% Images
% https://www.overleaf.com/learn/latex/Inserting_Images
% https://en.wikibooks.org/wiki/LaTeX/Floats,_Figures_and_Captions
\usepackage{graphicx,float}
% Tables
% https://www.overleaf.com/learn/latex/Tables
% https://en.wikibooks.org/wiki/LaTeX/Tables
% Algorithms
% https://www.overleaf.com/learn/latex/algorithms
% https://en.wikibooks.org/wiki/LaTeX/Algorithms
\usepackage[ruled,vlined]{algorithm2e}
\usepackage{algorithmic}
% Code syntax highlighting
% https://www.overleaf.com/learn/latex/Code_Highlighting_with_minted
\usepackage{minted}
\usemintedstyle{borland}
% References
% https://www.overleaf.com/learn/latex/Bibliography_management_in_LaTeX
% https://en.wikibooks.org/wiki/LaTeX/Bibliography_Management
\usepackage{biblatex}
\addbibresource{references.bib}
% Title content
\title{Statistical Physics - Paris Physics Master\\
Home exercices - Solution}
\author{David L. Paipa}
\date{November 2020}
\begin{document}
\maketitle
% Abstract
% \begin{abstract}
% Add your abstract here.
% \end{abstract}
% Introduction and Overview
\section{Random Walk}
% Example Subsection
\subsection{Displacement}
\textit{
Consider a random walk in three dimensions: each step is a vector whose components are three random numbers uniformly distributed in the interval $[-a;+a]$ (x; y components) or $[d -a; d + a]$ (z component), with $a > 0$. What are the average displacement and the mean squared displacement after N steps? Explain your calculations, and propose a possible physical system that could be modelled in this way.}
% Example Subsubsection
\subsection*{Solution}
\subsubsection*{position after N steps}
First let us compute the expected value of the position along each axis, i.e. $\langle n_x\rangle ,\langle n_y \rangle,\langle n_z\rangle$. The derivation for the X-axis is the same as for the Y-axis, since each new step is drawn from the same distribution of values in both directions. I assume the walk starts at the coordinates $(0, 0, 0)$.\\
Let us denote by $\langle n_x \rangle_N$ the expected X position of the walker after $N$ steps, which is equivalent to:
\begin{equation}
\langle n_x \rangle_N = \langle \sum_{i=1}^N x_i \rangle = \sum_{i=1}^N \langle x_i \rangle
\end{equation}
where each $x_i$ is taken from a uniform distribution in the range $[-a,a]$. As each $x_i$ is uncorrelated with $x_j$ for $i \neq j$, it is true that
\begin{equation}
\langle n_x \rangle_N = N \langle x_i \rangle
\end{equation}
We can assume that a value $\ell$ is taken such that $0 \leq \ell \leq a$ and that $-\ell$ is equally probable in the given distribution for any $\ell$. Then $P(x_i = -\ell) = P(x_i = \ell) = 1/2$ and therefore,
\begin{equation}
\langle x_i \rangle = \sum_{k} p_k x_k = \frac{1}{2}(-\ell) + \frac{1}{2}(\ell) = 0 \quad \forall \ell \in [0,a]
\end{equation}
where $p_k$ is the probability associated with the outcome $x_k$. Since this holds for any magnitude $\ell$ in the given interval, I conclude $\langle x_i \rangle = 0$. Then,
\begin{equation}
\langle n_x \rangle_N = \langle n_y \rangle_N = 0
\end{equation}
For $\langle n_z\rangle$ the distribution of every new step is different. We use the same random magnitude $\ell$ within the same range $0 \leq \ell \leq a$ (with the same statistics as for $x_i$), but this time we add a constant offset $d$. We can say that,
\begin{equation}
\langle n_z \rangle_N =\langle \sum_{i=1}^N z_i \rangle = \langle \sum_{i=1}^N d + x_i \rangle = \sum_{i=1}^N d + \langle x_i \rangle
\end{equation}
Since each $z_i$ is likewise independent from $z_j$ when $i \neq j$, it is true that
\begin{equation}
\langle n_z \rangle_N = N \left( d + \langle x_i \rangle \right) = Nd
\end{equation}
Given the previous definitions I can conclude that
\begin{equation}
\langle \vec{r} \rangle_N = (0,0,Nd)
\end{equation}
These conclusions can be compared with simulations of a random walk using the rules shown previously. The observed results agree with the theory.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/pos_randomwalk.png}
\caption{Random walk in each axis showing the final position of the walkers after N steps, the mean value obtained by the simulation in each axis and the analytically inferred expected value.}
\end{figure}
\subsubsection*{Mean squared position}
For the mean squared displacement along each axis I will use an integral solution. I denote the mean squared displacements as $\langle n_x^2\rangle ,\langle n_y^2 \rangle,\langle n_z^2\rangle$. Again, the solution for $\langle n_x^2\rangle$ also applies to $\langle n_y^2\rangle$. By definition,
\begin{equation}
\langle n_x^2\rangle_N = \langle \left(\sum_{i=1}^N x_i\right)^2 \rangle = \langle \sum_{i=1}^N x_i^2 + \sum_{i=1}^N \sum_{\substack{j=1\\i\neq j}}^N x_i x_j \rangle = N \langle x_i^2 \rangle
\end{equation}
Using the fact that the $x_i$ are independent with zero mean, it is true that $\langle x_i x_j \rangle = 0$ for $i \neq j$. For the value of $\langle x_i^2\rangle$ we have that
\begin{equation}
\langle x_i^2\rangle = \int_{-a}^{a} x_i^2 P(x_i) dx = \frac{1}{2a} \left[ \frac{x_i^3}{3} \right]_{-a}^{a} = \frac{2a^3}{6a} = \frac{a^2}{3}
\end{equation}
Note that the factor $(2a)^{-1}$ is the probability density $P(x_i)$ over the interval $[-a,a]$, since the accumulated probability in the interval should be 1. Therefore,
\begin{equation}
\langle n_x^2\rangle_N =\langle n_y^2\rangle_N = N\frac{a^2}{3}
\end{equation}
Notice that the variance of a single step matches the variance of the uniform distribution: for a (normalized) uniform distribution on an interval of length $\ell$ the variance is
\begin{equation}
Var(U)_{\ell} = \frac{\ell^2}{12}
\end{equation}
In the case of a single step $x_i$ (and, by extension, $y_i$) we have that
\begin{equation}
Var(x_i) = \langle x_i^2 \rangle - \langle x_i \rangle^2 = \langle x_i^2 \rangle = \frac{a^2}{3} = \frac{(2a)^2}{12}
\end{equation}
For $\langle n_z^2 \rangle_N$ I used a different approach. We start from the variance of $n_z$, which is the same as the variance of $n_x$ and $n_y$. This follows from the fact that the spread of each step around its mean value is the same ($a$) and each step is sampled from the same (uniform) distribution. Then,
\begin{equation}
Var(n_z)_N = N\frac{a^2}{3} = \langle n_z^2 \rangle_N - \langle n_z \rangle^2_N
\end{equation}
From the previous expression\footnote{The variance of a sum of independent random variables is equal to the sum of the variances, since the covariance terms cancel by independence. This allows multiplying the single-step variance by $N$ for the sum of $N$ independent outcomes.} we can infer the value of $\langle n_z^2 \rangle$ as
\begin{equation}
\langle n_z^2 \rangle_N = Var(n_z)_N + \langle n_z \rangle^2_N = N\frac{a^2}{3} + (Nd)^2
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/pos2_randomwalk.png}
\caption{ The mean value of the squared displacement obtained by the simulation in each axis and the analytically inferred expected value.}
\end{figure}
The mean squared displacement is then given by
\begin{equation}
\langle \vec{r}^2 \rangle_N = \langle n_x^2 \rangle_N + \langle n_y^2 \rangle_N + \langle n_z^2 \rangle_N = Na^2 + (Nd)^2
\end{equation}
Then
\begin{equation}
\langle D \rangle = \sqrt{ \langle \vec{r}^2 \rangle_N} = \sqrt{Na^2 + (Nd)^2}
\end{equation}
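As a numerical illustration (with arbitrary values, not taken from the exercise), for $N = 100$, $a = 1$ and $d = 0.1$ these expressions give
\begin{equation*}
\langle \vec{r} \rangle_N = (0, 0, 10), \qquad \langle \vec{r}^2 \rangle_N = 100 + 100 = 200, \qquad \langle D \rangle = \sqrt{200} \approx 14.1
\end{equation*}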
We can verify these values with the simulated walks mentioned previously.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{imgs/d_randomwalk.png}
\caption{The root mean squared displacement, i.e., the square root of the mean squared distance.}
\end{figure}
When looking at the walks in Figure \ref{img:walks} we can observe diffusion phenomena in an isotropic medium (given that fluctuations in all three dimensions are of the same magnitude $a$) but with an additional constant displacement in one of the three spatial dimensions. This can be compared to physical systems such as a drop of ink (or any pollutant) diluting in a solvent under the influence of gravity. Another good example could be smoke originating at a point and spreading upwards due to the convection of hot air.
\subsection{Central Limit Theorem}
\textit{Explain what we can infer about the long-time displacement of a molecule in a liquid from the central limit theorem, without knowing the detailed interatomic forces. Can we apply the same reasoning to an impurity atom in a solid?}
\subsection*{Solution}
The Central Limit Theorem states that the (normalized) sum of many \textbf{independent random variables} tends to a normal distribution even if the distribution from which the random variables are drawn does not have Gaussian properties. This scenario can be seen in random walks, or Brownian motion, since these are precisely the result of the sum of many random outcomes from the same distribution\footnote{More general versions of the theorem state that the sum of random variables from different distributions (different mean and variance) can also be described by this theorem. The necessary condition is that both the mean and the variance of the distributions are finite.}. After a long time and with enough particles subject to Brownian motion, it is easy to show that the trend is towards a normal distribution. In the case studied, it can be observed in the position distributions along each of the axes and in the RMS displacement distribution that the central limit theorem applies.
The displacement of a molecule in a liquid is describable by the phenomenon of Brownian motion. This is because the molecule is constantly colliding with the molecules of the liquid solvent, bouncing and traveling an arbitrarily small distance in an equally random direction. If the solvent is isotropic and homogeneous, these displacements are not biased towards any direction or conditioned by any intensive property of the liquid itself. The latter follows since changing properties such as the density or viscosity of the fluid would only affect the average distances of these small displacements and the frequency at which they happen, but would leave the nature of the random walk of the molecule in the solvent invariant. The homogeneous medium ensures that, in a medium without any type of external potential (for example, gravity), for any arbitrary displacement vector $\vec{r}$ the vector $-\vec{r}$ is just as probable, and therefore over a long period of time the central limit theorem applies to describe the stochastic process.\\
A solid is a structure organized in static lattices of atoms or ions that are only allowed to vibrate within these lattices, transmitting specific oscillation frequencies without changing places within the lattice. The Brownian phenomenon cannot explain the motion of an impurity within a solid, mainly because, besides the positions being fixed within the lattice, the vibrations depend on interatomic forces, and these forces do not necessarily have to be spatially isotropic.
\section{Partition functions}
\subsection{Electric polarization}
\textit{Calculate the electric polarization \textbf{P} of an ideal gas, consisting of molecules having a constant electric dipole moment $p$ in a homogeneous external electric field \textbf{E} at temperature $T$. What is the dielectric constant of this gas at small fields?}
\subsection*{Solution}
\subsubsection*{Polarization}
First let us recall the relation between the thermodynamic potentials and the free energy of the system:
\begin{equation}
dF(E,T,V) = -PdV-SdT-\textbf{P}d\textbf{E}
\end{equation}
Where the polarization \textbf{P} is given by
\begin{equation}
\textbf{P} = -\frac{dF}{d\textbf{E}}
\end{equation}
The free energy is in turn given by the partition function for the ensemble of $N$ particles subject to the electric field in question. It is important to bear in mind that these particles are treated as point-like and have no kinetic energy associated with rotation, even though their orientation does influence the potential energy associated with the field. The free energy of the ensemble is
\begin{equation}
F = -K_B T \ln{Z}
\end{equation}
where $Z$ is the associated partition function. This represents the space of possible micro-states of the ensemble, encoding the statistical properties of the system. For the case of a fixed number of particles $N$ we have that
\begin{equation}
Z = \left( N! h^{3N}\right)^{-1} \int e^{- \beta \mathcal{H}(q_k,p_k)} d^{3}q_k d^{3}p_k
\label{eq:Z_part}
\end{equation}
In this last equation $p_k$ and $q_k$ are the respective momentum and position coordinates of the phase space in which the Hamiltonian is expressed, and the integral is taken over all of phase space. To define the energy function for one particle, or Hamiltonian, we note that the potential energy is only given by the external electric field \textbf{E}, so that
\begin{equation}
U = -\Vec{E}\cdot \Vec{p} = -|E||p|\cos \theta = U(\theta)
\end{equation}
where $\theta$ is the angle between the dipole moment $\Vec{p}$ and the electric field $\Vec{E}$. The kinetic energy is only given by the momentum $\vec{p_k}$ of the particle, so the single-particle Hamiltonian is of the form
\begin{equation}
H_1 = \frac{\vec{p_k}^2}{2m} -|E||p|\cos \theta
\end{equation}
Since we are dealing with identical classical particles, the Hamiltonian of the system is
\begin{equation}
\mathcal{H} = \sum_{i=1}^N \left( \frac{\vec{p}_{k,i}^{\,2}}{2m} -|E||p_i|\cos \theta_i \right)
\end{equation}
Using this representation of the Hamiltonian we can separate the integrals over the momentum and spatial coordinates in equation \ref{eq:Z_part}. Given the symmetry of the problem it is better to work in spherical coordinates for the position space. This leaves the partition function as
\begin{equation}
Z_N = \left( N! h^{3N}\right)^{-1} \left( \int_{-\infty}^{\infty} e^{- \frac{\beta}{2m}\vec{p_{k}}^2}d^{3}p_k \int e^{\beta \textbf{E} p \cos \theta}d^{3}q_k \right)^N
\end{equation}
\begin{equation*}
Z_N = \left( N! h^{3N}\right)^{-1} \left(\int_{-\infty}^{\infty} e^{- \frac{\beta}{2m} p_{k}^2}dp_k\right)^{3N} \left(\int_{0}^{2\pi} d\phi \int_{0}^{\pi} e^{\beta \textbf{E} p \cos \theta}\sin\theta d\theta \right)^{N}
\end{equation*}
Solving the first integral for the momentum space we obtain that
\begin{equation*}
\left(\int_{-\infty}^{\infty} e^{- \frac{\beta}{2m} p_{k}^2}dp_k\right)^{3N} = \left( \frac{2m\pi}{\beta} \right)^{\frac{3N}{2}}
\end{equation*}
We can separate the integral over $\phi$ since the system is independent of this coordinate. At this point we have that the partition function is given by
\begin{equation}
Z_N = \frac{1}{N!}\left( \frac{2m\pi}{h^2 \beta} \right)^{\frac{3N}{2}} (2\pi)^N \left( \int_{0}^{\pi} e^{\beta \textbf{E} p \cos \theta}\sin\theta d\theta \right)^{N}
\end{equation}
where the solution of this integral over $\theta$ is
\begin{equation*}
\left( \int_{0}^{\pi} e^{\beta \textbf{E} p \cos \theta}\sin\theta d\theta \right)^{N} = \left( \frac{2}{\beta p \textbf{E}} \sinh{\beta p \textbf{E}} \right)^{N}
\end{equation*}
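This result follows from the substitution $u = \cos\theta$, $du = -\sin\theta\, d\theta$:
\begin{equation*}
\int_{0}^{\pi} e^{\beta \textbf{E} p \cos \theta}\sin\theta\, d\theta = \int_{-1}^{1} e^{\beta \textbf{E} p\, u}\, du = \frac{e^{\beta \textbf{E} p} - e^{-\beta \textbf{E} p}}{\beta \textbf{E} p} = \frac{2}{\beta p \textbf{E}} \sinh{\left(\beta p \textbf{E}\right)}
\end{equation*}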
Finally, an expression for the partition function for N particles is
\begin{equation}
Z_N = \frac{1}{N!}\left[\left( \frac{2m\pi}{h^2 \beta} \right)^{\frac{3}{2}} \left( \frac{4\pi}{\beta p \textbf{E}} \sinh{\left(\beta p \textbf{E}\right)} \right)\right]^{N}
\end{equation}
The free energy is given by
\begin{equation}
F = -K_B T \ln{Z} = K_B T\left[ \ln{N!}-\frac{3N}{2} \ln{\left( \frac{2m\pi}{h^2 \beta}\right)} - N\ln{\left( \frac{4\pi}{\beta p \textbf{E}} \right)} - N \ln{\left( \sinh{\left(\beta p \textbf{E}\right)} \right)}\right]
\end{equation}
and differentiating this result with respect to the electric field \textbf{E} we obtain
\begin{equation}
-\frac{\partial F}{\partial \textbf{E}} = N\left[-\frac{1}{\beta\textbf{E}} + p \coth{\left(\beta p \textbf{E}\right)}\right] = \textbf{P}
\end{equation}
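This is the classical Langevin result: writing $L(x) = \coth(x) - 1/x$ for the Langevin function, the expression above reads compactly as
\begin{equation*}
\textbf{P} = N p\, L\!\left(\beta p \textbf{E}\right)
\end{equation*}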
\subsubsection*{Dielectric constant}
The dielectric constant is the response of the ensemble polarization \textbf{P} to changes in the field \textbf{E}. This can be approximated with the Laurent series expansion of the function $\coth(x)$, which states that
\begin{equation}
\coth (x) = \frac{1}{x} + \frac{x}{3} - \frac{x^3}{45} + \ldots
\end{equation}
\begin{equation*}
\coth (x) = \frac{1}{x} + \frac{x}{3} - \mathcal{O}(x^3)
\end{equation*}
But given that we are dealing with small electric fields we can neglect the third-order term in $x$ (which is proportional to \textbf{E}); then the approximation of the polarization\footnote{I leave aside the extensive factor of $N$ for simplicity in the formulas. It can be reintroduced by a simple product in further steps.} for small fields tends to
\begin{equation}
\textbf{P} = -\frac{1}{\beta E} + \frac{1}{\beta E} + \frac{p^2 \beta \textbf{E}}{3} + \mathcal{O}(\textbf{E}^3) = \frac{p^2 \textbf{E}}{3 K_B T}
\end{equation}
Therefore, the dielectric constant is given by
\begin{equation}
\mathcal{X} = \frac{\textbf{P}}{\textbf{E}} = \frac{p^2}{3 K_B T}
\end{equation}
In the additional Figure \ref{img:polarization} we can see the shape of this polarization and the accuracy of the low-field approximation.
\subsection{Non-degenerate Atomic levels}
\textit{Consider a system composed of a very large number $N$ of distinguishable atoms at rest and mutually noninteracting, each atom having only two (nondegenerate) energy levels: 0, $\epsilon > 0$. Consider the limit $N \to \infty$. What is the maximum possible value of $E/N$ if the system is in thermal equilibrium at $T > 0$? Compute the entropy per atom $S/N$ as a function of $E/N$.}
\subsection*{Solution}
\subsubsection*{Mean Energy per atom}
When dealing with thermodynamic equilibrium it is important to take into account that there are no energy flows and that the entropy $S$ is maximal for the isolated system. First, let us note that the single-atom partition function of the system is
\begin{equation}
Z = e^{-\beta \epsilon_o}+e^{-\beta \epsilon_1} = 1 + e^{-\beta \epsilon}
\end{equation}
and by extension the occupation numbers of the two energy states are
\begin{equation}
n_o = \frac{N}{Z} e^{-\beta \epsilon_o} = \frac{N}{1 + e^{-\beta \epsilon}}
\end{equation}
\begin{equation*}
n_1 = \frac{N}{Z} e^{-\beta \epsilon_1} = \frac{N e^{-\beta \epsilon}}{1 + e^{-\beta \epsilon}}
\end{equation*}
The following table, constructed from these functions, shows how the occupation numbers behave in the two temperature regimes: low temperature $T\downarrow$ ($T\approx 0$) and high temperature $T\uparrow$ ($T\to\infty$).
\begin{center}
\begin{tabular}{c | c c}
n & $T\downarrow$ & $T\uparrow$\\
\hline
$n_o$ & $\sim N$ & $N/2$\\
$n_1$ & $\sim 0$ & $N/2$\\
\end{tabular}
\end{center}
The maximum possible value of the mean energy per atom $E/N$ at $T>0$ is approached in the high-temperature limit, where $n_1 \to N/2$:
\begin{equation}
E/N = \frac{n_o \epsilon_o + n_1 \epsilon_1}{N} = \frac{\epsilon}{2}
\end{equation}
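More generally, combining the occupation numbers above, for any finite temperature
\begin{equation*}
E/N = \frac{n_1 \epsilon}{N} = \frac{\epsilon\, e^{-\beta \epsilon}}{1 + e^{-\beta \epsilon}} = \frac{\epsilon}{e^{\beta \epsilon} + 1},
\end{equation*}
which increases monotonically with $T$ and approaches its maximum value $\epsilon/2$ only in the limit $T \to \infty$.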
\subsubsection*{Mean Entropy per atom}
For the entropy we have the classical Boltzmann formula
\begin{equation}
S = K_B \ln \Omega
\end{equation}
where $\Omega$ is the total number of microstates the system can reach. Since each atom has two states, $\Omega$ can be written as a binomial coefficient. The only contributions to the total energy come from atoms in the excited state, so $E/\epsilon$ represents the number of atoms in the excited state. Then,
\begin{equation}
S = K_B \ln \left( \frac{N!}{\left(\frac{E}{\epsilon}\right)!\left(N - \frac{E}{\epsilon}\right)!} \right)
\end{equation}
%\left(\right)
\begin{equation*}
= K_B\left[ \ln\left(N!\right) - \ln\left(\left(\frac{E}{\epsilon}\right)!\right)-\ln\left(\left(N - \frac{E}{\epsilon}\right)!\right)\right]
\end{equation*}
Notice that since $N \to \infty$ and $E/\epsilon \gg 1$, it is also true that $N - E/\epsilon \gg 1$. From now on let us use the notation $E/N = \hat{\epsilon}$. This is useful because it allows the use of the Stirling approximation\footnote{$\ln(N!) \approx N \ln(N) - N$ for $N\gg 1$}. This leaves us with the expression
\begin{equation}
S/N = \frac{K_B}{N}\left[ N\ln(N) -\frac{E}{\epsilon}\ln \left(\frac{E}{\epsilon}\right)-\left(N-\frac{E}{\epsilon}\right)\ln\left(N - \frac{E}{\epsilon}\right)\right]
\end{equation}
\begin{equation*}
= K_B\left[\ln(N) -\frac{E}{N\epsilon}\ln \left(\frac{E}{\epsilon}\right)-\left(1-\frac{E}{N\epsilon}\right)\ln\left(N - \frac{E}{\epsilon}\right)\right]
\end{equation*}
Now we rewrite $S/N$ in terms of $\hat{\epsilon}$:
\begin{equation}
S/N = K_B\left[\ln(N) -\frac{\hat{\epsilon}}{\epsilon}\ln \left(\frac{E}{\epsilon}\right)+\left(1-\frac{\hat{\epsilon}}{\epsilon}\right)\ln\left(\frac{1/N}{1-\frac{\hat{\epsilon}}{\epsilon}} \right)\right]
\end{equation}
Finally obtaining
\begin{equation}
S/N = K_B\left[\frac{\hat{\epsilon}}{\epsilon}\ln \left(\frac{\epsilon}{\hat{\epsilon}}\right)+\left(1-\frac{\hat{\epsilon}}{\epsilon}\right)\ln\left(\frac{1}{1-\frac{\hat{\epsilon}}{\epsilon}} \right)\right]
\end{equation}
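As a consistency check, at the maximum mean energy $\hat{\epsilon} = \epsilon/2$ this expression gives
\begin{equation*}
S/N = K_B\left[\tfrac{1}{2}\ln 2 + \tfrac{1}{2}\ln 2\right] = K_B \ln 2,
\end{equation*}
the maximum possible entropy per atom for a two-level system.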
\section{Free energy landscapes}
\subsection{Transition rates}
\textit{The graph here below represents the energy along the trajectory of a metastable system simulated in the $NVT$ ensemble. From the graph, make an approximate estimate of the probability of the two metastable states A and B, and of the transition rates $\kappa_{A \to B}$ and $\kappa_{B \to A}$. see figure \ref{img:FELS}}
\subsection*{Solution}
We know that there are two states, A and B, and each one is centered around an energy value ($\epsilon_A \approx -3 K_B T$ and $\epsilon_B \approx -7K_B T$). \\
First let us quantify the relative probability of being in each state. For this, it is enough to count, at each timestep, the current state and take this as the frequency $\hat{f}$ of the sample, then calculate the relative frequency $f$ over the total number of timesteps measured.
\begin{equation}
\hat{f}(A) = 10 \implies f(A) = \frac{1}{4} = \mathcal{P}(A)
\end{equation}
\begin{equation}
\hat{f}(B) = 30 \implies f(B) = \frac{3}{4} = \mathcal{P}(B)
\end{equation}
Treating each timestep as a transition between two states, we can classify each timestep as $T_{ij}$, associated with the event of passing from state $i$ to state $j$. Then,
\begin{center}
\begin{tabular}{c | c c c c}
\hline
$T_{ij}$ & $\hat{f}$ & f & $\hat{P}_A$ & $\hat{P}_B$\\
\hline
$T_{AA}$ & 3 & 3/40 & 3/10 & -\\
$T_{AB}$ & 7 & 7/40 & 7/10 & -\\
$T_{BA}$ & 7 & 7/40 & - & 7/30 \\
$T_{BB}$ & 23 & 23/40 & - & 23/30\\
\hline
\end{tabular}
\end{center}
Here $\hat{P}_A$ and $\hat{P}_B$ are the relative probabilities\footnote{Note that the relative frequency of being in a state $S$ at any time is the relative frequency of transitioning from the other state to the state $S$ plus the relative frequency of being in the state $S$ and staying in the same state. Then for any state $S_i$ of $k$ available states:
\begin{equation*}
f(S_i) = \sum_{j=1}^{k} f(T_{S_jS_i})
\end{equation*}
} given the initial state, since, for example, a transition from A to B is only possible if the system was in state A. These probabilities are the respective transition rates we are looking for, therefore
\begin{center}
\begin{tabular}{c | c}
\hline
$\kappa_{A \to A}$ & 3/10\\
$\kappa_{A \to B}$ & 7/10\\
$\kappa_{B \to A}$ & 7/30\\
$\kappa_{B \to B}$ & 23/30\\
\hline
\end{tabular}
\end{center}
\subsection{Energy differences}
\textit{Now estimate $\Delta F_{AB}$ (free energy difference), $\Delta E_{AB}$ (energy difference), $T\Delta S_{AB}$ (temperature $\times$ entropy difference), explaining your calculation. Which state has larger entropy?}
\subsection*{Solution}
For a system evolving over a long period of time we know that for a state $S$ it is true that
\begin{equation}
\lim_{t \to \infty}\mathcal{P}(S) = \frac{1}{Z} e^{-\beta F_S}
\end{equation}
where $F_S$ is the free energy associated with the state $S$. Taking into account that $\Delta F_{AB} = F_A - F_B$ and that both states A and B are described within the same partition function, we have
\begin{equation}
\frac{\mathcal{P}(A)}{\mathcal{P}(B)} = e^{-\beta(F_A-F_B)} \quad \implies \Delta F_{AB} = K_B T \ln{\left( \frac{\mathcal{P}(B)}{\mathcal{P}(A)}\right)}
\end{equation}
Since we have already calculated $\mathcal{P}(A)$ and $\mathcal{P}(B)$, we also know that $\mathcal{P}(B)/\mathcal{P}(A) = 3$, then
$\Delta F_{AB} = 1.0986 K_B T \approx 1.1 K_B T$.\\
For the difference in energy we have that $\Delta E_{AB} = \epsilon_A -\epsilon_B \approx 4 K_B T$.\\
Recalling the thermodynamic relation\footnote{Helmholtz relation}, it is true that
\begin{equation}
\Delta F = \Delta E- T \Delta S \quad \implies \Delta E_{AB} - \Delta F_{AB} = T \Delta S_{AB}
\end{equation}
Given this, the differences needed to establish this relation are already known. Then, $T \Delta S_{AB} = 4K_B T - 1.1 K_B T = 2.9 K_B T = T (S_A - S_B)$.
With this result I can conclude that \textbf{the state A has a larger entropy}.
\subsection{Order parameter Q}
\textit{Suppose that the transition between A and B is described well by an order parameter $Q$: make a simple sketch of the free energy landscape $F(Q)$, indicating the location of transition state configurations. Give a possible example of such a physical system, and of the corresponding order parameter.}
\subsection*{Solution}
An example of this type of physical system would be the free energy of coiling in polymers suspended in a solvent. The viscous properties (which on a microscopic scale are related to electrostatic phenomena) and the density of the polymer in the solvent can generate metastable states at certain radii of gyration of the polymers. If we keep the viscosity and density of the solvent fixed, the radius of gyration of the polymers (or, equivalently, the distance between the monomers of the polymer chain) can serve as an order parameter in this system. This example is illustrated in \cite{oligom_2014}, Figure 4, where the free energy is modeled as a function of these radii and the metastable states can be identified in the free energy landscape.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/FEL_Q.png}
\caption{Sketch of the free energy landscape with the calculated difference between $F_A$ and $F_B$. The values displayed for $F$ are arbitrary. The lower the value of $F$ for a certain state, the more likely it is that the system is in that state.}
\label{img:FEL_sketch}
\end{figure}
\subsection{Averages}
\textit{In this exercise you replaced ensemble averages with another type of averages: define both of them mathematically. What are the conditions for the two types to give the same results? Give a couple of examples of physical experiments where these conditions hold, and a couple where they do not hold.}
\subsection*{Solution}
In the exercise, averages over time are used to stand in for statistical properties that are obtained from averages over ensembles. The averages over time are calculated as arithmetic means of the properties of \textbf{one system} at different observation times during a long experiment, while the ensemble averages are calculated as averages of the observed property over \textbf{multiple systems} at a given time, without any kind of relationship with the time evolution (causal disconnection and independence between the states of the systems). These averages are equivalent in \textbf{ergodic systems}, in which it is assumed that any trajectory (temporal evolution from an initial point) will visit all the possible states of the system given a sufficiently long observation time. It can then be inferred that an arithmetic mean of the observed property over the points of phase space visited by a single system trajectory (all of them) is equivalent to the average value obtained by taking into account all possible systems.
Applied to, for example, a system of many particles, it is considered equivalent to take the temporal average of a certain property for a single particle over a sufficiently long observation time, and to take the average of said property over all the particles at a certain instant. The greater the number of (indistinguishable) particles, the more accurate it is to say that both averages are equivalent, following the \textbf{ergodic hypothesis}.
Another relevant condition to have the equivalence between the averages is that they must be \textbf{stationary processes}. This means that the probabilities associated with the observed properties should not change according to time translation. If this is not fulfilled, observing a single system over a long period is not equivalent to observing all systems for a certain time, since the properties of the systems are different depending on the time that it is chosen to observe. It is important to note that \textbf{not all stationary processes are associated with ergodic systems, but ergodic systems must be stationary processes}.
\subsubsection*{Mathematical descriptions}
For time averages, the properties to be measured at different instants of time are taken for the same system $S$. Let us assume an observable $A$ of the system (starting at the point $x_o$ at $t_o$) that is measured $n$ times over a given observation time $T$:
\begin{equation}
\hat{A_t}(S) = \frac{1}{n}\sum_{t = 0}^{n-1} A(x_t)
\end{equation}
This is under the premise that $t_o < t_1 < t_2 < \ldots < t_{n-1} < T$.\\
For the ensemble average we have an ensemble composed of $N$ systems, and at a given time the properties of all systems $s_i$ are observed and averaged.
\begin{equation}
\hat{A_e}(S) = \frac{1}{N}\sum_{i = 1}^{N} A(s_i)
\end{equation}
The two averages are equivalent when we have large observation times for $A_t$ and a large number of systems for $A_e$.
\subsubsection*{Examples}
An example of a non-ergodic system is the unbiased random walk, for which the temporal average of the displacement of a single walk is an erratic and random value, but for many systems the average displacement is 0. Another example is playing billiards on an elliptical table, which has some initial states from which, despite all the possible collisions, certain areas of the table can never be reached. \\
An example of an ergodic system is the toss of a coin. By flipping the coin many times (with results 1 or 0), the average eventually approaches 1/2 for a long observation time. If many coins are picked and tossed at the same time, the average of the outcomes also tends to 1/2. Another good example is the ideal gas in a box, where it is assumed that after a long observation time, all accessible states in phase space are visited by the system.
\subsection{Poisson's law}
\textit{In general, rare transitions between metastable states are well described by Poisson's probability law: why? Compute the mean and the standard deviation of the first passage time (the time we have to wait to observe a transition from one metastable state to another one), considering that its probability density is given by the probability to observe zero jumps between 0 and t multiplied by the probability to observe one jump between t and t + dt.}
\subsection*{Solution}
The Poisson distribution is valid for the case of rare state transitions as it meets the following premises:
\begin{itemize}
\item A state transition can occur any number of times in a given time interval.
\item A transition event is independent of other past transition events.
\item The rate of occurrence of the transition event remains constant over time.
\item A longer observation time proportionally implies more observed transition events.
\end{itemize}
The Poisson distribution that describes \textit{``observing 0 transitions between time 0 and $t$''} is
\begin{equation}
P_o(r) = e^{-rt}
\end{equation}
where $r$ is the observed rate of transitions per timestep. I calculated this rate as $14/40 = 0.35$ given that there are 14 timesteps corresponding to a transition in an observation interval of 40 timesteps.\\
The Poisson probability of observing one event in the next interval $\delta_t$ after $t$ is
\begin{equation}
P_1(r) = r\delta_t e^{-r\delta_t}
\end{equation}
The probability distribution for the first passage time is given by the product of the two distributions;
as $\delta_t \to 0$, I can expand this as
\begin{equation}
P(r) =e^{-rt} \left(r\delta_t e^{-r\delta_t}\right) = e^{-rt} \left(r\delta_t - \left(r\delta_t \right)^2 + \mathcal{O}(\delta_t^3)\right) \approx r\delta_t e^{-rt}
\end{equation}
This corresponds to an exponential first-passage-time density $p(t) = r e^{-rt}$, from which the mean and standard deviation of the first passage time are
\begin{equation}
\mu_p = \frac{1}{r} \approx 2.9\ \text{timesteps}
\end{equation}
\begin{equation*}
\sigma_p = \frac{1}{r} \approx 2.9\ \text{timesteps}
\end{equation*}
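These values follow from the exponential form of the first-passage density $p(t) = r e^{-rt}$:
\begin{equation*}
\mu_p = \int_0^\infty t\, r e^{-rt}\, dt = \frac{1}{r}, \qquad \sigma_p^2 = \int_0^\infty t^2\, r e^{-rt}\, dt - \mu_p^2 = \frac{2}{r^2} - \frac{1}{r^2} = \frac{1}{r^2}.
\end{equation*}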
\newpage
\section{Scientific articles}
\subsection*{Solution}
\textbf{Chosen Article: } \textit{S.K. Ma, Calculation of Entropy from Data of Motion, Journal of Statistical
Physics, Vol. 26, No. 2 (1981) } \cite{entropy_1981}
\subsection{Introduction and concepts}
The paper proposes to use an \textbf{ergodic approach} to calculate the entropy, among other thermodynamic properties of a system, from the trajectory that said system travels in \textbf{phase space}. This way of approaching the problem from the measurements of the system takes into account a \textbf{detailed history} of said trajectory over a considerable period of time to numerically calculate said quantities. As will be discussed later in this analysis, the conclusions made from a familiar example, such as an Ising kinetic model, allow corroborating the theoretical assumptions made regarding the reduction to \textbf{uncorrelated subsystems} within the assembly, as well as the validity of the numerical approximation with respect to the analytical calculation of the observed properties of the system in \textbf{thermodynamic equilibrium}. Another interesting result is the analysis of the nature of the \textbf{metastable states} and their influence on the trajectories of the phase space, as well as the study of the \textbf{relaxation times} that are defined for the system.
\subsubsection{Mechanical and Ensemble Approach}
The perspective of the analysis is established from two points of view: the \textbf{mechanical view}, which takes into account the trajectory in phase space and the history of the system to draw conclusions regarding its thermodynamic properties, and the \textbf{ensemble view}, which interprets the system as a Gibbs ensemble for which a region in phase space is defined that depends on the properties of the entire system, as do the thermodynamic properties. \\
The \textbf{ensemble view} requires the mathematical formulation of the system as a defined ensemble, and its thermodynamic properties are derived from the representation of the ensemble in phase space as a hypervolume of many (even infinitely many) possible subsystems, a volume that moves within a restricted region of phase space following certain conservation properties (Liouville's theorem). This makes the analysis of phase space and thermodynamic properties \textbf{independent of time}. This point of view requires defining the type of ensemble to be treated, which depends on the conserved quantities of the system and its boundary conditions. A weakness of this approach to the problem is the \textbf{ambiguity} that arises when dealing with metastable states.\\
On the other hand, the \textbf{mechanical view} analyzes how the system moves within the phase space with the evolution of the system for a considerable time of observation and concludes statistical properties from the history of states \textit{visited}. Note that to approach the problem from this perspective it is necessary to make some assumptions:
\begin{itemize}
\item The observation time is \textbf{considerably longer} than the estimated relaxation time defined according to the system. This relaxation time is associated with the shortest relevant changes within the phase space.
\item Separate states both spatially (within phase space) and/or temporally (several relaxation times) are \textbf{uncorrelated}.
\item The states that are obtained over time are \textbf{randomly distributed} in the phase space so that the trajectory represents a random \textbf{sampling} of the space of possible configurations. This implies a direct correlation with the canonical and grand canonical ensemble where there is a uniform distribution in the phase space.
\item If you have a system with many degrees of freedom (spatial dimensions, rotation, spin states, etc.) and several associated subsystems (particles, for example), it is almost certain that in the time considered for the trajectory, \textbf{not all} the possible microstates that the system can adopt will be \textit{visited}. This compromises the perspective of the ergodic system from which one starts.
\item The latter implies that estimating the total number of possible states is \textit{apparently impossible}; however, one of the main highlights of the article is how to deduce this number from the \textbf{number of coincidences} observed in phase space given a trajectory over a defined time lapse (the central method of the article is based on counting coincidences in phase space to determine the entropy).
\end{itemize}
\subsubsection{The Coincidence counting method }
Given a system with $n$ sampling points along a trajectory in the phase space the expected number of coincidences $N_c$ among a number $\Gamma$ of possible microstates is given by
\begin{equation}
N_c = \frac{n(n-1)}{2} \frac{1}{\Gamma}
\end{equation}
Having this in a set of $N_t = n(n-1)/2$ trials implies a coincidence probability per trial of $R = N_c/N_t = 1/\Gamma$. Given that the equation for the entropy is
\begin{equation}
S = \ln{\Gamma} = - \ln{R}
\end{equation}
This last statement is very important because it implies that if the number of coincidences $N_c$ and the number of trials $n$ are known, then $\Gamma$, a counterpart of the system \textbf{partition function}, can be estimated. It also exposes one of the limitations of the method, namely that to have a good estimate of $\Gamma$ it is required that $N_c > 1$ and, by extension, $\sqrt{\Gamma} \leq n$.
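As an illustration with round numbers (not taken from the paper): for $n = 10^3$ sampled states and $\Gamma = 10^5$ accessible microstates,
\begin{equation*}
N_c = \frac{n(n-1)}{2}\frac{1}{\Gamma} \approx 5, \qquad S = \ln{\Gamma} = -\ln{R} \approx 11.5,
\end{equation*}
so even a handful of coincidences is enough to pin down the entropy estimate.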
\subsection{Interesting results and statements}
The article proposes a simulation of a kinetic Ising system, in which there is a set of spins, each with a coupling term to the other spins of the system. The probability of flipping each spin depends on the temperature of the system and on the coupling term with the other spins. Trajectories in phase space are simulated for many trials, i.e., letting the system evolve step by step for a considerable time. The coincidences are counted, as well as a cross section $V_s$ per state which determines the allowed similarity between two configurations for them to be considered a coincidence. From this \textit{experiment} the article states:
\begin{itemize}
\item Even for a relatively short trajectory time, the entropy estimate differs by approximately 5\% from the result of the analytical method, which is remarkable given the simplicity of the approach.
\item Metastable states can be seen as regions of phase space in which the trajectory remains \textit{trapped} for relatively short periods of time. If the period of observation is short enough these states have well-defined thermodynamic properties. If the observation time is considerably long the system will eventually leave this region, so under this macroscopic lens metastable states no longer appear as stable states with defined properties. These metastable states are easy to treat analytically in some cases, but when these regions get too complicated, the \textit{safe path} is to analyze the trajectories.
\item The third law of thermodynamics states that at \textit{zero temperature} the entropy vanishes. We know based on experiments that in this limit the entropy can remain nonzero because of the irreversibility of metastable states. The coincidence counting method used on the Ising simulation agrees with the third law of thermodynamics since a temperature approaching zero implies the \textbf{cessation of motion}.
\item Problems such as strong energy barriers limiting the accessible phase space generate recurring metastable states that, despite very long observation times, prevent all possible system configurations from being explored. This reduces the accuracy of the mechanical approach.
\item States that are out of equilibrium are seen as \textbf{transitory states} between metastable states. These fluctuations are associated with low-frequency noise as a consequence of a constant flow and, relating this to cases of molecular configurations, it can be seen how the configurations that make up the transient flow are arranged as a \textit{cyclic path} between one or several metastable states. This would imply a \textbf{correlation} between separate microstates, \textbf{violating one of the initial assumptions} of the mechanical view. This can also be related to trajectories that are accessible only through a specific set of microstates.
\end{itemize}
The method of counting coincidences is a clever approach to the scenario of ergodic systems and is applicable to any system, even if its complexity is difficult to analyze mathematically, from which a detailed history of its trajectory in phase space can be obtained.
% Computational Results
\printbibliography
\newpage
\begin{appendices}
\section{Additional Plots}
\textit{All plots are available in the notebook referenced in \cite{script_sims}.}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/walks.png}
\caption{Random walks in 3D following the given probability distributions for the steps. The labeled final positions are projected onto the background planes.}
\label{img:walks}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/hw_FELS.png}
\caption{Free Energy Landscape available in the homework.}
\label{img:FELS}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{imgs/polarization.png}
\caption{Polarization as a function of the field, and the low-field approximation where \textbf{E} is small.}
\label{img:polarization}
\end{figure}
\end{appendices}
% \begin{table}
% \centering
% \begin{tabular}{rll}
% & Name & Years \\
% \hline
% 1 & Frosty & 1922-1930 \\
% 2 & Frosty II & 1930-1936 \\
% 3 & Wasky & 1946 \\
% 4 & Wasky II & 1947 \\
% 5 & Ski & 1954 \\
% 6 & Denali & 1958 \\
% 7 & King Chinook & 1959-1968\\
% 8 & Regent Denali & 1969 \\
% 9 & Sundodger Denali & 1981-1992 \\
% 10 & King Redoubt & 1992-1998 \\
% 11 & Prince Redoubt & 1998 \\
% 12 & Spirit & 1999-2008 \\
% 13 & Dubs I & 2009-2018 \\
% 14 & Dubs II & 2018-Present
% \end{tabular}
% \caption{UW mascots as described in \cite{washington_huskies}.}
% \label{tab:mascots}
% \end{table}
% begin{figure}[tb] % t = top, b = bottom, etc.
% Summary and Conclusions
% \section{Summary and Conclusions}
% Add your summary and conclusions here.
% References
% Appendices
% \begin{appendices}
% % MATLAB Functions
% \section{MATLAB Functions}
% Add your important MATLAB functions here with a brief implementation explanation. This is how to make an \textbf{unordered} list:
% \begin{itemize}
% \item \texttt{y = linspace(x1,x2,n)} returns a row vector of \texttt{n} evenly spaced points between \texttt{x1} and \texttt{x2}.
% \item \texttt{[X,Y] = meshgrid(x,y)} returns 2-D grid coordinates based on the coordinates contained in the vectors \texttt{x} and \texttt{y}. \text{X} is a matrix where each row is a copy of \texttt{x}, and \texttt{Y} is a matrix where each column is a copy of \texttt{y}. The grid represented by the coordinates \texttt{X} and \texttt{Y} has \texttt{length(y)} rows and \texttt{length(x)} columns.
% \end{itemize}
% % MATLAB Codes
% \section{MATLAB Code}
% Add your MATLAB code here. This section will not be included in your page limit of six pages.
% \begin{listing}[h]
% \inputminted{matlab}{example.m}
% \caption{Example code from external file.}
% \label{listing:examplecode}
% \end{listing}
% \end{appendices}
\end{document}
\documentclass[9pt,twocolumn,twoside,lineno]{pnas-new}
% Use the lineno option to display guide line numbers if required.
% Note that the use of elements such as single-column equations
% may affect the guide line number alignment.
\templatetype{pnasresearcharticle} % Choose template
% {pnasresearcharticle} = Template for a two-column research article
% {pnasmathematics} = Template for a one-column mathematics article
% {pnasinvited} = Template for a PNAS invited submission
\usepackage{widetext}
\usepackage{csquotes}
\usepackage{tabularx}
\usepackage{graphicx}
\title{A Quantitative Synthesis of Early Language Acquisition Using Meta-Analysis}
% Use letters for affiliations, numbers to show equal authorship (if applicable) and to indicate the corresponding author
\author[a,1]{ Molly Lewis}
\author[b]{Mika Braginsky}
\author[c]{Sho Tsuji}
\author[c]{Christina Bergmann}
\author[d]{Page Piccinini}
\author[c]{Alejandrina Cristia}
\author[a]{Michael C. Frank}
\affil[a]{Department of Psychology, Stanford University}
\affil[b]{Department of Brain and Cognitive Sciences, MIT}
\affil[c]{Laboratoire de Sciences Cognitives et Psycholinguistique, ENS}
\affil[d]{ NeuroPsychologie Interventionnelle, ENS }
% Please give the surname of the lead author for the running footer
\leadauthor{Lewis}
% Please add here a significance statement to explain the relevance of your work
\significancestatement{The acquisition of natural language is one of the most striking feats in human development: A baby born into any community will learn the sounds, words, and grammar of the language spoken. To develop a theory of this process, psychologists have conducted many experimental studies examining linguistic skills in isolation, such as the acquisition of sounds or words. However, there is reason to think these skills may not be learned in isolation, but instead may depend on each other. We present a meta-analysis of the literature spanning 12 different linguistic skills. We use this dataset to provide the first broad quantitative synthesis of the language acquisition field. Our findings suggest a high degree of interdependence in the language acquisition process.}
% Please include corresponding author, author contribution and author declaration information
\authorcontributions{ ML, ST, CB, PP, AC, and MF wrote the paper. ML, ST, CB, AC, and MF coded papers for the meta-analytic dataset. All authors contributed to data analysis. MB, MF, and ML developed the Metalab website infrastructure.}
\authordeclaration{The authors declare no conflict of interest.}
\correspondingauthor{\textsuperscript{1}To whom correspondence should be addressed. E-mail: [email protected]}
\authordeclaration{This article contains supporting information online at \url{http://rpubs.com/mll/synthesisSI}\\ \\
Data deposition: The data reported in this paper have been deposited in GitHub, a web-based repository hosting service, \url{https://github.com/langcog/metalab/}}
% Keywords are not mandatory, but authors are strongly encouraged to provide them. If provided, please include two to five keywords, separated by the pipe symbol, e.g:
\keywords{developmental psychology $|$ language acquisition $|$ quantitative theories $|$ meta-analysis}
\begin{abstract}
To acquire a language, children must learn a range of skills, from the
sounds of their language to the meanings of words. These skills are
typically studied in isolation in separate research programs, but a growing body of evidence points to interdependencies across skills
in the acquisition process (e.g., Feldman, Myers, White, Griffiths, \& Morgan, 2013;
Johnson, Demuth, Jones, \& Black, 2010; Shukla, White, \& Aslin, 2011).
Here, we suggest that the meta-analytic method can support the process of
building systems-level theories, as well as
provide a tool for detecting bias in a literature. We present
meta-analyses of 12 phenomena in language acquisition, with over 700
effect sizes. We find that the language acquisition literature overall
has a high degree of evidential value. We then present a quantitative
synthesis of language acquisition phenomena that suggests interactivity
across the system.
\end{abstract}
\dates{This manuscript was compiled on \today}
\doi{\url{www.pnas.org/cgi/doi/10.1073/pnas.XXXXXXXXXX}}
\begin{document}
% Optional adjustment to line up main text (after abstract) of first page with line numbers, when using both lineno and twocolumn options.
% You should only change this length when you've finalised the article contents.
\verticaladjustment{-2pt}
\maketitle
\thispagestyle{firststyle}
\ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{}
% If your first paragraph (i.e. with the \dropcap) contains a list environment (quote, quotation, theorem, definition, enumerate, itemize...), the line after the list may have some extra indentation. If this is the case, add \parshape=0 to the end of the list environment.
\dropcap{C}hildren beginning to acquire a language must learn its sounds, its word
forms, and their meanings, and a number of other component skills of
language understanding and use. A synthetic theory that explains the
inputs, mechanisms, and timeline of this process is an aspirational goal
for the field of early language learning. One important aspect of such a
theory is an account of how the acquisition of individual skills depends
on others. For example, to what extent must the sounds of a language be
mastered prior to learning word meanings? Although a huge body of
research addresses individual aspects of early language learning (see
e.g., Kuhl, 2004 for review), only a small amount of work addresses the
question of relationships between different skills (e.g., Feldman,
Myers, White, Griffiths, \& Morgan, 2013; Johnson, Demuth, Jones, \&
Black, 2010; Shukla, White, \& Aslin, 2011). Yet if such relationships
exist, they should play a central role in our theories.
The effort to build synthetic theories is further complicated by the
fact that there is often uncertainty about the developmental trajectory
of individual skills. Developmental trajectories are typically
communicated via verbal (often binary) summaries of a set of variable
experimental findings (e.g., \enquote{by eight months, infants can
segment words from fluent speech}). In the case of contradictory
findings then, theorists may be uncertain about which experimental
findings can be used to constrain the theory, and often must resort to
verbal discounting of one finding or the other based on methodological
or theoretical factors. Resolving this issue requires a method for
synthesizing findings in a more systematic and principled fashion.
We suggest that a solution to both of these challenges---building
integrative whole-system views and evaluating evidential strength in a
field of scientific research---is to describe experimental findings in
quantitative, rather than qualitative, terms. Quantitative descriptions
allow for the use of quantitative methods for aggregating experimental
findings in order to evaluate evidential strength. In addition,
describing experimental findings as quantitative estimates provides a
common language for comparing across phenomena, and a way to make more
precise predictions. In this paper, we consider the domain of language
acquisition and demonstrate how the quantitative tools of meta-analysis
can support theory building in psychological research.
Meta-analysis is a quantitative method for aggregating across
experimental findings (Glass, 1976; Hedges \& Olkin, 2014). The
fundamental unit of meta-analysis is the \emph{effect size}: a
scale-free, quantitative measure of \enquote{success} in a phenomenon.
Importantly, an effect size provides an estimate of the size of an
effect, as well as a measure of uncertainty around this point estimate.
With this quantitative measure, we can apply the same reasoning we use
to aggregate noisy measurements over participants in a single study: By
assuming each study, rather than participant, is sampled from a
population, we can appeal to a statistical framework to combine
estimates of the effect size for a given phenomenon.
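As a schematic illustration of this framework---one standard formulation, consistent with the random-effect models reported in Table 2, though the exact specification may vary across individual analyses---each observed effect size \(d_i\) can be treated as
\[
d_i = \mu + u_i + \epsilon_i, \qquad u_i \sim \mathcal{N}(0, \tau^2), \qquad \epsilon_i \sim \mathcal{N}(0, v_i),
\]
where \(\mu\) is the population mean effect, \(\tau^2\) captures true between-study variability, and \(v_i\) is the known sampling variance of study \(i\).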
Meta-analytic methods can support theory building in several ways.
First, they provide a way to evaluate which effects in a literature are
most likely to be observed consistently, and thus should constrain the
theory. This issue is particularly important in light of recent evidence
that an effect observed in one study may be unlikely to replicate in
another (Ebersole et al., 2015; Open Science Collaboration, 2012, 2015).
Failed replications are difficult to interpret, however, because they
may result from a wide variety of causes, including an initial false
positive, a subsequent false negative, or differences between initial
and replication studies, such that making causal attributions in a
situation with two conflicting studies is often difficult (Anderson et
al., 2016; Gilbert, King, Pettigrew, \& Wilson, 2016). By aggregating
evidence across studies and assuming that there is some variability in
true effect size from study to study, meta-analytic methods can provide
a more veridical description of the empirical landscape, which in turn
leads to better theory-building.
Second, meta-analysis supports theory building by providing higher
fidelity descriptions of phenomena. Given an effect size estimate,
meta-analytic methods quantify the amount of variability around this point
estimate. Furthermore, the quantitative framework allows researchers to
measure potential moderators of effect size. This ability is particularly important for developmental phenomena
because building a theory requires a precise description of changes in
effect size across development. Individual papers typically describe an
effect size for 1-2 age groups, but the ultimate goal for the theorist
is to detect a moderator---age---in this effect. Given that moderators
always require more power to detect (Button et al., 2013), it may be
quite difficult to identify such age-related changes from individual papers. By aggregating
across papers using meta-analytic methods, however, we may be better
able to detect these changes, leading to more precise description of the
empirical phenomena.
\renewcommand{\arraystretch}{1.5}
\begin{table*}[t!]
\footnotesize
\setlength\tabcolsep{1.5pt}
\caption{Overview of meta-analyses in dataset.}
\begin{tabular}{lp{4cm} p{8cm}r}
\toprule
\textbf{Level} & \textbf{Phenomenon} & \textbf{Description} & \textbf{N papers (conditions)} \\
\midrule
Prosody & IDS preference \newline {\scriptsize (Dunst, Gorman, \& Hamby, 2012)} & {\scriptsize Looking times as a function of whether infant-directed vs. adult-directed speech is presented as stimulation.} & 16 (49) \\
Sounds & Phonotactic learning \newline {\scriptsize (Cristia, in prep.)} & {\scriptsize Infants' ability to learn phonotactic generalizations from a short exposure. } & 15 (47) \\
~ & Vowel discrimination (native) \newline {\scriptsize (Tsuji \& Cristia, 2014)} & {\scriptsize Discrimination of native-language vowels, including results from a variety of methods. } & 29 (114) \\
~ & Vowel discrimination (non-native) \newline {\scriptsize (Tsuji \& Cristia, 2014)} & {\scriptsize Discrimination of non-native vowels, including results from a variety of methods. } & 15 (48) \\
& Statistical sound learning \newline {\scriptsize (Cristia, in prep.)} & {\scriptsize Infants' ability to learn sound categories from their acoustic distribution. } & 9 (17) \\
& Word segmentation \newline {\scriptsize (Bergmann \& Cristia, 2015) } & {\scriptsize Recognition of familiarized words from running, natural speech using behavioral methods. } & 68 (285) \\
Words & Mutual exclusivity \newline {\scriptsize (Lewis \& Frank, in prep.)} &{\scriptsize Bias to assume that a novel word refers to a novel object in forced-choice paradigms.}
& 20 (60) \\
~ & Sound Symbolism \newline {\scriptsize (Lammertink et al., 2016)} &{\scriptsize Bias to assume a non-arbitrary relationship between form and meaning ("bouba-kiki effect") in forced-choice paradigms.}
& 11 (44) \\
~ & Concept-label advantage \newline {\scriptsize (Lewis \& Long, unpublished)} & {\scriptsize Infants' categorization judgments in the presence and absence of labels. } & 14 (49) \\
~ & Online word recognition \newline {\scriptsize (Frank, Lewis, \& MacDonald, 2016)} & {\scriptsize Online word recognition of familiar words using two-alternative forced choice preferential looking. } & 6 (14) \\
Communication & Gaze following \newline {\scriptsize (Frank, Lewis, \& MacDonald, 2016)} & {\scriptsize Gaze following using standard multi-alternative forced-choice paradigms. } & 12 (33) \\
~ & Pointing and vocabulary \newline {\scriptsize (Colonnesi et al., 2010)} & {\scriptsize Concurrent correlations between pointing and vocabulary.} & 12 (12) \\
\bottomrule
\end{tabular}
\end{table*}
Finally, effect size estimates also provide a common language for
comparing across phenomena. In the current work, this common language
allows us to consider the relationship between different phenomena in
the language acquisition domain (\enquote{meta-meta-analysis}). Through
cross-phenomenon comparisons, we can understand not only the trajectory
of a particular phenomenon, such as word learning, but also how the
trajectory of each phenomenon might relate to other skills, such as
sound learning, gaze following, and many others. This more holistic
description of the empirical landscape can inform theories about the
extent to which there is interdependence between the acquisition of
different linguistic skills.
Meta-analytic methods can be applied to any literature, but we believe
that developmental research provides a particularly important case where
they can contribute to theory development. One reason is that
developmental studies may be uniquely vulnerable to false findings
because collecting data from children is expensive, and thus sample
sizes are often small and studies are underpowered. In addition, the
high cost and practical difficulties associated with collecting large
developmental datasets means that replications are relatively rare in
the field. Meta-analysis provides a method for addressing these issues
by harnessing existing data to estimate effect sizes and developmental
trends.
\begin{figure*}[t!]
\centering
\includegraphics[width=17.3cm]{figs/unnamed-chunk-2-1.pdf}
\caption{Funnel plot for each meta-analysis. Each effect size estimate
is represented by a point, and the mean effect size is shown as a red
dashed line. The grey dashed line shows an effect size of zero. The
funnel corresponds to a 95\% CI around this mean. In the absence of true
heterogeneity in effect sizes (no moderators) and bias, we should expect
all points to fall inside the funnel.}
\end{figure*}
We take as our ultimate goal a broad theory of language acquisition that
can explain and predict the range of linguistic skills a child acquires.
As a first step toward this end, we collected a dataset of effect sizes
in the language acquisition literature across 12 phenomena at many different levels of linguistic
representation and processing (Metalab;
\url{http://metalab.stanford.edu}; see Table 1 for description of phenomena).
We use this dataset to demonstrate
how meta-analysis supports theory building in two ways. We first
use meta-analytic techniques to evaluate the evidential value of the
empirical landscape in language acquisition research. We find broadly
that this literature has strong evidential value, and thus that the
effects reported in the literature should constrain our theorizing of
language acquisition. We then turn toward the task of synthesizing these
findings across phenomena and offer a preliminary, quantitative
synthesis.
\section*{Replicability of the field}\label{replicability-of-the-field}
To assess the replicability of language acquisition phenomena, we
conducted several diagnostic analyses: Meta-analytic estimates of effect
size, fail-safe-N (Orwin, 1983), funnel plots, and p-curve (Simonsohn,
Nelson, \& Simmons, 2014b, 2014a; Simonsohn, Simmons, \& Nelson, 2015).
These analytical approaches each have limitations, but taken together,
they provide converging evidence about whether an effect is likely to
exist, and the extent to which publication bias and other questionable
research practices are present in the literature. Overall, we find most
phenomena in the language acquisition literature have evidential value,
and can therefore provide the basis for theoretical development. We also
find evidence for some bias, as well as evidence that two
phenomena---phonotactic learning and statistical sound learning---likely
describe null or near-null effects.
\subsection*{Meta-Analytic Effect Size}\label{meta-analytic-effect-size}
To estimate the overall effect size of a literature, effect sizes are
pooled across papers to obtain a single meta-analytic estimate. This
meta-analytic effect size can be thought of as the \enquote{best
estimate} of the effect size for a phenomenon given all the available
data in the literature. Table 2, column 2 presents meta-analytic effect
size estimates for each of our phenomena. We find evidence for a
non-zero effect size in 10 out of 12 of the phenomena in our dataset,
suggesting these literatures describe non-zero effects. In the case of
phonotactic learning and sound category learning, however, we find that
the meta-analytic effect size estimate does not differ from zero,
indicating that these literatures do not describe robust effects (as
first reported in Cristia, in prep.).
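In its standard form, the pooled random-effects estimate is an inverse-variance weighted mean of the individual effect sizes,
\[
\hat{\mu} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat{\tau}^2},
\]
so that less precise studies, and studies from more heterogeneous literatures, receive proportionally less weight; we note this as a sketch of the general approach rather than the exact estimator fit to every dataset.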
We next turn to methods that assess the degree to which a literature has
evidential value, and thus the degree to which it should constrain our
theory building. In the following three
analyses---fail-safe-N, funnel plots, and p-curves---we attempt to
quantify the evidential value of these literatures.
\subsection*{Fail-safe-N}\label{fail-safe-n}
One approach for quantifying the reliability of a literature is to ask,
How many missing studies with null effects would have to exist in the
\enquote{file drawer} in order for the overall effect size to be zero?
This is called the \enquote{fail-safe} number of studies (Orwin, 1983).
This number provides an estimate of the size and variance of an effect
using the intuitive unit of number of studies. To calculate this number,
we estimated the overall effect size for each phenomenon (Table 2,
column 2), and then used this to estimate the fail-safe-N (Table 2,
column 3).
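The logic behind this calculation can be sketched as follows: if \(k\) observed studies have mean effect size \(\bar{d}\) and \(N\) unpublished studies with a mean effect of zero were added, the combined mean would be \(k\bar{d}/(k+N)\). Setting this equal to a small criterion effect \(d_c\) and solving for \(N\) gives Orwin's fail-safe number,
\[
N_{fs} = \frac{k(\bar{d} - d_c)}{d_c}.
\]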
Because of the large number of positive studies in many of the
meta-analyses we assessed, this analysis suggests a very large number of
studies would have to be \enquote{missing} in each literature (\(M\) =
3,470) in order for the overall effect sizes to be 0. Thus, while it is
possible that some reporting bias is present in the literature, the
overall large fail-safe-N suggests that the literature nonetheless
likely describes robust effects.
This analysis provides a quantitative estimate of the size of an effect
in an intuitive unit, but it does not assess analytical or publication
bias (Scargle, 2000). Importantly, if experimenters are exercising
analytical flexibility through practices like selective reporting of
analyses or p-hacking, then the number and magnitude of observed true
effects in the literature may be greatly inflated. In the next analysis,
we assess the presence of bias through funnel plots.
\subsection*{Funnel Plots}\label{funnel-plots}
Funnel plots provide a visual method for evaluating whether variability
in effect sizes is due only to differences in sample size. A funnel plot
shows effect sizes plotted against their standard error, a proxy for sample size. If
there is no bias in a literature, we should expect studies to be
randomly sampled around the mean, with more variability for less precise
studies.
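Formally, if all studies estimate a common effect \(\hat{\mu}\) and differ only in precision, then roughly 95\% of them should satisfy
\[
|d_i - \hat{\mu}| \le 1.96 \, SE_i,
\]
which traces out the funnel-shaped region shown in Figure 1; systematic departures from this region suggest either bias or true heterogeneity.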
Figure 1 presents funnel plots for each of our 12 meta-analyses. These
plots show evidence of asymmetry (bias) for several of our phenomena
(Table 2, column 4). However, an important limitation of this method is
that it is difficult to determine the source of this bias. One
possibility is that this bias reflects true heterogeneity in phenomena
(e.g., different
ages).\footnote{The role of moderators such as age can be interactively explored on the Metalab website (http://metalab.stanford.edu).}
P-curve analyses provide one method for addressing this issue, which we
turn to next.
\subsection*{P-curves}\label{p-curves}
\begin{figure*}[t!]
\centering
\includegraphics[width=17.3cm]{figs/p_curve_plots-1.pdf}
\caption{P-curve for each meta-analysis (Simonsohn, Nelson, \& Simmons,
2014). In the absence of p-hacking, we should expect the observed
p-curve (blue) to be right-skewed (more small values). The red dashed
line shows the expected distribution of p-values when the effect is
non-existent (the null is true). The green dashed line shows the
expected distribution if the effect is real, but studies only have 33\%
power. Grey ribbons show 95\% confidence intervals estimated from a
multinomial distribution. Text on each plot shows the number of p-values
for each dataset that are less than .05 and thus are represented in each
p-curve (\enquote{sig. ps}), relative to the total number of conditions
for that phenomenon. Each plot also shows the proportion of p-values
that were derived from test statistics reported in the paper
(\enquote{prop. test stat.}); all others were derived by conducting
analyses on the descriptive statistics or transforming reported effect
sizes.}
\end{figure*}
A p-curve is the distribution of p-values for the statistical test of
the main hypothesis across a literature (Simonsohn et al., 2014b, 2014a,
2015). Critically, if there is a robust effect in the literature, the
shape of the p-curve should reflect this. In particular, we should
expect the p-curve to be right-skewed with more small values (e.g., .01)
than large values (e.g., .04). An important property of this analysis is
that we should expect this skew independent of any true heterogeneity in
the data, such as age. Evidence that the curve is in fact right-skewed
would suggest that the literature is not biased, and that it provides
evidential value for theory building.
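The statistical intuition is that when the null hypothesis is true, p-values are uniformly distributed, whereas under a true effect small p-values become over-represented:
\[
\Pr(p \le x \mid H_0) = x, \qquad \Pr(p \le x \mid H_1) > x \quad \text{for } x \in (0, .05),
\]
with the inequality under \(H_1\) holding for any reasonably powered test of a real effect; p-hacking, by contrast, tends to pile p-values just below .05, producing left skew.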
\begin{table}[b!]
\scriptsize
\setlength\tabcolsep{1.2pt}
\caption{Summary of replicability analyses.}
\begin{tabular*}{8.7cm}{lrrrr}
\toprule
%\textbf{Phenomenon} & \textbf{\textit{d}} & \textbf{fail-safe-N} & \textbf{funnel skew} & \textbf{p-curve skew}\\
\textbf{Phenomenon} & \textbf{\textit{d}} & \multicolumn{1}{p{1.1cm}}{\centering \textbf{Fail-Safe} \\ \textbf{N}} & \multicolumn{1}{p{1.1cm}}{\centering \textbf{Funnel} \\ \textbf{Skew}} & \multicolumn{1}{p{1.1cm}}{\centering \textbf{P-curve} \\ \textbf{Skew}} \\
\midrule
IDS preference & 0.7 [0.52, 0.88] & 3507 & 1.5 & -10.4*\\
Phonotactic learning & 0.04 [-0.09, 0.16] & 45 & -1.43 & -1.52\\
Vowel discrim.\ (native) & 0.68 [0.56, 0.81] & 8724 & 8.55* & -9.76*\\
Vowel discrim.\ (non-native) & 0.66 [0.42, 0.9] & 3391 & 3.86* & -8.89*\\
Statistical sound learning & -0.19 [-0.42, 0.03] & $\dagger$ & -2.99* & -1.03\\
Word segmentation & 0.19 [0.14, 0.23] & 5374 & 2.59* & -9.4*\\
Mutual exclusivity & 1.01 [0.68, 1.33] & 6443 & 8.26* & -12.87*\\
Sound symbolism & 0.12 [-0.02, 0.25] & 526 & 1.42 & -5.56*\\
Concept-label advantage & 0.47 [0.33, 0.61] & 2337 & 1.37 & -4.79*\\
Online word recognition & 1.36 [0.84, 1.88] & 1934 & 2.61* & -14.51*\\
Gaze following & 1.27 [0.93, 1.61] & 4277 & 3.3* & -18.66*\\
Pointing and vocabulary & 0.98 [0.62, 1.34] & 1617 & 1.25 & -6.33*\\
\bottomrule
\end{tabular*}
\addtabletext{ \textit{d} = Effect size (Cohen's {\it d}) estimated from a random-effect model; fail-safe-N = number of missing studies that would have to exist in order for the overall effect size to be zero; funnel skew = test of asymmetry in funnel plot using the random-effect Egger's test (Sterne \& Egger, 2005); p-curve skew = test of the right skew of the p-curve using the Stouffer method (Simonsohn, Simmons, \& Nelson, 2015); Brackets give 95\% confidence intervals. Star indicates p-values less than .05. $\dagger$Fail-safe-N is not available here because the meta-analytic effect size estimate is less than 0.}
\end{table}
P-values for each condition were calculated based on the reported test
statistic. However, test statistics were not available for many
conditions, either because they were not reported or because they were
not coded. To remedy this, we also calculated p-values indirectly based
on descriptive statistics (means and standard deviations; see SI for
details).
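As one illustration of such a conversion (the exact rules applied to each design are given in the SI), for a one-sample comparison against a chance level \(\mu_0\) with \(n\) participants, an effect size computed as \(d = (\bar{X} - \mu_0)/s\) corresponds to
\[
t = d\sqrt{n}
\]
with \(n - 1\) degrees of freedom, from which a p-value can be recovered.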
Figure 2 shows p-curves for each of our 12 meta-analyses. All p-curves
show evidence of right skew, with the exception of phonotactic learning
and statistical sound learning (Table 2, column 5). This pattern did not
differ when only reported test-statistics were used to calculate
p-curves (see SI).
In sum, then, meta-analytic methods, along with our dataset of effect
sizes, provide an opportunity to assess the replicability of the field
of language acquisition. Across a range of analyses, we find that this
literature shows some evidence for bias, but overall, it is quite
robust.
\section*{Quantitative Evaluation of
Theories}\label{quantitative-evaluation-of-theories}
Next, we turn to how these data can be used to constrain and develop
theories of language acquisition.
Meta-analytic methods provide a precise, quantitative description of the
developmental trajectory of individual phenomena. Figure 3 presents the
developmental trajectories of the phenomena in our dataset at each level
in the linguistic hierarchy. By describing how effect sizes change as a
function of age, we can begin to understand what factors might moderate
that trajectory, such as aspects of a child's experience or maturation.
For example, the meta-analysis on mutual exclusivity (the bias for
children to select a novel object, given a novel word; Markman \&
Wachtel, 1988) suggests a steep developmental trajectory of this skill.
We then can use these data to build quantitative models to understand
how aspects of experience (e.g., vocabulary development) or maturational
constraints may be related to this trajectory (e.g., Frank, Goodman, \&
Tenenbaum, 2009; McMurray, Horst, \& Samuelson, 2012).
\begin{figure*}[t!]
\centering
\includegraphics[width=17.3cm]{figs/fig3_lab.pdf}
\caption{Method-residualized effect size plotted as a function of age
across the 10 meta-analyses in our dataset shown to have evidential
value (excluding phonotactic learning and sound category learning).
Lines show logarithmic model fits. Each point corresponds to a
condition, with the size of the point indicating the number of
participants.}
\end{figure*}
%\begin{figure}%[tbhp]
%\centering
%\includegraphics[width=.8\linewidth]{frog}
%\caption{Placeholder image of a frog with a long example caption to show justification setting.}
%\label{fig:frog}
%\end{figure}
In addition, meta-analytic methods provide an approach for synthesizing
across different linguistic skills via the language of effect sizes. The
ultimate goal is to use meta-analytic data to build a single,
quantitative model of the language acquisition system, much like those
developed for individual language acquisition phenomena, like word
learning. Developing a single quantitative model is a lofty goal,
however, and will likely require much more precise description of the
phenomena than is available in our dataset. Nevertheless, we can use our
data to distinguish between broad meta-theories about the
interdependency of skills.
We first consider two intuitive theories of task-to-task dependencies
that have been articulated in a number of forms. The stage-like theory
proposes that linguistic skills are acquired sequentially beginning with
skills at the lowest level of the linguistic hierarchy. Under this
theory, once a skill is mastered, it can be used to support the
acquisition of skills higher in the linguistic hierarchy. In this way, a
child sequentially acquires the skills of language,
\enquote{bootstrapping} from existing knowledge at lower levels to new
knowledge at higher levels. There is a wide range of evidence consistent
with this view. For example, there is evidence that prosody supports the
acquisition of sound categories (e.g., Werker et al., 2007), word
boundaries (e.g., Jusczyk, Houston, \& Newsome, 1999), grammatical
categories (e.g., Shi, Werker, \& Morgan, 1999), and even word learning
(e.g., Shukla et al., 2011).
A second possibility is that there is interactivity in the language
system such that multiple skills are learned simultaneously across the
system. For example, under this proposal, a child does not wait to begin
learning the meanings of words until the sounds of a language are
mastered; rather, the child is jointly solving the problem of word
learning in concert with other language skills. This possibility is
consistent with predictions of a class of hierarchical Bayesian models
that suggest that more abstract knowledge may be acquired quickly,
before lower-level information, and may in turn support the acquisition
of lower information (``blessing of abstraction,'' Goodman, Ullman, \&
Tenenbaum, 2011). There is evidence for this proposal from work that
suggests word learning supports the acquisition of lower-level
information like phonemes (Feldman et al., 2013). More broadly, there is
evidence that higher-level skills like word learning may be acquired
relatively early in development, likely before lower level skills have
been mastered (e.g., Bergelson \& Swingley, 2012; Tincoff \& Jusczyk,
1999).
These two theories make different predictions about relative
trajectories of skills across development. Within the meta-analytic
framework, we can represent these different trajectories schematically
by plotting the effect sizes for different skills across development. In
particular, the bottom-up theory predicts serial acquisition of skills
(Figure 4; left) while the interactive theory predicts simultaneous
acquisition (left center). We can also specify many other possible
trajectories by varying the functional form and parameters of the model.
Figure 4 (\enquote{Ad hoc}; right center) shows several other possible
trajectories. For example, a skill might have a non-monotonic
trajectory, increasing with age, and then decreasing. By specifying the
shape of these developmental trajectories and the age at which
acquisition begins, we can consider many patterns of developmental
trajectories, and how these different patterns, in turn, constrain our
meta-theories of development.
Our data allow us to begin to differentiate between this space of
theories. Figure 4 (right) presents a synthetic representation of the
developmental trajectories of the skills in our dataset with literatures
shown to have evidential value (all but phonotactic learning and sound
category learning). We find strong evidence for the simultaneous
acquisition of skills---children begin learning even high-level skills,
like the meanings of words, early in development, and even low-level
skills like sound categories show a protracted period of development.
This pattern is consistent with an interactive theory of language
acquisition, and at least prima facie inconsistent with stage-like
theories. In future research, we can use this approach to distinguish
between a larger space of meta-theories and, ultimately, refine our way
towards a single quantitative theory of language acquisition.
\section*{Discussion}\label{discussion}
Building a theory of a complex psychological phenomenon requires making
good inductive inferences from the available data. Meta-analysis can
support this process by providing a toolkit for quantitative description
of individual behaviors and their relationship to important moderators
(e.g., age, in our case). Here, we apply the meta-analytic toolkit to
the domain of language acquisition---a domain where there are concerns
of replicability, and where high-fidelity data are needed for theory
building. We find that the existing literature in this domain describes
mostly robust phenomena and thus should form the basis of theory
development. We then aggregate across phenomena to offer the first
quantitative synthesis of the field. We find evidence that linguistic
skills are acquired interactively rather than in a stage-like fashion.
\begin{figure*}[t!]
\centering
\includegraphics[width=17.3cm]{figs/fig4_lab.pdf}
\caption{The left two panels show the developmental trajectories
predicted under different meta-theories of language acquisition. The
stage-like theory predicts that a child will not begin learning the next
skill in the linguistic hierarchy until the previous skill has been
mastered. The interactive theory predicts that multiple skills may be
simultaneously acquired. The third panel shows other possible
developmental trajectories (decreasing, linear, and non-monotonic). The
fourth panel shows the observed meta-analytic data. Effect size is
plotted as a function of age from 0-3 years, across 10 different
phenomena (excluding phonotactic learning and sound category learning).
Model fits are the same as in Figure 3. These developmental curves
suggest there is interactivity across language skills, rather than
stage-like learning of the linguistic hierarchy. GF: Gaze following; IDS: IDS preference; LA: Concept-label advantage; ME: Mutual exclusivity; VD-(N)N: Vowel discrimination (non-)native; PV: Pointing-vocabulary correlations; SS: Sound symbolism; WR: Word recognition.}
\end{figure*}
In this paper, we focused on theoretical motivations for building
meta-analysis, but naturally, there are many other practical reasons for
conducting a quantitative synthesis. For example, when planning an
experiment, an estimate of the size of an effect on the basis of prior
literature can inform the sample size needed to achieve a desired level
of power. Meta-analytic estimates of effect sizes can also aid in design
choices: If a certain paradigm or measure tends to yield overall larger
effect sizes than another, the strategic researcher might select this
paradigm in order to maximize the power achieved with a given sample
size. These and other advantages, illustrated with the same database
used here, are explained in Bergmann et al.~(in prep.).
Despite its potential, there are a number of important limitations to
the meta-analytic method as a tool for theory building in psychological
research. One challenging issue is that in many cases method and
phenomenon are confounded. This is problematic because a method with
less noise than another will produce a bigger effect size for the same
phenomenon. As a result, it is difficult to determine the extent to
which a difference in effect size between two phenomena is due to an
underlying difference in the phenomena, or merely to a difference in
the way they were tested. While method may account for some variability in
our dataset, we find that method does not have a large impact on effect
size for phenomena, relative to other moderators like age (see SI).
Nevertheless, the covariance between method and phenomenon in our
dataset limits our ability to directly compare effect sizes across
phenomena.
Second, meta-analysis, like all analysis methods, requires the
researcher to make analytical decisions, and these decisions may be
subject to the biases of the researcher. We believe that a virtue of the
current approach is that we have applied the same analytical method
across all phenomena we examined, thus limiting our \enquote{degrees of
freedom} in the analysis. However, in some cases this uniform approach
to data analysis means that we are unable to take into consideration
aspects of a particular phenomenon that might be relevant. For example,
in a stand-alone meta-analysis on vowel discrimination, Tsuji and
Cristia (2014) elected only to include papers that tested at least two
different age groups as a way of focusing on age differences while
controlling for other possible differences between experiments. Others
however might have reasonably dealt with this issue in another way, by
normalizing effect sizes across methods, for example. Notably, this
analytical decision has consequences for interpretation: Tsuji and
Cristia (2014) found a moderate decrease in effect size with age for
non-native vowel discrimination, while the current analysis suggests a
moderate increase. We believe that the systematic, uniform analytical
approach used here is the most likely to minimize bias by the researcher
and reveal robust psychological phenomena. There may be cases however
where this one-size-fits-all approach is inappropriate, particularly in
meta-analyses with high heterogeneity.
There are also limits to this method for inferring a meta-theory of
language acquisition. Meta-theories of language acquisition suggest a
particular causal relationship between different skills and how they
change over development. For example, the interactive theory suggests
that skills at higher levels \emph{support} the acquisition at lower
levels, even before skills at lower levels are mastered. In the
meta-analytic framework, this predicts that there should be simultaneous
development of skills across the language hierarchy---as we observe in
the current work. Importantly, however, this analysis is inherently
correlational, and therefore we cannot directly infer a causal
relationship between acquisition at lower levels and acquisition at
higher levels. That is, while the observed pattern is consistent with
the interactive theory, it is also possible that there is no causal
relationship between skills across the language hierarchy, merely
parallel trajectories of acquisition. For this reason, experimental work
must go hand-in-hand with meta-analysis to address causal questions.
Finally, there are a number of important limitations to the
meta-analytic method more broadly. One issue is that the method relies
on researchers conducting replications of the same study across a range
of ages and, critically, reporting these data so that they can be used
in meta-analyses. To the extent that researchers do not conduct these
studies, or report the necessary statistics in their write-ups (e.g.,
means and standard deviations), the meta-analytic method cannot be
applied. In addition, the meta-analytic method, as in the case of
qualitative forms of synthesis (e.g., literature review), is limited by
the potential presence of bias, which can come from a range of sources
including non-representative participant populations, failure to publish
null findings, and analytical degrees-of-freedom. To the extent these
biases are present in the literature, methods of synthesizing these
findings will also be biased.
In sum, understanding the psychological mechanisms underlying complex
phenomena is a difficult inferential task: The researcher must develop a
predictive and explanatory theory on the basis of limited and noisy
experimental data. Here we have focused on language acquisition as a
case study of how meta-analytic methods can be productively leveraged as
a tool for theory building. Meta-analytic methods allow the researcher
to determine whether phenomena are robust, synthesize across
contradictory findings, and ultimately, build an integrative theory
across phenomena. Moving forward, we see meta-analysis as a powerful
tool in the researcher's toolkit for developing quantitative theories to
account for complex psychological phenomena.
\matmethods{
We analyzed 12 different phenomena in language acquisition. We selected these particular
phenomena because of their theoretical importance or because a
previously-published meta-analysis already existed.
To obtain estimates of effect size, we either coded or adapted others'
coding of papers reporting experimental data (see SI for
details). Within each paper, we calculated a separate effect size estimate for
each experiment and age group (we refer to each measurement separated by
age as a \enquote{condition}). In total, our sample includes estimates
from 227 papers, 772 different conditions and 9,329 participants. The
process for selecting papers from the literature differed by domain,
with some individual meta-analyses using more systematic approaches than
others (see SI for specific search strategies). Nevertheless,
meta-analytic methods for aggregating even the smallest sample of
studies are likely to be less biased than qualitative methods
(Valentine, Pigott, \& Rothstein, 2010).
}
\showmatmethods % Display the Materials and Methods section
\acknow{We thank....}
%\showacknow % Display the acknowledgments section
% \pnasbreak splits and balances the columns before the references.
% If you see unexpected formatting errors, try commenting out this line
% as it can run into problems with floats and footnotes on the final page.
\pnasbreak
%Bibliography
\bibliography{metalab_synthesis}
\bibliographystyle{pnas2016}
\nocite{kuhl2004early,feldman2013word,johnson2010synergies,shukla2011prosody,glass1976primary,hedges2014statistical,ebersole2015many,open2012open,open2015estimating,Gilbert1037,anderson2016response,button2013power,orwin1983fail,simonsohn2014p,simonsohn2014power,simonsohn2015better,scargle1999publication,markman1988,frank2009using,mcmurray2012word,werker2007infant,jusczyk1999beginnings,shi1999newborn,goodman2011learning,bergelson2016,tincoff1999some,lfprep,dunst2012preference,frank2016performance,tsuji2014perceptual,bergmann2015development,sterne2005regression,bergmanneducational,lammertink2016,lewisunpublished,colonnesi2010relation,cristiastatisticalinprep,valentine2010many,viechtbauer2010conducting}
\end{document}
\documentclass[a4paper,11pt]{scrartcl}
\usepackage{fullpage}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{cite}
%\usepackage{showlabels}
\usepackage[usenames,dvipsnames]{pstricks}
\usepackage{epsfig}
\usepackage{pst-grad} % For gradients
\usepackage{pst-plot} % For axes
\title{Seminararbeit Numerik}
\subtitle{Iterative Solvers - Algebraic Multi-grid}
\author{Bernd Schwarzenbacher, Daniel Herold}
\begin{document}
\maketitle
\tableofcontents
\pagebreak
\section{Motivation} \label{section:motiv}
\subsection{Iterative Solvers} \label{section:iter}
Iterative methods are used to solve big systems of linear equations in form of
matrices.
In practice the conjugate gradient (CG) method is the most often used iterative method for
the (often sparse and positive-definite) matrices arising from finite-element
discretization of partial differential equations.
This method is based on substituting the original problem by the fixed-point
equation:
$$b-Ax = 0 \iff x + \tau C^{-1} (b-Ax) = x$$
where $C$ is an invertible matrix.
From this formulation, one can immediately derive the
{\em Richardson Iteration}\/, which is a simple iterative method and is
used here to illustrate some essential aspects:
\begin{algorithm}
\caption{Richardson Iteration}
\begin{algorithmic}
\STATE \text{start value} \: $x_{0}$
\FOR{$j = 0, 1, 2, \dots$}
\STATE $x_{j+1} = x_{j} + C^{-1} (b - Ax_{j})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
The matrix $C$ denotes the so-called preconditioner, which is used to
improve the condition of the problem to obtain faster convergence speed.
It should fulfill two major conditions:
\begin{itemize}
\item cheap matrix-vector multiplication with $C^{-1}$
\item good approximation of $A$
\end{itemize}
A first approach for $C$ is to take the diagonal of $A$ as the
preconditioner:
$$C = diag(A)$$
This constitutes the {\em Jacobi method}\/:
$$ x_j^{new} = x_j^{old} + \frac{1}{A_{jj}} \left(b_{j} -
\sum_{k} A_{jk} x_k^{old}\right)
\qquad j = 1 \dots n$$
It can be altered to use the newly computed entries as well. This leads to
the {\em Gauß-Seidel method}\/, which is described in more detail in
chapter~\ref{section:gs}.
\cite{iterative}
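For concreteness, a minimal sketch of one Jacobi sweep is given below, written directly on the CRS arrays ({\em firsti}\/, {\em colnr}\/, {\em data}\/) introduced later in this paper; the vectors {\em x\_old}\/, {\em x\_new}\/ and {\em b}\/ are placeholder names used only for this illustration.
\lstset{language=C++, numbers=none, captionpos=b,
  caption={Sketch of one Jacobi sweep on CRS data (illustration only).}}
\begin{lstlisting}
// x_new = x_old + D^{-1} (b - A x_old), one sweep over all rows
for (int i = 0; i < height; ++i) {
  double Ax_i = 0, diag = 0;
  for (int j = firsti[i]; j < firsti[i+1]; ++j) {
    Ax_i += data[j] * x_old(colnr[j]);
    if (colnr[j] == i) diag = data[j];   // remember the diagonal entry A_ii
  }
  x_new(i) = x_old(i) + (b(i) - Ax_i) / diag;
}
\end{lstlisting}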
\subsection{Algebraic Multi-Grid}
The Multi-grid (MG) method now seeks to combine the smoothing of high
frequency errors by the Jacobi or Gauß-Seidel method with a second method which
better addresses lower frequency errors.
Therefore a prolongation matrix $P$ is introduced, which for a basic example
can look like:
$$P =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}
$$
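Multiplication with such a piecewise-constant $P$ simply copies the value of a coarse vertex to all fine vertices that collapse onto it. A minimal sketch, where the array {\em parent}\/ (a hypothetical name, not part of the implementation discussed later) maps each fine vertex to its coarse vertex:
\lstset{language=C++, numbers=none, captionpos=b,
  caption={Sketch of applying a piecewise-constant prolongation (illustration only).}}
\begin{lstlisting}
// x_fine = P * x_coarse: fine vertex i inherits the value of its coarse parent
for (int i = 0; i < n_fine; ++i)
  x_fine(i) = x_coarse(parent[i]);
\end{lstlisting}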
Now there are several approaches to come up with a prolongation matrix.
The Algebraic Multi-grid (AMG) method we adopted uses only the information
in the matrix $A$ to compute the coarsening and then the projection matrix.
It does so by weighting the relative strength of connecting vertices to
determine which vertices should collapse on the next level.
Once the prolongation matrix is computed it is used to compute the update in a
step of an iterative method similar to the preconditioners described in
chapter~\ref{section:iter}.
First the transposed prolongation matrix projects the residuum onto a
coarser space, where the problem can be solved directly; then the computed
update gets prolongated back onto the fine space by multiplication with $P$.
An iterative step with two levels is thus described by:
$$x^{new} = x^{old} + \underbrace{P}_\text{refine}\cdot \underbrace{(P^{T} A P)^{-1}}_\text{solve the coarse problem}\cdot \underbrace{P^{T}}_\text{project the residuum} (b - A x^{old})$$
The combination with the Gauß-Seidel method is provided by solving the problem
recursively on multiple levels and applying a forward and backward smoothing
step on each level. Additionally a transposed matrix vector multiplication
is needed to project onto the coarse space.
See Listing~\ref{lst:mult} for more details on the AMG algorithm.
Those three points are subject to performance improvements by parallelization
which is the main point we address.
\cite{multigrid}
\section{Sparse Matrices}
In our case the sparse matrices are stored in the Compressed Row Storage (CRS)
format. The arrays {\em firsti[k]}\/ and {\em lasti[k]}\/ delimit the
entries of row k in the value and column index arrays.
An iteration through one row is then very easy to write:
\lstset{language=C++, numbers=none, captionpos=b,
caption={Row iteration of sparse matrices.}}
\begin{lstlisting}
for (int j = firsti[k]; j < lasti[k]; ++j)
{...}
\end{lstlisting}
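As a small worked example of this layout (with made-up values, only to illustrate the format), consider a $3 \times 3$ matrix whose nonzero entries are $A_{00}=4$, $A_{02}=1$, $A_{11}=3$, $A_{20}=1$ and $A_{22}=5$. It is stored as:
\lstset{language=C++, numbers=none, captionpos=b,
  caption={CRS layout of a small example matrix (illustration only).}}
\begin{lstlisting}
double data[]   = {4, 1, 3, 1, 5};  // nonzero values, stored row by row
int    colnr[]  = {0, 2, 1, 0, 2};  // column index of each stored value
int    firsti[] = {0, 2, 3, 5};     // start of each row; firsti[k+1] is one
                                    // past the last entry of row k
\end{lstlisting}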
The difficulty with sparse matrices is iterating in any other order, as the
next section will show.
\section{Matrix Vector Multiplication}
The multiplication of a sparse matrix and a given vector is very simple.
The entry k of the output vector is given by the summation of the products
of the matrix entries in the row k with the entries of the vector. In other
words, one entry of the output vector depends only on one row of the matrix
and vice versa.
One of the difficulties of parallelization is multiple threads writing to the
same variable at the same time. In the case of a simple matrix vector
multiplication this is easy to handle: each row is processed by a single
thread, and there is no restriction on the order in which the threads
calculate their vector entries.
\lstset{language=C++, numbers=left, captionpos=b,
caption={Matrix vector multiplication with parallel for section.}}
\begin{lstlisting}
void MultAddParallel (double s, const BaseVector & x, BaseVector & y)
{
#pragma omp parallel for
for (int i = 0; i < this->Height(); ++i) {
int first = firsti [i];
int last = firsti [i+1];
for (int j = first; j < last; ++j) {
y(i) += s * data[j] * x(colnr[j]);
}
}
}
\end{lstlisting}
\subsection{Transposed Matrix Vector Multiplication}\label{section:trans}
To provide the multiplication of a vector with the transposed matrix, a
sequential algorithm is written in a similar way.
\lstset{language=C++, numbers=none, captionpos=b,
caption={Transposed vector multiplication algorithm.}}
\begin{lstlisting}
for (int i = 0; i < this->Height(); ++i) {
int first = firsti [i];
int last = firsti [i+1];
for (int j = first; j < last; ++j) {
y(colnr[j]) += s * data[j] * x(i);
}
}
\end{lstlisting}
The difference to the previous algorithm lies in the dependencies of a single
output vector entry: it is now determined by the entries of one matrix column.
Iterating through a column is not efficient in the CRS format, so the
algorithm still has to iterate through the matrix rows.
Writing to several distinct output entries per row is no problem for the
sequential code, but naive parallelization causes issues.
Concurrent reading and writing operations to one variable can lead to incorrect
stored values, as shown in figure~\ref{figure:parallelwriting}. Two threads
try to write to the same variable at almost the same time: the second thread
starts its operation before thread 1 has finished and overwrites the result
of the first thread.
\begin{figure}[ht]
\includegraphics{graphic/parallel_writing_problem.pdf}
\caption{Problem with Parallelization of Writing Processes}
\label{figure:parallelwriting}
\end{figure}
\begin{samepage}
Some approaches to solve this problem are:
\begin{itemize}
\item atomic writing process (easy to program, but very inefficient)
\item atomic writing process with dynamic thread partitioning
\item atomic writing process with static thread separation
\item matrix coloring without any atomic processes
\end{itemize}
\end{samepage}
The efficiency of all those solutions depends on the number of entries in the
matrix. Fewer entries and a larger matrix result in more work that can be
distributed among the threads and are thus well suited for parallel
algorithms, whereas a sequential algorithm fares better than a parallel
solution for matrices with a high number of nonzero elements.
The \textbf{atomic} version is very straightforward. The idea is to avoid the
problem shown in figure~\ref{figure:parallelwriting} by making the operations
atomic. This means that a thread blocks the address it accesses until it has
finished its read-modify-write operation.
\lstset{language=C++, numbers=none, captionpos=b,
caption={Atomic section in the transposed vector multiplication.}}
\begin{lstlisting}
#pragma omp atomic
fy(colnr[j]) += s * data[j] * fx(i);
\end{lstlisting}
This part is the bottleneck of the algorithm, because every thread has to
pass this line and the threads block each other.
The \textbf{dynamic thread partitioning} solves another problem of this code.
{\em OMP for}\/ (as used in the atomic version) splits the for-loop into an
evenly distributed number of iterations per thread beforehand. Due to the
differing amount of calculations each thread has to perform, some may finish
faster than other threads. This relates to the specific structure of matrices
resulting from a finite element discretization: the number of nonzero
elements per row can vary significantly.
Instead it is possible to split the jobs dynamically: faster threads are
assigned new iterations while slower threads still work on the old ones.
The dynamic assignment produces some overhead, which can be mitigated by
assigning chunks of iterations.
\lstset{language=C++, numbers=none, captionpos=b,
caption={Dynamic for loop.}}
\begin{lstlisting}
#pragma omp for schedule (dynamic, 100)
\end{lstlisting}
The \textbf{balancing option} performs a similar assignment of iterations, but
does so before the for-loop starts instead of a dynamic assignment. The work
for the calculation of each row is approximated by the number of nonzero
elements in this row, and the loop is split accordingly (see listing~\ref{lst:tranbal}).
Every method still has the same problem with the atomic operation in its
essential part. We can avoid this by ``\textbf{coloring}'' each row of the matrix.
Rows are assigned to the same color if they don't write on the same entries.
Parallelization of this group is therefore possible, as no conflicting
concurrent operations will occur. Thus it is possible to iterate through the
colors sequentially and parallelize each color.
Not writing at the same entry is equivalent to not sharing any columns with
nonzero elements. Figure~\ref{figure:coloring} shows an example of the
coloring process.
\begin{figure}[ht]
\includegraphics[width=0.32\textwidth]{graphic/coloringT2.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT3.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT4.eps}
\includegraphics[width=0.32\textwidth]{graphic/coloringT5.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT6.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT7.eps}
\includegraphics[width=0.32\textwidth]{graphic/coloringT8.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT9.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringT10.eps}
\caption{Coloring of a Simple Matrix}\label{figure:coloring}
\end{figure}
This is easy to perform for small matrices. To cope with larger matrices we
iterate through the matrix for every color and store the position of blocked
nonzero elements for the specific color in a mask.
Each row is compared to the mask and if no nonzero entry of the row interferes
with the already blocked entries, the row is added to the current color and the
mask gets updated accordingly.
As long as there are uncolored rows of the matrix new colors will be used.
To further optimize this we use a mask for 32 colors at once
(see listings~\ref{lst:trancol} and~\ref{lst:tranmul}).
\section{Gauß-Seidel Method} \label{section:gs}
As mentioned in chapter~\ref{section:motiv} we use two Gauß-Seidel steps on
each level. The Gauß-Seidel method is similar to the Jacobi method.
Both methods need to compute the vector $Ax$. The difference is, that the
Gauß-Seidel method also uses the new information obtained by previous
computations of entries.
In particular a new entry is given by:
$$ x_j^{new} = x_j^{old} + \frac{1}{A_{jj}} \left(b_{j} -
\sum_{k \in K^{new}}A_{jk} x_k^{new} - \sum_{k \in K^{old}}A_{jk} x_k^{old}
\right) \qquad j = 1 \dots n$$
$K^{new}$ denotes the index set of newly calculated entries
of $x$ and $K^{old}$ denotes the index set of old entries.
\cite{iterative}
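A minimal sequential sketch of one forward sweep on the CRS arrays is given below (the vector {\em x}\/ is updated in place, so entries computed earlier in the sweep automatically enter the sum, which is equivalent to the update formula above with the diagonal term moved out; {\em b}\/, {\em x}\/ and {\em height}\/ are placeholder names):
\lstset{language=C++, numbers=none, captionpos=b,
  caption={Sketch of one sequential forward Gauß-Seidel sweep (illustration only).}}
\begin{lstlisting}
for (int i = 0; i < height; ++i) {
  double sum = 0, diag = 0;
  for (int j = firsti[i]; j < firsti[i+1]; ++j) {
    if (colnr[j] == i) diag = data[j];               // A_ii
    else               sum += data[j] * x(colnr[j]); // new values for k < i
  }
  x(i) = (b(i) - sum) / diag;
}
\end{lstlisting}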
The calculation order in the forward Gauß-Seidel precondition is arbitrary,
but to achieve a symmetric preconditioner the backward smoothing needs to be
in exact reverse order.
Entries of the output vector $x$ must not be calculated concurrently if they
depend on each other. As described in chapter~\ref{section:trans}, this would
cause incorrect entries. The problems with parallelization are also very
similar to chapter~\ref{section:trans}, as are the ideas to cope with them.
A coloring algorithm can be used as well and is similar in its implementation.
As with the transposed matrix vector multiplication we group the rows which
may run parallel without influencing each other.
\subsection{Coloring}
To calculate the product $A_j x$ that updates the entry $x_j$ (where $A_j$
denotes the $j$-th row of the matrix $A$), the value $x_k$ is used for every
$k$ with $A_{jk} \neq 0$. Consequently, a row $A_j$ may only be added to an
existing color if $A_{jk} = 0$ for all indices $k$ of rows already marked
with this color.
Instead of testing all remaining rows for this property, we use that $A$ is
symmetric: $A_{kj} = 0 \Leftrightarrow A_{jk} = 0$. Thus we only check
the specific row we added to one color to determine which rows can also
relate to it.
The main advantage of this method is that the coloring process has to be
done only once, while it is used by both GSSmooth and GSSmoothBack.
Also the order of the colors is stored by the process itself.
For a small matrix the coloring process is demonstrated in
figure~\ref{figure:coloringGS}.
\begin{figure}
\includegraphics[width=0.32\textwidth]{graphic/coloringGS1.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringGS4.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringGS7.eps}
\includegraphics[width=0.32\textwidth]{graphic/coloringGS8.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringGS9.eps}\hfill\vline\hfill
\includegraphics[width=0.32\textwidth]{graphic/coloringGS10.eps}
\caption{Coloring of a Simple Matrix for the Gauß-Seidel Algorithm.}
\label{figure:coloringGS}
\end{figure}
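Once the rows are grouped into colors, a forward sweep can be parallelized color by color, in the same way as the transposed multiplication in listing~\ref{lst:tranmul}. The following is only a sketch of this idea (using the {\em coloring\_}\/ table filled as described above), not the actual GSSmooth implementation:
\lstset{language=C++, numbers=none, captionpos=b,
  caption={Sketch of a color-parallel forward Gauß-Seidel sweep (illustration only).}}
\begin{lstlisting}
#pragma omp parallel
{
  for (auto color : coloring_)   // colors are processed one after another
  {
    #pragma omp for              // rows within a color are independent
    for (int k = 0; k < color.Size(); ++k) {
      int i = color[k];
      double sum = 0, diag = 0;
      for (int j = firsti[i]; j < firsti[i+1]; ++j) {
        if (colnr[j] == i) diag = data[j];
        else               sum += data[j] * x(colnr[j]);
      }
      x(i) = (b(i) - sum) / diag;
    }
  }
}
\end{lstlisting}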
\subsection{Speed Improvement}
As mentioned, the AMG preconditioner is a combination of the Gauß-Seidel
preconditioner and intermediate projections onto coarser spaces
by multiplying with the transposed prolongation matrix. These steps are
executed alternately until the matrix is small enough to invert it directly.
Then the correction is refined back to its original size, and backward
Gauß-Seidel smoothing is used in between these steps.
This sums up to one step of the AMG-Preconditioner. Many of such steps are
executed as a part of the CG method until the solution is sufficiently exact.
(see listing~\ref{lst:mult})
Speed improvements by parallelization are possible within Gauß-Seidel steps
and the transposed matrix vector multiplication. The Gauß-Seidel coloring has to
be done separately for each level before the CG method
(see listing~\ref{lst:jacobi}), but can be used again
in each CG iteration (listing~\ref{lst:gss} and~\ref{lst:gssb}).
For the transposed matrix vector multiplication a balancing algorithm is used.
To measure and visualize the time of these steps a trace explorer can be used.
{\em Vite\footnote{http://vite.gforge.inria.fr}}\/ is a tool to visualize
execution traces in Pajé or OTF format for debugging and profiling parallel or
distributed applications.
The work of every thread is shown in a single row. Sections in the code have
different but specific colors, so it is obvious which parts of the algorithm
took most of the computing time and how the work is distributed among the
threads.
Figure~\ref{figure:sequ} shows a CG method with sequential AMG, whereas
figure~\ref{figure:par} shows it with parallel AMG\@.
Only one thread is working in the sequential version, compared to 24 threads
in the parallel one.
In figure~\ref{figure:par} and~\ref{figure:step} the red blocks indicate a
thread of forward Gauß-Seidel smoothing, the purple ones are projections and
prolongations and the green ones indicate backward Gauß-Seidel smoothing.
In figure~\ref{figure:step} a single step of the AMG is shown. The large matrix
needs more time to perform a Gauß-Seidel step.
This is indicated by the big red blocks at the beginning. Only one color is
calculated at the same time, but its rows can be processed by several threads
simultaneously.
\begin{figure}
\includegraphics[width=1\textwidth]{seq.png}
\caption{AMG sequential}\label{figure:sequ}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{iterations_trace.png}
\caption{AMG parallel}\label{figure:par}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{undistributed_coloring_gs.png}
\caption{AMG single parallel recursion}\label{figure:step}
\end{figure}
Due to the matrix being stored split in half, matching the specific structure
of the {\em Vector}\/ server, its two nodes have different speeds when
reading and writing parts of the matrix.
Depending on the thread-to-node correspondence, some colors are calculated
faster by the first half of the threads, and others by the other half.
This emerges from the coloring process. When iterating over the matrix rows,
the colors tend to cluster, as it is unlikely that rows have columns in common.
To mitigate this effect we iterate with an offset over the matrix rows in the
coloring process.
A comparison of figure~\ref{figure:step} and figure~\ref{figure:stepdis} shows
that most of the gray spaces between the blocks can be eliminated by
minimizing the correspondence of rows and colors (see listing~\ref{lst:jacobi}).
\begin{figure}
\includegraphics[width=1\textwidth]{one_iteration.png}
\caption{AMG single parallel recursion with distribution}\label{figure:stepdis}
\end{figure}
\pagebreak
\section{Code}
\lstset{language=C++, numbers=left, captionpos=t, breaklines=true,
caption={Static balancing in the transposed vector multiplication.},label=lst:tranbal}
\begin{lstlisting}
// The separation of the matrix is calculated based on the number
// of non-zero elements in the matrix. Each thread has to do almost
// the same amount of work.
void TranMultBalance (double s, const BaseVector & x, BaseVector & y) const
{
FlatVector<double> fx = x.FV<double> ();
FlatVector<double> fy = y.FV<double> ();
int height = this->Height();
Array<int> thread_separation;
#pragma omp parallel
{
int num_threads = omp_get_num_threads();
#pragma omp single
{
thread_separation = Array<int>(num_threads+1);
// load for one thread
int separation_step = ceil((double)this->nze / num_threads);
// find section of rows in the matrix for each thread
int thread_i = 1;
for (int row = 0; row < height; ++row)
{
if (firsti[row] >= thread_i * separation_step)
{
thread_separation[thread_i] = row;
++thread_i;
}
}
thread_separation[0] = 0;
thread_separation[num_threads] = height;
}
// parallelization of the transposed matrix-vector-multiplication based on the
// previous calculated thread separation
#pragma omp for
for (int thread_i = 1; thread_i <= num_threads; ++thread_i)
{
for (int i = thread_separation[thread_i-1];
i < thread_separation[thread_i]; ++i)
{
int first = firsti [i];
int last = firsti [i+1];
for (int j = first; j < last; ++j)
{
#pragma omp atomic
fy(colnr[j]) += s * data[j] * fx(i);
}
}
}
}
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t,
caption={Coloring of a matrix for transposed vector multiplication},
label=lst:trancol}
\begin{lstlisting}
void Coloring ()
{
int height = this->Height();
int width = this->Width();
Array<int> row_color(height); // storage of the colors for each row
row_color = -1;
int maxcolor = 0;
int basecol = 0;
Array<unsigned int> mask(width); // 32-bit masks for each column
int found = 0;
do
{
mask = 0;
for (int row = 0; row < height; ++row)
{
// next row if row already has a color
if (row_color[row] >= 0) continue;
int first = firsti [row];
int last = firsti [row+1];
unsigned check = 0;
// check is the union of all mask bits corresponding
// to nze's in the row
for (int i = first; i < last; ++i) // iterate over entries of the row
check |= mask[colnr[i]];
// find next free color
if (check != UINT_MAX)
{
found++;
unsigned checkbit = 1;
int color = basecol;
while (check & checkbit)
{
color++;
checkbit *= 2;
}
row_color[row] = color; // set color to current row
if (color > maxcolor) maxcolor = color;
for (int i = first; i < last; ++i)
mask[colnr[i]] |= checkbit; // update mask
}
}
basecol += 8*sizeof(unsigned int);
}
while (found < height);
// count rows per color
Array<int> cntcol(maxcolor+1);
cntcol = 0;
for (int row = 0; row < height; ++row)
++cntcol[row_color[row]];
coloring_ = Table<int>(cntcol);
cntcol = 0;
// construct mapping of rows to colors for the iteration
for (int row = 0; row < height; ++row)
coloring_[row_color[row]][cntcol[row_color[row]]++] = row;
std::cout << "needed " << maxcolor+1 << " colors" << std::endl;
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t,
caption={Transposed vector multiplication using a matrix coloring},
label=lst:tranmul}
\begin{lstlisting}
void TranMultAdd4 (double s, const BaseVector & x, BaseVector & y)
{
FlatVector<double> fx = x.FV<double> ();
FlatVector<double> fy = y.FV<double> ();
Coloring();
#pragma omp parallel
{
// iterate sequential over colors
for (auto color : coloring_)
{
// iterate parallel over corresponding rows
#pragma omp for
for (int i = 0; i < color.Size(); ++i)
{
int first = firsti [color[i]];
int last = firsti [color[i]+1];
for (int j = first; j < last; ++j)
{
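// no atomic update needed: rows sharing a column never receive the
// same color, so the writes to fy within one color are conflict-free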
fy(colnr[j]) += s * data[j] * fx(color[i]);
}
}
}
}
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t, breaklines=true,
caption={Multiplication Method of the AMG Preconditioner},label=lst:mult}
\begin{lstlisting}
void my_AMG_H1 :: Mult (const ngla::BaseVector & b, ngla::BaseVector & x) const
{
// direct solve at lowest level
if (inv) {
x = (*inv) * b;
return;
}
auto residuum = pmat->CreateVector();
auto coarse_x = coarsemat->CreateVector();
auto coarse_residuum = coarsemat->CreateVector();
x = 0;
jacobi->GSSmooth (x, b); // one Gauss-Seidel step
if (recAMG) {
residuum = b - (*pmat) * x;
coarse_residuum = ngla::Transpose (*prol) * residuum; // coarsening
// recursive solving of coarser problem
recAMG->Mult(coarse_residuum, coarse_x);
x += (*prol) * coarse_x; // prolongate back to fine grid
}
jacobi->GSSmoothBack (x, b); // Gauss-Seidel back
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t,
caption={Gauß-Seidel coloring and balancing},label=lst:jacobi}
\begin{lstlisting}
void JacobiPrecond<TM,TV_ROW,TV_COL> :: Coloring ()
{
int height = mat.Height();
int width = mat.Width();
Array<int> row_color(height); // storage of colors for each row
row_color = -1;
int maxcolor = 0;
int basecol = 0;
Array<unsigned int> mask(width); // 32-bit masks for each column
int found = 0;
int num_threads = task_manager->GetNumThreads();
do
{
mask = 0;
int block_size = ceil((double)height / (double)num_threads);
// iterate over blocks to get rows of the whole matrix
// for every color
for (int block_row = 0; block_row < block_size; ++block_row)
{
for (int threadi = 0; threadi < num_threads; ++threadi)
{
int row = block_row + threadi * block_size;
if (row >= height) break;
// next row if row already has a color
if (row_color[row] >= 0) continue;
int first = mat.firsti [row];
int last = mat.firsti [row+1];
unsigned check = 0;
// check is the union of all mask bits corresponding
// to nze's in the row
for (int i = first; i < last; ++i) // iterate over entries of the row
check |= mask[mat.colnr[i]];
// find the next possible color
if (check != UINT_MAX)
{
found++;
unsigned checkbit = 1;
int color = basecol;
while (check & checkbit)
{
color++;
checkbit *= 2;
}
row_color[row] = color; // set the color to the current row
if (color > maxcolor) maxcolor = color;
mask[row] |= checkbit; // update the mask
}
}
}
basecol += 8*sizeof(unsigned int);
}
while (found < height);
Array<int> cntcol(maxcolor+1);
cntcol = 0;
for (int row = 0; row < height; ++row)
++cntcol[row_color[row]]; // count rows per color
coloring_ = Table<int>(cntcol);
cntcol = 0;
// construct colors to row mapping
for (int row = 0; row < height; ++row)
coloring_[row_color[row]][cntcol[row_color[row]]++] = row;
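// static load balancing within each color: the cost of a row is
// estimated as a constant overhead plus its number of nonzero entries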
balancing_.SetSize (maxcolor+1);
for (auto c : Range (balancing_))
{
balancing_[c].Calc (coloring_[c].Size(),
[&] (int bi)
{
return 5 + mat.GetRowIndices(coloring_[c][bi]).Size();
});
}
std::cout << "needed " << maxcolor+1 << " colors" << std::endl;
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t,
caption={Gauß-Seidel Smooth},label=lst:gss}
\begin{lstlisting}
void JacobiPrecond<TM,TV_ROW,TV_COL> ::
GSSmooth (BaseVector & x, const BaseVector & b) const
{
FlatVector<TV_ROW> fx = x.FV<TV_ROW> ();
const FlatVector<TV_ROW> fb = b.FV<TV_ROW> ();
// iterate sequential over colors
for (int j = 0; j < this->coloring_.Size(); ++j)
{
auto color = this->coloring_[j];
// task-manager based parallelization
ParallelFor( this->balancing_[j],
[this, fx, fb, color] (int i)
{
int row = color[i];
if (!this->inner || this->inner->Test(row))
{
TV_ROW ax = mat.RowTimesVector (row, fx);
fx(row) += invdiag[row] * (fb(row) - ax);
}
}, 4);
}
}
\end{lstlisting}
\lstset{language=C++, numbers=left, captionpos=t,
caption={Gauß-Seidel SmoothBack},label=lst:gssb}
\begin{lstlisting}
void JacobiPrecond<TM,TV_ROW,TV_COL> ::
GSSmoothBack (BaseVector & x, const BaseVector & b) const
{
FlatVector<TV_ROW> fx = x.FV<TV_ROW> ();
const FlatVector<TV_ROW> fb = b.FV<TV_ROW> ();
// iterate sequential over colors
for (int j = this->coloring_.Size()-1; j >= 0; --j)
{
auto color = this->coloring_[j];
// task-manager based parallelization
ParallelFor( this->balancing_[j],
[this, fx, fb, color] (int i)
{
int row = color[i];
if (!this->inner || this->inner->Test(row))
{
TV_ROW ax = mat.RowTimesVector (row, fx);
fx(row) += invdiag[row] * (fb(row) - ax);
}
}, 4);
}
}
\end{lstlisting}
\bibliography{doc}{}
\bibliographystyle{plain}
\end{document}
\section{Continued Work and Interest}\label{section:continued_work}
As pyhf is the first fully differentiable implementation of \texttt{HistFactory}, there is great potential for further speedup through refinement and through full use of the parallelism and GPU hardware acceleration that the different computational frameworks were designed for.
There is ongoing work on the pyhf project to prepare it for general use.
All of this work is outlined and tracked in the project boards and issue tracker of the pyhf GitHub repository under the DIANA/HEP GitHub organization.
The most prominent open items are summarized in the list below:
\begin{itemize}
\item Complete the optimizer for the MXNet backend
\item Provide a full suite of benchmarks
\item Implement full GPU acceleration support in the backends
\item Gain access to a GPU cluster and test the benchmark suite
\item Improve optimizers (provide alternatives to Newton's method)
\item Complete Sphinx based web documentation generated from the code
\item Add different interpolation schemes
\item Add more systematic variations
\end{itemize}
In addition, members of the high energy physics phenomenology community have expressed interest in using pyhf for the reinterpretation of experimental search results, given its easy-to-use API.
The pyhf project already contains tutorial example Jupyter notebooks~\cite{Kluyver:2016aa} and exists in a ``Binderized'' environment~\cite{Binder} such that it is usable for examples through a web portal with no installation required.
Additional example Jupyter notebooks are being planned and developed to make it easier to teach pyhf's API to new users.
\title{Classes of Inference}
{{navbar}}
\subsubsection{Classes of Inference}
Inference is broadly classified under three classes: variational
inference, Monte Carlo, and exact inference.
We highlight how to use inference algorithms from each class.
As an example, we assume a mixture model with latent mixture
assignments \texttt{z}, latent cluster means \texttt{beta}, and
observations \texttt{x}:
\begin{equation*}
p(\mathbf{x}, \mathbf{z}, \beta)
=
\text{Normal}(\mathbf{x} \mid \beta_{\mathbf{z}}, \mathbf{I})
~
\text{Categorical}(\mathbf{z}\mid \pi)
~
\text{Normal}(\beta\mid \mathbf{0}, \mathbf{I}).
\end{equation*}
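For concreteness, this model could be written as the following Edward
program. This is only a sketch: the sizes \texttt{N}, \texttt{K}, and
\texttt{D} are illustrative, and uniform mixing proportions are used in place
of a general $\pi$.
\begin{lstlisting}[language=Python]
import tensorflow as tf
from edward.models import Categorical, Normal

N, K, D = 1000, 5, 2  # illustrative sizes (assumption)
beta = Normal(loc=tf.zeros([K, D]), scale=tf.ones([K, D]))  # cluster means
z = Categorical(logits=tf.zeros([N, K]))  # assignments, uniform proportions
x = Normal(loc=tf.gather(beta, z), scale=tf.ones([N, D]))   # observations
\end{lstlisting}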
\subsubsection{Variational Inference}
In variational inference, the idea is to posit a family of approximating
distributions and to find the closest member in the family to the
posterior \citep{jordan1999introduction}.
We write an approximating family,
\begin{align*}
q(\beta;\mu,\sigma) &= \text{Normal}(\beta; \mu,\sigma), \\[1.5ex]
q(\mathbf{z};\pi) &= \text{Categorical}(\mathbf{z};\pi),
\end{align*}
using TensorFlow variables to represent its parameters
$\lambda=\{\pi,\mu,\sigma\}$.
\begin{lstlisting}[language=Python]
from edward.models import Categorical, Normal
qbeta = Normal(loc=tf.Variable(tf.zeros([K, D])),
               scale=tf.exp(tf.Variable(tf.zeros([K, D]))))
qz = Categorical(logits=tf.Variable(tf.zeros([N, K])))
inference = ed.VariationalInference({beta: qbeta, z: qz}, data={x: x_train})
\end{lstlisting}
Given an objective function, variational inference optimizes the
family with respect to \texttt{tf.Variable}s.
Specific variational inference algorithms inherit from
the \texttt{VariationalInference} class to define their own methods, such as a
loss function and gradient.
For example, we represent
MAP
estimation with an approximating family (\texttt{qbeta} and
\texttt{qz}) of \texttt{PointMass} random variables, i.e., with all
probability mass concentrated at a point.
\begin{lstlisting}[language=Python]
from edward.models import PointMass
qbeta = PointMass(params=tf.Variable(tf.zeros([K, D])))
qz = PointMass(params=tf.Variable(tf.zeros(N)))
inference = ed.MAP({beta: qbeta, z: qz}, data={x: x_train})
\end{lstlisting}
\texttt{MAP} inherits from \texttt{VariationalInference} and defines a
loss function and update rules; it uses existing optimizers inside
TensorFlow.
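Once constructed, an inference object is typically executed with a single
call. The following usage sketch assumes the \texttt{MAP} object defined
above, with an illustrative number of iterations.
\begin{lstlisting}[language=Python]
# Run the optimization loop for a fixed number of iterations.
inference.run(n_iter=1000)
\end{lstlisting}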
\subsubsection{Monte Carlo}
Monte Carlo approximates the posterior using samples
\citep{robert1999monte}. It can be viewed as inference in which the
approximating family is the family of empirical distributions,
\begin{align*}
q(\beta; \{\beta^{(t)}\})
&= \frac{1}{T}\sum_{t=1}^T \delta(\beta, \beta^{(t)}), \\[1.5ex]
q(\mathbf{z}; \{\mathbf{z}^{(t)}\})
&= \frac{1}{T}\sum_{t=1}^T \delta(\mathbf{z}, \mathbf{z}^{(t)}).
\end{align*}
The parameters are $\lambda=\{\beta^{(t)},\mathbf{z}^{(t)}\}$.
\begin{lstlisting}[language=Python]
from edward.models import Empirical
T = 10000 # number of samples
qbeta = Empirical(params=tf.Variable(tf.zeros([T, K, D])))
qz = Empirical(params=tf.Variable(tf.zeros([T, N])))
inference = ed.MonteCarlo({beta: qbeta, z: qz}, data={x: x_train})
\end{lstlisting}
Monte Carlo algorithms proceed by updating one sample
$\beta^{(t)},\mathbf{z}^{(t)}$ at a time in the empirical approximation.
%
Markov chain Monte Carlo does this sequentially to update
the current sample (index $t$ of \texttt{tf.Variable}s) conditional on
the last sample (index $t-1$ of \texttt{tf.Variable}s).
%
Specific Monte Carlo samplers determine the update rules;
they can use gradients such as in Hamiltonian Monte Carlo
\citep{neal2011mcmc} and graph
structure such as in sequential Monte Carlo \citep{doucet2001introduction}.
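As an illustrative sketch (not a prescription), a specific sampler is
selected simply by swapping the class. Because Hamiltonian Monte Carlo
requires differentiable latent variables, the sketch below applies it to
$\beta$ only and conditions on a hypothetical tensor of fixed assignments
\texttt{z\_sample}; the step size and number of leapfrog steps are
illustrative values.
\begin{lstlisting}[language=Python]
qbeta = Empirical(params=tf.Variable(tf.zeros([T, K, D])))
# z_sample: hypothetical fixed cluster assignments (e.g., from a prior run)
inference = ed.HMC({beta: qbeta}, data={x: x_train, z: z_sample})
inference.run(step_size=0.01, n_steps=10)
\end{lstlisting}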
\subsubsection{Non-Bayesian Methods}
As a library for probabilistic modeling (not necessarily Bayesian
modeling), Edward is agnostic to the paradigm for inference. This
means Edward can use frequentist (population-based) inferences,
strictly point estimation, and alternative foundations for parameter
uncertainty.
For example, Edward supports non-Bayesian methods such as generative
adversarial networks (GANs)
\citep{goodfellow2014generative}.
For more details, see the \href{/tutorials/gan}{GAN tutorial}.
In general, we think opening the door to non-Bayesian approaches is a
crucial feature for probabilistic programming. It lets advances in other
fields such as deep learning complement this work: all of it is in service
of probabilistic models, so it makes sense to combine efforts.
\subsubsection{Exact Inference}
In order to uncover conjugacy relationships between random variables
(if they exist), we use symbolic algebra on nodes in the computational
graph. Users can then integrate out variables to automatically derive
classical Gibbs \citep{gelfand1990sampling},
mean-field updates \citep{bishop2006pattern}, and exact inference.
For example, we can calculate a conjugate posterior analytically by
using the \texttt{ed.complete_conditional} function:
\begin{lstlisting}[language=Python]
from edward.models import Bernoulli, Beta
# Beta-Bernoulli model
pi = Beta(1.0, 1.0)
x = Bernoulli(probs=pi, sample_shape=10)
# Beta posterior; it conditions on the sample tensor associated to x
pi_cond = ed.complete_conditional(pi)
# Generate samples from p(pi | x = NumPy array)
sess.run(pi_cond, {x: np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])})
\end{lstlisting}
\begin{center}\rule{3in}{0.4pt}\end{center}
The classes below inherit methods from base inference classes;
see the \href{/api/inference-development}{development page} for more
details.
{%sphinx
.. autoclass:: edward.inferences.VariationalInference
:members:
.. autoclass:: edward.inferences.KLqp
:members:
.. autoclass:: edward.inferences.ReparameterizationKLqp
.. autoclass:: edward.inferences.ReparameterizationKLKLqp
.. autoclass:: edward.inferences.ReparameterizationEntropyKLqp
.. autoclass:: edward.inferences.ScoreKLqp
.. autoclass:: edward.inferences.ScoreKLKLqp
.. autoclass:: edward.inferences.ScoreEntropyKLqp
.. autoclass:: edward.inferences.GANInference
:members:
.. autoclass:: edward.inferences.BiGANInference
:members:
.. autoclass:: edward.inferences.WGANInference
:members:
.. autoclass:: edward.inferences.ImplicitKLqp
:members:
.. autoclass:: edward.inferences.KLpq
:members:
.. autoclass:: edward.inferences.MAP
:members:
.. autoclass:: edward.inferences.Laplace
:members:
.. autoclass:: edward.inferences.MonteCarlo
:members:
.. autoclass:: edward.inferences.MetropolisHastings
:members:
.. autoclass:: edward.inferences.Gibbs
:members:
.. autoclass:: edward.inferences.HMC
:members:
.. autoclass:: edward.inferences.SGLD
:members:
.. autoclass:: edward.inferences.SGHMC
:members:
.. automodule:: edward.inferences.conjugacy
:members: complete_conditional
%}
\subsubsection{References}\label{references}
\Extrachap{Solutions}
\section*{Problems of Chapter~\ref{intro}}
\begin{sol}{prob1}
The solution\index{problems}\index{solutions} is revealed here.
\end{sol}
\begin{sol}{prob2}
\textbf{Problem Heading}\\
(a) The solution of first part is revealed here.\\
(b) The solution of second part is revealed here.
\end{sol}
\section{Introduction}
This paper analyzes contracts on selected components of the standard library.