\lesson{1}{Sep 7 2021 Tue (10:20:34)}{Grammar}{Unit 1}
\subsubsection*{Parts of a Sentence}
\begin{itemize}
\item \textbf{Phrase:} It adds information and begins with a preposition, but
it cannot stand alone and does not have a complete subject or predicate.
\item \textbf{Predicate:} It contains the verb and tells something about the
subject of the sentence.
\item \textbf{Dependent clause:} It cannot stand on its own without the information
that follows.
\item \textbf{Subject:} It tells what the sentence is about.
\item \textbf{Verb:} This shows the action of the sentence.
\end{itemize}
\subsubsection*{The Five Comma Rules}
\begin{itemize}
\item \textbf{Rule 1}: Use a comma to separate three or more items in a series.
\item \textbf{Rule 2}: Use a comma and a conjunction to separate two complete thoughts.
\item \textbf{Rule 3}: Use a semi-colon to fix a comma splice.
\item \textbf{Rule 4}: Use a comma to set off a phrase or clause of three or more words at the beginning of a sentence.
\item \textbf{Rule 5}: Use a comma to set off a parenthetical element or appositive.
\end{itemize}
\newpage
| {
"alphanum_fraction": 0.745308311,
"avg_line_length": 41.4444444444,
"ext": "tex",
"hexsha": "5f89fd355fbd01c212f669d2bc12e50764f4f421",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_forks_repo_licenses": [
"Info-ZIP"
],
"max_forks_repo_name": "SingularisArt/notes",
"max_forks_repo_path": "Grade-10/semester-1/hs-english/unit-1/lesson-1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Info-ZIP"
],
"max_issues_repo_name": "SingularisArt/notes",
"max_issues_repo_path": "Grade-10/semester-1/hs-english/unit-1/lesson-1.tex",
"max_line_length": 119,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "de33e73ca7df9d3adcb094aa9909ea0337e68fad",
"max_stars_repo_licenses": [
"Info-ZIP"
],
"max_stars_repo_name": "SingularisArt/notes",
"max_stars_repo_path": "Grade-10/semester-1/hs-english/unit-1/lesson-1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-16T07:29:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-08-31T12:45:26.000Z",
"num_tokens": 332,
"size": 1119
} |
\chapter{Background}
\label{background}
%intro to background
This project involved the use of protein interaction prediction to build weighted \ac{PPI} networks to improve the performance of Community Detection on a \ac{PPI} network for disease research.
The following chapter first describes the relationship between disease and the proteins of the synapse.
\ac{PPI} networks, and their application to disease research, are then described.
%FIN: I'd probably explain the \ac{PPI} acronym once more for people that skipped the beginning
%The aim of the Community Detection algorithm was to gain insight into structure of the interactions between proteins at the synapse to aid disease research.
%FIN: you need to justify why the synapse is important in neuronal disease research with a reference at least
%In the following chapter these different components, and how they fit together, will be described.
%what is a protein-protein interaction network?
\section{The synapse and protein interaction}
%intro paragraph, why proteins are important
Proteins account for approximately 20\% of the cell mass of a typical eukaryote\autocite{lodish_molecular_2000}.
%FIN: quantify - account for ~20% of the cell mass of a "typical eurkaryote" http://www.ncbi.nlm.nih.gov/books/NBK21473/
%FIN: "synthesis of proteins accounts for a considerable if highly variable proportion of the normal metabolism of a cell" http://www.sciencedirect.com/science/article/pii/S1574789109000751
Each of these proteins is a molecule which fits into the machinery of a cell within the human body.%FIN: I dislike this sentence from an evolutionary perspective - plenty are shit - "Proteins facilitate almost every aspect of cellular metabolism across all forms of life. They have evolved complex and extensive systems of interaction with one another and other cellular macro and micromolecules"
These proteins carry out almost all cellular functions; there are proteins capable of pumping ions, reshaping DNA and fluorescing\autocite{alberts_molecular_2008}. %FIN:shoulda read the next sentence first before writing the last comment... still change that last 'tuned' sentence
A crude model of the cell can be built by mapping the interactions between these molecular machines and using this map to infer how the cell functions. %FIN: explain why this is crude - it ignores the interaction of proteins with everything else!
These models are \ac{PPI} networks and can be useful for targeting proteins in disease research\autocite{chen_identifying_2013}. %FIN: expand a little: how are they useful? They allow targeted research, can identify additional previously uncharacterised disease factors, are targets for molecular therapies add a reference to one of the many \ac{PPI} papers you are citing that explain why \ac{PPI}s matter
%what are synapses?
Synapses are the contacts between nerve cells where the vast majority of communication between nerve cells occurs; the only exceptions are signalling molecules that can cross the cell membrane. The proteins involved at the synaptic active zone are illustrated in figure \ref{fig:actzone}.
There are two types of synapses in the nervous system, electrical and chemical\autocite{kandel_principles_2000}.
Electrical synapses form a simple electrical connection through an ionic substrate between two neurons.
Chemical synapses are involved in a much more complex system of neurotransmitter release and reception.
Synapses are therefore important to the functioning of the nervous system.
A problem with synapse function will likely cause large problems to the nervous system, so diseases of the nervous system are likely to involve problems with synapse function. %FIN: use the word neurotransmission its a great word, bonus points for 'impinge'
%As the cell is composed of proteins, so is the synapse composed of proteins. %FIN: there are lots of other shit too - I'd get rid of this its too simple in this form. "As synapse function heavily features the interaction of many proteins e.g. transporters, signal-receptors, ... etc" - maybe choose some random specific examples
Investigating the functioning of these proteins will help to explain the functioning of the synapse and hopefully provide insight into the diseases of the synapse\autocite{synsys}.%FIN:such as? Some autistic spectrum disorders have been connected with heritable loss-of-function mutations in synaptic regulatory proteins such as Neuroglobin 4 http://jp.physoc.org/content/587/4/727
%going deeper, why do we care about proteins at the synapse, mention SYNSYS
The proteins at the synapse drive synaptic communication, which in turn defines the functioning of the brain. %FIN: explain what the synapse is- the princpial interaction interface of a neuron - maybe move synapse section to before where yout alk about it
As these proteins define the functioning of the brain, any disorders which affect the brain are very likely to involve them. %FIN: doesn't really justify why \ac{PPI} itself is important though
Disorders which affect the brain are also very common and poorly understood, affecting one in three people in the developed world. %FIN: \ac{PPI} offer opportunities to use known factors to identify additional uncharacterised protein factors in neuronal diseases
Curing these diseases therefore may be possible through a greater understanding of the interactions of proteins at the synaptic level\autocites{synsys,chua_architecture_2010}.
%but what is a protein-protein interaction network?
Physical interaction between proteins can be inferred from a range of different experiments.
Typical contemporary protein interaction networks rely on databases of confirmed interactions from a variety of experiments; for example, \textcite{kenley_detecting_2011} used several well-known interaction databases, such as BioGRID\autocite{stark_biogrid:_2006}. %FIN:give an example of one or two e.g. \ac{DIP} and BioGRID
By forming a network with these individual interactions as edges, and clustering this network, \textcite{kenley_detecting_2011} were able to predict complexes and functional associations. %FIN: example paper doesn't sound quite right - "Kenley et al. were able". Be more specific in what a functional association is
As with functional association, by associating individual community members with a disease it is possible to associate whole communities with diseases, as will be discussed in chapter \ref{methods}. %FIN:how? If members of this functional association has been previously implicated in diseases or something
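As a concrete illustration of how such a network is assembled, the short Python sketch below builds an unweighted interaction graph with the \texttt{networkx} library; the interaction list is a toy placeholder rather than data drawn from any of the databases mentioned above.
\begin{verbatim}
import networkx as nx

# Toy list of reported pairwise interactions (protein A, protein B);
# illustrative only, not taken from BioGRID or any other database.
interactions = [("SNAP25", "STX1A"), ("STX1A", "VAMP2"),
                ("SNAP25", "VAMP2"), ("SYT1", "SNAP25")]

ppi = nx.Graph()                  # undirected, unweighted PPI network
ppi.add_edges_from(interactions)  # proteins become nodes, interactions edges

print(ppi.number_of_nodes(), "proteins,", ppi.number_of_edges(), "interactions")
\end{verbatim}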
%historical work in the field
Two papers, \textcite{ito_comprehensive_2001} and \textcite{uetz_comprehensive_2000}, were able to leverage large volumes of recent interaction data and build interaction networks. % mention they collected a lot of their own data in earlier papers by molecular methods (yeast-2h)
These papers were able to make interesting discoveries about the network of interactions in yeast simply by investigating subnetworks in the network that was produced. %FIN: for example? identification of many previously unidentified proteins involved in yeast vesicular transport such as Ygl161c, Ygl198w, and Ylr324w
%which network are we interested in and stating the aims of the project
The aim of this project is to extend work in the field of protein interaction prediction \autocites{qi_evaluation_2006,mcdowall_pips:_2009,rodgers-melnick_predicting_2013,von_mering_string:_2005} to weighting protein interactions with a posterior probability through the use of varied data sources.
Specifically, the interactions we are considering are those of the active zone network illustrated in figure \ref{fig:actzone} found as part of the SYNSYS project\autocite{synsys}.
This data forms a set of proteins and a prepared unweighted list of protein interactions summarised in table \ref{tab:synsys}.
These proteins and their interactions were found through immuno-precipitation, or pull-down, experiments in the mouse hippocampus focusing on the pre-synapse.
In these experiments a set of bait proteins is selected and used to attract a set of prey proteins; the interactions detected are those between bait and prey.
The exact set of interactions in the unweighted network was prepared prior to this project using additional resources: HIPPIE\autocite{schaefer_hippie:_2012}, InterologWalk\autocite{gallone_bio::homology::interologwalk_2011}, BioGRID\autocite{stark_biogrid:_2006}, CCSB\autocite{yu_high-quality_2008}, HPRD\autocite{baolin_hprd:_2007}, IntAct\autocite{hermjakob_intact:_2004} and MDC\autocite{futschik_comparison_2007}.
%The interaction network we are investigating in this work is referred to throughout as the active zone network in the synapse.
%These proteins are part of the pre-synapse and are illustrated in figure \ref{fig:actzone}.
%Proteins identified as part of this network were used as baits in the pull-down experiments whose results are used in this project to build the \ac{PPI} network which is the focus of the weighted and unweighted Community Detection.
%FIN: what is a pull-down experiment you still haven't explained it or why it is 'gold-standard' or that it is one of the gold-standards for demonstrating \ac{PPI} in-vitro. Life scientists love to differentiate between in-vitro and in-vivo. While pull-down experiments can show interactions in-vitro (i.e. a test-tube) it doesn't necessarily mean the cells will interact in-vivo (in the cell). That is why demonstrating that two proteins that interact also co-localise in the cell is important to confirm functional interaction. Cells, and especially eukaryotic cells aren't really big bags of proteins (and other molecules) as they are often drawn in books - they contain a complex set of compartmentalisation, diffusion gradients and active retention or inactivation of proteins in certain areas
\begin{figure}
\centering
\includegraphics[width=\textwidth]{actzone.png}
\caption{An illustration of the proteins identified to be involved in the active zone network\autocite{chua_architecture_2010}.}
\label{fig:actzone}
\end{figure}
%synsys table: no. of baits, preys, interactions, etc?
\begin{table}
\centering
\begin{tabular}{l c c}
\multicolumn{3}{c}{Number of} \\
baits & preys & interactions \\
\hline
24 & 1548 & 9372 \\
\end{tabular}
\caption{A table summarising the results of the pull-down experiments performed as part of the SYNSYS project\autocite{synsys}, and the active zone network defined using them, used in this project.}
\label{tab:synsys}
\end{table}
%function of the active zone network? What does it do?
%what is community detection?
\section{Protein complexes and community detection}
As mentioned in the previous section it is possible to analyse \ac{PPI} networks to detect protein complexes and functional groups. %FIN: to identify predict \ac{PPI} - unless you have proper co-localised interaction you can't say for certain they interact
This has recently been achieved through use of Community Detection\autocites{chen_identifying_2013,wang_recent_2010}, which uses various methods to find community structure in graphs.
%what is community structure?
Community structure is described as a characteristic of graphs which have many connections within sub-groups but few connections outside that group\autocite{newman_communities_2012}.
Unfortunately, this description does not give an exact, agreed measure by which a graph can be said to have community structure. %FIN: a little flippant - the specific criteria for a community in a graph is an open topic of discussion in the literature or something
Community detection algorithms are simply tested on graphs that are agreed to exhibit community structure with the aim of finding the pre-defined communities. %FIN: by whom, cite an example of one of these test graphs
%describe how these algorithms usually work
Two important approaches to the problem of Community Detection are traditional hierarchical methods and more recent optimization based methods\autocite{newman_communities_2012}.
Hierarchical methods were developed in the field of sociology and involve grading nodes by how highly connected they are in the network, then using this value to group nodes into communities.
Optimization-based methods, such as spectral modularity, involve grading edges and removing them iteratively to reveal the community structure.
%a different measure known as betweenness, which is analogous to the current flowing along edges if the graph were an electric circuit, and then allows a reductive technique where edges are removed iteratively to reveal sub-graphs without connections between them. %FIN:sentence needs fixed/split it too complex
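To make the edge-removal idea concrete, the sketch below runs the Girvan--Newman algorithm (which repeatedly removes the edge with the highest betweenness) on a toy graph using \texttt{networkx}; it illustrates the general technique only, not the specific Community Detection algorithm applied in this project.
\begin{verbatim}
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy graph: two triangles joined by a single bridge edge.
g = nx.Graph([(1, 2), (2, 3), (1, 3),   # community A
              (4, 5), (5, 6), (4, 6),   # community B
              (3, 4)])                  # bridge between the two

# girvan_newman yields successively finer partitions; take the first split.
first_split = next(girvan_newman(g))
print([sorted(c) for c in first_split])  # [[1, 2, 3], [4, 5, 6]]
\end{verbatim}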
%example paper using community detection on ppi graphs?
%FIN: http://www.biomedcentral.com/1752-0509/4/100/
%what is protein-protein interaction prediction?
\section{Protein-protein interaction prediction}
Protein interaction prediction was developed to solve the problem of incomplete and unreliable interaction data by combining both direct and indirect information\autocite{qi_learning_2008}.
Direct information are the result of experiments, such as yeast two-hybrid, intended to directly find protein-protein interactions. %FIN: explain what y2h is and why it is good - you reconstitute some split marker (transcription factor or something) only when two proteins interact from 2 genetically modified yeast hybrids. So if you see that marker it means that the proteins you've put in each hybrid interact
Indirect information includes biological data that was not gathered directly to find interactions, such as gene expression data. %FIN: explain; two proteins can only interact if they are both expressed at the same time and place within a cell therefore co-expression and co-localisation data are important sources of indirect evidence for \ac{PPI} (in fact are necessary for true \ac{PPI})
%what are features?
To predict a protein interaction we need a value, or vector of values, from which to estimate whether an interaction exists.
For each interaction this set of values is known as its features.
The bulk of this project, described in chapter \ref{methods}, involved obtaining these values for every feature necessary to train the classifier and classify the interactions of the synaptic network. %FIN: maybe expand on this being non-trivial due to the plethora of data sources and alternative identifiers they use.
%what is the classifier
The classifier, or model, is a machine learning algorithm that can learn from a labelled training set how to sort these vectors of features into the appropriate category. %citation to Murphy? %FIN: citation to the Murphy's law might be appropriate...
However, these algorithms cannot make predictions unless the training data is informative.
Also, the training data must be an accurate representation of the case the algorithm is planned to be applied to. %FIN:make this specific to your data as well - so for example the training data must use validated examples of protein protein interactions
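The sketch below illustrates this setup with scikit-learn: a classifier is fitted on labelled feature vectors and then returns a posterior probability for an unlabelled candidate interaction. The feature values and labels are synthetic placeholders; the actual features, training data and classifier used in this project are described in chapter \ref{methods}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per candidate pair; the two columns stand in for features such as
# co-expression and shared Gene Ontology annotation (synthetic values only).
X_train = np.array([[0.9, 1.0], [0.8, 1.0], [0.1, 0.0], [0.2, 0.0]])
y_train = np.array([1, 1, 0, 0])  # 1 = known interaction, 0 = non-interaction

model = LogisticRegression().fit(X_train, y_train)

candidate = np.array([[0.7, 1.0]])
confidence = model.predict_proba(candidate)[0, 1]  # posterior P(interaction)
print(round(float(confidence), 3))
\end{verbatim}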
%why do we want to predict protein-protein interactions?
%reference to ENTS and similar projects aiming to make full interactomes
%how this is different to our goal
Completing the interactome of a given organism from incomplete data is a major goal for some works in the protein interaction field, such as \textcite{rodgers-melnick_predicting_2013}.
The goal in this project is to appropriately weight interactions in a \ac{PPI} network to improve the performance of a Community Detection algorithm.
%what's the point in weighting connections?
Weakly interacting proteins will be assigned a lower confidence of interacting at all, as their interaction will have been observed less frequently.
Therefore, by weighting the interactions in a \ac{PPI} network according to our confidence we can also make the \ac{PPI} network reflect more closely the true interactions existing in vivo. %FIN: i.e. proteins predicted to interact actually interacting in vivo
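In code, this simply means attaching the classifier's posterior probability to each edge of the network; continuing the toy objects from the sketches above (so again purely illustrative, with a hypothetical \texttt{features} helper standing in for the real feature lookup):
\begin{verbatim}
def features(u, v):
    # Hypothetical stand-in for looking up the feature vector of the pair (u, v).
    return [0.5, 1.0]

for u, v in ppi.edges():  # ppi and model as defined in the earlier sketches
    ppi[u][v]["weight"] = model.predict_proba([features(u, v)])[0, 1]
\end{verbatim}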
%what data sources were used to predict protein-protein interactions?
\section{Data sources and networks}
\label{back:sources}
%Different types of sources used with reference to other works
Many different data sources were considered for inclusion in this project. %FIN: such as - which ones haven't been mentioned in the table - that you considered but discarded
These different data sources fall into categories described in table \ref{tab:sources}, while the results of the SYNSYS pulldown experiments can be found in table \ref{tab:synsys}.
\begin{table}
\centering
\small
\begin{tabular}{p{0.4\textwidth} p{0.5\textwidth}}
Data source type & Examples \\
\hline
\multirow{4}{*}{\parbox{0.4\textwidth}{Primary interaction databases}} & \ac{DIP}\autocite{xenarios_dip_2002} \\
& \ac{HIPPIE}\autocite{schaefer_hippie:_2012} \\
& BioGRID\autocite{stark_biogrid:_2006} \\
& iRefIndex\autocite{razick_irefindex:_2008} \\
\hline
Pull-down experiment results & Described in table \ref{tab:synsys} \\
\hline
\multirow{2}{*}{\parbox{0.4\textwidth}{Associated features}} & Features derived from Gene Ontology\autocite{ashburner_gene_2000}, described in section \ref{go} \\
& Those used in \textcite{rodgers-melnick_predicting_2013}, described in section \ref{ents} \\
\hline
\multirow{2}{*}{\parbox{0.4\textwidth}{Other \ac{PPI} prediction resources}} & \ac{STRING}\autocite{von_mering_string:_2005} \\
& InterologWalk\autocite{gallone_bio::homology::interologwalk_2011} \\
\end{tabular}
\caption{A table summarising the different sources of data used in the course of the project.}
\label{tab:sources}
\end{table}
%why they were chosen
The indirect sources of data were chosen based on usage in the literature, such as in the case of Gene Ontology\autocite{qi_evaluation_2006}.
Many of those considered were found to be too difficult to use, due to poor accessibility, as in the case of gene expression data, or due to differences in naming conventions.
Direct data sources were listed by investigating all of the available databases which could be of use and choosing from these.
%FIN: this is weak and needs expanded on - evaluation of different data sources and issues you had getting them were a major part of the work in this project - some data sources were poorly accessible or standardised - give examples. Some weren't informative etc. Justify why GO terms could be useful (co-localisation and co-functionalisation information more likely to interact)
\section*{Conclusion}
%The goal of this project involved obtaining weights for a \ac{PPI} network correlated with the strength of different protein interactions to improve the performance of a Community Detection algorithm.
%Improving the performance in this way, it was hoped would produce new insight into protein interactions that could cause disease.
\ac{PPI} networks are an important tool in the investigation of cell and synaptic function.
Protein interaction prediction and various interaction databases provide a way to construct these networks.
The next chapter discusses the construction of weighted networks making use of the interaction network described in table \ref{tab:synsys} and the data sources described in table \ref{tab:sources}.
| {
"alphanum_fraction": 0.78990658,
"avg_line_length": 111.9090909091,
"ext": "tex",
"hexsha": "67b078c0b5f6e30327b5d1765226631143db1d19",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "gngdb/opencast-bio",
"max_forks_repo_path": "report/sections/background.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "gngdb/opencast-bio",
"max_issues_repo_path": "report/sections/background.tex",
"max_line_length": 800,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "9fb110076295aafa696a9f8b5070b8d93c6400ce",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "gngdb/opencast-bio",
"max_stars_repo_path": "report/sections/background.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-06T02:44:38.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-02-24T20:44:39.000Z",
"num_tokens": 4291,
"size": 19696
} |
\section{Power and Weight Budget}
\label{sec:PW_Budget}
%We will copy the thing from the provisional application and update it with new values....
%\begin{table}[H}
%\centering
%\label{budget}
% \begin{tabular}{|c|c|c|c|c|c|}\hline
% \caption{Power and weight budget for SORA 2.0}
% Component & Voltage (VDC) & Peak Current (mA) & Duty Cycle (\%) & Power (mW) & Weight (g)\\ \hline
% 30 to 12 V DC/DC Converter & 30 & & 100 & NA & NA \\ \hline
% Polulu Driver & 30 & & 100 & NA & NA \\ \hline
% Polulu Driver & 30 & & 100 & NA & NA \\ \hline
% RESU & 12 & 65 & 100 & 780 & 9 \\ \hline
% Pump 1 Heater & 12 & & 10 & NA & NA \\ \hline
% Pump 2 Heater & 12 & & 10 & NA & NA \\ \hline
% Pump 1 & 24 & 270 & 75 & 6480 & 1769.01 \\ \hline
% Pump 2 & 24 & 270 & 75 & 6480 & 1769.01 \\ \hline
% Solenoid 1 & 12 & 241.67 & 50 & 2900.04 & 5 \\ \hline
% Solenoid 2 & 12 & 241.67 & 50 & 2900.04 & 5 \\ \hline
% Servo Motor & 12 & 7 & 0.1 & 84 & 10 \\ \hline
% Temperature Sensor 1 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 2 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 3 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 4 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 5 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 6 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Temperature Sensor 7 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
% Pressure \& Altitude Sensor 1 & 3.3 & 1.4 & 100 & 4.62 & 0.005 \\ \hline
% Real Time Clock (RTC) & 3.3 & 0 & 100 & 0.00198 & 0.005 \\ \hline
% GPS & 3.3 & 41 & 100 & 135.3 & 0.005 \\ \hline
% Real Time Clock (RTC) & 3.3 & 0 & 100 & 0.00198 & 0.005 \\ \hline
% Humidity Sensor & 3.3 & 0.5 & 100 & 1.65 & 0.0025 \\ \hline
% BNO Multisensor & 3.3 & 0.1 & 100 & 0.33 & 0.005 \\ \hline
% Total & & 1138.97 & & 22839.53 & 3992.24 \\ \hline
%
% \end{tabular}
%\end{table}
In order to stay within the power constraints, a robust power supply is needed to handle all the components of the payload. The power supply we will be using is the same PPM-DC-ATX-P by WinSystems Inc.\ that we used on our first flight~\cite{SORA}. During our last flight it operated flawlessly and powered our payload throughout the whole mission. It offers the number of \SI{+5}{\volt} and \SI{+12}{\volt} outputs needed to power the payload's electronics. This power supply takes \SI{+30}{\volt} and steps it down to two \SI{+12}{\volt} and two \SI{+5}{\volt} outputs. One of the \SI{+12}{\volt} outputs goes to the Arduino, since it can step down to the appropriate voltages internally, while the other goes to a PWM motor for the solenoid. One of the \SI{+5}{\volt} outputs powers two analog sensors whose readings will be sent to HASP through the EDAC connection (more on that in the following sections). The Radiation subsystem will be powered by batteries for the duration of the flight (see Appendix A). The power supply also has four ground outputs that will be used by each respective component.
\begin{table}[H]
\centering
\caption{Power and weight budget for SORA 2.0}
\label{tab:budget}
\bigskip
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\bfseries Component} & \multicolumn{1}{c|}{\bfseries Voltage (VDC)} & \multicolumn{1}{c|}{\bfseries Current (mA)} & \multicolumn{1}{c|}{\bfseries Duty Cycle (\%)} & \multicolumn{1}{c|}{\bfseries Power (mW)} & \multicolumn{1}{c|}{\bfseries Weight (g)} \\
\hline
30 to 12 V DC/DC Converter & 30 & 1500 & 100 & 45000 & 10 \\ \hline
%Polulu Driver & 30 & 1000 & CAL & CAL & 1 \\ \hline
%Polulu Driver & 30 & 1000 & CAL & CAL & 1 \\ \hline
RESU and MiniPIX~\ref{tab:Sensors} & 12 & 290 & 100 & 3480 & 15 \\ \hline
%Pump 1 Heater with driver & 12 & 180 & 40 & 2160 & 2 \\ \hline
%Pump 2 Heater with driver & 12 & 180 & 40 & 2160 & 2 \\ \hline
Pump 1 w/ Solenoid & 24 & 670 & 80 & 16080 & 1800 \\ \hline
Pump 2 w/ Solenoid & 24 & 670 & 80 & 16080 & 1800 \\ \hline
%Solenoid 1 & 12 & 241.67 & 50 & 2900.04 & 5 \\ \hline
%Solenoid 2 & 12 & 241.67 & 50 & 2900.04 & 5 \\ \hline
%Servo Motor & 12 & 7 & 0.1 & 84 & 10 \\ \hline
%Temperature Sensor 1 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
%Temperature Sensor 2 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
%Temperature Sensor 3 & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
%MiniPIX & 5 & 0.09 & 100 & 0.45 & 0.02 \\ \hline
%Pressure \& Altitude Sensor 1 & 3.3 & 1.4 & 100 & 4.62 & 0.005 \\ \hline
%Real Time Clock (RTC) & 3.3 & 0 & 100 & 0.00198 & 0.005 \\ \hline
%GPS & 3.3 & 41 & 100 & 135.3 & 0.005 \\ \hline
%Real Time Clock (RTC) & 3.3 & 0 & 100 & 0.00198 & 0.005 \\ \hline
%Humidity Sensor & 3.3 & 0.5 & 100 & 1.65 & 0.0025 \\ \hline
%BNO 6055 & 3.3 & 0.1 & 100 & 0.33 & 0.005 \\ \hline
Clean Box & N/A & N/A & N/A & N/A & 10000 \\ \hline
Structure w/ bolts & N/A & N/A & N/A & N/A & 5000 \\ \hline
        Total                      & 30  & 1740 (peak) & 100 & 45000 & 18764 \\ \hline
\end{tabular}
\medskip
\end{table}
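The per-component power figures in table \ref{tab:budget} follow directly from $P = V \times I$ at peak current; the short Python sketch below reproduces that arithmetic and, as an added illustration, also gives a duty-cycle-weighted average draw (values copied from the table).
\begin{verbatim}
# (voltage V, peak current mA, duty cycle %) copied from the budget table
components = {
    "30 to 12 V DC/DC Converter": (30, 1500, 100),
    "RESU and MiniPIX":           (12,  290, 100),
    "Pump 1 w/ Solenoid":         (24,  670,  80),
    "Pump 2 w/ Solenoid":         (24,  670,  80),
}

for name, (volts, milliamps, duty) in components.items():
    peak_mw = volts * milliamps          # peak power in mW
    avg_mw = peak_mw * duty / 100.0      # duty-cycle-weighted average in mW
    print(f"{name}: peak {peak_mw} mW, average {avg_mw:.0f} mW")
\end{verbatim}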
% \subsubsection{Powering It All Up}
| {
"alphanum_fraction": 0.6166431024,
"avg_line_length": 60.3780487805,
"ext": "tex",
"hexsha": "97272c77be4f2e2ec193ce2def03f81f87318026",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-22T15:04:04.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-22T15:04:04.000Z",
"max_forks_repo_head_hexsha": "e1b8ab8f762e7f9926878b5fca557c3a5f167747",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "UH-MiniPix-Collaboration/Application",
"max_forks_repo_path": "PW_Budget.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "e1b8ab8f762e7f9926878b5fca557c3a5f167747",
"max_issues_repo_issues_event_max_datetime": "2017-12-13T16:10:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-12-11T07:09:48.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "UH-MiniPix-Collaboration/Application",
"max_issues_repo_path": "PW_Budget.tex",
"max_line_length": 1119,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "e1b8ab8f762e7f9926878b5fca557c3a5f167747",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "UH-MiniPix-Collaboration/HASP_Application",
"max_stars_repo_path": "PW_Budget.tex",
"max_stars_repo_stars_event_max_datetime": "2017-12-10T15:15:20.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-09T23:50:40.000Z",
"num_tokens": 2105,
"size": 4951
} |
\documentclass{article}
\usepackage{latexsym}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}[section]
\newtheorem*{problem}{Problem}
\begin{document}
\title{Eisenstein triples}
\author{Dave Neary}
\maketitle
\section{Introduction}
\begin{problem}
Find positive integer solutions to the equation:
\[ \frac{1}{a+b} + \frac{1}{b+c} = \frac{3}{a+b+c} \]
\end{problem}
Cross-multiplying by $a+b+c$:
\[ \frac{a+b+c}{a+b} + \frac{a+b+c}{b+c} = 3 \]
Then we can simplify:
\begin{align*}
\frac{a+b}{a+b} + \frac{c}{a+b} + \frac{a}{b+c} + \frac{b+c}{b+c} &= 3 \\
\frac{c}{a+b} + \frac{a}{b+c} &= 1 \\
c(b+c) + a(a+b) &= (a+b)(b+c) \\
a^2 -ac +c^2 &= b^2
\end{align*}
\section{Geometric approach}
The cosine rule for triangles is:
\[ b^2 = a^2 + c^2 - 2ac \cos B \]
The equation above is simply the cosine rule for triangles, applied with the angle
$B = \frac{\pi}{3}$ radians. So the lengths $a, b, c$ are side lengths of a triangle with
an angle of $\frac{\pi}{3}$ radians (or 60 degrees) at angle $B$. We could try some
geometric approaches at this point to try to find a general formula for integer lengths.
One approach which can be used to find Pythagorean triples (that is, integers which are
the sides of a right-angled triangle) is to consider the unit circle:
\[ x^2 + y^2 = 1 \]
and to search for rational points on this circle. Then, if we find points
$(\frac{a}{c},\frac{b}{c})$ on the circle, we can generate a Pythagorean triple $(a,b,c)$.
And one way to do that is to consider the lines through the point $(-1,0)$ with a rational
slope. For every rational number, this will intersect the unit circle in two points, and we
can show that if the slope is rational, then the intersection points will also be rational.
That is, given the equation $y = m(x+1)$, and the unit circle $x^2+y^2=1$, the intersection
points satisfy:
\begin{align*}
x^2+m^2(x+1)^2 &= 1 \\
(1+m^2)x^2 +2m^2x +(m^2-1) &= 0 \\
x &= \frac{-2m^2 \pm \sqrt{4m^4 -4(m^2+1)(m^2-1)}}{2(1+m^2)} \\
x &= \frac{-m^2 \pm 1}{m^2+1} = -1 \text{ or } \frac{1-m^2}{1+m^2} \\
y &= 0 \text{ or } \frac{2m}{1+m^2}
\end{align*}
Which results in the familiar formula for generating Pythagorean triples $(m^2-n^2, 2mn, m^2+n^2)$.
We can try a similar approach here with the ellipse $x^2-xy+y^2=1$. The line through the point
$(-1,0)$ will again be $y = m(x+1)$ and we can solve for the intersection points by substitution:
\begin{align*}
x^2 - x(m(x+1)) + (m(x+1))^2 &= 1 \\
(1-m+m^2)x^2 +(2m^2-m)x +(m^2-1) &= 0 \\
x &= \frac{-(2m^2-m) \pm \sqrt{(2m^2-m)^2 -4(m^2-1)(1-m+m^2)}}{2(1-m+m^2)} \\
x &= \frac{-2m^2 +m \pm (m-2)}{2(m^2-m+1)} = -1 \text{ or } \frac{1-m^2}{1-m+m^2} \\
y &= 0 \text{ or } \frac{2m-m^2}{1-m+m^2}
\end{align*}
Which means, given any rational number $m=\frac{a}{b}$ for which the line $y=m(x+1)$ intersects
the ellipse in two places, we can find a rational point $(x,y)$ which will satisfy the equation
of the ellipse.
Let's check that it works: we will try $m=\frac{2}{5}$. Then:
\[ (x,y) = \left(\frac{1 - (\frac{2}{5})^2}{1-\frac{2}{5} + (\frac{2}{5})^2},
\frac{2(\frac{2}{5}) - (\frac{2}{5})^2}{1-\frac{2}{5} + (\frac{2}{5})^2} \right)\]
Multiplying the numerator and denominator of both coordinates by $5^2$, we get:
\[ (x,y) = (\frac{5^2 - 2^2}{5^2- 2\times5 + 2^2},
\frac{2\times2\times5 - 2^2}{5^2 - 2\times5 + 2^2}) = (\frac{21}{19},\frac{16}{19}) \]
And we can check that $19^2 = 361 = 21^2 - 21\times 16 + 16^2 = 441-336+256$ gives us a triple
satisfying the equation. We can repeat this for any rational number $\frac{m}{n}$ giving a triple
$(a,b,c) = (n^2-m^2, 2mn - m^2, n^2-nm+m^2)$ with $a^2 - ab + b^2 = c^2$.
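As a quick computational sanity check of this parametrization (a small Python sketch, included only to confirm the algebra above):
\begin{verbatim}
for n in range(2, 30):
    for m in range(1, n):
        a, b, c = n*n - m*m, 2*m*n - m*m, n*n - n*m + m*m
        assert a*a - a*b + b*b == c*c, (m, n)
print("a^2 - ab + b^2 = c^2 holds for all generated triples")
\end{verbatim}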
\section{Eisenstein integers approach}
By factoring $a^2 + b^2 = (a+ib)(a-ib)$ we can also generate Pythagorean triples. Defining
$N(a+ib) = a^2 + b^2$, we can show that $N(z_1 z_2) = N(z_1)N(z_2)$, and therefore, that:
\begin{align*}
(m^2+n^2)^2 & = (N(m+in))^2 \\
&= N((m+in)^2) \\
&= N(m^2-n^2 + 2mni) \\
&= (m^2-n^2)^2 + (2mn)^2
\end{align*}
which gives the Pythagorean triple $(a,b,c) = (m^2-n^2,2mn,m^2+n^2)$.
Inspired by this method of finding Pythagorean triples by squaring Gaussian integers, let's
see if there is an analog for Eisenstein triples too. It is not a surprise to learn that there is,
and that such complex numbers are known as Eisenstein integers.
Starting from the equation $ a^2 - ab + b^2 = c^2$, we can complete the square, and factorize
across the complex numbers, and see what happens.
\begin{align*}
a^2 - ab + b^2 &= \left(a-\frac{b}{2}\right)^2 + \frac{3}{4}b^2 \\
  &= \left(a-\frac{b}{2}\right)^2 + \left(\frac{\sqrt{3}b}{2}\right)^2 \\
  &= \left(a + b\frac{-1+i\sqrt{3}}{2}\right)\left(a + b\frac{-1-i\sqrt{3}}{2}\right) \\
&= \left(a + \omega b\right) \left(a + \omega^2 b\right)
\end{align*}
where $\omega = \frac{-1+i\sqrt{3}}{2}$ is a primitive cube root of 1.
Some results about the cube roots of 1:
\[ 1 + \omega + \omega^2 = 0 \]
\[ \omega^2 = \bar{\omega} = -1 - \omega\]
\[ \omega^3 = 1 = \omega \bar{\omega} \]
Eisenstein integers are any numbers that can be written in the form $a+\omega b$ for
$a,b \in \mathbb{Z}$. They form a commutative ring (which means that they are closed under addition,
multiplication, and both multiplication and addition operations are commutative and distributive,
and contain a multiplicative and additive identity).
What makes them particularly useful for our purposes here is that we now have a way to express $c^2$ as a
product of Eisenstein integers:
\[ c^2 = (a+\omega b) (a + \bar{\omega} b) \]
And we can use the same norm argument that worked for Pythagorean triples here, if we define
$N(m+\omega n) = m^2-mn+n^2$ (which is the square of the standard complex modulus). Then:
\begin{align*}
N((m+\omega n)^2) &= N(m^2+2mn\omega + n^2(-1-\omega)) \\
&= N(m^2-n^2+(2mn-n^2)\omega) \\
&= (m^2-n^2)^2 - (m^2-n^2)(2mn-n^2) + (2mn-n^2)^2 \\
N((m+\omega n)^2) &= (N(m+\omega n))^2 \\
  &= (m^2 - mn + n^2)^2
\end{align*}
So we have found another method to prove that given $m,n \in \mathbb{Z}$ we can construct a triple
$(a,b,c) = (m^2-n^2, 2mn-n^2, m^2-mn+n^2)$ such that $a^2-ab+b^2 = c^2$.
\end{document}
| {
"alphanum_fraction": 0.6331764324,
"avg_line_length": 40.8013245033,
"ext": "tex",
"hexsha": "7fc2a1391ed8d78932865d3caa7c4cf765c3af5f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "dneary/math",
"max_forks_repo_path": "Eisenstein_triples.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "dneary/math",
"max_issues_repo_path": "Eisenstein_triples.tex",
"max_line_length": 100,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "129b2093c01b12ddc2e61abd331c95da2177803c",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "dneary/math",
"max_stars_repo_path": "Eisenstein_triples.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2333,
"size": 6161
} |
%!TEX root = ../../main.tex
\chapter{Conclusions and Future Work}
\label{chapter:Conclusion}
We conclude with a summary of each chapter and detail their main contributions. We then examine our findings in relation to the research questions set out in chapter \ref{chapter:introduction}. Finally, we describe future areas and directions of research.
In chapter \ref{chapter:background} we described the background information necessary to understand this thesis. We gave an overview of Information Retrieval, how documents are represented, and what evaluation measures are used.
We described the Topic Detection and Tracking (TDT) project, and gave a basic overview of how approaches to TDT worked.
We defined what an event is, and summarised the most relevant literature in event detection on social media, and finished with an overview of existing test collections and how they are created.
In chapter \ref{chapter:collection} we described the creation of the first large-scale corpus for the evaluation of event detection approaches on Twitter.
We proposed a new and refined definition of `event' for event detection on Twitter.
We detailed the approaches used to generate candidate events, and the crowdsourced methodology used to gather annotations and relevance judgements.
The collection and relevance judgments represent the first large-scale test collection that is suitable for the evaluation of event detection on Twitter.
In chapter \ref{chapter:detection} we proposed a novel entity-based event detection approach for Twitter, that uses named entities to partition and efficiently cluster tweets, and a burst detection method to identify clusters related to real-world events.
We performed an in-depth evaluation of the detection approach using the Events 2012 corpus, which we believe was the first of its kind, and compared automated evaluation approaches with a crowdsourced evaluation.
Our approach outperforms existing approaches with large improvements to both precision and recall.
We described some of the issues that remain to be solved before automated evaluation can fully replace crowdsourced evaluations of event detection approaches.
Finally, in chapter \ref{chapter:newsworthiness}, we proposed a method of scoring tweets based on their Newsworthiness.
We used heuristics to assign quality labels to tweets, learn term likelihood ratios, and calculate Newsworthiness scores.
We evaluated the classification and scoring accuracy using the Events 2012 corpus, and found it to be effective at classifying documents as Newsworthy or Noise.
We proposed a cluster Newsworthiness score that can be used as a feature for event detection, and evaluated it by filtering clusters produced using the entity-based clustering approach proposed in chapter \ref{chapter:detection}, finding that it can be used to increase precision even at small cluster sizes.
\section{Research Questions}
\textbf{RQ1: Can we develop a methodology that allows us to build a test collection for the evaluation of event detection approaches on Twitter?}\\
In chapter \ref{chapter:collection}, we answered this research question by creating a large-scale corpus with relevance judgements for the evaluation of event detection on Twitter.
Since the publication of \cite{McMinn2013} describing the corpus, more than 240 researchers and groups have registered to download the Events 2012 corpus, and it has been cited by more than 90 publications, and used in the development and evaluation of several event detection approaches for Twitter (including several PhD and Masters theses).
We used the collection we developed to evaluate our entity-based event detection approach, and our newsworthiness scoring technique, demonstrating that the collection is suitable for evaluating event detection approaches on Twitter.
\textbf{RQ2: Can entities (people, places, organizations) be used to improve real-world event detection in a streaming setting on Twitter?} \\
Chapter \ref{chapter:detection} describes our entity-based, real-time event detection approach for Twitter.
Our entity-based approach partitions tweets based on the entities they contain to perform real-time clustering in an efficient manner, and uses a lightweight burst detection approach to identify unusual volumes of discussion around entities.
We found that it is possible to use entities to detect real-world events in a streaming setting on Twitter, and by evaluating this approach using the Events 2012 corpus, we found that it out-performed two state-of-the-art baselines in both precision (0.636 vs 0.285) and recall (0.383 vs 0.308).
\textbf{RQ3: Can event detection approaches be evaluated in a systematic and fair way?} \\
In chapter \ref{chapter:detection}, we used an automated evaluation methodology to evaluate our proposed event detection approach, and examined how these results compare to a crowdsourced evaluation.
We determined that although it is possible to automatically evaluate event detection approaches for Twitter, there remain a number of key challenges and issues that need to be addressed before automated evaluation can fully replace manual evaluation of event detection approaches.
\cite{Hasan17} surveyed real-time event detection techniques for Twitter in early 2017, and noted that the Events 2012 corpus remained the only corpus for the evaluation of event detection approaches on Twitter, suggesting that its continued use could help conduct fair performance comparisons between different event detection approaches.
\textbf{RQ4: Can we determine the newsworthiness of an individual tweet from content alone?} \\
The Newsworthiness Scoring approach we developed in chapter \ref{chapter:newsworthiness} uses a set of heuristics to assign quality labels to tweets and learn term likelihood ratios to produce Newsworthiness Score for tweets in real-time.
We evaluated the scores as a classification and scoring task, and found that the approach is able to label Newsworthy and Noise tweets with a high degree of accuracy.
We then used the Newsworthiness Score to estimate cluster Newsworthiness as a feature for event detection.
We used the entity-based clustering approach proposed in chapter \ref{chapter:detection}, but filtered out clusters with low newsworthiness scores, resulting in extremely high precision with very few tweets.
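For illustration, the sketch below implements a generic term likelihood-ratio scorer in the spirit of the approach summarised above; the heuristic labelling, smoothing and exact score definition used in chapter \ref{chapter:newsworthiness} differ in detail, so this is only an indicative example with toy data.
\begin{verbatim}
import math
from collections import Counter

# Toy labelled tweets (placeholders, not drawn from the Events 2012 corpus).
newsworthy = ["earthquake hits city centre", "police confirm explosion downtown"]
noise = ["so bored today lol", "good morning everyone"]

def term_counts(docs):
    return Counter(t for d in docs for t in d.lower().split())

nw, ns = term_counts(newsworthy), term_counts(noise)
nw_total, ns_total = sum(nw.values()), sum(ns.values())

def score(tweet):
    # Sum of per-term log likelihood ratios with crude add-one smoothing.
    return sum(math.log((nw[t] + 1) / (nw_total + 1))
               - math.log((ns[t] + 1) / (ns_total + 1))
               for t in tweet.lower().split())

print(score("explosion reported in city"), score("lol good morning"))
\end{verbatim}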
\section{Future Work}
This thesis makes a number of contributions to the topic of event detection on Twitter.
We have built upon decades of previous work and made a number of proposals that improve upon existing approaches, enabling the creation of a test collection and improvements to event detection approaches for Twitter.
During this, we have identified a number of key areas where we believe future research could be focused and taken further.
\subsubsection{An Updated Collection} There are a number of opportunities to improve test collections for the evaluation of event detection.
We note a number of areas where the Events 2012 corpus is lacking, such as a non-exhaustive list of events, and incomplete relevance judgements.
Whilst we do not believe these issues can ever be fully solved, improvements could be made by applying our methodology to additional event detection approaches and using these results to enrich the existing events and annotations.
Although we believe it may always be necessary to use crowdsourced evaluations to fully measure the effectiveness of an event detection approach, improvements to annotation coverage could help to improve performance estimates using the Events 2012 corpus.
\subsubsection{New Collections}
The Events 2012 corpus is now over six years old.
Twitter has changed considerably in that time: the userbase has grown and changed, and the length of tweets has increased from 140 to 280 characters.
We have proposed a methodology and demonstrated that it can be used to (relatively) quickly and easily build a test collection for Twitter.
The creation of new test collections would allow us to better understand how changes to Twitter have affected the performance of event detection approaches, and would enable a more thorough evaluation by comparing performance across two or more datasets.
\subsubsection{Named Entity Recognition}
In chapter \ref{chapter:detection}, we argue that named entities play a strong role in describing events, and base our event detection approach on named entities.
We also rely heavily on them for event merging in chapter \ref{chapter:collection}.
However, named entity recognition on tweets is still a difficult task, and although performance of standard toolkits like the Stanford NER is adequate, there is much room for improved recognition, which could then feed improvements
in entity-based event detection approaches.
In this vein, a detailed analysis of how NER performance affects detection performance would be an interesting area of research that could give insight into the limits of entity-based event detection approaches.
\subsubsection{Entity Linking and Disambiguation}
The application of Entity Linking and more robust disambiguation techniques could improve performance in a number of ways, particularly for event detection approaches such as ours that rely heavily on named entities.
We found that using three entity classes (person, location or organization) offers some improvements over using a single entity class, and it is likely that better disambiguation techniques would yield even better results.
Improvements to entity linking could help in a number of areas.
Although the co-occurrence method we used worked reasonably well, there are clearly improvements that could be made.
The use of an ontology to automatically link entities could offer improvements in a number of areas, for example, by linking a CEO to a company, or a politician to their country.
If this information was known in advance, then links could be found between events even without explicit mentions.
\subsubsection{Supervised Learning}
Supervised learning could offer vast improvements to event detection approaches and has yet to be explored in depth.
Word Embeddings, such as word2vec \citep{mikolov2013distributed}, GloVe \citep{pennington2014glove} or ELMo \citep{DBLP:journals/corr/abs-1802-05365}, offer potential improvements to clustering performance through improved similarity measurements and could offer a solution to issues such as lexical mismatch and the use of slang or abbreviations.
User classification, for example to identify good and reliable information sources, could help to improve the detection of smaller events by reducing the volume of tweets required before a decision can be made.
It would be interesting to examine how different types of user could be leveraged to improve detection.
Journalists, for example, may be useful for the detection of political or business news, however for unexpected events, it may be necessary to quickly identify reliable eye-witnesses and sources who are physically close to the event as they are likely to have the most up-to-date and correct information.
A supervised approach may prove to be effective at discovering these users quickly.
Supervised approaches could also prove useful for identifying things such as fake news, or the manipulation of news by state actors -- something that is becoming increasingly important as social media plays a greater role in people's voting decisions.
\subsubsection{Scoring Features}
The majority of event detection approaches still rely on basic tweet or user volume for cluster ranking and to perform filtering.
However, basic features like these ignore many of the benefits of event detection on Twitter, such as the ability to detect breaking news events before they have been widely reported (and thus have only a very small volume).
The Newsworthiness Score we developed shows promise and is able to detect events with high precision from very few tweets, however this approach uses only content based features to determine newsworthiness and could be improved in a number of ways.
The heuristics used to select tweets and train the models could be improved, or, more likely, replaced by supervised approaches.
Non-content features, such as the user's credibility, description and location could also be taken into consideration for scoring, perhaps even using the context of the event to improve scoring.
Of course, there is a wide range of novel features that could be explored, and as more training data is gathered, supervised approaches could be trained that will likely outperform current approaches to event detection.
\subsubsection{Evaluation Improvements}
The evaluation of event detection is still a very challenging area that would benefit from considerable work.
Determining the performance of an event detection approach is difficult without also performing crowdsourced evaluations.
Precision and Recall will be underestimated using the Events 2012 annotations due to incomplete relevance judgements and event coverage; however, it is not yet clear if this underestimation will apply evenly to all event detection approaches or if it will affect some more than others.
An investigation into this would prove invaluable and could pave a path forward.
As it stands, it is not clear if work should focus on increasing annotation coverage, developing a more robust evaluation methodology and set of metrics, or developing an entirely novel evaluation framework.
It is likely that modest improvements can be made simply by tweaking the methodology used by this work, with perhaps a new set of metrics to measure different aspects of event detection, similar to the different evaluation methodologies used by the TDT project.
The development of an entirely new evaluation framework, whilst the most radical solution, is also likely to be the most successful.
A number of approaches could be taken, from automatically matching candidate events to news articles, to a fully crowdsourced evaluation methodology where only the differences between runs are evaluated.
| {
"alphanum_fraction": 0.825145949,
"avg_line_length": 127.6909090909,
"ext": "tex",
"hexsha": "5d468b70159a90d8014e4bacf64a3981a7feb2c3",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b83bc0913b6b8ad1f5202c37fa0623a784b6e09a",
"max_forks_repo_licenses": [
"Xnet",
"X11"
],
"max_forks_repo_name": "JamesMcMinn/Thesis-Revisions",
"max_forks_repo_path": "Chapters/Conclusion/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "b83bc0913b6b8ad1f5202c37fa0623a784b6e09a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Xnet",
"X11"
],
"max_issues_repo_name": "JamesMcMinn/Thesis-Revisions",
"max_issues_repo_path": "Chapters/Conclusion/main.tex",
"max_line_length": 348,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "b83bc0913b6b8ad1f5202c37fa0623a784b6e09a",
"max_stars_repo_licenses": [
"Xnet",
"X11"
],
"max_stars_repo_name": "JamesMcMinn/Thesis-Revisions",
"max_stars_repo_path": "Chapters/Conclusion/main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2632,
"size": 14046
} |
\section{Abstract}
\begin{frame}{Abstract}
Noiseprint~\cite{cozzolino2018noiseprint} is a CNN-based method, born for tampering detection, able to extract a camera model fingerprint, resistant to scene content and enhancing model-related artifacts. In this presentation, we show the results of testing Noiseprint against a dataset~\cite{montibeller} of out-camera images edited with modern post-processing software.
\begin{figure}
\centering
\includegraphics[width=0.44\textwidth]{../drawable/examples/example-noiseprint.png}
\caption{Example of a Noiseprint extraction.}
\end{figure}
\end{frame}
\section{Introduction}
\begin{frame}{Problem}
Noiseprint~\cite{cozzolino2018noiseprint}'s original goal was to provide a tampering detection technique, but further studies outlined that it happened to provide acceptable results in the field of camera model identification.
\medskip
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{../drawable/examples/example-tampering.png}
\caption{Example of tampering detection with Noiseprint.}
\end{figure}
\medskip
\onslide<2->{
While being a relatively new paper (2018), Noiseprint has been trained on a rather old image database free from any edits, the Dresden database~\cite{dresden}.
}
\end{frame}
\begin{frame}{Problem}
This database features in-camera photographs from models released in the late 2000s, and none of them feature any kind of image manipulation.
\medskip
\onslide<2->{
We aim to verify if the results on the Noiseprint paper are replicable on photos edited using image post-processing software.
}
\end{frame}
\begin{frame}{Dresden database}
The Dresden database used in this presentation is a narrowed down variant~\cite{dresden}.
\begin{itemize}
\item<2-> \textbf{9458} photos: \begin{itemize}
\item<3-> \textbf{27} camera models
\item<3-> between \textbf{80} and \textbf{1357} photos per model
\item<4-> \textbf{73} different devices
\item<4-> between \textbf{1} and \textbf{4} devices per model
\end{itemize}
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{../drawable/examples/example-dresden-D70_0_19451.JPG}
\end{figure}
\end{frame}
\begin{frame}{Outcamera database}
\begin{itemize}
\item<1-> \textbf{821} photos shot by Andrea Montibeller~\cite{montibeller}: \begin{itemize}
\item<2-> \textbf{20} clean images, for the fingerprint
\item<3-> the others received radial correction with \textbf{Gimp}, \textbf{PTLens}, \textbf{Adobe Photoshop}, and \textbf{Adobe Lightroom}
\item<4-> a fifth group contains photos corrected with a purposely wrong camera profile
\end{itemize}
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.66\textwidth]{../drawable/examples/example-outcamera-im24.jpg}
\end{figure}
\end{frame}
\begin{frame}{Expected output}
We want to obtain a handful of ROC curves that show us the performance of Noiseprint on both datasets.
\medskip
To obtain this, we will:
\begin{itemize}
\item<2-> filter the datasets to make them comparable;
\item<3-> choose $N$ pictures for each camera model (27 + 1) to calculate a camera fingerprint;
\item<4-> for each camera model (Dresden) / group (Outcamera) of the two datasets: \begin{itemize}
\item choose $M$ pictures of the subset for the \texttt{H1} hypothesis;
\item choose $M$ random pictures from other \textit{Dresden} pictures for \texttt{H0};
\item calculate \textit{normalized cross-correlation} for each \texttt{H0/H1} picture and plot the results.
\end{itemize}
\end{itemize}
% TODO add a small diagram?
\end{frame}
\section{Execution}
\subsection{Preparation}
\begin{frame}{Balancing the datasets}
\begin{figure}
\centering
\includegraphics[height=0.78\textheight]{../drawable/meme.png}
\end{figure}
\end{frame}
\begin{frame}{Data selection (Outcamera)}
The dataset - with only Canon 1200D images - has
\begin{itemize}
\item \textbf{20} images for creating a camera fingerprint (as required) $\Rightarrow$ all fingerprints will require \textbf{20} images each.
\item \textbf{801} edited images \begin{itemize}
\item divided into groups of roughly \textbf{160} pictures each
\end{itemize}
\end{itemize}
\medskip
\onslide<2->{
For each group (5 total):
\begin{itemize}
\item \textbf{160} \texttt{H1} pictures of the group itself
\item \textbf{160} \texttt{H0} pictures chosen at random from the Dresden database
\end{itemize}
}
\end{frame}
\begin{frame}[fragile]{Data selection (Dresden)}
Since our goal is to make an unbiased test, we will make this dataset uniform w.r.t. Outcamera and extract roughly \textbf{800} photos, obtaining a mini Dresden dataset.
\medskip
\onslide<2->{
To further make it uniform over camera models, we extract 30 photos from each camera\footnote{ $30 \approx \frac{801}{27}$, where 801 is the amount of photos in the Outcamera dataset, and 27 is the number of cameras in the Dresden dataset}.
}
\medskip
\onslide<3->{
For each model:
\begin{itemize}
\item \textbf{30} \texttt{H1} pictures
\item \textbf{30} \texttt{H0} pictures randomly chosen from other models
\end{itemize}
}
\end{frame}
\begin{frame}[fragile]{Data selection}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../drawable/diagram.png}
\end{figure}
\end{frame}
\subsection{Data extraction}
\begin{frame}[fragile]{Camera fingerprints}
\begin{enumerate}
		\item verify that all chosen images from the same camera have the same size
\item use a parallel script for extracting each Noiseprint (for convenience, we extract \textbf{all} of them)
\end{enumerate}
\medskip
\begin{lstlisting}
python3 noiseprint/main_extraction.py "$image" \
    "$OUTDIR/${image//\.JPG/}.mat"
\end{lstlisting}
\medskip
\onslide<2->{
$\Rightarrow$ the fingerprint of each camera model is the average of its 20 image Noiseprints
}
\end{frame}
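\begin{frame}[fragile]{Sketch: averaging Noiseprints into a fingerprint}
	A minimal sketch of the averaging step; the \texttt{noiseprint} key is an assumption about how the extraction script writes its \texttt{.mat} files.
	\begin{lstlisting}
import numpy as np
from scipy.io import loadmat

def camera_fingerprint(mat_paths):
    # Average the Noiseprints of the (same-sized) images.
    # The "noiseprint" key is an assumption about the .mat layout.
    maps = [loadmat(p)["noiseprint"] for p in mat_paths]
    return np.mean(np.stack(maps, axis=0), axis=0)
	\end{lstlisting}
\end{frame}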
\subsection{Validation}
\begin{frame}{Cross-correlating images}
With the \texttt{.mat} files ready, for each fingerprint $FP$ another script:
\begin{itemize}
\item<2-> selects the \texttt{H1} images available and randomly chooses $K$ images for the \texttt{H0} case\footnote{30 for Dresden, 160 for Outcamera};
\item<3-> obtains a $1024 \times 1024$ central crop of each image;
\item<4-> computes $ncc$ for each \texttt{H1} and \texttt{H0} image against $FP$;
\item<5-> dumps the results to a JSON for later analysis.
\end{itemize}
\end{frame}
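\begin{frame}[fragile]{Sketch: central crop, NCC, and JSON dump}
	A sketch of these steps, using one common definition of normalized cross-correlation at zero lag (the project script may normalize differently).
	\begin{lstlisting}
import json
import numpy as np

def central_crop(arr, size=1024):
    # Take a size x size crop from the centre of a 2-D array.
    h, w = arr.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return arr[top:top + size, left:left + size]

def ncc(a, b):
    # Zero-mean normalized cross-correlation at zero lag.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b)))

def dump_results(path, h1_scores, h0_scores):
    with open(path, "w") as fp:
        json.dump({"h1": h1_scores, "h0": h0_scores}, fp)
	\end{lstlisting}
\end{frame}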
%\begin{frame}{Final steps}
% With the results readily available in JSON format, a final Python script computes a relevant visualization of data depending on the request (e.g. ROC curve, confusion matrix, table).
%\end{frame}
\section{Results}
\begin{frame}{ROC curves}
\begin{figure}
\centering
\subfloat{\includegraphics[width=.48\textwidth]{../drawable/results/roc-bulk__9252-all-dresden.png}} \quad
\subfloat{\includegraphics[width=.48\textwidth]{../drawable/results/roc-bulk__5654-all-outcamera.png}}
\captionsetup{justification=centering}
\caption{ROC curves for the Dresden and Outcamera dataset.\\
Left: $AUC = 0.91$; Right: $AUC = 0.57$}
\end{figure}
\end{frame}
\begin{frame}{ROC curve (both datasets; $AUC = 0.77$)}
\vspace{-1.6em}
\begin{figure}
\centering
\includegraphics[height=.8\textheight]{../drawable/results/roc-bulk__7700-all-both.png}
\end{figure}
\end{frame}
\begin{frame}{Tables - TPR/FPR Dresden}
\vspace{-1em}
\begin{figure}
\centering
\subfloat{\includegraphics[width=.42\textwidth]{../drawable/results/conftable-dresden.png}}
\end{figure}
\end{frame}
\begin{frame}{Tables - TPR/FPR Outcamera}
\vspace{-1em}
\begin{figure}
\centering
\subfloat{\includegraphics[width=.4\textwidth]{../drawable/results/conftable-outcamera.png}}
\end{figure}
\end{frame}
\begin{frame}{Tables - TPR/FPR combined}
\vspace{-1em}
\begin{figure}
\centering
\subfloat{\includegraphics[width=.4\textwidth]{../drawable/results/conftable-all.png}}
\end{figure}
\end{frame}
\begin{frame}{Setting the thresholds}
\begin{table}[]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\textbf{Experiment} & \textbf{0.2 FPR} & \textbf{0.1 FPR} & \textbf{0.05 FPR} & \textbf{Optimal} \\
\hline
\texttt{dresden} & 0.4207 & 0.4941 & 0.5294 & 0.4411 {\footnotesize (0.1630 FPR)}\\
\texttt{outcamera} & 0.4490 & 0.4993 & 0.5221 & 0.3864 {\footnotesize (0.4324 FPR)} \\
\texttt{all} & 0.4364 & 0.4951 & 0.5280 & 0.4081 {\footnotesize (0.2785 FPR)} \\
\hline
\end{tabular}
\captionsetup{justification=centering}
		\caption{Alternative thresholds, chosen using the \textit{set FPR} method, and the optimal one, chosen with the \textit{minimum ROC distance} method~\cite{roc-curve}.}
\end{table}
\end{frame}
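\begin{frame}[fragile]{Sketch: choosing the thresholds}
	One way to implement the two selection rules from the table (\textit{set FPR} and \textit{minimum ROC distance}), again assuming \texttt{scikit-learn}.
	\begin{lstlisting}
import numpy as np
from sklearn.metrics import roc_curve

def pick_thresholds(labels, scores, target_fpr=0.1):
    fpr, tpr, thr = roc_curve(labels, scores)
    # Set FPR: highest operating point whose FPR stays <= target.
    i_set = np.where(fpr <= target_fpr)[0][-1]
    # Minimum ROC distance: point closest to the corner (0, 1).
    i_opt = np.argmin(np.hypot(fpr, 1 - tpr))
    return thr[i_set], thr[i_opt]
	\end{lstlisting}
\end{frame}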
\section{Future work}
\begin{frame}{Multiclass classification}
	The availability of a large number of fingerprints means that Noiseprint translates decently to a multiclass classification scenario.
\medskip
	Since it was trained on the Dresden database, Noiseprint performs accurately for some models and terribly for others, consistently with the results reported in the paper.
\medskip
A larger and more diverse dataset could allow Noiseprint to be re-trained on newer camera models.
\end{frame}
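\begin{frame}[fragile]{Sketch: multiclass attribution}
	A minimal sketch of the idea: attribute an image to the camera model whose fingerprint gives the highest NCC. Here \texttt{fingerprints} is a hypothetical dict from model name to fingerprint array, and \texttt{ncc} is as sketched earlier.
	\begin{lstlisting}
def classify(noiseprint, fingerprints, ncc):
    # Pick the model whose fingerprint correlates best.
    return max(fingerprints,
               key=lambda m: ncc(noiseprint, fingerprints[m]))
	\end{lstlisting}
\end{frame}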
\begin{frame}{Confusion matrix - Dresden}
\vspace{-.9em}
\begin{figure}
\centering
\includegraphics[height=.87\textheight]{../drawable/results/confmatrix-dresden.png}
\end{figure}
\end{frame}
\begin{frame}{Confusion vector - Canon EOS 1200D}
\vspace{-.9em}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../drawable/results/confmatrix-outcamera-cropped.png}
\end{figure}
\end{frame}
\section{References}
\begin{frame}{References}
\bibliographystyle{IEEEtran-sorted-tt}
\bibliography{biblio}
\end{frame}
\section{Appendix}
\begin{frame}[fragile]{Extra: code for reordering photos}
We can sort the database with these commands.
\begin{lstlisting}
ls | sed -r "s|^(.*)_([0-9]*)_(.*)|mv \1_\2_\3 \1|g" \
    > ../move.sh
ls | sed -r "s/^(.*)_([0-9]*)_.*/\1/g" \
    | uniq | xargs mkdir -p
sh ../move.sh
\end{lstlisting}
\medskip
	We can now get the number of photos per folder:
\medskip
\begin{lstlisting}
for d in *; do cd $d && ls | wc -l && cd ..; done
\end{lstlisting}
\end{frame}
\begin{frame}[fragile]{Extra: code for selecting random photos}
To select 20 random pictures:
\medskip
\begin{lstlisting}
for d in *; do cd $d && ls | shuf | head -n 20 \
    | xargs -I {} /bin/mv {} ../.. && cd ..; done
\end{lstlisting}
\end{frame}
\begin{frame}{Extra: output of \texttt{verify-all-sizes.sh}}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../drawable/screenshot-verifysizes.png}
\end{figure}
\end{frame}
\begin{frame}{Extra: example of JSON output}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../drawable/examples/example-output.png}
\end{figure}
\end{frame} | {
"alphanum_fraction": 0.6589115081,
"avg_line_length": 32.2654155496,
"ext": "tex",
"hexsha": "aca1004564c86c422f44a98c01753674b7135522",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "36b9b8131c00b9e4357ac01f4912731baa943604",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mfranzil-unitn/unitn-m-mds",
"max_forks_repo_path": "03-project/report/slides.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "36b9b8131c00b9e4357ac01f4912731baa943604",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mfranzil-unitn/unitn-m-mds",
"max_issues_repo_path": "03-project/report/slides.tex",
"max_line_length": 375,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "36b9b8131c00b9e4357ac01f4912731baa943604",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mfranzil-unitn/unitn-m-mds",
"max_stars_repo_path": "03-project/report/slides.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3386,
"size": 12035
} |
\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[table]{xcolor}
\begin{document}
\begin{titlepage}
\title{\huge{Logic Gates}}
\author{Dumebi Valerie Duru}
\maketitle
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{pictures/pau logo.png}
\end{figure}
\end{titlepage}
\tableofcontents
\newpage
\section{Logic Gates}
A logic gate is a building block of a digital circuit and is at the heart of every computer operation: behind every digital system is a logic gate. Logic gates perform logical operations that take binary inputs (0s and 1s) and produce a single binary output. They are used in most electronic devices, including:
\begin{table}[h!]
\begin{center}
\label{tab: table 1}
\begin{tabular}{c c c}
\hline
\textcolor{red}{Smartphones} & \textcolor{green}{Tablets} & \textcolor{blue}{Memory Devices}\\
\hline
\includegraphics[width=0.1\linewidth]{pictures/phone.jpg} & \includegraphics[width=0.1\linewidth]{pictures/tablet.jpg} & \includegraphics[width=0.1\linewidth]{pictures/memory device.jpg}\\
\hline
\end{tabular}
\end{center}
\end{table}
\newline
Think of a logic gate like a light switch: it is either in the ON or the OFF position. Similarly, the input and output terminals of a gate are always in one of two binary states, false (0) or true (1). Each gate has its own logic, a set of rules that determines how it acts on its inputs, which is laid out in a truth table.
\newpage
\section{Types of Logic Gates}
The fundamental gates are \textbf{AND, OR and NOT}.\newline
The derived gates are \textbf{NAND, NOR, XOR and XNOR} (derived from the fundamental gates).\newline
The universal gates are \textbf{NAND and NOR} (the fundamental logic gates can be realized through them).
\subsection{Fundamental Gates}
\subsubsection{AND Gate}
The expression C = A X B reads as “C equals A AND B”.
The multiplication sign (X) stands for the AND operation, which behaves exactly like ordinary multiplication of 1s and 0s: for example, if A = 1 and B = 0, then C = 1 X 0 = 0.\newline
\begin{figure}[h!]
\begin{center}
\caption{AND Gate}
\label{fig 1: AND gate}
\includegraphics[width=0.5\linewidth]{pictures/AND Gate.jpg}
\end{center}
\end{figure}
The AND operation produces a true output (result of 1) only in the single case when all of the input variables are 1, and a false output (result of 0) when one or more inputs are 0.
\subsubsection{OR Gate}
The expression C = A + B reads as “C equals A OR B”. It is the inclusive “OR”.
The addition sign (+) stands for the OR operation. \newline
\begin{figure}[h!]
\caption{OR Gate}
\label{fig 2: OR Gate}
\begin{center}
\includegraphics[width=0.5\linewidth]{pictures/OR Gate.jpg}
\end{center}
\end{figure}
The OR operation produces a true output (result of 1) when any of the input variables is 1 and a false output (result of 0) only when all the input variables are 0.
\newpage
\subsubsection{NOT Gate}
The NOT gate is called a logical inverter.
It has only one input. It reverses the original input (A) to give an inverted output C.
\begin{figure}[h!]
\begin{center}
\caption{NOT Gate}
\label{fig 3: NOT Gate}
\includegraphics[width=0.5\linewidth]{pictures/NOT gate.jpg}
\end{center}
\end{figure}
\subsection{Derived Gates}
\subsubsection{NOR Gate}
The NOR (NOT OR) gate is an inverted OR gate.
\begin{center}
\begin{math}
C = \overline{A+B}
\end{math}
\end{center}
This reads as C = NOT of (A OR B).\newline
The NOR Gate gives a true output (result of 1) only when both inputs are false (0).\newline
\textcolor{red}{The NOR Gate is a universal gate because it can be used to form any other kind of gate.}
\begin{figure}[h!]
\caption{NOR Gate}
\label{fig 4: NOR Gate}
\begin{center}
\includegraphics[width=0.5\linewidth]{pictures/NOR Gate.jpg}
\end{center}
\end{figure}
\subsubsection{NAND Gate}
The NAND (NOT AND) Gate is an inverted AND Gate.
\begin{center}
\begin{math}
C = \overline{A * B}
\end{math}
\end{center}
This reads as C = NOT of (A AND B).\newline
The NAND Gate gives a false output (result of 0) only when both inputs are true (1).
\newline
\textcolor{red}{The NAND Gate is a universal gate because it can be used to form any other kind of gate.}
\begin{figure}[h!]
\caption{NAND Gate}
\label{fig 5: NAND Gate}
\begin{center}
\includegraphics[width=0.5\linewidth]{pictures/NAND Gate.jpg}
\end{center}
\end{figure}
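As a worked illustration of this universality (an example added here, using only the notation introduced above), the three fundamental gates can be written using NAND alone:
\begin{center}
\begin{math}
\overline{A} = \overline{A * A}, \qquad A * B = \overline{\overline{A * B} * \overline{A * B}}
\end{math}
\end{center}
\begin{center}
\begin{math}
A + B = \overline{\overline{A * A} * \overline{B * B}}
\end{math}
\end{center}
A NOT gate is a NAND gate with both inputs tied together, an AND gate is a NAND gate followed by such a NOT, and the OR gate follows from De Morgan's law. An analogous construction exists for the NOR gate.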
\subsubsection{XOR Gate}
An XOR (exclusive OR) gate acts in the same way as the exclusive OR logical connector.
It gives a true output (result of 1) if one, and only one, of the inputs to the gate is true (1), i.e.\ either one or the other, but not both.
\begin{center}
\begin{math}
C = \overline{A}.B + \overline{B}.A
\end{math}
\end{center}
\begin{table}[h!]
\centering
\caption{XOR Truth Table}
\label{tab 2: XOR table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\rowcolor{blue!60} A & B & $\overline{A}$ & $\overline{A}$.B & $\overline{B}$ & $\overline{B}$.A & C = $\overline{A}$.B + $\overline{B}$.A \\
\hline
\rowcolor{blue!10} 0 & 0 & 1 & 0 & 1 & 0 & 0\\
\rowcolor{blue!20} 1 & 0 & 0 & 0 & 1 & 1 & 1\\
\rowcolor{blue!10} 0 & 1 & 1 & 1 & 0 & 0 & 1\\
\rowcolor{blue!20} 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ \hline
\end{tabular}
\end{table}
\subsubsection{XNOR Gate}
The XNOR (exclusive NOR) gate is an XOR gate followed by an inverter.
\begin{center}
\begin{math}
C= \overline{\overline{A}.B + \overline{B}.A}
\end{math}
\end{center}
\begin{table}[h!]
\centering
\caption{XNOR Truth Table}
\label{tab 3: XNOR table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\rowcolor{blue!60} A & B & $\overline{A}$ & $\overline{A}$.B & $\overline{B}$ & $\overline{B}$.A & $\overline{A}$.B + $\overline{B}$.A & C = $\overline{\overline{A}.B + \overline{B}.A}$ \\
\hline
\rowcolor{blue!10} 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1\\
\rowcolor{blue!20} 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\
\rowcolor{blue!10} 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0\\
\rowcolor{blue!20} 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline
\end{tabular}
\end{table}
\newpage
\section{Logic Gates and Their Truth Tables}
\begin{figure}[h!]
\includegraphics[width=1\linewidth]{pictures/Logic gate and truth table.jpg }
\cite{picture}
\end{figure}
\newpage
\section{Summary}
\begin{itemize}
\item{Using different combinations of logic gates, complex operations can be performed (see the short sketch after this list).}
\item{With the universal logic gates, NAND and NOR, any other gate can be built.}
\item{There is no limit to the number of gates that can be arranged together in a single device.}
\item{However, in practice, there is a limit to the number of gates that can be packed into a given physical space.}
\item{Arrays of logic gates are found in digital integrated circuits.}
\item{The logic gates are abstract representations of real electronic circuits}
\item{In computers, Logic gates are built using transistors combined with other electrical components like resistors and diodes.}
\item{These electrical components are wired together in order to transform a particular input to give a desired output}
\end{itemize}
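As a short sketch of the first point (an illustration added here, not part of the original notes), the fundamental gates can be written as tiny Python functions and combined to reproduce the XOR formula given earlier:
\begin{verbatim}
# Fundamental gates as tiny Python functions.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

# Combining them gives the XOR formula C = NOT(A).B + NOT(B).A
def XOR(a, b):
    return OR(AND(NOT(a), b), AND(NOT(b), a))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))  # prints 0, 1, 1, 0 (the XOR truth table)
\end{verbatim}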
\newpage
\section{Quiz}
\begin{enumerate}
\item{What is the output of an AND gate if the inputs are 1 and 0?}
\item{Explain the difference between the AND gate and the OR gate.}
\item{What is the output of a NOT gate if the inputs is 0?}
\item{Which logic gate is this?}
\item{Which gate is also known as a logical inverter?}
\end{enumerate}
\cite{logicgates2021}
\cite{logicgates}
\newpage
\bibliography{logicGates.bib}
\bibliographystyle{ieeetr}
\end{document} | {
"alphanum_fraction": 0.6374410449,
"avg_line_length": 39.7548076923,
"ext": "tex",
"hexsha": "293ff1c21585af7c8d534350174c0ac8f1dcd345",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6c63aa0879fdc1d99ec8600e29046413093fcfe2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Dumebi35/DumebiCSC101",
"max_forks_repo_path": "week5/Class project II/Class Project II Week 5.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6c63aa0879fdc1d99ec8600e29046413093fcfe2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Dumebi35/DumebiCSC101",
"max_issues_repo_path": "week5/Class project II/Class Project II Week 5.tex",
"max_line_length": 314,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6c63aa0879fdc1d99ec8600e29046413093fcfe2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Dumebi35/DumebiCSC101",
"max_stars_repo_path": "week5/Class project II/Class Project II Week 5.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2692,
"size": 8269
} |
\section{Russian ``Active Measures'' Social Media Campaign}
\markboth{Russian ``Active Measures'' Social Media Campaign}{Russian ``Active Measures'' Social Media Campaign}
The first form of Russian election influence came principally from the Internet Research Agency, LLC (IRA), a Russian organization funded by Yevgeniy Viktorovich Prigozhin and companies he controlled, including Concord Management and Consulting LLC and Concord Catering (collectively ``Concord'').% 2
\footnote{The Office is aware of reports that other Russian entities engaged in similar active measures operations targeting the United States.
Some evidence collected by the Office corroborates those reports, and the Office has shared that evidence with other offices in the Department of Justice and~FBI.}
The IRA conducted social media operations targeted at large U.S. audiences with the goal of sowing discord in the U.S. political system.% 3
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit}
\textit{see also} SM-2230634, serial 44 (analysis).
The FBI case number cited here, and other FBI case numbers identified in the report, should be treated as law enforcement sensitive given the context.
The report contains additional law enforcement sensitive information.}
These operations constituted ``active measures'' (\foreignlanguage{russian}{активные мероприятия}), a term that typically refers to operations conducted by Russian security services aimed at influencing the course of international affairs.% 4
\footnote{As discussed in \hyperlink{section.1.5}{Part V} below, the active measures investigation has resulted in criminal charges against 13 individual Russian nationals and three Russian entities, principally for conspiracy to defraud the United States, in violation of 18~U.S.C. \S~371.
\textit{See} \hyperlink{subsection.1.5.1}{Volume~I, Section V.A}, \textit{infra}; Indictment, \textit{United States~v.\ Internet Research Agency, et~al.}, 1:18-cr-32 (D.D.C. Feb.~16, 2018), Doc.~1 (``\textit{Internet Research Agency} Indictment'').}
The IRA and its employees began operations targeting the United States as early as 2014.
Using fictitious U.S. personas, IRA employees operated social media accounts and group pages designed to attract U.S. audiences.
These groups and accounts, which addressed divisive U.S. political and social issues, falsely claimed to be controlled by U.S. activists.
Over time, these social media accounts became a means to reach large U.S. audiences.
IRA employees travelled to the United States in mid-2014 on an intelligence-gathering mission to obtain information and photographs for use in their social media posts.
IRA employees posted derogatory information about a number of candidates in the 2016 U.S. presidential election.
By early to mid-2016, IRA operations included supporting the Trump Campaign and disparaging candidate Hillary Clinton.
The IRA made various expenditures to carry out those activities, including buying political advertisements on social media in the names of U.S. persons and entities.
Some IRA employees, posing as U.S. persons and without revealing their Russian association, communicated electronically with individuals associated with the Trump Campaign and with other political activists to seek to coordinate political activities, including the staging of political rallies.% 5
\footnote{\textit{Internet Research Agency} Indictment \S\S~52, 54, 55(a), 56,~74; \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
The investigation did not identify evidence that any U.S. persons knowingly or intentionally coordinated with the IRA's interference operation.
By the end of the 2016 U.S. election, the IRA had the ability to reach millions of U.S. persons through their social media accounts.
Multiple IRA-controlled Facebook groups and Instagram accounts had hundreds of thousands of U.S. participants.
IRA-controlled Twitter accounts separately had tens of thousands of followers, including multiple U.S. political figures who retweeted IRA-created content.
In November 2017, a Facebook representative testified that Facebook had identified 470 IRA-controlled Facebook accounts that collectively made 80,000 posts between January 2015 and August 2017.
Facebook estimated the IRA reached as many as 126 million persons through its Facebook accounts.% 6
\footnote{\textit{Social Media Influence in the 2016 U.S. Election, Hearing Before the Senate Select Committee on Intelligence}, 115th Cong.~13 (11/1/17) (testimony of Colin Stretch, General Counsel of Facebook)
(``We estimate that roughly 29 million people were served content in their News Feeds directly from the IRA's 80,000 posts over the two years.
Posts from these Pages were also shared, liked, and followed by people on Facebook, and, as a result, three times more people may have been exposed to a story that originated from the Russian operation.
Our best estimate is that approximately 126 million people may have been served content from a Page associated with the IRA at some point during the two-year period.'').
The Facebook representative also testified that Facebook had identified 170 Instagram accounts that posted approximately 120,000 pieces of content during that time.
Facebook did not offer an estimate of the audience reached via Instagram.}
In January 2018, Twitter announced that it had identified 3,814 IRA-controlled Twitter accounts and notified approximately 1.4~million people Twitter believed may have been in contact with an IRA-controlled account.% 7
\footnote{Twitter, Update on Twitter's Review of the 2016 US Election (Jan.~31, 2018).}
\subsection{Structure of the Internet Research Agency}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 8
\footnote{\textit{See} SM-2230634, serial 92.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 9
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 10
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
The organization quickly grew.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 11
\footnote{\textit{See} SM-2230634, serial 86 \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 12
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
The growth of the organization also led to a more detailed organizational structure.
\blackout{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 13
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
Two individuals headed the IRA's management: its general director, Mikhail Bystrov, and its executive director, Mikhail Burchik.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 14
\footnote{\textit{See, e.g.}, SM-2230634, serials 9, 113 \&~180 \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 15
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
As early as spring of 2014, the IRA began to hide its funding and activities.
\blackout{Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 16
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor.} \textit{See} SM-2230634, serials 131 \& 204.}
The IRA's U.S. operations are part of a larger set of interlocking operations known as ``Project Lakhta,'' \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 17
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 18
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\subsection{Funding and Oversight from Concord and Prigozhin}
Until at least February 2018, Yevgeniy Viktorovich Prigozhin and two Concord companies funded the IRA\null.
Prigozhin is a wealthy Russian businessman who served as the head of Concord.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
Prigozhin was sanctioned by the U.S. Treasury Department in December 2016,% 19
\footnote{U.S. Treasury Department, ``Treasury Sanctions Individuals and Entities in Connection with Russia's Occupation of Crimea and the Conflict in Ukraine'' (Dec.~20, 2016).}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 20
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 21
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
Numerous media sources have reported on Prigozhin's ties to Putin, and the two have appeared together in public photographs.% 22
\footnote{\textit{See, e.g.}, Neil MacFarquhar, \textit{Yevgeniy Prigozhin, Russian Oligarch Indicted by U.S., Is Known as ``Putin's Cook''}, New York Times (Feb.~16, 2018).}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 23
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 24
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 25
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor} \textit{see also} SM-2230634, serial 113 \blackout{HOM}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 26
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 27
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 28
\footnote{The term ``troll'' refers to internet users---in this context, paid operatives---who post inflammatory or otherwise disruptive content on social media or other websites. }
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
IRA employees were aware that Prigozhin was involved in the IRA's U.S. operations, \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 29
\footnote{\blackout{Investigative Technique: Lorem ipsum.} \textit{See} SM-2230634, serials 131 \& 204.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 30
\footnote{\textit{See} SM-2230634, serial 156.}
In May~2016, IRA employees, claiming to be U.S. social activists and administrators of Facebook groups, recruited U.S. persons to hold signs (including one in front of the White House) that read ``Happy 55th Birthday Dear Boss,'' as an homage to Prigozhin (whose 55th birthday was on June~1, 2016).% 31
\footnote{\textit{Internet Research Agency} Indictment \P~12(b); \textit{see also} 5/26/16 Facebook Messages, ID 1479936895656747 (United Muslims of America) \& \blackout{Personal Privacy: Lorem ipsum}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 32
\footnote{\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor} \textit{see also} SM-2230634, serial 189. \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\subsection{The IRA Targets U.S. Elections}
\subsubsection{The IRA Ramps Up U.S. Operations as Early as 2014}
The IRA's U.S. operations sought to influence public opinion through online media and forums.
By the spring of 2014, the IRA began to consolidate U.S. operations within a single general department, known internally as the ``Translator'' (\foreignlanguage{russian}{переводчик}) department.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
IRA subdivided the Translator Department into different responsibilities, ranging from operations on different social media platforms to analytics to graphics and IT\null.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 33
\footnote{\blackout{Harm to Ongoing Investigation} \textit{See} SM-2230634, serial 205.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 34
\footnote{\textit{See} SM-2230634, serial 204 \blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 35
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 36
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\begin{quote}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\end{quote}
\blackout{Harm to Ongoing Matter}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 37
\footnote{\blackout{Harm to Ongoing Investigation}}
IRA employees also traveled to the United States on intelligence-gathering missions.
In June 2014, four IRA employees applied to the U.S. Department of State to enter the United States, while lying about the purpose of their trip and claiming to be four friends who had met at a party.% 38
\footnote{\textit{See} SM-2230634, serials 150 \& 172 \blackout{Harm to Ongoing Investigation}}
Ultimately, two IRA employees---Anna Bogacheva and Aleksandra Krylova---received visas and entered the United States on June~4, 2014.
Prior to traveling, Krylova and Bogacheva compiled itineraries and instructions for the trip.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 39
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 40
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 41
\footnote{\blackout{Harm to Ongoing Investigation}}
\subsubsection{U.S. Operations Through IRA-Controlled Social Media Accounts}
Dozens of IRA employees were responsible for operating accounts and personas on different U.S. social media platforms.
The IRA referred to employees assigned to operate the social media accounts as ``specialists.''% 42
\footnote{\blackout{Harm to Ongoing Investigation}}
Starting as early as 2014, the IRA's U.S. operations included social media specialists focusing on Facebook, YouTube, and Twitter.% 43
\footnote{\blackout{Harm to Ongoing Investigation}}
The IRA later added specialists who operated on Tumblr and Instagram accounts.% 44
\footnote{\textit{See, e.g.}, SM-2230634, serial 179}
Initially, the IRA created social media accounts that pretended to be the personal accounts of U.S. persons.% 45
\footnote{\textit{See, e.g.}, Facebook ID 100011390466802 (Alex Anderson);
Facebook ID 100009626173204 (Andrea Hansen);
Facebook ID 100009728618427 (Gary Williams);
Facebook ID 100013640043337 (Lakisha Richardson).}
By early 2015, the IRA began to create larger social media groups or public social media pages that claimed (falsely) to be affiliated with U.S. political and grassroots organizations.
In certain cases, the IRA created accounts that mimicked real U.S. organizations.
For example, one IRA-controlled Twitter account, \UseVerb{TENGOP}, purported to be connected to the Tennessee Republican Party.% 46
\footnote{The account claimed to be the ``Unofficial Twitter of Tennessee Republicans'' and made posts that appeared to be endorsements of the state political party.
\textit{See, e.g.}, \UseVerb{TENGOP}, 4/3/16 Tweet (``Tennessee GOP backs \UseVerb{DJT} period \UseVerb{MAGA} \UseVerb{TNGOP} \UseVerb{TENNESSEE} \UseVerb{GOP}'').}
More commonly, the IRA created accounts in the names of fictitious U.S. organizations and grassroots groups and used these accounts to pose as anti-immigration groups, Tea Party activists, Black Lives Matter protesters, and other U.S. social and political activists.
The IRA closely monitored the activity of its social media accounts.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 47
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 48
\footnote{\textit{See, e.g.}, SM-2230634 serial 131 \blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
By February 2016, internal IRA documents referred to support for the Trump Campaign and opposition to candidate Clinton.% 49
\footnote{The IRA posted content about the Clinton candidacy before Clinton officially announced her presidential campaign.
IRA-controlled social media accounts criticized Clinton's record as Secretary of State and promoted various critiques of her candidacy.
The IRA also used other techniques.
\blackout{Harm to Ongoing Investigation}}
For example, \blackout{Harm to Ongoing Matter} directions to IRA operators \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor.}
``Main idea: Use any opportunity to criticize Hillary [Clinton] and the rest (except Sanders and Trump -- we support them).''% 50
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
The focus on the U.S. presidential campaign continued through 2016. In \blackout{HOM} 2016 internal \blackout{HOM} reviewing the IRA-controlled Facebook group ``Secured Borders,'' the author criticized the ``lower number of posts dedicated to criticizing Hillary Clinton'' and reminded the Facebook specialist ``it is imperative to intensify criticizing Hillary Clinton.''% 51
\footnote{\blackout{Harm to Ongoing Investigation}}
IRA employees also acknowledged that their work focused on influencing the U.S. presidential election.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 52
\footnote{\blackout{Harm to Ongoing Investigation}}
\subsubsection{U.S. Operations Through Facebook}
Many IRA operations used Facebook accounts created and operated by its specialists.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\begin{enumerate}[i]
\item \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\item \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\item \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\end{enumerate}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 53
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 54
\footnote{\blackout{Harm to Ongoing Investigation}}
The IRA Facebook groups active during the 2016 campaign covered a range of political issues and included purported conservative groups (with names such as ``Being Patriotic,'' ``Stop All Immigrants,'' ``Secured Borders,'' and ``Tea Party News''), purported Black social justice groups (``Black Matters,'' ``Blacktivist,'' and ``Don't Shoot Us''), LGBTQ groups (``LGBT United''), and religious groups (``United Muslims of America'').
Throughout 2016, IRA accounts published an increasing number of materials supporting the Trump Campaign and opposing the Clinton Campaign.
For example, on May~31, 2016, the operational account ``Matt Skiber'' began to privately message dozens of pro-Trump Facebook groups asking them to help plan a ``pro-Trump rally near Trump Tower.''% 55
\footnote{5/31/16 Facebook Message, ID 100009922908461 (Matt Skiber) to ID \blackout{Personal Privacy}.
5/31/16 Facebook Message, ID 100009922908461 (Matt Skiber) to ID \blackout{Personal Privacy}}
To reach larger U.S. audiences, the IRA purchased advertisements from Facebook that promoted the IRA groups on the newsfeeds of U.S. audience members.
According to Facebook, the IRA purchased over 3,500 advertisements, and the expenditures totaled approximately \$100,000.% 56
\footnote{\textit{Social Media Influence in the 2016 U.S. Election}, Hearing Before the Senate Select Committee on Intelligence, 115th Cong.~13 (11/1/17) (testimony of Colin Stretch, General Counsel of Facebook).}
During the U.S. presidential campaign, many IRA-purchased advertisements explicitly supported or opposed a presidential candidate or promoted U.S. rallies organized by the IRA (discussed below).
As early as March 2016, the IRA purchased advertisements that overtly opposed the Clinton Campaign.
For example, on March~18, 2016, the IRA purchased an advertisement depicting candidate Clinton and a caption that read in part, ``If one day God lets this liar enter the White House as a president -- that day would be a real national tragedy.''% 57
\footnote{3/18/16 Facebook Advertisement ID 6045505152575.}
Similarly, on April~6, 2016, the IRA purchased advertisements for its account ``Black Matters'' calling for a ``flashmob'' of U.S. persons to ``take a photo with \UseVerb{HCFP} or \UseVerb{NOHILL}.''% 58
\footnote{4/6/16 Facebook Advertisement ID 6043740225319.}
IRA-purchased advertisements featuring Clinton were, with very few exceptions, negative.% 59
\footnote{\textit{See} SM-2230634, serial 213 (documenting politically-oriented advertisements from the larger set provided by Facebook).}
IRA-purchased advertisements referencing candidate Trump largely supported his campaign.
The first known IRA advertisement explicitly endorsing the Trump Campaign was purchased on April~19, 2016.
The IRA bought an advertisement for its Instagram account ``Tea Party News'' asking U.S. persons to help them ``make a patriotic team of young Trump supporters'' by uploading photos with the hashtag ``\UseVerb{K4T}.''% 60
\footnote{4/19/16 Facebook Advertisement ID 6045151094235.}
In subsequent months, the IRA purchased dozens of advertisements supporting the Trump Campaign, predominantly through the Facebook groups ``Being Patriotic,'' ``Stop All Invaders,'' and ``Secured Borders.''
Collectively, the IRA's social media accounts reached tens of millions of U.S. persons.
Individual IRA social media accounts attracted hundreds of thousands of followers.
For example, at the time they were deactivated by Facebook in mid-2017, the IRA's ``United Muslims of America'' Facebook group had over 300,000 followers, the ``Don't Shoot Us'' Facebook group had over 250,000 followers, the ``Being Patriotic'' Facebook group had over 200,000 followers, and the ``Secured Borders'' Facebook group had over 130,000 followers.% 61
\footnote{\textit{See} Facebook ID 1479936895656747 (United Muslims of America);
Facebook ID 1157233400960126 (Don't Shoot);
Facebook ID 1601685693432389 (Being Patriotic);
Facebook ID 757183957716200 (Secured Borders).
\blackout{Harm to Ongoing Investigation}
\blackout{Harm to Ongoing Investigation}
\blackout{Harm to Ongoing Investigation}}
According to Facebook, in total the IRA-controlled accounts made over 80,000 posts before their deactivation in August 2017, and these posts reached at least 29 million U.S persons and ``may have reached an estimated 126 million people.''% 62
\footnote{\textit{Social Media Influence in the 2016 U.S. Election}, Hearing Before the Senate Select Committee on Intelligence, 115th Cong.~13 (11/1/17) (testimony of Colin Stretch, General Counsel of Facebook).}
\subsubsection{U.S. Operations Through Twitter}
A number of IRA employees assigned to the Translator Department served as Twitter specialists.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 63
\footnote{\blackout{Harm to Ongoing Investigation}}
The IRA's Twitter operations involved two strategies.
First, IRA specialists operated certain Twitter accounts to create individual U.S. personas, \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 64
\footnote{\blackout{Harm to Ongoing Investigation}}
Separately, the IRA operated a network of automated Twitter accounts (commonly referred to as a bot network) that enabled the IRA to amplify existing content on Twitter.
\paragraph{Individualized Accounts}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 65
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 66
\footnote{\blackout{Harm to Ongoing Investigation}}
The IRA operated individualized Twitter accounts similar to the operation of its Facebook accounts, by continuously posting original content to the accounts while also communicating with U.S. Twitter users directly (through public tweeting or Twitter's private messaging).
The IRA used many of these accounts to attempt to influence U.S. audiences on the election.
Individualized accounts used to influence the U.S. presidential election included \UseVerb{TENGOP} (described above); \UseVerb{jennabrams} (claiming to be a Virginian Trump supporter with 70,000 followers); \UseVerb{PAMELA} (claiming to be a Texan Trump supporter with 70,000 followers); and \UseVerb{America1st} (an anti-immigration persona with 24,000 followers).% 67
\footnote{Other individualized accounts included \UseVerb{MNUS} (an account with 3,800 followers that posted pro-Sanders and anti-Clinton material).}
In May~2016, the IRA created the Twitter account \UseVerb{MFT}, which promoted IRA-organized rallies in support of the Trump Campaign (described below).% 68
\footnote{\textit{See} \UseVerb{MFT}, 5/30/16 Tweet (first post from account).}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\begin{quote}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 69
\footnote{\blackout{Harm to Ongoing Investigation}}
\end{quote}
Using these accounts and others, the IRA provoked reactions from users and the media.
Multiple IRA-posted tweets gained popularity.% 70
\footnote{For example, one IRA account tweeted, ``To those people, who hate the Confederate flag.
Did you know that the flag and the war wasn't about slavery, it was all about money.''
The tweet received over 40,000 responses.
\UseVerb{JENNABRAMS} 4/24/17 (2:37~p.m.) Tweet.}
U.S. media outlets also quoted tweets from IRA-controlled accounts and attributed them to the reactions of real U.S. persons.% 71
\footnote{Josephine Lukito \& Chris Wells, \textit{Most Major Outlets Have Used Russian Tweets as Sources for Partisan Opinion: Study}, Columbia Journalism Review (Mar.~8, 2018);
\textit{see also Twitter Steps Up to Explain} \UseVerb{NYV} \textit{to Ted Cruz}, Washington Post (Jan.~15, 2016) (citing IRA tweet);
\textit{People Are Slamming the CIA for Claiming Russia Tried to Help Donald Trump}, U.S. News \& World Report (Dec.~12, 2016).}
Similarly, numerous high-profile U.S. persons, including former Ambassador Michael McFaul,% 72
\footnote{\UseVerb{MCFAUL} 4/30/16 Tweet (responding to tweet by \UseVerb{JENNABRAMS}).} Roger Stone,% 73
\footnote{\UseVerb{ROGER} 5/30/16 Tweet (retweeting \UseVerb{PAMELA});
\UseVerb{ROGER} 4/26/16 Tweet (same).}
Sean Hannity,% 74
\footnote{\UseVerb{HANNITY} 6/21/17 Tweet (retweeting \UseVerb{PAMELA}).}
and Michael Flynn~Jr.,% 75
\footnote{\UseVerb{MFJ} 6/22/17 Tweet (``RT \UseVerb{JENNABRAMS}: This is what happens when you add the voice over of an old documentary about mental illness onto video of SJWs\dots'').}
retweeted or responded to tweets posted to these IRA-controlled accounts.
Multiple individuals affiliated with the Trump Campaign also promoted IRA tweets (discussed below).
\paragraph{IRA Botnet Activities}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}% 76
\footnote{A botnet refers to a network of private computers or accounts controlled as a group to send specific automated messages.
On the Twitter network, botnets can be used to promote and republish (``retweet'') specific tweets or hashtags in order for them to gain larger audiences.}
\begin{quote}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}% 77
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 78
\footnote{\blackout{Harm to Ongoing Investigation}}
\end{quote}
In January 2018, Twitter publicly identified 3,814 Twitter accounts associated with the IRA\null.% 79
\footnote{Eli Rosenberg, \textit{Twitter to Tell 677,000 Users they Were Had by the Russians. Some Signs Show the Problem Continues}, Washington Post (Jan.~19, 2019).}
According to Twitter, in the ten weeks before the 2016 U.S. presidential election, these accounts posted approximately 175,993 tweets, ``approximately 8.4\% of which were election-related.''% 80
\footnote{Twitter, ``Update on Twitter's Review of the 2016 US Election'' (updated Jan.~31, 2018).
Twitter also reported identifying 50,258 automated accounts connected to the Russian government, which tweeted more than a million times in the ten weeks before the election.}
Twitter also announced that it had notified approximately 1.4~million people who Twitter believed may have been in contact with an IRA-controlled account.% 81
\footnote{Twitter, ``Update on Twitter's Review of the 2016 US Election'' (updated Jan.~31, 2018).}
\subsubsection{U.S. Operations Involving Political Rallies}
The IRA organized and promoted political rallies inside the United States while posing as U.S. grassroots activists.
First, the IRA used one of its preexisting social media personas (Facebook groups and Twitter accounts, for example) to announce and promote the event.
The IRA then sent a large number of direct messages to followers of its social media account asking them to attend the event.
From those who responded with interest in attending, the IRA then sought a U.S. person to serve as the event's coordinator.
In most cases, the IRA account operator would tell the U.S. person that they personally could not attend the event due to some preexisting conflict or because they were somewhere else in the United States.% 82
\footnote{8/20/16 Facebook Message, ID 100009922908461 (Matt Skiber) to \blackout{Personal Privacy}}
The IRA then further promoted the event by contacting U.S. media about the event and directing them to speak with the coordinator.% 83
\footnote{\textit{See, e.g.}, 7/21/16 Email, \UseVerb{JOSHMILTON} to \blackout{Personal Privacy};
7/21/16 Email, \UseVerb{JOSHMILTON} to \blackout{Personal Privacy}}
After the event, the IRA posted videos and photographs of the event to the IRA's social media accounts.% 84
\footnote{\UseVerb{MFT} 6/25/16 Tweet (posting photos from rally outside Trump Tower).}
The Office identified dozens of U.S. rallies organized by the IRA\null. The earliest evidence of a rally was a ``confederate rally'' in November 2015.% 85
\footnote{Instagram ID 2228012168 (Stand For Freedom) 11/3/15 Post (``Good evening buds!
Well I am planning to organize a confederate rally [\dots] in Houston on the 14 of November and I want more people to attend.'').}
The IRA continued to organize rallies even after the 2016 U.S. presidential election.
The attendance at rallies varied.
Some rallies appear to have drawn few (if any) participants while others drew hundreds.
The reach and success of these rallies was closely monitored \blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\begin{wrapfigure}{l}{2.1in}
\vspace{-20pt}
\begin{center}
\includegraphics[width=2in]{images/p-31-coal-miners-poster.png}%
\end{center}
\vspace{-20pt}
\caption*{IRA Poster for Pennsylvania Rallies organized by the IRA}
\vspace{-10pt}
\label{fig:coal-miners-poster}
\end{wrapfigure}
From June 2016 until the end of the presidential campaign, almost all of the U.S. rallies organized by the IRA focused on the U.S. election, often promoting the Trump Campaign and opposing the Clinton Campaign.
Pro-Trump rallies included three in New York; a series of pro-Trump rallies in Florida in August 2016; and a series of pro-Trump rallies in October 2016 in Pennsylvania.
The Florida rallies drew the attention of the Trump Campaign, which posted about the Miami rally on candidate Trump's Facebook account (as discussed below).% 86
\footnote{The pro-Trump rallies were organized through multiple Facebook, Twitter, and email accounts.
\textit{See, e.g.}, Facebook ID 100009922908461 (Matt Skiber);
Facebook ID 1601685693432389 (Being Patriotic);
Twitter Account \UseVerb{MFT};
\UseVerb{BEINGPATRIOTIC}.
(Rallies were organized in New York on June~25, 2016; Florida on August~20, 2016; and Pennsylvania on October~2, 2016.)}
Many of the same IRA employees who oversaw the IRA's social media accounts also conducted the day-to-day recruiting for political rallies inside the United States.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 87
\footnote{\blackout{Harm to Ongoing Investigation}}
\subsubsection{Targeting and Recruitment of U.S. Persons}
As early as 2014, the IRA instructed its employees to target U.S. persons who could be used to advance its operational goals.
Initially, recruitment focused on U.S. persons who could amplify the content posted by the IRA\null.
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.}
\begin{quote}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 88
\footnote{\blackout{Harm to Ongoing Investigation}}
\end{quote}
IRA employees frequently used \blackout{Investigative Technique} Twitter, Facebook, and Instagram to contact and recruit U.S. persons who followed the group.
The IRA recruited U.S. persons from across the political spectrum.
For example, the IRA targeted the family of \blackout{Personal Privacy: Lorem ipsum} and a number of black social justice activists while posing as a grassroots group called ``Black Matters US\null.''% 89
\footnote{3/11/16 Facebook Advertisement ID 6045078289928, 5/6/16 Facebook Advertisement ID 6051652423528, 10/26/16 Facebook Advertisement ID 6055238604687;
10/27/16 Facebook Message, ID \blackout{Personal Privacy} \& ID 100011698576461 (Taylor Brooks).}
In February 2017, the persona ``Black Fist'' (purporting to want to teach African-Americans to protect themselves when contacted by law enforcement) hired a self-defense instructor in New York to offer classes sponsored by Black Fist.
The IRA also recruited moderators of conservative social media groups to promote IRA-generated content,% 90
\footnote{8/19/16 Facebook Message, ID 100009922908461 (Matt Skiber) to \blackout{Personal Privacy}}
as well as recruited individuals to perform political acts (such as walking around New York City dressed up as Santa Claus with a Trump mask).% 91
\footnote{12/8/16 Email, \UseVerb{ROBOTCL} to \UseVerb{BEINGPATRIOTIC} (confirming Craigslist advertisement).}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 92
\footnote{8/18--19/16 Twitter DMs, \UseVerb{MFT} \& \blackout{Personal Privacy}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.}% 93
\footnote{\textit{See, e.g.}, 11/11--27/16 Facebook Messages, ID 100011698576461 (Taylor Brooks) \& ID \blackout{Personal Privacy} (arranging to pay for plane tickets and for a bull horn).
}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.}% 94
\footnote{\textit{See, e.g.}, 9/10/16 Facebook Message, ID 100009922908461 (Matt Skiber) \& ID \blackout{Personal Privacy} (discussing payment for rally supplies);
8/18/16 Twitter DM, \UseVerb{MFT} to \blackout{Personal Privacy} (discussing payment for construction materials).}
\blackout{Harm to Ongoing Matter} as the IRA's online audience became larger, the IRA tracked U.S. persons with whom they communicated and had successfully tasked (with tasks ranging from organizing rallies to taking pictures with certain political messages).
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 95
\footnote{\blackout{Harm to Ongoing Investigation}}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.}
\subsubsection{Interactions and Contacts with the Trump Campaign}
The investigation identified two different forms of connections between the IRA and members of the Trump Campaign.
(The investigation identified no similar connections between the IRA and the Clinton Campaign.)
First, on multiple occasions, members and surrogates of the Trump Campaign promoted---typically by linking, retweeting, or similar methods of reposting---pro-Trump or anti-Clinton content published by the IRA through IRA-controlled social media accounts.
Additionally, in a few instances, IRA employees represented themselves as U.S. persons to communicate with members of the Trump Campaign in an effort to seek assistance and coordination on IRA-organized political rallies inside the United States.
\paragraph{Trump Campaign Promotion of IRA Political Materials}
Among the U.S. ``leaders of public opinion'' targeted by the IRA were various members and surrogates of the Trump Campaign.
In total, Trump Campaign affiliates promoted dozens of tweets, posts, and other political content created by the IRA\null.
\begin{itemize}
\item Posts from the IRA-controlled Twitter account \UseVerb{TENGOP} were cited or retweeted by multiple Trump Campaign officials and surrogates, including Donald J. Trump~Jr.,% 96
\footnote{\textit{See, e.g.}, \UseVerb{DJTJ} 10/26/16 Tweet (``RT \UseVerb{TENGOP}: BREAKING Thousands of names changed on voter rolls in Indiana.
Police investigating \UseVerb{VOTERFRAUD}. \UseVerb{DRAINTS}.'');
\UseVerb{DJTJ} 11/2/16 Tweet (``RT \UseVerb{TENGOP}: BREAKING: \UseVerb{VOTERFRAUD} by counting tens of thousands of ineligible mail in Hillary votes being reported in Broward County, Florida.'');
\UseVerb{DJTJ} 11/8/16 Tweet (``RT \UseVerb{TENGOP}:This vet passed away last month before he could vote for Trump.
Here he is in his \UseVerb{MAGAhat}.
\UseVerb{voted} \UseVerb{ElectionDay}.'').
Trump~Jr.\ retweeted additional \UseVerb{TENGOP} content subsequent to the election.}
Eric Trump,% 97
\footnote{\UseVerb{ERICTRUMP} 10/20/16 Tweet (``RT \UseVerb{TENGOP}: BREAKING Hillary shuts down press conference when asked about DNC Operatives corruption \& \UseVerb{VOTERFRAUD} \UseVerb{debatenight} \UseVerb{TrumpB}'').}
Kellyanne Conway,% 98
\footnote{\UseVerb{KellyannePolls} 11/6/16 Tweet (``RT \UseVerb{TENGOP}: Mother of jailed sailor: `Hold Hillary to same standards as my son on Classified info' \UseVerb{hillarysemail} \UseVerb{WeinerGate}.'').}
Brad Parscale,% 99
\footnote{\UseVerb{parscale} 10/15/16 Tweet (``Thousands of deplorables chanting to the media: `TellTheTruth!' RT if you are also done w/biased Media! \UseVerb{FridayFeeling}'').}
and Michael T. Flynn.% 100
\footnote{\UseVerb{GenFlynn} 11/7/16 (retweeting \UseVerb{TENGOP} post that included in part ``\UseVerb{DJT} \& \UseVerb{mikepence} will be our next POTUS \& VPOTUS.'').}
These posts included allegations of voter fraud,% 101
\footnote{\UseVerb{TENGOP} 10/11/16 Tweet (``North Carolina finds 2,214 voters over the age of 110!!'').}
as well as allegations that Secretary Clinton had mishandled classified information.% 102
\footnote{\UseVerb{TENGOP} 11/6/16 Tweet (``Mother of jailed sailor: `Hold Hillary to same standards as my son on classified info['] \UseVerb{hillaryemail} \UseVerb{WeinerGate}.'\thinspace'').}
\item A November~7, 2016 post from the IRA-controlled Twitter account \UseVerb{PAMELA} was retweeted by Donald J. Trump~Jr.% 103
\footnote{\UseVerb{DJTJ} 11/7/16 Tweet (``RT \UseVerb{PAMELA}: Detroit residents speak out against the failed policies of Obama, Hillary \& democrats\dots.'').}
\item On September~19, 2017, President Trump's personal account \UseVerb{DJT} responded to a tweet from the IRA-controlled account \UseVerb{10gop} (the backup account of \UseVerb{TENGOP}, which had already been deactivated by Twitter). The tweet read: ``We love you, Mr.~President!''% 104
\footnote{\UseVerb{DJT} 9/19/17 (7:33~p.m.) Tweet (``THANK YOU for your support Miami! My team just shared photos from your TRUMP SIGN WAVING DAY, yesterday! I love you -- and there is no question -- TOGETHER, WE WILL MAKE AMERICA GREAT AGAIN!'')}
\end{itemize}
IRA employees monitored the reaction of the Trump Campaign and, later, Trump Administration officials to their tweets.
For example, on August~23, 2016, the IRA-controlled persona ``Matt Skiber'' Facebook account sent a message to a U.S. Tea Party activist, writing that ``Mr.~Trump posted about our event in Miami! This is great!''% 105
\footnote{8/23/16 Facebook Message, ID 100009922908461 (Matt Skiber) to \blackout{Personal Privacy}}
The IRA employee included a screenshot of candidate Trump's Facebook account, which included a post about the August~20, 2016 political rallies organized by the IRA\null.
\begin{wrapfigure}{r}{2.1in}
\vspace{-20pt}
\begin{center}
\includegraphics[width=2in]{images/p-34-trump-facebook.png}%
\end{center}
\vspace{-20pt}
\caption*{Screenshot of Trump Facebook Account (from Matt Skiber)}
\vspace{-10pt}
\label{fig:trump-facebook}
\end{wrapfigure}
\blackout{Harm to Ongoing Matter: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.}% 106
\footnote{\blackout{Harm to Ongoing Investigation}}
\paragraph{Contact with Trump Campaign Officials in Connection to Rallies}
Starting in June 2016, the IRA contacted different U.S. persons affiliated with the Trump Campaign in an effort to coordinate pro-Trump IRA-organized rallies inside the United States.
In all cases, the IRA contacted the Campaign while claiming to be U.S. political activists working on behalf of a conservative grassroots organization.
The IRA's contacts included requests for signs and other materials to use at rallies,% 107
\footnote{\textit{See, e.g.}, 8/16/16 Email, \UseVerb{JOSHMILTON} to \blackout{Personal Privacy}\UseVerb{dtcom} (asking for Trump/Pence signs for Florida rally);
8/18/16 Email, \UseVerb{JOSHMILTON} to \blackout{Personal Privacy}\UseVerb{dtcom} (asking for Trump/Pence signs for Florida rally);
8/12/16 Email, \UseVerb{JOSHMILTON} to \blackout{Personal Privacy}\UseVerb{dtcom} (asking for ``contact phone numbers for Trump Campaign affiliates'' in various Florida cities and signs).
}
as well as requests to promote the rallies and help coordinate logistics.% 108
\footnote{8/15/16 Email, \blackout{Personal Privacy} to \UseVerb{JOSHMILTON} (asking to add to locations to the ``Florida Goes Trump,'' list);
8/16/16 Email, to \blackout{Personal Privacy} to \UseVerb{JOSHMILTON} (volunteering to send an email blast to followers).}
While certain campaign volunteers agreed to provide the requested support (for example, agreeing to set aside a number of signs), the investigation has not identified evidence that any Trump Campaign official understood the requests were coming from foreign nationals.
\hr
In sum, the investigation established that Russia interfered in the 2016 presidential election through the ``active measures'' social media campaign carried out by the IRA, an organization funded by Prigozhin and companies that he controlled.
As explained further in \hyperlink{subsection.1.5.1}{Volume~I, Section V.A}, \textit{infra}, the Office concluded (and a grand jury has alleged) that Prigozhin, his companies, and IRA employees violated U.S. law through these operations, principally by undermining through deceptive acts the work of federal agencies charged with regulating foreign influence in U.S. elections.
| {
"alphanum_fraction": 0.7948055189,
"avg_line_length": 106.2462121212,
"ext": "tex",
"hexsha": "0e50a8e5afa58cbd1eb6471653ee8ea4adf946c6",
"lang": "TeX",
"max_forks_count": 8,
"max_forks_repo_forks_event_max_datetime": "2019-12-10T19:38:44.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-04-20T21:02:20.000Z",
"max_forks_repo_head_hexsha": "d8fac96fa2d04aa31516d7079533b20703d8dfee",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "mds08011/multi-publish",
"max_forks_repo_path": "src/volume-1/active-measures.tex",
"max_issues_count": 64,
"max_issues_repo_head_hexsha": "3aa16a20104f48623ce8e12c8502ecb1867a40f8",
"max_issues_repo_issues_event_max_datetime": "2019-09-02T03:01:19.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-04-20T13:38:54.000Z",
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "ascherer/mueller-report",
"max_issues_repo_path": "src/volume-1/active-measures.tex",
"max_line_length": 1372,
"max_stars_count": 57,
"max_stars_repo_head_hexsha": "3aa16a20104f48623ce8e12c8502ecb1867a40f8",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "ascherer/mueller-report",
"max_stars_repo_path": "src/volume-1/active-measures.tex",
"max_stars_repo_stars_event_max_datetime": "2021-11-16T13:32:17.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-04-20T13:29:36.000Z",
"num_tokens": 13474,
"size": 56098
} |
\chapter*{Acknowledgements}
\label{ch:Acknowledgements}
\addcontentsline{toc}{chapter}{Acknowledgements}
And of course include your acknowledgements here: my supervisor was always there for me, taught me so much, and hence will receive eternal praise and gratitude. Actually, I rarely saw him/her, and that was just as well.
| {
"alphanum_fraction": 0.8103975535,
"avg_line_length": 54.5,
"ext": "tex",
"hexsha": "620c4ba7418ece09c5b1775ab19a81ca379e0b56",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2021-04-13T09:12:35.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-07-02T14:23:29.000Z",
"max_forks_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "zoeschindler/ResearchSkills",
"max_forks_repo_path": "Labs/LaTeX/Template-BScMSc-Freiburg/chapters/acknowledgements.tex",
"max_issues_count": 34,
"max_issues_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0",
"max_issues_repo_issues_event_max_datetime": "2020-01-24T08:34:26.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-06-24T09:32:33.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "zoeschindler/ResearchSkills",
"max_issues_repo_path": "Labs/LaTeX/Template-BScMSc-Freiburg/chapters/acknowledgements.tex",
"max_line_length": 220,
"max_stars_count": 9,
"max_stars_repo_head_hexsha": "2fb606d3068bced37c8da6fb8fe2525be96a58a0",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "florianhartig/ResearchSkills",
"max_stars_repo_path": "Labs/LaTeX/Template-BScMSc-Freiburg/chapters/acknowledgements.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-08T05:59:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-20T22:16:38.000Z",
"num_tokens": 72,
"size": 327
} |
\chapter{List of publications}
\begin{itemize}
\item Clark, S.J., \textbf{Argelaguet, R.}, Kapourani, C.A., Stubbs, T.M., Lee, H.J., Alda-Catalinas, C., Krueger, F., Sanguinetti, G., Kelsey, G., Marioni, J.C., Stegle, O. \& Reik, W. scNMT-seq enables joint profiling of chromatin accessibility DNA methylation and transcription in single cells. \textit{Nature Communications}. 9, 781 (2018). doi: 10.1038/s41467-018-03149-4
\item \textbf{Argelaguet, R.} \& Velten B. et al. Multi-Omics Factor Analysis: a framework for unsupervised integration of multi-omics data sets. \textit{Mol Syst Biol} 14, (2018). doi:10.15252/msb.20178124
\item \textbf{Argelaguet, R.} \& Clark S.J. et al. Multi-omics profiling of mouse gastrulation at single-cell resolution. \textit{Nature} 576, 487-491 (2019). doi:10.1038/s41586-019-1825-8
\item \textbf{Argelaguet, R.} \& Arnol D. et al. et al. MOFA+: a statistical framework for comprehensive integration of multi-modal single-cell data. \textit{Genome Biol}. 21, 111 (2020). doi:10.1186/s13059-020-02015-1
\item Haak B.W., \textbf{Argelaguet, R.} et al. Intestinal transkingdom analysis on the impact of antibiotic perturbation in health and critical illness. \textit{bioRxiv}. (2020). doi:10.1101/2020.06.25.171553v1
\item Kapourani, C.A. \& \textbf{Argelaguet, R.} scMET: Bayesian modelling of DNA methylation heterogeneity at single-cell resolution. \textit{bioRxiv}. (2020). doi: 10.1101/2020.07.10.196816
\end{itemize}
| {
"alphanum_fraction": 0.742642026,
"avg_line_length": 104.3571428571,
"ext": "tex",
"hexsha": "83aaea15846fcc7ceaf2be0e04c17d8cbac6f50d",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2021-10-04T08:25:50.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-09T04:47:49.000Z",
"max_forks_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rargelaguet/thesis",
"max_forks_repo_path": "Introduction/list_of_publications.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rargelaguet/thesis",
"max_issues_repo_path": "Introduction/list_of_publications.tex",
"max_line_length": 219,
"max_stars_count": 15,
"max_stars_repo_head_hexsha": "ff3f7b996710c06d6924b7e780a4a9531651a3a0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rargelaguet/thesis",
"max_stars_repo_path": "Introduction/list_of_publications.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-26T07:24:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-08T13:01:16.000Z",
"num_tokens": 525,
"size": 1461
} |
\subsection{Creating a First Version of the App}
Since Meteor was chosen, a multiple-choice quiz tutorial in Meteor was used to guide the first version of the app. Modifications were made, for example making the app responsive and adapting it to YoungDrive's graphic profile.
The app was pushed to GitHub and first hosted on Meteor's free hosting, available via youngdrive.meteorapp.com. For Android and iOS, the app could be installed onto a device from the computer. For each day of the training in Zambia, new quizzes were added to the app, which created a belt path (see \ref{progress-payoffs}).
After iteration 2, when the Meteor free tier was removed, a different hosting platform was needed; Heroku was chosen. A staging environment on Heroku allowed changes on specific GitHub branches to automatically deploy updates to the Heroku servers. The MongoDB database was created using the Heroku plugin MongoLab, and a Meteor buildpack was used to run Meteor on Heroku.
It was also tested to upload the app to the Android Play Store. The necessary Cordova steps had to be followed, screenshots uploaded, and some administrative tasks completed. After this, it only took a day for the app to appear on the Play Store, and everything worked satisfactorily.
| {
"alphanum_fraction": 0.8046875,
"avg_line_length": 128,
"ext": "tex",
"hexsha": "a5262ceae7749955899c80b08177855cb8a93d22",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "marcusnygren/YoungDriveMasterThesis",
"max_forks_repo_path": "methods/application-implementation/iteration_2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "marcusnygren/YoungDriveMasterThesis",
"max_issues_repo_path": "methods/application-implementation/iteration_2.tex",
"max_line_length": 386,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1e8639a356a7d2d4866819d7a569a24cc06e6a17",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "marcusnygren/YoungDriveMasterThesis",
"max_stars_repo_path": "methods/application-implementation/iteration_2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 262,
"size": 1280
} |
\documentclass[a4paper,twoside,subsubproblemsty=Alph,problemsty=Roman,subproblemsty=arabic]{HomeworkAssignment}
\usepackage[ngerman]{babel}
\usepackage{tikz}
\usetikzlibrary{%
arrows,%
shapes.misc,% for rounded rectangle
shapes.arrows,%
chains,%
matrix,%
positioning,% for " of "
scopes,%
decorations.pathmorphing,% /pgf/decoration/random steps | first graphic
shadows%
}
\usepackage{amsmath}
\usepackage[autostyle,german = guillemets]{csquotes}
\author{Autor Eins 1701\\ Autor Zwei 74656}
\date{\today}
\deadline{\today}
\tutorial{\"Ubungsgruppe 42}
\subject{Programmierung}
\begin{document}
\tikzset{
nonterminal/.style={
% The shape:
rectangle,
% The size:
minimum size=6mm,
% The border:
very thick,
draw=red!50!black!50, % 50% red and 50% black,
% and that mixed with 50% white
% The filling:
top color=white, % a shading that is white at the top...
bottom color=red!50!black!20, % and something else at the bottom
% Font
font=\itshape
},
terminal/.style={
% The shape:
rounded rectangle,
minimum size=6mm,
% The rest
very thick,draw=black!50,
top color=white,bottom color=black!20,
font=\ttfamily},
skip loop/.style={to path={-- ++(0,#1) -| (\tikztotarget)}}
}
{
\tikzset{terminal/.append style={text height=1.5ex,text depth=.25ex}}
\tikzset{nonterminal/.append style={text height=1.5ex,text depth=.25ex}}
}
\maketitle
\tableofcontents
\newproblem
\newsubproblem
\newsubsubproblem
\begin{align*}
&&S_2 \\
S_2 \rightarrow & A.S_2 & A.S_2\\
A \rightarrow & B & B.S_2\\
B \rightarrow & p & p.S_2\\
S_2 \rightarrow & A.S_2 & p.A.S_2\\
A \rightarrow & B & p.B.S_2\\
B \rightarrow & q & p.q.S_2\\
S_2 \rightarrow & A. & p.q.A.\\
A \rightarrow & B:-B & p.q.B:-B.\\
B \rightarrow & r & p.q.r:-B.\\
B \rightarrow & q & p.q.r:-q.\\
\end{align*}
The expression is accepted.
\begin{align*}
\mathcal{W}(p.q.r:-q.) = & \mathcal{W}(p.q.)\cup\{r\}\\
= & \mathcal{W}(p.)\cup\{q\}\cup\{r\}\\
= & \{p\}\cup\{q\}\cup\{r\}\\
= & \{p,q,r\}\\
\end{align*}
\newsubsubproblem
\begin{align*}
&&S_2\\
S_2 \rightarrow & A.S_2 & A.S_2\\
A \rightarrow & B:-B & B:-B.S_2\\
B \rightarrow & q & q:-B.S_2\\
B \rightarrow & p & q:-p.S_2\\
S_2 \rightarrow & A. & q:-p.A.\\
A \rightarrow & B:-B & q:-p.B:-B.\\
B \rightarrow & p & q:-p.p:-B.\\
B \rightarrow & q & q:-p.p:-q.\\
\end{align*}
The expression is accepted.
\begin{align*}
\mathcal{W}(q:-p.p:-q.) & = \mathcal{W}(q:-p.)\\
& = \emptyset
\end{align*}
\newsubsubproblem
\begin{align*}
&&S_2\\
S_2 \rightarrow & A.S_2 & A.S_2\\
A \rightarrow & B:-B & B:-B.S_2\\
B \rightarrow & q & q:-B.S_2\\
B \rightarrow & p & q:-p.S_2\\
S_2 \rightarrow & A. & q:-p.A.\\
A \rightarrow & B & q:-p.B.\\
B \rightarrow & p & q:-p.p.\\
\end{align*}
The expression is accepted.
\begin{align*}
\mathcal{W}(q:-p.p.) & = \mathcal{W}(q:-p.) \cup \{ p \}\\
& = \emptyset \cup \{ p \}\\
& = \{ p \}\\
\end{align*}
\newsubsubproblem
The expression is not accepted, since \enquote{t} is not a symbol of the alphabet.
\newsubproblem
Let $\mathcal{S}$ be a language and $\mathcal{P}$ be a program.
To show:\\
\begin{align*}
&&\mathcal{P} \text{ is semantically correct w.r.t. } \mathcal{S} \Rightarrow & \mathcal{P} \text{ is syntactically correct}\\
\Leftrightarrow&&\mathcal{P} \text{ is syntactically incorrect} \Rightarrow & \mathcal{P} \text{ is semantically incorrect} & \text{(matches the definition)}\\
&&&&qed
\end{align*}
\subsection*{c)}
Let $\mathcal{A}_1$ and $\mathcal{A}_2$ be two expressions in a language, and suppose that:
\begin{align*}
&& \mathcal{W}(\mathcal{A}_1) \neq \mathcal{W}(\mathcal{A}_2) & \Rightarrow \mathcal{A}_1 \neq \mathcal{A}_2\\
\text{then it also holds that: }&& \mathcal{A}_1 = \mathcal{A}_2 & \Rightarrow \mathcal{W}(\mathcal{A}_1) = \mathcal{W}(\mathcal{A}_2) \\
&&&&qed
\end{align*}
\newproblem[3] % skips problem 2
\newsubproblem
$G = (\{S,A,B\},\{a,b\},P,S)$ with the production rules $P$:
\begin{align*}
S \rightarrow & A \\
S \rightarrow & B \\
A \rightarrow & aAb\\
A \rightarrow & AA\\
A \rightarrow & a\\
B \rightarrow & \varepsilon\\
B \rightarrow & Bb\\
\end{align*}
\newsubproblem
\begin{align*}
S_1& = ( \{ b \} | S_2)\\
S_2& = [ [S_2] a [S_2] b [S_2] ]\\
\end{align*}
\newpage
\newsubproblem
\begin{figure}[h]
\begin{tikzpicture}[
>=latex,thick,
/pgf/every decoration/.style={/tikz/sharp corners},
fuzzy/.style={decorate,
decoration={random steps,segment length=0.5mm,amplitude=0.15pt}},
minimum size=6mm,line join=round,line cap=round,
terminal/.style={rectangle,draw,fill=white,fuzzy,rounded corners=3mm},
nonterminal/.style={rectangle,draw,fill=white,fuzzy},,
node distance=4mm,
]
\ttfamily
\begin{scope}[start chain,
every node/.style={on chain},
terminal/.append style={join=by {->,shorten >=-1pt,
fuzzy,decoration={post length=4pt}}},
nonterminal/.append style={join=by {->,shorten >=-1pt,
fuzzy,decoration={post length=4pt}}},
support/.style={coordinate,join=by fuzzy}
]
\node [support] (start) {};
\node [support,xshift=5mm] (after start2) {};
\node [support,xshift=5mm] (after start) {};
\node [terminal,xshift=5mm] (b) {b};
\node [support,xshift=5mm] (before end) {};
\node [support,xshift=5mm] (before end2) {};
\node [coordinate,join=by ->] (end) {};
\end{scope}
\node (s2) [nonterminal,above=of b] {$S_2$};
\node (support) [below=of b] {};
\begin{scope}[->,decoration={post length=4pt},rounded corners=2mm,
every path/.style=fuzzy]
\draw (after start2) |- (s2);
\draw (s2) -| (before end2);
\draw (before end) -- +(0,-.7) -| (after start);
\end{scope}
\end{tikzpicture}
\caption{Rule $S_1$}
\end{figure}
\begin{figure}[h]
\begin{tikzpicture}[
>=latex,thick,
/pgf/every decoration/.style={/tikz/sharp corners},
fuzzy/.style={decorate,
decoration={random steps,segment length=0.5mm,amplitude=0.15pt}},
minimum size=6mm,line join=round,line cap=round,
terminal/.style={rectangle,draw,fill=white,fuzzy,rounded corners=3mm},
nonterminal/.style={rectangle,draw,fill=white,fuzzy},,
node distance=4mm,
]
\ttfamily
\begin{scope}[start chain,
every node/.style={on chain},
terminal/.append style={join=by {->,shorten >=-1pt,
fuzzy,decoration={post length=4pt}}},
nonterminal/.append style={join=by {->,shorten >=-1pt,
fuzzy,decoration={post length=4pt}}},
support/.style={coordinate,join=by fuzzy}
]
\node [support] (start) {};
\node [support,xshift=5mm] (after start) {};
\node [support,xshift=5mm] (line S2_1) {};
\node [support,xshift=5mm] (before a) {};
\node [terminal,xshift=5mm] (a) {a};
\node [support,xshift=5mm] (after a) {};
\node [support,xshift=5mm] (line S2_2) {};
\node [support,xshift=5mm] (before b) {};
\node [terminal,xshift=5mm] (b) {b};
\node [support,xshift=5mm] (before end) {};
\node [coordinate,join=by ->] (end) {};
\end{scope}
\node (s2_1) [nonterminal,above=of line S2_1] {$S_2$};
\node (s2_2) [nonterminal,above=of line S2_2] {$S_2$};
\begin{scope}[->,decoration={post length=4pt},rounded corners=2mm,
every path/.style=fuzzy]
\draw (after start) |- (s2_1);
\draw (s2_1) -| (before a);
\draw (after a) |- (s2_2);
\draw (s2_2) -| (before b);
\draw (before end) -- +(0,-.7) -| (after start);
\end{scope}
\end{tikzpicture}
\caption{Rule $S_2$}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.5883383383,
"avg_line_length": 30.045112782,
"ext": "tex",
"hexsha": "f573ce1f4ea4fafb746e3be15bea86b8ad0e8846",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2022-03-29T23:20:57.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-06-17T07:35:09.000Z",
"max_forks_repo_head_hexsha": "6a59f9b4a78f35c847e55c4a3c94480c437ce160",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ACHinrichs/LaTeX-templates",
"max_forks_repo_path": "examples/eg_assignment_3.tex",
"max_issues_count": 8,
"max_issues_repo_head_hexsha": "6a59f9b4a78f35c847e55c4a3c94480c437ce160",
"max_issues_repo_issues_event_max_datetime": "2018-03-15T18:25:28.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-05-05T08:35:34.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ACHinrichs/LaTeX-templates",
"max_issues_repo_path": "examples/eg_assignment_3.tex",
"max_line_length": 144,
"max_stars_count": 37,
"max_stars_repo_head_hexsha": "6a59f9b4a78f35c847e55c4a3c94480c437ce160",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ACHinrichs/LaTeX-templates",
"max_stars_repo_path": "examples/eg_assignment_3.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-21T01:04:31.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-11-01T20:58:33.000Z",
"num_tokens": 2865,
"size": 7992
} |
\par
\chapter{{\tt Lock}: Mutual Exclusion Lock object}
\label{chapter:Lock}
\par
The {\tt Lock} object insulates the rest of the library
from the particular thread package that is active.
The {\tt FrontMtx}, {\tt ChvList}, {\tt ChvManager},
{\tt SubMtxList} and {\tt SubMtxManager} objects all
may contain a mutual exclusion lock to govern access to their
critical sections of code in a multithreaded environment.
Instead of putting the raw code that is specific to a particular
thread library into each of these objects,
each has a {\tt Lock} object.
It is this {\tt Lock} object that contains the code and data
structures for the different thread libraries.
\par
At present we have the Solaris and POSIX thread libraries supported
by the {\tt Lock} object.
The header file {\tt Lock.h} contains {\tt \#if/\#endif} statements
that switch over the supported libraries.
The {\tt THREAD\_TYPE} parameter is used to make the switch.
Porting the library to another thread package requires making
changes to the {\tt Lock} object.
The parallel factor and solve methods that belong to the
{\tt FrontMtx} object also need to have additional code inserted into
them to govern thread creation, joining, etc, but the switch is
made by the {\tt THREAD\_TYPE} definition found in the header file
{\tt Lock.h}.
It is possible to use the code without any thread package ---
simply set {\tt THREAD\_TYPE} to {\tt TT\_NONE} in the {\tt Lock.h}
file.
| {
"alphanum_fraction": 0.7672354949,
"avg_line_length": 44.3939393939,
"ext": "tex",
"hexsha": "3856ccf650c0c755a0e4e3b0f3f70bcb78f7baa1",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alleindrach/calculix-desktop",
"max_forks_repo_path": "ccx_prool/SPOOLES.2.2/Lock/doc/intro.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alleindrach/calculix-desktop",
"max_issues_repo_path": "ccx_prool/SPOOLES.2.2/Lock/doc/intro.tex",
"max_line_length": 69,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alleindrach/calculix-desktop",
"max_stars_repo_path": "ccx_prool/SPOOLES.2.2/Lock/doc/intro.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 357,
"size": 1465
} |
\documentclass{article}
\usepackage[a4paper, total={6in, 10in}]{geometry}
%\setlength{\parskip}{0.01cm plus4mm minus3mm}
\usepackage{multicol}
\usepackage{enumitem}
\usepackage{listings}
\usepackage{xparse}
\usepackage[superscript,biblabel]{cite}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{caption}
\usepackage{blindtext}
\usepackage{dirtree}
\usepackage{amsmath}
\usepackage{xcolor}
\colorlet{cBlue}{blue!80}
\colorlet{cPurple}{blue!40!red}
\colorlet{cRed}{red!60}
\NewDocumentCommand{\codeword}{v}{
\texttt{\textcolor{cBlue}{#1}}
}
\NewDocumentCommand{\cmd}{v}{
\textit{\textcolor{cPurple}{#1}}
}
% Styles stolen from https://www.overleaf.com/learn/latex/Code_listing
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{mystyle}{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\ttfamily\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
\lstset{style=mystyle}
\title{\Huge Coursework 3: Predator-Prey Simulation}
\author{William Bradford Larcombe \small (K21003008) \\ Pawel Makles \small (K21002534)}
\date{\small Created: 1st March 2021 \\ Last Modified: 2nd March 2021}
\begin{document}
% Cover Page
\maketitle
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{../screenshot.png}
\caption{Screenshot of the Simulation}
\end{figure}
\newpage
\begin{multicols}{2}
\section{The Simulation}
\subsection{Running the Simulation}
On most platforms, the simulation should run in BlueJ \cite{BlueJ} as expected by right-clicking the \codeword{Main} class and selecting \codeword{runSimulation()} (as seen in \autoref{fig:bluej}).
This may fail if you restart the simulation or are running on an Apple M1 Mac, as GLFW expects to be running on the first thread (\codeword{THREAD0}).
On an M1 Mac, you can run it as follows:
\begin{lstlisting}
java -XstartOnFirstThread -jar ppa-cw3-1.0.0-bluej-all-natives.jar\end{lstlisting}
In general, running outside of BlueJ:
\begin{lstlisting}
java -jar ppa-cw3-all-natives.jar\end{lstlisting}
In any case (if you are on old hardware or another issue crops up), a demo of the application can be found on YouTube:
\url{https://youtu.be/yUPDJ3hFPvw}
\subsection{Species List}
The following species are present:
\begin{itemize}
\setlength\itemsep{0.01em}
\item \textbf{Bird} (\autoref{fig:bird-model})
\item \textbf{Bunny} (\autoref{fig:bunny-model})
\item \textbf{Ferret} (\autoref{fig:ferret-model})
\item \textbf{Falcon} (\autoref{fig:falcon-model})
\item \textbf{Fish} (\autoref{fig:fish-model})
\item \textbf{Grass} (\autoref{fig:grass-model})
\item \textbf{Kelp} (\autoref{fig:kelp-model})
\item \textbf{Pine} (\autoref{fig:trees-model})
\item \textbf{Snake} (\autoref{fig:snake-model})
\end{itemize}
\subsection{Species Interactions}
\subsubsection{Herbivorous Interactions}
Bunnies and Ferrets both interact with grass by eating it. This is governed by the EatFoliageBehaviour. Fish interact with Kelp in the same way, eating it.
Herbivorous Birds interact with Trees by eating the fruit off them; this is governed by the EatFruitFromTreesBehaviour. Unlike the grass-eating or kelp-eating of other animals, birds do not harm the trees by eating their fruit.
\subsubsection{Carnivorous Interactions}
The HuntBehaviour provides animals with the ability to hunt down other animals for food.
Falcons will hunt down and eat Bunnies, Fish, and Ferrets.
Snakes will hunt down and eat Bunnies.
\subsubsection{Other Interactions}
Due to the threat posed by their predators, most prey animals will flee them (with their FleeBehaviour): Bunnies flee Snakes and Falcons, and Ferrets flee Falcons. Fish can't see the falcons coming from above the water, and so never get a chance to flee!
\subsubsection{Intra-Species Interactions}
All animals (Birds, Bunnies, Ferrets, Falcons, Fish, and Snakes) seek out other members of their species to mate with and produce children. This is governed by the BreedBehaviour.
\subsection{Challenge Tasks}
We completed the following challenge tasks:
\begin{itemize}
\setlength\itemsep{0.01em}
\item Multi-layered simulation: simulating plants and other foliage as well as aerial animals.
\item Brain / Behaviour Model: advanced per-entity behaviour.
\item 3D Rendered Simulation: 3D representation of the simulation world.
\item World Generation: generating a 3D world with biomes.
\end{itemize}
% Layered simulation and interactions between them
\section{Multi-layered Simulation}
% this is one of the provided challenge tasks
We decided that when simulating plants, it would be advantageous to use the same entity system as used for animals to leverage the systems already in place for them. However, it would be necessary for plants and animals to occupy the same space on the simulation grid.
To allow for this, we created a layered system where different entities could reside on different layers, passing 'over' one another. An Enum was used to define the layers.
After implementing this system for plants, we expanded it by adding an ``aerial entities'' layer. This allowed birds to fly over other entities without blocking their movement. A vertical offset was also set for each layer, allowing the aerial layer's entities to be rendered in the air.
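A minimal sketch of how such a layer enum might look is shown below; the constant names and vertical offsets are illustrative rather than the project's actual values.
\begin{lstlisting}
// Illustrative layer enum; constant names and offsets are assumptions.
public enum Layer {
    FOLIAGE(0.0f), // plants such as grass, kelp and trees
    GROUND(0.0f),  // land and water animals
    AERIAL(1.5f);  // birds and other flying entities

    private final float verticalOffset; // height added when rendering

    Layer(float verticalOffset) {
        this.verticalOffset = verticalOffset;
    }

    public float getVerticalOffset() {
        return verticalOffset;
    }
}\end{lstlisting}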
% Could talk about implementation of AI
\section{Brain / Behaviour Model}
The AI behaviours of entities are implemented using a system based around 'Behaviours' and 'Brains'.
Behaviours are the basic unit of AI: things like fleeing predators, hunting prey, or idly wandering around are each implemented as a Behaviour. Behaviours are attached to an entity using the Entity Brain system. The Brain holds a list of Behaviours in priority order; once per tick, the highest-priority Behaviour that reports it can run is executed.
The Behaviours and Brains system makes it simple to compose complex AI by attaching simple Behaviours to an entity. This also reduces code duplication, as many entities may need to use the same AI building blocks.
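The sketch below illustrates the pattern; the interface, class and method names are assumptions rather than the project's actual API, and the empty Entity and World classes are placeholders for the real simulation types.
\begin{lstlisting}
import java.util.List;

// Placeholder types standing in for the real simulation classes.
class Entity { }
class World { }

// One basic unit of AI.
interface Behaviour {
    // Can this behaviour run for the entity right now?
    boolean canStart(Entity entity, World world);
    // Advance the behaviour by one step.
    void tick(Entity entity, World world);
}

// Runs the highest-priority runnable behaviour once per tick.
class Brain {
    private final List<Behaviour> behaviours; // highest priority first

    Brain(List<Behaviour> behaviours) {
        this.behaviours = behaviours;
    }

    void tick(Entity entity, World world) {
        for (Behaviour behaviour : behaviours) {
            if (behaviour.canStart(entity, world)) {
                behaviour.tick(entity, world);
                return;
            }
        }
    }
}\end{lstlisting}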
% Graphics Section
\section{3D Rendered Simulation}
One of the first things we did on the project was to bootstrap an OpenGL project and jump straight into 3D; adding an additional dimension to the simulation makes it easier to both see and appreciate the different interactions and behaviours between species.
\subsection{OpenGL: Nothing to Drawing}
Typically, the setup for an OpenGL context and the subsequent rendering is as follows: configure any native bindings we need (initialise LWJGL \cite{LWJGL}, the library we are using to pull in native bindings), create a new Window and an OpenGL context within it, compile the required shader programs, upload meshes and textures to the GPU ahead of rendering, and then drop into a permanent loop which only breaks once the Window requests to close or we catch a user action that asks to close.
\subsubsection{Writing and Compiling Shaders}
All of the shaders we've had to write are written in GLSL \cite{GLSL}, a rather straightforward language for working with the GPU. Vertices are first processed in the aptly named ``vertex shader'', where we apply transformation matrices and get everything ready to draw, then in the ``fragment shader'', where we figure out how each pixel should be coloured.
\subsubsection{Camera and View Projection}
For the camera, we decided to just have simple zoom and grab. The eye position is calculated by first finding the distance and height from the point we are looking at.
We use $zoom$ to denote the linear zoom factor, $ground$ to denote the angle between the tangent and the water plane, and we use $view$ to denote the angle rotation around the centre point.
\[
\begin{array}{l}
tangent = 5 + 1.1^{zoom} \\
distance = tangent \cdot \cos(ground) \\
height = tangent \cdot \sin(ground)
\end{array}
\]
Then we find the actual eye position by taking an offset from the position that we are looking at and factoring in the rotation around the centre point.
\[
\vec{eye} = (x + distance \cdot \cos(view),\; y + height,\; z + distance \cdot \sin(view))
\]
Afterwards, we leverage JOML \cite{JOML}, a linear algebra library for Java, to create a ViewProjection matrix. We specify a perspective with a roughly $75^\circ$ field of view, a near plane of $0.01$ and a far plane of $1000.0$. The camera is positioned at $\vec{eye}$ and looks at the centre.
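Put together with JOML, the camera calculation might look roughly like the sketch below; only the formulas and the perspective values come from the text, the class and parameter names are illustrative.
\begin{lstlisting}
import org.joml.Matrix4f;
import org.joml.Vector3f;

public final class CameraMath {
    // zoom, ground and view follow the variable names used in the text;
    // the angles are assumed to be in radians.
    public static Matrix4f viewProjection(float zoom, float ground, float view,
                                          Vector3f centre, float aspect) {
        float tangent = 5.0f + (float) Math.pow(1.1, zoom);  // eye-to-centre distance
        float distance = tangent * (float) Math.cos(ground); // horizontal distance
        float height = tangent * (float) Math.sin(ground);   // vertical offset

        Vector3f eye = new Vector3f(
                centre.x + distance * (float) Math.cos(view),
                centre.y + height,
                centre.z + distance * (float) Math.sin(view));

        return new Matrix4f()
                .perspective((float) Math.toRadians(75.0), aspect, 0.01f, 1000.0f)
                .lookAt(eye, centre, new Vector3f(0.0f, 1.0f, 0.0f)); // y is up
    }
}\end{lstlisting}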
\subsubsection{Uniforms and the MVP}
Given the ViewProjection matrix, we can multiply it by our Model transformation matrix to find the modelViewProjection; this is then multiplied by the per-vertex coordinates to give the clip-space coordinates used for rendering.
Matrices are uploaded to the shader through the use of uniforms (memory locations that the shader can access). This is quite straightforward, but one key optimisation is that we cache the location of each uniform name per shader, so the look-up can be skipped every time we render.
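A minimal sketch of such a cache with LWJGL and JOML follows; the class structure is illustrative, only the idea of caching the result of \codeword{glGetUniformLocation} per shader comes from the text.
\begin{lstlisting}
import static org.lwjgl.opengl.GL20.*;

import java.nio.FloatBuffer;
import java.util.HashMap;
import java.util.Map;

import org.joml.Matrix4f;
import org.lwjgl.BufferUtils;

public final class UniformCache {
    private final int program; // compiled shader program handle
    private final Map<String, Integer> locations = new HashMap<>();
    private final FloatBuffer matrixBuffer = BufferUtils.createFloatBuffer(16);

    public UniformCache(int program) {
        this.program = program;
    }

    private int location(String name) {
        // Look each uniform up once, reuse the cached location afterwards.
        return locations.computeIfAbsent(name,
                n -> glGetUniformLocation(program, n));
    }

    public void setMatrix(String name, Matrix4f matrix) {
        matrix.get(matrixBuffer); // column-major floats
        glUniformMatrix4fv(location(name), false, matrixBuffer);
    }
}\end{lstlisting}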
\subsubsection{Lighting the Scene}
For this project, we chose to implement only simple directional lighting, which is used to represent the day-night cycle. The light has three properties, each of which is exposed and can be modified on the fly during the simulation: Light Direction/Position, Light Ambient and Light Diffuse. We use a subset of the Phong lighting model (see \autoref{fig:phong}), but chose not to implement specular lighting.
To get started, we first have to make sure we have all of the data we need from the vertex shader:
\begin{lstlisting}
uniform mat4 model; // model-to-world transform
// vertex position in world space
fragPos = vec3(model *
    vec4(vertexPos, 1.0));
// inverse-transpose keeps normals correct
// under non-uniform scaling
fragNormal = vertexNormal *
    mat3(transpose(inverse(model)));\end{lstlisting}
Here we transform the vertex into world space and also transform the vertex normal. We would run into a minor issue if we simply used the model matrix for the normal (see \autoref{fig:normals}): we must use the inverse-transpose matrix to preserve the correct magnitude and direction of the normal post-transformation; an article by Lighthouse3D explains why this is necessary \cite{lighthouse3d}.
Then we can calculate the lighting in the fragment shader. We start off by finding the ambient light on the object, which is simple as we just pass it through:
\begin{lstlisting}
vec3 ambient = light.ambient;\end{lstlisting}
Next we prepare the light direction and normal vectors for processing: we normalise the normal vector we calculated earlier and then determine the light direction (which can be either an absolute position or a relative direction):
\begin{lstlisting}
vec3 lightDir;
if (light.dir.w == 0.0) {
lightDir = normalize(
light.dir.xyz - fragPos);
} else {
lightDir = normalize(-light.dir.xyz);
}\end{lstlisting}
We can now find the dot product of the normal vector and the light direction; this gives us the strength of the diffuse lighting at that point.
\begin{lstlisting}
// norm is the normalised fragNormal
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * light.diffuse;\end{lstlisting}
Finally, we combine all of the lighting terms:
\begin{lstlisting}
return objectColour *
vec4(ambient + diffuse, 1.0);\end{lstlisting}
\subsection{Uploading and Rendering}
\subsubsection{Mesh to GPU}
To draw actual objects to the screen, we first have to prepare and upload meshes. We begin by creating a Vertex Array Object \cite{vao}, which stores all of the information we provide so that it is ready to render later. Each attribute array (vertices, normals, UVs, etc.) is given its own Vertex Buffer Object \cite{vbo} to which it is uploaded.
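A rough sketch of this upload path with LWJGL is shown below; the two-attribute layout and the attribute indices are assumptions made for illustration.
\begin{lstlisting}
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

public final class MeshUpload {
    // Uploads two attribute arrays and records the layout in a VAO.
    public static int upload(float[] positions, float[] normals) {
        int vao = glGenVertexArrays();
        glBindVertexArray(vao); // subsequent state is recorded in the VAO

        int positionVbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
        glBufferData(GL_ARRAY_BUFFER, positions, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0); // attribute 0: position
        glEnableVertexAttribArray(0);

        int normalVbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
        glBufferData(GL_ARRAY_BUFFER, normals, GL_STATIC_DRAW);
        glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0); // attribute 1: normal
        glEnableVertexAttribArray(1);

        glBindVertexArray(0); // done recording
        return vao;
    }
}\end{lstlisting}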
\subsubsection{Textures to GPU}
OpenGL makes consuming textures very easy: we first load the texture we want using STB Image \cite{stb}, which is included in LWJGL, then configure the texture properties such as alignment and scaling before uploading the buffer we loaded as-is. Two key details to look out for were that textures were always bound to the TEXTURE0 slot, since we didn't need anything more for each mesh, and that OpenGL UV coordinates start from the bottom left, which means we flipped all texture resources upside down before loading them (see \autoref{fig:uvmapping}).
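The load-and-upload path might look like the sketch below, using LWJGL's STB bindings; the filtering choices and the use of STB's flip flag (rather than pre-flipping the image files) are illustrative.
\begin{lstlisting}
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.*;

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.stb.STBImage;

public final class TextureUpload {
    public static int load(String path) {
        IntBuffer width = BufferUtils.createIntBuffer(1);
        IntBuffer height = BufferUtils.createIntBuffer(1);
        IntBuffer channels = BufferUtils.createIntBuffer(1);

        // OpenGL UVs start at the bottom left, so flip the image on load.
        STBImage.stbi_set_flip_vertically_on_load(true);
        ByteBuffer pixels = STBImage.stbi_load(path, width, height, channels, 4);
        if (pixels == null) {
            throw new IllegalStateException("Failed to load " + path);
        }

        int texture = glGenTextures();
        glActiveTexture(GL_TEXTURE0); // single texture slot per mesh
        glBindTexture(GL_TEXTURE_2D, texture);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed RGBA rows
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width.get(0), height.get(0),
                0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        STBImage.stbi_image_free(pixels);
        return texture;
    }
}\end{lstlisting}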
\subsection{Optimising Render Pipeline}
Within our simulation, we quite often have to render hundreds to thousands of objects at a time, so we had to make sure that the simulation could still run on lower-end systems while delivering somewhat smooth performance, or at the very least allowing the user to easily look around while the simulation is paused.
\subsubsection{Face Culling}
One of the easiest optimisations, provided the meshes conform, is to tell the GPU to simply ignore any faces that would not normally be visible. This uses a neat trick: vertices within each face are ordered in a consistent clockwise or anti-clockwise manner, so if the GPU detects the back-facing winding it can simply skip drawing that face. In many cases this roughly halves the number of faces that have to be drawn, since models are usually fairly uniform in how many faces point each way.
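Enabling this in LWJGL amounts to a few lines of render-state setup, sketched below; the winding order chosen here is an assumption and must match how the meshes were exported.
\begin{lstlisting}
import static org.lwjgl.opengl.GL11.*;

public final class CullingSetup {
    public static void enableBackFaceCulling() {
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);  // skip faces pointing away from the camera
        glFrontFace(GL_CCW);  // counter-clockwise winding marks a front face
    }
}\end{lstlisting}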
\subsubsection{Indexing}
Indexing is a technique for building meshes where we upload all of the individual vertex data points once and then create an Element Array Buffer, which is a list of integers. When drawing triangles, three integers are taken at a time, each treated as an index into the attribute arrays; together they define one triangle face.
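A minimal sketch of the element buffer and the matching draw call follows; it assumes the mesh's VAO is already bound.
\begin{lstlisting}
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

public final class IndexedDraw {
    // Upload the index list into an element buffer (the mesh's VAO must be bound).
    public static int uploadIndices(int[] indices) {
        int ebo = glGenBuffers();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);
        return ebo;
    }

    // Three consecutive indices are taken per triangle face.
    public static void draw(int indexCount) {
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
    }
}\end{lstlisting}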
\subsubsection{Instancing}
Another technique to reduce CPU time and let the GPU work in parallel is to send, at once, all the transformation matrices required to draw any number of uniquely positioned meshes. On modern hardware (supporting OpenGL 4.3+), which does not include Macs (Intel or M1), we can take advantage of Shader Storage Buffer Objects \cite{ssbo}. SSBOs allow us to upload a large amount of data to the shader, where it can be consumed much like a normal uniform. If the feature is present on the machine \cite{extension-list}, the shader code is modified at runtime to add support for SSBOs. Once enabled, we can use glDrawElementsInstanced instead of individually drawing the mesh at each transformation. This has a noticeable effect on devices that support the feature.
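The general shape of an SSBO-backed instanced draw is sketched below with LWJGL and JOML; the binding index, the matching storage block in the shader, and the choice to rebuild the buffer on every call are illustrative simplifications.
\begin{lstlisting}
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL31.*;
import static org.lwjgl.opengl.GL43.*;

import java.nio.FloatBuffer;
import java.util.List;

import org.joml.Matrix4f;
import org.lwjgl.BufferUtils;

public final class InstancedDraw {
    public static void draw(List<Matrix4f> transforms, int indexCount,
                            int bindingIndex) {
        // Pack every instance transform into one buffer, column-major.
        FloatBuffer data = BufferUtils.createFloatBuffer(transforms.size() * 16);
        for (int i = 0; i < transforms.size(); i++) {
            transforms.get(i).get(i * 16, data);
        }

        // A real renderer would reuse this buffer instead of recreating it.
        int ssbo = glGenBuffers();
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
        glBufferData(GL_SHADER_STORAGE_BUFFER, data, GL_DYNAMIC_DRAW);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, bindingIndex, ssbo);

        // One call draws every instance; the shader indexes the SSBO by gl_InstanceID.
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0,
                transforms.size());

        glDeleteBuffers(ssbo);
    }
}\end{lstlisting}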
\subsubsection{Mesh Optimisation}
One of the biggest bottlenecks on lower-end hardware is simply the sheer number of faces and vertices we want to render. To combat this, we pre-processed all of the meshes we use in Blender with the Decimate filter (see \autoref{fig:decimate}).
\subsubsection{Dynamic Level of Detail}
Continuing from the previous section: for the high level of detail (camera nearby) we aimed for about 1--2k faces per object, then halved that again for the medium and low levels of detail. Before performing a render pass over entities, we first calculate the distance between the camera's eye position (calculated earlier) and the entity's current location, then resolve this to a suitable level of detail: high, medium, low, or do not render at all.
\[
\begin{array}{l}
        d = \left\lVert \vec{eye} - \vec{model} \right\rVert \\
LoD = \begin{cases}
\text{High}, &d \leq 20 \\
\text{Medium}, &d \leq 70 \\
\text{Low}, &d \leq 250 \\
\text{Do Not Render}, &d > 250
\end{cases}
\end{array}
\]
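Resolving the level of detail from this distance is a simple mapping, sketched below with JOML for the vector maths; the enum and method names are illustrative.
\begin{lstlisting}
import org.joml.Vector3f;

public final class LevelOfDetail {
    public enum Detail { HIGH, MEDIUM, LOW, SKIP }

    // Thresholds match the distances listed above.
    public static Detail resolve(Vector3f eye, Vector3f model) {
        float d = eye.distance(model);
        if (d <= 20.0f) return Detail.HIGH;
        if (d <= 70.0f) return Detail.MEDIUM;
        if (d <= 250.0f) return Detail.LOW;
        return Detail.SKIP; // too far away to be worth drawing
    }
}\end{lstlisting}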
% other provided tasks that we haven't done:
% \subsection{Weather}
% \subsection{Disease}
\section{World Generation}
The simulation takes place on a procedurally-generated world. This allows for different starting conditions and therefore different results for each run.
Generation of is done in four stages:
\subsection{Biome Generation}
We use FastNoiseLite \cite{FastNoiseLite} to create noise and hence a randomised biome map; a configuration sketch follows the parameter list below. This affects the ground colour and what entities may spawn later on. We use Cellular noise with the parameters:
\begin{itemize}
\setlength\itemsep{0.01em}
\item $frequency = 0.015$
\item Hybrid Cellular Distance Function
\item Cell Value Return Type
\end{itemize}
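The sketch below shows how this configuration might look with the FastNoiseLite Java port bundled with the project (method and enum names as in that library); the seed and the wrapper class are illustrative, and the heightmap noise is configured analogously with its fractal parameters.
\begin{lstlisting}
public final class BiomeNoise {
    // The seed is an arbitrary example value.
    private final FastNoiseLite noise = new FastNoiseLite(1337);

    public BiomeNoise() {
        noise.SetNoiseType(FastNoiseLite.NoiseType.Cellular);
        noise.SetFrequency(0.015f);
        noise.SetCellularDistanceFunction(
                FastNoiseLite.CellularDistanceFunction.Hybrid);
        noise.SetCellularReturnType(
                FastNoiseLite.CellularReturnType.CellValue);
    }

    // Stable value in [-1, 1] per tile, later mapped to a biome.
    public float sample(int x, int z) {
        return noise.GetNoise(x, z);
    }
}\end{lstlisting}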
\subsection{Heightmap}
A second noise is used to create the heightmap. This creates variation in the vertical level of the ground, creating an interesting terrain with high and low areas. Areas under a certain height level are considered to be underwater; underwater areas have a different ecosystem. Noise parameters:
\begin{itemize}
\setlength\itemsep{0.01em}
\item OpenSimplex2S
\item $frequency = 0.05$
\item FBm Fractal Type
\item $octaves_f = 4$
\item $lacunarity_f = 0.6$
\item $gain_f = 1.8$
\item $weighted\_strength_f = -0.4$
\end{itemize}
The heightmap is adjusted so that areas outside a circle around the centre of the world are smoothly pulled down to sea level. This creates the effect of an island-shaped world.
To do this, let $height_i$ be the initial height, $height_t$ be the target (sea-floor) height, $x$ and $z$ be the absolute coordinates on the grid, $width$ and $depth$ represent the size of the world, $dist_b$ be the distance at which beaches start, and $dist_s$ be the distance at which the ground level is fully down to the sea.
\[
    \begin{aligned}
        x_a &= x - \frac{width}{2} \\
        z_a &= z - \frac{depth}{2} \\
        dist_c &= \sqrt{x_a^2 + z_a^2} \\
        h &= \begin{cases}
            height_i, &dist_c < dist_b \\
            height_t, &dist_c \ge dist_s
        \end{cases} \\
        t &= \frac{dist_c - dist_b}{dist_s - dist_b} \\
        factor &= \begin{cases}
            4 \cdot t^3, &t < 0.5 \\
            1 - \frac{(-2 \cdot t + 2)^3}{2}, &t \ge 0.5
        \end{cases}
    \end{aligned}
\]
Otherwise (for $dist_b \le dist_c < dist_s$), we find the final height with a simple interpolation:
\[
    h = (1 - factor) \cdot height_i + factor \cdot height_t
\]
\subsection{Offsets}
Each position on the world grid is then given a random offset and rotation. These are applied to entities in each grid position later, when rendering, breaking up the monotony of the grid and making it look more natural.
\subsection{Entity Spawning}
Finally, entities are spawned into the world. Each entity type is assigned a set of parameters for spawning, such as biome restrictions, chance to spawn per grid tile, and land/water restrictions. These are used to determine what entities should be spawned.
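A minimal sketch of what such a parameter set could look like is given below; the field names are assumptions made for illustration.
\begin{lstlisting}
import java.util.Set;

// Illustrative spawn parameters for one species; field names are assumptions.
public final class SpawnParameters {
    public final Set<String> allowedBiomes; // biome restriction
    public final double chancePerTile;      // spawn chance per grid tile
    public final boolean aquatic;           // land/water restriction

    public SpawnParameters(Set<String> allowedBiomes,
                           double chancePerTile,
                           boolean aquatic) {
        this.allowedBiomes = allowedBiomes;
        this.chancePerTile = chancePerTile;
        this.aquatic = aquatic;
    }
}\end{lstlisting}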
\end{multicols}
\newpage
\begin{thebibliography}{9}
\bibitem{BlueJ}
BlueJ \\
\url{https://bluej.org/}
\bibitem{LWJGL}
Lightweight Java Game Library \\
\url{https://www.lwjgl.org/}
\bibitem{GLSL}
OpenGL Shading Language \\
\url{https://www.khronos.org/opengl/wiki/OpenGL_Shading_Language}
\bibitem{JOML}
Java OpenGL Math Library \\
\url{https://github.com/JOML-CI/JOML}
\bibitem{learnopengl-lighting}
LearnOpenGL - Basic Lighting \\
\url{https://learnopengl.com/Lighting/Basic-Lighting}
\bibitem{lighthouse3d}
The Normal Matrix - Lighthouse3D \\
\url{https://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/}
\bibitem{vao}
Vertex Specification - OpenGL Wiki \#Vertex Array Object \\
\url{https://www.khronos.org/opengl/wiki/Vertex_Specification#Vertex_Array_Object}
\bibitem{vbo}
Vertex Specification - OpenGL Wiki \#Vertex Buffer Object \\
\url{https://www.khronos.org/opengl/wiki/Vertex_Specification#Vertex_Buffer_Object}
\bibitem{opengl-tutorial-textured-cube}
A Textured Cube - OpenGL Tutorial \\
\url{https://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/}
\bibitem{stb}
org.lwjgl.stb (LWJGL 3.3.1) \\
\url{https://javadoc.lwjgl.org/org/lwjgl/stb/package-summary.html}
\bibitem{ssbo}
Shader Storage Buffer Object - OpenGL Wiki \\
\url{https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object}
\bibitem{extension-list}
Khronos OpenGL® Registry - The Khronos Group Inc \\
\url{https://www.khronos.org/registry/OpenGL/index_gl.php}
\bibitem{FastNoiseLite}
FastNoiseLite - Fast Portable Noise Library \\
\url{https://github.com/Auburn/FastNoiseLite}
\textbf{\\ References in code.}
\bibitem{code-ref-1}
\codeword{Util.java} OpenGL Wiki. Calculating a Surface Normal \# Pseudo Code \\
\url{https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal#Pseudo-code}
\bibitem{code-ref-2}
\codeword{FlowLayout.java} MDN. Basic concepts of flexbox \\
\url{https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox}
\textbf{\\ Libraries used:}
\bibitem{library-1}
\cmd{org.lwjgl:lwjgl} \url{https://www.lwjgl.org/}
\bibitem{library-2}
\cmd{org.joml:joml} \url{https://github.com/JOML-CI/JOML}
\bibitem{library-3}
\cmd{de.javagl:obj} \url{https://github.com/javagl/Obj}
\bibitem{library-4}
\cmd{lib/FastNoiseLite.java} \url{https://github.com/Auburn/FastNoiseLite}
\textbf{\\ Resources used:}
% Models used:
% Bird
\bibitem{model-bird}
Sketchfab. Bird - Scarlet Tanager (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/bird-scarlet-tanager-fdee35447e1f45c490af19036f57c36e}
% Bunny
\bibitem{model-bunny}
Sketchfab. Bunny (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/bunny-4cc18d8e0552459b8897948b81cb20ad}
% Falcon
\bibitem{model-falcon}
Sketchfab. Fraiser (Falcon) (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/fraiser-234a8576b3d0409aab8545c72ba7e1db}
% Ferret
\bibitem{model-ferret}
Sketchfab. Ferret Fbx (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/ferret-fbx-7897b4c642f7429e873b08c790717c19}
% Fish
\bibitem{model-fish}
Sketchfab. Coral Fish (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/coral-fish-ea8d002da75a4dd09658b962722279c5}
% Grass
\bibitem{model-grass}
Sketchfab. Grass (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/grass-7ebe6950dd4446babb31e3905b3b30d2}
% Kelp
\bibitem{model-kelp}
Sketchfab. Kelp (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/kelp-83202894d3f64a129d7fdeefc044aed8}
% Pine
\bibitem{model-trees}
Sketchfab. Low poly trees (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/low-poly-trees-51cae4a194344e8bbfbd0a4cff205f76}
% Raccoon
\bibitem{model-raccoon}
Sketchfab. Raccoon (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/raccoon-d9ecbe0e31a94e5c80aa3b567ec797ec}
% Snake
\bibitem{model-snake}
Sketchfab. Snake (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/snake-c8ce54d20c8e4169b0a8f0975e90f254}
% Among Us
\bibitem{model-among-us}
Sketchfab. Among Us Astronaut - Clay (licensed under CC BY 4.0) \\
\url{https://sketchfab.com/3d-models/among-us-astronaut-clay-20b591de51eb4fc3a4c5a4d40c6011d5}
% Fonts used:
\bibitem{retro-font}
OpenGameArt. 8x8 Font - Chomp's Wacky Worlds Beta (licensed under CC0) \\
\url{https://opengameart.org/content/8x8-font-chomps-wacky-worlds-beta}
% Textures used:
\bibitem{wave-emoji}
Mutant Remix. Wave Emoji (licensed under CC BY-NC-SA 4.0) \\
\url{https://mutant.revolt.chat}
\bibitem{play-pause-icons}
Boxicons. Play / Pause Icons (MIT License) \\
\url{https://boxicons.com/}
\bibitem{weather-icons}
Freeicons. Weather Iconset by Oscar EntMont (no restrictions) \\
\url{https://freeicons.io/icon-list/weather-2}
\bibitem{grass-texture}
Pixel-Furnace. Grass (no restrictions) \\
\url{https://textures.pixel-furnace.com/texture?name=Grass}
\bibitem{water-texture}
Unsplash. Water (\href{https://unsplash.com/license}{Unsplash License}) \\
\url{https://unsplash.com/photos/6k6MEvIncH4}
\end{thebibliography}
\newpage
% Appendix
\captionsetup{justification=centering,margin=3cm}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/world1.jpg}
\caption{World Shot 1 / 3} \label{fig:screenshot-world-1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/world2.jpg}
\caption{World Shot 2 / 3} \label{fig:screenshot-world-2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/world3.jpg}
\caption{World Shot 3 / 3} \label{fig:screenshot-world-3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/close1.jpg}
\caption{Close Shot 1 / 3} \label{fig:screenshot-close-1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/close2.jpg}
\caption{Close Shot 2 / 3} \label{fig:screenshot-close-2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/close3.jpg}
\caption{Close Shot 3 / 3} \label{fig:screenshot-close-3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/ui-welcome.png}
\caption{UI: Welcome Screen} \label{fig:screenshot-ui-welcome}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{../screenshots/ui-help.png}
\caption{UI: Help Menu} \label{fig:screenshot-ui-help}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/bluej.png}
\caption{Simulation running in BlueJ \cite{BlueJ}} \label{fig:bluej}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/phong.png}
\caption{Phong Lighting Model \cite{learnopengl-lighting}} \label{fig:phong}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/transforming_normals.png}
\caption{Transforming Normal Vectors \cite{learnopengl-lighting}} \label{fig:normals}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/opengluv.png}
\caption{OpenGL UV mapping \cite{opengl-tutorial-textured-cube}} \label{fig:uvmapping}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/decimate.png}
\caption{Decimate Filter in Blender} \label{fig:decimate}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/bird.png}
\caption{Bird Model \cite{model-bird}} \label{fig:bird-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/bunny.png}
\caption{Bunny Model \cite{model-bunny}} \label{fig:bunny-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/falcon.png}
\caption{Falcon Model \cite{model-falcon}} \label{fig:falcon-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/ferret.png}
\caption{Ferret Model \cite{model-ferret}} \label{fig:ferret-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/fish.png}
\caption{Fish Model \cite{model-fish}} \label{fig:fish-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/grass.png}
\caption{Grass Model \cite{model-grass}} \label{fig:grass-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/kelp.png}
\caption{Kelp Model \cite{model-kelp}} \label{fig:kelp-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/trees.png}
\caption{Trees Model \cite{model-trees}} \label{fig:trees-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/raccoon.png}
\caption{Raccoon Model \cite{model-raccoon}} \label{fig:raccoon-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/snake.png}
\caption{Snake Model \cite{model-snake}} \label{fig:snake-model}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/models/among_us.png}
\caption{Among Us Model \cite{model-among-us}} \label{fig:among-us-model}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.6574681351,
"avg_line_length": 51.4713375796,
"ext": "tex",
"hexsha": "65cc2fd4baf86e45da627a1c9bbcf7abb7374bb8",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "faada92d55599f2dfb867c68db63c4debbdf5e6e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "insertish/ppa-cw3",
"max_forks_repo_path": "report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "faada92d55599f2dfb867c68db63c4debbdf5e6e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "insertish/ppa-cw3",
"max_issues_repo_path": "report/report.tex",
"max_line_length": 845,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "faada92d55599f2dfb867c68db63c4debbdf5e6e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "insertish/ppa-cw3",
"max_stars_repo_path": "report/report.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-03T12:10:55.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-03T12:10:55.000Z",
"num_tokens": 8024,
"size": 32324
} |
\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
\section*{History \& context}
\addcontentsline{toc}{section}{History \& context}
According to the annals of history, the term \textit{computational neuroscience} was coined by Eric L. Schwartz in 1985 when he organized a conference\footnote{\citep{schwartz1990computational}} to provide a summary of the current status of a field at that time known under a variety of names such as neural modeling, brain theory, and neural networks. The central idea of computational neuroscience, to study the brain through mathematical models, is much older than that, however. The first modern traces can be found as early as 1907, when Louis Lapicque introduced\footnote{\citep{Lapicque1907}} the integrate and fire model. While crude, the model managed to describe the observed interactions of a nerve fiber long before the exact mechanisms responsible for the generation of neuron action potentials were discovered. In doing so, it greatly contributed to our understanding of the brain and laid a strong foundation for further research.
One of the main approaches in computational neuroscience goes under the name \textit{system identification}: first, through experimentation one obtains a large dataset of input and output pairs of a given neural system -- e.g. pairs of images and responses of neurons in primary visual cortex. A model is subsequently fitted to these data using machine learning methods, in the hope of identifying the computations that the neural system performs to translate its inputs into outputs.
Such methods effectively enable us to predict the response of a system to an arbitrary plausible input. Doing so is one of the best ways to test our understanding of such a system. While having a model that predicts the response accurately does not necessarily have to mean our understanding of the biological fundamentals is correct, it is a clear signal it is at least plausible. On the other hand, the opposite, when a model based on our knowledge is not accurate, is proof we should revisit our assumptions. Such an approach is particularly effective in early stages of sensory systems whose primary function is to produce a response encoding to a clearly defined input -- sensory stimulus.
Unsurprisingly, going as far back as the 1960s, visual neuroscience has been close to traditional image processing and has used its toolkit for computational modeling. The influence was not one-sided. Several ideas, such as convolution layers, inspired by biological observations\footnote{\citep{Lindsay_2020}}, have found their way back to classical computer vision. In recent years, the combination of advancements in deep learning and a higher volume of better data has caused an increased focus on deep learning inspired neural models and, at the same time, a slow decline of classical system identification approaches.
Deep learning inspired models are only a rough approximation of biological reality, not only at the level of single neuron biophysics but also in terms of overall network architecture. For example, deep neural network (DNN) architectures typically do not model interactions between neurons within a single layer or do not account for direct nonlinear interactions between neurons, such as those provided by neuromodulation or attentional mechanisms in the biological brain.
Regardless, classic DNNs are still immensely useful. Through the ability to fit their many parameters to real data, and access to an ever-increasing toolkit of high-performance methods borrowed from classical machine learning, they provide an effective means of approximating the stimulus-response function in many neural subsystems. And thanks to their abstraction level, rapid experimentation with higher-level concepts is also easier. Much like the integrate and fire model, which also abstracted over certain details, they can still inform our understanding of the nature of vision processing.
\section*{Motivation}
\addcontentsline{toc}{section}{Motivation}
Despite the success of DNNs in neuroscience, having outclassed classical system identification methods in most domains, there remains a substantial gap between the predictions these models give and the actual measured neural responses. This is true even in the most peripheral stages of visual processing such as primary visual cortex. Furthermore, the poor interpretability of the DNN models has been identified as a major challenge limiting their usefulness for understanding neural computation.
To address these issues, a recent model by \cite{antolik}, which will be the focus of the present thesis, explored the idea of incorporating more biologically plausible components into the DNN framework. It showed that such an approach leads to a model with fewer parameters and better interpretability, and that it outperformed, at the time of writing, other state-of-the-art approaches.
However, the \citeauthor{antolik} \citeyear{antolik} study also showed shortcomings. The model was very sensitive to random initialization, most hyper-parameters of the model were not thoroughly studied, and a more direct one-to-one comparison of the biologically inspired versus classical DNN components was missing. Furthermore, since its publication, several studies using classical DNN techniques have managed to demonstrate a slight improvement on the \citeauthor{antolik} data. Finally, the model has been implemented in an ad-hoc way in the now-discontinued ML framework Theano, which poses a problem for further exploration of the bio-inspired DNN architecture idea.
\section*{Goals}
\addcontentsline{toc}{section}{Goals}
The goal of this thesis is to address the outlined outstanding issues with the \citeauthor{antolik} study, in order to provide a stable, fine-tuned, and well-characterized implementation in a modern neuroscience-oriented DNN framework to enable future experimentation with this bio-inspired architecture. The following contributions are the goals of this thesis:
\begin{itemize}
\item \textbf{Assess the \citeauthor{antolik} model (and its variants) in terms of stability and sensitivity to hyperparameter finetuning:} Especially on low quantities of noisy data, which is common in the field of sensory neuroscience, models tend to be relatively unstable with respect to both hyperparameter fine-tuning and random initialization. We want to quantify these effects and hopefully explore ways to mitigate them, to ensure any conclusions drawn are not due to chance but rather to architectural properties.
\item \textbf{Evaluate novel architectures inspired by classical computer vision:} We want to compare the benefits of biologically restricted techniques with more computationally generic approaches from classical deep computer vision and investigate whether hard constraints could be replaced with well chosen regularization schemes. Furthermore, test the impact of several deep learning techniques that were shown to work on classical computer vision problems.
\item \textbf{Improve upon \citeauthor{antolik} model:} Decrease the gap between the original model and more recent less biologically constrained architectures that demonstrated improved performance.
\item \textbf{Contribute new functionality to the NDN3 library toolbox:} We contribute all tools developed to conduct experiments in this thesis upstream to the NDN3 framework or, where not applicable, publish them separately under open source license. The goal is to enable others to iterate on our findings and evaluate the results in a shared environment.
\item \textbf{(Re)implement and assess the \citeauthor{antolik} model and several of its variants within the NDN3 framework:} Similar to the previous point, implementing and analysing various models within a shared framework aims to enable rapid prototyping, and generally facilitate further research.
\item \textbf{Identify opportunities for further research:} Since our set of goals is relatively broad, we will not be able to dive deep into every encountered question. As a result, we want to identify opportunities for more focused further research.
\end{itemize}
This way, the present thesis represents a stepping stone that will accelerate the future long-term research program on bio-inspired DNN architectures for neuroscientific data underway in the \citeauthor{antolik} group.
\section*{Thesis structure}
\addcontentsline{toc}{section}{Thesis structure}
This thesis is divided into 4 parts. First, in \hyperref[ch:1]{chapter 1} we provide the theoretical background. We start with a high-level overview of the initial part of the visual processing system. Then, we introduce both computational neuroscience generally and the tools it uses for system identification tasks, as well as provide a brief summary of the deep learning techniques that will be relevant for our experiments. In \hyperref[ch:2]{chapter 2}, we introduce the \citeauthor{antolik} model and other especially relevant work in more detail. Second, in \hyperref[ch:3]{chapter 3} we describe the implementation of the additional methods necessary to realize our architectures and the experiments pipeline. Then, in \hyperref[ch:4]{chapter 4}, we introduce our methodology for both the model exploration and results analysis, and finish with a training regime overview.
\hyperref[ch:5]{Chapter 5} with experiments and their results follows. We start by reimplementing the \citeauthor{antolik} model. In the first section, we analyse it in terms of stability and sensitivity to training hyperparameters, attempting to get the best and most stable version possible. Then we test regularization on the fully connected layers, the impact of additional non-linearity, and input scaling. In the second section, we move towards larger architectural changes, testing elements from other state-of-the-art models and traditional deep computer vision, but also drawing inspiration from simpler tools of computational neuroscience. We conduct a comparison between various computationally similar architectures differing only by the explicitness of imposed regularization. In the third section we explore the effects of transfer learning by training one instance of a model on all of the dataset’s three separate subsets pooled together. Further, we train the best variants of architectures from previous sections and compare their results to assess the universality of their improvements.
Finally, the last \hyperref[ch:6]{chapter} offers a summary of our findings, provides lessons learned, and suggests ample opportunities for further research. | {
"alphanum_fraction": 0.8210744972,
"avg_line_length": 220.6458333333,
"ext": "tex",
"hexsha": "ab815fc4b587e30b5b221445e525407e2d2014c8",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-11-25T21:44:31.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-11-25T21:44:31.000Z",
"max_forks_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "petrroll/msc-thesis",
"max_forks_repo_path": "text/chapters/00_preface.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229",
"max_issues_repo_issues_event_max_datetime": "2020-11-26T21:02:36.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-11-26T12:37:50.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "petrroll/msc-thesis",
"max_issues_repo_path": "text/chapters/00_preface.tex",
"max_line_length": 1101,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "65219d1819f7d93f154bd2dc1484727a52a00229",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "petrroll/msc-thesis",
"max_stars_repo_path": "text/chapters/00_preface.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2056,
"size": 10591
} |
\subsection{Factors of Influence}
\begin{frame}{Factors of Influence}
\begin{itemize}[<+->]
\item language
\item location
\item social information: what your friends like
\end{itemize}
\end{frame}
\begin{frame}{The Filter Bubble}
\begin{center}
\href{http://dontbubble.us}{dontbubble.us}\\
\href{http://www.thefilterbubble.com/}{www.thefilterbubble.com}
\end{center}
\end{frame}
\framedgraphic{The Filter Bubble}{../images/bubble-1.png}
\framedgraphic{The Filter Bubble}{../images/bubble-2.png}
| {
"alphanum_fraction": 0.7299412916,
"avg_line_length": 26.8947368421,
"ext": "tex",
"hexsha": "81443bfdb55734b48c05d393fc588d7af5e0c366",
"lang": "TeX",
"max_forks_count": 400,
"max_forks_repo_forks_event_max_datetime": "2022-03-19T04:07:59.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-05T06:22:18.000Z",
"max_forks_repo_head_hexsha": "cd0d97f85fadb59b7c6e9062b37a8bf7d725ba0c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "DoubleL61/LaTeX-examples",
"max_forks_repo_path": "presentations/English/LaTeX/end.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "cd0d97f85fadb59b7c6e9062b37a8bf7d725ba0c",
"max_issues_repo_issues_event_max_datetime": "2021-05-02T21:28:49.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-05-10T13:10:47.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "DoubleL61/LaTeX-examples",
"max_issues_repo_path": "presentations/English/LaTeX/end.tex",
"max_line_length": 65,
"max_stars_count": 1231,
"max_stars_repo_head_hexsha": "a1bf9fe422969be1ca4674394ebd2170c07f7693",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RalfGuder/LaTeX-examples",
"max_stars_repo_path": "presentations/English/LaTeX/end.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T17:43:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-07T04:04:25.000Z",
"num_tokens": 161,
"size": 511
} |
\par
\subsection{{\tt PFV} : {\tt float *} vector methods}
\label{subsection:Utilities:proto:PFV}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
float ** PFVinit ( int n ) ;
\end{verbatim}
\index{PFVinit@{\tt PFVinit()}}
This is the allocator and initializer method for {\tt float*} vectors.
Storage for an array with size {\tt n} is found and each
entry is filled with {\tt NULL}.
A pointer to the array is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PFVfree ( float **p_vec ) ;
\end{verbatim}
\index{PFVfree@{\tt PFVfree()}}
This method releases the storage taken by {\tt p\_vec[]}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PFVcopy ( int n, float *p_y[], float *p_x[] ) ;
\end{verbatim}
\index{PFVcopy@{\tt PFVcopy()}}
This method copies {\tt n} entries from {\tt p\_x[]} to {\tt p\_y[]},
i.e.,
{\tt p\_y[i] = p\_x[i]} for {\tt 0 <= i < n}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PFVsetup ( int n, int sizes[], float vec[], float *p_vec[] ) ;
\end{verbatim}
\index{PFVsetup@{\tt PFVsetup()}}
This method sets the entries of {\tt p\_vec[]} as pointers into {\tt
vec[]} given by the {\tt sizes[]} vector,
i.e.,
{\tt p\_vec[0] = vec}, and
{\tt p\_vec[i] = p\_vec[i-1] + sizes[i-1]}
for {\tt 0 < i < n}.
%-----------------------------------------------------------------------
\end{enumerate}
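\par
A short toy program below illustrates how these methods work together.
The {\tt Utilities.h} include is an assumption about the header layout of
the distribution; substitute whichever header declares the {\tt PFV}
prototypes in your installation.
\begin{verbatim}
#include <stdio.h>
#include "Utilities.h"   /* assumed header for the PFV prototypes */

int main ( void ) {
   int     sizes[3] = { 2, 3, 1 } ;
   float   vec[6]   = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 } ;
   float   **p_vec ;
   int     i ;
/*
   allocate a float* vector with 3 entries (all NULL), then point
   its entries into vec[] according to sizes[], i.e.
   p_vec[0] = &vec[0], p_vec[1] = &vec[2], p_vec[2] = &vec[5]
*/
   p_vec = PFVinit(3) ;
   PFVsetup(3, sizes, vec, p_vec) ;
   for ( i = 0 ; i < 3 ; i++ ) {
      printf("\n p_vec[%d][0] = %f", i, p_vec[i][0]) ;
   }
/*
   release the pointer array only, vec[] itself is untouched
*/
   PFVfree(p_vec) ;
   return 0 ;
}
\end{verbatim}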
| {
"alphanum_fraction": 0.479409957,
"avg_line_length": 34.6170212766,
"ext": "tex",
"hexsha": "9d13886ef8513a1a6cb2bc29c8fc9b43d0ed78f5",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alleindrach/calculix-desktop",
"max_forks_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PFV.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alleindrach/calculix-desktop",
"max_issues_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PFV.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alleindrach/calculix-desktop",
"max_stars_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PFV.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 414,
"size": 1627
} |
\documentclass[12pt]{cdblatex}
\usepackage{examples}
\begin{document}
\section*{The Gauss relation for the curvature of a hypersurface}
\begin{cadabra}
{a,b,c,d,e,f,g,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices.
\nabla_{#}::Derivative.
K_{a b}::Symmetric.
g^{a}_{b}::KroneckerDelta.
# Define the projection operator
hab:=h^{a}_{b} -> g^{a}_{b} - n^{a} n_{b}.
# 3-covariant derivative obtained by projection on 4-covariant derivative
vpq:=v_{p q} -> h^{a}_{p} h^{b}_{q} \nabla_{b}{v_{a}}.
# Compute 3-curvature by commutation of covariant derivatives
vpqr:= h^{a}_{p} h^{b}_{q} h^{c}_{r} ( \nabla_{c}{v_{a b}} - \nabla_{b}{v_{a c}} ).
substitute (vpq,hab)
substitute (vpqr,vpq)
distribute (vpqr)
product_rule (vpqr)
distribute (vpqr)
eliminate_kronecker(vpqr)
# Standard substitutions
substitute (vpqr,$h^{a}_{b} n^{b} -> 0$)
substitute (vpqr,$h^{a}_{b} n_{a} -> 0$)
substitute (vpqr,$\nabla_{a}{g^{b}_{c}} -> 0$)
substitute (vpqr,$n^{a} \nabla_{b}{v_{a}} -> -v_{a} \nabla_{b}{n^{a}}$)
substitute (vpqr,$v_{a} \nabla_{b}{n^{a}} -> v_{p} h^{p}_{a}\nabla_{b}{n^{a}}$)
substitute (vpqr,$h^{p}_{a} h^{q}_{b} \nabla_{p}{n_{q}} -> K_{a b}$)
substitute (vpqr,$h^{p}_{a} h^{q}_{b} \nabla_{p}{n^{b}} -> K_{a}^{q}$)
# Tidy up and display the results
{h^{a}_{b},\nabla_{a}{v_{b}}}::SortOrder.
sort_product (vpqr)
rename_dummies (vpqr)
canonicalise (vpqr)
factor_out (vpqr,$h^{a?}_{b?}$)
factor_out (vpqr,$v_{a?}$) # cdb(gauss,vpqr)
\end{cadabra}
\subsection*{The Gauss relation for the curvature of a hypersurface}
\begin{align*}
D_{r}(D_{q}v_p) - D_{q}(D_{r}v_p) = \cdb{gauss}
\end{align*}
\vspace{15pt}
\begin{latex}
\begin{align*}
D_{r}(D_{q}v_p) - D_{q}(D_{r}v_p) = \cdb{gauss}
\end{align*}
\end{latex}
\end{document}
| {
"alphanum_fraction": 0.5899784483,
"avg_line_length": 25.7777777778,
"ext": "tex",
"hexsha": "4e60e88ff393e328417195f1cc152050f1666924",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-03-30T17:17:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-06-27T03:29:40.000Z",
"max_forks_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "leo-brewin/hybrid-latex",
"max_forks_repo_path": "cadabra/examples/example-06.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "leo-brewin/hybrid-latex",
"max_issues_repo_path": "cadabra/examples/example-06.tex",
"max_line_length": 86,
"max_stars_count": 16,
"max_stars_repo_head_hexsha": "2debaf3f97eb551928d08dc4baded7ef7a4ab29a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "leo-brewin/hybrid-latex",
"max_stars_repo_path": "cadabra/examples/example-06.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-31T23:16:08.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-12T06:31:49.000Z",
"num_tokens": 756,
"size": 1856
} |
\section{Appendices}
| {
"alphanum_fraction": 0.7222222222,
"avg_line_length": 6,
"ext": "tex",
"hexsha": "6306b75e69ea3d1cb99cf408ea372e3d2d12d1c2",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "84f402d23fdcb66681f0194db60d8cae3479d8c5",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "brenoec/cefetmg.msc.wolfram",
"max_forks_repo_path": "report/sections/8. appendices.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "84f402d23fdcb66681f0194db60d8cae3479d8c5",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "brenoec/cefetmg.msc.wolfram",
"max_issues_repo_path": "report/sections/8. appendices.tex",
"max_line_length": 16,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "84f402d23fdcb66681f0194db60d8cae3479d8c5",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "brenoec/cefetmg.msc.wolfram",
"max_stars_repo_path": "report/sections/8. appendices.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 8,
"size": 18
} |
\documentclass[]{article}
\title{Collaborative LaTeX Writing using Git and Github Actions}
\author{AUTHOR}
\usepackage{listings}
\usepackage[colorlinks,urlcolor=blue,linkcolor=blue,citecolor=blue]{hyperref}
\begin{document}
\maketitle
\section{Compiling Locally}
\subsection{Prerequisites}
Compiling locally is possible with any LaTeX distribution; the GitHub Actions workflow running on Ubuntu 20.04 uses the following packages (all installable via \texttt{apt}; an example installation command is given after the lists below):
\subsection*{Mandatory}
\begin{itemize}
\item \texttt{texlive-latex-recommended}
\item \texttt{texlive-latex-extra}
\end{itemize}
\subsection*{Optional}
\begin{itemize}
\item \texttt{texlive-latex-utils} \quad (for \texttt{texcount})
\item \texttt{rubber} \quad (for using the \texttt{Makefile})
\item \texttt{perl} \quad (for \texttt{texcount} and \texttt{latexdiff})
\item \texttt{latexdiff}
\end{itemize}
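On Ubuntu 20.04, the distribution used by the workflow, the packages above might be installed as follows (package names on other distributions may differ):
\begin{center}
\begin{lstlisting}[language=Bash]
$ sudo apt install texlive-latex-recommended texlive-latex-extra
$ sudo apt install texlive-latex-utils rubber perl latexdiff # optional
\end{lstlisting}
\end{center}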
\subsection{Makefile}
For convenience a Makefile is included which relies on the \href{https://gitlab.com/latex-rubber/rubber/}{rubber} LaTeX wrapper:
\begin{center}
\begin{lstlisting}[language=Bash,morekeywords={make}]
$ make # generate paper.pdf
$ make clean # cleanup
$ make spellcheck # run codespell
$ make count # run TexCount
$ make diff # run latexdiff with master
\end{lstlisting}
\end{center}
\end{document}
| {
"alphanum_fraction": 0.7416232316,
"avg_line_length": 24.8703703704,
"ext": "tex",
"hexsha": "a360cf82857bb44d2f76a87505681739b4ebf3a0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a15b34728c763f90eb71324244254225a0d0a71a",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "vanderhe/latex-github-collab",
"max_forks_repo_path": "tex/paper.tex",
"max_issues_count": 3,
"max_issues_repo_head_hexsha": "a15b34728c763f90eb71324244254225a0d0a71a",
"max_issues_repo_issues_event_max_datetime": "2022-02-25T21:40:35.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-02-25T17:15:43.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "vanderhe/latex-github-collab",
"max_issues_repo_path": "tex/paper.tex",
"max_line_length": 167,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a15b34728c763f90eb71324244254225a0d0a71a",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "vanderhe/latex-github-collab",
"max_stars_repo_path": "tex/paper.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 387,
"size": 1343
} |
\chapter{Equations and Algorithm}
\label{sec-figures}
\section{Equations and Algorithm}
Let us assume $\gamma_{ij}$ is the number of matched keypoints between two keyframes, $i$ and $j$. These matches yield a distinct scale difference $\sigma_{ij}$ depending on the number of matched keypoints $\gamma_{ij}$. The optimal scale difference $\sigma^*$ is then
\begin{equation}
\begin{aligned}
\sigma^* &= \operatorname*{argmax}_{\gamma} \frac{1}{2}|\gamma(\sigma_{ij}), \gamma(\sigma_{i'j'}) | ,
\end{aligned}
\label{eqa-1}
\end{equation}
where $\sigma_{ij} $ and $\sigma_{i'j'}$ are the two nearest points such that\\
\begin{equation}
\begin{split}
\quad |\sigma_{ij} - \sigma_{i'j'}| \leq \Delta^* \quad \forall \quad i, j, i' \text{ and } j' \in \mathds{Z}^+, \quad \Delta^* \in \mathbb{R}.
\end{split}
\label{eqa-2}
\end{equation}
The estimation is further explained in Algorithm \ref{algorithm-1}.\\
\begin{algorithm}[!h]
\SetKwInput{KwData}{Input}
\SetKwInput{KwResult}{Output}
\KwData{Matched keyframes $\mathbf{K}_{r_{ID}} = \{\mathbf{K}^i_{s},\mathbf{K}^j_{t}\}$, poses $^w\mathbf{T}_{r_{ID}} = \{^w\mathbf{T}_{s}, \, ^w\mathbf{T}_{t} \}$, point clouds $P_{r_{ID}}^\mathcal{F} = \{P^{\mathcal{F}_s^i}_{s(i)},P^{\mathcal{F}_t^j}_{t(j)}\}$ with $i,j \in \mathds{Z}^+$ }
\KwResult{Optimal scale difference $\sigma^*$, initial guess relative transformation $^{si}\mathbf{T}_{ti}^{IG}$ }
initialization\;
\For{$z = -1:1$} {
$ P_{s(i+z)}^{\mathcal{F}_w} = \,^w\mathbf{T}_{s(i+z)} (P^{\mathcal{F}_{s(i+z)}}_{s(i+z)}) $ \;
$ P_{t(j+z)}^{\mathcal{F}_w} = \,^w\mathbf{T}_{t(j+z)} (P^{\mathcal{F}_{t(j+z)}}_{t(j+z)}) $ \;
\SetKwFunction{FMain}{ PCR-Pro \cite{Bhutta2018}}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$\mathbf{K}_{r_{ID}}$,$P_{r_{ID}}^{\mathcal{F}_w}$}}{
Estimate volume ratio $r_{vol}$ of $P_{s(i+z)}^{\mathcal{F}_w}, P_{t(j+z)}^{\mathcal{F}_w}$ \;
$^{s(i+z)}\mathbf{T}_{t(j+z)}^{RC} \longleftarrow \gamma_z \longleftarrow \mathbf{K}^{i+z}_{s},\mathbf{K}^{j+z}_{t} $ \;
$ \sigma_z \longleftarrow ^{s(i+z)}\mathbf{T}_{t(j+z)}^{RC} , \gamma_z, ^w\mathbf{T}_{s(i+z)} , ^w\mathbf{T}_{t(j+z)} $\;
$^{s(i+z)}\mathbf{T}_{t(j+z)}^{IG} \longleftarrow \sigma_z, P_{s(i+z)}^{\mathcal{F}_w},P_{t(j+z)}^{\mathcal{F}_w} $ \;
\KwRet $\sigma_z, ^{s(i+z)}\mathbf{T}_{t(j+z)}^{IG} $ \; }
}
\eIf{$r_{vol} > 0.5$}{
$\Delta^* = 5 $\;
\For{$x = -1: 1$}{
\For{$y = -1: 1$}{
\If{$x \neq y$}{
$ \Delta = |\sigma_x - \sigma_y| $\;
\If{$\gamma^* < \gamma_{xy}$ \&\& $\Delta^* > \Delta$ \&\& $\Delta^* \neq 0$ }{
$\sigma^* = avg(\sigma_x,\sigma_y)$ \;
$\Delta^* = \Delta$ \;
$\gamma^* = \gamma_{xy}$ \;
}
}
}
}
}{ $\sigma^* =\sigma_{xy=00} $ \; }
\caption{Finest Tuning for Optimal Scale Estimation}
\label{algorithm-1}
\end{algorithm}
\newpage
\section{Appendix}
\newpage
\renewcommand*{\bibname}{\section{References}}
\bibliographystyle{ieeetr}
\bibliography{Thesis}
| {
"alphanum_fraction": 0.5902636917,
"avg_line_length": 44.8181818182,
"ext": "tex",
"hexsha": "0879968187703f4d62d8b423792d98ebb6942b01",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "eb7beb0d00cd1a0ac04fda6c7c378bb6ebd4f29c",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "UsmanMaqbool/hkust-phd-mphil-thesis",
"max_forks_repo_path": "chapter4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "eb7beb0d00cd1a0ac04fda6c7c378bb6ebd4f29c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "UsmanMaqbool/hkust-phd-mphil-thesis",
"max_issues_repo_path": "chapter4.tex",
"max_line_length": 294,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "eb7beb0d00cd1a0ac04fda6c7c378bb6ebd4f29c",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "UsmanMaqbool/hkust-phd-mphil-thesis",
"max_stars_repo_path": "chapter4.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-17T03:10:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-12-17T03:10:07.000Z",
"num_tokens": 1250,
"size": 2958
} |
\chapter{Extending Peptizer}
%
\section{Introduction}
\npar Upon inspecting MS/MS identification results, a critical scientist has certain \textbf{expectations}. Some of these are automatically used by the database search algorithm to identify an MS/MS spectrum while others are not applicable on a large scale and therefore not used. To the first group belong mass differences between fragment ions, as these are general and informative for the peptide sequence. Database search algorithms commonly use these as \textbf{robust features} to interpret an MS/MS spectrum. To the second group belong less general features. For example, it is known that proline-containing peptides are more prone to fragment N-terminally to proline upon CID. But since not every peptide contains a proline, database search algorithms cannot use this as a general robust feature. Hence, this type of information can still be used at a \textbf{complementary} level alongside the database search algorithm.
\npar The Peptizer platform does just this. If a critical scientist has certain expectations for his or her MS/MS results, these expectations can be translated into \textbf{custom Agents}. Moreover, the methodology to group these expectations can be translated into a custom \textbf{AgentAggregator}. Both suggest extending Peptizer \textbf{to automate the inspection of a critical scientist's expectations}.
\npar This chapter explains how to extend Peptizer by creating custom Agents and AgentAggregators.
%
\section{Overview}
\npar Both the Agent and the AgentAggregator have a basic \textbf{input/output} operation. This is illustrated by the figures below.
%%%%%%
\begin{figure}[H]
\begin{center}
\includegraphics[width=.85\textwidth]{agent_concept}
\caption{\label{agent_concept} The basic input/output structure of an Agent. As input, the Agent receives a PeptideIdentification object that consists of a single MS/MS spectrum and a number of peptide hypotheses suggested for that spectrum. After this input has been processed by an Agent, a vote is cast as output. This vote reflects the Agent's opinion on whether or not to select the given peptide identification.}
\end{center}
\end{figure}
%
%%%%%%
\begin{figure}[H]
\begin{center}
\includegraphics[width=.85\textwidth]{agentaggregator_concept}
\caption{\label{agentaggregator_concept} The basic input/output structure of an AgentAggregator. As input, the AgentAggregator receives a collection of Agent votes. Therein, \textit{i} Agents vote for selection of a peptide identification, \textit{j} Agents vote neutral for selection, while \textit{k} Agents vote against selection. The AgentAggregator then processes all votes and concludes as output whether the peptide identification matches and should therefore be selected.}
\end{center}
\end{figure}
%
\npar This concept must be kept in mind when extending Peptizer. Next up are instructions for creating an \textbf{easy Agent} followed by a \textbf{more complex Agent}. Finally, the creation of an \textbf{easy AgentAggregator} is described as well.
\section{Writing the first Agent}
\npar For the easy example, a length Agent will be made. Its aim is to \textbf{verify whether a peptide's length is shorter or longer than a given threshold length}. The Agent could then, for example, be used to select short peptides as these are more prone to generate false positive peptide identifications.
\npar \textit{First} and most important: each custom Agent must \textbf{extend the abstract Agent class}. Thereby, each Agent has \textbf{common variables} and a \textbf{common signature}. Examples for common variables are a name, a status for both the activity and the veto option as well as a common method to get a unique identifier.
\npar \textit{Second}, to set an Agent's variables from the agent.xml configuration file, an Agent must be \textbf{initialized}. After being initialized, an Agent can then receive input and produce output. The initialization is also the place to define \textbf{custom parameters} like the threshold length in this example. This is illustrated in the code snippet below. There, the value of the LENGTH variable is used to get the appropriate value from the agent.xml configuration file.
%
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Constructing an Agent}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
public class LengthAgent extends Agent {
public static final String LENGTH = "length";
public LengthAgent(){
initialize(LENGTH);
}
}
\end{verbatim}
\end{algorithm}
%
\npar \textit{Third}, the \textbf{inspect method} must be implemented from the abstract Agent class. This method reflects the basic input/output structure of an Agent: \textbf{a PeptideIdentification object is an input parameter} and \textbf{an array of AgentVotes is returned as output} (one for each peptide hypothesis from the MS/MS spectrum). Note that the different votes are defined as static members of the \textbf{AgentVote enum}.
\npar The inspect method for the LengthAgent works as follows. Get the length threshold as a local variable from a Properties object that was created during initialization of the Agent. Then create an array to fit n AgentVotes, where n equals the number of confident peptide hypotheses made for the MS/MS spectrum (so we decide to inspect only confident peptide hypotheses). Then an AgentReport is created for each of these peptide hypotheses, wherein results are persisted.
\npar If the length of the peptide sequence is less than the length threshold, the LengthAgent votes positive for selection. Else it votes neutral for selection. Finally, the reports are made and stored along with the PeptideIdentification. These reports are used to display the PeptideIdentification in the Peptizer graphical user interface.
%
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Inspect method of the Length Agent}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
public AgentVote[] inspect(PeptideIdentification aPeptideIdentification) {
int lLength = Integer.parseInt((String) (this.iProperties.get(LENGTH)));
AgentVote[] lVotes = new AgentVote[aPeptideIdentification.getNumberOfConfidentPeptideHits()];
for (int i = 0; i < lVotes.length; i++) {
PeptideHit lPeptideHit = aPeptideIdentification.getPeptideHit(i);
AgentReport lReport = new AgentReport(getUniqueID());
int lPeptideLength = lPeptideHit.getSequence().length();
if (lPeptideLength < lLength) {
lVotes[i] = AgentVote.POSITIVE_FOR_SELECTION;
} else {
lVotes[i] = AgentVote.NEUTRAL_FOR_SELECTION;
}
lReport.addReport(AgentReport.RK_RESULT, lVotes[i]);
lReport.addReport(AgentReport.RK_TABLEDATA, new Integer(lPeptideLength));
lReport.addReport(AgentReport.RK_ARFF, new Integer(lPeptideLength));
aPeptideIdentification.addAgentReport(i + 1, getUniqueID(), lReport);
}
return lVotes;
}
\end{verbatim}
\end{algorithm}
%
\npar In the end, the LengthAgent returns an AgentVote for each confident peptide hypothesis, where each vote reflects the length of the peptide in relation to the given length threshold.
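\npar To make the LengthAgent available in Peptizer, it must be registered in the agent.xml configuration file in the same way as any other Agent. A minimal sketch of such an entry is given below; the uniqueid must match the classpath and classname of the compiled LengthAgent, and both the uniqueid shown here and the threshold value of 8 are purely illustrative.
\begin{verbatim}
<agent>
    <uniqueid>peptizer.agents.custom.LengthAgent</uniqueid>
    <property name="name">Length Agent</property>
    <property name="active">true</property>
    <property name="veto">false</property>
    <property name="length">8</property>
</agent>
\end{verbatim}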
\npar After this easy example, let's create an Agent with more advanced processing features.
\section{A more advanced Agent}
\subsection{Background}
This section starts with a quote from a 2004 paper by Ross et al.:
\begin{quote}
\textsf{We have developed a multiplexed set of reagents for quantitative protein analysis that place isobaric mass labels at the N termini and lysine side chains of peptides in a digest mixture. The reagents are differentially isotopically labelled such that all derivatized peptides are isobaric and chromatographically indistinguishable, but yield signature or reporter ions following CID that can be used to identify and quantify individual members of the multiplex set.}
\newline
\footnotesize Ross, P. L. et al. (2004). "Multiplexed protein quantitation in Saccharomyces cerevisiae using amine-reactive isobaric tagging reagents." Mol Cell Proteomics 3(12): 1154-69.
\end{quote}
\npar Ross et al. introduced the \ITRAQ methodology that has become a widespread tool for quantitative proteomics. If this chemistry is used, then \ITRAQ reporter ions are expected to appear in the MS/MS spectrum. If these reporter ions appear at unequal intensities, then the corresponding peptide was differentially abundant between the samples.
\npar This can now be defined as a case to create a new Agent:\\\textbf{Do \ITRAQ reporter ions appear at different intensities?}
%
%SECTION
\subsection{Creating a custom Agent to inspect expectations}
\npar A custom Agent inspecting the appearance of \ITRAQ reporter ions in the MS/MS spectrum will now be created. This Agent will be named the ReporterIonAgent. Just like any other Agent in Peptizer, the ReporterIonAgent must extend the abstract class Agent. This abstract class has common methods and variables among all Agents.\footnote{Read more on abstract classes at \url{http://java.sun.com/docs/books/tutorial/java/IandI/abstract.html}} Examples of common features are the name, the activity or the veto status as well as the getters and the setters to modify these variables. As such, the ReporterIonAgent must only encode its differences from other Agents.
\npar A custom Agent like the ReporterIonAgent can be created from \textbf{the template signature of an Agent extension}. This template is availlable as the \textbf{DummyAgent} in the standard Peptizer distribution.\footnote{The DummyAgent can be found in the Peptizer package be.proteomics.mat.util.agents. Click to browse the \href{http://genesis.ugent.be/peptizer/xref/be/proteomics/mat/util/agents/DummyAgent.html}{Java source code} or the \href{http://genesis.ugent.be/peptizer/apidocs/be/proteomics/mat/util/agents/DummyAgent.html}{JavaDocs} on this DummyAgent template to create custom Agents.}
\begin{description}
\item[Parameters] Declare optional parameters that are used as options by this Agent.\\\textit{The mass over charge values for reporter ions}
\item[Constructor] Declare the instantiation method of an Agent extension.\\\textit{The initiation of the super class Agent and the set-up of the private variables of ReporterIonAgent}
\item[Inspection] Define the inspection logic that the Agent must perform.\\\textit{The inspection logic that reports whether \ITRAQ reporter ions appear at different intensities}
\item[Description] Document the aim of this Agent.
\end{description}
\npar These elements are illustrated by the DummyAgent in the following Java code snippet.
%
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Agent signature in a code outline}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
public class DummyAgent extends Agent {
//Parameters
public static final String DUMMY_PROPERTY = "dummy";
//Constructor
public DummyAgent(){
super();
...
}
//Inspection
public AgentVote[] inspect(PeptideIdentification aPeptideIdentification){
AgentVote vote = null;
...
return vote;
}
//Description
public String getDescription(){
return "Agent description";
}
}
\end{verbatim}
\end{algorithm}
%
\npar Ok, so now being aware of the code signature of an Agent, the ReporterIonAgent can be created similarly.
\subsubsection{PARAMETERS}
\npar First, the different \textbf{parameters} required by this Agent must be defined. Since these are read from the configuration file, they must have fixed identifiers. There are two parameters holding the values for the two \textbf{reporter ion masses}, and one holding the \textbf{fold ratio threshold} between the two reporter ion intensities that this Agent inspects for. Finally, there is also a parameter for the \textbf{error tolerance} that is allowed upon matching the reporter ion masses with a fragment ion from the MS/MS spectrum.
\npar In Java, each of these identifiers is encoded as a final static String, since these are then always \textbf{identical} and \textbf{accessible}.
\npar The parameters for the ReporterIonAgent are illustrated in the following Java code snippet.
%
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Parameters for the ReporterIonAgent}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
//Mass over charge value for the first reporter ion.
public static final String MASS_1 = "reporter_mz_1";
//Mass over charge value for the second reporter ion.
public static final String MASS_2 = "reporter_mz_2";
//Fold ratio between the two reporter ions.
public static final String RATIO = "ratio";
//Error tolerance for matching the expected
//reporter ion mass over charge to a fragment ion.
public static final String ERROR = "error";
\end{verbatim}
\end{algorithm}
%
%
\npar Ok, after defining the parameters, the code for constructing the Agent can be written.
\subsubsection{CONSTRUCTOR}
\npar The constructor is a special kind of routine as it is used only once, upon creating a new Java object. The following parts can be recognized in the code:
\begin{description}
\item[Call superclass constructor] to initiate all methods and variables common to all Agents at their superclass.
\item[Read properties] from the agent.xml configuration file for the Agent with this unique identifier.
\item[Set general variables] as given by the agent.xml configuration file to general agent variables like the name, the active and the veto status.
\item[Set specific variables] as given by the agent.xml configuration file to specific agent variables like the reporter ion masses, the ratio and the error tolerance.\\Note that this is enclosed by a try \& catch statement. If these variables are not inside the configuration file, then Peptizer will log an exceptional GUI message before shutting down the application.
\end{description}
\npar The construction of the ReporterIonAgent is illustrated in the following Java code snippet:
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Construction of an Agent}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
/**
* Construct a new instance of the ReporterIonAgent.
*/
public ReporterIonAgent() {
super();
Properties prop = MatConfig.getInstance().getAgentProperties(this.getUniqueID());
super.setName(prop.getProperty("name"));
super.setActive(Boolean.valueOf(prop.getProperty("active")));
super.setVeto(Boolean.valueOf(prop.getProperty("veto")));
try {
this.iProperties.put(MASS_1, prop.getProperty(MASS_1));
this.iProperties.put(MASS_2, prop.getProperty(MASS_2));
this.iProperties.put(RATIO, prop.getProperty(RATIO));
this.iProperties.put(ERROR, prop.getProperty(ERROR));
} catch (NullPointerException npe) {
MatLogger.logExceptionalGUIMessage(
"Missing Parameter!!", "Parameters " + MASS_1 + ", " + MASS_2 + " , " +
RATIO + "and "+ ERROR + " are required! for Agent " + this.getName() +
" !!\nExiting..");
System.exit(0);
}
}
\end{verbatim}
\end{algorithm}
\npar With all this information, the ReporterIonAgent is ready to inspect an MS/MS spectrum for its reporter ions.
\subsubsection{INSPECTION}
\npar The inspection is the core of an Agent since this logic leads to the Agent's vote. The \textbf{input} of the inspection is a PeptideIdentification object. Such an object has a single MS/MS spectrum and multiple peptide hypotheses. The \textbf{output} of the inspection is a vote as an AgentVote enumeration. There are three types of votes:
\begin{enumerate}
\item A vote approving to select the peptide hypothesis for a given property.\\\textit{Peptide hypotheses of MS/MS spectra with deviating reporter ion intensities}
\item A vote being neutral to select the peptide hypothesis for a given property.\\\textit{Peptide hypotheses from MS/MS spectra with equal reporter ion intensities}
\item A vote denying to select the peptide hypothesis for a given property.\\\textit{Peptide hypotheses from MS/MS spectra lacking reporter ion fragment ions}
\end{enumerate}
\npar Note that the examples for the ReporterIonAgent shown in \textit{italics} depend on the expectations of the scenario, on what peptide hypotheses are interesting to the case that was initially defined. Therefore it is very important to document the Agents in depth.
%
\npar This ReporterIonAgent is documented exhaustively, so it is possible to read through the code step by step. As such, the source code of the ReporterIonAgent is included below. Comments are formatted in grey italics while Java keywords are blue and text values are green. The inspection part of the code starts from line 82. From there on, the following parts can be recognized:
\begin{description}
\item[Preparing the variables (line 109)] Here, the set of variables needed to perform the inspection is defined.\\\textit{Variables to hold the observed peak intensity or matching status.}
\item[Inspecting for the reporter ions (line 145)] Here, the actual inspection is performed (a simplified sketch of this step is given after this list).\\\textit{Find the reporter ions in the MS/MS spectrum, calculate their intensity ratio and test whether it meets the expectations.}
\item[Making an inspection report and committing the votes (line 236)] Here, the results of the inspection are stored and returned as an AgentVote for each peptide hypothesis.\\\textit{Store the intensity ratio and the votes in a report that will be read by the AgentAggregator.}
\end{description}
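\npar The code below gives a simplified, self-contained sketch of the peak-matching step referred to in the list above. It works on plain m/z and intensity arrays rather than on Peptizer's own spectrum classes, and all method and variable names in it are illustrative only; consult the included source code for the actual implementation.
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Simplified sketch of the reporter ion matching step}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
/**
 * Simplified sketch (not the actual Peptizer code): return the most
 * intense peak within 'aError' of 'aTargetMz', or -1.0 if no peak matches.
 */
private static double findReporterIntensity(double[] aMzValues,
                                            double[] aIntensities,
                                            double aTargetMz,
                                            double aError) {
    double lBestIntensity = -1.0;
    for (int i = 0; i < aMzValues.length; i++) {
        if (Math.abs(aMzValues[i] - aTargetMz) <= aError
                && aIntensities[i] > lBestIntensity) {
            lBestIntensity = aIntensities[i];
        }
    }
    return lBestIntensity;
}

// The fold ratio is then the larger intensity over the smaller one,
// and selection is only considered when both reporter ions were found:
//   double lRatio = Math.max(lInt1, lInt2) / Math.min(lInt1, lInt2);
//   boolean lSelect = (lInt1 > 0) && (lInt2 > 0) && (lRatio >= lThreshold);
\end{verbatim}
\end{algorithm}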
%
%% IntelliJ print options:
% Gentium basic, 10pt, Line numbers, Color printing, Syntax printing.
% Header left, Gentium Basic bold, 12pt
% Wrap at word breaks, Top 1.0, Left 1.0, Right 1.0, Bottom 0.7
%
\includepdf[pages={1-5}]{reporterionagent_source.pdf}
\subsubsection{DESCRIPTION}
\npar The final method that must be implemented for each Agent is a descriptive method. Here resides the hard-coded description the user reads in the Agent table upon starting a new Peptizer selection task. In addition, when the user is validating a peptide hypothesis, this description shows up in the information table.
\npar It is important to describe briefly what the Agent does, but also how it casts its vote.
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{Agent description}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
/**
* Describe the ReporterIonAgent shortly.
*/
public String getDescription() {
return "<html>Inspects for the abberant reporter ion intensities." +
"<b>Selects when two reporter ions ( " + this.iProperties.get(MASS_1) +
" , " + this.iProperties.get(MASS_2) + ") have a more then " +this.iProperties.get(RATIO) +
" fold intensity ratio.";
}
\end{verbatim}
\end{algorithm}
\section{Using a custom Agent in Peptizer}
\npar The custom ReporterIonAgent is ready to be used in Peptizer. To do so, the ReporterIonAgent must be added to the agent configuration file. This includes information on the classpath and classname as well as the Agent's parameters (see \ref{configuration_files} for more information on the configuration files). This looks as follows:
%
\begin{verbatim}
<agent>
<uniqueid>peptizer.agents.custom</uniqueid>
<property name="name">Reporter Ion Agent</property>
<property name="active">true</property>
<property name="veto">false</property>
<property name="reporter_mz_1">114.1</property>
<property name="reporter_mz_2">117.1</property>
<property name="ratio">1.5</property>
<property name="error">0.2</property>
</agent>
\end{verbatim}
\npar When a new selection task is started, the ReporterIonAgent is available to inspect peptide hypotheses as illustrated below.
%
%%%%%%
%FIGURE%
%%%%%%
\begin{figure}[H]
\begin{center}
\includegraphics[width=.85\textwidth]{reporterion_task}
\caption{\label{reporterion_task}After adding the ReporterIon Agent to the agent configuration file, the ReporterIon Agent can be used for creating a new Peptizer task.}
\end{center}
\end{figure}
%
\npar Peptizer will then inspect each MS/MS spectrum for deviating reporter ion intensities by using the ReporterIon Agent. As such, peptide hypotheses originating from MS/MS spectra with deviating reporter ion intensities will be selected and shown in the manual validation GUI of Peptizer. This is shown in the figure below. Note that this list can also be saved as a comma separated file (see \ref{save_csv_txt}).
%
%%%%%%
%FIGURE%
%%%%%%
\begin{figure}[H]
\begin{center}
\includegraphics[width=.85\textwidth]{reporterion_validation}
\caption{\label{reporterion_validation}By using the ReporterIon Agent, Peptizer selected all peptide hypotheses from MS/MS spectra with deviating reporter ion intensities. Both green boxes show how the ReporterIon Agent first identifies the reporter ions in the MS/MS spectrum and then uses these to calculate the intensity ratio of \textbf{3.42}. Since this is more than the threshold ratio that was set to \textbf{1.5}, this peptide hypothesis was selected for its deviating reporter ion intensities in the MS/MS spectrum.}
\end{center}
\end{figure}
%
\section{Writing your own AgentAggregator}
\npar Finally, when a series of Agents has voted on an MS/MS spectrum and its peptide hypotheses, these \textbf{Agent votes are input for an AgentAggregator}. The task of an AgentAggregator is then to bundle these votes and produce a conclusion on whether or not a PeptideIdentification matches a profile defined by its Agents.
\npar The BestHitAggregator serves as an example for creating a custom AgentAggregator. The construction of an AgentAggregator is very similar to that of an Agent. First, all AgentAggregators must \textbf{extend the abstract AgentAggregator class} so as to have a common set of variables and a \textbf{common signature}. Second, an AgentAggregator must also be \textbf{initialized} to set its properties from the aggregator.xml configuration file.
%
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{AgentAggregator construction}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
public class BestHitAggregator extends AgentAggregator {
public static final String SCORE = "score";
public BestHitAggregator() {
initialize(SCORE);
}
\end{verbatim}
\end{algorithm}
%
\npar Also, as in the Agent, the input/output structure of the AgentAggregator becomes clear upon \textbf{implementing the abstract match method} from the AgentAggregator class. Again, a \textbf{PeptideIdentification object serves as an input parameter} and \textbf{a single AgentAggregationResult is returned as output}. Note that the different aggregation results are static members of the AgentAggregationResult enum.
\npar A collection of Agents is set on the abstract AgentAggregator class upon starting a Peptizer task. Therefore, an AgentAggregator implementation has no concern about the type of Agents; it must only be aware that there are some Agents ready for voting.
\npar \textit{First}, a number of local variables are declared that are used during the routine. Then, there is a check whether there is any confident peptide hypothesis for this MS/MS spectrum. Only then starts \textbf{an iteration over all the available Agents}. Each Agent then \textbf{inspects the PeptideIdentification and returns an AgentVote}. As this is the BestHitAggregator, \textbf{only the votes for the best peptide hypothesis are taken into consideration here}. During the iteration, the veto status of an Agent is also logged, but only if the Agent votes positive for selection.
%
%%%%%
%CODE%
%%%%%
\begin{algorithm}[H]
\caption{BestHitAggregator matching method}
\scriptsize
\vspace{0.3cm}
\begin{verbatim}
public AgentAggregationResult match(PeptideIdentification aPeptideIdentification) {
  boolean boolConfident = false;
  boolean boolMatch = false;
  boolean boolVetoWasCalled = false;
  // Threshold score as configured in the aggregator.xml configuration file.
  Integer lThresholdScore = new Integer(iProperties.getProperty(SCORE));
  int counter = -1;
  AgentVote[] results = new AgentVote[iAgentsCollection.size()];
  // Only inspect spectra with at least one confident peptide hypothesis.
  if (aPeptideIdentification.getNumberOfConfidentPeptideHits() > 0) {
    boolConfident = true;
    // Collect each Agent's vote for the best ranked peptide hypothesis.
    for (Agent lAgent : iAgentsCollection) {
      counter++;
      results[counter] = lAgent.inspect(aPeptideIdentification)[0];
      if (results[counter] == AgentVote.POSITIVE_FOR_SELECTION && lAgent.hasVeto()) {
        boolVetoWasCalled = true;
      }
    }
    // A positive vote by an Agent with veto rights always yields a match.
    if (boolVetoWasCalled) {
      boolMatch = true;
    } else {
      // Otherwise, compare the summed votes against the threshold score.
      int lSumScore = sumVotes(results);
      if (lSumScore >= lThresholdScore) {
        boolMatch = true;
      }
    }
  }
  if (boolConfident) {
    if (boolMatch) {
      return AgentAggregationResult.MATCH;
    } else {
      return AgentAggregationResult.NON_MATCH;
    }
  } else {
    return AgentAggregationResult.NON_CONFIDENT;
  }
}
\end{verbatim}
\end{algorithm}
%
\npar When the iteration has finished, a few lines of logic aggregate the votes. \textit{First}, if an Agent with veto rights was positive for selection, the PeptideIdentification is always a match. \textit{Second}, all votes are summed and compared with the scoring threshold as given by the user. If the sum is greater than the threshold, it is a match. Otherwise, the PeptideIdentification is either not matched or not confident. The AgentAggregator concludes by returning the corresponding AgentAggregationResult object.
\npar In the end, the BestHitAggregator will thus have returned a conclusion on a given PeptideIdentification. Those PeptideIdentifications with an \textbf{AgentAggregationResult.MATCH} result are subsequently presented in the \textbf{manual validation interface} of Peptizer. | {
"alphanum_fraction": 0.7609824126,
"avg_line_length": 61.1706161137,
"ext": "tex",
"hexsha": "37af32adcfe14bd3c9e74131beadf87ac06bc736",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "55faddedda079fbd2cb6fda05c596f0719dcc879",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "compomics/peptizer",
"max_forks_repo_path": "manual/peptizer-extensions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "55faddedda079fbd2cb6fda05c596f0719dcc879",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "compomics/peptizer",
"max_issues_repo_path": "manual/peptizer-extensions.tex",
"max_line_length": 923,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "55faddedda079fbd2cb6fda05c596f0719dcc879",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "compomics/peptizer",
"max_stars_repo_path": "manual/peptizer-extensions.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 5897,
"size": 25814
} |
\section{Risk estimation}\label{sec:RiskEstimators}
Let~$\func{\dec}{\domainX}{\real}$ be a decision function from feature vector~$\X$ to a real number, and let~$\func{\loss}{\domainX \times \domainY}{\real_{{\geq}0}}$ be the loss function. \textit{Risk estimator},~$\risk$,\footnote{Although expected risk~$\risk$ is a function of~$\dec$ as shown in Eq.~\eqref{eq:RiskEstimator:Expectation}, we drop~$\dec$ from our notation for brevity going forward.} quantifies $\dec$'s~expected loss formally as
\begin{equation}\label{eq:RiskEstimator:Expectation}
\risk(\dec) = \mathbb{E}_{(\X,\y) \sim \joint}\sbrack{\floss{\decX}{\y}}\text{.}
\end{equation}
Since $\joint$~is unknown, $\risk$~is also unknown, so in practice the empirical estimate~$\emprisk$ is used. This section provides an overview of the PN, PU (specifically nnPU), and PUbN risk estimators as well as their empirical estimates.\footnote{All empirical risk estimates described here support both batch and stochastic gradient descent.}
\subsection{PN --- positive-negative}
PN~classification has access to both positive and negative labeled examples. Therefore, Eq.~\eqref{eq:RiskEstimator:Expectation} exactly specifies its expected risk.
\paragraph{Empirical Estimation} Estimating the PN~empirical risk is straightforward as shown in Eq.~\eqref{eq:EmpRisk:PN}; it is merely the mean loss for all examples in~$\train$. This formulation applies irrespective of any covariate shift/bias.
\begin{equation}\label{eq:EmpRisk:PN}
\emprisk = \frac{1}{\abs{\train}} \sum_{(\X,\y) \in \train} \floss{\decX}{\y}
\end{equation}
\subsection{nnPU --- non-negative positive-unlabeled}
Since positive\-/unlabeled~(PU) learning has no negative labeled examples, traditional supervised learning cannot be used. By the law of total expectation, the expected risk can be decomposed into the risk associated with each label (positive and negative) as shown in Eq.~\eqref{eq:Risk:Bayes}. ${\varrisk{D}{\ypred}}$ denotes the expected loss when predicting label~$\ypred$ for samples drawn from distribution~${\pDist_{D}}$ where ${D \in \set{\textnormal{P}, \textnormal{N}, \X}}$.
\begin{align}
\risk &= \prior \mathbb{E}_{\X \sim \pcond}\sbrack{\floss{\decX}{\pcls}} + (1-\prior) \mathbb{E}_{\X \sim \ncond}\sbrack{\floss{\decX}{\ncls}} \nonumber \\
&= \prior \prisk{P} + (1-\prior) \nrisk{N} \label{eq:Risk:Bayes}
\end{align}
Since the unlabeled set is drawn from marginal distribution,~$\marginal$, it is clear that:
\begin{align}
  \nrisk{U} &= \mathbb{E}_{\X \sim \marginal} \sbrack{\floss{\decX}{\ncls}} \nonumber \\
            &= \prior \mathbb{E}_{\X \sim \pcond} \sbrack{\floss{\decX}{\ncls}} + (1 - \prior) \mathbb{E}_{\X \sim \ncond} \sbrack{\floss{\decX}{\ncls}}\nonumber \\
&= \prior \nrisk{P} + (1 - \prior) \nrisk{N} \label{eq:Risk:Unlabeled}
\end{align}
\noindent
Rearranging the above and combining it with Eq.~\eqref{eq:Risk:Bayes} yields the unbiased positive\-/unlabeled~(uPU) risk estimator below.~\cite{duPlessis:2014}
\begin{equation}\label{eq:Risk:uPU}
\risk = \prior \prisk{P} + \nrisk{U} - \prior \nrisk{P}
\end{equation}
\paragraph{Non\-/negativity} By the definitions of~$\loss$ and~$\risk$, it is clear that $\nrisk{N}$~must be non\-/negative. However, highly expressive learners (e.g.,~neural networks) often cause ${\nrisk{U} - \prior \nrisk{P}}$ to slip negative --- primarily due to overfitting. Kiryo\etal~\cite{Kiryo:2017} proposed the non\-/negative positive\-/unlabeled~(nnPU) risk estimator in Eq.~\eqref{eq:Risk:nnPU}; the primary difference versus uPU is the negative risk surrogate is explicitly forced non\-/negative by the~$\max$.
\begin{equation}\label{eq:Risk:nnPU}
\risk = \prior \prisk{P} + \max\left\{0, \nrisk{U} - \prior \nrisk{P} \right\}
\end{equation}
Whenever the negative risk surrogate is less than~0, nnPU updates the model parameters using special gradient ${\gamma \nabla_{\theta} \left( \prior \nrisk{P} - \nrisk{U} \right)}$ where hyperparameter~${\gamma \in (0,1]}$ attenuates the learning rate. Observe that the negative risk surrogate is deliberately negated; this is done to ``defit'' the learner so that it no longer underestimates the negative class' expected risk.
Although forcing~$\nrisk{N}$'s surrogate to be non-negative introduces an estimation bias (i.e.,~expected value does not equal the true expectation), nnPU often performs better in practice and guarantees ERM uniform convergence.
\paragraph{Empirical Estimation} Each risk term in nnPU can be empirically estimated from the training set where:
\begin{equation}\label{eq:EmpRisk:Pos}
\evrisk{P}{\ypred} = \frac{1}{\abs{\ptrain}} \sum_{\X \in \ptrain} \floss{\decX}{\ypred}
\end{equation}
\noindent
and for unlabeled set ${\utrain \sim \marginal}$,
\begin{equation}
\evrisk{U}{-} = \frac{1}{\abs{\utrain}} \sum_{\X \in \utrain} \floss{\decX}{\ncls} \text{.}
\end{equation}
\noindent
Prior~$\prior$ and attenuator~$\gamma$ are hyperparameters.
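For illustration only (this is a sketch, not code from any of the cited works), the nnPU objective of Eq.~\eqref{eq:Risk:nnPU} can be estimated from decision values on~$\ptrain$ and~$\utrain$ in a few lines of Python; the sigmoid surrogate loss is an arbitrary choice here.
\begin{verbatim}
import numpy as np

def sigmoid_loss(z, y):
    # l(g(x), y) = 1 / (1 + exp(y * g(x))), a common surrogate loss
    return 1.0 / (1.0 + np.exp(y * z))

def nnpu_risk(g_p, g_u, prior):
    # g_p, g_u: decision values g(x) on the positive / unlabeled sets
    r_p_pos = np.mean(sigmoid_loss(g_p, +1))   # empirical positive risk
    r_p_neg = np.mean(sigmoid_loss(g_p, -1))
    r_u_neg = np.mean(sigmoid_loss(g_u, -1))
    neg_surrogate = r_u_neg - prior * r_p_neg
    return prior * r_p_pos + max(0.0, neg_surrogate), neg_surrogate
\end{verbatim}
In a training loop, whenever the returned surrogate is negative, the parameters would instead be updated along $\gamma$ times the gradient of its negation, as described above.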
\subsection{PUbN --- positive, unlabeled, biased-negative}
Let $\latent$~be a latent random variable representing whether tuple~${(\X,\y)}$ is eligible for labeling. The joint distribution,~${\trijoint = \pDist(\X, \y, \latent)}$, then becomes trivariate. By definition, ${\pDist(\latent = \pcls \vert \X, \y = \pcls) = 1}$ (i.e.,~no positive selection bias) or equivalently ${\pDist(\y = \ncls \vert \X, \latent = \ncls) = 1}$. The biased-negative conditional distribution is therefore~${\bncond = \pDist(\X \vert \y = \ncls, \latent = \pcls)}$.
The marginal distribution can be partitioned as
\begin{equation*}
\begin{aligned}
\marginal = &\pDist(\y = \pcls) \pDist(\X \vert \y = \pcls) \\
&+ \underbrace{\pDist(\y = \ncls, \latent = \pcls)}_{\plabel}\pDist(\X \vert \y = \ncls, \latent = \pcls)
+ \underbrace{\pDist(\y = \ncls, \latent = \ncls)}_{1 - \prior - \plabel} \pDist(\X \vert \y = \ncls, \latent = \ncls)
\end{aligned}
\end{equation*}
\noindent
where ${\plabel = \pDist(\y = \ncls, \latent = \pcls)}$ is a hyperparameter. The expected risk therefore becomes
\begin{equation}\label{eq:Risk:WithBN}
  \risk = \prior \prisk{P} + \plabel \nrisk{bN} + (1 - \prior - \plabel) \smrisk
\end{equation}
Define ${\sigma(\X) = \pDist(\latent = \pcls \vert \X)}$. While the proof is well beyond the scope of this document, Hsieh\etal~\cite{Hsieh:2018} proved that, with guaranteed estimation error bounds, $\smrisk$~decomposes as
\begin{equation}\label{eq:ExpectedRisk:PUbN:Latent}
\begin{aligned}
\smrisk = &\mathbb{E}_{\X \sim \marginal}\sbrack{\mathbbm{1}_{\sigX \leq \eta} \floss{\decX}{\ncls} \sigdiff} \\
&+ \prior \mathbb{E}_{\X \sim \pcond} \sbrack{\mathbbm{1}_{\sigX > \eta} \floss{\decX}{\ncls} \frac{\sigdiff}{\sigX}} \\
&+ \plabel \mathbb{E}_{\X \sim \bncond} \sbrack{\mathbbm{1}_{\sigX > \eta} \floss{\decX}{\ncls} \frac{\sigdiff}{\sigX}}
\end{aligned}
\end{equation}
\noindent
where $\mathbbm{1}$~is the indicator function and $\eta$~is a hyperparameter that controls the importance of unlabeled data versus $\textnormal{P}$/$\textnormal{bN}$ data.
\paragraph{Empirical Estimation} Similar to nnPU, $\prisk{P}$~and~$\nrisk{bN}$ can be estimated directly from~$\ptrain$ and~$\bntrain$ respectively. Estimating~$\smrisk$ is more challenging and actually requires the training of two classifiers.
First, $\sigX$~is empirically estimated by training a positive\-/unlabeled probabilistic classifier with labeled set ${\ptrain \sqcup \bntrain}$ and unlabeled set~$\utrain$; refer to this learned approximation as~$\hsigX$. Probabilistic classifiers must be adequately calibrated to generate probabilities. Hsieh\etal\ try to achieve this by training with the logistic loss, but that provides no calibration guarantees~\cite{Guo:2017}.
Rather than specifying hyperparameter~$\eta$ directly, Hsieh\etal\ instead specified hyperparameter~$\tau$ to calculate~$\eta$ via
\begin{equation}\label{eq:EtaCalculation}
\abs{\setbuild{\X \in \utrain}{\hsigX \leq \eta}} = \tau (1 - \prior - \plabel)\abs{\utrain} \text{.}
\end{equation}
\noindent
This approach provides a more intuitive view into the balance between~$\utrain$ and $\ptrain$/$\bntrain$.
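For concreteness, the selection of~$\eta$ from~$\tau$ can be sketched as follows (an illustration only, assuming $\hsigX$ has already been evaluated on every element of~$\utrain$).
\begin{verbatim}
import numpy as np

def choose_eta(sigma_u, tau, prior, rho):
    # sigma_u: array of sigma-hat values on the unlabeled training set
    k = int(round(tau * (1.0 - prior - rho) * len(sigma_u)))
    k = min(max(k, 1), len(sigma_u))
    return np.sort(sigma_u)[k - 1]   # k-th smallest sigma-hat value
\end{verbatim}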
PUbN's second classifier minimizes Eq.~\eqref{eq:Risk:WithBN} and empirically estimates the risk when ${\latent = \ncls}$ as
\begin{equation}\label{eq:EmpRisk:PUbN:Latent}
\begin{aligned}
\smrisk = &\frac{1}{\abs{\utrain}} \sum_{\xvar{U} \in \utrain} \sbrack{\mathbbm{1}_{\hsig(\xvar{U}) \leq \eta} \floss{\dec(\xvar{U})}{\ncls} \big(1 - \hsig(\xvar{U})\big)} \\
&+\frac{\prior}{\abs{\ptrain}} \sum_{\xvar{P} \in \ptrain} \sbrack{\mathbbm{1}_{\hsig(\xvar{P}) > \eta} \floss{\dec(\xvar{P})}{\ncls} \frac{1 - \hsig(\xvar{P})}{\hsig(\xvar{P})}} \\
&+\frac{\plabel}{\abs{\bntrain}} \sum_{\xvar{bN} \in \bntrain} \sbrack{\mathbbm{1}_{\hsig(\xvar{bN}) > \eta} \floss{\dec(\xvar{bN})}{\ncls} \frac{1 - \hsig(\xvar{bN})}{\hsig(\xvar{bN})}} \text{.}
\end{aligned}
\end{equation}
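The three sums above translate directly into code. The sketch below is illustrative only (it is not the authors' implementation) and assumes vectorized arrays of decision values and $\hsig$ values for each of the three sets.
\begin{verbatim}
import numpy as np

def smrisk_hat(g_u, s_u, g_p, s_p, g_bn, s_bn, prior, rho, eta, loss):
    # g_*: decision values g(x); s_*: sigma-hat values, for U, P and bN
    t_u  = np.mean(np.where(s_u <= eta,
                   loss(g_u, -1) * (1.0 - s_u), 0.0))
    t_p  = prior * np.mean(np.where(s_p > eta,
                   loss(g_p, -1) * (1.0 - s_p) / np.maximum(s_p, 1e-12), 0.0))
    t_bn = rho * np.mean(np.where(s_bn > eta,
                   loss(g_bn, -1) * (1.0 - s_bn) / np.maximum(s_bn, 1e-12), 0.0))
    return t_u + t_p + t_bn
\end{verbatim}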
| {
"alphanum_fraction": 0.6959135954,
"avg_line_length": 70.7165354331,
"ext": "tex",
"hexsha": "b27b5887a9069fecf56e20a097efbc0b2ca8526e",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-12-01T14:21:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-25T07:33:10.000Z",
"max_forks_repo_head_hexsha": "e1e039ca9f228051f6a3682b3ee71665e4d693d0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ZaydH/cis510_nlp",
"max_forks_repo_path": "project/tex/risk_estimators.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e1e039ca9f228051f6a3682b3ee71665e4d693d0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ZaydH/cis510_nlp",
"max_issues_repo_path": "project/tex/risk_estimators.tex",
"max_line_length": 527,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e1e039ca9f228051f6a3682b3ee71665e4d693d0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ZaydH/cis510_nlp",
"max_stars_repo_path": "project/tex/risk_estimators.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3021,
"size": 8981
} |
\documentclass[10pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{setspace}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{color}
\definecolor{deepblue}{rgb}{0,0,0.5}
\definecolor{deepred}{rgb}{0.6,0,0}
\definecolor{deepgreen}{rgb}{0,0.5,0}
\usepackage{listings}
\DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold
\DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal
% Python style for highlighting
\newcommand\pythonstyle{\lstset{
language=Python,
basicstyle=\ttm,
otherkeywords={self}, % Add keywords here
keywordstyle=\ttb\color{deepblue},
emphstyle=\ttb\color{deepred}, % Custom highlighting style
stringstyle=\color{deepgreen},
frame=tb, % Any extra options here
showstringspaces=false %
}}
% Python environment
\lstnewenvironment{python}[1][]
{
\pythonstyle
\lstset{#1}
}
{}
\begin{document}
\title{ECE 4750 Lab 1: Iterative Integer Multiplier}
\author{Akshay Dongaonkar (akd54) \& Avinash Navada (abn44) \& Vivek Gaddam (vrg22)}
\maketitle
\section{Introduction}
In many programming algorithms, multiplication is a key step in driving the algorithm towards completion.
Many digital signal processing algorithms spend most of their time multiplying values.
Given our media heavy, highly connected Internet of Things (IoT), more signals will need to be processed.
Therefore, we have synthesized an iterative multiplier that supports the mul instruction as defined in the PARCv1 Instruction Set Architecture (ISA).
Eventually, this multiplier will be a module in a fully synthesizable multicore processor.
We fully implemented two designs of an iterative multiplier: the base design is a fixed latency, 34 cycle iterative multiplier, while the alternative design is a variable latency iterative multiplier with bounds of 3 to 34 cycles.
While we do not expect much additional overhead on clock frequency in our alternative design,
we expect a significant increase in area and energy. Running our array of unit tests on both designs reveals that the variable design consumes far less clock cycles than does the base design (8.88 cycles/multiplication (alternative design) vs. 34 cycles/multiplication (base)).
We also scale down our hardware use in the variable design and determine that the associated performance downgrade on the same unit tests (to 9.1 cycles/multiplication) is still far superior to the base design. We expect the alternative design to be used over the base in most implementations and potential applications while still meeting modern technology constraints (focus on energy, large die sizes, etc).
\section{Project Management}
Given our group's various skill levels, we assigned Avinash to be verification lead, Vivek to be the architect, and Akshay to be the design lead.
Our initial project roadmap was fairly aggressive and required us to be finished by Wednesday, September $10^{th}$.
The initial Gantt Chart is shown in Figure~\ref{fig:gantt}.
The actual roadmap is shown as a separate Gantt Chart at the end of the document in Figure~\ref{fig:gantt_actual}.
\begin{figure}
\centering
\includegraphics[scale=0.45]{gantt}
\caption{The initial Gantt chart postulating progress}
\label{fig:gantt}
\end{figure}
The breakdown of work follows:
Vivek implemented most of the baseline design and some of the alternate design in Verilog.
Avinash implemented some of the alternate design and created our testing suite for Lab 1.
Akshay came up with the designs for the alternate design and wrote most of the writeup.
Akshay helped debug the Verilog code.
Vivek corrected the majority of errors in the RTL code.
Implementation of the RTL code progressed fairly smoothly with the exception of a few logical errors that were somewhat difficult to catch.
Vivek completed the baseline design while Avinash finished the directed test cases. We then tested the baseline design with these tests until it passed. This was one milestone in our agile development approach. Similarly, Akshay, Vivek, and Avinash completed the alternate design while Avinash finished the random test cases and added datasets for evaluation. After this was done, we spent a lot of time debugging the entire functionality of each design individually. We relied on the variety of tests in the testing suite to provide verification for this lab.
Some lessons learned from this lab were how to use tools such as GTKWave and line tracing to catch logic errors.
Another idea for testing in future assignments is unit tests for the functionality of major modules within our implementation in addition to end-to-end tests of our overall implementation.
We ran into CMS troubles towards the end, requiring us to take a late day.
Next time, we will be more vigilant in this department.
\section{Baseline Design}
The baseline design works on the following interface:
given a 64 bit message containing our operands, a clock signal,
and required signals to conform to the val/rdy microprotocol,
output the lower 32 bits of the multiplication to your consumer.
This interface is shown in Figure~\ref{fig:model}.
Our algorithm for computing the multiplication is as follows.
If the least significant bit (lsb) of the second operand (call it $b$) is 1, add the first operand (call it $a$) to an accumulator.
Then, shift $a$ left by one bit and shift $b$ logically right by one bit.
This is the first multiplication algorithm most of us are taught, and it is a good baseline for comparison because each one-bit shift takes one cycle, making the design conceptually simple.
The pseudocode for this is shown in Figure~\ref{fig:code}.
Our implementation follows the structure of this pseudo-code very closely.
We have three registers, one each for $a$, $b$, and $result$ (our accumulator).
We have 2 shifters, one logical left shifter and one logical right shifter.
We have an equivalent multiplexer for the clause that adds to our accumulator if the lsb of $b$ is $1$.
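As a sanity check on this datapath, the algorithm can also be expressed as a short behavioral model in Python (illustrative only; it is not our Verilog source and the function name is ours).
\begin{python}
def imul_fixed(a, b):
    # Behavioral model of the fixed-latency multiplier.
    # Returns (lower 32 bits of a*b, cycle count) for 32-bit operands.
    result = 0
    cycles = 2                        # one IDLE cycle + one DONE cycle
    for _ in range(32):
        if b & 1:
            result += a               # add shifted multiplicand to accumulator
        a = (a << 1) & 0xFFFFFFFF     # logical left shift of a
        b >>= 1                       # logical right shift of b
        cycles += 1                   # one CALC cycle per bit of b
    return result & 0xFFFFFFFF, cycles
\end{python}
Running this model on any pair of operands always reports 34 cycles, matching the fixed latency discussed below.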
\begin{figure}[b]
\centering
\begin{minipage}{.5\textwidth}
\includegraphics[scale=0.6]{FLmodel} %width=.4\linewidth
\caption{Interface for our model\\
Inclusive of val/rdy microprotocol}
\label{fig:model}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{imul}
\caption{Pseudocode for our baseline multiply algorithm}
\label{fig:code}
\end{minipage}
\label{fig:test}
\end{figure}
However, the control logic and the data flow are not implemented as a monolithic design.
Instead, we implement a \textit{control-datapath split.}
The control module sends control signals to the datapath module, altering the flow of data into the registers.
This occurs when we are accepting new values into our operand registers,
writing/not-writing partial sums to $result$, and outputing a computed value.
The datapath module sends the lsb of $b$ to the control module so it can compute the appropriate dataflow.
The datapath is shown in Figure~\ref{fig:datapath}.
The lsb is directly used to determine whether to partially sum into $result$, a direct mapping to the pseudocode.
As seen from the pseudocode, there is no escape from the \verb+for+ loop.
Therefore, this implementation \textbf{must} take at least 32 cycles.
In fact, our implementation takes 34 cycles, as we have one cycle to accept data and one cycle to state we are done.
The state machine capturing this logic is shown in Figure~\ref{fig:BaseFSM}.
This implementation, therefore, does not exploit patterns in the underlying data.
In the most extreme case where $b$ is zero, the hardware checks every bit of $b$ and decides 32 times not to add to $result$.
We should reduce that down to one decision not to write to $result$.
While modularity was used in taking advantage of structures (such as shifters, registers, and adders) that were individually verified for correctness, we could have also used hierarchy. Notice that each register has a multiplexer in front of it directing output.
We could have wrapped that structure into a module and reused it three times.
This would have allowed us to test incrementally and unit-by-unit.
Instead, we were forced to rely on end-to-end testing and test all functionality at once. This may have contributed to our not catching RTL bugs sooner.
Nevertheless, our use of encapsulation (by way of the val/rdy microprotocol) was a major feature of our design.
\section{Alternative Design}
The primary downside to our baseline implementation is the large latency in computing the product.
We propose several alternative designs and attempt to increase performance.
Since we cannot use concepts of indirection or caching, we will instead consider pipelining our multiplier.
We will also consider exploiting the underlying structure of our operands to increase performance.
There are multiple ways to exploit structure in our operands; we will attempt to exploit structure in two ways.
First, we will roll over consecutive zeros by shifting more than 1 bit if possible.
Second, we will reduce the granularity of how many zeros we shift over to powers of two, as this reduces hardware costs
while still increasing performance over the fixed latency multiplier.
\begin{figure}[b]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[scale=0.4]{Datapath}
\caption{Datapath Diagram. Again, the structure mimics the pseudocode.}
\label{fig:datapath}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[scale=0.4]{BaseFSM}
\caption{FSM capturing control logic.}
\label{fig:BaseFSM}
\end{minipage}
\end{figure}
Given the linear path of our FSM, it makes sense to pipeline our constructions.
We can create stages between every state transition and have our val/rdy microprotocol mediate flow through the pipeline.
However, consider the hardware required. We need $34 \times 3 = 102$ registers, $34 \times 4 = 136$ two way multiplexers,
and $34$ logical left and right shifters.
This does not include the $34$ full adders we need.
The cost of this design is enormous!
Additionally, this design only achieves high throughput if we get repeated multiplications.
Consider single cycle processors.
This design will add exactly zero improvement in those processors, as we cannot pipeline instructions.
Also, since we are assuming we are going to get instructions from a canonical C program, this design seems impractical.
Instead, we can exploit the structure of the 32 bit operands we get.
We are likely to have some repeated set of zeros of length $n$ in our second operand.
Our algorithm in these cases does nothing to our result register, wasting $n$ cycles.
So, we will instead shift $n$ bits, saving us $n-1$ cycles in our computation.
We need one cycle to actually determine that there are $n$ bits we can skip.
Our state machine for this logic is show in Figure~\ref{fig:AltFSM}.
We implement this change using the \verb+casez+ construct.
If there are $n$ consecutive zeros in the least significant bits of $b$, we increment our counter by $n-1$ and shift by $n$ bits.
So, if $b$ is all zeros, we shift 32 bits and spend only one cycle in the calc stage.
Therefore, our total overhead is 3 cycles: one for IDLE, one for CALC, and one for DONE.
If the $b$ operand is all ones, we take our usual 34 cycles.
That provides our range of multiply cycles we can take.
An advantage to this design is that the datapath gets modified only slightly.
We only add one more control signal to our shifters, and the lone status signal from the base design (the lsb of $b$) is now expanded to the full 32 bits of $b$.
In other words, \textbf{the datapath components for the alternative design are the same as for the base design (Figure~\ref{fig:datapath})}.
This allowed us to rapidly implement the alternative design.
However, instead of shifting by $n$ bits, let us shift by powers of two.
This allows us to reduce the number of cases we need to check from 32 to 6.
This is a five-fold reduction in hardware! As mentioned in the introduction, this hardware reduction comes with little performance loss, and thus may be more appropriate in applications with technology constraints.
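A behavioral Python model of this variable-shift scheme is sketched below (illustrative only; the powers-of-two variant would simply round the shift amount down to the nearest power of two).
\begin{python}
def imul_var(a, b):
    # Behavioral model of the variable-latency multiplier:
    # runs of zeros in b are skipped in a single cycle.
    result, cycles, bits_left = 0, 2, 32      # IDLE + DONE cycles
    while bits_left > 0:
        if b == 0:
            cycles += 1                       # skip the remaining zeros at once
            break
        if b & 1:
            shift = 1
            result += a                       # normal add-and-shift cycle
        else:
            shift = 1
            while shift < bits_left and not (b >> shift) & 1:
                shift += 1                    # length of the zero run
        a = (a << shift) & 0xFFFFFFFF
        b >>= shift
        bits_left -= shift
        cycles += 1
    return result & 0xFFFFFFFF, cycles
\end{python}
The model reproduces the 3 to 34 cycle bounds: an all-ones $b$ takes 34 cycles, while $b=0$ finishes in 3.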
\section{Testing Strategy}
The base and alternative designs were tested using the provided test harness, to which were added additional directed and random test cases.
The directed and random test cases were made to cover all types of possible inputs, including (decimal) zeros and ones, small and large numbers, positive and negative numbers, numbers with masked lower/middle bits, and combinations of these. The magnitude of the inputs was also varied considerably from 0 to 32 bits (even though the bit width of all the inputs is 32). Furthermore, test source/sink delays were also added to random tests to test the val/rdy microprotocol. A summary of all the different types of test cases is given in the table below.
\begin{table}[h]
\begin{tabular}{|c | c | c|}
\hline
\textbf{Test Case} & \textbf{Test Case} & \textbf{Test Case} \\
\hline
$\times$ of 0, 1, and $-1$ & small $(+) \times (+)$ & small $(-) \times (+)$ \\
small $(-) \times (+)$ & small $(-) \times (-)$ & large $(+) \times (+)$ \\
large $(+) \times (-)$ & large $(-) \times (+)$ & large $(-) \times (-)$ \\
$\times$ with low bits msk & $\times$ with mid bits msk & sparse $(+/-) \times (+/-)$ \\
dense $(+/-) \times (+/-)$ & RS $\times$ RS & RS $\times$ RS with RD \\
RL $\times$ RS & RL $\times$ RL with RD & RS $\times$ RS with low bits msk \\
RS $\times$ RS with low bits msk w/RD & RL $\times$ RL with mid bits msk & \\
\hline
\end{tabular}
\caption{RS = Random Small, RL = Random Large, RD = Random Delays, msk = masked}
\end{table}
At first only the functional model and base design worked correctly with these unit tests, while the alternative design failed most of them. At this point, we also used line traces and GTKWave to do more granular testing to ensure outputs were actually being generated and at the expected times.
Ultimately both the base and alternative designs, as well as the functional model, worked correctly with all test cases. Although the tests were comprehensive and were developed in accordance with the increasing complexity of the control and datapath modules, we could have targeted the control and datapath modules separately with specific tests to ensure correct functionality before incorporating them together. However, this lab was simple enough for the test suite we wrote to be more than sufficient, especially since development of the baseline design was aided by comparison with the GCD model from Tutorial 3.
\section{Evaluations and Conclusion}
As soon as the base and alternative designs were completed, the next step was to test the performance of the models. In order to achieve this, the simulator harness was built and run (using the given random dataset of small numbers) to generate the total number of cycles and the average number of cycles per transaction for the base and alternative designs. To expand on this, we added three more random datasets to the simulator harness to check the performance of the designs with different inputs: large numbers, low bits masked, and middle bits masked. The results of the simulator for each dataset and design are summarized in the table below.
\begin{table}[h]
\begin{tabular} {|l | r | r | r | r|}
\hline
\textbf{Design} & \textbf{DS: SI} & \textbf{DS: LI} & \textbf{DS: LB masked} & \textbf{DS: MB masked} \\
\hline
Baseline: Total Cycles & 1751.00 & 1751.00 & 1751.00 & 1751.00 \\
Baseline: Avg. Cycles / Transaction & 35.02 & 35.02 & 35.02 & 35.02 \\
Alternative: Total Cycles & 444.00 & 783.00 & 666.00 & 574.00 \\
Alternative: Avg. Cycles / Transaction & 8.88 & 15.66 & 13.32 & 11.48 \\
Alt-$2^n$: Total Cycles & 455.00 & 817.00 & 723.00 & 654.00 \\
Alt-$2^n$: Avg. Cycles / Transaction & 9.10 & 16.34 & 14.46 & 13.08 \\
\hline
\end{tabular}
\caption{Baseline v. Alternative Design Performance. DS = Dataset. SI = Small Inputs. LI = Large Inputs. LB = Low Bits. MB = Middle Bits.}
\end{table}
From the above table, we can see that the baseline design performs the same for all datasets, which is to be expected since it is a fixed-latency design. However, the alternative design performs far better than the baseline design for all input datasets as it capitalizes on inputs with consecutive zeroes to reduce latency. Interestingly, the alternative design performed better for the small inputs dataset than it did for the large inputs dataset, perhaps due to the large inputs dataset containing fewer consecutive zeroes and relatively denser inputs. It also performed better for the masked datasets than it did for the large inputs dataset due to the guaranteed presence of consecutive zeroes. The ``powers of 2'' alternative design (which only capitalized on runs of consecutive zeroes whose lengths are powers of 2) did better than the baseline design but only slightly worse than the alternative design, as expected, since it didn't account for run lengths that aren't powers of 2 (3, 5, 6, etc.).
One area of improvement we considered but couldn't implement due to time constraints was capitalizing on consecutive ones in negative numbers. The idea was to convert negative inputs to their positive equivalents by flipping the bits and adding 1, multiplying the positive numbers together, and finally converting the product back to its negative equivalent. By converting the inputs to positive numbers, we can use the alternative design to capitalize on the consecutive zeroes (it doesn't work with consecutive ones).
The possible alternative implementations discussed in the alternative design section could be used to further reduce latency and improve performance, especially by taking advantage of data-level parallelism even more.
\section{Additional Figures}
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{gantt_actual}
\caption{Our actual progress on the lab}
\label{fig:gantt_actual}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.6]{AltFSM}
\caption{FSM capturing control logic for the alternative design.}
\label{fig:AltFSM}
\end{figure}
\end{document}
| {
"alphanum_fraction": 0.7353063777,
"avg_line_length": 67.3403508772,
"ext": "tex",
"hexsha": "3f3dd6c09eedc57d7aa191d3be6af4895da6ca9d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-11-10T00:12:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-11-10T00:12:24.000Z",
"max_forks_repo_head_hexsha": "5b0887c79de600aee433653a6006d381bc514298",
"max_forks_repo_licenses": [
"BSD-3-Clause",
"MIT"
],
"max_forks_repo_name": "vrg22/ece4750-lab5-mcore",
"max_forks_repo_path": "lab1/writeup/solutions.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5b0887c79de600aee433653a6006d381bc514298",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause",
"MIT"
],
"max_issues_repo_name": "vrg22/ece4750-lab5-mcore",
"max_issues_repo_path": "lab1/writeup/solutions.tex",
"max_line_length": 1017,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "5b0887c79de600aee433653a6006d381bc514298",
"max_stars_repo_licenses": [
"BSD-3-Clause",
"MIT"
],
"max_stars_repo_name": "vrg22/ece4750-lab5-mcore",
"max_stars_repo_path": "lab1/writeup/solutions.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-15T04:04:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-18T14:47:35.000Z",
"num_tokens": 4411,
"size": 19192
} |
\lab{Least squares and Eigenvalues}{Least squares and Eigenvalues}
\objective{Use least squares to fit curves to data and use QR decomposition to find eigenvalues}
\label{lab:givens}
\section*{Least Squares}
A linear system $A\x=\b$ is \emph{overdetermined} if it has no solutions.
In this situation, the \emph{least squares solution} is a vector $\widehat{\x}$ that is ``closest'' to a solution.
By definition, $\widehat{\x}$ is the vector such that $A\widehat{\x}$ will equal the projection of $\b$ onto the range of $A$.
We can compute $\widehat{\x}$ by solving the \emph{Normal Equation} $A\trp A\widehat{\x} = A\trp \b$ (see [TODO: ref textbook] for a derivation of the Normal Equation).
\subsection*{Solving the normal equation}
If $A$ is full rank, we can use its QR decomposition to solve the normal equation.
In many applications, $A$ is usually full rank, including when least squares is used to fit curves to data.
Let $A=QR$ be the QR decomposition of $A$, so $R = \left(\begin{array}{c}R_0\\
0\\ \end{array} \right)$
where $R_0$ is $n \times n$, nonsingular, and upper triangular.
It can be shown that $\widehat{\x}$ is the least squares solution to $A\x=\b$ if and only if $R_0\widehat{\x} = (Q\trp \b)[:n].$
Here, $(Q\trp \b)[:n]$ refers to the first $n$ rows of $Q\trp \b$.
Since $R$ is upper triangular, we can solve this equation quickly with back substitution.
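For illustration, a minimal sketch of this approach in SciPy is given below; it uses the reduced (``economic'') QR decomposition, which returns $R_0$ and lets us form the first $n$ entries of $Q\trp \b$ directly, so no explicit slicing is needed.
\begin{lstlisting}
import numpy as np
from scipy import linalg as la

def least_squares(A, b):
    # Solve the normal equation via the QR decomposition of A.
    Q, R = la.qr(A, mode='economic')   # R here plays the role of R_0
    return la.solve_triangular(R, Q.T.dot(b))
\end{lstlisting}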
\begin{problem}
Write a function that accepts a matrix $A$ and a vector $b$ and returns the least squares solution to $Ax=b$.
Use the QR decomposition as outlined above.
Your function should use SciPy's functions for QR decomposition and for solving triangular systems, which are \li{la.qr()} and \li{la.solve_triangular()}, respectively.
\end{problem}
\subsection*{Using least squares to fit curves to data}
The least squares solution can be used to find the curve of a chosen type that best fits a set of points.
\subsubsection*{Example 1: Fitting a line}
For example, suppose we wish to fit a general line $y=mx+b$ to the data set $\{(x_k, y_k)\}_{k=1}^n$.
When we plug the constants $(x_k, y_k)$ into the equation $y=mx+b$, we get a system of linear equations in the unknowns $m$ and $b$.
This system corresponds to the matrix equation
\[
\begin{pmatrix}
x_1 & 1\\
x_2 & 1\\
x_3 & 1\\
\vdots & \vdots\\
x_n & 1
\end{pmatrix}
\begin{pmatrix}
m\\
b
\end{pmatrix}=
\begin{pmatrix}
y_1\\
y_2\\
y_3\\
\vdots\\
y_n
\end{pmatrix}.
\]
Because this system has only two unknowns, it will typically have a solution when there are two or fewer equations.
In applications, there will usually be more than two data points, and these will probably not lie in a straight line, due to measurement error.
Then the system will be overdetermined.
The least squares solution to this equation will be a slope $\widehat{m}$ and $y$-intercept $\widehat{b}$ that produce a line $y = \widehat{m}x+\widehat{b}$ which best fits our data points.
%DO: spring constant as an example of this
% circle fit
% mention in this situation A will usually be full rank.
%todo: least squares and invertibiility.
Let us do an example with some actual data. Imagine we place different loads on a spring and measure the displacement, recording our results in the table below.
%TODO: get data points that are not so close to an actual line
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c}
displacement (cm)& 1.04 &2.03 &2.95 &3.92 &5.06 &6.00 &7.07 \\ \hline
load (dyne) & 3.11& 6.01& 9.07& 11.99 & 15.02& 17.91& 21.12\\
\end{tabular}
\end{table}
Hooke's law from physics says that the displacement $x$ should be proportional to the load $F$, or $F = kx$ for some constant $k$.
The equation $F=kx$ describes a line with slope $k$ and $F$-intercept 0.
So the setup is similar to the setup for the general line we discussed above, except we already know that $b=0$.
When we plug our seven data points $(x,F)$ pairs into the equation $F=kx$, we get seven linear equations in $k$, corresponding to the matrix equation
\[
\begin{pmatrix}
1.04\\
2.03\\
2.95\\
3.92\\
5.06\\
6.00\\
7.07\\
\end{pmatrix}
\begin{pmatrix}k\end{pmatrix} =
\begin{pmatrix}
3.11 \\
6.01\\
9.07\\
11.99\\
15.02\\
17.91\\
21.12\\
\end{pmatrix}.
\]
We expect such a linear system to be overdetermined, and in fact it is: the first equation is $1.04k = 3.11$, which implies $k=2.99$, but the second equation is $2.03k = 6.01$, which implies $k=2.96$.
We can't solve this system, but its least squares solution is a ``best'' choice for $k$.
We can find the least squares solution with the SciPy function \li{linalg.lstsq()}.
This function returns a tuple of several values, the first of which is the least squares solution.
\begin{lstlisting}
>>> A = np.vstack([1.04,2.03,2.95,3.92,5.06,6.00,7.07])
>>> b = np.vstack([3.11,6.01,9.07,11.99,15.02,17.91,21.12])
>>> k = la.lstsq(A, b)[0]
>>> k
array([[ 2.99568294]])
\end{lstlisting}
Hence, to two decimal places, $k = 3.00$.
We plot the data against the best-fit line with the following code, whose output is shown in Figure \ref{fig:spring_fit}.
\begin{lstlisting}
>>> from matplotlib import pyplot as plt
>>> x0 = np.linspace(0,8,100)
>>> y0 = k[0]*x0
>>> plt.plot(A,b,'*',x0,y0)
>>> plt.show()
\end{lstlisting}
\begin{figure}
\includegraphics[width=\textwidth]{line_lstsq}
\caption{The graph of the spring data together with its linear fit.}
\label{fig:spring_fit}
\end{figure}
%TODO: find more interesting data and make a sample plot
\begin{problem}
Load the \li{linepts} array from the file \texttt{data.npz}. The following code stores this array as \li{linepts}.
\begin{lstlisting}
linepts = np.load('data.npz')['linepts']
\end{lstlisting}
The \li{linepts} array has two columns corresponding to the $x$ and $y$ coordinates of some data points.
\begin{enumerate}
\item Use least squares to fit the line $y=mx+b$ to the data.
\item Plot the data and your line on the same graph.
\end{enumerate}
\end{problem}
%
%\section*{General Line Fitting}
%
%Suppose that we wish to fit a general line, that is $y=m x+b$, to the data set
%$\{(x_k,y_k)\}^n_{k=1}$. Assume that the line does not cross through the origin,
%as in the previous example. Then we seek both a slope and a $y$-intercept.
%In this case, we set up the following linear system $A x = b$, or more precisely
%\[
%\begin{pmatrix}
%x_1 & 1\\
%x_2 & 1\\
%x_3 & 1\\
%\vdots & \vdots\\
%x_n & 1
%\end{pmatrix}
%\begin{pmatrix}
%m\\
%b
%\end{pmatrix}=
%\begin{pmatrix}
%y_1\\
%y_2\\
%y_3\\
%\vdots\\
%y_n
%\end{pmatrix}.
%\]
%Note that $A$ has rank $2$ as long as not all of the $x_k$ values are the same.
%Hence, the least squares solution
%is given by
%$$
%\widehat{x} = (A^HA)^{-1}A^Hb.
%$$
%In what sense does this solution give us the best fit line for the data? Recall that since $A$ is injective,
%the matrix $A(A^HA)^{-1}A^H$ is an orthogonal projector onto the range of $A$, which means that
%$A(A^HA)^{-1}A^Hb = A\widehat{x}$ is the closest vector (with respect to the 2-norm) to $b$ that lies in the
%range of $A$. That is, $\widehat{x}$ minimizes the error between $Ax$ and $b$, where the error is given
%by the distance between these vectors, $\|b-Ax\|_2$. Another way to say this is that $\widehat{x}$ gives the
%values $m$ and $b$ for which the sum of the squares of the distances from each data point $y_k$ to the value
%$y = mx_k + b$ is as small as possible.
\subsubsection*{Example 2: Fitting a circle}
Now suppose we wish to fit a general circle to a data set $\{(x_k, y_k)\}_{k=1}^n$. Recall that the equation of a circle with radius $r$ and center $(c_1,c_2)$ is
\begin{equation}
\label{circle}
(x-c_1)^2 + (y-c_2)^2 = r^2.
\end{equation}
What happens when we plug a data point into this equation? Suppose $(x_k, y_k)=(1,2)$.
\footnote{You don't have to plug in a point for this derivation, but it helps us remember which symbols are constants and which are variables.} Then
\begin{equation}\label{equ:example}
5 = 2c_1+4c_2+(r^2-c_1^2-c_2^2).
\end{equation}
To find $c_1$, $c_2$, and $r$ with least squares, we need \emph{linear} equations.
But Equation \ref{equ:example} above is not linear because of the $r^2$, $c_1^2$, and $c_2^2$ terms.
We can do a trick to make this equation linear: create a new variable $c_3$ defined by $c_3 = r^2-c_1^2-c_2^2$.
Then Equation \ref{equ:example} becomes
\[
5=2c_1+4c_2+c_3,
\]
which \emph{is} linear in $c_1$, $c_2$, and $c_3$. Since $r^2 = c_3+c_1^2+c_2^2$, after solving for the new variable $c_3$ we can also find $r$.
For a general data point $(x_k, y_k)$, we get the linear equation
\[
2c_1x_k+2c_2y_k+c_3=x_k^2+y_k^2.
\]
Thus, we can find the best-fit circle from the least squares solution to the matrix equation
\begin{equation}\label{equ:circle_fit}
\begin{pmatrix}
2 x_1 & 2 y_1 & 1\\
2 x_2 & 2 y_2 & 1\\
\vdots & \vdots & \vdots \\
2 x_n & 2 y_n & 1
\end{pmatrix}
\begin{pmatrix}
c_1\\
c_2\\
c_3
\end{pmatrix}=
\begin{pmatrix}
x_1^2 + y_1^2\\
x_2^2 + y_2^2\\
\vdots\\
x_n^2 + y_n^2
\end{pmatrix}.
\end{equation}
If the least squares solution is $\widehat{c_1}, \widehat{c_2}$, $\widehat{c_3}$, then the best-fit circle is
\[
(x-\widehat{c_1})^2 + (y-\widehat{c_2})^2 = \widehat{c_3}+\widehat{c_1}^2+\widehat{c_2}^2.
\]
Let us use least squares to find the circle that best fits the following nine points:
%TODO: get data points that are not so close to an actual circle
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c}
$x$& 134 &104 &34 &-36 &-66 &-36 &34 &104 & 134 \\ \hline
$y$& 76& 146& 176& 146 & 76& 5& -24 & 5 & 76\\
\end{tabular}
\end{table}
We enter them into Python as a $9\times 2$ array.
\begin{lstlisting}
>>> P = np.array([[134,76],[104,146],[34,176],[-36,146],
[-66,76],[-36,5],[34,-24],[104,5],[134,76]])
\end{lstlisting}
We compute $A$ and $b$ according to Equation \ref{equ:circle_fit}.
\begin{lstlisting}
>>> A = np.hstack((2*P, np.ones((9,1))))
>>> b = (P**2).sum(axis=1)
\end{lstlisting}
Then we use SciPy to find the least squares solution.
\begin{lstlisting}
>>> c1, c2, c3 = la.lstsq(A, b)[0]
\end{lstlisting}
We can solve for $r$ using the relation $r^2 = c_3+c_1^2+c_2^2$.
\begin{lstlisting}
>>> r = np.sqrt(c1**2 + c2**2 + c3)
\end{lstlisting}
A good way to plot a circle is to use polar coordinates.
Using the same variables as before, the equation for a general circle is $x=r\cos(\theta)+c_1$ and $y=r\sin(\theta)+c_2$.
With the following code we plot the data points and our best-fit circle using polar coordinates.
The resulting image is Figure \ref{fig:circle}.
\begin{lstlisting}
# In the polar equations for a circle, theta goes from 0 to 2*pi.
>>> theta = np.linspace(0,2*np.pi,200)
>>> plt.plot(r*np.cos(theta)+c1,r*np.sin(theta)+c2,'-',P[:,0],P[:,1],'*')
>>> plt.show()
\end{lstlisting}
\begin{figure}
\includegraphics[width=\textwidth]{circle.pdf}
\caption{The graph of the some data and its best-fit circle.}
\label{fig:circle}
\end{figure}
\begin{comment}
\begin{problem}
Write a function \li{fitCircle} that does the following.
Load the \texttt{circlepts} array from \texttt{data.npz}.
This consists of two columns corresponding to the $x$ and $y$ values of a given
data set. Use least squares to find the center and radius of the circle that best
fits the data. Then plot the data points and the circle on the same graph.
The function should return nothing.
\end{problem}
\end{comment}
%TODO: figure out how to plot this problem
\begin{problem}
\leavevmode
\begin{enumerate}
\item Load the \texttt{ellipsepts} array from \texttt{data.npz}. This array has two columns corresponding to the $x$ and $y$ coordinates of some data points.
\item Use least squares to fit an ellipse to the data.
The general equation for an ellipse is
\[
ax^2 + bx + cxy + dy + ey^2 = 1.
\]
You should get $0.087$, $-0.141$, $0.159$, $-0.316$, $0.366$ for $a, b, c, d,$ and $e$ respectively.
%\item Plot the data and your line on the same graph.
\end{enumerate}
\end{problem}
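For reference, one possible way to set up the corresponding least squares system is sketched below (assuming the data is loaded as in part 1; this is one valid setup, not the only one).
\begin{lstlisting}
import numpy as np
from scipy import linalg as la

ellipsepts = np.load('data.npz')['ellipsepts']
x, y = ellipsepts[:,0], ellipsepts[:,1]
A = np.column_stack((x**2, x, x*y, y, y**2))
rhs = np.ones_like(x)
a, b, c, d, e = la.lstsq(A, rhs)[0]
\end{lstlisting}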
%TODO: keep this?
\begin{comment}
In these Least Squares problems, we have found best fit lines and ellipses relative to the 2-norm.
It is possible to generalize the idea of best fit curves relative to other norms.
See Figure \ref{Fig:ellipse} for an illustration of this.
\begin{figure}[h]
\label{ellipsefit}
\centering
\includegraphics[width=\textwidth]{ellipsefit.pdf}
\caption{Fitting an ellipse using different norms.}
\label{Fig:ellipse}
\end{figure}
\end{comment}
\begin{comment}
\section*{Loading Data from .npz Files}
For Least Squares problems as well as in many other contexts, loading data is often a necessary step before
proceeding with further analysis. Here we briefly review another data format in Python and the commands used
to load the data.
A \li{.npz} file is a compressed binary file that contains an archive of NumPy data structures.
A given file may therefore contain several arrays, each array associated with a unique string that identifies it.
When you load a \li{.npz} file in Python, a dictionary-like object is returned, and you can access the data by
providing the appropriate key. Note that when you load a \li{.npz} file, you must also be sure to close it when
you are finished. This is taken care of automatically if you use the \li{with ... as} keywords.
As an example, suppose that we have a file named \li{grades.npz} that contains several arrays, each giving the
homework scores of a particular student in a particular class. Assuming that one of the arrays is associated with
the key \li{'Abe'}, we can load this array in the following way:
\begin{lstlisting}
>>> with np.load('grades.npz') as grades:
>>> abe_grades = grades['Abe']
>>> abe_grades
array([ 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])
\end{lstlisting}
You will need to apply this technique in the next problem.
\end{comment}
\section*{Computing eigenvalues}
The eigenvalues of a matrix are the roots of its characteristic polynomial.
Thus, to find the eigenvalues of an $n \times n$ matrix, we must compute the roots of a degree-$n$ polynomial.
This is easy for small $n$.
For example, if $n=2$ the quadratic equation can be used to find the eigenvalues.
However, Abel's Impossibility Theorem says that no such formula exists for the roots of a polynomial of degree 5 or larger.
\begin{theorem}[Abel's Impossibility Theorem]
There is no general algebraic solution for solving a polynomial equation of degree $n\geq5$.
\label{thm:Abel}
\end{theorem}
Thus, it is impossible to write an algorithm that will exactly find the eigenvalues of an arbitrary matrix.
(If we could write such an algorithm, we could also use it to find the roots of polynomials, contradicting Abel's theorem.)
This is a significant result.
It means that we must find eigenvalues with \emph{iterative methods}, methods that generate sequences of approximate values converging to the true value.
\subsection*{The power method}
There are many iterative methods for finding eigenvalues.
The power method finds an eigenvector corresponding to the \emph{dominant} eigenvalue of a matrix, if such an eigenvalue exists.
The dominant eigenvalue of a matrix is the unique eigenvalue of greatest magnitude.
To use the power method on a matrix $A$, begin by choosing a vector $\x_0$ such that $\|\x_0\|=1$. Then recursively define
\[
\x_{k+1}=\frac{A\x_k}{\norm{A\x_k}}.
\]
If
\begin{itemize}
\item $A$ has a dominant eigenvalue $\lambda$, and
\item the projection of $\x_0$ into the subspace spanned by the eigenvectors corresponding to $\lambda$ is nonzero,
\end{itemize}
then the vectors $\x_0, \x_1, \x_2, \ldots$ will converge to an eigenvector of $A$ corresponding to $\lambda$.
(See [TODO: ref textbook] for a proof when $A$ is semisimple, or [TODO: ref something else] for a proof in the general case.)
If all entries of $A$ are positive, then $A$ will always have a dominant eigenvalue (see [TODO: ref something!] for a proof).
There is no way to guarantee that the second condition is met, but if we choose $\x_0$ randomly, it will almost always satisfy this condition.
Once you know that $\x$ is an eigenvector of $A$, the corresponding eigenvalue is equal to the \emph{Rayleigh quotient}
\[
\lambda = \frac{\langle A\x, \x \rangle}{\|\x\|^2}.
\]
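A bare-bones sketch of this iteration is given below (illustrative only; the problem below specifies the full requirements). It assumes $A$ has a positive dominant eigenvalue, as is the case for positive matrices.
\begin{lstlisting}
import numpy as np
from scipy import linalg as la

def power_method(A, tol=1e-8):
    # Iterate x_{k+1} = A x_k / ||A x_k|| until successive iterates agree.
    x = np.random.rand(A.shape[0])
    x /= la.norm(x)
    while True:
        y = A.dot(x)
        y /= la.norm(y)
        if la.norm(y - x) < tol:
            break
        x = y
    # The Rayleigh quotient recovers the corresponding eigenvalue.
    eigval = np.inner(A.dot(y), y) / np.inner(y, y)
    return y, eigval
\end{lstlisting}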
\begin{problem}
Write a function that implements the power method to compute an eigenvector. Your function should
\begin{enumerate}
\item Accept a matrix and a tolerance \li{tol}.
\item Start with a random vector.
\item Use the 2-norm wherever a norm is needed (use \li{la.norm()}).
\item Repeat the power method until the vector changes by less than the tolerance. In mathematical notation, you are defining $x_0, x_1, \ldots x_k$, and your function should stop when $\|x_{k+1}-x_k\| < \text{tol}$.
\item Return the found eigenvector and the corresponding eigenvalue (use \li{np.inner()}).
\end{enumerate}
Test your function on positive matrices.
\end{problem}
\begin{comment}
An overview of the proof of the method is that you can write a matrix in Jordan Conical form $A=VJV^{-1}$ where $V$ is the matrix of the generalized eigenspaces.
But the first column is is the eigenvector corresponding to largest eigenvalue and $J$ is a upper trianglar matrix of eigenvalues and ones.
Note that $A^k=VJ^kV^{-1}$. The limit as $k \rightarrow \infty$ of $(\frac{1}{\lambda_1}J)^k$ is a matrix of all zeros except for a one in the upper right hand corner.
So $(\frac{A}{\norm{A}})^k \approx VJ^kV^{-1}$ So the largest eigenvalue dominates.
\end{comment}
\subsection*{The QR algorithm}
The disadvantage of the power method is that it only finds an eigenvector corresponding to the dominant eigenvalue; the QR algorithm, by contrast, approximates all of the eigenvalues of a matrix.
To use the QR algorithm, let $A_0=A$. Then let $Q_kR_k$ be the QR decomposition of $A_k$, and recursively define
\[
A_{k+1}=R_kQ_k.
\]
Then $A_0, A_1, A_2, \ldots $ will converge to a matrix of the form
\begin{equation*}
\label{eq:Schur form}
S =
\begin{pmatrix}
S_1 &* & \cdots & * \\
0 &S_2 & \ddots & \vdots \\
\vdots & \ddots & \ddots & * \\
0 & \cdots & 0 & S_m
\end{pmatrix}
\end{equation*}
where $S_i$ is a $1\times1$ or $2\times2$ matrix.\footnote{If $S$ is upper triangular (i.e., all $S_i$ are $1\times1$ matrices), then $S$ is the \emph{Schur form} of $A$.
If some $S_i$ are $2\times2$ matrices, then $S$ is the \emph{real Schur form} of $A$.}
The eigenvalues of $A$ are the eigenvalues of the $S_i$.
This algorithm works for three reasons. First,
\[
Q_k^{-1}A_kQ_k = Q_k^{-1}(Q_kR_k)Q_k = (Q_k^{-1}Q_k)(R_kQ_k) = A_{k+1},
\]
so $A_k$ is similar to $A_{k+1}$.
Because similar matrices have the same eigenvalues, $A_k$ has the same eigenvalues as $A$.
Second, each iteration of the algorithm transfers some of the ``mass'' from the lower to the upper triangle.
This is what makes $A_0, A_1, A_2, \ldots$ converge to a matrix $S$ which has the described form.
Finally, since $S$ is block upper triangular, its eigenvalues are just the eigenvalues of its diagonal blocks (the $S_i$).
A $2 \times 2$ block will occur in $S$ when $A$ is real but has complex eigenvalues.
In this case, the complex eigenvalues occur in conjugate pairs, each pair corresponding to a $2 \times 2$ block on the diagonal of $S$.
\subsubsection*{Hessenberg preconditioning}
Often, we ``precondition'' a matrix by putting it in upper Hessenberg form before passing it to the QR algorithm.
This is always possible because every matrix is similar to an upper Hessenberg matrix (see Lab \ref{}).
Hessenberg preconditioning is done for two reasons.
First, the QR algorithm converges much faster on upper Hessenberg matrices because they are already close to triangular matrices.
Second, an iteration of the QR algorithm can be computed in $\mathcal{O}(n^2)$ time on an upper Hessenberg matrix, as opposed to $\mathcal{O}(n^3)$ time on a regular matrix.
This is because so many entries of an upper Hessenberg matrix are 0.
If we apply the QR algorithm to an upper Hessenberg matrix $H$, then this speed-up happens in each iteration of the algorithm, since if $H = QR$ is the QR decomposition of $H$ then $RQ$ is also upper Hessenberg.
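To make the procedure concrete, a rough sketch of the preconditioned iteration itself (without the final extraction of eigenvalues from the diagonal blocks) is given below.
\begin{lstlisting}
import numpy as np
from scipy import linalg as la

def qr_iterations(A, niter=100):
    # Precondition with the Hessenberg form, then iterate A_{k+1} = R_k Q_k.
    H = la.hessenberg(A)
    for _ in range(niter):
        Q, R = la.qr(H)
        H = R.dot(Q)
    return H    # approximately the (real) Schur form S
\end{lstlisting}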
\begin{problem}
Write a function that implements the QR algorithm with Hessenberg preconditioning as described above.
Do this as follows.
\begin{enumerate}
\item Accept a matrix \li{A}, a number of iterations \li{niter}, and a tolerance \li{tol}
\item Put \li{A} in Hessenberg form using \li{la.hessenberg()}.
\item Compute the matrix $S$ by performing the QR algorithm \li{niter} times.
Use the function \li{la.qr()} to compute the QR decomposition.
\item Iterate through the diagonal of $S$ from top to bottom to compute its eigenvalues.
For each diagonal entry,
\begin{enumerate}
\item If this is the last diagonal entry, then it is an eigenvalue.
\item If the entry below this one has absolute value less than \li{tol}, assume this is a $1\times 1$ block.
Then the current entry is an eigenvalue.
\item Otherwise, the current entry is at the top left corner of a $2 \times 2$ block.
Calculate the eigenvalues of this block.
Use the \li{sqrt} function from the scimath library to find the square root of a negative number.
You can import this library with the line \li{from numpy.lib import scimath}.
\end{enumerate}
\item Return the (approximate) eigenvalues of \li{A}.
\end{enumerate}
You can check your function on the matrix
\[
\begin{pmatrix}
4 & 12 & 17 & -2 \\
-5.5& -30.5 & -45.5 & 9.5\\
3. & 20. & 30. & -6. \\
1.5 & 1.5& 1.5& 1.5
\end{pmatrix},
\]
which has eigenvalues $1+2i, 1-2i, 3$, and 0. You can also check your function on random matrices against \li{la.eig()}.
\label{prob:qr_solver}
\end{problem}
\begin{comment}
\begin{problem}
\label{prob:QR_eig_hessenberg}
Write a version of the QR algorithm that performs the QR algorithm by computing the Hessenberg form of a matrix, then computing various QR decompositions of the Hessenberg form of the matrix.
Use your solutions to \ref{prob:hessenberg} (where you computed the Hessenberg form of a matrix) and Problem \ref{prob:givens_hessenberg_modified} to do the necessary computations (where you computed the QR decomposition of a Hessenberg matrix and wrote code for multiplication by $Q$ that works in $\mathcal{O} \left( n^2 \right)$ time).
The solution to Problem \ref{prob:givens_hessenberg_modified} is especially important because it allows the compution of each QR decomposition and each $R Q = \left( Q^T R^T \right)$ in $\mathcal{O} \left( n^2 \right)$ time.
\end{problem}
\end{comment}
\begin{comment}
\begin{problem}
If $A$ is normal, its Schur form is diagonal.
For normal $A$, have your function additionally output the eigenvector corresponding to each eigenvalue.
Hint 1: Test your function on Hermitian and real symmetric matrices; they are both normal.
Hint 2: Your work in Problem \ref{problem:similarity proof} will help.
You have already made all the necessary calculations, you just need to store the information correctly.
\end{problem}
\end{comment}
\begin{comment}
\begin{problem}
Test your implementation with random matrices.
Try real-valued and symmetric matrices.
Compare your output to the output from the eigenvalue solver.
How many iterations are necessary?
How large can $A$ be?
\end{problem}
\end{comment}
The QR algorithm as described in this lab is not often used.
Instead, modern computer packages use the implicit QR algorithm, which is an improved version of the QR algorithm.
Lastly, iterative methods besides the power method and QR method are often used to find eigenvalues.
Arnoldi iteration is similar to the QR algorithm but exploits sparsity.
Other methods include the Jacobi method and the Rayleigh quotient method.
| {
"alphanum_fraction": 0.7206649421,
"avg_line_length": 42.9526411658,
"ext": "tex",
"hexsha": "fbc76771ee365061695a1c3053d0a6a675ee1a56",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "jessicaleete/numerical_computing",
"max_forks_repo_path": "Labs/LeastSquaresEigs/LstsqEigs.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "jessicaleete/numerical_computing",
"max_issues_repo_path": "Labs/LeastSquaresEigs/LstsqEigs.tex",
"max_line_length": 338,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cc71f51f35ca74d00e617af3d1a0223e19fb9a68",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "jessicaleete/numerical_computing",
"max_stars_repo_path": "Labs/LeastSquaresEigs/LstsqEigs.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 7210,
"size": 23581
} |
\documentclass[12pt,oneside,a4paper,english]{article}
\usepackage[T1]{fontenc}
\usepackage[latin2]{inputenc}
\usepackage[margin=2.25cm,headheight=26pt,includeheadfoot]{geometry}
\usepackage[english]{babel}
\usepackage{listings}
\usepackage{color}
\usepackage{titling}
\usepackage[framed, numbered]{matlab-prettifier}
\usepackage{changepage}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{enumitem}
\usepackage{graphicx}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{caption}
\usepackage{tocloft}
\usepackage{setspace}
\usepackage{multirow}
\usepackage{float}
\usepackage{comment}
\usepackage{booktabs}
\usepackage{indentfirst}
\usepackage{lscape}
\usepackage[flushleft]{threeparttable}
\usepackage[english]{nomencl}
\usepackage{xcolor}
\usepackage{lipsum}
% --- set footer and header ---
\pagestyle{fancy}
\fancyhf{}
\setlength{\parindent}{2em}
\title{Channel Charting Report} % to reference as \title, don't use \maketitle
\lstset{language=Matlab,
style=Matlab-editor,
basicstyle=\normalsize\mlttfamily,
numbers=left,
numberstyle={\scriptsize\color{black}}, % size of the numbers
numbersep=0.5cm
}
\newlist{steps}{enumerate}{1}
\setlist[steps, 1]{leftmargin=1.5cm,label = Step \arabic*:}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{1pt}
%\lhead{\Title}
\rhead{\nouppercase{\rightmark}}
\lhead{\Title}
\rfoot{\includegraphics[height=1.25cm]{root/logo.pdf}} % right header logo
\setlength\headheight{16pt}
\setlength{\footskip}{50pt}
\cfoot{\thepage}
% --- End of page settings ---
\begin{document}
\pagenumbering{roman}
\input{sources/frontpage.tex}
\newpage
\doublespacing
%\addcontentsline{toc}{section}{Table of Contents}
\renewcommand{\baselinestretch}{1}\normalsize
\tableofcontents
\renewcommand{\baselinestretch}{1}\normalsize
%\singlespacing
\thispagestyle{fancy} % force page style
\newpage
\pagenumbering{arabic}
\fancyfoot[C]{Page \thepage\ of \pageref{EndOfText}}
\section{Introduction} \label{ch1}
\input{sources/introduction.tex}
\pagebreak
\section{Theory}
\input{sources/Theory.tex}
\pagebreak
\section{Data Exploration}
\input{sources/DataExploration.tex}
\pagebreak
\section{Methods}
\input{sources/Methods.tex}
\pagebreak
\section{Results}
\input{sources/Results.tex}
\label{EndOfText}
\newpage
\addcontentsline{toc}{section}{References}
\bibliography{document.bib}
\bibliographystyle{ieeetr}
\newpage
\section{Appendix A} \label{ch6}
\label{endOfDoc}
\end{document}
| {
"alphanum_fraction": 0.774498229,
"avg_line_length": 22.2894736842,
"ext": "tex",
"hexsha": "a0d0484df41dd4d72dad5dd98977190da41a67e6",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c09843c8b3b0265929b16439afa4b84edb491ea",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "evangstav/Channel_Charting",
"max_forks_repo_path": "reports/main_report/main.tex",
"max_issues_count": 19,
"max_issues_repo_head_hexsha": "5c09843c8b3b0265929b16439afa4b84edb491ea",
"max_issues_repo_issues_event_max_datetime": "2022-03-12T00:15:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-03-24T18:10:34.000Z",
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "evangstav/Channel_Charting",
"max_issues_repo_path": "reports/main_report/main.tex",
"max_line_length": 77,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "5c09843c8b3b0265929b16439afa4b84edb491ea",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "evangstav/Channel_Charting",
"max_stars_repo_path": "reports/main_report/main.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-22T07:52:37.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-06-29T12:09:00.000Z",
"num_tokens": 818,
"size": 2541
} |
\documentclass[10pt,conference,letterpaper]{IEEEtran}
\usepackage{times,amsmath,amssymb,mathrsfs,bm}
\usepackage{epsfig,graphics,subfigure}
\usepackage{multirow,url}
\usepackage[usenames,dvipsnames]{color}
\def \st {\small \tt}
\title{The Kaldi Speech Recognition Toolkit}
%
\makeatletter
\def\author#1{\gdef\@author{#1\\}}
\makeatother
\author{{Daniel Povey}$^1$, {Arnab Ghoshal}$^2$, \\
{Gilles Boulianne}$^3$, {Luk\'{a}\v{s} Burget}$^{4,5}$, {Ond\v{r}ej
Glembek}$^4$, {Nagendra Goel}$^6$, {Mirko Hannemann}$^4$, \\
{Petr Motl\'{i}\v{c}ek}$^7$, {Yanmin Qian}$^8$, {Petr Schwarz}$^4$,
{Jan Silovsk\'{y}}$^9$, {Georg Stemmer}$^{10}$, {Karel Vesel\'{y}}$^4$
\vspace{1.6mm}\\
\fontsize{10}{10}\selectfont\itshape
$^1$\,Microsoft Research, USA, {\st [email protected]}; \\
$^2$\,Saarland University, Germany, {\st [email protected]}; \\
$^3$\,Centre de Recherche Informatique de Montr\'{e}al, Canada;
$^4$\,Brno University of Technology, Czech Republic; \\
$^5$\,SRI International, USA;
$^6$\,Go-Vivace Inc., USA; $^7$\,IDIAP Research Institute, Switzerland;
$^8$\,Tsinghua University, China; \\
$^9$\,Technical University of Liberec, Czech Republic;
$^{10}$\,University of Erlangen-Nuremberg, Germany
}
% \author{%
% {First~Author{\small $~^{\#1}$}, Second~Author{\small $~^{*2}$} }%
% \vspace{1.6mm}\\
% \fontsize{10}{10}\selectfont\itshape
% $^{\#}$\,First Author Affiliation\\
% City, Country\\
% \fontsize{9}{9}\selectfont\ttfamily\upshape
% $^{1}$\,[email protected]%
% \vspace{1.2mm}\\
% \fontsize{10}{10}\selectfont\rmfamily\itshape
% $^{*}$\,Second Author Affiliation\\
% City, Country\\
% \fontsize{9}{9}\selectfont\ttfamily\upshape
% $^{2}$\,[email protected]%
% }
\begin{document}
\maketitle
%
\begin{abstract}
We describe the design of Kaldi, a free, open-source toolkit for speech
recognition research. Kaldi provides a speech recognition system based on
finite-state transducers (using the freely available OpenFst), together with
detailed documentation and scripts for building complete
recognition systems. Kaldi is written in C++, and the core library supports
modeling of arbitrary phonetic-context sizes, acoustic modeling with subspace
Gaussian mixture models (SGMM) as well as standard Gaussian mixture models,
together with all commonly used linear and affine transforms. Kaldi is released
under the Apache License v2.0, which is highly nonrestrictive, making it
suitable for a wide community of users.
\end{abstract}
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Introduction}
\label{sec:intro}
Kaldi\footnote{According to legend, Kaldi was the Ethiopian goatherd who
discovered the coffee plant.} is an open-source toolkit for speech recognition
written in C++ and licensed under the Apache License v2.0.
% It is intended for use by speech recognition researchers.
The goal of Kaldi is to have modern and flexible code that is easy to
understand, modify and extend. Kaldi is available on SourceForge (see
http://kaldi.sf.net/). The tools compile on the commonly used Unix-like
systems and on Microsoft Windows.
Researchers on automatic speech recognition (ASR) have several potential
choices of open-source toolkits for building a recognition system. Notable
among these are: HTK \cite{htkbook}, Julius \cite{julius} (both written in C),
Sphinx-4 \cite{sphinx} (written in Java), and the RWTH ASR toolkit \cite{rwth}
(written in C++). Yet, our specific requirements (a finite-state transducer
(FST) based framework, extensive linear algebra support, and a non-restrictive
license) led to the development of Kaldi. Important features of Kaldi include:
% The work on Kaldi started during the 2009 Johns Hopkins University summer
% workshop,
% % project titled ``Low Development Cost, High Quality Speech Recognition
% % for New Languages and Domains,''
% where we were working on acoustic modeling
% using subspace Gaussian mixture model (SGMM) \cite{sgmm_csl} and automated
% lexicon learning.
\paragraph*{Integration with Finite State Transducers} We compile against the
OpenFst toolkit \cite{openfst} (using it as a library).
\paragraph*{Extensive linear algebra support} We include a matrix library that wraps
standard BLAS and LAPACK routines.
\paragraph*{Extensible design} We attempt to provide our algorithms in the most
generic form possible. For instance, our decoders work with an interface that
provides a score for a particular frame and FST input symbol. Thus the decoder
could work from any suitable source of scores.
\paragraph*{Open license} The code is licensed under Apache v2.0, which is one
of the least restrictive licenses available.
\paragraph*{Complete recipes} We make available complete recipes for
building speech recognition systems, that work from widely available databases
such as those provided by the Linguistic Data Consortium (LDC).
\paragraph*{Thorough testing}
The goal is for all or nearly all the code to have corresponding test routines.
The main intended use for Kaldi is acoustic modeling research; thus, we view
the closest competitors as being HTK and the RWTH ASR toolkit (RASR). The chief
advantage versus HTK is modern, flexible, cleanly structured code and better
WFST and math support; also, our license terms are more open than either HTK
or RASR.
The paper is organized as follows: we start by describing the structure of the
code and design choices (section \ref{sec:flavor}). This is followed by
describing the individual components of a speech recognition system that the
toolkit supports: feature extraction (section \ref{sec:feats}), acoustic
modeling (section \ref{sec:am}), phonetic decision trees (section
\ref{sec:tree}), language modeling (section \ref{sec:lm}), and decoders
(section \ref{sec:decoder}). Finally, we provide some benchmarking results in
section \ref{sec:expt}.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Overview of the toolkit}
\label{sec:flavor}
We give a schematic overview of the Kaldi toolkit in figure \ref{fig:kaldi-lib}.
The toolkit depends on two external libraries that are also freely available:
one is OpenFst \cite{openfst} for the finite-state framework, and the other is
numerical algebra libraries. We use the standard ``Basic Linear Algebra
Subroutines'' (BLAS)%\footnote{http://www.netlib.org/blas/}%\cite{blas}
and ``Linear Algebra PACKage'' (LAPACK)\footnote{Available from: http://www.netlib.org/blas/ and http://www.netlib.org/lapack/ respectively.}
%\cite{lapack}
routines for the latter.
% purpose, the details of which are described in section \ref{sec:matrix}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\columnwidth]{figs/kaldi-lib}\\
\caption{A simplified view of the different components of Kaldi. The
library modules can be grouped into those that depend on linear algebra
libraries and those that depend on OpenFst. The {\em decodable} class
bridges these two halves. Modules that are lower down in the schematic
depend on one or more modules that are higher up.}
\label{fig:kaldi-lib}
\end{center}
\end{figure}
% We aim for the toolkit to be as loosely coupled as possible to make it easy
% to reuse and refactor. This is reflected in the structure of the toolkit, where
The library modules can be grouped into two distinct halves, each depending
on only one of the external libraries (c.f. Figure \ref{fig:kaldi-lib}). A
single module, the {\st DecodableInterface} (section \ref{sec:decoder}),
bridges these two halves.
Access to the library functionalities is provided through command-line tools
written in C++, which are then called from a scripting language for building
and running a speech recognizer.
% While this is similar to the traditional approach followed in several toolkits (e.g. HTK), the Kaldi approach differs fundamentally in how we view the tools.
Each tool has very specific functionality with a small set of command line
arguments: for example, there are separate executables for accumulating
statistics, summing accumulators, and updating a GMM-based acoustic model using
maximum likelihood estimation.
% As such the code for the executables tend to be very simple with only a small set of command line arguments.
Moreover, all the tools can read from and write to pipes, which makes it
easy to chain together different tools.
To avoid ``code rot'', we have tried to structure the toolkit in such a way that
implementing a new feature will generally involve adding new code and
command-line tools rather than modifying existing ones.
% An approach that has recently become popular is to have a scripting
% language such as Python call the C++ code directly, and to have the outer-level
% control flow and system design done in this scripting language. This is
% the approach used in IBM's Attila toolkit~\cite{attila}. The design of
% Kaldi does not preclude doing this in future, but for now we have avoided
% this approach because it requires users to be proficient in two different
% programming languages.
%A final point to mention is that we tend to prefer provably correct
%algorithms. There has been an effort to avoid recipes and algorithms that
%could possibly fail, even if they don't fail in the ``normal case'' (for
%example, FST weight-pushing, which normally helps but which can fail or make things
%much worse in certain cases).
% % -------------------------------------------------------------------------
% % -------------------------------------------------------------------------
% \section{The Kaldi Matrix Library}
% \label{sec:matrix}
% The Kaldi matrix library provides C++ classes for vector and different types of
% matrices (description follows), as well as methods for linear algebra routines,
% particularly inner and outer products, matrix-vector and matrix-matrix
% products, matrix inversions and various matrix factorizations like Cholesky
% and singular value decomposition (SVD) that are required by
% various parts of the toolkit. The library avoids operators in favor of function
% calls, and requires the matrices and vectors to have correct sizes instead of
% automatically resizing the outputs.
% The matrix library does not call the Fortran interface of BLAS and LAPACK
% directly, but instead calls their C-language interface in form of CBLAS and
% CLAPACK. In particular, we have tested Kaldi using the ``Automatically Tuned
% Linear Algebra Software'' (ATLAS) \cite{atlas} library, the
% Intel Math Kernel Library (MKL) library, and the Accelerate
% Framework on OS X. It is possible to compile Kaldi with only ATLAS, even though
% ATLAS does not provide some necessary LAPACK routines, like SVD and eigenvalue
% decompositions. For those routines we use a C++ implementation of code from the
% ``Java Matrix Package'' (JAMA) \cite{jama} project.
% % -------------------------------------------------------------------------
% \subsection{Matrix and vector types}
% The matrix library defines the basic {\st Vector} and {\st Matrix} classes,
% which are templated on the floating point precision type ({\st float} or {\st
% double}).
% {\scriptsize
% \begin{verbatim}
% template<typename Real> class Vector;
% template<typename Real> class Matrix;
% \end{verbatim}}
% % // Symmetric packed matrix class:
% % template<typename Real> class SpMatrix;
% % // Triangular packed matrix class:
% % template<typename Real> class TpMatrix;
% The {\st Matrix} class corresponds to the general matrix (GE) in BLAS and
% LAPACK. There are also special classes for symmetric matrices ({\st SpMatrix})
% and triangular matrices ({\st TpMatrix}) (for Cholesky factors). Both of these
% are represented in memory as a ``packed'' lower-triangular matrix. Currently,
% we only support linear algebra operations with real numbers, although it is
% possible to extend the functionality to include complex numbers as well, since
% the underlying BLAS and LAPACK routines support complex numbers also.
% For matrix operations that involve only part of a vector or matrix, the {\st
% SubVector} and {\st SubMatrix} classes are provided, which are treated as a
% pointer into the underlying vector or matrix. These inherit from a common base
% class of {\st Vector} and {\st Matrix}, respectively, and can be used in any
% operation that does not involve memory allocation or deallocation.
% % -------------------------------------------------------------------------
% % -------------------------------------------------------------------------
% \section{Finite State Transducer (FST) library}
% \label{sec:fstext}
% We compile and link against an OpenFst~\cite{openfst}, which is an open-source
% weighted finite-state transducer library. Both our training and decoding code
% accesses WFSTs, which are simply OpenFst's C++ objects (we will sometimes
% refer to these just as FSTs).
% We also provide code for various extensions to the OpenFst library, such as a
% constructed-on-demand context FST ($C$) which allows our toolkit to work
% efficiently for wide phonetic context. There are also different versions of or
% extensions to some of the fundamental FST algorithms such as determinization,
% which we implement with epsilon removal and mechanisms to preserve stochasticity
% (discussed in Section~\ref{sec:graphs}); minimization, which we modify to work
% with non-deterministic input; and composition, where we provide a more efficient
% version. We provide command-line tools with interfaces similar to OpenFst's
% command line tools, to allow these algorithms to be used from the shell.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
% \section{Finite State Transducer Algorithms}
% \label{sec:fstext}
% % -------------------------------------------------------------------------
% \subsection{Determinization}
% We use a different determinization algorithm from the one in OpenFst. Our
% determinization algorithm is actually closer to the standard FST
% determinization algorithm than the one in OpenFst, in that it does epsilon
% removal along with determinization (thus, like many other FST algorithms, we do
% not consider epsilon to be a ``real symbol'').
% Our determinization algorithm has a different way of handling what happens when
% a transition in the initial determinized output has more than one output symbol
% on it. The OpenFst determinization algorithm uses a function that moves the
% output symbols (encoded as weights) around while maintaining equivalence,
% in order to ensure that no arc has more than one (encoded) output symbol; it
% does not introduce new states with epsilon symbols on their input, which is the
% most ``obvious'' thing to do. However, this algorithm can fail for the output
% of our determinization algorithm, because there can be cycles with more outputs
% than states on the cycle (because it does epsilon removal). Instead, whenever
% we encounter a link with more than one output symbols, we create a chain with a
% sufficient number of intermediate states to accommodate all the output symbols.
% The weight and input symbol go on the first link of this chain. The output of
% our DeterminizeStar algorithm is deterministic according to the definitions
% OpenFst uses, i.e. treating epsilons as a normal symbols. Its output does have
% epsilons on the input side, which is against the normal definition of
% determinism, but this is to be viewed as an encoding mechanism for allowing
% more than one output symbol on a link, and in any case it only happens in quite
% specific circumstances (an epsilon arc is always the only arc out of a state).
% % One other difference is that our program fstdeterminizestar does not require
% % the input FST to have its output symbols encoded as weights.
% We supply a function which casts an FST in the tropical semiring to the log
% semiring before determinizing, and then converts back. This is the form of
% determinization used in our algorithms, as it preserves stochasticity (see
% Section \ref{sec:fst:stoch}).
% % -------------------------------------------------------------------------
% \subsection{Removing epsilons}
% We supply an epsilon-removal algorithm called {\st RemoveEpsLocal()} that is
% guaranteed to never blow up the FST, but on the other hand is not guaranteed to
% remove all epsilons. This function has slightly different behaviour from
% OpenFst's {\st RemoveEps()} function, because it will combine two arcs if one
% has an input epsilon and one an output epsilon. The function preserves FST
% equivalence.
% There is also a function {\st RemoveEpsLocalSpecial()} that preserves
% equivalence in the tropical semiring while preserving stochasticity in the log
% semiring (for more on stochasticity see next section). This is a case where the
% usefulness of the semiring formalism breaks down a little bit, as we have to
% consider two semirings simultaneously.
% % -------------------------------------------------------------------------
% \subsection{Preserving stochasticity and testing it}
% \label{sec:fst:stoch}
% We define a stochastic FST as an FST in which, in the FST's semiring the sum of
% the weights of the arcs out of any given state (plus the final-weight) equals
% one (in the semiring). This concept is most useful and natural in the log
% semiring; essentially, a stochastic FST is one where the sum of the weights out
% of a given arc is one (e.g. a properly normalized HMM would correspond to a
% stochastic FST).
% We aim for most of the FST algorithms we use to preserve stochasticity, in the
% sense that they will produce stochastic outputs given stochastic inputs. For
% non-stochastic inputs, we aim that the minimum and maximum range of the weights
% will not get larger. In order to preserve stochasticity, the FSTs that we
% compose with have to have certain properties. Consider L, for example.
% We require that for any linear FST corresponding to a path through G, call this
% FST F, the product L o F must be stochastic. This basically means that L have
% properly normalized pronunciation probabilities. The actual property formally
% required may be a little stronger than this (this relates to ensuring that the
% probabilities appear at the ``right time'').
% % -------------------------------------------------------------------------
% \subsection{Minimization}
% We use the minimization algorithm supplied by OpenFst, but we apply a patch
% before compiling OpenFst so that minimization can be applied to
% non-deterministic FSTs. The reason for this is so that we can remove
% disambiguation symbols before minimizing, which is more optimal (it allows
% minimization to combine more states). Fundamentally, OpenFst's minimization
% algorithm is applicable to non-deterministic FSTs. This is
% the same thing the fstminimize program does except that it does not do weight
% pushing. This is desirable for us because the way we ensure stochasticity
% entails avoiding any weight pushing.
% % -------------------------------------------------------------------------
% \subsection{Composition}
% For the most part we use OpenFst's own composition algorithms, but we do make
% use of a more efficient composition algorithm for certain common cases. It uses
% the ``Matcher'' concept of OpenFst; a Matcher is a kind of helper class used
% during composition that performs lookup on the arcs out of a state to find any
% arcs with a particular input or output symbol. The normal matcher that OpenFst
% uses is SortedMatcher, which relies on arcs being sorted on the relevant label,
% and does a binary search. TableMatcher detects cases where it would be
% efficient to build a table indexed by label, and for those states it avoids the
% overhead of binary search. This leads to a speedup when composing with things
% like lexicons that have a very high out-degree.
% % -------------------------------------------------------------------------
% \subsection{Adding and removing disambiguation symbols}
% Our FST recipes (like other transducer-based recipes) rely on disambiguation
% symbols. In the normal recipes, these are added to the input side of the
% lexicon FST (L) to make it determizable. We also add disambiguation symbols to
% G and C (see Disambiguation symbols). Whenever we do a composition and the FST
% on the right has disambiguation symbols on its input, we (in theory) add to
% each state in the left-hand FST a self-loop for each of the disambiguation
% symbols, which has that symbol on both its input and output. Note that the
% actual integer symbols id's for the disambiguation symbols on the left and
% right may not be the same. For instance, we have a special symbol \#0 in G
% (where epsilon would normally be). The symbol-id for this would generally be
% the highest-numbered word plus one. But when we want to pass this symbol
% through L, we need a symbol in L's input symbol table (which mainly contains
% phones), to represent \#0. We have a function AddSelfLoops() that takes a
% mutable FST and two vectors of labels (a label is an integer id for a symbol).
% The vectors are the same size, and represent corresponding input and output
% labels for the disambiguation symbols. This function adds self-loops to each
% final state and each state with non-epsilon output symbols on at least one arc
% out of it.
% We remove disambiguation symbols with the function DeleteISymbols(), accessible
% on the command line with the program fstrmsymbols.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Feature Extraction}
\label{sec:feats}
Our feature extraction and waveform-reading code aims to create standard MFCC
and PLP features, setting reasonable defaults but leaving available the options
that people are most likely to want to tweak (for example, the number of mel
bins, minimum and maximum frequency cutoffs, etc.). We support most commonly
used feature extraction approaches: e.g. VTLN, cepstral mean and variance normalization,
LDA, STC/MLLT, HLDA, and so on.
%The feature extraction
%pipeline is implemented as a series of functions, each of which output a
%matrix of floating point numbers and take a matrix as input, except the
%windowing function, which reads the waveform samples as a vector.
% The windowing function can optionally dither (add random Gaussian noise to)
% the waveform, remove DC offset and pre-emphasize the higher frequencies.
% Our FFT implementation~\cite{rico_book} works for window lengths that are not
% powers of 2, and we also provide an implementation of split-radix FFT for
% 0-padded windows whose lengths are powers of 2.
% We support cepstral liftering
% and optional warping of the Mel filter banks using vocal tract length
% normalization (VTLN).
%Cepstral mean and variance normalization, dynamic (i.e.
%delta) features of arbitrary order, splicing of arbitrary number of frames
% to the left or right of the current frame
%and dimensionality reduction using linear discriminant analysis (LDA) or
%heteroscedastic linear discriminant analysis (HLDA) \cite{hlda} are supported
%at the executable layer.
% through simple command line tools.
%Additionally, we support reading and writing of features in the format used by
%HTK \cite{htkbook}.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Acoustic Modeling}
\label{sec:am}
Our aim is for Kaldi to support conventional models (i.e. diagonal GMMs) and
Subspace Gaussian Mixture Models (SGMMs), but to also be easily extensible to
new kinds of model.
% Following the general design philosophy of Kaldi, the
% acoustic modeling code is made up of classes with very specific functionality
% that do not ``know anything'' about how they get used. For example, the {\st
% DiagGmm} class just stores the parameters of a diagonal-covariance Gaussian
% mixture model (together with accessors and mutators for the parameters) and
% provides methods for likelihood computation. Estimation of GMMs is handled by a
% separate class\footnote{In fact, different estimation classes are responsible
% for different estimation algorithms, e.g. ML or MMI.} that accumulates the
% sufficient statistics.
% -------------------------------------------------------------------------
\subsection{Gaussian mixture models}
We support GMMs with diagonal and full covariance structures. Rather than
representing individual Gaussian densities separately, we directly implement a
GMM class that is parametrized by the {\em natural parameters}, i.e. means
times inverse covariances and inverse covariances. The GMM classes also store
the {\em constant} term in likelihood computation, which consist of all the
terms that do not depend on the data vector.
% In other words, the constant term is the likelihood of the zero vector.
Such an implementation is suitable for efficient log-likelihood computation
with simple dot-products.
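Spelling this out (the algebra is standard and only summarized above): for a diagonal-covariance component with mean $\mu$ and variances $\sigma_d^2$,
\[
\log \mathcal{N}(x;\mu,\Sigma) = K + \sum_d \frac{\mu_d}{\sigma_d^2}\, x_d - \frac{1}{2}\sum_d \frac{1}{\sigma_d^2}\, x_d^2,
\]
where the stored constant is $K = -\frac{1}{2}\sum_d \big( \log 2\pi\sigma_d^2 + \mu_d^2/\sigma_d^2 \big)$. With the natural parameters $\mu_d/\sigma_d^2$ and $1/\sigma_d^2$ stored and $K$ precomputed, each per-component score is obtained from dot products with $x$ and its element-wise square.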
% -------------------------------------------------------------------------
\subsection{GMM-based acoustic model}
The ``acoustic model'' class {\st AmDiagGmm} represents a collection of {\st DiagGmm} objects, indexed by ``pdf-ids'' that correspond to context-dependent
HMM states. This class does not represent any HMM structure, but just a collection of densities (i.e. GMMs).
% with a slightly richer interface that supports, among other things, setting the number of Gaussian components in each pdf proportional to the occupancy of the corresponding HMM state.
There are separate classes that represent the HMM structure, principally the
topology and transition-modeling code and the code responsible for compiling
decoding graphs, which provide a mapping between the HMM states and the pdf
index of the acoustic model class.
% The classes for estimating the acoustic model parameters are likewise implemented as a collection of GMM estimators.
Speaker adaptation and other linear transforms like maximum likelihood linear
transform (MLLT) \cite{mllt} or semi-tied covariance (STC) \cite{stc} are
implemented by separate classes.
% -------------------------------------------------------------------------
\subsection{HMM Topology}
It is possible in Kaldi to separately specify the HMM topology for each
context-independent phone. The topology format allows nonemitting states, and
allows the user to pre-specify tying of the p.d.f.'s in different HMM states.
% -------------------------------------------------------------------------
\subsection{Speaker adaptation}
We support both model-space adaptation using maximum likelihood linear
regression (MLLR) \cite{mllr} and feature-space adaptation using feature-space
MLLR (fMLLR), also known as constrained MLLR \cite{gales_linxform}. For both
MLLR and fMLLR, multiple transforms can be estimated using a regression tree
\cite{regtree}. When a single fMLLR transform is needed, it can be used as an
additional processing step in the feature pipeline.
% It is also possible to only estimate the bias vector of an affine transform,
% or the bias vector and a diagonal transform matrix, which are suitable when
% the amount of adaptation data is small\footnote{This is currently implemented only for fMLLR.}.
The toolkit also supports speaker normalization using a linear approximation
to VTLN, similar to \cite{lvtln}, or conventional feature-level VTLN, or
a more generic approach for gender normalization which we call the ``exponential
transform''~\cite{asru_et}. Both fMLLR and VTLN can be used for speaker adaptive training (SAT) of the acoustic models.
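For reference (these are the standard definitions from the cited literature, not new material): fMLLR applies a per-speaker affine transform to the features,
\[
\hat{x}_t = A^{(s)} x_t + b^{(s)},
\]
with $A^{(s)}$ and $b^{(s)}$ estimated by maximum likelihood on the adaptation data, whereas MLLR applies the analogous affine transform to the model means, $\hat{\mu} = A^{(s)} \mu + b^{(s)}$.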
% -------------------------------------------------------------------------
\subsection{Subspace Gaussian Mixture Models}
For subspace Gaussian mixture models (SGMMs), the toolkit provides an
implementation of the approach described in \cite{sgmm_csl}. There is a single
class {\st AmSgmm} that represents a whole collection of pdf's; unlike the
GMM case there is no class that represents a single pdf of the SGMM. Similar to
the GMM case, however, separate classes handle model estimation and speaker
adaptation using fMLLR.
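For reference, the SGMM of \cite{sgmm_csl} in its simplest form (without substates or speaker vectors) models each state $j$ as
\[
p(x \mid j) = \sum_{i=1}^{I} w_{ji}\, \mathcal{N}(x;\, M_i v_j,\, \Sigma_i),
\qquad
w_{ji} = \frac{\exp(\mathbf{w}_i^{\top} v_j)}{\sum_{i'=1}^{I} \exp(\mathbf{w}_{i'}^{\top} v_j)},
\]
so each state is described by a low-dimensional vector $v_j$, while the subspace matrices $M_i$, the weight vectors $\mathbf{w}_i$ and the covariances $\Sigma_i$ are shared across states.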
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Phonetic Decision Trees}
\label{sec:tree}
Our goals in building the phonetic decision tree code were to make it
efficient for arbitrary context sizes (i.e. we avoided enumerating
contexts), and also to make it general enough to support a wide range of
approaches. The conventional approach is, in each HMM-state of each
monophone, to have a decision tree that asks questions about, say,
the left and right phones. In our framework, the decision-tree
roots can be shared among the phones and among the states of the
phones, and questions can be asked about any phone in the context window,
and about the HMM state. Phonetic questions can be supplied based on
linguistic knowledge, but in our recipes the questions are generated
automatically based on a tree-clustering of the phones.
Questions about things like phonetic stress (if marked in the dictionary)
and word start/end information are supported via an extended phone set;
in this case we share the decision-tree roots among the different versions of the
same phone.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Language Modeling}
\label{sec:lm}
Since Kaldi uses an FST-based framework, it is possible, in principle, to use
any language model that can be represented as an FST.
% We are working on mechanisms that are able to handle LMs that would get too
% large when represented this way.
We provide tools for converting LMs in the standard ARPA
format to FSTs. In our recipes, we have used the IRSTLM toolkit
\footnote{Available from: http://hlt.fbk.eu/en/irstlm} for
purposes like LM pruning. For building LMs from raw text, users may use the
IRSTLM toolkit, for which we provide installation help, or a more fully-featured
toolkit such as SRILM~\footnote{Available from: http://www.speech.sri.com/projects/srilm/}.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Creating Decoding Graphs}
\label{sec:graphs}
All our training and decoding algorithms use Weighted Finite State Transducers
(WFSTs). In the conventional
recipe~\cite{wfst}, the input symbols on the decoding graph correspond to
context-dependent states (in our toolkit, these symbols are
numeric and we call them pdf-ids). However, because we allow different phones
to share the same pdf-ids, we would have a number of problems with this
approach, including not being able to determinize the FSTs, and not having
sufficient information from the Viterbi path through an FST to work out the
phone sequence or to train the transition probabilities. In order to fix these
problems, we put on the input of the FSTs a slightly more fine-grained integer
identifier that we call a ``transition-id'', that encodes the pdf-id, the phone
it is a member of, and the arc (transition) within the topology specification
for that phone. There is a one-to-one mapping between the ``transition-ids''
and the transition-probability parameters in the model: we decided to
make transitions as fine-grained as we could without increasing the
size of the decoding graph.
%An advantage of having the transition-ids as the
%graph input symbols is that all we need in Viterbi-based model training is the
%sequence of input symbols (transition-ids) in the Viterbi path through the
%FST. We call this sequence an alignment. A set of alignments gives us all the
%information we need in order to train the p.d.f.'s and the transition
%probabilities. Since an alignment encodes the complete phone sequence, it is
%possible to convert alignments between different decision trees.
Our decoding-graph construction process is based on the recipe described
in~\cite{wfst}; however, there are a number of differences. One important one
relates to the way we handle ``weight-pushing'', which is the operation that is
supposed to ensure that the FST is stochastic. ``Stochastic'' means that the
weights in the FST sum to one in the appropriate sense, for each state (like a
properly normalized HMM). Weight pushing may fail or may lead to bad pruning
behavior if the FST representing the grammar or language model ($G$) is not
stochastic, e.g. for backoff language models. Our approach is to
avoid weight-pushing altogether, but to ensure that each stage of graph creation
``preserves stochasticity'' in an appropriate sense. Informally, what this
means is that the ``non-sum-to-one-ness'' (the failure to sum to one) will
never get worse than what was originally present in $G$.
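Concretely, one way to state the condition described informally above: an FST is stochastic if, for every state $q$ with outgoing arcs $E[q]$ and final weight $\rho(q)$,
\[
\bigoplus_{e \in E[q]} w(e) \;\oplus\; \rho(q) \;=\; \bar{1},
\]
which in the log semiring (where weights are negated log-probabilities and $\bar{1} = 0$) amounts to requiring $\sum_{e \in E[q]} e^{-w(e)} + e^{-\rho(q)} = 1$.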
% This requires changes
%to some algorithms, e.g. determinization.
%There are other
%differences too: we minimize after removing disambiguation symbols, which is
%more optimal but requires changes to the minimization code of OpenFst; and we
%use a version of determinization that removes input epsilon symbols, which
%requires certain changes in other parts of the recipe (chiefly: introducing
%disambiguation symbols on the input of $G$).
% The graph creation process required in test time is put together at the
% shell-script level. In training
% time, the graph creation is done as a C++ program which can be made more
% efficient. The aim is for all the C++ tools to be quite generic, and not to
% have to know about ``special'' things like silence. For instance, alternative
% pronunciations and optional silence are supplied as part of the lexicon FST
% ($L$) which is generally produced by a script.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Decoders}
\label{sec:decoder}
We have several decoders, from simple to highly optimized; more will be added
to handle things like on-the-fly language model rescoring and lattice
generation. By ``decoder'' we mean a C++ class that implements the core
decoding algorithm. The decoders do not require a particular type of acoustic
model: they need an object satisfying a very simple interface with a function
that provides some kind of acoustic model score for a particular pair of
input symbol and frame.
{\scriptsize
\begin{verbatim}
class DecodableInterface {
public:
virtual float LogLikelihood(int frame, int index) = 0;
virtual bool IsLastFrame(int frame) = 0;
virtual int NumIndices() = 0;
virtual ~DecodableInterface() {}
};
\end{verbatim}}
Command-line decoding programs are all quite simple, do just one
pass of decoding, and are all specialized for one decoder and one
acoustic-model type. Multi-pass decoding is implemented at the script level.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Experiments}
\label{sec:expt}
We report experimental results on the Resource Management (RM) corpus and on
Wall Street Journal.
% We note that the experiments reported here should be fully reproducible,
% except for minor differences in WER due to differences in compiler behavior
% and random number generation algorithms.
The results reported here correspond to version 1.0 of Kaldi; the scripts that
correspond to these experiments may be found in {\st egs/rm/s1} and
{\st egs/wsj/s1}.
% and we will provide ``system identifiers'' (corresponding to
%training runs) to help locate particular experiments in our scripts.
% The scripts include all data preparation stages, and require only the
% original datasets as distributed by the Linguistic Data Consortium (LDC).
% %% Arnab, if you don't like this format, we can discuss other options.
% {\scriptsize
% \begin{verbatim}
% svn co \
% https://kaldi.svn.sourceforge.net/svnroot/kaldi/kaldi-v1.0
% \end{verbatim}
% }
% ; see
% {\scriptsize \verb|egs/rm/README.txt|} and {\scriptsize \verb|egs/wsj/README.txt|}.
% In the experimental results in this section we will provide system identifiers
% to make it easier to locate the corresponding experiments.
\subsection{Comparison with previously published results}
% We first report some results intended to demonstrate that the basic
% algorithms included in the toolkit give results comparable to those
% previously reported in the literature.
\begin{table}
\centering{
\caption{Basic triphone system on Resource Management: \%WERs}
\label{rm:baseline}
\begin{tabular}{|c|c|c|c|c|c|} \hline
& \multicolumn{5}{|c|}{ Test set } \\ \cline{2-6}
& Feb'89 & Oct'89 & Feb'91 & Sep'92 & Avg \\ \hline
HTK & 2.77 & 4.02 & 3.30 & 6.29 & 4.10 \\
Kaldi & 3.20 & 4.21 & 3.50 & 5.86 & 4.06 \\ \hline
\end{tabular}}
\end{table}
Table~\ref{rm:baseline} shows the results of a context-dependent triphone system
with mixture-of-Gaussian densities; the HTK baseline numbers are taken
from~\cite{Povey:ICASSP99} and the systems use essentially the same algorithms.
The features are MFCCs with per-speaker cepstral mean subtraction. The language
model is the word-pair bigram language model supplied with the RM corpus.
The WERs are essentially the same. Decoding time was about 0.13$\times$RT, measured
on an Intel Xeon CPU at 2.27GHz. The system identifier for the Kaldi results is
tri3c.
Table~\ref{wsj:baseline} shows similar results for the Wall Street Journal
system, this time without cepstral mean subtraction. The WSJ corpus comes with
bigram and trigram language models.
% and most of our experiments use a pruned version of the trigram language model (with the number of entries reduced from 6.7 million to 1.5 million) since our
% fully-expanded FST gets too large with the full language model (we are working on decoding strategies that can work with large language models).
% For comparison with published results, we report bigram
% decoding in Table~\ref{wsj:baseline},
We compare with published numbers using the bigram language model. The
baseline results are reported in~\cite{Reichl:ITSAP00}, which we refer to as
``Bell Labs'' (for the authors' affiliation), and a HTK system described
in~\cite{Woodland:ICASSP94}. The HTK system was gender-dependent (a gender-independent
baseline was not reported), so the HTK results are slightly better. Our decoding
time was about 0.5$\times$RT.
\begin{table}
\centering{
\caption{Basic triphone system, WSJ, 20k open vocabulary, bigram LM, SI-284 train: \%WERs}
\label{wsj:baseline}
\begin{tabular}{|c|c|c|} \hline
& \multicolumn{2}{|c|}{ Test set } \\ \cline{2-3}
& Nov'92 & Nov'93 \\ \hline
Bell & 11.9 & 15.4 \\
HTK (+GD) & 11.1 & 14.5 \\
KALDI & 11.8 & 15.0 \\ \hline
\end{tabular}}
\end{table}
\subsection{Other experiments}
Here we report some more results on both the WSJ test sets (Nov'92 and Nov'93)
using systems trained on just the SI-84 part of the training data, that
demonstrate different features that are supported by Kaldi.
% Note that the triphone results for the WSJ sets are worse than those in Table
% \ref{wsj:baseline} due to the smaller training set.
We also report results on the RM task, averaged over 6 test sets: the 4
mentioned in Table~\ref{rm:baseline} together with Mar'87 and Oct'87. The best
result for a conventional GMM system is achieved by a SAT
% a speaker-adaptively trained
system that splices 9 frames (4 on each side of the
current frame) and uses LDA to project down to 40 dimensions, together with
MLLT. We achieve better performance on average, with an SGMM system trained
on the same features, with speaker vectors and fMLLR adaptation. The last
line, with the best results, includes the ``exponential transform''~\cite{asru_et} in
the features.
\begin{table}
\centering{
\caption{Results on RM and on WSJ, 20k open vocabulary, bigram LM, trained on half of SI-84: \%WERs}
\label{results}
\begin{tabular}{|l|c|c|c|} \hline
& RM (Avg) & WSJ Nov'92 & WSJ Nov'93 \\ \hline
Triphone & 3.97 & 12.5 & 18.3 \\
\,\, + fMLLR & 3.59 & 11.4 & 15.5 \\
\,\, + LVTLN & 3.30 & 11.1 & 16.4 \\
Splice-9 + LDA + MLLT & 3.88 & 12.2 & 17.7 \\
\,\, + SAT (fMLLR) & 2.70 & 9.6 & 13.7 \\
\,\, + SGMM + spk-vecs & 2.45 & 10.0 & 13.4 \\
\qquad + fMLLR & 2.31 & 9.8 & 12.9 \\
\qquad + ET & 2.15 & 9.0 & 12.3 \\
\hline
\end{tabular}}
\end{table}
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section{Conclusions}
\label{sec:conclusion}
We described the design of Kaldi, a free and open-source speech recognition
toolkit. The toolkit currently supports modeling of context-dependent phones of
arbitrary context lengths, and all commonly used techniques that can be
estimated using maximum likelihood. It also supports the recently proposed
SGMMs. Development of Kaldi is continuing and we are working on using large
language models in the FST framework, lattice generation and discriminative
training.
% -------------------------------------------------------------------------
% -------------------------------------------------------------------------
\section*{Acknowledgments}
{\footnotesize
We would like to acknowledge participants and collaborators in the 2009 Johns
Hopkins University Workshop, including Mohit Agarwal, Pinar Akyazi, Martin
Karafiat, Feng Kai, Ariya Rastrow, Richard C. Rose and Samuel Thomas; Patrick
Nguyen, for introducing the participants in that workshop and for help with WSJ
recipes, and faculty and staff at JHU for their help during that workshop,
including Sanjeev Khudanpur, Desir\'{e}e Cleves, and the late Fred Jelinek.
We would like to acknowledge the support of Geoffrey Zweig and Alex Acero at
Microsoft Research. We are grateful to Jan (Honza) \v{C}ernock\'{y}
for helping us organize the workshop at the Brno University of Technology
during August 2010 and 2011. Thanks to Tomas Ka\v{s}p\'{a}rek for system
support and Renata Kohlov\'{a} for administrative support.
We would like to thank Michael Riley, who visited us in Brno to deliver
lectures on finite state transducers and helped us understand OpenFst; Henrique
(Rico) Malvar of Microsoft Research for allowing the use of his FFT code; and
Patrick Nguyen for help with WSJ recipes.
We would like to acknowledge the help with coding and documentation from
Sandeep Boda and Sandeep Reddy (sponsored by Go-Vivace Inc.) and Haihua Xu.
We thank Pavel Matejka (and Phonexia s.r.o.) for allowing the use of feature
processing code.
% It is possible that this list of contributors contains oversights; any
% important omissions are unlikely to be intentional.
During the development of Kaldi, Arnab Ghoshal was supported by the European
Community's Seventh Framework Programme under grant agreement no. 213850
(SCALE); the BUT researchers were supported by the Technology Agency of the
Czech Republic under project No. TA01011328, and partially by Czech MPO project
No. FR-TI1/034.
The JHU 2009 workshop was supported by National Science Foundation Grant Number
IIS-0833652, with supplemental funding from Google Research, DARPA's GALE program
and the Johns Hopkins University Human Language Technology Center of Excellence.
% Finally, it is quite possible that someone else who helped us significantly was
% inadvertently omitted here; if so, we apologize.
}
\bibliographystyle{IEEEtran}
\bibliography{refs}
\end{document}
% LocalWords: Kaldi automata OpenFst SGMM SourceForge toolkits HTK RWTH FST LM
% LocalWords: BLAS LAPACK PACKage decodable refactor DecodableInterface GMM Xu
% LocalWords: functionalities executables Cholesky SVD Fortran CBLAS CLAPACK
% LocalWords: MKL JAMA templated SpMatrix TpMatrix SubVector SubMatrix WFSTs
% LocalWords: deallocation OpenFst's FSTs determinization stochasticity MFCC
% LocalWords: PLP mel pre FFT cepstral liftering VTLN LDA heteroscedastic HLDA
% LocalWords: GMMs SGMMs DiagGmm accessors mutators MMI AmDiagGmm pdf MLLT STC
% LocalWords: nonemitting fMLLR AmSgmm pdf's triphones monophone biphone LMs
% LocalWords: ARPA IRSTLM SRILM determinize backoff rescoring WER egs triphone
% LocalWords: WERs MFCCs bigram xRT spk LVTLN vecs WSJ Mohit Agarwal Pinar Ka
% LocalWords: Akyazi Karafiat Feng Ariya Rastrow Sanjeev Khudanpur Desir Boda
% LocalWords: Cleves Henrique Malvar Sandeep Reddy Haihua Matejka Phonexia rek
% LocalWords: Zweig Acero Honza ernock Kohlov MPO IIS DARPA's
| {
"alphanum_fraction": 0.7120021022,
"avg_line_length": 55.0204819277,
"ext": "tex",
"hexsha": "dfd927d065ed41b0690b08166bc2e01aaf57078a",
"lang": "TeX",
"max_forks_count": 267,
"max_forks_repo_forks_event_max_datetime": "2022-03-30T12:18:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-06-07T08:33:28.000Z",
"max_forks_repo_head_hexsha": "8e30fddb300a87e7c79ef2c0b9c731a8a9fd23f0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "shuipi100/kaldi",
"max_forks_repo_path": "misc/papers/asru11_toolkit/kaldi_asru.tex",
"max_issues_count": 49,
"max_issues_repo_head_hexsha": "8e30fddb300a87e7c79ef2c0b9c731a8a9fd23f0",
"max_issues_repo_issues_event_max_datetime": "2019-12-24T11:13:34.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-10-24T22:06:28.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "shuipi100/kaldi",
"max_issues_repo_path": "misc/papers/asru11_toolkit/kaldi_asru.tex",
"max_line_length": 186,
"max_stars_count": 805,
"max_stars_repo_head_hexsha": "8e30fddb300a87e7c79ef2c0b9c731a8a9fd23f0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "shuipi100/kaldi",
"max_stars_repo_path": "misc/papers/asru11_toolkit/kaldi_asru.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-26T09:13:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-05-28T02:32:04.000Z",
"num_tokens": 10711,
"size": 45667
} |
\chapter{Conclusion} \label{sec:conclusionChapter}
We conclude this report by highlighting the contributions of this project:
\begin{itemize}
\item
Within the scope defined in Section~\ref{sec:scope}, we have explained the data structure presented in \cite{patrascu2014dynamic}, together with helper functions and algorithms. This has been complemented with illustrative examples.
\item
We have implemented and documented the data structure, making it publicly available for future work.
\item
We have implemented and included correctness tests in the repository. These attest to the correctness of the implementations we present, and they were designed so that they can also be used to assess the correctness of further implementations.
\item
We have listed other data structures that are of interest within the context of the dynamic predecessor problem, pointing to future work in the form of benchmarks.
\end{itemize}
\section{Final Remarks}
\begin{itemize}
\item
There is a fine balance between space and time consumption when developing and implementing sub-logarithmic data structures and algorithms. For example, the van Emde Boas tree is a very fast data structure, but it has a significant space consumption drawback. Fusion Trees improve on space consumption, but updates take excessive time. Dynamic Fusion Trees, presented in this project, seem to strike a good balance between space and time consumption.
\item
As mentioned in Section~\ref{sec:modelsofcomputation}, theoretical bounds have the potential to hide big constants.
This project has been about implementing algorithms that use a constant number of operations to compute their result, under the assumption that they improve on logarithmic algorithms (such as binary search).
We want to highlight that, even though it is plausible to implement the \textit{Dynamic Fusion Node} using only $O(1)$ operations of the word RAM model, the hidden constant might not be negligible.
For instance, the {\ttfamily DynamicFusionNodeBinaryRank} implementation from Section~\ref{sec:DynamicFusionNodeBinaryRank} computes $\text{rank}(x)$ with binary search.
In this particular implementation, because we set $w = 64$ and can therefore store at most $k = 16$ keys, the worst-case running time of a $\text{rank}(x)$ query occurs when the node is full, i.e., $n = k = 16$.
Since $\log_2 16 = 4$, we conclude that, when this implementation is full, the binary search takes at most $4$ iterations to compute the answer.
The {\ttfamily DynamicFusionNodeDontCaresRank} implementation from Section~\ref{sec:rankWithDontCares} uses $O(1)$ operations to compute a $\text{rank}(x)$ query. The constant number of operations used by this method is larger than the $4$ iterations taken by the binary search rank algorithm from Section~\ref{sec:DynamicFusionNodeBinaryRank}.
The {\ttfamily match} subroutine alone, used by the rank with "don't cares" algorithm, uses $5$ word RAM operations, and the whole rank algorithm needs to run this subroutine at least once; if the key is not in the set, it needs to run it once again. That is already $10$ operations, not counting other necessary subroutines such as the select query and finding the branching bit, $j$.
Our conclusion is that $w = 64$ is potentially too small a word size to take full advantage of the asymptotic running time of these algorithms.
\end{itemize}
\section{Future Work} \label{sec:futureWork}
In this section, we leave some suggestions on how further work on this topic can be conducted. We have split these into three main categories:
\begin{enumerate}
\item
Implementation, which covers specific data structures or algorithms.
\item
Optimization, which entails improving the present code.
\item
Benchmarking, which points to other data structures that either partially or fully solve the dynamic predecessor problem.
\end{enumerate}
\subsection{Implementation} \label{sec:FutureWorkImplementation}
\paragraph*{Delete methods}
The implementation featured in Section~\ref{sec:InsertDontCares} implements the {\ttfamily delete} method naively, but all the ingredients needed to implement it while adhering to the \textit{Inserting a key} section of \cite{patrascu2014dynamic} are already present in the implementation.
\paragraph*{Key compression in constant time}
One of the bottlenecks of implementations from Sections~\ref{sec:rankWithDontCares} and \ref{sec:InsertDontCares} is how the compressed keys with "don't cares" are maintained. Specifically, the next iterative step in the implementation from Section~\ref{sec:InsertDontCares} is to implement functions that compute the compressed keys in $O(1)$ time, as explained in Chapters $[3.2 - 3.3]$ of \cite{patrascu2014dynamic}.
\paragraph*{Dynamic Fusion Tree}
After enabling all the operations at the node level in $O(1)$ time with the previous steps' implementation, all that remains to complete the implementation is to implement a $k$-ary tree using a \textit{Dynamic Fusion Node} as its node. This is covered in Chapter $4$ of \cite{patrascu2014dynamic}.
\paragraph*{Non-recursive implementation}
Chapter 4 of \cite{patrascu2014dynamic} mentions that once the dynamic fusion tree is implemented, the rank operation on a tree is given by a recursive function. Recursion in Java can be slow \cite{shirazi2003java}, and for this reason, a non-recursive alternative is preferred.
\subsection{Benchmarking}
\paragraph*{msb functions}
It would be interesting to see how the different msb functions implemented in this project compare to each other in terms of time.
\paragraph*{Dynamic Predecessor Problem Data Structures}
Once the \textit{Dynamic Fusion Tree} is fully implemented, the next logical step would be to benchmark it with other data structures that solve, either partially or entirely, the dynamic predecessor problem. These would be the data structures mentioned in Section~\ref{sec:IntegerSets}.
\subsection{Optimization}
\paragraph*{Space}
Space consumption can be improved by, for instance, eliminating or combining some of the fields. When $k = 8$, $bKey$ would only use $8$ of the $64$ allocated bits, and $index$ would use $3 \times 8 = 24$ bits, so the two words together would only need $32$ bits in total. This is an example of how space can be improved.
\paragraph*{Simulate a longer word size}
One of the limitations of the present implementation is that the maximum integer size is 64 bits on a common modern machine. An interesting improvement would be to simulate a larger word size $w$, which would increase $k$, allowing more keys to be stored at the node level. This would have to be carefully studied to keep every operation in $O(1)$ time.
\paragraph*{Lookup {\ttfamily M} constant}
The {\ttfamily Util} class includes a function, {\ttfamily M}, which takes $b$ and $w$ as parameters and returns the word $(0^{b-1}1)^{(w/b)}$. It is a widely used function in this repository, and it has been implemented with a loop. With a limited word size, this could easily be implemented as a lookup function, reducing the running time to $O(1)$.
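To make the pattern concrete, the following is a small Python sketch (the repository itself is written in Java, so this is purely illustrative) of the word $(0^{b-1}1)^{(w/b)}$ and of the lookup-table idea suggested above.
\begin{verbatim}
# Build the word (0^(b-1) 1)^(w/b): one set bit every b positions.
def M(b: int, w: int) -> int:
    word = 0
    for i in range(w // b):
        word |= 1 << (i * b)
    return word

# Example: for b = 8 and w = 64 the result is 0x0101010101010101.
assert M(8, 64) == 0x0101010101010101

# Lookup variant: with w fixed at 64, precompute M(b, w) for every block
# size b of interest, turning each later call into an O(1) table lookup.
W = 64
M_TABLE = {b: M(b, W) for b in range(1, W + 1)}
assert M_TABLE[8] == M(8, W)
\end{verbatim}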
\paragraph*{Profiling}
This technique shows in which sections of the code the CPU spends most of its time, indicating possible bottlenecks. By using it, the implementation can be fine-tuned to the point where it can be competitive. | {
"alphanum_fraction": 0.776734201,
"avg_line_length": 82.8111111111,
"ext": "tex",
"hexsha": "ff99abc998e8d0eb9493e1226cecb5e3a857236c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "150b3a39ad2bfd4bd30897762ee7777b18b53157",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hugo-brito/DynamicIntegerSetsWithOptimalRankSelectPredecessorSearch",
"max_forks_repo_path": "src/tex/02_Chapters/13_FinalThoughts.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "150b3a39ad2bfd4bd30897762ee7777b18b53157",
"max_issues_repo_issues_event_max_datetime": "2020-06-19T18:07:24.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-06-18T17:48:39.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hugo-brito/DynamicIntegerSetsWithOptimalRankSelectPredecessorSearch",
"max_issues_repo_path": "src/tex/02_Chapters/13_FinalThoughts.tex",
"max_line_length": 445,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "150b3a39ad2bfd4bd30897762ee7777b18b53157",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hugo-brito/DynamicIntegerSetsWithOptimalRankSelectPredecessorSearch",
"max_stars_repo_path": "src/tex/02_Chapters/13_FinalThoughts.tex",
"max_stars_repo_stars_event_max_datetime": "2020-07-14T03:28:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-06-17T11:52:11.000Z",
"num_tokens": 1713,
"size": 7453
} |
\section{Proposed solution}
Before going into the details of the solution, let us describe the graphs used for the analysis. The dataset is a set of synthetic graphs generated with the Erdos-Renyi model. The generated graph $ER(n, p)$ is parametric in $n$ and $p$, where $n$ is the number of nodes and $p$ can be interpreted formally as the probability of drawing each edge independently of the previous realizations. Alternatively, $p$ can be seen as the density of the graph. The set used in the experiments is composed of four graphs: $ER(10000, 0.2)$, $ER(10000, 0.4)$, $ER(10000, 0.8)$ and $ER(100000, 0.002)$.
The number of nodes was chosen as a trade-off: large enough to evaluate the quality of the algorithm, yet small enough to keep the cost in terms of memory and time acceptable, since the space and time complexity of the generation are $O(n^2)$, which is not negligible\footnote{For low densities a faster algorithm, proposed by Vladimir Batagelj and Ulrik Brandes\cite{PhysRevE.71.036113} and provided by the networkx library, was used.}.
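For reference, the sketch below shows one possible reading of the quoted $O(n^2)$ cost: a naive $G(n, p)$ generator that draws every pair independently (a minimal C++ sketch; the experiments themselves relied on networkx, including the Batagelj--Brandes algorithm for the sparse case):
\begin{verbatim}
#include <cstdio>
#include <random>
#include <vector>

// Naive O(n^2) Erdos-Renyi G(n, p) generator: every unordered pair {i, j}
// becomes an edge independently with probability p (adjacency-list output).
std::vector<std::vector<int>> erdosRenyi(int n, double p, unsigned seed = 42) {
  std::mt19937 gen(seed);
  std::bernoulli_distribution coin(p);
  std::vector<std::vector<int>> adj(n);
  for (int i = 0; i < n; ++i)
    for (int j = i + 1; j < n; ++j)
      if (coin(gen)) { adj[i].push_back(j); adj[j].push_back(i); }
  return adj;
}

int main() {
  auto g = erdosRenyi(1000, 0.2);
  std::printf("nodes: %zu, degree of node 0: %zu\n", g.size(), g[0].size());
  return 0;
}
\end{verbatim}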
\subsection{Problem Analysis}
\subsubsection{Completion time}
\label{sec:comp-time}
The sequential algorithm described in Section~\ref{sub:seq-version} is an iterative algorithm with an initialization phase, where the $i$-th iteration visits level $i$. Processing the $i$-th level requires going through each node $v : d(v, s) = i$ and visiting its neighborhood.
The time needed to process a node is independent of the level at which it is visited, as it only depends on the size of its neighborhood. Hence, let the time taken by node $v$ be denoted by $T_v$; the time taken by level $i$ is then $T_{L_i} = \sum_{v : d(v, s) = i}T_v$.
Note that different levels require different times depending on the topology of the graph: the $i$-th level is influenced by the number of nodes at distance $i$ from the root and by the size of the neighborhoods of these nodes. The completion time of the $i$-th iteration is $T_i = T_{L_i} + T_{swap} + T_{clear}$, i.e. the time taken by level $i$ plus two additional factors: the former, $T_{swap}$, is the time taken to swap $L$ and $\hat{L}$, which is the same for all iterations since it is a swap of two pointers; the latter, $T_{clear}$, is the time taken to clean the new $\hat{L}$, so it depends on the number of elements.
Let $G$ be the graph induced by the root node then the completion time for the sequential algorithm is:
$$
T_{seq}(G) = T_{init} + \sum^{\bar{d}(G)}_{i=1} T_i \leq T_{init} + (\bar{d}(G) \cdot max_i \ T_i)
$$
where $T_{init}$ is the time taken by the initialization phase; it accounts for the creation and initialization of the vector of visits, the initialization of $F_i$ and $F_{i+1}$, the analysis of the root node, and the insertion of its neighbors in the frontier.
There are mainly three kinds of possible parallelism:
\begin{enumerate}
	\item The frontier level, which divides the work required to compute a single frontier among the workers;
	\item The neighborhood level, which divides the work required by a single node's computation, i.e. the visit of its neighbors, among the workers;
	\item A combination of both 1 and 2.
\end{enumerate}
The analysis focuses on the first approach, since the second one is very sensitive to the local topology of a single node. Moreover, in real scenarios $\bar{k}$ is a very small value, which leads to small units of work. In addition, still considering real scenarios, $\bar{d}$ is very small and $n$ is large, so the frontiers will contain a very large number of nodes.
\\
\\
Ideally, with $nw$ workers one could think that the analysis of a single frontier $F_i$ can be divided equally among the workers, obtaining a parallel completion time for a single iteration $T^{par}_{T_i} \approx \frac{T_{L_i}}{nw} + T_{swap} + T_{clear}$.
However, this does not take into account that the analysis of $L_{i+1}$ can start only when the analysis of $L_{i}$ has terminated, which introduces an additional time factor to synchronize all the workers.
The parallel completion time for a single iteration then becomes $T^{par}_{T_i} \approx \frac{T_{L_i}}{nw} + T_{sync} + T_{swap} + T_{clear}$.
Moreover, there are three tasks that cannot be done concurrently, as they are in a critical section:
\begin{enumerate}
	\item Checking if a node has already been visited and, if necessary, updating the vector of visits;
	\item Adding the nodes found to $F_{i+1}$;
	\item Updating the number of occurrences.
\end{enumerate}
Let us denote the additional factors listed above by $T_{visited}$ (due to 1), $T_{merge}$ (due to 2) and $T_{update}$ (due to 3).
The first two are required at each iteration; the last one can be added directly, once, to $T_{par}$, since each worker can keep a local counter of the occurrences found and sum it at the end.
The completion time of a single iteration in parallel can then be better approximated by $T^{par}_{T_i} \approx \frac{T_{L_i}}{nw}
+ T_{sync}
+ T_{visited} + T_{merge} + T_{swap} + T_{clear}$, resulting in the following
parallel completion time:
parallel completion time:
$$
T_{par}(G, nw) \approx T_{init} + T_{update} + \sum^{\bar{d}(G)}_{i=1} T^{par}_{T_i}
$$
Of course, the above approximation assumes the workload is perfectly balanced, which in principle is not easy to achieve, since it is not only a matter of the number of nodes (i.e. it is not enough to assign an equal-size subset of the frontier to each worker): the completion time $T_{S \subseteq L_i}$ is influenced by the completion time $T_v$ of each node $v \in S$. A necessary (though not sufficient) condition for the best scenario at iteration $i$ is that the nodes of level $L_{i+1}$ are equally distributed among the neighborhoods of the nodes contained in $L_i$.
\subsubsection{Sequential analysis}
In order to design a proper parallel solution, the times of the various operations and phases required by the sequential version were measured, to identify possible bottlenecks. The expected ones are the memory accesses, more in detail:
\begin{itemize}
	\item The time to access an element of the graph pointed to by the frontier, $T_{read(v[i]\in G)}$, which is random and unpredictable, since it only depends on the topology of the graph. One could imagine that sorting the frontier is a possible improvement; this, however, probably causes an overhead that exceeds the gain, and a deeper analysis of this option is beyond the scope of this work.
	\item The same reasoning on accessing graph elements also applies to the time to access the vector of visits, $T_{visited[v]}$, where, however, the access order is influenced by the organization of the neighborhoods of the nodes in $L_i$.
\end{itemize}
The accesses to the current frontier $F_i$ and to the neighborhood $\mathcal{N}(v)$ are efficient, since they are a scan from the first to the last element of a vector, thus optimal in the number of I/Os. Some measurements supporting this are presented in Table~\ref{tab:seq-meas}\footnote{The results are obtained as an average over 10 runs.}. Note that $T_{read(v \in \mathcal{N}(v))}$, $T_{read(visited[v])}$ and $T_{write(visited[v])}$ are not reported, since on average they took at most $\approx 5ns$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|| l | c | c | c | c |}
\hline
& $ER(10000, 0.2)$ & $ER(10000, 0.4)$ & $ER(10000, 0.8)$ & $ER(100000, 0.002)$ \\ \hline \hline
$T_{clear}$ & $\approx 164ns$& $\approx 176ns$& $\approx 167ns$ & $\approx 155ns$
\\ \hline
$T_{swap}$ & $\approx 154ns$& $\approx 168ns$& $\approx 163ns$ & $\approx 140ns$
\\ \hline
$T_{read(i \in F_i)}$ & $\approx 160ns$& $\approx 164ns$ & $\approx 166ns$ & $\approx 150ns$
\\ \hline
$T_{read(v[i] \in G)}$ & $\approx 1874ns$& $\approx 2837ns$& $\approx 4233ns$ & $\approx 771ns$
\\ \hline
\hline
\end{tabular}
\end{center}
\caption{Sequential measurements}
\label{tab:seq-meas}
\end{table}
As can be observed, as the density increases, the time taken by the I/O increases. This happens because the bigger the vector to read, the higher the number of pages it occupies in the upper-level caches, i.e. the higher the number of accesses to those pages during a scan. The measured sequential completion times are shown in Table~\ref{tab:seq-results}, as an average over 10 runs.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|| l | c | c | c | c ||}
\hline
& $ER(10000, 0.2)$ & $ER(10000, 0.4)$ & $ER(10000, 0.8)$ & $ER(100000, 0.002)$ \\ \hline \hline
$\bar{T}_{seq}$ & $65ms$ & $126ms$ & $244ms$ & $127ms$ \\ \hline
$\sigma(T_{seq})$ & $1ms$ & $422\mu s$ & $548\mu s$ & $399 \mu s$ \\ \hline
\hline
\end{tabular}
\end{center}
\caption{Sequential results}
\label{tab:seq-results}
\end{table}
Considering the results obtained, different scenarios are interesting for the analysis of the parallel version. For instance, a fully connected graph with a small number of nodes generates frontiers that are big with respect to the number of nodes, yet the time needed to process each frontier is too small compared to the overhead of thread creation. A similar scenario occurs when processing a graph with a large number of nodes and a very high value of $\bar{d}$, where the generated frontiers will on average be small, so the synchronization overhead is expected to exceed the gain.
\subsection{Solution Description}
In order to mitigate the load-balancing issues due to the variance of $k_{out}$, the proposed solution uses dynamic scheduling, which can easily be implemented in a Master-Worker fashion. In the present case, an auto-scheduling policy is implemented: all the workers have access to the frontier and obtain a new unit of work through a shared data structure. Retrieving a task has the cost of an atomic fetch-and-add on an integer. Here a task corresponds to a chunk, i.e. the start and end indexes of a portion of $F_i$; the chunk size $cw$ is a parameter of the solution. Thanks to this shared data structure the master does not need to prepare the tasks for the workers: it performs the same work as a worker, with the difference that it is also responsible for the swap and the cleaning of the new $F_{i+1}$.
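A minimal sketch of such a shared dispenser is shown below (in C++, with hypothetical names; the actual code may differ in details): each {\ttfamily pop} reserves the next chunk of $cw$ indexes of $F_i$ with a single atomic fetch-and-add.
\begin{verbatim}
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <utility>

// Shared task dispenser over the current frontier F_i: each pop reserves the
// next chunk of cw indexes with one atomic fetch-and-add.
class ChunkDispenser {
 public:
  ChunkDispenser(std::size_t frontierSize, std::size_t cw)
      : next_(0), size_(frontierSize), cw_(cw) {}

  // Returns [begin, end) of the reserved chunk; begin == end means no work left.
  std::pair<std::size_t, std::size_t> pop() {
    std::size_t begin = next_.fetch_add(cw_, std::memory_order_relaxed);
    if (begin >= size_) return {size_, size_};
    return {begin, std::min(begin + cw_, size_)};
  }

  // Called by the master between levels, after the level barrier,
  // so no worker can race with it.
  void reset(std::size_t newFrontierSize) {
    size_ = newFrontierSize;
    next_.store(0, std::memory_order_relaxed);
  }

 private:
  std::atomic<std::size_t> next_;
  std::size_t size_;
  std::size_t cw_;
};

int main() {
  ChunkDispenser d(100, 32);
  auto c = d.pop();                 // reserves [0, 32)
  return (c.first == 0 && c.second == 32) ? 0 : 1;
}
\end{verbatim}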
\subsubsection{Local next frontier}
The trivial way to produce $F_{i+1}$ is to share an array among the workers and insert each newly found element into it in mutual exclusion. This solution is as easy as it is inefficient, since it causes several problems: false sharing occurs more frequently and the overhead due to atomicity is really high. In the proposed solution each worker keeps a local version of the next frontier, which contains only the nodes found by that worker; at the end of its visit, the worker atomically appends all the elements of the local frontier to the global one. This reduces false sharing and decreases the overhead, since the processing is much faster.
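A sketch of the merge step (C++, hypothetical names) shows the idea: one critical section per worker and per level, instead of one lock acquisition per discovered node.
\begin{verbatim}
#include <mutex>
#include <vector>

// Each worker collects newly discovered nodes in localFrontier and appends
// them to the shared next frontier in one critical section per level.
void mergeLocalFrontier(std::vector<int>& globalNext,
                        std::vector<int>& localFrontier,
                        std::mutex& frontierMutex) {
  if (localFrontier.empty()) return;
  {
    std::lock_guard<std::mutex> lock(frontierMutex);
    globalNext.insert(globalNext.end(),
                      localFrontier.begin(), localFrontier.end());
  }
  localFrontier.clear();
}
\end{verbatim}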
\subsubsection{Expected performance}
\label{sec:how-it-perform}
The solution described above mitigates, thanks to dynamic scheduling, the problems due to the variance of $k_{out}$. However, it does not eliminate the problem entirely, because the parallelism remains only at the frontier level. The worst case is when a huge hub occurs: the worker that has to process it will take much longer than the others.
\\
\\
To discuss the expected performance more specifically, it is necessary to evaluate it with respect to the topology and statistical properties of the input graph:
\begin{itemize}
	\item In general, the lower the value of $\bar{d}$, the higher the expected gain of the parallel version, because the overhead due to the level-synchronization will be smaller and the number of nodes in the frontiers will be higher, since $n$ has to be divided among fewer frontiers.
	\item In addition, fixing $\bar{d}$ and enlarging $n$ increases the probability that some levels will contain enough nodes to achieve a good speedup.
	\item If $k_{out}$ is also considered, it is possible to better estimate how the frontiers will be composed: the lower the variance of $k_{out}$, the higher the confidence that the nodes will be well distributed across the frontiers. This is not true in general, because it actually depends on which nodes are present in the different neighborhoods: the fewer duplicates among the neighborhoods, the better distributed the nodes will be across the frontiers.
	\item The density by itself does not say much about the distribution of the nodes among the levels, but in some specific cases it can help. For instance, in the synthetic dataset generated with the Erdos-Renyi model, increasing the value of $p$, i.e. the density, tends to produce a small number of levels, due to the independence in drawing random edges; for small values of $p$ the probability of finding nodes at higher levels increases.
\end{itemize}
For example, in real graphs it is known that $n$ is a big number and $\bar{d}$ is a really small number. However, in those graphs hubs occur often, which generates a high variance in $k_{out}$ and worsens the load-balancing issue.
\subsubsection{The influence of chunksize}
\label{sec:chunksize}
As mentioned before, the proposed solution is parametric in $cw$, the chunk size, i.e. the number of nodes of $F_i$ that each worker considers as a single task. The pop operation is implemented, as described in the previous sections, with an efficient data structure whose cost is a single atomic \texttt{fetch-add}. The choice of the value of $cw$ should be a trade-off between the advantage of having small tasks and the time taken by the pop operation: a small chunk size guarantees a fairer distribution among the workers but, as a drawback, it increases the number of pops and hence the overhead. Moreover, the higher the expected variance of $k_{out}$, the higher the advantage of having small chunks.
\subsection{Analysis and measurements}
All the experiments have been performed fixing $cw$ to 32; this number has been chosen according to the considerations in Section~\ref{sec:chunksize}, observing the overheads described in Section~\ref{sec:overhead}.
\subsubsection{Build and execution}
The source code has been compiled with \texttt{g++} (C++17) using the commands:
\begin{itemize}
\item \texttt{g++ -std=c++17 ./src/bfs-sequential.cpp -o ./build/bfs-sequential -O3} \\ \texttt{-Wall -I include/ -ftree-vectorize }
\item \texttt{g++ -std=c++17 ./src/bfs-pthread.cpp -o ./build/bfs-pthread -O3 -Wall} \\ \texttt{-pthread -I include/ -ftree-vectorize }
\item \texttt{g++ -std=c++17 ./src/bfs-fastflow.cpp -o ./build/bfs-fastflow -O3 -Wall}\\ \texttt{-pthread -I include/ -ftree-vectorize }
\end{itemize}
The execution of the sequential code requires 3 positional arguments:
\begin{enumerate}
\item \texttt{inputFile} : string, the path to the graph
	\item \texttt{startingNodeId} : integer, the id of the node from which the BFS will start
\item \texttt{labelTarget} : integer, label whose occurrences are to be counted
\end{enumerate}
The parallel versions require 2 additional positional arguments:
\begin{enumerate}
\item \texttt{nw} : integer, the number of workers to use
\item \texttt{k}: integer, chunk size
\end{enumerate}
All versions print the completion time by default.
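For example, a run of the parallel version could look like \texttt{./build/bfs-pthread ./graphs/er-10000-02.txt 0 1 16 32} (the graph file name here is purely illustrative), i.e. a BFS from node $0$ counting the occurrences of label $1$ with $16$ workers and a chunk size of $32$.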
\subsubsection{pthread implementation}
In the \texttt{pthread} implementation, the main thread acts as the master: it executes the initialization phase, creates the threads and then starts its work as described above. As soon as the visit of the graph is completed, the master collects the result and prints it together with the completion time. The synchronization at the end of each level, among the workers and the master, is implemented using an active-wait barrier built on a mutex and a condition variable.
\\
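The sketch below shows the basic shape of such a level barrier (C++, simplified: it blocks on the condition variable instead of actively waiting, and all names are hypothetical, not the project's exact code):
\begin{verbatim}
#include <condition_variable>
#include <mutex>

// Reusable level barrier: all nw threads must arrive before any proceeds.
// Simplified sketch: waits passively on the condition variable.
class LevelBarrier {
 public:
  explicit LevelBarrier(int nw) : nw_(nw), waiting_(0), generation_(0) {}

  void arriveAndWait() {
    std::unique_lock<std::mutex> lock(m_);
    int gen = generation_;
    if (++waiting_ == nw_) {     // last thread releases everybody
      waiting_ = 0;
      ++generation_;
      cv_.notify_all();
    } else {
      cv_.wait(lock, [&] { return gen != generation_; });
    }
  }

 private:
  std::mutex m_;
  std::condition_variable cv_;
  int nw_, waiting_, generation_;
};
\end{verbatim}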
To handle the critical section in the check and update of the visited nodes, a vector of atomic booleans was used as the vector of visits. The check and update is accomplished via a \texttt{compare and exchange} instruction, performed at the node level when processing the frontier. Meanwhile, at the neighborhood level (the one implemented in the sequential version), a non-atomic vector of booleans, the \textit{vector of insertion}, is used to avoid inserting duplicates. Since the vector of insertion is composed of non-atomic booleans, it does not guarantee that duplicates will never be added, but the checks at node level guarantee that a node will not be visited more than once. This choice was made because the vector of insertion leaves few duplicates in the frontier, so the checks at node level will often have the same outcome, which helps exploiting the pipeline of modern processors through branch prediction. The probability of adding duplicates grows as the $C(v)$ of the nodes $v$ in the previous frontier increases. A downside of the introduction of this vector is that it increases the probability that false sharing occurs.
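The following sketch (C++, hypothetical names) illustrates the interplay between the atomic vector of visits and the non-atomic vector of insertion described above:
\begin{verbatim}
#include <atomic>
#include <vector>

// Per-node check-and-claim on the shared vector of visits: exactly one worker
// wins the compare-and-exchange and processes the node.
bool tryClaim(std::vector<std::atomic<bool>>& visited, int node) {
  bool expected = false;
  return visited[node].compare_exchange_strong(expected, true);
}

void processChunk(const std::vector<int>& frontierChunk,
                  std::vector<std::atomic<bool>>& visited,
                  const std::vector<std::vector<int>>& adj,
                  std::vector<bool>& inserted,        // vector of insertion
                  std::vector<int>& localNextFrontier) {
  for (int v : frontierChunk) {
    if (!tryClaim(visited, v)) continue;              // another worker got it
    for (int u : adj[v]) {
      // Best-effort, non-atomic filter: it only reduces duplicates in the
      // local frontier; correctness is guaranteed by tryClaim above.
      if (!inserted[u]) {
        inserted[u] = true;
        localNextFrontier.push_back(u);
      }
    }
  }
}
\end{verbatim}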
\subsubsection{Overhead analysis}
\label{sec:overhead}
To better evaluate the performance as the number of threads increases, it is useful to observe the trend shown by the overheads. With this type of analysis it is also possible to choose the effective number of threads to use when the nature of the graph, intended as its topology and statistical properties, is known (e.g. using the approximation of the completion time provided in Section~\ref{sec:comp-time}). Three classes of overhead can be identified:
\begin{enumerate}
	\item The first class grows linearly with the number of threads; it contains the creation of the threads, $\bar{T}_{thd}$.
	\item The second class grows linearly with $\bar{d}$ (the number of frontiers); it contains the synchronization time and the waiting for the lock, $T_{merge}$.
	\item The third class depends on the size of $F_i$; it contains the overhead of atomically checking and updating the vector of visits, $T_{visit}$, and the overhead needed to pop a new task, $T_{pop}$.
\end{enumerate}
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|| l | c ||}
\hline
$T_{thd}$ & $130 \mu s$ \\ \hline
$T_{visited}$ & $280ns$ \\ \hline
$T_{pop}$ & $480ns$ \\ \hline
\hline
\end{tabular}
\end{center}
\label{tab:static-overhead-results}
\caption{Static overhead measurements}
\end{table}
Even though the synchronization belongs to the second class, its value depends on the number of threads, as shown in Figure~\ref{fig:level-sync}. The same applies to $T_{merge}$, the lock waiting, which also depends on the size of the local frontiers. Its trend is non-trivial, as shown in Figure~\ref{fig:lock}: the overhead increases with the number of threads, but if the generated local frontiers are small, as in the case of Erdos-Renyi graphs with low density, the growth slows down.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/level-sync.pdf}
\caption{Level-synchronization overheads}
\label{fig:level-sync}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/lock-wait.pdf}
\caption{Lock overheads}
\label{fig:lock}
\end{minipage}
\end{figure}
\subsubsection{Fastflow implementation}
The \texttt{FastFlow} version implements the same mechanisms as the \texttt{pthread} version, except for the synchronization, since it uses a Master-Worker with a feedback queue. The master, after having performed the initialization phase and told the threads which pointer to use for the queue where they write the nodes they find, performs the same work as the workers and orchestrates the level-synchronization. The level-synchronization is implemented through the task queues and the feedback queue, which is used to notify the master with the number of occurrences found in the level. Whenever the master has received the feedback from all the workers, it swaps the pointers of $F_i$ and $F_{i+1}$ and tells the threads to work on the next level by pushing the new pointers onto the task queues of each worker. To start the workers, the master initially pushes the starting pointers onto the task queues.
\subsubsection{Results}
This section reports the results obtained by both the \texttt{pthread} and the \texttt{FastFlow} implementations. All the synthetic datasets have been used in order to show empirically what has been discussed so far. The results are shown by plotting the performance in terms of completion time and speedup. The results obtained by the \texttt{FastFlow} version are in general worse than those of the \texttt{pthread} implementation, and the gap shows an increasing trend as the number of threads grows. This gap is probably due to the fact that the \texttt{pthread} version can be more problem-specific and implement the bare minimum in a more efficient way, especially in the synchronization mechanisms.
\\
As discussed earlier, as the number of nodes in the frontiers increases, an increase in speedup is expected. In the Erdos-Renyi model, with the number of nodes $n$ fixed, increasing $p$ yields graphs with a smaller number of frontiers, i.e. a smaller $\bar{d}$, and very large frontiers. This is well highlighted by comparing the plots in Figures
\ref{fig:perf_02}, \ref{fig:perf_04} and \ref{fig:perf_08}. Interestingly, as the number of nodes in the frontiers increases and the number of frontiers decreases (because $n$ is fixed), the gap between the performances of the two versions shows a decreasing trend.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_performance_02_time.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_speedup_02_time.pdf}
\end{minipage}
\caption{Performance and speedup using ER(10000, 0.2)}
\label{fig:perf_02}
\begin{minipage}{1\textwidth}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_performance_04_time.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_speedup_04_time.pdf}
\end{minipage}
\caption{Performance and speedup using ER(10000, 0.4)}
\label{fig:perf_04}
\begin{minipage}{1\textwidth}
\end{minipage}
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_performance_08_time.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_speedup_08_time.pdf}
\end{minipage}
\caption{Performance and speedup using ER(10000, 0.8)}
\label{fig:perf_08}
\end{figure}
Moreover, as mentioned in Section~\ref{sec:how-it-perform}, it is not only a matter of density: different factors must be taken into account, such as the number of nodes and $\bar{d}$. This is clear when comparing the results obtained in Figures~\ref{fig:perf_08} and \ref{fig:real-case}.
\begin{figure}[h]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_performance_002_time.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_speedup_002_time.pdf}
\end{minipage}
\begin{minipage}{1\textwidth}
\caption{Performance and speedup using ER(100000, 0.002)}
\label{fig:perf_002}
\end{minipage}
\end{figure}
\subsection{A real use case}
To further evaluate the solutions, a real use case is presented, using as input a real network of $149279$ nodes and $\approx 10530774$ edges, hence a density of $0.00095$. The network was built from the interactions that happened during October 2020 in the /r/politics subreddit. The nodes represent users who have commented or written posts, while an edge from user A to user B indicates that the former has posted a comment in reply to the latter. The results obtained are reported in terms of performance and speedup in Figure~\ref{fig:real-case}.
\begin{figure}[h]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_performance_real_time.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{plots/fastflow_speedup_real_time.pdf}
\end{minipage}
\caption{Real use case speedup}
\label{fig:real-case}
\end{figure}
| {
"alphanum_fraction": 0.7441842032,
"avg_line_length": 72.0942857143,
"ext": "tex",
"hexsha": "e77e2477d75278369c90969b45a3073b8ee3f2fe",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0a2829febd37dc98c1c4a77f4bdda25f517ff815",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "giuseppegrieco/parallel-bfs",
"max_forks_repo_path": "report/sections/2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0a2829febd37dc98c1c4a77f4bdda25f517ff815",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "giuseppegrieco/parallel-bfs",
"max_issues_repo_path": "report/sections/2.tex",
"max_line_length": 1150,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0a2829febd37dc98c1c4a77f4bdda25f517ff815",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "giuseppegrieco/parallel-bfs",
"max_stars_repo_path": "report/sections/2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6404,
"size": 25233
} |
\documentclass[fleqn, final]{../styles/unmphythesis}
\usepackage{../styles/qxd}
\renewcommand{\thechapter}{6}
%\newcommand{\thechapter}{1}
\makeindex
\begin{document}
\title{Enhanced cooperativity for quantum-nondemolition-measurement--induced spin squeezing of atoms coupled to a nanophotonic waveguide}
%================================================================%
%\begin{abstract}
%We study the enhancement of cooperativity in the atom-light interface near a nanophotonic waveguide for application to quantum nondemolition (QND) measurement of atomic spins. Here the cooperativity per atom is determined by the ratio between the measurement strength and the decoherence rate. Counterintuitively, we find that by placing the atoms at an azimuthal position where the guided probe mode has the lowest intensity, we increase the cooperativity. This arises because the QND measurement strength depends on the interference between the probe and scattered light guided into an orthogonal polarization mode, while the decoherence rate depends on the local intensity of the probe. Thus, by proper choice of geometry, the ratio of good to bad scattering can be strongly enhanced for highly anisotropic modes. We apply this to study spin squeezing resulting from QND measurement of spin projection noise via the Faraday effect in two nanophotonic geometries, a cylindrical nanofiber and a square waveguide. We find that, with about 2500 atoms and using realistic experimental parameters, $ \sim 6.3 $ and $ \sim 13 $ dB of squeezing can be achieved on the nanofiber and square waveguide, respectively.
%\end{abstract}
%\maketitle
%<*Faradayprotocol>
\chapter[Enhanced cooperativity for measurement--induced spin squeezing]{Enhanced cooperativity for QND-measurement--induced spin squeezing of atoms coupled to a nanophotonic waveguide}\label{chap:Faraday}
%===================INTRODUCTION=====================%
\section{Introduction}
Cooperativity is a measure of the entangling strength of the atom-light interface in quantum optics. Originally introduced in cavity quantum electrodynamics (QED), the cooperativity per atom, $C_1$, can be expressed in terms of the ratio of the coherent coupling rate to the decoherence rates, $C_1 = g^2/(\Gamma_c \Gamma_A)$, where $g$ is the vacuum Rabi frequency, $\Gamma_c$ is the cavity decay rate, and $\Gamma_A$ is the atomic spontaneous emission rate out of the cavity~\cite{Kimble1998}. Alternatively, we can write $C_1 = (\sigma_0/A) \mathcal{F}$, where $\sigma_0$ is the resonant photon scattering cross section of the atom, $A$ is the cavity mode area, and $\mathcal{F}$ is the cavity finesse. Expressed in this way, cooperativity is seen to arise due to scattering of photons preferentially into the cavity mode, compared to emission into free space, here enhanced by the finesses due to the Purcell effect. Strong coupling dynamics seen in pioneering experiments in atomic cavity QED~\cite{Raimond2001Manipulating, Miller2005} is now a mainstay in quantum information processing in systems ranging from quantum dots~\cite{Akimov2007, Akopian2006, Liu2010} to circuit QED~\cite{Wallraff2004Strong, Hofheinz2009Synthesizing}. The $N_A$ atom cooperativity, $C_N = (N_A \sigma_0/A) \mathcal{F} =( OD) \mathcal{F}$, where $OD$ is the resonant optical depth. In this configuration, the collective degrees of the atom can be manipulated by their common coupling to the cavity mode.
Cooperativity also characterizes the atom-light interface in the absence of a cavity. In free space, an atom at the waist of a laser beam will scatter into the forward direction at a rate $\kappa \propto (\sigma_0/A) \gamma_s$, where $\gamma_s$ is the photon scattering rate into $4 \pi$ steradians~\cite{Baragiola2014}. Here the single-atom cooperativity can be expressed to be proportional to the ratio of these rates, $C_1 \propto \kappa/\gamma_s \propto \sigma_0/A$. The $N_A$-atom cooperativity, in a plane-wave approximation, ignoring effects of diffraction and cloud geometry~\cite{Baragiola2014}, $C_N \propto N_A \sigma_0/A = OD$. To be self-consistent, here the beam area must be very large, so $C_1$ is very small, e.g., $C_1 \sim 10^{-6}$, but for a sufficiently large ensemble, the $OD$ can be large enough to lead to entanglement between the collective atomic degrees of freedom and the light. In this situation, measurement of the light leads to back action on the ensemble and, for an appropriate quantum nondemolition (QND) interaction, results in squeezing of the collective spin~\cite{Kuzmich1998, Takahashi1999Quantum}. QND measurement-induced spin squeezing has been observed in free-space dipole-trapped ensembles~\cite{Appel2009Mesoscopic, Takano2009Spin, Sewell2012Magnetic} and in optical cavities~\cite{Schleier-Smith2010States, Cox2016Deterministic, Hosten2016}. The rate of decoherence is set by the rate of optical pumping, $\gamma_{op} \propto \gamma_s$, and we can characterize the cooperativity per atom as $C_1 = \kappa/\gamma_{op}$.
In recent years nanophotonic waveguides have emerged as a new geometry that complements cavity QED, and can lead to strong cooperativity~\cite{Vetsch2010Optical, Chang2013, Hung2013, Yu2014, Douglas2015, Asenjo-Garcia2017Exponential, Qi2016}. Notably, the effective beam area of a tightly guided mode can be much smaller than in free space and propagate for long distances without diffraction. As such, $\sigma_0/A$ can be orders of magnitude larger than in free space, e.g., $\sigma_0/A \sim 0.1$, and contribute collectively for a modest ensemble of a few thousand atoms trapped near the surface of the waveguide. Moreover, in some cases the Purcell effect can further enhance forward scattering into the guided mode compared with scattering into free space. Taken together, these features make nanophotonic waveguides a promising platform for the quantum atom-light interface.
In this chapter we show that one can achieve an additional enhancement to the cooperativity in a nanophotonic geometry that is not possible in free space. In particular, we consider the QND measurement of the collective spin of an atomic ensemble via a Faraday interaction\index{Faraday effect} followed by polarization spectroscopy. In this configuration the polarimeter effectively performs a homodyne measurement, where the probe is the ``local oscillator," which interferes with the light scattered into the orthogonally polarized guided mode~\cite{Baragiola2014}. This signal thus depends on the spatial overlap of the two orthogonal polarization modes at the position of the atom. In contrast, decoherence due to photon scattering into unguided $4 \pi$ steradians occurs at a rate $\gamma_s$ determined only by the intensity of the probe. The net result is that the cooperativity per atom, $C_1 \propto \kappa/\gamma_s$, primarily depends on the strength of the {\em orthogonal polarization mode}, and this factor can be enhanced, especially for highly anisotropic guided modes. Counterintuitively, we see that the strongest cooperativity arises when the atom is placed at the position of minimum intensity of the azimuthally anisotropic probe mode where the intensity of the initially unoccupied orthogonal mode is maximum.
We study the enhanced cooperativity for two nanophotonic geometries: a cylindrical nanofiber formed by tapering a standard optical fiber with cold atoms trapped in the evanescent wave, as recently employed in a variety of experimental studies~\cite{Vetsch2010Optical, Goban2012, Reitz2013, Lee2015, Goban2014, Reitz2014, Volz2014Nonlinear, Beguin2014, Mitsch2014,Kato2015Strong, Sayrin2015, Sayrin2015a, Mitsch2014a, Solano2017Dynamics, Beguin2017Observation}, and a nanofabricated suspended square waveguide, currently investigated at Sandia National Laboratories~\cite{Lee2017Characterizations}. For each geometry we study the use of the Faraday effect\index{Faraday effect} and polarimetry to perform a QND measurement of the magnetic spins~\cite{Smith2003a}, and, thereby, induce squeezing of collective spins of cesium atoms. A dispersive measurement of the number of atoms trapped near the surface of an optical nanofiber was first performed in~\cite{Dawkins2011}, and quantum spin projection noise was recently detected using a QND measurement with a two-color probe in~\cite{Beguin2014} and~\cite{Beguin2017Observation}. Previously, we studied QND measurement-induced spin squeezing mediated by a birefringence interaction~\cite{Qi2016}. We see here that, through the enhanced cooperativity, QND measurement via the Faraday effect can lead to substantial squeezing, greater than 10 dB in some geometries, for 2500 atoms.
The remainder of the chapter is organized as follows. In Sec. II, we lay out the theoretical description of the QND measurement and the relevant measurement strength. In addition, we describe how decoherence is included in the model through a first-principles stochastic master equation, here for the case of alkali atoms, cesium in particular. From this we see how cooperativity emerges as the key parameter that characterizes the squeezing. We calculate in Sec. III the squeezing dynamics for the different nanophotonic waveguides, atomic preparations, and measurement protocols. We conclude with a summary and outlook for future work in Sec. IV.
%========================== Theory ===================================%
\section{QND measurement and cooperativity via the Faraday effect} \label{Sec::QNDandCooperativityTheory}
The theoretical framework describing the propagation of light guided in a nanofiber and interacting with trapped atoms in the dispersive regime was detailed in our previous work~\cite{Qi2016}. We review the salient features here and include the generalization to the square waveguide.
For waveguides that are symmetric under a $\pi/2$ rotation around the $z$ (propagation) axis, there are two degenerate polarizations for each guided mode and for each propagation direction. Assuming a nanophotonic waveguide that supports only the lowest order guided mode, and restricting our attention to modes propagating in the positive $z$-direction, we denote $\mbf{u}_H(\mbf{r}_\perp)$ and $\mbf{u}_V(\mbf{r}_\perp)$ as the horizontally and vertically polarized modes that adiabatically connect to $x$ and $y$ linearly polarized modes, respectively, as the cross section of the waveguide becomes large compared to the optical wavelength. Note that in typical nanophotonic geometries these guided modes also have a nonnegligible $z$ component. For a cylindrically symmetric nanofiber, these are the well-studied HE$_{11}$ modes; for a \SWG, these are the quasi-TE$_{01}$ and quasi-TM$_{01}$ modes, shown in Fig.~\ref{fig:nanofiberSWG_E_ints}. One can solve for the guided modes of a cylindrical fiber analytically~\cite{Kien2004,Vetsch2010Opticala,Qi2016}. We use a vector finite difference method to numerically solve for the guided eigenmodes of the square waveguide~\cite{Fallahkhair2008} with core material of $ \rm{Si}_3\rm{N}_4 $ whose index of refraction is $ n=2 $~\cite{Lee2013}.
\begin{figure}[tbp]
\centering
\includegraphics[width=.69\textwidth]{../media/Figs/nanofiberswg_Hmode6}
	\caption[Fundamental guided modes of the nanophotonic waveguides.]{Fundamental guided modes of the nanophotonic waveguides. (a) Electric field components of the $H$-polarized HE$_{11}$ mode of a circular nanofiber. From left to right: $ \mathrm{Re}[u_x(\br\!_\perp)] $, $ \mathrm{Re}[u_y(\br\!_\perp)] $, and $ \mathrm{Im}[u_z(\br\!_\perp)] $ in the $ xy $ plane. (b) Same as (a) but for the $H$-polarized quasi-TE$_{01}$ mode of a square waveguide. Black lines outline the waveguide boundary. The color scale is normalized to the maximum value of all field components for each waveguide mode. All other mode components not shown for both waveguide geometries vanish everywhere. (c) The normalized intensity distribution on the transverse plane for both geometries. Blue arrows indicate the local electric field's direction and amplitude (relative length) at positions along the vertical waveguide axis, which only have an $ x $-component of the mode. Stars indicate typical positions of trapped atoms ($r_\perp'/a=1.8 $ for the nanofiber~\cite{Vetsch2010Optical} and a similar scale for the \SWG, $ r_\perp'/w=1.0 $, where $ a $ and $ w $ are the radius and width of the waveguides respectively). Dotted light gray lines show the corresponding $ V $-mode contour which is the $ H $ mode rotated by $ 90^\circ $ around the waveguide propagation axis. The atoms' azimuthal position is chosen where the $V$ mode is strongest. }\label{fig:nanofiberSWG_E_ints}
\end{figure}
The quasimonochromatic positive frequency component of the quantized field associated with these guided modes $(g)$ at frequency $\omega_0$ takes the form
\begin{align}\label{eq:Ebp}
\hat{\mathbf{E}}^{(+)}_g(\mbf{r}, t) &= \sqrt{ \frac{2 \pi \hbar \omega_0}{ v_g} } \left[\mathbf{u}_H(\mbf{r}\!_\perp) \hat{a}_H(t) + \mathbf{u}_V(\mbf{r}\!_\perp) \hat{a}_V(t)\right] e^{i (\beta_0 z- \omega_0 t)},
\end{align}
where $v_g$ is the group velocity, and $ \beta_0 $ is the propagation constant of the guided modes. In the first Born approximation the dispersive interaction of the guided field with $N_A$ atoms trapped near the surface of the waveguide at positions $\{\mbf{r}'_\perp, z_n\}$, detuned far from resonance, is defined by the scattering equation~\cite{Qi2016},
\begin{equation}
\hat{\mathbf{E}}^{(+)}_{g,out}(\mbf{r}, t)=\hat{\mathbf{E}}^{(+)}_{g,in}(\mbf{r}, t)+\sum_{n=1}^{N_A} \tensor{\mbf{G}}_{g} (\mbf{r}, \mbf{r}'_n;\omega_0) \cdot \hat{\tensor{\boldsymbol{\alpha}}}{}^{(n)} \cdot \hat{\mathbf{E}}^{(+)}_{g,in}(\mbf{r}'_n, t),
\end{equation}
where $\hat{\tensor{\boldsymbol{\alpha}}}{}^{(n)}$ is the atomic polarizability operator of the $n$th atom, and
\begin{equation}
\tensor{\mathbf{G}}^{(+)}_g(\br,\br'_n; \omega_0) = 2\pi i \frac{\omega_0}{v_g } \sum_{p} \mathbf{u}_{p} (\br_\perp)\mathbf{u}^*_{p}
(\br_{\perp}^\prime) e^{i \beta_0(z-z'_n)} \label{Eq::GreensGuided_chap6}
\end{equation}
is the dyadic Green's function for a dipole to radiate into the forward-propagating guided mode. In principle, the Green's function for an $N_A$-atom chain decomposes into collective sub- and superradiant normal modes~\cite{Asenjo-Garcia2017Atom,Asenjo-Garcia2017Exponential}, but in the far-detuning limit, these are all equally excited. The result is equivalent to the symmetric mode of independently radiating dipoles. \FloatBarrier The input-output relation for the mode operators then reads~\cite{Qi2016}
\begin{equation}
\hat{a}^{out}_p(t) = \hat{a}^{in}_p(t) +i \sum_{p'} \hat{\phi}_{p,p'} \hat{a}^{in}_{p'}(t) ,
\end{equation}
where
\begin{equation}
\hat{\phi}_{p,p'} = 2\pi \frac{\omega_0}{v_g} \mbf{u}^*_p (\mbf{r}'_\perp) \cdot \sum_{n=1}^{N_A} \hat{\tensor{\boldsymbol{\alpha}}}{}^{(n)} \cdot \mbf{u}_{p'} (\mbf{r}'_\perp)
\end{equation}
is the phase operator associated with scattering polarization $p' \rightarrow p$ by a collective atomic operator. When $p=p'$, this is a phase shift; for $p \neq p'$, this leads to a transformation of the polarization of the guided mode.
The Faraday effect\index{Faraday effect} arises from the irreducible rank 1 (vector) component of the polarizability tensor~\cite{Deutsch2010a}. Given an atom with hyperfine spin $f$, this contribution is $\hat{\alpha}^{vec}_{ij} = i \alpha_1 \epsilon_{ijk} \hat{f}_k$, where $\alpha_1 = C^{(1)}_{f}\frac{\sigma_0}{4\pi k_0}\frac{\Gamma_A}{2\Delta} $ is the characteristic polarizability. In alkali atoms $C^{(1)}_f=\mp\frac{1}{3f}$ for the D1- and D2-line transitions, respectively. We take the detuning, $ \Delta $, large compared to the excited-state hyperfine splitting. The resonant scattering cross section on a unit oscillator strength is $\sigma_0 = 6\pi/k_0^2$, where $k_0=\omega_0/c$. The polarization transformation associated with scattering from $H$ to $V$ mode is determined by the operator
\begin{equation}
\hat{\phi}_{VH} = i 2\pi \frac{\omega_0}{v_g}\alpha_1 \left[ \mbf{u}^*_V (\mbf{r}'_\perp) \times \mbf{u}_{H} (\mbf{r}'_\perp) \right] \cdot \hat{\mbf{F}},
\end{equation}
where $\hat{\mbf{F}}=\sum_n \hat{\mbf{f}}^{(n)}$ is the collective spin of the atomic ensemble. Thus,
\begin{equation}\label{eq:aoutain}
\hat{a}^{out}_V(t) = \hat{a}^{in}_V(t) +i \hat{\phi}_{V,H} \hat{a}^{in}_{H}(t)= \hat{a}^{in}_V(t) - 2\pi \frac{\omega_0}{v_g}\alpha_1 \left[ \mbf{u}^*_V (\mbf{r}'_\perp) \times \mbf{u}_{H}(\mbf{r}'_\perp)\right] \cdot \hat{\mbf{F}}\, \hat{a}^{in}_{H}(t),
\end{equation}
and similarly for scattering from $V$ to $H$.
\begin{figure}[htb]
\centering
\includegraphics[width=.95\textwidth]{../media/Figs/FaradaySchematics}
\caption[Schematic polarization spectroscopy geometry and the evolution of the probe for the QND measurement and spin-squeezing generation based on the Faraday effect. ]{(a) Schematic polarization spectroscopy geometry for the QND measurement and spin-squeezing generation based on the Faraday effect\index{Faraday effect}. Atoms trapped near the surface of the nanophotonic waveguide cause a Faraday rotation of the guided light, which is measured in a polarimeter that detects the $S_2$ component of the Stokes vector (intensity in the diagonal $D$ minus antidiagonal $\bar{D}$ modes). (b) The evolution of the light's polarization state on the \Poincare sphere (left to right). The Stokes vector of the light is prepared along the $S_1$ direction, and the Faraday interaction causes a rotation around the $S_3$ axis. Shot noise, shown as the uncertainty bubble, limits the resolution of the detection. (c) Evolution of the collective state before and after measurement (left to right). The spin is prepared in a coherent state\index{state!spin coherent state}, with projection noise\index{noise!projection noise} shown as the uncertainty bubble. After the measurement the uncertainty in $F_z$ is squeezed, and the direction is correlated with the measurement outcome on the polarimeter. }\label{fig:spinsqueezingschematic}
\end{figure}
The polarization transformation can be expressed as a rotation of the Stokes vector of the light on the \Poincare sphere with operator components
\begin{subequations}\label{Eq::StokesComponents}
\begin{align}
\hat{S}_1(t) & = \smallfrac{1}{2}\big[ \hat{a}^\dag_H(t) \hat{a}_H(t)-\hat{a}^\dag_V(t) \hat{a}_V(t) \big], \\
\hat{S}_2(t) & = \smallfrac{1}{2}\big[ \hat{a}^\dag_H(t) \hat{a}_V(t)+\hat{a}^\dag_V(t) \hat{a}_H(t) \big], \\
\hat{S}_3(t) & = \smallfrac{1}{2i}\big[ \hat{a}^\dag_H(t) \hat{a}_V(t) -\hat{a}^\dag_V(t) \hat{a}_H(t) \big].
\end{align}
\end{subequations}
By measuring the output Stokes vector in a polarimeter, we perform a QND measurement of a collective atomic operator to which it was entangled. In a proper configuration, this leads to squeezing of a collective spin. Launching $H$-polarized light corresponds to the initial Stokes vector along $S_1$, and Faraday rotation\index{Faraday effect} leads to an $S_2$ component, which is measured in a polarimeter [Fig.~\ref{fig:spinsqueezingschematic}(a)]. Taking the $H$-mode as a coherent state\index{state!spin coherent state} with amplitude $\beta_H$, the signal of the polarimeter measures $\hat{S}_2^{out} = (\beta_H \hat{a}_V^{\dag out} +\beta^*_H \hat{a}_V^{out})/2$. Using this expression we see that the polarimeter acts as a homodyne detector, with the input $H$ mode acting as the local oscillator and the photons scattered into the $V$ mode as the signal. Formally, the input-output relation follows from the scattering equation, Eq.~\eqref{eq:aoutain}, and reads
\begin{equation}
\hat{S}^{out}_2 = \hat{S}^{in}_2 +i \big( \hat{\phi}_{VH}- \hat{\phi}_{HV} \big) \hat{S}^{in}_1 = \hat{S}^{in}_2 + \chi_3(\mbf{r}'_\perp) \hat{F}_z \hat{S}^{in}_1.
\end{equation}
The first term $\hat{S}^{in}_2$ represents the shot-noise, which fundamentally limits the resolution of the measurement and thus the spin squeezing that can be obtained in a given time interval. The second term is the homodyne signal, where we have expressed the rotation angle around the 3 axis of the \Poincare sphere as
\begin{equation}
\chi_3(\mbf{r}'_\perp) = -\frac{4 \pi \omega_0}{v_g} \alpha_1 \left\vert \text{Re} \left[ \mbf{u}^*_V (\mbf{r}'_\perp) \times \mbf{u}_H (\mbf{r}'_\perp) \right] \right\vert =-C^{(1)}_f \frac{\sigma_0}{A_{Far}(\mbf{r}'_\perp)} \frac{\Gamma_A}{2 \Delta}.
\end{equation}
We emphasize here the dependence of the rotation angle on the position of the atom in the transverse plane, $\mbf{r}'_\perp$, assumed equal for all atoms in the chain. In particular, $\chi_3(\mbf{r}'_\perp)$ depends on the {\em overlap} of $\mbf{u}_H (\mbf{r}'_\perp)$ and $\mbf{u}_V (\mbf{r}'_\perp)$ , indicating atomic scattering of photons from the $H$ to the $V$ modes associated with the Faraday interaction\index{Faraday effect}. We have characterized this overlap by an effective area that defines the Faraday interaction at the position of the atom,
\begin{equation}\label{eq:AFar}
\AF(\mbf{r}'_\perp) = \frac{1}{n_g \left\vert \text{Re} \left[ \mbf{u}^*_V (\mbf{r}'_\perp) \times \mbf{u}_H (\mbf{r}'_\perp) \right]\right\vert},
\end{equation}
where $n_g = c/v_g$ is the group index. A more tightly confined (smaller) area corresponds to a stronger interaction.
By monitoring the Faraday rotation\index{Faraday effect}, we can perform a continuous measurement on the collective spin projection $\hat{F}_z$.
Note that we have chosen the $ z $ axis as the quantization axis for the collective spin measurement, which follows the detailed discussion in Appendix~\ref{chap:choiceofquantizationaxisFaraday}.
The ``measurement strength," which characterizes the rate at which we gain information and thereby squeeze the spin, is given by
\begin{equation}\label{eq:kappa}
\kappa = \left\vert \chi_3(\mbf{r}'_\perp) \right\vert^2 \frac{P_{\rm in}}{\hbar \omega_0},
\end{equation}
where $P_{in}$ is the input power transported into the guided mode. The measurement strength is the rate at which photons are scattered from the guided $H$ to the guided $V$ mode. Decoherence arises due to diffuse scattering into unguided modes and the accompanied optical pumping of the spin. In principle, the photon scattering rate into $4\pi$ steradians is modified over free space due to the Purcell effect, but we neglect this correction here. In the case of the nanofiber, this is a small effect at typical distances at which the atom is trapped~\cite{LeKien2005,Kien2008}. For the square waveguide, we examine this correction in future work.
Decoherence is due to optical pumping of the spin between different magnetic sublevels. Henceforth, we restrict ourselves to alkali atoms driven on the D1 or D2 line, at optical pumping rate
\begin{equation}
\gamma_{op} =\frac{2}{9} \sigma(\Delta) \frac{I_{\rm in}(\mbf{r}'_\perp)}{\hbar \omega_0}.
\end{equation}
Here $\sigma(\Delta) = \sigma_0 \frac{\Gamma_A^2}{4 \Delta^2}$ is the photon scattering cross section at the detuning $\Delta$ for a unit oscillator strength transition, and the factor of $2/9$ reflects the branching ratio for absorbing a $\pi$-polarized laser photon followed by spontaneous emission of a photon, causing optical pumping to another spin state. $I_{\rm in}(\mbf{r}'_\perp) = n_g P_{\rm in}\vert \mbf{u}_H (\mbf{r}'_\perp) \vert^2 \equiv P_{\rm in}/\Ai(\mbf{r}'_\perp) $ is the input intensity into the guided $H$ mode at the position of the atom, where we have defined
\begin{equation}\label{eq:Ain}
\Ai(\mbf{r}'_\perp) = \frac{1}{n_g \vert \mbf{u}_H (\mbf{r}'_\perp) \vert ^2}
\end{equation}
to be the effective area associated with the input mode. We thus define the cooperativity per atom
\begin{equation}\label{eq:C1}
C_1 (\mbf{r}'_\perp) = \frac{\kappa}{\gamma_{op}} =\frac{\sigma_0}{2f^2} \frac{ \Ai(\mbf{r}'_\perp) }{[\AF(\mbf{r}'_\perp)]^2}.
\end{equation}
This is our central result. Roughly, $1/[\AF(\mbf{r}'_\perp)]^2 \sim \vert \mbf{u}_V (\mbf{r}'_\perp) \vert ^2 \vert \mbf{u}_H (\mbf{r}'_\perp) \vert ^2$, thus $ C_1(\mbf{r}'_\perp) \sim \sigma_0 \vert \mbf{u}_V (\mbf{r}'_\perp) \vert ^2$. In the context of homodyne measurement, the signal to be measured is proportional to the overlap between the $ H $ and the $ V $ modes, while the decoherence rate depends on the intensity of the local oscillator or $ H $ mode. How large the initially {\em unoccupied} $ V $ mode is at the atoms' position determines the signal-to-noise ratio for a QND measurement. We thus enhance the cooperativity by choosing the position of the atoms so that the {\em orthogonal}, unoccupied mode is large, while the intensity of the local input mode that causes decoherence is small.
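Substituting the mode areas of Eqs.~\eqref{eq:AFar} and~\eqref{eq:Ain} into Eq.~\eqref{eq:C1} makes this estimate explicit; the last step drops the order-unity geometric factor of the vector product:
\begin{equation}
C_1 (\mbf{r}'_\perp) = \frac{\sigma_0}{2f^2}\, n_g \frac{\left\vert \text{Re} \left[ \mbf{u}^*_V (\mbf{r}'_\perp) \times \mbf{u}_H (\mbf{r}'_\perp) \right]\right\vert^2}{\vert \mbf{u}_H (\mbf{r}'_\perp) \vert^2} \approx \frac{\sigma_0}{2f^2}\, n_g \vert \mbf{u}_V (\mbf{r}'_\perp) \vert^2 .
\end{equation}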
We contrast this with squeezing arising from a birefringence interaction, as we studied in~\cite{Qi2016}. Linear birefringence corresponds to a relative phase between the ordinary and extraordinary linear polarizations, which can arise due to both the geometry of the anisotropic modes relative to the placement of the atoms, and the atoms' tensor polarizability. Here, the coupling is not optimal at the position of minimum intensity; it is maximum at the angle 45$^\circ$ between the $H$ and the $V$ modes. As such, one will not see as strong an enhancement of the cooperativity as we find in our protocol employing the Faraday effect\index{Faraday effect}.
\begin{figure}[htb]
\centering
\begin{minipage}[h]{0.25\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][$1/\AF$]{
%\input{../media/Figs/nanofiber_invA_Far_xy.tex}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure0_1}
\label{fig:nanofiber_invAFarxy}
}
\vfill
\subfloat[h][$ 1/\Ain $]{
\label{fig:nanofiber_invAin_xy}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure1_1}
%\input{../media/Figs/nanofiber_invA_in_xy.tex}
}
\end{minipage}\hfill
\begin{minipage}[h]{0.75\linewidth}
\subfloat[h][$ C_1 $]{
\label{fig:nanofiber_C1_xy}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure2_1}
%\input{../media/Figs/nanofiber_C1_xy.tex}
}
\end{minipage}
%\end{tabular}
\caption[Contour plots of the effective mode areas and the cooperativity per atom near an optical nanofiber.]{Contour plots of the effective mode areas and the cooperativity per atom near an optical nanofiber. Contour plots of the \protect\subref*{fig:nanofiber_invAFarxy} reciprocal effective Faraday interaction mode area, Eq.~\eqref{eq:AFar}, and \protect\subref*{fig:nanofiber_invAin_xy} the reciprocal input mode area in the $ xy $ plane, Eq.~\eqref{eq:Ain}. An $ x $-polarized incident mode is assumed. \protect\subref*{fig:nanofiber_C1_xy} Contour plot of the cooperativity, Eq.~\eqref{eq:C1} in the $ xy $-plane. The isovalue lines of $ C_1 $ increase by $ 0.002428 $ at each gradient step from the outside inwards. The $ x$ and $y $ coordinates are scaled in units of $ a $ for all three plots.}\label{fig:nanofiber_Aeffgeometry}
\end{figure}
\begin{figure}[htb]
\centering
\begin{minipage}[h]{0.25\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][$1/\AF$]{
%\input{fig/swg_invA_Far_xy.tex}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure3_1}
\label{fig:square waveguide_invAFarxy}
}
\vfill
\subfloat[h][$ 1/\Ain $]{
\label{fig:square waveguide_invAin_xy}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure4_1}
%\input{fig/swg_invA_in_xy.tex}
}
\end{minipage}\hfill
\begin{minipage}[h]{0.75\linewidth}
\subfloat[h][$ C_1 $]{
\label{fig:square waveguide_C1_xy}
\includegraphics[width=0.9\linewidth]{../media/Figs/FaradayProtocol-figure5_1}
%\input{fig/swg_C1_xy.tex}
}
\end{minipage}
%\end{tabular}
\caption[Contour plots of the effective mode areas and the cooperativity per atom near a square waveguide.]{Similar to Fig.~\ref{fig:nanofiber_Aeffgeometry}, but for the square waveguide. In Subfig.~\protect\subref*{fig:square waveguide_invAFarxy}, the contour lines outside of the waveguide are essentially concentric circles. There are distortions near the four corners of the square waveguide shown in the plot, mainly caused by numerical divergences. In Subfig.~\protect\subref*{fig:square waveguide_C1_xy}, the isovalue lines of $ C_1 $ increase by $ 0.005109 $ at each gradient step from the outside inwards. The $ x$ and $y $ coordinates are scaled in units of $ w $ for all three plots.}\label{fig:square waveguide_Aeffgeometry}
\end{figure}
Figures~\ref{fig:nanofiber_Aeffgeometry} and~\ref{fig:square waveguide_Aeffgeometry} show plots of $1/\AF$, $1/\Ai$, and $C_1$ as a function of the position of the atom in the transverse plane, $\mbf{r}'_\perp$, for the two nanophotonic geometries. We see that $\AF$ is essentially cylindrically symmetric sufficiently far from the surface for both the nanofiber and the square waveguide geometries and thus the measurement strength is basically independent of the azimuthal position of the atoms. In contrast, $\Ai$ is azimuthally anisotropic. For input $x$ polarization, $1/\Ai$ is the smallest along the $y$ axis at a given radial distance, which corresponds to the lowest intensity of the input $H$ mode, and thus lowest optical pumping rate $\gamma_{op}$. This angle corresponds to the position at which $\vert \mbf{u}_V (\mbf{r}'_\perp) \vert$ is largest and thus yields the largest enhancement of $C_1$. Thus, counterintuitively, we enhance the cooperativity by placing the atom at the angle of minimum input intensity. This enhancement is even greater for the square waveguide, which has more anisotropic guided modes compared to the cylindrical nanofiber. For typical geometries, given a nanofiber with radius $a = 225$ nm and atoms trapped on the $y-$axis, a distance $200$ nm ($0.8a$) from the surface, the single-atom cooperativity is $C_1 =0.00728$ at the optimal trapping angle; for the square waveguide of width $w =300$ nm, with atoms trapped $150$ nm from the surface, $C_1 =0.0102$ at optimum. Thus, with order $1000$ trapped atoms the $N_A$-atom cooperativity is of the order of $ 10 $, sufficient to generate substantial spin squeezing.
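As a rough numerical check, if the collective cooperativity is taken to scale linearly with the atom number, $C_{N_A} = N_A C_1$, as in the plane-wave estimate of Sec.~I (an assumption that ignores mode and trapping inhomogeneities), the quoted single-atom values give
\begin{equation}
C_{N_A} \approx 1000 \times 0.00728 \approx 7 \quad \text{(nanofiber)}, \qquad C_{N_A} \approx 1000 \times 0.0102 \approx 10 \quad \text{(square waveguide)},
\end{equation}
consistent with the order-of-10 collective cooperativity quoted above.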
\FloatBarrier
\section{Spin-squeezing dynamics}
Given an ensemble of $N_A$ atoms initially prepared in a spin coherent state\index{state!spin coherent state} for the hyperfine spin $f$, polarized in the transverse plane, e.g., along the $x$ axis, a QND measurement of the collective spin $F_z$ will squeeze the uncertainty of that component. The metrologically relevant squeezing parameter is~\cite{Wineland1992},
\begin{align}\label{eq:xi2Faraday}
\xi^2 &\equiv \frac{2 N_A f\expect{\Delta F_z ^2}}{\expect{\hat{F}_x}^2}.
\end{align}
Under the assumption that the state is symmetric with respect to the exchange of any two atoms, valid when we start in a spin coherent state\index{state!spin coherent state} and all couplings are uniform over the ensemble, the collective expectation value can be decomposed into
\begin{subequations}\label{eq:Ftof_squeezing}
\begin{align}
\expect{\Delta F_z^2} &= N_A \expect{\Delta f_z^2}+N_A(N_A-1)\left. \expect{\Delta f_z^{(i)}\Delta f_z^{(j)}}\right|_{i\neq j},\label{eq:DeltaFz2}\\
\expect{\hat{F}_x } & =N_A \expect{\hat{f}_x} \label{eq:expectFx}.
\end{align}
\end{subequations}
The first term in Eqs.~\eqref{eq:DeltaFz2} and~\eqref{eq:expectFx} is the projection noise\index{noise!projection noise} associated with the $N_A$ identical spin-$f$ atoms, and the second term in Eq.~\eqref{eq:DeltaFz2} is determined by two-body covariances, $ \left.\expect{\Delta f_z^{(i)}\Delta f_z^{(j)}}\right|_{i\neq j}=\expect{\Delta f_z^{(1)}\Delta f_z^{(2)}} = \expect{\hat{f}_z^{(1)}\hat{f}_z^{(2)}}-\expect{\hat{f}_z^{(1)}} \expect{\hat{f}_z^{(1)}} $. Negative values of these two-body correlations correspond to the pairwise entanglement between atoms, leading to spin squeezing~\cite{Wang2003Spin}. Note that when the detuning is sufficiently far off-resonance, all collective sub- and superradiant modes~\cite{Asenjo-Garcia2017Atom,Asenjo-Garcia2017Exponential} are equally (and thus symmetrically) excited. In this chapter, we work in the dispersive regime with a few thousand atoms and can safely ignore the atom-atom interaction caused by multiple scattering, and hence the collective atomic system satisfies the exchange symmetry.
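As a sanity check of the normalization, for the initial spin coherent state one has $ \expect{\Delta f_z^2}=f/2 $, vanishing two-body covariances, and $ \expect{\hat{f}_x}=f $, so that Eqs.~\eqref{eq:xi2Faraday} and~\eqref{eq:Ftof_squeezing} give
\begin{align}
\xi^2 = \frac{2 N_A f \left(N_A f/2\right)}{\left(N_A f\right)^2} = 1, \nonumber
\end{align}
i.e., the squeezing parameter is referenced to the projection noise of the uncorrelated ensemble, and $ \xi^2<1 $ signals metrologically useful squeezing.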
To study the spin-squeezing dynamics, we follow the method first developed by Norris~\cite{Norris2014}. We employ a first-principles stochastic master equation for the collective state of $N_A$ atoms,
\begin{align}\label{eq:totaldrhodt}
\mathrm{d}\hat{\rho}= \left.\mathrm{d}\hat{\rho}\right|_{QND}+\left.\mathrm{d}\hat{\rho}\right|_{op}.
\end{align}
The first term on the right-hand side of Eq.~\eqref{eq:totaldrhodt} governs the spin dynamics arising from QND measurement~\cite{Jacobs2006,Baragiola2014},
\begin{align}
\left.\mathrm{d}\hat{\rho}\right|_{QND} &= \sqrt{\frac{\kappa}{4}}\mathcal{H}\left[\hat{\rho} \right]\mathrm{d}W + \frac{\kappa}{4}\mathcal{L}\left[ \hat{\rho}\right]\mathrm{d}t,
\end{align}
where $\kappa$ is the measurement strength defined in Eq.~\eqref{eq:kappa}, and $\mathrm{d}W$ is a stochastic Wiener interval. The conditional dynamics are generated by superoperators that depend on the {\em collective} spin:
\begin{subequations}
\begin{align}
\mathcal{H}\left[ \hat{\rho}\right] &= \hat{F}_z \hat{\rho} + \hat{\rho}\hat{F}_z -2\expect{\hat{F}_z}\hat{\rho}, \\
\mathcal{L}\left[ \hat{\rho} \right] &= \hat{F}_z \hat{\rho}\hat{F}_z -\frac{1}{2}\left(\hat{\rho}\hat{F}_z^2+\hat{F}_z^2\hat{\rho} \right)=\frac{1}{2}\left[\hat{F}_z,\left[\hat{\rho},\hat{F}_z \right] \right].
\end{align}
\end{subequations}
The second term governs decoherence arising from optical pumping, which acts {\em locally} on each atom, $\mathrm{d}\hat{\rho}|_{op}=\sum_n^{N_A} \mathcal{D}^{(n)}\left[ \hat{\rho}\right] \mathrm{d}t$, where
\begin{equation}
\mathcal{D}^{(n)}\left[ \hat{\rho}\right] = -\frac{i}{\hbar}\left(\hat{H}^{(n)}_{\rm eff}\hat{\rho} - \hat{\rho} \hat{H}^{(n)\dag}_{\rm eff}\right) + \gamma_{op} \sum_q \hat{W}^{(n)}_q \hat{\rho}\hat{W}^{(n)\dag}_q.
\label{op_superator}
\end{equation}
Here $\hat{H}^{(n)}_{\rm eff}$ is the effective non-Hermitian Hamiltonian describing the local light shift and absorption by the $n$th atom and $\hat{W}^{(n)}_q$ is the jump operator corresponding to optical pumping through absorption of a laser photon followed by spontaneous emission of a photon of polarization $q$~\cite{Deutsch2010a} (see Appendix~\ref{Sec::opticalpumpinginrotatingframe}).
The rate of decoherence is characterized by the optical pumping rate, $\gamma_{op}$. Note that the optical pumping superoperator, Eq.~\eqref{op_superator}, is not trace preserving when restricted to a given ground-state hyperfine manifold $f$. Optical pumping that transfers atoms to the other hyperfine manifold in the ground-electronic state is thus treated as loss. Moreover, if the atoms are placed at the optimal position in the transverse plane, the local field of the guided mode is linearly polarized. In that case the vector light shift vanishes, and for detunings large compared to the excited-state hyperfine splitting, the rank-2 tensor light shift is negligible. The light shift is thus dominated by the scalar component, which has no effect on the spin dynamics. In that case $\hat{H}_{\rm eff} = -i \frac{\hbar\gamma_{op}}{2} \hat{\mathbb{1}}$, representing an equal rate of absorption for all magnetic sublevels.
Following the work of Norris~\cite{Norris2014}, the solution to the master equation is made possible by three approximations. First, we restrict the subspace of internal magnetic sublevels that participate in the dynamics. The system is initialized in a spin coherent state\index{state!spin coherent state}, with all atoms spin-polarized along the $x$ axis. We denote this the ``fiducial state," $\ket{\uparrow} = \ket{f, m_x =f}$. Through QND measurement, spin squeezing is induced by entanglement with the ``coupled state," $\ket{\downarrow} = \ket{f, m_x=f-1}$. Optical pumping is dominated by ``spin flips" $\ket{\uparrow}\rightarrow \ket{\downarrow}$ and ``loss" due to pumping to the other hyperfine level. Finally, we include a third internal magnetic sublevel, $\ket{T} = \ket{f, m_x=f-2}$, to account for ``transfer of coherences," which can occur in spontaneous emission~\cite{Norris2012Enhanced,Norris2014} (see Fig.~\ref{fig:spinsqueezinglevelstructure}). Restricted to this qutrit basis, the internal hyperfine spin operators are
\begin{subequations}\label{eq:f_in_xbasis}
\begin{align}
\hat{f}_x &= f \hat{\sigma}_{\uparrow \uparrow} +(f-1) \hat{\sigma}_{\downarrow \downarrow} + (f-2) \hat{\sigma}_{T T}, \\
\hat{f}_y &=-i \sqrt{\frac{f}{2}} \left(\hat{\sigma}_{\uparrow \downarrow} - \hat{\sigma}_{\downarrow \uparrow}\right) -i \sqrt{\frac{2f-1}{2}} \left(\hat{\sigma}_{\downarrow T} - \hat{\sigma}_{T \downarrow }\right), \\
\hat{f}_z &= \sqrt{\frac{f}{2}} \left(\hat{\sigma}_{\uparrow \downarrow} + \hat{\sigma}_{\downarrow \uparrow}\right) + \sqrt{\frac{2f-1}{2}} \left(\hat{\sigma}_{\downarrow T} + \hat{\sigma}_{T \downarrow }\right),
\end{align}
\end{subequations}
where we have defined the atomic population and coherence operators $\hat{\sigma}_{ba}=\ket{b}\bra{a}$.
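As a quick numerical illustration (an illustrative sketch only, not part of the production code used for the simulations in this chapter), the qutrit operators are easy to tabulate; the following Python/NumPy snippet builds $ \hat{f}_x $ and $ \hat{f}_z $ from Eqs.~\eqref{eq:f_in_xbasis} for $ f=4 $ and verifies that the fiducial state has $ \expect{\hat{f}_x}=f $ and $ \expect{\Delta f_z^2}=f/2 $:
\begin{verbatim}
import numpy as np

f = 4                      # cesium f=4 ground manifold
up, dn, T = np.eye(3)      # qutrit basis: |up>, |down>, |T>  (m_x = f, f-1, f-2)

def sigma(b, a):
    # atomic population/coherence operator |b><a|
    return np.outer(b, a)

c1, c2 = np.sqrt(f / 2), np.sqrt((2 * f - 1) / 2)
fx = f * sigma(up, up) + (f - 1) * sigma(dn, dn) + (f - 2) * sigma(T, T)
fz = c1 * (sigma(up, dn) + sigma(dn, up)) + c2 * (sigma(dn, T) + sigma(T, dn))

psi = up                   # fiducial state |f, m_x = f>
mean_fx = psi @ fx @ psi
var_fz = psi @ fz @ fz @ psi - (psi @ fz @ psi) ** 2
print(mean_fx, var_fz)     # expect f = 4 and f/2 = 2
\end{verbatim}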
\begin{figure}[!tbp]
\centering
\includegraphics[width=.55\textwidth]{../media/Figs/FaradaySqueezingLevelStructure}
\caption[Schematic energy level diagram for cesium atoms probed on the D2 line, $6S_{1/2} \rightarrow 6P_{3/2}$. ]{Schematic energy level diagram for cesium atoms probed on the D2 line, $6S_{1/2} \rightarrow 6P_{3/2}$. Relevant dynamics are restricted to a truncated qutrit subspace of ground levels. Atoms are prepared in the fiducial state $\ket{\uparrow}$, and driven by $x$-polarized ($\pi$) light. The Faraday rotation corresponds to coherent scattering of $\pi \rightarrow \sigma$ in this basis, and measurement backaction leads to entanglement between pairs of atoms in $\ket{\uparrow}=\ket{f=4,m_x=4}$ and the coupled state $\ket{\downarrow}=\ket{f=4,m_x=3}$. Optical pumping can cause spin flips $\ket{\uparrow} \rightarrow \ket{\downarrow}$ and $ \ket{\downarrow} \rightarrow \ket{T}=\ket{f=4,m_x=2}$. The latter process is included to account for transfer of coherences. The detuning $\Delta$ is taken to be large compared to the excited-state hyperfine splitting.}
\label{fig:spinsqueezinglevelstructure}
\end{figure}
\FloatBarrier
Second, we assume that the collective state is symmetric under exchange of spins. This approximation is valid when all atoms see the same probe intensity and in the far-detuning regime when all sub- and superradiant modes are equally excited. With this, we can limit our attention to the symmetric subspace and define, for example, the symmetric two-body covariances by
\begin{align}
\expect{\Delta\sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)}}_s \equiv \frac{1}{2}\left[\expect{\Delta\sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)}}+\expect{\Delta\sigma_{ba}^{(2)}\Delta\sigma_{dc}^{(1)}} \right] ,
\end{align}
where the superscripts, (1) and (2), label arbitrarily two atoms in the ensemble. Due to the exchange symmetry, $ \expect{\Delta\sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)}}_s=\expect{\Delta\sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)}}=\expect{\Delta\sigma_{ba}^{(2)}\Delta\sigma_{dc}^{(1)}} $, which reduces the number of $ n $-body moments required to simulate the spin dynamics of the ensemble.
Third, we make the Gaussian approximation, valid for large atomic ensembles, so that the many-body state is fully characterized by one- and two-body correlations. Equivalently, the state is defined by the one- and two-body density operators, with matrix elements $\rho^{(1)}_{a, b} =\expect{\hat{\sigma}_{ba}}$, $\rho^{(1,2)}_{ac,bd}=\expect{\Delta \sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)} }_s$ in the symmetric subspace. We track the evolution of the correlation functions through a set of coupled differential equations~\cite{Norris2014}. Optical pumping, acting locally, couples only $n$-body correlations to themselves, e.g.,
\begin{equation}\label{eq:dsigmabadc_op}
\left.d\expect{\Delta \sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)} }_s\right|_{op} = \expect{\mathcal{D}^\dagger[\Delta \sigma_{ba}^{(1)}]\Delta\sigma_{dc}^{(2)} }_sdt + \expect{\Delta \sigma_{ba}^{(1)} \mathcal{D}^\dagger[\Delta\sigma_{dc}^{(2)}] }_sdt .
\end{equation}
QND measurement generates higher order correlations according to
\begin{equation}\label{eq:dsigmaba_QND}
\left.d\expect{\hat{\sigma}_{ba}}\right|_{QND} =\frac{\kappa}{4}\expect{\mathcal{L}^\dagger\left[\hat{\sigma}_{ba} \right]}dt + \sqrt{\frac{\kappa}{4}}\expect{\mathcal{H}^\dagger\left[\hat{\sigma}_{ba} \right]}dW .
\end{equation}
We can truncate this hierarchy in the Gaussian approximation, setting third-order cumulants to $ 0 $. Thus, for example,
\begin{align}
\left.d\expect{\Delta \sigma_{ba}^{(1)} \Delta \sigma_{dc}^{(2)}}_s \right|_{QND} &= \left.d\expect{\hat{\sigma}_{ba}^{(1)} \hat{\sigma}_{dc}^{(2)}}_s \right|_{QND} - \left. \expect{\hat{\sigma}_{ba}} \right|_{QND} \left( \left.d\expect{\hat{\sigma}_{dc}} \right|_{QND}\right) \nonumber\\
&\quad - \left. \expect{\hat{\sigma}_{dc}} \right|_{QND} \left( \left.d\expect{\hat{\sigma}_{ba}} \right|_{QND}\right)
- \left.d\expect{\sigma_{ba}} \right|_{QND}\left.d\expect{\sigma_{dc}} \right|_{QND} \nonumber \\
&= -\kappa\expect{\Delta \sigma^{(1)}_{ba} \Delta F_z }_s \expect{\Delta F_z \Delta \sigma_{dc}^{(2)} }_sdt,\label{eq:dsigmabadc_QND}
\end{align}
where we have employed the Ito calculus $dW^2 = dt$.
Note that when the third-order cumulants are set to $ 0 $, the contribution of the $\mathcal{L}$ superoperator to the dynamics of the two-body covariances vanishes, $ \left.d\expect{\Delta \sigma_{ba}^{(1)} \Delta \sigma_{dc}^{(2)}}_s\right|_\mathcal{L} =\frac{\kappa}{4}\expect{\mathcal{L}^\dagger\left[\Delta\sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)} \right]}_sdt=0 $. This indicates that the events of no-photon detection under the Gaussian-state approximation do not affect atom-atom correlations; the measurement backaction and squeezing arise from the homodyne detection in the guided modes.
Using all of the approximations above, we can efficiently calculate the collective spin dynamics for the ensemble of qutrits (dimension $d=3$) with $ d^2=9 $ equations for the one-body quantity, $ \expect{\hat{\sigma}_{ba}} $, and $ d^2(d^2+1)/2=45 $ equations for the two-body covariances, $ \expect{\Delta \sigma_{ba}^{(1)}\Delta\sigma_{dc}^{(2)} }_s $, in the symmetric subspace independent of the number of atoms. With this formalism in hand, we can calculate the squeezing parameter, Eq.~\eqref{eq:xi2Faraday}, as a function of the time by finding time-dependent solutions for the one-body averages $\expect{\hat{f}_x}$ and $\expect{\Delta f_z^2}$, and the two-body covariances $\expect{\Delta f_z^{(1)} \Delta f_z^{(2)}}$. The detailed approach to calculating the collective spin dynamics is given in Appendix~\ref{Sec::opticalpumpinginrotatingframe}.
Once the dynamics of the microscopic operators are solved, we follow Appendix~\ref{Appendix::collectivespinoperators} to find the dynamics of some key collective operators.
Using this formalism, we calculate the squeezing of an ensemble of cesium atoms, initially spin-polarized in the $6S_{1/2},\ket{f=4, m_x=4}$ state. We choose the guided mode frequency near the D2 resonance, $6S_{1/2}\rightarrow 6P_{3/2}$, far detuned compared to the excited-state hyperfine splitting. In Fig.~\ref{fig:xi_rpfix_NA_t}, we plot the reciprocal of the spin squeezing parameter as a function of the time and its peak as a function of the atom number, $ N_A $, in both the optical nanofiber and the square waveguide cases. By placing atoms $ 200$ nm from the nanofiber surface ($ r'\!_\perp=1.8a $), our simulations for $ 2500 $ atoms yield $ 6.3 $ dB of squeezing. Using the square waveguide platform with the same number of atoms placed $150 $ nm from the surface, our calculation yields $12.9$ dB squeezing. As we have shown in Sec.~\ref{Sec::QNDandCooperativityTheory}, the square waveguide geometry enhances the anisotropic contrast of the two orthogonal guided modes and dramatically reduces the relative local intensity when the atoms are placed on the $ y $ axis. This results in a large cooperativity and higher peak spin squeezing, achieved in a shorter time and with a relatively slower decay compared to the nanofiber.
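For reference, the squeezing quoted in decibels is $ 10\log_{10}\xi^{-2} $, so $ 6.3 $ dB corresponds to $ \xi^2\approx 0.23 $ and $ 12.9 $ dB to $ \xi^2\approx 0.05 $.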
In addition, in Figs.~\ref{fig:nanofiber_peakxi_NA_rp1d8a} and~\ref{fig:square waveguide_peakxi_NA_rp1d} we show how the peak squeezing scales with the number of trapped atoms when the atom positions are fixed as above.
\begin{figure}[!tbp]
\centering
\begin{minipage}[h]{0.99\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][]{
%\input{../media/Figs/nanofiber_xi_t_rp1d8a_NA2500.tex}
\includegraphics[width=0.47\linewidth]{../media/Figs/nanofiber_xi_t_rp1d8a_NA2500.tex} % To put tikzpicture in the includegraphics environment, I have to use package tikzscale. See https://tex.stackexchange.com/questions/36297/pgfplots-how-can-i-scale-to-text-width
%\includegraphics{../media/Figs/FaradayProtocol-figure6}
\label{fig:nanofiber_xi_t_rp1d8a_NA2500}
}
\hfill
\subfloat[h][]{
\label{fig:nanofiber_peakxi_NA_rp1d8a}
%\includegraphics{../media/Figs/FaradayProtocol-figure7}
%\input{../media/Figs/nanofiber_peakxi_NA_rp1d8a.tex}
\includegraphics[width=0.46\linewidth]{../media/Figs/nanofiber_peakxi_NA_rp1d8a.tex}
}
\end{minipage}\vfill
\begin{minipage}[h]{0.99\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][]{
%\input{../media/Figs/swg_xi_t_rp1d_NA2500.tex}
\includegraphics[width=0.47\linewidth]{../media/Figs/swg_xi_t_rp1d_NA2500.tex}
%\includegraphics{../media/Figs/FaradayProtocol-figure8}
\label{fig:square waveguide_xi_t_rp1d_NA2500}
}
\hfill
\subfloat[h][]{
\label{fig:square waveguide_peakxi_NA_rp1d}
%\includegraphics{../media/Figs/FaradayProtocol-figure9}
%\input{../media/Figs/swg_peakxi_NA_rp1d.tex}
\includegraphics[width=0.46\linewidth]{../media/Figs/swg_peakxi_NA_rp1d.tex}
}
\end{minipage}
%\end{tabular}
\caption[Evolution of the reciprocal spin-squeezing parameter and atom-number-dependence of the peak squeezing parameter.]{ Reciprocal spin-squeezing parameter, Eq.~\eqref{eq:xi2Faraday}. [\protect\subref*{fig:nanofiber_xi_t_rp1d8a_NA2500} and~\protect\subref*{fig:square waveguide_xi_t_rp1d_NA2500}] Plots of $ \xi^{-2} $ as a function of time in units of the optical pumping rate $\gamma_{op}$, for the cylindrical nanofiber and square waveguide, respectively, for $N_A =2500$, with other parameters given in the text. These curves peak at the time determined by the detailed balance of reduced uncertainty due to QND measurement and decoherence due to optical pumping. [\protect\subref*{fig:nanofiber_peakxi_NA_rp1d8a} and~\protect\subref*{fig:square waveguide_peakxi_NA_rp1d}] Plots of the peak value $ \xi^{-2} $ as a function of $ N_A $ for the nanofiber and square waveguide, respectively. }\label{fig:xi_rpfix_NA_t}
\end{figure}
\FloatBarrier
In the absence of any other noise, the cooperativity of the atom-light coupling increases as the atoms are placed closer to the waveguide surface. Figures~\ref{fig:nanofiber_peakxi_rp_NA2500} and~\ref{fig:square waveguide_peakxi_rp_NA2500} show the peak squeezing as a function of $ r\!_\perp $ for both the nanofiber and the square waveguide geometries with $2500$ atoms. For the same settings, we also plot the cooperativity, $ C_1 $, on a logarithmic scale in Figs.~\ref{fig:nanofiber_C1_y} and~\ref{fig:square waveguide_C1_y} as a function of the radial distance of the atoms from the center of each waveguide geometry. We find that $C_1$ is proportional to $ e^{-\beta r\!_\perp} $ and the peak squeezing scales as $ \sqrt{OD} $, where $ \beta \approx 1.65/a $ for the nanofiber and $ \beta \approx 2.14\times 2/w $ for the \SWG with $2500$ atoms. The cooperativity of the \SWG increases more rapidly than that of the nanofiber as the atoms approach the waveguide surface.
\begin{figure}[!tbp]
\centering
\begin{minipage}[h]{\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][]{
\includegraphics[width=0.46\linewidth]{../media/Figs/nanofiber_peakxi_rp_NA2500.tex}
%\includegraphics{../media/Figs/FaradayProtocol-figure10}
\label{fig:nanofiber_peakxi_rp_NA2500}
}
\hfill
\subfloat[h][]{
\label{fig:nanofiber_C1_y}
%\includegraphics{../media/Figs/FaradayProtocol-figure11}
\includegraphics[width=0.49\linewidth]{../media/Figs/nanofiber_C1_y.tex}
}
\end{minipage}\vfill
\begin{minipage}[h]{0.95\linewidth}
%\begin{tabular}{*{2}{b{0.2\textwidth-2\tabcolsep}}}
\subfloat[h][]{
\includegraphics[width=0.47\linewidth]{../media/Figs/swg_peakxi_rp_NA2500.tex}
%\includegraphics{../media/Figs/FaradayProtocol-figure12}
\label{fig:square waveguide_peakxi_rp_NA2500}
}
\hfill
\subfloat[h][]{
\label{fig:square waveguide_C1_y}
%\includegraphics{../media/Figs/FaradayProtocol-figure13}
\includegraphics[width=0.475\linewidth]{../media/Figs/swg_C1_y.tex}
}
\end{minipage}
%\end{tabular}
\caption[Peak squeezing parameter and cooperativity for the nanofiber and the square waveguide cases as the atoms change their radial positions. ]{[\protect\subref*{fig:nanofiber_peakxi_rp_NA2500} and~\protect\subref*{fig:square waveguide_peakxi_rp_NA2500}] Peak squeezing parameter and [\protect\subref*{fig:nanofiber_C1_y} and~\protect\subref*{fig:square waveguide_C1_y}] cooperativity at the optimal azimuthal trapping position as a function of the radial distance to the waveguide axis, for $N_A =2500$, with other parameters given in the text. Nanofiber case [\protect\subref*{fig:nanofiber_peakxi_rp_NA2500} and~\protect\subref*{fig:nanofiber_C1_y}]; square waveguide case [\protect\subref*{fig:square waveguide_peakxi_rp_NA2500} and~\protect\subref*{fig:square waveguide_C1_y}]. }\label{fig:peakxi_rp_NA}
\end{figure}
%\FloatBarrier
%\comment{Worth noting that we predict an enhanced spin squeezing effect using the Faraday interaction on the nanofiber interface compared to the birefringence interaction scheme discussed in Ref.~\cite{Qi2016}. Different from the Faraday effect scheme discussed here, the birefringence scheme utilizes the phase difference associated with the $ H $- and $ V $-modes. If we only consider the scalar interaction, the birefringence interaction reaches the maximum value when the intensity difference of the $ H $- and $ V $-mode components of the local field reaches the largest value. That is to place the atoms on the $ x $- or $ H $-axis, or $ 45^\circ $ from the probe's polarization direction. However, in this geometry, the probe's mode intensity at the atom position is not the lowest among other azimuthal positions, which indicates the decoherence is not minimal. Using the definition of cooperativity as the ratio between the birefringence interaction strength (the good part) and the decoherence strength (the bad part), the birefringence squeezing scheme has to choose a non-trivial quantization axis direction in order to optimize the spin squeezing effect incorporating with the small tensor interaction effect of the light and atoms. The fact that the good part and the bad part of the cooperativity of the birefringence scheme cannot be optimized at the same time may be the reason why it doesn't generate as much squeezing as the Faraday scheme. Moreover, the birefringence protocol requires some magic frequency and special filters to cancel the photon number fluctuation and other noise sources, while the Faraday protocol doesn't require such a complicated setting. Last, to be not confused, the two protocols studied here and in Ref.~\cite{Qi2016} are indeed based on different effects. One can easily prove that the Faraday interaction Hamiltonian terms of the birefringence protocol (see Eq.(43) in Ref.~\cite{Qi2016}) vanish given the internal symmetry of the clock state of atoms used for the protocol. Under the far-detuning condition, the Faraday effect scheme discussed in this chapter will neither induce birefringence effect. }
\section{Summary and Outlook}
In this chapter we have studied the cooperativity of the atom-photon interface for two nanophotonic geometries: a cylindrical nanofiber and a square waveguide. Due to the anisotropic nature of the guided modes, one can strongly enhance the cooperativity by trapping atoms at positions that maximize the rate at which photons are forward-scattered into the orthogonally polarized guided mode, while simultaneously minimizing the rate at which they are scattered into free space. Counterintuitively, the optimal geometry is such that atoms at a certain distance from the surface are trapped at the azimuthal angle of the minimal intensity of the probe. We applied this idea to study the generation of a spin squeezed state\index{state!spin squeezed state} of an ensemble of atoms induced by QND measurement, mediated by the Faraday interaction\index{Faraday effect} in the dispersive regime.
With realistic parameters, our simulation shows more than $ 6 $ dB of squeezing for the cylindrical nanofiber or $ 12 $ dB for the square waveguide with $ 2500 $ atoms. The amount of spin squeezing we predict based on the Faraday effect is substantially larger than that for the birefringence-based spin-squeezing protocol studied in our earlier work~\cite{Qi2016}. Although we have only considered a cylindrical nanofiber and a square waveguide, the ideas presented in this chapter are applicable to other nanophotonic waveguide geometries which could show enhanced cooperativity. In addition, our model of decoherence is simplified by taking the detuning to be large compared to the excited-state hyperfine splitting, which is almost certainly not the optimal operating condition.
Our simulations are based on a first-principles stochastic master equation that includes QND measurement backaction that generates entanglement and results in spin squeezing, as well as decoherence due to optical pumping by spontaneous photon scattering~\cite{Norris2014, Baragiola2014, Qi2016}. This simulation is made possible by a set of simplifying assumptions: (i) we restrict each atom to a qutrit, embedded in the hyperfine manifold of magnetic sublevels; (ii) the state is exchange symmetric with respect to any two atomic spins; and (iii) the many-body state is fully characterized by one- and two-body correlations (the Gaussian approximation). With these, we solve for the metrologically relevant squeezing parameter as a function of time and see the tradeoffs between QND measurement and decoherence for various geometries and choices of parameters. Our method is extendable to include higher order correlations, which become manifest at large squeezing, when the Holstein-Primakoff (Gaussian) approximation breaks down. The computational framework we have developed here should allow us to study $n$-body moments with an acceptable computational load.
In future work we intend to extend our analysis in a number of directions. While we have focused here on the enhancement of the cooperativity in a nanophotonic waveguide-based QND measurement, we did not fully analyze the impact of the Purcell effect and the modification of spontaneous emission rates in our simulations. We have also neglected here the motion of atoms in the optical lattice. In practice, however, these effects are important, the latter having been observed in the nanofiber experiments~\cite{Solano2017Dynamics, Solano2017Alignment, Beguin2017Observation, Solano2017Optical}. We will include these in future studies, with an eye towards the development of new strategies for atomic cooling and state initialization in the nanophotonic platforms~\cite{Meng2017ground}. We expect that our proposed geometry, which places the atoms at positions of minimum intensity, could also help reduce the perturbation of the motion of trapped atoms due to the probe, which causes a disturbance in the signal~\cite{Solano2017Dynamics}.
Finally, we have studied here the dispersive regime of a QND measurement, where the probe light is detuned far off-resonance, and multiple scattering of photons among atoms is negligible. To extend our theory, it is necessary to include collective effects such as super- and subradiance~\cite{Asenjo-Garcia2017Exponential, Asenjo-Garcia2017Atom, Solano2017Super}, with applications including quantum memories~\cite{Sayrin2015, Gouraud2015Demonstration}, and the generation of matrix product states and cluster states~\cite{Economou2010, Lodahl2017Chiral, Schwartz2016Deterministic, Pichler2016Photonic, Pichler2017Photonic}.
%The theoretical difficulty of studying these many-body effects lays down to the exponentially expanded Hilbert quantum space which cannot be stored efficiently on a classical computer. With decoherence, the computational framework developed here based on the $n$-body moments can help shed a light on the dominant quantum correlations and many-body interactions.
%</Faradayprotocol>
\section{Acknowledgments}
We thank Perry Rice for extensive discussions on sub- and superradiance and their effects on collective spin squeezing in the dispersive coupling regime and Ezad Shojaee for helpful discussions on the effects of decoherence on spin squeezing. We also thank Poul Jessen for insightful discussions regarding the enhancement of cooperativity. We acknowledge the UNM Center for Advanced Research Computing for computational resources used in this study. This work was supported by the NSF, under Grant No. PHY-1606989, and the Laboratory
Directed Research and Development program at Sandia National Laboratories.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology \& Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
%===============Appendix================= %
\appendix
%\begin{appendix}
%<*quantizationaxisforFaradaysqueezing>
\chapter[Quantization axis for the Faraday-interaction protocol]{The choice of quantization axis for the Faraday-interaction-based spin squeezing protocol}\label{chap:choiceofquantizationaxisFaraday}
The Faraday interaction\index{Faraday effect} is the interaction for which the helicity of the light signal is preserved between input and output.
In the context of QND measurement, our Faraday-interaction-based protocol\index{Faraday effect} uses a quasi-linear mode input to generate a polarization state rotation on the equator of the \Poincare sphere to measure the spin state.
The Faraday phase change of the light's polarization corresponds to a rotation about the $ \mathbf{S}_3 $ axis on the \Poincare sphere, in general.
From Eq.~\eqref{eq:chippp}, the atomic coupling term is proportional to
\begin{align}
\hat{\chi}_{i3} &= \hat{\chi}_{HV}-\hat{\chi}_{VH}=\hat{\chi}_{RR}-\hat{\chi}_{LL},
\end{align}
where the last step is derived in Appendix~\ref{chap:basistransfHS}. These relations indicate the Faraday effect is generated by the intensity or photon flux difference between the right- and left-polarized output light components or by the phase difference between two linearly polarized modes due to the vector polarizability of the atoms.
One can prove that the strength of the Faraday interaction is dominated by the vector interaction term in Eq.~\eqref{eq:chippp}.
This implies that, to maximize the Faraday interaction, it is optimal to choose a quantization axis along the direction of $ \mathbf{v}_F=\mathbf{u}_H^*(\br'\!_\perp)\!\times\!\mathbf{u}_{V}(\br'\!_\perp)\!-\!\mathbf{u}_V^*(\br'\!_\perp)\!\times\!\mathbf{u}_{H}(\br'\!_\perp)$, given that the atoms are placed at position $ \br'\!_\perp $ in the transverse plane of the waveguide.
In general, this product of modes may be elliptical, with at least one Cartesian component imaginary while the others are real.
If this happens, since the quantization axis is the direction in which a magnetic field points in real three-dimensional space, the optimal choice of quantization axis is the direction corresponding to the largest real component of the vector $ \mathbf{v}_F $.
For the waveguides discussed in this dissertation, this situation does not arise.
Take the example of a cylindrical nanofiber: for atoms at an arbitrary position $ \br'\!_\perp=(r'\!_\perp,\phi') $ in the transverse plane (see Appendix~\ref{chap:fibereigenmodes}),
\begin{align}
\mathbf{u}_H^*(r'\!_\perp,\phi')\times \mathbf{u}_V(r'\!_\perp,\phi') &= 2u_{r\!_\perp} u_\phi\mathbf{e}_z - 2iu_zu_{r\!_\perp}\sin2\phi \mathbf{e}_\phi + 2iu_\phi u_z\cos2\phi \mathbf{e}_{r\!_\perp} \\
\mathbf{u}_V^*(r'\!_\perp,\phi')\times \mathbf{u}_H(r'\!_\perp,\phi') &= -2u_{r\!_\perp} u_\phi\mathbf{e}_z - 2iu_zu_{r\!_\perp}\sin2\phi \mathbf{e}_\phi + 2iu_\phi u_z\cos2\phi \mathbf{e}_{r\!_\perp},
\end{align}
and therefore,
\begin{align}\label{eq:Faradayaxis}
\mathbf{v}_F=\mathbf{u}_H^*(\br'\!_\perp)\!\times\!\mathbf{u}_{V}(\br'\!_\perp)\!-\!\mathbf{u}_V^*(\br'\!_\perp)\!\times\!\mathbf{u}_{H}(\br'\!_\perp) = 4u_{r\!_\perp} u_\phi\mathbf{e}_z,
\end{align}
where $ u_{r\!_\perp}=u_{r\!_\perp}(r'\!_\perp) $, $ u_\phi=u_\phi(r'\!_\perp) $ and $ u_z=u_z(r'\!_\perp) $ are the circularly polarized mode components with a positive helicity, which are independent of longitudinal and azimuthal positions.
Based on the result for $ \mathbf{v}_F $ above, choosing the $ z $ direction as the quantization axis is optimal for the QND measurement and spin-squeezing protocol for atoms trapped near an optical nanofiber.
This conclusion may generalize to translation-symmetric waveguides at large, i.e., waveguides whose index of refraction is constant or varies slowly on the wavelength scale along the light propagation direction, while the cross section is uniform and can have an arbitrary shape.
From the perspective of transformation optics~\cite{Leonhardt2006Optical,Kundtz2011Electromagnetic}, one can choose a set of orthogonal $H$ and $V$ modes for a waveguide of this type if these modes can be generated by adiabatically transforming the corresponding orthogonal set of modes from a cylindrical nanofiber boundary condition to the targeted waveguide.
Since the transformation, determined by a Jacobian matrix, is approximately restricted to the $xy$ plane, Eq.~\eqref{eq:Faradayaxis} preserves its form in the new waveguide coordinate system: only the $ z $ component is nonzero, and the $ z $ direction should again be the optimal choice of quantization axis for Faraday-effect--based spin squeezing.
%</quantizationaxisforFaradaysqueezing>
%<*Faradayprotocoldetails>
\chapter{Modeling collective spin dynamics}\label{Sec::opticalpumpinginrotatingframe}
In this Appendix we give further details of the equations of motion for spin squeezing as a function of time, as discussed in the text, building on the work of Norris~\cite{Norris2014}. In the symmetric subspace, and in the Gaussian approximation, we track a set of one- and two-body correlation functions that determine the metrologically relevant squeezing parameter, Eqs.~(\ref{eq:xi2Faraday}) and~(\ref{eq:Ftof_squeezing}). The evolution is determined by the dynamics induced by the QND measurement and decoherence due to optical pumping, Eqs.~\eqref{eq:totaldrhodt}--\eqref{op_superator}. For concreteness, we consider here cesium atoms, initially prepared in a spin coherent state\index{state!spin coherent state}, with all atoms in the stretched state polarized along the $x$ axis, $\ket{\uparrow} = \ket{6S_{1/2}, f=4, m_x=4}$. The atoms are trapped on the $y$ axis at a distance $r'_\perp$ from the core axis of the waveguide and are probed with guided light in the $H$ mode, which has linear polarization in the $x$ direction at the position of the atoms. We take the detuning large compared to the excited-state hyperfine splitting, e.g., 4 GHz red detuned from the D2 line, $\ket{6S_{1/2}, f=4}\rightarrow \ \ket{6P_{3/2},f'=3}$. Spontaneous emission from the probe may result in optical pumping of atoms to the other hyperfine manifold, $f=3$; we treat this as a loss channel under the approximation that, over the time interval of interest, there is a negligible probability that these atoms will repump to $f=4$. We also include a bias static magnetic field along the $z$ axis. This does not affect the QND measurement of $F_z$, but ultimately, we must calculate all dynamics in the rotating frame.
Spontaneous emission, optical pumping, and the resulting decoherence act locally on each atomic spin, governed by the master equation, Eq. (\ref{op_superator}). For light linearly polarized in the $x$ direction, and for detunings large compared to the excited-state hyperfine splitting, this takes the simplified form~\cite{Deutsch2010a}
\begin{align}\label{eq:op_master}
\left.\dt{\hat{\rho}^{(n)}}\right|_{op}\!\!\!\! =\mathcal{D}[\hat{\rho}^{(n)}] \!= -\frac{i}{\hbar}[ \hat{H}\!_A, \hat{\rho}^{(n)}] \!-\! {\gamma_{op}}\hat{\rho}^{(n)} \!+\! \frac{\gamma_{op}}{4 f^2} \left(\hat{f}^{(n)}_+ \hat{\rho}^{(n)} \hat{f}^{(n)}_- \!+\! \hat{f}^{(n)}_- \hat{\rho}^{(n)} \hat{f}^{(n)}_+\right),
\end{align}
where $\hat{f}^{(n)}_\pm = \hat{f}^{(n)}_z \pm i \hat{f}^{(n)}_y$ are the raising and lowering operators for the projection of spin along the $x$ axis, and $\gamma_{op} = \frac{\Gamma_A^2}{18 \Delta^2}\;\sigma_0 \frac{I_{in}}{\hbar \omega_0}$ is the optical pumping rate for linear polarization in the far-detuned limit, given intensity $I_{in}$ at the position of the atom (we assume that all atoms are trapped at the same distance from the waveguide and, thus, see the same intensity). The atomic Hamiltonian is $\hat{H}_A = \sum_n \hbar \Omega_0 \hat{f}^{(n)}_z + \hat{H}_{LS}$, the sum of the Zeeman interaction with a bias magnetic field, giving rise to Larmor precession at frequency $\Omega_0$, and the light shift, $\hat{H}_{LS}=\hbar \chi_3 \hat{F}_z \hat{S}_3$, due to the probe. Even for far detuning, the residual tensor light shift cannot be neglected as it scales as $1/\Delta^2$, the same as $\gamma_{op}$ and $\kappa$. In principle, a two-color probe can remove the tensor term, which would otherwise lead to a degradation of the mean spin and, thus, a reduction in metrologically useful squeezing~\cite{Saffman2009,Montano2015Quantum}. We neglect this effect here. Finally, going to the rotating frame at the Larmor frequency, $\hat{f}^{(n)}_x \rightarrow \hat{f}^{(n)}_x \cos(\Omega_0 t) + \hat{f}^{(n)}_y \sin(\Omega_0 t)$, $\hat{f}^{(n)}_y \rightarrow \hat{f}^{(n)}_y \cos(\Omega_0 t) - \hat{f}^{(n)}_x \sin(\Omega_0 t)$, and averaging the rapidly oscillating terms (RWA), the master equation takes the form
\begin{align}\label{eq:op_master2}
\left.\dt{\hat{\rho}^{(n)}}\right|_{op} \!\!\!= \mathcal{D}[\hat{\rho}^{(n)}] \Rightarrow -{\gamma_{op}} \hat{\rho}^{(n)} \!+\! \frac{\gamma_{op}}{8 f^2} \left(\hat{f}^{(n)}_x \!\hat{\rho}^{(n)}\! \hat{f}^{(n)}_x \!+\! \hat{f}^{(n)}_y \!\hat{\rho}^{(n)}\! \hat{f}^{(n)}_y \!+\! 2 \hat{f}^{(n)}_z \!\hat{\rho}^{(n)}\! \hat{f}^{(n)}_z\right).
\end{align}
With atoms initially prepared in the ``fiducial state,"\index{state!fiducial state} $\ket{\uparrow} = \ket{f=4,m_x=4}$, we include in the dynamics the ``coupled state,"\index{state!coupled state} $\ket{\downarrow} = \ket{f=4,m_x=3}$, and the ``transfer state,"\index{state!transfer state} $\ket{T} = \ket{f=4,m_x=2}$. Restricted to this qutrit subspace, the spin projection operators $\{ \hat{f}_x, \hat{f}_y, \hat{f}_z \}$ are given in Eqs.~\eqref{eq:f_in_xbasis}.
With all the components of the stochastic master equation defined in Eqs.~\eqref{eq:totaldrhodt}--\eqref{op_superator}, one can derive the equations of motion of one- and two-body moments straightforwardly using the symmetric Gaussian state approximation. Some explicit results have been given in Eqs.~\eqref{eq:dsigmabadc_op}--\eqref{eq:dsigmabadc_QND}. The equation of motion for the optical pumping dynamics of the one-body moment $ \expect{\hat{\sigma}_{ba}} $, is given by
\begin{align}
\left.d\expect{\hat{\sigma}_{ba}}\right|_{op}&= \expect{\mathcal{D}^\dagger\left[\hat{\sigma}_{ba} \right]}dt=\sum_{c,d}\tr(\mathcal{D}^\dagger\left[\hat{\sigma}_{ba}\right]\hat{\sigma}_{dc} )\expect{\hat{\sigma}_{dc}}dt.\label{eq:dsigmaba_op_expand}
\end{align}
The last step projects the expression onto the basis of one-body moments $ \expect{\hat{\sigma}_{dc}} $ ($ d,c\in \{\uparrow,\downarrow,T\} $), with coefficients given by the traces appearing in the equation above.
The equations of two-body moments of the optical pumping can be derived similarly. Continuing from Eq.~\eqref{eq:dsigmabadc_op}, we have
\begin{align}
&\quad\left.d\expect{\!\Delta \sigma_{ba}^{(\!1\!)}\Delta\sigma_{dc}^{(\!2\!)} }_s\right|_{op} \!= \expect{\Delta\mathcal{D}^\dagger[ \sigma_{ba}^{(1)}]\Delta\sigma_{dc}^{(2)} }_sdt + \expect{\Delta \sigma_{ba}^{(1)} \Delta\mathcal{D}^\dagger[\sigma_{dc}^{(2)}] }_sdt\nn \\
&\!\!=\!\sum_{m,n}\! \tr\!\left(\mathcal{D}^\dagger[\hat{\sigma}_{ba} ]\hat{\sigma}_{mn} \right)\!\expect{\!\Delta\sigma_{mn}^{(\!1\!)}\Delta\sigma_{dc}^{(\!2\!)} }_s \!\!+\!\! \sum_{m,n}\!\tr\!\left(\mathcal{D}^\dagger[\hat{\sigma}_{dc}]\hat{\sigma}_{mn} \right)\!\expect{\!\Delta\hat{\sigma}_{ba}^{(\!1\!)}\Delta\sigma_{mn}^{(\!2\!)} }_s .\label{eq:dsigmabadc_op_expand}
\end{align}
%Similarly, Eqs.~\eqref{eq:dsigmaba_QND} and~\eqref{eq:dsigmabadc_QND} can be expanded easily by using $ \Delta F_z=\Delta \sum_i^{N_A} f_z^{(n)}=\sum_i^{N_A} \sqrt{\frac{f}{2}}(\Delta\sigma_{\uparrow\downarrow}^{(n)}+\Delta\sigma_{\downarrow\uparrow}^{(n)}) + \sqrt{\frac{2f-1}{2}}(\Delta \sigma_{\downarrow T}^{(n)}+\Delta\sigma_{T\downarrow}^{(n)}) $, where the subscription $ (n) $ can only be either $ (1) $ or $ (2) $ with $ 1 $ or $ N_A-1 $ copies depending on whether the $ \hat{\sigma}_{ba} $ operator is applied on the same atom as the other joint cumulants in the two-body moment variables.
In deriving Eqs.~\eqref{eq:dsigmaba_QND} and~\eqref{eq:dsigmabadc_QND}, we have used the Gaussian state assumption to write three-body moments in terms of one- and two-body moments, $ \expect{\hat{A}\hat{B}\hat{C}}=\expect{\hat{A}\hat{B}}\expect{\hat{C}}+ \expect{\hat{A}\hat{C} }\expect{\hat{B}}+\expect{\hat{B}\hat{C} }\expect{\hat{A}}-2\expect{\hat{A}}\expect{\hat{B}}\expect{\hat{C}} $.
If we keep only the coherence operators of the nearest coupling states, Eqs.~\eqref{eq:dsigmaba_op_expand} and~\eqref{eq:dsigmabadc_op_expand} recover the same results found by Norris~\cite{Norris2014}.
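This decomposition is the statement that the third-order cumulant vanishes for Gaussian statistics. As a purely classical sanity check (the operator version additionally requires the symmetrized ordering used above), one can verify the identity numerically for jointly Gaussian random variables; a short Python sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mean = [0.3, -0.2, 0.5]
cov = [[1.0, 0.4, 0.1],
       [0.4, 1.0, 0.2],
       [0.1, 0.2, 1.0]]
a, b, c = rng.multivariate_normal(mean, cov, size=1_000_000).T

lhs = (a * b * c).mean()
rhs = ((a * b).mean() * c.mean() + (a * c).mean() * b.mean()
       + (b * c).mean() * a.mean() - 2 * a.mean() * b.mean() * c.mean())
print(lhs, rhs)   # the two values agree up to sampling error
\end{verbatim}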
We apply similar techniques to derive the equations of motion due to QND measurement, Eqs.~\eqref{eq:dsigmaba_QND} and~\eqref{eq:dsigmabadc_QND}, to yield
\begin{align}\label{eq:dsigmaba_QND_expand}
&\quad \left.d\expect{\hat{\sigma}_{ba}}\right|_{QND}\nonumber\\ &=\frac{\kappa}{4}\sum_{c,d}\tr\left(\mathcal{L}^\dagger\left[\hat{\sigma}_{ba} \right]\hat{\sigma}_{dc}\right) \expect{\hat{\sigma}_{dc}}dt + \sqrt{\frac{\kappa}{4}}\sum_{c,d}\tr\left(\mathcal{H}^\dagger\left[\hat{\sigma}_{ba} \right]\hat{\sigma}_{dc}\right) \expect{\hat{\sigma}_{dc}}dW .
\end{align}
\begin{align}
&\quad\left.d\expect{\!\Delta \sigma_{ba}^{(\!1\!)}\! \Delta \sigma_{dc}^{(\!2\!)}}_s \right|_{QND} \nonumber\\
&\!\!= -\kappa\left\{\frac{1}{2}\left[ \sqrt{\frac{f}{2}}\left(\delta_{a\uparrow}\expect{\hat{\sigma}_{b\downarrow}}+ \delta_{b\downarrow}\expect{\hat{\sigma}_{\uparrow a}}+\delta_{a\downarrow}\expect{\hat{\sigma}_{b\uparrow}} +\delta_{b\uparrow}\expect{\hat{\sigma}_{\downarrow a} } \right)\right.\right.\nn\\
&\quad\quad\quad\quad\left. +\sqrt{\frac{2f-1}{2}}\left(\delta_{a\downarrow}\expect{\hat{\sigma}_{bT}} + \delta_{bT}\expect{\hat{\sigma}_{\downarrow a} }+\delta_{aT}\expect{\hat{\sigma}_{b\downarrow} } +\delta_{b\downarrow}\expect{\hat{\sigma}_{Ta} } \right)\right] \nn\\
&\quad\quad\quad -\sqrt{\frac{f}{2}}\expect{\hat{\sigma}_{ba} }\left(\expect{\hat{\sigma}_{\uparrow\downarrow} }+\expect{\hat{\sigma}_{\downarrow\uparrow} } \right) -\sqrt{\frac{2f-1}{2}} \expect{\hat{\sigma}_{ba}}\left(\expect{\hat{\sigma}_{\downarrow T}}+\expect{\hat{\sigma}_{T\downarrow}} \right)\nn\\ &\quad\quad\quad+(N_A-1)\left[\sqrt{\frac{f}{2}}\left(\expect{\Delta \sigma^{(1)}_{ba} \Delta\sigma_{\uparrow\downarrow}^{(2)}}_s \!+\!\expect{\Delta\sigma_{ba}^{(1)}\Delta\sigma_{\downarrow\uparrow}^{(2)}}_s\right)\right.\nn\\
&\quad\quad\quad\quad\quad\quad\quad\quad\left.\left. + \sqrt{\frac{2f\!-\! 1}{2}}\left(\expect{\Delta \sigma^{(1)}_{ba}\Delta \sigma_{\downarrow T}^{(2)}}_s \!+\! \expect{\Delta\sigma_{ba}\Delta\sigma_{T\downarrow}^{(2)}}_s\right)\right] \right\}\nn\\
&\quad\cdot\left\{ \frac{1}{2}\left[ \sqrt{\frac{f}{2}}\left(\delta_{c\uparrow}\expect{\hat{\sigma}_{d\downarrow}}+ \delta_{d\downarrow}\expect{\hat{\sigma}_{\uparrow c}}+\delta_{c\downarrow}\expect{\hat{\sigma}_{d\uparrow}} +\delta_{d\uparrow}\expect{\hat{\sigma}_{\downarrow c} } \right)\right.\right.\nn\\
&\quad\quad\quad\quad\left. +\sqrt{\frac{2f-1}{2}}\left(\delta_{c\downarrow}\expect{\hat{\sigma}_{dT}} + \delta_{dT}\expect{\hat{\sigma}_{\downarrow c} }+\delta_{cT}\expect{\hat{\sigma}_{d\downarrow} } +\delta_{d\downarrow}\expect{\hat{\sigma}_{Tc} } \right)\right] \nn\\
&\quad\quad -\sqrt{\frac{f}{2}}\expect{\hat{\sigma}_{dc} }\left(\expect{\hat{\sigma}_{\uparrow\downarrow} }+\expect{\hat{\sigma}_{\downarrow\uparrow} } \right) -\sqrt{\frac{2f-1}{2}} \expect{\hat{\sigma}_{dc}}\left(\expect{\hat{\sigma}_{\downarrow T}}+\expect{\hat{\sigma}_{T\downarrow}} \right)\nn\\
&\quad\quad + (N_A-1)\left[ \sqrt{\frac{f}{2}} \left(\expect{\Delta\sigma_{\uparrow\downarrow}^{(1)}\Delta\sigma_{dc}^{(2)}}_s \!+\!\expect{\Delta\sigma_{\downarrow\uparrow}^{(1)}\Delta \sigma_{dc}^{(2)}}_s\right)\right. \nn\\
&\quad\quad\quad\quad\quad\quad\quad\left.\left. + \sqrt{\frac{2f\!-\! 1}{2}}\left(\expect{\!\Delta \sigma_{\downarrow T}^{(\!1\!)}\Delta\sigma_{dc}^{(\!2\!)}}_s \!+\!\expect{\!\Delta\sigma_{T\downarrow}^{(\! 1\! )} \Delta \sigma_{dc}^{(\!2\!)} }_s\right)\!\right]\! \right\}dt,\label{eq:dsigmabadc_QND_expand}
\end{align}
By combining the optical pumping and QND measurement contributions in Eqs.~\eqref{eq:dsigmaba_op_expand}--\eqref{eq:dsigmabadc_QND_expand}, one obtains a closed set of stochastic equations for the variables
\begin{equation}
\left\{\expect{\hat{\sigma}_{ba}},\expect{\Delta\sigma_{ba}\Delta\sigma_{dc} }_s\left|a,b,c,d\in \{\uparrow,\downarrow,T \}\right. \right\}
\end{equation}
in the symmetric qutrit subspace.
%All the coefficients in these equations are calculated numerically.
The matrix of equations is sparse and close to diagonal, which indicates that only nearest-neighbor coupling is possible in the $\left\{\expect{\hat{\sigma}_{ba}},\expect{\Delta\sigma_{ba}\Delta\sigma_{dc} }_s\right\}$ basis.
To calculate up to two-body moments, we treat the system as having only two distinguishable atoms, each initially prepared in the state $ \ket{\Psi}=\ket{\uparrow}_x $. The initial values of $ \expect{\hat{\sigma}_{ba} } $ and $ \expect{\Delta\sigma_i\Delta\sigma_j}=\expect{\Delta\sigma_{ba}\Delta\sigma_{dc}}_s $ at time $ t=0 $ are then given by
\begin{subequations}
\begin{align}
\expect{\hat{\sigma}_{ba}(0)}&=\bra{\Psi}\hat{\sigma}_{ba}\ket{\Psi}\\
\expect{\Delta\sigma_i\Delta\sigma_j(0)} &= (\bra{\Psi}\otimes\bra{\Psi})\frac{\hat{\sigma}_{ba}^{(1)} \otimes\hat{\sigma}_{dc}^{(2)}+\hat{\sigma}_{dc}^{(1)} \otimes\hat{\sigma}_{ba}^{(2)}}{2}(\ket{\Psi}\otimes\ket{\Psi})\nn\\
&\quad-\bra{\Psi}\hat{\sigma}_{ba}\ket{\Psi}\bra{\Psi}\hat{\sigma}_{dc}\ket{\Psi}.
\end{align}
\end{subequations}
With the initial values and equations of motion given above, the stochastic time evolution of the one- and two-body moments can then be obtained by integrating the equations from these initial values to an arbitrary time $ t $ as a Wiener process.
The number of non-zero elements in the coefficient matrix is proportional to the number of variables. To respect the exchange symmetry of the two-body moment variables, we label $ \hat{\sigma}_{ba} $ as $ \hat{\sigma}_i $ with $ i=1,2,\cdots, 9 $, where $ b$ and $a $ run over the qutrit state labels $ (\uparrow,\downarrow,T) $; the two-body moment variables can then be ordered in the symmetric subspace as $ \expect{\Delta\sigma_i\Delta\sigma_j} $ with $ j\ge i $, representing $ \expect{\!\Delta \sigma_{ba}^{(\!1\!)}\! \Delta \sigma_{dc}^{(\!2\!)}}_s $, which satisfies $ \expect{\!\Delta \sigma_{ba}^{(\!1\!)}\! \Delta \sigma_{dc}^{(\!2\!)}}_s=\expect{\!\Delta \sigma_{dc}^{(\!1\!)}\! \Delta \sigma_{ba}^{(\!2\!)}}_s $ under the exchange symmetry.
In the symmetric qutrit subspace, we thus have $ 45 $ two-body moment variables and the corresponding sparse equations for these second moments, which we solve numerically.
Our formalism of deriving the spin dynamics using the microscopic operators allows us to generalize the equations of motion to $ n $-body dynamics straightforwardly. For example, following Eq.~\eqref{eq:dsigmabadc_op_expand}, the equations of $ n $-body moments due to optical pumping can be given by
\begin{align}
&\quad\left.d\expect{\!\Delta \sigma_{b^{(1)}a^{(1)}}^{(1)}\Delta\sigma_{b^{(2)}a^{(2)}}^{(2)}\cdots \Delta\sigma_{b^{(n)}a^{(n)}}^{(n)} }_s\right|_{op} \nn\\
&\!\!=\!\sum_{p,q}\! \tr\!\left(\mathcal{D}^\dagger[\hat{\sigma}_{b^{(1)}a^{(1)}} ]\hat{\sigma}_{pq} \right)\!\expect{\!\Delta\sigma_{pq}^{(1)}\Delta\sigma_{b^{(2)}a^{(2)} }^{(2)}\cdots \Delta\sigma_{b^{(n)}a^{(n)} }^{(n)} }_s \nn\\
&\quad + \sum_{p,q}\!\tr\!\left(\mathcal{D}^\dagger[\hat{\sigma}_{b^{(2)}a^{(2)} }]\hat{\sigma}_{pq} \right)\!\expect{\!\Delta\hat{\sigma}_{b^{(1)} a^{(1)} }^{(\!1\!)}\Delta\sigma_{pq}^{(2)}\cdots \Delta\sigma_{b^{(n)}a^{(n)} }^{(n)} }_s+\cdots \\
&\!\!=\sum_{i=1}^n\sum_{p,q} \! \tr\!\left(\mathcal{D}^\dagger[\hat{\sigma}_{b^{(i)}a^{(i)}} ]\hat{\sigma}_{pq} \right) \!\expect{\!\hat{\sigma}_{b^{(1)}a^{(1)}}^{(1)} \cdots \Delta\sigma_{pq}^{(i)}\cdots \Delta\sigma_{b^{(n)}a^{(n)} }^{(n)} }_s .\label{eq:dsigmabadcn_op_expand}
\end{align}
In general, to include the $ n $-body correlations, $ \sum_i^n\left(\!\begin{array}{c}i\\d^2\end{array} \! \right) $ equations are required and the formalism above can be extended to cut off the covariances at the $ (n+1)^{\text{th}} $ order. Here, $ d $ is the dimension of the atomic subspace for the spin squeezing simulations. Despite the considerable number of equations required, the collective spin dynamics problem may still be efficiently solved because the coefficient matrices of the set of equations are almost diagonal and are very sparse due to the $ \delta $-function-like selection rules of products of $ \hat{\sigma}_{ba} $ operators in deriving the spin dynamics equations.
%</Faradayprotocoldetails>
%<*collectiveandmicroscopicoperators>
%===================APPENDIX: Relationships between collective spin operators and microscopic operators =====================%
\chapter[Collective and microscopic operators in the symmetric subspace]{Relations between some collective operators and the first- and second-moments in the symmetric subspace} \label{Appendix::collectivespinoperators}
%\qxd{Possible Todo: Relations between collective angular momentum operators and microscopic $\sigma_{ba}$ operators in the qubit and qutrit symmetric subspace. See handwritten notes: QND measurement and spin squeezing using Faraday interactions (part IV).}
We use the following relations to calculate the collective operator dynamics once the equations of microscopic operators, $ \hat{\sigma}_{ba}$ and $ \expect{\Delta\sigma_i\Delta\sigma_j} $, are solved.
\begin{subequations}
\begin{align}
\expect{\hat{F}_x} &= \sum_i^{N_A}\expect{\hat{f}_x}\nonumber\\
&= -N_A \left[f\expect{\sigmauu}+(f-1)\expect{\sigmadd}+(f-2)\expect{\sigmatt } \right]\label{eq:Fx_qutrit}\\
\expect{\Delta F_z^2} &= N_A\expect{\Delta f_z^2} + N_A(N_A-1)\expect{\Delta f_z^{(1)}\Delta f_z^{(2)} }_s\nn\\
&=N_A\left\{ \frac{f}{2}\left(\expect{\sigmauu}+\expect{\sigmadd}-2\expect{\sigmaud}\expect{\sigmadu}-\expect{\sigmaud}^2-\expect{\sigmadu}^2 \right)\right. \label{eq:DeltaFz2_fz}\\
&\quad\quad+ \frac{2f-1}{2}\left(\expect{\sigmadd}+\expect{\sigmatt}-2\expect{\sigmadt}\expect{\sigmatd}-\expect{\sigmadt}^2-\expect{\sigmatd}^2 \right)\nn\\
&\quad\quad + \frac{\sqrt{f(2f-1)}}{2}\left[\expect{\sigmaut }+\expect{\sigmatu}\right. \nn\\
&\quad\left.\phantom{\frac{\sqrt{f(2f-1)}}{f}}\left. -2\expect{\sigmaud}(\expect{\sigmadt}+\expect{\sigmatd}) -2\expect{\sigmadu}(\expect{\sigmadt}+\expect{\sigmatd} ) \right] \right\}\nn\\
&\quad +N_A(N_A-1)\left[\frac{f}{2}\left(\expect{\Dsigmaud^{(1)}\Dsigmaud^{(2)} }_s \right.\right. \label{eq:DeltaFz2_qutrit}\\
&\qquad\qquad\qquad\qquad\qquad\qquad \left.\left. +2\expect{\Dsigmaud^{(1)}\Dsigmadu^{(2)} }_s+\expect{\Dsigmadu^{(1)}\Dsigmadu^{(2)} }_s \right)\right. \nn \\
&\quad\quad + \frac{2f-1}{2}(\expect{\Dsigmadt^{(1)}\Dsigmadt^{(2)} }_s +2\expect{\Dsigmadt^{(1)}\Dsigmatd^{(2)} }_s +\expect{\Dsigmatd^{(1)}\Dsigmatd^{(2)} }_s) \nn\\
&\quad\quad + \sqrt{f(2f-1)}\left(\expect{\Dsigmaud^{(1)}\Dsigmadt^{(2)} }_s +\expect{\Dsigmaud^{(1)}\Dsigmatd^{(2)} }_s\right.\nn\\
&\qquad\qquad\qquad\qquad \left.\phantom{\frac{2f-1}{2}}\left. +\expect{\Dsigmadu^{(1)}\Dsigmadt^{(2)} }_s+\expect{\Dsigmadu^{(1)}\Dsigmatd^{(2)} }_s \right)\right]\nn
\end{align}
\end{subequations}
If further approximations are allowed, the collective spin dynamics can also be calculated in the \\ $ \left\{\left.\expect{\hat{f}_i},\expect{\Delta f_i^{(1)}\Delta f_j^{(2)} }_s\right|i,j=0,x,y,z \right\} $ basis with $ \hat{f}_0=\hat{\mathbbm{1}} $. For example, one can prove that, by ignoring couplings between $ \ket{\uparrow} $ and $ \ket{T} $, Eqs.~\ref{eq:Fx_qutrit} and~\ref{eq:DeltaFz2_qutrit} lead to
\begin{subequations}
\begin{align}
\expect{\hat{F}_x} &= -f \expect{\hat{N}_\uparrow}-(f-1)\expect{\hat{N}_\downarrow}-(f-2)\expect{\hat{N}_T}\\
\expect{\Delta F_z^2} &\approx \expect{\hat{N}_f}\expect{\Delta f_z^2}+\expect{\hat{N}_f}(\expect{\hat{N}_f}-1)\expect{\Delta f_z^{(1)}\Delta f_z^{(2)} }\!_s,
\end{align}
\end{subequations}
where $ \hat{N}_i=\sum_n^{N_A}\hat{\sigma}_{ii}^{(n)} $ (for $ i=\uparrow,\downarrow,T $) and $ \hat{N}_f=\hat{N}_\uparrow+\hat{N}_\downarrow+\hat{N}_T $ are the population operators. Then the problem is to establish and solve the stochastic equations for the expectation values and covariances of the population and angular momentum operators. For example, the QND measurement process which generates the spin squeezing dynamics is governed by
\begin{align}
d\expect{\Delta f_z^2}|_{QN\!D} &= -\kappa \left[\expect{\Delta f_z^2}+(N_A-1)\expect{\Delta f_z^{(1)}\Delta f_z^{(2)} }_s \right]^2dt\nn\\
&=d\expect{\Delta f_z^{(1)}\Delta f_z^{(2)} }_s|_{QN\!D}.
\end{align}
It may seem that the spin-squeezing dynamics are determined purely by $ \expect{\Delta f_z^2} $ and $ \expect{\Delta f_z^{(1)}\Delta f_z^{(2)} }$, but when the equations for the full set of basis operators are derived, including both the optical pumping and QND measurement contributions, all operators become coupled to one another, and the resulting system has no simple general form that is easy to implement in a computer program. Moreover, since this approach involves more approximations than the $ \hat{\sigma}_{ba} $ representation, we use it only to verify our results, not to generate the final simulation results presented in this dissertation.
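To illustrate the tradeoff that produces the peak in $ \xi^{-2} $ seen in Fig.~\ref{fig:xi_rpfix_NA_t}, note that the QND part above implies $ \mathrm{d}\expect{\Delta F_z^2}|_{QND}=-\kappa \expect{\Delta F_z^2}^2\mathrm{d}t $, which integrates to $ V(t)=V_0/(1+\kappa V_0 t) $ with $ V_0=N_A f/2 $. The following Python sketch combines this with the crude, purely illustrative assumption that the mean spin decays exponentially at the optical pumping rate, $ \expect{\hat{F}_x}(t)\approx N_A f e^{-\gamma_{op}t} $ (this is not the decoherence model used in the text, which also injects noise into the collective spin), and treats the ratio $ \kappa/\gamma_{op} $ as a free parameter of order the cooperativity:
\begin{verbatim}
import numpy as np

f, N_A = 4, 2500
gamma_op = 1.0                        # time measured in units of 1/gamma_op
kappa = 0.01 * gamma_op               # assumed ratio kappa/gamma_op (illustrative)

t = np.linspace(0.0, 1.0, 4001)
V0 = N_A * f / 2.0                    # initial projection noise <Delta F_z^2>
V = V0 / (1.0 + kappa * V0 * t)       # pure-QND conditional variance
Fx = N_A * f * np.exp(-gamma_op * t)  # assumed exponential mean-spin decay

xi_inv2 = Fx ** 2 / (2 * N_A * f * V) # reciprocal squeezing parameter
i = xi_inv2.argmax()
print("peak 1/xi^2 = %.2f (%.1f dB) at t = %.2f / gamma_op"
      % (xi_inv2[i], 10 * np.log10(xi_inv2[i]), t[i]))
\end{verbatim}
The resulting curve rises while the measurement squeezes the collective variance and falls once optical pumping destroys the mean spin, reproducing the qualitative shape of Fig.~\ref{fig:xi_rpfix_NA_t}; the quantitative results in the text require the full qutrit moment equations.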
%</collectiveandmicroscopicoperators>
%\end{appendix}
\bibliographystyle{../styles/abbrv-alpha-letters-links}
\bibliography{../refs/Archive}
\printindex
\end{document}
| {
"alphanum_fraction": 0.7389041389,
"avg_line_length": 154.6930147059,
"ext": "tex",
"hexsha": "a5d479d887e6fec7cf64d469b31297bbaa366acb",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2018-07-17T21:55:09.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-17T21:55:09.000Z",
"max_forks_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "i2000s/PhD_Thesis",
"max_forks_repo_path": "chap6/CooperativityEnhancement.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_issues_repo_issues_event_max_datetime": "2018-07-18T01:47:21.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-07-18T01:47:21.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "i2000s/PhD_Thesis",
"max_issues_repo_path": "chap6/CooperativityEnhancement.tex",
"max_line_length": 2155,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "a9bc6bc4213896c70c90cbb3d9b533782d428761",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "i2000s/PhD_Thesis",
"max_stars_repo_path": "chap6/CooperativityEnhancement.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-27T19:11:43.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-26T01:58:05.000Z",
"num_tokens": 25467,
"size": 84153
} |
\section{Evaluation}\label{sec:evaluation}
%\red{Aim of this section: •.}
%\red{Aim of this section: Brief introduction of what are we going to talk about in this section.}
% ns-3: https://www.nsnam.org/
We conduct simulations based on a realistic scenario trace file to verify the feasibility of our proposal. We recreate the small public transport network of the UAB campus using the Network Simulator 3 (NS-3), a well-known discrete-event simulator targeted primarily at research~\cite{ns-3-webpage}. We explain how the mobility model was obtained as well as which parameters were used during the simulation. Finally, we test our proposed method in the simulation to evaluate how it performs.
\subsection{Scenario: UAB campus buses}
%\red{Aim of this section: Explain a bit how the campus scenario works and how can be this useful in practice.}
In order to test our proposal we considered a very small public transportation network that operates inside the Autonomous University of Barcelona (UAB), composed of 5 buses that follow different routes around the UAB campus. It is important to note that every bus makes the same route daily; in this way, the scenario can be seen as a good example of a deterministic network.
Each bus carries a DTN node that allows users to achieve confidential communication as well as source anonymity using an onion routing protocol.
\subsection{Mobility Model}
%\red{Aim of this section: explain how we get this scenario: open street maps -> sumo -> ns-3... }
We obtained the mobility model in several stages. First, we exported the UAB campus road map from OpenStreetMap into the SUMO software~\cite{sumo}, filtering out unnecessary items such as buildings and railways with the Java OpenStreetMap editor tool~\cite{josm}.
% bus-schedule: http://www.uab.cat/doc/horaris_busUAB_2015
Once the campus roads were imported into SUMO, we recreated the movements of each bus according to the official bus schedule of the UAB public transportation network~\cite{bus-schedule}. In addition, we tuned some bus characteristics, such as the acceleration and deceleration parameters, in order to obtain realistic travel times.
%sumo-to-ns-2: http://www.ijarcsse.com/docs/papers/Volume_4/4_April2014/V4I4-0416.pdf
Finally, we converted the model to an NS-2 mobility trace, as explained in~\cite{sumo-to-ns-2}; the resulting trace can be used directly in NS-3. We used the simulator to obtain the contact-related data of the campus network, i.e., the duration of each contact as well as the instant of time at which it occurred.
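For reference, the exported NS-2 mobility trace is a plain-text file of initial positions and \texttt{setdest} commands. A minimal Python sketch (assuming the usual NS-2 trace format; the exact quoting may vary between SUMO versions) that extracts the waypoints:
\begin{verbatim}
import re

SETDEST = re.compile(
    r'\$ns_ at ([\d.]+) "\$node_\((\d+)\) setdest ([\d.-]+) ([\d.-]+) ([\d.-]+)"')

def waypoints(trace_path):
    # returns (time, node_id, x, y, speed) tuples from an NS-2 mobility trace
    points = []
    with open(trace_path) as fh:
        for line in fh:
            m = SETDEST.match(line.strip())
            if m:
                t, node, x, y, speed = m.groups()
                points.append((float(t), int(node),
                               float(x), float(y), float(speed)))
    return points
\end{verbatim}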
\subsection{Simulation setup}
%\red{Aim of this section: Explain and define the values used in the simulation itself as well as how we know that there is a neighbour able to contact with.}
% ns-3-dtn: https://www.nsnam.org/wiki/Current_Development
The DTN modules are currently under development in NS-3~\cite{ns-3-dtn}. For this reason we decided to implement neighbour discovery in the application layer. This application broadcasts beacon messages periodically, looking for new contact opportunities. The interval time is the time to wait between beacons, while the expiration time is the time during which a neighbour is considered valid; both parameters can be set manually.
\begin{table}[h!]
\centering
\begin{tabular}{l|l}
Parameter & Value \\
\hline
Number of nodes & 5 \\
Wi-Fi range & 100 meters \\
Interval time & 1 second \\
Expiration time & 2 seconds \\
Simulation time & 15 hours \\
DSSS rate & 1 Mbps \\
IEEE 802.11 specification & 802.11b
\end{tabular}
\caption{Simulation setup.}
\label{table:simulation-parameters}
\end{table}
Table~\ref{table:simulation-parameters} shows the parameters used for the neighbour discovery application as well as the wireless parameters, which were chosen to be suitable for resource-constrained computers.
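To make the discovery mechanism concrete, the following minimal sketch illustrates the bookkeeping that such a beacon-based application performs; the names and structure are purely illustrative and do not correspond to actual NS-3 classes.
\begin{verbatim}
class NeighbourDiscovery:
    # Illustrative beacon bookkeeping; times are in seconds.
    def __init__(self, interval=1.0, expiration=2.0):
        self.interval = interval      # time between outgoing beacons
                                      # (the periodic sending loop is not shown)
        self.expiration = expiration  # validity window of a neighbour
        self.last_heard = {}          # node id -> time of last received beacon

    def on_beacon_received(self, node_id, now):
        # Any received beacon creates or refreshes the neighbour entry.
        self.last_heard[node_id] = now

    def current_neighbours(self, now):
        # Entries older than the expiration time are dropped.
        self.last_heard = {n: t for n, t in self.last_heard.items()
                           if now - t <= self.expiration}
        return set(self.last_heard)
\end{verbatim}
With the values of table~\ref{table:simulation-parameters} (an interval of 1 second and an expiration of 2 seconds), a neighbour is forgotten once two consecutive beacons are missed.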
\subsection{Simulation results}
%\red{Aim of this section: Explain the results of this simulation. What we get explaining why.}
In this section we show some data related to the simulations. We provide a
general overview of the network and measures related to the contact behavior.
We also analyze our proposal over the real scenario of the UAB campus.
\begin{figure}[h!bt]
\centering
\includegraphics[scale=0.70]{imgs/statistics/contacts-duration}
\caption{Number of contacts with the same duration.}
\label{fig:contact-duration-group}
\end{figure}
In figure~\ref{fig:contact-duration-group} we have an overall view of the
network activity.
One group has very short contact times; these contacts are suitable for applications that do not require a large amount of data to be sent. Another group has long contact times (nearly 7 minutes) and is able to perform more complex communications, sending larger amounts of data. The average contact time is close to 1 minute, which is suitable for several applications such as anonymous sensing systems. A summary of the contact-related information gathered during the simulation is shown in table~\ref{table:contact-information}. This simple network analysis shows that our evaluation model can be used for several kinds of applications.
\begin{table}[h]
\centering
\begin{tabular}{l|l}
Metric & Value \\
\hline
Number of contacts & 1161 \\
Average contact time & 72.72 seconds \\
Maximum contact time & 412 seconds\\
Minimum contact time & 1 second
\end{tabular}
\caption{Contact information.}
\label{table:contact-information}
\end{table}
In figure~\ref{fig:ntime-data}, after fixing \textit{s=1; d=5; t=0} and \textit{k=10}, we vary the \textit{n} value from \textit{n=5} to \textit{n=40} and compute the mean delivery time of the \textit{k=10} returned paths. The minimum number of nodes allowed in onion routing is 5, i.e., at least 3 routers plus the source and destination nodes. Unlike in traditional networks, in DTNs the shortest paths are not always the quickest ones, but even so the figure shows an increasing tendency.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.70]{imgs/statistics/ntime-data}
\caption{Average delivery time considering the variation of the path length.}
\label{fig:ntime-data}
\end{figure}
In figure~\ref{fig:ktime-data}, after fixing \textit{s=1; d=5; t=0} and \textit{n=4}, we vary the \textit{k} value from \textit{k=1} to \textit{k=15}. We can see that the order of appearance of each new path is not directly correlated with time (a new path can be quicker or slower than the previous ones). In this way, it is even more difficult to guess the path chosen. In this example, there is a maximum of 13 paths for the given set of attributes; after that, as we can see, no more paths can be obtained.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.70]{imgs/statistics/ktime-data}
\caption{Average delivery time considering varying number of paths.}
\label{fig:ktime-data}
\end{figure}
%
%\begin{figure}[hbt]
% \centering
% \includegraphics[scale=0.70]{imgs/presentation/evil-path.eps}
% \caption{test}
%\end{figure}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../paper"
%%% End:
% LocalWords: UAB
| {
"alphanum_fraction": 0.7763120965,
"avg_line_length": 60.3898305085,
"ext": "tex",
"hexsha": "39a82e8bd1772a3a2c775e9cbff872a6ae8b1ab9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "91e16826c0fe55dbdcd81473ab3c4ca0b3ede244",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "AdrianAntunez/DTN-Onion-Routing-NS-3",
"max_forks_repo_path": "paper/inc/evaluation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "91e16826c0fe55dbdcd81473ab3c4ca0b3ede244",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "AdrianAntunez/DTN-Onion-Routing-NS-3",
"max_issues_repo_path": "paper/inc/evaluation.tex",
"max_line_length": 647,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "91e16826c0fe55dbdcd81473ab3c4ca0b3ede244",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "AdrianAntunez/DTN-Onion-Routing-NS-3",
"max_stars_repo_path": "paper/inc/evaluation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1741,
"size": 7126
} |
\section{Compatibility and versioning}\label{sec:dsdl_versioning}
\subsection{Rationale}
Data type definitions may evolve over time as they are refined to better address the needs of their applications.
UAVCAN defines a set of rules that allow data type designers to modify and advance their
data type definitions while ensuring backward compatibility and functional safety.
\subsection{Semantic compatibility}\label{sec:dsdl_semantic_compatibility}
A data type $A$ is \emph{semantically compatible} with a data type $B$
if relevant behavioral properties of the application are invariant under the substitution of $A$ with $B$.
The property of semantic compatibility is commutative.
\begin{remark}[breakable]
The following two definitions are semantically compatible and can be used interchangeably:
\begin{minted}{python}
uint16 FLAG_A = 1
uint16 FLAG_B = 256
uint16 flags
\end{minted}
\begin{minted}{python}
uint8 FLAG_A = 1
uint8 FLAG_B = 1
uint8 flags_a
uint8 flags_b
\end{minted}
It should be noted here that due to the different sets of field and constant attributes,
the source code auto-generated from the provided definitions may not be drop-in replaceable,
requiring changes in the application;
however, source-code-level application compatibility is orthogonal to data type compatibility.
The following supertype may or may not be semantically compatible with the above
depending on the semantics of the removed field:
\begin{minted}{python}
uint8 FLAG_A = 1
uint8 flags_a
\end{minted}
\end{remark}
\begin{remark}
Let node $A$ publish messages of the following type:
\begin{minted}{python}
float32 foo
float64 bar
\end{minted}
Let node $B$ subscribe to the same subject using the following data type definition:
\begin{minted}{python}
float32 foo
float64 bar
int16 baz # Extra field; implicit zero extension rule applies.
\end{minted}
Let node $C$ subscribe to the same subject using the following data type definition:
\begin{minted}{python}
float32 foo
# The field 'bar' is missing; implicit truncation rule applies.
\end{minted}
Provided that the semantics of the added and omitted fields allow it,
the nodes will be able to interoperate successfully despite using different data type definitions.
\end{remark}
\subsection{Versioning}
\subsubsection{General assumptions}
The concept of versioning applies only to composite data types.
As such, unless specifically stated otherwise, every reference to ``data type''
in this section implies a composite data type.
A data type is uniquely identified by its full name,
assuming that every root namespace is uniquely named.
There is one or more versions of every data type.
A data type definition is uniquely identified by its full name and the version number pair.
In other words, there may be multiple definitions of a data type differentiated by their version numbers.
\subsubsection{Versioning principles}
Every data type definition has a pair of version numbers ---
a major version number and a minor version number, following the principles of semantic versioning.
For the purposes of the following definitions, a \emph{release} of a data type definition stands for
the disclosure of the data type definition to its intended users or to the general public,
or for the commencement of usage of the data type definition in a production system.
In order to ensure a deterministic application behavior and ensure a robust migration path
as data type definitions evolve, all data type definitions that share the same
full name and the same major version number shall be semantically compatible with each other.
The versioning rules do not extend to scenarios where the name of a data type is changed,
because that would essentially construe the release of a new data type,
which relieves its designer from all compatibility requirements.
When a new data type is first released,
the version numbers of its first definition shall be assigned ``1.0'' (major 1, minor 0).
In order to ensure predictability and functional safety of applications that leverage UAVCAN,
it is required that once a data type definition is released,
its DSDL source text, name, version numbers, fixed port-ID, and other properties cannot undergo any
modifications whatsoever, with the following exceptions:
\begin{itemize}
\item Whitespace changes of the DSDL source text are allowed,
excepting string literals and other semantically sensitive contexts.
\item Comment changes of the DSDL source text are allowed as long as such changes
do not affect semantic compatibility of the definition.
\item A deprecation marker directive (section~\ref{sec:dsdl_directives}) can be added or removed\footnote{%
Removal is useful when a decision to deprecate a data type definition is withdrawn.
}.
\end{itemize}
Addition or removal of the fixed port identifier is not permitted after a data type definition
of a particular version is released.
Therefore, substantial changes can be introduced only by releasing new definitions (i.e., new versions)
of the same data type.
If it is desired and possible to keep the same major version number for a new definition of the data type,
the minor version number of the new definition shall be one greater than the newest existing minor version
number before the new definition is introduced.
Otherwise, the major version number shall be incremented by one and the minor version shall be set to zero.
An exception to the above rules applies when the major version number is zero.
Data type definitions bearing the major version number of zero are not subjected to any compatibility requirements.
Released data type definitions with the major version number of zero are permitted to change in arbitrary
ways without any regard for compatibility.
It is recommended, however, to follow the principles of immutability, releasing every subsequent definition
with the minor version number one greater than the newest existing definition.
For any data type, there shall be at most one definition per version.
In other words, there shall be exactly one or zero definitions
per combination of data type name and version number pair.
All data types under the same name shall be also of the same kind.
In other words, if the first released definition of a data type is of the message kind,
all other versions shall also be of the message kind.
\subsubsection{Fixed port identifier assignment constraints}
The following constraints apply to fixed port-ID assignments:
\begin{align*}
\exists P(x_{a.b}) &\rightarrow \exists P(x_{a.c})
&\mid&\ b < c;\ x \in (M \cup S)
\\
\exists P(x_{a.b}) &\rightarrow P(x_{a.b}) = P(x_{a.c})
&\mid&\ b < c;\ x \in (M \cup S)
\\
\exists P(x_{a.b}) \land \exists P(x_{c.d}) &\rightarrow P(x_{a.b}) \neq P(x_{c.d})
&\mid&\ a \neq c;\ x \in (M \cup S)
\\
\exists P(x_{a.b}) \land \exists P(y_{c.d}) &\rightarrow P(x_{a.b}) \neq P(y_{c.d})
&\mid&\ x \neq y;\ x \in T;\ y \in T;\ T = \left\{ M, S \right\}
\end{align*}
where $t_{a.b}$ denotes a data type $t$ version $a.b$ ($a$ major, $b$ minor);
$P(t)$ denotes the fixed port-ID (whose existence is optional) of data type $t$;
$M$ is the set of message types, and $S$ is the set of service types.
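\begin{remark}[breakable]
The following informal sketch (which is not part of the specification) restates the above constraints as a
validity check over a table of fixed port-ID assignments.
The names used here are purely illustrative;
\verb|all_versions| is the set of released definitions identified by (kind, name, major, minor) tuples,
and \verb|assignments| maps the subset of those tuples that carry a fixed port-ID to its value.
\begin{minted}{python}
def check_fixed_port_ids(all_versions, assignments):
    # If a fixed port-ID exists for some version, it shall also exist
    # for every newer minor version under the same major version.
    for (kind, name, major, minor) in assignments:
        for (k2, n2, maj2, min2) in all_versions:
            if (k2, n2, maj2) == (kind, name, major) and min2 > minor:
                if (k2, n2, maj2, min2) not in assignments:
                    return False
    # Within one kind (message or service), the fixed port-ID shall be
    # invariant under the same major version and unique otherwise.
    for key1, port1 in assignments.items():
        for key2, port2 in assignments.items():
            if key1[0] != key2[0]:
                continue  # message and service identifiers are independent
            same_major = key1[:3] == key2[:3]
            if same_major and port1 != port2:
                return False  # port-ID changed within a major version
            if not same_major and port1 == port2:
                return False  # port-ID reused by another type or major version
    return True
\end{minted}
\end{remark}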
\subsubsection{Data type version selection}
DSDL compilers should compile every available data type version separately,
allowing the application to choose from all available major and minor version combinations.
When emitting a transfer, the major version of the data type is chosen at the discretion of the application.
The minor version should be the newest available one under the chosen major version.
When receiving a transfer, the node deduces which major version of the data type to use
from its port identifier (either fixed or non-fixed).
The minor version should be the newest available one under the deduced major version\footnote{%
Such liberal minor version selection policy poses no compatibility risks since all definitions under the same
major version are compatible with each other.
}.
It follows from the above two rules that when a node is responding to a service request,
the major data type version used for the response transfer shall be the same that is used for the request transfer.
The minor versions may differ, which is acceptable due to the major version compatibility requirements.
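\begin{remark}
The version selection policy described above can be summarized by the following informal sketch
(which is not part of the specification), where \verb|available| is the set of (major, minor)
version pairs compiled for a given data type:
\begin{minted}{python}
def select_for_emission(available, chosen_major):
    # The major version is chosen at the discretion of the application;
    # the newest available minor version under it should be used.
    minors = [minor for major, minor in available if major == chosen_major]
    return (chosen_major, max(minors))

def select_for_reception(available, deduced_major):
    # The major version is deduced from the port identifier of the transfer;
    # the newest available minor version under it should be used.
    return select_for_emission(available, deduced_major)
\end{minted}
\end{remark}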
\begin{remark}[breakable]
A simple usage example is provided in this intermission.
Suppose a vendor named ``Sirius Cybernetics Corporation'' is contracted to design a
cryopod management data bus for a colonial spaceship ``Golgafrincham B-Ark''.
Having consulted with applicable specifications and standards, an engineer came up with the following
definition of a cryopod status message type (named \verb|sirius_cyber_corp.b_ark.cryopod.Status|):
\begin{minted}{python}
# sirius_cyber_corp.b_ark.cryopod.Status.0.1
float16 internal_temperature # [kelvin]
float16 coolant_temperature # [kelvin]
uint8 FLAG_COOLING_SYSTEM_A_ACTIVE = 1
uint8 FLAG_COOLING_SYSTEM_B_ACTIVE = 2
# Status flags in the lower bits.
uint8 FLAG_PSU_MALFUNCTION = 32
uint8 FLAG_OVERHEATING = 64
uint8 FLAG_CRYOBOX_BREACH = 128
# Error flags in the higher bits.
uint8 flags # Storage for the above defined flags (this is not the recommended practice).
\end{minted}
The definition is then deployed to the first prototype for initial laboratory testing.
Since the definition is experimental, the major version number is set to zero in order to signify the
tentative nature of the definition.
Suppose that upon completion of the first trials it is identified that the units should track their
power consumption in real time for each of the three redundant power supplies independently.
It is easy to see that the amended definition shown below is not semantically compatible
with the original definition; however, it shares the same major version number of zero, because the backward
compatibility rules do not apply to zero-versioned data types to allow for low-overhead experimentation
before the system is deployed and fielded.
\begin{minted}{python}
# sirius_cyber_corp.b_ark.cryopod.Status.0.2
truncated float16 internal_temperature # [kelvin]
truncated float16 coolant_temperature # [kelvin]
saturated float32 power_consumption_0 # [watt] Power consumption by the redundant PSU 0
saturated float32 power_consumption_1 # [watt] likewise for PSU 1
saturated float32 power_consumption_2 # [watt] likewise for PSU 2
uint8 FLAG_COOLING_SYSTEM_B_ACTIVE = 1
uint8 FLAG_COOLING_SYSTEM_A_ACTIVE = 2
# Status flags in the lower bits.
uint8 FLAG_PSU_MALFUNCTION = 32
uint8 FLAG_OVERHEATING = 64
uint8 FLAG_CRYOBOX_BREACH = 128
# Error flags in the higher bits.
uint8 flags # Storage for the above defined flags (this is not the recommended practice).
\end{minted}
The last definition is deemed sufficient and is deployed to the production system
under the version number of 1.0: \verb|sirius_cyber_corp.b_ark.cryopod.Status.1.0|.
Having collected empirical data from the fielded systems, the Sirius Cybernetics Corporation has
identified a shortcoming in the v1.0 definition, which is corrected in an updated definition.
Since the updated definition, which is shown below, is semantically compatible\footnote{%
The topic of data serialization is explored in detail in section~\ref{sec:dsdl_data_serialization}.
} with v1.0, the major version number is kept the same and the minor version number is incremented by one:
\begin{minted}{python}
# sirius_cyber_corp.b_ark.cryopod.Status.1.1
saturated float16 internal_temperature # [kelvin]
saturated float16 coolant_temperature # [kelvin]
float32[3] power_consumption # [watt] Power consumption by the PSU
bool flag_cooling_system_b_active
bool flag_cooling_system_a_active
# Status flags (this is the recommended practice).
void3 # Reserved for other flags
bool flag_psu_malfunction
bool flag_overheating
bool flag_cryobox_breach
# Error flags (this is the recommended practice).
\end{minted}
Since the definitions v1.0 and v1.1 are semantically compatible,
UAVCAN nodes using either of them can successfully interoperate on the same bus.
Suppose further that at some point a newer version of the cryopod module,
equipped with better temperature sensors, is released.
The definition is updated accordingly to use \verb|float32| for the temperature fields instead of \verb|float16|.
Seeing as that change breaks the compatibility, the major version number has to be incremented by one,
and the minor version number has to be reset back to zero:
\begin{minted}{python}
# sirius_cyber_corp.b_ark.cryopod.Status.2.0
float32 internal_temperature # [kelvin]
float32 coolant_temperature # [kelvin]
float32[3] power_consumption # [watt] Power consumption by the PSU
bool flag_cooling_system_b_active
bool flag_cooling_system_a_active
void3
bool flag_psu_malfunction
bool flag_overheating
bool flag_cryobox_breach
\end{minted}
Imagine that later it was determined that the module should report additional status information
relating to the coolant pump.
Thanks to the implicit truncation and the implicit zero extension rules,
the new fields can be introduced in a semantically-compatible way without releasing
a new major version of the data type:
\begin{minted}{python}
# sirius_cyber_corp.b_ark.cryopod.Status.2.1
float32 internal_temperature # [kelvin]
float32 coolant_temperature # [kelvin]
float32[3] power_consumption # [watt] Power consumption by the PSU
bool flag_cooling_system_b_active
bool flag_cooling_system_a_active
void3
bool flag_psu_malfunction
bool flag_overheating
bool flag_cryobox_breach
float32 rotor_angular_velocity # [radian/second] (usage of RPM would be non-compliant)
float32 volumetric_flow_rate # [meter^3/second]
# Coolant pump fields (extension over v2.0; implicit truncation/extension rules apply)
# If zero, assume that the values are unavailable.
\end{minted}
It is also possible to add an optional field at the end wrapped into a variable-length
array of up to one element, or a tagged union where the first field is empty
and the second field is the wrapped value.
In this way, the implicit truncation/extension rules would automatically make such optional field
appear/disappear depending on whether it is supported by the receiving node.
Nodes using v1.0, v1.1, v2.0, and v2.1 definitions can coexist on the same network,
and they can interoperate successfully as long as they all support at least v1.x or v2.x.
The correct version can be determined at runtime from the port identifier assignment as described in
section~\ref{sec:basic_subjects_and_services}.
In general, nodes that need to maximize their compatibility are likely to employ all existing major versions of
each used data type.
If there are more than one minor versions available, the highest minor version within the major version should
be used in order to take advantage of the latest changes in the data type definition.
It is also expected that in certain scenarios some nodes may resort to publishing the same message type
using different major versions concurrently to circumvent compatibility issues
(in the example reviewed here that would be v1.1 and v2.1).
The examples shown above rely on the primitive scalar types for reasons of simplicity.
Real applications should use the type-safe physical unit definitions available in the SI namespace instead.
This is covered in section~\ref{sec:application_functions_si}.
\end{remark}
| {
"alphanum_fraction": 0.7426865312,
"avg_line_length": 48.0550724638,
"ext": "tex",
"hexsha": "7b333d678dfe20f263253c4fa8c729c4df2d0828",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "4b87f2a79842bfad35953679b596f63988d51264",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "Finwood/specification",
"max_forks_repo_path": "specification/dsdl/compatibility.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4b87f2a79842bfad35953679b596f63988d51264",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "Finwood/specification",
"max_issues_repo_path": "specification/dsdl/compatibility.tex",
"max_line_length": 117,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "4b87f2a79842bfad35953679b596f63988d51264",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "Finwood/specification",
"max_stars_repo_path": "specification/dsdl/compatibility.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3756,
"size": 16579
} |
\documentclass{scrreprt}
\title{Building a WebGL Orrery}
\author{\textbf{Student ID:} 110059875}
\date{2014-11-21 23:59}
\usepackage{tocbibind}
\usepackage{hyperref}
\usepackage{siunitx}
\usepackage{graphicx}
\usepackage{svg}
\usepackage{float}
\begin{document}
\maketitle
\pagenumbering{roman}
\abstract{\addcontentsline{toc}{chapter}{Abstract} Building an interactive Orrery to show the relationships between the Sun, planets, moons and other celestial bodies within the solar system using WebGL and JavaScript. The Orrery uses Phong shading with ambient, diffuse and specular terms, and alpha blending modes to layer clouds over the Earth and to add rings to Saturn. The scene offers customisable parameters including camera position, zoom, field of view, simulation speed, and lighting attributes such as colour and position.
\vspace{3\baselineskip}
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/finished-result.png}
\caption{An image of the finished result.}
\end{figure}
}
\tableofcontents
\listoffigures
\chapter{Introduction}
\pagenumbering{arabic}
WebGL is a relatively recent~\cite{khronoswebglspec} JavaScript API, based on OpenGL ES 2.0 by the Khronos Group, for producing interactive 3D and 2D graphics from a web browser without any additional or proprietary plug-ins. WebGL utilises the power of the graphics processing units in today's computers to provide a rich and accessible experience for 3D graphics.
WebGL was intended to be used by developers creating abstractions on top of WebGL to make easy-to-use graphics libraries. This also helps when writing object-oriented (OO) code, as WebGL takes its procedural roots from its OpenGL ancestors. However, since this was for the most part a learning exercise, we were told to use only WebGL and a library for vector and matrix functions.
By using WebGL, we have a fairly low-level way of accessing the graphics hardware to produce 3D visualisations. These visualisations can be incredibly effective in education. Things that are hard to visualise as a static 2D image on paper come alive when shown on a screen in a 3D perspective.
An interactive Orrery can be great at showing students how orbital systems work, especially hard-to-visualise systems such as Saturn, with its rings and moons, or Uranus, with its \ang{98} axial tilt.
To get more realistic results, we can use texture mapping to paint the surfaces of the spheres, and Phong shading to approximate a realistic lighting model with ambient lighting, diffuse shading and specular reflections.
\chapter{Methods}
The project was started by first setting up a Git repository. This is a vital first step in every single piece of work I undertake. It ensures I can easily get back to any point in time for my entire codebase. I can create branches to experiment, and track exactly how long I'm spending doing what and when.
I had to set up my project structure. All of the WebGL tutorials that I could find were strictly procedural. There was no OO influence in the code, and it was a little hard to follow as it was vastly different from what I was used to when using Java. I started out by creating a structure to load the canvas and WebGL context in a modular manner.
From there I created a scene class, which contains all the necessary information about a scene. Within it, I had two loops: update and draw. Draw was looped using the requestAnimationFrame API to let the browser decide when to draw frames. This has the nice benefit of pausing when the tab is not active, reducing the load on the computer when not needed.
The update cycle is run using setTimeout every 16.667ms, which works out at 60 frames per second (FPS). By using setTimeout, the positions of planets update consistently. However, if the update cycle is too computationally complex, it will not slow down the drawing aspect of the application in any way. You can still pan around and zoom at a smooth 60FPS.
Next was choosing a library to add support for vectors and matrices in JavaScript. For this, I used glMatrix\cite{glmatrix}, which has been in development since mid-2010 and provides over 4000 lines of code implementing many vector and matrix operations.
I then utilised tutorials found on WebGL Academy\cite{webglacademy} and Learning WebGL\cite{learningwebgl} to slowly build up my application from scratch. I found the WebGL Academy tutorials especially helpful, as they offered a step-by-step approach to programming using WebGL and explained things very thoroughly, which made it easy to adapt the code into my OO model.
To build up a scene, I decided to have an array of objects, and to iterate through the array each cycle, running each object's respective update and draw methods.
Building up meshes to get spheres was interesting. I had to start with triangles, then squares, and then cubes before I could create a sphere. The jump from a cube to a sphere was probably one of the bigger hurdles in the project, especially getting the normals working correctly once I had implemented lighting.
I also discovered, by accident, that I could create a skybox to have stars in the background of my scene. This is basically putting the entire scene inside a cube and texturing the inside faces with stars.
Once the meshes were in place, I could focus on getting the orbits right. To do this, I needed several parameters to define a planet's movements. I needed a spin speed, axial tilt, orbit radius, orbit speed, orbit inclination and orbit offset. (See Fig. 2.1)
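One plausible way to combine these parameters, shown here purely as an illustration rather than the exact formulation used in the code, is to place a planet at simulation time $t$ on a circle of radius $r$ tilted by the inclination $i$:
\[
x = r\cos(\omega t + \phi_0), \qquad
y = r\sin(\omega t + \phi_0)\sin(i), \qquad
z = r\sin(\omega t + \phi_0)\cos(i),
\]
where $\omega$ is the orbit speed and $\phi_0$ the orbit offset; the spin speed and axial tilt are then applied as a separate rotation of the sphere about its own axis.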
After getting the orbits right, texturing was really easy to do, as WebGL does most of the work for you. The textures I used were provided with the assignment spec, from NASA's Blue Marble collection\cite{bluemarblenasa} and Solar Views for the Sun\cite{solarviews}.
Directional Phong shading was one of the last things to be implemented. I started off with just an ambient lighting term: an RGB value multiplied by the surface texture's RGB value. Diffuse shading was obtained by also multiplying by the dot product of the normal vector and the light direction vector. Specular highlights were added using the reflection of the light direction about the normal, the view vector and a shininess constant.
Once the directional Phong shading was implemented, it was just a case of turning it into a point light by normalising the light position minus the view position.
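The shading described above follows the standard Phong model, in which, per colour channel, the reflected intensity is
\[
I = k_a i_a + k_d (\hat{L} \cdot \hat{N})\, i_d + k_s (\hat{R} \cdot \hat{V})^{\alpha}\, i_s,
\]
where $\hat{N}$ is the surface normal, $\hat{L}$ the direction towards the light, $\hat{R}$ the reflection of $\hat{L}$ about $\hat{N}$, $\hat{V}$ the direction towards the viewer, $\alpha$ the shininess constant, and the $k$ and $i$ factors are the material and light terms for the ambient, diffuse and specular components. For the point light, $\hat{L}$ is computed per fragment from the light position as described above.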
Finally, as a quick addition, a loading screen was added to ensure that textures do not pop into place while loading. A spinner and some helpful instructions are displayed while the textures are loading.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/orbital-elements}
\caption{A diagram of orbital elements.\cite{orbitalelements}}
\end{figure}
\chapter{Results}
This project turned out very well. I managed to do most of the things I set out to do. I exceeded the expectations of the specification and, although I didn't manage everything (orbit paths, etc., discussed in the next section), I am happy with the overall result. It's visually impressive, and is fun to play around with. I think it serves as a good visual model for how the solar system's orbits work relative to each other.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/screenshot}
\caption{A screenshot of the final program.}
\end{figure}
As you can see, the controls on the bottom of the page let the user modify many things in the scene. This keeps the program from getting stale too fast, and it is fun to play around with, especially the lighting controls, as you'll see a few screenshots down.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/earthcloseup}
\caption{A closeup of the Earth, showing a cloud layer and Phong shading}
\end{figure}
The two layers of the Earth are visible in this picture. You can clearly see the cloud layer floating on top of the Earth as it would in real life. The polygon count of the sphere could perhaps have been a bit higher here to help with the looks.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/saturncloseup}
\caption{A closeup of Saturn and its rings.}
\end{figure}
Because the rings are properly alpha blended, the stars in the background are visible through them.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/lighting}
\caption{A screenshot showing lighting values that have been changed.}
\end{figure}
Ambient lighting is purple; the point light is green, with a positional offset to the left of the Sun.
\begin{figure}[H]
\includegraphics[width=\textwidth]{images/fov}
\caption{A forced perspective, with a low field of view and zoom.}
\end{figure}
\chapter{Discussion}
This project has many possible extensions. One of them would be to build a more abstracted framework, for example having geometry classes for cubes, spheres, etc. However, it could also be argued that using a framework such as three.js\cite{threejs} will accomplish this very easily.
More features include elliptical orbits, which I did not get around to implementing, lines showing the paths of the orbits, or even n-body simulation, so that changing the mass of an object has an effect on other objects.
A very nice visual effect that could enhance the looks of this project would be a basic lens flare from the Sun, bloom or a glow.
Bump mapping or normal mapping the surface of planets could greatly enhance the effect that Phong lighting gives on the surfaces.
jsOrrery\cite{buildingjsorrery} implements a lot of these ideas and it is a really impressive application of WebGL.
\chapter{Conclusion}
In conclusion, if I were to write something similar again, I would definitely use a library such as Three.js to help with the abstraction of WebGL. However, the project has clearly not been unsuccessful: I was able to create a nice-looking approximation of the solar system, along with interactive controls and advanced shading features.
\chapter{Acknowledgments}
I would like to acknowledge M. Foskett for the help on 3D graphics concepts and for lending me three (ancient) books on 3D graphics theory. These really helped my understanding despite being old, as the concepts have pretty much remained the same; the only difference is that old methods can now be achieved in real time.\cite{computergraphicsprinciplesandpractice}\cite{graphicsgems}\cite{three-dimensionalcomputergraphics}
\chapter{Appendices}
All project code and commit history will be publicly available through GitHub shortly after the submission deadline at the following address.
\url{https://github.com/bbrks/webgl-orrery}
A hosted demo of the project can also be found at the following URL, however this code may not represent the submitted state of the project, and may be either outdated or updated after the submission deadline.
\url{http://dev.bbrks.me/orrery}
\begin{thebibliography}{9}
\bibitem{khronoswebglspec}
Khronos Group (2011, Mar. 3). ``Khronos Releases Final WebGL 1.0 Specification''. Khronos Group [Online].
Available: \url{https://www.khronos.org/news/press/khronos-releases-final-webgl-1.0-specification}. [Accessed: Nov. 21, 2014].
\bibitem{glmatrix}
B. Jones \& C. MacKenzie. ``glMatrix - Javascript Matrix and Vector library for High Performance WebGL apps''. glMatrix [Online].
Available: \url{http://glmatrix.net}. [Accessed: Nov. 21, 2014].
\bibitem{webglacademy}
X. Bourry. ``WebGL Academy''. WebGL Academy [Online].
Available: \url{http://webglacademy.com}. [Accessed: Nov. 21, 2014].
\bibitem{learningwebgl}
G. Thomas. ``The Lessons''. Learning WebGL [Online].
Available: \url{http://learningwebgl.com/blog/?page_id=1217}. [Accessed: Nov. 21, 2014].
\bibitem{orbitalelements}
Wikipedia. ``Orbital Elements''. Wikipedia [Online].
Available: \url{http://en.wikipedia.org/wiki/Orbital_elements}. [Accessed: Nov. 21, 2014].
\bibitem{bluemarblenasa}
NASA. ``Blue Marble Collections''. NASA Visible Earth [Online].
Available: \url{http://visibleearth.nasa.gov/view_cat.php?categoryID=1484}. [Accessed: Nov. 21, 2014].
\bibitem{solarviews}
Solar Views. ``Artistic Cylindrical Map of the Sun''. Views of the Solar System [Online].
Available: \url{http://solarviews.com/cap/sun/suncyl1.htm}. [Accessed: Nov. 21, 2014]
\bibitem{threejs}
three.js. ``three.js - Javascript 3D library''. three.js [Online].
Available: \url{http://threejs.org}. [Accessed: Nov. 21, 2014].
\bibitem{webglcheatsheet}
Khronos Group. ``WebGL 1.0 API Quick Reference Card''. Khronos Group [Online].
Available: \url{https://www.khronos.org/files/webgl/webgl-reference-card-1_0.pdf}. [Accessed: Nov. 21, 2014].
\bibitem{buildingjsorrery}
M. V\'{e}zina. ``Building jsOrrery, a Javascript / WebGL Solar System''. La Grange Lab [Online].
Available: \url{http://lab.la-grange.ca/en/building-jsorrery-a-javascript-webgl-solar-system}. [Accessed: Nov. 21, 2014].
\bibitem{computergraphicsprinciplesandpractice}
J. Foley, et al., \textit{Computer Graphics Principles and Practice Second Edition}. Addison Wesley, 1995.
\bibitem{graphicsgems}
A. Glassner, \textit{Graphics Gems}. Academic Press, 1994.
\bibitem{three-dimensionalcomputergraphics}
A. Watt, \textit{Fundamentals of Three-Dimensional Computer Graphics}. Addison Wesley, 1989.
\end{thebibliography}
\end{document}
| {
"alphanum_fraction": 0.7739201546,
"avg_line_length": 67.255,
"ext": "tex",
"hexsha": "8804b2a73ee8f02baafbd938dfaee176859f7cbe",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "be873c667c6abada47494422489e9db5c654da91",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "bbrks/webgl-orrery",
"max_forks_repo_path": "webgl-solar-system-report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "be873c667c6abada47494422489e9db5c654da91",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "bbrks/webgl-orrery",
"max_issues_repo_path": "webgl-solar-system-report.tex",
"max_line_length": 509,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "be873c667c6abada47494422489e9db5c654da91",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "bbrks/webgl-orrery",
"max_stars_repo_path": "webgl-solar-system-report.tex",
"max_stars_repo_stars_event_max_datetime": "2015-01-28T22:10:56.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-28T22:10:56.000Z",
"num_tokens": 3161,
"size": 13451
} |
%!TEX TS-program = xelatex
\documentclass[]{friggeri-cv}
\addbibresource{bibliography.bib}
\begin{document}
\header{jakub}{jantosik}
{software engineer}
% In the aside, each new line forces a line break
\begin{aside}
\section{about}
Eindhoven
Netherlands
~
\href{mailto:[email protected]}{[email protected]}
\href{tel:+31682139873}{+31682139873}
\section{languages}
Proficient in English
Basic German
\section{programming}
JavaScript, TypeScript
(ECMAScript, node.js)
React, Vue 2
C, Rust, Java
Vulkan
\end{aside}
\section{interests}
travelling, programming in general, web development, computer graphics, computer vision, fitness, tennis, game development, Linux, web design
\section{education and certificates}
\begin{entrylist}
\entry
{2017}
{AWS Certified Developer – Associate}
{Amazon Web Services}
{Amazon certification}
\entry
{2017}
{Programming in HTML5 with JavaScript and CSS3}
{Microsoft}
{Exam 480, Microsoft certification}
\entry
{2014-2016}
{Mgr. - Applied Informatics}
{Faculty of Mathematics, Physics and Informatics}
{Thesis: Robot Karel (Javascript interpreter)}
\entry
{2011-2014}
{Bc. - Applied Informatics}
{Faculty of Mathematics, Physics and Informatics}
{Thesis: Vector editor for lower secondary school}
\entry
{2011}
{National comparative exams}
{SCIO}
{NSZ Mathematics}
\entry
{2008}
{First Certificate of English}
{University of Cambridge - ESOL Examinations}
{Cambridge ESOL Level 1 Certificate}
\entry
{2003-2011}
{University preparation}
{Gymnázium Bilíkova}
{}
\end{entrylist}
\section{experience}
\textbf{IKEA - Full Stack Software Engineer}
\begin{itemize}
\item Since April 2020
\item Develop the new generation kitchen planner
\item Design microservices with serverless architecture to achieve robust systems
\item Develop a stable and performant furniture management portal and the 3D kitchen planner using modern frontend/3D frameworks - React, 3dvia
\item Working with: React, AWS, DynamoDB, MySQL, Neptune (graph database), Lambda, Serverless, Serverless Framework, NodeJS, JavaScript, CI/CD (CodeBuild, Bitbucket pipelines), Microservice, Serverless Architecture, TDD (Test-driven development)
\item Creation of reusable node modules
\item Delivering working prototypes and proof-of-concepts
\end{itemize}
\textbf{Philips - Software Engineer}
\begin{itemize}
\item 2017-2020
\item Develop new generation HealthSuite platforms
\item Design microservices with serverless architecture to achieve robust systems
\item Working with: AWS, Linux, DynamoDB, Lambda, CloudFormation, Serverless, Serverless Framework, NodeJS, JavaScript, TypeScript, JMeter, Jenkins, CI/CD, Microservice, Serverless Architecture, TDD (Test-driven development)
\item Fully automating build pipeline process maintaining security and HIPAA, GDPR compliances
\item Heavy focus on IoT and RESTful APIs, infrastructure and simple deployment
\item Creation of reusable node modules
\item Delivering working prototypes and proof-of-concepts
\end{itemize}
\textbf{Accenture - Software Engineer}
\begin{itemize}
\item Since 2016
\item Working on prototypes using Tensorflow, React, Angular 2
\end{itemize}
\section{personal projects}
\textbf{JVec}
\begin{itemize}
\item web vector graphics editor
\item used technologies: JavaScript, jQuery, PHP, Bootstrap 3, Google Fonts API
\end{itemize}
\textbf{Distinguishing Paintings From Photographs}
\begin{itemize}
\item using multiple features, the application differentiates images of real scenes from paintings
\item option to train the classifier with the custom database of images
\item implementation is based on paper written by Florin Cutzu, Riad Hammoud, Alex Leyk
\item used technologies: Matlab
\end{itemize}
\textbf{Robot Karel}
\begin{itemize}
\item educational programming environment
\item user creates programs that control the robot by manipulating the puzzle-like blocks
\item the blocks are translated into JavaScript which is interpreted by my own sandboxed interpreter
\item used technologies: WebGL, JavaScript, jQuery, PHP
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7698337292,
"avg_line_length": 33.1496062992,
"ext": "tex",
"hexsha": "d2a6b3284754fa1c921af2f0d349c1cba9ab0a3b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9cc124d9c062788380e32910be29dc6450e039be",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jammymalina/cv",
"max_forks_repo_path": "cv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9cc124d9c062788380e32910be29dc6450e039be",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jammymalina/cv",
"max_issues_repo_path": "cv.tex",
"max_line_length": 247,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9cc124d9c062788380e32910be29dc6450e039be",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jammymalina/cv",
"max_stars_repo_path": "cv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1107,
"size": 4210
} |
\documentclass[a4]{jgaa-art}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[toc,page]{appendix}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage{hhline}
\usepackage{array}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newenvironment{definition}[1][Definition]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}}
\makeatletter
\def\BState{\State\hskip-\ALG@thistlm}
\makeatother
\algnewcommand{\LineComment}[1]{\State \(\triangleright\) #1}
\title{Enhancing PQ-tree Planarity Algorithms for non-adjacent $s$ and $t$ }
\author{Shoichiro Yamanishi}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
The PQ-tree based planarity testing algorithm presented
by Booth and Lueker in \cite{BL76} requires an st-ordering where $\{s,t\}$ must exist.
We propose an enhancement to the algorithm that permits st-ordering of any vertex pair.
The enhancement is made by introducing a type of circular consecutiveness of pertinent
leaves denoted by {\it complementarily partial},
where the pertinent leaves can be consecutively arranged only at both ends of the frontier with
one or more non-pertinent leaves in between.
The implementation is enhanced with 4 additional templates P7, P8, Q4, and Q5.
The correctness of the new algorithm is proven following the proofs in
\cite{EVEN79} and \cite{LEC67},
by showing the equivalence of the PQ-tree and its new reduction operations to the corresponding bush form.
The complexity of the new algorithm stays $O(|N|)$.
\end{abstract}
\section{Introduction}\label{se:intro}
%Context
%Purpose
%Summary
%Forecast
The PQ-tree data structure and its reduction algorithm were first proposed by Booth and Lueker \cite{BL76}
in 1976.
A PQ-tree is a rooted ordered tree with three node types: an L-node represents a leaf with no children, a P-node
permits any permutation of its children, and a Q-node imposes an ordering on its children that can only be reversed.
A PQ-tree is used to represent a set of permissible permutations of the elements of a set $S$, and the reduction
operation tries to find an arrangement in which the elements of a subset $U$ of $S$ appear consecutively.
It has many applications including the planarity testing of an undirected biconnected graph $G(V,E)$.
Booth and Lueker proposed a planarity testing algorithm in their original paper \cite{BL76}.
It has $O(|N|)$ time complexity, and it depends on a particular ordering on graph vertices called st-ordering,
where the two terminal nodes $s$ and $t$ must be adjacent as in $\{s,t\}$.
An st-ordering can be found in $O(|N|)$ time with an algorithm such as \cite{TARJAN86}, assuming $|E| \le 3|V| - 6$ .
The algorithm falls in a category called vertex addition.
Each graph vertex $v$ is processed in an iteration according to the st-ordering.
Conceptually each graph vertex is added to the bush form \cite{LEC67},\cite{EVEN79}, and the PQ-tree
evolves reflecting the state of the bush form.
Each iteration of the algorithm transforms the PQ-tree for $v$
and its incident edges.
In the beginning of an iteration, the tree leaves that correspond to the incoming graph edges incident to $v$,
or {\it pertinent leaves}, are gathered consecutively in the tree by transforming the tree
in a series of template applications.
The minimum connected subtree for all the pertinent leaves, or {\it pertinent tree}, is removed from the tree,
and then new leaves that corresponds to the outgoing graph edges incident to $v$ are added to the tree node
that was the pertinent tree root.
The st-ordering ensures there is at least one incoming graph edge and one outgoing graph edge at
each iteration except for the first and the last.
At the first iteration the leaves for the outgoing graph edges of the first graph vertex in the st-ordering
are attached to the initial P-node, and the PQ-tree evolves form there over the iterations.
At any iteration, if the pertinent leaves cannot be arranged consecutively, the algorithm aborts and declares the graph
non-planar.
At the second to last iteration, if all the pertinent leaves are consecutively arranged, the graph is declared planar.
The PQ-tree itself is an elegant data structure to represent a set of permissible permutations of elements,
but its reduction algorithm involves 10 rather cumbersome tree transformation operations called templates, though
each of them is straightforward and intuitive to understand.
Since the original algorithm was published, many graph planarity-related algorithms have been proposed,
but some of them were later shown to have issues \cite{OZAWA81} \cite{JTS89} \cite{KANT92}.
For example, J\"unger, Leipert, and Mutzel \cite{JUNGER97} discuss the pitfalls and difficulties of using
PQ-trees for maximal graph planarization.
It seems that very careful attention is required when applying PQ-trees to graph planarity algorithms, despite their apparent
straightforwardness and intuitiveness.
Another data structure called the PC-tree was proposed by Shih and Hsu \cite{HSU99}.
It is a rootless tree that expresses permissible `circular' permutations.
Hsu \cite{HSU99} proposes a planarity testing algorithm using the PC-tree with iterative vertex
addition along a DFS exploration of the graph vertices.
Hsu \cite{HSU01} compares the PQ-tree and the PC-tree in terms of planarity testing,
and mentions testing the `circular ones' property with the PC-tree and
the `consecutive ones' property with the PQ-tree.
In this paper, we extend the PQ-tree planarity algorithm to any (not necessarily adjacent) $s$,$t$ pair.
A non-adjacent st-ordering introduces a new type of consecutiveness on the PQ-tree through the course of the algorithm.
It is a type of `circular' permutation.
We call the type of circular permutations that the original algorithm cannot handle
{\it complementarily partial}.
We show the insufficiency of the set of the original templates of Booth and Lueker \cite{BL76} with a specific example, and propose 4
new templates to handle complementarily partial nodes.
Then we prove the correctness of the new algorithm following Lempel, Even, and Cederbaum \cite{LEC67} and Even \cite{EVEN79}, by establishing the equivalence to the corresponding bush form.
We show that the time complexity stays $O(|N|)$.
We then discuss some implementation details.
The algorithm mainly consists of two parts called BUBBLE() and REDUCE() in \cite{BL76}.
We show that no change is required for BUBBLE(), but the 4 new templates have to be added to REDUCE().
The PQ-tree and its reduction technique are also used in some planarization algorithms such as
\cite{OZAWA81}, \cite{JTS89}, and \cite{KANT92}.
In those algorithms, costs are calculated per tree node,
and some tree leaves, together with their associated graph edges, are removed based on the costs to maintain planarity.
The costs are basically the numbers of descendant tree leaves that would have to be removed to
make the tree node of a certain pertinent or non-pertinent type.
We briefly propose an improvement on the cost values.
Without the improvement, some graph edges can be removed unnecessarily as a consequence of not handling the {\it complementarily partial} nodes defined below.
\section{Circular Consecutiveness}\label{se:insuff}
The insufficiency of the existing algorithm is best shown by a real example.
Please see figures~\ref{fig:pq_tree_01} and~\ref{fig:bush_form_01}.
They are taken from a snapshot of the original algorithm for the graph and the st-numbering shown in Appendix A.
Figure~\ref{fig:pq_tree_01} is the PQ-tree at the 23rd iteration after
BUBBLE(), and figure~\ref{fig:bush_form_01} is the corresponding bush form.
In this iteration the graph vertex $2$ is to be added.
The pertinent leaves that correspond to $\{8,2\}$,$\{3,2\}$, and $\{7,2\}$
are about to be consecutively arranged in REDUCE().
Those edges can be consecutively arranged in a circular manner.
However, the reduction fails at Q-node $Q4$ as none of the templates Q1,Q2,
or Q3 can handle this arrangement.
In $Q4$, the pertinent leaves can only be arranged consecutively
at both ends with one or more non-pertinent leaves in between.
As shown above, the original algorithm is not capable of handling this type
of arrangement of the pertinent leaves, with pertinent leaves at both ends.
This is a result of using a rooted tree structure to handle circular
consecutiveness around the outer face of the corresponding bush form.
Once a Q-node has formed in the PQ-tree, if at a later iteration
there is a complementarily partial pertinent node,
there will be no way to arrange the pertinent nodes consecutively using the
original set of templates,
even if the corresponding bush form permits circular consecutiveness.
\begin{figure}[!htb]
\centering
\begin{minipage}[b]{0.64\textwidth}
\includegraphics[width=\textwidth]{pq_tree_sample_01}
\caption{PQ-tree}
\label{fig:pq_tree_01}
\end{minipage}
\hfill
\begin{minipage}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{bush_form_01}
\caption{Bush form}
\label{fig:bush_form_01}
\end{minipage}
\end{figure}
\section{Enhancement with New Templates}\label{se:correction}
In the previous section we showed a type of node arrangement in a PQ-tree that
the original algorithm cannot handle.
In this section we first formulate this condition by defining a new
pertinent node type {\it complementarily partial}, and then
introduce 4 new templates.
Then we discuss other changes required in BUBBLE() and REDUCE().
\begin{definition}
A \emph{P-node} is \emph{complementarily partial}, if either of the following
holds:
\begin{enumerate}
\item \label{item:P2}It is not a pertinent root, and it satisfies the
condition for template $P6$ on the arrangement of child nodes,
i.e., if there are exactly two singly partial children. (Figure \ref{fig:pq_tree_02})
\item \label{item:P1}There is exactly one complementarily partial child, and all the
other children are full. (Figure \ref{fig:pq_tree_03})
\end{enumerate}
\end{definition}
\begin{definition}
A \emph{Q-node} is \emph{complementarily partial}, if either of the following
holds:
\begin{enumerate}
\item \label{item:Q2}The children are arranged as the complement of a permissible arrangement
for template $Q3$, i.e., the descendant pertinent leaves can be arranged
consecutively only at both ends with one or more non-pertinent leaves in between.
(Figure \ref{fig:pq_tree_04})
\item \label{item:Q1}There is exactly one complementarily partial child, and all the other children
are full.
(Figure \ref{fig:pq_tree_05})
\end{enumerate}
\end{definition}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{pq_tree_sample_02}
\caption{P-node Condition 1, Template P7}
\label{fig:pq_tree_02}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{pq_tree_sample_03}
\caption{P-node Condition 2, Template P8}
\label{fig:pq_tree_03}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{pq_tree_sample_04}
\caption{Q-node Condition 1, Template Q4}
\label{fig:pq_tree_04}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.55\textwidth]{pq_tree_sample_05}
\caption{Q-node Condition 2, Template Q5}
\label{fig:pq_tree_05}
\end{figure}
The first condition for a Q-node is formally defined using a regular expression
over the children as:
\[F+((S(D?|E+D))|(E+D))|SE*D\]
where $F$ denotes a full child, $S$ a singly partial child, $E$ a
non-pertinent child, and $D:=((F|S)F*)$ for notational convenience.
First we show that there is no need to change BUBBLE() to handle the complementarily
partial cases.
If the PQ-tree permits a complementarily partial arrangement, then the root of the
PQ-tree will be the pertinent tree root. During BUBBLE(), the parents of
all the pertinent nodes will eventually be found, and there will be no need
for a surrogate parent, or {\it pseudo node} in \cite{BL76}.
Next, we introduce 4 new templates, one for each of the 4 conditions shown in the
definitions above. Basically, Template P7 is a complementary version of Template P6,
and Template Q4 is a complementary version of Q3. Templates P8 and Q5 are
for trivial recursive cases.
Finally, we show the updated REDUCE().
\begin{algorithm}
\caption{Template P7}\label{template_P7}
\begin{algorithmic}[1]
\Procedure{TemplateP7}{X: reference to a node object}
\If {$X.type \ne P$} \Return false
\EndIf
\If {Number of singly partial children $\ne 2$} \Return false
\EndIf
\State $newX \gets \text{CreateNewPNode()}$
\State Move all the full children of $X$ to $newX$
\State $C_1 \gets \text{Singly partial child}_1$
\State $C_2 \gets \text{Singly partial child}_2$
\State Remove links of $C_1$ and $C_2$ from $X$
\LineComment At this point $X$ contains zero or more empty children only.
\State Save the location of $X$ in the PQ-tree to $L_X$
\State Unlink $X$ from the PQ-tree
\If {Number of empty children of $X > 1$}
\State Put $X$ to the empty side of sibling list of $C_1$
\ElsIf {Number of empty children of $X = 1$}
\State Put the empty child to the empty side of sibling list of $C_1$
\State Discard $X$
\Else
\State Discard $X$
\EndIf
\State Concatenate the children list of $C_2$ to $C_1$'s on the empty sides
\State Discard $C_2$
\State $C_1.pertinentType \gets \textit{ComplementarilyPartial}$
\If {Number of full children of $newX \ge 1$}
\State Put $C_1$ under $newX$
\State Link $newX$ at $L_X$ in the PQ-tree
\State $newX.pertinentType \gets \textit{ComplementarilyPartial}$
\Else
\State Link $C_1$ at $L_X$ in the PQ-tree
\EndIf
\Return true
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Template P8}\label{template_P8}
\begin{algorithmic}[1]
\Procedure{TemplateP8}{X: reference to a node object}
\If {$X.type \ne P$} \Return false
\EndIf
\State $|F| \gets $ Number of full children of $X$
\State $|C| \gets $ Number of children of $X$
\If {$|F|+1 \ne |C|$} \Return false
\EndIf
\State $C_{cp} \gets $the non-full child of $X$
\If {$C_{cp}.pertinentType \ne \textit{ComplementarilyPartial}$} \Return false
\EndIf
\State {$X.pertinentType \gets \textit{ComplementarilyPartial}$}
\State\Return {true}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Template Q4}\label{template_Q4}
\begin{algorithmic}[1]
\Procedure{TemplateQ4}{X: reference to a node object}
\If {$X.type \ne Q$} \Return false
\EndIf
\If {Children of $X$ are not ordered according to the condition for Q4}
\Return false
\EndIf
\For {each $C_{sp}$ of singly partial children}
\State Flatten $C_{sp}$ into $X$ such that the full side of the children list
of $C_{sp}$ is concatenated to the full immediate sibling of $C_{sp}$
\EndFor
\State $X.pertinentType \gets \textit{ComplementarilyPartial}$
\State \Return {true}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Template Q5}\label{template_Q5}
\begin{algorithmic}[1]
\Procedure{TemplateQ5}{X: reference to a node object}
\If {$X.type \ne Q$} \Return false
\EndIf
\State $|F| \gets $ Number of full children of $X$
\State $|C| \gets $ Number of children of $X$
\If {$|F|+1 \ne |C|$} \Return false
\EndIf
\LineComment {The check above can be made without calculating $|C|$}
\State $C_{cp} \gets $the non-full child of $X$
\If {$C_{cp}.pertinentType \ne \textit{ComplementarilyPartial}$} \Return false
\EndIf
\State {$X.pertinentType \gets \textit{ComplementarilyPartial}$}
\State\Return {true}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{REDUCE}\label{reduce}
\begin{algorithmic}[1]
\Procedure{REDUCE}{T, S}
\LineComment $PERTINENT\_LEAF\_COUNT$ is shortened to $PLC$.
\LineComment $PERTINENT\_CHILD\_COUNT$ is shortened to $PCC$.
\State $QUEUE \gets $ empty list
\For {each leaf $X \in S$}
\State place $X$ to the back of $QUEUE$
\State $X.PLC \gets 1$
\EndFor
\While {$|QUEUE| > 0$}
\State remove $X$ from the front of $QUEUE$
\If {$X.PLC < |S|$}
\Comment $X$ is not $ROOT(T,S)$
\State $Y \gets X.PARENT$
\State $Y.PLC \gets Y.PLC + X.PLC$
\State $Y.PCC \gets Y.PCC - 1$
\If {$Y.PCC = 0$}
\State place $Y$ to the back of $QUEUE$
\EndIf
\If {not TEMPLATE\_L1(X)}
\If {not TEMPLATE\_P1(X)}
\If {not TEMPLATE\_P3(X)}
\If {not TEMPLATE\_P5(X)}
\If {not TEMPLATE\_P7(X)}
\If {not TEMPLATE\_P8(X)}
\If {not TEMPLATE\_Q1(X)}
\If {not TEMPLATE\_Q2(X)}
\If {not TEMPLATE\_Q4(X)}
\If {not TEMPLATE\_Q5(X)}
\State {$T \gets T(\emptyset, \emptyset)$}
\State {\bf exit} from the {\bf while} loop
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\Else
\Comment $X$ is $ROOT(T,S)$
\If {not TEMPLATE\_L1(X)}
\If {not TEMPLATE\_P1(X)}
\If {not TEMPLATE\_P2(X)}
\If {not TEMPLATE\_P4(X)}
\If {not TEMPLATE\_P6(X)}
\If {not TEMPLATE\_P8(X)}
\If {not TEMPLATE\_Q1(X)}
\If {not TEMPLATE\_Q2(X)}
\If {not TEMPLATE\_Q3(X)}
\If {not TEMPLATE\_Q4(X)}
\If {not TEMPLATE\_Q5(X)}
\State {$T \gets T(\emptyset, \emptyset)$}
\State {\bf exit} from the {\bf while} loop
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndIf
\EndWhile
\Return T
\EndProcedure
\end{algorithmic}
\end{algorithm}
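The templates above read and update a small amount of per-node state: the node kind (P, Q, or leaf), the pertinent type including the new complementarily partial value, and the PLC/PCC counters used by REDUCE(). As an illustration only, the following C++ sketch shows one possible representation of this state; all names are hypothetical and do not refer to the reference implementation discussed later.
\begin{verbatim}
// Illustrative sketch only: a possible PQ-tree node representation carrying
// the state read and written by the templates and by REDUCE().  All names are
// hypothetical and do not refer to any particular implementation.
#include <vector>

enum class NodeType { P, Q, Leaf };

enum class PertinentType {
    Empty,
    Full,
    SinglyPartial,
    DoublyPartial,
    ComplementarilyPartial   // new type handled by Templates P7, P8, Q4, and Q5
};

struct PQNode {
    NodeType             type          = NodeType::Leaf;
    PertinentType        pertinentType = PertinentType::Empty;
    PQNode*              parent        = nullptr;
    std::vector<PQNode*> children;     // ordered for Q-nodes, order-free for P-nodes
    int pertinentLeafCount  = 0;       // PLC used by REDUCE()
    int pertinentChildCount = 0;       // PCC used by REDUCE()
};

int main() {
    PQNode root;                       // build a tiny tree: one P-node with one leaf
    root.type = NodeType::P;
    PQNode leaf;
    leaf.parent = &root;
    root.children.push_back(&leaf);
    return 0;
}
\end{verbatim}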
\section{Correctness of the New Algorithm}\label{se:correctness}
We prove the correctness following the series of lemmas, theorems, and a corollary given in
Section 8.4 of \cite{EVEN79} by Even.
In that book, the equivalence between the transformations on the bush form
and the reductions on the PQ-tree is left to the reader on p.~190.
We fill in the missing part with the following proof.
In a similar case, Hsu \cite{HSU01} tries to prove the equivalence between
PQ-trees and PC-trees in Theorem 5, but the argument is not sufficient:
it proves the equivalence from PQ-tree to PC-tree for each of the templates,
but the sufficiency of the PQ-tree templates for all the possible
PC-tree transformations is not given.
The proof presented here is relatively long and involves many notations and concepts.
However, it may serve as a useful guide to the details of PQ-tree behavior.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{proof_overview}
\caption{Overview of the Proof of the Correctness of the Proposed Algorithm}
\label{fig:proof_overview}
\end{figure}
Figure \ref{fig:proof_overview} shows an overview of the proof.
It establishes the equivalence between a bush form with the operations on it and the
corresponding PQ-tree with the operations on it.
The proof uses two intermediate representations: the marked bush form and its underlying rooted embedded bc-tree.
A marked bush form is obtained from a bush form by placing a root marker on a cut vertex or on an edge of the outer
face of a block.
Such a marker splits the circular arrangement of virtual edges around the bush form into one of three types of
linear consecutive arrangements with two designated end points at the marker.
The root marker also introduces a root-to-descendants orientation to the bush form.
We prove in Lemma \ref{lem:lemma1} that a circular consecutive arrangement of virtual edges
by arbitrary reorderings of incident edges around cut vertices and flippings of blocks in a bush form is
equivalent to a linear consecutive arrangement obtained by reordering cut vertices and flipping blocks
from the leaf components toward the root. In the proof we also show that the incident components
around each cut vertex or block will be arranged in one of 5 types of orderings.
We then introduce the underlying block-cut tree of the marked bush form, called the rooted embedded bc-tree,
and prove the equivalence between the rooted embedded bc-tree and the PQ-tree with their operations in Lemmas 2 and 3.
First, we introduce some concepts, operations, and notations required for the following discussions.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.4\textwidth]{marked_bush_form_00}
\caption{A Bush Form}
\label{fig:bush_form}
\end{figure}
\begin{definition}
Type of nodes and edges along the outer face of a bush form
\begin{itemize}
\item {\bf \emph{virtual node}} of a bush form is a node of degree 1 that represents a copy of a node
in the original graph \cite{EVEN79}. In figure \ref{fig:bush_form}, $v1$...$v14$ are virtual nodes.
\item {\bf \emph{virtual edge}} of a bush form is an edge incident to a virtual node.
\item {\bf \emph{pertinent virtual node}} is a virtual node that corresponds to a pertinent leaf
in PQ-tree. I.e., a virtual node to be merged.
\item {\bf \emph{pertinent virtual edge}} is a virtual edge incident to a pertinent virtual node.
\end{itemize}
\end{definition}
\begin{definition}
Operations on a bush form and a marked bush form.
\begin{itemize}
\item {\bf \emph{attach}} is an operation to attach new virtual edges to a vertex $v$ in the bush form.
As a result it makes $v$ a cut vertex in the bush form.
\item {\bf \emph{reorder}} of a cut vertex in a bush form is the operation to rearrange the circular ordering of
the incident edges around the cut vertex.
\item {\bf \emph{flip}} of a block in a bush form is reversing the combinatorial embedding of the block.
As a result, the ordering of the outer face of the block is reversed.
\item {\bf \emph{merge}} is an operation to merge virtual nodes into a single real node in the bush form. As a result, a new
block forms in the bush form.
\end{itemize}
\end{definition}
\begin{definition}
If a bush form is decomposed into maximal connected components by removing a cut vertex $c$ or a block $B$,
an {\bf \emph{active component}} of a cut vertex $c$ or $B$ is a maximal connected component incident
to $c$ or $B$ that has at least one virtual node in it.
In figure \ref{fig:bush_form}, $c3$ has three maximal connected components.
The component that includes $b5$, $c11$, $c10$, and $b4$ is not an active component. The other two are.
\end{definition}
\begin{definition}
An {\bf \emph{orienting}} cut vertex or block is a cut vertex or a block in the bush form that has at least 3 incident
active components if the corresponding node in the PQ-tree is not the root.
If the corresponding PQ-tree node is the root, then it is a cut vertex or a block that has at least 2 incident active
components. Such a correspondence is proved in Lemma 3.
In figure \ref{fig:bush_form}, assuming $c1$ corresponds to the root of the PQ-tree, $c1$, $b1$, and $c7$
are orienting.
The components $c3$, $b5$, $c5$, and $c8$ are not.
\end{definition}
\begin{definition}
Additional operations on a marked bush form. These are used in the proofs of Lemmas \ref{lem:lemma2} and
\ref{lem:lemma3}.
\begin{itemize}
\item {\bf \emph{interlock}} is an operation to fix the orientation of
one block relative to another. As a result, flipping one of them will flip the other.
\item {\bf \emph{split}} is an operation to change a node in the bush form to $k_2$, $k_3$, or $C_4$,
and distribute the incident edges among them. If the split is for a $k_3$ or $C_4$,
a new block will result in the (marked) bush form.
\end{itemize}
\end{definition}
The following definitions are for the marked bush forms.
If we place a root marker on a cut vertex, or on an edge of the outer face of a block,
it will split the circular consecutive arrangement around the bush form into one
of 3 types. In figures \ref{fig:marked_bush_form_cut_vertex} and
\ref{fig:marked_bush_form_block}, the dots and line segments in navy blue indicate
the marked vertex or edge. The dots in light blue are pertinent virtual nodes.
The dots in light yellow are non-pertinent virtual nodes.
\begin{figure}[!htb]
\centering
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_01}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_02}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_03}
\end{minipage}
\caption{Bush forms with the root marker on a cut vertex}\label{fig:marked_bush_form_cut_vertex}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_04}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_05}
\end{minipage}
\hfill
\begin{minipage}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{marked_bush_form_06}
\end{minipage}
\caption{Bush forms with the root marker on a block}\label{fig:marked_bush_form_block}
\end{figure}
\begin{definition}
Types of linear consecutive arrangements on the marked bush form.
\begin{itemize}
\item {\it singly partially consecutive}: the pertinent virtual nodes are arranged
on either side of the linear order.
In figure \ref{fig:marked_bush_form_cut_vertex} left,
($v1$, $v3$, $v4$, $v14$, $v13$, $v12$, $v11$, $v5$, $v7$, $v6$, $v8$, $v9$, $v10$, $v2$) is such an arrangement.
In figure \ref{fig:marked_bush_form_block} left,
($v10$, $v1$, $v2$, $v14$, $v13$, $v12$, $v11$, $v3$, $v4$, $v5$, $v6$, $v7$, $v9$, $v8$) is such an arrangement.
\item {\it doubly partially consecutive}: the pertinent virtual nodes are arranged
in the middle of the linear order.
In figure \ref{fig:marked_bush_form_cut_vertex} center,
($v2$, $v14$, $v13$, $v12$, $v11$, $v1$, $v3$, $v4$, $v5$, $v7$, $v6$, $v8$, $v9$, $v10$) is such an arrangement.
In figure \ref{fig:marked_bush_form_block} center,
($v10$, $v11$, $v12$, $v13$, $v14$, $v1$, $v2$, $v3$, $v4$, $v5$, $v6$, $v7$, $v8$, $v9$) is such an arrangement.
\item {\it complementarily partially consecutive}: the pertinent virtual nodes are arranged on both sides of
the linear order with one or more non-pertinent nodes in the middle.
In figure \ref{fig:marked_bush_form_cut_vertex} right,
($v1$, $v2$, $v3$, $v4$, $v5$, $v7$, $v6$, $v9$, $v8$, $v10$, $v11$, $v12$, $v13$, $v14$) is such an arrangement.
In figure \ref{fig:marked_bush_form_block} right,
($v5$, $v4$, $v3$, $v2$, $v1$, $v14$, $v13$, $v12$, $v11$, $v10$, $v6$, $v7$, $v8$, $v9$) is such an arrangement.
\end{itemize}
\end{definition}
\begin{definition}
{\bf \emph{rooted embedded bc-tree}} is the underlying block-cut vertex tree of a marked bush form.
The root marker on the bush form gives a natural root-to-descendants orientation
on the underlying block-cut tree, and the embedding in the bush form determines
an embedding of the block-cut tree and the embeddings of the blocks.
\end{definition}
\begin{definition}
{\bf \emph{pertinent root}} of a rooted embedded bc-tree is the highest (the closest to the root) node
in the minimal connected subtree that spans the nodes for all the pertinent virtual nodes.
\end{definition}
The left-to-right ordering of the children of the root node is determined as follows.
If the root is for a cut vertex, then pick an arbitrary incident component of the root of the marked
bush form, and arrange the incident components in the counter-clockwise ordering.
If the root is for a block, then pick the incident component immediately after the marked edge in the
counter-clockwise ordering, and arrange the incident components accordingly.
\begin{definition}
\emph{Pertinent Types} of the nodes in a rooted embedded bc-tree are recursively defined as follows.
\begin{itemize}
\item {\bf \emph{empty}}:
Each child of the node $n$ is either a non-pertinent virtual edge or an empty node.
\item {\bf \emph{singly partial}}:
The node $n$ meets one of the following conditions.
\begin{itemize}
\item $n$ is for a cut vertex, there is one singly partial child, and there is no complementarily partial child.
\item $n$ is for a cut vertex, there is no singly partial child, at least one empty child, at least one full child, and no complementarily partial child.
\item $n$ is for a block, there is no complementarily partial child, there is at least one full child, all the full children are consecutively embedded on the outer face on either side of the parent with possibly at most one singly partial child at the boundary between full children and the empty ones.
\item $n$ is for a block, there is no complementarily partial child, there is no full child, and there is exactly one singly partial child immediately next to the parent.
\end{itemize}
\item {\bf \emph{doubly partial}}:
The node $n$ meets one of the following conditions.
\begin{itemize}
\item $n$ is the pertinent root, $n$ is for a cut vertex, and there are exactly two singly partial children.
\item $n$ is the pertinent root, $n$ is for a block, at least one full child, all the full children are consecutively embedded in the middle of
the outer face away from the parent, and possibly a singly partial child at each of the two boundaries
between full and empty children.
\item $n$ is the pertinent root, $n$ is for a block, there is no full child,
and there are exactly two consecutively embedded singly partial children.
\end{itemize}
\item {\bf \emph{complementarily partial}}:
The node $n$ meets one of the following conditions.
\begin{itemize}
\item $n$ is not the pertinent root, $n$ is for a cut vertex, and there are exactly two singly partial children.
\item $n$ is for a cut vertex and the children are all full except for one that is complementarily partial.
\item $n$ is for a block, and the children are arranged as the complement of the arrangement for doubly partial.
\item $n$ is for a block and the children are all full except for one that is complementarily partial.
\end{itemize}
\item {\bf \emph{full}}:
All the children of $n$ are either pertinent virtual edges or full nodes.
\end{itemize}
\end{definition}
We have defined all the necessary types and operations.
Next we prove the equivalence of a PQ-tree to its bush form.
\begin{lemma}\label{lem:lemma1}
The pertinent virtual nodes in a bush form can be arranged consecutively by arbitrary
reorder and flip operations if and only if there is a sequence of reorder and flip operations
on any rooted embedded bc-tree from the leaf nodes toward the root that arranges the pertinent virtual
edges in one of the linear consecutive arrangements.
At each node of the rooted embedded bc-tree, the operation is such that its children are arranged
in one of the 5 pertinent types.
\end{lemma}
\begin{proof}
The details are given in Appendix \ref{App:AppendixB}.
The 'only if' part is trivial.
The 'if' part is by induction on $|T_{BF}|$ of the rooted embedded bc-tree $T_{BF}$ of a marked bush form $BF$.
\end{proof}
We prove the equivalence between the marked bush form with its underlying rooted embedded bc-tree
and the PQ-tree with Lemmas \ref{lem:lemma2} \& \ref{lem:lemma3}, in the same induction step
on the number of iterations of the algorithm.
\begin{lemma}\label{lem:lemma2}
The following holds for a PQ-tree and its bush form:
\begin{itemize}
\item There is a one-to-one mapping between a P-node and an orienting cut vertex in the marked bush form.
\item There is a one-to-one mapping between a Q-node and an orienting block in the marked bush form.
\end{itemize}
\end{lemma}
Lemma \ref{lem:lemma2} gives the location of the marker on the marked bush form.
If the root of the PQ-tree is a P-node, then there is a corresponding orienting cut vertex in the bush form,
and we place the marker on it.
If the root is a Q-node, there is a corresponding block $B$ in the bush form.
We place the marker on an edge $e$ of $B$ on the outer face.
The edge $e$ is determined as follows.
The children of the root Q-node in the left-to-right ordering in the PQ-tree correspond
to a counter-clockwise ordering of the orienting active components around $B$ in the bush form.
Proceeding from the cut vertex that corresponds to the rightmost child of the Q-node
in the counter-clockwise orientation, find the first edge $e$ on the outer face of $B$.
\begin{lemma}\label{lem:lemma3}
The pertinent virtual nodes in the marked bush form
can be consecutively arranged into one of the 5 types
using a series of reorder and flip operations from the leaves to the root
if and only if there is an equivalent series of transformations by templates
in REDUCE() for the PQ-tree that arranges the corresponding pertinent
leaves into the same consecutive type.
\end{lemma}
\begin{proof}
The details of the proof of Lemmas \ref{lem:lemma2} \& \ref{lem:lemma3} are given in Appendix \ref{App:AppendixC}.
The 'only if' part is trivial.
The 'if' part is by induction on the number of iterations of the algorithm.
Lemma \ref{lem:lemma3} is proved by examining exhaustively
all the cases of operations on the marked bush form and finding equivalent templates of the PQ-tree.
Lemma \ref{lem:lemma2} is proved by examining all the cases for the births and the deaths of
the orienting cut vertices and blocks, and finding equivalent P-nodes and Q-nodes in the PQ-tree.
\end{proof}
\begin{theorem}
The pertinent virtual edges in the bush form can be circularly consecutively arranged
if and only if REDUCE() can transform the corresponding PQ-tree such that the pertinent
leaves are arranged consecutively in one of the 5 pertinent types.
\end{theorem}
\begin{proof}
This is a direct application of Lemmas 1, 2, and 3.
\end{proof}
This concludes the proof of the correctness of the new algorithm.
\section{Time Complexity}\label{se:complexity}
\begin{theorem}
The time complexity of the new algorithm stays $O(|N|)$.
\end{theorem}
\begin{proof}
We follow the discussion given in the original \cite{BL76} around
Theorem 5, which depends on Lemmas 2, 3, and 4 of \cite{BL76}.
Lemma 2 of \cite{BL76} holds for the new algorithm as there is no change in BUBBLE().
Lemma 3 of \cite{BL76} holds for the updated REDUCE() with the new templates as follows.
It is easy to see that the required time for P8, Q4, and Q5 is on the order
of the number of pertinent children. As for P7, just like P5, the empty children
can implicitly stay in the original P-node, and it will be put under
the new Q-node. This way, P7 runs in the order of the number of
pertinent children.
Lemma 4 holds for the updated REDUCE() as there are only $|S|$ leaves
and at most $O(|S|)$ nonunary nodes in the PQ-tree.
Theorem 5 holds as Lemmas 2, 3, and 4 hold, and
the new templates P7, P8, Q4, and Q5 do not apply to unary nodes.
In Theorem 5, we substitute $m=|E|$, $n=|V|$, and $SIZE(\mathbb{S})=|E|$, and hence
the algorithm runs in $O(|E|+|V|+|E|) = O(|V|)$, assuming $|E| \le 3|V| -6$.
\end{proof}
\section{Implementation Issues}\label{se:implementation}
In this section we discuss two topics regarding implementation.
First we discuss the changes required to remove the pertinent tree of the complementarily partial
type. Second we propose an improvement to Ozawa-type planarization algorithms with a new
cost calculation technique.
A reference implementation is found in github:yamanishi/colameco.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\textwidth]{pq_tree_com}
\caption{Removal of a Complementarily Partial Pertinent Tree}
\label{fig:pq_tree_com}
\end{figure}
Removing a complementarily partial pertinent subtree and emitting new leaves
from a new P-node is different from the other cases.
A complementarily partial pertinent subtree originates from a Q-node
in Template P7 or Q4. All the other nodes above this lowest Q-node, including the tree root,
are pertinent nodes.
They will be removed and the Q-node will become the new tree root.
The P-node to which the new leaves are attached will be put under
the Q-node on either end. See figure \ref{fig:pq_tree_com}.
As shown in \cite{JUNGER97}, there has been no correct $O(|N|^2)$-time maximal planarization algorithm using PQ-trees,
except for the $O(|N||E|)$ add-edge-and-test type that calls an $O(|N|)$ planarity test multiple times.
However, the first stage of such algorithms, e.g., $PLANARIZE(G)$ of \cite{JTS89},
generates a planar spanning connected subgraph that can be used as a base graph for further maximal planarization.
Here we base our discussion on \cite{JTS89} and propose an improvement with an additional
node type and a cost value. \cite{JTS89} defines 4 node types: W, B, H, and A, which correspond to
empty, full, singly partial, and doubly partial, respectively. We propose a new type C, which
corresponds to 'complementarily partial', and its associated cost value $c$ for each node.
The value for $c$ in $COMPUTE1()$ in \cite{JTS89} is calculated as follows.
\begin{enumerate}
\item $X$ is a pertinent leaf, $c = 0$.
\item $X$ is a full node, $c = 0$.
\item $X$ is a partial P-node, $c = a$.
\item $X$ is a partial Q-node, $c = \min\{\gamma_1,\gamma_2\}$.
\end{enumerate}
\begin{equation}
\begin{aligned}
\gamma_1 &= \sum_{i \in P(X)} w_i - \max_{i \in P(X)}\{w_i - c_i\} \\
\gamma_2 &= \sum_{i \in P(X)} w_i - \left( \max_{i \in P_L(X)}\{w_i - h_i\}
          + \max_{i \in P_R(X)}\{w_i - h_i\} \right)
\end{aligned}
\end{equation}
where $P_L(X)$ denotes the maximal consecutive sequence of pertinent children of $X$ from the
left end such that all the nodes except the rightmost one are full. The rightmost one
may be either full or singly partial. $P_R(X)$ is defined similarly from the right end.
After the cost calculation in a bottom-up manner, the types of the nodes can be determined
top-down from the tree root using the new type $C$.
In this way, the algorithm is capable of handling complementarily partial situations and
may be able to reduce the number of edges removed.
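As an illustration only, the following C++ sketch computes the proposed cost $c$ for a partial Q-node of the new type C from per-child cost values, following the formulas for $\gamma_1$ and $\gamma_2$ above; the Child structure and the way $P_L(X)$ and $P_R(X)$ are supplied are assumptions made for this sketch and are not part of \cite{JTS89} or of the proposal itself.
\begin{verbatim}
// Illustrative sketch only: cost c for a partial Q-node of the new type C,
// following gamma_1 and gamma_2 above.  The Child structure and the way
// P_L(X) and P_R(X) are passed in are assumptions made for this sketch.
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

struct Child {
    double w;   // per-child cost values as in the W/B/H/A/C scheme
    double h;
    double c;
};

// children: the pertinent children P(X) in left-to-right order (non-empty).
// numLeft / numRight: sizes of the maximal sequences P_L(X) and P_R(X) (>= 1).
double costTypeC(const std::vector<Child>& children,
                 std::size_t numLeft, std::size_t numRight) {
    const double lowest = -std::numeric_limits<double>::infinity();
    double sumW = 0.0, maxWC = lowest, maxLeft = lowest, maxRight = lowest;
    for (std::size_t i = 0; i < children.size(); ++i) {
        sumW += children[i].w;
        maxWC = std::max(maxWC, children[i].w - children[i].c);
        if (i < numLeft)                      // i lies in P_L(X)
            maxLeft = std::max(maxLeft, children[i].w - children[i].h);
        if (i + numRight >= children.size())  // i lies in P_R(X)
            maxRight = std::max(maxRight, children[i].w - children[i].h);
    }
    const double gamma1 = sumW - maxWC;
    const double gamma2 = sumW - (maxLeft + maxRight);
    return std::min(gamma1, gamma2);
}
\end{verbatim}
Such a routine would be called during the bottom-up cost pass, once per partial Q-node, before the top-down type determination described above.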
\section{Experiments}\label{se:experiments}
The execution time of the new algorithm is reported in Figure \ref{fig:plot_02}.
The x-axis is the time to process one graph in microseconds, measured with the clock() function of the C++ standard library.
The y-axis indicates the number of vertices in the given biconnected planar graph.
The test program was run on a Mac with a 2.8 GHz Core i7.
The program was compiled by Apple LLVM 8.0.0.
The plot shows the linearity of the algorithm.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{plot_performance_01}
\caption{Processing Time of the New Algorithm}
\label{fig:plot_02}
\end{figure}
For all the test cases the new algorithm detected planarity.
Here 'random'ness is not rigorous in the mathematical sense.
It is handled by the pseudo-random number generator functions std::rand() and std::srand() in C++.
The code and the data used for these experiments can be found in github:yamanishi/colameco.
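As an illustration only, the following C++ sketch shows how the per-graph processing time can be measured in microseconds with clock(), together with the pseudo-random seeding mentioned above; the graph generator and the planarity test are placeholders and do not refer to the actual test code.
\begin{verbatim}
// Illustrative sketch only: timing one run with clock(), in microseconds.
// makeRandomBiconnectedPlanarGraph() and testPlanarity() are placeholders.
#include <cstdio>
#include <cstdlib>
#include <ctime>

// Placeholder for generating a pseudo-random test case with std::rand().
static int makeRandomBiconnectedPlanarGraph(int numVertices) {
    return std::rand() % numVertices;   // stands in for a real generator
}

// Placeholder for the PQ-tree planarity test.
static bool testPlanarity(int graphHandle) {
    (void)graphHandle;
    return true;
}

int main() {
    std::srand(12345U);   // fixed seed for repeatable pseudo-random test cases
    for (int n = 1000; n <= 100000; n *= 10) {
        const int g = makeRandomBiconnectedPlanarGraph(n);
        const std::clock_t begin = std::clock();
        const bool planar = testPlanarity(g);
        const std::clock_t end = std::clock();
        const double micros = 1.0e6 * double(end - begin) / CLOCKS_PER_SEC;
        std::printf("n=%d planar=%d time=%.1f us\n", n, planar ? 1 : 0, micros);
    }
    return 0;
}
\end{verbatim}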
\section{Conclusion}\label{se:conclusion}
We have shown an enhancement to the original PQ-tree planarity algorithm proposed in \cite{BL76}
and proved its correctness. The time complexity stays in $O(|N|)$.
The enhancement applies not only to the planarity test for graphs, but also to anything
involving the circular consecutive ones property. As far as the author knows, however, there seem to
be no applications other than planarity testing.
\section{Acknowledgements}
The author wishes to thank whoever has encouraged and supported him for this article.
\clearpage
\bibliographystyle{abbrvurl}
\bibliography{pq_tree_enhancement}
\clearpage
% \appendix
\begin{appendices}
\section{A Planar Biconnected Graph and a ST-Numbering}\label{App:AppendixA}
Following is the planar biconnected graph and the
st-numbering used to produce the PQ-tree and the bush form in figure
\ref{fig:pq_tree_01} and \ref{fig:bush_form_01}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{sample_graph}
\caption{Biconnected Planar Graph Used for Figure \ref{fig:pq_tree_01} and \ref{fig:bush_form_01}}
\label{fig:sample_graph}
\end{figure}
The st-numbering:
$(8, 13, 7, 14, 19, 20, 25, 24, 23, 22, 21, 16, 11, 17, 15, 9, 10, 5, 4, 3, 2, 1, 6, 12, 18)$
\section{Proof of Lemma \ref{lem:lemma1}}\label{App:AppendixB}
The 'only if' part is trivial. We prove the 'if' part by induction on
$|T_{BF}|$
of the rooted embedded bc-tree $T_{BF}$ of a marked bush form $BF$.
Assume after some arbitrary reorder and flip operations, the pertinent virtual edges have been arranged
consecutively around the bush form.
This is equivalent to arranging the pertinent virtual edges into one of 5 types of
linear consecutive arrangements in a marked bush form $BF$.
If $|T_{BF}| = 1$, the lemma trivially holds as the bush form consists of one pertinent or non-pertinent virtual node.
If $|T_{BF}| = 2$, the lemma trivially holds as the bush form consists of $k_2$ whose incident nodes are a cut vertex and a virtual node. Technically the node in $k_2$ is not a cut vertex, but we consider it a cut vertex for the sake of the argument.
Assume $|T_{BF}| \ge 3$.
We split $T_{BF}$ at the root node by splitting one connected component $C$ from the rest of $T_{BF}$.
If the marker in $BF$ is on a cut vertex $c$, then we split one connected active component $C$ of $T_{BF}$.
If the marker is on an edge $e$ of a block $B$ in $BF$, then we pick the closest cut vertex $c$
after $e$ in the counter-clockwise ordering around $B$
such that the maximal connected component after
removing the block from $c$ is still active.
Let $n_c$ be the corresponding node of $c$ in $T_{BF}$.
$C$ is the maximal connected component of $T_{BF} \setminus B$ that contains $n_c$.
Now $T_{BF}$ is decomposed into two components $D=T_{BF} \setminus C$ and $C$ at $n_c$, and
$BF$ is decomposed into two parts $BF_{D}$ and $BF_{C}$ at $c$.
$BF_{D}$ and $BF_{C}$ can be considered two marked bush form with the marker placed on $c$ in $BF_{C}$.
We examine each of the 5 types of linear consecutive arrangements around $BF$,
and find all the possible combinations of pertinent types of $BF_{D}$ and $BF_{C}$.
We observe that Table \ref{tab:table1} enumerates all the possible combinations.
We observe that for each linear consecutive arrangement around $BF$, any other
arrangement of $BF_{D}$ or $BF_{C}$ not shown in Table \ref{tab:table1}
would lead to a non-consecutive arrangement of the pertinent virtual nodes around $BF$.
By the induction hypothesis, the condition on the pertinent types of the children holds
for both $BF_{D}$ and $BF_{C}$.
For each of the 14 cases in Table \ref{tab:table1}, if we re-compose $BF_{D}$ and $BF_{C}$ at $c$ into $BF$,
then we can see that the condition on the corresponding pertinent type for the case holds for $BF$.
For example, if $BF$ is singly partial, then there are 6 permissible combinations of the
pertinent types of $BF_{D}$ and $BF_{C}$. We observe that any other combination would lead to a different
pertinent type of $BF$ or to a non-planar arrangement.
If $BF_{D}$ ends up full, and $BF_{C}$ singly partial, after some operations from the leaves to the root
by the induction hypothesis,
then it is easy to see that $BF$ will end up singly partial as desired, possibly after flipping
$BF_{C}$ and reordering all the incident components around $c$ or $B$.
The other cases can be proved in the same manner.
\begin{table}[h]
\begin {tabular}{|l|l|l|}
\hhline{|=|=|=|}
{\bf Type on $BF$} & {\bf Type on $BF_{D}$} & {\bf Type on $BF_{C}$} \\
\hhline{|=|=|=|}
Empty & Empty & Empty \\
\hhline{|=|=|=|}
Singly Partial & Empty & Singly Partial \\
\hline
Singly Partial & Empty & Full \\
\hline
Singly Partial & Singly Partial \textsuperscript{*1} & Empty \\
\hline
Singly Partial & Singly Partial \textsuperscript{*2} & Full \\
\hline
Singly Partial & Full & Empty \\
\hline
Singly Partial & Full & Singly Partial \\
\hhline{|=|=|=|}
Doubly Partial & Empty & Doubly Partial \\
\hline
Doubly Partial & Singly Partial \textsuperscript{*3} & Empty \\
\hline
Doubly Partial & Singly Partial \textsuperscript{*2} & Singly Partial \\
\hline
Doubly Partial & Doubly Partial & Empty \\
\hhline{|=|=|=|}
Complementarily Partial & Singly Partial \textsuperscript{*4} & Singly Partial \\
\hline
Complementarily Partial & Full & Complementarily Partial \\
\hhline{|=|=|=|}
Full & Full & Full \\
\hhline{|=|=|=|}
\multicolumn{3}{p{12cm}}{\textsuperscript{*1}\footnotesize{If the root is a block, then the pertinent virtual edges are on $e$ side, not on $c$ side.}}\\
\multicolumn{3}{p{12cm}}{\textsuperscript{*2}\footnotesize{If the root is a block, then the pertinent virtual edges are on $c$ side, not on $e$ side.}}\\
\multicolumn{3}{p{12cm}}{\textsuperscript{*3}\footnotesize{If the root is a block, then the pertinent virtual edges are on $c$.
If the root is a cut vertex, this row does not apply.}}\\
\multicolumn{3}{p{12cm}}{\textsuperscript{*4}\footnotesize{If the root is a block, then the pertinent virtual edges are on $e$. If the root is a cut vertex, this row does not apply.}}\\
\end {tabular}
\caption{\textbf{All the permissible combinations of pertinent types of $BF_{D}$ and $BF_{C}$.}}
\label{tab:table1}
\end{table}
\section{Proof of Lemma \ref{lem:lemma2} and \ref{lem:lemma3}}\label{App:AppendixC}
At the first iteration, Lemma \ref{lem:lemma2} and \ref{lem:lemma3} trivially hold.
The only operation is an attach operation
for the initial virtual edges to an initial cut vertex $c$ in the bush form,
and the corresponding operation on PQ-tree is creation of a new P-node and attaching pertinent
leaves to it. The root marker will be placed on $c$.
By the induction hypothesis, Lemmas \ref{lem:lemma2} and \ref{lem:lemma3} hold up to the $i$-th iteration.
By Lemma \ref{lem:lemma1} and \ref{lem:lemma2},
there is a marked bush form $BF$ whose marker is placed on a cut vertex $c$ if
the root of the PQ-tree is P-node, or on an edge of a block $B$ if the root is a Q-node.
Assume we have arranged the pertinent nodes for the $(i+1)$-th iteration by an arbitrary set
of reorder and flip operations on the bush form.
Then by Lemma \ref{lem:lemma1},
we can arrange the child components of each node of the rooted embedded bc-tree $T_{BF}$
into one of 5 pertinent types from the descendants toward the root in $T_{BF}$.
Now we examine each pertinent type for a cut vertex and a block in a marked bush form
for their equivalence to their
corresponding nodes in the PQ-tree. We introduce two new operations to the marked bush form, which
are interlock and split.
Table \ref{tab:table2} shows all the operations on an orienting cut vertex
and their equivalent templates on PQ-tree.
Table \ref{tab:table3} is for an orienting block.
The following explains the symbols used in those tables.
\begin {itemize}
\item a square is a component
\item a circle is a cut vertex
\item sky blue indicates a pertinent component.
\item yellow indicates a non-pertinent component.
\item a square enclosing '$p$' is a parent block component.
This does not apply if the cut vertex is the root.
\item a circle enclosing '$p$' is a parent cut vertex component.
This does not apply if the block is the tree root.
\item a square in sky blue is a full component.
\item a square in yellow is a non-pertinent component.
\item a rectangle enclosing yellow and sky blue squares is a singly partial component.
\item a circle in sky blue with a yellow wedge is a complementarily partial component.
\item a polygon is a block.
\item a grey triangle or a quadrilateral is a block induced by a split operation at a cut vertex.
\item a dashed broken line indicates interlocking of two blocks.
\end {itemize}
We observe that the rows of those tables exhaustively cover all the cases of the 4 pertinent types. (The empty type
is not considered.)
Then we can prove the equivalence for each case with the aid of interlock and split operations
as well as reorder and flip.
For example, take the case for 'Singly Partial 2' for an orienting cut vertex $c$.
Originally, $c$
has 3 full child components, 3 non-pertinent child components, and a singly partial child component.
To make it singly partial, we first reorder the incident ordering of those children so that
the full children are arranged consecutively on one side, the non-pertinent children on the other,
and the singly partial child between those two oriented such that the full side of the singly partial
child is on the full side of children. Please note it is not a circular ordering due to the presence
of the parent component.
To make those changes permanent, we split $c$
into a triangle $k_3$ or a quadrilateral $C_4$ and fix the orientation of the singly partial child
relative to them with the interlock operation. The 3rd column in Table \ref{tab:table2} is for the case
in which $c$ is a root component, and the 5th column is for the case in which it is not.
We can see those are in fact equivalent to Templates P4 and P5, respectively, for the case in which
there are both full and non-pertinent components.
In the 3rd column, the interlocked $k_3$ and the singly partial child together correspond to
the resultant Q-node in P4 with the orientation of the children preserved. If there is more than one
full child component, the full vertex on $k_3$ becomes a new orienting cut vertex, which corresponds
to the new full P-node under the Q-node in P4. Similarly, in the 5th column, the interlocked $C_4$
and the singly partial child together correspond to the resultant Q-node in P5.
The other cases can be examined in the same way.
We see that in fact, those tables cover all the cases for all the templates for PQ-tree.
In case of 'Doubly Partial 4', there are no corresponding PQ-tree templates, as those nodes are above the
pertinent root.
This concludes the proof of Lemma 3.
As for Lemma 2, we prove the equivalence in the birth and the death of the orienting cut vertex
and the corresponding P-node, and the equivalence of an orienting block to Q-node.
An orienting cut vertex is born only at the following location.
\begin{itemize}
\item an attach operation with more than 1 virtual edge to be attached.
\end{itemize}
Please note that a split operation can produce $k_3$ or $C_4$, which means 2 or 3 new cut vertices
are created. However, the one incident to the parent is not orienting as it has just 2 active components.
The one incident to the interlocked singly partial child is not orienting either.
The one on the full side is a temporary cut vertex which will be absorbed inside a newly created
block on a merge operation later. The remaining one on the non-pertinent side is considered to have been
transferred from the original cut vertex. So, effectively a split operation does not create a new orienting cut vertex.
An orienting cut vertex will die at the following locations.
\begin{itemize}
\item Becoming full. In this case the children of the cut vertex will be merged into a new cut vertex,
and eventually it will have two active components and will become non-orienting.
\item Becoming complementarily partial, and it has a complementarily partial child.
In this case all the full children together with the parent will be merged into a new cut vertex,
and eventually it will have two active components and will become non-orienting.
\item Singly Partial 1, non-pertinent root, and there is one non-pertinent child. In this case the new cut vertex
on the non-pertinent side of $k_3$ will have only one child, and it has two active components and it
will become non-orienting.
\item Singly Partial 3, non-pertinent root, and there is one non-pertinent child. In this case the new cut vertex
on the non-pertinent side of $k_3$ will have only one child, and it has two active components and it
will become non-orienting.
\item Singly Partial 4, pertinent root and non-pertinent root, and there is no non-pertinent child. In this case, the $k_3$ by the split
operation does not produce any cut vertex for non-pertinent children and hence the cut vertex vanishes.
\item Doubly Partial or Complementarily Partial 3, Root and Non-root, and there is no non-pertinent child.
In this case, the $C_4$ by the split
operation does not produce any cut vertex for non-pertinent children and hence the cut vertex vanishes.
\end{itemize}
An orienting block is born under the following condition.
\begin{itemize}
\item $k_3$ or $C_4$ is generated by a split operation
\end{itemize}
An orienting block dies at the following locations.
\begin{itemize}
\item Where a singly partial child is interlocked to the parent. In this case there is a singly
partial block in the component specified by the singly partial child, and the parent.
\item Becoming full. In this case the children of the block will be merged into a new cut vertex,
and eventually it will have two active components and will become non-orienting.
\item Becoming complementarily partial, and it has a complementarily partial child.
In this case all the full children together with the parent will be merged into a new cut vertex,
and eventually it will have two active components and will become non-orienting.
\end{itemize}
For each of the birth and death cases above, there is a corresponding creation or destruction of
a P-node or Q-node. For example, take the 3rd death case of an orienting cut vertex that is singly
partial, non-root, and has one non-pertinent child. This corresponds to template P3.
Since there is one non-pertinent child, it will be directly placed as one of two children of the Q-node.
Eventually, the original P-node is considered removed from the PQ-tree.
We can show the equivalence for the other cases. We can also show that those cases cover all the locations
in the templates, the attach operation, and the removal of the pertinent subtree, in which new P-nodes and Q-nodes are created
and destroyed. This concludes the proof of Lemma \ref{lem:lemma2}.
\begin{table}[h]
\def\arraystretch{1}
\centering
\begin{tabular}
{|C{0.22\textwidth}|C{0.14\textwidth}|C{0.14\textwidth}|C{0.05\textwidth}|C{0.14\textwidth}|C{0.05\textwidth}|}
\hline
Pertinent type & Original State & Operation for Pert Root & PQ Op. & Operation for Non-Pert Root & PQ Op.\\
\hline
Full&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_01_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_01_root} &
P1 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_01_nonroot} &
P1 \\
\hline
Empty &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_11_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_11_root} &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_11_nonroot} &
- \\
\hline
Singly Partial 1&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_02_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_02_root} &
P2 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_02_nonroot} &
P3 \\
\hline
Singly Partial 2&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_03_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_03_root} &
P4 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_03_nonroot} &
P5 \\
\hline
Singly Partial 3&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_04_before} &
N/A&
- &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_04_nonroot} &
P5 \\
\hline
Singly Partial 4&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_05_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_05_root} &
P4 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_05_nonroot} &
P5 \\
\hline
Doubly Partial or Complementarily Partial 1&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_06_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_06_root} &
P6 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_06_nonroot} &
P7 \\
\hline
Doubly Partial or Complementarily Partial 2&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_07_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_07_root} &
P6 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_07_nonroot} &
P7 \\
\hline
Doubly Partial or Complementarily Partial 3&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_08_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_08_root} &
P6 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_08_nonroot} &
P7 \\
\hline
Doubly Partial 4&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_10_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_10_root} &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_10_nonroot} &
- \\
\hline
Complementarily Partial&
\includegraphics[width=0.1\textwidth]{bc_transform_cv_09_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_09_root} &
P8 &
\includegraphics[width=0.1\textwidth]{bc_transform_cv_09_nonroot} &
P8 \\
\hline
\end{tabular}
\caption{Operations on an orienting cut vertex and equivalent PQ-tree templates}
\label{tab:table2}
\end{table}
\begin{table}[h]
\def\arraystretch{1}
\centering
\begin{tabular}
{|C{0.22\textwidth}|C{0.14\textwidth}|C{0.14\textwidth}|C{0.05\textwidth}|C{0.14\textwidth}|C{0.05\textwidth}|}
\hline
Pertinent type & Original State & Operation for Pert Root & PQ Op. & Operation for Non-Pert Root & PQ Op.\\
\hline
Full&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_01_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_01_root} &
Q1 &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_01_nonroot} &
Q1 \\
\hline
Empty&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_12_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_12_root} &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_12_nonroot} &
- \\
\hline
Singly Partial 1&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_02_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_02_root} &
Q2 &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_02_nonroot} &
Q2 \\
\hline
Singly Partial 2&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_03_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_03_root} &
Q2 &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_03_nonroot} &
Q2 \\
\hline
Doubly Partial 1&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_04_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_04_root} &
Q3 &
N/A &
- \\
\hline
Doubly Partial 2&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_05_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_05_root} &
Q3 &
N/A &
- \\
\hline
Doubly Partial 3&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_06_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_06_root} &
Q3 &
N/A &
- \\
\hline
Doubly Partial 4&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_11_before} &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_11_nonroot} &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_11_root} &
- \\
\hline
Complementarily Partial 1&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_07_before} &
N/A &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_07_nonroot} &
Q4 \\
\hline
Complementarily Partial 2&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_08_before} &
N/A &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_08_nonroot} &
Q4 \\
\hline
Complementarily Partial 3&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_09_before} &
N/A &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_09_nonroot} &
Q4 \\
\hline
Complementarily Partial 4&
\includegraphics[width=0.1\textwidth]{bc_transform_bl_10_before} &
N/A &
- &
\includegraphics[width=0.1\textwidth]{bc_transform_bl_10_nonroot} &
Q5 \\
\hline
\end{tabular}
\caption{Operations on an orienting block and equivalent PQ-tree templates}
\label{tab:table3}
\end{table}
\end{appendices}
\end{document}
| {
"alphanum_fraction": 0.7370724404,
"avg_line_length": 45.7688253012,
"ext": "tex",
"hexsha": "e684a69fe4c2bd8d9c3c937a7645e53933f3e8ed",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-08-10T21:13:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-08-10T21:13:51.000Z",
"max_forks_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ShoYamanishi/wailea",
"max_forks_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b",
"max_issues_repo_issues_event_max_datetime": "2021-06-18T17:31:13.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-06-18T17:31:13.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ShoYamanishi/wailea",
"max_issues_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex",
"max_line_length": 303,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "e6263ed238ae32233a58d169868a4a94bf03a30b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ShoYamanishi/wailea",
"max_stars_repo_path": "docs/pq_tree_enhancement/pq_tree_enhancement.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-02T18:38:35.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-15T10:07:47.000Z",
"num_tokens": 16873,
"size": 60781
} |
This section describes all interfaces between components, their interaction, and their input/output parameters. For further detailed explanations of their functioning, dependencies, resources, operations, and parameters, refer to the RASD release.
\subsection{Application$\leftrightarrow$Client Interface}
\begin{itemize}
\item [] \textbf{Components}:
\begin{itemize}
\item PowerEnJoyServer (Application)
\item Mobile App (Client)
\end{itemize}
\item [] \textbf{Communication system}: JPA, JDBC APIs;
\item [] \textbf{Protocols}: standard HTTPS protocol;
\end{itemize}
\subsection{Application$\leftrightarrow$Database Interface}
\begin{itemize}
\item [] \textbf{Components}:
\begin{itemize}
\item Data Manager (PowerEnJoyServer) (Application)
\item PowerEnjoy DB (Database)
\end{itemize}
\item [] \textbf{Communication system}: JPA, JDBC APIs;
\item [] \textbf{Protocols}: standard TCP/IP protocols;
\end{itemize}
\subsection{Map services}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
% Zone Management
\multirow{2}{*}{\textbf{Zones Management}}
& \multirow{2}{*}{Drivers}
& location & [Position]\\
&& available car & [Car]\\ [1.5ex]
\hline\\
% Google API
\multirow{2}{*}{\textbf{Google Maps APIs}}
& \multirow{1}{*}{Drivers}
& address & [string]\\
		&& car location & [Position]\\ [1.5ex]
\hline\\
\end{tabular}
\end{center}
\subsection{Account manager}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
% REGISTRATION
\multirow{4}{*}{\textbf{Registration}}
& \multirow{4}{*}{Visitors}
& email & [string]\\
&& password & [string]\\
&& license ID & [string]\\
		&& ... & \\ [1.5ex]
\hline\\
% LOGIN
\multirow{2}{*}{\textbf{Login}}
& \multirow{2}{*}{Visitors}
& email & [string]\\
&& password & [string]\\ [1.5ex]
\hline\\
% EMAIL CONFIRMATION
\multirow{2}{*}{\textbf{Email Confirmation}}
& \multirow{1}{*}{Driver}
& \multirow{2}{*}{password} & \multirow{2}{*}{[string]}\\ [1.5ex]
\hline\\
% PROFILE EDITING
		\multirow{2}{*}{\textbf{Profile Editing}}
& \multirow{2}{*}{Drivers}
& new email & [string]\\
&& ... & \\ [1.5ex]
\hline\\
% PROFILE DELETING
\multirow{2}{*}{\textbf{Profile Deleting}}
& \multirow{2}{*}{Drivers}
& token & [/]\\
&& password & [string]\\ [1.5ex]
\hline
\end{tabular}
\end{center}
\newpage
\subsection{Notification}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
% Car Reservation NOTIFICATION
\multirow{4}{*}{\textbf{Car Reservation Notification}}
& \multirow{4}{*}{Drivers}
& car location & [Position]\\
&& ETA & [interval]\\
&& car state & [boolean]\\
&& ... & \\ [1.5ex]
\hline\\
% BILL RIDE NOTIFICATION
\multirow{4}{*}{\textbf{Bill Ride Notification}}
& \multirow{4}{*}{Drivers}
& time of rent & [Time]\\
&& total amount & [float]\\
&& discount & [float]\\
&& ... & \\ [1.5ex]
\hline
\end{tabular}
\end{center}
\subsection{Ride manager}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
% RIDE REQUEST
\multirow{9}{*}{\textbf{Ride Management}}
& \multirow{9}{*}{Drivers}
& user ID & [string]\\
&& start location & [Position]\\
&& start time & [Time]\\
&& end time & [Time]\\
&& end location & [Position]\\
&& num passengers & [int]\\
&& battery state & [float]\\
&& bill & [float]\\
&& ... & \\ [1.5ex]
\hline\\
% Reservation
\multirow{2}{*}{\textbf{Reservation Update}}
& \multirow{2}{*}{Drivers}
& request status & [enum]\\
&& new request status & [enum]\\ [1.5ex]
\hline\\
% Ride update
		\multirow{2}{*}{\textbf{Ride Status Update}}
& \multirow{2}{*}{Drivers}
& ride status & [enum]\\
&& new ride status & [enum]\\ [1.5ex]
\hline\\
\end{tabular}
\end{center}
\subsection{Zone manager}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
\multirow{4}{*}{\textbf{Zone Management}}
& \multirow{4}{*}{Drivers}
& car location & [Position]\\
&& discount & [float]\\
&& car state & [boolean]\\
&& ... & \\ [1.5ex]
\hline\\
\multirow{2}{*}{\textbf{Zone Update}}
& \multirow{2}{*}{Drivers}
& status & [enum]\\
&& new zone & [Zone]\\ [1.5ex]
\hline\\
\end{tabular}
\end{center}
\subsection{Bill manager}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
\multirow{5}{*}{\textbf{Bill Management}}
& \multirow{5}{*}{Drivers}
& car location & [Position]\\
&& battery state & [float]\\
&& num passengers & [int]\\
&& time charged & [float]\\
&& ... & \\ [1.5ex]
\hline\\
\multirow{2}{*}{\textbf{PayPal APIs}}
& \multirow{2}{*}{Drivers}
& email & [string]\\
&& total amount & [float]\\ [1.5ex]
\hline\\
\end{tabular}
\end{center}
\subsection{Car manager}
\begin{center}
\begin{tabular}{ l | c | l r }
\multirow{2}{*}{\textbf{Operation}} & \textbf{Involved} & \textbf{Input/Output} & \multirow{2}{*}{[type]}\\
& \textbf{Users} & \textbf{Parameters} & \\ [1.5ex]
\hline\hline\\
% GENERAL
\multirow{2}{*}{\textit{(General)}}
& \multirow{2}{*}{\textit{(All)}}
& token & [/]\\
&& errors & [/]\\ [1.5ex]
\hline\\
% RIDE REQUEST
\multirow{5}{*}{\textbf{Car Management}}
& \multirow{5}{*}{Drivers}
& car location & [Position]\\
&& battery state & [float]\\
&& num passengers & [int]\\
&& time charged & [float]\\
&& ... & \\ [1.5ex]
\hline\\
\multirow{2}{*}{\textbf{Car Update}}
& \multirow{2}{*}{Drivers}
& new car status & [enum]\\
&& new car location & [Position]\\ [1.5ex]
\hline\\
\end{tabular}
\end{center}
\newpage
\subsection{Application$\leftrightarrow$Problem Manager Interface}
\begin{itemize}
\item [] \textbf{Components}:
\begin{itemize}
\item Problem Manager APIs
\item Mobile App (Client)
\end{itemize}
\item [] \textbf{Communication system}: JAX-RS API (RESTful interface)
\item [] \textbf{Protocols}: standard HTTPS protocol;
\end{itemize}
\subsection{Data manager}
See subsection 2.6.1, \textit{``Application$\leftrightarrow$Database Interface''}. | {
"alphanum_fraction": 0.5576372315,
"avg_line_length": 28.1208053691,
"ext": "tex",
"hexsha": "eeed9d31e05bf9fa47996b2a078b11b2ad688274",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "FrancescoZ/PowerEnJoy",
"max_forks_repo_path": "Workspace/DD/architecturalDesign/component.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "FrancescoZ/PowerEnJoy",
"max_issues_repo_path": "Workspace/DD/architecturalDesign/component.tex",
"max_line_length": 244,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1ced41ed087251661b6c9783753b1912ddc31759",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "FrancescoZ/PowerEnJoy",
"max_stars_repo_path": "Workspace/DD/architecturalDesign/component.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3201,
"size": 8380
} |
\chapter{Isotropic energy probability densities in the laboratory frame}
\label{Sec:isotropic-lab}
The \xendl\ library supports several formats for energy probability densities
which are isotropic in the
laboratory frame. These data are typically used for equilibrium
reactions and for fission neutrons. Because the outgoing
distribution is isotropic, the probability density
$\pi(\Elab', \mulab \mid E)$ in Eq.~(\ref{def_pi}) takes the form
\begin{equation}
\pi(\Elab', \mulab \mid E) = \pi_0(\Elab' \mid E).
\label{isotropic-pi}
\end{equation}
Consequently, for the number-conserving
matrices only the $\ell = 0$ Legendre order,
\begin{equation}
\Inum_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\calE_h'} d\Elab' \, \pi_0(\Elab' \mid E)
\label{InumI4-0}
\end{equation}
needs to be computed,
and Eq.~(\ref{Ien}) for the energy-preserving transfer matrix becomes
\begin{equation}
\Ien_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\calE_h'} d\Elab' \, \pi_0(\Elab' \mid E) \Elab'.
\label{IenI4-0}
\end{equation}
The data $\pi_0(\Elab' \mid E)$ may be given in \xendl\ either as
a table of values or as parameters in a function formula.
Because several of the function formulas for isotropic energy
probability densities are given in terms of incomplete gamma
functions, these are discussed first. This is followed by a presentation
of the functional formulas for isotropic probability densities. Then,
the treatment of tables of~$\pi_0(\Elab' \mid E)$ for isotropic emission
in the laboratory frame is discussed. The chapter closes with
the special treatment of the evaporation of delayed fission neutrons.
\section{Computational aspects of incomplete gamma functions}
Many
of the function formulas for $\pi_0(\Elab' \mid E)$ make use of the
lower incomplete gamma function
\begin{equation}
\gamma(\kappa, x) =
\int_0^x dt\, t^{\kappa - 1} e^{-t}
\label{def-gamma}
\end{equation}
with $\kappa > 0$. The upper incomplete gamma
function is
\begin{equation}
\Gamma(\kappa, x) =
\int_x^\infty dt\, t^{\kappa - 1} e^{-t},
\label{def-Gamma}
\end{equation}
and they are related by
$$
\gamma(\kappa, x) + \Gamma(\kappa, x) = \Gamma(\kappa) =
\int_0^\infty dt\, t^{\kappa - 1} e^{-t}.
$$
In order to reduce the difficulties of computer round-off,
the formula
$$
\int_a^b dt\, t^{\kappa - 1} e^{-t} =
\gamma( \kappa, b ) - \gamma( \kappa, a )
$$
is used when $0 \le a < b \le 1$, and
$$
\int_a^b dt\, t^{\kappa - 1} e^{-t} =
\Gamma( \kappa, a ) - \Gamma( \kappa, b )
$$
is used when $1 \le a < b$. Either form may be used when
$a < 1 < b$.
Note that even though it is possible to write down
exact formulas for $\gamma(\kappa, x)$ when $\kappa$ is a
positive integer, it is better not to use them in the computations.
For example, it is true that
$$
\gamma(2, x) = 1 - (1 + x)e^{-x}.
$$
For values of $x$ near zero, this formula involves subtracting
from 1 a number very close to 1 to get a result close to~$x^2/2$.
This may lead to bad round-off errors in the computer arithmetic,
and it is far better to
use the software for~$\gamma(2, x)$.
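As an illustration of this strategy, the following C++ sketch evaluates the integral of $t^{\kappa-1}e^{-t}$ over $[a,b]$ by choosing between the lower and upper incomplete gamma functions; it assumes the incomplete gamma routines of the Boost.Math library, but any library providing $\gamma(\kappa,x)$ and $\Gamma(\kappa,x)$ would serve equally well.
\begin{verbatim}
// Illustrative sketch only: integral of t^{kappa-1} e^{-t} over [a, b] with
// 0 <= a < b, choosing the form with the smaller round-off as described above.
// Boost.Math is assumed here; any incomplete gamma library would serve.
#include <boost/math/special_functions/gamma.hpp>

double incompleteGammaIntegral(double kappa, double a, double b) {
    using boost::math::tgamma_lower;   // lower incomplete gamma  gamma(kappa, x)
    using boost::math::tgamma;         // upper incomplete gamma  Gamma(kappa, x)
    if (b <= 1.0) {
        // Both endpoints at most 1: difference of lower incomplete gammas.
        return tgamma_lower(kappa, b) - tgamma_lower(kappa, a);
    }
    if (a >= 1.0) {
        // Both endpoints at least 1: difference of upper incomplete gammas.
        return tgamma(kappa, a) - tgamma(kappa, b);
    }
    // a < 1 < b: either form may be used; take the lower one here.
    return tgamma_lower(kappa, b) - tgamma_lower(kappa, a);
}
\end{verbatim}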
\section{Functional formulas for isotropic probability densities}
The functional formulas used in \xendl\ for energy
probability densities~$\pi_0(\Elab' \mid E)$ are the evaporation model,
the Maxwell model, the Watt model, and the Madland-Nix model.
These models are discussed in turn. For all of these models the
energy of the outgoing particle is in the laboratory frame.
\subsection{Evaporation model}
For the evaporation model the formula is
\begin{equation}
\pi_0(\Elab' \mid E) = C \Elab' \expon{- \frac{\Elab'}{\Theta(E)}}
\label{evaporationF}
\end{equation}
with $0 \le \Elab' \le E - U$. The value of $C$ in Eq.~(\ref{evaporationF})
is chosen so that
$$
\int_0^{E - U} d\Elab' \, \pi_0(\Elab' \mid E) = 1.
$$
That is,
$$
C = \frac{1}{\Theta^2 \gamma(2, (E - U)/\Theta)}.
$$
The data consist of the energy of the reaction $U$ and pairs of
values $\{E, \Theta(E)\}$. The 1-dimensional interpolation methods
of Section~\ref{Sec:1d-interp} are used to
determine the value of $\Theta$ for intermediate values of the
energy $E$ of the incident particle.
According to the comment on incomplete gamma functions above,
for the calculation of $\Inum_{g,h,0}$ on an outgoing energy bin
$E_0 \le \Elab' \le E_1$, the expression
$$
\int_{E_0}^{E_1} d\Elab' \, \pi_0(\Elab' \mid E) =
C\Theta^2[\gamma(2, E_1/\Theta) - \gamma(2, E_0/\Theta)]
$$
is used when $E_0 \le \Theta$, and
$$
\int_{E_0}^{E_1} d\Elab' \, \pi_0(\Elab' \mid E) =
C\Theta^2[\Gamma(2, E_0/\Theta) - \Gamma(2, E_1/\Theta)]
$$
is used when $E_0 > \Theta$. Analogously, for the
calculation of $\Ien_{g,h,0}$
$$
\int_{E_0}^{E_1} d\Elab' \, \Elab' \pi_0(\Elab' \mid E) =
C\Theta^3[\gamma(3, E_1/\Theta) - \gamma(3, E_0/ \Theta)]
$$
is used when $E_0 \le \Theta$, and
$$
\int_{E_0}^{E_1} d\Elab' \, \Elab' \pi_0(\Elab' \mid E) =
C\Theta^3[\Gamma(3, E_0/\Theta) - \Gamma(3, E_1/\Theta)]
$$
is used otherwise.
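As an illustration, the outgoing-energy factor of $\Inum_{g,h,0}$ for a single
bin may be sketched in Python as follows; this reuses the hypothetical helper
\texttt{incomplete\_gamma\_integral} from the sketch above and is not the
\gettransfer\ implementation. The factor for $\Ien_{g,h,0}$ is obtained in the
same way with $\kappa = 3$ and $\Theta^3$ in the numerator integral.
\begin{verbatim}
# Illustrative sketch only.  Probability carried by the outgoing-energy
# bin [E0, E1] for the evaporation model at incident energy E.
def evaporation_bin_probability(E0, E1, E, Theta, U):
    E_max = E - U                  # largest outgoing energy
    E1 = min(E1, E_max)
    if E1 <= E0:
        return 0.0
    # C = 1 / (Theta**2 * gamma(2, (E - U)/Theta))
    norm = Theta**2 * incomplete_gamma_integral(2.0, 0.0, E_max / Theta)
    bin_part = Theta**2 * incomplete_gamma_integral(2.0, E0 / Theta,
                                                    E1 / Theta)
    return bin_part / norm
\end{verbatim}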
\subsubsection{Input file data for the evaporation model}
The process identifier in Section~\ref{data-model} is\\
\Input{Process: evaporation spectrum}{}\\
These data are always in the laboratory frame,\\
\Input{Product Frame: lab}{}
One item of model-dependent data in Section~\ref{model-info}
is the value of $U$ used in defining the range of outgoing
energies $\Elab'$ in Eq.~(\ref{evaporationF}), and it is given by\\
\Input{U:}{$U$}\\
The other input data are the values of $\Theta(E)$ in
Eq.~(\ref{evaporationF}) depending on the incident energy~$E$.
All of these energies, $U$, $E$, and $\Theta(E)$, must be in the same
units as the energy bins in Sections~\ref{Ein-bins} and~\ref{Eout-bins}.
The format for these data is\\
\Input{Theta: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, \Theta(E)\}$.
The interpolation flag is one of those for simple lists as in
Section~\ref{interp-flags-list}.
For example, in units of MeV one may have\\
\Input{U: 11.6890}{}\\
\Input{Theta: n = 2}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 12.0 1.04135}{}\\
\Input{ 20.0 1.04135}{}
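With the hypothetical sketch of the previous subsection and this example data,
the content of an outgoing bin $[E_0, E_1]$ at $E = 12$ MeV would be evaluated
as \texttt{evaporation\_bin\_probability(E0, E1, 12.0, 1.04135, 11.6890)};
this is purely illustrative.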
\subsection{Maxwell model}
The formula for the Maxwell model is
\begin{equation}
\pi_0(\Elab' \mid E) = C \sqrt{\Elab' } \, \expon{- \frac{\Elab' }{\Theta(E)}}
\label{MaxwellF}
\end{equation}
for $0 \le \Elab' \le E - U$. This model is often used for fission neutrons.
The value of $C$ in Eq.~(\ref{MaxwellF}) is given by
$$
C = \frac{1}{\Theta^{3/2} \gamma(3/2, (E - U)/\Theta)}.
$$
Because of round-off problems with small values of $x$,
it is unwise to use the mathematically equivalent formula
$$
\gamma(3/2, x) =
\frac{\sqrt{\pi}}{2}\, \erf{\sqrt{x}} - \sqrt{x}\,e^{-x}.
$$
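The cancellation is easy to see numerically; in the short Python sketch below
(illustrative only), the two routes agree to only roughly half of the
double-precision digits at small~$x$, because the erf-based form subtracts two
nearly equal quantities of order~$\sqrt{x}$.
\begin{verbatim}
# Illustrative sketch only: round-off in the erf-based formula for
# gamma(3/2, x) at small x.
import math
from scipy.special import gamma, gammainc

x = 1.0e-8
series_route = gamma(1.5) * gammainc(1.5, x)   # ~ (2/3) * x**1.5
erf_route = (math.sqrt(math.pi) / 2.0 * math.erf(math.sqrt(x))
             - math.sqrt(x) * math.exp(-x))    # severe cancellation
print(series_route, erf_route)
\end{verbatim}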
The data consist of the energy of the reaction $U$ and pairs of
values $\{E, \Theta(E)\}$. The parameter $\Theta$ is interpolated
by the methods of Section~\ref{Sec:1d-interp} to obtain intermediate values.
Depending on the value of $E_0/\Theta$,
the calculation of $\Inum_{g,h,0}$ on an outgoing energy bin
$E_0 \le \Elab' \le E_1$ uses the expression
$$
\int_{E_0}^{E_1} d\Elab' \, \pi_0(\Elab' \mid E) =
C\Theta^{3/2}[\gamma({3/2}, E_1/\Theta) - \gamma({3/2}, E_0/\Theta)]
$$
or
$$
\int_{E_0}^{E_1} d\Elab' \, \pi_0(\Elab' \mid E) =
C\Theta^{3/2}[\Gamma({3/2}, E_0/\Theta) - \Gamma({3/2}, E_1/\Theta)].
$$
Analogously, the calculation of $\Ien_{g,h,0}$ uses either
$$
\int_{E_0}^{E_1} d\Elab' \, \Elab' \pi_0(\Elab' \mid E) =
C\Theta^{5/2}[\gamma({5/2}, E_1/\Theta) - \gamma({5/2}, E_0/\Theta)]
$$
or
$$
\int_{E_0}^{E_1} d\Elab' \, \Elab' \pi_0(\Elab' \mid E) =
C\Theta^{5/2}[\Gamma({5/2}, E_0/\Theta) - \Gamma({5/2}, E_1/\Theta)].
$$
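A sketch of the outgoing-energy bin factor is the Maxwell analogue of the
evaporation example given earlier; it again relies on the hypothetical
\texttt{incomplete\_gamma\_integral} helper and is only an illustration.
\begin{verbatim}
# Illustrative sketch only.  Same structure as the evaporation case,
# with kappa = 3/2 for Inum (and kappa = 5/2 for Ien).
def maxwell_bin_probability(E0, E1, E, Theta, U):
    E_max = E - U
    E1 = min(E1, E_max)
    if E1 <= E0:
        return 0.0
    norm = Theta**1.5 * incomplete_gamma_integral(1.5, 0.0,
                                                  E_max / Theta)
    bin_part = Theta**1.5 * incomplete_gamma_integral(1.5, E0 / Theta,
                                                      E1 / Theta)
    return bin_part / norm
\end{verbatim}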
\subsubsection{Input file data for the Maxwell model}
The process identifier in Section~\ref{data-model} is\\
\Input{Process: Maxwell spectrum}{}\\
Again, this data is in the laboratory frame,\\
\Input{Product Frame: lab}{}
One item of model-dependent data in Section~\ref{model-info}
is the value of $U$ used in defining the range of outgoing
energies $\Elab'$ in Eq.~(\ref{MaxwellF}), and it is given by\\
\Input{U:}{$U$}\\
The other input data are the values of $\Theta(E)$ in
Eq.~(\ref{MaxwellF}) depending on the incident energy~$E$.
These energies, $U$, $E$, and $\Theta(E)$, must all be in the same
units as the energy bins in Sections~\ref{Ein-bins} and~\ref{Eout-bins}.
The format for such data is\\
\Input{Theta: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, \Theta(E)\}$.
The interpolation flag is one of those for simple lists as in
Section~\ref{interp-flags-list}.
For example, in units of MeV one may have\\
\Input{U: -20}{}\\
\Input{Theta: n = 2}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 1.0e-11 1.28}{}\\
\Input{ 20.0 1.28}{}
\subsection{Watt model}
Another model sometimes used for fission neutrons in \xendl\ is the Watt
formula
\begin{equation}
\pi_0(\Elab' \mid E) = C \sinh{\sqrt{b\Elab' }}\, \expon{- \frac{\Elab' }{a}}
\label{WattF}
\end{equation}
for $0 \le \Elab' \le E - U$.
The value of $C$ in Eq.~(\ref{WattF})
is given by
$$
\frac{1}{C} =
\frac{az\sqrt{\pi}}{2}\, \expon{z^2}
\left(
\erf{y - z} + \erf{y + z}
\right) -
a \expon{-y^2} \sinh{\sqrt{b(E - U)}}
$$
with $y = \sqrt{(E - U)/a}$ and $z = \sqrt{ab /4}$.
The data consist of the energy of the reaction $U$ and pairs of
values $\{E, a(E)\}$ and $\{E, b(E)\}$. For intermediate incident
energies $E$, the parameters $b$ and~$a$ are interpolated by
the methods of Section~\ref{Sec:1d-interp}.
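For reference, the normalization may be evaluated directly from the expression
for $1/C$ above, with $y$ and $z$ as defined there; the following Python
sketch is illustrative only.
\begin{verbatim}
# Illustrative sketch only: 1/C for the Watt spectrum on [0, E - U].
import math

def watt_inverse_norm(E, U, a, b):
    M = E - U
    y = math.sqrt(M / a)
    z = math.sqrt(a * b / 4.0)
    return (a * z * math.sqrt(math.pi) / 2.0 * math.exp(z * z)
            * (math.erf(y - z) + math.erf(y + z))
            - a * math.exp(-y * y) * math.sinh(math.sqrt(b * M)))
\end{verbatim}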
\subsubsection{Input file data for the Watt model}
The process identifier in Section~\ref{data-model} is\\
\Input{Process: Watt spectrum}{}\\
This data is in the laboratory frame,\\
\Input{Product Frame: lab}{}
One item of model-dependent data in Section~\ref{model-info}
is the value of $U$ used in defining the range of outgoing
energies $E$ in Eq.~(\ref{WattF}), and it is given by\\
\Input{U:}{$U$}\\
The other input data are the values of $a(E)$ and $b(E)$ in
Eq.~(\ref{WattF}).
The energies, $U$, $E$, and $a(E)$, must be in the same
units as the energy bins in Sections~\ref{Ein-bins} and~\ref{Eout-bins},
and the units for $b(E)$ are the reciprocal of these units.
The format for these data is\\
\Input{a: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, a(E)\}$ and\\
\Input{b: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, b(E)\}$.
The interpolation flags for $a$ and $b$ are those for simple lists as in
Section~\ref{interp-flags-list}.
For example, with energies in MeV one may have\\
\Input{U: -10}{}\\
\Input{a: n = 11}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 1.000000e-11 9.770000e-01}{}\\
\Input{ 1.500000e+00 9.770000e-01}{}\\
\Input{}{ $\cdots$}\\
\Input{ 3.000000e+01 1.060000e+00}{}\\
\Input{b: n = 11}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 1.000000e-11 2.546000e+00}{}\\
\Input{ 1.500000e+00 2.546000e+00}{}\\
\Input{}{ $\cdots$}\\
\Input{ 3.000000e+01 2.620000e+00}{}
\subsection{Madland-Nix model}\label{Sec:Madland}
The Madland-Nix model~\cite{Madland} for prompt fission neutrons uses
the formula
\begin{equation}
\pi_0(\Elab' \mid E) = \frac{C}{2}\, [g(\Elab' , E_{FL}) + g(\Elab' , E_{FH})]
\label{Madland-NixF}
\end{equation}
for
\begin{equation}
0 \le \Elab' \le \texttt{maxEout},
\label{Madland-Nix-E-range}
\end{equation}
where \texttt{maxEout} is one of the input parameters.
Note that the range of outgoing energies Eq.~(\ref{Madland-Nix-E-range})
is independent of the incident energy.
In fact, the \ENDF\ manual~\cite{ENDFB} gives no way for the data to specify the
maximum outgoing energy for the Madland-Nix model.
In Eq.~(\ref{Madland-NixF}) $E_{FL}$ is the average kinetic energy of the light fission
fragments, and $E_{FH}$ is the average kinetic energy of the heavy fission
fragments. The function $g(\Elab' , E_F)$ in Eq.~(\ref{Madland-NixF}) is given in terms
of the parameters $T_m$ and
\begin{equation}
u_1 = \frac{(\sqrt{\Elab' } - \sqrt{E_F})^2}{T_m}, \quad
u_2 = \frac{(\sqrt{\Elab' } + \sqrt{E_F})^2}{T_m}
\label{Madland-Nixu}
\end{equation}
by the formula
\begin{equation}
g(\Elab' , E_F) = \frac{1}{3\sqrt{E_F T_m}}
\left[
u_2^{3/2}E_1(u_2) - u_1^{3/2}E_1(u_1) -
\Gamma(3/2, u_2) + \Gamma(3/2, u_1)
\right],
\label{Madland-Nixg}
\end{equation}
where $E_1$ denotes the exponential integral
$$
E_1(x) = \int_x^\infty dt\, \frac{1}{t}e^{-t}.
$$
It is clear from the definitions that
$$
E_1(x) = \Gamma(0, x),
$$
but software to compute $\Gamma(\kappa, x)$ generally requires
that $\kappa$ be positive.
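For illustration, $g(\Elab', E_F)$ can be evaluated with standard
special-function software as in the Python sketch below (not the
\gettransfer\ implementation); \texttt{exp1} supplies $E_1$ and the
regularized upper incomplete gamma supplies $\Gamma(3/2, \cdot)$.
\begin{verbatim}
# Illustrative sketch only: g(E', E_F) of the Madland-Nix model.
import math
from scipy.special import exp1, gamma, gammaincc

def madland_nix_g(Eout, E_F, T_m):
    u1 = (math.sqrt(Eout) - math.sqrt(E_F))**2 / T_m
    u2 = (math.sqrt(Eout) + math.sqrt(E_F))**2 / T_m
    Gamma_32 = lambda x: gamma(1.5) * gammaincc(1.5, x)  # Gamma(3/2, x)
    # u1**1.5 * E1(u1) -> 0 as u1 -> 0 (E' -> E_F), so guard that case
    u1_term = 0.0 if u1 == 0.0 else u1**1.5 * exp1(u1)
    return (u2**1.5 * exp1(u2) - u1_term
            - Gamma_32(u2) + Gamma_32(u1)) / (3.0 * math.sqrt(E_F * T_m))
\end{verbatim}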
The data for the Madland-Nix model contains the average energies
$E_{FL}$ and $E_{FH}$ as well as pairs of values $\{E, T_m(E)\}$.
The interpolation rule for $T_m$ is also given.
If the range of outgoing energies is taken to be $0 \le \Elab' < \infty$
in Eq.~(\ref{Madland-NixF}), then $C = 1$. For other ranges of $\Elab' $
and for computation of $\Inum_{g,h,0}$, it follows from Eq.~(\ref{Madland-Nixg})
that it is necessary to compute integrals
\begin{equation}
\calG_i( a, b ) = \int_a^b d\Elab' \, u_i^{3/2}E_1(u_i)
\label{Madland-Nix-u-integral}
\end{equation}
and
\begin{equation}
\calH_i( a, b ) = \int_a^b d\Elab' \, \Gamma(3/2, u_i)
\label{Madland-Nix-Gamma-integral}
\end{equation}
with $i = 1$, 2.
The values of the integrals Eqs.~(\ref{Madland-Nix-u-integral})
and~(\ref{Madland-Nix-Gamma-integral}) are conveniently expressed
in terms of the parameters
\begin{equation}
\alpha = \sqrt{T_m}, \quad
\beta = \sqrt{E_F},
\label{Madland-Nix-alpha-beta}
\end{equation}
\begin{equation}
A = \frac{(\sqrt{a} + \beta)^2}{\alpha^2}, \quad
B = \frac{(\sqrt{b} + \beta)^2}{\alpha^2},
\label{Madland-Nix-A-B}
\end{equation}
and
\begin{equation}
A' = \frac{( \beta - \sqrt{a})^2}{\alpha^2}, \quad
B' = \frac{(\sqrt{b} - \beta)^2}{\alpha^2}.
\label{Madland-Nix-A-B-prime}
\end{equation}
One might think it sufficient to calculate
$$
\calG_i( 0, b ) \quad \text{and} \quad
\calH_i( 0, b )
$$
in Eqs.~(\ref{Madland-Nix-u-integral}) and~(\ref{Madland-Nix-Gamma-integral})
and to use
\begin{equation*}
\begin{split}
\calG_i( a, b ) &= \calG_i( 0, b ) - \calG_i( 0, a ), \\
\calH_i( a, b ) &= \calH_i( 0, b ) - \calH_i( 0, a )
\end{split}
\end{equation*}
for $i = 1$, 2. In fact, this approach is suitable only for $i = 2$.
The reason for the difficulty is seen from Eqs.~(\ref{Madland-Nixu})
and~(\ref{Madland-Nix-alpha-beta}), in that
\begin{equation}
u_1^{3/2} = \begin{cases}
(\beta - \sqrt{\Elab' })^3 / \alpha^3 \quad &\text{for $0 \le \Elab' \le \beta^2$}, \\
(\sqrt{\Elab' } - \beta)^3 / \alpha^3 \quad &\text{for $\Elab' > \beta^2$}.
\end{cases}
\label{Madland-Nix-u1}
\end{equation}
Consequently, the integrals used to compute $\calG_i( a, b )$
and~$\calH_i( a, b )$ in Eqs.~(\ref{Madland-Nix-u-integral})
and (\ref{Madland-Nix-Gamma-integral}) are evaluated as
\begin{equation}
\calG_1( a, \beta^2 ) =
\frac{\alpha \beta}{2} \, \gamma \left( 2, A' \right)
-\frac{2 \alpha^2}{5} \, \gamma \left( \frac{5}{2}, A' \right) +
\left[
\frac{2 \alpha \sqrt{A'}}{5} - \frac{\beta}{2}
\right] \alpha {A'}^2 E_1( A' )
\quad \text{for $0 \le a < \beta^2$},
\label{Madland-Nix-G1a}
\end{equation}
\begin{equation}
\calG_1( \beta^2, b ) =
\frac{\alpha \beta}{2} \, \gamma \left( 2, B' \right)
+ \frac{2 \alpha^2}{5} \, \gamma \left( \frac{5}{2}, B' \right) +
\left[
\frac{\beta}{2} + \frac{2 \alpha \sqrt{B'}}{5}
\right] \alpha {B'}^2 E_1( B' )
\quad \text{for $b > \beta^2$},
\label{Madland-Nix-G1b}
\end{equation}
\begin{equation}
\begin{split}
\calG_2( 0, b ) = &
\frac{2 \alpha^2}{5} \, \gamma \left( \frac{5}{2}, B \right) -
\frac{\alpha \beta}{2} \, \gamma \left( 2, B \right) -
\frac{\beta^5}{10 \alpha^3} \,e^{-B} + {}\\
& \left[
\frac{2 \alpha^2}{5} B^{5/2} - \frac{\alpha \beta}{2} {B}^2 +
\frac{ \beta^5}{10 \alpha^3} \right] E_1( B) - C_1
\quad \text{for $b \ge 0$},
\end{split}
\label{Madland-Nix-G2}
\end{equation}
\begin{equation}
\calH_1(a, \beta^2) = 2 \alpha \beta \, \gamma \left( 2, A' \right) -
\alpha^2 \, \gamma \left( \frac{5}{2}, A' \right) +
(\beta^2 - a) \, \Gamma\left( \frac{3}{2}, A' \right)
\quad \text{for $0 \le a < \beta^2$},
\label{Madland-Nix-H1a}
\end{equation}
\begin{equation}
\calH_1(\beta^2, b) = 2 \alpha \beta \, \gamma \left( 2, B' \right) +
\alpha^2 \, \gamma \left( \frac{5}{2}, B' \right) +
(b - \beta^2) \, \Gamma\left( \frac{3}{2}, B' \right)
\quad \text{for $b \ge \beta^2$},
\label{Madland-Nix-H1b}
\end{equation}
and
\begin{equation}
\calH_2(0, b) = \alpha^2 \, \gamma \left( \frac{5}{2}, B \right) -
2 \alpha\beta \, \gamma \left( 2, B \right) +
\beta^2 \, \gamma \left( \frac{3}{2}, B \right) +
b \, \Gamma \left( \frac{3}{2}, B \right) - C_2
\quad \text{for $b > 0$}.
\label{Madland-Nix-H2}
\end{equation}
In the relations for $\calG_2(0, b)$ and $\calH_2(0, b)$ above, $C_1$
and~$C_2$ are constants of integration.
\begin{figure}
\input{fig5-1}
\end{figure}
In order to illustrate how the above integration formulas may be
derived, consider the case of Eq.~(\ref{Madland-Nix-G1b})
for $\calG_1( \beta^2, b )$ defined in
Eq.~(\ref{Madland-Nix-u-integral}) with $u_1$ as in
Eq.~(\ref{Madland-Nix-u1}) and with~$b > \beta^2$. Substitution
of the definition of the exponential integral~$E_1$ gives the double
integral
$$
\calG_1( \beta^2, b ) = \int_{\beta^2}^b d\Elab' \,
u_1^{3/2} \int_{u_1}^\infty dt \, \frac{1}{t} \, e^{-t}.
$$
The region of integration for this integral is the union of the
two shaded domains in Figure~\ref{Fig:Madland-Nix}.
The integral over the darker shaded region of Figure~\ref{Fig:Madland-Nix} is
$$
J_{11} = \int_{\beta^2}^b d\Elab' \, u_1^{3/2}\int_{u_1}^{B'} dt\, \frac { e^{-t}}{t}.
$$
Reversal of the order of integration transforms this integral to
$$
J_{11} = \int_0^{B'} dt\, \frac{1}{t} \,e^{-t}
\int_{\beta^2}^{(\alpha\sqrt{t} + \beta)^2}
d\Elab' \, u_1^{3/2}.
$$
Under the substitution
\begin{equation*}
\Elab' = (\alpha \sqrt{u_1} + \beta)^2,
% \label{u_for_E}
\end{equation*}
the inner integral takes the form
$$
\int_{\beta^2}^{(\alpha\sqrt{t} + \beta)^2}
d\Elab' \, u_1^{3/2} =
\int_0^t du_1 \, u_1^{3/2}\left(
\alpha^2 + \frac{\alpha\beta}{\sqrt{u_1}}
\right) =
\frac{2\alpha^2}{5}t^{5/2} + \frac{\alpha\beta}{2}t^2.
$$
Thus, it follows that the integral over the dark shaded region in
Figure~\ref{Fig:Madland-Nix} is
\begin{equation*}
J_{11} =
\frac{2\alpha^2}{5} \, \gamma(5/2, B') + \frac{ \alpha\beta}{2} \, \gamma(2, B').
% \label{intJ11}
\end{equation*}
This relation gives the first two terms on the right-hand side
of Eq.~(\ref{Madland-Nix-G1b}).
The other terms on the right-hand side of Eq.~(\ref{Madland-Nix-G1b})
result from evaluation of the integral over the light shaded region in
Figure~\ref{Fig:Madland-Nix},
$$
J_{12} = \int_{\beta^2}^b d\Elab' \, u_1^{3/2} \int_{B'}^\infty dt\, \frac {e^{-t}}{t}
= \int_{B'}^\infty dt\, \frac{1}{t} \, e^{-t}
\int_{\beta^2}^{b}
d\Elab' \, u_1^{3/2}.
$$
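Closed forms such as Eqs.~(\ref{Madland-Nix-G1a})--(\ref{Madland-Nix-H2}) are
easy to get wrong, so an independent check against direct numerical quadrature
of the defining integral Eq.~(\ref{Madland-Nix-u-integral}) is worthwhile. The
Python sketch below (illustrative only, not part of the \gettransfer\ code)
evaluates $\calG_1(\beta^2, b)$ in that way.
\begin{verbatim}
# Illustrative sketch only: direct quadrature of G_1(beta^2, b),
# usable as a cross-check of the closed-form expressions.
import math
from scipy.integrate import quad
from scipy.special import exp1

def G1_quadrature(b, E_F, T_m):
    alpha, beta = math.sqrt(T_m), math.sqrt(E_F)
    def integrand(Eout):
        u1 = (math.sqrt(Eout) - beta)**2 / alpha**2
        return 0.0 if u1 == 0.0 else u1**1.5 * exp1(u1)
    value, _ = quad(integrand, beta**2, b)
    return value
\end{verbatim}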
\subsubsection{Input file data for the Madland-Nix model}
The process identifier in Section~\ref{data-model} is\\
\Input{Process: Madland-Nix spectrum}{}\\
This data is in the laboratory frame,\\
\Input{Product Frame: lab}{}
The model-dependent data in Section~\ref{model-info}
contains values of $E_{FL}$, the average kinetic energy of the
light fission fragment and $E_{FH}$, the average kinetic energy of the
heavy fission fragment. These parameters are given by\\
\Input{EFL:}{$E_{FL}$}\\
\Input{EFH:}{$E_{FH}$}\\
The user must also specify a maximum outgoing energy
\texttt{maxEout} for use in Eq.~(\ref{Madland-Nix-E-range}).
The other input data are the values of $T_m$ as a function of
incident energy in
Eq.~(\ref{Madland-NixF}). The format for these data is\\
\Input{TM: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, T_m(E)\}$.
The interpolation flag is one of those for simple lists as in
Section~\ref{interp-flags-list}.
The energies, $E_{FL}$, $E_{FH}$, $E$, and $T_m(E)$, must be in the same
units as the energy bins in Sections~\ref{Ein-bins} and~\ref{Eout-bins}.
For example, in MeV units one may have\\
\Input{EFL: 1.029979}{}\\
\Input{EFH: 0.5467297}{}\\
\Input{maxEout: 60}{}\\
\Input{TM: n = 38}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 1.0000000e-11 1.0920640e+00}{}\\
\Input{ 5.0000010e-01 1.1014830e+00}{}\\
\Input{}{ $\cdots$}\\
\Input{ 2.0000000e+01 1.1292690e+00}{}
\section{Energy probability density tables}\label{Sec:isotropicTables}
Another form of the isotropic probability density data $\pi_0(\Elab' \mid E)$ of
Eq.~(\ref{isotropic-pi}) in \xendl\ is a table of values. The computation
of transfer matrices for such data given in the laboratory frame is
discussed here. For data in the center-of-mass frame, this is a
special case of Legendre expansions discussed in Section~\ref{Ch:Legendre-cm}
with Legendre order zero.
For given
incident energies $E_k$, the data consist of pairs
$\{E_{k,j}', \pi_0(E_{k,j}' \mid E_k)\}$ as in Eq.~(\ref{EPtable}).
For such tabular data,
computation of the integrals $\Inum_{g,h,0}$ in Eq.~(\ref{InumI4-0})
and $\Ien_{g,h,0}$ in Eq.~(\ref{IenI4-0}) depends on the type
of interpolation used between
different incident energies.
The effects of the unit-base map Eq.~(\ref{unit-base-map}) are
discussed here. The considerations are the same, whether the
unit-base map is used alone or as a component of interpolation
by cumulative points.
After the unit-base transformation Eq.~(\ref{unit-base-map})
the integrals Eqs.~(\ref{InumI4-0}) and~(\ref{IenI4-0}) take the form
\begin{equation}
\Inum_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\widehat\calE_h'} d\widehat \Elab' \,
\widehat\pi_0(\widehat \Elab' \mid E)
\label{InumhatI4-0}
\end{equation}
and
\begin{equation}
\Ien_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\widehat\calE_h'} d\widehat \Elab' \,
\widehat\pi_0(\widehat \Elab' \mid E) \Elab' .
\label{IenhatI4-0}
\end{equation}
In these integrals $\widehat\calE_h'$ denotes the result of mapping the
outgoing energy bin $\calE_h'$ with the transformation Eq.~(\ref{unit-base-map}).
Furthermore, $\Elab' $ in Eq.~(\ref{IenhatI4-0}) is to be obtained from $\widehat \Elab' $
using the inverse unit-base mapping Eq.~(\ref{unitbaseInvert}).
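For orientation, the unit-base transformation referred to here is assumed to
have the usual form sketched below in Python (this is an assumption about
Eqs.~(\ref{unit-base-map}) and~(\ref{unitbaseInvert}), which are given earlier
in this document, not a new definition); the transformed probability density
carries the Jacobian of this map.
\begin{verbatim}
# Illustrative sketch only; assumes the usual form of the unit-base
# map and its inverse for an outgoing-energy range [Eout_min, Eout_max].
def to_unit_base(Eout, Eout_min, Eout_max):
    return (Eout - Eout_min) / (Eout_max - Eout_min)

def from_unit_base(Eout_hat, Eout_min, Eout_max):
    return Eout_min + Eout_hat * (Eout_max - Eout_min)
\end{verbatim}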
\begin{figure}
\input{fig5-2}
\end{figure}
Figure~\ref{Fig:unit-base-region} illustrates the effect of
the unit-base map Eq.~(\ref{unit-base-map}).
For incident energies $E = E_{k-1}$ and~$E = E_k$,
1-dimensional interpolation is used
to produce data at a common set of unit-base outgoing energies
$\{\widehat E_j'\}$. In the left-hand
portion of Figure~\ref{Fig:unit-base-region}, suppose that
probability densities $\pi_0(\Elab' \mid E)$ are given at incident energies
$E = E_{k-1}$ and~$E = E_k$ and at unit-base outgoing energies
$\widehat E_{j-1}'$ and $\widehat E_j'$.
Then for this set of data, the range of
integration over $E$ in Eqs.~(\ref{InumhatI4-0}) or (\ref{IenhatI4-0}) requires
both that $E_{k-1} < E < E_k$ and that $E$ be in the bin~$\calE_g$. The
outgoing energy~$\Elab' $ is required to be in the bin~$\calE_h'$ and to satisfy
the constraint $\widehat E_{j-1}' < \widehat \Elab' < \widehat E_j'$.
The right-hand portion of Figure~\ref{Fig:unit-base-region}
shows a rectangle with vertices at $E = E_{k-1}$ and~$E = E_k$
and at $\widehat \Elab' = \widehat E_{j-1}'$ and~$\widehat \Elab' = \widehat E_j'$,
and data values $\widehat\pi_\ell(\widehat \Elab' \mid E)$ are given at
these corners after any required interpolation in outgoing energy.
The values of $\widehat\pi_\ell(\widehat \Elab' \mid E)$
interior to this rectangle are determined by interpolation.
The contribution of this portion of the data to the transfer matrix is obtained
by integrating Eqs.~(\ref{InumhatI4-0}) or (\ref{IenhatI4-0}) over the shaded
region in Figure~\ref{Fig:unit-base-region}.
\subsection{Input of isotropic energy probability tables}
\label{Sec:isotropic-table-lab}
The process identifier in Section~\ref{data-model} is\\
\Input{Process: isotropic energy probability table}{}\\
This option permits either the center-of-mass or the laboratory frame.
For data in the laboratory frame, the command in
Section~\ref{Reference-frame} is\\
\Input{Product Frame: lab}{}
The data as in Section~\ref{model-info}
for tables of isotropic energy probability densities is entered
in the format\\
\Input{EEpPData: n = $K$}{}\\
\Input{Incident energy interpolation:}{probability interpolation flag}\\
\Input{Outgoing energy interpolation:}{list interpolation flag}\\
The interpolation flag for incident energy is one of those used for
probability density tables in Section~\ref{interp-flags-probability},
and that for outgoing energy is one for simple lists.
This information is followed
by $K$ sections of the form\\
\Input{Ein: $E$:}{$\texttt{n} = J$}\\
with $J$ pairs of values of $\Elab'$ and $\pi_E(\Elab' \mid E)$.
An example with energies in eV of the model-dependent section of the input file for
isotropic energy probability density tables is\\
\Input{EEpPData: n = 4}{}\\
\Input{Incident energy interpolation: lin-lin unitbase}{}\\
\Input{Outgoing energy interpolation: flat}{}\\
\Input{ Ein: 1.722580000000e+07 : n = 34}{}\\
\Input{\indent 0.000000000000e+00 0.000000000000e+00}{}\\
\Input{\indent 1.000000000000e-08 0.000000000000e+00}{}\\
\Input{\indent 1.778280000000e-08 2.766140000000e-07}{}\\
\Input{\indent 3.162280000000e-08 4.918960000000e-07}{}\\
\Input{\indent }{ $\cdots$}\\
\Input{\indent 5.623410000000e-01 8.396540000000e-01}{}\\
\Input{\indent 1.000000000000e+00 0.000000000000e+00}{}\\
\Input{ $\cdots$}{}\\
\Input{ Ein: 2.000000000000e+07 : n = 38}{}\\
\Input{\indent 0.000000000000e+00 0.000000000000e+00}{}\\
\Input{\indent 7.500000000000e-03 0.000000000000e+00}{}\\
\Input{\indent 1.333710000000e-02 4.877750000000e-14}{}\\
\Input{\indent 2.371710000000e-02 8.674000000000e-14}{}\\
\Input{\indent }{ $\cdots$}\\
\Input{\indent 2.250000000000e+06 4.413810000000e-08}{}\\
\Input{\indent 2.750000000000e+06 0.000000000000e+00}{}\\
Note that for these data it is not clear what should be used as the minimum outgoing energy.
In particular for incident energy $E_0 = 1.72258 \times 10^7$ eV,
it is not clear whether it is more reasonable to set $\Eminzero' = 0$ or $\Eminzero' = 1.77828
\times 10^{-8}$ eV in the unit-base interpolation. The \gettransfer\ code uses $\Eminzero' = 0$,
to be consistent with Eq.~(\ref{Eout-ranges}).
\section{General evaporation of delayed fission neutrons}
For some fissionable targets, the energy spectra data for delayed
fission neutrons is represented in \xendl\ in the form
\begin{equation}
\pi_0(\Elab' \mid E) = g\left(\frac{\Elab'}{\Theta(E)} \right).
\label{general-evaporation}
\end{equation}
For this model, values of $\Theta$ are given as a function of~$E$,
and values of $g$ as a function of $x = \Elab'/\Theta(E)$. In fact, all of the
general evaporation data in \xendl\ have $\Theta$ constant,
and the \gettransfer\ code requires that $\Theta$ be constant.
The isotropic probability density $\pi_0(\Elab' \mid E)$ in
Eq.~(\ref{general-evaporation}) is then independent of~$E$.
In this case, the integrals
$\Inum_{g,h,0}$ in Eq.~(\ref{InumI4-0}) and $\Ien_{g,h,0}$ in Eq.~(\ref{IenI4-0})
needed for the transfer matrix become simply products of 1-dimensional
integrals
$$
\Inum_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\calE_h'} d\Elab' \, g(\Elab' /\Theta )
$$
and
$$
\Ien_{g,h,0} =
\int_{\calE_g} dE \, \sigma ( E ) M(E) w(E) \widetilde \phi_0(E)
\int_{\calE_h'} d\Elab' \, g(\Elab' /\Theta ) \Elab' .
$$
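Because the outgoing-energy factor is independent of the incident energy, it
can be evaluated once per outgoing bin directly from the tabulated
$\{x, g(x)\}$ values. The Python sketch below (illustrative only, not the
\gettransfer\ implementation) does this for lin-lin interpolation, for which
the trapezoid rule on the table breakpoints is exact; whatever normalization
$g$ carries is taken as given by the data.
\begin{verbatim}
# Illustrative sketch only: outgoing-energy factor over a bin [E0, E1]
# for the general evaporation model, from a lin-lin table {x_i, g(x_i)}
# with x = E'/Theta.
import numpy as np

def general_evaporation_bin_integral(E0, E1, Theta, x_table, g_table):
    x0, x1 = E0 / Theta, E1 / Theta
    xs = np.array([x0] + [x for x in x_table if x0 < x < x1] + [x1])
    gs = np.interp(xs, x_table, g_table)
    # trapezoid rule; exact for lin-lin interpolated tables
    return Theta * float(np.sum(0.5 * (gs[1:] + gs[:-1]) * np.diff(xs)))
\end{verbatim}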
\subsection{Input of data for the general evaporation model}
For the general evaporation model, the process identifier in Section~\ref{data-model} is\\
\Input{Process: general evaporation}{}\\
This data is in the laboratory frame,\\
\Input{Product Frame: lab}{}
The model-dependent data in Section~\ref{model-info}
consist of pairs $\{E, \Theta(E)\}$ and of pairs
$\{x, g(x)\}$ with $x = \Elab'/\Theta$. The format for these data is\\
\Input{Theta: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{E, \Theta(E)\}$ and\\
\Input{g: n = $n$}{}\\
\Input{Interpolation:}{interpolation flag}\\
with $n$ pairs of entries $\{x, g(x)\}$.
In both cases, the interpolation flag is one of those for simple lists as in
Section~\ref{interp-flags-list}.
The $\Theta$ parameter is dimensionless, and the units for $E$ and~$x$
must be the same as those for the energy bins.
For example, in MeV one may have\\
\Input{Theta: n = 2}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{1.0e-11 1.0}{}\\
\Input{20.0 1.0}{}\\
\Input{g: n = 185}{}\\
\Input{Interpolation: lin-lin}{}\\
\Input{ 0.0000000e+00 3.1433980e-01}{}\\
\Input{ 1.0000000e-02 2.8124280e+00}{}\\
\Input{ 2.0000000e-02 3.1373560e+00}{}\\
\Input{ }{ $\cdots$}\\
\Input{ 1.8400000e+00 0.0000000e+00}{}\
| {
"alphanum_fraction": 0.6545087483,
"avg_line_length": 39.3642384106,
"ext": "tex",
"hexsha": "57869fea0f9911df9cc30f33b1e1c83678a118e7",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-03-03T22:54:43.000Z",
"max_forks_repo_forks_event_min_datetime": "2022-03-03T22:41:41.000Z",
"max_forks_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "brown170/fudge",
"max_forks_repo_path": "Merced/Doc/isotropic.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a",
"max_issues_repo_issues_event_max_datetime": "2021-12-01T01:54:34.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-08-04T16:14:45.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "brown170/fudge",
"max_issues_repo_path": "Merced/Doc/isotropic.tex",
"max_line_length": 98,
"max_stars_count": 14,
"max_stars_repo_head_hexsha": "4f818b0e0b0de52bc127dd77285b20ce3568c97a",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "brown170/fudge",
"max_stars_repo_path": "Merced/Doc/isotropic.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-21T10:16:25.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-29T23:46:24.000Z",
"num_tokens": 11102,
"size": 29720
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Deedy - One Page Two Column Resume
% LaTeX Template
% Version 1.3 (22/9/2018)
%
% Original author:
% Debarghya Das (http://debarghyadas.com)
%
% Original repository:
% https://github.com/deedydas/Deedy-Resume
%
% v1.3 author:
% Zachary Taylor
%
% v1.3 repository:
% https://github.com/ZDTaylor/Deedy-Resume
%
% IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX
%
% This template uses several fonts not included with Windows/Linux by
% default. If you get compilation errors saying a font is missing, find the line
% on which the font is used and either change it to a font included with your
% operating system or comment the line out to use the default font.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% TODO:
% 1. Add various styling and section options and allow for multiple pages smoothly.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% CHANGELOG:
% v1.3:
% 1. Removed MacFonts version as I have no desire to maintain it nor access to macOS
% 2. Switched column ordering
% 3. Changed font styles/colors for easier human readability
% 4. Added, removed, and rearranged sections to reflect my own experience
% 5. Hid last updated
%
% v1.2:
% 1. Added publications in place of societies.
% 2. Collapsed a portion of education.
% 3. Fixed a bug with alignment of overflowing long last updated dates on the top right.
%
% v1.1:
% 1. Fixed several compilation bugs with \renewcommand
% 2. Got Open-source fonts (Windows/Linux support)
% 3. Added Last Updated
% 4. Move Title styling into .sty
% 5. Commented .sty file.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Known Issues:
% 1. Overflows onto second page if any column's contents are more than the vertical limit
% 2. Hacky space on the first bullet point on the second column.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[]{deedy-resume-reversed}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% LAST UPDATED DATE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \lastupdated
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% TITLE NAME
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\namesection{Paweł Paczuski}{ %\urlstyle{same}\href{http://example.com}{example.com}| \href{http://example2.co}{example2.co}\\
\href{mailto:[email protected]}{[email protected]} | \href{tel:+48508737322}{+48508737322}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% COLUMN ONE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{minipage}[t]{0.60\textwidth}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EXPERIENCE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experience}
\runsubsection{\href{https://pacode.io}{PACODE}}
\descript{| Co-Founder }
\location{Nov 2017 – Present}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Leading a team of four talented developers.
\item Working directly with clients to analyze their expectations and deliver custom software solutions.
\item Designing and leading the software scaling strategy at a successful Polish startup -- \href{https://sundose.io}{Sundose}.
\item Working on socially responsible projects with \href{https://fundacjapozaschematami.pl}{Fundacja Poza Schematami}, e.g. \href{https://znajdzieszmnie.pl}{Znajdziesz mnie? -- computer game that increases alcohol awareness among teenagers}.
\item Several projects for the Internet community, e.g. \href{https://copypasta.pl}{Copypasta.pl} -- community-driven web portal that aggregates urban legends and funny stories reaching 50k unique visitors a month, \href{https://wasteless.io}{Wasteless.io -- virtual fridge that allows users to minimize the amount of food they waste}.
\item Conducting research projects in the field of computer graphics, data science, machine learning.
\item IT consulting for Polish youtubers, e.g. \href{https://www.youtube.com/user/GargamelVlog}{Jakub Chuptyś (GargamelVlog)}, \href{https://www.youtube.com/channel/UCrtZVvEXEYWO1J_nOzL3fiw}{Waksy}.
\end{tightemize}
\sectionsep
\runsubsection{\href{https://upmedic.pl}{UPMEDIC}}
\descript{| Co-Founder }
\location{May 2015 – present | Warsaw, Poland}
\begin{tightemize}
\item Started as a small side-project that turned into a business supporting radiologists in everyday work at Medical Centers the Medici in Łódź, Poland.
\item Conducting research projects in the field of medical informatics, structured reporting, data mining.
\item Conducting a presentation at the most prestigious radiological event in Poland -- \href{https://upmedic.pl/static/homepage/images/products/Streszczenia_42_Zjazd_PLTR.pdf}{42nd Congress of the Polish Medical Society of Radiology}.
\item Developing an EHR (Electronic Health Record) system for a network of clinics.
\item Developing telemedical solutions for remote radiological consultations -- \href{https://poradalekarza.pl}{PoradaLekarza -- launching in Summer 2020}.
\item Merged upmedic into pacode in May 2020 as one of our products.
\end{tightemize}
\sectionsep
\runsubsection{CI GAMES}
\descript{| Junior C++ Programmer }
\location{Jun 2016 – Dec 2016 | Warsaw, Poland}
\begin{tightemize}
\item Implementing rendering tweaks to the CRYENGINE game engine backing Sniper Ghost Warrior 3 for PC, XBOX ONE, PS4.
\item Working in a big organization with strict deadlines.
\item Working in a team of 15 developers cooperating with game designers, testers, FX artists.
\end{tightemize}
\sectionsep
\runsubsection{MICROSOFT}
\descript{| Intern }
\location{Jul 2014 – Oct 2014 | Warsaw, Poland}
\begin{tightemize}
\item Working in pair with UX Designer to deliver visually stunning applications to the user.
\item Developing apps and games for Windows, Windows Phone and Microsoft Azure.
\item Working remotely on software projects.
\end{tightemize}
\sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% RESEARCH
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \section{Research}
% \runsubsection{Cornell Robot Learning Lab}
% \descript{| Researcher}
% \location{Jan 2014 – Jan 2015 | Ithaca, NY}
% Worked with \textbf{\href{http://www.cs.cornell.edu/~ashesh/}{Ashesh Jain}} and \textbf{\href{http://www.cs.cornell.edu/~asaxena/}{Prof Ashutosh Saxena}} to create \textbf{PlanIt}, a tool which learns from large scale user preference feedback to plan robot trajectories in human environments.
% \sectionsep
% \runsubsection{Cornell Phonetics Lab}
% \descript{| Head Undergraduate Researcher}
% \location{Mar 2012 – May 2013 | Ithaca, NY}
% Led the development of \textbf{QuickTongue}, the first ever breakthrough tongue-controlled game with \textbf{\href{http://conf.ling.cornell.edu/~tilsen/}{Prof Sam Tilsen}} to aid in Linguistics research.
% \sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% COMMUNITY SERVICE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \section{Community Service}
% \begin{tabular}{rll}
% 2013 -- 2018 & Tennessee & St. Baldrick's Foundation\\
% 2014 -- 2017 & Tennessee & American Cancer Society's Hope Lodge\\
% 2013 -- 2015 & Tennessee & Habitat for Humanity\\
% 2011 -- 2015 & Tennessee & Special Olympics\\
% \end{tabular}
% \sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% SOCIETIES
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \section{Societies}
% \begin{tabular}{rll}
% 2018 -- 2018 & National & Association of Computing Machinery (ACM)\\
% 2017 -- 2019 & National & Scrum Alliance Certified ScrumMaster\\
% 2015 -- 2019 & University & Shackouls Honors College\\
% \end{tabular}
% \sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% AWARDS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \section{Awards}
% \begin{tabular}{rll}
% 2015 & 99\textsuperscript{th} percentile & National Merit Scholarship Finalist\\
% \end{tabular}
% \sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PUBLICATIONS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \section{Publications}
% \renewcommand\refname{\vskip -1.5cm} % Couldn't get this working from the .cls file
% \bibliographystyle{abbrv}
% \bibliography{publications}
% \nocite{*}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% COLUMN TWO
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{minipage}
\hfill
\begin{minipage}[t]{0.33\textwidth}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% EDUCATION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Education}
\subsection{Warsaw University of Technology}
\descript{Computer Science}
\location{2014 - 2020}
% College of Engineering \\
Master of Engineering in the field of Computer Science \\
Scholarship for the best 10\% of students for 9 semesters. \\
Vice-President at Innovative Software Solutions Student Society \\
\location{Graduated with 5 (very good) as the overall score.}
\sectionsep
\subsection{V Liceum Ogólnokształcące im. Ks. J. Poniatowskiego w Warszawie}
\descript{High school, Chemistry, Physics, Maths}
\location{2011 - 2014}
\sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% SKILLS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Skills}
\subsection{Programming}
Python \textbullet{} C/C++ \textbullet{} C\# \textbullet{} JavaScript \textbullet{} Bash \\
\sectionsep
\subsection{Technology}
Docker \textbullet{} AWS \textbullet{} Azure \textbullet{} Linux \textbullet{} \\
Artificial Intelligence \textbullet{} Relational Databases
\textbullet{} React \textbullet{} Redux \textbullet{} Immutablejs \textbullet{} SNOMED CT
\sectionsep
\subsection{Soft skills}
Design thinking \textbullet{} philosophy \textbullet{} language games \href{https://www.instagram.com/paczopaczos/}{ig:@paczopaczos}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Societies
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Societies}
\subsection{Fundacja im. Lesława Pagi}
Alumni of the second edition of Young Innovators programme
\sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% LINKS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Links}
Gitlab:// \href{https://gitlab.com/paczos}{\bf paczos} \\
LinkedIn:// \href{https://www.linkedin.com/in/pawel-paczuski/}{\bf pawel-paczuski} \\
\sectionsep
\end{minipage}
\end{document} \documentclass[]{article}
| {
"alphanum_fraction": 0.652863087,
"avg_line_length": 35.2937062937,
"ext": "tex",
"hexsha": "93fe754ca4231c571c3d3a60bb11072498343567",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bdf8b206c9323c822646a150b5138a92693802da",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "paczos/pawel-paczuski-cv",
"max_forks_repo_path": "deedy_resume-reversed.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bdf8b206c9323c822646a150b5138a92693802da",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "paczos/pawel-paczuski-cv",
"max_issues_repo_path": "deedy_resume-reversed.tex",
"max_line_length": 337,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "bdf8b206c9323c822646a150b5138a92693802da",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "paczos/pawel-paczuski-cv",
"max_stars_repo_path": "deedy_resume-reversed.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2606,
"size": 10094
} |
\chapter{Conclusions and recommendations for future research}
\ifpdf
\graphicspath{{Chapter7/figs/raster/}{Chapter7/figs/pdf/}{Chapter7/figs/}}
\else
\graphicspath{{Chapter7/figs/vector/}{Chapter7/figs/}}
\fi
\section{Conclusions}
This PhD has made advances in several aspects of multi-scale modelling of
granular flows and understanding the complex rheology of dry and submerged
granular flows. The significant contributions of this PhD are summarised in
this chapter.
\subsection{Multi-scale modelling of dry granular flows}
A multi-scale approach was adopted to study the granular flow behaviour. The
material point method, a continuum approach, was used to model the macro-scale
response, while the grain-scale behaviour was captured using a discrete element
technique. In the present study, a two-dimensional DEM code was developed in
C++ to study the micro-scale rheology of dry granular flows. The Verlet-list
algorithm was implemented for neighbourhood detection to improve the
computational efficiency. A linear-elastic model with a frictional contact
behaviour is used to model dense rapid granular flows. A sweep-line Voronoi
tessellation algorithm was implemented, in the present study, to extract
continuum properties such as bulk density from the local grain-scale
simulations.
In order to capture the macro-scale response, a template-based
three-dimensional C++11 Material Point Method code (an Eulerian-Lagrangian
approach), developed at the University of Cambridge, was modified and extended
to study granular flows as a continuum. In the present study, the Generalised
Interpolation Material Point GIMP method was implemented to reduce the
cell-crossing noise and oscillations observed during large-deformation
problems, when using the standard MPM. The three-dimensional MPM code was
parallelised to run on multi-core systems, thus improving the computational
efficiency. The algorithm of the MPM code was improved to handle multi-body
dynamics and interactions.
\subsubsection*{Granular column collapse}
Previous studies on granular collapse have shown a power-law dependence between
the run-out and the initial aspect ratio of the column. However, the origin of
power-law behaviour and the change in the run-out behaviour for tall columns
was unexplained. Also, the reason for longer run-out distances for
tall columns using the continuum approach was still lacking. Most studies
were focused on mono-disperse grain sizes.
Multi-scale simulations of dry granular flows were performed to capture the
local rheology, and to understand the capability and limitations of continuum
models in realistic simulation of granular flow dynamics. For short columns,
the run-out distance is found to be proportional to the granular mass
destabilised above the failure surface. The spreading results from a
Coulomb-like failure of the edges and is a frictional dissipation process. The
continuum approach, using a simple frictional dissipation model, is able to
capture the flow dynamics of short columns. Unlike short columns, the collapse
of tall columns is characterised by an initial collisional regime and a
power-law dependence between the run-out and the initial aspect ratio of the
granular column is observed. MPM simulations show longer run-out behaviour in
the case of tall columns. In MPM simulations, the total initial
potential energy stored in the system is completely dissipated through friction
over the entire run-out distance. The energy evolution study reveals that the
lack of a collisional dissipation mechanism in MPM results in a substantially
longer run-out distance for large aspect ratio columns. Continuum
approaches using frictional laws are able to capture the flow kinematics at
small aspect ratios, which is characterised by an inertial
number \textit{I} less than 0.2 indicating a dense granular flow regime.
However, a continuum approach like the MPM is unable to precisely describe the
flow dynamics of tall columns, which is characterised by an initial collisional
regime (\textit{I} > 0.2). DEM studies on the role of initial material
properties reveal that the initial packing fraction and the distribution of the
kinetic energy in the system have a significant influence on the flow
kinematics and the run-out behaviour. For the same material, a dense granular
packing results in a longer run-out distance in comparison to the initially
loose granular column. Hence it is important to consider macroscopic parameters
like packing fraction and dilatancy behaviour, which are due to meso-scale
grain arrangements, when modelling the granular system as a continuum.
\clearpage
\subsubsection*{Granular slopes subjected to horizontal excitations}
The ability of MPM to model transient flows that do not involve collision
is further investigated. In the
present study, multi-scale analyses of a granular slope subjected to
horizontal excitations reveal a power-law dependence of the run-out distance
and time as a function of the input energy with non-trivial exponents. The
power-law behaviour is found to be a generic feature of granular dynamics. Two
different regimes are observed depending on the input energy. The low energy
regime reflects mainly the destabilisation of the pile, with a run-out time
independent of the input energy. Whereas, the high energy regime involves
spreading dynamics, which is characterised by a decay time that is defined as
the time required for the input energy to decline by a factor $1/2$.
The distribution of the kinetic energy in the system is found to have a
significant influence in the low energy regime, where a large
fraction of the input energy is consumed in the destabilisation process.
However at higher input energy, where most of the energy is dissipated during
the spreading phase, the run-out distance has a weak dependence on
the distribution of velocity in the granular mass. The duration of the flow
shows similar behaviour to the run-out, however, a slope subjected to a
gradient velocity flows quicker than a slope subjected to a uniform
horizontal velocity. The material characteristics of the granular slope affect
the
constant of proportionality and not the exponent in the power-law relation
between the run-out and the input energy. The run-out distance and the decay
time decrease as the friction increases. This effect is much more pronounced at
low values of friction.
The MPM is successfully able to simulate the transient evolution of granular
flow with a single input parameter, the macroscopic friction angle. This study
exemplifies the suitability of the MPM, as a continuum approach, in modelling
large-deformation granular flow dynamics and opens the possibility of realistic
simulations of geological-scale flows on complex topographies.
\subsection{Granular flows in fluid}
A two-dimensional coupled lattice Boltzmann - DEM technique was developed in
C++ to understand the local rheology of granular flows in fluid. A
multi-relaxation time LBM approach was implemented in the present study to
ensure numerical stability. The coupled LBM--DEM technique offers the
possibility to capture the intricate micro-scale effects such as the
hydrodynamic instabilities. The coupled LBM-DEM involves modelling interactions
of a few thousand soil grains with a few million fluid nodes. Hence, in the
present study the LBM-DEM approach was implemented on the General Purpose
Graphics Processing Units. The GPGPU implementation of the coupled LBM -- DEM
technique offers the capability to model large scale fluid -- grain systems,
which are otherwise impossible to simulate using conventional computational
techniques. In the present study, simulations involving up to 5000 soil grains
interacting with 9 million LBM fluid nodes were modelled. Efficient data
transfer mechanisms that achieve coalesced global memory ensure that the GPGPU
implementation scales linearly with the domain size. Granular flows in fluid
involve soil grains interacting with fluid resulting in formation of turbulent
vortices. In order to model the turbulent nature of granular flows, the LBM-MRT
technique was coupled with the Smargonisky turbulent model. The LBM-DEM code
offers the possibility to simulate large-scale turbulent systems and probe
micro-scale properties, which are otherwise impossible to capture in complex
fluid - grain systems.
\subsubsection*{Granular collapse in fluid}
Unlike dry granular collapse, the run-out
behaviour in fluid is dictated by the initial volume fraction. Although
previous studies have shown the influence of the initial packing density on the
run-out behaviour, the effect of initial density, permeability, slope angle,
aspect ratio and presence of fluid on the run-out behaviour have largely been
ignored. The difference in the mechanism of flow behaviour between
dense and loose granular columns was not precisely understood. Previous studies
have shown that only the dry collapse results in the farthest run-out distance
in comparison with submerged conditions. Two-dimensional LB-DEM simulations
were performed to understand the behaviour of submarine granular flows and the
influence of various parameters on the flow dynamics.
Two-dimensional LB-DEM simulations pose a problem of non-interconnected
pore-space between the soil grains which are in contact with each other. In
the present study, a hydrodynamic radius, a reduction in the radius of the
grains, was adopted during the LBM computation stage to ensure continuous
pore-space for the fluid flow. A relation between the hydrodynamic radius and
the permeability of the granular media was obtained.
In order to understand the difference in the mechanism of granular flows in the
dry and submarine conditions, LBM-DEM simulations of granular column collapse
are performed and are compared with the dry case. Unlike the dry granular
collapse, the run-out behaviour in fluid is found to be dictated by the initial
volume fraction. For dense granular columns, the run-out distance in fluid is
much shorter than its dry counterpart. Dense granular columns experience
significantly high drag forces and develop large negative pore-pressures during
the initial stage of collapse resulting in a shorter run-out distance. On the
contrary, granular columns with loose packing and low permeability tend to flow
further in comparison to dry granular columns. This is due to entrainment of
water at the flow front leading to hydroplaning.
In both dense and loose initial packing conditions, the run-out distance is
found to increase with decreasing permeability. An increase in the
hydrodynamic radius from 0.7 to 0.95 R increases the
normalised run-out by 25\%. With a decrease in permeability, the flow
takes longer to initiate because of the development of large
negative pore-pressures. However, the low permeability of the granular mass
results in entrainment of water at the flow front causing hydroplaning. For the
same thickness and velocity of the flow, the potential for hydroplaning is
influenced by the density of the flowing mass. Loose columns are more likely to
hydroplane than the dense granular masses resulting in a longer run-out
distance. This is in contrast to the behaviour observed in the dry collapse,
where dense granular columns flow longer in comparison to loose columns.
Similar to the dry condition, a power-law relation is observed between the
initial aspect ratio and the run-out distance in fluid. For a given
aspect ratio and initial packing density, the run-out distance in the dry case
is usually longer than the submerged condition. However, for the same kinetic
energy, the run-out distance in fluid is found to be significantly higher than
the dry conditions. The run-out distance in the granular collapse has a
power-law relation with the peak kinetic energy. For the same peak kinetic
energy, the run-out distance is found to increase with decrease in the
permeability. The permeability, a material property, affects the constant of
proportionality and not the exponent of the power-law relation between the
run-out and the peak kinetic energy.
The number of vortices formed during a collapse in fluid is found to be
proportional to the amount of material destabilised. The vortices are formed
only during the spreading stage of collapse. The formation of eddies during the
collapse of tall columns indicates that most of the potential energy gained
during the free-fall is dissipated through viscous drag and turbulence.
\subsubsection*{Granular collapse down inclined planes}
The influence of slope angle on the effect of permeability and the initial
packing density on the run-out behaviour are studied. For increasing slope
angle, the viscous drag on the dense column tends to predominate over the
influence of hydroplaning on the run-out behaviour. The difference in the
run-out between the dry and the submerged conditions, for a dense granular
assembly, increases with increase in the slope angle above an inclination of
5\si{\degree}. In contrast to the dense granular columns, the loose granular
columns show a longer run-out distance in immersed conditions. The run-out
distance increases with increase in the slope angle in comparison to the dry
cases. The low permeable loose granular column retains the water entrained at
the base of the flow front resulting in sustained lubrication effect. In
contrast to the dry granular collapse, for all slope inclinations the loose
granular column in fluid flows further than the dense column.
For granular collapse on inclined planes, the run-out distance is unaffected by
the initial packing density at high permeability conditions. For collapse down
inclined planes at high permeabilities, the viscous drag forces predominate
resulting in almost the same run-out distance for both dense and loose initial
conditions. However, at low permeability the entrainment of water at the flow
front and the reduction in the effective stress of the flowing mass result in a
longer run-out distance in the loose condition than the dense case as the slope
angle increases.
In tall columns, the run-out behaviour is found to be influenced by the
formation of vortices during the collapse. The interaction of the surface
grains with the surround fluid results in formation of vortices uniquely during
the horizontal acceleration stage. The vortices result in redistribution of
granular mass and thus affect the run-out behaviour. This effect is
predominant on steeper slopes.
\section{Recommendations for future research}
Further research can be pursued along two directions: \textit{a}. improvement
of the numerical tools and constitutive models to realistically simulate
large-deformation problems and \textit{b}. investigation of the rheology of
granular flows using experimental and numerical tools.
\subsection{Development of numerical tools}
\subsubsection*{Discrete element method}
The two-dimensional discrete element method, developed in the present study,
can be extended to three-dimensions to model realistic soil flow
problems. Although the linear-elastic contact model is found to be sufficient
to
describe rapid granular flows, further research using Hertz-Mindlin or other
advanced contact model should be performed. Computationally, the DEM is limited
by the number of grains that can be realistically simulated. Hence, it is
important to be able to run DEM simulations on multi-core systems or on GPUs to
model large-scale geometries. The initial grain properties are found to have a
significant influence on the run-out behaviour, hence, it is vital to model
grains of different shapes to understand their influence on the run-out
distance. Agglomerates can also be used to study the effect of grain-crushing
as the flow progresses down slopes.
\subsubsection*{Material point method}
The current MPM code is capable of solving both 2D and 3D granular flow
problems. Further research should focus on modelling three-dimensional granular
flow problems and validate the suitability of MPM in modelling geological scale
run-out behaviours. As the scale of the domain increases, the computational
time increases especially when using GIMP method. To improve the computational
efficiency, the material point method developed in the present study should be
modified to run on large clusters. The dynamic re-meshing
technique~\citep{Shin2010a} should be implemented to efficiently solve
large deformation problems. The dynamic meshing approach is useful for problems
involving motion of a finite size body in unbounded domains, in which the
extent of material run-out and the deformation is unknown \textit{a priori}.
The approach involves searching for cells that only contain material points,
thereby avoiding unnecessary storage and computation.
The current MPM code is capable of handling fluid-solid interactions in
two-dimensions. Further research should be pursued to implement a fully-coupled
3D MPM code. The MPM code can also be extended to include the
phase-transition behaviour in a continuum domain for partially fluidised
granular flows~\citep{Aranson2002, Aranson2001, Volfson2003}. Fluid - solid
interactions result in pressure oscillations. Further research is essential to
explore advanced stabilisation methods that can be used to avoid the
oscillations that occur due to incompressibility.
\subsubsection*{Lattice Boltzmann - DEM coupling}
The GPGPU parallelised 2D LBM-DEM coupled code, developed in the present study,
should be extended to three-dimensions. This would involve a very high
computational cost and hence it is important to parallelise the LBM-DEM code
across multiple GPUs through a Message Passing Interface (MPI) similar to a
large cluster parallelisation. A three-phase system of granular solids, water
and air can be developed to realistically capture debris flow behaviour. The LB
code can be extended to include a free surface, which can be used to
investigate the influence of submarine mass movements on the free surface, such
as tsunami generation.
\subsubsection*{Constitutive models}
DEM simulations of granular flow problems reveal that the initial material
properties play a crucial role on the run-out evolution. The granular materials
experience change in the packing fraction as the flow progresses. Hence, it is
important to consider advanced models such as Nor-Sand, a critical state based
model, and $\mu(I)$ to model the dense granular flows. The behaviour of the
soil under large deformations can be better expressed with a critical state
model. The modified Nor-Sand constitutive model~\citep{Robert2010} implemented
in the present study can be used in large-deformation flow problems. The
$\mu(I)$ rheology, which is capable of capturing the complex rheology of dense
granular flow, can be extended to include the effect of fluid
viscosity~\citep{Pouliquen2005} to model granular flows in fluids.
\subsection{Understanding the rheology of granular flows}
\subsubsection*{Granular column collapse}
Although two-dimensional simulations provide a good understanding of the
physics of granular flows, it is important to perform three-dimensional
analysis to understand the realistic granular flow behaviour. Multi-scale
simulations of three dimensional granular collapse experiments can be performed
in dry and submerged conditions to understand the flow kinematics. Further
research is essential to quantify the influence of initial packing density,
shape and size of grains on the run-out behaviour for different initial aspect
ratios. This would provide a basis for macro-scale parameters that are required
to model the granular flow behaviour on a continuum scale.
\subsubsection*{Slopes subjected to horizontal excitation}
This work may be pursued along two directions: \textit{a}. experimental
realization of a similar set-up with different modes of energy injection and
\textit{b}. investigating the effect of various particle shapes or the presence
of an ambient fluid. Although numerical simulations are generally reliable
and have produced realistic results in past studies of steady flows, the transient
phases are more sensitive than steady flows and hence experimental
investigations are necessary for validation. This configuration is also
interesting for investigating the behaviour of a submerged slope subjected to
earthquake loadings.
\subsubsection*{Granular flow down inclined planes}
Multi-scale analyses of large deformation flow problems such as the flow of dry
granular materials down an inclined flume can be performed. This analysis will
provide an insight on the limits of the continuum approach in modelling large
deformation problems, which involve high shear-rates. The influence of
parameters, such as particle size, density, packing and dilation, on the flow
dynamics can be explored. These studies will be useful in describing the
granular flow behaviour using the $\mu(I)$ rheology.
\subsubsection*{Granular flows in fluid}
Three dimensional LBM-DEM simulations of granular collapse in fluid can be
carried out with varying shape, friction angle and size of particles to
understand the influence of initial material properties on the run-out
behaviour. Parametric analyses on the initial properties can be used to develop
a non-dimensional number that is capable of delineating different flow regimes
observed in granular flows in a fluid. Further research can be carried out on
the collapse of tall columns and the influence of vortices on the run-out
behaviour and re-distribution of the granular mass during the flow.
\newpage
\thispagestyle{empty}
\begin{figure*}[tbhp]
\centering
\includegraphics[height=0.95\textheight]{word_cloud}
\end{figure*} | {
"alphanum_fraction": 0.8133631102,
"avg_line_length": 60.2774725275,
"ext": "tex",
"hexsha": "239d6bd776e652895803c0f64f8c3cc469fcc683",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2021-10-04T08:16:32.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-09-17T20:14:26.000Z",
"max_forks_repo_head_hexsha": "18cab9acd8bed4970dea72d8b5c6cc0617c14f3a",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "kks32/phd-thesis",
"max_forks_repo_path": "Chapter7/chapter7.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "18cab9acd8bed4970dea72d8b5c6cc0617c14f3a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "kks32/phd-thesis",
"max_issues_repo_path": "Chapter7/chapter7.tex",
"max_line_length": 80,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "18cab9acd8bed4970dea72d8b5c6cc0617c14f3a",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "kks32/phd-thesis",
"max_stars_repo_path": "Chapter7/chapter7.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-21T22:01:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-08-31T01:39:52.000Z",
"num_tokens": 4559,
"size": 21941
} |
\section{The \replacecopy algorithm}
\Label{sec:replacecopy}
The \replacecopy algorithm of the \cxx Standard Library \cite[\S 28.6.5]{cxx-17-draft} substitutes
specific elements from general sequences.
%
Here, the general implementation
has been altered to process \valuetype ranges.
The new signature reads:
\begin{lstlisting}[style=acsl-block]
size_type replace_copy(const value_type* a, size_type n, value_type* b,
value_type v, value_type w);
\end{lstlisting}
The \replacecopy algorithm copies the elements from the range \inl{a[0..n-1]}
to the range \inl{b[0..n-1]}, substituting every occurrence of \inl{v} by \inl{w}.
The return value is the length of the range.
As the length of the range is already a parameter of
the function, this return value does not contain new
information.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.50\textwidth]{Figures/replace.pdf}
\caption{\Label{fig:replace} Effects of \replace}
\end{figure}
Figure~\ref{fig:replace} illustrates the behavior of \replacecopy with an example
where all occurrences of the value~3 in~\inl{a[0..n-1]} are replaced with the
value~2 in~\inl{b[0..n-1]}.
\subsection{The predicate \Replace}
We start with defining in the following listing the predicate \logicref{Replace}
that describes the intended relationship between the input array \inl{a[0..n-1]}
and the output array \inl{b[0..n-1]}.
Note the introduction of \emph{local bindings} \inl{\\let ai = ...}
and \inl{\\let bi = ...} in the definition of \Replace (see \cite[\S 2.2]{ACSLSpec}).
\input{Listings/Replace.acsl.tex}
This listing also contains a second, overloaded version of \Replace
which we will use for the specification of the related in-place
algorithm \specref{replace}.
%\clearpage
\subsection{Formal specification of \replacecopy}
Using predicate \Replace the specification of \specref{replacecopy}
is as simple as shown in the following listing.
Note that we also require that the input range \inl{a[0..n-1]} and
output range \inl{b[0..n-1]} do not overlap.
\input{Listings/replace_copy.h.tex}
\subsection{Implementation of \replacecopy}
The implementation (including loop annotations) of \implref{replacecopy}
is shown in the following listing.
Note how the structure of the loop annotations resembles
the specification of \specref{replacecopy}.
\input{Listings/replace_copy.c.tex}
\clearpage
| {
"alphanum_fraction": 0.7579697987,
"avg_line_length": 32.6575342466,
"ext": "tex",
"hexsha": "9f3bd4f9812213bca623944c7e2a61ffcd9282ca",
"lang": "TeX",
"max_forks_count": 19,
"max_forks_repo_forks_event_max_datetime": "2022-03-31T16:27:06.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-06-21T13:49:31.000Z",
"max_forks_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "fraunhoferfokus/acsl-by-example",
"max_forks_repo_path": "Informal/mutating/replace_copy.tex",
"max_issues_count": 22,
"max_issues_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_issues_repo_issues_event_max_datetime": "2021-06-17T07:10:16.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-18T13:30:41.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "fraunhoferfokus/acsl-by-example",
"max_issues_repo_path": "Informal/mutating/replace_copy.tex",
"max_line_length": 98,
"max_stars_count": 90,
"max_stars_repo_head_hexsha": "d8472670150fb3ff4360924af2d0eb14bc80d1e2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "fraunhoferfokus/acsl-by-example",
"max_stars_repo_path": "Informal/mutating/replace_copy.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-07T06:07:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-06-14T04:17:53.000Z",
"num_tokens": 648,
"size": 2384
} |
%\vspace{-5pt}
\section{Related Work}
\label{sec:related}
We use natural language techniques for predicting code.
In this section, we review some literature on language models and
neural network based language models.
\subsection{Traditional Language Models}
Statistical models of languages have been used extensively in various natural
language processing tasks. One of the simplest such models proposed is the
$n$-gram model, where the frequencies of consecutive $n$ tokens are used to
predict the probability of occurrence of a sequence of tokens. The frequencies of
such $n$-grams are obtained from a corpus, and are smoothed by various
algorithms such as Kneser-Ney smoothing~\cite{ref:kneser-ney} to account for
sparsity of $n$-grams. Such models can be easily extended to code completion,
by modeling each token as a word and doing $n$-gram analysis over a
codebase. However, such models do not handle
the high fraction of unknown tokens arising from variable and function names very well.
%used for code completion considering the high fraction of {\tt UNKNOWN} tokens
%in codes. For example, even in a simple case such as {\tt for(int SOMEVAR = 0;
%SOMEVAR < EXPRESSION; SOMEVAR++)}, the token {\tt SOMEVAR} can be arbitrary, and
%{\tt EXPRESSION} can consist of multiple tokens. Also, it has been shown that
%neural network based language models far outperform such traditional language
%models. Hence, in our work, we only consider neural network based language
%models.
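As a concrete (toy) illustration of the $n$-gram approach, the sketch below
counts trigrams over a tokenized snippet and scores candidate next tokens with
add-one smoothing; add-one is used only to keep the sketch short (Kneser-Ney
smoothing is what such models use in practice), and this code is for exposition
only, not the implementation used in our experiments.
\begin{verbatim}
from collections import Counter

def train_trigrams(tokens):
    """Count trigrams and their bigram contexts."""
    trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
    contexts = Counter(zip(tokens, tokens[1:]))
    return trigrams, contexts, set(tokens)

def prob(trigrams, contexts, vocab, w1, w2, w3):
    """P(w3 | w1 w2) with add-one smoothing."""
    return (trigrams[(w1, w2, w3)] + 1.0) / \
           (contexts[(w1, w2)] + len(vocab))

tokens = "for ( int i = 0 ; i < n ; i ++ )".split()
tg, ctx, vocab = train_trigrams(tokens)
scores = {w: prob(tg, ctx, vocab, ";", "i", w) for w in vocab}
print(max(scores, key=scores.get))   # most likely next token
\end{verbatim}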
\subsection{Word Vector Embeddings and Neural Network Language Models (NNLM)}
In their seminal work, Bengio et al.~\cite{ref:embedding} first propose a
vector embedding space for words for constructing a language model
that significantly outperforms traditional $n$-gram based
approaches. Their proposal maps each word to a vector and expresses the
probability of a token as the result of a multi-layer neural network on the
vectors of a limited window of neighboring words. T. Mikolov
et al.~\cite{ref:regularities} show that such models can capture word-pair relationships
(such as ``{\em king} is to {\em queen} as {\em man} is to {\em woman}'') in
the form of vector operations (i.e., $v_{\text{king}} -
v_{\text{queen}} = v_{\text{man}} - v_{\text{woman}}$). To distinguish such models
from {\it recurrent} models, we
call them {\it feed-forward} neural network language models.
We use the same concept in our token embeddings. We found significant clustering
of similar tokens, such as {\tt uint8\_t} and {\tt uint16\_t}, but were unable
to find such vector relationships.
%We use the same concept in our token-embeddings, where we attempt to predict
%probabilities of the next token depending on the token-vectors of a fixed window
%of prior tokens.
%We were able to observe significant clustering in the word
%vectors of similar tokens (e.g., vectors of {\tt uint8\_t} and {\tt uint16\_t}
%are very similar). However, we were {\it unable} to find vector relationships
%between tokens such as those discussed above.
Later, such feed-forward NNLMs have been extended by Mikolov et
al.~\cite{ref:mikolov:wvec} where they propose the continuous
bag-of-words (CBOW), and the skip-gram. In CBOW, the probability of a token is
obtained from the average of vectors of the neighboring tokens, whereas in
the skip-gram model the probabilities of the neighboring tokens are obtained from the
vector of a single token. The main advantage of such models is their
simplicity, which enables them to train
% relative to a multi-layer feed-forward network, which enables them to be trained
faster on billion-word corpora. However, in our case, the dataset was not
as large, and we could train even a 4-layer model in 3 days.
\subsection{RNN Language Models}
\label{sec:rel:rnnlm}
A significant drawback of feed-forward neural network based language models is
their inability to consider dependencies longer than the window size. To fix
this shortcoming, Mikolov et al. proposed a recurrent neural network based
language model~\cite{ref:rnnlm}, which associates a cell with a hidden state
at each position, and updates the state with each token. Subsequently, more
advanced memory cells such as LSTM~\cite{ref:lstm} and GRU~\cite{ref:gru} have
been used for language modeling tasks. More sophsticated models such as
tree-based LSTMs ~\cite{ref:treelstm} have also
been proposed. We experimented with using GRUs in our setup, but surprisingly
did not find them to be competitive with feed-forward networks.
\subsection{Attention Based Models}
\label{sec:attn}
Recently, neural network models with {\it attention}, i.e., models that weigh
different parts of the input differently have received a lot of attention (pun
intended) in areas such as image captioning~\cite{ref:showattendtell} and
language translation~\cite{ref:nmt,ref:nmt2}. In such models, the actual task of
prediction is separated into two parts: the attention mechanism which
``selects'' the part of input that is important, and the actual prediction
mechanism which predicts the output given the weighted input. We found an
attention based feed-forward model to be the best performing model among the
ones we considered.
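To make the separation between the attention mechanism and the prediction
mechanism concrete, the sketch below applies a single attention layer over a
fixed window of token embeddings. The dimensions are arbitrary and the weights
are untrained and randomly initialized; it is an illustrative forward pass, not
the model we evaluate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, window = 1000, 64, 20
E = rng.normal(size=(vocab, dim))       # token embeddings
w_attn = rng.normal(size=dim)           # scores each window position
W_out = rng.normal(size=(dim, vocab))   # context -> vocabulary logits

def predict_next(token_ids):
    """Distribution over the next token given prior token ids."""
    X = E[np.asarray(token_ids)[-window:]]   # (window, dim)
    s = X @ w_attn
    a = np.exp(s - s.max()); a /= a.sum()    # attention weights
    context = a @ X                          # weighted sum of embeddings
    z = context @ W_out
    p = np.exp(z - z.max())
    return p / p.sum()

p = predict_next(rng.integers(0, vocab, size=50))
print(p.argmax())
\end{verbatim}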
\subsection{Code Completion}
\label{sec:code-completion}
Frequency, association and matching neighbor based
approaches~\cite{ref:learningexamples} are used to improve predictions of IDEs such
as Eclipse.
Our learning approach, on the other hand, attempts to automatically
learn such rules and patterns.
| {
"alphanum_fraction": 0.7903906107,
"avg_line_length": 57.4,
"ext": "tex",
"hexsha": "a7a60d35f6e7059b629df810b44018760740dec3",
"lang": "TeX",
"max_forks_count": 7,
"max_forks_repo_forks_event_max_datetime": "2021-11-07T02:18:42.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-06-11T13:39:19.000Z",
"max_forks_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "rafallewanczyk/ml_code_completion",
"max_forks_repo_path": "docs/report/related.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4",
"max_issues_repo_issues_event_max_datetime": "2019-11-02T11:13:06.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-02-18T21:04:12.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "rafallewanczyk/ml_code_completion",
"max_issues_repo_path": "docs/report/related.tex",
"max_line_length": 89,
"max_stars_count": 36,
"max_stars_repo_head_hexsha": "def4364cc14a3c34ab4d2623b2f2743acb04fae4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "rafallewanczyk/ml_code_completion",
"max_stars_repo_path": "docs/report/related.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-06T08:28:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-31T08:14:18.000Z",
"num_tokens": 1303,
"size": 5453
} |
\documentclass[oneside,a4paper]{book}
\usepackage[T1]{fontenc} % required for luximono!
\usepackage{lmodern}
\usepackage[scaled=0.8]{luximono} % typewriter font with bold face
% To install the luximono font files:
% getnonfreefonts-sys --all or
% getnonfreefonts-sys luximono
%
% when there are trouble you might need to:
% - Create /etc/texmf/updmap.d/99local-luximono.cfg
% containing the single line: Map ul9.map
% - Run update-updmap followed by mktexlsr and updmap-sys
%
% This commands must be executed as root with a root environment
% (i.e. run "sudo su" and then execute the commands in the root
% shell, don't just prefix the commands with "sudo").
% formats the text according the set language
\usepackage[english]{babel}
\usepackage[table,usenames]{xcolor}
% generates indices with the "\index" command
\usepackage{makeidx}
% enables import of graphics. We use pdflatex here so do the pdf optimisation.
%\usepackage[dvips]{graphicx}
\usepackage[pdftex]{graphicx}
\usepackage{pdfpages}
% includes floating objects like tables and figures.
\usepackage{float}
% for generating subfigures with ohne indented captions
\usepackage[hang]{subfigure}
% redefines and smartens captions of figures and tables (indentation, smaller and boldface)
\usepackage[hang,small,bf,center]{caption}
% enables tabstops and the numeration of lines
\usepackage{moreverb}
% enables user defined header and footer lines (former "fancyheadings")
\usepackage{fancyhdr}
% Some smart mathematical stuff
\usepackage{amsmath}
% Package for rotating several objects
\usepackage{rotating}
\usepackage{natbib}
\usepackage{epsf}
\usepackage{dsfont}
\usepackage[algochapter, boxruled, vlined]{algorithm2e}
%Activating and setting of character protruding - if you like
%\usepackage[activate,DVIoutput]{pdfcprot}
% If you really need special chars...
\usepackage[latin1]{inputenc}
% Hyperlinks
\usepackage[colorlinks,hyperindex,plainpages=false,%
pdftitle={Yosys Manual},%
pdfauthor={Clifford Wolf},%
%pdfkeywords={keyword},%
pdfpagelabels,%
pagebackref,%
bookmarksopen=false%
]{hyperref}
% For the two different reference lists ...
\usepackage{multibib}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{pdfpages}
\usepackage{listings}
\usepackage{pifont}
\usepackage{skull}
% \usepackage{draftwatermark}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{arrows}
\usetikzlibrary{scopes}
\usetikzlibrary{through}
\usetikzlibrary{shapes.geometric}
\lstset{basicstyle=\ttfamily}
\def\B#1{{\tt\textbackslash{}#1}}
\def\C#1{\lstinline[language=C++]{#1}}
\def\V#1{\lstinline[language=Verilog]{#1}}
\newsavebox{\fixmebox}
\newenvironment{fixme}%
{\newcommand\colboxcolor{FFBBBB}%
\begin{lrbox}{\fixmebox}%
\begin{minipage}{\dimexpr\columnwidth-2\fboxsep\relax}}
{\end{minipage}\end{lrbox}\textbf{FIXME: }\\%
\colorbox[HTML]{\colboxcolor}{\usebox{\fixmebox}}}
\newcites{weblink}{Internet References}
\setcounter{secnumdepth}{3}
\makeindex
\setlength{\oddsidemargin}{4mm}
\setlength{\evensidemargin}{-6mm}
\setlength{\textwidth}{162mm}
\setlength{\textheight}{230mm}
\setlength{\topmargin}{-5mm}
\setlength{\parskip}{1.5ex plus 1ex minus 0.5ex}
\setlength{\parindent}{0pt}
\lstdefinelanguage{liberty}{
morecomment=[s]{/*}{*/},
morekeywords={library,cell,area,pin,direction,function,clocked_on,next_state,clock,ff},
morestring=[b]",
}
\lstdefinelanguage{rtlil}{
morecomment=[l]{\#},
morekeywords={module,attribute,parameter,wire,memory,auto,width,offset,size,input,output,inout,cell,connect,switch,case,assign,sync,low,high,posedge,negedge,edge,always,update,process,end},
morestring=[b]",
}
\begin{document}
\fancypagestyle{mypagestyle}{%
\fancyhf{}%
\fancyhead[C]{\leftmark}%
\fancyfoot[C]{\thepage}%
\renewcommand{\headrulewidth}{0pt}%
\renewcommand{\footrulewidth}{0pt}}
\pagestyle{mypagestyle}
\thispagestyle{empty}
\null\vfil
\begin{center}
\bf\Huge Yosys Manual
\bigskip
\large Clifford Wolf
\end{center}
\vfil\null
\eject
\chapter*{Abstract}
Most of today's digital design is done in HDL code (mostly Verilog or VHDL) and
with the help of HDL synthesis tools.
In special cases such as synthesis for coarse-grain cell libraries or when
testing new synthesis algorithms it might be necessary to write a custom HDL
synthesis tool or add new features to an existing one. In these cases the
availability of a Free and Open Source (FOSS) synthesis tool that can be used
as a basis for custom tools would be helpful.
In the absence of such a tool, the Yosys Open SYnthesis Suite (Yosys) was
developed. This document covers the design and implementation of this tool.
At the moment the main focus of Yosys lies on the high-level aspects of
digital synthesis. The pre-existing FOSS logic-synthesis tool ABC is used
by Yosys to perform advanced gate-level optimizations.
An evaluation of Yosys based on real-world designs is included. It is shown
that Yosys can be used as-is to synthesize such designs. The results produced
by Yosys in these tests were successfully verified using formal verification
and are comparable in quality to the results produced by a commercial
synthesis tool.
\bigskip
This document was originally published as bachelor thesis at the Vienna
University of Technology \cite{BACC}.
\chapter*{Abbreviations}
\begin{tabular}{ll}
AIG & And-Inverter-Graph \\
ASIC & Application-Specific Integrated Circuit \\
AST & Abstract Syntax Tree \\
BDD & Binary Decision Diagram \\
BLIF & Berkeley Logic Interchange Format \\
EDA & Electronic Design Automation \\
EDIF & Electronic Design Interchange Format \\
ER Diagram & Entity-Relationship Diagram \\
FOSS & Free and Open-Source Software \\
FPGA & Field-Programmable Gate Array \\
FSM & Finite-state machine \\
HDL & Hardware Description Language \\
LPM & Library of Parameterized Modules \\
RTLIL & RTL Intermediate Language \\
RTL & Register Transfer Level \\
SAT & Satisfiability Problem \\
% SSA & Static Single Assignment Form \\
VHDL & VHSIC Hardware Description Language \\
VHSIC & Very-High-Speed Integrated Circuit \\
YOSYS & Yosys Open SYnthesis Suite \\
\end{tabular}
\tableofcontents
\include{CHAPTER_Intro}
\include{CHAPTER_Basics}
\include{CHAPTER_Approach}
\include{CHAPTER_Overview}
\include{CHAPTER_CellLib}
\include{CHAPTER_Prog}
\include{CHAPTER_Verilog}
\include{CHAPTER_Optimize}
\include{CHAPTER_Techmap}
% \include{CHAPTER_Eval}
\appendix
\include{CHAPTER_Auxlibs}
\include{CHAPTER_Auxprogs}
\chapter{Command Reference Manual}
\label{commandref}
\input{command-reference-manual}
\include{CHAPTER_Appnotes}
% \include{CHAPTER_StateOfTheArt}
\bibliography{literature}
\bibliographystyle{alphadin}
\bibliographyweblink{weblinks}
\bibliographystyleweblink{abbrv}
\end{document}
| {
"alphanum_fraction": 0.762741652,
"avg_line_length": 30.0792951542,
"ext": "tex",
"hexsha": "ecc7e4c99828fd0a945122ace018bef64c9c3994",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "978933704b496e36699067ce4893946c6030e52c",
"max_forks_repo_licenses": [
"MIT",
"ISC",
"Unlicense"
],
"max_forks_repo_name": "rubund/yosys",
"max_forks_repo_path": "manual/manual.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "978933704b496e36699067ce4893946c6030e52c",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT",
"ISC",
"Unlicense"
],
"max_issues_repo_name": "rubund/yosys",
"max_issues_repo_path": "manual/manual.tex",
"max_line_length": 190,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "978933704b496e36699067ce4893946c6030e52c",
"max_stars_repo_licenses": [
"MIT",
"ISC",
"Unlicense"
],
"max_stars_repo_name": "rubund/yosys",
"max_stars_repo_path": "manual/manual.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1914,
"size": 6828
} |
% $Id: ant32-vm.tex,v 1.3 2002/04/17 20:05:58 ellard Exp $
\chapter{Virtual Memory Architecture}
\label{VirtualMemoryArchitecture}
Ant-32 is a paged architecture, with additional segmentation in the
virtual address space. The page size is 4KB.
Ant-32 contains a software-managed translation look-aside buffer (TLB)
that maps virtual addresses to physical addresses. The TLB contains
at least 16 entries, and the number of entries must be a power of two.
A TLB miss generates an exception.
The top 2 bits of each virtual address determine the segment number.
Segment 0 is the only segment accessible in user mode, while
segments 1-3 are accessible only in supervisor mode.
Ant-32 supports up to one GB of physical address space. Physical
memory begins at address 0, but need not be contiguous. Memory-mapped
devices are typically located at the highest addresses, but the
implementor is free to place them wherever necessary.
Each segment is one GB in size, corresponding to the size of physical
memory, but corresponds to a different way of interpreting virtual
addresses or accessing the memory. Segments 0 and 1 are mapped
through the TLB, and may be cached. Segments 2 and 3 are not mapped
through the TLB-- the physical address for each virtual address in
these segments is formed by removing the top two bits from the virtual
address. Segment 2 may be cached, but segment 3 is never cached for
either read or write access (and is intended to be used for
memory-mapped devices).
The next 18 bits of each virtual address are the virtual page number.
The bottom 12 bits are the offset into the page.
\begin{figure}[ht]
\caption{TLB Entry Format}
\begin{center}
\begin{tabular}{l|p{0.7in}|p{2.5in}|p{1.2in}|}
\cline{2-4}
& bits 31-30 & bits 29-12 & bits 11-0 \\
\cline{2-4}
Upper Word & 0 & Physical page number & Page Attributes \\
\cline{2-4}
Lower Word & Segment & Virtual page number & {\em (Available for OS)} \\
\cline{2-4}
\end{tabular}
\end{center}
\end{figure}
Each TLB entry consists of two 32-bit words. The top 20 bits of the
upper word are the top 20 bits of the physical address of the page (meaningful
only if the VALID bit is set in the lower word). Note that since Ant-32
physical addresses are only 30 bits long, the upper two bits of the
address, when written as a 32-bit quantity, must always be zero. The
lower 12 bits of the lower word contain the page attributes bits. The
page attributes include {\sc VALID}, {\sc READ}, {\sc WRITE}, {\sc
EXEC}, {\sc DIRTY}, and {\sc UNCACHE} bits, as defined in figure
\ref{TLB-attr-bits}. The remaining bits are reserved.
The top 20 bits of the lower word are the top 20 bits of the virtual
address (the segment number and the virtual page number for that
address). The lower 12 bits of the upper word are ignored by the
address translation logic, but are available to be used by the
operating system to hold relevant information about the page.
\begin{figure}[ht]
\caption{\label{TLB-attr-bits} TLB Page Attribute Bits}
\begin{center}
\begin{tabular}{|p{1.0in}|p{0.2in}|p{0.2in}|p{0.2in}|p{0.2in}|p{0.2in}|p{0.2in}|}
\hline
{\em Reserved} & U & D & V & R & W & X \\
\hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|l|l|p{4in}|}
\hline
{\bf Name} & {\bf Bit} & {\bf Description} \\
\hline
\hline
{\sc EXEC} & 0 & Instruction fetch memory access to addresses mapped
by this TLB entry is allowed. \\
\hline
{\sc WRITE} & 1 & Write memory access to addresses mapped
by this TLB entry is allowed. \\
\hline
{\sc READ} & 2 & Read memory access to addresses mapped
by this TLB entry is allowed. \\
\hline
{\sc VALID} & 3 & Indicates a valid TLB entry. When this is
set to 0, the contents of the rest of the TLB
entry are irrelevant. \\
\hline
{\sc DIRTY} & 4 & Indicates a dirty page. When this is set to 1,
it indicates that the page referenced by this TLB
entry has been written. This bit is set to 1
automatically whenever a write occurs to the page,
but can be reset to 0 using the instructions that
modify TLB entries. \\
\hline
{\sc UNCACHE} & 5 & An uncacheable page. When this is set to 1,
the page referenced by the entry will not be
cached in any processor cache. \\
\hline
\end{tabular}
\end{center}
\end{figure}
Note that the top bit of the virtual address in the TLB will always be
zero, because only segments 0 and 1 are mapped through the TLB, and
the top two bits of the physical address will always be zero because
physical addresses have only 30 bits. If values other than zero are
assigned to these bits the result is undefined.
Translation from virtual to physical addresses is done as follows for
any fetch, load, or store (a code sketch of these steps is given after the
list):
\begin{enumerate}
\item The virtual address is split into the segment, virtual
page number, and page offset.
\item If the page offset is not divisible by the size of the
data being fetched, loaded, or stored, an alignment
exception occurs.
All memory accesses must be aligned according to their
size. In Ant-32, there are only two sizes-- bytes, and
4-byte words. Word addresses must be divisible by 4,
while byte addresses are not restricted.
\item If the segment is not 0 and the CPU is in user mode, a
segment privilege exception occurs.
\item If the segment is 2 or 3, then the virtual address (with
the segment bits set to zero) is treated as the
physical address, and the algorithm terminates.
\item The TLB is searched for an entry corresponding to the
segment and virtual page number, and with its {\sc
VALID} bit set to 1. If no such entry exists, a TLB
miss exception occurs.
Note that if there are two or more valid TLB entries
corresponding to the same virtual page, exception 13
(TLB multiple match) will occur when the entry table
is searched. (This exception will also occur if the
{\tt tlbpi} instruction is used to search the TLB
table.)
\item If the operation is not permitted by the page, a
TLB protection exception occurs.
Note that a memory location can be fetched for
execution if its TLB entry is marked as executable
even if it is not marked as readable.
\item Otherwise, the physical address is constructed from the
top 20 bits of the upper word of the TLB entry and the lower
12 bits of the virtual address.
\item If the physical address does not exist (which can only
be detected when a memory operation is performed) a
bus error exception occurs.
\end{enumerate}
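The following sketch (in Python, purely for illustration; it is not part of the
architecture definition) restates these steps. It follows the TLB field layout
described in the text above, models each TLB entry as an (upper, lower) pair of
32-bit words, reports exceptions by name, and omits the final bus error check,
which depends on the memory system.
\begin{verbatim}
# Page attribute bits, as defined in the attribute table above
EXEC, WRITE, READ, VALID = 1 << 0, 1 << 1, 1 << 2, 1 << 3
DIRTY, UNCACHE = 1 << 4, 1 << 5

def translate(vaddr, size, op, user_mode, tlb):
    """Translate a 32-bit virtual address.
    op is 'fetch', 'load' or 'store'; tlb is a list of
    (upper, lower) pairs of 32-bit words."""
    segment = vaddr >> 30
    vpn = (vaddr >> 12) & 0x3FFFF
    offset = vaddr & 0xFFF
    if offset % size != 0:
        return 'alignment exception'
    if segment != 0 and user_mode:
        return 'segment privilege exception'
    if segment >= 2:                     # segments 2 and 3 are unmapped
        return vaddr & 0x3FFFFFFF
    hits = [(hi, lo) for (hi, lo) in tlb
            if (lo & VALID) and (lo >> 12) == ((segment << 18) | vpn)]
    if not hits:
        return 'TLB miss exception'
    if len(hits) > 1:
        return 'TLB multiple match exception'
    hi, lo = hits[0]
    needed = {'fetch': EXEC, 'load': READ, 'store': WRITE}[op]
    if not (lo & needed):
        return 'TLB protection exception'
    return (hi & 0xFFFFF000) | offset    # physical address
\end{verbatim}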
Any of the exceptions that can occur during this process can be caught
by the operating system, and the offending instruction can be
restarted if appropriate.
Note that the order in which the conditions that trigger these
exceptions are checked is {\bf undefined}, except for the dependency
that segment privileges are checked before the TLB is probed, and the
TLB protection is checked before the address is sent to the memory
system. For example, attempting to load a word from address {\tt
0x80001001} in user mode can cause either an alignment exception
(because the page offset is not divisible by 4) or a segment privilege
exception (because the segment is 1 and the processor is in user
mode).
Along with a 30-bit physical address, an {\em uncache} bit is sent to
the memory system whenever a memory operation is performed. This bit
is defined by the following rules:
\begin{itemize}
\item If the segment is 0 or 1, then the {\em uncache} bit is set to
the value of the {\sc UNCACHE} bit of the TLB entry that maps
that address.
\item If the segment is 2, then the {\em uncache} bit is set to 0.
\item If the segment is 3, then the {\em uncache} bit is set to 1.
\end{itemize}
If the implementation does not include a cache, or the cache
is disabled, then this bit is ignored.
| {
"alphanum_fraction": 0.7477105637,
"avg_line_length": 37.0956937799,
"ext": "tex",
"hexsha": "292468c8c78d1ae223c82267ea9113753a535c1d",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-07-15T04:09:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-07-15T04:09:05.000Z",
"max_forks_repo_head_hexsha": "d85952e3050c352d5d715d9749171a335e6768f7",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "geoffthorpe/ant-architecture",
"max_forks_repo_path": "Documentation/ant32-vm.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "d85952e3050c352d5d715d9749171a335e6768f7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "geoffthorpe/ant-architecture",
"max_issues_repo_path": "Documentation/ant32-vm.tex",
"max_line_length": 81,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "d85952e3050c352d5d715d9749171a335e6768f7",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "geoffthorpe/ant-architecture",
"max_stars_repo_path": "Documentation/ant32-vm.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2127,
"size": 7753
} |
% ------------------------------------------------------------------------------
% NOTE: The change summary is added to the table of contents without
% a paragraph number.
\phantomsection
\addcontentsline{toc}{section}{Change Summary}
\section*{Change Summary}\label{sec:changesummary}
\begin{longtable}[h]{|L{7.5cm}|L{7.5cm}|}\hline
\rowcolor{cyan}
Change & Justification\ER
\endhead
Initial version. & New document.\ER
\end{longtable}
% ------------------------------------------------------------------------------
| {
"alphanum_fraction": 0.5141776938,
"avg_line_length": 33.0625,
"ext": "tex",
"hexsha": "19e5ec73400421cfd3fb7b9cc671441d1cfcca6c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a5f74ae71640f2422436cbe506b35a9d0fd4e903",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Traap/newdoc",
"max_forks_repo_path": "data/change-summary.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "a5f74ae71640f2422436cbe506b35a9d0fd4e903",
"max_issues_repo_issues_event_max_datetime": "2019-02-16T17:17:39.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-02-16T15:13:14.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Traap/newdoc",
"max_issues_repo_path": "data/change-summary.tex",
"max_line_length": 80,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a5f74ae71640f2422436cbe506b35a9d0fd4e903",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Traap/newdoc",
"max_stars_repo_path": "data/change-summary.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 122,
"size": 529
} |
\chapter*{User Accounts, Access, Data Locations}
% G. Bakos contacted former postdoc Miguel de Val-Borro (now employed by NASA) to create an account for me on the relevant servers. I am now in the hatuser and hatpipebin groups. Bakos says we will discuss later whether I need an account on TRAC (e.g. hatreduc), but not for the time being.
VPN into UW Network then connect to server:
\begin{minted}{bash}
smb://research.drive.wisc.edu/rmathieu
\end{minted}
Discuss the K2 mosaic pipeline here
% The K2 pipeline is installed on phs3 and there is some raw data available under /nfs/phs3/ar1/ (91\% full at this time). As of 09/16/2015, Field 0 data were moved by Chelsea to phs11 for image subtraction. I now pretty much only use phs11.
\section*{Raw Data Location}
% Chelsea downloaded K2 data from Campaigns 0, 1, and 2. I really only need to use Campaign 0 data at this point in time.
% \begin{itemize}
% \item \textbf{Campaign 0 -- data is under reduced on server phs11 (image subtraction necessary)}
% \item Campaign 1 -- fully reduced light curve on server phs3
% \item Campaign 2 -- under reduced on server phs11 (image subtraction necessary)
% \end{itemize}
% Directories used by Chelsea:
% \begin{itemize}
% \item phs3 $\rightarrow$ /nfs/phs3/ar1 (close to full)
% \item phs11 $\rightarrow$ /nfs/phs11/ar0
% \item \textcolor{blue}{The data I want are located on phs11 in the subdirectory\\ \textbf{/S/PROJ/hatuser/2015\_K2/K2\_0} (read only files)}
% \end{itemize}
\subsection*{Data Reduction Location}
\section*{Guidance from Melinda}
\subsection*{Basics}
\subsection*{The K2 Field 0 Data Set}
Kepler original campaign channel 81 contains NGC 6791. All the raw fits files are available at \textbf{~/rmathieu/NGC6791}. I do not use these raw files for anything as k2-mosaic downloads the data as a part of the stitching process.
For each quarter in each of the four years we have to do
\begin{minted}{bash}
$ k2mosaic tpflist Q[N] [CHN] > tpflist.txt
$ k2mosaic mosaic tpflist.txt
\end{minted}
If you get stopped, add --cadence #####..##### between mosaic and the tpflist file,
e.g. \$ k2mosaic mosaic --cadence 30771..33935 tpflist.txt
\\ \\
Channels are 81, 53, 29, 1 so this has to occur for \\
Q4, Q8, Q12, Q16 for channel 81 \\
Q3, Q7, Q11, Q15 for channel 53 or 1 \\
Q2, Q6, Q10, Q14 for channel 29 \\
Q1, Q5, Q9, Q13, Q17 for channel 1 or 53 (Q1 is only 34 days, Q0 is 9 days) \\
We start with just channel 81 for our initial round of analysis
\subsubsection{90$^\circ$ Roll}
"The Kepler spacecraft rotates by 90o approximately every 93 days to keep its solar arrays directed towards the Sun (Haas et al. 2010). The first 10 days of science data obtained as the last activity during commissioning is referred to as Q0. There were 34 days of observations during Q1 following Q0 at the same orientation. Subsequent quarters are referred to as Q2, Q3, etc., and these each contain 93 days of observations. Transit searches are performed nominally every three months after each quarterly data set has been downlinked to the ground and processed from CAL through PDC" KSCI-19081-001 Data Processing Handbook \S8
\subsection*{Stitching Field with \texttt{k2-mosaic}}
Used the \texttt{k2mosaic} package (\texttt{http://k2mosaic.geert.io/}) to create *.fits files for all available cadences of channel 81.
\subsection*{Make the \texttt{fistar} Files}
The \textbf{.fistar} file is really just a simple text file with the sources extracted.
To create these files, I used the \texttt{fistar} command in the directory containing the fits files:
\\\texttt{for i in *.fits; do fistar "\$i" -o \$(echo \$i | sed s/fits/fistar/) -s flux --comment
--model elliptic --flux-threshold 1000 --algorithm uplink --iterations symmetric=2,general=1
--format id,x,y,bg,amp,s,d,k,flux,s/n,cmax,fwhm,npix,sigma,delta
--mag-flux 16.62,30; done}\\
I now do this in the python script
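A stripped-down sketch of what that script does (not the actual pipeline script;
it simply wraps the same \texttt{fistar} call and options shown above) is:
\begin{minted}{python}
import glob
import subprocess

def run_fistar(fits_dir='.'):
    """Run fistar on every fits file, writing a .fistar list for each."""
    for fits in sorted(glob.glob(fits_dir + '/*.fits')):
        out = fits.replace('.fits', '.fistar')
        cmd = ['fistar', fits, '-o', out, '-s', 'flux', '--comment',
               '--model', 'elliptic', '--flux-threshold', '1000',
               '--algorithm', 'uplink',
               '--iterations', 'symmetric=2,general=1',
               '--format',
               'id,x,y,bg,amp,s,d,k,flux,s/n,cmax,fwhm,npix,sigma,delta',
               '--mag-flux', '16.62,30']
        subprocess.run(cmd, check=True)

run_fistar()
\end{minted}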
\subsection*{??\texttt{grmatch}}
Make sure that the output is sane by checking \textbf{Residual} and \textbf{Unitarity} and \textbf{Ratio}.
\subsection*{Transform to \texttt{fitrans}}
\texttt{fitrans *.fits -k --input-transformation *.itrans --reverse -o *-xtrns.fits}.\\
These new output fits files have been shifted to our master coordinate reference frame and may be stacked. It is important to make sure that they look as though they have been shifted appropriately. I do this by creating .png image files from the .fits files and then stringing them together to a quick movie.
\subsection*{Mean Photo Reference Frame from \sout{\texttt{ficombine}}}
Using \texttt{ficombine} fails because the input list is too long, so I do this with \texttt{numpy}.
The desired command is given here:\\
\texttt{ficombine *xtrns.fits --mode mean -o photref.fits}
\begin{center}
\includegraphics[width=0.5\textwidth]{meanField.png}
\end{center}
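A sketch of the \texttt{numpy} version of this stacking step is given below. It
assumes \texttt{astropy} is available for fits I/O and keeps a running sum so
that only one frame is in memory at a time.
\begin{minted}{python}
import glob
import numpy as np
from astropy.io import fits

def mean_stack(pattern='*-xtrns.fits', outname='photref.fits'):
    """Average all shifted frames into a mean photometric reference."""
    files = sorted(glob.glob(pattern))
    total = None
    for fname in files:
        data = fits.getdata(fname).astype(np.float64)
        total = data if total is None else total + data
    fits.writeto(outname, total / len(files), overwrite=True)

mean_stack()
\end{minted}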
\subsubsection*{Using \texttt{ficonv} + \texttt{ficombine}}
"A better way to stack the images is to use \texttt{ficonv}. This method, however, builds upon the prior, as it will use the outputted photoref file described in the previous section.
To start the ficonv+ficombine process, I need to generate something called a \texttt{region file}. The region file gives \texttt{ficonv} an idea of the position of bright stars on the frame to have a good initial start.
The way to go about doing this is to use the script regslct.py, which I have edited and renamed as \texttt{melinda\_regslct.py}.
This script has a single function in it and takes in a \texttt{fistar} file. Joel recommends using the highest focused image, however with Kepler it is unlikely that the focus will change much. To determine the sharpest focused image, check the median value for the \textbf{s} parameter in the \texttt{.fistar} files, which is the inverse variance of the star.
Using the smallest value of the FWHM or the largest \textbf{s} value instead?\\ \\
Double check the following values in the script: \textbf{colflux}, \textbf{xsize}, and \textbf{ysize}.
\begin{center}
\includegraphics[width=0.8\textwidth]{regslct.png}
\end{center}
\begin{center}
\includegraphics[width=0.8\textwidth]{header.png}
\end{center}
\subsubsection*{? \texttt{ficonv}}
Takes a \texttt{.itrans}, \texttt{*xtrns.fits}, and a \texttt{region} file to run. I also use the stacked photref frame as the file that the other fits files are compared to and then subtracted off. This is called the \texttt{reffile} in the script.\\
Results in the output of \texttt{.kernel} and \texttt{-sub.fits} files. \\
The script has compared the psf of all the frames and convolved the files to have the least sharp psf.
The next step is to simply combine the images.
%%%LEFT OFF HERE FOR NOTEBOOK EDITING%%%%
\subsection*{Subtraction}
Chelsea recommended that I incorporate an output file and do the subtraction from the command line. A simple change of the \textbf{os} option to the \textbf{oc} option allows me to output the file that we will subtract. \\
\texttt{fiarith "'fiign-xtrns.fits'-'fiign-subtract.fits'" -o diff.fits}\\
Include an example of a first test run of a subtracted image, as compared to the raw data.
\begin{center}
\includegraphics[width=0.8\textwidth]{Figure1Proposal.png}
\end{center}
The next task is to fine tune the parameters of the \textbf{melinda\_run\_ficonv.py} script. These parameters include \textbf{b}, \textbf{i}, \textbf{d} in the script. Joel claims I should start with nice low numbers, perhaps even 0!
\subsection*{High Resolution Photref -- photometry with subtracted image}
Chelsea sent me an email with a helpful, in-depth description of the this step.\\
Doing normal aperture photometry on the master image (i.e. the photometry reference), requires:
\begin{enumerate}
\item \textbf{fiphot} - program used for aperture photometry
\item list of x,y coordinates for the stars we want to do photometry on
\item fits image (photoref, with regions properly masked)
\item list of apertures to do the photometry on (start with a big list, make plots, and then choose the good few ones)
\item method to convert magnitude to flux, I had a guess for Kepler, which is not too bad
\end{enumerate}
All stars I query will have temporary HAT identifier numbers.
After I release the light curve it will be linked to UCAC numbers.
The script that does the high-resolution photometry procedure is the shell script, \textbf{cmrawphot.sh}, however Chelsea recently wrote up a Python script for this bit.
The Python script is in the \textbf{chelseasrc} directory and is called \textbf{run\_iphot.py}. I renamed a version of this as \textbf{melinda\_run\_iphot.py}. I filled in the inputs referring to my past instruction and the \textbf{cmrawphot.sh} scripts and tested it on a few frames.\\ In order to work, I had to address a pickling error with a \textbf{copy\_reg module} workaround. It is now working fine, but it does take some time to run -- \textit{approximately 30 minutes or so}.
\subsubsection*{Getting list of x and y coordinates}
\textit{How to get a list of x,y coordinates of stars?}\\
Chelsea already has this list compiled, so I can skip this bit. Here is the relevant background: \\
The file Chelsea made is called \textbf{photref.cat}.
This is done by getting the astrometry solution on the photo ref/astro ref.
Let's assume I have a file provided by Chelsea called \textbf{photref.trans}.
This transfile is similar to the \textbf{itrans} file I used before,
except it provides the polynomial transformation between
$\xi$, $\eta$ $\rightarrow$ x,y (itrans provides the transformation between x,y $\rightarrow$ x,y).
Usually this match has a worse result compared to the x,y$\rightarrow$x,y match.\\
Of course, for each of the stars, I only know about the RA and DEC.
$\xi$ and $\eta$ are obtained by calculating the distance of this star to a
field center RA0 and DEC0, using a tangent transformation.
So, there are two steps of transformation:
\begin{itemize}
\item RA and DEC $\rightarrow$ $\xi$ and $\eta$ (requires RA0 and DEC0)
\item RA0 and DEC0 are read from the second to last line of the \textbf{transfile}.
\end{itemize}
\textcolor{red}{\textit{Review this above at a later time.}}\\ \\
The transformation itself is performed by calling: \\
\textbf{grtrans --input catfile --col-radec 2,3 --wcs 'tan,ra=rac,dec=dec,degrees' --col-out 5,6 --output outfile}\\ \\
$\xi$ and $\eta$ $\rightarrow$ RA and DEC (requires the transformation file)\\
\textbf{grtrans --input outfile (from above) --col-xy 5,6 -col-out 5,6 --input-transformation transfile --output starlistfile}
\subsubsection*{Photometry on Subtracted Frames}
This entire procedure is self-contained in the \textbf{melinda\_run\_iphot.py} script and therefore this bit can be ignored.
This portion of the procedure actually performs photometry on the subtracted frames and uses the raw photometry from the master frame to infer the actual flux/magnitude of each star on the non-subtracted frames. \\
This is done by calling the shell script, \textbf{IMG-3-PHOT\_M35.sh}.
Required to know:
\begin{itemize}
\item path for various files
\item raw photometry file
\item aperture list we used for raw photometry file
\item subtracted fits file
\end{itemize}
This script also requires the \textbf{itrans} file, the \textbf{JD} (or cadence or serial number) for each frame so later on I can generate light curves much easier. \\ \\
First the script calls the \textbf{fiphot} program in the subtracted frame photometry mode, then it transforms one set of x,y coordinates back into the original frame coordinates using the \textbf{.itrans file}, and finally it prints out other relevant information that the light curve requires. \\ \\
\textcolor{red}{\textit{Prints this ``relevant information'' in what format? Is this just the newly generated \textbf{.iphot} frames and the \textbf{photref\_M35.cmraw.phot} file?}} | {
"alphanum_fraction": 0.7590137634,
"avg_line_length": 65.4309392265,
"ext": "tex",
"hexsha": "60cf609587bf8cf667511cade5cd659ca278a0d9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cdc65efed766ab1f2b9e8c71ee7120db6010795d",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "rlmcclure/WOCS_ngc6791",
"max_forks_repo_path": "NGC 6791 Kepler Project Notebook/field_procedure.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cdc65efed766ab1f2b9e8c71ee7120db6010795d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "rlmcclure/WOCS_ngc6791",
"max_issues_repo_path": "NGC 6791 Kepler Project Notebook/field_procedure.tex",
"max_line_length": 630,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cdc65efed766ab1f2b9e8c71ee7120db6010795d",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "rlmcclure/WOCS_ngc6791",
"max_stars_repo_path": "NGC 6791 Kepler Project Notebook/field_procedure.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3247,
"size": 11843
} |
%%% exercise 05 %%%
\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath, amssymb, amsfonts, enumitem, fancyhdr, tikz}
% get rid of paragraph indent
\setlength{\parindent}{0 pt}
% allow section.equation numbering
\numberwithin{equation}{section}
% allows you to copy-paste code segments. requires pygments to be installed
% natively, i.e. not just in a Python venv! for example, on WSL Ubuntu you
% need to sudo apt-get install python3-pygments (unfortunately, the version
% will be a bit dated) and cannot rely on your venv Pygments version.
\usepackage{minted}
% alternative to minted that does not require Python, LaTeX only. listings is
% however disgusting out of the box and some setup is required.
\usepackage{listings, xcolor}
% makes clickable links to sections
\usepackage{hyperref}
% make the link colors blue, as well as cite colors. urls are magenta
\hypersetup{
colorlinks, linkcolor = blue, citecolor = blue, urlcolor = magenta
}
% fancy pagestyle so we can use fancyhdr for fancy headers/footers
\pagestyle{fancy}
% add logo in right of header. note that you will have to adjust logo path!
\fancyhead[R]{\includegraphics[scale = 0.15]{../bac_logo1.png}}
% don't show anything in the left and center header
\fancyhead[L, C]{}
% give enough space for logo by reducing top margin height, head separator,
% increasing headerheight. see Figure 1 in the fancyhdr documentation. if
% \topmargin + \headheight + \headsep = 0, original text margins unchanged.
\setlength{\topmargin}{-60 pt}
\setlength{\headheight}{50 pt}
\setlength{\headsep}{10 pt}
% remove decorative line in the fancy header
\renewcommand{\headrulewidth}{0 pt}
% title, author + thanks, date
\title{Exercise 5}
\author{Derek Huang\thanks{NYU Stern 2021, BAC Advanced Team.}}
\date{June 6, 2021\thanks{Original version released on March 30, 2021.}}
% shortcut links. the % characters strip extra spacing.
\newcommand{\pytest}{\href{https://docs.pytest.org/en/stable/}{pytest}}
\newcommand{\minimize}{%
\href{%
https://docs.scipy.org/doc/scipy/reference/generated/%
scipy.optimize.minimize.html%
}{\texttt{scipy.optimize.minimize}}%
}
% don't wanna keep typing this out
\newcommand{\tr}{\operatorname{tr}}
\newcommand{\npinv}{%
\href{%
https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html%
}{\texttt{numpy.linalg.inv}}%
}
\newcommand{\pdb}{%
\href{https://docs.python.org/3/library/pdb.html}{\texttt{pdb}}%
}
\begin{document}
\maketitle
% need to include this after making title to undo the automatic
% \thispagestyle{plain} command that is issued.
\thispagestyle{fancy}
\section{Introduction}
The goal of this exercise is to implement a linear discriminant classifier,
computing the sample covariance matrix using a maximum likelihood estimate
with optional shrinkage.
\section{Instructions}
\subsection{General}
The \texttt{exercise\_05.py} file contains a skeleton for the
\texttt{LinearDiscriminantAnalysis} class, unit tests, and \pytest{} fixtures.
Your job is to finish implementing the \texttt{LinearDiscriminantAnalysis}
class's \texttt{fit} and \texttt{score} methods by replacing the
\texttt{\#\#\# your code goes here \#\#\#} comment blocks with appropriate
Python code.
\medskip
Your code \textbf{must} be written in the areas marked by these blocks. Do
\textbf{not} change any of the pre-written code. The exercise is complete
$ \Leftrightarrow $ \texttt{pytest /path/to/exercise\_05.py} executes with
zero test failures. Note that attributes ending with \texttt{\_}, ex.
\texttt{coef\_}, are created \textbf{during} the fitting process.
\subsection{Implementing \texttt{fit}}
The class prior estimates must be assigned to the local variable
\texttt{priors}, the class mean estimates must be assigned to the local
variable \texttt{means}, and the covariance matrix estimate must be assigned
to the local variable \texttt{cov}. At the end of the \texttt{fit} method,
\texttt{priors} is assigned to \texttt{self.priors\_}, \texttt{means} is
assigned to \texttt{self.means\_}, and \texttt{cov} is assigned to
\texttt{self.covariance\_}.
\subsection{%
Computing \texttt{priors\_}, \texttt{means\_}, \texttt{covariance\_}%
}
The estimates for the class priors and class means should be computed via
maximum likelihood. Note that the shape of \texttt{means\_} is specified in
the docstring to be \texttt{(n\_classes, n\_features)}. The covariance matrix
estimate $ \hat{\mathbf{\Sigma}} \succeq \mathbf{0} \in
\mathbb{R}^{d \times d} $ is such that for $ \alpha \in [0, 1] $ we have
\begin{equation} \label{cov_est}
\hat{\mathbf{\Sigma}} \triangleq \alpha\frac{\tr(\mathbf{S})}{d}\mathbf{I}
+ (1 - \alpha)\mathbf{S}
\end{equation}
Here $ \mathbf{S} \succeq \mathbf{0} \in \mathbb{R}^{d \times d} $ is the
maximum likelihood estimate for the covariance matrix and
$ \tr(\mathbf{S}) $ is its trace, i.e.
\begin{equation*}
\tr(\mathbf{S}) = \sum_{i = 1}^ds_{ii}
\end{equation*}
Note that in this context, $ \tr(\mathbf{S}) / d $ is the average of the
input features' variances. In practice, other \textit{shrinkage estimators}
may be used, especially when $ d $ is much larger than $ N $, for example
the \href{http://www.ledoit.net/ole1a.pdf}{Ledoit-Wolf}\footnote{
In portfolio optimization, Ledoit-Wolf instead refers to
\href{http://www.ledoit.net/honey.pdf}{%
a particular constant correlation shrinkage estimator%
}.
} and
\href{https://arxiv.org/pdf/0907.4698.pdf}{%
oracle approximating shrinkage%
} estimators, which are beyond the scope of our introductory coverage.
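For concreteness, equation~(\ref{cov_est}) can be computed with NumPy roughly as
in the sketch below. This is only an illustrative sketch of the estimator, not
the required solution; it assumes \texttt{X} is the data matrix, \texttt{y} the
label vector, \texttt{means} the class means ordered by the sorted class labels
(as described above), and \texttt{alpha} the shrinkage parameter.
\begin{minted}{python}
import numpy as np

def shrunk_covariance(X, y, means, alpha):
    """Pooled MLE covariance of (X, y), shrunk as in the equation above."""
    labels = np.unique(y)
    # center each sample by the mean of its own class
    centered = X - means[np.searchsorted(labels, y)]
    S = centered.T @ centered / X.shape[0]   # maximum likelihood estimate
    d = X.shape[1]
    return alpha * (np.trace(S) / d) * np.eye(d) + (1 - alpha) * S
\end{minted}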
\subsection{Implementing \texttt{score}}
The \texttt{score} method of the \texttt{LinearDiscriminantAnalysis} class
computes the accuracy of the predictions. Given parameters $ \theta $,
classification function $ F_\theta $ (the classifier), and data
$ \mathcal{D} \triangleq \{(\mathbf{x}_1, y_1), \ldots (\mathbf{x}_N, y_N)\} $,
where the $ y_k $ values may either be numeric or class labels, the accuracy
$ \mathcal{A}_\theta $ of the model on the data $ \mathcal{D} $ is such that
\begin{equation*}
\mathcal{A}_\theta(\mathcal{D}) = \frac{1}{|\mathcal{D}|}
\sum_{(\mathbf{x}, y) \in \mathcal{D}}\mathbb{I}_{\{y\}}\left(
F_\theta(\mathbf{x})
\right)
\end{equation*}
Note that $ \operatorname{im}\mathcal{A}_\theta = [0, 1] $, so for
data $ \mathcal{D} $, $ \mathcal{A}_\theta(\mathcal{D}) $ is the
fraction of examples the model correctly classified.
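In NumPy terms, once the predicted labels have been computed (for example by
calling \texttt{predict}), the accuracy reduces to the mean of an elementwise
comparison; a minimal sketch, assuming \texttt{y\_true} and \texttt{y\_pred} are
arrays of the same shape:
\begin{minted}{python}
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of examples whose predicted label matches the truth."""
    return np.mean(y_true == y_pred)
\end{minted}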
\subsection{Tips}
\begin{enumerate}
\item
Carefully read the \texttt{LinearDiscriminantAnalysis} docstrings and
review slide 8.
\item
In the \texttt{fit} method, don't forget to either assign your covariance
matrix estimate to a local variable named \texttt{cov} or replace the
variable name passed to the \npinv{} call with the name of the local
variable holding a reference to the covariance matrix estimate.
\item
In the \texttt{score} method, you should first call the \texttt{predict}
method and use it to get the predicted labels.
\item
Use NumPy functions whenever possible for efficiency, brevity, and ease
of debugging.
\item
Invoke \pytest{} with the \texttt{--pdb} flag to start \pdb, the Python
debugger. Doing so allows you to inspect the variables in the current
call frame and look more closely at what went wrong.
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.7292535306,
"avg_line_length": 40.1891891892,
"ext": "tex",
"hexsha": "40390c22e5cdde2b8954f909277774676cb60c1d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "26de8661c3c5f00c13353e2d695ebf316545a037",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "phetdam/bac-advanced-ml",
"max_forks_repo_path": "lessons/lecture_05/exercise_05.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "26de8661c3c5f00c13353e2d695ebf316545a037",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "phetdam/bac-advanced-ml",
"max_issues_repo_path": "lessons/lecture_05/exercise_05.tex",
"max_line_length": 79,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "26de8661c3c5f00c13353e2d695ebf316545a037",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "phetdam/bac-advanced-ml",
"max_stars_repo_path": "lessons/lecture_05/exercise_05.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2135,
"size": 7435
} |
\documentclass{llncs}
\usepackage{url}
\usepackage{proof}
\usepackage{amssymb}
\usepackage{stmaryrd}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{comment}
\newcommand{\dgm}[2][1.5]{
\begin{center}
\scalebox{#1}{
\includegraphics{diagrams/#2.pdf}
}
\end{center}
}
\newcommand{\todo}[1]{\textbf{TODO:} #1}
\newcommand{\jacques}[1]{\textsc{Jacques says:} #1}
\newcommand{\amr}[1]{\textsc{Amr says:} #1}
\hyphenation{a-reas}
%subcode-inline{bnf-inline} name Pi
%! swap+ = \mathit{swap}^+
%! swap* = \mathit{swap}^*
%! dagger = ^{\dagger}
%! assocl+ = \mathit{assocl}^+
%! assocr+ = \mathit{assocr}^+
%! assocl* = \mathit{assocl}^*
%! assocr* = \mathit{assocr}^*
%! identr* = \mathit{uniti}
%! identl* = \mathit{unite}
%! dist = \mathit{distrib}
%! factor = \mathit{factor}
%! (o) = \fatsemi
%! (;) = \fatsemi
%! (*) = \times
%! (+) = +
%! foldB = fold_B
%! unfoldB = unfold_B
%! foldN = fold_N
%! unfoldN = unfold_N
%! trace+ = \mathit{trace}^{+}
%! trace* = \mathit{trace}^{\times}
%! :-* = \multimap
%! :-+ = \multimap^{+}
%! emptyset = \emptyset
%subcode-inline{bnf-inline} regex \{\{(((\}[^\}])|[^\}])*)\}\} name main include Pi
%! [^ = \ulcorner
%! ^] = \urcorner
%! [v = \llcorner
%! v] = \lrcorner
%! [[ = \llbracket
%! ]] = \rrbracket
%! ^^^ = ^{\dagger}
%! eta* = \eta
%! eps* = \epsilon
%! Union = \bigcup
%! in = \in
%! |-->* = \mapsto^{*}
%! |-->> = \mapsto_{\ggg}
%! |-->let = \mapsto_{let}
%! |--> = \mapsto
%! <--| = \mapsfrom
%! |- = \vdash
%! <=> = \Longleftrightarrow
%! <-> = \leftrightarrow
%! -> = \rightarrow
%! ~> = \leadsto
%! ::= = ::=
%! /= = \neq
%! vi = v_i
%! di = d_i
%! si = s_i
%! sj = s_j
%! F = \texttt{F}
%! T = \texttt{T}
%! forall = \forall
%! exists = \exists
%! empty = \emptyset
%! Sigma = \Sigma
%! eta = \eta
%! where = \textbf{where}
%! epsilon = \varepsilon
%! least = \phi
%! loop+ = loop_{+}
%! loop* = loop_{\times}
%! CatC = {\mathcal C}
%! CatA = {\mathcal A}
%! gamma = \gamma
%! {[ = \{
%! ]} = \}
%! elem = \in
%! dagger = ^\dagger
%! alpha = \alpha
%! beta = \beta
%! rho = \rho
%! @@ = \mu
%! @ = \,@\,
%! Pow = \mathcal{P}
%! Pi = \Pi
%! PiT = \Pi^{o}
%! PiEE = \Pi^{\eta\epsilon}_{+}
%! PiT = \Pi^{o}
%! PiTF = \Pi^{/}
%! bullet = \bullet
%! * = \times
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\title{Fractional Types}
\author{Roshan P. James$^{1}$ \and Zachary Sparks$^{1}$ \and Jacques Carette$^{2}$ \and Amr Sabry$^{1}$}
\institute{$^{(1)}$~Indiana University \qquad $^{(2)}$~McMaster University}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
In previous work, we developed a \emph{first-order},
information-preserving, and reversible programming language {{Pi}} founded
on type isomorphisms. Being restricted to first-order types limits the
expressiveness of the language: it is not possible, for example, to
abstract common program fragments into a higher-level combinator. In this
paper, we introduce a higher-order extension of {{Pi}} based on the novel
concept of \emph{fractional types} {{1/b}}. Intuitively, a value of a
fractional type {{1/v}} represents \emph{negative} information. A
function is modeled by a pair {{(1/v1,v2)}} with {{1/v1}}
representing the needed argument and {{v2}} representing the
result. Fractional values are first-class: they can be freely propagated
and transformed but must ultimately --- in a complete program --- be offset
by the corresponding amount of positive information.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
We are witnessing a convergence of ideas from several distinct research
communities (physics, mathematics, and computer science) towards replacing
\emph{equalities} by \emph{isomorphisms}. The combined programme has sparked
a significant amount of research that unveiled new and surprising connections
between geometry, algebra, logic, and computation (see~\cite{baez2011physics}
for an overview of some of the connections).
In the physics community, Landauer~\cite{Landauer:1961,Landauer},
Feynman~\cite{springerlink:10.1007/BF02650179}, and others have interpreted
the laws of physics as fundamentally related to computation. The great
majority of these laws are formulated as equalities between different
physical observables which is unsatisfying: \emph{different} physical
observables should not be related by an \emph{equality}. It is more
appropriate to relate them by an \emph{isomorphism} that witnesses, explains,
and models the process of transforming one observable to the other.
In the mathematics and logic community, Martin-L\"of developed an extension
of the simply typed $\lambda$-calculus originally intended to provide a
rigorous framework for constructive
mathematics~\cite{citeulike:7374951}. This theory has been further extended
with \emph{identity types} representing the proposition that two terms are
``equal.'' (See~\cite{streicher,warren} for a survey.) Briefly speaking,
given two terms $a$ and $b$ of the same type $A$, one forms the type
$\texttt{Id}_A(a,b)$ representing the proposition that~$a$ and~$b$ are equal:
in other words, a term of type $\texttt{Id}_A(a,b)$ witnesses, explains, and
models the process of transforming $a$ to $b$ and vice-versa.
%% AMR: I am not sure how to take this comment into account. I think we
%% should make a specific connection to Martin-Lof type theory to at least
%% justify the mention of groupoids in the conclusion. I agree though that
%% this is a very technical point and I do find it hard to express the point
%% in a couple of sentences. Suggestions welcome.
In the computer science community, the theory and practice of type
isomorphisms is well-established. Originally, such type isomorphisms were
motivated by the pragmatic concern of searching large libraries of functions
by providing one of the many possible isomorphic types for the desired
function~\cite{Rittri:1989:UTS:99370.99384}. More recently, type isomorphisms
have taken a more central role as \emph{the} fundamental computational
mechanism from which more conventional, i.e., irreversible computation, is
derived. In our own previous
work~\cite{James:2012:IE:2103656.2103667,rc2011,rc2012} we started with the
notion of type isomorphism and developed from it a family of programming
languages, {{Pi}} with various superscripts, in which computation is an
isomorphism preserving the information-theoretic entropy.
%% Moreover these languages have models in various flavors of symmetric
%% monoidal categories (which are the backbone of models for linear logic and
%% quantum computation).
%% We have also invested a significant amount of time in using {{Pi}} as
%% an actual programming language and established several idioms for
%% programming large classes of interesting (recursive) programs in
%% {{Pi}}.
A major open problem remains, however: a higher-order extension of
{{Pi}}. This extension is of fundamental importance in all the originating
research areas.
%% In physics, it allows a process or observer to be treated as
%% ``data'' that can be transformed by higher-level processes or observed by
%% meta-level observers.
In physics, it allows for quantum states to be viewed as processes and
processes to be viewed as states, such as with the Choi-Jamiolkowski
isomorphism~\cite{choi1975completely,jamiolkowski1972linear}. In mathematics
and logic, it allows the equivalence between different proofs of type
$\texttt{Id}_A(a,b)$ to itself be expressed as an isomorphism (of a higher
type) $\texttt{Id}_{\texttt{Id}_A(a,b)}(p,q)$. Finally, in computer science,
higher-order types allow code to abstract over other code fragments as well
as the manipulation of code as data and data as code.
Technically speaking, obtaining a higher-order extension requires the
construction of a \emph{closed category} from the underlying monoidal
category for~{{Pi}}. Although the general idea of such a construction is
well-understood, the details of adapting it to an actual programming language
are subtle. Our main novel technical device to achieving the higher-order
extension is a \emph{fractional type} which represents \emph{negative
information} and which is so named because of its duality with conventional
product types.
%% By adding a formal (linear) dual to (type theoretic) products, we can
%% internalize the notion that ``taking an input'' corresponds
%% to negative information, which turns out to be the basis of first-class
%% functions.
The remainder of the paper reviews {{Pi}} and then introduces the syntax and
semantics of the extension with fractional types. We then study the
properties of the extended language and establish its expressiveness via
several constructions and examples that exploit its ability to model
higher-order computations.
%% AMR:
%% I think the previous paragraph in the intro that talks about Martin-Lof type
%% theory provides the context. Anything else??
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Background: {{Pi}} }
\label{sec:pi}
We review our language {{Pi}} providing the necessary background and
context for our higher-order extension.\footnote{The presentation in this
section focuses on the simplest version of {{Pi}}. Other versions
include the empty type, recursive types, and trace operators but these
extensions are orthogonal to the higher-order extension emphasized in this
paper.} The terms of {{Pi}} are not classical values and functions;
rather, the terms are isomorphism witnesses. In other words, the terms of
{{Pi}} are proofs that certain ``shapes of values'' are isomorphic.
And, in classical Curry-Howard fashion, our operational semantics shows how
these proofs can be directly interpreted as actions on ordinary values which
effect this shape transformation. Of course, ``shapes of values'' are very
familiar already: they are usually called \emph{types}. But frequently one
designs a type system as a method of classifying terms, with the eventual
purpose to show that certain properties of well-typed terms hold, such as
safety. Our approach is different: we start from a type system, and then
present a term language which naturally inhabits these types, along with an
appropriate operational semantics.
\paragraph*{Data.}
We view {{Pi}} as having two levels: it has traditional values, given by:
%subcode{bnf} include main
% values, v ::= () | left v | right v | (v, v)
\noindent and these are classified by ordinary types:
%subcode{bnf} include main
% value types, b ::= 1 | b + b | b * b
\noindent
Types include the unit type {{1}}, sum types {{b1+b2}}, and
product types {{b1*b2}}. Values include {{()}} which is the only value of
type {{1}}, {{left v}} and {{right v}} which inject~{{v}} into a sum type,
and {{(v1,v2)}} which builds a value of product type.
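To fix intuitions, the two levels can be transcribed directly into a host
language. The following Haskell fragment is our own illustrative sketch; the
names \texttt{B}, \texttt{V}, \texttt{hasType}, and the constructors are ours
and are not part of {{Pi}} or of any accompanying code:
\begin{verbatim}
-- Illustrative rendering of the two levels of Pi (names are ours).
data B = One | Plus B B | Times B B    -- value types: 1, b+b, b*b
  deriving (Eq, Show)

data V = Unit | L V | R V | Pair V V   -- values: (), left v, right v, (v,v)
  deriving (Eq, Show)

-- hasType v b holds exactly when v inhabits b.
hasType :: V -> B -> Bool
hasType Unit       One           = True
hasType (L v)      (Plus b1 _)   = hasType v b1
hasType (R v)      (Plus _ b2)   = hasType v b2
hasType (Pair v w) (Times b1 b2) = hasType v b1 && hasType w b2
hasType _          _             = False
\end{verbatim}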
%% But these values should be regarded as largely ancillary: we do not
%% treat them as first-class citizens, and they only occur when observing
%% the effect of an isomorphism.
%% \paragraph*{Isomorphisms.} The important terms of {{Pi}} are witnesses to
%% (value) type isomorphisms. They are also typed, by the shape of the (value)
%% type isomorphism they witness {{b <-> b}}. Specifically, they are witnesses to
%% the following isomorphisms:
\paragraph*{Isomorphisms.} The terms of {{Pi}} witness
type isomorphisms of the form {{b <-> b}}. They consist of base
isomorphisms, as defined below, and their composition.
%subcode{bnf} include main
%! columnStyle = r@{\hspace{-0.5pt}}r c l@{\hspace{-0.5pt}}l
%swap+ :& b1 + b2 & <-> & b2 + b1 &: swap+
%assocl+ :& b1 + (b2 + b3) & <-> & (b1 + b2) + b3 &: assocr+
%identl* :& 1 * b & <-> & b &: identr*
%swap* :& b1 * b2 & <-> & b2 * b1 &: swap*
%assocl* :& b1 * (b2 * b3) & <-> & (b1 * b2) * b3 &: assocr*
%dist :&~ (b1 + b2) * b3 & <-> & (b1 * b3) + (b2 * b3)~ &: factor
\noindent Each line of the above table introduces a pair of dual
constants\footnote{where {{swap*}} and {{swap+}} are self-dual.} that witness
the type isomorphism in the middle. These are the base (non-reducible) terms
of the second, principal level of {{Pi}}. Note how the above has two
readings: first as a set of typing relations for a set of constants. Second,
if these axioms are seen as universally quantified, orientable statements,
they also induce transformations of the (traditional) values. The
(categorical) intuition here is that these axioms have computational content
because they witness isomorphisms rather than merely stating an extensional
equality.
The isomorphisms are extended to form a congruence relation by adding the
following constructors that witness equivalence and compatible closure:
%subcode{proof} include main
%@ ~
%@@ id : b <-> b
%
%@ c : b1 <-> b2
%@@ sym c : b2 <-> b1
%
%@ c1 : b1 <-> b2
%@ c2 : b2 <-> b3
%@@ c1(;)c2 : b1 <-> b3
%---
%@ c1 : b1 <-> b3
%@ c2 : b2 <-> b4
%@@ c1 (+) c2 : b1 + b2 <-> b3 + b4
%
%@ c1 : b1 <-> b3
%@ c2 : b2 <-> b4
%@@ c1 (*) c2 : b1 * b2 <-> b3 * b4
\noindent The syntax is overloaded: we use the same symbol at the value-type level
and at the isomorphism-type level for denoting sums and products. Hopefully
this will not cause undue confusion.
It is important to note that ``values'' and ``isomorphisms'' are completely
separate syntactic categories which do not intermix. The semantics of the
language come when these are made to interact at the ``top level'' via
\emph{application}:
%subcode{bnf} include main
% top level term, l ::= c v
% \noindent
% To summarize, the syntax of {{Pi}} is given as follows.
% \begin{definition}{(Syntax of {{Pi}})}
% \label{def:Pi}
% %subcode{bnf} include main
% % value types, b ::= 1 | b+b | b*b
% % values, v ::= () | left v | right v | (v,v)
% %
% % iso.~types, t ::= b <-> b
% % base iso ::= swap+ | assocl+ | assocr+
% % &|& unite | uniti | swap* | assocl* | assocr*
% % &|& dist | factor
% % iso comb., c ::= iso | id | sym c | c (;) c | c (+) c | c (*) c
% % top level term, l ::= c v
% \end{definition}
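Continuing our illustrative Haskell sketch (again, all names are ours), the
isomorphism level is a separate datatype, disjoint from the datatype of
values and closed under inversion, sequencing, sums, and products; a
top-level term then simply pairs such a combinator with a value:
\begin{verbatim}
-- Illustrative syntax of isomorphism witnesses (type indices elided).
data Iso
  = SwapP | AssoclP | AssocrP      -- additive base isomorphisms
  | Unite | Uniti                  -- unit laws for products
  | SwapT | AssoclT | AssocrT      -- multiplicative base isomorphisms
  | Dist  | Factor                 -- distributivity and its inverse
  | Id | Sym Iso                   -- identity and inversion
  | Seq Iso Iso                    -- sequential composition
  | Sum Iso Iso | Prod Iso Iso     -- parallel composition on + and *
  deriving Show
\end{verbatim}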
The language presented above, at the type level, models a commutative ringoid
where the multiplicative structure forms a commutative monoid, but the
additive structure is just a commutative semigroup. Note that the version of
{{Pi}} that includes the empty type with its usual laws exactly captures, at
the type level, the notion of a \emph{semiring} (occasionally called a
\emph{rig}) where we replace equality by isomorphism. Semantically, {{Pi}}
models a \emph{bimonoidal category} whose simplest example is the category of
finite sets and bijections. In that interpretation, each value type denotes a
finite set of a size calculated by viewing the types as natural numbers and
each combinator {{c : b1 <-> b2}} denotes a bijection between the sets
denoted by~{{b1}} and~{{b2}}. (We discuss the operational semantics in
conjunction with our extension in the next section.)
% Operationally, the
% semantics of {{Pi}} is given using two mutually recursive interpreters: one
% going forward and one going backwards. The use of {{sym}} switches control
% from one evaluator to the other. We will present the operational semantics in
% the next section along with the extension with fractional types and
% values. For now, we state without proof that the evaluation of well-typed
% combinators always terminates and that {{Pi}} is logically reversible, i.e.,
% that for all combinators {{c : b1 <-> b2}} and values {{v1 : b1}} and {{v2 :
% b2}} we have the forward evaluation of {{c v1}} produces {{v2}} iff the
% backwards evaluation of {{c v2}} produces {{v1}}.
%% AMR:
%% \jacques{Since we have Agda code for the above, should we refer to it
%% here, and make it available?}
%% That is a great idea but it would take a significant amount of time to
%% clean up the code and make it presentable. If someone is willing to do
%% that, then we should definitely add a reference to the code and make it
%% available.
%% Jacques: Right. I'll definitely work on cleaning up the paper first,
%% then see if there is time left for the code.
%% As is usual with type theories,
%% the coherence conditions (pentagonal and triangle identity) are not explicit,
%% but would correspond to certain identity types being trivial.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Language: {{PiTF}} }
The language {{Pi}} models isomorphisms of values rather well, and as we
established before~\cite{James:2012:IE:2103656.2103667,rc2011,rc2012}, it is
logically reversible and each computation it expresses preserves the
information-theoretic entropy. However, purposefully, it has no distinguished
notion of \emph{input} or \emph{output}; more precisely, because of strict
preservation of information, these two concepts coincide. This is the root
of reversibility. But if we want to model functions of any flavor as values,
we need to differentiate between these notions. Our idea, inspired by
computational dualities~\cite{Filinski:1989:DCI:648332.755574,Curien:2000},
polarity~\cite{Girard87tcs,10.1109/LICS.2010.23}, and the categorical notion
of \emph{compact
closure}~\cite{Selinger:2007:DCC:1229185.1229207,Abramsky:2004:CSQ:1018438.1021878},
is to introduce a \emph{formal dual} for our types. We thus consider a type
in a negative position to possess negative information, as it is really a
request for information. Since information is logarithmic, this means that a
request for information should behave like a \emph{fractional type}.
% AMR: a function a -o b is modeled by (1/a,b) not (a,b)
% Jacques: Exactly! 1/a is negative information is input, b is positive is
% output.
% AMR: relations are indeed only presented as a way to explain the semantics
The extension of the sets of types and values from {{Pi}} to {{PiTF}} is
simple:
%subcode{bnf} include main
% value types, b ::= ... | 1/b
% values, v ::= ... | 1/v
\noindent For a given type {{b}}, the values of type {{1/b}} are of the form
{{1/v}} where {{v : b}}. Note that {{1/v}} is purely formal, as is
{{1/b}}.
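In our illustrative Haskell sketch the extension amounts to one new, purely
formal constructor at each level (the declarations are repeated so that the
fragment stands alone; the names remain ours):
\begin{verbatim}
-- Value types and values of the extended language.
data B = One | Plus B B | Times B B
       | Recip B                   -- 1/b, purely formal
  deriving (Eq, Show)

data V = Unit | L V | R V | Pair V V
       | Frac V                    -- 1/v, purely formal
  deriving (Eq, Show)
\end{verbatim}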
Semantically, we are extending the symmetric monoidal category modeling
{{Pi}} to a \emph{compact closed} one, i.e., a category in which morphisms
are representable as objects. In such a setting, the new dual objects (i.e.,
the fractionals) must satisfy the following isomorphism witnessed by two new
type indexed combinators {{eta*_b}} and {{eps*_b}}:
\begin{multicols}{3}
\dgm{eta_times2}
\columnbreak
%subcode{bnf} include main
%! columnStyle = r r c l l
%eta*_b :& 1 & <-> & 1/b * b &: eps*_b
%% XXX: diagrams need unit wires(?)
%% Roshan: We should just be clear that unit wires are dropped. Dgms
%% including them dont communicate any additional information and
%% appear crowded.
\columnbreak
\dgm{eps_times2}
\end{multicols}
\vspace{-10pt}
\noindent From a programming perspective, we think of a value {{1/v}} as a
\emph{first-class constraint} that can only be satisfied if it is matched
with an actual value {{v}}. In other words, {{1/v}} is a \emph{pattern} representing the absence of
some information of a specific shape (i.e., \emph{negative information})
that can only be reconciled by an actual value {{v}}. The combinator
{{eta*_b : 1 <-> 1/b * b}} thus represents a fission point which creates ---
out of no information --- an equal amount of negative and positive
information. Symmetrically, the combinator {{eps*_b : 1/b * b <-> 1}} matches
up an equal amount of negative and positive information producing no residual
information.
Historically, versions of compact closed categories were introduced by
Abramsky and Coecke~\cite{Abramsky:2004:CSQ:1018438.1021878} and by
Selinger~\cite{Selinger:2007:DCC:1229185.1229207} as the generalization of
monoidal categories to model quantum computation. The first and most
intuitive example of such categories is the category of finite sets and
relations. In that case, the dual operation (i.e., the fractional type)
collapses by defining {{1/b}} to be {{b}}. In spite of this degenerate
interpretation of fractionals, this is still an interesting category as it
provides a model for a reversible higher-order language. Indeed, any relation
is reversible via the standard \emph{converse} operation on a relation.
Furthermore, any relation between {{b1}} and {{b2}} can be represented as
a subset of {{b1 * b2}}, i.e., as a value.
In this section, we provide a simple semantics of {{PiTF}} in the
category of finite sets and relations, in the following way: We
interpret each combinator {{c : b1 <-> b2}} as a \emph{relation}
between the sets denoted by {{b1}} and {{b2}} respectively. Sets
{{b1}} and {{b2}} can be related only if they have the same information measure.
As we illustrate, this semantics allows \emph{arbitrary} relations
(including the empty relation and relations that are not isomorphisms)
to be represented as values in {{PiTF}}. In Sec.~\ref{sec:cat} we
discuss more refined semantics that are more suitable for richer
categories.
\begin{definition}[Denotation of Value Types]
\label{chx:def:denot}
Each type denotes a finite set of values as follows:
%subcode{opsem} include main
%! <- = \leftarrow
%! union = \cup
% [[1]] '= {[ () ]}
% [[b1 + b2]] '= {[ left v ~|~ v <- [[b1]] ]} union {[ right v ~|~ v <- [[b2]] ]}
% [[b1 * b2]] '= {[ (v1, v2) ~|~ v1 <- [[b1]], v2 <- [[b2]] ]}
% [[1/b]] '= {[ 1/v ~|~ v <- [[b]] ]}
\end{definition}
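The definition can be transcribed as an enumeration function in our
illustrative Haskell sketch (datatypes repeated so that the fragment stands
alone):
\begin{verbatim}
data B = One | Plus B B | Times B B | Recip B  deriving (Eq, Show)
data V = Unit | L V | R V | Pair V V | Frac V  deriving (Eq, Show)

-- The finite set of values denoted by a type, as a list.
denote :: B -> [V]
denote One           = [Unit]
denote (Plus b1 b2)  = map L (denote b1) ++ map R (denote b2)
denote (Times b1 b2) = [Pair v1 v2 | v1 <- denote b1, v2 <- denote b2]
denote (Recip b)     = map Frac (denote b)
\end{verbatim}
For example, \texttt{denote (Recip (Plus One One))} enumerates the two formal
values {{1/true}} and {{1/false}}.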
We specify
relations using a deductive system whose judgments are of the form
{{ v1 ~~c~~ v2 }} indicating that the pair {{(v1,v2)}} is in the relation
denoted by {{c}}.
\begin{definition}[Relational semantics]
\label{def:relational-PiTF}
Each combinator {{c : b1 <-> b2}} in {{PiTF}} denotes a relation
as specified below.\footnote{
In the interest of brevity, we have treated the type annotations in the base
isomorphisms as implicit; for example, the rule for {{assocr*}} should really
read:
%subcode{proof} include main
%@ v1 in b1
%@ v2 in b2
%@ v3 in b3
%@@ ((v1, v2), v3) ~~assocr*_{b1, b2, b3}~~ (v1, (v2, v3))
}
%subcode{proof} include main
%@ ~
%@@ (left v) ~~swap+~~ (right v)
%
%@ ~
%@@ (right v) ~~swap+~~ (left v)
%----
%@ ~
%@@ (left v) ~~assocl+~~ (left (left v))
%
%@ ~
%@@ (right (left v)) ~~assocl+~~ (left (right v))
%----
%@ ~
%@@ (right (right v)) ~~assocl+~~ (right v)
%
%@ ~
%@@ (left (left v)) ~~assocr+~~ (left v)
%----
%@ ~
%@@ (left (right v)) ~~assocr+~~ (right (left v))
%
%@ ~
%@@ (right v) ~~assocr+~~ (right (right v))
%----
%@ ~
%@@ ((), v) ~~unite~~ v
%
%@ ~
%@@ v ~~uniti~~ ((), v)
%
%@ ~
%@@ (v1, v2) ~~swap*~~ (v2, v1)
%----
%@ ~
%@@ (v1, (v2, v3)) ~~assocl*~~ ((v1, v2), v3)
%
%@ ~
%@@ ((v1, v2), v3) ~~assocr*~~ (v1, (v2, v3))
%----
%@ ~
%@@ (left v1, v3) ~~dist~~ (left (v1, v3))
%
%@ ~
%@@ (right v2, v3) ~~dist~~ (right (v2, v3))
%----
%@ ~
%@@ (left (v1, v3)) ~~factor~~ (left v1, v3)
%
%@ ~
%@@ (right (v2, v3)) ~~factor~~ (right v2, v3)
%----
%@ ~
%@@ v ~~id~~ v
%
%@ v' ~~c~~ v
%@@ v ~~(sym c)~~ v'
%
%@ v1 ~~c1~~ v2
%@ v2 ~~c2~~ v3
%@@ v1 ~~(c1(;)c2)~~ v3
%----
%@ v ~~c1~~ v'
%@@ (left v) ~~(c1 (+) c2)~~ (left v')
%
%@ v ~~c2~~ v'
%@@ (right v) ~~(c1 (+) c2)~~ (right v')
%----
%@ v1 ~~c1~~ v1'
%@ v2 ~~c2~~ v2'
%@@ (v1, v2) ~~(c1 (*) c2)~~ (v1', v2')
%
%@ ~
%@@ () ~~eta*_b~~ (1/v,v)
%
%@ ~
%@@ (1/v,v) ~~eps*_b~~ ()
\end{definition}
The semantics above defines a relation rather than a function. It is still
reversible in the sense that it ``mode-checks''. In other words, for any
combinator~{{c}} we can treat either its left or its right side as an input;
this allows us to define {{sym}} as just relational converse, which amounts
to an argument swap. We can then give an operational interpretation to the
semantic relation by defining two interpreters: a forward evaluator and a
backwards evaluator that exercise the relation in opposite directions, but
which now can return a \emph{set} of results. This operational view exploits
the usual conversion of a relation from a subset of {{A * B}} to a function
from {{A}} to the powerset of {{B}} whose composition is given by the Kleisli
composition in the powerset monad. It is straightforward to check that if the
forward evaluation of {{c v1}} produces {{v2}} as a possible answer then the
backwards evaluation of {{c v2}} produces {{v1}} as a possible answer. One
can check that all relations for {{Pi}} are total functional relations whose
converses are also total functional relations, i.e., isomorphisms. However
{{eta*_b}} relates {{()}} to multiple values (one for each value of type {{b}}),
and is thus not functional; {{eps*_b}} is exactly its relational converse, as
expected.
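To make this operational reading concrete, the following is a small
executable sketch of the two evaluators, written in Haskell over a
representative subset of the combinators (the code and its names are ours,
not the paper's accompanying implementation; sets of results are
approximated by lists, and the type index of {{eta*}} and {{eps*}} is
carried explicitly):
\begin{verbatim}
data B = One | Plus B B | Times B B | Recip B  deriving (Eq, Show)
data V = Unit | L V | R V | Pair V V | Frac V  deriving (Eq, Show)

data Iso = Id | SwapT | Seq Iso Iso | Prod Iso Iso | Sym Iso
         | Eta B | Eps B            -- eta*_b and eps*_b
  deriving Show

denote :: B -> [V]
denote One           = [Unit]
denote (Plus b1 b2)  = map L (denote b1) ++ map R (denote b2)
denote (Times b1 b2) = [Pair v1 v2 | v1 <- denote b1, v2 <- denote b2]
denote (Recip b)     = map Frac (denote b)

-- Forward evaluation: all values related to the input by the combinator.
fwd :: Iso -> V -> [V]
fwd Id           v            = [v]
fwd SwapT        (Pair v1 v2) = [Pair v2 v1]
fwd (Seq c1 c2)  v            = concatMap (fwd c2) (fwd c1 v)
fwd (Prod c1 c2) (Pair v1 v2) =
  [Pair w1 w2 | w1 <- fwd c1 v1, w2 <- fwd c2 v2]
fwd (Sym c)      v            = bwd c v
fwd (Eta b)      Unit         = [Pair (Frac v) v | v <- denote b]
fwd (Eps _)      (Pair (Frac v1) v2) = if v1 == v2 then [Unit] else []
fwd _            _            = []   -- ill-typed input: no result

-- Backward evaluation is the relational converse of forward evaluation.
bwd :: Iso -> V -> [V]
bwd Id           v            = [v]
bwd SwapT        (Pair v1 v2) = [Pair v2 v1]
bwd (Seq c1 c2)  v            = concatMap (bwd c1) (bwd c2 v)
bwd (Prod c1 c2) (Pair v1 v2) =
  [Pair w1 w2 | w1 <- bwd c1 v1, w2 <- bwd c2 v2]
bwd (Sym c)      v            = fwd c v
bwd (Eta _)      (Pair (Frac v1) v2) = if v1 == v2 then [Unit] else []
bwd (Eps b)      Unit         = [Pair (Frac v) v | v <- denote b]
bwd _            _            = []
\end{verbatim}
For instance, \texttt{fwd (Eta (Plus One One)) Unit} returns both
{{(1/true,true)}} and {{(1/false,false)}}, while forward evaluation of
\texttt{Eps} on a mismatched pair returns the empty list, exactly as in the
relational semantics.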
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Expressiveness and Examples}
Having introduced the syntax of our extended language, its semantics in the
category of sets and relations, and its operational semantics using forward
and backwards interpreters, we now illustrate its expressiveness as a
higher-order programming language. In the following presentation consisting
of numerous programming examples, we use {{bool}} as an abbreviation of
{{1+1}} with {{true}} as an abbreviation for {{left ()}} and {{false}} as an
abbreviation for {{right ()}}. In addition, instead of presenting the code
using the syntax of {{Pi}}, we use circuit diagrams that are hopefully more
intuitive. Each diagram represents a combinator whose evaluation consists of
propagating values along the wires. For the sake of readability, we omit
obvious re-shuffling circuitry and elide trivial unit wires which carry no
information.\footnote{The full code is available at
\url{www.cas.mcmaster.ca/~carette/PiFractional}.}
%%%%%%%%%%%%
\subsection{First-Class Relations}
The most basic additional expressiveness of {{PiTF}} over {{Pi}} is the
ability to express relations as values. Indeed a value of type {{1/b1 * b2}}
is a pair of a \emph{constraint} that can only be satisfied by some
{{v1 : b1}} and a value {{v2 : b2}}. In other words, it corresponds to a
function or relation which when ``given'' a value {{v1 : b1}} ``releases''
the value {{v2 : b2}}. To emphasize this view, we introduce the abbreviation:
%subcode{bnf} include main
% b1 :-* b2 &::=& 1/b1 * b2
\noindent which suggests a function-like behavior for the pair of a fractional value
and a regular value. What is remarkable is that we can almost trivially turn
any combinator {{c : b1 <-> b2}} into a (constant) value of
type {{b1 :-* b2}} as shown below on the left:
%subcode{opsem} include main
%! columnStyle = rcl
% name &:& (b1 <-> b2) -> (1 <-> (b1 :-* b2))
% name c &=& eta*_{b1} (;) (id (*) c)
\begin{multicols}{2}
\dgm{function-times}
\dgm{delimc-times}
\end{multicols}
\noindent Dually, as illustrated above on the right, we also have a
{{coname}} combinator which when given an answer {{v2 : b2}}, and a request
for the corresponding input {{1/b1}} for {{c : b1 <-> b2}}, eliminates this
as an unneeded computation (see also Sec. 3, \cite{abramsky-2008}).
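As a simple concrete example, consider {{swap+}} at type {{bool}}, i.e.,
boolean negation. The combinator {{name~swap+ : 1 <-> (bool :-* bool)}}
relates {{()}} to {{(1/false,true)}} and to {{(1/true,false)}}; the resulting
value can be used to convert any {{false}} to {{true}} and vice-versa via the
``apply'' combinator formalized below.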
More generally any combinator manipulating values of type {{b}} can be turned
into a combinator manipulating values of type {{1/b}} and vice-versa. This is
due to the fact that fractional types satisfy a self-dual involution
relating {{b}} and {{1/(1/b)}}:
%subcode{opsem} include main
%! columnStyle = rcl
% doubleDiv &:& b <-> 1/(1/b)
% doubleDiv &=& uniti (;) (eta*_{1/b} (*) id) (;) assocr* (;) (id (*) eps*_b) (;) swap* (;) unite
% jacques: elide for now, to save space, as this diagram is backwards.
% \dgm{involution-times}
% As a simple example, consider {{swap+}} denoting boolean
% negation. It is straightforward to check that the combinator
% {{name~swap+ : 1 <-> (bool :-* bool)}} denotes the following relation:
% %subcode{proof} include main
% %@ ~
% %@@ () ~~(name~swap+)~~ (1/false,true)
% %
% %@ ~
% %@@ () ~~(name~swap+)~~ (1/true,false)
% \noindent The relation {{name~swap+}} maps {{()}} to a value which can be used
% to convert any {{false}} to {{true}} and vice-versa. We formalize this
% ``apply'' combinator below.
%%%%%%%%%%%%
\subsection{Higher-Order Relations}
We are now a small step from implementing various higher-order combinators
that manipulate functions or relations. In particular, we can \emph{apply},
\emph{compose}, \emph{curry}, and \emph{uncurry} values representing
relations. We first show the realization of \emph{apply}:
%subcode{opsem} include main
%! columnStyle = rcl
% apply &:& (b1 :-* b2) * b1 <-> b2
% apply &=& swap* (;) assocl* (;) (swap* (*) id) (;) (eps*_{b1} (*) id) (;) unite
\begin{multicols}{2}
\dgm[1.2]{apply-times1}
\dgm{apply-times2}
\end{multicols}
\noindent Intuitively, we simply match the incoming argument of type {{b1}} with
the constraint encoded by the function. If they match, they cancel
each other and the value of type {{b2}} is exposed with no
constraints. Otherwise, the result is undefined.
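For example, {{apply}} maps {{((1/false,true),false)}} to {{true}}: the
incoming {{false}} cancels the constraint {{1/false}} via {{eps*}}, exposing
{{true}}. On {{((1/false,true),true)}} there is no result, since the
constraint {{1/false}} does not match the argument {{true}}.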
\noindent A flipped variant is also useful:
%subcode{opsem} include main
%! columnStyle = rcl
% apply' &:& b1 * (b1 :-* b2) <-> b2
% apply' &=& assocl* (;) (swap* (*) id) (;) (eps*_{b1} (*) id) (;) unite
\noindent Function or relation composition is now straightforward:
%subcode{opsem} include main
%! columnStyle = rcl
% compose &:& (b1 :-* b2) * (b2 :-* b3) <-> (b1 :-* b3)
% compose &=& assocr* (;) (id (*) apply')
\begin{multicols}{2}
\dgm[1.0]{compose-times1}
\dgm{compose-times2}
\end{multicols}
% \dgm{compose-times3}
\noindent
We can also derive currying (and dually, uncurrying) combinators. Observe that
the type of {{curry}} needs to be {{(b1 * b2 :-* b3) <-> (b1 :-* (b2 :-* b3))}},
which can be written in mathematical notation as
{{1/(b1 * b2) * b3 = (1/b1) * ((1/b2) * b3)}}. This means {{curry}} can be
written using {{recip}}, the implementation of the mathematical identity
{{1/(b1 * b2) = 1/b1 * 1/b2}}:
%subcode{opsem} include main
%! columnStyle = rcl
% recip &:& 1 / (b1 * b2) <-> 1 / b1 * 1 / b2
% recip &=& (uniti) (;) (uniti) (;) (assocl*) (;) ((eta*_{b1} (*) eta*_{b2}) (*) id) (;)
% && (reorder (*) id) (;) (assocr*) (;) (id (*) swap*) (;) (id (*) eps*) (;) swap* (;) (unite)
\noindent
where {{reorder}} is the obvious combinator of type {{b1 * (b2 * b3) * b4 <-> b1 * (b3 * b2) * b4}}.
With {{recip}} out of the way, we can easily write {{curry}}:
%subcode{opsem} include main
%! columnStyle = rcl
% curry &:& b1 * b2 :-* b3 <-> b1 :-* (b2 :-* b3)
% curry &=& (recip * id) (;) assocr*
\noindent That {{recip}} is the heart of currying seems quite remarkable.
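Spelled out, {{(b1 * b2) :-* b3}} abbreviates {{1/(b1 * b2) * b3}}, which
{{recip (*) id}} maps to {{(1/b1 * 1/b2) * b3}}, and {{assocr*}} then
reassociates to {{1/b1 * (1/b2 * b3)}}, i.e., to {{b1 :-* (b2 :-* b3)}}.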
%%%%%%%%%%%%
\subsection{Feedback, Iteration, and Trace Operators}
Mathematically speaking, recursion and iteration can be expressed using
categorical trace
operators~\cite{joyal1996traced,Hasegawa:1997:RCS:645893.671607}. In a
language like {{Pi}} there are two natural families of trace operators that
can be defined, an additive family (explored in detail in our previous
work~\cite{rc2011}) and a multiplicative family, which is expressible using
fractionals.
The idea of the multiplicative trace operator is as follows. We are given a
computation {{c : b1 * b2 <-> b1 * b3}}, and we build a ``looping'' version
which feeds the output value of type {{b1}} back as an input. Effectively,
this construction cancels the common type {{b1}}, to produce a new combinator
{{trace*~c:b2<->b3}}.
With fractionals, {{trace*}} becomes directly expressible:
%subcode{opsem} include main
%! columnStyle = rcl
% trace*_b &:& ((b * b1) <-> (b * b2)) -> (b1 <-> b2)
% trace*_b c &=& uniti (;) (eta*_b (*) id) (;) assocr* (;)
% && (id (*) c) (;) assocl* (;) (eps*_b (*) id) (;) unite
As an example, we can use the operational semantics (outlined in the previous
section) to calculate the result of applying {{trace*_{bool} (swap+ (*) id)}}
to {{false}}.
%subcode{opsem} include main
%! columnStyle = ll
% {[ ((),false) ]} & (uniti)
% {[ ((1/false,false),false), ((1/true,true),false) ]} & (eta*_{bool} * id)
% {[ (1/false, (false,false)), (1/true,(true,false)) ]} & (assocr*)
% {[ (1/false, (true,false)), (1/true,(false,false)) ]} & (id (*) not )
% {[ ((1/false,true),false), ((1/true,false),false) ]} & (assocl*)
% emptyset & (eps*_{bool} * id)
\noindent The computation with {{true}} gives the same result. This confirms
that although our semantics is formally reversible, it does not result in
isomorphisms.
More abstractly, the evaluation of {{trace*~c : b2 <-> b3}} must ``guess'' a
value of type {{b1}} to be provided to the inner combinator
{{c : b1 * b2 <-> b1 * b3}}.
This value {{v1 : b1}} cannot be arbitrary: it must be such that it is the value
produced as the first component of the result. In general, there may be
several such fixed-point values, or none. Indeed with a little ingenuity
(see the next section), one can
express any desired relation by devising a circuit which keeps the desired pairs as
valid fixed-points.
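Concretely, on the graph of a finite relation the multiplicative trace is
just a filter that keeps the pairs whose traced components agree and then
projects the traced component away. A minimal Haskell sketch of this reading
(our own illustration, independent of the combinator language):
\begin{verbatim}
-- Multiplicative trace on the graph of a finite relation:
-- keep only the pairs whose traced components coincide, then drop them.
traceStar :: Eq a => [((a, b), (a, c))] -> [(b, c)]
traceStar graph = [ (v2, v3) | ((a, v2), (a', v3)) <- graph, a == a' ]
\end{verbatim}
Applied to the graph of {{swap+ (*) id}} at type {{bool * bool}}, this filter
keeps nothing, matching the empty result of the calculation above.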
% \begin{example}
% Consider the combinator {{c : bool * 1 <-> bool * 1}}, with {{c = id}}.
% \noindent Using {{trace*}}, we construct the
% combinator {{trace* c : 1 <-> 1}}. Applying this to
% {{() : 1}} requires us to find a value {{b : bool}} such that
% {{(b,()) ~~d~~ (b,())}}. Given that the type {{bool}}
% has two values, there are two values {{false}} and {{true}} that satisfy the
% constraint. In other words, {{() ~~(trace* c)~~() }}.
% \end{example}
% \begin{example}
% \label{ch3:ex:annihilate}
% Consider a small variation on the combinator {{c}} above:
% {{c : bool * 1 <-> bool * 1}}
% {{c = swap+ (*) id }}
% \noindent which negates the boolean component of its incoming
% pair. Using the multiplicative trace operator, we can construct the
% combinator {{trace* c : 1 <-> 1}} as before. But now, applying this
% combinator to {{() : 1}} requires us to find a value {{b : bool}} such
% that {{(b,()) ~~swap+~~ (b,())}} which is impossible since {{swap+}}
% has no fixed points. Operationally, the evaluation of
% such a combinator should produce no value (were it to even terminate).
% \end{example}
%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{(Finite) Relational Programming}
\label{ch3:sec:lp}
Relational programming leverages set-theoretic relations and their
composition to express computational tasks in a declarative way.
% For instance, consider this example:
% {{parent = {[ (A,B), (B,C), (A,D) ]} }}
% {{grandparent = parent (o) parent}}
% \noindent The example defines a relation {{parent}} specified using a
% set of tuples and another relation {{grandparent}} specified using the
% composition of two parent relations. If we wrote this example in a
% relational language (e.g., Prolog) and we executed the query
% {{grandparent(A)}}, we would get the answer {{ {[C]} }}.
It turns out that with the addition of the multiplicative trace, and
the move to relations motivated in the previous section, we can
express relational programming.
Consider the relation {{R}} on booleans given by:
{{ {[(false,false), (false,true), (true,false) ]}. }}
\noindent We can define a combinator {{c_R}} whose denotation is {{R}}, by
defining a combinator {{cInner : (a * bool) <-> (a * bool)}} for some type
{{a}} such that {{c_R = trace* cInner}}. The basic requirement of {{cInner}}
is that for each desired pair {{(v1,v2)}} in {{R}}, it maps {{(a0,v1)}} to
{{(a0,v2)}} for some value {{a0}} and that for each pair {{(v1,v2)}} that is
\emph{not} in {{R}}, it maps {{(a1,v1)}} to {{(a2,v2)}} for \emph{different}
values {{a1}} and {{a2}}.
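For instance, since {{(false,true) in R}}, the combinator {{cInner}} must map
{{(a0,false)}} to {{(a0,true)}} for some value {{a0}}; and since
{{(true,true)}} is not in {{R}}, it may map {{(a1,true)}} to {{(a2,true)}}
only when {{a1 /= a2}}.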
% %subcode{proof} include main
% %@ ~
% %@@ ((false,false),false) ~~cInner~~ ((false,false),false)
% %----
% %@ ~
% %@@ ((false,true),false) ~~cInner~~ ((false,true),true)
% %----
% %@ ~
% %@@ ((false,true),true) ~~cInner~~ ((false,true),false)
% %----
% %@ ~
% %@@ ((false,false),true) ~~cInner~~ ((true,false),true)
% %----
% %@ ~
% %@@ ((true,false),false) ~~cInner~~ ((true,true),false)
% %----
% %@ ~
% %@@ ((true,false),true) ~~cInner~~ ((true,true),true)
% %----
% %@ ~
% %@@ ((true,true),false) ~~cInner~~ ((true,false),false)
% %----
% %@ ~
% %@@ ((true,true),true) ~~cInner~~ ((false,false),true)
% In the first three lines, the first argument is a fixed point and hence the
% multiplicative trace would map the second input to the second output
% producing the desired relation. In the remaining five cases, the first
% argument is not a fixed point and hence all these cases would be rejected as
% solutions to the constraint imposed by the multiplicative trace. It simply
% remains to find the actual realization of {{cInner}} that would produce the
% behavior above.
% %subcode{opsem} include main
% %! columnStyle = rcl
% % cInner &:& (bool * bool) * bool <-> (bool * bool) * bool
% % cInner &=& sym (swap* (;) assocl* (;) (cnot (*) id) (;) assocr* (;) swap* (;) toffoli (;)
% % && ((swap* (;) cnot (;) swap*) (*) id) (;) toffoli (;)
% % && (assocr* (;) swap* (;) toffoli (;) swap* (;) assocl*) (;) toffoli (;) (cnot (*) id))
% \noindent where {{toffoli}} and {{cnot}} are the conventional reversible gates (their definition in {{Pi}} is in~\cite{rc2011}).
% %
% % c_R &:& bool <-> bool
% % c_R &=& trace* cInner
% The above example should convince the reader that the language {{Pi}}
% with multiplicative trace is expressive enough to model finite relational
% programming.
% %%%%%%%%%%%%%%%
% \subsection{Solving Constraints}
% \label{ch3:sec:constraints}
% A large class of constraint satisfaction problems can be expressed
% using multiplicative traces. We illustrate the main ideas with the
% implementation of a SAT solver.
For small examples, finding such combinators by trial and error is a
relatively straightforward but tedious task. It is, however, possible to
automate this task by expressing what is essentially a reversible SAT solver
as follows. In the usual setting, an instance of SAT is a function~{{f}}
which, when given some boolean inputs, returns {{true}} or {{false}}. The
function returns {{true}} when the inputs satisfy the constraints imposed by
the structure of {{f}} and a solution to the SAT problem is the set of all
inputs on which {{f}} produces {{true}}. The basic idea of our construction
is to use {{trace*}} to annihilate values that fail to satisfy the
constraints represented by the SAT instance {{f}}. (The accompanying code
details the construction of such a solver.)
% In the usual setting, an instance of SAT is a function~{{f}} which,
% when given some boolean inputs, returns {{true}} or {{false}}. The
% function returns {{true}} when the inputs satisfy the constraints
% imposed by the structure of {{f}} and a solution to the SAT problem is
% the set of all inputs on which {{f}} produces {{true}}. The basic idea
% of our construction is to generalize the annihilation circuit from
% Ex.~\ref{ch3:ex;annihilation} to only annihilate values that fail to
% satisfy the constraints represented by the SAT instance {{f}}. To
% achieve this goal, we must however deal with several important
% details.
% First, because we are in a reversible world, our instance of SAT must be
% expressed as an isomorphism: this is easily achieved by the construction
% which embeds any boolean function {{f}} into a reversible one {{iso_f}} with
% a larger domain and range. Given such a reversible function {{iso_f}} which
% represents a SAT instance, we first construct the circuit below:
% \begin{center}
% \scalebox{1.2}{
% \includegraphics{diagrams/sat2.pdf}
% }
% \end{center}
% As shown in the circuit, the reversible SAT instance {{iso_f}} takes
% two sets of values and produces two outputs. The incoming values
% labeled \textsf{inputs} are the inputs we need to test for
% satisfiability. The other incoming values labeled \textsf{heap} are
% the additional inputs needed to embed the original SAT instance {{f}}
% into a reversible function. If these \textsf{heap} values are all
% initialized to {{false}}, the output wire \textsf{satisfied?}
% corresponds to the output that {{f}} would have produced on
% \textsf{inputs}. The other outputs labeled \textsf{garbage} are not
% needed for their own sake but they are important because they are used
% as inputs to the adjoint of {{iso_f}} to reproduce the inputs exactly,
% in anticipation of closing the loop with {{trace*}}.
% To summarize, the top half of the circuit is the identity function
% except that we have also managed to produce a boolean wire labeled
% \textsf{satisfied?} that tells us if the inputs satisfy the desired
% constraints. We can take this boolean value and use it to decide
% whether to negate the bottom wire (labeled \textsf{control
% wire}). Specifically, if the inputs do \emph{not} satisfy {{f}}, the
% control wire is negated. The last wire labeled \textsf{heap control
% wire} is negated if the heap values do not have the right initial
% values, i.e., are not all {{false}}.
% Let us call the above construction {{sat_f}}. If we now close the loop
% using {{trace*}}, two things should happen:
% \begin{itemize}
% \item configurations in which the \textsf{heap} values are not all
% {{false}} will be annihilated;
% \item configurations in which the \textsf{inputs} do not satisfy {{f}}
% will cause the \textsf{satisfied?} wire to be negated and hence will
% also be annihilated.
% \end{itemize}
% In other words, the only configurations that will survive are the ones in
% which the \textsf{inputs} satisfy {{f}}. We simply need to arrange to
% \emph{clone} these values and produce them as the output of the whole
% circuit. The final construction is therefore:
% \begin{center}
% \scalebox{1.5}{
% \includegraphics{diagrams/sat3.pdf}
% }
% \end{center}
% To make the previous discussion concrete, we present a small, but
% complete, example. In our example, the SAT instance {{f}} is tiny: it
% takes two inputs. This function is embedded into a reversible function
% {{iso_f}} of type
% {{((bool * bool) * bool) <-> ((bool * bool) * bool)}} where the last
% input represents the heap and the first two outputs represent the garbage.
% The realization of {{sat_f}} given below is parametrized by such
% a function {{iso_f}}. The inputs to {{sat_f}} are
% \textsf{heap control}, \textsf{control}, \textsf{heap}, \textsf{input-1}, and
% \textsf{input-2}. Its operation is simple: if the \textsf{heap} is {{true}},
% \textsf{heap control} is negated, and if the last output of {{iso_f}}
% is {{false}}, \textsf{control} is negated:
% %subcode{opsem} include main
% %! columnStyle = rcl
% % sat_f &:& ((((bool * bool) * bool) * bool) * bool) <-> ((((bool * bool) * bool) * bool) * bool)
% % sat_f &=& ((swap* (*) id) (*) id) (;)
% % && ((assocl* (*) id) (*) id) (;)
% % && (((cnot (*) id) (*) id) (*) id) (;)
% % && assocr* (;)
% % && (assocr* (*) id) (;)
% % && (swap* (*) id) (;)
% % && assocr* (;)
% % && (id (*) assocl*) (;)
% % && (id (*) isof) (;)
% % && swap* (;)
% % && assocr* (;)
% % && (id (*) (id (*) swap*)) (;)
% % && (id (*) assocl*) (;)
% % && (id (*) ((inot (*) id) (*) id)) (;)
% % && (id (*) (cnot (*) id)) (;)
% % && (id (*) ((inot (*) id) (*) id)) (;)
% % && (id (*) assocr*) (;)
% % && (id (*) (id (*) swap*)) (;)
% % && assocl* (;)
% % && swap* (;)
% % && (id (*) Sym isof) (;)
% % && (id (*) assocr*) (;)
% % && assocl* (;)
% % && assocl*
% Given the construction of {{sat_f}} we can build the full solver as
% follows. The overall input is the cloning heap. The combinator given
% to {{trace*}} takes the cloning heap and the inputs flowing around the
% loop and produces two copies of these inputs. One copy is produced as
% the overall output and another is fed back around the loop.
% %subcode{opsem} include main
% %! columnStyle = rcl
% % solve_f &:& bool * bool <-> bool * Bool
% % solve_f &=& trace* (
% % && (assocr* (*) id) (;)
% % && assocr* (;)
% % && (id (*) swap*) (;)
% % && assocl* (;)
% % && (assocr* (*) id) (;)
% % && (clone2 (*) id) (;)
% % && swap* (;)
% % && (swap* (*) id) (;)
% % && ((swap* (*) id) (*) id) (;)
% % && assocl* (;)
% % && assocl* (;)
% % && (sat_f (*) id) (;)
% % && (assocr* (*) id) (;)
% % && (swap* (*) id) (;)
% % && ((id (*) swap*) (*) id) (;)
% % && (assocl* (*) id) (;)
% % && ((id (*) swap*) (*) id))
% We can test our {{solve_f}} combinator using several SAT
% instances. Here are two possible instances. The first instance is
% satisfied by {{(false,false)}} and the
% second is satisfied by {{(false,true)}} and {{(true,true)}}.
% %subcode{opsem} include main
% %! columnStyle = rcl
% % iso_{f_1} &:& ((bool * bool) * bool) <-> ((bool * bool) * bool)
% % iso_{f_1} &=& (assocr* (;) swap* (;) toffoli (;) swap* (;) assocl*) (;)
% % && (((swap+ (*) id) (*) id) (;) toffoli (;) ((swap+ (*) id) (*) id)) (;)
% % && (id (*) swap+)
% %
% % iso_{f_2} &:& ((bool * bool) * bool) <-> ((bool * bool) * bool)
% % iso_{f_2} &=& toffoli
% It can indeed be verified using the semantics that the {{solve_f}}
% combinators instantiated with the SAT instances produce the expected
% results.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
\label{sec:cat}
We have introduced the idea of \emph{fractional types} in the context of a
reversible language founded on type isomorphisms and preservation of
information. Values of fractional types represent \emph{negative
information}, a concept which is difficult to introduce in a conventional
language that allows arbitrary creation and deletion of information but which
is much simpler to deal with when the surrounding language infrastructure
guarantees preservation of information. Fractional types and values can be
used to express a simple and elegant notion of higher-order functions: a
function {{b1 :-* b2}} is a first-class value consisting of negative
information drawn from {{b1}} and positive information drawn from {{b2}}.
The interpretation of our language {{PiTF}} in the category of sets and
relations is adequate in the sense that it produces a language in which every
program is a reversible relation and in which relations are first-class
values. This relational model is, however, unsatisfactory for several
reasons:
\begin{itemize}
\item the type {{1/b}} is interpreted in the same way as {{b}}, which gives no
insight into the ``true meaning'' of fractionals;
\item the interpretation is inconsistent with the view of types as algebraic
  structures; for example, {{Pi}} with the empty type is the
  categorification of a semiring, but although fractional types in {{PiTF}}
  syntactically ``look like'' rational numbers, there is no corresponding
  formal connection to the rational numbers;
\item finally, we have lost some delicate structure moving from {{Pi}} to
{{PiTF}} as we can express arbitrary relations between types and not just
\emph{isomorphisms}.
\end{itemize}
For these reasons, it is interesting to consider other possible semantic
interpretations of fractionals. A natural alternative is to use the
``canonical'' compact closed category, that is finite dimensional vector
spaces and linear
maps~\cite{Selinger:2011:FDH:1942319.1942398,hasegawa2008finite}
over fields of characteristic $0$ (or even the category of finite dimensional
Hilbert spaces). Let us fix an arbitrary field $k$ of characteristic $0$.
Then each type {{b}} in {{PiTF}} is interpreted as a finite dimensional
vector space $\mathbf{V}_b$ over the field $k$. In particular:
\begin{itemize}
\item every vector space contains a zero vector which means that the type
{{0}} (if included in {{PiTF}}) would not be the ``empty'' type.
Furthermore all combinators would be
strict as they would have to map the zero vector to itself.
\item the type {{1}} is interpreted as a 1-dimensional vector space and hence
is isomorphic to the underlying field.
\item the fractional type {{1/b}} is interpreted as the dual vector space
to the vector space $\mathbf{V}_b$ representing {{b}} consisting of all
the \emph{linear functionals} on $\mathbf{V}_b$.
\item one can then validate certain desirable properties: {{1/(1/b)}} is
isomorphic to~{{b}}; and {{eps*_b}} corresponds to a bilinear form which
maps a dual vector and a vector to a field element.
\end{itemize}
In such categories, the fractional type is given a non-trivial
interpretation. Indeed, while the space of column vectors is isomorphic to
the space of row vectors, they are nevertheless quite different, being
$(1,0)$ and $(0,1)$ tensors (respectively). In other words, the category
provides a more refined model in which isomorphic negative-information and
positive-information values are not identified.
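For illustration, recall the standard compact closed structure on finite
dimensional vector spaces (a textbook fact, stated here only to fix
intuitions): if $e_1,\ldots,e_n$ is a basis of $\mathbf{V}_b$ with dual basis
$e^1,\ldots,e^n$, then
\[
\eta : 1 \mapsto \sum_i e^i \otimes e_i
\qquad\qquad
\epsilon : f \otimes v \mapsto f(v),
\]
so {{eta*_b}} superposes all matched pairs of negative and positive
information, while {{eps*_b}} evaluates a linear functional on a vector.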
Although this semantics appears to have ``better'' properties than the
relational one, we argue that it is not yet \emph{the} ``perfect''
semantics. By including the zero vector, the language has morphisms that do
not correspond to isomorphisms (in particular it allows partial morphisms by
treating the {{0}} element of the zero-dimensional vector space as a
canonical ``undefined'' value). It is also difficult to reconcile the
interpretation with the view that the types correspond to the (positive)
rational numbers, something we are actively seeking. What would really be a
``perfect'' interpretation is one in which we can only express
{{Pi}}-isomorphisms as first class values and in which the types are
interpreted in a way that is consistent with the rational numbers.
Fortunately, there is significant and promising work on the groupoid
interpretation of type theory~\cite{Hofmann96thegroupoid} and on the
categorification of the rational numbers~\cite{math/9802029} that may well give
us the model we desire. The fundamental idea in both cases is that groupoids
(viewed as sets with explicit isomorphisms as morphisms) naturally have a
\emph{fractional cardinality}. Types would be interpreted as groupoids, and
terms would be (invertible) groupoid actions. The remaining challenge is to
identify those groupoid actions which are proper generalizations of
isomorphisms \emph{and} can be represented as groupoids, so as to obtain a
proper interpretation for a higher-order language. As the category of
groupoids is cartesian closed, this appears eminently feasible.
Another promising approach is the use of dependent types for {{eps*_b}} and
{{eta*_b}}; more precisely, {{eps*_b}} would have type
{{Sigma (v:b) (1/v,v) <-> 1}} where {{(1/v,v)}} here denotes a singleton type.
This extra precision appears to restrict combinators of {{PiTF}} back to
denoting only isomorphisms.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
{\small
\bibliographystyle{splncs03}
\bibliography{cites}
}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\appendix
\section{Categorical Background}
\label{app:cat}
We recall that a \emph{symmetric monoidal category} is a category together
with a bifunctor $\otimes$, a distinguished object $I$, and natural
isomorphisms $\alpha_{A,B,C} : (A \otimes B) \otimes C \rightarrow A \otimes
(B \otimes C)$, $\lambda_A : A \rightarrow I \otimes A$, and $\sigma_{A,B} :
A \otimes B \rightarrow B \otimes A$ subject to standard coherence
conditions~\cite{nla.cat-vn1051288}. Following common practice
(e.g.,~\cite{Selinger:2007:DCC:1229185.1229207}), we write $\rho_A =
\sigma_{I,A} \circ \lambda_A : A \rightarrow A \otimes I$.
A \emph{compact closed category} is a symmetric monoidal category where each
object $A$ is assigned a dual object $A^*$, together with a unit map $\eta_A
: I \rightarrow A^* \otimes A$ and a counit map $\epsilon_A : A \otimes A^*
\rightarrow I$, such that:
\[\begin{array}{rcl}
\lambda_A^{-1} \circ (\epsilon_A \otimes A) \circ \alpha_{A,A^*,A}^{-1} \circ (A \otimes \eta_A) \circ \rho_A &=& \mathit{id}_A \\
\rho_{A^*}^{-1} \circ (A^* \otimes \epsilon_A) \circ \alpha_{A^*,A,A^*} \circ (\eta_A \otimes A^*) \circ \lambda_A &=& \mathit{id}_{A^*}
\end{array}\]
A \emph{dagger category} is a category together with an involutive,
identity-on-objects, contravariant functor $\dagger$. Concretely, this means
that to every morphism $f : A \rightarrow B$, one associates a morphism
$f^{\dagger} : B \rightarrow A$, called the \emph{adjoint} of $f$, such that
for all $f : A \rightarrow B$ and $g : B \rightarrow C$, we have:
\[\begin{array}{rcl}
\mathit{id}^\dagger_A &=& \mathit{id}_A \\
(g \circ f)^\dagger &=& f^\dagger \circ g^\dagger \\
(f^\dagger)^\dagger &=& f
\end{array}\]
A \emph{dagger symmetric monoidal category} is a symmetric monoidal category
with a dagger structure such that the contravariant functor $\dagger$
coherently preserves the symmetric monoidal structure. Concretely, this
requirement means that for all $f : A \rightarrow B$ and $g : C \rightarrow
D$, we have:
\[\begin{array}{rcl}
(f \otimes g)^\dagger &=& f^\dagger \otimes g^\dagger \\
\alpha^\dagger_{A,B,C} &=& \alpha^{-1}_{A,B,C} \\
\lambda^\dagger_A &=& \lambda^{-1}_A \\
\sigma^\dagger_{A,B} &=& \sigma^{-1}_{A,B}
\end{array}\]
\begin{definition}[Dagger Compact Closed Category]
\label{def:cat}
A \emph{dagger compact closed category} is a dagger symmetric monoidal
category that is also compact closed and such that for all $A$:
\[
\eta_A = \sigma_{A,A^*} \circ \epsilon^\dagger_A
\]
\end{definition}
\todo{biproducts}
\begin{verbatim}
homotopy equivalence???
axioms for field or meadow; actually we probably need a semifield
definition of logical reversibility with relations
definition of information preservation in the case of relations; fanout as an
example which we can write
explain in detail the size of 2 * 1/2; it should have exactly one element;
there are two elements in 2 but the 1/2 identifies them somehow; be precise
we can get the empty relation (eta ; id * swap; eps) what does that mean?
connection to vector spaces; projectors; inner product. Given dual vectors,
we can have a standalone |0><0| which is a piece of a function or a projector
if viewed by itself; we can also have |v><v'| which produces the scalars YES
or NO. We have an isomorphism between matrices and vectors by viewing
|0><0|+|1><1| as |00>+|11>
Perhaps if restrict the ``top-level'' to non-fractional types, we can never
observe a function built from from fractionals; we must apply it and in that
case, the system overall behaves like Pi. (no empty relation for example)???
Idea suggested by Zach is to change eps to (1/a) * a * a <-> a
The code implementing the 16 relations... all 16 relations have the same
information content as () ??? So we should treat (1/bool * bool) as opaque
things; we can only extract information from them by applying them.
Include dneg, name (inv cnot) as example of programs which when fed
non-fractional values produce fractional values
Also include recipT which shows that 1/ distributes over products and tens
which shows that we can manipulate fractional types in interesting ways
Using relational semantics is good because it's clear; but it's also bad
unless we spend time to explain the operational model
Explain 'name' in detail; it IS a faithful representation of the input
relation as a value; we don't get to see the full relation though; we
interact with it by giving it inputs and observing the related output come
out. If we make two copies and use each in a different context (applying each
to a different value, we can ``see'' that it's really a full relation and not
just one branch.)
total functions have size 1; partial functions have fractional sizes!!!
denotation of type in Pi is a set; in Pi/ we get a set with an equivalence
relation (Id by default) but fractionals introduce first-class equivalence
relations that can be used to restrict the sets.
---
Having re-read Baez-Dolan, I now feel better.
On 13-02-08 08:53 AM, Amr Sabry wrote:
> I think all the observations in this email are due to the fact that we
> haven't formalized the size of fractionals using homotopy equivalence. I
> started doing this but abandoned it for now but the bottom line is that
> the size of b * 1/b is 1.
Again, agree - more deeply so now. In more detail, my current thinking:
- v \in b has size 1. So |b| counts how many elements it has, whenever
b is a type of Pi. The sum and product rules apply.
- for b * 1/b to have size 1, then 1/b must have size 1/|b| (assuming
these are independent types, which we have been assuming all along)
Attempt #1
- there are |b| elements in 1/b, and if size is additive, ( 1/|b| =
sum_{1/v \in 1/b} |1/v| ) should hold; since |1/v| should be constant,
this sum is |b|*c, so c = 1/|b|^2. Weird.
Let us look at a single 'function' in bool -o bool, namely the identity
function. It should be 'the same' as { (1/true, true), (1/false, false)
}. The cardinality of that set is (1/(2^2)*1 + 1/(2^2)*1) = 1/4 + 1/4 =
1/2. From weird to probably-wrong.
Attempt #2
- the elements of 1/b are not distinguishable from each other (i.e. they
are all isomorphic). So each element 1/v has b automorphisms, so |1/b|
= sum_{aut classes of 1/v \in 1/b} 1/|aut 1/v| = sum_{singleton} 1/|b| =
1/|b|. [as per p.14 of Baez-Dolan]
The size of the set { (1/true, true), (1/false, false) } is now 1/2*1 +
1/2*1 = 1. So this 'set' indeed represents a single "concept".
Note that in attempt #2, we get a new phenomenon. Consider the
(partial) relation {(1/true, false)}, also of type bool -o bool. It has
size 1/2 ! So, if I have not erred somewhere, only total functions have
size 1. Which, I must admit, I rather like. It's not that partial
functions are outlawed, they just don't pull as much weight [pun intended].
So 1/b is very much like a collection of |b| constraints, all of which
are "the same" (externally). While they may have internal structure, it
is not visible to any of our combinators, so up to iso, there is only a
single element of 1/b, of size 1/|b|.
Yet another way to look at it: let's assume we are looking at FinSet_0,
where in fact our universe U has finitely (say N) things in it. Then
the singleton set 1 is actually the isomorphism class of {u_1, ...,
u_N}. There are N of them, but they have N automorphisms in a single
orbit, so the quotient has size 1 (i.e. |S| = N but |G| = N too, so |S
// G| = 1 ).
So an element of 1/b is closer to an action of b on b; we call it a
constraint, or a pattern-match. I think thinking of it as 'actions' is
definitely the right way to go, as we will be able to have different
kinds of actions on a type b that just those that come from the elements.
> I don't know how much of a priority this is but if it is we should try
> to work it out in detail. The good and bad news is that we will have a
> model much richer and much more complicated than relations. My thought
> when writing sec. 3 was that we would use the simple but slightly
> inadequate model of relations (which for one thing confuses v and 1/v)
> and simply mention that the right abstraction is that of 'biproduct
> dagger compact closed categories' which presumably would include the
> richer model.
I am as convinced that the right model is 'biproduct dagger compact
closed categories' as I am that the right model for the type level is
'categorified field'.
\end{verbatim}
\begin{comment}
Each type in {{PiTF}} is interpreted as a \emph{groupoid}. A groupoid can be
equivalently viewed as an oriented graph, a generalization of a group, or a
special category. We present the categorical view below.
A groupoid is defined by:
\begin{itemize}
\item a set of objects $x,y,\ldots$;
\item for each pair of objects, a (possibly empty) set $G(x,y)$ of morphisms
$x \rightarrow y$;
\item for each object $x$, there exists an identity morphism {{id_x in G(x,x)}};
\item for each triple of objects $x,y,z$, we have a function
{{comp_{x,y,z} : G(x,y) * G(y,z) -> G(x,z)}}
\item there exists a function {{inv_{x,y} : G(x,y) -> G(y,x)}}
\end{itemize}
such that for all {{f : x -> y}}, {{g : y -> z}}, and {{h : z -> w}}, we have:
%subcode{bnf} include main
% comp_{x,x,y}(id_x,f) &=& f \\
% comp_{x,y,y}(f,id_y) &=& f \\
% comp_{x,y,z}(f, comp_{y,z,w}(g,h)) &=& comp_{x,z,w}(comp_{x,y,z}(f,g),h) \\
% comp_{y,x,y}(inv_{x,y}(f),f) &=& id_y \\
% comp_{x,y,x}(f,inv_{x,y}(f)) &=& id_x
\noindent We now proceed to explain how each {{PiTF}} type denotes a groupoid. First,
each {{Pi}} type denotes a set as follows.
\begin{definition}[Denotation of {{Pi}} Value Types {{ [[ b ]] }}]
\label{chx:def:denot}
Each {{Pi}}type denotes a finite set of values as follows:
%subcode{opsem} include main
%! <- = \leftarrow
%! union = \cup
% [[1]] '= {[ () ]}
% [[b1 + b2]] '= {[ left v ~|~ v <- [[b1]] ]} union {[ right v ~|~ v <- [[b2]] ]}
% [[b1 * b2]] '= {[ (v1, v2) ~|~ v1 <- [[b1]], v2 <- [[b2]] ]}
\end{definition}
For each of the types {{b}} above, the corresponding groupoid consists of the
set {{ [[b]] }} of elements with only the trivial identity morphisms. The
interesting groupoid is the one associated with {{1/b}}. It is defined as
follows.
Fractional types and values add considerable expressiveness to our
language.
\end{comment}
If it were not for the cases of {{eta*}} and {{eps*}}, each combinator
{{c : b1 <-> b2}} would define a total one-to-one function between the sets
{{ [[ b1 ]] }} and {{ [[ b2 ]] }}.
This is indeed the situation for the language {{Pi}}
defined in the previous section. In the presence of fractionals, this simple
interpretation of combinators as one-to-one functions is not sufficient%
\footnote{The interpretation of combinators as one-to-one functions is also
not valid in the presence of recursive
types~\cite{rc2011,James:2012:IE:2103656.2103667}.}.
The intuitive reason is that the fission point combinator
{{eta*_b : 1 <-> 1/b * b}} must be
able to map the element {{() : 1}} to any pair of the form {{(1/v,v)}} with
{{v:b}}. Similarly, the combinator {{eps*_b : 1/b * b <-> 1}} must be able to
accept pairs {{(1/v1,v2)}} in which the negative information {{1/v1}} does
not match the positive information {{v2}}. On such pairs, {{eps*_b}} is
undefined, i.e., it does not produce {{():1}}.
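For a concrete illustration, write {{bool}} for the two-element type used
informally above, with values {{true}} and {{false}}: the combinator
{{eta*_bool : 1 <-> 1/bool * bool}} may map {{()}} to either of the pairs
{{(1/true, true)}} or {{(1/false, false)}}, whereas
{{eps*_bool : 1/bool * bool <-> 1}} produces {{()}} on such matching pairs but
is undefined on a mismatched pair like {{(1/true, false)}}.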
We discuss two generalizations of the semantics. The first is based on vector
spaces; it is more refined because it does not collapse {{1/b}} to {{b}}, and
since each vector space must have a zero vector, we obtain an explicit notion
of ``error'' which we can use and track at run time. Adding this error should
eliminate, at run time, relations which are not isomorphisms. A potentially
better and richer semantics is based on groupoids: there we can have sets
(with structure) whose cardinality is fractional, which would formalize nicely
the negative information.
| {
"alphanum_fraction": 0.66812772,
"avg_line_length": 44.1218169305,
"ext": "tex",
"hexsha": "6896e45aa1b6c8b3ace2512b96432bd323c88ce9",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2019-09-10T09:47:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-05-29T01:56:33.000Z",
"max_forks_repo_head_hexsha": "003835484facfde0b770bc2b3d781b42b76184c1",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "JacquesCarette/pi-dual",
"max_forks_repo_path": "rc13/frac.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "003835484facfde0b770bc2b3d781b42b76184c1",
"max_issues_repo_issues_event_max_datetime": "2021-10-29T20:41:23.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-06-07T16:27:41.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "JacquesCarette/pi-dual",
"max_issues_repo_path": "rc13/frac.tex",
"max_line_length": 137,
"max_stars_count": 14,
"max_stars_repo_head_hexsha": "003835484facfde0b770bc2b3d781b42b76184c1",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "JacquesCarette/pi-dual",
"max_stars_repo_path": "rc13/frac.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-05T01:07:57.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-08-18T21:40:15.000Z",
"num_tokens": 18054,
"size": 64109
} |
\documentclass[10pt,a4paper]{article}
%\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{placeins}
\usepackage{booktabs}
\usepackage{caption}
\captionsetup[figure]{font=normal}
%\includeonly{oppgave2}
\makeindex
\addtolength{\oddsidemargin}{-.875in}
\addtolength{\evensidemargin}{-.875in}
\addtolength{\textwidth}{1.75in}
\addtolength{\topmargin}{-.875in}
%\addtolength{\textheight}{1.75in}
%\lstset{inputpath=#1]
%\lstset{inputpath=source}
\graphicspath{{./source/}}
\begin{document}
\title{Project 4 FYS4150 computational physics}
\author{Lars Johan Brodtkorb}
\maketitle
\tableofcontents
\newpage
\include{1}
%\subsection{Program code project 2}
%\label{sec:programcode_1}
%
%\lstinputlisting[language=Python]{"C:/Users/LJB/workspace/My_projects/FYS4150/FYS4150/Project_2/source/project_2.py"}
\end{document} | {
"alphanum_fraction": 0.7764830508,
"avg_line_length": 20.085106383,
"ext": "tex",
"hexsha": "bb796c16282486b27ac73d62fcbedca950193b4c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "95ac4e09b5aad133b29c9aabb5be1302abdd8e65",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "larsjbro/FYS4150",
"max_forks_repo_path": "project_4/project4.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "95ac4e09b5aad133b29c9aabb5be1302abdd8e65",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "larsjbro/FYS4150",
"max_issues_repo_path": "project_4/project4.tex",
"max_line_length": 118,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "95ac4e09b5aad133b29c9aabb5be1302abdd8e65",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "larsjbro/FYS4150",
"max_stars_repo_path": "project_4/project4.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 315,
"size": 944
} |
\subsection{Related Work} \label{sec:related_work}
As highlighted in the previous section, the installation, deployment and
configuration of HPC clusters can be tedious work. Consequently, a variety of
open-source and proprietary cluster management tools have been created that
provide different approaches to help address this problem. In the open-source
arena, two widely known solutions have been Rocks and OSCAR. Rocks
\cite{rocks2003,rocks_url} is an open-source Linux-based clustering solution
that aims to reduce the complexity of building HPC clusters. It provides a
combined bundle of an underlying CentOS distribution with additional software
components covering the different administrative needs of a cluster,
and offers a GUI to help administrators walk through
different steps of the installation procedure. All cluster services and tools
are installed and configured during the initial installation of the front-end
with no need to download and %manual
configure other external
packages. Furthermore, with the use of Rolls \cite{rolls2004}, the
administrator can customize the base installation
%of packages
with additional
optional software that integrates seamlessly
%and automatically
into the management and
packaging mechanisms of the base software.
%It makes use of Kickstart and RPM to manage the distribution of node file
%system and provides component based configuration method to realize module
%reuse.
Rolls
%appliances
exist to deploy Rocks in alternative environments
%large-scale environments different
%than HPC
such as sensor networks \cite{rolls_sensors2012} and clouds \cite{rolls_cloud2011}.
The most recent version of Rocks, version 6.2, was released in May 2015 and is
based on CentOS~(v6.6).
OSCAR \cite{oscar2001,oscar_url} (Open Source Cluster Application Resources) is
a fully integrated and easy to install software bundle designed for high
performance cluster computing.
%The fundamental function of OSCAR is to build
%and maintain clusters.
OSCAR follows a different methodology than Rocks; once
the front-end is installed and booted, the cluster building components are
downloaded and installed through tools that try to simplify the complexity of
the different administrative tasks. There have been some variations of OSCAR
based on the same framework to cover different types of cluster environments
such as Thin-Oscar for diskless platforms, and HA-Oscar to support High
Availability. OSCAR has previously been supported on multiple Linux distributions such as
CentOS and Debian. However,
%even if OSCAR and its variations are still being
%used in production,
the project is no longer actively maintained and the latest
version 6.1.1 was released in 2011.
%OSCAR also provides a GUI for installation and configuration purposes. OSCAR
%is based upon a virtual image of the target machine using System Installation
%Suite (SIS). There is also an OSCAR database (ODA) that stores cluster
%information, a parallel distributed “shell” tool set called C3 and an
%environment management facility called Env-Switcher.
A common issue that arises in the design of system management toolkits is the
inevitable trade-off between ease-of-use and customization flexibility. Rocks
has adopted a more turn-key approach, which necessitates some level of embedded
configuration management; for this, Rocks leverages a hierarchical XML
schema. As will be discussed further in \S\ref{sec:repo_enable}, OpenHPC is
providing an HPC focused software repository and adopts a more building-block
approach. Consequently, it expects a certain level of expertise from the
administrator, but is intended to offer a greater choice of software
components, promote flexibility for use in a variety of system environments
and scales, be compatible with multiple Linux distributions, and be
interoperable with standalone configuration management systems.
%It is currently focused only on HPC environments.
Furthermore, as
a community effort, OpenHPC is supported and maintained by a group of vendors,
research centers and laboratories that share common goals of minimizing
duplicated effort and sharing of best practices.
Several end-user oriented projects also exist to mitigate the complexity of HPC
and scientific software management including EasyBuild~\cite{easybuild2012} and
Spack~\cite{spack2015}. Both of these systems provide convenient methods for
building and installing many common HPC software packages. With similar goals,
OpenHPC differs in scope and process. We aim to provide a complete cluster
software stack capable of provisioning and administering a system in addition
to user-space development libraries. OpenHPC also seeks to leverage standard
Linux tools and practices to install and maintain software. Both EasyBuild and
Spack are currently packaged in the OpenHPC distribution to allow users to
further extend and customize their environment.
| {
"alphanum_fraction": 0.8266721044,
"avg_line_length": 56.367816092,
"ext": "tex",
"hexsha": "2336cadbfb34d00af52859dba421bc22b5513cd2",
"lang": "TeX",
"max_forks_count": 224,
"max_forks_repo_forks_event_max_datetime": "2022-03-30T00:57:48.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-11-12T21:17:03.000Z",
"max_forks_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "utdsimmons/ohpc",
"max_forks_repo_path": "docs/papers/HPCSYSPROS/related_work.tex",
"max_issues_count": 1096,
"max_issues_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T21:48:41.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-11-12T09:08:22.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "utdsimmons/ohpc",
"max_issues_repo_path": "docs/papers/HPCSYSPROS/related_work.tex",
"max_line_length": 89,
"max_stars_count": 692,
"max_stars_repo_head_hexsha": "70dc728926a835ba049ddd3f4627ef08db7c95a0",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "utdsimmons/ohpc",
"max_stars_repo_path": "docs/papers/HPCSYSPROS/related_work.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T03:45:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-11-12T13:56:43.000Z",
"num_tokens": 1016,
"size": 4904
} |
% chapters/dp.tex
\chapter{Dynamic Programming} \label{chapter:dp}
\input{algs/dp/binom-recursive}
\begin{figure}[h]
\includegraphics[width = 0.75\textwidth]{figs/binom-4-2}
\caption{Calculate $\binom{4}{2}$ recursively.}
\label{fig:binom-recursive}
\end{figure}
\begin{figure}[h]
\includegraphics[width = 0.65\textwidth]{figs/pascal}
\caption{Pascal triangle for binomial coefficients.}
\label{fig:pascal}
\end{figure}
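The Pascal triangle in Figure~\ref{fig:pascal} is built row by row from
Pascal's rule together with the boundary cases (stated here for reference):
\[
\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k},
\qquad
\binom{n}{0} = \binom{n}{n} = 1 .
\]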
\input{algs/dp/binom-dp}
\[
(n-k+1) + (k) + k (n-k) = nk - k^2 + n + 1
\]
\input{algs/dp/max-subarray-origin}
\input{algs/dp/max-subarray}
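For reference, a standard dynamic-programming recurrence for the
maximum-subarray problem (independent of the particular listings included
above): let $M_i$ denote the maximum sum of a subarray of $a_1, \ldots, a_n$
that ends at position $i$. Then
\[
M_1 = a_1, \qquad M_i = \max(a_i,\; M_{i-1} + a_i) \quad (i > 1),
\]
and the maximum subarray sum is $\max_{1 \le i \le n} M_i$.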
| {
"alphanum_fraction": 0.6879310345,
"avg_line_length": 21.4814814815,
"ext": "tex",
"hexsha": "705059a688c88e627c88f57187ba67f09a913c1c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "hengxin/algorithms-pseudocode",
"max_forks_repo_path": "chapters/dp.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "hengxin/algorithms-pseudocode",
"max_issues_repo_path": "chapters/dp.tex",
"max_line_length": 58,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "5c8265b6368f851337ca9c0dd1476c07b6e29f83",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "hengxin/algorithms-pseudocode",
"max_stars_repo_path": "chapters/dp.tex",
"max_stars_repo_stars_event_max_datetime": "2019-10-27T13:01:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-11-06T08:52:25.000Z",
"num_tokens": 216,
"size": 580
} |
\documentclass{uhthesis}
% \usepackage{showframe}
% \usepackage[T1]{fontenc}
% \usepackage{verdana}
% Loaded packages
\title{Title of the dissertation that happens to be very long and will definitely need two lines to be typeset properly}
\author{Author Name}
\date{\today}
\promotor{Promotor Name, Another Name}
\mentor{Mentor Name, Final Name}
\degreename{Autodidact degree of LaTeX}
\degreetype{Fictional thesis}
\department{Department of learning stuff}
\acyear{2021 - 2022}
\begin{document}
\maketitle
\tableofcontents
\newpage
\section{Abstract}
\section{Introduction}
Important statement and citation as an example \cite{totesimportant}
\section{Review}
\subsection{Topic 1}
\subsection{Topic 2}
\section{Research questions}
\section{Methods}
\section{Results}
\subsection{Experiment 1}
\subsection{Experiment 2}
\begin{appendices}
\section{Appendices can be added here as sections}
\end{appendices}
\printbibliography
\end{document} | {
"alphanum_fraction": 0.7711340206,
"avg_line_length": 22.0454545455,
"ext": "tex",
"hexsha": "c44afdd17ea7f8e357aa0c35c0be00cc63f40791",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "755f5a0ea6c6d75aaaecc7f589087ee7f31507cd",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Allyson-Robert/UHasselt_Master_Bachelor_Thesis_LaTeX_Template",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "755f5a0ea6c6d75aaaecc7f589087ee7f31507cd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Allyson-Robert/UHasselt_Master_Bachelor_Thesis_LaTeX_Template",
"max_issues_repo_path": "main.tex",
"max_line_length": 120,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "755f5a0ea6c6d75aaaecc7f589087ee7f31507cd",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Allyson-Robert/UHasselt_Master_Bachelor_Thesis_LaTeX_Template",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 269,
"size": 970
} |
\documentclass{article}
\usepackage{fancyhdr}
\usepackage{extramarks}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{tikz}
\usepackage[plain]{algorithm}
\usepackage{algpseudocode}
\usepackage{hyperref}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{minted}
\usetikzlibrary{automata,positioning}
%
% Basic Document Settings
%
% Custom sectioning
\usepackage{sectsty}
\allsectionsfont{\centering \normalfont\scshape}
\captionsetup{justification=centering,
singlelinecheck=false
}
\graphicspath{ {required/} }
\topmargin=-0.45in
\evensidemargin=0in
\oddsidemargin=0in
\textwidth=6.5in
\textheight=9.0in
\headsep=0.25in
\linespread{1.1}
\pagestyle{fancy}
\lhead{\hmwkAuthorName}
\chead{\hmwkClass}
\rhead{\hmwkTitle}
\lfoot{\lastxmark}
\cfoot{\thepage}
\renewcommand\headrulewidth{0.4pt}
\renewcommand\footrulewidth{0.4pt}
\setlength\parindent{0pt}
%
% Create Problem Sections
%
\newcommand{\enterProblemHeader}[1]{
\nobreak\extramarks{}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
}
\newcommand{\exitProblemHeader}[1]{
\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
\stepcounter{#1}
\nobreak\extramarks{Problem \arabic{#1}}{}\nobreak{}
}
\setcounter{secnumdepth}{0}
\newcounter{partCounter}
\newcounter{homeworkProblemCounter}
\setcounter{homeworkProblemCounter}{1}
\nobreak\extramarks{Problem \arabic{homeworkProblemCounter}}{}\nobreak{}
%
% Homework Problem Environment
%
% This environment takes an optional argument. When given, it will adjust the
% problem counter. This is useful for when the problems given for your
% assignment aren't sequential. See the last 3 problems of this template for an
% example.
%
\newenvironment{homeworkProblem}[1][-1]{
\ifnum#1>0
\setcounter{homeworkProblemCounter}{#1}
\fi
\section{Problem \arabic{homeworkProblemCounter}}
\setcounter{partCounter}{1}
\enterProblemHeader{homeworkProblemCounter}
}{
\exitProblemHeader{homeworkProblemCounter}
}
%
% Homework Details
% - Title
% - Due date
% - Class
% - Section/Time
% - Instructor
% - Author
%
\newcommand{\hmwkTitle}{Homework\ 1}
\newcommand{\hmwkDueDate}{January 23, 2015}
\newcommand{\hmwkClass}{Information Integration on the Web}
\newcommand{\hmwkClassInstructor}{Professor Ambite \& Knoblock}
\newcommand{\hmwkAuthorName}{Tushar Tiwari}
%
% Title Page
%
\title{
\vspace{2in}
\textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
\normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate}\\
\vspace{0.1in}\large{\textit{\hmwkClassInstructor}}
\vspace{3in}
}
\author{\textbf{\hmwkAuthorName}}
\date{}
\renewcommand{\part}[1]{\textbf{\large Part \Alph{partCounter}}\stepcounter{partCounter}\\}
%
% Various Helper Commands
%
% Useful for algorithms
\newcommand{\alg}[1]{\textsc{\bfseries \footnotesize #1}}
% For derivatives
\newcommand{\deriv}[1]{\frac{\mathrm{d}}{\mathrm{d}x} (#1)}
% For partial derivatives
\newcommand{\pderiv}[2]{\frac{\partial}{\partial #1} (#2)}
% Integral dx
\newcommand{\dx}{\mathrm{d}x}
% Alias for the Solution section header
\newcommand{\solution}{\textbf{\large Solution}}
% Probability commands: Expectation, Variance, Covariance, Bias
\newcommand{\E}{\mathrm{E}}
\newcommand{\Var}{\mathrm{Var}}
\newcommand{\Cov}{\mathrm{Cov}}
\newcommand{\Bias}{\mathrm{Bias}}
\begin{document}
\maketitle
\pagebreak
\begin{flushleft}
\section{Statement of Integrity}
I, Tushar Tiwari, declare that the submitted work is original and adheres to all University policies and acknowledge the consequences that may result from a violation of those rules.
\section{Screenshots}
\begin{figure}[h]
\centering
\fbox{\includegraphics[width=0.6\textwidth]{search}}
\urldef\myurl\url{http://collections.lacma.org/search/site/?page=0&f[0]=bm_field_has_image%3Atrue&f[1]=im_field_curatorial_area%3A32&f[2]=im_field_classification%3A22}
\caption{The search page with the option American Paintings already chosen. \\ This provides us with a custom url that can be exploited to navigate and then extract all paintings. \\ \myurl}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{american-painting}}
\caption{An american painting}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{european-painting}}
\caption{A european painting}
\end{subfigure}
\caption{Examples of the paintings on the website}
\end{figure}
\newpage
\section{Sample Record}
Let's look at a sample painting and the output that it produces. For purposes of demonstration the link to this sample painting is hard-coded into the program so that the output produced is only for the painting in consideration.
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=0.98\textwidth]{184481}}
\urldef\myurl\url{http://collections.lacma.org/node/184481}
\caption{The chosen painting found at \myurl.}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{hardcoding}}
\caption{Hardcoding the url into the code}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{make}}
\caption{Execution of make command}
\end{subfigure}
\caption{Sample Run}
\end{figure}
\textbf{Note:}
\begin{itemize}
\item The url for the painting is hardcoded only for purposes of demonstration. The usual run will automatically extract image urls from the search page.
\item Not all paintings will have all data fields associated with them, since the data is unavailable for those paintings. These missing fields have been ignored, as they serve no purpose except increasing the file size, which is not desirable.
\end{itemize}
\newpage
\begin{figure}[H]
\centering
\inputminted[frame=single,
framesep=3mm,
linenos=true,
xleftmargin=21pt,
tabsize=4]{js} {required/output.json}
\caption{Output JSON of sample run}
\end{figure}
\newpage
\section{Challenges}
\begin{itemize}
\item Unicode characters are a problem in Python 2, since the default string supports only ASCII characters. Due to this, certain string operations become very complicated. The problem was overcome by porting the code to Python 3, where the default string supports Unicode.
\item Extracting the artist's details required complex regular expressions, since all information pertaining to the artist (name, date of birth, place of birth, and so on) was provided in a single string in the HTML; a simplified illustration is given below.
\end{itemize}
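For illustration, a simplified sketch of such a pattern is shown below. The input format in the comment is hypothetical and only stands in for the combined artist string; the actual expression used by the scraper handles more variations.
\begin{minted}[frame=single]{python}
import re

# Hypothetical combined artist string, e.g. "Jane Doe (United States, 1882 - 1925)"
ARTIST_RE = re.compile(
    r"^(?P<name>.+?)\s*\((?P<place>[^,]+),\s*(?P<born>\d{4})\s*-\s*(?P<died>\d{4})?\s*\)"
)

def parse_artist(text):
    """Split a combined artist string into name, birthplace, and life dates."""
    match = ARTIST_RE.match(text)
    return match.groupdict() if match else None
\end{minted}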
\section{Tools Used}
\subsection{Python with BeautifulSoup}
Python is one of the easiest yet most powerful languages available, and it works cross-platform. Installing and distributing dependencies is also quite easy. Beautiful Soup is a Python library designed for quick-turnaround projects like screen-scraping.
It parses anything you give it and does the tree traversal for you. You can tell it ``find all the links'', ``find all the links of class externalLink'', ``find all the links whose URLs match foo.com'', or ``find the table heading that has bold text, then give me that text''. A short snippet illustrating this style follows the list below. Some of the features that make it powerful are:
\begin{itemize}
\item Provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. It doesn't take much code to write an application
\item {Automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't detect one. Then you just have to specify the original encoding.}
\item{Sits on top of popular Python parsers like lxml and html5lib, allowing you to try out different parsing strategies or trade speed for flexibility.}
\end{itemize}
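As a brief illustration of this style of use, here is a minimal sketch; the class name \texttt{externalLink} simply echoes the example above and is not taken from the actual LACMA pages.
\begin{minted}[frame=single]{python}
from bs4 import BeautifulSoup

def extract_links(html):
    """Return the href of every anchor tag in the given HTML document."""
    soup = BeautifulSoup(html, "html.parser")
    return [a.get("href") for a in soup.find_all("a") if a.get("href")]

def external_links(html):
    """Return only the links whose class attribute is 'externalLink'."""
    soup = BeautifulSoup(html, "html.parser")
    return [a.get("href") for a in soup.find_all("a", class_="externalLink")]
\end{minted}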
\section{Fields not Extracted}
\begin{itemize}
\item The size of the painting in inches is not extracted because the same size is also presented in centimeters, which is more accurate.
\item The category of art, i.e. paintings, sculptures, etc., is skipped because we have hardcoded the category to be paintings in our url. Thus, all results fetched are paintings.
\item The museum disclaimers are also overlooked because they are not related to the painting.
\end{itemize}
\end{flushleft}
\end{document} | {
"alphanum_fraction": 0.7645007556,
"avg_line_length": 34.8299595142,
"ext": "tex",
"hexsha": "64ddad46753fe910771681c16b69f038e8f26618",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "290179f9da255c8ab169386674884c9622c60915",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tushart91/study-usc",
"max_forks_repo_path": "Information Integration/Homework/HW1/documentation/main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "290179f9da255c8ab169386674884c9622c60915",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tushart91/study-usc",
"max_issues_repo_path": "Information Integration/Homework/HW1/documentation/main.tex",
"max_line_length": 333,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "290179f9da255c8ab169386674884c9622c60915",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tushart91/study-usc",
"max_stars_repo_path": "Information Integration/Homework/HW1/documentation/main.tex",
"max_stars_repo_stars_event_max_datetime": "2020-04-29T20:33:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-29T20:33:50.000Z",
"num_tokens": 2390,
"size": 8603
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode$for(hyperrefoptions)$,$hyperrefoptions$$endfor$}{hyperref}
\PassOptionsToPackage{hyphens}{url}
$if(colorlinks)$
\PassOptionsToPackage{dvipsnames,svgnames,x11names}{xcolor}
$endif$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Much of the following ~250 lines are adapted from mla-tex package % (fold)
\documentclass[12pt]{article}
%% MLA requires 8.5x11 (letterpaper) and 1in margins on all sides.
\usepackage[letterpaper]{geometry}
\geometry{
top=1.0in,
bottom=1.0in,
left=1.0in,
right=1.0in
}
%% Package fancyhdr allows customizing the headers and footers.
%% Setting the pagestyle is required for the customized
%% headers/footers to be used. \fancyhf{} removes the default contents
%% of the headers and footers, leaving them blank.
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
% https://tex.stackexchange.com/q/528358
\setlength\headheight{15pt}
\usepackage{enumitem}
\setlist[itemize]{noitemsep, topsep=0pt}
%% Set a running header and page number in the upper-right corner.
\rhead{\ifno{headername}{\thepage}{\get{headername}~\thepage}}
%% Remove the horizontal rule that is usually displayed just below the
%% page header.
\renewcommand*{\headrulewidth}{0pt}
%% Set the appropriate font (Tinos or Times New Roman).
% Load New TX if not using OpenType-compatible engine
\iftutex
\usepackage{fontspec}
\setmainfont{Times New Roman}
\else
\RequirePackage[T1]{fontenc}
\RequirePackage{newtxtext}
\fi
%% Use package ragged2e to inhibit justification. Vanilla
%% \raggedright screws up paragraph indents.
\usepackage{ragged2e}
\setlength\RaggedRightParindent\parindent
\RaggedRight
%% MLA requires exactly 0.5in paragraph indents.
\setlength{\parindent}{0.5in}
%% MLA also says that every paragraph should be indented, including
%% the first paragraph of a section.
\usepackage{indentfirst}
%% Make a new version of the {center} environment that doesn't add
%% extra spacing.
\newenvironment{centered}
{\parskip=0pt\centering\begingroup}
{\endgroup\par\ignorespacesafterend}
\newenvironment{titling}
{\par}
{\par}
\newenvironment{pageheader}
{\par}
{\par}
%% Everyone loves double-spacing.
\usepackage{setspace}
\setstretch{2}
% Messy header stuff to follow...
\newcommand*{\newfield}[1]{%
\unset{#1}%
\expandafter\newcommand\csname #1\endcsname[1]{%
\expandafter\def\csname value#1\endcsname{##1}}%
}
\newcommand*{\renewfield}[1]{%
\unset{#1}%
\expandafter\renewcommand\csname #1\endcsname[1]{%
\expandafter\def\csname value#1\endcsname{##1}}%
}
\newcommand*{\get}[1]{\csname value#1\endcsname}
\newcommand{\ifno}[3]{%
\expandafter\ifdefempty\csname value#1\endcsname{#2}{#3}%
}
\newcommand*{\unset}[1]{%
\expandafter\def\csname value#1\endcsname{\textbackslash #1\{?\}}%
}
%% Fields used in header.
\newfield{fullname}
\newfield{secondfullname}
\newfield{lastname}
\newfield{headername}
% \newfield{professor}
% \newfield{class}
% \newfield{postal}
% \newfield{email}
% \newfield{telephone}
% \renewfield{date}
% \renewfield{title}
% %% Default values.
$if(date)$\date{$date$}$else$\date{\today}$endif$
%% Define a general environment for inserting MLA-style headers.
\newenvironment{mlaheader}{
\begingroup%
\rmfamily%
\fontsize{12}{2}%
\setlength{\parindent}{0pt}
}{%
\endgroup%
}
%% And a convenience function for the most common case.
\newcommand*{\makeheader}{%
\begin{mlaheader}
$if(author)$\noindent $author$$endif$
$if(professor)$\par\noindent $professor$$endif$
$if(class)$\par\noindent $class$$endif$
$if(postal)$\par\noindent $postal$$endif$
$if(email)$\par\noindent $email$$endif$
$if(telephone)$\par\noindent $telephone$$endif$
$if(date)$\par\noindent $date$\par$endif$
\end{mlaheader}
$if(title)$\par\mlatitle\par$endif$
}
% This test is useful for setting ODT styles with tex4ht
\makeatletter
\newif\ifhtlatex
\@ifpackageloaded{tex4ht}
{\newcommand*{\mlatitle}{\begin{titling}$title$\end{titling}}}
{\newcommand*{\mlatitle}{\begin{centered}$title$\end{centered}}}
\makeatother
% The \secondtitle is used only when there's a title page
\makeatletter
\newif\ifhtlatex
\@ifpackageloaded{tex4ht}
{\newcommand*{\secondtitle}{\begin{pageheader}$title$\end{pageheader}}}
{\newcommand*{\secondtitle}{\clearpage\begin{centered}$title$\end{centered}}}
\makeatother
% Defines the title page, which technically differs between PDF / ODT
\makeatletter
\newif\ifhtlatex
\@ifpackageloaded{tex4ht}
% for ODT
{
% redefine mlatitle
\newcommand*{\mlatitlespec}{
\begin{titling}
\par\mbox{ }\par\mbox{ }\par\mbox{ }\par\mbox{ }\par\mbox{ }\par
$title$
\par\mbox{ }\par\mbox{ }\par
$if(author)$$author$\par\mbox{ }\par\mbox{ }\par\mbox{ }\par$endif$
$if(class)$\par\noindent $class$$endif$
$if(professor)$\par\noindent $professor$$endif$
$if(postal)$\par\noindent $postal$$endif$
$if(email)$\par\noindent $email$$endif$
$if(telephone)$\par\noindent $telephone$$endif$
$if(date)$\par\noindent $date$$endif$
\end{titling}}
% define mlatitlepage
\newcommand*{\mlatitlepage}{%
\setcounter{page}{0}
\thispagestyle{empty}
\hspace{0pt}
\vfill
\mlatitlespec
\vfill
\hspace{0pt}
\secondtitle
\par
}}
% for PDF
{\newcommand*{\mlatitlepage}{%
\setcounter{page}{0}
\thispagestyle{empty}
\hspace{0pt}
\vfill
\mlatitle \par\mbox{ }\par\mbox{ }\par
\begin{centered}
\ifno{fullname}{}{
\get{fullname} \par\mbox{ }\par\mbox{ }\par
}
$if(class)$\par\noindent $class$$endif$
$if(professor)$\par\noindent $professor$$endif$
$if(postal)$\par\noindent $postal$$endif$
$if(email)$\par\noindent $email$$endif$
$if(telephone)$\par\noindent $telephone$$endif$
$if(date)$\par\noindent $date$$endif$
\end{centered}%
\vfill
\hspace{0pt}
\secondtitle
\par
}}
\makeatother
% Reformatting section headers, etc.
\makeatletter
\renewcommand \thesection{\@arabic\c@section.}
\renewcommand\thesubsection{\thesection\@arabic\c@subsection}
\renewcommand\thesubsection{\thesection\@arabic\c@subsection}
\renewcommand \section{\@startsection%
{section}
{1}
{\z@}%
{-4.5ex \@plus -1ex \@minus -.2ex}%{\z@}%
{\lineskip}%
{\normalfont}}
\renewcommand \subsection{\@startsection%
{subsection}
{2}
{\z@}%
{\z@}%
{\lineskip}%
{\normalfont}}
\renewcommand\subsubsection{\@startsection%
{subsubsection}
{3}
{\z@}%
{\z@}%
{\lineskip}%
{\normalfont}}
\renewcommand \paragraph{\@startsection%
{paragraph}
{4}
{\z@}%
{\z@}%
{\lineskip}%
{\normalfont}}
\renewcommand \subparagraph{\@startsection%
{subparagraph}
{5}
{\parindent}%
{\z@}%
{\lineskip}%
{\normalfont}}
%% Formatting section headings
% \def\section{\@startsection{section}{1}{\z@}{-5.25ex plus -1ex minus
% -.2ex}{1.5ex plus .2ex}{\center}}
% \def\thesection{\arabic{section}.}
\makeatother
% end adapted from mla-tex package % (end)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Needed for figures
\usepackage{graphicx}
% Needed for lists
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
$if(title-meta)$
pdftitle={$title-meta$},
$endif$
$if(author-meta)$
pdfauthor={$author-meta$},
$endif$
$if(lang)$
pdflang={$lang$},
$endif$
$if(subject)$
pdfsubject={$subject$},
$endif$
$if(keywords)$
pdfkeywords={$for(keywords)$$keywords$$sep$, $endfor$},
$endif$
$if(colorlinks)$
colorlinks=true,
linkcolor={$if(linkcolor)$$linkcolor$$else$Maroon$endif$},
filecolor={$if(filecolor)$$filecolor$$else$Maroon$endif$},
citecolor={$if(citecolor)$$citecolor$$else$Blue$endif$},
urlcolor={$if(urlcolor)$$urlcolor$$else$Blue$endif$},
$else$
hidelinks,
$endif$
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[american]{babel}
\usepackage{csquotes}
$if(biblio-style)$\usepackage[$biblio-style$]{biblatex-chicago}$else$\usepackage[notes]{biblatex-chicago}$endif$
\addbibresource{$bibliography$}
$if(lastname)$\headername{$lastname$}$else$\headername{$title$}$endif$
$if(anonymous)$\headername{$title$}$endif$
$if(anonymous)$\renewcommand{\makeheader}{\mlatitlepage}$endif$
$if(author)$
\fullname{$author$}
$endif$
$if(repeatname)$\secondfullname{$author$}$else$\secondfullname{}$endif$
$if(title)$
\title{$title$$if(thanks)$\thanks{$thanks$}$endif$}
$endif$
$if(author)$
\author{$author$}
$endif$
$if(highlighting-macros)$
$highlighting-macros$
$endif$
\makeatletter
\newif\ifhtlatex
\@ifpackageloaded{tex4ht}
{\defbibheading{bibliography}[\bibname]{%
\begin{pageheader}#1\end{pageheader}\addcontentsline{toc}{section}{\bibname}%
}}
{\defbibheading{bibliography}[\bibname]{%
\section*{\centering #1}%
\markboth{#1}{#1}%
\addcontentsline{toc}{section}{\bibname}}}
\makeatother
\makeatletter
\newif\ifhtlatex
\@ifpackageloaded{tex4ht}
% {\newcommand*{\mlaworkscited}{\begin{pageheader}Works Cited\end{pageheader}\addcontentsline{toc}{section}{Works Cited}\printbibliography[heading=none]}}
{\newcommand*{\mlaworkscited}{\printbibliography}}
{\newcommand*{\mlaworkscited}{\clearpage\printbibliography}}
\makeatother
\begin{document}
$if(titlepage)$\mlatitlepage$else$\makeheader$endif$$if(abstract)$\begin{abstract}$abstract$\end{abstract}$endif$$for(include-before)$$include-before$$endfor$
$body$%
% \newpage%
% \printbibliography%
$if(bibliography)$\mlaworkscited$endif$
\end{document} | {
"alphanum_fraction": 0.6938482079,
"avg_line_length": 27.0472222222,
"ext": "tex",
"hexsha": "657c167891c45fef27ace44c6649da89883ad57f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jmclawson/rmd4mla",
"max_forks_repo_path": "inst/rmarkdown/templates/mla/resources/chitemplate.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065",
"max_issues_repo_issues_event_max_datetime": "2022-03-13T15:14:30.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-03-13T15:14:30.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jmclawson/rmd2mla",
"max_issues_repo_path": "inst/rmarkdown/templates/mla/resources/chitemplate.tex",
"max_line_length": 158,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "3250911388b2ed52855b282724e2ed54a3358065",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jmclawson/rmd2mla",
"max_stars_repo_path": "inst/rmarkdown/templates/mla/resources/chitemplate.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3220,
"size": 9737
} |
\chapter{Optimization and Equation System Solving(Solve)}
\section{Usage}
\subsection{Optimization}
\subsubsection{Algorithms}
There are four types of algorithms: unbounded, bounded, linear constrained, and nonlinear constrained. Each algorithm is implemented in an object, and we pass problem functions to these objects. A problem function returns all related information, including the function value, gradients, nonlinear constraint values, etc. To return this information, we use a tuple as the return type.
A list of algorithms:
\begin{enumerate}
\item Unbounded. Heuristic: \cd{GA}, \cd{PSO}.
\item Bounded:
\item Linear Constrained:
\end{enumerate}
\subsubsection{Variable Types}
\begin{enumerate}
\item \textbf{x, y, gradient} are forced to be of \cd{VectorXd} type and \textbf{hessian} is forced to be of \cd{MatrixXd} type. If \textbf{x, y} of the objective function or nonlinear constraints are scalars, we still use a length-one \cd{VectorXd} to store them. One advantage of this is that we do not have to deal with the number of variables separately. Note that \cd{VectorXd} is a \textbf{column} vector.
If \textbf{y} is a vector of length more than one, one can set \cd{SolveOption.type} to either ``least square'' or ``norm''. For ``norm'', we optimize the norm of \textbf{y}.
\item One can pass external data by pointer using templates.
\end{enumerate}
\subsubsection{GA Examples}
\begin{lstlisting}
//Objective function.
double fun1(VectorXd x){
return pow(x[0],2)+pow(x[1],2);
};
GA ga_optimizer("min", 2);
// The following settings are optional; they have default values.
ga_optimizer.set_population();
VectorXd x0(2);   // the vector must be sized before using the comma initializer
x0 << 10, 10;
auto flag=ga_optimizer.solve(fun1, x0);
cout<<x0<<endl;
\end{lstlisting}
\subsubsection{Write An Optimization Problem}
\begin{lstlisting}
// Sketch: derive from the library's ProblemBase and add the problem-specific members.
class prob : public ProblemBase<VectorXd> {
};
\end{lstlisting}
\subsubsection{Constrained Optimization}
\subsection{Nonlinear System}
\begin{lstlisting}
\end{lstlisting}
\section{Common Objects}
\subsection{SolveOption}
\subsection{SolveResult}
\section{Solve optimization problems}
\subsection{Choose A Solver}
If a solver requires the gradient and Hessian but they are not provided, a default difference approximation will be used.
\begin{itemize}
\item \textbf{Special}: \cd{QPSolver}(Constrained QP)
\item \textbf{Unconstrained}:
\item \textbf{Linear inequality constrained}:\cd{LCOBYQASolver}(doesn't use gradient and hessian).
\item \textbf{Nonlinear constrained}:\cd{LSSQPSolver}(Large scale SQP).
\item \textbf{Nonlinear least square}:
\item \textbf{Evolutionary}:\cd{DESolver}.
\item \textbf{Heuristic Method}: \cd{PSOSolver}.
\end{itemize}
\subsection{SolverBase}
\subsection{QPSolve}
\cd{QPSolve} is a basic component of many other solvers. It is not a derived class of \cd{SolverBase}. It solves a problem of the form:
\begin{align}
\min_x \quad & f(x)=\frac{1}{2}x^TGx+x^Tc \\
\text{subject to} \quad & Ax \leq b
\end{align}
Usage:
\begin{lstlisting}
\end{lstlisting}
\section{EQSystem - Solve nonlinear system of equations}
| {
"alphanum_fraction": 0.7604930047,
"avg_line_length": 30.9484536082,
"ext": "tex",
"hexsha": "d370688dbd78d9df44e6d8692208a67a6d3ce907",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "kilasuelika/SciStaLib",
"max_forks_repo_path": "Documentation/Solve.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "kilasuelika/SciStaLib",
"max_issues_repo_path": "Documentation/Solve.tex",
"max_line_length": 384,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "103a75e6a433c2b873abb7ecd4da675028b782db",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "kilasuelika/SciStaLib",
"max_stars_repo_path": "Documentation/Solve.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 830,
"size": 3002
} |
\chapter{Materials and Methods}
\section{Data}
| {
"alphanum_fraction": 0.7872340426,
"avg_line_length": 15.6666666667,
"ext": "tex",
"hexsha": "62759718bcffce83a896feecb90e0518e9f31853",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0f1e4d8bb82f31b340c6fb6c173206d86d94986e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "VMHidalgo/UChileMasterTemplate",
"max_forks_repo_path": "Thesis/Manuscript/BodyMatter/4.Methods.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0f1e4d8bb82f31b340c6fb6c173206d86d94986e",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "VMHidalgo/UChileMasterTemplate",
"max_issues_repo_path": "Thesis/Manuscript/BodyMatter/4.Methods.tex",
"max_line_length": 31,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0f1e4d8bb82f31b340c6fb6c173206d86d94986e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "VMHidalgo/UChileMasterTemplate",
"max_stars_repo_path": "Thesis/Manuscript/BodyMatter/4.Methods.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12,
"size": 47
} |
%-------------------------------------------------------------------------------
% Contexts
%-------------------------------------------------------------------------------
\subsection{Contexts}
Contexts map (DeBruijn) variables to types.
\newcommand{\Ctx}{\ \mathtt{ctx}}
\bigskip
\framebox{$\Gamma\Ctx$}
\bigskip
$$
\begin{array}{cc}
\infer{\cdot\Ctx}{} &
\infer{\Gamma,A\Ctx}{\Gamma\Ctx & \CheckTy[\cdot]{A}{\Type}}
\end{array}
$$
| {
"alphanum_fraction": 0.3567251462,
"avg_line_length": 25.65,
"ext": "tex",
"hexsha": "b264acaf69fc9d88a2d13490fa43726bfcb65a3a",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-03-07T19:33:29.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-05-06T01:32:34.000Z",
"max_forks_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "kryptine/twelf",
"max_forks_repo_path": "src/inverse/tex/old/context.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2",
"max_issues_repo_issues_event_max_datetime": "2021-02-27T22:17:51.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-27T22:17:51.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "kryptine/twelf",
"max_issues_repo_path": "src/inverse/tex/old/context.tex",
"max_line_length": 80,
"max_stars_count": 61,
"max_stars_repo_head_hexsha": "1edad1846921cc962138cd4a5a703d3b1e880af2",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "kryptine/twelf",
"max_stars_repo_path": "src/inverse/tex/old/context.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-25T12:41:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-24T18:10:58.000Z",
"num_tokens": 112,
"size": 513
} |
\documentclass{report}
\usepackage{bibentry}
\nobibliography*
\usepackage[\OPTIONS]{subfiles}
\begin{document}
Here, we are testing two things. First, spreading the files across
different directories and generating the bibliography with
\verb|bibtex| used to be a problem. Let's hope that it is not anymore.
Second, the \verb|bibentry| package defines commands that give an
error when used in the preamble. For this they check whether the
\verb|\document| command has a specific value. In older versions of
\verb|subfiles| they found the wrong value and complained, even though
they were correctly used.
\chapter{First chapter}
\subfile{sub/sub}
\end{document}
| {
"alphanum_fraction": 0.7918552036,
"avg_line_length": 34.8947368421,
"ext": "tex",
"hexsha": "04e48350c419900adb88b3ab2e1616c96e44b8bf",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-02-08T15:57:30.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-09-25T10:43:49.000Z",
"max_forks_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "mrpiggi/subfiles",
"max_forks_repo_path": "tests/bibentry/main.tex",
"max_issues_count": 28,
"max_issues_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_issues_repo_issues_event_max_datetime": "2021-07-14T19:02:27.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-11-05T19:10:32.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "mrpiggi/subfiles",
"max_issues_repo_path": "tests/bibentry/main.tex",
"max_line_length": 70,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "6b9cc4cdc47c73faadc8c847e29c01ac372c4b19",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "mrpiggi/subfiles",
"max_stars_repo_path": "tests/bibentry/main.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-13T17:24:33.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-02-02T05:29:35.000Z",
"num_tokens": 163,
"size": 663
} |
\documentclass[a4paper,11pt,final]{article}
\input{preamble}
\input{abs}
\input{java}
\input{definitions}
\usepackage{amsmath}
\usepackage{amsthm}
\newcommand{\deliverableTitle}{The ABS Foreign Language Interface (ABSFLI)}
\title{\deliverableTitle}
\author{Jan Sch\"{a}fer \and Peter Y. H. Wong}
\begin{document}
\maketitle
\section{Introduction}
This document contains first ideas of how to
connect ABS to foreign languages (FLs).
ABS should be able to interact with FLs like
Java to be able to write critical components of a system
in ABS.
As FLs we mainly consider Java, but further
options are Maude, Scala, and Erlang.
\section{Main questions}
There are essentially two questions to answer:
\begin{enumerate}
\item How to use FLs from ABS
\item How to use ABS from Fls
\end{enumerate}
\PW{We should think in terms of functions/datatypes, imperatives and
concurrency?}
\subsection{Concurrency}
\begin{itemize}
\item \PW{Java object should only invoke ABS method synchronously?}
\item \PW{Does it make sense to asynchronously invoke Java object? Or both
ways must be synchronous?}
\end{itemize}
\section{Possible Solutions}
\subsection{Deep Integration (the Scala way)}
One would do a deep integration of Java and treat Java packages as Modules.
For example:
\begin{absexamplen}
import * from java.lang;
{
Double d = new Double();
}
\end{absexamplen}
\noindent Advantages: Easy use for ABS to access the Java libraries \\
\noindent Disadvantages
\begin{itemize}
\item Tightly coupled to Java
\item Not possible to extend this approach to other languages, e.g. Erlang/C
...
\item Difficult to implement (type-checking)
\end{itemize}
\subsection{Loose Integration (the JNI way)}
Use an approach that is similar to JNI, i.e., define interfaces/methods/classes
as \emph{foreign}, and provide an implementation for these interfaces in the
target language.
\noindent Advantages:
\begin{itemize}
\item loose coupling
\item independent of actual language
\end{itemize}
\noindent Foreign ABS Classes/Functions -- Idea: classes and functions can be
declared to be Foreign, either by using an annotation or by using some keyword.
\begin{absexamplen}
module Test;
def Int random() = foreign;
interface Output {
Unit println();
}
[Foreign]
class OuputImpl implements Output { }
\end{absexamplen}
\section{How to link Java to ABS interfaces/classes/functions}
\subsection{Use Conventions}
For example, by having a special naming scheme, e.g. a Java class with the same
name as the corresponding ABS class. Example:
\noindent ABS:
\begin{absexamplen}
module Foo;
interface Baz { }
[Foreign] class Bar implements Baz { }
\end{absexamplen}
\noindent Java:
\begin{javaexample}
package Foo;
public class Bar {
}
\end{javaexample}
\subsubsection{Discussion}
Using conventions has the disadvantage that it is not very flexible. In
particular, if classes have to be put in the same package as the generated ABS
classes, this could lead to name clashes.
\PW{How to deal with sub-classing? e.g. Should we have ABS interfaces also for
abstract classes?}
\subsection{Use Annotations}
Use annotations on the Java level to connect Java code to ABS.
Example:
\noindent ABS:
\begin{absexamplen}
module Foo;
interface Baz {
Unit print(String s);
}
[Foreign]
class Bar implements Baz { }
\end{absexamplen}
\noindent Java:
\begin{javaexample}
package foo;
@AbsClass("Foo.Bar")
public class Bar {
ABSUnit print(ABSString s2) {
System.out.println(s2.toString());
}
}
\end{javaexample}
\noindent Possible Java Code
\begin{javaexample}
package Test;
import abs.backend.java.afi.*;
// function definitions are put in a
// class named "Def" and must be static
@AbsDef("Test.random")
public static ABSInt random() {
return new Random().nextInt();
}
public class OutputImpl {
public ABSUnit println(ABSString s) {
System.out.println(s.toString());
}
}
\end{javaexample}
\end{document} | {
"alphanum_fraction": 0.7503155769,
"avg_line_length": 22.6342857143,
"ext": "tex",
"hexsha": "1aff6b1f23faa0e1944556fac1557c1fe47ba148",
"lang": "TeX",
"max_forks_count": 33,
"max_forks_repo_forks_event_max_datetime": "2022-01-26T08:11:55.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-04-23T09:08:09.000Z",
"max_forks_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "oab/abstools",
"max_forks_repo_path": "abs-foreign-interface/brainstorming/brainstorming.tex",
"max_issues_count": 271,
"max_issues_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc",
"max_issues_repo_issues_event_max_datetime": "2022-03-28T09:05:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-30T19:04:52.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "oab/abstools",
"max_issues_repo_path": "abs-foreign-interface/brainstorming/brainstorming.tex",
"max_line_length": 81,
"max_stars_count": 38,
"max_stars_repo_head_hexsha": "6f245ec8d684efb0977049d075e853a4b4d7d8dc",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "oab/abstools",
"max_stars_repo_path": "abs-foreign-interface/brainstorming/brainstorming.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T19:26:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-04-23T09:08:06.000Z",
"num_tokens": 1020,
"size": 3961
} |
\chapter{Testing regression parameter estimates with Z-tests and T-tests}
| {
"alphanum_fraction": 0.8026315789,
"avg_line_length": 19,
"ext": "tex",
"hexsha": "cee15961c8348a1070b533fa8e250bb33d4369f0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/hypothesisRegression/00-00-Chapter_name.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/hypothesisRegression/00-00-Chapter_name.tex",
"max_line_length": 73,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/hypothesisRegression/00-00-Chapter_name.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 15,
"size": 76
} |
%\newpage
%\appendix
%\chapter{Bill of Materials} | {
"alphanum_fraction": 0.7551020408,
"avg_line_length": 16.3333333333,
"ext": "tex",
"hexsha": "4cdb0a98d7e62668809224f919c41214bca39703",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "bac8057dcbc45e0b982553ad5c29ec84d1f0319b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "skyshiro/latex",
"max_forks_repo_path": "appendix-outline.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "bac8057dcbc45e0b982553ad5c29ec84d1f0319b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "skyshiro/latex",
"max_issues_repo_path": "appendix-outline.tex",
"max_line_length": 28,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "bac8057dcbc45e0b982553ad5c29ec84d1f0319b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "skyshiro/latex",
"max_stars_repo_path": "appendix-outline.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 15,
"size": 49
} |
\documentclass[11pt]{article}
\usepackage{geometry}
% Math packages.
\usepackage{amsmath,amssymb,amstext,amsfonts}
% Add figures.
\usepackage{graphicx}
% Metadata
\author{[Name goes here.]}
\title{[Title of report goes here.]}
\begin{document}
\maketitle
\section{Executive Summary}
[Summary goes here.]
\section{Statement of the Problem}
[Statement of the problem goes here.]
\section{Description of the Mathematics}
[Description of the mathematics goes here.]
\section{Description of the Algorithms and Implementation}
[Description of the algorithms and implementation goes here.]
\section{Description of the Experimental Design and Results}
[Description of the experimental design and results goes here.]
\section{Conclusions}
[Statement of the problem goes here.]
\end{document}
| {
"alphanum_fraction": 0.7752808989,
"avg_line_length": 18.6279069767,
"ext": "tex",
"hexsha": "8881b3c1d96c270d8b46285d1ebf83966ce7816e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "notmatthancock/notmatthancock.github.io",
"max_forks_repo_path": "teaching/acm-computing-seminar/resources/prog/assignment-template/report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "notmatthancock/notmatthancock.github.io",
"max_issues_repo_path": "teaching/acm-computing-seminar/resources/prog/assignment-template/report/report.tex",
"max_line_length": 63,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "abcd91cc7c2653c5243fe96ba2fd681ec03930bb",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "notmatthancock/notmatthancock.github.io",
"max_stars_repo_path": "teaching/acm-computing-seminar/resources/prog/assignment-template/report/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 187,
"size": 801
} |
\documentclass[12pt]{article}
\usepackage[top=1in,bottom=1in,left=0.75in,right=0.75in,centering]{geometry}
\usepackage{fancyhdr}
\usepackage{epsfig}
\usepackage[pdfborder={0 0 0}]{hyperref}
\usepackage{palatino}
\usepackage{wrapfig}
\usepackage{lastpage}
\usepackage{color}
\usepackage{ifthen}
\usepackage[table]{xcolor}
\usepackage{graphicx,type1cm,eso-pic,color}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{wasysym}
\def\course{CS 4102: Algorithms}
\def\homework{Divide and Conquer / Sorting Basic: Recurrence Relations}
\def\semester{Spring 2021}
\newboolean{solution}
\setboolean{solution}{false}
% add watermark if it's a solution exam
% see http://jeanmartina.blogspot.com/2008/07/latex-goodie-how-to-watermark-things-in.html
\makeatletter
\AddToShipoutPicture{%
\setlength{\@tempdimb}{.5\paperwidth}%
\setlength{\@tempdimc}{.5\paperheight}%
\setlength{\unitlength}{1pt}%
\put(\strip@pt\@tempdimb,\strip@pt\@tempdimc){%
\ifthenelse{\boolean{solution}}{
\makebox(0,0){\rotatebox{45}{\textcolor[gray]{0.95}%
{\fontsize{5cm}{3cm}\selectfont{\textsf{Solution}}}}}%
}{}
}}
\makeatother
\pagestyle{fancy}
\fancyhf{}
\lhead{\course}
\chead{Page \thepage\ of \pageref{LastPage}}
\rhead{\semester}
%\cfoot{\Large (the bubble footer is automatically inserted into this space)}
\setlength{\headheight}{14.5pt}
\newenvironment{itemlist}{
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}}
{\end{itemize}}
\newenvironment{numlist}{
\begin{enumerate}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}}
{\end{enumerate}}
\newcounter{pagenum}
\setcounter{pagenum}{1}
\newcommand{\pageheader}[1]{
\clearpage\vspace*{-0.4in}\noindent{\large\bf{Page \arabic{pagenum}: {#1}}}
\addtocounter{pagenum}{1}
\cfoot{}
}
\newcounter{quesnum}
\setcounter{quesnum}{1}
\newcommand{\question}[2][??]{
\begin{list}{\labelitemi}{\leftmargin=2em}
\item [\arabic{quesnum}.] {} {#2}
\end{list}
\addtocounter{quesnum}{1}
}
\definecolor{red}{rgb}{1.0,0.0,0.0}
\newcommand{\answer}[2][??]{
\ifthenelse{\boolean{solution}}{
\color{red} #2 \color{black}}
{\vspace*{#1}}
}
\definecolor{blue}{rgb}{0.0,0.0,1.0}
\begin{document}
\section*{\homework}
\question[3]{
You are a hacker, trying to gain information on a secret array of size $n$. This array contains $n-1$ ones and exactly $1$ two; you want to determine the index of the two in the array.\\
\\
Unfortunately, you don't have access to the array directly; instead, you have access to a function $f(l1, l2)$ that compares the sum of the elements of the secret array whose indices are in $l1$ to those in $l2$. This function returns $-1$ if the $l1$ sum is smaller, $0$ if they are equal, and $1$ if the sum corresponding to $l2$ is smaller.\\
\\
For example, if the array is $a=[1,1,1,2,1,1]$ and you call $f([1,3,5],[2,4,6])$, then the return value is $-1$ because $a[1]+a[3]+a[5]=3<4=a[2]+a[4]+a[6]$. Design an algorithm to find the index of the $2$ in the array using the least number of calls to $f()$. Suppose you discover that $f()$ runs in $\Theta(\max(|l1|,|l2|))$; what is the overall runtime of your algorithm?
}
\vspace{12pt}
\question[3]{
In class, we looked at the \emph{Quicksort algorithm}. Consider the \textbf{worst-case scenario} for quick-sort in which the worst possible pivot is chosen (the smallest or largest value in the array). Answer the following questions:
\begin{itemize}
\item What is the probability of choosing one of the two worst pivots out of $n$ items in the list?
	\item Extend your formula. What is the probability of choosing one of the worst possible pivots \emph{for EVERY recursive call} until reaching the base case? In other words, what is the probability that quicksort fully sorts the list while choosing the worst pivot every time it attempts to do so?
	\item What is the limit of your formula above as the size of the list grows? Is the chance of getting Quicksort's worst case improving, staying constant, or converging on some other value?
\item Present one sentence on what this means. What are the chances that we actually get Quicksort's worst-case behavior?
\end{itemize}
}
\vspace{12pt}
%----------------------------------------------------------------------
\noindent Directly solve, by unrolling the recurrence, the following relation to find its exact solution.
\question[2]{
$T(n) = T(n-1) + n$
}
\vspace{12pt}
%----------------------------------------------------------------------
\noindent Use induction to show bounds on the following recurrence relations.
\question[2]{
Show that $T(n)=2T(\sqrt{n})+\log(n) \in O(\log(n)\cdot\log(\log(n)))$. \emph{Hint: Try creating a new variable $m$ and substituting the equation for $m$ to make it look like a common recurrence we've seen before. Then solve the easier recurrence and substitute $n$ back in for $m$ at the end.}
}
\answer[0 in]{
...
}
\question[2]{
Show that $T(n)=4T(\frac{n}{3})+n \in \Theta(n^{\log_3(4)})$. You'll need to subtract off a lower-order term to make the induction work here. \emph{Note: we are using big-theta here, so you'll need to prove the upper AND lower bound.}
}
\answer[0 in]{
...
}
\vspace{12pt}
%----------------------------------------------------------------------
\noindent Use the master theorem (or main recurrence theorem if applicable) to solve the following recurrence relations. State which case of the theorem you are using and why.
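For reference, one standard simplified form of the master theorem, stated for recurrences whose driving term is polynomial, $f(n)=\Theta(n^c)$ (the statement used in lecture may differ slightly in its hypotheses), is: if $T(n)=aT(n/b)+f(n)$ with $a\ge 1$ and $b>1$, then
\[
T(n) \in
\begin{cases}
\Theta\left(n^{\log_b a}\right) & \text{if } c < \log_b a,\\[2pt]
\Theta\left(n^{c}\log n\right) & \text{if } c = \log_b a,\\[2pt]
\Theta\left(n^{c}\right) & \text{if } c > \log_b a.
\end{cases}
\]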
\question[2]{
$T(n)=2T(\frac{n}{4})+1$
}
\answer[0 in]{
...
}
\question[2]{
$T(n)=2T(\frac{n}{4})+\sqrt{n}$
}
\answer[0 in]{
...
}
\question[2]{
$T(n)=2T(\frac{n}{4})+n$
}
\answer[0 in]{
...
}
\question[2]{
$T(n)=2T(\frac{n}{4})+n^2$
}
\answer[0 in]{
...
}
\end{document}
| {
"alphanum_fraction": 0.6912156167,
"avg_line_length": 30.4594594595,
"ext": "tex",
"hexsha": "736e751b1ad3d2e2365c46168a03256b560f77a8",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-03-29T23:06:24.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-31T21:10:50.000Z",
"max_forks_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7",
"max_forks_repo_licenses": [
"CC0-1.0"
],
"max_forks_repo_name": "csonmezyucel/cs4102-f21",
"max_forks_repo_path": "homeworks/divideconq-basic/recurrenceRelations.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC0-1.0"
],
"max_issues_repo_name": "csonmezyucel/cs4102-f21",
"max_issues_repo_path": "homeworks/divideconq-basic/recurrenceRelations.tex",
"max_line_length": 372,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7",
"max_stars_repo_licenses": [
"CC0-1.0"
],
"max_stars_repo_name": "csonmezyucel/cs4102-f21",
"max_stars_repo_path": "homeworks/divideconq-basic/recurrenceRelations.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1730,
"size": 5635
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt,twocolumn,tighten]{aastex63}
%\documentclass[12pt,twocolumn,tighten,trackchanges]{aastex63}
\usepackage{amsmath,amstext,amssymb}
\usepackage[T1]{fontenc}
\usepackage{apjfonts}
\usepackage[figure,figure*]{hypcap}
\usepackage{graphics,graphicx}
\usepackage{hyperref}
\usepackage{natbib}
\usepackage[caption=false]{subfig} % for subfloat
\usepackage{enumitem} % for specific spacing of enumerate
\usepackage{epigraph}
\renewcommand*{\sectionautorefname}{Section} %for \autoref
\renewcommand*{\subsectionautorefname}{Section} %for \autoref
\newcommand{\tn}{TOI~1937} % target star name
\newcommand{\pn}{TOI~1937b} % planet name
\newcommand{\cn}{NGC~2516} % cluster name
\newcommand{\kms}{\,km\,s$^{-1}$}
\newcommand{\stscilink}{\textsc{\url{archive.stsci.edu/hlsp/cdips}}}
\newcommand{\datasetlink}{\textsc{\dataset[doi.org/10.17909/t9-ayd0-k727]{https://doi.org/10.17909/t9-ayd0-k727}}}
%% Reintroduced the \received and \accepted commands from AASTeX v5.2.
%% Add "Submitted to " argument.
\received{---}
\revised{---}
\accepted{---}
\submitjournal{TBD.}%AAS journals.}
\shorttitle{TDB.}
\begin{document}
\defcitealias{bouma_wasp4b_2019}{B19}
\title{
Cluster Difference Imaging Photometric Survey. IV.
TOI 1937Ab. Youngest Hot Jupiter, or Just Tidally Spun Up?
}
%\suppressAffiliations
%\NewPageAfterKeywords
\input{authors.tex}
\begin{abstract}
We report the discovery and confirmation of a hot Jupiter,
TOI\,1937Ab, and present the evidence for and against its youth
($\approx$120\,Myr).
We found the planet in images taken by the NASA TESS mission
using the CDIPS pipeline.
We measured its mass (1.X Mjup) and orbital obliquity
(0$\pm$XX$^\circ$) using PFS at Magellan.
Gaia kinematics suggest that the star is in the halo of NGC\,2516.
The possible youth is corroborated by the {\bf G2V} star's 6.X day
rotation period, the planet's 22.X hour orbital period, and the
star's metallicity ([Fe/H] X.X$\pm$Y.Y dex) being consistent with
that of the cluster (X.X$\pm$Z.Z dex).
However, the star's spectrum does not show lithium, which argues
against it being truly young.
We outline tests that could support or refute the youth of the
system, and outline the reasons why few, if any, sub-100 Myr hot
Jupiters have been securely detected.
\end{abstract}
\keywords{
Exoplanets (498),
Transits (1711),
Exoplanet evolution (491),
Stellar ages (1581),
Young star clusters (1833)
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Depending on the process that produces them, hot Jupiters arrive on
their tiny orbits on timescales of anywhere between megayears and
gigayears (CITE, CITE, CITE).
One way to distinguish between these processes is to find
young ($\lesssim$100\,Myr) hot Jupiters.
However, the search for such hot Jupiters has yielded remarkably few,
if any, secure detections.
What is the youngest hot Jupiter known?
Transits have provided a pile of tantalizing candidates. HIP\,67522b,
with an age, orbital period, and size of XX\,Myr, X.X\,days, and
TOOSMALL\,$R_\oplus$ respectively, comes close
\citep{rizzuto_tess_2020}. Its
size, however, is smaller than expected for a Jovian-mass object given
its age, by a factor of $\approx1.5\times$ (CITE Burrows, Fortney,
Thorngren models, CITE Owen 2020 entropy). A mass measurement is
therefore needed to resolve its status as either a hot Jupiter, or an
inflated Neptunian world. No other planet comes as close to fitting
the bill. V1298 Tau b, part of a unique system with at least four
transiting planets, is appropriately large and young. However its orbital
period (XX\,days) is too long to be a hot Jupiter (CITE David 2019,
2020). The dips in PTFO 8-8695, long-interpreted as a candidate hot
Jupiter, are chromatic and show a changing orbital phase. This is
inconsistent with a planetary interpretation (van Eyken 2012, Yu+15,
Onitsuka+17, Tanimoto+20, Bouma+20).
Reports of very young hot Jupiters have come from the radial velocity
(RV) technique, but these reports are difficult to verify. TW Hya
(age=X\,Myr), for instance, showed a radial velocity semi-amplitude of
$\approx$200\,m\,s$^{-1}$ with a period of 3.6\,days, as measured in
optical spectra \citep{setiawan_young_2008} (CITEP ALSO age reference).
Even though \citet{setiawan_young_2008} observed no significant
correlation between activity indicators and the RV signal, subsequent
infrared velocities measured by \citet{huelamo_tw_2008} showed small
variations $\lesssim 35\,$m\,s$^{-1}$. \citet{huelamo_tw_2008}
ultimately concluded that activity-induced variations were the source
of the optical signal. More recently, very young hot Jupiters have
been reported around CI Tau, V830 Tau, and TAP 26
\citep{johns-krull_CI_Tau_candidate_2016,donati_hot_2016,donati_hot_2017,yu_hot_2017,biddle_k2_2018,flagg_co_2019}.
The planetary nature of at least two of these signals (CI Tau and V830
Tau) has been debated \citep{donati_magnetic_2020,damasso_gaps_2020}.
More generally, the Bayesian model comparison problem of ``stellar
activity alone'' versus ``stellar activity plus planet'' has been
shown to require significant amounts of both data and statistical
machinery \citep[{\it
e.g.},][]{barragan_radial_2019,klein_simulated_2020}. To date, none of
the reported young hot Jupiter detections have performed this style of
analysis. Such an analysis, or else the acquisition of multi-color
radial velocities, seems like a requirement given the challenges of
detecting small signals in the presence of significant stellar
variability.
%TODO: make figure
So where does that leave the search for young hot Jupiters?
A quick query to the NASA Exoplanet Archive yields the current state
of the search, showcased in Figure~\ref{fig:rp_vs_age}.
(FIGURE: Rp vs Age, and Mp\,sini vs Age).
The youngest transiting hot Jupiter ($R_p>R_{\rm Jup}$, $P<10\,{\rm
days}$) appears to be Qatar-4b.
It was discovered by (CITE); its age was reported to be very low on
the basis of gyrochronology, which is not an especially reliable
indicator (CITE CITE).
Section~\ref{sec:observations} describes the identification of the
candidate, and the follow-up observations that led to its confirmation.
The planet can only be understood with respect to its (putative)
host cluster, so in turn we analyze the
available six-dimensional positions and kinematics (Section~\ref{sec:gaia6d}),
the rotation periods of stars in \cn\
(Section~\ref{sec:rotation}), and the available lithium measurements
(Section~\ref{sec:lithium}).
We synthesize this data in
Section~\ref{sec:system} to present our best interpretation of the
system itself, in turn presenting our interpretation of the
cluster
(Section~\ref{subsec:cluster}), the star (Section~\ref{subsec:star})
and the planet (Section~\ref{subsec:planet}). We conclude in
Section~\ref{sec:discussion} by discussing which questions we have
been able to answer, and which remain open.
\section{Identification and Follow-up Observations}
\label{sec:observations}
\subsection{TESS Photometry}
\label{subsec:tess}
TESS: S7 + S9
\subsection{Gaia Astrometry and Imaging}
\label{subsec:gaia}
Gaia:
Within 20 arcsec, there are three sources.
% 5489726768531119616 (target, G=13.02, plx=2.38 +/- 0.017 mas, so Bp-Rp=1.00).
% Note RV = 27.77km/s, +/- 9.99 km/s. Quotes E(Bp-Rp) = 0.1925
% 5489726768531118848 (G=17.59, Bp=17.7, Rp=16.2, Bp-Rp=1.502,...
% but has plx=2.32 +/- 0.118 mas, and VERY SIMILAR proper motions. So, it's a
% binary. Presumably this is the companion that Ziegler found.)
% There's also 5489726768531122560, a G=19.5 companion at further sep.
% ...
% Within 30 arcsec, there are... 8 sources. The others don't have parallaxes or
% proper motions of significant interest.
\subsection{High-Resolution Imaging}
\label{subsec:speckle}
Howell et al.\ obtained speckle imaging at our request and reported a
non-detection.
Ziegler et al.\ also obtained high-resolution imaging, and detected
the faint companion at 2.1 arcsec separation.
\subsection{Ground-based Time-Series Photometric Follow-up}
\label{subsec:groundphot}
LCOGT: five light curves (three $i$ band, one $g$ band, one $z$ band).
El Sauce: one $r$-band light curve.
\subsection{Spectroscopic Follow-up}
\label{subsec:spectra}
\subsubsection{SMARTS 1.5$\,$m / CHIRON}
\label{subsec:chiron}
One CHIRON reconnaissance spectrum was obtained on 2020-02-04, and another circa 2021.
\subsubsection{PFS}
PFS template + RVs
\paragraph{Steve Shectman on resolution in 3x3 binning mode:}
I think what you've done is taken the resolution of the spectrograph with 1x2 binning, which is about 130,000, and divided by three. The actual situation is as follows. The scale at the detector is 158 microns per arcsecond, so the 0.3 arcsecond slit is 47 microns wide. An unbinned pixel is 9 microns, which corresponds to 450 m/s, and the measured FWHM is about 5 unbinned pixels (45 microns or 2.25 km/s). So then the resolution is 300,000/2.25 = ~133,000.
So the unbinned spectra are nicely sampled with about 5 pixels per FWHM, while the binned spectra are somewhat undersampled with about 1.7 pixels per FWHM. There's a small loss in resolution caused by the discrete pixel size (it kind of adds 1 pixel in quadrature to the underlying FWHM), so maybe the effective FWHM when the spectra are binned 3x3 is more like 2 pixels instead of 1.7, corresponding to 6 pixels unbinned. That would still be a resolution of about 110,000. I don't actually have a rigorous measurement of the resolution with 3x3 binning, but I think 110,000 is a reasonable number to quote.
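For clarity, the quoted resolving powers follow from the standard relation between the resolution and the velocity width of the resolution element,
\begin{equation}
R \equiv \frac{\lambda}{\Delta\lambda} = \frac{c}{\Delta v} \approx \frac{3\times10^{5}\,{\rm km\,s^{-1}}}{2.25\,{\rm km\,s^{-1}}} \approx 1.3\times10^{5},
\end{equation}
consistent with the $R\approx133{,}000$ and $R\approx110{,}000$ values quoted above for the unbinned and 3x3-binned modes.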
%\input{TOI837_rv_table.tex}
\section{Kinematics}
\subsection{NGC 2516 broadly}
\subsection{TOI 1937 specifically}
Back-integrating the orbit by 20\,Myr brings it closer to the cluster;
integrating further back in time, it moves away again.
\section{Rotation}
\subsection{NGC 2516 broadly}
\subsection{TOI 1937 specifically}
\section{Lithium}
\subsection{NGC 2516 broadly}
\subsection{TOI 1937 specifically}
\subsection{Comparison Stars That We Got Spectra For}
They are at
/Users/luke/Dropbox/proj/earhart/data/spectra/comparison\_stars.
There are 6 CHIRON ones, of varying assailability.
And 1 PFS unassailable member.
\section{System Modeling}
\label{sec:system}
\subsection{The Cluster}
\label{subsec:cluster}
\paragraph{Metallicity}
(Quoting Jilinski+09)
``After an initial period where several authors (see Terndrup et al.
2002) found for NGC 2516 a metallicity of a few tenths dex below the
solar, more recent analysis (Irwin et al. 2007; Jeffries et al. 2001;
Sciortino et al. 2001; Sung et al. 2002) places NGC 2516 with a
metallicity close to solar. ''
\subsubsection{Physical Characteristics}
\label{subsec:clusterchar}
\paragraph{Mass} 1.8Mjup
\paragraph{Obliquity} Less than 20 deg or so. Consistent with
expectations given its short orbital period (e.g., Anderson et al
2021).
\subsubsection{HR Diagram}
\label{subsec:hr}
\subsection{The Star}
\label{subsec:star}
\subsubsection{Membership of \tn\ in \cn}
\label{subsec:member}
\subsubsection{Rotation}
\subsubsection{Lithium}
\subsubsection{Stellar Parameters}
\label{subsec:starparams}
\subsection{The Planet}
\label{subsec:planet}
\section{Discussion}
\label{sec:discussion}
\paragraph{Isn't the rotation period a little slow?}
Yes. But hot Jupiters affect the star's rotation, and it's hard to be
sure how much. Consider TOI 1431b, an Am-type star with $v\sin i = 8$\kms
hosting a retrograde hot Jupiter.
Generally speaking, retrograde systems will spin down the star (e.g.,
Anderson+2021), even if the planet is quickly ``realigned''.
The latter realignment would be needed in our case to understand TOI
1937Ab, since the observed orbit is prograde.
\subsection{The present}
The minimum orbital period beyond which a planet is tidally sheared
apart depends on the concentration of mass within the planet
(Rappaport+2013).
For a very centrally concentrated planet with \pn's
mean density of 0.XX g$\,$cm$^{-3}$, this period is XX.X hours;
in the opposite limit of a uniform density distribution, it would be
XX.X hours.
For comparison, the Roche limit for a close-orbiting planet can
be expressed as a minimum orbital period that depends
on the density distribution of the planet (Rappaport
et al. 2013). For WASP-12b, with a mean density of
0.46 g$\,$cm$^{-3}$, the Roche-limiting orbital period is 14.2 hr
assuming the mass of the planet to be concentrated near
the center, and 18.6 hr in the opposite limit of a spherical
and incompressible planet.
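These numbers follow the density scaling of Rappaport et al. (2013); as a sketch, back-computing the coefficients from the WASP-12b values quoted above gives
\begin{equation}
P_{\rm min} \approx 12.6\,{\rm hr}\,\left(\frac{\rho_p}{1\,{\rm g\,cm^{-3}}}\right)^{-1/2}
\end{equation}
for a spherical, incompressible planet, with the coefficient dropping to $\approx$9.6\,hr in the limit of a centrally concentrated mass distribution. Evaluating these expressions at \pn's measured mean density yields the periods quoted above for it.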
\subsection{The future}
Possible follow-up paths include:
\begin{itemize}
\item Detailed multi-abundance modelling, i.e., obtain spectra of
other cluster stars using PFS and compare the metallicities.
\item Multiband imaging of the 2$\farc$5 companion (e.g., with the
LCOGT 2\,m in Australia, $z$ band), followed by PSF fitting.
\item A K-band image, e.g., with MagAO or another multicolor imager
such as Keck-NIRC2 (if it were in the North). What is the prediction
for the $G-K$ color? This does not seem all that promising: the
$G$-mag would predict something like M2.5V or M3.5V (0.40\,$M_\odot$
or 0.30\,$M_\odot$ from Mamajek). At 100\,Myr, a 0.20\,$M_\odot$ star
is 18\,Gcm vs 16\,Gcm (12\% inflated; Burrows+01). This is not a
whole lot, and it is worse for more massive stars. GROND on the La
Silla 2.2\,m has a K band ($griz$+K); going whole hog, maybe SPHERE
could do it.
\item Get a spectrum of the secondary. Andrew Mann suggests this is
possible. The primary is $G=13.0$, the secondary $G=17.6$; the age
test would be based on H$\alpha$:
``If there's no H$\alpha$ you can be sure it's old, but if there is
some it's kinda inconclusive'';
``10--40\% of M3s are inactive, assuming that the SpT is right'';
``get the M dwarf spectrum in good seeing and confirm age, or
don't bother.''
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\clearpage
\acknowledgements
\raggedbottom
The authors thank X and Y for fruitful discussions.
%
L.G.B. and J.H. acknowledge support by the TESS GI Program, program
NUMBER, through NASA grant NUMBER.
%
This study was based in part on observations at Cerro Tololo
Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID
2020A-0146; 2020B-NUMBER PI: L{.}~Bouma), which is managed by the
Association of Universities for Research in Astronomy (AURA) under a
cooperative agreement with the National Science Foundation.
%
ACKNOWLEDGE PFS / CAMPANAS.
%
This paper includes data collected by the TESS mission, which are
publicly available from the Mikulski Archive for Space Telescopes
(MAST).
%
Funding for the TESS mission is provided by NASA's Science Mission
directorate.
%
% The ASTEP project acknowledges support from the French and Italian
% Polar Agencies, IPEV and PNRA, and from Universit\'e C\^ote d'Azur
% under Idex UCAJEDI (ANR-15-IDEX-01). We thank the dedicated staff at
% Concordia for their continuous presence and support throughout the
% Austral winter.
% %
% This research received funding from the European Research Council
% (ERC) under the European Union's Horizon 2020 research and innovation
% programme (grant n$^\circ$ 803193/BEBOP), and from the
% Science and Technology Facilities Council (STFC; grant n$^\circ$
% ST/S00193X/1).
%
% The Digitized Sky Survey was produced at the Space Telescope Science
% Institute under U.S. Government grant NAG W-2166.
% Figure~\ref{fig:scene} is based on photographic data obtained using
% the Oschin Schmidt Telescope on Palomar Mountain.
%
This research was based in part on observations obtained at the
Southern Astrophysical Research (SOAR) telescope, which is a joint
project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e
Inova\c{c}\~{o}es (MCTI/LNA) do Brasil, the US National Science
Foundation's NOIRLab, the University of North Carolina at Chapel Hill
(UNC), and Michigan State University (MSU).
This research made use of the Exoplanet Follow-up Observation
Program website, which is operated by the California Institute of
Technology, under contract with the National Aeronautics and Space
Administration under the Exoplanet Exploration Program.
% %
% This research made use of the NASA Exoplanet Archive, which is
% operated by the California Institute of Technology, under contract
% with the National Aeronautics and Space Administration under the
% Exoplanet Exploration Program.
% %
This research made use of the SVO Filter Profile Service
(\url{http://svo2.cab.inta-csic.es/theory/fps/}) supported from the Spanish
MINECO through grant AYA2017-84089.
Resources supporting this work were provided by the NASA High-End
Computing (HEC) Program through the NASA Advanced Supercomputing (NAS)
Division at Ames Research Center for the production of the SPOC data
products.
%
% A.J.\ and R.B.\ acknowledge support from project IC120009 ``Millennium
% Institute of Astrophysics (MAS)'' of the Millenium Science Initiative,
% Chilean Ministry of Economy. A.J.\ acknowledges additional support
% from FONDECYT project 1171208. J.I.V\ acknowledges support from
% CONICYT-PFCHA/Doctorado Nacional-21191829. R.B.\ acknowledges support
% from FONDECYT Post-doctoral Fellowship Project 3180246.
% %
% C.T.\ and C.B\ acknowledge support from Australian Research Council
% grants LE150100087, LE160100014, LE180100165, DP170103491 and
% DP190103688.
% %
% C.Z.\ is supported by a Dunlap Fellowship at the Dunlap Institute for
% Astronomy \& Astrophysics, funded through an endowment established by
% the Dunlap family and the University of Toronto.
% %
% D.D.\ acknowledges support through the TESS Guest Investigator Program
% Grant 80NSSC19K1727.
%
%
%
% %
% Based on observations obtained at the Gemini Observatory, which is
% operated by the Association of Universities for Research in Astronomy,
% Inc., under a cooperative agreement with the NSF on behalf of the
% Gemini partnership: the National Science Foundation (United States),
% National Research Council (Canada), CONICYT (Chile), Ministerio de
% Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina),
% Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil),
% and Korea Astronomy and Space Science Institute (Republic of Korea).
% %
% Observations in the paper made use of the High-Resolution Imaging
% instrument Zorro at Gemini-South. Zorro was funded by the NASA
% Exoplanet Exploration Program and built at the NASA Ames Research
% Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett
% Quigley.
% %
% This research has made use of the VizieR catalogue access tool, CDS,
% Strasbourg, France. The original description of the VizieR service was
% published in A\&AS 143, 23.
% %
% This work has made use of data from the European Space Agency (ESA)
% mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed
% by the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
% \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding
% for the DPAC has been provided by national institutions, in particular
% the institutions participating in the {\it Gaia} Multilateral
% Agreement.
%
% (Some of) The data presented herein were obtained at the W. M. Keck
% Observatory, which is operated as a scientific partnership among the
% California Institute of Technology, the University of California and
% the National Aeronautics and Space Administration. The Observatory was
% made possible by the generous financial support of the W. M. Keck
% Foundation.
% The authors wish to recognize and acknowledge the very significant
% cultural role and reverence that the summit of Maunakea has always had
% within the indigenous Hawaiian community. We are most fortunate to
% have the opportunity to conduct observations from this mountain.
%
% \newline
%
\software{
\texttt{arviz} \citep{arviz_2019},
\texttt{astrobase} \citep{bhatti_astrobase_2018},
%\texttt{astroplan} \citep{astroplan2018},
\texttt{AstroImageJ} \citep{collins_astroimagej_2017},
\texttt{astropy} \citep{astropy_2018},
\texttt{astroquery} \citep{astroquery_2018},
%\texttt{BATMAN} \citep{kreidberg_batman_2015},
\texttt{ceres} \citep{brahm_2017_ceres},
\texttt{cdips-pipeline} \citep{bhatti_cdips-pipeline_2019},
\texttt{corner} \citep{corner_2016},
%\texttt{emcee} \citep{foreman-mackey_emcee_2013},
\texttt{exoplanet} \citep{exoplanet:exoplanet}, and its
dependencies \citep{exoplanet:agol20, exoplanet:kipping13, exoplanet:luger18,
exoplanet:theano},
%\texttt{IDL Astronomy User's Library} \citep{landsman_1995},
\texttt{IPython} \citep{perez_2007},
%\texttt{isochrones} \citep{morton_2015_isochrones},
%\texttt{lightkurve} \citep{lightkurve_2018},
\texttt{matplotlib} \citep{hunter_matplotlib_2007},
%\texttt{MESA} \citep{paxton_modules_2011,paxton_modules_2013,paxton_modules_2015}
\texttt{numpy} \citep{walt_numpy_2011},
\texttt{pandas} \citep{mckinney-proc-scipy-2010},
\texttt{pyGAM} \citep{serven_pygam_2018_1476122},
\texttt{PyMC3} \citep{salvatier_2016_PyMC3},
\texttt{radvel} \citep{fulton_radvel_2018},
%\texttt{scikit-learn} \citep{scikit-learn},
\texttt{scipy} \citep{jones_scipy_2001},
\texttt{tesscut} \citep{brasseur_astrocut_2019},
%\texttt{VESPA} \citep{morton_efficient_2012,vespa_2015},
%\texttt{webplotdigitzer} \citep{rohatgi_2019},
\texttt{wotan} \citep{hippke_wotan_2019}.
}
\
\facilities{
{\it Astrometry}:
Gaia \citep{gaia_collaboration_gaia_2016,gaia_collaboration_gaia_2018}.
{\it Imaging}:
Second Generation Digitized Sky Survey,
SOAR~(HRCam; \citealt{tokovinin_ten_2018}).
%Keck:II~(NIRC2; \url{www2.keck.hawaii.edu/inst/nirc2}).
%Gemini:South~(Zorro; \citealt{scott_nessi_2018}.
{\it Spectroscopy}:
CTIO1.5$\,$m~(CHIRON; \citealt{tokovinin_chironfiber_2013}),
PFS ({\bf CITE}),
% MPG2.2$\,$m~(FEROS; \citealt{kaufer_commissioning_1999}),
AAT~(Veloce; \citealt{gilbert_veloce_2018}).
%Keck:I~(HIRES; \citealt{vogt_hires_1994}).
%{\bf VLT (number), UVES and GIRAFFE} (CITE: Pasquini et al 2002)
% Euler1.2m~(CORALIE),
% ESO:3.6m~(HARPS; \citealt{mayor_setting_2003}).
{\it Photometry}:
% ASTEP:0.40$\,$m (ASTEP400),
% CTIO:1.0m (Y4KCam),
% Danish 1.54m Telescope,
El Sauce:0.356$\,$m,
% Elizabeth 1.0m at SAAO,
% Euler1.2m (EulerCam),
% Magellan:Baade (MagIC),
% Max Planck:2.2m (GROND; \citealt{greiner_grond7-channel_2008})
% NTT,
% SOAR (SOI),
TESS \citep{ricker_transiting_2015}.
% TRAPPIST \citep{jehin_trappist_2011},
% VLT:Antu (FORS2).
}
% \input{TOI837_phot_table.tex}
% \input{TOI837_rv_table.tex}
% \input{ic2602_ages.tex}
\input{starparams.tex}
\input{compparams.tex}
% \input{model_posterior_table.tex}
\clearpage
\bibliographystyle{yahapj}
\bibliography{bibliography}
\listofchanges
%\allauthors
\end{document}
| {
"alphanum_fraction": 0.7508255911,
"avg_line_length": 39.0895008606,
"ext": "tex",
"hexsha": "03ff2f84378f07ace37e2b3bb917f834802d4642",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1d2b65d58655725f43c1bf9705b897bf767d4ca1",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lgbouma/earhart",
"max_forks_repo_path": "planet_paper/ms.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1d2b65d58655725f43c1bf9705b897bf767d4ca1",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lgbouma/earhart",
"max_issues_repo_path": "planet_paper/ms.tex",
"max_line_length": 611,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1d2b65d58655725f43c1bf9705b897bf767d4ca1",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lgbouma/earhart",
"max_stars_repo_path": "planet_paper/ms.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6863,
"size": 22711
} |
\documentclass{article}
\setlength{\topmargin}{-.5in}
\setlength{\oddsidemargin}{.125in}
\setlength{\textwidth}{6.25in}
% page numbering style
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\fancyhf{}
\fancyfoot[R]{\thepage}
\fancypagestyle{plain}{%
\renewcommand{\headrulewidth}{0pt}%
\fancyhf{}%
\fancyfoot[R]{\thepage}%
}
\usepackage{float}
\usepackage{Sweave}
\begin{document}
\input{report-concordance}
\title{Mathematics Developers Survey 2016}
\author{Nejc Ilenic}
\date{}
\maketitle
\section{Introduction}
Anonymised responses from the Stack Overflow Annual Developer Survey are published each year along with the results to encourage their further analysis. Being curious about where in the world and in which domain a data scientist should start his / her career, we attempt to answer some of the relevant questions by analysing the available data.
\vspace{2mm}
An important thing to note when interpreting the results, however, is that this dataset may not be a representative sample from the population of mathematics developers. One should keep in mind that these are developers who were aware of the survey and were willing to answer the questions.
\section{Data preparation}
The dataset was constructed from a survey that took place from January 7 to January 25, 2016, with responses originating from Stack Overflow, Stack Exchange technical sites, Facebook and Twitter. Most of the questions are demographic or relate to professional work and technology. The raw data consist of 56030 samples and 66 features, all of which are optional.
In order to obtain an adequately sizable sample, we have decided to include all respondents that belong to the occupation group of mathematics developers, which includes data scientists, machine learning developers and developers with statistics and mathematics backgrounds. After filtering out other occupations and responses with unknown countries there are 2132 samples left for analysis.
\section{Exploratory analysis}
We are primarily interested in answering two questions: where in the world and in which domains (industries) mathematics developers love their jobs the most. Additionally, we want to learn how job satisfaction depends on other factors like compensation, age, gender, etc. We will attempt to answer the first two questions by comparing the level of satisfaction across groups and the last one by estimating linear relationships between variables.
\subsection{Job satisfaction among countries}
The number of mathematics developers per country can be seen in Figure \ref{fig_0}. A minimum of 35 respondents is required for a country to be taken into account; all others are placed into a single group called \textit{Other}. Note that the selected countries and the number of answers may differ when doing inference on specific features due to missing values (i.e. optional answers in the survey). The majority of respondents are from the United States, followed by the combination of countries with fewer than 35 developers, the United Kingdom, Germany and India.
\begin{figure}[H]
\centering
\includegraphics{report-005}
\caption{Number of mathematics developers per country.}\label{fig_0}
\end{figure}
The question from the survey regarding job satisfaction is: 'How satisfied are you with your current job(s)?' with six possible answers: 'I don't have a job', 'I hate my job', 'I'm somewhat dissatisfied with my job', 'I'm neither satisfied nor dissatisfied', 'I'm somewhat satisfied with my job' and 'I love my job'. This is a typical approach to scaling answers in survey research and, for group comparison, we will treat the response as a categorical variable. Additionally, we will discard answers of people without a job, resulting in five categories.
A multinomial-Dirichlet model can be used to model categorical data, and since we are mainly interested in a single category ('I love my job') an alternative would be to transform it to a binary variable and use a binomial-beta model instead. Nevertheless, we will model all five categories simultaneously and focus on a single category when interpreting the results. We will use MCMC approximations even though Dirichlet and beta priors are conjugate for the two sampling models and the posteriors could easily be derived analytically.
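For reference, the conjugate structure referred to above can be sketched as follows (generic notation, with $K=5$ answer categories, observed counts $y=(y_1,\dots,y_K)$ summing to $n$, and prior parameters $\alpha=(\alpha_1,\dots,\alpha_K)$):
\[
y \mid \theta \sim \mathrm{Multinomial}(n, \theta), \qquad
\theta \sim \mathrm{Dirichlet}(\alpha), \qquad
\theta \mid y \sim \mathrm{Dirichlet}(\alpha + y).
\]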
A posterior parameters sample is obtained for each country independently by running a MCMC algorithm for 5000 iterations with 400 warmup samples. We have no opinions or information from related studies regarding parameters, thus uninformative priors are used. Traceplots and MCMC summaries are not included in the report, but can be found in the \textit{plots} and in the \textit{mcmc\_summaries} directories. Nothing abnormal can be spotted and it seems that all the chains have converged.
In Figure \ref{fig_1} a posterior predictive check can be seen. Sampling distributions of log odds of the answer 'I love my job' calculated from posterior predictive samples of the same sizes as the observed samples are plotted for each of the countries and there are no noticeable discrepancies between the replicated and the observed data (with respect to the selected statistic). Note that visual posterior predictive checks are 'sanity checks' more than anything else (e.g. their usage as a model selection technique would be inadequate).
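Concretely, the plotted statistic is the standard log odds
\[
\log \frac{\hat{p}}{1-\hat{p}},
\]
where $\hat{p}$ denotes the proportion of respondents in a given (observed or replicated) sample who answered 'I love my job'.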
\begin{figure}[H]
\centering
\includegraphics{report-010}
\caption{Job satisfaction among countries posterior predictive check. Densities are sampling distributions of log odds of the answer 'I love my job' calculated from posterior predictive samples of the same sizes as the observed samples. Red lines indicate the observed log odds.}\label{fig_1}
\end{figure}
Result of the sampling can be seen in Figure \ref{fig_2}. Plotted are 90\% confidence intervals for posterior probabilities of the answer 'I love my job' for all countries. It can be concluded that there are either no meaningful differences among the countries regarding the probability of someone loving his / her job or we simply can't answer the question with this data. We can however take a more indirect approach by estimating which variables are positively correlated with job satisfaction and compare those among groups instead.
\begin{figure}[H]
\centering
\includegraphics{report-012}
\caption{90\% confidence intervals for posterior probabilities of the answer 'I love my job' for all countries.}\label{fig_2}
\end{figure}
\subsection{Explanatory variables for job satisfaction}
Here we are concerned with how job satisfaction varies with a set of selected variables. Specifically, we are interested in how a person's age, gender, purchasing power, whether he or she works remotely, values unit testing, commits code at least once a day, works in a big company and has a PhD explain the level of their job satisfaction. Age and purchasing power are continuous and all other variables are binary. It should be stated that purchasing power is calculated as compensation in dollars divided by the Big Mac index of the respondent's country (an informal way of measuring purchasing power parity, i.e. how many Big Macs a person can buy per year) and that a company is regarded as big if its number of employees is more than 99. All other variables should be self explanatory.
We will treat the outcome (5 possible answers as before) as an ordinal variable (i.e. a categorical for which values are ordered, but the distances between them are unknown) to preserve information regarding the order. Ordinal logistic regression with uninformative priors is our model of choice here. Again 5000 posterior samples are drawn using a MCMC algorithm with 400 warmup iterations. Traceplots and MCMC summaries are not included in the report, but can be found in the \textit{plots} and in the \textit{mcmc\_summaries} directories. Nothing abnormal can be spotted and it seems that all the chains have converged.
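For reference, one common cumulative-logit parameterization of ordinal logistic regression (a sketch; the sampler's exact parameterization may differ in details such as sign conventions) is
\[
P(y \le k \mid x) = \frac{1}{1+\exp\!\left(-(c_k - x^{\top}\beta)\right)}, \qquad k=1,\dots,4,
\]
where $c_1 < \dots < c_4$ are ordered cutpoints, $x$ collects the explanatory variables listed above and $\beta$ are their coefficients.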
In Figure \ref{fig_3} a posterior predictive check can be seen. Plotted are histograms of 20 replicated samples along with a histogram of the observed sample. In some plots differences can be spotted between replicated and observed data and in others histograms are almost identical. We will conclude that our model fits the data sufficiently for our purposes and assume that discrepancies are due to the sampling variability.
\begin{figure}[H]
\centering
\includegraphics{report-017}
\caption{Posterior predictive check for conditional distribution of job satisfaction given explanatory variables. Plotted are histograms of 20 replicated samples along with a histogram of the observed sample.}\label{fig_3}
\end{figure}
The observed Pearson correlation coefficients (the covariance of the two variables divided by the product of their standard deviations) of the selected explanatory variables are plotted in Figure \ref{fig_4}. There are no strong correlations between regressors, with the exception of the correlation between age and purchasing power ($+0.36$), which we have to keep in mind when interpreting the regression coefficients. Otherwise, results from the observed sample seem somewhat reasonable: remote work is positively correlated with age and negatively with working in a big company, and age is positively correlated with purchasing power.
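Explicitly, for two variables $X$ and $Y$ the coefficient is
\[
\rho_{XY} = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y},
\]
estimated here from the observed sample.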
\begin{figure}[H]
\centering
\includegraphics{report-019}
\caption{Pearson correlation coefficients of selected explanatory variables.}\label{fig_4}
\end{figure}
Next we examine the posterior regression coefficients' 90\% confidence intervals in Figure \ref{fig_5}. Let's first focus on the continuous variables age and purchasing power. We can't be certain, but it seems that as age grows the satisfaction level declines. On the other hand, we can say much more confidently that purchasing power is positively correlated with job satisfaction, which seems quite reasonable. With dummy variables we have to keep in mind that we have modeled a 'reference' female developer who doesn't commit code at least once a day, doesn't have a PhD, doesn't value unit testing, doesn't work in a big company and doesn't work remotely. We can then interpret the signs of the coefficients as the positive or negative effect that the corresponding variables have if we change their values from false to true, keeping all other variables constant. For example, if we say that our 'reference' developer starts to work remotely, it is possible that her satisfaction level will rise, although we should be conservative here as the 5th percentile is actually below zero. Similarly, if she starts to work in a big company it is likely that her satisfaction level will decline. The 90\% confidence intervals of the other variables contain the value of zero, so we won't draw conclusions about their effect on the response variable. Our level of uncertainty is the lowest for purchasing power, so we turn our attention to that in the next section.
\begin{figure}[H]
\centering
\includegraphics{report-021}
\caption{90\% confidence intervals for posterior regression coefficients of selected explanatory variables for job satisfaction.}\label{fig_5}
\end{figure}
\subsection{Purchasing power among countries}
We will compare purchasing power among countries, as the results in the previous section strongly suggest its positive correlation with job satisfaction. In spite of that, one should bear in mind that the highest purchasing power doesn't imply the highest job satisfaction and that the opposite case can occur just as well. There may be other (even unmeasured) factors that we have not taken into account that have a stronger (negative) correlation with job satisfaction. The only conclusions we will be able to draw based on the results from this section are about purchasing power itself.
In Figure \ref{fig_6} we can see the observed densities of purchasing power for all countries. Based on the plots it seems that gamma or Weibull models would be appropriate for this random variable; however, as we are only interested in comparing means across groups, we can use the normal model (central limit theorem). Additionally, we turn to the hierarchical normal model to combine the information from all countries.
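As a sketch, the hierarchical normal model for the group means can be written (in one standard form, with a shared within-group variance; group-specific variances would be a straightforward variation) as
\[
y_{ij} \mid \theta_j, \sigma^2 \sim \mathrm{N}(\theta_j, \sigma^2), \qquad
\theta_j \mid \mu, \tau^2 \sim \mathrm{N}(\mu, \tau^2),
\]
where $y_{ij}$ is the purchasing power of respondent $i$ in country $j$, $\theta_j$ is the country mean, and the hyperparameters $\mu$, $\tau^2$ and $\sigma^2$ receive uninformative priors as described above.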
\begin{figure}[H]
\centering
\includegraphics{report-023}
\caption{Observed densities of purchasing power for all countries.}\label{fig_6}
\end{figure}
A posterior sample of group means is obtained for each country by running a MCMC algorithm for 5000 iterations with 400 warmup samples. Like before we have no opinions or information from related studies regarding parameters, thus uninformative priors are used. Traceplots and MCMC summaries are not included in the report, but can be found in the \textit{plots} and in the \textit{mcmc\_summaries} directories. Nothing abnormal can be spotted and it seems that all the chains have converged.
Posterior predictive check can be seen in Figure \ref{fig_7}. Sampling distributions of means of posterior predictive samples of the same sizes as the observed samples are plotted along with the observed means. We can observe a practical demonstration of the central limit theorem (larger sample size - smaller uncertainty). There are no noticeable discrepancies between the replicated and the observed data with respect to the selected statistic (the mean).
\begin{figure}[H]
\centering
\includegraphics{report-027}
\caption{Purchasing power among countries posterior predictive check. Densities are sampling distributions of means of posterior predictive samples of the same sizes as the observed samples. Red lines indicate the observed means.}\label{fig_7}
\end{figure}
Result of the sampling can be seen in Figure \ref{fig_8}. Plotted are 90\% confidence intervals for purchasing power posterior means for all countries. We won't calculate exact probabilities of comparisons of means (although we could, we have the posterior distributions), but rather compare them visually. Mean purchasing power is higher in United States and Australia than in all other countries (more than 20000 Big Macs per year). United Kingdom has higher mean purchasing power than all other countries (excluding United States and Australia) and seems similar to Switzerland's, although level of uncertainty is much higher for the latter due to a smaller sample size. Canada, Germany and all countries with less than 35 respondents combined have higher mean purchasing power than Italy.
In next sections the same comparisons for job satisfaction and purchasing power are given for different domains (industries) rather than countries. All methodologies (models, number of iterations of a MCMC algorithm, etc) are identical to previous sections, thus only interpretations are given.
\begin{figure}[H]
\centering
\includegraphics{report-029}
\caption{90\% confidence intervals for purchasing power posterior means for all countries.}\label{fig_8}
\end{figure}
\subsection{Job satisfaction among industries}
The number of mathematics developers per industry can be seen in Figure \ref{fig_9}. As before, a minimum of 35 respondents is required for an industry to be taken into account; all others are placed into a single group called \textit{Other}. Note that the selected industries and the number of answers may differ when doing inference on specific features due to missing values (i.e. optional answers in the survey).
\begin{figure}[H]
\centering
\includegraphics{report-031}
\caption{Number of mathematics developers per industry.}\label{fig_9}
\end{figure}
In Figure \ref{fig_10} a posterior predictive check can be seen. As before, sampling distributions of log odds of the answer 'I love my job' calculated from posterior predictive samples of the same sizes as the observed samples are plotted for each of the industries, and there are no noticeable discrepancies between the replicated and the observed data (with respect to the selected statistic). It is also worth noting that the log odds, like the mean, has a normal asymptotic distribution.
\begin{figure}[H]
\centering
\includegraphics{report-036}
\caption{Job satisfaction among industries posterior predictive check. Densities are sampling distributions of log odds of the answer 'I love my job' calculated from posterior predictive samples of the same sizes as the observed samples. Red lines indicate the observed log odds.}\label{fig_10}
\end{figure}
The result of the sampling can be seen in Figure \ref{fig_11}. Plotted are 90\% confidence intervals for posterior probabilities of the answer 'I love my job' for all industries. The only speculation we can make with this data is that for the Software Products and Education domains the probability of the answer 'I love my job' seems higher than for the Finance / Banking domain (although we will stay conservative and not make any claims).
\begin{figure}[H]
\centering
\includegraphics{report-038}
\caption{90\% confidence intervals for posterior probabilities of the answer 'I love my job' for all industries.}\label{fig_11}
\end{figure}
\subsection{Purchasing power among industries}
Posterior predictive check for the hierarchical normal model can be seen in Figure \ref{fig_12}. Sampling distributions of means of posterior predictive samples of the same sizes as the observed samples are plotted along with the observed means. There are no noticeable discrepancies between the replicated and the observed data with respect to the selected statistic (the mean).
\begin{figure}[H]
\centering
\includegraphics{report-043}
\caption{Purchasing power among industries posterior predictive check. Densities are sampling distributions of means of posterior predictive samples of the same sizes as the observed samples. Red lines indicate the observed means.}\label{fig_12}
\end{figure}
Result of the sampling can be seen in Figure \ref{fig_13}. Plotted are 90\% confidence intervals for purchasing power posterior means for all industries. We are most certain that the mean purchasing power is higher in Finance / Banking than in all other domains (except in the Internet domain and the Media / Advertising domain). Mathematics developers in Education have lower mean purchasing power than those in the Software Products, Media / Advertising, Internet, Healthcare and Finance / Banking domains. There are no meaningful differences in remaining domains.
\begin{figure}[H]
\centering
\includegraphics{report-045}
\caption{90\% confidence intervals for purchasing power posterior means for all industries.}\label{fig_13}
\end{figure}
\section{Conclusion}
In this project we compared probabilities that a mathematics developer loves his / her job and mean purchasing powers among countries and industries. Additionally, we were interested in how job satisfaction varies with a set of selected variables.
We suspect that age and working in a big company are negatively correlated with job satisfaction, that working remotely is positively correlated with job satisfaction, and we are quite certain that purchasing power is positively correlated with job satisfaction. Our degree of belief is quite high that mean purchasing power is the highest in Australia and the United States. We also believe that mean purchasing power is the highest in the Finance / Banking industry and that it is much higher there than in Education.
As stated at the beginning, one has to keep in mind that this dataset may not be a representative sample from the population of mathematics developers. An obvious drawback is also the limited amount of data we have for each country; as a consequence, a lot of countries were not taken into account when comparing job satisfaction and purchasing power among groups.
\end{document}
| {
"alphanum_fraction": 0.8102488121,
"avg_line_length": 89.7844036697,
"ext": "tex",
"hexsha": "c89cc768be6f6347f2744fbf38639614334129c5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2fa76b844b080aec7d7a30a18dabf5580ceebcc9",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "inejc/math-devs-survey",
"max_forks_repo_path": "2016/report/report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2fa76b844b080aec7d7a30a18dabf5580ceebcc9",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "inejc/math-devs-survey",
"max_issues_repo_path": "2016/report/report.tex",
"max_line_length": 1400,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2fa76b844b080aec7d7a30a18dabf5580ceebcc9",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "inejc/math-devs-survey",
"max_stars_repo_path": "2016/report/report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4016,
"size": 19573
} |
% Created 2016-10-10 Mon 14:00
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\author{Joseph P. McKenna}
\date{\today}
\title{ACM Computing Seminar Fortran Guide}
\hypersetup{
pdfauthor={Joseph P. McKenna},
pdftitle={ACM Computing Seminar Fortran Guide},
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 24.5.1 (Org mode 8.3.6)},
pdflang={English}}
\begin{document}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec:orgheadline2}
This guide is intended to quickly get you up-and-running in scientific computing with Fortran.
\subsection{About the language}
\label{sec:orgheadline1}
Fortran was created in the 1950s for mathematical \textbf{FOR}-mula \textbf{TRAN}-slation, and has since gone through a number of revisions (FORTRAN 66, 77, and Fortran 90, 95, 2003, 2008, 2015). The language standards are put forth by the Fortran standards committee \href{http://www.j3-fortran.org}{J3} in a document (ISO 1539-1:2010) available for purchase. The language syntax and intrinsic procedures make it especially suited for scientific computing. Fortran is a \textbf{statically-typed} and \textbf{compiled} language, like C++. You must declare the \textbf{type}, i.e. integer, real number, etc. of variables in programs you write. Your programs will be translated from human-readable \emph{source code} into an executable file by software called a \textbf{compiler}. Fortran is \textbf{not case-sensitive}, so \texttt{matrix} and \texttt{MaTrIx} are translated to the same token by the compiler.
\section{Getting started}
\label{sec:orgheadline9}
The software that you need to get started comes prepackaged and ready to download on most Linux distributions. There are a few options for emulating a Linux environment in Windows or Mac OS, such as a virtual machine (VirtualBox) or package manager (MinGW or Cygwin on Windows and Brew on Mac OS).
\subsection{Text editor}
\label{sec:orgheadline3}
You will write the source code of your programs using a text editor. There are many options that have features designed for programming such as syntax highlighting and auto-completion. If you are an impossible-to-please perfectionist, you might want to check out \href{https://www.gnu.org/s/emacs/}{Emacs}. If you are easier to please, you might want to check out \href{https://www.sublimetext.com/}{Sublime Text}.
\subsection{Compiler}
\label{sec:orgheadline4}
To translate your source code into an executable, you will need a Fortran compiler. A free option is \textbf{gfortran}, part of the GNU compiler collection (gcc). The features of the Fortran language that are supported by the \texttt{gfortran} compiler are specified in the \href{https://gcc.gnu.org/onlinedocs/gfortran/}{compiler manual}. This is your most complete reference for the procedures intrinsic to Fortran that your programs can use. At the time of this writing, \texttt{gfortran} completely supports Fortran 95 and partially supports more recent standards.
\subsection{Writing and compiling a program}
\label{sec:orgheadline7}
A program is delimited by the \texttt{program} / \texttt{end program} keywords. A useful construct for keeping code that a program can use is called a \textbf{module}. A module is delimited by the \texttt{module} / \texttt{end module} keywords.
\subsubsection{Hello world}
\label{sec:orgheadline5}
Let's write a tiny program that prints "hello world" to the terminal screen in \texttt{hello.f90}.
\begin{verbatim}
1 program main
2 print*, 'hello world'
3 end program main
\end{verbatim}
To compile the program, execute the following command on the command line in the same directory as \texttt{hello.f90}
\begin{verbatim}
gfortran hello.f90
\end{verbatim}
This produces an executable file named \texttt{a.out} by default (On Windows, this is probably named \texttt{a.exe} by default). To run, execute the file.
\begin{verbatim}
./a.out
\end{verbatim}
\begin{verbatim}
hello world
\end{verbatim}
We could have specified a different name for the executable file during compilation with the \texttt{-o} option of \texttt{gfortran}.
\begin{verbatim}
gfortran hello.f90 -o my_executable_file
\end{verbatim}
On Windows, you should append the \texttt{.exe} extension to \texttt{my\_executable\_file}.
\subsubsection{Template}
\label{sec:orgheadline6}
Now let's write an empty source code template for future projects. Our source code template will consist of two files in the same directory (\url{./source/}). In the following files, the contents of a line after a \texttt{!} symbol is a comment that is ignored by the compiler. One file \texttt{header.f90} contains a module that defines things to be used in the main program.
\begin{verbatim}
1 module header
2 implicit none
3 ! variable declarations and assignments
4 contains
5 ! function and subroutine definitions
6 end module header
\end{verbatim}
This file should be compiled with the \texttt{-c} option of \texttt{gfortran}.
\begin{verbatim}
gfortran -c header.f90
\end{verbatim}
This outputs the \textbf{object file} named \texttt{header.o} by default. An object file contains machine code that can be \emph{linked} to an executable. A separate file \texttt{main.f90} contains the main program.
\begin{verbatim}
1 program main
2 use header
3 implicit none
4 ! variable declarations and assignments
5 ! function and subroutine calls
6 contains
7 ! function and subroutine definitions
8 end program main
\end{verbatim}
On line 2 of \texttt{main.f90}, we instruct the main program to use the contents of \texttt{header.f90}, so we must link \texttt{header.o} when compiling \texttt{main.f90}.
\begin{verbatim}
gfortran main.f90 header.o -o main
\end{verbatim}
To run the program, execute the output file \texttt{main}.
\begin{verbatim}
./main
\end{verbatim}
As you get more experience, you may find it cumbersome to repeatedly execute \texttt{gfortran} commands with every modification to your code. A way around this is to use the \texttt{make} command-line utility. Using \texttt{make}, all the of the compilation commands for your project can be coded in a file named \texttt{makefile} in the same directory as your \texttt{.f90} source files. For example, the template above could use the following \texttt{makefile}.
\begin{verbatim}
1 COMPILER = gfortran
2 SOURCE = main.f90
3 EXECUTABLE = main
4 OBJECTS = header.o
5
6 all: $(EXECUTABLE)
7 $(EXECUTABLE): $(OBJECTS)
8 $(COMPILER) $(SOURCE) $(OBJECTS) -o $(EXECUTABLE)
9 %.o: %.f90
10 $(COMPILER) -c $<
\end{verbatim}
Then, to recompile both \texttt{header.f90} and \texttt{main.f90} after modifying either file, execute
\begin{verbatim}
make
\end{verbatim}
in the same directory as \texttt{makefile}. The first four lines of the \texttt{makefile} above define the compiler command, file name of the main program, file name of the executable to be created, and file name(s) of linked object file(s), respectively. If you wrote a second module in a separate file \texttt{my\_second\_header.f90} that you wanted to \texttt{use} in \texttt{main.f90}, you would modify line 4 of \texttt{makefile} to \texttt{OBJECTS = header.o my\_second\_header.o}. The remaining lines of \texttt{makefile} define instructions for compilation.
\subsection{Exercises}
\label{sec:orgheadline8}
\begin{enumerate}
\item Compile and run \texttt{hello.f90}.
\item Execute \texttt{man gfortran} in any directory to bring up the manual for \texttt{gfortran}. Read the description and skim through the options. Do the same for \texttt{make}.
\end{enumerate}
\section{Data types}
\label{sec:orgheadline23}
In both programs and modules, variables are declared first before other procedures. A variable is declared by listing its data type followed by \texttt{::} and the variable name, i.e. \texttt{integer :: i} or \texttt{real :: x}.
We will use the \texttt{implicit none} keyword at the beginning of each program and module as in line 2 of \texttt{header.f90} and line 3 of \texttt{main.f90} in Section \ref{sec:orgheadline6}. The role of this keyword is to suppress implicit rules for interpreting undeclared variables. By including it, we force ourselves to declare each variable we use, which should facilitate debugging when our program fails to compile. Without it, an undeclared variable with a name such as \texttt{i} is assumed to be of the \texttt{integer} data type whereas an undeclared variable with a name such as \texttt{x} is assumed to be of the \texttt{real} data type.
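For a concrete illustration of the risk (a hypothetical snippet, not part of the template above), consider a program compiled without \texttt{implicit none}; a misspelled variable name silently becomes a new implicitly typed variable instead of triggering a compile error.
\begin{verbatim}
 1 program implicit_demo
 2   ! no 'implicit none', so undeclared names get implicit types
 3   total = 10.0      ! 'total' is implicitly real (does not start with i-n)
 4   totl = total + 1  ! typo: 'totl' silently becomes a second variable
 5   print*, total     ! prints 10.0 rather than the intended 11.0
 6 end program implicit_demo
\end{verbatim}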
In addition to the most common data types presented below, Fortran has a \texttt{complex} data type and support for data types defined by the programmer (see Section \ref{sec:orgheadline10}).
\subsection{The \texttt{logical} type}
\label{sec:orgheadline12}
A \texttt{logical} data type can have values \texttt{.true.} or \texttt{.false.}. Logical expressions are formed by combining values and variables with unary or binary logical operators.
\begin{verbatim}
1 logical :: a,b,c
2 a = .true.
3 b = .false.
4
5 ! '.not.' is the logical negation operator
6 c = .not.a ! c is false
7
 8 ! '.and.' is the logical and operator
9 c = a.and.b ! c is false
10
11 ! '.or.' is the logical or operator
12 c = a.or.b ! c is true
13
14 ! '==' is the test for equality
15 c = 1 == 2 ! c is false
16
17 ! '/=' is test for inequality
18 c = 1 /= 2 ! c is true
19 print*, c
\end{verbatim}
Other logical operators include
\begin{itemize}
\item \texttt{<} or \texttt{.lt.}: less than
\item \texttt{<=} or \texttt{.le.}: less than or equal
\item \texttt{>} or \texttt{.gt.}: greater than
\item \texttt{>=} or \texttt{.ge.}: greater than or equal
\end{itemize}
Logical expressions are often used in \hyperref[sec:orgheadline11]{control structures}.
\subsection{The \texttt{integer} type}
\label{sec:orgheadline13}
An \texttt{integer} data type can have integer values. If a real value is assigned to an \texttt{integer} type, the decimal portion is truncated (rounded toward zero).
\begin{verbatim}
1 integer :: a = 6, b = 7 ! initialize a and b to 6 and 7, resp
2 integer :: c
3
4 c = a + b ! c is 13
5 c = a - b ! c is -1
6 c = a / b ! c is 0
7 c = b / a ! c is 1
8 c = a*b ! c is 42
9 c = a**b ! c is 6^7
10 c = mod(b,a) ! c is (b mod a) = 1
 11   c = merge(1,0,a>b) ! c is 0 (a logical result cannot be assigned directly to an integer)
 12   c = merge(1,0,a<b) ! c is 1
\end{verbatim}
\subsection{Floating point types}
\label{sec:orgheadline14}
The two floating point data types \texttt{real} and \texttt{double precision} correspond to \href{https://en.wikipedia.org/wiki/IEEE_floating_point}{IEEE 32- and 64-bit floating point data types}. A constant called \emph{machine epsilon} is the least positive number in a floating point system that when added to 1 results in a floating point number larger than 1. It is common in numerical analysis error estimates.
\begin{verbatim}
1 real :: a ! declare a single precision float
2 double precision :: b ! declare a double precision float
3
4 ! Print the min/max value and machine epsilon
5 ! for the single precision floating point system
6 print*, tiny(a), huge(a), epsilon(a)
7
8 ! Print the min/max value and machine epsilon
9 ! for the double precision floating point system
10 print*, tiny(b), huge(b), epsilon(b)
\end{verbatim}
\begin{verbatim}
1.17549435E-38 3.40282347E+38 1.19209290E-07
2.2250738585072014E-308 1.7976931348623157E+308 2.2204460492503131E-016
\end{verbatim}
\subsection{The \texttt{character} type}
\label{sec:orgheadline15}
A \texttt{character} data type can have character values, i.e. letters or symbols. A character string is declared with a positive \texttt{integer} specifying its maximum possible length.
\begin{verbatim}
1 ! declare a character variable s at most 32 characters
2 character(32) :: s
3
4 ! assign value to s
5 s = 'file_name'
6
7 ! trim trailing spaces from s and
8 ! append a character literal '.txt'
9 print*, trim(s) // '.txt'
\end{verbatim}
\begin{verbatim}
file_name.txt
\end{verbatim}
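A few other character intrinsics can be handy; the following sketch (expected values noted in the comments) uses \texttt{len}, \texttt{len\_trim}, and \texttt{index}.
\begin{verbatim}
 1 character(16) :: s = 'fortran'
 2
 3 print*, len(s)          ! declared length: 16
 4 print*, len_trim(s)     ! length without trailing blanks: 7
 5 print*, index(s, 'ran') ! starting position of a substring: 5
\end{verbatim}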
\subsection{Casting}
\label{sec:orgheadline16}
An \texttt{integer} can be cast to a \texttt{real} and vice versa.
\begin{verbatim}
1 integer :: a = 1, b
2 real :: c, PI = 3.14159
3
4 ! explicit cast real to integer
5 b = int(PI) ! b is 3
6
7 ! explicit cast integer to real then divide
8 c = a/real(b) ! c is .3333...
9
10 ! divide then implicit cast real to integer
11 c = a/b ! c is 0
\end{verbatim}
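If you want rounding to the nearest integer rather than truncation, the intrinsic \texttt{nint} does that; a small sketch:
\begin{verbatim}
 1 integer :: b
 2 real :: PI = 3.14159
 3
 4 b = int(PI)       ! b is 3 (truncates)
 5 b = nint(PI)      ! b is 3 (rounds to nearest)
 6 b = nint(2.71828) ! b is 3 (rounds up)
\end{verbatim}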
\subsection{The \texttt{parameter} keyword}
\label{sec:orgheadline17}
The \texttt{parameter} keyword is used to declare constants. A constant must be assigned a value at declaration and cannot be reassigned a value. The following code is not valid because of an attempt to reassign a constant.
\begin{verbatim}
1 ! declare constant variable
2 real, parameter :: PI = 2.*asin(1.) ! 'asin' is arcsine
3
4 PI = 3 ! not valid
\end{verbatim}
The compiler produces an error like \texttt{Error: Named constant ‘pi’ in variable definition context (assignment)}.
\subsection{Setting the precision}
\label{sec:orgheadline18}
The \texttt{kind} function returns an \texttt{integer} for each data type. The precision of a floating point number can be specified at declaration by a literal or constant \texttt{integer} of the desired \texttt{kind}.
\begin{verbatim}
1 ! declare a single precision
2 real :: r
3 ! declare a double precision
4 double precision :: d
5 ! store single precision and double precision kinds
6 integer, parameter :: sp = kind(r), dp = kind(d)
7 ! set current kind
8 integer, parameter :: rp = sp
9
10 ! declare real b in double precision
11 real(dp) :: b
12
13 ! declare real a with precision kind rp
14 real(rp) :: a
15
16 ! cast 1 to real with precision kind rp and assign to a
17 a = 1.0_rp
18
19 ! cast b to real with precision kind rp and assign to a
20 a = real(b,rp)
\end{verbatim}
To switch the precision of each variable above with kind \texttt{rp}, we would only need to modify the declaration of \texttt{rp} on line 8.
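An alternative, assuming your compiler supports the Fortran 2008 \texttt{iso\_fortran\_env} intrinsic module (recent versions of \texttt{gfortran} do), is to take the kind constants \texttt{real32} and \texttt{real64} from that module rather than deriving them with \texttt{kind}. A sketch:
\begin{verbatim}
 1 use iso_fortran_env, only: real32, real64
 2 ! set the working precision in one place
 3 integer, parameter :: rp = real64
 4
 5 real(rp) :: a
 6 a = 1.0_rp
\end{verbatim}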
\subsection{Pointers}
\label{sec:orgheadline19}
Pointers have the same meaning in Fortran as in C++. A pointer is a variable that holds the \textbf{memory address} of a variable. The implementation of pointers is qualitatively different in Fortran than in C++. In Fortran, the user cannot view the memory address that a pointer stores. A pointer variable is declared with the \texttt{pointer} modifier, and a variable that it points to is declared with the \texttt{target} modifier. The types of a \texttt{pointer} and its \texttt{target} must match.
\begin{verbatim}
1 ! declare pointer
2 integer, pointer :: p
3 ! declare targets
4 integer, target :: a = 1, b = 2
5
6 p => a ! p has same memory address as a
7 p = 2 ! modify value at address
8 print*, a==2 ! a is 2
9
10 p => b ! p has same memory address as b
11 p = 1 ! modify value at address
12 print*, b==1 ! b is 1
13
14 ! is p associated with a target?
15 print*, associated(p)
16
17 ! is p associated with the target a?
18 print*, associated(p, a)
19
20 ! point to nowhere
21 nullify(p)
\end{verbatim}
\begin{verbatim}
T
T
T
F
\end{verbatim}
\subsection{Arrays}
\label{sec:orgheadline22}
The length of an array can be fixed or dynamic. The index of an array starts at 1 by default, but any index range can be specified.
\subsubsection{Fixed-length arrays}
\label{sec:orgheadline20}
An array can be declared with a single \texttt{integer} specifying its length, in which case the first index of the array is 1. An array can also be declared with an \texttt{integer} range specifying its first and last index.
Here's a one-dimensional array example.
\begin{verbatim}
 1 ! declare array of length 5
2 ! index range is 1 to 5 (inclusive)
3 real :: a(5)
4
5 ! you can work with each component individually
6 ! set the first component to 1
7 a(1) = 1.0
8
9 ! or you can work with the whole array
10 ! set the whole array to 2
11 a = 2.0
12
 13 ! or you can work with slices of the array
14 ! set elements 2 to 4 (inclusive) to 3
15 a(2:4) = 3.0
\end{verbatim}
And, here's a two-dimensional array example.
\begin{verbatim}
1 ! declare 5x5 array
2 ! index range is 1 to 5 (inclusive) in both axes
3 real :: a(5,5)
4
5 ! you can work with each component individually
6 ! set upper left component to 1
7 a(1,1) = 1.0
8
9 ! or you can work with the whole array
10 ! set the whole array to 2
11 a = 2.0
12
 13 ! or you can work with slices of the array
14 ! set a submatrix to 3
15 a(2:4, 1:2) = 3.0
\end{verbatim}
Fortran includes intrinsic functions to operate on an array \texttt{a} such as
\begin{itemize}
\item \texttt{size(a)}: number of elements of \texttt{a}
\item \texttt{minval(a)}: minimum value of \texttt{a}
\item \texttt{maxval(a)}: maximum value of \texttt{a}
\item \texttt{sum(a)}: sum of elements in \texttt{a}
\item \texttt{product(a)}: product of elements in \texttt{a}
\end{itemize}
See the \texttt{gfortran} documentation for more.
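For instance, a small sketch exercising a few of these intrinsics (expected values noted in the comments):
\begin{verbatim}
 1 real :: a(4) = [1.0, 2.0, 3.0, 4.0]
 2
 3 print*, size(a)              ! 4
 4 print*, minval(a), maxval(a) ! 1.0 and 4.0
 5 print*, sum(a), product(a)   ! 10.0 and 24.0
\end{verbatim}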
\subsubsection{Dynamic length arrays}
\label{sec:orgheadline21}
Dynamic arrays are declared with the \texttt{allocatable} modifier. Before storing values in such an array, you must \texttt{allocate} memory for the array. After you are finished with the array, you ought to \texttt{deallocate} the memory that it occupies.
Here's a one-dimensional array example.
\begin{verbatim}
1 ! declare a one-dim. dynamic length array
2 real, allocatable :: a(:)
3
4 ! allocate memory for a
5 allocate(a(5))
6
7 ! now you can treat a like a normal array
8 a(1) = 1.0
9 ! etc...
10
11 ! deallocate memory occupied by a
12 deallocate(a)
13
14 ! we can change the size and index range of a
15 allocate(a(0:10))
16
17 a(0) = 1.0
18 ! etc...
19
20 deallocate(a)
\end{verbatim}
Without the last \texttt{deallocate} statement on line 20 the code above is still valid, but the memory allocated for \texttt{a} will not be freed and cannot be reused elsewhere.
Here's a two-dimensional array example.
\begin{verbatim}
1 ! declare a two-dim. dynamic length array
2 real, allocatable :: a(:,:)
3
4 ! allocate memory for a
5 allocate(a(5,5))
6
7 ! now you can treat a like a normal array
8 a(1,1) = 1.0
9 ! etc...
10
11 ! deallocate memory occupied by a
12 deallocate(a)
13
14 ! we can change the size and index range of a
15 allocate(a(0:10,0:10))
16
17 a(0,0) = 1.0
18 ! etc...
19
20 deallocate(a)
\end{verbatim}
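The intrinsic \texttt{allocated} reports whether an \texttt{allocatable} array currently has memory allocated, which helps avoid allocating twice or deallocating an unallocated array; a brief sketch:
\begin{verbatim}
 1 real, allocatable :: a(:)
 2
 3 ! allocate only if not already allocated
 4 if (.not. allocated(a)) allocate(a(10))
 5
 6 ! ... use a ...
 7
 8 ! deallocate only if currently allocated
 9 if (allocated(a)) deallocate(a)
\end{verbatim}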
\section{Control structures}
\label{sec:orgheadline11}
Control structures are used to direct the flow of code execution.
\subsection{Conditionals}
\label{sec:orgheadline27}
\subsubsection{The \texttt{if} construct}
\label{sec:orgheadline24}
The \texttt{if} construct controls execution of a single block of code. If the block is more than one line, it must be delimited by an \texttt{if} / \texttt{end if} pair. If the block is a single statement, the whole construct can be written on one line. A common typo is to forget the \texttt{then} keyword following the logical expression in an \texttt{if} / \texttt{end if} pair.
\begin{verbatim}
1 real :: num = 0.75
2
3 if (num < .5) then
4 print*, 'num: ', num
5 print*, 'num is less than 0.5'
6 end if
7
8 if (num > .5) print*, 'num is greater than 0.5'
\end{verbatim}
\begin{verbatim}
num is greater than 0.5
\end{verbatim}
\subsubsection{Example: \texttt{if} / \texttt{else} and random number generation}
\label{sec:orgheadline25}
The \texttt{if} / \texttt{else} construct chooses between two mutually exclusive blocks of code.
The following code generates a random number between 0 and 1, then prints the number and whether or not it is greater than 0.5.
\begin{verbatim}
1 real :: num
2
3 ! seed random number generator
4 call srand(789)
5
6 ! rand() returns a random number between 0 and 1
7 num = rand()
8
9 print*, 'num: ', num
10
11 if (num < 0.5) then
12 print*, 'num is less than 0.5'
13 else
 14     print*, 'num is greater than 0.5'
15 end if
16
17 ! do it again
18 num = rand()
19
20 print*, 'num: ', num
21
22 if (num < 0.5) then
23 print*, 'num is less than 0.5'
24 else
 25     print*, 'num is greater than 0.5'
26 end if
\end{verbatim}
\begin{verbatim}
num: 6.17480278E-03
num is less than 0.5
num: 0.783314705
 num is greater than 0.5
\end{verbatim}
Since the random number generator was seeded with a literal integer, the above code will produce the \emph{same} output each time it is run.
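Note that \texttt{srand} and \texttt{rand} are GNU extensions. The standard intrinsics \texttt{random\_seed} and \texttt{random\_number} (which also appear later in this tutorial) do the same job; a minimal sketch:
\begin{verbatim}
 1 real :: num
 2
 3 ! initialize the standard generator (processor-dependent seed)
 4 call random_seed()
 5
 6 ! fill num with a uniform random value in [0,1)
 7 call random_number(num)
 8 print*, num
\end{verbatim}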
\subsubsection{Example: \texttt{if} / \texttt{else if} / \texttt{else}}
\label{sec:orgheadline26}
The \texttt{if} / \texttt{else if} / \texttt{else} construct chooses among three or more mutually exclusive blocks of code. The following code generates a random number between 0 and 1, then prints the number and which quarter of the interval \([0,1]\) the number lies in.
\begin{verbatim}
1 real :: num
2
3 ! seed random number generator with current time
4 call srand(time())
5
6 ! rand() returns a random number between 0 and 1
7 num = rand()
8
9 print*, 'num:', num
10
11 if (num > 0.75) then
12 print*, 'num is between 0.75 and 1'
13 else if (num > 0.5) then
14 print*, 'num is between 0.5 and 0.75'
15 else if (num > 0.25) then
16 print*, 'num is between 0.25 and 0.5'
17 else
18 print*, 'num is between 0 and 0.25'
19 end if
\end{verbatim}
\begin{verbatim}
num: 0.679201365
num is between 0.5 and 0.75
\end{verbatim}
Since the random number generator was seeded with the current time, the above code will produce a \emph{different} output each time it is run.
\subsection{Loops}
\label{sec:orgheadline34}
\subsubsection{The \texttt{do} loop}
\label{sec:orgheadline29}
A \texttt{do} loop iterates a block of code over a range of integers. It takes two \texttt{integer} arguments specifying the minimum and maximum (inclusive) of the range and takes an optional third \texttt{integer} argument specifying the iteration stride in the form \texttt{do i=min,max,stride}. If omitted, the stride is 1.
The following code assigns a value to each component of an array then prints it.
\begin{verbatim}
1 integer :: max = 10, i
2 real, allocatable :: x(:)
3
4 allocate(x(0:max))
5
6 do i = 0,max
7 ! assign to each array component
8 x(i) = i / real(max)
9
10 ! print current component
11 print "('x(', i0, ') = ', f3.1)", i, x(i)
12 end do
13
14 deallocate(x)
\end{verbatim}
\begin{verbatim}
x(0) = 0.0
x(1) = 0.1
x(2) = 0.2
x(3) = 0.3
x(4) = 0.4
x(5) = 0.5
x(6) = 0.6
x(7) = 0.7
x(8) = 0.8
x(9) = 0.9
x(10) = 1.0
\end{verbatim}
An \emph{implicit} \texttt{do loop} can be used for formulaic array assignments. The following code creates the same array as the last example.
\begin{verbatim}
 1 integer :: max = 10, i
2 real, allocatable :: x(:)
3
4 allocate(x(0:max))
5
6 ! implicit do loop for formulaic array assignment
7 x = [(i / real(max), i=0, max)]
8
9 deallocate(x)
\end{verbatim}
\paragraph{Example: row-major matrix}
\label{sec:orgheadline28}
The following code stores matrix data in a one-dimensional array named \texttt{matrix} in \texttt{row-major} order. This means the first \texttt{n\_cols} elements of the array will contain the first row of the matrix, the next \texttt{n\_cols} of the array will contain the second row of the matrix, etc.
\begin{verbatim}
1 integer :: n_rows = 4, n_cols = 3
2 real, allocatable :: matrix(:)
3 ! temporary indices
4 integer :: i,j,k
5
6 ! index range is 1 to 12 (inclusive)
7 allocate(matrix(1:n_rows*n_cols))
8
9 ! assign 0 to all elements of matrix
10 matrix = 0.0
11
12 do i = 1,n_rows
13 do j = 1,n_cols
14 ! convert (i,j) matrix index to "flat" row-major index
15 k = (i-1)*n_cols + j
16
17 ! assign 1 to diagonal, 2 to sub/super-diagonal
18 if (i==j) then
19 matrix(k) = 1.0
20 else if ((i==j-1).or.(i==j+1)) then
21 matrix(k) = 2.0
22 end if
23 end do
24 end do
25
26 ! print matrix row by row
27 do i = 1,n_rows
28 print "(3(f5.1))", matrix(1+(i-1)*n_cols:i*n_cols)
29 end do
30
31 deallocate(matrix)
\end{verbatim}
\begin{verbatim}
1.0 2.0 0.0
2.0 1.0 2.0
0.0 2.0 1.0
0.0 0.0 2.0
\end{verbatim}
\subsubsection{The \texttt{do while} loop}
\label{sec:orgheadline32}
A \texttt{do while} loop iterates while a logical condition evaluates to \texttt{.true.}.
\paragraph{Example: truncated sum}
\label{sec:orgheadline30}
The following code approximates the geometric series
\begin{equation*}
\sum_{n=1}^{\infty}\left(\frac12\right)^n=1.
\end{equation*}
The \texttt{do while} loop begins with \(n=1\) and exits when the current summand no longer increases the current sum. At each step it prints the iteration number, the current sum, and the absolute error of the partial sum after \(N\) terms,
\begin{equation*}
E=1-\sum_{n=1}^{N}\left(\frac12\right)^n.
\end{equation*}
\begin{verbatim}
1 real :: sum = 0.0, base = 0.5, tol = 1e-4
2 real :: pow = 0.5
3 integer :: iter = 1
4
5 do while (sum+pow > sum)
6 ! add pow to sum
7 sum = sum+pow
8 ! update pow by one power of base
9 pow = pow*base
10
11 print "('Iter: ', i3, ', Sum: ', f0.10, ', Abs Err: ', f0.10)", iter, sum, 1-sum
12
13 ! update iter by 1
14 iter = iter+1
15 end do
\end{verbatim}
\begin{verbatim}
Iter: 1, Sum: .5000000000, Abs Err: .5000000000
Iter: 2, Sum: .7500000000, Abs Err: .2500000000
Iter: 3, Sum: .8750000000, Abs Err: .1250000000
Iter: 4, Sum: .9375000000, Abs Err: .0625000000
Iter: 5, Sum: .9687500000, Abs Err: .0312500000
Iter: 6, Sum: .9843750000, Abs Err: .0156250000
Iter: 7, Sum: .9921875000, Abs Err: .0078125000
Iter: 8, Sum: .9960937500, Abs Err: .0039062500
Iter: 9, Sum: .9980468750, Abs Err: .0019531250
Iter: 10, Sum: .9990234375, Abs Err: .0009765625
Iter: 11, Sum: .9995117188, Abs Err: .0004882812
Iter: 12, Sum: .9997558594, Abs Err: .0002441406
Iter: 13, Sum: .9998779297, Abs Err: .0001220703
Iter: 14, Sum: .9999389648, Abs Err: .0000610352
Iter: 15, Sum: .9999694824, Abs Err: .0000305176
Iter: 16, Sum: .9999847412, Abs Err: .0000152588
Iter: 17, Sum: .9999923706, Abs Err: .0000076294
Iter: 18, Sum: .9999961853, Abs Err: .0000038147
Iter: 19, Sum: .9999980927, Abs Err: .0000019073
Iter: 20, Sum: .9999990463, Abs Err: .0000009537
Iter: 21, Sum: .9999995232, Abs Err: .0000004768
Iter: 22, Sum: .9999997616, Abs Err: .0000002384
Iter: 23, Sum: .9999998808, Abs Err: .0000001192
Iter: 24, Sum: .9999999404, Abs Err: .0000000596
Iter: 25, Sum: 1.0000000000, Abs Err: .0000000000
\end{verbatim}
\paragraph{Example: estimating machine epsilon}
\label{sec:orgheadline31}
The following code finds machine epsilon by repeatedly halving a value, in effect shifting its single set bit rightward, until adding it to 1 no longer changes the result. Think about how it does this. Could you write an algorithm that finds machine epsilon using the \texttt{rshift} function, which shifts bits rightward?
\begin{verbatim}
1 double precision :: eps
2 integer, parameter :: dp = kind(eps)
3 integer :: count = 1
4
5 eps = 1.0_dp
6 do while (1.0_dp + eps*0.5 > 1.0_dp)
7 eps = eps*0.5
8 count = count+1
9 end do
10
11 print*, eps, epsilon(eps)
12 print*, count, digits(eps)
\end{verbatim}
\begin{verbatim}
2.2204460492503131E-016 2.2204460492503131E-016
53 53
\end{verbatim}
\subsubsection{Example: the \texttt{exit} keyword}
\label{sec:orgheadline33}
The \texttt{exit} keyword terminates execution of the innermost enclosing loop.
The following code finds the \emph{hailstone sequence} of \(a_1=6\) defined recursively by
\begin{equation*}
a_{n+1} =
\begin{cases}
a_n/2 & \text{if } a_n \text{ is even}\\
3a_n+1 & \text{ if } a_n \text{ is odd}
\end{cases}
\end{equation*}
for \(n\geq1\). It is an open conjecture that the hailstone sequence of any initial value \(a_1\) converges to the periodic sequence \(4, 2, 1, 4, 2, 1\ldots\). Luckily, it does for \(a_1=6\) and the following infinite \texttt{do} loop exits.
\begin{verbatim}
1 integer :: a = 6, count = 1
2
3 ! infinite loop
4 do
5 ! if a is even, divide by 2
6 ! otherwise multiply by 3 and add 1
7 if (mod(a,2)==0) then
8 a = a/2
9 else
10 a = 3*a+1
11 end if
12
13 ! if a is 4, exit infinite loop
14 if (a==4) then
15 exit
16 end if
17
18 ! print count and a
19 print "('count: ', i2, ', a: ', i2)", count, a
20
21 ! increment count
22 count = count + 1
23 end do
\end{verbatim}
\begin{verbatim}
count: 1, a: 3
count: 2, a: 10
count: 3, a: 5
count: 4, a: 16
count: 5, a: 8
\end{verbatim}
\section{Input/Output}
\label{sec:orgheadline40}
\subsection{File input/output}
\label{sec:orgheadline37}
\subsubsection{Reading data from file}
\label{sec:orgheadline35}
The contents of a data file can be read into an array using \texttt{read}. Suppose you have a file \texttt{./data/array.txt} that contains two columns of data
\begin{verbatim}
1 1.23
2 2.34
3 3.45
4 4.56
5 5.67
\end{verbatim}
This file can be opened with the \texttt{open} command. The required first argument of \texttt{open} is an \texttt{integer} that specifies a \emph{file unit} for \texttt{array.txt}. Choose any number that is not in use. The unit numbers \texttt{0}, \texttt{5}, and \texttt{6} are typically preconnected to standard error, input, and output and should not be reused accidentally. Data are read in \textbf{row-major} order, i.e. across the first row, then across the second row, etc.
The following code reads the contents of \texttt{./data/array.txt} into an array called \texttt{array}.
\begin{verbatim}
1 ! declare array
2 real :: array(5,2)
3 integer :: row
4
5 ! open file and assign file unit 10
6 open (10, file='./data/array.txt', action='read')
7
8 ! read data from file unit 10 into array
9 do row = 1,5
10 read(10,*) array(row,:)
11 end do
12
13 ! close file
14 close(10)
\end{verbatim}
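If the number of rows is not known in advance, a common pattern is to read until the end of the file using the \texttt{iostat} specifier; a sketch, assuming the same two-column layout as \texttt{array.txt}:
\begin{verbatim}
 1 real :: row_data(2)
 2 integer :: ios
 3
 4 ! open file and assign file unit 10
 5 open (10, file='./data/array.txt', action='read')
 6
 7 do
 8   ! a nonzero iostat signals end of file or a read error
 9   read(10, *, iostat=ios) row_data
10   if (ios /= 0) exit
11   print*, row_data
12 end do
13
14 close(10)
\end{verbatim}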
\subsubsection{Writing data to file}
\label{sec:orgheadline36}
Data can be written to a file with the \texttt{write} command.
\begin{verbatim}
1 real :: x
2 integer :: i, max = 5
3
4 ! open file, specify unit 10, overwrite if exists
5 open(10, file='./data/sine.txt', action='write', status='replace')
6
7 do i = 0,max
8 x = i / real(max)
9
10 ! write to file unit 10
11 write(10,*) x, sin(x)
 12 end do
 13
 14 ! close file
 15 close(10)
\end{verbatim}
This produces a file \texttt{sine.txt} in the directory \texttt{data} containing
\begin{verbatim}
0.00000000 0.00000000
0.200000003 0.198669329
0.400000006 0.389418334
0.600000024 0.564642489
0.800000012 0.717356086
1.00000000 0.841470957
\end{verbatim}
\subsection{Formatted input/output}
\label{sec:orgheadline38}
The format of a \texttt{print}, \texttt{write}, or \texttt{read} statement can be specified with a \texttt{character} string. A format character string replaces the \texttt{*} symbol in \texttt{print*} and the second \texttt{*} symbol in \texttt{read(*,*)} or \texttt{write(*,*)}. A format string is a list of literal character strings or character descriptors from
\begin{itemize}
\item \texttt{a}: character string
\item \texttt{iW}: integer
\item \texttt{fW.D}: floating point
\item \texttt{esW.DeE}: scientific notation
\item \texttt{Wx}: space
\end{itemize}
where \texttt{W}, \texttt{D}, and \texttt{E} should be replaced by numbers specifying width, number of digits, or number of exponent digits, resp. The width of a formatted integer or float defaults to the width of the number when \texttt{W} is \texttt{0}.
\begin{verbatim}
1 character(32) :: fmt, a = 'word'
2 integer :: b = 1
3 real :: c = 2.0, d = 3.0
4
5 ! character string and 4 space-delimited values
 6 print "('four values: ', a, 1x, i0, 1x, f0.1, 1x, es6.1e1)", trim(a), b, c, d
7
8 ! character string and 2 space-delimited values
9 fmt = '(a, 2(f0.1, 1x))'
10 print fmt, 'two values: ', c, d
\end{verbatim}
\begin{verbatim}
four values: word 1 2.0 3.0E+0
two values: 2.0 3.0
\end{verbatim}
\subsection{Command line arguments}
\label{sec:orgheadline39}
Arguments can be passed to a program from the command line and retrieved with \texttt{get\_command\_argument}. Argument number 0 is the name of the program executable file and the remaining arguments are those passed by the user. The following program accepts any number of arguments, each at most 32 characters, and prints them.
\begin{verbatim}
1 program main
2 implicit none
3
4 character(32) :: arg
5 integer :: n_arg = 0
6
7 do
8 ! get next command line argument
9 call get_command_argument(n_arg, arg)
10
11 ! if it is empty, exit
12 if (len_trim(arg) == 0) exit
13
14 ! print argument to screen
15 print"('argument ', i0, ': ', a)", n_arg, trim(arg)
16
17 ! increment count
18 n_arg = n_arg+1
19 end do
20
21 ! print total number of arguments
22 print "('number of arguments: ', i0)", n_arg
23
24 end program main
\end{verbatim}
After compiling to \texttt{a.out}, you can pass arguments in the executing command.
\begin{verbatim}
./a.out 1 2 34
\end{verbatim}
\begin{verbatim}
argument 0: ./a.out
argument 1: 1
argument 2: 2
argument 3: 34
number of arguments: 4
\end{verbatim}
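The intrinsic \texttt{command\_argument\_count} returns the number of user-supplied arguments directly (it does not count the program name), which can replace the manual counting above; a short sketch:
\begin{verbatim}
 1 program main
 2   implicit none
 3   print "('number of user arguments: ', i0)", command_argument_count()
 4 end program main
\end{verbatim}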
\section{Functions/Subroutines}
\label{sec:orgheadline52}
Functions and subroutines are callable blocks of code. A \texttt{function} returns a value computed from a set of arguments. A \texttt{subroutine} executes a block of code for a set of arguments but does not explicitly return a value. By convention, a \texttt{function} should not modify its arguments, whereas a \texttt{subroutine} typically returns results to the calling program through its arguments. Both functions and subroutines are defined after the \texttt{contains} keyword in a \texttt{module} or \texttt{program}.
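As a minimal contrast (a sketch; either definition would sit after the \texttt{contains} keyword of a program or module):
\begin{verbatim}
 1 ! a function returns its result as a value...
 2 function square(x) result(y)
 3   real :: x, y
 4   y = x*x
 5 end function square
 6
 7 ! ...whereas a subroutine returns results through its arguments
 8 subroutine square_sub(x, y)
 9   real, intent(in) :: x
10   real, intent(out) :: y
11   y = x*x
12 end subroutine square_sub
\end{verbatim}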
\subsection{Writing a function}
\label{sec:orgheadline42}
The definition of a function starts with the name of the function followed by a list of arguments and return variable. The data types of the arguments and return variable are defined within the \texttt{function} body.
\subsubsection{Example: \texttt{linspace}: generating a set of equally-spaced points}
\label{sec:orgheadline41}
The following program defines a function \texttt{linspace} that returns a set of equidistant points on an interval. The main program calls this function.
\begin{verbatim}
1 program main
2 implicit none
3
4 real :: xs(10)
5
6 ! call function linspace to set values in xs
7 xs = linspace(0.0, 1.0, 10)
8
9 ! print returned value of xs
10 print "(10(f0.1, 1x))" , xs
11
12 contains
13
14 ! linspace: return a set of equidistant points on an interval
15 ! min: minimum value of interval
16 ! max: maximum value of interval
17 ! n_points: number of points in returned set
18 ! xs: set of points
19 function linspace(min, max, n_points) result(xs)
20 real :: min, max, dx
21 integer :: n_points
22 integer :: i
23 real :: xs(n_points)
24
25 ! calculate width of subintervals
26 dx = (max-min) / real(n_points-1)
27
28 ! fill xs with points
29 do i = 1,n_points
30 xs(i) = min + (i-1)*dx
31 end do
32
33 end function linspace
34
35 end program main
\end{verbatim}
\begin{verbatim}
.0 .1 .2 .3 .4 .6 .7 .8 .9 1.0
\end{verbatim}
\subsection{Writing a subroutine}
\label{sec:orgheadline44}
The definition of a subroutine begins with the name of the subroutine and list of arguments. Arguments are defined within the \texttt{subroutine} body with one of the following intents
\begin{itemize}
\item \texttt{intent(in)}: changes to the argument are not returned
\item \texttt{intent(inout)}: changes to the argument are returned
\item \texttt{intent(out)}: the initial value of the argument is ignored and changes to the argument are returned.
\end{itemize}
Subroutines are called using the \texttt{call} keyword followed by the subroutine name.
\subsubsection{Example: polar coordinates}
\label{sec:orgheadline43}
The following code defines a subroutine \texttt{polar\_coord} that returns the polar coordinates \((r,\theta)\) defined by \(r=\sqrt{x^2+y^2}\) and \(\theta=\arctan(y/x)\) from the rectangular coordinate pair \((x,y)\).
\begin{verbatim}
1 program main
2
3 real :: x = 1.0, y = 1.0, rad, theta
4
5 ! call subroutine that returns polar coords
6 call polar_coord(x, y, rad, theta)
7 print*, rad, theta
8
9 contains
10
11 ! polar_coord: return the polar coordinates of a rect coord pair
12 ! x,y: rectangular coord
13 ! rad,theta: polar coord
14 subroutine polar_coord(x, y, rad, theta)
15 real, intent(in) :: x, y
16 real, intent(out) :: rad, theta
17
18 ! compute polar coord
19 ! hypot = sqrt(x**2+y**2) is an intrinsic function
20 ! atan2 = arctan with correct sign is an intrinsic function
21 rad = hypot(x, y)
22 theta = atan2(y, x)
23
24 end subroutine polar_coord
25
26 end program main
\end{verbatim}
\begin{verbatim}
1.41421354 0.785398185
\end{verbatim}
\subsection{Passing procedures as arguments}
\label{sec:orgheadline47}
An \texttt{interface} can be used to pass a function or subroutine to another function or subroutine. For this purpose, an \texttt{interface} is defined in the receiving procedure essentially the same way as the passed procedure itself, but with only declarations and not the implementation.
\subsubsection{Example: Newton's method for rootfinding}
\label{sec:orgheadline45}
Newton's method for finding the root of a function \(f:\mathbb{R}\rightarrow\mathbb{R}\) refines an initial guess \(x_0\) according to the iteration rule
\begin{equation*}
x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}
\end{equation*}
for \(n\geq1\) until \(|f(x_n)|\) is less than a chosen tolerance or a maximum number of iterations is reached.
The following code defines a subroutine \texttt{newton\_root} that returns a root of an input function as well as the number of iterations of Newton's method used to find the root. It is called by the main program to approximate the positive root of \(f(x)=x^2-2\) from an initial guess \(x_0=1\).
\begin{verbatim}
1 program main
2 implicit none
3
4 character(64) :: fmt
5 real :: x = 1.0
6 integer :: iter = 1000
7
8 ! call newton rootfinding function
9 call newton_root(f, df, x, iter, 1e-6, .true.)
10
11 ! print found root and number of iterations used
12 fmt = "('number of iterations: ', i0, ', x: ', f0.7, ', f(x): ', f0.7)"
13 print fmt, iter, x, f(x)
14
15 contains
16
17 ! function f(x) = x^2 - 2
18 function f(x) result(y)
19 real :: x, y
20 y = x*x - 2
21 end function f
22
23 ! function df(x) = 2x
24 function df(x) result(dy)
25 real :: x, dy
26 dy = 2*x
27 end function df
28
29 ! newton_root: newtons method for rootfinding
30 ! f: function with root
31 ! df: derivative of f
32 ! x: sequence iterate
33 ! iter: max number of iterations at call, number of iterations at return
34 ! tol: absolute tolerance
35 ! print_iters: boolean to toggle verbosity
36 subroutine newton_root(f, df, x, iter, tol, print_iters)
37
38 ! interface to function f
39 interface
40 function f(x) result(y)
41 real :: x, y
42 end function f
43 end interface
44
45 ! interface to function df
46 interface
47 function df(x) result(dy)
48 real :: x, dy
49 end function df
50 end interface
51
52 real, intent(inout) :: x
53 real, intent(in) :: tol
54 integer, intent(inout) :: iter
55 logical, intent(in) :: print_iters
56 integer :: max_iters
57
58 max_iters = iter
59 iter = 0
60
61 ! while f(x) greater than absolute tolerance
62 ! and max number of iterations not exceeded
63 do while (abs(f(x))>tol.and.iter<max_iters)
64 ! print current x and f(x)
65 if (print_iters) print "('f(', f0.7, ') = ', f0.7)", x, f(x)
66
67 ! Newton's update rule
68 x = x - f(x)/df(x)
69
70 ! increment number of iterations
71 iter = iter + 1
72 end do
73
74 end subroutine newton_root
75
76 end program main
\end{verbatim}
\begin{verbatim}
f(1.0000000) = -1.0000000
f(1.5000000) = .2500000
f(1.4166666) = .0069444
f(1.4142157) = .0000060
number of iterations: 4, x: 1.4142135, f(x): -.0000001
\end{verbatim}
\subsubsection{Example: The midpoint rule for definite integrals}
\label{sec:orgheadline46}
The midpoint rule approximates the definite integral \(\int_a^bf(x)~dx\) with integrand \(f:\mathbb{R}\rightarrow\mathbb{R}\) by
\begin{equation}
\label{eq:orglatexenvironment1}
\Delta x\sum_{i=1}^nf(\bar{x}_i)
\end{equation}
where \(\Delta x=(b-a)/n\), \(x_i=a+i\Delta x\) and \(\bar{x}_i=(x_{i-1}+x_i)/2\).
The following code defines a function \texttt{midpoint} that computes the approximation eq. \ref{eq:orglatexenvironment1} given \(a\), \(b\), and \(n\). The main program calls \texttt{midpoint} to approximate the definite integral of \(f(x)=1/x\) on \([1,e]\) for a range of \(n\).
\begin{verbatim}
1 program main
2 implicit none
3
4 real, parameter :: E = exp(1.)
5 integer :: n
6 real :: integral
7
8 ! Approximate the integral of 1/x from 1 to e
9 ! with the midpoint rule for a range of number of subintervals
10 do n = 2,20,2
11 print "('n: ', i0, ', M_n: ', f0.6)", n, midpoint(f, 1.0, E, n)
12 end do
13
14 contains
15
16 ! function f(x) = 1/x
17 function f(x) result(y)
18 real :: x, y
19 y = 1.0/x
20 end function f
21
22 ! midpoint: midpoint rule for definite integral
23 ! f: integrand
24 ! a: left endpoint of interval of integration
25 ! b: right endpoint of interval of integration
26 ! n: number of subintervals
27 ! sum: approximate definite integral
28 function midpoint(f, a, b, n) result(sum)
29
30 ! interface to f
31 interface
 32       function f(x) result(y)
33 real :: x, y
34 end function f
35 end interface
36
37 real :: a, b, min, xi, dx, sum
38 integer :: n, i
39
40 ! subinterval increment
41 dx = (b-a)/real(n)
42 ! minimum to increment from
43 min = a - dx/2.0
 44     sum = 0.0 ! initialize the accumulator
45 ! midpoint rule
46 do i = 1,n
47 xi = min + i*dx
48 sum = sum + f(xi)
49 end do
50 sum = sum*dx
51 end function midpoint
52
53 end program main
\end{verbatim}
\begin{verbatim}
n: 2, M_n: .976360
n: 4, M_n: .993575
n: 6, M_n: .997091
n: 8, M_n: .998353
n: 10, M_n: .998942
n: 12, M_n: .999264
n: 14, M_n: .999459
n: 16, M_n: .999585
n: 18, M_n: .999672
n: 20, M_n: .999735
\end{verbatim}
\subsection{Polymorphism}
\label{sec:orgheadline49}
An \texttt{interface} can be used as an entry into two different implementations of a subroutine or function with the same name so long as the different implementations have different argument signatures. This may be particularly useful for defining both a single precision and double precision version of a function or subroutine.
\subsubsection{Example: machine epsilon}
\label{sec:orgheadline48}
The following code implements two versions of a function that computes machine epsilon in either single or double precision. The different implementations are distinguished by their arguments. The single precision version \texttt{mach\_eps\_sp} accepts one single precision float and the double precision version \texttt{mach\_eps\_dp} accepts one double precision float. Both functions are listed in the \texttt{interface} and can be called by its name \texttt{mach\_eps}.
\begin{verbatim}
1 program main
2 implicit none
3
4 integer, parameter :: sp = kind(0.0)
5 integer, parameter :: dp = kind(0.d0)
6
7 interface mach_eps
8 procedure mach_eps_sp, mach_eps_dp
9 end interface mach_eps
10
11 print*, mach_eps(0.0_sp), epsilon(0.0_sp)
12 print*, mach_eps(0.0_dp), epsilon(0.0_dp)
13
14 contains
15
16 function mach_eps_sp(x) result(eps)
17 real(sp) :: x, eps
18 integer :: count = 0
19
20 eps = 1.0_sp
21 do while (1.0_sp + eps*0.5 > 1.0_sp)
22 eps = eps*0.5
23 count = count+1
24 end do
25 end function mach_eps_sp
26
27 function mach_eps_dp(x) result(eps)
28 real(dp) :: x, eps
29 integer :: count = 0
30
31 eps = 1.0_dp
32 do while (1.0_dp + eps*0.5 > 1.0_dp)
33 eps = eps*0.5
34 count = count+1
35 end do
36 end function mach_eps_dp
37
38 end program main
\end{verbatim}
\begin{verbatim}
1.19209290E-07 1.19209290E-07
2.2204460492503131E-016 2.2204460492503131E-016
\end{verbatim}
\subsection{Recursion}
\label{sec:orgheadline51}
A function or subroutine that calls itself must be defined with the \texttt{recursive} keyword preceding the construct name.
\subsubsection{Example: factorial}
\label{sec:orgheadline50}
The following code defines a recursive function \texttt{factorial} that computes \(n!\). If \(n>1\), the function calls itself to return \(n(n-1)!\); otherwise the function returns \(1\). The main program calls \texttt{factorial} to compute \(5!\).
\begin{verbatim}
1 program main
2 implicit none
3
4 ! print 5 factorial
5 print*, factorial(5)
6
7 contains
8
9 ! factorial(n): product of natural numbers up to n
10 ! n: integer argument
11 recursive function factorial(n) result(m)
12 integer :: n, m
13
14 ! if n>1, call factorial recursively
15 ! otherwise 1 factorial is 1
16 if (n>1) then
17 m = n*factorial(n-1)
18 else
19 m = 1
20 end if
21
22 end function factorial
23
24 end program main
\end{verbatim}
\begin{verbatim}
120
\end{verbatim}
\section{Object-oriented programming}
\label{sec:orgheadline56}
\subsection{Derived types}
\label{sec:orgheadline10}
Data types can be defined by the programmer. Variables and procedures that belong to a defined data type are declared between a \texttt{type} / \texttt{end type} pair. Type-bound procedures, i.e. functions and subroutines, are declared with the \texttt{procedure} keyword followed by \texttt{::} and the name of the procedure, placed after the \texttt{contains} keyword within the \texttt{type} / \texttt{end type} pair. A variable of a defined type is declared with the \texttt{type} keyword and the name of the type. The variables and procedures of such a variable are accessed by appending the \texttt{\%} symbol and the component or procedure name to the name of the variable.
\begin{verbatim}
1 ! define a 'matrix' type
2 ! type-bound variables: shape, data
3 ! type-bound procedures: construct, destruct
4 type matrix
5 integer :: shape(2)
6 real, allocatable :: data(:,:)
7 contains
8 procedure :: construct
9 procedure :: destruct
10 end type matrix
11
12 ! declare a matrix variable
13 type(matrix) :: mat
14
15 ! assign value to type-bound variable
16 mat%shape = [3,3]
\end{verbatim}
\subsection{Modules}
\label{sec:orgheadline53}
A type-bound procedure can be defined after the \texttt{contains} keyword in the same program construct, i.e. a \texttt{module}, as the type definition. The first argument in the definition of a type-bound procedure is of the defined type and is declared within the procedure body with the \texttt{class} keyword and the name of the type.
\begin{verbatim}
1 module matrix_module
2 implicit none
3
4 type matrix
5 integer :: shape(2)
6 real, allocatable :: data(:,:)
7 contains
8 procedure :: construct
9 procedure :: destruct
10 end type matrix
11
12 contains
13
14 ! construct: populate shape and allocate memory for matrix
15 ! m,n: number of rows,cols of matrix
16 subroutine construct(this, m, n)
17 class(matrix) :: this
18 integer :: m, n
19 this%shape = [m,n]
20 allocate(this%data(m,n))
21 end subroutine construct
22
23 ! destruct: deallocate memory that matrix occupies
24 subroutine destruct(this)
25 class(matrix) :: this
26 deallocate(this%data)
27 end subroutine destruct
28
29 end module matrix_module
\end{verbatim}
To define variables of the \texttt{matrix} type in the main program, tell it to \texttt{use} the module defined above with \texttt{use matrix\_module} immediately after the \texttt{program main} line. The procedures bound to a defined type can be accessed through variables of that type by appending the \texttt{\%} symbol and the procedure name to the name of the variable.
\begin{verbatim}
1 program main
2 use matrix_module
3 implicit none
4
5 type(matrix) :: mat
6 mat%shape = [3,3]
7
8 ! create matrix
9 call mat%construct(3,3)
10
11 ! treat matrix variable 'data' like an array
12 mat%data(1,1) = 1.0
13 ! etc...
14
15 ! destruct matrix
 16   call mat%destruct()
17 end program main
\end{verbatim}
\subsection{Example: determinant of random matrix}
\label{sec:orgheadline54}
The following module defines a \texttt{matrix} type with two variables: an \texttt{integer} array \texttt{shape} that stores the number of rows and columns of the matrix and a \texttt{real} array \texttt{data} that stores the elements of the matrix. The type has four procedures: a subroutine \texttt{construct} that sets the shape and allocates memory for the data, a subroutine \texttt{destruct} that deallocates memory, a subroutine \texttt{print} that prints a matrix, and a function \texttt{det} that computes the determinant of a matrix. Note \texttt{det} is based on the definition of determinant using cofactors, and is very inefficient. A function \texttt{random\_matrix} defined within the module generates a matrix with uniform random entries in \([-1,1]\).
\begin{verbatim}
1 module matrix_module
2 implicit none
3
4 type matrix
5 integer :: shape(2)
6 real, allocatable :: data(:,:)
7 contains
8 procedure :: construct
9 procedure :: destruct
10 procedure :: print
11 procedure :: det
12 end type matrix
13
14 contains
15
16 subroutine construct(this, m, n)
17 class(matrix) :: this
18 integer :: m,n
19 this%shape = [m,n]
20 allocate(this%data(m,n))
21 end subroutine construct
22
23 subroutine destruct(this)
24 class(matrix) :: this
25 deallocate(this%data)
26 end subroutine destruct
27
28 ! print: formatted print of matrix
29 subroutine print(this)
30 class(matrix) :: this
31 ! row_fmt: format character string for row printing
32 ! fmt: temporary format string
33 character(32) :: row_fmt, fmt = '(a,i0,a,i0,a,i0,a)'
34 ! w: width of each entry printed
35 ! d: number of decimal digits printed
36 integer :: w, d = 2, row
37 ! find largest width of element in matrix
38 w = ceiling(log10(maxval(abs(this%data)))) + d + 2
39 ! write row formatting to 'row_fmt' variable
40 write(row_fmt,fmt) '(',this%shape(2),'(f',w,'.',d,',1x))'
41 ! print matrix row by row
42 do row = 1,this%shape(1)
43 print row_fmt, this%data(row,:)
44 end do
45 end subroutine print
46
47 ! det: compute determinant of matrix
48 ! using recursive definition based on cofactors
49 recursive function det(this) result(d)
50 class(matrix) :: this
51 type(matrix) :: submatrix
52 real :: d, sgn, element, minor
53 integer :: m, n, row, col, i, j
54
55 m = this%shape(1)
56 n = this%shape(2)
57 d = 0.0
58
59 ! compute cofactor
60 ! if 1x1 matrix, return value
61 if (m==1.and.n==1) then
62 d = this%data(1,1)
63 ! if square and not 1x1
64 else if (m==n) then
65 ! cofactor sum down the first column
66 do row = 1,m
67 ! sign of term
68 sgn = (-1.0)**(row+1)
69 ! matrix element
70 element = this%data(row,1)
71 ! construct the cofactor submatrix and compute its determinant
72 call submatrix%construct(m-1,n-1)
73 if (row==1) then
74 submatrix%data = this%data(2:,2:)
75 else if (row==m) then
76 submatrix%data = this%data(:m-1,2:)
77 else
78 submatrix%data(:row-1,:) = this%data(:row-1,2:)
79 submatrix%data(row:,:) = this%data(row+1:,2:)
80 end if
81 minor = submatrix%det()
82 call submatrix%destruct()
83
84 ! determinant accumulator
85 d = d + sgn*element*minor
86 end do
87 end if
88 end function det
89
90 ! random_matrix: generate matrix with random entries in [-1,1]
91 ! m,n: number of rows,cols
92 function random_matrix(m,n) result(mat)
93 integer :: m,n,i,j
94 type(matrix) :: mat
95 ! allocate memory for matrix
96 call mat%construct(m,n)
97 ! seed random number generator
98 call srand(time())
99 ! populate matrix
100 do i = 1,m
101 do j = 1,n
102 mat%data(i,j) = 2.0*rand() - 1.0
103 end do
104 end do
105 end function random_matrix
106
107 end module matrix_module
\end{verbatim}
The main program uses the \texttt{matrix\_module} defined above to find the determinants of a number of random matrices of increasing size.
\begin{verbatim}
1 program main
2 use matrix_module
3 implicit none
4
5 type(matrix) :: mat
6 integer :: n
7
8 ! compute determinants of random matrices
9 do n = 1,5
10 ! generate random matrix
11 mat = random_matrix(n,n)
12
13 ! print determinant of matrix
14 print "('n: ', i0, ', det: ', f0.5)", n, det(mat)
15
16 ! destruct matrix
17 call mat%destruct()
18 end do
19
20 end program main
\end{verbatim}
\begin{verbatim}
./main
\end{verbatim}
\begin{verbatim}
n: 1, det: -.68676
n: 2, det: .45054
n: 3, det: .37319
n: 4, det: -.27328
n: 5, det: .26695
\end{verbatim}
\subsection{Example: matrix module}
\label{sec:orgheadline55}
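The following, more complete module defines both a \texttt{matrix} and a \texttt{vector} type, overloads assignment and the arithmetic operators for them through \texttt{interface assignment(=)} and \texttt{interface operator(...)} blocks, provides vector and matrix norms, and includes constructors for zero, identity, and random matrices. The main program afterwards exercises each of these pieces.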
\begin{verbatim}
1 module matrix_module
2 implicit none
3
4 public :: zeros
5 public :: identity
6 public :: random
7
8 type matrix
9 integer :: shape(2)
10 real, allocatable :: data(:,:)
11 contains
12 procedure :: construct => matrix_construct
13 procedure :: destruct => matrix_destruct
14 procedure :: norm => matrix_norm
15 end type matrix
16
17 type vector
18 integer :: length
19 real, allocatable :: data(:)
20 contains
21 procedure :: construct => vector_construct
22 procedure :: destruct => vector_destruct
23 procedure :: norm => vector_norm
24 end type vector
25
26 ! assignments
27 interface assignment(=)
28 procedure vec_num_assign, vec_vec_assign, mat_num_assign, mat_mat_assign
29 end interface assignment(=)
30
31 ! operations
32 interface operator(+)
33 procedure vec_vec_sum, mat_mat_sum
34 end interface operator(+)
35
36 interface operator(-)
37 procedure vec_vec_diff, mat_mat_diff
38 end interface operator(-)
39
40 interface operator(*)
41 procedure num_vec_prod, num_mat_prod, mat_vec_prod, mat_mat_prod
42 end interface operator(*)
43
44 interface operator(/)
45 procedure vec_num_quot, mat_num_quot
46 end interface operator(/)
47
48 interface operator(**)
49 procedure mat_pow
50 end interface operator(**)
51
52 ! functions
53 interface norm
54 procedure vector_norm, matrix_norm
55 end interface norm
56
57 ! structured vectors/matrices
58 interface zeros
59 procedure zeros_vector, zeros_matrix
60 end interface zeros
61
62 interface random
63 procedure random_vector, random_matrix
64 end interface random
65
66 contains
67
68 subroutine matrix_construct(this, m, n)
69 class(matrix) :: this
70 integer :: m,n
71 this%shape = [m,n]
72 allocate(this%data(m,n))
73 end subroutine matrix_construct
74
75 subroutine vector_construct(this, n)
76 class(vector) :: this
77 integer :: n
78 this%length = n
79 allocate(this%data(n))
80 end subroutine vector_construct
81
82 subroutine matrix_destruct(this)
83 class(matrix) :: this
84 deallocate(this%data)
85 end subroutine matrix_destruct
86
87 subroutine vector_destruct(this)
88 class(vector) :: this
89 deallocate(this%data)
90 end subroutine vector_destruct
91
92 ! assignment
93 subroutine vec_num_assign(vec,num)
94 type(vector), intent(inout) :: vec
95 real, intent(in) :: num
96 vec%data = num
97 end subroutine vec_num_assign
98
99 subroutine vec_vec_assign(vec1,vec2)
100 type(vector), intent(inout) :: vec1
101 type(vector), intent(in) :: vec2
102 vec1%data = vec2%data
103 end subroutine vec_vec_assign
104
105 subroutine mat_num_assign(mat,num)
106 type(matrix), intent(inout) :: mat
107 real, intent(in) :: num
108 mat%data = num
109 end subroutine mat_num_assign
110
111 subroutine mat_mat_assign(mat1,mat2)
112 type(matrix), intent(inout) :: mat1
113 type(matrix), intent(in) :: mat2
114 mat1%data = mat2%data
115 end subroutine mat_mat_assign
116
117 ! operations
118 function vec_vec_sum(vec1,vec2) result(s)
119 type(vector), intent(in) :: vec1, vec2
120 type(vector) :: s
121 call s%construct(vec1%length)
122 s%data = vec1%data + vec2%data
123 end function vec_vec_sum
124
125 function mat_mat_sum(mat1,mat2) result(s)
126 type(matrix), intent(in) :: mat1, mat2
127 type(matrix) :: s
128 call s%construct(mat1%shape(1),mat1%shape(2))
129 s%data = mat1%data+mat2%data
130 end function mat_mat_sum
131
132 function vec_vec_diff(vec1,vec2) result(diff)
133 type(vector), intent(in) :: vec1, vec2
134 type(vector) :: diff
135 call diff%construct(vec1%length)
136 diff%data = vec1%data-vec2%data
137 end function vec_vec_diff
138
139 function mat_mat_diff(mat1,mat2) result(diff)
140 type(matrix), intent(in) :: mat1, mat2
141 type(matrix) :: diff
142 call diff%construct(mat1%shape(1),mat1%shape(2))
143 diff%data = mat1%data-mat2%data
144 end function mat_mat_diff
145
146 function num_vec_prod(num,vec) result(prod)
147 real, intent(in) :: num
148 type(vector), intent(in) :: vec
149 type(vector) :: prod
150 call prod%construct(vec%length)
151 prod%data = num*vec%data
152 end function num_vec_prod
153
154 function num_mat_prod(num,mat) result(prod)
155 real, intent(in) :: num
156 type(matrix), intent(in) :: mat
157 type(matrix) :: prod
158 call prod%construct(mat%shape(1),mat%shape(2))
159 prod%data = num*mat%data
160 end function num_mat_prod
161
162 function mat_vec_prod(mat,vec) result(prod)
163 type(matrix), intent(in) :: mat
164 type(vector), intent(in) :: vec
165 type(vector) :: prod
166 call prod%construct(mat%shape(1))
167 prod%data = matmul(mat%data,vec%data)
168 end function mat_vec_prod
169
170 function mat_mat_prod(mat1,mat2) result(prod)
171 type(matrix), intent(in) :: mat1, mat2
172 type(matrix) :: prod
173 call prod%construct(mat1%shape(1),mat2%shape(2))
174 prod%data = matmul(mat1%data,mat2%data)
175 end function mat_mat_prod
176
177 function vec_num_quot(vec,num) result(quot)
178 type(vector), intent(in) :: vec
179 real, intent(in) :: num
180 type(vector) :: quot
181 call quot%construct(vec%length)
182 quot%data = vec%data/num
183 end function vec_num_quot
184
185 function mat_num_quot(mat,num) result(quot)
186 type(matrix), intent(in) :: mat
187 real, intent(in) :: num
188 type(matrix) :: quot
189 call quot%construct(mat%shape(1),mat%shape(2))
190 quot%data = mat%data/num
191 end function mat_num_quot
192
193 function mat_pow(mat1,pow) result(mat2)
194 type(matrix), intent(in) :: mat1
195 integer, intent(in) :: pow
196 type(matrix) :: mat2
197 integer :: i
198 mat2 = mat1
199 do i = 2,pow
200 mat2 = mat1*mat2
201 end do
202 end function mat_pow
203
204 ! functions
205 function vector_norm(this,p) result(mag)
206 class(vector), intent(in) :: this
207 integer, intent(in) :: p
208 real :: mag
209 integer :: i
210 ! inf-norm
211 if (p==0) then
212 mag = 0.0
213 do i = 1,this%length
214 mag = max(mag,abs(this%data(i)))
215 end do
216 ! p-norm
217 else if (p>0) then
218 mag = (sum(abs(this%data)**p))**(1./p)
219 end if
220 end function vector_norm
221
222 function matrix_norm(this, p) result(mag)
223 class(matrix), intent(in) :: this
224 integer, intent(in) :: p
225 real :: mag, tol = 1e-6
226 integer :: m, n, row, col, iter, max_iters = 1000
227 type(vector) :: vec, last_vec
228 m = size(this%data(:,1)); n = size(this%data(1,:))
229
230 ! entry-wise norms
231 if (p<0) then
232 mag = (sum(abs(this%data)**(-p)))**(-1./p)
233 ! inf-norm
234 else if (p==0) then
235 mag = 0.0
236 do row = 1,m
237 mag = max(mag,sum(abs(this%data(row,:))))
238 end do
239 ! 1-norm
240 else if (p==1) then
241 mag = 0.0
242 do col = 1,n
243 mag = max(mag,sum(abs(this%data(:,col))))
244 end do
245 ! p-norm
246 else if (p>0) then
247 vec = random(n)
248 vec = vec/vec%norm(p)
249 last_vec = zeros(n)
250 mag = 0.0
251 do iter = 1,max_iters
252 last_vec = vec
253 vec = this*last_vec
254 vec = vec/vec%norm(p)
255 if (vector_norm(vec-last_vec,p)<tol) exit
256 end do
257 mag = vector_norm(this*vec,p)
258 end if
259 end function matrix_norm
260
261 ! structured vectors/matrices
262 function random_matrix(m,n) result(mat)
263 integer :: m,n
264 type(matrix) :: mat
265 call mat%construct(m,n)
266 call random_seed()
267 call random_number(mat%data)
268 end function random_matrix
269
270 function random_vector(n) result(vec)
271 integer :: n
272 type(vector) :: vec
273 call vec%construct(n)
274 call random_seed()
275 call random_number(vec%data)
276 end function random_vector
277
278 function zeros_vector(n) result(vec)
279 integer :: n
280 type(vector) :: vec
281 call vec%construct(n)
282 vec = 0.0
283 end function zeros_vector
284
285 function zeros_matrix(m,n) result(mat)
286 integer :: m,n
287 type(matrix) :: mat
288 call mat%construct(m,n)
289 mat = 0.0
290 end function zeros_matrix
291
292 function identity(m,n) result(mat)
293 integer :: m,n,i
294 type(matrix) :: mat
295     call mat%construct(m,n); mat%data = 0.0 ! zero all entries before setting the diagonal
296 do i = 1,min(m,n)
297 mat%data(i,i) = 1.0
298 end do
299 end function identity
300
301 end module matrix_module
\end{verbatim}
\begin{verbatim}
1 program main
2 use matrix_module
3 implicit none
4
5 type(vector) :: vec1, vec2
6 type(matrix) :: mat1, mat2
7 real :: x
8 integer :: i
9
10 ! 0s, id, random
11 mat1 = zeros(3,3)
12 call mat1%destruct()
13 mat1 = identity(3,3)
14 mat2 = random(3,3)
15 mat1 = mat1*mat1
16 vec1 = zeros(3)
17 call vec1%destruct()
18 vec1 = random(3)
19 vec2 = random(3)
20 ! +,-,*,/,**
21 mat1 = mat1+mat2
22 vec1 = vec1+vec2
23 mat1 = mat1-mat2
24 vec1 = vec1-vec2
25 vec1 = mat1*vec2
26 mat1 = mat2*mat1
27 mat1 = 2.0*mat1
28 vec1 = 2.0*vec1
29 mat1 = mat1/2.0
30 vec1 = vec1/2.0
31 mat2 = mat1**3
32 ! norm
33 x = norm(vec1,0)
34 x = norm(vec1,1)
35 x = norm(mat1,-1)
36 x = norm(mat1,0)
37 x = norm(mat1,1)
38 x = norm(mat1,2)
39 call vec1%destruct
40 call vec2%destruct
41 call mat1%destruct
42 call mat2%destruct
43 end program main
\end{verbatim}
\begin{verbatim}
./main
\end{verbatim}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Beamer Presentation
% LaTeX Template
% Version 1.0 (10/11/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND THEMES
%----------------------------------------------------------------------------------------
\documentclass{beamer}
\mode<presentation> {
% The Beamer class comes with a number of default slide themes
% which change the colors and layouts of slides. Below this is a list
% of all the themes, uncomment each in turn to see what they look like.
%\usetheme{default}
%\usetheme{AnnArbor}
%\usetheme{Antibes}
%\usetheme{Bergen}
%\usetheme{Berkeley}
%\usetheme{Berlin}
%\usetheme{Boadilla}
%\usetheme{CambridgeUS}
%\usetheme{Copenhagen}
%\usetheme{Darmstadt}
%\usetheme{Dresden}
%\usetheme{Frankfurt}
%\usetheme{Goettingen}
%\usetheme{Hannover}
%\usetheme{Ilmenau}
%\usetheme{JuanLesPins}
\usetheme{Luebeck}
%\usetheme{Madrid}
%\usetheme{Malmoe}
%\usetheme{Marburg}
%\usetheme{Montpellier}
%\usetheme{PaloAlto}
%\usetheme{Pittsburgh}
%\usetheme{Rochester}
%\usetheme{Singapore}
%\usetheme{Szeged}
%\usetheme{Warsaw}
% As well as themes, the Beamer class has a number of color themes
% for any slide theme. Uncomment each of these in turn to see how it
% changes the colors of your current slide theme.
%\usecolortheme{albatross}
%\usecolortheme{beaver}
%\usecolortheme{beetle}
%\usecolortheme{crane}
%\usecolortheme{dolphin}
%\usecolortheme{dove}
%\usecolortheme{fly}
%\usecolortheme{lily} % das ist gut
%\usecolortheme{orchid}
%\usecolortheme{rose}
%\usecolortheme{seagull}
%\usecolortheme{seahorse}
%\usecolortheme{whale}
%\usecolortheme{wolverine}
%\setbeamertemplate{footline} % To remove the footer line in all slides uncomment this line
\setbeamertemplate{footline}[page number] % To replace the footer line in all slides with a simple slide count uncomment this line
\setbeamertemplate{navigation symbols}{} % To remove the navigation symbols from the bottom of all slides uncomment this line
}
\usepackage{graphicx} % Allows including images
\graphicspath{{PDF/},{./}}
\DeclareGraphicsExtensions{.pdf,.png}
\usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables
\usepackage{url}
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\title[StreamGraph]{StreamGraph} % The short title appears at the bottom of every slide, the full title is only on the title page
\author{Florian Ziesche, Jakob Karge and Boris Graf} % Your name
\institute[TUB] % Your institution as it will appear on the bottom of every slide, may be shorthand to save space
{
Technische Universit\"at Berlin\\ % Your institution for the title page
}
\date{\today} % Date, can be changed to a custom date
\begin{document}
\begin{frame}
\titlepage % Print the title page as the first slide
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{streamgraph}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Overview} % Table of contents slide, comment this block out to remove it
\tableofcontents % Throughout your presentation, if you choose to use \section{} and \subsection{} commands, these will automatically be printed on this slide as an overview of your presentation
\end{frame}
%----------------------------------------------------------------------------------------
% PRESENTATION SLIDES
%----------------------------------------------------------------------------------------
%------------------------------------------------
\section{Introduction}
%------------------------------------------------
\subsection{Idea \& Goals}
% Jakob
\begin{frame}
\frametitle{Idea \& Goals}
\begin{itemize}
\item Visualize parallel programming
\item Graphical layer on top of a textual language
\end{itemize}
\begin{itemize}
\item StreamIt: subset representation
\end{itemize}
\end{frame}
\subsection{StreamIt}
% Florian
\begin{frame}
\frametitle{StreamIt}
\begin{itemize}
\item Developed for streaming applications
\item Pipeline-based structure
\end{itemize}
\begin{block}{Base constructs}
\begin{itemize}
\item Filter: "The Workhorse of StreamIt"; \textit{work}-, \textit{init}-functions
\item Pipeline: linear succession of StreamIt constructs
\item Split-Join: creating parallel paths
\begin{itemize}
\item Split: splits a single data stream into multiple data streams
\item Join: joins the previously split streams back together
\end{itemize}
% \item FeedbackLoop
\end{itemize}
\end{block}
For further information on StreamIt see \cite{streamIt}.
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Parallelism in StreamIt}
\begin{block}{Split-Join}
Splitting (or duplicating) of data:\\
\begin{itemize}
\item Paths independent between \textit{split} and \textit{join}
\end{itemize}
As a result, the paths may be processed in parallel
\end{block}
\begin{block}{Pipeline}
Continuous data stream through filters:\\
\begin{itemize}
\item \textit{Work}-functions are executed repeatedly on the incoming data.
\item Filters can be processed independently.
\end{itemize}
Therefore filters in pipelines can be parallelised.
\end{block}
\end{frame}
%------------------------------------------------
\subsection{StreamGraph GUI}
% Boris
\begin{frame}
\frametitle{StreamGraph GUI}
Visual abstractions of the StreamIt topology
\begin{columns}[c]
\begin{column}{0.48\textwidth}
\begin{itemize}
\item Split \& join integrated into filters
\item Implicit pipelines
\end{itemize}
\end{column}
\hfill
\begin{column}{0.48\textwidth}
\includegraphics[width=0.6\textwidth]{FilterBoxGraphic}
\label{fig_filterAbstraction}
\end{column}
\end{columns}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{StreamGraph GUI}
\begin{itemize}
\item Minimalistic and clear design
\item Mouse control via pop-up menus
\item Intuitive control
\end{itemize}
\begin{block}{Elements}
\begin{itemize}
\item Menu bar
\item Working surface
\item Notification bar
\item Property window
\end{itemize}
\end{block}
\end{frame}
%------------------------------------------------
\section{Demo}
\begin{frame}
\frametitle{Demo}
\textbf{DEMO}: \texttt{demoPower.str}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{demoPower}
\label{fig_demoPower}
\end{figure}
\end{frame}
%------------------------------------------------
\section{Backend}
%------------------------------------------------
\subsection{Framework}
% Boris
\begin{frame}
\frametitle{StreamGraph View}
\begin{itemize}
\item Based on Gtk2::Ex::MindMapView \cite{GTK2EXMindMapView}
\item Gnome Canvas used as engine
\item Item $\rightarrow$ border $\rightarrow$ content
\item Handle keyboard and mouse events
\item Many other changes
\end{itemize}
\end{frame}
%------------------------------------------------
\subsection{Model}
% Jakob
\begin{frame}
\frametitle{Model}
Program structures modeled in two ways:
\begin{block}{Graphical: \textit{Node}}
\begin{itemize}
\item Node types (\textit{Filter, Subgraph, Parameter, Comment})
\item Factory
\end{itemize}
\end{block}
\begin{block}{Textual: \textit{CodeObject}}
\begin{itemize}
\item StreamIt types (\textit{Pipeline, Splitjoin, Parameter})
\item \textit{CodeObject}s derived from \textit{Node}s
\end{itemize}
\end{block}
\end{frame}
\subsection{Graph Transformation}
\begin{frame}
\frametitle{Graph Transformation}
\begin{columns}[c]
\begin{column}{0.48\textwidth}
\begin{itemize}
\item Load subgraphs
\item Add void sources and sinks
\item Add identities for empty pipelines
\item Test for graph errors %throughout the above and leave invalid partial graphs alone
\end{itemize}
\end{column}
\hfill
\begin{column}{0.48\textwidth}
\includegraphics[width=1.035\textwidth]{subgraph_before_after}\\
\vspace{1.1mm}
\includegraphics[width=0.53\textwidth]{voidend_before_after} ~
\includegraphics[width=0.47\textwidth]{identity_before_after}
\end{column}
\end{columns}
\end{frame}
%------------------------------------------------
\subsection{Code Generation}
% Florian
\begin{frame}
\frametitle{Code Generation}
\begin{block}{General procedure}
\begin{itemize}
\item Generate code for all filters
\item Build pipeline-based structure
\item Generate code for the topology in a recursive manner
\end{itemize}
\end{block}
\begin{block}{Filter generation}
\begin{itemize}
\item Make name unique
\item Get parameters
\item Build the \textit{init}- and \textit{work}-functions
\end{itemize}
\end{block}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{StreamIt pipeline-based structure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{StreamGraphToStreamIt}
\label{fig_StreamGraph_To_StreamIt}
\end{figure}
\end{frame}
%------------------------------------------------
\begin{frame}
\frametitle{Code Generation}
\begin{block}{Building the structure}
\begin{itemize}
\item Begin with pipeline (main)
\item If a split is detected, start building a split-join:\\
\quad build a pipeline for every path.
\item Continue with the generation of the pipeline.
\end{itemize}
The indirect recursive method guarantees the hierarchical structure required by StreamIt.
\end{block}
The final code is obtained by generating from the inside out.
\end{frame}
%------------------------------------------------
\section{Conclusion}
%------------------------------------------------
\subsection{Results}
\begin{frame}
\frametitle{Results}
\begin{itemize}
\item Complete toolchain from ``empty'' to ``running''
\item Limited StreamIt representation
\item Working prototype
\end{itemize}
\end{frame}
%------------------------------------------------
\subsection{Improvements \& Future}
\begin{frame}
\frametitle{Improvements}
\begin{itemize}
\item Ordered splitjoin (currently only commutative use of results)
\item More StreamIt constructs (e.g. message passing, feedback-loop)
\item UI
\begin{itemize}
\item Group-to-subgraph
\item In-graph error highlighting
\item Data visualization
\item ...
\end{itemize}
\item More automatic compatibility
\item StreamIt+Java1.5+Perl-packages auto setup
\item Code quality \& style
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Future}
Possible research and development directions
\begin{itemize}
\item Different background language
\item More general graphical environment
\end{itemize}
\end{frame}
%------------------------------------------------
\section*{Bibliography}
\begin{frame}
\frametitle{Bibliography}
\footnotesize{
\begin{thebibliography}{99} % Beamer does not support BibTeX so references must be inserted manually as below
\bibitem{streamIt}
William~Thies, Michael~Karczmarek, and Saman~Amarasinghe, \emph{StreamIt: A Language for Streaming Applications},
\hskip 1em plus
0.5em minus 0.4em\relax Laboratory for Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139.
\bibitem{GTK2EXMindMapView}
\url{http://search.cpan.org/~hemlock/Gtk2-Ex-MindMapView-0.000001/lib/Gtk2/Ex/MindMapView.pm}
\end{thebibliography}
}
\end{frame}
%----------------------------------------------------------------------------------------
\end{document}
| {
"alphanum_fraction": 0.6531295613,
"avg_line_length": 29.1175,
"ext": "tex",
"hexsha": "03f95225cb449b1f29936c245affd066271afe3d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "57893740a7bf460e3d66cd1e8fd2777d5b726cc4",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "grandchild/streamgraph",
"max_forks_repo_path": "doc/presentationStreamGraph.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "57893740a7bf460e3d66cd1e8fd2777d5b726cc4",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "grandchild/streamgraph",
"max_issues_repo_path": "doc/presentationStreamGraph.tex",
"max_line_length": 194,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "57893740a7bf460e3d66cd1e8fd2777d5b726cc4",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "grandchild/streamgraph",
"max_stars_repo_path": "doc/presentationStreamGraph.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 3201,
"size": 11647
} |
\subsection{Publishing and Subscribing}
\label{p:topics}
The goal of this problem is to have you learn how to move a turtle using Twist messages and to
calculate the distance between the turtle and a point.
The work for this problem should be done in the file \texttt{pubsub.py}; an illustrative sketch of one
possible implementation is shown after the list below.
\begin{enumerate}[(a)]
\item In \texttt{\_\_init\_\_}:
\begin{enumerate}[i.]
\item First, have your turtle subscribe to \texttt{turtle1/pose} with the callback function
\texttt{self.update\_pose}. %(\textit{X points})
\item Have your turtle publish to \texttt{turtle1/cmd\_vel}. %(\textit{X points})
\item Specify a publishing rate of $10$.
\end{enumerate}
\item In \texttt{move}:
\begin{enumerate}[i.]
\item Create a Twist message with values that change both the angular and linear velocities
such that the turtle moves both angularly and linearly.
\item Publish the Twist message to the velocity publisher.
\end{enumerate}
\item In \texttt{update\_pose}:
\begin{enumerate}[i.]
\item Set the instance variable \texttt{self.pose} to the information provided by the subscriber.
\end{enumerate}
\item In \texttt{calculate\_distance}:
\begin{enumerate}[i.]
\item Using \texttt{numpy}, set \texttt{vector} to be the difference between the coordinates
of the turtle's current pose and the point $(x,y)$. (Hint: Use a \texttt{numpy} array for
your vector)
\item Using \texttt{numpy} and \texttt{vector}, set \texttt{distance} to the distance between the
turtle and the point $(x,y)$.
\item Set the ROS parameter to \texttt{distance}. (Note: the value of distance must be cast
to a float).
\end{enumerate}
\item In \texttt{pubsub.launch}:
\begin{enumerate}[i.]
\item Review the format of this file and understand what it does (see ROS launch file
documentation for more information).
\end{enumerate}
\end{enumerate}
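The sketch below illustrates one possible shape of \texttt{pubsub.py}. It is only an illustrative
sketch, not the official solution: the class name \texttt{TurtleMover}, the concrete velocity values,
and the parameter name \texttt{/turtle\_distance} are assumptions, while the topic names, the method
names and the rate of $10$ are taken from the problem statement.
\begin{verbatim}
#!/usr/bin/env python
# Illustrative sketch only; the parameter name "/turtle_distance" and the
# chosen velocity values are assumptions, not part of the assignment hand-out.
import numpy as np
import rospy
from geometry_msgs.msg import Twist
from turtlesim.msg import Pose

class TurtleMover(object):
    def __init__(self):
        rospy.init_node('pubsub', anonymous=True)
        # (a) i.  subscribe to the pose topic with update_pose as the callback
        self.pose_subscriber = rospy.Subscriber('turtle1/pose', Pose,
                                                self.update_pose)
        # (a) ii. publish Twist messages on the velocity topic
        self.velocity_publisher = rospy.Publisher('turtle1/cmd_vel', Twist,
                                                  queue_size=10)
        # (a) iii. publishing rate of 10 Hz
        self.rate = rospy.Rate(10)
        self.pose = Pose()

    def move(self):
        # (b) a Twist with both a linear and an angular component, then publish
        vel_msg = Twist()
        vel_msg.linear.x = 1.0
        vel_msg.angular.z = 0.5
        self.velocity_publisher.publish(vel_msg)

    def update_pose(self, data):
        # (c) keep the latest pose reported by the subscriber
        self.pose = data

    def calculate_distance(self, x, y):
        # (d) difference vector between the current pose and the point (x, y)
        vector = np.array([self.pose.x - x, self.pose.y - y])
        # its Euclidean norm is the distance to the point
        distance = np.linalg.norm(vector)
        # store the result as a ROS parameter (parameter name is hypothetical)
        rospy.set_param('/turtle_distance', float(distance))
        return distance
\end{verbatim}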
| {
"alphanum_fraction": 0.694202155,
"avg_line_length": 44.2954545455,
"ext": "tex",
"hexsha": "ff7249e9edbaf20df5f4f57b305bf8ece9a8e1cd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "6fe2fc24ce8433e54b4f99db9e75ea10bf821267",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "wbthomason-robotics/ros-test",
"max_forks_repo_path": "docs/p1/move.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "6fe2fc24ce8433e54b4f99db9e75ea10bf821267",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "wbthomason-robotics/ros-test",
"max_issues_repo_path": "docs/p1/move.tex",
"max_line_length": 103,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "6fe2fc24ce8433e54b4f99db9e75ea10bf821267",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "wbthomason-robotics/ros-test",
"max_stars_repo_path": "docs/p1/move.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 518,
"size": 1949
} |
\documentclass[11pt]{report}
\usepackage[left=1in,right=1in,top=1in,bottom=1in]{geometry}
\usepackage{amssymb,amsmath}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead[RO]{\slshape \rightmark}
\fancyhead[LO]{}
\usepackage{paralist}
\usepackage{tikz} % drawing support
\usepackage{soul} % provide highlighting support
\usepackage{color}
\usepackage{listings}
\usetikzlibrary{patterns, positioning}
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{longtable,booktabs}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{3}
% Use this command to omit text temporarily for the document.
\newcommand{\omitted}[1]{}
% Use this command to provide a feedback or comment on the
% text. The first argument should be the name of the person
% providing feedback and the second argument should be the
% text of the feedback. For example:
% \feedback{David Tarditi}{Here are some additional examples.}
\newcommand{\feedback}[2]{\footnote{Feedback (#1): #2}}
%
% define colors used for highlighting
%
\definecolor{lightblue}{rgb}{0.8,0.85,1}
\definecolor{lightyellow}{rgb}{1.0,1.0,0.6}
\sethlcolor{lightyellow}
\lstdefinelanguage[checked]{C}[]{C}
{morekeywords={array_ptr,_Array_ptr,
assume_bounds_cast, _Assume_bounds_cast,
bool, _Bool, % C11
bounds,
byte_count,
checked,_Checked,
count,
dynamic_bounds_cast, _Dynamic_bounds_cast,
dynamic_check, _Dynamic_check,
for_any, _For_any,
itype_for_any, _Itype_for_any,
nt_array_ptr, _Nt_array_ptr,
nt_checked, _Nt_checked,
opaque, _Opaque,
ptr,_Ptr,
rel_align, rel_align_value,
reveal, _Reveal,
unchecked,_Unchecked,
where, _Where},
moredelim=[is][\it \color{purple}]{|-}{-|}
}
\lstdefinestyle{customc}{
belowcaptionskip=1\baselineskip,
breaklines=true,
% frame=L,
xleftmargin=\parindent,
language=[checked]C,
showstringspaces=false,
basicstyle=\small\ttfamily,
keywordstyle=\color{blue},
commentstyle=\bfseries\color{green!40!black},
identifierstyle=\color{teal}, % \color{purple!40!black},
stringstyle=\color{brown},
}
\lstset{language=C,style=customc}
%
% meta variables are italicized. Use the macro name var
% to save on typing.
\newcommand{\var}[1]{\texttt{\textit{#1}}}
%
% macro for font for keywords
%
\newcommand{\keyword}[1]{\lstinline|#1|}
\newcommand{\code}[1]{\lstinline|#1|}
%
%
% type macros
%
\newcommand{\void}{\lstinline{void}}
% array_ptr type macros
%
\newcommand{\arrayptr}{\lstinline|array_ptr|}
\newcommand{\plainarrayptr}{\texttt{array\_ptr}}
\newcommand{\arrayptrinst}[1]{\lstinline|array_ptr<|{#1}\lstinline|>|}
\newcommand{\arrayptrT}{\arrayptrinst{\var{T}}}
\newcommand{\arrayptrchar}{\arrayptrinst{\keyword{char}}}
\newcommand{\arrayptrint}{\arrayptrinst{\keyword{int}}}
\newcommand{\arrayptrvoid}{\arrayptrinst{\keyword{void}}}
\newcommand{\ntarrayptr}{\lstinline|nt_array_ptr|}
\newcommand{\ntarrayptrinst}[1]{\lstinline|nt_array_ptr<|{#1}\lstinline|>|}
\newcommand{\ntarrayptrT}{\ntarrayptrinst{\var{T}}}
\newcommand{\ntarrayptrchar}{\ntarrayptrinst{\keyword{char}}}
\newcommand{\ntarrayptrvoid}{\ntarrayptrinst{\keyword{void}}}
% use the name spanptr because span is already a command
% in tex
\newcommand{\spanptr}{\lstinline|span|}
\newcommand{\spanptrinst}[1]{\lstinline|span<|{#1}\lstinline|>|}
\newcommand{\spanptrT}{\spanptrinst{\var{T}}}
\newcommand{\spanptrchar}{\spanptrinst{\keyword{char}}}
\newcommand{\spanptrint}{\spanptrinst{\keyword{int}}}
\newcommand{\spanptrvoid}{\spanptrinst{\keyword{void}}}
\newcommand{\ptr}{\lstinline|ptr|}
\newcommand{\ptrinst}[1]{\lstinline|ptr<|{#1}\lstinline|>|}
\newcommand{\ptrT}{\ptrinst{\var{T}}}
\newcommand{\ptrchar}{\ptrinst{\keyword{char}}}
\newcommand{\ptrint}{\ptrinst{\keyword{int}}}
\newcommand{\ptrvoid}{\ptrinst{\keyword{void}}}
\newcommand{\uncheckedptr}{\lstinline|*|}
\newcommand{\uncheckedptrinst}[1]{{#1} \lstinline|*|}
\newcommand{\uncheckedptrT}{\uncheckedptrinst{\var{T}}}
\newcommand{\uncheckedptrvoid}{\uncheckedptrinst{\keyword{void}}}
% polymorphic type macros
\newcommand{\forany}{\lstinline|for_any|}
%
% bounds expression macros
%
\newcommand{\relalign}[1]{\lstinline|rel_align(|{#1}\lstinline|)|}
\newcommand{\relalignval}[1]{\lstinline|rel_align_value(|{#1}\lstinline|)|}
\newcommand{\bounds}[2]{\lstinline|bounds(|{#1}\lstinline|, |{#2}\lstinline|)|}
\newcommand{\boundsrel}[3]{\bounds{#1}{#2} \relalign{#3}}
\newcommand{\boundsrelval}[3]{\bounds{#1}{#2} \relalignval{#3}}
\newcommand{\boundsany}{\lstinline|bounds(any)|}
\newcommand{\boundsunknown}{\lstinline|bounds(unknown)|}
\newcommand{\boundscount}[1]{\lstinline|count(|{#1}\lstinline|)|}
\newcommand{\boundsbytecount}[1]{\lstinline|byte_count(|{#1}\lstinline|)|}
%
% bounds declaration macros
%
\newcommand{\boundsdecl}[2]{\texttt{#1}~\texttt{:}~\texttt{#2}}
%
% computed bounds for expressions
%
\newcommand{\boundsinfer}[2]{#1~$\vdash$~#2}
% expression macros
\newcommand{\sizeof}[1]{\lstinline|sizeof(|#1\lstinline|)|}
\newcommand{\cast}[2]{\lstinline|(|#1\lstinline|)| #2}
\newcommand{\inbounds}[1]{\lstinline|in_bounds(|{#1}\lstinline|)|}
\newcommand{\exprcurrentvalue}{\lstinline|expr_current_value|}
\newcommand{\plusovf}{\lstinline|+|\textsubscript{ovf}}
\newcommand{\minusovf}{\lstinline|-|\textsubscript{ovf}}
\newcommand{\mulovf}{\lstinline|*|\textsubscript{ovf}}
\begin{document}
\begin{titlepage}
{\center
\mbox{ }\\
\vspace{1in}
{\huge Extending C with bounds safety and improved type safety\par}
%{Version 0.7.1 (June 8, 2018) \par}
{Version 0.9 - Draft as of \today \par}
\vspace{0.25in}
{Checked C Technical Report Number 1 \par}
\vspace{0.125in}
{Author: David Tarditi, Microsoft\par}
\vspace{0.5in}
{\it Summary \par}
\input{abstract}
}
\end{titlepage}
\thispagestyle{empty}
\mbox{ }\\
\vspace{1.0in}
Microsoft is making this Specification available under the Open Web
Foundation Final Specification Agreement (version OWF 1.0). The OWF 1.0
is available at {\color{blue} \url{http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owfa-1-0}}.
This Specification is part of an on-going research project.
Contributions to this specification
are made under the Open Web Foundation Contributor License 1.0.
This license is available at {\color{blue} \url{http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owf-contributor-license-agreement-1-0---copyright-and-patent}}.
\newpage
% use roman numbers for the table of contents
\setcounter{page}{1}
\pagenumbering{roman}
\tableofcontents
% number the body of the document from 1 using arabic
% numerals.
\setcounter{page}{1}
\pagenumbering{arabic}
\include{pictures/rel-align-picture1}
\include{introduction}
\include{core-extensions}
\include{variable-bounds}
\include{checking-variable-bounds}
\include{void-ptr-replacements}
\include{interoperation}
\include{structure-bounds}
\include{pointers-to-pointers}
\include{simple-invariants}
\include{related-work}
%\chapter{Experience applying to this to OpenSSL}
%\label{chapter:eval}
\include{open-issues}
\include{design-alternatives}
\nocite{Jones2009}
\nocite{Jim2002}
\bibliographystyle{plain}
\bibliography{sources}
\appendix
% \include{fragments}
% \include{span-compilation}
\end{document}
| {
"alphanum_fraction": 0.7234307023,
"avg_line_length": 31.7984189723,
"ext": "tex",
"hexsha": "9c42e9f379ddc7cbbcfd1f0a32393b8988b70ebc",
"lang": "TeX",
"max_forks_count": 133,
"max_forks_repo_forks_event_max_datetime": "2019-04-23T08:59:02.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-06-14T06:50:39.000Z",
"max_forks_repo_head_hexsha": "ad4e8b01121fbfb40d81ee798480add7dc93f0bf",
"max_forks_repo_licenses": [
"BSD-Source-Code"
],
"max_forks_repo_name": "procedural/checkedc_binaries_ubuntu_16_04_from_12_Oct_2021",
"max_forks_repo_path": "llvm/projects/checkedc-wrapper/checkedc/spec/bounds_safety/checkedc.tex",
"max_issues_count": 186,
"max_issues_repo_head_hexsha": "ad4e8b01121fbfb40d81ee798480add7dc93f0bf",
"max_issues_repo_issues_event_max_datetime": "2019-04-22T20:40:12.000Z",
"max_issues_repo_issues_event_min_datetime": "2016-06-14T22:38:37.000Z",
"max_issues_repo_licenses": [
"BSD-Source-Code"
],
"max_issues_repo_name": "procedural/checkedc_binaries_ubuntu_16_04_from_12_Oct_2021",
"max_issues_repo_path": "llvm/projects/checkedc-wrapper/checkedc/spec/bounds_safety/checkedc.tex",
"max_line_length": 173,
"max_stars_count": 1838,
"max_stars_repo_head_hexsha": "36a89b3722caa381405d939c70708e610cabf0de",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mwhicks1/checkedc",
"max_stars_repo_path": "spec/bounds_safety/checkedc.tex",
"max_stars_repo_stars_event_max_datetime": "2019-05-03T11:49:12.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-06-13T23:19:54.000Z",
"num_tokens": 2501,
"size": 8045
} |
\section*{Acknowledgment}
The project was completed with the support of project supervisor Anders la Cour-Harbo and in cooperation with the DTU team members.
"alphanum_fraction": 0.8253012048,
"avg_line_length": 83,
"ext": "tex",
"hexsha": "92fd64226b4b3ea7417c89b0cd878004c5ab5534",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-08-08T11:32:23.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-08-08T11:32:23.000Z",
"max_forks_repo_head_hexsha": "e406f117e62a6b4533b587aecefadb895deb88c8",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "predict-drone/drone-control",
"max_forks_repo_path": "documentation/frontmatter/preface.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "e406f117e62a6b4533b587aecefadb895deb88c8",
"max_issues_repo_issues_event_max_datetime": "2021-08-20T11:43:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-08-20T11:43:33.000Z",
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "predict-drone/drone-control",
"max_issues_repo_path": "documentation/frontmatter/preface.tex",
"max_line_length": 140,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "e406f117e62a6b4533b587aecefadb895deb88c8",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "predict-drone/drone-control",
"max_stars_repo_path": "documentation/frontmatter/preface.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 35,
"size": 166
} |
%-------------------------------------------------------
% DOCUMENT CONFIGURATIONS
%-------------------------------------------------------
%-------------------------------------------------------
% START OF ARCHITECTURE ANALYSE
%-------------------------------------------------------
\subsection{Proof of Participation}
While doing the research, the concept and its foundations had to be clear enough to keep us focused on the right track, so we designed an architecture for the project. This architecture is not definitive, but it contributes to the understanding of what we are trying to achieve. The latest version of the global architecture can be found in figure~\ref{fig:latest-architecture} in the annexes.
The main point of this architecture is to create a consensus-driven network. The nodes running on the machines connected to the network act as nothing more than clients.
\subsubsection{User}
\subsubsection{Nodes} Considered as clients of the network, nodes have a trust level that is stored on the network. This level increases over time through proof-of-participation and correct integrity checks. Identities, on the other hand, gain trust for good behavior; this last point is described in the Big Picture chapter.
\subsubsection{OC Blockchain}
\subsubsection{Gateway}
\subsubsection{Storage} Controlled by the consensus and forming the second part of Overclouds' black box, the storage allows identities to save data onto the distributed nodes of the network. Integrity checks are done by the consensus, since multiple nodes perform the same computation work and chunk storage (including redundancy).
\subsubsection{OC Contract} Working as a black box, the goal of the OC Contract is to apply the rules made by the identities. The security between the nodes is controlled by Overclouds: from a client's point of view, it talks to the network and not to a particular node of the network. The network's work is split into encrypted chunks that are distributed to the nodes. Note that each chunk is encrypted for a specific node, while the keys are stored and owned by the consensus. Every node participates by default in the network's storage and computation power.
\subsubsection{Community Contracts}
%-------------------------------------------------------
% END OF ARCHITECTURE ANALYSE
%------------------------------------------------------- | {
"alphanum_fraction": 0.6878980892,
"avg_line_length": 75.9677419355,
"ext": "tex",
"hexsha": "6320021593ace46ce943b37f9cf3a20a040af7e4",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Rocla/OverClouds",
"max_forks_repo_path": "report-bs/16dlm-tb210-overclouds/analyses/proof-of-participation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Rocla/OverClouds",
"max_issues_repo_path": "report-bs/16dlm-tb210-overclouds/analyses/proof-of-participation.tex",
"max_line_length": 556,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "0a7e241df00d81702bdf1105e9c35c4fa642da2f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Rocla/OverClouds",
"max_stars_repo_path": "report-bs/16dlm-tb210-overclouds/analyses/proof-of-participation.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 431,
"size": 2355
} |
\subsubsection{Influence of Threshold Determination} \label{influence_of_threshold_determination}
A total of 115 simulations were evaluated, all of them using $\eta$DTW with a Sakoe-Chiba band whose size is 36\% of the
input time series length and \textit{Mid} as window size determination. Three official approaches to threshold
determination were tested, \textit{HAveD}, \textit{HMidD} and \textit{HMinD}, together with two unofficial approaches,
\textit{Peak} with a factor of 1.1 and \textit{Peak} with a factor of 1.2, which serve to double-check the performance of
the official approaches. Each approach covers 23 of the simulations. Figure \ref{fig:threshold_result} illustrates the
distribution of the five subsets. The \textit{HAveD} and \textit{HMidD} approaches reach the best $F_{1}score_{\mu}$
values, with \textit{HAveD} slightly ahead of \textit{HMidD}. The poor performance of both \textit{Peak} approaches is
surprising; their low $Precision_{\mu}$ values are the reason for it. Remarkable are the top $Precision_{\mu}$ values of
the \textit{HMinD} threshold determination.
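For reading Figure \ref{fig:threshold_result}, recall the micro-averaged metrics; the definitions below are the
standard pooled ones and are assumed to match the ones introduced earlier in this work:
\[
Precision_{\mu} = \frac{\sum TP}{\sum TP + \sum FP}, \qquad
Recall_{\mu} = \frac{\sum TP}{\sum TP + \sum FN}, \qquad
F_{1}score_{\mu} = \frac{2 \cdot Precision_{\mu} \cdot Recall_{\mu}}{Precision_{\mu} + Recall_{\mu}}.
\]
The gray guide lines in the figure are obtained by fixing $F_{1}score_{\mu}$ and solving the last formula for the
recall,
\[
Recall_{\mu} = \frac{F_{1}score_{\mu} \cdot Precision_{\mu}}{2 \cdot Precision_{\mu} - F_{1}score_{\mu}},
\]
which is exactly the expression plotted for the shown $F_{1}score_{\mu}$ levels.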
\begin{figure}[H]
\begin{center}
\resizebox {\textwidth} {!} {
\begin{tabular}{cc}
\resizebox {!} {\height} {
\begin{tikzpicture}
\begin{axis}[
legend pos=south west,
xmin=0.4,
xmax=1,
ymin=0.1,
ymax=0.7,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
xlabel=$Precision_{\mu}$,
ylabel=$Recall_{\mu}$,
samples=100]
\addplot[blue, only marks, mark size=1] table {../data/fig/threshold_result/haved.dat};
\addlegendentry{HAveD}
\addplot[red, only marks, mark size=1] table {../data/fig/threshold_result/hmidd.dat};
\addlegendentry{HMidD}
\addplot[green, only marks, mark size=1] table {../data/fig/threshold_result/hmind.dat};
\addlegendentry{HMinD}
\addplot[violet, only marks, mark size=1] table {../data/fig/threshold_result/peak110.dat};
\addlegendentry{Peak (1.1)}
\addplot[cyan, only marks, mark size=1] table {../data/fig/threshold_result/peak120.dat};
\addlegendentry{Peak (1.2)}
\addplot[gray, domain=0.11:1] {(0.2 * x) / (2 * x - 0.2)};
\addplot[gray, domain=0.16:1] {(0.3 * x) / (2 * x - 0.3)};
\addplot[gray, domain=0.21:1] {(0.4 * x) / (2 * x - 0.4)};
\addplot[gray, domain=0.26:1] {(0.5 * x) / (2 * x - 0.5)};
\addplot[gray, domain=0.31:1] {(0.6 * x) / (2 * x - 0.6)};
\addplot[gray, domain=0.36:1] {(0.7 * x) / (2 * x - 0.7)};
\addplot[gray, domain=0.41:1] {(0.8 * x) / (2 * x - 0.8)};
\end{axis}
\end{tikzpicture}
} &
\resizebox {!} {\height} {
\begin{tikzpicture}
\begin{axis}[
xmin=0,
xmax=1,
ymin=0,
ymax=1,
width=\axisdefaultwidth,
height=\axisdefaultwidth,
xlabel=$Precision_{\mu}$,
ylabel=$Recall_{\mu}$,
samples=100]
\addplot[blue, only marks, mark size=1] table {../data/fig/threshold_result/haved.dat};
\addplot[red, only marks, mark size=1] table {../data/fig/threshold_result/hmidd.dat};
\addplot[green, only marks, mark size=1] table {../data/fig/threshold_result/hmind.dat};
\addplot[violet, only marks, mark size=1] table {../data/fig/threshold_result/peak110.dat};
\addplot[cyan, only marks, mark size=1] table {../data/fig/threshold_result/peak120.dat};
\addplot[gray, domain=0.051:1] {(0.1 * x) / (2 * x - 0.1)};
\addplot[gray, domain=0.11:1] {(0.2 * x) / (2 * x - 0.2)};
\addplot[gray, domain=0.16:1] {(0.3 * x) / (2 * x - 0.3)};
\addplot[gray, domain=0.21:1] {(0.4 * x) / (2 * x - 0.4)};
\addplot[gray, domain=0.26:1] {(0.5 * x) / (2 * x - 0.5)};
\addplot[gray, domain=0.31:1] {(0.6 * x) / (2 * x - 0.6)};
\addplot[gray, domain=0.36:1] {(0.7 * x) / (2 * x - 0.7)};
\addplot[gray, domain=0.41:1] {(0.8 * x) / (2 * x - 0.8)};
\addplot[gray, domain=0.46:1] {(0.9 * x) / (2 * x - 0.9)};
\end{axis}
\end{tikzpicture}
}
\end{tabular}
}
\end{center}
\caption{$Precision_{\mu}$ and $Recall_{\mu}$ of all simulations that use $\eta$DTW with a Sakoe-Chiba band whose size
is 36\% of the input time series length and \textit{Mid} as window size determination, separated by the threshold
determination used. The left plot is a zoomed version of the right plot. Gray lines illustrate constant
$F_{1}score_{\mu}$ values in steps of $\frac{1}{10}$.}
\label{fig:threshold_result}
\end{figure}
| {
"alphanum_fraction": 0.4879227053,
"avg_line_length": 66.6206896552,
"ext": "tex",
"hexsha": "7947b74d9e89bbab189b98d3fd4f9b69183b31fa",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-01-11T23:15:57.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-01-11T23:15:57.000Z",
"max_forks_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "GordonLesti/SlidingWindowFilter",
"max_forks_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_threshold_determination.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "GordonLesti/SlidingWindowFilter",
"max_issues_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_threshold_determination.tex",
"max_line_length": 120,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "22c11f2912a5c523ae8ad85a849e2d0b123536ec",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "GordonLesti/SlidingWindowFilter",
"max_stars_repo_path": "bachelor-thesis/experiment/evaluation/influence_of_threshold_determination.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-14T11:43:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-06-22T09:37:30.000Z",
"num_tokens": 1537,
"size": 5796
} |
\documentclass[]{book}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={R package workshop},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{natbib}
\bibliographystyle{apalike}
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{longtable,booktabs}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
%%% Change title format to be more compact
\usepackage{titling}
% Create subtitle command for use in maketitle
\providecommand{\subtitle}[1]{
\posttitle{
\begin{center}\large#1\end{center}
}
}
\setlength{\droptitle}{-2em}
\title{R package workshop}
\pretitle{\vspace{\droptitle}\centering\huge}
\posttitle{\par}
\author{}
\preauthor{}\postauthor{}
\date{}
\predate{}\postdate{}
\usepackage{booktabs}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{preface}{%
\chapter*{Preface}\label{preface}}
\addcontentsline{toc}{chapter}{Preface}
\hypertarget{about-this-workshop}{%
\section*{About this workshop}\label{about-this-workshop}}
\addcontentsline{toc}{section}{About this workshop}
This workshop was created by COMBINE, an association for Australian students in
bioinformatics, computational biology and related fields. You can find out
more about COMBINE at \url{http://combine.org.au}.
The goal of this workshop is to explain the basics of R package development. By
the end of the workshop you should have your own minimal R package that you can
use to store your personal functions.
The materials were written using the \textbf{bookdown} package
(\url{https://bookdown.org/home/}), which is built on top of R Markdown and \textbf{knitr}.
\hypertarget{requirements}{%
\section*{Requirements}\label{requirements}}
\addcontentsline{toc}{section}{Requirements}
The workshop assumes that you are familiar with basic R and the RStudio IDE. This
includes topics such as installing packages, assigning variables and writing
functions. If you are not comfortable with these you may need to complete an
introductory R workshop first.
\hypertarget{r-and-rstudio}{%
\subsection*{R and RStudio}\label{r-and-rstudio}}
\addcontentsline{toc}{subsection}{R and RStudio}
You will need a recent version of R and RStudio. These materials were written
using R version 3.6.0 (2019-04-26) and RStudio version
1.2.1335. You can download R from
\url{https://cloud.r-project.org/} and RStudio from
\url{https://www.rstudio.com/products/rstudio/download/}.
\hypertarget{packages}{%
\subsection*{Packages}\label{packages}}
\addcontentsline{toc}{subsection}{Packages}
The main packages used in the workshop are below with the versions used in these
materials:
\begin{itemize}
\tightlist
\item
\textbf{devtools} (v2.0.2)
\item
\textbf{usethis} (v1.5.1)
\item
\textbf{roxygen2} (v6.1.1)
\item
\textbf{testthat} (v2.1.1)
\item
\textbf{knitr} (v1.23)
\item
\textbf{ggplot2} (v3.2.0)
\item
\textbf{rlang} (v0.4.0)
\end{itemize}
Please make sure these packages are installed before starting the workshop. You
can install them by running the following code.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pkgs <-}\StringTok{ }\KeywordTok{c}\NormalTok{(}\StringTok{"devtools"}\NormalTok{, }\StringTok{"usethis"}\NormalTok{, }\StringTok{"roxygen2"}\NormalTok{, }\StringTok{"testthat"}\NormalTok{, }\StringTok{"knitr"}\NormalTok{, }\StringTok{"ggplot2"}\NormalTok{,}
\StringTok{"rlang"}\NormalTok{)}
\KeywordTok{install.packages}\NormalTok{(pkgs)}
\end{Highlighting}
\end{Shaded}
\hypertarget{github}{%
\subsection*{GitHub}\label{github}}
\addcontentsline{toc}{subsection}{GitHub}
Version control using git is very useful and should be part of your package
development process but it is outside the scope of this workshop. However,
uploading your package to code sharing websites such as GitHub is the easiest
way to distribute it. Towards the end of the workshop is a section showing you
to upload your package to GitHub using R commands (no knowledge of git
necessary). If you would like to try this and don't already have a GitHub
account please create one at \url{https://github.com/join}.
\hypertarget{license}{%
\section*{License}\label{license}}
\addcontentsline{toc}{section}{License}
These materials are covered by the Creative Commons Attribution 4.0
International (CC BY 4.0) license
(\url{https://creativecommons.org/licenses/by/4.0/}).
\hypertarget{introduction}{%
\chapter{Introduction}\label{introduction}}
\hypertarget{what-is-a-package}{%
\section{What is a package?}\label{what-is-a-package}}
An R package is a collection of functions that are bundled together in a way
that lets them be easily shared. Usually these functions are designed to work
together to complete a specific task such as analysing a particular kind of
data. You are probably familiar with many packages already, for example
\textbf{ggplot2} or \textbf{data.table}.
Packages can take various forms during their life cycle. For example the
structure you use when writing package code is not exactly the same as what will
be installed by somebody else. While you don't need to know about these forms in
detail to create a package it is useful to be aware of them. For more details
have a look at the ``What is a package?'' section of Hadley Wickham's ``R packages''
book (\url{http://r-pkgs.had.co.nz/package.html\#package}).
\hypertarget{why-write-a-package}{%
\section{Why write a package?}\label{why-write-a-package}}
Packages are the best way to distribute code and documentation, and as we are
about to find out they are very simple to make. Even if you never intend to
share your package it is useful to have a place to store your commonly used
functions. You may have heard the advice that if you find yourself reusing code
then you should turn it into a function so that you don't have to keep rewriting
it (along with other benefits). The same applies to functions. If you have some
functions you reuse in different projects then it probably makes sense to put
those in a package. It's a bit more effort now but it will save you a lot of
time in the long run.
Of course often you will want to share your package, either to let other people
use your functions or just so people can see what you have done (for example
when you have code and data for a publication). If you are thinking about
making a software package for public use there are a few things you should
consider first:
\begin{itemize}
\tightlist
\item
Is your idea new or is there already a package out there that does something
similar?
\item
If there is does your package improve on it in some way? For example is it
easier to use or does it have better performance?
\item
If a similar package exists could you help improve it rather than making a
new one? Most package developers are open to collaboration and you may be
able to achieve more by working together.
\end{itemize}
\hypertarget{packages-for-writing-packages}{%
\subsection{Packages for writing packages}\label{packages-for-writing-packages}}
This workshop teaches a modern package development workflow that makes use of
packages designed to help with writing packages. The two main packages are
\textbf{devtools} and \textbf{usethis}. As you might gather from the name \textbf{devtools}
contains functions that will help with development tasks such as checking,
building and installing packages. The \textbf{usethis} package contains a range of
templates and handy functions for making life easier, many of which were
originally in \textbf{devtools}\footnote{This is important to remember when looking at older
tutorials or answers to questions on the internet. If \texttt{devtools::func()} doesn't
seem to exist any more try \texttt{usethis::func()} instead}. All of the core parts
of package development can be performed in other ways such as typing commands
on the command line or clicking buttons in RStudio but we choose to use these
packages because they provide a consistent workflow with sensible defaults.
Other packages we will use that will be introduced in the appropriate sections
are:
\begin{itemize}
\tightlist
\item
\textbf{roxygen2} for function documentation
\item
\textbf{testthat} for writing unit tests
\item
\textbf{knitr} for building vignettes
\end{itemize}
\hypertarget{setting-up}{%
\chapter{Setting up}\label{setting-up}}
\hypertarget{open-rstudio}{%
\section{Open RStudio}\label{open-rstudio}}
The first thing we need to do is open RStudio. Do this now. If you currently
have a project open close it by clicking \emph{File \textgreater{} Close project}.
\hypertarget{naming-your-package}{%
\section{Naming your package}\label{naming-your-package}}
Before we create our package we need to give it a name. Package names can only
consist of letters, numbers and dots (.) and must start with a letter. While all
of these are allowed it is generally best to stick to just lowercase letters.
Having a mix of lower and upper case letters can be hard for users to remember
(is it \textbf{RColorBrewer} or \textbf{Rcolorbrewer} or \textbf{rcolorbrewer}?). Believe it
or not choosing a name can be one of the hardest parts of making a package!
There is a balance between choosing a name that is unique enough that it is easy
to find (and doesn't already exist) and choosing something that makes it obvious
what the package does. Acronyms or abbreviations are one option that often works
well. It can be tricky to change the name of a package later so it is worth
spending some time thinking about it before you start.
\begin{quote}
\textbf{Checking availability}
If there is even a small chance that your package might be used by other
people it is worth checking that a package with your name doesn't already
exist. A handy tool for doing this is the \textbf{available} package. This package
will check common package repositories for your name as well as things like
Urban Dictionary to make sure your name doesn't have some meanings you weren't
aware of!
\end{quote}
At the end of this workshop we want you to have a personal package that you can
continue to add to and use so we suggest choosing a name that is specific to
you. Something like your initials, a nickname or a username would be good
options. For the example code we are going to use \texttt{mypkg} and you could use that
for the workshop if you want to.
\hypertarget{creating-your-package}{%
\section{Creating your package}\label{creating-your-package}}
To create a template for our package we will use the \texttt{usethis::create\_package()}
function. All it needs is a path to the directory where we want to create the
package. For the example we put it on the desktop but you should put it
somewhere more sensible.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{create_package}\NormalTok{(}\StringTok{"~/Desktop/mypkg"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
You will see some information printed to the console, something like (where USER
is your username):
\begin{verbatim}
✔ Creating 'C:/Users/USER/Desktop/mypkg/'
✔ Setting active project to 'C:/Users/USER/Desktop/mypkg'
✔ Creating 'R/'
✔ Writing 'DESCRIPTION'
Package: mypkg
Title: What the Package Does (One Line, Title Case)
Version: 0.0.0.9000
Authors@R (parsed):
* First Last <[email protected]> [aut, cre] (<https://orcid.org/YOUR-ORCID-ID>)
Description: What the package does (one paragraph).
License: What license it uses
Encoding: UTF-8
LazyData: true
✔ Writing 'NAMESPACE'
✔ Writing 'mypkg.Rproj'
✔ Adding '.Rproj.user' to '.gitignore'
✔ Adding '^mypkg\\.Rproj$', '^\\.Rproj\\.user$' to '.Rbuildignore'
✔ Opening 'C:/Users/USER/Desktop/mypkg/' in new RStudio session
✔ Setting active project to '<no active project>'
\end{verbatim}
You will see something similar whenever we run a \textbf{usethis} command. Green
ticks indicate that a step has been completed correctly. If you ever see a red
dot that means that there is something \textbf{usethis} can't do for you and you will
need to follow some instructions to do it manually. At the end a new RStudio
window with your package should open. In this window you should see the
following files:
\begin{itemize}
\tightlist
\item
\texttt{DESCRIPTION} - The metadata file for your package. We will fill this in next
and it will be updated as we develop our package.
\item
\texttt{NAMESPACE} - This file describes the functions in our package. Traditionally
this has been a tricky file to get right but the modern development tools
mean that we shouldn't need to edit it manually. If you open it you will see
a message telling you not to.
\item
\texttt{R/} - This is the directory that will hold all our R code.
\end{itemize}
These files are the minimal amount that is required for a package but we will
create other files as we go along. Some other useful files have also been
created by \textbf{usethis}.
\begin{itemize}
\tightlist
\item
\texttt{.gitignore} - This is useful if you use git for version control.
\item
\texttt{.Rbuildignore} - This file is used to mark files that are in the directory
but aren't really part of the package and shouldn't be included when we build
it. Most of the time you won't need to worry about this as \textbf{usethis} will
edit it for you.
\item
\texttt{mypkg.Rproj} - The RStudio project file. Again you don't need to worry about
this.
\end{itemize}
\hypertarget{filling-in-the-description}{%
\section{Filling in the DESCRIPTION}\label{filling-in-the-description}}
The \texttt{DESCRIPTION} file is one of the most important parts of a package. It
contains all the metadata about the package, things like what the package is
called, what version it is, a description, who the authors are, what other
packages it depends on etc. Open the \texttt{DESCRIPTION} file and you should see
something like this (with your package name).
\begin{verbatim}
Package: mypkg
Title: What the Package Does (One Line, Title Case)
Version: 0.0.0.9000
Authors@R:
person(given = "First",
family = "Last",
role = c("aut", "cre"),
email = "[email protected]",
comment = c(ORCID = "YOUR-ORCID-ID"))
Description: What the package does (one paragraph).
License: What license it uses
Encoding: UTF-8
LazyData: true
\end{verbatim}
\hypertarget{title-and-description}{%
\subsection{Title and description}\label{title-and-description}}
The package name is already set correctly but most of the other fields need to
be updated. First let's update the title and description. The title should be
a single line in Title Case that explains what your package is. The description
is a paragraph which goes into a bit more detail. For example you could write
something like this:
\begin{verbatim}
Package: mypkg
Title: My Personal Package
Version: 0.0.0.9000
Authors@R:
person(given = "First",
family = "Last",
role = c("aut", "cre"),
email = "[email protected]",
comment = c(ORCID = "YOUR-ORCID-ID"))
Description: This is my personal package. It contains some handy functions that
I find useful for my projects.
License: What license it uses
Encoding: UTF-8
LazyData: true
\end{verbatim}
\hypertarget{authors}{%
\subsection{Authors}\label{authors}}
The next thing we will update is the \texttt{Authors@R} field. There are a couple of ways
to define the author for a package but \texttt{Authors@R} is the most flexible. The
example shows us how to define an author. You can see that the example person
has been assigned the author (``aut'') and creator (``cre'') roles. There must be
at least one author and one creator for every package (they can be the same
person) and the creator must have an email address. There are many possible
roles (including woodcutter (``wdc'') and lyricist (``lyr'')) but the most important
ones are:
\begin{itemize}
\tightlist
\item
cre: the creator or maintainer of the package, the person who should be
contacted when there are problems
\item
aut: authors, people who have made significant contributions to the package
\item
ctb: contributors, people who have made smaller contributions
\item
cph: copyright holder, useful if this is someone other than the creator (such
as their employer)
\end{itemize}
\begin{quote}
\textbf{Adding an ORCID}
If you have an ORCID you can add it as a comment as shown in the example.
Although not an official field this is recognised in various places (including
CRAN) and is recommended if you want to get academic credit for your package
(or have a common name that could be confused with other package authors).
\end{quote}
Update the author information with your details. If you need to add another
author simply concatenate them using \texttt{c()} like you would with a normal vector.
\begin{verbatim}
Package: mypkg
Title: My Personal Package
Version: 0.0.0.9000
Authors@R: c(
person(given = "Package",
family = "Creator",
role = c("aut", "cre"),
email = "[email protected]"),
person(given = "Package",
family = "Contributor",
role = c("ctb"),
email = "[email protected]")
)
Description: This is my personal package. It contains some handy functions that
I find useful for my projects.
License: What license it uses
Encoding: UTF-8
LazyData: true
\end{verbatim}
\hypertarget{license-1}{%
\subsection{License}\label{license-1}}
The last thing we will update now is the software license. The license describes how
our code can be used and without one people must assume that it can't be used
at all! It is good to be as open and free as you can with your license to make
sure your code is as useful to the community as possible. For this example we
will use the MIT license which basically says the code can be used for any
purpose and doesn't come with any warranties. There are templates for some of
the most common licenses included in \textbf{usethis}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_mit_license}\NormalTok{(}\StringTok{"Your Name"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
This will update the license field.
\begin{verbatim}
Package: mypkg
Title: My Personal Package
Version: 0.0.0.9000
Authors@R: c(
person(given = "Package",
family = "Creator",
role = c("aut", "cre"),
email = "[email protected]"),
person(given = "Package",
family = "Contributor",
role = c("ctb"),
email = "[email protected]")
)
Description: This is my personal package. It contains some handy functions that
I find useful for my projects.
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
\end{verbatim}
It will also create two new files: \texttt{LICENSE.md}, which contains the text
of the MIT license (it's very short if you want to give it a read) and \texttt{LICENSE}
which simply contains:
\begin{verbatim}
YEAR: 2019
COPYRIGHT HOLDER: Your Name
\end{verbatim}
There are various other licenses you can use but make sure you choose one
designed for software not other kinds of content. For example the Creative
Commons licenses are great for writing or images but aren't designed for code.
For more information about different licenses and what they cover have a look at
\url{http://choosealicense.com/} or \url{https://tldrlegal.com/}. For a good discussion
about why it is important to declare a license read this blog post by Jeff
Attwood \url{http://blog.codinghorror.com/pick-a-license-any-license/}.
\hypertarget{functions}{%
\chapter{Functions}\label{functions}}
\hypertarget{adding-a-function}{%
\section{Adding a function}\label{adding-a-function}}
Now that our package is all set up it's time to add our first function! We can
use the \texttt{usethis::use\_r()} function to set up the file. Our function is going
to be about colours so we will use that as the name of the R file.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_r}\NormalTok{(}\StringTok{"colours"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{quote}
\textbf{Organising your code}
There are no rules about how to organise your functions into different files
but you generally want to group similar functions into a file with a clear
name. Having all of your functions in a single file isn't great, but
neither is having a separate file for each function. A good rule of thumb is
that if you are finding it hard to locate a function you might need to move
it to a new file. There are two shortcuts for finding functions in RStudio,
selecting a function name and pressing \textbf{F2} or pressing \textbf{Ctrl + .} and
searching for the function.
\end{quote}
As an example we are going to write a function that takes a colour and returns
a given number of lighter or darker shades. Copy the
following code into your R file and save it (you can ignore the comments if you
want to, they are just there to explain how the function works).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{make_shades <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(colour, n, }\DataTypeTok{lighter =} \OtherTok{TRUE}\NormalTok{) \{}
\CommentTok{# Convert the colour to RGB}
\NormalTok{ colour_rgb <-}\StringTok{ }\NormalTok{grDevices}\OperatorTok{::}\KeywordTok{col2rgb}\NormalTok{(colour)[, }\DecValTok{1}\NormalTok{]}
\CommentTok{# Decide if we are heading towards white or black}
\ControlFlowTok{if}\NormalTok{ (lighter) \{}
\NormalTok{ end <-}\StringTok{ }\DecValTok{255}
\NormalTok{ \} }\ControlFlowTok{else}\NormalTok{ \{}
\NormalTok{ end <-}\StringTok{ }\DecValTok{0}
\NormalTok{ \}}
\CommentTok{# Calculate the red, green and blue for the shades}
\CommentTok{# we calculate one extra point to avoid pure white/black}
\NormalTok{ red <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{1}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\NormalTok{ green <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{2}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\NormalTok{ blue <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{3}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\CommentTok{# Convert the RGB values to hex codes}
\NormalTok{ shades <-}\StringTok{ }\NormalTok{grDevices}\OperatorTok{::}\KeywordTok{rgb}\NormalTok{(red, green, blue, }\DataTypeTok{maxColorValue =} \DecValTok{255}\NormalTok{)}
\KeywordTok{return}\NormalTok{(shades)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\hypertarget{using-the-function}{%
\section{Using the function}\label{using-the-function}}
Now that we have a function we want to see if it works. Usually when we write
a new function we load it by copying the code to the console or sourcing the
R file. When we are developing a package we want to try and keep our
environment empty so that we can be sure we are only working with objects inside
the package. Instead we can load functions using \texttt{devtools::load\_all()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{load_all}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
The function doesn't appear in the environment, just like all the functions in
a package don't appear in the environment when we load it using \texttt{library()}.
But if we try to use it the function should work.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{make_shades}\NormalTok{(}\StringTok{"goldenrod"}\NormalTok{, }\DecValTok{5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
Congratulations, you now have a functional package! In the next section we
will perform some checks to see if we have forgotten anything.
\hypertarget{checking-you-package}{%
\chapter{Checking your package}\label{checking-you-package}}
Although what is absolutely required for a package is fairly minimal there are a
range of things that are needed for a package to be considered ``correct''.
Keeping track of all of these can be difficult but luckily the
\texttt{devtools::check()} function is here to help! This function runs a series of
checks developed by some very smart people over a long period of time that are
designed to make sure your package is working correctly. It is highly
recommended that you run \texttt{devtools::check()} often and follow its advice to
fix any problems. It's much easier to fix one or two problems when they first
come up than to tackle many at once after you have moved on to other things. Let's
run the checks on our package and see what we get.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- Building ----------------------------------------------------------- mypkg --
Setting env vars:
* CFLAGS : -Wall -pedantic
* CXXFLAGS : -Wall -pedantic
* CXX11FLAGS: -Wall -pedantic
--------------------------------------------------------------------------------
√ checking for file 'C:\Users\USER\Desktop\mypkg/DESCRIPTION' (3.1s)
- preparing 'mypkg':
√ checking DESCRIPTION meta-information ...
- checking for LF line-endings in source and make files and shell scripts
- checking for empty or unneeded directories
- building 'mypkg_0.0.0.9000.tar.gz'
-- Checking ----------------------------------------------------------- mypkg --
Setting env vars:
* _R_CHECK_CRAN_INCOMING_REMOTE_: FALSE
* _R_CHECK_CRAN_INCOMING_ : FALSE
* _R_CHECK_FORCE_SUGGESTS_ : FALSE
-- R CMD check -------------------------------------------------------------------------
- using log directory 'C:/Users/USER/AppData/Local/Temp/Rtmp8eH30T/mypkg.Rcheck' (2.3s)
- using R version 3.6.0 (2019-04-26)
- using platform: x86_64-w64-mingw32 (64-bit)
- using session charset: ISO8859-1
- using options '--no-manual --as-cran'
√ checking for file 'mypkg/DESCRIPTION' ...
- this is package 'mypkg' version '0.0.0.9000'
- package encoding: UTF-8
√ checking package namespace information ...
√ checking package dependencies (1s)
√ checking if this is a source package ...
√ checking if there is a namespace
√ checking for .dll and .exe files
√ checking for hidden files and directories ...
√ checking for portable file names ...
√ checking serialization versions ...
√ checking whether package 'mypkg' can be installed (1.4s)
√ checking package directory
√ checking for future file timestamps (815ms)
√ checking DESCRIPTION meta-information (353ms)
√ checking top-level files ...
√ checking for left-over files
√ checking index information
√ checking package subdirectories ...
√ checking R files for non-ASCII characters ...
√ checking R files for syntax errors ...
√ checking whether the package can be loaded ...
√ checking whether the package can be loaded with stated dependencies ...
√ checking whether the package can be unloaded cleanly ...
√ checking whether the namespace can be loaded with stated dependencies ...
√ checking whether the namespace can be unloaded cleanly ...
√ checking loading without being on the library search path ...
√ checking dependencies in R code ...
√ checking S3 generic/method consistency (410ms)
√ checking replacement functions ...
√ checking foreign function calls ...
√ checking R code for possible problems (2.2s)
W checking for missing documentation entries ...
Undocumented code objects:
'make_shades'
All user-level objects in a package should have documentation entries.
See chapter 'Writing R documentation files' in the 'Writing R
Extensions' manual.
- checking examples ... NONE (956ms)
See
'C:/Users/USER/AppData/Local/Temp/Rtmp8eH30T/mypkg.Rcheck/00check.log'
for details.
-- R CMD check results ------------------------------------------- mypkg 0.0.0.9000 ----
Duration: 12.3s
> checking for missing documentation entries ... WARNING
Undocumented code objects:
'make_shades'
All user-level objects in a package should have documentation entries.
See chapter 'Writing R documentation files' in the 'Writing R
Extensions' manual.
0 errors √ | 1 warning x | 0 notes √
\end{verbatim}
You can see all the different types of checks that \textbf{devtools} has run but the
most important section is at the end where it tells you how many errors,
warnings and notes there are. Errors happen when your code has broken and failed
one of the checks. If errors are not fixed your package will not work correctly.
Warnings are slightly less serious but should also be addressed. Your package
will probably work without fixing these but it is highly advised that you do.
Notes are advice rather than problems. It can be up to you whether or not to
address them but there is usually a good reason to. Often the failed checks come
with hints about how to fix them but sometimes they can be hard to understand.
If you are not sure what they mean try doing an internet search and it is likely
that somebody else has come across the same problem. Our package has received
one warning telling us that we are missing some documentation.
\hypertarget{documenting-functions}{%
\chapter{Documenting functions}\label{documenting-functions}}
The output of our check tells us that we are missing documentation for the
\texttt{make\_shades} function. Writing this kind of documentation is another part of
package development that has been made much easier by modern packages, in this
case one called \textbf{roxygen2}. R help files use a complicated syntax similar to
LaTeX that can be easy to mess up. Instead of writing all of this ourselves,
Roxygen lets us just write some special comments at the start of each
function. This has the extra advantage of keeping the documentation with the
code, which makes it easier to keep up to date.
\hypertarget{adding-documentation}{%
\section{Adding documentation}\label{adding-documentation}}
To insert a documentation skeleton in RStudio click inside the \texttt{make\_shades}
function then open the \emph{Code} menu and select \emph{Insert Roxygen skeleton} or use
\textbf{Ctrl + Alt + Shift + R}. The inserted code looks like this:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Title}
\CommentTok{#'}
\CommentTok{#' @param colour }
\CommentTok{#' @param n }
\CommentTok{#' @param lighter }
\CommentTok{#'}
\CommentTok{#' @return}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\end{Highlighting}
\end{Shaded}
Roxygen comments all start with \texttt{\#\textquotesingle{}}. The first line is the title of the
function then there is a blank line. Following that there can be a paragraph
giving a more detailed description of the function. Let's fill those in to
start with.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make n lighter or darker shades}
\CommentTok{#' }
\CommentTok{#' @param colour }
\CommentTok{#' @param n }
\CommentTok{#' @param lighter }
\CommentTok{#'}
\CommentTok{#' @return}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\end{Highlighting}
\end{Shaded}
The next section describes the parameters (or arguments) for the function marked
by the \texttt{@param} field. RStudio has helpfully filled in names of these for us
but we need to provide a description.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make n lighter or darker shades}
\CommentTok{#' }
\CommentTok{#' @param colour The colour to make shades of}
\CommentTok{#' @param n The number of shades to make}
\CommentTok{#' @param lighter Whether to make lighter (TRUE) or darker (FALSE) shades}
\CommentTok{#'}
\CommentTok{#' @return}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\end{Highlighting}
\end{Shaded}
The next field is \texttt{@return}. This is where we describe what the function
returns. This is usually fairly short but you should provide enough detail to
make sure that the user knows what they are getting back.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make n lighter or darker shades}
\CommentTok{#' }
\CommentTok{#' @param colour The colour to make shades of}
\CommentTok{#' @param n The number of shades to make}
\CommentTok{#' @param lighter Whether to make lighter (TRUE) or darker (FALSE) shades}
\CommentTok{#'}
\CommentTok{#' @return A vector of n colour hex codes}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\end{Highlighting}
\end{Shaded}
After \texttt{@return} we have \texttt{@export}. This field is a bit different because
it doesn't add documentation to the help file, instead it modifies the
\texttt{NAMESPACE} file. Adding \texttt{@export} tells Roxygen that this is a function that
we want to be available to the user. When we build the documentation Roxygen
will then add the correct information to the \texttt{NAMESPACE} file. If we had an
internal function that wasn't meant to be used by the user we would leave out
\texttt{@export}.
The last field in the skeleton is \texttt{@examples}. This is where we put some
short examples showing how the function can be used. These will be placed in
the help file and can be run using \texttt{example("function")}. Let's add a couple
of examples. If you want to add a comment to an example you need to add
another \texttt{\#}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make n lighter or darker shades}
\CommentTok{#' }
\CommentTok{#' @param colour The colour to make shades of}
\CommentTok{#' @param n The number of shades to make}
\CommentTok{#' @param lighter Whether to make lighter (TRUE) or darker (FALSE) shades}
\CommentTok{#'}
\CommentTok{#' @return A vector of n colour hex codes}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\CommentTok{#' # Five lighter shades}
\CommentTok{#' make_shades("goldenrod", 5)}
\CommentTok{#' # Five darker shades}
\CommentTok{#' make_shades("goldenrod", 5, lighter = FALSE)}
\end{Highlighting}
\end{Shaded}
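Before moving on it can be worth checking that these examples actually run. A
minimal sketch of how to do that during development (assuming the
\texttt{devtools::run\_examples()} helper, which runs all of the examples in
the package without installing it):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Run all examples in the package without installing it}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{run_examples}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
Once the package is installed the examples for a single function can also be
run with \texttt{example("make\_shades")}.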
\begin{quote}
\textbf{Other fields}
In this example we only fill in the fields in the skeleton but there are many
other useful fields. For example \texttt{@author} (specify the function author),
\texttt{@references} (any associated references) and \texttt{@seealso} (links to related
functions); a brief sketch of these is shown after this note.
\end{quote}
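As a brief, made-up sketch of how those optional fields look in a Roxygen
block (the author name and reference here are placeholders):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' @author Package Creator}
\CommentTok{#' @references Creator P (2019). An example reference.}
\CommentTok{#' @seealso Other colour functions in this package}
\end{Highlighting}
\end{Shaded}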
\hypertarget{building-documentation}{%
\section{Building documentation}\label{building-documentation}}
Now we can build our documentation using \textbf{devtools}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{document}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Updating mypkg documentation
Updating roxygen version in C:\Users\USER\Desktop\mypkg/DESCRIPTION
Writing NAMESPACE
Loading mypkg
Writing NAMESPACE
Writing make_shades.Rd
\end{verbatim}
The output shows us that \textbf{devtools} has done a few things. Firstly it has
set the version of \textbf{roxygen2} we are using in the \texttt{DESCRIPTION} file by
adding this line:
\begin{verbatim}
RoxygenNote: 6.1.1
\end{verbatim}
Next it has updated the \texttt{NAMESPACE} file. If you open it you will see:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Generated by roxygen2: do not edit by hand}
\KeywordTok{export}\NormalTok{(make_shades)}
\end{Highlighting}
\end{Shaded}
Which tells us that the \texttt{make\_shades} function is exported.
The last thing it has done is create a new file called \texttt{make\_shades.Rd} in the
\texttt{man/} directory (which will be created if it doesn't exist). The \texttt{.Rd}
extension stands for ``R documentation'' and this is what is turned into a help
file when the package is installed. Open the file and see what it looks like.
\begin{verbatim}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/colours.R
\name{make_shades}
\alias{make_shades}
\title{Make shades}
\usage{
make_shades(colour, n, lighter = TRUE)
}
\arguments{
\item{colour}{The colour to make shades of}
\item{n}{The number of shades to make}
\item{lighter}{Whether to make lighter (TRUE) or darker (FALSE) shades}
}
\value{
A vector of n colour hex codes
}
\description{
Given a colour make n lighter or darker shades
}
\examples{
# Five lighter shades
make_shades("goldenrod", 5)
# Five darker shades
make_shades("goldenrod", 5, lighter = FALSE)
}
\end{verbatim}
Hopefully you can see why we want to avoid writing this manually! This is only
a simple function but already the help file is quite complicated with lots of
braces. To see what the rendered documentation looks like just run
\texttt{?make\_shades}.
\hypertarget{formatting-documentation}{%
\section{Formatting documentation}\label{formatting-documentation}}
The rendered output already looks pretty good but we might want to add some
extra formatting to it to make it a bit clearer. As we have seen above there
is a special syntax for different kinds of formatting. For example we can mark
code in the documentation using \texttt{\textbackslash{}code\{\}}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make \textbackslash{}code\{n\} lighter or darker shades}
\CommentTok{#' }
\CommentTok{#' @param colour The colour to make shades of}
\CommentTok{#' @param n The number of shades to make}
\CommentTok{#' @param lighter Whether to make lighter (\textbackslash{}code\{TRUE\}) or darker (\textbackslash{}code\{FALSE\})}
\CommentTok{#' shades}
\CommentTok{#'}
\CommentTok{#' @return A vector of \textbackslash{}code\{n\} colour hex codes}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\CommentTok{#' # Five lighter shades}
\CommentTok{#' make_shades("goldenrod", 5)}
\CommentTok{#' # Five darker shades}
\CommentTok{#' make_shades("goldenrod", 5, lighter = FALSE)}
\end{Highlighting}
\end{Shaded}
Run \texttt{devtools::document()} again and see what has changed in the rendered file.
There are many other kinds of formatting we could use, for example: \texttt{\textbackslash{}code\{\}},
\texttt{\textbackslash{}eqn\{\}}, \texttt{\textbackslash{}emph\{\}}, \texttt{\textbackslash{}strong\{\}}, \texttt{\textbackslash{}itemize\{\}}, \texttt{\textbackslash{}enumerate\{\}}, \texttt{\textbackslash{}link\{\}},
\texttt{\textbackslash{}link{[}{]}\{\}}, \texttt{\textbackslash{}url\{\}}, \texttt{\textbackslash{}href\{\}\{\}}, \texttt{\textbackslash{}email\{\}}.
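As an illustrative sketch (the wording and link here are made up), a few of
these could be combined in the description of \texttt{make\_shades()} like
this:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Given a colour make \textbackslash{}code\{n\} \textbackslash{}emph\{lighter\} or \textbackslash{}emph\{darker\} shades,}
\CommentTok{#' returned as \textbackslash{}strong\{hex codes\}. For background reading see}
\CommentTok{#' \textbackslash{}url\{https://example.com\}.}
\end{Highlighting}
\end{Shaded}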
\begin{quote}
\textbf{Using Markdown}
If you are familiar with Markdown you may prefer to use it for writing
documentation. Luckily Roxygen has a Markdown mode that can be activated using
\texttt{usethis::use\_roxygen\_md()}. See the Roxygen Markdown vignette for more
details \url{https://cran.r-project.org/web/packages/roxygen2/vignettes/markdown.html}.
\end{quote}
\hypertarget{testing}{%
\chapter{Testing}\label{testing}}
Now that we have some documentation \texttt{devtools::check()} should run without any
problems.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- R CMD check results ------------------------------------------- mypkg 0.0.0.9000 ----
Duration: 15.2s
0 errors √ | 0 warnings √ | 0 notes √
\end{verbatim}
\emph{(This is just the bottom part of the output to save space)}
While we pass all the standard package checks there is one kind of check that
we don't have yet. Unit tests are checks to make sure that a function works in
the way that we expect. The examples we wrote earlier are kind of like informal
unit tests because they are run as part of the checking process but it is better
to have something more rigorous. One approach to writing unit tests is what is
known as ``test driven development''. The idea here is to write the tests before
you write a function. This way you know exactly what a function is supposed to
do and what problems there might be. While this is a good principle it can
take a lot of advance planning. A more common approach could be called
``bug-driven testing''. For this approach whenever we come across a bug we write
a test for it before we fix it, that way the same bug should never happen
again. When combined with some tests for obvious problems this is a good
compromise between testing for every possible outcome and not testing at all.
For example let's see what happens when we ask \texttt{make\_shades()} for a negative
number of shades.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{make_shades}\NormalTok{(}\StringTok{"goldenrod"}\NormalTok{, }\DecValTok{-1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Error in seq(colour_rgb[1], end, length.out = n + 1)[1:n] :
only 0's may be mixed with negative subscripts
\end{verbatim}
This doesn't make sense so we expect to get an error but it would be useful if
the error message was more informative. What if we ask for zero shades?
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{make_shades}\NormalTok{(}\StringTok{"goldenrod"}\NormalTok{, }\DecValTok{0}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
[1] "#DAA520"
\end{verbatim}
That does work, but it probably shouldn't. Before we make any changes to the
function let's design some tests to make sure we get what we expect. There are
a few ways to write unit tests for R packages but we are going to use the
\textbf{testthat} package. We can set everything up with \textbf{usethis}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_testthat}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Adding 'testthat' to Suggests field in DESCRIPTION
✔ Creating 'tests/testthat/'
✔ Writing 'tests/testthat.R'
● Call `use_test()` to initialize a basic test file and open it for editing.
\end{verbatim}
Now we have a \texttt{tests/} directory to hold all our tests. There is also a
\texttt{tests/testthat.R} file which looks like this:
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(testthat)}
\KeywordTok{library}\NormalTok{(mypkg)}
\KeywordTok{test_check}\NormalTok{(}\StringTok{"mypkg"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
All this does is make sure that our tests are run when we do
\texttt{devtools::check()}. To open a new test file we can use \texttt{usethis::use\_test()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_test}\NormalTok{(}\StringTok{"colours"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Increasing 'testthat' version to '>= 2.1.0' in DESCRIPTION
✔ Writing 'tests/testthat/test-colours.R'
● Modify 'tests/testthat/test-colours.R'
\end{verbatim}
Just like R files our test file needs a name. Tests can be split up however you
like but it often makes sense to have them match up with the R files so things
are easy to find. Our test file comes with a small example that shows how to
use \textbf{testthat}.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{test_that}\NormalTok{(}\StringTok{"multiplication works"}\NormalTok{, \{}
\KeywordTok{expect_equal}\NormalTok{(}\DecValTok{2} \OperatorTok{*}\StringTok{ }\DecValTok{2}\NormalTok{, }\DecValTok{4}\NormalTok{)}
\NormalTok{\})}
\end{Highlighting}
\end{Shaded}
Each set of tests starts with the \texttt{test\_that()} function. This function has two
arguments, a description and the code with the tests that we want to run. It
looks a bit strange to start with but it makes sense if you read it as a
sentence, ``Test that multiplication works''. That makes it clear what the test
is for. Inside the code section we see an \texttt{expect} function. This function also
has two parts, the thing we want to test and what we expect it to be. There are
different functions for different types of expectations. Reading this part as
a sentence says something like ``Expect that 2 * 2 is equal to 4''. For our test
we want to use the \texttt{expect\_error()} function, because that is what we expect.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{test_that}\NormalTok{(}\StringTok{"n is at least 1"}\NormalTok{, \{}
\KeywordTok{expect_error}\NormalTok{(}\KeywordTok{make_shades}\NormalTok{(}\StringTok{"goldenrod"}\NormalTok{, }\DecValTok{-1}\NormalTok{),}
\StringTok{"n must be at least 1"}\NormalTok{)}
\KeywordTok{expect_error}\NormalTok{(}\KeywordTok{make_shades}\NormalTok{(}\StringTok{"goldenrod"}\NormalTok{, }\DecValTok{0}\NormalTok{),}
\StringTok{"n must be at least 1"}\NormalTok{)}
\NormalTok{\})}
\end{Highlighting}
\end{Shaded}
To run our tests we use \texttt{devtools::test()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{test}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Loading mypkg
Testing mypkg
√ | OK F W S | Context
x | 0 2 | colours
--------------------------------------------------------------------------------
test-colours.R:2: failure: n is at least 1
`make_shades("goldenrod", -1)` threw an error with unexpected message.
Expected match: "n must be at least 1"
Actual message: "only 0's may be mixed with negative subscripts"
test-colours.R:4: failure: n is at least 1
`make_shades("goldenrod", 0)` did not throw an error.
--------------------------------------------------------------------------------
== Results =====================================================================
OK: 0
Failed: 2
Warnings: 0
Skipped: 0
No one is perfect!
\end{verbatim}
We can see that both of our tests failed. That is ok because we haven't fixed
the function yet. The first test fails because the error message is wrong and
the second one because there is no error. Now that we have some tests and we
know they check the right things we can modify our function to check the value
of \texttt{n} and give the correct error.
Let's add some code to check the value of \texttt{n}. We will update the documentation
as well so the user knows what values can be used.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Make shades}
\CommentTok{#'}
\CommentTok{#' Given a colour make \textbackslash{}code\{n\} lighter or darker shades}
\CommentTok{#'}
\CommentTok{#' @param colour The colour to make shades of}
\CommentTok{#' @param n The number of shades to make, at least 1}
\CommentTok{#' @param lighter Whether to make lighter (\textbackslash{}code\{TRUE\}) or darker (\textbackslash{}code\{FALSE\})}
\CommentTok{#' shades}
\CommentTok{#'}
\CommentTok{#' @return A vector of \textbackslash{}code\{n\} colour hex codes}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\CommentTok{#' # Five lighter shades}
\CommentTok{#' make_shades("goldenrod", 5)}
\CommentTok{#' # Five darker shades}
\CommentTok{#' make_shades("goldenrod", 5, lighter = FALSE)}
\NormalTok{make_shades <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(colour, n, }\DataTypeTok{lighter =} \OtherTok{TRUE}\NormalTok{) \{}
\CommentTok{# Check the value of n}
\ControlFlowTok{if}\NormalTok{ (n }\OperatorTok{<}\StringTok{ }\DecValTok{1}\NormalTok{) \{}
\KeywordTok{stop}\NormalTok{(}\StringTok{"n must be at least 1"}\NormalTok{)}
\NormalTok{ \}}
\CommentTok{# Convert the colour to RGB}
\NormalTok{ colour_rgb <-}\StringTok{ }\NormalTok{grDevices}\OperatorTok{::}\KeywordTok{col2rgb}\NormalTok{(colour)[, }\DecValTok{1}\NormalTok{]}
\CommentTok{# Decide if we are heading towards white or black}
\ControlFlowTok{if}\NormalTok{ (lighter) \{}
\NormalTok{ end <-}\StringTok{ }\DecValTok{255}
\NormalTok{ \} }\ControlFlowTok{else}\NormalTok{ \{}
\NormalTok{ end <-}\StringTok{ }\DecValTok{0}
\NormalTok{ \}}
\CommentTok{# Calculate the red, green and blue for the shades}
\CommentTok{# we calculate one extra point to avoid pure white/black}
\NormalTok{ red <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{1}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\NormalTok{ green <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{2}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\NormalTok{ blue <-}\StringTok{ }\KeywordTok{seq}\NormalTok{(colour_rgb[}\DecValTok{3}\NormalTok{], end, }\DataTypeTok{length.out =}\NormalTok{ n }\OperatorTok{+}\StringTok{ }\DecValTok{1}\NormalTok{)[}\DecValTok{1}\OperatorTok{:}\NormalTok{n]}
\CommentTok{# Convert the RGB values to hex codes}
\NormalTok{ shades <-}\StringTok{ }\NormalTok{grDevices}\OperatorTok{::}\KeywordTok{rgb}\NormalTok{(red, green, blue, }\DataTypeTok{maxColorValue =} \DecValTok{255}\NormalTok{)}
\KeywordTok{return}\NormalTok{(shades)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
\begin{quote}
\textbf{Writing parameter checks}
These kinds of checks for parameter inputs are an important part of a function
that is going to be used by other people (or future you). They make sure that
all the input is correct before the function tries to do anything and avoid
confusing error messages. However they can be fiddly and repetitive to write.
If you find yourself writing lots of these checks two packages that can make
life easier by providing functions to do it for you are \textbf{checkmate} and
\textbf{assertthat} (a brief sketch follows this note).
\end{quote}
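As a rough sketch of what that could look like (assuming
\texttt{checkmate::assert\_number()} and \texttt{assertthat::assert\_that()};
neither is used in the workshop code), the check at the top of
\texttt{make\_shades()} could be written as a single call:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Sketch: the same check written with helper packages}
\NormalTok{checkmate}\OperatorTok{::}\KeywordTok{assert_number}\NormalTok{(n, }\DataTypeTok{lower =} \DecValTok{1}\NormalTok{)}
\CommentTok{# or}
\NormalTok{assertthat}\OperatorTok{::}\KeywordTok{assert_that}\NormalTok{(n }\OperatorTok{>=}\NormalTok{ }\DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}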
In the updated function we have used the \texttt{stop()} function to raise an error. If we wanted to give
a warning we would use \texttt{warning()} and if we just wanted to give some information
to the user we would use \texttt{message()}. Using \texttt{message()} instead of \texttt{print()} or
\texttt{cat()} is important because it means the user can hide the messages using
\texttt{suppressMessages()} (or \texttt{suppressWarnings()} for warnings). Now we can try our
tests again and they should pass.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{test}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Loading mypkg
Testing mypkg
√ | OK F W S | Context
√ | 2 | colours
== Results =====================================================================
OK: 2
Failed: 0
Warnings: 0
Skipped: 0
\end{verbatim}
There are more tests we could write for this function but we will leave that as
an exercise for you. If you want to see what parts of your code need testing you
can run the \texttt{devtools::test\_coverage()} function (you might need to install the
\textbf{DT} package first). This function uses the \textbf{covr} package to make a report
showing which lines of your code are covered by tests.
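For example (assuming the \textbf{covr} and \textbf{DT} packages are
installed):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Open an interactive report showing test coverage for the package}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{test_coverage}\NormalTok{()}
\end{Highlighting}
\end{Shaded}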
\hypertarget{dependencies}{%
\chapter{Dependencies}\label{dependencies}}
Our \texttt{make\_shades()} function produces shades of a colour but it would be good
to see what those look like. Below is a new function called \texttt{plot\_colours()}
that can visualise them for us using \textbf{ggplot2} (if you don't have \textbf{ggplot2}
installed, install it now). Add this function to \texttt{colours.R}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Plot colours}
\CommentTok{#'}
\CommentTok{#' Plot a vector of colours to see what they look like}
\CommentTok{#' }
\CommentTok{#' @param colours Vector of colours to plot}
\CommentTok{#'}
\CommentTok{#' @return A ggplot2 object}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\CommentTok{#' shades <- make_shades("goldenrod", 5)}
\CommentTok{#' plot_colours(shades)}
\NormalTok{plot_colours <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(colours) \{}
\NormalTok{ plot_data <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{Colour =}\NormalTok{ colours)}
\KeywordTok{ggplot}\NormalTok{(plot_data,}
\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour, }\DataTypeTok{y =} \DecValTok{1}\NormalTok{, }\DataTypeTok{fill =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour,}
\DataTypeTok{label =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour)) }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_tile}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{angle =} \StringTok{"90"}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\KeywordTok{scale_fill_identity}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\KeywordTok{theme_void}\NormalTok{()}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
Now that we have added something new we should run our checks again
(\texttt{devtools::document()} is automatically run as part of \texttt{devtools::check()} so
we can skip that step).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- R CMD check results ------------------------------------------- mypkg 0.0.0.9000 ----
Duration: 15.4s
> checking examples ... ERROR
Running examples in 'mypkg-Ex.R' failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: plot_colours
> ### Title: Plot colours
> ### Aliases: plot_colours
>
> ### ** Examples
>
> shades <- make_shades("goldenrod", 5)
> plot_colours(shades)
Error in ggplot(plot_data, aes(x = .data$Colour, y = 1, fill = .data$Colour, :
could not find function "ggplot"
Calls: plot_colours
Execution halted
> checking R code for possible problems ... NOTE
plot_colours: no visible global function definition for 'ggplot'
plot_colours: no visible global function definition for 'aes'
plot_colours: no visible binding for global variable '.data'
plot_colours: no visible global function definition for 'geom_tile'
plot_colours: no visible global function definition for 'geom_text'
plot_colours: no visible global function definition for
'scale_fill_identity'
plot_colours: no visible global function definition for 'theme_void'
Undefined global functions or variables:
.data aes geom_text geom_tile ggplot scale_fill_identity theme_void
1 error x | 0 warnings √ | 1 note x
\end{verbatim}
The checks have returned one error and one note. The error is more serious so
let's have a look at that first. It says \texttt{could\ not\ find\ function\ "ggplot"}.
Hmmmm\ldots{}the \texttt{ggplot()} function is in the \textbf{ggplot2} package. When we used
\texttt{col2rgb()} in the \texttt{make\_shades()} function we had to prefix it with
\texttt{grDevices::}, maybe we should do the same here.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Plot colours}
\CommentTok{#'}
\CommentTok{#' Plot a vector of colours to see what they look like}
\CommentTok{#' }
\CommentTok{#' @param colours Vector of colours to plot}
\CommentTok{#'}
\CommentTok{#' @return A ggplot2 object}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @examples}
\CommentTok{#' shades <- make_shades("goldenrod", 5)}
\CommentTok{#' plot_colours(shades)}
\NormalTok{plot_colours <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(colours) \{}
\NormalTok{ plot_data <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{Colour =}\NormalTok{ colours)}
\NormalTok{ ggplot2}\OperatorTok{::}\KeywordTok{ggplot}\NormalTok{(plot_data,}
\NormalTok{ ggplot2}\OperatorTok{::}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour, }\DataTypeTok{y =} \DecValTok{1}\NormalTok{, }\DataTypeTok{fill =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour,}
\DataTypeTok{label =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour)) }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{geom_tile}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{angle =} \StringTok{"90"}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{scale_fill_identity}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{theme_void}\NormalTok{()}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
Now what do our checks say?
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- R CMD check results ------------------------------------------ mypkg 0.0.0.9000 ----
Duration: 15s
> checking examples ... ERROR
Running examples in 'mypkg-Ex.R' failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: plot_colours
> ### Title: Plot colours
> ### Aliases: plot_colours
>
> ### ** Examples
>
> shades <- make_shades("goldenrod", 5)
> plot_colours(shades)
Error in loadNamespace(name) : there is no package called 'ggplot2'
Calls: plot_colours ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
Execution halted
> checking dependencies in R code ... WARNING
'::' or ':::' import not declared from: 'ggplot2'
> checking R code for possible problems ... NOTE
plot_colours: no visible binding for global variable '.data'
Undefined global functions or variables:
.data
1 error x | 1 warning x | 1 note x
\end{verbatim}
There is now one error, one warning and one note. That seems like we are going
in the wrong direction but the error is from running the example and the
warning gives us a clue to what the problem is. It says ``\,`::' or `:::' import
not declared from: `ggplot2'\,''. The important word here is ``import''. Just like
when we export a function in our package we need to make it clear when we are
using functions in another package. To do this we can use
\texttt{usethis::use\_package()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Setting active project to 'C:/Users/Luke/Desktop/mypkg'
✔ Adding 'ggplot2' to Imports field in DESCRIPTION
● Refer to functions with `ggplot2::fun()`
\end{verbatim}
The output tells us to refer to functions using ``::'' like we did above so we
were on the right track. It also mentions that it has modified the \texttt{DESCRIPTION}
file. Let's have a look at it now.
\begin{verbatim}
Package: mypkg
Title: My Personal Package
Version: 0.0.0.9000
Authors@R: c(
person(given = "Package",
family = "Creator",
role = c("aut", "cre"),
email = "[email protected]"),
person(given = "Package",
family = "Contributor",
role = c("ctb"),
email = "[email protected]")
)
Description: This is my personal package. It contains some handy functions that
I find useful for my projects.
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.1.1
Suggests:
testthat (>= 2.1.0)
Imports:
ggplot2
\end{verbatim}
The two lines at the bottom tell us that our package uses functions in
\textbf{ggplot2}. There are three main types of dependencies\footnote{There is a fourth kind
(Enhances) but that is almost never used.}. Imports is the most common. This
means that we use functions from these packages and they must be installed when
our package is installed. The next most common is Suggests. These are packages
that we use in developing our package (such as \textbf{testthat} which is already
listed here) or packages that provide some additional, optional functionality.
Suggested packages aren't guaranteed to be installed so we need to check that
they are available before we use them (a brief sketch of such a check follows
this paragraph). The output of \texttt{usethis::use\_package()} will give you an example if
you add a suggested package. The third type of dependency is Depends. If you
depend on a package it will be loaded whenever your package is loaded. There are
some cases where you might need to do this but you should avoid Depends unless
it is absolutely necessary.
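A minimal sketch of what such a check might look like inside a function that
uses a suggested package (\textbf{scales} here is just a hypothetical example,
it is not a dependency of our package):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Sketch: check that a suggested package is available before using it}
\ControlFlowTok{if}\NormalTok{ (}\OperatorTok{!}\KeywordTok{requireNamespace}\NormalTok{(}\StringTok{"scales"}\NormalTok{, }\DataTypeTok{quietly =} \OtherTok{TRUE}\NormalTok{)) \{}
\NormalTok{  }\KeywordTok{stop}\NormalTok{(}\StringTok{"Package 'scales' is needed for this function, please install it"}\NormalTok{)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}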
\begin{quote}
\textbf{Should you use a dependency?}
Deciding which packages (and how many) to depend on is a difficult and
philosophical choice. Using functions from other packages can save you time
and effort in development but it might make it more difficult to maintain
your package. Some things you might want to consider before depending on a
package are:
\begin{itemize}
\tightlist
\item
How much of the functionality of the package do you want to use?
\item
Could you easily reproduce that functionality?
\item
How well maintained is the package?
\item
How often is it updated? Packages that change a lot are more likely to
break your code.
\item
How many dependencies of its own does that package have?
\item
Are your users likely to have the package installed already?
\end{itemize}
Packages like \textbf{ggplot2} are good choices for dependencies because they are
well maintained, don't change too often, are commonly used and perform a
single task so you are likely to use many of the functions.
\end{quote}
Hopefully now that we have imported \textbf{ggplot2} we should pass the checks.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- R CMD check results ------------------------------------------ mypkg 0.0.0.9000 ----
Duration: 16.4s
> checking R code for possible problems ... NOTE
plot_colours: no visible binding for global variable '.data'
Undefined global functions or variables:
.data
0 errors √ | 0 warnings √ | 1 note x
\end{verbatim}
Success! Now all that's left is that pesky note. Visualisation functions are
probably some of the most common functions in packages but there are some
tricks to programming with \textbf{ggplot2}. The details are outside the scope of
this workshop but if you are interested see the ``Using ggplot2 in packages''
vignette \url{https://ggplot2.tidyverse.org/dev/articles/ggplot2-in-packages.html}.
To solve our problem we need to import the \textbf{rlang} package.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"rlang"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Adding 'rlang' to Imports field in DESCRIPTION
● Refer to functions with `rlang::fun()`
\end{verbatim}
Writing \texttt{rlang::.data} everywhere wouldn't be very attractive or readable\footnote{Also for
technical reasons it won't work in this case.}. When we want to use a function
from another package without the \texttt{::} prefix we need to explicitly import it. Just like when we
exported our functions we do this using a Roxygen comment.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' Plot colours}
\CommentTok{#'}
\CommentTok{#' Plot a vector of colours to see what they look like}
\CommentTok{#' }
\CommentTok{#' @param colours Vector of colours to plot}
\CommentTok{#'}
\CommentTok{#' @return A ggplot2 object}
\CommentTok{#' @export}
\CommentTok{#'}
\CommentTok{#' @importFrom rlang .data}
\CommentTok{#' }
\CommentTok{#' @examples}
\CommentTok{#' shades <- make_shades("goldenrod", 5)}
\CommentTok{#' plot_colours(shades)}
\NormalTok{plot_colours <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(colours) \{}
\NormalTok{ plot_data <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{Colour =}\NormalTok{ colours)}
\NormalTok{ ggplot2}\OperatorTok{::}\KeywordTok{ggplot}\NormalTok{(plot_data,}
\NormalTok{ ggplot2}\OperatorTok{::}\KeywordTok{aes}\NormalTok{(}\DataTypeTok{x =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour, }\DataTypeTok{y =} \DecValTok{1}\NormalTok{, }\DataTypeTok{fill =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour,}
\DataTypeTok{label =}\NormalTok{ .data}\OperatorTok{$}\NormalTok{Colour)) }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{geom_tile}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{geom_text}\NormalTok{(}\DataTypeTok{angle =} \StringTok{"90"}\NormalTok{) }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{scale_fill_identity}\NormalTok{() }\OperatorTok{+}
\StringTok{ }\NormalTok{ggplot2}\OperatorTok{::}\KeywordTok{theme_void}\NormalTok{()}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
When we use \texttt{devtools::document()} this comment will be read and a note placed
in the \texttt{NAMESPACE} file, just like for \texttt{@export}.
\begin{verbatim}
# Generated by roxygen2: do not edit by hand
export(make_shades)
export(plot_colours)
importFrom(rlang,.data)
\end{verbatim}
Those two steps should fix our note.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
-- R CMD check results ------------------------------------------ mypkg 0.0.0.9000 ----
Duration: 16.8s
0 errors √ | 0 warnings √ | 0 notes √
\end{verbatim}
If we used \texttt{rlang::.data} in multiple functions in our package it might make
sense to only import it once. It doesn't matter where we put the \texttt{@importFrom}
line (or how many times) it will still be added to \texttt{NAMESPACE}. This means we
can put all imports in a central location, for example in a dedicated file as
sketched below. The advantage of this is that they only appear once and are all
in one place, but it makes it harder to know which of our functions have which
imports and to remove them if they are no longer needed. Which approach you take
is up to you.
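If you prefer the central approach, one common (and entirely optional) pattern
is to collect the imports in a file of their own; the file name here is just a
suggestion:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# R/mypkg-imports.R (hypothetical file collecting all imports)}
\CommentTok{#' @importFrom rlang .data}
\OtherTok{NULL}
\end{Highlighting}
\end{Shaded}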
We should write some tests for this function as well but we will leave that
as an exercise for you to try later.
\hypertarget{other-documentation}{%
\chapter{Other documentation}\label{other-documentation}}
In a previous section we documented our functions using Roxygen comments but
there are a few other kinds of documentation we should have.
\hypertarget{package-help-file}{%
\section{Package help file}\label{package-help-file}}
Users can find out about our functions using \texttt{?function-name} but what if they
want to find out about the package itself? There is some information in the
\texttt{DESCRIPTION} but that can be hard to access. Let's add a help file for the
package.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package_doc}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Writing 'R/mypkg-package.R'
\end{verbatim}
This creates a special R file for us called \texttt{mypkg-package.R}. The contents of
this file don't look like much but they are understood by \textbf{devtools} and
\textbf{roxygen2}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{#' @keywords internal}
\StringTok{"_PACKAGE"}
\CommentTok{# The following block is used by usethis to automatically manage}
\CommentTok{# roxygen namespace tags. Modify with care!}
\CommentTok{## usethis namespace: start}
\CommentTok{## usethis namespace: end}
\OtherTok{NULL}
\end{Highlighting}
\end{Shaded}
Run \texttt{devtools::document()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{document}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Updating mypkg documentation
Writing NAMESPACE
Loading mypkg
Writing NAMESPACE
Writing mypkg-package.Rd
\end{verbatim}
We can see that a new \texttt{.Rd} file has been created and we can view the contents
using \texttt{?mypkg}. The information here has been automatically pulled from the
\texttt{DESCRIPTION} file so we only need to update it in one place.
\hypertarget{vignettes}{%
\section{Vignettes}\label{vignettes}}
The documentation we have written so far explains how individual functions
work in detail but it doesn't show what the package does as a whole. Vignettes
are short tutorials that explain what the package is designed for and how
different functions can be used together. There are different ways to write
vignettes but usually they are R Markdown files. We can create a vignette with
\texttt{usethis::use\_vignette()}. There can be multiple vignettes but it is common
practice to start with one that introduces the whole package.
\begin{quote}
\textbf{What is R Markdown?}
Markdown is a simple markup language that makes it possible to write documents
with minimal formatting. See \emph{Help} \textgreater{} \emph{Markdown Quick Reference} in RStudio
for a quick guide to how this formatting works. R Markdown adds chunks of R
code that are run and the output included in the final document.
\end{quote}
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_vignette}\NormalTok{(}\StringTok{"mypkg"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Adding 'knitr' to Suggests field in DESCRIPTION
✔ Setting VignetteBuilder field in DESCRIPTION to 'knitr'
✔ Adding 'inst/doc' to '.gitignore'
✔ Creating 'vignettes/'
✔ Adding '*.html', '*.R' to 'vignettes/.gitignore'
✔ Adding 'rmarkdown' to Suggests field in DESCRIPTION
✔ Writing 'vignettes/mypkg.Rmd'
● Modify 'vignettes/mypkg.Rmd'
\end{verbatim}
Because this is our first vignette \textbf{usethis} has added some information to
the \texttt{DESCRIPTION} file including adding the \textbf{knitr} package as a suggested
dependency. It also creates a \texttt{vignettes/} directory and opens our new
\texttt{mypkg.Rmd} file.
\begin{verbatim}
---
title: "mypkg"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{mypkg}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(mypkg)
```
\end{verbatim}
If you are familiar with R Markdown you might note some unusual content in the
header. This is important for the vignette to build properly. There are also
some \textbf{knitr} options set which are the convention for vignettes.
Let's add a short example of how to use our package.
\begin{verbatim}
---
title: "mypkg"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{mypkg}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(mypkg)
```
# Introduction
This is my personal package. It contains some handy functions that I find useful
for my projects.
# Colours
Sometimes you want to generate shades of a colour. The `make_shades()` function
makes this easy!
```{r}
shades <- make_shades("goldenrod", 5)
```
If you want to see what the shades look like you can plot them using
`plot_colours()`.
```{r}
plot_colours(shades)
```
This function is also useful for viewing any other palettes.
```{r}
plot_colours(rainbow(5))
```
\end{verbatim}
To see what the vignette looks like run \texttt{devtools::build\_vignettes()}. Asking
\textbf{devtools} to build the vignette rather than rendering it in another way
(such as the \emph{Knit} button in RStudio) makes sure that we are using the
development version of the package rather than any version that is installed.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{build_vignettes}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Building mypkg vignettes
--- re-building 'mypkg.Rmd' using rmarkdown
--- finished re-building 'mypkg.Rmd'
Moving mypkg.html, mypkg.R to doc/
Copying mypkg.Rmd to doc/
Building vignette index
\end{verbatim}
This creates a new directory called \texttt{doc/} that contains the rendered vignette.
Click on the \texttt{mypkg.html} file and open it in your browser.
If you want to use any other packages in your vignette that the package doesn't
already depend on you need to add them as a suggested dependency.
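For example, if the vignette demonstrated something using the \textbf{dplyr}
package (purely hypothetical here) we could add it as a suggested dependency,
assuming the \texttt{type} argument of \texttt{usethis::use\_package()}:
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"dplyr"}\NormalTok{, }\DataTypeTok{type =} \StringTok{"Suggests"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}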
\hypertarget{readme}{%
\section{README}\label{readme}}
If you plan on sharing the source code rather than the built package it is
useful to have a README file to explain what the package is, how to install and
use it, how to contribute etc. We can create a template with
\texttt{usethis::use\_readme\_md()} (if we wanted an R Markdown file with R code and
output we might use \texttt{usethis::use\_readme\_rmd()} instead).
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_readme_md}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Writing 'README.md'
● Modify 'README.md'
\end{verbatim}
\begin{verbatim}
# mypkg
<!-- badges: start -->
<!-- badges: end -->
The goal of mypkg is to ...
## Installation
You can install the released version of mypkg from [CRAN](https://CRAN.R-project.org) with:
``` r
install.packages("mypkg")
```
## Example
This is a basic example which shows you how to solve a common problem:
``` r
library(mypkg)
## basic example code
```
\end{verbatim}
There are comments near the top that mention badges and you might have seen
badges (or shields) on README files in code repositories before. There are
several \textbf{usethis} functions for adding badges. For example we can mark this
package as being at the experimental stage using
\texttt{usethis::use\_lifecycle\_badge()}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_lifecycle_badge}\NormalTok{(}\StringTok{"experimental"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
# mypkg
<!-- badges: start -->
[](https://www.tidyverse.org/lifecycle/#experimental)
<!-- badges: end -->
The goal of mypkg is to ...
\end{verbatim}
The rest of the template isn't very useful so replace it with something better.
\hypertarget{package-website}{%
\section{Package website}\label{package-website}}
If you have a publicly available package it can be useful to have a website
displaying the package documentation. It gives your users somewhere to go and
helps your package appear in search results. Luckily this is easily achieved
using the \textbf{pkgdown} package. If you have it installed you can set it up with
\textbf{usethis}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_pkgdown}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
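After that setup the site itself can be built locally. This sketch assumes
\texttt{pkgdown::build\_site()}, which renders the documentation into a
\texttt{docs/} directory that can then be published (for example with GitHub
Pages):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Build the documentation website into the docs/ directory}
\NormalTok{pkgdown}\OperatorTok{::}\KeywordTok{build_site}\NormalTok{()}
\end{Highlighting}
\end{Shaded}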
\hypertarget{versioning}{%
\chapter{Versioning}\label{versioning}}
We now have at least something for all of the major parts of our package.
Whenever you reach a milestone like this it is good to update the package
version. Having a good versioning system is important when it comes to things
like solving user issues. Version information is recorded in the \texttt{DESCRIPTION}
file. This is what we have at the moment.
\begin{verbatim}
Version: 0.0.0.9000
\end{verbatim}
This version number follows the format \texttt{major.minor.patch.dev}. The different
parts of the version represent different things:
\begin{itemize}
\tightlist
\item
\texttt{major} - A significant change to the package that would be expected to break
users' code. This is updated very rarely, when the package has been redesigned
in some way.
\item
\texttt{minor} - A minor version update means that new functionality has been added
to the package. It might be new functions or improvements to existing
functions that are compatible with most existing code.
\item
\texttt{patch} - Patch updates are bug fixes. They solve existing issues but don't
do anything new.
\item
\texttt{dev} - Dev versions are used during development and this part is missing from
release versions. For example you might use a dev version when you give
someone a beta version to test. A package with a dev version can be expected
to change rapidly or have undiscovered issues.
\end{itemize}
Now that we know how this system works let's increase our package version.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_version}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
Current version is 0.0.0.9000.
Which part to increment? (0 to exit)
1: major --> 1.0.0
2: minor --> 0.1.0
3: patch --> 0.0.1
4: dev --> 0.0.0.9001
Selection:
\end{verbatim}
The prompt asks us which part of the version we want to increment. We have added
some new functions so let's make a new minor version.
\begin{verbatim}
Selection: 2
✔ Setting Version field in DESCRIPTION to '0.1.0'
\end{verbatim}
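If you already know which component you want to bump you can skip the prompt
by passing it directly (assuming the \texttt{which} argument of
\texttt{usethis::use\_version()}):
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_version}\NormalTok{(}\StringTok{"minor"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}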
Whenever we update the package version we should record what changes have been
made. We do this in a \texttt{NEWS.md} file.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_news_md}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Writing 'NEWS.md'
● Modify 'NEWS.md'
\end{verbatim}
Modify the file to record what we have done during the workshop.
\begin{verbatim}
# mypkg 0.1.0
* Created the package
* Added the `make_shades()` function
* Added the `plot_colours()` function
* Added a vignette
\end{verbatim}
\hypertarget{building-installing-and-releasing}{%
\chapter{Building, installing and releasing}\label{building-installing-and-releasing}}
If you want to start using your package in other projects the simplest thing
to do is run \texttt{devtools::install()}. This will install your package in the same
way as any other package so that it can be loaded with \texttt{library()}. However this
will only work on the computer you are developing the package on. If you want
to share the package with other people (or other computers you work on) there
are a few different options.
\hypertarget{building}{%
\section{Building}\label{building}}
One way to share your package is to manually transfer it to somewhere else. But
rather than copying the development directory what you should share is a
prebuilt package archive. Running \texttt{devtools::build()} will bundle up your
package into a \texttt{.tar.gz} file without any of the extra bits required during
development. This archive can then be transferred to wherever you need it and
installed using \texttt{install.packages("mypkg.tar.gz",\ repos\ =\ NULL)} or \texttt{R\ CMD\ INSTALL\ mypkg.tar.gz} on the command line. While this is fine if you just want
to share the package with yourself or a few people you know, it doesn't work if
you want it to be available to the general community.
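
To make the manual option concrete, the two steps might look something like the
following sketch (the archive name will include your package name and version, so
the file name shown here is only an example):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Build the package archive; the path to the .tar.gz file is returned}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{build}\NormalTok{()}
\CommentTok{# On the machine you copied the file to, install it from the local file}
\CommentTok{# (the file name here is just an example)}
\KeywordTok{install.packages}\NormalTok{(}\StringTok{"mypkg_0.1.0.tar.gz"}\NormalTok{, }\DataTypeTok{repos =} \OtherTok{NULL}\NormalTok{, }\DataTypeTok{type =} \StringTok{"source"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}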
\hypertarget{official-repositories}{%
\section{Official repositories}\label{official-repositories}}
\hypertarget{cran}{%
\subsection{CRAN}\label{cran}}
The most common repository for public R packages is the Comprehensive R Archive
Network (CRAN). This is where packages are usually downloaded from when you
use \texttt{install.packages()}. Compared to similar repositories for other programming
languages, getting your package accepted to CRAN means meeting a stricter set of
requirements. While this makes the submission process more difficult it gives
users confidence that your package is reliable and will work on multiple
platforms. It also makes your package much easier to install for most users and
makes it more discoverable. The details of the CRAN submission process are
beyond the scope of this workshop but it is very well covered in the ``Release''
section of Hadley Wickham's ``R packages'' book
(\url{http://r-pkgs.had.co.nz/release.html}) and the CRAN section of Karl Broman's
``R package primer'' (\url{https://kbroman.org/pkg_primer/pages/cran.html}). You should
also read the official CRAN submission checklist at
\url{https://cran.r-project.org/web/packages/submission_checklist.html}. The CRAN
submission process has a reputation for being prickly and frustrating to go
through but it is important to remember that the maintainers are volunteering
their time to do this for thousands of packages. Because of their hard work
CRAN is a large part of why R is so successful.
\hypertarget{bioconductor}{%
\subsection{Bioconductor}\label{bioconductor}}
If your package is designed for analysing biological data you might want to
submit it to Bioconductor rather than CRAN. While Bioconductor has a smaller
audience it is more specialised and is often the first place researchers in the
life sciences look. Building a Bioconductor package also means that you can
take advantage of the extensive ecosystem of existing objects and packages for
handling biological data types. While there are lots of advantages to having
your package on Bioconductor the coding style is slightly different to what is
often used for CRAN packages. If you think you might want to submit your
package to Bioconductor in the future have a look at the Bioconductor package
guidelines (\url{https://www.bioconductor.org/developers/package-guidelines/}) and the
how-to guide to building a Bioconductor package
(\url{https://www.bioconductor.org/developers/how-to/buildingPackagesForBioc/}). The
Bioconductor submission process is conducted through GitHub
(\url{https://bioconductor.org/developers/package-submission/}). The Bioconductor
maintainers will guide you through the process and make suggestions about how
to improve your package and integrate it with other Bioconductor packages.
Unlike CRAN, which accepts packages all year round, Bioconductor has two annual
releases, usually in April and October. This means that all the packages in a
release are guaranteed to be compatible with each other but make sure you
submit in time or you will have to wait another six months for your package to
be available to most users.
\hypertarget{ropensci}{%
\subsection{rOpenSci}\label{ropensci}}
rOpenSci is not a package repository as such but an organisation that reviews
and improves R packages. Packages that have been accepted by rOpenSci should
meet a certain set of standards. By submitting your package to rOpenSci you will
get it reviewed by experienced programmers who can offer suggestions on how to
improve it. If you are accepted you will receive assistance with maintaining
your package and it will be promoted by the organisation. Have a look at their
submission page for more details \url{https://github.com/ropensci/software-review}.
\hypertarget{code-sharing-websites}{%
\section{Code sharing websites}\label{code-sharing-websites}}
Uploading your package to a code sharing website such as GitHub, Bitbucket or
GitLab offers a good compromise between making your package available and
going through an official submission process. This is a particularly good
option for packages that are still in early development and are not ready to
be submitted to one of the major repositories. Making your package available on
one of these sites also gives it a central location for people to ask questions
and submit issues. Code sharing websites are usually accessed through the git
version control system. If you are unfamiliar with using git on the command line
there are functions in \textbf{usethis} that can run the commands for you from R. The
following steps will take you through uploading a package to GitHub but they are
similar for other websites. If you don't already have a GitHub account create
one here \url{https://github.com/join}.
\hypertarget{set-up-git}{%
\subsection{Set up git}\label{set-up-git}}
First we need to configure our git details.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_git_config}\NormalTok{(}\DataTypeTok{user.name =} \StringTok{"Your Name"}\NormalTok{, }\DataTypeTok{user.email =} \StringTok{"[email protected]"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
The email address should be the same one you used to sign up to GitHub. Now we
can set up git in our package repository.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_git}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Initialising Git repo
There are 13 uncommitted files:
* '.gitignore'
* '.Rbuildignore'
* 'DESCRIPTION'
* 'LICENSE'
* 'LICENSE.md'
* 'man/'
* 'mypkg.Rproj'
* 'NAMESPACE'
* 'NEWS.md'
* 'R/'
* 'README.md'
* 'tests/'
* 'vignettes/'
Is it ok to commit them?
1: Not now
2: Yup
3: Negative
Selection: 2
✔ Adding files
✔ Commit with message 'Initial commit'
● A restart of RStudio is required to activate the Git pane
Restart now?
1: No
2: No way
3: Yes
Selection: 1
\end{verbatim}
If you are already familiar with git this should make sense to you. If not, what
this step does (in summary) is set up git and save the current state of the
package. If you chose to restart RStudio you will see a new git pane that can
be used to complete most of the following steps by pointing and clicking.
\hypertarget{connect-to-github}{%
\subsection{Connect to GitHub}\label{connect-to-github}}
The next step is to link the directory on your computer with a repository on
GitHub. First we need to create a special access token. The following command
will open a GitHub website.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_github}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Opening URL 'https://github.com/settings/tokens/new?scopes=repo,gist&description=R:GITHUB_PAT'
● Call `usethis::edit_r_environ()` to open '.Renviron'.
● Store your PAT with a line like:
GITHUB_PAT=xxxyyyzzz
[Copied to clipboard]
● Make sure '.Renviron' ends with a newline!
\end{verbatim}
Click the ``Generate token'' button on the webpage and then copy the code on the
next page. As it says you can only view this once so be careful to copy it now
and don't close the page until you are finished. When you have the code follow
the rest of the instructions from \textbf{usethis}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{edit_r_environ}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
● Modify 'C:/Users/Luke/Documents/.Renviron'
● Restart R for changes to take effect
\end{verbatim}
Edit the file to look something like this (with your code).
\begin{verbatim}
GITHUB_PAT=YOUR_CODE_GOES_HERE
\end{verbatim}
Save it then restart R by clicking the \emph{Session} menu and selecting \emph{Restart R}
(or using \textbf{Ctrl+Shift+F10}).
\begin{verbatim}
Restarting R session...
\end{verbatim}
Copying that code and adding it to your \texttt{.Renviron} gives R on the computer you
are using access to your GitHub repositories. If you move to a new computer you
will need to do this again. Now that we have access to GitHub we can create a
repository for our package.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_github}\NormalTok{()}
\end{Highlighting}
\end{Shaded}
\begin{verbatim}
✔ Setting active project to 'C:/Users/Luke/Desktop/mypkg'
✔ Checking that current branch is 'master'
Which git protocol to use? (enter 0 to exit)
1: ssh <-- presumes that you have set up ssh keys
2: https <-- choose this if you don't have ssh keys (or don't know if you do)
Selection: 2
● Tip: To suppress this menu in future, put
`options(usethis.protocol = "https")`
in your script or in a user- or project-level startup file, '.Rprofile'.
Call `usethis::edit_r_profile()` to open it for editing.
● Check title and description
Name: mypkg
Description: My Personal Package
Are title and description ok?
1: For sure
2: No way
3: Negative
Selection: 1
✔ Creating GitHub repository
✔ Setting remote 'origin' to 'https://github.com/lazappi/mypkg.git'
✔ Adding GitHub links to DESCRIPTION
✔ Setting URL field in DESCRIPTION to 'https://github.com/lazappi/mypkg'
✔ Setting BugReports field in DESCRIPTION to 'https://github.com/lazappi/mypkg/issues'
✔ Pushing 'master' branch to GitHub and setting remote tracking branch
✔ Opening URL 'https://github.com/lazappi/mypkg'
\end{verbatim}
Respond to the prompts from \textbf{usethis} about the method for connecting to
GitHub and the title and description for the repository. When everything is done
a website should open with your new package repository. Another thing this
function does is add some extra information to the \texttt{DESCRIPTION} file that lets
people know where to find your new website.
\begin{verbatim}
URL: https://github.com/user/mypkg
BugReports: https://github.com/user/mypkg/issues
\end{verbatim}
\hypertarget{installing-from-github}{%
\subsection{Installing from GitHub}\label{installing-from-github}}
Now that your package is on the internet anyone can install it using the
\texttt{install\_github()} function in the \textbf{remotes} package (which you should
already have installed as a dependency of \textbf{devtools}). All you need to give
it is the name of the user and repository.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"user/mypkg"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
If you are familiar with git you can install from a particular branch, tag or
commit by adding that after \texttt{@}.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"user/mypkg@branch_name"}\NormalTok{)}
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"user/mypkg@tag_id"}\NormalTok{)}
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"user/mypkg@commit_sha"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\hypertarget{updating-github}{%
\subsection{Updating GitHub}\label{updating-github}}
After you make improvements to your package you will probably want to update the
version that is online. To do this you need to learn a bit more about git. Jenny
Bryan's ``Happy Git with R'' tutorial (\url{https://happygitwithr.com}) is a great place
to get started but the (very) quick steps in RStudio are:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Open the \emph{Git} pane (it will be with \emph{Environment}, \emph{History} etc.)
\item
Click the check box next to each of the listed files
\item
Click the \emph{Commit} button
\item
Enter a message in the window that opens and click the \emph{Commit} button
\item
Click the \emph{Push} button with the green up arrow
\end{enumerate}
Refresh the GitHub repository website and you should see the changes you have
made.
\hypertarget{advanced-topics}{%
\chapter{Advanced topics}\label{advanced-topics}}
In this workshop we have worked through the basic steps required to create an
R package. In this section we introduce some of the advanced topics that may be
useful for you as you develop more complex packages. These are included here to
give you an idea of what is possible for you to consider when planning a
package. Most of these topics are covered in Hadley Wickham's ``Advanced R'' book
(\url{https://adv-r.hadley.nz}) but there are many other guides and tutorials
available.
\hypertarget{including-datasets}{%
\section{Including datasets}\label{including-datasets}}
It can be useful to include small datasets in your R package which can be used
for testing and examples in your vignettes. You may also have reference data
that is required for your package to function. If you already have the data as
an object in R it is easy to add it to your package with \texttt{usethis::use\_data()}.
The \texttt{usethis::use\_data\_raw()} function can be used to write a script that reads
a raw data file, manipulates it in some way and adds it to the package with
\texttt{usethis::use\_data()}. This is useful for keeping a record of what you have done
to the data and updating the processing or dataset if necessary. See the
``Data'' section of ``R packages'' (\url{http://r-pkgs.had.co.nz/data.html}) for more
details about including data in your package.
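
As a rough sketch, a data preparation script created by
\texttt{usethis::use\_data\_raw()} might end up looking something like this (the file and
dataset names here are made up):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# data-raw/colour_sets.R (hypothetical example)}
\NormalTok{colour_sets <- }\KeywordTok{read.csv}\NormalTok{(}\StringTok{"data-raw/colour_sets.csv"}\NormalTok{, }\DataTypeTok{stringsAsFactors =} \OtherTok{FALSE}\NormalTok{)}
\CommentTok{# Keep only the complete records before storing the data in the package}
\NormalTok{colour_sets <- colour_sets[}\KeywordTok{complete.cases}\NormalTok{(colour_sets), ]}
\CommentTok{# Saves the processed object to data/colour_sets.rda}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_data}\NormalTok{(colour_sets, }\DataTypeTok{overwrite =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}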
\hypertarget{designing-objects}{%
\section{Designing objects}\label{designing-objects}}
If you work with data types that don't easily fit into a table or matrix you may
find it convenient to design specific objects to hold them. Objects can also
be useful for holding the output of functions such as those that fit models or
perform tests. R has several different object systems. The S3 system is the
simplest and probably the most commonly used. Packages in the Bioconductor
ecosystem make use of the more formal S4 system. If you want to learn more about
designing R objects a good place to get started is the ``Object-oriented
programming'' chapter of Hadley Wickham's ``Advanced R'' book
(\url{https://adv-r.hadley.nz/oo.html}). Other useful guides include
Nicholas Tierney's ``A Simple Guide to S3 Methods''
(\url{https://arxiv.org/abs/1608.07161}) and Stuart Lee's ``S4: a short guide for the
perplexed'' (\url{https://stuartlee.org/post/content/post/2019-07-09-s4-a-short-guide-for-perplexed/}).
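
To give a flavour of the S3 system, here is a minimal sketch (not code from our
package) of a constructor and a matching \texttt{print()} method for a made-up
\texttt{shades} class:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Constructor for a made-up 'shades' class, for illustration only:}
\CommentTok{# store the colours in a list and tag it with a class attribute}
\NormalTok{new_shades <- }\ControlFlowTok{function}\NormalTok{(colours) \{}
\NormalTok{  }\KeywordTok{structure}\NormalTok{(}\KeywordTok{list}\NormalTok{(}\DataTypeTok{colours =}\NormalTok{ colours), }\DataTypeTok{class =} \StringTok{"shades"}\NormalTok{)}
\NormalTok{\}}
\CommentTok{# Method: print() dispatches here for any object with class 'shades'}
\NormalTok{print.shades <- }\ControlFlowTok{function}\NormalTok{(x, ...) \{}
\NormalTok{  }\KeywordTok{cat}\NormalTok{(}\StringTok{"A shades object with"}\NormalTok{, }\KeywordTok{length}\NormalTok{(x}\OperatorTok{$}\NormalTok{colours), }\StringTok{"colours"}\NormalTok{, }\DataTypeTok{fill =} \OtherTok{TRUE}\NormalTok{)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

Because the class is just an attribute, S3 is very lightweight; the trade-off is that
nothing stops someone from building an invalid object by hand, which is one reason the
more formal S4 system is used in the Bioconductor ecosystem.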
\hypertarget{integrating-other-languages}{%
\section{Integrating other languages}\label{integrating-other-languages}}
If software for completing a task already exists but is in another language it
might make sense to write an R package that provides an interface to the
existing implementation rather than reimplementing it from scratch. Here are some
of the R packages that help you integrate code from other languages:
\begin{itemize}
\tightlist
\item
\textbf{Rcpp} (C++) \url{http://www.rcpp.org/}
\item
\textbf{reticulate} (Python) \url{https://rstudio.github.io/reticulate/}
\item
\textbf{RStan} (Stan) \url{https://mc-stan.org/users/interfaces/rstan}
\item
\textbf{rJava} (Java) \url{http://www.rforge.net/rJava/}
\end{itemize}
Another common reason to include code from another language is to improve
performance. While it is often possible to make code faster by reconsidering
how things are done within R sometimes there is no alternative. The \textbf{Rcpp}
package makes it very easy to write snippets of C++ code that can be called from R.
Depending on what you are doing moving even very small bits of code to C++ can
have big impacts on performance. Using \textbf{Rcpp} can also provide access to
existing C libraries for specialised tasks. The ``Rewriting R code in C++''
section of ``Advanced R'' (\url{https://adv-r.hadley.nz/rcpp.html}) explains when and
how to use \textbf{Rcpp}. You can find other resources including a gallery of
examples on the official \textbf{Rcpp} website (\url{http://www.rcpp.org/}).
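
As a tiny illustration of how little code is needed, \texttt{Rcpp::cppFunction()}
compiles a C++ function and makes it callable from an R session (in a package the C++
code would normally live in \texttt{src/} instead; the function here is just a toy):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# A toy example: compile a small C++ function on the fly and call it from R}
\NormalTok{Rcpp}\OperatorTok{::}\KeywordTok{cppFunction}\NormalTok{(}\StringTok{"int add_cpp(int x, int y) \{ return x + y; \}"}\NormalTok{)}
\KeywordTok{add_cpp}\NormalTok{(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{)}
\CommentTok{#> [1] 3}
\end{Highlighting}
\end{Shaded}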
\hypertarget{metaprogramming}{%
\section{Metaprogramming}\label{metaprogramming}}
Metaprogramming refers to code that reads and modifies other code. This may
seem like an obscure topic but it is important in R because of its
relationship to non-standard evaluation (another fairly obscure topic). You
may not have heard of non-standard evaluation before but it is likely you have
used it. This is what happens whenever you provide a function with a bare name
instead of a string or a variable. Metaprogramming becomes particularly
relevant to package development if you want to have functions that make use of
packages in the Tidyverse such as \textbf{dplyr}, \textbf{tidyr} and \textbf{purrr}. The
``Metaprogramming'' chapter of ``Advanced R''
(\url{https://adv-r.hadley.nz/metaprogramming.html}) covers the topic in more detail
and the ``Tidy evaluation'' book (\url{https://tidyeval.tidyverse.org/}) may be useful
for learning how to write functions that use Tidyverse packages.
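
For example, recent versions of \textbf{rlang} provide the embrace operator
\texttt{\{\{ \}\}}, which lets a function pass a bare column name through to
\textbf{dplyr}. A minimal sketch (the helper name is made up and \textbf{dplyr} is
assumed to be available):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Illustrative helper: keep the rows where a user-chosen column is above a threshold}
\NormalTok{filter_above <- }\ControlFlowTok{function}\NormalTok{(data, column, threshold) \{}
\NormalTok{  dplyr}\OperatorTok{::}\KeywordTok{filter}\NormalTok{(data, \{\{ column \}\} }\OperatorTok{>}\NormalTok{ threshold)}
\NormalTok{\}}
\KeywordTok{filter_above}\NormalTok{(mtcars, mpg, }\DecValTok{30}\NormalTok{)}
\end{Highlighting}
\end{Shaded}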
\hypertarget{good-practices-and-advice}{%
\chapter{Good practices and advice}\label{good-practices-and-advice}}
This section contains some general advice about package development. It may be
opinionated in places so decide which things work for you.
\hypertarget{design-advice}{%
\section{Design advice}\label{design-advice}}
\begin{itemize}
\tightlist
\item
\textbf{Compatibility} - Make your package compatible with how your users already
  work. If there are data structures that are commonly used, write your functions
to work with those rather than having to convert between formats.
\item
\textbf{Ambition} - It's easy to get carried away with trying to make a package
that does everything but try to start with whatever is most important/novel.
This will give you a useful package as quickly and easily as possible and
make it easier to maintain in the long run. You can always add more
functionality later if you need to.
\item
  \textbf{Messages} - Try to make your errors and messages as clear as possible and
  offer advice about how to fix them. This can often mean writing a check
yourself rather than relying on a default message from somewhere else.
\item
  \textbf{Check input} - If there are restrictions on the values parameters can take,
  check them at the beginning of your functions (see the sketch after this list).
  This catches problems as early as possible and means you can assume values are
  correct in the rest of the function.
\item
\textbf{Useability} - Spend time to make your package as easy to use as possible.
Users won't know that your code is faster or produces better results if they
can't understand how to use your functions. This includes good documentation
but also things like having good default values for parameters.
\item
\textbf{Naming} - Be obvious and consistent in how you name functions and
parameters. This makes it easier for users to guess what they are without
looking at the documentation. One option is to have a consistent prefix to
function names (like \textbf{usethis} does) which makes it obvious which package
they come from and avoids clashes with names in other packages.
\end{itemize}
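
To make the points about messages and input checks concrete, a function might start
something like this generic sketch (\texttt{my\_function} is a made-up name, not code
from our package):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# A generic sketch of checking input at the start of a function}
\NormalTok{my_function <- }\ControlFlowTok{function}\NormalTok{(n) \{}
\NormalTok{  }\ControlFlowTok{if}\NormalTok{ (}\OperatorTok{!}\KeywordTok{is.numeric}\NormalTok{(n) }\OperatorTok{||}\NormalTok{ }\KeywordTok{length}\NormalTok{(n) }\OperatorTok{!=}\NormalTok{ }\DecValTok{1}\NormalTok{ }\OperatorTok{||}\NormalTok{ n }\OperatorTok{<}\NormalTok{ }\DecValTok{1}\NormalTok{) \{}
\NormalTok{    }\KeywordTok{stop}\NormalTok{(}\StringTok{"n must be a single number greater than 0, not "}\NormalTok{, n)}
\NormalTok{  \}}
\NormalTok{  }\CommentTok{# ...the rest of the function can now assume n is valid...}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}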
\hypertarget{coding-style}{%
\section{Coding style}\label{coding-style}}
Unlike some other languages R is very flexible in how your code can be
formatted. Whatever coding style you prefer it is important to be consistent.
This makes your code easier to read and makes it easier for other people to
contribute to it. It is useful to document what coding style you are using. The
easiest way to do this is to adopt an existing style guide such as those created
for the Tidyverse (\url{https://style.tidyverse.org/}) or Google
(\url{https://google.github.io/styleguide/Rguide.html}) or this one by Jean Fan
(\url{https://jef.works/R-style-guide/}). If you are interested in which styles people
actually use check out this analysis presented at useR! 2019
\url{https://github.com/chainsawriot/rstyle}. When contributing to other people's
projects it is important (and polite) to conform to their coding style rather
than trying to impose your own.
If you want to make sure the style of your package is consistent there are some
packages that can help you do that. The \textbf{lintr} package will flag any style
issues (and a range of other programming issues) while the \textbf{styler} package
can be used to reformat code files. The \textbf{goodpractice} package can also be
used to analyse your package and offer advice. If you are more worried about
problems with the text parts of your package (documentation and vignettes) then
you can activate spell checking with \texttt{usethis::use\_spell\_check()}.
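
For example, you might run something like the following from time to time (a sketch;
check each package's documentation for the exact behaviour and options):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Reformat the package code according to the tidyverse style guide}
\NormalTok{styler}\OperatorTok{::}\KeywordTok{style_pkg}\NormalTok{()}
\CommentTok{# Flag any remaining style or programming issues}
\NormalTok{lintr}\OperatorTok{::}\KeywordTok{lint_package}\NormalTok{()}
\CommentTok{# Add spell checking of the documentation to the package checks}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_spell_check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}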
\hypertarget{version-control}{%
\section{Version control}\label{version-control}}
There are three main ways to keep track of changes to your package:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
Don't keep track
\item
Save files with different versions
\item
Use a version control system (VCS)
\end{enumerate}
While it can be challenging at first to get your head around git (or another
VCS) it is highly recommended and worth the effort, both for packages and your
other programming projects. Here are some of the big benefits of having
your package in git:
\begin{itemize}
\tightlist
\item
You have a complete record of every change that has been made to the package
\item
It is easy to go back if anything breaks or you need an old version for
something
\item
Because of this you don't have to worry about breaking things and it is
easier to experiment
\item
Much easier to merge changes from collaborators who might want to contribute
to your package
\item
Access to a variety of platforms and services built around this technology,
for example installing your package, hosting a package website and continuous
integration (see below)
\end{itemize}
As mentioned earlier a great way to get started with git for R projects is
Jenny Bryan's ``Happy Git with R'' (\url{https://happygitwithr.com}) but there are
many more tutorials and workshops available.
\hypertarget{continuous-integration}{%
\section{Continuous integration}\label{continuous-integration}}
During the workshop we showed you how to run checks and tests on your package
but this will only tell you if they pass on your particular computer and
platform. Continuous integration services can be used to automatically check
your package on multiple platforms whenever you make a significant change to
your package. They can be linked to your repository on code sharing websites
like GitHub and whenever you push a new version they will run the checks for
you. This is similar to what CRAN and Bioconductor do for their packages but
by doing it yourself you can be more confident that you won't run into issues
when you submit your package to them. If your package isn't on one of the major
repositories it helps give your users confidence that it will be reliable.
Some continuous integration services are:
\begin{itemize}
\tightlist
\item
Travis CI (\url{https://travis-ci.com/})
\item
AppVeyor (\url{https://www.appveyor.com/})
\item
CircleCI (\url{https://circleci.com/})
\end{itemize}
Each of these offers a free tier and it is easy to set them up for your package
using the appropriate \texttt{usethis::use\_CI\_SERVICE()} function.
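
At the time of writing, the corresponding helpers looked something like the sketch
below; the services supported by \textbf{usethis} change over time, so check its
documentation for the current function names.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Function names may have changed; check the usethis documentation}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_travis}\NormalTok{()}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_appveyor}\NormalTok{()}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_circleci}\NormalTok{()}
\end{Highlighting}
\end{Shaded}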
\hypertarget{resources}{%
\chapter*{Resources}\label{resources}}
\addcontentsline{toc}{chapter}{Resources}
This section has links to additional resources on package development, many of
which were used in developing these materials.
\hypertarget{official-guides}{%
\section*{Official guides}\label{official-guides}}
\addcontentsline{toc}{section}{Official guides}
\hypertarget{cran-1}{%
\subsection*{CRAN}\label{cran-1}}
\addcontentsline{toc}{subsection}{CRAN}
\begin{itemize}
\tightlist
\item
\textbf{Writing R extensions}
\url{https://cran.r-project.org/doc/manuals/R-exts.html}
\end{itemize}
\hypertarget{bioconductor-1}{%
\subsection*{Bioconductor}\label{bioconductor-1}}
\addcontentsline{toc}{subsection}{Bioconductor}
\begin{itemize}
\tightlist
\item
\textbf{Bioconductor Package Guidelines}
\url{https://www.bioconductor.org/developers/package-guidelines/}
\item
\textbf{Building Packages for Bioconductor}
\url{https://www.bioconductor.org/developers/how-to/buildingPackagesForBioc/}
\item
\textbf{Bioconductor Package Submission}
\url{https://bioconductor.org/developers/package-submission/}
\end{itemize}
\hypertarget{ropensci-1}{%
\subsection*{rOpenSci}\label{ropensci-1}}
\addcontentsline{toc}{subsection}{rOpenSci}
\begin{itemize}
\tightlist
\item
\textbf{rOpenSci Packages: Development, Maintenance, and Peer Review}
\url{https://devguide.ropensci.org/}
\end{itemize}
\hypertarget{rstudio}{%
\subsection*{RStudio}\label{rstudio}}
\addcontentsline{toc}{subsection}{RStudio}
\begin{itemize}
\tightlist
\item
\textbf{Developing Packages with RStudio}
\url{https://support.rstudio.com/hc/en-us/articles/200486488-Developing-Packages-with-RStudio}
\item
\textbf{Building, Testing and Distributing Packages}
\url{https://support.rstudio.com/hc/en-us/articles/200486508-Building-Testing-and-Distributing-Packages}
\item
\textbf{Writing Package Documentation}
\url{https://support.rstudio.com/hc/en-us/articles/200532317-Writing-Package-Documentation}
\end{itemize}
\hypertarget{books}{%
\section*{Books}\label{books}}
\addcontentsline{toc}{section}{Books}
\begin{itemize}
\tightlist
\item
\textbf{R Packages} (Hadley Wickham)
\url{http://r-pkgs.had.co.nz/}
\item
\textbf{Advanced R} (Hadley Wickham)
\url{https://adv-r.hadley.nz/}
\item
\textbf{Advanced R Course} (Florian Privé)
\url{https://privefl.github.io/advr38book/}
\end{itemize}
\hypertarget{tutorials}{%
\section*{Tutorials}\label{tutorials}}
\addcontentsline{toc}{section}{Tutorials}
\begin{itemize}
\tightlist
\item
\textbf{Writing an R package from scratch} (Hilary Parker)
\url{https://hilaryparker.com/2014/04/29/writing-an-r-package-from-scratch/}
\item
\textbf{Writing an R package from scratch (Updated)} (Thomas Westlake)
\url{https://r-mageddon.netlify.com/post/writing-an-r-package-from-scratch/}
\item
\textbf{usethis workflow for package development} (Emil Hvitfeldt)
\url{https://www.hvitfeldt.me/blog/usethis-workflow-for-package-development/}
\item
\textbf{R package primer} (Karl Broman)
\url{https://kbroman.org/pkg_primer/}
\item
\textbf{R Package Development Pictorial} (Matthew J Denny)
\url{http://www.mjdenny.com/R_Package_Pictorial.html}
\item
\textbf{Building R Packages with Devtools} (Jiddu Alexander)
\url{http://www.jiddualexander.com/blog/r-package-building/}
\item
\textbf{Developing R packages} (Jeff Leek)
\url{https://github.com/jtleek/rpackages}
\item
\textbf{R Package Tutorial} (Colautti Lab)
\url{https://colauttilab.github.io/RCrashCourse/Package_tutorial.html}
\item
\textbf{Instructions for creating your own R package} (MIT)
\url{http://web.mit.edu/insong/www/pdf/rpackage_instructions.pdf}
\end{itemize}
\hypertarget{workshops-and-courses}{%
\section*{Workshops and courses}\label{workshops-and-courses}}
\addcontentsline{toc}{section}{Workshops and courses}
\begin{itemize}
\tightlist
\item
\textbf{R Forwards Package Workshop} (Chicago, February 23, 2019)
\url{https://github.com/forwards/workshops/tree/master/Chicago2019}
\item
\textbf{Write your own R package} (UBC STAT 545)
\url{http://stat545.com/packages00_index.html}
\end{itemize}
\hypertarget{blogs}{%
\section*{Blogs}\label{blogs}}
\addcontentsline{toc}{section}{Blogs}
\begin{itemize}
\tightlist
\item
\textbf{How to develop good R packages (for open science)} (Maëlle Salmon)
\url{https://masalmon.eu/2017/12/11/goodrpackages/}
\end{itemize}
\hypertarget{style}{%
\section*{Style}\label{style}}
\addcontentsline{toc}{section}{Style}
\begin{itemize}
\tightlist
\item
\textbf{Tidyverse}
\url{https://style.tidyverse.org/}
\item
\textbf{Google}
\url{https://google.github.io/styleguide/Rguide.html}
\item
\textbf{Jean Fan}
\url{https://jef.works/R-style-guide/}
\item
\textbf{A Computational Analysis of the Dynamics of R Style Based on 94 Million Lines of Code from All CRAN Packages in the Past 20 Years.} (Yen, C.Y., Chang, M.H.W., Chan, C.H.)
\url{https://github.com/chainsawriot/rstyle}
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7468592006,
"avg_line_length": 39.8700322234,
"ext": "tex",
"hexsha": "3257895881083f24d5f61baffe12852673635845",
"lang": "TeX",
"max_forks_count": 8,
"max_forks_repo_forks_event_max_datetime": "2022-01-29T05:32:05.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-12T19:27:23.000Z",
"max_forks_repo_head_hexsha": "282ccb8fd7a6e1199219b4aed111994c4fd73752",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "FrieseWoudloper/workshop-R-packages",
"max_forks_repo_path": "docs/r-pkg-dev.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "282ccb8fd7a6e1199219b4aed111994c4fd73752",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "FrieseWoudloper/workshop-R-packages",
"max_issues_repo_path": "docs/r-pkg-dev.tex",
"max_line_length": 271,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "282ccb8fd7a6e1199219b4aed111994c4fd73752",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "FrieseWoudloper/workshop-R-packages",
"max_stars_repo_path": "docs/r-pkg-dev.tex",
"max_stars_repo_stars_event_max_datetime": "2019-08-12T19:29:29.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-12T17:27:10.000Z",
"num_tokens": 30164,
"size": 111357
} |
\problemname{Counting Greedily Increasing Supersequences}
Given a permutation $A = (a_1, a_2, \dots, a_N)$ of the integers $1, 2, \dots, N$, we define the \emph{greedily increasing subsequence} (GIS) in the following way.
Let $g_1 = a_1$. For every $i > 1$, let $g_i$ be the leftmost integer in $A$ that is strictly larger than $g_{i-1}$.
If for a given $i$ there is no such integer, we say that the GIS of the sequence is the sequence $(g_1, g_2, \dots, g_{i - 1})$.
For example, consider the permutation $(2, 3, 1, 5, 4, 7, 6)$.
First, we have $g_1 = 2$.
The leftmost integer larger than $2$ is $3$, so $g_2 = 3$.
The leftmost integer larger than $3$ is $5$ ($1$ is too small), so $g_3 = 5$.
Finally, $g_4 = 7$.
Thus, the GIS of $(2, 3, 1, 5, 4, 7, 6)$ is $(2, 3, 5, 7)$.
Given a sequence $G = (g_1, g_2, \dots, g_L)$, how many permutations $A$ of the integers $1, 2, \dots, N$ have $G$ as its GIS?
\section*{Input}
The first line of input contains the integers $1 \le N \le 10^6$, the number of elements of the permutation $A$,
and $1 \le L \le 10^6$, the length of the sequence $G$.
The next line contains $L$ positive integers between $1$ and $N$, the elements $g_1, \dots, g_L$ of the sequence $G$.
\section*{Output}
Output a single integer: the number of $N$-element permutations having the given sequence as its GIS.
Since this number may be large, output it modulo the prime number $10^9 + 7$.
| {
"alphanum_fraction": 0.6730632552,
"avg_line_length": 56.28,
"ext": "tex",
"hexsha": "c7ebef41c382af84fd1b4527dd6eaa991858631c",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e9d5e3d63a79c2191ca55f48438344d8b7719d90",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "Kodsport/nova-challenge-2018",
"max_forks_repo_path": "countinggis/problem_statement/problem.en.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e9d5e3d63a79c2191ca55f48438344d8b7719d90",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "Kodsport/nova-challenge-2018",
"max_issues_repo_path": "countinggis/problem_statement/problem.en.tex",
"max_line_length": 163,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "e9d5e3d63a79c2191ca55f48438344d8b7719d90",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "Kodsport/nova-challenge-2018",
"max_stars_repo_path": "countinggis/problem_statement/problem.en.tex",
"max_stars_repo_stars_event_max_datetime": "2019-09-13T13:38:16.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-09-13T13:38:16.000Z",
"num_tokens": 499,
"size": 1407
} |
% \chapter{Deployment view (UML Deployment diagram)}\label{ch:deployment}
\begin{landscape}
\section{Context diagram}
The context diagram for the deployment view is displayed in figure \ref{fig:depl_context}. \\
\centering
\vspace*{\fill}
\begin{figure}[!htp]
\centering
\includegraphics[width=\textwidth]{images/deployment-context}
\caption{Context diagram for the deployment view.}\label{fig:depl_context}
\end{figure}
\vfill
\end{landscape}
\begin{landscape}
\section{Primary diagram}
The primary diagram for the deployment view is displayed in figure \ref{fig:depl_primary}.
\centering
\vspace*{\fill}
\begin{figure}[!htp]
\centering
\includegraphics[width=\textwidth]{images/deployment-primary}
\caption{Primary diagram for the deployment view.}\label{fig:depl_primary}
\end{figure}
\vfill
\end{landscape}
| {
"alphanum_fraction": 0.6673640167,
"avg_line_length": 26.5555555556,
"ext": "tex",
"hexsha": "9da17e288b052f15a61b1b2ff142b0558b08bd34",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "56f14be5801eb7c4a8dcb53f552ca36ffbc13066",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "arminnh/software-architecture-assignment",
"max_forks_repo_path": "part-2b/report/deployment-view.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "56f14be5801eb7c4a8dcb53f552ca36ffbc13066",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "arminnh/software-architecture-assignment",
"max_issues_repo_path": "part-2b/report/deployment-view.tex",
"max_line_length": 97,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "56f14be5801eb7c4a8dcb53f552ca36ffbc13066",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "arminnh/software-architecture-assignment",
"max_stars_repo_path": "part-2b/report/deployment-view.tex",
"max_stars_repo_stars_event_max_datetime": "2017-12-16T08:12:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-12-16T08:12:00.000Z",
"num_tokens": 249,
"size": 956
} |
\chapter*{About}
\addcontentsline{toc}{chapter}{About}
\thispagestyle{empty}
Hello, I'm M Ahsan Al Mahir, a math olympiad contestant from Bangladesh. I have been with
math olympiads since 2016. And this is my journal of problem solving that I have been
keeping since 2017.
At the moment of compiling, this journal has \textbf{682} randomly ordered problems,
\textbf{210} theorems and lemmas, and \textbf{174} figures, mostly geometric, drawn in
geogebra and inkscape.
My motivation for keeping the problems I encountered was very significant for me. When I
got serious about math olympiad in 2017, I was really bad at combinatorics. It was a
completely wild topic; I couldn't seem to find any idea on how to approach any combi
problem whatsoever. So what I decided was to keep a list of general tricks that I would
look through whenever I would try a combinatorial problem. As time went by, that list grew
longer, and so I had to be serious about keeping it organized.
And that's how this journal came to existence. I tried to organize combi problems into the
categories I found were most intuitive, but that wasn't very successful, as the division
between the topics of cominatorics isn't very clear. So expect to find many miscategorized
problems. Also I added anything I found interesting related to olympiad math in this
journal. But as it happens, I didn't follow through with most of those topics, so you
might find a few really fancy beginnings of topics that never made their way to the
second page.
Also there are a LOT of spelling and grammatical mistakes, as most of the entries I made
here were written right after solving (or in most cases, after failing to solve and reading the
solution), I never went back to proofread the comments I left here. So expect a lot of
nonsense talking and typos. Apologies for those :3
The source files for this journal can be found in my github repository:
\url{https://github.com/AnglyPascal/MO-Problem-Journal}. To compile these files, you will
also need the \texttt{sty} and \texttt{font} files at
\url{https://github.com/AnglyPascal/sty}
\subsection*{How to use this journal}
This journal is organized in a chapter/section/subsection manner. Each subsection starts
with a list of useful theorems and lemmas related to that topic, followed by problems and
solutions or hints. Besides the essential theorems and lemmas, I have also added links to
important handouts at the beginning of each section. So check them out if you find the
topics interesting. There are also some boxes titled ``\textbf{Stuck? Try These}'' at the
beginning of the sections, that contain ``rules of thumb'' ideas to keep in mind while
approaching a problem related to that section.
\thispagestyle{empty}
At the end of the journal, there are four indices: Problems, Theorems, Definitios and
Strategies, that alphabetically list all the items in here. You can use them to quickly
search for a problem or a theorem in this journal.
Most of the problems, theorems and lemmas have links to the AoPS page, Wiki page or
whatever source I learned them from, linked to their titles. Hyperlinks are colored using
\textcolor{urlC}{\textbf{teal green}}. The links between different part of this file are
colored using \textcolor{linkC}{\textbf{teal}}.
There might be some cases where I missed to link the sources, or couldn't find any
sources. If you notice something like that, please create an issue entry at the github
page, or email me at \url{[email protected]}.
There aren't that many full solutions in this journal, but I listed at least some hints
for most of the problems (though I can't vouch for their usefulness). But you can and
definitely should always visit the AoPS page to look at others' solutions.
I intended this journal to be just a list of tricks when I began working on it. Over the
years, this file has grown in size and has become massive. But don't mistake it for a book
or something. Things are all over the place, and not nearly as helpful as an actual
book. But there are interesting things hidden beneath the unorganized texts, and there are
a lot of problems at one place. So it is advised to use it as an extra large problem set
and a resources file rather than a book.
\vspace{2em}
\signature\\
\today
\newpage
\section*{On ``\texttt{familiarity}'' \\ or, How to avoid ``going down the
\texttt{Math Rabbit Hole}''?}
\thispagestyle{empty}
An excerpt from the
\href{https://math.stackexchange.com/questions/617625/on-familiarity-or-how-to-avoid-going-down-the-math-rabbit-hole}{math.stackexchange
post} of the same title.
Anyone trying to learn mathematics on his/her own has had the experience of ``going down
the Math Rabbit Hole.''
For example, suppose you come across the novel term vector space, and want to learn more
about it. You look up various definitions, and they all refer to something called a field.
So now you're off to learn what a field is, but it's the same story all over again: all
the definitions you find refer to something called a group. Off to learn about what a
group is. Ad infinitum. That's what I'm calling here ``to go down the Math Rabbit Hole.''
Imagine some nice, helpful fellow came along, and made a big graph of every math concept
ever, where each concept is one node and related concepts are connected by edges. Now you
can take a copy of this graph, and color every node green based on whether you ``know''
that concept (unknowns can be grey).
How to define ``know''? In this case, when somebody mentions that concept while talking
about something, do you immediately feel confused and get the urge to look the concept up?
If no, then you know it (funnily enough, you may be deluding yourself into thinking you
know something that you completely misunderstand, and it would be classed as ``knowing''
based on this rule - but that's fine and I'll explain why in a bit). For purposes of
determining whether you ``know'' it, try to assume that the particular thing the person is
talking about isn't some intricate argument that hinges on obscure details of the concept
or bizarre interpretations - it's just mentioned matter-of-factly, as a tangential remark.
When you are studying a topic, you are basically picking one grey node and trying to color
it green. But you may discover that to do this, you must color some adjacent grey nodes
first. So the moment you discover a prerequisite node, you go to color it right away, and
put your original topic on hold. But this node also has prerequisites, so you put it on
hold, and... What you are doing is known as a depth first search. It's natural for it to
feel like a rabbit hole - you are trying to go as deep as possible. The hope is that
sooner or later you will run into a wall of greens, which is when your long, arduous
search will have born fruit, and you will get to feel that unique rush of climbing back up
the stack with your little jewel of recursion terminating return value.
Then you get back to coloring your original node and find out about the other
prerequisite, so now you can do it all over again.
DFS is suited for some applications, but it is bad for others. If your goal is to color
the whole graph (ie. learn all of math), any strategy will have you visit the same number
of nodes, so it doesn't matter as much. But if you are not seriously attempting to learn
everything right now, DFS is not the best choice.
\begin{figure}[H] \centering \includegraphics[width=.8\textwidth]{Pics/dfs.png}
\captionsetup{labelformat=empty} \caption{\url{https://xkcd.com/761/}} \end{figure}
So, the solution to your problem is straightforward - use a more appropriate search
algorithm!
\thispagestyle{empty}
Immediately obvious is breadth-first search. This means, when reading an article (or page,
or book chapter), don't rush off to look up every new term as soon as you see it. Circle
it or make a note of it on a separate paper, but force yourself to finish your text even
if its completely incomprehensible to you without knowing the new term. You will now have
a list of prerequisite nodes, and can deal with them in a more organized manner.
Compared to your DFS, this already makes it much easier to avoid straying too far from
your original area of interest. It also has another benefit which is not common in actual
graph problems: Often in math, and in general, understanding is cooperative. If you have a
concept A which has prerequisite concept B and C, you may find that B is very difficult to
understand (it leads down a deep rabbit hole), but only if you don't yet know the very
easy topic C, which if you do, make B very easy to ``get'' because you quickly figure out
the salient and relevant points (or it may be turn out that knowing either B or C is
sufficient to learn A). In this case, you really don't want to have a learning strategy
which will not make sure you do C before B!
BFS not only allows you to exploit cooperativities, but it also allows you to manage your
time better. After your first pass, let's say you ended up with a list of 30 topics you
need to learn first. They won't all be equally hard. Maybe 10 will take you 5 minutes of
skimming wikipedia to figure out. Maybe another 10 are so simple that the first Google
Image diagram explains everything. Then there will be 1 or 2 which will take days or even
months of work. You don't want to get tripped up on the big ones while you have the small
ones to take care of. After all, it may turn out that the big topic is not essential, but
the small topic is. If that's the case, you would feel very silly if you tried to tackle
the big topic first! But if the small one proves useless, you haven't really lost much
energy or time.
\thispagestyle{empty}
Once you're doing BFS, you might as well benefit from the other, very nice and clever
twists on it, such as Dijkstra or A*. When you have the list of topics, can you order them
by how promising they seem? Chances are you can, and chances are, your intuition will be
right. Another thing to do - since ultimately, your aim is to link up with some green
nodes, why not try to prioritize topics which seem like they would be getting closer to
things you do know? The beauty of A* is that these heuristics don't even have to be very
correct - even ``wrong'' or ``unrealistic'' heuristics may end up making your search
faster.
| {
"alphanum_fraction": 0.7800793728,
"avg_line_length": 58.3672316384,
"ext": "tex",
"hexsha": "8d0269194813b15f0481a53c974c087c9e1191de",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z",
"max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "AnglyPascal/BCS_Question_Bank",
"max_forks_repo_path": "about.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "AnglyPascal/BCS_Question_Bank",
"max_issues_repo_path": "about.tex",
"max_line_length": 136,
"max_stars_count": 48,
"max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank",
"max_stars_repo_path": "about.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z",
"num_tokens": 2488,
"size": 10331
} |
\subsubsection{\stid{5.05} Argo}
\paragraph{Overview}
The Argo project~\cite{perarnau2017argo} is building portable, open source system software that improves
the performance and scalability and provides increased functionality to
Exascale applications and runtime systems.
We focus on four areas of the OS/R stack where the need from the ECP
applications and facilities is perceived to be the most urgent:
1) support for hierarchical memory;
2) dynamic and hierarchical power management to meet performance
targets;
3) containers for managing resources within a node; and
4) internode interfaces for collectively managing resources across groups
of nodes.
\paragraph{Key Challenges}
Many ECP applications have a complex runtime structure, ranging from in
situ data analysis, through an ensemble of largely independent individual
subjobs, to arbitrarily complex workflow structures~\cite{dreher2017situ}. At the same time, HPC
hardware complexity increases as well, from deeper memory hierarchies
encompassing on-package DRAM and byte-addressable NVRAM, to heterogeneous
compute resources and performance changing dynamically based on
power/thermal constraints.
To meet the emerging needs of ECP workloads while providing optimal
performance and resilience, the compute, memory, and interconnect resources
must be managed in cooperation with applications and runtime systems; yet
existing resource management solutions lack the necessary capabilities and
vendors are reluctant to innovate in this space in the absence of clear
directions from the community.
\paragraph{Solution Strategy}
Our approach is to augment and optimize for HPC the existing open source
offerings provided by vendors. We are working with ECP applications and
runtime systems to distill the needed new interfaces and to build, test,
and evaluate the newly implemented functionality with ECP workloads. This
needs to be done in cooperation with facilities, who can provide early
hardware testbeds where the newly implemented functionality can be
demonstrated to show benefits, tested at scale, and matured. Over the
years we have cultivated an excellent relationship with the vendors
providing HPC platforms because our approach has been to augment and
improve, rather than develop our own OS/R from scratch. IBM, Cray, and
Intel are eager to integrate the components we develop for ECP that can
help applications.
Our work in each area focuses on the following:
\begin{enumerate}
\item \textbf{Hierarchical memory:} Incorporate NVRAM into the memory hierarchy
using UMap: a user-space \texttt{mmap} replacement for out-of-core data,
leveraging recent \texttt{userfaultfd} mechanism of the Linux kernel for page fault
handling, featuring application-class specific prefetching and eviction
algorithms. Expose deep DRAM hierarchy by treating high-bandwidth memory
(MCDRAM, HBM) as a scratchpad~\cite{perarnau2016exploring}, managed by the Argonne Memory Library (AML),
which provides applications with asynchronous memory migration
between memory tiers and other convenience mechanisms.
\item \textbf{Power management:}
\emph{PowerStack} will explore hierarchical interfaces for power management
at three specific
levels~\cite{Ellsworth:argo,ellsworth_e2sc2016,patki2016,sakamoto2017}: the
global level of batch job schedulers (which we refer to as the Global
Resource Manager or GRM), the enclave level of job-level runtime systems
(open-source solution of Intel GEOPM and the ECP Power Steering project
will be leveraged here), and the node-level through measurement and control
mechanisms integrated with the NRM (described below).
At the node level, we will develop low-level, vendor-specific
monitoring/controlling capabilities to monitor power/energy consumption,
core temperature and other hardware status~\cite{osti_1353371,zhang2015minimizing}, and control the hardware power
capping and the CPU frequencies.
\item \textbf{Containers:} Develop a Node Resource Manager (NRM) that leverages
technologies underlying modern container runtimes
(primarily \texttt{cgroups}) to partition resources on compute nodes~\cite{zounmevo2015container},
arbitrating between application components and runtime services.
\item \textbf{Hierarchical resource management:} Develop a set of distributed
services and user-facing interfaces~\cite{perarnau2015distributed} to allow applications and runtimes to
resize, subdivide, and reconfigure their resources inside a job. Provide
the enclave abstraction: recursive groups of nodes that are managed as a
single entity; those enclaves can then be used to launch new services or to
create subjobs that can communicate with each other.
\end{enumerate}
\paragraph{Recent Progress}
We identified initial representative ECP applications and benchmarks of
interest, focusing in particular on characteristics such as: coupled codes
consisting of multiple components, memory-intensive codes that do not fit
well in DRAM or that are bandwidth-bound, and codes with dynamically
changing resource requirements.
%We interfaced (either face-to-face or via
%teleconferencing) with the representatives of the following projects:
%NWChemEx, PaRSEC, CANDLE, GAMESS, MPI teams, PETSc, SOLLVE, SICM, PowerRT,
%and Flux.
We designed and developed an API between Node Power and Node
Resource Manager (NRM), which in turn allows Global Resource
Manager (GRM) to control and monitor power and other node-local
resources. Additionally, we studied the effect of power capping on
different applications using the NodePower API and developed power
regression models required for a demand-response policy, which may be
added to the GRM in future.
We developed Yggdrasil, a resource event reactor that provides the enclave
abstraction. It can use standard HPC schedulers such as SLURM or Flux as a
backend.
We developed the initial, centralized version of the Global Resource
Manager (GRM) providing a hierarchical framework for power control. We
enhanced the existing PowSched codebase to use the power controller in the node
OS to acquire node local power data and adjust the power caps. We also developed a variation-aware
scheduler to address manufacturing variability under power constraints with Flux infrastructure,
and extended SLURM to support power scheduling plugins to enable our upcoming GRM
milestones.
We designed and developed the first version of the unified Node
Resource Manager. The NRM provides a high level of control over node
resources, including initial allocation at job launch and dynamic
reallocation at the request of the application and other services.
The initial set of
managed resources includes CPU cores and memory; they can be allocated to
application components via a container abstraction, which is used to describe
partitions of physical resources (to decrease interference), among other
things.
We developed the first stable version of UMap, the user-space memory map
page fault handler for NVRAM.
The UMap handler maps application
threads' virtual address ranges to persistent data sets, transparently
pages in active pages and evicts unused pages. We evaluated the costs and
overheads of various approaches and characterized end-to-end performance
for simple I/O intensive applications.
We designed AML, a memory library for explicit management of deep memory
architectures, and validated it on Intel's Knights Landing. Its main
feature is a scratchpad API, allowing applications to implement algorithms
similar to out-of-core for deep memory.
We provided multiple optimized versions of memory migration facilities,
ranging from a regular copy to a transparent move of memory pages, using
synchronous and asynchronous interfaces and single- and multithreaded
backends.
\begin{figure}[h]
\centering
\includegraphics[height=.15\textheight]{projects/2.3.5-Ecosystem/2.3.5.05-Argo/argo-global}\hspace{1em}%
\includegraphics[height=.15\textheight]{projects/2.3.5-Ecosystem/2.3.5.05-Argo/argo-node}
\caption{Global and node-local components of the Argo software stack and
interactions between them and the surrounding HPC system components.}
\end{figure}
\paragraph{Next Steps}
We plan to investigate different power management policies,
particularly demand response, which is becoming a crucial feature for
data centers, allowing them to reduce the power consumption of servers
without killing running applications or shutting down nodes.
We plan to continue the
development of power-aware versions of SLURM and Flux, and enable the path toward
integration with GEOPM and NRM.
We plan to improve the control loop inside the NRM to take into account I/O and
memory interference when placing containers on a node. We are also working on
making the NRM implementation compatible with Singularity.
We plan to improve the performance of UMap and demonstrate it with an
astronomy application having hundreds of files.
%
We plan to port applications to AML and integrate with runtime projects like
BOLT or PaRSEC for efficient use of Knights Landing memory using our
scratchpad.
| {
"alphanum_fraction": 0.8219831411,
"avg_line_length": 49.8121546961,
"ext": "tex",
"hexsha": "65494a15988e1fd8fbdfaf2a2da0fac4d5d3f37a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_forks_repo_licenses": [
"BSD-2-Clause"
],
"max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_forks_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause"
],
"max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_issues_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_line_length": 114,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33",
"max_stars_repo_licenses": [
"BSD-2-Clause"
],
"max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC",
"max_stars_repo_path": "projects/2.3.5-Ecosystem/2.3.5.05-Argo/2.3.5.05-Argo.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1950,
"size": 9016
} |
\section{Feb 02 Review Questions}
\begin{QandA}
\item The state machine approach paper describes implementing stability using several different clock methods. What properties does a clock need to provide so that it can be used to implement stability?
\begin{answered}
The clock should be able to assign a unique identifier to each request, whose issuance corresponds to an event. In addition, the clock should ensure a total ordering on these unique identifiers. In other words, the clock has to satisfy: 1) the clock value $\hat{T_p}$ is incremented after each event at process $p$; 2) upon receipt of a message with timestamp $\tau$, the process resets its clock value $\hat{T_p}$ to
$\max(\hat{T_p}, \tau) + 1$.
\end{answered}
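As a concrete illustration (my own minimal sketch, not taken from the paper), the following Python snippet implements a logical clock that obeys both rules; uniqueness of the identifiers across processes is typically obtained by breaking ties with the process id:
\begin{verbatim}
class LogicalClock:
    def __init__(self, pid):
        self.pid = pid         # used only to break ties between processes
        self.t = 0

    def local_event(self):     # rule 1: increment after each local event
        self.t += 1
        return (self.t, self.pid)

    def send(self):            # sending a message is itself an event
        return self.local_event()

    def receive(self, tau):    # rule 2: jump past the received timestamp
        self.t = max(self.t, tau) + 1
        return (self.t, self.pid)
\end{verbatim}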
\item For linearizability, sequential consistency, and eventual consistency, describe an application (real or imagined) that could reasonably use that consistency model.
\begin{answered}
\begin{itemize}
\item linearizability example: transactions on an RDBMS enforce linearizability in the sense that the read/write operations on a table have to be ``atomic''. Once a tuple is modified and committed,
the change to the tuple becomes visible to the following reads immediately. A real-life example: when I deposit money into an
account, I want my bank account balance to reflect the latest deposit immediately.
\item sequential consistency example: the isolation requirement for an RDBMS. When two transactions modify the same table, the operations
within each transaction are executed in the order they are issued. However, the operations from the two transactions may be interleaved (e.g., under the ``Read uncommitted'' isolation level). Another
example: person $A$ issues ``unfriend'' and then ``post'', while person $B$ issues ``scroll the facebook page''. The operation
order of each person is preserved: ``post'' never goes before ``unfriend'', but the operations may interleave, e.g., ``unfriend'', ``scroll the facebook
page'', ``post'', instead of doing all of $A$'s operations first (``unfriend'', ``post'') and then $B$'s.
\item eventual consistency example: a highly available key-value store like Dynamo. The order-status display service may not see the user's
update to the shopping cart, but eventually those updates can be seen by every service. Another example: when you like a post, the
other people who can see the post may not see your ``like'' immediately in their facebook page, but eventually they will see your ``like'' on the post.
\end{itemize}
\end{answered}
\item What's one benefit of using invalidations instead of leases? What's one benefit of using leases over invalidations?
\begin{answered}
One benefit of using leases over invalidations can be seen from the following example: suppose we use the eventual consistency model
and a write is waiting on invalidations.
The invalidation hurts system performance because clients cannot use the cache while it is being invalidated, even though reading the old value would be acceptable under eventual consistency. With a lease, however, a client holding an unexpired lease can still read the data from its cache. The same scenario also shows a benefit of invalidations over leases: a reader may not get the latest value, since it can read
the old value directly from the lease-protected cache until the lease expires. For the same reason, leases are a poor fit for the linearizability model: a write may have to wait until the lease expires, which delays the write and hinders performance, and a read served
from the outdated lease-protected cache would violate linearizability, which guarantees that reads always reflect the latest write.
For the
eventual consistency model, however, leases are more favorable than invalidations.
\end{answered}
\end{QandA} | {
"alphanum_fraction": 0.7632161298,
"avg_line_length": 112.9722222222,
"ext": "tex",
"hexsha": "5071780e606e265751126ab3df96f7e96f06a806",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-02-24T05:17:27.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-12-26T09:02:45.000Z",
"max_forks_repo_head_hexsha": "3d5ae181f2b6c986f3dc1977d190847757d30834",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "xxks-kkk/Code-for-blog",
"max_forks_repo_path": "2018/380D-vijay/review_questions/0202.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "3d5ae181f2b6c986f3dc1977d190847757d30834",
"max_issues_repo_issues_event_max_datetime": "2017-10-22T20:10:50.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-10-22T20:10:50.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "xxks-kkk/Code-for-blog",
"max_issues_repo_path": "2018/380D-vijay/review_questions/0202.tex",
"max_line_length": 503,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "3d5ae181f2b6c986f3dc1977d190847757d30834",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "xxks-kkk/Code-for-blog",
"max_stars_repo_path": "2018/380D-vijay/review_questions/0202.tex",
"max_stars_repo_stars_event_max_datetime": "2021-09-11T23:43:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-10-04T08:20:03.000Z",
"num_tokens": 872,
"size": 4067
} |
\chapter{Medians and Order Statistics}
\section{Minimum and maximum}
\begin{enumerate}
\item[9.1{-}1]{Show that the second smallest of $n$ elements can be found with
$n + \ceil{\lg n} - 2$ comparisons in the worst case. (\emph{Hint:} Also find
the smallest element.)}
\begin{framed}
Let us first find the smallest element. Compare the elements in pairs and discard
the larger element of each pair. The number of elements is now $\ceil{n/2}$.
Repeat this operation recursively on the remaining elements until the smallest
element is found. Since each comparison discards exactly one element, the number of
comparisons equals the number of elements that are not the smallest. Thus, $n - 1$
comparisons. Note that the second smallest element can lose a comparison only to the
smallest element. Thus, the second smallest element is among the
$\ceil{\lg n}$ elements that were discarded when compared directly with the smallest
element. Use the same recursive approach on these $\ceil{\lg n}$ elements to
find the second smallest with $\ceil{\lg n} - 1$ comparisons. The total number
of comparisons in the worst-case is then $n - 1 + \ceil{\lg n}
- 1 = n + \ceil{\lg n} - 2$.
\end{framed}
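The following Python sketch (mine, not from the textbook, assuming $n \ge 2$) mirrors this argument: it finds the minimum while remembering which elements lost directly to it, and then takes the minimum of those at most $\ceil{\lg n}$ losers.
\begin{verbatim}
def second_smallest(a):
    # each entry is (value, elements that lost directly to this value)
    players = [(x, []) for x in a]
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):   # disjoint pairs
            (x, lx), (y, ly) = players[i], players[i + 1]
            if x < y:
                nxt.append((x, lx + [y]))         # smaller one advances
            else:
                nxt.append((y, ly + [x]))
        if len(players) % 2 == 1:                 # odd one advances for free
            nxt.append(players[-1])
        players = nxt
    smallest, losers = players[0]
    return min(losers)    # at most ceil(lg n) candidates
\end{verbatim}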
\item[9.1{-}2]{($\star$) Prove the lower bound of $\ceil{3n/2} - 2$ comparisons
in the worst case to find both the maximum and minimum of $n$ numbers.
(\emph{Hint:} Consider how many numbers are potentially either the maximum or
minimum, and investigate how a comparison affects these counts.)}
\begin{framed}
At the start, any of the $n$ the elements can be both the minimum and the
maximum. After the first comparison, we can discard the largest as not being the
minimum and the smallest as not being the maximum. From now on we have two
options: compare two different elements or compare one of the elements
previously compared with a different element. The first option will decrease by
one both the number of potential minimums and potential maximums, while the
second option will only decrease one of these totals. Thus, the best way to
start is to group the elements in pairs and compare them, which requires
$\floor{n/2}$ comparisons. After comparing all the pairs, we will have
$\ceil{n/2}$ potential maximums and $\ceil{n/2}$ potential minimums. In the
worst-case, those sets are disjoint and must be treated independently. We know
from the previous question that the minimum number of comparisons needed to find
the minimum or the maximum among $\ceil{n/2}$ elements is $\ceil{n/2} - 1$.
Thus, the lower bound to find both the maximum and the minimum of $n$ numbers in
the worst-case is
\[
\Bigl\lfloor \frac{n}{2} \Bigr\rfloor + 2 \left( \Bigl\lceil \frac{n}{2} \Bigr\rceil - 1 \right).
\]
If $n$ is even, we have
\[
\Bigl\lfloor \frac{n}{2} \Bigr\rfloor + 2 \left( \Bigl\lceil \frac{n}{2} \Bigr\rceil - 1 \right)
= \frac{n}{2} + n - 2
= \frac{3n}{2} - 2
= \Bigl\lceil \frac{3n}{2} \Bigr\rceil - 2.
\]
If $n$ is odd, we have
\[
\Bigl\lfloor \frac{n}{2} \Bigr\rfloor + 2 \left( \Bigl\lceil \frac{n}{2} \Bigr\rceil - 1 \right)
= \frac{n - 1}{2} + (n + 1) - 2
= \frac{3n - 3}{2}
= \Bigl\lceil \frac{3n}{2} \Bigr\rceil - 2.
\]
\end{framed}
\end{enumerate}
\newpage
\section{Selection in worst-case linear time}
\begin{enumerate}
\item[9.2-1]{Show that \textsc{Randomized-Select} never makes a recursive call
to a 0-length array.}
\begin{framed}
At the start of each recursive call, a random pivot is chosen. If it happens to
be the $i$th element, the element being searched has been found and is returned
without any additional recursion call. Otherwise, the $i$th element is either
before or after the pivot and a recursive call is made on the side of the
subarray that includes the $i$th element.
\end{framed}
\item[9.2-2]{Argue that the indicator random variable $X_k$ and the value
$T(\max(k - 1, n - k))$ are independent.}
\begin{framed}
Both $X_k$ and $T(\max(k - 1, n - k))$ depend on the value of $k$. However, the
value of $T(\max(k - 1, n - k))$ is the same no matter whether $X_k$ is 0 or 1, so the two are independent.
\end{framed}
\item[9.2-3]{Write an iterative version of \textsc{Randomized-Select}.}
\begin{framed}
The pseudocode is stated below.
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{Randomized-Select-Iterative}
\SetKwProg{myalg}{}{}{}
\nonl\myalg{\algo{A, p, r, i}}{%
\If{$p == r$}{%
\Return{$A[p]$}\;
}
\While{\texttt{\upshape{True}}}{%
$q = \texttt{Randomized-Partition}(A, p, r)$\;
$k = q - p + 1$\;
\If{$i == k$}{%
\Return{$A[q]$}\;
}
\ElseIf{$i < k$}{%
$r = q - 1$\;
}
\Else{%
$p = q + 1$\;
$i = i - k$\;
}
}
}
\end{algorithm}
\end{framed}
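For reference, a runnable Python translation of the same loop (my own sketch, 0-indexed and using a Lomuto-style \textsc{Randomized-Partition}):
\begin{verbatim}
import random

def randomized_select_iterative(a, p, r, i):
    # returns the i-th smallest element of a[p..r] (i is 1-indexed)
    while True:
        if p == r:
            return a[p]
        # Randomized-Partition: random pivot, then Lomuto partition
        pidx = random.randint(p, r)
        a[pidx], a[r] = a[r], a[pidx]
        x, store = a[r], p
        for j in range(p, r):
            if a[j] <= x:
                a[store], a[j] = a[j], a[store]
                store += 1
        a[store], a[r] = a[r], a[store]
        k = store - p + 1          # rank of the pivot within a[p..r]
        if i == k:
            return a[store]
        elif i < k:
            r = store - 1
        else:
            p = store + 1
            i = i - k
\end{verbatim}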
\item[9.2-4]{Suppose we use \textsc{Randomized-Select} to select the minimum
element of the array $A = \langle 3, 2, 9, 0, 7, 5, 4, 8, 6, 1 \rangle$.
Describe a sequence of partitions that results in a worst-case performance of
\textsc{Randomized-Select}.}
\begin{framed}
The worst case occurs when the chosen pivot is always the greatest remaining
element, that is, the partitions are made around 9, 8, 7, \dots, 1. The number
of calls to partition in this case is $n - 1$.
\end{framed}
\end{enumerate}
\newpage
\section{Selection in worst-case linear time}
\begin{enumerate}
\item[9.3-1]{In the algorithm \textsc{Select}, the input elements are divided
into groups of 5. Will the algorithm work in linear time if they are divided
into groups of 7? Argue that \textsc{Select} does not run in linear time if
groups of 3 are used.}
\begin{framed}
If the elements are divided into groups of 7, the number of elements
greater/smaller than the median-of-medians is at least
\[
  4 \left( \Bigl\lceil \frac{1}{2} \Bigl\lceil \frac{n}{7} \Bigr\rceil \Bigr\rceil - 2 \right)
\ge \frac{4n}{14} - 8 = \frac{2n}{7} - 8,
\]
which implies that, in the worst-case, step 5 calls \textsc{Select} recursively
on at most
\[
n - \left( \frac{2n}{7} - 8 \right) = \frac{5n}{7} + 8
\]
elements. We then have the recurrence
\[
T(n) = T\left( \Bigl\lceil \frac{n}{7} \Bigr\rceil \right) + T\left( \frac{5n}{7} + 8 \right) + O(n).
\]
We shall prove that its running time is linear by substitution. More
specifically, we will show that
\[
T(n) \le cn \; \Forall n \ge n_0,
\]
where $c$ and $n_0$ are positive constants. Substituting into the recurrence,
yields
\begin{equation*}
\begin{aligned}
T(n) &\le c \Bigl\lceil \frac{n}{7} \Bigr\rceil + c \left( \frac{5n}{7} + 8 \right) + an\\
&\le c \frac{n}{7} + c + c \frac{5n}{7} + 8c + an & \text{($c \ge 1$)}\\
&= \frac{6}{7} cn + 9c + an\\
&= cn + \left( - \frac{1}{7} cn + 9c + an \right)\\
&\le cn,
\end{aligned}
\end{equation*}
where the last step holds for
\[
-\frac{1}{7} cn + 9c + an \le 0 \rightarrow c \ge 7a \left(\frac{n}{n - 63}\right),
\]
and picking $n_0 = 126$, it holds for $c \ge 14a$.
Similarly, with groups of 3, the number of elements greater/smaller than the
median-of-medians is at least
\[
2 \left( \Bigl\lceil \frac{1}{2} \Bigl\lceil \frac{n}{3} \Bigr\rceil \Bigr\rceil - 2 \right)
\ge \frac{n}{3} - 4.
\]
which implies that, in the worst-case, step 5 calls \textsc{Select} recursively
on at most
\[
n - \left( \frac{n}{3} - 4 \right) = \frac{2n}{3} + 4
\]
elements. We then have the recurrence
\[
T(n) = T\left( \Bigl\lceil \frac{n}{3} \Bigr\rceil \right) + T\left( \frac{2n}{3} + 4 \right) + O(n).
\]
We shall prove that its running time is $\omega(n)$ by substitution. More
specifically, we will show that
\[
T(n) > cn + d \; \Forall n \ge n_0,
\]
where $c$, $d$, and $n_0$ are positive constants. Substituting into the
recurrence, yields
\begin{equation*}
\begin{aligned}
T(n) &> c \Bigl\lceil \frac{n}{3} \Bigr\rceil + d + c \left( \frac{2n}{3} + 4 \right) + d + an\\
    &\ge c \frac{n}{3} + d + c \frac{2n}{3} + 4c + d + an\\
    &= cn + 4c + 2d + an\\
    &> cn + d,\\
\end{aligned}
\end{equation*}
where the last step holds since $4c + d + an > 0$.
\end{framed}
\item[9.3-2]{Analyze \textsc{Select} to show that if $n \ge 140$, then at least
$\ceil{n/4}$ elements are greater than the median-of-medians $x$ and at least
$\ceil{n/4}$ elements are less than $x$.}
\begin{framed}
We have that at least
\[
\frac{3n}{10} - 6
\]
elements are greater/smaller than $x$. To this number be equal to or greater
than $\ceil{n/4}$, we find $n$ such that
\begin{equation*}
\begin{aligned}
\frac{3n}{10} - 6 \ge \Bigl\lceil \frac{n}{4} \Bigr\rceil
&\rightarrow \frac{3n}{10} - 6 \ge \frac{n}{4} + 1\\
&\rightarrow \frac{6n - 5n}{20} \ge 7\\
&\rightarrow \frac{n}{20} \ge 7\\
&\rightarrow n \ge 140.
\end{aligned}
\end{equation*}
\end{framed}
\item[9.3-3]{Show how quicksort can be made to run in $O(n \lg n)$ time in the
worst-case, assuming that all elements are distinct.}
\begin{framed}
Update the partition procedure to use the median as the pivot. It will take an
additional $O(n)$-time to find the median with the \textsc{Select} procedure,
but the running time of partition will still be linear. We will then have the
recurrence
\[
T(n) = 2T \left( \frac{n}{2} \right) + O(n),
\]
which takes
\[
\sum_{i = 0}^{\lg n} 2^i \cdot \frac{n}{2^i} = \sum_{i = 0}^{\lg n} n = O(n \lg n).
\]
\end{framed}
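A small Python sketch of this idea (mine, assuming distinct elements): a median-of-medians \texttt{select} supplies the exact median as the pivot, so every level of the recursion halves the problem.
\begin{verbatim}
def select(a, k):
    # k-th smallest element of a (k is 1-indexed), worst-case linear time
    if len(a) <= 5:
        return sorted(a)[k - 1]
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, (len(medians) + 1) // 2)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    if k <= len(lo):
        return select(lo, k)
    if k == len(lo) + 1:
        return pivot
    return select(hi, k - len(lo) - 1)

def quicksort(a):
    if len(a) <= 1:
        return a
    m = select(a, (len(a) + 1) // 2)    # exact lower median as the pivot
    return (quicksort([x for x in a if x < m]) + [m] +
            quicksort([x for x in a if x > m]))
\end{verbatim}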
\item[9.3-4]{($\star$) Suppose that an algorithm uses only comparisons to find
the $i$th smallest element in a set of $n$ elements. Show that it can also find
the $i - 1$ smaller elements and the $n - i$ larger elements without performing
any additional comparisons.}
\begin{framed}
Assume without loss of generality that the elements of the array are distinct.
Let $x$ denote the $i$th order statistic that was found through comparisons.
First note that if there exists an element $y$ that was never compared to any
other element, its value was not taken into account to determine $x$, which
implies that there are at least two possible order statistics for $x$ {--} one
for $y < x$ and another for $y > x$. The same occurs if $y$ is only compared to
elements that are not between $x$ and $y$ in the sorted order. Note that these
comparisons are insufficient to determine if $y$ is smaller or greater than $x$,
and there will also be at least two possible order statistics for $x$.
Therefore, to find $x$, the algorithm must compare $y$ to $x$ directly or by
transitivity. These comparisons are sufficient to determine the relative order
of every element with respect to $x$, and therefore to also determine the
$i - 1$ smaller and the $n - i$ greater elements of the array.
\end{framed}
\newpage
\item[9.3-5]{Suppose that you have a ``black-box'' worst-case linear-time median
subroutine. Give a simple, linear-time algorithm that solves the selection
problem for an arbitrary order statistic.}
\begin{framed}
A simple algorithm works as follows:
\begin{enumerate}
\item Find the lower median $m$ using the ``black-box'' median subroutine.
\item If $i = \ceil{n/2}$, just return $m$. Otherwise, partition the array using
$m$ as the pivot and recursively find the $i$th element on the first
$\ceil{n / 2} - 1$ elements if $i < \ceil{n / 2}$, or the $(i - \ceil{n / 2})$th
element on the last $\floor{n/2}$ elements if $i > \ceil{n/2}$.
\end{enumerate}
This algorithm has the recurrence
\[
T(n) = T(n/2) + O(n),
\]
which can be solved using case 3 of the master method, since $n^{\lg 1}$ is
polynomially smaller than $f(n)$. Thus, $T(n) = \Theta(n)$.
The pseudocode of this algorithm is stated below.
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{Select'}
\SetKwProg{myalg}{}{}{}
\nonl\myalg{\algo{A, p, r, i}}{%
$m = \texttt{Median}(A, p, r)$\;
    $n = r - p + 1$\;
    $k = \ceil{n/2}$\;
\If{$i == k$}{%
\Return{$m$}\;
}
\Else{%
$q = \texttt{Partition}(A, p, r, m)$\;
\If{$i < k$}{%
        // recurse over the first $\ceil{n/2} - 1$ elements\;
$\texttt{Select'}(A, p, p + k - 2, i)$\;
}
\Else{%
        // recurse over the last $\floor{n/2}$ elements\;
$\texttt{Select'}(A, p + k, r, i - k)$\;
}
}
}
\end{algorithm}
\end{framed}
\newpage
\item[9.3-6]{The $k$th \textbf{\emph{quantiles}} of an $n$-element set are the
$k - 1$ order statistics that divide the sorted set into $k$ equal-sized sets
(to within 1). Give an $O(n \lg k)$-time algorithm to list the $k$th quantiles
of a set.}
\begin{framed}
Let $S$ be an $n$-set and $S_{(i)}$ denote the $i$th order statistic of $S$. The
$k$th quantiles of $S$ are the elements
\[
S_{(1 (n / k))}, S_{(2 (n / k))}, \dots, S_{((k - 1) (n / k))}.
\]
An efficient algorithm to find the above elements work as follows:
\begin{enumerate}
\item If $k = 1$, then return $\emptyset$.
\item Otherwise, do the following:
\begin{enumerate}
\item Partition $S$ around the element $S_{(\floor{k / 2} (n / k))}$. Let $q$
denote the position of the pivot after partition and let $S_1$ and $S_2$ denote
the subsets $S[1, \dots, q]$ and $S[q + 1, \dots, n]$, respectively.
\item Recursively solve the $(\floor{k/2})$th quantiles of $S_1$ and the
$(\ceil{k/2})$ quantiles of $S_2$. Let $Q_1$ and $Q_2$ denote the solutions of
$S_1$ and $S_2$, respectively.
\item Return $Q_1 \cup \{S[q]\} \cup Q_2$.
\end{enumerate}
\end{enumerate}
We shall now prove that this algorithm runs in $O(n \lg k)$. First note that
since
\[
n \text{ mod } k = 0,
\]
$k$ is even implies that $n$ is also even. Thus, for even $k$, we have
\begin{equation*}
\begin{aligned}
\Bigl\lfloor \frac{k}{2} \Bigr\rfloor \cdot \frac{n}{k}
&= \frac{k}{2} \cdot \frac{n}{k}\\
&= \frac{n}{2}\\
&= \Bigl\lfloor \frac{n}{2} \Bigr\rfloor,
\end{aligned}
\end{equation*}
which implies that $q$ is the lower median. When $k$ is odd, we have
\begin{equation*}
\begin{aligned}
\Bigl\lfloor \frac{k}{2} \Bigr\rfloor \cdot \frac{n}{k}
&= \frac{k - 1}{2} \cdot \frac{n}{k}\\
&= \left( \frac{k}{2} - \frac{1}{2} \right) \frac{n}{k}\\
&= \frac{n}{2} - \frac{n}{2k}.
\end{aligned}
\end{equation*}
Step 1 takes $O(1)$, step (a) takes $O(n)$ per call, and step (b) yields the recurrence
\begin{equation*}
T(n, k) =
\begin{cases}
O(1), & k = 1\\
T\left( \Bigl\lfloor \frac{n}{2} \Bigr\rfloor, \Bigl\lfloor \frac{k}{2} \Bigr\rfloor \right) +
T\left( \Bigl\lceil \frac{n}{2} \Bigr\rceil, \Bigl\lceil \frac{k}{2} \Bigr\rceil \right) +
O(n), & \text{$k > 1$ and $k$ is even}\\
T\left( \frac{n}{2} - \frac{n}{2k}, \Bigl\lfloor \frac{k}{2} \Bigr\rfloor \right) +
T\left( \frac{n}{2} + \frac{n}{2k}, \Bigl\lceil \frac{k}{2} \Bigr\rceil \right) +
O(n), & \text{$k > 1$ and $k$ is odd}
\end{cases}
\end{equation*}
We shall solve this recurrence through the analysis of its recursion-tree. Since
the problem is always divided into two subproblems, without overlap, the total
cost over all nodes at depth $i$ is $cn$. The bottom level at depth $\lg k$
has $2^{\lg k} = k$ nodes, each contributing cost $O(1)$, for a total cost of
$O(k)$. Thus, the cost of the entire tree is \begin{equation*}
\begin{aligned}
T(n, k) &= \sum_{i = 0}^{\lg k - 1} cn + O(k)\\
&= cn \lg k + O(k)\\
&= O(n \lg k).
\end{aligned}
\end{equation*}
\end{framed}
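A compact Python sketch of this recursion (mine); the call to \texttt{sorted} is only a stand-in for the linear-time \textsc{Select} and \textsc{Partition} used in the analysis, and $n$ is assumed to be a multiple of $k$.
\begin{verbatim}
def quantiles(s, k):
    # returns the k-1 kth quantiles of s (assumes len(s) % k == 0)
    if k == 1:
        return []
    n = len(s)
    q = (k // 2) * (n // k)        # rank of the splitting quantile
    s = sorted(s)                  # stand-in for Select + Partition, O(n)
    s1, pivot, s2 = s[:q], s[q - 1], s[q:]
    return quantiles(s1, k // 2) + [pivot] + quantiles(s2, k - k // 2)
\end{verbatim}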
\newpage
\item[9.3-7]{Describe an $O(n)$-time algorithm that, given a set $S$ of $n$
distinct numbers and a positive integer $k \le n$, determines the $k$ numbers in
$S$ that are closest to the median of $S$.}
\begin{framed}
Let $A$ be an array of size $n$. The following algorithm finds $k$ elements of
$A$ such that every element
\begin{itemize}
\item is greater than or equal to the $(\floor{n/2} - \floor{(k - 1)/2})$th
order statistic of $A$, and
\item is lower than or equal to the $(\floor{n/2} + \ceil{(k - 1)/2})$th order
statistic of $A$.
\end{itemize}
Do the following steps:
\begin{enumerate}
\item Find the $q$th order statistic of $A$, such that
$q = \floor{n/2} - \floor{(k - 1)/2}$, and partition $A$ around this element.
\item If $k = 1$, return $A[q]$.
\item Otherwise, do the following:
\begin{enumerate}
\item Let $A'$ denote subarray $A[q, \dots, n]$.
\item Find the $k$th order statistic of $A'$ and partition $A'$ around this
element.
\item Return the subarray $A'[1, \dots, k]$ (that is, $A[q, \dots, q + k - 1]$).
\end{enumerate}
\end{enumerate}
The algorithm does at most two selections and two partitions. Thus, its running
time is $4 \cdot O(n) + O(1) = O(n)$.
\end{framed}
\item[9.3-8]{Let $X[1 \dots n]$ and $Y[1 \dots n]$ be two arrays, each
containing $n$ numbers already in sorted order. Give an $O(\lg n)$-time algorithm
to find the median of all $2n$ elements in arrays $X$ and $Y$.}
\begin{framed}
Note that, since both arrays are sorted and all elements are distinct, $X[i]$ is
the lower median of the $2n$ elements (that is, its order statistic is $n$) if,
and only if,
\[
  X[i] \ge Y[n - i],
\]
and
\[
  X[i] \le Y[n - i + 1],
\]
where out-of-range entries of $Y$ are treated as $-\infty$ and $+\infty$.
Start by testing the middle element of $X$. If the first comparison fails, recurse over the
right half. If the second comparison fails, recurse over the left half.
Otherwise, return $X[i]$. If a recursion is performed on an empty array, the
median is not within $X$. Repeat a similar procedure on $Y$ to find the median.
The complexity of this algorithm is $O(\lg n) + O(\lg n) = O(\lg n)$.
\end{framed}
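A Python sketch of this binary search (mine, 0-indexed, assuming distinct elements and $|X| = |Y| = n$); out-of-range positions of the other array are treated as $\pm\infty$.
\begin{verbatim}
import math

def median_of_two(x, y):
    # lower median of the 2n distinct elements in sorted lists x and y
    n = len(x)
    def find(a, b):                    # search a for the answer
        lo, hi = 0, n - 1
        while lo <= hi:
            i = (lo + hi) // 2         # a[i] has rank i+1 within a
            left = b[n - i - 2] if n - i - 2 >= 0 else -math.inf
            right = b[n - i - 1]       # in range for 0 <= i <= n-1
            if a[i] < left:            # fewer than n-i-1 elements of b below
                lo = i + 1
            elif a[i] > right:         # at least n-i elements of b below
                hi = i - 1
            else:
                return a[i]            # exactly n-1 elements are smaller
        return None                    # the median is not in a
    m = find(x, y)
    return m if m is not None else find(y, x)
\end{verbatim}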
\item[9.3-9]{Professor Olay is consulting for an oil company, which is planning
a large pipeline running east to west through an oil field of $n$ wells. The
company wants to connect a spur pipeline from each well directly to the main
pipeline along a shortest route (either north or south), as shown in Figure 9.2.
Given the $x$- and $y$-coordinates of the wells, how should the professor pick
the optimal location of the main pipeline, which would be the one that minimizes
the total length of the spurs? Show how to determine the optimal location in
linear time.}
\begin{framed}
An optimal location is the median of the $y$-coordinates of the wells (when $n$
is even, any point between the lower and upper medians is optimal). It can be
found in linear time with the \textsc{Select} algorithm.
\end{framed}
\end{enumerate}
\newpage
\section*{Problems}
\addcontentsline{toc}{section}{\protect\numberline{}Problems}%
\begin{enumerate}
\item[9-1]{\textbf{\emph{Largest i numbers in sorted order}}\\
Given a set of $n$ numbers, we wish to find the $i$ largest in sorted order
using a comparison-based algorithm. Find the algorithm that implements each of
the following methods with the best asymptotic worst-case running time, and
analyze the running times of the algorithms in terms of $n$ and $i$.
\begin{enumerate}
\item[\textbf{a.}] Sort the numbers, and list the $i$ largest.
\item[\textbf{b.}] Build a max-priority queue from the numbers, and call \textsc{Extract-Max} $i$ times.
\item[\textbf{c.}] Use an order-statistic algorithm to find the $i$th largest number,
partition around that number, and sort the $i$ largest numbers.
\end{enumerate}
}
\begin{framed}
\begin{enumerate}
\item The pseudocode is stated below.
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{LargestNumbersSort}
\SetKwProg{myalg}{}{}{}
\nonl\myalg{\algo{A, i}}{%
Let $B$ be an integer array of size $i$\;
\texttt{Heapsort}($A, 1, A.length$)\;
\For{$j = 1$ \KwTo $i$}{%
      $B[j] = A[A.length - i + j]$\;
}
\Return{$B$}\;
}
\end{algorithm}
This algorithm runs in $\Theta(n \lg n + i) = \Theta(n \lg n)$.
\item The pseudocode is stated below.
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{LargestNumbersPriorityQueue}
\SetKwProg{myalg}{}{}{}
\nonl\myalg{\algo{A, i}}{%
Let $B$ be an integer array of size $i$\;
\texttt{Build-Max-Heap}($A$)\;
\For{$j = 1$ \KwTo $i$}{%
$\text{\emph{element}}$ = \texttt{Heap-Extract-Max}($A$)\;
$B[i - j + 1] = element$\;
}
\Return{$B$}\;
}
\end{algorithm}
\textsc{Build-Max-Heap} call takes $O(n)$, \textsc{Extract-Max} call takes
$O(\lg n)$. This algorithm runs in $O(n + i \lg n)$.
\item The pseudocode is stated below.
\begin{algorithm}[H]
\SetAlgoNoEnd\DontPrintSemicolon
\BlankLine
\SetKwFunction{algo}{LargestNumbersOrderStatistic}
\SetKwProg{myalg}{}{}{}
\nonl\myalg{\algo{A, i}}{%
Let $B$ be an integer array of size $i$\;
    $q = \texttt{Select}(A, A.length - i + 1)$\;
    \texttt{Partition}($A, 1, A.length, q$)\;
    \texttt{Heapsort}($A, A.length - i + 1, A.length$)\;
    \For{$j = 1$ \KwTo $i$}{%
      $B[j] = A[A.length - i + j]$\;
}
\Return{$B$}\;
}
\end{algorithm}
\textsc{Select} call takes $O(n)$, \textsc{Partition} call takes $O(n)$,
\textsc{Heapsort} call takes $O(i \lg i)$. This algorithm runs in $O(n + i \lg i)$.
\end{enumerate}
\end{framed}
\newpage
\item[9-2]{\textbf{\emph{Weighted median}}\\
For $n$ distinct elements $x_1, x_2, \dots, x_n$ with positive weights
$w_1, w_2, \dots, w_n$ such that $\sum_{i = 1}^{n} w_i = 1$, the
\textbf{\emph{weighted (lower) median}} is the element $x_k$ satisfying
\[
\sum_{x_i < x_k} w_i < \frac{1}{2},
\]
and
\[
\sum_{x_i > x_k} w_i \le \frac{1}{2}.
\]
For example, if the elements are 0.1, 0.35, 0.05, 0.1, 0.15, 0.05, 0.2 and each
element equals its weight (that is, $w_i = x_i$ for $i = 1, 2, \dots, 7$), the
median is 0.1, but the weighted median is 0.2.
\begin{enumerate}
\item[\textbf{a.}] Argue that the median of $x_1, x_2, \dots, x_n$ is the
weighted median of the $x_i$ with weights $w_i = 1/n$ for $1, 2, \dots, n$.
\item[\textbf{b.}] Show how to compute the weighted median of $n$ elements in
$O(n \lg n)$ worst-case time using sorting.
\item[\textbf{c.}] Show how to compute the weighted median in $\Theta(n)$
worst-case time using a linear-time median algorithm such as \textsc{Select}
from Section 9.3.
\end{enumerate}
The \textbf{\emph{post-office location problem}} is defined as follows. We are
given $n$ points $p_1, p_2, \dots, p_n$ with associated weights
$w_1, w_2, \dots, w_n$. We wish to find a point $p$ (not necessarily one of the
input points) that minimizes the sum $\sum_{i = 1}^n w_i d(p, p_i)$, where
$d(a, b)$ is the distance between points $a$ and $b$.
\begin{enumerate}
\item[\textbf{d.}] Argue that the weighted median is a best solution for the
1-dimensional post-office location problem, in which points are simply real
numbers and the distance between points $a$ and $b$ is $d(a, b) = |a - b|$.
\item[\textbf{e.}] Find the best solution for the 2-dimensional post-office
location problem, in which the points are $(x, y)$ coordinate pairs and the
distance between points $a = (x_1, y_1)$ and $b = (x_2, y_2)$ is the
\textbf{\emph{Manhattan distance}} given by
$d(a, b) = |x_1 - x_2| + |y_1 - y_2|$.
\end{enumerate}
}
\begin{framed}
\begin{enumerate}
\item Note that there are at most $\floor{(n - 1)/2}$ elements that are smaller
than the median and at most $\ceil{(n - 1)/2}$ elements that are greater than
the median. Since the weight of each element is $1/n$, we have
\[
\sum_{x_i < x_k} w_i = \sum_{x_i < x_k} \frac{1}{n}
= \frac{1}{n} \sum_{x_i < x_k} 1
\le \frac{1}{n} \cdot \Bigl\lfloor \frac{n - 1}{2} \Bigr\rfloor
< \frac{1}{n} \cdot \frac{n}{2} = \frac{1}{2},
\]
and
\[
  \sum_{x_i > x_k} w_i = \sum_{x_i > x_k} \frac{1}{n}
  = \frac{1}{n} \sum_{x_i > x_k} 1
\le \frac{1}{n} \cdot \Bigl\lceil \frac{n - 1}{2} \Bigr\rceil
\le \frac{1}{n} \cdot \frac{n}{2} = \frac{1}{2},
\]
which implies that the median is also the weighted median.
\item Sort the array with \textsc{Heapsort}. Iterate over the elements of the
array, accumulating the sum of their weights until the sum achieves a value that
is greater than or equal to $1/2$. Let $x_k$ denote the element at which the
accumulated sum first reaches $1/2$ or more. Note that at that
point
\[
\sum_{x_i < x_k} w_i < \frac{1}{2}
\]
holds since the sum of the weights until the element right before $x_k$ is
smaller than $1/2$ and
\[
\sum_{x_i > x_k} w_i \le \frac{1}{2}
\]
holds since the sum of the weights until $x_k$ is greater than or equal to $1/2$
and $\sum_{i = 1}^{n} w_i = 1$. Thus, $x_k$ is the weighted median. This
algorithm takes $\Theta(n \lg n)$ to sort the array with \textsc{Heapsort} and
$O(n)$ to accumulate the weights and find the weighted median.
\item Do the following steps:
\begin{enumerate}
\item Find the median with the \textsc{Select} algorithm.
\item Partition the array around the median.
\item Let $x_m$ denote the position of the median after partitioning. Let
$W_L = \sum_{x_i < x_m} w_i$ and $W_R = \sum_{x_i > x_m} w_i$.
\item If $W_L < 1/2$ and $W_R \le 1/2$, $x_m$ is the weighted median. Otherwise,
do the following:
\begin{enumerate}
\item If $W_L \ge 1/2$, the weighted median is before $x_m$.
Set $w_m = w_m + W_R$ and recurse on the left half of the array, including
$x_m$.
\item If $W_R > 1/2$, the weighted median is after $x_m$. Set $w_m = w_m + W_L$
and recurse on the right half of the array, including $x_m$.
\end{enumerate}
\end{enumerate}
This algorithm has the recurrence:
\[
  T(n) = T\left(\frac{n}{2} + 1\right) + \Theta(n)
= \sum_{i = 0}^{\lg n} \left(\frac{n}{2^i} + 1\right)
= n \sum_{i = 0}^{\lg n} \frac{1}{2^i} + \sum_{i = 0}^{\lg n} 1
\le 2n + \lg n + 1
= \Theta(n).
\]
\item Skipped.
\item Skipped.
\end{enumerate}
\end{framed}
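A Python sketch of part (c) (mine); \texttt{median\_low} merely stands in for the linear-time \textsc{Select}, the elements are assumed distinct, and the weights are assumed to sum to 1.
\begin{verbatim}
from statistics import median_low

def weighted_median(xs, ws):
    if len(xs) == 1:
        return xs[0]
    if len(xs) == 2:
        (a, wa), (b, wb) = sorted(zip(xs, ws))
        return a if wb <= 0.5 else b
    m = median_low(xs)                  # stand-in for the linear-time Select
    w_m = ws[xs.index(m)]
    w_l = sum(w for x, w in zip(xs, ws) if x < m)
    w_r = sum(w for x, w in zip(xs, ws) if x > m)
    if w_l < 0.5 and w_r <= 0.5:
        return m
    if w_l >= 0.5:                      # the weighted median lies below m
        kept = [(x, w) for x, w in zip(xs, ws) if x < m] + [(m, w_m + w_r)]
    else:                               # w_r > 1/2: it lies above m
        kept = [(m, w_m + w_l)] + [(x, w) for x, w in zip(xs, ws) if x > m]
    nxs, nws = map(list, zip(*kept))
    return weighted_median(nxs, nws)
\end{verbatim}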
\newpage
\item[9-3]{\textbf{\emph{Small order statistics}}\\
We showed that the worst-case number $T(n)$ of comparisons used by
\textsc{Select} to select the ith order statistic from $n$ numbers satisfies
$T(n) = \Theta(n)$, but the constant hidden by the $\Theta$-notation is rather
large. When $i$ is small relative to $n$, we can implement a different procedure
that uses \textsc{Select} as a subroutine but makes fewer comparisons in the
worst case.
\begin{enumerate}
\item[\textbf{a.}] Describe an algorithm that uses $U_i(n)$ comparisons to
find the $i$th smallest of $n$ elements, where
\[
U_i(n) =
\begin{cases}
T(n) & \text{if } i \ge n/2,\\
\floor{n/2} + U_i(\ceil{n/2}) + T(2i) & \text{otherwise}.
\end{cases}
\]
(\emph{Hint:} Begin with $\floor{n/2}$ disjoint pairwise comparisons, and
recurse on the set containing the smaller element from each pair.)
\item[\textbf{b.}] Show that, if $i < n/2$, then $U_i(n) = n + O(T(2i) \lg (n/i))$.
\item[\textbf{c.}] Show that if $i$ is a constant less than $n/2$, then $U_i(n) = n + O(\lg n)$.
\item[\textbf{d.}] Show that if $i = n/k$ for $k \ge 2$, then $U_i(n) = n + O(T(2n/k) \lg k)$.
\end{enumerate}
}
\begin{framed}
\begin{enumerate}
\item First, note that the \textsc{Select} algorithm finds the $i$th element by
partitioning the array. That is, when the $i$th element is found, the first $i$
elements are the $i$ smallest. However, when $n$ is too large with respect to
$i$, it performs more comparisons than necessary. Taking the hint that the
question gave us, we can reduce the number of comparisons when $n$ is too large
by running \textsc{Select} only when $n$ is smaller than or equal to $2i$.
The key insight to solve the question is to observe that if we first make
disjoint pairwise comparisons and then run \textsc{Select} only among the
smaller elements of each pair, the $i$th order statistic of the whole array is
among the $i$ smallest elements that were found by \textsc{Select} and their
larger counterparts on the right half of the array. This occurs because the
remaining elements on the left half are larger than at least $i$ elements and
their larger counterparts on the right half are even larger.
We can then use this notion to build a recursive algorithm that solves the
selection problem with fewer comparisons, using the \textsc{Select} algorithm
only when $n$ is small enough:
\begin{enumerate}
\item If $i \ge n/2$, run \textsc{Select} and return its result.
\item Otherwise, do the following:
\begin{enumerate}
\item Perform disjoint pairwise comparisons and rearrange the array such
that the smaller element of each pair appears on the left half of the
array, in the same order as their larger counterparts on the right half.
\item Recursively find the $i$th element among the elements on the left half
of the array.
\item The $i$th order statistic is among the first $i$ elements of the array
and their larger counterparts. Run \textsc{Select} on these $2i$ elements
and return the result.
\end{enumerate}
\end{enumerate}
\item Can be proved by substitution.
\item From the previous item, we have
\[
U_i(n) = n + O(T(2i) \lg(n/i)),
\]
which implies that, when $i$ is a constant less than $n/2$, we have
\begin{equation*}
\begin{aligned}
U_i(n) &= n + O(T(2i) \lg(n/i))\\
&= n + O(O(1) O(\lg n))\\
&= n + O(\lg n).
\end{aligned}
\end{equation*}
\item If $k > 2$, then $i < n/2$ and we can use the result of item (b):
\begin{equation*}
\begin{aligned}
U_i(n) &= n + O(T(2i)\lg(n/i))\\
      &= n + O(T(2n/k) \lg(n/(n/k)))\\
      &= n + O(T(2n/k) \lg(k)).
\end{aligned}
\end{equation*}
If $k = 2$, then $i = n/2$ and $\lg k = 1$. Thus, we have
\begin{equation*}
\begin{aligned}
U_i(n) &= T(n)\\
      &\le n + T(n)\\
      &= n + O(T(2n/k)) & \text{(since $k = 2$)}\\
      &= n + O(T(2n/k) \lg k). & \text{(since $\lg k = 1$)}
\end{aligned}
\end{equation*}
\end{enumerate}
\end{framed}
\newpage
\item[9-4]{\textbf{\emph{Alternative analysis of randomized selection}}\\
In this problem, we use indicator random variables to analyze the
\textsc{Randomized-Select} procedure in a manner akin to our analysis of
\textsc{Randomized-Quicksort} in Section 7.4.2.
As in the quicksort analysis, we assume that all elements are distinct, and we
rename the elements of the input array $A$ as $z_1, z_2, \dots, z_n$, where
$z_i$ is the $i$th smallest element. Thus, the call
\textsc{Randomized-Select}($A, 1, n, k$) returns $z_k$.
For $1 \le i < j \le n$, let $X_{ijk}$ = I\{$z_i$ is compared with $z_j$
sometime during the execution of the algorithm to find $z_k$\}.
\begin{enumerate}
\item[\textbf{a.}] Give an exact expression for $\text{E}[X_{ijk}]$.
(\emph{Hint:} Your expression may have different values, depending on the values
of $i, j,$ and $k$.)
\item[\textbf{b.}] Let $X_k$ denote the total number of comparisons between
elements of array $A$ when finding $z_k$. Show that
\[
\text{E}[X_k] \le 2 \left( \sum_{i = 1}^{k} \sum_{j = k}^{n} \frac{1}{j - i + 1} +
\sum_{j = k + 1}^{n} \frac{j - k - 1}{j - k + 1} +
\sum_{i = 1}^{k - 2} \frac{k - i - 1}{k - i + 1} \right).
\]
\item[\textbf{c.}] Show that $\text{E}[X_k] \le 4n$.
\item[\textbf{d.}] Conclude that, assuming all elements of array $A$ are
distinct, \textsc{Randomized-Select} runs in expected time $O(n)$.
\end{enumerate}
}
\begin{framed}
Skipped.
\end{framed}
\end{enumerate}
| {
"alphanum_fraction": 0.6657979811,
"avg_line_length": 36.2607655502,
"ext": "tex",
"hexsha": "b8a62cbcec031441bb19c1f3d1a691ef8d943ddb",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2020-03-12T04:51:51.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-03-12T04:51:51.000Z",
"max_forks_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "danielmoraes/clrs",
"max_forks_repo_path": "chapters/C9.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e",
"max_issues_repo_issues_event_max_datetime": "2021-01-31T20:41:48.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-01-31T20:41:48.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "danielmoraes/clrs",
"max_issues_repo_path": "chapters/C9.tex",
"max_line_length": 106,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "2eb6b5e0be2d6569bf19181dfd972209d97f0e0e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "danielmoraes/clrs",
"max_stars_repo_path": "chapters/C9.tex",
"max_stars_repo_stars_event_max_datetime": "2016-07-08T17:39:19.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-07-08T17:39:19.000Z",
"num_tokens": 10426,
"size": 30314
} |
\section{Using owoLED}
\subsection{Building \& running the examples}
\subsection{Options for using owoLED in your project}
\subsubsection{Vendoring}
\subsubsection{Binaries}
| {
"alphanum_fraction": 0.7966101695,
"avg_line_length": 19.6666666667,
"ext": "tex",
"hexsha": "2498b46508f4c0903f11b7f54092a3ccc374032f",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "11683f22419bc891aa7bdcd8ee10c07d15b9ee62",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "TypicalFence/owoLED",
"max_forks_repo_path": "docs/sections/2-using-owoled.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "11683f22419bc891aa7bdcd8ee10c07d15b9ee62",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "TypicalFence/owoLED",
"max_issues_repo_path": "docs/sections/2-using-owoled.tex",
"max_line_length": 53,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "11683f22419bc891aa7bdcd8ee10c07d15b9ee62",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "TypicalFence/owoLED",
"max_stars_repo_path": "docs/sections/2-using-owoled.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 46,
"size": 177
} |
\subsubsection{Mixed Cell Radionuclide Mass Balance Model}\label{sec:mixed_cell}
Slightly more complex, the Mixed Cell model incorporates the influence of
porosity, elemental solubility limits, and sorption in addition to the
degradation behavior of the Degradation Rate model. A graphical representation
of the discrete sub-volumes in the mixed cell model is given in Figure
\ref{fig:deg_sorb_volumes}.
\input{./nuclide_models/mass_balance/mixed_cell/deg_sorb_volumes}
After some time degrading, the total volume in the degraded region $(V_d)$ can be
expressed as in equation \eqref{deg_volumes}. Additionally, given a volumetric
porosity, $\theta$, the intact and degraded volumes can also be described in
terms of their constituent solid matrix $(V_{is} + V_{ds})$ and pore fluid
volumes $(V_{if} + V_{df})$,
\begin{align}
V_d(t_n) &= \mbox{ degraded volume at time }t_n [m^3]\nonumber\\
&= V_{df}(t_n) + V_{ds}(t_n)
\intertext{where}
V_{df}(t_n) &= \mbox{ degraded fluid volume at time }t_n[m^3]\nonumber\\
&= \theta V_d(t_n)\\
&= \theta d(t_n) V_T
\end{align}
\begin{align}
V_{ds}(t_n) &= \mbox{ degraded solid volume at time }t_n [m^3]\nonumber\\
&= (1-\theta) V_d(t_n)\\
&= (1-\theta) d(t_n) V_T
\end{align}
\begin{align}
V_i(t_n) &= \mbox{ intact volume at time }t_n [m^3]\nonumber\\
&= V_{if}(t_n) + V_{is}(t_n)
\end{align}
\begin{align}
V_{if}(t_n) &= \mbox{ intact fluid volume at time }t_n [m^3]\nonumber\\
&= \theta V_i(t_n)\\
&= \theta (1-d(t_n))V_T
\intertext{and}
V_{is}(t_n) &= \mbox{ intact solid volume at time }t_n [m^3]\nonumber\\
&= (1-\theta) V_i(t_n)\\
&= (1-\theta) (1-d(t_n))V_T.
\end{align}
This model distributes contaminant masses throughout each sub-volume of the
component. Contaminant
masses and concentrations can therefore be expressed with notation indicating
in which volume they reside, such that
\begin{align}
C_{df} &= \frac{m_{df}}{V_{df}} \label{c_df}\\
C_{ds} &= \frac{m_{ds}}{V_{ds}} \label{c_ds}\\
C_{if} &= \frac{m_{if}}{V_{if}} \label{c_if}\\
C_{is} &= \frac{m_{is}}{V_{is}}. \label{c_is}
\intertext{where}
df = \mbox{degraded fluid}\\
ds = \mbox{degraded solid}\\
if = \mbox{intact fluid}\\
is = \mbox{intact solid.}
\end{align}
The contaminant mass in the degraded fluid ($m_{df}$) is the contaminant mass that is
treated as ``available'' to adjacent components. That is, $m_{df}$ is the mass
vector $m_{ij}$ which has been released by component $i$ and can be transferred
to component $j$ in the following mass transfer phase.
\paragraph{Sorption}
The mass in all volumes exists in both sorbed and non-sorbed phases. The
relationship between the sorbed mass concentration in the solid phase (e.g. the
pore walls),
\begin{align}
s &=\frac{\mbox{ mass of sorbed contaminant} }{ \mbox{mass of total solid phase }}
\label{solid_conc}
\end{align}
and the dissolved liquid concentration,
\begin{align}
C &=\frac{\mbox{ mass of dissolved contaminant} }{ \mbox{volume of total liquid phase }}
\label{liquid_conc}
\end{align}
can be characterized by a sorption ``isotherm'' model. A sorption isotherm
describes the equilibrium relationship between the amount of material bound to
surfaces and the amount of material in the solution. The Mixed Cell mass
balance model uses a linear isotherm model.
With the linear isotherm model, the mass of contaminant sorbed onto the
solid phase, also referred to as the solid concentration, can be found
\cite{schwartz_fundamentals_2004}, according to the relationship
\begin{align}
s_p &= K_{dp} C_{p}
\label{linear_iso}
\intertext{where}
s_p &= \mbox{ the solid concentration of isotope p }[kg/kg]\nonumber\\
K_{dp} &= \mbox{ the distribution coefficient of isotope p}[m^3/kg]\nonumber\\
C_p &= \mbox{ the liquid concentration of isotope p }[kg/m^3].\nonumber
\end{align}
Thus, from \eqref{solid_conc},
\begin{align}
s_{dsp} &= K_{dp} C_{dfp}\nonumber\\
&= \frac{K_{dp}m_{dfp}}{V_{df}}\nonumber
\intertext{where}
s_{dsp} &= \mbox{ isotope p concentration in degraded solids } [kg/kg] \nonumber\\
C_{dfp} &= \mbox{ isotope p concentration in degraded fluids } [kg/m^3]. \nonumber
\end{align}
In this model, sorption is taken into account throughout the volume. In the
intact matrix, the contaminant mass is distributed between the pore walls and
the pore fluid by sorption. So too, contaminant mass released from the intact
matrix by degradation is distributed between dissolved mass in the free fluid
and sorbed mass in the degraded and precipitated solids. Note that this model is
agnostic to the mechanism of degradation. It simulates degradation purely from
a rate and release is accordingly congruent \cite{kawasaki_congruent_2004} with
that degradation.
To begin solving for the boundary conditions in this model, the amount of non-sorbed
contaminant mass in the degraded fluid volume must be found. Dropping the
isotope subscripts and beginning with equations \eqref{c_df} and \eqref{linear_iso},
\begin{align}
m_{df} &= C_{df}V_{df}
\intertext{and assuming the sorbed material is in the degraded solids}
m_{df} &= \frac{s_{ds}V_{df}}{K_d},\nonumber\\
\intertext{then applying the definition of $s_{ds}$ and $m_{ds}$}
m_{df} &= \frac{\frac{m_{ds}}{m_T}V_{df}}{K_d}\nonumber\\
&= \frac{(dm_T-m_{df})V_{df}}{K_dm_T}.\nonumber
\intertext{This can be rearranged to give}
m_{df} &= \frac{dV_{df}}{K_d}\frac{1}{\left( 1+ \frac{V_{df}}{K_dm_T}\right)}\nonumber\\
&= \frac{dV_{df}}{\left( K_d+ \frac{V_{df}}{m_T}\right)}.
\intertext{Finally, using the definition of $V_{df}$ in terms of total volume,}
m_{df} &= \frac{d^2 \theta V_T}{K_d + \frac{d \theta V_T}{m_T}}.
\label{sorption}
\end{align}
\paragraph{Solubility}
Dissolution of the contaminant into the
available fluid volume is constrained by the
elemental solubility limit.
The reduced mobility of radionuclides with lower
solubilities can be modeled \cite{hedin_integrated_2002} as a reduction in the
amount of solute available for transport, thus:
\begin{align}
m_{i}(t)&\le V(t)C_{sol, i}\label{sol_lim}
\intertext{where}
m_{i} &= \mbox{ mass of isotope i in volume }V [kg]\nonumber\\
V &= \mbox{ a distinct volume of fluid } [m^3]\nonumber\\
C_{sol, i} &= \mbox{ the maximum concentration of i } [kg\cdot m^{-3}].\nonumber
\end{align}
That is, the mass $m_{i}$ in kg of a radionuclide $i$ dissolved into the waste package
void volume $V_1$ in m$^3$, at a time t, is limited by the solubility limit,
the maximum concentration, $C_{sol}$ in kg/m$^3$ at which that radionuclide is
soluble \cite{hedin_integrated_2002}.
The final available mass is therefore the $m_{df}$ from equation
\eqref{sorption} constrained by:
\begin{align}
m_{df,i} &\leq V_{df} C_{sol,i}
\label{solubility}
\intertext{where}
m_{df,i} &= \mbox{ solubility limited mass of isotope i in volume }V_{df} [kg]\nonumber\\
C_{sol,i} &= \mbox{ the maximum dissolved concentration limit of i }[kg/m^3].\nonumber
\end{align}
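As a purely illustrative numerical sketch (not part of the model description above), the following Python function evaluates equation \eqref{sorption} and then applies the cap of equation \eqref{solubility}; the parameter values in the example call are made up.
\begin{verbatim}
def available_mass(d, theta, V_T, K_d, m_T, C_sol):
    # contaminant mass available in the degraded fluid [kg]
    V_df = d * theta * V_T                      # degraded fluid volume [m^3]
    m_df = d**2 * theta * V_T / (K_d + d * theta * V_T / m_T)  # eq. (sorption)
    return min(m_df, V_df * C_sol)              # eq. (solubility) cap

# hypothetical inputs: 10% degraded, 30% porosity, 1 m^3 total volume,
# K_d = 0.001 m^3/kg, 1 kg total contaminant, 0.01 kg/m^3 solubility limit
print(available_mass(d=0.1, theta=0.3, V_T=1.0, K_d=1e-3, m_T=1.0, C_sol=1e-2))
\end{verbatim}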
| {
"alphanum_fraction": 0.711445612,
"avg_line_length": 41.6686390533,
"ext": "tex",
"hexsha": "5d20749e2164f0adfe854c19b8fd4f0057d377ab",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "katyhuff/2017-huff-rapid",
"max_forks_repo_path": "nuclide_models/mass_balance/mixed_cell/mixed_cell.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "katyhuff/2017-huff-rapid",
"max_issues_repo_path": "nuclide_models/mass_balance/mixed_cell/mixed_cell.tex",
"max_line_length": 95,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "cfb06a9a2e744914e7f3d088014db7a71a68c39d",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "katyhuff/2017-huff-rapid",
"max_stars_repo_path": "nuclide_models/mass_balance/mixed_cell/mixed_cell.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2212,
"size": 7042
} |
\documentclass[11pt]{article}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{enumerate}
\usepackage[a4paper, landscape, margin=0.5cm]{geometry}
\usepackage{multicol}
\newcommand{\rulewidth}{0.4pt}
\setlength{\columnseprule}{\rulewidth}
\usepackage{titlesec}
\newcommand{\rulefilled}[1]{#1~\hrulefill}
\newcommand{\dotfilled}[1]{#1{\tiny\dotfill}}
\titleformat{\section}{\bfseries}{\thesection}{1em}{\rulefilled}
\titleformat{\subsection}{\bfseries}{\thesubsection}{1em}{\dotfilled}
\titleformat*{\subsubsection}{\bfseries}
\titleformat*{\paragraph}{\bfseries}
\titleformat*{\subparagraph}{\bfseries}
\titlespacing*{\section}{0pt}{0pt}{0pt}
\titlespacing*{\subsection}{0pt}{0pt}{0pt}
\titlespacing*{\paragraph}{0pt}{0pt}{1em}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt}
\usepackage{amsmath, amssymb}
\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator{\loss}{\mathcal{L}}
\DeclareMathOperator{\obj}{obj}
\DeclareMathOperator{\class}{class}
\DeclareMathOperator*{\E}{\mathbb{E}}
\let\P\relax
\DeclareMathOperator*{\P}{\mathbb{P}}
\newcommand{\R}{\ensuremath{\mathbb{R}}}
\DeclareMathOperator*{\B}{\ensuremath{\mathbb{B}}}
\newcommand{\Sharp}[1]{\ensuremath{#1^{\sharp}}}
\usepackage[noEnd=true, commentColor=black]{algpseudocodex}
\usepackage{enumitem}
\setlist[enumerate]{nosep, leftmargin=1.2em}
\setlist[itemize]{nosep, leftmargin=0.6em, labelsep=0pt}
\usepackage{booktabs, multirow, bigdelim}
\title{RIAI Summary HS 2020}
\author{Christian Knabenhans}
\date{January 2021}
\begin{document}
\pagestyle{empty}
\begin{multicols*}{3}
\section*{Adversarial Attacks}
\input{adversarial-attacks}
\section*{Adversarial Defenses}
\input{adversarial-defenses}
\section*{Certification}
\input{certification}
\section*{Certification (Complete Methods)}
\input{certification-with-complete-methods}
\section*{Zonotope}
\input{zonotope}
\section*{DeepPoly}
\input{deeppoly}
\section*{Abstract Interpretation}
\input{abstract-interpretation}
\section*{Certified Defenses }
\input{certified-defenses}
\section*{Certified Robustness to Geometric Trafo}
\input{geometric-transformations}
\section*{Visualization}
\input{visualization}
\section*{Logic \& Deep Learning}
\input{logic}
\section*{Randomized Smoothing}
\input{randomized-smoothing}
\section*{General}
\input{general}
\end{multicols*}
\end{document}
| {
"alphanum_fraction": 0.7696058513,
"avg_line_length": 28.6162790698,
"ext": "tex",
"hexsha": "0bb621165df6874b8316bf1658b4ea0d36e3bac0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "cknabs/RIAI-summary-HS2020",
"max_forks_repo_path": "main.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf",
"max_issues_repo_issues_event_max_datetime": "2021-01-25T10:50:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-01-25T09:29:16.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "cknabs/RIAI-summary-HS2020",
"max_issues_repo_path": "main.tex",
"max_line_length": 69,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "42a1ee3cc2e51c52188f842c78923792bd0f3edf",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "cknabs/RIAI-summary-HS2020",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-24T20:28:56.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-20T21:27:13.000Z",
"num_tokens": 805,
"size": 2461
} |
%% SECTION HEADER /////////////////////////////////////////////////////////////////////////////////////
\section{Subsection}
\label{sec31}
%% SECTION CONTENT ////////////////////////////////////////////////////////////////////////////////////
\lipsum[1]
%% SUBSECTION HEADER //////////////////////////////////////////////////////////////////////////////////
\subsection{Subsubsection}
\label{sec311}
\lipsum[1]
%% SUBSECTION HEADER //////////////////////////////////////////////////////////////////////////////////
\subsection{Subsubsection}
\label{sec312}
\lipsum[1]
%% SUBSECTION HEADER //////////////////////////////////////////////////////////////////////////////////
\subsection{Subsubsection}
\label{sec313}
\lipsum[1]
| {
"alphanum_fraction": 0.3165075034,
"avg_line_length": 28.1923076923,
"ext": "tex",
"hexsha": "4fcdc0d4dc11dd6b70638cfce4c8b1eb1cfd2118",
"lang": "TeX",
"max_forks_count": 4,
"max_forks_repo_forks_event_max_datetime": "2020-09-22T10:10:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-09-11T05:12:18.000Z",
"max_forks_repo_head_hexsha": "9e8255d5406211b07253fca29788a3557860edc0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "SeaShadow/LaTeX-AMC-PhD-Thesis-Template",
"max_forks_repo_path": "Chapters/Chapter3/sect31.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9e8255d5406211b07253fca29788a3557860edc0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "SeaShadow/LaTeX-AMC-PhD-Thesis-Template",
"max_issues_repo_path": "Chapters/Chapter3/sect31.tex",
"max_line_length": 103,
"max_stars_count": 3,
"max_stars_repo_head_hexsha": "9e8255d5406211b07253fca29788a3557860edc0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SeaShadow/LaTeX-AMC-PhD-Thesis-Template",
"max_stars_repo_path": "Chapters/Chapter3/sect31.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-16T10:40:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-09-05T01:29:35.000Z",
"num_tokens": 102,
"size": 733
} |
\subsection{Introduction}
Causality vs.\ correlation: a model that captures only correlation may have poor out-of-sample performance.
On causality: note the difference between ``disease causes symptom'' and ``symptom causes disease''.
Linear models can be rearranged to put any variable on the left-hand side, so the direction of a regression alone does not establish causality.
| {
"alphanum_fraction": 0.8204225352,
"avg_line_length": 28.4,
"ext": "tex",
"hexsha": "82f31029f15f8137fcb81fc3aa0da22d855e3b8b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/olsInference/07-01-introduction.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/olsInference/07-01-introduction.tex",
"max_line_length": 94,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/olsInference/07-01-introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 53,
"size": 284
} |
\documentclass[10pt,aspectratio=169]{beamer}
\usetheme[progressbar=frametitle,sectionpage=none,background=light]{metropolis}
%%––––––––––––––––––––––––––––––––––––––––––––––––
% Define styles
%%––––––––––––––––––––––––––––––––––––––––––––––––
%%––––––––––––––––––––––––––––––––––––––––––––––––
% Setting up colors
\definecolor{logoblue1}{RGB}{35, 121, 181}
\definecolor{logoblue2}{RGB}{88, 145, 202}
\definecolor{darkblue}{RGB}{25, 41, 54}
\definecolor{lightgrey}{RGB}{134, 143, 161}
\definecolor{greytext}{RGB}{102, 118, 128}
\definecolor{darktext}{RGB}{29, 43, 52}
\definecolor{green}{RGB}{0, 184, 44}
\definecolor{vividblue}{RGB}{15, 117, 183}
\definecolor{orange}{RGB}{246, 177, 70}
\definecolor{lightblue}{RGB}{244, 247, 251}
\definecolor{white}{RGB}{255, 255, 255}
\definecolor{red}{RGB}{183, 25, 29}
\setbeamercolor{frametitle}{bg=darkblue, fg=lightblue}
\setbeamercolor{background canvas}{bg=black}
\setbeamercolor{normal text}{fg=lightblue}
%%––––––––––––––––––––––––––––––––––––––––––––––––
%%––––––––––––––––––––––––––––––––––––––––––––––––
% Setting up fonts
\usepackage{lato}
\usepackage{roboto}
\usepackage{montserrat}
\setbeamerfont{frametitle}{family=\flafamily, size*={18}{18}}
% \setbeamerfont{footline}{family=\fontfamily{montserrat}}
% \setbeamerfont{normal text}{family=\roboto, size*={16}{18}}
% Setting up fonts for bibliography style
\setbeamerfont{bibliography entry author}{size=\small}
\setbeamerfont{bibliography entry title}{size=\small}
\setbeamerfont{bibliography entry location}{size=\small}
\setbeamerfont{bibliography entry note}{size=\small}
\setbeamerfont{bibliography item}{size=\small}
%%––––––––––––––––––––––––––––––––––––––––––––––––
\usepackage{appendixnumberbeamer}
\usepackage{booktabs}
\usepackage[scale=2]{ccicons}
\usepackage{pgfplots}
\usepgfplotslibrary{dateplot}
\usepackage{xspace}
\newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace}
\usepackage{hyperref}
\hypersetup{
colorlinks,
urlcolor=vividblue,
citecolor=lightblue,
linkcolor=lightblue
}
\usepackage{graphicx}
\graphicspath{ {../imgs/} }
\usepackage{caption}
\title{\textcolor{orange}{Graph Embeddings}}
% \subtitle{A modern beamer theme}
\date{\today}
\author{Tim Semenov\\ \textcolor{vividblue}{[email protected]}}
% \institute{Center for modern beamer themes}
%\titlegraphic{\hfill\includegraphics[height=1.5cm]{logo.png}}
\begin{document}
\maketitle
\setbeamertemplate{frame footer}{\large\textcolor{logoblue1}{source}\textcolor{logoblue2}{\{d\}}}
\begin{frame}{Table of contents}
\setbeamertemplate{section in toc}[sections numbered]
\tableofcontents[hideallsubsections]
\end{frame}
%%––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
\section{Introduction}
{
\usebackgroundtemplate{\includegraphics[width=\paperwidth]{img.png}}
\begin{frame}[fragile]{Introduction}
\end{frame}
}
\begin{frame}[fragile]{Notations}
% A graph $G(V,E)$ is a collection of $V=\{v_1,\ldots,v_n\}$ vertices (a.k.a. nodes) and
% $E=\{e_{ij}\}^n_{i,j=1}$ edges. The adjacency matrix $S$ of graph $G$ contains non-negative
% weights associated with each edge: $s_{ij}\geq0$. If $v_i$ and $v_j$ are not connected
% to each other, then $s_{ij}=0$. For undirected weighted graphs, $s_{ij}=s_{ji} \hspace{1mm}
% \forall \hspace{1mm} i,j \in [n]$.
\begin{tabular}{ l l }
Adjacency matrix: & $A$ \\
Degree matrix: & $D=diag(\sum_j A_{ij})$ \\
Transition matrix: & $M=D^{-1}A$\\
Node neighborhood: & $N(i)$\\[4mm]
\pause
First-order proximity: & edges of adjacency matrix.\\
Second-order proximity: & similarity between nodes' neighborhoods.
\end{tabular}
\end{frame}
\begin{frame}[fragile]{Notations}
Pointwise mutual information: $$PMI(x,y)=\log \frac{P(x,y)}{P(x)P(y)}$$
Positive PMI: $$PPMI(x,y)=\max (PMI(x,y),0)$$
Shifted PPMI: $$SPPMI_k(x,y)=\max (PMI(x,y)-\log k,0)$$
\end{frame}
%%––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
\section{Graph embeddings}
\begin{frame}[fragile]{Graph embedding methods}
Taxonomy \cite{goyal2017graph}:
\begin{enumerate}
\item Factorization based
\item Random Walk based
\item Deep Learning based
\item Other
\end{enumerate}
\end{frame}
\begin{frame}[fragile]{Factorization}
\begin{enumerate}
\item HOPE \cite{ou2016asymmetric}
\begin{itemize}
\item $\displaystyle \min ||S-U^S{U^t}^T||_F^2$\\[1mm]
\item $\displaystyle U=[U^S,U^t]$ -- the embedding matrix.\\[1mm]
\item $S$ -- proximity matrix:
\begin{itemize}
\item[$\diamond$] $S=A^2 \;$ -- common neighbors.
\item[$\diamond$] $S=ADA \;$ -- Adamic-Adar.
\item[$\diamond$] \ldots
\end{itemize}
\end{itemize}
\item GraRep \cite{Cao:2015:GLG:2806416.2806512}
\begin{itemize}
\item $X_k=SPPMI_{\beta}(M^k), \, k=1,\ldots,K$\\[1mm]
\item $[U_k,\Sigma_k,V_k^T]=SVD(X_k)$\\[1mm]
\item $W_k=U_{k,d}(\Sigma_{k,d})^{\frac{1}{2}}$
\end{itemize}
\end{enumerate}
\end{frame}
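%% --- Added illustrative frame (not part of the original deck) ---
\begin{frame}[fragile]{Factorization: GraRep-style sketch (illustration)}
  A minimal NumPy sketch of the SPPMI + SVD steps above; the code and names
  are ours, for illustration only (a truncated or sparse SVD would be used in practice).
\begin{verbatim}
import numpy as np

def sppmi(C, k=1.0):
    # shifted positive PMI of a nonnegative co-occurrence matrix C
    total = C.sum()
    row = C.sum(axis=1, keepdims=True)
    col = C.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * total / (row * col))
    out = np.maximum(pmi - np.log(k), 0.0)
    out[C == 0] = 0.0        # PMI is -inf for unseen pairs
    return out

def embed(X, d):
    # W = U_d diag(sigma_d)^(1/2)
    U, s, _ = np.linalg.svd(X)
    return U[:, :d] * np.sqrt(s[:d])
\end{verbatim}
\end{frame}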
\begin{frame}[fragile]{Random Walk: node2vec}
Optimization problem \cite{grover2016node2vec}: $$\max \sum_{v_i \in V} \log P(N(i)|v_i)$$
Conditional independence: $$P(N(i)|v_i)=\prod_{j\in N(i)} P(v_j|v_i), \quad
P(v_j|v_i)=\frac{\exp(u_j^{T}u_i)}{\sum_{k=1}^{|V|} \exp(u_k^{T}u_i)}$$
\pause
    Simplified optimization problem: $$\max \sum_{v_i \in V} \Big[ -\log{Z_{v_i}} +
    \sum_{j \in N(i)} u_j^Tu_i \Big], \quad Z_{v_i}=\sum_{v_j \in V} \exp(u_j^Tu_i)$$
\end{frame}
\begin{frame}[fragile]{Random Walk: node2vec}
Let $c_i$ denote the \textit{i}-th node in the random walk: $$P(c_i=x|c_{i-1}=v)=
\begin{cases}
\frac{\pi_{vx}}{Z} & \text{if } (v,x) \in E \\
0 & \text{otherwise}
\end{cases}$$
Assume the walk just traversed edge $(t,v)$:
$$\pi_{vx}=\alpha_{pq}(t,x)\cdot w_{vx}, \quad \alpha_{pq}(t,x)=
\begin{cases}
\frac{1}{p} & \text{if } d_{tx}=0\\
1 & \text{if } d_{tx}=1\\
\frac{1}{q} & \text{if } d_{tx}=2\\
\end{cases}$$
\end{frame}
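%% --- Added illustrative frame (not part of the original deck) ---
\begin{frame}[fragile]{Random Walk: node2vec (illustrative sketch)}
  A minimal Python sketch of the biased transition step above; the code and
  names are ours, not from the node2vec paper.
\begin{verbatim}
import random

def next_node(graph, prev, cur, p=1.0, q=1.0):
    # graph[v]: dict neighbor -> edge weight; the walk just came
    # from prev = t to cur = v, so alpha_pq(t, x) depends on d_tx
    nbrs = list(graph[cur])
    weights = []
    for x in nbrs:
        w = graph[cur][x]
        if x == prev:              # d_tx = 0
            weights.append(w / p)
        elif x in graph[prev]:     # d_tx = 1
            weights.append(w)
        else:                      # d_tx = 2
            weights.append(w / q)
    return random.choices(nbrs, weights=weights, k=1)[0]
\end{verbatim}
\end{frame}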
\begin{frame}[fragile]{Other: LINE}
Joint probability between vertices \cite{2015arXiv150303578T}:
$$P(v_i,v_j)=\frac{1}{1+\exp(-u_i^Tu_j)}, \quad \hat{P}(v_i,v_j)=\frac{w_{ij}}{W}$$
Conditional distribution of the contexts:
    $$P(v_j|v_i)=\frac{\exp(u_j^{'T}u_i)}{\sum_{k=1}^{|V|} \exp(u_k^{'T}u_i)}, \quad \hat{P}(v_j|v_i)=\frac{w_{ij}}{d_i}$$
\pause
First-order proximity loss: $$O_1=-\sum_{(i,j)\in E} w_{ij} \log p_1(v_i,v_j)$$\\[2mm]
Second-order proximity loss: $$O_2=-\sum_{(i,j)\in E} w_{ij} \log p_2(v_j|v_i)$$
\end{frame}
\begin{frame}[fragile]{Other: LINE}
Use negative sampling to optimize $O_2$:
$$\log \sigma(u_j^{'T}u_i)+\sum_{i=1}^K E_{v_n \sim P_n(v)}[\log \sigma(-u_n^{'T}u_i)]$$
    $O_1$ has a trivial minimum, so we modify it to utilize negative sampling:
$$\log \sigma(u_j^Tu_i)+\sum_{i=1}^K E_{v_n \sim P_n(v)}[\log \sigma(-u_n^{'T}u_i)]$$
\pause
    For low-degree vertices, we add edges to their second-order neighbors with weight:
$$w_{ij}=\sum_{k \in N(i)} w_{ik} \frac{w_{kj}}{d_k}$$
\end{frame}
\begin{frame}[fragile]{Deep Learning: SDNE}
Encoder-decoder model \cite{wang2016structural}:
$$y_i^{(k)}=\sigma(W^{(k)}y_i^{(k-1)}+b^{(k)}), \; k=1,\ldots,K, \; y_i^{(0)}=x_i$$
$$\hat{y}_i^{(k)}=\sigma(\hat{W}^{(k)}\hat{y}_i^{(k-1)}+\hat{b}^{(k)}), \; k=1,\ldots,K,
\; \hat{y}_i^{(0)}=y_i^{(K)}, \; \hat{y}_i^{(K)}=\hat{x}_i$$
\pause
Loss functions:
$$L_1=\sum_{i,j=1}^n a_{i,j} ||y_i^{(k)}-y_j^{(k)}||_2^2$$
\pause
$$L_2=\sum_{i=1}^n ||(\hat{x}_i-x_i) \odot b_i||_2^2, \quad b_{i,j}=
\begin{cases}
        1 & \text{if } a_{i,j}=0\\
        \beta > 1 & \text{otherwise}
\end{cases}$$
\pause
$$L=L_2+\alpha L_1 + \nu L_{reg}, \quad L_{reg}=\frac{1}{2}\sum_{k=1}^K(||W^{(k)}||^2_F+||\hat{W}^{(k)}||^2_F)$$
\end{frame}
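%% --- Added illustrative frame (not part of the original deck) ---
\begin{frame}[fragile]{Deep Learning: SDNE loss (illustrative sketch)}
  A NumPy sketch of the combined loss above; code and names are ours,
  for illustration only ($X$ holds the rows of the adjacency matrix).
\begin{verbatim}
import numpy as np

def sdne_loss(X, X_hat, Y, A, alpha, beta, nu, Ws):
    # Ws: list of all encoder/decoder weight matrices
    B = np.where(A == 0, 1.0, beta)               # penalty weights b_ij
    L2 = np.sum(((X_hat - X) * B) ** 2)           # reconstruction loss
    sq = np.sum(Y ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * Y @ Y.T  # ||y_i - y_j||^2
    L1 = np.sum(A * D2)                           # first-order loss
    Lreg = 0.5 * sum(np.sum(W ** 2) for W in Ws)  # weight decay
    return L2 + alpha * L1 + nu * Lreg
\end{verbatim}
\end{frame}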
%%––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
\section{Experiment results}
\begin{frame}[fragile]{Datasets}
\begin{itemize}
\item \textcolor{orange}{Blogcatalog, Flickr and Youtube}\\
Social networks of online users. Each user is labelled by at least one category.
\item \textcolor{orange}{Arxiv GR-QC}\\
Paper collaboration network which covers papers in the field of General Relativity
and Quantum Cosmology from arXiv.
\item \textcolor{orange}{20-Newsgroup}\\
Tf-idf vectors of each word are used to represent documents. The documents are
connected based on their cosine similarity.
\end{itemize}
\end{frame}
\begin{frame}[fragile]{Datasets}
\begin{table}
\begin{tabular}{ccc}
Dataset & |V| & |E|\\
\textcolor{orange}{Blogcatalog} & 10312 & 667966\\
\textcolor{orange}{Flickr} & 80513 & 11799764\\
\textcolor{orange}{Youtube} & 1138499 & 5980886\\
\textcolor{orange}{Arxiv GR-QC} & 5242 & 28980\\
      \textcolor{orange}{20-Newsgroup} & 1720 & Fully connected
\end{tabular}
\end{table}
\end{frame}
\begin{frame}[fragile]{Experiment: reconstruction task}
\begin{table}
\caption*{Arxiv GR-QC \cite{wang2016structural}}
\begin{tabular}{ccccc}
\hfill & SDNE & GraRep & LINE & DeepWalk\\
MAP & \textcolor{orange}{0.836} & 0.05 & 0.69 & 0.58
\end{tabular}
\end{table}
\begin{table}
\caption*{Blogcatalog \cite{wang2016structural}}
\begin{tabular}{ccccc}
\hfill & SDNE & GraRep & LINE & DeepWalk\\
MAP & \textcolor{orange}{0.63} & 0.42 & 0.58 & 0.28
\end{tabular}
\end{table}
\end{frame}
\begin{frame}[fragile]{Experiment: link prediction}
\begin{table}
\caption*{Arxiv GR-QC \cite{wang2016structural}}
\begin{tabular}{cccccc}
Algorithm & P@2 & P@10 & P@100 & P@200 & P@300\\
SDNE & \textcolor{orange}{1} & \textcolor{orange}{1} & \textcolor{orange}{1} & \textcolor{orange}{1} & \textcolor{orange}{1}\\
LINE & 1 & 1 & 1 & 1 & 0.99\\
DeepWalk & 1 & 0.8 & 0.6 & 0.555 & 0.443\\
GraRep & 1 & 0.2 & 0.04 & 0.035 & 0.033
\end{tabular}
\end{table}
\end{frame}
%%––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
\section{Conclusion}
\begin{frame}{Summary}
\begin{itemize}
\pause
\item Use matrix factorization when you can.
\pause
\item Use random walk when matrix size is too large.
\pause
\item Use deeper models on top of rough embeddings.
\end{itemize}
\vspace{2mm}
\pause
Follow us on github:
\begin{center}\url{https://github.com/src-d/role2vec}\end{center}
\end{frame}
\appendix
\begin{frame}[allowframebreaks]{References}
\setbeamertemplate{bibliography item}{\insertbiblabel}
\bibliography{references.bib}
\bibliographystyle{abbrv}
\end{frame}
\end{document}
| {
"alphanum_fraction": 0.6083519345,
"avg_line_length": 34.4615384615,
"ext": "tex",
"hexsha": "57027356cd919003b266ff06fc484752bc4cbf1f",
"lang": "TeX",
"max_forks_count": 5,
"max_forks_repo_forks_event_max_datetime": "2021-01-28T11:46:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-07-05T15:54:15.000Z",
"max_forks_repo_head_hexsha": "2d07e95aa2f3877f4b4e286b5d6378705c642eb9",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "IgnasA/role2vec",
"max_forks_repo_path": "presentations/2017-07-12 (papers overview)/src/presentation.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "2d07e95aa2f3877f4b4e286b5d6378705c642eb9",
"max_issues_repo_issues_event_max_datetime": "2021-02-24T05:14:14.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-24T05:14:14.000Z",
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "IgnasA/role2vec",
"max_issues_repo_path": "presentations/2017-07-12 (papers overview)/src/presentation.tex",
"max_line_length": 132,
"max_stars_count": 12,
"max_stars_repo_head_hexsha": "2d07e95aa2f3877f4b4e286b5d6378705c642eb9",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "IgnasA/role2vec",
"max_stars_repo_path": "presentations/2017-07-12 (papers overview)/src/presentation.tex",
"max_stars_repo_stars_event_max_datetime": "2019-03-16T22:23:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-12T17:42:14.000Z",
"num_tokens": 4102,
"size": 10752
} |
\section{An Annotation Framework with Assistance}
We constructed an annotation environment that supports the manual task of annotating data by using \ac{ML} to assist the annotator. We explain its mechanics using the example of creating training data for \ac{NER} (Section~\ref{sec:conceptionNERatHeart}). However, the annotation environment is modular and thus capable of creating training data for other applications as well. We discuss the architectural key points of the annotation framework (Section~\ref{sec:implementationWSA}) and, as a proof of concept, we demonstrate an \ac{ML}-based assistance system that uses an \ac{MEC} to identify named entities (Section~\ref{sec:nerWithNLTK}). Finally, we explain the mathematical background of the assistance system we implemented (Section~\ref{sec:algorithmicFoundationMEC}).
Our annotation framework \ac{DALPHI} was designed to assist the human annotator with pre-annotations and \ac{AL} support, and it offers an iterative work cycle. A software framework that aims to improve the task of training data generation has several different requirements to meet, and it is hard to understand the design decisions behind the assistance system component without considering the design decisions for the whole annotation framework. Therefore, the following sections focus on the assistance system but also provide insights into the design process of the \ac{DALPHI} framework as a whole.
\input{2-1-conception.tex}
\input{2-2-implementation.tex}
| {
"alphanum_fraction": 0.8134576949,
"avg_line_length": 187.625,
"ext": "tex",
"hexsha": "ba1445249cf56f975e0c23ca5fa64ee0aca7c902",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "60dbc03ce40e3ec42f2538d67a6aabfea6fbbfc8",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "RGreinacher/bachelor-thesis",
"max_forks_repo_path": "Thesis/src/2-assistance-system.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "60dbc03ce40e3ec42f2538d67a6aabfea6fbbfc8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "RGreinacher/bachelor-thesis",
"max_issues_repo_path": "Thesis/src/2-assistance-system.tex",
"max_line_length": 760,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "60dbc03ce40e3ec42f2538d67a6aabfea6fbbfc8",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "RGreinacher/bachelor-thesis",
"max_stars_repo_path": "Thesis/src/2-assistance-system.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-13T10:00:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-04-13T10:00:46.000Z",
"num_tokens": 319,
"size": 1501
} |
% Modified based on Xiaoming Sun's template
\documentclass{article}
\usepackage{amsmath,amsfonts,amsthm,amssymb}
\usepackage{setspace}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{extramarks}
\usepackage{chngpage}
\usepackage{soul,color}
\usepackage{graphicx,float,wrapfig}
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=blue,
filecolor=magenta,
urlcolor=cyan,
}
\newcommand{\Class}{Mathematics for Computer Science}
% Homework Specific Information. Change it to your own
\newcommand{\Title}{Homework 2}
% In case you need to adjust margins:
\topmargin=-0.45in %
\evensidemargin=0in %
\oddsidemargin=0in %
\textwidth=6.5in %
\textheight=9.0in %
\headsep=0.25in %
% Setup the header and footer
\pagestyle{fancy} %
\chead{\Title} %
\rhead{\firstxmark} %
\lfoot{\lastxmark} %
\cfoot{} %
\rfoot{Page\ \thepage\ of\ \protect\pageref{LastPage}} %
\renewcommand\headrulewidth{0.4pt} %
\renewcommand\footrulewidth{0.4pt} %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Some tools
\newcommand{\enterProblemHeader}[1]{\nobreak\extramarks{#1}{#1 continued on next page\ldots}\nobreak%
\nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak}%
\newcommand{\exitProblemHeader}[1]{\nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak%
\nobreak\extramarks{#1}{}\nobreak}%
\newcommand{\homeworkProblemName}{}%
\newcounter{homeworkProblemCounter}%
\newenvironment{homeworkProblem}[1][Problem \arabic{homeworkProblemCounter}]%
{\stepcounter{homeworkProblemCounter}%
\renewcommand{\homeworkProblemName}{#1}%
\section*{\homeworkProblemName}%
\enterProblemHeader{\homeworkProblemName}}%
{\exitProblemHeader{\homeworkProblemName}}%
\newcommand{\homeworkSectionName}{}%
\newlength{\homeworkSectionLabelLength}{}%
\newenvironment{homeworkSection}[1]%
{% We put this space here to make sure we're not connected to the above.
\renewcommand{\homeworkSectionName}{#1}%
\settowidth{\homeworkSectionLabelLength}{\homeworkSectionName}%
\addtolength{\homeworkSectionLabelLength}{0.25in}%
\changetext{}{-\homeworkSectionLabelLength}{}{}{}%
\subsection*{\homeworkSectionName}%
\enterProblemHeader{\homeworkProblemName\ [\homeworkSectionName]}}%
{\enterProblemHeader{\homeworkProblemName}%
% We put the blank space above in order to make sure this margin
% change doesn't happen too soon.
\changetext{}{+\homeworkSectionLabelLength}{}{}{}}%
\newcommand{\Answer}{\ \\\textbf{Answer:} }
\newcommand{\Acknowledgement}[1]{\ \\{\bf Acknowledgement:} #1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Make title
\title{\textmd{\bf \Class: \Title}}
\date{March 8, 2019}
\author{Xingyu Su 2015010697}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{spacing}{1.1}
\maketitle \thispagestyle{empty}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Begin edit from here
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 3.8.8
\begin{homeworkProblem}[LPV 3.8.8]
Prove the following identities:
\begin{align*}
\sum_{k=0}^{m}{(-1)^k\binom{n}{k}} = (-1)^m\binom{n-1}{m};\\
\sum_{k=0}^n\binom{n}{k}\binom{k}{m}=\binom{n}{m}2^{n-m}.
\end{align*}
\Answer
From Pascal's Triangle, we have
\begin{equation*}
\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}
\end{equation*}
\hspace{1em}
So we have:
\begin{align*}
\sum_{k=0}^{m}{(-1)^k\binom{n}{k}}
  &=\binom{n}{0}-\binom{n}{1}+\binom{n}{2}-\binom{n}{3}+\cdots+(-1)^m\binom{n}{m} \\
&=\binom{n-1}{0}-\left[\binom{n-1}{0}+\binom{n-1}{1}\right]+\left[\binom{n-1}{1}+\binom{n-1}{2}\right]-\cdots+(-1)^m\left[\binom{n-1}{m-1}+\binom{n-1}{m}\right] \\
&=(-1)^m\binom{n-1}{m}
\end{align*}
\hspace{1em}
For the second identity, assume $m$ is a non-negative integer. Then $\binom{k}{m}=0$ whenever $k<m$, so:
\begin{equation*}
\sum_{k=0}^n\binom{n}{k}\binom{k}{m} = \sum_{k=m}^n\binom{n}{k}\binom{k}{m}
\end{equation*}
\hspace{1em}
Easy to find:
\begin{align*}
\binom{n}{k}\binom{k}{m}
&= \frac{n!}{k!(n-k)!}\frac{k!}{m!(k-m)!} \\
&= \frac{n!}{(n-k)!(k-m)!m!} \\
&= \frac{n!}{(n-m)!m!}\frac{(n-m)!}{(n-k)!(k-m)!} \\
&= \binom{n-m}{k-m}\binom{n}{m}
\end{align*}
\hspace{1em}
So:
\begin{equation*}
\sum_{k=0}^n\binom{n}{k}\binom{k}{m}=\binom{n}{m}\sum_{k=m}^n\binom{n-m}{k-m} = \binom{n}{m}\sum_{i=0}^{n-m}\binom{n-m}{i}=\binom{n}{m}2^{n-m}
\end{equation*}
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 3.8.12
\begin{homeworkProblem}[LPV 3.8.12]
Prove that
\begin{equation*}
1+\binom{n}{1}2+\binom{n}{2}4+\cdots+\binom{n}{n-1}2^{n-1}+\binom{n}{n}2^n=3^n.
\end{equation*}
Try to find a combinatorial proof.
\Answer
For $n=1$ we obviously have $1+\binom{1}{1}2=3=3^1$, so the equation is true. Suppose the equation is true for $n=k-1$:
\begin{equation*}
1+\binom{k-1}{1}2+\binom{k-1}{2}4+\cdots+\binom{k-1}{k-1}2^{k-1} = 3^{k-1}
\end{equation*}
\hspace{1em}
Then for $n=k$, we have:
\begin{align*}
  & 1+\binom{k}{1}2+\binom{k}{2}4+\cdots+\binom{k}{k-1}2^{k-1}+\binom{k}{k}2^{k} \\
& = 1 + \left[\binom{k-1}{0}+\binom{k-1}{1}\right]2+\left[\binom{k-1}{1}+\binom{k-1}{2}\right]4+\cdots+\left[\binom{k-1}{k-1}+\binom{k-1}{k}\right]2^k \\
& = 1+\binom{k-1}{1}2+\binom{k-1}{2}4+\cdots+\binom{k-1}{k-1}2^{k-1} + 2 +\binom{k-1}{1}4+\cdots+\binom{k-1}{k-1}2^{k} \\
& = 3^{k-1}+2\times 3^{k-1} \\
& = 3^k
\end{align*}
\hspace{1em}
So the equation is true for all $n=1,2,3,...$
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 3.8.14
\begin{homeworkProblem}[LPV 3.8.14]
Let $n$ be a positive integer divisible by 3. Use Stirling’s formula to find the approximate value of $\binom{n}{n/3}$.
\Answer
We know that:
\begin{equation*}
\binom{n}{n/3}=\frac{n!}{(n/3)!(2n/3)!}
\end{equation*}
\hspace{1em}
By Stirling's formula, we have:
\begin{equation*}
n!\ \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n, (n/3)!\ \sim \sqrt{2\pi n/3}\left(\frac{n/3}{e}\right)^{n/3}, (2n/3)!\ \sim \sqrt{4\pi n/3}\left(\frac{2n/3}{e}\right)^{2n/3},
\end{equation*}
\hspace{1em}
And so:
\begin{equation*}
\binom{n}{n/3}
=\frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n}{\sqrt{2\pi n/3}\left(\frac{n/3}{e}\right)^{n/3} \sqrt{4\pi n/3}\left(\frac{2n/3}{e}\right)^{2n/3}}
=\frac{1}{2/3\sqrt{\pi n}\left(\frac{1}{3}\right)^n 2^{2n/3}}
=\frac{3}{2}\sqrt{\frac{1}{\pi n}} 3^n 2^{-\frac{2}{3}n}
\end{equation*}
\hspace{1em}
The comparison between the real value and its approximation is shown in Figure~\ref{fig:compare}.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\linewidth]{figures/compare}
	\caption{Comparison between the approximation and the real value as the total number $n$ varies from 3 to 99.}
\label{fig:compare}
\end{figure}
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 4.3.7
\begin{homeworkProblem}[LPV 4.3.7]
How many subsets does the set $\{1, 2, \cdots , n\}$ have that contain no three consecutive integers? Find a recurrence.
\Answer
Denote the number of such subsets by $A_n$. It is easy to check that $A_1 = 1$, $A_2 = 3$, $A_3 = 6$, $A_4=12$. As shown in Figure~\ref{fig:recurr}, once $A_1, A_2, \cdots, A_n$ are known, we can obtain $A_{n+1}$ from the previous values: (1) the subsets that do not contain the element $n+1$ obviously count $A_n$; (2) the subsets that do contain $n+1$ split into (2a) the $A_n$ subsets above with $n+1$ added, (2b) minus the $A_{n-3}$ exceptions that would then violate the rule. So the recurrence can be written as:
\begin{equation*}
A_{n+1} = 2A_n-A_{n-3}
\end{equation*}
\hspace{1em}
With this formula we get $A_5 = 2A_4-A_1=23$, which is confirmed by counting by hand (and by the short enumeration below).
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{figures/recurr}
\caption{Demonstration of the recurrence of $A_n$.}
\label{fig:recurr}
\end{figure}
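\hspace{1em}
(Added illustration, not part of the original solution.) A short brute-force enumeration in Python confirms the first few values and the recurrence:
\begin{verbatim}
from itertools import combinations

def count(n):
    # nonempty subsets of {1,...,n} with no three consecutive integers
    total = 0
    for r in range(1, n + 1):
        for s in combinations(range(1, n + 1), r):
            if not any(s[i] + 1 == s[i + 1] and s[i] + 2 == s[i + 2]
                       for i in range(len(s) - 2)):
                total += 1
    return total

A = [count(n) for n in range(1, 8)]   # [1, 3, 6, 12, 23, 43, 80]
assert all(A[i] == 2 * A[i - 1] - A[i - 4] for i in range(4, len(A)))
\end{verbatim}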
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 4.3.12
\begin{homeworkProblem}[LPV 4.3.12]
Recalling the Lucas numbers $L_n$ introduced in Exercise 4.3.2, prove the following identities:
(a) $ F_{2n}=F_nL_n;$
(b) $ 2F_{k+n}=F_kL_n+F_nL_k;$
(c) $ 2L_{k+n}=5F_kF_n+L_kL_n;$
(d) $ L_{4k}=L_{2k}^2-2;$
(e) $ L_{4k+2}=L_{2k+1}^2+2.$
\Answer
From Exercise 4.3.2 we can easily know that:
\begin{equation*}
L_{n} = F_n+2F_{n-1}
\end{equation*}
\hspace{1em}
(a)
\begin{align*}
F_n L_n
&=F_n(F_n+2F_{n-1}) \\
&=F_n^2+2F_nF_{n-1}+F^2_{n-1}-F^2_{n-1} \\
&=(F_n+F_{n-1})^2-F_{n-1}^2 \\
&=F_{n+1}^2+F_{n}^2-F_n^2-F_{n-1}^2 \\
&=F_{2n+1}-F_{2n-1} \\
&=F_{2n}
\end{align*}
\hspace{2em}
(b) From identity (4.5) $F_{a+b+1}=F_{a+1}F_{b+1}+F_a F_b$, we can get:
\begin{align*}
F_k L_n + F_n L_k
&=F_k(F_n+2F_{n-1})+F_n(F_k+2F_{k-1}) \\
&=2F_k F_n+2F_{n-1}F_k+2F_{k-1}F_n +2F_{k-1}F_{n-1}-2F_{k-1}F_{n-1} \\
  &=2(F_k+F_{k-1})(F_n+F_{n-1})-2F_{k-1}F_{n-1} \\
&=2(F_{k+1} F_{n+1}+F_k F_n-F_k F_n-F_{k-1}F_{n-1}) \\
&=2(F_{k+n+1}-F_{k+n-1}) \\
&=2F_{k+n}
\end{align*}
\hspace{2em}
(c)
\begin{align*}
5F_k F_n +L_k L_n
&= 5F_k F_n + (F_k+2F_{k-1})(F_n+2F_{n-1}) \\
  &= 2(F_k F_n +F_{k-1}F_n+F_{n-1}F_k)+4(F_k F_n+F_{k-1}F_{n-1}) \\
&= 2F_{k+n}+4F_{k+n-1} \\
&= 2L_{k+n}
\end{align*}
\hspace{2em}
(d) From (a) and identity (4.5), it's easy to get:
\begin{align*}
L_{4k}
&= F_{4k}+2F_{4k-1} \\
&= F_{2k}L_{2k}+2(F_{2k}F_{2k}+F_{2k-1}F_{2k-1}) \\
&= L_{2k}^2-L_{2k}(F_{2k}+2F_{2k-1})+F_{2k}L_{2k}+2(F_{2k}F_{2k}+F_{2k-1}F_{2k-1})\\
&= L_{2k}^2-2F_{2k}F_{2k-1}+2F_{2k}^2-2F_{2k-1}^2 \\
&= L_{2k}^2+2F_{2k}^2-2F_{2k-1}F_{2k+1}
\end{align*}
\hspace{2em}
Denote $A_n = F_{n}^2-F_{n-1}F_{n+1}$, so:
\begin{align*}
A_n
&= F_{n}^2-F_{n-1}F_{n+1} \\
&= F_n^2-F_{n-1}(F_n+F_{n-1}) \\
&= (F_n-F_{n-1})F_n-F_{n-1}^2 \\
&= -(F_{n-1}^2 - F_{n-2}F_n) \\
&=-A_{n-1} \\
&= (-1)^{n-1} A_1
\end{align*}
\hspace{2em}
And $A_1 = F_1^2-F_0 F_2=1$, so:
\begin{equation*}
L_{4k} = L_{2k}^2+2A_{2k}=L_{2k}^2+2(-1)^{2k-1}=L_{2k}^2-2
\end{equation*}
\hspace{2em}
(e) Similar to (d), we have:
\begin{equation*}
L_{4k+2} = L_{2k+1}^2+2A_{2k+1}=L_{2k+1}^2+2(-1)^{2k}=L_{2k+1}^2+2
\end{equation*}
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-LPV 4.3.14
\begin{homeworkProblem}[LPV 4.3.14]
\hspace{1.2em}
(a) Prove that every positive integer can be written as the sum of different Fibonacci numbers.
(b) Prove even more: every positive integer can be written as the sum of different Fibonacci numbers, so that no two consecutive Fibonacci numbers are used.
(c) Show by an example that the representation in (a) is not unique, but also prove that the more restrictive representation in (b) is.
\Answer
(a) Consider a positive integer $N$. When $N$ is itself a Fibonacci number, the claim is obviously true. Suppose the claim holds for every integer $0<N<F_n$, i.e.
\begin{equation*}
N=\sum_{i=1}^{n-1}a_i F_i, a_i \in \{0,1\}
\end{equation*}
Then for a larger integer $F_n<N<F_{n+1}$ we can write:
\begin{equation*}
N=F_n+\sum_{i=1}^{n-1}b_i F_i, b_i \in \{0,1\}
\end{equation*}
Note that $N-F_n < F_{n+1}-F_n = F_{n-1}$, so the remaining part is represented using only smaller, distinct Fibonacci numbers and $F_n$ is not repeated. Hence, if the claim holds for the first few integers, it holds for all integers; since it clearly holds for $N=1,2,3,5,8,\ldots$, every positive integer can be written as the sum of different Fibonacci numbers.
(b) Similarly to (a), we only need to prove that $F_n+F_{n-2}+\cdots \geq F_{n+1}-1$; then the claim holds for every integer between $F_{n}$ and $F_{n+1}$. Here is the induction:
\begin{equation*}
F_1+F_3 = F_4,\ \text{then}\ F_1+F_3+F_5=F_4+F_5=F_6
\end{equation*}
\begin{equation*}
	\text{Known } \sum_{i=1}^{k}F_{2i-1}=F_{2k}, \text{ then } \sum_{i=1}^{k+1}F_{2i-1}=F_{2k}+F_{2k+1} =F_{2k+2}
\end{equation*}
\begin{equation*}
F_2+F_4 = F_5-1,\ \text{then}\ F_2+F_4+F_6=F_5-1+F_6=F_7-1
\end{equation*}
\begin{equation*}
	\text{Known } \sum_{i=1}^{k}F_{2i}=F_{2k+1}-1, \text{ then } \sum_{i=1}^{k+1}F_{2i}=F_{2k+1}-1+F_{2k+2} =F_{2k+3}-1
\end{equation*}
So $F_n+F_{n-2}+\cdots \geq F_{n+1}-1$ holds for all positive $n$, both even and odd, which proves the claim.
(c) An example: $N=32$ has two different representations of the form in (a):
\begin{equation*}
32 = 21 + 8 + 3 = 13 + 8 + 5 + 3 + 2 + 1
\end{equation*}
\hspace{1em}
But for the representation in (b), the only exception is that
\begin{equation*}
	F_{2k} = \sum_{i=1}^k F_{2i-1}
\end{equation*}
\hspace{1em}
And all other integers have a unique representation of the form required in (b).
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-Special Problem 1
\begin{homeworkProblem}[Special Problem 1]
In class we considered a 3-Person Hat Problem in which each person can see the bits posted on the other two people’s foreheads and tries to guess the bit on his/her own forehead, but each person is permitted to make no guess; the team wins if (1) no one makes an incorrect guess, and (2) at least one person makes a correct guess. We discussed a strategy for the team with a probability 3/4 to win.
\textit{Question:} Give a rigorous proof that this is the best strategy possible. That is, no strategy for the team can win with probability higher than 3/4. Your model should be general enough to include strategies with randomized moves. (Recall that in class, we mentioned a particular strategy in which one person always make a random guess (0 or 1) and the other two don’t speak.)
\textbf{Remarks} You should first define a mathematical model. Define a probability space, how any strategy is specified precisely, and how to define win as an event for the strategy. This allows you to define what the term best strategy means.
\Answer
The state space should be:
\begin{align*}
& \{0,0,0\}, \ \{0,0,1\}, \ \{0,1,0\}, \ \{0,1,1\},\\
& \{1,0,0\}, \ \{1,0,1\}, \ \{1,1,0\}, \ \{1,1,1\}
\end{align*}
\hspace{1em}
Each state has probability $1/8$. The strategy that wins in $3/4$ of the states is:
\hspace{2em}
(1) If the first person sees only one type of bit, he guesses the other type. The other two partners will then know their bits.
\hspace{2em}
(2) If the first person sees two different types of bits, he makes no guess. Since the other two then know each other's bits, they can guess for themselves.
\hspace{1em}
Considering the information passed from the first person, we know for sure that \textbf{he cannot determine his own bit in every state}, and that \textbf{if he always makes no guess, he gives no information to the other two}. This means he can \textbf{make no guess in at most half of the states in order to pass information}, and \textbf{for the other half he can only make a guess, which wins in at most half of the remaining states}.
\hspace{1em}
This argument is not fully rigorous, but it conveys the main idea.
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-Special Problem 2
\begin{homeworkProblem}[Special Problem 2]
We discussed in class a game of ”Finding your IDs” for n students, using a cycle-following strategy for all the students. Let $g(n)$ be the probability of winning in this game, i.e. all students succeed in finding their ID.
(a) Give an explicit mathematical formula for $g(n)$.
(b) Determine the value of $\lim_{n\to \infty}g(n)$.
(c) Let us consider a modified game, in which each student is only allowed to search through $n/3$ boxes (instead of $n/2$). Let $h(n)$ be the probability of winning. Determine the value of $\lim_{n\to\infty} h(n)$.
\Answer
(a) Since each student has probability $\frac{1}{2}$ of finding his ID, $g(n)=(\frac{1}{2})^n$.
(b) $\lim_{n\to\infty}g(n)=\lim_{n\to\infty}(\frac{1}{2})^n=0$
(c) Similarly, $h(n)=(\frac{1}{3})^n$, $\lim_{n\to\infty}h(n)=\lim_{n\to\infty}(\frac{1}{3})^n=0$
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% HOMEWORK-2-Special Problem 3
\begin{homeworkProblem}[Special Problem 3]
In a New Year’s party with $2n$ people, $k$ random names are picked to receive gifts. Assume that these $2n$ people are actually $n$ husband-wife couples. Let $r_{n,k}$ be the probability that at least for one couple, both husband and wife win gifts.
(a) Give a mathematical formula for $r_{n,k}$.
(b) Determine the value of $\lim_{n\to\infty}r_{n,\sqrt{n}}$ .
\Answer
(a) The complementary event is that all $k$ persons belong to different couples. So:
\begin{align*}
	& r_{n,k} = 1-\frac{C_{n}^{k} 2^k}{C_{2n}^{k}}, \quad \text{if } k\leq n, \\
	& r_{n,k}=1, \quad \text{otherwise.}
\end{align*}
\hspace{1em}
(b) For $k\leq n$ we have
\begin{equation*}
1-\frac{C_{n}^{k} 2^k}{C_{2n}^{k}} = 1-\frac{n!2^k (2n-k)!}{(2n)!(n-k)!}
\end{equation*}
\hspace{1em}
And with $k=\sqrt{n}$, Stirling's formula $n!\sim \sqrt{2\pi n}(\frac{n}{e})^n$, we have:
\begin{align*}
\lim_{n\to\infty}{r_{n,\sqrt{n}}}
&= 1-\lim_{n\to\infty}\frac{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\sqrt{2\pi (2n-\sqrt{n})}\left(\frac{2n-\sqrt{n}}{e}\right)^{2n-\sqrt{n}}}{\sqrt{2\pi 2n}\left(\frac{2n}{e}\right)^{2n}\sqrt{2\pi(n-\sqrt{n})}\left(\frac{n-\sqrt{n}}{e}\right)^{n-\sqrt{n}}} \\
&= 1-\lim_{n\to\infty}\sqrt{\frac{2\sqrt{n}-1}{2\sqrt{n}-2}}\left(\frac{2\sqrt{n}-1}{2\sqrt{n}-2}\right)^{n-\sqrt{n}}\left(1-\frac{1}{2\sqrt{n}}\right)^{n}
\end{align*}
\hspace{1em}
Denote $\sqrt{n}$ as $x$, then
\begin{equation*}
\lim_{n\to\infty}{r_{n,\sqrt{n}}} = 1-\lim_{x\to\infty}\left(\frac{2x-1}{2x-2}\right)^{x^2-x+\frac{1}{2}}\left(1-\frac{1}{2x}\right)^{x^2} \triangleq 1-A
\end{equation*}
\begin{align*}
\ln{A}
&= \lim_{x\to\infty}\left(x^2-x+\frac{1}{2}\right)\ln{\left(1+\frac{1}{2x-2}\right)}+\lim_{x\to\infty}x^2\ln\left(1-\frac{1}{2x}\right) \\
&= \lim_{x\to\infty}\left(-x+\frac{1}{2}\right)\ln{\left(1+\frac{1}{2x-2}\right)}+\lim_{x\to\infty}x^2\ln{\left(1+\frac{1}{2x-2}\right)\left(1-\frac{1}{2x}\right)}
\end{align*}
\hspace{1em}
First part:
\begin{align*}
\lim_{x\to\infty}\left(-x+\frac{1}{2}\right)\ln{\left(1+\frac{1}{2x-2}\right)}
&= -\frac{1}{2}\lim_{c\to\infty}(c+1)\ln(1+1/c) \\
&= -\frac{1}{2}\lim_{c\to\infty}\frac{\partial \ln(1+1/c)}{\partial c}/\frac{\partial \frac{1}{c+1}}{\partial c} \\
&= -\frac{1}{2}\lim_{c\to\infty}\frac{c+1}{c}\\
&= -\frac{1}{2}
\end{align*}
\hspace{1em}
Second part:
\begin{align*}
\lim_{x\to\infty}x^2\ln{\left(1+\frac{1}{2x-2}\right)\left(1-\frac{1}{2x}\right)}
&= \lim_{x\to\infty}x^2\ln\left(1+\frac{1}{4x(x-1)}\right) \\
&= \lim_{x\to\infty}\frac{\frac{\partial \ln(1+\frac{1}{4x(x-1)})}{\partial x}}{\frac{\partial \frac{1}{x^2}}{\partial x}} \\
&= \lim_{x\to\infty}\frac{x^2}{2(x-1)(2x-1)} \\
&= \frac{1}{4}
\end{align*}
\hspace{1em}
So:
\begin{equation*}
\lim_{n\to\infty}{r_{n,\sqrt{n}}}=1-e^{\ln A}=1-e^{-\frac{1}{2}+\frac{1}{4}}=1-e^{-\frac{1}{4}}
\end{equation*}
\hspace{1em}
Figure~\ref{fig:conver} shows the convergence of $r_{n,\sqrt{n}}$ to $1-e^{-\frac{1}{4}}$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{figures/conver}
	\caption{Demonstration of the convergence of $r_{n,\sqrt{n}}$ to $1-e^{-\frac{1}{4}}$.}
\label{fig:conver}
\end{figure}
\end{homeworkProblem}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ACKNOWLEGEMENT
\Acknowledgement
[1] Thanks to \href{https://github.com/jiweiqi}{Weiqi Ji} for discussing about SP3(b).
% End edit to here
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{spacing}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
| {
"alphanum_fraction": 0.5748460445,
"avg_line_length": 40.1330798479,
"ext": "tex",
"hexsha": "64c9e0c8ece05a8bac755e744e48e115881c9abf",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2bc3747fbc4567ac4999ae3ba80ff074b543d602",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "SuXY15/SuXY15.github.io",
"max_forks_repo_path": "_site/assets/mcs/hw2/hw2_2015010697.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "2bc3747fbc4567ac4999ae3ba80ff074b543d602",
"max_issues_repo_issues_event_max_datetime": "2022-02-26T03:50:44.000Z",
"max_issues_repo_issues_event_min_datetime": "2020-02-25T10:39:30.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "SuXY15/SuXY15.github.io",
"max_issues_repo_path": "_site/assets/mcs/hw2/hw2_2015010697.tex",
"max_line_length": 492,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "2bc3747fbc4567ac4999ae3ba80ff074b543d602",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "SuXY15/SuXY15.github.io",
"max_stars_repo_path": "_site/assets/mcs/hw2/hw2_2015010697.tex",
"max_stars_repo_stars_event_max_datetime": "2020-10-08T08:40:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-10-08T08:40:34.000Z",
"num_tokens": 7834,
"size": 21110
} |
% v0.04 by Eric J. Malm, 10 Mar 2005
\documentclass[12pt,letterpaper,boxed]{article}
% set 1-inch margins in the document
\usepackage{enumerate}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{marginnote}
\usepackage{float}
\input{macros.tex}
% \newtheorem{lemma}[section]{Lemma}
\usepackage{graphicx}
\usepackage{float}
% Note: for other writers, please take a look at the shortcuts I have already defined above.
\author{Samuel Stewart}
\title{Recognizing Mathematical Symbols with a Sparse CNN}
% TODO: employ roman numerals in the problem
\begin{document}
\maketitle
\section{Problem}
Goal: beat prediction accuracy \emph{and} speed of Haskell backend using dynamic time warp.
Hypothesis: Convolution neural network is better at recognizing characters. Sparsity enables speedup.
\section{Introduction to Neural Networks}
\subsection{Starting with a Single Neuron}
Abstractly, a neuron receives an input vector $\overline{x} \in \R^n$ and outputs a real number: a large positive number indicates activation, and a negative number indicates that it is inactive. A neural network consists of millions of such neurons, strung together carefully.
% picture of neuron from neural network? Hand drawn or in Omnigraffle.
A neuron has the following parameters
\begin{enumerate}
	\item A shift vector $\overline{b}$ for the input vector
\item A vector $\overline{w}$ of weights for the input vector
\item A mapping $f : \R \to \R$ that describes the output of the neuron (intuitively, when the neuron \textbf{activates}).
\end{enumerate}
The output of the neuron is then simply
\[
f(\overline{w} \cdot (\overline{x} + \overline{b})).
\]
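To make this concrete, here is a minimal sketch of a single neuron in Python with NumPy (our own illustration; the function name, the example numbers, and the use of NumPy are not taken from the cited material):
\begin{verbatim}
import numpy as np

def neuron(x, w, b, f=lambda t: max(0.0, t)):
    # output f(w . (x + b)); f defaults to f(t) = max(0, t)
    return f(float(np.dot(w, x + b)))

# a two-input neuron; inputs, weights and shift are made up
print(neuron(np.array([1.0, -2.0]),
             np.array([0.5, 0.25]),
             np.array([0.0, 1.0])))   # prints 0.25
\end{verbatim}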
A single neuron is quite powerful and a good place to begin. One can rephrase other classifiers \cite{andrej2017} within this framework. For example, if one chooses
\[
f(t) = \frac{1}{1 + e^{-t}}
\]
% include graph of logistic function on the right and the equation on the left
and tunes the parameters appropriately, then one is performing \textit{logistic regression}. For real neural networks, the function
\[
f(t) = \max(0, t)
\]
% picture of max function on the right and equation on the left
is a more common and effective choice in practice \cite{andrej2017}.
As a simple example, consider the problem of predicting whether a given height measurement is of a man or a woman. With the logistic function as above, one can build a simple classifier in Mathematica.
% Get some height data (or generate it) in Mathematica. Compute the loss function and generate a graph.
\subsection{A Geometric Viewpoint}
Note: one can view classification as decomposing the input image into basis elements. Since each row of $W$ is a dot product against the input data, we are really detecting a high-dimensional \emph{angle} and offset; everything reduces to high-dimensional geometry.
\subsection{Neural Networks as Nonlinear Combinations of Matrix Multiplications}
\subsection{Graph Representation}
\section{Previous Work}
\section{The Problem}
\section{Methodology}
We requested the full $210,454$ samples from the author of the popular online tool Detexify, which converts hand-drawn \LaTeX{} characters to symbol codes [cite] using an algorithm called dynamic time warping [cite dynamic time warp]. Each sample consists of a classification (the \LaTeX{} symbol code) and an array of timestamped coordinates representing the stroke pattern for that sample. Preprocessing required converting each sample to a $200 \times 200$ grayscale image by rendering a thick line connecting the sampled points via the Python Imaging Library [cite].
Using the Python frameworks \textbf{Lasagne} and \textbf{nolearn}, we implemented a network with the following structure
% Diagram of the network we implemented. Can we just do a network with no hidden layers? Probably
We reserved one quarter of the data to test generalizability and trained the network on the remainder. The following figure shows our loss function on the training data
% Loss function figure
The accuracy on the out-of-sample data was $100\%$.
\subsection{Evaluation of a network with linear algebra}
\subsection{Density of neural networks}
\section{Convolution Neural Networks}
\subsection{Why are CNNs different from vanilla neural networks?}
1. "better" in some ill-defined sense. I assume classification accuracy?
2. General idea appears to be that
\section{Exploiting Sparsity in Convolutional Neural Networks}
\subsection{Training the Network}
\subsection{Cost of Accuracy}
\section{Questions while learning}
1. How to select proper activation function?
2. How can one rephrase this problem mathematically?
3. Why can't neurons have multiple outputs?
4. Are there results connecting the number of samples with the accuracy / generalizability of the network?
\section{Reproducible Research Questions}
1. What did I do?
2. Why did I do it?
3. How did I set up everything at the time of the analysis?
4. When did I make changes, and what were they?
5. Who needs to access it, and how can I get it to them?
\section{References}
ConvNetJS (playground for neural nets)
http://cs.stanford.edu/people/karpathy/convnetjs/
andrej2017
Andrej Karpathy. http://cs231n.github.io/neural-networks-1/
(Spatially-sparse convolutional neural networks) https://arxiv.org/abs/1409.6070
VisTrails workflow management
https://www.vistrails.org/index.php/Main_Page
Proof of universality of neural networks:
http://neuralnetworksanddeeplearning.com/chap4.html
Pandas for data cleaning
http://t-redactyl.io/blog/2016/10/a-crash-course-in-reproducible-research-in-python.html
IPython
https://ipython.org/documentation.html
\end{document}
| {
"alphanum_fraction": 0.7823177175,
"avg_line_length": 37.1111111111,
"ext": "tex",
"hexsha": "950d9a97449e3659ec9b7d212df521c42537a6e7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cecdadb6a6bddd641875f78b4abebdc3e6fa1bbc",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "samstewart/cnn-latex-character-recognition",
"max_forks_repo_path": "paper/paper.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cecdadb6a6bddd641875f78b4abebdc3e6fa1bbc",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "samstewart/cnn-latex-character-recognition",
"max_issues_repo_path": "paper/paper.tex",
"max_line_length": 554,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "cecdadb6a6bddd641875f78b4abebdc3e6fa1bbc",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "samstewart/cnn-latex-character-recognition",
"max_stars_repo_path": "paper/paper.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-01T04:04:59.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-01T04:04:59.000Z",
"num_tokens": 1378,
"size": 5678
} |
\chapter{Content of the DVD}
In this chapter, you should explain the content of your DVD.
| {
"alphanum_fraction": 0.7608695652,
"avg_line_length": 23,
"ext": "tex",
"hexsha": "a6f43c78e4dc956480a4970ae08953eb17c6b458",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_forks_repo_path": "tuhhthesis/appendix_CD-Content.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_issues_repo_path": "tuhhthesis/appendix_CD-Content.tex",
"max_line_length": 61,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5c351afff6447f16dfce885636ee44ac41762396",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "bone4/Bachelorarbeit-JuraCoffeeThesis",
"max_stars_repo_path": "tuhhthesis/appendix_CD-Content.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 22,
"size": 92
} |
\section{Monte Carlo EM Procedure}
Let $X_{ij} = [x_i, x_j, x_{ij}]$ denote the features, $\Delta_{ij} = [\alpha_i, \beta_j, \gamma_i, u_i, v_j, s_i]$ denote the latent factors, and
$\Theta = [b, g_0, d_0, c_0, G, D, H, \lambda, \eta, a_\alpha, a_\beta, a_\gamma, A_u, A_v, A_s]$ denote the model parameters. We use the convention that $y = \{y_{ij}\}$, $X = \{X_{ij}\}$, $z = \{z_{jn}\}$ and so on. The EM algorithm seeks to find the $\Theta$ that maximizes the incomplete data likelihood (marginalizing over latent factors):
$$
\arg\max_{\Theta} \Pr[y, w \,|\, \Theta, X]
$$
Let $LL(\Theta; \Delta, z, y, w, X) = \log(\Pr[\Delta, z, y, w \,|\, \Theta, X])$ denote the log of the complete data likelihood. Let $\hat{\Theta}^{(t)}$ denote our current estimate of $\Theta$ at the $t$th iteration. The EM algorithm iterates through the following two steps until convergence.
\begin{itemize}
\item {\bf E-step:} Compute the sufficient statistics of $E_{\Delta, z}[LL(\Theta; \Delta, z, y, w, X) \,|\, \hat{\Theta}^{(t)}]$ as a function of $\Theta$, where the expectation is taken over the posterior distribution of $(\Delta, z \,|\, \hat{\Theta}^{(t)}, y, w, X)$.
\item {\bf M-step:} Find the $\Theta$ that maximizes the expectation computed in the E-step.
$$
\hat{\Theta}^{(t+1)}= \arg\max_{\Theta}
E_{\Delta, z}[LL(\Theta; \Delta, z, y, w, X) \,|\, \hat{\Theta}^{(t)}]
$$
\end{itemize}
\subsection{Monte-Carlo E-Step}
Because $E_{\Delta, z}[LL(\Theta; \Delta, z, y, w, X) \,|\, \hat{\Theta}^{(t)}]$ is not in closed form, we compute the Monte-Carlo mean based on $L$ samples generated by a Gibbs sampler. The Gibbs sampler repeats the following procedure $L$ times (a schematic sketch of one of the conditional updates is given after the list).
\begin{enumerate}
\item Sample from $(\alpha_i \,|\, \textrm{Rest})$, which is Gaussian, for all $i$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\prime} b)\gamma_i - \beta_j -
u_i^\prime v_j - s_{i}^{\prime} \bar{z}_j \\
\textit{Var}[\alpha_i|\mbox{Rest}] & =
( \frac{1}{a_{\alpha}} +
\sum_{j \in \mathcal{J}_i} \frac{1}{\sigma_{ij}^{2}} )^{-1} \\
E[\alpha_i|\mbox{Rest}] & =
\textit{Var}[\alpha_i|\mbox{Rest}]
( \frac{g_0^{\prime} x_i}{a_{\alpha}} +
\sum_{j \in \mathcal{J}_i} \frac{o_{ij}}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(\beta_j \,|\, \textrm{Rest})$, which is Gaussian, for all $j$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\prime} b) \gamma_i - \alpha_i -
u_i^\prime v_j - s_{i}^{\prime} \bar{z}_j \\
\textit{Var}[\beta_j|\mbox{Rest}] & =
( \frac{1}{a_{\beta}} +
\sum_{i \in \mathcal{I}_j} \frac{1}{\sigma_{ij}^{2}} )^{-1} \\
E[\beta_j|\mbox{Rest}] & =
\textit{Var}[\beta_j|\mbox{Rest}]
( \frac{d_0^{\prime} x_j}{a_{\beta}} +
\sum_{i \in \mathcal{I}_j} \frac{o_{ij}}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(\gamma_i \,|\, \textrm{Rest})$, which is Gaussian, for all $i$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - \alpha_i - \beta_j - u_i^\prime v_j -
s_i^\prime \bar{z}_j \\
\textit{Var}[\gamma_i|\mbox{Rest}] & =
( \frac{1}{a_{\gamma}} +
\sum_{j \in \mathcal{J}_i}
\frac{(x_{ij}^{\prime} b)^2}{\sigma_{ij}^{2}}
)^{-1} \\
E[\gamma_i|\mbox{Rest}] & =
\textit{Var}[\gamma_i|\mbox{Rest}]
( \frac{ c_0^\prime x_i }{a_{\gamma}} +
\sum_{j \in \mathcal{J}_i}
\frac{o_{ij} (x_{ij}^{\prime} b)}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(u_i \,|\, \textrm{Rest})$, which is Gaussian, for all $i$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\prime} b)\gamma_i -
\alpha_i - \beta_j - s_i^\prime \bar{z}_j \\
\textit{Var}[u_i|\mbox{Rest}] & =
( A_u^{-1} +
\sum_{j \in \mathcal{J}_i}
\frac{v_j v_j^{\prime}}{\sigma_{ij}^{2}}
)^{-1} \\
E[u_i|\mbox{Rest}] & =
\textit{Var}[u_i|\mbox{Rest}]
( A_u^{-1} G x_i +
\sum_{j \in \mathcal{J}_i} \frac{o_{ij} v_j}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(v_j \,|\, \textrm{Rest})$, which is Gaussian, for all $j$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\prime} b)\gamma_i -
\alpha_i - \beta_j - s_i^\prime \bar{z}_j \\
\textit{Var}[v_j|\mbox{Rest}] & =
( A_v^{-1} +
\sum_{i \in \mathcal{I}_j}
\frac{u_i u_i^{\prime}}{\sigma_{ij}^{2}}
)^{-1} \\
E[v_j|\mbox{Rest}] & =
\textit{Var}[v_j|\mbox{Rest}]
( A_v^{-1} D x_j +
\sum_{i \in \mathcal{I}_j} \frac{o_{ij} u_i}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(s_i \,|\, \textrm{Rest})$, which is Gaussian, for all $i$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij} & = y_{ij} - (x_{ij}^{\prime} b)\gamma_i -
\alpha_i - \beta_j - u_i^\prime v_j \\
\textit{Var}[s_i|\mbox{Rest}] & =
( A_s^{-1} +
\sum_{j \in \mathcal{J}_i}
\frac{\bar{z}_j \bar{z}_j^{\prime}}{\sigma_{ij}^{2}}
)^{-1} \\
E[s_i|\mbox{Rest}] & =
\textit{Var}[s_i|\mbox{Rest}]
( A_s^{-1} H x_i +
\sum_{j \in \mathcal{J}_i} \frac{o_{ij} \bar{z}_j}{\sigma_{ij}^{2}} )
\end{split}
\end{equation}
\item Sample from $(z_{jn} \,|\, \textrm{Rest})$, which is multinomial, for all $j$ and $n$. Let $z_{\neg jn}$ denote $z$ with $z_{jn}$ removed. The probability of $z_{jn}$ being topic $k$ is given by
\begin{equation*}
\begin{split}
&\Pr[z_{jn} = k \,|\, z_{\neg jn}, \Delta, \hat{\Theta}^{(t)}, y, w, X] \\
&\propto \Pr[z_{jn} = k, y \,|\, z_{\neg jn}, \Delta, \hat{\Theta}^{(t)}, w, X] \\
&\propto \Pr[z_{jn} = k \,|\, w, z_{\neg jn}, \hat{\Theta}^{(t)}] \cdot
\prod_{i\in I_j} \Pr[y_{ij}\,|\,z_{jn} = k, z_{\neg jn},
\Delta, \hat{\Theta}^{(t)}, X]
\end{split}
\end{equation*}
Let
$$
Z_{jk\ell} = \sum_n \mathbf{1}\{z_{jn} = k \mbox{ and } w_{jn} = \ell\}
$$
denote the number of times word $\ell$ belongs to topic $k$ in item $j$. Let $Z_{k\ell} = \sum_j Z_{jk\ell}$, $Z_{k} = \sum_{j\ell} Z_{jk\ell}$, and so on. We use $Z_{\cdot}^{\neg jn}$ to denote the count $Z_{\cdot}$ with $z_{jn}$ removed from the summation. Assume $w_{jn} = \ell$. Then,
\begin{equation*}
\begin{split}
& \Pr[z_{jn} = k \,|\, w, z_{\neg jn}, \hat{\Theta}^{(t)}] \\
& \propto \Pr[z_{jn} = k, w_{jn} = \ell \,|\,
w_{\neg jn}, z_{\neg jn}, \hat{\Theta}^{(t)}] \\
& = \Pr[w_{jn} = \ell \,|\,
w_{\neg jn}, z_{jn} = k, z_{\neg jn}, \eta] ~
\Pr[z_{jn} = k \,|\, z_{\neg jn}, \lambda] \\
& = E[ \Phi_{k\ell} \,|\, w_{\neg jn}, z_{\neg jn}, \eta] ~
E[ \theta_{jk} \,|\, z_{\neg jn}, \lambda] \\
& = \frac{Z_{k\ell}^{\neg jn} + \eta}
{Z_{k}^{\neg jn} + W \eta}~
\frac{Z_{jk}^{\neg jn} + \lambda_k}
{Z_{j}^{\neg jn} + \sum_k \lambda_k}
\end{split}
\end{equation*}
Note that the denominator of the second term $(Z_{j}^{\neg jn} + \sum_k \lambda_k)$ is independent of $k$. Thus, we obtain
\begin{equation}
\Pr[z_{jn} = k \,|\, \mbox{Rest}]
\propto \frac{Z_{k\ell}^{\neg jn} + \eta}
{Z_{k}^{\neg jn} + W \eta} ~
(Z_{jk}^{\neg jn} + \lambda_k) ~
\prod_{i\in \mathcal{I}_j} f_{ij}(y_{ij})
\end{equation}
where $f_{ij}(y_{ij})$ is the probability density at $y_{ij}$ given the current values of $(x_{ij}^{\prime}\, b) \gamma_i + \alpha_i + \beta_j + u_i^\prime v_j + s_{i}^{\prime} \, \bar{z}_{j}$ and $\sigma_{ij}^2$.
\begin{equation}
\begin{split}
\mbox{Let } o_{ij}
&= y_{ij} - (x_{ij}^{\prime}\, b) \gamma_i -
\alpha_i - \beta_j - u_i^\prime v_j \\
\prod_{i\in \mathcal{I}_j} f_{ij}(y_{ij})
&\propto \exp\left\{
-\frac{1}{2} \sum_{i \in \mathcal{I}_j}
\frac{(o_{ij} - s_{i}^{\prime} \, \bar{z}_{j})^2}{\sigma_{ij}^2}
\right\} \\
&\propto \exp\left\{
\bar{z}_{j}^\prime B_j -
\frac{1}{2} \bar{z}_{j}^\prime C_j \bar{z}_{j}
\right\} \\
\mbox{where } &
B_j = \sum_{i \in \mathcal{I}_j} \frac{o_{ij} s_i}{\sigma_{ij}^{2}}
\mbox{ and }
C_j = \sum_{i \in \mathcal{I}_j} \frac{s_i s_i^{\prime}}{\sigma_{ij}^{2}}
\end{split}
\end{equation}
\end{enumerate}
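The following is a schematic NumPy sketch (ours, for illustration only; the variable names do not come from any reference implementation) of the conditional update for $\alpha_i$ in step 1. The other Gaussian conditionals ($\beta_j$, $\gamma_i$, $u_i$, $v_j$, $s_i$) follow the same pattern with the corresponding precision and mean terms.
\begin{verbatim}
import numpy as np

def sample_alpha_i(o_i, sigma2_i, g0_x_i, a_alpha, rng):
    # o_i:      residuals o_ij over the items j rated by user i
    # sigma2_i: the matching observation variances sigma_ij^2
    # g0_x_i:   prior mean g_0' x_i;  a_alpha: prior variance
    var = 1.0 / (1.0 / a_alpha + np.sum(1.0 / sigma2_i))
    mean = var * (g0_x_i / a_alpha + np.sum(o_i / sigma2_i))
    return rng.normal(mean, np.sqrt(var))

# usage: rng = np.random.default_rng(0)
#        alpha_i = sample_alpha_i(o_i, sigma2_i, g0_x_i, a_alpha, rng)
\end{verbatim}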
\subsection{M-Step}
In the M-step, we want to find the $\Theta = [b, g_0, c_0, d_0, G, D, H, \lambda, \eta, a_\alpha, a_\beta, a_\gamma, A_u, A_v, A_s]$ that maximizes the expected complete data likelihood computed in the E-step.
$$
\hat{\Theta}^{(t+1)}= \arg\max_{\Theta}
E_{\Delta, z}[LL(\Theta; \Delta, z, y, w, X) \,|\, \hat{\Theta}^{(t)}]
$$
where $- LL(\Theta; \Delta, z, y, w, X) =$
{\small\begin{equation*}
\begin{split}
& ~~ \mbox{Constant} +
\frac{1}{2} \sum_{ij} \left( \frac{1}{\sigma_{ij}^{2}}
(y_{ij}-\alpha_i-\beta_j- (x_{ij}^{\prime}b) \gamma_i
- u_i^\prime v_j - s_{i}^{\prime} \bar{z}_{j})^{2}
+ \log \sigma_{ij}^{2} \right) \\
& ~~ + \frac{1}{2 a_{\alpha}} \sum_{i}
(\alpha_i - g_{0}^{\prime}x_i)^{2}
+ \frac{M}{2} \log a_{\alpha}
+ \frac{1}{2 a_{\gamma}} \sum_{i}
(\gamma_i - c_{0}^{\prime}x_i)^{2}
+ \frac{M}{2} \log a_{\gamma} \\
& ~~ + \frac{1}{2} \sum_{i}
(u_i - Gx_i)^{\prime} A_u^{-1} (u_i - Gx_i)
+ \frac{M}{2} \log(\det A_u) \\
& ~~ + \frac{1}{2} \sum_{j}
(v_j - Dx_j)^{\prime} A_v^{-1} (v_j - Dx_j)
+ \frac{N}{2} \log(\det A_v) \\
& ~~ + \frac{1}{2} \sum_{i}
(s_i - Hx_i)^{\prime} A_s^{-1} (s_i - Hx_i)
+ \frac{M}{2} \log(\det A_s) \\
& ~~ + \frac{1}{2 a_{\beta}} \sum_{j}
(\beta_j - d_{0}^{\prime}x_j)^{2}
+ \frac{N}{2} \log a_{\beta} \\
& ~~ + N \left( \sum_k \log\Gamma(\lambda_k)
- \log\Gamma(\textstyle\sum_k \lambda_k) \right)
+ \sum_j \left(
\log\Gamma\left(Z_j + \textstyle\sum_k \lambda_k\right) -
\sum_k \log\Gamma(Z_{jk} + \lambda_k)
\right) \\
& ~~ + K \left( W \log\Gamma(\eta)
- \log\Gamma(W \eta) \right)
+ \sum_k \left(
\log\Gamma(Z_k + W \eta) -
\sum_\ell \log\Gamma(Z_{k\ell} + \eta)
\right) \\
\end{split}
\end{equation*}}
\noindent The optimal $g_0$, $c_0$, $d_0$, $G$, $D$, $H$, $a_\alpha$, $a_\gamma$, $a_\beta$, $A_u$, $A_v$ and $A_s$ can be obtained using the same regression procedure as in the original RLFM.
\\
{\bf Optimal $b$ and $\sigma^2$:} Consider the Gaussian case, where $\sigma_{ij}^2 = \sigma^2$. Define
$$
o_{ij} = y_{ij} - (\alpha_i + \beta_j + u_i^\prime v_j + s_i^\prime \bar{z}_j)
$$
We use $\tilde{E}[\cdot]$ to denote the Monte-Carlo expectation. Here, we want to find $b$ and $\sigma^2$ that minimize
\begin{equation*}
\begin{split}
& \frac{1}{\sigma^2} \sum_{ij}
\tilde{E}[(o_{ij} - \gamma_i(x_{ij}^{\prime}b))^2]
+ P \log(\sigma^2) \\
&= \frac{1}{\sigma^2} \sum_{ij} \left(
\tilde{E}[o_{ij}^2]
- 2\tilde{E}[o_{ij}\gamma_i] (x_{ij}^{\prime}b)
+ \tilde{E}[\gamma_i^2] (x_{ij}^{\prime}b)^2
\right)
+ P \log(\sigma^2) \\
&= \frac{1}{\sigma^2} \sum_{ij} \left(
\tilde{E}[o_{ij}^2]
- \frac{(\tilde{E}[o_{ij}\gamma_i])^2}{\tilde{E}[\gamma_i^2]}
+ \tilde{E}[\gamma_i^2] \left(
\frac{\tilde{E}[o_{ij}\gamma_i]}{\tilde{E}[\gamma_i^2]}
- x_{ij}^{\prime} b
\right)^2
\right)
+ P \log(\sigma^2) \\
\end{split}
\end{equation*}
where $P$ is the total number of observed user-item pairs. The optimal $b$ can be found by weighted least squares regression with feature $x_{ij}$, response $\frac{\tilde{E}[o_{ij}\gamma_i]}{\tilde{E}[\gamma_i^2]}$ and weight $\tilde{E}[\gamma_i^2]$. The optimal $\sigma^2$ is the above summation (over $ij$) divided by $P$ with the optimal $b$ value.
\\
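As an illustration (our sketch, not the authors' code; the names are made up), the weighted least squares update for $b$ and the resulting $\sigma^2$ can be written in NumPy as follows, where each input is a length-$P$ vector indexed by the observed pair $(i,j)$ (so the entry for $\tilde{E}[\gamma_i^2]$ is repeated across the pairs of user $i$).
\begin{verbatim}
import numpy as np

def update_b_sigma2(X, Eog, Eo2, Eg2):
    # X: P x d matrix of features x_ij (one row per observed pair)
    # Eog = E~[o_ij gamma_i], Eo2 = E~[o_ij^2], Eg2 = E~[gamma_i^2]
    y = Eog / Eg2                      # working response
    sw = np.sqrt(Eg2)                  # sqrt of regression weights
    b, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    resid = Eo2 - Eog**2 / Eg2 + Eg2 * (y - X @ b)**2
    return b, resid.sum() / len(y)     # optimal b and sigma^2
\end{verbatim}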
{\bf Optimal $\eta$:} We want to find $\eta$ that minimizes
\begin{equation*}
\begin{split}
& K \left( W \log\Gamma(\eta)
- \log\Gamma(W \eta) \right)
+ \sum_k \left(
\tilde{E}[\log\Gamma(Z_k + W \eta)] -
\sum_\ell \tilde{E}[\log\Gamma(Z_{k\ell} + \eta)]
\right) \\
\end{split}
\end{equation*}
Since this optimization is just one-dimensional and $\eta$ is a nuisance parameter, we can simply try a number of fixed candidate $\eta$ values.
\\
{\bf Optimal $\lambda$:} We want to find $\lambda_1$, ..., $\lambda_K$ that minimize
\begin{equation*}
\begin{split}
& N \left( \sum_k \log\Gamma(\lambda_k)
- \log\Gamma(\textstyle\sum_k \lambda_k) \right) \\
& + \sum_j \left(
\tilde{E}[\log\Gamma\left(Z_j + \textstyle\sum_k \lambda_k\right)]
- \sum_k \tilde{E}[\log\Gamma(Z_{jk} + \lambda_k)]
\right)\\
\end{split}
\end{equation*}
{\bf Question: How can we optimize this efficiently?} What are the sufficient statistics? Storing Monte-Carlo samples of $Z_{jk}$ in memory may not be feasible. Is there a good approximation? For now, I simply assume $\lambda_k = \lambda_0$, a single scalar, and search through a fixed set of points (which is also what the first Gibbs-sampling-based LDA paper by Griffiths, 2004, assumes).
{\bf Comment:} I think assuming the same $\lambda$ should work fine with a lot of document data; in fact, we can select $\eta$ and $\lambda_0$ through cross-validation in the first implementation to simplify things.
{\bf Comment:} The Gibbs sampler here looks straightforward, and I also feel it would be better behaved than the sampler we had in RLFM. However, one drawback is the conditional
distribution of $s_i$, which involves inverting a matrix whose dimension equals the number of topics. In general, we may want to try a few thousand topics with LDA; we may then have to break $s_i$ into small blocks and sample each block conditional on the others. However, this does not seem to be a worry in the first implementation, where we would try only a few hundred topics in the LDA.
| {
"alphanum_fraction": 0.5770745116,
"avg_line_length": 45.1711409396,
"ext": "tex",
"hexsha": "502d09db66a8f31cc04fe27e00feb12f4987d18c",
"lang": "TeX",
"max_forks_count": 36,
"max_forks_repo_forks_event_max_datetime": "2021-12-24T05:37:19.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-26T05:13:22.000Z",
"max_forks_repo_head_hexsha": "3815cbb311da8819b686661ce7007a7cb62e0f7a",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "TotallyBullshit/Latent-Factor-Models",
"max_forks_repo_path": "src/LDA-RLFM/doc/fitting.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "3815cbb311da8819b686661ce7007a7cb62e0f7a",
"max_issues_repo_issues_event_max_datetime": "2017-12-31T01:14:49.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-05-05T19:40:11.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "TotallyBullshit/Latent-Factor-Models",
"max_issues_repo_path": "src/LDA-RLFM/doc/fitting.tex",
"max_line_length": 396,
"max_stars_count": 86,
"max_stars_repo_head_hexsha": "bda67b6fab8fa3a4219d5360651d9105e006a8c7",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "beechung/Latent-Factor-Models",
"max_stars_repo_path": "src/LDA-RLFM/doc/fitting.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-18T08:24:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-02-02T21:49:23.000Z",
"num_tokens": 5529,
"size": 13461
} |
\appendix
\section{Appendix}
% \section{The obvious}
% \subsection{Reference use}
% \begin{itemize}
% \item use a system for generating the bibliographic information automatically from your database, e.g., use BibTex and/or Mendeley, EndNote, Papers, or \ldots
% \item all ideas, fragments, figures and data that have been quoted from other work have correct references
% \item literal quotations have quotation marks and page numbers
% \item paraphrases are not too close to the original
% \item the references and bibliography meet the requirements
% \item every reference in the text corresponds to an item in the bibliography and vice versa
% \end{itemize}
% \subsection{Structure}
% Paragraphs
% \begin{itemize}
% \item are well-constructed
% \item are not too long: each paragraph discusses one topic
% \item start with clear topic sentences
% \item are divided into a clear paragraph structure
% \item there is a clear line of argumentation from research question to conclusions
% \item scientific literature is reviewed critically
% \end{itemize}
% \subsection{Style}
% \begin{itemize}
% \item correct use of English: understandable, no spelling errors, acceptable grammar, no lexical mistakes
% \item the style used is objective
% \item clarity: sentences are not too complicated (not too long), there is no ambiguity
% \item attractiveness: sentence length is varied, active voice and passive voice are mixed
% \end{itemize}
% \subsection{Tables and figures}
% \begin{itemize}
% \item all have a number and a caption
% \item all are referred to at least once in the text
% \item if copied, they contain a reference
% \item can be interpreted on their own (e.g. by means of a legend)
% \end{itemize}
\subsection{Pseudocode}
\label{appendix-pseudocode}
\begin{algorithm}
\DontPrintSemicolon
\SetKwFunction{DInit}{Init}
\SetKwProg{Fn}{On event}{:}{}
\Fn{\DInit}{
delivered = False\;
paths = $\emptyset$\;
}
\SetKwFunction{DRecv}{Receive}
\Fn{\DRecv{$p_{recv}$, $m$, $path$, $planned$}}{
$path$ = $path \cup \{p_{recv}\}$\;
\ForAll{$p_j \in planned$}{
transmit($p_j$, $m$, $path$, $planned$)\;
}
paths.add($path$)\;
\uIf{paths contains $f+1$ node-disjoint paths to the origin \textbf{and} delivered = False}{
deliver($m$)\;
delivered = True\;
}
}
\SetKwFunction{DBrd}{Broadcast}
\Fn{\DBrd{$m$}}{
deliver($m$)\;
delivered = True\;
\ForAll{$(p_j, route) \in routingTable$}{
transmit($p_j$, $m$, $\emptyset$, $route$)\;
}
}
\caption{Dolev's Reliable Communication routed algorithm}
\label{background:dolev}
\end{algorithm}
\begin{algorithm}[h]
\DontPrintSemicolon
\SetKwFunction{BInit}{Init}
\SetKwProg{Fn}{On event}{:}{}
\Fn{\BInit}{
sentEcho = sentReady = delivered = False\;
echos = readys = $\emptyset$\;
}
\SetKwFunction{BRecvEcho}{ReceiveEcho}
\Fn{\BRecvEcho{$p_{recv}$, $m$}}{
\uIf{\textbf{not} sentEcho}{
\ForAll{$p_j \in neighbours$}{
transmit($p_j$, $m$, ECHO)\;
}
sentEcho = True\;
}
echos.add($p_{recv}$)\;
\uIf{len(echos) $\ge$ $\ceil{\frac{N+f+1}{2}}$ \textbf{and not} sentReady}{
\ForAll{$p_j \in neighbours$}{
transmit($p_j$, $m$, READY)\;
}
sentReady = True\;
}
}
\SetKwFunction{BRecvReady}{ReceiveReady}
\Fn{\BRecvReady{$p_{recv}$, $m$}}{
readys.add($p_{recv}$)\;
\uIf{len(readys) $\ge f+1$ \textbf{and not} sentReady}{
\ForAll{$p_j \in neighbours$}{
transmit($p_j$, $m$, READY)\;
}
sentReady = True\;
}
\uIf{len(readys) $\ge 2f+1$ \textbf{and not} delivered}{
deliver($m$)\;
delivered = True\;
}
}
\SetKwFunction{BBrd}{Broadcast}
\Fn{\BBrd{$m$}}{
\ForAll{$p_j \in neighbours$}{
transmit($p_j$, $m$, SEND)\;
transmit($p_j$, $m$, ECHO)\;
}
}
\caption{Bracha's authenticated double echo algorithm}
\label{background:bracha}
\end{algorithm} | {
"alphanum_fraction": 0.6104474116,
"avg_line_length": 31.1605839416,
"ext": "tex",
"hexsha": "9261e23646d91238ed61df160ffa4c56962ff692",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "timanema/brb-thesis",
"max_forks_repo_path": "report/sections/appendix.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "timanema/brb-thesis",
"max_issues_repo_path": "report/sections/appendix.tex",
"max_line_length": 160,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1c3e19b02b46812ff563bfc776d793adb440b00a",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "timanema/brb-thesis",
"max_stars_repo_path": "report/sections/appendix.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1217,
"size": 4269
} |
\subsection{Span}
\subsubsection{Span function}
We can take a subset \(S\) of a vector space \(V\) and form linear combinations of its elements.
The set of all such linear combinations is called the linear span, \(span (S)\).
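Written out, \(span (S)\) contains every finite combination

\[
a_1 s_1 + a_2 s_2 + \dots + a_k s_k
\]

with elements \(s_1, \dots , s_k\) taken from \(S\) and coefficients \(a_1, \dots , a_k\) taken from the underlying field of scalars.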
| {
"alphanum_fraction": 0.6994818653,
"avg_line_length": 19.3,
"ext": "tex",
"hexsha": "135fa69720f2fd5f370ec253c24ed08360c2cf50",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/geometry/linearAlgebra/01-02-span.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/geometry/linearAlgebra/01-02-span.tex",
"max_line_length": 92,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/geometry/linearAlgebra/01-02-span.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 49,
"size": 193
} |
\section{Graphics}
\slide{Full-slide image}
{
\insImgCenter{0.25}{pic/transistor}
\sourceRefUrl{https://upload.wikimedia.org/wikipedia/commons/e/e2/Transistor-die-KSY34.jpg} % image source (creative common license)
}
\slide{Image with source-reference link shifted}
{
\insImg{-0.30}{0.12}{0.31}{pic/transistor}
\insImg{0.50}{0.12}{0.25}{pic/transistor}
\vspace{15em}
\sourceRefUrlShifted{50em}{https://upload.wikimedia.org/wikipedia/commons/e/e2/Transistor-die-KSY34.jpg} % image source
}
\slide{Box around image part}
{
\begin{center}
\begin{tikzpicture}
\node[anchor=south west, inner sep=0] at (0,0) { \includegraphics[scale=0.25]{pic/transistor} };
\draw<1>[green,thick, rounded corners] (2.7, 1.6) rectangle (\textheight-3.2cm, 5);
\draw<2>[red, ultra thick, rounded corners] (3.9, 2.5) rectangle (4.8, 3.2);
\end{tikzpicture}
\end{center}
\sourceRefUrl{https://upload.wikimedia.org/wikipedia/commons/e/e2/Transistor-die-KSY34.jpg} % image source (creative common license)
}
\slide{Images on the same page}
{
\begin{center}
\includegraphics<1>[scale=0.125]{pic/transistor}
\pause
\includegraphics<2>[scale=0.25]{pic/transistor}
\end{center}
\sourceRefUrl{https://upload.wikimedia.org/wikipedia/commons/e/e2/Transistor-die-KSY34.jpg} % image source (creative common license)
}
\slide{Image decorations}
{
\begin{itemize}
\item Lorem ipsum dolor sit amet,
\item consectetur adipisicing elit,
\item sed do eiusmod tempor incididunt
\item ut labore et dolore magna aliqua.
\item Ut enim ad minim veniam, quis
\item nostrud exercitation ullamco laboris
\item nisi ut aliquip ex ea commodo consequat.
\item Duis aute irure dolor in reprehenderit
\item in voluptate velit esse cillum dolore
\item eu fugiat nulla pariatur.
\end{itemize}
\insImg{0.8}{0.2}{0.05}{pic/transistor}
\pause
\insImgFr{2-}{0.8}{0.55}{0.08}{pic/transistor}
}
\slide{Images interleaved with text}
{
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
\begin{center}
\includegraphics[scale=0.1]{pic/transistor}
\end{center}
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
}
\slide{PNGs out of SVGs}
{
\begin{center}
\includegraphics[scale=0.5]{svg/biohazard}
\end{center}
}
\slide{} % unnamed!
{
\begin{center}
\includegraphics[scale=0.33]{pic/transistor}
\end{center}
}
\slide{Keeping space for images}
{
\begin{block}{Note}
    Thanks to the use of \texttt{onslide}, image sizes are preserved even if the images are not displayed.
    Reserving the space means things do not float around between slide changes.
\end{block}
\begin{columns}
\begin{column}{0.33\textwidth}
\begin{center}
Column 1
\end{center}
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
Column 2
\end{center}
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
Column 3
\end{center}
\end{column}
\end{columns}
\begin{columns}
\begin{column}{0.33\textwidth}
\begin{center}
\onslide<2->{ \insImgCenter{0.1}{pic/transistor} }
\end{center}
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
\onslide<3->{ \insImgCenter{0.2}{svg/biohazard} }
\end{center}
\end{column}
\begin{column}{0.33\textwidth}
\begin{center}
\onslide<4->{ \insImgCenter{0.6}{pic/plantuml_logo} }
\end{center}
\end{column}
\end{columns}
}
| {
"alphanum_fraction": 0.74107683,
"avg_line_length": 22.9583333333,
"ext": "tex",
"hexsha": "8940847370284998c7d844560f078c005d7323f4",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-01-15T08:28:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-11-21T14:08:58.000Z",
"max_forks_repo_head_hexsha": "124a377a24e89cddb0531d1b87e56539c2793323",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "el-bart/beamer_cpp",
"max_forks_repo_path": "template/graphics.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "124a377a24e89cddb0531d1b87e56539c2793323",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "el-bart/beamer_cpp",
"max_issues_repo_path": "template/graphics.tex",
"max_line_length": 132,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "124a377a24e89cddb0531d1b87e56539c2793323",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "el-bart/beamer_cpp",
"max_stars_repo_path": "template/graphics.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-15T08:28:06.000Z",
"max_stars_repo_stars_event_min_datetime": "2016-01-14T06:20:13.000Z",
"num_tokens": 1049,
"size": 3306
} |
\documentclass{beamer}
\usepackage{comment}
\usepackage{color}
\usepackage{listings}
\usepackage{verbatim}
\usepackage{multicol}
\usepackage{booktabs}
\usepackage{xspace}
\usepackage{hyperref}
\hypersetup{colorlinks=true,citecolor=blue,filecolor=blue,linkcolor=blue,
urlcolor=blue,breaklinks=true}
\definecolor{green}{RGB}{0,128,0}
\def\EQ#1\EN{\begin{equation*}#1\end{equation*}}
\def\BA#1\EA{\begin{align*}#1\end{align*}}
\def\BS#1\ES{\begin{split*}#1\end{split*}}
\newcommand{\bc}{\begin{center}}
\newcommand{\ec}{\end{center}}
\newcommand{\eq}{\ =\ }
\newcommand{\degc}{$^\circ$C}
\def\p{\partial}
\def\qbs{\boldsymbol{q}}
\def\Dbs{\boldsymbol{D}}
\def\A{\mathcal A}
\def\gC{\mathcal C}
\def\gD{\mathcal D}
\def\gL{\mathcal L}
\def\M{\mathcal M}
\def\P{\mathcal P}
\def\Q{\mathcal Q}
\def\gR{\mathcal R}
\def\gS{\mathcal S}
\def\X{\mathcal X}
\def\bnabla{\boldsymbol{\nabla}}
\def\bnu{\boldsymbol{\nu}}
\renewcommand{\a}{{\alpha}}
%\renewcommand{\a}{{}}
\newcommand{\s}{{\sigma}}
\newcommand{\bq}{\boldsymbol{q}}
\newcommand{\bz}{\boldsymbol{z}}
\def\bPsi{\boldsymbol{\Psi}}
\def\Li{\textit{L}}
\def\Fb{\textbf{f}}
\def\Jb{\textbf{J}}
\def\cb{\textbf{c}}
\def\Dt{\Delta t}
\def\tpdt{{t + \Delta t}}
\def\bpsi{\boldsymbol{\psi}}
\def\dbpsi{\delta \boldsymbol{\psi}}
\def\bc{\textbf{c}}
\def\dbc{\delta \textbf{c}}
\def\arrows{\rightleftharpoons}
\newcommand{\bGamma}{\boldsymbol{\Gamma}}
\newcommand{\bOmega}{\boldsymbol{\Omega}}
%\newcommand{\bPsi}{\boldsymbol{\Psi}}
%\newcommand{\bpsi}{\boldsymbol{\psi}}
\newcommand{\bO}{\boldsymbol{O}}
%\newcommand{\bnu}{\boldsymbol{\nu}}
\newcommand{\bdS}{\boldsymbol{dS}}
\newcommand{\bg}{\boldsymbol{g}}
\newcommand{\bk}{\boldsymbol{k}}
%\newcommand{\bq}{\boldsymbol{q}}
\newcommand{\br}{\boldsymbol{r}}
\newcommand{\bR}{\boldsymbol{R}}
\newcommand{\bS}{\boldsymbol{S}}
\newcommand{\bu}{\boldsymbol{u}}
\newcommand{\bv}{\boldsymbol{v}}
%\newcommand{\bz}{\boldsymbol{z}}
\newcommand{\pressure}{P}
\def\water{H$_2$O}
\def\calcium{Ca$^{2+}$}
\def\copper{Cu$^{2+}$}
\def\magnesium{Mg$^{2+}$}
\def\sodium{Na$^+$}
\def\potassium{K$^+$}
\def\uranium{UO$_2^{2+}$}
\def\hion{H$^+$}
\def\hydroxide{0H$^-$}
\def\bicarbonate{HCO$_3^-$}
\def\carbonate{CO$_3^{2-}$}
\def\cotwo{CO$_2$(aq)}
\def\chloride{Cl$^-$}
\def\fluoride{F$^-$}
\def\phosphoricacid{HPO$_4^{2-}$}
\def\nitrate{NO$_3^-$}
\def\sulfate{SO$_4^{2-}$}
\def\souotwooh{$>$SOUO$_2$OH}
\def\sohuotwocothree{$>$SOHUO$_2$CO$_3$}
\def\soh{$>$SOH}
\newcommand{\pft}{PFLOTRAN\xspace}
\newcommand\add[1]{{{\color{blue} #1}}}
\newcommand\remove[1]{\sout{{\color{red} #1}}}
\newcommand\codecomment[1]{{{\color{green} #1}}}
\newcommand\redcolor[1]{{{\color{red} #1}}}
\newcommand\bluecolor[1]{{{\color{blue} #1}}}
\newcommand\greencolor[1]{{{\color{green} #1}}}
\newcommand\magentacolor[1]{{{\color{magenta} #1}}}
\newcommand\gehcomment[1]{{{\color{orange} #1}}}
\def\aligntop#1{\vtop{\null\hbox{#1}}}
\begin{comment}
\tiny
\scriptsize
\footnotesize
\small
\normalsize
\large
\Large
\LARGE
\huge
\Huge
\end{comment}
%\usetheme[height=7mm]{Rochester} % No navigation bar
\setbeamertemplate{blocks}[rounded][shadow=true]
\setbeamersize{text margin left=4mm, text margin right=4mm}
% Do this so there isn't so much white space before bullet items! --RTM
\beamertemplatenavigationsymbolsempty % to get rid of nav symbols
%\setbeamertemplate{frames}{}
\begin{document}
\title[Alquimia]{Alquimia: an API for ASCEM geochemistry}
\author[]{Ben Andre, Glenn Hammond, Sergi Molins, Carl Steefel}
\date{\today}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}{Template}
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frame{\titlepage}
%\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{The Alquimia Philosophy}
\Large
\begin{itemize}
\item What Alquimia \bluecolor{is}:
\begin{itemize}
\Large
\item An API and wrapper facilitating interfacing with existing, external
geochemistry codes as 3$^\text{rd}$ party libraries
%\item Geochemical democracy
\end{itemize}
\vspace{1cm}
\item What Alquimia \redcolor{is not}:
\begin{itemize}
\Large
\item A geochemistry library implementation
%\item Geochemical coercion
\end{itemize}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Alquimia Concept}
Provide a uniform application programming interface (API) through which (solute) transport simulators may couple to third-party geochemical reaction libraries. The API
\begin{itemize}
\item Provides a common set of primitive data structures to which data structures from 3$^\text{rd}$ party libraries may be mapped.
\item Provides common handles (routines) for
\begin{itemize}
\item Reading and initializing the geochemical basis and associated reactions.
\item Integrating a time step at a single grid cell.
\item Extracting geochemical concentrations.
\end{itemize}
\item Serves solely to pass information between transport and reaction components (i.e. it does not drive the geochemical solution procedure).
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Alquimia: Required Functionality}
\begin{itemize}
\item Handles for
\begin{itemize}
\item Reading of geochemical reaction data
\item Basis management (e.g. reading and setting the basis)
\item Constraint processing for geochemical conditions (e.g. boundary/initial conditions, source/sinks) with respect to the desired basis
\item Speciation
\item Reaction stepping in operator split mode
%\item Evaluation of local residual and Jacobian entries for grid cell
%\begin{itemize}
%\item Note that \emph{evaluation} does not construct the global Jacobian
%\end{itemize}
\item Delivery of secondary variables (e.g. pH, reaction rates, mineral saturation, etc.) at the request of the client
\end{itemize}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Division of Labor}
\begin{tabular}{c|c|c}
\aligntop{
\begin{minipage}{0.3\textwidth}
\textbf{Client (Amanzi)}
\begin{itemize}
\small
\item Process kernel implemented by developers of the transport code
\item Manage global storage
\item Loop through space
\item Unpack and move data from mesh dependent storage into Alquimia data transfer containers
\item Manage time stepping, sub-stepping, error handling etc.
\end{itemize}
\end{minipage}} &
\aligntop{
\begin{minipage}{0.3\textwidth}
\textbf{Alquimia wrapper}
\begin{itemize}
\small
\item Defines an engine independent API
\item Provides data munging or delivers raw data
\item \redcolor{No} geochemical calculations
\end{itemize}
\end{minipage}} &
\aligntop{
\begin{minipage}{0.3\textwidth}
\textbf{Engine}
\begin{itemize}
\small
\item Provides all geochemical functionality, i.e. basis
management, constraint processing, reaction stepping
\end{itemize}
\end{minipage}} \\
\end{tabular}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Constraint Processing}
\begin{itemize}
  \item Domain scientists rarely want to define the IC/BC for a problem
    purely in terms of total component concentrations.
  \item Often they want to specify the system in variables that are
    closer to observational data, for example:
\begin{itemize}
\item pH instead of Total $H^+$
\item aqueous $CO_2$ at equilibrium with the atmosphere
\item $Ca^{2+}$ in equilibrium with calcite
\item $Cl^-$ based on charge balance.
\end{itemize}
\item What does this mean for state, transport and input processing?
  \item Initial and boundary conditions are no longer simple lists of
    Dirichlet conditions.
  \item ``Geochemical Conditions'' must be preprocessed by the
    geochemistry engine to determine the appropriate Dirichlet condition
    to apply.
\item ...
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile,containsverbatim,allowframebreaks]{PFLOTRAN biogeochemical constraints}
\footnotesize
\begin{semiverbatim}
\bluecolor{CONSTRAINT} copper_leaching_initial_condition
\bluecolor{CONCENTRATIONS}
Na+ 5.0d-3 \bluecolor{T} \magentacolor{! Total component concentration}
K+ 2.5d-5 \bluecolor{M} Muscovite \magentacolor{! Equilibration with mineral}
Ca++ 6.5d-4 \bluecolor{M} Calcite
H+ 8.d0 \bluecolor{P} \magentacolor{! pH}
Cu++ 6.4d-9 \bluecolor{M} Chrysocolla
Al+++ 2.8d-17 \bluecolor{M} Kaolinite
Fe++ 1.2d-23 \bluecolor{M} Goethite
SiO2(aq) 1.8d-4 \bluecolor{M} Chalcedony
HCO3- -3.d0 \bluecolor{G} CO2(g) \magentacolor{! Equilibration with CO2(g)}
SO4-- 5.0d-4 \bluecolor{T DATASET} Sulfate \magentacolor{! Dataset name `random_so4'}
Cl- 3.7d-3 \bluecolor{Z} \magentacolor{! Charge balance}
O2(aq) -0.7d0 \bluecolor{G} O2(g)
\bluecolor{/}
...
\end{semiverbatim}
\newpage
\begin{semiverbatim}
...
\bluecolor{MINERALS} \magentacolor{vol. frac.} \magentacolor{spec. surf. area}
Alunite 0.d0 1.d0 cm^2/cm^3
Chrysocolla2 5.0d-3 1.d0 cm^2/cm^3
Goethite 2.5d-2 1.d0 cm^2/cm^3
Gypsum 0.d0 1.d0 cm^2/cm^3
Jarosite 0.d0 1.d0 cm^2/cm^3
Jurbanite 0.d0 1.d0 cm^2/cm^3
Kaolinite 5.0d-2 1.d0 cm^2/cm^3
Muscovite 5.0d-2 1.d0 cm^2/cm^3
SiO2(am) 0.d0 1.d0 cm^2/cm^3
Quartz 8.2d-1 1.d0 cm^2/cm^3
\bluecolor{/}
\bluecolor{/}
\end{semiverbatim}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[allowframebreaks]{Timeline (FY13)}
\begin{itemize}
\item Alquimia interface
\begin{itemize}
\item Preliminary API (pseudocode) - Dec
\item Batch chemistry driver - Feb
\item Process constraints with database - Mar
\item Reactive transport driver (OS) - Apr
\item Amanzi driven reactive transport - Jun
\end{itemize}
\item Geochemistry database
\begin{itemize}
\item With unexpected Akuna deliverables, no commitment until FY 14
\end{itemize}
\item Akuna interface
\begin{itemize}
\item Preliminary implementation - Mar
\end{itemize}
\newpage
\item Amanzi chemistry library
\begin{itemize}
\item Reaction step through Alquimia - Feb
\item Existing functionality supported through FY13
\end{itemize}
\item \pft
\begin{itemize}
\item Regression test suite - Done
\item Isolation of \pft chemistry - Done
\item \pft chemistry library
\begin{itemize}
\item Develop Alquimia linkage - Jan
\item Additional development/refactor as needed
\item Interface to new geochemical database - FY14
\end{itemize}
\end{itemize}
\item CrunchFlow chemistry library
\begin{itemize}
\item Staggered by a month with \pft
\end{itemize}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{\pft Geochemistry Flowchart}
\begin{itemize}
\item Initialization
\item Read input deck
\item Set basis (read database, form reaction network)
\item Equilibrate constraints
\item Reaction step
\item Speciate for output
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile,containsverbatim]{PFLOTRAN input specification example}
\footnotesize
\begin{tabular}{cc}
%\aligntop{
\begin{minipage}{0.5\textwidth}
\begin{semiverbatim}
\greencolor{:======== chemistry ========}
\bluecolor{CHEMISTRY}
\bluecolor{PRIMARY_SPECIES}
H+
HCO3-
Ca++
\bluecolor{/}
\bluecolor{SECONDARY_SPECIES}
OH-
CO3--
CO2(aq)
CaCO3(aq)
CaHCO3+
CaOH+
\bluecolor{/}
\bluecolor{GAS_SPECIES}
CO2(g)
\bluecolor{/}
...
\end{semiverbatim}
\end{minipage}
%}
%\aligntop{
\begin{minipage}{0.5\textwidth}
\begin{semiverbatim}
...
\bluecolor{MINERALS}
Calcite
\bluecolor{/}
\bluecolor{MINERAL_KINETICS}
Calcite
\bluecolor{RATE_CONSTANT} 1.d-6
\bluecolor{/}
\bluecolor{/}
\bluecolor{DATABASE} ./hanford.dat
\bluecolor{LOG_FORMULATION}
\greencolor{: OPERATOR_SPLITTING}
\bluecolor{ACTIVITY_COEFFICIENTS TIMESTEP}
\bluecolor{OUTPUT}
PH
all
\bluecolor{/}
\bluecolor{END}
\end{semiverbatim}
\end{minipage}
%}
\end{tabular}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile,containsverbatim,allowframebreaks]{PFLOTRAN biogeochemical constraints: IFRC}
\footnotesize
\begin{semiverbatim}
\bluecolor{CONSTRAINT} ifrc_initial_condition_SZ
\bluecolor{CONCENTRATIONS}
H+ 7.3 \bluecolor{P}
Ca++ 1.2d-3 \bluecolor{T}
Mg++ 5.1d-4 \bluecolor{T}
UO2++ 5.d-7 \bluecolor{T DATASET} Initial_U
K+ 1.6d-4 \bluecolor{T}
Na+ 1.0d-3 \bluecolor{Z}
HCO3- 2.6d-3 \bluecolor{T}
Cl- 7.0d-4 \bluecolor{T}
SO4-- 6.4d-4 \bluecolor{T}
Tracer 1.e-7 \bluecolor{F}
\bluecolor{/}
\bluecolor{MINERALS}
Calcite \bluecolor{DATASET} Calcite_Vol_Frac 1.
Metatorbernite 0.0 1.
\bluecolor{/}
\bluecolor{/}
\end{semiverbatim}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile,containsverbatim,allowframebreaks]{PFLOTRAN biogeochemistry OO Design}
\scriptsize
%\footnotesize
\begin{semiverbatim}
\bluecolor{type}, \bluecolor{public} :: reaction_type
...
\bluecolor{integer} :: neqcplx \greencolor{! # sec aq complexes}
\bluecolor{integer}, \bluecolor{pointer} :: eqcplxspecid(:,:) \greencolor{! pri spec ids in cplx}
\bluecolor{real*8}, \bluecolor{pointer} :: eqcplxstoich(:,:) \greencolor{! stoich of pri spec}
\bluecolor{real*8}, \bluecolor{pointer} :: eqcplx_lnK(:) \greencolor{! nat log of equil coef}
...
\bluecolor{end} reaction_type
\bluecolor{type}, \bluecolor{public} :: reactive_transport_auxvar_type
...
\bluecolor{real*8}, \bluecolor{pointer} :: pri_molal(:) \greencolor{! pri spec conc [m]}
\bluecolor{real*8}, \bluecolor{pointer} :: total(:,:) \greencolor{! tot comp conc [M]}
\bluecolor{real*8}, \bluecolor{pointer} :: sec_molal(:) \greencolor{! sec aq complex conc [m]}
\bluecolor{real*8}, \bluecolor{pointer} :: pri_act_coef(:) \greencolor{! pri act coef}
\bluecolor{real*8}, \bluecolor{pointer} :: sec_act_coef(:) \greencolor{! sec act coef}
...
\bluecolor{end type} reactive_transport_auxvar_type
\end{semiverbatim}
\newpage
\tiny
\begin{semiverbatim}
\bluecolor{integer} :: i, icomp, icplx, ncomp, iphase
\bluecolor{real*8} :: ln_act(:), lnQK
\bluecolor{type}(reactive_transport_auxvar_type), \bluecolor{pointer} :: rt_axuvar
\bluecolor{type}(reaction_type), \bluecolor{pointer} :: reaction
ln_act(:) = \bluecolor{log}(rt_auxvar%pri_molal(:)) + &
\bluecolor{log}(rt_auxvar%pri_act_coef(:))
\bluecolor{do} icplx = 1, reaction%neqcplx ! for each sec aq complex
\greencolor{! calculate secondary aqueous complex concentration}
lnQK = -reaction%eqcplx_lnK(icplx)
ncomp = reaction%eqcplxspecid(0,icplx)
\bluecolor{do} i = 1, ncomp
icomp = reaction%eqcplxspecid(i,icplx)
lnQK = lnQK + reaction%eqcplxstoich(i,icplx) * &
ln_act(icomp)
\bluecolor{enddo}
rt_auxvar%sec_molal(icplx) = \bluecolor{exp}(lnQK) / &
rt_auxvar%sec_act_coef(icplx)
\greencolor{! add complex to total component concentration}
\bluecolor{do} i = 1, ncomp
icomp = reaction%eqcplxspecid(i,icplx)
rt_auxvar%total(icomp,iphase) = &
rt_auxvar%total(icomp,iphase) + &
reaction%eqcplxstoich(i,icplx)* &
rt_auxvar%sec_molal(icplx)
\bluecolor{enddo}
\bluecolor{enddo}
\end{semiverbatim}
\newpage
%\scriptsize
\footnotesize
\begin{semiverbatim}
\greencolor{! initialization of reaction network and basis}
\bluecolor{call} DatabaseRead(reaction,option)
\bluecolor{call} BasisInit(reaction,option)
...
\greencolor{! equilibration of constraint and assignment to auxvar object}
\bluecolor{call} ReactionEquilibrateConstraint(rt_auxvar,global_auxvar, &
reaction, &
constraint_coupler, &
porosity, &
PETSC_FALSE,option)
...
\greencolor{! computing reaction contribution to residual and Jacobian}
\bluecolor{call} RReaction(residual,J,PETSC_FALSE, &
rt_aux_vars(ghosted_id), &
global_aux_vars(ghosted_id), &
porosity(ghosted_id), &
volume(local_id),reaction,option)
\end{semiverbatim}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
| {
"alphanum_fraction": 0.599456374,
"avg_line_length": 33.3847549909,
"ext": "tex",
"hexsha": "f25aae6c1b1227fb88b35e4816afbeedca978e1c",
"lang": "TeX",
"max_forks_count": 27,
"max_forks_repo_forks_event_max_datetime": "2021-10-04T21:49:16.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-11-03T18:12:46.000Z",
"max_forks_repo_head_hexsha": "2ee3bcfacc63f685864bcac2b6868b48ad235225",
"max_forks_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_forks_repo_name": "smolins/alquimia-dev",
"max_forks_repo_path": "doc/presentations/Workshop_Slides_Nov-12/alquimia.tex",
"max_issues_count": 37,
"max_issues_repo_head_hexsha": "2ee3bcfacc63f685864bcac2b6868b48ad235225",
"max_issues_repo_issues_event_max_datetime": "2021-04-07T05:20:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-12-01T20:58:48.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_issues_repo_name": "smolins/alquimia-dev",
"max_issues_repo_path": "doc/presentations/Workshop_Slides_Nov-12/alquimia.tex",
"max_line_length": 168,
"max_stars_count": 13,
"max_stars_repo_head_hexsha": "2ee3bcfacc63f685864bcac2b6868b48ad235225",
"max_stars_repo_licenses": [
"BSD-3-Clause-LBNL"
],
"max_stars_repo_name": "smolins/alquimia-dev",
"max_stars_repo_path": "doc/presentations/Workshop_Slides_Nov-12/alquimia.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-20T21:55:37.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-12-09T18:35:14.000Z",
"num_tokens": 5435,
"size": 18395
} |
%==============================================================
\section{Big Data}\label{bdf}
After evaluating different data sources and presenting various methods to extract and process different audio features, the following section describes the data analysis with Big Data processing frameworks like Apache Spark~\cite{spark} and Hadoop~\cite{hadoop}. Most of the basic information on Hadoop and Spark in the next few sections is taken from the book "Data Analytics with Spark using Python" by Jeffrey Aven, which gives a very comprehensible and practical introduction to the field of Big Data processing with PySpark~\cite{sparkbook1}.
%Later, Section~\ref{bds1} deals with the implementation of the various similarity measurements with Spark, the handling of larger amounts of data, runtime analysis and the combination of multiple similarity measurements, while Chapter~\ref{bds2} gives a short overview over the achieved results using the Big Data framework to compare audio features.
\subsection{Hadoop}
With the ever-growing availability of huge amounts of high-dimensional data, the need for toolkits and efficient algorithms to handle such data has grown over the past years. One key to handling Big Data problems is the use of parallelism.\\
Search engine providers like Google and Yahoo first ran into the problem of "internet-scale" data in the early 2000s, when they were faced with storing and processing the ever-growing amount of indexes built from documents on the internet. In 2003, Google presented their white paper called "The Google File System"~\cite{gfs}. MapReduce is a programming paradigm introduced by Google as an answer to the problem of internet-scale data and dates back to 2004, when the paper "MapReduce: Simplified Data Processing on Large Clusters" was published~\cite{mapreduce1}.\\
Doug Cutting and Mike Cafarella worked on a web crawler project called "Nutch" during that time. Inspired by the two papers, Cutting incorporated the storage and processing principles from Google, leading to what we know as Hadoop today. Hadoop joined the Apache Software Foundation in 2006. The MapReduce programming paradigm for data processing is the core concept used by Hadoop.~\cite[p. 6]{sparkbook1}\\
Hadoop is a scalable solution capable of running on large computer clusters. It does not necessarily require a supercomputing environment and is able to run on clusters of lower-cost commodity hardware. The data is stored redundantly on multiple nodes, with a configurable replication factor defining how many copies of each data chunk are kept on other nodes. This enables a simple form of error handling where faulty operations can simply be restarted.\\
Hadoop is based on the idea of data locality. In contrast to the usual approach, where the data is requested from its location and transferred to a remote processing system or host, Hadoop brings the computation to the data instead. This minimizes the problem of data transfer times over the network at compute time when working with very large-scale data / Big Data. One prerequisite is that the operations on the data are independent of each other. Hadoop follows this approach called "shared nothing", where data is processed locally in parallel on many nodes at the same time by splitting the data into independent, small subsets without the need for communication with other nodes. Additionally, Hadoop is a schemaless (schema-on-read) system which means that it is able to store and process unstructured, semi-structured (JSON, XML), or well structured data (relational database).~\cite[p. 7]{sparkbook1}\\
To make all this possible, Hadoop relies on its core components YARN (Yet Another Resource Negotiator) as the processing and resource scheduling subsystem and the Hadoop Distributed File System (HDFS) as Hadoop's data storage subsystem.\\
\subsubsection{MapReduce}
Figure~\ref{mapred} shows the basic scheme of a MapReduce program.
\FloatBarrier
\begin{figure}[htbp]
\centering
\framebox{\parbox{1\textwidth}{
%Image based on: https://commons.wikimedia.org/wiki/File:Mapreduce.png
\begin{tikzpicture}[node distance = 4cm][every node/.style={thick}]
\colorlet{coul0}{orange!20} \colorlet{coul1}{blue!20} \colorlet{coul2}{red!20} \colorlet{coul3}{green!20}
\tikzstyle{edge}=[->, very thick]
\draw[thick, fill=violet!30] (-1, -2) rectangle node[rotate=90] {\textbf{Input data}} (0,2);
\foreach \i in {0,1,2,3} {
\node[draw, fill=coul\i, xshift=2em] (data\i) at (1.5, 1.5 - \i) {Input};
\node[ellipse, draw, fill=cyan!20, xshift=2em] (map\i) at (3.5, 1.5 - \i) {\textsf{Map}};
\draw[edge] (0,0) -- (data\i.west);
\draw[edge] (data\i) -- (map\i);
}
\node[draw, minimum height=2cm, fill=purple!30, xshift=7em] (resultat) at (10, 0) {\textbf{Results}};
\foreach \i in {0,1,2} {
\node[draw, fill=yellow!20, minimum width=2cm, xshift=4em] (paire\i) at (5.5, 1.5 - \i*1.5) {\begin{minipage}{1cm}Tuples \centering $\langle k,v \rangle$\end{minipage}};
\node[ellipse, draw, fill=cyan!20, xshift=6em] (reduce\i) at (7.5, 1.5 - \i*1.5) {\textsf{Reduce}};
\draw[edge] (paire\i) -- (reduce\i);
\draw[edge] (reduce\i.east) -- (resultat);
}
%paire
\draw[edge] (map0.east) -- (paire0.west); \draw[edge] (map0.east) -- (paire1.west);
\draw[edge] (map1.east) -- (paire0.west); \draw[edge] (map1.east) -- (paire2.west);
\draw[edge] (map2.east) -- (paire1.west); \draw[edge] (map2.east) -- (paire0.west);
\draw[edge] (map3.east) -- (paire1.west); \draw[edge] (map3.east) -- (paire2.west);
\end{tikzpicture}
}}
\caption{MapReduce algorithm~\cite{mapred1im}}
\label{mapred}
\end{figure}
\FloatBarrier
\noindent In the first stage, the input data is split into chunks and distributed over the nodes of a cluster. This is usually managed by a distributed file system like the HDFS. One master node stores the addresses of all data chunks.\\
The data is then fed into the mappers, which operate on the input data and finally transform it into key-value tuples.\\
In an intermediate step the key-value pairs are usually grouped by their keys before being fed into the reducers. The reducers apply another function to all tuples that share the same key.\\
The number of key-value pairs at the output of all mappers divided by the number of input files is called the "replication rate" ($r$). The highest count of values for one key being fed into a reducer can be denoted as $q$ (reducer size). Usually, there is a trade-off between a high replication rate $r$ and a small reducer size $q$ (highly parallel with more network traffic) or a small $r$ and a larger $q$ (less network traffic but worse parallelism due to an overall smaller reducer count).
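To make the two phases concrete, the following minimal, single-machine Python sketch imitates the map, group-by-key and reduce stages for a generic word-count task. It is only an illustration of the programming model and is not part of the later recommender system.
\begin{pythonCode}[frame=single,label={lst:mapredsketch},caption={Single-machine sketch of the MapReduce phases (word count)},captionpos=b]
from collections import defaultdict

def mapper(document):
    # map phase: emit one <key, value> tuple per word
    return [(word, 1) for word in document.split()]

def reducer(key, values):
    # reduce phase: aggregate all values belonging to one key
    return (key, sum(values))

documents = ["spark hadoop spark", "hadoop yarn"]
# every input chunk can be mapped independently (and therefore in parallel)
mapped = [pair for doc in documents for pair in mapper(doc)]
# intermediate step: group the key-value tuples by their keys
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)
# one reducer call per key
results = [reducer(key, values) for key, values in grouped.items()]
print(results)  # [('spark', 2), ('hadoop', 2), ('yarn', 1)]
\end{pythonCode}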
\subsection{Spark}\label{sparksec}
Hadoop as a Big Data processing framework has a few downsides compared to other, newer options like Spark. The Spark project was started in 2009 and was created as a part of the Mesos research project. It was developed as an alternative to the implementation of MapReduce in Hadoop. Spark is written in the programming language Scala~\cite{scalalang} and runs in Java Virtual Machines (JVM) but also provides native support for programming interfaces in Python, Java and R. One major advantage compared to Hadoop is the efficient way of caching intermediate data to the main memory instead of writing it onto the hard drive. While Hadoop has to read all data from the disk and writes all results back to the disk, Spark can efficiently take advantage of the RAM available in the different nodes, making it suitable for interactive queries and iterative machine learning operations. To be able to offer these kinds of in-memory operations Spark uses a structure called "Resilient Distributed Dataset" (RDD).~\cite[p. 13]{sparkbook1}\\
Figure~\ref{dataloc} shows the simplified architecture of a compute cluster running Spark.
\begin{figure}[htbp]
\centering
\framebox{\parbox{1\textwidth}{
\begin{tikzpicture}
\tikzstyle{bigbox} = [draw=blue!60, blur shadow={shadow blur steps=5}, minimum size=2cm, thick, fill=blue!10, rounded corners, rectangle]
\tikzstyle{box} = [draw=black!40!blue, minimum size=0.6cm, rounded corners,rectangle, fill=blue!50]
\tikzstyle{box2} = [draw=black!60!blue, minimum size=0.6cm, rounded corners,rectangle, fill=blue!10]
\tikzstyle{box3} = [draw=blue!80, minimum size=0.6cm, rounded corners,rectangle, fill=blue!10]
\node[server](server 1){};
\node[server, right of= server 1, xshift=3cm](server 2){};
\node[server, right of= server 2, xshift=3cm](server 3){};
\node[rack switch, above of=server 2,xshift=0.1cm,yshift=0.25cm]
(rack switch 1){};
\node[box, above of=rack switch 1, xshift=0cm, yshift=0.95cm](cm){Cluster Manager};
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.25cm, xshift=-0.01cm] [fit = (cm)](ma){\\ \ \\Master};
\end{pgfonlayer}
\node[server, above of=ma, yshift=0.75cm](servermaster){};
\node[box, above of=servermaster, xshift=0cm, yshift=1.25cm](sc){SparkSession (SparkContext, SparkConf)};
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.25cm, xshift=-0.01cm] [fit = (sc)](dr){\\ \ \\Driver};
\end{pgfonlayer}
\node[box, below of=server 3, xshift=-3.25em, yshift=-0.25cm](e1){Executor 1};
\node[box, below of=server 3, xshift=3.25em, yshift=-0.25cm](e2){Executor 2};
\node[box, below of=server 2, xshift=0em, yshift=-0.25cm](e3){Executor 3};
\node[box, below of=server 1, xshift=-3.25em, yshift=-0.25cm](e4){Executor 4};
\node[box, below of=server 1, xshift=3.25em, yshift=-0.25cm](e5){Executor 5};
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.35cm, xshift=-0.01cm] [fit = (e1)](mem1){\\ \ \\Memory};
\end{pgfonlayer}
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.35cm, xshift=0.01cm] [fit = (e2)](mem2){\\ \ \\Memory};
\end{pgfonlayer}
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.35cm] [fit = (e3)](mem3){\\ \ \\Memory};
\end{pgfonlayer}
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.35cm, xshift=-0.01cm] [fit = (e4)](mem4){\\ \ \\Memory};
\end{pgfonlayer}
\begin{pgfonlayer}{background}
\node[bigbox, yshift=-0.35cm, xshift=0.01cm] [fit = (e5)](mem5){\\ \ \\Memory};
\end{pgfonlayer}
\node[box2, below of=mem1, yshift=-0.01cm](t1){Task 1};
\node[box2, below of=mem2, yshift=-0.01cm](t2){Task 2};
\node[box2, below of=mem3, yshift=-0.01cm](t3){Task 3};
\node[box2, below of=mem4, yshift=-0.01cm](t4){Task 4};
\node[box2, below of=mem5, yshift=-0.01cm](t5){Task 5};
\node[box2, below of=t2, yshift=0.3cm](t6){Task 6};
\node[box2, below of=t4, yshift=0.3cm](t7){Task 7};
\node[box2, below of=t5, yshift=0.3cm](t8){Task 8};
\draw[thick,black!60!green] (t2.south)--(t6);
\draw[thick,black!60!green] (t4.south)--(t7);
\draw[thick,black!60!green] (t5.south)--(t8);
\draw[thick,darkgray!10!gray] (servermaster.north)--(dr.south);
\draw[thick,darkgray!10!gray] (servermaster.south)--(ma.north);
\draw[thick,darkgray!10!gray] (ma.south)--(rack switch 1.north);
\draw[thick,darkgray!10!gray] (server 1.north)--(rack switch 1);
\draw[thick,darkgray!10!gray] (server 2.north)--(rack switch 1);
\draw[thick,darkgray!10!gray] (server 3.north)--(rack switch 1);
\draw[thick,darkgray!10!gray] (server 3.south)--(e1);
\draw[thick,darkgray!10!gray] (server 3.south)--(e2);
\draw[thick,darkgray!10!gray] (server 2.south)--(e3);
\draw[thick,darkgray!10!gray] (server 1.south)--(e4);
\draw[thick,darkgray!10!gray] (server 1.south)--(e5);
% = = = = = = = = = = = = = = = =
% Labels
% = = = = = = = = = = = = = = = =
\node[box3, xshift=-6.1cm,yshift=0.3cm,left of = sc,align=left](lev1){\textbf{Main Program}};
\node[box3, xshift=-6.25cm,yshift=0.3cm,left of = servermaster,align=left](lev2){\textbf{Master Node}};
\node[box3, xshift=-5.85cm,yshift=0.3cm,left of = cm,align=left](lev3){\textbf{Cluster Manager}};
\node[box3, xshift=-6.9cm,yshift=0.3cm,left of = rack switch 1,align=left](lev4){\textbf{Switch}};
\node[box3, xshift=-2.185cm,yshift=0.3cm,left of = server 3,align=left](lev5){\textbf{Worker Nodes}};
\node[box3, xshift=-1.325cm,yshift=0.3cm,left of = e1,align=left](lev6){\textbf{Executors}};
\end{tikzpicture}
}}
\caption{Spark cluster scheme (according to~\cite[p. 46]{sparkbook1})}
\label{dataloc}
\end{figure}
\noindent The core components of a Spark application are the Driver, the Master, the Cluster Manager, and the Executors. The Driver is the process to which clients submit their applications. It is responsible for the planning and execution of a Spark program and returns status logs and results to the clients. It can be located on a remote client or on a node in the cluster. The SparkSession is created by the Driver and represents a connection to a Spark cluster. The SparkContext and SparkConf as child objects of the SparkSession contain the necessary information to configure the cluster parameters, e.g., the number of CPU cores and memory assigned to the Executors and the number of Executors that get spawned overall on the cluster. Up until version 2.0, entry points for Spark applications included the SparkContext, SQLContext, HiveContext, and StreamingContext. In more recent versions these were combined into one SparkSession object providing a single entry point.
The execution of the Spark application is scheduled, and directed acyclic graphs (DAG) are created by the Spark Driver. The nodes of these DAGs represent transformational or computational steps on the data. These DAGs can be visualized using the Spark application UI typically running on port 4040 of the Driver node. The Spark application UI is a useful tool to improve the performance of Spark applications and for debugging, as it also gives information about the computation time of the distinct tasks within a Spark program.~\cite[pp. 45ff]{sparkbook1}\\
\begin{figure}[htbp]
\captionsetup[subfigure]{justification=centering}
\centering
\framebox{\parbox{1\textwidth}{
\begin{subfigure}{.5\textwidth}
\captionsetup{justification=centering}
\captionsetup[subfigure]{justification=centering}
\centering
\includegraphics[scale=0.23]{Images/Spark/df_slow_chroma.png}
\caption{\noindent Event timeline}
\label{sui2}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\captionsetup{justification=centering}
\captionsetup[subfigure]{justification=centering}
\centering
\includegraphics[scale=0.26]{Images/Spark/toPandasDag.png}
\caption{\noindent DAG}
\label{sui3}
\end{subfigure}%
}}
\caption{Spark application UI examples taken from the recommender system}
\label{appui}
\end{figure}
\FloatBarrier
\noindent Two examples of information provided by the Spark application UI are shown in Figure~\ref{appui}, with Figure~\ref{sui2} showing the event timeline for a poorly optimized code snippet, where a single collect operation takes multiple minutes. Figure~\ref{sui3} gives an example of an optimized DAG. %returning recommendations from one song request into a *.csv file.\\
\noindent The Workers are the nodes in the cluster on which the actual computation of the Spark DAG tasks takes place. As defined within the SparkConf, the Worker nodes spawn a finite or fixed number of Executors that reserve CPU and memory resources and run in parallel. The Executors are hosted in JVMs on the Workers. Finally, the Spark Master and the Cluster Manager are the processes that monitor, reserve and allocate the resources for the Executors. Spark can work on top of various Cluster Managers like Apache Mesos, Hadoop, YARN, and Kubernetes. Spark can also work in standalone mode, where the Spark Master also takes control of the Cluster Managers' tasks. If Spark is running on top of a Hadoop cluster, it uses the YARN ResourceManager as the Cluster Manager, and the ApplicationMaster as the Spark Master. The ApplicationMaster is the first task allocated by the ResourceManager and negotiates the resources (containers) for the Executors and makes them available to the Driver.~\cite[pp. 49 ff]{sparkbook1}\\
When running on top of a Hadoop installation, Spark can additionally take advantage of the HDFS by reading data directly out of it.\\
\subsubsection{Cluster Configuration and Execution}\label{cconfexp}
There are multiple options for passing a Spark program to the cluster. The first one is to use a Spark shell, e.g. by calling \lstinline{pyspark} when working with the Spark Python API or \lstinline{spark-shell} for use with Scala. If the interactive option of using a Spark shell is chosen, a SparkSession is created automatically and closed again once the Spark shell is exited.
Alternatively, the Spark application can be passed to the cluster directly, using \lstinline{spark-submit application.py -options} (Python).
As mentioned previously, the configuration of the Spark cluster can be changed. This can either be done by using a cluster configuration file (e.g. spark-defaults.conf), by submitting the parameters as arguments passed to pyspark, spark-shell or spark-submit, or by directly setting the configuration properties inside the Spark application code (see Code Snippet~\ref{lst:cconf}).\\
\begin{pythonCode}[frame=single,label={lst:cconf},caption={Example cluster configuration Python},captionpos=b]
confCluster = SparkConf().setAppName("MusicSimilarity Cluster")
confCluster.set("spark.executor.memory", "1g")
confCluster.set("spark.executor.cores", "1")
sc = SparkContext(conf=confCluster)
sqlContext = SQLContext(sc)
spark = SparkSession.builder.master("cluster").appName("MusicSimilarity").getOrCreate()
\end{pythonCode}
\noindent In the code snippet, each Executor gets 1GB of RAM and 1 CPU core assigned by setting the corresponding parameters in the \lstinline{confCluster} object. The SparkContext is saved into the object \lstinline{sc}, and \lstinline{sqlContext} contains the SQLContext object.
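The same resources can also be requested directly on the command line when submitting the application, e.g. with \lstinline{spark-submit --executor-memory 1g --executor-cores 1 application.py}.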
\subsubsection{Spark Advantages}
For this thesis, the programming language of choice is Python. With its high-level Python API, Spark applications can access commonly known and widely used Python libraries such as Numpy or Scipy. It also contains its own powerful libraries like the Spark ML library for machine learning applications or GraphX for the work with large graphs.\\
Spark can be used in combination with SQL (e.g., the Hive project) and NoSQL Systems like Cassandra and HBase. Spark SQL enables the transformation of RDDs to well structured DataFrames. The DataFrame concept is later used in Section~\ref{bds1}.\\
One other important concept Spark uses is its lazy evaluation or lazy execution. Spark differentiates between data transformations (e.g. \lstinline{filter()}, \lstinline{join()}, and \lstinline{map()}) and actions (e.g. \lstinline{take()} or \lstinline{count()}). The actual processing and transformation of data is deferred until an action is called.
\begin{pythonCode}[frame=single,label={lst:lev},caption={Lazy evaluation},captionpos=b]
chroma = sc.textFile("features.txt").repartition(repartition_count)
chroma = chroma.map(lambda x: x.split(';'))
chroma = chroma.filter(lambda x: x[0] == "OrbitCulture_SunOfAll.mp3")
chroma = chroma.count()
\end{pythonCode}
\noindent In the example Code Snippet~\ref{lst:lev} a text file \lstinline{"features.txt"} gets read into an RDD \lstinline{chroma} and repartitioned into \lstinline{repartition_count} blocks. The \lstinline{map()} transformation splitting the feature vectors and the \lstinline{filter()} transformation that searches for a specific file ID are only executed once the \lstinline{count()} action is called. Only then is a DAG created together with logical and physical execution plans, and the tasks are distributed across the Executors. The lazy evaluation allows Spark to combine as many operations as possible, which may lead to a drastic reduction of processing stages and data shuffling (data transferred between Executors), thus reducing unnecessary overhead and network traffic. But the lazy execution has to be kept in mind during debugging and performance testing.~\cite[p.73]{sparkbook1}
\noindent Another important part of Spark is its ability to process streaming data. Hadoop is good at batch processing very large datasets but rather slow when it comes to iterative tasks on the same data due to its persistent write operations to the hard drive; Spark outperforms Hadoop here with its capability to use RDDs and the main memory during iterative tasks.
Spark Streaming adds the possibility to process data streams, e.g., from social networks, in real time.\\
The combination of batch- and stream-processing methods is called "Lambda architecture" in the data science literature. It describes a data-processing architecture consisting of a Batch-Layer, a Speed-Layer for real-time processing and a Serving-Layer managing the data~\cite[pp. 8f]{nextgenbig}. Spark offers the possibility to take care of both batch- and stream-processing jobs. Combined with other frameworks like the Apache SMACK stack (Spark, Mesos, Akka, Cassandra, and Kafka), Spark offers plenty of possibilities for high-throughput Big Data processing~\cite[p. 5]{smack}.\\
This thesis primarily focuses on batch processing and finding similar items. But the possibility to pass song titles to Spark in real time and to get recommendation lists of similar songs back within a few seconds could be a long-term goal of future work.\\
\subsection{Music Similarity with Big Data Frameworks}
The similarities can be calculated as "one-to-many-items" similarities. That means that for only one song at a time the similarities to all other songs have to be calculated. This is the approach investigated in this thesis. The other option would be to pre-calculate a full similarity matrix (All-pairs similarity). But looking at large-scale datasets with millions of songs, this would take a considerable amount of time. A combination of both approaches would be to calculate the similarities for one song request at a time but store these similarities into a sparse similarity matrix once they got computed to speed up subsequent requests of the same songs. But this is beyond the scope of this thesis.\\
Given the short introduction to Big Data frameworks, the decision to use Spark for the computation of the similarities between audio features can be justified as follows.\\
The computation of the "one-to-many-item" similarity follows the shared nothing approach of Spark. All of the features from different songs are independent of each other, and the distances can be computed in parallel. Only the scaling of the result requires an aggregation of maximum and minimum values, and to return the top results, a sorting step has to be performed. But apart from these operations that require data shuffling, all the features can be distributed on a cluster and the similarity to one broadcast song can be calculated independently, following the data locality principle. This offers a fully scalable solution for very large datasets. Additionally, Spark enables efficient ways to cache the audio feature data in the main memory. Under the prerequisite that all features of all songs together fit into the main memory of the cluster, interactive consecutive song requests could be answered without the need to read the features from the hard drive every time.
One limitation is that Spark itself is unable to read and handle audio files. The feature extraction itself has to be performed separately, and only the extracted features are loaded into the cluster and processed with Spark. The feature extraction process is later described in Section~\ref{simmet}.\\
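As an illustration of this one-to-many approach, the following sketch broadcasts the feature vector of one request song and computes the distances to all remaining songs in parallel. It is only a minimal example: the assumed line format (file ID, then a comma-separated feature vector, separated by a semicolon), the use of the Euclidean distance and the simple min-max scaling are placeholders for the actual feature formats and similarity measures described in Section~\ref{simmet}.
\begin{pythonCode}[frame=single,label={lst:onetomany},caption={Illustrative one-to-many distance computation with PySpark},captionpos=b]
import numpy as np

# read the extracted features and keep them cached in the main memory
features = sc.textFile("features.txt").map(lambda x: x.split(';'))
features = features.map(lambda x: (x[0], np.array(x[1].split(','), dtype=float)))
features = features.cache()

# broadcast the feature vector of the requested song to all Executors
request = features.filter(lambda x: x[0] == "OrbitCulture_SunOfAll.mp3").first()
request_vec = sc.broadcast(request[1])

# compute the distance to every other song independently (shared nothing)
distances = features.map(lambda x: (x[0], float(np.linalg.norm(x[1] - request_vec.value))))

# scale the distances to [0, 1] and return the closest songs
d_min = distances.map(lambda x: x[1]).min()
d_max = distances.map(lambda x: x[1]).max()
scaled = distances.map(lambda x: (x[0], (x[1] - d_min) / (d_max - d_min)))
print(scaled.takeOrdered(10, key=lambda x: x[1]))
\end{pythonCode}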
| {
"alphanum_fraction": 0.7634627626,
"avg_line_length": 106.628959276,
"ext": "tex",
"hexsha": "4eb0fc99ebf36a57f845d97f4f931bfcaff4b745",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "223ea464015608ce23c9e856963b4a3702617f73",
"max_forks_repo_licenses": [
"Unlicense"
],
"max_forks_repo_name": "oObqpdOo/MusicSimilarity",
"max_forks_repo_path": "CH1_2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "223ea464015608ce23c9e856963b4a3702617f73",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Unlicense"
],
"max_issues_repo_name": "oObqpdOo/MusicSimilarity",
"max_issues_repo_path": "CH1_2.tex",
"max_line_length": 1034,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "223ea464015608ce23c9e856963b4a3702617f73",
"max_stars_repo_licenses": [
"Unlicense"
],
"max_stars_repo_name": "oObqpdOo/MusicSimilarity",
"max_stars_repo_path": "CH1_2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6309,
"size": 23565
} |
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
]{article}
\usepackage{amsmath,amssymb}
\usepackage{lmodern}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\usepackage[margin=1in]{geometry}
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
\ifluatex
\usepackage{selnolig} % disable illegal ligatures
\fi
\author{}
\date{\vspace{-2.5em}}
\begin{document}
\hypertarget{operations-research.-laboratory-session}{%
\section{Operations Research. Laboratory
Session}\label{operations-research.-laboratory-session}}
\hypertarget{heuristic-optimization-with-metaheur.}{%
\section{\texorpdfstring{Heuristic Optimization with
\texttt{metaheuR}.}{Heuristic Optimization with metaheuR.}}\label{heuristic-optimization-with-metaheur.}}
\hypertarget{experimenting-with-different-neighborhoods-in-local-search}{%
\section{(Experimenting with different neighborhoods in Local
Search)}\label{experimenting-with-different-neighborhoods-in-local-search}}
\hypertarget{the-lop-problem}{%
\section{(The LOP problem)}\label{the-lop-problem}}
The aim of this laboratory session is to continue learning how to
implement heuristic algorithms for solving combinatorial optimization
problems using the \texttt{metaheuR} package in R. Be careful: there is
also an R package called \texttt{metaheur}, but that is not the one
we will use.
\hypertarget{installing-metaheur}{%
\section{\texorpdfstring{1. Installing
\texttt{metaheuR}}{1. Installing metaheuR}}\label{installing-metaheur}}
You can install it directly from RStudio. Download the file
\texttt{metaheuR\_0.3.tar.gz} from the eGela platform to a working
directory in your computer. I saved it here:
\texttt{/Users/JosuC/Desktop}
To install the package, write the path that corresponds to the working
directory where you saved the file \texttt{metaheuR\_0.3.tar.gz} and
execute the following commands:
\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{if}\NormalTok{ (}\SpecialCharTok{!}\FunctionTok{require}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{)) }\FunctionTok{install.packages}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{, }\AttributeTok{dependencies=}\ConstantTok{TRUE}\NormalTok{)}
\FunctionTok{library}\NormalTok{(ggplot2)}
\FunctionTok{library}\NormalTok{(metaheuR)}
\end{Highlighting}
\end{Shaded}
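Note that the previous block only installs \texttt{ggplot2} and loads the
libraries. One possible way to install \texttt{metaheuR} itself from the
downloaded archive is the following sketch (the path is the one mentioned
above and should be adapted to your machine):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Possible installation from the downloaded archive (adapt the path)}
\FunctionTok{install.packages}\NormalTok{(}\StringTok{"/Users/JosuC/Desktop/metaheuR\_0.3.tar.gz"}\NormalTok{,}
\NormalTok{                 }\AttributeTok{repos =} \ConstantTok{NULL}\NormalTok{, }\AttributeTok{type =} \StringTok{"source"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}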
Once the package is installed and loaded, you can go to RStudio
``Packages'' and click on the name of the package to see the help pages
of all the functions defined in it.
For more extensive documentation, the book ``\emph{Bilaketa
Heuristikoak: Teoria eta Adibideak R lengoaian}'' published by the
UPV/EHU is suggested. It is written in Basque and freely accessible in:
\url{https://addi.ehu.es/handle/10810/25757}
\hypertarget{the-linear-ordering-problem-lop}{%
\section{2. The Linear Ordering Problem
(LOP)}\label{the-linear-ordering-problem-lop}}
For illustrative purposes, in the current session the Linear Ordering
Problem (LOP) will be considered. The LOP is stated as follows:
given a matrix \(B=[b_{i,j}]_{n\times n}\) of weights, find the
simultaneous permutation \(\sigma\) of the \(n\) rows and columns that
maximizes the sum of the weights \(b_{i,j}\) located in the upper
triangle of the matrix (above the main diagonal).
For more information about this NP-hard problem, have a look at these
references or search for new ones at Google Scholar:
\begin{itemize}
\item "The Linear Ordering Problem Revisited". Josu Ceberio, Alexander Mendiburu, José Antonio Lozano (2014). http://hdl.handle.net/10810/11178
\item "Linear Ordering Problem". Martí, Reinelt and Duarte (2009). Problem description, the LOP formulated as 0/1 linear integer programming problem, etc. http://grafo.etsii.urjc.es/optsicom/lolib/
\end{itemize}
A very small LOP instance is given by the matrix of weights shown on the
left below. In this instance, there are \(n=4\) rows and
columns. The second matrix represents a different order of the rows and
columns: row 3 (and column 3) is placed in the first position and
row 1 (and column 1) in the third position.
\begin{center}
$\begin{array}{c|rrrr|r|rrrr|}
\multicolumn{1}{c}{} & 1 & 2 & 3 & \multicolumn{1}{c}{4}&
\multicolumn{1}{c}{\hspace{2cm} } & 3 & 2 & 1 & \multicolumn{1}{c}{4} \\
\cline{2-5} \cline{7-10}
1 & 0 & 2 & 1 & 3 & 3 & 0 & 3 & 2 & 5\\
2 & 4 & 0 & 1 & 5 & 2 & 1 & 0 & 4 & 5\\
3 & 2 & 3 & 0 & 5 & 1 & 1 & 2 & 0 & 3\\
4 & 1 & 2 & 1 & 0 & 4 & 1 & 2 & 1 & 0\\
\cline{2-5} \cline{7-10}
\end{array}$
\end{center}
Note: The implementation in \texttt{metaheuR} does not maximize the sum
of values in the upper triangle. Instead, it minimizes the sum of values
under the diagonal. The two formulations are equivalent, but be careful: when you
compare two solutions, the optimum is the minimum for \texttt{metaheuR}.
\hypertarget{formulating-the-problem-in-rstudio}{%
\subsection{Formulating the problem in
RStudio}\label{formulating-the-problem-in-rstudio}}
First of all, we need to define the problem. If the problem is small, we
can introduce the matrix of weights directly in RStudio. Let's introduce
the very small \(4 \times 4\) instance of a LOP problem shown
previously.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# LOP problem of the sheet of exercises (4x4 matrix of weights).}
\CommentTok{\# Introduce the data and create the matrix object with it.}
\CommentTok{\# WRITE HERE (2 lines of code)}
\end{Highlighting}
\end{Shaded}
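As a reference, a minimal base-R sketch that builds this \(4 \times 4\)
matrix of weights (stored here in a variable called \texttt{B}, a name
chosen only for illustration) could be:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sketch: matrix of weights of the small LOP instance (filled row by row)}
\NormalTok{B \textless{}{-} matrix(c(0, 2, 1, 3,}
\NormalTok{              4, 0, 1, 5,}
\NormalTok{              2, 3, 0, 5,}
\NormalTok{              1, 2, 1, 0),}
\NormalTok{            nrow = 4, byrow = TRUE)}
\end{Highlighting}
\end{Shaded}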
In the lab we'll use the particular instance of a LOP problem downloaded
from the eGela platform: \emph{N-be75eec\_30}. It is a text file, you
can open it using any text editor.
Think about the necessary functions to read the file from RStudio,
compute the number of rows (columns) and create the appropriate R object
from the given set of values.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Read the data from file N{-}be75eec and create the appropriate R object.}
\CommentTok{\# WRITE HERE (3{-}4 lines of code) }
\end{Highlighting}
\end{Shaded}
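The exact code depends on the format of the file. Assuming that
\emph{N-be75eec\_30} stores the instance size followed by the
\(n \times n\) weights as whitespace-separated numbers (an assumption to
be checked against the actual file), a possible sketch is:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sketch, assuming: first value = n, remaining values = weights (row by row)}
\NormalTok{values \textless{}{-} scan("N-be75eec\_30")}
\NormalTok{n      \textless{}{-} values[1]}
\NormalTok{W      \textless{}{-} matrix(values[-1], nrow = n, byrow = TRUE)}
\end{Highlighting}
\end{Shaded}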
\hypertarget{questions}{%
\subsection{Questions:}\label{questions}}
\begin{itemize}
\item
What is the most appropriate codification to represent the solutions
for the LOP problem?
\item
  Does the ``swap'' operator guarantee integrity? And connectivity?
  What about the ``insert'' operator?
\item
  How many solutions are there in the search space of the two LOP problems
  considered, i.e.\ the very small \(4 \times 4\) problem and the instance
  read from file \emph{N-be75eec\_30}?
\item
Are neighboring solutions computed by the ``swap'' and ``insert''
operators similar?
\end{itemize}
\hypertarget{solving-the-problem-with-metaheur}{%
\section{\texorpdfstring{3. Solving the problem with
\texttt{metaheuR}}{3. Solving the problem with metaheuR}}\label{solving-the-problem-with-metaheur}}
Once the matrix of weights for the LOP problem has been defined, the
function \texttt{lopProblem} implemented in \texttt{metaheuR} can be
used. Have a look at the \texttt{lopProblem} function in the help pages
in RStudio.
In the following, you are asked to use it to create the object
associated with the matrix of weights defined at the beginning of this lab
session. After that, generate a solution (the one that considers rows
and columns as they are in the matrix of weights) and compute its
objective value.
For the very small example:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Create the LOP object }
\CommentTok{\# WRITE HERE (1 line of code)}
\CommentTok{\# Generate the solutions that considers rows and columns as they are and compute its objective value}
\CommentTok{\# WRITE HERE (2 lines of code)}
\CommentTok{\# Generate the solutions that corresponds to the second table.}
\CommentTok{\# WRITE HERE (2 lines of code)}
\end{Highlighting}
\end{Shaded}
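Independently of \texttt{metaheuR}, the value that the package minimises
(the sum of the weights below the main diagonal after simultaneously
permuting rows and columns) can be checked with a few lines of base R.
The sketch below assumes the matrix of weights is stored in a variable
\texttt{B}, as in the earlier sketch, and uses plain integer vectors as
permutations:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sketch: value minimised by metaheuR for a permutation sigma of rows/columns}
\NormalTok{lopValue \textless{}{-} function(B, sigma) \{}
\NormalTok{  P \textless{}{-} B[sigma, sigma]}
\NormalTok{  sum(P[lower.tri(P)])}
\NormalTok{\}}
\NormalTok{lopValue(B, 1:4)            }\CommentTok{\# rows and columns as they are}
\NormalTok{lopValue(B, c(3, 2, 1, 4))  }\CommentTok{\# order of the second table}
\end{Highlighting}
\end{Shaded}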
For the instance of a LOP problem downloaded from the eGela platform
(\emph{N-be75eec\_30}):
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Create the LOP object}
\CommentTok{\# WRITE HERE (1 line of code)}
\CommentTok{\# Generate the solutions that considers rows and columns as they are and compute its objective value}
\CommentTok{\# WRITE HERE (2 lines of code)}
\end{Highlighting}
\end{Shaded}
\hypertarget{local-search-swap-and-insert-neighborhoods}{%
\section{4. Local search (``swap'' and ``insert''
neighborhoods)}\label{local-search-swap-and-insert-neighborhoods}}
Local search is a heuristic method that is based on the intuition about
the searching process: given a solution, the idea is to find better
solutions in its neighborhood. At each iteration, the algorithm keeps a
current solution and substitutes it with another one in the
neighborhood. We already worked with it in the previous laboratory
session.
The efficiency of the algorithm is highly related to the shape of the
neighborhood selected. In this lab session we are going to experiment
with two different neighborhoods: the ones generated by the ``swap'' and
the ``insert'' operators. The aim is to estimate the number of local
optima in each of the neighborhoods, to select the most efficient one.
You can use the functions \texttt{swapNeighborhood} and
\texttt{insertNeighborhood} implemented in \texttt{metaheuR}.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# For the solution that considers rows and columns in the same order as they are, }
\CommentTok{\# create a "swap" neighborhood object}
\CommentTok{\# WRITE HERE (2 lines of code)}
\CommentTok{\# Now, create an "insert" neighborhood object for the same solution}
\CommentTok{\# WRITE HERE (1 line of code)}
\end{Highlighting}
\end{Shaded}
Having the initial solution and the neighborhoods defined, now we'll
apply the \texttt{basicLocalSearch} function, as we did in the previous
lab session. You can have a look at the help pages to see how to use it.
It requires quite a lot of parameters.
As we have seen in the theory, there are different strategies to select
a solution among the ones in the neighborhood during the searching
process. In the previous lab we applied a greedy strategy to select a
solution in the neighborhood with \texttt{greedySelector}. This time we
can experiment with another option and select the first neighbor that
improves the current solution, \texttt{firstImprovementSelector}. We
will consider the two options and compare the results.
Regarding the resources available for the searching process, this
time we will limit them as follows: the execution time (10 seconds), the
number of evaluations performed (\(100n^2\)) and the number of
iterations to be carried out (\(100n\)), being \(n\) the size of the
problem.
Regarding the neighborhoods, we have two to select and compare: the
``insert neighborhood'' and the ``swap neighborhood'' created
previously.
Once all the parameters are ready, the searching process can start. Try
with the different options for the parameters mentioned.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# WRITE HERE (11{-}12 lines of code)}
\CommentTok{\# Extract the approximated solution found and the objective value}
\CommentTok{\# WRITE HERE (2 lines of code)}
\end{Highlighting}
\end{Shaded}
\hypertarget{questions-1}{%
\subsection{Questions:}\label{questions-1}}
\begin{itemize}
\tightlist
\item
What are the approximate solutions obtained for the different
neighborhoods and the different strategies to select a solution among
the ones in the neighborhood? What's their objective value? Compare
them and say which one is the best (do not forget that
\texttt{metaheuR} minimizes the sum of values under the diagonal).
\end{itemize}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\item
Apply a greedy strategy to select a solution in an ``insert''
neighborhood:
\item
Select the first neighbor that improves the current solution in an
``insert'' neighborhood:
\item
Apply a greedy strategy to select a solution in a ``swap''
  neighborhood:
\item
Select the first neighbor that improves the current solution in a
``swap'' neighborhood:
\end{enumerate}
\begin{itemize}
\tightlist
\item
Now, you can try to increment the resources for the searching process.
Do you obtain better results?
\end{itemize}
\hypertarget{estimating-the-number-of-local-optima}{%
\section{Estimating the number of local
optima}\label{estimating-the-number-of-local-optima}}
Our intuition suggests that if there is a small number of local optima
in the neighborhood, the algorithm has more chances to find the global
optimum; its shape is said to be ``smooth'' or ``flat''. On the
contrary, a neighborhood with a lot of local optima makes it
more difficult to find the global optimum, since the algorithm will very
easily get stuck in a local optimum; its shape is said to be
``wrinkled'' or ``rugged''.
Having a ``swap'' and an ``insert'' neighborhood, for example, how can
we know which of the two is better? We could compute all the solutions
in the neighborhoods, evaluate them all and calculate the number of
local optima for each of them, but this strategy makes no sense, since
it requires analyzing the complete search space, which is computationally too
expensive. So, estimations are normally computed instead.
There is a very easy way to estimate the number of local optima in a
neighborhood. It is possible to generate \(k\) initial solutions at
random and apply the local search algorithm to each of them to obtain
\(k\) local optima. They are not all necessarily different, of course,
because of the ``basins of attraction''. Let's say that, for a particular
neighborhood and starting from \(k\) initial solutions, \(LO_{diff}\)
different local optima are obtained. Then the percentage of different
local optima obtained can be computed like this:
\[ 100*\frac{LO_{diff}}{k}.\]
In the following you are asked to estimate the number of local optima
for the two neighborhoods we created at the beginning of the lab
session: the ``swap'' neighborhood and the ``insert'' neighborhood. We
will generate (\(k=5\)) initial solutions to start. Based on the results
you obtained in the previous section, it is up to you to decide the
strategy you'll use to select a solution among the ones in the
neighborhood. Regarding the resources, we will limit the search to
\(1000n^2\) evaluations.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# WRITE HERE (15{-}20 lines of code)}
\end{Highlighting}
\end{Shaded}
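As a starting point, the bookkeeping part of this estimation can be
written independently of the concrete \texttt{metaheuR} calls. The
generic helper below assumes that \texttt{runLocalSearch} is a function
(to be written using \texttt{basicLocalSearch} and the chosen
neighborhood) that generates a random initial solution, runs the local
search and returns the local optimum found as a permutation vector:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sketch: percentage of different local optima out of k restarts}
\NormalTok{estimateLocalOptima \textless{}{-} function(k, runLocalSearch) \{}
\NormalTok{  optima \textless{}{-} lapply(seq\_len(k), function(i) runLocalSearch())}
\NormalTok{  keys   \textless{}{-} vapply(optima, paste, character(1), collapse = "-")}
\NormalTok{  100 * length(unique(keys)) / k}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}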
\hypertarget{questions-2}{%
\subsection{Questions:}\label{questions-2}}
\begin{itemize}
\item
  What is the estimated number of local optima for the ``swap''
  neighborhood? And for the ``insert'' neighborhood?
\item
Which one would you say is more appropriate for the LOP problem?
\item
  Repeat the experiment for different numbers of initial solutions,
\(k=10, 15, 20\). Do not consider very large values for \(k\), because
it takes time to do the estimations\ldots{} Can you observe any
difference?
\end{itemize}
\hypertarget{advanced-local-search-based-algorithms}{%
\section{5. Advanced local search-based
algorithms}\label{advanced-local-search-based-algorithms}}
Local search-based algorithms stop their execution whenever a local
optimum has been found (unless the assigned resources run out before).
This implies that, no matter how much we increase the available
execution resources, the algorithm will remain stuck in the same
solution. In response to this weakness, the heuristic optimization
community has proposed a number of strategies that allow the algorithm
to escape from local optima and continue optimizing. An obvious
strategy is to run another local search; however, since the
current solution is already a local optimum, no improvement will take
place. An alternative is to perturb the current
solution (e.g.\ shuffle 5\% of the numbers that compose the solution) and then apply
the local search algorithm again. This general procedure is known as
Iterated Local Search (ILS). The algorithm repeats until the available
resources run out.
\texttt{metaheuR} already includes an implementation of the ILS, but I
do not want you to use it. Instead, I want you to implement your own
design. You have almost every puzzle piece:
\begin{itemize}
\tightlist
\item
Local search algorithm (using the best neighborhood found so far).
\item
The \texttt{getEvaluation} function to obtain the objective value of
the best solution found by the local search algorithm.
\item
  The number of consumed evaluations (and hence the remaining budget)
  can be obtained with the function \texttt{getConsumedEvaluations}.
\item
Stop the ILS algorithm after \(10^6\) function evaluations. If it
takes too much time, test first with \(10^5\) evaluations.
\end{itemize}
The perturbation function is given below. The \texttt{ratio} parameter
describes the percentage of the solution that will be shuffled.
\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{perturbShuffle}\OtherTok{\textless{}{-}}\ControlFlowTok{function}\NormalTok{(solution,ratio,...)\{}
\FunctionTok{return}\NormalTok{ (}\FunctionTok{shuffle}\NormalTok{(}\AttributeTok{permutation=}\NormalTok{solution,}\AttributeTok{ratio=}\NormalTok{ratio))}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}
The implementation of the ILS:
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# WRITE HERE (24{-}30 lines of code)}
\end{Highlighting}
\end{Shaded}
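As a hint for the structure (a sketch, not a definitive implementation),
the skeleton below separates the ILS loop from the
\texttt{metaheuR}-specific calls: \texttt{localSearch} is assumed to be a
small wrapper around \texttt{basicLocalSearch} that returns a list with
the fields \texttt{solution}, \texttt{value} (obtained via
\texttt{getEvaluation}) and \texttt{evaluations} (obtained via
\texttt{getConsumedEvaluations}):

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sketch of the ILS loop; remember that metaheuR minimises the objective}
\NormalTok{iteratedLocalSearch \textless{}{-} function(initial, localSearch, perturb, ratio, maxEvals) \{}
\NormalTok{  res  \textless{}{-} localSearch(initial)}
\NormalTok{  best \textless{}{-} res$solution; bestValue \textless{}{-} res$value; used \textless{}{-} res$evaluations}
\NormalTok{  while (used \textless{} maxEvals) \{}
\NormalTok{    res  \textless{}{-} localSearch(perturb(best, ratio))}
\NormalTok{    used \textless{}{-} used + res$evaluations}
\NormalTok{    if (res$value \textless{} bestValue) \{}
\NormalTok{      best \textless{}{-} res$solution; bestValue \textless{}{-} res$value}
\NormalTok{    \}}
\NormalTok{  \}}
\NormalTok{  list(solution = best, value = bestValue, evaluations = used)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}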
\hypertarget{questions-3}{%
\subsection{Questions}\label{questions-3}}
\begin{itemize}
\tightlist
\item
  Report the best solution found and its objective value.
Is this solution better than the one calculated by the local search
algorithm in the previous sections?
\end{itemize}
\end{document}
| {
"alphanum_fraction": 0.7567653759,
"avg_line_length": 39.3369175627,
"ext": "tex",
"hexsha": "bc5173a971cd70839b481a72d608228f151335bf",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ada8fae57566e76a9a0bc3d6de906cde167828a7",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "usainzg/EHU",
"max_forks_repo_path": "OR/Lab7_UnaiSainz.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ada8fae57566e76a9a0bc3d6de906cde167828a7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "usainzg/EHU",
"max_issues_repo_path": "OR/Lab7_UnaiSainz.tex",
"max_line_length": 258,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ada8fae57566e76a9a0bc3d6de906cde167828a7",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "usainzg/EHU",
"max_stars_repo_path": "OR/Lab7_UnaiSainz.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 6312,
"size": 21950
} |
\section{Adaptive Noise Cancellation}
\begin{enumerate}[label=\alph*), leftmargin=*]
%% a)
\item
%
Let $x(n)$ denote the pure sine wave, $s(n)$ the noisy signal, $\eta(n)$ the noise term and $\hat{x}(n; \Delta)$ the ALE filter output, parametrised by the delay parameter $\Delta$.
Then the Mean Squared Error (MSE) is given by:
\begin{align}
\mathtt{MSE} =
\E \bigg[ \big( s(n) - \hat{x}(n; \Delta) \big)^{2} \bigg] &= \E \bigg[ \big( x(n) + \eta(n) - \hat{x}(n; \Delta) \big)^{2} \bigg] \\
&= \E \bigg[ \big( \eta(n) + (x(n) - \hat{x}(n; \Delta)) \big)^{2} \bigg] \\
&= \E \bigg[ \eta^{2}(n) \bigg] +
\E \bigg[ \big( x(n) - \hat{x}(n; \Delta) \big)^{2} \bigg] +
2\E \bigg[ \eta(n) \big(x(n) - \hat{x}(n; \Delta) \big) \bigg]
\label{eq:ale_mse}
\end{align}
The first term, the noise power $\E [ \eta^{2}(n) ]$, is independent of $\Delta$, while the second term, the Mean Squared Prediction Error $\E [ (x(n) - \hat{x}(n; \Delta))^{2} ]$, is not
a function of the noise $\eta(n)$. Hence, only the last term involves both the delay $\Delta$ (through $\hat{x}(n; \Delta)$) and the noise term, so we minimise it:
\begin{equation}
\underset{\Delta \in \sN}{min}\ \E \bigg[ \eta(n) \big(x(n) - \hat{x}(n; \Delta) \big) \bigg]
\end{equation}
Using the fact that $x(n)$ and $\eta(n)$ are uncorrelated, the term $\E [ \eta(n) x(n) ]$ vanishes:
\begin{align}
\underset{\Delta \in \sN}{min}\ \E \bigg[ \eta(n) \hat{x}(n; \Delta) \bigg] &\rightarrow
\underset{\Delta \in \sN}{min}\ \E \bigg[ \big( u(n) + 0.5u(n-2) \big) \vw^{T} \vu(n; \Delta) \bigg] \\
&\rightarrow
\underset{\Delta \in \sN}{min}\ \E \bigg[ \big( u(n) + 0.5u(n-2) \big) \sum_{i=0}^{M-1} w_{i} s(n - \Delta - i) \bigg] \\
&\rightarrow
\underset{\Delta \in \sN}{min}\ \E \bigg[ \big( u(n) + 0.5u(n-2) \big) \sum_{i=0}^{M-1} w_{i} \big( x(n - \Delta - i) + \eta(n - \Delta - i) \big) \bigg] \\
&\rightarrow
\underset{\Delta \in \sN}{min}\ \E \bigg[ \big( u(n) + 0.5u(n-2) \big) \sum_{i=0}^{M-1} w_{i} \big( \eta(n - \Delta - i) \big) \bigg] \\
&\rightarrow
\underset{\Delta \in \sN}{min}\ \E \bigg[ \big( u(n) + 0.5u(n-2) \big) \sum_{i=0}^{M-1} w_{i} \big( u(n - \Delta - i) + 0.5 u(n - 2 - \Delta - i) \big) \bigg]
\label{con:delta} \\
&\rightarrow
0, \quad \Delta > 2
\end{align}
Since $u(n)$ is identically and \textbf{independently} distributed white noise:
\begin{equation}
\E \bigg[ u(n) u(n - j) \bigg] = 0, \quad \forall j \neq 0
\end{equation}
therefore the expectation in (\ref{con:delta}) is zero, and thus minimised, for $\Delta > 2$, since the terms do not overlap in time.
This is an expected result, since the colored noise signal $\eta(n)$ is a second-order MA process.
The theoretical optimal delay range, $\Delta > 2$, is also verified empirically. In figure \ref{fig:3_3_a_1} the clean signal $x(n)$ is shown against $s(n)$ and the filter output $\hat{x}(n)$
for different delay $\Delta$ values. Moreover, the MPSE as a function of $\Delta$ is plotted in figure \ref{fig:3_3_a_2}, verifying the improved performance for $\Delta > 2$.
All experiments are conducted using an LMS filter with fixed model order $M = 5$.
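For reference, a minimal sketch of the ALE structure used here is given below, written in R purely for illustration (it is not the code used to generate the figures); the delayed input vector $[s(n-\Delta), \dots, s(n-\Delta-M+1)]^T$ is filtered and the weights are adapted with the standard LMS update.
\begin{verbatim}
# Minimal ALE sketch (illustrative R code, not the code used for the figures).
# s: noisy signal, Delta: delay, M: filter order, mu: step-size.
ale_lms <- function(s, Delta, M, mu) {
  N    <- length(s)
  w    <- rep(0, M)                 # adaptive weight vector
  xhat <- rep(0, N)                 # filter output x_hat(n)
  for (n in (Delta + M):N) {
    u       <- s[(n - Delta):(n - Delta - M + 1)]  # delayed input vector
    xhat[n] <- sum(w * u)                          # prediction of s(n)
    e       <- s[n] - xhat[n]                      # prediction error
    w       <- w + mu * e * u                      # LMS weight update
  }
  xhat
}
\end{verbatim}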
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/a/ale_overlay-Delta_1}
\end{subfigure}
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/a/ale_overlay-Delta_2}
\end{subfigure}
~
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/a/ale_overlay-Delta_3}
\end{subfigure}
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/a/ale_overlay-Delta_4}
\end{subfigure}
\caption{ALE: overlay plots for various $\Delta$ delays, for fixed $M=5$.}
\label{fig:3_3_a_1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/a/ale_mpse}
\end{subfigure}
\caption{ALE: MPSE against $\Delta$, for fixed $M=5$.}
\label{fig:3_3_a_2}
\end{figure}
%% b)
\item
%
The experiments are repeated, now varying both the delay parameter, $\Delta$, and the model order, $M$, obtaining figures \ref{fig:3_3_b_1} and \ref{fig:3_3_b_2}.
We notice that the mean squared prediction error (MPSE) is minimised for the hyperparameter pair $(\Delta, M) = (3, 6)$.
Over-modelling (large $M$) results in excess degrees of freedom that increase computational complexity and over-fit the noise, degrading
model performance. For model order $M=6$ the MPSE is minimised, while the computational load is still not prohibitive.
In the previous part we showed theoretically that for $\Delta > 2$ the noise and the filter output are uncorrelated and thus the MSE is minimised.
Nonetheless, the second term in (\ref{eq:ale_mse}) was ignored. The impact of this term on the MPSE is illustrated in figure \ref{fig:3_3_b_2},
where a very large $\Delta$ (e.g. $\Delta = 25$) inevitably causes a time-shift between the filter output $\hat{x}(n)$ and the true sine wave $x(n)$.
Hence, $\Delta = 3$ is the optimal parameter, minimising delay effects between $x(n)$ and $\hat{x}(n)$, while guaranteeing that
$\hat{x}(n)$ and $\eta(n)$ remain uncorrelated.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/b/ale_mpse_vs_Delta}
\end{subfigure}
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/b/ale_mpse_vs_M}
\end{subfigure}
\caption{ALE: MPSE against delay $\Delta$ and model order $M$.}
\label{fig:3_3_b_1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/b/ale_overlay-Delta_1}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/b/ale_overlay-Delta_3}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/b/ale_overlay-Delta_25}
\end{subfigure}
\caption{ALE: model deterioration for large delays $\Delta$.}
\label{fig:3_3_b_2}
\end{figure}
%% c)
\item
%
We compare the performance of the Adaptive Noise Cancellation (ANC) configuration to the Adaptive Line Enhancer (ALE) configuration used in the previous parts.
The colored noise $\epsilon(n)$ is used as input to the LMS filter to perform ANC, such that:
\begin{equation}
\epsilon(n) = 0.9 \eta(n) + 0.05
\end{equation}
and as a result $\epsilon(n)$ is the secondary noise signal, correlated with the primary noise signal $\eta(n)$. Note that the relationship between the two signals
is unknown to the ANC algorithm.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/c/ALE_overlay}
\end{subfigure}
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/c/ANC_overlay}
\end{subfigure}
        \caption{ALE vs ANC: overlay plots and mean squared prediction error.}
\label{fig:3_3_c_1}
\end{figure}
In figure \ref{fig:3_3_c_1} the overlay plots for 100 realisations of the process $x(n)$ are provided, along with the denoised versions of both the ALE and ANC configurations.
ANC performs better overall, with MPSE = 0.1374, than the ALE configuration, which scores MPSE = 0.2520. Nonetheless, we highlight the fact that the ANC algorithm
does poorly in the first time steps, but for $t > 400$ it tracks the true signal $x(n)$ much better. An ensemble of realisations of the process is also simulated and its mean is illustrated in figure \ref{fig:3_3_c_2}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/c/comparison}
\end{subfigure}
\caption{ALE vs ANC: ensemble mean comparison.}
\label{fig:3_3_c_2}
\end{figure}
%% d)
\item
%
Consider a synthetic reference input, $\epsilon(n)$, composed of a $50\ Hz$ sinusoid corrupted by white Gaussian noise. The ANC configuration is used with inputs
$\epsilon(n)$ and the \texttt{POz} EEG time-series, in order to remove the strong $50\ Hz$ frequency component due to power-line interference (mains).
For illustration purposes, \texttt{spectrogram}s are plotted using a rectangular window of length $L=4096$ and $80\%$ overlap. The obtained spectrograms are provided
in figures \ref{fig:3_3_d_1} and \ref{fig:3_3_d_2}. As expected, the original \texttt{POz} signal has a strong $50\ Hz$ frequency component.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/original}
\end{subfigure}
\caption{EEG: original, reference spectrogram.}
\label{fig:3_3_d_1}
\end{figure}
The LMS filter order $M$ and the step-size $\mu$ is varied, and the impact on the spectrogram is shown in figure \ref{fig:3_3_d_2}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_1-mu_0.001}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_1-mu_0.005}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_1-mu_0.100}.pdf}
\end{subfigure}
~
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_10-mu_0.001}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_10-mu_0.005}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_10-mu_0.100}.pdf}
\end{subfigure}
~
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_25-mu_0.001}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_25-mu_0.005}.pdf}
\end{subfigure}
~
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[height=1in]{{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/M_25-mu_0.100}.pdf}
\end{subfigure}
\caption{EEG: ANC denoised spectrogram for different model order, $M$, and step-sizes, $\mu$.}
\label{fig:3_3_d_2}
\end{figure}
Large step-sizes (e.g. $\mu = 0.1$) significantly affect the spectral components around $50\ Hz$, degrading ANC performance.
On the other hand, small step-sizes (e.g. $\mu = 0.001$) take more time to reach steady-state, but provide successful denoising without disrupting the frequencies close to $50\ Hz$.
Similarly, under-modelling (e.g. $M = 1$) leads to poor noise cancellation, since the $50\ Hz$ power-line interference component is not attenuated,
while over-modelling (e.g. $M=25$) degrades the quality of neighbouring frequencies. A medium-sized model, such as $M=10$, achieves satisfactory performance, eliminating the
$50\ Hz$ component of interest without affecting any other components.
Overall, an ANC configuration with $(M, \mu) = (10, 0.001)$ is selected. The corresponding denoised periodogram is also provided at figure \ref{fig:3_3_d_3}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/periodogram}
\end{subfigure}
~
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height=1.5in]{report/adaptive-signal-processing/adaptive-noise-cancellation/assets/d/periodogram-error}
\end{subfigure}
    \caption{EEG: periodograms of original and denoised signals.}
\label{fig:3_3_d_3}
\end{figure}
%
\end{enumerate} | {
"alphanum_fraction": 0.6600971352,
"avg_line_length": 49.1591695502,
"ext": "tex",
"hexsha": "681e94e82adf5408c53278518087f0227d52145a",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z",
"max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "AmjadHisham/ASPMI",
"max_forks_repo_path": "tex/report/adaptive-signal-processing/adaptive-noise-cancellation/index.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "AmjadHisham/ASPMI",
"max_issues_repo_path": "tex/report/adaptive-signal-processing/adaptive-noise-cancellation/index.tex",
"max_line_length": 219,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "filangel/ASPMI",
"max_stars_repo_path": "tex/report/adaptive-signal-processing/adaptive-noise-cancellation/index.tex",
"max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z",
"num_tokens": 4272,
"size": 14207
} |
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{parskip}
\usepackage{lscape}
\usepackage{multicol}
\begin{document}
\begin{landscape}
% lose the page number on this page
\thispagestyle{empty}
% start a 2 column page
\begin{multicols}{2}
\section*{(Batch) Gradient Descent}
\begin{equation*}
    J_{train}(\theta) = \frac{1}{2m} \sum_{i=1}^m(h_{\theta}(x^{(i)}) - y^{(i)})^2
\end{equation*}
Perform gradient descent by updating $\theta$ using the derivative of the cost function.
Repeat for every $j=0,...,n$ \{
\begin{equation*}
    \theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^m(h_{\theta}(x^{(i)}) - y^{(i)})x_j^{(i)}
\end{equation*}
\}
In large data sets, this summation would have to be computed over all $m$ training examples at every iteration of the loop.
% This code block below starts the new column
\vfill
\columnbreak
% This code block above starts the new column
\section*{Stochastic Gradient Descent}
In stochastic gradient descent, the cost is rewritten as the cost of one training example:
\begin{equation*}
cost(\theta, (x^{(i)}, y^{(i)})) = \frac{1}{2}(h_\theta(x^{(i)})-y^{(i)})^2
\end{equation*}
And thus the cost function is:
\begin{equation*}
J_{train}(\theta) = \frac{1}{m}\sum_{i=1}^m cost(\theta, (x^{(i)}, y^{(i)}))
\end{equation*}
\begin{enumerate}
	\item Randomly shuffle the dataset (randomly reorder the training examples)
\item Repeat \{
\begin{description}
\item for $i=1,...,m$ \{
		\item$\theta_j := \theta_j - \alpha \big(h_\theta(x^{(i)}) - y^{(i)}\big)x^{(i)}_j$ \quad \textbf{note:} this is $\frac{\partial}{\partial\theta_j}\text{cost}$
(for every $j=0,...,n$)
\item\}
\end{description}
\item[]\}
\end{enumerate}
In essence, stochastic gradient descent updates the parameters $\Theta$ (the vector of $\theta_0,\dots,\theta_n$) by stepping through each individual training example rather than summing across all training examples. The outer repeat loop is meant to symbolize that this process may have to be repeated over several passes through the data (e.g., 1--10 times) until pseudo-convergence.
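To make the contrast concrete, the following sketch (illustrative R code, not taken from the course materials) implements one batch update and one stochastic pass over the data for linear regression; the batch step sums over all $m$ examples, while the stochastic pass updates $\theta$ after every single example.
\begin{verbatim}
# Illustrative R sketch (not from the course materials).
# X: m x (n+1) design matrix (first column all ones), y: targets,
# theta: parameter vector, alpha: learning rate.

batch_step <- function(theta, X, y, alpha) {
  grad <- drop(t(X) %*% (drop(X %*% theta) - y)) / nrow(X)  # sum over all m examples
  theta - alpha * grad
}

sgd_epoch <- function(theta, X, y, alpha) {
  for (i in sample(nrow(X))) {            # randomly shuffle the training examples
    err   <- sum(X[i, ] * theta) - y[i]   # h_theta(x_i) - y_i
    theta <- theta - alpha * err * X[i, ] # update using this one example
  }
  theta
}
\end{verbatim}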
\end{multicols}
\end{landscape}
\end{document}
| {
"alphanum_fraction": 0.6863013699,
"avg_line_length": 34.21875,
"ext": "tex",
"hexsha": "3b8f2a07d91153be48bb71585c3b56f3c3021eb7",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mazin-abdelghany/coursera-machine-learning",
"max_forks_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mazin-abdelghany/coursera-machine-learning",
"max_issues_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex",
"max_line_length": 368,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "5b2d6fa46c1e68314054623c13b06e0ef96776ba",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mazin-abdelghany/coursera-machine-learning",
"max_stars_repo_path": "tex files/Batch v. Stochastic Gradient Descent.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 702,
"size": 2190
} |
%%%%%%%%%%%%%%%%%%%%%%%%
%
% Thesis template by Youssif Al-Nashif
%
% May 2020
%
%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\subsection{ Skip-grams}
\hspace*{0.3cm} As an alternative to natural language processing (NLP) methods that rely on ``bag-of-words'' representations, the methods used here utilize a graph representation of the text. Consider a bigram, a pair of two words\textemdash like ``hot dog'' or ``peanut butter''; bigrams can be constructed for a text document so that every pair of adjacent words is a bigram. The bigrams can then be used to make a graph, where each word is a vertex and each bigram is an edge. This graph representation holds more context than bag-of-words methods; for example, seeing the words ``cake'' and ``carrot'' in a bag of words may not show that ``carrot cake'' was the real intent of the text. This is an important concept for modeling text, as we should strive for a representation that supports effective modeling and captures the true meaning of the text in question. Keeping this in mind, with the example of ``carrot cake'', what about the idiom ``beating a dead horse''? Each word individually may mean something other than the idiom. Even the bigrams ``beating dead'' and ``dead horse'' do not capture what the idiom means. We can expand the number of words in the n-gram to 3 or 4 words, or alternatively, we can make more ``edges'', that is, connect more words. We can connect words that are not immediately adjacent but are within $k$ words of each other. The bigrams that appear within $k$ words of each other are called ``skip-grams''. The skip-gram allows the context of larger sequences of words to be captured, since the graph representation will show how the $k$-wide neighborhood of words was connected. In the idiom example, using skip-grams with window width $k = 2$, and removing common words (e.g. ``a'', ``at'', ``the''), will produce a graph like:
$$
E(G) = \{
\text{beat} \longleftrightarrow \text{dead},
\text{dead} \longleftrightarrow \text{horse},
\text{horse} \longleftrightarrow \text{beat} \}
$$
This graph representation contains a cycle of length 3, from which most native English speakers will identify the meaning behind the graph. As ideas, idioms, figures of speech, and other concepts (that may be explained in a non-literal fashion) grow to include more words, it becomes more difficult to capture the meaning behind the text. However, leveraging the concept of a skip-gram can produce such a rich graph representation of the text that the original meaning is more likely to be preserved. Other research has shown that modeling text with skip-grams leads to less data sparsity and mitigates issues of variety in large corpora. Skip-grams are shown to preserve more context than traditional bag-of-words methods that use words as the token of choice \cite{guthrie2006closer}.\\
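As an illustration of how such a graph can be built, the short R sketch below (illustrative only, not the code used in this study) collects the skip-gram edges of a token vector for a window width $k$; applied to the tokens of the idiom above with $k=2$, it returns exactly the three edges listed.
\begin{verbatim}
# Illustrative sketch: collect skip-gram edges for a window width k.
skip_gram_edges <- function(tokens, k) {
  edges <- list()
  for (i in seq_len(length(tokens) - 1)) {
    for (j in (i + 1):min(i + k, length(tokens))) {
      edges[[length(edges) + 1]] <- c(tokens[i], tokens[j])
    }
  }
  unique(edges)
}
skip_gram_edges(c("beat", "dead", "horse"), k = 2)
\end{verbatim}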
\subsection{ Graph Kernels}
\hspace*{0.3cm} The next natural question is, ``how can we compare these graph representations?'', and we address this with graph kernel methods. These methods are generally used to measure the similarity of graphs. The use of a graph kernel to compare graphs was first published in 2003, and since then various applications and adaptations of the methods have been made. In the case of text mining, the graph kernel must assess vertex labels\textemdash if one intends to map words to vertices\textemdash otherwise it will assess the topology alone. In this study, the Edge-Histogram kernel is the kernel used to compute similarity. This kernel was chosen as it uses labels on the graph structure and is not as computationally intensive as other methods \cite{sugiyama2015halting}. In the specific implementation used for these studies, the computation time was the shortest when compared with other kernel methods such as the graphlet, random walk, and Weisfeiler-Lehman kernels \cite{sugiyama2015halting}. Since the data sets of concern in these studies feature either large graphs or a large number of graphs, the kernel had to be computationally cheap.\\
To compute an edge histogram kernel on two graphs, $G_1$ and $G_2$, first define the set of edges $E_i =\{ (u_1,v_1), (u_2,v_2), \dots , (u_n,v_n) \}$, where $(u_n,v_n)$ is the $n$-th edge, connecting $u_n$ to $v_n$. The edge label histogram is then defined to be $\vec{g} = \{ g_1, g_2, \dots , g_s\}$, so that $\vec{g}$ contains the number of times each edge label appears. In the case of graphical representations of text, the number of times a skip-gram appears is not considered; it either appeared or did not. For this reason, a Manhattan distance is chosen, as opposed to a Euclidean or similar distance metric, since the Manhattan distance measures distance along a grid\textemdash like Manhattan city blocks from point A to point B. Since the data are all on a grid in essence, due to the binary nature of either having a label or not, the Manhattan distance is a natural fit here. In the case of a linear kernel, the kernel value is then obtained by summing the products of the corresponding elements of the $\vec{g}$ vectors of $G_1$ and $G_2$ \cite{sugiyama2015halting}.
%https://papers.nips.cc/paper/2015/file/31b3b31a1c2f8a370206f111127c0dbd-Paper.pdf
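A compact way to see this computation is the following R sketch (again illustrative, not the implementation used in the study): each graph is reduced to its edge label histogram, and the linear kernel is the sum of the element-wise products of the two histograms.
\begin{verbatim}
# Illustrative sketch: linear edge histogram kernel for two graphs,
# each represented by a character vector of edge labels.
edge_histogram <- function(edge_labels, all_labels) {
  table(factor(edge_labels, levels = all_labels))
}
linear_edge_kernel <- function(edges1, edges2) {
  labels <- union(edges1, edges2)
  g1 <- edge_histogram(edges1, labels)
  g2 <- edge_histogram(edges2, labels)
  sum(g1 * g2)   # sum of element-wise products of the two histograms
}
\end{verbatim}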
\subsection{Using Kernel for Clustering}
\hspace*{0.3cm} The output of the kernel is useful for a variety of tasks. Other popular applications have included classification with support vector machines, which are popular with other kernel methods. In this case, the kernel is used for unsupervised clustering. Within the kernel matrix, $K$, the entry $k_{i,j}$ represents the similarity between graphs $i$ and $j$. This matrix, which contains measures of similarity between points, can be used to derive a distance matrix for hierarchical clustering. Before using the graph kernel as a distance matrix, normalization or standardization takes place, and principal component analysis may be used. The end result is that each row describes a single graph-document by its similarity to all the other graphs, which are the column values. Once the values are transformed or rotated by preprocessing methods, the points are simply represented by their similarity to one another, but in a transformed space. Various hyperparameters can be tuned for successful clustering; the graph kernel has a parameter that can be tuned, and the hierarchical clustering can be tried with differing types of linkage.
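As a small illustration of this step (a sketch that assumes the kernel matrix $K$ has already been normalised so that its entries lie in $[0,1]$), a similarity matrix can be converted into a distance matrix and passed to hierarchical clustering in R as follows:
\begin{verbatim}
# Illustrative sketch: hierarchical clustering from a normalised kernel matrix K
# (entries assumed to lie in [0, 1], with 1 meaning identical graphs).
kernel_clusters <- function(K, n_clusters, linkage = "average") {
  D  <- as.dist(1 - K)              # convert similarity into distance
  hc <- hclust(D, method = linkage)
  cutree(hc, k = n_clusters)        # cluster label for each graph-document
}
\end{verbatim}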
%\subsection{Kernel Density Estimation Clustering for Linear Kernel}
%In addition to hierarchical clustering, a method was developed to find potential clusters based on the kernel similarity measure, but while measuring similarity to a single graph. For example, we can compare how similar graphs $B$ and $C$ without computing their similarity, by comparing how similar they are to graph $A$; this extension of transitive property logic allows for focusing on the similarity of the graph list as it relates to just one graph.
| {
"alphanum_fraction": 0.7776189796,
"avg_line_length": 134.5576923077,
"ext": "tex",
"hexsha": "87cadc87480438c8278bffe6d1caa3b876c100af",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Levi-Nicklas/GraphDocNLP",
"max_forks_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/00_Introduction.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff",
"max_issues_repo_issues_event_max_datetime": "2021-02-25T14:18:51.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-02-18T16:07:14.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Levi-Nicklas/GraphDocNLP",
"max_issues_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/00_Introduction.tex",
"max_line_length": 1771,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "dec1acb24a2ab42b46d161c92b69ad3a55fcc5ff",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Levi-Nicklas/GraphDocNLP",
"max_stars_repo_path": "Thesis_Tex/Content/02_Chapters/Chapter 03/Sections/00_Introduction.tex",
"max_stars_repo_stars_event_max_datetime": "2021-01-27T02:08:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-01-27T02:08:34.000Z",
"num_tokens": 1597,
"size": 6997
} |
\subsection{Random Roles}
All players follow the same strategy, according to which each player is permitted to freely add or steal direct trust from
other players. After $R$ rounds exactly two players are selected at random (these choices follow the uniform distribution).
One is dubbed seller and the other buyer. The seller offers a good that costs $C$, which she values at $C - l$ and the buyer
values at $C + l$. The values $R, C$ and $l$, as well as the uniform distribution with which the buyer and seller are
chosen, are common knowledge from the beginning of the game. The exchange completes if and only if $Tr_{Buyer \rightarrow
Seller} \geq C$. In this game, fig~\ref{fig:game} would be augmented by an additional level at the bottom, where Nature
chooses the two transacting players. There are three variants of the game, each with a different utility for the players
(the first two versions have two subvariants each).
\subsubsection{Hoarders} \ \\
If player $A$ is not chosen to be either buyer or seller, then her utility is equal to $Cap_{A, R}$. Intuitively players
do not attach any value to having (incoming or outgoing) direct trust at the end of the game. If the buyer and the seller
do not manage to complete the exchange, the buyer's utility is $Cap_{Buyer, R}$. If on the other hand they manage to
exchange the good, then the buyer's utility is $Cap_{Buyer, R} + l$. Intuitively these utilities signify that the buyer
uses her preexisting capital to buy. As for the seller there exist two subvariants for her utility:
\begin{enumerate}
\item If the exchange is eventually not completed, the seller's utility is $Cap_{Seller, R} - l$. If on the other hand
the exchange takes place, the seller's utility is $Cap_{Seller, R}$. Intuitively, the seller is first obliged to buy the
good from the environment at the cost of $C$.
\item If the exchange is eventually not completed, the seller's utility is $Cap_{Seller, R} + C - l$. If the exchange
takes place, the seller's utility is $Cap_{Seller, R} + C$. Intuitively, the seller is handed the good for free by the
environment.
\end{enumerate}
\subsubsection{Sharers} \ \\
If player $A$ is not chosen to be either buyer or seller, then her utility is equal to
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq A}}\frac{DTr_{A \rightarrow B, R} + DTr_{B \rightarrow A, R}}{2} +
Cap_{A, R} \enspace.
\end{equation*}
Intuitively, players attach equal value to all the funds they can directly spend, regardless of whether others can spend
them as well. If the buyer and the seller do not manage to complete the exchange, the buyer's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Buyer}}\frac{DTr_{Buyer \rightarrow B, R} + DTr_{B \rightarrow Buyer,
R}}{2} + Cap_{Buyer, R} \enspace.
\end{equation*}
If on the other hand they manage to exchange the good, then the buyer's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Buyer}}\frac{DTr_{Buyer \rightarrow B, R} + DTr_{B \rightarrow Buyer,
R}}{2} + Cap_{Buyer, R} + l \enspace.
\end{equation*}
Intuitively these utilities signify that the buyer uses her preexisting accessible funds to buy. As for the seller there
exist two subvariants for her utility:
\begin{enumerate}
\item If the exchange is not completed, the seller's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Seller}}\frac{DTr_{Seller \rightarrow B, R} + DTr_{B \rightarrow
Seller, R}}{2} + Cap_{Seller, R} - l \enspace.
\end{equation*}
If the exchange takes place, the seller's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Seller}}\frac{DTr_{Seller \rightarrow B, R} + DTr_{B \rightarrow
Seller, R}}{2} + Cap_{Seller, R} \enspace.
\end{equation*}
Intuitively, the seller is first obliged to buy the good from the environment at the cost of $C$.
\item If the exchange is not completed, the seller's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Seller}}\frac{DTr_{Seller \rightarrow B, R} + DTr_{B \rightarrow
Seller, R}}{2} + Cap_{Seller, R} + C - l \enspace.
\end{equation*}
If the exchange takes place, the seller's utility is
\begin{equation*}
\sum\limits_{\substack{B \in \mathcal{V} \\ B \neq Seller}}\frac{DTr_{Seller \rightarrow B, R} + DTr_{B \rightarrow
Seller, R}}{2} + Cap_{Seller, R} + C \enspace.
\end{equation*}
Intuitively, the seller is handed the good for free by the environment.
\end{enumerate}
\subsubsection{Materialists} \ \\
If player $A$ is not chosen to be either buyer or seller, then her utility is 0. If the buyer and the seller do not
manage to complete the exchange, their utility is 0 as well. If on the other hand they manage to exchange the good, then
the utility is $l$ for both of them. Intuitively these utilities signify that in this game there is gain only for those
who exchange goods and the gain is exactly the difference between the objective value and the subjective value that the
relevant parties perceive.
| {
"alphanum_fraction": 0.6995138369,
"avg_line_length": 66.024691358,
"ext": "tex",
"hexsha": "9cb361ad0032640b6cffdee88864334406fd1351",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2017-08-28T06:32:33.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-03-07T10:49:58.000Z",
"max_forks_repo_head_hexsha": "dfd45afb78ba92d7c0b0a64222aaf173e9627c09",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "OrfeasLitos/DecentralisedTrustNetwork",
"max_forks_repo_path": "may31deliverable/gametheory/random.tex",
"max_issues_count": 9,
"max_issues_repo_head_hexsha": "dfd45afb78ba92d7c0b0a64222aaf173e9627c09",
"max_issues_repo_issues_event_max_datetime": "2017-07-31T14:42:20.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-03-07T12:25:26.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "OrfeasLitos/DecentralisedTrustNetwork",
"max_issues_repo_path": "may31deliverable/gametheory/random.tex",
"max_line_length": 126,
"max_stars_count": 25,
"max_stars_repo_head_hexsha": "dfd45afb78ba92d7c0b0a64222aaf173e9627c09",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "OrfeasLitos/TrustNet",
"max_stars_repo_path": "may31deliverable/gametheory/random.tex",
"max_stars_repo_stars_event_max_datetime": "2021-04-01T14:07:45.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-03-15T14:33:06.000Z",
"num_tokens": 1504,
"size": 5348
} |
\documentclass{article}
\usepackage{graphicx}
\usepackage{tipa}
\date{February 24, 2018}
\begin{document}
\title{Finding Lane Lines, Part 1}
\author{Alexander Mont}
\maketitle
\section{Pipeline Overview}
Here is a simple project for finding the lane lines on a road image. The pipeline consisted of the following steps:
\begin{enumerate}
\item First, we convert the image to grayscale. This is because the subsequent algorithms to be run on it, such as Canny edge detection, only work on grayscale images. Note that we do this by using the OpenCV function \texttt{cv2.cvtColor(img, cv2.COLOR\_RGB2GRAY)}, which treats all three color channels (red, green, blue) equally.
\item Then, we apply Canny edge detection. Canny edge detection works by computing the \emph{gradient} (difference between adjacent pixels) of the image, and highlights only the places where this gradient is highest - i.e., where there is an edge. Note that this effectively turns a segment of a lane line (a dotted line) into a quadrilateral, and turns a solid lane line into two edges.
\item The output of Canny edge detection is just a pixel bitmap showing which pixels are part of edges. Now we need to turn these pixels into lines. To do this we use the Hough transform, an algorithm which does the following:
\begin{enumerate}
\item Considers the ``space of possible lines'' defined by $\theta$ (the angle of the line), and $\rho$ (the shortest distance of the line from the origin) - this is referred to as the ``Hough space''.
\item Each highlighted point defines a curve in the Hough space representing the possible lines that contain that point.
\item Points in Hough space where lots of the above lines intersect represent lines in the original space (the more Hough curves intersect at that point in Hough space, the longer that line is)
\end{enumerate}
\item This process is likely to generate multiple lines for each lane line, so we must now combine these lines into just two lines - one on the left side and one on the right side. We do this as follows:
\begin{enumerate}
\item Divide up the lines into two categories: one with negative slope (on the left) and one with positive slope (on the right). The negative slope is on the left because the X axis goes from left to right, but the Y axis goes from the top down.
\item In each of these categories, compute the \emph{smallest} (absolute value of) slope, where slope here is defined as $\frac{x}{y}$ - note that this is different from how slope is usually defined. We do it this way so that a nearly vertical line will not cause numerical issues due to the extremely high slope (and we expect nearly vertical lines to be more frequent than nearly horizontal lines).
\item On each side, we consider only Hough lines with a slope of no more than 0.2 away from this ``smallest slope''. The purpose of this is to filter out any lane lines other than the lines that bound the lane the car is in (a lane line multiple lanes over will have a higher slope)
\item We compute the average of the slopes of these lines to get the overall slope of the lane line, and then compute the intercept by fitting a line of the given slope through the endpoints of all the Hough lines.
\end{enumerate}
\item Then we draw the overall lane lines.
\end{enumerate}
It is also possible to use this algorithm to annotate the lane lines in a video by applying the above algorithm to each frame in the video. When I did this I noticed that the drawn lane lines jumped around a lot from frame to frame even though the actual lane lines do not appear to move that much. Thus I changed my algorithm in this case to compute the slope of each line in a given frame as (0.95 * slope of line from previous frame) + (0.05 * observed average slope of the Hough lines as described above). Thus this significantly dampens the frame-to-frame vibrations.
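For concreteness, the sketch below (written in R purely for illustration; the project itself uses Python and OpenCV) shows the slope filtering and averaging of step 4 for one side of the lane, together with the frame-to-frame smoothing just described; the slope is taken as $\frac{x}{y}$ as in the pipeline, and the 0.2 tolerance and 0.95 decay are the values mentioned above.
\begin{verbatim}
# Illustrative R sketch (the project itself uses Python and OpenCV).
# segs: matrix with columns x1, y1, x2, y2 of the Hough segments for one side.
lane_slope <- function(segs, prev_slope = NULL, tol = 0.2, decay = 0.95) {
  slopes <- (segs[, 3] - segs[, 1]) / (segs[, 4] - segs[, 2])  # slope as x/y
  keep   <- abs(slopes) <= min(abs(slopes)) + tol   # drop lines from other lanes
  slope  <- mean(slopes[keep])
  if (!is.null(prev_slope))                         # frame-to-frame smoothing
    slope <- decay * prev_slope + (1 - decay) * slope
  slope
}
\end{verbatim}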
\section{Discussions and Improvements}
Here are some of the shortcomings of this algorithm and ways it could be improved. I plan to implement these and other improvements in a future project.
\begin{enumerate}
\item The slopes of the Hough lines appear to change a lot from frame to frame spuriously. It is likely that this is because the edges obtained by the Canny edge detection are only one pixel wide (all the pixels in the ``middle'' of the lane line are thrown out), so a change in even a few of these pixels may significantly change the slope. Note that we don't really care about the \emph{edges} of the lane line per se (the ``lane line'' is really a thin quadrilateral); what we care about is the overall lane line. Therefore, a better approach may be to not use Canny edge detection at all, and instead do the following:
\begin{enumerate}
\item Use thresholding to identify which pixels in the image are likely part of lane lines. For instance, any white or yellow pixels are likely part of lane lines, so the thresholding could be based on just the red and green color channels, since yellow has high red and green but low blue.
\item Use a flood fill or similar algorithm to segment these ``likely pixels'' into contiguous groups.
\item Try to identify which contiguous groups are lane lines on the road (as opposed to other objects such as white or yellow cars). Note that a lane line or lane line segment is expected to be long and thin. Thus, if one took all the $(x,y)$ coordinates of pixels in a lane line and did PCA on them, one would expect to see one very large principal component and one much smaller principal component. This pattern could be easily identified (a short sketch of such a check is given after this list).
\item Once we have a contiguous group of pixels that represents a lane line, we can fit a line through it (e.g. using a least squares fit) to get the slope and intercept.
\end{enumerate}
\item The smoothing technique used between frames in the video works well for the test videos in this project where the car is moving roughly straight, but may not work well in a scenario where the car is changing lanes so the slope of the lane lines actually is changing significantly - it might be slow to catch up. A better solution here may be a Kalman filter which stores as its state both the current slope and a rate of change in slope. Thus if the car was e.g. changing lanes, where the lane line's slope is smoothly changing, the filter would pick up on this and track it.
\item Identifying each lane line by a slope and intercept assumes that it is a straight line. This may not be true if the road is curving or if there is a hill in front of the car. This is a major flaw because both of these situations are likely to require action by a self-driving car, so the car would want to know about them. A solution here may be, after the lane line segments on the road have been identified as described above, to use some sort of clustering algorithm to find ``clusters'' of these line segments in Hough space, and assume that similar clusters represent the same lane lines. This would also mean that the algorithm wouldn't need to assume that there are two primary lane lines, one on each side - it could detect other things that look like lane lines.
\end{enumerate}
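As a brief illustration of the elongation check mentioned in the first improvement above (an R sketch for clarity; the project itself uses Python), PCA on the pixel coordinates of a candidate group should show one dominant principal component for a long, thin lane-line segment; the ratio threshold below is an arbitrary illustrative value.
\begin{verbatim}
# Illustrative R sketch: is a contiguous group of (x, y) pixels long and thin?
is_elongated <- function(coords, ratio_threshold = 5) {
  p <- prcomp(coords)                       # PCA on the pixel coordinates
  p$sdev[1] / p$sdev[2] > ratio_threshold   # one dominant principal component?
}
\end{verbatim}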
\end{document} | {
"alphanum_fraction": 0.7838929915,
"avg_line_length": 140.7254901961,
"ext": "tex",
"hexsha": "eca408eeb2555b59cd61eda69dc2f47ddefd5777",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9673e2403e8386aae4380d9626c6edd7453e4bea",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alexander-mont/alexander-mont-sdc",
"max_forks_repo_path": "carnd_p1_writeup.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9673e2403e8386aae4380d9626c6edd7453e4bea",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alexander-mont/alexander-mont-sdc",
"max_issues_repo_path": "carnd_p1_writeup.tex",
"max_line_length": 782,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9673e2403e8386aae4380d9626c6edd7453e4bea",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alexander-mont/alexander-mont-sdc",
"max_stars_repo_path": "carnd_p1_writeup.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1592,
"size": 7177
} |