% $Id: faq-hyp+pdf.tex,v 1.8 2014/02/16 21:23:58 rf10 Exp $
\section{Hypertext and \acro{PDF}}
\Question[Q-acrobat]{Making \acro{PDF} documents from \AllTeX{}}
There are three general routes to \acro{PDF} output: Adobe's original
`distillation' route (via \PS{} output), direct conversion of a
\acro{DVI} file, and the use of a direct \TeX{}-like \acro{PDF}
generator such as \Qref*{\PDFTeX{}}{Q-whatpdftex}.
For simple documents (with no hyper-references), you can either
\begin{itemize}
\item process the document in the normal way, produce \PS{}
output and distill it;
\item (on a Windows or Macintosh machine with appropriate
tools installed) pass the output through a \acro{PDF}writer in place
of a printer driver. This route is only appropriate for simple
documents: \acro{PDF} writers cannot create hyperlinks;
\item process the document with ``vanilla'' \LaTeX{} and generate \acro{PDF}
direct from the \acro{DVI} using \ProgName{dvipdfm}/\ProgName{dvipdfmx}; or
\item process the document direct to \acro{PDF} with \PDFTeX{},
\Qref{\LuaTeX{}}{Q-luatex}, or \Qref{\xetex{}}{Q-xetex}.
\end{itemize}
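By way of illustration, the first, third and fourth routes might be
exercised with command sequences along the following lines (the file
name \texttt{myfile} is merely an example, and the exact commands
available depend on your \TeX{} distribution; \ProgName{ps2pdf} here
stands in for distillation, using \ProgName{ghostscript}):
\begin{quote}
\begin{verbatim}
latex myfile
dvips myfile -o myfile.ps
ps2pdf myfile.ps

latex myfile
dvipdfmx myfile

pdflatex myfile
\end{verbatim}
\end{quote}
The first sequence distils \ProgName{dvips} output, the second
converts the \acro{DVI} file directly, and the third generates
\acro{PDF} in a single step.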
To translate all the \LaTeX{} cross-referencing into Acrobat
links, you need a \LaTeX{} package to redefine
the internal commands. There are two of these for \LaTeX{}, both
capable of conforming to the
\Qref{Hyper\TeX{} specification}{Q-hyper}:
Heiko Oberdiek's \Package{hyperref}, and Michael Mehlich's
\Package{hyper}. (In practice, almost everyone uses
\Package{hyperref}; \Package{hyper} hasn't been updated since 2000.)
\Package{Hyperref} can often determine how it should generate
hypertext from its environment, but there is a wide set of
configuration options you can give via \csx{usepackage}. The package
can operate using \PDFTeX{} primitives, the hyper\TeX{}
\csx{special}s, or \acro{DVI} driver-specific \csx{special} commands.
Both \ProgName{dvips} and \YandY{}'s \acro{\ProgName{DVIPSONE}} can
translate the \acro{DVI} with these \csx{special} commands into
\PS{} acceptable to Distiller, and
\ProgName{dvipdfm} and \ProgName{dvipdfmx} have \csx{special} commands of
their own.
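For example, a document to be processed with \pdflatex{} might load
the package with something like the following (the options shown are
purely illustrative~--- in most circumstances \Package{hyperref} will
detect the correct driver for itself):
\begin{quote}
\begin{verbatim}
\usepackage[colorlinks=true,
            pdftitle={A sample document},
            pdfauthor={A. N. Other}]{hyperref}
\end{verbatim}
\end{quote}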
If you use \plaintex{}, the \Qref*{\Eplain{} macros}{Q-eplain} can
help you create \acro{PDF} documents with hyper-references.
It can operate using \PDFTeX{} primitives, or \csx{special} commands
for the \ProgName{dvipdfm}/\ProgName{dvipdfmx} \acro{DVI} drivers.
While there is no free implementation of all of \ProgName{Adobe}
\ProgName{Distiller}'s
functionality, any but the implausibly old versions of
\href{http://www.ghostscript.com/}{\ProgName{ghostscript}}
provide pretty reliable distillation (but beware of the problems with
\Qref*{\ProgName{dvips} output for distillation}{Q-dvips-pdf}).
For viewing (and printing) the resulting files, Adobe's
\ProgName{Acrobat} \ProgName{Reader} is available for a fair range of
platforms; for those for which Adobe's reader is unavailable, remotely
current versions of \href{http://www.ghostscript.com/}{\ProgName{ghostscript}}
combined with \ProgName{gv} or
\href{http://www.ghostgum.com.au/}{\ProgName{gsview}} can display and
print \acro{PDF} files, as can \ProgName{xpdf}.
In some circumstances, a
\href{http://www.ghostscript.com/}{\ProgName{ghostscript}}-based viewer
application is actually preferable to Acrobat Reader. For example, on
Windows Acrobat Reader locks the \extension{pdf} file it's displaying: this
makes the traditional (and highly effective) \AllTeX{} development
cycle of ``Edit\arrowhyph{}Process\arrowhyph{}Preview'' become
rather clumsy~--- \href{http://www.ghostgum.com.au/}{\ProgName{gsview}}
doesn't make the same mistake.
\begin{ctanrefs}
\item[\nothtml{\rmfamily}Acrobat Reader]download from \URL{http://get.adobe.com/reader}
\item[dvipdfm]\CTANref{dvipdfm}
\item[dvipdfmx]\CTANref{dvipdfmx}
\item[gv]Browse \CTANref{gv}
\item[hyper.sty]\CTANref{hyper}
\item[hyperref.sty]\CTANref{hyperref}
\end{ctanrefs}
\LastEdit{2014-01-22}
\Question[Q-hyper]{Making hypertext documents from \TeX{}}
If you want on-line hypertext with a \AllTeX{} source, probably on the
World Wide Web, there are four technologies to consider:
\begin{itemize}
\item Start from \alltex{}, and use one of a number of techniques to
translate (more or less) directly to
\Qref{\acro{HTML}}{Q-LaTeX2HTML};
% beware line break
\item Start from \Qref*{\Package{texinfo}}{Q-texinfo} source,
and use the \ProgName{info} viewer, or convert the \Package{texinfo}
source to \acro{HTML} using \ProgName{texi2html};
\item Start from \alltex{}; use \pdftex{}, \xetex{} or \LuaTeX{} to
  produce \acro{PDF}, using \Package{hyperref} to construct
  hyperlinks; or
\item Start from (unconventional) \alltex{} which uses the % ! line break
\Qref*{hyper\TeX{} conventions}{Q-hypertex}.
\end{itemize}
\begin{ctanrefs}
\item[texinfo]\CTANref{texinfo}
\end{ctanrefs}
\LastEdit{2013-05-30}
\Question[Q-hypertex]{The \emph{hyper\TeX{}} project}
The \emph{hyper\TeX{}} project extended the functionality of all the
\LaTeX{} cross-referencing commands (including the table of contents)
to produce \csx{special} commands which are parsed by \acro{DVI} processors
conforming to the Hyper\TeX{} guidelines;
it provides general hypertext links, including those
to external documents.
The Hyper\TeX{} specification says that conformant viewers/translators
must recognize the following set of \csx{special} commands:
\begin{description}
\item[href:] |html:<a href = "href_string">|
\item[name:] |html:<a name = "name_string">|
\item[end:] |html:</a>|
\item[image:] |html:<img src = "href_string">|
\item[base\_name:] |html:<base href = "href_string">|
\end{description}
The \emph{href}, \emph{name} and \emph{end} commands are used to do
the basic hypertext operations of establishing links between sections
of documents.
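For illustration, a link written directly in terms of these
\csx{special} commands (rather than via a macro package) might look
like:
\begin{quote}
\begin{verbatim}
\special{html:<a href="http://www.tug.org/">}%
TeX Users Group%
\special{html:</a>}
\end{verbatim}
\end{quote}
In practice, of course, packages such as \Package{hyperref} generate
such \csx{special}s for you.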
Further details are available on \URL{http://arXiv.org/hypertex/}; there
are two commonly-used implementations of the specification, a
modified \ProgName{xdvi} and (recent releases of)
\ProgName{dvips}. Output from the latter may be used in recent
releases of \href{http://www.ghostscript.com/}{\ProgName{ghostscript}}
or Acrobat Distiller.
\Question[Q-dvips-pdf]{Quality of \acro{PDF} from \PS{}}
\keywords{blurry fuzzy crippled}
Any reasonable \PS{}, including any output of \ProgName{dvips}, may be
converted to \acro{PDF}, using (for example) a sufficiently recent
version of \href{http://www.ghostscript.com/}{\ProgName{ghostscript}},
Frank Siegert's (shareware)
\href{http://www.pstill.com/}{\ProgName{PStill}}, or Adobe's (commercial)
\ProgName{Distiller}.
But, although the job may (almost always) be done, the results are
often not acceptable: the most frequent problem is bad presentation of
the character glyphs that make up the document. The following answers
offer solutions to this (and other) problems of bad presentation.
Issues covered are:
\begin{itemize}
\item \Qref*{Wrong type of fonts used}{Q-fuzzy-type3}, which is
the commonest cause of fuzzy text.
\item \Qref*{\ProgName{Ghostscript} too old}{Q-fuzzy-gs},
which can also result in fuzzy text.
% beware line breaks
\item \Qref*{Switching to the \acro{T}1 font encoding}{Q-fuzzy-T1},
which is yet another possible cause of fuzzy text.
\item Another problem~--- missing characters~--- arises from an
% beware line breaks
\Qref*{aged version of \ProgName{Adobe}~\ProgName{Distiller}}{Q-distill-prob}.
\item Finally, there's the common confusion that arises from using the
  \ProgName{dvips} configuration file \texttt{-Ppdf}, which can give rise to % ! line br
  \Qref*{weird characters}{Q-charshift}.
\end{itemize}
It should be noted that \ProgName{Adobe} % \ProgName{Acrobat} no longer
% part of the name
\ProgName{Reader}~6 (released in mid-2003, and later versions) does
not exhibit the ``fuzziness'' that so many of the answers below
address. This is of course good news: however, it will inevitably be
a long time before every user in the world has this (or a later)
version, so the remedies below are going to remain relevant for some
time to come.
The problems are also discussed, with practical examples, in Mike
Shell's \Package{testflow} package, which these FAQs recommend as a
``\Qref*{specialised tutorial}{Q-tutbitslatex}''.
\begin{ctanrefs}
\item[testflow]\CTANref{testflow}
\end{ctanrefs}
\Question[Q-fuzzy-type3]{The wrong type of fonts in \acro{PDF}}
\keywords{crippled blurry}
This is by far the commonest problem: the symptom is that text in the
document looks ``fuzzy''.
Most people use \ProgName{Adobe} \ProgName{Acrobat} \ProgName{Reader}
to view their \acro{PDF}: \ProgName{Reader} is distributed free of
charge, and is widely available, for all its faults. One of those
faults is its failure to deal with bitmap fonts (at least, in all
versions earlier than version~6, all copies of which are pretty old
now~\dots{} but some are occasionally found).
So we don't want bitmap fonts in our \PS{}: with them, characters show
up in \ProgName{Reader}'s display as blurred blobs which are often not
even recognisable as the original letter, and are often not properly placed
on the line. Nevertheless, even now, most \TeX{} systems have
\ProgName{dvips} configured to use
\Qref*{\extension{pk} files}{Q-pk} in its output. Even
\PDFTeX{} will use \extension{pk} files if it can see no alternative for
a font in the document it is processing.
Our remedy is to use
``\Qref*{Adobe Type~1}{Q-adobetypen}''
versions of the fonts we need. Since Adobe are in the
business of selling Type~1 fonts, \ProgName{Reader} was of course made
to deal with them really rather well, from the very beginning.
Of course, if your document uses nothing but fonts that came from
Adobe in the first place~--- fonts such as \FontName{Times} that
appear in pretty much every \PS{} printer, or such as Adobe
\FontName{Sabon} that you pay extra for~--- then there's no problem.
But most people use \FontName{Computer} \FontName{Modern} to start
with, and even those relative sophisticates who use something as
exotic as \ProgName{Sabon} often find themselves using odd characters
from \acro{CM} without really intending to do so. Fortunately, rather
good versions of the \acro{CM} fonts are available from the \acro{AMS}
(who have them courtesy of % ! line break
\Qref{Blue Sky Research}{Q-commercial} and \YandY{}).
Most modern systems have the fonts installed ready to use; and any
system installed less than 3~years ago has a \ProgName{dvips}
configuration file `\texttt{pdf}' that signals the use of the
\acro{CM} fonts, and also sets a few other parameters to improve
\ProgName{dvips}' output. Use this configuration as:
\begin{quote}
\begin{verbatim}
dvips -Ppdf myfile -o myfile.ps
\end{verbatim}
\end{quote}
This may produce a warning message about failing to find the
configuration file:
\begin{quote}
\begin{verbatim}
dvips: warning: no config file for `pdf'
\end{verbatim}
\end{quote}
or something similar, or about failing to find a font file:
\begin{quote}
\begin{verbatim}
dvips: ! Couldn't find header file cmr10.pfb
\end{verbatim}
\end{quote}
Either of these failures signals that your
system doesn't have the fonts in the first place.
A way of using the fonts that doesn't involve the sophistication of
the \texttt{-Ppdf} mechanism is simply to load maps:
\begin{quote}
\begin{verbatim}
dvips -Pcmz -Pamz myfile -o myfile.ps
\end{verbatim}
\end{quote}
You may encounter the same warning messages as listed above.
If your system does not have the fonts, it won't have the
configuration file either; however, it might have the configuration
file without the fonts. In either case, you need to
\Qref*{install the fonts}{Q-inst1cm}.
\Question[Q-fuzzy-gs]{Fuzzy fonts because \ProgName{Ghostscript} too old}
\keywords{crippled blurry}
So you've done everything the \acro{FAQ} has told you that you need,
correct fonts properly installed and appearing in the \ProgName{dvips}
output, but \emph{still} you get fuzzy character output after
distilling with \href{http://www.ghostscript.com/}{\ProgName{ghostscript}}.
The problem could arise from too old a version of
\href{http://www.ghostscript.com/}{\ProgName{ghostscript}}, which you
may be using directly, or via a
script such as \ProgName{ps2pdf} (distributed with
\ProgName{ghostscript} itself), \ProgName{dvipdf}, or similar.
Though \ProgName{ghostscript} was capable of distillation from
version~5.50, that version could only produce bitmap Type~3 output of
any font other than the fundamental 35~fonts (\FontName{Times},
\FontName{Helvetica}, etc.). Later versions added `complete'
distillation, but it wasn't until version~6.50 that one could rely on
it for everyday work.
So, if your \acro{PDF} output still looks fuzzy in \ProgName{Acrobat}
\ProgName{Reader}, upgrade \ProgName{ghostscript}. The new version
should be at least version~6.50, of course, but it's usually good
policy to go to the most recent version (version~8.12 at the time of
writing~--- 2003).
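If you are unsure which version you have, \ProgName{ghostscript} will
report it; a sufficiently recent copy can then distil via its
\ProgName{ps2pdf} script~--- something like the following (the
\texttt{-dPDFSETTINGS} option merely selects a conservative set of
distillation defaults, and is not essential):
\begin{quote}
\begin{verbatim}
gs --version
ps2pdf -dPDFSETTINGS=/prepress myfile.ps myfile.pdf
\end{verbatim}
\end{quote}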
\Question[Q-fuzzy-T1]{Fonts go fuzzy when you switch to \acro{T}1}
\keywords{crippled blurry}
You've been having problems with hyphenation, and someone has
suggested that you should use ``\cmdinvoke{usepackage}[T1]{fontenc}''
to help sort them out. Suddenly you find that your final \acro{PDF}
has become fuzzy. The problem may arise whether you are using \PS{}
output and then distilling it, or you are using \PDFTeX{} for the
whole job.
In fact, this is the same problem as most others about the
\Qref*{quality of \acro{PDF}}{Q-dvips-pdf}: you've abandoned
your previous setup using Type~1 versions of the \acro{CM} fonts, and
\ProgName{dvips} has inserted Type~3 versions of the \acro{EC} fonts
into your document output. (See % beware line break
``\Qref*{Adobe font types}{Q-adobetypen}''
for details of these font types; also, note that the font
\emph{encoding}~\acro{T}1
has nothing directly to do with the font \emph{format}~Type~1).
However, as noted in % beware line break
\htmlonly{``}\Qref[question]{8-bit Type~1 fonts}{Q-type1T1}\htmlonly{''},
Type~1 versions of \acro{CM}-like fonts in \acro{T}1 (or equivalent) encoding
are now available, both as ``real'' fonts, and as virtual font sets.
One solution, therefore, is to use one of these alternatives.
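For example, a document that is to stay with \acro{CM}-like fonts
might (if the Latin Modern fonts, discussed in the answer just
referenced, are installed) say no more than:
\begin{quote}
\begin{verbatim}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\end{verbatim}
\end{quote}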
The alternative is to switch font family altogether, to something like
\FontName{Times} (as a no-thought default) or one of the many more pleasing
Adobe-encoded fonts. The default action of \Package{fontinst}, when
creating metrics for such a font, is to create settings for both \acro{OT}1
and \acro{T}1 encodings, so there's little change in what goes on (at the
user level) even if you have switched to \acro{T}1~encoding when using the
fonts.
\Question[Q-distill-prob]{Characters missing from \acro{PDF} output}
If you're using \ProgName{Acrobat} \ProgName{Distiller} to create your
\acro{PDF} output, you may find
characters missing. This may manifest
itself as messed-up maths equations (missing
``\latexhtml{\ensuremath{-}}{-}'' signs, for example), or bits missing
from large symbols. Early versions of \ProgName{Distiller} used to
ignore character positions 0--31 and 128--159 of every font: Adobe's
fonts never use such positions, so why should \ProgName{Distiller}?
Well, the answer to this question is ``because Adobe don't produce all
the world's fonts''~--- fonts like \FontName{Computer}
\FontName{Modern} were around before Adobe came on the scene, and
\emph{they} use positions 0--31. Adobe don't react to complaints like
that in the previous sentence, but they do release new versions of
their programs; and \ProgName{Distiller}, since at least version~4.0,
\emph{has} recognised the font positions it used to shun.
Meanwhile, \TeX{} users with old versions of \ProgName{Distiller} need
to deal with their fonts. \ProgName{Dvips} comes to our aid: the
switch \texttt{-G1} (``remap characters'') moves the offending
characters out of the way. The \acro{PDF} configuration file
(\texttt{-Ppdf}), recommended % beware line break
\latexhtml{above}{in ``\Qref{the wrong type of fonts}{Q-fuzzy-type3}''},
includes the switch.
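If for some reason you are not using the \texttt{-Ppdf}
configuration, the switch may be given explicitly on the command
line, for example:
\begin{quote}
\begin{verbatim}
dvips -G1 myfile -o myfile.ps
\end{verbatim}
\end{quote}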
The switch is not without its problems; pre-2003 versions of
\ProgName{dvips} will apply it to Adobe fonts as well, causing
\Qref*{havoc}{Q-charshift}, but fortunately
that problem is usually soluble. However, a document using both
\acro{CM} and Adobe-specified fonts is stuck. The only real solution
is either to upgrade \ProgName{dvips}, or to spend money to upgrade
\ProgName{Distiller}.
\Question[Q-type1T1]{Finding `8-bit' Type~1 fonts}
\keywords{eight}
Elsewhere, answers to these \acro{FAQ}s recommend that you use an
`8-bit' font to permit % line break!!
\Qref*{accentuation of inflected languages}{Q-hyphenaccents},
and also recommend the use of Type~1 fonts to ensure that
you get \Qref*{good quality \acro{PDF}}{Q-fuzzy-type3}. These
recommendations used to be contradictory: one could not just
``switch'' from the free \acro{CM} fonts to free Cork- (or similarly)
encoded Type~1 fonts. The first approach that started to alleviate
these problems was the development of virtual fonts that make
a good approximation to the Cork encoding (see below). Now, however, we
have ``true'' Type~1 fonts available: as always, we have an
embarrassment of riches with three free alternatives, and one
commercial and one shareware version.
\Package{CM-super} is an
auto-traced set which encompasses all of the \acro{T}1 and \acro{TS}1
encodings as well as the \acro{T}2* series (the family of encodings
that cover languages based on Cyrillic alphabets). These fonts are
pretty easy to install (the installation instructions are clear), but
they are huge: don't try to install them if you're short of disc
space.
\Package{CM-LGC} is a similar ``super-font'' set, but of much more
modest size; it covers \acro{T}1, \acro{TS}1 and \acro{T}2\acro{A}
encodings (as does \Package{CM-super}), and also covers the \acro{LGR}
encoding (for typesetting Greek, based on Claudio Beccari's \MF{}
sources). \Package{CM-LGC} manages to be small by going to the
opposite extreme from \Package{CM-super}, which includes fonts at all
the sizes supported by the original \acro{EC} (a huge range);
\Package{CM-LGC} has one font per font shape, getting other sizes by
scaling. There is an inevitable loss of quality inherent in this
approach, but for the disc-space-challenged machine, \Package{CM-LGC}
is an obvious choice.
\Package{Tt2001} is a simple scan of the \acro{EC} and \acro{TC}
fonts, and has some virtues~--- it's noticeably smaller than
\Package{CM-super} while being less stark than \Package{CM-LGC}.
\Package{Latin} \Package{Modern} is produced using the
program \Qref*{\ProgName{MetaType1}}{Q-textrace}. The
\Package{Latin} \Package{Modern} set comes with \acro{T}1, \acro{TS}1 and
\acro{LY}1 encoded variants (as well as a variant using the Polish
\acro{QX} encoding); for the glyph set it covers, its outlines seem
rather cleaner than those of \Package{CM-super}. \Package{Latin}
\Package{Modern} is more modest in its disc space demands than is
\Package{CM-super}, while not being nearly as stark in its range of
design sizes as is \Package{CM-LGC}~--- \Package{Latin}
\Package{Modern}'s fonts are offered in the same set of sizes as the
original \Package{CM} fonts. It's hard to argue with the choice:
Knuth's range of sizes has stood the test of time, and is one of the
bases on which the excellence of the \TeX{} system rests.
\Qref*{Virtual fonts}{Q-virtualfonts} help us deal with the problem,
since they allow us to map ``bits of \acro{DVI} file'' to single
characters in the virtual font; so we can create an ``\'e'' character
by recreating the \acro{DVI} commands that would result from the code
``\csx{'}\texttt{e}''. However, since this involves two characters being
selected from a font, the arrangement is sufficient to fool
\ProgName{Acrobat} \ProgName{Reader}: you can't use the program's
facilities for searching for text that contains inflected characters,
and if you \emph{cut} text from a window that contains such a
character, you'll find something unexpected (typically the accent and
the `base' characters separated by a space) when you \ProgName{paste}
the result. However, if you can live with this difficulty, virtual
fonts are a useful and straightforward solution to the problem.
There are two virtual-font offerings of \acro{CM}-based 8-bit
fonts~--- the \Package{ae} (``almost \acro{EC}'') and
\Package{zefonts} sets; the \Package{zefonts} set has wider coverage
(though the \Package{ae} set may be extended to offer guillemets by
use of the \Package{aeguill} package). Neither offers characters such
as \texttt{eth} and \texttt{thorn} (used, for example, in
Icelandic), but the \Package{aecompl} package works with the
\Package{ae} fonts to provide the missing characters from the
\acro{EC} fonts (i.e., as bitmaps).
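To use the \Package{ae} set, the preamble of a document need contain
no more than something like the following (\Package{aecompl} being
needed only if you use characters that the \Package{ae} fonts
themselves lack):
\begin{quote}
\begin{verbatim}
\usepackage[T1]{fontenc}
\usepackage{ae,aecompl}
\end{verbatim}
\end{quote}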
The sole remaining commercial \acro{CM}-like 8-bit font comes from
Micropress, who offer the complete \acro{EC} set
in Type~1 format, as part of their range of outline versions of fonts
that were originally distributed in \MF{} format. See
\Qref[question]{``commercial distributions''}{Q-commercial}.
The shareware % ! line break
\Qref*{BaKoMa \TeX{} distribution}{Q-TeXsystems} offers a
set of Type~1 \acro{EC} fonts, as an extra shareware option. (As far
as the present author can tell, these fonts are \emph{only} available
to users of BaKoMa \TeX{}: they are stored in an archive format that
seems not to be publicly available.)
Finally, you can use one of the myriad text fonts available in Type~1
format (with appropriate \acro{PSNFSS} metrics for \acro{T}1 encoding,
or metrics for some other 8-bit encoding such as \acro{LY}1). However,
if you use someone else's text font (even something as simple as
Adobe's Times family) you have to find a matching family of
mathematical fonts, which is a non-trivial undertaking~---
\htmlonly{see }\Qref{``choice of scalable fonts''}{Q-psfchoice}.
\begin{ctanrefs}
\item[\nothtml{\rmfamily}ae fonts]\CTANref{ae}
\item[aecompl.sty]Distributed with \CTANref{ae}
\item[aeguill.sty]\CTANref{aeguill}
\item[\nothtml{\rmfamily}BaKoMa fonts]Browse \CTANref{bakoma-texfonts}
\item[\nothtml{\rmfamily}CM-LGC fonts]\CTANref{cm-lgc}
\item[\nothtml{\rmfamily}CM-super fonts]\CTANref{cm-super} (beware:
very large download)
\item[\nothtml{\rmfamily}Latin Modern fonts]\CTANref{lm}
\item[\nothtml{\rmfamily}tt2001 fonts]\CTANref{tt2001}
\item[\nothtml{\rmfamily}zefonts]\CTANref{zefonts}
\end{ctanrefs}
\Question[Q-pkfix]{Replacing Type 3 fonts in \PS{}}
One often comes across a \PS{} file generated by
\ProgName{dvips} which contains embedded \acro{PK} fonts; if you try
to generate \acro{PDF} from such a file, the quality will be poor.
Of course, the proper solution is to regenerate the \PS{} file,
but if neither the sources nor the \acro{DVI} file are available, one
must needs resort to some sort of patching to replace the bitmap fonts
in the file by outline fonts.
The program \ProgName{pkfix} (by Heiko Oberdiek) will do this
patching, for files created by ``not too old versions'' of
\ProgName{dvips}: it finds the fonts to be replaced by examining the
\PS{} comments \ProgName{dvips} has put in the file. For each
font, \ProgName{pkfix} puts appropriate \TeX{} commands in a file,
which it then processes and runs through \ProgName{dvips} (with switch
\texttt{-Ppdf}) to acquire an appropriate copy of the font; these copies are
then patched back into the original file.
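A typical invocation simply names the file to be repaired and the
file to be written (the file names here are, of course, only
examples):
\begin{quote}
\begin{verbatim}
pkfix myfile.ps myfile-fixed.ps
\end{verbatim}
\end{quote}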
If your source file is older than \ProgName{pkfix} can deal with,
there's still a modicum of hope: \ProgName{pkfix-helper} examines the
bitmap fonts in a document, compares them with the metric
(\extension{tfm}) fonts on your system and comes to a view of which
font might be which. The program reports on ``poor'' matches, and
there are options for confirming, or replacing, its guesses. The
technique (which sounds implausible) is successful enough to be worth
a try.
A further option is Frank Siegert's (shareware)
\href{http://www.pstill.com}{PStill}, which is capable of processing
the \PS{} it is distilling, and one option is to replace bitmap fonts
in the file with Type~1 versions.
\begin{ctanrefs}
\item[pkfix]\CTANref{pkfix}
\item[pkfix-helper]\CTANref{pkfix-helper}
\end{ctanrefs}
\Question[Q-pdfpagelabels]{\Package{Hyperref} and repeated page numbers}
The \Class{book} class (and its friends and relations) automatically
changes the display of page numbers in the frontmatter of the document
to lower-case roman. This is fine for human readers, but if
\Package{hyperref} has been misconfigured, the existence of pages that have
the same page number can cause problems. Fortunately, the
configuration options to make \Package{hyperref} ``do the right
thing'' are (by default) set up to avoid problems.
The two options in question are:
\begin{description}
\item[\pkgoption{plainpages=false}] Make page anchors using the
formatted form of the page number. With this option,
\Package{hyperref} writes different anchors for pages `ii' and `2'.
(This is the default value for the option, which is a % ! line break
\emph{good thing}\dots{})
If the option is set `\texttt{true}' \Package{hyperref} writes page
anchors as the arabic form of the page number, rather than the
formatted form that gets printed; this is not usually appropriate.
\item[\pkgoption{pdfpagelabels}] Set \acro{PDF} page labels; i.e.,
write the value of \csx{thepage} to the \acro{PDF} file so that
\Package{Acrobat Reader} can display the page number as (say) `ii (4
of 40)' rather than simply `4 of 40'.
\end{description}
The two should be used whenever page numbering is not just
`1\texttt{..}\ensuremath{n}'; they may be used independently, but
usually are not.
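A typical loading of the package, spelling out both options (even
though \pkgoption{plainpages=false} is the default anyway), is
therefore:
\begin{quote}
\begin{verbatim}
\usepackage[plainpages=false,pdfpagelabels]{hyperref}
\end{verbatim}
\end{quote}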
The recipe isn't perfect: it relies on \csx{thepage} being different
for every page in the document. A common problem arises when there is
an unnumbered title page, after which page numbers are reset: the
\PDFTeX{} warning of ``\Qref*{duplicate destinations}{Q-hyperdupdest}''
will happen in this case, regardless of the options.
\begin{ctanrefs}
\item[hyperref.sty]\CTANref{hyperref}
\end{ctanrefs}
\Question[Q-cpy-srchpdf]{Copy-paste-able/searchable \acro{PDF} files}
\acro{PDF} files generated from \TeX{} (and friends) will by default
hold their text in the encoding of the original \TeX{} font used by
the document.
When \acro{PDF} readers, etc., offer copy-paste or searching
functions, the operations take place on the glyph codes used for the
fonts selected by the document. This is fine, for the simplest
documents (in English, at least); the problem comes when you're using
an inflected language (with accented letters, or composite glyphs
such as `\ae{}')~--- \TeX{} will typically use a non-standard
encoding, and there are likely to be problems, since \acro{PDF} readers
assume the text is presented in Unicode.
For \acro{PDF} generated from \LaTeX{} (the \acro{DVI} being
converted, by whatever means), or from \pdflatex{}, the character
codes used in the \acro{PDF} file are in fact those of the document's
\Qref*{font encoding}{Q-whatenc}; if you're using \acro{OT}1 or
\acro{T}1, your document will be \acro{OK} for almost all \acro{ASCII}
characters, but it's likely that anything ``out of the ordinary'' will
not be represented properly. (Of course, \acro{PDF} generated from
\xetex{}- or \LuaTeX{}-based formats is going to be \acro{OK}, since
those engines work in Unicode throughout.)
The solution comes from the character-mapping facilities in the
\acro{PDF} specification: the file may specify a table of translations
of characters present in the coding used in the file, to a Unicode
version of the characters.
Packages \Package{cmap} and \Package{mmap} both offer means of
generating such tables (\Package{mmap} has wider coverage, including
the various maths encodings); both work with \pdftex{} and no other
engine. Thus your document becomes something like:
\begin{quote}
\begin{verbatim}
\documentclass{article}
\usepackage{mmap} % (or cmap)
\usepackage[T1]{fontenc}
... % your other packages
\begin{document}
... % your actual text
\end{verbatim}
\end{quote}
Unfortunately, the packages only work with fonts that are directly
encoded, such as the default (Computer Modern, i.e., \FontName{cm})
fonts, and things such as \FontName{cm-super} or the \FontName{Latin}
\FontName{Modern} sets. Fonts like Adobe
Times Roman (which are encoded for \AllTeX{} use via virtual fonts)
are not amenable to this treatment.
\begin{ctanrefs}
\item[cmap.sty]\CTANref{cmap}
\item[cm-super \nothtml{\rmfamily}fonts]\CTANref{cm-super}
\item[\nothtml{\rmfamily}Latin Modern fonts]\CTANref{lm}
\item[mmap.sty]\CTANref{mmap}
\end{ctanrefs}
\LastEdit{2013-08-21}
% {{cookiecutter.author}}
% {{cookiecutter.email}}
% === METHODS ===
\chapter{Methods} \label{methods}
This chapter provides details on the approaches and methods taken to achieve the research objectives.
\documentclass{article}
\pagestyle{empty}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{graphicx}
\usepackage{multicol}
\setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{0in}
\setlength{\topmargin}{0in} \setlength{\textheight}{8.5in}
\setlength{\textwidth}{6.5in}
\makeatletter
\renewcommand*\env@matrix[1][*\c@MaxMatrixCols c]{%
\hskip -\arraycolsep
\let\@ifnextchar\new@ifnextchar
\array{#1}}
\makeatother
\begin{document}
\begin{flushleft}
\bfseries{MATH 260, Linear Systems and Matrices, Fall `14}\\
\bfseries{Activity 9: Null \& Column Spaces}\\
%\bfseries{Honor Code:} \hspace{3.5in}\bfseries{Names:}\\
\end{flushleft}
\begin{flushleft}
\section*{Warm-up: In the Null Space?}
%This is the matrix from Sect. 5.4, pg 339, #43
Recall that the null space is the set of all vectors $\vec{x}$ such that $\textbf{A}\vec{x}=\vec{0}$. Given the matrix:\\
\begin{center}
$\textbf{G}=
\begin{bmatrix}
2 & 3 & -1\\
-1 & 4 & 6 \\
1 & 7 & 5
\end{bmatrix}
$\\
\end{center}
1a) Determine which of the following vectors are in the null space of \textbf{G}.\\
\begin{center}
$ \vec{w}_1=\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} $
\hspace{0.3in}
$ \vec{w}_2=\begin{bmatrix} 8 \\ -4 \\ 4 \end{bmatrix} $
\hspace{0.3in}
$ \vec{w}_3=\begin{bmatrix} -2 \\ 1 \\ -1 \end{bmatrix} $
\end{center}
\vspace{2in}
1b) Find the actual null space of \textbf{G}. Do your answers above make sense?
\newpage
\section*{Null Space}
Given the matrix $\textbf{F} = \begin{bmatrix} 1 & 2 & 1\\ 1 & 2 & 2 \end{bmatrix}$.
\vspace{0.1in}
2) Give a basis for the null space of $\textbf{F}$.
\vspace{1.5in}
\section*{Column Space}
We know that the column space ( $span( \{ \vec{v}_1 , \vec{v}_2, \ldots \vec{v}_n \} )$ ) is a subspace. However, generally in mathematics we want to describe things in as simple terms as possible. For spaces, this means giving a \textit{basis} only for a space. \\
Recall that a set is a \textbf{basis} of a vector space if it has the properties:\\
(i) The set is linearly independent \hspace{.25in} (ii) The span of the set covers the entire vector space.\\
\vspace{0.2in}
3) The column space of \textbf{G} could be given as \textit{span}$ \left( \left\{ \begin{bmatrix} 2 \\ -1 \\ 1\end{bmatrix}, \begin{bmatrix} 3 \\ 4 \\ 7 \end{bmatrix}, \begin{bmatrix} -1 \\ 6 \\ 5 \end{bmatrix} \right\} \right)$. Does the set $S=\left\{ \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 4 \\ 7 \end{bmatrix}, \begin{bmatrix} -1 \\ 6 \\ 5 \end{bmatrix} \right\}$ form a basis for the column space? (Show why or why not)
\vspace{1.5in}
We already have all the tools and information to define a valid basis. Let's do so...
\vspace{0.1in}
4) Which columns in the RREF of \textbf{G} are pivot columns?
\pagebreak
5) Take the columns from the original \textbf{G} that you identified as the pivot columns. Form a new set $S_{B}$ from these columns. These should be $3 \times 1$ vectors with non-zero values in each row.
\vspace{0.1in}
5a) Is this new set $S_{B}$ linearly independent?
\vspace{1.5in}
5b) Does it span the entire column space? (\textit{Hint: Does $span(S) = span (S_{B})$})
\vspace{1.5in}
5c) Is the set $S_{B}$ a basis for the column space? Explain.
\vspace{0.75in}
6) We defined the \textbf{rank} of a matrix as the number of pivot columns in RREF. How does the rank of \textbf{G} relate to the column space of \textbf{G}? (\textit{Hint: What property of a space gives a single value out?} )
\vspace{1in}
\Large Note this relationship between rank and column spaces is actually true for any matrix. \normalsize
\vspace{0.2in}
7) Give a basis for the column space of $\textbf{F}$ (from problem 2).
\newpage
\section*{Rank-Nullity Theorem}
8) You should have already found a basis for each of the column space and the null space of $\textbf{F}$ (problems 2 and 7).
\vspace{0.2in}
9a) What are the dimensions of the null space and column space for \textbf{F}?
\vspace{0.75in}
9b) How do the dimensions of \textbf{F} relate to the sum: dim(column space of \textbf{F}) + dim(null space of \textbf{F})?
\vspace{1in}
10a) State the dimension of the null space (the \textit{nullity} of \textbf{G}) and the dimension of the column space for matrix \textbf{G}.
\vspace{0.75in}
10b) How do the dimensions of \textbf{G} relate to the sum: dim(column space of \textbf{G}) + dim(null space of \textbf{G})?
\vspace{1in}
11) The \textbf{Rank-Nullity Theorem} generalizes the above results for the $m\times n$ matrix \textbf{A}. Based on your results, what do you think the \textbf{Rank-Nullity Theorem} is?
\vspace{0.75in}
\end{flushleft}
\end{document}
\chapter{Data Exploration and Scheduling Problem Building Blocks}
\label{chapter: Problem Definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% SECTION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this chapter we re-introduce the problem assigned to us by Royal Mail in a greater level of detail. Sections \ref{section: problem outline}-\ref{section: Input Parameters} walk the reader through a more detailed description of the problem, introducing the main elements that constitute it and giving an idea of Royal Mail's current practices through a series of examples. They also outline the motivation behind solving this problem, and provide arguments justifying why it is a problem worth solving, with great potential for efficiency gains despite the complexity involved in obtaining a solution.
\vspace{\baselineskip}
\noindent
Following the description of the problem, Section \ref{section: Data Exploration} continues by outlining the structure of the historical data supplied by Royal Mail, and the steps taken to bring the dataset into the form that was utilised in the modelling portion of the project. The \textbf{finalised} form of the dataset described at the end of the Data Cleaning section \ref{section: Data Cleaning} constitutes the foundation upon which this dissertation derives feasible and efficient schedules. The chapter concludes with Sections \ref{section: Redefined Dataset}-\ref{fig: Wave-instances.} describing the processes behind the generation of two unique instances of the problem that will be utilised to conduct more targeted experiments in Chapter \ref{chapter: 2-Evaluating Royal Mail Historical Data}.
\vspace{\baselineskip}
\noindent
All in all, this chapter gives a more detailed overview of the problem, as well as a detailed description of the dataset provided by our Industrial Liaison, and the steps taken to prepare it for use in our experiments.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{General Context}
\label{section: problem outline}
Each of Royal Mail's \textbf{Mail Centres (MC)}, upon receiving the mail collected from each \textbf{Delivery Office (DO)}, will sort it and then decide which portions of it need to be redistributed to DOs belonging to the same MC the following day. Our goal is to structure the delivery itinerary for each MC, for the portion of the post that will remain within the circulation of that MC.
\vspace{\baselineskip}
\noindent
The day before a piece of mail is delivered, the MC responsible for it will have a set of round-trips that need to be completed. At its disposal, each MC has a crew of \textbf{HGV drivers} that will complete those trips. Our objective is to schedule those trips in a more efficient manner than Royal Mail's current scheduling practices, minimising the time that drivers spend performing \textbf{non-essential} activities that could be performed by lower-grade employees. The process of creating those schedules can be resolved into two sub-components. \textbf{Firstly}, decide which trips are completed by each driver. \textbf{Secondly}, decide how to sequence the trips assigned to each driver. In allocating trips to a driver we need to respect restrictions on whether a trip can be allocated to a specific driver at a particular instant in time. These restrictions are driven by the EU regulations\footnote{Further explained in Appendix \ref{section: EU rules}} around HGV drivers, in particular the \textit{driving} and \textit{working} time directives. However, for the purposes of this dissertation we neglect those; our focus is solely on finding the \textbf{room} available \textbf{for optimisation}. Following that, it should be straightforward to make the modelling more realistic by introducing those important constraints that ensure the legality of the schedules; this is one of the directions considered for future work\footnote{Further explained in Section \ref{section: Addition of the Meal Relief Constraint} of Chapter \ref{chapter: Future Directions}} on the problem.
\vspace{\baselineskip}
\noindent
As mentioned in the Introduction chapter of the report, we only focus on the \textbf{scheduling} aspect of the problem, and consider the routes to be fixed, hence we do not focus on the \textit{Vehicle Routing} point of view.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2cm,>=stealth',bend angle=45,auto,rotate=90,transform shape]%picture #1
\tikzstyle{place}=[circle,thick,draw=black!75,fill=white!20,minimum size=10mm]
\tikzstyle{red place}=[place,draw=red!75,fill=red!20]
\tikzstyle{transition}=[rectangle,thick,draw=black!75,
fill=black!20,minimum size=10mm]
\tikzstyle{every label}=[black]
\begin{scope}
%All but clients
\node [place,label={[rotate=-90]center:DO}] (w1) {};
\node [place, draw=white,label={[rotate=-90]center:Trip A}] (c1) [below of=w1] {};
\node [place, line width=0.8mm,label={[rotate=-90]center:MC}] (s) [below of=c1] {};
\node [place, draw=white,label={[rotate=-90]center:Trip B}] (c2) [below of=s] {};
\node [place,label={[rotate=-90]center:DO}] (w2) [right of=c2] {}
edge [pre,bend right] (s);
%boxes w/ Clients
\node [transition,label={[rotate=-90]center:Client}] (e1) [left of=c1] {}
edge [pre,bend left] (w1)
edge [post,bend right] (s);
\node [transition,label={[rotate=-90]center:Client}] (e2) [left of=c2] {}
edge [post,bend left] (s);
\node [transition,label={[rotate=-90]center:Client}] (l1) [right of=c1] {}
edge [pre,bend left] (s)
edge [post,bend right] node[swap] {} (w1);
\node [transition,label={[rotate=-90]center:Client}] (l2) [below of=c2] {}
edge [pre,bend right] (w2)
edge [post,bend left] node {} (e2);
\end{scope}
\end{tikzpicture}%picture #1
\qquad
%picture #-5
\caption{An example of a typical duty consisting of two \textbf{round-trips} (i.e. atomic blocks), both starting and concluding at the MC.}
\label{fig:Atomic block}%
\end{figure}
\vspace{\baselineskip}
\noindent
At the start of every working day, a driver is assigned their day's itinerary. The dataset provided to us contains such historical itineraries for every driver for each day of the week. The task at hand is to use those historical itineraries to \textbf{propose optimised schedules for each day}, which will \textbf{be optimal in their use of the drivers' time.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Building Blocks}
\label{section: Input Parameters}
The optimised schedules will be based on historical schedules supplied in the form of a dataset\footnote{Further explained in Section \ref{section: Data Exploration}} by Royal Mail. The context of the problem involves historical schedules for a \textbf{model week} for the Exeter MC. The \textit{definition} of a \textbf{schedule} is the collection of duties that are assigned to drivers for each day of the week. Each driver is assigned their \textbf{duty} (or shift) for the day, which constitutes their itinerary for that day. Their duty instructs them to fulfil a number of round-trips each day. Their first trip of the day commences from the Exeter MC and, following a number of visits to external locations, their final trip concludes at the Exeter MC at the end of their day's itinerary. The duties are composed of one or more units of the following two key data points featured throughout the dataset:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Bullet Points %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{itemize}
\item \underline{\textbf{Activity}}: The main \textit{unit of information} signifying the completion of a task. Each activity has a \texttt{processing\_time} associated with it, informing us of how long the job that it contains takes to be completed. The jobs contained in an activity could be loading/unloading of the mail, driving time between two locations etc\footnote{A comprehensive list of all the instances of activities is cited in Table \ref{table:Activity List} of Appendix \ref{section: Appendix Activities Feaure in the Dataset}.}.
\item \underline{\textbf{Atomic Interval (block)}}: The building blocks that make up a \textbf{duty}. A block represents a \textbf{single round-trip} that commences at the MC and, through completing stops at various external locations, concludes again at the MC (see Figure \ref{fig:Atomic block}). Those locations are either Royal Mail's or clients' premises. Mathematically it is defined as a \textit{block of time}, during which a collection of \textbf{activities} take place. Multiple such atomic intervals are often featured inside a single duty. The intervals are \textit{tightly packed} in the sense that activities are rigidly in place, with no idle time going to waste between two adjacent activities inside a block (a simple formalisation is sketched immediately after this list). The intervals are characterised as \textit{atomic} to highlight their firm structure, which cannot be broken into sub-components, i.e. smaller sets of activities. To move a block, one cannot simply move a portion of it but has to move it in its entirety.
\end{itemize}
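\vspace{\baselineskip}
\noindent
A minimal formalisation of these building blocks may help fix ideas; the notation below is introduced purely for illustration and is not used elsewhere in the chapter. Suppose block $b$ consists of the ordered activities $A_{1},\dots,A_{n_{b}}$ with processing times $p_{1},\dots,p_{n_{b}}$, and that the block starts at time $s_{b}$. Because the block is tightly packed, its end time is simply
\[
    e_{b} = s_{b} + \sum_{i=1}^{n_{b}} p_{i},
\]
and a duty is an ordered sequence of blocks $b_{1},\dots,b_{k}$ assigned to a single driver such that $e_{b_{j}} \leq s_{b_{j+1}}$ for $j = 1,\dots,k-1$, i.e. consecutive round-trips of the same driver do not overlap in time.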
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Two images in one, illustrating atomic block
\begin{figure}%
\centering
\subfloat[The set relationship between the various Input Parameters of our problem.]{
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.9}]
\node[set,fill=blue4,text width=6.5cm,label={[below=148pt of rea,text opacity=1]\textit{Schedule}}]
(nat) at (0,-0.7) (rea) {};
\node[set,fill=blue3,text width=5cm,label={[below=108pt of rea,text opacity=1]\textit{Duties}}]
(nat) at (0,-0.5) (rea) {};
\node[set,fill=blue2,text width=3.5cm,label={[below=66pt of int]\textit{Atomic Blocks}}]
(int) at (0,-0.3) {};
\node[set,fill=blue1,text width=2cm] (nat) at (0,0) {\textit{Activities}};
\end{tikzpicture}}%picture #1
\qquad
%picture #2
\subfloat[Structure of a Duty, showing $Activities$ ($A_{i}$) inside $Atomic \; Blocks$ which themselves are inside a $Duty$.]{\raisebox{5em}{\begin{tikzpicture}[line width=.7pt,scale=0.78, every node/.style={scale=0.9}]
%Activities
\draw[fill=gray!20] (0,0)node[below]{} rectangle(1,1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% NOTATION %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%(bottomleft) (topright)
\draw[fill=gray!20] (1,0)node[below]{} rectangle(3,1);
\draw[fill=gray!20] (3,0)node[below]{}rectangle(4,1);
\draw[fill=gray!20] (4,0)node[below]{}rectangle (5,1)(2.5,1)node[below,yshift=-1.15cm, text=cyan!70!blue]{$\boldsymbol{Block_1}$};
\draw[fill=gray!20] (5,0)node[below]{}rectangle (6,1);
\draw[fill=gray!20] (6,0)node[below]{}rectangle (8,1);
\draw[fill=gray!20] (8,0)node[below]{}rectangle (9,1)(7,1)node[below,yshift=-1.15cm, text=orange]{$\boldsymbol{Block_2}$}(9.605,0.75)node[below,yshift=0.9cm]{$\boldsymbol{Duty}$};
%Blocks
%\draw[fill=none, double=gray!40,double distance =1pt] (-0.1,-0.1)node[below]{} rectangle(4.93,1.1);
%\draw[fill=none, double=red,double distance =1pt] (5.07,-0.1)node[below]{} rectangle(9.1,1.1);
\draw[fill=none,very thick,dotted,draw=cyan!70!blue] (-0.1,-0.1)node[below]{} rectangle(4.93,1.1);
\draw[fill=none,very thick,dotted,draw=orange] (5.07,-0.1)node[below]{} rectangle(9.1,1.1);
%Duty
\draw[fill=none,ultra thick,dashed] (-0.2,-0.2)node[below]{} rectangle(9.2,1.2);
\path (0.5,.5)node{$A_1$} (2,.5)node{$A_2$} (3.5,.5)node{$A_3$} (4.5,.5)node{$A_4$} (5.5,.5)node{$A_5$} (7,.5)node{$A_6$} (8.5,.5)node{$\dotsi$};
%%%%%%%(width of label, height)
\end{tikzpicture}}}
%end of picture #2
\caption{Illustrations explaining the nature of the Input Parameters of the problem.}%
\label{fig:Useful Time}%
\end{figure}
\vspace{\baselineskip}
\noindent
We now focus on two simplified examples that are meant to give the reader a more practical overview of the problem in question. We start off with a dreamt-up example that serves as an illustration that hopefully highlights the foundations of the problem, then transitioning to a more reality-based example that is meant to resemble what typically occurs in a Royal Mail MC.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[htb!]
\begin{tikzpicture}
\begin{axis}[
font=\scriptsize,
ytick style={draw=none},
xtick style={draw=none},
%unit vector ratio*=1 1 1,%1 0.6 1,
axis lines = middle,
enlarge x limits = {value=.01,upper},
enlarge y limits = {value=.05,upper},
ylabel={\textbf{Duties}},
xlabel={\textbf{Time (HH:mm)}},
ylabel near ticks,
xlabel near ticks,
const plot,
stack plots=false,
area style,
width=\linewidth,
height=5cm, %control the height of chart
%width=\linewidth,height=\textheight,
ytick={1,...,60},
yticklabels={},
xtick={1,3,...,24},
%xticklabels = {4,6,8},
extra y ticks={1,2,3,4},
extra y tick style={yticklabel={$D_{\pgfmathprintnumber{\tick}}$}}
]
% colours used below: yellow, orange, red!20, gray, light blue, teal, yellow!60!black, blue!20, magenta, green!20, cyan, brown
\addplot[fill=yellow] coordinates {(4.00,0) (4.00,1) (5,1) (5,0) } node at (current path bounding box.center) {Start};
\addplot[fill=orange] coordinates {(5,0) (5,1) (6.5,1) (6.5,0) } node at (current path bounding box.center) {Load};
\addplot[fill=red!20] coordinates {(6.5,0) (6.5,1) (9.00,1) (9.00,0) } node at (current path bounding box.center) {Travel to Client};
\addplot[fill=gray] coordinates {(9.00,0) (9.00,1) (10.50,1) (10.50,0) } node at (current path bounding box.center) {Unload};
\addplot[fill=light blue] coordinates {(10.5,0) (10.5,1) (12.5,1) (12.5,0) } node at (current path bounding box.center) {End @MC};
\addplot[fill=teal] coordinates {(12.5,0) (12.5,1) (13.5,1) (13.5,0) } node at (current path bounding box.center) {Start};
\addplot[fill=yellow!60!black] coordinates {(13.5,0) (13.5,1) (14.5,1) (14.5,0) } node at (current path bounding box.center) {Load};
\addplot[fill=blue!20] coordinates {(14.5,0) (14.5,1) (17.00,1) (17.00,0) } node at (current path bounding box.center) {Travel to DO};
\addplot[fill=magenta] coordinates {(17.00,0) (17.00,1) (18.50,1) (18.50,0) } node at (current path bounding box.center) {Unload};
\addplot[fill=green!20] coordinates {(18.5,0) (18.5,1) (20.5,1) (20.5,0) } node at (current path bounding box.center) {End @MC};
\addplot[fill=cyan] coordinates {(4.00,1) (4.00,2) (5.42,2) (5.42,1) } node at (current path bounding box.center) {Start};
\addplot[fill=brown] coordinates {(5.42,1) (5.42,2) (7,2) (7,1) } node at (current path bounding box.center) {Load};
\addplot[fill=yellow] coordinates {(7,1) (7,2) (9.30,2) (9.30,1) } node at (current path bounding box.center) {Travel to DO};
\addplot[fill=orange] coordinates {(9.3,1) (9.3,2) (11,2) (11,1) } node at (current path bounding box.center) {Unload};
\addplot[fill=red!20] coordinates {(11,1) (11,2) (13.33,2) (13.33,1) } node at (current path bounding box.center) {End @MC};
\addplot[fill=gray] coordinates {(13.33,1) (13.33,2) (14.33,2) (14.33,1) } node at (current path bounding box.center) {Start};
\addplot[fill=light blue] coordinates {(14.33,1) (14.33,2) (17,2) (17,1) } node at (current path bounding box.center) {Travel to DO};
\addplot[fill=teal] coordinates {(17,1) (17,2) (18,2) (18,1) } node at (current path bounding box.center) {Load};
\addplot[fill=yellow!60!black] coordinates {(18,1) (18,2) (20,2) (20,1) } node at (current path bounding box.center) {End @MC};
\addplot[fill=blue!20] coordinates {(5.00,2) (5.00,3) (6.08,3) (6.08,2) } node at (current path bounding box.center) {Start};
\addplot[fill=magenta] coordinates {(6.08,2) (6.08,3) (8.08,3) (8.08,2) } node at (current path bounding box.center) {Load};
\addplot[fill=green!20] coordinates {(8.08,2) (8.08,3) (10.50,3) (10.50,2) } node at (current path bounding box.center) {Travel to DO};
\addplot[fill=cyan] coordinates {(10.50,2) (10.50,3) (12.33,3) (12.33,2) } node at (current path bounding box.center) {Load};
\addplot[fill=brown] coordinates {(12.33,2) (12.33,3) (15,3) (15,2) } node at (current path bounding box.center) {Travel to DO};
\addplot[fill=yellow] coordinates {(15,2) (15,3) (16.33,3) (16.33,2) } node at (current path bounding box.center) {Unload};
\addplot[fill=orange] coordinates {(15,2) (15,3) (17,3) (17,2) } node at (current path bounding box.center) {End @MC};
\addplot[fill=yellow] coordinates {(4.75,3) (4.75,4) (6.42,4) (6.42,3) } node at (current path bounding box.center) {Start};
\addplot[fill=orange] coordinates {(6.42,3) (6.42,4) (8.92,4) (8.92,3) } node at (current path bounding box.center) {Load};
\addplot[fill=light blue] coordinates {(8.92,3) (8.92,4) (11.92,4) (11.92,3) } node at (current path bounding box.center) {Travel to Airport};
\addplot[fill=teal] coordinates {(11.92,3) (11.92,4) (13.22,4) (13.22,3) } node at (current path bounding box.center) {Unload};
\addplot[fill=yellow!60!black] coordinates {(13.22,3) (13.22,4) (15.33,4) (15.33,3) } node at (current path bounding box.center) {End @MC};
\end{axis}
\end{tikzpicture}
\caption{Gantt chart of abstract exemplary case.}
\label{fig:Schedule-for-abstract-exemplary-case}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Generalised Schedule Template}
We start with an abstracted example that resembles our problem very closely. The goal of this section is to formalise the problem and give the reader a general sense of its specifics. We have a set of duties and a set of blocks: in particular, we have 4 duties at our disposal and 6 atomic blocks that require scheduling. The details of the blocks scheduled in each of the four duties are shown in the table below:
\begin{table}[h]
\small
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Block} & \multicolumn{4}{|c|}{ \textbf{Characteristics}} \\
\hline
& Start Time & End Time & Number of Activities & Performed by Duty\\
\hline
1 & 04:00 & 12:30 & 5 & 1\\
\hline
2 & 12:30 & 20:30 & 5 & 1\\
\hline
3 & 04:00 & 13:20 & 5 & 2\\
\hline
4 & 13:20 & 20:00 & 4 & 2\\
\hline
5 & 05:00 & 17:00 & 6 & 3\\
\hline
6 & 04:45 & 15:20 & 5 & 4\\
\hline
\end{tabular}%
\medbreak
\end{table}
%of activities. (or maybe a really small instance of Royal Mail, to say for example this is what happened in Royal Mail in the first wave of Monday, and write that for confidentiality purposes we call them ATOMIC BLOCK 1, 2, 4, 5 or Athens, Tripoli, Thessaloniki. Or do both actually).
\vspace{\baselineskip}
\noindent
In this particular example we show the process of creating a schedule from an instance of blocks and duties. The act of scheduling is to place the blocks inside duties in a particular sequence; in this case the sequence is constructed heuristically, without the help of an algorithm.
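\vspace{\baselineskip}
\noindent
To make the notion of a feasible placement concrete, the following minimal sketch (written in Python purely for illustration; the times are those of the table above, converted to decimal hours) checks that the blocks assigned to the same duty never overlap:
\begin{verbatim}
# Blocks from the table above, as (start, end, duty) in decimal hours.
blocks = {
    1: (4.00, 12.50, 1),
    2: (12.50, 20.50, 1),
    3: (4.00, 13.33, 2),
    4: (13.33, 20.00, 2),
    5: (5.00, 17.00, 3),
    6: (4.75, 15.33, 4),
}

def duty_is_feasible(duty_id):
    """A duty is feasible if its blocks, sorted by start time, never overlap."""
    assigned = sorted((s, e) for (s, e, d) in blocks.values() if d == duty_id)
    return all(prev_end <= next_start
               for (_, prev_end), (next_start, _) in zip(assigned, assigned[1:]))

print(all(duty_is_feasible(d) for d in range(1, 5)))  # expected output: True
\end{verbatim}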
\vspace{\baselineskip}
\noindent
Nonetheless, certain interesting facts can be gleaned from this example. To start with, we can see that the \texttt{End} activity incorporates the travel leg from the external location back to the MC. Moreover, the trip back to the MC is more often than not shorter than the trip to the external location. This is because the initial travel leg from the MC is usually conducted during rush hour, since the mail needs to be delivered first thing in the morning, whereas the travel leg back to the MC is not particularly rushed and the driver can choose the best time to complete it.
\vspace{\baselineskip}
\noindent
The blocks that occur most often are those like blocks 1-3. They contain the typical sequence of activities: the \texttt{start} of the duty, the \texttt{loading} of the mail, the \texttt{travel leg} to the external location, the \texttt{unloading} of the mail, and a return to the MC to \texttt{end} the duty. In contrast, blocks 4-6 are slightly unusual in their sequence of events. Block 4, for instance, is a \textit{container repatriation} trip: the HGV leaves the MC to pick up containers that were previously left at an external location, in order to bring them back to the MC for use the following day.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% sub-Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Historical Schedule Example}
We now present an isolated version of our problem that concerns only a small part of the overall problem. The idea behind this example is to illustrate for the reader what typically occurs in a Royal Mail MC. More specifically, we present the part of the schedule that dictates the Monday morning operations at an MC. This is illustrated through a Gantt chart in Figure \ref{fig:Schedule-for-exemplary-case}, which highlights a small snippet of what would occur in an MC at the start of a week.
\vspace{\baselineskip}
\noindent
Taking a more detailed look, one can see that this historical schedule is perhaps \textbf{not so balanced}. The majority of drivers appear to work for around 8 hours, such as the first shift ($D_1$), which began at 4:00 AM and concluded at 12:00 PM. For confidentiality purposes, the names of locations, clients, and jobs undertaken by each driver have been omitted and generic placeholder names used instead. Shift $D_1$, for instance, consisted of 4 \textbf{atomic blocks}, hence 4 \textbf{round-trips}. The first atomic block is the largest one in this duty and consisted of visiting 6 external locations after the start of the trip from the MC. Following that, 3 smaller atomic blocks consisting of short round-trips were completed by this driver until the end of their duty.
\vspace{\baselineskip}
\noindent
However, taking a more macro outlook on this schedule, we can see that there are also people who are required to work overtime, which can be costly for the company, such as shift $D_{14}$, which begins at 5:00 AM and lasts until 5:00 PM. This particular shift consisted of \textbf{6 atomic blocks}, of which the first was also fairly long-lasting. It may be possible to obtain a more uniform allocation of blocks per driver, which would result in a more uniform allocation of workload. That is the purpose of the experiments we run in Chapter \ref{chapter: 2-Evaluating Royal Mail Historical Data}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
font=\footnotesize,
ytick style={draw=none},
xtick style={draw=none},
unit vector ratio*=1 1 1,
axis lines = middle,
enlarge x limits = {value=.01,upper},
enlarge y limits = {value=.05,upper},
ylabel={\textbf{Duties}},
xlabel={\textbf{Time (HH:mm)}},
ylabel near ticks,
xlabel near ticks,
const plot,
stack plots=false,
area style,
width=\linewidth,height=\textheight,
ytick={1,...,60},
yticklabels={},
xtick={1,...,24},
extra y ticks={1,2,3,4,5,6,7,8,9,10,11,12,13,14},
extra y tick style={yticklabel={$D_{\pgfmathprintnumber{\tick}}$}}
]
\addplot[fill=yellow] coordinates {(4.00,0) (4.00,1) (8.08,1) (8.08,0) } node at (current path bounding box.center) {B1};
\addplot[fill=orange] coordinates {(8.08,0) (8.08,1) (8.83,1) (8.83,0) } node at (current path bounding box.center) {B2};
\addplot[fill=red!20] coordinates {(8.83,0) (8.83,1) (11.00,1) (11.00,0) } node at (current path bounding box.center) {B3};
\addplot[fill=gray] coordinates {(11.00,0) (11.00,1) (11.92,1) (11.92,0) } node at (current path bounding box.center) {B4};
\addplot[fill=light blue] coordinates {(4.00,1) (4.00,2) (7.42,2) (7.42,1) } node at (current path bounding box.center) {B1};
\addplot[fill=teal] coordinates {(7.42,1) (7.42,2) (12.33,2) (12.33,1) } node at (current path bounding box.center) {B2};
\addplot[fill=yellow!60!black] coordinates {(4.00,2) (4.00,3) (6.08,3) (6.08,2) } node at (current path bounding box.center) {B1};
\addplot[fill=blue!20] coordinates {(6.08,2) (6.08,3) (8.08,3) (8.08,2) } node at (current path bounding box.center) {B2};
\addplot[fill=magenta] coordinates {(8.08,2) (8.08,3) (10.50,3) (10.50,2) } node at (current path bounding box.center) {B3};
\addplot[fill=green!20] coordinates {(10.50,2) (10.50,3) (12.33,3) (12.33,2) } node at (current path bounding box.center) {B4};
\addplot[fill=yellow] coordinates {(4.00,3) (4.00,4) (6.42,4) (6.42,3) } node at (current path bounding box.center) {B1};
\addplot[fill=orange] coordinates {(6.42,3) (6.42,4) (8.92,4) (8.92,3) } node at (current path bounding box.center) {B2};
\addplot[fill=red!20] coordinates {(8.92,3) (8.92,4) (11.33,4) (11.33,3) } node at (current path bounding box.center) {B3};
\addplot[fill=gray] coordinates {(4.00,4) (4.00,5) (5.33,5) (5.33,4) } node at (current path bounding box.center) {B1};
\addplot[fill=light blue] coordinates {(5.33,4) (5.33,5) (7.83,5) (7.83,4) } node at (current path bounding box.center) {B2};
\addplot[fill=teal] coordinates {(7.83,4) (7.83,5) (8.50,5) (8.50,4) } node at (current path bounding box.center) {B3};
\addplot[fill=yellow!60!black] coordinates {(8.50,4) (8.50,5) (11.25,5) (11.25,4) } node at (current path bounding box.center) {B4};
\addplot[fill=blue!20] coordinates {(4.00,5) (4.00,6) (6.22,6) (6.22,5) } node at (current path bounding box.center) {B1};
\addplot[fill=magenta] coordinates {(6.22,5) (6.22,6) (8.72,6) (8.72,5) } node at (current path bounding box.center) {B2};
\addplot[fill=green!20] coordinates {(8.72,5) (8.72,6) (11.88,6) (11.88,5) } node at (current path bounding box.center) {B3};
\addplot[fill=yellow] coordinates {(4.00,6) (4.00,7) (6.58,7) (6.58,6) } node at (current path bounding box.center) {B1};
\addplot[fill=orange] coordinates {(6.58,6) (6.58,7) (9.17,7) (9.17,6) } node at (current path bounding box.center) {B2};
\addplot[fill=red!20] coordinates {(9.17,6) (9.17,7) (11.67,7) (11.67,6) } node at (current path bounding box.center) {B3};
\addplot[fill=gray] coordinates {(4.00,7) (4.00,8) (6.50,8) (6.50,7) } node at (current path bounding box.center) {B1};
\addplot[fill=light blue] coordinates {(6.50,7) (6.50,8) (8.50,8) (8.50,7) } node at (current path bounding box.center) {B2};
\addplot[fill=teal] coordinates {(8.50,7) (8.50,8) (11.67,8) (11.67,7) } node at (current path bounding box.center) {B3};
\addplot[fill=green!20] coordinates {(4.00,8) (4.00,9) (7.50,9) (7.50,8) } node at (current path bounding box.center) {B1};
\addplot[fill=yellow] coordinates {(7.50,8) (7.50,9) (8.83,9) (8.83,8) } node at (current path bounding box.center) {B2};
\addplot[fill=orange] coordinates {(8.83,8) (8.83,9) (10.50,9) (10.50,8) } node at (current path bounding box.center) {B3};
\addplot[fill=red!20] coordinates {(10.50,8) (10.50,9) (12.50,9) (12.50,8) } node at (current path bounding box.center) {B4};
\addplot[fill=gray] coordinates {(4.50,9) (4.50,10) (7.83,10) (7.83,9) } node at (current path bounding box.center) {B1};
\addplot[fill=light blue] coordinates {(7.83,9) (7.83,10) (9.67,10) (9.67,9) } node at (current path bounding box.center) {B2};
\addplot[fill=teal] coordinates {(9.67,9) (9.67,10) (12.17,10) (12.17,9) } node at (current path bounding box.center) {B3};
\addplot[fill=orange] coordinates {(12.17,9) (12.17,10) (14.08,10) (14.08,9) } node at (current path bounding box.center) {B4};
\addplot[fill=yellow!60!black] coordinates {(4.50,10) (4.50,11) (8.42,11) (8.42,10) } node at (current path bounding box.center) {B1};
\addplot[fill=blue!20] coordinates {(8.42,10) (8.42,11) (12.08,11) (12.08,10) } node at (current path bounding box.center) {B2};
\addplot[fill=magenta] coordinates {(12.08,10) (12.08,11) (13.00,11) (13.00,10) } node at (current path bounding box.center) {B3};
\addplot[fill=green!20] coordinates {(4.50,11) (4.50,12) (7.05,12) (7.05,11) } node at (current path bounding box.center) {B1};
\addplot[fill=yellow] coordinates {(7.05,11) (7.05,12) (9.05,12) (9.05,11) } node at (current path bounding box.center) {B2};
\addplot[fill=orange] coordinates {(9.05,11) (9.05,12) (12.88,12) (12.88,11) } node at (current path bounding box.center) {B3};
\addplot[fill=red!20] coordinates {(5,12) (5,13) (7.25,13) (7.25,12) } node at (current path bounding box.center) {B1};
\addplot[fill=gray] coordinates {(7.25,12) (7.25,13) (10.67,13) (10.67,12) } node at (current path bounding box.center) {B2};
\addplot[fill=light blue] coordinates {(10.67,12) (10.67,13) (13.83,13) (13.83,12) } node at (current path bounding box.center) {B3};
\addplot[fill=teal] coordinates {(5.15,13) (5.15,14) (9.08,14) (9.08,13) } node at (current path bounding box.center) {B1};
\addplot[fill=yellow] coordinates {(9.08,13) (9.08,14) (11.50,14) (11.50,13) } node at (current path bounding box.center) {B2};
\addplot[fill=orange] coordinates {(11.50,13) (11.50,14) (12.50,14) (12.50,13) } node at (current path bounding box.center) {B3};
\addplot[fill=red!20] coordinates {(12.50,13) (12.50,14) (13.30,14) (13.30,13) } node at (current path bounding box.center) {B4};
\addplot[fill=gray] coordinates {(13.30,13) (13.30,14) (15.08,14) (15.08,13) } node at (current path bounding box.center) {B5};
\addplot[fill=magenta] coordinates {(15.08,13) (15.08,14) (17.08,14) (17.08,13) } node at (current path bounding box.center) {B6};
\end{axis}
\end{tikzpicture}
\caption{Gantt chart showing \textit{Blocks} ($B_j$) allocated to \textit{Duties} ($D_i$) as in the Monday morning schedule of the MC.}
\label{fig:Schedule-for-exemplary-case}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Dataset Exploration}
\label{section: Data Exploration}
Royal Mail have provided historical data from the Exeter MC that shows the itineraries the HGV drivers follow over a week: a \textbf{model week} that represents the normal \textbf{mode of operation} of a Royal Mail MC over a typical week of the year. The main data points featured in the dataset stem from the Input Parameters outlined in Section \ref{section: Input Parameters}; below we list the attributes associated with each \textit{Input Parameter} within the dataset:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Bullet points %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{itemize}
\item \underline{\textbf{Activity}}: Each unit of activity is described by a \textit{tuple} \textit{a/b/c/d}, where \textit{a} and \textit{b} are the origin and destination locations of the activity, and \textit{c} and \textit{d} are its \texttt{start} and \texttt{end} times. The difference $|d-c|$ therefore gives the \texttt{duration}, or \texttt{processing time}, of the activity.
\item \underline{\textbf{Atomic Interval (block)}}: In the dataset, a \textbf{block} is constituted by a set of \textbf{activities} from Table \ref{table:Activity List}. We identify a block by its first and last activities, and more specifically merely by the \texttt{start} and \texttt{end} times of its \texttt{start/end} activities, since by the definition of a block the locations at its start and end are the MC itself.
%\todo{Figure required showing this thing with first and last start/end activity.}
\item \underline{\textbf{Duty}}: The largest unit of time seen in the dataset, consisting of a collection of \textbf{blocks}. A duty in the dataset corresponds to a \textit{driver} taking the \textit{HGV} assigned to them and completing the required collection of \textbf{blocks} (i.e. round-trips). A minimal data-structure sketch of these three concepts is given after this list.
\end{itemize}
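\vspace{\baselineskip}
\noindent
As a minimal illustration of these three levels of granularity (the field names below are our own shorthand, not the column names of the actual dataset), an activity, a block and a duty could be represented as follows:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Activity:
    origin: str        # a: location where the activity starts
    destination: str   # b: location where the activity ends
    start: float       # c: start time, in hours since midnight
    end: float         # d: end time, in hours since midnight

    @property
    def duration(self) -> float:
        return self.end - self.start   # |d - c|, the processing time

@dataclass
class Block:
    activities: List[Activity] = field(default_factory=list)

    @property
    def window(self) -> Tuple[float, float]:
        # A block is identified by the start time of its first activity and
        # the end time of its last one; both endpoints are at the MC.
        return (self.activities[0].start, self.activities[-1].end)

@dataclass
class Duty:
    blocks: List[Block] = field(default_factory=list)  # the round-trips of one driver
\end{verbatim}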
\vspace{\baselineskip}
\noindent
The historical schedules in the dataset are the current feasible itineraries that Royal Mail operates. After careful analysis of the \textbf{200} duties in the dataset, we can infer that the Exeter MC serves 30 DOs and 38 client locations with a fleet of 65 HGVs in total, split between 17-tonne and 7.5-tonne lorries and 3.5-tonne vans. Moreover, we deduce that the Exeter MC opens for business at 4:00 AM on all weekdays as well as on Saturday. Sunday is an exception, with a single shift commencing at 8:00 AM and finishing at 6:00 PM. The \textbf{absolute latest finishing time} of a shift on weekdays and Saturday is 3:00 AM the following morning, which constitutes the \textit{close of business (COB)} for the day. The tuning of the absolute \textbf{earliest starting time} and \textbf{latest finishing time} are two of the most important optimisation decisions to be made, since opening for business slightly earlier than the current 4:00 AM time set by company policy could result in an overall reduction of the duration of each day's schedule.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[t]
\small
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Schedule} & \multicolumn{3}{|c|}{ \textbf{Characteristics}} & \multicolumn{3}{|c|}{ \textbf{Duties (HH:mm)}} & \textbf{Total Time} \\
\hline
& Duties & Blocks & Activities & Average & Minimum & Makespan & \\
\hline
Original & 200 & 514 & 3,555 & 08:18 & 02:50 & 11:50 & 1,657:51 \\
\hline
\end{tabular}%
\medbreak
\end{table}
\vspace{\baselineskip}
\noindent
Taking a closer look at the idiosyncrasies of the current set of schedules we can extract the following \textbf{insights} for each class of activities:
\vspace{\baselineskip}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Bullet Points %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{enumerate}
\item \underline{\texttt{Start/End}}: Duties tend to begin in a wave-like fashion. We observed three waves of starting times, occurring near the opening of business, midnight and mid-day\footnote{Detailed evidence from the dataset highlighting this phenomenon can be seen in Figure \ref{fig:starting time} of Appendix \ref{subsection: Appendix Starting times}}. The drivers are split into these three groups, with approximately 59 drivers beginning their shift in the \textbf{morning} wave and 61 and 63 in the \textbf{afternoon} and \textbf{night} waves respectively.
\item \underline{\texttt{Travel}}: For each of the travel activities, we are also supplied with data on the mileage covered during the trip, as well as a description of the reason for that leg of the trip (e.g. mail, time-sensitive mail, repatriation of mail containers, etc.). Leveraging this attribute of travel legs, we can see that a certain subset of atomic blocks, containing time-critical tasks that must occur within a certain time-span of the day, tend to take place at the beginning of a duty. These are the blocks containing the \textit{delivery} of post itself, as well as the delivery of time-critical parcels to the airport. In contrast, other blocks that involve non-time-constrained tasks, for example \textit{container repatriation} or an HGV returning with an \textit{empty} cargo bay following a delivery, can be scheduled interchangeably within a shift and are hence more open to optimisation.
\item \underline{\texttt{Load/Unload}}: A \textit{load/unload} activity takes place immediately before and directly after a travel leg, as illustrated in Figure \ref{fig:load/unload}. Although this activity could be considered non-useful time, as it is time not spent on the road, it cannot be optimised away: company policy states that the driver must be present and must not be occupied with any other activity in parallel to a \texttt{load/unload} of the mail units. Consequently, for the purposes of our optimisation problem we consider this to be \textbf{useful} time.
\item \underline{\texttt{Meal-Relief}}: The meal allowance occurs immediately before or after the beginning or end of a block. It is freely interchangeable within the space of a duty, but given that it must occur at the premises of the MC, it always falls at the start or at the end of a block.
\item \underline{\texttt{Processing/Distribution}}: There are cases where a gap occurs between two activities because the finishing time of the prior activity does not coincide with the starting time of the following one. Given that both activities are rigidly scheduled in place and cannot be altered to coincide, drivers are instructed to fill in the time until their next \textit{useful} activity by assisting with administrative tasks that would ideally be carried out by lower-skilled employees. As a result, unnecessary costs are incurred by Royal Mail due to the inefficiency of the schedule. These activities provide us with an opportunity to reduce the makespan of each duty by reducing their rate of occurrence, as seen in Section \ref{section: Redefined Dataset} (a small sketch of how such gaps can be detected is given after this list).
\item \underline{\texttt{Park Vehicle}}: The parking activity occurs four times within the dataset, whenever a \texttt{3.5 Tonne van} arrives at the National Distribution Centre. For the majority of duties, the time taken to complete the parking activity is incorporated in the duration of the \texttt{End} activity; it is only in these four duties that parking is included as a separate activity. As a result, we can incorporate it ourselves into the \texttt{End} activity in those four duties to simplify the dataset as a whole, and accordingly render the \texttt{Park Vehicle} activity as \textbf{non-useful} time.
\item \underline{\texttt{Check}}: Checks occur as standalone activities in a handful of shifts carried out by the \texttt{3.5 Tonne Vans}. In the majority of cases they are included in the time that the \texttt{Start} activity takes, as explained by the description attached to each activity. They therefore occur at the beginning of a duty and tend to take less than 25 minutes. Hence, similarly to the \texttt{Park Vehicle} activity, we can incorporate them into the \texttt{Start} activity and characterise \texttt{Check} as \textbf{non-useful}.
\item \underline{\texttt{Clean}}: These take place at the \textit{beginning} or \textit{end} of blocks, after more critical activities have been completed. For example, if a \texttt{Meal-Relief} and a \texttt{Clean} are both scheduled to take place exactly at the end of a block, priority is given to the \texttt{Meal-Relief}, since it is the more time-sensitive activity. In the majority of cases, cleaning is not entered as a standalone entry and is described as part of the time taken for the \texttt{Start} or \texttt{End} activities.
\end{enumerate}
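\vspace{\baselineskip}
\noindent
The gap detection mentioned for the \texttt{Processing/Distribution} activities can be sketched as follows (the times in the example are illustrative, not taken from the dataset): given the activities of a duty in chronological order, the idle intervals that drivers currently fill with administrative work are simply the gaps between the end of one activity and the start of the next.
\begin{verbatim}
def idle_gaps(activities, min_gap=0.0):
    """Return (gap_start, gap_end) pairs between consecutive activities.

    `activities` is a chronologically sorted list of (start, end) times in
    decimal hours; only gaps longer than `min_gap` hours are reported.
    """
    gaps = []
    for (_, prev_end), (next_start, _) in zip(activities, activities[1:]):
        if next_start - prev_end > min_gap:
            gaps.append((prev_end, next_start))
    return gaps

# Illustrative duty with a 20-minute gap between 08:30 and 08:50.
print(idle_gaps([(4.0, 6.5), (6.5, 8.5), (8.83, 11.0)]))  # [(8.5, 8.83)]
\end{verbatim}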
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\centering
\begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}]
\node (A) at (0,0){};
\node (B) at (4,0) [square,draw,label=above:{Load/Unload}] {MC/DO};
\draw[thick,->] (A) -- (B) node[midway,above] {Travel};
\node (C) at (8,0){};
\draw[thick,->] (B) -- (C) node[midway,above] {Travel};
\end{tikzpicture}
\caption{Illustration of the pattern of activities that directly precede and succeed any travel leg of an atomic interval.}
\label{fig:load/unload}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Data Cleaning}
\label{section: Data Cleaning}
The quality and effectiveness of the schedules we hope to generate will depend significantly on the quality of the dataset from which they are derived. Consequently, an extensive process to clean the dataset of outliers and general noise is undertaken, with the goal of filtering the schedules down to the most frequently occurring duties, which accurately depict what typically occurs in an MC, so that an optimisation would really make a difference to Royal Mail.\par
\vspace{\baselineskip}
\noindent
To keep the process reproducible, the specific steps taken to clean the dataset are outlined below:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Bullet points %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\vspace{\baselineskip}
\begin{enumerate}[label=\textbf{\arabic*}.]
\item \underline{\textbf{Elimination of unwanted attributes}}: The dataset contained numerous attributes attached to each activity that are utilised in accordance with internal company policy. For instance, some of those attributes contained information regarding the level of driving skill required for each duty. However, our problem operates within the context of \textit{multiprocessor scheduling}, where all machines are considered identical, and is hence agnostic to the level of skill of the HGV drivers. We view each task as if it is processed by a machine, so we choose to neglect the information provided by those fields and focus merely on scheduling \textbf{legal} and \textbf{feasible} duties. Consequently, the following fields were eliminated from our dataset: \texttt{driver\_grade, sort\_order, date\_amended} (this step also appears in the sketch after this list). A more detailed list of the attributes contained in the dataset can be seen in Appendix \ref{section: Appendix Activities Feaure in the Dataset}.
\item \underline{\textbf{Incorporation of sparsely occurring activities into \texttt{Start} and \texttt{End} activities}}: Within the dataset, we observed that the parking activity occurs on its own in only a handful of cases and is incorporated into the \texttt{End} activity in the majority of cases (as indicated by the description of the task undertaken that is attached to some of the activities). Hence, to avoid focusing on \textbf{unwanted} noise, we chose to incorporate the duration of the parking activity into the \texttt{End} activity for those cases. In a similar fashion we incorporate \texttt{Clean} and \texttt{Check} into the \texttt{Start} and \texttt{End} activities as appropriate.
\item \underline{\textbf{Standardisation of standalone entries}}: In a small subset of cases, an extra level of detail is added to the duties, specifying whether a \texttt{load} or an \texttt{unload} activity is being undertaken, as opposed to labelling it as a more general \texttt{load/unload} operation. Given that this extra layer of detail does not add any extra level of understanding to the problem, we choose to label all separate \texttt{load} and \texttt{unload} operations as the more abstract \texttt{load/unload} activity. The same procedure is carried out for separate entries of \texttt{Processing} and \texttt{Distribution} activities, the distinction between which is similarly of no significance to us.\par
\item \underline{\textbf{Neglected duties that involve trips to out-of-scope Royal Mail premises}}: Our focus is on optimising the scheduling of the routes that occur between the Exeter MC and the DOs, as well as the clients for which it is responsible. Duties that involve trips to the \textit{Distribution Centres} and other MCs are out of our scope for optimisation, hence we do not consider trips to those locations.\par
\item \underline{\textbf{Elimination of Outliers}}: We did not consider entries that involved duties finishing after the \textit{close of business (COB)}, as we assumed they involve out-of-the-ordinary situations where duties run late and hence do not reflect normal business practice.\par
\item \underline{\textbf{Neglected data related to Time-Constrained mail}}: We ignore data related to \textbf{time-constrained packages}, as the modelling of such blocks is out of scope for this project. This decision was made in conjunction with the Royal Mail Data Science group, since the study of such specific trips was not the top priority at the time. The reasoning behind this action is that for the purposes of our project we want total freedom in moving the blocks, whereas these duties reflect \textbf{fixed blocks} and hence represent constraints on our optimisation problem. In practice, to avoid considering such trips in the dataset used for the modelling portion of the project, we choose to neglect duties that involve trips to Exeter Airport, as it is such trips that often have a time limit attached to them.\par
\item \underline{\textbf{Neglected flawed data entries}}: In a handful of duties, no atomic block was present; instead, activities were scheduled without any \texttt{Travel} leg constituting the \textit{round-trip} of an \textbf{atomic block}. Such cases were also considered abnormal entries and were not taken into consideration.\par
\end{enumerate}
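\vspace{\baselineskip}
\noindent
The filtering logic of the steps above can be summarised by the following sketch (the field names, location labels and COB value are illustrative assumptions rather than the exact dataset schema; step 2, the merging of sparse activities into \texttt{Start} and \texttt{End}, is omitted for brevity):
\begin{verbatim}
COB = 27.0  # close of business: 3:00 AM of the following day (24 + 3 hours)

DROPPED_ATTRIBUTES = {"driver_grade", "sort_order", "date_amended"}
OUT_OF_SCOPE = {"Distribution Centre", "Other MC", "Exeter Airport"}

def clean(duties):
    """Apply the cleaning steps of this section to a list of duty records."""
    kept = []
    for duty in duties:
        # Steps 4 and 6: skip duties visiting out-of-scope or time-constrained
        # locations (airport trips carry the time-constrained mail).
        if any(a["destination"] in OUT_OF_SCOPE for a in duty["activities"]):
            continue
        # Step 5: skip outlier duties finishing after close of business.
        if duty["end"] > COB:
            continue
        # Step 7: skip flawed duties that contain no Travel leg at all.
        if not any(a["type"] == "Travel" for a in duty["activities"]):
            continue
        for a in duty["activities"]:
            # Step 1: drop the attributes our model is agnostic to.
            for attr in DROPPED_ATTRIBUTES:
                a.pop(attr, None)
            # Step 3: collapse the load/unload distinction.
            if a["type"] in ("Load", "Unload"):
                a["type"] = "Load/Unload"
        kept.append(duty)
    return kept
\end{verbatim}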
\vspace{\baselineskip}
\noindent
The resulting \textbf{Finalised Dataset (Cleaned)}, obtained after subjecting the original dataset to the aforementioned data cleaning processes, schedules the activities in Table \ref{table:Final Activity List} and has the following characteristics:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Schedule} & \multicolumn{3}{|c|}{ \textbf{Characteristics}} & \multicolumn{3}{|c|}{ \textbf{Duties (HH:mm)}} & \textbf{Total Time} \\
\hline
& Duties & Blocks & Activities & Average & Minimum & Makespan & \\
\hline
Original & 200 & 514 & 3,555 & 08:18 & 02:50 & 11:50 & 1,657:51 \\
\hline
Cleaned & 183 & 462 & 3,285 & 07:50 & 02:50 & 11:50 & 1,435:22 \\
\hline
\end{tabular}%
\medbreak
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Table containing the types of activities
\begin{table}[ht]
\small
\centering
\begin{tabular}{|l|p{8.3cm}|}
\hline
\multicolumn{1}{|c|}{ \textbf{Activity}} & \multicolumn{1}{|c|}{ \textbf{Description}} \\
\hline
\texttt{Start/End} & Indicates the \textit{beginning}, and \textit{end} of a duty. \\
\hline
\texttt{Travel} & The \textit{travel leg} from one location to the next. \\
\hline
\multirow{2}*{\texttt{Load/Unload}} & The \textit{loading} and \textit{offloading} of mail units before leaving or after arriving at a designated location, respectively. \\
\hline
\multirow{2}*{\texttt{Meal-Relief}} & The \textit{meal allowance} break to meet EU \textit{driving time} regulations. \\
\hline
\texttt{Distribution/Processing} & Non-essential administrative tasks. \\
\hline
\texttt{Park Vehicle} & \textit{Parking} of HGV at end of duty. \\
\hline
\texttt{Check} & Scheduled \textit{servicing} of HGV. \\
\hline
\texttt{Clean} & Scheduled \textit{cleaning} of HGV. \\
\hline
\end{tabular}%
\medbreak
\caption{List of activities in the \textbf{Finalised Dataset (Cleaned)}.}
\label{table:Final Activity List}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Eliminating Redundant Activities}
\label{section: Redefined Dataset}
The finalised dataset presented above will be used to solve models that provide us with a first level of understanding of the opportunities for optimisation. We then decided to artificially create some more space for optimisation in the historical data by implementing the first step of the procedure followed daily by Royal Mail scheduling operators to create the actual schedules used in MCs. That step involves neglecting activities within the atomic blocks that involve the completion of tasks considered \textbf{non-useful} time for the drivers. The activities are generally classified into two categories, \textit{useful} and \textit{non-useful}. \textit{Useful} time for \textit{HGV} drivers, from the perspective of Royal Mail, is time related to activities during which the driver \textbf{must be present} and \textbf{in control} of the task (e.g. activities involving driving). On the contrary, \textit{non-useful} time refers to activities that are put in place as padding between two successive \textit{useful} activities that have been scheduled to start at particular times. To fill the time between those \textit{useful} activities, drivers end up helping with other activities that are primarily designed for employees of other specialties.
\vspace{\baselineskip}
\noindent
The classification was performed based on the frequency with which activities occur in the dataset. Sparsely occurring activities were labelled \textit{non-useful}, as they do not carry the bulk of the workload and were hence deemed non-critical. As a result, the activities were distinguished in accordance with Table \ref{table: Useful vs Non-Useful Activities}. The reasoning behind distinguishing the activities in this manner is to signify which activities must be maintained in place in our quest to allocate as much of the time available within a shift as possible to \textit{useful} activities (i.e. to minimise the \textit{non-useful} time). This operation gives us the necessary space, or \textbf{idle time}, within the historical duties to re-schedule the blocks in a more efficient manner. We now have a revised historical dataset, which we refer to as the \textbf{Redefined Historical Schedules}.
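\vspace{\baselineskip}
\noindent
A minimal sketch of this reclassification (the activity names follow Table \ref{table: Useful vs Non-Useful Activities}; the example duty is illustrative):
\begin{verbatim}
USEFUL = {"Start/End", "Travel", "Load/Unload", "Meal Relief"}
NON_USEFUL = {"Distribution/Processing", "Park Vehicle", "Check", "Clean"}

def redefine(duty_activities):
    """Drop non-useful activities and report the idle time gained, in hours."""
    kept = [a for a in duty_activities if a["type"] in USEFUL]
    gained = sum(a["end"] - a["start"]
                 for a in duty_activities if a["type"] in NON_USEFUL)
    return kept, gained

# Illustrative duty in which one hour of Distribution work is removed.
_, gained = redefine([
    {"type": "Start/End", "start": 4.0, "end": 4.5},
    {"type": "Travel", "start": 4.5, "end": 6.0},
    {"type": "Distribution/Processing", "start": 6.0, "end": 7.0},
])
print(gained)  # 1.0
\end{verbatim}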
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Double Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}[ht]
\begin{floatrow}
\ffigbox{%
{\begin{ganttchart}[
x unit=0.5cm,
y unit chart=0.5cm,
canvas/.style={draw=none,fill=none}, % remove canvas borders, etc
vgrid={*1{draw=black!12}}, % vertical gray lines every unit
inline, % draw bars inline
group/.style={draw=none,fill=none}, % remove group borders, etc
bar top shift=0.1, % give bar 10% padding top/bottom
bar height=0.8, % bar size 80% of vertical space
y unit title=0.5cm, % crop titles a little smaller
title/.style={draw=none,fill=none}, % remove title borders, etc
include title in canvas=false % no vertical grid in title
]{-1}{12} % limits of time axis
\gantttitle{0}{2}
\gantttitle{2}{2}
\gantttitle{4}{2}
\gantttitle{6}{2}
\gantttitle{8}{2}
\gantttitle{10}{2}
\gantttitle{12}{2} \\
%fake schedule line to center the main one
\ganttgroup[inline=false]{}{0}{1}
\ganttbar[bar/.style={fill=yellow, opacity=0}]{}{2}{5} \\
%real schedule line
\ganttgroup[inline=false]{$D_{1}$}{0}{1}
\ganttbar[bar/.style={fill=light blue}]{1}{0}{2} %first {} contains the number displayed on the cell, the other two give the bar's start and end
\ganttbar[bar/.style={fill=otherbluegantt}]{2}{3}{3.5}
\ganttbar[bar/.style={fill=light blue}]{3}{4}{7}
\ganttbar[bar/.style={fill=otherbluegantt}]{4}{8}{9}
\ganttbar[bar/.style={fill=light blue}]{5}{10}{11} \\
%2nd schedule line
\ganttgroup[inline=false]{$D_{2}$}{0}{1}
\ganttbar[bar/.style={fill=light blue}]{1}{1}{3} %first {} contains the number displayed on the cell, the other two give the bar's start and end
\ganttbar[bar/.style={fill=light blue}]{2}{4}{5}
\ganttbar[bar/.style={fill=otherbluegantt}]{3}{6}{8}
\ganttbar[bar/.style={fill=light blue}]{4}{9}{9.5} \\
%3rd schedule line
\ganttgroup[inline=false]{$D_{3}$}{0}{1}
\ganttbar[bar/.style={fill=light blue}]{1}{1}{4} %first {} contains the number displayed on the cell, the other two give the bar's start and end
\ganttbar[bar/.style={fill=otherbluegantt}]{2}{5}{6}
\ganttbar[bar/.style={fill=light blue}]{3}{7}{9}
\ganttbar[bar/.style={fill=light blue}]{4}{10}{11} \\
%fake schedule line to center the main one
\ganttgroup[inline=false]{}{0}{1}
\ganttbar[bar/.style={fill=yellow, opacity=0}]{}{2}{5} \\
\node[fill=light blue,draw] at ([xshift=-30pt, yshift=-40pt]current bounding box.south){Useful};
\node[fill=otherbluegantt,draw] at ([xshift=+30pt,yshift=+7.2pt]current bounding box.south){Non-useful};
\end{ganttchart}}
}{%
\caption{Gantt chart showing the split of \textit{useful} and \textit{non-useful} activities inside each block of each duty $D_{i}$.}%
}
\capbtabbox{%
\begin{tabular}{|l|c|}
\hline
\textbf{Activity} & \textbf{Useful Time} \\
\hline
\texttt{Start/End} & \cmark \\
\hline
\texttt{Travel} & \cmark\\
\hline
\texttt{Load/Unload} & \cmark\\
\hline
\texttt{Meal Relief} & \cmark\\
\hline
\texttt{Distribution/Processing} & \xmark\\
\hline
\texttt{Park Vehicle} & \xmark\\
\hline
\texttt{Check} & \xmark\\
\hline
\texttt{Clean} & \xmark\\
\hline
\end{tabular}
}{%
\caption{Classification of activities as \textit{useful} or \textit{non-useful}.}%
\label{table: Useful vs Non-Useful Activities}
}
\end{floatrow}
\end{figure}
\vspace{\baselineskip}
\noindent
By deleting these activities we observe an initial \textit{step change} reduction in the overall labour hours of the new schedules. This systemic change\footnote{Interested readers can see it in more detail in Figure \ref{fig: Redefined Historical.} of Appendix \ref{chapter: second appendix}} is obviously not due to the efficiency of the new schedules but due to the deletion of the redundant activities. As a result, we must not credit it when determining the quality of our new schedules, but actively separate it out when evaluating their performance. In total, this operation results in a 435-hour reduction in total labour hours, as well as a reduction of 1 hour and 44 minutes in the average duration of a shift. This systematic reduction of \textit{occupied time} constitutes the space for optimisation that our models can then exploit to provide even better schedules.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Schedule} & \multicolumn{3}{|c|}{ \textbf{Characteristics}} & \multicolumn{3}{|c|}{ \textbf{Duties (HH:mm)}} & \textbf{Total Time} \\
\hline
& Duties & Blocks & Activities & Average & Minimum & Makespan & \\
\hline
Original & 200 & 514 & 3,555 & 08:18 & 02:50 & 11:50 & 1,657:51 \\
\hline
Cleaned & 183 & 462 & 3,285 & 07:50 & 02:50 & 11:50 & 1,435:22 \\
\hline
Redefined & 183 & 462 & 2,850 & 06:06 & 02:00 & 10:45 & 1,118:58 \\
\hline
\end{tabular}%
\medbreak
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Table %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{table}[h]
\small
\centering
\begin{tabular}{c|c}
\textbf{Idle Time Gained (HH:mm) per Duty} & \textbf{Overall Time Reduction (hours)} \\
\hline
01:44 & 435 \\
\end{tabular}
\end{table}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Section %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Departure Waves} %we saw that things start around the same time, so we translate this to our instances so that things only start within that instance.
\label{section: Wave Instances - Data}
The final modification to the finalised dataset was made after discussions with our industrial liaison, in order to establish a \textbf{more practical} and \textbf{realistic} direction for our study. To achieve that, we decided to implement a new kind of constraint that limits the space within which we can move a block. This philosophy stems from company policy that is in place to protect the fleet of drivers belonging to an MC, who tend to have set up their lives around their jobs. Hence, people are in favour of a \textbf{consistent starting time}, which means, for instance, that people who have historically started their duties in the morning will be unwilling to change to a night start\footnote{This phenomenon was also observed when analysing the dataset in Section \ref{section: Data Exploration}.}.
\vspace{\baselineskip}
\noindent
For the purposes of our model, this translates to assigning a certain \textit{degree of freedom}, or \textbf{slack}, to atomic blocks, within which they are allowed to move. More specifically, a job that historically started at time $t$ is now given a \textit{window} within which we can change its starting time. That window is the length of a \textbf{wave}, where waves are the clusters during which duties tend to start. There are three waves observed in the dataset: a \texttt{morning}, an \texttt{afternoon} and a \texttt{night} wave, as seen in Figure \ref{fig:starting time} of Appendix \ref{subsection: Appendix Starting times}.
\vspace{\baselineskip}
\noindent
Our quest to model this more pragmatic and realistic version of our problem motivates forming instances of the problem in which only a single wave is concerned. To create those instances, we take the \textbf{Finalised} dataset outlined in Section \ref{section: Data Cleaning} and split it into \textbf{three sub-instances}, each representing one of the \textbf{three waves} observed in Figure \ref{fig:starting time} in Appendix \ref{subsection: Appendix Starting times}. The effect of splitting our dataset into \textbf{three separate} and \textbf{autonomous} instances is observed in Figure \ref{fig: Wave-instances.}.
\vspace{\baselineskip}
\noindent
All in all, this results in a minor transformation of our problem. We are still able to freely decide which \textit{round-trip} gets assigned to which duty, but we may only choose from the smaller pool of blocks within each wave instance. Hence, in contrast to the previous cases, a block is prohibited from being given a \texttt{start} time that belongs to a different wave.
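\vspace{\baselineskip}
\noindent
A sketch of the split into wave instances follows; the cut-off times used below are assumptions for illustration only, the actual waves being those observed in Figure \ref{fig:starting time} of Appendix \ref{subsection: Appendix Starting times}.
\begin{verbatim}
def wave_of(start_time):
    """Assign a duty to a wave instance based on its historical start time.

    The 04:00-12:00 / 12:00-20:00 / 20:00-04:00 cut-offs are illustrative.
    """
    hour = start_time % 24
    if 4 <= hour < 12:
        return "morning"
    if 12 <= hour < 20:
        return "afternoon"
    return "night"

def split_into_instances(duties):
    """Split the cleaned duties into three autonomous wave instances."""
    instances = {"morning": [], "afternoon": [], "night": []}
    for duty in duties:
        instances[wave_of(duty["start"])].append(duty)
    return instances
\end{verbatim}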
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Figure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure}
\begin{center}
\includegraphics[width=0.46\linewidth]{[1] - chapter/Image Files/Waves-instances.png}
\end{center}
\caption{The effect of splitting the Historical dataset into \textbf{wave instances} of the problem.}
\label{fig: Wave-instances.}
\end{figure}
\vspace*{4in}
% ==================================================
% FSE demo paper
% 4 pages in the ACM proceedings format, including all text, references and figures.
%
% The paper must communicate clearly the following information to the audience:
% 1. the envisioned users
% 2. the software engineering challenge it proposes to address;
% 3. the methodology it implies for its users
% 4. the results of validation studies already conducted for mature tools, or the design of planned studies for early prototypes.
% The paper must be accompanied by a short video (between 3 and 5 minutes long)
%
% Paper submission : June 29, 2018
% Author notification : July 16, 2018
% Camera-ready deadline : July 30, 2018
% ==================================================
\documentclass[sigconf, screen]{acmart}
%\settopmatter{authorsperrow=4}
%\pdfpagewidth=8.5in
%\pdfpageheight=11in
\input{macros}
%\setcopyright{rightsretained}
\setcopyright{acmlicensed}
%\setcopyright{acmcopyright}
\acmPrice{15.00}
\acmDOI{10.1145/3236024.3264585}
\acmYear{2018}
\copyrightyear{2018}
\acmISBN{978-1-4503-5573-5/18/11}
\acmConference[ESEC/FSE '18]{Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering}{November 4--9, 2018}{Lake Buena Vista, FL, USA}
\acmBooktitle{Proceedings of the 26th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '18), November 4--9, 2018, Lake Buena Vista, FL, USA}
% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}
\begin{document}
\title{Augmenting Stack Overflow with API Usage Patterns Mined from GitHub}
%\author{Anastasia Reinhardt\textsuperscript{1,2}\footnotemark\,\,Tianyi Zhang\textsuperscript{1}\footnotemark\,\,Mihir Mathur\textsuperscript{1}\,\,Miryung Kim\textsuperscript{1}}
%\authornote{Work done as an intern at University of California, Los Angeles.}
%\authornote{Corresponding author.}
%\affiliation{\textsuperscript{1}University of California, Los Angeles}
%\affiliation{\textsuperscript{2}George Fox University}
%\affiliation{[email protected], \{tianyi.zhang, miryung\}@cs.ucla.edu, [email protected]}
\author{Anastasia Reinhardt}
\authornote{Work done as an intern at University of California, Los Angeles.}
\affiliation{
\institution{George Fox University}
\city{Newberg}
\state{Oregon}
\country{U.S.}
}
\email{[email protected]}
\author{Tianyi Zhang}
\authornote{Corresponding author.}
\affiliation{
\institution{University of California, Los Angeles}
\city{Los Angeles}
\state{California}
\country{U.S.}
}
\email{[email protected]}
\author{Mihir Mathur}
\affiliation{
\institution{University of California, Los Angeles}
\city{Los Angeles}
\state{California}
\country{U.S.}
}
\email{[email protected]}
\author{Miryung Kim}
\affiliation{
\institution{University of California, Los Angeles}
\city{Los Angeles}
\state{California}
\country{U.S.}
}
\email{[email protected]}
%\renewcommand{\authors}{Anastasia Reinhart, Tianyi Zhang, Mihir Mathur, and Miryung Kim}
%\renewcommand{\shortauthors}{Anastasia Reinhart, Tianyi Zhang, Mihir Mathur, and Miryung Kim}
\title[Augmenting Stack Overflow with API Usage Patterns from GitHub]{Augmenting Stack Overflow with API Usage Patterns Mined from GitHub}
% ==================================================
% abstract
% ==================================================
\input{abstract}
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10011007.10010940.10011003.10011004</concept_id>
<concept_desc>Software and its engineering~Software reliability</concept_desc>
<concept_significance>300</concept_significance>
</concept>
%<concept>
%<concept_id>10011007.10011074.10011134</concept_id>
%<concept_desc>Software and its engineering~Collaboration in software development</concept_desc>
%<concept_significance>300</concept_significance>
%</concept>
<concept>
<concept_id>10011007.10011006.10011066.10011069</concept_id>
<concept_desc>Software and its engineering~Integrated and visual development environments</concept_desc>
<concept_significance>300</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}
\ccsdesc[300]{Software and its engineering~Software reliability}
%\ccsdesc[300]{Software and its engineering~Collaboration in software development}
\ccsdesc[300]{Software and its engineering~Integrated and visual development environments}
\keywords{online Q\&A forum, API usage pattern, code assessment}
\maketitle
% ==================================================
% Introduction
% ==================================================
\input{intro}
% ==================================================
% Approach
% ==================================================
\input{impl}
% ==================================================
% Motivating Example and Tool Features
% ==================================================
\input{motive}
% ==================================================
% Related Work
% ==================================================
\input{related}
% ==================================================
% Summary
% ==================================================
\input{summary}
\section*{Acknowledgment}
Participants in this project are supported by AFRL grant FA8750-15-2-0075, NSF grants CCF-1764077, CCF-1527923, CCF-1460325, CCF-1723773, ONR grant N00014-18-1-2037 and a gift from Huawei.
\balance
\bibliographystyle{ACM-Reference-Format}
\bibliography{examplecheck}
\end{document}
\documentclass{article}
\usepackage{fullpage}
\usepackage{amsmath}
\usepackage{tikz}
\usepackage{fancyhdr}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}
\cfoot{\sc Page \thepage\ of \pageref{end}}
\begin{document}
{\large \noindent{}University of Toronto at Scarborough\\
\textbf{CSC A67/MAT A67 - Discrete Mathematics, Fall 2015}}
\section*{\huge Exercise \#1: Counting/Arrangements}
{\large Due: September 19, 2015 at 11:59 p.m.\\
This assignment is worth 3\% of your final grade.}\\[1em]
\textbf{Warning:} Your electronic submission on MarkUs affirms that this exercise is your own work and no
one else's, and is in accordance with the University of Toronto Code of Behaviour on Academic Matters,
the Code of Student Conduct, and the guidelines for avoiding plagiarism in CSC A67/MAT A67.\\[1ex]
This exercise is due by 11:59 p.m. September 19. If you haven't finished by then, you may hand in your
exercise late with a penalty as specified in the course information sheet.\\[1ex]
\renewcommand{\labelenumi}{\arabic{enumi}.}
\renewcommand{\labelenumii}{(\alph{enumii})}
\begin{enumerate}
\item Ruth has the following set of refrigerator magnets: \{A, B, C, D, E, F, G\}\marginpar{[3]}
\begin{enumerate}
\item How many different three-letter strings can she form with these magnets?
\item How many different three-letter strings can she form if the middle letter must be a vowel?
\end{enumerate}
\item The school board consists of three men and four women. When they hold a meeting, they sit in a row.\marginpar{[7]}
\begin{enumerate}
\item How many different seating arrangements are there?
\item How many different ways can the row be arranged if no two women sit next to each other?
\item How many ways are there to select a subcommittee of four board members?
\item How many ways are there to select a subcommittee of four board members if the subcommittee must contain at least two women?
\end{enumerate}
\item There are nine empty seats in a theatre, and five customers need to find places to sit. How many different ways can these five seat themselves?\marginpar{[2]}
\item How many solutions (using only non-negative integers) are there to the following equation?\marginpar{[2]}
\begin{equation*}
x_1+x_2+x_3+x_4+x_5+x_6+x_7=20
\end{equation*}
\item The North South Line of the Singapore Mass Rapid Transit system has 25 stations. How many ways are there to divide this line into three segments, where each segment contains at least one station? (One possible such division is shown below.)\marginpar{[2]}\\[1em]
\begin{tikzpicture}[scale=0.6,station/.style={fill,circle,minimum size=2mm}]
\draw (0,0) node[station,fill=blue] {} \foreach \x in {1,...,5} { -- (\x,0) node[station,fill=blue] {}} -- (6.5,0);
\draw (6.5,0) node[station,fill=red] {} \foreach \x in {7.5,...,14.5} { -- (\x,0) node[station,fill=red] {}} -- (16,0);
\draw (16,0) node[station,fill=yellow] {} \foreach \x in {17,...,25} { -- (\x,0) node[station,fill=yellow] {}};
\draw (2.5,-.75) node {Segment \#1};
\draw (10.5,-.75) node {Segment \#2};
\draw (20.5,-.75) node {Segment \#3};
\end{tikzpicture}
\item If you are making a salad, and you must choose\marginpar{[2]}
\begin{itemize}
\item 1 of 2 dressings,
\item at least 1 of 3 kinds of lettuce, and
\item up to 4 other ingredients (tomatoes, cucumbers, etc.),
\end{itemize}
how many different salad combinations can you make?
\end{enumerate}
\hrulefill\\
\noindent[Total: 18 marks]\label{end}
\end{document}
"alphanum_fraction": 0.7332180928,
"avg_line_length": 52.5909090909,
"ext": "tex",
"hexsha": "5bf9e356a651d4ee342ee5038da77f4c018f1829",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2022-02-09T05:34:01.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-02-11T13:35:28.000Z",
"max_forks_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "ozhanghe/ozhanghe.github.io",
"max_forks_repo_path": "teaching/resources/Counting-Arrangements-Exercise.tex",
"max_issues_count": 11,
"max_issues_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa",
"max_issues_repo_issues_event_max_datetime": "2020-01-18T03:30:18.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-06-05T03:48:15.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "ozhanghe/ozhanghe.github.io",
"max_issues_repo_path": "teaching/resources/Counting-Arrangements-Exercise.tex",
"max_line_length": 268,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "7b58b8e325da2c788c4dd7cf5bec4d08d77c24fa",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "ozhanghe/ozhanghe.github.io",
"max_stars_repo_path": "teaching/resources/Counting-Arrangements-Exercise.tex",
"max_stars_repo_stars_event_max_datetime": "2020-04-23T17:23:00.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-04-23T17:23:00.000Z",
"num_tokens": 1055,
"size": 3471
} |
%----------------------------------------------------------------------------
% Magic Addendum: Version 6.5 differences
%----------------------------------------------------------------------------
\NeedsTeXFormat{LaTeX2e}[1994/12/01]
\documentclass[letterpaper,twoside,12pt]{article}
\usepackage{epsfig,times}
\setlength{\textwidth}{8.5in}
\addtolength{\textwidth}{-2.0in}
\setlength{\textheight}{11.0in}
\addtolength{\textheight}{-2.0in}
\setlength{\oddsidemargin}{0in}
\setlength{\evensidemargin}{0pt}
\setlength{\topmargin}{-0.5in}
\setlength{\headheight}{0.2in}
\setlength{\headsep}{0.3in}
\setlength{\topskip}{0pt}
\def\hinch{\hspace*{0.5in}}
\def\starti{\begin{center}\begin{tabbing}\hinch\=\hinch\=\hinch\=\hinch\hinch\=\kill}
\def\endi{\end{tabbing}\end{center}}
\def\ii{\>\>\>}
\def\mytitle{Magic Addendum: Version 6.5 differences}
%----------------------------------------------------------------------------
\begin{document}
\makeatletter
\newcommand{\ps@magic}{%
\renewcommand{\@oddhead}{\mytitle\hfil\today}%
\renewcommand{\@evenhead}{\today\hfil\mytitle}%
\renewcommand{\@evenfoot}{\hfil\textrm{--{\thepage}--}\hfil}%
\renewcommand{\@oddfoot}{\@evenfoot}}
\newcommand{\ps@mplain}{%
\renewcommand{\@oddhead}{}%
\renewcommand{\@evenhead}{}%
\renewcommand{\@evenfoot}{\hfil\textrm{--{\thepage}--}\hfil}%
\renewcommand{\@oddfoot}{\@evenfoot}}
\makeatother
\pagestyle{magic}
\thispagestyle{mplain}
\begin{center}
{\bfseries \Large \mytitle} \\
\vspace*{0.5in}
{\itshape Stefanos Sidiropoulos} \\
\vspace*{0.5in}
Center for Integrated Systems \\
Stanford University \\
Stanford, CA 94305 \\
\vspace*{0.25in}
This tutorial corresponds to Magic version 7. \\
\end{center}
\vspace*{0.5in}
{\noindent\bfseries\large Affected Documents:}
\starti
\> Magic Tutorial \#6: Design-Rule Checking \\
\> Magic Tutorial \#9: Format Conversion for CIF and Calma \\
\> Magic Tutorial \#W-1: Design-Rule Extensions \\
\> Magic Maintainer's Manual \#2: The Technology File \\
\> Magic man pages: ext2sim(1), ext2spice(1), extflat(3), ext(5).
\endi
\vspace*{0.25in}
\section{Introduction}
Magic 6.5 has some significant modifications that make some of the
original version 6 documents obsolete. The purpose of this addendum
is to highlight these differences so that users can take advantage
of the new features.
\section{Extractor Extensions}
The 6.5 extractor uses double precision floating point numbers
to represent capacitances. Therefore all the capacitances
in (aF/sq-lambda) associated with
the {\itshape areacap, perimc, sidewall, sideoverlap} keywords
in the extract section of the technology file can be {\itshape floating point
numbers}.
Additionally the extension of the capacitance to floating point numbers
affects the manual pages of ext2sim(1), ext2spice(1), extflat(3), ext(5)
which can be found on your local system under CAD{\_}HOME/man.
The 6.5 extractor shields the perimeter capacitance from layer to layer.
To facilitate this, two new commands, {\itshape planeorder} and
{\itshape noplaneordering}, have been introduced, and the {\itshape sideoverlap}
command has been modified.
The syntax for the new commands is:
\starti
\ii {\bfseries planeorder} {\itshape plane num }
\endi
Where {\itshape plane} is one of the defined planes and {\itshape num} is a positive
integer indicating the ordering of this plane from lower to higher. So for
example the metal1 plane has order 3 while metal2 has order 4.
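For instance, assuming the planes are the ones conventionally named in the
scmos technology files, the two orderings just mentioned would be declared
in the extract section as:
\starti
\ii {\bfseries planeorder} metal1 3 \\
\ii {\bfseries planeorder} metal2 4
\endi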
In case you don't want to specify the order of the planes, the extractor
will complain and assume a default one. If you want to suppress the
warning, you just have to issue the keyword:
\starti
\ii {\bfseries noplaneordering }
\endi
The {\itshape sideoverlap} keyword syntax has been altered to:
\starti
\ii {\bfseries sideoverlap} {\itshape intypes outtypes ovtypes cap shieldtypes}
\endi
where {\itshape intypes}, {\itshape outtypes}, and {\itshape ovtypes} are type-lists
and {\itshape cap} is capacitance in attofarads per lambda.
This is the capacitance associated with an edge with a type
in {\itshape intypes} on its inside and a type in {\itshape outtypes} on
its outside, that overlaps a tile whose type is in {\itshape ovtypes}.
If {\itshape shieldtypes} is present, however, the capacitance is shielded by those types.
For example, to shield the metal-2 to poly capacitance, use:
\starti
\ii {\bfseries sideoverlap} M2Cap \~{}M2Cap PolyCap 19.41 M1Cap
\endi
\section{DRC Extensions}
This version includes code fragments implemented at DEC-WRL by Don Stark
which make it possible to implement more complicated DRC rules. For a description
of these enhancements, see Magic Tutorial \#W-1, which can be
found in the file doc/tutwrl1.ps under the magic source tree.
\section{CIF extensions}
Two new commands have been integrated into the cif output section, courtesy
of Steven Tell and Fred Heaton at UNC.
The first new command
enables the generation of DRC-correct layers at the top level (such as
the nwell in the SCMOS tech files). Its syntax is:
\starti
\ii {\bfseries min-width} {\itshape width }
\endi
The width argument is in centimicrons. This command should be specified
within a layer sub-section of the cifoutput section of the technology
file.
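For example, a layer sub-section of the cifoutput section might contain
(the value shown is illustrative only):
\starti
\ii {\bfseries min-width} 1200
\endi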
The second command is an extension to the squares cif output command.
Its syntax is:
\starti
\ii {\bfseries squares-grid} {\itshape border size separation grid}
\endi
The added argument is {\itshape grid}, which is
in units of centimicrons. In some technologies, all features
must fall on a specified grid. In our case, this was a .05
micron grid. In the original implementation of magic, if lambda
was not set to some integral multiple of the grid, one could
generate contacts that did not fall on grid boundaries. By
specifying the grid spacing, the new enhancement to the contact
generation code will allow contacts to be generated on grid.
This does introduce some problems. In particular, some odd
size contacts will not be able to generate a CIF contact structure
that is centered on its corresponding magic contact. This is
not a problem in most cases, except where an odd size contact
is shared between two cells. In this case, the CIF contact
structure might be shifted to the left during CIF generation
to get it on grid and the other cell might be shifted to the
right. The superposition of these two structures may create
an illegal contact size or spacing. Use with extreme care or combine
it with cifwidth and cifspacing rules to verify correct operation.
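For reference, with the .05 micron (5 centimicron) grid mentioned above, the
command might read (the border, size, and separation values are illustrative):
\starti
\ii {\bfseries squares-grid} 30 60 60 5
\endi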
\section{New commands}
Three new commands have been introduced (based on the WRL code fragments
by Bob Mayo):
\starti
\ii {\bfseries goto} {\itshape nodename}
\endi
Places the box/cross over the node named {\itshape nodename}.
\starti
\ii {\bfseries findlabel} {\itshape labelname}
\endi
Places the box/cross over the label named {\itshape labelname}.
\starti
\ii {\bfseries flatten} {\itshape destname}
\endi
Flattens the cell in the current layout window and places it in the cell
named {\itshape destname}. The labels are changed to retain their hierarchical
prefixes.
\end{document}
Integrated Information Theory (IIT) has been proposed as a mathematical way to understand consciousness.
It revolves around analyzing the phenomenology of a system, that is, how the system experiences its environment.
IIT 3.0 mostly does this by finding the maximally irreducible conceptual structures formed by a causal structure interacting with the environment.
The theory takes properties of consciousness as axioms, and from there, it postulates the properties that a physical substrate hosting the system must necessarily have.
The causal structure, also called a cause-effect structure (CES), is analyzed to find conceptual structures that measure how well a system integrates information.
A larger amount of integrated information corresponds to a larger number of possible states in the system.
Information is integrated when there is no way to cut the CES without losing information.
The CES is an unfolding of a model of the neural substrate in the sense that the model is cut in every possible way (imagine cutting a network of nodes).
The CES maps to the properties of experience, and the CES can be quantified by $\Phi$ \cite{oizumi_phenomenology_2014}.
Although IIT might seem overly abstract, the first tangible result was published as a study on the question ``Why does space feel the way it does?''.
Here, a model of the visual cortex areas 1 and 2 (V1 and V2) was cut into a CES, using knowledge of the grid cells that have been shown to be essential for localization \cite{haun_why_2019}.
\paragraph{A summary of corollaries} of IIT, based on the article by Oizumi et al.\ \cite{oizumi_phenomenology_2014}:
An intelligent system will usually consist of a main \textbf{complex}, and smaller supporting complexes.
This central complex will be the most conscious part of the system, like the part of our brain that we could not function without, while the supporting complexes could be compared to smaller parts of the brain, like, for example, the visual cortex.
Intelligent systems will not be modular; that is, a modular system produces no more functionality than its components taken separately, whereas an intelligent system has to produce functionality that exceeds that of its combined components.
Inactive systems can be conscious in the way that a system may have significant parts that are ready to be activated or that are passively affecting the state of the system.
A system can perform very complex functions but still not be conscious.
In particular, feed-forward networks would not be conscious.
An example of this is a microprocessor implementing a neural network that, in some cases, can recognize thousands of different objects faster than a human can recognize one object.
IIT states that it is not only the functionality of a system that determines how conscious it is, but also how that functionality is realized internally.
A network that can recognize objects based on internal states and previous experiences is more conscious than a feed-forward neural network, which recognizes objects based only on the external input it receives.
Networks that are not necessarily feed-forward only, but simulated on a physical substrate that implements functionality based on numerical approximation, would not be termed conscious in IIT 3.0 \cite{marshall_integrated_2016}.
A final corollary, summarizing the ones above, says that an intelligent system can develop concepts based on other concepts within itself, without the need for external stimuli.
These internal concepts would more often be connected to a large number of other concepts and not contribute to specific details.
\paragraph{In this project} I hope to utilize IIT as a way to analyze the perception of automated agents, evolved as SNN \vref{sect:snn} animats \vref{sect:agent} on BrainScaleS \vref{sect:bss}.
As the complexity and states of the animats are known, the animats can be analyzed.
Analyzing the animats in regards to IIT will only be attempted if time permits.
\chapter{Advances}
%!TEX root = paper.tex
\looseness-1
%Data is central to modern systems in a wide range of domains.
%, including healthcare, transportation, and finance.
The core of modern data-driven systems typically comprises models learned
from large datasets, and these models are usually optimized to target particular data
and workloads. While these data-driven systems have seen wide adoption and
success, their reliability and proper functioning hinge on the data's continued
conformance to the systems' initial settings and assumptions. When the serving
data (on which the system operates) deviates from the profile of the initial
data (on which the system was trained), system performance degrades and system
behavior becomes unreliable. Therefore, a mechanism to assess the
trustworthiness of a system's inferences is paramount, especially for systems
that perform safety-critical or high-impact operations.
A machine-learned (ML) model typically works best if the serving dataset
follows the profile of the dataset the model was trained on; when it doesn't,
the model's inference can be unreliable. One can profile a dataset in many
ways, such as by modeling the data distribution of the
dataset~\cite{achlioptas2017learning}, or by finding the (implicit)
\emph{constraints} that the dataset satisfies~\cite{pena2019discovery}.
Distribution-oriented approaches learn data likelihood (e.g., joint or
conditional distribution) from the training data, and can be used to check if
the serving data is unlikely. However, an unlikely tuple does not necessarily imply that
the model would fail for it. The problem with distribution-oriented
approaches is that they tend to overfit, and, thus, are overly conservative
towards unseen tuples, leading them to report many false positives.
\looseness-1 We argue that certain constraints offer a more effective and
robust mechanism to quantify trust of a model's inference on a serving tuple.
The reason is that learning systems implicitly exploit such constraints during
model training, and build models that assume that the constraints will continue
to hold for serving data. For example, when there exist high correlations among
attributes in the training data, learning systems will likely reduce the
weights assigned to redundant attributes that can be deduced from others, or
eliminate them altogether through dimensionality reduction. If the serving data
preserves the same correlations, such operations are inconsequential;
otherwise, we may observe model failure.
\looseness-1 In this paper, we characterize datasets with a new data-profiling
primitive, \emph{\dis}, and we present a mechanism to identify \emph{strong}
\dis, whose violation indicates unreliable inference. \Dis specify constraints
over \emph{arithmetic relationships} involving multiple numerical attributes of
a dataset. We argue that a tuple's conformance to the \dis is more critical for
accurate inference than its conformance to the training data distribution. This
is because any violation of \dis is likely to result in a catastrophic failure
of a learned model that is built upon the assumption that the \dis will always
hold. Thus, we can use a tuple's deviation from the \dis as a proxy for the
trust on a learned model's inference for that tuple. We proceed to describe a
real-world example of \dis, drawn from our case-study evaluation on
\emph{trusted machine learning} (TML).
\setlength{\tabcolsep}{.5em}
\renewcommand{\arraystretch}{.9}
\begin{figure}[t]
\centering
\resizebox{1\columnwidth}{!}
{\small
\begin{tabular}{lcccc}
\toprule
& \textbf{Departure} & \textbf{Departure Time} & \textbf{Arrival Time} & \textbf{Duration (min)} \\
& \textbf{Date} & \textbf{[DT]} & \textbf{[AT]} & \textbf{[DUR]} \\
\midrule
$t_1$ & May 2 & \texttt{14:30} & \texttt{18:20} & 230 \\
$t_2$ & July 22 & \texttt{09:05} & \texttt{12:15} & 195 \\
\revisetwo{$t_3$} & \revisetwo{June 6} & \revisetwo{\texttt{10:20}} & \revisetwo{\texttt{12:20}} & \revisetwo{115} \\
$t_4$ & May 19 & \texttt{11:10} & \texttt{13:05} & 117 \\
\rowcolor{vlightgray}
$t_5$ & April 7 & \texttt{22:30} & \texttt{06:10} & 458 \\
\bottomrule
\end{tabular}
}
\vspace{-3mm}
\caption{\small
%
Sample of the airlines dataset (details are in
Section~\ref{exp-invariants-for-ML}), showing departure, arrival, and
duration only. The dataset does not report arrival date, but an arrival time
earlier than departure time (e.g., last row), indicates an overnight flight.
\reviseone{All times are in 24 hour format} and in the same time zone. There is some
noise in the values.
%
}
\vspace{-2mm}
\label{fig:flights}
\end{figure}
\begin{example}\label{ex:tml}
\looseness-1
We used a dataset with flight information that includes data on departure and
arrival times, flight duration, etc.\ (Fig.~\ref{fig:flights}) to train a
linear regression model to predict flight delays. \revisetwo{The model was
trained on a subset of the data that happened to include only daytime flights
(such as the first four tuples)}. In an empirical evaluation of the regression
accuracy, we found that the mean absolute error of the regression output more
than quadruples for overnight flights (such as the last tuple $t_5$), compared
to daytime flights. The reason is that tuples representing overnight flights
deviate from the profile of the training data \revisetwo{that only contained
daytime flights}. Specifically, daytime flights satisfy the \di that ``arrival
time is later than departure time and their difference is very close to the
flight duration'', which does not hold for overnight flights. Note that this
\invariant is just based on the covariates (predictors) and does not involve
the target attribute $\mathtt{delay}$. Critically, although this \di is unaware
of the regression task, it was still a good proxy of the regressor's
performance. \revisetwo{In contrast, approaches that model data likelihood may
report long daytime flights as unlikely, since all flights in the training data
($t_1$--$t_4$) were also short flights, resulting in false alarms, as the model
works very well for most daytime flights, regardless of the duration (i.e., for
both short and long daytime flights).}
%
\end{example}
\revisetwo{Example~\ref{ex:tml} demonstrates that when training data has
\emph{coincidental} relationships (e.g., the one between $\mathtt{AT}$,
$\mathtt{DT}$, and $\mathtt{DUR}$ for daytime flights), then ML models may
\emph{implicitly} assume them as \emph{invariants}. \Dis can capture such data
invariants and flag non-conforming tuples (overnight flights) during
serving.}\label{extake}
\smallskip
\paragraph{\Dis.} \Dis complement the existing data profiling literature, as
the existing constraint models, such as functional dependencies and denial
constraints, cannot model arithmetic relationships. For example, the \di of
Example~\ref{ex:tml} is: $-\epsilon_1 \le \mathtt{AT} - \mathtt{DT} -
\mathtt{DUR} \le \epsilon_2$, where $\epsilon_1$ and $\epsilon_2$ are small
values. \Dis can capture complex linear dependencies across attributes within a
\emph{noisy} dataset. For example, if the flight departure and arrival data
reported the hours and the minutes across separate attributes, the \invariant
would be on a different arithmetic expression: $(60\cdot \mathtt{arrHour} +
\mathtt{arrMin}) - (60\cdot \mathtt{depHour} + \mathtt{depMin}) -
\mathtt{duration}$.
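For concreteness, interpreting the times in Fig.~\ref{fig:flights} as minutes
since midnight, the projection $\mathtt{AT} - \mathtt{DT} - \mathtt{DUR}$
evaluates to $0$, $-5$, $+5$, and $-2$ minutes on the daytime tuples
$t_1$--$t_4$, but to $-1438$ minutes on the overnight tuple $t_5$. A
constraint such as $-5 \le \mathtt{AT} - \mathtt{DT} - \mathtt{DUR} \le 5$
(the bounds here are only illustrative) therefore holds on all daytime
training tuples and is violated dramatically by $t_5$.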
\looseness-1 The core component of a \di is the arithmetic expression, called
\emph{projection}, which is obtained by a linear combination of the numerical
attributes. There is an unbounded number of projections that we can use to form
arbitrary \dis. For example, for the projection $\mathtt{AT}$, we can find a
broad range $[\epsilon_3, \epsilon_4]$, such that all training tuples in
Example~\ref{ex:tml} satisfy the \di $\epsilon_3 \le \mathtt{AT} \le
\epsilon_4$. However, this \invariant is too inclusive and a learned model is
unlikely to exploit such a weak constraint. In contrast, the projection
$\mathtt{AT} - \mathtt{DT} - \mathtt{DUR}\;$ leads to a stronger \di with a
narrow range as its bounds, which is selectively permissible, and, thus, more
effective.
\smallskip
\paragraph{Challenges and solution sketch.} The principal challenge is to
discover an \emph{effective} set of conformance constraints that are likely to
affect a model's inference implicitly. We first characterize ``good''
projections (that construct effective constraints) and then propose a method to
discover them. We establish through theoretical analysis two important results:
(1)~A projection is good over a dataset if it is almost constant (i.e., has low
variance) over all tuples in that dataset. (2)~A set of projections,
collectively, is good if the projections have small pair-wise correlations. We
show that low variance components of a principal component analysis (PCA) on a
dataset yield such a set of \views. Note that this is different from---and, in
fact, completely opposite of---the traditional approaches
(e.g.,~\cite{DBLP:conf/kdd/QahtanAWZ15}) that perform multidimensional analysis
based on the high-variance principal components, after reducing dimensionality
using PCA.
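As a minimal illustration of this idea (not the implementation evaluated in
this paper, and omitting the disjunctive constraints, the importance
weighting, and the quantitative semantics), the following sketch derives
simple interval constraints from the low-variance principal directions of a
numerical training matrix:
\begin{verbatim}
# Minimal sketch: conformance-style constraints from the low-variance
# principal directions of the training data X (an n-by-d numpy array).
import numpy as np

def learn_constraints(X, k=2, slack=1.0):
    mu = X.mean(axis=0)
    Xc = X - mu
    _, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = eigvecs[:, :k]            # k lowest-variance directions
    proj = Xc @ W                 # projections of training tuples
    lo = proj.min(axis=0) - slack * proj.std(axis=0)
    hi = proj.max(axis=0) + slack * proj.std(axis=0)
    return mu, W, lo, hi          # constraints: lo <= (t - mu) W <= hi

def violation(t, mu, W, lo, hi):  # 0 if t conforms, > 0 otherwise
    p = (t - mu) @ W
    return float(np.maximum(0.0, np.maximum(lo - p, p - hi)).sum())
\end{verbatim}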
\smallskip
\paragraph{Scope.} \looseness-1 Fig.~\ref{relatedWorkMatrix} summarizes prior
work on related problems, but our scope differs significantly. Specifically, we
can detect if a serving tuple is non-conforming with respect to the training
dataset \emph{only based on its predictor attributes}, and require no knowledge
of the ground truth. This setting is essential in many practical applications
when we observe \emph{extreme verification latency}~\cite{souzaSDM:2015}, where
ground truths for serving tuples are not immediately available. For example,
consider a self-driving car that is using a trained controller to generate
actions based on readings of velocity, relative positions of obstacles, and
their velocities. In this case, we need to determine, only based on the sensor
readings (predictors), when the driver should be alerted to take over vehicle
control.
%, as we cannot use ground-truths to generate an alert.
%
Furthermore, we \emph{do not assume access to the model}, i.e., model's
predictions on a given tuple. This setting is necessary for (1)~safety-critical
applications, where the goal is to quickly alert the user, without waiting for
the availability of the prediction, (2)~auditing and privacy-preserving
applications where the prediction cannot be shared, and (3)~when we are unaware
of the detailed functionality of the system due to privacy concerns or lack of
jurisdiction.
% , but only have some
% meta-information such as the system trains some linear model over the training
% data.
%
We focus on identifying \emph{tuple-level} non-conformance as opposed to
dataset-level non-conformance that usually requires observing entire data's
distribution. However, our tuple-level approach trivially extends (by
aggregation) to the entire dataset.
\input{table-comparison}
\smallskip
\paragraph{Contrast with prior art.} We now discuss where \dis fit with respect
to the existing literature (Fig.~\ref{relatedWorkMatrix}).
% on data profiling and literature on modeling trust in data-driven inferences
\looseness-1 \subsubsection*{Data profiling techniques} \Dis fall under the
umbrella of data profiling, which refers to the task of extracting~tech\-nical
metadata about a given dataset~\cite{DBLP:journals/vldb/AbedjanGN15}. A key
task in data profiling is to learn relationships among attributes. Functional
dependencies (FD)~\cite{papenbrock2015functional} and their variants only
capture if a relationship exists between two sets of attributes, but do not
provide a closed-form (parametric) expression of the relationship. Using the FD
``$\{\mathtt{AT}, \mathtt{DT}\} \rightarrow$ $\{\mathtt{DUR}\}$'' to model the
\invariant of Example~\ref{ex:tml} suffers from several limitations. First,
since the data is noisy, no exact FD can be learned. Metric
FDs~\cite{koudas2009metric} allow small variations in the data,
% (similar
% attribute values are considered identical),
but hinge on appropriate distance metrics and thresholds. For example, if
$\mathtt{time}$ is split across two attributes ($\mathtt{hour}$ and
$\mathtt{minute}$), the distance metric is non-trivial: it needs to encode that
$\langle \mathtt{hour} = 4, \mathtt{min} = 59 \rangle$ and $\langle
\mathtt{hour} = 5, \mathtt{min} = 1\rangle$ are similar, while $\langle
\mathtt{hour} = 4, \mathtt{min} = 1\rangle$ and $\langle \mathtt{hour} = 5,
\mathtt{min} = 59\rangle$ are not. In contrast, \dis can model the composite
attribute ($60 \cdot \mathtt{hour} + \mathtt{minute}$) by automatically
discovering the coefficients $60$ and $1$.
% for such a composite attribute.
\looseness-1 Denial constraints (DC)~\cite{DBLP:journals/pvldb/ChuIP13,
DBLP:journals/pvldb/BleifussKN17, pena2019discovery,
DBLP:journals/corr/abs-2005-08540} encapsulate a number of different
data-profiling primitives such as FDs and their variants (e.g,~\cite{
DBLP:conf/icde/FanGLX09}). Exact DCs can adjust to noisy data by adding
predicates until the constraint becomes exact over the entire dataset, but this
can make the constraint extremely large and complex, which might even fail to
provide the desired generalization. For example, a finite DC---whose language
is limited to universally quantified first-order logic---cannot model the
constraint of Example~\ref{ex:tml}, which involves an arithmetic expression
(addition and multiplication with a constant). Expressing \dis requires a
richer language that includes linear arithmetic expressions. \revisetwo{Pattern
functional dependencies (PFD)~\cite{qahtan2020pattern} move towards addressing
this limitation of DCs, but they focus on text attributes: they are regex-based
and treat digits as characters. However, modeling arithmetic relationships of
numerical attributes requires interpreting digits as numbers.}
\looseness-1 To adjust for noise, FDs and DCs either relax the notion of
constraint violation or allow a user-defined fraction of tuples to violate the
(strict) constraint~\cite{pena2019discovery, huhtala1999tane,
kruse2018efficient, DBLP:conf/sigmod/IlyasMHBA04, koudas2009metric,
caruccio2016discovery, DBLP:journals/corr/abs-2005-08540}. Some
approaches~\cite{DBLP:conf/sigmod/IlyasMHBA04, DBLP:conf/sigmod/ZhangGR20,
DBLP:conf/sigmod/YanSZWC20} use statistical techniques to model other types of
data profiles such as correlations and conditional dependencies. However, they
require additional parameters such as noise and violation thresholds and
distance metrics. In contrast, \dis do not require any parameter from the user
and work on noisy datasets.
\revisetwo{Existing data profiling techniques are not designed to learn what ML
models exploit and are sensitive to noise in the numerical attributes.
Moreover, data constraint discovery algorithms typically search over an
exponential set of candidates, and hence, are not scalable: their complexity
grows exponentially with the number of attributes or quadratically with data
size. In contrast, our technique for deriving \dis is highly scalable (linear
in data size) and efficient (cubic in the number of attributes). It does not
explicitly explore the candidate space, as PCA---which lies at the core of our
technique---performs the search \emph{implicitly} by iteratively refining
weaker \invariants to stronger ones.} \label{nocandidate}
\subsubsection*{Learning techniques} While \emph{ordinary least squares} finds
the lowest-variance projection, it minimizes observational error on only the
target attribute, and, thus, does not apply to our setting. \emph{Total least
squares} offers a partial solution as it takes observational errors on all
predictor attributes into account; but it finds only one projection---the
lowest-variance one---that fits the data tuples best. However, there may exist other
projections with slightly higher variances and we consider them all. As we show
empirically in Section~\ref{exp-invariants-for-drift}, constraints derived from
multiple projections, collectively, capture various aspects of the data, and
result in an effective data profile targeted towards certain tasks such as
data-drift quantification~\citeTechRep.
\medskip
\paragraph{Contributions.} We make the following contributions:
\begin{itemize}
\item We ground the motivation of our work with two case studies on trusted
machine learning (TML) and data drift. (Section~\ref{sec:casestudies})
\item We introduce and formalize \dis, a new data profiling primitive that
specifies constraints over arithmetic relationships among numerical
attributes of a dataset. We describe a \emph{conformance language} to
express \dis, and a \emph{quantitative semantics} to quantify how much a
tuple violates the \dis. In applications of constraint violations, some
violations may be more or less critical than others. To capture that, we
consider a notion of \invariant importance, and weigh violations against
\invariants accordingly. (Section~\ref{sec:data-invs})
\item We formally establish that strong \dis are constructed from
projections with small variance and small mutual correlation on the given
dataset. Beyond simple linear \invariants (e.g., the one in
Example~\ref{ex:tml}), we derive \emph{disjunctive} \invariants, which are
disjunctions of linear \invariants. We achieve this by dividing the
dataset into disjoint partitions, and learning linear \invariants for each
partition. We provide an efficient, scalable, and highly parallelizable
algorithm for computing a set of linear \dis and disjunctions over them.
We also analyze its runtime and memory complexity.
(Section~\ref{sec:synth-data-inv})
\item We formalize the notion of \emph{\nc} tuples in the context of
trusted machine learning and provide a mechanism to detect \nc tuples
using \dis. (Section~\ref{sec:di-for-tml})
\item We empirically analyze the effectiveness of \dis in two case-study
applications---TML and data-drift quantification. We show that \dis can
reliably predict the trustworthiness of linear models and quantify data
drift precisely, outperforming the state of the art.
(Section~\ref{sec:experiments})
\end{itemize}
"alphanum_fraction": 0.7732708468,
"avg_line_length": 57.652173913,
"ext": "tex",
"hexsha": "3b25bfe778a887eef13ef4842f8afacd3f0f424f",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2021-12-09T05:22:18.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-12-09T05:22:18.000Z",
"max_forks_repo_head_hexsha": "ee419285ab32464f063225fbdeba005a043d2033",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "afariha/ConformanceConstraintsReproducibility",
"max_forks_repo_path": "Paper/1_introduction.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "ee419285ab32464f063225fbdeba005a043d2033",
"max_issues_repo_issues_event_max_datetime": "2021-12-09T09:30:49.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-12-09T09:30:49.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "afariha/ConformanceConstraintsReproducibility",
"max_issues_repo_path": "Paper/1_introduction.tex",
"max_line_length": 137,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "ee419285ab32464f063225fbdeba005a043d2033",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "afariha/ConformanceConstraintsReproducibility",
"max_stars_repo_path": "Paper/1_introduction.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 4655,
"size": 18564
} |
\section{Bottleneck structure in MgNet by using subspace correction}
Recall the standard MgNet iteration
\begin{equation}\label{eq:mgnetiteration}
u^{\ell,i} = u^{\ell,i-1} + \sigma \circ B^{\ell,i} \ast \sigma ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}),
\end{equation}
which corresponds to the classical residual correction scheme in multigrid as
\begin{equation}\label{key}
u^{\ell,i} = u^{\ell,i-1} + B^{\ell,i} ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}).
\end{equation}
Now let us recall the subspace correction scheme on a fixed level (for example, the $\ell$-th level);
we have the following iterative scheme
\begin{equation}\label{eq:bottleneckmgnet}
u^{\ell,i} = u^{\ell,i-1} + P^{\ell,i} B^{\ell,i} R^{\ell,i}({f^\ell - A^{\ell} \ast u^{\ell,i-1}}).
\end{equation}
Here let us recall the dimensions of $f^\ell$ and $u^{\ell,i}$,
\begin{equation}\label{key}
f^\ell, u^{\ell,i} \in \mathbb{R}^{c_\ell \times m_\ell \times n_\ell },
\end{equation}
which implies that the dimension of $B^{\ell,i}$ in the standard MgNet iteration \eqref{eq:mgnetiteration} is
\begin{equation}\label{key}
B^{\ell,i} \in \mathbb{R}^{c_\ell \times c_\ell \times 3 \times 3}.
\end{equation}
However, for a subspace correction scheme, we can take $R^{\ell,i}$ to be a restriction operator of the form
\begin{equation}\label{key}
R^{\ell,i}: \mathbb{R}^{c_\ell \times m_\ell \times n_\ell } \mapsto \mathbb{R}^{ \alpha c_\ell \times m_\ell \times n_\ell },
\end{equation}
where $\alpha \in (0,1]$, for example $\alpha = \frac{1}{4}$.
A natural choice for $R^{\ell,i}$ and $P^{\ell,i}$ is
\begin{equation}\label{key}
R^{\ell,i} \in \mathbb{R}^{\alpha c_\ell \times c_\ell \times 1 \times 1},
\end{equation}
and
\begin{equation}\label{key}
P^{\ell,i} \in \mathbb{R}^{ c_\ell \times \alpha c_\ell \times 1 \times 1}.
\end{equation}
Of course, we can simply take $R^{\ell,i} = [P^{\ell,i}]^T$ based on the
theory of subspace corrections.
Then, the size of $B^{\ell,i}$ in \eqref{eq:bottleneckmgnet} can be reduced to
\begin{equation}\label{key}
B^{\ell,i} \in \mathbb{R}^{\alpha c_\ell \times\alpha c_\ell \times 3 \times 3}.
\end{equation}
Thus the total number of parameters in $R^{\ell,i}$, $P^{\ell,i}$, and $B^{\ell,i}$ will be
\begin{equation}\label{key}
\begin{aligned}
&\alpha c_\ell \times c_\ell \times 1 \times 1 + c_\ell \times \alpha c_\ell \times 1 \times 1 + \alpha c_\ell \times\alpha c_\ell \times 3 \times 3 \\
&= ((3\alpha)^2 + 2\alpha) c_\ell^2\\
&= \frac{17}{16} c_\ell^2 \quad ( \alpha = \frac{1}{4}),
\end{aligned}
\end{equation}
which is much less than the $9c_\ell^2$ parameters of $B^{\ell,i}$ in the original MgNet
iteration~\eqref{eq:mgnetiteration}.
To follow the linear constrained model assumption, we may take the nonlinearity as
\begin{equation}\label{eq:bottleneckmgnet-1}
u^{\ell,i} = u^{\ell,i-1} + \sigma \circ P^{\ell,i} \ast \sigma \circ B^{\ell,i} \ast \sigma \circ R^{\ell,i} \ast \sigma ({f^\ell - A^{\ell} \ast u^{\ell,i-1}}).
\end{equation}
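As a minimal PyTorch-style sketch (an illustration only; the layer names, the
ReLU activation, and the choice $\alpha = 1/4$ are assumptions, and bias and
initialization details are omitted), one smoothing iteration of
\eqref{eq:bottleneckmgnet-1} could be written as:
\begin{verbatim}
# One bottleneck MgNet step: u <- u + s(P * s(B * s(R * s(f - A*u))))
import torch.nn as nn

class BottleneckSmoother(nn.Module):
    def __init__(self, c, alpha=0.25):
        super().__init__()
        ac = max(1, int(alpha * c))
        self.A = nn.Conv2d(c, c, 3, padding=1)    # A^l
        self.R = nn.Conv2d(c, ac, 1)              # R^{l,i}: c -> alpha*c
        self.B = nn.Conv2d(ac, ac, 3, padding=1)  # B^{l,i} on reduced space
        self.P = nn.Conv2d(ac, c, 1)              # P^{l,i}: alpha*c -> c
        self.act = nn.ReLU()

    def forward(self, u, f):
        r = self.act(f - self.A(u))               # sigma(f - A * u)
        return u + self.act(self.P(self.act(self.B(self.act(self.R(r))))))
\end{verbatim}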
Following a derivation similar to the one from MgNet to ResNet, we can also derive
the following ``enhanced'' bottleneck ResNet from \eqref{eq:bottleneckmgnet-1}:
\begin{equation}\label{key}
r^{\ell,i} = r^{\ell,i-1} - A^\ell \ast \sigma \circ P^{\ell,i} \ast \sigma \circ B^{\ell,i} \ast \sigma \circ R^{\ell,i} \ast \sigma (r^{\ell,i-1}).
\end{equation}
\documentclass{article}
\usepackage[english]{babel}
\usepackage{naproche}
\begin{document}
\pagenumbering{gobble}
\section*{Cantor's Theorem}
Let us prove that every set is strictly smaller than its powerset.
This result is known as \textit{Cantor's Theorem}.
\begin{forthel}
\begin{theorem}[Cantor]
Let $x$ be a set.
There is no surjection from $x$ onto the powerset of $x$.
\end{theorem}
\begin{proof}
Proof by case analysis.
Case $x$ is empty. Obvious.
Case $x$ is nonempty.
Assume the contrary.
Take a surjection $f$ from $x$ onto the powerset of $x$.
Define $N = \{ u "in" x : u "is not an element of" f(u) \}$.
Take an element $u$ of $x$ such that $N = f(u)$.
Indeed we can show that $N$ is an element of the powerset of $x$.
Every element of $N$ is an element of $x$.
Hence $N$ is a subset of $x$.
Thus $N$ is an element of the powerset of $x$.
End.
Then we have a contradiction.
End.
\end{proof}
\end{forthel}
\end{document}
\chapter*{Zhiyuan Liu's Individual Contribution Report}
Our capstone design topic is ``RISC-V SoC Microarchitecture Design and Optimization''. In this project, we propose a processor design that supports 4-way superscalar execution and dynamic instruction scheduling, together with advanced optimizations such as approximate floating-point computing units, to address the challenges of AI-oriented embedded systems, which require the CPU to be energy-efficient, inexpensive, and fast at the same time. After designing our processor, we also set up the SoC and peripheral components such as the cache and I/O devices in software simulation tools to test and validate our design. Our project can be divided into a hardware part and a software part. Li Shi, Jian Shi, and Yiqiu Sun are mainly hardware engineers in our team, while Yichao Yuan and I are software engineers. We hold group meetings two to three times a week, and there are also some individual meetings. Besides, we have meetings with our instructor every two weeks to report our project progress and explain technical issues. Everyone has done their best for our design and everyone has contributed to the project equally; the following is our division of work.
\begin{enumerate}
\item Li Shi: He is responsible for most of backend designs like instruction dispatch, instruction issue, register file and execution units for integer and memory access. He also helps debug and integrate FP components. His workload is about 15 hours per week.
\item Jian Shi: He is responsible for hardware frontend designs such as free list, register renaming table, reorder buffer and execution units for floating point. At the same time, during the preparation phase of our project, he also did a lot of research on other RISC-V cores. His workload is about 15 hours per week.
\item Yiqiu Sun: She is responsible for designing the branch predictor and instruction fetch unit and helps integrate and design our overall microarchitecture. She is also our technical support and helps review our code and proposes many constructive ideas about our design. Her workload is about 15 hours per week.
\item Yichao Yuan: He is responsible for the software simulation and validation part. He helps to replicate the Spike-based model in our Verilator-based model, so that our CPU core can be better compared with the Spike model, and he helps with simulation and validation. He also explored cache design and the AXI bus protocol for our project at the preparation stage. His workload is about 15 hours per week.
\item Zhiyuan Liu: I am responsible for the compilation workflow, and I also did part of the instruction fetch at the beginning of our project with the help of Yiqiu Sun. I investigated the structure of the LLVM and GCC compilers and modified and rebuilt the compiler toolchains so that customized instructions for the approximate computing units can be issued and disassembled by our new compiler toolchains. I also integrated the approximate computing functions into Spike to help validate our core. My workload is about 15 hours per week.
\end{enumerate}
As mentioned in the previous part, I am one of the software engineers in our team, and my job mainly concentrated on the compilation workflow and the Spike validation part. At the beginning of the project, I was also responsible for part of the hardware frontend design, namely instruction fetch. I had experience with RTL design, but I had not been in touch with SystemVerilog before, so it was painful when I first started writing SystemVerilog for our project. In the beginning, I treated it purely as an ordinary software language, but in the process of writing, and with the help of our team members, I can now roughly understand how to use SystemVerilog to describe the behavior of hardware. With the help of Yiqiu Sun, we completed the instruction fetch part. In the middle and late parts of our project, I focused on the compilation workflow. In this process, I compared the architecture of the LLVM compiler with that of GCC and gained an in-depth understanding of the compilation process. After comparing their structures, I selected the GCC compiler. Among the many versions of the RISC-V GCC compiler, I finally chose riscv64-multilib-elf-gcc as our target compiler because our CPU is designed for embedded systems, so we must use static link libraries, and we want our design to be better compatible with the 64-bit RISC-V architecture in the future. At the same time, I also studied how to modify and rebuild the compiler so that it can issue and disassemble our customized instructions for approximate computing, how to add our customized instructions to Spike, and how to describe the functional behavior of our customized instructions so that Spike can also run approximate floating-point computation.
I also learned a lot about how to manage an engineering project during our capstone design.
Many useful project management tools are used in our project, such as Feishu and Git. We use Feishu to arrange our group meetings and Feishu docs to track each team member's progress and schedule, and we use Git to manage our code. For example, we have a Feishu doc named ``VE450 project management'' in which we record everyone's division of work and update our progress in time.
In terms of technical communication, I actively participated in each presentation and our final expo. In our presentations, I was responsible for introducing the background of our project to the audience, the actual problems that our project solves, and its innovations. I also gave a technical report and presentation on the compilation workflow to our instructor.
In conclusion, as one of the software engineers in our team, I try my best to contribute to our capstone design. We work as a team and everyone in our group tries their best to make the project better.
%!TEX root = ../gronskiy_phd_thesis.tex
\chapter[Approximation-Based Regularization for Robust Optimization]{Approximation-Based Regularization \\ for Robust Optimization}
\label{ch:gen_appch}
\hfill
\begin{minipage}[t]{.75\textwidth}
\textit{``Truth is much too complicated to allow anything but approximations.''} \\
\hrule
\vspace{.2cm}
\hfill
\textsc{--- John von NEUMANN}, ``The Mathematician''
\end{minipage}
\section{Introduction}
\subsection{Motivation}
Within a given data set, not all information is useful~--- given measurement
error, some part of it explains noise, not the true functional dependencies.
Hence, there exists an inevitable limitation on the amount of information bits
one should use in order to avoid overfitting. However, another curse~---
underfitting~--- happens if for some reason the solver decides to play on the
safe side and uses less information than would be optimal. We refer to this
phenomenon as the \textit{informativeness vs. robustness trade-off}.
\index{Underfitting}
In the line of research started
by~\citet{conf/isit/Buhmann10,conf/mcpr2/Buhmann11}, an approach of utilizing
self-calibrating\footnote{Sometimes ``self-calibrating'' is replaced by
``context-sensitive''.} optimization procedures is devised and advocated. In its
essential part, this approach aims at maximizing (through a set of tools
discussed below) the amount of useful information, thereby making
solutions both statistically robust \textit{and} informative.
This approach features~\citep[cf.][]{Busetto:PhD,jcss:2017}, among others, the following key
properties:
\begin{itemize}
\item it does not require any knowledge on
probability distributions of input instances, particularly not whether the
noise is systematic or random;
\item it allows one to quantify the quality of the obtained solution w.r.t.~
unseen data instances;
\item moreover, it makes it possible to rank different models.
\end{itemize}
In this chapter, we will provide justification of this approach, as well as
introduce some prototypic examples proving its usability. We will also discuss
possible generalizations and adjustments of this approach, which will be
addressed in the next chapters.
\subsection{Contributions and Outline of the Chapter}
\label{sec:asc_contribs}
As main contributions of this chapter, we
\begin{itemize}
\item provide a justification and revisit the approach for robust optimization
called Approximation Set Coding;\index{ASC|see{Approximation Set Coding}} \index{Approximation Set Coding}
\item introduce and evaluate the simplest, yet interpretable proof-of-concept
model which shows experimentally the superiority of the ASC-based approach
and refers to clear intuitions about the mechanism thereof;
\item prove a theoretical result which suggests one step further in the direction
of eliminating the computational bottleneck of the ASC~--- computing the ASC
score;
\item introduce a so-called Gibbs relaxation of the ASC approach, give
a rationale behind it and experimentally evaluate its performance.
\end{itemize}
The chapter is outlined as follows. First, a background and related work are
given in Section~\ref{sec:gen_appch_related_work}. We then provide technical
preliminaries (setting and model assumptions) in
Section~\ref{sec:gen_appch_setting_and_model}. A comprehensive introduction into
the original approximation set-based approach is then given in
Section~\ref{sec:asc_original}. Section~\ref{sec:similarity_approach_intro}
presents an analogous approach to robust optimization. We then give a
proof-of-concept experimental confirmation of the validity of approximation
set-based methods in Section~\ref{sec:proof_of_concept}. Further, we address a
problem which is a very characteristic bottleneck for most applications of
approximation set-based approaches and solve it in
Section~\ref{sec:analytic_solution}. We then explain how a relaxation of the
approach (called Gibbs relaxation) works in Section~\ref{sec:gibbs_relaxation_of_sim}
and show experimental results for it. Finally, concluding remarks follow
in~Section~\ref{sec:gen_appch_conclusion}.
\myremark Section~\ref{sec:similarity_approach_intro} is included into the
thesis for the sake of ``self-containedness'', and presents an approach not due
to the author of this thesis. For smooth integration into this thesis, we
provided necessary terminological and notational adaptation, but
nevertheless, some small parts of the text of
Section~\ref{sec:similarity_approach_intro}, as well as
Section~\ref{sec:gen_appch_conclusion} may still be similar to that
of~\citet{Sramek:PhD}. Sections~\ref{sec:proof_of_concept},~\ref{sec:analytic_solution}
and~\ref{sec:gibbs_relaxation_of_sim} are a result of joint work hence their
textual presentation may be partially similar to that of~\citet{jcss:2017}.
\section{Background and Related Work Overview}
\label{sec:gen_appch_related_work}
When dealing with uncertain (noisy) inputs, the model designer is always confronted
with two related questions:
\begin{itemize}
\item For a given model, how to provide a well-generalizing regularization?
\item For a predefined set of models, how to establish an ordering of them which
would reflect their generalization capability?
\end{itemize}
While we introduce an approach which solves both tasks, in this chapter we will
mainly concentrate on the first application of it, while the next chapter
(Chapter~\ref{ch:mst}) addresses the second one. For now, however, we give
an overview of both tasks.
\subsection{Generalization and Stability in Learning}
\label{sec:generalization_stability_in_learning}
When dealing with noisy inputs, a designer of an empirical risk minimization
(ERM) algorithm is always confronted with the question of how well the learned solution
generalizes to unknown test sets. In fact, the whole field of statistical
learning theory~\citep{Vapnik71,Vapnik:1982} \index{Statistical
learning theory} has at its core been posing this
question since the 1970s. It focuses on the question: for a given
algorithm, can we derive bounds on its generalization error~\citep{Bishop:2006}?
\index{ERM|see{Empirical Risk Minimization}}
\index{Empirical Risk Minimization}
The ways of bounding the generalization error \index{Generalization error} can be, in our view, split into
three\footnote{These classes are very much interrelated. We introduce this
classification for simplicity and do not claim it to be the only division
possible.} classes: to the more classical one belongs, e.g., bounding the
generalization error via properties of the \textit{hypothesis space},
such as the VC-dimension~\citep{Vapnik71,Vapnik:1982} or the Rademacher
complexity~\citep{Shalev-Shwartz:2014}.
Another class of research directions encompasses approaches where one derives
such bounds using the so-called (hypothesis, error or uniform)
stability~\citep{Devroye79,Bousquet:2002} property of the ERM algorithm, which,
in essence, reflects how critical the fluctuations of the input data are for the
outcome of the algorithm. As a side remark, we note that from the technical
standpoint, the mentioned bounds utilize various concentration
inequalities~\citep{RaginskyS15}.
Lastly, stability \index{Stability} (and thus generalization) properties have recently enjoyed
research from the information-theoretic perspective, considering a learning
algorithm as a channel from the input to the output~\citep{Russo15,Xu17} and
relating stability to the mutual information between the input and the output.
An advantage of this third class of approaches is that the
bounds it provides involve \textit{both} the properties of the hypothesis
space and the learning algorithm (as opposed to the aforementioned methods). To
the same ``information theory-inspired'' class we can assign a recent work
by~\citet{Alabdulmohsin:2015}, which relates generalization to the total variation
\index{Total variation information} information.
\myremark The approach via input-output mutual information comes very close to
the one introduced and advocated in this chapter. However, we should note that
both approaches stem from different definitions of the communication channel
used to derive error bounds.
\myremark It should be noted here that all the above methods only provide ways
to guarantee certain performance when the learning algorithm is fixed. They do
not answer the question of how to regularize its solutions for more robust
performance. In contrast, the approach presented and tested in this chapter does
exactly this.
\subsection{Model Selection}
Besides quantifying the quality of a given ERM solution (overview for which was
given above), a modeler can ask another question: how to choose between two
possible models, taking into account various properties such as e.g.~complexity?
A long line research which addressed this question is presented by methods such
as the Minimum Description Length (MDL) principle~\citep{Rissanen:1978}, the
Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) or
the Generalized Information Criterion~\citep[for an overview,
see][]{Konishi:2007}.
\index{Model validation}
\index{Model validation!MDL}
\index{Model validation!AIC}
\index{Model validation!BIC}
\index{AIC|see{Model validation}}
\index{BIC|see{Model validation}}
\index{MDL|see{Model validation}}
\subsection{Robust Optimization}
Besides the approaches characteristic for statistical learning theory, there
exists a methodological direction called \textit{robust optimization}.
\index{Robust optimization}
%
Being very close to the approaches above, robust optimization deals with models
for the uncertain input~--- however, contrary to the approaches traditional for
statistical learning theory, in the field of robust optimization, it is
explicitly discouraged to assume the knowledge of the input data distribution,
although some information (for example, whether the data comes from a certain interval
domain or not) might be available.
%
For a comprehensive overview of robust optimization approaches, we recommend a
recent survey by~\citet{series/lncs/GoerigkS16}.
The closest point of contact between the robust optimization approaches and
approaches of the above Section~\ref{sec:generalization_stability_in_learning}
is, in our view, \textit{optimization for stable inputs}, which attempts
to understand the connection between fluctuations of the input and the output,
i.e. some sort of stability~\citep{journals/cpc/BiluL12,%
conf/sirocco/BiloGGPW09,journals/scheduling/GattoW11,conf/sofsem/MihalakSSW11}.
\section{Setting and Generative Model Assumptions}
\label{sec:gen_appch_setting_and_model}
\subsection{Optimization Problem}
\label{sec:optimization_problem_description}
In the setting we are going to analyze in this and the next chapters, the
following components are assumed to be defined:
\begin{itemize}
\item A set $\mathcal{X}$ of possible \textit{data instances} $X$:
\begin{equation}
\mathcal{X} \ni X,
\end{equation}
\nomenclature[A, 01]{$\mathcal{X}$}{source of data instances}%
\nomenclature[A, 01a]{$X \in \mathcal{X}$}{[random] data instance}%
on which no further assumptions (e.g. structure, finiteness, countability) are
imposed in the most general case (see below in
Section~\ref{sec:data_generation_model} for possible specifications of such
assumptions).
\index{Data instance}
\item A set $\mathcal{C}$ of possible \textit{solutions}, or
\textit{hypotheses} $c$:
\begin{equation}
\mathcal{C} \ni c,
\end{equation}
\nomenclature[A, 01b]{$\mathcal{C}$}{set of solutions}%
\nomenclature[A, 01c]{$c \in \mathcal{C}$}{solution}% can add \nomnorefeq
where again no further structural or finiteness assumptions are imposed.
\item An \textit{objective function} $R(c, X)$ representing the value of a
given solution $c$ for a data instance $X$:
\begin{equation}\label{eq:cost_function}
R(c, X) \colon \mathcal{C} \times \mathcal{X} \to \mathbb{R}.
\end{equation}
\nomenclature[A, 01d]{$R(c, X)$}{cost function}%
If not stated otherwise, we will assume a minimization (i.e. ``cost'', ``error''
or ``energy'') semantics of $R(c,X)$ and call it a \textit{cost function}.
\end{itemize}
\begin{definition}\label{def:optimization_problem_definition}
Provided that the \textit{solution feasibility} assumption is fulfilled, i.e.~any
solution $c \in \mathcal{C}$ is \textit{feasible} (i.e.~valid) for any data
instance $X \in \mathcal{X}$, these three components define a
valid \textit{optimization problem}, denoted by the triplet $\mathcal{P} =
(\mathcal{X}, \mathcal{C}, R)$.
\nomenclature[A, 01e]{$\mathcal{P} = (\mathcal{X}, \mathcal{C}, R)$}{optimization problem \nomnorefeqpage\hfill Def.~\ref{def:optimization_problem_definition}}%
\index{Optimization problem}
\end{definition}
The optimization goal is to find the
set of solutions which minimize the cost function:
\begin{equation}
\mathcal{C}^\bot(X) \coloneqq \arg \min_{c \in \mathcal{C}} R(c, X).
\end{equation}
\nomenclature[A, 01f]{$\mathcal{C}^\bot(X)$}{set of empirical optimizers}%
With a slight abuse of notation, we will also write
\begin{equation}
c^\bot(X) \coloneqq \arg \min_{c \in \mathcal{C}} R(c, X) \in \mathcal{C}^\bot(X),
\end{equation}
\index{Global minimizer}
meaning \textit{one} (out of possibly many) optimal solution, i.e.\ a global
minimizer. We will denote the optimal cost as:
\begin{equation}
R^\bot(X) \coloneqq \min_{c \in \mathcal{C}} R(c, X).
\end{equation}
\nomenclature[A, 01g]{$R^\bot(X)$}{optimal cost}%
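To make this notation concrete, the following minimal Python sketch (our
illustration, not part of the formal development) computes $R^\bot(X)$,
$\mathcal{C}^\bot(X)$ and one $c^\bot(X)$ for a toy instance in which the
solution set is finite and represented by its cost vector:
\begin{verbatim}
import numpy as np

# Toy instance: a finite solution set represented by its cost vector,
# so that R(c_i, X) is simply the i-th entry of X.
X = np.array([3.2, 1.7, 1.7, 4.0, 2.5])

R_opt = X.min()                        # optimal cost R^bot(X) = 1.7
C_opt = np.flatnonzero(X == R_opt)     # set of global minimizers C^bot(X) = {1, 2}
c_opt = int(C_opt[0])                  # one (arbitrary) global minimizer c^bot(X)
\end{verbatim}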
\subsection{Data Generation Model}
\label{sec:data_generation_model}
Dealing with \textit{uncertainty} in optimization requires defining a data
generation process.
In the following, we will simply assume that there exists a \textit{true
(ground, signal) data instance} $X^0$, from which the \textit{noise-contaminated
data instances} are obtained independently, i.e.:
\begin{equation}\label{eq:data_gen_model}
X', X'', X''', \dots \sim PG(X | X^0)
\end{equation}
through a \textit{problem generating} process $PG(\cdot | X^0)$.
\nomenclature[A, 01h]{$PG(X \mid X^0)$}{problem generator}%
\index{Problem generator}
Note that the problem generating process is parametrized by the ground
truth $X^0$. Note also that the obtained data instances are independent,
conditioned on $X^0$:
\begin{equation}\label{eq:pg_independence}
\text{for any data instances $X', X'' \sim PG(X | X^0)$:\quad } X'
\independent X'' | X^0.
\end{equation}
\index{Data instance!Ground truth}
\myremark Although this notation might seem complex, it is actually very
straightforward. In most cases we consider, $X \in \mathcal{X}$ will simply be a
vector of random weights (generated by $PG(\cdot)$) from which the costs $R(c,
X)$ are constructed.
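For illustration only, this remark can be turned into a few lines of Python
(the additive Gaussian noise here is a hypothetical choice of $PG$; the noise
models actually used are specified later, e.g.\ in
Section~\ref{sec:gen_appch_pg}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth instance X^0: a vector of "true" weights.
X0 = np.array([1.0, 5.0, 2.0, 8.0])

def PG(X0, sigma=0.5):
    """One draw from a (hypothetical) problem generator PG(. | X^0):
    additive Gaussian noise on top of the ground truth."""
    return X0 + rng.normal(0.0, sigma, size=X0.shape)

# X', X'' are independent given X^0.
X_prime, X_double_prime = PG(X0), PG(X0)
\end{verbatim}
Conditioned on \texttt{X0}, the two draws are independent, which mirrors the
independence assumption~\eqref{eq:pg_independence}.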
\section{Approximation Set-Based Approach}
\label{sec:asc_original}
\index{Approximation Set Coding}
In this section, we will introduce the notions related to the Approximation Set
Coding (ASC) framework, a successful way of regularizing solutions to cost-driven
optimization problems.
\subsection{Approximation Sets}
We introduce the notion of \textit{approximation sets}, which is intended to
address the following question: how can one avoid the risk of overfitting in the
frequent case when the solver does not know the precise noise conditions $PG(\cdot)$
imposed on the dataset?
Consider the following thought experiment: datasets $X', X'', \ldots$ are drawn
according to the random data generation process $PG(\cdot | X^0)$ as given in
Section~\ref{sec:data_generation_model}. As all the datasets stem from the same
``ground'' dataset $X^0$ (in some sense, which we leave undefined for now for
the sake of simplicity), they contain both useful and irrelevant
information. In other words: only \textit{some} of the information one obtains from
the dataset ``explains the signal'' $X^0$ (e.g.\ has low conditional entropy), while
the rest of the information ``explains the noise''.
Utilizing an optimization model defined by a cost $R(\cdot, \cdot)$ as
in~\eqref{eq:cost_function}, and thus obtaining optimal solutions $c^\bot(X'),
c^\bot(X''), \ldots$, we inevitably absorb both useful and irrelevant information
and overfit, making solutions unstable w.r.t.\ each other. To regularize the
optimization process, one might want to relax the optimal solution by including
all the solutions located in its vicinity (in a topology we define shortly). A
natural way to define such a topology is to utilize the level
surfaces of the cost function $R(\cdot, \cdot)$ itself! The method proposed
by~\citet{conf/isit/Buhmann10} suggests the following definition of the
approximation set.
\begin{definition}[\citet{conf/isit/Buhmann10}]
\label{def:approximation_set}
For a given real number $\gamma \ge 0$, an approximation set is defined as follows:
\begin{equation}
\mathcal{C}_\gamma (X, R) \coloneqq
\{c \in \mathcal{C} \mid R(c, X) - R^\bot(X) \le \gamma\},
\end{equation}
\nomenclature[D, 01]{$\mathcal{C}_\gamma (X, R)$}{$\gamma$-approximation set\nomnorefeqpage\hfill Def.~\ref{def:approximation_set}}%
and the solutions belonging to it will be called $\gamma$-optimal.
\index{Solution, $\gamma$-optimal}
For brevity of notation, we will drop the parameter(s) $X$ and/or $R$ where it is
clear from the context which dataset and cost function are meant.
\index{Approximation set}
\end{definition}
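In the cost-vector representation from the remark of
Section~\ref{sec:data_generation_model}, a $\gamma$-approximation set is
straightforward to compute; the following sketch (our illustration) will be
reused in later examples:
\begin{verbatim}
import numpy as np

def approximation_set(X, gamma):
    """Indices of gamma-optimal solutions: R(c, X) - R^bot(X) <= gamma."""
    return np.flatnonzero(X - X.min() <= gamma)

X_prime = np.array([3.2, 1.7, 1.9, 4.0, 2.5])
approximation_set(X_prime, 0.0)    # only the global minimizer(s)
approximation_set(X_prime, 0.5)    # a small neighbourhood of the optimum
approximation_set(X_prime, 10.0)   # the whole solution set (gamma large enough)
\end{verbatim}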
\begin{figure}[th!]
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/asc_coding_approximation_1}
\caption{Large $\gamma$: approximation sets are in great agreement, but not informative at all.\\}
\label{fig:asc_illustration-0}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/asc_coding_approximation_2}
\caption{Decreasing $\gamma$: approximation sets get distinguished, and more information is extracted.}
\label{fig:asc_illustration-1}
\end{subfigure}
\\[.5cm]
% \begin{subfigure}[b]{.48\textwidth}
% \includegraphics[width=\linewidth]{figures/ch_generic_approach/asc_coding_approximation_3}
% \caption{Further decreasing $\gamma$: approximation sets get distinguished, and more information is extracted.}
% \label{fig:asc_illustration-2}
% \end{subfigure}
% \hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/asc_coding_approximation_4}
\caption{Small $\gamma$: almost all the information is extracted, but solutions are in poor agreement.}
\label{fig:asc_illustration-3}
\end{subfigure}
\\[.5cm]
\caption{Intuitive illustration of informativeness vs.\ stability.
    Approximation sets are parametrized by $\gamma$. The data inputs $X'$,
    $X''$ and $X'''$ come from the same generative source. Decreasing the
    parameter $\gamma$ extracts more information from the given
    data, but at the same time makes the solutions less stable.}
\label{fig:asc_illustration}
\end{figure}
Properties of the parameter $\gamma$ are crucial for understanding its role.
On the one hand, it is obvious that an infinite $\gamma$ yields the whole set of
feasible solutions:
\[
\left.\mathcal{C}_{\gamma} \right|_{\gamma = \infty} (X)
\equiv \mathcal{C}.
\]
On the other hand, it holds
\[
\left.\mathcal{C}_{\gamma} \right|_{\gamma = 0} (X)
\equiv \mathcal{C}^\bot(X)
\equiv \{c^\bot(X)\},
\]
i.e.\ zero $\gamma$ yields only optimal solutions. Selecting the parameter
$\gamma$ allows one to trade off the stability of the solutions (extreme case: $\gamma =
\infty$) against their informativeness (extreme case: $\gamma = 0$). This raises
a very important question: does there exist a principled way to choose this parameter?
\subsection{Communication and Learning Stability}
\label{sec:communication_learning_stability}
\begin{figure}[bh!]
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/Boolean_Cube_8code}
\caption{}
\label{fig:boolen_cube_vectors_8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/Boolean_Cube_2code}
\caption{}
\label{fig:boolen_cube_vectors_2}
\end{subfigure}
\\[.5cm]
\caption{Placing codebook vectors of length $3$ on a Boolean cube. Case \textbf{(a)}
    is the ``mean'' option, where all eight vertices are used as codebook vectors.
    Case \textbf{(b)} is the ``lean'' option, where some of the vertices serve as
    neighborhoods of the two codebook vectors $000$ and $111$, denoted by the big $0$ and the big $1$.}
\label{fig:boolen_cube_vectors}
\end{figure}
\begin{figure}[th!]
\centering
\begin{subfigure}[b]{.85\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/Boolean_Cube_8code_error}
\caption{High rate ($R_{\text{code}}=1$), but no way to correct the error
(red: sent and received codes).}
\label{fig:boolen_cube_vectors_error_8}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.85\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/Boolean_Cube_2code_error}
\caption{Lower rate ($R_{\text{code}} = 1/3$), correcting one digit error
(red: sent and received codes).}
\label{fig:boolen_cube_vectors_error_2}
\end{subfigure}
\\[.5cm]
\caption{Dealing with a one-digit error. Case \textbf{(a)}
    is the high-rate option with eight codebook vectors, leading to a low
    error-correcting capacity (in fact, no error can be tolerated). Case
    \textbf{(b)} is the lower-rate option with two codebook vectors, leading to a
    higher error-correcting capacity (a one-digit error can be tolerated, a
    two-digit error cannot).}
\label{fig:boolen_cube_vectors_error}
\end{figure}
In this section, we will work with the definitions of
Section~\ref{sec:background_coding}. The approximation set-based approach has clear
analogies in communication, featuring the idea of communication by means of data
and solutions. To illustrate this relation, we will first refer to information
theory and coding. As established by~\citet{shannon:1948, shannon:1963}, all
rates up to the channel capacity are achievable with vanishing error.
\index{Shannon's Channel Coding Theorem}
\newtheorem*{shannon_thm}{Shannon's Channel Coding Theorem}
\begin{shannon_thm}[e.g. Theorem 7.7.1, \citealp{Cover:2006}]
For a discrete memoryless channel, all rates below capacity $C$ are
achievable. Specifically, for every rate $R_{\text{code}} < C$, there exists a sequence of
$(2^{nR_{\text{code}}}, n)$ codes with maximum probability of error $\lambda^{(n)} \to 0$.
Conversely, any sequence of $(2^{nR_{\text{code}}}, n)$ codes with $\lambda^{(n)} \to 0$
must have $R_{\text{code}} \le C$.
\end{shannon_thm}
This important theoretical statement has a non-constructive
proof resting on the idea of random coding with the code length $n$ going to
infinity, and thus it does not provide a practical way of building such codes of
finite length. It turns out that an attempt to design a finite-sized code faces
a trade-off between its error-correcting capability and its rate. An example
of this idea is the simplest Hamming code of length $3$, which we briefly
illustrate due to its importance for the next steps.
\index{Trade-off!Error-correcting capability}
\index{Trade-off!Code rate}
\index{Hamming code}
Figures~\ref{fig:boolen_cube_vectors} and~\ref{fig:boolen_cube_vectors_error} to
some extent explain this trade-off in the simplest possible setting, thus
preparing the reader for introducing the communication channel by means of
datasets. Figure~\ref{fig:boolen_cube_vectors} shows that one can vary the
codebook vector set by, for instance, expanding ``neighborhoods'' of two
vertices $(000)$ and $(111)$ by including all the adjacent vectors, while
Figure~\ref{fig:boolen_cube_vectors_error} demonstrates that although the above
process reduces the code rate from $R_{\text{code}} = 1$ down to
$R_{\text{code}} = 1/3$, it increases its error-correcting capability so that
the code can now tolerate all one-digit errors. One can also imagine an
extreme case (not shown in the figures) of strong, two-digit noise: it is easy to see
that under this condition, reliable, stable communication is possible only
with a \textit{single} codebook vector and \textit{zero rate}
($R_{\text{code}} = 0$). In other words, the code gets less informative, but
more robust.
\index{Code!Rate}
\index{Codebook!Vectors}
\begin{algorithm}[th!]
\caption{Establishing the Communication}\label{alg:communication_establishing}
\KwData{\\
\quad instance of the dataset $X' \in \mathcal{X}$, \\
\quad solution set $\mathcal{C} = \{c\}$, \\
\quad cost function $R(c, X)$, \\
\quad set of transformations $\mathbb{T} = \{\tau\}$, where $\tau \colon \mathcal{X} \to \mathcal{X}$\\
\quad parameter $\gamma$}
\KwResult{established communication scheme}
{Sender and Receiver agree on $R(c, X)$\;}
{Sender and Receiver agree on $X'$\;}
{Sender and Receiver agree on $\mathbb{T}$\;}
{Sender and Receiver agree on $\gamma$\;}
\tcp{Then, a coverage by approximation sets is generated:}
\ForEach{$\tau \in \mathbb{T}$}{
{both Sender and Receiver generate a transformed dataset $\tau \circ X'$\;}
{both Sender and Receiver compute $\gamma$-approximation set
$\mathcal{C}_\gamma(\tau \circ X')$\;}
}
\end{algorithm}
Just as in the coding scenario described above, the learning process can be
viewed as noisy communication, where the model is a \textit{decoder} which tries
to recover the solution to the \textit{true (useful) signal} contained in the
noisy data. Thus, the following rough analogies can be pointed out:
\begin{itemize}
\item The role of codebook vectors is played by solutions $c$ to the optimization
problem $R(c,X)$.
\item The role of errors is played by the noise generating process $PG(\cdot)$
(see~\eqref{eq:data_gen_model}), which injects uncertainty into the data $X$.
\item The role of ``neighborhoods'' of codebook vectors from
Figures~\ref{fig:boolen_cube_vectors_2}
and~\ref{fig:boolen_cube_vectors_error_2} is played by approximation
sets.
\end{itemize}
We now introduce, following~\citet{conf/isit/Buhmann10}, an
artificial communication scenario~(Algorithms~\ref{alg:communication_establishing},
\ref{alg:communication_transmission} and~\ref{alg:communication_decoding}).
We advise the reader to compare the textual explanation with the pictorial one
in Figure~\ref{fig:coding_scheme_cartoon}.
\index{Approximation Set Coding!Communication scenario}
\index{Communication scenario|see{Approximation Set Coding}}
\begin{algorithm}[bh!]
\caption{Encoding and Transmission}\label{alg:communication_transmission}
\KwData{\\
\quad instance of the dataset $X' \in \mathcal{X}$, \\
\quad instance $X'' \in \mathcal{X}$ not known to receiver, \\
\quad solution set $\mathcal{C} = \{c\}$, \\
\quad cost function $R(c, X)$, \\
\quad set of transformations $\mathbb{T} = \{\tau\}$, where $\tau \colon \mathcal{X} \to \mathcal{X}$\\
\quad parameter $\gamma$}
\KwResult{a received message}
{Sender picks a $\tau_{\text{send}} \in \mathbb{T}$\;}
{Sender encodes it by generating
    a transformed dataset $\tau_{\text{send}} \circ X'$\;}
{Sender sends $\tau_{\text{send}} \circ X'$\;}
\tcp{Channel noise comes in the next line:}
{Channel introduces an error by replacing $X'$ with $X''$, i.e.\ by applying $\tau_{\text{send}}$ to $X''$\;}
{Receiver receives $\tau_{\text{send}} \circ X''$ without knowing either $\tau_{\text{send}}$ or $X''$\;}
\end{algorithm}
\paragraph{Encoding step (Algorithms~\ref{alg:communication_establishing}
\index{Approximation Set Coding!Encoding and transmission}
and~\ref{alg:communication_transmission};
Fig.~\ref{fig:coding_scheme_cartoon_1})} Very briefly, the Sender-Receiver
analogy consists in distinguishing individual solutions by means of the noisy
datasets: Sender sends a message (defined below) encoded by the first dataset,
and Receiver receives this message, but perturbed by means of the second
dataset. More precisely, assuming the generative process $PG$
(see~\eqref{eq:data_gen_model}), the transmitted ``messages'' are the
transformations $\tau \in \mathbb{T}$ of the datasets, so
\begin{equation}
\tau \in \mathbb{T}, \quad \tau \colon \mathcal{X} \to \mathcal{X}.
\end{equation}
\nomenclature[D, 01a]{$\mathbb{T}$}{set of messages}%
\nomenclature[D, 01b]{$\tau \in \mathbb{T}$}{message}%
Now, both Sender and Receiver agree on the dataset $X'$, which will
play the role of the encoding ``benchmark''. Sender then picks a
transformation $\tau_{\text{send}}$ and encodes the message by
applying it to $X'$:
\begin{equation}
X_{\text{send}} \coloneqq \tau_{\text{send}} \circ X',
\end{equation}
\nomenclature[D, 01c]{$X', X''$}{two instances (ASC scenario)}%
\nomenclature[D, 01da]{$\tau_{\text{send}}$}{message sent}%
and sends it out. Remember that Receiver does not know $\tau_{\text{send}}$, but knows
the ``codebook'' datasets $\{\tau \circ X'\}_{\tau \in \mathbb{T}}$ and hence their approximation sets.
\paragraph{Hypothetical noise-free transmission}
If there were no noise, Receiver, having obtained $X_\text{received} =
X_{\text{send}}$ and knowing both $\mathbb{T}$ and $X'$, could recover
$\tau_{\text{send}}$ simply by enumeration:
\begin{equation}
\label{eq:acs_brute_force_decoding}
\hat \tau \coloneqq \arg \max_{\tau \in \mathbb{T}} \Ind\{X_{\text{received}} =
\tau \circ X' \}.
\end{equation}
\nomenclature[D, 01db]{$\hat \tau$}{message decoded}%
\paragraph{Actual noisy transmission (Algorithm~\ref{alg:communication_transmission};
Fig.~\ref{fig:coding_scheme_cartoon_2})}
However, the noise is injected by replacing
$X'$ by $X''$, which is a noisy version of the initial dataset:
\begin{equation}
X_{\text{received}} \coloneqq \tau_{\text{send}} \circ X'',
\end{equation}
which makes it impossible for Receiver to perfectly match the obtained message to
any of the ``benchmark'' ones as in Eq.~\eqref{eq:acs_brute_force_decoding}.
\parsec
\myremark It is important to realize that there are two manifestations of noise
in this scenario. One is the original source of noise generated by $PG(\cdot)$, resulting
in the replacement of $X'$ by $X''$. The other is the transmission error caused by the
difference between the sent and received messages.
\paragraph{Decoding (Algorithm~\ref{alg:communication_decoding}; Fig.~\ref{fig:coding_scheme_cartoon_3} and
\ref{fig:coding_scheme_cartoon_4})}
Just as the Hamming decoder decodes the received vector by finding
the closest codebook vector, our Receiver tries to find the closest codebook
dataset among all possible datasets $\{\tau \circ X'\}_{\tau \in \mathbb{T}}$.
Closeness is measured as the size of the intersection of their approximation sets:
\begin{equation}
\hat \tau = \arg \max_{\tau \in \mathbb{T}} \,\,
\bigl|
\mathcal{C}_\gamma(\tau \circ X') \cap \mathcal{C}_\gamma(\tau_{\text{send}} \circ X'')
\bigr|,
\end{equation}
thus, approximation sets play the role of parity check regions here.
\index{Parity check}
\begin{algorithm}[t]
\caption{Decoding}\label{alg:communication_decoding}
\KwData{\\
\quad instance of the dataset $X' \in \mathcal{X}$, \\
\quad instance $X'' \in \mathcal{X}$ not known to receiver, \\
\quad solution set $\mathcal{C} = \{c\}$, \\
\quad cost function $R(c, X)$, \\
\quad set of transformations $\mathbb{T} = \{\tau\}$, where $\tau \colon \mathcal{X} \to \mathcal{X}$\\
\quad parameter $\gamma$}
\KwResult{Transformation $\hat \tau$ which is an estimate of $\tau_{\text{send}}$}
{Receiver computes a $\gamma$-approximation set of the received dataset:
$\mathcal{C}_\gamma(\tau_{\text{send}} \circ X'')$\;}
{Receiver maximizes its overlap with known $\gamma$-approximation sets:
\begin{equation}\label{eq:asc_decoding_intersection}
\hat \tau = \arg \max_{\tau \in \mathbb{T}} \,\,
\bigl|
\mathcal{C}_\gamma(\tau \circ X') \cap \mathcal{C}_\gamma(\tau_{\text{send}} \circ X'')
\bigr|
\end{equation}}
\index{Approximation Set Coding!Decoding}
\end{algorithm}
\index{Hamming code!Decoding}
\index{Hamming distance}
It is crucially important to realize that this decoding rule is very similar to
that of the Hamming code (and thus very natural): in Hamming decoding,
closeness is measured by \textit{minimizing} the Hamming distance between
the received vector and the codebook vectors, which is the same as
\textit{maximizing} the intersection between them:
\begin{align}
\hat{\mathbf{x}} &=
\arg \min_{\mathbf{x} \in \mathbb{B}^3} \;
\|\textbf{x}_\text{received} \oplus \textbf{x} \| \notag \\
&= \arg \min_{\mathbf{x} \in \mathbb{B}^3} \; \bigl( n -
\|\textbf{x}_\text{received} \cap \textbf{x}\| \bigr) \notag \\
&= \arg \max_{\mathbf{x} \in \mathbb{B}^3} \;
\|\textbf{x}_\text{received} \cap \textbf{x}\|.
\end{align}
\nomenclature[A, 00]{$\mathbb{B}^3$}{Boolean cube}%
\nomenclature[A, 00a]{$\oplus$}{sum modulo $2$}%
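The equivalence of the two decoding rules is easy to verify explicitly for the
length-$3$ code of Figure~\ref{fig:boolen_cube_vectors_2}; the following sketch
(our illustration) decodes every possible received vector by maximizing the
number of agreeing positions:
\begin{verbatim}
from itertools import product

codebook = {"0": (0, 0, 0), "1": (1, 1, 1)}   # the two codebook vectors

def decode(received):
    # Minimizing the Hamming distance is equivalent to maximizing the
    # number of agreeing positions with a codebook vector.
    agree = lambda code: sum(r == c for r, c in zip(received, code))
    return max(codebook, key=lambda m: agree(codebook[m]))

for received in product((0, 1), repeat=3):
    print(received, "->", decode(received))   # every one-digit error is corrected
\end{verbatim}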
\paragraph{Decoding error and its probability}
When $\hat \tau \ne \tau_\text{send}$, we say that a decoding error occurs.
Obviously, the noise in our channel
(Algorithm~\ref{alg:communication_transmission}), acting via $PG(\cdot | X^0)$,
is the reason for that. Translating the robust optimization problem into a robust
decoding problem, we will now answer, following~\citet{conf/isit/Buhmann10}, a
natural question: how can we bound the probability of this error?
We are interested in bounding the probability
\begin{equation}
\Prob(\hat \tau \ne \tau_\text{send} | \tau_\text{send}).
\end{equation}
\begin{figure}[th!]
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_1}
\caption{}
\label{fig:coding_scheme_cartoon_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_2}
\caption{}
\label{fig:coding_scheme_cartoon_2}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_3}
\caption{}
\label{fig:coding_scheme_cartoon_3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_4}
\caption{}
\label{fig:coding_scheme_cartoon_4}
\end{subfigure}
\\[.5cm]
\caption{Process of correct decoding by approximation sets in the solution
space: \textbf{(a)} $X'$ is set and sender sends $\tau_4$; \textbf{(b)} due
to noise which replaces $X'$ by $X''$, all the minimizers move around (red
to blue) in the solution space; \textbf{(c)} the received solution is
surrounded by its approximation set (blue) and overlaps are considered;
\textbf{(d)} decoded solution (dark red) happens to be $\tau_4$ which was initially
sent (correct decoding).}
\label{fig:coding_scheme_cartoon}
\end{figure}
Before we proceed, we will denote the intersection in~\eqref{eq:asc_decoding_intersection}
as follows:
\begin{equation}
\Delta \mathcal{C}_\gamma^\tau
\coloneqq \mathcal{C}_\gamma(\tau \circ X')
\cap \mathcal{C}_\gamma(\tau_{\text{send}} \circ X'').
\end{equation}
Due to the union bound, it holds that \index{Union bound}
\begin{equation}
\Prob(\hat \tau \ne \tau_\text{send} | \tau_\text{send})
\le \sum_{\tau \in \mathbb{T}} \Prob
\bigl(
|\Delta \mathcal{C}_\gamma^\tau| \ge |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}| \bigm| \tau_\text{send}
\bigr),
\end{equation}
i.e.\ for a decoding error to occur, one has to encounter an approximation set
which is yielded by a wrong transformation, but happens to be closer to the
received approximation set (this is illustrated in
Figure~\ref{fig:coding_scheme_cartoon_7}). The last bound can be rewritten via
the indicator function:
\begin{equation}
\Prob(\hat \tau \ne \tau_\text{send} | \tau_\text{send})
\le \sum_{\tau \in \mathbb{T}} \Expct_{PG}
\bigl[
\Ind\{|\Delta \mathcal{C}_\gamma^\tau| \ge |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|\} \bigm| \tau_\text{send}
\bigr],
\end{equation}
where the expectation is taken w.r.t. the problem generation process $X', X''
\sim PG(\cdot | X^0)$. We further utilize the monotonicity of the $\log$ function:
\begin{equation}
\Ind\{|\Delta \mathcal{C}_\gamma^\tau| \ge |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|\}
= \Ind\{\log |\Delta \mathcal{C}_\gamma^\tau| \ge \log |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|\}
\end{equation}
and the fact that $\Ind\{x \ge 0\} \le \exp(x)$, applied to $x = \log |\Delta \mathcal{C}_\gamma^\tau| - \log |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|$, to arrive at the following:
\begin{equation}
\Expct_{PG}
\Bigl(
\Ind\{|\Delta \mathcal{C}_\gamma^\tau| \ge |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|\} \Bigm| \tau_\text{send}
\Bigr)
\le
\frac{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}%
{|\mathbb{T}| \; |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|},
\end{equation}
where the product in the numerator comes from the fact that, under our
generation process, the data instances $X'$ and $X''$ are independent given
$X^0$, see~\eqref{eq:pg_independence}.
In the spirit of~\citet{shannon:1948}, we use the random coding argument here:
all the $\tau$ are independent and identically distributed, hence the above can
be rewritten as:
\begin{equation}
\Prob(\hat \tau \ne \tau_\text{send} | \tau_\text{send}) \le (|\mathbb{T}| - 1)
\exp(- I_\gamma(\tau_\text{send}, \hat \tau)),
\end{equation}
where
\begin{equation}
I_\gamma(\tau_\text{send}, \hat \tau) \coloneqq \Expct \log
\Bigl(
\frac{|\mathbb{T}| \; |\Delta \mathcal{C}_\gamma^{\tau_\text{send}}|}%
{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}
\Bigr).
\end{equation}
\paragraph{Optimizing approximation parameter $\boldsymbol\gamma$}
At this point, we can determine the optimal approximation threshold $\gamma^*$:
it is chosen as
\begin{equation}\label{eq:asc_best_gamma}
\gamma^* = \arg \max_{\gamma \ge 0} I_\gamma(\tau_\text{send}, \hat \tau).
\end{equation}
\nomenclature[D, 01]{$\gamma^*$}{optimal $\gamma$}%
\begin{figure}[th!]
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_5}
\caption{}
\label{fig:coding_scheme_cartoon_5}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_6}
\caption{}
\label{fig:coding_scheme_cartoon_6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/coding_scheme_7}
\caption{}
\label{fig:coding_scheme_cartoon_7}
\end{subfigure}
\\[.5cm]
\caption{Decreased $\gamma$ and increased code rate lead to incorrect
    decoding: \textbf{(a)} same setting (i.e.\ same noise) as in
    Figure~\ref{fig:coding_scheme_cartoon}, but with more codebook
    vectors added; \textbf{(b)} due to the noise which replaces $X'$ by $X''$, all the
    minimizers move around (red to blue) in the solution space; \textbf{(c)}
    the decoded solution (dark red) happens to be wrong (incorrect decoding).}
\label{fig:coding_scheme_cartoon_incorrect}
\end{figure}
In practical applications, and in the spirit of Shannon's random coding
argument, it is often assumed that $\tau_\text{send} = \mathrm{Id}$, i.e.
one computes
\begin{equation}
I_\gamma(\tau_\text{send}, \hat \tau) \coloneqq \Expct \log
\Bigl(
\frac{|\mathbb{T}| \; |\Delta \mathcal{C}_\gamma(X', X'')|}%
{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}
\Bigr),
\end{equation}
where
\begin{equation}
\Delta \mathcal{C}_\gamma \coloneqq \mathcal{C}_\gamma(X')
\cap \mathcal{C}_\gamma(X'').
\end{equation}
\nomenclature[D, 01]{$\Delta \mathcal{C}_\gamma$}{intersection of approximation sets}%
In practice, one often replaces $|\mathbb{T}|$ with the cardinality of the full
solution set~\citep{morteza12}, reflecting a specific choice of possible
transformations\footnote{Since the proof of the error probability bound rests on the
argument of random coding and the codebook messages are chosen randomly, all
the considerations remain valid.}:
\begin{equation}\label{eq:asc_mutual_information_formula}
I_\gamma(\tau_\text{send}, \hat \tau) \coloneqq \Expct \log
\Bigl(
\frac{|\C| \; |\Delta \mathcal{C}_\gamma(X', X'')|}%
{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}
\Bigr),
\end{equation}
\nomenclature[D, 02a]{$I_\gamma$}{ASC $\gamma$-score}%
\begin{definition}
\label{def:asc_score}
We will call the above quantity the ASC $\gamma$-score, and its maximum over
$\gamma$ simply the ASC score, or Approximation Capacity (AC):
\begin{equation}
C \coloneqq \max_\gamma I_\gamma.
\end{equation}
\nomenclature[D, 02ba]{$C$}{approximation capacity}%
\index{ASC score}
\index{Approximation capacity}
\end{definition}
\myremark It is interesting to note that the semantics of $I_\gamma(\tau_\text{send}, \hat
\tau)$ is surprisingly similar to that of mutual information
(Definition~\ref{def:inf_theory_mutual_information}). First, both are related to
the maximum rate of a certain channel. Second, both can be decomposed in quite a
similar way: recall from~\eqref{eq:background_mi_decomposition} that, for random
variables $X$ and $Y$,
\begin{equation*}
\MI(X, Y) = H(X) + H(Y) - H(X, Y).
\end{equation*}
In the same way, one may observe that~\eqref{eq:asc_mutual_information_formula}
can easily be decomposed into three logarithmic terms:
\begin{align}
I_\gamma(\tau_\text{send}, \hat \tau) = \overbrace{- \Expct \log
\Bigl(
\frac{|\mathcal{C}_\gamma(X')|}{ |\C| }
\Bigr)}^{\text{single entropy}}
%
&\overbrace{- \Expct \log
\Bigl(
\frac{|\mathcal{C}_\gamma(X'')|}{|\C|}
\Bigr)}^{\text{single entropy}} \notag \\
%
&\underbrace{+\Expct \log
\Bigl(
\frac{|\Delta \mathcal{C}_\gamma(X', X'')|}{ |\C| }
\Bigr)}_{\text{joint entropy}},
\end{align}
where the first two terms can be viewed as entropies of uniform
distributions over the individual approximation sets, and the third term corresponds to
the joint entropy.
\myremark In practical applications, when there are only two data points $X'$, $X''$ and
no information about $PG(\cdot)$ is available, one can use an empirical version
of~\eqref{eq:asc_mutual_information_formula}
\begin{equation}\label{eq:asc_mutual_information_formula_wo_expct}
\hat I_\gamma(X', X'') \coloneqq \log
\Bigl(
\frac{|\C| \; |\Delta \mathcal{C}_\gamma(X', X'')|}%
{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}
\Bigr),
\end{equation}
\nomenclature[D, 02b]{$\hat I_\gamma$}{empirical ASC $\gamma$-score}%
as an estimator obtained by dropping the expectation sign. More on this will be given in
Chapter~\ref{ch:mst} when describing the application.
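For concreteness, the empirical
score~\eqref{eq:asc_mutual_information_formula_wo_expct} can be evaluated on a
grid of $\gamma$ values; the following Python sketch (our illustration, using
the cost-vector representation and repeating the helper from the earlier sketch
for self-containment) selects an empirical $\gamma^*$:
\begin{verbatim}
import numpy as np

def approximation_set(X, gamma):
    return np.flatnonzero(X - X.min() <= gamma)

def empirical_asc_score(X1, X2, gamma):
    """log( |C| * |intersection| / (|C_gamma(X')| * |C_gamma(X'')|) )."""
    C1 = set(approximation_set(X1, gamma).tolist())
    C2 = set(approximation_set(X2, gamma).tolist())
    overlap = len(C1 & C2)
    if overlap == 0:
        return -np.inf                 # empty overlap: no information transmitted
    return np.log(len(X1) * overlap / (len(C1) * len(C2)))

rng = np.random.default_rng(1)
X1 = rng.normal(10.0, 2.0, size=100)   # two hypothetical noisy instances
X2 = rng.normal(10.0, 2.0, size=100)

gammas = np.linspace(0.05, 8.0, 160)
scores = [empirical_asc_score(X1, X2, g) for g in gammas]
gamma_star = gammas[int(np.argmax(scores))]    # empirical analogue of gamma^*
\end{verbatim}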
\section{Another View: Similarity Approach}
\label{sec:similarity_approach_intro}
In this section, we briefly review, with suitable adaptation of
notation\footnote{The two interpretations of the same idea have been
developed in parallel, hence there are two consistent systems of notation.},
another view on finding a robust approximation, called the
\textit{similarity approach}, which was introduced and developed, e.g.,
in~\citep{Sramek:PhD,Proeger:PhD,jcss:2017}. Yielding the same quantity as
in~\eqref{eq:asc_best_gamma}, this approach arose from a specific interpretation
of the ASC~\citep{conf/isit/Buhmann10}.
\index{Similarity approach}
For the sake of integration into the thesis, in this section we use the additive
approximation set notation (i.e.\ the same as everywhere in this thesis),
while~\citet{Sramek:PhD} used a multiplicative one\footnote{For the definition of
multiplicative approximation sets, refer to e.g.~\citep{Sramek:PhD} or
\citep{jcss:2017}.}. Assume there are two \textit{data instances} $X'$ and
$X''$, coming from the same source~$PG(\cdot | X^0)$,
see~\eqref{eq:data_gen_model}. \citet{Sramek:PhD} identifies two cases:
\begin{itemize}
\item If the generation process $PG$ is very noisy, resulting in two
dissimilar instances $X'$ and $X''$, it is obvious that the intersection of the
two approximation sets ${\mathcal{C}_\gamma}(X')\cap
{\mathcal{C}_\gamma}(X'')$ will contain some solutions when $\gamma$ is large
enough. \citet{Sramek:PhD} calls such solutions \emph{expected
due to $\gamma$}. At this point, the reader can start building analogies to
the above by revisiting Figure~\ref{fig:asc_illustration-0}, where large
approximation sets yield a lot of solutions in the intersection.
\item On the other hand, if the two instances $X'$ and $X''$ are more similar,
which is the case for a low-noise $PG$, the intersection
${\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')$, taken at the same
$\gamma$ value, will contain, in addition to the above-mentioned (expected due
to $\gamma$) ones, some solutions due to the similarity of the instances.
\citet{Sramek:PhD} calls them \emph{unexpected}. In the terminology coined later~\citep{jcss:2017},
they are called \emph{expected due to similarity}.
\index{Solutions!Expected due to similarity}
\end{itemize}
The point of introducing such cases is the following: these latter
solutions, i.e.\ the ones expected due to similarity, are likely to be
good choices for a possible test instance $X'''$ that comes from the same source.
The goal is thus shifted to finding the $\gamma$ that maximizes the ratio of the
number of solutions that are expected due to similarity over the size of the
intersection (compare to the ASC approach~\eqref{eq:asc_best_gamma}; the
comparison will be summarized in the conclusion,
Section~\ref{sec:gen_appch_conclusion}). To fulfill this task, several
definitions are required. Figure~\ref{fig:intersection_types} illustrates these
definitions.
\begin{definition}[Feasible approximation set] \label{def:feasible_as}
A set of solutions $F\subseteq\mathcal{C}$ is called a
\emph{feasible approximation set} if there exists some instance $\tilde X$ and
some number $\tilde \gamma$ such that $F$ is the $\tilde \gamma$-approximation
set of $\tilde X$.
\end{definition}
\begin{definition}[Expected intersection size due to $\gamma$] \label{def:intersection_due_to_gamma}
Given $\gamma$ and the sizes $|{\mathcal{C}_\gamma}(X')| =: k(\gamma)$ and
$|{\mathcal{C}_\gamma}(X'')| =: l(\gamma)$, let
$es(\gamma,k(\gamma),l(\gamma))$ denote the expected size of the intersection
of two feasible approximation sets $A$ and $B$ of sizes $k(\gamma)$ and
$l(\gamma)$, respectively.
\nomenclature[D, 03a]{$es(\gamma,k(\gamma),l(\gamma))$}{expected intersection\nomnorefeqpage\hfill Def.~\ref{def:intersection_due_to_gamma}}%
\end{definition}
\begin{definition}[Expected intersection due to similarity] \label{def:intersection_due_to_sim}
Given $\gamma$ and the sizes $|{\mathcal{C}_\gamma}(X')| =: k(\gamma)$ and
$|{\mathcal{C}_\gamma}(X'')| =: l(\gamma)$, if the intersection of
${\mathcal{C}_\gamma}(X')$ and ${\mathcal{C}_\gamma}(X'')$ is larger than the
expected size $es(\gamma,k(\gamma),l(\gamma))$, then it contains some
solutions that are expected due to similarity, and we will denote their number by
$sim(\gamma)$.
\nomenclature[D, 03b]{$sim(\gamma)$}{size expected due to similarity\nomnorefeqpage\hfill Def.~\ref{def:intersection_due_to_sim}}%
\end{definition}
%
%
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.55\textwidth}
\label{fig:approx_sets--example--1}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/approx_sets--schematic--1}
\caption{Placing a solution $c \in \C$}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.55\textwidth}
\label{fig:approx_sets--example--3}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/approx_sets--schematic--3}
\caption{Intersection $\mathcal{C}_{\gamma}(X')\cap \mathcal{C}_{\gamma}(X'')$}
\end{subfigure}
\\[.5cm]
\caption{
Approximation sets for the instances $X'$ and $X''$. By $c^\bot(X)$ we
denote the solution whose cost is minimum in $X$. \textbf{(a)}: We
place each solution $c \in \mathcal{C}$ at position
$(\gamma',\gamma'')$, where $\gamma'=R(c, X') - R^\bot(X')$ and
$\gamma''=R(c, X'') - R^\bot(X'')$. \textbf{(b)}: Example of
intersection of approximation sets $\mathcal{C}_{\gamma}(X')\cap
\mathcal{C}_{\gamma}(X'')$ (this view on approximation sets was
originally suggested by Tobias Pröger~\citep[cf.][]{jcss:2017}, figure labels
adapted for additive notation).}
\label{fig:approx_sets--schematic}
\end{figure}
%
Thus we have
\begin{equation}
|{\mathcal{C}_\gamma}(X') \cap {\mathcal{C}_\gamma}(X'')|=sim(\gamma)+
es(\gamma,k(\gamma),l(\gamma)),
\end{equation}
and, to maximize the probability that a uniformly randomly
chosen solution from the intersection is stable, we want to find the value of $\gamma$ that maximizes
$\frac{sim(\gamma)}{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))}$. The following equalities between
the maximization objectives hold:
\begin{align}
\arg \max_{\gamma>0} &\;\; \frac{sim(\gamma)}{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))} \notag \\
&\qquad= \arg \max_{\gamma > 0} \; \Bigl(
1 - \frac{es(\gamma,k(\gamma),l(\gamma))}{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))}
\Bigr) \notag \\
&\qquad= \arg \min_{\gamma > 0} \;\; \frac{es(\gamma,k(\gamma),l(\gamma))}{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))}
\notag \\
&\qquad= \arg \max_{\gamma > 0} \;\; \frac{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))}{es(\gamma,k(\gamma),l(\gamma))},
\end{align}
hence we can reformulate (for the sake of clarity) the objective of the similarity-based
approach as maximizing the value
\begin{align}
\label{eq:similarity}
S_\gamma(X',X'')
\coloneqq \frac{|{\mathcal{C}_\gamma}(X') \cap {\mathcal{C}_\gamma}(X'')|}{es(\gamma,k(\gamma),l(\gamma))}
= \frac{sim(\gamma)+es(\gamma,k(\gamma),l(\gamma))}{es(\gamma,k(\gamma),l(\gamma))}.
\end{align}
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.49\textwidth}
\label{fig:intersection_due_to_gamma}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/intersection_due_to_gamma}
\caption{Two non-similar approximation sets.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\textwidth}
\label{fig:intersection_due_to_sim}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/intersection_due_to_sim}
\caption{Two similar approximation sets}
\end{subfigure}
\\[.5cm]
\caption{Illustration of ideas contained in Definitions~\ref{def:feasible_as},
\ref{def:intersection_due_to_gamma} and \ref{def:intersection_due_to_sim}: as
opposed to randomly chosen approximation sets \textbf{(a)}, two related
(similar) approximation sets \textbf{(b)} have a $sim(\gamma)$ component of
the intersection, which we naturally seek to maximize.}
\label{fig:intersection_types}
\end{figure}
\paragraph{Problem-based instance similarity}
In equation~\eqref{eq:similarity}, the expected size of the intersection is
w.r.t.\ the problem-specific probability distribution over all feasible
approximation sets of sizes $|{\mathcal{C}_\gamma}(X')|$ and
$|{\mathcal{C}_\gamma}(X'')|$, respectively. However, this distribution is hard
to estimate, so \citet{Sramek:PhD} introduced a problem-based instance
similarity, which approximates the denominator by the expectation over a
uniformly chosen pair of feasible approximation sets.
\index{Similarity approach!Problem-based similarity}
%
\begin{definition}[Problem-based instance similarity]
Let $X'$ and $X''$ be two input instances of a combinatorial optimization
problem $\mathcal{P}$ with solution space $\mathcal{C}$. For a given $\gamma$,
let ${\mathcal{C}_\gamma}(X')$ and ${\mathcal{C}_\gamma}(X'')$ be $\gamma$-approximation sets for $X'$
and $X''$. Further, let $\mathcal{F}_k$ denote the set of all feasible
approximation sets of size $k$, i.e., the set of all such sets $F\subseteq
\mathcal{C}$ of size $k$ for which there exists an instance $\tilde X$ and
a value $\tilde \gamma$ such that $F=\mathcal{C}_{\tilde \gamma}(\tilde X)$. Then, the expression
\begin{equation}
\label{eq:generic_similarity}
S_\gamma(X',X'') = \frac{|{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')|}
{\mathop{\mathbb{E}}_{A\in \mathcal{F}_{|{\mathcal{C}_\gamma}(X')|}, B\in
\mathcal{F}_{|{\mathcal{C}_\gamma}(X'')|}}{\big[|A\cap B|\big]}}
\end{equation}
\nomenclature[D, 03c]{$S_\gamma(X',X'')$}{instance $\gamma$-similarity}%
\nomenclature[D, 04]{$\mathcal{F}_k$}{all feasible approximation sets}%
is the \emph{similarity of $X'$ and $X''$ at value $\gamma$} (with respect to
the optimization problem $\mathcal{P}$), and the expression
\begin{equation}
\label{def:S}
S(X',X'') \coloneqq \max_\gamma S_\gamma(X',X'')
\end{equation}
\nomenclature[D, 03d]{$S(X',X'')$}{instance similarity}%
is the \emph{similarity of $X'$ and $X''$} with respect to the optimization
problem $\mathcal{P}$.
\end{definition}
%
Thus, the similarity-based approach (in the following referred to simply as the
``similarity'' approach) works as follows. First, we compute the value of $\gamma$
that maximizes the similarity
\begin{align}\label{eq:similarity_formula}
S_\gamma(X',X'') = \frac{|{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')|}
{\mathop{\mathbb{E}}_{A\in \mathcal{F}_{|{\mathcal{C}_\gamma}(X')|}, B\in
\mathcal{F}_{|{\mathcal{C}_\gamma}(X'')|}}{\big[|A\cap B|\big]}},
\tag{\ref{eq:generic_similarity}}
\end{align}
where the expectation is w.r.t.\ the uniform probability distribution over the
elements in $\mathcal{F}_{|{\mathcal{C}_\gamma}(X')|}$ and $\mathcal{F}_{|{\mathcal{C}_\gamma}(X'')|}$,
respectively. We then return a solution from ${\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')$
uniformly at random.
However, there are two practical issues with the procedure shown above: a)~it is
not always clear how to directly optimize $\gamma$ for the value
of~\eqref{eq:generic_similarity}; and b)~sampling from the intersection of the
corresponding $\gamma$-approximation sets uniformly at random might be
difficult. Despite all that, the similarity approach can always be applied in
cases where one can provide all the steps of Algorithm~\ref{alg:similarity}.
\medskip
\begin{algorithm}[ht!]
\caption{Pipeline for Similarity Approach (Section~\ref{sec:similarity_approach_intro})}
\label{alg:similarity}
{Determine the domains $\mathcal{F}_k$ of feasible approximation sets of size
$k$.}
{Provide a mathematical analysis or an algorithm $ALG_\mathbb{E}$ that
computes the expected size of the intersection of two approximation sets of
given sizes $k$ and $l$.}
{Provide an algorithm $ALG_\cap$ that computes the size of the intersection
${\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')$, given $\gamma$ and
two instances $X'$ and $X''$.}
{Find $\gamma^*$ that maximizes the similarity $S_\gamma(X',X'')$, using
$ALG_\mathbb{E}$ and $ALG_\cap$.}
{Provide an algorithm $ALG_\text{rand}$ that picks a uniform random solution
from the intersection $\C_{\gamma^*}(X')\cap \C_{\gamma^*}(X'')$.}
\end{algorithm}
\medskip
In order to fulfill these tasks, one can use several tools provided below. It is
important to notice that these theorems close the
gap between the ASC formulation~\eqref{eq:asc_mutual_information_formula} and the
similarity approach formulation~\eqref{eq:similarity_formula}.
\begin{theorem}[\citealp{Sramek:PhD}]
\label{thm:simple}
Let $\mathcal{P} = (\mathcal{X}, \mathcal{C}, R)$
(see~Section~\ref{sec:optimization_problem_description}) be an optimization
problem with the property that for any subset $F$ of the set of all feasible
solutions $\mathcal{C}$ there exists an instance $\tilde X \in \mathcal{X}$
and a value $\tilde \gamma$ such that $\mathcal{C}_{\tilde \gamma}(\tilde
X)=F$. Then, the similarity of two instances $X', X''\in\mathcal{X}$ at value
$\gamma$ is
\begin{align}
\label{eq:simple}
S_\gamma(X',X'')=\frac{|\mathcal{C}||{\mathcal{C}_\gamma}(X') \cap {\mathcal{C}_\gamma}(X'')|}
{|{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}.
\end{align}
\end{theorem}
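Under the (admittedly strong) premise of Theorem~\ref{thm:simple}, the
similarity $S_\gamma$ is exactly the exponential of the empirical ASC
$\gamma$-score~\eqref{eq:asc_mutual_information_formula_wo_expct}, so both
criteria select the same $\gamma^*$; in code (our sketch, same representation as
in the earlier examples) this reads:
\begin{verbatim}
import numpy as np

def approximation_set(X, gamma):
    return np.flatnonzero(X - X.min() <= gamma)

def similarity(X1, X2, gamma):
    """S_gamma under Theorem thm:simple:
    |C| * |intersection| / (|C_gamma(X')| * |C_gamma(X'')|)."""
    C1 = set(approximation_set(X1, gamma).tolist())
    C2 = set(approximation_set(X2, gamma).tolist())
    return len(X1) * len(C1 & C2) / (len(C1) * len(C2))

# Maximizing similarity(...) over gamma is equivalent to maximizing
# log(similarity(...)), i.e. the empirical ASC gamma-score.
\end{verbatim}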
However, as \citet{Sramek:PhD} notes, there is an issue: not every subset
$F\subseteq\mathcal{C}$ is a feasible approximation set, and there is still no general
algorithm for computing the expected size of the intersection. The following
chain of theorems provides some approximation guarantees for the value of~\eqref{eq:simple}.
\begin{theorem}[\citealp{Sramek:PhD}]
\label{thm:bound}
Let $\mathcal{P} = (\mathcal{X}, \mathcal{C}, R)$ be an optimization problem. If
$|{\mathcal{C}_\gamma}(X')|=|{\mathcal{C}_\gamma}(X'')|$ for a given $\gamma$, then
\begin{align}
\label{eq:bound}
S_\gamma(X',X'') \leq \frac{|\mathcal{C}||{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')|}
{|{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}.
\end{align}
\end{theorem}
\begin{theorem} [\citealp{Sramek:PhD}]
\label{thm:approx}
Let $A$ be a constant such that for each feasible solution $c$ of some
optimization problem $\mathcal{P} = (\mathcal{X}, \mathcal{C}, R)$ it holds that
$|\{F\in \mathcal{F}_k | c\in F\}| \leq A k|\mathcal{F}_k|/|\mathcal{C}|$. Then,
\begin{align}
S_\gamma(X',X'')\geq\frac{|\mathcal{C}||{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')|}
{A |{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}.
\end{align}
\end{theorem}
\begin{theorem} [\citealp{Sramek:PhD}]
\label{thm:worst_case}
Let $\mathcal{P} = (\mathcal{X}, \mathcal{C}, R)$ be an optimization problem.
Then,
\begin{align}
S_\gamma(X',X'')\geq\frac{|{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')|}{|{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}.
\end{align}
\end{theorem}
%
\myremark This shows that the step of deriving the appropriate specific formula or
algorithm to calculate the expected size of the intersection is a necessary
component of the approach, unless it is possible to show that for a concrete
problem the upper bound is sufficient. We will speculate more on that in the
conclusion to this chapter (Section~\ref{sec:gen_appch_conclusion}).
\section{Proof-of-Concept Prototypic Example}
\label{sec:proof_of_concept}
\index{Prototypic example!For ASC}
Previously, in Section~\ref{sec:asc_original}, we introduced a method of
solution regularization by ASC, and later, in
Section~\ref{sec:similarity_approach_intro}, we gave a thorough overview of an
analogous approach called instance similarity. While they stem from completely
different roots, it can easily be seen that they both aim at choosing an optimal
approximation set width $\gamma$ in the same way. Specifically, we seek to
optimize
\begin{equation}\label{eq:similarity_maximization_objective}
\gamma^* = \arg \max_{\gamma >0} \frac{|{\mathcal{C}_\gamma}(X') \cap {\mathcal{C}_\gamma}(X'')|}
{|{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}
\end{equation}
\myremark Note that this equation is
\textit{not} identical to either its ASC
version~\eqref{eq:asc_mutual_information_formula} or its similarity-based
version~\eqref{eq:similarity_formula}, although it yields the same optimization goal;
we will briefly revisit the technical differences, such as the presence of the logarithm,
later in the conclusion.
One of the contributions of this thesis is to present an abstract
proof-of-concept model for prototypical combinatorial optimization problems,
which allows us to experimentally demonstrate the advantages of the approximation set-based
approaches. We will mostly investigate experimentally how the methods of
Sections~\ref{sec:asc_original} and~\ref{sec:similarity_approach_intro} perform
on this model.
\subsection{The Example Setting and Terminology}
We expect the approximation set-based methods to exceed the performance of other
optimization methods when the set of solutions that have stable cost over all or
most instances is large enough not to be completely hidden in the noise.
%
To highlight the potential of our approach, we consider an uncertain
minimization problem $(\mathcal{X}, \mathcal{C}, R)$ in which the solution space
$\mathcal{C}$ is partitioned into two sets $\sgood$ and $\sbad$ of sizes $\g$
and $\b$, respectively, which contain the \good\ and the \bad\ solutions,
respectively. Without loss of generality we assume that
\begin{align}
\C &= \{c_i\}_{i=1}^n \notag \\
\sgood &= \{c_1,\ldots, c_{|\g|}\} \notag \\
\sbad &= \{c_{|\g|+1},\ldots,c_{|\g|+|\b|}\}.
\end{align}
\nomenclature[D, 04a]{$\sgood$, $\sbad$}{\good\ and \bad\ solutions}%
%
The sets $\sgood$ and $\sbad$ represent solutions which are desirable and
undesirable to choose, reflecting the fact that the approximation
set-based approaches are designed to reliably tell them apart. We further assume
that $\g \ll \b$, which corresponds to the fact that \good\ solutions should be hard
to identify.
Our proof-of-concept scenario abstracts from a concrete optimization problem. In
other words, we do not address here the problem of specific optimization
algorithms. Hence we explicitly state that instead of generating inputs $X \in
\mathcal{X}$, we rather directly generate costs of solutions $c \in \C$.
%
In the terminology of Section~\ref{sec:optimization_problem_description}, an
instance $X$ can be represented as a vector of random solution costs of length
$n$:
\begin{equation}\label{eq:generic_appch_cost_vector}
X \coloneqq \langle R_i \rangle_{i=1}^{n},
\end{equation}
and the cost function is simply
\begin{equation}
R(c_i, X) \coloneqq R_i,
\end{equation}
i.e.~the $i$-th entry stores the cost of the solution $c_i$ in $X$.
\subsection{Problem Generation}
\label{sec:gen_appch_pg}
We define the solution ``desirability'' by the intuition that costs of \good\
solutions have a small standard deviation and play the role of signal,
while costs of \bad\ solutions have a higher mean and/or a higher standard
deviation and play the role of deceiving noise.
We assume the cost vector of an instance $X$ to be generated with the following
random problem generating process $PG(\cdot)$:
\begin{itemize}
\item[1)] the first $\g$ values are chosen at random according to some
(fixed) probability distribution $\DG$, and
\item[2)] the remaining $\b$ values are chosen at random according to some
(fixed) probability distribution $\DB$.
\end{itemize}
\nomenclature[D, 04b]{$\DG$, $\DB$}{cost distributions\nomnorefeq}%
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.8\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/stable_and_unstable_distr}
\end{subfigure}
\caption{Schematic example of \good\ and \bad\ cost distributions and noise
levels $N \in \mathcal{N}$ of \bad\ ones, as described in
Section~\ref{sec:gen_appch_pg}.}
\label{fig:stable_and_unstable_solutions}
\end{figure}
Naturally it is safe to assume that both $\DG$ and $\DB$ have the property that
\good\ solutions are superior to \bad\ ones
(Figure~\ref{fig:stable_and_unstable_solutions}), e.g., because they have a
smaller expected cost or a smaller variance, i.e. for any $R_\text{\good} \sim
\DG$ and $R_\text{\bad} \sim \DB$, $\Expct [R_\text{\good}] < \Expct [R_\text{\bad}]$
and $\Var [R_\text{\good}] < \Var [R_\text{\bad}]$.
We further assume $\DG$ and $\DB$ are independent of the
instance and of the concrete solution (costs of \good\ solutions are always
chosen from $\DG$, costs of \bad\ solutions are always chosen from $\DB$).
We model noise in a generic way by defining a set of noise levels $\mathcal{N}$
(the concrete definition depends on the type of the noise, see~\citep{jcss:2017}). For a fixed noise
level $N\in\mathcal{N}$, we randomly generate an instance as follows. \Good\
solutions are drawn from a distribution with fixed mean $\mu_\mathrm{\G}$ and fixed
standard deviation $\sigma_\mathrm{\G}$.
\Bad\ solutions are drawn from a distribution with mean $\mu_\mathrm{\B}(N)$ and standard
deviation $\sigma_\mathrm{\B}(N)$. The distributions of the \bad\ solutions are
chosen in a way such that for every two noise levels $N,N'\in\mathcal{N}$ with
$N'>N$, we have $\mu_\mathrm{\B}(N')<\mu_\mathrm{\B}(N)$ or
$\sigma_\mathrm{\B}(N')>\sigma_\mathrm{\B}(N)$.
\nomenclature[D, 04c]{$N \in \mathcal{N}$}{noise levels for experiment\nomnorefeq}%
\myremark Such assumptions on noise levels are justified by the fact that noise
would naturally imply either a smaller expected cost, or a higher standard
deviation, or both~--- resulting in a more aggressive ``deceiving'' of the
algorithm. See Figure~\ref{fig:stable_and_unstable_solutions} for schematic
illustration of this intuition.
Due to its enormous theoretical and practical relevance, we present here the
results for the Gaussian noise model (for more noise settings, see~\citep{jcss:2017}).
\Good\ solutions are drawn from a Gaussian
distribution with mean $\mu_{\G}=1$ and standard deviation $\sigma_{\G}=1$. We
define the noise levels $\mathcal{N}$ in such a way that for each noise level
$N\in\mathcal{N}$, \bad\ solutions are drawn from a Gaussian distribution with
mean $\mu_{\B}(N)=10$ and standard deviation $\sigma_{\B}(N)=N$: $\DG = \mathrm{Norm}(1, 1)$ and
$\DB(N) = \mathrm{Norm}(10, N^2)$.
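A sketch of this generation process (our illustration, with the distribution
parameters as stated above) is given below:
\begin{verbatim}
import numpy as np

def generate_instance(n_good, n_bad, noise_level, rng):
    """One instance X: first n_good costs ~ Norm(1, 1),
    the remaining n_bad costs ~ Norm(10, noise_level^2)."""
    good = rng.normal(1.0, 1.0, size=n_good)
    bad = rng.normal(10.0, noise_level, size=n_bad)
    return np.concatenate([good, bad])

rng = np.random.default_rng(2)
X_prime = generate_instance(50, 950, noise_level=5.0, rng=rng)
X_double_prime = generate_instance(50, 950, noise_level=5.0, rng=rng)
\end{verbatim}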
\subsection{The Goal and Success Metrics}
Now, our goal is the following: given two instances $X'$ and $X''$ generated by
the random process $PG(\cdot)$ described above, our algorithm $\mathscr{A}$ has
to compute a set of solutions $\hat \C_\mathscr{A}$ of candidates for solutions in
$\sgood$, from which it then picks a solution
uniformly at random. The only knowledge available to the
algorithm consists of the two cost vectors of $X'$ and $X''$ defined
in~\eqref{eq:generic_appch_cost_vector}. The algorithm cannot exploit the fact
that there are two categories of solutions, and in particular it has no
knowledge about $\DG$ and $\DB$.
Since we assume that a solution from $\hat{\mathcal{C}}_{\mathscr{A}}$ is
picked uniformly at random, we define the \emph{success probability} of $\mathscr{A}$
with input $X'$ and $X''$ as
\begin{align}
\label{eq:uncert:prob_succ}
P_{\mathscr{A}}(X',X'')
= \frac{|\hat{\mathcal{C}}_\mathscr{A}\cap\sgood|}{|\hat{\mathcal{C}}_\mathscr{A}|}
\text{, for a solving algorithm } \mathscr{A}.
\end{align}
\nomenclature[D, 04d]{$P_{\mathscr{A}}(X',X'')$}{success metric for experiment}%
We want to investigate how the success probability of the similarity algorithm
proposed in this chapter evolves with increasing noise, and benchmark it against
some other algorithms. In this thesis, we present only the Joint Minimizer algorithm
(see the next section); for results on a more complete list of benchmarks,
we refer the reader to~\citep{jcss:2017}.
\subsection{Experimental Results}
\label{sec:continuous_noise}
\paragraph{Benchmark: joint cost minimizing}
When only two instances are given, the most efficient and straightforward idea
to find a solution that is likely to be good for a test instance is to compute
a solution~$c$ that minimizes the average cost, or equivalently, the joint cost
$R(c, X')+R(c, X'')$. We refer to this method as the \textit{Joint Minimizer}
method in the plots below.
\index{Joint cost minimizer}
\index{Joint minimizer|see{Joint cost minimizer}}
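As a rough illustration of the benchmark and of the success
metric~\eqref{eq:uncert:prob_succ} (our sketch; here the Joint Minimizer
returns the full set of joint-cost minimizers, from which a solution would be
drawn uniformly at random), consider:
\begin{verbatim}
import numpy as np

def joint_minimizer(X1, X2):
    """Candidate set of the Joint Minimizer: minimizers of R(c, X') + R(c, X'')."""
    joint = X1 + X2
    return np.flatnonzero(joint == joint.min())

def success_probability(candidates, n_good):
    """P_A(X', X''): fraction of candidates that are good
    (good solutions occupy the first n_good indices)."""
    return np.mean(candidates < n_good)

# With X_prime, X_double_prime generated as in the previous sketch:
# candidates = joint_minimizer(X_prime, X_double_prime)
# success_probability(candidates, n_good=50)
\end{verbatim}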
\paragraph{Results}
For each noise level $N\in\mathcal{N}$, we perform the following
experiment: we generate $\R=1000$ instance pairs $(X'_k,X''_k)_{k\in\{1,\ldots,
\R\}}$ with noise level $N$ according to the $PG(\cdot)$ process described in
Section~\ref{sec:gen_appch_pg}, and for each of these instance pairs we compute
$P_\mathscr{A}(X'_k,X''_k)$ for all algorithms $\mathscr{A}$. After that we set
\begin{align}
\hat P_\mathscr{A}(N) &\coloneqq \frac{1}{\R} \sum_{k=1}^{\R} P_\mathscr{A}(X'_k,X''_k)
\end{align}
to estimate the average success probability of the proposed methods in
dependency of the noise level $N$. Unless otherwise stated, $\mathcal{C}$
contains $n=1000$ solutions.
In our experiments, $\R=1000$ repetitions turned out to be enough to exhibit
the behaviors of the methods. Preliminary experiments with $10000$ repetitions
gave similar results: the rankings of the methods were the same, only the curves in
the plots appeared to be smoother.
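For illustration, the experiment loop can be sketched as follows. It reuses the hypothetical
helpers generate_instance, success_probability and joint_minimizer from the sketches above
(illustrative names of ours, not part of the original implementation); the Joint Minimizer
produces a singleton candidate set.
\begin{verbatim}
import numpy as np

def estimate_average_success(n_good, n_bad, sigma_b, repetitions=1000, seed=0):
    """Monte Carlo estimate of the average success probability for one noise level."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(repetitions):
        X1 = generate_instance(n_good, n_bad, sigma_b, rng)
        X2 = generate_instance(n_good, n_bad, sigma_b, rng)
        candidates = [joint_minimizer(X1, X2)]   # singleton candidate set
        values.append(success_probability(candidates, n_good))
    return float(np.mean(values))

print(estimate_average_success(n_good=100, n_bad=900, sigma_b=3.0))
\end{verbatim}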
Figure~\ref{fig:gnm} shows the experimental results for Gaussian noise; they
strongly indicate that the approximation set-based similarity approach is very
competitive with joint cost minimization.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g50_b950}
\caption{$5\%$ of solutions are \good.}
\label{fig:gnm_5}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g100_b900}
\caption{$10\%$ of solutions are \good.}
\label{fig:gnm_10}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g200_b800}
\caption{$20\%$ of solutions are \good.}
\label{fig:gnm_20}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g500_b500}
\caption{$50\%$ of solutions are \good.}
\label{fig:gnm_50}
\end{subfigure}
\\[.5cm]
\caption{Experimental results where $5\%$ \textbf{(a)}, $10\%$ \textbf{(b)},
$20\%$ \textbf{(c)} and $50\%$ \textbf{(d)} of the solutions are \good.
Total number of solutions equals $1000$.}
\label{fig:gnm}
\end{figure}
Note that the latter is a straightforward way to compute solutions when only two
inputs are provided.
\section{Finding Optimal Approximations Analytically}
\label{sec:analytic_solution}
\subsection{Theoretical Results}
One of our main assumptions was that the noise generating process is unknown to
the predicting algorithm $\mathscr{A}$. It was also previously noted that the
crucial step of the whole approximation set-based approach consists in deriving
the appropriate specific formula or algorithm to calculate the
similarity~\eqref{eq:simple}. As a first step towards a formal analysis of the
model discussed in the previous section, in this section we thoroughly
investigate how the similarity~\eqref{eq:simple} behaves in expectation (where
the expectation is computed over all pairs of instances generated by the random
process $PG(\cdot)$), i.e., we analyze the function
\begin{equation}
\label{eq:oracle_similarity}
S_\gamma^{\mathrm{EXP}}=\mathbb{E}_{X',X''\sim PG} \,
S_\gamma(X',X'').
\end{equation}
\index{Similarity approach!Calibration assumption}
\index{Calibration assumption|see{Similarity approach}}
For simplicity we introduce the \emph{calibrating assumption} that the minimum
solutions of both instances $X'$ and $X''$ have the same cost $m$:
\begin{equation}\label{eq:gen_appch_calibrating_assumption}
\min_c R(c, X') \approx \min_c R(c, X'') \approx m.
\end{equation}
Without this assumption our analysis would still be possible, but it would be
more technical. Notice that the assumption does not imply that the minimum
solutions themselves are the same: in general, it holds that
\begin{equation}
\arg \min_c R(c, X') \ne \arg \min_c R(c, X''),
\end{equation}
i.e., the minimum costs are not necessarily attained at the same solution.
\index{Similarity approach!Theoretical estimator}
\begin{theorem}
\label{thm:oracle_similarity}
Let $\gamma>0$, $V = |\mathcal{C}_\gamma(X')\cap \mathcal{C}_\gamma(X'')|$,
$W = |\mathcal{C}_\gamma(X')|\cdot |\mathcal{C}_\gamma(X'')|$, $m$ be the minimum cost of a
solution in both $X'$ and $X''$ (i.e., the calibrating assumption is
satisfied), and $\FG$ and $\FB$ denote the cumulative distribution functions of the
\good\ and the \bad\ solutions, respectively, evaluated at $m+\gamma$. Then, the
expected similarity~\eqref{eq:oracle_similarity} can be approximated by the
estimated similarity
\begin{align}
\label{eq:esim}
S_\gamma^{\mathrm{EXP}} \sim \hat S_\gamma \coloneqq |\mathcal{C}|\left(
\frac{\Expct[V]}{\Expct[W]} -
\frac{\Cov(V,W)}{\Expct[W]^2} + \frac{\Var[W] \cdot \Expct[V]}{\Expct[W]^3}\right)
\end{align}
\nomenclature[D, 05]{$S_\gamma^{\mathrm{EXP}}$, $\hat S_\gamma$}{theoretical estimator of similarity}%
where
\begin{align}
\label{eq:esim:ev}
\Expct[V] &= \g\FG^2 + \b\FB^2, \\
\label{eq:esim:ew}
\Expct[W] &= (\g\FG + \b\FB)^2, \\
\label{eq:esim:cov}
\Cov(V, W) &= \g\FG^2(1 - \FG^2) + 2\g(\g-1) \FG^3(1 - \FG) \notag \\
&\quad + 2\g\b\FG^2\FB (1 - \FG) + 2\g\b\FG\FB^2(1 - \FB) \notag \\
&\quad + \b\FB^2(1 - \FB^2) + 2\b(\b-1) \FB^3(1 - \FB), \text{ and} \\
\label{eq:esim:var}
\Var[W] &= \g^2\FG^2(1- \FG^2) + 2\g^2(\g-1)\FG^3(1 - \FG) \notag\\
&\quad + 2\g\b(\b - 1) \FG \FB^2 (1 - \FG) \notag\\
&\quad + 2\g(\g - 1)\b \FG^2 \FB (1 - \FB) \notag\\
&\quad + 2\g\b \FG \FB (1 - \FG\FB) \notag\\
&\quad + \b^2 \FB^2 (1 - \FB^2) + 2\b^2(\b-1) \FB^3 (1 - \FB) \notag\\
&\quad + 4 \g^2 \b \FG^2 \FB (1 - \FG) + 4\g\b^2 \FG \FB^2 (1 - \FB).
\end{align}
\end{theorem}
\paragraph{Proof of Theorem~\ref{thm:oracle_similarity}}
To make this proof more readable, we break it down into several
steps.
\begin{itemize}
\item[1)] {\em Preliminaries.}
Let $m=\min_{c \in \mathcal{C}} R(c, X')=\min_{c \in \mathcal{C}} R(c, X'')$. Let
$c_i$, $i\in\{1,\ldots,\g\}$ denote the solutions in $\sgood$ and
$\bar c_i$, $i\in\{1,\ldots,\b\}$ denote the solutions in $\sbad$. We define
\begin{align*}
A'_{i, \gamma} &= \mathbbm{1}\{R(c_i,X') \le m+\gamma\}, \quad 1 \le i \le \g \\
A''_{i, \gamma} &= \mathbbm{1}\{R(c_i,X'') \le m+\gamma\}, \quad 1 \le i \le \g \\
B'_{j, \gamma} &= \mathbbm{1}\{R(\bar c_j,X') \le m+\gamma\}, \quad 1 \le j \le \b \\
B''_{j, \gamma} &= \mathbbm{1}\{R(\bar c_j,X'') \le m+\gamma\}, \quad 1 \le j \le \b.
\end{align*}
\nomenclature[A, 00]{$\mathbbm{1}\{\cdot\}$}{indicator function\nomnorefeqpage}%
Now the components of the similarity~\eqref{eq:similarity} can be
expressed as
\begin{align*}
|\mathcal{C}_\gamma(X')\cap \mathcal{C}_\gamma(X'')|
&= \sum_{i = 1}^{\g} A'_{i,\gamma} A''_{i,\gamma} +
\sum_{j = 1}^{\b} B'_{j,\gamma} B''_{j,\gamma}, \\
|\mathcal{C}_\gamma(X')|
&= \sum_{i = 1}^{\g} A'_{i,\gamma} + \sum_{j = 1}^{\b} B'_{j,\gamma}, \\
|\mathcal{C}_\gamma(X'')|
&= \sum_{i = 1}^{\g} A''_{i,\gamma} + \sum_{j = 1}^{\b} B''_{j,\gamma}.
\end{align*}
For the rest of this proof we will simplify the notation as follows:
1)~$\gamma$~is omitted in the subscript because we can assume it to be
the same throughout all considerations, 2) the limits in the sums are
omitted; for \good\ solutions we
always sum up to $\g$ and for \bad\ solutions to $\b$, and 3) by $\FG$ and
$\FB$ we denote the cumulative distribution functions of \good\ and \bad\
distributions, respectively, evaluated at $m+\gamma$:
\begin{equation}
\FG \coloneqq \FG(m + \gamma), \qquad \FB \coloneqq \FB(m + \gamma).
\end{equation}
Observe that 1) $\Expct[A'_i]=\Expct[A''_i]=\FG$ and $\Expct[B'_j]=\Expct[B''_j]
=\FB$, 2) the random variables in $\{A'_i\}_i \cup \{A''_j\}_j \cup \{B'_k\}_k
\cup \{B''_\ell\}_\ell$ are jointly independent, and 3) $(A'_i)^2=A'_i$,
$(A''_i)^2=A''_i$, $(B'_j)^2=B'_j$ and $(B''_j)^2=B''_j$ because these are
indicators. Also, remember that for jointly independent indicator random
variables $Z_1, Z_2, Z_3$ with $\Expct[Z_i] = z_i$ we have
\nomenclature[B, 00]{$\Expct[\cdot]$}{expected value\nomnorefeqpage}%
\nomenclature[B, 00]{$\Var[\cdot]$}{variance\nomnorefeqpage}%
\begin{align}
\Cov(Z_1 Z_2, Z_1 Z_2)
&= z_1 z_2 (1 - z_1 z_2) \label{eq:ag:covariance_general_2} \\
\Cov(Z_1 Z_2, Z_1 Z_3 )
&= z_1 z_2 z_3 (1 - z_1) \label{eq:ag:covariance_general_1}
\end{align}
\item[2)] {\em Taylor expansion of the expected similarity.}
A second-order Taylor approximation of $\Expct[V/W]$ gives
\begin{align}
\tag{\ref{eq:esim}}
\Expct\biggl[\frac{V}{W}\biggl]
\approx \frac{\Expct[V]}{\Expct[W]} -
\frac{\Cov(V,W)}{\Expct[W]^2} +
\frac{\Var[W] \cdot \Expct[V]}{\Expct[W]^3}.
\end{align}
Remember that $V$ denotes the size of the intersection while $W$ is the
product of the approximation set sizes. In the following, we will
analyze each term of~\eqref{eq:esim} separately.
\item[3)] {\em Expected values of $V$ and $W$.}
\begin{align*}
\tag{\ref{eq:esim:ev}}
\Expct[V] = \sum_i \Expct[A'_i] \cdot \Expct[A''_i] +
\sum_j \Expct[B'_j] \cdot \Expct[B''_j] = \g\FG^2 + \b\FB^2.
\end{align*}
Taking the independence of the random variables into account, for
$\Expct[W]$ we obtain
\begin{align*}
\tag{\ref{eq:esim:ew}}
\Expct[W]
= \Expct\Bigl[\sum_i A'_i + \sum_j B'_j\Bigr] \cdot
\Expct\Bigl[\sum_i A''_i + \sum_j B''_j\Bigr]
= ( \g \FG + \b \FB )^2.
\end{align*}
\item[4)] {\em Analyzing the covariance of $V$ and $W$.}
Remember that
\begin{align}
V &= \sum_i A'_i A''_i + \sum_i B'_i B''_i, \\
W &= \sum_{j,k} A'_j A''_k + \sum_{j,k} A'_j B''_k +
\sum_{j,k} B'_j A''_k + \sum_{j,k} B'_j B''_k,
\label{eq:ag:w_representation}
\end{align}
hence
\begin{align}
\Cov(V, W)
&= \sum_{i,j,k} \Cov( A'_i A''_i, A'_j A''_k ) + \sum_{i,j,k} \Cov( A'_i A''_i, A'_j B''_k ) \notag \\
&+ \sum_{i,j,k} \Cov( A'_i A''_i, B'_j A''_k ) + \sum_{i,j,k} \Cov( A'_i A''_i, B'_j B''_k ) \notag \\
&+ \sum_{i,j,k} \Cov( B'_i B''_i, A'_j A''_k ) + \sum_{i,j,k} \Cov( B'_i B''_i, A'_j B''_k ) \notag \\
&+ \sum_{i,j,k} \Cov( B'_i B''_i, B'_j A''_k ) + \sum_{i,j,k} \Cov( B'_i B''_i, B'_j B''_k )
\end{align}
We will now analyze each of the single terms.
\begin{itemize}
\item[$\bullet$]
In the first term $\sum \Cov( A'_i A''_i, A'_j A''_k )$
only the summands with $j=i$ or $k=i$ are non-zero, hence we
obtain
\begin{align}
\sum_{i,j,k} &\Cov( A'_i A''_i, A'_j A''_k )
= \sum_i \Cov( A'_i A''_i, A'_i A''_i) \notag \\
&\hspace{1cm}+ \sum_{i \ne j} \Bigl[ \Cov( A'_i A''_i, A'_i A''_j)
+ \Cov( A'_i A''_i, A'_j A''_i)\Bigr] \notag \\
&= \sum_i \Cov( A'_i A''_i, A'_i A''_i)
+ 2 \sum_{i \ne j} \Cov( A'_i A''_i, A'_i A''_j), \notag\\
&= \g \FG^2(1 - \FG^2)+2\g(\g-1) \FG^3 (1 - \FG),
\label{eq:ag:comp_covariance_4}
\end{align}
where the last equality holds due
to~\eqref{eq:ag:covariance_general_2}
and~\eqref{eq:ag:covariance_general_1}.
\item[$\bullet$]
The next two terms $\sum \Cov( A'_i A''_i, A'_j B''_k )$ and
$\sum \Cov( A'_i A''_i, B'_j A''_k )$ are equal to each other
(due to the symmetry of $A'$ and $A''$), so their sum resolves
to
\begin{equation}
2 \sum_{i,k} \Cov( A'_i A''_i, A'_i B''_k )
\stackrel{\text{\eqref{eq:ag:covariance_general_1}}}{=}
2 \g \b \FG^2 \FB ( 1 - \FG). \label{eq:ag:comp_covariance_2}
\end{equation}
\item[$\bullet$]
The next two terms $\sum \Cov( A'_i A''_i, B'_j B''_k )$
and $\sum \Cov( B'_i B''_i, A'_j A''_k )$ are both zero
due to the independence of $A'_i A''_i$ and $B'_j B''_k$.
\item[$\bullet$]
The next two terms $\sum \Cov( B'_i B''_i, A'_j B''_k )$
and $\sum \Cov( B'_i B''_i, B'_j A''_k )$ can be computed in
exactly the same way as~\eqref{eq:ag:comp_covariance_2} where
both $\FG$ and $\FB$ as well as $\g$ and $\b$ are interchanged.
Hence, their sum equals
\begin{equation*}
2 \sum_{i,k} \Cov( B'_i B''_i, B'_i A''_k )
\stackrel{\text{\eqref{eq:ag:covariance_general_1}}}{=}
2 \g\b \FG \FB^2 ( 1 - \FB).
\end{equation*}
\item[$\bullet$]
The last term $\sum \Cov( B'_i B''_i, B'_j B''_k )$ is computed
similar as~\eqref{eq:ag:comp_covariance_4},
performing the above-mentioned replacements, hence
\begin{align*}
\sum_{i,j,k} &\Cov( B'_i B''_i, B'_j B''_k )
= \b \FB^2(1 - \FB^2) + 2\b(\b-1) \FB^3 (1 - \FB).
\end{align*}
\end{itemize}
\item[5)] {\em Analyzing the variance of $W$.}
Finally we compute $\Var[W] = \Cov(W, W)$. When $W$ is expressed
as~\eqref{eq:ag:w_representation}, we obtain
\begin{align*}
\Cov(W, W)
&= \sum_{i,j,k,\ell} \Cov(A'_i A''_j, A'_k A''_\ell) + \sum_{i,j,k,\ell} \Cov(A'_i B''_j, A'_k B''_\ell) \\
&+\sum_{i,j,k,\ell} \Cov(B'_i A''_j, B'_k A''_\ell) + \sum_{i,j,k,\ell} \Cov(B'_i B''_j, B'_k B''_\ell) \\
&+ 2 \sum_{i,j,k,\ell} \Cov(A'_i A''_j, A'_k B''_\ell) + 2 \sum_{i,j,k,\ell} \Cov(A'_i A''_j, B'_k A''_\ell) \\
&+ 2 \sum_{i, j, k, \ell} \Cov(A'_i A''_j, B'_k B''_\ell) + 2 \sum_{i,j,k,\ell} \Cov(A'_i B''_j, B'_k A''_\ell) \\
&+ 2 \sum_{i,j,k,\ell} \Cov(A'_i B''_j, B'_k B''_\ell) + 2 \sum_{i,j,k,\ell} \Cov( B'_i A''_j, B'_k B''_\ell).
\end{align*}
As before we analyze each of these terms separately.
\begin{itemize}
\item[$\bullet$]
The first term $\sum \Cov(A'_i A''_j, A'_k A''_\ell)$ can be
expressed as
\begin{align*}
&\sum_{i} \Cov(A'_i A''_i, A'_i A''_i)
+ 4 \sum_{i\ne j} \Cov(A'_i A''_i, A'_i A''_j) \\
&+ 2\hspace{-4mm}\sum_{i \ne j, i \ne k, j \ne k}\hspace{-4mm} \Cov( A'_i A''_j, A'_i A''_k)
+ \sum_{i \ne j} \Cov( A'_i A''_j, A'_i A''_j)
\end{align*}
where
\begin{align*}
\sum_{i} \Cov(A'_i A''_i, A'_i A''_i) &= \g\FG^2(1 - \FG^2), \\
4 \sum_{i\ne j} \Cov(A'_i A''_i, A'_i A''_j) &= 4\g(\g-1)\FG^3(1-\FG), \\
2\hspace{-4mm}\sum_{i \ne j, i \ne k, j \ne k}\hspace{-4mm}
\Cov( A'_i A''_j, A'_i A''_k) &= 2\g(\g-1)(\g-2) \FG^3( 1- \FG), \\
\sum_{i \ne j} \Cov( A'_i A''_j, A'_i A''_j) &= \g(\g-1)\FG^2 ( 1 - \FG^2),
\end{align*}
and therefore
\begin{equation}
\label{eq:ag:comp_covariance_6}
\sum_{i,j,k,\ell}\hspace{-1mm} \Cov(A'_i A''_j, A'_k A''_\ell)
= \g^2 \FG^2( 1- \FG^2) + 2 \g^2 (\g-1) \FG^3 ( 1- \FG).
\end{equation}
\item[$\bullet$]
The next two terms $\sum \Cov(A'_i B''_j, A'_k B''_\ell)$ and
$\sum \Cov(B'_i A''_j, B'_k A''_\ell)$ are equal due to the
symmetry in instances, hence their sum equals
\begin{align*}
%&2 \sum_{i, j, k, \ell} \Cov(A'_i B''_j, A'_k B''_\ell) \\
&2 \sum_{\substack{i \\ j \ne k}} \Cov(A'_i B''_j, A'_i B''_k)
+ 2 \sum_{\substack{i \\ j \ne k}} \Cov(A'_j B''_i, A'_k B''_i) \\
&\quad + 2 \sum_{i, j} \Cov(A'_i B''_j, A'_i B''_j),
\end{align*}
where the terms are computed as
\begin{align*}
2 \sum_{\substack{i \\ j \ne k}} \Cov(A'_i B''_j, A'_i B''_k)
&\stackrel{\text{\eqref{eq:ag:covariance_general_1}}}{=}
2 \g\b(\b-1) \FG \FB^2 (1 - \FG), \\
2 \sum_{\substack{i \\ j \ne k}} \Cov(A'_j B''_i, A'_k B''_i)
&\stackrel{\text{\eqref{eq:ag:covariance_general_1}}}{=}
2 \g (\g-1) \b \FG^2 \FB (1 - \FB), \\
2 \sum_{i, j} \Cov(A'_i B''_j, A'_i B''_j)
&\stackrel{\text{\eqref{eq:ag:covariance_general_2}}}{=}
2\g\b \FG \FB ( 1 - \FG \FB).
\end{align*}
\item[$\bullet$]
The next term $\sum \Cov(B'_i B''_j, B'_k B''_\ell)$ is computed
analogously to~\eqref{eq:ag:comp_covariance_6},
where \good\ and \bad\ solutions are interchanged, resulting in
\begin{align*}
\sum_{i,j,k,\ell} &\Cov(B'_i B''_j, B'_k B''_\ell)
= \b^2 \FB^2 ( 1 - \FB^2) + 2 \b^2(\b-1) \FB^3 (1 - \FB).
\end{align*}
\item[$\bullet$]
The next terms $2\sum \Cov(A'_i A''_j, A'_k B''_\ell)$ and
$2\sum \Cov(A'_i A''_j, B'_k A''_\ell)$ are equal due to the
symmetry of the instances, hence their sum is
\begin{align}\label{eq:ag:comp_covariance_7}
4 \sum_{i,j,k,\ell} \Cov(A'_i A''_j, A'_k B''_\ell)
&= 4 \sum_{i,j,k} \Cov(A'_i A''_j, A'_i B''_k) \notag \\
&\stackrel{\text{\eqref{eq:ag:covariance_general_1}}}{=} 4 \g^2 \b \FG^2 \FB ( 1 - \FG ).
\end{align}
\item[$\bullet$]
The next terms $2\sum \Cov(A'_i A''_j, B'_k B''_\ell)$ and
$2\sum \Cov(A'_i B''_j, B'_k A''_\ell)$ are both equal to zero
due to the independence of $A'_i A''_j$ and $B'_k B''_\ell$, and
of $A'_i B''_j$ and $B'_k A''_\ell$.
\item[$\bullet$]
The last terms $2\sum \Cov(A'_i B''_j, B'_k B''_\ell)$ and
$2\sum \Cov(B'_i A''_j, B'_k B''_\ell)$ are equal due to the
symmetry of the instances, hence their sum can be computed
analogously to~\eqref{eq:ag:comp_covariance_7}, where \good\ and
\bad\ solutions are interchanged. Hence, we obtain
\begin{equation*}
4 \sum_{i,j,k,\ell} \Cov(B'_i A''_j, B'_k B''_\ell)
= 4 \g \b^2 \FG \FB^2 ( 1 - \FB).
\qedhere
\end{equation*}
\nomenclature[B, 00]{$\Cov(\cdot, \cdot)$}{covariance\nomnorefeqpage}%
\end{itemize}
\end{itemize}
The proof is thus finished.
\QEDA
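The estimator of Theorem~\ref{thm:oracle_similarity} is straightforward to evaluate numerically.
The following Python sketch transcribes Equations~\eqref{eq:esim}--\eqref{eq:esim:var} for the
Gaussian noise model of this chapter; the function name and the default parameter values are
purely illustrative.
\begin{verbatim}
from scipy.stats import norm

def estimated_similarity(gamma, m, n_good, n_bad,
                         mu_g=1.0, sigma_g=1.0, mu_b=10.0, sigma_b=3.0):
    """Evaluate the estimator hat S_gamma; FG and FB are the CDFs of the
    good/bad cost distributions evaluated at m + gamma."""
    FG = norm.cdf(m + gamma, loc=mu_g, scale=sigma_g)
    FB = norm.cdf(m + gamma, loc=mu_b, scale=sigma_b)
    g, b = n_good, n_bad

    EV = g * FG**2 + b * FB**2
    EW = (g * FG + b * FB)**2
    CovVW = (g * FG**2 * (1 - FG**2) + 2 * g * (g - 1) * FG**3 * (1 - FG)
             + 2 * g * b * FG**2 * FB * (1 - FG)
             + 2 * g * b * FG * FB**2 * (1 - FB)
             + b * FB**2 * (1 - FB**2) + 2 * b * (b - 1) * FB**3 * (1 - FB))
    VarW = (g**2 * FG**2 * (1 - FG**2) + 2 * g**2 * (g - 1) * FG**3 * (1 - FG)
            + 2 * g * b * (b - 1) * FG * FB**2 * (1 - FG)
            + 2 * g * (g - 1) * b * FG**2 * FB * (1 - FB)
            + 2 * g * b * FG * FB * (1 - FG * FB)
            + b**2 * FB**2 * (1 - FB**2) + 2 * b**2 * (b - 1) * FB**3 * (1 - FB)
            + 4 * g**2 * b * FG**2 * FB * (1 - FG)
            + 4 * g * b**2 * FG * FB**2 * (1 - FB))

    n_total = g + b
    return n_total * (EV / EW - CovVW / EW**2 + VarW * EV / EW**3)
\end{verbatim}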
\subsection{Experimental Results}
We now provide both positive and negative experimental results which highlight
the scope of applicability of such similarity estimation. We performed an
experimental evaluation using Gaussian noise in a setting similar to the one in
Sections~\ref{sec:gen_appch_pg}--\ref{sec:continuous_noise}: parameters were set
to $\g=100$, $\b=900$, $\mu_{\mathrm{\G}}=1$, $\sigma_{\mathrm{\G}}=1$, $\mu_{\mathrm{\B}}=10$,
$\sigma_{\mathrm{\B}}\in\{0,0.1,\ldots,10\}$.
%
The only adjustment one had to make was a slightly changed instance generator due
to the calibrating assumption~\eqref{eq:gen_appch_calibrating_assumption}: since
the minima of both instances have to be sufficiently close to each other, the
problem generation process disregarded each pair of instances for which the
minima $m'$ and $m''$ differed by more than $\varepsilon=10^{-4}$, and
repeatedly generated a new pair until $|m'-m''|\le
\varepsilon$.
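The rejection step can be sketched as follows, reusing the hypothetical generate_instance
helper from the earlier sketch.
\begin{verbatim}
def generate_calibrated_pair(n_good, n_bad, sigma_b, eps=1e-4, rng=None):
    """Draw instance pairs until their minimum costs differ by at most eps,
    so that the calibrating assumption holds approximately."""
    while True:
        X1 = generate_instance(n_good, n_bad, sigma_b, rng)
        X2 = generate_instance(n_good, n_bad, sigma_b, rng)
        if abs(X1.min() - X2.min()) <= eps:
            return X1, X2
\end{verbatim}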
For each successful (i.e., not rejected due to the calibrating assumption) instance
pair $(X',X'')$, we computed similarity~\eqref{eq:simple} and
estimated similarity~\eqref{eq:esim}, where the latter was calibrated with
$m=(m'+m'')/2$. We repeated the process $\R=1000$ times and calculated the
average similarity
\begin{equation}
\bar S_\gamma=\frac{1}{\R}\sum_{k=1}^\R S_\gamma(X'^k,X''^k),
\end{equation}
and compared it to the estimated similarity~\eqref{eq:esim}. We note that we did
not compute the average estimated similarity over all instance pairs, but
instead calibrated Equation~\eqref{eq:esim} directly using the average minimum
cost of the instance pairs, i.e., using $m=\frac{1}{\R}\sum_{k=1}^\R
(m'^k+m''^k)/2$ where $m'^k=\min_{c \in \mathcal{C}} R(c, X'^k)$ and $m''^k$ is
defined analogously.
Figure~\ref{fig:realVsEstimatedSimilarities} shows the plots of $\hat S_\gamma$ and
$\bar S_\gamma$ defined above for two noise levels, $\sigma_{\mathrm{\B}}=1$ and $\sigma_{\mathrm{\B}}=5$.
\begin{figure}[ht!]
\centering
\begin{subfigure}[b]{.6\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/realVsEstimatedSimilarities_sb1}
\caption{}
\end{subfigure}
\\[.5cm]
\begin{subfigure}[b]{.6\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/realVsEstimatedSimilarities_sb5}
\caption{}
\end{subfigure}
\\[.5cm]
\caption{Average vs. estimated similarity for $\sigma_{\mathrm{\B}}=1$ \textbf{(a)}, and for
$\sigma_{\mathrm{\B}}=5$ \textbf{(b)}.}
\label{fig:realVsEstimatedSimilarities}
\end{figure}
We see that the estimated similarity matches the average similarity relatively
well, especially for larger values of $\gamma$. Although the discrepancy grows
with the noise (which is natural due to the Taylor expansion used in the proof),
the most important observation is that the positions of the maximizers $\gamma^*$
computed from $\hat S_\gamma$ and from $\bar S_\gamma$ remain the same.
% \agcomm{till here}
% For $\sigma_{\mathrm{\B}}=5$, the situation is more difficult to analyze. However,
% Figure~\ref{fig:realVsEstimatedSimilarities}b shows that the maximum of the
% estimated similarity is larger than the estimated similarity at $\gamma=0$, and
% it also shows that the values $\gamma$ where the average and the estimated
% similarity, respectively, are maximized coincide well. Therefore the
% aforementioned problem of an empty intersection does not occur.
% As before we computed for both methods the
% intersection $\mathcal{C}_{\gamma^*}(X')\cap \mathcal{C}_{\gamma^*}(X'')$ and
% evaluated the resulting success probability using the definition
% in~\eqref{eq:uncert:prob_succ}.
% Figure~\ref{fig:realVsEstimated} shows that for
% high noise, \tESIM\ has a higher chance to pick a \good\ solution than \tSIM\
% and \tMR\ which is not surprising because it has knowledge about the underlying
% process.
% \begin{figure}[t!]
% \centering
% \begin{subfigure}[b]{.85\textwidth}
% \includegraphics[width=\linewidth]{figures/ch_generic_approach/realVsEstimatedSuccessProbabilities}
% \end{subfigure}
% \\[.5cm]
% \begin{subfigure}[b]{.85\textwidth}
% \includegraphics[width=\linewidth]{figures/ch_generic_approach/realVsEstimatedGamma}
% \end{subfigure}
% \\[.5cm]
% \caption{Comparison of the success rates of \tMR\ and \tSIM\ with the one of
% the estimated similarity (a), and a comparison of the values $\gamma^*$
% that \tMR, \tSIM\ and \tESIM\ compute.}
% \label{fig:realVsEstimated}
% \end{figure}
% The weak performance of the estimated similarity for small noise seems to be
% more surprising. To understand why this happens, consider
% Figure~\ref{fig:realVsEstimated}b\AGcomm{Fig ref broken} which shows the average value of $\gamma^*$
% that each method computes. Observe that for $\sigma_{\mathrm{\B}}<2.5$, the average value of
% $\gamma^*$ that the estimated similarity computes is below the one that
% \tMR\ computes, and since \tMR\ computes the
% smallest $\gamma$ for which the intersection of both $\gamma$-approximation sets
% is non-empty, the estimated similarity nearly always underestimates $\gamma$. To
% understand why this happens, we investigate the situation for $\sigma_{\mathrm{\B}}=1$ (low
% noise) and $\sigma_{\mathrm{\B}}=5$ (moderate noise). For each of the $\R=1000$ experiments,
% we compared the values of $\gamma^*$ that \tSIM\ computes with the
% ones of the estimated similarity. Figure~\ref{fig:realVsEstimatedPairs} shows
% the distribution of the points $(\gamma_\SIM^*,\gamma_\ESIM^*)$ where a point is
% red if \tSIM\ outperformed \tESIM, and green
% otherwise.
% \begin{figure}[t]
% \centering
% \begin{minipage}[t]{\textwidth}
% \small
% \centering
% \includegraphics[width=.9\linewidth]{figures/ch_generic_approach/realVsEstimatedGamma_sb1}
% (a)
% \end{minipage}\\
% \begin{minipage}[t]{\textwidth}
% \small
% \centering
% \includegraphics[width=.9\linewidth]{figures/ch_generic_approach/realVsEstimatedGamma_sb5}
% (b)
% \end{minipage}
% \caption{Each point corresponds to the outcome of one experiment, where the
% $x$-coordinate denotes the value $\gamma^*$ computed by \tSIM\ and the
% $y$-coordinate denotes the value $\gamma^*$ computed by \tESIM. A point
% is red if \tSIM\ outperformed \tESIM, and green otherwise. The
% experiments were performed for $\sigma_{\mathrm{\B}}=1$ (a), and for $\sigma_{\mathrm{\B}}=5$
% (b).}
% \label{fig:realVsEstimatedPairs}
% \end{figure}
% We see that for low noise, the estimated similarity nearly always
% underestimates $\gamma^*$. On the other hand, the choice of $\gamma^*$ for
% moderate noise is often better than the one by \tSIM. Hence, it
% seems that \tSIM\ is still too much influenced by the noise in the
% instances.
To summarize, we considered the expected similarity of two instances from the
same generator, and we derived an estimate of it that depends only on the
number of \good\ and \bad\ solutions, and on the respective cumulative distribution
functions. Our experiments showed that our estimate approximates the expected
similarity well when the noise is not too low.
%
Our experiments also showed that the $\gamma^*$ that maximizes the estimated
similarity does indeed help to identify \good\ solutions. In particular,
choosing a solution from the intersection of the corresponding
$\gamma^*$-approximation sets is a promising way of robust solving. One of the
possible next steps in this direction is to analyze how many \good\ and how
many \bad\ solutions this intersection contains in expectation.
\section{Gibbs Relaxation of the Approximation Set-Based Approach}
\label{sec:gibbs_relaxation_of_sim}
\subsection{Approximation Sets with Gibbs Weights}
\label{sec:gibbs_relaxation_of_sim_weights}
\index{Approximation Set Coding!Gibbs relaxation}
\index{Gibbs relaxation|see{Approximation Set Coding}}
\nomenclature[D, 06]{$w^G_\beta(c, X)$}{Gibbs weights\nomnorefeq}%
\nomenclature[D, 06a]{$\beta$}{inverse temperature\nomnorefeq}%
\citet{conf/isit/Buhmann10}, in addition to the approximation set-based
approach, introduced its Gibbs-relaxed version, which we present in this section.
The idea (we adapt it for the sake of notation alignment with the material of
this chapter) is as follows: using the maximum entropy principle
(Section~\ref{sec:background_max_entropy}) by~\citet{Jaynes82} from statistical
physics~\citep[see also][]{book/MezardM09}, for a real number $\beta \ge 0$, an
instance~$X$ and a solution~$c$, the Gibbs weight \index{Gibbs weights} of $c$
is defined as $w^G_\beta(c, X)
\coloneqq \exp(-\beta R(c, X))$. Now one computes a value $\beta^*$ that
maximizes
\begin{align}
\label{eq:gibbs_realaxation}
\beta^* = \arg \max_{\beta > 0}
\log \biggl(|\C| \frac{\sum_{c\in\mathcal{C}}\big(w^G_\beta(c,X')\cdot w^G_\beta(c,X'')
\big)}{\big(\sum_{c\in\mathcal{C}} w^G_\beta(c,X')\big)\cdot
\big(\sum_{c\in\mathcal{C}} w^G_\beta(c,X'')\big)} \biggr),
\end{align}
\nomenclature[D, 06c]{$\beta^*$}{optimal inverse temperature}%
or, since the optimization goal is the same (see remark
after~\eqref{eq:similarity_maximization_objective}), maximizes the ratio
\begin{align}
\label{eq:gibbs_similarity_maximization_objective}
\beta^* = \arg \max_{\beta > 0}
\frac{\sum_{c\in\mathcal{C}}\big(w^G_\beta(c,X')\cdot w^G_\beta(c,X'')
\big)}{\big(\sum_{c\in\mathcal{C}} w^G_\beta(c,X')\big)\cdot
\big(\sum_{c\in\mathcal{C}} w^G_\beta(c,X'')\big)},
\end{align}
and then samples a solution $c$ from the whole solution space $\mathcal{C}$ with
probability
\[
p_\beta(c) = \frac{w^G_{\beta^*}(c,X')\cdot w^G_{\beta^*}(c,X'')}{\sum_{c'\in
\mathcal{C}} (w^G_{\beta^*}(c',X')\cdot w^G_{\beta^*}(c',X''))}.
\]
We refer to this as the \textit{Gibbs relaxation of the approximation set-based
approach}.
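A minimal Python sketch of this procedure is given below. It assumes that the solution space is
enumerated by the positions of the cost vectors, and it uses a simple grid search over $\beta$
purely for illustration; any one-dimensional optimizer could be used instead. Shifting the costs
by their minimum leaves the maximized ratio unchanged and avoids numerical underflow.
\begin{verbatim}
import numpy as np

def gibbs_objective(beta, costs1, costs2):
    """Ratio maximized in the Gibbs relaxation (invariant under shifting the costs)."""
    w1 = np.exp(-beta * (costs1 - costs1.min()))
    w2 = np.exp(-beta * (costs2 - costs2.min()))
    return np.sum(w1 * w2) / (np.sum(w1) * np.sum(w2))

def gibbs_relaxation(costs1, costs2, betas=np.linspace(0.01, 10.0, 500), rng=None):
    """Pick beta* by grid search and sample one solution index with probability
    proportional to the product of the Gibbs weights."""
    rng = np.random.default_rng() if rng is None else rng
    beta_star = max(betas, key=lambda b: gibbs_objective(b, costs1, costs2))
    joint = costs1 + costs2
    p = np.exp(-beta_star * (joint - joint.min()))
    p /= p.sum()
    return int(rng.choice(len(p), p=p)), float(beta_star)
\end{verbatim}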
\subsection{Relation of Similarity and Gibbs Similarity}
Interestingly, the classical approximation set-based
approach~\eqref{eq:similarity_maximization_objective} and its Gibbs
relaxation~\eqref{eq:gibbs_similarity_maximization_objective} have a clear
relation: for a number $\gamma\ge 0$, an instance $X$ and a solution~$c$ we
define a 0-1-weight $w^\Ind_\gamma(c, X)$ that is $1$ if and only if $R(c,
X)\le R(c^\perp,X) +
\gamma$, and 0 otherwise. It is easy to see that
\begin{align}
\label{eq:approximation_set_sum}
|{\mathcal{C}_\gamma}(X')| &= \sum_{c\in\mathcal{C}} w^\Ind_\gamma(c, X') \notag \\
|{\mathcal{C}_\gamma}(X'')| &= \sum_{c\in\mathcal{C}} w^\Ind_\gamma(c, X'') \notag \\
|{\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')| &= \sum_{c\in\mathcal{C}}
\big(w^\Ind_\gamma(c,X')\cdot w^\Ind_\gamma(c,X'')\big).
\end{align}
\nomenclature[D, 06c]{$w^\Ind_\gamma(c, X)$}{indicator weights}%
With these equalities it follows that the objective of maximizing
$S_\gamma(X',X'')$ in~\eqref{eq:similarity_maximization_objective} corresponds
to the one of~\eqref{eq:gibbs_similarity_maximization_objective} in which the
0-1-weights $w^{\Ind}_\gamma$ are substituted for the Gibbs weights $w^G_\beta$.
%
Moreover, notice that $w^\Ind_\gamma(c,X')\cdot w^\Ind_\gamma(c,X'')=1$ if
and only if $c\in {\mathcal{C}_\gamma}(X')\cap {\mathcal{C}_\gamma}(X'')$. Hence, sampling a solution from
$\mathcal{C}$ with a probability proportional to $w^\Ind_{\gamma^*}(c,X')
\cdot w^\Ind_{\gamma^*}(c,X'')$ corresponds to sampling a solution from
$\mathcal{C}_{\gamma^*}(X')\cap \mathcal{C}_{\gamma^*}(X'')$ uniformly at random.
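To make this correspondence concrete, the following minimal sketch evaluates the empirical
similarity through exactly these indicator weights; the cost vectors are assumed to enumerate
the finite solution space, and the minimum cost plays the role of $R(c^\perp, X)$.
\begin{verbatim}
import numpy as np

def indicator_similarity(gamma, costs1, costs2):
    """Empirical similarity S_gamma expressed through the 0-1 weights."""
    w1 = costs1 <= costs1.min() + gamma    # indicator weights for X'
    w2 = costs2 <= costs2.min() + gamma    # indicator weights for X''
    n = len(costs1)                        # |C|
    return n * np.sum(w1 & w2) / (np.sum(w1) * np.sum(w2))
\end{verbatim}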
%
Similar to the parameter $\gamma$ in Equation~\eqref{eq:simple}, the parameter
$\beta$ (called ``inverse temperature'' in statistical physics\footnote{Much
more on that will be given in Chapter~\ref{ch:free_energy}.})
\index{Inverse temperature} controls
the number of solutions that are taken into account. For $\beta=0$, all
solutions have the same weight 1 (corresponding to the case $\gamma=\infty$ in
which the intersection contains every solution in $\mathcal{C}$), while for
$\beta\to \infty$ the distribution concentrates on the solutions with the
minimum joint cost. Hence, by its semantics, the parameter $\beta$
in~\eqref{eq:gibbs_similarity_maximization_objective} is an
``inverse'' of the parameter~$\gamma$ in~\eqref{eq:simple}.
\subsection{Experimental Results}
\label{sec:gen_appch_gibbs_experiments}
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g50_b950_gibbs}
\caption{$5\%$ of solutions are \good.}
\label{fig:gnm_5_gibbs}
\end{subfigure}
\hfill
\begin{subfigure}[b]{.49\textwidth}
\includegraphics[width=\linewidth]{figures/ch_generic_approach/gnm_g100_b900_gibbs}
\caption{$10\%$ of solutions are \good.}
\label{fig:gnm_10_gibbs}
\end{subfigure}
\\[.5cm]
\caption{Gibbs relaxation shows almost the same performance. Experimental
results where $5\%$ \textbf{(a)} and $10\%$ \textbf{(b)}. Model and setting
are the same as in Section~\ref{sec:proof_of_concept}.}
\label{fig:gnm_gibbs}
\end{figure}
Since the Gibbs relaxation chooses every solution $c \in \mathcal{C}$ with a probability
proportional to $w^G_{\beta^*}(c, X')\cdot w^G_{\beta^*}(c, X'')$, we define its
success probability as
\begin{equation}
P_\mathscr{A}^G(X',X'')
\coloneqq \frac{\sum_{c \in \sgood} w^G_{\beta^*}(c, X')\cdot w^G_{\beta^*}(c, X'')}
{\sum_{c \in \mathcal{C}} w^G_{\beta^*}(c, X')\cdot w^G_{\beta^*}(c, X'')}
\end{equation}
where $\beta^*$ is the value $\beta$ that
maximizes~\eqref{eq:gibbs_similarity_maximization_objective}. Notice that the
sums in the numerator and the denominator are computed over different sets of
solutions, and that this formula is the direct analogue
of~\eqref{eq:uncert:prob_succ}.
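In code, this success probability is again only a few lines; the sketch below assumes, as
before, that the \good\ solutions occupy the first positions of the cost vectors.
\begin{verbatim}
import numpy as np

def gibbs_success_probability(beta_star, costs1, costs2, n_good):
    """Total Gibbs sampling mass assigned to the good solutions."""
    joint = costs1 + costs2
    w = np.exp(-beta_star * (joint - joint.min()))   # shift does not change the ratio
    p = w / w.sum()
    return float(p[:n_good].sum())
\end{verbatim}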
Figure~\ref{fig:gnm_gibbs} shows, under the same setting as in
Section~\ref{sec:proof_of_concept}, that the Gibbs relaxation yields
almost the same performance and can thus be considered a viable variant
of the approximation set-based approach. This idea will be exploited extensively
in Chapter~\ref{ch:free_energy}.
\section{Discussion and Conclusion}
\label{sec:gen_appch_conclusion}
In this chapter, we introduced an approximation set-based approach to robust
optimization and justified it via the so-called Approximation Set Coding, which
provides an information-theoretic background. Below, we elaborate on some
points which, in our view, highlight its most interesting and/or
controversial properties.
\subsection*{Role of the logarithm}
Consider the comparison between the empirical ASC score~\eqref{eq:asc_mutual_information_formula}
\begin{equation*}
\log
\frac{|\C| \; |\Delta \mathcal{C}_\gamma(X', X'')|}%
{|\mathcal{C}_\gamma(X')| \; |\mathcal{C}_\gamma(X'')|}
\end{equation*}
and the empirical similarity score~\eqref{eq:simple}:
\begin{equation*}
\frac{|\mathcal{C}||{\mathcal{C}_\gamma}(X') \cap {\mathcal{C}_\gamma}(X'')|}
{|{\mathcal{C}_\gamma}(X')||{\mathcal{C}_\gamma}(X'')|}.
\end{equation*}
Note that the only difference is the logarithm in front of the
ASC score. Although both scores attain their maxima at the same $\gamma$, this logarithm
will be of essential importance later in Chapter~\ref{ch:free_energy}, so
it is instructive to explain the difference.
The numerator and the denominator of the score can be seen as the
alternative and the null hypothesis, respectively, in statistical hypothesis
testing (the likelihood ratio test). One can view the use of the logarithm of
the likelihood ratio as a tool for ensuring asymptotic normality of estimators
in the case of weak coupling. For the main objective of this chapter, using the
logarithm had no special implications, since we were interested in the $\gamma^*$
which maximizes the score, but not in the score itself.
We should also note that the coding argument which we used when deriving ASC
implies that the logarithm allows one to quantify the informativeness/capacity in \textit{bits}
(or \textit{nats}, depending on the base of the logarithm in use). This
turns the ASC score into an interpretable value, namely one
answering the question ``how many bits of information can the model extract?''.
\index{Nat (measure of information)}
\subsection*{Is the way of defining approximation unique?}
As one can see from the material of this chapter, the whole approximation
set-based approach rests on \textit{some notion} of closeness of a given
solution to the optimal one ($c^\bot$). We quantified this notion in terms of the
parameters $\gamma$ or (in the case of the Gibbs relaxation) $\beta$. But there
exists a whole zoo of other possible parametrizations, for example,
parametrizing by the step $t$ of a stepwise algorithm. However, we advocate the
point of view that such a parametrization should yield a local topology around
each solution according to the following informal procedure:
\begin{enumerate}
\item define a certain measure of local closeness of solutions around the optimal one;
\item make an assumption: each solution \textit{is} the optimal solution for some
input;
\item local approximation topologies induced by the above create a ``cover''
of the whole solution space;
\index{Topology}
\index{Local approximation topology}
\item derive conditions under which such a covering by local topologies can be
turned into a metric space (metrization theorems); \index{Metrization theorems}
\item the above allows one to create a uniform (i.e. non-local) closeness relation.
\end{enumerate}
This high-level roadmap gives some insight into the final goal of such a journey:
to understand the structure of the solution space in a problem-specific manner.
\subsection*{Are all solutions in the intersection created equal?}
Our method expects all solutions in the best approximation set intersection to
be equally desirable (e.g., equally good for a third, unknown instance). In some
cases, it might be useful to choose the solution based on some problem-specific
criterion, e.g., choose the solution closest to the centroid of the intersection
set.
\subsection*{Will more input lead to better results?}
We mostly studied the two instance scenario because this is the minimum number
of instances necessary to distinguish information from noise. Often, however,
more than two instances are available. The extension to multiple instances is
not immediately obvious.
There are several ways of addressing it: (a) first, one can break it into pairs
and average; this is how the framework is intended to be used in practice; (b)
second, one can derive a version of ASC for multiple agents. The latter approach
sounds much more interesting from the research perspective, as it is not clear
what the channel analogy would be in the case of several agents (remember, in the
two instance scenario, we considered one data point as a codebook benchmark, and
the other as an error problem generator). One can also go in the direction of a
straightforward generalization of the similarity
formula~\eqref{eq:asc_mutual_information_formula}. In the course of our
research, some attempts have been made in that direction and they yielded
promising results.
\subsection*{Can we find efficient algorithmic solutions?}
The remark after Theorems~\ref{thm:simple}--\ref{thm:worst_case} indicates that one
of the pitfalls of the approximation set-based approach lies in the computation of
the similarity score. While we used brute-force enumeration for our
proof-of-concept experiments, it would be of great importance to find either
(a) analytical estimations for the similarity score or (b) efficient algorithms
for computing it.
In this chapter, we tackled case (a) and made an attempt to derive a very simple
analytical estimator, which uses knowledge of the true distributions. This
assumption, of course, prevents its direct use in real cases, but plug-in
estimators of the true distributions can be used instead.
Approach (a) will also be tackled, on a much higher level that uses less
information about the true distributions, in Chapter~\ref{ch:free_energy}.
In specific cases, such as applications to combinatorial algorithms, approach
(b) can be pursued by exploiting the combinatorial structure of the solutions. This will
be shown in Chapter~\ref{ch:mst}.
\subsection*{Similarity as a computational goal-specific measure}
An interesting side result that we did not focus on in this chapter is the
expressiveness of the instance similarity $S_{\gamma^*}$. In fact, it utilizes a
\textit{computational goal-induced} topology on the set of solutions. We give
here a motivating example, which was best described in~\citep{jcss:2017}.
\index{Topology}
\index{Computational goal-induced topology}
For example, consider the problem of computing a shortest path between two given
vertices in a graph $G$.
%
Given two instances $X'$ and $X''$ of this problem, one may attempt to measure
the similarity of these instances using some structural information exposed to
us, e.g.,~the correlation coefficient or the Euclidean distance between the
vectors containing the edge weights.
\index{Euclidean distance}
However, if the instances differ a lot only in some weights which are typically
high, so that the corresponding edges are never used in any nearly-shortest path, then
the similarity approach will correctly consider such instances as similar,
whereas, for example, the correlation coefficient will tell the opposite.
%
At the same time, if the computational goal were a maximum matching of edges
rather than minimizing the weight cost, the similarity would regard the two
instances as significantly different.
%
This example highlights the need for a measure of \emph{similarity of instances
with respect to a computational goal}. This is performed by inducing a local topology
around each solution, and this topology depends only on the computational goal
and not on anything else.
| {
"alphanum_fraction": 0.7105452712,
"avg_line_length": 49.6708407871,
"ext": "tex",
"hexsha": "5378690453ced3131532d5e5fabb19fb2c240cbd",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "agronskiy/phd-thesis",
"max_forks_repo_path": "thesis/ch_generic_approach/ch_generic_approach.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "agronskiy/phd-thesis",
"max_issues_repo_path": "thesis/ch_generic_approach/ch_generic_approach.tex",
"max_line_length": 162,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "182fcc5c09c8aa20df54cf536eb87766bfb6c353",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "agronskiy/phd-thesis",
"max_stars_repo_path": "thesis/ch_generic_approach/ch_generic_approach.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 33663,
"size": 111064
} |
\chapter{Belle~\RN{2}}
\label{chap:belle2_experiment}
\section{Experiment}
\label{sec:experimental}
The Belle~\RN{2} experiment is performed at the SuperKEKB accelerator located in Tsukuba, Japan. It is mainly designed to study B mesons.
In the experiment, asymmetric electron and positron beams are collided at a center-of-mass energy of $\sqrt{s} = 10.58 \mathrm{~GeV}$, exactly on the $\Upsilon (4S)$ resonance. The two beams, positrons at $4 \mathrm{~GeV}$ and electrons at $7 \mathrm{~GeV}$, are focussed to a narrow crossing section. The additional boost in one direction is used to measure the $B$ meson lifetimes.
In comparison to the predecessor experiment Belle, the integrated luminosity will be $50~{ab}^{-1}$ and hence $50$ times higher. The instantaneous luminosity will be $8 \cdot 10^{35} \mathrm{cm}^{-2} \mathrm{s}^{-1}$ which represents a $40$-fold increase.
\section{Detector system}
\label{sec:detector_system}
\subsection{Overview}
\label{sec:detector_system_overview}
The Belle~\RN{2} detector system~\cite{Abe:2010gxa,Pulvermacher:SuperKEKBDetectorComponents,Pulvermacher:AnalysisSoftware} is a composition of multiple detectors, each measuring a subset of a particle's properties. Its design is depicted in~\autoref{fig:belle2_detector_design_white_paper}.
The inner three detectors -- \textbf{P}i\textbf{X}el \textbf{D}etector (PXD), \textbf{S}ilicon \textbf{V}ertex \textbf{D}etector (SVD) and \textbf{C}entral \textbf{D}rift \textbf{C}hamber (CDC) -- record the position of traversing charged particles. Hence they are also called tracking detectors. They are located in a homogeneous magnetic field of $1.5~\mathrm{T}$.
The innermost detector is the PXD. Together with the SVD which surrounds the PXD, it is used to reconstruct decay vertices and identify tracks belonging to particles with low-momenta.
The CDC measures the momentum and charge of particles via their curvature in the magnetic field.
Next, the \textbf{T}ime \textbf{O}f \textbf{P}ropagation (TOP) counter (`Barrel~PID') and the \textbf{A}erogel \textbf{R}ing-\textbf{I}maging \textbf{CH}erenkov (ARICH) counter (`Endcap~PID') are used to identify charged particles via their emission of Cherenkov radiation in the detector. However, there is no such installation for the backwards-facing endcap of the detector due to the asymmetric beams.
The \textbf{E}lectromagnetic \textbf{C}a\textbf{L}orimeter (ECL) identifies photons and electrons.
The outermost detector called $\boldsymbol{K}^0_{\boldsymbol{L}}$/$\boldsymbol{\mu}$ (KLM) is used to identify kaons and muons.\footnotemark{}
\footnotetext{If not specifically stated otherwise, the charge conjugate of a particle is implied.}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth,height=0.6\textheight,keepaspectratio]{{{../res/Belle 2 detector design white paper (truncated)}}}
\caption{Side view of the upper half of the Belle~\RN{2} detector. Adapted from~\cite{Abe:2010gxa}.}
\label{fig:belle2_detector_design_white_paper}
\end{figure}
\subsection{Silicon detectors}
\label{sec:detector_system_silicon_detectors}
The PXD and SVD consist of tiny doped silicon chips which yield the location of electron-hole pairs created by particles passing through them. The PXD detector uses small pixels while the SVD detector uses strips of detector material. Therefore, the PXD detector is better able to differentiate multiple simultaneous tracks while the SVD allows for a faster readout and is less prone to noise.
\subsection{Central drift chamber}
\label{sec:detector_system_tracking_detectors}
The CDC, which surrounds the PXD and SVD, consists of a collection of field wires and sense wires located in a gas-filled volume. The sense wires are used to measure the current produced by electromagnetic showers, which are caused by particles passing through the gas. The wires are close to being parallel to the beampipe but have a slight twist. This allows the detector not only to obtain an excellent estimation of the transverse distance to the beam pipe but also to provide information about the longitudinal position.
\subsection{Barrel and endcap PID}
\label{sec:detector_system_barrel_and_endcap_pid}
The Cherenkov effect is used to measure the velocity of particles in the TOP and ARICH detectors. Charged particles which travel faster than the speed of light in the medium -- quartz in the case of the TOP detector and aerogel for the ARICH detector -- produce light. The velocity can be calculated by measuring the time of propagation and the angle of the emitted light.
\subsection{Electromagnetic calorimeter}
\label{sec:detector_system_electromagnetic_calorimeter}
The main purpose of the ECL detector is to determine the energy of photons and electrons. Both particle species excite the medium and create electromagnetic showers. The light of the de-excitation can subsequently be measured.
\subsection{$\boldsymbol{K}^0_{\boldsymbol{L}}$/$\boldsymbol{\mu}$ detector}
\label{sec:detector_system_k0lmu}
Last but not least, the KLM detector identifies kaons and muons which have passed through the previous layers of the system. When traversing this detector, the particles pass through plates serving as electrodes separated by layers of gas in between them. Ionized particles created by the incident particle are accelerated in this field and subsequently produce a spark picked up by the detector.
\section{Interaction with matter}
\label{sec:interaction_with_matter}
\subsection{Charged particle interaction}
\label{sec:interaction_with_matter}
Particles with non-zero charge mainly interact with the medium electromagnetically. In general, an interaction occurs either by scattering in the electric field of the atom, polarization of the medium, ionization or excitation. Besides, hadrons may scatter at the atom itself. Particles and their anti-particles additionally have the ability to annihilate.
The polarization of the medium causes Cherenkov radiation to be emitted. At velocities below the speed of light in the medium ($v < c/n$), only bounds on the particle's velocity may be determined. However, at $v > c/n$ the Cherenkov effect can be observed. The effect occurs because the information about the charge of the traversing particle does not reach the medium in front of it soon enough. Hence, the medium behind the particle is already aligned with the electric field while the medium in front is not. The result is an electromagnetic wave. The angle between the normal vector of the wave and the track of the particle is given by
\begin{equation}
\cos(\Theta_{c}) = \frac{1}{v/c \cdot n} = \frac{1}{\beta \cdot n}
\mathrm{.}
\end{equation}
This is the effect which the TOP and ARICH detectors exploit.
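As a quick numerical illustration (the refractive index value is approximate and chosen only for this example), the velocity can be recovered from a measured Cherenkov angle as follows.
\begin{verbatim}
import math

def beta_from_cherenkov_angle(theta_c, n):
    """Invert cos(theta_c) = 1 / (beta * n) for the velocity beta = v/c."""
    return 1.0 / (n * math.cos(theta_c))

# Example: quartz radiator with n of about 1.47 and theta_c = 0.8 rad
print(beta_from_cherenkov_angle(0.8, 1.47))   # roughly 0.98
\end{verbatim}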
Particles with low energy traveling through a medium interact predominantly with atomic electrons. The average energy loss of a particle is described by the Bethe-Bloch formula. It is given by $\left< \mathrm{d}E/\mathrm{d}x \right> \propto 1/{\beta^2}$ for velocities of up to about $90\%$ of the speed of light and is minimal at $\beta \gamma \approx 4$. It describes the momentum dependency of the average energy loss. However, the actual shape of $\mathrm{d}E/\mathrm{d}x$ is modelled by the Landau distribution. Note that the initial assumptions needed for this formula are not met for electrons. This is because both participants of the interaction belong to the same species and have identical masses.
A $\mathrm{d}E/\mathrm{d}x$-measurement is performed at the silicon detectors and the CDC.
The interaction with the electromagnetic field of the nucleus is the dominant cause for the energy loss of high-energy particles. Energy is radiated away via so called Bremsstrahlung. The leftover energy decreases exponentially with the distance traversed and is inversely proportional to the square root of the mass. Therefore it is mainly important for particles with a low mass, e.g., electrons. The radiation due to this effect is predominantly measured by the ECL.
\subsection{Particle identification}
\label{sec:particle_identification}
At Belle~\RN{2} the detector system differentiates among six long-lived particle species: $K, \pi, e, \mu, p \text{ and } \hbox{deuteron}$.
The $\mathrm{d}E/\mathrm{d}x$-measurements from the silicon detectors and the CDC are among the most useful measurements. \autoref{fig:de_dx_for_SVD} showcases this for one of the tracking detectors for momenta below $1 \mathrm{~GeV/c}$. Distinct patterns may be observed for various particle species below this momentum threshold.
New tracks can now be assigned a likelihood of producing the measured detector signal given that they belong to a certain particle species. This is done by postulating a hypothesis for the loss of energy for each such species.
ARICH, TOP and CDC furthermore extend the identification and are able to differentiate among $K, \pi, p \text{ and } \hbox{deuteron}$ but also contribute to $e$ and $\mu$ identification. They provide likelihoods for each signal given a particle hypothesis.
Further out, the ECL detector provides a good separation of electrons from other charged particles above $1 \mathrm{~GeV/c}$. It is able to do so by measuring $E/p$ of the shower. The detector response is provided by estimating the degree of agreement of different particle hypotheses with the signal. An exemplary $E/p$ curve is shown in \autoref{fig:e_p_for_ECL}. It demonstrates the observable difference for electrons compared to other particle species but also shows that no clear separation of pions and muons is possible.
The KLM detector provides a good separation between muons and non-muons and contributes to the discrimination in the form of different likelihoods as well.
\begin{figure}[ht]
\centering
\subcaptionbox{$\mathrm{d}E/\mathrm{d}x$\label{fig:de_dx_for_SVD}}{
\includegraphics[width=0.43\textwidth,height=\textheight,keepaspectratio]{{{../res/dE dx for SVD detector by particles}}}
}
\hspace{2em}
\subcaptionbox{$E/p$\label{fig:e_p_for_ECL}}{
\includegraphics[width=0.43\textwidth,height=\textheight,keepaspectratio]{{{../res/E p for ECL detector by particles}}}
}
\caption[Separation of different particle species for various simulated signals. Taken from~\cite{Belle2Collaboration:B2TiP}.]{
Separation of different particle species for various simulated signals. Taken from~\cite{Belle2Collaboration:B2TiP}.
\textbf{Figure~\subref{fig:de_dx_for_SVD}} shows the $\mathrm{d}E/\mathrm{d}x$ means as a function of momentum in the SVD with the color encoding the number of hits. In order to reduce outliers the lowest $5\%$ and highest $25\%$ of the measurements of each track are not used in the estimation.
\textbf{Figure~\subref{fig:e_p_for_ECL}} shows the $E/p$ distribution for different particle species.
}
\end{figure}
| {
"alphanum_fraction": 0.7923775964,
"avg_line_length": 98.0363636364,
"ext": "tex",
"hexsha": "7049cf3a321061c3371aaa2a91460853d897bad0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Edenhofer/PID-boost",
"max_forks_repo_path": "doc/thesis/chapters/belle2.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Edenhofer/PID-boost",
"max_issues_repo_path": "doc/thesis/chapters/belle2.tex",
"max_line_length": 708,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9922a1fd3e5fbc39f701aa18cb4d2df37ead9693",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Edenhofer/PID-boost",
"max_stars_repo_path": "doc/thesis/chapters/belle2.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2652,
"size": 10784
} |
% $Id$ %
\subsection{Movie Player}
Play movies on your \dap! In order to do this, movies must be in AVI format,
and then converted to \fname{.RVF}, Rockbox's own video format. For more
details on how to use this plugin, please see \wikilink{VideoTutorial}.
| {
"alphanum_fraction": 0.7441860465,
"avg_line_length": 43,
"ext": "tex",
"hexsha": "93a64d74155222335737d56dacf5b3d9b6915447",
"lang": "TeX",
"max_forks_count": 15,
"max_forks_repo_forks_event_max_datetime": "2020-11-04T04:30:22.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-01-21T13:58:13.000Z",
"max_forks_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_forks_repo_path": "manual/plugins/movieplayer.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_issues_repo_issues_event_max_datetime": "2018-05-18T05:33:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-07-04T18:15:33.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_issues_repo_path": "manual/plugins/movieplayer.tex",
"max_line_length": 77,
"max_stars_count": 24,
"max_stars_repo_head_hexsha": "a701aefe45f03ca391a8e2f1a6e3da1b8774b2f2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Rockbox-Chinese-Community/Rockbox-RCC",
"max_stars_repo_path": "manual/plugins/movieplayer.tex",
"max_stars_repo_stars_event_max_datetime": "2022-01-05T14:09:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-03-10T08:43:56.000Z",
"num_tokens": 72,
"size": 258
} |
\section{User Guide}
This section outlines the steps needed to set up a Fuel Slosh State Effector in Python using Basilisk.
\begin{enumerate}
\item Import the linearSpringMassDamper class: \newline \textit{from Basilisk.simulation import linearSpringMassDamper}
\item Create an instantiation of a linear spring mass damper particle: \newline \textit{particle1 = linearSpringMassDamper.LinearSpringMassDamper()}
\item Define all physical parameters for a linear spring mass damper particle. For example: \newline
\textit{particle1.r\_PB\_B = [[0.1], [0], [-0.1]]}
Do this for all of the parameters for a linear spring mass damper particle seen in the public variables in the .h file.
\item Define the initial conditions of the states:\newline
\textit{particle1.rhoInit = 0.05 \quad particle1.rhoDotInit = 0.0}
\item Define a unique name for each state:\newline
\textit{particle1.nameOfRhoState = "linearSpringMassDamperRho" \quad particle1.nameOfRhoDotState = "linearSpringMassDamperRhoDot"}
\item Finally, add the particle to your spacecraftPlus:\newline
\textit{scObject.addStateEffector(particle1)}. See spacecraftPlus documentation on how to set up a spacecraftPlus object.
\end{enumerate}
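For convenience, the steps above can be collected into a single Python snippet. The sketch below assumes that a spacecraftPlus object named \textit{scObject} has already been created as described in the spacecraftPlus documentation, and only one physical parameter is shown as an example.
\begin{verbatim}
from Basilisk.simulation import linearSpringMassDamper

# Instantiate a linear spring mass damper particle (steps 1-2)
particle1 = linearSpringMassDamper.LinearSpringMassDamper()

# Physical parameters (step 3); the remaining parameters from the .h file
# are set analogously
particle1.r_PB_B = [[0.1], [0], [-0.1]]

# Initial conditions of the states (step 4)
particle1.rhoInit = 0.05
particle1.rhoDotInit = 0.0

# Unique state names (step 5)
particle1.nameOfRhoState = "linearSpringMassDamperRho"
particle1.nameOfRhoDotState = "linearSpringMassDamperRhoDot"

# Attach the particle to an existing spacecraftPlus object (step 6)
scObject.addStateEffector(particle1)
\end{verbatim}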
| {
"alphanum_fraction": 0.7952105698,
"avg_line_length": 67.2777777778,
"ext": "tex",
"hexsha": "e6c4fb497b0d093af8f1feb2aefc972ad93c8cd1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14",
"max_forks_repo_licenses": [
"0BSD"
],
"max_forks_repo_name": "ian-cooke/basilisk_mag",
"max_forks_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secUserGuide.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14",
"max_issues_repo_issues_event_max_datetime": "2019-03-13T20:52:22.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-03-13T20:52:22.000Z",
"max_issues_repo_licenses": [
"0BSD"
],
"max_issues_repo_name": "ian-cooke/basilisk_mag",
"max_issues_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secUserGuide.tex",
"max_line_length": 149,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "a8b1e37c31c1287549d6fd4d71fcaa35b6fc3f14",
"max_stars_repo_licenses": [
"0BSD"
],
"max_stars_repo_name": "ian-cooke/basilisk_mag",
"max_stars_repo_path": "src/simulation/dynamics/LinearSpringMassDamper/_Documentation/secUserGuide.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 323,
"size": 1211
} |
\section{Drivers}
\input{driver_max2831}
\input{driver_tx}
\input{driver_rx}
\input{driver_radio}
\input{led_counter}
| {
"alphanum_fraction": 0.7966101695,
"avg_line_length": 16.8571428571,
"ext": "tex",
"hexsha": "a1cd6fac5709a6a7bf5f0ba31ee1e3d05d10f01f",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-09-16T23:18:10.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-07-22T12:47:41.000Z",
"max_forks_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "lab11/uSDR",
"max_forks_repo_path": "manuals/tex/drivers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "lab11/uSDR",
"max_issues_repo_path": "manuals/tex/drivers.tex",
"max_line_length": 22,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "4eeab36bcbea0e65c81f615975916ffd35d7de0b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "lab11/uSDR",
"max_stars_repo_path": "manuals/tex/drivers.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-10T11:51:36.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-08-23T03:56:08.000Z",
"num_tokens": 37,
"size": 118
} |
%!TEX root = ../dissertation.tex
\chapter{Introduction}
\label{chapter:introduction}
Your introduction here...\\
A demonstration of how to use acronyms and glossary:
A \gls{MSc} entry.
Second use: \gls{IST}.
Plurals: \glspl{MSc}.
A citation example \cite{nobody}
%!TEX root = forallxcam.tex
\part{Truth-functional logic}
\label{ch.TFL}
\chapter{First steps to symbolisation}
\section{Validity in virtue of form}\label{s:ValidityInVirtueOfForm}
Consider this argument:
\begin{earg}
\item[] It is raining outside.
\item[] If it is raining outside, then Jenny is miserable.
\item[So:] Jenny is miserable.
\end{earg}
and another argument:
\begin{earg}
\item[] Jenny is an anarcho-syndicalist.
\item[] If Jenny is an anarcho-syndicalist, then Dipan is an avid reader of Tolstoy.
\item[So:] Dipan is an avid reader of Tolstoy.
\end{earg}
Both arguments are valid, and there is a straightforward sense in which we can say that they share a common structure. We might express the structure thus:
\begin{earg}
\item[] A
\item[] If A, then C
\item[So:] C
\end{earg}
This looks like an excellent argument \emph{structure}. Indeed, surely any argument with this \emph{structure} will be valid. And this is not the only good argument structure. Consider an argument like:
\begin{earg}
\item[] Jenny is either happy or sad.
\item[] Jenny is not happy.
\item[So:] Jenny is sad.
\end{earg}
Again, this is a valid argument. The structure here is something like:
\begin{earg}
\item[] A or B
\item[] not-A
\item[So:] B
\end{earg}
A superb structure! And here is a final example:
\begin{earg}
\item[] It's not the case that Jim both studied hard and acted in lots of plays.
\item[] Jim studied hard
\item[So:] Jim did not act in lots of plays.
\end{earg}
This valid argument has a structure which we might represent thus:\newpage
\begin{earg}
\item[] not-(A and B)
\item[] A
\item[So:] not-B
\end{earg}
The examples illustrate an important idea, which we might describe as \emph{validity in virtue of form}. The validity of the arguments just considered has nothing very much to do with the meanings of English expressions like `Jenny is miserable', `Dipan is an avid reader of Tolstoy', or `Jim acted in lots of plays'. If it has to do with meanings at all, it is with the meanings of phrases like `and', `or', `not', and `if\ldots, then\ldots'.
In this chapter, I will develop a formal language which allows us to symbolise many arguments in such a way as to show that they are valid in virtue of their form. That language will be \emph{truth-functional logic}, or TFL.
\section{Validity for special reasons}
There are plenty of arguments that are valid, but not for reasons relating to their form. Take an example:
\begin{earg}
\item[] Juanita is a vixen
\item[So:] Juanita is a fox
\end{earg}
It is impossible for the premise to be true and the conclusion false. So the argument is valid. But the validity is not related to the form of the argument. Here is an invalid argument with the same form:
\begin{earg}
\item[] Juanita is a vixen
\item[So:] Juanita is a cathedral
\end{earg}
This might suggest that the validity of the first argument \emph{is} keyed to the meaning of the words `vixen' and `fox'. But, whether or not that is right, it is not simply the \emph{shape} of the argument that makes it valid. Equally, consider the argument:
\begin{earg}
\item[] The sculpture is green all over.
\item[So:] The sculpture is not red all over.
\end{earg}
Again, it seems impossible for the premise to be true and the conclusion false, for nothing can be both green all over and red all over. So the argument is valid. But here is an invalid argument with the same form:
\begin{earg}
\item[] The sculpture is green all over.
\item[So:] The sculpture is not shiny all over.
\end{earg}
The argument is invalid, since it is possible to be green all over and shiny all over. (I might paint my nails with an elegant shiny green varnish.) Plausibly, the validity of the first argument is keyed to the way that colours (or colour-words) interact. But, whether or not that is right, it is not simply the \emph{shape} of the argument that makes it valid.
The important moral can be stated as follows. \emph{At best, TFL will help us to understand arguments that are valid due to their form.}
\section{Atomic sentences}
I started isolating the form of an argument, in \S\ref{s:ValidityInVirtueOfForm}, by replacing \emph{subsentences} of sentences with individual letters. Thus in the first example of this section, `it is raining outside' is a subsentence of `If it is raining outside, then Jenny is miserable', and we replaced this subsentence with `A'.
Our artificial language, TFL, pursues this idea absolutely ruthlessly. We start with some \emph{atomic sentences}. These will be the basic building blocks out of which more complex sentences are built. We will use uppercase italic letters for atomic sentences of TFL. There are only twenty-six letters of the alphabet, but there is no limit to the number of atomic sentences that we might want to consider. By adding subscripts to letters, we obtain new atomic sentences. So, here are five different atomic sentences of TFL:
$$A, P, P_1, P_2, A_{234}$$
We shall use atomic sentences to represent, or symbolise, certain English sentences. To do this, we provide a \define{symbolisation key}, such as the following:
\begin{ekey}
\item[A] It is raining outside
\item[C] Jenny is miserable
\end{ekey}
In doing this, we are not fixing this symbolisation \emph{once and for all}. We are just saying that, for the time being, we shall think of the atomic sentence of TFL, `$A$', as symbolising the English sentence `It is raining outside', and the atomic sentence of TFL, `$C$', as symbolising the English sentence `Jenny is miserable'. Later, when we are dealing with different sentences or different arguments, we can provide a new symbolisation key; as it might be:
\begin{ekey}
\item[A] Jenny is an anarcho-syndicalist
\item[C] Dipan is an avid reader of Tolstoy
\end{ekey}
But it is important to understand that whatever structure an English sentence might have is lost when it is symbolised by an atomic sentence of TFL. From the point of view of TFL, an atomic sentence is just a letter. It can be used to build more complex sentences, but it cannot be taken apart.
\chapter{Connectives}\label{s:TFLConnectives}
In the previous section, we considered symbolising fairly basic English sentences with atomic sentences of TFL. This leaves us wanting to deal with the English expressions `and', `or', `not', and so forth. These are \emph{connectives}---they can be used to form new sentences out of old ones. And in TFL, we shall make use of logical connectives to build complex sentences from atomic components. There are five logical connectives in TFL. This table summarises them, and they are explained throughout this chapter.
\begin{table}[h]
\center
\begin{tabular}{l l l}
\textbf{symbol}&\textbf{what it is called}&\textbf{rough meaning}\\
\hline
\enot&negation&`It is not the case that$\ldots$'\\
\eand&conjunction&`Both$\ldots$\ and $\ldots$'\\
\eor&disjunction&`Either$\ldots$\ or $\ldots$'\\
\eif&conditional&`If $\ldots$\ then $\ldots$'\\
\eiff&biconditional&`$\ldots$ if and only if $\ldots$'\\
\end{tabular}
\end{table}
\section{Negation}
Consider how we might symbolise these sentences:
\begin{earg}
\item[\ex{not1}] Mary is in Barcelona.
\item[\ex{not2}] It is not the case that Mary is in Barcelona.
\item[\ex{not3}] Mary is not in Barcelona.
\end{earg}
In order to symbolise sentence \ref{not1}, we will need an atomic sentence. We might offer this symbolisation key:
\begin{ekey}
\item[B] Mary is in Barcelona.
\end{ekey}
Since sentence \ref{not2} is obviously related to the sentence \ref{not1}, we shall not want to symbolise it with a completely different sentence. Roughly, sentence \ref{not2} means something like `It is not the case that B'. In order to symbolise this, we need a symbol for negation. We will use `\enot'. Now we can symbolise sentence \ref{not2} with `$\enot B$'.
Sentence \ref{not3} also contains the word `not'. And it is obviously equivalent to sentence \ref{not2}. As such, we can also symbolise it with `$\enot B$'.
\factoidbox{
A sentence can be symbolised as $\enot\meta{A}$ if it can be paraphrased in English as `It is not the case that\ldots'.
}
It will help to offer a few more examples:
\begin{earg}
\item[\ex{not4}] The widget can be replaced.
\item[\ex{not5}] The widget is irreplaceable.
\item[\ex{not5b}] The widget is not irreplaceable.
\end{earg}
Let us use the following representation key:
\begin{ekey}
\item[R] The widget is replaceable
\end{ekey}
Sentence \ref{not4} can now be symbolised by `$R$'. Moving on to sentence \ref{not5}: saying the widget is irreplaceable means that it is not the case that the widget is replaceable. So even though sentence \ref{not5} does not contain the word `not', we shall symbolise it as follows: `$\enot R$'.
Sentence \ref{not5b} can be paraphrased as `It is not the case that the widget is irreplaceable.' Which can again be paraphrased as `It is not the case that it is not the case that the widget is replaceable'. So we might symbolise this English sentence with the TFL sentence `$\enot\enot R$'. %(In English, double-negation tends to cancel out: sentence \ref{not5b} says something very similar to `the widget is replaceable'.)
But some care is needed when handling negations. Consider:
\begin{earg}
\item[\ex{not6}] Jane is happy.
\item[\ex{not7}] Jane is unhappy.
\end{earg}
If we let the TFL-sentence `$H$' symbolise `Jane is happy', then we can symbolise sentence \ref{not6} as `$H$'. However, it would be a mistake to symbolise sentence \ref{not7} with `$\enot{H}$'. If Jane is unhappy, then she is not happy; but sentence \ref{not7} does not mean the same thing as `It is not the case that Jane is happy'. Jane might be neither happy nor unhappy; she might be in a state of blank indifference. In order to symbolise sentence \ref{not7}, then, we would need a new atomic sentence of TFL.
\section{Conjunction}\label{s:ConnectiveConjunction}
Consider these sentences:
\begin{earg}
\item[\ex{and1}]Adam is athletic.
\item[\ex{and2}]Barbara is athletic.
\item[\ex{and3}]Adam is athletic, and also Barbara is athletic.
\end{earg}
We will need separate atomic sentences of TFL to symbolise sentences \ref{and1} and \ref{and2}; perhaps
\begin{ekey}
\item[A] Adam is athletic.
\item[B] Barbara is athletic.
\end{ekey}
Sentence \ref{and1} can now be symbolised as `$A$', and sentence \ref{and2} can be symbolised as `$B$'. Sentence \ref{and3} roughly says `A and B'. We need another symbol, to deal with `and'. We will use `\eand'. Thus we will symbolise it as `$(A\eand B)$'. This connective is called \define{conjunction}. We also say that `$A$' and `$B$' are the two \define{conjuncts} of the conjunction `$(A \eand B)$'.
Notice that we make no attempt to symbolise the word `also' in sentence \ref{and3}. Words like `both' and `also' function to draw our attention to the fact that two things are being conjoined. Maybe they affect the emphasis of a sentence. But we will not (and cannot) symbolise such things in TFL.
Some more examples will bring out this point:
\begin{earg}
\item[\ex{and4}]Barbara is athletic and energetic.
\item[\ex{and5}]Barbara and Adam are both athletic.
\item[\ex{and6}]Although Barbara is energetic, she is not athletic.
\item[\ex{and7}]Adam is athletic, but Barbara is more athletic than him.
\end{earg}
Sentence \ref{and4} is obviously a conjunction. The sentence says two things (about Barbara). In English, it is permissible to refer to Barbara only once. It \emph{might} be tempting to think that we need to symbolise sentence \ref{and4} with something along the lines of `$B$ and energetic'. This would be a mistake. Once we symbolise part of a sentence as `$B$', any further structure is lost. `$B$' is an atomic sentence of TFL. Conversely, `energetic' is not an English sentence at all. What we are aiming for is something like `$B$ and Barbara is energetic'. So we need to add another sentence letter to the symbolisation key. Let `$E$' symbolise `Barbara is energetic'. Now the entire sentence can be symbolised as `$(B\eand E)$'.
Sentence \ref{and5} says one thing about two different subjects. It says of both Barbara and Adam that they are athletic, and in English we use the word `athletic' only once. The sentence can be paraphrased as `Barbara is athletic, and Adam is athletic'. We can symbolise this in TFL as `$(B\eand A)$', using the same symbolisation key that we have been using.
Sentence \ref{and6} is slightly more complicated. The word `although' sets up a contrast between the first part of the sentence and the second part. Nevertheless, the sentence tells us both that Barbara is energetic and that she is not athletic. In order to make each of the conjuncts an atomic sentence, we need to replace `she' with `Barbara'. So we can paraphrase sentence \ref{and6} as, `\emph{Both} Barbara is energetic, \emph{and} Barbara is not athletic'. The second conjunct contains a negation, so we paraphrase further: `\emph{Both} Barbara is energetic \emph{and} \emph{it is not the case that} Barbara is athletic'. And now we can symbolise this with the TFL sentence `$(E\eand\enot B)$'. Note that we have lost all sorts of nuance in this symbolisation. There is a distinct difference in tone between sentence \ref{and6} and `Both Barbara is energetic and it is not the case that Barbara is athletic'. TFL does not (and cannot) preserve these nuances.
Sentence \ref{and7} raises similar issues. There is a contrastive structure, but this is not something that TFL can deal with. So we can paraphrase the sentence as `\emph{Both} Adam is athletic, \emph{and} Barbara is more athletic than Adam'. (Notice that we once again replace the pronoun `him' with `Adam'.) How should we deal with the second conjunct? We already have the sentence letter `$A$', which is being used to symbolise `Adam is athletic', and the sentence letter `$B$', which is being used to symbolise `Barbara is athletic'; but neither of these concerns their relative athleticism. So, to symbolise the entire sentence, we need a new sentence letter. Let the TFL sentence `$R$' symbolise the English sentence `Barbara is more athletic than Adam'. Now we can symbolise sentence \ref{and7} by `$(A \eand R)$'.
\factoidbox{
A sentence can be symbolised as $(\meta{A}\eand\meta{B})$ if it can be paraphrased in English as `Both\ldots, and\ldots', or as `\ldots, but \ldots', or as `although \ldots, \ldots'.
}
You might be wondering why I have put \emph{brackets} around the conjunctions. My reasons can be brought out by thinking about how negation interacts with conjunction. Consider:
\begin{earg}
\item[\ex{negcon1}] It's not the case that you will get both soup and salad.
\item[\ex{negcon2}] You will not get soup but you will get salad.
\end{earg}
Sentence \ref{negcon1} can be paraphrased as `It is not the case that: both you will get soup and you will get salad'. Using this symbolisation key:
\begin{ekey}
\item[S_1] You get soup.
\item[S_2] You get salad.
\end{ekey}
we would symbolise `both you will get soup and you will get salad' as `$(S_1 \eand S_2)$'. To symbolise sentence \ref{negcon1}, then, we simply negate the whole sentence, thus: `$\enot (S_1 \eand S_2)$'.
Sentence \ref{negcon2} is a conjunction: you \emph{will not} get soup, and you \emph{will} get salad. `You will not get soup' is symbolised by `$\enot S_1$'. So to symbolise sentence \ref{negcon2} itself, we offer `$(\enot S_1 \eand S_2)$'.
These English sentences are very different, and their symbolisations differ accordingly. In one of them, the entire conjunction is negated. In the other, just one conjunct is negated. Brackets help us to keep track of things like the \emph{scope} of the negation.
\section{Disjunction}
Consider these sentences:
\begin{earg}
\item[\ex{or1}]Either Denison will play golf with me, or he will watch movies.
\item[\ex{or2}]Either Denison or Ellery will play golf with me.
\end{earg}
For these sentences we can use this symbolisation key:
\begin{ekey}
\item[D] Denison will play golf with me.
\item[E] Ellery will play golf with me.
\item[M] Denison will watch movies.
\end{ekey}
However, we shall again need to introduce a new symbol. Sentence \ref{or1} is symbolised by `$(D \eor M)$'. The connective is called \define{disjunction}. We also say that `$D$' and `$M$' are the \define{disjuncts} of the disjunction `$(D \eor M)$'.
Sentence \ref{or2} is only slightly more complicated. There are two subjects, but the English sentence only gives the verb once. However, we can paraphrase sentence \ref{or2} as `Either Denison will play golf with me, or Ellery will play golf with me'. Now we can obviously symbolise it by `$(D \eor E)$' again.
\factoidbox{
A sentence can be symbolised as $(\meta{A}\eor\meta{B})$ if it can be paraphrased in English as `Either\ldots, or\ldots.'
}
Sometimes in English, the word `or' excludes the possibility that both disjuncts are true. This is called an \define{exclusive or}. An \emph{exclusive or} is clearly intended when a restaurant menu says `Entrees come with either soup or salad': you may have soup; you may have salad; but, if you want \emph{both} soup \emph{and} salad, then you have to pay extra.
At other times, the word `or' allows for the possibility that both disjuncts might be true. This is probably the case with sentence \ref{or2}, above. I might play golf with Denison, with Ellery, or with both Denison and Ellery. Sentence \ref{or2} merely says that I will play with \emph{at least} one of them. This is an \define{inclusive or}. The TFL symbol `\eor' always symbolises an \emph{inclusive or}.
It will also help to see how negation interacts with disjunction. Consider:
\begin{earg}
\item[\ex{or3}] Either you will not have soup, or you will not have salad.
\item[\ex{or4}] You will have neither soup nor salad.
\item[\ex{or.xor}] You get either soup or salad, but not both.
\end{earg}
Using the same symbolisation key as before, sentence \ref{or3} can be paraphrased in this way: `\emph{Either} it is not the case that you get soup, \emph{or} it is not the case that you get salad'. To symbolise this in TFL, we need both disjunction and negation. `It is not the case that you get soup' is symbolised by `$\enot S_1$'. `It is not the case that you get salad' is symbolised by `$\enot S_2$'. So sentence \ref{or3} itself is symbolised by `$(\enot S_1 \eor \enot S_2)$'.
Sentence \ref{or4} also requires negation. It can be paraphrased as, `\emph{It is not the case that}: either you get soup or you get salad'. Since this negates the entire disjunction, we symbolise sentence \ref{or4} with `$\enot (S_1 \eor S_2)$'.
Sentence \ref{or.xor} is an \emph{exclusive or}. We can break the sentence into two parts. The first part says that you get one or the other. We symbolise this as `$(S_1 \eor S_2)$'. The second part says that you do not get both. We can paraphrase this as: `It is not the case that both you get soup and you get salad'. Using both negation and conjunction, we symbolise this with `$\enot(S_1 \eand S_2)$'. Now we just need to put the two parts together. As we saw above, `but' can usually be symbolised with `$\eand$'. So sentence \ref{or.xor} can be symbolised as `$((S_1 \eor S_2) \eand \enot(S_1 \eand S_2))$'.
This last example shows something important. Although the TFL symbol `\eor' always symbolises \emph{inclusive or}, we can symbolise an \emph{exclusive or} in {TFL}. We just have to use a few other symbols too.
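To state the recipe in general terms: a sentence that can be paraphrased as `Either A or B, but not both' can be symbolised along the lines of
$$((\meta{A} \eor \meta{B}) \eand \enot(\meta{A} \eand \meta{B}))$$
which is simply the soup-and-salad pattern from above, with metavariables in place of the particular sentence letters.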
\section{Conditional}
Consider these sentences:
\begin{earg}
\item[\ex{if1}] If Jean is in Paris, then Jean is in France.
\item[\ex{if2}] Jean is in France only if Jean is in Paris.
\end{earg}
Let's use the following symbolisation key:
\begin{ekey}
\item[P] Jean is in Paris.
\item[F] Jean is in France.
\end{ekey}
Sentence \ref{if1} is roughly of this form: `if P, then F'. We will use the symbol `\eif' to symbolise this `if\ldots, then\ldots' structure. So we symbolise sentence \ref{if1} by `$(P\eif F)$'. The connective is called \define{the conditional}. Here, `$P$' is called the \define{antecedent} of the conditional `$(P \eif F)$', and `$F$' is called the \define{consequent}.
Sentence \ref{if2} is also a conditional. Since the word `if' appears in the second half of the sentence, it might be tempting to symbolise this in the same way as sentence \ref{if1}. That would be a mistake. My knowledge of geography tells me that sentence \ref{if1} is unproblematically true: there is no way for Jean to be in Paris that doesn't involve Jean being in France. But sentence \ref{if2} is not so straightforward: were Jean in Dieppe, Lyons, or Toulouse, Jean would be in France without being in Paris, thereby rendering sentence \ref{if2} false. Since geography alone dictates the truth of sentence \ref{if1}, whereas travel plans (say) are needed to know the truth of sentence \ref{if2}, they must mean different things.
In fact, sentence \ref{if2} can be paraphrased as `If Jean is in France, then Jean is in Paris'. So we can symbolise it by `$(F \eif P)$'.
\factoidbox{
A sentence can be symbolised as $(\meta{A} \eif \meta{B})$ if it can be paraphrased in English as `If A, then B' or `A only if B'.
}
\noindent In fact, the conditional can represent many English expressions. Consider:
\begin{earg}
\item[\ex{ifnec1}] For Jean to be in Paris, it is necessary that Jean be in France.
\item[\ex{ifnec2}] It is a necessary condition on Jean's being in Paris that she be in France.
\item[\ex{ifsuf1}] For Jean to be in France, it is sufficient that Jean be in Paris.
\item[\ex{ifsuf2}] It is a sufficient condition on Jean's being in France that she be in Paris.
\end{earg}
If we think about it, all four of these sentences mean the same as `If Jean is in Paris, then Jean is in France'. So they can all be symbolised by `$(P \eif F)$'.
It is important to bear in mind that the connective `\eif' tells us only that, if the antecedent is true, then the consequent is true. It says nothing about a \emph{causal} connection between two events (for example). In fact, we lose a huge amount when we use `$\eif$' to symbolise English conditionals. I shall return to this in \S\S\ref{s:IndicativeSubjunctive} and \ref{s:ParadoxesOfMaterialConditional}.
\section{Biconditional}
Consider these sentences:
\begin{earg}
\item[\ex{iff1}] Shergar is a horse only if he is a mammal
\item[\ex{iff2}] Shergar is a horse if he is a mammal
\item[\ex{iff3}] Shergar is a horse if and only if he is a mammal
\end{earg}
We shall use the following symbolisation key:
\begin{ekey}
\item[H] Shergar is a horse
\item[M] Shergar is a mammal
\end{ekey}
For reasons discussed above, sentence \ref{iff1} can be symbolised by `$(H \eif M)$'.
Sentence \ref{iff2} is importantly different. It can be paraphrased as, `If Shergar is a mammal then Shergar is a horse'. So it can be symbolised by `$(M\eif H)$'.
Sentence \ref{iff3} says something stronger than either \ref{iff1} or \ref{iff2}. It can be paraphrased as `Shergar is a horse if he is a mammal, and Shergar is a horse only if Shergar is a mammal'. This is just the conjunction of sentences \ref{iff1} and \ref{iff2}. So we can symbolise it as `$((H \eif M) \eand (M \eif H))$'. We call this a \define{biconditional}, because it amounts to stating both directions of the conditional.
We could treat every biconditional this way. So, just as we do not need a new TFL symbol to deal with \emph{exclusive or}, we do not really need a new TFL symbol to deal with biconditionals. However, we will use `\eiff' to symbolise the biconditional. So we can symbolise sentence \ref{iff3} with the TFL sentence `$(H \eiff M)$'.
The expression `if and only if' occurs a lot in philosophy and logic. For brevity, it is often abbreviated with a single, snappy word, `iff'. I shall follow this practice. So `if' with only \emph{one} `f' is the English conditional. But `iff' with \emph{two} `f's is the English biconditional. Armed with this we can say:
\factoidbox{
A sentence can be symbolised as $(\meta{A} \eiff \meta{B})$ if it can be paraphrased in English as `A iff B'; that is, as `A if and only if B'.
}
A word of caution. Ordinary speakers of English often use `if \ldots, then\ldots' when they really mean to use something more like `\ldots if and only if \ldots'. Perhaps your parents told you, when you were a child: `if you don't eat your greens, you won't get any pudding'. Suppose you ate your greens, but that your parents then refused to give you any pudding, on the grounds that they were only committed to the \emph{conditional} (roughly `if you get pudding, then you will have eaten your greens'), rather than the biconditional (roughly, `you get pudding iff you eat your greens'). Well, a tantrum would rightly ensue. So, be aware of this when interpreting people; but in your own writing, make sure to use the biconditional iff you mean to.
\section{Unless}
We have now seen all of the connectives of TFL. We can use them together to symbolise many kinds of sentences. But some cases are harder than others. And a typically nasty case is the English-language connective `unless':
\begin{earg}
\item[\ex{unless1}] Unless you wear a jacket, you will catch a cold.
\item[\ex{unless2}] You will catch a cold unless you wear a jacket.
\end{earg}
These two sentences are clearly equivalent. To symbolise them, we shall use the symbolisation key:
\begin{ekey}
\item[J] You will wear a jacket.
\item[D] You will catch a cold.
\end{ekey}
Both sentences mean that if you do not wear a jacket, then you will catch a cold. With this in mind, we might symbolise them as `$(\enot J \eif D)$'.
Equally, both sentences mean that if you do not catch a cold, then you must have worn a jacket. With this in mind, we might symbolise them as `$(\enot D \eif J)$'.
Equally, both sentences mean that either you will wear a jacket or you will catch a cold. With this in mind, we might symbolise them as `$(J \eor D)$'.
All three are correct symbolisations. Indeed, in chapter \ref{ch.TruthTables} we shall see that all three symbolisations are equivalent in TFL. For simplicity, then:
\factoidbox{
If a sentence can be paraphrased as `Unless A, B,' then it can be symbolised as $(\meta{A}\eor\meta{B})$.
}
Again, though, there is a little complication. `Unless' can be symbolised as a conditional; but as I said above, people often use the conditional (on its own) when they mean to use the biconditional. Equally, `unless' can be symbolised as a disjunction; but there are two kinds of disjunction (exclusive and inclusive). So it will not surprise you to discover that ordinary speakers of English often use `unless' to mean something more like the biconditional, or like exclusive disjunction. Suppose I say: `I shall go running unless it rains'. I probably mean something like `I shall go running iff it does not rain' (i.e.\ the biconditional), or `either I shall go running or it will rain, but not both' (i.e.\ exclusive disjunction).
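For concreteness, here is how those two stronger readings might be symbolised; the key below is introduced only for this example:
\begin{ekey}
\item[R] I shall go running.
\item[W] It will rain.
\end{ekey}
On the biconditional reading, `I shall go running unless it rains' would come out as `$(R \eiff \enot W)$'; on the exclusive-disjunction reading, it would come out as `$((R \eor W) \eand \enot(R \eand W))$'.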
Again: be aware of this when interpreting what other people have said, but be precise in your own writing. Unless you want to be ambiguous.
\practiceproblems
\problempart Using the symbolisation key given, symbolise each English sentence in TFL.\label{pr.monkeysuits}
\begin{ekey}
\item[M] Those creatures are men in suits.
\item[C] Those creatures are chimpanzees.
\item[G] Those creatures are gorillas.
\end{ekey}
\begin{earg}
\item Those creatures are not men in suits.
\item Those creatures are men in suits, or they are not.
\item Those creatures are either gorillas or chimpanzees.
\item Those creatures are neither gorillas nor chimpanzees.
\item If those creatures are chimpanzees, then they are neither gorillas nor men in suits.
\item Unless those creatures are men in suits, they are either chimpanzees or they are gorillas.
\end{earg}
\problempart Using the symbolisation key given, symbolise each English sentence in TFL.
\begin{ekey}
\item[A] Mister Ace was murdered.
\item[B] The butler did it.
\item[C] The cook did it.
\item[D] The Duchess is lying.
\item[E] Mister Edge was murdered.
\item[F] The murder weapon was a frying pan.
\end{ekey}
\begin{earg}
\item Either Mister Ace or Mister Edge was murdered.
\item If Mister Ace was murdered, then the cook did it.
\item If Mister Edge was murdered, then the cook did not do it.
\item Either the butler did it, or the Duchess is lying.
\item The cook did it only if the Duchess is lying.
\item If the murder weapon was a frying pan, then the culprit must have been the cook.
\item If the murder weapon was not a frying pan, then the culprit was either the cook or the butler.
\item Mister Ace was murdered if and only if Mister Edge was not murdered.
\item The Duchess is lying, unless it was Mister Edge who was murdered.
\item If Mister Ace was murdered, he was done in with a frying pan.
\item Since the cook did it, the butler did not.
\item Of course the Duchess is lying!
\end{earg}
\problempart Using the symbolisation key given, symbolise each English sentence in TFL.\label{pr.avacareer}
\begin{ekey}
\item[E_1] Ava is an electrician.
\item[E_2] Harrison is an electrician.
\item[F_1] Ava is a firefighter.
\item[F_2] Harrison is a firefighter.
\item[S_1] Ava is satisfied with her career.
\item[S_2] Harrison is satisfied with his career.
\end{ekey}
\begin{earg}
\item Ava and Harrison are both electricians.
\item If Ava is a firefighter, then she is satisfied with her career.
\item Ava is a firefighter, unless she is an electrician.
\item Harrison is an unsatisfied electrician.
\item Neither Ava nor Harrison is an electrician.
\item Both Ava and Harrison are electricians, but neither of them find it satisfying.
\item Harrison is satisfied only if he is a firefighter.
\item If Ava is not an electrician, then neither is Harrison, but if she is, then he is too.
\item Ava is satisfied with her career if and only if Harrison is not satisfied with his.
\item If Harrison is both an electrician and a firefighter, then he must be satisfied with his work.
\item It cannot be that Harrison is both an electrician and a firefighter.
\item Harrison and Ava are both firefighters if and only if neither of them is an electrician.
\end{earg}
\problempart
\label{pr.spies}
Give a symbolisation key and symbolise the following English sentences in TFL.
\begin{earg}
\item Alice and Bob are both spies.
\item If either Alice or Bob is a spy, then the code has been broken.
\item If neither Alice nor Bob is a spy, then the code remains unbroken.
\item The German embassy will be in an uproar, unless someone has broken the code.
\item Either the code has been broken or it has not, but the German embassy will be in an uproar regardless.
\item Either Alice or Bob is a spy, but not both.
\end{earg}
\problempart Give a symbolisation key and symbolise the following English sentences in TFL.
\begin{earg}
\item If there is food to be found in the pridelands, then Rafiki will talk about squashed bananas.
\item Rafiki will talk about squashed bananas unless Simba is alive.
\item Rafiki will either talk about squashed bananas or he won't, but there is food to be found in the pridelands regardless.
\item Scar will remain as king if and only if there is food to be found in the pridelands.
\item If Simba is alive, then Scar will not remain as king.
\end{earg}
\problempart
For each argument, write a symbolisation key and symbolise all of the sentences of the argument in TFL.
\begin{earg}
\item If Dorothy plays the piano in the morning, then Roger wakes up cranky. Dorothy plays piano in the morning unless she is distracted. So if Roger does not wake up cranky, then Dorothy must be distracted.
\item It will either rain or snow on Tuesday. If it rains, Neville will be sad. If it snows, Neville will be cold. Therefore, Neville will either be sad or cold on Tuesday.
\item If Zoog remembered to do his chores, then things are clean but not neat. If he forgot, then things are neat but not clean. Therefore, things are either neat or clean; but not both.
\end{earg}
\problempart
We symbolised an \emph{exclusive or} using `$\eor$', `$\eand$', and `$\enot$'. How could you symbolise an \emph{exclusive or} using only two connectives? Is there any way to symbolise an \emph{exclusive or} using only one connective?
\chapter{Sentences of TFL}\label{s:TFLSentences}
The sentence `either apples are red, or berries are blue' is a sentence of English, and the sentence `$(A\eor B)$' is a sentence of TFL. Although we can identify sentences of English when we encounter them, we do not have a formal definition of `sentence of English'. But in this chapter, I will \emph{define} exactly what will count as a sentence of TFL. This is one respect in which a formal language like TFL is more precise than a natural language like English.
\section{Expressions}
We have seen that there are three kinds of symbols in TFL:
\begin{center}
\begin{tabular}{l l}
Atomic sentences & $A,B,C,\ldots,Z$\\
with subscripts, as needed & $A_1, B_1,Z_1,A_2,A_{25},J_{375},\ldots$\\
\\
Connectives & $\enot,\eand,\eor,\eif,\eiff$\\
\\
Brackets &( , )\\
\end{tabular}
\end{center}
Define an \define{expression of TFL} as any string of symbols of TFL. So: write down any sequence of symbols of TFL, in any order, and you have an expression of TFL.
\section{Sentences}\label{tfl:SentencesDefined}
Given what we just said, `$(A \eand B)$' is an expression of TFL, and so is `$\enot)(\eor()\eand(\enot\enot())((B$'. But the former is a \emph{sentence}, and the latter is \emph{gibberish}. We want some rules to tell us which TFL expressions are sentences.
Obviously, individual atomic sentences like `$A$' and `$G_{13}$' should count as sentences. We can form further sentences out of these by using the various connectives. Using negation, we can get `$\enot A$' and `$\enot G_{13}$'. Using conjunction, we can get `$(A \eand G_{13})$', `$(G_{13} \eand A)$', `$(A \eand A)$', and `$(G_{13} \eand G_{13})$'. We could also apply negation repeatedly to get sentences like `$\enot \enot A$' or apply negation along with conjunction to get sentences like `$\enot(A \eand G_{13})$' and `$\enot(G_{13} \eand \enot G_{13})$'. The possible combinations are endless, even starting with just these two sentence letters, and there are infinitely many sentence letters. So there is no point in trying to list all of the sentences one by one.
Instead, we will describe the process by which sentences can be \emph{constructed}. Consider negation: Given any sentence \meta{A} of TFL, $\enot\meta{A}$ is a sentence of TFL. (Why the funny fonts? I return to this in \S\ref{s:UseMention}.)
We can say similar things for each of the other connectives. For instance, if \meta{A} and \meta{B} are sentences of TFL, then $(\meta{A}\eand\meta{B})$ is a sentence of TFL. Providing clauses like this for all of the connectives, we arrive at the following formal definition for a \define{sentence of TFL}:
\factoidbox{
\begin{enumerate}
\item Every atomic sentence is a sentence.
\item If \meta{A} is a sentence, then $\enot\meta{A}$ is a sentence.
\item If \meta{A} and \meta{B} are sentences, then $(\meta{A}\eand\meta{B})$ is a sentence.
\item If \meta{A} and \meta{B} are sentences, then $(\meta{A}\eor\meta{B})$ is a sentence.
\item If \meta{A} and \meta{B} are sentences, then $(\meta{A}\eif\meta{B})$ is a sentence.
\item If \meta{A} and \meta{B} are sentences, then $(\meta{A}\eiff\meta{B})$ is a sentence.
\item Nothing else is a sentence.
\end{enumerate}
}
Definitions like this are called \emph{recursive}. Recursive definitions begin with some specifiable base elements, and then present ways to generate indefinitely many more elements by compounding together previously established ones. To give you a better idea of what a recursive definition is, consider a recursive definition of the notion of \emph{an ancestor of mine}. We specify a base clause.
\begin{ebullet}
\item My parents are ancestors of mine.
\end{ebullet}
and then offer further clauses like:
\begin{ebullet}
\item If x is an ancestor of mine, then x's parents are ancestors of mine.
\item Nothing else is an ancestor of mine.
\end{ebullet}
Using this definition, we can easily check to see whether someone is my ancestor: just check whether she is the parent of the parent of\ldots one of my parents. And the same is true for our recursive definition of sentences of TFL. Just as the recursive definition allows complex sentences to be built up from simpler parts, the definition allows us to decompose sentences into their simpler parts. And if we ultimately get down to atomic sentences, then the expression we started with is indeed a sentence.
Let's put this into practice by considering some examples.
Suppose we want to know whether or not `$\enot \enot \enot D$' is a sentence of TFL. Looking at the second clause of the definition, we know that `$\enot \enot \enot D$' is a sentence \emph{if} `$\enot \enot D$' is a sentence. So now we need to ask whether or not `$\enot \enot D$' is a sentence. Again looking at the second clause of the definition, `$\enot \enot D$' is a sentence \emph{if} `$\enot D$' is. Again, `$\enot D$' is a sentence \emph{if} `$D$' is a sentence. Now `$D$' is an atomic sentence of TFL, so we know that `$D$' is a sentence by the first clause of the definition. So for a compound sentence like `$\enot \enot \enot D$', we must apply the definition repeatedly. Eventually we arrive at the atomic sentences from which the sentence is built up.
Next, consider the example `$\enot (P \eand \enot (\enot Q \eor R))$'. Looking at the second clause of the definition, this is a sentence if `$(P \eand \enot (\enot Q \eor R))$' is. And this is a sentence if \emph{both} `$P$' \emph{and} `$\enot (\enot Q \eor R)$' are sentences. The former is an atomic sentence, and the latter is a sentence if `$(\enot Q \eor R)$' is a sentence. It is. Looking at the fourth clause of the definition, this is a sentence if both `$\enot Q$' and `$R$' are sentences. And both are!
Ultimately, every sentence is constructed nicely out of atomic sentences. When we are dealing with a \emph{sentence} other than an atomic sentence, we can see that there must be some sentential connective that was introduced \emph{most recently}, when constructing the sentence. We call that the \define{main logical operator} of the sentence. In the case of `$\enot\enot\enot D$', the main logical operator is the very first `$\enot$' sign. In the case of `$(P \eand \enot (\enot Q \eor R))$', the main logical operator is `$\eand$'. In the case of `$((\enot E \eor F) \eif \enot\enot G)$', the main logical operator is `$\eif$'.
As a general rule, you can find the main logical operator for a sentence by using the following method:
\begin{ebullet}
\item If the first symbol in the sentence is `$\enot$', then that is the main logical operator
\item Otherwise, start counting the brackets. For each open-bracket, i.e.\ `(', add $1$; for each closing-bracket, i.e.\ `$)$', subtract $1$. When your count is at exactly $1$, the first operator you hit (\emph{apart} from a `$\enot$') is the main logical operator.
\end{ebullet}
(Note: if you do use this method, then make sure to include \emph{all} the brackets in the sentence, rather than omitting some as per the conventions of \S\ref{TFLconventions}!)
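To see the counting method in action on a sentence from above, here is one way the tally might run for `$((\enot E \eor F) \eif \enot\enot G)$':
\begin{center}
\begin{tabular}{l c c c c c c c c c c c c}
symbol & ( & ( & $\enot$ & $E$ & $\eor$ & $F$ & ) & $\eif$ & $\enot$ & $\enot$ & $G$ & )\\
count & 1 & 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 0\\
\end{tabular}
\end{center}
The `$\enot$' and `$\eor$' inside the subsentence are passed while the count stands at $2$, so they are ignored; the first operator reached while the count is exactly $1$ is `$\eif$', which is indeed the main logical operator.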
The recursive structure of sentences in TFL will be important when we consider the circumstances under which a particular sentence would be true or false. The sentence `$\enot \enot \enot D$' is true if and only if the sentence `$\enot \enot D$' is false, and so on through the structure of the sentence, until we arrive at the atomic components. I will return to this in chapter \ref{ch.TruthTables}.
The recursive structure of sentences in TFL also allows us to give a formal definition of the \emph{scope} of a negation (mentioned in \S\ref{s:ConnectiveConjunction}). The scope of a `$\enot$' is the subsentence for which `$\enot$' is the main logical operator. For example, consider this sentence:
$$(P \eand (\enot (R \eand B) \eiff Q))$$
This was constructed by conjoining `$P$' with `$ (\enot (R \eand B) \eiff Q)$'. This last sentence was constructed by placing a biconditional between `$\enot (R \eand B)$' and `$Q$'. And the former of these sentences---a subsentence of our original sentence---is a sentence for which `$\enot$' is the main logical operator. So the scope of the negation is just `$\enot(R \eand B)$'. More generally:
\factoidbox{The \define{scope} of a connective in a sentence is the subsentence for which that connective is the main logical operator.}
\section{Bracketing conventions}
\label{TFLconventions}
Strictly speaking, the brackets in `$(Q \eand R)$' are an indispensable part of the sentence. Part of this is because we might use `$(Q \eand R)$' as a subsentence in a more complicated sentence. For example, we might want to negate `$(Q \eand R)$', obtaining `$\enot(Q \eand R)$'. If we just had `$Q \eand R$' without the brackets and put a negation in front of it, we would have `$\enot Q \eand R$'. It is most natural to read this as meaning the same thing as `$(\enot Q \eand R)$'. But as we saw in \S\ref{s:ConnectiveConjunction}, this is very different from `$\enot(Q\eand R)$'.
Strictly speaking, then, `$Q \eand R$' is \emph{not} a sentence. It is a mere \emph{expression}.
When working with TFL, however, it will make our lives easier if we are sometimes a little less than strict. So, here are some convenient conventions.
First, we allow ourselves to omit the \emph{outermost} brackets of a sentence. Thus we allow ourselves to write `$Q \eand R$' instead of the sentence `$(Q \eand R)$'. However, we must remember to put the brackets back in, when we want to embed the sentence into another sentence!
Second, it can be a bit painful to stare at long sentences with many nested pairs of brackets. To make things a bit easier on the eyes, we shall allow ourselves to use square brackets, `[' and `]', instead of rounded ones. So there is no logical difference between `$(P\eor Q)$' and `$[P\eor Q]$', for example.
Combining these two conventions, we can rewrite the unwieldy sentence
$$(((H \eif I) \eor (I \eif H)) \eand (J \eor K))$$
rather more simply as follows:
$$\bigl[(H \eif I) \eor (I \eif H)\bigr] \eand (J \eor K)$$
The scope of each connective is now much clearer.
\practiceproblems
\problempart
\label{pr.wiffTFL}
For each of the following: (a) Is it a sentence of TFL, strictly speaking? (b) Is it a sentence of TFL, allowing for our relaxed bracketing conventions?
\begin{earg}
\item $(A)$
\item $J_{374} \eor \enot J_{374}$
\item $\enot \enot \enot \enot F$
\item $\enot \eand S$
\item $(G \eand \enot G)$
\item $(A \eif (A \eand \enot F)) \eor (D \eiff E)$
\item $[(Z \eiff S) \eif W] \eand [J \eor X]$
\item $(F \eiff \enot D \eif J) \eor (C \eand D)$
\end{earg}
\problempart
Are there any sentences of TFL that contain no atomic sentences? Explain your answer.\\
\problempart
What is the scope of each connective in the sentence
$$\bigl[(H \eif I) \eor (I \eif H)\bigr] \eand (J \eor K)$$
\problempart
In \S\ref{tfl:SentencesDefined}, I provided a method for finding the main logical connective in any sentence. Try to explain \emph{why} the method always works.
\chapter{Use and mention}\label{s:UseMention}
In this chapter, I have talked a lot \emph{about} sentences. I now want to pause, to explain an important---and very general---point about talking \emph{about} languages.
\section{Quotation conventions}
Consider these two sentences:
\begin{ebullet}
\item Elizabeth is the Queen.
\item The expression `Elizabeth' is composed of one uppercase letter and eight lowercase letters.
\end{ebullet}
When we want to talk about the Queen, we \emph{use} her name. When we want to talk about the Queen's name, we \emph{mention} that name, which we do by putting it in quotation marks.
There is a general point here. When we want to talk about things in the world, we just \emph{use} words. When we want to talk about words, we typically have to \emph{mention} those words. We need to indicate that we are mentioning them, rather than using them. To do this, some convention is needed. We can put them in quotation marks, or display them centrally in the page (say). So this sentence:
\begin{ebullet}
\item `Elizabeth' is the Queen.
\end{ebullet}
says that some \emph{expression} is the Queen. And that's false. The \emph{woman} is the Queen; her \emph{name} isn't. Conversely, this sentence:
\begin{ebullet}
\item Elizabeth is composed of one uppercase letter and eight lowercase letters.
\end{ebullet}
also says something false: Elizabeth is a woman, made of meat rather than letters. One final example:
\begin{ebullet}
\item ``\,`Elizabeth'\,'' names `Elizabeth'.
\end{ebullet}
On the left-hand side, we have the name of a name. On the right-hand side, we have a name. Perhaps this kind of sentence only occurs in logic textbooks, but it is true.
Those are just general rules for quotation, and you should observe them carefully in all your work! To be clear, the quotation-marks here do not indicate reported speech (which I might use in talking about Descartes's `cogito ergo sum'). They indicate that you are moving from talking about an object, to talking about a name of that object.
\section{Object language and metalanguage}
These general quotation conventions are very important for us. After all, we are describing a formal language here, TFL, and so we must often \emph{mention} expressions from TFL.
When we talk about a language, the language that we are talking about is called the \define{object language}. The language that we use to talk about the object language is called the \define{metalanguage}.
\label{def.metalanguage}
For the most part, the object language in this chapter has been the formal language that we have been developing: TFL. The metalanguage is English. Not conversational English, perhaps, but English supplemented with some additional vocabulary to help us get along.
Now, I have used italic uppercase letters for atomic sentences of TFL:
$$A, B, C, Z, A_1, B_4, A_{25}, J_{375},\ldots$$
These are sentences of the object language (TFL). They are not sentences of English. So I must not say, for example:
\begin{ebullet}
\item $D$ is an atomic sentence of TFL.
\end{ebullet}
Obviously, I am trying to come out with an English sentence that says something about the object language (TFL). But `$D$' is a sentence of TFL, and no part of English. So the preceding is gibberish, just like:
\begin{ebullet}
\item Schnee ist wei\ss\ is a German sentence.
\end{ebullet}
What we surely meant to say, in this case, is:
\begin{ebullet}
\item `Schnee ist wei\ss' is a German sentence.
\end{ebullet}
Equally, what we meant to say above is just:
\begin{ebullet}
\item `$D$' is an atomic sentence of TFL.
\end{ebullet}
The general point is that, whenever we want to talk in English about some specific expression of TFL, we need to indicate that we are \emph{mentioning} the expression, rather than \emph{using} it. We can either deploy quotation marks, or we can adopt some similar convention, such as placing it centrally in the page.
\section{Fonts, quotation marks, and concatenation}
However, we do not just want to talk about \emph{specific} expressions of TFL. We also want to be able to talk about \emph{any arbitrary} sentence of TFL. Indeed, I had to do this in \S\ref{s:TFLSentences}, when I presented the recursive definition of a sentence of TFL. I used bold-italic-fonts to do this:
$$\meta{A}, \meta{B}, \meta{C}, \meta{D}, \ldots$$
These symbols do not belong to TFL. Rather, they are part of our (augmented) metalanguage that we use to talk about \emph{any} expression of TFL. To explain why we need them, recall the second clause of the recursive definition of a sentence of TFL:
\begin{earg}
\item[2.] If $\meta{A}$ is a sentence, then $\enot \meta{A}$ is a sentence.
\end{earg}
This talks about \emph{arbitrary} sentences. If we had instead offered:
\begin{ebullet}
\item If `$A$' is a sentence, then `$\enot A$' is a sentence.
\end{ebullet}
this would not have allowed us to determine whether `$\enot B$' is a sentence. To emphasise:
\factoidbox{
`$\meta{A}$' is a symbol in augmented English, and we use it to talk about expressions of TFL. `$A$' is a particular atomic sentence of TFL.}
But all of this raises a further complication, concerning quotation conventions. I did not include any quotation marks in the second clause of our recursive definition. Should I have done so?
The problem is that the expression on the right-hand side of the rule, i.e.\ `$\enot\meta{A}$', is not a sentence of English, since it contains `$\enot$'. So we might try to write:
\begin{enumerate}
\item[2$'$.] If \meta{A} is a sentence, then `$\enot \meta{A}$' is a sentence.
\end{enumerate}
But this is no good: `$\enot \meta{A}$' is not a TFL sentence, since `$\meta{A}$' is a symbol of (augmented) English rather than a symbol of TFL.
What we really want to say is something like this:
\begin{enumerate}
\item[2$''$.] If \meta{A} is a sentence, then the result of concatenating the symbol `$\enot$' with the sentence \meta{A} is itself a sentence.
\end{enumerate}
This is impeccable, but rather long-winded. %Quine introduced a convention that speeds things up here. In place of (3$''$), he suggested:
% \begin{enumerate}
% \item[3$'''$.] If \meta{A} and \meta{B} are sentences, then $\ulcorner (\meta{A}\eand\meta{B})\urcorner$ is a sentence
% \end{enumerate}
%The rectangular quote-marks are sometimes called `Quine quotes', after Quine. The general interpretation of an expression like `$\ulcorner (\meta{A}\eand\meta{B})\urcorner$' is in terms of rules for concatenation.
But we can avoid long-windedness by creating our own conventions. We can perfectly well stipulate that an expression like `$\enot \meta{A}$' should simply be read \emph{directly} in terms of rules for concatenation. So, \emph{officially}, the metalanguage expression `$\enot \meta{A}$'
simply abbreviates:
\begin{quote}
the result of concatenating the symbol `$\enot$' with the sentence \meta{A}
\end{quote}
and similarly, for expressions like `$(\meta{A} \eand \meta{B})$', `$(\meta{A} \eor \meta{B})$', etc.
\section{Quotation conventions for arguments}
One of our main purposes for using TFL is to study arguments, and that will be our concern in chapter \ref{ch.TruthTables}. In English, the premises of an argument are often expressed by individual sentences, and the conclusion by a further sentence. Since we can symbolise English sentences, we can symbolise English arguments using TFL.
Or rather, we can use TFL to symbolise each of the \emph{sentences} used in an English argument. However, TFL itself has no way to flag some of them as the \emph{premises} and another as the \emph{conclusion} of an argument. (Contrast this with natural English, which uses words like `so', `therefore', etc., to mark that a sentence is the \emph{conclusion} of an argument.)
%So, if we want to symbolise an \emph{argument} in TFL, what are we to do?
%An obvious thought would be to add a new symbol to the \emph{object} language of TFL itself, which we could use to separate the premises from the conclusion of an argument. However, adding a new symbol to our object language would add significant complexity to that language, since that symbol would require an official syntax.\footnote{\emph{The following footnote should be read only after you have finished the entire book!} And it would require a semantics. Here, there are deep barriers concerning the semantics. First: an object-language symbol which adequately expressed `therefore' for TFL would not be truth-functional. (\emph{Exercise}: why?) Second: a paradox known as `validity Curry' shows that FOL itself \emph{cannot} be augmented with an adequate, object-language `therefore'.}
So, we need another bit of notation. Suppose we want to symbolise the premises of an argument with $\meta{A}_1, \ldots, \meta{A}_n$ and the conclusion with $\meta{C}$. Then we will write:
$$\meta{A}_1, \ldots, \meta{A}_n \therefore \meta{C}$$
The role of the symbol `$\therefore$' is simply to indicate which sentences are the premises and which is the conclusion.
%Strictly, this extra notation is \emph{unnecessary}. After all, we could always just write things down long-hand, saying: the premises of the argument are well symbolised by $\meta{A}_1, \ldots \meta{A}_n$, and the conclusion of the argument is well symbolised by $\meta{C}$. But having some convention will save us some time. Equally, the particular convention we chose was fairly \emph{arbitrary}. After all, an equally good convention would have been to underline the conclusion of the argument. Still, this is the convention we will use.
Strictly, the symbol `$\therefore$' will not be a part of the object language, but of the \emph{metalanguage}. As such, one might think that we would need to put quote-marks around the TFL-sentences which flank it. That is a sensible thought, but adding these quote-marks would make things harder to read. Moreover---and as above---recall that \emph{we} are stipulating some new conventions. So, we can simply stipulate that these quote-marks are unnecessary. That is, we can simply write:
$$A, A \eif B \therefore B$$
\emph{without any quotation marks}, to indicate an argument whose premises are (symbolised by) `$A$' and `$A \eif B$' and whose conclusion is (symbolised by) `$B$'.
"alphanum_fraction": 0.7442073057,
"avg_line_length": 82.8469860896,
"ext": "tex",
"hexsha": "5ea8c10b19e3b6a00c41133d1d6a3d17bed27a63",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-02-19T10:13:13.000Z",
"max_forks_repo_forks_event_min_datetime": "2016-09-08T05:09:01.000Z",
"max_forks_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "ryanmichaelhebert/forallx-cam",
"max_forks_repo_path": "forallx-cam-tfl.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "ryanmichaelhebert/forallx-cam",
"max_issues_repo_path": "forallx-cam-tfl.tex",
"max_line_length": 964,
"max_stars_count": 7,
"max_stars_repo_head_hexsha": "37f7bbf197ba0fee0e2106f90755e2fc35f5b9bf",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "OpenLogicProject/forallx-cam",
"max_stars_repo_path": "forallx-cam-tfl.tex",
"max_stars_repo_stars_event_max_datetime": "2021-07-04T05:59:31.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-02-19T01:39:52.000Z",
"num_tokens": 14970,
"size": 53602
} |
\lab{Markov Chains}{Markov Chains}
\label{lab:Markov}
\objective{
A \emph{Markov chain} is a collection of states with specified probabilities for transitioning from one state to another.
It is characterized by the fact that the future behavior of the system depends only on its current state.
Markov chains have wide-ranging applications.
In this lab, we learn to construct, analyze, and interact with Markov chains and apply a Markov chain to natural language processing.}
\section*{State Space Models} % ===============================================
Many systems can be described by a finite number of states.
For example, a board game where players move around the board based on die rolls can be modeled by a Markov chain.
Each space represents a state, and a player is said to be in a state if their piece is currently on the corresponding space.
In this case, the probability of moving from one space to another only depends on the player's current location: where the player was on a previous turn does not affect their current turn.
% A Markov chain is a collection of states, together with the probabilities of moving from one state to another.
Finite Markov chains have an associated \emph{transition matrix} that stores the information about the transitions between the states in the chain.
The $(i,j)^{th}$ entry of the matrix gives the probability of moving from state $i$ to state $j$.
Thus the rows of the transition matrix must sum to 1.
\begin{info} % Row / column stochasticity
A transition matrix where the rows sum to 1 is called \emph{row stochastic} (or \emph{right stochastic}).
The columns of a \emph{column stochastic} (or \emph{left stochastic}) transition matrix each sum to 1 and the $(i,j)^{th}$ entry of the matrix gives the probability of moving from state $j$ to state $i$.
Both representations are common, but in this lab we exclusively use row stochastic transition matrices for consistency.
\end{info}
Consider a very simple weather model where the probability of being hot or cold depends on the weather of the previous day.
If the probability that tomorrow is hot given that today is hot is 0.7, and the probability that tomorrow is cold given that today is cold is 0.4, then by assigning hot to the $0^{th}$ row and column, and cold to the $1^{st}$ row and column, this Markov chain has the following transition matrix:
%
\begin{align*}
\begin{blockarray}{ccc}
& \text{\textcolor{red}{hot tomorrow}} & \text{\textcolor{blue}{cold tomorrow}} \\
\begin{block}{c(cc)}
\text{\textcolor{red}{hot today}} & 0.7 & 0.3 \\
\text{\textcolor{blue}{cold today}} & 0.6 & 0.4 \\
\end{block}\end{blockarray}
\end{align*}
If it is hot today, we examine the $0^{th}$ row of the matrix.
There is a $70\%$ chance that tomorrow will be hot ($0^{th}$ column) and a $30\%$ chance that tomorrow will be cold ($1^{st}$ column).
Conversely, if it is cold today, there is a $60\%$ chance that tomorrow will be hot and a $40\%$ chance that tomorrow will be cold.
Markov chains can be represented by a \emph{state diagram}, a type of directed graph.
The nodes in the graph are the states, and the edges indicate the state transition probabilities.
The Markov chain described above has the following state diagram.
%
% TODO: Turn this into a tikzpicture.
\begin{figure}[H]
\includegraphics[width=.5\linewidth]{figures/WeatherChain.pdf}
\end{figure}
%
\begin{problem} % Problem: stochasticity.
Transition matrices for Markov chains are efficiently stored as NumPy arrays.
Write a function that accepts a dimension $n$ and returns the transition matrix for a random Markov chain with $n$ states.
\\
(Hint: use array broadcasting to avoid looping.)
\end{problem}
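As a minimal sketch of one way to use the broadcasting hint (not the only possible approach; the function name is illustrative), a random nonnegative array can be normalized row by row:
\begin{lstlisting}
import numpy as np

def random_chain(n):
    """Return an n x n row-stochastic transition matrix (illustrative sketch)."""
    A = np.random.random((n, n))                # nonnegative random entries
    return A / A.sum(axis=1, keepdims=True)     # broadcasting: each row now sums to 1
\end{lstlisting}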
\subsection*{Simulating State Transitions} % ----------------------------------
% TODO: Use the binomial distribution instead of uniform. (?)
Since the rows of a transition matrix sum to $1$, the entries of each row partition the interval $[0, 1]$.
We can thus choose the next state to move to by generating a random number between $0$ and $1$.
Consider again the simple weather model and suppose that today is hot.
The row that corresponds to ``hot'' in the transition matrix is $[0.7, 0.3]$.
If we generate a random number and it is smaller than $0.3$, then the simulation indicates that tomorrow will be cold.
Conversely, if the random number is between $0.3$ and $1$, then the simulation says that tomorrow will be hot.
\begin{lstlisting}
import numpy as np
def forecast():
"""Forecast tomorrow's weather given that today is hot."""
transition_matrix = np.array([[0.7, 0.3], [0.6, 0.4]])
# Sample from the standard uniform distribution to choose a new state.
if np.random.random() < transition_matrix[0, 1]:
return 1 # Tomorrow will be cold.
else:
return 0 # Tomorrow will be hot.
\end{lstlisting}
\begin{problem} % Problem: Forecasting over several days.
Modify \li{forecast()} so that it accepts a parameter \li{days} and runs a simulation of the weather for the number of days given.
Return a list containing the day-by-day weather predictions (0 for hot, 1 for cold).
Assume the first day is hot, but do not include the data from the first day in the list of predictions.
The resulting list should therefore have \li{days} entries.
\end{problem}
\subsection*{Larger Chains} % -------------------------------------------------
The \li{forecast()} function makes one random draw from a \emph{uniform} distribution to simulate a state change.
Larger Markov chains require draws from a \emph{multinomial} distribution, a multivariate generalization of the binomial distribution.
A single draw from a binomial distribution with parameter $p$ indicates the success or failure of a single experiment with probability $p$ of success.
The classic example is a coin flip, where $p$ is the probability that the coin lands heads side up.
A single draw from a multinomial distribution with parameters $\left(p_1, p_2, ..., p_n \right)$ indicates which of $n$ outcomes occurs.
In this case the classic example is a dice roll, with $6$ possible outcomes instead of the $2$ in a coin toss.
\begin{lstlisting}
# To simulate a single dice roll, store the probabilities of each outcome.
>>> die_probabilities = np.array([1./6, 1./6, 1./6, 1./6, 1./6, 1./6])
# Make a single random draw (roll the die once).
>>> np.random.multinomial(1, die_probabilities)
array([0, 0, 0, 1, 0, 0]) # The roll resulted in a 4.
\end{lstlisting}
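To use such a draw for a state transition, one possible sketch (assuming a row-stochastic \li{transition_matrix} and a current \li{state} index, both illustrative names) is to recover the index of the single nonzero entry:
\begin{lstlisting}
# Draw the next state from the row of the current state, then take the
# position of the single 1 in the draw as the index of the next state.
>>> draw = np.random.multinomial(1, transition_matrix[state])
>>> next_state = np.argmax(draw)
\end{lstlisting}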
\begin{problem} % Problem: 4 states instead of 2. Multinomial transitioning.
Let the following be the transition matrix for a Markov chain modeling weather with four states: hot, mild, cold, and freezing.
\begin{align*}
\begin{blockarray}{ccccc}
& \text{\textcolor{red}{hot}} & \text{\textcolor[rgb]{0,.6,0}{mild}} & \text{\textcolor{blue}{cold}} & \text{\textcolor{cyan}{freezing}} \\
\begin{block}{c(cccc)}
\text{\textcolor{red}{hot}} & 0.5 & 0.3 & 0.2 & 0 \\
\text{\textcolor[rgb]{0,.6,0}{mild}} & 0.3 & 0.3 & 0.3 & 0.1 \\
\text{\textcolor{blue}{cold}} & 0.1 & 0.3 & 0.4 & 0.2 \\
\text{\textcolor{cyan}{freezing}} & 0 & 0.3 & 0.5 & 0.2 \\
\end{block}\end{blockarray}
\end{align*}
Write a new function that accepts a parameter \li{days} and runs the same kind of simulation as \li{forecast()}, but that uses this new four-state transition matrix.
This time, assume the first day is mild.
Return a list containing the day-to-day results (0 for hot, 1 for mild, 2 for cold, and 3 for freezing).
\label{prob:makov-state-transition}
\end{problem}
% TODO: instead of crappy empirical analysis, teach briefly about steady states and raising the transition matrix to a large power.
\begin{problem} % Problem: Analysis of results.
Write a function that investigates and interprets the results of the simulations in the previous two problems.
Specifically, find the average percentage of days that are hot, mild, cold, and freezing in each simulation.
Does changing the starting day alter the results?
Print a report of your findings.
\end{problem}
\section*{Using Markov Chains to Simulate English} % ==========================
% TODO: is it okay to make this reference?
One of the original applications of Markov chains was to study natural languages.\footnote{In computer science, a \emph{natural language} is a spoken language, like English or Russian. See \url{http://langvillea.people.cofc.edu/MCapps7.pdf} for some details on the early applications of Markov chains, including the study of natural languages.}
In the early $20^{th}$ century, Markov used his chains to model how Russian switched from vowels to consonants.
By mid-century, they had been used as an attempt to model English.
It turns out that Markov chains are, by themselves, insufficient to model very good English.
However, they can approach a fairly good model of bad English, with sometimes amusing results.
By nature, a Markov chain is only concerned with its current state.
Thus a Markov chain simulating transitions between English words is completely unaware of context or even of previous words in a sentence.
For example, a Markov chain's current state may be the word ``continuous.''
Then the chain would say that the next word in the sentence is more likely to be ``function'' than ``raccoon.''
However, without the context of the rest of the sentence, even two likely words strung together may result in gibberish.
We restrict ourselves to a subproblem of modeling the English of a specific file.
The transition probabilities of the resulting Markov chain will reflect the sort of English that the source authors speak.
Thus the Markov chain built from \emph{The Complete Works of William Shakespeare} will differ greatly from, say, the Markov chain built from a collection of academic journals.
In the following problems, we will call the source collection of works the \emph{training set}.
\subsection*{Making the Chain} % ----------------------------------------------
With the weather models of the previous sections, we chose a fixed number of days to simulate.
However, English sentences are of varying length, so we do not know beforehand how many words to choose (how many state transitions to make) before ending the sentence.
To capture this feature, we include two extra states in our Markov model: a \emph{start state} (\textcolor[rgb]{0,.6,0}{\$tart}) marking the beginning of a sentence, and a \emph{stop state} (\textcolor{red}{\$top}) marking the end.
Thus if a training set has $N$ unique words, the transition matrix will be $(N+2) \times (N+2)$.
The start state should only transition to words that appear at the beginning of a sentence in the training set, and only words that appear at the end of a sentence in the training set should transition to the stop state.
The stop state is called an \emph{absorbing state} because once we reach it, we cannot transition back to another state.
% Because every state has a possible path to the stop state, this model is called an \emph{absorbing Markov chain}.
After determining the states in the Markov chain, we need to determine the transition probabilities between the states and build the corresponding transition matrix.
Consider the following small training set as an example.
\begin{lstlisting}
<<I am Sam Sam I am.
Do you like green eggs and ham?
I do not like them, Sam I am.
I do not like green eggs and ham.>>
\end{lstlisting}
If we include punctuation (so ``ham?'' and ``ham.'' are counted as distinct words) and do not alter the capitalization (so ``Do'' and ``do'' are also different), there are 15 unique words in this training set:
%
\begin{align*}
\text{I\quad am\quad Sam\quad am.\quad Do\quad you\quad like\quad green}
\\
\text{eggs\quad and\quad ham?\quad do\quad not\quad them,\quad ham.}
\end{align*}
With start and stop states, the transition matrix should be $17 \times 17$.
Each state must be assigned a row and column index in the transition matrix.
An easy way to do this is to assign the states an index based on the order in which they appear in the training set.
Thus our states and the corresponding indices will be as follows:
%
\begin{align*}
\begin{array}{ccccccc}
\text{\textcolor[rgb]{0,.6,0}{\$tart}} & \text{I} & \text{am} & \text{Sam} & \ldots & \text{ham.} & \text{\textcolor{red}{\$top}}
\\
0 & 1 & 2 & 3 & \ldots & 15 & 16
\end{array}
\end{align*}
The start state should transition to the words ``I'' and ``Do'', and the words ``am.'', ``ham?'', and ``ham.'' should each transition to the stop state.
We first count the number of times that each state transitions to another state:
\begin{align*}
\begin{blockarray}{cccccccc}
& \text{\textcolor[rgb]{0,.6,0}{\$tart}} & \text{I} & \text{am} & \text{Sam} & & \text{ham.} & \text{\textcolor{red}{\$top}} \\
\begin{block}{c(ccccccc)}
\text{\textcolor[rgb]{0,.6,0}{\$tart}} & 0 & 3 & 0 & 0 & \ldots & 0 & 0\\
\text{I} & 0 & 0 & 1 & 0 & \ldots & 0 & 0\\
\text{am} & 0 & 0 & 0 & 1 & \ldots & 0 & 0\\
\text{Sam} & 0 & 2 & 0 & 1 & \ldots & 0 & 0\\
& \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\text{ham.} & 0 & 0 & 0 & 0 & \ldots & 0 & 1\\
\text{\textcolor{red}{\$top}} & 0 & 0 & 0 & 0 & \ldots & 0 & 1\\
\end{block}\end{blockarray}
\end{align*}
Now we divide each row by its sum so that each row sums to 1.
\begin{align*}
\begin{blockarray}{cccccccc}
& \text{\textcolor[rgb]{0,.6,0}{\$tart}} & \text{I} & \text{am} & \text{Sam} & & \text{ham.} & \text{\textcolor{red}{\$top}} \\
\begin{block}{c(ccccccc)}
\text{\textcolor[rgb]{0,.6,0}{\$tart}} & 0 & 3/4 & 0 & 0 & \ldots & 0 & 0\\
\text{I} & 0 & 0 & 1/5 & 0 & \ldots & 0 & 0\\
\text{am} & 0 & 0 & 0 & 1 & \ldots & 0 & 0\\
\text{Sam} & 0 & 2/3 & 0 & 1/3 & \ldots & 0 & 0\\
& \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\text{ham.} & 0 & 0 & 0 & 0 & \ldots & 0 & 1\\
\text{\textcolor{red}{\$top}} & 0 & 0 & 0 & 0 & \ldots & 0 & 1\\
\end{block}\end{blockarray}
\end{align*}
The $3/4$ indicates that 3 out of 4 times, the sentences in the training set start with the word ``I''.
Similarly, the $2/3$ and $1/3$ tell us that ``Sam'' is followed by ``I'' twice and by ``Sam'' once in the training set.
Note that ``am'' (without a period) always transitions to ``Sam'' and that ``ham.'' (with a period) always transitions to the stop state.
Finally, to avoid a row of zeros, we place a 1 in the bottom right hand corner of the matrix (so the stop state always transitions to itself).
The entire procedure of creating the transition matrix for the Markov chain with words from a file as states is summarized below in Algorithm \ref{alg:MarkovSentencesTransitionMatrix}.
\begin{algorithm} % Read a file and convert it into a Markov chain.
\begin{algorithmic}[1]
\Procedure{MakeTransitionMatrix}{}
\State Count the number of unique words in the training set.
\State Initialize a square array of zeros of the appropriate size to be the transition \par\quad matrix (remember to account for the start and stop states).
\State Initialize a list of states, beginning with \li{"\$tart"}.
\For {each sentence in the training set}
\State Split the sentence into a list of words.
\State Add each \emph{new} word in the sentence to the list of states.
\State Convert the list of words into a list of indices indicating which row and \par\qquad\enspace column of the transition matrix each word corresponds to.
\State Add 1 to the entry of the transition matrix corresponding to
\par\qquad\enspace transitioning from the start state to the first word of the sentence.
\For {each consecutive pair $(i, j)$ of words in the list of words}
\State Add 1 to the entry of the transition matrix corresponding to \par\qquad\qquad transitioning from state $i$ to state $j$.
\EndFor
\State Add 1 to the entry of the transition matrix corresponding to
\par\qquad\enspace transitioning from the last word of the sentence to the stop state.
\EndFor
\State Make sure the stop state transitions to itself.
\State Normalize each row by dividing by the row sums (Hint: array broadcasting).
\EndProcedure
\end{algorithmic}
\caption{Convert a training set of sentences into a Markov chain.}
\label{alg:MarkovSentencesTransitionMatrix}
\end{algorithm}
\begin{problem} % Problem: Class that makes a Markov chain from a file.
Write a class called \li{SentenceGenerator}.
The constructor should accept a filename (the training set).
Read the file and build a transition matrix from its contents.
You may assume that the file has one complete sentence written on each line.
\label{problem:markov-random-sentences-init}
\end{problem}
\begin{problem} % Problem: Create random sentences
Add a method to the \li{SentenceGenerator} class called \li{babble()}.
Begin at the start state and use the strategy from Problem \ref{prob:makov-state-transition} to repeatedly transition through the object's Markov chain.
Keep track of the path through the chain and the corresponding sequence of words.
When the stop state is reached, stop transitioning to terminate the simulation.
Return the resulting sentence as a single string.
\newpage
For example, your \li{SentenceGenerator} class should be able to create random sentences that sound somewhat like Yoda speaking.
\begin{lstlisting}
>>> yoda = SentenceGenerator("Yoda.txt")
>>> for i in range(5):
... print(yoda.babble())
...
<<
Impossible to my size, do not!
For eight hundred years old to enter the dark side of Congress there is.
But beware of the Wookiees, I have.
Fear leads to eat as well.
But agree on this, we must, and find your weapon!>>
\end{lstlisting}
\label{prob:markov-random-sentences-babble}
\end{problem}
\newpage
\section*{Additional Material} % ==============================================
\subsection*{Large Training Sets} % -------------------------------------------
The approach in Problems \ref{problem:markov-random-sentences-init} and \ref{prob:markov-random-sentences-babble} begins to fail as the training set grows larger.
For example, a single Shakespearean play may not be large enough to cause memory problems, but \emph{The Complete Works of William Shakespeare} certainly will.
To accommodate larger data sets, consider using a sparse matrix for the transition matrix instead of a regular NumPy array. %(use the \li{lil_matrix} from the \li{scipy.sparse} library). % Why lil_matrix?
Ensure that the process still works on small training sets, then proceed to larger training sets.
How are the resulting sentences different if a very large training set is used instead of a small training set?
\subsection*{Variations on the English Model} % -------------------------------
Choosing a different state space for the English Markov model produces different results.
Consider modifying your \li{SentenceGenerator} class so that it can determine the state space in a few different ways.
The following ideas are just a few possibilities.
\begin{itemize}
\item Let each punctuation mark have its own state.
In the example training set, instead of having two states for the words ``ham?'' and ``ham.'', there would be three states: ``ham'', ``?'', and ``.'', with ``ham'' transitioning to both punctuation states.
\item Model paragraphs instead of sentences.
Add a \textcolor[rgb]{0,.6,0}{\$tartParagraph} state that always transitions to \textcolor[rgb]{0,.6,0}{\$tartSentence} and a \textcolor{red}{\$topParagraph} state that is sometimes transitioned to from \textcolor{red}{\$topSentence}.
\item Let the states be individual letters instead of individual words.
Be sure to include a state for the spaces between words.
We will explore this particular state space choice more in Volume III.
\end{itemize}
\subsection*{Natural Language Processing Tools} % -----------------------------
The Markov model of Problems \ref{problem:markov-random-sentences-init} and \ref{prob:markov-random-sentences-babble} is an example of \emph{natural language processing}.
The \li{nltk} module (natural language toolkit) has many tools for parsing and analyzing text.
For example, \li{nltk.sent_tokenize()} reads a single string and splits it up by sentence.
\begin{lstlisting}
>>> from nltk import sent_tokenize
>>> with open("Yoda.txt", 'r') as yoda:
... sentences = sent_tokenize(yoda.read())
...
>>> print(sentences)
<<['Away with your weapon!',
'I mean you no harm.',
'I am wondering - why are you here?',
...>>
\end{lstlisting}
The \li{nltk} module is \textbf{not} part of the Python standard library.
For instructions on downloading, installing, and using \li{nltk}, visit \url{http://www.nltk.org/}.
\subsubsection{ParetoFrontier}
\label{ParetoFrontierPP}
The \textbf{ParetoFrontier} PostProcessor is designed to identify the points lying on the Pareto Frontier in a multi-dimensional trade-space.
This post-processor receives as input a \textbf{DataObject} (a PointSet only) that contains all data points in the trade-space and
returns the subset of points lying on the Pareto Frontier as a PointSet.
It is assumed here that each data point of the input PointSet is a realization of the system under consideration for a
specific configuration, to which several objective variables correspond (e.g., cost and value).
%
\ppType{ParetoFrontier}{ParetoFrontier}
%
\begin{itemize}
  \item \xmlNode{objective}, \xmlDesc{string, required parameter}, ID of the objective variable that represents a dimension of the trade-space.
        The \xmlNode{objective} node supports the following attributes:
\begin{itemize}
\item \xmlAttr{goal}, \xmlDesc{string, required field}, Goal of the objective variable characteristic: minimization (min) or maximization (max)
\item \xmlAttr{upperLimit}, \xmlDesc{string, optional field}, Desired upper limit of the objective variable for the points in the Pareto frontier
\item \xmlAttr{lowerLimit}, \xmlDesc{string, optional field}, Desired lower limit of the objective variable for the points in the Pareto frontier
\end{itemize}
\end{itemize}
The following is an example where a set of realizations (the ``candidates'' PointSet) has been generated by changing two parameters
(var1 and var2), producing two output variables: cost (to be minimized) and value (to be maximized).
The \textbf{ParetoFrontier} post-processor takes the ``candidates'' PointSet and populates a PointSet with a similar structure
(the ``paretoPoints'' PointSet).
\textbf{Example:}
\begin{lstlisting}[style=XML,morekeywords={anAttribute},caption=ParetoFrontier input example (no expand)., label=lst:ParetoFrontier_PP_InputExample]
<Models>
<PostProcessor name="paretoPP" subType="ParetoFrontier">
<objective goal='min' upperLimit='0.5'>cost</objective>
<objective goal='max' lowerLimit='0.5'>value</objective>
</PostProcessor>
</Models>
<Steps>
<PostProcess name="PP">
<Input class="DataObjects" type="PointSet" >candidates</Input>
<Model class="Models" type="PostProcessor" >paretoPP</Model>
<Output class="DataObjects" type="PointSet" >paretoPoints</Output>
</PostProcess>
</Steps>
<DataObjects>
<PointSet name="candidates">
<Input>var1,var2</Input>
<Output>cost,value</Output>
</PointSet>
<PointSet name="paretoPoints">
<Input>var1,var2</Input>
<Output>cost,value</Output>
</PointSet>
</DataObjects>
\end{lstlisting}
\nb it is possible to specify both upper and lower limits for each objective variable.
When one or both of these limits are specified, the Pareto frontier is filtered so that only the points that
satisfy those limits are preserved.
% -------------------------------------------------------------------------
% This is a good place to outline testing strategies in Amanzi
% -------------------------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Code development cycle}
Development of a new capability consists of several steps that are summarized below.
Some steps can be skipped during a casual work cycle of code support, bug fixes, and minor improvements.
\begin{itemize}
\item Create a new github development branch.
\item Create github ticket or multiple tickets that summarize and stage the development process.
\item Implement numerical algorithms and add them to an Amanzi library.
\item Write unit tests for the new code.
\item Integrate new functionality into other algorithms.
\item Write integrated unit tests as needed.
\item If implemented algorithms take control parameters from an XML file,
document these parameters.
\item Test new capability and add a benchmark or verification test to the
user guide.
\item Create a pull request to inform team members about the new capability and
      to collect miscellaneous feedback.
\item Merge the development branch into the master branch.
\end{itemize}
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Testing}
Testing is a cornerstone of modern software development. In the form of Test-Driven
Development, it is useful for providing feedback in the design process. In other
forms, it is essential for preventing the project from descending into chaos, and
controlling the cost of software maintenance. In this section we describe the
various forms of testing used to certify that Amanzi works properly, in order of
increasing scope.
% Describe testing layout in directories?
\subsection{Unit Testing}
Each individual software component should have a defined set of assumptions under
which it operates, and a set of behaviors and corresponding certifications on
what it produces. These assumptions and behaviors are ideally articulated in the
documentation of the component, but they should also be tested independently as part
of the implementation process. A test of an individual component's assumptions and
behaviors is called a {\em unit test}. A unit test provides a set of PASS/FAIL tests for
each function, method, and attribute in a software component.
Some of Amanzi's tests are integration tests that fill the gap between short unit tests
and long benchmark tests.
At the moment they are also referred to as unit tests.
% Talk about Amanzi's unit testing here.
\subsection{Verification and Benchmark Testing}
The various algorithms we use in Amanzi have to be tested on the basic subsurface
problems that are relevant to our charter, and compared against other codes to
weigh the costs and benefits of our choices against existing approaches.
A {\em verification test} consists of a simulation run with a given input describing
a problem that has a known solution, a characterization of the quality of the
solution, and a PASS or FAIL result based on the quality of that solution measured
against some threshold.
% Describe verification testing here.
A {\em benchmark test} is a simulation run with a given input whose output is
compared to the output of one or more other codes. All codes must have inputs that
describe the same ``benchmark problem''. The differences between the codes can be
evaluated visually and/or with numerical metrics. Numerical metrics allow benchmark
tests to have PASS/FAIL results, whereas a visual inspection test requires an
expert for evaluation, so the former are preferred where practical.
% Describe benchmark testing here.
\subsection{Regression Testing}
A {\em regression test} is a simulation-based PASS/FAIL test similar to a
verification test, and is typically part of a large suite of tests that are run
automatically and periodically to ensure that bugs and errors have not been
introduced into Amanzi during code development. We provide a couple of tools for
constructing PASS/FAIL tests that can be used to monitor bugs and regressions. In
particular, we support two types of regression tests: {\em smoke tests} and
{\em comparison tests}.
\subsubsection{Smoke tests}
A smoke test simply runs an Amanzi simulation with a given input, PASSes if
the simulation runs to completion, and FAILs otherwise. A smoke test can be created
(and added to Amanzi's regression test suite) by calling the following CMake command
inside of a CMakeLists.txt file in a testing directory:
\begin{lstlisting}
ADD_AMANZI_SMOKE_TEST(<test_name>
INPUT file.xml
[FILES file1;file2;...;fileN]
[PARALLEL]
[NPROCS procs1 ... ]
[MPI_EXEC_ARGS arg1 ... ])
\end{lstlisting}
Arguments:
\begin{itemize}
\item \verb|test_name|: the name given to the smoke test
\item \verb|INPUT| (required): This keyword defines the Amanzi XML input file that will
be run.
\item \verb|FILES| (optional): A list of any additional files that the test needs in order
to run in its directory/environment. These files will be copied from the source
directory to the run directory.
\item \verb|PARALLEL| (optional): The presence of this keyword signifies that this is
a parallel job. This is also implied by an NPROCS value greater than 1.
\item \verb|NPROCS| (optional): This keyword starts a list of the number of processors to
run the test on, and defaults to 1.
\item \verb|MPI_EXEC_ARGS| (optional): This keyword denotes extra arguments to give to
MPI. It is ignored for serial tests.
\end{itemize}
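For example, a hypothetical smoke test for a made-up input file \verb|my_flow.xml| (run in serial and on 4 processes; all names below are illustrative only) might be registered as:
\begin{lstlisting}
ADD_AMANZI_SMOKE_TEST(run_my_flow
                      INPUT my_flow.xml
                      FILES my_mesh.exo
                      PARALLEL
                      NPROCS 1 4)
\end{lstlisting}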
\subsubsection{Comparison tests}
A comparison test runs an Amanzi simulation with a given input, and then compares
a field or an observation from that simulation to that in the specified reference
file, PASSing if the L2 norm of the difference in the simulation and reference
values falls below the given tolerance. One can add a comparison test to the
Amanzi regression test suite by calling the following CMake command inside of a
CMakeLists.txt file within a testing directory:
\begin{lstlisting}
ADD_AMANZI_COMPARISON_TEST(<test_name>
INPUT file.xml
REFERENCE reference
[FILES file1;file2;...;fileN]
ABSOLUTE_TOLERANCE tolerance
RELATIVE_TOLERANCE tolerance
[FIELD field_name]
[OBSERVATION observation_name]
[PARALLEL]
[NPROCS procs1 ... ]
[MPI_EXEC_ARGS arg1 ... ])
\end{lstlisting}
Arguments:
\begin{itemize}
\item \verb|test_name|: the name given to the comparison test
\item \verb|INPUT| (required): This keyword defines the Amanzi XML input file that will
be run.
\item \verb|REFERENCE| (required): The name of the file containing reference data to which
the simulation output will be compared.
\item \verb|ABSOLUTE_TOLERANCE|, \verb|RELATIVE_TOLERANCE| (required): These specify the maximum absolute and relative L2 error norms that can be
measured for a successful testing outcome.
\item \verb|FILES| (optional): A list of any additional files that the test needs in order
to run in its directory/environment. These files will be copied from the source
directory to the run directory.
\item \verb|FIELD| (required if OBSERVATION not given): The name of the field in Amanzi that
will be compared to its reference value for this test.
\item \verb|OBSERVATION| (required if FIELD not given): The name of the observation in the Amanzi
input that will be compared to its reference value for this test.
\item \verb|PARALLEL| (optional): The presence of this keyword signifies that this is
a parallel job. This is also implied by an NPROCS value greater than 1.
\item \verb|NPROCS| (optional): This keyword starts a list of the number of processors to
run the test on, and defaults to 1.
\item \verb|MPI_EXEC_ARGS| (optional): This keyword denotes extra arguments to give to
MPI. It is ignored for serial tests.
\end{itemize}
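For example, a hypothetical comparison test against a made-up reference file, checking a single (made-up) observation on 2 processes, might look like:
\begin{lstlisting}
ADD_AMANZI_COMPARISON_TEST(compare_my_flow
                           INPUT my_flow.xml
                           REFERENCE my_flow_gold.h5
                           ABSOLUTE_TOLERANCE 1e-8
                           RELATIVE_TOLERANCE 1e-5
                           OBSERVATION my_observation
                           NPROCS 2)
\end{lstlisting}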
\RequirePackage{docswitch}
% \flag is set by the user, through the makefile:
% make note
% make apj
% etc.
\setjournal{\flag}
%\documentclass[\docopts]{\docclass}
\documentclass[skiphelvet,twocolumn]{aastex63}
%\documentclass[skiphelvet,preprint,twocolumn]{lsstdescnote}
% You could also define the document class directly
%\documentclass[]{emulateapj}
% Custom commands from LSST DESC, see texmf/styles/lsstdesc_macros.sty
\usepackage{lsstdesc_macros}
\usepackage{rotating}
\usepackage{graphicx}
%\usepackage{diagbox}
\usepackage{multirow}
\usepackage{adjustbox}
\usepackage{comment}
\usepackage{mathtools}
\usepackage{textcomp}
\usepackage[toc,page]{appendix}
%\usepackage{pdflscape}
\graphicspath{{./}{./figures/}}
\bibliographystyle{apj}
% Add your own macros here:
\newcommand{\snrb}{\mbox{$SNR^b$}}
\newcommand{\snrbmin}{\mbox{$SNR^b_{min}$}}
\newcommand{\snrg}{\mbox{$SNR^g$}}
\newcommand{\snrr}{\mbox{$SNR^r$}}
\newcommand{\snri}{\mbox{$SNR^i$}}
\newcommand{\snrz}{\mbox{$SNR^z$}}
\newcommand{\snry}{\mbox{$SNR^y$}}
\newcommand{\z}{{$z$}}
\newcommand{\bu}{{$u$}}
\newcommand{\bg}{{$g$}}
\newcommand{\br}{{$r$}}
\newcommand{\bi}{{$i$}}
\newcommand{\bz}{{$z$}}
\newcommand{\by}{{$y$}}
\newcommand{\salt}{SALT2}
\newcommand{\xnorm}{$x_0$}
\newcommand{\strech}{$x_1$}
\newcommand{\snstrech}{\mbox{$x_1$}}
\newcommand{\col}{$c$}
\newcommand{\daymax}{$T_0$}
\newcommand{\sigc}{\mbox{$\sigma_c$}}
\newcommand{\sigmu}{\mbox{$\sigma_\mu$}}
\newcommand{\zlim}{\mbox{$z_{lim}$}}
\newcommand{\zlimfaint}{\mbox{$z_{lim,faint}^{SN}$}}
\newcommand{\cosmos}{{COSMOS}}
\newcommand{\elais}{{ELAIS-S1}}
\newcommand{\xmm}{{XMM-LSS}}
\newcommand{\cdfs}{{CDF-S}}
\newcommand{\adfa}{{ADF-A}}
\newcommand{\adfb}{{ADF-B}}
\newcommand{\adfs}{{Euclid/Roman}}
\newcommand{\euclid}{{Euclid}}
\newcommand{\romanspace}{{Roman Space Telescope}}
\newcommand{\wfirst}{{\sc WFIRST}}
\newcommand{\sne}{{SNe~Ia}}
\newcommand{\sn}{{SN}}
\newcommand{\degsq}{{deg$^2$}}
\newcommand{\nsn}{{$N_{SN}^{z\leq z_{lim}}$}}
\newcommand{\nsncomp}{{$N_{SN}^{z\leq z_{complete}}$}}
\newcommand{\sumnsncomp}{{$\sum\limits_{Npixels} N_{SN}^{z\leq z_{complete}}$}}
\newcommand{\zcomp}{\mbox{$z_{complete}$}}
\newcommand{\zcompb}{\mbox{$z_{complete}^{0.95}$}}
\newcommand{\snfaint}{\mbox{$(\snstrech,\sncolor)=(-2.0,0.2)$}}
\newcommand{\snx}{\mbox{$x_0$}}
\newcommand{\sncolor}{\mbox{$c$}}
\newcommand{\redshift}{\mbox{$z$}}
\newcommand{\per}{$\%$}
\newcommand{\seq}{$\sim$}
\newcommand{\nvisits}{$N_{visits}$}
\newcommand{\nvisitsb}{\mbox{$N_{visits}^b$}}
\newcommand{\sumnvisitsb}{\mbox{$\sum\limits_{b}N_{visits}^b$}}
\newcommand{\nvisitsbmin}{\mbox{$N_{visits,min}^b$}}
\newcommand{\nvisitsy}{$N_{visits}^y$}
\newcommand{\nvisitsall}{$N_{visits}^g,N_{visits}^r,N_{visits}^i,N_{visits}^z,N_{visits}^y$}
\newcommand{\ddfscen}[1]{RDD\_#1}
\newcommand{\osfamily}[1]{{\it #1}}
\newcommand{\doffset}{tdo}
% ======================================================================
\begin{document}
\title{LSST Deep Drilling program and Supernovae}
\input{authors}
\maketitlepre
\begin{abstract}
The ambition of the Rubin Observatory Legacy Survey of Space and Time (LSST) Supernovae (\sn) program is to unveil the nature of Dark Energy by measuring cosmological parameters with a high degree of accuracy. Reducing systematic uncertainties on cosmological parameters requires enriching the core sample of well-measured Type Ia supernovae (\sne). The average quality of observed \sne~is estimated from the signal-to-noise ratio of the photometric light curves, which is primarily driven by survey parameters such as the cadence and the number of visits per band. An optimal observing strategy is thus critical to the success of the Supernovae science program, and the LSST
%Collecting a large sample of well-measured type Ia supernovae is a prerequisite to reduce systematic uncertainties on cosmological parameters. The average quality of the photometric light curves is primarily driven by observing strategy %The main Wide Fast Deep (WFD) survey will provide an unprecedented number of well-sampled \sne~up to redshift completeness of $\sim$0.2-0.3.
Deep Drilling Field (DDF) mini-survey is crucial for collecting a sample of high-redshift (up to $z\simeq$1), well-measured supernovae that improves cosmological constraints from \sne. The goal of this paper is to quantify the impact of the DDF strategy parameters (number of fields, cadence, number of seasons of observation, number of visits per band, budget) on the redshift completeness and the number of \sne. A detailed analysis of recent LSST simulations shows that the depth of the DD survey (limited to $z\sim0.65$) can be explained by an insufficient signal-to-noise ratio (per band) or, equivalently, by an inappropriate number of visits per band. We propose a method that provides guidance for estimating the number of visits (per band) required to reach higher redshifts. The results of this method are used to refine the contours of DDF scenarios that would meet the goal in terms of size and depth of the SN sample. We have taken synergistic datasets into account (Euclid/Roman and spectroscopy for follow-up and host-galaxy redshifts) to build a realistic Deep Rolling survey characterised by a low cadence (one day), a rolling strategy (each field observed for at least two seasons), and two sets of fields: ultra-deep ($z\gtrsim0.8$) and deep ($z\gtrsim0.5$) fields.
%Further studies (i.e. accurate joint simulations with concerned endeavours) are requested to finalize the settings of this Deep Rolling (DR) strategy.
\end{abstract}
% Keywords are ignored in the LSST DESC Note style:
\dockeys{Cosmology; Supernovae; Astronomy software; Observing Strategy}
%\maketitlepost
% ----------------------------------------------------------------------
% {% raw %}
\section{Introduction}
\label{sec:intro}
Type Ia supernovae (\sne) are transient astronomical events resulting from a powerful and luminous explosion of a white dwarf. They are identified by their brightness evolution, with a luminosity peak about 15 days after explosion and a slow decrease lasting up to a few months. \sne~can be used as standard candles to determine cosmological distances. They are included in a Hubble diagram, one of the most statistically efficient approaches to constrain the Dark Energy equation of state (\citealt{Betoule_2014,Scolnic_2018}).
\par
The stated ambition of the Rubin Observatory Legacy Survey of Space and Time (LSST) Supernovae program is to maximize the sample size of well-measured type Ia supernovae (\sne) while reducing systematic uncertainties on cosmological parameters. Increasing the statistics in the Hubble diagram requires (1) advances in the measurement of the distances (i.e. control of the photometry and survey flux calibration at the per-mil level, and progress in standardization techniques); (2) a better control of the astrophysical environment and its potential impacts on the SN light curves and distances (local host properties, absorption); (3) a better control of the SN diversity (SN~Ia sub-populations, population drift with redshift); (4) a precise determination of the survey selection function (SN identification, residual contamination by non-SN~Ia's as a function of redshift). Having access to a {\it large sample} of {\it well-measured} \sne~is a prerequisite for a successful completion of this list of improvements.
\par
The ten-year Rubin Observatory Legacy Survey of Space and Time (LSST) will image billions of objects in six colors. 80 to 90\per~of the observing time will be dedicated to the Wide Fast Deep (WFD) primary survey, which will cover half of the sky ($\sim$ 18000 \degsq) at a ``universal'' cadence. The remaining observing time will be shared among other programs (mini-surveys) including intensive scanning of a set of Deep Drilling (DD) fields. It is expected that about 10\per~(100\per) of \sne~observed in the WFD (DD) survey will be identified from spectral features. Accurate supernova parameters will be estimated from well-measured light curves characterized by a sampling of a few days and a high Signal-to-Noise Ratio per band (\snrb). %As a consequence all the studies presented in this paper rely on the supernova light curves only.
Obtaining high quality light curves is therefore a key design point of the SN survey: the average quality of the light curves depends primarily on the observing strategy.
\par
In a recent paper (\cite{lochner2021impact}), the Dark Energy Science Collaboration (DESC) has presented an analysis of the WFD survey of observing strategies simulated by the LSST project\footnote{These strategies are available from https://community.lsst.org/t/community-survey-strategy-highlights/4760.}. The conclusion is that an unprecedented number of high quality \sne~will be observed in the WFD survey (between 120k and 170k) up to redshifts $z\sim 0.3$. The DD Supernovae program is critical for obtaining a sample of high-redshift (up to $z\simeq$1) and well-measured supernovae so as to achieve Stage IV dark energy goals.
%which improves cosmological constraints from \sne. Achieving Stage IV dark energy goals will critically rely on the deep drilling fields of LSST.
\par
%This paper compiles a set of studies related to the DD program of LSST.
This paper deals with the interplay between the DD strategy and \sne. The main goal is to design a set of strategies that would optimize the size and depth of the well-measured \sne~sample collected by the survey. We perform a detailed study of the impact of the strategy parameters (number of fields to observe, number of seasons, season lengths, number of visits per night and per field) on the SN sample quality to assess whether observing \sne~up to $z\simeq$1 is achievable given design constraints, including in particular the number of visits allotted to DDFs. This article is subdivided into 5 sections. The design constraints of the DD program and the metrics used to assess observing strategies are described in the first two parts. A detailed analysis of recent DD simulations proposed by LSST is presented in the third section. A method to optimize the DD program is proposed in the fourth part. The last section of the document is dedicated to the presentation of realistic DD scenarios that would achieve the goal of observing high quality \sne~up to high redshifts.
\section{Deep Drilling survey design constraints}
\label{sec:design}
Survey parameters having an impact on the design of the DD mini-surveys are the number of fields to be observed, the cadence of observation, the number of seasons of observation, the season length, and, last but not least, the budget.
\par
Four distant extragalactic survey fields that LSST guarantees to observe as Deep Drilling Fields were selected in 2012\footnote{http://ls.st/bki}: \cosmos, \elais, \xmm, \cdfs~(Tab. \ref{tab:locddf}). More recently, the DESC collaboration has supported the LSST DDF coverage of the southern deep fields area to ensure contemporaneous observations with \euclid~(\citealt{laureijs2011euclid,Amendola_2013}) and \romanspace~(\citealt{spergel2015widefield}), at the beginning and at mid-term of the LSST survey, respectively.
\begin{table}[!htbp]
\caption{Location of the DD fields considered in this study. ADF-A and ADF-B are examples of southern fields in the \adfs~area simulated in the LSST observing strategy.}\label{tab:locddf}
\begin{center}
\begin{tabular}{c|c|c}
\hline
\hline
Field & Central RA & Central Dec\\
Name & (J2000) & (J2000)\\
\hline
\elais & 00:37:48 & −44:01:30 \\
\xmm & 02:22:18 & −04:49:00 \\
\cdfs & 03:31:55 & −28:07:00 \\
\cosmos &10:00:26 & +02:14:01 \\
\hline
\adfa & 04:51:00& −52:55:00 \\
\adfb & 04:35:00 & −54:40:00 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\par
A regular cadence\footnote{The cadence is defined as the median of inter-night gaps.} of observation ($\sim$ 3 days at most) is required to collect well-sampled light curves (LC). Large gaps ($>$10 days) between visits degrade the
measurements of luminosity distances, and potentially result in rejecting large sets of poor-quality light curves.
\par
The number of exploding supernovae is proportional to the number of seasons of observation and to the season duration (\citealt{perrett}). Maximizing season length is particularly important in the DDFs because of time dilation. Season lengths of at least six months are required to maximize the size of the sample of well-measured \sne.
\par
It is expected that 5-15$\%$ of the total number of LSST visits will be allotted to the DD program and shared among science topics interested in DD observations (such as AGN, supernovae, ...). This limited budget is related to the total number of visits per observing night by:
\begin{equation}\label{eq:ddbudget}
\begin{aligned}
&DD_{budget} = N_{visits}^{DD}/(N_{visits}^{DD}+ N_{visits}^{non-DD})\\
&N_{visits}^{DD} = \sum_{i=1}^{N_{fields}} \sum_{ j=1}^{N_{season}^i} N_{visits,night}^{ij}\times seaslen^{ij}/cad^{ij} \\
& N_{visits,night}^{ij} = \sum_{b} N_{visits,night}^{ijb} , b=g,r,i,z,y
\end{aligned}
\end{equation}
%\begin{eqnarray}
%DD_{budget} &=& N_{visits}^{DD}/N_{visits}^{tot} \\
%N_{visits}^{DD} &=& \sum_{i=1}^{N_{fields}} \sum_{ j=1}^{N_{season}^i} N_{visits,night}^{ij}\times seaslen^{ij}\times 30/cad^{ij} \\
%N_{visits,night}^{ij} &=& \sum_{b} N_{visits,night}^{ijb} , b=g,r,i,z,y
%\label{eq:ddbudget}
%\end{eqnarray}
where $N_{fields}$ is the number of DD fields, $N_{season}$ the number of seasons of observation per field, $seaslen$ the season length (in days), $cad$ the cadence of observation, and $N_{visits, night}^{ij}$ the total number of visits per observing night, per field, and per season. The maximum number of visits per night and per field depends fairly strongly on the budget (the total number of visits is multiplied by 2.5 when increasing the budget from 6\% to 15\%) but also on the configuration of the survey, which may be parametrized by $N_{fields}\times N_{seasons}^{field}$: the number of visits increases by a factor of 5 if $N_{fields}\times N_{seasons}^{field}$ decreases from 50 to 10.
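Under the simplifying assumption of $N_{fields}$ fields, each observed for $N_{season}$ seasons with a common cadence and season length, Eq.~\eqref{eq:ddbudget} can be inverted to give the nightly allocation per field:
\begin{equation*}
N_{visits,night} \simeq \frac{N_{visits}^{DD}\times cad}{N_{fields}\times N_{season}\times seaslen},
\end{equation*}
which makes explicit that, at fixed budget, the number of visits per observing night scales linearly with the cadence and as the inverse of $N_{fields}\times N_{season}$.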
\begin{comment}
We have estimated $N_{visits,night}$ from the following input: cadence of 1 day, season lengths of 6 months, 5 fields observed for 10 or 2 seasons, and a budget of 6, 10 and 15\%. The conclusion (Tab. \ref{tab:ddbudget}) is that the maximum number of visits per night and per fields is quite dependent on the budget (the number of visits is multiplied by 2.5 when increasing the budget from 6\% to 15\%) but also on the configuration of the survey that may be parametrized by $N_{fields}\times N_{seasons}^{field}$: the number of visits increases by a factor 5 if $N_{fields}\times N_{seasons}^{field}$ decreases from 50 to 10.
\begin{table}[!htbp]
\caption{Total number of visits (per observing night) as a function of the DD budget and the cadence of observation for a configuration of 5 fields, for 10 and 2 seasons of observation, and for a cadence of 1 day. }\label{tab:ddbudget}
\begin{center}
%\begin{tabular}{c|c|c|c}
\begin{tabular}{c|c|c}
\hline
\hline
%\diagbox[innerwidth=3.cm,innerleftsep=-1.cm,height=3\line]{budget}{cadence} & 1 & 2 & 3\\
budget & $N_{visits, night}$ & $N_{visits, night}$\\
& 5 fields, 10 seasons & 5 fields, 2 seasons \\
\hline
6\% & 13 & 66 \\
10\% & 22 & 111 \\
15\% & 33 & 166 \\
%6\% & 13/66 & 26/132 & 40/199 \\
%10\% & 22/111 & 44/221 & 66/332 \\
%15\% & 33/166 & 66/332 & 99/498 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{comment}
\section{Metrics to assess observing strategies}
\label{sec:metrics}
The metrics used to assess observing strategies are estimated from full simulations of light curves. We have used the SALT2 model (\citealt{Guy_2007,Guy_2010}), where a supernova is described by five parameters: \snx, the normalization of the SED sequence; \snstrech, the stretch; \sncolor, the color; \daymax, the day of maximum luminosity; and $z$, the redshift. A flat-$\Lambda$CDM model was used to estimate cosmological distances, with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.3 and $\Omega_\Lambda$ = 0.7.
\par
In SALT2, model uncertainties of the $g$- and $r$-band (rest-frame UV) light-curve fluxes are large (\citealt{Guy_2007}). $g$ and $r$ observations with a relative model error larger than 10$\%$ have not been considered in this study. This requirement implies that the list of filters useful to measure photometric light-curves (observer-frame) is redshift-dependent: \bg\br\bi~for $z\lesssim$0.1, \bg\br\bi\bz~for $0.1\lesssim z\lesssim$0.3-0.4, \br\bi\bz\by~for $0.4\lesssim z \lesssim 0.6-0.7$, and \bi\bz\by~for $z\gtrsim 0.7$.
\par
We rely on two metrics to assess observing strategies: the redshift limit \zlim, and the number of well-measured \sne, \nsn. A well-measured supernova is defined by the following tight selection criteria: light curve points with $SNR\geq$ 1; at least four (ten) epochs before (after) maximum luminosity; at least one point with a phase lower (higher) than -10 (20); and \sigc$\leq$0.04, where \sigc~is the error on the color parameter of the supernova (this corresponds to \sigmu$\leq$0.12 where \sigmu~is the distance modulus error). The redshift limit is defined as the maximum redshift of supernovae passing these selection criteria. The metrics are measured in HEALPixels of size 0.46 $deg^2$ over the region of the DDFs.
\par
The estimation of \zlim~is mainly driven by the selection on the color error, \sigc$\leq$0.04. \sigc~reflects the quality of the collected light curves, the result of the cadence of observation (sampling) and of the flux uncertainty measurements (observing conditions). The estimation of \sigc~is strongly correlated with the Signal-to-Noise Ratio (SNR) per band, \snrb, defined by:
\begin{equation}
\begin{aligned}
SNR^b &= \sqrt{\sum_{i=1}^{n^b}{\left(\frac{f_i^b}{\sigma_i^b}\right)^2}}
\end{aligned}
\label{eq:snrb}
\end{equation}
where $f^b$ and $\sigma^b$ are the fluxes and flux uncertainties. The summation runs over the number of light curve points. Requesting \sigc$\leq 0.04$ is equivalent to requiring a minimal SNR per band, and the link between \zlim~and \snrb~may be written:
\begin{equation}
\begin{aligned}
\zlim &\Longleftarrow & \sigc \leq 0.04 & \Longleftarrow &\cap (\snrb \geq \snrbmin)
\end{aligned}
\label{eq:zlimsnr}
\end{equation}
The redshift of a complete sample, \zcompb, is estimated from the redshift limit distribution, \zlimfaint, of a simulated set of faint supernovae with \daymax~values spanning over the season duration of a group of observations. \zcompb~is defined as the 95th percentile of the \zlimfaint~cumulative distribution.
\section{Analysis of recent simulations}
\label{sec:analysis}
The LSST project has periodically released a few sets of simulations over the past years. These simulations contain a large number of WFD strategies depending on parameters such as observed area, filter allocation, weather impact, and scanning strategy. The diversity of DD scenarios is rather limited and we have chosen to analyze a set of observing strategies with representative DD surveys on the basis of the following criteria: number of visits (and filter allocation) per observing night, cadence of observation, dithering, and budget. The list of observing strategies is given in Tab. \ref{tab:os}.
%\input{texfiles/dd_summary_0_filteralloc}
%\input{texfiles/dd_summary_1}
%\input{texfiles/dd_summary_2}
%\input{texfiles/dd_summary_3}
Four sets of observing strategies may be extracted from Tab. \ref{tab:os} according to the filter allocation per night, which is the parameter that has the most significant impact on the redshift limit value: the \osfamily{baseline} family (11 observing strategies, with two sets of filter distributions), and the \osfamily{agn}, \osfamily{daily}, and \osfamily{desc} families. Estimation of the pair metric (\nsncomp,\zcompb) (defined in Sec. \ref{sec:metrics}) for these families shows (Fig. \ref{fig:nsn_zlim_zoom}) that higher redshift limits are reached for the \osfamily{baseline-like} family. Most (10/11) of these observing strategies reach \zcompb$\sim$0.65. ddf\_heavy, the strategy with the largest budget, reaches \zcompb$\sim$0.72 and also collects the largest number of well-sampled \sne. \osfamily{daily} and \osfamily{desc} are characterized by a lower depth but by a significant number of well-measured \sne.\par
The redshift completeness value is mainly driven by the number of visits per observing night and by the cadence. We would expect \zcompb$\sim$(0.75,0.52,0.48,0.65) for (\osfamily{baseline},\osfamily{agn},\osfamily{daily},\osfamily{desc}), respectively, for a regular cadence of 2 days and median observing conditions. The corresponding expected number of well-measured supernovae would be (9000,3400,2700,6200) without dithering. A comparison of these numbers with Fig. \ref{fig:nsn_zlim_zoom} reveals that the values of the pair metric %(\nsncomp, \zcompb)
are the complex outcome of the combination of the following (ranked in order) parameters: number of visits per observing night$\times$cadence, season length, dithering, observing conditions. Gaps of more than $\sim$ 5-7 days in a 2-day median cadence have a harmful
impact on the size and depth of the SN sample. The main source of gaps for the DDF survey is telescope downtime (clouds, telescope maintenance), which leads to about 16-20$\%$ of nights without observations per season.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{nsn_zlim_zoom.png}
\caption{\nsncomp~vs \zcompb~for the observing strategies considered in this paper.}\label{fig:nsn_zlim_zoom}
\end{center}
\end{figure*}
\par
The translational dithering is expected to affect both the number of well-sampled supernovae and the redshift completeness for each HEALPixel of a field. With no dithering, \nsncomp~and \zcompb~distributions are uniform on the whole field area. The dithering tends to increase the cadence of observation and to decrease both \nsncomp~and \zcompb~(per HEALPixel). The dithered pointings of the telescope are estimated from the central location of the fields (Tab. \ref{tab:locddf}) and lead to a distribution map of the metrics decreasing as the (HEALPixel center) distance to the central location increases. A more subtle effect is observed (Fig. \ref{fig:dither}) when all the pixels are considered to estimate the total number of supernovae and the 95th percentile redshift completeness. \zcompb~tends to decrease with an increase of the translational dither offset (\doffset), with a greater effect for low cadences. The total number of supernovae is the result of two opposite effects, a decrease of the cadence and an increase of the survey area, the latter offsetting the former for low cadences and low \doffset~values.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.55\textwidth]{dither_ddf.png}
\caption{Ratio of the number of supernovae $N_{SN}/N_{SN}^{nodither}$ (top) and \zcompb~difference (bottom) as a function of translational dither offset. The simulations labelled as 'Fakes' (dotted lines) correspond to regular cadences (1,2,3,5 days) with median observing conditions (5$\sigma$ depth single exposure: 24.13/23.84/23.45/22.74/22.10 for $g/r/i/y/z$ bands, respectively).}\label{fig:dither}
\end{center}
\end{figure}
\section{Optimization of the number of visits}
\label{sec:opti}
The analysis of recent simulations has shown (see Sec. \ref{sec:analysis}) that it was difficult, with the proposed cadences of observation, filter allocation and season lengths, to collect complete samples of \sne~with redshifts higher than \zcompb\seq 0.55-0.6 for a DD budget of \seq 5\per. %A high budget (\seq 13.5\per) is requested to reach \zcompb\seq 0.65.
%The redshift limits are mainly driven by \sumnvisitsb$\times$ cadence.
With optimal survey parameters (i.e. a regular cadence of observation of one day, no dithering, minimal fluctuation of observing conditions) the depth of the four families considered in Sec. \ref{sec:analysis} would be \zcompb $\sim$ (0.77,0.59,0.54,0.72) for (\osfamily{baseline},\osfamily{agn},\osfamily{daily},\osfamily{desc}), respectively. These results are well below the ambitious goal of \zcompb\seq1 because \snrb~values are too low to reach higher redshifts (Eq. \ref{eq:zlimsnr}). The signal-to-noise ratio per band is the complex result of the combination of the SN flux ($z$-dependent), the number of visits per band, the cadence of observation, and observing conditions (5-$\sigma$ depth). It is thus not possible to estimate the
observing strategy parameters required to reach higher redshifts from the results of Sec. \ref{sec:analysis}. This is why we present in this section a study to assess the relationship between the redshift limit and the number of visits per band and per observing night (for a defined cadence). The optimized number of visits per band required to reach higher redshifts, estimated with this approach, is a key parameter for building DD scenarios consistent with the list of constraints presented in Sec. \ref{sec:design}.
\par
As described in Eq. \ref{eq:ddbudget}, the DD budget depends primarily on 5 parameters: the number of fields to observe, the season length (per field and per season), the number of seasons of observation, the cadence of observation (per field and per season), and the number of visits \nvisitsb~per filter and per observing night. \nvisitsb~affects \snrb~through the flux measurement uncertainties $\sigma_i^b$. In the background-dominated regime one has $\sigma_i^b \simeq \sigma_5^b$, where $\sigma_5^b$ is equal by definition to
\begin{equation}\label{eq:opt2}
\begin{aligned}
\sigma_5^b &= \frac{f_5^b}{5}
\end{aligned}
\end{equation}
where $f_5^b$ is the five-sigma flux, related to the five-sigma depth magnitude $m_5^b$ through:
\begin{equation}\label{eq:opt3}
\begin{aligned}
m_5^b &= -2.5 \log f_5^b+zp^b
\end{aligned}
\end{equation}
where $zp^b$ is the zero point of the considered filter. $m_5^b$ is related to \nvisitsb~through:
\begin{equation}\label{eq:opt4}
\begin{aligned}
m_5^b - m_5^{b, single~visit} & = 1.25 \log(N_{visits}^b)
\end{aligned}
\end{equation}
where $m_5^{b, single~visit}$ is the five-sigma depth corresponding to a single visit, a parameter that depends on the observing conditions. Equations \eqref{eq:opt2}-\eqref{eq:opt4} describe the relationship between \snrb~and \nvisitsb. The requirement $\snrb~\geq~\snrbmin$ is thus equivalent to $\nvisitsb~\geq~\nvisitsbmin$, and Eq. \eqref{eq:zlimsnr} may be written:
\begin{equation}
\begin{aligned}
\zlim & \Longleftarrow & \sigc \leq 0.04 & \Longleftarrow & \bigcap_b \left(\nvisitsb~\geq~\nvisitsbmin\right)
\end{aligned}
\label{eq:zlimnvisits}
\end{equation}
\\
The relations \eqref{eq:zlimsnr} and \eqref{eq:zlimnvisits} are not one-to-one: many \snrb~(\nvisitsb) combinations lead to the same result, and constraints have to be applied to choose the optimal configuration. We have performed a systematic scan of the SNR parameter space (\snrg,\snrr,\snri,\snrz,\snry) and picked combinations fulfilling the above-mentioned selection criteria (Sec. \ref{sec:metrics}). The optimal solution is selected by minimizing the total number of visits per observing night and by imposing an upper limit on the number of \by-band visits. This selection aims at reducing the (potentially severe) systematic effects affecting the \by-band measurements. The result is displayed in Fig. \ref{fig:nvisits_zlim} for a 1-day cadence. The number of visits strongly increases with the redshift completeness for \zcomp$\gtrsim 0.7$, where only the three bands \bi\bz\by~can be used to construct \sne~light curves. 284 visits are required to reach \zcomp$\sim$0.95 for a one-day cadence. The number of visits required to reach a given \zcomp~value increases linearly (to first approximation) with the cadence. This tends to disfavour DD strategies with high cadences: a high number of visits per observing night (e.g. 7 hours of observations to reach \zcomp $\sim$ 0.95 for a cadence of 3 days) would jeopardize the uniformity of the WFD survey.
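As a purely illustrative companion to Eqs. \eqref{eq:opt2}-\eqref{eq:opt4} (base-10 logarithms assumed), the short Python sketch below shows how the coadded five-sigma depth, and hence $\sigma_5^b$, scales with the number of visits; the single-visit depth and zero point used here are placeholder values and this is not the code used for the analysis.
\begin{verbatim}
import numpy as np

def m5_coadd(m5_single, n_visits):
    # Eq. (opt4): coadded 5-sigma depth after n_visits visits
    return m5_single + 1.25 * np.log10(n_visits)

def sigma5(m5, zp=30.0):
    # Eqs. (opt2)-(opt3): sigma_5 = f_5/5 with m_5 = -2.5 log10(f_5) + zp
    f5 = 10 ** (-0.4 * (m5 - zp))
    return f5 / 5.0

def min_visits(m5_single, sigma_target, zp=30.0, n_max=300):
    # smallest number of visits such that sigma_5 <= sigma_target
    for n in range(1, n_max + 1):
        if sigma5(m5_coadd(m5_single, n), zp) <= sigma_target:
            return n
    return None

m5_single = 22.74                     # placeholder single-visit depth
sig1 = sigma5(m5_coadd(m5_single, 1))
for n in (1, 25, 100):
    print(n, sigma5(m5_coadd(m5_single, n)) / sig1)   # ~1, ~0.2, ~0.1
print(min_visits(m5_single, 0.3 * sig1))              # 12 (1/sqrt(12) ~ 0.29)
\end{verbatim}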
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth,left]{nvisits_zlim.png}
\caption{Number of visits as a function of the redshift completeness. 235 visits {\it with} the following filter allocation (\nvisitsall)=(2,4,89,121,18) are required per observing night to reach \zcomp\seq 0.9 for a cadence of one day.}\label{fig:nvisits_zlim}
\end{figure}
\par
The optimized number of visits required to reach higher redshift completeness is the last piece of the puzzle to be included in the budget estimator (Eq. \ref{eq:ddbudget}). We now have the tools to design realistic DD scenarios.
%leading to the collection of a large-sample of well measured \sne~up to higher completeness redshifts.
\begin{comment}
We may then use \ref{eq:zliminvisits} and \ref{fig:nvisits_zlim} to estimate the redshift completeness corresponding to the number of visits of \ref{tab:ddbudget}. The conclusions of the result (Tab. \ref{tab:zlim}) are (a) it is very difficult to reach completeness redshifts higher than 0.6-0.7 if all fields are observed for ten years ; (b) the only way to explore higher redshift domains is to reduce the number of seasons of observation.
\begin{table}[!htbp]
\caption{Redshift completeness as a function of the DD budget and the cadence of observation for a configuration of 5 fields. The first/second number corresponds to 10/2 seasons of observation per field. The redshift completeness are independent on the cadence since the total SNR per band, \snrb,are identical.}\label{tab:zlim}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
\hline
\diagbox[innerwidth=3.cm,innerleftsep=-1.cm,height=3\line]{budget}{cadence} & 1 & 2 & 3\\
\hline
6\% &\multicolumn{3}{c}{0.62/0.74} \\
10\% & \multicolumn{3}{c}{0.66/0.79} \\
15\% & \multicolumn{3}{c}{0.68/0.83}\\
\hline
\end{tabular}
\end{center}
\end{table}
\end{comment}
\section{Deep Rolling surveys to probe high \zcomp~domains}
\label{sec:scenario}
\paragraph{Season length and field location}
An astronomical target is said to be observable if it is visible for some time (for LSST: altitude 20\textdegree$\leq$ alt $\leq$ 86.5\textdegree~and airmass $\leq$ 1.5). The nightly period of visibility depends on the location of a field w.r.t. LSST. The season length may be estimated from the list of nights during which a field is observable; it depends on the required time of visibility, that is, on the total exposure time of the field. The estimation of the season length as a function of the total number of visits (Fig. \ref{fig:seasonlength_nvisits}) suggests a decrease from 275-200 to 150-100 days when \nvisits~increases from 1 to 400. Combining the information of Sec. \ref{sec:opti} and Fig. \ref{fig:seasonlength_nvisits} leads to the conclusion that low cadences are favored to reach higher \zcomp~values while maximizing the season length.
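As an illustration of the observability cuts quoted above, the minimal Python sketch below checks a few altitudes, assuming the simple plane-parallel approximation airmass $\simeq 1/\sin(\mathrm{alt})$; it is not the visibility computation used for Fig. \ref{fig:seasonlength_nvisits}.
\begin{verbatim}
import numpy as np

def airmass(alt_deg):
    # plane-parallel approximation
    return 1.0 / np.sin(np.radians(alt_deg))

def observable(alt_deg, alt_min=20.0, alt_max=86.5, airmass_max=1.5):
    return (alt_min <= alt_deg <= alt_max) and (airmass(alt_deg) <= airmass_max)

for alt in (25.0, 45.0, 60.0, 86.5):
    print(alt, round(airmass(alt), 2), observable(alt))
\end{verbatim}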
%This is due to the fact that the minimal \snrb~to reach \zcomp~is independent on the cadence. The corresponding requested number of visits increases with the cadence. This leads to a decrease of the season length.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth]{seasonlength_nvisits.png}
\caption{Maximal season length as a function of the number of visits. Fields are observable if the following requirements are met: altitude 20\textdegree$\leq$ alt $\leq$ 86.5\textdegree~and airmass $\leq$ 1.5.}\label{fig:seasonlength_nvisits}
\end{center}
\end{figure}
\paragraph{Deep Rolling strategy}
A Deep Drilling program may be defined by the following parameters: the number of fields to observe, the number of seasons of observation (per field), the season length (per field and per season), the number of visits per filter per observing night, the DD budget, the number of supernovae, and the redshift completeness. Once the field configuration (fields to observe, number of seasons, season length) is set, one of the three parameters (\zcomp, budget, \nvisits) may be fixed to estimate the other two using Eq. \ref{eq:ddbudget} and the results of Fig.~\ref{fig:nvisits_zlim}.
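To make this bookkeeping explicit, the sketch below implements a simplified budget estimate of the form (DD visits)/(total LSST visits); the exact expression of Eq. \ref{eq:ddbudget} is not reproduced here and the total number of LSST visits is a placeholder value, so the printed number is indicative only.
\begin{verbatim}
TOTAL_LSST_VISITS = 2.0e6   # placeholder value, not taken from this paper

def dd_budget(n_fields, n_seasons, season_length_days, cadence_days,
              n_visits_per_night, total_visits=TOTAL_LSST_VISITS):
    # DD visits accumulated over all fields and seasons, divided by the
    # assumed total number of LSST visits over the ten-year survey
    nights_per_season = season_length_days / cadence_days
    dd_visits = n_fields * n_seasons * nights_per_season * n_visits_per_night
    return dd_visits / total_visits

# e.g. 2 ultra-deep fields, 2 seasons each, 180-day seasons, 1-day cadence,
# 235 visits per observing night (the 1-day-cadence value quoted above)
print(dd_budget(2, 2, 180, 1, 235))
\end{verbatim}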
\begin{comment}
A GUI (Fig. \ref{fig:budget_gui}) was designed from the results of sec. \ref{sec:opti} to design Deep Drilling scenarios. Once the field parametersconfiguration (fields to observe, number of seasons, season length) is set, one of the three parameters (\zcomp, budget, \nvisits) may be fixed to estimate the two others.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{budget_GUI.png}
\caption{A GUI to define Deep Drilling programs. The field parameters (fields to observe, number of seasons, season length) are defined in the bottom table. The bottom plot displays the number of visits as a function of \zcomp. The budget as a function of \zcomp~is represented on the top plot. One of the three parameters (\zcomp, budget, \nvisits) may be fixed to estimate the two others. The expected total number of supernovae is also estimated. We have chosen \zcomp$\sim$0.8 as an illustraion.}\label{fig:budget_gui}
\end{center}
\end{figure}
\end{comment}
Observing 5 fields for ten years up to \zcomp $\sim$ 0.9 would certainly give access to a large sample of well-measured \sne~(around 11k), but it corresponds to an unrealistic scenario (DD budget: 87\%). The only way to reach higher \zcomp~while remaining within a reasonable budgetary envelope is to reduce the number of seasons of observation. This is the purpose of the Deep Rolling (DR) strategy, characterized by a limited number of seasons of observation per field (at least 2) and a large number of visits per observing night (more than 200 for higher \zcomp~values).
\par
%Building a consistent DDR strategy is not only a matter of adjusting LSST survey settings to stay within a reasonable budgetary envelope.
%Synergistic
Spectroscopic datasets from cosmological endeavors overlapping with LSST in area and timing are essential for the success of the supernova program. They provide enormous added benefits through (a) the follow-up of the full sample of well-measured supernovae (photometric classification), and (b) the measurement of host-galaxy redshifts with high accuracy. Two of the spectroscopic resources contemporaneous with LSST, the Prime Focus Spectrograph (PFS, \cite{Tamura_2016}) and 4MOST (\cite{4MOST}), may provide guidance in the choice of the depth of the DR survey. The PFS spectroscopic follow-up survey is designed to observe two of the LSST DDFs accessible from the Subaru telescope: \cosmos~and \xmm. About 4000 spectra (half from live supernovae and half for host-galaxy redshifts) up to $z\sim0.8$ will be collected. The 4MOST Time-Domain Extragalactic Survey (TiDES, \cite{TiDES}) is dedicated to the spectroscopic follow-up of extragalactic optical transients and variable sources selected from e.g. LSST. The goal is to collect spectra for up to 30\,000 live transients to $z\sim0.5$ and to measure up to 50\,000 host-galaxy redshifts up to $z\sim1$. Two sets of LSST fields may then be defined to fully benefit from the synergy with PFS and 4MOST: ultra-deep (\cosmos, \xmm) and deep (\adfs, \cdfs, \elais) fields.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.8\textwidth]{budget_zcomp.png}
\caption{DD budget (top) and total number of \sne~(bottom) as a function of the redshift completeness of deep fields for a set of scenarios where the redshift completeness of ultra-deep fields (\cosmos~and \xmm) is set to 0.9. Subscripts correspond to the number of seasons of observation and superscripts to the redshift limit. Highest \zcomp~values are reached for a minimal strategy composed of two seasons of observations of \cosmos~and \xmm~with a depth of 0.9 plus 4 seasons of observation of the \adfs~ field (1 pointing). Coloured areas correspond to a variation of the number of $y$-band visits (20$\leq N_{visits}^y \leq$ 80).}\label{fig:budget_zcomp}
\end{center}
\end{figure*}
\par
On the basis of the above, it is possible to sketch the outline of a realistic large-scale high-$z$ DD survey: (a) a low cadence of observation (one day), (b) a rolling strategy (with a minimum of two seasons of observation per field), and (c) two sets of fields, ultra-deep (\cosmos,\xmm, with \zcomp$\gtrsim$0.8) and deep (\adfs,\cdfs,\elais, with \zcomp$\gtrsim$0.5) fields. The budget and expected number of \sne~for a set of scenarios are given in Fig. \ref{fig:budget_zcomp}. A budget of 8-9$\%$ is required to observe the ultra-deep fields up to $z\sim0.9$ and the deep fields up to $z\sim0.7$, and to collect 1100 to 2100 \sne. The parameters of the DR strategy (number of fields to observe, redshift completeness of the ultra-deep and deep fields, number of seasons per field, budget) have to be tuned on the basis of studies with the synergistic surveys (PFS, TiDES, Euclid, Roman) using accurate joint simulations.
\par
The sequence of observations (field/night) of the DR survey must fulfill a couple of constraints. LSST observations of \adfs~have to be contemporaneous with \euclid~(years 2 and 3) and with \romanspace~(years 5 and 6). Observing multiple fields per night is not optimal if the number of visits per field is high: it may jeopardize the uniformity of the WFD survey (if the total number of DD visits is too high) and have a negative impact on the regularity of the cadence (if a choice has to be made among the DDFs). The overlap of field observations should therefore be minimized, which means that the DR survey should be deterministic, with an observing sequence defined in advance. An example of the progress of a DR survey is given in Fig. \ref{fig:timelysequence} for a configuration of 5 fields and a depth of 0.9 and 0.7 for the ultra-deep and deep fields, respectively.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\textwidth,height=0.3\textwidth]{timely_sequence_235visits.png}
\caption{Cumulative sum of the number of nights (per field and per season) as a function of the time since survey start (assumed to be late 2023). The following sequence of observations is considered: \cosmos, \adfs~(x2), \xmm, \cosmos, \adfs~(x2), \elais~(x2), \xmm, with a maximum season length of 180 days for the deep fields, a cadence of one day, and aiming at observing only one field per night. The fields are required to be observable (airmass $\leq$ 1.5 and 20\textdegree $\leq$ altitude $\leq$ 86.5\textdegree) for at least 2 hours (235 visits) for the ultra-deep fields and 15' (29 visits) for the deep fields. The overlap, defined as the fraction of nights with more than one field observed during a night, is $\sim7\%$.}\label{fig:timelysequence}
\end{center}
\end{figure}
\begin{comment}
\begin{table}[!htbp]
\caption{Set of scenarios to reach higher \zcomp.}\label{tab:rolling_scenarios}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\hline
Scenario & \zcomp & \nsncomp & budget & \nvisits & fields & seasons & season length\\
& & & & \bg/\br/\bi/\bz/\by & & & [month] \\
\hline
\multirow{5}{*}{\ddfscen{a}} & \multirow{5}{*}{0.8} & \multirow{5}{*}{2270} & \multirow{5}{*}{11.4\per} & & COSMOS & 1,2 & 5.8 \\
& & & & & CDFS & 3,4 & 6.0 \\
& & & & 127 & ELAIS & 7,8 & 6.0 \\
& & & & 2/2/45/58/19 &XMM-LSS & 9,10 & 6.0 \\
& & & & &ADFS & 1,2,5,6 & 6.0 \\
\hline
\multirow{5}{*}{\ddfscen{b}} & \multirow{5}{*}{0.84} & \multirow{5}{*}{2500} & \multirow{5}{*}{15.0\per} & & COSMOS & 1,2 & 5.4\\
& & & & & CDFS & 3,4 & 6.0 \\
& & & & 169 & ELAIS & 7,8 & 6.0 \\
& & & & 2/2/63/85/18 &XMM-LSS & 9,10 & 5.8\\
& & & & &ADFS & 1,2,5,6 & 6.0 \\
\hline
\multirow{3}{*}{\ddfscen{c}} & \multirow{3}{*}{0.9} & \multirow{3}{*}{1860} & \multirow{3}{*}{14.3\per} & & COSMOS & 1,2 & 4.7 \\
& & & & 250 & CDFS & 3,4 & 6.0 \\
& & & & 2/2/98/130/18 &ADFS & 1,2,5,6 & 6.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{comment}
% ----------------------------------------------------------------------
% ----------------------------------------------------------------------
%\section{Lessons from recent simulations}
%\label{sec:simu}
% ----------------------------------------------------------------------
%\section{Discussion}
%\label{sec:discussion}
% ----------------------------------------------------------------------
\section{Conclusions}
\label{sec:conclusion}
In this paper we have presented a set of studies assessing the impact of the LSST Deep-Drilling mini-survey on the size and depth of a sample of well-measured \sne. A comprehensive analysis of recent LSST simulations has shown that (a) higher redshifts could not be reached with the proposed strategies without exceeding a reasonable budget allocation, and (b) the proposed number of visits (per observing night) had to be adjusted. Reaching higher redshift completeness requires increasing the signal-to-noise ratio of the photometric light curves, while taking into account the band-flux distribution ($z$-dependent), the cadence, and the observing conditions. We have proposed a method providing the relationship between the optimized number of visits {\it per band} and the redshift completeness. We used this result to design a set of realistic strategies that would meet the initial requirements: collecting a large sample of well-measured \sne~up to higher redshift while respecting the survey design constraints (cadence, budget, number of fields, season length). Synergistic datasets have been taken into account, leading to a Deep Rolling strategy with the following characteristics: (a) a low cadence (one day), (b) a rolling strategy (at least two seasons of observation per field), and (c) two sets of fields: ultra-deep (\cosmos, \xmm - \zcomp~$\gtrsim$~0.8) and deep (\adfs, \cdfs, \elais - \zcomp~$\gtrsim$~0.5).
\par
The minimal scenario proposed in this paper consists of three fields, two ultra-deep (\cosmos, \xmm, for two seasons) and one deep (\adfs, for four seasons, in synergy with \euclid~and \romanspace). The redshift completeness will depend on the allotted DD budget, which is conditioned on the WFD survey time required to meet the SRD requirements. It is not clear whether this information will be known prior to the start of the survey. This is why the proposed scenario is flexible and may be adapted according to (potentially changing) circumstances. The DD budget will provide guidance for the depth of the \sne~sample once the number of fields to observe is chosen. Accurate joint studies between LSST and external facilities providing contemporaneous observations (\euclid, \romanspace, Subaru, 4MOST) have to be performed to optimize the depth and size of the \sne~sample collected with the DR survey.
% ----------------------------------------------------------------------
\subsection*{Acknowledgments}
%%% Here is where you should add your specific acknowledgments, remembering that some standard thanks will be added via the \code{desc-tex/ack/*.tex} and \code{contributions.tex} files.
%This paper has undergone internal review in the LSST Dark Energy Science Collaboration. % REQUIRED if true
\input{contributions} % Standard papers only: author contribution statements. For examples, see http://blogs.nature.com/nautilus/2007/11/post_12.html
% This work used TBD kindly provided by Not-A-DESC Member and benefitted from comments by Another Non-DESC person.
% Standard papers only: A.B.C. acknowledges support from grant 1234 from ...
\input{desc-tex/ack/standard} % also available: key standard_short
% This work used some telescope which is operated/funded by some agency or consortium or foundation ...
% We acknowledge the use of An-External-Tool-like-NED-or-ADS.
%{\it Facilities:} \facility{LSST}
\vspace*{7cm}
\appendix
\input{appendix}
% Include both collaboration papers and external citations:
\bibliography{main,lsstdesc}
\end{document}
% ======================================================================
| {
"alphanum_fraction": 0.7316833672,
"avg_line_length": 95.7457264957,
"ext": "tex",
"hexsha": "54f78e7ddc1f34b077179cb1ab4b6564b6af95db",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "c4371afb29410aef1727695e143a85882e7e43c2",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "pgris/optiDD",
"max_forks_repo_path": "main.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "c4371afb29410aef1727695e143a85882e7e43c2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "pgris/optiDD",
"max_issues_repo_path": "main.tex",
"max_line_length": 1393,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "c4371afb29410aef1727695e143a85882e7e43c2",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "pgris/optiDD",
"max_stars_repo_path": "main.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 12504,
"size": 44809
} |
\section{The Trainers}
\newlength{\trainerIconWidth}
\setlength{\trainerIconWidth}{2.0cm}
\begin{center}
\begin{longtable}{>{\centering\arraybackslash} m{1.1\trainerIconWidth} m{1\textwidth}}
\includegraphics[width=\trainerIconWidth]{photos/Chen.jpg} &
\textbf{Dr. Zhiliang Chen}\newline
Postdoctoral Research Associate\newline
The University of New South Wales (UNSW), NSW\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Corley.jpg} &
\textbf{Dr. Susan Corley}\newline
Postdoctoral Research Associate\newline
The University of New South Wales (UNSW), NSW\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Deshpande.jpg} &
\textbf{Dr. Nandan Deshpande}\newline
Postdoctoral Research Associate\newline
The University of New South Wales (UNSW), NSW\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Duesing.jpg} &
\textbf{Dr. Konsta Duesing}\newline
Research Team Leader - Statistics \& Bioinformatics\newline
CSIRO Animal, Food and Health Science, NSW\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Field.jpg} &
\textbf{Dr. Matthew Field}\newline
Computational Biologist\newline
The John Curtin School of Medical Research ANU College of Medicine, Biology \& Environment, ACT\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Li.jpg} &
\textbf{Dr. Xi (Sean) Li}\newline
Bioinformatics Analyst\newline
Bioinformatics Core, CSIRO Mathematics, Informatics and Statistics, ACT\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/McGrath.jpg} &
\textbf{Dr. Annette McGrath}\newline
Bioinformatics Core Leader at CSIRO\newline
Bioinformatics Core, CSIRO Mathematics, Informatics and Statistics, ACT\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/McWilliam.jpg} &
\textbf{Mr. Sean McWilliam}\newline
Bioinformatics Analyst\newline
CSIRO Animal, Food and Health Sciences, QLD\newline
\mailto{[email protected]}\\
\pagebreak
\includegraphics[width=\trainerIconWidth]{photos/Moolhuijzen.jpg} &
\textbf{Dr. Paula Moolhuijzen}\newline
Bioinformatics Analyst\newline
Centre for Crop Disease Management, Curtin University, WA\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/Tyagi.jpg} &
\textbf{Dr. Sonika Tyagi}\newline
Bioinformatics Supervisor\newline
Australian Genome Research Facility Ltd, The Walter and Eliza Hall Institute, VIC\newline
\mailto{[email protected]}\\
\includegraphics[width=\trainerIconWidth]{photos/watson-haigh.jpg} &
\textbf{Dr. Nathan S. Watson-Haigh}\newline
Research Fellow in Bioinformatics\newline
The Australian Centre for Plant Functional Genomics (ACPFG), SA\newline
\mailto{[email protected]}\\
\end{longtable}
\end{center}
| {
"alphanum_fraction": 0.7444905781,
"avg_line_length": 40.141025641,
"ext": "tex",
"hexsha": "6b2477cda02be04ee01782815a9cb63e9f50780d",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "9d453bb371f2529730bb567fdf430883e3e59438",
"max_forks_repo_licenses": [
"CC-BY-3.0"
],
"max_forks_repo_name": "jrevote/ngs-workshop_embl-2015-06",
"max_forks_repo_path": "010_trainers/trainers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "9d453bb371f2529730bb567fdf430883e3e59438",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-3.0"
],
"max_issues_repo_name": "jrevote/ngs-workshop_embl-2015-06",
"max_issues_repo_path": "010_trainers/trainers.tex",
"max_line_length": 107,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "9d453bb371f2529730bb567fdf430883e3e59438",
"max_stars_repo_licenses": [
"CC-BY-3.0"
],
"max_stars_repo_name": "jrevote/ngs-workshop_embl-2015-06",
"max_stars_repo_path": "010_trainers/trainers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 900,
"size": 3131
} |
The Resource Manager/Monitor and Control Interface is intended to access both low-level and abstracted information from the monitor and control system (if available), much like the Resource Manager/Operating System Interface (section \ref{sec:RMOS}).
The resource manager is in a somewhat unique position in that the functionality it provides can vary widely with the specific implementation.
The resource manager role includes functionality such as batch schedulers and allocators as well as potential portions of tightly integrated runtime and launch systems.
The resource manager may require fairly low-level measurement information to make decisions, and may also store historic information for consumption by other roles (the user role, for example).
In contrast to the Resource Manager/Operating System Interface (section \ref{sec:RMOS}) this interface includes the capability to mine information from the Monitor and Control system in situations where the Resource Manager does not retain historic data itself.
The resource manager may also play a very large role in controlling power- and energy-pertinent functionality on both an application and a platform basis, in response to facility restrictions (power capping or energy-aware scheduling, for example).
\subsection{Supported Attributes}\label{sec:RMMCAttributes}
A significant amount of functionality for this interface is exposed through the attribute functions (section \ref{sec:Attributes}).
The attribute functions in conjunction with the following attributes (Table \ref{table:RMMC}) expose numerous measurement (get) and control (set) capabilities to the resource manager.
\begin{attributetable}{Resource Manager, Monitor and Control - Supported Attributes }{table:RMMC}
\aPstateDesc
\aCstateDesc
\aCstateLimitDesc
\aSstateDesc
\aPowerDesc
\aMinPowerDesc
\aMaxPowerDesc
\aFreqDesc
\aFreqLimitMinDesc
\aFreqLimitMaxDesc
\aEnergyDesc
\aTempDesc
\end{attributetable}
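As a purely illustrative example of the measurement (get) and control (set) semantics implied by Table \ref{table:RMMC}, the following Python sketch uses invented object, method and attribute names; it is not part of this specification and does not reflect the actual (C) Power API function names.
\begin{verbatim}
class FakeNodeObject:
    """Invented stand-in for a node-level object in the system hierarchy."""
    def __init__(self):
        self._attrs = {"POWER": 312.5, "POWER_LIMIT_MAX": 400.0}

    def attr_get(self, name):          # measurement path ("get")
        return self._attrs[name]

    def attr_set(self, name, value):   # control path ("set")
        self._attrs[name] = value

node = FakeNodeObject()
print(node.attr_get("POWER"))          # read the current power draw (W)
node.attr_set("POWER_LIMIT_MAX", 350)  # apply a facility-driven power cap
\end{verbatim}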
\subsection{Supported Core (Common) Functions}\label{sec:RMMCSupportedCommon}
\begin{itemize}[noitemsep,nolistsep]
\item{Hierarchy Navigation Functions - section \ref{sec:Navigation}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Group Functions - section \ref{sec:Group}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Attribute Functions - section \ref{sec:Attributes}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Metadata Functions - section \ref{sec:METADATA}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\item{Statistics Functions - section \ref{sec:StatisticsFunctions}}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\end{itemize}
%==============================================================================%
\subsection{Supported High-Level (Common) Functions}\label{sec:RMMCHighLevel}
\begin{itemize}[noitemsep,nolistsep]
\item{Report Functions} - section \ref{sec:ReportFunctions}
\begin{itemize}[noitemsep,nolistsep]
\item{ALL}
\end{itemize}
\end{itemize}
%==============================================================================%
\subsection{Interface Specific Functions}\label{sec:RMMCFunctions}
| {
"alphanum_fraction": 0.7545227698,
"avg_line_length": 45.8,
"ext": "tex",
"hexsha": "cc3432ee8b13e93537653261ac98c55d5fab820e",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2019-05-24T13:46:52.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-04-18T16:06:43.000Z",
"max_forks_repo_head_hexsha": "e3b74b0c62fa7e6104b8b18c4334e71afb745802",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "regrant/powerapi_spec-1",
"max_forks_repo_path": "RMMC.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "ddcc18ba6d2a9669f1c30f86f438ddecd5f444de",
"max_issues_repo_issues_event_max_datetime": "2020-09-18T15:02:08.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-03-09T17:13:36.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "pwrapi/powerapi_spec",
"max_issues_repo_path": "RMMC.tex",
"max_line_length": 261,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "e3b74b0c62fa7e6104b8b18c4334e71afb745802",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "regrant/powerapi_spec-1",
"max_stars_repo_path": "RMMC.tex",
"max_stars_repo_stars_event_max_datetime": "2018-06-07T17:19:34.000Z",
"max_stars_repo_stars_event_min_datetime": "2018-03-09T17:10:47.000Z",
"num_tokens": 774,
"size": 3206
} |
%!TEX root = ../thesis.tex
\chapter{Discussion}
\label{ch:discussion}
| {
"alphanum_fraction": 0.6944444444,
"avg_line_length": 12,
"ext": "tex",
"hexsha": "9cc0fe40049b214e83a915e783c0fc8a2dac0fd1",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "3704f81642037e938049f7e2a35173b124ce5e14",
"max_forks_repo_licenses": [
"LPPL-1.3c"
],
"max_forks_repo_name": "uis-no/uis-thesis",
"max_forks_repo_path": "chapters/discussion.tex",
"max_issues_count": 10,
"max_issues_repo_head_hexsha": "3704f81642037e938049f7e2a35173b124ce5e14",
"max_issues_repo_issues_event_max_datetime": "2022-01-19T11:55:33.000Z",
"max_issues_repo_issues_event_min_datetime": "2022-01-05T17:34:31.000Z",
"max_issues_repo_licenses": [
"LPPL-1.3c"
],
"max_issues_repo_name": "uis-no/uis-thesis",
"max_issues_repo_path": "chapters/discussion.tex",
"max_line_length": 26,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "3704f81642037e938049f7e2a35173b124ce5e14",
"max_stars_repo_licenses": [
"LPPL-1.3c"
],
"max_stars_repo_name": "uis-no/uis-thesis",
"max_stars_repo_path": "chapters/discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-08T13:40:24.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-12-21T20:19:15.000Z",
"num_tokens": 20,
"size": 72
} |
\documentclass[12pt]{report}
\setlength{\textwidth}{6.5 in}
\setlength{\evensidemargin}{0 in}
\setlength{\oddsidemargin}{0 in}
\setlength{\textheight}{9.4 in }
\setlength{\topmargin}{-0.7 in}
\pagestyle{myheadings}
\usepackage[pdftex]{graphicx} \usepackage{eso-pic}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{placeins}
\usepackage{ifthen}
\usepackage{tikz}
\usepackage{pgfplots}
\usepackage{array} % for table column M
\usepackage{makecell} % to break line within a cell
\usepackage{verbatim}
\usepackage{epstopdf}
\usepackage{amsfonts}
\usepackage{xcolor}
\usepackage{subcaption}
\usepackage{pdfpages}
\usepackage{hyperref}
%\captionsetup{compatibility=false}
%\usepackage{dsfont}
\usepackage[absolute,overlay]{textpos}
\usetikzlibrary{calc, angles,quotes}
\usetikzlibrary{pgfplots.fillbetween, backgrounds}
\usetikzlibrary{positioning}
\usetikzlibrary{arrows}
\usetikzlibrary{pgfplots.groupplots}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{plotmarks}
\usetikzlibrary{decorations.markings}
\DeclareGraphicsExtensions{.pdf,.eps,.png}
\input{preamble.tex}
\markboth{\em Practice final}{\em Practice final}
\begin{document}
\thispagestyle{empty}
\begin{centering}
{\large Stanford University}\\
{\large EE 264: Digital Signal Processing}\\
{\large Summer, 2017} \\
\mbox{}\\
{\large Practice final exam}\\
\mbox{}\\
\end{centering}
\noindent \rule{6.5 in}{0.5pt}
%\mbox{}\\
This is a sample question similar to the ones you will find in the final exam. I will go over this question during the review on the last lecture. The solutions will be available on Canvas after the class.
\noindent This is not an assignment, so you do not need to turn in your solutions. However, it's recommended that you read and attempt all questions before the class.
\noindent
\rule{6.5 in}{0.5pt}
\section*{Inverse control}
In many control problems, we wish to make a given plant track an input command. One way to achieve this is using an \textit{inverse control} system as illustrated in the figure below.
\FloatBarrier
\begin{figure}[h!]
\centering
\resizebox{0.7\textwidth}{!}{\input{figs/inverse_control.tex}}
\caption{Inverse control block diagram.}
\end{figure}
\FloatBarrier
In this system, the controller is \textit{approximately} the inverse of the plant. As a result, if an input command $s[n]$ were applied to the controller, the output of the plant would follow that command and produce $y[n] \approx s[n]$.
For this problem, we will consider the particular case when the plant is \underline{non-minimum phase}. Thus, $H^{-1}(z)$ is unstable. In addition, we will assume that the plant transfer function is
\begin{equation}
H(z) = \frac{(1 - 1.5e^{j3\pi/4}z^{-1})(1 - 1.5e^{-j3\pi/4}z^{-1})}{(1 - 0.75e^{j\pi/3}z^{-1})(1 - 0.75e^{-j\pi/3}z^{-1})}.
\end{equation}
However, in some parts of the problem, we will pretend that the plant is unknown and we need to estimate it.
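(For reference only, not part of the problem statement: the plant coefficients can be generated numerically from the zeros and poles above, for example in Python/numpy as sketched below; the zero radius of 1.5 confirms that $H(z)$ is non-minimum phase.)
\begin{verbatim}
import numpy as np

zeros = 1.5 * np.exp(1j * np.array([3 * np.pi / 4, -3 * np.pi / 4]))
poles = 0.75 * np.exp(1j * np.array([np.pi / 3, -np.pi / 3]))

b = np.real(np.poly(zeros))   # numerator coefficients in powers of z^{-1}
a = np.real(np.poly(poles))   # denominator coefficients in powers of z^{-1}

print(b)              # approx. [1, 2.121, 2.25]
print(a)              # approx. [1, -0.75, 0.5625]
print(np.abs(zeros))  # [1.5, 1.5] -> zeros outside the unit circle
\end{verbatim}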
\subsection*{Part 1: Preliminaries}
\begin{description}
\item [(a)] Factor $H(z)$ as a product of a minimum phase and an all-pass system: $H(z) = H_{min}(z)H_{ap}(z)$. Plot the pole-zero diagram and the magnitude and phase responses of the minimum-phase and of the all-pass system. By convention, assume that the all-pass system has \underline{unit gain}. Hence, the gain of $H_{min}(z)$ must match $H(z)$.
\item [(b)] Suppose that $x[n]$ were a zero-mean white noise with unit average power; give an expression for the PSD of the output of the plant, $y[n]$. Your answer can be in terms of $H(e^{j\omega})$ or $H_{min}(e^{j\omega})$.
\end{description}
\subsection*{Part 2: Plant identification}
In this part we will assume that the plant is unknown. Hence, we must find ways to identify it i.e., to estimate $H(e^{j\omega})$. We will first identify the plant using the technique covered in HW4. Then, we will estimate $H(e^{j\omega})$ based on cross-correlation estimation.
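(As a quick sanity check, not part of the problem set, the sketch below verifies numerically, with an arbitrary scipy test filter, the identity you will derive in part (c): for zero-mean, unit-power white noise, $\phi_{xx}[m]\approx\delta[m]$ and therefore $\phi_{yx}[m]\approx h[m]$.)
\begin{verbatim}
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)      # zero-mean, unit-power white noise
h = signal.firwin(31, 0.3)            # arbitrary known test system
y = signal.lfilter(h, 1.0, x)

M = 31
phi_yx = np.array([np.mean(y[m:] * x[:len(x) - m]) for m in range(M)])
print(np.max(np.abs(phi_yx - h)))     # close to zero: phi_yx[m] ~ h[m]
\end{verbatim}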
\begin{description}
\item [(a)] Assuming that $x[n]$ is a \underline{zero-mean} \underline{white noise} with \underline{unit average power}, use any of the PSD estimation techniques covered in class to estimate the PSD $\Phi_{yy}(e^{j\omega})$ of the plant output $y[n]$. Be sure to indicate all relevant parameters you used such as window type, window length, number of samples, etc. On the same graph, plot the theoretical PSD and your PSD estimate. Your plots should be in dB.
\item [(b)] Use the Kramers-Kronig relation to obtain the estimated plant phase response $\angle \hat{H}(e^{j\omega})$ from its log-magnitude, as done in HW4. On the same graph, plot the \underline{unwrapped} phase of your estimate and the true \underline{unwrapped} phase of the plant. Explain any discrepancies.
\textit{Note:} Unlike in HW4, be sure to account for the periodicity of the DTFT when computing the convolution integral.
\item [(c)] Now let's consider a different plant identification technique. Show that the cross-correlation between the output $y[n]$ and input $x[n]$ of a system $h[n]$ is given by
\begin{equation}
\phi_{yx}[m] = h[m]\ast \phi_{xx}[m]
\end{equation}
\item [(d)] Explain how you could use the result from part (c) to obtain an estimate for $H(e^{j\omega})$.
\item [(e)] Modify the Blackman-Tukey PSD estimation technique to estimate $\Phi_{yx}(e^{j\omega}) = \mathcal{F}\{\phi_{yx}[m]\}$.
\textit{Note:} The FFT indexing assumes that $\phi_{yx}[m]$ is indexed from $m = 0$ to $m = 2M-2$, where $M-1$ is the maximum lag in the cross-correlation estimation. However, we know that the cross-correlation is indexed from $m = -M+1$ to $m = M-1$. Thus, computing $\mathrm{DFT}\{\phi_{yx}[m]\}$ is actually equivalent to $\mathcal{F}\{\phi_{yx}[m-M]\}$. Hence, the $\mathrm{DFT}\{\phi_{yx}[m]\}$ must be phase shifted to produce the correct $\Phi_{yx}(e^{j\omega})$. Alternatively, you may use the command \texttt{ifftshift} before computing the DFT, so that $\phi_{yx}[m]$ is indexed correctly.
\textit{Hint:} unlike the autocorrelation function, the cross-correlation function is not necessarily even symmetric.
\item [(f)] Implement your method and obtain and estimate for $H(e^{j\omega})$. On the same graph, plot the magnitude of your estimate and the true magnitude of the plant. On a different graph, plot the \underline{unwrapped} phase of your estimate and the true \underline{unwrapped} phase of the plant. Explain any discrepancies.
\end{description}
\subsection*{Part 3: Controller design}
Now we would like to design a controller for the plant $H(z)$. In inverse control, the controller should be such that $C(z) \approx H^{-1}(z)$. However, $H^{-1}(z)$ is unstable. Hence, we will design $C(z)$ such that $C(z) = H^{-1}_{min}(z)$.
For the following questions, you may use the theoretical values of $H_{min}(e^{j\omega})$, as opposed to those you estimated in the previous part.
\begin{description}
\item [(a)] Design a \underline{linear-phase} FIR system such that $|C(e^{j\omega})| \approx |H^{-1}_{min}(e^{j\omega})|$. Your filter should have \underline{at most} 21 coefficients. Specify all the choices you made. That is, what is $H_d(e^{j\omega})$, the weight function $W(\omega)$, and the algorithm you chose.
\item [(b)] On the same graph, plot the magnitude of $C(e^{j\omega})$ and $H^{-1}_{min}(e^{j\omega})$. On a different graph, plot the phase of $C(e^{j\omega})$ and $H^{-1}_{min}(e^{j\omega})$. Discuss whether this filter would be a good inverse controller for the plant.
\item [(c)] Now use the least-squares method to design an FIR system such that $C(e^{j\omega}) \approx H^{-1}_{min}(e^{j\omega})$. Note that for this part, your filter \underline{does not} have to be linear phase, but it has to have \underline{real coefficients}. Additionally, your filter should have \underline{at most} 21 coefficients. Specify all the choices you make.
\textit{Note:} In designing this filter, make sure that the vector $d$ is Hermitian symmetric, so that you obtain a filter with real coefficients.
\item [(d)] On the same graph, plot the magnitude of $C(e^{j\omega})$ and $H^{-1}_{min}(e^{j\omega})$. On a different graph, plot the phase of $C(e^{j\omega})$ and $H^{-1}_{min}(e^{j\omega})$. Discuss whether this filter would be a good inverse controller for the plant.
\end{description}
\subsection*{Part 4: Testing}
Test how well your controller designed in Part 3(c) controls the plant. Plot the plant output $y[n]$ and the controller input $s[n]$, when $s[n]$ is each of the following signals.
\begin{description}
\item[(a)] Unit step. Plot for $n = 0, \ldots, 40$.
\item[(b)] Sinusoid of frequency $0.1\pi$. Plot for $n = 0, \ldots, 100$.
\item[(c)] Sinusoid of frequency $0.9\pi$. Plot for $n = 0, \ldots, 100$.
\end{description}
For all cases, plot $s[n]$ and $y[n]$ on the same graph, but use a different graph for each item.
For the sinusoidal inputs, you should see that, although we do not have $y[n] = s[n]$, we have $y[n] \approx s[n-\Delta]$, where $\Delta$ is the delay introduced by the all-pass filter, which was not included in the design of the controller. You can estimate $\Delta$ by measuring the group delay of the plant and controller at the sinusoid's frequency.
\end{document}
| {
"alphanum_fraction": 0.7265820037,
"avg_line_length": 64.8802816901,
"ext": "tex",
"hexsha": "9d76a9808cd832fb674685dc9deb97ca093ac46b",
"lang": "TeX",
"max_forks_count": 10,
"max_forks_repo_forks_event_max_datetime": "2022-03-19T07:25:20.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-04-16T01:11:14.000Z",
"max_forks_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jkperin/DSP",
"max_forks_repo_path": "homework/practice_final.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jkperin/DSP",
"max_issues_repo_path": "homework/practice_final.tex",
"max_line_length": 600,
"max_stars_count": 21,
"max_stars_repo_head_hexsha": "0ec74b4597fb54800ebdab440cba4892d210343d",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jkperin/DSP",
"max_stars_repo_path": "homework/practice_final.tex",
"max_stars_repo_stars_event_max_datetime": "2021-12-07T08:56:28.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-05-11T21:48:47.000Z",
"num_tokens": 2690,
"size": 9213
} |
% Author: Jan Schaumann <[email protected]>
\special{! TeXDict begin /landplus90{true}store end }
\documentclass[xga]{xdvislides}
\usepackage[landscape]{geometry}
\usepackage{graphics}
\usepackage{graphicx}
\usepackage{colordvi}
\usepackage{multirow}
\usepackage{fancyvrb}
\fvset{commandchars=\\\{\}}
\usepackage[usenames]{color}
\usepackage[dvipsnames]{xcolor}
\definecolor{gray}{RGB}{180,180,180}
\begin{document}
\setfontphv
%%% Headers and footers
\lhead{\slidetitle} % default:\lhead{\slidetitle}
\chead{CS615 - Aspects of System Administration}% default:\chead{\relax}
\rhead{Slide \thepage} % default:\rhead{\sectiontitle}
\lfoot{\Gray{Networking I}}% default:\lfoot{\slideauthor}
\cfoot{\relax} % default:\cfoot{\relax}
\rfoot{\Gray{\today}}
\vspace*{\fill}
\begin{center}
\Hugesize
CS615 - Aspects of System Administration\\ [1em]
Networking I\\ [1em]
\hspace*{5mm}\blueline\\ [1em]
\Normalsize
Department of Computer Science\\
Stevens Institute of Technology\\
Jan Schaumann\\
\[email protected]+\\
\verb+https://stevens.netmeister.org/615/+
\end{center}
\vspace*{\fill}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/osi-stack.eps}
\end{center}
\vspace*{\fill}
\subsection{Team Missions}
\begin{itemize}
\item \textcolor{red}{https://www.us-cert.gov/ics/advisories/icsa-19-274-01}
\item \textcolor{green}{https://is.gd/soixLV}
\item https://is.gd/vSuYvF
\item \textcolor{blue}{https://is.gd/qkXhe2}
\end{itemize}
\subsection{TCP}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/tcp-packet.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/osi-stack.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/osi-stack.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/osi-stack2.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking I}
\begin{verbatim}
$ sudo tcpdump -w /tmp/out port 80 &
$ curl -s -I http://www.cs.stevens.edu/ >/dev/null
$ fg
^C
$ sudo tcpdump -r /tmp/out -n -XX -c 1
15:23:43.477095 IP 172.16.1.30.51525 > 155.246.56.11.80: Flags [S], seq 1016422373,
win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 86305166 ecr 0,sackOK,eol], length 0
0x0000: c4b3 01db afe2 8c85 9047 b4f6 0800 4500 .........G....E.
0x0010: 0040 0000 4000 4006 b988 ac10 011e 9bf6 .@..@.@.........
0x0020: 380b c945 0050 3c95 5fe5 0000 0000 b002 8..E.P<._.......
0x0030: ffff 6109 0000 0204 05b4 0103 0306 0101 ..a.............
0x0040: 080a 0524 e98e 0000 0000 0402 0000 ...$..........
\end{verbatim}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/tcp-encapsulation.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking I}
Layers:
\begin{Verbatim}
0x0000: \textcolor{red}{c4b3 01db afe2 8c85 9047 b4f6 0800} \textcolor{orange}{4500} .........G....E.
0x0010: \textcolor{orange}{0040 0000 4000 4006 b988 ac10 011e 9bf6} .@..@.@.........
0x0020: \textcolor{orange}{380b} \textcolor{cyan}{c945 0050 3c95 5fe5 0000 0000 b002} 8..E.P<._.......
0x0030: \textcolor{cyan}{ffff 6109 0000 0204 05b4 0103 0306 0101} ..a.............
0x0040: \textcolor{cyan}{080a 0524 e98e 0000 0000 0402 0000} ...$..........
\end{Verbatim}
\vspace{.25in}
\textcolor{red}{Link layer information}; here: Ethernet \\
\textcolor{orange}{Network layer information}; here: IP \\
\textcolor{cyan}{Transport layer information}; here: TCP
\subsection{Networking I}
OSI Layer 2 / TCP/IP Layer 1: Ethernet information: \\
\begin{Verbatim}
0x0000: \textcolor{blue}{c4b3 01db afe2} \textcolor{red}{8c85 9047 b4f6} \textcolor{green}{0800} \textcolor{orange}{4500} .........G....E.
0x0010: \textcolor{orange}{0040 0000 4000 4006 b988 ac10 011e 9bf6} .@..@.@.........
0x0020: \textcolor{orange}{380b} \textcolor{cyan}{c945 0050 3c95 5fe5 0000 0000 b002} 8..E.P<._.......
0x0030: \textcolor{cyan}{ffff 6109 0000 0204 05b4 0103 0306 0101} ..a.............
0x0040: \textcolor{cyan}{080a 0524 e98e 0000 0000 0402 0000} ...$..........
\end{Verbatim}
\vspace{.25in}
Destination address: \textcolor{blue}{c4:b3:01:db:af:e2} \\
Source address: \textcolor{red}{8c:85:90:47:b4:f6} \\
Type: IP (\textcolor{green}{0800}) \\
\textcolor{orange}{IPv4 stuff} \textcolor{cyan}{TCP stuff}
\vspace{.25in}
\begin{Verbatim}
$ ifconfig en0 | grep ether
ether \textcolor{red}{8c:85:90:47:b4:f6}
\end{Verbatim}
\subsection{Networking I}
OSI Layer 3 / TCP/IP Layer 2: Internet Protocol: \\
\begin{Verbatim}
0x0000: c4b3 01db afe2 8c85 9047 b4f6 0800 \textcolor{gray}{45}\textcolor{green}{00} .........G....E.
0x0010: \textcolor{cyan}{0040} \textcolor{yellow}{0000} \textcolor{olive}{40}00 \textcolor{orange}{40}06 b988 ac10 011e 9bf6 .@..@.@.........
0x0020: 380b c945 0050 3c95 5fe5 0000 0000 b002 8..E.P<._.......
0x0030: ffff 6109 0000 0204 05b4 0103 0306 0101 ..a.............
0x0040: 080a 0524 e98e 0000 0000 0402 0000 ...$..........
\end{Verbatim}
\vspace{.5in}
Version 4 (0100) + Header Length 5 (0101), i.e. $5 \times 4 = 20$ bytes, together 01000101 = \textcolor{gray}{45} \\
DSCP default (000000) + Not-ECN (00) = \textcolor{green}{00} \\
Total length = \textcolor{cyan}{0040} = 64\\
Identification = \textcolor{yellow}{0000} \\
Flags = Don't Fragment (010) + Frag Offset (00000) = \textcolor{olive}{40}00 \\
TTL= \textcolor{orange}{40} = 64 \\
\subsection{Networking I}
OSI Layer 3 / TCP/IP Layer 2: Internet Protocol: \\
\begin{Verbatim}
0x0000: c4b3 01db afe2 8c85 9047 b4f6 0800 4500 .........G....E.
0x0010: 0040 0000 4000 40\textcolor{purple}{06} b988 \textcolor{red}{ac10 011e} \textcolor{blue}{9bf6} .@..@.@.........
0x0020: \textcolor{blue}{380b} \textcolor{green}{c945 0050 3c95 5fe5 0000 0000 b002} 8..E.P<._.......
0x0030: \textcolor{green}{ffff 6109 0000 0204 05b4 0103 0306 0101} ..a.............
0x0040: \textcolor{green}{080a 0524 e98e 0000 0000 0402 0000} ...$..........
\end{Verbatim}
Protocol: TCP (6) \textcolor{purple}{06} \\
Header Checksum: 0xb988 \\
Source Address: 172.16.1.30 (\textcolor{red}{ac10 011e}) \\
Destination Address: 155.246.56.11 (\textcolor{blue}{9bf6 380b}) \\
TCP Stuff: \textcolor{green}{c945 ... 0000}
\vspace{.25in}
\begin{Verbatim}
$ ifconfig en0 | grep "inet "
inet \textcolor{red}{172.16.1.30} netmask 0xffffff00 broadcast 172.16.1.255
$
\end{Verbatim}
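The same fields can be pulled out of the raw bytes with a few lines of Python (standard library only):
\begin{verbatim}
import struct, socket

frame = bytes.fromhex(
    "c4b301dbafe28c859047b4f60800"                 # 14-byte Ethernet header
    "45000040000040004006b988ac10011e9bf6380b")    # 20-byte IPv4 header

ip = frame[14:34]
ver_ihl, ttl, proto = ip[0], ip[8], ip[9]
src = socket.inet_ntoa(ip[12:16])
dst = socket.inet_ntoa(ip[16:20])

print(ver_ihl >> 4, (ver_ihl & 0xF) * 4)       # 4 20  (version, header bytes)
print(struct.unpack("!H", ip[2:4])[0])         # 64    (total length)
print(ttl, proto)                              # 64 6  (TTL, protocol 6 = TCP)
print(src, dst)                                # 172.16.1.30 155.246.56.11
\end{verbatim}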
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011111101100011100000001011| \\
\vspace{.5in}
IPv4 addresses are 32-bit numbers.
\end{center}
\Normalsize
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011 11110110 00111000 00001011| \\
\vspace{.5in}
Each IPv4 address consists of four octets.
\end{center}
\Normalsize
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011 11110110 00111000 00001011| \\
\verb| 155 . 246 . 56 . 11| \\
\begin{Verbatim}
\textcolor{blue}{9B F6 38 0B}
\end{Verbatim}
\vspace{.5in}
Each IPv4 address consists of four octets.
\end{center}
\Normalsize
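The same conversion in Python:
\begin{verbatim}
addr = "155.246.56.11"
octets = [int(o) for o in addr.split(".")]
print(" ".join(f"{o:08b}" for o in octets))  # 10011011 11110110 00111000 00001011
print(" ".join(f"{o:02X}" for o in octets))  # 9B F6 38 0B
\end{verbatim}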
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011 11110110 00111000 00001011| \\
\vspace{.5in}
IPv4 addresses are divided into a {\em network part} and a {\em host part}. \\
\vspace{.25in}
Hosts on the same network ({\em broadcast domain}) can talk to each other
without the help of a router.
\end{center}
\Normalsize
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\begin{Verbatim}
\textcolor{red}{10}011011 11110110 00111000 00001011
\end{Verbatim}
\vspace{.5in}
There are three different {\em classes} of IPv4 networks.
\end{center}
\Normalsize
\subsection{IPv4 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\begin{Verbatim}
\textcolor{red}{10}\textcolor{gray}{01}1011 11110110 00111000 00001011
\end{Verbatim}
\vspace{.5in}
There are three different {\em classes} of IPv4 networks. \\
Well, five, really.
\end{center}
\Normalsize
\subsection{IPv4 Basics}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.8]{pics/classfull.eps}
\end{center}
\vspace*{\fill}
\subsection{Subnets}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011 11110110 00111000 00001011| \\
\verb|11111111 11111111 00000000 00000000| \\
\vspace{.5in}
A {\em netmask} splits the IPv4 address into {\em network} and {\em host}
parts.
\end{center}
\Normalsize
\subsection{Subnets}
\vspace{.5in}
\Hugesize
\begin{center}
\verb|10011011 11110110 00111000 00001011| \\
\verb|11111111 11111111 11111111 00000000| \\
\vspace{.5in}
A {\em netmask} splits the IPv4 address into {\em network} and {\em host}
parts.
\end{center}
\Normalsize
\subsection{Subnets}
\begin{verbatim}
$ ipcalc -n 155.246.56.11/16
Address: 155.246.56.11 10011011.11110110. 00111000.00001011
Netmask: 255.255.0.0 = 16 11111111.11111111. 00000000.00000000
Wildcard: 0.0.255.255 00000000.00000000. 11111111.11111111
=>
Network: 155.246.0.0/16 10011011.11110110. 00000000.00000000
HostMin: 155.246.0.1 10011011.11110110. 00000000.00000001
HostMax: 155.246.255.254 10011011.11110110. 11111111.11111110
Broadcast: 155.246.255.255 10011011.11110110. 11111111.11111111
Hosts/Net: 65534 Class B
\end{verbatim}
\vspace{.5in}
Try also: \verb+sipcalc -a 155.246.56.11/16+
\subsection{Subnets}
\begin{verbatim}
$ ipcalc -n 155.246.56.11/24
Address: 155.246.56.11 10011011.11110110.00111000. 00001011
Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000
Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111
=>
Network: 155.246.56.0/24 10011011.11110110.00111000. 00000000
HostMin: 155.246.56.1 10011011.11110110.00111000. 00000001
HostMax: 155.246.56.254 10011011.11110110.00111000. 11111110
Broadcast: 155.246.56.255 10011011.11110110.00111000. 11111111
Hosts/Net: 254 Class B
\end{verbatim}
\vspace{.5in}
Try also: \verb+sipcalc -a 155.246.56.11/24+
\subsection{CIDR cheat sheet}
A.B.C.D/N
\begin{itemize}
\item $N$ = bits describing network portion of address
\item $M=32-N$ = bits in host portion of address
\item $2^M$ = number of addresses on this subnet
\item $2^M - 2$ = number of possible hosts
\begin{itemize}
\item first address on subnet = network address
\item last address on subnet = broadcast address
\end{itemize}
\item subnet division need not occur on dotted boundary only
\begin{itemize}
\item for example, you can divide 155.246.89.0/24
into four /26 networks
\item networks starting at .0, .64, .128, .192
\end{itemize}
\end{itemize}
\addvspace{.5in}
Which of the following is not a valid netmask? \\
\verb+255.255.253.0, 255.255.250.0, 255.255.240.0+
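The same arithmetic with Python's {\tt ipaddress} module, plus a contiguity check for the netmask question:
\begin{verbatim}
import ipaddress

net = ipaddress.ip_network("155.246.89.0/26")
print(net.network_address, net.broadcast_address)  # 155.246.89.0 155.246.89.63
print(net.num_addresses, net.num_addresses - 2)    # 64 62

def valid_netmask(mask):
    # valid iff the bits are a contiguous run of 1s followed by 0s
    bits = bin(int(ipaddress.IPv4Address(mask)))[2:].zfill(32)
    return "0" not in bits.rstrip("0")

for m in ("255.255.253.0", "255.255.250.0", "255.255.240.0"):
    print(m, valid_netmask(m))
\end{verbatim}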
\subsection{Mommy, where do IP addresses come from?}
\Huge
\vfill
\begin{center}
The Internet Assigned Numbers Authority (IANA) oversees global IP
address/AS number allocation, root zone management etc.
\\
\vspace{.5in}
\verb+https://www.iana.org/+
\end{center}
\vfill
\Normalsize
\subsection{Mommy, where do IP addresses come from?}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.5]{pics/rirs.eps} \\
\vspace{.5in}
Regional Internet Registries (RIR) manage the allocation and
registration of Internet number resources within a region of the world.
\end{center}
\vspace*{\fill}
See also: \verb+https://www.xkcd.com/195/+
\subsection{Mommy, where do IP addresses come from?}
\vspace*{\fill}
\begin{center}
{\bf RIR}s assign blocks of IP addresses to the Local Internet Registries (LIR).
\\
\vspace{.5in}
LIRs are either ISPs, enterprises using a lot of addresses, or academic
institutions.
\end{center}
\vspace*{\fill}
\subsection{IPv4 Subnets: Common CIDRs}
\begin{verbatim}
10011011 11110110 00111000 00001011
| | |||| | |||||||| /32 Host route
| | |||| | |||||| /30 "Glue network" (Point-to-point)
| | |||| | ||||| /29 Smallest multi-host network
| | |||| | |||| /28 Small LAN
| | |||| | ||| /27 Small LAN
| | |||| | || /26 Small LAN
| | |||| | | /25 Large LAN
| | |||| | /24 Large LAN
| | |||| /20 Small ISP / Large business
| | ||| /19 LIR / ISP / Large business
| | || /18 LIR / ISP / Large business
| | | /17 LIR / ISP / Large business
| | /16 LIR / ISP / Large business
| /8 RIR
\end{verbatim}
\subsection{IPv4 Exhaustion}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.8]{pics/cerf.eps}
\end{center}
\vspace*{\fill}
\subsection{IPv4 Exhaustion}
IPv4 address space depletion: \\
\begin{itemize}
\item private IP space (RFC1918): \verb+10.0.0.0/8+, \verb+172.16.0.0/12+, \verb+192.168.0.0/16+
\item class D (\verb+224.0.0.0/4+) and E (\verb+240.0.0.0/4+)
\item class As (16M addresses each!) initially handed out liberally \\
(ATT, Apple, MIT, Stanford, Xerox, ...)
\item subnetting often inefficient
\item more and more devices added
\end{itemize}
\subsection{IPv4 Exhaustion}
IPv4 address space depletion: \\
Total theoretically available IP addresses: $2^{32}$
\\
RFC1918: {\tt 10.0.0.0/8}, {\tt 172.16.0.0/12}, {\tt 192.168.0.0/16}
\\
RFC5735 etc.: {\tt 0.0.0.0/8}, {\tt 100.64.0.0/10}, {\tt 127.0.0.0/8}, \\
{\tt 169.254.0.0/16}, {\tt 192.0.0.0/24}, {\tt 192.0.2.0/24}, \\
{\tt 192.88.99.0/24}, {\tt 198.18.0.0/15}, {\tt 198.51.100.0/24}, \\
{\tt 203.0.113.0/24}
\\
Class D/E: {\tt 224.0.0.0/4}, {\tt 240.0.0.0/4}
\\
"Limited broadcast": {\tt 255.255.255.255/32}
\\
What is the percent/number of actually available IP addresses?
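A quick back-of-the-envelope answer in Python (treating the listed blocks as non-overlapping):
\begin{verbatim}
reserved = [8, 10, 8, 16, 24, 24, 24, 15, 24, 24,   # RFC5735 etc.
            8, 12, 16,                               # RFC1918
            4, 4,                                    # class D / class E
            32]                                      # limited broadcast
unusable = sum(2 ** (32 - p) for p in reserved)
total = 2 ** 32
print(unusable, total - unusable, round(100.0 * (total - unusable) / total, 1))
\end{verbatim}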
\subsection{IPv4 Exhaustion}
Past and predicted: \\
\begin{tabular}{l r}
IANA Address Pool Exhaustion: & 2011-02-03 \\
APNIC reached final {\tt /8}: & 2011-04-19 \\
RIPENCC reached final {\tt /8}: & 2012-09-14 \\
LACNIC reached final {\tt /8}: & 2014-06-10 \\
ARIN reached final {\tt /8}: & 2015-09-24 \\
AFRINIC (predicted): & 2020-05-17 \\
\end{tabular}
\\
\vspace{.5in}
{\tt https://www.potaroo.net/tools/ipv4/} \\
{\tt https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml}
\subsection{IPv6 Basics}
\vspace{.5in}
\Hugesize
\begin{center}
\begin{verbatim}
10011011111101100011100000001011
\end{verbatim}
\vspace{.5in}
IPv4 addresses are 32-bit numbers.
\end{center}
\Normalsize
\subsection{IPv6 Basics}
\begin{verbatim}
$ sudo tcpdump -w /tmp/out port 80 &
$ curl -s -I http://www.yahoo.com/ >/dev/null
$ fg
^C
$ sudo tcpdump -t -r /tmp/out -n -XX -c 1
reading from PCAP-NG file /tmp/out.pcap
IP6 2001:470:1f07:1d1:a8af:a9d:98ff:e30f.52369 > 2001:4998:58:1836::11.80: Flags [S], seq 1370475612,
win 65535, options [mss 1440,nop,wscale 6,nop,nop,TS val 90153796 ecr 0,sackOK,eol], length 0
0x0000: c4b3 01db afe2 8c85 9047 b4f6 86dd 6000 .........G....`.
0x0010: 6c64 002c 0640 2001 0470 1f07 01d1 a8af ld.,.@...p......
0x0020: 0a9d 98ff e30f 2001 4998 0058 1836 0000 ........I..X.6..
0x0030: 0000 0000 0011 cc91 0050 51af cc5c 0000 .........PQ..\..
0x0040: 0000 b002 ffff aca1 0000 0204 05a0 0103 ................
0x0050: 0306 0101 080a 055f a344 0000 0000 0402 ......._.D......
0x0060: 0000 ..
\end{verbatim}
\subsection{IPv6 Basics}
OSI Layer 3 / TCP/IP Layer 2: Internet Protocol v6:
\begin{Verbatim}
0x0000: \textcolor{blue}{c4b3 01db afe2} \textcolor{red}{8c85 9047 b4f6} \textcolor{green}{86dd} \textcolor{orange}{6000} .........G....`.
0x0010: \textcolor{orange}{6c64 002c 0640 2001 0470 1f07 01d1 a8af} ld.,.@...p......
0x0020: \textcolor{orange}{0a9d 98ff e30f 2001 4998 0058 1836 0000} ........I..X.6..
0x0030: \textcolor{orange}{0000 0000 0011} \textcolor{cyan}{cc91 0050 51af cc5c 0000} .........PQ..\..
0x0040: \textcolor{cyan}{0000 b002 ffff aca1 0000 0204 05a0 0103} ................
0x0050: \textcolor{cyan}{0306 0101 080a 055f a344 0000 0000 0402} ......._.D......
0x0060: \textcolor{cyan}{0000} ..
\end{Verbatim}
\vspace{.25in}
Destination address: \textcolor{blue}{c4:b3:01:db:af:e2} \\
Source address: \textcolor{red}{8c:85:90:47:b4:f6} \\
Type: IPv6 (\textcolor{green}{86dd}) \\
\textcolor{orange}{IPv6 stuff}
\textcolor{cyan}{TCP stuff}
\vspace{.15in}
\begin{Verbatim}
$ ifconfig en0 | grep ether
ether \textcolor{red}{8c:85:90:47:b4:f6}
\end{Verbatim}
\subsection{IPv6 Basics}
OSI Layer 3 / TCP/IP Layer 2: Internet Protocol v6:
\begin{Verbatim}
0x0000: c4b3 01db afe2 8c85 9047 b4f6 86dd \textcolor{orange}{6000} .........G....`.
0x0010: \textcolor{orange}{6c64 002c 06}\textcolor{purple}{40} \textcolor{red}{2001 0470 1f07 01d1 a8af} ld.,.@...p......
0x0020: \textcolor{red}{0a9d 98ff e30f} \textcolor{blue}{2001 4998 0058 1836 0000} ........I..X.6..
0x0030: \textcolor{blue}{0000 0000 0011} \textcolor{cyan}{cc91 0050 51af cc5c 0000} .........PQ..\..
0x0040: \textcolor{cyan}{0000 b002 ffff 751a 0000 0204 05a0 0103} ......u.........
0x0050: \textcolor{cyan}{0306 0101 080a 37c1 3edf 0000 0000 0402} ......7.>.......
0x0060: \textcolor{cyan}{0000 ..}
\end{Verbatim}
\vspace{.15in}
\textcolor{orange}{Version, Traffic Class, Flow Label, Payload Length, Next Header}; Hop Limit (TTL): 64 (\textcolor{purple}{40}) \\
Source address: \textcolor{red}{2001:470:1f07:1d1:a8af:a9d:98ff:e30f} \\
Destination address: \textcolor{blue}{2001:4998:58:1836::11}
\vspace{.15in}
\begin{Verbatim}
$ ifconfig en0 | grep inet6
inet6 fe80::1461:d52b:78a7:334a%en0 prefixlen 64 secured scopeid 0x5
inet6 2001:470:1f07:1d1:cd9:97f3:f16:eb48 prefixlen 64 autoconf secured
inet6 \textcolor{red}{2001:470:1f07:1d1:a8af:a9d:98ff:e30f} prefixlen 64 autoconf temporary
\end{Verbatim}
\subsection{IPv6 Basics}
\Hugesize
\begin{center}
\begin{verbatim}
0010000000000001
0100100110011000
0000000001011000
0001100000110110
0000000000000000
0000000000000000
0000000000000000
0000000000010001
\end{verbatim}
\vspace{.5in}
IPv6 addresses are 128 bits.
\end{center}
\Normalsize
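\subsection{IPv6 Basics}
The eight 16-bit words on the previous slide are (assuming they spell
out the destination address from the earlier capture) just one 128-bit
number written out in binary. A small Python sketch that prints an
address as binary words:
\begin{verbatim}
import ipaddress

addr = ipaddress.IPv6Address("2001:4998:58:1836::11")
value = int(addr)
for i in range(8):
    word = (value >> (16 * (7 - i))) & 0xFFFF
    print(f"{word:016b}")
\end{verbatim}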
\subsection{IPv6 Basics}
\Hugesize
\begin{center}
IPv4: 32 bits $=>$ $2^{32}$ addresses \\
\vspace{.5in}
IPv6: 128 bits $=>$ $2^{128}$ addresses
\end{center}
\Normalsize
\subsection{IPv6 Basics}
\Hugesize
\begin{center}
IPv4: 32 bits $=>$ $4,294,967,296$ addresses \\
\vspace{.5in}
IPv6: 128 bits $=>$ $2^{128}$ addresses
\end{center}
\Normalsize
\subsection{IPv6 Basics}
\Hugesize
\begin{center}
IPv4: 32 bits $=>$ $4,294,967,296$ addresses \\
\vspace{.5in}
IPv6: 128 bits $=>$ $340,282,366,920,938,463,463,374,607,431,768,211,456$ addresses \\
\vspace{.5in}
\end{center}
\Normalsize
\subsection{IPv6 Basics}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.75]{pics/wolfram.eps} \\
\verb+https://is.gd/94ve91+
\end{center}
\vspace*{\fill}
\subsection{IPv6 Basics}
\begin{itemize}
	\item Eight 16-bit fields (words) in case-insensitive,
colon-separated hexadecimal representation
\begin{verbatim}
2031:0000:0000:030F:0000:0000:0000:130B
\end{verbatim}
\end{itemize}
\subsection{IPv6 Basics}
\begin{itemize}
	\item Eight 16-bit fields (words) in case-insensitive,
colon-separated hexadecimal representation
\begin{verbatim}
2031:0000:0000:030F:0000:0000:0000:130B
\end{verbatim}
\item Leading zeros in a field are optional:
\begin{verbatim}
2031:0:0:30F:0:0:0:130B
\end{verbatim}
\end{itemize}
\subsection{IPv6 Basics}
\begin{itemize}
	\item Eight 16-bit fields (words) in case-insensitive,
colon-separated hexadecimal representation
\begin{verbatim}
2031:0000:0000:030F:0000:0000:0000:130B
\end{verbatim}
\item Leading zeros in a field are optional:
\begin{verbatim}
2031:0:0:30F:0:0:0:130B
\end{verbatim}
\item Successive fields of 0 represented as ::, but only once in
an address:
\begin{verbatim}
2031::30F:0:0:0:130B ok
2031:0:0:30F::130B ok
2031::30F::130B not ok
\end{verbatim}
\end{itemize}
\subsection{IPv6 Basics}
\begin{itemize}
	\item Eight 16-bit fields (words) in case-insensitive,
colon-separated hexadecimal representation
\begin{verbatim}
2031:0000:0000:030F:0000:0000:0000:130B
\end{verbatim}
\item Leading zeros in a field are optional:
\begin{verbatim}
2031:0:0:30F:0:0:0:130B
\end{verbatim}
\item Successive fields of 0 represented as ::, but only once in
an address:
\begin{verbatim}
2031::30F:0:0:0:130B ok
2031:0:0:30F::130B ok
2031::30F::130B not ok
\end{verbatim}
\item
\begin{verbatim}
0000:0000:0000:0000:0000:0000:0000:0001 =>
0:0:0:0:0:0:0:1 => ::1
\end{verbatim}
\end{itemize}
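\subsection{IPv6 Basics}
If you want to play with these notation rules, the standard Python
{\tt ipaddress} module applies them for you:
\begin{verbatim}
import ipaddress

addr = ipaddress.ip_address("2031:0000:0000:030F:0000:0000:0000:130B")
print(addr.exploded)      # every field fully written out, lower-case
print(addr.compressed)    # shortest form: one :: for the longest zero run
print(ipaddress.ip_address("::1").exploded)
\end{verbatim}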
\subsection{IPv6 Basics - Address Oddities}
\begin{itemize}
\item Address may include a link name:
\begin{verbatim}
2001:470:1f07:3d1::1%eth0
\end{verbatim}
\end{itemize}
\subsection{IPv6 Basics - Address Oddities}
\begin{itemize}
\item Address may include a link name:
\begin{verbatim}
2001:470:1f07:3d1::1%eth0
\end{verbatim}
\item IPv4-mapped addresses
\begin{verbatim}
0:0:0:0:0:ffff:66.163.162.9
::ffff:66.163.162.9
\end{verbatim}
\end{itemize}
\subsection{IPv6 Basics - Address Oddities}
\begin{itemize}
\item Address may include a link name:
\begin{verbatim}
2001:470:1f07:3d1::1%eth0
\end{verbatim}
\item IPv4-mapped addresses
\begin{verbatim}
0:0:0:0:0:ffff:66.163.162.9
::ffff:66.163.162.9
\end{verbatim}
\item You need brackets to distinguish a port from an address:
\begin{itemize}
\item IPv4: \verb+66.163.162.9:22+
\item IPv6: \verb+[2001:470:1f07:3d1::1]:22+
\end{itemize}
\end{itemize}
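\subsection{IPv6 Basics - Address Oddities}
The same module understands IPv4-mapped addresses; a small sketch:
\begin{verbatim}
import ipaddress

a = ipaddress.ip_address("::ffff:66.163.162.9")
print(a)                  # hexadecimal form of the same address
print(a.ipv4_mapped)      # the embedded IPv4 address: 66.163.162.9
\end{verbatim}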
\subsection{IPv6 Basics -- Address Scope}
\begin{itemize}
\item Link-Local (example: \verb+fe80::e276:63ff:fe72:3900%xennet0+)
\begin{itemize}
\item Used on a single link
\item Packets with link-local source or destination addresses are not
forwarded to other links
\end{itemize}
\end{itemize}
\subsection{IPv6 Basics -- Address Scope}
\begin{itemize}
\item Link-Local (example: \verb+fe80::e276:63ff:fe72:3900%xennet0+)
\begin{itemize}
\item Used on a single link
\item Packets with link-local source or destination addresses are not
forwarded to other links
\end{itemize}
\item Unique-Local (\verb+fc00::/7+)
\begin{itemize}
\item Used for private IPv6 networks
\item not globally routable
\item Applications similar to RFC 1918
\end{itemize}
\end{itemize}
\subsection{IPv6 Basics -- Address Scope}
\begin{itemize}
\item Link-Local (example: \verb+fe80::e276:63ff:fe72:3900%xennet0+)
\begin{itemize}
\item Used on a single link
\item Packets with link-local source or destination addresses are not
forwarded to other links
\end{itemize}
\item Unique-Local (\verb+fc00::/7+)
\begin{itemize}
\item Used for private IPv6 networks
\item not globally routable
\item Applications similar to RFC 1918
\end{itemize}
\item Global (example: \verb+2001:470:1f07:3d1::1+)
\begin{itemize}
\item A globally unique address
\item Packets with global addresses can be forwarded to any part of
the global network
\end{itemize}
\end{itemize}
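\subsection{IPv6 Basics -- Address Scope}
A quick way to classify addresses by scope (note that link-local and
unique-local addresses both count as ``private'' here):
\begin{verbatim}
import ipaddress

for a in ["fe80::e276:63ff:fe72:3900",   # link-local
          "fc00::1",                     # unique-local
          "2001:470:1f07:3d1::1"]:       # global
    ip = ipaddress.ip_address(a)
    print(a, ip.is_link_local, ip.is_private, ip.is_global)
\end{verbatim}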
%\subsection{IPv6 Configuration Types}
%\begin{itemize}
% \item Static Configuration
% \item Stateful Autoconfiguration (DHCPv6)
% \item Stateless Address Autoconfiguration (SLAC)
% \begin{itemize}
% \item RFC2462
% \item use of autonomously configured link-local address
% using its EUI-64 address
%\begin{verbatim}
% fe80::213:d3ff:fe9c:1840%eth0
%\end{verbatim}
% \item at boot time, send Router Solicitation (RS) to
% request Router Advertisements (RAs)
% \end{itemize}
%\end{itemize}
%
\subsection{IPv6 Subnets}
\begin{verbatim}
$ sipcalc 2001:470:30:84:e276:63ff:fe72:3900/64
-[ipv6 : 2001:470:30:84:e276:63ff:fe72:3900/64] - 0
[IPV6 INFO]
Expanded Address - 2001:0470:0030:0084:e276:63ff:fe72:3900
Compressed address - 2001:470:30:84:e276:63ff:fe72:3900
Subnet prefix (masked) - 2001:470:30:84:0:0:0:0/64
Address ID (masked) - 0:0:0:0:e276:63ff:fe72:3900/64
Prefix address - ffff:ffff:ffff:ffff:0:0:0:0
Prefix length - 64
Address type - Aggregatable Global Unicast Addresses
Network range - 2001:0470:0030:0084:0000:0000:0000:0000 -
2001:0470:0030:0084:ffff:ffff:ffff:ffff
\end{verbatim}
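\subsection{IPv6 Subnets}
If {\tt sipcalc(1)} is not installed, much of the same information can
be derived with a few lines of Python:
\begin{verbatim}
import ipaddress

ifc = ipaddress.ip_interface("2001:470:30:84:e276:63ff:fe72:3900/64")
net = ifc.network
print(net)                    # subnet prefix (masked)
print(net.exploded)           # expanded form
print(net.num_addresses)      # 2**64 addresses in a /64
print(net.network_address.exploded, "-",
      net.broadcast_address.exploded)   # network range
\end{verbatim}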
\subsection{IPv6 Subnets: Common CIDRs}
\small
\begin{verbatim}
2001:0db8:0123:4567:89ab:cdef:1234:5678
|||| |||| |||| |||| |||| |||| |||| |||128 Single end-points and loopback
|||| |||| |||| |||| |||| |||| |||| ||124
|||| |||| |||| |||| |||| |||| |||| |120
|||| |||| |||| |||| |||| |||| |||| 116
|||| |||| |||| |||| |||| |||| |||112
|||| |||| |||| |||| |||| |||| ||108
|||| |||| |||| |||| |||| |||| |104
|||| |||| |||| |||| |||| |||| 100
|||| |||| |||| |||| |||| |||96
|||| |||| |||| |||| |||| ||92
|||| |||| |||| |||| |||| |88
|||| |||| |||| |||| |||| 84
|||| |||| |||| |||| |||80
|||| |||| |||| |||| ||76
|||| |||| |||| |||| |72
|||| |||| |||| |||| 68
|||| |||| |||| |||64 Single End-user LAN (default prefix size for SLAAC)
|||| |||| |||| ||60
|||| |||| |||| |56 Proposed minimal end sites assignment
|||| |||| |||| 52
|||| |||| |||48 Default end sites assignment
|||| |||| ||44
|||| |||| |40
|||| |||| 36
|||| |||32 Local Internet registry minimum allocations
|||| ||28 Local Internet registry medium allocations
|||| |24 Local Internet registry large allocations
|||| 20 Local Internet registry extra large allocations
|||16
||12 Regional Internet Registry allocations from IANA
|8
\end{verbatim}
\Normalsize
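\subsection{IPv6 Subnets: Common CIDRs}
Some arithmetic to put the chart into perspective (prefix lengths taken
from the chart on the previous slide):
\begin{verbatim}
# how many /64 end-user LANs fit into the common assignment sizes?
for prefix in (48, 56, 60):
    print(f"a /{prefix} holds {2 ** (64 - prefix)} /64 networks")
\end{verbatim}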
\newpage
\vspace*{\fill}
\begin{center}
\Hugesize
Hooray! \\ [1em]
\hspace*{5mm}
\blueline\\
\hspace*{5mm}\\
5 Minute Break
\end{center}
\vspace*{\fill}
\subsection{Networking Buzzwords}
\\
\newcommand{\gargantuan}{\fontsize{45}{50}\selectfont}
\gargantuan
\begin{center}
``The network is the computer.'' \\
\small
\vspace*{.5in}
John Gage, Sun Microsystems
\end{center}
\Normalsize
\subsection{Networking Buzzwords}
\\
\gargantuan
\begin{center}
``The network is the network, \\
the computer is the computer - \\
sorry about the confusion.'' \\
\small
\vspace*{.5in}
Joe on Computing
\end{center}
\Normalsize
\subsection{Networking Buzzwords}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.9]{pics/cloud.eps}
\end{center}
\vspace*{\fill}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.4]{pics/map-of-internet.eps} \\
\vspace*{\fill}
\small
\verb+http://www.chrisharrison.net/index.php/Visualizations/InternetMap+
\Normalsize
\end{center}
\subsection{Networking}
/X? % /30
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.8]{pics/2computers.eps} \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
/X? % /29
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.8]{pics/3computers.eps} \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.9]{pics/broadcast-domain.eps} \\
\end{center}
\vspace*{\fill}
\subsection{WHOIS ASN?}
\Huge
\vfill
\begin{center}
The Internet Assigned Numbers Authority (IANA) oversees global IP
address and AS number allocation, root zone management, etc.
\\
\vspace{.5in}
\verb+https://www.iana.org/+
\end{center}
\vfill
\Normalsize
\subsection{WHOIS ASN?}
Autonomous System Numbers (ASNs) are assigned by IANA
to the RIRs, see e.g. {\tt
ftp://ftp.arin.net/pub/stats/arin/}
\\
You can query databases on the internet about e.g. IP
block or ASN information via the {\tt WHOIS} protocol:
\begin{verbatim}
$ whois 155.246.56.11 | more
NetRange: 155.246.0.0 - 155.246.255.255
CIDR: 155.246.0.0/16
NetName: STEVENS
NetHandle: NET-155-246-0-0-1
Parent: NET155 (NET-155-0-0-0-0)
NetType: Direct Assignment
Organization: Stevens Institute of Technology (SIT)
RegDate: 1991-12-31
Updated: 2007-01-29
Ref: https://rdap.arin.net/registry/ip/155.246.0.0
\end{verbatim}
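\subsection{WHOIS ASN?}
The protocol behind {\tt whois(1)} is about as simple as it gets (RFC
3912): send the query plus CRLF over TCP port 43 and read until the
server closes the connection. A minimal sketch in Python, assuming
ARIN's server is the right one to ask for this block:
\begin{verbatim}
import socket

def whois(query, server="whois.arin.net"):
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        # needs Python 3.8+ for the := operator
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois("155.246.56.11"))
\end{verbatim}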
\subsection{WHOIS ASN?}
Carriers connect their Autonomous Systems at {\em
Internet Exchange Points} (IXPs) to route traffic
between the different networks.\\
This {\em peering} happens amongst carriers on a
tiered basis. \\
Examples:
\begin{verbatim}
https://peeringdb.com/net?asn=21976
https://peeringdb.com/net?asn=6939
https://peeringdb.com/net/27
https://peeringdb.com/net/433
https://peeringdb.com/net/457
\end{verbatim}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=1.2]{pics/asns.eps} \\
\end{center}
\vspace*{\fill}
\subsection{WHOIS ASN?}
Most of these services are available via APIs or
text-based interfaces:
\begin{verbatim}
$ host www.google.com
www.google.com has address 172.217.0.36
www.google.com has IPv6 address 2607:f8b0:4006:807::2004
$ whois -h whois.cymru.com 2607:f8b0:4006:807::2004
AS | IP | AS Name
15169 | 2607:f8b0:4006:807::2004 | GOOGLE - Google Inc., US
$ curl -s https://peeringdb.com/api/net?asn=15169 | python -mjson.tool | more
{ "data": [ {
"aka": "Google, YouTube (for Google Fiber see AS16591 record)",
"created": "2005-02-06T06:41:04Z",
"id": 433,
"info_ipv6": true,
"info_prefixes4": 15000,
"info_prefixes6": 750,
"info_ratio": "Mostly Outbound",
\end{verbatim}
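\subsection{WHOIS ASN?}
The same PeeringDB query from Python instead of {\tt curl(1)}; the
fields used here ({\tt data}, {\tt aka}, {\tt info\_prefixes6}) are the
ones visible in the output above:
\begin{verbatim}
import json
import urllib.request

url = "https://peeringdb.com/api/net?asn=15169"
with urllib.request.urlopen(url, timeout=10) as resp:
    net = json.load(resp)["data"][0]
print(net["aka"])
print("self-reported IPv6 prefixes:", net["info_prefixes6"])
\end{verbatim}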
\subsection{Networking}
To find the path your packets might take, give {\tt
traceroute(1)} a go: \\
\begin{verbatim}
$ traceroute search.yahoo.com
traceroute to search.yahoo.com (63.250.200.63), 30 hops max, 60 byte packets
1 155.246.89.2 (155.246.89.2) 0.342 ms postal0.cs.stevens-tech.edu (155.246.89.3) 0.251 ms 0.298 ms
2 155.246.89.2 (155.246.89.2) 0.311 ms 0.300 ms gwa.cc.stevens.edu (155.246.151.37) 0.252 ms
3 454a0465.cst.lightpath.net (69.74.4.101) 3.984 ms 3.761 ms 3.735 ms
4 18267502.cst.lightpath.net (24.38.117.2) 32.559 ms 32.591 ms 32.577 ms
5 hunt183-154.optonline.net (167.206.183.154) 4.473 ms 4.634 ms 18267502.cst.lightpath.net (24.38.117.2) 32.527 ms
6 451be0a9.cst.lightpath.net (65.19.113.169) 5.170 ms 5.278 ms hunt183-154.optonline.net (167.206.183.154) 4.465 ms
7 nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.928 ms 451be0a9.cst.lightpath.net (65.19.113.169) 5.153 ms nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.868 ms
8 ae-1.pat2.bfw.yahoo.com (216.115.111.26) 26.422 ms ae-1.pat1.bfw.yahoo.com (216.115.111.28) 13.974 ms nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.572 ms
9 et-18-1-0.msr1.bf2.yahoo.com (74.6.227.37) 17.812 ms et-18-1-0.msr2.bf1.yahoo.com (74.6.227.49) 16.576 ms ae-1.pat2.bfw.yahoo.com (216.115.111.26) 23.416 ms
10 et-0-1-1.clr1-a-gdc.bf1.yahoo.com (74.6.122.15) 18.817 ms et-0-1-1.clr2-a-gdc.bf1.yahoo.com (74.6.122.19) 17.672 ms et-0-1-0.clr1-a-gdc.bf1.yahoo.com (74.6.122.13) 17.947 ms
\end{verbatim}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=1.3]{pics/car-duct-tape.eps} \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.9]{pics/cable-layer.eps} \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=1.2]{pics/internet-undersea-cable.eps} \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
Stringing cables across the oceans' floors since 1866!
\vspace*{\fill}
\begin{center}
\includegraphics[scale=1.0]{pics/internet-undersea-cable.eps} \\
\verb+https://www.submarinecablemap.com/+ \\
\verb+https://is.gd/CjanOu+
\end{center}
\vspace*{\fill}
\subsection{Networking}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/cablemap.eps} \\
\verb+https://www.submarinecablemap.com/+ \\
\end{center}
\vspace*{\fill}
\subsection{Networking}
``The Net interprets censorship as damage and routes around it.'' \\
...except when it can't.
\begin{center}
\vspace*{\fill}
\includegraphics[scale=0.4]{pics/syria-disappears.eps} \\
\vspace*{\fill}
{\tt https://blog.cloudflare.com/how-syria-turned-off-the-internet} \\
{\tt https://youtu.be/OZHKeYwnALc}
\end{center}
\subsection{Networking}
\begin{center}
\vspace*{\fill}
\includegraphics[scale=0.9]{pics/tubes.eps} \\
\vspace*{\fill}
{\tt https://amzn.com/0061994952} \\
{\tt https://cromwell-intl.com/travel/usa/new-york-internet/}
\end{center}
\subsection{Networking}
The internet is a physical place. \\
\begin{center}
\vspace*{\fill}
\includegraphics[scale=0.6]{pics/Room_641A.eps} \\
\vspace*{\fill}
{\tt https://en.wikipedia.org/wiki/Room\_641A}
\end{center}
\subsection{Networking}
Now identify the physical and organizational aspects
of your network traffic:
\begin{verbatim}
$ traceroute search.yahoo.com
traceroute to search.yahoo.com (63.250.200.63), 30 hops max, 60 byte packets
1 155.246.89.2 (155.246.89.2) 0.342 ms postal0.cs.stevens-tech.edu (155.246.89.3) 0.251 ms 0.298 ms
2 155.246.89.2 (155.246.89.2) 0.311 ms 0.300 ms gwa.cc.stevens.edu (155.246.151.37) 0.252 ms
3 454a0465.cst.lightpath.net (69.74.4.101) 3.984 ms 3.761 ms 3.735 ms
4 18267502.cst.lightpath.net (24.38.117.2) 32.559 ms 32.591 ms 32.577 ms
5 hunt183-154.optonline.net (167.206.183.154) 4.473 ms 4.634 ms 18267502.cst.lightpath.net (24.38.117.2) 32.527 ms
6 451be0a9.cst.lightpath.net (65.19.113.169) 5.170 ms 5.278 ms hunt183-154.optonline.net (167.206.183.154) 4.465 ms
7 nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.928 ms 451be0a9.cst.lightpath.net (65.19.113.169) 5.153 ms nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.868 ms
8 ae-1.pat2.bfw.yahoo.com (216.115.111.26) 26.422 ms ae-1.pat1.bfw.yahoo.com (216.115.111.28) 13.974 ms nyiix.bas1-m.nyc.yahoo.com (198.32.160.121) 6.572 ms
9 et-18-1-0.msr1.bf2.yahoo.com (74.6.227.37) 17.812 ms et-18-1-0.msr2.bf1.yahoo.com (74.6.227.49) 16.576 ms ae-1.pat2.bfw.yahoo.com (216.115.111.26) 23.416 ms
10 et-0-1-1.clr1-a-gdc.bf1.yahoo.com (74.6.122.15) 18.817 ms et-0-1-1.clr2-a-gdc.bf1.yahoo.com (74.6.122.19) 17.672 ms et-0-1-0.clr1-a-gdc.bf1.yahoo.com (74.6.122.13) 17.947 ms
\end{verbatim}
\subsection{Networking I}
\vspace*{\fill}
\begin{center}
\includegraphics[scale=0.7]{pics/osi-stack2.eps}
\end{center}
\vspace*{\fill}
% break here
% %
% % \subsection{A simple example}
% % \Hugesize
% % \begin{center}
% % \begin{verbatim}
% % $ telnet www.google.com 80
% %
% % \end{verbatim}
% % \end{center}
% % \Normalsize
% % \vspace*{\fill}
% %
% % \subsection{A simple example}
% % \Hugesize
% % \begin{center}
% % \begin{verbatim}
% % $ telnet www.google.com 80
% % Trying 2607:f8b0:400c:c03::67...
% % Connected to www.google.com.
% % Escape character is '^]'.
% % GET / HTTP/1.0
% %
% % \end{verbatim}
% % \end{center}
% % \Normalsize
% % \vspace*{\fill}
% %
% % \subsection{A simple example}
% % \Hugesize
% % \begin{center}
% % \begin{verbatim}
% % $ telnet www.google.com 80
% % Trying 2607:f8b0:400c:c03::67...
% % Connected to www.google.com.
% % Escape character is '^]'.
% % GET / HTTP/1.0
% %
% % HTTP/1.0 200 OK
% % Date: Mon, 17 Mar 2014 16:15:01 GMT
% % Content-Type: text/html; charset=ISO-8859-1
% % Server: gws
% % [...]
% % \end{verbatim}
% % \end{center}
% % \Normalsize
% % \vspace*{\fill}
% %
% % \subsection{A simple example}
% % What exactly happens?
% %
% % \subsection{A simple example}
% % \\
% % \Hugesize
% % \begin{center}
% % \begin{verbatim}
% % $ strace -f telnet www.google.com 80 2>strace.out
% % Trying 173.194.73.99...
% % Connected to www.google.com.
% % Escape character is '^]'.
% % GET / HTTP/1.0
% %
% % [...]
% % \end{verbatim}
% % \end{center}
% % \Normalsize
% % \vspace*{\fill}
% %
% % %\subsection{A simple example}
% % %Let's just look at what files this opens:
% % %\Hugesize
% % %\begin{center}
% % %\begin{verbatim}
% % %$ strace -f -e trace=open \
% % % telnet www.yahoo.com 80 2>strace.out
% % %Trying 98.139.183.24...
% % %Connected to any-fp3-real.wa1.b.yahoo.com.
% % %Escape character is '^]'.
% % %HEAD / HTTP/1.0
% %
% % %[...]
% % %\end{verbatim}
% % %\end{center}
% % %\Normalsize
% % %\vspace*{\fill}
% %
% %
% %
% % \subsection{A simple example}
% % What exactly happens?
% % \\
% % \begin{itemize}
% % \item local host connects to remote host
% % \item sends command
% % \item receives data
% % \end{itemize}
% %
% % \subsection{A simple example}
% % How exactly do we connect to the remote host?
% % \\
% % \begin{itemize}
% % \item look up hostname
% % \item open connection to IP address
% % \end{itemize}
% %
% % \subsection{A simple example}
% % How exactly do we look up a hostname?
% % \\
% % \begin{itemize}
% % \item look up various local files
% % \item open a connection to a DNS server's IP
% % \item ask DNS server to resolve hostname
% % \item get back IP
% % \end{itemize}
% %
% % \subsection{...open a few files...}
% % \begin{verbatim}
% % execve("/usr/bin/telnet", ["telnet", "www.google.com", "80"], [/* 29 vars */]) = 0
% % [...]
% % open("/etc/nsswitch.conf", O_RDONLY) = 3
% % fstat(3, {st_mode=S_IFREG|0644, st_size=286, ...}) = 0
% % mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = [...]
% % read(3, "passwd: files ldap\ngroup: files "..., 4096) = 286
% % [...]
% % open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 3
% % fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
% % fstat(3, {st_mode=S_IFREG|0644, st_size=277, ...}) = 0
% % mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = [...]
% % read(3, "127.0.0.1 localhost\n\n# The fo"..., 4096) = 277
% % [...]
% % stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=205, ...}) = 0
% % open("/etc/resolv.conf", O_RDONLY) = 3
% % fstat(3, {st_mode=S_IFREG|0644, st_size=205, ...}) = 0
% % read(3, "nameserver 155.246.1.20\nnameserv"..., 4096) = 205
% % \end{verbatim}
% %
% % \subsection{... query a DNS server ...}
% % \begin{verbatim}
% % [...]
% % socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
% % connect(3, {sa_family=AF_INET, sin_port=htons(53),
% % sin_addr=inet_addr("155.246.1.20")}, 16) = 0
% % gettimeofday({1330805293, 202924}, NULL) = 0
% % sendto(3, "\364\333\1\0\0\1\0\0\0\0\0\0\3www\6google\3com\0\0\1\0\1", 32,
% % MSG_NOSIGNAL, NULL, 0) = 32
% % poll([{fd=3, events=POLLIN}], 1, 5000) = 1 ([{fd=3, revents=POLLIN}])
% % ioctl(3, FIONREAD, [504]) = 0
% % recvfrom(3, "\364\333\201\200\0\1\0\6\0\r\0\10\3www\6google\3com\0\0\1\0\1"...,
% % 1024, 0, {sa_family=AF_INET, sin_port=htons(53),
% % sin_addr=inet_addr("155.246.1.20")}, [16]) = 504
% % close(3) = 0
% % [...]
% % \end{verbatim}
% %
% % \subsection{...communicate with the remote host...}
% % \begin{verbatim}
% % [...]
% % write(1, "Trying 173.194.73.104...\n", 25) = 25
% % close(4294967295) = -1 EBADF (Bad file descriptor)
% % socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
% % setsockopt(3, SOL_IP, IP_TOS, [16], 4) = 0
% % connect(3, {sa_family=AF_INET, sin_port=htons(80),
% % sin_addr=inet_addr("173.194.73.104")},16) = 0
% % [...]
% % read(0, "GET / HTTP/1.0\n", 8191) = 15
% % select(4, [0 3], [3], [3], {0, 0}) = 1 (out [3], left {0, 0})
% % sendto(3, "GET / HTTP/1.0\r\n", 16, 0, NULL, 0) = 16
% % [...]
% % recvfrom(3, "HTTP/1.0 200 OK\r\nDate: Sat, 02 M"..., 8191, 0, NULL, NULL) = 5520
% % select(4, [0 3], [1], [3], {0, 0}) = 2 (in [3], out [1], left {0, 0})
% % write(1, "HTTP/1.0 200 OK\nDate: Sat, 02 Ma"..., 5508) = 5508
% % recvfrom(3, "", 6035, 0, NULL, NULL) = 0
% % [...]
% % \end{verbatim}
% %
% % % ktrace
% % %\subsection{A simple example}
% % %... look up various local files...
% % %\begin{verbatim}
% % %[...]
% % % 5921 1 telnet CALL open(0xbba06a65,0,0x1b6)
% % % 5921 1 telnet NAMI "/etc/nsswitch.conf"
% % % 5921 1 telnet RET open 3
% % %[...]
% % % 5921 1 telnet CALL open(0xbba0474b,0,0x1b6)
% % % 5921 1 telnet NAMI "/etc/hosts"
% % % 5921 1 telnet RET open 3
% % %[...]
% % % 5921 1 telnet CALL open(0xbba0495b,0,0x1b6)
% % % 5921 1 telnet NAMI "/etc/resolv.conf"
% % % 5921 1 telnet RET open 3
% % %[...]
% % %\end{verbatim}
% %
% % %\subsection{A simple example}
% % %... query a DNS server ...
% % %\begin{verbatim}
% % %[...]
% % % 5921 1 telnet CALL socket(2,2,0)
% % % 5921 1 telnet RET socket 3
% % % 5921 1 telnet CALL connect(3,0xbba210f0,0x10)
% % % 5921 1 telnet RET connect 0
% % % 5921 1 telnet CALL sendto(3,0xbfbee0d0,0x1f,0,0,0)
% % % 5921 1 telnet GIO fd 3 wrote 31 bytes
% % % "[T\^A\0\0\^A\0\0\0\0\0\0\^Cwww\^Eyahoo\^Ccom\0\0\^\\0\^A"
% % %[...]
% % % 5921 1 telnet CALL recvfrom(3,0x8077000,0x10000,0,
% % % 0xbfbeda10,0xbfbed9d4)
% % % 5921 1 telnet GIO fd 3 read 139 bytes
% % % "[T\M^A\M^@\0\^A\0\^B\0\^A\0\0\^Cwww\^Eyahoo\^Ccom\0\0\^\\0\^A\M-
% % % 5921 1 telnet RET recvfrom 139/0x8b
% % % 5921 1 telnet CALL close(3)
% % %[...]
% % %\end{verbatim}
% %
% % %\subsection{A simple example}
% % %... communicate with remote host ...
% % %\begin{verbatim}
% % % 5821 1 telnet CALL read(0,0x5222a0,0x400)
% % % 5821 1 telnet GIO fd 0 read 15 bytes
% % % "GET / HTTP/1.0\n"
% % % 5821 1 telnet RET read 15/0xf
% % % 5821 1 telnet CALL poll(0x7f7fffffd440,3,0)
% % % 5821 1 telnet RET poll 1
% % % 5821 1 telnet CALL sendto(3,0x521260,0x10,0,0,0)
% % % 5821 1 telnet GIO fd 3 wrote 16 bytes
% % % "GET / HTTP/1.0\r\n"
% % % 5821 1 telnet RET sendto 16/0x10
% % %\end{verbatim}
% % %\Normalsize
% %
% % %\subsection{A simple example}
% % %... communicate with remote host ...
% % %\begin{verbatim}
% % % 5921 1 telnet CALL recvfrom(3,0x8064b80,0x400,0,0,0)
% % % 5921 1 telnet GIO fd 3 read 1024 bytes
% % % "HTTP/1.1 200 OK\r
% % % Date: Sat, 19 Mar 2011 22:55:56 GMT\r
% % % Connection: close\r
% % % Content-Type: text/html; charset=utf-8\r
% % % <html>
% % % <head>
% % % <title>Yahoo!</title>
% % %[...]
% % %\end{verbatim}
% % \Normalsize
% %
% % \subsection{A simple example}
% % What does this look like on the wire?
% % \\
% %
% % \begin{itemize}
% % \item determine which nameserver to query
% % \item ask who has a route to the nameserver
% % \item open socket to well defined port on remote IP
% % \item send queries
% % \item open socket to requested port on remote IP
% % \end{itemize}
% %
% % \subsection{A simple example}
% % What does this look like on the wire?
% % \vspace*{1in}
% % \\
% % \Hugesize
% % \begin{center}
% % \begin{verbatim}
% % # tcpdump port not 22
% % \end{verbatim}
% % \end{center}
% % \Normalsize
% % \vspace*{\fill}
% %
% % \subsection{What does this look like on the wire?}
% % \begin{verbatim}
% % $ start-netbsd # custom shell alias
% % $ ssh <instance-name>
% % # script commands.out
% % # ifconfig -a
% % # route -n get default
% % # cat /etc/resolv.conf
% % # tcpdump -w tcpdump.out port not 22 &
% % # arp -d -a
% % # ping -n -c 3 98.139.180.149
% % # telnet www.google.com 80
% % [...]
% % # kill %1
% % # exit
% % # exit
% % $ scp <instance-name>:*out ~/tmp/
% % $ ec2-terminate-instances <instance>
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Finding the next hop:
% % \begin{verbatim}
% % $ tcpdump -n -r tcpdump.out arp
% % reading from file tcpdump.out, link-type EN10MB (Ethernet)
% % 18:06:59.217533 ARP, Request who-has 10.114.62.1 tell 10.114.63.209, length 28
% % 18:06:59.218187 ARP, Reply 10.114.62.1 is-at fe:ff:ff:ff:ff:ff, length 28
% % 18:07:06.148475 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:07:06.148499 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % 18:08:05.820986 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:08:05.821011 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % 18:09:18.518859 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:09:18.518878 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % 18:10:17.081885 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:10:17.081903 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Performing the DNS query:
% % \begin{verbatim}
% % $ tcpdump -t -n -r tcpdump.out udp port 53
% % reading from file tcpdump.out, link-type EN10MB (Ethernet)
% % IP 10.202.150.59.65511 > 172.16.0.23.53: 60916+ AAAA? www.google.com. (32)
% % IP 172.16.0.23.53 > 10.202.150.59.65511: 60916 1/0/0 AAAA 2607:f8b0:400c:c01::93 (60)
% % IP 10.202.150.59.65510 > 172.16.0.23.53: 1928+ A? www.google.com. (32)
% % IP 172.16.0.23.53 > 10.202.150.59.65510: 1928 6/0/0 A 173.194.75.105, A
% % 173.194.75.106, A 173.194.75.147, A 173.194.75.99, A 173.194.75.103, A 173.194.75.104 (128)
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Establishing the connection to the server:
% % \begin{verbatim}
% % $ tcpdump -n -r tcpdump.out tcp port 80
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [S],
% % seq 4158935008, win 32768,
% % options [mss 1460,nop,wscale 3, ...], length 0
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [S.],
% % seq 933875667, ack 4158935009, win 62920,
% % options [mss 1430,nop,nop, ...], length 0
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [.],
% % ack 1, win 4197, length 0
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Sending the HTTP request:
% % \begin{verbatim}
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [P.],
% % seq 1:17, ack 1, win 4197, length 16
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % ack 17, win 984, length 0
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [P.],
% % seq 17:19, ack 1, win 4197, length 2
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % ack 19, win 984, length 0
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Receiving the HTTP response:
% % \begin{verbatim}
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % seq 1:1431, ack 19, win 984, length 1430
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % seq 1431:2861, ack 19, win 984, length 1430
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [.],
% % ack 2861, win 3840, length 0
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % seq 2861:4291, ack 19, win 984, length 1430
% % \end{verbatim}
% %
% % \subsection{A simple example}
% % Terminating the connection:
% % \begin{verbatim}
% % [...]
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [.],
% % ack 42901, win 3738, length 0
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [.],
% % ack 42901, win 4122, length 0
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % seq 42901:44331, ack 19, win 984, length 1430
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [FP.],
% % seq 44331:44839, ack 19, win 984, length 508
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [.],
% % ack 44840, win 4134, length 0
% % IP 10.202.150.59.65531 > 173.194.75.105.80: Flags [F.],
% % seq 19, ack 44840, win 4197, length 0
% % IP 173.194.75.105.80 > 10.202.150.59.65531: Flags [.],
% % ack 20, win 984, length 0
% % \end{verbatim}
% %
% % \subsection{Notables from this simple example}
% % ``Simple'' is, as usual, relative.
% %
% % \subsection{Notables from this simple example}
% % ``Simple'' is, as usual, relative.
% % \\
% %
% % \begin{itemize}
% % \item host configuration assumed
% % \item network architecture (internal or across the internet) not
% % relevant (here)
% % \item even simple examples cross multiple layers and protocols
% % (HTTP, DNS; TCP, UDP, ARP)
% % \item we haven't even scratched the surface
% % \end{itemize}
% %
% % \subsection{TCP/IP Basics: Protocol Layers}
% % \begin{center}
% % \begin{tabular}{|cl|l|}
% % \hline
% % & {\bf Layer} & {\bf Function} \\
% % \hline
% % 4. & Application Layer & End-User application programs \\
% % 3. & Transport Layer & Delivery of data to applications \\
% % 2. & Network Layer & Basic communication, addressing, and routing \\
% % \multirow{2}{*}{1.} & Link Layer & Network Hardware and device drivers \\
% % & Physical Layer & Cable or physical medium \\
% % \hline
% % \end{tabular}
% % \end{center}
% % \addvspace{.5in}
% % Examples of protocols for each layer:
% % \begin{itemize}
% % \item Simple Mail Transfer Protocol (RFC 821) \\
% % Hypertext Transfer Protocol (RFC 2616)
% % \item Transmission Control Protocol (RFC 793, tcp(4)) \\
% % User Datagram Protocol (RFC 768; udp(4))
% % \item Internet Protocol (RFC 791; ip(4)) \\
% % Internet Control Message Protocol (RFC 792; icmp(4))
% % \item Address Resolution Protocol (RFC 826; arp(4))
% % \end{itemize}
% %
% % \subsection{TCP/IP Basics: Protocol Layers (OSI Model)}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.7]{pics/osi.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% % \subsection{TCP/IP Basics: ARP}
% % \begin{center}
% % Ethernet Address Resolution Protocol \\
% % -- or -- \\
% % Converting Network Protocol Addresses to 48-bit Ethernet Address for Transmission on Ethernet Hardware
% % \end{center}
% %
% % \begin{verbatim}
% % $ arp -a
% % logger.srcit.stevens-tech.edu (155.246.89.81) at 00:07:e9:09:c8:94 [ether] on eth0
% % vader.srcit.stevens-tech.edu (155.246.89.5) at 00:23:8b:a9:dd:60 [ether] on eth0
% % tarantula.phy.stevens-tech.edu (155.246.89.41) at 00:50:45:5f:1c:d4 [ether] on eth0
% % nirvana.phy.stevens-tech.edu (155.246.89.33) at 00:1e:68:0f:99:a2 [ether] on eth0
% % Vlan16.cc.stevens-tech.edu (155.246.89.1) at 00:09:44:d1:64:00 [ether] on eth0
% % cinema.srcit.stevens-tech.edu (155.246.89.67) at 00:25:90:1e:05:51 [ether] on eth0
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ARP}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/3computers-arp.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% %
% % \subsection{TCP/IP Basics: ARP}
% % \begin{center}
% % Ethernet Address Resolution Protocol \\
% % -- or -- \\
% % Converting Network Protocol Addresses to 48-bit Ethernet Address for Transmission on Ethernet Hardware
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % 18:06:59.217533 ARP, Request who-has 10.114.62.1 tell 10.114.63.209, length 28
% % 18:06:59.218187 ARP, Reply 10.114.62.1 is-at fe:ff:ff:ff:ff:ff, length 28
% % 18:07:06.148475 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:07:06.148499 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % 18:08:05.820986 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:08:05.821011 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % 18:09:18.518859 ARP, Request who-has 10.114.63.209 (ff:ff:ff:ff:ff:ff)
% % tell 0.0.0.0, length 28
% % 18:09:18.518878 ARP, Reply 10.114.63.209 is-at 12:31:3d:04:30:23, length 28
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ND}
% % \begin{center}
% % Neighbor Discovery Protocol
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ ndp -n -a
% % Neighbor Linklayer Address Netif Expire S Flags
% % 2001:470:30:84:e276:63ff:fe72:3900 e0:76:63:72:39:00 xennet0 permanent R
% % fe80::21b:21ff:fe45:bf54%xennet0 00:1b:21:45:bf:54 xennet0 21m52s S R
% % fe80::21b:21ff:fe7a:7269%xennet0 00:1b:21:7a:72:69 xennet0 23h59m59s S R
% % fe80::e276:63ff:fe72:3900%xennet0 e0:76:63:72:39:00 xennet0 permanent R
% % fe80::1%lo0 (incomplete) lo0 permanent R
% % $
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ND}
% % \begin{center}
% % Neighbor Discovery Protocol
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % 22:35:47.947624 IP6 fe80::21b:21ff:fe7a:7269 > ff02::1:ff62:3400: ICMP6,
% % neighbor solicitation, who has 2001:470:30:84:e276:63ff:fe62:3400, length 32
% % 22:35:50.950101 IP6 2001:470:30:84:e276:63ff:fe72:3900 > ff02::1:ff7a:7269: ICMP6,
% % neighbor solicitation, who has fe80::21b:21ff:fe7a:7269, length 32
% % 22:35:50.950690 IP6 fe80::21b:21ff:fe7a:7269 > 2001:470:30:84:e276:63ff:fe72:3900:
% % ICMP6, neighbor advertisement, tgt is fe80::21b:21ff:fe7a:7269, length 32
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP}
% % \begin{center}
% % Internet Control Message Protocol
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ ping -c 3 www.yahoo.com
% % PING any-fp.wa1.b.yahoo.com (67.195.160.76): 56 data bytes
% % 64 bytes from 67.195.160.76: icmp_seq=0 ttl=53 time=30.888 ms
% % 64 bytes from 67.195.160.76: icmp_seq=1 ttl=53 time=23.193 ms
% % 64 bytes from 67.195.160.76: icmp_seq=2 ttl=53 time=25.433 ms
% %
% % ----any-fp.wa1.b.yahoo.com PING Statistics----
% % 3 packets transmitted, 3 packets received, 0.0% packet loss
% % round-trip min/avg/max/stddev = 23.193/26.505/30.888/3.958 ms
% % $
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP: Ping}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/3computers-ping.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% %
% % \subsection{TCP/IP Basics: ICMP}
% % \begin{center}
% % Internet Control Message Protocol
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ tcpdump -r tcpdump.out -n icmp
% % 13:23:03.081954 IP 166.84.7.99 > 67.195.160.76: icmp 64: echo request seq 23
% % 13:23:03.092153 IP 67.195.160.76 > 166.84.7.99: icmp 64: echo reply seq 23
% % 13:23:04.081865 IP 166.84.7.99 > 67.195.160.76: icmp 64: echo request seq 24
% % 13:23:04.090909 IP 67.195.160.76 > 166.84.7.99: icmp 64: echo reply seq 24
% % 13:23:05.071735 IP 166.84.7.99 > 67.195.160.76: icmp 64: echo request seq 25
% % 13:23:05.081368 IP 67.195.160.76 > 166.84.7.99: icmp 64: echo reply seq 25
% % \end{verbatim}
% %
% %
% % \subsection{TCP/IP Basics: ICMP6}
% % \begin{center}
% % Internet Control Message Protocol for IPv6
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ ping6 -c 3 www.netbsd.org
% % PING6(56=40+8+8 bytes) 2001:470:30:84:204:d7b0:0:1 -->
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b
% % 16 bytes from 2001:4f8:3:7:2e0:81ff:fe52:9a6b, icmp_seq=0 hlim=57 time=74.316 ms
% % 16 bytes from 2001:4f8:3:7:2e0:81ff:fe52:9a6b, icmp_seq=1 hlim=57 time=71.260 ms
% % 16 bytes from 2001:4f8:3:7:2e0:81ff:fe52:9a6b, icmp_seq=2 hlim=57 time=71.321 ms
% %
% % --- www.netbsd.org ping6 statistics ---
% % 3 packets transmitted, 3 packets received, 0.0% packet loss
% % round-trip min/avg/max/std-dev = 71.260/72.299/74.316/1.747 ms
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP6}
% % \begin{center}
% % Internet Control Message Protocol for IPv6
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % 12:46:58.524431 IP6 2001:470:30:84:204:d7b0:0:1 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b: ICMP6, echo reque st, seq 0, length 16
% % 12:46:58.598621 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b >
% % 2001:470:30:84:204:d7b0:0:1: ICMP6, echo reply , seq 0, length 16
% % 12:46:59.532864 IP6 2001:470:30:84:204:d7b0:0:1 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b: ICMP6, echo request, seq 1, length 16
% % 12:46:59.604011 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b >
% % 2001:470:30:84:204:d7b0:0:1: ICMP6, echo reply , seq 1, length 16
% % 12:47:00.532817 IP6 2001:470:30:84:204:d7b0:0:1 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b: ICMP6, echo reque st, seq 2, length 16
% % 12:47:00.604016 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b >
% % 2001:470:30:84:204:d7b0:0:1: ICMP6, echo reply , seq 2, length 16
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP: Traceroute}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/traceroute1.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% % \subsection{TCP/IP Basics: ICMP: Traceroute}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/traceroute2.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% % \subsection{TCP/IP Basics: ICMP: Traceroute}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/traceroute3.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% % \subsection{TCP/IP Basics: ICMP: Traceroute}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/traceroute4.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% %
% %
% % \subsection{TCP/IP Basics: ICMP}
% % \begin{center}
% % Internet Control Message Protocol
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ traceroute www.netbsd.org
% % traceroute to www.netbsd.org (204.152.190.12), 64 hops max, 40 byte packets
% % 1 eth2-3a.core1.nav.nyc.access.net (166.84.0.1) 0.256 ms 0.165 ms 0.181 ms
% % 2 l3v1.nyc.access.net (166.84.66.14) 1.570 ms 1.556 ms 1.437 ms
% % 3 gige-g3-3.core1.nyc4.he.net (209.51.171.25) 4.963 ms 2.422 ms 1.457 ms
% % 4 10gigabitethernet2-3.core1.ash1.he.net (72.52.92.86) 8.423 ms 8.769 ms 7.683 ms
% % 5 10gigabitethernet1-2.core1.atl1.he.net (184.105.213.110) 21.898 ms 19.647 ms 19.838 ms
% % 6 isc.gige-g2-1.core1.atl1.he.net (216.66.0.50) 77.465 ms 77.921 ms 80.519 ms
% % 7 iana.r1.atl1.isc.org (199.6.12.1) 77.302 ms 78.230 ms 81.782 ms
% % 8 int-0-5-0-1.r1.pao1.isc.org (149.20.65.37) 81.860 ms 83.780 ms 84.160 ms
% % 9 int-0-0-1-0.r1.sql1.isc.org (149.20.65.10) 81.543 ms 80.193 ms 84.434 ms
% % 10 www.netbsd.org (204.152.190.12) 81.986 ms 81.008 ms 82.604 ms
% % $
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP}
% % \begin{center}
% % Internet Control Message Protocol
% % \end{center}
% %
% % \begin{verbatim}
% % IP (tos 0x0, ttl 1, id 44866, offset 0, flags [none], proto UDP (17), length 40)
% % 166.84.7.99.44865 > 149.20.53.86.33435: [udp sum ok] UDP, length 12
% % IP (tos 0xc0, ttl 64, id 48796, offset 0, flags [none], proto ICMP (1), length 68)
% % 166.84.0.1 > 166.84.7.99: ICMP time exceeded in-transit, length 48
% % IP (tos 0x0, ttl 2, id 44869, offset 0, flags [none], proto UDP (17), length 40)
% % 166.84.7.99.44865 > 149.20.53.86.33438: [udp sum ok] UDP, length 12
% % IP (tos 0x0, ttl 3, id 44872, offset 0, flags [none], proto UDP (17), length 40)
% % 166.84.7.99.44865 > 149.20.53.86.33441: [udp sum ok] UDP, length 12
% % IP (tos 0x0, ttl 4, id 44875, offset 0, flags [none], proto UDP (17), length 40)
% % 166.84.7.99.44865 > 149.20.53.86.33444: [udp sum ok] UDP, length 12
% % IP (tos 0x0, ttl 252, id 6760, offset 0, flags [none], proto ICMP (1), length 56)
% % 154.24.25.109 > 166.84.7.99: ICMP time exceeded in-transit, length 36
% % ...
% % IP (tos 0x0, ttl 248, id 0, offset 0, flags [none], proto ICMP (1), length 56)
% % 149.20.53.86 > 166.84.7.99: ICMP 149.20.53.86 udp port 33482 unreachable, length 36
% % \end{verbatim}
% %
% %
% % \subsection{TCP/IP Basics: ICMP6}
% % \begin{center}
% % Internet Control Message Protocol for IPv6
% % \end{center}
% % \vspace{.2in}
% %
% % \begin{verbatim}
% % $ traceroute6 www.netbsd.org
% % traceroute6 to www.netbsd.org (2001:4f8:3:7:2e0:81ff:fe52:9a6b) from
% % 2001:470:30:84:204:d7b0:0:1, 64 hops max, 12 byte packets
% % 1 router.vc.panix.com 0.271 ms 0.282 ms 0.155 ms
% % 2 2001:470:30::a654:420e 5.459 ms 1.251 ms 1.073 ms
% % 3 gige-g3-3.core1.nyc4.he.net 1.288 ms 2.001 ms 10.176 ms
% % 4 10gigabitethernet8-3.core1.chi1.he.net 26.603 ms 20.532 ms 25.029 ms
% % 5 2001:470:1:34::2 72.033 ms 72.377 ms 72.686 ms
% % 6 iana.r1.ord1.isc.org 76.288 ms 72.773 ms 71.481 ms
% % 7 int-0-0-1-8.r1.pao1.isc.org 73.027 ms 76.489 ms 77.507 ms
% % 8 int-0-0-1-0.r2.sql1.isc.org 73.555 ms 75.367 ms 74.769 ms
% % 9 www.NetBSD.org 72.036 ms 72.522 ms 71.39 ms
% % $
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: ICMP6}
% % \begin{center}
% % Internet Control Message Protocol for IPv6
% % \end{center}
% %
% % \begin{verbatim}
% % 12:47:26.860045 IP6 2001:470:30:84:204:d7b0:0:1.51749 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.33435: UDP, length 12
% % 12:47:26.860265 IP6 2001:470:30:84::3 > 2001:470:30:84:204:d7b0:0:1:
% % ICMP6, time exceeded in-transit [|icmp6]
% % 12:47:26.860907 IP6 2001:470:30:84:204:d7b0:0:1.51749 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.33436: UDP, length 12
% % [...]
% % 12:47:29.759506 IP6 2001:470:30:84:204:d7b0:0:1.51749 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.33461: UDP, length 12
% % 12:47:29.830787 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b >
% % 2001:470:30:84:204:d7b0:0:1: ICMP6,
% % destination unreachable[|icmp6]
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: TCP}
% % \begin{center}
% % Transmission Control Protocol
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % $ telnet www.google.com 80
% % Trying 173.194.73.99...
% % Connected to www.google.com.
% % Escape character is '^]'.
% % GET / HTTP/1.0
% %
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: TCP}
% % \begin{center}
% % Transmission Control Protocol
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % 14:51:33.582076 IP 166.84.7.99.58356 > 67.195.160.76.80: S
% % 2267539609:2267539609(0) win 32768
% % <mss 1460,nop,wscale 3,sackOK,nop,nop,nop,nop,timestamp 10>
% % 14:51:33.590748 IP 67.195.160.76.80 > 166.84.7.99.58356: S
% % 3229501874:3229501874(0) ack 2267539610 win 5792
% % <mss 1440,sackOK,timestamp 1241180702 1,nop,wscale 8>
% % 14:51:33.590766 IP 166.84.7.99.58356 > 67.195.160.76.80: .
% % ack 1 win 4197 <nop,nop,timestamp 1 1241180702>
% % 14:51:37.732720 IP 166.84.7.99.58356 > 67.195.160.76.80: P
% % 1:17(16) ack 1 win 4197 <nop,nop,timestamp 9 1241180702>
% % 14:51:37.741763 IP 67.195.160.76.80 > 166.84.7.99.58356: .
% % ack 17 win 23 <nop,nop,timestamp 12411848 53 9>
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: TCP}
% % \begin{center}
% % Transmission Control Protocol over IPv6
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % $ telnet www.netbsd.org 80
% % Trying 2001:4f8:3:7:2e0:81ff:fe52:9a6b...
% % Connected to www.netbsd.org.
% % Escape character is '^]'.
% % GET / HTTP/1.0
% %
% %
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: TCP}
% % \begin{center}
% % Transmission Control Protocol IPv6
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % 14:58:11.128436 IP6 2001:470:30:84:204:d7b0:0:1.58334 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.80: S 3232473102:3232473102(0)
% % win 32768 <mss 1440,nop,wscale3,sackOK,nop,nop,nop,nop,timestamp 1[|tcp]>
% % 14:58:11.200293 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b.80 >
% % 2001:470:30:84:204:d7b0:0:1.58334: S 4139493123:4139493123(0)
% % ack 3232473103 win 32768
% % 14:58:11.200337 IP6 2001:470:30:84:204:d7b0:0:1.58334 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.80: . ack 1 win 4140
% % 14:58:14.322701 IP6 2001:470:30:84:204:d7b0:0:1.58334 >
% % 2001:4f8:3:7:2e0:81ff:fe52:9a6b.80: P 1:17(16) ack 1 win 4140
% % 14:58:14.589416 IP6 2001:4f8:3:7:2e0:81ff:fe52:9a6b.80 >
% % 2001:470:30:84:204:d7b0:0:1.58334: . ack 17 win 33120
% % 14:58:14.752420 IP6 2001:470:30:84:204:d7b0:0:1.58334 >
% % \end{verbatim}
% %
% %
% % \subsection{TCP/IP Basics: UDP}
% % \begin{center}
% % User Datagram Protocol
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % $ nslookup www.yahoo.com
% % Server: 155.246.1.20
% % Address: 155.246.1.20#53
% %
% % Non-authoritative answer:
% % www.yahoo.com canonical name = fp3.wg1.b.yahoo.com.
% % fp3.wg1.b.yahoo.com canonical name = any-fp3-lfb.wa1.b.yahoo.com.
% % any-fp3-lfb.wa1.b.yahoo.com canonical name = any-fp3-real.wa1.b.yahoo.com.
% % Name: any-fp3-real.wa1.b.yahoo.com
% % Address: 98.139.183.24
% %
% % $
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: UDP}
% % \begin{center}
% % User Datagram Protocol
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % 15:06:04.760444 IP (tos 0x0, ttl 64, id 0, offset 0, flags [none],
% % proto UDP (17), length 59) panix.netmeister.org.49164 >
% % cache2.ns.access.net.domain: 28557+ A? www.yahoo.com. (31)
% %
% % 15:06:05.210569 IP (tos 0x0, ttl 63, id 1862, offset 0, flags [none],
% % proto UDP (17), length 207) cache2.ns.access.net.domain >
% % panix.netmeister.org.49164: 28557 4/2/2
% % www.yahoo.com. CNAME fp3.wg1.b.yahoo.com.[|domain]
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: UDP}
% % \begin{center}
% % User Datagram Protocol over IPv6
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % $ dig -6 @2001:470:20::2 www.yahoo.com
% %
% % ;; ANSWER SECTION:
% % www.yahoo.com. 300 IN CNAME fp3.wg1.b.yahoo.com.
% % fp3.wg1.b.yahoo.com. 60 IN CNAME any-fp3-lfb.wa1.b.yahoo.com.
% % any-fp3-lfb.wa1.b.yahoo.com. 300 IN CNAME any-fp3-real.wa1.b.yahoo.com.
% % any-fp3-real.wa1.b.yahoo.com. 60 IN A 98.139.183.24
% %
% % ;; Query time: 51 msec
% % ;; SERVER: 2001:470:20::2#53(2001:470:20::2)
% % ;; WHEN: Sat Mar 3 22:49:44 2012
% % ;; MSG SIZE rcvd: 128
% %
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: UDP}
% % \begin{center}
% % User Datagram Protocol over IPv6
% % \end{center}
% % \vspace{.2in}
% % \begin{verbatim}
% % 15:24:20.731990 IP6 (hlim 64, next-header: UDP (17), length: 39)
% % 2001:470:30:84:204:d7b0:0:1.65037 > 2001:470:20::2.53:
% % [udp sum ok] 18545+ A? www.yahoo.com. (31)
% %
% % 15:24:20.976796 IP6 (hlim 61, next-header: UDP (17), length: 119)
% % 2001:470:20::2.53 > 2001:470:30:84:204:d7b0:0:1.65037:
% % 18545 4/0/0 www.yahoo.com.[|domain]
% %
% % \end{verbatim}
% %
% % \subsection{TCP/IP Basics: Putting it all together}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.6]{pics/tcpip-stack.eps}
% % \end{center}
% % \vspace*{\fill}
% %
% % \subsection{Networking}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=0.8]{pics/dsr.eps} \\
% % \end{center}
% % \vspace*{\fill}
% %
% %
% % \subsection{Networking}
% % \vspace*{\fill}
% % \begin{center}
% % \includegraphics[scale=1.3]{pics/car-duct-tape.eps} \\
% % \end{center}
% % \vspace*{\fill}
% %
\subsection{Internet Maps and Architecture}
\begin{itemize}
\item \verb+https://is.gd/C66S8a+
\item \verb+https://www.submarinecablemap.com/+
\item \verb+https://en.wikipedia.org/wiki/Peering+
\item \verb+https://is.gd/tpPNE5+
\item \verb+https://is.gd/B0d3kh+
\item \verb+https://amzn.com/0061994936+
\item \verb+https://bgp.he.net/+
\item \verb+https://www.wired.com/2014/08/shark_cable/+
\end{itemize}
\subsection{IPv6}
\begin{itemize}
\item \verb+https://www.potaroo.net/papers/isoc/2005-07/ipv6size.html+
\item \verb+https://bgp.he.net/ipv6-progress-report.cgi+
\item \verb+https://ipv6.he.net/statistics/+
\item \verb+https://tunnelbroker.net/+
\end{itemize}
%\subsection{Reading}
%\begin{itemize}
% \item \verb+https://is.gd/qXVo2j+
%\end{itemize}
%\vspace{.5in}
%Commands:
%\begin{itemize}
% \item \verb+tcpdump(8)+
% \item \verb+ktrace(1)+ / \verb+strace(1)+
% \item \verb+tcp(4)+/\verb+ip(4)+
% \item \verb+netstat(1)+
% \item \verb+nslookup(1)+
%\end{itemize}
%
\end{document}
\documentclass{article}
\usepackage{listings}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[hyphens]{url}
\begin{document}
\textbf{Course name:} Physics \\
\textbf{Course code:} GT1 (FGA2.GT1-03) \\
\textbf{Academic year:} 2013-2014 \\
\textbf{Lecturer:} Giuseppe Maggiore \\
\textbf{Number of EC's:} 4
\section{Introduction}
In this document we describe the \textit{Physics} course.
The course aims at teaching how basic rigid body physics can be simulated in real-time on a modern computer.
The objective is for students to be able to: \textit{(i)} intuitively understand the basic equations of rigid body physics; \textit{(ii)} know how to translate those equations into a working physics engine; and \textit{(iii)} know the trade-offs that a physics engine must choose between.
At the end of the course the students will be able to program a simplified, robust, and general purpose physics engine.
The course will require \textit{multiple assignments}, roughly one after each lecture or two. The various assignments build towards a working physics engine. Each assignment requires handing in two deliverables: \textit{(i)} a working program, its source, its compiled executable, and a video of the program running; \textit{(ii)} a written discussion and description of the implementation. The program is built \footnote{In either C++ (recommended) or C\# (acceptable)} in groups of two to three students, while the discussion is handed in (and written) by each student individually.
\subsection{Relation to other courses}
The course is related to the preceding \textbf{Programming 1} through \textbf{4} courses, in that it requires students to be fluent in a structured, higher-level programming language such as C++. The course is also related to the \textbf{Mathematics 1} through \textbf{4} courses, in that it requires students to work with mathematical functions, derivatives, integrals, differential equations, numerical methods, trigonometry, and algebra.
\subsection{Relation to the industry}
In modern games, physics is not just an interesting addition; rather, it is a necessary component that is expected to work flawlessly.
In order to be able to deliver correct-looking physics simulations in games, a practitioner needs to understand the underlying theoretical framework of rigid body (Newtonian) physics. Moreover, a physics engine developer must be aware of the mainstream techniques for performing (and optimizing) collision detection, as well as collision resolution through impulse computation in the presence of (multiple, simultaneous) constraints.
Even in those cases when an existing, off-the-shelf physics engine is used, knowledge of the above topics is a requirement in order to be able to make an informed choice.
Moreover, the computation of external forces acting on the bodies of a physics engine (whether custom or existing) is an extremely important aspect that needs to be modelled separately. Depending on the game setting, different forces will be at play: friction, Earth-surface gravity, general gravity, the Magnus effect, tires on asphalt, wings in air, etc. Studying such forces is a prerequisite for being able to use a physics engine to build a domain-specific physical simulation.
\subsection{Competences}
The course relates to competences \textbf{P3.Game Engine}, and \textbf{P4.Development with Resource Constraints}.
\subsection{Course learning objectives}
The learning objectives of this course are the following, each marked with the corresponding verb of Bloom's Taxonomy:
\begin{itemize}
\item \textbf{Build} a basic kinematic simulator with RK2 or RK4. The simulator tracks the position, velocity, rotation, angular velocity, mass, and inertia tensor of arbitrary convex polytopes. \textbf{Understand} the difference in precision, stability, and performance of different integration methods. \textbf{Understand} numerical drift for rotation matrices and quaternions, and how to compensate for it through normalization.
\item \textbf{Build} a system for SAT computation of arbitrary convex polytopes. \textbf{Understand} the various kinds of contact manifold determination, and build at least one.
\item \textbf{Understand} how to reduce the number of expensive collision tests with broad-phase methods such as bounding spheres, bins, and axis-aligned bounding boxes (AABBs). \textbf{Build} AABB collision detection with the method of overlapping intervals. \textbf{Understand} how to reduce the number of SAT tests by exploiting the symmetry of a convex polytope.
\item \textbf{Understand} the Projected Gauss-Seidel method. \textbf{Build} an impulse-based collision response system for all pairs of objects simultaneously.
\item \textbf{Understand} forces from various domains of physics: gravity, friction, springs, bullets, and cars. \textbf{Build} at least one of those into the simulation.
\end{itemize}
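For reference, the unconstrained simulation step in the first objective amounts to numerically integrating the standard rigid body equations of motion (notation assumed here: $\mathbf{x}$ position, $\mathbf{v}$ linear velocity, $q$ orientation quaternion, $\omega$ angular velocity, $\mathbf{L}$ angular momentum, $m$ mass, $I$ inertia tensor, $\mathbf{F}$ and $\tau$ the applied force and torque), as treated in the Baraff course notes listed in the literature section:
\[
\dot{\mathbf{x}} = \mathbf{v}, \qquad
m\,\dot{\mathbf{v}} = \mathbf{F}, \qquad
\dot{q} = \frac{1}{2}\,\omega\,q, \qquad
\dot{\mathbf{L}} = \tau, \qquad
\omega = I^{-1}\mathbf{L}.
\]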
\section{Course structure}
\subsection{Number of hours}
2 hours of contact time per week, and approximately 92 hours of total study time.
\subsection{Attendance policy}
Attendance is not mandatory but students may miss valuable information if they do not attend the classes.
\subsection{Teaching method}
Traditional frontal lectures.
\section{Assessment \& deadlines}
The course is assessed through a series of partial assignments that together build up a physics engine, each divided into a \textit{group coding assignment} and an \textit{individual written report}.
All assessments must qualify for a full pass (6+) in order for the course to pass.
\subsection{Assignment: building a physics engine}
This assignment requires groups \textbf{of up to four} students to write a C++ or C\# physics engine. This assignment is not graded directly, but only indirectly through assignment 2.
The grading criteria will be:
\begin{itemize}
\item performance 30\%
\item general purpose structure 30\%
\item credibility/correctness of the physical simulation 30\%
\item quality of the description document 10\%
\end{itemize}
The partial assignments are:
\begin{itemize}
\item Build a basic kinematic simulator with RK2 or RK4 (20\%)
\item SAT/contact manifold computation (at least for OBBs, better for arbitrary meshes) (20\%)
\item Collision culling with bounding spheres, AABBs, and bins (20\%)
\item Collision response (20\%)
\item Forces for domain-specific scenarios (20\%)
\end{itemize}
The partial assignments are due, alternatively:
\begin{itemize}
\item the end of the week after presentation in class (bonus $\times 1.1$), printed, with CD, in the lecturer's pigeon-hole
\item at the end of the course, all together (no bonus); the deadline in this case is \textit{Friday of the exam week, at midnight}
\end{itemize}
\section{Materials}
\subsection{Literature}
The course will be based on:
\begin{itemize}
\item The book \textit{Game Physics - Second Edition}, by David Eberly
\item The book \textit{Physics for game programmers}, by Grant Palmer
\item The paper \textit{Iterative Dynamics with Temporal Coherence}, by Erin Catto
\item The tutorial \textit{Car physics for games}, by Marco Monster
\item The Siggraph '97 course notes \textit{An Introduction to Physically Based Modeling: Rigid Body Simulation I - Unconstrained Rigid Body Dynamics} by David Baraff
\end{itemize}
\subsection{Software packages}
The course will make use of Visual Studio 2010 or newer, with any graphics library associated such as DirectX, OpenGL, XNA, MonoGame, etc.
\section{Lecture topics}
The lectures will cover the following topics, approximately one per lecture:
\begin{itemize}
\item \textbf{Topic 1 - basic concepts from physics}: translational and rotational Newtonian physics, numerical integration, equations of motion for a system of bodies
\item \textbf{Topic 2 - narrow phase of collision detection}: separating axis, collision response
\item \textbf{Topic 3 - broad phase of collision detection}: axis aligned bounding boxes, bounding spheres, etc.
\item \textbf{Topic 4 - simultaneous resolution of multiple constraints}: constraints as a system of equations, the Gauss-Seidel method
\item \textbf{Topic 5 - force computation}: ballistic forces (Magnus effect, friction, gravity), car forces, plane forces, etc.
\item \textbf{Optional topic 1 - preprocessing of models for collision detection}: BSP for faster collision detection
\item \textbf{Optional topic 2 - preprocessing of generic models}: calculating the \textit{inertia tensor} of arbitrary polytopes
\end{itemize}
Topic 5 and the optional topics may not be entirely covered in the lectures. Precedence will be given to correct assimilation of the earlier topics, and the final schedule will also depend on the students' response to the topics.
\section{Conclusions}
In this document we described the \textit{Physics} course. The course focuses on understanding basic physics, and learning how to build a basic physics engine for games.
At the end of the course the students will be able to understand the structure of a physics engine, and will know how to build their own.
\end{document}
| {
"alphanum_fraction": 0.7939975725,
"avg_line_length": 61.6530612245,
"ext": "tex",
"hexsha": "284f14d4e0f3363f9c458cc43444e442efaf5f4a",
"lang": "TeX",
"max_forks_count": 2,
"max_forks_repo_forks_event_max_datetime": "2020-09-04T07:48:25.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-02-25T02:31:44.000Z",
"max_forks_repo_head_hexsha": "0dd4856b6361b34d472020f98b63d0803d9a92e3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "giuseppemag/Game-physics",
"max_forks_repo_path": "Course description GT1 NHTV/GT1-Prog Course Information.tex",
"max_issues_count": 2,
"max_issues_repo_head_hexsha": "0dd4856b6361b34d472020f98b63d0803d9a92e3",
"max_issues_repo_issues_event_max_datetime": "2015-08-16T10:05:47.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-08-16T10:05:36.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "giuseppemag/Game-physics",
"max_issues_repo_path": "Course description GT1 NHTV/GT1-Prog Course Information.tex",
"max_line_length": 584,
"max_stars_count": 25,
"max_stars_repo_head_hexsha": "0dd4856b6361b34d472020f98b63d0803d9a92e3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "giuseppemag/Game-physics",
"max_stars_repo_path": "Course description GT1 NHTV/GT1-Prog Course Information.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-21T04:08:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-10-02T23:38:10.000Z",
"num_tokens": 2066,
"size": 9063
} |
\par
\subsection{{\tt PDV} : {\tt double *} vector methods}
\label{subsection:Utilities:proto:PDV}
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
double ** PDVinit ( int n ) ;
\end{verbatim}
\index{PDVinit@{\tt PDVinit()}}
This is the allocator and initializer method for {\tt double*} vectors.
Storage for an array with size {\tt n} is found and each
entry is filled with {\tt NULL}.
A pointer to the array is returned.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PDVfree ( double **p_vec ) ;
\end{verbatim}
\index{PDVfree@{\tt PDVfree()}}
This method releases the storage taken by {\tt p\_vec[]}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PDVcopy ( int n, double *p_y[], double *p_x[] ) ;
\end{verbatim}
\index{PDVcopy@{\tt PDVcopy()}}
This method copies {\tt n} entries from {\tt p\_x[]} to {\tt p\_y[]},
i.e.,
{\tt p\_y[i] = p\_x[i]} for {\tt 0 <= i < n}.
%-----------------------------------------------------------------------
\item
\begin{verbatim}
void PDVsetup ( int n, int sizes[], double vec[], double *p_vec[] ) ;
\end{verbatim}
\index{PDVsetup@{\tt PDVsetup()}}
This method sets the entries of {\tt p\_vec[]} as pointers into {\tt
vec[]} given by the {\tt sizes[]} vector,
i.e.,
{\tt p\_vec[0] = vec}, and
{\tt p\_vec[i] = p\_vec[i-1] + sizes[i-1]}
for {\tt 0 < i < n}.
(A small illustration of this indexing scheme follows this list.)
%-----------------------------------------------------------------------
\end{enumerate}
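The indexing scheme established by {\tt PDVsetup()} can be illustrated outside of C; the following Python/numpy sketch (not part of the library) mimics the same offsets, with array views standing in for the {\tt double*} pointers.
\begin{verbatim}
# Illustration only (Python/numpy), not the C API: p_vec[0] = vec and
# p_vec[i] = p_vec[i-1] + sizes[i-1], so p_vec[i] addresses a block of
# length sizes[i] inside the flat vector vec[].
import numpy as np

sizes = [3, 2, 4]
vec = np.arange(float(sum(sizes)))                        # the flat vec[] array

offsets = np.concatenate(([0], np.cumsum(sizes[:-1])))    # 0, 3, 5
p_vec = [vec[o:o + s] for o, s in zip(offsets, sizes)]    # views, like the pointers

p_vec[1][0] = -1.0     # writing through the "pointer" modifies vec, as in C
print(vec[3])          # -> -1.0
\end{verbatim}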
| {
"alphanum_fraction": 0.4819571865,
"avg_line_length": 34.7872340426,
"ext": "tex",
"hexsha": "24be45956076a927c58b8720587527aea84e9088",
"lang": "TeX",
"max_forks_count": 1,
"max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z",
"max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "alleindrach/calculix-desktop",
"max_forks_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PDV.tex",
"max_issues_count": 4,
"max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "alleindrach/calculix-desktop",
"max_issues_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PDV.tex",
"max_line_length": 72,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "alleindrach/calculix-desktop",
"max_stars_repo_path": "ccx_prool/SPOOLES.2.2/Utilities/doc/PDV.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 414,
"size": 1635
} |
\subsection{Sort}
**** Asymmetric
Pages for:
+ Public keys
+ RSA
+ Message signing
+ PGP
+ Public keys to facilitate symmetric encryption
| {
"alphanum_fraction": 0.7304964539,
"avg_line_length": 11.75,
"ext": "tex",
"hexsha": "2f51cf543d123f5041c778eac0086b812f4c11d0",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "adamdboult/nodeHomePage",
"max_forks_repo_path": "src/pug/theory/statistics/encryptionAsymmetric/01-08-Sort.tex",
"max_issues_count": 6,
"max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "adamdboult/nodeHomePage",
"max_issues_repo_path": "src/pug/theory/statistics/encryptionAsymmetric/01-08-Sort.tex",
"max_line_length": 48,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "adamdboult/nodeHomePage",
"max_stars_repo_path": "src/pug/theory/statistics/encryptionAsymmetric/01-08-Sort.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 36,
"size": 141
} |
\documentclass[../../main.tex]{subfiles}
\begin{document}
\setcounter{footnote}{0}
\newcommand{\ade}{\emph{autodE }}
\newcommand{\lmethod}{\emph{lmethod}}
\newcommand{\hmethod}{\emph{hmethod}}
\newcommand{\lmethodx}{\emph{lmethod }}
\newcommand{\hmethodx}{\emph{hmethod }}
\NewDocumentCommand{\code}{v}{%
\texttt{\textcolor{black}{#1}}%
}
\section{Manuscript III}
\emph{autodE: Automated Calculation of Reaction Energy Profiles -- Application to Organic and Organometallic Reactions}
\subsection{Abstract}
Calculating reaction energy profiles to aid in mechanistic elucidation has long been the domain of the expert computational chemist. Here, we introduce \ade (\url{https://github.com/duartegroup/autodE}), an open-source Python package capable of locating transition states and minima and delivering a full reaction energy profile from 1D or 2D chemical representations. \ade is broadly applicable to study organic and organometallic reaction classes, including addition, substitution, elimination, migratory insertion, oxidative addition and reductive elimination; it accounts for conformational sampling of both minima and TSs, and is compatible with many electronic structure packages. The general applicability of \ade is demonstrated in complex multi-step reactions, including metal-catalysed cobalt- and rhodium-catalysed hydroformylation, and an Ireland-Claisen rearrangement.
\subsection{Introduction}
Automating the search for new chemical reactions is seen as one of the ‘grand challenges’ in computational chemistry.\cite{Grimme2018, Foscato2020} Discovering such thermodynamically and kinetically accessible reactions requires knowledge of the underlying potential energy surface (PES), which -- to a first approximation -- involves the characterization of reactants, products, intermediates, and transition states (TSs).\cite{Cheng2015, Sameera2012, Kozuch2011} While this has been routine for computational chemists for decades, locating TSs remains time consuming and a highly non-systematic endeavour.\cite{Simm2019} To this end, many automated TS-search algorithms have been developed which may be broadly classified into single and double-ended methods.\cite{Jensen2020, Dewyer2018} The former requires a single chemical structure from which TSs, intermediates, and products are located with no \emph{a priori} information. Representative examples include the ab initio nanoreactor (AINR),\cite{Wang2014} which employs high temperature and pressure molecular dynamics (MD) simulations to identify chemical transformations; the single-ended growing-string method (SE-GSM),\cite{Zimmerman2015} which iteratively generates structures along the PES and optimises them until a TS and minima at each side of the string are found; the artificial force induced reaction (AFIR) method,\cite{Maeda2016} which adds artificial external forces to the original PES of the target system, and the transition state search method which uses high-energy chemical dynamics simulations (CDS) and a geometry-based algorithm to identify reactive pathways.\cite{Martinez-Nunez2015} While they allow for free exploration of unknown pathways, these methods remain costly and limited to small systems ($\sim10$ reactant atoms) if left to explore all transformations.
\\\\
On the other hand, double-ended methods use knowledge of both reactants and products to generate TSs. Depending on the specific method either single or multiple steps can be studied. The latter includes reaction network enumeration algorithms, which allow for a systematic search of the reaction space, but hit an exponential wall of complexity if arbitrarily many intermediates are permitted.\cite{Kim2018, Habershon2016} Single-step methods, although seemingly limited in scope, remain a powerful and affordable approach to explore many chemical processes determined by elementary steps. This is the case, for example, when the goal is to elucidate the origin of regio- or enantioselectivity or compare different synthetically-plausible pathways for a given transformation. Several methods have been developed following this philosophy, from linear and quasi-synchronous transit\cite{Halgren1977, Peng1993} and related approaches\cite{Dewar1984, Elber1987} to more recent developments based on growing\cite{Dohm2020} and freezing string methods\cite{Suleimanov2015, Behn2011} (GSM/FSM, Figure \ref{fig::ade_1}). Finally, heuristic approaches guided by either graph-based chemical rules or electronic-structure theory have been employed to explore a wide range of chemical reactions; including the Reaction Mechanism Generator,\cite{Gao2016} which follows approaches pioneered by NetGen,\cite{Broadbelt1994} and other reaction network approaches.\cite{Rappoport2019, Bergeler2015}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=7.5cm]{5/autode/figs/fig1}
\vspace{0.4cm}
\hrule
\caption{Comparison of approaches to generate reaction profiles.}
\label{fig::ade_1}
\end{figure}
Here, we introduce \emph{autodE}, which combines elements of previously reported double-ended methods with the development of a general, easy to use, and freely available package to automate the characterization of reaction pathways. The \ade algorithm takes inspiration from related methods, such as AutoTS\cite{Jacobson2017} developed by Schr\"{o}dinger\texttrademark $\,$ and the Reaction Mechanism Generator developed by Green et al.,\cite{Gao2016} and aims to overcome current limitations in applicability, conformational sampling and accessibility; specifically (1) it provides a broadly applicable framework to study organic and organometallic reaction classes; (2) it accounts for conformational sampling of both minima and transition states, which is essential particularly when exploring flexible systems; (3) it is freely available (open-source Python package distributed under an MIT license) and requires minimal user input and expertise. Moreover, it is compatible with several electronic structure theory packages and extensible to others. This work describes the algorithm and implementation of \ade and demonstrates its capabilities in a range of representative organic and organometallic reactions. We demonstrate that \ade is capable of locating the different TSs and minima along the PES and delivering a full reaction energy profile using only SMILES representations as inputs, available from most 2D chemical sketching software. To illustrate the functionality and general applicability of \emph{autodE}, we apply it to a range of reaction classes, including complex organic and metal-catalysed reactions.
\subsection{Methodology}
In general, human-guided characterization of TSs and reaction pathways requires: (1) locating reactants and products; (2) locating the TS, usually starting from a guess TS structure generated by chemical intuition; (3*) once reactants, TSs, and products have been characterized, performing a conformational search to identify the lowest energy conformer in each case; and (4*) performing single point energy evaluations to refine energies. While the starred steps are not always carried out, they are usually necessary to reach meaningful conclusions about the reactivity and selectivity of a given reaction step. Our method follows this workflow, which is described in detail in the following sections, along with representative examples (Figure \ref{fig::ade_2}).
\begin{sidewaysfigure}
\vspace{0.2cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/fig2}
\vspace{0.2cm}
\hrule
\caption{Diagrammatic workflow used by \ade to generate a reaction profile from a set of reactant(s) and product(s) SMILES strings. TSA = transition state analogue, where active bonds are constrained at the distances found for the TS located in the previous step.}
\label{fig::ade_2}
\end{sidewaysfigure}
\newpage
With a view to generating reaction profiles efficiently using currently available quantum mechanics methods, our implementation makes use of two levels of theory: a lower-level (\lmethod) and a higher-level (\hmethod) method. Both methods have default settings (Table \ref{table::ade_1}), selected for their transferability, which can be modified by the user depending on the methods available in the respective software. For example, DFT methods can be used as the \lmethodx for optimizations, and wavefunction (WF) methods for \hmethodx single point energy evaluations.
\begin{table}[h!]
\def\arraystretch{2.0}
\begin{tabularx}{\textwidth}{YYY}
\hline
Software Package & Calculation & Default Method \\
\hline
 & low\_opt & PBE-D3BJ/def2-SVP \\
ORCA, Gaussian09, NWChem$^a$ & opt, optts, hess & PBE0-D3BJ/def2-SVP \\
 & sp & PBE0-D3BJ/def2-TZVP \\
MOPAC & low\_opt, opt, sp & PM7 \\
XTB & low\_opt, opt, sp & GFN2-XTB \\
\end{tabularx}
\hrule
\caption{Default methods used in \ade calculations. sp = single point energy, opt = optimization, low\_opt = low-level optimization, optts = transition state optimization, hess = Hessian. $^a$ Uses D3 rather than D3BJ dispersion.}
\label{table::ade_1}
\end{table}
\ade is readily extensible to many electronic structure theory codes and force fields in addition to those with currently implemented wrappers (Figure \ref{fig::ade_si_1a}). In the following paragraphs we describe the implementation and key features of \emph{autodE}.
{\bfseries{Finding Lowest Energy Conformers for Reactants and Products}}. Locating the global minimum, or a structure close to it, on a PES is challenging but essential in estimating the feasibility of a reaction.\cite{Chan2019} Characterizing the lowest energy conformer at a given level of theory requires transforming the 1D SMILES representation into a 3D geometry and searching the conformational space. Several open-source tools are available, including RDKit\cite{Landrum2019} and CONFAB\cite{OBoyle2011} implemented in OpenBabel, but these are generally limited to organic species. \ade uses the ETKDGv2\cite{Riniker2015} algorithm implemented in RDKit, which is known to perform well for organic species, along with its SMILES parser.\cite{Ebejer2012} To determine optimal parameters for the number of conformers generated and the root mean squared displacement (RMSD) threshold used to exclude identical conformers, a set of 20 amino acids was considered. Using CREST\cite{Pracht2020} as the benchmark method, generating 600 conformers with an RMSD threshold of 0.1~\AA{} was sufficient to obtain reliable minima, yielding a conformer within 1 \kcalx of the most stable CREST conformer in 90\% of cases (18/20, Figure \ref{fig::ade_si_2}). Therefore, this inexpensive conformer search was kept as the default for organic molecules and allows \ade to remain method agnostic (i.e. the \lmethod/\hmethodx can be chosen by the user).
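As an illustration of this step, the following sketch embeds conformers with RDKit's ETKDGv2 algorithm using the settings quoted above (600 conformers, 0.1 \AA{} RMSD pruning threshold); it is a simplified stand-in for the internal \ade routine, and the calls follow the public RDKit interface.
\begin{verbatim}
# Sketch of ETKDGv2 conformer embedding with the settings quoted above;
# a simplified stand-in for the internal autodE routine.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.AddHs(Chem.MolFromSmiles('NC(CO)C(=O)O'))   # serine, one of the test amino acids

params = AllChem.ETKDGv2()
params.pruneRmsThresh = 0.1          # RMSD threshold (angstrom) to drop near-duplicates
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=600, params=params)
print(f'{len(conf_ids)} conformers embedded')   # each would then be lmethod-optimised
\end{verbatim}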
For metal complexes, where OpenBabel and RDKit fail to interpret the SMILES string and/or generate a sensible 3D geometry, we use our own (Open)SMILES parser and a simple repulsion + bonded (RB) forcefield (Eqn. \eqref{equation::ade_1}), generating structures by randomizing the atom positions and then minimizing the function (Figures \ref{fig::ade_si_3}--\ref{fig::ade_si_5a}):
\begin{equation}
U_\text{RB}(\boldsymbol{x}) = \sum_{ij}^\text{bonds} k_1 (r_{ij} - r_{ij}^\text{avg})^2 + \sum_{i > j} \frac{k_2}{r_{ij}^n}
\label{equation::ade_1}
\end{equation}
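A direct transcription of Eqn. \eqref{equation::ade_1} is sketched below; the parameter values ($k_1 = 1$, $k_2 = 0.01$) are those quoted in §\ref{section::ade_si_metal_complex_confs}, and in practice the function is minimized (e.g. with a conjugate gradient optimiser) rather than merely evaluated.
\begin{verbatim}
# Sketch of the repulsion + bonded (RB) energy defined above; coordinates in angstrom.
import numpy as np

def u_rb(x, bonds, r_avg, k1=1.0, k2=0.01, n=8):
    """x: (N, 3) coordinates; bonds: (i, j) pairs; r_avg: ideal length per bond."""
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)          # pairwise distances
    bonded = sum(k1 * (r[i, j] - r0)**2 for (i, j), r0 in zip(bonds, r_avg))
    iu = np.triu_indices(len(x), k=1)                                  # each i > j pair once
    repulsive = np.sum(k2 / r[iu]**n)
    return bonded + repulsive

# e.g. scipy.optimize.minimize(lambda flat: u_rb(flat.reshape(-1, 3), bonds, r_avg),
#                              x0.flatten(), method='CG') gives a minimised structure.
\end{verbatim}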
For both organic molecules and metal complexes, the lowest energy conformer is found by optimizing each conformer at the \lmethodx and excluding non-unique structures based first on an RMSD threshold (default 0.3 \AA), then on an energy threshold (default 0.2 \kcal). The remaining structures are finally optimised at the desired low\_opt level with the \hmethod, and the lowest energy structure is kept (Figure \ref{fig::ade_si_6a}).
{\bfseries{TS Location}}. Within \ade, each species has an associated molecular graph, in which the atoms correspond to vertices (V) and ‘bonds’ to edges (E). For a discussion of how bonds are defined, see the SI (Figure \ref{fig::ade_si_7}). From the set of reactant graphs $\{R_i\}$, the reactant graph ($R$) is formed as the graph union, and likewise with products $\{P_i\}$ to generate ($P$). This is represented in Figure \ref{fig::ade_3} for a Claisen rearrangement, where the graphs $R$ and $P$ are formed from a single reactant and product molecule. If $R$ and $P$ are isomorphic, as in an identity reaction, then the chemical transformation of interest is not obvious from the graph transformation and an exception is raised. Once this has been checked, to find an atom mapping (bijection) from $R$ to $P$, we generate a set of transformations of $R$, $\{R'\}$, obtained by adding and/or removing edges. This leads to a set of functions $\{g\}$ that represents all the possible bond rearrangements $[g: E(R) \rightarrow E(R')]$.
The atom mapping(s) [$f: V(R') \rightarrow V(P)$] are then found where $R'$ and $P$ are isomorphic using NetworkX.\cite{NetworkX} A brute force search for $\{g\}$, obtained by enumerating over all possible edge additions/deletions, becomes intractable quickly, as the number of operations grows as $2^N$ for a system with $N$ atoms. However, chemical reactions usually involve a limited number of bonds that are formed and broken in a single elementary step, thereby substantially reducing the search space. We therefore limit the transformation to a maximum of four ‘active’ bonds where two edges are added and two removed, denoted (2, 2). For a graph $X$ with $b$ bonds ($b_X = |E(X)|$), the principle of microscopic reversibility is used to switch reactants and products if $b_R < b_P$.
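The graph operations described here reduce to standard NetworkX calls; the sketch below applies a single (1, 1) rearrangement of the kind shown in Figure \ref{fig::ade_3} to a heavy-atom-only graph of allyl vinyl ether and recovers an atom mapping. The atom indices and the single rearrangement are illustrative, not the exhaustive enumeration performed by \emph{autodE}.
\begin{verbatim}
# Sketch: apply one (1, 1) bond rearrangement to the reactant graph of allyl vinyl
# ether (heavy atoms only) and find an atom mapping to the product graph.
import networkx as nx
from networkx.algorithms import isomorphism

def graph(elements, edges):
    g = nx.Graph()
    g.add_nodes_from((i, {'element': e}) for i, e in enumerate(elements))
    g.add_edges_from(edges)
    return g

elements = ['C', 'C', 'O', 'C', 'C', 'C']                       # C0=C1-O2-C3-C4=C5
R = graph(elements, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])   # allyl vinyl ether
P = graph(elements, [(0, 1), (1, 2), (3, 4), (4, 5), (5, 0)])   # 4-pentenal (connectivity only)

R_prime = R.copy()
R_prime.remove_edge(2, 3)       # break the O-C bond ...
R_prime.add_edge(5, 0)          # ... and form the new C-C bond: one (1, 1) rearrangement g

matcher = isomorphism.GraphMatcher(
    R_prime, P, node_match=isomorphism.categorical_node_match('element', 'C'))
print(matcher.is_isomorphic())  # True: this g links reactant and product graphs
print(matcher.mapping)          # an atom mapping f: V(R') -> V(P)
\end{verbatim}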
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=11cm]{5/autode/figs/fig3}
\vspace{0.4cm}
\hrule
\caption{Atom mappings between reactants and products for a Claisen rearrangement.}
\label{fig::ade_3}
\end{figure}
\newpage
From these constraints, five scenarios exist:
\begin{enumerate}[label=\Roman*.]
\item (1, 1) – substitution reactions e.g., S${}_\text{N}$2.
\item (2, 2) – substitution reactions e.g., epoxidation with peroxide.
\item (0, 1) – dissociations e.g., E1 eliminations.
\item (0, 2) – dissociations e.g., reverse of Diels-Alder cycloaddition.
\item (1, 2) – eliminations e.g., E2 eliminations.
\end{enumerate}
Defining the change in the number of bonds as $\Delta b = b_R - b_P$, as in refs. \cite{Jacobson2017,Crabtree2009}, only when $\Delta b = 0$ does the full set of deleting and adding two edges need to be enumerated. Further acceleration is achieved by calculating $\{\Delta b_k\}$, where $k$ is a bond type (e.g. C--C), and ensuring any atom ($a$) does not exceed a maximal valence (e.g., $d(a) \le 4$ for a carbon atom). For example, in a Claisen rearrangement $\Delta b_\text{C--C} = 1$ and $\Delta b_\text{C--O} = -1$, such that the enumeration over (1, 1) transformations is targeted to only C--C and C--O bonds (Figure \ref{fig::ade_3}). Once the set of valid $\{g\}$ is found, a TS is located for each of them (if it exists), and following conformational sampling, the lowest energy TS is taken for the calculation of a reaction profile. For the Claisen reaction shown in Figure \ref{fig::ade_3}, only one bond rearrangement is obtained, while for the all-carbon analogue (O $\rightarrow$ CH$_2$) two rearrangements are found ($g = \{g_1, g_2\}$). While reasonably exhaustive, we are yet to find a reaction where the process of generating/checking $R'$ and $P$ isomorphisms and finding the mapping is comparable to, or more demanding than, a DFT calculation.
To generate a saddle point, while ensuring compatibility with multiple electronic structure codes, we favour finding a TS guess by performing a set of constrained geometry optimizations along a linear path between reactants and products. In general, \ade attempts 1D constrained searches over forming/breaking bonds, except when $\Delta b$ = 2 (e.g., Diels-Alder). If this procedure fails and the number of active bonds is more than one, a 2D PES exploration is performed by constrained optimizations and fitting a polynomial up to order 3.\cite{SciPy} From a 2D PES, the lowest energy saddle point is found by locating where the first derivative vanishes, and then using Dijkstra's algorithm\cite{Dijkstra1959} implemented in NetworkX.\cite{NetworkX} A constrained optimization at the saddle point is then performed to generate the TS guess. Once the TS guess is obtained, it is optimised with the TS optimisers implemented in the electronic structure theory package selected by the user. ‘Goodness’ of the TS is defined by a significant imaginary mode ($|\nu_\text{imag}| > 45$ cm$^{-1}$) with a significant contribution from the active bonds, or by forward/backward displacement along the mode affording graphs isomorphic to reactants/products (quick reaction coordinate calculation,\cite{Goodman2003} see the description of the \code{true_ts()} function in §\ref{section::ade_si_algorithm} for further details). In cases where the reactant or product(s) are not isomorphic to $R$ or $P$ (e.g. barrierless proton transfers), rigid-body conformers of the reactant and product complexes are generated (see SI §\ref{section::ade_si_nci_complexes}) and isomorphisms of the forwards/backwards displaced structures to all optimised conformers are checked. We envisage implementing nudged elastic band (NEB) approaches to accelerate the computationally intensive 2D PES scans, using an initial linear path from the reactant (complex) in which bonds are made/broken to drive towards a product state.
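As a sketch of the surface-fitting step (omitting the Dijkstra-based path analysis and all electronic structure calls), the snippet below fits a third-order polynomial to constrained-scan energies $E(r_1, r_2)$ and returns a stationary point with exactly one negative Hessian eigenvalue as a TS guess; the function names are illustrative.
\begin{verbatim}
# Sketch: fit E(r1, r2) from a constrained 2D scan to a polynomial of order 3 and
# locate a first-order saddle point (one negative Hessian eigenvalue) as a TS guess.
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.optimize import fsolve

def fit_surface(r1, r2, e, order=3):
    """Least-squares fit of e to sum_{i+j<=order} c[i, j] * r1^i * r2^j."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1) if i + j <= order]
    A = np.column_stack([r1**i * r2**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, e, rcond=None)
    c = np.zeros((order + 1, order + 1))
    for (i, j), cij in zip(terms, coeffs):
        c[i, j] = cij
    return c

def ts_guess(c, x0):
    """Stationary point of the fitted surface that is a first-order saddle, or None."""
    grad = lambda x: [P.polyval2d(x[0], x[1], P.polyder(c, axis=0)),
                      P.polyval2d(x[0], x[1], P.polyder(c, axis=1))]
    xs = fsolve(grad, x0)
    hess = np.array([[P.polyval2d(xs[0], xs[1], P.polyder(P.polyder(c, axis=a), axis=b))
                      for b in (0, 1)] for a in (0, 1)])
    return xs if (np.linalg.eigvalsh(hess) < 0).sum() == 1 else None
\end{verbatim}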
{\textbf{Truncation}}. Performing tens of constrained optimizations along one or two coordinates in a large system is currently computationally demanding. To accelerate the generation of a TS guess in such systems, the system is truncated. Our implementation truncates over C--X bonds where the carbon atom is fully saturated and is at least two bonds away from the active atoms, and the truncated group is replaced by a hydrogen atom (Figure \ref{fig::ade_si_8}). The TS is then located using this truncated complex and saved in the template library from which the full non-truncated TS may be found using the templating method described below. For truncation to be utilised it must remove $>10$ atoms.
{\textbf{Finding Lowest Energy TS Conformers}}. TS conformers are located in a similar manner to the protocol described for metal complexes in §\ref{section::ade_si_metal_complex_confs} ($n = 8$ in Eqn. \eqref{equation::ade_1} and randomization using a displacement vector of $\sim 3$ \AA). However, here the ‘active’ bonds are constrained to the distances found at the TS using a harmonic potential with a force constant $k_1' = 10k_1$ (Eqn. \eqref{equation::ade_2}).
\begin{equation}
U_\text{RB'}(\boldsymbol{x}) = \sum_{ij \in \text{bonds}} k_1 (r_{ij} - r_{ij}^\text{avg})^2 + \sum_{ij \in \text{const.}} k_1' (r_{ij} - r_{ij}^\text{avg})^2 + \sum_{i > j} \frac{k_2}{r_{ij}^8}
\label{equation::ade_2}
\end{equation}
An identical method is then used to find the lowest energy transition state analogue (TSA) -- as a model for the TS -- by performing optimizations with the active bonds constrained to their length in the TS. From the TSA a TS is found by rerunning the transition state optimization and checking that it is both a ‘good’ TS and is lower in energy than the previously found TS.
{\bfseries{Transition State Templates}}. To accelerate the location of TSs, if available, a template is employed to generate a TS guess structure. Templates are saved into a local library following the successful location of a TS and contain a graph, solvent, charge and multiplicity. For a template to be used, the reactant (complex) must have the same charge, multiplicity, and solvent parameters as the one used to obtain the template. In addition, the active bonds must match (same number and atoms) along with their nearest neighbours, based on their connectivity and atom type. If a matching template is found in the library, then a TS guess is generated by constrained optimization, where the active bonds are fixed at values found in the saved TS, from which TS optimization is performed. For example, a TS found for EtCl + F$^{-}$ $\rightarrow$ EtF + Cl$^{-}$ enables a $10\times$ faster characterization of the TS for the propyl analogue.
{\bfseries{Reactive Reactant Complexes}}. For bimolecular reactions (substitution and elimination, not addition as $b_P > b_R$) an initial association complex on the PES must be found that favours a linear arrangement of the nucleophile and the leaving group, while also reducing the required number of optimizations. For this reason, the energy function $U_\text{RR}$ (Eqn. \eqref{equation::ade_3}) is minimized with respect to rigid-body rotation and translation of one reactive component with empirical parameters $c_1 = c_2 = c_3 = 1, c_4 = 10$ and $c_5 = 2.5 (1.5)$ for charged (uncharged) complexes.
\begin{equation}
U_\text{RR}(\boldsymbol{x}_i) = \sum_\text{\{ac\}} c_1 (r_\text{ac} - c_5 r_\text{ac}^\text{avg})^2 + \sum_\text{\{acx\}} c_2 (1 - \cos\theta) + \sum_\text{\{acx\}} c_3 (1 - \cos\phi) + \sum_\text{\{ij\}} \frac{c_4}{r_\text{ij}^4}
\label{equation::ade_3}
\end{equation}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=8cm]{5/autode/figs/fig4}
\vspace{0.3cm}
\hrule
\caption{(a) Illustration of the geometric parameters used in Eqn. \eqref{equation::ade_3}, using the S$_\text{N}$2 reaction NH$_3$ + CH$_3$Cl as an example: $\boldsymbol{v}_{an}$ is the average of the vectors $\{\boldsymbol{v}_{am}\}$, where $m$ is a nearest neighbour of $a$. (b) Minimization of $U_\text{RR}$ for an E2 reaction.}
\label{fig::ade_4}
\end{figure}
This energy function has been designed to maximize the collinearity of vectors $\boldsymbol{v}_{an}$, $\boldsymbol{v}_{ac}$ and $\boldsymbol{v}_{cx}$ while maintaining both a low steric repulsion and distance ($r_{ac} = |\boldsymbol{v}_{ac}|$) close to that desired (Figure \ref{fig::ade_4}). This choice is guided by chemical intuition, and there are rare cases where this is not adhered to (e.g., front-side S$_\text{N}$2,\cite{Hamlin2018} which is found upon TS conformer searching). For example, adequate initial geometries are obtained for S${}_\text{N}2$ substitution, E2 elimination and S$_\text{N}$Ar reactions (Figure \ref{fig::ade_4}, \ref{fig::ade_si_9}). Note that there may be several minima on this surface; thus, multiple initial random rotations and translations ($\sim 10$) are performed to have a better chance of locating the global minimum while remaining computationally inexpensive (execution time $\sim$ seconds). Non-reactive association complex conformers are also available from optimization of random molecular orientations (SI §\ref{section::ade_si_nci_complexes}).
\subsection{Results and Discussion}
To demonstrate the applicability of \ade in multiple reaction classes, we explored some textbook organic reactions (e.g., S$_\text{N}$2, E2, etc.) alongside industrially relevant organic and metal-catalysed reactions involving more than 50 atoms. By way of example, we demonstrate that \ade is broadly applicable, efficient in exploring conformational space, and straightforward to use.
Even with only a small amount of programming experience, using \ade is as simple as declaring the reactants and products involved in the reaction from their respective SMILES strings, then calling a method. An input for calculating an S$_\text{N}$2 reaction profile in an implicit aqueous environment is shown in Figure \ref{fig::ade_5}. Here, by calling the method \code{calculate_reaction_profile} with complexes turned on, the association complexes are also calculated along the energy profile. Alternatively, one can choose to obtain only the TS, with the function \code{locate_transition_state}.
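For reference, a minimal input of the kind shown in Figure \ref{fig::ade_5} is sketched below, following the public \ade interface; the exact keyword used to include association complexes may differ between versions.
\begin{verbatim}
# Minimal autodE input for an SN2 reaction profile in implicit water; keyword
# names (e.g. with_complexes) follow the documentation and may vary by version.
import autode as ade

ade.Config.n_cores = 8          # cores for the underlying QM calculations

rxn = ade.Reaction(ade.Reactant(smiles='CCl'), ade.Reactant(smiles='[F-]'),
                   ade.Product(smiles='CF'), ade.Product(smiles='[Cl-]'),
                   name='sn2', solvent_name='water')

rxn.calculate_reaction_profile(with_complexes=True)   # include association complexes
# rxn.locate_transition_state()                       # alternative: only locate the TS
\end{verbatim}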
Without specifying which electronic structure codes to use, \ade employs the first available \hmethodx and \lmethodx (tried sequentially: ORCA, Gaussian09, NWChem for the former and XTB, MOPAC for the latter), with the default DFT methodologies listed in Table \ref{table::ade_1}. Using this setup, the reaction energy profile shown in Figure \ref{fig::ade_5} is obtained in about 30 minutes. While it would be ideal to obtain reaction free energies, the cost and limitations associated with the calculation of entropy and thermal contributions using currently available models (ideal gas) mean that the addition of such corrections should be considered with care. For this reason, potential energies are the default in \ade, but free energies are available with the \code{free_energy=True} flag. Further algorithmic details are outlined in the SI (§\ref{section::ade_si_algorithm}).
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=10cm]{5/autode/figs/fig5}
\vspace{0.4cm}
\hrule
\caption{Reaction energy profile for an S$_\text{N}$2 reaction calculated using \ade with GFN2-XTB and DFT as the \lmethod/\hmethod, in XTB/ORCA. Final energies are quoted at the CPCM(H$_2$O)-PBE0-D3BJ/def2-TZVP//CPCM(H$_2$O)-PBE0-D3BJ/def2-SVP level of theory and distances are quoted in \AA.}
\label{fig::ade_5}
\end{figure}
{\bfseries{TS Conformers}}. A thorough exploration of the conformational space in order to find the lowest energy conformer for reactants and TSs is essential in characterising the kinetic and thermodynamic parameters of a reaction. \ade provides two routes to locating TS conformers. The first uses reactant/product complex conformers, from which different TSs may be traversed, and the second locates TS conformers directly. If using templates, both approaches are efficient, generally requiring only one constrained optimisation and one TS optimisation per conformer once a TS has been found. Direct TS conformer searching is, however, faster, as only the TSAs are optimised and the lowest energy one taken, whereas full enumeration from reactant/product complexes can require rescanning the PES.
For example, for an E2 elimination reaction, the initial search may locate a transition state in which the deprotonation occurs with the proton and the leaving group in a \emph{syn} conformation, rather than the favoured \emph{anti} conformer. The latter TS is automatically found in \ade by randomizing and minimizing Eqn. \eqref{equation::ade_2} (Figure \ref{fig::ade_6}a). Similarly, the \ade algorithm correctly locates the lowest energy H-migration TS for the radical decomposition of 1-propanol, with an RMSD $< 0.1$ \AA$\;$compared to the human-generated TS from ref. \cite{Ferro-Costas2018} (Figure \ref{fig::ade_si_13}). The importance of this unbiased TS conformer generation is highlighted in the Michael addition of nitromethyl acetate and methyl vinyl ketone, where several rotatable bonds exist (Figure \ref{fig::ade_6}b). For this reaction, an exhaustive search from product conformers generated 21 TS conformers, which upon optimization, led to a range in activation energies of more than 5 \kcalx and reaction energies that differ by up to 17 \kcal. The weak correlation between $\Delta E^\ddagger$ and $\Delta E_r$ from the global reactant conformer to each TS-product conformer pair highlights the importance of a systematic conformer search at both minima and transition states.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=10cm]{5/autode/figs/fig6}
\vspace{0.4cm}
\hrule
\caption{(a) Lowest energy transition state conformers found with \ade for methoxide + chloroethane E2 elimination. (b) TS and product conformer distributions for the addition reaction between nitromethyl acetate and methyl vinyl ketone. Calculations performed at the CPCM(H$_2$O)-PBE0-D3BJ/ma-def2-SVP level of theory and distances quoted in \AA.}
\label{fig::ade_6}
\end{figure}
{\bfseries{Organic Reaction}}. For \ade to become routinely used in mechanistic investigations, it must be applicable to synthetically relevant, and usually more complex, reactions. Here, we considered the Ireland--Claisen rearrangement explored computationally via trial-and-error and the AFIR method.\cite{Lee2019} With known intermediates, processed as SMILES strings from Chemdraw\texttrademark, \ade was tested and compared to the human-guided approach, which originally involved the testing of various conceivable mechanisms. \ade delivered an essentially identical reaction profile and reduced the time from a year of reported human and compute effort (which included searching other possible pathways) to a few minutes of setup time and $\sim$600 CPUh (one day of computer time on 24 cores, Figure \ref{fig::ade_7}). Interestingly, \ade deviates by $< 2$ \kcalx from the trial-and-error-based search (blue), with INT3 showing a larger difference of 5 \kcal, due to \ade finding a more stable intermediate than the one located previously.
This example demonstrates that a combination of chemical knowledge, required to hypothesize reasonable mechanisms, and the use of \emph{autodE}, can substantially speed up joint experimental and computational efforts to elucidate complex reaction mechanisms, and in this way advance the optimization of synthetic routes and the design of novel catalysts.
\begin{sidewaysfigure}
\vspace{0.2cm}
\centering
\includegraphics[width=0.75\textwidth]{5/autode/figs/fig7}
\vspace{0.2cm}
\hrule
\caption{Ireland--Claisen rearrangement calculated using \ade (black) and described in ref. \cite{Lee2019} (blue) calculated at B3LYP-D3BJ/6-311++G(2d,2p)//B3LYP-D3BJ/6-31G(d). CPCM(hexane) solvent model used in ORCA (this work) and IEF-PCM(hexane) in Gaussian09 (from ref. \cite{Lee2019}). 3D Structures of the most stable intermediates and transition states are shown in Figure \ref{fig::ade_si_14}.}
\label{fig::ade_7}
\end{sidewaysfigure}
{\bfseries{Organometallic Reaction}}. The conversion of alkenes into aldehydes via addition of CO and H$_2$ is an exceptionally important industrial process, making catalyst optimization the subject of considerable study.\cite{Franke2012} The general mechanism was discovered by Heck and Breslow\cite{Heck1961} and has been the subject of numerous computational studies (for an overview of these studies see ref. \cite{Kegl2015} and references cited therein). Those works have provided significant insights into the mechanism of the reaction, and highlighted the challenges associated with it. In fact, only finding the intermediates and transition states along the PES is already a laborious process, requiring extensive sampling, due to the presence of several conformers and isomeric forms (Figure \ref{fig::ade_8}a).
Applying \ade to study this catalytic cycle, using only the SMILES strings of the catalyst, ethylene, intermediates and products, the full reaction profile was obtained in less than 26 hours (16 CPU cores, excluding electronically barrierless ligand association steps, Figure \ref{fig::ade_8}b). \ade correctly identifies the most stable isomer of all intermediates; for example, the axial isomer for strong $\sigma$-donors (H: INT1, alkyl: INT3, acyl: INT5). TSs for all steps in the cycle are located successfully, i.e. they contain a single imaginary frequency and have geometries very similar to those found for analogous phosphine-based catalysts (Figure \ref{fig::ade_si_15}).\cite{Decker2001}
\begin{sidewaysfigure}
\vspace{0.2cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/fig8}
\vspace{0.2cm}
\hrule
\caption{(a) Heck-Breslow mechanism of hydroformylation in the absence of a phosphine catalyst. (b) Reaction profile calculated at PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP in autodE.}
\label{fig::ade_8}
\end{sidewaysfigure}
The same process was applied to the analogous Co-catalysed hydroformylation (Figure \ref{fig::ade_si_16}), which has been used as a representative example in other automated reaction generation algorithms.\cite{Kim2018, Habershon2016, Maeda2012} Once again, all TSs were successfully located with minutes of setup time.
With a view to expand the use of \ade for catalyst design campaigns, and following previous efforts in this field,\cite{Guan2018} we have also developed the \emph{molfunc} tool \\
({\url{https://github.com/duartegroup/molfunc}}). Combining \ade with \emph{molfunc} enables rapid screening of different catalysts by functionalising hydrogen or other monovalent atomic positions. For example, for the H-insertion step, \emph{molfunc} facilitates automated exploration of the effect of different groups at the para position of triphenylphosphine on the barrier (Figure \ref{fig::ade_si_17}). The positive correlation obtained between increasing electron-withdrawing ability of the catalyst and the rate agrees with experimental observation.\cite{Kegl2015}
{\bfseries{Further Examples}}. To demonstrate the generality of our method, we tested \ade on a set of 15 simple organic reactions of different classes. \ade finds all TSs in all cases and generates reaction profiles in a matter of hours (Figure \ref{fig::ade_si_18a}). Further examples include the metal-catalysed carbonylation of methanol to generate acetic acid via the Monsanto and Cativa processes (see full SI),\cite{Jones2000} alkaline ester hydrolysis (Figure \ref{fig::ade_si_9}), a Houk-List TS (Figure \ref{fig::ade_si_23}),\cite{Armstrong2014} and 57 carbene insertion reactions reported in ref. \cite{Mieusset2008} (see full SI). In this latter case \ade correctly locates 55/57 insertion TSs, with the two failures occurring when the QRC + graph isomorphism check fails to confirm the TS as linking reactants and products, despite the TS being correct. Moreover, TSs for synthetically relevant reactions, including a key Diels-Alder step in a total synthesis of Brevianamide A investigated by Domingo et al.\cite{Domingo1997} and a diastereoselective epoxidation,\cite{Schneebeli2009} are presented in the full SI.\footnote{Examples presented in the full SI were performed by Joseph J. Silcock.}
{\bfseries{Limitations}}. The graph-based approach carries two immediate limitations: the need to define bonds, and the checking of stereochemistry. While bonds are generally well-defined in organic systems, they are not a rigid concept. This is partially alleviated by not regenerating the molecular graph from the 3D structure and instead retaining the SMILES-defined connectivity, but it may afford an incorrect assignment of a ‘good’ TS that does not actually lead to products/reactants. Furthermore, the isomorphism condition on reactants/products does not currently include a stereochemistry check; the algorithm simply finds the lowest energy TS. While the stereochemical outcome is often defined by the reaction type (e.g., stereochemical inversion in S$_\text{N}$2 is generated by traversing the lowest energy TS), there are reactions where this does not hold. Furthermore, enforcing a maximum connectivity change of four (up to two bonds breaking and two forming) means that, for example, the TS of a synchronous Fritsch--Buttenberg--Wiechell rearrangement mechanism cannot be found. In this case, human intervention is necessary to prevent the very slow enumeration of $\sim 10$ 2D PES scans. Finally, we note a limitation in the current \lmethodx electronic structure theory methods used to generate conformers of reactant and product complexes. Reactions involving anions are particularly problematic, as at the GFN2-XTB level they are generally not sufficiently stable (even in implicit solvent), such that the reaction may be barrierless at the \lmethodx level. We hope that as semi-empirical/TB electronic structure methods become more accurate and/or DFT and WF methods become faster, this limitation will be mitigated. Because \ade does not rely on one specific method, implementing new methods should be straightforward.
{\bfseries{Further Development}}. There are several opportunities to increase the accuracy and efficiency of \emph{autodE}, including (1) addition of explicit solvation; (2) inclusion of an online and open-source transition state library; (3) treatment of dynamical effects; and (4) enhanced error correction. Initial progress has been made on explicit solvation using a simple QM/MM embedding scheme, but it is currently too computationally demanding to routinely replace implicit solvation models. Future developments of the code will focus on these aspects to approach a fully automated protocol for predicting reaction mechanisms with chemical accuracy.
\subsection{Conclusion}
Converting a 2D representation of a reaction to a reaction profile has long been the domain of expert computational chemists. Here we have shown that our method, \emph{autodE}, can largely automate this process and present a selection of both simple and synthetically interesting organic and organometallic reactions. By building a flexible framework in a way that does not rely on a particular electronic structure theory method, future methods for calculating points on the potential energy surface are easy to include. Indeed, the dominant source of failure of the methodology is due to inaccuracies in electronic structure methods rather than issues associated with \emph{autodE}. Crucially, \emph{autodE} is open source, making the development faster, more collaborative and more robust. We believe \emph{autodE} will facilitate faster computational investigation for both newcomers and experts alike.
\clearpage
\section{Selected Supporting Information III}
\emph{Full Supporting Information including raw data can be found at}:\\ {\url{https://onlinelibrary.wiley.com/doi/10.1002/anie.202011941}}
\subsection{Computational Methods}
All \ade calculations were performed with a development version of the code (1.0.0a0). Unless otherwise stated, the \lmethodx used was GFN2-XTB v. 6.2\cite{Bannwarth2019} and the \hmethodx ORCA v. 4.2.\cite{Neese2017} All ORCA optimizations employed resolution-of-identity DFT (RI, or RIJCOSX for hybrid functionals)\cite{Neese2003, Neese2009} using the PBE\cite{Perdew1996} or PBE0\cite{Adamo1999} functional, in combination with the D3BJ dispersion correction,\cite{Grimme2010, Grimme2011} the def2-SVP or def2-TZVP basis set\cite{Weigend2005} (with effective core potentials for Rh) and the default auxiliary basis set.\cite{Weigend2006} Gaussian09\cite{G09} calculations employed identical methods without density fitting unless otherwise specified. NWChem v. 6.6\cite{Valiev2010} calculations used no dispersion correction, as D3BJ is unavailable with PBE0. Default integration grids were used in all DFT calculations. Conformers were generated with RDKit\cite{Landrum2019} version 2020.03.1 and OpenBabel version 2.4.1.
\subsection{Compatibility of autodE with Different Electronic Structure Theory Codes}
Figure \ref{fig::ade_si_1a} shows the energy profile for the \emph{endo}/\emph{exo} Diels--Alder cyclization between methyl acrylate and cyclopentadiene obtained by combining \ade with each of the software packages for which wrappers are currently implemented, showing that the results are invariant to the chosen codes.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS1a}
\vspace{0.4cm}
\hrule
\caption{Comparison of different codes used to perform optimizations and single point energy evaluations for the \emph{endo} Diels--Alder cyclisation between methyl acrylate and cyclopentadiene. All profiles are calculated at the PBE0/def2-TZVP//PBE0/def2-SVP level of theory and the low-level method is tight-binding DFT (XTB) or semi-empirical PM7 (MOPAC). Forming bond distances are quoted in \AA.}
\label{fig::ade_si_1a}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS1b}
\vspace{0.4cm}
\hrule
\caption{As Figure \ref{fig::ade_si_1a} for \emph{exo} products.}
\label{fig::ade_si_1b}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=11cm]{5/autode/figs/figS2}
\vspace{0.4cm}
\hrule
\caption{Probability that a conformer will be found -- using RDKit + XTB optimization -- within 1 \kcalx of the most stable conformer generated by CREST, for a given root mean squared deviation (RMSD) threshold and number of conformers in the ETKDGv2 algorithm. The molecule set contains 20 L-amino acids. All optimizations were performed at the GFN2-XTB level. CREST v. 2.9 \& XTB v. 6.2.3 were used to find the lowest energy conformer, as CREST generated a more diverse set than the exhaustive algorithm in CONFAB. For example, CREST generated 90 conformers for serine, for which 85 were found in ref. \cite{He2016}, while CONFAB did not identify the correct rotatable bonds and afforded only 36 conformers.}
\label{fig::ade_si_2}
\end{figure}
\clearpage
\subsection{Metal-complex Conformers}
\label{section::ade_si_metal_complex_confs}
\ade generates metal complex conformers using Eqn. \eqref{equation::ade_1_repeat},
\begin{equation}
U_\text{RB}(\boldsymbol{x}) = \sum_{ij \in \text{bonds}} k_1 (r_{ij} - r_{ij}^\text{avg})^2 + \sum_{i > j} \frac{k_2}{r_{ij}^n}
\label{equation::ade_1_repeat}
\end{equation}
where the first harmonic term describes bonds between atoms, and the second term introduces a non-physical repulsion that enforces a notion of steric repulsion. Parameters $k_1 = 1$ and $k_2 = 0.01$ were selected based on empirical experience and comparison to optimised structures (Figure \ref{fig::ade_si_3}) while ideal bond lengths ($r^\text{avg}$) are obtained as averages from the Cambridge Structural Database.
To generate reasonable 3D structures for organometallic complexes, each atom is added sequentially and randomly in a 10 \AA$\;$ cubic box, and the function $U_\text{RB}$ is minimized with respect to all coordinates after each addition. Using a smooth potential with few local minima ($n = 2$ in Eqn. \eqref{equation::ade_1_repeat}) is required to obtain stable structures for large complexes (Figure \ref{fig::ade_si_4}). For a test set of 20 metal-complexes with up to 100 atoms, our approach delivers a stable conformer in all cases, while RDKit successfully generates only 15 geometries and CONFAB failed in all cases (Figure \ref{fig::ade_si_5a} and Table \ref{table::ade_si_1}). With the analytic derivative of $U_\text{RB}$ and a conjugate gradient algorithm implemented in \emph{scipy},\cite{SciPy} an initial structure is available in a few seconds for complexes with more than 100 atoms. Stereo-defined metal complexes are currently not supported, as random initialization does not respect any chirality.
An alternative and slightly faster approach is used to generate metal complex conformers; a random displacement vector (length $\sim3$ \AA) is applied to all atoms and $U_\text{RB}$ minimized with $n = 8$. The steeper repulsive term is used as it generates a more realistic PES; for example, generating the expected three minima in the butane dihedral PES (Figure \ref{fig::ade_si_4}). While simple, this strategy affords more conformers than both RDKit and CONFAB for the metal complex test set (Figure \ref{fig::ade_si_5a} and Table \ref{table::ade_si_1}). It is also worth noting that we found no advantage in using the EMBED algorithm\cite{Havel2002} to generate initial coordinates for organic systems (Figure \ref{fig::ade_si_6a}). Moreover, conformers can be generated using arbitrary distance constraints specified by the user (e.g., to retain a square planar geometry given an initial 3D structure).
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=11cm]{5/autode/figs/figS3}
\vspace{0.4cm}
\hrule
\caption{For the molecules butane, proline and [RhH(ethene)(CO)${}_3$], the calculated RMSD between the geometries optimised with the repulsion + bonded forcefield [Eqn. \eqref{equation::ade_1_repeat}, $n=8$] and with XTB, for different $k_2$ values (Eqn. \eqref{equation::ade_1_repeat}). Initial conformer geometries are obtained by random displacements from XTB-optimised geometries (normally distributed, $\sigma$ = 0.5 \AA, $\mu$ = 0.0 \AA, in each direction). This approach provides a more realistic starting structure than optimizing at the local XTB minimum, i.e. it avoids overly favouring a very small repulsion where atoms do not move from their optimised positions. Error bars are quoted as the standard error in the mean of 20 repeats.}
\label{fig::ade_si_3}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=12cm]{5/autode/figs/figS4}
\vspace{0.4cm}
\hrule
\caption{Rotational barrier of butane using a simple repulsion + bonded potential (Eqn. \eqref{equation::ade_1_repeat}) with different $n$ values, compared to a DFT reference (PBE0-D3BJ/def2-SVP). Relative energies are normalized to the eclipsed conformation ($\theta = 0^{\circ}$). The initial structure was not symmetrized.}
\label{fig::ade_si_4}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS5a}
\vspace{0.4cm}
\hrule
\caption{Metal complexes itemized in Table \ref{table::ade_si_1} generated by minimizing a simple repulsion + bonded potential (Eqn. \eqref{equation::ade_1_repeat}) and their subsequent XTB-optimised geometries. Structures were initialized by adding atoms sequentially from a random position within a 10 \AA$\;$cube and minimizing $U_\text{RB}$ with $n = 2$ repeatedly until all atoms were added, then performing a final minimization with $n = 8$ ($k_1 = 1$, $k_2 = 0.01$).}
\label{fig::ade_si_5a}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS5b}
\vspace{0.4cm}
\hrule
\caption{\figurename{ }\ref{fig::ade_si_5a} continued.}
\label{fig::ade_si_5b}
\end{figure}
\begin{table}[h!]
\def\arraystretch{1.5}
\begin{tabularx}{\textwidth}{YYYYYYY}
\hline
Entry & \multicolumn{2}{c}{RDKit} & \multicolumn{2}{c}{CONFAB} & \multicolumn{2}{c}{RB} \\
 & SG & N & SG & N & SG & N \\
\hline
1 & \ding{55} & 0 & \ding{55} & 0 & \ding{51} & 1 \\
2 & \ding{51} & 1 & \ding{55} & 0 & \ding{51} & 2 \\
3 & \ding{51} & 1 & \ding{55} & 0 & \ding{51} & 5 \\
4 & \ding{51} & 15 & \ding{55} & 0 & \ding{51} & 17 \\
5 & \ding{51} & 1 & \ding{55} & 0 & \ding{51} & 8 \\
6 & \ding{55} & 0 & \ding{55} & 0 & \ding{51} & 10 \\
7 & \ding{55} & 0 & \ding{55} & 0 & \ding{51} & 1 \\
8 & \ding{55} & 0 & \ding{55} & 0 & \ding{51} & 10 \\
9 & \ding{51} & 2 & \ding{55} & 0 & \ding{51} & 1 \\
10 & \ding{51} & 2 & \ding{55} & 0 & \ding{51} & 5 \\
11 & \ding{51} & 6 & \ding{55} & 0 & \ding{51} & 23 \\
12 & \ding{51} & 8 & \ding{55} & 0 & \ding{51} & 16 \\
13 & \ding{51} & 9 & \ding{55} & 0 & \ding{51} & 18 \\
14 & \ding{55} & 0 & \ding{55} & 0 & \ding{51} & 20 \\
15 & \ding{51} & 22 & \ding{55} & 0 & \ding{51} & 24 \\
16 & \ding{51} & 1 & \ding{55} & 0 & \ding{51} & 5 \\
17 & \ding{51} & 3 & \ding{55} & 0 & \ding{51} & 5 \\
18 & \ding{51} & 13 & \ding{55} & 0 & \ding{51} & 18 \\
19 & \ding{51} & 14 & \ding{55} & 0 & \ding{51} & 19 \\
20 & \ding{51} & 28 & \ding{55} & 0 & \ding{51} & 30
\end{tabularx}
\hrule
\caption{Metal complex test set. The set comprises the first 10 complexes used to benchmark MolSimplify (ref. \cite{Ioannidis2016}, all Cr(II)), 5 large complexes from ref. \cite{Ounkham2017} and 5 intermediates in hydroformylation catalysis. N is the number of conformers/isomers generated using RDKit (v. 2019.09.3), CONFAB (OpenBabel v. 2.4.1), and the repulsion + bonded (RB, Eqn. \eqref{equation::ade_1_repeat}) algorithm introduced in this work. Unique conformers are found by discarding those with energies within 0.24 \kcalx of others. RDKit and RB were requested to generate 50 conformers, and CONFAB used its default. Successful 3D structure generation (SG) is indicated with a tick. See the full supporting information for SMILES strings and \figurename{ \ref{fig::ade_si_5a}, \ref{fig::ade_si_5b}} for geometries.}
\label{table::ade_si_1}
\end{table}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS6a}
\vspace{0.4cm}
\hrule
\caption{Conformer ranking for 20 amino acids in their neutral form using energies from XTB optimization (XTB opt), DFT single points (sp, XTB optimised geometries) and DFT loose optimization (opt). DFT calculations performed at the PBE-D3BJ/def2-SVP level of theory. Conformers generated using RDKit with a maximum of 1000 requested and an RMSD threshold of 0.05 \AA.}
\label{fig::ade_si_6a}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS6b}
\vspace{0.4cm}
\hrule
\caption{\figurename{ }\ref{fig::ade_si_6a} continued.}
\label{fig::ade_si_6b}
\end{figure}
\clearpage
\subsection{Constructing Molecular Graphs}
The precise nature and properties of a chemical bond are still not uniquely defined,\cite{Zhao2019} despite a rather inclusive IUPAC definition: ‘When forces acting between two atoms or groups of atoms lead to the formation of a stable independent molecular entity, a chemical bond is considered to exist…’.\cite{IUPACgoldbook} However, to form a molecular graph the bonds must be rigidly defined. Given a SMILES string, the molecular graph is constructed by adding edges defined explicitly (e.g. C=C) and implicitly (C--H) in the string. If a molecular graph is constructed from a 3D geometry, an edge between two atoms (a, b) is added if $r_{ab} < 1.25 \times r^\text{avg}_{ab}$, where $r^\text{avg}_{ab}$ is the CCDC average for that atom pair (e.g. $r^\text{avg}_\text{C--C}$ = 1.427 \AA). Edges are added by iterating through atoms sorted by their atomic weight, then, for each atom index $i$, enumerating atoms $j > i$ sorted by their distance to atom $i$. If an atom exceeds its maximal valence (e.g. hydrogen: 1, carbon: 4), the longest edge(s) is(are) removed. Some simple examples are shown in Figure \ref{fig::ade_si_7}.
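A simplified sketch of this construction is given below; the bond-length dictionary is truncated to two entries and the iteration order is simplified relative to the description above.
\begin{verbatim}
# Sketch of distance-based molecular graph construction; avg_r holds CCDC-style
# average bond lengths (truncated here) and max_valence the per-element caps.
import itertools
import numpy as np
import networkx as nx

avg_r = {frozenset(['C', 'C']): 1.427, frozenset(['C', 'H']): 1.09}   # angstrom (subset)
max_valence = {'H': 1, 'C': 4}

def molecular_graph(elements, coords, tol=1.25):
    g = nx.Graph()
    g.add_nodes_from((i, {'element': e}) for i, e in enumerate(elements))
    for i, j in itertools.combinations(range(len(elements)), 2):
        r = float(np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j])))
        r_avg = avg_r.get(frozenset([elements[i], elements[j]]))
        if r_avg is not None and r < tol * r_avg:
            g.add_edge(i, j, r=r)
    for i in g.nodes:                       # enforce maximal valence: drop the longest edges
        while g.degree[i] > max_valence.get(elements[i], 6):
            j = max(g[i], key=lambda k: g[i][k]['r'])
            g.remove_edge(i, j)
    return g
\end{verbatim}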
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS7}
\vspace{0.4cm}
\hrule
\caption{Mapping of 3D structures to molecular graphs. Distances quoted in \AA.}
\label{fig::ade_si_7}
\end{figure}
In the molecular graphs used here there is no concept of multiple bonding, with the exception of allowing nodes to be ‘pi atoms’, so that suitable distance constraints can be added to prevent rotation about $\pi$-bonds in RB conformer generation and to ensure a molecule is not truncated over a $\pi$-bond. A node may also be a ‘stereo atom’, which similarly prevents inversion of chiral centres in RB conformer generation.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS8}
\vspace{0.4cm}
\hrule
\caption{Schematic process of truncating 3-methylcyclohex-1-ene. The active bond is highlighted in blue and the active atoms are those that form the active bond (C, H).}
\label{fig::ade_si_8}
\end{figure}
\clearpage
\subsection{Non-covalent and Reactive Reactant Complexes}
\label{section::ade_si_nci_complexes}
In many chemical processes, the formation of a reactant/product complex precedes/follows the chemical step of interest, and is therefore fundamental to determining the kinetics of the process. In some cases, the molecular graph of the reactant and product complexes may not be isomorphic to the separated reactants and products. For example, in the base-catalysed hydrolysis of methyl acetate, loss of MeO$^{-}$ from the tetrahedral intermediate proceeds with concurrent deprotonation of the acid (Figure \ref{fig::ade_si_10}). Thus, to successfully characterize the TS as ‘good’, reactant and product complex conformers are required against which graph isomorphisms of the forward/reverse-displaced species from the TS can be checked. This approach allows the lowest energy conformer of a non-covalent interaction (NCI) complex to be located systematically. For a complex formed by molecules A and B, conformers of the A.B complex are constructed by adding molecule B at points equally spaced on the surface of a sphere, in a random orientation, around A, then energy minimizing with an \lmethodx (Figure \ref{fig::ade_si_11}). For a complex with $N_m$ molecules, using $N_s$ points on a sphere and $N_r$ random rotations, this approach generates $(N_s\times N_r)^{N_m-1}$ conformers. To maintain efficiency, a maximum threshold number of conformers is set (1000 by default). This approach also provides additional functionality, facilitating the generation of hydrogen-bonded complexes of relevance in anion recognition without prior knowledge (Figure \ref{fig::ade_si_12}).
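A minimal sketch of this construction for a two-molecule complex is given below. It is not the \ade implementation: the sphere radius, the default $N_s$/$N_r$ values and the Fibonacci-lattice placement of the sphere points are illustrative choices, and the subsequent \lmethodx minimization and the cap on the total number of conformers are omitted.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def sphere_points(n, radius):
    """n approximately evenly spaced points on a sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.arccos(1.0 - 2.0 * (i + 0.5) / n)
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i
    return radius * np.stack([np.sin(phi) * np.cos(theta),
                              np.sin(phi) * np.sin(theta),
                              np.cos(phi)], axis=1)

def complex_conformers(coords_a, coords_b, n_s=5, n_r=3, radius=5.0):
    """Generate n_s * n_r trial geometries of an A.B complex around A."""
    conformers = []
    centred_b = coords_b - coords_b.mean(axis=0)
    for point in sphere_points(n_s, radius) + coords_a.mean(axis=0):
        for _ in range(n_r):
            rotated_b = Rotation.random().apply(centred_b)
            conformers.append(np.vstack([coords_a, rotated_b + point]))
    return conformers  # each geometry would then be minimised with an l-method
\end{verbatim}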
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=12cm]{5/autode/figs/figS9}
\vspace{0.4cm}
\hrule
\caption{Representative optimization of the reactant complexes for (a) ethyl bromide + water and (b) methyl 6-bromonicotinate + fluoride (concerted S$_N$Ar from ref. \cite{Kwan2018}) under equation \eqref{equation::ade_3}. Energies ($U$) are in arbitrary units.}
\label{fig::ade_si_9}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS10}
\vspace{0.4cm}
\hrule
\caption{(a) Reaction profile for alkaline ester hydrolysis generated in \ade (ORCA/XTB, CPCM(water)-PBE0-D3BJ/def2-TZVP//CPCM(water)-PBE0-D3BJ/ma-def2-SVP). The TS for methoxide loss is more stable than the separated acetic acid + methoxide species at the chosen level of theory. (b) TS structures calculated at CPCM(water)-PBE0-D3BJ/ma-def2-SVP. Key distances are quoted in \AA.}
\label{fig::ade_si_10}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS11}
\vspace{0.4cm}
\hrule
\caption{NCI complex conformer (a) generation methodology and (b) application to the water trimer. $(5\times3)^2$ = 225 conformers are generated and optimised.}
\label{fig::ade_si_11}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=12cm]{5/autode/figs/figS12}
\vspace{0.4cm}
\hrule
\caption{Most stable NCI complex conformer for a urea-fluoride hydrogen-bonding complex located using \ade at the XTB level. Distances shown in \AA. See full supporting information for the associated input script.}
\label{fig::ade_si_12}
\end{figure}
\clearpage
\subsection{Algorithm Details}
\label{section::ade_si_algorithm}
To outline the algorithm in more detail than shown in the high-level workflow (Figure \ref{fig::ade_2}), a partial trace of the function calls for an S$_N$2 reaction (Figure \ref{fig::ade_5}, with the input reproduced below) is shown, commented where a function is not self-explanatory or where multiple possibilities exist depending on the system. Note that this trace will likely change in development but is accurate for the 1.0.0a0 release. Only a single molecule initialization (MeCl), the reaction initialization and the calculate-reaction-profile function calls are shown; a consolidated version of the input is sketched after the trace. Functions without a prefix (e.g. \code{rdkit.}) are \ade functions.
\\\\
\hrule
\code{MeCl = Reactant(name=`CH3Cl', smiles=`ClC')}
\hrule
\begin{enumerate}
\item \code{init_smiles(`ClC')}.
\begin{enumerate}
\item \code{any(metal in smiles for metal in metals)} $\rightarrow$ False.\\
If true then use the RB algorithm to generate 3D
structure.
\item \code{init_organic_smiles(`ClC')}
\begin{enumerate}
\item \code{rdkit.Chem.MolFromSmiles(`ClC')}
\item \code{rdkit.Chem.AddHs(..)}
\item \code{rdkit.AllChem.EmbedMultipleConfs(..)}
\item \code{make_graph(..)}
Add nodes and edges from the atoms and RDKit
bonds defined by the SMILES connectivity, set
stereocenters inc. Z/E alkenes.
\item \code{are_coords_reasonable} $\rightarrow$ True
If the coordinates aren’t reasonable then revert to
RB algorithm to generate a 3D structure.
\item \code{check_bonds(..)}
If the connectivity based on the 3D geometry
defined by distances is different to the SMILES
connectivity display an error.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\vspace{0.5cm}
\hrule
\code{reaction = Reaction(Fluoride, MeCl, Chloride, MeF, name=`sn2', solvent_name=`water')}
\hrule
\begin{enumerate}
\item \code{classify(..)}
Classify the reaction based on the number of
reactants and products. If there are no reactants/
products throw an exception.
\item \code{get_solvent(solvent_name)}
\item \code{_check_solvent()}
If the reactant and product molecules that comprise
this reaction have an associated solvent this
function checks they match. Also permissible for
all molecule.solvent = None for a reaction in the
gas phase.
\item \code{_check_balance()}
Ensure that the number of atoms and the total
charge are conserved reactants $\rightarrow$ products,
otherwise throw an exception.
\end{enumerate}
\vspace{0.5cm}
\hrule
\code{calculate_reaction_profile()}
\hrule
\begin{enumerate}
\item \code{find_lowest_energy_conformers()}
\begin{enumerate}
\item \code{for mol in reacs + prods: mol.find_lowest_energy_conformer()}
\begin{enumerate}
\item \code{_generate_conformers()}
If RDKit conformers are fine then use the
ETKDGv2() algorithm to generate
conformers with a specified RMSD and
max number cutoff. Otherwise use the RB
algorithm with the same cutoffs. Both
are parallelized.
\item \code{for conf in mol.conformers: conf.optimise(lmethod)}
Initially optimise all conformers at the
low-level method with keywords.opt to
generate structures (hopefully) closer to
the minima on the high-level method
surface
\item \code{for conf in mol.conformers: conf.optimise(hmethod)}
Failures in Gaussian internal coordinate
bends are corrected by running a few
optimisation steps in Cartesian
coordinates, then switching back to
internals.
\item \code{_set_lowest_energy_conformer()}
Enumerate all the conformers in this
molecule and set mol.atoms, mol.energy
from the lowest energy conformer
that is also graph isomorphic to the
molecule’s graph, removing any
structures with different connectivity,
i.e. structures that are not conformers.
\end{enumerate}
\end{enumerate}
\item \code{optimise_reacs_prods()}
\begin{enumerate}
\item \code{for mol in reacs + prods: mol.optimise(hmethod)}
Optimise the geometry with the high-level method
but don’t reset the molecular graph
\end{enumerate}
\item \code{find_complexes()}
\begin{enumerate}
\item \code{ReactantComplex(reacs, ..)}
\begin{enumerate}
\item \code{charge = sum(reacs.charge)} $\rightarrow -1$
\item \code{mult = sum(reacs.mult) - (n-1)} $\rightarrow$ 1
By default the spin multiplicity of the complex is the
sum over the reactants minus $(n-1)$, where $n$ is the
number of reactants.
\item \code{atoms = reacs.atoms}
\item \code{solvent = reacs[0].solvent} $\rightarrow$ water
\item \code{graph = union(reacs.graph)}
\end{enumerate}
\item \code{ProductComplex(prods, ..)}
As for reactant complex.
\end{enumerate}
\item \code{locate_transition_state()}
\begin{enumerate}
\item $b_R > b_P \rightarrow$ False.\\
If True then call \code{switch_reacs_prods()},
then switch back after locating TSs.
\item \code{find_tss()}
Locate all possible transition states for this R $\rightarrow$ P
rearrangement.
\begin{enumerate}
\item \code{species_are_isomorphic(reac, prod)} $\rightarrow$ False.\\
If the graphs of reactant and product $(R, P)$ are
isomorphic then the bond rearrangement of
interest is not obvious and this returns true and
\code{find_tss} returns None.
\item \code{get_bond_rearrs(reac, prod, ..)}
Get all possible bond rearrangements that generate an $R'$ that is isomorphic to $P$.
\item \code{bond_rearrs is None} $\rightarrow$ False.\\
If a set of bond rearrangements cannot be found then \code{find_tss} returns None.
\item \code{for g in bond_rearrs:}
Attempt to locate a TS for all possible bond rearrangements (g).
\item \code{translate_rotate_reactant()}
\item \code{reorder_product_complex(..)}
Reorder the atoms in the product complex using
the atom mapping from $R'$ to $P$, so that $R$ and $P$ have
a consistent atom ordering.
\item \code{is_worth_truncating(..)} $\rightarrow$ False.\\
Form the truncated reactant complex by
swapping fragments for hydrogen atoms if they are
far enough away from the active atoms, then
calculate the difference in the number of atoms
between the full and truncated complexes; if it is
$> 10$ then use the truncated complex and
revert afterwards.
\item \code{for ts_func in funcs()}:
Enumerate possible TS guess functions, here a
2D low-level scan, 1D breaking-bond scan and
1D high-level forming bond scan in sequence
until either there are no more functions or a
TS has been found.
\item \code{ts_guess = ts_func(reaction, ..)}
Use the function to generate a transition state guess geometry as, in the first instance, the saddle point in the 1D/2D surface.
\item \code{could_have_correct_mode(ts_guess)} $\rightarrow$ True.\\
The ‘lowest’ mode in the Hessian must be negative (imaginary), have $|\nu| > 45 \text{ cm}^{-1}$, and have a contribution from the active atoms. The latter is defined by the mass-weighted $|\nu_i|$ for an active atom $i$ being above a threshold.
\item \code{ts.optimise()}
Use the TS optimiser in the high-level method to optimise to a first-order saddle point. If there is $>1$ imaginary frequency following successful termination, displace along the spurious mode forwards and backwards to try to remove it.
\item \code{is_true_ts()}
Check that the forward/reverse displacement leads to the expected change in active bond lengths (all $\Delta r$ $> 0.3$ \AA) or that displacement forwards and backwards along the largest magnitude imaginary mode leads to a structure that has a graph which is isomorphic to either the reactant complex or an optimised conformer of the reactant complex.
\item \code{ts.save_template()}
If \code{Config.make_ts_template}
then save a TS template containing the
atoms, distances, charge, multiplicity and
solvent for subsequent use.
\item \code{len(tss) == 0} $\rightarrow$ False.\\
If no transition states can be found then \code{find_tss} returns None.
\end{enumerate}
\item \code{find_lowest_energy_ts()}
If there is more than one TS that leads to products
then choose the lowest energy.
\end{enumerate}
\item \code{find_lowest_energy_ts_conformer()}
Use the repulsion + bonded force field to randomise the
coordinates, then minimize using the low-level method
with fixed active bond lengths, remove similar structures
based on RMSD and energy tolerances, and reoptimise
with the high-level method; a TS optimisation is then
run from the lowest energy structure. If this is both a ‘good’ TS
and lower in energy than the previously found one then use this
TS.
\item \code{calculate_single_points()}
\begin{enumerate}
\item \code{for mol in reacs + prods: mol.single_point()}
\end{enumerate}
\item \code{plot_reaction_profile(..)}
Spline between the points in the profile and optimise the
points in the spline to generate stationary points at the
minima/maxima for a smooth profile.
\end{enumerate}
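For reference, the three input fragments above can be collected into a single script of the form sketched below. This assumes the 1.0.0a0 Python API implied by the trace; the fluoride/chloride/fluoromethane definitions and the method-call form of \code{calculate_reaction_profile} are inferred rather than reproduced verbatim.
\begin{verbatim}
from autode import Reactant, Product, Reaction

fluoride = Reactant(name='F-', smiles='[F-]')
mecl = Reactant(name='CH3Cl', smiles='ClC')
chloride = Product(name='Cl-', smiles='[Cl-]')
mef = Product(name='CH3F', smiles='CF')

# Reaction(reactants..., products..., name=..., solvent_name=...), as in the
# input reproduced at the start of the trace
reaction = Reaction(fluoride, mecl, chloride, mef,
                    name='sn2', solvent_name='water')

# Drives the sequence annotated above: conformer searches, reactant/product
# optimisations, TS location and single-point energies
reaction.calculate_reaction_profile()
\end{verbatim}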
In checking whether the largest magnitude imaginary mode afforded by a Hessian calculation on the transition state guess geometry could be the ‘correct’ transition state mode that separates reactants and products, there has to be a ‘significant contribution’ from the active atoms before a transition state optimization is performed. The significance is quantified by $d > 0.1$, a threshold based on empirical experience. Further \ade developments will introduce a more robust and interpretable measure, where
\begin{equation}
d = \frac{\sum_{i \in \text{active}} w_i}{\sum_{i \in \text{atoms}} w_i} \quad,\quad w_i = \frac{M_i}{c_M} + 10 \frac{|\boldsymbol{v}_\text{imag}(i)|}{c_d}
\end{equation}
$M_i$ is the mass of atom $i$, $c_M = 1$ amu, $\boldsymbol{v}_\text{imag}(i)$ is the normal mode displacement vector of atom $i$ and $c_d = 1$ \AA.
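For concreteness, $d$ can be evaluated directly from the atomic masses and the imaginary-mode displacement vectors, as in the sketch below; the masses correspond to CH$_3$Cl + F$^-$, while the mode vector is a random placeholder rather than a computed Hessian eigenvector.
\begin{verbatim}
import numpy as np

def mode_contribution(masses, mode, active_idxs, c_m=1.0, c_d=1.0):
    """d = sum_{i in active} w_i / sum_{i in atoms} w_i,
    with w_i = M_i / c_M + 10 |v_imag(i)| / c_d."""
    w = masses / c_m + 10.0 * np.linalg.norm(mode, axis=1) / c_d
    return w[active_idxs].sum() / w.sum()

masses = np.array([12.011, 1.008, 1.008, 1.008, 35.45, 18.998])  # CH3Cl + F-
mode = np.random.uniform(-0.1, 0.1, size=(6, 3))  # placeholder v_imag vectors
d = mode_contribution(masses, mode, active_idxs=[0, 4, 5])
print(d, d > 0.1)  # 'significant contribution' test
\end{verbatim}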
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=9cm]{5/autode/figs/figS13}
\vspace{0.4cm}
\hrule
\caption{H-migration reaction involved in the radical decomposition of 1-propanol. (a) Transition state analogue (TSA) conformers located with \ade and ranked in increasing energy (c0--c3). (b) Optimised TS, starting from the TSA c1 conformer, at PBE0-D3BJ/def2-SVP compared to the lowest energy TS conformer reported in ref. \cite{Ferro-Costas2018} (cyan). RMSD calculations include all atoms. Key distances quoted in \AA.}
\label{fig::ade_si_13}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14.5cm]{5/autode/figs/figS14.pdf}
\vspace{0.4cm}
\hrule
\caption{Most stable intermediates and transition states for the Ireland-Claisen reaction profile shown in Figure \ref{fig::ade_7} located by \ade at the B3LYP-D3BJ/6-31G(d) level of theory using ORCA/XTB. }
\label{fig::ade_si_14}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=13cm]{5/autode/figs/figS15}
\vspace{0.2cm}
\hrule
\caption{Most stable intermediates and transition states for the Heck-Breslow mechanism of hydroformylation shown in Figure \ref{fig::ade_8} located by \ade at the PBE-D3BJ/def2-SVP level of theory.}
\label{fig::ade_si_15}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=15cm]{5/autode/figs/figS16}
\vspace{0.2cm}
\hrule
\caption{Co-catalyzed hydroformylation (a) reaction profile and (b) transition state geometries generated in \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP). ald. = propionaldehyde. TS${}_{5-0}$ is not a local maximum due to basis set superposition error making the TS more stable than the separated aldehyde and INT0. Electronically barrierless ligand association TSs are not calculated.}
\label{fig::ade_si_16}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=11cm]{5/autode/figs/figS17}
\vspace{0.2cm}
\hrule
\caption{Relative rate ($\ln(k_X/k_H) \approx -\Delta\Delta E_A^\ddagger/RT$, $T = 373.15$ K) for the hydrogen migration catalysed by [RhH(CO)$_2$(PAr$_3$)(ethene)]. $\Delta\Delta E_A^\ddagger$ is the energy difference between the TS analogue and the canonical TS with X = H. The TS analogues are generated by keeping bonds fixed and only optimizing the added functional group. Energies calculated at PBE0-D3BJ/def2-SVP where Ar = $p$-C$_6$H$_5$X and X = $\{$H, F, Br, CF$_3$, CH=CH${}_2$, CONH$_2$, NO$_2$, NH$_2$$\}$.}
\label{fig::ade_si_17}
\end{figure}
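As a worked example of the relative-rate expression in the caption of Figure \ref{fig::ade_si_17}, the snippet below converts a barrier difference into $k_X/k_H$ at $T = 373.15$ K; the 1 kcal mol$^{-1}$ value is purely illustrative and is not a result taken from the figure.
\begin{verbatim}
import math

R = 1.987204e-3      # gas constant, kcal mol^-1 K^-1
T = 373.15           # K

dd_e = 1.0           # hypothetical ddE_A (kcal mol^-1) for some substituent X
rel_rate = math.exp(-dd_e / (R * T))
print(f"k_X / k_H = {rel_rate:.2f}")  # ~0.26: a 1 kcal/mol higher barrier
                                      # slows the migration roughly 4-fold
\end{verbatim}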
\clearpage
\subsection{Reaction Classes}
To demonstrate the applicability of \ade to multiple reaction classes we have generated reaction profiles for a selection of organic reaction types, choosing a minimal example in each case (Figure \ref{fig::ade_si_18a}). Several reactions (Figure \ref{fig::ade_si_18a} g, i, j, k) have a TS below either the products or the reactants, due to the separated anionic components being less stable at the DFT method used. Of particular note is acylium addition to benzene (Figure \ref{fig::ade_si_17}), where our TS conformational search algorithm finds a true TS (that links reactants and products) that is more stable (0.7 \kcal) than the Wheland intermediate conformer obtained with RDKit (2.3 \kcal). These differences are within the error of the level of theory used and suggest that this reaction is electronically barrierless.
All of the reaction profiles were generated successfully, with the exception of the stepwise conversion of acetic acid to acetyl chloride by thionyl chloride, for which only the chloride addition TS was located with \ade (Step II, Scheme 1). This resulted from the substitution reaction being barrierless at the chosen \lmethodx (GFN2-XTB), and from the saddle point found on the 2D surface at the \code{low_opt} (PBE-D3BJ/ma-def2-SVP) level of theory not being a saddle point on the \code{opt}-level (PBE0-D3BJ/ma-def2-SVP) surface. Note that \ade by default does not perform a 2D scan at the \code{opt} level due to the computational expense. Neither of these limitations is intrinsic to the method; rather, they reflect the accuracy of the electronic structure theory packages currently available.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS18a-d}
\vspace{0.2cm}
\hrule
\caption{Reaction profiles for a range of organic reaction classes calculated at PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP for neutral reactions and at CPCM(water)-PBE0-D3BJ/ma-def2-TZVP//CPCM(water)-PBE0-D3BJ/ma-def2-SVP for reactions with an anionic component, using ORCA/XTB. (c) The TS contained a second spurious imaginary frequency $< 50i \text{ cm}^{-1}$ that could not be removed via displacement along the imaginary vector. (b) The TS contained two spurious imaginary frequencies $< 50i \text{ cm}^{-1}$. Distances quoted in \AA.}
\label{fig::ade_si_18a}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS18e-h}
\vspace{0.2cm}
\hrule
\caption{\figurename{ \ref{fig::ade_si_18a}} continued.}
\label{fig::ade_si_18e}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS18i-l}
\vspace{0.2cm}
\hrule
\caption{\figurename{ \ref{fig::ade_si_18a}} continued.}
\label{fig::ade_si_18i}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS18m-o}
\vspace{0.2cm}
\hrule
\caption{\figurename{ \ref{fig::ade_si_18a}} continued.}
\label{fig::ade_si_18m}
\end{figure}
% --------------------------------------------------------------------------------------------
% ------------------------------------- Joe's examples --------------------------------------
% --------------------------------------------------------------------------------------------
\iffalse
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS19}
\vspace{0.2cm}
\hrule
\caption{Rh-catalysed methanol carbonylation (Monsanto process) reaction profile generated in \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP).}
\label{fig::ade_si_19}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=13cm]{5/autode/figs/figS20}
\vspace{0.2cm}
\hrule
\caption{Most stable intermediates and transition states for the reaction profile plotted in Figure \ref{fig::ade_si_19} located by \ade at the PBE-D3BJ/def2-SVP level of theory.}
\label{fig::ade_si_20}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=14cm]{5/autode/figs/figS21}
\vspace{0.2cm}
\hrule
\caption{Ir-catalysed methanol carbonylation (Cativa process) reaction profile generated in \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP).}
\label{fig::ade_si_21}
\end{figure}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=13cm]{5/autode/figs/figS22}
\vspace{0.2cm}
\hrule
\caption{Most stable intermediates and transition states for the reaction profile shown in Figure \ref{fig::ade_si_20} located by \ade at the PBE-D3BJ/def2-SVP level of theory. Distances quoted in \AA.}
\label{fig::ade_si_22}
\end{figure}
\fi
% --------------------------------------------------------------------------------------------
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=7cm]{5/autode/figs/figS23}
\vspace{0.2cm}
\hrule
\caption{Houk-List TS for an asymmetric aldol reaction found with \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP). See full supporting information for input. Distances quoted in \AA.}
\label{fig::ade_si_23}
\end{figure}
% --------------------------------------------------------------------------------------------
% ------------------------------------- More Joe examples ----------------------------------
% --------------------------------------------------------------------------------------------
\iffalse
\clearpage
\subsection{Further Examples}
57 insertion reactions of carbenes into the C–H bonds of acetonitrile, isobutane and methane, reported in ref. \cite{Mieusset2008}, show excellent correlation between manually obtained TSs and those found in an automated fashion with \ade.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=9.5cm]{5/autode/figs/figS24}
\vspace{0.2cm}
\hrule
\caption{Carbene-insertion barrier (a) enthalpies and (b) free energies. Calculations were performed at the same level of theory as the original work of Mieusset et al.\ in ref. \cite{Mieusset2008}. Values are tabulated in the full supporting information.}
\label{fig::ade_si_24}
\end{figure}
For Brevianamide formation \ade correctly identifies the relevant H-bond interaction (shown in black dashed lines) that leads to the formation of Brevianamide A, in agreement with previous computational studies by Domingo et al.\cite{Domingo1997}
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=\textwidth]{5/autode/figs/figS25}
\vspace{0.2cm}
\hrule
\caption{Diels--Alder reaction for the formation of Brevianamides A and B generated in \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP).}
\label{fig::ade_si_25}
\end{figure}
In a diastereoselective epoxidation from ref. \cite{Schneebeli2009}, \ade correctly locates this complex (2, 2) bond making/breaking transition state in the preferred orientation.
\begin{figure}[h!]
\vspace{0.4cm}
\centering
\includegraphics[width=11cm]{5/autode/figs/figS26}
\vspace{0.2cm}
\hrule
\caption{Diastereoselective epoxidation from ref. \cite{Schneebeli2009} generated in \ade (ORCA/XTB, PBE0-D3BJ/def2-TZVP//PBE0-D3BJ/def2-SVP). Second spurious imaginary frequency 7.35$i \text{ cm}^{-1}$.}
\label{fig::ade_si_26}
\end{figure}
\fi
% --------------------------------------------------------------------------------------------
\clearpage
\end{document}
| {
"alphanum_fraction": 0.7609932255,
"avg_line_length": 71.8624390244,
"ext": "tex",
"hexsha": "98abf0ff0479b5f0087e5209a8287aa0adfada8e",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "t-young31/thesis",
"max_forks_repo_path": "5/autode/autode.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "t-young31/thesis",
"max_issues_repo_path": "5/autode/autode.tex",
"max_line_length": 1981,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2dea31ef64f4b7d55b8bdfc2094bab6579a529e0",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "t-young31/thesis",
"max_stars_repo_path": "5/autode/autode.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 20462,
"size": 73659
} |
% !TEX root = ../main.tex
\section{Discussion}
\label{sec:discussion}
We presented \acrlong{PVI}, a flexible method designed to avoid bad local optima. We showed that classic \acrlong{VI} gets trapped in these local optima and cannot recover. The choice of proximity statistic $f$ and distance $d$ enables the design of a variety of constraints that improve optimization. As examples of proximity statistics, we gave the entropy, \gls{KL} divergence, orthogonal proximity statistic, and the mean and variance of the approximate posterior. We evaluated our method in four models to demonstrate that it is easy to implement, readily extensible, and leads to beneficial statistical properties of variational inference algorithms.
The empirical results also yield guidelines for choosing proximity statistics. The entropy is useful for models with discrete latent variables which are prone to quickly getting stuck in local optima or flat regions of the objective. We also saw that the \gls{KL} statistic gives poor performance empirically, and that the orthogonal proximity statistic reduces pruning in deep generative models such as the variational autoencoder. In models like the deep exponential family model of text, the entropy is not tractable so the mean/variance proximity statistic is a natural choice.
\paragraph{Future Work.} Simplifying optimization is necessary for truly black-box variational inference. An adaptive magnitude decay based on the value of the constraint should further improve the technique (this could be done per-parameter). New proximity constraints are also easy to design and test. For example, the variance of the gradients of the variational parameters is a valid proximity statistic---which can be used to avoid variational approximations that have high-variance gradients. Another set of interesting proximity statistics are empirical statistics of the variational distribution, such as the mean, for when analytic forms are unavailable. We also leave the design and study of constraints that admit coordinate updates to future work. | {
"alphanum_fraction": 0.8212560386,
"avg_line_length": 258.75,
"ext": "tex",
"hexsha": "484fdc210afda99e9c6276b539541762702b8cfc",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "altosaar/thesis",
"max_forks_repo_path": "ch-pvi/sec_discussion.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "altosaar/thesis",
"max_issues_repo_path": "ch-pvi/sec_discussion.tex",
"max_line_length": 759,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "287484c87db0eca46f4cdae70ff8582bd66ce5a3",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "altosaar/thesis",
"max_stars_repo_path": "ch-pvi/sec_discussion.tex",
"max_stars_repo_stars_event_max_datetime": "2021-06-26T12:18:53.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-21T18:56:27.000Z",
"num_tokens": 396,
"size": 2070
} |
I am interested in the effect of Functional Reactive Programming [FRP] on User
Interface programming.
I first grew an interest in the field of Functional
Reactive Programming after seeing Bret Victor's ``Inventing on Principle''
\cite{Victor2012a}. His talk claims that, in the traditional compile-run-debug
cycle of coding, ``most of the developer's time is spent looking at the code,
blindly without an immediate connection to the thing they're making''. He goes
on to show a side-by-side illustration of a new style of development -- on one
side is the runtime preview, and on the other side is the code pertaining to
said runtime. Changes in the code update the runtime, live. He argues that ``so
much of creation is discovery, and you can't discover anything if you can't see
what you're doing'' -- alluding to his earlier statement that the
compile-run-debug cycle is much like this. I would like to investigate the
claims Bret Victor makes and, indeed, those of Elm, an instance of such an FRP language, whose website makes
similar claims.
A counter-argument may be that this is much like giving a
child a chainsaw. Is it too powerful? Does this tight feedback loop cut out a
perhaps crucial pause for
thought? Furthermore -- is this appropriate for all types of programming? Is it
at least appropriate for User Interface design? It has been shown that novices
tend to ``thrash'' about, trying out many ideas that may or may not be a
solution,
whereas experts think much more about the problem at hand before proceeding with
a solution \cite{Lopez2012a}.
My goal is to answer these questions by conducting user studies,
leveraging Elm with extensions to do A/B testing to illustrate its
effectiveness (or ineffectiveness) at enhancing User Interface Design.
As far as the scope of this project goes, I will be researching as much as
is necessary in order to meet the aims of the project listed in \ref{itm:aims}.
Should I complete these aims, I may go on to do further user studies, or attempt
to further analyse, compare and contrast User Interface Design and
Declarative/Functional Reactive Programming languages against other methods, so
as to make firmer statements about the benefits of Elm.
\chapter{Requirements}
I will now identify what the requirements are for the project.
\section{Requirement definitions}
\subsection{Numbering and referencing system}
Below is an example requirement illustrating the requirement numbering system
in use:
\begin{enumerate}
\item High-level, abstract title which sums up the topic of the associated
requirements (E.g. Compare two pieces of program text)
\label{itm:example}
\begin{enumerate}
\item Requirement associated with high-level title (E.g. The program must show if
the two input texts are the same or different)
\label{itm:example-requirement}
\textbf{Priority} of requirement (\textbf{High}, \textbf{Medium} or
\textbf{Low}).
\end{enumerate}
\end{enumerate}
Here I am referencing the above abstract title: \ref{itm:example}
Here I am referencing the above requirement: \ref{itm:example-requirement}
\subsection{Priorities system}
Below are the meanings of priorities:
\begin{itemize}
\item Priorities may be \textbf{High} or \textbf{Medium} or \textbf{Low}
\item A \textbf{High} priority requirement is one that is crucial for the system
to function. They feature the word `must'.
\item \textbf{Medium} -- provides additional utility to the
system. They feature the word `should'.
\item \textbf{Low} -- is an ancillary feature that adds
desirable functionality to the system. They feature the word `could'.
\end{itemize}
\section{Functional Requirements}
\begin{enumerate}
\item Write software to assist the capture of objective data to inform me of
the user's activities as they use the Elm IDE.
\label{itm:compile}
\begin{enumerate}
\item The program must be able to work offline and later transfer
collected data to me once a connection is resumed, collecting
mouse and keyboard activity\\
\textbf{Priority: High}
\label{itm:record}
\end{enumerate}
\item Perform Pilot and User Studies
\begin{enumerate}
\item I must perform Pilot and User Studies in an iterative fashion, each
one learning and building upon discoveries made in prior ones,
starting vague and getting more and more focused on a particular
facet of User Interface Design and/or Declarative programming as
an activity.\\
\textbf{Priority: High}
\item I must use these studies to inform experimental and software design
to disambiguate and filter data collected in the experiment, and
to exercise hypotheses.\\
\textbf{Priority: High}
\end{enumerate}
\end{enumerate}
\section{Non-Functional Requirements}
\begin{enumerate}
\item Source code
\label{itm:source}
\begin{enumerate}
\item The software must be written clearly and simply.\\
\textbf{Priority: High}
\label{itm:source-clear}
\item The software must have suitable, concise comments which explain the
programs intent, but only where the code alone is not enough.\\
\textbf{Priority: High}
\label{itm:source-comments}
\end{enumerate}
\item Activity recording
\label{itm:results}
\begin{enumerate}
\item The program activity recording feature must not slow down
the user's use of the IDE more than 1ms difference than without
it.\\
\textbf{Priority: High}
\label{itm:display-equivalent}
\item There should be software to visualise the usage data\\
\textbf{Priority: Medium}
\label{itm:display-und5mins}
\end{enumerate}
\end{enumerate}
\chapter{Project Plan}
I will now explain my current plan for the project. Notice that I say current
here -- this may change throughout the course of the project: I may narrow in on a
topic of interest, or branch out to investigate anomalous research findings.
I will be building the end product -- the dissertation and software -- via a
process of iterations, much like an iterative Software Lifecycle. The Literature
Survey is ongoing -- throughout the whole project from beginning to end --
feeding into all parts of the dissertation, and indeed this Proposal, as shown
in the Gantt chart in section \ref{pdf:gantt}. The literature I choose is
sometimes chosen
to support points I wish to make, sometimes acting to guide my next area of research, reinforce findings,
compare or contrast with other research, and probably many other things I have not yet thought
of. Most importantly, I will be looking at who the
paper/article etc.\ is cited by, preferring sources that are peer-reviewed.
As well as this literature
research, I will also have an ongoing Product Literature Survey -- looking at
existing software out there that is related to my current area of interest.
Central to this idea of iteration
is my desired method of performing user studies: I will first do what I have called a
``Pilot'' -- a short and shallow trial User Study that focuses not on the
research I'm concerned with, but instead the particular experimental design I
would like to use in my actual User Study. By employing a Pilot I can hopefully
get an idea of the nature of the experimental design -- perhaps discovering any
variables I had not previously considered that will require me to increase my
sample size or simplify the experiment in order to mitigate their effect on the
dependent variable I wish to test for. These are all problems discovered in
\cite{Yates2012a} -- including basic teething problems in
getting the experiment to flow smoothly. In an even less detailed aspect, the
pilot may allow me to look at what is out there. It may help to not look
for anything in particular initially, and see what happens.
At this stage, with
the help of discussion with my Project Supervisor, I have some ideas about how
to gather data in User Studies and these pilots could prove to be a useful
testbed for such tools. I have a hypothesis that the novice developer
``thrashing'' \cite{Lopez2012a} can be observed by shorter pauses between editing and
experimentation, and I could measure this by way of measuring the mouse position
relative to the IDE, clicks, and key-presses, using tools built-in to Elm and a
bit of extension to stream this over the Internet to my storage facilities
\cite{WhatFRP}.
As you will see in the Gantt chart in section
\ref{pdf:gantt} I have included Testing \& Implementation under the same heading
as I will be doing Test Driven Development. My experience on Placement at
PicoChip, my job as a Software Engineer at Altran and readings have helped me realise that this way of
developing is time-saving and improves code quality by enforcing modularity in
order to test it \cite{Martin2008a} and \cite{Hunt2000a}.
\section{The plan}
\includepdf[scale=0.7,pages=1,pagecommand=\subsection{Gantt
chart}\label{pdf:gantt}]{final_year_project-ganttchart.pdf}
\chapter{Required Resources}
I will now talk about the resources I require for the completion of this
dissertation, including the availability of these resources.
I will require users for my user study. These users must be proficient in at
least one programming language (declarative programming languages are niche in
and of themselves, never mind the discipline of programming, so some basic
knowledge is required in order to see useful patterns in User Interface Design).
Suitable candidates are First and Second Year
Computer Science students from most Universities in the UK. Their availability
is limited -- Christmas holidays and coursework deadlines may mean that certain
periods of the term are particularly busy for them. At Bath, suitable periods are
therefore November, January to Mid February (inclusive), Mid-March to April
(inclusive). It will be useful to procure free periods for other nearby
Universities to hedge my bets, and to have a decent random assignment of users so I
can make equivalent groups in my experiments.
The ACM Digital library, accessible via the Bath portal either from University
or from home via Single-sign-on is a valuable resource for research papers,
articles and references. The Cited-By feature will allow me to assert the
popularity/ranking of each resource. Another valuable resource is the Psychology
of Programming Interest Group, a ``[group of] people from diverse communities to explore
common interests in the psychological aspects of programming and in the
computational aspects of psychology'', with peer reviewed papers on particularly
relevant topics to my area of research.
I will require regular access to the Internet, Emacs with haskell-mode installed and Elm version 0.10
\cite{Elm2013a}. I will also need git for software source control, and
bitbucket.org for online, private backups of my work.
I require LaTeX to type up my dissertation, and have chosen texlive on Ubuntu 12.04.3
as my development environment of choice. The full development environment is
installed at the house I am staying in, in Bath, on my laptop. I am also able to
replicate this environment to a satisfactory level at Bath University on any
computer with access via Putty/SSH or similar to LCPU, as all the above software
can be installed and run on my Bath University account.
I am using Chromium Version 28.0.1500.71 Ubuntu 12.04
(28.0.1500.71-0ubuntu1.12.04.1) to run the Elm IDE, which is an important
dependency that may cause problems in getting Users in User Studies to run a
functionally equivalent browser. Only recent editions of Chrome, Chromium,
Firefox, Opera and Safari (not Internet Explorer) support Elm web programs.
\chapter{Ethical considerations}
In conducting User Studies, I will be interacting with people and collecting
data from them, so I must be considerate and mindful of those I talk to and the
information I handle.
An Ethical Checklist such as the one Bath University uses as its template
\cite{Bath2013a} may assist my research such that I treat each participant
with care and respect. I may learn from the discoveries made by others -- in my
reading, I came across a paper (also mentioned earlier) that highlighted concerns that
participants under study had, and the paper detailed ways to mitigate these
concerns so as to make the participant feel that they are informed and safe
\cite{Yates2012a}.
| {
"alphanum_fraction": 0.7771861891,
"avg_line_length": 49.9838709677,
"ext": "tex",
"hexsha": "85881c0e8c237c73cc828ef2e14607e178d19dbf",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "e7f0cbf681d2341ecb2e00d40557e4e34f8b7209",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "spanners/dissertation",
"max_forks_repo_path": "proposal/proposal.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "e7f0cbf681d2341ecb2e00d40557e4e34f8b7209",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "spanners/dissertation",
"max_issues_repo_path": "proposal/proposal.tex",
"max_line_length": 105,
"max_stars_count": 2,
"max_stars_repo_head_hexsha": "e7f0cbf681d2341ecb2e00d40557e4e34f8b7209",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "spanners/dissertation",
"max_stars_repo_path": "proposal/proposal.tex",
"max_stars_repo_stars_event_max_datetime": "2019-04-03T01:19:48.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-09-09T16:53:46.000Z",
"num_tokens": 2798,
"size": 12396
} |
\section{Observability}\label{sec:observ}
Guaranteeing that all the data required by the specification is
actually \emph{observable} is one of the principal engineering
challenges of RV. In embedded systems, the RV specification often
involves data from a number of different types of data sources,
including state data of executing programs, sensor data, as well as
data that is communicated from other systems. The safety
properties of cyber-physical systems are often formulated by aerospace
and automobile engineers that are domain experts, but can have
varying degrees of understanding of the computing systems, so the RV
engineer needs to be very proactive in addressing the observability
issue. In embedded systems, the closed nature of the hardware
platforms and proprietary issues can make it impossible to observe the
information required in the specification. Additional sensors may be
needed or communication data formats changed. At times it is
necessary to change the specification so that it only depends on
observable data. The observability issue may seem like an
``engineering detail'', but based on our experience, it is often a
significant obstacle, resulting in delays and frustration, and sometimes
preventing progress altogether.
\paragraph{Challenge:} \emph{Determining observability of the state and environment variables in the
specification.}
\paragraph{Copilot Approach:} How a RV framework obtains state data impacts the properties that
can be monitored. Many RV frameworks such as MAC~\cite{KimLKS04}
and MOP~\cite{ChenR05} instrument the software and hardware of the
SUO so that it emits events of interest to the monitor. While
attractive from the viewpoint of maximizing state observability, the
additional overhead may affect worst case
execution times and consequently the scheduling;
regulatory guidelines may also require recertification of that system.
Copilot and several other RV frameworks~\cite{sampling,Kane15,borzoo} opt to sacrifice complete
observability by sampling events. Copilot monitors run as dedicated
threads and sample program variables and state information via shared
memory. Currently, we rely on scheduling analysis and experimentation to ensure that we sample values at a
sufficient rate that specification violations are detected. This has
been very successful when the implementation platform is running a
real-time operating system (RTOS) with deterministic scheduling
guarantees, but we cannot make strong assertions of efficacy running on less
specialized systems.
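The sampling pattern described above (a dedicated monitor thread periodically
reading shared state) can be illustrated in isolation. The sketch below is
written in Python purely for illustration and does not use Copilot's actual
(Haskell-based) API; the state variables, separation threshold and handler are
hypothetical.
\begin{verbatim}
import threading
import time

# Shared state that the system under observation updates and the monitor samples
shared_state = {"own_pos": (0.0, 0.0, 0.0), "intruder_pos": (9.0, 0.0, 0.0)}
stop = threading.Event()

def separation_ok(own, intruder, d_min=5.0):
    """Hypothetical safe-separation predicate: squared distance >= d_min^2."""
    return sum((a - b) ** 2 for a, b in zip(own, intruder)) >= d_min ** 2

def monitor(period_s=0.02):
    # The sampling period must be chosen (by scheduling analysis and
    # experiment) so violations cannot slip between consecutive samples.
    while not stop.is_set():
        if not separation_ok(shared_state["own_pos"],
                             shared_state["intruder_pos"]):
            print("safe-separation violation detected")  # hand off to a handler
        time.sleep(period_s)

t = threading.Thread(target=monitor)
t.start()
shared_state["intruder_pos"] = (3.0, 0.0, 0.0)  # simulate the intruder closing in
time.sleep(0.1)
stop.set(); t.join()
\end{verbatim}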
A critical lesson learned in the course of conducting many case studies
is to ask questions about observability early and often.
If monitoring the state of an executing program, is it possible that
the monitor fails to detect a state change? It is often necessary to
read sensor data to obtain the required state data (e.g. aircraft
pitch and vehicle position) or environmental data (e.g. temperature).
If it is raw sensor data, do we apply filters before feeding the data
into the monitors? Is the data available in the same coordinate
systems demanded of the specification? Can we ensure the integrity
and provenance of the data being observed?
The aircraft safe separation criteria specification introduced in
Section~\ref{sec:req} requires the monitor to observe state data for
both the aircraft the monitor is executing on as well as the
``intruder'' aircraft. Hence, the monitors must sample data from
executing programs (planned maneuver), onboard positioning sensors,
and data broadcast from other vehicles.
%For our experiments, the
% aircraft position data had to be converted from World Geodetic
% System latitude and longitude to ECEF.
%\item Does the monitor possess access to the necessary sensor data to
% monitor state of the SUO?
%\item Do those sensors even exist?
%\item Need to preform traditional signal processing (filtering etc)
%\item If sampling:
%\begin{itemize}
%\item Can you estimate missed values
%\item Can you figure out when to sample.
%\end{itemize}
%\item static analysis tools can help with code instrumentation
%\item static analysis can help with determining when to sample based
% on decision points in code. etc timing analysis for hard realtime
% systems.
%end{itemize}
| {
"alphanum_fraction": 0.801849711,
"avg_line_length": 52.1084337349,
"ext": "tex",
"hexsha": "c2f56816d5e10befbbb23b7612e4c1e553486e62",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "Copilot-Language/copilot-discussion",
"max_forks_repo_path": "ISoLA16/observ.tex",
"max_issues_count": 30,
"max_issues_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047",
"max_issues_repo_issues_event_max_datetime": "2021-09-07T22:34:17.000Z",
"max_issues_repo_issues_event_min_datetime": "2019-04-01T20:24:19.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "Copilot-Language/copilot-discussion",
"max_issues_repo_path": "ISoLA16/observ.tex",
"max_line_length": 109,
"max_stars_count": 6,
"max_stars_repo_head_hexsha": "caccad918b23dae991095344a845827ddccd6047",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "Copilot-Language/copilot-discussion",
"max_stars_repo_path": "ISoLA16/observ.tex",
"max_stars_repo_stars_event_max_datetime": "2021-05-17T13:20:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-06-10T00:44:21.000Z",
"num_tokens": 914,
"size": 4325
} |
\subsection{PyRe progress and theory}
\begin{frame}
\frametitle{PyRe: Cyclus (Py)ro (Re)processing Module}
\begin{columns}
\begin{column}{.45\textwidth}
\begin{block}{How does PyRe work?}
PyRe does the following with an input stream and facility configuration parameters:
\begin{itemize}
\item Pass fuel to voloxidation.
\item Generate efficiencies from parameters.
\item Multiply stream by efficiency matrix.
\item Repeated for each process.
\end{itemize}
\end{block}
\begin{block}{Current Work:}
Create a class for each sub-process, and build the archetype.
\end{block}
\end{column}
\begin{column}{.55\textwidth}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{flowchart}
\caption{An archetype design flowchart of pyroprocessing facilities.}
\label{fig:pyre}
\end{figure}
\end{column}
\end{columns}
\end{frame}
\subsection{About PyRe}
\begin{frame}
\frametitle{PyRe: Cyclus (Py)ro (Re)processing Module}
\begin{block}{Timeline:}
The archetype will be functional with preliminary efficiencies in early \textbf{August 2018}. \\
A more detailed PyRe will be completed for ANTPC in \textbf{September 2018}.
\end{block}
\begin{block}{Archetype Uses:}
PyRe will answer the following questions
\begin{itemize}
\item What is the effect of introducing pyroprocessing plants in the fuel cycle?
\item How do various facility designs affect throughput and efficiency?
\item Where in a pyroprocessing plant will monitoring most
effectively detect material diversion?
\end{itemize}
\end{block}
\begin{block}{Where is PyRe?}
PyRe can be found in the "pyre" branch of recycle: \href{https://github.com/arfc/recycle}{arfc/recycle.}
\end{block}
\begin{block}{Who is working on this?}
Greg Westphal, UIUC Graduate Researcher
\end{block}
\end{frame}
| {
"alphanum_fraction": 0.7360735533,
"avg_line_length": 34.8867924528,
"ext": "tex",
"hexsha": "ef3296eab22ae8d1652c3eded31ce1a77705219a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "arfc/2018-07-27-cyclus-call",
"max_forks_repo_path": "pyre.tex",
"max_issues_count": 5,
"max_issues_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297",
"max_issues_repo_issues_event_max_datetime": "2018-07-27T00:50:48.000Z",
"max_issues_repo_issues_event_min_datetime": "2018-07-25T14:48:24.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "arfc/2018-07-27-cyclus-call",
"max_issues_repo_path": "pyre.tex",
"max_line_length": 107,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "8f5c1b02669a6e68394e241250baf5339bc1f297",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "arfc/2018-07-27-cyclus-call",
"max_stars_repo_path": "pyre.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 544,
"size": 1849
} |
\documentclass[twoside]{MATH77}
\usepackage[\graphtype]{mfpic}
\usepackage{multicol}
\usepackage[fleqn,reqno,centertags]{amsmath}
\begin{document}
\opengraphsfile{pl16-03}
\begmath 16.3 Plotting Using \TeX
\silentfootnote{$^\copyright$1997 Calif. Inst. of Technology, \thisyear \ Math \`a la Carte, Inc.}
\subsection{Purpose}
These subprograms produce plots for documents prepared using the \TeX\ or
\LaTeX\ languages.
\subsection{Usage}
To produce a plot using \TeX\ or \LaTeX, one constructs a program that uses
SPLOT. SPLOT writes a file of \TeX\ or \LaTeX\ commands. The file is
then incorporated into a \TeX\ or \LaTeX\ document, which is then processed
by a \TeX\ or \LaTeX\ processor. Detailed instructions are given in
Section \ref{IntoTeX} below.
Sections \ref{PPSP} through \ref{DP} below describe:\\
\begin{tabular*}{3.20in}{@{}l@{~~~}l}
\quad \ref{PPSP} & Program Prototype, Single Precision\dotfill \pageref{PPSP}\\
\quad \ref{Argdef} & Argument Definitions\dotfill \pageref{Argdef}\\
\quad \ref{Numopt} & Numerical Option Descriptions\dotfill \pageref{Numopt}\\
\quad \ref{Textopt} & Textual Option Descriptions\dotfill \pageref{Textopt}\\
\quad \ref{IntoTeX} & Incorporating the Output Into a \TeX\ or\rule{0.50in}{0pt}\\
& \LaTeX\ Document\dotfill
\pageref{IntoTeX}\\
\quad \ref{DP} & Modifications for Double Precision\dotfill \pageref{DP}\\
\end{tabular*}
\subsubsection{Program Prototype, Single Precision\label{PPSP}}
\begin{description}
\item[INTEGER] \ {\bf NX}
\item[REAL] \ {\bf XSIZE, YSIZE, X}($\geq$ NX), {\bf Y}($\geq$ NX),
{\bf OPT}(see below)
\item[CHARACTER] \ {\bf COPT}(see below)
\end{description}
Assign values to NX, XSIZE, YSIZE, X, Y, OPT and COPT, and use the following
subroutine reference.
\begin{center}
\fbox{\begin{tabular}{@{\bf }c}
CALL SPLOT (XSIZE, YSIZE, X, NX, Y,\\
OPT, COPT)\\
\end{tabular}}
\end{center}
At this point, \TeX\ or \LaTeX\ commands to produce the plot are in the file
{\tt splot.tex} (unless another file name is selected by options).
\subsubsection{Argument Definitions\label{Argdef}}
\begin{description}
\item[XSIZE] \ [in] Physical horizontal size of the drawing area for the plot.
Default units are inches.
\item[YSIZE] \ [in] Physical vertical size of the drawing area for the plot.
Default units are inches. The nominal drawing area for the plot is a
region of size XSIZE $\times$ YSIZE. The origin of the physical
coordinate system is at the lower left corner, and all border lines
are on the boundary of this region. Physical coordinates outside this
area may be specified. The actual area of the image is increased by
outward-pointing ``tick'' marks (see option \ref{tick} in Section
\ref{Numopt}), ``tick'' labels (see options \ref{border} and \ref{tick}
in Section \ref{Numopt}) or annotations placed outside the drawing area
(see option \ref{text} in Section \ref{Numopt}).
Many of the options refer to user coordinates that are specified in
the same units as the points that are plotted. Different user
coordinates may be used in a plot, see option~\ref{dataset} in
Section~\ref{Numopt}.
\item[X] \ [in] Array of NX abscissas.
\item[NX] \ [in] Number of abscissas in X and ordinates in Y.
\item[Y] \ [in] Array of NX ordinates.
See also option \ref{ncurv} in Section \ref{Numopt}.
\item[OPT] \ [inout] OPT(1) is status output. Nonzero values indicate
a problem for which an error message is printed. Possibilities are
listed in the comments of subroutine SPLOTE in the file {\tt splot.f}
or {\tt splot.for}. Unless one has taken special action by calling
MESS, see Chapter~19.3, all except for 6 warning messages stop
execution. Values from 1--6 are for warnings, 10--19 are for problems
in COPT, 20--32 are for problems in OPT, values $\geq 40$ are caused
by problems that probably are due to bugs in this plotting software
that should be fixed.\\
Beginning in OPT(2), the user provides option specifications as described
in Section \ref{Numopt}.
\item[COPT] \ [in] Character options, and data used by options in OPT, as
described in Section \ref{Textopt}.
\end{description}
For simplest usage, set OPT(2)~= 0 and COPT~= 'Q'.
\subsubsection{Numerical Option Descriptions\label{Numopt}}
Options are specified by providing a sequence of codes starting in OPT(2),
and by sequences of characters in COPT().
For options specified in OPT(), each code is a whole number, followed by
zero or more ``arguments.''
Arguments are numbers, which depending on the option, may be whole
numbers. They may be data or indices into the array COPT() indicating the
location of a character datum. Where we write (Argument X=n) below, n is
the default value for X if the option is not selected.
``Whole numbers'' in OPT() are determined by using the Fortran nint()
intrinsic function, that is, the nearest whole number to OPT(j) is used
if OPT(j) is expected to be a whole number.
If an option code is negative, say OPT(j) $<$ 0, then OPT(j) is not
processed, but the number of ``arguments'' used by option $|$OPT(j)$|$
are skipped.
\end{multicols}
\begin{multicols}{2}[
\begin{center}
\begin{tabular}{|c|p{2.55in}|c|p{2.35in}|}
\multicolumn{4}{c}{\large Summary of Options\rule[-10pt]{0pt}{10pt}}\\
\hline
Option \rule{0pt}{10pt} & & Scratch & \\
or Field & {\qquad \raisebox{1.3ex}[0pt][0pt]{Brief Description}} &
File &
\multicolumn{1}{c|}{\raisebox{1.3ex}[0pt][0pt]{Remarks, Other affected
options}}\\ \hline
0 \rule{0pt}{10pt} & End of options. & & \\
A\ref{opt}/$10^0$ & Finish or not, set units
& & Subsequent data \\
A\ref{opt}/$10^1$ & Interpolation
& Y & Subsequent data \\
A\ref{opt}/$10^2$ & $\log_{10}$ on X, (polar -- not implemented)
& Y & Subsequent data,
\ref{text}--\ref{xtick},
\ref{line}--\ref{ellipse} \\
A\ref{opt}/$10^3$ & $\log_{10}$ on Y or polar info.
& Y & Subsequent data,
\ref{text}--\ref{xtick},
\ref{line}--\ref{ellipse} \\
A\ref{opt}/$10^4$ & Lines at X = 0, Y = 0 & & \\
A\ref{opt}/$10^5$ & \TeX\ or \LaTeX\
& & Use final value \\
\ref{ncurv} & For multiple curves
& Y & Required by some cases in option \ref{symbol}\\
\ref{pen} & Solid, dashed, dotted curves, width, etc.
& Y & Subsequent data, \ref{line}, \ref{pline} \\
\ref{border} & Border characteristics
& & Use final value for each border or axis \\
\ref{tick}, \ref{tick1} & Length and spacing of border tick marks
& & Use final value for each border or axis \\
\ref{dataset} & Change current data set
& Y & data, \ref{Xminmax}, \ref{Yminmax}, \ref{border}, \ref{onesymb},
\ref{text}, \ref{number}, and \ref{line} -- \ref{pellipse}\\
\ref{Xminmax}, \ref{Yminmax} & Adjust user coordinates, clipping
& & Digits $10^{3:4}$ of S\ref{border} \\
\ref{symbol} & Plot symbols, error bars, vectors
& Y & Subsequent data \\
\ref{onesymb} & Plot a single symbol, etc.
& Y & Nothing \\
\ref{arrow} & Draw arrow heads on curves
& Y & Subsequent data, \ref{line}, \ref{pline} \\
\ref{linwid} & Set various line widths & & \\
A\ref{linwid}--D\ref{linwid}
& Line widths for border, tick, etc. & & Use final value\\
E\ref{linwid} & Line widths for rectangles and ellipses
& Y & \ref{rect}, \ref{ellipse}, \ref{prect}, and \ref{pellipse} \\
\ref{text} & Text annotation
& Y & Nothing \\
\ref{number} & Put a number on the plot
& Y & Nothing \\
\ref{xtick} & Extra tick / annotation
& Y & Nothing \\
\ref{bad} & Bad data value
& & Use final value \\
\ref{nouse_1} & Not used. & & \\
\ref{line} & Draw a line
& Y & Nothing \\
\ref{rect} & Draw a rectangle
& Y & Nothing \\
\ref{ellipse} & Draw an ellipse
& Y & Nothing \\
\ref{pline}--\ref{pellipse} & As for \ref{line}--\ref{ellipse}, but in
physical coordinates & Y & Nothing \\
\ref{fill} & Type of fill for closed regions
& Y & Subsequent data, \ref{rect}, \ref{ellipse}, \ref{prect},
\ref{pellipse} \\
\ref{nouse_2} & Not used. & & \\
\ref{debug} & Debug output
& & Final value per call, then reset \\
\hline
\end{tabular}
\end{center}
]
The third and fourth columns of the table are described in Section
\ref{SOE}.
\vspace{5pt}{\bf Number\hspace{.4in}Description}\vspace{-5pt}
\renewcommand{\labelenumi}{\bf \theenumi}
\begin{enumerate}
\setcounter{enumi}{-1}
\item No more options. Option zero must be specified, and must be last
in the list of options. If no other options are specified, the
following defaults apply:
\begin{itemize}
\item Physical units are inches.
\item One curve is plotted, using a solid line 0.5 points wide.
Points are connected by a B\'ezier curve that is not closed.
\item XMIN, XMAX, YMIN and YMAX are determined automatically.
\item All four borders are drawn, with automatically determined and
labeled ``tick'' marks on the bottom and left borders.
\item The output is \LaTeX\ in file {\tt splot.tex}.
\end{itemize}
\item\label{opt} (Argument N\ref{opt} = 0) N\ref{opt} is a whole number
in which the decimal digits specify options. Digits of
N\ref{opt} mean:
\begin{itemize}
\item[$10^0$]
\begin{itemize}
\item[0 =] Finish the curve, and the plot. Physical units are in
inches.
\item[1 =] Finish the curve, and the plot. Physical units are in
millimeters. 1 in.\ = 25.4 mm.
\item[2 =] Finish the curve, and the plot. Physical units are in
points. 1 in.\ = 72.27pt.
\item[3 =] Finish this {\tt mfpic} group, but allow the plot to be
continued on a subsequent call. Use this for multiple curves if
you have capacity problems with Metafont. This option requires
that in your \TeX \ or \LaTeX \ document the file generated by
SPLOT be included with a statement of the form
$\backslash$hbox\{$\backslash$input ... \}, where ... is the
file generated by SPLOT and there is a space preceding the
closing ``\}''. In \LaTeX \ one can use mbox in place of hbox.
(Units are needed only when plot finishes.)
\item[4 =] Finish the curve but allow the plot to be continued on a
subsequent call.
\item[5 =] More data for the current curve may be provided on a
subsequent call.
\end{itemize}
See special considerations in Section C.
\item[$10^1$]
\begin{itemize}
\item[0 =] Interpolate between points using a B\'ezier curve; do not
close the curve.
\item[1 =] Interpolate between points using a B\'ezier curve; close
the curve with a B\'ezier curve.
\item[2 =] Interpolate between points using a B\'ezier curve; close
the curve with a straight line.
\item[3 =] Connect points using straight lines; do not close the
curve.
\item[4 =] Connect points using straight lines; close the curve with a
straight line.
\item[5 =] Do not connect the points. This is frequently used with
option \ref{symbol}.
\end{itemize}
\item[$10^2$]
Specifies how X is used to determine the abscissa, $\xi$ of the point
plotted, and how a border or axis with labels depending on X is
labeled. When $10^\xi$ is output for a label, if the
major ticks are 1 unit apart in $\xi$, then minor ticks
(if any) will be spaced logarithmically.
\newline
\hspace{.25in}\begin{tabular}{ccc}
Value & Plot & Labels\\
0 & $\xi = \text{X}$ & $\xi$\\
1 & $\xi = \text{X}$ & $10^\xi$\\
2 & $\xi = \log_{10}\text{X}$ & $\xi$ \\
3 & $\xi = \log_{10}\text{X}$ & $10^\xi = \text{X}$ \\
4 & \multicolumn{2}{l}{Polar coordinates, $r = \text{X}\geq 0$}\\
\end{tabular}\newline
Options for logarithmic or polar conversion apply to curves, and to
options \ref{text}, \ref{number}, and \ref{xtick}. Those for
logarithmic conversions apply to options \ref{line}--\ref{ellipse}.
{\bf Polar coordinates are not implemented in the current code.}
\newline
In the case of polar coordinates, the points plotted are
$(\text{X} \cos \text{Y},\, \text{X} \sin \text{Y})$, and
\begin{itemize}
\item[$\bullet$] XMIN, XMAX, YMIN and YMAX are interpreted as
$r_{\min}$, $r_{\max}$, $\theta_{\min}$, and $\theta_{\max}$,
respectively.
\item[$\bullet$] $r_{\min} \geq$ 0 is required.
\item[$\bullet$] Physical coordinates are Cartesian. The origin
of the physical coordinate system coincides with the origin
of the user (polar) coordinate system, and is within the
figure.
\item[$\bullet$] In a user Cartesian coordinate system derived
from the polar coordinate system, $x_{\min}$ is $-r_{\max}$ if
$\theta_{\min} \leq 180^{\circ} + k \times 360^{\circ} \leq
\theta_{\max}$ for some integer $k$, else it is $\min(0, r_{\max}
\cos \theta_{\min}, r_{\max} \cos \theta_{\max})$. $x_{\max}$,
$y_{\min}$ and $y_{\max}$ are defined similarly. Given a point
$(r, \theta)$ in the user's polar coordinate system, in the
physical Cartesian coordinate system,
$\text{X}_{\text{physical}} = \text{XSIZE}
\frac{r \cos \theta}{x_{\max}-x_{\min}}$, and
$\text{Y}_{\text{physical}} = \text{YSIZE}
\frac{r \sin \theta}{y_{\max}-y_{\min}}$.
\item[$\bullet$] The border indices are $1 \Rightarrow \theta = \theta_{\min}$,
$2 \Rightarrow r = r_{\min}$, $3 \Rightarrow \theta =
\theta_{\max}$ and $4 \Rightarrow r = r_{\max}$.
\end{itemize}
\item[$10^3$] The preceding table applies with X replaced by Y and
$\xi$ replaced by $\eta$ unless in polar coordinates. In the case
of polar coordinates, \newline
\hspace{.25in}\begin{tabular}{ccc}
Value & $\theta$ is given & Labels\\
0 & in radians & degrees\\
1 & in degrees & degrees\\
2 & in radians & radians\\
3 & in degrees & radians\\
\end{tabular}
\item[$10^4$] Let $x_b$ denote a vertical line drawn from border to
border at the 0 point on the bottom border, if for this border
XMIN$\,<0<\,$XMAX. Similarly let $x_t$ be a vertical line
at 0 as defined by the top border, $y_\ell$ a horizontal line
relative to the 0 on the left border, and $y_r$ a horizontal line
relative to the 0 on the right border. In all cases if the 0 line
isn't strictly inside the borders, the line is not drawn.
\begin{itemize}
\item[0 =] Draw $x_b$ and $y_\ell$.
\item[1 =] Do not draw any extra lines.
\item[2 =] Draw $x_b$ only.
\item[3 =] Draw $y_\ell$ only.
\item[4 =] Draw $x_t$ only.
\item[5 =] Draw $y_r$ only.
\item[6 =] Draw $x_b$ and $y_r$.
\item[7 =] Draw $x_t$ and $y_\ell$.
\item[8 =] Draw $x_t$ and $y_r$.
\end{itemize}
\item[$10^5$] 0 gives \LaTeX\ output, 1 gives \TeX\ output.
\end{itemize}
\item\label{ncurv} (Arguments K\ref{ncurv} = NX, L\ref{ncurv} = 1)
L\ref{ncurv} sets of ordinates are provided in Y. K\ref{ncurv} is a
whole number that gives NDIMY, the declared first dimension for Y
(K\ref{ncurv} $\geq$ NX), and L\ref{ncurv} is a whole number that
gives NY, the number of sets of ordinates, {\em i.e.}\ the effective
second dimension for Y. The total dimension for Y must be $\geq$
NDIMY~$\times$ NY. If a curve has been started but not yet
finished, that is, SPLOT was most recently previously called with
option \ref{opt}, and with digit $10^0$ of N\ref{opt}~= 5, and NY
is changed by specifying this option, an error message is
produced. NY must not be greater than the parameter MAXNY in
SPLOT, which is currently 50.
\item\label{pen} (Argument P\ref{pen} = 50) P\ref{pen} specifies
  whether lines drawn are solid, dashed, or dotted, and the pen width
  for solid or dashed lines. In the case of dashed lines one can
  indicate the length of the dashed lines and the nominal space between
  them.  In the case of dotted lines, one can specify the diameter
  of the dots and the space between them.  Digits of P\ref{pen} are
used as follows:
\begin{itemize}
\item[$10^0$] Pen type.
\begin{itemize}
\item[= 0] Solid lines. (The only other thing to specify for
this case is line width in $10^{1:2}$ below.)
\item[= 1] Dashed lines.
\item[= 2] Dotted lines.
\item[= 3,4] As for 1,2, except units for the length of dashes
or the diameter of the dots is in deci-points instead of in
points. See $10^{3:4}$ below.
\item[= 5--8] As for 1--4, except units for the length of the
spaces, see $10^5$ below, are in deci-points instead of in
points.
\end{itemize}
\item[$10^{1:2}$] The width for solid or dashed lines in
deci-points. A value of 0 gives the default value of 5
deci-points. Here are lines of widths 3, 5, 7 and 10 deci-points:
\vrule width 0.3pt\ \vrule width 0.5pt\ \vrule width 0.7pt\ \vrule
width 1.0pt\ .
\item[$10^{3:4}$] The length of the dashed lines or the diameter
of the dots. Units are points or deci-points, depending on the
value of the low order digit of P\ref{pen}.
\item[$10^5$] The nominal space between the dashed lines or the
dots. Units are points or deci-points, depending on the
value of the low order digit of P\ref{pen}.
\end{itemize}
The following table gives vertical lines showing various
possibilities with both the spacing and the length for dashes and
dots given in points. \vspace{5pt}
{\small
\hspace{-16pt}\begin{tabular}{*{15}{@{\hspace{9pt}}l}}
Spacing: & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & 4
\\
Dashes: & 1 & 2 & 1 & 2 & 3 & 1 & 2 & 3 & 4 & 1 & 3 & 4 & 5 & 7
\\
Dots: & 1 &$\frac 32$& 1 &$\frac 32$& 2 & 1 & 2 &$\frac 52$&
3 & 2 &$\frac 52$& 3 & $\frac 72$& 4
\end{tabular}
}
\hspace{34pt} \mbox{\input pl16-03a }
\item\label{border} (Arguments N\ref{border}, S\ref{border}, T\ref{border} =
0) N\ref{border} is a whole number in which every decimal digit
$k$ for which $1 \leq k \leq 6$ stipulates that S\ref{border} and
T\ref{border} specify characteristics for border $k$. Border
indices are: 1~$\Rightarrow$ bottom, 2~$\Rightarrow$ left,
3~$\Rightarrow$ top, 4~$\Rightarrow$ right, 5~$\Rightarrow$
X-axis if XMIN~$<$ 0 ~$<$ XMAX, 6~$\Rightarrow$ Y-axis if YMIN~$<$
0 ~$<$ YMAX.
S\ref{border} is a whole number in which the decimal digits have the
following meanings:
\begin{itemize}
\item[$10^0$]
\begin{itemize}
\item[0 =] Border $k$ is not drawn, else it is. If border $k$ is not
drawn, ``tick'' marks are neither drawn nor labeled.
\item[1 =] Major and minor ``tick'' marks are neither drawn nor
labelled.
\item[2 =] No labels printed at major ``tick'' marks.
\item[3 =] Labels printed at major ``tick'' marks,
except those at the ends of the border.
\item[4 =] Labels printed at major ``tick'' marks
except those at the left (bottom) end of the border.
\item[5 =] Labels printed at major ``tick'' marks
except those at the right (top) end of the border.
\item[6 =] Labels printed at all major ``tick'' marks.
\end{itemize}\vspace{5pt}
Labels are never printed at minor ``tick'' marks, and will be skipped
at some major tick marks if it appears the labels would overlap.
\item[$10^1$] Length, in points, of arrow head to be drawn at the right
or top end of the border. If 0, there is no arrow head.
\item[$10^{2:3}$] Points plotted are not allowed to get closer to the
border than this number of points.
\item[$10^{4:}$] Gives the space in points to set aside for labels +
outward pointing tick marks + border captions. If this is 0,
this space will be estimated.
\end{itemize}
The defaults for bottom and left borders are 6, for top and right
borders are 1, and for axes, are 0.
T\ref{border} is a whole number in which decimal digits have the
following meanings:
\begin{itemize}
\item[$10^{0:1}$] The number of minor intervals within each major
interval. If zero the value is determined automatically.
\item[$10^2$] Defines how range on variable is expanded.
\begin{itemize}
\item[0 =] Expand the range so that a major ``tick'' mark appears at
automatically-determined ends of border $k$ if the major
``tick'' mark would be at zero, else expand the range to a
minor ``tick'' mark.
\item[1 =] Expand the range so that a major or minor ``tick''
mark appears at automatically-determined ends of border $k$.
\item[2 =] Expand the range so that a major ``tick'' mark appears at
automatically-determined ends of border $k$.
\item[else] Do not expand the range.
\end{itemize}
\item[$10^{3:}$] If nonzero, this is the index in COPT of the same kind
of text allowed after a textual option character of 1--6 as
defined in Section~\ref{Textopt}. If more than one border/axis is
indicated, the caption pointed to will be applied to all of them.
Thus it is best to use COPT options 1-6 as described in Section
\ref{Textopt} below, unless this is being used for an extra data
set, see Option \ref{dataset}.
\end{itemize}
\item\label{tick} (Arguments N\ref{tick} = 123456, A\ref{tick} = 4.0pt,
B\ref{tick} = 3.0pt) N\ref{tick} has the same meaning as
N\ref{border}. $|$A\ref{tick}$|$ is the physical length of major
``tick'' marks along border $k$, and $|$B\ref{tick}$|$ is the
physical length of minor ``tick'' marks, both in points.
``Tick'' marks of positive length are directed inward, while
``tick'' marks of negative length are directed outward. If the
physical length is $>$ than the length of the plot ($10^5$ should
do this), the line is drawn the full length of the plot.
\item\label{tick1} (Arguments N\ref{tick1}, X\ref{tick1}, D\ref{tick1})
N\ref{tick1} has the same meaning as N\ref{border}. If
D\ref{tick1} $>$ 0 then major ``tick'' marks will be drawn at all
positions for which [XY]MIN~$\leq$ X\ref{tick1}~+ $k$~$\times$
D\ref{tick1}~$\leq$ [XY]MAX ($k$ integer). If D\ref{tick1} $>$ 0
this option supersedes digits $10^{0:1}$ of T\ref{border}.
\item\label{dataset} (Arguments B\ref{dataset}, S\ref{dataset},
T\ref{dataset}, P\ref{dataset}, U\ref{dataset})
One may specify multiple data sets, and thus plot different curves
on the same figure with different scalings and with a specified
alignment. One should not specify this option without having
already provided all of the data and options that reference a
prior data set.
\begin{itemize}
\item[B\ref{dataset}] This gives the index
of the border used for annotations and labels for the new data set.
See option \ref{border} for definitions of borders 1--4. Values
of 5 and 6 refer to the current data set for X and Y, and can be
used to set values for P\ref{dataset} and U\ref{dataset} on the
initial data set. If B\ref{dataset} indexes a border already in
use by another data set, that border with associated tick marks
and labels (if any) is output immediately, and the working area of
the plot is reduced by moving the new location of this border
towards the center of the plot.
\item[S\ref{dataset}] As for S\ref{border} for the new border,
$1 \leq \text{B\ref{dataset}} \leq 4$.
\item[T\ref{dataset}] As for T\ref{border} for the new border,
$1 \leq \text{B\ref{dataset}} \leq 4$.
\item[P\ref{dataset}] A distance in the same physical units as
XSIZE and YSIZE from the left or bottom end of the border. See
U\ref{dataset} below. If this is less than 0, there will be no
effort to align the points for the different data sets.
\item[U\ref{dataset}] A value in user coordinates associated
with this border. Data will be plotted in such a way that a data
point with a coordinate with this value will be plotted at the
distance from the left or bottom end of the border indicated by
P\ref{dataset}. This provides a means to align plots for the
different data sets.
\end{itemize}
\item\label{Xminmax} (Arguments A\ref{Xminmax} = 0, B\ref{Xminmax} = 0)
If A\ref{Xminmax} $<$ B\ref{Xminmax}, A\ref{Xminmax}
(B\ref{Xminmax}) gives the smallest (largest) value of X for the
current dataset. Values outside this range will be {\em clipped,
i.e.}\ will not appear in the plot. If all values are in this
range, the plotting region is chosen as if these extreme values
occurred.
\item\label{Yminmax} (Arguments A\ref{Yminmax} = 0, B\ref{Yminmax} = 0)
Similar to option \ref{Xminmax}, but for Y instead of X.
\item\label{symbol} (Argument N\ref{symbol} ...) Plot
symbols as specified by N\ref{symbol} at the current data points.
In addition data points are connected or not as set by digit
$10^1$ of N\ref{opt}.
N\ref{symbol} $<$ 0 means another N\ref{symbol} follows
immediately, and N\ref{symbol} = 0 draws no symbols.
If several curves are being plotted (see option \ref{ncurv}), the
first uses the first $|$N\ref{symbol}$|$, etc. If NY specifies
more curves than the number of symbols specified, the
specification for the last N\ref{symbol} $\geq$ 0 is re-used.
$|$N\ref{symbol}$|$ is a whole number in which decimal digits
have the following meanings:
\begin{itemize}
\item[$10^0\neq 1$] Plot symbols with vertices evenly spaced on a
circle.
\begin{itemize}
\item[$10^0$] The number of vertices, $v$. $v=0$ describes a
circle, in which case digits $10^{1:2}$ are ignored.
\item[$10^1$] The number of vertices, $s$ to ``skip'' when drawing
edges, except that $s \geq v$ means ``draw lines from the center
to the vertices.'' To include all the vertices when $s<v$,
$\gcd(v,s+1)$ polygons are drawn, each time starting one vertex
counterclockwise from the previous starting vertex.
\item[$10^2$] Rotation, $n$. The first vertex is at an angle of
$n \: \frac{45^\circ}v$ counterclockwise from the $x$-axis
direction.
\item[$10^3$] The width of line to use, $w$, in deci-points.
If $w=0$ then $w = 3$ is used. If $w=9$, the symbol is filled.
\item[$10^{4:5}$] The diameter of the circle in points. If 0, 6
is used.
\item[$10^{6:}$] If this is not 0, the symbol is opaque {\em
i.e.}\ pixels inside the symbol are cleared before drawing the
symbol.
\end{itemize}
Possible values for $|$N\ref{symbol}$|$ include\vspace{5pt}
\begin{center}
\mbox{\input pl16-03b }
\end{center}
If $s < v$ and $v$ is odd, the given data are at the averages of
the minima and maxima of the abscissas and ordinates of the
vertices. Otherwise the given data are at the centers of
circumcircles.
\item[$10^{1:0}$=1] Plot error bars. NY must be a multiple of 2.
Each point on a curve is specified by 3 numbers, {\em e.g.}\
$x_i,\ y_{i,1},$ and $y_{i,2}$ for the first curve. At each $x_i$
a solid vertical line is drawn from $y_{i,1} - y_{i,2}$ to
  $y_{i,1} + y_{i,2}$.  The next two columns of $y$ would be used
  for the next curve, etc.  Only the points $(x_i, y_{i,1})$ are
  connected for the first curve, and similarly for later curves.
\begin{itemize}
\item[$10^2$] Length in points of a horizontal line drawn
centered on the vertical line at the top and bottom of each error
bar.
  \item[$10^3$] As for $10^2$, except used for a line at $y_{i,1}$.
\item[$10^4$] Width in deci-points for the cross hatch lines.
\item[$10^5$] The width of the vertical line, $w$, in
deci-points. If $w=0$ then $w = 3$ is used.
\end{itemize}
\item[$10^{1:0}=11$] As for $10^{1:0} = 1$, except each curve is
specified by 4 numbers (NY must be a multiple of 3) and the top of
the vertical line is $y_{i,1} + y_{i,3}$ for the first curve.
\item[$10^{1:0}=21$] Draw ``vector fields.'' NY (see option
\ref{ncurv}) must be a multiple of 3. An arrow is drawn from
$(x_i,~y_{i,1})$ to $(x_i+y_{i,2},~y_{i,1}+y_{i,3})$ for
the first curve. The next three columns of $y$ would be used
for the next curve, etc.
\begin{itemize}
\item[$10^2$] Length of the arrow head in points. If 0, no
arrow head is drawn.
    \item[$10^3$] Size of circle in points to be drawn at
      $(x_i,~y_{i,1})$.  Set to 0 if no circle is desired.
    \item[$10^4$] Width in deci-points for the line used to
      draw the circle above.  If this is 9, the circle is filled.
    \item[$10^{5}$] The width of the line used to draw the arrow, $w$,
      in deci-points.  If $w=0$ then $w = 3$ is used.
\end{itemize}
\end{itemize}
\item\label{onesymb} (Arguments X\ref{onesymb}, Y\ref{onesymb},
N\ref{onesymb}\ldots .) Plot a single symbol as specified by
N\ref{onesymb} at (X\ref{onesymb},~Y\ref{onesymb}).
$|$N\ref{onesymb}$|$ has the same meaning as
$|$N\ref{symbol}$|$. If digit $10^0$ of $|$N\ref{symbol}$|$
is 1, there are extra arguments corresponding to the
values of $y_{i,2}$ ... required by option \ref{symbol}.
N\ref{onesymb}~$\geq$ 0 means (X\ref{onesymb}, Y\ref{onesymb})
(and any additional arguments) are in user coordinates,
while N\ref{onesymb}~$<$ 0 means (X\ref{onesymb}, Y\ref{onesymb})
and additional arguments, if any, are in physical coordinates.
\item\label{arrow} (Argument S\ref{arrow} = 0) If S\ref{arrow}$~\neq$ 0
draw an arrow head, with a length of S\ref{arrow} points, at
the end of the next open curve, or at the last point given in
the next closed curve.
\item\label{linwid} (Arguments A\ref{linwid}~= 100.0, B\ref{linwid}~=
70.0, C\ref{linwid}~= 50.0, D\ref{linwid}~= 60.0, E\ref{linwid}~=
30.0) \ Specify ``pens'' (as is done for option \ref{pen}) for
various kinds of lines. Values $\leq 0$ select the default.
\begin{itemize}
\item[A\ref{linwid}] For borders.
\item[B\ref{linwid}] For major ``tick'' marks.
\item[C\ref{linwid}] For minor ``tick'' marks.
\item[D\ref{linwid}] For lines drawn at X = 0 or Y = 0.
\item[E\ref{linwid}] For rectangles or ellipses (see options \ref{rect},
\ref{ellipse}, \ref{pline}, and \ref{pellipse})
\end{itemize}
\item\label{text} (Arguments X\ref{text}, Y\ref{text}, T\ref{text}) Place
a text annotation at (X\ref{text},~Y\ref{text}). The text begins at
COPT(T\ref{text}~/ 10). If (T\ref{text}~/10) = 0, text starts
immediately after the text for the last option of this type, or if
this is the first, with the first position in COPT where this kind of
option could appear. The text pointed to must be in the same form
as the text following an option character of 1--6 as described in
Section \ref{Textopt}, except the ``\{...\}'' contains text to be
printed rather than a caption and ``[...]'' describes the
justification relative to (X\ref{text},~Y\ref{text}).
T\ref{text}$\mod 10$ has the following meaning:
\begin{itemize}
\item[0 =] X\ref{text} and Y\ref{text} are in the current user
coordinate system.
\item[1 =] X\ref{text} and Y\ref{text} are in physical
coordinates.
\item[2--3] as for 0--1 except don't use the default prefix and
postfix, any data of this type required is in the text string
pointed to.
\item[4--7] as for 0--3, and in addition an opaque borderless
unfilled rectangle is placed ``below'' the annotation, but
``above'' any plotted curves or symbols.
\end{itemize}
\item\label{number} (Arguments V\ref{number}, X\ref{number},
Y\ref{number}, T\ref{number}). Place a number given by
V\ref{number} on the plot at (X\ref{number}, Y\ref{number}).
T\ref{number} is interpreted as for T\ref{text} except that
if the value of T\ref{number}~/ 10 is 0, the default formatting
is used. Otherwise the text pointed to in COPT has the same
form as described in Section \ref{Textopt} for text following an
option character of ``N''.
\item\label{xtick} (Arguments X\ref{xtick}, T\ref{xtick})
Place an annotation and/or a line or tick mark at the
value given by X\ref{xtick} on one of the borders or axes. If
X\ref{xtick} is not an interior point along this border or axis,
this has no action. Digits of T\ref{xtick} are interpreted as
follows.
\begin{itemize}
\item[$10^0$] Defines the type of line to draw.
\begin{itemize}
\item[= 0] No line.
\item[= 1] A major tick mark.
\item[= 2] A minor tick mark.
\item[= 3] A solid line from border to border.
\item[= 4] A dashed line from border to border with 3 point dashes
and a 2 point space between them.
\end{itemize}
\item[$10^1$] Gives the index of the border or axes to which this
applies.
\item[$10^{2:}$] Points to text in COPT for an annotation that is
formatted with the same kind of rule as used for a numeric
label, see Option \ref{number}. If 0 there is no annotation.
\end{itemize}
\item\label{bad} (Arguments A\ref{bad}, Y\ref{bad}) Values of Y equal to
Y\ref{bad} are considered to be ``bad data.'' If A\ref{bad} is zero,
``bad'' values of Y are simply skipped. If A\ref{bad}~$>$ 0 and a
curve is being plotted, it will be terminated at the previous
``good'' value of Y, and re-started at the next ``good'' value of Y.
If A\ref{bad}~$<$ 0 then testing for ``bad data'' is turned off.
When plotting a closed curve, if this option has been selected and
A\ref{bad}~$>$ 0, it is taken to be zero.
\item\label{nouse_1} Not used.
\item\label{line} (Arguments XA\ref{line}, YA\ref{line}, XB\ref{line}
YB\ref{line}) Draw a line in user coordinates from
(XA\ref{line}, YA\ref{line}) to (XB\ref{line}, YB\ref{line})
using the currently selected line style parameters (including the
arrow head), and the scaling set by the most recent appearance of
options \ref{Xminmax} or \ref{Yminmax}.
\item\label{rect} (Arguments A\ref{rect}, B\ref{rect}, C\ref{rect},
D\ref{rect}) Draw a rectangle specified by points (A\ref{rect},
B\ref{rect}) and (C\ref{rect}, D\ref{rect}), in user
coordinates of any two diagonally opposite corners of the
rectangle, after all curves are drawn, but before any annotations
specified by options \ref{onesymb} or \ref{text} are placed.
The rectangle is filled as specified by option \ref{fill}.
\item\label{ellipse} (Arguments X\ref{ellipse}, Y\ref{ellipse},
A\ref{ellipse}, B\ref{ellipse}, R\ref{ellipse}) Draw an ellipse
centered at (X\ref{ellipse}, Y\ref{ellipse}) with major axis
A\ref{ellipse} and minor axis B\ref{ellipse}, and with the
major axis rotated R\ref{ellipse} degrees counterclockwise from
the $+x$-axis direction. All but R\ref{ellipse} are in user
coordinates. If logarithms of user coordinates are
ordinarily taken, logarithms of X\ref{ellipse} and Y\ref{ellipse}
are computed before rotation. The input values of A\ref{ellipse}
and B\ref{ellipse} are used without taking logarithms.
\item\label{pline} Like \ref{line}, but in physical coordinates.
Logarithms are never applied to coordinates in this case.
\item\label{prect} Like \ref{rect}, but in physical coordinates.
Logarithms are never applied to coordinates in this case.
\item\label{pellipse} Like \ref{ellipse}, but in physical coordinates.
Logarithms are never applied to coordinates in this case.
\item\label{fill} (Arguments F\ref{fill}=0, ...) Specify filling of closed
curves, rectangles defined by option \ref{rect} or \ref{prect}, or
ellipses defined by option \ref{ellipse} or \ref{pellipse}. Some
cases require extra arguments that are denoted by A\ref{fill},
B\ref{fill}, and C\ref{fill}. One must provide exactly as many
arguments as the number required. F\ref{fill} is a whole number
in which the decimal digits have the following meaning:
\begin{itemize}
\item[$10^0$]
\begin{itemize}
\item[0] Do not fill~-- leave transparent.
\item[1] Fill with solid black.
\item[2] Erase.
\item[3] Shade with dots of size A\ref{fill} and spacing B\ref{fill}.
\item[4] Hatch with lines of thickness A\ref{fill}, spacing
B\ref{fill} and angle C\ref{fill} (in degrees counter-clockwise
from horizontal).
\end{itemize}
\item[$10^1$] 0 = specification applies to curves, 1 = specification
applies to rectangles defined by option \ref{rect}, else
specification applies to ellipses defined by option \ref{ellipse}.
  \item[$10^2$] 0 = specification applies to the next curve, rectangle, or
    ellipse only.  Otherwise it applies until changed by option
    \ref{fill}.  One can specify as many as 3 fill patterns with
    shading or hatching, and all will be applied.
\end{itemize}
A\ref{fill} and B\ref{fill} are in points, 72.27 points/inch.
\item\label{nouse_2} Not used.
\item\label{debug} (Argument L\ref{debug} = 0) Print debugging output:
\begin{itemize}
\item[L\ref{debug}$\leq$0] No debugging output
\item[L\ref{debug}$>$0] Option settings
\item[L\ref{debug}$>$1] Scratch file contents, including data.
\end{itemize}
\end{enumerate}
\renewcommand{\labelenumi}{\theenumi}
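For example, the value N\ref{opt}~=~100034 in option \ref{opt} selects:
finish the current curve but allow the plot to be continued on a later call
($10^0$~= 4), connect points with straight lines and do not close the curve
($10^1$~= 3), plot X and Y as given ($10^2$~= $10^3$~= 0), draw the lines
$x_b$ and $y_\ell$ if they lie inside the borders ($10^4$~= 0), and produce
\TeX\ rather than \LaTeX\ output ($10^5$~= 1).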
\subsubsection{Textual Option Descriptions\label{Textopt}}
Data at the beginning of COPT() consists of strings headed by a single
character that identifies the kind of string that follows. After the
last of such strings, COPT() may contain character strings pointed to by
various options. Strings pointed to must begin with ``['', ``\{'', or
``(''.
Options specified entirely in COPT() consist of a single letter or number
code followed by text associated with the option. All letters used in
options may be in either upper or lower case. Option codes may be
preceded by blanks. Where we write ``[\,]'', ``()'', or ``\{\}'' (no text
inside), the following is to be understood.
\begin{description}
\item[\hbox{[\,]}] Always optional. Contains two or four letters between
the brackets that define how an item is centered relative to some nominal
position. The first is either {\tt t}, {\tt c}, or {\tt b} for top,
center (vertically), and bottom. The second letter is either {\tt l},
{\tt c}, or {\tt r}, for left, center (horizontally), and right. One may
follow the second letter with an {\tt s} which indicates the text is
``stacked'' (vertically), and a following letter that must be {\tt l},
{\tt c}, or {\tt r}, to indicate how the text is justified horizontally
inside the stack. Inside a stack, text enclosed in balanced ``\{...\}'' or
``\$...\$'' is kept on a single line.
\item[()] Always optional. Contains information between the parentheses
on formatting numbers and on the size of text (in points). Items can
appear in any order, with no intervening spaces. Letters can be in either
upper or lower case, and \# is used to denote an integer which if not
present causes the nominal value to be used. The following can be
specified.
\begin{itemize}
\item[.] Always print a decimal point.
\item[F\#] Font size in points.  (This is the only item that
makes sense when ``()''
is used in connection with text output.  Thus ``(F12)'' would be
used to indicate text or numbers that are in 12 point.)  The default is 9
point.
\item[D\#] Number of significant digits that must be printed.
\item[A\#] Number of digits that are required after the decimal point.
\item[B\#] Number of digits that are required before the decimal point.
\item[X\#] $0 < \# < 10$, bias for selecting the exponent notation.  If
  \# is zero, it is replaced by 5.  The exponent notation is used if
  there are $9-\#$ or more zeros serving as place holders,
  else the usual format is used.  Note that with an input \# of 9,
  there will always be at least 0 zeros, and exponent notation must be
  used.
\end{itemize}
\item[\{\}] Never optional, but can be replaced by a ``\#'' to get a
default value. Between the braces there is a text string containing
at least one ``\#'' that is not preceded by a ``$\backslash$''. When
outputting a number or text, this string is copied to the output with the
first such ``\#'' replaced by the number or text being output. This
provides a means to change font, size, etc. The default for \LaTeX\
is ``\{$\backslash $small \#\}'' and for \TeX\ is ``\{\#\}''.
\end{description}\vspace{5pt}
{\bf Option}\vspace{-12pt}
\begin{description}
\item[Format] \hspace{.5in}{\bf Option Description}
\item[F\{{\em File\_name}\}] Specifies a file name for the output.
If this option is not selected, it is as if ``F\{splot.tex\}'' were used.
\item[Q] End of the option list. The end of the option list may also be
indicated by \{, ( or [ appearing where the first letter of an
option code is expected --- probably text used by option
\ref{text}, \ref{number}, or \ref{xtick}.
\item[A\,()\{\}] Default specification for printing numbers on
borders and axes. Should precede any B, T, L, R, X, or Y specifications.
\item[B\,()\{\}] Specification for printing numbers on
bottom border.
\item[T\,()\{\}] Specification for printing numbers on
top border.
\item[L\,()\{\}] Specification for printing numbers on
left border.
\item[R\,()\{\}] Specification for printing numbers on
right border.
\item[X\,()\{\}] Specification for printing numbers on
x-axis.
\item[Y\,()\{\}] Specification for printing numbers on
y-axis.
\item[N\,\hbox{[\,]}()\{\}] Specification for printing numbers
interior to the plot. The default for ``[\,]'' is {\tt [cl]}.
\item[W\,\hbox{[\,]}()\{\}] Specification for printing words or other
text somewhere on the plot. If (...) is used, only the ``F'' for
font size applies. The default for ``[\,]'' is {\tt [cl]}.
\item[C\,()\{\}] Default specifications for printing border/axis
captions.
\item[I\{{\em Input\_file\_name}\}] Specifies the name of a file from
which the X and Y data are to be read.  This file is opened with {\tt
FORM~= 'UNFORMATTED'} and {\tt ACCESS~='SEQUENTIAL'}, and will have NX
records read with a sequence of statements of the form\\
{\tt \hspace*{0.125in} read (IO) X, (Y(J), J=1,NY)}\\
where IO is the unit number, automatically selected, associated with this
file, and NY is 1 or, if set, the value of L\ref{ncurv} in option
\ref{ncurv}.  If NX is sufficiently large, data are read to the end of
the file.
\item[M\{{\em Raw plot output}\}] The raw plot output is sent
directly to the plot device driver, except that $\backslash$ serves
as an escape character ({\em i.e.}, $\backslash$x for any ``x'' is replaced
by x).
\item[``1--6''\hbox{[\,]}()\{{\em Border/axis\_caption}\}] Specification
for a caption to be printed on a border/axis, where a single digit from 1
to 6 is used as in Option \ref{border}. A {\tt c} for vertical positioning
on a vertical line or for horizontal positioning on a horizontal line
causes the text to be centered with respect to that direction outside any
labels that may be present. Defaults for ``[\,]'' are {\tt [bc]}, {\tt
[cl]}, {\tt [tc]}, {\tt [cr]}, {\tt [cr]} and {\tt [tc]} for the bottom,
left, top, right, x-axis, and y-axis respectively.
The \# is not used in this case.  When using \LaTeX\ the default
``$\backslash$small'' is not generated if the first
character following the \{ is a $\backslash$.
Thus ``1\{Label bottom border\}'' will place text centered below the
bottom border with a baseline at the edge of the figure.  And the default
for the $x$-axis centers text vertically just past the right end of the
axis.  If one were to use {\tt [bc]} on the top border, the text would
have a baseline at the top of the figure.  If defaults are used, a caption
for a vertical line that would require more than two character widths more
space unstacked than stacked will be stacked, as if {\tt sc} had been added
to the positioning information.
\end{description}
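As an illustration (a sketch with hypothetical file and caption names), the
following COPT reads the data from {\tt w.dat}, writes the plot to
{\tt w.tex}, puts the caption ``Time'' on the bottom border and ``Volts'' on
the left border, and then ends the option list.  The unformatted data file
itself would be written by the generating program with statements mirroring
the read shown for option {\tt I} above.
\begin{verbatim}
      character COPT*32
      data COPT/'I{w.dat}F{w.tex}1{Time}2{Volts}Q'/
c     The program creating w.dat would
c     write each of its records with,
c     e.g.,  write (IO) X, (Y(J), J=1,NY)
\end{verbatim}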
\subsubsection{Incorporating the Output Into a \TeX\ or \LaTeX\
Document\label{IntoTeX}}
SPLOT generates output intended to be processed by the {\tt mfpic}
package of \TeX\ or \LaTeX\ macros. See Section D below for more
details. Using \LaTeX\ or PDF\LaTeX\ one can generate either ``.dvi''
files or ``.pdf'' files.  We describe both approaches, but recommend
using PDF\LaTeX\ since it produces documents that are easier to share and
is easier to set up.  Also, for many of our plots we had to use our
own version of {\tt gftopk}, as the standard version does not set aside
enough memory for the character sizes that are generated for the
plots.
The general form of a \LaTeX\ document that incorporates plots generated by
SPLOT is:
\begin{verbatim}
\documentclass... % for LaTeX2e
% or \documentstyle for older LaTeX versions
...
\usepackage[metapost]{mfpic}
...
\begin{document}
...
\opengraphsfile{<fontname>}
...
\input <plotname>
% TeX reads the SPLOT output from the file
% <plotname>.tex, default is splot.tex
...
\input <anotherplotname>
...
\closegraphsfile
...
\end{document}
\end{verbatim}
The processing steps for this document illustrate usage:
\begin{enumerate}
\item Run the programs that invoke SPLOT:\\
{\tt pl16-03} \hspace{20pt}(Creates the symbols for option \ref{symbol})\\
{\tt drsplot} \hspace{20pt}(See Section C)
\vspace{-5pt}
\item\label{DocP} Process the document:\\
{\tt pdflatex ch16-03}
\vspace{-5pt}
\item\label{mfP} Process the font produced in step \ref{DocP}:\\
{\tt mpost pl16-03}\\
The name {\tt pl16-03} was mentioned in the command
``{\tt $\backslash$opengraphsfile}'' in this document.
\vspace{-5pt}
\item Process the document again, this time incorporating the
generated plots from step \ref{mfP}:\\
{\tt pdflatex ch16-03}
\vspace{-5pt}
\vspace{-5pt}
\end{enumerate}
Programs above may have different names and different usages on
different systems. If you are generating ``.dvi'' files and use a
system that automatically updates a ``.dvi'' display when running
\LaTeX, you will likely need to close the display window after generating a
new graphic font in order for the font to be displayed
correctly.
To get ``.dvi'' output instead:
\begin{itemize}
\item Replace ``$\backslash $usepackage[metapost]\{mfpic\}'' with
``$\backslash $usepackage[metafont]\{mfpic\}''.
\item Run {\tt latex} instead of {\tt pdflatex}.
\item Replace ``{\tt mpost pl16-03}'' with ``{\tt mf pl16-03}''.
\item Run {\tt gftopk pl16-03.*gf}.
\item Run {\tt latex} again.
\end{itemize}
\subsubsection{Modifications for Double Precision\label{DP}}
For double precision usage, change the REAL type statement to DOUBLE
PRECISION, and change the subroutine name to DPLOT. The default file name
for output, used if option {\tt F} in COPT() is not selected, is
{\tt dplot.tex}.
\subsection{Examples and Remarks}
The example given is that used to generate the plot in Chapter~2.2.
If one produces a curve or set of related curves by setting the $10^0$
digit of N\ref{opt}~=~5 and calling SPLOT several times to supply data
values, the value of NY must be the same on every call that contributes to
the construction of the curve(s).
If one produces a plot by making several calls to SPLOT, using digit
$10^0$ of N\ref{opt} to indicate that the plot is or is not to be finished
on the current call, the following considerations are important:
\begin{itemize}
\item
The specifications of whether ``tick'' marks are linearly or
logarithmically spaced are those in effect when the plot is finished.
\item
The specifications of whether to plot data as given, or to plot $\log_{10}~X$
or $\log_{10}~Y$, are remembered from one invocation of SPLOT to the
next, but, if changed, are separately observed.
\item
The specification to use a polar coordinate system is the one in effect
when the plot is finished.
\item
The units and values of XSIZE and YSIZE are those in effect when
the plot is finished.
\end{itemize}
All options are reset to their default values when SPLOT is first invoked,
or when invoked after finishing a plot. When SPLOT is invoked after
having been invoked with instructions {\em not} to finish the plot, all
optional settings retain their values from the previous invocation until
changed by option selections in the current invocation, except that digit
$10^0$ of N\ref{opt} is reset to zero, and L\ref{debug} is reset to zero
before the option selections are processed.
We also give a simple example here of taking output from a file to make
a plot, and give scripts illustrating how to produce the plots.
Program {\tt plotf.f} takes output from a file and calls {\tt dplot}.
Scripts {\tt plotpdf} and {\tt plotdvi} show how to do the
whole job, where {\tt tedxrk8} is the program used to
generate the results and the \LaTeX\ file is {\tt stepsel.tex}.
{\bf plotf.f}\vspace{-10pt}
\begin{verbatim}
c Plot data from a file; in this case file
c plot.out, assumed to have floating point
c numbers x_i, y_i, z_i on each line. y(x)
c and z(x) are plotted. Output in this case
c goes to file stepselp.tex.
double precision X(1), Y(1), OPT(8)
character COPT*27
data COPT/'I{plot.out}F{stepselp.tex}Q'/
c Set for 2 curves ....
data OPT/ 0.D0, 2.D0, 100000.D0, 2.D0,
c For x in [0, 15]
1 8.D0, 0.D0, 15.D0, 0.D0/
call DPLOT(4.5D0, 3.D0, X, 100000,
1 Y, OPT, COPT)
stop
end
\end{verbatim}
{\bf plotpdf}\vspace{-10pt}
\begin{verbatim}
#!/bin/bash
export RUNDIR=/m/math77/ode/dxrk8/new
$RUNDIR/tedxrk8 <$RUNDIR/test.in
$RUNDIR/plotf
pdflatex stepsel
mpost stepselp
pdflatex stepsel
\end{verbatim}
{\bf plotdvi}\vspace{-10pt}
\begin{verbatim}
#!/bin/bash
export RUNDIR=/m/math77/ode/dxrk8/new
$RUNDIR/tedxrk8 <$RUNDIR/test.in
$RUNDIR/plotf
latex stepsel
mf stepselp
gftopk stepselp.*gf
latex stepsel
\end{verbatim}
\subsection{Functional Description}
\subsubsection{Sequence of Events}\label{SOE}
Each time SPLOT is invoked, the character options in COPT are processed,
then the numeric options in OPT are processed, then the data are
processed. Most numeric options, and all data, are stored on a scratch
file, and processed when digit $10^0$ of N\ref{opt} $\leq 2$.
Numeric options are processed in the order they appear, and the setting
of one may affect the interpretation of another. When data are processed,
the most recently previous setting of an option that affects the data or
their interpretation is effective, even if that setting occurred during a
previous invocation of SPLOT for the same plot.
The effects of options or fields of options on subsequent options, and
data, are summarized in the table at the beginning of Section
\ref{Numopt}. The column labelled ``Scratch File'' indicates whether the
option, or some of its fields, are put onto the scratch file, along with
the data.
The plot border, axes, tick marks, labels, and captions are output first.
Then data on the scratch file is output in the following order. First all
data for a given {\tt mfpic} group is output before that for another. Within
each such group, curves are output, then single symbols, then rectangles
and ellipses, and finally text annotations.
\subsubsection{Context of Usage -- Interaction with \LaTeX\ and \TeX}
Plots are rendered into marks on the page by converting them to characters of
a font. The {\tt mfpic} package instructs \TeX\ or \LaTeX\ to put commands to
produce the font into the file {\tt $<$fontname$>$.mf} specified by the {\tt
$\backslash$opengraphsfile} command (see Section \ref{IntoTeX}).
The document is thus produced by invoking the \TeX\ or \LaTeX\ program to
produce the file {\tt $<$fontname$>$.mf}, processing {\tt $<$fontname$>$.mf}
by the {\tt mf} program, and then invoking \TeX\ or \LaTeX\ again to
account for the ``font metrics'' and to incorporate the font ``characters''
that implement the plots.
After processing by the {\tt mf} program, it may also be necessary to use the
{\tt gftopk} program, depending on how the {\tt .dvi} file is ultimately
converted to a format for a specific printer.
Finally, the document is converted from ``device independent'' ({\tt .dvi})
form to a form usable by a specific printer by a {\tt dvi} program specific
to that printer.
Consult documentation specific to your system for details of usage of \TeX,
\LaTeX, {\tt mf}, {\tt gftopk} and {\tt dvi} programs.
A version of the {\tt mfpic} macros accompanies the libraries.
Some implementations of {\tt gftopk} have insufficient capacity to process
output from {\tt mf} resulting from using {\tt mfpic}. A version of {\tt
gftopk} that has more capacity accompanies the libraries. You will need a
C compiler to compile it for your system.
\subsection{Error Procedures and Restrictions}
Although this software has been used to produce the wide variety of plots
in the MATH77 documentation, time constraints have meant that a good
number of the options have not been checked. It is expected that there
will be bugs in some of these unchecked options. These will be fixed as
they arise. Any errors due to a bug should be obvious from an examination
of the plot if the bug allows the code to get this far. The C versions of
this code have been checked only on the simple demonstration drivers
and thus are even more likely to suffer from bugs.
If an error is detected, an error message is produced and OPT(1) is
returned non-zero.  All errors
are processed by the error message processor described in Chapter 19.3.
If the error messages provided do not clear up the problem, looking at
{\tt $<$plotname$>$.tex} may clarify the problem for you.
\subsection{Supporting Information}
The source language is ANSI Fortran~77.
\begin{tabular}{@{\bf}l@{\hspace{5pt}}l}
\bf Entry & \hspace{.35in} {\bf Required Files}\vspace{2pt} \\
SPLOT & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
MESS, SMESS, SPLOT, SPLOT0\rule[-5pt]{0pt}{8pt}}\\
DPLOT & \parbox[t]{2.7in}{\hyphenpenalty10000 \raggedright
DMESS, MESS, SPLOT, SPLOT0\rule[-5pt]{0pt}{8pt}}\\
\end{tabular}
Design and code by Fred T. Krogh and W. Van Snyder, JPL, December 1997.
Special thanks to Thomas Leathrum for creating {\tt mfpic} and to Geoffrey
Tobin for making his latest version of {\tt mfpic} available and for
answering questions on its use.
\closegraphsfile
\begcode
\medskip\
\lstset{language=[77]Fortran,showstringspaces=false}
\lstset{xleftmargin=.8in}
\centerline{\bf \large DRSPLOT}\vspace{10pt}
\lstinputlisting{\codeloc{splot}}\vspace{-30pt}
\vspace{30pt}\centerline{\bf \large ODSPLOT}\vspace{10pt}
\lstset{language={}}
\lstinputlisting{\outputloc{splot}}
\end{document}
\documentclass[12pt]{article}
\usepackage[margin=2cm]{geometry}
\usepackage{amsmath}
\usepackage{slashed}
\usepackage{tikz}
\begin{document}
\noindent
The following diagram shows the geometry of a typical collider experiment
that generates electron scattering data.
\begin{center}
\begin{tikzpicture}
\draw[dashed] (0,0) circle (0.5cm);
\draw[thick,->] (2,0) node[anchor=west] {$e^-$} -- (0.6,0);
\draw[thick,->] (-2,0) node[anchor=east] {$e^-$} -- (-0.6,0);
\draw[thick,->] (0.40,0.40) -- (1.3,1.3) node[anchor=south west] {$e^-$};
\draw[thick,->] (-0.4,-0.4) -- (-1.3,-1.3) node[anchor=north east] {$e^-$};
\draw (1,0.5) node {$\theta$};
\end{tikzpicture}
\end{center}
\noindent
Here is the same diagram with momentum labels $p$ and Dirac spinor labels $u$.
\begin{center}
\begin{tikzpicture}
\draw[dashed] (0,0) circle (0.5cm);
\draw[thick,->] (2,0) node[anchor=west] {$p_2, u_2$} -- (0.6,0);
\draw[thick,->] (-2,0) node[anchor=east] {$p_1, u_1$} -- (-0.6,0);
\draw[thick,->] (0.40,0.40) -- (1.3,1.3) node[anchor=south west] {$p_3, u_3$};
\draw[thick,->] (-0.4,-0.4) -- (-1.3,-1.3) node[anchor=north east] {$p_4, u_4$};
\draw (1,0.5) node {$\theta$};
\end{tikzpicture}
\end{center}
\noindent
In center of mass coordinates the momentum vectors are
\begin{align*}
p_1&=
\underset{\substack{\phantom{0}\\ \text{particle 1}}}
{\begin{pmatrix}E\\0\\0\\p\end{pmatrix}}
&
p_2&=
\underset{\substack{\phantom{0}\\ \text{particle 2}}}
{\begin{pmatrix}E\\0\\0\\-p\end{pmatrix}}
&
p_3&=
\underset{\substack{\phantom{0}\\ \text{particle 3}}}
{\begin{pmatrix}
E\\
p\sin\theta\cos\phi\\
p\sin\theta\sin\phi\\
p\cos\theta
\end{pmatrix}}
&
p_4&=
\underset{\substack{\phantom{0}\\ \text{particle 4}}}
{\begin{pmatrix}
E\\
-p\sin\theta\cos\phi\\
-p\sin\theta\sin\phi\\
-p\cos\theta
\end{pmatrix}}
\end{align*}
\noindent
Symbol $p$ is incident momentum,
$E$ is total energy $E=\sqrt{p^2+m^2}$,
and $m$ is electron mass.
Polar angle $\theta$ is the observed scattering angle.
Azimuth angle $\phi$ cancels out in scattering calculations.
\bigskip
\noindent
The spinors are
\begin{align*}
u_{11}&=
\underset{\substack{\phantom{0}\\ \text{particle 1}\\ \text{spin up}}}
{\begin{pmatrix}E+m\\0\\p\\0\end{pmatrix}}
&
u_{12}&=
\underset{\substack{\phantom{0}\\ \text{particle 1}\\ \text{spin down}}}
{\begin{pmatrix}0\\E+m\\0\\-p\end{pmatrix}}
&
u_{21}&=
\underset{\substack{\phantom{0}\\ \text{particle 2}\\ \text{spin up}}}
{\begin{pmatrix}E+m\\0\\-p\\0\end{pmatrix}}
&
u_{22}&=
\underset{\substack{\phantom{0}\\ \text{particle 2}\\ \text{spin down}}}
{\begin{pmatrix}0\\E+m\\0\\p\end{pmatrix}}
\\[2ex]
u_{31}&=
\underset{\substack{\phantom{0}\\ \text{particle 3}\\ \text{spin up}}}
{\begin{pmatrix}E+m\\0\\p_{3z}\\p_{3x}+ip_{3y}\end{pmatrix}}
&
u_{32}&=
\underset{\substack{\phantom{0}\\ \text{particle 3}\\ \text{spin down}}}
{\begin{pmatrix}0\\E+m\\p_{3x}-ip_{3y}\\-p_{3z}\end{pmatrix}}
&
u_{41}&=
\underset{\substack{\phantom{0}\\ \text{particle 4}\\ \text{spin up}}}
{\begin{pmatrix}E+m\\0\\p_{4z}\\p_{4x}+ip_{4y}\end{pmatrix}}
&
u_{42}&=
\underset{\substack{\phantom{0}\\ \text{particle 4}\\ \text{spin down}}}
{\begin{pmatrix}0\\E+m\\p_{4x}-ip_{4y}\\-p_{4z}\end{pmatrix}}
\end{align*}
\noindent
The spinors shown above are not individually normalized.
Instead, a combined spinor normalization constant
$N=(E+m)^4$ will be used.
\bigskip
\noindent
The following formula computes a probability density $|\mathcal{M}_{abcd}|^2$
for electron scattering where the subscripts $abcd$ are the spin states of the electrons.
\begin{equation*}
|\mathcal{M}_{abcd}|^2=\frac{e^4}{N}
\bigg|
\underset{\substack{\phantom{0}\\ \text{from Feynman diagram with}\\ \text{photon exchange}\\ \text{no electron interchange}}}
{\frac{1}{t}(\bar{u}_{3c}\gamma^\mu u_{1a})(\bar{u}_{4d}\gamma_\mu u_{2b})}
-
\underset{\substack{\phantom{0}\\ \text{from Feynman diagram with}\\ \text{photon exchange}\\ \text{electron interchange}}}
{\frac{1}{u}(\bar{u}_{4d}\gamma^\nu u_{1a})(\bar{u}_{3c}\gamma_\nu u_{2b})}
\bigg|^2
\end{equation*}
\noindent
Symbol $e$ is electron charge.
Symbols $t$ and $u$ are Mandelstam variables
\begin{align*}
t&=(p_1-p_3)^2
\\
u&=(p_1-p_4)^2
\end{align*}
\noindent
Let
\begin{equation*}
a_1=(\bar{u}_{3c}\gamma^\mu u_{1a})(\bar{u}_{4d}\gamma_\mu u_{2b})
\qquad
a_2=(\bar{u}_{4d}\gamma^\nu u_{1a})(\bar{u}_{3c}\gamma_\nu u_{2b})
\end{equation*}
\noindent
Then
\begin{align*}
|\mathcal{M}_{abcd}|^2
&=
\frac{e^4}{N}
\left|\frac{a_1}{t} - \frac{a_2}{u}\right|^2\\
&=
\frac{e^4}{N}
\left(\frac{a_1}{t} - \frac{a_2}{u}\right)\left(\frac{a_1}{t} - \frac{a_2}{u}\right)^*\\
&=
\frac{e^4}{N}
\left(
\frac{a_1a_1^*}{t^2} - \frac{a_1a_2^*}{tu} -
\frac{a_1^*a_2}{tu} + \frac{a_2a_2^*}{u^2}
\right)
\end{align*}
\noindent
The expected probability density $\langle|\mathcal{M}|^2\rangle$ is computed by
summing $|\mathcal{M}_{abcd}|^2$ over all spin states and dividing by the number
of inbound states.
There are four inbound states.
\begin{align*}
\langle|\mathcal{M}|^2\rangle
&=
\frac{1}{4}\sum_{a=1}^2\sum_{b=1}^2\sum_{c=1}^2\sum_{d=1}^2
|\mathcal{M}_{abcd}|^2\\
&=
\frac{e^4}{4N}\sum_{a=1}^2\sum_{b=1}^2\sum_{c=1}^2\sum_{d=1}^2
\left(
\frac{a_1a_1^*}{t^2}-\frac{a_1a_2^*}{tu}-\frac{a_1^*a_2}{tu}+\frac{a_2a_2^*}{u^2}
\right)
\end{align*}
\noindent
Use the Casimir trick to replace sums over spins with matrix products.
\begin{align*}
f_{11}&=\frac{1}{N}\sum_{abcd}a_1a_1^*=
\mathop{\rm Tr}\left(
(\slashed{p}_3+m)\gamma^\mu(\slashed{p}_1+m)\gamma^\nu
\right)
\mathop{\rm Tr}\left(
(\slashed{p}_4+m)\gamma_\mu(\slashed{p}_2+m)\gamma_\nu
\right)
\\
f_{12}&=\frac{1}{N}\sum_{abcd}a_1a_2^*=
\mathop{\rm Tr}\left(
(\slashed{p}_3+m)\gamma^\mu(\slashed{p}_1+m)\gamma^\nu
(\slashed{p}_4+m)\gamma_\mu(\slashed{p}_2+m)\gamma_\nu
\right)
\\
f_{22}&=\frac{1}{N}\sum_{abcd}a_2a_2^*=
\mathop{\rm Tr}\left(
(\slashed{p}_4+m)\gamma^\mu(\slashed{p}_1+m)\gamma^\nu
\right)
\mathop{\rm Tr}\left(
(\slashed{p}_3+m)\gamma_\mu(\slashed{p}_2+m)\gamma_\nu
\right)
\end{align*}
\noindent
Hence
\begin{equation*}
\langle|\mathcal{M}|^2\rangle
=\frac{e^4}{4}
\left(
\frac{f_{11}}{t^2}-\frac{f_{12}}{tu}-\frac{f_{12}^*}{tu}+\frac{f_{22}}{u^2}
\right)
\end{equation*}
\noindent
The following formulas are equivalent to the Casimir trick.
(Recall that $a\cdot b=a^\mu g_{\mu\nu}b^\nu$)
\begin{align*}
f_{11}&=
32 (p_1\cdot p_2)^2 +
32 (p_1\cdot p_4)^2 -
64 m^2 (p_1\cdot p_3) + 64 m^4
\\
f_{12}&=
-32 (p_1\cdot p_2)^2 +
32 m^2 (p_1\cdot p_2) +
32 m^2 (p_1\cdot p_3) +
32 m^2 (p_1\cdot p_4) - 32m^4
\\
f_{22}&=
32 (p_1\cdot p_2)^2 +
32 (p_1\cdot p_3)^2 -
64 m^2 (p_1\cdot p_4) + 64 m^4
\end{align*}
\noindent
In Mandelstam variables
\begin{align*}
s&=(p_1+p_2)^2
\\
t&=(p_1-p_3)^2
\\
u&=(p_1-p_4)^2
\end{align*}
the formulas are
\begin{align*}
f_{11} &= 8 s^2 + 8 u^2 - 64 s m^2 - 64 u m^2 + 192 m^4
\\
f_{12} &= -8 s^2 + 64 s m^2 - 96 m^4
\\
f_{22} &= 8 s^2 + 8 t^2 - 64 s m^2 - 64 t m^2 + 192 m^4
\end{align*}
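\noindent
These follow from the previous expressions by writing the dot products in
terms of the Mandelstam variables,
\begin{equation*}
p_1\cdot p_2=\tfrac{1}{2}(s-2m^2)
\qquad
p_1\cdot p_3=\tfrac{1}{2}(2m^2-t)
\qquad
p_1\cdot p_4=\tfrac{1}{2}(2m^2-u)
\end{equation*}
and using the identity $s+t+u=4m^2$.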
\subsection*{High energy approximation}
When $E\gg m$ a useful approximation is to set $m=0$ and obtain
\begin{align*}
f_{11}&=8s^2+8u^2\\
f_{12}&=-8s^2\\
f_{22}&=8s^2+8t^2
\end{align*}
\noindent
Hence
\begin{align*}
\langle|\mathcal{M}|^2\rangle
&=\frac{e^4}{4}
\left(
\frac{f_{11}}{t^2}-\frac{f_{12}}{tu}-\frac{f_{12}^*}{tu}+\frac{f_{22}}{u^2}
\right)
\\
&=\frac{e^4}{4}
\left(
\frac{8s^2+8u^2}{t^2}-\frac{-8s^2}{tu}-\frac{-8s^2}{tu}+\frac{8s^2+8t^2}{u^2}
\right)
\\
&=2e^4
\left(
\frac{s^2+u^2}{t^2}+\frac{2s^2}{tu}+\frac{s^2+t^2}{u^2}
\right)
\end{align*}
\noindent
Combine terms so $\langle|\mathcal{M}|^2\rangle$ has a common denominator.
\begin{equation*}
\langle|\mathcal{M}|^2\rangle
=2e^4
\left(
\frac{u^2(s^2+u^2)+2s^2tu+t^2(s^2+t^2)}{t^2u^2}
\right)
\end{equation*}
\noindent
For $m=0$ the Mandelstam variables are
\begin{align*}
s&=4E^2
\\
t&=2E^2(\cos\theta-1)
\\
u&=-2E^2(\cos\theta+1)
\end{align*}
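\noindent
For example, with the center of mass momenta above and $m=0$ (so $p=E$),
\begin{equation*}
t=(p_1-p_3)^2=-2\,p_1\cdot p_3=-2\left(E^2-p^2\cos\theta\right)
=2E^2(\cos\theta-1)
\end{equation*}
and $s$ and $u$ follow in the same way.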
\noindent
Hence
\begin{align*}
\langle|\mathcal{M}|^2\rangle
&=2e^4
\left(
\frac{32E^8\cos^4\theta+192E^8\cos^2\theta+288E^8}{16E^8(\cos\theta-1)^2(\cos\theta+1)^2}
\right)
\\
&=4e^4\frac{\left(\cos^2\theta+3\right)^2}{(\cos\theta-1)^2(\cos\theta+1)^2}
\\
&=4e^4
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2
\end{align*}
\noindent
The following equivalent formula can also be used.
\begin{multline*}
\langle|\mathcal{M}|^2\rangle
=2e^4
\left(
\frac{s^2+u^2}{t^2}+\frac{2s^2}{tu}+\frac{s^2+t^2}{u^2}
\right)
\\
=2e^4\bigg(
\underset{\substack{\phantom{0}\\ \text{from Feynman diagram with}\\ \text{photon exchange}\\ \text{no electron interchange}}}
{\frac{1+\cos^4(\theta/2)}{\sin^4(\theta/2)}}
+
\underset{\substack{\phantom{0}\\ \text{interaction term}}}
{\frac{2}{\sin^2(\theta/2)\cos^2(\theta/2)}}
+
\underset{\substack{\phantom{0}\\ \text{from Feynman diagram with}\\ \text{photon exchange}\\ \text{electron interchange}}}
{\frac{1+\sin^4(\theta/2)}{\cos^4(\theta/2)}}
\bigg)
\end{multline*}
For example, see A.~Zee, p.~134.
\subsection*{Cross section}
The differential cross section is
\begin{equation*}
\frac{d\sigma}{d\Omega}
=\frac{\langle|\mathcal{M}|^2\rangle}{64\pi^2s}
=\frac{e^4}{16\pi^2s}
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2,\quad s\gg m^2
\end{equation*}
\noindent
Substituting $e^4=16\pi^2\alpha^2$ yields
\begin{equation*}
\frac{d\sigma}{d\Omega}=\frac{\alpha^2}{s}
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2
\end{equation*}
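\noindent
For example, at $\theta=90^\circ$ the angular factor is $(3/(-1))^2=9$, so
$d\sigma/d\Omega=9\alpha^2/s$.
\bigskip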
\noindent
We can integrate $d\sigma$ to obtain a cumulative distribution function.
Recall that
\begin{equation*}
d\Omega=\sin\theta\,d\theta\,d\phi
\end{equation*}
Hence
\begin{equation*}
d\sigma=\frac{\alpha^2}{s}
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2
\sin\theta\,d\theta\,d\phi
\end{equation*}
\noindent
Let $I(\theta)$ be the following integral of $d\sigma$.
\begin{align*}
I(\theta)&=\frac{s}{2\pi\alpha^2}\int_0^{2\pi}\int d\sigma
\\
&=\int
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2
\sin\theta\,d\theta,
\quad a\le\theta\le\pi-a
\end{align*}
\noindent
Angular support is limited to an arbitrary $a>0$ because $I(0)$ and $I(\pi)$ are undefined.
Assume that $I(\theta)-I(a)$ is computable given $\theta$ by either symbolic or numerical integration.
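For reference, the integral can be done in closed form; one antiderivative
(which symbolic integration should reproduce, up to an additive constant) is
\begin{equation*}
I(\theta)=\frac{4}{\cos\theta-1}+\frac{4}{\cos\theta+1}-\cos\theta
=-\cos\theta-\frac{8\cos\theta}{\sin^2\theta}
\end{equation*}
Differentiating with respect to $\theta$ recovers the integrand above.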
\bigskip
\noindent
Let $C$ be the normalization constant
\begin{equation*}
C=I(\pi-a)-I(a)
\end{equation*}
\noindent
Then the cumulative distribution function $F(\theta)$ is
\begin{equation*}
F(\theta)=\frac{I(\theta)-I(a)}{C},\quad a\le\theta\le\pi-a
\end{equation*}
\noindent
The probability of observing scattering events in the interval $\theta_1$ to $\theta_2$
can now be computed.
\begin{equation*}
P(\theta_1\le\theta\le\theta_2)=F(\theta_2)-F(\theta_1)
\end{equation*}
\noindent
Probability density function $f(\theta)$ is the derivative of $F(\theta)$.
\begin{equation*}
f(\theta)
=\frac{dF(\theta)}{d\theta}
=\frac{1}{C}
\left(
\frac{\cos^2\theta+3}{\cos^2\theta-1}
\right)^2
\sin\theta
\end{equation*}
\noindent
This is a graph of $f(\theta)$ for $a=\pi/6=30^\circ$.
\begin{center}
\includegraphics[scale=0.5]{moller-scattering.png}
\end{center}
\noindent
Probability distribution for $30^\circ$ bins ($a=30^\circ$).
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\theta_1$ & $\theta_2$ & $P(\theta_1\le\theta\le\theta_2)$\\
\hline
$0^\circ$ & $30^\circ$ & -- \\
$30^\circ$ & $60^\circ$ & 0.40 \\
$60^\circ$ & $90^\circ$ & 0.10 \\
$90^\circ$ & $120^\circ$ & 0.10 \\
$120^\circ$ & $150^\circ$ & 0.40 \\
$150^\circ$ & $180^\circ$ & -- \\
\hline
\end{tabular}
\end{center}
\subsection*{Notes}
In component notation, the trace operators of the Casimir trick become sums over
a repeated index, in this case $\alpha$.
\begin{align*}
f_{11}&=
\left(
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\alpha
\right)
\left(
(\slashed{p}_4+m)^\alpha{}_\beta
\gamma_\mu{}^\beta{}_\rho
(\slashed{p}_2+m)^\rho{}_\sigma
\gamma_\nu{}^\sigma{}_\alpha
\right)
\\
f_{12}&=
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\tau
(\slashed{p}_4+m)^\tau{}_\delta
\gamma_\mu{}^\delta{}_\eta
(\slashed{p}_2+m)^\eta{}_\xi
\gamma_\nu{}^\xi{}_\alpha
\\
f_{22}&=
\left(
(\slashed{p}_4+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\alpha
\right)
\left(
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma_\mu{}^\beta{}_\rho
(\slashed{p}_2+m)^\rho{}_\sigma
\gamma_\nu{}^\sigma{}_\alpha
\right)
\end{align*}
\noindent
To convert the above formulas to Eigenmath code,
the $\gamma$ tensors need to be transposed
so that repeated indices are adjacent to each other.
Also, multiply $\gamma^\mu$ by the metric tensor to lower the index.
\begin{align*}
\gamma^{\beta\mu}{}_\rho\quad&\rightarrow\quad
\text{\tt gammaT = transpose(gamma)}\\
\gamma^\beta{}_{\mu\rho}\quad&\rightarrow\quad
\text{\tt gammaL = transpose(dot(gmunu,gamma))}
\end{align*}
\noindent
Define the following $4\times4$ matrices.
\begin{align*}
(\slashed{p}_1+m)\quad&\rightarrow\quad\text{\tt X1 = pslash1 + m I}\\
(\slashed{p}_2+m)\quad&\rightarrow\quad\text{\tt X2 = pslash2 + m I}\\
(\slashed{p}_3+m)\quad&\rightarrow\quad\text{\tt X3 = pslash3 + m I}\\
(\slashed{p}_4+m)\quad&\rightarrow\quad\text{\tt X4 = pslash4 + m I}
\end{align*}
\noindent
Then for $f_{11}$ we have the following Eigenmath code.
The contract function sums over $\alpha$.
\begin{align*}
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\alpha
\quad&\rightarrow\quad
\text{\tt T1 = contract(dot(X3,gammaT,X1,gammaT),1,4)}
\\
(\slashed{p}_4+m)^\alpha{}_\beta
\gamma_\mu{}^\beta{}_\rho
(\slashed{p}_2+m)^\rho{}_\sigma
\gamma_\nu{}^\sigma{}_\alpha
\quad&\rightarrow\quad
\text{\tt T2 = contract(dot(X4,gammaL,X2,gammaL),1,4)}
\end{align*}
\noindent
Next, multiply then sum over repeated indices.
The dot function sums over $\nu$ then the contract function
sums over $\mu$. The transpose makes the $\nu$ indices adjacent
as required by the dot function.
$$
f_{11}=
\mathop{\rm Tr}(\cdots\gamma^\mu\cdots\gamma^\nu)
\mathop{\rm Tr}(\cdots\gamma_\mu\cdots\gamma_\nu)
\quad\rightarrow\quad
\text{\tt contract(dot(T1,transpose(T2)))}
$$
\noindent
Follow suit for $f_{22}$.
\begin{align*}
(\slashed{p}_4+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\alpha
\quad&\rightarrow\quad
\text{\tt T1 = contract(dot(X4,gammaT,X1,gammaT),1,4)}
\\
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma_\mu{}^\beta{}_\rho
(\slashed{p}_2+m)^\rho{}_\sigma
\gamma_\nu{}^\sigma{}_\alpha
\quad&\rightarrow\quad
\text{\tt T2 = contract(dot(X3,gammaL,X2,gammaL),1,4)}
\end{align*}
\noindent
Then
$$
f_{22}=
\mathop{\rm Tr}(\cdots\gamma^\mu\cdots\gamma^\nu)
\mathop{\rm Tr}(\cdots\gamma_\mu\cdots\gamma_\nu)
\quad\rightarrow\quad
\text{\tt contract(dot(T1,transpose(T2)))}
$$
\noindent
The calculation of $f_{12}$ begins with
\begin{multline*}
(\slashed{p}_3+m)^\alpha{}_\beta
\gamma^{\mu\beta}{}_\rho
(\slashed{p}_1+m)^\rho{}_\sigma
\gamma^{\nu\sigma}{}_\tau
(\slashed{p}_4+m)^\tau{}_\delta
\gamma_\mu{}^\delta{}_\eta
(\slashed{p}_2+m)^\eta{}_\xi
\gamma_\nu{}^\xi{}_\alpha
\\
\rightarrow\quad
\text{\tt T = contract(dot(X3,gammaT,X1,gammaT,X4,gammaL,X2,gammaL),1,6)}
\end{multline*}
\noindent
Then sum over repeated indices $\mu$ and $\nu$.
$$
f_{12}=\mathop{\rm Tr}(\cdots\gamma^\mu\cdots\gamma^\nu\cdots\gamma_\mu\cdots\gamma_\nu)
\quad\rightarrow\quad
\text{\tt contract(contract(T,1,3))}
$$
\end{document}
\section{Semidecidability of the CF Regularity Problem}
\label{sec:semidecidability}
We know that the context-free regularity problem is undecidable
\cite{Pettorossi13}, which follows by reduction from the undecidable Post
Correspondence Problem \cite{Hopcroft06}.
Here we prove the semidecidability and the undecidability of the context-free
regularity problem.
\begin{theorem}
\label{thm:semidecidability-mnf}
Given a context-free grammar, the problem of saying whether or not there
exists an equivalent grammar in Marciani Normal Form is semidecidable and
undecidable.
\begin{proof}
Let us consider a context-free grammar in Chomsky Normal Form.
Let us derive an equivalent grammar by unfolding every production,
until only productions for the axiom remain.
Now, it is easy to check whether the derived grammar is in Marciani Normal Form.
So, given any context-free grammar in Chomsky Normal Form, it is always
possible to check whether there exists an equivalent context-free grammar in
Marciani Normal Form.
As this possibility holds for grammars in Chomsky Normal Form, and every
context-free grammar has an equivalent grammar in Chomsky Normal Form, it
holds for every context-free grammar \cite{Pettorossi13}.
\end{proof}
\end{theorem}
\begin{theorem}
\label{thm:semidecidability}
The context-free regularity problem is semidecidable and undecidable.
\begin{proof}
This follows from the semidecidability and undecidability of the problem of
saying whether or not, given a context-free grammar, there exists an
equivalent grammar in Marciani Normal Form, and from the regularity of
the language generated by a context-free grammar in Marciani Normal Form.
\end{proof}
\end{theorem}
%----------------------------------------------------------------------------
\chapter{Automatic license plate recognition}
Automatic license plate recognition (ALPR) refers to a technology that identifies vehicles based on their number plates. It is based on optical character recognition (OCR). Traditionally, these systems are used to find stolen vehicles, check for road usage permits, measure vehicle speed, or operate parking garages. The technology is also suitable for tracking vehicles and collecting location data. This may be to the benefit of the authorities, but in Europe it raises privacy concerns, as drivers have the right to data privacy.
ALPR systems can be categorized according to several aspects. Based on their mobility, there are fixed (pre-installed) and mobile (cameras in vehicles) systems. According to another aspect, some systems perform on-device image evaluation, while others process images remotely (on a central computer or a server farm). The variants are illustrated in Figure \ref{fig:alpr-systems}. Both solutions have pros and cons in terms of network bandwidth and hardware requirements.
\begin{figure}[htb]
\centerline{\includegraphics[width=1.0\columnwidth]{.//Figure/ALPR/alpr-systems.png}}
\caption{Pre-installed closed-circuit ALPR system\cite{ImageFixedALPR} (left) and a police car equipped with mobile ALPR\cite{ImageMobileALPR} (right).}
\label{fig:alpr-systems}
\end{figure}
\section{History}
The technology began to spread worldwide in the 2000s; however, the first ALPR systems were in service as early as the late 1970s. The first system was created in 1976 in the UK by the Police Scientific Development Branch. Their solution was installed to monitor traffic on the A1 road and to look for stolen vehicles. The first arrest based on ALPR identification of a stolen car happened in 1981\cite{ANPR_history}.
As the technology evolved, more sophisticated solutions emerged. Fixed cameras began to form coherent networks, and thanks to hardware developments, previously expensive and cumbersome systems became affordable and widely available. The proliferation of these systems was further facilitated by license plate redesigns in many countries (such as the Netherlands) intended to aid ALPR operation\cite{DutchLicensePlates}.
During the 1990s, mobile ALPR systems also appeared. This was made possible by the elimination of special hardware requirements, as the more robust solutions no longer depended on particular viewing angles or conditions to work. A challenge in this case is powering the hardware from a battery; providing an internet connection while moving is another requirement. Both were limiting factors in the past, but they no longer are today.
\section{Components}
ALPR solutions come in many versions that differ considerably from one another. Still, below I try to give a general picture of the image processing tasks that arise in an ALPR system\cite{ANPR} (without claiming completeness):
\begin{itemize}
\item Plate localization is the process of finding and isolating the license plates. This can be done with either object detection or semantic segmentation.
\item Plate resizing and orientation try to minimize the skew of an image and adjust the dimensions to the required size. Various heuristics (like the Projection Profile Method\cite{ProjectionProfileMethod}) exist to determine skew and apply a projection afterward. A recent solution is the use of attention-based transformer neural networks\cite{SpatialTransformerNetworks}.
\item Image manipulation is, in the context of the present work, a collective term for pixel-level transformations based on the image's statistical properties. The process can be normalization (rescaling values into the range [0, 1]), standardization (rescaling values to zero mean and unit standard deviation), grayscale conversion, a combination of these, or something else; a short sketch is given after this list. Care must be taken with these operations because the image quality significantly affects the effectiveness of subsequent steps.
\item Optical character recognition consists of character segmentation and classification; more about this is given in the OCR chapter.
\item Syntactic inspection is the process where country-specific information is taken into consideration (position of specific characters, length of the recognized sequence). The goal here is to extract and validate the correct identifier.
\item Averaging multiple images can significantly improve performance. It helps by averaging out transient artifacts, such as blur or reflections, that often occur in pictures of moving vehicles.
\end{itemize}
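The following minimal sketch illustrates the normalization, standardization, and grayscale conversion mentioned above (Python with \texttt{numpy} is assumed; the function names are illustrative and not taken from any particular ALPR library).
\begin{verbatim}
# Illustrative pixel-level transformations (assumes numpy).
import numpy as np

def normalize(img):
    # Rescale pixel values into the range [0, 1].
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def standardize(img):
    # Rescale pixel values to zero mean and unit standard deviation.
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

def to_grayscale(rgb):
    # Luminance-weighted grayscale conversion of an (H, W, 3) RGB image.
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return rgb[..., :3].astype(np.float32) @ weights
\end{verbatim}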
Not all ALPR systems explicitly separate the above steps (for example, OCR character segmentation and classification can be done at once or as separate steps). These factors influence the usability of a solution; the key is how the steps are implemented and coordinated.
\section{Challenges}
In the case of an ALPR system, there are numerous difficulties for which there are different solutions.
The variance of the images is quite large. Accurate operation at different times of day (day or night) and in different weather conditions (sunny, snowy, foggy, rainy) is also expected in most cases. Image manipulation techniques can overcome this problem to some extent. Depending on their installation, devices see the license plates at different sizes and angles. This is where plate localization, followed by proper resizing and orientation, can help, although this cannot provide a solution for overly distant, low-quality license plate images. Blurring caused by the movement of cars is also a problem. The answer to this is to use short-shutter cameras ($\sim$1/1000 second) with a global shutter. The effect of shutter speed is illustrated in Figure \ref{fig:shutter-speed}. Modern smartphones are generally capable of the required shutter speeds. However, the presence of a rolling shutter can still be an issue.
\begin{figure}[htb]
\centerline{\includegraphics[width=1.0\columnwidth]{.//Figure/ALPR/shutter-speed.png}}
\caption{The effect of shutter speed illustrated by a model car\cite{ShutterEffect}.}
\label{fig:shutter-speed}
\end{figure}
Another common difficulty is the variety of number plates. License plates can have various structures, colors, and fonts, and their shape can also vary. It is typical for both single- and multi-row license plates to be used within a country. For these reasons, most current ALPR systems can only be used satisfactorily in a given country. However, the situation in Europe has improved in recent years with the proliferation of EU number plates. Although ALPR processability is now considered in the design of most license plates, in some countries a few characters have almost identical symbols (like 0 and O in England). Outside of Europe, characters outside the Latin alphabet may also appear. Character variability is discussed in more detail in the OCR chapter. Dirt may also be present on license plates, or other objects may obscure their text. Deliberate circumvention attempts can be an additional difficulty for ALPR systems. This can be, for example, covering certain characters or using a surface that impairs visibility. I do not address this issue in this work.
\newpage
\section{Evaluation}
Most ALPR system publishers provide only rough metrics about their solutions, making it difficult to compare them. I found two de facto benchmark datasets for license plate identification. The first is the SSIG SegPlate Database\cite{SSIG-ALPR}, with 101 on-track vehicles captured by a static camera. The other is the UFPR-ALPR Dataset\cite{UFPR-ALPR}, containing 4,500 images with 30,000 number plate characters.
\begin{figure}[htb]
\centerline{\includegraphics[width=1.0\columnwidth]{.//Figure/ALPR/alpr-datasets.png}}
\caption{Samples from the SSIG SegPlate\cite{SSIG-ALPR} (left) and the UFPR-ALPR\cite{UFPR-ALPR} (right) datasets.}
\label{fig:alpr-datasets}
\end{figure}
Sighthound\cite{Sighthound} and OpenALPR\cite{OpenALPR} are considered to be widespread market players. These solutions have been compared to a YOLO-based ALPR system by Laroca et al.\cite{RobustRealTimeALPR_YOLO}. Based on their performances on the SSIG dataset, Sighthound had 89.80\%, OpenALPR 93.03\%, and YOLO-ALPR 93.53\% accuracy. On the more challenging UFPR-ALPR dataset, Sighthound scored 47.39\%, OpenALPR 50.94\%, and YOLO-ALPR 64.89\% validation accuracy.
\documentclass[8pt,a4paper,landscape,oneside]{amsart}
\usepackage{amsmath, amsthm, amssymb, amsfonts}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{booktabs}
\usepackage{caption}
\usepackage{color}
\usepackage{fancyhdr}
\usepackage{float}
\usepackage{fullpage}
%\usepackage{geometry}
% \usepackage[top=0pt, bottom=1cm, left=0.3cm, right=0.3cm]{geometry}
\usepackage[top=3pt, bottom=1cm, left=0.3cm, right=0.3cm]{geometry}
\usepackage{graphicx}
% \usepackage{listings}
\usepackage{subcaption}
\usepackage[scaled]{beramono}
\usepackage{titling}
\usepackage{datetime}
\usepackage{enumitem}
\usepackage{multicol}
\newcommand{\subtitle}[1]{%
\posttitle{%
\par\end{center}
\begin{center}\large#1\end{center}
\vskip0.1em\vspace{-1em}}%
}
% Minted
\usepackage{minted}
\newcommand{\code}[1]{\inputminted[fontsize=\normalsize,baselinestretch=1]{cpp}{_code/#1}}
\newcommand{\bashcode}[1]{\inputminted{bash}{_code/#1}}
\newcommand{\regcode}[1]{\inputminted{cpp}{code/#1}}
% Header/Footer
% \geometry{includeheadfoot}
%\fancyhf{}
\pagestyle{fancy}
\lhead{Reykjavík University}
\rhead{\thepage}
\cfoot{}
\setlength{\headheight}{15.2pt}
\setlength{\droptitle}{-20pt}
\posttitle{\par\end{center}}
\renewcommand{\headrulewidth}{0.4pt}
\renewcommand{\footrulewidth}{0.4pt}
% Math and bit operators
\DeclareMathOperator{\lcm}{lcm}
\newcommand*\BitAnd{\mathrel{\&}}
\newcommand*\BitOr{\mathrel{|}}
\newcommand*\ShiftLeft{\ll}
\newcommand*\ShiftRight{\gg}
\newcommand*\BitNeg{\ensuremath{\mathord{\sim}}}
\DeclareRobustCommand{\stirling}{\genfrac\{\}{0pt}{}}
\newenvironment{myitemize}
{ \begin{itemize}[leftmargin=.5cm]
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} }
{ \end{itemize} }
% Title/Author
\title{.*RU.*}
\subtitle{Team Reference Document}
\date{\ddmmyyyydate{\today{}}}
% Output Verbosity
\newif\ifverbose
\verbosetrue
% \verbosefalse
\begin{document}
\begin{multicols*}{3}
\maketitle
\thispagestyle{fancy}
\vspace{-3em}
% \addtocontents{toc}{\protect\enlargethispage{\baselineskip}}
\tableofcontents
% \clearpage
\section{Code Templates}
\subsection{Basic Configuration}
\subsubsection{.bashrc}
\bashcode{bashrc.sh}
ProTip\texttrademark: setxkbmap dvorak on qwerty: \texttt{o.yqtxmal ekrpat}
\subsubsection{.vimrc}
\bashcode{vimrc.sh}
\subsection{C++ Header}
A C++ header.
\code{header.cpp}
\subsection{Java Template}
A Java template.
\code{template.java}
\section{Data Structures}
\subsection{Union-Find}
\ifverbose
An implementation of the Union-Find disjoint sets data structure.
\fi
\code{data-structures/union_find.cpp}
\subsection{Segment Tree}
\ifverbose
An implementation of a Segment Tree.
\fi
\code{data-structures/segment_tree_node.cpp}
\ifverbose
\code{data-structures/segment_tree_mnnode.cpp}
\fi
\code{data-structures/segment_tree.cpp}
\subsubsection{Persistent Segment Tree}
\code{data-structures/persistent_segment_tree.cpp}
\subsection{Fenwick Tree}
\ifverbose
A Fenwick Tree is a data structure that represents an array of $n$
numbers. It supports adjusting the $i$-th element in $O(\log n)$ time,
and computing the sum of numbers in the range $i..j$ in $O(\log n)$
time. It only needs $O(n)$ space.
\fi
\code{data-structures/fenwick_tree.cpp}
\subsection{Matrix}
\ifverbose
A Matrix class.
\fi
\code{data-structures/matrix.cpp}
\ifverbose
\subsection{AVL Tree}
\ifverbose
A fast, easily augmentable, balanced binary search tree.
\fi
\code{data-structures/avl_tree.cpp}
\ifverbose
Also a very simple wrapper over the AVL tree that implements a map
interface.
\code{data-structures/avl_map.cpp}
\fi
\fi
\subsection{Cartesian Tree}
\code{data-structures/cartesian_tree.cpp}
\ifverbose
\subsection{Heap}
An implementation of a binary heap.
\code{data-structures/heap.cpp}
\fi
\ifverbose
\subsection{Dancing Links}
\ifverbose
An implementation of Donald Knuth's Dancing Links data structure. A
linked list supporting deletion and restoration of elements.
\fi
\code{data-structures/dancing_links.cpp}
\fi
\subsection{Misof Tree}
\ifverbose
A simple tree data structure for inserting, erasing, and querying the
$n$th largest element.
\fi
\code{data-structures/misof_tree.cpp}
\ifverbose
\subsection{$k$-d Tree}
\ifverbose
A $k$-dimensional tree supporting fast construction, adding points, and
nearest neighbor queries.
\fi
\code{data-structures/kd_tree.cpp}
\fi
\subsection{Sqrt Decomposition}
\ifverbose
Design principle that supports many operations in $O(\sqrt{n})$ amortized time per operation.
\fi
\code{data-structures/sqrt_decomposition.cpp}
\subsection{Monotonic Queue}
\ifverbose
A queue that supports querying for the minimum element. Useful for sliding window algorithms.
\fi
\code{data-structures/monotonic_queue.cpp}
\subsection{Convex Hull Trick}
If converting to integers, look out for division by 0 and $\pm\infty$.
\code{data-structures/convex_hull_trick.cpp}
And dynamic variant:
\code{data-structures/convex_hull_trick_dynamic.cpp}
\subsection{Sparse Table}
\code{data-structures/sparse_table.cpp}
\section{Graphs}
\subsection{Single-Source Shortest Paths}
\subsubsection{Dijkstra's algorithm}
\ifverbose
An implementation of Dijkstra's algorithm. It runs in
$\Theta(|E|\log{|V|})$ time.
\fi
\code{graph/dijkstra.cpp}
\subsubsection{Bellman-Ford algorithm}
\ifverbose
The Bellman-Ford algorithm solves the single-source shortest paths
problem in $O(|V||E|)$ time. It is slower than Dijkstra's
algorithm, but it works on graphs with negative edges and has the
ability to detect negative cycles, neither of which Dijkstra's
algorithm can do.
\fi
\code{graph/bellman_ford.cpp}
\subsubsection{IDA$^\star$ algorithm}
\code{graph/idastar.cpp}
\ifverbose
\subsection{All-Pairs Shortest Paths}
\subsubsection{Floyd-Warshall algorithm}
The Floyd-Warshall algorithm solves the all-pairs shortest paths
problem in $O(|V|^3)$ time.
\code{graph/floyd_warshall.cpp}
\fi
\subsection{Strongly Connected Components}
\subsubsection{Kosaraju's algorithm}
\ifverbose
Kosaraju's algorithm finds strongly connected components of a
directed graph in $O(|V|+|E|)$ time.
\fi
Returns a Union-Find of the SCCs, as well as a topological ordering
of the SCCs. Note that the ordering specifies a random element from
each SCC, not the UF parents!
\code{graph/scc.cpp}
\subsection{Cut Points and Bridges}
\code{graph/cut_points_and_bridges.cpp}
\ifverbose
\subsection{Minimum Spanning Tree}
\subsubsection{Kruskal's algorithm}
\code{graph/kruskals_mst.cpp}
\fi
\ifverbose
\subsection{Topological Sort}
\subsubsection{Modified Depth-First Search}
\code{graph/tsort.cpp}
\fi
\subsection{Euler Path}
\ifverbose
Finds an euler path (or circuit) in a directed graph, or reports that
none exist.
\fi
\code{graph/euler_path.cpp}
And an undirected version, which finds a cycle.
\code{graph/euler_path_undirected.cpp}
\subsection{Bipartite Matching}
\subsubsection{Alternating Paths algorithm}
\ifverbose
The alternating paths algorithm solves bipartite matching in $O(mn^2)$
time, where $m$, $n$ are the number of vertices on the left and right
side of the bipartite graph, respectively.
\fi
\code{graph/bipartite_matching.cpp}
\subsubsection{Hopcroft-Karp algorithm}
\ifverbose
An implementation of Hopcroft-Karp algorithm for bipartite
matching.
\fi
Running time is $O(|E|\sqrt{|V|})$.
\code{graph/hopcroft_karp.cpp}
\subsubsection{Minimum Vertex Cover in Bipartite Graphs}
\code{graph/bipartite_mvc.cpp}
\subsection{Maximum Flow}
\subsubsection{Dinic's algorithm}
An implementation of Dinic's algorithm that runs in
$O(|V|^2|E|)$.
\ifverbose
It computes the maximum flow of a flow network.
\fi
\code{graph/dinic.cpp}
\ifverbose
\subsubsection{Edmonds Karp's algorithm}
An implementation of Edmonds Karp's algorithm that runs in
$O(|V||E|^2)$.
\ifverbose
It computes the maximum flow of a flow network.
\fi
\code{graph/edmonds_karps.cpp}
\fi
\subsection{Minimum Cost Maximum Flow}
\ifverbose
An implementation of Edmonds Karp's algorithm, modified to find
shortest path to augment each time (instead of just any path). It
computes the maximum flow of a flow network, and when there are
multiple maximum flows, finds the maximum flow with minimum cost.
\fi
Running time is $O(|V|^2|E|\log|V|)$.
\code{graph/edmonds_karps_mcmf.cpp}
\subsection{All Pairs Maximum Flow}
\subsubsection{Gomory-Hu Tree}
An implementation of the Gomory-Hu Tree. The spanning tree is constructed using Gusfield's algorithm
in $O(|V| ^ 2)$ plus $|V|-1$ times the time it takes to calculate the maximum flow.
If Dinic's algorithm is used to calculate the max flow, the running time is $O(|V|^3|E|)$.
NOTE: Not sure if it works correctly with disconnected graphs.
\code{graph/gomory_hu_tree.cpp}
\subsection{Heavy-Light Decomposition}
\code{graph/hld.cpp}
\subsection{Centroid Decomposition}
\code{graph/centroid_decomposition.cpp}
\subsection{Least Common Ancestors, Binary Jumping}
\code{graph/lca.cpp}
\subsection{Tarjan's Off-line Lowest Common Ancestors Algorithm}
\code{graph/tarjan_olca.cpp}
\subsection{Minimum Mean Weight Cycle}
Given a strongly connected directed graph, finds the cycle of minimum
mean weight. If you have a graph that is not strongly connected, run
this on each strongly connected component.
\code{graph/min_mean_cycle.cpp}
\subsection{Minimum Arborescence}
Given a weighted directed graph, finds a subset of edges of minimum
total weight so that there is a unique path from the root $r$ to each
vertex. Returns a vector of size $n$, where the $i$th element is the
edge for the $i$th vertex. The answer for the root is undefined!
\code{graph/arborescence.cpp}
\subsection{Blossom algorithm}
Finds a maximum matching in an arbitrary graph in $O(|V|^4)$ time. Be
wary of loop edges.
\code{graph/blossom.cpp}
\subsection{Maximum Density Subgraph}
Given (weighted) undirected graph $G$. Binary search density. If $g$ is
current density, construct flow network: $(S, u, m)$, $(u, T,
m+2g-d_u)$, $(u,v,1)$, where $m$ is a large constant (larger than sum
of edge weights). Run floating-point max-flow. If minimum cut has empty
$S$-component, then maximum density is smaller than $g$, otherwise it's
larger. Distance between valid densities is at least $1/(n(n-1))$. Edge
case when density is $0$. This also works for weighted graphs by
replacing $d_u$ by the weighted degree, and doing more iterations (if
weights are not integers).
\subsection{Maximum-Weight Closure}
Given a vertex-weighted directed graph $G$. Turn the graph into a flow
network, adding weight $\infty$ to each edge. Add vertices $S,T$. For
each vertex $v$ of weight $w$, add edge $(S,v,w)$ if $w\geq 0$, or edge
$(v,T,-w)$ if $w<0$. Sum of positive weights minus minimum $S-T$ cut is
the answer. Vertices reachable from $S$ are in the closure. The
maximum-weight closure is the same as the complement of the
minimum-weight closure on the graph with edges reversed.
\subsection{Maximum Weighted Independent Set in a Bipartite Graph}
This is the same as the minimum weighted vertex cover. Solve this by
constructing a flow network with edges $(S,u,w(u))$ for $u\in L$,
$(v,T,w(v))$ for $v\in R$ and $(u,v,\infty)$ for $(u,v)\in E$. The
minimum $S,T$-cut is the answer. Vertices adjacent to a cut edge are
in the vertex cover.
\subsection{Synchronizing word problem}
A DFA has a synchronizing word (an input sequence that moves all states
to the same state) iff.\ each pair of states has a synchronizing word.
That can be checked using reverse DFS over pairs of states. Finding the
shortest synchronizing word is NP-complete.
\subsection{Max flow with lower bounds on edges}
% TODO: Test this!
Change edge $(u,v,l\leq f\leq c)$ to $(u,v,f\leq c-l)$. Add edge
$(t,s,\infty)$. Create super-nodes $S$, $T$. Let $M(u) = \sum_{v}
l(v,u) - \sum_{v} l(u,v)$. If $M(u)<0$, add edge $(u,T,-M(u))$, else
add edge $(S,u,M(u))$. Max flow from $S$ to $T$. If all edges from $S$
are saturated, then we have a feasible flow. Continue running max flow
from $s$ to $t$ in original graph.
% TODO: Was there something similar for vertex capacities that we should add?
\subsection{Tutte matrix for general matching}
Create an $n\times n$ matrix $A$. For each edge $(i,j)$, $i<j$, let
$A_{ij} = x_{ij}$ and $A_{ji} = -x_{ij}$. All other entries are $0$.
The determinant of $A$ is zero iff.\ the graph has a perfect matching.
A randomized algorithm uses the Schwartz--Zippel lemma to check if it is
zero.
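For example, a single edge $\{1,2\}$ gives
$A=\left(\begin{smallmatrix}0&x_{12}\\-x_{12}&0\end{smallmatrix}\right)$, so
$\det A = x_{12}^2\neq 0$, while any graph with an odd number of vertices has
$\det A=0$ (a skew-symmetric matrix of odd order is singular), consistent with
the absence of a perfect matching.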
\section{Strings}
\subsection{The Knuth-Morris-Pratt algorithm}
\ifverbose
An implementation of the Knuth-Morris-Pratt algorithm. Runs in $O(n+m)$
time, where $n$ and $m$ are the lengths of the string and the pattern.
\fi
\code{strings/kmp.cpp}
\subsection{The $Z$ algorithm}
\ifverbose
Given a string $S$, $Z_i(S)$ is the longest substring of $S$ starting
at $i$ that is also a prefix of $S$. The $Z$ algorithm computes these
$Z$ values in $O(n)$ time, where $n = |S|$. $Z$ values can, for
example, be used to find all occurrences of a pattern $P$ in a string
$T$ in linear time. This is accomplished by computing $Z$ values of $S
= P T$, and looking for all $i$ such that $Z_i \geq |P|$.
\fi
\code{strings/z_algorithm.cpp}
\ifverbose
\subsection{Trie}
A Trie class.
\code{strings/trie.cpp}
\fi
\subsection{Suffix Array}
An $O(n \log^2 n)$ construction of a Suffix Array.
\code{strings/suffix_array.cpp}
\subsection{Aho-Corasick Algorithm}
\ifverbose
An implementation of the Aho-Corasick algorithm. Constructs a state
machine from a set of keywords which can be used to search a string for
any of the keywords.
\fi
\code{strings/aho_corasick.cpp}
\subsection{eerTree}
\ifverbose
Constructs an eerTree in $O(n)$, one character at a time.
\fi
\code{strings/eertree.cpp}
% http://arxiv.org/pdf/1506.04862v1.pdf
\subsection{Suffix Automaton}
The minimal automaton that accepts all suffixes of a string, with $O(n)$
construction. The automaton itself is a DAG and is therefore suitable for DP;
examples are counting distinct substrings, and counting occurrences of
substrings and suffixes.
\code{strings/suffix_automaton.cpp}
\subsection{Hashing}
Modulus should be a large prime. Can also use multiple instances with
different moduli to minimize chance of collision.
\code{strings/hasher.cpp}
\section{Mathematics}
\ifverbose
\subsection{Fraction}
A fraction (rational number) class. Note that numbers are stored in
lowest terms.
\code{mathematics/fraction.cpp}
\fi
\ifverbose
\subsection{Big Integer}
\ifverbose
A big integer class.
\fi
\code{mathematics/intx.cpp}
\subsubsection{Fast Multiplication}
\ifverbose
Fast multiplication for the big integer using Fast Fourier Transform.
\fi
\code{mathematics/fastmul.cpp}
\fi
\subsection{Binomial Coefficients}
The binomial coefficient $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ is the
number of ways to choose $k$ items out of a total of $n$ items. Also
contains an implementation of Lucas' theorem for computing the answer
modulo a prime $p$. Use modular multiplicative inverse if needed, and
be very careful of overflows.
\code{mathematics/nck.cpp}
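For reference, Lucas' theorem: writing $n=\sum_i n_ip^i$ and $k=\sum_i k_ip^i$
in base $p$ (with $p$ prime), $\binom{n}{k}\equiv\prod_i\binom{n_i}{k_i}\pmod{p}$.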
\subsection{Euclidean algorithm}
\ifverbose
The Euclidean algorithm computes the greatest common divisor of two
integers $a$, $b$.
\fi
\code{mathematics/gcd.cpp}
The extended Euclidean algorithm computes the greatest common divisor
$d$ of two integers $a$, $b$ and also finds two integers $x$, $y$ such
that $a\times x + b\times y = d$.
\code{mathematics/egcd.cpp}
\subsection{Trial Division Primality Testing}
\ifverbose
An optimized trial division to check whether an integer is prime.
\fi
\code{mathematics/is_prime.cpp}
\subsection{Miller-Rabin Primality Test}
\ifverbose
The Miller-Rabin probabilistic primality test.
\fi
\code{mathematics/miller_rabin.cpp}
\subsection{Pollard's $\rho$ algorithm}
\code{mathematics/pollard_rho.cpp}
\subsection{Sieve of Eratosthenes}
\ifverbose
An optimized implementation of Eratosthenes' Sieve.
\fi
\code{mathematics/prime_sieve.cpp}
\subsection{Divisor Sieve}
\ifverbose
An $O(n)$ prime sieve. Computes the smallest divisor of every number up to $n$.
\fi
\code{mathematics/divisor_sieve.cpp}
\ifverbose
\subsection{Modular Exponentiation}
A function to perform fast modular exponentiation.
\code{mathematics/mod_pow.cpp}
\fi
\subsection{Modular Multiplicative Inverse}
\ifverbose
A function to find a modular multiplicative inverse. Alternatively use
\texttt{mod\_{}pow(a,m-2,m)} when $m$ is prime.
\fi
\code{mathematics/mod_inv.cpp}
\ifverbose
A sieve version:
\fi
\code{mathematics/mod_inv_sieve.cpp}
\subsection{Primitive Root}
\code{mathematics/primitive_root.cpp}
\subsection{Chinese Remainder Theorem}
\ifverbose
An implementation of the Chinese Remainder Theorem.
\fi
\code{mathematics/crt.cpp}
\subsection{Linear Congruence Solver}
Given $ax \equiv b \pmod{n}$, returns $(t,m)$ such that all solutions
are given by $x\equiv t\pmod{m}$. No solutions iff $(0,0)$ is returned.
\code{mathematics/linear_congruence.cpp}
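For reference (one valid pair consistent with the convention above): let
$g=\gcd(a,n)$; a solution exists iff.\ $g\,|\,b$, and then $m=n/g$ and
$t=(b/g)\cdot(a/g)^{-1}\bmod m$.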
\subsection{Berlekamp-Massey algorithm}
Given a sequence of integers in some field, finds a linear recurrence
of minimum order that generates the sequence in $O(n^2)$.
\code{mathematics/berlekamp_massey.cpp}
\subsection{Tonelli-Shanks algorithm}
Given prime $p$ and integer $1\leq n<p$, returns a square root $r$ of
$n$ modulo $p$, if one exists. The other square root is given by $-r$ modulo
$p$.
\code{mathematics/tonelli_shanks.cpp}
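In the special case $p\equiv 3\pmod{4}$, simply $r=n^{(p+1)/4}\bmod p$ (no need
for the full algorithm).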
\subsection{Numeric Integration}
\ifverbose
Numeric integration using Simpson's rule.
\fi
\code{mathematics/numeric_integration.cpp}
\subsection{Linear Recurrence Relation}
Computes the $n$-th term satisfying the linear recurrence relation with
initial terms \texttt{init} and coefficients \texttt{c} in $O(k^2\log n)$.
\code{mathematics/linear_recurrence.cpp}
\subsection{Fast Fourier Transform}
The Cooley-Tukey algorithm for quickly computing the discrete Fourier
transform. The \texttt{fft} function only supports powers of two. The
\texttt{czt} function implements the Chirp Z-transform and supports any
size, but is slightly slower.
\code{mathematics/fft.cpp}
\subsection{Number-Theoretic Transform}
Other possible moduli: $2\,113\,929\,217$ ($2^{25}$), $2\,013\,265\,920\,268\,435\,457$ ($2^{28}$, with $g=5$). % XXX: Should $g=1976010382590097340$, or is $g=5$ alright as well?
\code{mathematics/ntt.cpp}
\subsection{Fast Hadamard Transform}
Computes the Hadamard transform of the given array. Can be used to
compute the \texttt{XOR}-convolution of arrays, exactly like with FFT.
For \texttt{AND}-convolution, use $(x+y,y)$ and $(x-y,y)$. For
\texttt{OR}-convolution, use $(x,x+y)$ and $(x,-x+y)$. \textbf{Note}:
Size of array must be a power of $2$.
\code{mathematics/fht.cpp}
\subsection{Tridiagonal Matrix Algorithm}
Solves a tridiagonal system of linear equations $a_ix_{i-1} + b_ix_i +
c_ix_{i+1} = d_i$ where $a_1 = c_n = 0$. Beware of numerical
instability.
\code{mathematics/tridiagonal.cpp}
\subsection{Mertens Function}
Mertens function is $M(n) = \sum_{i=1}^n \mu(i)$. Let $L\approx
(n\log{\log{n}})^{2/3}$ and the algorithm runs in $O(n^{2/3})$.
\ifverbose
\else
Can also be easily changed to compute the summatory $\Phi$.
\fi
\code{mathematics/mertens.cpp}
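One common way to realize this bound is the identity
$\sum_{d=1}^{n} M\!\left(\lfloor n/d\rfloor\right)=1$, i.e.\
$M(n)=1-\sum_{d=2}^{n}M\!\left(\lfloor n/d\rfloor\right)$, evaluated with
memoization over the $O(\sqrt{n})$ distinct values of $\lfloor n/d\rfloor$,
with values up to $L$ taken from a sieve.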
\ifverbose
\subsection{Summatory Phi}
The summatory phi function $\Phi(n) = \sum_{i=1}^n \phi(i)$. Let $L\approx
(n\log{\log{n}})^{2/3}$ and the algorithm runs in $O(n^{2/3})$.
\code{mathematics/summatory_phi.cpp}
\fi
\subsection{Prime $\pi$}
Returns $\pi\left(\lfloor n/k\rfloor\right)$ for all $1\leq k \leq n$,
where $\pi(n)$ is the number of primes $\leq n$. Can also be modified
to accumulate any multiplicative function over the primes.
\code{mathematics/primepi.cpp}
\subsection{Josephus problem}
Last man standing out of $n$ if every $k$th is killed. Zero-based, and
does not kill $0$ on first pass.
\code{mathematics/josephus.cpp}
\subsection{Number of Integer Points under Line}
Count the number of integer solutions to $Ax+By\leq C$, $0 \leq x \leq
n$, $0 \leq y$. In other words, evaluate the sum $\sum_{x=0}^n
\left\lfloor \frac{C-Ax}{B} + 1\right\rfloor$. To count all solutions,
let $n = \left\lfloor \frac{C}{A}\right\rfloor$. In any case, it must hold
that $C-nA \geq 0$. Be very careful about overflows.
\code{mathematics/floor_sum.cpp}
\subsection{Numbers and Sequences}
Some random prime numbers: 1031, 32771, 1048583, 33554467,
1073741827, 34359738421, 1099511627791, 35184372088891,
1125899906842679, 36028797018963971.
More random prime numbers: $10^3 + \{-9,-3,9,13\}$, $10^6+
\{-17,3,33\}$, $10^9+ \{7,9,21,33,87\}$.
Some maximal divisor counts:
\begin{tabular}{rr}
840 & 32 \\
720\,720 & 240 \\
735\,134\,400 & 1\,344 \\
963\,761\,198\,400 & 6\,720 \\
866\,421\,317\,361\,600 & 26\,880 \\
897\,612\,484\,786\,617\,600 & 103\,680 \\
\end{tabular}
\subsection{Game Theory}
\begin{itemize}
\item Useful identity: $\oplus_{x=0}^{a-1}\, x = [0,a-1,1,a][a\% 4]$
\item \textbf{Nim}: Winning position if $n_1\oplus \cdots \oplus n_k = 0$
\item \textbf{Misère Nim}: Winning position if some $n_i > 1$ and $n_1\oplus \cdots \oplus n_k = 0$, or all $n_i \leq 1$ and $n_1\oplus \cdots \oplus n_k = 1$
\end{itemize}
\section{Geometry}
\subsection{Primitives}
\ifverbose
Geometry primitives.
\fi
\code{geometry/primitives.cpp}
\subsection{Lines}
\ifverbose
Line related functions.
\fi
\code{geometry/lines.cpp}
\subsection{Circles}
\ifverbose
Circle related functions.
\fi
\code{geometry/circles.cpp}
\subsection{Polygon}
\ifverbose
Polygon primitives.
\fi
\code{geometry/polygon.cpp}
\subsection{Convex Hull}
\ifverbose
An algorithm that finds the Convex Hull of a set of points.
\fi
NOTE: Doesn't work on some weird edge cases. (A small case that
included three collinear points would return the same point on both the
upper and lower hull.)
\code{geometry/convex_hull.cpp}
\subsection{Line Segment Intersection}
\ifverbose
Computes the intersection between two line segments.
\fi
\code{geometry/line_segment_intersect.cpp}
\subsection{Great-Circle Distance}
Computes the distance between two points (given as latitude/longitude
coordinates) on a sphere of radius $r$.
\code{geometry/gc_distance.cpp}
\subsection{Smallest Enclosing Circle}
Computes the smallest enclosing circle using Welzl's algorithm in
expected $O(n)$ time.
\code{geometry/welzl.cpp}
\subsection{Closest Pair of Points}
\ifverbose
A sweep line algorithm for computing the distance between the closest
pair of points.
\fi
\code{geometry/closest_pair.cpp}
\subsection{3D Primitives}
\ifverbose
Three-dimensional geometry primitives.
\fi
\code{geometry/primitives3d.cpp}
\subsection{3D Convex Hull}
\code{geometry/convex_hull3d.cpp}
\subsection{Polygon Centroid}
\code{geometry/polygon_centroid.cpp}
\subsection{Rotating Calipers}
\code{geometry/rotating_calipers.cpp}
\subsection{Rectilinear Minimum Spanning Tree}
Given a set of $n$ points in the plane, the aim is to find a
minimum spanning tree connecting these $n$ points, assuming the
Manhattan distance is used. The function \texttt{candidates} returns at
most $4n$ edges that are a superset of the edges in a minimum spanning
tree, and then one can use Kruskal's algorithm.
\code{geometry/rmst.cpp}
\subsection{Line upper/lower envelope}
To find the upper/lower envelope of a collection of lines $a_i+b_i x$,
plot the points $(b_i,a_i)$, add the point $(0,\pm \infty)$ (depending
on if upper/lower envelope is desired), and then find the convex hull.
\subsection{Formulas}
Let $a = (a_x, a_y)$ and $b = (b_x, b_y)$ be two-dimensional vectors.
\begin{itemize}
\item $a\cdot b = |a||b|\cos{\theta}$, where $\theta$ is the angle
between $a$ and $b$.
\item $a\times b = |a||b|\sin{\theta}$, where $\theta$ is the
signed angle between $a$ and $b$.
\item $a\times b$ is equal to the area of the parallelogram with
two of its sides formed by $a$ and $b$. Half of that is the
area of the triangle formed by $a$ and $b$.
\item The line going through $a$ and $b$ is $Ax+By=C$ where $A=b_y-a_y$, $B=a_x-b_x$, $C=Aa_x+Ba_y$.
\item Two lines $A_1x+B_1y=C_1$, $A_2x+B_2y=C_2$ are parallel iff.\ $D=A_1B_2-A_2B_1$ is zero. Otherwise their unique intersection is $(B_2C_1-B_1C_2,A_1C_2-A_2C_1)/D$.
\item \textbf{Euler's formula:} $V - E + F = 2$
\item Side lengths $a,b,c$ can form a triangle iff.\ $a+b>c$, $b+c>a$ and $a+c>b$.
\item Sum of internal angles of a regular convex $n$-gon is $(n-2)\pi$.
\item \textbf{Law of sines:} $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$
\item \textbf{Law of cosines:} $b^2 = a^2 + c^2 - 2ac\cos B$
\item Internal tangents of circles $(c_1,r_1), (c_2,r_2)$ intersect at $(c_1r_2+c_2r_1)/(r_1+r_2)$, external intersect at $(c_1r_2-c_2r_1)/(r_2-r_1)$.
\end{itemize}
\section{Other Algorithms}
\subsection{2SAT}
\ifverbose
A fast 2SAT solver.
\fi
\code{other/two_sat.cpp}
\subsection{DPLL Algorithm}
A SAT solver that can solve a random 1000-variable SAT instance within a second.
\code{other/dpll.cpp}
\subsection{Stable Marriage}
\ifverbose
The Gale-Shapley algorithm for solving the stable marriage problem.
\fi
\code{other/stable_marriage.cpp}
\subsection{Algorithm X}
\ifverbose
An implementation of Knuth's Algorithm X, using dancing links. Solves the Exact Cover problem.
\fi
\code{other/algorithm_x.cpp}
\subsection{Matroid Intersection}
Computes the maximum weight and cardinality intersection of two
matroids, specified by implementing the required abstract methods, in
$O(n^3(M_1+M_2))$.
\code{other/matroid_intersection.cpp}
\subsection{$n$th Permutation}
\ifverbose
A very fast algorithm for computing the $n$th permutation of the list
$\{0,1,\ldots,k-1\}$.
\fi
\code{other/nth_permutation.cpp}
\subsection{Cycle-Finding}
\ifverbose
An implementation of Floyd's Cycle-Finding algorithm.
\fi
\code{other/floyds_algorithm.cpp}
\subsection{Longest Increasing Subsequence}
\code{other/lis.cpp}
\subsection{Dates}
\ifverbose
Functions to simplify date calculations.
\fi
\code{other/dates.cpp}
\subsection{Simulated Annealing}
An example use of Simulated Annealing to find a permutation of length $n$
that maximizes $\sum_{i=1}^{n-1}|p_i - p_{i+1}|$.
\code{other/simulated_annealing.cpp}
\subsection{Simplex}
\code{other/simplex.cpp}
\subsection{Fast Square Testing}
An optimized test for square integers.
\code{tricks/is_square.cpp}
\subsection{Fast Input Reading}
\ifverbose
If input or output is huge, sometimes it is beneficial to optimize the
input reading/output writing. This can be achieved by reading all input
in at once (using fread), and then parsing it manually. Output can also
be stored in an output buffer and then dumped once in the end (using
fwrite). A simpler, but still effective, way to achieve speed is to use
the following input reading method.
\fi
\code{tricks/fast_input.cpp}
\ifverbose
\subsection{128-bit Integer}
GCC has a 128-bit integer data type named \texttt{\_\_int128}. Useful
if doing multiplication of 64-bit integers, or something needing a
little more than 64-bits to represent. There's also
\texttt{\_\_float128}.
\fi
\subsection{Bit Hacks}
\code{tricks/snoob.cpp}
\newpage
\begin{tabular}{@{}l|l|l@{}}
\toprule
Catalan & $C_0=1$, $C_n=\frac{1}{n+1}\binom{2n}{n} = \sum_{i=0}^{n-1}C_iC_{n-i-1} = \frac{4n-2}{n+1}C_{n-1}$ & \\
Stirling 1st kind & $\left[{0\atop 0}\right]=1$, $\left[{n\atop 0}\right]=\left[{0\atop n}\right]=0$, $\left[{n\atop k}\right]=(n-1)\left[{n-1\atop k}\right]+\left[{n-1\atop k-1}\right]$ & \#perms of $n$ objs with exactly $k$ cycles\\
Stirling 2nd kind & $\left\{{n\atop 1}\right\}=\left\{{n\atop n}\right\}=1$, $\left\{{n\atop k}\right\} = k \left\{{ n-1 \atop k }\right\} + \left\{{n-1\atop k-1}\right\}$ & \#ways to partition $n$ objs into $k$ nonempty sets\\
Euler & $\left \langle {n\atop 0} \right \rangle = \left \langle {n\atop n-1} \right \rangle = 1 $, $\left \langle {n\atop k} \right \rangle = (k+1) \left \langle {n-1\atop {k}} \right \rangle + (n-k)\left \langle {{n-1}\atop {k-1}} \right \rangle$ & \#perms of $n$ objs with exactly $k$ ascents \\
Euler 2nd Order & $\left \langle \!\!\left \langle {n\atop k} \right \rangle \!\! \right \rangle = (k+1) \left \langle \!\! \left \langle {{n-1}\atop {k}} \right \rangle \!\! \right \rangle +(2n-k-1)\left \langle \!\! \left \langle {{n-1}\atop {k-1}} \right \rangle \!\! \right \rangle$ & \#perms of ${1,1,2,2,...,n,n}$ with exactly $k$ ascents \\
Bell & $B_1 = 1$, $B_n = \sum_{k=0}^{n-1} B_k \binom{n-1}{k} = \sum_{k=0}^n\left\{{n\atop k}\right\}$ & \#partitions of $1..n$ (Stirling 2nd, no limit on k)\\
\bottomrule
\end{tabular}
\vspace{10pt}
\begin{tabular}{ll}
\#labeled rooted trees & $n^{n-1}$ \\
\#labeled unrooted trees & $n^{n-2}$ \\
\#forests of $k$ rooted trees & $\frac{k}{n}\binom{n}{k}n^{n-k}$ \\
% Kirchoff's theorem
$\sum_{i=1}^n i^2 = n(n+1)(2n+1)/6$ & $\sum_{i=1}^n i^3 = n^2(n+1)^2/4$ \\
$!n = n\times!(n-1)+(-1)^n$ & $!n = (n-1)(!(n-1)+!(n-2))$ \\
$\sum_{i=1}^n \binom{n}{i} F_i = F_{2n}$ & $\sum_{i} \binom{n-i}{i} = F_{n+1}$ \\
$\sum_{k=0}^n \binom{k}{m} = \binom{n+1}{m+1}$ & $x^k = \sum_{i=0}^k i!\stirling{k}{i}\binom{x}{i} = \sum_{i=0}^k \left\langle {k \atop i} \right\rangle\binom{x+i}{k}$ \\
$a\equiv b\pmod{x,y} \Rightarrow a\equiv b\pmod{\lcm(x,y)}$ & $\sum_{d|n} \phi(d) = n$ \\
$ac\equiv bc\pmod{m} \Rightarrow a\equiv b\pmod{\frac{m}{\gcd(c,m)}}$ & $(\sum_{d|n} \sigma_0(d))^2 = \sum_{d|n} \sigma_0(d)^3$ \\
$p$ prime $\Leftrightarrow (p-1)!\equiv -1\pmod{p}$ & $\gcd(n^a-1,n^b-1) = n^{\gcd(a,b)}-1$ \\
$\sigma_x(n) = \prod_{i=0}^{r} \frac{p_i^{(a_i + 1)x} - 1}{p_i^x - 1}$ & $\sigma_0(n) = \prod_{i=0}^r (a_i + 1)$ \\
$\sum_{k=0}^m (-1)^k \binom{n}{k} = (-1)^m \binom{n-1}{m}$ & \\
$2^{\omega(n)} = O(\sqrt{n})$ & $\sum_{i=1}^n 2^{\omega(i)} = O(n \log n)$ \\
% Kinematic equations
$d = v_i t + \frac{1}{2}at^2$ & $v_f^2 = v_i^2 + 2ad$ \\
$v_f = v_i + at$ & $d = \frac{v_i + v_f}{2}t$ \\
\end{tabular}
\subsection{The Twelvefold Way}
Putting $n$ balls into $k$ boxes.\\
\begin{tabular}{@{}c|c|c|c|c|l@{}}
Balls & same & distinct & same & distinct & \\
Boxes & same & same & distinct & distinct & Remarks\\
\hline
- & $\mathrm{p}_k(n)$ & $\sum_{i=0}^k \stirling{n}{i}$ & $\binom{n+k-1}{k-1}$ & $k^n$ & $\mathrm{p}_k(n)$: \#partitions of $n$ into $\le k$ positive parts \\
$\mathrm{size}\ge 1$ & $\mathrm{p}(n,k)$ & $\stirling{n}{k}$ & $\binom{n-1}{k-1}$ & $k!\stirling{n}{k}$ & $\mathrm{p}(n,k)$: \#partitions of $n$ into $k$ positive parts \\
$\mathrm{size}\le 1$ & $[n \le k]$ & $[n \le k]$ & $\binom{k}{n}$ & $n!\binom{k}{n}$ & $[cond]$: $1$ if $cond=true$, else $0$\\
\bottomrule
\end{tabular}
\end{multicols*}
\onecolumn
\begin{multicols*}{3}
\section{Useful Information}
\section{Misc}
\subsection{Debugging Tips}
\begin{myitemize}
\item Stack overflow? Recursive DFS on tree that is actually a long path?
\item Floating-point numbers
\begin{itemize}
\item Getting \texttt{NaN}? Make sure \texttt{acos} etc.\ are
not getting values out of their range (perhaps
\texttt{1+eps}).
\item Rounding negative numbers?
\item Outputting in scientific notation?
\end{itemize}
\item Wrong Answer?
\begin{itemize}
\item Read the problem statement again!
\item Are multiple test cases being handled correctly?
Try repeating the same test case many times.
\item Integer overflow?
\item Think very carefully about boundaries of all input parameters
\item Try out possible edge cases:
\begin{itemize}
\item $n=0, n=-1, n=1, n=2^{31}-1$ or $n=-2^{31}$
\item List is empty, or contains a single element
\item $n$ is even, $n$ is odd
\item Graph is empty, or contains a single vertex
\item Graph is a multigraph (loops or multiple edges)
\item Polygon is concave or non-simple
\end{itemize}
\item Is initial condition wrong for small cases?
\item Are you sure the algorithm is correct?
\item Explain your solution to someone.
\item Are you using any functions that you don't completely understand? Maybe STL functions?
\item Maybe you (or someone else) should rewrite the solution?
\item Can the input line be empty?
\end{itemize}
\item Run-Time Error?
\begin{itemize}
\item Is it actually Memory Limit Exceeded?
\end{itemize}
\end{myitemize}
\subsection{Solution Ideas}
\begin{myitemize}
\item Dynamic Programming
\begin{itemize}
\item Parsing CFGs: CYK Algorithm
\item Drop a parameter, recover from others
\item Swap answer and a parameter
\item When grouping: try splitting in two
\item $2^k$ trick
\item When optimizing
\begin{itemize}
\item Convex hull optimization
\begin{itemize}
\item $\mathrm{dp}[i] = \min_{j<i}\{\mathrm{dp}[j] + b[j] \times a[i]\}$
\item $b[j] \geq b[j+1]$
\item optionally $a[i] \leq a[i+1]$
\item $O(n^2)$ to $O(n)$
\end{itemize}
\item Divide and conquer optimization
\begin{itemize}
\item $\mathrm{dp}[i][j] = \min_{k<j}\{\mathrm{dp}[i-1][k] + C[k][j]\}$
\item $A[i][j] \leq A[i][j+1]$
\item $O(kn^2)$ to $O(kn\log{n})$
\item sufficient: $C[a][c] + C[b][d] \leq C[a][d] + C[b][c]$, $a\leq b\leq c\leq d$ (QI)
\end{itemize}
\item Knuth optimization
\begin{itemize}
\item $\mathrm{dp}[i][j] = \min_{i<k<j}\{\mathrm{dp}[i][k] + \mathrm{dp}[k][j] + C[i][j]\}$
\item $A[i][j-1] \leq A[i][j] \leq A[i+1][j]$
\item $O(n^3)$ to $O(n^2)$
\item sufficient: QI and $C[b][c] \leq C[a][d]$, $a\leq b\leq c\leq d$
\end{itemize}
\end{itemize}
\end{itemize}
\item Greedy
\item Randomized
\item Optimizations
\begin{itemize}
\item Use bitset (/64)
\item Switch order of loops (cache locality)
\end{itemize}
\item Process queries offline
\begin{itemize}
\item Mo's algorithm
\end{itemize}
\item Square-root decomposition
\item Precomputation
\item Efficient simulation
\begin{itemize}
\item Mo's algorithm
\item Sqrt decomposition
\item Store $2^k$ jump pointers
\end{itemize}
\item Data structure techniques
\begin{itemize}
\item Sqrt buckets
\item Store $2^k$ jump pointers
\item $2^k$ merging trick
\end{itemize}
\item Counting
\begin{itemize}
\item Inclusion-exclusion principle
\item Generating functions
\end{itemize}
\item Graphs
\begin{itemize}
\item Can we model the problem as a graph?
\item Can we use any properties of the graph?
\item Strongly connected components
\item Cycles (or odd cycles)
\item Bipartite (no odd cycles)
\begin{itemize}
\item Bipartite matching
\item Hall's marriage theorem
\item Stable Marriage
\end{itemize}
\item Cut vertex/bridge
\item Biconnected components
\item Degrees of vertices (odd/even)
\item Trees
\begin{itemize}
\item Heavy-light decomposition
\item Centroid decomposition
\item Least common ancestor
\item Centers of the tree
\end{itemize}
\item Eulerian path/circuit
\item Chinese postman problem
\item Topological sort
\item (Min-Cost) Max Flow
\item Min Cut
\begin{itemize}
\item Maximum Density Subgraph
\end{itemize}
\item Huffman Coding
\item Min-Cost Arborescence
\item Steiner Tree
\item Kirchoff's matrix tree theorem
\item Pr\"ufer sequences
\item Lov\'asz Toggle
\item Look at the DFS tree (which has no cross-edges)
\item Is the graph a DFA or NFA?
\begin{itemize}
\item Is it the Synchronizing word problem?
\end{itemize}
\end{itemize}
\item Mathematics
\begin{itemize}
\item Is the function multiplicative?
\item Look for a pattern
\item Permutations
\begin{itemize}
\item Consider the cycles of the permutation
\end{itemize}
\item Functions
\begin{itemize}
\item Sum of piecewise-linear functions is a piecewise-linear function
\item Sum of convex (concave) functions is convex (concave)
\end{itemize}
\item Modular arithmetic
\begin{itemize}
\item Chinese Remainder Theorem
\item Linear Congruence
\end{itemize}
\item Sieve
\item System of linear equations
\item Values too big to represent?
\begin{itemize}
\item Compute using the logarithm
\item Divide everything by some large value
\end{itemize}
\item Linear programming
\begin{itemize}
\item Is the dual problem easier to solve?
\end{itemize}
\item Can the problem be modeled as a different combinatorial problem? Does that simplify calculations?
\end{itemize}
\item Logic
\begin{itemize}
\item 2-SAT
\item XOR-SAT (Gauss elimination or Bipartite matching)
\end{itemize}
\item Meet in the middle
\item Only work with the smaller half ($\log(n)$)
\item Strings
\begin{itemize}
\item Trie (maybe over something weird, like bits)
\item Suffix array
\item Suffix automaton (+DP?)
\item Aho-Corasick
\item eerTree
\item Work with $S+S$
\end{itemize}
\item Hashing
\item Euler tour, tree to array
\item Segment trees
\begin{itemize}
\item Lazy propagation
\item Persistent
\item Implicit
\item Segment tree of X
\end{itemize}
\item Geometry
\begin{itemize}
\item Minkowski sum (of convex sets)
\item Rotating calipers
\item Sweep line (horizontally or vertically?)
\item Sweep angle
\item Convex hull
\end{itemize}
\item Fix a parameter (possibly the answer).
\item Are there few distinct values?
\item Binary search
\item Sliding Window (+ Monotonic Queue)
\item Computing a Convolution? Fast Fourier Transform
\item Computing a 2D Convolution? FFT on each row, and then on each column
\item Exact Cover (+ Algorithm X)
\item Cycle-Finding
\item What is the smallest set of values that identify the solution? The cycle structure of the permutation? The powers of primes in the factorization?
\item Look at the complement problem
\begin{itemize}
\item Minimize something instead of maximizing
\end{itemize}
\item Immediately enforce necessary conditions. (All values greater than 0? Initialize them all to 1)
\item Add large constant to negative numbers to make them positive
\item Counting/Bucket sort
\end{myitemize}
\section{Formulas}
% \item Number of permutations of length $n$ that have no fixed
% points (derangements): $D_0 = 1, D_1 = 0, D_n = (n - 1)(D_{n-1}
% + D_{n-2})$
% \item Number of permutations of length $n$ that have exactly $k$
% fixed points: $\binom{n}{k} D_{n-k}$
\begin{itemize}[leftmargin=*]
\item \textbf{Legendre symbol:} $\left(\frac{a}{b}\right) = a^{(b-1)/2} \pmod{b}$, $b$ odd prime.
\item \textbf{Heron's formula:} A triangle with side lengths
$a,b,c$ has area $\sqrt{s(s-a)(s-b)(s-c)}$ where $s =
\frac{a+b+c}{2}$.
\item \textbf{Pick's theorem:} A polygon on an integer grid
strictly containing $i$ lattice points and having $b$ lattice
points on the boundary has area $i + \frac{b}{2} - 1$. (Nothing
similar in higher dimensions)
\item \textbf{Euler's totient:} The number of integers less than
$n$ that are coprime to $n$ are $n\prod_{p|n}\left(1 - \frac{1}{p}\right)$
where each $p$ is a distinct prime factor of $n$.
\item \textbf{König's theorem:} In any bipartite graph $G=(L\cup R,E)$, the number
of edges in a maximum matching is equal to the number of
vertices in a minimum vertex cover. Let $U$ be the set of
unmatched vertices in $L$, and $Z$ be the set of vertices that
are either in $U$ or are connected to $U$ by an alternating
path. Then $K=(L\setminus Z)\cup(R\cap Z)$ is the minimum
vertex cover.
\item A minimum Steiner tree for $n$ vertices requires at most $n-2$ additional Steiner vertices.
\item The number of vertices of a graph is equal to its minimum
vertex cover number plus the size of a maximum independent set.
\item \textbf{Lagrange polynomial} through points $(x_0,y_0),\ldots,(x_k,y_k)$ is $L(x) = \sum_{j=0}^k y_j \prod_{\shortstack{$\scriptscriptstyle 0\leq m \leq k$ \\ $\scriptscriptstyle m\neq j$}} \frac{x-x_m}{x_j - x_m}$
\item \textbf{Hook length formula:} If $\lambda$ is a Young diagram and $h_{\lambda}(i,j)$ is the hook-length of cell $(i,j)$, then the number of Young tableaux is $d_{\lambda} = n!/\prod h_{\lambda}(i,j)$.
\item \textbf{Möbius inversion formula:} If $f(n) = \sum_{d|n} g(d)$, then $g(n) = \sum_{d|n} \mu(d) f(n/d)$. If $f(n) = \sum_{m=1}^n g(\lfloor n/m\rfloor)$, then $g(n) = \sum_{m=1}^n \mu(m)f(\lfloor\frac{n}{m}\rfloor)$.
  \item \#primitive Pythagorean triples with hypotenuse $<n$ is approximately $n/(2\pi)$.
  \item \textbf{Frobenius Number:} largest number which can't be
    expressed as a linear combination of numbers $a_1,\ldots,a_n$
    with non-negative coefficients. $g(a_1,a_2) = a_1a_2-a_1-a_2$,
    $N(a_1,a_2)=(a_1-1)(a_2-1)/2$. $g(d\cdot a_1,d\cdot a_2,a_3) =
    d\cdot g(a_1,a_2,a_3) + a_3(d-1)$. An integer $x>\left(\max_i
    a_i\right)^2$ can be expressed in such a way iff.\
    $\gcd(a_1,\ldots,a_n)\ |\ x$.
\end{itemize}
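For applying the Möbius inversion formulas above, a linear-sieve sketch
for computing $\mu(1),\ldots,\mu(n)$ (Python; all names here are
illustrative):
\begin{verbatim}
def mobius_sieve(n):
    """mu[1..n] via a linear (Euler) sieve."""
    mu = [0] * (n + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0        # p^2 divides i*p
                break
            mu[i * p] = -mu[i]       # one extra prime factor
    return mu
\end{verbatim}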
\subsection{Physics}
\begin{itemize}
\item \textbf{Snell's law:} $\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}$
\end{itemize}
\subsection{Markov Chains}
A Markov Chain can be represented as a weighted directed graph of
states, where the weight of an edge represents the probability of
transitioning over that edge in one timestep. Let $P^{(m)} = (p^{(m)}_{ij})$
be the probability matrix of transitioning from state $i$ to state $j$
in $m$ timesteps, and note that $P^{(1)}$ is the adjacency matrix of
the graph. \textbf{Chapman-Kolmogorov:} $p^{(m+n)}_{ij} = \sum_{k}
p^{(m)}_{ik} p^{(n)}_{kj}$. It follows that $P^{(m+n)} =
P^{(m)}P^{(n)}$ and $P^{(m)} = P^m$. If $p^{(0)}$ is the initial
probability distribution (a vector), then $p^{(0)}P^{(m)}$ is the
probability distribution after $m$ timesteps.
The set of return times of a state $i$ is $R_i = \{m\ |\ p^{(m)}_{ii} > 0 \}$,
and $i$ is \textit{aperiodic} if $\gcd(R_i) = 1$. A MC is aperiodic if
any of its vertices is aperiodic. A MC is \textit{irreducible} if the
corresponding graph is strongly connected.
A distribution $\pi$ is stationary if $\pi P = \pi$. If the MC is
irreducible then $\pi_i = 1/\mathbb{E}[T_i]$, where $T_i$ is the
time between two consecutive visits at $i$. $\pi_j/\pi_i$ is the expected
number of visits at $j$ in between two consecutive visits at $i$. A MC
is \textit{ergodic} if $\lim_{m\to\infty} p^{(0)} P^{m} = \pi$. A MC is
ergodic iff.\ it is irreducible and aperiodic.
A MC for a random walk in an undirected weighted graph (unweighted
graph can be made weighted by adding $1$-weights) has $p_{uv} =
w_{uv}/\sum_{x} w_{ux}$. If the graph is connected, then $\pi_u =
\sum_{x} w_{ux} / \sum_{v}\sum_{x} w_{vx}$. Such a random walk is
aperiodic iff.\ the graph is not bipartite.
An \textit{absorbing} MC is of the form $P = \left(\begin{matrix} Q & R
\\ 0 & I_r \end{matrix}\right)$. Let $N = \sum_{m=0}^\infty Q^m = (I_t
- Q)^{-1}$. Then, if starting in state $i$, the expected number of
steps until absorption is the $i$-th entry of $N\mathbf{1}$, where
$\mathbf{1}$ is the all-ones vector. If starting in state
$i$, the probability of being absorbed in state $j$ is the $(i,j)$-th
entry of $NR$.
Many problems on MC can be formulated in terms of a system of
recurrence relations, and then solved using Gaussian elimination.
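For example, the absorbing-chain quantities above can be computed
directly (a sketch using \texttt{numpy}; it assumes the $t$ transient
states are ordered first in $P$):
\begin{verbatim}
import numpy as np

def absorbing_stats(P, t):
    """P: full transition matrix, transient states first; t: #transient."""
    Q, R = P[:t, :t], P[:t, t:]
    N = np.linalg.inv(np.eye(t) - Q)   # fundamental matrix (I - Q)^-1
    steps = N @ np.ones(t)             # expected steps until absorption
    absorb = N @ R                     # absorption probabilities
    return steps, absorb
\end{verbatim}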
\subsection{Burnside's Lemma}
Let $G$ be a finite group that acts on a set $X$. For each $g$ in $G$
let $X^g$ denote the set of elements in $X$ that are fixed by $g$. Then
the number of orbits \[ |X/G| = \frac{1}{|G|} \sum_{g\in G} |X^g| \]
The cycle index of the symmetric group satisfies
\[
Z(S_n) = \frac{1}{n} \sum_{l=1}^n a_l Z(S_{n-l}), \qquad Z(S_0) = 1.
\]
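A typical application (sketch): counting necklaces of $n$ beads in $k$
colours up to rotation, using that the rotation by $i$ positions fixes
exactly $k^{\gcd(i,n)}$ colourings:
\begin{verbatim}
from math import gcd

def necklaces(n, k):
    """Colourings of an n-bead necklace with k colours, up to rotation."""
    return sum(k ** gcd(i, n) for i in range(n)) // n
\end{verbatim}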
\subsection{Bézout's identity}
If $(x,y)$ is any solution to $ax+by=d$ (e.g.\ found by the Extended
Euclidean Algorithm), then all solutions are given by \[
\left(x+k\frac{b}{\gcd(a,b)}, y-k\frac{a}{\gcd(a,b)}\right) \]
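A sketch of the extended Euclidean algorithm producing one such solution
$(x,y)$ together with $g = \gcd(a,b)$:
\begin{verbatim}
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y
\end{verbatim}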
\subsection{Misc}
\subsubsection{Determinants and PM}
\begin{align*}
det(A) &= \sum_{\sigma \in S_n}\text{sgn}(\sigma)\prod_{i = 1}^n a_{i,\sigma(i)}\\
perm(A) &= \sum_{\sigma \in S_n} \prod_{i = 1}^n a_{i,\sigma(i)}\\
pf(A) &= \frac{1}{2^nn!}\sum_{\sigma \in S_{2n}} \text{sgn}(\sigma)\prod_{i = 1}^n a_{\sigma(2i-1),\sigma(2i)}\\ &= \sum_{M \in \text{PM}(n)} \text{sgn}(M) \prod_{(i,j) \in M} a_{i,j}
\end{align*}
\subsubsection{BEST Theorem}
Counts directed Eulerian cycles: their number is $\#\textsc{OST}(G,r)
\cdot \prod_{v}(d_v-1)!$, where $d_v$ is the out-degree and
$\#\textsc{OST}(G,r)$ is the number of oriented spanning trees rooted at
$r$ (the same for every root), given by Kirchhoff's theorem (delete the
row and column of the root from the Laplacian and take the determinant).
\subsubsection{Primitive Roots}
Only exists when $n$ is $2, 4, p^k$ or $2p^k$, where $p$ is an odd
prime. Assume $n$ is prime. The number of primitive roots is
$\phi(\phi(n))$. Let $g$ be a primitive root; all primitive roots are of
the form $g^k$ where $k$ and $\phi(n)$ are coprime.\\ $k$-th roots of
unity: $g^{i \cdot \phi(n) / k}$ for $0 \leq i < k$ (requires
$k \mid \phi(n)$).
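A sketch for finding a primitive root modulo an odd prime $p$ (it
assumes the distinct prime factors of $p-1$ are supplied; $g$ is
primitive iff $g^{(p-1)/q} \not\equiv 1$ for every such factor $q$):
\begin{verbatim}
def is_primitive_root(g, p, factors):
    """p odd prime; factors: distinct prime factors of p - 1."""
    return all(pow(g, (p - 1) // q, p) != 1 for q in factors)

def find_primitive_root(p, factors):
    return next(g for g in range(2, p) if is_primitive_root(g, p, factors))
\end{verbatim}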
\subsubsection{Sum of primes} For a completely multiplicative $f$, let
$S(n,p)$ be the sum of $f(k)$ over all $2 \le k \le n$ that are prime or
have no prime factor $\le p$. Sieve with the primes $p = 2, 3, \ldots$
in increasing order; for each prime $p$ with $p^2 \le n$:
\[
S(n,p) = S(n, p-1) - f(p) \cdot (S(\lfloor n/p\rfloor,p-1) - S(p-1,p-1))
\]
(otherwise $S(n,p) = S(n,p-1)$). $S(n,\sqrt{n})$ is the sum of $f$ over
the primes up to $n$.
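A compact sketch for $f(k) = k$, i.e.\ the sum of primes up to $n$ (the
dictionary is keyed by the $O(\sqrt{n})$ distinct values of
$\lfloor n/i \rfloor$):
\begin{verbatim}
from math import isqrt

def sum_of_primes(n):
    r = isqrt(n)
    V = [n // i for i in range(1, r + 1)]
    V += list(range(V[-1] - 1, 0, -1))
    S = {v: v * (v + 1) // 2 - 1 for v in V}   # f = id: sum of 2..v
    for p in range(2, r + 1):
        if S[p] > S[p - 1]:                    # p is prime
            sp, p2 = S[p - 1], p * p           # sum of primes below p
            for v in V:
                if v < p2:
                    break
                S[v] -= p * (S[v // p] - sp)   # f(p) = p
    return S[n]
\end{verbatim}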
\subsubsection{Floor}
\begin{align*}
&\left\lfloor \left\lfloor x/y \right\rfloor / z \right\rfloor = \left\lfloor x / (yz) \right\rfloor \\
&x \% y = x - y \left\lfloor x / y \right\rfloor
\end{align*}
\clearpage
\section*{Practice Contest Checklist}
\begin{itemize}
\item How many operations per second? Compare to local machine.
\item What is the stack size?
\item How to use printf/scanf with long long/long double?
\item Are \texttt{\_{}\_{}int128} and \texttt{\_{}\_{}float128} available?
\item Does MLE give RTE or MLE as a verdict? What about stack overflow?
\item What is \texttt{RAND\_{}MAX}?
\item How does the judge handle extra spaces (or missing newlines) in the output?
\item Look at documentation for programming languages.
\item Try different programming languages: C++, Java and Python.
\item Try the submit script.
\item Try local programs: i?python[23], factor.
\item Try submitting with \texttt{assert(false)} and \texttt{assert(true)}.
\item Return-value from \texttt{main}.
\item Look for directory with sample test cases.
\item Make sure printing works.
\item Remove this page from the notebook.
\end{itemize}
\end{multicols*}
\end{document}
%!TEX root = ../thesis.tex
% ******************************* Thesis Appendix B ********************************
\chapter{Requirement-based Test Cases}
Below are the test cases run for the software testing evaluation in Chapter 7.1.
Testing evidence such as screenshots is not included to reduce the length of this document.
\begin{table}[!ht]
\begin{tabularx}{\textwidth}{|c|c|X|c|}
\hline
Test & Req & Requirement Statement & Status \\
\hline
1 & FR1 & \textbf{The system would store learner records on a blockchain} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
2 & FR2 & \textbf{Teachers would be able to create and edit learning resources} & \cellcolor{pink}FAILED \\
\hline
& & \multicolumn{2}{X|}{The user interface and transaction for creating learning resources have been built,
but not for editing.} \\
\hline
3 & FR3 & \textbf{Teachers would be able to create and edit assessments} & \cellcolor{pink}FAILED \\
\hline
& & \multicolumn{2}{X|}{The user interface and transaction for creating assessments have been built,
but not for editing.} \\
\hline
4 & FR4 & \textbf{The system would enforce the provision of learning outcomes, knowledge required
and assessment goals} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{Course modules, module units and assessments cannot be created without
these fields.} \\
\hline
  5 & FR5 & \textbf{The system would enforce predefined assessment
  rules (e.g.\ marking schemes, transparent procedures
  and feedback) with Smart Contracts} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
  6 & FR6 & \textbf{The system would enforce predefined assessment
  rules (e.g.\ marking schemes, transparent procedures
  and feedback) with Smart Contracts} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
7 & FR7 & \textbf{The system would be able to facilitate vivas (oral defence) as a form of assessments} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{This feature is present but could be improved. For example, time selection fields are missing from the UI and blockchain schema.} \\
\hline
8 & FR8 & \textbf{The system would provide multiple ways to define
grade schema} & \cellcolor{green}PASSED \\
\hline
  & & \multicolumn{2}{X|}{Support for pass/fail, score, and grade result types.} \\
\hline
9 & FR9 & \textbf{Teachers would be able to negotiate a customised list of courses for a student
within a fixed course credits budget, customising degree specifications} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
10 & FR10 & \textbf{The system should feature dedicated support channels
between students and teachers or other advisors} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{This is present in the form of a direct messaging UI.} \\
\hline
11 & FR11 & \textbf{Students would be able to add assessment submissions
on the blockchain} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
\end{tabularx}
\end{table}
\clearpage
\begin{table}[!ht]
\begin{tabularx}{\textwidth}{|c|c|X|c|}
\hline
Test & Req & Requirement Statement & Status \\
\hline
12 & FR12 & \textbf{The system would be able to generate certificates on
the blockchain when a course has been completed} & \cellcolor{pink}FAILED \\
\hline
  & & \multicolumn{2}{X|}{This feature has not been completed due to time limitations.} \\
\hline
13 & FR13 & \textbf{The system would allow students to control access to
their education history on the blockchain} & \cellcolor{green}PASSED \\
\hline
% &&\multicolumn{2}{X|}{} \\
% \hline
14 & FR14 & \textbf{The system would provide one login for content delivery, assessment and record keeping} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{Content delivery is embedded into this system, which facilitates
  assessment and record keeping, with a single sign-on via the .card file.} \\
\hline
15 & NR1 & \textbf{The client applications would have the same functionalities
across devices and a responsive interface} & \cellcolor{pink}FAILED \\
\hline
& & \multicolumn{2}{X|}{The client applications work on all
JavaScript enabled browsers and have the same functionalities across devices.
A majority of pages have been optimised for smaller screens but not all.} \\
\hline
16 & NR2 & \textbf{The client applications would fail safely and display error messages
to the user} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{A dialogue is created for every transaction
and clear error messages are shown when transactions fail.} \\
\hline
17 & NR3 & \textbf{The client applications would notify users of relevant
events on the blockchain network} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{A notification drawer is built for the client applications.} \\
\hline
18 & NR6 & \textbf{The system should be highly usable, visually consistent and appealing} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{A consistent UI design language framework is adopted for the client applications.} \\
\hline
19 & NR7 & \textbf{The client applications should always display its navigation menu and status of the application} & \cellcolor{green}PASSED \\
\hline
& & \multicolumn{2}{X|}{A navigation menu is always present on the left edge, and a top title bar displays where the user is at.} \\
\hline
\end{tabularx}
\end{table}
No test cases have been developed for non-functional requirements NR4 and NR5,
because they describe criteria that cannot be objectively assessed by the developer.
\chapter{3D Slicer}
% Biomedical research done the open source way
% Open Source Reseource Program
% Community formation, hackathons, funding models
% reusing open source tools, giving credit
\documentclass[11pt,a4paper]{article}
\usepackage{graphicx}
\usepackage[prologue]{xcolor}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{tabularx}
\usepackage{booktabs}
\usepackage{tikz}
\usetikzlibrary{decorations.pathreplacing}
\usepackage{hyperref}
\usepackage[nameinlink]{cleveref}
\definecolor[named]{ACMBlue}{cmyk}{1,0.1,0,0.1}
\definecolor[named]{ACMYellow}{cmyk}{0,0.16,1,0}
\definecolor[named]{ACMOrange}{cmyk}{0,0.42,1,0.01}
\definecolor[named]{ACMRed}{cmyk}{0,0.90,0.86,0}
\definecolor[named]{ACMLightBlue}{cmyk}{0.49,0.01,0,0}
\definecolor[named]{ACMGreen}{cmyk}{0.20,0,1,0.19}
\definecolor[named]{ACMPurple}{cmyk}{0.55,1,0,0.15}
\definecolor[named]{ACMDarkBlue}{cmyk}{1,0.58,0,0.21}
\hypersetup{colorlinks,
linkcolor=ACMOrange,
citecolor=ACMPurple,
urlcolor=ACMDarkBlue,
filecolor=ACMDarkBlue}
\title{COMP4031: Scientific Computing\\A simple model of convection}
\author{Lawrence Mitchell\thanks{\texttt{[email protected]}}}
\date{October 2019}
\renewcommand{\vec}[1]{\ensuremath{\mathbf{#1}}}
\begin{document}
\maketitle{}
\noindent
This assignment is to be completed and handed in via DUO. You should
submit a Jupyter notebook containing both your code (which should be
runnable), and a discussion of your implementation choices and
observations and answers to the questions. You can use any of the code
presented in the lectures, appropriately attributed. After submission,
you should email to arrange a 15 minute slot to run through your
notebook and discuss your implementation and observations.
\section{The convection-diffusion equation}
\label{sec:part1}
We will simulate a very simple model of heat transfer driven by a
prescribed wind field in a 2D vertical slice of the atmosphere.
We denote the temperature field by $T(x, y; t)$ and impose a
background wind field
\begin{equation}
\label{eq:2}
\vec{w} = \begin{bmatrix}v_0\\v_1\end{bmatrix} = \begin{bmatrix}
4 \sin 2\pi y \cos \frac{\pi x}{2}\\
-\sin \frac{\pi x}{2} \cos 2\pi y\end{bmatrix}
\end{equation}
shown in \cref{fig:wind}. The boundary of the domain is held at a
prescribed (but spatially varying) temperature.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{wind-field}
\caption{Background wind field of \cref{eq:2}}
\label{fig:wind}
\end{figure}
The relevant equation describing the time evolution of the temperature
field is to find $T$ satisfying
\begin{equation}
\label{eq:1}
\begin{aligned}
\partial_t T - \nu \nabla^2 T + \vec{w} \cdot \nabla T &= 0 &&\text{
in } \Omega &&\\
T(x, y; t) &= 1 - y &&\text{ on } \partial\Omega &&\text{ (boundary
condition)}\\
T(x, y; 0) &= 1 - y &&\text{ in } \Omega &&\text { (initial condition)}.
\end{aligned}
\end{equation}
where $\nu > 0$ is a scalar constant controlling the relative
importance of the convective and diffusive terms. The second equation
indicates that on the boundary of the domain ($\partial\Omega$), we
apply a Dirichlet condition to $T$; while the third specifies the
initial condition.
To begin, set $\nu = 1$. The problem is posed on the rectangular
domain $\Omega = [0, 10] \times [0, 1]$, shown in \cref{fig:omega}.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\draw[thick, black] (0,0) rectangle node{$\Omega$} (10, 1);
\draw [decorate,decoration={brace, amplitude=10pt, mirror, raise=4pt}]
(0,0) -- (10,0) node [black,midway,below, yshift=-10pt] {$L_x = 10$};
\draw[decorate,decoration={brace, amplitude=10pt, raise=4pt}]
(0,0) -- (0, 1) node [rotate=90,black,midway, above, yshift=10pt] {$L_y = 1$};
\end{tikzpicture}
\caption{Simulation domain}
\label{fig:omega}
\end{figure}
We will discretise this equation using the method of lines, separately
discretising in space and in time. You may find it helpful to
structure your program to discretise the spatial term independently
from the time derivative so that you can reuse your code from
\cref{sec:part1} in \cref{sec:part2}.
For this part, you should use explicit Euler for the time derivative.
Your goal is to derive, and implement, a finite difference
discretisation for the spatial terms. You should use an appropriate
centred difference scheme for the Laplacian term. For the first
derivative, as we saw in class, we should use a one-sided derivative.
There are a number of options here:
\begin{itemize}
\item left neighbour;
\item right neighbour;
\item adaptive ``upwinding'' based on the value of the wind.
\end{itemize}
Implement the timestepping scheme and spatial derivatives. Discuss
whether you observe any differences in the accuracy and/or numerical
stability of the solution scheme with the different choices for the
$\nabla T$ term. Fix the final time for this step to be $t=1$.
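For illustration, one explicit Euler step with adaptive upwinding might
be organised as in the following sketch (the array layout, with the
first index along $x$, and the names \texttt{T}, \texttt{v0},
\texttt{v1} are assumptions, not a required interface):
\begin{verbatim}
import numpy as np

def euler_step(T, v0, v1, dx, dy, dt, nu):
    """One explicit Euler step; only interior points are updated, so the
    Dirichlet boundary values stored in T are left untouched."""
    Tn = T.copy()
    # centred second differences for the Laplacian
    lap = ((T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2 +
           (T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2)
    # one-sided first differences chosen by the sign of the wind
    dTdx = np.where(v0[1:-1, 1:-1] > 0,
                    (T[1:-1, 1:-1] - T[:-2, 1:-1]) / dx,
                    (T[2:, 1:-1] - T[1:-1, 1:-1]) / dx)
    dTdy = np.where(v1[1:-1, 1:-1] > 0,
                    (T[1:-1, 1:-1] - T[1:-1, :-2]) / dy,
                    (T[1:-1, 2:] - T[1:-1, 1:-1]) / dy)
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + dt * (nu * lap
                                           - v0[1:-1, 1:-1] * dTdx
                                           - v1[1:-1, 1:-1] * dTdy)
    return Tn
\end{verbatim}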
This section is worth 40/100 marks. Marks are provided for
correctness (15), elegant/simple code (10), and the discussion of the
choice of derivative approximations (15).
\section{Numerical experiments, better time-stepping}
\label{sec:part2}
Implement a higher-order explicit RK scheme as well as explicit Euler.
Justify your choice based on the order of your spatial discretisation.
Fix the final time at $t = 1$. Discuss answers to the following
questions:
\begin{itemize}
\item For the two time-stepping schemes, how does time to solution
change as you increase the resolution?
\item What observations can you make about the required timestep size
as you increase the resolution?
\item Experiment with smaller values of $\nu$, does this have any
impact on the stability of your solution?
\end{itemize}
This section is worth 40/100 marks. Marks are provided for
correctness of the RK implementation, and the discussion in your
answers to the questions.
\section{Stationary states, implicit time-stepping}
\label{sec:part3}
As long as $\nu > 0$, this equation has a steady state. We can find it
by setting the time derivative to zero, and solving the resulting
stationary equation
\begin{equation}
\label{eq:4}
\begin{aligned}
-\nu \nabla^2 T + \vec{w} \cdot \nabla T &= 0 &\text{ in } \Omega\\
T &= 1 - y &\text{ on } \partial\Omega.
\end{aligned}
\end{equation}
To do this, we will discretise the operator and assemble it into a
matrix, representing the operator
\begin{equation}
\label{eq:7}
A := -\nu \nabla^2 + \vec{w} \cdot \nabla.
\end{equation}
The stationary solution can be obtained by inverting this matrix $A$
onto the right hand side, giving the matrix equation
\begin{equation}
\label{eq:5}
T_\text{stationary} = A^{-1} F
\end{equation}
where $T_\text{stationary}$ represents the vector of unknowns, and $F$
is the discretised right hand side. You can either assemble into a
dense matrix and use
\href{https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html}{\underline{\texttt{numpy.linalg.solve}}}
to solve the system, or else assemble a sparse matrix and use
\href{https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html}{\underline{\texttt{scipy.sparse.linalg.spsolve}}}.
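As a sketch of this workflow (the grid sizes, helper names and the
constant wind $\vec{w} = [1,1]$ below are illustrative assumptions, and
the boundary contributions to $F$ are omitted):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nu, Lx, Ly, nx, ny = 1.0, 10.0, 1.0, 50, 20       # interior points only
dx, dy = Lx / (nx + 1), Ly / (ny + 1)

def second_diff(n, h):   # 1D centred second-difference matrix
    return sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2

def backward_diff(n, h): # 1D backward difference (upwind for wind > 0)
    return sp.diags([-1, 1], [-1, 0], shape=(n, n)) / h

Ix, Iy = sp.identity(nx), sp.identity(ny)
lap = sp.kron(Iy, second_diff(nx, dx)) + sp.kron(second_diff(ny, dy), Ix)
conv = (sp.kron(Iy, backward_diff(nx, dx))
        + sp.kron(backward_diff(ny, dy), Ix))
A = (-nu * lap + conv).tocsc()                    # w = [1, 1]

F = np.ones(nx * ny)                              # stand-in right-hand side
T_stationary = spla.spsolve(A, F)                 # vector of unknowns
\end{verbatim}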
Verify the correctness of your implementation against an exact
solution. You can simplify things by choosing $\vec{w} = [1, 1]$. Pick
a solution (for example a product of sines and cosines), substitute it
into the equation to determine what the right hand side should be,
and use the exact solution as the boundary condition.
Use the discretised operator to implement an implicit time-stepping
scheme (implicit Euler is fine). Does this allow you to get a faster
time to solution for the experiments of \cref{sec:part2}?
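A sketch of such a scheme, reusing a single sparse factorisation across
steps (here $A$ and $F$ are assumed to come from the assembly above):
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_euler(A, F, T0, dt, nsteps):
    """Solve (I + dt*A) T_new = T_old + dt*F repeatedly."""
    n = A.shape[0]
    M = (sp.identity(n) + dt * A).tocsc()
    solve = spla.factorized(M)        # factor once, reuse every step
    T = T0.copy()
    for _ in range(nsteps):
        T = solve(T + dt * F)
    return T
\end{verbatim}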
This section is worth 20/100 marks. Marks are provided for the correctness
of the implementation (judged by appropriate convergence against an
exact solution), and discussion of the comparison of the implicit
time-stepping scheme to the explicit ones of \cref{sec:part2}.
\section*{Mark breakdown}
In addition to the marks for implementation and discussion as
described above, an additional 10/100 marks will be given for the
overall quality of the presentation of the work in the short viva. A breakdown of the
mark scheme is shown in \cref{tab:marks}.
\begin{table}[htbp]
\centering
\renewcommand\tabularxcolumn[1]{m{#1}}
\begin{tabularx}{0.9\linewidth}{X|c}
\toprule
Descriptor & Marks\\
\midrule
Part 1 correctness & 15\\
Part 1 simplicity/elegance of implementation & 10\\
Part 1 discussion of differencing choices & 15\\
\midrule
Part 2 correctness & 20\\
Part 2 discussion and answers to questions & 20\\
\midrule
Part 3 correctness and convergence test for stationary solution & 10\\
Part 3 implementation of implicit scheme and comparison to
explicit & 10\\
\midrule
In-person presentation & 10\\
\midrule
\midrule
Total & 100\\
\bottomrule
\end{tabularx}
\caption{Mark breakdown}
\label{tab:marks}
\end{table}
\end{document}
% PLEASE DO NOT MODIFY THIS FILE! It was generated by raskman version: 1.1.0
\subsubsection{glite-job-list-match}
\label{glite-job-list-match}
\medskip
\textbf{glite-job-list-match}
\smallskip
\medskip
\textbf{SYNOPSIS}
\smallskip
\textbf{glite-job-list-match [options] $<$jdl\_file$>$}
{\begin{verbatim}
options:
--version
--help
--config, -c <configfile>
--debug
--logfile <filepath>
--noint
--output, -o <filepath>
--verbose
--rank
--config-vo <configfile>
--vo <voname>
\end{verbatim}
\medskip
\textbf{DESCRIPTION}
\smallskip
glite-job-list-match displays the list of identifiers of the resources on which the user is authorized and which satisfy the job requirements included in the job description file. The CE identifiers are returned either on the standard output or in a file, according to the chosen command options, and are strings uniquely identifying the CEs published in the IS.
The returned CEIds are listed in decreasing order of rank, i.e. the one with the best (greater) rank is in the first place and so forth.
\medskip
\textbf{OPTIONS}
\smallskip
\textbf{--version}
displays UI version.
\textbf{--help}
displays command usage
\textbf{--config}, \textbf{-c} <configfile>
if the command is launched with this option, the configuration file pointed by configfile is used. This option is meaningless when used together with "--vo" option
\textbf{--debug}
When this option is specified, debugging information is displayed on the standard output and written into the log file, whose location is eventually printed on screen.
The default UI logfile location is:
glite-wms-job-<command\_name>\_<uid>\_<pid>\_<time>.log located under the /var/tmp directory
please notice that this path can be overriden with the '--logfile' option
\textbf{--logfile} <filepath>
when this option is specified, all information is written into the specified file pointed by filepath.
This option will override the default location of the logfile:
glite-wms-job-<command\_name>\_<uid>\_<pid>\_<time>.log located under the /var/tmp directory
\textbf{--noint}
if this option is specified, every interactive question to the user is skipped and the operation is continued (when possible)
\textbf{--output}, \textbf{-o} <filepath>
writes the results of the operation in the file specified by filepath instead of the standard output. filepath can be either a simple name or an absolute path (on the submitting machine). In the former case the file filepath is created in the current working directory.
\textbf{--verbose}
displays on the standard output the job class-ad that is sent to the Network Server generated from the job description file.
This differs from the content of the job description file since the UI adds to it some attributes that cannot be directly inserted by the user
(e.g., defaults for Rank and Requirements if not provided, VirtualOrganisation etc).
\textbf{--rank}
displays the "matching" CEIds together with their associated ranking values.
\textbf{--config-vo} <configfile>
if the command is launched with this option, the VO-specific configuration file pointed by configfile is used. This option is meaningless when used together with "--vo" option
\textbf{--vo} <voname>
this option allows the user to specify the name of the Virtual Organisation she/he is currently working for.
If the user proxy contains VOMS extensions then the VO specified through this option is overridden by the default VO contained in the proxy (i.e. this option is only useful when working with non-VOMS proxies).
This option is meaningless when used together with "--config-vo" option
\medskip
\textbf{EXAMPLES}
\smallskip
1) simple request:
glite-job-list-match ./match.jdl
If the operation succeeds, the output will be a list of CEs
2) request for displays CE rank numbers:
glite-job-list-match --rank ./match.jdl
If the operation succeeds, a list of CEs with their rank numbers is displayed on the standard output
3) saves the result in a file:
glite-job-list-match --output match.out ./match.jdl
If the operation succeeds, a list of CEs is saved in the file match.out in the current working directory
\medskip
\textbf{ENVIRONMENT}
\smallskip
GLITE\_WMSUI\_CONFIG\_VAR: This variable may be set to specify the path location of the custom default attribute configuration
GLITE\_WMSUI\_CONFIG\_VO: This variable may be set to specify the path location of the VO-specific configuration file
GLITE\_WMS\_LOCATION: This variable must be set when the Glite WMS installation is not located in the default paths: either /opt/glite or /usr/local
GLITE\_LOCATION: This variable must be set when the Glite installation is not located in the default paths: either /opt/glite or /usr/local
GLOBUS\_LOCATION: This variable must be set when the Globus installation is not located in the default path /opt/globus.
It is taken into account only by submission and get-output commands
GLOBUS\_TCP\_PORT\_RANGE="<val min> <val max>" This variable must be set to define a range of ports to be used for inbound connections in the interactivity context.
It is taken into account only by submission of interactive jobs and attach commands
X509\_CERT\_DIR: This variable may be set to override the default location of the trusted certificates directory, which is normally /etc/grid-security/certificates.
X509\_USER\_PROXY: This variable may be set to override the default location of the user proxy credentials, which is normally /tmp/x509up\_u<uid>.
\medskip
\textbf{FILES}
\smallskip
One of the following paths must exist (searched in the specified order):
- \$GLITE\_WMS\_LOCATION/etc/
- \$GLITE\_LOCATION/etc/
- /opt/glite/etc/
- /usr/local/etc/
- /etc/
and contain the following UI configuration files:
glite\_wmsui\_cmd\_var.conf, glite\_wmsui\_cmd\_err.conf, glite\_wmsui\_cmd\_help.conf, <voName>/glite\_wmsui.conf
- glite\_wmsui\_cmd\_var.conf will contain custom configuration default values
A different configuration file may be specified either by using the --config option or by setting the GLITE\_WMSUI\_CONFIG\_VAR environment variable
here follows a possible example:
[
RetryCount = 3 ;
ErrorStorage= "/tmp" ;
OutputStorage="/tmp";
ListenerStorage = "/tmp" ;
LoggingTimeout = 30 ;
LoggingSyncTimeout = 30 ;
NSLoggerLevel = 0;
DefaultStatusLevel = 1 ;
DefaultLogInfoLevel = 1;
]
- glite\_wmsui\_cmd\_err.conf will contain UI exception mapping between error codes and error messages (no relocation possible)
- glite\_wmsui\_cmd\_help.conf will contain UI long-help information (no relocation possible)
- <voName>/glite\_wmsui.conf will contain User VO-specific attributes.
A different configuration file may be specified either by using the --config-vo option or by setting the GLITE\_WMSUI\_CONFIG\_VO environment variable
here follows a possible example:
[
LBAddresses = { "tigerman.cnaf.infn.it:9000" };
VirtualOrganisation = "egee";
NSAddresses = { "tigerman.cnaf.infn.it:7772" }
]
Besides those files, a valid proxy must be found inside the following path:
/tmp/x509up\_u<uid> ( use the X509\_USER\_PROXY environment variable to override the default location JDL file)
\medskip
\textbf{AUTHORS}
\smallskip
Alessandro Maraschini ([email protected])
\documentclass[10pt]{article}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amsthm,amssymb, graphicx, multicol, array, enumerate, gensymb}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newenvironment{problem}[2][Problem]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}}
\begin{document}
\title{Mathematics problems}
\date{}
\maketitle
\section{Elementary algebra}
\begin{problem}{1.1}
Simplify $$\frac{z^{17}}{z^3 \cdot z^5}$$
\end{problem}
\begin{problem}{1.2}
Solve for $x$:
$$6^2 \cdot 6^x = 6^6$$
\end{problem}
\begin{problem}{1.3}
Calculate the missing value. If $x \cdot y$ is 5, then $x^3y^3=\dots$
\end{problem}
\begin{problem}{1.4}
Calculate
$$\frac{\sqrt{2^{10}}}{\sqrt{4^3}}$$
\end{problem}
\begin{problem}{1.5}
True or False ($x$ and $y$ and $z$ are real numbers):
\begin{enumerate}[(a)]
\item $x+y=y+x$
\item $x(y+z)=xy+xz$
\item $x^{y+z}=x^y+x^z$
\item $\frac{x^y}{x^z}=x^{y-z}$
\end{enumerate}
\end{problem}
\begin{problem}{1.6}
Find the solution set for the inequality below:
$$\frac{2x-5}{2}\ge4$$
\end{problem}
\section{Functions of one variable}
\begin{problem}{2.1 (Based on SYD 2.5.6)}
The relationship between temperatures measured in Celsius and Fahrenheit is linear. 0\degree C is equivalent to 32\degree F and 100\degree C is the same as 212\degree F.
Which temperature is measured by the same number on both scales?
\end{problem}
\begin{problem}{2.2}
Take the following function $f(x)=5x+4$. Find y if $f(y)=24$.
\end{problem}
\begin{problem}{2.3}
Find all values of x that satisfy:
$$10^{x^2-2x+2}=100$$
\end{problem}
\begin{problem}{2.4}
Solve the following problem. If the annual GDP growth of a country is 3\%, how long does it take the economy to double its GDP?
\end{problem}
\begin{problem}{2.5}
Calculate the following value
$$\ln(1/e)$$
\end{problem}
\section{Calculus}
\begin{problem}{3.1}
Calculate the following sum
$$\sum\limits_{i=0}^{\infty} \left( \frac{1}{8^i}+0.5^i\right)$$
\end{problem}
\begin{problem}{3.2}
Find the following limit
$$\lim\limits_{x \rightarrow 3}\frac{x-3}{2}$$
\end{problem}
\begin{problem}{3.3}
Find the slope of the function $f(x)=x^2-4$ at $(-1,-3)$.
\end{problem}
\begin{problem}{3.4}
Find the following derivative
$$\frac{\mathrm{d}}{\mathrm{d}\, x} \frac{x^2+3}{x+2}$$
\end{problem}
\begin{problem}{3.5}
Find the following second derivative
$$\frac{\mathrm{d^2}}{\mathrm{d}\, x^2} 4x^3+4$$
\end{problem}
\begin{problem}{3.6}
Is the function $f(x)=\frac{1}{x}$ continuous at $0$? Why?
\end{problem}
\begin{problem}{3.7}
Consider the following function. Find all of its stationary points and classify them as local minima, local maxima or inflection points. Also decide whether it is convex or concave. If it has one or more inflection points then define where it is locally concave or locally convex.
$$f(x)=3x^3-9x$$
\end{problem}
\begin{problem}{3.8}
Let $f(x,y)=x^2y^3$. Calculate $f(2,3)$
\end{problem}
\begin{problem}{3.9}
Consider the following function: $f(x,y)=\ln(x-y)$. For what combinations of $x$ and $y$ is this function defined?
\end{problem}
\begin{problem}{3.10}
Find the following partial derivative:
$$\frac{\partial^2}{\partial \, x^2} x^5+xy^3$$
\end{problem}
\begin{problem}{3.11}
Find the local maxima or minima of the following function:
$$f(x,y)=\sqrt{xy}-0.5x-0.5y$$
\end{problem}
\begin{problem}{3.12}
Solve the following constrained optimization problem using Lagrange's method:
$\max x^2y^2$ s.t. $x+y=5$
\end{problem}
\section{Linear algebra}
\begin{problem}{4.1}
Take the following matrices:
$$A=\begin{bmatrix} 2 & 3\\ 4 & 1 \\ 1 & 2\end{bmatrix}$$
$$B=\begin{bmatrix} 1 & 4 & 1\\2 & 1 & 2\end{bmatrix}$$
What is $A \cdot B$?
\end{problem}
\begin{problem}{4.2}
Take the following matrices:
$$A=\begin{bmatrix} 2 & 3\\ 4 & 1 \\ 1 & 2\end{bmatrix}$$
$$B=\begin{bmatrix} 1 & 4 & 1\\2 & 1 & 2\end{bmatrix}$$
What is $B \cdot A$?
\end{problem}
\begin{problem}{4.3}
What is the transpose of the following matrix?
$$\begin{bmatrix}3.3 & 5.1 & 4.7\\ 2 & 6.1 & 1.23 \\ 4 & 5.76 & 0\end{bmatrix}$$
\end{problem}
\begin{problem}{4.4}
Calculate the determinant of
$$\begin{bmatrix}2 & 3 \\ 4 & 5 \end{bmatrix} $$
\end{problem}
\section{Probability theory}
\begin{problem}{5.1}
You run an experiment where you flip a coin four times. Each time you get either heads (H) or tails (T). What is the sample space of your experiment?
\end{problem}
\begin{problem}{5.2}
Assume that in a certain country 1\% of the population uses a certain drug. You have a way to test drug use, which will give you a positive result in 99\% of the cases where the individual is indeed a drug user and a negative result in 99.5\% of the cases where the individual doesn't use the drug. What is the probability that someone with a positive drug test is indeed a drug user?
\end{problem}
\begin{problem}{5.3}
You run an experiment in which you toss a dice twice and sum up the results. What is the expected value of this sum?
\end{problem}
\end{document}
%%
%% This is file `mitsample.tex',
%% generated with the docstrip utility.
%
% The original source files were:
%
% ubcthesis.dtx (with options: `mitsampletex')
%%
%% This file was generated from the ubcthesis package.
%% --------------------------------------------------------------
%%
%% Copyright (C) 2001
%% Michael McNeil Forbes
%% [email protected]
%%
%% This file may be distributed and/or modified under the
%% conditions of the LaTeX Project Public License, either version 1.2
%% of this license or (at your option) any later version.
%% The latest version of this license is in
%% http://www.latex-project.org/lppl.txt
%% and version 1.2 or later is part of all distributions of LaTeX
%% version 1999/12/01 or later.
%%
%% This program is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
%% LaTeX Project Public License for more details.
%%
%% This program consists of the files ubcthesis.dtx, ubcthesis.ins, and
%% the sample figures fig.eps and fig.fig.
%%
%% This file may be modified and used as a base for your thesis without
%% including the licence agreement as long as the content (i.e. textual
%% body) of the file is completely rewritten. You must, however, change
%% the name of the file.
%%
%% This file may only be distributed together with a copy of this
%% program. You may, however, distribute this program without generated
%% files such as this one.
%%
% This Sample thesis requires \LaTeX2e
\NeedsTeXFormat{LaTeX2e}[1995/12/01]
\ProvidesFile{mitsample.tex}[2012/04/07 v1.70 ^^J
Massachusetts Institute of Technology Sample Thesis]
\documentclass[msc,10pt,oneside]{mitthesis}
%
% To compile issue the following commands:
% latex mitsample
% bibtex mitsample
% latex mitsample
% latex mitsample
% latex mitsample
%
% To view use xdvi (on unix systems):
% xdvi mitsample.dvi
%
% To make a postscript file, use dvips:
% dvips -o mitsample.ps mitsample.dvi
%
% To view the postscript file, use ghostview or gv (on unix systems):
% gv mitsample.ps
%
%************************************************
% Optional packages.
%
% The use of these packages is optional: they are standard now and
% should be installed on your system, but if they are not, you might
% have to comment out the appropriate lines to get this file to
% compile.
%
%******** natbib ********************************
% This is a very nice package for bibliographies. It includes options
% for sorting and compressing bibliographic entries.
\usepackage[numbers,sort&compress]{natbib}
%******** graphics and graphicx ******************************
% This allows you to include encapsulated postscript files. If you
% don't have this, comment the \includegraphics{} line following the
% comment "%includegraphics" later in this file.
\usepackage{graphicx}
%******** pdflscape ********************************
% This allows you to include landscape layout pages by using the
% |landscape| environment. The use of |pdflscape| is preferred over
% the standard |lscape| package because it automatically rotates the
% page in the pdf file for easier reading. (Thanks to Joseph Shea
% for pointing this out.)
\usepackage{pdflscape}
%******** psfrag ******************************
% This allows you to replace text in postscript pictures with formated
% latex text. This allows you to use math in graph labels
% etc. Uncomment the psfrag lines following the "%psfrag" comment
% later in this file if you don't have this package. The replacements
% will only be visible in the final postscript file: they will be
% listed in the .dvi file but not performed.
\usepackage{psfrag}
%******** afterpage ***************************
% This package allows you to issue commands at the end of the current
% page. A good use for this is to use the command
% \afterpage{\clearpage} right after a figure. This will cause the
% figure to be inserted on the page following the current one (or on
% the current page if it will fit) but will not break the page in the
% middle.
\usepackage{afterpage}
%******** hyperref *****************************
% Please read the manual:
% http://www.tug.org/applications/hyperref/manual.html
%
% This adds hyperlinks to your document: with the right viewers (later
% versions of xdvi, acrobat with pdftex, latex2html etc.) this will
% make your equation, figure, citation references etc. hyperlinks so
% that you can click on them. Also, your table of contents will be
% able to take you to the appropriate sections. In the viewers that
% support this, the links often appear with an underscore. This
% underscore will not appear in printed versions.
%
% Note: if you do not use the hypertex option, then the dvips driver
% may be loaded by default. This will cause the entries in the list
% of figures and list of tables to be on a single line because dvips
% does not deal with hyperlinks on broken lines properly.
%
% NOTE: HYPERREF is sensitive to the ORDER in which it is LOADED.
% For example, it must be loaded AFTER natbib but BEFORE newly
% defined float environments. See the README file with the hyperref
% for some help with this. If you have some very obscure errors, try
% first disabling hyperref. If that fixes the problem, try various
% orderings.
%
% Note also that there is a bug with versions before 2003/11/30
% v6.74m that cause the float package to not function correctly.
% Please ensure you have a current version of this package. A
% warning will be issued if you leave the date below but do not have
% a current version installed.
%
% Some notes on options: depending on how you build your files, you
% may need to choose the appropriate option (such as [pdftex]) for the
% backend driver (see the hyperref manual for a complete list). Also,
% the default here is to make links from the page numbers in the table
% of contents and lists of figures etc. There are other options:
% excluding the [linktocpage] option will make the entire text a
% hyperref, but for some backends will prevent the text from wrapping
% which can look terrible. There is a [breaklinks=true] option that
% will be set if the backend supports (dvipdfm for example supports
% it but does not work with psfrag.)
%
% Finally, there are many options for choosing the colours of the
% links. These will be included by default in future versions but
% you should probably consider changing some now for the electronic
% version of your thesis.
\usepackage[unicode=true,
linktocpage,
linkbordercolor={0.5 0.5 1},
citebordercolor={0.5 1 0.5},
linkcolor=blue]{hyperref}
% If you would like to compile this sample thesis without the
% hyperref package, then you will need to comment out the previous
% \usepackage command and uncomment the following command which will
% put the URL's in a typewriter font but not link them.
%\newcommand\url[1]{\texttt{#1}}
% These commands are optional. The defaults are shown.
\institution{Massachusetts Institute of Technology}
\institutionaddress{Cambridge}
\program{Physics}
% You can issue as many of these as you have...
\previousdegree{B.Sc., The University of British Columbia, 1999}
\previousdegree{M.Sc., The University of British Columbia, 2001}
% You can override the option setting here.
% \degreetitle{Jack of All Trades}
% These commands are required.
\title{A Sample Thesis}
\subtitle{With a Subtitle}
\author{Michael M$^{\rm c}$Neil Forbes}
\copyrightyear{2000}
\submitdate{June 2004}
% These commands are required by MIT.
\advisor{Frank Wilczek}
\advisortitle{Herman Feshbach Professor of Physics}
\chairman{Thomas Greytak}{Professor and Associate Department Head for
Education}
% One might want to override the format of the section and chapter
% numbers. This shows you how to do it. Note that
\renewcommand\thepart {\Roman{part}}
\renewcommand\thechapter {\arabic{chapter}}
\renewcommand\thesection {\thechapter.\arabic{section}}
\renewcommand\thesubsection {\thesection.\arabic{subsection}}
\renewcommand\thesubsubsection{\thesubsection.\arabic{subsubsection}}
\renewcommand\theparagraph {\thesubsubsection.\arabic{paragraph}}
\renewcommand\thesubparagraph {\theparagraph.\arabic{subparagraph}}
\setcounter{tocdepth}{2}
\setcounter{secnumdepth}{2}
% Here is the start of the document.
\begin{document}
% Unlike the UBC thesis, page numbering for MIT theses should start
% at 1 and continue. Thus, there is no \frontmatter command issued
% here as there was for the UBC thesis.
\maketitle
\authorizationform
\begin{abstract}
The \texttt{genthesis.cls} \LaTeX{} class file and accompanying
documents, such as this sample thesis, are distributed in the hope
that they will be useful but without any warranty (without even the
implied warranty of fitness for a particular purpose). For a
description of this file's purpose, and instructions on its use, see
below.
These files are distributed under the GPL which should be included
here in the future. Please let the author know of any changes or
improvements that should be made.
Michael Forbes.
[email protected]
\end{abstract}
\tableofcontents
\listoftables
\listoffigures
% Any other lists should come here, i.e.
% Abbreviation schemes, definitions, lists of formulae, list of
% schemes, etc.
\chapter{Preface}
These papers have been published earlier\ldots.
\chapter{Acknowledgements}
Thank you mother here.
% Force a new page.
\newpage
% Any other unusual sections should come here between the
% acknowledgements and the main body.
% Suppress the running headers for this page only.
\thispagestyle{plain}
\chapter*{Disclaimer} % Unnumbered
The \texttt{mitthesis} \LaTeX{} class and the accompanying sample files
are \emph{unofficial} and are not supported by the Massachusetts
Institute of Technology. While I have attempted to make the style
file and sample files conform to all of the requirements set forth by
the library, you should always consult one of the library staff
members for assistance with problems \emph{before} starting the final
draft. You should be able to find the thesis requirements at one of
the following sites:
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|}
\hline
\url{http://libraries.mit.edu/archives/thesis-specs/}\\
\url{http://libraries.mit.edu/archives/index.html}\\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:ubcurls}
Potential sources of information regarding thesis preparation at MIT.}
\end{table}
% Force a new page.
\newpage
% Suppress the running headers for this page only.
\thispagestyle{plain}
% Here we provide a short optional argument to \chapter[]{}. This
% optional argument will appear in the table of contents. For long
% titles, one should use this to give a single-line entry to the
% table of contents.
\chapter[Poem]{A Japanese Introduction}
% Here is a quote:
\begin{quote}
% It is centered
\begin{center}
This is a small poem,\\
a little poem, a Haiku,\\
to show you how to.\\
---Michael Forbes.
\end{center}
\end{quote}
This small poem shows several features:
\begin{itemize}
\item The \verb|\newpage| command has been used to force a page break.
\item The pagestyle has been set to suppress the headers using the
command \verb|\thispagestyle{plain}|. Note that using
\verb|\pagestyle{plain}| would have affected all of the subsequent
pages.
\item The \verb|\chapter[Poem]{A Japanese Introduction}| command has
been used with an optional argument to generate a title and to list
this ``chapter'' in the table of contents as ``Poem''. If one did
not desire to have an entry in the table of contents, then one would
just use the starred command \verb|\chapter*{}|. The use of an
optional argument is useful for long chapter and section titles that
take up too much space in the table of contents.
\end{itemize}
% Parts are the largest units
\part{Thesis}
% Chapters are the next main unit.
\chapter{This is a Chapter}
% Sections are a sub-unit
\section{A Section}
Here is a section with some text. Equations look like this $y=x$.
This is an example of a second paragraph in a section so you can
see how much it is indented by.
% Subsections follow
\subsection{This is a Subsection}
Here is an example of a citation: \cite{Forbes:2006ba}. The actual
form of the citation is governed by the bibliographystyle. These
citations are maintained in a BIBTeX file \texttt{sample.bib}. You
could type these directly into the file. For an example of the format
to use look at the file \texttt{mitsample.bbl} after you compile this
file.
This is an example of a second paragraph in a subsection so you can
see how much it is indented by.
\subsubsection{This is a Subsubsection}
Here are some more citations \cite{LL3:1977,Peccei:1989,Turner:1999}.
If you use the \texttt{natbib} package with the \verb+sort&compress+
option, then the following citation will look the same as the first
citation in this section: \cite{Turner:1999,Peccei:1989,LL3:1977}.
This is an example of a second paragraph in a subsubsection so you can
see how much it is indented by.
\paragraph{This is a Paragraph}
Paragraphs and subparagraphs are the smallest units of text. There is
no subsubsubsection etc.
\subparagraph{This is a Subparagraph}
This is the last level of organisation. If you need more than this,
you should consider reorganizing your work\dots
\begin{equation}
\mathrm{f}(x)=\int_{-\infty}^{\int_{-\infty}^x
e^{-\frac{y^2}{2}}\mathrm{d}{y}}e^{-z^2}\mathrm{d}z
\end{equation}
In order to show you what a separate page would look like (i.e. without
a chapter heading) I must type some more text. Thus I will babble a
bit and keep babbling for at least one more page\ldots What you
should notice is that the chapter titles appear substantially lower
than the continuing text. Babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble.
Babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble babble babble babble babble babble babble
babble babble babble babble.
\begin{table}[t] %optional [t, b or h];
\begin{tabular}{|r||r@{.}l|}
\hline
Phoenix & \$960&35\\
\hline
Calgary & \$250&00\\
\hline
\end{tabular}
\caption{
\label{tab:Table1}
Here is the caption for this wonderful table.Text of Caption}
\end{table}
\chapter[Another Chapter\ldots]{Another Chapter with a Very Long
Chapter-name that will Probably Cause Problems}
This chapter name is very long and does not display properly in the
running headers or in the table of contents. To deal with this, we
provide a shorter version of the title as the optional argument to the
\verb|\chapter[]{}| command.
\section{Another Section}
Another bunch of text to demonstrate what this file does.
You might want a list for example:
\begin{itemize}
\item An item in a list.
\item Another item in a list.
\end{itemize}
\section*{An Unnumbered Section That is Not Included in the Table of
Contents}
%% We would like to place the figure here, so we start with [h].
%% Note that we have located the figure between paragraphs (rather,
%% before one) so that it does not split up sentences.
\begin{figure}[ht]
\begin{center}
%% psfrag: comment the following line if not using the psfrag package
\psfrag{pie makes me happy!}{$\pi$ makes me happy!}
%% includegraphics: comment the following if not using the graphicx package
\includegraphics[width=0.4\textwidth]{fig.eps}
\caption[Happy Face: figure example.]{\label{fig:happy} This is a
figure of a happy face with a \texttt{psfrag} replacement. The
original figure (drawn in xfig and exported to a .eps file) has
the text ``pie makes me happy!''. The \texttt{psfrag} package
replaces this with ``$\pi$ makes me happy!''. Note that we have
used the optional argument for the caption command so that only
a short version of this caption occurs in the list of figures.}
\end{center}
\end{figure}
\afterpage{\clearpage}
Here is an example of a figure environment.
Perhaps I should say that the example of a figure can be seen in
Figure~\ref{fig:happy}. Figure placement can be tricky with \LaTeX\
because figures and tables are treated as ``floats'': text can flow
around them, but if there is not enough space, they will appear later.
To prevent figures from going too far, the
\verb|\afterpage{\clearpage}| command can be used. This makes sure
that the figures are typeset at the end of the page (possibly appearing on
their own on the following pages) and before any subsequent text.
The \verb|\clearpage| forces a page break so that the figure can be
placed, but without the \verb|\afterpage{}| command, the page
would be broken too early (at the \verb|\clearpage| statement). The
\verb|\afterpage{}| command tells \LaTeX{} to issue the command after
the present page has been rendered.
Be careful when using the ``here'' placement option
\verb|\begin{figure}[ht]| that you place the figure between paragraphs
in your text, otherwise \LaTeX{} might actually insert it in the
middle of a sentence (which does not look very good and is frowned
upon by the editors!).
\subsection*{An Unnumbered Subsection}
Note that if you use subsections or further divisions under an
unnumbered section, then you should make them unnumbered as well
otherwise you will end up with zeros in the section numbering.
\chapter{Landscape Mode}
The landscape mode allows you to rotate a page through 90 degrees. It
is generally not a good idea to make the chapter heading landscape,
but it can be useful for long tables etc.
\begin{landscape}
This text should appear rotated, allowing for formatting of very
wide tables etc. Note that this might only work after you convert
the \texttt{dvi} file to a postscript (\texttt{ps}) or \texttt{pdf}
file using \texttt{dvips} or \texttt{dvipdf} etc. This feature is
provided by the \verb|lscape| and the \verb|pdflscape| packages.
The latter is preferred, when it works, as it also rotates the pages in
the \texttt{pdf} file for easier viewing.
\end{landscape}
% This file is setup to use a bibtex file sample.bib and uses the
% plain style. Note, the bibliography could come after the appendices.
\bibliographystyle{plain}
\bibliography{sample}
% If you only have one appendix, please uncomment the following line.
\appendix
\chapter{First Appendix}
Here you can have your appendices. Note that if you only have a
single appendix, you should issue
\verb|\renewcommand{\appendicesname}{Appendix}| before calling
\verb|\appendix| to display the singular ``Appendix'' rather than the
default plural ``Appendices''.
\chapter{Second Appendix}
Here is the second appendix.
%% This changes the headings and chapter titles (no numbers for
%% example).
\backmatter
% Indices come here.
\end{document}
\endinput
%%
%% End of file `mitsample.tex'.
\section{Software Environment}
\centeredlargetext{white}{black}{
Software Environment
}
\begin{frame}
\frametitle{Welcome to Ubuntu GNU/Linux}
\begin{center}
\includegraphics[height=0.5\paperheight]{Tux.png}
\includegraphics[width=0.5\paperwidth]{blackeubuntulogo.png}
\end{center}
\end{frame}
\begin{frame}
\frametitle{How to take the mouse out of the Virtual Machine}
\begin{itemize}
\item Hit the \textbf{RIGHT CTRL} Key on Windows / Linux Host
\item Hit the \textbf{LEFT APPLE} Key on Mac Host
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{How to Open a Terminal - Icon in Upper Left Corner}
\framesubtitle{To type your command line instructions}
\begin{center}
\includegraphics[width=0.7\paperwidth]{Screenshot-OpenTerminal.jpg}
\end{center}
\end{frame}
\begin{frame}
\frametitle{How to Open a Terminal - Icon in Upper Left Corner}
\framesubtitle{To type your command line instructions}
\begin{center}
\includegraphics[width=0.7\paperwidth]{Screenshot-Terminal.jpg}
\end{center}
\end{frame}
\begin{frame}
\frametitle{How to Navigate Directories}
\framesubtitle{Double Click in Folder Icons in the Desktop}
\begin{center}
\includegraphics[width=0.7\paperwidth]{Screenshot-Nautilus.jpg}
\end{center}
\end{frame}
\begin{frame}[fragile]
\frametitle{Walk through the directories}
\begin{itemize}
\item Find source code of exercises
\begin{verbatim}
cd ~/src/ITK-OpenCV-Bridge-Tutorial/Exercises
pwd
ls
nautilus .
\end{verbatim}
\pause
\item Find binary build of exercises
\begin{verbatim}
cd ~/bin/ITK-OpenCV-Bridge-Tutorial/Exercises
pwd
ls
\end{verbatim}
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{How to View Images}
\begin{itemize}
\item Go to the directory
\item Invoke the ``eye of gnome'' (eog) application
\begin{verbatim}
cd ~/data
eog mandrill.png
\end{verbatim}
\pause
\item Hit ESC key to quit the application
\end{itemize}
\end{frame}
\begin{frame}[fragile]
\frametitle{How to View Videos}
\begin{itemize}
\item Go to the directory
\item Invoke the ``VideoLAN'' (vlc) application
\begin{verbatim}
cd ~/data
vlc Walk1.mpg
\end{verbatim}
\pause
\item Hit CTRL-Q to quit the application
\item or use the menus
\end{itemize}
\end{frame}
\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
%\usepackage{hyphenat}
\usepackage[ruled,vlined]{algorithm2e}
\usepackage{amsmath}
\usepackage{xspace}
\usepackage[binary-units=true]{siunitx}
\usepackage{ulem}
%\usepackage{censor}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{tabularx}
\hypersetup{colorlinks=true,linkcolor=black,citecolor=blue,filecolor=black,urlcolor=blue}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}
\title{The benefits of prefetching for large-scale cloud-based neuroimaging analysis workflows}
\newcommand{\tristan}[1]{\color{orange}\textbf{From Tristan: }#1\color{black}}
\newcommand{\tristanmod}[2]{\color{orange}\sout{#1}{#2}\color{black}}
\newcommand{\ariel}[1]{\color{blue}\textbf{From Ariel:}#1\color{black}}
\newcommand{\arielmod}[2]{\color{blue}\sout{#1}{#2}\color{black}}
\newcommand{\valerie}[1]{\color{purple}\textbf{Valerie: }#1\color{black}}
\newcommand{\valeriemod}[2]{\color{purple}\sout{#1}{#2}\color{black}}
\newcommand{\aws}{AWS\xspace}
\newcommand{\hcp}{HCP\xspace}
\newcommand{\sfs}{S3Fs\xspace}
\author{Valerie Hayot-Sasson$^1$, Tristan Glatard$^1$, Ariel Rokem$^2$ \\ $^1$ Department of Computer-Science and Software Engineering, Concordia University, Montreal, Canada\\ $^2$ Department of Psychology and eScience Institute, University of Washington, Seattle, Washington, USA}
\maketitle
\begin{abstract}
%%edited
To support the growing demands of neuroscience applications, researchers are transitioning to cloud computing for its scalable, robust and elastic infrastructure. Nevertheless, large datasets residing in object stores may result in significant data transfer overheads during workflow execution.
Prefetching, a method to mitigate the cost of reading in mixed workloads, masks data transfer costs within the processing time of prior tasks. We present ``Rolling Prefetch'', a Python library that implements a particular form of prefetching from the AWS S3 object store, and we quantify its benefits.
Rolling Prefetch extends \sfs, a Python library exposing AWS S3 functionality via a file object, to add prefetch capabilities.
When measuring the analysis performance of a 500~GB brain connectivity dataset stored on S3, we found that prefetching provides significant speed-ups of up to 1.86$\times$, even in applications consisting entirely of data loading. The observed speed-up values are consistent with our theoretical analysis. Our results demonstrate the usefulness of prefetching for scientific data processing on cloud infrastructures and provide an implementation applicable to various application domains.
%%%%%%%%%%%%%%%% original
% With the increase in dataset size and computing requirements of neuroimaging workflows, researchers have transitioned to cloud environments to provide the necessary infrastructure to enable the research. Since compute-local storage on the cloud can be rather costly,
% large datasets are placed on slower, but scalable,
% cloud storage instances, resulting in non-negligible performance penalties when accessing the data during workflow execution.
% A popular method for mitigating the cost of data accesses is through prefetching. In prefetching, the data is transferred fast storage in
% anticipation of a task access, masking slow storage latency and bandwidth within task scheduling and compute time.
% In this paper, we investigate the benefit of prefetching on a neuroimaging use case running on a t2-xlarge Amazon EC2 instance operating
% on an 500~GB tractography dataset stored on Amazon S3. To achieve this, we designed a model to describe the anticipated performance trends, developed a Python library based on S3Fs to prefetch the data and compared its performance with S3Fs as its baseline. Overall, it was found
% that prefetching the data provided a significant speedup when large amounts of data needed to processed by a task and when there were many
% parallel tasks accessing the dataset.
%%%%%%%%%%%%%%%%%%%%%
%Neuroimaging deals with massive amounts of data. One way to deal with that is to move to cloud computing. One of the persistent bottlenecks in %data analysis in these environments is that you have to wait for data to arrive at the CPU to be able to start analyzing it. Here, we devise a %way to prefetch data that will be needed in the future, while other work is happening.
\end{abstract}
\begin{IEEEkeywords}
Prefetching, neuroimaging, cloud computing
\end{IEEEkeywords}
\section{Introduction}
% Cloud service providers, such as Amazon Web Services (\aws)~\cite{aws},
% Microsoft Azure~\cite{azure} and Google Cloud~\cite{gc}, give access to two
% main services: Compute services and Object Stores. Compute services
% typically consist of instances with a configured amount of RAM, vCPU and
% local storage. Costs associated with compute services increase with the
% number of resources and time required. Compared to object stores,
% compute-local storage can be very costly, particularly when storing large
% amounts of data for extended periods. Thus, users generally opt for
% scalable external object stores for their large data. This is despite the
% increased latency, which may vary depending on location of the storage and
% compute instance network. Examples of cloud object storage include Amazon's
% Simple Storage Service (S3), Azure's Blob Service, and Google Cloud
% Storage.
Many fields of science are experiencing increases in the volume of datasets available to researchers. Neuroscience in particular is experiencing a rapid growth in data due to technical advances,
scientific breakthroughs, and sociological trends towards open data sharing. This increase in data is providing the basis for new discoveries about brain structure and function, but it also presents technical challenges. To deal with the deluge of available data, neuroscientists are increasingly adopting cloud platforms for data storage and processing. However, inefficient use of cloud services can lead
to needlessly long processing times and costs. We aim to investigate
methods to reduce the impact of data transfers for
neuroimaging workflows on cloud services.
Data prefetching
is a well-established technique for the reduction of data access-related costs~\cite{callahan1991software, mowry1992design,
klaiber1991architecture}. Traditionally, prefetching was
used to reduce memory latency, as memory accesses were significantly slower than CPU processing. However, since the rise of Big Data,
prefetching has also been shown to be beneficial to the processing of large
datasets located on remote storage~\cite{yildiz2018improving}. During the
execution of an application, data required for future tasks are
copied from the remote storage device to compute-local storage, such that
when the application requires the data, it can read it from local
storage.
A recent example of the effectiveness of prefetching on the cloud is Netco~\cite{jalaparti2018netco}, a prefetching extension
integrated into the Hadoop Distributed File System (HDFS). Future data is prefetched based on two measures: 1) size of the
data to be processed and 2) task deadline. Netco demonstrated superior performance compared to other file systems, which motivates our study. However, it remains tightly bound to HDFS, whereas cloud applications generally use other file systems. A more versatile solution is needed that applies broadly to cloud data analysis and storage.
The present study focuses on neuroscience data that describe long-range
connections between different parts of the human brain, a central research topic in contemporary
neuroscience~\cite{bassett_network_2017}. The three-dimensional trajectories
of the major neural pathways, composed of millions of neuronal axons, are
inferred from measurements of diffusion MRI and processed using computational tractography algorithms. These
algorithms generate ``streamlines'': 3D curves that approximate the
trajectories of the major pathways. A single human brain measurement may
contain several millions of these streamlines, with their coordinates
assessed at sub-millimeter resolution. Subsequent analyses of these
streamlines usually access
streamlines sequentially and entirely within the files in which they are stored. Such an access pattern creates an excellent opportunity for prefetching.
This paper investigates the benefits of prefetching for cloud-based processing of neuroscience streamlines. Through
both theoretical analysis and experimentation, we characterize the
speed-up provided by prefetching compared to sequential data transfers for
neuroscience data processing deployed on the Amazon Web Services cloud. More specifically, this paper makes the following contributions:
\begin{itemize}
\item Formalization and performance analysis of a ``rolling prefetch'' data scheme for cloud-based applications;
\item Implementation based on the S3Fs Python library to access data on Amazon S3;
\item Experimental evaluation in the Amazon cloud with a 500-GB dataset of streamlines derived from dMRI data.
\end{itemize}
% This is due to technical and scientific breakthroughs, enabling improved
% measurements that are providing more detailed views of the brain. But also
% thanks to sociotechnical trends that are promoting increase sharing of
% neuroscientific data. For example, human brain imaging datasets are rapidly
% increasing in size. While many neuroimaging studies ranged from 10 -
% 100~GBs in size, newer datasets such as the
% BigBrain~\cite{amunts2013bigbrain}, UK Biobank~\cite{sudlow2015uk} and the
% Human Connectome Project (HCP)~\cite{van2013wu} are expected to reach up to
% Petabytes of data. Cloud platforms are commonly used to store this data in
% publicly-accessible object stores, as is the case for instance for \hcp
% data.
% \subsection{Cloud Infrastructure}
% The cloud gives users access various types of self-maintained services and hardware.
% This is a particularly attractive model to scientific groups who would not otherwise
% have the expertise nor the funds to maintain such infrastructure. Furthermore, the amount
% of infrastructure is vast, supporting many users without significant queuing time in attaining the resources.
% Unlike High Performance Computing (HPC) clusters which may also provide access to various self-maintained
% hardware, users using the cloud as an Infrastructure as a Service (IaaS), can install whatever software is required
% on there.
\section{Materials and Methods}
Our implementation of prefetching, known as Rolling Prefetch, is a Python library implemented as a layer on top of \sfs. The code is available under the MIT license at \url{https://github.com/ValHayot/rollingprefetch}.
\sfs is a Python library, based on FSSpec, for interacting with directories and files located on Amazon S3.
To offset the cost of latency on S3, \sfs leverages the on-demand caching mechanisms provided by FSSpec.
Unlike \sfs which has distinct data transfer and compute phases, prefetched data transfers happen concurrently with compute (Figure~\ref{fig:prefetch}). In both cases, data is transferred from cloud storage in blocks of configurable size. While local I/O overheads occur during prefetch, the resulting overhead is in general minimal since memory disks are expected to be used for this purpose. Most importantly, prefetching requires knowledge of the application's data access pattern, such that transfers can be correctly anticipated. In our case, we assume that data is read sequentially, as is commonly the case with tractography data.
Since Rolling Prefetch is implemented as a layer atop \sfs, it can easily replace an existing call to S3Fs by calling \texttt{S3PrefetchFileSystem} instead of \texttt{S3FileSystem}.
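For illustration, the following sketch shows how an existing \sfs read could be switched to Rolling Prefetch. The module name, the keyword arguments (\texttt{block\_size}, \texttt{prefetch\_storage}) and the list-of-paths form of \texttt{open} follow the description given in this paper and are illustrative; they may differ in detail from the released library.
\begin{verbatim}
# Baseline: sequential transfers through S3Fs.
from s3fs import S3FileSystem

fs = S3FileSystem()
with fs.open("bucket/hydi/shard-000.trk") as f:
    data = f.read()

# Rolling Prefetch: the same read pattern, but blocks
# are staged on local storage (here tmpfs) and evicted
# once consumed. Names below are illustrative.
from rollingprefetch import S3PrefetchFileSystem

pfs = S3PrefetchFileSystem()
files = ["bucket/hydi/shard-000.trk",
         "bucket/hydi/shard-001.trk"]
with pfs.open(files, block_size=64 * 2**20,
              prefetch_storage=[("/dev/shm", 2048)]) as f:
    data = f.read()
\end{verbatim}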
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{figures/prefetch_diagram.pdf}
\end{center}
\caption{Prefetching vs sequential transfers (\sfs)}
%https://docs.google.com/drawings/d/1MV5dIdLyBrM_HJX-0jOsIKf90UD8wXE_G5Cnmd-mY-A/edit
\vspace{-1.5em}
\label{fig:prefetch}
\end{figure}
\subsection{Algorithm}
% These caching mechanisms include MMapCache, ReadAheadCache,
% BlockCache, BytesCache and AllBytes. AllBytes attempts to cache the entire file contents into memory upon data access.
% MMapCache creates a memory-mapped temporary file which is populated in units of blocks, based on
% the configurable \texttt{block\_size} parameter, as data is requested. For small sequential reads, the ReadAheadCache
% can be used to load data in blocks at a time. The BytesCache is similar to the ReadAheadCache except it improves the
% efficiency of semi-random reads by also maintaining parts of the previous cached block in memory. BlockCache differs from
% the others in that it implements a least recently used (LRU) cache of max number of blocks. When memory is filled, the
% least recently used block is evicted and the next one is read. By default, \sfs uses the BytesCache with a block size of
% 5~MB.
Rolling Prefetch combines prefetching with data eviction. By evicting processed data, it ensures a reduced footprint on the local storage leveraged as cache
(e.g., tmpfs, SSD, HDD). Rolling Prefetch consists of three threads: (1) the reading thread loads data blocks from local storage and marks them for eviction, (2) the prefetching thread transfers data blocks from cloud storage to local storage, and (3) the eviction thread deletes the blocks that have been marked for eviction.
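The following toy sketch illustrates the idea behind this design: a bounded local cache lets cloud transfers overlap with computation, and consuming a block frees its slot. It is purely illustrative and is not the library code; in particular, eviction is implicit in the bounded queue here, whereas Rolling Prefetch uses a dedicated eviction thread.
\begin{verbatim}
import queue, threading, time

N_BLOCKS = 8
cache = queue.Queue(maxsize=2)   # bounded local cache

def prefetch():
    for i in range(N_BLOCKS):
        time.sleep(0.05)         # simulate S3 transfer
        cache.put(f"block-{i}")  # blocks if cache is full

def read_and_compute():
    for _ in range(N_BLOCKS):
        block = cache.get()      # waits if not yet fetched
        time.sleep(0.05)         # simulate compute on block
        cache.task_done()        # slot freed ("evicted")

threading.Thread(target=prefetch, daemon=True).start()
read_and_compute()
\end{verbatim}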
% \valerie{cut read alg below}
% \begin{algorithm}
% \SetAlgoLined
% \SetKwInOut{Input}{Input}
% \SetKwInOut{Output}{Output}
% \Input{$start$ the start position of the read;\\
% $end$ the end position of the read;\\}
% \Output{$output$ the file contents located between $start$ and $end$}
% $not\_local \gets True$\;
% $read\_length \gets 0$
% $total\_read \gets end - start$
% \While{$total\_read > read\_length$}{
% \While{$not\_local$}{
% $block, not\_local \gets check\_local(start)$\;
% }
% $curr\_end \gets max(block.end, end)$\;
% $output \gets block.read(start, curr\_end)$\;
% \If{$block.end < end$}{
% $start \gets curr\_end$\;
% $flag\_for\_deletion(block)$\;
% }
% }
% \caption{Read}\label{alg:read}
% \end{algorithm}
\subsubsection{Reading}
The reading thread is triggered by the application to read data directly from cache or to wait until it is prefetched, if not found in the cache. %(Algorithm~\ref{alg:read}).
By waiting for the data to be cached, we ensure that performance is comparable to \sfs in the worst case. Furthermore, since all reads
are assumed to be sequential, if the requested block is not yet in the cache, it must be the block currently being prefetched. Whenever a prefetched block has been read fully, the
read function flags it for deletion.
% This process involves renaming the file to contain
% a ``.todelete'' extension. Using this extension, the eviction thread can determine which
% files need to be deleted.
\subsubsection{Prefetching}
Throughout the lifetime of the application, the prefetching thread continuously requests blocks from cloud storage, so long as there remain blocks that have not been prefetched (Algorithm~\ref{alg:prefetch}). Each block will be written to an appropriate cache location while not exceeding user-defined space limits.
Initially, the \texttt{used} variable for all cache locations is set to zero and the \texttt{file\_index} variable
is set to point to the first file index in the sequential ordering. The algorithm then enters a while loop whose
termination is dictated by the main program.
% (i.e., if the file object is still open, fetch, otherwise, terminate thread).
If all files have been prefetched, the prefetching thread terminates regardless of the status of the main thread.
Within the loop, the algorithm iterates between all available cache locations in priority order. The amount of available storage space
remaining is computed, and if it is insufficient to write the next block, the algorithm iterates through the file list $total\_files$
($verify\_used()$) and queries the filesystem to determine which blocks have been evicted. Based on how many blocks have been
removed, we can update the amount of used space, and by extension, the amount of available space. Should sufficient space
be released, the next block is fetched from cloud storage and is written to local storage. If there is insufficient space, the
next local storage location is tried.
%%mention in discussion the risk of an infinite loop if insufficient cache location storage space is provided. maybe
% need for timeout? -- will not mention!
\begin{algorithm}
\SetAlgoLined
\SetKwInOut{Input}{Input}
\Input{
$fetch$ a shared variable indicating whether the main thread has terminated or not;\\
$total\_files$ the list of all the files to prefetch;\\
$cache\_location$ list of paths to cache locations;\\
$total$ total space available on prefetch cache;\\
$total\_size$ cumulative size of the files prefetched so far;\\
$offset$ the number of bytes prefetched so far;\\
$blocksize$ size of a block\\}
$used \gets 0$\;
$file\_index \gets 0$\;
\While{$fetch$}{
\ForEach{$cache\_location$}{
$available \leftarrow total - used$\;
\If{ $available < blocksize$}{
$available \gets verify\_used()$\;
}
\If{ $available \geq blocksize$}{
$fetch\_next\_block()$\;
$used \leftarrow used + blocksize$\;
}
}
\If{$total\_size \leq offset$ and $file\_index + 1 < |total\_files|$}{
$file\_index \gets file\_index + 1$\;
$total\_size \gets total\_size + sizeof(total\_files[file\_index])$\;
}
\ElseIf{$total\_size \leq offset$}{
break\;
}
}
\caption{Prefetching}\label{alg:prefetch}
\end{algorithm}
\subsubsection{Eviction}
Similar to prefetching, the eviction thread only terminates when the main thread has completed processing. %(Algorithm~\ref{alg:eviction}).
% However, there is no mechanism for it to break prior to the main thread indicating termination. This
% is because we assume that the file object will be closed when all data has been read.
To avoid additional costs caused by continuously querying the filesystem to determine which files can be evicted,
we determine the names of all the blocks that should be evicted ($get\_all\_blocks$), and verify whether they exist
in the filesystem at the time of removal. We then update the list to ensure that we do not attempt to remove a given block more
than once. Between iterations, we sleep for 5 seconds so that sufficient time elapses between evictions.
The eviction thread ensures deletion of all remaining files prior to terminating.
% \valerie{cut eviction alg below}
% \begin{algorithm}
% \SetAlgoLined
% \SetKwInOut{Input}{Input}
% \Input{$fetch$ a shared variable indicating whether the main thread has terminated or not;\\
% $total\_files$ a list files to prefetch;\\
% $cache\_location$ a list of paths to cache locations;\\
% $file\_sizes$ a list containing the size of each file;\\
% $blocksize$ size of a block\\}
% $all\_blocks \gets get\_all\_blocks()$\;
% \While{fetch}{
% \ForEach{$block$ in $all\_blocks$}{
% \ForEach{$dir$ in $cache\_location$}{
% $removed \gets remove(block)$\;
% \If{$removed$}{
% $update\_list(all\_blocks, block)$\;
% break\;
% }
% }
% }
% $sleep 5$\;
% }
% $remove\_remaining(all\_blocks)$
% \caption{Eviction}\label{alg:eviction}
% \end{algorithm}
\subsection{Performance analysis}
We consider the cost of processing a file using sequential transfers (\sfs) to be the sum of three components (Equation~\ref{eq:s3fs}):
1) latency-associated costs, 2) bandwidth-associated costs and 3) application compute time.
The cost of latency is experienced every time a chunk of data is requested from cloud storage. That is, if we read a full file in a single call to cloud storage, we will only pay for latency once. If we issue
multiple calls to obtain file data on cloud storage, we will pay latency for each call. In contrast,
bandwidth is not affected by the number of calls. Whether we request all the data at once
or in chunks, bandwidth remains fixed, and time to transfer the data is
the quotient between the total amount of data transferred and the bandwidth. Compute time is assumed to be proportional to data size, as is frequently the case in neuroimaging.
\begin{equation}
T_{\mathrm{seq}} = n_{b}l_c + \frac{f}{b_{cr}} + cf,
\label{eq:s3fs}
\end{equation}
where $n_{b}$ is the number of data blocks, $f$ is the total size of the file to transfer, $l_c$ is the cloud latency, $b_{cr}$ is the cloud read bandwidth, and $c$ is the compute time per byte consumed.
Rolling Prefetch contrasts with Equation~\ref{eq:s3fs} in that the compute and data transfer times mask one another (Equation~\ref{eq:prefetch}). However, Rolling Prefetch has a slightly higher performance penalty than sequential transfers when there is no compute, due to reading and writing the data to local storage. Furthermore, we
must consider the initial read from cloud storage, where no compute can occur concurrently, and the last compute, where no data transfer can occur concurrently.
\begin{equation}
T_{\mathrm{pf}} = T_{\mathrm{cloud}} + (n_b-1)\max\left(T_{\mathrm{cloud}}, T_{\mathrm{comp}}\right) + T_{\mathrm{comp}} \label{eq:prefetch}
\end{equation}
where $T_{\mathrm{cloud}}$ is the time to download a block from the cloud and write it to local storage, and $T_{\mathrm{comp}}$ is the time to read a block from local storage and process it:
\begin{eqnarray*}
T_{\mathrm{cloud}} &=& \overbrace{l_{c} + \frac{f}{b_{cr}n_{b}}}^{\mathrm{cloud\ read}} + \overbrace{l_{l} + \frac{f}{b_{lw}n_{b}}}^{\mathrm{local\ write}}, \\
T_{\mathrm{comp}} &=& \underbrace{l_l+\frac{f}{b_{lr}n_b}}_{\mathrm{local\ read}} + \underbrace{\frac{cf}{n_b}}_{\mathrm{compute}},
\end{eqnarray*}
where $l_l$ is the latency of local storage, $b_{lw}$ is the write bandwidth to local storage, and $b_{lr}$ is the read bandwidth to local storage.
This simple model provides several insights. First, if we neglect local transfers ($l_l$=0, $b_{lw}$=$b_{lr}$=+$\infty$), a reasonable assumption when local storage is in-memory, then we have:
\begin{equation*}
T_{\mathrm{seq}} = T_{\mathrm{pf}} + (n_b-1)\min\left(T_{\mathrm{cloud}},T_{\mathrm{comp}}\right),
\end{equation*}
and therefore the speed-up provided by Rolling Prefetch compared to sequential transfers is:
\begin{equation}
S=\frac{T_{\mathrm{seq}}}{T_{\mathrm{pf}}}=1+(n_b-1)\frac{\min\left(T_{\mathrm{cloud}},T_{\mathrm{comp}}\right)}{T_{\mathrm{pf}}} < 2
\label{eq:speed-up}
\end{equation}
Rolling Prefetch is therefore expected to provide a speed-up of at most 2$\times$. This upper bound is approached when $T_{\mathrm{cloud}} \approx T_{\mathrm{comp}}$, which requires that cloud transfer and compute times are of similar magnitude.
In sequential transfers, using a single block ($n_b=1$) leads to the shortest transfer time. In practice,
block size is of course constrained by available memory. In contrast, with Rolling Prefetch, an optimal number of blocks $\hat n_b$ exists under the reasonable assumption that $l_l \ll l_c$:
\begin{equation}
\hat n_b = \sqrt{\frac{cf}{l_c}}
\end{equation}
This suggests that, on a given cloud infrastructure, the number of blocks should increase when the application compute time or the total data size increases.
Finally, as the number of blocks increases, $T_{\mathrm{seq}}$ and $T_{\mathrm{pf}}$ become equivalent to $n_bl_c$ and $n_b(l_c+l_l)$, respectively, resulting in parallel asymptote lines.
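As a simple numerical illustration of the model, the sketch below evaluates $T_{\mathrm{seq}}$, $T_{\mathrm{pf}}$ and $\hat n_b$ using the S3 latency and bandwidth reported later in Table~\ref{table:benchmarks}; the compute rate $c$ is an assumed value chosen only to exercise the formulas, not a measured quantity.
\begin{verbatim}
from math import sqrt

def t_seq(f, n_b, l_c, b_cr, c):
    return n_b * l_c + f / b_cr + c * f

def t_pf(f, n_b, l_c, b_cr, c, l_l=0.0,
         b_lw=float("inf"), b_lr=float("inf")):
    t_cloud = l_c + f/(b_cr*n_b) + l_l + f/(b_lw*n_b)
    t_comp = l_l + f/(b_lr*n_b) + c*f/n_b
    return (t_cloud + (n_b - 1)*max(t_cloud, t_comp)
            + t_comp)

# 31.2 GiB read, 0.1 s latency, 91 MB/s bandwidth,
# assumed compute rate of 5 ns per byte.
f, l_c, b_cr, c = 31.2 * 2**30, 0.1, 91e6, 5e-9
n_b = 500
print(t_seq(f, n_b, l_c, b_cr, c) /
      t_pf(f, n_b, l_c, b_cr, c))   # model speed-up
print(sqrt(c * f / l_c))            # optimal block count
\end{verbatim}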
%\tristan{Processing time should be explained when you explain the problem in intro. The real question in this paper is whether opening (and processing) tractography files involves enough data transfers AND compute time for prefetching to be useful. If one is negligible compared to the other, speed-up will be negligible too.}
% \ariel{Added a suggested subsection describing the dataset}
% \valerie{cut entire implementation subsection} \tristan{I moved the description of S3FS here, you could summarize the paragraphs on your implementation into 1}
% \subsection{Implementation}
% Rolling prefetch is a Python library implemented as a layer on top of \sfs.
% \sfs is a Python library, based on FSSpec, for interacting with directories and files located on Amazon S3.
% To offset the cost of latency on S3, \sfs leverages the on-demand caching mechanisms provided by FSSpec.
% These caching mechanisms include MMapCache, ReadAheadCache,
% BlockCache, BytesCache and AllBytes. AllBytes attempts to cache the entire file contents into memory upon data access.
% MMapCache creates a memory-mapped temporary file which is populated in units of blocks, based on
% the configurable \texttt{block\_size} parameter, as data is requested. For small sequential reads, the ReadAheadCache
% can be used to load data in blocks at a time. The BytesCache is similar to the ReadAheadCache except it improves the
% efficiency of semi-random reads by also maintaining parts of the previous cached block in memory. BlockCache differs from
% the others in that it implements a least recently used (LRU) cache of max number of blocks. When memory is filled, the
% least recently used block is evicted and the next one is read. By default, \sfs uses the BytesCache with a block size of
% 5~MB.
% \subsection{Implementation details}
% Inheriting functionality from \sfs permitted us to focus on prefetching rather
% that the logic of communicating directly with the S3 interface. However, we have modified some of the S3Fs functions, settings and functionality.
% \begin{sloppypar}
% The implementation can be split up into two classes: \texttt{S3PrefetchFileSystem} and
% \texttt{S3PrefetchFile}. \texttt{S3PrefetchFileSystem} inherits from \sfs' \texttt{S3FileSystem}, and is nearly identical to the S3FileSystem
% class, with the exception that it overrides S3FileSystem's call to \texttt{open} to create
% and return an \texttt{S3PrefetchFile} object.
% \end{sloppypar}
% % Aside from reimplementing the \texttt{read} function, S3File is responsible for starting and stopping the prefetch and evictions threads. Both the prefetch and eviction threads are instatiated in the constructor of an S3PrefetchFile.
% % The threads are then terminated via the status
% % of a shared object (i.e. $self.fetch$) between the threads.
% % The most important difference between \texttt{S3File} and \texttt{S3PrefetchFile} is that we have disabled \sfs caching in our library, since it is embedded in prefetching.
% Since it is common for large files to be broken down into datasets of smaller files, we wanted the ability to leverage prefetching in this case.
% As a result, a call to open accepts a list of file paths rather than just a single path. In the case where there needs to be a header,
% the first file in the provided list must point to the header region. If all the other files contain a header that needs to be skipped, there is
% a parameter, known as \texttt{header\_bytes} that can be passed to \texttt{open} to specify the size of the header to skip in subsequent files.
% Currently, it is not possible to prefetch unrelated files.
% % \begin{sloppypar}
% % The library fixes the open mode to be ``read bytes'', as \texttt{S3PrefetchFile} is only intended to read files, and currently only functions with binary files. To be able to write or append to files, the \texttt{open} of the parent class of \texttt{S3PrefetchFileSystem}
% % will need to be used.
% % \end{sloppypar}
% Prefetching should be enabled to write to any available local filesystem. As a result, we have added the \texttt{prefetch\_storage} parameter
% to the initialization of an \texttt{S3PrefetchFile}. This parameter allows users to specify the desired caching locations and the amount
% of available space to use at each location. The cache directories are listed in terms of descending priority. That is, the first location should
% point a location on the fastest available local storage, e.g., tmpfs.
\subsection{Data}
To demonstrate the utility of Rolling Prefetch in neuroimaging, we analyzed data derived from a dMRI experiment conducted in a single subject. These data are special, in that a Super-Resolution Hybrid Diffusion Imaging (HYDI) method was used to achieve an effective spatial resolution of 0.625 mm$^3$, much higher than the typical $\sim$2 mm$^3$ commonly used. The details of the HYDI method and parameters of data acquisition were previously described \cite{Elsaid2019-ez}. The source MRI data were 14.94 GB. The data were processed using a probabilistic tractography algorithm \cite{Berman2008-xg}, implemented as part of DIPY \cite{Garyfallidis2014-el} and accelerated using CUDA to work on GPU \cite{rokem2021gpu}. This generated $\sim$498 GB of streamline data. Tractography streamlines are usually stored in the neuroscience-specific \texttt{.trk} file format. In this case, streamlines were sharded into 464 \texttt{.trk} files stored in an S3 bucket using the high-availability Standard storage class.
% \tristan{I moved the nibabel text here, it could be summarized in 1 paragraph}
% \ariel{ I revised the nibabel section into one paragraph. Below in comments is the original as I found it:}
Each \texttt{.trk} file is comprised of a 1,000-byte header and a body of variable length consisting
of a series of streamlines. Each streamline section contains \SI{4}{\byte} that denote the number of points in the streamline,
followed by a series of floating point values detailing each coordinate and ends with a series of values representing properties of the streamline. We used Nibabel, a Python library that reads and writes neuroscience-specific data formats, to read these files. For \texttt{.trk} files, Nibabel can return individual
data points via a generator. This is known as ``lazy loading'' and can be turned on or off prior to reading the file. As a result of the data representation format in \texttt{.trk} files, Nibabel reads may incur significant overhead, because Nibabel issues three read calls for each streamline in the file.
% , some of which can be very small, depending on the streamline.
In addition, an affine transform is stored together with the data, and used to bring coordinates from different measurements into register. For example, when Nibabel reads \texttt{.trk} files it automatically applies an affine transformation to the coordinates of each streamline stored in the file. This means that some amount of compute is always executed when data is read from file.
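The following simplified sketch illustrates this record layout and why reading one streamline costs three read calls. It assumes little-endian encoding, omits any per-point scalar values, and takes the number of per-streamline properties from the header; it is an illustration, not Nibabel's implementation.
\begin{verbatim}
import struct

def read_streamline(f, n_properties):
    # read 1: number of points in the streamline
    n_pts, = struct.unpack("<i", f.read(4))
    # read 2: the 3-D coordinates of every point
    coords = struct.unpack(f"<{3*n_pts}f",
                           f.read(12 * n_pts))
    # read 3: per-streamline property values
    props = struct.unpack(f"<{n_properties}f",
                          f.read(4 * n_properties))
    return coords, props
\end{verbatim}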
% As in many other research fields, neuroscience data is usually stored in field-specific file formats. For example, MR images are often stored in the Neuroimaging Informatics Technology Initiative (NIfTI) file format, with the \texttt{.nii} extension. Streamlines from computational tractography are often stored in a file format based on the Trackvis software, with the \texttt{.trk} extension. Nibabel is a Python library that implements reading and writing of many neuroscience-specific data formats. It is highly used in Python-based analysis projects of neuroimaging data. For files that
% exceed available memory, it provides the ability to MemoryMap them, it the case of randomly accessed files, or it returns individual
% data points via a generator, for sequentially accessed files. This is known as ``lazy loading'' and can be turned on or off prior to reading
% the file. In the case of \texttt{.trk} files representing streamlines, as they are sequentially read, Nibabel uses a generator to return streamlines.
% As a result of the data representation format in \texttt{.trk} files, Nibabel may incur significant overheads when
% reading them. Each \texttt{.trk} file is comprised of a 1000 byte header and a body of variable length consisting
% of a series of streamlines. Each streamline contains \SI{4}{\byte} that denote the number of points in the track,
% followed by a series of floating point values detailing each track point and ends with a series of values representing the properties of the track. This means that for each existing streamline, Nibabel issues a total
% of three read calls, some of which can be very small, depending on the streamline.
% One of the challenges of analyzing volumetric data from neuroimaging is that the origin, voxel size and orientation of the acquisition volume can vary between different acquisitions (e.g., different MRI contrasts). To correspond the data from these different acquisitions, affine transforms are stored together with the data, and the transforms can be applied to the coordinates of different data to bring them into register. For example, when Nibabel reads \texttt{.trk} files it automatically applies an affine transformation to the coordinates of each streamline stored in the file. This means that some amount of compute is always executed when data is read from file.
\subsection{Experiments}
To evaluate the performance of Rolling Prefetch, we conducted four experiments: 1) varying
the number of files; 2) varying the blocksize; 3) parallel processing of files; and 4) neuroimaging use-cases.
For experiments 1-3, the pipeline consisted of reading the file with Nibabel. As Nibabel
performs a minimal amount of computation, Rolling Prefetch is expected to have an effect even here. Additional computation required
by real analyses should only increase the benefit of prefetching.
% For all experiments we are using tractography
% files belonging to the HYDI dataset stored on S3.
\subsubsection{Varying the number of files}\label{exp:files}
We vary the number of \texttt{.trk} files read to determine when Rolling Prefetch will be favourable relative to \sfs. We expected that for small amounts of data, there should be no discernible difference between \sfs and Rolling Prefetch. This is because compute time is
expected to be minimal and the blocksize large compared to the total dataset size. In other words, there
will be fewer opportunities to prefetch data with very small datasets, unless the block size is proportionally smaller,
in which case latency will penalize reads from both \sfs and Rolling Prefetch, extending processing time. With larger files, there will be more computation occurring during the reads, and therefore, more opportunities
to mask the data transfers within the compute.
We benchmarked the time it took to lazily read 1, 5, 10, 15, 20 and 25 files and extract the streamlines in Nibabel.
The total data sizes for these file counts are 1.1, 5.8, 11.9, 18.0, 24.2 and \SI{31.2}{\gibi\byte},
respectively. The blocksize was set to \SI{64}{\mebi\byte} and the prefetch storage was located on \texttt{tmpfs} with
a limit of \SI{2}{\gibi\byte} of storage space. We performed 10 repetitions.
\subsubsection{Varying the blocksize}\label{exp:blocksize}
In this experiment, we aimed to quantify the effect of the number of blocks ($n_b$) on Rolling Prefetch and \sfs. According to our previous analysis, we expect that \sfs will reach its best performance with a single block, and that the runtime will linearly increase with $n_b$ from that point on. For Rolling Prefetch, we expect the runtime to decrease to a minimum, and then increase linearly with $n_b$.
Block sizes of 8, 16, 32, 64, 128, 256, 512, 1024,
and \SI{2048}{\mebi\byte} were used to load the files into Nibabel and extract the streamlines.
% We suspect that \SI{8}{\mebi\byte} may be too small to have many computes spanning more than
% a single block, however, it will occur at a higher frequency then with larger blocks.
We use the
largest-sized block (\SI{2048}{\mebi\byte}) to determine the
overhead of using the Rolling Prefetch algorithm, as no file in the HYDI dataset is larger than \SI{1.7}{\gibi\byte},
and therefore, there are no opportunities to prefetch.
Five files pertaining to the HYDI dataset were used to generate the results.
These files ranged from \SI{793}{\mebi\byte} to
\SI{1.5}{\gibi\byte} in size each. Since only Rolling Prefetch is capable of treating a list of files as a single file, a new
Nibabel object needed to be created for each file in the case of \sfs.
The experiment was executed on both \sfs and Rolling Prefetch for each block size and repeated 10 times. Tmpfs was excluded from this
experiment as the \sfs blocksize would have no effect on its performance. Prefetched storage was set to be tmpfs and
configured to have \SI{2}{\gibi\byte} of available space such that the largest block size could fit in prefetch
storage.
\subsubsection{Parallel processing}\label{exp:parallel}
Since perceived
throughput may be affected by the number of threads used, we aimed to determine if the performance difference
between \sfs and Rolling Prefetch remains proportional to the amount of data being processed per thread.
If throughput is reduced by the number of active threads, it is expected that
the cost of data transfers will outweigh the cost of computing, such that the difference between \sfs and Rolling Prefetch will
be very small. Furthermore, since S3 is a scalable distributed file system and local storage may not be (or may be more limited in terms of scalability), local storage
throughput is expected to be reduced significantly with the addition of multiple threads.
% Should this be the case, the
% overheads specific to prefetching will become very costly.
We used a total of four parallel processes to evaluate the effects of parallel processing. We increased the number of
files per thread from 1 to 20 between conditions, with the last condition loading
a total of \SI{108}{\gibi\byte} into Nibabel. The blocksize was set to \SI{64}{\mebi\byte} and we set the prefetch
storage location to be on \texttt{tmpfs}, with a limit of \SI{1}{\gibi\byte} of storage per process. 10 repetitions were
performed.
\subsubsection{Neuroimaging use cases}\label{exp:4}
Our previous experiments evaluate the value of Rolling Prefetch when the application performs the most
basic and necessary action: converting the data from binary to array representation.
% Although it is unusual
% for applications to load the data without performing any additional processing of any kind, this will give us an
% idea of the lower bound of speedups .
We expect that the benefits of Rolling Prefetch can be maximized with an increased compute time, due to greater opportunities to prefetch data during compute. However, we also expect that there is a
peak ratio between data transfer time and compute time (data-to-compute ratio), where the benefits of Rolling Prefetch will be highest.
If compute time is significantly greater than data transfer, regardless of all the time saved by prefetching data, the
total runtime will predominantly be compute. This motivates further profiling to determine how to shard
large datasets, like the HYDI dataset, such that the benefits of Rolling Prefetch can be maximized.
Since the benefits of Rolling Prefetch may vary according to the ratio between data transfer and compute time, we have selected two use cases with varying computation times: 1) histogram distribution of streamline lengths; and 2) bundle recognition.
The distribution of streamline lengths is used in order to assess the performance of tractography algorithms. It may also aid in determining if the files are good candidates for compression. The histogram computation consists of loading each streamline within the dataset lazily, gathering the lengths of
each individual streamline and generating a histogram of 20 bins from their lengths.
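A minimal sketch of this computation is shown below. The file object may come from \sfs or Rolling Prefetch, and streamline length is computed here as arc length; these are illustrative choices rather than a description of our exact code.
\begin{verbatim}
import nibabel as nib
import numpy as np

def length_histogram(fileobj, bins=20):
    # Lazy loading yields one streamline at a time as an
    # (N, 3) coordinate array.
    trk = nib.streamlines.load(fileobj, lazy_load=True)
    lengths = [np.linalg.norm(np.diff(s, axis=0),
                              axis=1).sum()
               for s in trk.tractogram.streamlines]
    return np.histogram(lengths, bins=bins)
\end{verbatim}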
% Unlike the segmentation
% application, this application is data-intensive.
The human brain is composed of multiple different functional regions, and these regions are connected through major anatomical pathways. One of the first steps in the analysis of streamline data from human brain dMRI data is to classify the streamlines according to their three-dimensional trajectory into these different pathways. This is referred to as ``bundle recognition''~\cite{garyfallidis2018recognition}. To demonstrate the benefits of Rolling Prefetch in this use case, we ran a software pipeline that determines whether streamlines within the tractography file belong to either of two different bundles (the corticospinal tract, CST, and the arcuate fasciculus, ARC), or to neither \cite{Kruper2021-bo}. Currently, the pipeline loads the data all at once (i.e., no lazy loading) and then performs the bundle recognition task. Thus, in this case it is not possible to read only fragments of a large file that does not fit in memory, and as a consequence of the separation of data loading and processing, there are no opportunities to mask the prefetched data transfers within compute operations.
We chose not to modify the pipeline in order to determine what is the speedup provided
by Rolling Prefetch without additional optimization, and in order to be able to speculate on the possible benefits of adapting existing pipelines
to support larger-than-memory files and allowing reads and computations to occur together.
Varying parameters were used for the bundle recognition and
histogram applications. For instance, bundle recognition was
executed on two different compute instances (c5.9xlarge and r5.4xlarge). The experiment executed on the c5.9xlarge instance consisted of a \SI{1}{\gibi\byte} file with a \SI{64}{\mebi\byte} blocksize. The histogram experiment executed on the r5.4xlarge instance used 10 files totalling \SI{12}{\gibi\byte} and used a \SI{32}{\mebi\byte} blocksize.
We also considered the speedups obtained from
bundle recognition on smaller files. We split the
previous \SI{1}{\gibi\byte} file into
9 shards containing the same number of streamlines. Shard sizes ranged from \SI{73}{\mebi\byte} to \SI{165}{\mebi\byte}. We compared the speedup provided by Rolling Prefetch on a \SI{165}{\mebi\byte} shard to
the processing of all 9 shards. These experiments were
run exclusively on the r5.4xlarge instance with 10 repetitions.
To compare the benefits of Rolling Prefetch with different data-to-compute ratios, we executed the histogram computation and the bundle recognition algorithm on
an r5.4xlarge instance using the \SI{1}{\gibi\byte} file. Blocksize in both cases was fixed at \SI{32}{\mebi\byte}. 5 repetitions were performed.
In all cases, \SI{2}{\gibi\byte} of cache storage was allocated to Rolling Prefetch.
\subsection{Infrastructure}
For the first three experiments, we used an Amazon EC2 t2.xlarge instance with 4 vCPUs, \SI{16}{\gibi\byte} of RAM and a
maximum of \SI{7.8}{\gibi\byte} of tmpfs space. The compute instance was configured to use Red Hat Enterprise
Linux 8.3 with kernel version 4.18.0. This instance was hosted in availability zone \texttt{us-west-2a}.
The experiments all used Python version 3.9.0 with version 0.5.2 of \sfs and Nibabel version 3.2.1.
The scientific applications required significantly more memory than available on t2.xlarge instances. As a result, they were executed on a c5.9xlarge instance and an r5.4xlarge instance configured identically
to the t2.xlarge instance, but located in the \texttt{us-west-2d} availability zone. c5.9xlarge instances consist of 64 vCPUs and have \SI{72}{\gibi\byte} of available memory, whereas the r5.4xlarge instance consists of 16 vCPUs and \SI{128}{\gibi\byte} of available memory.
The data used for all experiments originated from the HYDI tractography dataset stored on the Amazon S3 object store.
This dataset is broken down into 464 files ranging from \SI{700}{\mebi\byte} to \SI{1.7}{\gibi\byte} each. This dataset
was also located in the \texttt{us-west-2} region.
To get an idea of the cost of data transfers, we measured the time it takes to read data sequentially from S3 to the t2.xlarge instance. We varied the file size from \SI{1}{\kibi\byte} to \SI{4}{\gibi\byte} and measured the time it took to read the files from
a \texttt{us-west-2} S3 bucket and from tmpfs. This method was favoured over more standard benchmarking libraries to ensure the
two filesystems were benchmarked in the same way. Results were then compared to benchmarking results produced by
the \href{https://github.com/dvassallo/s3-benchmark}{s3-benchmark} library to ensure validity of our benchmarking
method. The results can be seen in Table~\ref{table:benchmarks}.
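A minimal sketch of this timing method is shown below; the paths are illustrative, and latency can be estimated analogously from the smallest file sizes.
\begin{verbatim}
import time
from s3fs import S3FileSystem

def read_throughput(open_fn, path):
    start = time.perf_counter()
    with open_fn(path, "rb") as f:
        n = len(f.read())
    return n / (time.perf_counter() - start)  # bytes/s

s3 = S3FileSystem()
print(read_throughput(s3.open, "bucket/bench/4g.bin"))
print(read_throughput(open, "/dev/shm/bench/4g.bin"))
\end{verbatim}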
%%tristan do i keep this if our model is just a theoretical eval? maybe it should be placed where the model
%% is instead?
\begin{table}
\caption{Measured latency and bandwidth obtained from reading files ranging from \SI{1}{\kibi\byte} to \SI{4}{\gibi\byte} on a t2.xlarge instance reading from an S3 bucket in the same region.}
\centering
\begin{tabular}{| c | c | c| }
\cline{2-3}
\multicolumn{1}{c|}{}& S3 & memory \\
\hline
bandwidth (MB/s) & 91 & 2221 \\
latency (s) & 0.1 & $\num{1.6e-6}$ \\
\hline
\end{tabular}
\vspace{-1.5em}
\label{table:benchmarks}
\end{table}
\section{Results}
\subsection{Varying the file size}
As we increase the file size, we observe that the disparity between \sfs and
Rolling Prefetch increases (Figure~\ref{fig:filesize}), with Rolling Prefetch significantly outpacing ($\sim$1.7$\times$ faster)
\sfs at \SI{31.2}{\gibi\byte} (25 files). This is expected as per our theoretical evaluation. With a large number of blocks, the pipeline duration is determined by the maximum of the time it takes to
compute the data, $T_{\mathrm{comp}}$, and the time it takes to transfer the data, $T_{\mathrm{cloud}}$. On the other hand, the \sfs runtime is the sum of the data transfer time and the compute time. In this case, the compute time is large enough that using Rolling Prefetch significantly reduces runtime with large amounts of data.
As the size of the data decreases, the benefit of using Rolling Prefetch also decreases.
% This too is expected, as there is proportionally less compute occurring.
In this particular use case, S3 data transfer time is much larger
than the compute time, thus the only time we can save is $T_{\mathrm{comp}}$. $T_{\mathrm{comp}}$, like $T_{\mathrm{cloud}}$, is
proportional to the number of blocks used, and thus the amount of time saved increases with file size. In the
worst case, Rolling Prefetch performance equals that of \sfs.
\subsection{Varying the block size}%% some parts may be better suited for discussion
At a very large number of blocks, the performance of both \sfs and Rolling Prefetch is degraded due to
increased impacts of both S3 storage latency and, in the case of Rolling Prefetch, local storage latency (Figure~\ref{fig:blocksize}). \sfs starts to surpass the speed of Rolling Prefetch, although not substantially (approx.\ 1.03$\times$) at 748 blocks. Furthermore, while we read from S3
in blocks of a configurable size, the block size on local storage is dictated by the filesystem and kernel readahead
size. In our specific case, the data would be read from tmpfs, and thus we do not see a significant added cost
to Rolling Prefetch. It could be inferred that the overhead of Rolling Prefetch may be more significant with a large
number of blocks due to Nibabel's access patterns, which consist of many small reads.
Conversely, when decreasing the number of blocks we see an increase in performance in both \sfs and Rolling Prefetch. This
is because both benefit from decreased latency to access data on S3, by making fewer read calls to it. Rolling Prefetch is found to be faster than \sfs, with a peak speedup of 1.24$\times$ at 187 blocks (i.e.,
\SI{32}{\mebi\byte} blocks). The speedup obtained from Rolling Prefetch did not vary significantly between 24 and
187 blocks, averaging approximately 1.2$\times$. Rolling Prefetch outpaces \sfs at fewer than
748 blocks as the cost of local latency has diminished sufficiently with the larger blocks. As the blocksize
increases, Rolling Prefetch is able to begin masking S3 latency and bandwidth within its compute.
% The speedup being
% proportional to how many more blocks can be read within the computation.
\sfs and Rolling Prefetch performance converges again when the blocks reach a certain size, placing more weight on disk bandwidth
than on latency. \sfs exceeds the performance of Rolling Prefetch at a single block, where no actual prefetching can take
place and we pay the cost of implementation overheads, such as writing the data to local storage
and reading it from local storage rather than directly from memory.
Generally speaking, variations in blocksize did not lead to major speedups between Rolling Prefetch and \sfs. Since the
total file size and compute time were fixed, there was a fixed upper bound on how much data transfer could
be masked by compute time. By minimizing the amount of latency through a reduction in the number of blocks, the
overall application time was shortened, in both \sfs and Rolling Prefetch, and thus a larger percentage of the total
runtime could be masked by compute.
\subsection{Parallelized workloads}
% \tristan{This section refers to file numbers while figure is in GiB}
When we parallelize the reads across up to four concurrent processes (Figure~\ref{fig:parallel}), we notice that the trends remain consistent
despite increased contention on local filesystems. This is likely due to the fact that \texttt{tmpfs} speeds
would remain sufficiently high even with the increased contention caused by having at minimum 4 and
potentially even up to 8 concurrent threads accessing the filesystem at the same time. We speculate that the same
pipeline running with data read from a single disk would have had worse performance
due to its lack of scalability.
The maximum speedup achieved, on average, during this experiment was 1.86$\times$ at 24.2~GiB. The average speedup
was around 1.52$\times$ overall, with the minimum average speedup being 1.37$\times$ at 80.6~GiB. Due to the high
variability in results within both Rolling Prefetch and \sfs, we expect that these speedup differences may be a
result of bandwidth and latency variability across runs. We do not expect that conditions are more favourable
for reading the data with four threads processing 24.2~GiB; since they are processing the
same amount of data concurrently, the overall speedup should be constant.
\subsection{Neuroimaging use-cases}
The results obtained from our neuroimaging use cases indicate that, in all
cases, the use of Rolling Prefetch can speed up analyses (Figure~\ref{fig:speedup}).
The extent of this speedup is, however, dependent on a number of
factors, such as instance type, application and number of shards.
The greatest observed speedup with Rolling Prefetch was found when executing the bundle recognition application
with a greater number of shards (1.64$\times$ faster). This result echoes what was observed in Figure~\ref{fig:filesize}, even with a more compute-intensive application such as
bundle recognition. We do not, however, observe a speedup with a single shard, as the size of the shard was so small that it did not incur many reads from S3.
Due to the compute-intensive nature of the bundle recognition pipeline, it comes as no
surprise that the overall speedup in the unsharded case is minimal (1.14$\times$). The data-to-compute ratio was found to be approximately 1/7, with the compute taking, on average, around \SI{9000}{\second}. Thus, even if Rolling Prefetch significantly speeds up reads, the data transfer time is
small relative to the overall compute time of the application. With a highly data-intensive
workflow, such as the histogram computation, we observe a more substantial speedup of 1.5$\times$. Although our model dictates that the upper bound
of speedup that can be obtained with Rolling Prefetch is 2$\times$, the observed speedups
obtained by these two applications never reach that boundary. A possible explanation is that the data-to-compute ratio of the histogram computation was too large to mask many of
the reads within the compute, whereas the data-to-compute ratio was too small in the segmentation application, such that any speedup obtained
through the Rolling Prefetch algorithm was offset by the computation time. The ideal
workload for Rolling Prefetch would lie somewhere between the data and computational
requirements of the histogram and segmentation applications.
Although the execution parameters differed between the two instance types, making them not directly comparable, we
observed a greater speedup on the c5.9xlarge
(1.6$\times$) than on the r5.4xlarge (1.1$\times$) instance. We suspect that the speedup obtained
on the c5.9xlarge instance was relatively high as a result of the increased parallelism
afforded by the larger number of CPUs, resulting in a more data-intensive application. While the r5.4xlarge instance did process a larger number of files,
suggesting that its speedup should be greater, the actual file sizes varied, and with
them the streamline features, potentially resulting in a more compute-intensive
execution, further exacerbated by the smaller number of available vCPUs.
% old paragraph on segmentation
% Results on the segmentation pipeline demonstrate that prefetching provides a substantial speedup over \sfs. The
% average speedup obtained, as can be seen in Figure~\ref{fig:segmentation}, was approximately 1.5~x. This performance gain is quite substantial despite data loading
% and processing being distinct stages within the pipeline. It is expected that we can obtain an even
% greater performance boost with prefetching if data loading and processing occur together, obtaining a speedup
% closer to the limit of 2~X. Furthermore, due to the memory requirements of the pipeline, enabling large datasets
% to be loaded lazily with prefetching and Nibabel may reduce the memory demands of the application, enabling more
% cost-effective instances to be selected.
% Likewise, as per the results obtained in Experiment~\label{exp:filesize}, we expect the performance gain to
% increase with files size. We also expect this pipeline to adapt well to parallelism (as per Experiment~\label{exp:parallel}, up to the point until strain on the local storage due to filesystem contention
% outweighs that of contention on S3.
\begin{figure}
\begin{center}
\vspace{-0.6em}
\includegraphics[height=160pt]{figures/filesize.pdf}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-30pt}
\caption{Runtime performance of reading subsets of the HYDI dataset stored on Amazon S3 into Nibabel using \sfs and Rolling Prefetch on an Amazon EC2 t2.xlarge instance. Blocksize was set to \SI{64}{\mebi\byte} on both \sfs and
Rolling Prefetch. Prefetch cache consisted of \SI{2}{\gibi\byte} tmpfs storage.}
\vspace{-5.53em}
\label{fig:filesize}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{figures/parallel.pdf}
\setlength{\abovecaptionskip}{-10pt}
\setlength{\belowcaptionskip}{-10pt}
\caption{Runtime performance of reading subsets of the HYDI dataset stored on Amazon S3 into Nibabel in parallel using 4
\sfs and Rolling Prefetch processes. Blocksize was set to \SI{64}{\mebi\byte} on both \sfs and Rolling Prefetch.
Prefetch cache consisted of \SI{1}{\gibi\byte} tmpfs storage.}
\vspace{-5.5em}
\label{fig:parallel}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{figures/blocksize.pdf}
\setlength{\abovecaptionskip}{-10pt}
\setlength{\belowcaptionskip}{-20pt}
\caption{Runtime performance of reading a \SI{6}{\gibi\byte} subset of the HYDI dataset stored on Amazon S3 into Nibabel using
\sfs and Rolling Prefetch with various block size configurations on an Amazon EC2 t2.xlarge instance. Prefetch cache consisted of \SI{2}{\gibi\byte} tmpfs storage.}
\vspace{-1.5em}
\label{fig:blocksize}
\end{center}
\end{figure}
% \begin{figure}
% \begin{center}
% \includegraphics[width=\columnwidth]{figures/segmentation.pdf}
% \caption{Runtime performance of the Segmentation pipeline operating on a \SI{1}{\gibi\byte} S3 HYDI tractography file using \sfs and prefetch on an Amazon C5.9xlarge instance. Blocksize was set to be \SI{64}{\mebi\byte} on
% both \sfs and prefetch. Prefetch was allotted \SI{2}{\gibi\byte} of storage on tmpfs.}
% \label{fig:segmentation}
% \end{center}
% \end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{figures/speedup.pdf}
\setlength{\abovecaptionskip}{-10pt}
\setlength{\belowcaptionskip}{-8pt}
\caption{Rolling prefetch speedup of the neuroimaging use-cases (histogram and bundle recognition) in various conditions. Experiments varying instance type and number of files were only performed with the bundle recognition pipeline.}
\vspace{-1.5em}
\label{fig:speedup}
\end{center}
\end{figure}
\section{Discussion}
Our theoretical analysis and experimental results demonstrate that there is a substantial processing time gain to be
obtained from using Rolling Prefetch, particularly in the case of mixed workloads, where there is a significant
cost associated with time spent on compute and data transfers. This works well with typical use cases in neuroimaging,
where tasks vary widely, ranging from very short ones to long ones requiring hours to complete, and where datasets
are large enough to incur transfers of similar magnitudes. Moreover, to save time on data transfers, researchers
may opt to transfer their data to the compute instance and perform the computation exclusively with
data stored locally. While this effectively gives optimal performance during processing, researchers are left to manage
the data themselves. Since local storage on compute instances can become quite costly, researchers must decide between
processing only a subset of the data, paying hefty fees for storing large amounts of data, or
incorporating data management logic directly into their workflows.
There are also natural limitations to
Rolling Prefetch. For instance, in the case of parallel workloads, there are situations in which S3 would be
preferable to the available local storage. This is a consequence of the fact that S3 is designed to be scalable
and is capable of processing many requests simultaneously, whereas, unless configured to do so, local storage
will not scale adequately under increased contention.
% Additionally, when blocks become significantly large with
% an equally large number of blocks, it is possible that rolling prefetch will be slower than \sfs.
% -- commented out because cannot remember what i meant
\subsection{Benefits of Rolling Prefetch}
Rolling Prefetch is an added layer on top of \sfs that replaces its built-in cache strategies to intercept
application read calls and ensure that data is preloaded. The implementation ensures that filesystem space requirements are not exceeded during prefetching through limits set by the user.
With the built-in eviction mechanism, users are not required to do any form of data management should they be processing datasets larger than available storage. Furthermore, the library allows multiple storage devices to be configured as cache for prefetching. Currently,
files are written to specific storage devices based on available space and storage priority.
With just a simple data loading workload, we have observed up to a 1.8$\times$ speedup and a maximum overhead of 1.03$\times$. These observed speedups were
obtained when we set the cache location to a \texttt{tmpfs} device and may naturally decrease with a slower device such as an SSD. In our particular case, the speedups meant saving 20~min of processing time
on loading nearly \SI{100}{\gibi\byte} of data with a maximum runtime of approximately 71~min. Moreover, this
was achieved on a small instance with only \SI{1}{\gibi\byte} of prefetch storage allotted to each parallel
process, indicating that we can achieve good speedups even when resources are constrained.
While our results pertain to the applicability of Rolling Prefetch to neuroimaging, similar speedups can be expected from other kinds of sequential processing, such as standard text file processing like word count.
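To make the mechanism concrete, the following is a minimal sketch of the general rolling-prefetch pattern, in which a background thread stages the next block on local storage while the application consumes the current one. The \texttt{fetch\_block} and \texttt{process} helpers are hypothetical placeholders; this is an illustration of the technique, not the actual Rolling Prefetch API.
\begin{verbatim}
# Minimal sketch of the rolling-prefetch pattern (hypothetical helpers,
# not the actual Rolling Prefetch API).
import queue
import threading

def rolling_prefetch(fetch_block, process, n_blocks, cache_blocks=4):
    staged = queue.Queue(maxsize=cache_blocks)   # bounded local cache

    def producer():
        for i in range(n_blocks):
            staged.put(fetch_block(i))   # e.g. copy block i from S3 to tmpfs
        staged.put(None)                 # sentinel: nothing left to fetch

    threading.Thread(target=producer, daemon=True).start()
    while True:
        block = staged.get()
        if block is None:
            break
        process(block)                   # compute overlaps with the next fetch
\end{verbatim}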
\subsection{Parallelism-related overheads}
While our experiments demonstrate that Rolling Prefetch continues to provide a performance improvement to parallel
applications running on S3, we do expect performance to decrease if we continue to scale horizontally on
a single instance, or use a slower device as cache. Our implementation consists of two threads actively reading and writing to local storage. Each additional Rolling
Prefetch instance therefore adds another pair of threads writing to local storage. Although it is standard
to have multiple DIMMs on a single machine, the same may
not necessarily be true of other storage devices. That being said, attached cloud volumes may
also be sufficiently scalable such that processing time remains unaffected by an increase in processes.
To reduce the load of prefetching data to local storage, we can add a third component to Rolling Prefetch that periodically tracks cache bandwidth. Using such a parameter, the algorithm could take into consideration filesystem
load in addition to user-specified priority.
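A minimal sketch of such a cache-bandwidth probe is shown below; it simply times a short write to the cache directory and is purely illustrative (the function name and probe size are arbitrary, and it is not part of the current implementation).
\begin{verbatim}
# Illustrative cache-bandwidth probe (not part of Rolling Prefetch).
import os
import time

def cache_write_bandwidth(cache_dir, probe_bytes=8 * 2**20):
    path = os.path.join(cache_dir, ".bw_probe")
    payload = b"\0" * probe_bytes
    start = time.monotonic()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())      # ensure the bytes reach the device
    elapsed = time.monotonic() - start
    os.remove(path)
    return probe_bytes / elapsed  # bytes per second
\end{verbatim}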
\subsection{Task granularity}
\sfs was designed to be used within distributed processing frameworks. With such frameworks, we must take into consideration task granularity and how that would affect
scheduling. Rolling Prefetch becomes beneficial with larger files. Assuming a workload where the task is simply to load the streamlines, we would need a few
gigabytes of data to start noticing a speedup. In cases where resources are ample enough to run all tasks in
parallel and the amount of data processed by a task is minimal, Rolling Prefetch and \sfs would probably behave
similarly, with \sfs potentially performing a bit faster.
The risk with passing large amounts of data to the library within a task is that it is less robust to failures, as any failed task would have to resume from the beginning. This could
take a significant amount of time, depending on how much data had been processed by the task prior to failure.
% Making the tasks smaller would reduce the costs associated with task restart, but then the benefits of
% Rolling Prefetch would be limited.
It is understandable that implementations such as Netco are adaptations to
persistent filesystems, since prefetching can then work without limiting scheduler logic. For our
implementation to be efficient within parallel processing frameworks, we would need to decouple it from
the task itself and allow it to communicate with the task scheduler to determine which files to prefetch.
\section{Conclusion}
% Prefetching is a widely used method for minimizing the costs of computational workloads that involve multiple data reads.
% To determine the potential added value of prefetching to large-scale neuroimaging workloads, we quantified its
% speedup through analysis of the runtime performance of standard neuroscience data loading, and by embedding this idea in existing computational pipelines. To accomplish this, we developed Rolling Prefetch, a Python library that can
% prefetch sequentially-read binary files located on S3 as an extension of the \sfs library. We compared the performance of Rolling Prefetch with \sfs.
% While there have been
% implementations for prefetching on the cloud, this technique has not been used for the processing of neuroscience
% workloads on the cloud. However, this idea is becoming more relevant for neuroscience data analysis workloads, as contemporary neuroscience datasets continue to increase in size and analysis pipelines are becoming more data-intensive.
% Our theoretical analysis of prefetching concluded that Rolling Prefetch can provide up to a maximum of 2x speedup,
% and is most effective when the time needed to compute is comparable to the time required to transfer the data.
% Furthermore, it was determined that while \sfs performed best given the least number of blocks, Rolling Prefetch
% would perform best with an intermediary number of blocks, where the overheads of prefetching would become more
% significant at 1 block, when prefetching would at minimum do the same amount of work as \sfs, and at many blocks,
% where the latency of writing locally would become apparent.
% Overall the analysis states that it is better to
% slightly overestimate the number of blocks as the penalty of reading more blocks is proportional to the latency
% of the cloud ($l_{c}$).
% Our experimental evaluation supports the conclusions drawn from the theoretical evaluation. A peak speedup of
% 1.8x was attained when processing a total of 20 tractography files across four parallel threads. Good speedups
% were also obtained when increasing the number of files processed per task, obtaining a 1.7x speedup at 25 files.
% As determined by our theoretical evaluation, having too few blocks and too many blocks penalized the performance
% of Rolling Prefetch due to overheads incurred by the prefetching process.
Overall, we conclude that Rolling Prefetch would be a valuable addition to large-scale neuroimaging pipelines executing
on the cloud, particularly in cases where data transfer and compute times are similar.
% However, our implementation still has limitations that may not make it favorable to distributed
% processing.
Future work should focus on the following improvements: the loading part of Rolling Prefetch should be performed outside of the tasks themselves, to ensure that it does not interfere with task scheduling and to avoid tasks that take too long to restart if lost.
The library should also be able to communicate with schedulers to help determine where tasks
should be scheduled given the location of the data.
\section{Acknowledgments}
This research was supported by NIH grant 1RF1MH121868-01 and by cloud compute credits from Microsoft Azure. This research was also supported by the Canada Research Chairs program, and by the Natural Sciences and Engineering Research Council of Canada.
%%
%% The next two lines define the bibliography style to be used, and
%% the bibliography file.
\bibliographystyle{plain}
\bibliography{biblio}
\end{document}
\endinput
%%
%% End of file `sample-authordraft.tex'.
\chapter{Batch tutorial}
\label{Chap:batch:tutorial}
Details about the algorithms used for data processing are given in the other
sections of this manual. This section explains how a sequence of processing
steps can be run at once without \matlab\ programming. SPM12 includes
\texttt{matlabbatch}\footnote{\url{https://sourceforge.net/projects/matlabbatch}}
which has been derived from the SPM5 batch system, but is also available as a
separate package.
In \texttt{matlabbatch}, each data processing step is called a ``module''. There
are e.g.\ modules for spatial processing of MRI data (realignment,
normalisation, smoothing), statistics (fMRI or factorial design specification,
model estimation, contrast specification). A batch describes which modules
should be run on what kind of data and how these modules depend on each other.
Compared to running each processing step interactively, batches have a number
of advantages:
\begin{description}
\item[Documentation] Each batch can be saved as a \matlab\ script. All
parameters (including default settings) are included in this script. Thus, a
saved batch contains a full description of the sequence of processing steps
and the parameter settings used.
\item[Reproducibility] Batches can be saved, even if not all parameters have
been set. For a multi-subject study, this makes it possible to create template batches.
These templates contain all settings which do not vary across subjects. For
each subject, they can be loaded and only subject-specific parts need to be
completed.
\item[Unattended execution] Instead of waiting for a processing step to
complete before entering the results in the next one, all processing steps
can be run in the specified order without any user interaction.
\item[Multiple batches] Multiple batches can be loaded and executed together.
\item[Error reporting] If a batch fails to complete, a standardised report
will be given in the \matlab\ command window.
When running a batch from the GUI, this can be saved to an error \verb|.mat| file.
%When running a batch from the command line, such a file will be created always.
%It will be located in the current MATLAB directory and named
%\verb|spm_error_<DATE-TIME>.mat|.
\end{description}
\section{Single subject}
In this tutorial we will develop a batch for spatial processing and fMRI
statistics of a single subject of the ``Face'' example dataset (see
chapter~\ref{Chap:data:faces}). To follow this tutorial, it is not necessary to
download the example dataset, except for the last step (entering subject
dependent data).
To create a batch which can be re-used for multiple subjects in this study, it
is necessary to collect/define
\begin{itemize}
\item study specific input data (e.g.\ MRI measurement parameters, time
constants of the functional experiment, number of sessions),
\item necessary processing steps,
\item data flow between processing steps.
\end{itemize}
Subject specific input data (original functional and structural MRI data,
subject specific experiment parameters) should be entered after the batch
template has been saved.
\subsection{Study specific input data}
This dataset consists of fMRI data acquired in a single session and a
structural MRI. See section~\ref{sec:batch_interface_advanced} to learn how to
deal efficiently with multi-session data. MRI parameters and experiment
details are described in chapter~\ref{Chap:data:faces}.
\subsection{Necessary processing steps}
\subsubsection{Helper modules}
Some SPM modules produce graphics output which is captured in a PostScript
file in the current working directory. Also, a new directory needs to be
created for statistics. The ``BasicIO'' menu provides a collection of modules
which are useful to organise a batch. We will need the following modules:
\begin{itemize}
\item Named directory selector (\texttt{BasicIO > File/Dir Operations > Dir Operations > Named Directory Selector})
\item Change directory (\texttt{BasicIO > File/Dir Operations > Dir Operations > Change Directory})
\item Make directory (\texttt{BasicIO > File/Dir Operations > Dir Operations > Make Directory})
\end{itemize}
\subsubsection{SPM processing}
For a classical SPM analysis, the following processing steps are necessary:
\begin{itemize}
\item Realignment (\texttt{SPM > Spatial > Realign > Realign: Estimate \& Reslice})
\item Slice timing correction (\texttt{SPM > Temporal > Slice Timing})
\item Coregistration (\texttt{SPM > Spatial > Coregister > Coregister: Estimate})
\item Segmentation (\texttt{SPM > Tools > Old Segment})
\item Normalisation (\texttt{SPM > Tools > Old Normalise > Old Normalise: Write})
\item Smoothing (\texttt{SPM > Spatial > Smooth})
\item fMRI design (\texttt{SPM > SPM > Stats > fMRI model specification})
\item Model estimation (\texttt{SPM > Stats > Model estimation})
\item Contrasts (\texttt{SPM > Stats > Contrast Manager})
\item Results report (\texttt{SPM > Stats > Results Report})
\end{itemize}
Note that this exemplar analysis pipeline is ancient and the \texttt{SPM > Tools > Old Segment} and \texttt{SPM > Tools > Old Normalise > Old Normalise: Write} modules could be replaced by a single \texttt{SPM > Spatial > Normalise: Estimate \& Write} one.
\subsection{Add modules to the batch}
The helper modules and the SPM processing modules can be assembled using the
GUI. Click the ``Batch'' button in the SPM Menu window. First, add the helper
modules, followed by the SPM modules in the order listed above. Do not configure
any details until you have selected all modules.
\begin{figure}
\centering
\includegraphics[width=.49\linewidth]{batch/batch_spm}
\hfill
\includegraphics[width=.49\linewidth]{batch/batch_basicio}
\caption{SPM and BasicIO application menus}
\label{fig:basicio_spm}
\end{figure}
\subsection{Configure subject-independent data}
Now, go through the batch and configure all settings that are
subject-independent (e.g.\ the name of the analysis directory, slice timing
parameters) as described in chapter~\ref{Chap:data:faces}. Do not enter any data that
is specific for a certain subject. The values that need to be entered are not
repeated here, please refer to the corresponding sections in
chapter~\ref{Chap:data:faces}.
The file \verb|man/batch/face_single_subject_template_nodeps.m| contains the
batch after all modules have been added and subject-independent data has been
entered.
\subsubsection*{Named Directory Selector}
\begin{description}
\item[Input Name] Give this selection a name (e.g. ``Subject directory'') -
this name will be shown in the dependency list of this batch.
\item[Directories] Add a new directory selector, but do not enter a
directory itself.
\end{description}
\subsubsection*{Change Directory}
Nothing to enter now.
\subsubsection*{Make Directory}
\begin{description}
\item[New Directory Name] ``categorical'' - the name of the analysis
directory. This directory will be created at batch run-time in the subject
directory.
\end{description}
\subsubsection*{Realign: Estimate \& Reslice}
\begin{description}
\item[Data] Add a new ``Session'' item. Do not enter any files for this
session now.
\end{description}
\subsubsection*{Slice Timing}
\begin{description}
\item[Data] Add a new ``Session'' item. Do not enter any files for this
session now.
\item[Timing options] Enter data for ``Number of slices'', ``TR'', ``TA'',
``Slice order'', ``Reference slice''.
\end{description}
\subsubsection*{Coreg: Estimate}
Nothing to enter now.
\subsubsection*{Segment}
Nothing to enter now.
\subsubsection*{Normalise: Write}
\begin{description}
\item[Data] Add a new ``Subject''. Do not enter any files now.
\item[Writing Options] Adjust bounding box, voxel sizes, interpolation
\end{description}
\subsubsection*{Smooth}
\begin{description}
\item[FWHM] Enter FWHM
\end{description}
\subsubsection*{fMRI model specification}
Enter all data which is constant across subjects.
\begin{description}
\item[Timing parameters] Enter values for ``Units for design'', ``Interscan
interval'', ``Microtime resolution'', ``Microtime onset''
\item[Data \& Design] Add a new ``Session'' item. Do not enter scans,
conditions or regressors yet. They will be added as dependencies or
subject specific inputs. If you want to make sure to remember this, you
can highlight ``Multiple conditions'' and select ``Clear Value'' from the
``Edit'' menu. Do the same for ``Multiple regressors''. This will mark
both items with an \verb|<-X|, indicating that something must be entered
there.
\item[Factorial design] Enter the specification for both factors.
\item[Basis functions] Select the basis function and options you want to use.
\end{description}
\subsubsection*{Model estimation}
Nothing to be entered yet for classical estimation.
\subsubsection*{Contrast manager}
If you have selected the ``Factorial design'' option as described above, SPM
will automatically create some contrasts for you. Here, you can create
additional T- or F-contrasts. As an example, we will add an ``Effects of
interest'' F-contrast.
\begin{description}
\item[Contrast session] Add a new ``F-contrast'' item.
\item[Name] Enter a name for this contrast, e.g. ``Effects of interest''.
\item[Contrast vectors] Add a new ``Contrast vector'' item. F-contrasts
can have multiple rows. You can either enter a contrast matrix in an ``F
contrast vector'' entry, or enter them row by row. To test for the
effects of interest (1 basis function and 2 derivatives for each of the
four conditions) enter \verb|eye(12)| as F contrast vector.
\item[Replicate over sessions] This design does not have multiple
sessions, so it is safe to say ``No'' here.
\end{description}
\subsubsection*{Results report}
Reviewing individual results for a large number of subjects can be very time
consuming. Results report will print results from selected contrasts to a
PostScript file.
\begin{description}
\item[Contrast(s)] Enter \verb|Inf| to print a report for each of the
defined contrasts.
\end{description}
\subsection{Data flow}
In chapter~\ref{Chap:data:faces}, each processing step was performed on its own. In
most cases, output data was simply passed on from one module to the next. This
scheme is illustrated in figure~\ref{fig:flow}. Only the coloured items at the
top of the flow chart are subject specific and need to be entered in the final
batch. All arrow connections are subject-independent and can be specified in
the batch template.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{batch/flow}
\caption{Flow chart for batch}
\label{fig:flow}
\end{figure}
\subsubsection{Add dependencies}
Based on the data flow in figure~\ref{fig:flow}, modules in the batch can now
be connected. The batch containing all dependencies can be found in
\verb|man/batch/face_single_subject_template.m|.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{batch/batch_dependencies}
\caption{Dependency selection}
\label{fig:batch_dependency}
\end{figure}
Again, start editing at the top of the batch:
\subsubsection*{Named Directory Selector}
Nothing to enter now.
\subsubsection*{Change Directory}
\begin{description}
\item[Directory] Press ``Dependency'' and select ``Subject
directory(1)''. At run time, SPM will change to this directory before batch
processing continues.
\end{description}
\subsubsection*{Make Directory}
\begin{description}
\item[Parent Directory] Press ``Dependency'' and select ``Subject
directory(1)''. The ``categorical'' directory will be created in this
directory.
\end{description}
\subsubsection*{Realign: Estimate \& Reslice}
Nothing to enter now.
\subsubsection*{Slice Timing}
\begin{description}
\item[Session] Press ``Dependency'' and select ``Resliced Images (Sess~1)''.
\end{description}
\subsubsection*{Coreg: Estimate}
\begin{description}
\item[Reference Image] Press ``Dependency'' and select ``Mean Image''.
\end{description}
\subsubsection*{Segment}
\begin{description}
\item[Data] Press ``Dependency'' and select ``Coregistered Images''. At run
time, this will resolve to the coregistered anatomical image.
\end{description}
\subsubsection*{Normalise: Write}
\begin{description}
\item[Parameter File] Press ``Dependency'' and select ``Deformation Field
Subj$\rightarrow$MNI (Subj~1)''.
\item[Images to Write] Press ``Dependency'' and select ``Slice Timing
Corr. Images (Sess~1)''.
\end{description}
\subsubsection*{Smooth}
\begin{description}
\item[Images to Smooth] Press ``Dependency'' and select ``Normalised Images
(Subj~1)''
\end{description}
\subsubsection*{fMRI model specification}
\begin{description}
\item[Directory] Press ``Dependency'' and select ``Make Directory
'categorical'\,''
\item[Scans] Press ``Dependency'' and select ``Smoothed Images''. Note: this
works because there is only one session in our experiment. In a
a multisession experiment, images from each session may be normalised and
smoothed using the same parameters, but the smoothed images need to be
split into sessions again. See section~\ref{sec:batch_interface_advanced}
for how this can be done.
\item[Multiple regressors] Press ``Dependency'' and select ``Realignment
Param File (Sess~1)''.
\end{description}
\subsubsection*{Model estimation}
\begin{description}
\item[Select SPM.mat] Press ``Dependency'' and select ``SPM.mat File (fMRI
Design\&Data)''.
\end{description}
\subsubsection*{Contrast manager}
\begin{description}
\item[Select SPM.mat] Press ``Dependency'' and select ``SPM.mat File
(Estimation)''.
\end{description}
\subsubsection*{Results report}
\begin{description}
\item[Select SPM.mat] Press ``Dependency'' and select ``SPM.mat File
(Contrasts)''.
\end{description}
\subsection{Entering subject-specific data}
Now, only 4 modules should have open inputs left (marked with \verb|<-X|). These
inputs correspond to data which vary over the subjects in your study:
\begin{description}
\item[Named Directory Selector] Subject directory
\item[Realign: Estimate \& Reslice] Raw EPI data for the fMRI session
\item[Coreg: Estimate] Anatomical image to be coregistered to mean EPI
\item[fMRI model specification] Names, conditions and onsets of your
experimental conditions, specified in a multiple conditions .mat file.
\end{description}
Using the GUI, you can now perform these steps for each subject:
\begin{enumerate}
\item load the template batch
\item enter subject-specific data
\item save batch under a subject specific name.
\end{enumerate}
After that, all batches for all subjects can be loaded and run at once.
This process can be automated using some basic MATLAB scripting. See
section~\ref{sec:batch_interface_cmd_cfg_serial} for details.
\begin{figure}[htbp]
\centering
\includegraphics[height=.3\textheight]{batch/batch_single_subject_template_nodeps}
\includegraphics[height=.3\textheight]{batch/batch_single_subject_template}
\includegraphics[height=.3\textheight]{batch/batch_single_subject}
\caption{All stages of batch entry}
\label{fig:batch_stages}
\end{figure}
\section{Advanced features}
\label{sec:batch_interface_advanced}
\subsection{Multiple sessions}
If an fMRI experiment has multiple sessions, some processing steps need to
take this into account (slice timing correction, realignment, fMRI design),
while others can work on all sessions at once (normalisation, smoothing).
Two modules in BasicIO help to solve this problem:
\begin{description}
\item[Named File Selector] Files can be entered here session by session. Note
that this file selector selects all files (not restricted to images) by
default. To select only images, set the filter string to something like
\verb|.*nii$| or \verb|.*img$|.
\item[File Set Split] This module splits a list of files based on an index
vector. Named file selector provides such an index vector to split the
concatenation of all selected images into individual sessions again.
\end{description}
\subsection{Processing multiple subjects in GUI}
There are different ways to process multiple subjects in the batch
editor:
\begin{itemize}
\item Add the necessary processing steps when creating the job.
\item Create a per-subject template, save it and load it multiple
times (i.e. in the file selector, add the same file multiple times
to the list of selected files).
\item Use ``Run Batch Jobs'' from ``BasicIO''
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{batch/batch_multi_subject_template}
\caption{Using ``Run Batch Jobs''}
\label{fig:batch_multi_subject_template}
\end{figure}
In all cases, the data for all subjects has to be entered through the
GUI, and computation will be done for all subjects at once after all
data is entered. There is an example job
\verb|face_multi_subject_template.m| that demonstrates the usage of
``Run Batch Jobs'' to run the single subject template job described
above. Note that the order and type of inputs in the single subject
template are important. Also, consistency checks are limited. If
inconsistent data is entered, the job will fail to execute and return
an error message.
To run this job for multiple subjects, simply repeat the ``Runs'' item
as many times as necessary and fill in the required data.
\subsection{Command line interface}
\label{sec:batch_interface_cmd}
The command line interface is especially useful to run multiple jobs
at once without user interaction, e.g.\ to process multiple subjects
or to combine separate processing steps. There is a ``high-level''
interface using \verb|spm_jobman|, which combines ``low-level''
callbacks to \verb|cfg_util|.
\subsubsection{SPM startup in command line mode}
\label{sec:batch_interface_spm_startup}
During normal startup, SPM performs important initialisation
steps. Without initialisation, SPM and its batch system will not
function properly. Consequently, an initialisation sequence needs to
be run before any batch job can be submitted.
MATLAB has several command line options to start without its GUI
(\verb|-nodesktop|) or even without any graphics output to a screen
(\verb|-nodisplay|). See MATLAB documentation for details.
To run SPM in \verb|-nodisplay| mode, the file \verb|spm_defaults.m|
has to be modified. The line \verb|defaults.cmdline = 0;| must be
changed to \verb|defaults.cmdline = true;|. In command line mode, SPM
will not open any figure window except the ``Graphics'' window.
Within MATLAB, the following commands are sufficient to set up SPM
\begin{enumerate}
\item \verb|spm('defaults', MODALITY)| where \verb|MODALITY| has to be
replaced by the desired modality (e.g. \verb|'fmri'|)
\item \verb|spm_jobman('initcfg')|
\end{enumerate}
After executing these commands, any SPM functions and batch jobs
can be run in the same MATLAB session.
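For example, a minimal non-interactive initialisation for fMRI analyses looks as follows (run from a script or the MATLAB prompt):
\begin{verbatim}
% Minimal command line initialisation (fMRI modality)
spm('defaults', 'fmri');    % set modality-specific defaults
spm_jobman('initcfg');      % initialise the batch system
\end{verbatim}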
\subsubsection{Complete and run a pre-specified job}
\label{sec:batch_interface_cmd_cfg_serial}
\verb|spm_jobman('run', job[, input1, input2 ...])|
This interface takes a job and asks for the input to any open configuration
items one after another. If a list of appropriate inputs is supplied, these
will be filled in. After all inputs are filled, the job will be run. Note that
only items without a pre-set value will be filled (marked with \verb|<-X| in
the GUI). To force an item to be filled, use ``Edit:Clear Value''
in the GUI or set its value to \verb|'<UNDEFINED>'| in the harvested job.
The job argument is very flexible, it can e.g.\ be a job variable, the
name of a script creating a job variable, even a cell list of any
mixture of variables and scripts. All job snippets found will be
concatenated into a single job, the missing inputs will be filled and
the resulting job will be run.
The batch system can generate a script skeleton for any loaded
job. From the batch GUI, this feature is accessible via ``File:Save
Batch and Script''. This skeleton consists of a commented list of
necessary inputs, a \verb|for| loop to enter inputs for multiple runs
or subjects and the code to initialise and run the job. An example is
available in \verb|face_single_subject_script.m|:
\begin{verbatim}
% List of open inputs
% Named Directory Selector: Directory - cfg_files
% Realign: Estimate & Reslice: Session - cfg_files
% Coreg: Estimate: Source Image - cfg_files
% fMRI model specification: Multiple conditions - cfg_files
nrun = X; % enter the number of runs here
jobfile = {fullfile(spm('dir'),'man','batch','face_single_subject_template.m')};
jobs = repmat(jobfile, 1, nrun);
inputs = cell(4, nrun);
for crun = 1:nrun
% Named Directory Selector: Directory - cfg_files
inputs{1, crun} = MATLAB_CODE_TO_FILL_INPUT;
% Realign: Estimate & Reslice: Session - cfg_files
inputs{2, crun} = MATLAB_CODE_TO_FILL_INPUT;
% Coreg: Estimate: Source Image - cfg_files
inputs{3, crun} = MATLAB_CODE_TO_FILL_INPUT;
% fMRI model specification: Multiple conditions - cfg_files
inputs{4, crun} = MATLAB_CODE_TO_FILL_INPUT;
end
spm('defaults','fmri');
spm_jobman('run',jobs,inputs{:});
\end{verbatim}
The skeleton needs to be adapted to the actual data layout by adding
MATLAB code which specifies the number of runs and the input data in
the \verb|for| loop.
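As an illustration, the commented placeholder lines could be replaced by ordinary MATLAB code that builds the file lists. The directory layout and filename filters below are hypothetical and need to be adapted to the actual data:
\begin{verbatim}
% Hypothetical layout: /data/subject01, /data/subject02, ...
subjdir = fullfile('/data', sprintf('subject%02d', crun));
% Named Directory Selector: Directory - cfg_files
inputs{1, crun} = cellstr(subjdir);
% Realign: Estimate & Reslice: Session - cfg_files
inputs{2, crun} = cellstr(spm_select('FPList', ...
    fullfile(subjdir, 'fMRI'), '^f.*\.img$'));
% Coreg: Estimate: Source Image - cfg_files
inputs{3, crun} = cellstr(spm_select('FPList', ...
    fullfile(subjdir, 'sMRI'), '^s.*\.img$'));
% fMRI model specification: Multiple conditions - cfg_files
inputs{4, crun} = cellstr(fullfile(subjdir, 'conditions.mat'));
\end{verbatim}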
Another example script and batch is available for the multimodal
dataset, called \verb|multimodal_fmri_script.m| and
\verb|multimodal_fmri_template.m|.
\subsection{Modifying a saved job}
In some cases, instead of using the serial interface it may be more
appropriate to modify the fields of a saved or harvested job. By default, jobs
are saved as MATLAB \verb|.mat| files, but they can also be saved as
\verb|.m| files. These files contain a number of MATLAB commands,
which will create a variable \verb|matlabbatch|. The commands can be
modified to set different values, add or remove options.
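For instance, a fragment of a batch saved as a \verb|.m| file might look like the following (module index and values are only illustrative); after editing, the job can be run with \verb|spm_jobman|:
\begin{verbatim}
% Excerpt from a saved batch -- index and values are illustrative
matlabbatch{1}.spm.spatial.smooth.fwhm   = [8 8 8];  % smoothing kernel (mm)
matlabbatch{1}.spm.spatial.smooth.prefix = 's';      % output filename prefix
spm_jobman('run', matlabbatch);                      % run the modified job
\end{verbatim}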
\documentclass{article}
\newcommand\tab[1][0.5cm]{\hspace*{#1}}
\title{SOFTENG351 Notes 2017}
\author{Theodore Oswandi}
\usepackage[
lmargin=2.5cm,
rmargin=7cm,
tmargin=1cm,
bmargin=3cm,
]{geometry}
\usepackage{enumitem}
\setlist{noitemsep}
\usepackage[none]{hyphenat}
\begin{document} \maketitle{}
\section{Fundamentals of Database Systems}
\subsection{General Information}
\paragraph{Database}
large integrated collection of data.
\\ Contains [Entities, Relationships]
\paragraph{DBMS}(Database Management System):
\\ software package to store and manage databases
\paragraph{Database System}: DBMS with database
\paragraph{DBMS and uses}
\begin{itemize}
\item store large amounts of information
\item code for queries
\item protect from inconsistencies and crashes
\item security
\item concurrent access
\end{itemize}
\subsection{Why Databases}
\tab Need to shift from computation to storage of large amounts of information
\\ \\ \tab Accommodate for changes in:
\\ \tab \textbf{Variety:} types of data
\\ \tab \textbf{Velocity:} movement of data
\\ \tab \textbf{Veracity:} uncertainty of data
\\ \tab \textbf{Volume:} amount of data
\paragraph{Structures/Models}
Need to have a model to describe data, and a schema used to give an abstract description of the data model
\subsection{Levels of Abstraction}
\tab \textbf{Views:} describe how data seen
\\ \tab \textbf{Logical Schema:} how data structures organised (variable types)
\\ \tab \textbf{Physical Schema:} how files structured
\\ \tab \textbf{Data Definition Language:} How to define database schema
\\ \tab \textbf{Data Manipulation:} how to update values in database
\\ \tab \textbf{Query Language:} used to access data
\subsection{Data Independence}
\tab \textbf{Logical Data Independence}
\begin{itemize}
\item external handling separate from logical organisation
\item mappings change, not external schema
\item applications only see external schema
\end{itemize}
\tab \textbf{Physical Data Independence}
\begin{itemize}
\item changes to physical schema doesn't affect logical layer
\item abstract from DBMS storage organisation
\item can perform optimisation/tuning
\end{itemize}
\subsection{Concurrency Control}
\begin{itemize}
\item many users have to be able to access information at the same time and make updates without negatively affecting database
\item don't want to access the disk a lot, as it is slow and inefficient
\item let multiple users access and keep data consistent
\item let users feel like they're the only ones using system
\end{itemize}
\section{Relational Model of Data}
\subsection{General Information}
\begin{itemize}
\item is logical model of data
\item distinguish between data syntax and semantics
\item simple and powerful
\item sql based off this
\end{itemize}
\subsection{Simple approach}
\begin{itemize}
\item use tuples to store data
\item relations are sets of these tuples
\item tables to represent sets of data
\item properties (columns) are called attributes
\item attributes associated with domains (variable types)
\end{itemize}
\subsection{Relational Schemata}
Use of attributes creates relation schema such as:
\\ MOVIE(title: $string$, production\_year: $number$)
\\ \\
\textbf{Relation Schema} provide abstract description of tuples in relation
\\ \\
\textbf{Database Schema} is set $S$ of relational schemata. Is basically the set of all tables and their attributes
\subsection{Keys}
Are used to uniquely identify tuples over all data in a given table.
\\ They are used to restrict the number of possible database instances to something more realistic, and to identify objects efficiently
\\ \\
\textbf{Superkey} over relation schema is a subset of attributes that satisfies this uniqueness property
\\ \textbf{Key} is a minimal superkey, i.e.\ no proper subset of it is a superkey of R
\\ \textbf{Foreign Key}: is a key used to index values from other values. Used to make reference between relational schemata.
\\ \tab - ensures referential integrity
\\ \tab - no need to copy info from other tables
\\ \tab - need to ensure that [x,y] $\subseteq$ [x,y] and not [y,x] (Order matters)
\\ \\ \textbf{Example}
\\ MOVIE(title: string, production\_year: number, director\_id: number)
\\ with key [title, production\_year]
\\ \\ DIRECTOR(id: number, name: string)
\\ with key [id]
\\ \\ with foreign key: MOVIE[director\_id] $\subseteq$ DIRECTOR[id]
\subsection{Integrity Constraints}
\begin{itemize}
\item Db schema should be meaningful, and satisfy all constraints
\item should stay true to keys, and foreign keys
\item constraints should interact with each other correctly
\item should process queries and update efficiently
\item should do this and make as few comprimises as possible
\end{itemize}
\section{SQL as Data Definition and Manipulation Language}
\subsection{General Information}
\begin{itemize}
\item \textbf{S}tructured \textbf{E}nglish \textbf{QUE}ry \textbf{L}anguage
\item used as a standardised language in relational Db systems
\item is query, data definition, and data management language
\item names not case sensitive
\end{itemize}
\subsection{Keywords}
\paragraph{CREATE} used to create tables. [CREATE TABLE $<tablename> <attribute specs>$]
\paragraph{Attributes}: defined as [$<attribute\_name> <domain>$]
\paragraph{NOT NULL}: used to ensure that attribute is not null.
\paragraph{DROP TABLE}: removes relation schemata from Db system
\paragraph{ALTER TABLE}: change existing relation schemata
\\ \\ \textbf{CHARACTER} and \textbf{VARCHAR} are fixed or variable length strings. Variable length string are $VARCHAR(32)$
\paragraph{NATIONAL CHARACTER}: string characters from other languages
\\ \\ \textbf{INTEGER} and \textbf{SMALLINT} used for integers
\\ \\ \textbf{NUMERIC} and \textbf{DECIMAL} used for fixed point numbers
\\ \\ \textbf{FLOAT, REAL} and \textbf{DOUBLE PRECISION} used for floating point numbers
\\ \\ \textbf{DATE} used for dates (year, month, day)
\\ \\ \textbf{TIME} used for times (hour, minute, second)
\subsection{Null and Duplicate Tuples}
\paragraph{NULL} used to insert tuple where some of the information is not known. This is not good as it can create inconsistencies in information. SQL does this through use of a null marker
\paragraph{Duplicate Tuples} are when another tuple exist with same values. NULL can make this confusing.
\\ \tab - these duplicates are really expensive to remove
\\ \tab - relations do not contain these tuples
\subsection{SQL Semantics}
\paragraph{Table Schema} set of attributes
\paragraph{Table over R} set of partial tuples
\\ \\ These relations are idealised with no duplicates
\subsection{Constraints}
\begin{itemize}
\item can add uniqueness parameters like primary keys on table
\item NOT NULL makes sure that partial tuples don't exist
\item \textbf{UNIQUE} $<attribute\_list>$ ensures that values for attributes exist only once
\item \textbf{PRIMARY KEY} used on tables that are also UNIQUE, and ensures that NOT NULL applied to it too
\item \mbox{$FOREIGN KEY <attributelist> REFERENCES <tablename> <attributelist>$} used to create foreign key dependencies
\item $CONSTRAINT <name> <constraint>$ \\used to give names to constraints
\end{itemize}
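For example, the MOVIE/DIRECTOR schema used earlier could be created as follows (the domain sizes are arbitrary):
\begin{verbatim}
CREATE TABLE DIRECTOR (
  id    INTEGER NOT NULL,
  name  VARCHAR(40),
  PRIMARY KEY (id)
);

CREATE TABLE MOVIE (
  title            VARCHAR(100) NOT NULL,
  production_year  INTEGER      NOT NULL,
  director_id      INTEGER,
  PRIMARY KEY (title, production_year),
  CONSTRAINT movie_director
    FOREIGN KEY (director_id) REFERENCES DIRECTOR(id)
);
\end{verbatim}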
\subsection{Referential Actions}
Are used to specify what happens if tuples updated or deleted
\\ \\ SQL provides:
\begin{itemize}
\item \textbf{CASCADE} forces referencing tuples to be deleted
\item \textbf{SET NULL} sets referencing tuples to NULL
\item \textbf{SET DEFAULT} sets the tuple values to specified default
\item \textbf{NO ACTION} does nothing
\end{itemize}
\paragraph{Example usage:}
\mbox{FOREIGN KEY(title, year) REFERENCES MOVIE(title, year) ON DELETE \textbf{CASCADE}} \\ will delete director tuple if movie deleted
\\\\ \mbox{FOREIGN KEY(title, year) REFERENCES MOVIE(title, year) ON DELETE \textbf{SET NULL}} \\ will keep tuple in director but won't know what he directed if movie deleted
\subsection{CREATE statement}
\begin{itemize}
\item \textbf{CREATE ASSERTION} $<name>$ CHECK defines integrity constraints
\item \textbf{CREATE DOMAIN} define domains and check clause
\item \textbf{CREATE VIEW $<name> \textbf{AS} <query>$} define views
\item \textbf{CREATE GLOBAL TEMPORARY TABLE} makes a table for current SQL session
\item \textbf{CREATE LOCAL TEMPORARY TABLE} makes table for current module or session
\end{itemize}
\subsection{Use as Data Manipulation Language}
Allow insert, delete or update
\begin{itemize}
\item INSERT INTO $<tablename>$ VALUES $<tuple>$
\item INSERT INTO $<tablename> <attributelist>$ VALUES $<tuples>$
\item INSERT INTO $<tablename> <query>$
\item DELETE FROM $<tablename>$ WHERE $<condition>$
\item \mbox{UPDATE $<tablename>$ SET $<value-assignment-list>$ WHERE $<conditions>$}
\end{itemize}
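For example (the values are only illustrative):
\begin{verbatim}
INSERT INTO DIRECTOR VALUES (1, 'Peter Jackson');
INSERT INTO MOVIE (title, production_year, director_id)
       VALUES ('The Frighteners', 1996, 1);
UPDATE DIRECTOR SET name = 'P. Jackson' WHERE id = 1;
DELETE FROM MOVIE WHERE production_year < 1950;
\end{verbatim}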
\section{Relational Query Languages: Algebra}
\subsection{Query Languages}
\begin{itemize}
\item allow access and retrieval of data
\item foundation based on logic, and allows for optimisation
\item are not programming languages
\item not intended for complex calculations
\item easy efficient access to lots of data
\end{itemize}
\textbf{Query Language} must have
\begin{itemize}
\item formal language with
\item input schema
\item output schema
\item query mapping
\end{itemize}
\textbf{Equivalent Queries}
\begin{itemize}
\item same input schema
\item same output schema
\item same query mapping
\item if L $\sqsubseteq$ L' and L' $\sqsubseteq$ L
\end{itemize}
\textbf{Dominant}, if Q1 dominates Q2
\begin{itemize}
\item if L $\sqsubseteq$ L'
\end{itemize}
\subsection{Relational Algebra}
\begin{itemize}
\item useful for representing execution plan
\item easily translated to SQL
\item A is set of possible relations
\item Set of Operations:
\begin{itemize}
\item Attribute Selection $\sigma$
\\ \tab returns the tuples of MOVIE where attribute A equals attribute B
\\ \tab $[\sigma _{A=B}(MOVIE)]$
\item Constant Selection $\sigma$
\\ \tab returns the tuples of MOVIE where A has the value 3
\\ \tab $[\sigma _{A=3}(MOVIE)]$
\item Projection $\pi$
\\ \tab returns title and date of MOVIE
\\ \tab $[\pi _{title, date}(MOVIE)]$
\item Renaming $\delta \mapsto$
\\ \tab renames attribute in MOVIE
\\ \tab $[\delta _{title \mapsto title'}(MOVIE)]$
\item Union $\bigcup$
\\ \tab takes 2 arguments and returns set of tuples contained in either result, removing duplicates
\\ \tab $[\sigma _{A=2}(MOVIE) \bigcup \sigma_{A=3}(MOVIE)]$
\item Difference $-$
\\ \tab returns the tuples in the first relation that are not in the second
\\ \tab $[MOVIE - \sigma _{A=3}(MOVIE)]$
\item Natural Join $\bowtie$
\\ \tab Joins two tables and return tuples that match in both tables
\\ \tab $[ MOVIE \bowtie DIRECTOR ]$
\end{itemize}
\end{itemize}
Redundant Operations
\begin{itemize}
\item Intersection $\bigcap$
\\ \tab because it can be expressed using $-$ and $\bigcup$ (De Morgan)
\item Cross Product $\times$
\\ \tab because creates $N\times M$ tuples which is not space efficient
\item Division $\div$
\\ \tab because it uses cross product to generate all those that match
\end{itemize}
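For example, intersection can be simulated using difference alone: $R \cap S = R - (R - S)$.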
\subsection{Query Language $\mathcal{L} _{ALG}$ of Relational Algebra}
The query language $\mathcal{L}_{ALG}$ consists of all expressions that can be built from the relation names of the input schema using the operations above; every such expression has an input schema, an output schema, and a query mapping.
\subsection{Incremental Query Formulation}
Multiple queries can be used to break up a calculation and make it more readable. This will also reduce the amount of copying needed, as previous results can be used in subsequent calculations.
\section{Relational Query Language: Calculus}
\subsection{General}
\begin{itemize}
\item let users describe what they want
\item simple to write query and translate to SQL
\item sometimes relational algebra can get convoluted
\item based on first order logic
\item if safe then can automatically be transformed to SQL style
\item \textbf{Tuple Relational Calculus TRC} variables represent tuples
\item \textbf{Domain Relational Calculus DRC} variables represent values in domain
\end{itemize}
\textbf{Query Structure \& Example}
\\ \tab $\{ \tab[0.2cm] to\_return \tab[0.2cm]|\tab[0.2cm] \exists x,y,z(sometable(x,y,z,n))\}$
\\ \tab $\{x_{name} | \exists x_{age} (PERSON(x_{name}, x_{age}, 'Male'))\}$
\subsection{Object Properties}
\begin{itemize}
\item have variables, or may fix a variable's value in the query
\item variable values must be in their domain
\item complex properties can be built using negation, conjunction and existential quantification
\item brackets can be removed if this does not increase ambiguity
\end{itemize}
\subsection{Shortcuts}
\begin{itemize}
\item \textbf{Inequation} $A \neq B$ equivalent to $\neg A = B$
\item \textbf{Disjunction} DeMorgan's: $A \vee B$ shortcuts $\neg(\neg A \wedge \neg B)$
\item \textbf{Universal Quantification} DeMorgan's: $\forall x (A)$ shortcuts $\neg \exists x (\neg A)$
\item \textbf{Implication} $A \Rightarrow B $ shortcuts $ \neg A \vee B$
\item \textbf{Equivalence} $A \Leftrightarrow B$ shortcuts $(A\Rightarrow B) \wedge (B \Rightarrow A)$
\item \textbf{Successive Exestential Quantification} $\exists x_1 ( \exists x_2 (A))$ same as $\exists x_1,x_2(A)$
\item \textbf{Successive Univeral Quantification} $\forall x_1 ( \forall x_2 (A))$ same as $\forall x_1,x_2(A)$
\end{itemize}
\subsection{Free variables}
\begin{itemize}
\item placeholders used to describe structure
\item basically doesn't have to hold a specific set of values
\item not bound to any quantifier
\item negation does not change the set of free variables
\item the free variables of a conjunction are the union of those of its conjuncts
\end{itemize}
\subsection{Formulae interpretation}
\tab Whole thing either evaluates to T or F
\subsection{Domain Relational Calculus}
\begin{itemize}
\item Truth value of formula based on its values on free variables
\item \textbf{Example:} $\{ (m) | \exists d, t(SCREENING ('Event', m, d, t)) \}$
\item if looking for empty set, then just returns if value exists for that or not
\end{itemize}
\subsection{Tuple Relational Calculus}
\begin{itemize}
\item named on global perspective
\item sort logic based on domain and relation schemata
\item formulae are built from atoms as in DRC; free variables are defined analogously
\end{itemize}
\subsection{Safe Range Normal Form}
\begin{itemize}
\item check if query is safe or not
\item basically don't want potentially infinite number of values
\item Must contain these properties:
\begin{enumerate}
\item \textbf{eliminate shortcuts} (Universal Quantification $\forall$, Implication $\Rightarrow$ and Equivalence $\Leftrightarrow$)
\item \textbf{bounded renaming}: ensure that no variable occurs both free and bound
\item remove any double negations, i.e.\ replace $\neg \neg A$ by $A$
\item only put negation in front of an atom or a quantifier, and not over a bracketed area (e.g.\ $\neg (A \vee B)$ is not allowed)
\item omit unnecessary brackets
\item ensure there is no disjunction in final formula
\end{enumerate}
\item if all free variables range restricted then formula safe
\item if in SRNF then everything should either be an atom or quantified formula
\end{itemize}
% $\neq \Leftrightarrow \Leftarrow \Rightarrow \forall \exists \neg x \vee \wedge$
\section{SQL: As a Query Language}
\textbf{Basic structure:}
\\\tab$SELECT <attribute list> FROM <table list> WHERE <condition>$
\\\\\textbf{Renaming} (tables and attributes):
\\\tab$<name> AS <new name>$
\\\\\textbf{Select and remove duplicates}
\\\tab$SELECT DISTINCT \dots$
\\\\\textbf{Sort results} (can sort by multiple in heirarchy):
\\\tab $ORDER BY <attribute list>$
\\\tab add $ASC$ or $DESC$ to determine if want increasing/decreasing
\subsection{Operators:}
\tab$=, <, >, \leq, \geq, <>$(not equal to)
\subsection{Where Structure}
\tab$[<expression> <operator> <expression>]$
\subsection{Values in a range}
\tab$[<attribute> BETWEEN <value> AND <value>]$
\\\tab$[<attribute> IN <value list>]$
\subsection{Aggregate functions}
\begin{itemize}
\item \textbf{COUNT}: returns number of values
\item \textbf{AVG}: returns the average of the argument values. Only works if the domain is some kind of number
\item \textbf{MIN}: returns minimum argument value
\item \textbf{MAX}: returns maximum argument value
\item \textbf{SUM}: returns the sum of the argument values. Only works if the domain is a number.
\end{itemize}
Results of above functions different if distinct used
\subsection{Grouping}
\begin{itemize}
\item used when only want to apply aggregate on group of tuples
\item use $GROUP BY <attribute list>$
\item if want to only do some groups then add $HAVING <condition>$
\item the HAVING condition must apply to group and not tuples in groups
\end{itemize}
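For example, to count the movies of each director who has more than one movie in the database (schema as above):
\begin{verbatim}
SELECT   director_id, COUNT(*) AS n_movies
FROM     MOVIE
GROUP BY director_id
HAVING   COUNT(*) > 1;
\end{verbatim}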
\subsection{Subqueries}
\begin{itemize}
\item can utilise further queries in the WHERE conditions
\item $\textbf{IN} <subquery>$ checks if tuple exists in subquery
\item $\textbf{EXISTS} <subquery>$ checks if subquery returns non empty relation
\item $\textbf{UNIQUE} <subquery>$ checks if result contains no duplicates
\item any of the above can be negated with \textbf{NOT}
\end{itemize}
\subsection{Reachability}
\begin{itemize}
\item basically: given a graph, determine whether you can get from A to X in $n$ steps
\item if $n$ is fixed and known, the query can be written by iterating (joining) $n$ times
\item however, if $n$ is not bounded in advance, some form of recursion is needed
\end{itemize}
\subsection{Recursion}
Recursive queries are written with recursive common table expressions (\verb|WITH RECURSIVE|): a base query seeds the result and a recursive part repeatedly extends it until no new rows appear. A sketch for the reachability problem follows.
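A minimal sketch (using Python's built-in \verb|sqlite3|; the \verb|edge| table is a made-up example graph):
\begin{verbatim}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edge (src TEXT, dst TEXT);
    INSERT INTO edge VALUES ('A','B'), ('B','C'), ('C','D');
""")

rows = con.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 'A'            -- base case: the start node
        UNION                 -- UNION (not UNION ALL) removes duplicates,
        SELECT e.dst          -- so the recursion terminates even on cycles
        FROM edge e JOIN reach r ON e.src = r.node
    )
    SELECT node FROM reach;
""").fetchall()
print(sorted(r[0] for r in rows))   # ['A', 'B', 'C', 'D']
\end{verbatim}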
\subsection{Writing Difficult SQL Query}
\begin{enumerate}
\item Formalise query in safe calculus
\item Transform query into SRNF
\item Transform SRNF to SQL
\end{enumerate}
\subsection{Division}
Basically, $R \div S$ returns the tuples over the attributes of $R$ that are \emph{not} in $S$ which are related to \emph{every} tuple of $S$.
In SQL this is typically written with two nested WHERE NOT EXISTS subqueries (see the sketch below).
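A minimal sketch of the double \verb|NOT EXISTS| pattern (Python's built-in \verb|sqlite3|; the \verb|takes| and \verb|required| tables are made up): which students have taken \emph{every} required course?
\begin{verbatim}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE takes (student TEXT, course TEXT);
    CREATE TABLE required (course TEXT);
    INSERT INTO takes VALUES ('ann','db'), ('ann','os'), ('bob','db');
    INSERT INTO required VALUES ('db'), ('os');
""")

rows = con.execute("""
    SELECT DISTINCT t.student
    FROM takes t
    WHERE NOT EXISTS (           -- there is no required course ...
        SELECT * FROM required r
        WHERE NOT EXISTS (       -- ... that this student has not taken
            SELECT * FROM takes t2
            WHERE t2.student = t.student AND t2.course = r.course));
""").fetchall()
print(rows)   # [('ann',)]
\end{verbatim}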
\subsection{Outer Joins}
\begin{itemize}
\item used when some $t_1 \in R_1$ has no matching $t_2 \in R_2$ (or vice versa), so a plain $R_1 \bowtie R_2$ would silently drop those tuples
\item \textbf{FULL OUTER JOIN}\\\tab
$r$ FULL OUTER JOIN $s$ performs the join, keeps all tuples from both tables and fills unknown attributes with NULL
\item \textbf{LEFT OUTER JOIN}\\\tab
$r$ LEFT OUTER JOIN $s$ performs the join and keeps the entirety of $r$; for tuples with no match in $s$, the columns coming from $s$ are NULL
\item \textbf{RIGHT OUTER JOIN}\\\tab
$r$ RIGHT OUTER JOIN $s$ performs the join and keeps the entirety of $s$, matching $r$ only where a match exists; unknown values become NULL
\end{itemize}
\textbf{Possibly make list of examples here}
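As a first such example, here is a minimal left outer join sketch (Python's built-in \verb|sqlite3|; the \verb|person| and \verb|pet| tables are made up; note that SQLite only added RIGHT and FULL OUTER JOIN in version 3.39):
\begin{verbatim}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (name TEXT);
    CREATE TABLE pet (owner TEXT, pet TEXT);
    INSERT INTO person VALUES ('ann'), ('bob');
    INSERT INTO pet VALUES ('ann', 'cat');
""")

# Every person is kept; bob has no pet, so his pet column is NULL (None).
rows = con.execute("""
    SELECT p.name, pet.pet
    FROM person p LEFT OUTER JOIN pet ON pet.owner = p.name
    ORDER BY p.name;
""").fetchall()
print(rows)   # [('ann', 'cat'), ('bob', None)]
\end{verbatim}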
\section{Entity Relationship Model}
\subsection{Database Design}
\subsubsection{General Information}
\begin{itemize}
\item Organisations now have access to lots of data
\item Data is recorded in databases
\begin{itemize}
\item What to keep in Db?
\item How to access it?
\end{itemize}
\item Database design aims to create and manage complex databases for huge information systems
\item Broken down into four phases
\begin{enumerate}
\item Requirement Analysis
\item Conceptual Design
\item Logical Design
\item Physical Design
\end{enumerate}
\end{itemize}
\subsubsection{Conceptual Design}
\paragraph{target of database} to meet the needs of the organisation that is going to use it
\paragraph{conceptual design} provides an abstract description of the database in terms of high-level concepts, showing its expected structure
\paragraph{Inputs} are the information requirements of the users
\paragraph{Output} is a database schema that takes the user requirements into account but contains no layout, implementation or physical details yet
\paragraph{Conceptual data model} provides the language used to describe the conceptual design
\subsection{Entity Relationship}
\subsubsection{Entities}
\begin{itemize}
\item basic objects
\item described by attributes
\item these attributes should allow differentiation between objects
\item \textbf{key} created from these attributes to uniquely identify objects
\end{itemize}
\textbf{Look}
\begin{itemize}
\item Visualised by rectangles
\item Attributes point to rectangle
\item Attributes that make up the key are underlined
\end{itemize}
\subsubsection{Relationships}
\begin{itemize}
\item create connection between various entities
\item are technically also objects, connecting other objects together
\end{itemize}
\tab \textbf{Format:} \{COMPONENTS, ATTRIBUTES, KEYS\}
\\ \tab \tab \textbf{Examples:} $\{\{Student:PERSON\},\{Year, Course\}, \{Student, Year, Course\}\}$
\textbf{Look}
\begin{itemize}
\item Visualised by diamonds
\item May contain attributes as well, but doesn't have to
\item As with entities, key attributes are underlined
\item Edges link it to the objects it associates
\item Components that are part of key have a dot in the line connecting it to relationship diamond
\item key may span all attributes
\end{itemize}
\textbf{Roles}
\begin{itemize}
\item used when a relationship contains at least 2 components of the same object type. Each occurrence gets its own line connecting the diamond and the rectangle, labelled with its role.
\item effectively make use of foreign keys
\end{itemize}
\tab \textbf{Example:} $\{\{Student:PERSON, Lecturer:PERSON\},\emptyset, \{Student, Lecturer\}\}$
\paragraph{unique-key-value property} holds because the key attributes of an object uniquely identify its tuples: in a given set of tuples there are no duplicates in the attribute subset that makes up the key.
\subsection{Set vs Foreign Key Semantics}
\textbf{Set semantics} uses entire attribute set\\
\textbf{Foreign key semantics} uses only key attributes
\\\\If using foreign key semantics then definitions of [Relationships, Relationship Sets, Database Instances] are slightly different. But also lets you use $E_{id}$ instead of having to use entire $E$ to reference things.
\\\\Due to unique-key-value property, using either $E$ or $E_{id}$ is equivalent since can identify tuple by key attributes
\subsection{Identifiers}
For a given entity, you can create a new attribute that identifies the rest of the attributes of a tuple. If this column is guaranteed to be unique, it can be used as a single-attribute key; this effectively associates an identifying attribute with the entity
\paragraph{unique identifier property} basically says that the identifiers used to identify relationships must be unique
\subsection{Specialisation, Generalisation, Clusters}
\subsubsection{Specialisation}
\begin{itemize}
\item lets you define multiple types of an entity. Basically subtypes
\item Such as Student, Lecturer, Tutor which are all people
\item Generally adds various attributes that relate to specialised object type.
\item Is treated as a relation
\item Can have specialisations of specialisations (relationship of relationship)
\end{itemize}
Subtype U with supertype C:\tab\tab $U = (\{C\}, attr(U), \{C\})$
\subsubsection{Generalisation}
\begin{itemize}
\item abstract concepts that model multiple things
\item allows you to create a single relationship for connections
\end{itemize}
\subsubsection{Clusters}
\begin{itemize}
\item model disjoint union (or)
\item ensure that things are mutually disjoint
\item attach those cluster groups to a single point $\oplus$
\item cluster still labelled as an object
\end{itemize}
\subsection{Transforming the ER diagram}
\begin{itemize}
\item these diagrams can be transformed into database schemata, e.g.\ for implementation in SQL (see the sketch below)
\item Entities become tables
\item Relationships become tables with foreign key connections to the other tables
\end{itemize}
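A minimal sketch of this transformation, assuming a made-up ER diagram with PERSON and COURSE entities connected by an Enrolled relationship that carries a Year attribute:
\begin{verbatim}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- entities become tables
    CREATE TABLE person (pid  INTEGER PRIMARY KEY, name  TEXT);
    CREATE TABLE course (code TEXT    PRIMARY KEY, title TEXT);

    -- the relationship becomes a table whose key is built from
    -- foreign keys referencing the component entities (plus Year)
    CREATE TABLE enrolled (
        student INTEGER REFERENCES person(pid),
        course  TEXT    REFERENCES course(code),
        year    INTEGER,              -- attribute of the relationship
        PRIMARY KEY (student, year, course)
    );
""")
\end{verbatim}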
\subsection{Handling Clusters}
\begin{itemize}
\item clusters are used in conceptual design to model alternatives
\item to handle them, they must be removed from the ER diagram so that the individual interactions become visible
\item clusters provide a convenient way to model alternative kinds of objects in the Db
\item if you avoid clusters, the ER schemata become larger
\end{itemize}
\section{Database Design Quality: Relation Normalisation}
\subsection{General}
\begin{itemize}
\item \textbf{Normalisation}'s goal is to efficiently process updates
\begin{itemize}
\item this aims to have no data redundancy
\item redundancy means you have to do many updates
\item it also means you have to keep checking data integrity and update all redundant copies as needed.
\end{itemize}
\item \textbf{Denormalisation}'s goal is to efficiently process queries
\begin{itemize}
\item because joins are expensive to do
\end{itemize}
\item you cannot always do both, as they trade off against each other
\item therefore try to design the database well in the first place
\end{itemize}
\subsection{Functional Dependencies}
\begin{itemize}
\item if $X \rightarrow Y$ then for a given value of $X$, the value of $Y$ will always be the same (any two tuples that agree on $X$ also agree on $Y$)
\item therefore if $Student \rightarrow Teacher$ then that student will always have that teacher
\item if $\emptyset \rightarrow X$ then X can only have one value
\item an attribute occurring in some key is a \textbf{prime attribute}
\end{itemize}
\subsection{Derivation Rules}
General formula \tab $\frac{\text{IF}}{\text{THEN}}$
\\\\Transitivity\tab$\frac{X \rightarrow Y, Y \rightarrow Z}{X \rightarrow Z}$
\\\\Extension\tab$\frac{X \rightarrow Y}{X \rightarrow XY}$
\\\\Reflexivity\tab$\frac{}{XY \rightarrow Y}$
\\\\A derivation tree is built by stacking applications of these rules on top of each other; a small added example follows
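For instance, from $A \rightarrow B$ and $B \rightarrow C$ (example added for illustration):
\[
\frac{A \rightarrow B \qquad B \rightarrow C}{A \rightarrow C}\ \text{(Transitivity)}
\qquad\qquad
\frac{A \rightarrow C}{A \rightarrow AC}\ \text{(Extension)}
\]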
\subsection{Soundness \& Completeness}
\paragraph{Sound} R is sound if every derivable dependency is implied
\paragraph{Complete} R is complete if every implied dependency is derivable
\subsection{Canonical Cover}
Get all functional dependencies and create a minimal list that still preserves its properties
\\\\\tab Steps:
\begin{enumerate}
\item Get all Functional Dependencies
\item \textbf{Decompose}: break $A \rightarrow BC$ into $A \rightarrow B$ and $A \rightarrow C$
\item \textbf{L-reduced cover}: remove redundant attributes from left-hand sides, e.g.\ replace $AB \rightarrow C$ by $A \rightarrow C$ when $B$ is not needed
\item You are left with the canonical cover once you delete duplicates and redundant dependencies (a small worked example follows this list).
\end{enumerate}
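A small worked example (added for illustration): start with $\{A \rightarrow BC,\; AB \rightarrow C\}$.
\begin{enumerate}
\item Decompose $A \rightarrow BC$ into $A \rightarrow B$ and $A \rightarrow C$.
\item L-reduce $AB \rightarrow C$: since $A \rightarrow C$ already holds, $B$ is redundant on the left, giving $A \rightarrow C$.
\item Delete the duplicate, leaving the canonical cover $\{A \rightarrow B,\; A \rightarrow C\}$.
\end{enumerate}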
\subsection{Boyce-Codd Normal Form}
\begin{itemize}
\item can always get lossless BCNF decomposition
\item guarantees no redundancy in terms of FDs
\item some FDs might not be enforceable within the decomposed relations (see the example below)
\item formally: for every non-trivial FD $X \rightarrow Y$ that holds, $X$ must be a superkey
\end{itemize}
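A standard example (added for illustration): take $R(\mathit{Street}, \mathit{City}, \mathit{Zip})$ with FDs $\{\mathit{Street}\,\mathit{City} \rightarrow \mathit{Zip},\ \mathit{Zip} \rightarrow \mathit{City}\}$. The FD $\mathit{Zip} \rightarrow \mathit{City}$ violates BCNF because $\mathit{Zip}$ is not a superkey. A lossless BCNF decomposition into $(\mathit{Zip}, \mathit{City})$ and $(\mathit{Zip}, \mathit{Street})$ exists, but the FD $\mathit{Street}\,\mathit{City} \rightarrow \mathit{Zip}$ can no longer be checked inside a single relation; this is an FD that ``can't be enforced''.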
\subsection{Third Normal Form}
\begin{itemize}
\item Lossless 3NF can always be achieved
\item guarantees that all FDs can be enforced on elements in 3NF decomposition
\item ensures least level of data redundancy among all lossless, faithful decompositions
\end{itemize}
\section{Gerald's Fun Section}
\end{document} | {
"alphanum_fraction": 0.7301868676,
"avg_line_length": 41.5944700461,
"ext": "tex",
"hexsha": "1817c8eb2c0a0752345824385c6afb6f51de2eb1",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-07-20T06:57:35.000Z",
"max_forks_repo_forks_event_min_datetime": "2018-06-05T01:07:48.000Z",
"max_forks_repo_head_hexsha": "43b40da90e4db476e7359e7ef9348a455924a946",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tosw164/SE364notes",
"max_forks_repo_path": "SE351_Databases/351_notes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "43b40da90e4db476e7359e7ef9348a455924a946",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tosw164/SE364notes",
"max_issues_repo_path": "SE351_Databases/351_notes.tex",
"max_line_length": 225,
"max_stars_count": 8,
"max_stars_repo_head_hexsha": "43b40da90e4db476e7359e7ef9348a455924a946",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tosw164/SE364notes",
"max_stars_repo_path": "SE351_Databases/351_notes.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-25T01:52:39.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-07-28T04:54:52.000Z",
"num_tokens": 7627,
"size": 27078
} |
\subsection{Surface Load Traction Examples}
\label{sec:example:3dhex8:surfload}
PyLith features discussed in this example:
\begin{itemize}
\item Time-dependent Neumann (traction) boundary conditions
\item Dirichlet boundary conditions
\item Elastic material
\item Output of solution at user-defined locations
\end{itemize}
\subsubsection{Overview}
This set of examples describes a set of problems for PyLith involving
surface loading with a Neumann (traction) applied to the ground surface.
The first example demonstrates the use of a surface load in a static
problem, and the second example demonstrates how to apply a cyclic
load in a quasi-static problem. The second problem also includes output
of the solution at user-defined locations. All of the examples are
contained in the directory \filename{examples/3d/hex8}, and the corresponding
\filename{cfg} files are \filename{step18.cfg} and \filename{step19.cfg}.
Run the examples as follows:
\begin{shell}
# Step18
$ pylith step18.cfg
# Step19
$ pylith step19.cfg
\end{shell}
This will cause PyLith to read the default parameters in \filename{pylithapp.cfg},
and then override or augment them with the additional parameters in
the \filename{stepXX.cfg} file. Each \filename{cfg} file is extensively
documented, to provide detailed information on the various parameters.
\subsubsection{Step18 - Static Surface Load}
The \filename{step18.cfg} file defines a problem with a spatially varying
axial surface load applied to the top surface with Dirichlet (roller)
boundary conditions on the lateral and bottom surfaces. We first set
the array of boundary conditions with one for each surface of the
domain. As in the other examples, we also set up output for the ground
surface.
For the Dirichlet boundary conditions we fix the degree of freedom
associated with motion normal to the boundary while leaving the other
degrees of freedom free. We do not explicitly specify the use of a
Dirichlet boundary condition because it is the default. Similarly,
the ZeroDispDB is the default spatial database for the displacements
in a Dirichlet boundary condition, so all we need to specify is the
degree of freedom that is constrained, the name of the nodeset from
CUBIT, and a label used in diagnostic output. For the Dirichlet boundary
condition on the +x surface we have:
\begin{cfg}[Excerpt from \filename{Step18.cfg}]
<h>[pylithapp.timedependent.bc.x_pos]</h>
<p>label</p> = face_xpos
<p>bc_dof</p> = [0]
<p>db_initial.label</p> = Dirichlet BC on +x
\end{cfg}
On the top surface we apply a Neumann boundary condition for the surface
load, so we first set the boundary condition type and then specify
the nodeset in CUBIT associated with this surface. For the static
surface load, we use a spatial database for the initial value and
linear interpolation. We integrate the surface tractions over the
boundary, so we also specify the numerical integration scheme to use.
Finally, we specify a vector for the up direction because the tractions
are applied to a horizontal surface, resulting in ambiguous shear
directions for our default orientation convention.
\begin{cfg}[Excerpt from \filename{Step18.cfg}]
<h>[pylithapp.timedependent.bc]</h>
<f>z_pos</f> = pylith.bc.Neumann
<h>[pylithapp.timedependent.bc.z_pos]</h>
<p>label</p> = face_zpos
<f>db_initial</f> = spatialdata.spatialdb.SimpleDB
<p>db_initial.label</p> = Neumann BC on +z
<p>db_initial.iohandler.filename</p> = spatialdb/tractions\_axial\_pressure.spatialdb
<p>db_initial.query_type</p> = linear ; Use linear interpolation.
# Diagnostic output
<p>output.cell_info_fields</p> = [initial-value]
<p>output.writer.filename</p> = output/step18-traction.vtk
<f>output.cell_filter</f> = pylith.meshio.CellFilterAvg
# We must specify quadrature information for the cell faces.
<f>quadrature.cell</f> = pylith.feassemble.FIATLagrange
<p>quadrature.cell.dimension</p> = 2
<p>quadrature.cell.quad_order</p> = 2
# Because normal for +z surface is {[}0,0,1{]}, the horizontal and
# vertical shear directions are ambiguous. We provide a ``fake'' up
# direction of [0,1,0] so that the horizontal shear direction (cross
# product of ``up'' and normal) is [1,0,0] and the vertical shear
# direction (cross product of normal and horizontal) is [0,1,0].
<p>up_dir</p> = [0,1,0]
\end{cfg}
When we have run the simulation, the output VTK files will be contained
in \filename{examples/3d/hex8/output} (all with a prefix of \filename{step18}).
Results using ParaView are shown in Figure \vref{fig:example:3dhex8:step18:displacement}.
\begin{figure}
\includegraphics[width=10cm]{examples/figs/3dhex8_step18-displ}
\caption{Displacement field for example step18 visualized using ParaView. The
vectors show the displacement field while the colors in the wireframe
correspond to the z-component of the displacement field.}
\label{fig:example:3dhex8:step18:displacement}
\end{figure}
\subsubsection{Step19 - Time-Dependent Surface Load}
The \filename{step19.cfg} file defines a problem that is identical to
example step18, except that we vary the amplitude of the surface load
as a function of time. We use a temporal database (analogous to our
spatial databases for specifying spatial variations) to prescribe
a piecewise linear variation of the amplitude with time as given in
the file \filename{spatialdb/loadcycle.timedb}. The amplitude begins
at zero, progresses to 1.0, then 1.5, before decreasing in a symmetric
fashion. The temporal database can use variable time steps to prescribe
arbitrary time histories.
Rather than specify a spatial database for the initial value of the
Neumann boundary condition corresponding to the surface load, we specify
a spatial database for the change in value and the temporal database:
\begin{cfg}[Excerpt from \filename{Step19.cfg}]
<h>[pylithapp.timedependent.bc.z_pos]</h>
<p>label</p> = face_zpos
<f>db_change</f> = spatialdata.spatialdb.SimpleDB
<p>db_change.label</p> = Amplitude of Neumann BC on +z
<p>db_change.iohandler.filename</p> = spatialdb/tractions_axial_pressure.spatialdb
<p>db_change.query_type</p> = linear ; Use linear interpolation
<f>th_change</f> = spatialdata.spatialdb.TimeHistory
<p>th_change.label</p> = Time history for Neumann BC on +z
<p>th_change.filename</p> = spatialdb/loadcycle.timedb
\end{cfg}
When we have run the simulation, the output VTK files will be contained
in \filename{examples/3d/hex8/output} (all with a prefix of \filename{step19}).
Results using ParaView are shown in Figure \vref{fig:example:3dhex8:step19:stress}.
We also output the solution at user-defined locations, which are given
in the file \filename{output\_points.txt}. See Section \vref{sec:output:points}
for a discussion of the output parameters. This type of output is
designed for comparison against observations and inversions and output
via HDF5 files (see Section \vref{sub:HDF5/Xdmf-Output}).
\begin{figure}
\includegraphics[width=10cm]{examples/figs/3dhex8_step19-stress_t200}
\caption{Stress field (zz-component) for example step19 at t = 200
years visualized using ParaView. The stresses appear as four
layers since we have used \object{CellFilterAvg} for material
output.}
\label{fig:example:3dhex8:step19:stress}
\end{figure}
% End of file
| {
"alphanum_fraction": 0.7850041425,
"avg_line_length": 45.5471698113,
"ext": "tex",
"hexsha": "dd97f5c9ee54287307ed522ccec324177da14d4e",
"lang": "TeX",
"max_forks_count": 71,
"max_forks_repo_forks_event_max_datetime": "2022-03-03T04:26:02.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-03-24T12:11:08.000Z",
"max_forks_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Grant-Block/pylith",
"max_forks_repo_path": "doc/userguide/examples/obsolete/3dhex8_surfload.tex",
"max_issues_count": 277,
"max_issues_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_issues_repo_issues_event_max_datetime": "2022-03-30T21:13:09.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-02-20T16:27:35.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Grant-Block/pylith",
"max_issues_repo_path": "doc/userguide/examples/obsolete/3dhex8_surfload.tex",
"max_line_length": 89,
"max_stars_count": 93,
"max_stars_repo_head_hexsha": "f6338261b17551eba879da998a5aaf2d91f5f658",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Grant-Block/pylith",
"max_stars_repo_path": "doc/userguide/examples/obsolete/3dhex8_surfload.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-25T13:40:02.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-08T16:41:22.000Z",
"num_tokens": 1891,
"size": 7242
} |
\documentclass[DM,authoryear,toc]{lsstdoc}
% lsstdoc documentation: https://lsst-texmf.lsst.io/lsstdoc.html
\input{meta}
% Package imports go here.
% Local commands go here.
% To add a short-form title:
% \title[Short title]{Title}
\title{Technical items to honor a tech great}
% Optional subtitle
% \setDocSubtitle{A subtitle}
\author{%
William O'Mullane
}
\setDocRef{DMTN-130}
\setDocUpstreamLocation{\url{https://github.com/lsst-dm/dmtn-130}}
\date{\vcsDate}
% Optional: name of the document's curator
% \setDocCurator{The Curator of this Document}
\setDocAbstract{%
We have been asked to consider naming some part of the technical system in honor of Jim Gray. This document is intended to give some background and options for that.
}
% Change history defined here.
% Order: oldest first.
% Fields: VERSION, DATE, DESCRIPTION, OWNER NAME.
% See LPM-51 for version number policy.
\setDocChangeRecord{%
\addtohist{1}{YYYY-MM-DD}{Unreleased.}{William O'Mullane}
}
\setDocCompact{true}
\begin{document}
% Create the title page.
\maketitle
% ADD CONTENT HERE
% You can also use the \input command to include several content files.
\input{body}
\appendix
% Include all the relevant bib files.
% https://lsst-texmf.lsst.io/lsstdoc.html#bibliographies
\section{References} \label{sec:bib}
\bibliography{local,lsst,lsst-dm,refs_ads,refs,books}
% Make sure lsst-texmf/bin/generateAcronyms.py is in your path
\section{Acronyms} \label{sec:acronyms}
\input{acronyms.tex}
\end{document}
| {
"alphanum_fraction": 0.7573529412,
"avg_line_length": 24.1290322581,
"ext": "tex",
"hexsha": "277919f998965a8e9696c1fd56e13623b6f82557",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1e2c869dc2127c916ae8f98c355b96a92fa39c3b",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "lsst-dm/dmtn-130",
"max_forks_repo_path": "DMTN-130.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1e2c869dc2127c916ae8f98c355b96a92fa39c3b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "lsst-dm/dmtn-130",
"max_issues_repo_path": "DMTN-130.tex",
"max_line_length": 165,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1e2c869dc2127c916ae8f98c355b96a92fa39c3b",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "lsst-dm/dmtn-130",
"max_stars_repo_path": "DMTN-130.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 439,
"size": 1496
} |
\documentclass[12pt, fullpage,letterpaper]{article}
\usepackage[margin=1in]{geometry}
\usepackage{url}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{xspace}
\usepackage{graphicx}
\usepackage{hyperref}
\newcommand{\semester}{Spring 2022}
\newcommand{\assignmentId}{1}
\newcommand{\releaseDate}{1 Feb, 2022}
\newcommand{\dueDate}{11:59pm, 18 Feb, 2022}
\newcommand{\bx}{{\bf x}}
\newcommand{\bw}{{\bf w}}
\title{CS 6190: Probabilistic Machine Learning \semester}
\author{Homework \assignmentId}
\date{Handed out: \releaseDate\\
Due: \dueDate}
\begin{document}
\maketitle
\input{emacscomm}
\footnotesize
\begin{itemize}
\item You are welcome to talk to other members of the class about
the homework. I am more concerned that you understand the
underlying concepts. However, you should write down your own
solution. Please keep the class collaboration policy in mind.
\item Feel free to discuss the homework with the instructor or the TAs.
\item Your written solutions should be brief and clear. You need to
show your work, not just the final answer, but you do \emph{not}
need to write it in gory detail. Your assignment should be {\bf no
more than 10 pages}. Every extra page will cost a point.
\item Handwritten solutions will not be accepted.
\item The homework is due by \textbf{midnight of the due date}. Please submit
the homework on Canvas.
\end{itemize}
\section*{Analytical problems [80 points + 30 bonus]}
\label{sec:q1}
\begin{enumerate}
\item~[8 points] A random vector, $\x = \left[\begin{array}{c}\x_1 \\ \x_2\end{array}\right]$ follows a multivariate Gaussian distribution,
\[
p(\x) = \N\big( \left[\begin{array}{c}\x_1 \\ \x_2\end{array}\right] | \left[\begin{array}{c}\bmu_1 \\ \bmu_2\end{array}\right], \left[\begin{array}{cc} \bSigma_{11} & \bSigma_{12} \\ \bSigma_{21} & \bSigma_{22}\end{array}\right]\big).
\]
Show that the marginal distribution of $\x_1$ is $p(\x_1) = \N(\x_1| \bmu_1, \bSigma_{11})$.
\item~[\textbf{Bonus}][10 points] Given a Gaussian random vector, $\x \sim \N(\x|\bmu, \bSigma)$. We have a linear transformation, $\y = \A\x + \b + \z$, where $\A$ and $\b$ are constants, $\z$ is another Gaussian random vector independent to $\x$, $p(\z) = \N(\z|\0, \bLambda)$. Show $\y$ follows Gaussian distribution as well, and derive its form. Hint: using characteristic function. You need to check the materials by yourself.
\item~[8 points] Show the differential entropy of the a multivariate Gaussian distribution $\N(\x|\bmu, \bSigma)$ is
\[
\mathrm{H}[\x] = \frac{1}{2}\log |\bSigma| + \frac{d}{2}(1 + \log 2\pi)
\]
where $d$ is the dimension of $\x$.
\item~[8 points] Derive the Kullback-Leibler divergence between two Gaussian distributions, $p(\x) = \N(\x|\bmu, \bSigma)$ and $q(\x) = \N(\x|\m, \Lambda)$, \ie $\mathrm{KL}(q || p)$.
\item~[8 points] Given a distribution in the exponential family,
\[
p(\x|\boldeta) = \frac{1}{Z(\boldeta)} h(\x)\exp\big(-\u(\x)^\top \boldeta\big).
\]
Show that
\[
\frac{\partial^2 \log Z(\boldeta)}{\partial \boldeta^2} = \mathrm{cov}(\u(\x)),
\]
where $\mathrm{cov}$ is the covariance matrix.
\item~[4 points] Is $\log Z(\boldeta)$ convex or nonconvex? Why?
\item~[8 points] Given two random variables $\x$ and $\y$, show that
\[
I(\x,\y) = H[\x] - H[\x|\y]
\]
where $I(\cdot, \cdot)$ is the mutual information and $H[\cdot]$ the entropy.
\item~[24 points] Convert the following distributions into the form of the exponential-family distribution. Please give the mapping from the expectation parameters to the natural parameters, and also represent the log normalizer as a function of the natural parameters.
\begin{itemize}
\item Dirichlet distribution
\item Gamma distribution
\item Wishart distribution
\end{itemize}
\item~[6 points] Does student $t$ distribution (including both the scalar and vector cases) belong to the exponential family? Why?
\item~[6 points] Does the mixture of Gaussian distribution belong to the exponential family? Why?
\[
p(\x) = \frac{1}{2}\N(\x|\bmu, \bSigma) + \frac{1}{2}\N(\x|\m, \bLambda)
\]
\item~[\textbf{Bonus}][20 points] Given a distribution in the exponential family $p(\x|\boldeta)$, where $\boldeta$ are the natural parameters. As we discussed in the class, the distributions in the exponential family are often parameterized by their expectations, namely $\btheta = \EE\big(\u(\x)\big)$ where $\u(\x)$ are the sufficient statistics (recall Gaussian and Bernoulli distributions). Given an arbitrary distribution $p(\x|\balpha)$, the Fisher information matrix in terms of the distribution parameters $\balpha$ is defined as $\F(\balpha) = \EE_{p(\x|\balpha)}[- \frac{\partial^2 \log(p(\x|\balpha)) }{\partial \balpha^2}]$.
\begin{enumerate}
\item~[5 points] Show that if we calculate the Fisher Information matrix in terms of the natural parameters, we have $\F(\boldeta) = \mathrm{cov}\big(\u(\x)\big)$.
\item~[5 points] Show that $\frac{\partial \btheta}{\partial \boldeta} = \F(\boldeta)$.
\item~[10 points] Show that the Fisher information matrix in terms of the expectation parameters is the inverse of that in terms of the natural parameters, $\F(\btheta) =\F^{-1}(\boldeta) $.
\item~[5 points] Suppose we observed dataset $\Dcal$. Show that
\[
\frac{\partial \log p(\Dcal|\boldeta)}{\partial \boldeta} \F(\boldeta)^{-1} = \frac{\partial \log p(\Dcal|\btheta)}{\partial \btheta}
\]
and
\[
\frac{\partial \log p(\Dcal|\btheta)}{\partial \btheta}\F(\btheta)^{-1} = \frac{\partial \log p(\Dcal|\boldeta)}{\partial \boldeta}.
\]
Note that I choose the orientation of the gradient vector to be consistent with Jacobian. So, in this case, the gradient vector is a row vector (rather than a column vector). If you want to use a column vector to represent the gradient, you can move the information matrix to the left. It does not influence the conclusion.
\end{enumerate}
\end{enumerate}
\section{Practice [20 points ]}
\begin{enumerate}
\item~[5 Points] Look into the student $t$ distribution. Let us set the mean and precision to be $\mu = 0$ and $\lambda = 1$. Vary the degree of freedom $\nu \in \{0.1, 1, 10, 100, 10^6\}$ and draw the density of the student $t$ distribution. Also, draw the density of the standard Gaussian distribution $\N(0,1)$. Please place all the density curves in one figure. Show the legend. What can you observe? (A minimal plotting sketch is given after this list.)
\item~[5 points] Draw the density plots for Beta distributions: Beta(1,1), Beta(5, 5) and Beta (10, 10). Put the three density curves in one figure. What do you observe? Next draw the density plots for Beta(1, 2), Beta(5,6) and Beta(10, 11). Put the three density curves in another figure. What do you observe?
\item~[10 points] Randomly draw 30 samples from a Gaussian distribution $\N(0, 2)$. Use the 30 samples as your observations to find the maximum likelihood estimate (MLE) for a Gaussian distribution and a student $t$ distribution. For both distributions, please use L-BFGS to optimize the parameters. For student $t$, you need to estimate the degree of freedom as well. Draw a plot of the estimated Gaussian density, the student $t$ density and the scattered data points. What do you observe, and why? Next, we inject three noise points into the data: we append $\{8, 9, 10\}$ to the $30$ samples. Find the MLE for the Gaussian and student $t$ distribution again. Draw the density curves and scattered data points in another figure. What do you observe, and why?
\end{enumerate}
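For the first practice item, a minimal plotting sketch is given below. It is only a starting point and assumes \texttt{numpy}, \texttt{scipy} and \texttt{matplotlib} are available.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t, norm

x = np.linspace(-5, 5, 500)
for nu in [0.1, 1, 10, 100, 1e6]:
    plt.plot(x, t.pdf(x, df=nu), label=f"student t, nu={nu:g}")
plt.plot(x, norm.pdf(x), "k--", label="N(0,1)")
plt.legend(); plt.xlabel("x"); plt.ylabel("density")
plt.show()   # as nu grows, the t density approaches N(0,1)
\end{verbatim}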
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
| {
"alphanum_fraction": 0.6989849037,
"avg_line_length": 55.6811594203,
"ext": "tex",
"hexsha": "b7f0d684706a035609a2e55af0debe34bc124c36",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "f000f571d1068ab640a360b490a40f0f15d8502b",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "tgautam03/CS6190-ProbabilisticML",
"max_forks_repo_path": "assignments/a1/hw1.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "f000f571d1068ab640a360b490a40f0f15d8502b",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "tgautam03/CS6190-ProbabilisticML",
"max_issues_repo_path": "assignments/a1/hw1.tex",
"max_line_length": 773,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "f000f571d1068ab640a360b490a40f0f15d8502b",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "tgautam03/CS6190-ProbabilisticML",
"max_stars_repo_path": "assignments/a1/hw1.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-08T06:17:05.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-08T06:17:05.000Z",
"num_tokens": 2267,
"size": 7684
} |
% !TEX TS-program = xelatex
% !TEX encoding = UTF-8
% !Mode:: "TeX:UTF-8"
\documentclass[onecolumn,oneside]{SUSTechHomework}
\usepackage{float}
\author{胡玉斌}
\sid{11712121}
\title{Lab 10}
\coursecode{CS315}
\coursename{Computer Security}
\begin{document}
\maketitle
\section*{Task 1}
We compute the extended gcd $e \times d + \phi(n) \times y = 1$, which gives $d = e^{-1} \bmod \phi(n)$.
Figure \ref{fig1} shows how to calculate it in Python with the \verb|sympy| package.
d = 0X3587A24598E5F2A21DB007D89D18CC50ABA5075BA19A33890FE7C28A9B496AEB.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task1_1.png}
\caption{exgcd}
\label{fig1}
\end{figure}
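The same computation can be scripted; a generic sketch is shown below (the actual $p$, $q$, $e$ values come from the lab handout and are not reproduced here):
\begin{verbatim}
from sympy import mod_inverse

def rsa_private_exponent(p, q, e):
    """Return d with e*d = 1 (mod (p-1)*(q-1))."""
    phi = (p - 1) * (q - 1)
    return mod_inverse(e, phi)

# usage with the hex-encoded parameters from the handout:
# d = rsa_private_exponent(int(p_hex, 16), int(q_hex, 16), int(e_hex, 16))
# print(hex(d).upper())
\end{verbatim}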
\section*{Task 2}
We encrypt the message with RSA by calculating $C = M^e \bmod n$.
$C = \mbox{0X6FB078DA550B2650832661E14F4F8D2CFAEF475A0DF3A75CACDC5DE5CFC5FADC}$
To verify that, calculate $M=C^d~\mbox{mod}~n$.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task2_1.png}
\caption{Encrypt}
\end{figure}
\section*{Task 3}
Calculating $M = C^d \bmod n$, we have $M = 0X50617373776F72642069732064656573$.
Decoding it as \verb|utf-8|, we get the message \verb|Password is dees|.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task3_1.png}
\caption{Decrypt}
\end{figure}
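Both tasks amount to one call to Python's three-argument \verb|pow|; a sketch follows (only the recovered $M$ from Task 3 is reused below; the lab's $n$, $e$, $d$ and ciphertext values are not repeated here):
\begin{verbatim}
def rsa_encrypt(M, e, n):
    return pow(M, e, n)        # C = M^e mod n   (Task 2)

def rsa_decrypt(C, d, n):
    return pow(C, d, n)        # M = C^d mod n   (Task 3)

# decoding the recovered integer back to text, as in Task 3:
M = 0x50617373776F72642069732064656573
print(bytes.fromhex(format(M, "x")).decode("utf-8"))   # Password is dees
\end{verbatim}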
\section*{Task 4}
We sign the message by calculating $S = M^d \bmod n$.
We get $S=0X55A4E7F17F04CCFE2766E1EB32ADDBA890BBE92A6FBE2D785ED6E73CCB35E4CB$ when $M=\mbox{I owe you \$2000.}$
We get $S=0XBCC20FB7568E5D48E434C387C06A6025E90D29D848AF9C3EBAC0135D99305822$ when $M=\mbox{I owe you \$3000.}$
Although we only change \verb|2000| to \verb|3000|, the signature $S$ changes completely.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task4_1.png}
\caption{Sign}
\end{figure}
\section*{Task 5}
If the signature is correct, the message recovered from it is the same as the actual message.
But if the signature is corrupted, the recovered message is completely different.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task5_1.png}
\caption{Verify}
\end{figure}
\section*{Task 6}
\subsection*{Step 1}
Download a certificate from a real web server.
We use the \verb|www.baidu.com| server.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_1.png}
\caption{Step 1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_2.png}
\caption{Step 1}
\end{figure}
\subsection*{Step 2}
Extract the public key (e, n) from the issuer’s certificate.
We have $n$ and $e=0x10001$ now.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_3.png}
\caption{Step 2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_4.png}
\caption{Step 2}
\end{figure}
\subsection*{Step 3}
Extract the signature from the server’s certificate.
We have $s$ now.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_5.png}
\caption{Step 3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_6.png}
\caption{Step 3}
\end{figure}
\subsection*{Step 4}
Extract the body of the server’s certificate.
We have $sha256sum$.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_7.png}
\caption{Step 4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_8.png}
\caption{Step 4}
\end{figure}
\subsection*{Step 5}
Verify the signature.
We find that the digest recovered from the signature matches the \verb|sha256| hash of the certificate body, so the signature verifies.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{img/task6_9.png}
\caption{Step 5}
\end{figure}
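The comparison in this step can also be scripted; a sketch is shown below (it assumes \verb|signature_hex|, \verb|e|, \verb|n| from Steps 2--3 and the body file from Step 4; for PKCS\#1 v1.5 with SHA-256 the hash is the last 32 bytes of the padded block):
\begin{verbatim}
import hashlib

def verify(signature_hex, e, n, body_path):
    s = int(signature_hex, 16)
    m = pow(s, e, n)                   # padded DigestInfo
    recovered = format(m, "x")[-64:]   # trailing 32 bytes = sha256 digest
    with open(body_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return recovered == digest
\end{verbatim}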
\end{document}
| {
"alphanum_fraction": 0.6898064516,
"avg_line_length": 22.1428571429,
"ext": "tex",
"hexsha": "7d5c814ab08ae6d615216ab9c03d5c2b70c9cc04",
"lang": "TeX",
"max_forks_count": 3,
"max_forks_repo_forks_event_max_datetime": "2021-04-27T13:41:36.000Z",
"max_forks_repo_forks_event_min_datetime": "2021-01-07T04:14:11.000Z",
"max_forks_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "Eveneko/SUSTech-Courses",
"max_forks_repo_path": "CS315_Computer-Security/lab10/lab10.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "Eveneko/SUSTech-Courses",
"max_issues_repo_path": "CS315_Computer-Security/lab10/lab10.tex",
"max_line_length": 113,
"max_stars_count": 4,
"max_stars_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "Eveneko/SUSTech-Courses",
"max_stars_repo_path": "CS315_Computer-Security/lab10/lab10.tex",
"max_stars_repo_stars_event_max_datetime": "2021-03-11T10:05:09.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-11-11T11:56:57.000Z",
"num_tokens": 1422,
"size": 3875
} |
\clearpage
\subsubsection{Enumeration} % (fold)
\label{ssub:enum}
An Enumeration allows you to create a type where the value is one of a list of available options. When you declare an enumeration you are listing the values that are available for data of this type. The example in \fref{fig:type-decl-enum} declares a type that can have the value \texttt{ADD\_COINS} or \texttt{REMOVE\_COINS}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{./topics/type-decl/diagrams/Enum}
\caption{An Enumeration allows you to define related constants}
\label{fig:type-decl-enum}
\end{figure}
\mynote{
\begin{itemize}
\item An Enumeration is a kind of \textbf{artefact} that you can declare.
\item Using an enumeration you can declare a kind of value that must come from a list of available options.
\item When you declare the enumeration you list the \emph{values} that it may have.
\item This is like a list of constants, where values of this type will match one of these constants.
\item Internally the compiler will map your values to numeric values. The first option is represented by the value \texttt{0}, the second is represented by the value \texttt{1}, and so on.
\item You can specify the values for each option in the enumeration, this can be useful if you want to be able to combine options in a single value. In these cases you give each option a bit unique value (first at \texttt{1}, then \texttt{2}, \texttt{4}, \texttt{8}, \texttt{16}, etc).
\item The \textbf{size} of an enumeration is based on the size of the integer type used to represent its values.
\end{itemize}
}
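The same idea can be sketched quickly in Python (shown here only for illustration; it is not the notation used elsewhere in this book): plain enumeration members map to the integer values described above, while flag-style enumerations use bit-unique values so that options can be combined in a single value.
\begin{verbatim}
from enum import Enum, Flag, auto

class CoinChange(Enum):     # first option 0, second option 1, ...
    ADD_COINS = 0
    REMOVE_COINS = 1

class Permission(Flag):     # bit-unique values: 1, 2, 4, ...
    READ = auto()
    WRITE = auto()
    EXECUTE = auto()

mode = Permission.READ | Permission.WRITE   # options combined in one value
print(CoinChange.ADD_COINS.value, mode)
\end{verbatim}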
% subsection enumerations (end) | {
"alphanum_fraction": 0.7574107683,
"avg_line_length": 63.5769230769,
"ext": "tex",
"hexsha": "67f1f9d326855f925f141b74b71570c39f917c0d",
"lang": "TeX",
"max_forks_count": 6,
"max_forks_repo_forks_event_max_datetime": "2022-03-24T07:42:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2020-06-02T03:18:37.000Z",
"max_forks_repo_head_hexsha": "8f3040983d420129f90bcc4bd69a96d8743c412c",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "macite/programming-arcana",
"max_forks_repo_path": "topics/type-decl/concepts/enum.tex",
"max_issues_count": 1,
"max_issues_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07",
"max_issues_repo_issues_event_max_datetime": "2021-12-29T19:45:10.000Z",
"max_issues_repo_issues_event_min_datetime": "2021-12-29T19:45:10.000Z",
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "thoth-tech/programming-arcana",
"max_issues_repo_path": "topics/type-decl/concepts/enum.tex",
"max_line_length": 326,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "bb5c0d45355bf710eff01947e67b666122901b07",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "thoth-tech/programming-arcana",
"max_stars_repo_path": "topics/type-decl/concepts/enum.tex",
"max_stars_repo_stars_event_max_datetime": "2021-08-10T04:50:54.000Z",
"max_stars_repo_stars_event_min_datetime": "2021-08-10T04:50:54.000Z",
"num_tokens": 435,
"size": 1653
} |
\chapter{Fast Shower Parametrizations}
Please $\backslash$input$\{path/file\}$ your text here.
| {
"alphanum_fraction": 0.7684210526,
"avg_line_length": 31.6666666667,
"ext": "tex",
"hexsha": "a5bb0f17f82df9bceac7ae7bb2b04f989d8fbc28",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "berghaus/cernlib-docs",
"max_forks_repo_path": "geant4/parameterisation/parameterisation.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "berghaus/cernlib-docs",
"max_issues_repo_path": "geant4/parameterisation/parameterisation.tex",
"max_line_length": 55,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "76048db0ca60708a16661e8494e1fcaa76a83db7",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "berghaus/cernlib-docs",
"max_stars_repo_path": "geant4/parameterisation/parameterisation.tex",
"max_stars_repo_stars_event_max_datetime": "2019-07-24T12:30:01.000Z",
"max_stars_repo_stars_event_min_datetime": "2019-07-24T12:30:01.000Z",
"num_tokens": 24,
"size": 95
} |
\documentclass[11pt]{article}
\usepackage{fullpage}
\usepackage{mathtools,color,xcolor,hyperref,graphicx,wrapfig,listings,array,xspace,tabu,stmaryrd,tabularx,verbatim,longtable}
% "define" Scala
\lstdefinelanguage{scala}{
morekeywords={abstract,case,catch,class,def,%
do,else,extends,false,final,finally,%
for,if,implicit,import,match,mixin,%
new,null,object,override,package,%
private,protected,requires,return,sealed,%
super,this,throw,trait,true,try,%
type,val,var,while,with,yield},
otherkeywords={=>,<-,<\%,<:,>:,\#,@},
sensitive=true,
morecomment=[l]{//},
morecomment=[n]{/*}{*/},
morestring=[b]",
morestring=[b]',
morestring=[b]"""
}
\newcommand{\authnote}[2]{\textsf{#1 \textcolor{blue}{: #2}}}
\newcommand{\knote}[1]{{\authnote{\textcolor{green}{kushti}}{#1}}}
\newcommand{\mnote}[1]{{\authnote{\textcolor{red}{Morphic}}{#1}}}
\newcommand{\dnote}[1]{{\authnote{\textcolor{brown}{Dima}}{#1}}}
\newcommand{\ret}{\mathsf{ret}}
\newcommand{\new}{\mathsf{new}}
\newcommand{\hnew}{h_\mathsf{new}}
\newcommand{\old}{\mathsf{old}}
\newcommand{\op}{\mathsf{op}}
\newcommand{\verifier}{\mathcal{V}}
\newcommand{\prover}{\mathcal{P}}
\newcommand{\key}{\mathsf{key}}
\newcommand{\nextkey}{\mathsf{nextKey}}
\newcommand{\node}{\mathsf{t}}
\newcommand{\parent}{\mathsf{p}}
\newcommand{\leaf}{\mathsf{f}}
\newcommand{\vl}{\mathsf{value}}
\newcommand{\balance}{\mathsf{balance}}
\newcommand{\lft}{\mathsf{left}}
\newcommand{\rgt}{\mathsf{right}}
\newcommand{\lbl}{\mathsf{label}}
\newcommand{\direction}{\mathsf{d}}
\newcommand{\oppositedirection}{\bar{\mathsf{d}}}
\newcommand{\found}{\mathsf{found}}
\newcommand{\mypar}[1]{\smallskip\noindent\textbf{#1.}\ \ \ }
\newcommand{\ignore}[1]{}
\newcommand{\langname}{ErgoTree\xspace}
\newcommand{\corelang}{$\lst{Core-}\lambda$\xspace}
\newcommand{\lst}[1]{\text{\lstinline[basicstyle={\ttfamily}]$#1$}}
\newcommand{\andnode}{\ensuremath{\mathsf{AND}}}
\newcommand{\ornode}{\ensuremath{\mathsf{OR}}}
\newcommand{\tnode}{\ensuremath{\mathsf{THRESHOLD}}}
\newcommand{\GF}{\ensuremath{\mathrm{GF}}}
\newcommand{\ASDag}{ErgoTree\xspace}
\newcommand{\I}[1]{\mathit{#1}}
\newcommand{\B}[1]{\mathbf{#1}}
\newcommand{\PA}[1]{\I{PA}\langle\I{#1}\rangle}
\newcommand{\NA}[1]{\I{NA}\langle\I{#1}\rangle}
\newcommand{\nlindent}[1][0.2cm]{\newline\hangindent=#1}
\newcommand{\MU}[1]{\mu\B{\alpha}.\B{#1}}
\newcommand{\Monoid}[1]{\I{Monoid}\TY{#1}}
%\newcommand{\indentline}{\hangindent=0.7cm}
\newcommand{\tick}{\checkmark}
\newcommand{\Left}[3]{\text{\lst{l}}[#1,#2]\cdot #3}
\newcommand{\Right}[3]{\text{\lst{r}}[#1,#2]\cdot #3}
\newcommand{\SelectField}[2]{\text{\lst{Field}}(#1, #2)}
\newcommand{\Fst}[1]{\text{\lst{fst}}~#1}
\newcommand{\Snd}[1]{\text{\lst{snd}}~#1}
% \newcommand{\Fst}[1]{$#1$\lst{.fst}}
% \newcommand{\Snd}[1]{$#1$\lst{.snd}}
\newcommand{\Ctx}{\mathcal{E}}
\newcommand{\Apply}[2]{#1\langle#2\rangle}
\newcommand{\RCtx}{\mathcal{R}}
\newcommand{\RMatch}[1]{(#1 :: \mathcal{R})}
\newcommand{\RCtxEmpty}{\epsilon}
\newcommand{\Frame}{\mathcal{F}}
\newcommand{\Prim}{\delta}
\newcommand{\Sp}{\mathcal{S}}
\newcommand{\Spec}[1]{\mathcal{S}|[#1|]}
\newcommand{\Build}[1]{\mathcal{B}|[#1|]}
\newcommand{\Hole}{\diamondsuit{}}
\newcommand{\Trait}[2]{\text{\lst{trait}}~#1~\{ #2 \}}
\newcommand{\Class}[3]{\text{\lst{class}}~#1~\text{\lst{extends}}~#2 \{ #3 \}}
\newcommand{\MSig}[3]{\text{\lst{def}}~#1(#2): #3}
\newcommand{\CaseOfxxx}[3]{\lst{case} $#1$ \lst{of} \{ $#2 \to #3$ \}}
\newcommand{\LetXXX}[3]{\lst{let} $#1$ \lst{=} $#2$ \lst{in} $#3$}
\newcommand{\LetrecXXX}[3]{\lst{letrec} $#1$ \lst{=} $#2$ \lst{in} $#3$}
\newcommand{\CaseOfXX}[2]{\text{\lst{case}}~#1~\text{\lst{of}}~\{ #2 \}}
\newcommand{\CaseOf}[3]{\text{\lst{case}}~#1~\text{\lst{of}}~\{ #2 \to #3 \}}
\newcommand{\True}{\text{\lst{true}}}
\newcommand{\False}{\text{\lst{false}}}
\newcommand{\IfThenElse}[3]{\text{\lst{if}}~(#1)~#2~\text{\lst{else}}~#3}
\newcommand{\Let}[3]{\text{\lst{let}}~#1~\text{\lst{=}}~#2~\text{\lst{in}}~#3}
\newcommand{\Field}[2]{#1.\text{\lst{#2}}}
\newcommand{\FDecl}[2]{\text{\lst{val}}~#1 : #2}
\newcommand{\New}[1]{\text{\lst{new}}~#1}
\newcommand{\Meth}[2]{\text{\lst{#1.#2}}}
\newcommand{\KSet}{\mathcal{K}}
\newcommand{\VSet}{\mathcal{V}}
\newcommand{\LSet}{\mathcal{L}}
\newcommand{\Low}[1]{\mathcal{L}\llbracket#1\rrbracket}
\newcommand{\Denot}[1]{\llbracket#1\rrbracket}
\newcommand{\PSet}{\mathcal{P}}
\newcommand{\DSet}{\mathcal{D}}
\newcommand{\CSet}{\mathcal{CLS}}
\newcommand{\ISet}{\mathcal{ABS}}
\newcommand{\Ov}[1]{\overline{#1}}
\newcommand{\Un}[1]{\underline{#1}}
\newcommand{\Tup}[1]{(#1)}
\newcommand{\Coll}[1]{\text{\lst{Coll}}(#1)}
\newcommand{\Some}[1]{\text{\lst{Some}}(#1)}
\newcommand{\None}[1]{\text{\lst{None}}[#1]}
\newcommand{\Def}[1]{\llparenthesis#1\rrparenthesis}
\newcommand{\ByDef}{\overset{def}{=}}
\newcommand{\Dag}{\Delta}
\newcommand{\Dom}[1]{\mathcal{D}om~#1}
\newcommand{\TAddr}{Addr}
\newcommand{\TDef}{Def}
\newcommand{\TNode}{Node}
\newcommand{\TDag}{Dag}
\newcommand{\TPair}[2]{#1\times#2}
\newcommand{\TList}[1]{List~#1}
\newcommand{\TMDag}{\TDag * \TAddr}
\newcommand{\Focus}[1]{\langle#1\rangle}
\newcommand{\MDag}[1]{\Delta\Focus{#1}}
\newcommand{\MDagPr}[1]{\Delta'\Focus{#1}}
\newcommand{\Map}[2]{#1 \mapsto #2}
\newcommand{\AddMap}[3]{#1 \cup \{#2 \mapsto #3\}}
\newcommand{\To}{$\mapsto$}
\newcommand{\TP}[2]{#1 \to #2}
\newcommand{\Set}[1]{\{ #1 \}}
\newcommand{\DHole}[2]{d~#1\Hole#2}
\newcommand{\PrimPat}{\Prim~\overline{\beta}}
\newcommand{\DefPat}{d~(\Ov{\beta})}
\newcommand{\Lam}[2]{\lambda#1.#2}
\newcommand{\TyLam}[3]{\lambda(\Ov{#1:#2}).#3}
\newcommand{\LamPat}[2]{\lambda#1.#2}
\newcommand{\DagPat}[2]{\{ \TP{#1}{#2} \}}
\newcommand{\MDagPat}[4]{\{ \TP{#1}{#2} \}^#3\Focus{#4}}
\newcommand{\Inj}[3]{#1\xleftarrow{}#3}
\newcommand{\SE}[3]{SE'|[#1|]~#2~#3}
\newcommand{\SEtop}[2]{SE|[#1|]~#2}
\newcommand{\TEnv}{\Gamma}
\newcommand{\Der}[2]{#1~\vdash~#2}
\newcommand{\DerV}[2]{#1~\vdash^{\text{\lst{v}}}~#2}
\newcommand{\DerC}[2]{#1~\vdash^{\text{\lst{c}}}~#2}
\newcommand{\DerEnv}[1]{\Der{\TEnv}{#1}}
\newcommand{\DerEnvV}[1]{\DerV{\TEnv}{#1}}
\newcommand{\DerEnvC}[1]{\DerC{\TEnv}{#1}}
\newcommand{\Dif}[1]{\partial#1}
\newcommand{\View}[2]{#1\sphericalangle #2}
\newcommand{\This}{\text{\lst{this}}}
\newcommand{\Section}[1]{Section~\ref{section:#1}}
\newcommand{\MaxVlqSize}{VLQ_{max}}
\newcommand{\MaxBits}{Bits_{max}}
\newcommand{\MaxBytes}{Bytes_{max}}
\newcommand{\MaxTypeSize}{Type_{max}}
\newcommand{\MaxDataSize}{Data_{max}}
\newcommand{\MaxBox}{Box_{max}}
\newcommand{\MaxSigmaProp}{SigmaProp_{max}}
\newcommand{\MaxAvlTree}{AvlTree_{max}}
\newcommand{\MaxConstSize}{Const_{max}}
\newcommand{\MaxExprSize}{Expr_{max}}
\newcommand{\MaxErgoTreeSize}{ErgoTree_{max}}
\newtheorem{definition}{Definition}
\setcounter{tocdepth}{2}
\begin{document}
\title{ErgoTree Specification}
\author{authors}
\maketitle
\begin{abstract}
In this document we consider the typed abstract syntax of the language
called \ASDag, which defines the semantics of a condition that protects a closed
box in the Ergo Platform blockchain. The serialized graph is written into a box.
Most Ergo users are unaware of the graph since they develop contracts in higher-level languages such as
ErgoScript. However, for developers of alternative higher-level languages, client libraries and clients, knowledge of the
internals is highly useful. This document provides those internals, namely the following data structures and
algorithms:
\begin{itemize}
\item{} Serialization to a binary format and graph deserialization from the binary form.
\item{} When a graph is considered to be well-formed and when not.
\item{} Type system and typing rules.
\item{} How the graph is transformed into an execution trace.
\item{} How the execution trace is costed.
\item{} How the execution trace is reduced into a Sigma-expression.
\item{} How the Sigma-expression is proven and verified.
\end{itemize}
\end{abstract}
\knote{Please note that the document is intended for general high-skilled tech audience, so avoid describing Scala
classes etc.}
\tableofcontents
\section{Introduction}
\label{sec:intro}
The design space of programming languages is very broad, ranging from
general-purpose languages like C, Java and Python to specialized languages
like SQL, HTML, CSS, etc.
Since Ergo's goal is to provide a platform for contractual money, the choice
of the language for writing contracts is very important.
First of all, the language and contract execution environment should be
\emph{deterministic}. Once created and stored in the Ergo blockchain, a smart
contract should always behave predictably and deterministically; it should depend only on a well-defined data context and nothing else.
As long as the data context doesn't change, any execution of the contract
should return the same value any time it is executed, on any execution
platform, and even on any \emph{compliant} language implementation.
No general-purpose programming language is deterministic because
all of them provide non-deterministic operations. ErgoScript doesn't have
non-deterministic operations.
Second, the language should be \emph{spam-resistant}, meaning it should
facilitate defending against attacks in which malicious contracts overload
network nodes and bring the blockchain down. To fulfill this goal ErgoScript
supports \emph{ahead-of-time cost estimation}, a fast check performed before
contract execution to ensure the evaluation cost is within acceptable
bounds. In general, such cost prediction is not possible; however, if the
language is simple enough (which is the case for ErgoScript) and if operations
are carefully selected, then costing is possible, doesn't require
usage of Gas~\mnote{cite etherium} and allows avoiding related problems~\mnote{cite Gas related problems}.
Third, while being simple, the contract language should be \emph{expressive
enough}. It should be possible to implement most practical scenarios,
which is the case for ErgoScript. In our experience the expressivity of the contract
language goes hand in hand with the design and capabilities of the Ergo blockchain
platform itself, making the whole system \emph{Turing-complete}, as we
demonstrated in \mnote{cite TuringPaper}.
Fourth, simplicity and expressivity are often characteristics of
domain-specific languages~\mnote{cite DSL}. From this perspective ErgoScript
is a DSL for writing smart contracts. The language directly captures the
Ubiquitous Language~\cite{UbiqLang} of the smart-contract
domain, directly manipulating first-class Boxes, Tokens, Zero-Knowledge
Sigma-Propositions etc.; these are the novel features Ergo aims to provide as a platform/service
for custom user applications. The domain-specific nature of ErgoScript also facilitates spam-resistance,
because the operations of ErgoScript are all carefully selected to be \emph{costing
friendly}.
And last, but not least, we wanted our new language to be, nevertheless,
\emph{familiar to most developers}, since we aim to address as large an audience of
programmers as possible with minimal surprise and WTF ratio
\cite{WTFLang}.
The syntax of ErgoScript is inspired by Scala/Kotlin, but in fact it shares a
common subset with Java and C\#, thus if you are proficient in any of these
languages you will be right at home with ErgoScript as well.
Guided by these requirements, we designed ErgoScript as a new yet familiar-looking
language which directly supports all the novel features of the Ergo blockchain.
We also provide a reference implementation of the specification described in this document.
\include{language}
\include{types}
\include{evaluation}
\include{serialization}
\include{graph}
\include{costing}
\bibliographystyle{alpha}
\bibliography{spec.bib}
\appendix
\include{appendix_predeftypes}
\include{appendix_primops}
\include{appendix_ergotree_serialization}
\include{appendix_motivation}
\include{appendix_integer_encoding}
\end{document} | {
"alphanum_fraction": 0.7217559174,
"avg_line_length": 39.8461538462,
"ext": "tex",
"hexsha": "38f90a1bf7166879a25f00230dd03186878bb9e3",
"lang": "TeX",
"max_forks_count": 19,
"max_forks_repo_forks_event_max_datetime": "2022-01-30T02:12:08.000Z",
"max_forks_repo_forks_event_min_datetime": "2017-12-28T11:19:17.000Z",
"max_forks_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "jozanek/sigmastate-interpreter",
"max_forks_repo_path": "docs/spec/spec.tex",
"max_issues_count": 486,
"max_issues_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e",
"max_issues_repo_issues_event_max_datetime": "2022-03-30T11:02:28.000Z",
"max_issues_repo_issues_event_min_datetime": "2017-12-08T13:07:23.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "jozanek/sigmastate-interpreter",
"max_issues_repo_path": "docs/spec/spec.tex",
"max_line_length": 133,
"max_stars_count": 41,
"max_stars_repo_head_hexsha": "251784a9f7c1b325c4859fe256c9fe3862fffe4e",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "jozanek/sigmastate-interpreter",
"max_stars_repo_path": "docs/spec/spec.tex",
"max_stars_repo_stars_event_max_datetime": "2022-02-23T19:27:50.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-21T13:18:44.000Z",
"num_tokens": 3886,
"size": 11914
} |
%
% CMPT 454: Database Systems II - A Course Overview
% Section: Concurrency Control
%
% Author: Jeffrey Leung
%
\section{Concurrency Control}
\label{sec:concurrency-control}
\begin{easylist}
& Concurrency control:
&& If atomicity is maintained and there is no system crash, then concurrency control guarantees consistency and isolation
\end{easylist}
\subsection{Consistency}
\label{subsec:consistency}
\begin{easylist}
& \textbf{Serial schedule:} Consistent schedule where transactions are executed sequentially
&& Different transaction orders may create different DB states, all of which are acceptable
&& Example: Given transactions:
\end{easylist}
\begin{align*}
T1: & \ R1(A) \ W1(A) \\
T2: & \ R2(A) \ W2(A)
\end{align*}
\begin{easylist}
The two possible serial schedules are:
\end{easylist}
\begin{align*}
T1\ T2 = & \ R1(A)\ W1(A)\ R2(A)\ W2(A) \\
T2\ T1 = & \ R2(A)\ W2(A)\ R1(A)\ W1(A)
\end{align*}
\begin{easylist}
& \textbf{(View) Serializable schedule:} Consistent schedule where, for every data item, all transaction read and final write actions have the same effect as a serial schedule of the same transactions
&& Example: Given transactions:
\end{easylist}
\begin{align*}
T1: & \ R1(A) \ W1(A)\ R1(B)\ W1(B) \\
T2: & \ R2(B) \ W2(B)
\end{align*}
\begin{easylist}
A serializable schedule is:
\end{easylist}
\begin{align*}
R1(A)\ W1(A)\ R2(B)\ W2(B)\ R1(B)\ W1(B)
\end{align*}
\begin{easylist}
A non-serializable schedule is:
\end{easylist}
\begin{align*}
R1(A)\ W1(A)\ R1(B)\ R2(B)\ W1(B)\ W2(B)
\end{align*}
\begin{easylist}
&& Explanation: For each table in each transaction, the final write operation is dependent on all previous reads in the same transaction. \\
Therefore, if for every data item in every transaction in a given schedule, the read operations and final write operation have the same effect as in a serial schedule, then the schedules are equivalent.
&& Manual method to determine if a schedule is serializable, given a set of transactions:
&&& Write out all possible serial schedules
&&& For each serial schedule possibility:
&&&& For each table:
&&&&& In the test schedule and the current serial schedule, isolate the read operations and final write operation on the current table
&&&&& Compare the isolated operations
&&&&& If any operation does not have the same effect, then move to the next serial schedule possibility
&&&& If all tables' read and final write operations have the same effect as the current serial schedule, then the schedule is serializable
&&& If no serial schedule possibility matches, then the schedule is not serializable
&& \textbf{Conflict-based testing:} Method to determine if a schedule is serializable through analyzing its conflict actions
&&& Actions from an aborted transaction are ignored when calculating conflicts
&&& \textbf{Conflict actions:} Two subsequent read/write actions (at least one write) from different transactions (in a schedule) which operate on the same data item
&&&& Every conflict action \textbf{induces} the order of transactions to match the order in the two operations
&&&& Types of conflict actions:
&&&&& If the operations are write then read: \\
Given operations $W1\ R2$, \textbf{transaction $T2$ reads the effect of transaction $T1$}
&&&&& If the operations are read then write: \\
Given operations $R1\ W2$, \textbf{transaction $T1$ does not read the effect of transaction $T2$}
&&&&& If the operations are both write: \\
Given operations $W1\ W2$, \textbf{transaction $T2$ overwrites transaction $T1$}
&&& If and only if all conflict actions in a schedule induce the same priority of transactions, then the schedule is serializable
&&& Precedence graph is created by analyzing each conflict action and adding directional edges to transactions in their induced orders
&&&& An acyclic precedence graph can be navigated in any topological order to create an equivalent serial schedule; a small checker sketch in Python follows this list
&&& \textbf{Conflict-serializable schedule:} Consistent serializable schedule which has an acyclic precedence graph
&&&& Subset of serializable schedules
\clearpage
\end{easylist}
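The conflict-based test above is mechanical enough to write down in code. The following Python sketch (our illustration, not part of the original notes; the schedule encoding as a list of (transaction, action, item) triples is an assumption of the example) builds the precedence graph and checks it for cycles.
\begin{verbatim}
# Hypothetical encoding: a schedule is a list of (txn, action, item) triples,
# e.g. [(1, 'R', 'A'), (1, 'W', 'A'), (2, 'R', 'A'), (2, 'W', 'A')].
def precedence_graph(schedule):
    edges = set()
    for i, (ti, ai, xi) in enumerate(schedule):
        for tj, aj, xj in schedule[i + 1:]:
            # conflict: same item, different transactions, at least one write
            if xi == xj and ti != tj and 'W' in (ai, aj):
                edges.add((ti, tj))   # the earlier action induces Ti -> Tj
    return edges

def is_conflict_serializable(schedule):
    edges = precedence_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    # Kahn-style check: the graph is acyclic iff every node can be removed.
    preds = {n: {u for (u, v) in edges if v == n} for n in nodes}
    removed = set()
    while True:
        free = [n for n in nodes
                if n not in removed and not (preds[n] - removed)]
        if not free:
            break
        removed.update(free)
    return removed == nodes

# The non-serializable example above: R1(A) W1(A) R1(B) R2(B) W1(B) W2(B)
s = [(1,'R','A'), (1,'W','A'), (1,'R','B'),
     (2,'R','B'), (1,'W','B'), (2,'W','B')]
print(is_conflict_serializable(s))   # False: edges T1->T2 and T2->T1
\end{verbatim}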
\subsection{Isolation}
\label{subsec:isolation}
\begin{easylist}
& \textbf{Isolation:} Property of a transaction where, if it is aborted, other transactions should not be affected
&& \textbf{Dependent:} Transaction which reads from or writes to a data item, with the result dependent on the write action of another transaction
&&& Conflict actions which create a dependency: $WW$ or $WR$
&&& E.g. If the order of actions is $W1\ R2$, then $T2$ depends on $T1$
& \textbf{Cascading abort:} Abort action which causes a different transaction to also abort
&& Invalidates the isolation property
&& E.g. Given concurrent transactions $T1$ and $T2$:
&&& $W1\ R2\ ABORT(1)$ means that the value before $W1$ must be restored, which invalidates $R2$. Therefore, $T2$ must be aborted in addition to $T1$.
&&& $W1\ W2\ ABORT(1)$ means that the value before $W1$ must be restored, which loses the write by $W2$. Therefore, $T2$ must be aborted in addition to $T1$.
& \textbf{Recoverable schedule:} Isolated schedule where a transaction can only commit after the transactions it depends on have committed
&& Cascading aborts are possible, but can be performed safely
&& \textbf{Avoid Cascading Aborts (ACA) schedule:} Isolated schedule which avoids cascading aborts by waiting to access a resource with an uncommitted write until that write is committed
&&& Subset of recoverable schedules
&&& All dependencies are replaced with waits
&&& The only conflict action allowed is $RW$
&&&& If a schedule has $WW$ or $WR$ conflict actions, then it violates ACA
\clearpage
\end{easylist}
\subsection{Enforcing Consistency and Isolation}
\label{subsec:enforcing-consistency-and-isolation}
\begin{easylist}
& Resource locks:
&& \textbf{Share lock (S-lock):} Lock on a resource during a read operation
&&& Notation: $S(T)$
&& \textbf{Exclusive lock (X-lock):} Lock on a resource during a write operation
&&& Notation: $X(T)$
&&& \textbf{Upgrade:} Action where a lock is changed from share to exclusive, due to a transaction requesting an exclusive lock while holding a share lock on the same resource
&& Unlock notation: $U(T)$
&& Locking by itself does not guarantee serializability
&& \textbf{Lock table:} Hash table of lock entries
&&& \textbf{Lock entry:} Data structure representing a single resource which maintains a list of transactions holding locks and a wait queue (a minimal sketch follows this list)
&&& \textbf{On lock release (OID/XID):} Operation where the next available locks in the wait queue are processed
& \textbf{2PL:} Resource access lock protocol which ensures conflict serializability (consistency)
&& Rules: For each transaction:
&&& Before reading, a share lock must be activated
&&& Before writing, an exclusive lock must be activated
&&& All unlocks must happen after all lock operations
&&&& \textbf{Lock point:} Time in a schedule at which all required locks have been acquired by a transaction
&& Does not enforce isolation
& \textbf{Strict 2PL:} Resource access lock protocol which ensures conflict serializability (consistency) and ACA (isolation)
&& Rules: For each transaction:
&&& Before reading, a share lock must be activated
&&& Before writing, an exclusive lock must be activated
&&& Unlocks only happen when committing
&& E.g. Schedule which violates strict 2PL: $R2(A)\ W1(A)\ W2(A)$
\clearpage
\end{easylist}
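As a concrete illustration of the lock table, here is a minimal Python sketch (invented for these notes, not taken from any particular DBMS). Each lock entry records the transactions currently holding the lock, the lock mode, and a FIFO wait queue; incompatible requests are queued and woken on release, and a share lock held solely by the requester is upgraded in place.
\begin{verbatim}
from collections import deque

class LockEntry:
    """One resource: current holders, current mode ('S' or 'X'), wait queue."""
    def __init__(self):
        self.holders = set()
        self.mode = None
        self.queue = deque()          # pending (txn, mode) requests

class LockTable:
    def __init__(self):
        self.entries = {}             # resource id -> LockEntry

    def request(self, txn, resource, mode):
        e = self.entries.setdefault(resource, LockEntry())
        if not e.holders:                                  # free resource
            e.holders, e.mode = {txn}, mode
            return 'granted'
        if txn in e.holders and (e.mode == 'X' or mode == 'S'):
            return 'granted'                               # lock already held
        if mode == 'X' and e.holders == {txn}:             # upgrade S -> X
            e.mode = 'X'
            return 'granted'
        if mode == 'S' and e.mode == 'S' and not e.queue:  # share with S-holders
            e.holders.add(txn)
            return 'granted'
        e.queue.append((txn, mode))                        # otherwise wait
        return 'waiting'

    def release(self, txn, resource):
        e = self.entries[resource]
        e.holders.discard(txn)
        if not e.holders and e.queue:                      # wake next waiter(s)
            t, m = e.queue.popleft()
            e.holders, e.mode = {t}, m
            while m == 'S' and e.queue and e.queue[0][1] == 'S':
                e.holders.add(e.queue.popleft()[0])
\end{verbatim}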
\subsection{Deadlocks}
\label{subsec:deadlocks}
\begin{easylist}
& \textbf{Deadlock:} Situation where two transactions hold locks while waiting for the other transaction to release their lock
&& Can occur in 2PL or strict 2PL
\end{easylist}
\subsubsection{Detection Strategies}
\label{subsubsec:detection-strategies}
\begin{easylist}
& \textbf{Waits-for-graph:} Deadlock detection strategy where a directed graph is maintained between transactions waiting for another transaction to release a lock, and if a cycle (deadlock) is detected, a transaction in the cycle is aborted
&& Aborted transaction is chosen by criteria such as fewest locks, least work done, farthest from completion, etc.
& \textbf{Timeout:} Deadlock detection strategy where a transaction which exceeds a certain duration is aborted
\end{easylist}
\subsubsection{Prevention Strategies}
\label{subsubsec:prevention-strategies}
\begin{easylist}
& Each transaction is assigned a unique timestamp; waits can only occur in one temporal direction
&& \textbf{Wait-die:} Deadlock prevention strategy where older transactions may wait for newer transactions
&& \textbf{Wound-wait:} Deadlock prevention strategy where newer transactions may wait for older transactions
&& If a wait would go in the disallowed direction, the newer transaction is aborted and restarted after some time with its original timestamp (see the sketch after this list)
\end{easylist}
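The two prevention policies reduce to a small decision rule, sketched below in Python (hypothetical function names; the convention that a smaller timestamp means an older transaction is an assumption of the example).
\begin{verbatim}
# Smaller timestamp = older transaction (assumption for this sketch).
def wait_die(requester_ts, holder_ts):
    # Older transactions may wait for newer ones; newer requesters die.
    return 'wait' if requester_ts < holder_ts else 'abort requester'

def wound_wait(requester_ts, holder_ts):
    # Newer transactions may wait for older ones; an older requester
    # wounds (aborts) the newer holder.
    return 'abort holder' if requester_ts < holder_ts else 'wait'

# Example: T5 (newer) requests a lock held by T3 (older).
print(wait_die(5, 3))     # 'abort requester' -- T5 dies
print(wound_wait(5, 3))   # 'wait'            -- T5 waits for T3
\end{verbatim}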
\subsection{B+ Tree Concurrency}
\label{subsec:b-tree-concurrency}
\begin{easylist}
& B+ tree indices also require locking
& Locking a node also locks the subtree
& \textbf{Crabbing locking:} B+ tree locking strategy which releases ancestor locks as child locks are obtained, `crabbing' down the tree
&& Search: Release a lock on a node when a child node of it is locked
&& Insert/delete: Release a lock on any ancestor nodes when a child node of it is locked, and the child is not full
&&& Reason: Any split below will propagate upwards only as far as the non-full child, then stop
\end{easylist}
\clearpage
\section{Cubic Spline Interpolation}
\begin{defn}
Given a function $f$ defined on $[a,b]$ and nodes $a=x_0<x_1<\cdots<x_n=b$, a cubic spline interpolant $S$ for $f$ is a function that satisfies the following conditions.
\begin{enumerate}
    \item $S_j(x)$ is a cubic polynomial on the subinterval $[x_j,x_{j+1}]$ for $j=0:n-1$.
    \item $S_j(x_j)=f(x_j)$, $S_j(x_{j+1})=f(x_{j+1})$ for $j=0:n-1$.
\item $S'_j(x_{j+1})=S'_{j+1}(x_{j+1})$ for $j=0:n-2$.
\item $S''_j(x_{j+1})=S''_{j+1}(x_{j+1})$ for $j=0:n-2$.
    \item $\begin{cases}
    \text{natural boundary:} & S''(x_0)=S''(x_n)=0, \\
    \text{clamped boundary:} & S'(x_0)=f'(x_0),\ S'(x_n)=f'(x_n).
    \end{cases}$
\end{enumerate}
\end{defn}
\subsection{Construction of a Cubic Spline}
Let $h_j=x_{j+1}-x_j$ (forward):
\begin{enumerate}[(1)]
\item \begin{align*}
S_j(x) &= a_j+b_j(x-x_j)+c_j(x-x_j)^2+d_j(x-x_j)^3\quad\text{for $j=0:n-1$} \\
&\Rightarrow a_{j+1}=S_{j+1}(x_{j+1})=S_j(x_{j+1}) \\
&\hspace{1.3cm}=a_j+b_jh_j+c_jh_j^2+d_jh_j^3=f(x_{j+1})\quad\text{for $j=0:n-1$}
\end{align*}
\item \begin{align*}
S'_j(x) &= b_j+2c_j(x-x_j)+3d_j(x-x_j)^2 \\
&\Rightarrow b_{j+1}=S'_{j+1}(x_{j+1})=S'_j(x_{j+1}) \\
&\hspace{1.3cm}=b_j+2c_jh_j+3d_jh_j^2\quad\text{for $j=0:n-1$}
\end{align*}
\item \begin{align*}
S''_j(x) &= 2c_j+6d_j(x-x_j) \\
&\Rightarrow 2c_{j+1}=S''_{j+1}(x_{j+1})=S''_j(x_{j+1}) \\
&\hspace{1.3cm}=2c_j+6d_jh_j\quad\text{for $j=0:n-1$}
\end{align*}
\end{enumerate}
Combining the relations above, the linear system to be solved is:
\begin{align*}
&Ax=b \\
&\begin{cases}
A=diag([1,2(h_0+h_1),\cdots,2(h_{n-2}+h_{n-1}),1]) \\
\phantom{A=}+diag([0,h_1,\cdots,h_{n-1}],1)+diag([h_0,\cdots,h_{n-2},0],-1) \\
x = [c_0;c_1;\cdots;c_n] \\
b = \left[0;\frac{3}{h_1}(a_2-a_1)-\frac{3}{h_0}(a_1-a_0);\cdots;\frac{3}{h_{n-1}}(a_n-a_{n-1})-\frac{3}{h_{n-2}}(a_{n-1}-a_{n-2});0\right] \\
\end{cases}
\end{align*}
Then $b_j$ and $d_j$ are obtained from
\begin{equation*}
\begin{cases}
b_j &= \frac{1}{h_j}(a_{j+1}-a_j)-\frac{h_j}{3}(2c_j+c_{j+1}) \\
d_j &= \frac{1}{3h_j}(c_{j+1}-c_j)
\end{cases}
\end{equation*}
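The construction above can be carried out numerically. The following Python/NumPy sketch (an illustration, not part of the original notes; a dense solve is used for clarity where a tridiagonal solver would be more efficient) assembles exactly the system $Ax=b$ above and returns the natural cubic spline coefficients.
\begin{verbatim}
import numpy as np

def natural_cubic_spline(x, y):
    """Coefficients (a, b, c, d) of the natural cubic spline
    S_j(t) = a_j + b_j (t-x_j) + c_j (t-x_j)^2 + d_j (t-x_j)^3 on [x_j, x_{j+1}]."""
    x, a = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)                      # h_j = x_{j+1} - x_j

    # Assemble the tridiagonal system A c = rhs from the notes.
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0             # natural boundary rows
    for j in range(1, n):
        A[j, j - 1] = h[j - 1]
        A[j, j]     = 2.0 * (h[j - 1] + h[j])
        A[j, j + 1] = h[j]
        rhs[j] = 3.0 / h[j] * (a[j + 1] - a[j]) \
               - 3.0 / h[j - 1] * (a[j] - a[j - 1])

    c = np.linalg.solve(A, rhs)
    b = (a[1:] - a[:-1]) / h - h * (2.0 * c[:-1] + c[1:]) / 3.0
    d = (c[1:] - c[:-1]) / (3.0 * h)
    return a[:-1], b, c[:-1], d
\end{verbatim}
The clamped case of the next subsection differs only in the first and last rows of $A$ and $b$.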
\subsection{Clamped Splines}
\begin{align*}
&Ax=b \\
&\begin{cases}
A=\begin{pmatrix}
2h_0 & h_0 & 0 & \cdots & 0 \\
h_0 & 2(h_0+h_1) & h_1 & \ddots & \vdots \\
0 & h_1 & 2(h_1+h_2) & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & h_{n-1} \\
0 & \cdots & \cdots & h_{n-1} & 2h_{n-1} \\
\end{pmatrix} \\
\\
x = \begin{pmatrix}c_0 & c_1 & \cdots & c_n \end{pmatrix}^T \\
\\
b = \begin{pmatrix}
\frac{3}{h_0}(a_1-a_0)-3f'(a) \\
\frac{3}{h_1}(a_2-a_1)-\frac{3}{h_0}(a_1-a_0) \\
\vdots \\
\frac{3}{h_{n-1}}(a_n-a_{n-1})-\frac{3}{h_{n-2}}(a_{n-1}-a_{n-2}) \\
3f'(b)-\frac{3}{h_{n-1}}(a_n-a_{n-1})
\end{pmatrix}
\end{cases}
\end{align*}
% !TeX root = ./apxthy.tex
\section{Least Squares Methods}
\subsection{Motivation}
%
We first describe least squares methods in abstract terms. Let
$[a,b]$ be an interval and
$b_1, \dots, b_N \in C([a,b])$ be $N$ linearly independent basis functions
for an approximation space
\[
\AA_N := {\rm span}\b\{ b_1, \dots, b_N \b\}.
\]
Given $w \in C(a,b) \cap L^1(a,b)$ (note the open interval!) we can
define a weighted $L^2$-inner product
\[
\< f, g \>_{L^2_w} := \int_a^b w(x) f(x) g(x)^* \,dx,
\]
with associated norm $\|f\|_{L^2_w} := \<f,f\>_{L^2_w}^{1/2}$. The best
approximation of a function $f \in C([a,b])$ with respect to this weighted norm
is then given by
\begin{equation} \label{eq:lsq:ctslsq}
p_N \in \min_{p \in \AA_N} \b\| f - p \b\|_{L^2_w}^2.
\end{equation}
We call this a continuous least squares problem.
Computationally, we typically need to discretise \eqref{eq:lsq:ctslsq}.
To that end, we choose points $x_1, \dots, x_M \in [a, b]$ and weights
$w_1, \dots, w_M$ and define the discrete inner product
\begin{equation} \label{eq:lsq:disip}
\< f, g \>_{\ell^2_w} := \sum_{m = 1}^M w_m f(x_m) g(x_m)^*
\end{equation}
with associated norm $\|f\|_{\ell^2_w} := \<f,f \>_{\ell^2_w}^{1/2}$. This
gives the discrete least squares problem
\begin{equation}
\label{eq:lsq:dislsq}
p_N \in \min_{p \in \AA_N} \b\| f - p \b\|_{\ell_w^2}.
\end{equation}
This is the typical kind of least squares problem encountered in
real applications.
We distinguish two scenarios:
\begin{enumerate}
\item {\bf User Chooses Data: } In this scenario the ``user'' is given a
function $f$ to be approximated. She may then choose the points $x_m$, weights
$w_m$ and evaluations $f(x_m)$ in order to fine-tune and optimise the fit $p_N$.
For example it is then feasible to start from \eqref{eq:lsq:ctslsq} and design a
discrete LSQ system \eqref{eq:lsq:dislsq} that approximates
\eqref{eq:lsq:ctslsq} in a suitable sense. An arbitrary amount of data $(x_m,
f(x_m))$ may then be generated to ensure a good fit.
\item {\bf Data is provided: } Some data has been collected outside the control
of the person (``user'') designing the fit. Given the data points $(x_m, f(x_m))$
(possibly subject to noise, i.e. $y_m = f(x_m) + \eta_m$ is then provided)
one then needs to choose an appropriate approximation space $\AA_N$,
approximation degree $N$ and weights $w_m$ to ensure a good fit in a sense
dictated by the application.
\end{enumerate}
We will study both scenarios but note that the second one is the more
typical in applications.
\subsection{Solution methods}
%
\label{sec:lsq:soln}
%
We convert \eqref{eq:lsq:dislsq} into a linear algebra problem.
By writing
\[
  Y_m := f(x_m), \sqrt{W} := {\rm diag}(\sqrt{w_1}, \dots, \sqrt{w_M}) \in \R^{M \times M}
\]
and
\[
  p(x_m) = \sum_{n = 1}^N c_n b_n(x_m) = (A c)_m,
  \quad \text{where } A_{mn} = b_n(x_m),
\]
then $A \in \R^{M \times N}$ and we obtain
\[
  \sum_{m = 1}^M w_m | p(x_m) - f(x_m)|^2
= \big\| \sqrt{W} A c - \sqrt{W} Y \big\|^2,
\]
where $\|\cdot\|$ denotes the standard Euclidean norm in $\R^M$.
We write $\tilde{A} := \sqrt{W} A, \tilde{Y} := \sqrt{W} Y$ and
write the least squares functional equivalently as
\[
\Phi(c) := \big\| \sqrt{W} A c - \sqrt{W} Y \big\|^2
= c^T \tilde{A}^T \tilde{A} c - 2 c^T \tilde{A}^T \tilde{Y} + \|\tilde{Y}\|^2.
\]
A minimiser must satisfy $\nabla\Phi(c) = 0$, which gives the linear system
\begin{equation} \label{eq:lsq:normaleqns}
\tilde{A}^T\tilde{A} c = \tilde{A}^T \tilde{Y}.
\end{equation}
These are called the normal equations, which can be solved using
the LU or Cholesky factorisation.
It turns out that they are often (though not always) ill-conditioned.
An alternative approach is therefore to perform the (numerically stable)
{\em thin QR factorisation}
\[
\tilde{A} = Q R,
\]
where $R \in \R^{N \times N}$ is upper triangular and $Q \in \R^{M \times N}$
has ortho-normal columns, i.e., $Q^T Q = I \in \R^{N \times N}$.
With the QR factorisation in hand the normal equations can be rewritten as
\begin{align*}
\tilde{A}^T\tilde{A} c &= \tilde{A}^T \tilde{Y} \\
\Leftrightarrow \qquad
R^T Q^T Q R c &= R^T Q^T \tilde{Y} \\
\Leftrightarrow \qquad
R c &= Q^T \tilde{Y},
\end{align*}
provided that $R$ is invertible (which is equivalent to $A^T A$ being invertible
and to $A$ having full rank). Thus, the solution of the least squares problem
becomes
\begin{equation}
\label{eq:lsq:qr}
R c = Q^T \sqrt{W} Y, \qquad \text{where} \qquad
\sqrt{W} A = QR.
\end{equation}
It is worthwhile comparing the computational cost of the two approaches.
\begin{enumerate}
\item The assembly of the normal equations requires the multiplication
$A^T A$ which requires $O(M N^2)$ operations, followed by the
Cholesky factorisation of $A^T A$ which requires $O(N^3)$ operations.
Thus the cost of solving \eqref{eq:lsq:normaleqns} is $O(M N^2)$.
\item The cost of the QR factorisation in \eqref{eq:lsq:qr} is
$O(M N^2)$ as well, while the inversion of $R$ is only $O(N^2)$ and the
multiplication with $Q^T$ is $O(NM)$.
Thus both algorithms scale like $O(M N^2)$.
\end{enumerate}
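To make the two solution routes concrete, here is a short NumPy sketch (illustrative only; \texttt{basis} is a hypothetical list of vectorised callables $b_1, \dots, b_N$) that assembles $\tilde{A}$ and $\tilde{Y}$ and computes the coefficients both via the normal equations and via the thin QR factorisation.
\begin{verbatim}
import numpy as np

def lsq_fit(x, y, w, basis):
    """Weighted discrete least squares fit of  sum_n c_n b_n(x)  to (x, y)."""
    x, y, w = (np.asarray(v, dtype=float) for v in (x, y, w))
    A = np.column_stack([bn(x) for bn in basis])     # A_{mn} = b_n(x_m)
    sw = np.sqrt(w)
    At, Yt = sw[:, None] * A, sw * y                 # tilde A, tilde Y

    # Route 1: normal equations (O(M N^2), may be ill-conditioned).
    c_normal = np.linalg.solve(At.T @ At, At.T @ Yt)

    # Route 2: thin QR factorisation (also O(M N^2), numerically stable).
    Q, R = np.linalg.qr(At)                          # reduced QR by default
    c_qr = np.linalg.solve(R, Q.T @ Yt)
    return c_normal, c_qr
\end{verbatim}
For well-conditioned problems the two coefficient vectors agree to machine precision; when $\tilde{A}^T \tilde{A}$ is ill-conditioned, the QR route is the safer choice.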
\subsection{Orthogonal Polynomials}
%
\label{sec:lsq:orthpolys}
%
We have so far encountered orthogonal polynomials in the context of the
Chebyshev basis, which arise naturally due to their connection to trigonometric
polynomials. More generally, we can consider orthogonal polynomials with respect
to {\em any} inner product $\<\cdot, \cdot\>$. For simplicity we will continue
to work on the domain $[-1, 1]$. In the context of least squares problems, we
can think of \eqref{eq:lsq:ctslsq} or \eqref{eq:lsq:dislsq} and the inner
continuous or discrete products associated with these least squares problems.
The main result we want to discuss here is that the three-point recursion
\eqref{eq:poly:chebrecursion} for the Chebyshev basis is not special, but that
all families of orthogonal polynomials satisfy such a recursion. That is,
given an inner product $\< \cdot, \cdot\>$ we will construct sequences of
coefficients, $A_k, B_k, C_k$ such that the sequence of polynomials given by
%
\begin{equation} \label{eq:lsq:general_3ptrec}
\phi_{k+1}(x) := (x - B_k) \phi_k(x) - C_k \phi_{k-1}(x)
\end{equation}
%
are orthogonal. By construction, we immediately see that the leading term in
$\phi_{k}$ is $x^k$; hence they also span the space of all polynomials.
Taking the inner product of \eqref{eq:lsq:general_3ptrec} with $\phi_k$ and then
$\phi_{k-1}$ we obtain
%
\begin{align*}
0 &= \< x \phi_k, \phi_k \> - B_k \< \phi_k, \phi_k \>, \\
0 &= \< \phi_k, x \phi_{k-1} \> - C_k \< \phi_{k-1}, \phi_{k-1} \>,
\end{align*}
%
which gives expressions for $B_k, C_k$,
%
\begin{equation} \label{eq:lsq:coeffs_3pt}
\begin{split}
B_k &:= \frac{\< x \phi_k, \phi_k \>}{\< \phi_k, \phi_k \>}, \\
C_k &:= \frac{\< \phi_k, x \phi_{k-1} \>}{\< \phi_{k-1}, \phi_{k-1} \>}.
\end{split}
\end{equation}
%
It is worth noting that this construction is simply the Gram-Schmidt procedure,
but truncated at a three-term recursion rather than the full recursion to
$\phi_0$. In particular, by construction, we have that $\phi_{k+1} \perp \phi_k,
\phi_{k-1}$ and it only remains to show that it is also orthogonal to
$\phi_0, \dots, \phi_{k-2}$. Concretely we obtain the following result.
\begin{proposition}
Suppose that $\< \cdot, \cdot\>$ is an inner product on the space of
polynomials such that the operator $p \mapsto x \cdot p$ is self-adjoint,
(i.e., $\<x p, q\> = \< p, xq\>$ for all polynomials $p, q$). Suppose,
moreover, that $\phi_0 = 1, \phi_1 = x - \<1, x\>$, and that $\phi_k, k \geq 2,$
is given by the three-point recursion \eqref{eq:lsq:general_3ptrec} with
coefficients \eqref{eq:lsq:coeffs_3pt}. Then $\{ \phi_k : k \in \N \}$ is
a basis of the space of polynomials which is orthogonal with respect to
$\< \cdot, \cdot \>$.
\end{proposition}
\begin{proof}
By construction we have that $\phi_1 \perp \phi_0$ and that $\phi_{k+1}
\perp \phi_k, \phi_{k-1}$ for $k \geq 2$. Thus, it only remains to prove
that, for $k \geq 2$, $\phi_{k+1} \perp \phi_j$ for $j = 0, \dots, k-2$.
By induction we may assume that $\< \phi_j, \phi_i \> = 0$ for $i \neq j$
and $i \leq k$. Then, we have
\begin{align*}
\< \phi_{k+1}, \phi_j\>
&=
\< x \phi_k, \phi_j \> - B_k \< \phi_k, \phi_j\> - C_k \< \phi_{k-1}, \phi_j\> \\
&= \< \phi_k, x \phi_j \>,
\end{align*}
where we also used self-adjointness of multiplication by $x$. Since
the degree of $x \phi_j$ is at most $k-1$ and, again by induction, $\phi_k$
is orthogonal to $\phi_0, \dots, \phi_{k-1}$ it follows that
$\< \phi_k, x \phi_j \> = 0$. This completes the proof.
\end{proof}
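The recursion \eqref{eq:lsq:general_3ptrec} with coefficients \eqref{eq:lsq:coeffs_3pt} is straightforward to implement for a discrete inner product of the form \eqref{eq:lsq:disip}. The following NumPy sketch (illustrative only; it returns the values of the monic polynomials at the sample points rather than their coefficients) constructs $\phi_0, \dots, \phi_N$.
\begin{verbatim}
import numpy as np

def orth_polys(xs, ws, N):
    """Monic orthogonal polynomials phi_0..phi_N for the discrete inner
    product <f,g> = sum_m w_m f(x_m) g(x_m), evaluated at the points xs."""
    xs, ws = np.asarray(xs, float), np.asarray(ws, float)
    ip = lambda f, g: np.sum(ws * f * g)
    phis = [np.ones_like(xs)]
    # phi_1 = x - B_0 with B_0 = <x phi_0, phi_0> / <phi_0, phi_0>
    B0 = ip(xs * phis[0], phis[0]) / ip(phis[0], phis[0])
    phis.append(xs - B0)
    for k in range(1, N):
        Bk = ip(xs * phis[k], phis[k]) / ip(phis[k], phis[k])
        Ck = ip(phis[k], xs * phis[k - 1]) / ip(phis[k - 1], phis[k - 1])
        phis.append((xs - Bk) * phis[k] - Ck * phis[k - 1])
    return np.array(phis)       # shape (N+1, M): phi_k at each sample point
\end{verbatim}
Pairwise orthogonality up to round-off can be checked directly with the same inner product, provided there are enough distinct sample points for the discrete inner product to be nondegenerate on $\mathcal{P}_N$.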
\begin{exercise}
Derive a recursion for an {\em orthonormal} basis of the form
\begin{align*}
A_0 \phi_0 &= 1, \\
A_1 \phi_1 &= x - B_1, \\
A_k \phi_k &= (x - B_k) \phi_{k-1} - C_k \phi_{k-2}.
\end{align*}
Make sure to prove that all $A_k$ are non-zero.
\end{exercise}
\begin{exercise}
Consider the inner product
\[
\< p, q\> = \int_{-1}^1 pq + p'q' \, dx;
\]
prove that the multiplication operator $p \mapsto xp$ is not self-adjoint.
If we were to construct a sequence of orthogonal polynomials by the
Gram-Schmidt procedure, would we again obtain a three-term recursion?
{\it Hint: The involved calculations are somewhat boring. You may wish to use
a computer algebra system to explore this question.}
\end{exercise}
\begin{remark}
A discrete inner product of the form \eqref{eq:lsq:disip} is not strictly
an inner product on the space of all polynomials, but depending on the
summation points it may be an inner product on a subspace $\mathcal{P}_N$.
In this case the recursion formula can simply be terminated at degree
$k = N$ to obtain an orthogonal (or orthonormal) basis of $\mathcal{P}_N$.
\end{remark}
\subsection{Accuracy and Stability I: Least Squares and Nodal Interpolation}
%
Consider fitting trigonometric polynomials $\AA_{2N} = \TT_N'$ with equispaced
grid points $x_n = \pi n / N$ and uniform weights $w_n = 1$. Then the
least squares fit
\[
\min \sum_{n = 0}^{2N-1} |f(x_n) - t(x_n)|^2
\]
is equivalent to trigonometric interpolation, for which we have sharp error
estimates that predict a close to optimal rate of approximation.
We could leave it at this, but it is still interesting to observe what happens
to the least squares system. The matrix $A$ is now given by
\[
A_{nk} = e^{ikx_n}
\]
and the entries in the normal equation by
\[
[A^* A]_{kk'} = \sum_{n} e^{-ikx_n} e^{ik' x_n} = 2 N \delta_{kk'}
\]
according to Exercise~\ref{exr:trig:trapezoidal rule}(i).
This is due to the fact that the discrete inner product
\eqref{eq:lsq:disip} is (up to a constant factor) identical to
the $L^2(\TT)$-inner product on the space $\TT_N'$, that is,
\[
\< f, g \>_{\ell^2} = 2N \mint_{-\pi}^\pi f g^* \,dx \qquad
\forall f, g \in \TT_N'.
\]
No QR factorisation is needed and the lsq fit is given by
\[
c = (2 N)^{-1} A^* Y,
\]
where the operation $(2N)^{-1} A^* Y$ can be performed at $O(N \log N)$
computational cost using the FFT.
Analogous observations are of course true for connecting least squares
methods and algebraic polynomials.
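A minimal NumPy sketch of this observation (ours; the ordering of the returned coefficients is one of several possible conventions) computes the fit on the $2N$ equispaced points via the FFT.
\begin{verbatim}
import numpy as np

def trig_lsq_coefficients(f, N):
    """Coefficients c = (2N)^{-1} A^* Y on x_n = pi n / N, n = 0..2N-1,
    computed with the FFT in O(N log N) operations (a sketch)."""
    n = np.arange(2 * N)
    x = np.pi * n / N
    Y = f(x)
    c = np.fft.fft(Y) / (2 * N)     # c_k = (2N)^{-1} sum_n e^{-i k x_n} Y_n
    return np.fft.fftshift(c)       # reorder frequencies to k = -N..N-1
\end{verbatim}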
\subsection{Accuracy and Stability II: Random data}
%
\label{sec:lsq:rand}
%
The situation gets more interesting when we are not allowed to optimise the
points at which to fit the approximant. There is an infinite variety of
different situations that can occur when the provided data is application
driven, which goes far beyond the scope of this module. Here, we will assume
that the points $x_m$ are distributed according to some probability law. That
is, they are random. This is in fact a rather common situation in applications
as well. Note also that we are now in the Case-2 situation where we are given a
fixed amount of data $(x_m, f(x_m))_{m = 1}^M$ and should choose $N$ to ensure the
best possible fit given the data we have. In particular this means that we
should not choose $N$ too large!
Specifically, we shall assume throughout this section that
\begin{equation}
\label{eq:lsw:wm_law}
x_m \sim w \,dx, \qquad \text{are iid}, \quad \text{for } m = 1, \dots, M.
\end{equation}
(independent and identically distributed) and without loss of generality that
$\int_a^b w \,dx = 1$ as this can always be achieved by rescaling. We also
assume that $w \in C(a,b) \cap L^1(a,b)$ as before. In this case, we can
construct a sequence of orthogonal polynomials as in \S~\ref{sec:lsq:orthpolys}
with respect to the $L^2_w$-inner product and we will target best approximation
with respect to the same inner product.
We will discuss two fundamental results due to Cohen, Davenport and Leviatan
\cite{Cohen2013-yj}, but we won't prove them. The first result concerns the
{\em stability} of the normal equations. Specifically, we will show that if we
use an $L^2_w$-orthogonal basis then $A^T A$ will be close to identity (and in
particular invertible) for a sufficiently large number of sample points $x_m$.
\begin{theorem}[Stability] \label{th:lsq:randstab}
Let $\phi_1, \dots, \phi_N$ be $L^2_w$-orthonormal, $A_{mk} := \phi_k(x_m)$,
then
\[
\mathbb{P}\B[ \| A^* A - I \|_{\rm op} > \delta\B]
\leq 2 N \exp\B( - \smfrac{C_\delta M}{K(N)}\B),
\]
where $C_\delta = (1+\delta) \log (1+\delta) - \delta$ and
\[
K(N) = \sup_{x \in [a,b]} \sum_{k = 1}^N |\phi_k(x)|^2.
\]
\end{theorem}
Let us specifically focus on the Chebyshev measure $w(x) =
(1-x^2)^{-1/2}$ and the Chebyshev basis $T_k(x)$ on the interval $[a,b]= [-1,1]$.
Since $|T_k(x)| \leq 1$ it readily follows that $K(N) \leq N$. Moreover, the
recursion formula for $T_k$ implies that $T_k(1) = 1$ for all $k$, hence this
bound is sharp, i.e., $K(N) = N$ in this case.
To make $N \exp( - \smfrac{C_\delta M}{N})$ small, we therefore need
to choose $N \leq \alpha M / \log M$. With this choice,
\[
N \exp( - \smfrac{C_\delta M}{N})
=
N \exp\b( - \alpha C_\delta \log M\b)
\leq
M^{1 -\alpha C_\delta} / \log M
\]
and by choosing $\alpha$ sufficiently large we can ensure that this value tends
to zero as $M \to \infty$ (the case of sufficiently large amounts of data).
Conversely, if $\alpha$ is too small, then $M^{1 -\alpha C_\delta} / \log M \to
\infty$ as $M \to \infty$ which shows that the choice $N \leq \alpha M/\log M$
is sharp. This is a very mild restriction!
The next result we discuss concerns the approximation $p_{NM} = \sum_n c_n
\phi_n$ we obtain by solving the least squares problem $\| f - p_{NM}
\|_{\ell^2(\{x_m\})}^2 \to \min$.
\begin{theorem} \label{th:lsq:randerr}
There exists a constant $c$ such that, if
\[
K(N) \leq \frac{c}{1+r} \frac{M}{\log M},
\]
then,
\[
\mathbb{E}\b[ \|f - p_{NM}\|_{L^2_w}^2 \b]
\leq
(1+\epsilon(M)) \|f - \Pi_N f\|_{L^2_w}^2
+ 8 \|f\|_{L^\infty(a,b)}^2 M^{-r},
\]
where $\epsilon(M) \to 0$ as $M \to \infty$, and $\Pi_N$ denotes the
best-approximation operator with respect to the $L^2_w$-norm.
\end{theorem}
As a first comment, we observe that our restriction $N \leq \alpha M / \log M$
for sufficiently small $\alpha$ re-appears. (In fact, this prerequisite is
required to be able to apply Theorem~\ref{th:lsq:randstab}.)
We can now ask what consequence the choice $N = \alpha M / \log M$ has
on the error. In this case, $\alpha = c / (1+r)$, or equivalently,
$r = c/\alpha - 1$, hence (sacrificing just a log-factor)
\[
M^{-r} \leq N^{-r} = N^{1 - c/\alpha} =: N^{-\alpha'},
\]
where $\alpha' > 0$ provided that $\alpha$ is chosen sufficiently small. In this
case, we can conclude that
\[
\mathbb{E}\b[ \|f - p_{NM}\|_{L^2_w}^2 \b]
\lesssim
\|f - \Pi_N f\|_{L^2_w}^2
+
N^{- \alpha'}
\]
for some $\alpha' > 0$. Thus, for differentiable functions $f$, such a choice is
quasi-optimal.
However, for analytic functions the rate in the error estimate is
reduced. Let us assume that $f \in A(E_\rho)$, then
$\| f - \Pi_N f \|_{L^2_w} \lesssim \rho^{-N}$ hence we must balance the
two contributions
\[
\rho^{-N} + M^{-r}.
\]
We have already seen that $N = \alpha M/\log M$ leads to $\rho^{-N} \ll M^{-r}$,
hence we instead attempt to choose $N = a (M/\log M)^{\alpha}$ for some $0 < \alpha < 1$,
which gives
\[
r = c' (M / \log M)^{1-\alpha}.
\]
Thus, we wish to balance
\begin{align*}
\exp\B[ - (\log \rho) N\B] &+ \exp\B[ - r \log M \B] \\
= \exp\B[ - c'' M^\alpha (\log M)^{-\alpha} \B] &+ \exp\B[ - c M^{1-\alpha} (\log M)^{-\alpha} \B]
\end{align*}
We can now see that the two terms are balanced when $\alpha = 1/2$, that is,
the quasi-optimal choice of $N$ appears to be
\[
N = a \b(M / \log M\b)^{1/2}.
\]
This is also somewhat consistent with the observation that in the $C^j$ case we
need to decrease $\alpha$ for increasing $j$.
That said, we should remember that we have balanced an error estimate and not
the actual error. At this point, it is highly advisable to test these
predictions numerically, which is done in \nblsq, where we see --- for some
limited examples --- that the stability condition $N \lesssim M/\log M$ appears
to be crucial but the stronger requirement $N \lesssim (M/\log M)^{1/2}$ seems
to not be required.
In summary, the foregoing analysis is intended to demonstrate how different
competing contributions to approximation by fitting from data can be balanced at
least in principle but also the limitations of analysis.
\subsection{Exercises}
%
\label{sec:lsq:exercises}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{\label{sec:Amazon}The EC2 Grid Type }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\index{Amazon EC2 Query API}
\index{grid computing!submitting jobs using the EC2 Query API}
\index{grid type!ec2}
Condor jobs may be submitted to clouds supporting
Amazon's Elastic Compute Cloud (EC2) interface.
Amazon's EC2 is an on-line commercial service that allows
the rental of computers by the hour to run computational applications.
It runs virtual machine images that have been uploaded to Amazon's
online storage service (S3 or EBS).
More information about Amazon's EC2 service is available at
\URL{http://aws.amazon.com/ec2}.
The \SubmitCmd{ec2} grid type uses the EC2 Query API,
also called the EC2 REST API.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{\label{sec:Amazon-submit}EC2 Job Submission}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Condor jobs are submitted to Amazon's EC2
with the \SubmitCmd{grid} universe, and setting the
\SubmitCmd{grid\_resource} command to \SubmitCmd{ec2}, followed
by the service's URL. For example,
partial contents of the submit description file may be
\begin{verbatim}
grid_resource = ec2 https://ec2.amazonaws.com/
\end{verbatim}
Since the job is a virtual machine image,
most of the submit description file commands
specifying input or output files are not applicable.
The \SubmitCmd{executable} command is still required,
but its value is ignored.
It can be used to identify different jobs in the output of \Condor{q}.
The VM image for the job must already reside in one of Amazon's storage
services (S3 or EBS) and be registered with EC2.
In the submit description file,
provide the identifier for the image using \SubmitCmd{ec2\_ami\_id}.
This grid type requires access to user authentication information,
in the form of path names to files containing the appropriate keys.
The \SubmitCmd{ec2} grid type has two different authentication methods.
The first authentication method uses the EC2 API's built-in authentication.
Specify the service with an \Expr{http://} or \Expr{https://} URL,
and set the EC2 access key and secret access key as follows:
\begin{verbatim}
ec2_access_key_id = /path/to/access.key
ec2_secret_access_key = /path/to/secret.key
\end{verbatim}
While both pairs of files may be associated with the same account,
the credentials are not the same.
The second authentication method for the EC2 grid type is X.509.
Specify the service with an \Expr{x509://} URL,
even if the URL was given in another form.
Use \SubmitCmd{ec2\_access\_key\_id} to
specify the path to the X.509 public key (certificate),
and \SubmitCmd{ec2\_secret\_access\_key} specifies the path to the X.509
private key as in the following example:
\begin{verbatim}
grid_resource = ec2 x509://service.example
ec2_access_key_id = /path/to/x.509/public.key
ec2_secret_access_key = /path/to/x.509/private.key
\end{verbatim}
If using an X.509 proxy, specify the proxy in both places.
%Condor can use the EC2 API to create an SSH key pair that allows
%secure log in to the virtual machine once it is running.
%If the command
%\SubmitCmd{ec2\_keypair\_file}
%is set in the submit description file,
%Condor will write an SSH private key into the indicated file.
%The key can be used to log into the virtual machine.
%Note that modification will also be needed of the firewall
%rules for the job to incoming SSH connections.
An EC2 service uses a firewall to restrict network access to
the virtual machine instances it runs.
Typically, no incoming connections are allowed.
One can define sets of firewall rules and give them names.
The EC2 API calls these security groups.
If utilized, tell Condor what set of security
groups should be applied to each VM using the
\SubmitCmd{ec2\_security\_groups} submit description file command.
If not provided, Condor uses the security group \SubmitCmd{default}.
The EC2 API allows the choice of different hardware configurations
for instances to run on.
Select which configuration to use for the \SubmitCmd{ec2} grid type
with the \SubmitCmd{ec2\_instance\_type} submit description file command.
Condor provides no default.
Each virtual machine instance can be given up to 16Kbytes of unique data,
accessible by the instance by connecting to a well-known address.
This makes it easy for many instances to share the same VM image,
but perform different work.
This data can be specified to Condor in one of two ways.
First, the data can be provided directly in the submit description file
using the \SubmitCmd{ec2\_user\_data} command.
Second, the data can be
stored in a file, and the file name is specified with the
\SubmitCmd{ec2\_user\_data\_file} submit description file command.
This second option allows the use of binary data.
If both options are used, the two blocks of
data are concatenated, with the data from \SubmitCmd{ec2\_user\_data}
occurring first. Condor performs the base64 encoding that EC2 expects on
the data.
Below is a sample submission:
\begin{verbatim}
###################################
# Note to submit an AMI as a job we need the grid universe
# To submit to a different region update the url passed to the grid_resource
# e.g. https://ec2.ap-northeast-1.amazonaws.com
# Note: The ami *must* be present in that region
# For more details see:
# http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/Welcome.html
Universe = grid
grid_resource = ec2 https://ec2.amazonaws.com/
# Executable in this context is just a label for the job
Executable = ec2_test_job
# log for job run
Log=$(cluster).ec2.log
###################################
# The AMI ID used
ec2_ami_id = ami-MyAmiId0
ec2_instance_type = m1.small
###################################
# User data input for the instance (optional)
# ec2_user_data = Hello EC2!
# ec2_user_data_file = /path/to/datafile
###################################
# Required credentials used to access EC2 (must be full paths)
ec2_access_key_id = /home/user/your_ec2.aid
ec2_secret_access_key = /home/user/your_ec2.key
###################################
# Location to store instance's SSH keypair (optional)
ec2_keypair_file = /home/user/ec2_gend_key.pem
###################################
# Pre-allocated elastic ip for the instance (optional)
# ec2_elastic_ip = your.elastic.ip.addr
###################################
# Security group for the instance (optional, default if not provided)
ec2_security_groups = sg-MySecGroup
###################################
# VPC subnet in which to launch the instance (optional)
# ec2_vpc_subnet = subnet-a1bc23d4
###################################
# Param to attach to an ebs volume volume:/drive_loc (optional)
# ec2_ebs_volumes = vol-abcde15f:/dev/sdb
###################################
# If a volume is specified, the zone is required
# Note: zone must be within specified or default region
# ec2_availability_zone = us-east-1b
############################
# Adding Tag (name) foo=bar (value) (optional)
# ec2_tag_names = foo
# ec2_tag_foo = bar
queue
\end{verbatim}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{\label{sec:Amazon-config}EC2 Configuration Variables}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The \SubmitCmd{ec2} grid type requires these configuration variables
to be set in the Condor configuration file:
\footnotesize
\begin{verbatim}
EC2_GAHP = $(SBIN)/ec2_gahp
EC2_GAHP_LOG = /tmp/EC2GahpLog.$(USERNAME)
\end{verbatim}
\normalsize
The \SubmitCmd{ec2} grid type does not presently permit the explicit use
of an HTTP proxy.
\section{Conclusion}\label{sec:conclusion}
This work presents a novel approach to investigating a healthcare population
that encompasses the topics of segmentation analysis, queuing models, and the
recovery of queuing parameters from incomplete data. This is done despite common
limitations in operational research with regard to the availability of
fine-grained data, and this work only uses administrative hospital spell data
from patients presenting with COPD from the Cwm Taf Morgannwg UHB.\
By considering a variety of attributes present in the data, and engineering
some, an effective clustering of the spell population is identified that
successfully feeds into a multi-class, \(M/M/c\) queue to model a hypothetical
COPD ward. With this model, a number of insights are gained by investigating
purposeful changes in the parameters of the model that have the potential to
inform actual public health policy.
In particular, since neither the resource capacity of the system nor the clinical
processes of the spells are evident in the data, service times and resource
levels are not available. However, length of stay is. Using what is available,
this work assumes that mean service times can be parameterised using mean
lengths of stay. By using the Wasserstein distance to compare the distribution
of the simulated lengths of stay data with the observed data, a best performing
parameter set is found via a parameter sweep.
This parameterisation ultimately recovers a surrogate for service times for each
cluster, and a common number of servers to emulate resource availability. The
parameterisation itself offers its strengths by being simple and effective.
Despite its simplicity, a good fit to the observed data is found, and --- as is
evident from the closing section of this work --- substantial and useful
insights can be gained into the needs of the population being studied.
This analysis, and the formation of the entire model, in effect, considers all
types of patient arrivals and how they each impact the system in terms of
resource capacity and length of stay. By investigating scenarios involving changes in
both overall patient arrivals and resource capacity, it is clear that there is
no quick solution to be employed from within the hospital to improve COPD
patient spells. The only effective, non-trivial intervention is to improve the
overall health of the patients arriving at the hospital. This is shown by moving
patient arrivals between clusters. In reality, this would correspond to an
external, preventative policy that improves the overall health of COPD patients.
\documentclass[a4paper,UKenglish]{lipics-v2016}
\usepackage{microtype}
\usepackage{bussproofs}
\usepackage{stmaryrd}
\newcommand{\new}{\mathsf{new}}
\newcommand{\for}{\mathrm{for }}
\newcommand{\Ncal}{\mathcal{N}}
\newcommand{\interp}[1]{\llbracket #1 \rrbracket}
\newcommand{\interpp}[1]{\{\!| #1 |\!\}}
\newcommand{\from}{\leftarrow}
\newcommand{\maps}{\colon}
\newcommand{\Th}{\mathrm{Th}}
\newcommand{\Gph}{\mathrm{Gph}}
\newcommand{\FinSet}{\mathrm{FinSet}}
\newcommand{\FPGphCat}{\mathrm{FPGphCat}}
\newcommand{\Set}{\mathrm{Set}}
\newcommand{\Cat}{\mathrm{Cat}}
\newcommand{\Calc}{\mathrm{Calc}}
\newcommand{\Mon}{\mathrm{Mon}}
\newcommand{\op}{\mathrm{op}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\pic}{$\pi$-calculus}
\title{Representing operational semantics with enriched Lawvere theories}
\author[1]{
Michael Stay
}
\author[2]{
L.\ G.\ Meredith
}
\affil[1]{
Pyrofex Corp., Kirkland, WA, USA\\
{\tt [email protected]}
}
\affil[2]{
{RChain Cooperative, Seattle, WA, USA}\\
{\tt [email protected]}
}
\Copyright{Michael Stay, Lucius Gregory Meredith}
\subjclass{F.1.2 Modes of computation, F.3.2 Semantics of Programming Languages, F.4 Mathematical Logic and Formal Languages, D.1.3 Concurrent Programming, D.3.1 Formal Definitions and Theory, D.3.3 Language Constructs and Features}
\keywords{Concurrent combinator, nominal, pi calculus}
%Editor-only macros:: begin (do not touch as author)%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\EventEditors{John Q. Open and Joan R. Acces}
\EventNoEds{2}
\EventLongTitle{42nd Conference on Very Important Topics (CVIT 2016)}
\EventShortTitle{CVIT 2016}
\EventAcronym{CVIT}
\EventYear{2016}
\EventDate{December 24--27, 2016}
\EventLocation{Little Whinging, United Kingdom}
\EventLogo{}
\SeriesVolume{42}
\ArticleNo{23}
% Editor-only macros::end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle
\begin{abstract}
\noindent
Many term calculi, like $\lambda$-calculus or {\pic}, involve
binders for names, and the mathematics of bound variable names is
subtle. Sch\"onfinkel introduced the SKI combinator calculus in
1924 to clarify the role of quantified variables in intuitionistic
logic by eliminating them. Yoshida demonstrated how to
eliminate the bound names coming from the input prefix in the
asynchronous {\pic}, but her combinators still depend on the $\new$
operator to bind names. Recently, Meredith and Stay
showed how to modify Yoshida's combinators by replacing $\new$ and
replication with reflective operators to provide the first
combinator calculus with no bound names into which the asynchronous
{\pic} has a faithful embedding. Here we provide an alternative set
of combinators built from $\mathsf{SKI}$ plus reflection that also
eliminates all nominal phenomena, yet provides a faithful
embedding of a reflective higher-order pi calculus.
We show that with the nominal features effectively
eliminated as syntactic sugar, multisorted Lawvere theories enriched
over graphs suffice to capture the operational semantics of the
calculus.
\end{abstract}
\EnableBpAbbreviations
\section{Introduction}
Many term calculi, like $\lambda$-calculus or {\pic}, involve binders
for names, and the mathematics of bound variable names is subtle.
Sch\"onfinkel introduced the SKI combinator calculus in 1924 to
clarify the role of quantified variables in intuitionistic logic by
eliminating them \cite{finkel}. Yoshida demonstrated how to eliminate
the bound names coming from the input prefix in the asynchronous
{\pic}, but her combinators still depend on the $\new$ operator to
bind names. Curry developed Sch\"onfinkel's ideas much
further. Recently, Meredith and Stay \cite{Rhocomb} showed how to
modify Yoshida's combinators by replacing $\new$ and replication with
reflective operators to provide the first combinator calculus with no
bound names into which the asynchronous {\pic} has a faithful
embedding.
Here we provide an alternative set of combinators built
from $\mathsf{SKI}$ plus reflection that also eliminates all nominal
phenomena, yet provides a faithful embedding of a reflective
higher-order pi calculus.
The recent work by Jamie Gabbay and Andrew Pitts
\cite{DBLP:journals/fac/GabbayP02} and others
\cite{DBLP:journals/jcss/Clouston14} on nominal set theory has put the
study of bound names and substitution on a much nicer foundation, at
the cost of an increase in the complexity of the semantic
framework. It interprets nominal phenomena in terms of atoms in
Fraenkl-Mostowski set theory. Clouston's work in particular makes
evident the additional machinery needed to interpret nominal phenomena
as Lawvere theories. On the other hand, with the nominal features
effectively eliminated as syntactic sugar, we show that multisorted
Lawvere theories enriched over graphs suffice to capture the
operational semantics of the calculus.
\section{Previous work}
There is a long history and an enormous body of work on modeling term rewriting and operational semantics with various notions of category enriched over category-like structures; we only have room here for a sampling. L\"uth and Ghani \cite{DBLP:conf/ctcs/LuethG97} use poset-enriched categories to study the modularity of strong normalization. One approach to nominality is the one we mentioned in the introduction; a different approach deals with nominal issues by allowing ``funtion types'' in the signature: Seely \cite{DBLP:conf/lics/Seely87}
% R. A. G. Seely, Modeling computations: a 2-categorical framework, in Proc. Symp. Logic Comp. Sci. 1987, Computer Society of the IEEE, pp. 65–71. Also available at http://www.math.mcgill.ca/rags/WkAdj/LICS.pdf.
suggested using 2-categories for modeling the denotational semantics of lambda calculus in Scott domains to capture the adjunction between $\beta$ reduction and $\eta$ conversion; Hilken \cite{DBLP:journals/tcs/Hilken96}
% B. Hilken, Towards a proof theory of rewriting: the simply-typed 2λ- calculus, Theor. Comp. Sci. 170 (1996), 407–444. Also available at http://math.ucr.edu/home/baez/qg-winter2007/hilken_2lambda_calculus.ps
expands Seely's work by exploring the proof theory using categorical logic; and Hirschowitz \cite{DBLP:journals/corr/Hirschowitz13}
% Tom Hirschowitz, Cartesian closed 2-categories and permutation equivalence in higher-order rewriting, Logical Methods in Computer Science, IfCoLog (Interna- tional Federation of Computational Logic), 9 (3) (2013), 10–28. Also available at https://hal.archives-ouvertes.fr/hal-00540205/file/macy.pdf
generalizes algebraic signatures to cartesian closed 2-signatures. A third approach is to model substitution explicitly: Stell \cite{Stell}
% http://www.comp.leeds.ac.uk/jgs/caen.ps
considered sesquicategories for term rewriting; in his system, objects are finite sets of variables, morphisms are substitutions, and 2-morphisms are roughly rewrite rules.
\section{Gph-enriched categories}
Here we review some standard definitions and results in enriched category theory; see \cite{CIS-335497}, \cite{Power99EnrichedLawvereTheories}, \cite{DBLP:journals/acs/LackR11}, and \cite{Trimble} for more details.
A {\bf directed multigraph with self loops}, hereafter {\bf graph}, consists of a set $E$ of edges, a set $V$ of vertices, two functions $s,t\maps E \to V$ picking out the source and target of each edge, and a function $a\maps V \to E$ such that $s\circ a$ and $t \circ a$ are both the identity on $V$---that is, $a$ equips each vertex in $V$ with a chosen self loop. There are no constraints on $E, V, s,$ or $t$, so a graph may have infinitely many vertices and infinitely many edges between any pair of vertices. A {\bf graph homomorphism} from $(E, V, s, t, a)$ to $(E', V', s', t', a')$ is a pair of functions $(\epsilon\maps E \to E', \upsilon\maps V \to V')$ such that $\upsilon\circ s = s' \circ \epsilon$ and $\upsilon\circ t = t' \circ \epsilon$. {\bf Gph} is the category of graphs and graph homomorphisms. Gph has finite products: the terminal graph is the graph with one vertex and one loop, while the product of two graphs $(E, V, s, t, a) \times (E', V', s', t', a')$ is $(E \times E', V \times V', s \times s', t\times t', a \times a').$
A {\bf Gph-enriched category} consists of
\begin{itemize}
\item a set of objects;
\item for each pair of objects $x, y,$ a graph $\hom(x,y);$
\item for each triple of objects $x, y, z,$ a composition graph homomorphism $\circ\maps \hom(y, z) \times \hom(x, y) \to \hom(x, z);$ and
\item for each object $x,$ a vertex of $\hom(x, x),$ the identity on $x,$
\end{itemize}
such that composition is associative, and composition and the identity obey the unit laws. A Gph-enriched category has finite products if the underlying category does.
Any category is trivially Gph-enrichable by treating the elements of the hom sets as vertices and adjoining a self loop to each vertex. The category Gph is nontrivially Gph-enriched: Gph is a topos, and therefore cartesian closed, and therefore enriched over itself. Given two graph homomorphisms $F, F'\maps (E, V, s, t, a) \to (E', V', s', t', a'),$ a {\bf graph transformation} assigns to each vertex $v$ in $V$ an edge $e'$ in $E'$ such that $s'(e') = F(v)$ and $t'(e') = F'(v).$ Given any two graphs $G$ and $G',$ there is an exponential graph $G'^G$ whose vertices are graph homomorphisms between them and whose edges are graph transformations.
A {\bf Gph-enriched functor} between two Gph-enriched categories $C, D$ is a functor between the underlying categories such that the graph structure on each hom set is preserved, {\em i.e.} the functions between hom sets are graph homomorphisms between the hom graphs.
Let $S$ be a finite set, $\FinSet$ be a skeleton of the category of finite sets and functions between them, and $\FinSet/S$ be the category of functions into $S$ and commuting triangles. A {\bf multisorted Gph-enriched Lawvere theory}, hereafter {\bf Gph-theory} is a Gph-enriched category with finite products Th equipped with a finite set $S$ of {\bf sorts} and a Gph-enriched functor $\theta\maps \FinSet^{\op}/S \to \Th$ that preserves products strictly. Any Gph-theory has an underlying multisorted Lawvere theory given by forgetting the edges of each hom graph.
A {\bf model} of a Gph-theory Th is a Gph-enriched functor from Th to Gph that preserves products up to natural isomorphism. A {\bf homomorphism of models} is a braided Gph-enriched natural transformation between the functors. Let FPGphCat be the 2-category of small Gph-enriched categories with finite products, product-preserving Gph-functors, and braided Gph-natural transformations. The forgetful functor $U\maps \FPGphCat[\Th, \Gph] \to \Gph$ that picks out the underlying graph of a model has a left adjoint that picks out the free model on a graph.
Gph-enriched categories are part of a spectrum of 2-category-like structures. A strict 2-category is a category enriched over Cat with its usual product. Sesquicategories are categories enriched over Cat with the ``funny'' tensor product \cite{Lack2010}; a sesquicategory can be thought of as a 2-category where the interchange law does not hold. A Gph-enriched category can be thought of as a sesquicategory where 2-morphisms (now edges) cannot be composed. Any strict 2-category has an underlying sesquicategory, and any sesquicategory has an underlying Gph-enriched category; these forgetful functors have left adjoints.
\section{Gph-theories as models of computation}
Lawvere theories and their generalizations are categories with infinitely many objects and morphisms, but most theories of interest are finitely generated. A presentation of the underlying multisorted Lawvere theory of a finitely-generated Gph-theory is a signature for a term calculus, consisting of a set of sorts, a set of term constructors, and a set of equations, while the edges in the hom graphs of the theory encode the reduction relation.
Here is a presentation of the SKI combinator calculus as a Gph-theory:
\begin{itemize}
\item one sort $T$, for terms
\item term constructors
\[\begin{array}{rl}
S&:1 \to T\\
K&:1 \to T\\
I&:1 \to T\\
(-\; -)&: T^2 \to T\\
\end{array}\]
\item structural congruence (no equations)
\item rewrites
\[\begin{array}{rl}
\sigma&:(((S\; x)\; y)\; z) \Rightarrow ((x\; z)\; (y\; z))\\
\kappa&:((K\; y)\; z) \Rightarrow y\\
\iota&:(I\; z) \Rightarrow z\\
\end{array}\]
\end{itemize}
where in the rewrites we have used expressions like $((K\; y)\; z)$ as shorthand for
\[ T\times T \xrightarrow{\mbox{\tiny left}^{-1}} 1\times T \times T \xrightarrow{K \times T \times T} T\times T \times T \xrightarrow{(-\;-)\times T} T\times T \xrightarrow{(-\;-)} T. \]
A model $M$ of this Gph-theory in Gph picks out a graph $M(T)$ of terms and rewrites. It picks out three special vertices $S,K,$ and $I$ of $M(T)$; it equips $M(T)$ with a graph homomorphism from $M(T)^2$ to $M(T)$ that says for every pair of vertices $(u,v),$ there is a vertex $(u\;v)$, and similarly for edges; and it equips $M(T)$ with graph transformations asserting the existence of an edge out of a reducible expression to the term it reduces to.
That this Gph-theory captures the operational semantics of the SKI calculus is almost definitional: there is an edge between distinct vertices in the free model on the empty graph if and only if the source vertex is reducible to the target vertex in a single step.
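To make the free model concrete, the following Python sketch (purely illustrative and not part of the formal development; the encoding of terms as nested pairs is an assumption of the example) enumerates the one-step reducts of a term, i.e.\ the edges leaving its vertex, with rewrites allowed in every context.
\begin{verbatim}
# Illustrative encoding (not from the paper): an application (u v) is the
# Python pair (u, v); S, K, I are the strings 'S', 'K', 'I'.

def step(t):
    """All one-step reducts of t: since models send term constructors to
    graph homomorphisms, rewrites may happen in every context."""
    out = []
    if isinstance(t, tuple):
        f, z = t
        if isinstance(f, tuple):
            g, y = f
            if isinstance(g, tuple) and g[0] == 'S':   # (((S x) y) z)
                out.append(((g[1], z), (y, z)))        # => ((x z) (y z))
            if g == 'K':                               # ((K y) z) => y
                out.append(y)
        if f == 'I':                                   # (I z) => z
            out.append(z)
        out += [(f2, z) for f2 in step(f)]             # rewrite inside left
        out += [(f, z2) for z2 in step(z)]             # rewrite inside right
    return out

print(step((('K', 'S'), 'I')))   # ['S'] -- the single kappa edge
\end{verbatim}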
It is straightforward to verify that Gph-theories suffice to capture the operational semantics of any calculus where every context is a reduction context. This restriction on reduction contexts is a consequence of the fact that models map term constructors to graph homomorphisms: given a model $M$, a graph homomorphism $F\maps M(T) \to M(T)$, and an edge $e\maps t_1 \to t_2,$ there is necessarily an edge $F(e)\maps F(t_1) \to F(t_2).$
\section{Gph-theory for SKI with the weak head normal form evaluation strategy}
\label{whnf}
In modern programming languages, many contexts are {\em not} reduction contexts. In Haskell, for instance, there are no reductions under a lambda abstraction: even if $t_1$ reduces to $t_2$ as a program, the term $\backslash x \to t_1$ does not reduce to $\backslash x \to t_2.$
Gph-theories can still capture the operational semantics of calculi with restrictions on reduction contexts by introducing term constructors that explicitly mark the reduction contexts. For example, suppose that we want an evaluation strategy for the SKI calculus that only reduces the leftmost combinator when it has been applied to sufficiently many arguments, {\em i.e.} we want the {\em weak head normal form}; we can accomplish this by introducing a term constructor $R\maps T \to T$ that explicitly marks the reduction contexts. We then add a structural congruence rule for propagating the context and modify the existing reduction rules to apply only to marked contexts.
\begin{itemize}
\item one sort $T$, for terms
\item term constructors
\[\begin{array}{rl}
S&:1 \to T\\
K&:1 \to T\\
I&:1 \to T\\
(-\; -)&: T^2 \to T\\
R&:T \to T\\
\end{array}\]
\item structural congruence
\[\begin{array}{rl}
R(x\; y) &= (Rx\; y)\\
\end{array}\]
\item rewrites
\[\begin{array}{rl}
\sigma&:(((RS\; x)\; y)\; z) \Rightarrow ((Rx\; z)\; (y\; z))\\
\kappa&:((RK\; y)\; z) \Rightarrow Ry\\
\iota&:(RI\; z) \Rightarrow Rz\\
\end{array}\]
\end{itemize}
\begin{theorem}
Let $t$ be a term in which $R$ does not appear. Then $Rt$ reduces to $Rt',$ where $t'$ is the weak head normal form of $t.$
\end{theorem}
\begin{proof}
If we form the term $Rt$ where $t$ contains no uses of $R$, no reductions will ever take place in the right-hand argument of an application: the structural congruence and rewrite rules enforce that the $R$ context can only move to the left term in an application, never the right. The result follows by induction on the number of steps to reach $t'.$
\end{proof}
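For illustration only (not part of the formal development), here is a Python
sketch of this marked-context strategy: the structural congruence is reflected
by keeping the marker on the head of the application spine, and the rewrites
$\sigma, \kappa, \iota$ fire only there.
\begin{verbatim}
# Terms are nested tuples as before; ("R", t) marks a reduction context.
def spine(t):
    """Split t into its head and the stack of arguments it is applied to."""
    args = []
    while isinstance(t, tuple) and t[0] == "app":
        args.append(t[2])
        t = t[1]
    return t, list(reversed(args))

def rebuild(head, args):
    for a in args:
        head = ("app", head, a)
    return head

def whnf(t):
    """Given t containing no R, return ("R", t') with t' the weak head
    normal form of t (may loop if t has none)."""
    head, args = spine(t)      # congruence R(x y) = (Rx y): marker on head
    while True:
        if head == "S" and len(args) >= 3:
            x, y, z, rest = args[0], args[1], args[2], args[3:]
            head, args = spine(("app", ("app", x, z), ("app", y, z)))
            args = args + rest
        elif head == "K" and len(args) >= 2:
            head, new = spine(args[0])
            args = new + args[2:]
        elif head == "I" and len(args) >= 1:
            head, new = spine(args[0])
            args = new + args[1:]
        else:
            return ("R", rebuild(head, args))  # no reduction under arguments

print(whnf(("app", ("app", ("app", "S", "K"), "K"), "I")))
# ('R', 'I'):  R(((S K) K) I) => ((RK I) (K I)) => RI
\end{verbatim}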
\section{Explicit reduction contexts as gas}
The Ethereum \cite{wood2014ethereum} and RChain \cite{RChain} projects are building virtual machines on the blockchain. Both use the concept of a linear resource called ``gas'' (as in gasoline) that is consumed as the virtual machine executes. Gph-theories can capture the operational semantics of a calculus where reduction contexts are consumable, and thus play a role similar to that of gas \cite{DBLP:journals/corr/StayM15}.
\begin{itemize}
\item one sort $T$, for terms
\item term constructors
\[\begin{array}{rl}
S&:1 \to T\\
K&:1 \to T\\
I&:1 \to T\\
(-\; -)&: T^2 \to T\\
R&:T \to T\\
\end{array}\]
\item structural congruence
\[\begin{array}{rl}
R(x\; y) &= (Rx\; y)\\
\end{array}\]
\item rewrites
\[\begin{array}{rl}
\sigma&:(((RS\; x)\; y)\; z) \Rightarrow ((x\; z)\; (y\; z))\\
\kappa&:((RK\; y)\; z) \Rightarrow y\\
\iota&:(RI\; z) \Rightarrow z\\
\end{array}\]
\end{itemize}
\begin{theorem}
Let $t$ be a term in which $R$ does not appear; let $t'$ be the weak head normal form of $t$; let $m$ be the number of steps by which $Rt$ reduces to $Rt'$ in the calculus of section \ref{whnf}; and let $n\ge m$. Then in this calculus, $R^n t$ reduces to $R^{n-m}t'$ in $m$ steps.
\end{theorem}
\begin{proof}
As before, if we form the term $Rt$ where $t$ contains no uses of $R$, no reductions will ever take place in the right-hand argument of an application. Each application of the reduction rules reduces the number of $R$s by one, and structural equivalence preserves the number of $R$s. The result follows by induction on the number of steps to reach $t'.$
\end{proof}
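Again for illustration only, the previous sketch can be adapted so that each
firing of $\sigma, \kappa, \iota$ consumes one marker; starting from $R^n t$,
at most $n$ steps can occur and $n-m$ markers remain, as in the theorem above.
\begin{verbatim}
# Terms are nested tuples as before; `gas` counts the stacked R markers.
def spine(t):
    args = []
    while isinstance(t, tuple) and t[0] == "app":
        args.append(t[2])
        t = t[1]
    return t, list(reversed(args))

def run_with_gas(t, gas):
    """Weak-head reduce t (containing no R); each step burns one marker.
    Returns (steps taken, markers left)."""
    head, args = spine(t)
    steps = 0
    while gas > 0:
        if head == "S" and len(args) >= 3:
            x, y, z, rest = args[0], args[1], args[2], args[3:]
            head, args = spine(("app", ("app", x, z), ("app", y, z)))
            args = args + rest
        elif head == "K" and len(args) >= 2:
            head, new = spine(args[0])
            args = new + args[2:]
        elif head == "I" and len(args) >= 1:
            head, new = spine(args[0])
            args = new + args[1:]
        else:
            break                      # weak head normal form reached
        steps, gas = steps + 1, gas - 1
    return steps, gas

print(run_with_gas(("app", ("app", ("app", "S", "K"), "K"), "I"), 5))
# (2, 3): m = 2 steps, n = 5 markers, n - m = 3 markers remain
\end{verbatim}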
\section{Gph-theory for a pi calculus variant}
\label{rhocomb}
Gph-theories can capture the operational semantics of concurrent calculi as well as serial calculi like SKI above.
Meredith and Radestock \cite{DBLP:journals/entcs/MeredithR05} describe a reflective higher-order variant of pi calculus we call the RHO calculus. Rather than the usual replication and $\new$ operators, they have quoting and unquoting operators. Quoting turns a process into a name and unquoting does the opposite; freshness of names is obtained using a type discipline. They prove that there is a faithful embedding of the monadic asynchronous pi calculus into the RHO calculus.
\subsection{The RHO calculus}
\subsubsection{Syntax}
\[\begin{array}{rlr}
P, Q &::= 0 & \mbox{the stopped process}\\
&| \quad \for(y \from x)P & \mbox{input guarded process} \\
&| \quad x!P & \mbox{output process}\\
&| \quad P \;|\; Q & \mbox{parallel composition}\\
&| \quad *x & \mbox{dereference}\\
&\\
x, y &::= \&P & \mbox{quotation}\\
\end{array}\]
Note that in the original rho-calculus papers the notation was
somewhat different. The quotation and dereference constructions were
originally written $\ulcorner P \urcorner$ and $\urcorner x \ulcorner$,
respectively. Here we adopt a more programmer-friendly style, employing
the $\&$ and $*$ of the $\mathsf{C}$ programming language for reference
(quotation) and dereference, respectively. Input guards, which were
written with a whimper $?$ in more traditional process-calculus style,
are written here in the for-comprehension style adopted by languages
such as $\mathsf{Scala}$; e.g. $x?(y)P$ is written
here as $\mathsf{for}( y \from x )P$.
\subsubsection{Free and bound names}
\[\begin{array}{rl}
FN(0) &= \emptyset \\
FN(\for(y \from x)P) &= \{x\}\cup (FN(P)\backslash \{y\}) \\
FN(x!P) &= \{x\}\cup FN(P) \\
\end{array}\quad\quad
\begin{array}{rl}
FN(P|Q) &= FN(P)\cup FN(Q) \\
FN(*x) &= \{x\}
\end{array}\]
\subsubsection{Structural congruence}
Structural (process) congruence is the smallest congruence $\equiv$ containing $\alpha$-equivalence and making $(|, 0)$ into a commutative monoid.
\subsubsection{Name equivalence}
Name equivalence is the smallest equivalence relation $\equiv_N$ on names such that
\begin{center}
\AXC{} \UIC{$\&*x \equiv_N x$} \DP and \AXC{$P \equiv Q$} \UIC{$\&P \equiv_N \&Q$} \DP.
\end{center}
\subsubsection{Substitution}
Syntactic substitution:
\[\begin{array}{rl}
(0)\{\&Q/\&P\} &= 0\\
(\for (y \from x) R)\{\&Q/\&P\} &= \for (z \from (x\{\&Q/\&P\})) (R\{z/y\}\{\&Q/\&P\})\\
(x!R)\{\&Q/\&P\} &= (x\{\&Q/\&P\})!(R\{\&Q/\&P\})\\
(R|S)\{\&Q/\&P\} &= (R\{\&Q/\&P\}) \;|\; (S\{\&Q/\&P\})\\
(*x)\{\&Q/\&P\} &= \left\{\begin{array}{rl}
*\&Q & \mbox{when } x \equiv_N \&Q\\
*x & \mbox{otherwise,}
\end{array}\right.
\end{array}\]
where
\[ x\{\&Q/\&P\} = \left\{\begin{array}{rl}
\&Q & \mbox{if } x\equiv_N \&P \\
x & \mbox{ otherwise}
\end{array}\right.\]
and, in the rule for input, $z$ is chosen to be distinct from $\&P, \&Q,$ the free names in $Q,$ and all the names in $R.$
Semantic substitution, for use in $\alpha$-equivalence:
\[ (*x)\{\&Q/\&P\} = \left\{\begin{array}{rl}
Q & \mbox{when } x \equiv_N \&Q\\
*x & \mbox{otherwise}
\end{array}\right. \]
\subsubsection{Reduction rules}
We use $\to$ to denote single-step reduction.
\begin{center}
\AXC{$x_0 \equiv_N x_1$}
\UIC{$\for(y \from x_1)P \;|\; x_0!Q \quad \to\quad P\{\&Q / y\}$} \DP \quad \quad
\end{center}
\begin{center}
\AXC{$P\to P'$}
\UIC{$P\;|\; Q \quad \to \quad P' \;|\; Q$} \DP
\end{center}
\begin{center}
\AXC{$P\equiv P'$} \AXC{$P' \to Q'$} \AXC{$Q' \equiv Q$}
\TIC{$P\to Q$} \DP
\end{center}
\subsection{RHO combinators}
We can define an embedding $\interp{-}$ of closed RHO calculus terms into a set of RHO combinators. We follow Milner \cite{milner91polyadicpi} in thinking of an input-prefixed process $\for(x \from y)P$ as consisting of two parts: the first names the channel $y$ on which the process is listening, while the second describes the continuation $\lambda x.P$ in terms of an abstracted name. The right hand side of the communication rule, in effect, applies the continuation to the name to be substituted. Since the only bound names in the RHO calculus come from input prefixing, we can completely eliminate bound names by using abstraction elimination on the continuation. Like the weak head normal form SKI calculus above, this combinator calculus uses a linear resource $C$ to reify reduction contexts.
A Gph-theory for the operational semantics of these combinators has:
\begin{itemize}
\item one sort $T$, for terms
\item term constructors
\[\begin{array}{rl}
C &: 1 \to T \\
0 &: 1 \to T \\
| &: 1 \to T \\
\for &: 1 \to T \\
! &: 1 \to T \\
\& &: 1 \to T \\
\end{array}\quad\quad
\begin{array}{rl}
* &: 1 \to T \\
S &: 1 \to T \\
K &: 1 \to T \\
I &: 1 \to T \\
() &: T \times T \to T \\
\end{array}\]
\item structural congruence rules
\[\begin{array}{rll}
((|\; 0)\; P) &= P & \mbox{unit law}\\
((|\; ((|\; P)\; Q))\; R) &= ((|\; P)\; ((|\; Q)\; R)) & \mbox{associativity}\\
((|\; P)\; Q) &= ((|\; Q)\; P) &\mbox{commutativity}\\
\end{array}\]
\item reduction rules
\[\begin{array}{ll}
\sigma\maps (((S\; P)\; Q)\; R) \Rightarrow ((P\; R)\; (Q\; R)) & \mbox{action of }S \\
\kappa\maps ((K\; P)\; Q) \Rightarrow P & \mbox{action of }K\\
\iota\maps (I\; P) \Rightarrow P & \mbox{action of }I\\
\xi\maps ((|\; C)\; ((|\; ((\for\; (\&\; P))\; Q))\; ((!\; (\&\; P))\; R))) \Rightarrow ((|\; C)\; (Q\; (\&\; R))) & \mbox{communication}\\
\epsilon\maps ((|\; C)\;(*\; (\&\; P))) \Rightarrow ((|\; C)\; P) & \mbox{evaluation} \\
\end{array}\]
\end{itemize}
\subsection{Embeddings}
We define an interpretation function $\interp{-}$ from RHO calculus terms into RHO combinators by
\[\begin{array}{rl}
\interp{0} &= 0 \\
\interp{\for(x \from \&P)Q} &= ((\for\; (\&\; \interp{P}))\; \interp{Q}_x)\\
\interp{\&P!Q} &= ((!\; (\&\; \interp{P}))\; \interp{Q}) \\
\interp{P|Q} &= ((|\; \interp{P})\; \interp{Q})\\
\interp{*\&P} &= (*\; (\&\; \interp{P}))
\end{array}\]
where $\interp{-}_x$ eliminates the free name $x:$
\[\begin{array}{rl}
\interp{P}_x &= (K\; \interp{P}) \mbox{ where $x$ is not free in } P \\
\interp{\for(y \from \&P)Q}_x &= ((S\; ((S\; (K\; \for))\; ((S\; (K\; \&))\; \interp{P}_x)))\; \interp{\interp{Q}_y}_x) \\
\interp{\&P!Q}_x &= ((S\; ((S\; (K\; !))\; ((S\; (K\; \&))\; \interp{P}_x)))\; \interp{Q}_x) \\
\interp{P|Q}_x &= ((S\; ((S\; (K\; |))\; \interp{P}_x))\; \interp{Q}_x) \\
\interp{*\&P}_x &= \left\{\begin{array}{ll}
((S\; (K\; *))\; I) & \mbox{when } \&P \equiv_N x\\
((S\; (K\; *))\; ((S\; (K\; \&))\; \interp{P}_x)) & \mbox{otherwise.}
\end{array}\right.
\end{array}\]
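The elimination $\interp{-}_x$ above follows the familiar S/K/I
abstraction-elimination (``bracket abstraction'') pattern, specialised to the
RHO combinator constructors. For illustration only, here is an informal Python
sketch of the generic case (ours, not part of the formal development):
\begin{verbatim}
# Generic S/K/I abstraction elimination on tuple-encoded terms:
# eliminate(x, t) returns a term with no occurrence of the variable x
# such that applying it to x yields a term that reduces to t.
def occurs(x, t):
    return t == x or (isinstance(t, tuple) and t[0] == "app"
                      and (occurs(x, t[1]) or occurs(x, t[2])))

def eliminate(x, t):
    if t == x:
        return "I"                                  # [x]x     = I
    if not occurs(x, t):
        return ("app", "K", t)                      # [x]t     = (K t)
    f, a = t[1], t[2]                               # t = (f a), x free in t
    return ("app", ("app", "S", eliminate(x, f)),   # [x](f a) = ((S [x]f) [x]a)
                   eliminate(x, a))

print(eliminate("x", ("app", "x", "x")))
# ((S I) I): applying it to any term t reduces to (t t)
\end{verbatim}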
Consider the following sorting on RHO combinators:
\[\begin{array}{rl}
C &: W\\
0 &: W\\
| &: W \Rightarrow W \Rightarrow W\\
\for &: N \Rightarrow (N \Rightarrow W) \Rightarrow W\\
! &: N \Rightarrow W \Rightarrow W\\
\end{array}\quad\quad
\begin{array}{rl}
\& &: W \Rightarrow N\\
* &: N \Rightarrow W\\
S &: \forall X,Y,Z.(Z \Rightarrow Y \Rightarrow X) \Rightarrow (Z \Rightarrow Y) \Rightarrow Z \Rightarrow X\\
K &: \forall X,Y.X \Rightarrow Y \Rightarrow X\\
I &: \forall X.X \Rightarrow X\\
\end{array}\]
The left- and right-hand sides of each of the structural congruence and rewrite rules have the sort $W,$ the interpretation of any RHO calculus term has the sort $W,$ and the result of eliminating an abstraction has the sort $N \Rightarrow W.$
We define an interpretation function $\interpp{-}$ from $W$-sorted RHO combinators not containing $C$ into the RHO calculus by
\[\begin{array}{rl}
\interpp{0} &= 0\\
\interpp{((|\; P)\; Q)} &= \interpp{P} \;|\; \interpp{Q}\\
\interpp{((\for\; (\&\; P))\; Q)} &= \for(\&\interpp{R} \from \&\interpp{P})\interpp{(Q\; (\&\; R))}\\
\interpp{((!\; (\&\; P))\; Q)} &= \&\interpp{P}!\interpp{Q}\\
\interpp{(*\; (\&\; P))} &= *\&\interpp{P}\\
\interpp{(((S\; P)\; Q)\; R)} &= \interpp{((P\; R)\; (Q\; R))}\\
\interpp{((K\; P)\; Q)} &= \interpp{P}\\
\interpp{(I\; P)} &= \interpp{P}
\end{array}\]
where $R$ is any $W$-sorted RHO combinator.
Some simple calculation shows that
\begin{theorem}
\label{roundtrip}
$\interpp{\interp{P}}$ is $\alpha$-equivalent to $P$, $Q$ is reducible to $\interp{\interpp{Q}}$ without using the rewrite $\xi,$ and $\interp{\interpp{-}}$ is idempotent.
\end{theorem}
See the appendix for more details.
\subsection{Barbed bisimilarity}
An {\bf observation relation} $\downarrow_\Ncal$ over a set of names $\Ncal$ is the smallest relation satisfying
\begin{center}
\AXC{$y \in \Ncal$} \AXC{$x \equiv_N y$} \BIC{$x!P \downarrow_\Ncal x$} \DP $\quad$ and $\quad$ \AXC{$P \downarrow_\Ncal x \mbox{ or } Q \downarrow_\Ncal x$} \UIC{$P\;|\;Q \downarrow_\Ncal x$} \DP
\end{center}
for the RHO calculus or
\begin{center}
\AXC{$y \in \Ncal$} \AXC{$x \equiv_N y$} \BIC{$((!\; x)\; P) \downarrow_\Ncal x$} \DP $\quad$ and $\quad$ \AXC{$P \downarrow_\Ncal x \mbox{ or } Q \downarrow_\Ncal x$} \UIC{$((|\; P)\; Q) \downarrow_\Ncal x$} \DP.
\end{center}
for the RHO combinators.
We denote eventual reduction by $\to^*$ and write $P \downarrow^*_\Ncal x$ if there exists a process $Q$ such that $P \to^* Q$ and $Q \downarrow_\Ncal x.$
An {\bf $\Ncal$-barbed bisimulation} over a set of names $\Ncal$ is a symmetric binary relation $S_\Ncal$ between agents such that $P \mathop{S_\Ncal} Q$ implies
\begin{enumerate}
\item if $P \to P'$ then $Q \to^* Q'$ and $P' \mathop{S_\Ncal} Q',$ and
\item if $P \downarrow_\Ncal x,$ then $Q \downarrow^*_\Ncal x.$
\end{enumerate}
$P$ is $\Ncal$-barbed bisimilar to $Q,$ written $P \approx Q,$ if $P \mathop{S_\Ncal} Q$ for some $\Ncal$-barbed bisimulation $S_\Ncal.$
\subsection{Faithfulness}
\begin{theorem}
$P \approx_{\mbox{\tiny calc}} Q \iff ((|\; C)\; \interp{P}) \approx_{\mbox{\tiny comb}} ((|\; C)\; \interp{Q})$.
\end{theorem}
\begin{proof}[Proof sketch]
The only occurrence of $C$ on the right is at the topmost context and the rewrite rules preserve the location of $C$, so the only reduction context is the topmost one. The rest follows from the two interpretation functions and theorem \ref{roundtrip}. In particular, while the only reduction rule in the RHO calculus is synchronizing on a name, there are extra reduction rules for the RHO combinators; however, these extra reduction rules never send or receive on a name and never prevent sending or receiving on a name. Therefore, each synchronization in the evaluation of a RHO calculus term corresponds to a synchronization in the corresponding RHO combinator term and some number of reductions of $S,K,I,$ or evaluating a quoted process.
\end{proof}
In fact, we believe a much stronger property than bisimilarity should hold: since $S,K,$ and $I$ are only used for eliminating dummy variables and the $\epsilon$ reduction plays the role of semantic substitution, $\interp{\interpp{-}}$ should pick out a normal form for a RHO combinator. We should get a set of normal-form equivalence classes of $W$-sorted RHO combinators that is isomorphic to the set of $\alpha$-equivalence classes of RHO calculus terms. Then we should get
\[ P \xrightarrow{\mbox{\tiny comm}} P' \quad \iff \quad \interp{P} \xrightarrow{\xi} \interp{P'}\]
and
\[ Q \xrightarrow{\xi} Q' \quad \iff \quad \interpp{Q} \xrightarrow{\mbox{\tiny comm}} \interpp{Q'}, \]
where we now regard the left and right sides as being equivalence classes.
\section{Conclusion and future work}
This paper is part of a pair of papers demonstrating that reflection
provides a powerful technique for treating nominal phenomena as
syntactic sugar, thus paving the way for simpler semantic treatments
of richly featured calculi, such as the {\pic} and other calculi of
concurrency. We illustrated the point by providing faithful semantics
of both the $\lambda$-calculus and the {\pic} in terms of graph-enriched
Lawvere theories. This work may be considered preparatory for a more
elaborate study of logics for concurrency in which the nominal
phenomena have logical status, but may be treated in a technically
simpler fashion.
\section{Appendix: abstraction elimination calculations}
\[\begin{array}{rl}
& ((K\; \interp{P})\; x) \\
= & \interp{P}\\
\\
& (((S\; ((S\; (K\; \for))\; ((S\; (K\; \&))\; \interp{P}_x)))\; \interp{\interp{Q}_y}_x)\; x)\\
=& ((((S\; (K\; \for))\; ((S\; (K\; \&))\; \interp{P}_x))\; x)\; (\interp{\interp{Q}_y}_x\; x))\\
=& ((((K\; \for)\; x)\; (((S\; (K\; \&))\; \interp{P}_x)\; x))\; \interp{Q}_y)\\
=& ((\for\; (((K\; \&)\; x)\; (\interp{P}_x\; x)))\; \interp{Q}_y)\\
=& ((\for\; (\&\; \interp{P}))\; \interp{Q}_y)\\
\\
& (((S\; ((S\; (K\; !))\; ((S\; (K\; \&))\; \interp{P}_x)))\; \interp{Q}_x)\; x)\\
=& (((((S\; (K\; !))\; ((S\; (K\; \&))\; \interp{P}_x)))\; x)\; (\interp{Q}_x\; x))\\
=& ((((K\; !)\; x)\; (((S\; (K\; \&))\; \interp{P}_x)\; x))\; \interp{Q})\\
=& ((!\; (((K\; \&)\; x)\; (\interp{P}_x\; x)))\; \interp{Q})\\
=& ((!\; (\&\; \interp{P}))\; \interp{Q})\\
\\
& (((S\; ((S\; (K\; |))\; \interp{P}_x))\; \interp{Q}_x)\; x)\\
=& ((((S\; (K\; |))\; \interp{P}_x)\; x)\; (\interp{Q}_x\; x))\\
=& ((((K\; |)\; x)\; (\interp{P}_x\; x))\; \interp{Q})\\
=& ((|\; \interp{P})\; \interp{Q})\\
\\
& (((S\; (K\; *))\; I)\; x)\\
=& (((K\; *)\; x)\; (I\; x))\\
=& (*\; x)\\
\\
& (((S\; (K\; *))\; ((S\; (K\; \&))\; \interp{P}_x))\; x)\\
=& (((K\; *)\; x)\; (((S\; (K\; \&))\; \interp{P}_x)\; x))\\
=& (*\; (((K\; \&)\; x)\; (\interp{P}_x\; x)))\\
=& (*\; (\&\; \interp{P}))\\
\end{array}\]
\bibliographystyle{plainurl}
\bibliography{calco}
\end{document}
\section{Release Notes}
\textbf{2016.07.05}
\begin{itemize}
\item Prefer new syntax over \ilcode{Nat \% N} syntax
\end{itemize}
\textbf{2015.10.20}
\begin{itemize}
\item Benchmarks and updates to documentation
\end{itemize}
\textbf{2015.09.29}
\begin{itemize}
\item Add dining cryptographers, dining philosophers, and stop-and-wait examples
\item Allow actions that are not self-disabling when a self-disabling version violates safety
\item Make synthesis feasible for synchronous systems
\item Fix crash when optimizing using MPI
\item Fix \ilcode{-=>} operator when used with random write variables, and change it to automatically randomize unassigned ones
\item Fix \ilcode{-=>} operator to not affect the guard for pure shadow variables
\item New \ilcode{random read:} access to variables for probabilistic stabilization
\item New \ilcode{(future & closed)} keyword for stabilization without enforcing a specific protocol behavior
\end{itemize}
\textbf{2015.04.23}
\begin{itemize}
\item New \ilcode{random write:} access to variables for probabilistic stabilization
\item New \ilcode{(future & future silent)} and \ilcode{(future & future shadow)} keywords for convergence to any subset of the invariant
\item Daisy chain orientation example
\item Can implicitly remove self-loops by using \ilcode{-=>} in place of \ilcode{-->}
\item New \ilcode{min(a,b)} and \ilcode{max(a,b)} functions
\end{itemize}
\textbf{2015.01.16}
\begin{itemize}
\item Introduce \ilcode{active shadow} which can be substituted for \ilcode{shadow} to prevent shadow self-loops
\item Change \ilcode{permit:} semantics to make more intuitive sense
\item More examples and documentation
\end{itemize}
\textbf{2014.12.21}
\begin{itemize}
\item New support for \ilcode{shadow} variables
\item Use .prot file extension
\item MPI version now supports protocol optimization via the \ilflag{-optimize} flag
\item When using puppet variables, enforce a silent protocol with \ilcode{future silent;} line
\end{itemize}
\textbf{2014.09.12}
\begin{itemize}
\item New \ilcode{permit:} keyword to complement \ilcode{forbid:}
\item New \ilcode{(assume & closed)} keyword to restrict initial states
\item New \ilflag{-optimize} flag for finding an optimal protocol (in interleaved steps)
\item New \ilcode{(future & silent)} or \ilcode{(future & shadow)} syntax for invariants (see examples)
\item Putting a \ilflag{-def} before (but not after) \ilflag{-x} in the argument list affects the initial file read and candidate actions
\item More examples!
\item Substantially more quality control and testing, associated bug fixes
\end{itemize}
\textbf{2014.05.24}
\begin{itemize}
\item File reorganization
\item Preserve locally-conjunctive invariants
\end{itemize}
\textbf{2014.04.26}
\begin{itemize}
\item Serial search is now the default mode
\item Improved packaging
\item Improved file reading, still much room to improve
\item Verification in GUI
\end{itemize}
\textbf{2014.01.31}
\begin{itemize}
\item Symmetry can be enforced across sets of variables.
\item GUI gets a state exploration mode.
\item Can now mark actions as forbidden. Synthesis cannot use them for recovery.
\item Improve Promela model output.
\item More testing and bug fixes.
\end{itemize}
\textbf{2013.11.19}
\begin{itemize}
\item Fix output format when multiple variables are used.
\item Add ring orientation example
\end{itemize}
\section{Matrix Multiplication}
Given two matrices $\mathbf{A}\in\mathbb{R}^{m \times p }$ and $\mathbf{B}\in\mathbb{R}^{p \times n}$ with entries $a_{i,j}$ and $b_{i,j}$ in their $i$th row and $j$th column, the product $\mathbf{C} = \mathbf{AB}\in\mathbb{R}^{m \times n}$ has elements $c_{i,j} = \sum_{k = 1}^{p} a_{i,k}b_{k,j}$.
There are two ways to interpret how the rows or columns of $\mathbf{C}$ come about through this operation. The first is that a row or column vector in $\mathbf{C}$ is formed by one of the matrices acting on an individual vector from the other matrix. The second is that a row or a column vector in $\mathbf{C}$ is formed as a linear superposition of the vectors within one of the matrices, where the coefficients of the superposition are given by a row or a column of the other matrix.
Let:
\begin{equation}
\mathbf{A} = \left[
\begin{array}{ccc}
\vert & \vert & \vert \\
a^{(c)}_1 & a^{(c)}_2 & a^{(c)}_3\\
\vert & \vert & \vert \\
\end{array}
\right] =
\left[
\begin{array}{ccc}
- & a^{(r)}_1 & - \\
- & a^{(r)}_2 & - \\
- & a^{(r)}_3 & - \\
\end{array}
\right]
\end{equation}
\begin{equation}
\mathbf{B} = \left[
\begin{array}{ccc}
\vert & \vert & \vert \\
b^{(c)}_1 & b^{(c)}_2 & b^{(c)}_3\\
\vert & \vert & \vert \\
\end{array}
\right] =
\left[
\begin{array}{ccc}
- & b^{(r)}_1 & - \\
- & b^{(r)}_2 & - \\
- & b^{(r)}_3 & - \\
\end{array}
\right]
\end{equation}
\begin{equation}
\mathbf{C} = \left[
\begin{array}{ccc}
\vert & \vert & \vert \\
c^{(c)}_1 & c^{(c)}_2 & c^{(c)}_3\\
\vert & \vert & \vert \\
\end{array}
\right] =\left[
\begin{array}{ccc}
- & c^{(r)}_1 & - \\
- & c^{(r)}_2 & - \\
- & c^{(r)}_3 & - \\
\end{array}
\right]
\end{equation}
be the notation in terms of row and column vectors.
\subsubsection{Elementwise}
$[\mathbf{C}]_{i,j}$ is the dot product of the $i$th row vector of $\mathbf{A}$ and the $j$th column vector of $\mathbf{B}$.
\begin{equation}
c_{i,j} = \left[\begin{array}{ccc} - & a_i^{(r)} & - \end{array}\right] \cdot \left[\begin{array}{c} \vert \\ b_j^{(c)}\\ \vert \end{array}\right]
\end{equation}
\subsubsection{Columns of $\mathbf{C}$}
The $j$th column of $\mathbf{C}$ involves all of $\mathbf{A}$ and only the $j$th column of $\mathbf{B}$.
The $j$th column of $\mathbf{C}$ is the matrix product of $\mathbf{A}$ with the $j$th column of $\mathbf{B}$.
\begin{equation}
\left[
\begin{array}{c}
\vert\\
c^{(c)}_j\\
\vert
\end{array}
\right]=
\left[
\begin{array}{ccc}
- & a^{(r)}_1 & - \\
- & a^{(r)}_2 & - \\
- & a^{(r)}_3 & - \\
\end{array}
\right]
\left[ \begin{array}{c} \vert \\ b^{(c)}_j \\ \vert \end{array}\right]
\end{equation}
The $j$th column of the product $\mathbf{C}$ is a linear superposition of the columns of $\mathbf{A}$, with the coefficients given by the $j$th column of $\mathbf{B}$.
\begin{equation}
\left[
\begin{array}{c}
\vert\\
c^{(c)}_j\\
\vert
\end{array}
\right]=
\begin{array}{ccc}
\left[\begin{array}{c} \vert\\ a^{(c)}_1 \\ \vert \end{array}\right]b^{(c)}_{1,j}
+ \left[\begin{array}{c} \vert\\ a^{(c)}_2 \\ \vert \end{array}\right]b^{(c)}_{2,j}
+ \left[\begin{array}{c} \vert\\ a^{(c)}_3 \\ \vert \end{array}\right]b^{(c)}_{3,j}
\end{array}
\end{equation}
\subsubsection{Rows of $\mathbf{C}$}
The $i$th row of $\mathbf{C}$ involves all of $\mathbf{B}$ and only the $i$th row of $\mathbf{A}$.
The $i$th row of $\mathbf{C}$ is the matrix product of the $i$th row of $\mathbf{A}$ with $\mathbf{B}$, i.e.\ $\mathbf{B}$ acting on the row vector from the right.
\begin{equation}
\left[\begin{array}{ccc} - & c^{(r)}_i & - \end{array}\right] =
\left[\begin{array}{ccc} - & a^{(r)}_i & - \end{array}\right]
\left[
\begin{array}{ccc}
\vert & \vert & \vert \\
b^{(c)}_1 & b^{(c)}_2 & b^{(c)}_3\\
\vert & \vert & \vert \\
\end{array}
\right]
\end{equation}
The $i$th row of $\mathbf{C}$ is a linear superposition of the rows of $\mathbf{B}$, with the coefficients given by the $i$th row of $\mathbf{A}$.
\begin{equation}
\begin{array}{rl}
\left[\begin{array}{ccc} - & c^{(r)}_i & - \end{array}\right] = & a^{(r)}_{i,1} \left[\begin{array}{ccc} - & b^{(r)}_1 & - \end{array}\right]\\
&+ a^{(r)}_{i,2} \left[\begin{array}{ccc} - & b^{(r)}_2 & - \end{array}\right]\\
&+ a^{(r)}_{i,3} \left[\begin{array}{ccc} - & b^{(r)}_3 & - \end{array}\right]
\end{array}
\end{equation}
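These readings can be checked numerically. The following NumPy snippet (an informal check, not part of the notes; the particular matrices are arbitrary) verifies the elementwise, column, and row interpretations:
\begin{verbatim}
import numpy as np

A = np.arange(1, 10).reshape(3, 3).astype(float)
B = np.arange(2, 11).reshape(3, 3).astype(float)
C = A @ B

# c_{ij} is the dot product of the i-th row of A with the j-th column of B
assert np.allclose(C[1, 2], A[1, :] @ B[:, 2])

# the j-th column of C: A acting on the j-th column of B ...
assert np.allclose(C[:, 2], A @ B[:, 2])
# ... equivalently, a superposition of the columns of A weighted by B[:, 2]
assert np.allclose(C[:, 2], sum(B[k, 2] * A[:, k] for k in range(3)))

# the i-th row of C: the i-th row of A acting on B ...
assert np.allclose(C[1, :], A[1, :] @ B)
# ... equivalently, a superposition of the rows of B weighted by A[1, :]
assert np.allclose(C[1, :], sum(A[1, k] * B[k, :] for k in range(3)))

print("all interpretations agree")
\end{verbatim}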
\documentclass[utf8x,xcolor=pdftex,dvipsnames,table]{beamer}
\usetheme{Malmoe} % Now it's a beamer presentation with the lisa theme!
\setbeamertemplate{footline}[page number]
\usecolortheme{beaver}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage[utf8x]{inputenc}
%\logo{\includegraphics[width=.8in]{UdeM_NoirBleu_logo_Marie_crop}}
\usepackage{listings}
\newcommand{\superscript}[1]{\ensuremath{^{\textrm{#1}}}}
\mode<presentation>
\title{Aesara, Pylearn2, libgpuarray Presentation}
\author{%
\footnotesize
Frédéric Bastien, Bart van Merriënboer \newline
Département d'Informatique et de Recherche Opérationnelle \newline
Université de Montréal \newline
Montréal, Canada \newline
\texttt{\{bastienf, vanmerb\}@iro.umontreal.ca} \newline \newline
}
\date{OML Workshop 2014}
\setbeamertemplate{navigation symbols}{}
\begin{document}
\begin{frame}[plain]
\titlepage
\vspace{-5em}
\includegraphics[width=1in]{../hpcs2011_tutorial/pics/lisabook_logo_text_3.png}
\hfill
\includegraphics[width=.8in]{../hpcs2011_tutorial/pics/UdeM_NoirBleu_logo_Marie_crop}
\end{frame}
\section{Introduction}
\begin{frame}{High level}\setcounter{page}{1}
Python <- \{NumPy/SciPy/libgpuarray\} <- Aesara <- Pylearn2
\begin{itemize}
\item Python: OO coding language
\item Numpy: $n$-dimensional array object and scientific computing toolbox
\item SciPy: sparse matrix objects and more scientific computing functionality
\item libgpuarray: GPU $n$-dimensional array object in C for CUDA and OpenCL
\item Aesara: compiler/symbolic graph manipulation
\item Pylearn2: machine learning framework
\end{itemize}
\end{frame}
%% \begin{frame}{Others}
%% \begin{itemize}
%% \item matplotlib: one of the many plotting library
%% \item IPython: Advanced python shell
%% \item IPython notebook: web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document
%% \end{itemize}
%% \end{frame}
\begin{frame}{Python}
\begin{itemize}
\item General-purpose high-level OO interpreted language
\item Emphasizes code readability
\item Comprehensive standard library
\item Dynamic type and memory management
\item Slow execution
\item Easily extensible with C
\item Popular in {\em web development}\ and {\em scientific communities}
\end{itemize}
\end{frame}
\begin{frame}{NumPy/SciPy}
\begin{itemize}
\item Python floats are full-fledged objects on the heap
\begin{itemize}
\item Not suitable for high-performance computing!
\end{itemize}
\item NumPy provides an $n$-dimensional numeric array in Python
\begin{itemize}
\item Perfect for high-performance computing
\item Slices of arrays are views (no copying)
\end{itemize}
\item NumPy provides
\begin{itemize}
\item Elementwise computations
\item Linear algebra, Fourier transforms
\item Pseudorandom number generators (many distributions)
\end{itemize}
\item SciPy provides lots more, including
\begin{itemize}
\item Sparse matrices
\item More linear algebra
\item Solvers and optimization algorithms
\item Matlab-compatible I/O
\item I/O and signal processing for images and audio
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{What's missing?}
\begin{itemize}
\item Non-lazy evaluation (required by Python) hurts performance
\item Bound to the CPU
\item Lacks symbolic or automatic differentiation
\item No automatic speed and stability optimization
\end{itemize}
\end{frame}
%% \begin{frame}{Why scripting for GPUs?}
%% \begin{bf}They complement each other\end{bf}
%% GPUs are everything that high level languages are not
%% \begin{itemize}
%% \item Highly parallel
%% \item Very architecture-sensitive
%% \item Built for maximum FP/memory throughput
%% \item So hard to program that meta-programming is easier
%% \end{itemize}
%% \begin{bf}Best of both worlds:\end{bf} easily scripted code which invokes high-performance GPU kernels.
%% \begin{bf}Aesara C code generation removes overhead\end{bf} of
%% function calls between Python and C by launching many C functions at once.
%% \end{frame}
\begin{frame}{Aesara}
High-level domain-specific language tailored to numeric computation.
\begin{itemize}
\item Syntax as close to NumPy as possible
\item Compiles most common expressions to C for CPU and/or GPU
\item Limited expressivity means more opportunities for optimization
\begin{itemize}
\item No subroutines -> global optimization
\item Strongly typed -> compiles to C
\item Array oriented -> easy parallelism
\item Support for looping and branching in expressions
\end{itemize}
\item Automatic speed and stability optimizations
\item Can reuse other technologies for best performance.
\begin{itemize}
\item BLAS, SciPy, Cython, Numba, PyCUDA, CUDA
\end{itemize}
\item Automatic differentiation and R op
\item Sparse matrices
\end{itemize}
\end{frame}
\begin{frame}{Pylearn2}
Machine Learning library aimed at researchers
\begin{itemize}
\item Built on top of Aesara, for fast execution and use of GPU
\item Easy to try variants of implemented algorithms, and to extend them (using Aesara)
\item Very modular, each component of the library can be used in isolation
\item Experiments can be specified through a YAML config file, or by a Python script
\item Scripts for visualizing weights, plot monitored values
\end{itemize}
\end{frame}
\begin{frame}{libgpuarray}
Goal: A common GPU $n$-dimensional array that can be reused by all projects, support for both CUDA and OpenCL.
\newline \newline
Motivation:
\begin{itemize}
\item Currently there are at least 6 different GPU arrays in Python
\begin{itemize}
\item CudaNdarray (Aesara), GPUArray (pycuda), CUDAMatrix (cudamat), GPUArray (pyopencl), Clyther, Copperhead, ...
\item There are even more if we include other languages.
\end{itemize}
\item They are incompatible
\begin{itemize}
\item None have the same properties and interface
\end{itemize}
\item All of them implement a subset of numpy.ndarray properties
\item This is the new GPU backend on Aesara
\end{itemize}
\end{frame}
\begin{frame}{Goal of the stack}
\begin{center}
\begin{bf}Fast to develop\end{bf}\newline \bigskip
\begin{bf}Fast to run\end{bf}\newline \bigskip
\hspace{-2.5cm}
\includegraphics[width=0.35\textwidth]{road-runner-1.jpg}
\end{center}
\end{frame}
\section{Aesara}
% I think it is a good idea to make explicit the change into a new section -- PL
\begin{frame}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}{Description}
\begin{itemize}
\item Mathematical symbolic expression compiler
\item Expressions mimic NumPy's syntax and semantics
\item Dynamic C/CUDA code generation
\begin{itemize}
\item C/C++, CUDA, OpenCL, PyCUDA, Cython, Numba, \ldots
\end{itemize}
\item Efficient symbolic differentiation
%\begin{itemize}
% \item Derivatives of functions with one or many inputs.
% \item Computation of the Jacobian, Hessian, R and L op.
%\end{itemize}
\item Speed and stability optimizations
\begin{itemize}
\item Gives the right answer for ``$\log (1 + x)$'' even if $x$ is really tiny.
\end{itemize}
\item Extensive unit-testing and self-verification
%\begin{itemize}
% \item Detects and diagnoses many types of errors
%\end{itemize}
\item Works on Linux, OS X and Windows
\item Transparent use of a GPU
\begin{itemize}
\item {\tt float32} only for now (libgpuarray provides much more)
\item Limited support on Windows
\end{itemize}
% \item Statically typed and purely functional
\item Sparse operations (CPU only)
\end{itemize}
\end{frame}
% The following does not work with lstset, for some reason
%\begin{frame}{Simple example}
\begin{frame}[fragile]
\frametitle{Simple example}
\lstset{language=Python,
commentstyle=\itshape\color{blue},
stringstyle=\color{violet},
}
\begin{lstlisting}
import aesara
# declare symbolic variable
a = aesara.tensor.vector("a")
# build symbolic expression
b = a + a ** 10
# compile function
f = aesara.function([a], b)
print(f([0, 1, 2]))
# prints `array([0, 2, 1026])`
\end{lstlisting}
\end{frame}
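% Added illustrative frame (a sketch, not from the original material):
% symbolic differentiation via aesara.grad, in the style of the example above.
% The numeric output comment is approximate and assumes default float dtypes.
\begin{frame}[fragile]
\frametitle{Simple example: symbolic differentiation}
A sketch of the same example extended with \texttt{aesara.grad}:
\lstset{language=Python,
        commentstyle=\itshape\color{blue},
        stringstyle=\color{violet},
       }
\begin{lstlisting}
import aesara
# declare symbolic variable
a = aesara.tensor.vector("a")
# build symbolic expression and its gradient
b = a + a ** 10
g = aesara.grad(b.sum(), a)
# compile function returning value and gradient
f = aesara.function([a], [b, g])
print(f([0, 1, 2]))
# roughly [array([0., 2., 1026.]), array([1., 11., 5121.])]
\end{lstlisting}
\end{frame}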
\begin{frame}{Simple example: graph optimization}
\center
\includegraphics[width=0.35\textwidth]{../hpcs2011_tutorial/pics/f_unoptimized.png}
\hspace{0.1\textwidth}
\includegraphics[width=0.35\textwidth]{../hpcs2011_tutorial/pics/f_optimized.png}
%Symbolic programming = *Paradigm shift*: people need to use it to understand it.
\end{frame}
\begin{frame}{Project status?}
\begin{itemize}
\item Mature: Aesara has been developed and used since January 2008 (6.5 yrs old)
\item Has driven over 100 research papers
\item Good user documentation
\item Active mailing list with participants from outside our lab
\item Core technology for a few Silicon-Valley start-ups
\item Many contributors (some from outside our lab)
\item Used to teach many university classes
\item Has been used for research at Google and Yahoo.
\end{itemize}
Aesara: \url{deeplearning.net/software/aesara/}
Deep Learning Tutorials: \url{deeplearning.net/tutorial/}
\end{frame}
\section{Pylearn2}
\begin{frame}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}{Pylearn2 details}
The core library contains a collection of:
\begin{itemize}
\item Training algorithms (e.g. Stochastic and Batch GD, model-specific rules)
\begin{itemize}
\item Costs, supervised/unsupervised and exact/estimated (e.g. NLL, Score matching, NCE)
\item Monitor, history of (functions of) parameters and hyperparameters on different data sets (training, validation, test)
\item Termination criteria, determine when to stop training
\end{itemize}
\item Training extensions, perform actions throughout the training process (e.g., early stopping)
\item Models (e.g. NNets, ConvNets, RBMs, k-means, PCA, SVMs)
\item Datasets (e.g. MNIST, CIFAR-10) and preprocessors (LCN, ZCA)
\end{itemize}
\end{frame}
\begin{frame}{Pylearn2 details, continued}
\begin{itemize}
\item Data specifications which give semantics to data
\begin{itemize}
\item IndexSpace, 1D integer array e.g.\ for labels
\item VectorSpace, 1D float array e.g.\ for softmax output
\item Conv2DSpace, 3D float32 arrays e.g.\ for color image input
\end{itemize}
\item Allows for automatic conversion when needed e.g.\ labels to one-hot vectors, images to flattened vectors
\item YAML file allows experiments to be conducted without writing code
\end{itemize}
\end{frame}
\begin{frame}{Project status}
\begin{itemize}
\item Has been used for scientific publications, Kaggle competitions, used by many researchers at LISA
\item Still under rapid development, however the API shouldn't break without warning
\item Documentation is incomplete, but quickly improving
\item Active mailing list with participants from outside our lab
\item Core technology for at least one Silicon-Valley start-up
\item Features currently in development:
\begin{itemize}
\item Recurrent neural networks (RNNs), based on the GroundHog framework developed at LISA
\item Better hyperparameter search support, using e.g. Hyperopt
\end{itemize}
\end{itemize}
\end{frame}
%% \begin{frame}[fragile]
%% \frametitle{Simple example}
%% % I know it is not Python, but YAML is not supported by listings
%% % close enough? -- PL
%% \lstset{language=python,
%% commentstyle=\slshape\color{blue},
%% stringstyle=\color{violet},
%% basicstyle=\tiny\ttfamily}
%% \begin{lstlisting}
%% !obj:pylearn2.train.Train {
%% "dataset": !obj:pylearn2.datasets.dense_design_matrix.DenseDesignMatrix &dataset {
%% "X" : !obj:numpy.random.normal { 'size': [5,3] },
%% },
%% "model": !obj:pylearn2.models.autoencoder.DenoisingAutoencoder {
%% "nvis" : 3,
%% "nhid" : 4,
%% "irange" : 0.05, # Interval from which to sample weights
%% "corruptor": !obj:pylearn2.corruption.BinomialCorruptor {
%% "corruption_level": 0.5,
%% },
%% "act_enc": "tanh",
%% "act_dec": null, # Linear activation on the decoder side.
%% },
%% "algorithm": !obj:pylearn2.training_algorithms.sgd.SGD {
%% "learning_rate" : 1e-3,
%% "batch_size" : 5,
%% "monitoring_dataset" : *dataset,
%% "cost" : !obj:pylearn2.costs.autoencoder.MeanSquaredReconstructionError {},
%% "termination_criterion" : !obj:pylearn2.termination_criteria.EpochCounter {
%% "max_epochs": 10,
%% },
%% }
%% }
%% \end{lstlisting}
%% \end{frame}
%% \begin{frame}[fragile]
%% \frametitle{Simple example}
%% \lstset{language=python,
%% commentstyle=\itshape\color{blue},
%% stringstyle=\color{violet},
%% basicstyle=\small
%% }
%% \begin{lstlisting}
%% # Use Pylearn2 to perform a linear transformation
%% # followed by a softmax
%% x = aesara.tensor.vector("x")
%% softmax = pylearn2.models.mlp.Softmax(
%% n_classes=2, layer_name="softmax", irange=0.05
%% )
%% softmax.set_input_space(
%% pylearn2.space.VectorSpace(dim=5)
%% )
%% y = softmax.fprop(x)
%% f = aesara.function([x], y)
%% print f([0.12, 0.12, 0.43, 0.32, 0.96])
%% # prints [0.43, 0.54]
%% \end{lstlisting}
%% \end{frame}
\section{libgpuarray}
\begin{frame}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}{libgpuarray: Design Goals}
\begin{itemize}
\item Have the base object in C to allow collaboration with more projects.
\begin{itemize}
\item We want people from C, C++, Ruby, R, \ldots\ to all use the same base GPU ndarray.
\end{itemize}
\item Be compatible with CUDA and OpenCL.
\item Not too simple (don't support just matrices).
\item Support all dtypes.
\item Allow strided views.
\item But still easy to develop new code that supports only a few memory layouts.
\begin{itemize}
\item This eases the development of new code.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Project status?}
\begin{itemize}
\item Usable directly, but not all implementations are available.
\item Multiple GPUs work.
\item Is the next GPU array container for Aesara and is working.
\begin{itemize}
\item Not all Aesara implementations are available yet.
\item OpenCL is missing more implementations.
\item Multiple GPUs on the way.
\end{itemize}
\item Web site: \url{http://deeplearning.net/software/libgpuarray/}
\end{itemize}
\end{frame}
\section{Conclusion}
\begin{frame}
\tableofcontents[currentsection]
\end{frame}
\begin{frame}{Conclusion}
Aesara/Pylearn2/libgpuarray provide an environment for machine learning that is:
\begin{bf}Fast to develop\end{bf}\newline
\begin{bf}Fast to run\end{bf}\newline
\end{frame}
\begin{frame}{Acknowledgments}
\begin{itemize}
\item All people working or having worked at the LISA lab.
\item All Aesara/Pylearn2 users/contributors
\item Compute Canada, RQCHP, NSERC, and Canada Research Chairs for providing funds or access to compute resources.
\end{itemize}
\end{frame}
\begin{frame}
\begin{center}
\Huge
Questions?
\end{center}
\end{frame}
\end{document}
\section*{Database Design}
\label{sec:db}
%\begin{figure}[H]
% \centering
% \makebox[\textwidth][c]{\includegraphics[width=1\textwidth]{ChauSze_SeniorDesign_FlowMain.PNG}}
% \caption{Diagram of the database design.}
%\end{figure}
\subsection{Employment status information}
\begin{itemize}\setlength{\itemsep}{0pt}
\item {\bfseries \GroupAProf}, \GroupAlg, \GroupAUniversity, \GroupAEmployment, permanent employment
\item {\bfseries \GroupBProf}, \GroupBlg, \GroupBUniversity, \GroupBEmployment, permanent employment
\end{itemize}
\subsection{First-time proposal data}
Not applicable.
\subsection{Composition of the project group}
\begin{itemize}\setlength{\itemsep}{0pt}
\item {\bfseries \GroupAProf}, \GroupAlg, \GroupAUniversity, \GroupAEmployment, permanent employment
\item {\bfseries \GroupBProf}, \GroupBlg, \GroupBUniversity, \GroupBEmployment, permanent employment
\end{itemize}
\chapter{Interfaces} \label{interfaces}
\section{Global Signals} \label{global-signals}
The following common signals are shared between all devices on the APB4
bus.
\begin{longtable}[]{@{}lccl@{}}
\toprule
Port & Size & Direction & Description\tabularnewline
\midrule
\endhead
\texttt{PRESETn} & 1 & Input & Asynchronous active low reset\tabularnewline
\texttt{PCLK} & 1 & Input & Clock Input\tabularnewline
\bottomrule
\caption{Global Signals}
\end{longtable}
\subsection{PRESETn} \label{presetn}
When the active low asynchronous \texttt{PRESETn} input is asserted (`0'), the
APB4 interface is put into its initial reset state.
\subsection{PCLK} \label{pclk}
\texttt{PCLK} is the APB4 interface system clock. All internal logic for the APB4
interface operates at the rising edge of this system clock and APB4 bus
timings are related to the rising edge of \texttt{PCLK}.
\section{Master Interface} \label{master-interface-1}
The APB4 Interface decodes the signaling of an APB4 bus master and
therefore implements a subset of a regular APB4 Slave Interface.
\begin{longtable}[]{@{}lccl@{}}
\toprule
Port & Size & Direction & Description\tabularnewline
\midrule
\endhead
\texttt{MST\_PSEL} & 1 & Input & Peripheral Select\tabularnewline
\texttt{MST\_PADDR} & \texttt{PADDR\_SIZE} & Input & Address Bus\tabularnewline
\texttt{MST\_PRDATA} & \texttt{PDATA\_SIZE} & Output & Read Data Bus\tabularnewline
\texttt{MST\_PREADY} & 1 & Output & Transfer Ready\tabularnewline
\texttt{MST\_PSLVERR} & 1 & Output & Transfer Error Indicator\tabularnewline
\bottomrule
\caption{APB4 Master Interface Ports}
\end{longtable}
\subsection{MST\_PSEL}\label{mst_psel}
The APB4 Master generates PSEL to signal to an attached peripheral that
it is selected and a data transfer is pending. This signal drives the
APB4 Multiplexer \texttt{MST\_PSEL} port and is decoded to select the individual
peripheral by asserting the corresponding \texttt{SLV\_PSEL[n]} output.
\subsection{MST\_PADDR}\label{mst_paddr}
\texttt{MST\_PADDR} is the APB4 address bus. The bus width is defined by the
\texttt{PADDR\_SIZE} parameter and is driven by the APB4 Master.
\subsection{MST\_PRDATA}\label{mst_prdata}
\texttt{MST\_PRDATA} drives the APB4 read data bus. The selected peripheral
drives this bus during read cycles, via the APB4 Multiplexer.
The bus width must be byte-aligned and is defined by the \texttt{PDATA\_SIZE}
parameter.
\subsection{MST\_PREADY}\label{mst_pready}
\texttt{MST\_PREADY} is driven by the selected peripheral via the APB4
Multiplexer. It is used to extend an APB4 transfer.
\subsection{MST\_PSLVERR}\label{mst_pslverr}
\texttt{MST\_PSLVERR} indicates a failed data transfer to the APB4 Master when
asserted (`1') and is driven by the selected peripheral via the APB4
Multiplexer.
\section{Slave Interface}\label{slave-interface}
The Slave Interface provides the following signals \emph{for each}
individual peripheral. The number of peripherals supported, and
therefore instances of the following signals, is controlled by the
\texttt{SLAVES} parameter (see section 0).
\begin{quote}
\textbf{Note:} Each individual port name is referenced by the index `n', where `n' is
an integer value in the range 0 to \texttt{SLAVES-1}. E.g. \texttt{SLV\_PSEL[2]}
This nomenclature is used throughout this datasheet
\end{quote}
\begin{longtable}[]{@{}lccl@{}}
\toprule
Port & Size & Direction & Description\tabularnewline
\midrule
\endhead
\texttt{SLV\_PSEL[n]} & 1 & Output & Peripheral Select\tabularnewline
\texttt{SLV\_PRDATA[n]} & \texttt{PDATA\_SIZE} & Input & Read Data Bus\tabularnewline
\texttt{SLV\_PREADY[n]} & 1 & Input & Transfer Ready Input\tabularnewline
\texttt{SLV\_PSLVERR[n]} & 1 & Input & Transfer Error Indicator\tabularnewline
\texttt{SLV\_ADDR[n]} & \texttt{PADDR\_SIZE} & Input & Peripheral Base Address\tabularnewline
\texttt{SLV\_MASK[n]} & \texttt{PADDR\_SIZE} & Input & Peripheral Address Mask\tabularnewline
\bottomrule
\caption{Slave Interface Ports}
\end{longtable}
\subsection{SLV\_PSEL[n]}\label{slv_pseln}
The APB4 Multiplexer generates \texttt{SLV\_PSEL[n]}, signaling to an
attached peripheral that it is selected and a data transfer is pending.
\subsection{SLV\_PRDATA[n]}\label{slv_prdatan}
\texttt{SLV\_PRDATA[n]} is the APB4 read data bus associated with the
attached peripheral. The peripheral drives this bus during read cycles,
indicated when \texttt{PWRITE} is negated (`0'), and the data is then multiplexed
to the \texttt{MST\_PRDATA} output port.
The bus width must be byte-aligned and is defined by the \texttt{PDATA\_SIZE}
parameter.
\subsection{SLV\_PREADY[n]}\label{slv_preadyn}
\texttt{SLV\_PREADY[n]} is driven by the attached peripheral and multiplexed
to the \texttt{MST\_PREADY }output port. It is used to extend an APB4 transfer.
\subsection{SLV\_PSLVERR[n]}\label{slv_pslverrn}
\texttt{SLV\_PSLVERR[n]} indicates a failed data transfer when asserted
(`1'). As APB4 peripherals are not required to support this signal it must be
tied LOW (`0') when unused.
\subsection{SLV\_ADDR[n] and SLV\_MASK[n]} \label{slv_addrn-and-slv_maskn}
\texttt{SLV\_ADDR[n]} is the base address where the attached peripheral is to
appear in the system memory map. It is bitwise `AND'ed with the
corresponding address mask \texttt{SLV\_MASK[n]} input to define the overall
address range of each peripheral.
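As an informal illustration only (not a normative part of this datasheet), the
decode implied by this masking can be sketched as follows; the exact comparison
shown is an assumption about typical multiplexer behaviour:
\begin{verbatim}
# Informal sketch (assumed typical decode, not a normative definition):
# an access at PADDR selects slave n when the masked address matches the
# masked base address.
def selects(paddr, slv_addr_n, slv_mask_n):
    return (paddr & slv_mask_n) == (slv_addr_n & slv_mask_n)

# Example: a 4 KiB region based at 0x40001000
assert selects(0x40001FFC, 0x40001000, 0xFFFFF000)
assert not selects(0x40002000, 0x40001000, 0xFFFFF000)
\end{verbatim}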
As a consequence, these ports are typically assigned hard-coded values
rather than connected to other logic in the design.
\chapter{Paladin Moves}
\index{Paladin Moves} \index{Paladin} \index{Moves}
\section{Evidence of Faith} \index{Evidence of Faith} \index{Evidence} \index{Faith}
Your +1 forward applies to anything you do based on your knowledge of the spell's effects: defying it, defending against it, using it to your advantage, etc.
\subsection{Interview answers}\label{Interview}
\subsubsection{25 year old single male}
\begin{itemize}
\item Sex: Male
\item Age: 25
\item Education: Vocational
\item Work: Skilled Sailor
\item Household: 3 singles
\end{itemize}
\emph{What do you characterize as food waste, and is it a problem in the household?}
If any kind of food is thrown out, it is food waste according to the participant. Furthermore, he thinks food waste is a problem in the household; he elaborates that this is because some food is sold in bundles that are too big compared with what is needed, and that they therefore buy too much food.
\emph{Is there a difference in the type of food, that is thrown out?}
It is mostly vegetables that are thrown out in the household.
\emph{What factor is in focus, when food is thrown out?}
The participant only looks at the freshness of the food, and does not take the best-before date into account.
\emph{Planning meals and shopping.}
The participant rarely plans more than one day ahead, and does not plan at any specific time during the day. Furthermore, the participant mostly shops for what is wanted on the specific day, and rarely considers whether a meal could be made from what is already in the home. No one specific in the household does the shopping or the cooking, but they try to schedule the shopping around who has the time or who already has to be out.
\emph{What is the most important about your diet?}
Quality is the most important factor, closely followed by organic food; they try to avoid food waste, but it is not in focus.
\emph{Do you use leftovers?}
The participant always use leftovers, both because he finds it stupid, to throw it out, but also to save some money.
\emph{When do you shop, for how long, how much and how many times?}
Preferably in the morning, because there are less people. The shopping only takes 5-10 minutes and is done 4 - 5 times a week. Breakfast and lunch is often bought for a few days, where the dinner is mostly bought at the day it is needed.
\emph{What affects you in the shopping situation?}
The participant is not affected by sales in general, but do impulsive shopping if groceries are on sale, because they are close to the best before date.
\subsubsection{53 year old married female}
\begin{itemize}
\item Sex: Female
\item Age: 53
\item Education: University
\item Work: Pedagogue
\item Household: 2, one married couple
\end{itemize}
\emph{What do you characterize as food waste, and is it a problem in the household?}
Sales that require you to buy too large quantities, which make you buy too much, are seen as food waste, together with poor use of leftovers. Food waste is not seen as a problem in this household.
\emph{Is there a difference in the type of food that is thrown out?}
Even though food waste is not seen as a problem, it does rarely happen that bread is thrown out; this is because the packages the food is sold in are too big.
\emph{What factor is in focus when food is thrown out?}
The participant only looks at the freshness of the food and does not take the best-before date into account.
\emph{Planning meals and shopping.}
The participant plans by taking food out of the freezer in the morning. Furthermore, extra food is often made, to have easy and quickly prepared food for the next few days, or to have something to bring to work for lunch. What needs to be shopped for is mostly planned in the morning, when the food is taken out of the freezer, and the participant therefore knows what needs to be bought; if something unexpected happens, resulting in an unexpected dinner, shopping after work becomes a necessity. It is mostly the participant who does the shopping, but the other person in the household may be asked to do some shopping.
\emph{What is the most important about your diet?}
The participant wants to avoid food waste as a primary focus; next, the price and the excitement of the meal are taken into consideration.
\emph{Do you use leftovers?}
The participant always uses leftovers, and plans dinners so there will be leftovers, to ease the cooking for the days to come.
\emph{When do you shop, for how long, how much and how many times?}
The participant prefers to shop after work. The shopping only takes about 10 minutes and is done 3-4 times a week. Furthermore, the participant prefers to shop for a few days only, to keep the food fresh. This results in a fridge that is quite empty, and as a result, shopping is often a necessity.
\emph{What affects you in the shopping situation?}
The participant is affected by ``decorated'' sales meant to tempt impulsive shopping. Furthermore, the participant does not like to buy food that is on sale, because it is close to the best-before date.
\subsubsection{75 year old married female}
\begin{itemize}
\item Sex: Female
\item Age: 75
\item Education: Vocational
\item Work: Pensioner
\item Household: 2, one married couple
\end{itemize}
\emph{What do you characterize as food waste, and is it a problem in the household?}
Not using leftovers, and food that is badly stored, which makes it go bad. Food waste is not seen as a problem in the household; if there are small leftovers, the dog will get them.
\emph{Is there a difference in the type of food that is thrown out?}
The participant cannot say what difference there could be, because food is never thrown out.
\emph{What factor is in focus when food is thrown out?}
If food were to be thrown out, freshness would be the only factor.
\emph{Planning meals and shopping.}
The participant likes to shop for large quantities of meat, because it is bought from a butcher whose shop is far away from home. Items like bread and milk are bought as needed; the participant elaborates that there is rarely anything missing, because alternative ingredients will be found.
\emph{What is the most important about your diet?}
It is important that the diet is healthy and filled with energy; the meat has to be proper, and therefore it is bought from a butcher.
\emph{Do you use leftovers?}
The participant always uses leftovers.
\emph{When do you shop, for how long, how much and how many times?}
There is no specific time of day at which the participant likes to shop, and the shopping takes place 2 to 3 times a week.
\emph{What affects you in the shopping situation?}
The participant likes to buy discounted items and fill the freezer, and can be tempted to shop impulsively. Furthermore, the participant likes to buy food that is on sale, because it is close to the best-before date.
\subsubsection{55 year old single woman}
\begin{itemize}
\item Sex: Female
\item Age: 55
\item Education: Skilled Worker
\item Work: Teaching assistant
\item Household: 2, mother and daughter
\item Phone: Yes, android
\item Tablet: Yes, android
\end{itemize}
\emph{What do you characterize as food waste?}
When food is thrown out in the trash. This applies to both prepared and unprepared food. If animals are fed the food instead of it being thrown out, it is not food waste.
\emph{Is food waste a problem for the household?}
No; everything edible that gets too old is fed to the animals of the household, more specifically hens, as a replacement for fodder.
\emph{Is there a difference in what gets thrown out?}
No, there is nothing specific that gets thrown out the most.
\emph{What factors are considered when throwing food out?}
Mostly the freshness of the food: not so much the best-before date, but mostly how the food looks, feels, and smells.
\emph{When do you plan your meals and shopping?}
The planning is done 2-3 days ahead of when the food is eaten.
\emph{How is the shopping planned?}
Mostly based on what the household wants to eat and on discounts.
\emph{How is the coordination between the members of the household?}
The mother does everything; the daughter neither prepares the food nor does any shopping.
\emph{What is valued the most about your diet?}
What the household wants to eat, variation in the meals, and, to a lesser degree, economy.
\emph{Do you use leftovers?}
Yes, leftovers get used for lunch or dinner, depending on the quantity of the leftovers.
\emph{When do you grocery shop?}
In the morning before work, or after dinner.
\emph{How long does the grocery shopping take?}
About an hour, because the shopping is done in more than one store.
\emph{How many times a week is grocery shopping done?}
3 times a week.
\emph{What is the quantity when shopping?}
It varies from time to time.
\emph{What affects you in the grocery shopping situation?}
The discounts are planned from home, as the participant looks through discount magazines. But if there is something practical and cheap, the participant will buy it on impulse.
\emph{How do you make your shopping list?}
The shopping list is made by hand on a notepad, but if there were a simple solution for a tablet or smartphone, the participant would be willing to try it out. The participant has already tried using Evernote for a shopping list that would sync between different devices.
\subsubsection{28 year old single male}
\begin{itemize}
\item Sex: Male
\item Age: 28
\item Education: Educator
\item Work: Kindergarten helper
\item Household: 11
\end{itemize}
\emph{What do you characterize as food waste, and is it a problem in the household?}
The participant believes that when edible food is thrown out, it should be considered food waste. He does not feel that the level of food waste in the household is a problem.
\emph{Is there a difference in the type of food that is thrown out?}
It is mostly vegetables that are thrown out in the household; he does not always have the time to eat them all before they go bad.
\emph{What factor is in focus when food is thrown out?}
When the participant feels the food has become too old he will throw it out; this is based on how fresh the food seems to be.
\emph{Planning meals and shopping.}
How the participant plans his shopping and cooking varies, but the planning happens for the most part when he is already at the mall. When he is shopping for products, his choices are based on what he likes and what is on sale. He is alone when shopping and cooking.
\emph{What is the most important about your diet?}
It cannot be too spicy, as he is vulnerable to stomach ulcers. It also has to be of good quality and economical.
\emph{Do you use leftovers?}
The participant tries to save and eat leftovers when possible.
\emph{When do you shop, for how long, how much and how many times?}
He tries to do his shopping after work if possible. He is not sure how much time he spends on shopping, but the time used goes down when he knows what he wants beforehand.
\emph{What affects you in the shopping situation?}
He tries to get the wares he finds delicious, or those that are on sale. | {
"alphanum_fraction": 0.7707918101,
"avg_line_length": 48.2008733624,
"ext": "tex",
"hexsha": "d86ceacca14ece521a22a1a6a37d930f036b34f5",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "amatt13/FoodPlanner-Report",
"max_forks_repo_path": "Appendix/InterviewAnswers.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "amatt13/FoodPlanner-Report",
"max_issues_repo_path": "Appendix/InterviewAnswers.tex",
"max_line_length": 639,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "387a7c769cdda4913b81838bc8feffc9fbcafcc8",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "amatt13/FoodPlanner-Report",
"max_stars_repo_path": "Appendix/InterviewAnswers.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 2630,
"size": 11038
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UMB-CS110-2015S: Introduction to Computing
% Copyright 2015 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% More info: https://github.com/ghorbanzade/UMB-CS110-2015S
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def \topDirectory {../..}
\def \texDirectory {\topDirectory/src/main/tex}
\documentclass[10pt, compress]{beamer}
\usepackage{\texDirectory/template/style/directives}
\input{\texDirectory/template/config}
\usepackage{\texDirectory/template/style/beamerthemeUmassLecture}
\doc{number}{14}
%\setbeamertemplate{footline}[text line]{}
\begin{document}
\prepareCover
\section{Course Administration}
\begin{frame}[fragile]
\frametitle{Course Administration}
Assignment 5 released. Due on April 30, 2015, at 17:30.
\end{frame}
\begin{frame}[fragile]
\frametitle{Overview}
\begin{itemize}
\item[] Polymorphism
\begin{itemize}
\item[] Introduction
\item[] Method Overloading
\item[] Method Overriding
\item[] Subtyping
\end{itemize}
\end{itemize}
\end{frame}
\plain{}{Polymorphism}
\section{Introduction}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{figure}\centering
\includegraphics[width=\textwidth]{\texDirectory/template/images/polymorphism.png}
\end{figure}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Polymorphism in Computer Science}
\begin{itemize}
\item[] \textbf{Ad-hoc Polymorphism} A function denotes different and potentially heterogeneous implementations depending on a limited range of individually specified types and combinations
\item[] \textbf{Parametric Polymorphism} A function developed without mention of any specific type and thus can be used transparently with any number of new types
\item[] \textbf{Inclusion Polymorphism} A concept wherein a name may denote instances of many different classes as long as they are related by some common superclass
\end{itemize}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{(Ad-hoc) Polymorphism in Java}
\begin{itemize}
\item[] \textbf{Compile-Time Polymorphism} allows development of methods with the same name but with different parameter lists
\item[] \textbf{Run-Time Polymorphism} allows unique implementations of superclass methods in subclasses
\end{itemize}
\end{block}
\begin{block}{(Inclusion) Polymorphism in Java}
\begin{itemize}
\item[] \textbf{Subtyping} allows the subclass object to be referenced via the super class.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Advantage}
\begin{itemize}
\item[] \textbf{Compile-Time Polymorphism} enables \alert{method overloading} where multiple methods of the same name but with different parameter lists can be developed in a class.
\item[] \textbf{Run-Time Polymorphism} enables \alert{method overriding} where a method previously implemented in the superclass can be overridden in the subclass.
\item[] \textbf{Subtyping} allows instantiation of an object at compile time while determining its type at run time.
\end{itemize}
\end{block}
\end{frame}
\section{Method Overloading}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Objective}
Write a program \texttt{BigOcean.java} that controls movement of a fish in a vast ocean.
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{Fish.java} (Page 1)}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class Fish {
// attributes
private double posX;
private double posY;
// constructors
public Fish() {
posX = 0;
posY = 0;
}
// print position
public void showPosition() {
System.out.printf("moved to [%.1f, %.1f]\n",
this.posX, this.posY);
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{Fish.java} (Page 2)}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=15]{java}
// fish moves some random distance in some random direction
public void move() {
double direction = 2 * Math.PI * Math.random();
double distance = 10 * Math.random();
this.posX += distance * Math.cos(direction);
this.posY += distance * Math.sin(direction);
}
// fish moves [distance] meters in some random direction
public void move(double distance) {
double direction = 2 * Math.PI * Math.random();
this.posX += distance * Math.cos(direction);
this.posY += distance * Math.sin(direction);
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{Fish.java} (Page 3)}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=28]{java}
// fish moves to [posX, posY]
public void move(double posX, double posY) {
this.posX = posX;
this.posY = posY;
}
// fish moves [distance] meters toward [someFish]
public void move(double distance, Fish someFish) {
double direction = Math.atan2(
someFish.getPosY() - this.posY,
someFish.getPosX() - this.posX
);
this.posX += distance * Math.cos(direction);
this.posY += distance * Math.sin(direction);
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{Fish.java} (Page 4)}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=42]{java}
// getters and setters
public double getPosX() {
return posX;
}
public double getPosY() {
return posY;
}
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{BigOcean.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class BigOcean {
public static void main(String[] args) {
Fish fish1 = new Fish();
fish1.move(10);
fish1.showPosition();
fish1.move();
fish1.showPosition();
fish1.move(10, 10);
fish1.showPosition();
Fish fish2 = new Fish();
fish2.move(5, fish1);
fish2.showPosition();
}
}
\end{minted}
\end{block}
\end{frame}
\section{Method Overriding}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Objective}
Write a program \texttt{BigOcean2.java} that, using the previously developed class \texttt{Fish.java}, controls the movement of a salmon in a vast ocean.
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{Salmon.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class Salmon extends Fish {
@Override
public void move() {
super.move();
System.out.println("The salmon just moved!");
}
public void sing() {
System.out.println("Let it go!");
}
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{BigOcean2.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class BigOcean2 {
public static void main(String[] args) {
Salmon fish1 = new Salmon();
fish1.move();
fish1.showPosition();
fish1.sing();
}
}
\end{minted}
\end{block}
\begin{block}{Output}
\begin{verbatim}
The salmon just moved!
moved to [0.8, 1.3]
Let it go!
\end{verbatim}
\end{block}
\end{frame}
\section{Subtyping}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{BigOcean3.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class BigOcean3 {
public static void main(String[] args) {
Salmon fish1 = new Salmon();
Fish fish2 = fish1;
Object fish3 = fish2;
System.out.println(fish3.equals(fish1)); // prints true
}
}
\end{minted}
\texttt{fish1}, \texttt{fish2} and \texttt{fish3} all refer to the same Salmon.
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{BigOcean3.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class BigOcean3 {
public static void main(String[] args) {
Salmon fish1 = new Salmon();
Fish fish2 = fish1;
fish1.sing();
fish2.sing();
}
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Output}
\begin{verbatim} Let it go!
The method sing() is undefined for the type Fish
\end{verbatim}
\end{block}
\begin{block}{Problem Statement}
The compiler treats \texttt{fish2} like any other instance of class \texttt{Fish}.
\end{block}
\begin{block}{Lesson Learned}
Subtyping affects how the compiler identifies objects.
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Class \texttt{BigOcean3.java}}
\begin{minted}[fontsize=\small,tabsize=8, linenos, firstnumber=1]{java}
public class BigOcean3 {
public static void main(String[] args) {
Salmon fish1 = new Salmon();
Fish fish2 = new Salmon();
fish1.move();
fish2.move();
}
}
\end{minted}
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Output}
\begin{verbatim} The salmon just moved!
The salmon just moved!
\end{verbatim}
\end{block}
\begin{block}{Justification}
The program passes compilation because the compiler finds a method \texttt{move()} in class \texttt{Fish} for \texttt{fish2}. At run time, however, the method \texttt{move()} of class \texttt{Salmon} is invoked.
\end{block}
\begin{block}{Lesson Learned}
Subtyping does not affect how the JVM identifies objects.
\end{block}
\end{frame}
\begin{frame}[fragile]
\frametitle{Polymorphism}
\begin{block}{Notes}
\begin{itemize}
\item[] Any object with more than one IS-A relation is polymorphic
\item[] Any object is an instance of class \texttt{Object}, thus polymorphic
\end{itemize}
\end{block}
\end{frame}
\plain{}{Keep Calm\\and\\Think Object-Oriented}
\end{document}
| {
"alphanum_fraction": 0.7145302588,
"avg_line_length": 28.7023121387,
"ext": "tex",
"hexsha": "6d760b591ca84e8880436176d765f0b177b58921",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "UMB-CS110-2015S/Assignments",
"max_forks_repo_path": "src/main/tex/slides/ls14.tex",
"max_issues_count": 7,
"max_issues_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_issues_repo_issues_event_max_datetime": "2019-03-17T16:39:11.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-08-22T15:44:45.000Z",
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "UMB-CS110-2015S/Assignments",
"max_issues_repo_path": "src/main/tex/slides/ls14.tex",
"max_line_length": 200,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "b12ded95ddec71cd45dd05dff773018f6879d37f",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "UMB-CS110-2015S/Assignments",
"max_stars_repo_path": "src/main/tex/slides/ls14.tex",
"max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:40.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:40.000Z",
"num_tokens": 3057,
"size": 9931
} |
% GB (09/28/2007)
\chapter{Distributed-Memory Parallel Traversals}
\label{chap:distributedMemoryTraversals}
ROSE provides an experimental distributed-memory AST traversal mechanism meant for very large scale program
analysis. It allows you to distribute expensive program analyses among a distributed-memory system consisting of many
processors; this can be a cluster or a network of workstations. Different processes in the distributed system will get
different parts of the AST to analyze: Each process is assigned a number of defining function declarations in the AST,
and a method implemented by the user is invoked on each of these. The parts of the AST outside of function definitions
are shared among all processes, but there is no guarantee that all function definitions are visible to all processes.
The distributed memory analysis framework uses the MPI message passing library for communicating attributes among
processes. You will need an implementation of MPI to be able to build and run programs using distributed memory
traversals; consult your documentation on how to run MPI programs. (This is often done using a program named {\tt
mpirun}, {\tt mpiexec}, or similar.)
Distributed memory analyses are performed in three phases:
\begin{enumerate}
\item A top-down traversal (the `pre-traversal') specified by the user runs on the shared AST outside of function
definitions. The inherited attributes this traversal computes for defining function declaration nodes in the AST are
saved by the framework for use in the next phase.
\item For every defining function declaration, the user-provided {\tt analyzeSubtree()} method is invoked; these calls
run concurrently, but on different function declarations, on all processors. It takes as arguments the AST node for the
function declaration and the inherited attribute computed for this node by the pre-traversal. Within {\tt
analyzeSubtree()} any analysis features provided by ROSE can be used. This method returns the value that will be used as
the synthesized attribute for this function declaration in the bottom-up traversal (the `post-traversal').
However, unlike normal bottom-up traversals, the synthesized attribute is not simply copied in memory as the AST is
distributed. The user must therefore provide the methods {\tt serializeAttribute()} and {\tt deserializeAttribute()}.
These compute a serialized representation of a synthesized attribute, and convert such a representation back to the
user's synthesized attribute type, respectively. A serialized attribute is a pair of an integer specifying the size of
the attribute in bytes and a pointer to a region of memory of that size that will be copied byte by byte across the
distributed system's communication network. Attributes from different parts of the AST may have different sizes. As
serialization of attributes will often involve dynamic memory allocation, the user can also implement the {\tt
deleteSerializedAttribute()} method to free such dynamic memory after the serialized data has been copied to the
communication subsystem's internal buffer. (A minimal sketch of these serialization hooks is given after this list.)
Within the {\tt analyzeSubtree()} method the methods {\tt numberOfProcesses()} and {\tt myID()} can be called. These
return the total number of concurrent processes, and an integer uniquely identifying the currently running process,
respectively. The ID ranges from 0 to one less than the number of processes, but has no semantics other than that it is
different for each process.
\item Finally, a bottom-up traversal is run on the shared AST outside of function definitions. The values returned by
the distributed analyzers in the previous phase are used as synthesized attributes for function definition nodes in this
traversal.
\end{enumerate}
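To make phase 2 more concrete, the following sketch shows how the serialization hooks might look for a
trivial synthesized attribute. It is only an illustrative sketch: the attribute type, the class name
{\tt MyAnalysis}, and the exact signatures (a {\tt std::pair} of byte count and pointer, as suggested by the
description above) are assumptions made for this example; the authoritative interface is the one used in
the complete example of Figure~\ref{Tutorial:exampleDistributedMemoryTraversals}.
\begin{verbatim}
#include <utility>

// Hypothetical synthesized attribute: the number of calls found in a
// function definition (illustrative only).
struct CallCount { int calls; };

// Hypothetical analysis class; in ROSE it would derive from the
// framework's distributed-memory analysis base class, which is not
// reproduced here.
class MyAnalysis {
public:
    std::pair<int, void *> serializeAttribute(CallCount attr);
    CallCount deserializeAttribute(std::pair<int, void *> data);
    void deleteSerializedAttribute(std::pair<int, void *> data);
};

std::pair<int, void *> MyAnalysis::serializeAttribute(CallCount attr) {
    // Pack the attribute into a freshly allocated buffer; the framework
    // copies (size, pointer) byte by byte across the MPI network.
    int *buffer = new int(attr.calls);
    return std::make_pair(static_cast<int>(sizeof(int)),
                          static_cast<void *>(buffer));
}

CallCount MyAnalysis::deserializeAttribute(std::pair<int, void *> data) {
    // Rebuild the attribute from the received bytes.
    CallCount attr;
    attr.calls = *static_cast<int *>(data.second);
    return attr;
}

void MyAnalysis::deleteSerializedAttribute(std::pair<int, void *> data) {
    // Free the buffer allocated in serializeAttribute() once the
    // framework has copied it into its internal communication buffer.
    delete static_cast<int *>(data.second);
}
\end{verbatim}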
After the bottom-up traversal has finished, the {\tt getFinalResults()} method can be invoked to obtain the final
synthesized attribute. The {\tt isRootProcess()} method returns true on exactly one designated process and can be used
to perform output, program transformations, or other tasks that are not meant to be run on each processor.
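Schematically, and leaving aside the framework-specific calls that actually start the three phases (these
are shown in the complete example below), collecting and reporting the result might look as follows. The
fragment is generic so that it relies only on the two member functions named above; everything else in it,
including the {\tt calls} field of the result type, is an assumption carried over from the sketch above.
\begin{verbatim}
#include <iostream>

// Report the final synthesized attribute on the designated root process
// only, so the summary is not printed once per MPI rank. 'Analysis' is
// assumed to expose isRootProcess() and getFinalResults() as described
// in the text.
template <typename Analysis>
void reportResults(Analysis &analysis) {
    if (analysis.isRootProcess()) {
        std::cout << "total calls: "
                  << analysis.getFinalResults().calls << std::endl;
    }
}
\end{verbatim}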
Figure~\ref{Tutorial:exampleDistributedMemoryTraversals} gives a complete example of how to use the distributed
memory analysis framework. It implements a trivial analysis that determines for each function declaration at what depth
in the AST it can be found and what its name is. Figure~\ref{Tutorial:exampleOutput_DistributedMemoryTraversals} shows
the output produced by this program when running using four processors on some input files.
% The distributedMemoryFunctionNames.C file is a copy of projects/DistributedMemoryAnalysis/functionNames.C
\begin{figure}[!h]
{\indent
{\mySmallFontSize
% Do this when processing latex to generate non-html (not using latex2html)
\begin{latexonly}
\lstinputlisting{\TutorialExampleDirectory/distributedMemoryFunctionNames.C}
\end{latexonly}
% Do this when processing latex to build html (using latex2html)
\begin{htmlonly}
\verbatiminput{\TutorialExampleDirectory/distributedMemoryFunctionNames.C}
\end{htmlonly}
% end of scope in font size
}
% End of scope in indentation
}
\caption{Example source demonstrating the use of the distributed-memory parallel analysis framework.}
\label{Tutorial:exampleDistributedMemoryTraversals}
\end{figure}
\begin{figure}[!h]
{\indent
{\mySmallFontSize
% GB (09/28/2007): This is just copied here; we can't generate it as not everyone has a copy of MPI. Might not be very
% elegant, but it's 4:42 p.m. on my last day :-)
\begin{verbatim}
----- found the following functions: ------
process 0: at depth 3: function il
process 0: at depth 5: function head
process 0: at depth 5: function eq
process 1: at depth 3: function headhead
process 1: at depth 3: function List
process 1: at depth 3: function find
process 1: at depth 3: function head
process 2: at depth 3: function operator!=
process 2: at depth 3: function find
process 2: at depth 3: function head
process 2: at depth 3: function fib
process 3: at depth 3: function xform
process 3: at depth 3: function func
process 3: at depth 3: function f
process 3: at depth 3: function g
process 3: at depth 3: function deref
-------------------------------------------
\end{verbatim}
% end of scope in font size
}
% End of scope in indentation
}
\caption{Example output of a distributed-memory analysis running on four processors.}
\label{Tutorial:exampleOutput_DistributedMemoryTraversals}
\end{figure}
| {
"alphanum_fraction": 0.7953733892,
"avg_line_length": 58.5545454545,
"ext": "tex",
"hexsha": "ef427597b328118462024f7cf3f837aaddaa7cfe",
"lang": "TeX",
"max_forks_count": 146,
"max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z",
"max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z",
"max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e",
"max_forks_repo_licenses": [
"BSD-3-Clause"
],
"max_forks_repo_name": "sujankh/rose-matlab",
"max_forks_repo_path": "docs/Rose/Tutorial/distributedMemoryTraversals.tex",
"max_issues_count": 174,
"max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e",
"max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z",
"max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z",
"max_issues_repo_licenses": [
"BSD-3-Clause"
],
"max_issues_repo_name": "sujankh/rose-matlab",
"max_issues_repo_path": "docs/Rose/Tutorial/distributedMemoryTraversals.tex",
"max_line_length": 120,
"max_stars_count": 488,
"max_stars_repo_head_hexsha": "7597292cf14da292bdb9a4ef573001b6c5b9b6c0",
"max_stars_repo_licenses": [
"BSD-3-Clause"
],
"max_stars_repo_name": "maurizioabba/rose",
"max_stars_repo_path": "docs/Rose/Tutorial/distributedMemoryTraversals.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z",
"max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z",
"num_tokens": 1405,
"size": 6441
} |
% ************************** Thesis Acknowledgements **************************
\chapter*{Acknowledgements}
\thispagestyle{empty}
\markboth{Acknowledgements}{}
Finally, this is where I want to thank everyone who has made this work possible. First and foremost, I would like to thank Prof.\@\xspace Dr.\@\xspace Dorothee Schaile for giving me the opportunity to work at the chair of elementary particle physics, for allowing me to attend a number of workshops and conferences, and, especially, for always having an open door in case I needed some advice.
I am also deeply grateful to PD\@\xspace Dr.\@\xspace Jeanette Lorenz, not only for the excellent supervision all through my Bachelor's, Master's and PhD theses, but especially also for the many stimulating discussions and words of advice, as well as the numerous constructive comments regarding this thesis. Your guidance and feedback during the last couple of years have been invaluable. Thank you for continuously encouraging me to actively help shaping efforts to search for Supersymmetry in ATLAS. Thank you also for your relentless and fruitful efforts to facilitate a regular exchange of ideas, even during times of pandemic-enforced home-office and isolation.
I would further like to express my gratitude towards Prof.\@\xspace Dr.\@\xspace Wolfgang D\"unnweber for agreeing to provide the second review of this thesis, especially since it got much longer than initially anticipated. I am further grateful to Prof.\@\xspace Dr.\@\xspace Andreas Burkert, Prof.\@\xspace Dr.\@\xspace Gerhard Buchalla, Prof.\@\xspace Dr.\@\xspace Thomas Kuhr and Prof.\@\xspace Dr.\@\xspace Thomas Preibisch for agreeing to take part in the examination commission.
Many thanks to all the members of the \onelepton analysis team as well as the numerous colleagues involved in the pMSSM efforts in ATLAS. Most of the results presented herein are a product of a collaborative effort. Furthermore, many of the discussions we had have actively shaped this thesis.
I would like to acknowledge the support of the Luxembourg National Research Fund (FNR) for funding my PhD project, especially because high-energy physics research is still virtually nonexistent in Luxembourg. Although a founding member of numerous European endeavours, Luxembourg is still one of only three European countries that are not a member of CERN, a circumstance that I regard as deeply disappointing and regrettable. My sincere hope is that the future sees a growing high-energy physics community in Luxembourg. Funding single, external projects like mine certainly is an important step to allow this to happen.
Furthermore, I would like to thank all the members of the institute for always ensuring a warm and friendly atmosphere, for the numerous work-unrelated activities, for the countless, interesting discussions, and for bearing with me when the shared group disks were running out of space again.
Special thanks go to Dr.\@\xspace Nikolai Hartmann for being an awesome office partner, for the countless discussions on the sense and nonsense of data analysis with and without ATLAS software, and for sharing with me your passion for coffee and for obscure (and sometimes esoteric) programming languages and software packages.
Special thanks also go to Dr.\@\xspace Ferdinand Krieter (I agree, the title does look weird) and Paola Arrubarrena for the many non-physics chats, for sharing my somewhat unusual sense of humour, and for enduring my virtual rants during the writing phase in home-office.
Many thanks also to Dr.\@\xspace Michael Holzbock, Dr.\@\xspace Andrea Matic and David Koch for patiently playing the receiving end during many of my rubber duck debugging sessions.
My gratitude also goes to Yannick Erpelding and Nick Beffort. I am deeply grateful for the many years of friendship and for the myriad of memorable moments we have lived through so far. Thank you for being awesome friends and for looking out for me.
Finally, and most importantly, I owe my deepest gratitude to my family, in particular to my parents and to my sister for always supporting me and encouraging me to expand my horizons, but also to my wonderful partner Nathalie M\"unster for enduring the numerous inevitable meltdowns during writing up in times of a pandemic, for being an endless source of stress relief, and, most notably, for always being on my side and making me laugh every single day. To Pandu I want to say: stay vigilant, my good boy, for much more adventurous and treat-filled times are ahead.
%\vspace{5em}
%
%\begin{tikzpicture}[remember picture,overlay,shift={(current page text area.south east)}]
%\node [above right]{\parbox{0.70\textwidth}{
%\raggedright\textit{``I may not have gone where I intended to go, but I think I have ended up where I needed to be.''
%}
%\par\raggedleft--- \textup{Douglas Adams, The Long Dark Tea-Time of the Soul}
%\vspace{10em}
%}};
%\end{tikzpicture}
%
%
\clearpage | {
"alphanum_fraction": 0.7881733495,
"avg_line_length": 120.4390243902,
"ext": "tex",
"hexsha": "a8d33b7d2a9ee61dc8a66a45c9433c6de9dca327",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "eschanet/phd-thesis",
"max_forks_repo_path": "Acknowledgement/acknowledgement.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "eschanet/phd-thesis",
"max_issues_repo_path": "Acknowledgement/acknowledgement.tex",
"max_line_length": 667,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "607efdd3d48ec4def49ba41188c4453b04dd99d2",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "eschanet/phd-thesis",
"max_stars_repo_path": "Acknowledgement/acknowledgement.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1067,
"size": 4938
} |
\documentclass[11pt]{report}
\usepackage{verbatim,amsmath,amssymb,morehelp}
\newcommand{\nm}[2]{\nomenclature{#1}{#2}}
\begin{document}
\section{blah}
MAKE PICTURE
To develop notation, we examine a few particular cases.
\end{document}
| {
"alphanum_fraction": 0.7448559671,
"avg_line_length": 15.1875,
"ext": "tex",
"hexsha": "efb83cb834fd2262013968972b10e824d0fda437",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "2b51577f7288bced8f72f2ecec9ee94387e73458",
"max_forks_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_forks_repo_name": "RyanMcCarl/.emacs.d",
"max_forks_repo_path": "archive/test/manual/etags/tex-src/testenv.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "2b51577f7288bced8f72f2ecec9ee94387e73458",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_issues_repo_name": "RyanMcCarl/.emacs.d",
"max_issues_repo_path": "archive/test/manual/etags/tex-src/testenv.tex",
"max_line_length": 56,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "2b51577f7288bced8f72f2ecec9ee94387e73458",
"max_stars_repo_licenses": [
"BSD-2-Clause-FreeBSD"
],
"max_stars_repo_name": "RyanMcCarl/.emacs.d",
"max_stars_repo_path": "archive/test/manual/etags/tex-src/testenv.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 77,
"size": 243
} |
\documentclass[./Thesis.tex]{subfiles}
\begin{document}
\chapter{Agda Briefly}
\label{chap:agda-briefly}
\epigraph{
The world of mathematics is becoming very large, the complexity of
mathematics is becoming very high, and there is a danger of an
accumulation of mistakes.
}{Vladimir Voevodsky \cite{voevodsky-quote}}
The goal of this chapter is two-fold. The main goal is to introduce the reader
to Agda and its style of dependently typed proofs. The secondary goal is to
introduce the reader to the syntax and proof styles used in this thesis. Luckily,
these goals are complementary so we will tackle them simultaneously.
\section{Functions and Datatypes}
\label{sec:functions-and-datatypes}
The code snippet below defines the standard boolean datatype ($\mathbb{Z}/[2]$).
The keyword \AgdaKeyword{data} allows users to define their own types.
You can think about the type $\AgdaPrimitiveType{Set}$ as the type of all types\footnotemark.
The following lines introduce the common names for boolean values while
expressing that they are of type boolean.
This example also highlights that names in Agda need not be ASCII identifiers.
They can be arbitrary unicode strings namely $\AgdaDatatype{𝔹}$.
\footnotetext{
  Taken literally, this would allow a type-theoretic analog of Russell's
  Paradox. In actuality
$\AgdaPrimitiveType{Set}_{n} : \AgdaPrimitiveType{Set}_{n + 1}$
but as everything in this thesis lives in
$\AgdaPrimitiveType{Set}_0 = \AgdaPrimitiveType{Set}$
we can ignore this complexity.
}
\begin{code}
data 𝔹 : Set where
false : 𝔹
true : 𝔹
not : 𝔹 → 𝔹
not false = true
not true = false
not-example : 𝔹
not-example = not false
\end{code}
The function $\AgdaFunction{not}$ is defined by case analysis and follows
the standard mathematical definition closely. Functions in \Agda{} must be
total. In other words, they must cover every possible value in their domain.
The function defined above clearly satisfies this condition. Note that function
application, as seen in $\AgdaFunction{not-example}$ is simply an empty space
separating the function and its argument. This convention first appeared in
Alonzo Church's \textit{lambda calculus} and as \Agda{} is an
extension of the lambda calculus the convention carries over. Investigating the
lambda calculus is a thesis worthy exercise and in the past many Reed theses
have attempted this challenge. In our case, we refer the interested reader to the
following sources \cite{harper} \cite{hott-book}. This being said we consider the
lambda calculus to be a formalism of the abstract mapping notation common when
describing functions in mathematics. Note the following similarities between the
mathematical notation and the lambda notation described in \ref{eqn:mapping}.
\begin{align}
\label{eqn:mapping}
x \mapsto x^2 \text{ is the same as } λ x \, → \, x^2
\end{align}
\Agda{} supports arbitrary lambda expressions and we can use this fact to define
the identity function for booleans in lambda notation.
\begin{code}[hide]
module Identity₁ where
\end{code}
\begin{code}
id : 𝔹 → 𝔹
id = λ x → x
\end{code}
The following identity function is definitionally equal to the lambda function
described above as every function in \Agda{} is desugared to the lambda calculus.
\begin{code}[hide]
module Identity₂ where
\end{code}
\begin{code}
id : 𝔹 → 𝔹
id x = x
\end{code}
The astute reader may find the definitions above suspicious. Why is the identity
function defined with a domain and codomain of the booleans? Does \Agda{}
require a programmer to create a new identity function for every type?
Thankfully, \Agda{} adopts a solution pervasive in mathematics: universal
quantification.
\begin{code}[hide]
module Identity₃ where
\end{code}
\begin{code}
id : ∀ (A : Set) → A → A
id A x = x
id-𝔹 : 𝔹 → 𝔹
id-𝔹 = id 𝔹
\end{code}
The general identity function depicted above introduces two new concepts.
It introduces universal quantification, or type theoretically, dependent types.
A dependent type comes in two parts. The type of the second half $A \, → \, A$ depends
on the input of the first $∀ (A : \AgdaPrimitiveType{Set}) →$. This is a powerful
innovation and we will come to see that it drastically increases language expressivity.
It also introduces function currying. This concept allows multi-argument
functions to be encoded as functions that return functions.
\begin{align}
\label{eqn:currying}
(x, y) \mapsto x^2 + y^2 \equiv λ x \, → λ y \, → \, x^2 + y^2
\end{align}
Every function in \Agda{} is automatically curried so the identity function
described above is definitionally equal to the following function.
\begin{code}[hide]
module Identity₄ where
\end{code}
\begin{code}
id : ∀ (A : Set) → A → A
id = λ A → λ x → x
\end{code}
Notice how the dependent type behaves similarly to normal function types.
This is because all function types in \Agda{} are actually dependently typed.
The syntax $A \, → \, B$ is just \textit{syntactic sugar} for a dependent type that ignores
the type of the input $(\_ : A) \, → \, B$. Lastly, we can make the type argument
implicit (written in curly braces), as \Agda{} can almost always infer it. Note that we
can still supply the type explicitly, as seen in $\AgdaFunction{id-example₂}$ and
$\AgdaFunction{id-example₃}$.
\begin{code}[hide]
module Identity₅ where
\end{code}
\begin{code}
id : ∀ {A : Set} → A → A
id {A} x = x
id-example₁ : 𝔹
id-example₁ = id true
id-example₂ : 𝔹
id-example₂ = id {𝔹} true
id-example₃ : 𝔹
id-example₃ = id {A = 𝔹} true
\end{code}
\section{Baby's First Proofs}
\label{sec:first-proofs}
In order to prove interesting mathematical statements we first need a notion of
equality. The following code defines a datatype that represents equality. The
first line informs \Agda{} that equality is a binary relation over an arbitrary
type. The datatype only has a single constructor which requires that the left
hand side is the ``same'' as the right hand side. Thus the constructor is named
$\AgdaInductiveConstructor{≡-refl}$ as these constraints make the constructor
similar to the reflexivity axiom of equivalence relations.
\begin{code}[hide]
module Equality where
\end{code}
\begin{code}
data _≡_ {A : Set} : A → A → Set where
≡-refl : ∀ {x : A} → x ≡ x
\end{code}
In practice the left hand side and right hand side need not be the exact same
code. They do need to simplify to the same value. In the following code
$\AgdaFunction{not} \, \AgdaInductiveConstructor{true}$ simplifies to
$\AgdaInductiveConstructor{false}$ so the reflexivity constructor is allowed.
\begin{code}
simple : not true ≡ false
simple = ≡-refl
\end{code}
These concepts guide the following proof where we prove that $\AgdaFunction{not}$ is an
involution. The proof proceeds by cases, when the input is
$\AgdaInductiveConstructor{false}$ the expression
$\AgdaFunction{not} \,
(\AgdaFunction{not} \, \AgdaInductiveConstructor{false})$
simplifies to $\AgdaInductiveConstructor{false}$ via repeated applications of
the definition of $\AgdaFunction{not}$. The process proceeds symmetrically when the input is
$\AgdaInductiveConstructor{true}$. Note that the case split is required as
$\AgdaFunction{not}$ does not simplify for arbitrary booleans. This is unlike
the identity function which would simplify for any input.
\begin{code}
not-involution : ∀ (b : 𝔹) → not (not b) ≡ b
not-involution false = ≡-refl
not-involution true = ≡-refl
\end{code}
\section{Natural Numbers}
\label{sec:natural-numbers}
The set of mathematical statements provable with only finite constructions is
significantly smaller than the set of all provable mathematical statements.
Thankfully \Agda{} can express many infinite constructions\footnotemark{}.
Infinite constructions are defined by an \textit{inductively defined datatype}
\cite{agda}.
In the following code we define the simplest inductively defined datatype, the Peano
natural numbers. This definition is similar to our definition of the booleans
but the $\AgdaInductiveConstructor{suc}$ constructor recursively
contains another natural.
\footnotetext{It can easily construct any ordinal less than
$\varepsilon_0$ and with effort can express larger ordinals.
\cite{ordinals}
}
\begin{code}[hide]
module Naturals where
open import Relation.Binary.PropositionalEquality
using (_≡_; _≢_; module ≡-Reasoning)
renaming (refl to ≡-refl; sym to ≡-sym; cong to ≡-cong; cong₂ to ≡-cong₂)
open ≡-Reasoning
\end{code}
\begin{code}
data ℕ : Set where
zero : ℕ
suc : ℕ → ℕ
three : ℕ
three = suc (suc (suc zero))
\end{code}
We can consume these naturals by \textit{pattern matching} \cite{agda}. In the
following example the variable $m$ is bound by pattern matching and is equal to
the input $n$ minus one. This is because we are ``peeling'' off one successor
constructor.
\begin{code}
isZero : ℕ → 𝔹
isZero zero = true
isZero (suc m) = false
\end{code}
\begin{code}[hide]
infixl 7 _+_
\end{code}
The example above is not a compelling use of natural numbers as it does not
require any properties unique to the natural numbers. The code sample
below defines addition on Peano naturals, a more compelling use. As our datatype was inductively
defined, any non-trivial operation on naturals must be defined inductively as
well.
\begin{code}
_+_ : ℕ → ℕ → ℕ
zero + n = n
suc n + m = suc (n + m)
\end{code}
The previous function name was defined with mixfix syntax. This is a syntax unique
to \Agda{} that allows for the construction of complex mathematical operators. A
valid name may be interspersed with underscores. At every underscore \Agda{}
expects a user to supply one argument. Addition is a binary operator so we
supply one underscore on either side of the addition symbol. \\
This is a proof assistant so let's prove a simple property of the naturals,
namely that addition is associative. In order to do this we need a definition of
associativity. The following definition is a function that takes
an arbitrary binary operation and returns a specification of associativity for
that binary operation. Note that even arguments can be named with mixfix syntax.
\begin{code}
Associative : ∀ {A : Set} → (A → A → A) → Set
Associative {A} _∙_ = ∀ (x y z : A) → (x ∙ (y ∙ z)) ≡ ((x ∙ y) ∙ z)
\end{code}
The actual proof begins by cases. When the
first argument is zero, $\AgdaInductiveConstructor{zero} + (y + z)$ and
$(\AgdaInductiveConstructor{zero} + y) + z$ both simplify to $y + z$
definitionally. Thus we can invoke reflexivity. The second case is more
complicated and is proved using equational reasoning combinators. The first
combinator $\AgdaFunction{\_≡⟨⟩\_}$ can only be used when the two lines are
definitionally equal to each other. This is the reflexivity axiom in disguise.
The next combinator we use is $\AgdaFunction{\_≡⟨\_⟩\_}$ and it allows us to
supply a proof that the first line equals the second.
We invoke the inductive hypothesis and wrap a
$\AgdaInductiveConstructor{suc}$ around the result. This is allowable as our
equality type is congruent over any function.
Lastly we push the $\AgdaInductiveConstructor{suc}$
inside the term. This is the second line in the definition of addition applied
in reverse.
\begin{code}
+-assoc : Associative _+_
+-assoc zero y z = ≡-refl
+-assoc (suc x) y z = begin
suc x + (y + z) ≡⟨⟩
suc (x + (y + z)) ≡⟨ ≡-cong suc (+-assoc x y z) ⟩
suc ((x + y) + z) ≡⟨⟩
(suc (x + y) + z) ≡⟨⟩
(suc x + y) + z ∎
\end{code}
\section{Record Types}
\label{sec:record-types}
We introduce one last critical construct, the record construction. A record is
very similar to a datatype but there are some differences. Most importantly
records have only one constructor so they can not be used to define any of the
datatype given above. Unlike datatypes, records contain a list of named fields.
We define a record type that bundles together all the proofs required for a
binary operator to be a semigroup. In this case there is only one constraint
namely associativity. \\
\begin{code}
record IsSemigroup {A : Set} (_∙_ : A → A → A) : Set where
constructor IsSemigroup✓
field
∙-assoc : Associative _∙_
\end{code}
We prove that $\AgdaFunction{\_+\_}$ is a semigroup by supplying every
field of the $\AgdaDatatype{IsSemigroup}$ record.
\begin{code}
+-isSemigroup : IsSemigroup _+_
+-isSemigroup = IsSemigroup✓ +-assoc
\end{code}
Importantly, the type of a field can depend on the values of fields earlier in
the list. This allows us to define the less-than-or-equal relation $n \leq m$
as $\exists k. \, n + k = m$. The reason fields act similarly to existential
quantification is that when we construct a record we ``forget'' everything specific to the value
except the requirements specified by the fields.
\begin{code}
record _≤_ (n : ℕ) (m : ℕ) : Set where
constructor lte
field
k : ℕ
pf : n + k ≡ m
\end{code}
% \begin{code}
% Reflexive : ∀ {A : Set} → (A → A → Set) → Set
% Reflexive _R_ = ∀ {x} → x R x
% ≡-reflexive : ∀ {A : Set} → Reflexive {A = A} _≡_
% ≡-reflexive = ≡-refl
% \end{code}
% \begin{code}
% Symmetric : ∀ {A : Set} → (A → A → Set) → Set
% Symmetric _R_ = ∀ {x y} → x R y → y R x
% ≡-sym : ∀ {A : Set} → Symmetric {A = A} _≡_
% ≡-sym {A} {x} {.x} ≡-refl = ≡-refl
% \end{code}
% Agda supports mixfix syntax for defining names (datatypes, functions, and arguments).
% As shown below anywhere an underscore is given an argument should be supplied.
% \begin{code}
% Transitive : ∀ {A : Set} → (A → A → Set) → Set
% Transitive _R_ = ∀ {x y z} → x R y → y R z → x R z
% ≡-trans : ∀ {A : Set} → Transitive {A = A} _≡_
% ≡-trans {A} {x} {.x} {.x} ≡-refl ≡-refl = ≡-refl
% \end{code}
% \begin{code}
% ≡-cong : ∀ {A B : Set} (f : A → B) {x y : A} → x ≡ y → f x ≡ f y
% ≡-cong {A} {B} f {x} {.x} ≡-refl = ≡-refl
% \end{code}
% \begin{code}
% infix 1 begin_
% begin_ : ∀ {A : Set} {x y : A} → x ≡ y → x ≡ y
% begin x≡y = x≡y
% infix 3 _∎
% _∎ : ∀ {A : Set} (x : A) → x ≡ x
% x ∎ = ≡-refl
% infixr 2 _≡⟨_⟩_
% _≡⟨_⟩_ : ∀ {A : Set} (x : A) {y z : A} → x ≡ y → y ≡ z → x ≡ z
% x ≡⟨ x≡y ⟩ y≡z = ≡-trans x≡y y≡z
% infixr 2 _≡⟨⟩_
% _≡⟨⟩_ : ∀ {A : Set} {y : A} (x : A) → x ≡ y → x ≡ y
% x ≡⟨⟩ ≡-refl = ≡-refl
% \end{code}
\end{document}
| {
"alphanum_fraction": 0.7114005154,
"avg_line_length": 40.5621468927,
"ext": "tex",
"hexsha": "b4e421d65637595fbfdfb54532da28d31c397890",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "ddad4c0d5f384a0219b2177461a68dae06952dde",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "mckeankylej/thesis",
"max_forks_repo_path": "tex/AgdaBriefly.lagda.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "ddad4c0d5f384a0219b2177461a68dae06952dde",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "mckeankylej/thesis",
"max_issues_repo_path": "tex/AgdaBriefly.lagda.tex",
"max_line_length": 97,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "ddad4c0d5f384a0219b2177461a68dae06952dde",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "mckeankylej/thesis",
"max_stars_repo_path": "tex/AgdaBriefly.lagda.tex",
"max_stars_repo_stars_event_max_datetime": "2020-12-01T22:38:27.000Z",
"max_stars_repo_stars_event_min_datetime": "2020-12-01T22:38:27.000Z",
"num_tokens": 4264,
"size": 14359
} |
%%% mode: latex
%%% TeX-master: "wireless-security"
\documentclass[journal, compsoc]{IEEEtran}
\usepackage{silence}
\WarningFilter{breakurl}{You are using breakurl while processing via pdflatex}
\WarningFilter{hyperref}{Token not allowed in a PDF string (PDFDocEncoding)}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[english]{babel}
\usepackage[autostyle]{csquotes}
\usepackage[cmex10]{amsmath}
\usepackage{xmpincl}
\usepackage{filecontents}
\usepackage{siunitx}
\usepackage[hyphens]{url}
\usepackage[hyphenbreaks]{breakurl}
\usepackage{doi}
\usepackage[backend=biber,style=ieee,citestyle=ieee,sortcites,url=false]{biblatex}
\interdisplaylinepenalty=2500
\setcounter{biburlnumpenalty}{3000}
\setcounter{biburllcpenalty}{6000}
\setcounter{biburlucpenalty}{9000}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% License %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begingroup\newif\ifmy{}
\IfFileExists{copyright.xmp}{}{\mytrue}
\ifmy{}
\begin{filecontents}{copyright.xmp}
<?xpacket begin='' id=''?>
<x:xmpmeta xmlns:x='adobe:ns:meta/'>
<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'>
<rdf:Description rdf:about=''
xmlns:xapRights='http://ns.adobe.com/xap/1.0/rights/'>
<xapRights:Marked>True</xapRights:Marked>
</rdf:Description>
<rdf:Description rdf:about=''
xmlns:xapRights='http://ns.adobe.com/xap/1.0/rights/'
>
<xapRights:UsageTerms>
<rdf:Alt>
<rdf:li xml:lang='x-default' >This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</rdf:li>
<rdf:li xml:lang='en_US' >This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</rdf:li>
<rdf:li xml:lang='en' >This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.</rdf:li>
</rdf:Alt>
</xapRights:UsageTerms>
</rdf:Description>
<rdf:Description rdf:about=''
xmlns:dc='http://purl.org/dc/elements/1.1/'>
<dc:title>
<rdf:Alt>
<rdf:li xml:lang='x-default'>Multi-Surface Attack on Wireless Security</rdf:li>
<rdf:li xml:lang='en_US'>Multi-Surface Attack on Wireless Security</rdf:li>
</rdf:Alt>
</dc:title>
</rdf:Description>
<rdf:Description rdf:about=''
xmlns:cc='http://creativecommons.org/ns#'>
<cc:license rdf:resource='http://creativecommons.org/licenses/by-sa/4.0/'/>
</rdf:Description>
<rdf:Description rdf:about=''
xmlns:cc='http://creativecommons.org/ns#'>
<cc:attributionName>Brian Ridings, Tyler Romeo, and Neal Trischitta</cc:attributionName>
</rdf:Description>
</rdf:RDF>
</x:xmpmeta>
<?xpacket end='r'?>
\end{filecontents}
\fi\endgroup
\includexmp{copyright}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Bibliography %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begingroup\newif\ifmy{}
\IfFileExists{references.bib}{}{\mytrue}
\ifmy{}
\begin{filecontents}{references.bib}
@article{1389197,
journal = {ANSI/IEEE Std 802.11, 1999 Edition (R2003)},
title = {IEEE Standard for Information Technology- Telecommunications and Information Exchange Between Systems-Local and Metropolitan Area Networks- Specific Requirements- Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications},
year = {2003},
month = {},
pages = {i-513},
doi = {10.1109/IEEESTD.2003.95617},
}
@article{1438730,
journal = {IEEE Std 802.1X-2004 (Revision of IEEE Std 802.1X-2001)},
title = {IEEE Standard for Local and Metropolitan Area Networks Port-Based Network Access Control},
year = {2004},
pages = {0\_1-169},
doi = {10.1109/IEEESTD.2004.96095},
}
@article{4100091,
journal = {ISO/IEC 8802-11, Second edition: 2005/Amendment 6 2006: IEEE STD 802.11i-2004 (Amendment to IEEE Std 802.11-1999)},
title = {ISO/IEC International Standard - Information Technology Telecommunications and Information Exchange Between Systems Local and Metropolitan Area Networks Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Medium Access Control (MAC) Security Enhancements},
year = {2004},
month = {July},
pages = {c1-178},
doi = {10.1109/IEEESTD.2004.311922},
}
@inproceedings{Fluhrer:2001:WKS:646557.694759,
author = {Fluhrer, Scott R. and Mantin, Itsik and Shamir, Adi},
title = {Weaknesses in the Key Scheduling Algorithm of RC4},
booktitle = {Revised Papers from the 8th Annual International Workshop on Selected Areas in Cryptography},
series = {SAC '01},
year = {2001},
isbn = {3-540-43066-0},
pages = {1--24},
numpages = {24},
url = {http://dl.acm.org/citation.cfm?id=646557.694759},
acmid = {694759},
publisher = {Springer-Verlag},
address = {London, UK, UK},
}
@inproceedings{Tews:2007:BBW:1784964.1784983,
author = {Tews, Erik and Weinmann, Ralf-Philipp and Pyshkin, Andrei},
title = {Breaking 104 Bit WEP in Less Than 60 Seconds},
booktitle = {Proceedings of the 8th International Conference on Information Security Applications},
series = {WISA'07},
year = {2007},
isbn = {3-540-77534-X, 978-3-540-77534-8},
location = {Jeju Island, Korea},
pages = {188--202},
numpages = {15},
url = {http://dl.acm.org/citation.cfm?id=1784964.1784983},
acmid = {1784983},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
}
@inproceedings{Wu:2006:STA:1124772.1124863,
author = {Wu, Min and Miller, Robert C. and Garfinkel, Simson L.},
title = {Do Security Toolbars Actually Prevent Phishing Attacks?},
booktitle = {Proceedings of the SIGCHI Conference on Human Factors in Computing Systems},
series = {CHI '06},
year = {2006},
isbn = {1-59593-372-7},
location = {Montr{\'e}al, Qu{\'e}bec, Canada},
pages = {601--610},
numpages = {10},
url = {http://doi.acm.org/10.1145/1124772.1124863},
doi = {10.1145/1124772.1124863},
acmid = {1124863},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {e-commerce, user interface design, user study, world wide web and hypermedia},
}
@incollection{Halvorsen:2009,
year = {2009},
isbn = {978-3-642-04765-7},
booktitle = {Identity and Privacy in the Internet Age},
volume = {5838},
series = {Lecture Notes in Computer Science},
editor = {Jøsang, Audun and Maseng, Torleiv and Knapskog, SveinJohan},
doi = {10.1007/978-3-642-04766-4_9},
title = {An Improved Attack on TKIP},
url = {http://dx.doi.org/10.1007/978-3-642-04766-4_9},
publisher = {Springer Berlin Heidelberg},
author = {Halvorsen, FinnM. and Haugen, Olav and Eian, Martin and Mjølsnes, StigF.},
pages = {120-132},
language = {English},
}
@article{viehbock2011brute,
title={Brute forcing wi-fi protected setup},
author={Viehb{\"o}ck, Stefan},
journal={Wi-Fi Protected Setup},
year={2011}
}
@techreport{aboba2004extensible,
title={Extensible authentication protocol (EAP)},
author={Aboba, Bernard and Blunk, Larry and Vollbrecht, John and Carlson, James and Levkowetz, Henrik and others},
year={2004},
institution={RFC 3748, June}
}
@techreport{harkins2010extensible,
title={Extensible Authentication Protocol (EAP) Authentication Using Only a Password},
author={Harkins, D and Zorn, G},
year={2010},
institution={RFC 5931, August}
}
@techreport{zorn1998microsoft,
title={Microsoft PPP CHAP Extensions},
author={Zorn, G and Cobb, S},
year={1998},
institution={RFC 2433, October}
}
@techreport{zorn2000microsoft,
title={Microsoft PPP CHAP extensions, version 2},
author={Zorn, Glen},
year={2000},
institution={RFC 2759, January}
}
@online{marlinspike2012divide,
title={Divide and Conquer: Cracking MS-CHAPv2 with a 100\% success rate},
author={Moxie Marlinspike},
date={2012-07-29},
url={https://www.cloudcracker.com/blog/2012/07/29/cracking-ms-chap-v2/},
urldate={2014-12-10},
}
@online{tpwr703n,
title={TP-Link TL-WR703N},
author={mandrawes},
organization={OpenWrt},
date={2014-11-29},
url={http://wiki.openwrt.org/toh/tp-link/tl-wr703n?rev=1417238011},
urldate={2014-12-10},
}
@online{keeble2014passive,
title={Passive WiFi Tracking},
author={Edward Keeble},
date={2014-02-26},
url={http://edwardkeeble.com/2014/02/passive-wifi-tracking/},
urldate={2014-12-10},
}
@online{hak5pineapple,
title={Wifi Pineapple / Jasager},
author={Robin Wood},
date={2014-12-10},
url={https://forums.hak5.org/index.php?/forum/64-wifi-pineapple-jasager/},
urldate={2014-12-10},
}
@online{dunstan2010attacking,
title={Attacking and Securing PEAP},
author={Patrick Dunstan},
date={2010-05-17},
url={http://www.defenceindepth.net/2010/05/attacking-and-securing-peap.html},
urldate={2014-12-10},
}
\end{filecontents}
\fi\endgroup
\addbibresource{references.bib}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Header %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{Multi-Surface Attack on Wi-Fi Security}
\author{%
Brian~Ridings,
Tyler~Romeo,
and~Neal~Trischitta
\IEEEcompsocitemizethanks{\IEEEcompsocthanksitem{} B. Ridings,
T. Romeo, and N. Trischitta are with the Stevens Institute of
Technology.}
\thanks{Copyright \texorpdfstring{\textcopyright}{(c)} 2014 Brian
Ridings, Tyler Romeo, and Neal Trischitta. Some rights
reserved. This work is licensed under the Creative Commons
Attribution-ShareAlike 4.0 International License. To view a copy
of this license, visit
\url{http://creativecommons.org/licenses/by-sa/4.0/}.}
}
\markboth{CS-577 Cybersecurity Lab, Fall 2014}{Ridings
  \MakeLowercase{\textit{et al.}}: Multi-Surface Attack on Wi-Fi
Security}
\begin{document}
\IEEEtitleabstractindextext{%
\begin{abstract}
Wireless security is critical in both personal and corporate
environments. Almost all wireless security protocols have some
    known exploit, and only a select few configurations can provide real
    security. As a proof of concept of this fact, a tiny form-factor,
    minimal-resource router was built that can automatically exploit
    multiple types of insecure wireless network configurations. The
    deliverable was a success, and demonstrated that with hardware
    costing less than \$15, an attacker can exploit WEP-, WPA-TKIP-,
EAP-MD5-, and EAP-MSCHAPv2-protected networks. Additionally,
networks with unsigned or self-signed X.509 certificates could be
trivially spoofed, allowing replay of client credentials. Router
and operating system developers are deemed responsible for
improving their products by disabling insecure protocols and
encouraging user best practices.
\end{abstract}
\begin{IEEEkeywords}
Wireless systems, network-level security and protection, unauthorized access
\end{IEEEkeywords}
}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Paper %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\IEEEPARstart{W}{ireless} security is critical infrastructure in
high-confidentiality or high-integrity networks. Eavesdropping and
packet alteration attacks are significantly easier on a wireless
medium, and thus traditional security mechanisms may not be enough due
to the inherent insecurity of the channel. The motivation for our
project was to demonstrate vulnerabilities in Wi-Fi security. For this
project, the researchers extended OpenWrt, an open, Linux-based
firmware that allowed working at a high level of abstraction, without
being concerned with drivers and other low-level details, while still
permitting full customization of the device and its components. The
deployed Wi-Fi attack framework attempts to intercept client
credentials on the wireless interface by actively listening for probe
requests from clients. In addition, the framework can perform basic
cracking of wireless protocols with known vulnerabilities.
The purpose of the experiment is to demonstrate as a proof-of-concept
exactly how many vulnerabilities exist in real-world wireless networks
and how easy it is for an attacker to infiltrate an insecure network
using minimal resources and opportunity. The first step to finding a
solution is to clearly identify the problem, and it is important that
users understand the liability they are incurring when using insecure
wireless security protocols.
Related work includes the Hak5 WiFi Pineapple~\cite{hak5pineapple}. The
Pineapple responds to client probe requests and lets the client connect
to that SSID on an open access point. The goal of this new device is not
to respond to probe requests, but instead to create an access point and
then wait for the client to initiate contact. Another goal is to support
multiple, configurable encryption types, whereas the Pineapple only
creates open access points. The Pineapple wants the user to connect so
that a man-in-the-middle attack can be performed, but the new device
wants to first capture credentials and other information that may be
sensitive to the client, without the client initiating the connection.
This paper is organized as follows: \autoref{sec:overview} gives an
overview of various wireless security protocols in use in modern
networks, and the vulnerabilities and exploits associated with those
protocols; \autoref{sec:design} outlines the design and implementation
of the proof-of-concept hardware deliverable demonstrating the
aforementioned exploits; and finally \autoref{sec:results} explains
the results of testing the final product and what was learned from
those results.
\section{Wireless Security Overview}
\label{sec:overview}
Securing wireless networks is a daunting task, to the point where some
companies simply do not allow wireless access to corporate services at
all. Wireless networks are generally secured using one of two security
standards: either IEEE 802.11--1999 (WEP) or IEEE 802.11i-2004 (WPA and
WPA2), the latter optionally combined with Wi-Fi Protected Setup
(WPS) or IEEE 802.1X-2004.
\subsection{WEP}
\label{sec:overview-wep}
Wired Equivalent Privacy, commonly abbreviated WEP, was the first
security algorithm developed for Wi-Fi, and was specified as part of
the original IEEE 802.11 standard (1999 edition, reaffirmed in 2003)
\cite{1389197}. As the name implies, the protocol was designed
primarily to provide privacy and confidentiality for wireless
communications, i.e., ensure that other clients could not listen in on
traffic being sent to the access point. WEP even has an `Open' mode,
where traffic is encrypted but clients do not have to authenticate to
the access point. In fact, the only method of authentication or access
control provided by WEP is pre-shared key authentication, which relies
on using the key in an RC4 cipher when encrypting challenge text from
the access point.
WEP is thoroughly broken. As an emphasis of this fact, using `Open'
mode, where no authentication is performed, is actually more secure
than using a pre-shared key. (The reason behind this has to do with
deriving the keystream by capturing the initial challenge.) Among the
many issues with the protocol are its use of RC4, a stream cipher that
requires a fresh key for every message, and weak 24-bit initialization
vectors that are prone to repetition. These weaknesses led to
the Fluhrer, Mantin, and Shamir attack
\cite{Fluhrer:2001:WKS:646557.694759}, which, including improvements
over the years, allows recovery of a 104-bit WEP key with 95\%
probability using only 85,000 captured network packets
\cite{Tews:2007:BBW:1784964.1784983}. This attack can be achieved on
consumer hardware in just a few minutes.
\subsection{WPA}
\label{sec:overview-wpa}
WEP has since been deprecated and superseded by Wi-Fi Protected
Access. The first version of WPA was released by the Wi-Fi Alliance,
based on a draft of IEEE 802.11i, as a stop-gap measure for replacing
WEP, at least until the IEEE 802.11 working group could flesh out the
final version of the protocol. That final version was released as
IEEE 802.11i-2004 and is commonly referred to as WPA2
\cite{4100091}. The first version of WPA used the Temporal Key
Integrity Protocol (TKIP). Although TKIP uses a 128-bit key and
discourages attacks with a new key mixing function and message
integrity checks, it uses some of the same techniques as WEP
encryption, e.g., using CRC32 for checksums, and thus is vulnerable in
many of the same ways. Halvorsen et~al.~\cite{Halvorsen:2009}
demonstrated recovery of up to 596 bytes of the TKIP
keystream in under twenty minutes. Eventually this was fixed in WPA2
with the advent of the Counter Mode CBC-MAC Protocol (CCMP or
AES-CCMP), which is considered secure as of the writing of this
paper. CCMP works by using CBC-MAC to compute an integrity tag over
the data and then encrypting both the data and the tag in CTR mode,
thus achieving authenticated encryption. Any attack on CCMP would imply an
attack on AES itself.
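As an illustrative sketch only (this is not the 802.11 CCMP key
hierarchy or the code used on the device; the key, nonce, and header
values below are made up), the CCM construction can be exercised with
the Python \texttt{cryptography} library, which supports the 8-byte
tag length used by CCMP:
\begin{verbatim}
import os
from cryptography.hazmat.primitives.ciphers \
    import aead

key = aead.AESCCM.generate_key(bit_length=128)
ccm = aead.AESCCM(key, tag_length=8)
nonce = os.urandom(13)   # 13-byte nonce
aad = b"frame header"    # authenticated only
ct = ccm.encrypt(nonce, b"payload", aad)
# raises InvalidTag if ct or aad is altered
pt = ccm.decrypt(nonce, ct, aad)
\end{verbatim}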
WPA also, unlike WEP, has more support dedicated to access control. In
addition to using a pre-shared key, two other methods of client
authentication may be used: Wi-Fi Protected Setup (WPS) and IEEE
802.1X.
\subsection{WPS}
\label{sec:overview-wps}
WPS was created by the Wi-Fi Alliance in 2006. It is not an IEEE
standard, but nonetheless it is supported on many consumer routers. It
was created to allow non-technology-savvy users to authenticate their
wireless devices more easily. The protocol works over EAP, and
involves having the user either push a button on the router or enter a
preset PIN on the device. The enrollee (client) and the registrar
(access point) negotiate configuration data and then reconnect using
the new configuration. The WPS protocol has since been broken due to a
design flaw in the PIN authentication, which reveals whether each half
of the PIN is correct and thus allows a brute-force attack to be
completed in under four hours
\cite{viehbock2011brute}. To make things worse, some routers are
misleading, and even if a user disables WPS in the user interface, it
remains enabled and the router will still negotiate with
clients. Fortunately, some router manufacturers have released firmware
updates that fix this issue, while others have implemented
rate-limiting on the WPS mechanism to stop brute-force attacks.
\subsection{802.1X}
\label{sec:overview-8021x}
Contrary to WPS, 802.1X is not an authentication protocol, but a
wrapper around an existing authentication framework. IEEE~802.1X-2004 is
a method of encapsulating the Extensible Authentication Protocol (or
EAP, as defined in RFC~3748~\cite{aboba2004extensible}) in IEEE~802
packets~\cite{1438730}. The protocol is sometimes referred to as
EAPOL, or EAP over LAN, and was originally designed for IEEE~802.3
Ethernet, but was extended to wireless in 2004. The 802.1X protocol
itself is not of much interest for security research, but the
underlying EAP is. As aforementioned, EAP is an authentication
framework that can be used with many different authentication methods,
some more secure than others.
Common authentication methods available on EAP are: EAP-MD5, EAP-PWD,
EAP-MSCHAP, EAP-TLS, EAP-TTLS, and PEAP\@. There are many others, but
only these are both widely supported natively by clients and are
commonly used by network administrators.
Starting at the bottom, EAP-MD5 involves a pre-shared key that, as the
name implies, is combined in MD5 to authenticate the
client~\cite{aboba2004extensible}. Other than the various
vulnerabilities inherited from using MD5, this protocol does not have
any server authentication, and thus is blatantly vulnerable to
man-in-the-middle attacks using a fake access point. EAP-MD5 was
officially deprecated in Windows Vista. EAP-PWD improves upon EAP-MD5
by using a secure hash function, specifically HMAC-SHA-256, and
authenticating both the server and the
client~\cite{harkins2010extensible}. While the key exchange protocol
is resistant to attack, this protocol may still suffer from users
setting low-entropy passwords or passwords that can be guessed using a
dictionary attack.
A more commonly used protocol is EAP-MSCHAP (either version 1 as
defined in RFC 2433~\cite{zorn1998microsoft} or version 2 as defined
in RFC 2759~\cite{zorn2000microsoft}). It involves using Microsoft's
customized version of the Challenge-Handshake Authentication Protocol,
and uses a three-way handshake using SHA1, MD4, and DES (together) to
authenticate both the client and the server. As the reader may be able
to infer, the protocol is needlessly complicated and, as demonstrated
by Marlinspike in 2012~\cite{marlinspike2012divide}, can be broken by
cracking a single DES key and then cracking the resulting MD4
hash. Since both tasks are trivial on modern hardware, MS-CHAP is
considered broken.
Finally, there are EAP-TLS, EAP-TTLS, and PEAP, which, with respect to
the protocol, are considered secure. The first uses TLS to
authenticate both the server and client. X.509 certificates are issued
to clients and servers, all signed by a common certificate authority
(CA). The latter two are very similar, and do not require the client
to have its own X.509 certificate. Instead, they establish the TLS
connection as a tunnel, and then let the client authenticate using
further protocols that are tunneled inside the TLS stream.
PEAP specifically, as developed by Microsoft, uses EAP once again
inside the encrypted TLS tunnel. In other words, it is a second
iteration of EAP inside a tunnel established via the first iteration
of EAP\@. The catch is that each version of PEAP (there are two) only
supports a single authentication mechanism for the inner iteration of
EAP\@. PEAPv0 uses EAP-MSCHAPv2, and PEAPv1 uses EAP-GTC, an alternative
to MSCHAPv2 developed by Cisco. With PEAP, since the inner
authentication protocol is run inside a TLS tunnel, it remains secure
even despite the vulnerabilities of
MS-CHAP~\cite{dunstan2010attacking}.\footnote{If the attacker can
perform a man-in-the-middle attack between the access point and the
RADIUS server used to authenticate, then a new attack on PEAP
becomes possible, but this paper does not consider that attack since
the primary threat model involves an outside attacker attempting to
gain access to the network.}
However, while all of these protocols are indeed the most secure out
of all the EAP options, they still inherit vulnerabilities from the
X.509 trust model. The server's certificate must be signed by a
trusted CA, and users must be instructed not to trust unsafe
certificates. Unfortunately, this can be nearly impossible since:
certificates for use network-wide in 802.1X systems are
extraordinarily expensive and are not affordable to smaller entities;
and even modern operating systems, like Windows 8 and OS X, provide
trivial and almost entirely ignorable warnings when a network provides
an untrusted server certificate. The only method of protection is to
go multiple user interface levels deep into advanced network settings
and toggle a flag disallowing unsafe certificates. As of the writing
of this paper, the wireless network at the Stevens Institute of
Technology does not use a trusted certificate, and the Information
Technology department explicitly instructs users to ignore warnings
and connect anyway.
\section{Design and Execution}
\label{sec:design}
\subsection{Threat Model}
\label{sec:design-model}
Overall, there are numerous vulnerabilities in all of the available
security protocols for Wi-Fi. The ones more readily exploitable, as
listed above, are:
\begin{itemize}
\item Sniff traffic for WEP-secured network and brute-force the
keystream.
\item Perform a man-in-the-middle attack on a WPA-TKIP-secured
network.
\item Brute-force the PIN for a WPS-secured network.
\item Use dictionaries or brute-force to guess a weak WPA-PSK or
EAP-PWD password.
\item Impersonate an Access Point of an EAP-MD5-secured network and
crack the client credentials.
\item Crack the DES key and the resulting MD4 hash of a key exchange
on an EAP-MSCHAP-secured network.
\item Provide a fake server TLS certificate on an EAP-TLS- or
EAP-TTLS-secured network and social engineer the user into accepting
it.
\item Provide a fake server TLS certificate on a PEAP-secured network
and then crack the inner MS-CHAPv2 exchange as described before.
\end{itemize}
Many of these attacks allow complete breaks of security in the entire
wireless network. Looking at the system from the perspective of the
attacker, there are a number of assets that might be of
interest. Specifically they are:
\begin{description}
\item[Network access] \hfill \\
The network may contain resources that only authenticated clients
can be allowed to access. If the wireless network provides access to
the entire LAN, including business-critical resources, this can pose
a high-level risk to the confidentiality and integrity of those
resources, and a medium-level risk to availability depending on the
services running. On the other end of the spectrum, even if the
wireless network is only provided for personal use, it could still
pose a low-level risk for integrity, specifically impersonation.
\item[Client credentials] \hfill \\
Some wireless attacks allow cracking of the client's original
credentials. This gives the attacker persistent access to the
    network under the client's identity, thus exacerbating the
    aforementioned risks concerning network access.
\item[Network traffic] \hfill \\
In limited cases where the attacker gains the client credentials,
and in cases where the attacker performs a man-in-the-middle attack
    on the client, the attacker can sniff and alter data sent to and
from the actual network. This poses a high-level risk to
confidentiality and integrity if any mission-critical information is
communicated over that network.
\end{description}
As a further note, some assumptions are made about the system when
making these attacks. Specifically, the attacker must be in an
opportune location to launch these attacks. A WEP key cannot be
cracked if the attacker is not in range of the network to capture
traffic. As a result, it is a requirement that this device be portable
and unnoticeable, thus facilitating easy, persistent planting of the
device in a suitable ground zero. In addition, if the network is
protected by EAP-TLS, EAP-TTLS, or PEAP, the assumption needs to be
made that either the server certificate is not signed by a trusted
authority or that users are either not knowledgeable enough or not
caring enough to properly validate the certificate every time they
connect to the network. In most situations, these two assumptions can
be made trivially.
\subsection{Hardware Design}
\label{sec:design-hardware}
In order to facilitate easy placement, a tiny form factor router was
chosen for this experiment: the TP-Link TL-WR703N~\cite{tpwr703n}. The
device measures \SI{5.7}{\cm} by \SI{5.7}{\cm} by \SI{1.8}{\cm}
(slightly larger with a low-profile USB storage device plugged in), and
has an Atheros AR7240 CPU at \SI{400}{\MHz} and an Atheros AR9331
chipset, with integrated 802.11b/g/n wireless.
One of the issues with the device was power. Power is supplied via
MicroUSB, but a power source is needed for the router to remain
operational. For an attacker, plugging into an outlet may not be an
option if the router is to be hidden, and thus a portable batter must
be connected to the router when it is planted. With Wi-Fi at
\SI{18}{dBm}, average current is \SI{100}{\mA} at approximately
\SI{5}{\volt}, giving a power consumption of \SI{0.5}{\W}. Under
normal conditions, a small \SI{3200}{\mA.\hour} phone battery could
power the router for \SI{32}{\hour}, which is more than enough to
exploit most network vulnerabilities or capture at least one set of
client credentials. With an even more powerful battery (on the current
market going as high as \SI{25000}{\mA.\hour}), the device becomes
larger but can last more than ten days.
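The runtime figures above follow from a simple capacity-over-current
estimate; the following throwaway Python sketch (values copied from
the text, not measured independently) reproduces the arithmetic:
\begin{verbatim}
capacity_mah = 3200  # small phone battery
current_ma = 100     # average draw at ~5 V
power_w = 5 * current_ma / 1000.0  # 0.5 W
hours = capacity_mah / current_ma  # 32.0 h
print(power_w, hours)
\end{verbatim}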
OpenWrt 12 `Attitude Adjustment' was used as the base system, with
various additional packages installed, such as hostapd for launching
multiple wireless interfaces simultaneously, scapy for scanning
wireless networks~\cite{keeble2014passive}, and freeradius2 for
simulating access to a RADIUS server in 802.1X-secured networks.
\subsection{Wireless Exploits}
\label{sec:design-exploits}
Some of the main purposes of the framework are to provide wireless
reconnaissance, to locate a wireless network, and to collect
information about its configuration and associated clients. The `scapy'
Python library, an interactive packet manipulation tool, was used
for the reconnaissance framework, since it provides the ability to
forge or decode packets of a wide number of protocols. In order to use
scapy, the wireless network interface was set to monitor mode to
capture and listen to all traffic transmission from the wireless
access point. Unlike promiscuous mode, which is also used for packet
sniffing, monitor mode allows packets to be captured without having to
associate with an access point or ad hoc network first. Using `scapy',
probe request and response frames were captured to identify clients
actively seeking any, or a particular, access point.
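As a minimal illustration (not the deliverable's exact code; the
monitor-mode interface name \texttt{wlan0mon} and the output format
are assumptions), probe request frames can be captured with `scapy'
roughly as follows:
\begin{verbatim}
from scapy.all import (sniff, Dot11Elt,
                       Dot11ProbeReq)

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        ssid = pkt[Dot11Elt].info
        ssid = ssid.decode(errors="ignore")
        print(pkt.addr2, ssid or "<any>")

# requires a monitor-mode interface
sniff(iface="wlan0mon", prn=handle, store=0)
\end{verbatim}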
For discovered 802.11 PEAP/WPA networks, the device then employs a
RADIUS impersonation attack. The attack, as described earlier,
exploits the lack of certificate validation by setting up a fake
access point with a forged certificate. In addition, the device runs a
patched RADIUS server that allows all username and password
combinations to be authenticated, while simultaneously logging the
credential exchange. Combined, these cause the client to attempt to
connect and subsequently expose its credentials in the resulting MS-CHAP
exchange.
\section{Results and Testing}
\label{sec:results}
The device was programmed to receive client probe requests and set up
up to six access points on demand. Access points are configured to
match the expected encryption settings of the actual access
point. Afterward, the device polls for clients attaching to the access
point and removes duplicate access points that have the same SSID but
are already operating on an interface. Furthermore, old interfaces are
pruned based on the time they were created due to the upper limit of
the number of interfaces supported by the wireless driver.
There are some limitations of this setup. One is that it is hard to
delete interfaces, add interfaces, and modify them without restarting
all of the interfaces. This causes issues because any existing client
traffic is interrupted upon adding, modifying, or deleting an
interface. Currently the research team is not aware of any method of
hot-swapping wireless interfaces with the currently available drivers.
The research team also took some of the results from Spoofing
Enterprise Routers, specifically FreeRadius-wpe patch files that allow
for logging RADIUS requests and responses in order to decrypt
passwords and log them if they are passed in plaintext. The patches were
not made for OpenWrt, so they had to be applied to the version of
FreeRadius that works with OpenWrt. To the research team's knowledge, this
is the first instance of a self-contained FreeRadius-wpe router on OpenWrt.
Ultimately, the device was able to successfully capture 802.1X client
credentials sent over PEAP-MSCHAPv2, which is the most popular
enterprise authentication protocol.
\section{Conclusion}
\label{sec:conclusion}
As demonstrated by the device, wireless security is in a broken
state. Most networks, both personal and enterprise, are open to
exploitation using various methods depending on the authentication
protocol in use.
The moral of the story is that there are only two security protocols
for Wi-Fi that are effective at mitigating threats: using WPA-CCMP-PSK
with a strong, non-guessable key; and using EAP-TLS, EAP-TTLS, or PEAP
(over 802.1X) with a trusted certificate authority and clients that
are configured to always reject untrusted certificates. Literally any
other wireless environment can be broken by consumer-grade hardware
and is entirely insecure.
The only solution to this problem is for router manufacturers to find
new ways to encourage users to utilize secure wireless technology, for
example, by completely removing WEP, WPA-TKIP, and WPS from routers. By
allowing only WPA2-CCMP, using either a PSK or the secure 802.1X, the
protocol is secure and any vulnerabilities could only be introduced by
user bad practices. For WPA2 with a PSK or with PEAPv0-MSCHAPv2, this paper
does not address methods of encouraging users to create secure
passwords and follow proper password management policies, as that is a
much larger scope problem that needs to be addressed by dedicated
research. For EAP-TLS, however, operating system developers should
make rejection of unsafe certificates the default setting and create
more prominent warnings when a user is prompted to inspect and accept
an unsafe certificate. A similar example of this problem and solution
is phishing warning toolbars in browsers, which were proven largely
ineffective and replaced with more serious and prominent, full-page
warnings~\cite{Wu:2006:STA:1124772.1124863}.
The onus is now on router and operating system developers to disable
insecure protocols in their firmware and software, thus enforcing user
best practices and improving the general state of wireless security.
\printbibliography{}
\end{document}
\subsubsection{\stid{1.16} SICM}
\paragraph{Overview} The goal of this project is to create a universal interface for discovering, managing and sharing within complex memory hierarchies. The result will be a memory API and a software library which implements it. These will allow operating system, runtime and application developers and vendors to access emerging memory technologies. The impact of the project will be immediate and potentially wide reaching, as developers in all areas are struggling to add support for the new memory technologies, each of which offers their own programming interface. The problem we are addressing is how to program the deluge of existing and emerging complex memory technologies on HPC systems. This includes the MCDRAM (on Intel Knights Landing), NV-DIMM, PCI-E NVM, SATA NVM, 3D stacked memory, PCM, memristor, and 3Dxpoint. Also, near node technologies, such as PCI-switch accessible memory or network attached memories, have been proposed in exascale memory designs. Current practice depends on ad hoc solutions rather than a uniform API that provides the needed specificity and portability. This approach is already insufficient and future memory technologies will only exacerbate the problem by adding additional proprietary APIs. Our solution is to provide a unified two-tier node-level complex memory API. The target users for the low-level interface are system and runtime developers, as well as expert application developers that prefer full control of what memory types the application is using. The high-level interface is designed for application developers who would rather define coarser-level constraints on the types of memories the application needs and leave out the details of the memory management. The low-level interface is primarily an engineering and implementation project. The solution it provides is urgently needed by the HPC community; as developers work independently to support these novel memory technologies, time and effort is wasted on redundant solutions and overlapping implementations. We can achieve success due to our team’s extensive experience with runtimes and applications. Our willingness to work with and accept feedback from multiple hardware vendors and ECP developers differentiates our project from existing solutions and will ultimately determine the scale of adoption and deployment.
\begin{itemize}
\item Low-Level Interface: Finished a refactor of the low-level interface supporting memory arenas on different memory types. Added initial support for Global Arrays and OMPI-X. Reviewing the features needed to fully support these runtimes.
\item Analysis: Evaluating ACME using Gem5. We are currently resolving some compatibility issues between the ACME build environment and our Gem5 virtual machine. We now have traces from E3SM and several mini-apps.
\item Analysis and High-Level Interface: A new tool-chain based on Intel PEBS instrumentation that analyzes memory use and suggests placement.
\item Cost Models: We have extracted a lot of experimental data related to application memory use and layout. This was done with full applications on hardware with the memory throttling-based emulation methodology.
\item Cost Models: Development of a tool, Mnemo, which provides automated recommendations for capacity sizing of heterogeneous memories for object store workloads. Given a platform with a specific configuration of different memories and a (representative) workload, we can quickly extract some of the relevant memory usage metrics and produce cost-benefit estimation curves as a function of different capacity allocations of the different types of memories. The output of Mnemo is a set of estimates that gives its users the information needed to make informed decisions about capacity allocations. This can have practical use in shared capacity platforms, or to right-size capacity allocations for collocated workloads.
\end{itemize}
\paragraph{Next Steps}
\begin{itemize}
\item Low-Level Interface: Focus on support for runtimes, adding features requested to support Global Arrays, OpenMP, and MPI. Test with proxy applications for functionality and correctness. Continue the ongoing investigation of Linux kernel modifications for page migration in collaboration with ECP project Argo 2.3.5.05 and the RIKEN research center in Japan. Verify support of emerging OpenMP standards.
\item Document the needs of the ACME climate app for hybrid memory analysis (with ORNL collaborators; related to the UNITY SSIO ASCR project)
\item Understand the capabilities of hwloc and netloc with respect to OMPI-X needs and work with the managers of those libraries.
\end{itemize}
\documentclass{beamer}
\mode<presentation> {
\usetheme{Madison}
% \usetheme[left,hideallsubsections,width=1cm]{UWThemeB}
\usefonttheme[onlymath]{serif}
}
\usepackage{booktabs, calc, rotating}
\usepackage{scalefnt}
\usepackage[english]{babel}
\usepackage[latin1]{inputenc}
\usepackage{times}
%\usepackage[T1]{fontenc}
\usepackage{alltt}
\setbeamertemplate{navigation symbols}{}
\setbeamertemplate{blocks}[rounded][shadow=true]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CHANGE THE TITLE AND INPUT FILE ACCORDING TO THE CHAPTER THAT YOU WISH TO COMPILE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title[Frequency]{Frequency Distributions}
\date[Fall 2017]{Fall 2017}
%Personal definitions - Bayes Regression
\def\bsb{\boldsymbol \beta}
\def\bsa{\boldsymbol \alpha}
\def\bsm{\boldsymbol \mu}
\def\bsS{\boldsymbol \Sigma}
\def\bsx{\boldsymbol \xi}
\begin{document}
\frame{\titlepage}
\begin{frame}
\frametitle{Outline}
%\tableofcontents[part=1,pausesections]
\tableofcontents[part=1]
\end{frame}
\part<presentation>{Main Talk}
\section{How Frequency Augments Severity Information}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Basic Terminology}
\begin{itemize}
\item \textcolor{blue}{Claim} - indemnification upon the occurrence of an insured
event \vspace{2mm}
\begin{itemize}
\item \textcolor{blue}{Loss} - some authors use claim and loss interchangeably, others think of loss as the amount suffered by the insured whereas claim is the amount paid by the
insurer \vspace{2mm}
\end{itemize}
\item \textcolor{blue}{Frequency} - how often an insured event occurs, typically within a policy
contract \vspace{2mm}
\item \textcolor{blue}{Count} - In this chapter, we focus on count random variables that represent the number of claims, that is, how frequently an event
occurs \vspace{2mm}
\item \textcolor{blue}{Severity} - Amount, or size, of each payment for an insured event
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{The Importance of Frequency}
\begin{itemize}
\item Insurers pay claims in monetary units, e.g., US dollars. So, why should they care about how frequently claims occur? \pause
\item Setting the price of an insurance good can be problematic:
\begin{itemize}
\item In manufacturing, the cost of a good is (relatively) known
\item In other areas of financial services, market prices are available
\item Price of an insurance good?: start with an expected cost, add ``margins'' to account for riskiness, expenses, and a profit/surplus allowance
\end{itemize}
\item We can think of the expected cost as the expected number of claims times the expected amount per claim; that is, expected \textit{frequency times severity}
\item Claim amounts, or severities, will turn out to be relatively homogeneous for many lines of business and so we begin our investigations with frequency modeling
\end{itemize}
%\end{itemize}
\end{frame}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Other Ways that Frequency Augments Severity Information I}
\begin{itemize}
\item \textbf{Contractual} - For example, deductibles and policy limits are often in terms of each occurrence of an insured
event \vspace{2mm}
\item \textbf{Behaviorial} - Explanatory (rating) variables can have different effects on models of how often an event occurs in contrast to the size of the event
\begin{itemize}
\item In healthcare, the decision to utilize healthcare by individuals is related primarily to personal characteristics whereas the cost per user may be more related to characteristics of the healthcare provider (such as the physician)
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Other Ways that Frequency Augments Severity Information II}
\begin{itemize}
\item \textbf{Databases}
\begin{itemize}
\item Many insurers keep separate data files that suggest developing separate frequency and severity
models \vspace{2mm}
\item This recording process makes it natural for insurers to model the frequency and severity as separate
processes \vspace{4mm}
\end{itemize}
\item \textbf{Regulatory and Administrative}
\begin{itemize}
\item Regulators routinely require the reporting of claims numbers as well as
amounts \vspace{2mm}
\item This may be due to the fact that there can be alternative definitions of an ``amount,'' e.g., paid versus incurred, and there is less potential error when reporting claim numbers
\end{itemize} \end{itemize}
\end{frame}
\section{Basic Frequency Distributions}
\begin{frame}%[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Foundations}
\begin{itemize}\scalefont{0.8}
\item We focus on claim counts $ N$ with support on the non-negative integers $k=0,1,2,
\ldots$ %\vspace{2mm}
\item The \textcolor{blue}{probability mass function} is denoted as $\Pr(N = k) =
p_k$ \vspace{2mm}
\item We can summarize the distribution through its
\textcolor{blue}{moments}:
\begin{itemize}\scalefont{0.8}
\item The \textcolor{blue}{mean}, or first moment, is
$$ \mathrm{E~} N = \mu = \sum^{\infty}_{k=0} k ~ p_k $$
% \end{itemize}
\item More generally, the $r$th moment is
$$ \mathrm{E~} N^r = \mu^{\prime}_r = \sum^{\infty}_{k=0} k^r p_k $$
\item It is common to use the \textcolor{blue}{variance}, which is the second moment about the mean,
$$\mathrm{Var~} N = \mathrm{E~} (N-\mu)^2 = \mathrm{E~} N^2 -
\mu^2$$ %\vspace{2mm}
\end{itemize}
\item Also recall the \textcolor{blue}{moment generating function} (mgf):
$$M(t) = \mathrm{E~}e^{tN} = \sum^{\infty}_{k=0} e^{tk} p_k $$
\end{itemize}
\end{frame}
\begin{frame}%[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Probability Generating Function}
\begin{itemize}\scalefont{0.9}
\item The \textcolor{blue}{probability generating function} (pgf) is:
\begin{eqnarray*}
\mathrm{P}(z) &=& \mathrm{E~}z^N = \mathrm{E~}\exp{(N \ln z)} = M(\ln{z})\\
&=& \sum^{\infty}_{k=0} z^k p_k \vspace{2mm}
\end{eqnarray*}
\item By taking the $m$th derivative, we see that the pgf ``generates'' the
probabilities:
\begin{eqnarray*}
\left. P^{(m)}(z)\right|_{z=0} &=& \frac{\partial^m }{\partial z^m} P(z)|_{z=0} = p_m m!
\end{eqnarray*}
\vspace{2mm}
\item Further, the pgf can be used to generate moments:
\begin{eqnarray*}
P^{(1)}(1) &=& \sum^{\infty}_{k=0} k p_k = \mathrm{E~}N
\end{eqnarray*}
and
\begin{equation*}
P^{(2)}(1) = \mathrm{E~}[N(N-1)]
\end{equation*}
\end{itemize}
\end{frame}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Important Frequency Distributions}
\begin{itemize}
\item The three important (in insurance) frequency distributions
are: \vspace{2mm}
\begin{itemize}
\item Poisson \vspace{2mm}
\item Negative binomial \vspace{2mm}
\item Binomial \vspace{4mm}
\end{itemize}
\item They are important because: \vspace{2mm}
\begin{itemize}
\item They fit well many insurance data sets of interest \vspace{2mm}
\item They provide the basis for more complex distributions that even better approximate real situations of interest to us
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Poisson Distribution}
\begin{itemize}
\item This distribution has parameter $\lambda$, probability mass function
\begin{equation*}
p_k = \frac{e^{-\lambda}\lambda^k}{k!}
\end{equation*}
and pgf
\begin{eqnarray*}
P(z) &=& M_N (\ln z) = \exp(\lambda(z-1)) \vspace{2mm}
\end{eqnarray*}
\item The expectation is $\mathrm{E~}N = \lambda $, which is the same as the variance, $\mathrm{Var~}N = \lambda$ (verified via the pgf on the next slide)
\end{itemize}
\end{frame}
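% Worked example (sketch): verifies the stated Poisson mean and variance
% using only the pgf machinery introduced in the preceding frames.
\begin{frame}
\frametitle{Poisson Distribution - Moments via the pgf}
\begin{itemize}
\item As a quick check, differentiate the pgf $P(z) = \exp(\lambda(z-1))$:
\begin{eqnarray*}
P^{(1)}(z) &=& \lambda \exp(\lambda(z-1)) \\
P^{(2)}(z) &=& \lambda^{2} \exp(\lambda(z-1))
\end{eqnarray*}
\item Evaluating at $z=1$ gives $\mathrm{E~}N = P^{(1)}(1) = \lambda$ and
$\mathrm{E~}[N(N-1)] = P^{(2)}(1) = \lambda^2$ \vspace{2mm}
\item Hence
\begin{eqnarray*}
\mathrm{Var~}N &=& \mathrm{E~}[N(N-1)] + \mathrm{E~}N - \left(\mathrm{E~}N\right)^2 \\
&=& \lambda^2 + \lambda - \lambda^2 ~=~ \lambda
\end{eqnarray*}
\end{itemize}
\end{frame}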
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Negative Binomial Distribution}
\begin{itemize}
\item This distribution has parameters $(r, \beta)$, probability mass function
\begin{equation*}
p_k = {k+r-1\choose k} (\frac{1}{1+\beta})^r
(\frac{\beta}{1+\beta})^k
\end{equation*}
and pgf
\begin{eqnarray*}
P(z) &=& (1-\beta(z-1))^{-r}
\end{eqnarray*} \vspace{2mm}
\item The expectation is $\mathrm{E~}N = r\beta $ and the variance is $\mathrm{Var~}N =
r\beta(1+\beta)$ \vspace{2mm}
\item If $r$ = 1, this distribution is called the \textcolor{blue}{geometric
distribution} \vspace{2mm}
\item As $\beta>0$, we have $\mathrm{Var~}N >\mathrm{E~}N$. This distribution is said to be \textcolor{blue}{overdispersed} (relative to the Poisson)
\end{itemize}
\end{frame}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Binomial Distribution}
\begin{itemize}
\item This distribution has parameters $(m,q)$, probability mass function
\begin{equation*}
p_k = {m\choose k} q^k (1-q)^{m-k}
\end{equation*}
and pgf
\begin{eqnarray*}
P(z) &=& (1+q(z-1))^m
\end{eqnarray*} \vspace{2mm}
\item The mean is $\mathrm{E~}N = mq$ and the variance is $\mathrm{Var~}N =
mq(1-q)$
\end{itemize}
\end{frame}
\section{The ($a, b$, 0) Class}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{The ($a, b$, 0) Class}
\begin{itemize}
\item Recall the notation: $p_k= \Pr(N = k)$ \vspace{2mm}
\item \textit{Definition}. A count distribution is a member of the \textcolor{blue}{($a, b$, 0) class} if the probabilities $p_k$ satisfy
\begin{equation*}
\frac{p_k}{p_{k-1}}=a+\frac{b}{k},
\end{equation*}
for constants $a,b$ and for $k=1,2,3, \ldots $ \vspace{2mm}
\begin{itemize}
\item There are only three distributions that are members of the ($a,b$,0) class. They are the Poisson ($a=0$), binomial ($a<0$), and negative binomial
($a>0$) \vspace{2mm}
\item The recursive expression provides a computationally efficient way to generate
probabilities (a short computational sketch follows on the next slide)
\end{itemize}
\end{itemize}
\end{frame}
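% Minimal computational sketch (Python, illustrative only); the function and
% variable names are ours, not from any package.
\begin{frame}[fragile]
\frametitle{The ($a, b$, 0) Class - A Computational Sketch}
\begin{itemize}
\item A few lines of Python generate $p_0, p_1, \ldots$ from the recursion
$p_k = \left(a + \frac{b}{k}\right) p_{k-1}$:
\end{itemize}
{\scriptsize
\begin{verbatim}
import math

def ab0_pmf(a, b, p0, kmax):
    # p_k = (a + b/k) * p_{k-1}, k = 1, ..., kmax
    p = [p0]
    for k in range(1, kmax + 1):
        p.append((a + b / k) * p[-1])
    return p

# Poisson with lambda = 2: a = 0, b = 2, p0 = exp(-2)
probs = ab0_pmf(0.0, 2.0, math.exp(-2.0), 5)
# probs[1] is approximately 0.27067
\end{verbatim}
}
\end{frame}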
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{The ($a, b$, 0) Class - Special Cases}
\begin{itemize}
\item \textit{Example: Poisson Distribution}.
\begin{itemize}
\item Recall $p_k =\frac{\lambda^k}{k!}e^{-\lambda}$. Examining the ratio,
\begin{equation*}
\frac{p_k}{p_{k-1}} =
\frac{\lambda^k/k!}{\lambda^{k-1}/(k-1)!}\frac{e^{-\lambda}}{e^{-\lambda}}=
\frac{\lambda}{k} \vspace{2mm}
\end{equation*}
Thus, the Poisson is a member of the ($a, b$, 0) class with $a = 0$,
$b = \lambda$, and initial starting value $p_0 = e^{-\lambda}$
\vspace{2mm}
\end{itemize}
\textbf{Other special cases} (Please check)
\item \textit{Example: Binomial Distribution}. Member of the ($a, b$, 0) class with $a = \frac{-q}{1-q},$ $b = \frac{(m+1)q}{1-q},$ and $p_0 =
(1-q)^m$ \vspace{2mm}
\item \textit{Example: Negative Binomial Distribution}. Member of the ($a, b$, 0) class with $a = \frac{\beta}{1+\beta},$ $b = \frac{(r-1)\beta}{1+\beta},$ and $p_0 = (1+\beta)^{-r}$
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{The ($a, b$, 0) Class - Exercises}
\textit{Exercise.} A discrete probability distribution has the
following properties:
\begin{eqnarray*}
p_k&=&c\left( 1+\frac{1}{k}\right) p_{k-1}, \:\:\: k=1,2,3, \dots \\
p_0&=& 0.5
\end{eqnarray*} \vspace{2mm}
Determine the expected value of this discrete random variable
\vspace{4mm}
\textit{Exercise.} A discrete probability distribution has the
following properties:
\begin{eqnarray*}
\Pr(N=k) = \left( \frac{3k+9}{8k}\right) \Pr(N=k-1), ~~~k=1,2,3,\ldots
\end{eqnarray*} \vspace{2mm}
Determine the value of $\Pr(N=3)$
\end{frame}
\section{Estimating Frequency Distributions}
\begin{frame}%[shrink=2]
\frametitle{Parameter Estimation}
\begin{itemize}\scalefont{0.9}
\item The customary method of estimation is \textbf{maximum likelihood} %\vspace{2mm}
\item To provide intuition, we outline the ideas in the context of Bernoulli distribution %\vspace{2mm}
\begin{itemize}\scalefont{0.9}
\item This is a special case of the binomial distribution with $m=1$
\item For count distributions, either there is a claim ($N=1$) or not ($N=0$). The probability mass function is:
\begin{equation*}
p_k = \Pr (N=k) = \left\{ \begin{array}{ll}
1-q & \mathrm{if}\ k=0 \\
q& \mathrm{if}\ k=1
\end{array} \right. .
\end{equation*}
\end{itemize} \vspace{2mm}
\item \textcolor{blue}{The Statistical Inference Problem}
\begin{itemize} \scalefont{0.9}
\item Now suppose that we have a collection of independent random variables. The $i$th variable is denoted as $N_i$. Further assume they have the same Bernoulli distribution with parameter $q$ %\vspace{2mm}
\item In statistical inference, we assume that we observe a sample of such random variables. The observed value of the $i$th random variable is $n_i$. Assuming that the Bernoulli distribution is correct, we wish to say something about the probability parameter $q$
\end{itemize}\end{itemize}
\end{frame}
\begin{frame}[shrink=2]
\frametitle{Bernoulli Likelihoods}
\begin{itemize}
\item \textit{Definition}. The \textcolor{blue}{likelihood} is the observed value of the mass
function \vspace{2mm}
\item For a single observation, the likelihood is:
\begin{equation*}
\left\{
\begin{array}{ll}
1-q & \mathrm{if}\ n_i=0 \\
q & \mathrm{if}\ n_i=1
\end{array}
\right. .
\end{equation*} \vspace{2mm}
\item The objective of \textcolor{blue}{maximum likelihood estimation (MLE)} is to find the parameter values that produce the largest
likelihood \vspace{2mm}
\begin{itemize}
\item Finding the maximum of the logarithmic function yields the same solution as finding the maximum of the corresponding
function \vspace{2mm}
\item Because it is generally computationally simpler, we consider the logarithmic (log-) likelihood, written
as:
\begin{equation*}
\left\{
\begin{array}{ll}
\ln \left( 1-q\right) & \mathrm{if}\ n_i=0 \\
\ln q & \mathrm{if}\ n_i=1
\end{array}\right. .
\end{equation*}
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]
\frametitle{Bernoulli MLE I}
\begin{itemize}
\item More compactly, the log-likelihood of a single observation is:
\begin{equation*}
n_i \ln q + (1-n_i)\ln ( 1-q ) \vspace{2mm}
\end{equation*}
\item Assuming independence, the log-likelihood of the data set is:
\begin{equation*}
L_{Bern}(q)=\sum_i \left\{ n_i \ln q + (1-n_i)\ln ( 1-q ) \right\}
\end{equation*} \vspace{2mm}
\begin{itemize}
\item The (log) likelihood is viewed as a function of the parameters, with the data held
fixed \vspace{2mm}
\item In contrast, the joint probability mass function is viewed as a function of the realized data,
with the parameters held fixed \vspace{2mm}
\end{itemize}
\item The method of maximum likelihood means finding the values of $q$ that maximize the log-likelihood
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]
\frametitle{Bernoulli MLE II}
\begin{itemize}
\item We began with the Bernoulli distribution in part because the log-likelihood is easy to
maximize \vspace{2mm}
\item Take a derivative of $L_{Bern}(q)$ to get
\begin{equation*}
\frac{\partial}{\partial q} L_{Bern}(q)=\sum_i \left\{ n_i \frac{1}{q} - (1-n_i)\frac{1}{1-q} \right\}
\end{equation*}
and solving the equation $\frac{\partial}{\partial q} L_{Bern}(q) =0$ yields
\begin{equation*}
\hat{q} = \frac{\sum_i n_i}{\mathrm{sample ~size}}
\end{equation*}
or, in words, the $MLE$ $\hat{q}$ is the fraction of ones in the
sample \vspace{2mm}
\item Just to be complete, you should check, by taking derivatives, that when we solve $\frac{\partial}{\partial q} L_{Bern}(q) =0$ we are maximizing the function $L_{Bern}(q)$, not minimizing it
\end{itemize}
\end{frame}
\begin{frame}%[shrink=2]
\frametitle{Frequency Distributions MLE I}
\begin{itemize}\scalefont{0.9}
\item We can readily extend this procedure to all frequency
distributions \vspace{2mm}
\item For notation, suppose that $\theta$ (``theta'') is a parameter that describes a given frequency distribution $\Pr(N=k; \theta) =
p_k(\theta)$ \vspace{2mm}
\begin{itemize}\item In later developments we will let $\theta$ be a vector but for the moment assume it to be a
scalar\end{itemize} \vspace{2mm}
\item The log-likelihood of a single observation is
\begin{equation*}
\left\{
\begin{array}{ll}
\ln p_0(\theta) & \mathrm{if}\ n_i=0 \\
\ln p_1(\theta) & \mathrm{if}\ n_i=1 \\
\vdots & \vdots
\end{array}
\right. .
\end{equation*}
that can be written more compactly as
\begin{equation*}
\sum_k I(n_i=k) \ln p_k(\theta).
\end{equation*} \vspace{2mm}
This uses the notation $I(\cdot)$ to be the indicator of a set (it
returns one if the event is true and 0 otherwise)
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]
\frametitle{Frequency Distributions MLE II}
\begin{itemize}
\item Assuming independence, the log-likelihood of the data set is
\begin{equation*}
L(\theta)=\sum_i \sum_k I(n_i=k) \ln p_k(\theta) = \sum_k m_k\ln p_k(\theta)
\end{equation*}
where we use the notation $m_k$ to denote the number of observations
that are observed having count $k$ \vspace{2mm}
In symbols, $m_k = \sum_i I(n_i=k)$ \vspace{2mm}
\item \textbf{Special Case}. \textit{Poisson}. A simple exercise in calculus yields
$$ \hat{\lambda} = \frac{\sum_k k m_k}{\mathrm{sample ~size}}$$
the average claim count (the calculus is sketched on the next slide)
\end{itemize}
\end{frame}
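% Sketch of the calculus behind the Poisson MLE quoted on the previous slide;
% it uses only the notation already introduced.
\begin{frame}
\frametitle{Frequency Distributions MLE - Poisson Derivation}
\begin{itemize}
\item With $p_k(\lambda) = e^{-\lambda}\lambda^k/k!$, the log-likelihood is
\begin{eqnarray*}
L(\lambda) &=& \sum_k m_k \ln p_k(\lambda)
= \sum_k m_k \left( k \ln \lambda - \lambda - \ln k! \right)
\end{eqnarray*}
\item Setting the derivative to zero,
\begin{eqnarray*}
\frac{\partial}{\partial \lambda} L(\lambda)
&=& \sum_k m_k \left( \frac{k}{\lambda} - 1 \right) = 0
\quad \Rightarrow \quad
\hat{\lambda} = \frac{\sum_k k\, m_k}{\sum_k m_k},
\end{eqnarray*}
the average claim count ($\sum_k m_k$ is the sample size)
\end{itemize}
\end{frame}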
\section{Other Frequency Distributions}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Other Frequency Distributions}
\begin{itemize}
\item Naturally, there are many other count distributions needed in
practice \vspace{2mm}
\item For many insurance applications, one can work with one of our three basic distributions (binomial, Poisson, negative binomial) and allow the parameters to be a function of known explanatory
variables \vspace{2mm}
\begin{itemize}
\item This allows us to explain claim probabilities in terms of known (to the insurer) variables such as age, sex, geographic
location, etc.
\item This field of statistical study is known as \textbf{regression analysis} - it is an important topic that we will not pursue in this
course \vspace{2mm}
\end{itemize}
\item To extend our basic count distributions to alternatives needed in practice, we consider two approaches:
\begin{itemize}
\item Zero truncation or modification
\item Mixing
\end{itemize}
\end{itemize}
\end{frame}
\subsection{Zero Truncation or Modification}
\begin{frame}%\beamerdefaultoverlayspecification{<+->}
\frametitle{Zero Truncation or Modification}
\begin{itemize}
\item Why truncate or modify zero?
\begin{itemize}
\item If we work with a database of claims, then there are no zeroes!
\item In personal lines (like auto), people may not want to report that first claim
(why?) \vspace{2mm}
\end{itemize}
\item Let's modify zero probabilities in terms of the $(a,b,0)$
class \vspace{2mm}
\item \textit{Definition}. A count distribution is a member of the \textcolor{blue}{($a, b$, 1) class} if the probabilities $p_k$ satisfy
\begin{equation*}
\frac{p_k}{p_{k-1}}=a+\frac{b}{k},
\end{equation*}
for constants $a,b$ and for $k=2,3, \ldots $ \vspace{2mm}
\item Note that this starts at $k=2$, not $k=1$ (starts at $p_1$, not
$p_0$) \vspace{2mm}
\item Thus, all distributions that are members of the ($a, b$, 0) are members of the ($a, b$, 1) class. Naturally, there are additional distributions that are members of this wider class
\end{itemize}
\end{frame}
\begin{frame}%[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Zero Truncation or Modification}
\begin{itemize}\scalefont{0.8}
\item Pick a specific distribution in the ($a, b$, 0) class: \vspace{2mm}
\begin{itemize}
\item Consider $p_k^0$ to be a probability for this member of $(a,b,0)$
\item Let $p_k^M$ be the corresponding probability for a member of $(a,b,1)$, where the $M$ stands for ``modified''
\item Pick a new probability of a zero claim, $p_0^M$, and define
\begin{eqnarray*}
c = \frac{1-p_0^M}{1-p_0^0} .
\end{eqnarray*} %\vspace{2mm}
\item We then calculate the rest of the modified distribution as
\begin{eqnarray*}
p_k^M =c p_k^0
\end{eqnarray*} %\vspace{2mm}
\end{itemize}
\item \textit{Special Case: Poisson Truncated at Zero.} For this case, we assume that $p_0^M=0$, so that the probability of $N=0$ is zero, hence the name ``truncated at zero'' %\vspace{2mm}
\item For this case, we use the letter $T$ to denote probabilities instead of $M$, so we use $p_k^T$ for probabilities. Thus,
\begin{eqnarray*}
p_k^T&=&
\left \{
\begin{array}{cc}
0 & k=0\\ \frac{1}{1-p_0^0}p_k^0 & k \ge 1\\
\end{array}
\right.
\end{eqnarray*}
\end{itemize}
\end{frame}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Modified Poisson Example} \textit{Example: Zero
Truncated/Modified Poisson}. Consider a Poisson distribution with
parameter $\lambda=2$ \vspace{2mm}
We show how to calculate $p_k, k=0,1,2,3$, for the usual
(unmodified), truncated, and a modified version with $p_0^M=0.6$
\vspace{2mm}
\textit{Solution.} For the Poisson distribution as a member of the
($a,b$,0) class, we have $a=0$ and $b=\lambda=2$ \vspace{2mm}
Thus, we may use the recursion $p_k = \lambda p_{k-1}/k= 2
p_{k-1}/k$ for each type, after determining starting probabilities
\vspace{2mm}
\bigskip
\scalefont{0.8}
\begin{tabular}{cccc}
\hline
k & $p_k$ & $p_k^T$ & $p_k^M$ \\
\hline
0 & $p_0=e^{-\lambda}=0.135335$ & $p_0^T$ = 0 & $p_0^M$ = 0.6 \\
1 & $p_1=p_0(0+\frac{\lambda}{1})=0.27067$ & $p_1^T=\frac{p_1}{1-p_0}=0.313035$ & $p_1^M$=$\frac{1-p_0^M}{1-p_0}~p_1=0.125214$ \\
2 & $p_2=p_1\left( \frac{\lambda}{2}\right)=0.27067$ & $p_2^T=p_1^T\left(\frac{\lambda}{2}\right)=0.313035$ & $p_2^M=p_1^M\left(\frac{\lambda}{2}\right)=0.125214$ \\
3 & $p_3=p_2\left(\frac{\lambda}{3}\right)=0.180447$ & $p_3^T=p_2^T\left(\frac{\lambda}{3}\right)=0.208690$ & $p_3^M=p_2^M\left(\frac{\lambda}{3}\right)=0.083476$ \\
\hline
\end{tabular}\scalefont{1.25}
\end{frame}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Modified Distribution Exercise} \textit{Exercise: Course
3, May 2000, Exercise 37.} You are given: \vspace{2mm}
\begin{itemize}
\item $p_k$ denotes the probability that the number of claims equals $k$ for
$k=0,1,2,\ldots$ \vspace{2mm}
\item $\frac{p_n}{p_m}=\frac{m!}{n!}, m\ge 0, n\ge 0$ \vspace{2mm}
\end{itemize}
Using the corresponding zero-modified claim count distribution with
$p_0^M=0.1$, calculate $p_1^M$
\end{frame}
\subsection{Mixture Distributions}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Mixtures of Finite Populations}
\begin{itemize}
\item Suppose that our population consists of several subgroups, each with its own
distribution \vspace{2mm}
\item We randomly draw an observation from the population, without knowing from which subgroup we are drawing
\vspace{2mm}
\item For example, suppose that $N_1$ represents claims from ``good'' drivers and $N_2$ represents claims from ``bad'' drivers. We draw
\begin{equation*}
N =
\begin{cases}
N_1 & \text{with prob~}\alpha\\
N_2 & \text{with prob~}(1-\alpha) .\\
\end{cases}
\end{equation*} \vspace{2mm}
\item Here, $\alpha$ represents the probability of drawing a ``good''
driver \vspace{2mm}
\item ``Mixture'' of two subgroups
\end{itemize}
\end{frame}
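\begin{frame}[fragile]
\frametitle{Aside: Simulating a Two-Point Mixture}
A minimal Python sketch (an aside of ours, not part of the original slides) of
the ``good driver / bad driver'' mixture: draw the subgroup first, then the
claim count. The Poisson means and $\alpha$ below are illustrative values, not
taken from any example.
{\scriptsize
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

alpha, lam_good, lam_bad = 0.7, 0.5, 2.0   # illustrative values only
rng = np.random.default_rng(0)

n = 100_000
is_good = rng.random(n) < alpha            # subgroup of each policyholder
N = rng.poisson(np.where(is_good, lam_good, lam_bad))

# empirical frequencies versus the exact mixture pmf
for k in range(4):
    exact = (alpha * poisson.pmf(k, lam_good)
             + (1 - alpha) * poisson.pmf(k, lam_bad))
    print(k, round((N == k).mean(), 3), round(exact, 3))
\end{verbatim}
}
\end{frame}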
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Finite Population Mixture Example}
\textit{Exercise. Exam "C" 170.} In a certain town the number of
common colds an individual will get in a year follows a Poisson
distribution that depends on the individual's age and smoking
status\vspace{2mm}
The distribution of the population and the mean number of colds are
as follows: \vspace{2mm} \scalefont{0.8}
\begin{center}
\begin{tabular}{l|cc}
\hline & Proportion of population & Mean number of colds \\ \hline
Children & 0.3 & 3 \\
Adult Non-Smokers & 0.6 & 1 \\
Adult Smokers & 0.1 & 4 \\\hline
\end{tabular}\end{center}
\scalefont{1.25} \vspace{2mm}
\begin{enumerate}
\item Calculate the probability that a randomly drawn person has 3 common colds in a
year \vspace{2mm}
\item Calculate the conditional probability that a person with exactly 3 common colds in a year is an adult
smoker
\end{enumerate}
\end{frame}
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Mixtures of Infinitely Many Populations}
\begin{itemize}
\item We can extend the mixture idea to an infinite number of
populations (subgroups) \vspace{2mm}
\item To illustrate, suppose we have a population of drivers. The $i$th person has their own (personal) Poisson distribution with expected number of claims,
$\lambda_i$ \vspace{2mm}
\item For some drivers, $\lambda$ is small (good drivers), for others it is high (not so good drivers). There is a distribution of
$\lambda$ \vspace{2mm}
\item A convenient distribution for $\lambda$ is a \textcolor{blue}{gamma distribution} with parameters $(\alpha,
\theta)$ \vspace{2mm}
\item Then, one can check that if $N|\Lambda \sim$ Poisson$(\Lambda)$
and if $\Lambda \sim$ gamma$(\alpha, \theta)$:
\begin{eqnarray*}
N &\sim& \text{Negative Binomial} (r = \alpha, \beta = \theta)
\end{eqnarray*}
\end{itemize}
\end{frame}
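\begin{frame}[fragile]
\frametitle{Aside: Checking the Gamma Mixture of Poissons by Simulation}
A Python sketch (ours) of the result just stated: draw $\Lambda$ from a gamma
distribution, then $N\mid\Lambda$ from a Poisson distribution, and compare the
empirical frequencies with the negative binomial pmf with $r=\alpha$,
$\beta=\theta$. The parameter values below are illustrative.
{\scriptsize
\begin{verbatim}
import math
import numpy as np

shape, scale = 2.0, 0.5        # gamma (alpha, theta); illustrative values
rng = np.random.default_rng(0)

Lam = rng.gamma(shape, scale, size=200_000)   # Lambda ~ gamma(alpha, theta)
N = rng.poisson(Lam)                          # N | Lambda ~ Poisson(Lambda)

def nb_pmf(k, r, beta):
    # C(k+r-1, k) via gamma functions, so r need not be an integer
    coef = math.gamma(k + r) / (math.gamma(r) * math.factorial(k))
    return coef * (1 + beta) ** (-r) * (beta / (1 + beta)) ** k

for k in range(4):
    print(k, round((N == k).mean(), 3), round(nb_pmf(k, shape, scale), 3))
\end{verbatim}
}
\end{frame}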
\begin{frame}%[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Negative Binomial as a Gamma Mixture of Poissons}
\scalefont{0.9}
\textit{Example}. Suppose that $N|\Lambda \sim$ Poisson$(\Lambda)$
and that $\Lambda \sim$ gamma with mean of 1 and variance of 2.
Determine the probability that $N=1$ \vspace{2mm}
\textit{Solution.} For a gamma distribution with parameters
$(\alpha, \theta)$, the mean is $\alpha \theta$ and the
variance is $\alpha \theta^2$. Thus:
\begin{eqnarray*}
\alpha &=& \frac{1}{2} \text{ and } \theta =2 \vspace{2mm}
\end{eqnarray*}
Now, one can directly use the negative binomial approach to get $r =
\alpha = \frac{1}{2}$ and $\beta= \theta =2 $. Thus:
\begin{eqnarray*}
\Pr(N=1) = p_1 &=& {1+r-1 \choose 1}(\frac{1}{(1+\beta)^r})(\frac{\beta}{1+\beta})^1 \\
&=& {1+\frac{1}{2}-1 \choose 1}{\frac{1}{(1+2)^{1/2}}}(\frac{2}{1+2})^1\\
&=& \frac{1}{3^{3/2}} = 0.19245
\end{eqnarray*}
\end{frame}
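\begin{frame}[fragile]
\frametitle{Aside: Verifying $\Pr(N=1)$}
A two-line Python check (ours) of the value $1/3^{3/2}=0.19245$ computed above,
evaluating the negative binomial pmf with $r=1/2$ and $\beta=2$ directly.
{\scriptsize
\begin{verbatim}
import math

r, beta = 0.5, 2.0
coef = math.gamma(1 + r) / (math.gamma(r) * math.factorial(1))  # C(1+r-1, 1) = 1/2
p1 = coef * (1 + beta) ** (-r) * (beta / (1 + beta)) ** 1
print(round(p1, 5), round(1 / 3 ** 1.5, 5))                     # 0.19245 0.19245
\end{verbatim}
}
\end{frame}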
\section{Model Selection}
\begin{frame}[shrink=3]
\frametitle{Example: Singapore Automobile Data}
\begin{itemize}
\item A 1993 portfolio of $n=7,483$ automobile insurance policies from a major insurance company in
Singapore \vspace{2mm}
\item The count variable is the number of automobile accidents per
policyholder \vspace{2mm}
\item There were on average $ 0.06989$ accidents per person \vspace{2mm}
\end{itemize}
\scalefont{0.8}
\begin{equation*}
\begin{tabular}{crr}
\hline \multicolumn{3}{c}{\textbf{Table. Comparison of Observed to Fitted Counts }} \\
\multicolumn{3}{c}{\textbf{Based on Singapore Automobile Data}} \\
\hline
Count & Observed & Fitted Counts using the \\
$(k)$ & $(m_k)$ & Poisson Distribution $(n\widehat{p}_k)$ \\
\hline
0 & 6,996 & 6,977.86 \\
1 & 455 & 487.70 \\
2 & 28 & 17.04 \\
3 & 4 & 0.40 \\
4 & 0 & 0.01 \\ \hline Total & 7,483 & 7,483.00 \\ \hline
\end{tabular}
\end{equation*}
\scalefont{1.25} \vspace{2mm}The average is $\bar{N} = \frac{0\cdot
6996 + 1 \cdot 455 + 2 \cdot 28 + 3 \cdot 4 + 4 \cdot 0}{7483} =
0.06989$
\end{frame}
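\begin{frame}[fragile]
\frametitle{Aside: Reproducing the Fitted Counts}
A Python sketch (ours, not part of the original analysis) that recomputes the
fitted Poisson counts $n\widehat{p}_k$ from the observed counts; it reproduces
the fitted column of the table up to rounding.
{\scriptsize
\begin{verbatim}
import math

observed = {0: 6996, 1: 455, 2: 28, 3: 4, 4: 0}
n = sum(observed.values())                               # 7,483 policies
lam_hat = sum(k * m for k, m in observed.items()) / n    # MLE = mean = 0.06989

for k, m in observed.items():
    p_hat = math.exp(-lam_hat) * lam_hat ** k / math.factorial(k)
    print(k, m, round(n * p_hat, 1))   # 6977.9, 487.7, 17.0, 0.4, 0.0
\end{verbatim}
}
\end{frame}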
\begin{frame}[shrink=2]
\frametitle{Singapore Data: Adequacy of the Poisson Model}
\begin{itemize}
\item With the Poisson distribution: \vspace{2mm}
\begin{itemize}
\item The maximum likelihood estimator of $\lambda$ is
$\widehat{\lambda}=\overline{N}$ \vspace{2mm}
\item Estimated probabilities, using $\widehat{\lambda}$, are denoted as
$\widehat{p}_k$ \vspace{2mm}
%\item Fitted counts are 7,483 times the fitted probabilities
%$(n\widehat{p}_j)$.
\end{itemize}
\pause
\item For goodness of fit, consider \emph{Pearson's chi-square statistic}
\begin{equation*}
\sum_k\frac{\left( m_k-n\widehat{p}_k \right) ^{2}}{n\widehat{p}_k}.
\end{equation*}
\begin{itemize}
\item Assuming that the Poisson distribution is the correct model, this statistic has an asymptotic chi-square
distribution \vspace{2mm}
\begin{itemize}\item The degrees of freedom ($df$) equals the number of cells minus one minus the number of estimated parameters \vspace{2mm}\end{itemize}
\item For the Singapore data, this is $df=5-1-1=3$ \vspace{2mm}
\item The statistic is 41.98, far exceeding the 95th percentile of the chi-square distribution with $df=3$ (about 7.81), so the basic Poisson model is inadequate
\end{itemize}
\end{itemize}
\end{frame}
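\begin{frame}[fragile]
\frametitle{Aside: Computing the Chi-Square Statistic}
A Python sketch (ours) of Pearson's statistic for the five cells
$k=0,\ldots,4$ of the Singapore data; it reproduces the value 41.98 quoted
above.
{\scriptsize
\begin{verbatim}
import math

observed = {0: 6996, 1: 455, 2: 28, 3: 4, 4: 0}
n = sum(observed.values())
lam_hat = sum(k * m for k, m in observed.items()) / n

chi2 = 0.0
for k, m in observed.items():
    fitted = n * math.exp(-lam_hat) * lam_hat ** k / math.factorial(k)
    chi2 += (m - fitted) ** 2 / fitted

print(round(chi2, 2))   # 41.98, to be compared with a chi-square on 3 df
\end{verbatim}
}
\end{frame}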
\begin{frame}[shrink=2]%\beamerdefaultoverlayspecification{<+->}
\frametitle{Example. Course C/Exam 4. May 2001, 19}
During a one-year period, the number of accidents per day was
distributed as follows: \vspace{2mm}
\begin{tabular}{l|rrrrrr}\hline
Number of Accidents & 0 & 1 & 2 & 3 & 4 & 5 \\
Number of Days & 209 & 111 & 33 & 7 & 3 & 2 \\ \hline
\end{tabular}
\vspace{2mm}
You use a chi-square test to measure the fit of a Poisson
distribution with mean 0.60 \vspace{2mm}
The minimum expected number of observations in any group should be 5
\vspace{2mm}
The maximum number of groups should be used
\vspace{2mm}
Determine the chi-square statistic
\end{frame}
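\begin{frame}[fragile]
\frametitle{Aside: One Way to Apply the Grouping Rule}
The Python sketch below is one possible reading (ours) of the grouping rule in
the exercise: the last cell is treated as ``$k$ or more'' and cells are folded
from the right until every group has an expected count of at least 5. It is a
sketch of the method, not an official solution.
{\scriptsize
\begin{verbatim}
import math

observed = [209, 111, 33, 7, 3, 2]   # days with 0, 1, ..., 5 accidents
n, lam = sum(observed), 0.60         # 365 days, hypothesised Poisson mean

expected = [n * math.exp(-lam) * lam ** k / math.factorial(k)
            for k in range(len(observed))]
obs, exp = observed[:], expected[:]
exp[-1] = n - sum(exp[:-1])          # last cell is "5 or more"

while exp[-1] < 5:                   # fold cells until each group has >= 5 expected
    obs[-2] += obs.pop()
    exp[-2] += exp.pop()

chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
print(len(obs), round(chi2, 2))      # prints: 4 2.84
\end{verbatim}
}
\end{frame}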
\end{document}
\documentclass[a4paper, 12pt]{article}
\usepackage{amsmath}
\usepackage{color}
\usepackage{dsfont}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage[left=2cm, right=2cm, bottom=3cm, top=2cm]{geometry}
\usepackage{natbib}
\usepackage{microtype}
\definecolor{orange}{rgb}{1, 0.5, 0}
\definecolor{darkgreen}{rgb}{0, 0.5, 0}
\definecolor{darkred}{rgb}{0.7, 0, 0}
\newcommand{\btheta}{\boldsymbol{\theta}}
\newcommand{\tn}{\textnormal}
\title{Notes}
\author{Brendon J. Brewer}
\date{}
\begin{document}
\maketitle
% Need this after the abstract
\setlength{\parindent}{0pt}
\setlength{\parskip}{8pt}
\section{A property of Nested Sampling}
Consider the implied prior for $L$, denoted $\pi(L)$. This is
a probability distribution over the real line.
As usual, define $X(\ell)$ as the amount of prior mass
with likelihood above $\ell$:
\begin{align}
X(\ell) &= \int_\ell^\infty \pi(L) \, dL.
\end{align}
This is the complementary CDF of $L$.
The NS sequence rectifies $\pi$ to give you the distribution
\begin{align}
p_{\rm NS}(L) &= \frac{1}{C} \frac{\pi(L)}{X(L)}
\end{align}
where
\begin{align}
C &= \int_{-\infty}^{L_{\rm max}} \frac{\pi(L)}{X(L)} \, dL
\end{align}
is a normalising constant,
and $L_{\rm max}$ is the maximum likelihood from the
discarded points of the run
(not necessarily the overall maximum likelihood).
Consider a `constrained prior' distribution, which is just the
prior $\pi$ but truncated so it only includes values above
a threshold $\ell$:
\begin{align}
p(L \,|\, \ell) &\propto
\left\{
\begin{array}{lr}
\pi(L),& L > \ell, \\
0, & \tn{otherwise}.
\end{array}
\right.
\end{align}
The normalisation constant of a constrained prior is
just $X(\ell)$.
NS gives us the opportunity to measure this for any given
value of $\ell$. Consider the KL divergence from $p_{\rm NS}$ to
the constrained prior:
\begin{align}
D_{\rm KL}(p_\ell \,||\, p_{\rm NS})
&= \int_{\ell}^{\infty} \frac{\pi(L)}{X(\ell)}
\log\left[\frac{\pi(L)/X(\ell)}{\pi(L)/C/X(L)}\right] \, dL \\
&= \int_{\ell}^{\infty} \frac{\pi(L)}{X(\ell)}
\log\left[\frac{CX(L)}{X(\ell)}\right] \, dL
\end{align}
We can take the expectation (integral) in terms of $X$ instead of
$L$, and use the fact that the prior corresponds to a
Uniform$(0,1)$ distribution for $X$:
\begin{align}
D_{\rm KL}(p_\ell \,||\, p_{\rm NS})
&= \int_0^{X(\ell)} \frac{1}{X(\ell)}
\log(X) \, dX - \log X(\ell) + \log C \\
&= \frac{1}{X(\ell)} \int_0^{X(\ell)} \log(X) \, dX
- \log X(\ell) + \log C \\
&= \frac{1}{X(\ell)}\left[X\log(X) - X\right]_0^{X(\ell)}
- \log X(\ell) + \log C \\
&= \log X(\ell) - 1 - \log X(\ell) + \log C \\
&= \log C - 1.
\end{align}
Crucially, {\em this does not depend on} $\ell$.
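As a quick numerical sanity check of the calculation above (this paragraph and
the code are an addition to these notes), the sketch below works directly in
$X$-space, where the prior is Uniform$(0,1)$ and $p_{\rm NS}(X)\propto 1/(CX)$.
The cutoff $X_{\rm min}=X(L_{\rm max})$ and the values of $X(\ell)$ are
arbitrary choices of ours; like the calculation above, the check ignores the
region below $X_{\rm min}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

X_min = 1e-6                   # X(L_max); an arbitrary choice for illustration
C = np.log(1.0 / X_min)        # normalising constant of p_NS in X-space

# D_KL(p_ell || p_NS) in X-space: (1/X_ell) * int_0^{X_ell} log(C X / X_ell) dX
for X_ell in (0.5, 0.1, 0.01):
    val, _ = quad(lambda x: np.log(C * x / X_ell), 0.0, X_ell)
    print(X_ell, val / X_ell)  # each value equals log(C) - 1

print(np.log(C) - 1.0)
\end{verbatim}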
\section{Uniqueness}
Is $p_{\rm NS}$ the only distribution with the above property?
Consider the KL divergence from some distribution $q$
to the constrained prior.
We will then see what choices of $q$ have the above property.
The KL divergence is
\begin{align}
D_{\rm KL}(p_\ell \,||\, q)
&= \int_{\ell}^{\infty} \frac{\pi(L)}{X(\ell)}
\log\left[\frac{\pi(L)/X(\ell)}{q(L)}\right] \, dL \\
&= -\log X(\ell) + \frac{1}{X(\ell)}
\int_{\ell}^{\infty} \pi(L)
\log\left[\frac{\pi(L)}{q(L)}\right] \, dL
\end{align}
The derivative of the KL divergence with respect to $\ell$ is
\begin{align}
\frac{d}{d\ell} D_{\rm KL}(p_\ell \,||\, q)
&= -\frac{X'(\ell)}{X(\ell)}
 + \frac{-X(\ell)\pi(\ell)\log\left[\pi(\ell)/q(\ell)\right]
         -X'(\ell)\int_{\ell}^{\infty} \pi(L)
                  \log\left[\pi(L)/q(L)\right] \, dL}
        {X(\ell)^2} \\
&= -\frac{X'(\ell)}{X(\ell)}
 - \frac{\pi(\ell)\log\left[\pi(\ell)/q(\ell)\right]}
        {X(\ell)}
 - \frac{X'(\ell)/X(\ell)\int_{\ell}^{\infty} \pi(L)
                  \log\left[\pi(L)/q(L)\right] \, dL}
        {X(\ell)}\\
&= -\frac{X'(\ell)}{X(\ell)}
 - \frac{\pi(\ell)\log\left[\pi(\ell)/q(\ell)\right]}
        {X(\ell)}
 - \frac{X'(\ell)\left(D_{\rm KL} + \log X(\ell)\right)}
        {X(\ell)} \\
&= \frac{\pi(\ell)}{X(\ell)}
     \Big(1
            - \log\left[\pi(\ell)/q(\ell)\right]
            + D_{\rm KL} + \log X(\ell)
    \Big)
\end{align}
where the inner terms pick up minus signs because the lower limits of the
integrals are being differentiated, and the last step uses
$X'(\ell) = -\pi(\ell)$.
To satisfy the nice property of the previous section, the derivative
must be zero for all $\ell$. This is achieved if
\begin{align}
1 - \log\left[\pi(\ell)/q(\ell)\right]
 + D_{\rm KL} + \log X(\ell) &= 0 \\
\log q(\ell) &= \log\pi(\ell) - \log X(\ell) - 1 - D_{\rm KL},
\end{align}
i.e., $q(\ell) \propto \pi(\ell)/X(\ell)$. So, at least formally, $p_{\rm NS}$
is the only distribution with this property, and matching the normalising
constant forces $D_{\rm KL} = \log C - 1$, in agreement with the previous
section.
%\section*{TwinPeaks}
%Denote the two scalars by $L_1(\btheta)$ and $L_2(\btheta)$.
%The implied prior for $L_1$ and $L_2$ is $\pi(L_1, L_2)$.
%Consider a constrained distribution
%\begin{align}
%p(L_1, L_2 \,|\, \ell_1, \ell_2) &\propto
% \left\{
% \begin{array}{lr}
% \pi(L_1, L_2), & L_1 > \ell_1 \tn{ and } L_2 > \ell_2 \\
% 0, & \tn{otherwise}.
% \end{array}
% \right.
%\end{align}
%The normalising constant of this distribution is
%\begin{align}
%X(\ell_1, \ell_2) &=
% \int_{\ell_1}^{\infty}
% \int_{\ell_2}^{\infty}
% \pi(L_1, L_2)
% \, dL_1
% \, dL_2.
%\end{align}
%My proposed TwinPeaks distribution is
%\begin{align}
%p_{\rm TP}(L_1, L_2) &=
% \frac{1}{C}
% \frac{\pi(L_1, L_2)}{X(L_1, L_2)}.
%\end{align}
%As in the previous section, we can compute the KL divergence from
%this to a constrained prior:
%\begin{align}
%D_{\rm KL}(p_{\ell_1, \ell_2} \,||\, p_{\rm TP})
% &= \int_{\ell_1}^{\infty}
% \int_{\ell_2}^{\infty}
% \frac{\pi(L_1, L_2)}{X(\ell_1, \ell_2)}
% \log\left[C \frac{X(L_1, L_2)}{X(\ell_1, \ell_2)} \right]
% \, dL_1
% \, dL_2 \\
% &=
% \int_{\ell_1}^{\infty}
% \int_{\ell_2}^{\infty}
% \frac{\pi(L_1, L_2)}{X(\ell_1, \ell_2)}
% \log\left[C \frac{X(L_1, L_2)}{X(\ell_1, \ell_2)} \right]
% \, dL_1
% \, dL_2 \\
% &=
% \log C \,-\, \log X(\ell_1, \ell_2) \\ &\quad\quad\quad
% \,+\,
% \frac{1}{X(\ell_1, \ell_2)}
% \int_{\ell_1}^{\infty}
% \int_{\ell_2}^{\infty}
% \pi(L_1, L_2)
% \log X(L_1, L_2)
% \, dL_1
% \, dL_2 \\
%\end{align}
%Doing the integral in terms of $X$
\section{Commutativity}
Does the operation of constraining by $f(\theta)$ by a factor of $t_1$
commute with the operation of constraining by $g(\theta)$ by a factor
of $t_2$? I think so...that's the product rule, right?
The first route yields the following sequence of distributions:
\begin{align}
\pi(\theta)
\implies
\frac{\pi(\theta)\mathds{1}\big(f(\theta) \geq f^*\big)}{t_1}
\implies
\frac{\pi(\theta)\mathds{1}\big(f(\theta) \geq f^*\big)\mathds{1}\big(g(\theta) \geq g^*\big)}{t_1t_2}
\end{align}
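The following Monte Carlo sketch (an addition to these notes) illustrates the
product-rule claim numerically for a two-dimensional Gaussian prior with
$f(\theta)=\theta_1$ and $g(\theta)=\theta_2$; the correlation and the
thresholds $f^*$, $g^*$ are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1_000_000)
f, g = theta[:, 0], theta[:, 1]
f_star, g_star = 0.3, -0.5

t1 = (f >= f_star).mean()                      # mass kept by the f-constraint
t2 = (g[f >= f_star] >= g_star).mean()         # then by the g-constraint
s1 = (g >= g_star).mean()                      # the other order
s2 = (f[g >= g_star] >= f_star).mean()
both = ((f >= f_star) & (g >= g_star)).mean()

print(t1 * t2, s1 * s2, both)                  # agree up to Monte Carlo noise
\end{verbatim}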
\end{document}
\subsection{Parsing}
Multiple objects in the scene; each object has a segmentation into parts.
\documentclass{notes}
\title{A model of navigation history}
\author{Connor G. Brewster \and Alan Jeffrey}
\date{August, 2016}
\indicia{
\includegraphics[height=4.5ex]{images/by}
\begin{tabular}[b]{@{}l}%
Copyright Connor G. Brewster and Alan Jeffrey\\
Licensed under Creative Commons License CC-BY
\end{tabular}
}
\usepackage{amssymb}
\usepackage{colortbl}
\usepackage{graphicx}
\usepackage{float}
\usepackage{subcaption}
\usepackage{tikz}
\usetikzlibrary{fit}
% Macros, so we can change notation easily
\newcommand{\Verts}{V}
\newcommand{\aVert}{v}
\newcommand{\bVert}{w}
\newcommand{\aNH}{H}
\newcommand{\Docs}{D}
\newcommand{\Active}{A}
\newcommand{\FullyActive}{F\!A}
\newcommand{\parentOf}{\rightarrow}
\newcommand{\parentOfActive}{\twoheadrightarrow}
\newcommand{\childOf}{\leftarrow}
\newcommand{\activeChildOf}{\twoheadleftarrow}
\newcommand{\leChron}{\le}
\newcommand{\ltChron}{<}
\newcommand{\geChron}{\ge}
\newcommand{\gtChron}{>}
\newcommand{\eqSess}{\sim}
\newcommand{\ltSess}{\lesssim}
\newcommand{\gtSess}{\gtrsim}
\newcommand{\rootDoc}{d_0}
\newcommand{\aDoc}{d}
\newcommand{\bDoc}{e}
\newcommand{\cDoc}{f}
\newcommand{\st}{\mathbin.}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{conjecture}{Conjecture}
\newtheorem{goal}{Goal}
\newtheorem{patch}{Patch}
\newtheorem{counterexample}{Counterexample}
\newtheorem{experiment}{Experiment}
\newcommand{\QED}{\hfill$\Box$}
\tikzstyle{doc} = [draw=black, fill=blue!10, circle, font={\normalfont\sffamily}]
\tikzstyle{fully} = [draw=red, thick]
\tikzstyle{active} = [color=white, fill=blue!50!black]
\tikzstyle{jshactive} = [active] % ASAJ: if we want jshactive to be distinct [fill=green!50]
\begin{document}
\maketitle
\subparagraph{Abstract:}
Navigation has been a core component of the web since its inception:
users and scripts can follow hyperlinks, and can go back or forwards
through the navigation history. In this paper, we present a formal
model aligned with the \textsc{whatwg} specification of navigation
history, and investigate its properties. The fundamental property of
navigation history is that traversing the history by $\delta$ then by
$\delta'$ should be the same as traversing by $\delta+\delta'$. In
particular, traversing by $+1$ (forward) then by $-1$ (back) is the
same as traversing by $0$ (doing nothing). We show that the
specification-aligned model does not satisfy this property, by
exhibiting a series of counter-examples, which motivate four patches
to the model. We present a series of experiments, showing that
browsers are inconsistent in their implementation of navigation
history, but that their behaviour is closer to the patched model than
to the specification-aligned model. We propose patches to the
specification to align it with the patched model.
\subparagraph{ACM Classification:}
D.2.1 Requirements/Specifications.
\subparagraph{Keywords:}
Formal model,
Navigation,
Session history,
Specification,
Web browsers.
\section{Introduction}
Navigation has been a core component of the web since its inception:
users and scripts can follow hyperlinks, and can go back or forwards
through the navigation history. Users are exposed to this functionality
through following hyperlinks, and by the forward and back buttons.
Scripts have many ways of accessing session history, via the
navigation API~\cite[\S7.7]{whatwg} and the \verb|element.click()| method.
% TODO: add detail to citations, cite Harel.
Prior formalizations of navigation history
include~\cite{HH:2006,Haydar:2004,HPS:2004,LHYT:2000,WP:2003}, which
predate and are not aligned with the \textsc{whatwg}
specification~\cite{whatwg}. The specification of the navigation API
is informal, and has complex dependencies on the rest of the HTML
specification. There is little discussion of the goals
of the API, and an unclear alignment with browser implementations.
In this paper, we present a formal model of navigation, aligned with
the HTML specification, and investigate its properties. The
starting point is that there is a total order of
\emph{documents}\footnote{%
We are eliding some of the technical details of the specification here,
in particular we are conflating a \emph{browsing context}
with the document it contains, and we are ignoring issues around
document loading and unloading, and around the current entry of the joint
session history.
}, one of which is \emph{active}, for example:
\[\begin{tikzpicture}
\node[doc](0) at (0,0){0};
\node[doc](1) at (1,0){1};
\node[doc,jshactive,fully](2) at (2,0){2};
\node[draw,dotted,fit=(0)(1)(2)] {};
\end{tikzpicture}\]
In diagrams, we use left-to-right order to indicate order,
and highlight the active document. The user can \emph{traverse}
the history which changes the active document, for example pressing
the back button:
\[\begin{tikzpicture}
\node[doc](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,0){1};
\node[doc](2) at (2,0){2};
\node[draw,dotted,fit=(0)(1)(2)] {};
\end{tikzpicture}\]
The user can also \emph{navigate}, which replaces any document
after the currently active document by a fresh active document:
\[\begin{tikzpicture}
\node[doc](0) at (0,0){0};
\node[doc](1) at (1,0){1};
\node[doc,jshactive,fully](3) at (3,0){3};
\node[draw,dotted,fit=(0)(1)(3)] {};
\end{tikzpicture}\]
Users can also traverse the history by more than one document
at a time, for example by using a pull-down menu from the back
or forwards button. This is called \emph{traversing by $\delta$},
for instance we can traverse our running example by $-2$
to get:
\[\begin{tikzpicture}
\node[doc,jshactive,fully](0) at (0,0){0};
\node[doc](1) at (1,0){1};
\node[doc](3) at (3,0){3};
\node[draw,dotted,fit=(0)(1)(3)] {};
\end{tikzpicture}\]
We formalize the notions of traversal and navigation in
\S\ref{sec:model}, and show the \emph{fundamental property of traversal}:
that traversing by $\delta$ then by $\delta'$
is the same as traversing by $\delta+\delta'$.
So far, the model is refreshingly simple, and corresponds well to
the specification and to browser implementations. Where the problems
arise is in the \emph{hierarchical} nature of documents. HTML
documents can contain \verb|iframe| elements, which
are independent documents in their own right, often
used to embed third party content such as advertisements.
We can treat each document as a tree, for example:
\[\begin{tikzpicture}
\node[doc,jshactive,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc,active,fully](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)] {};
\node[draw,dotted,fit=(2)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(2);
\draw[->](0)--(3);
\end{tikzpicture}\]
The problem comes from the ability of each document to
navigate separately and maintain its own session history,
but that traversal is a global operation that operates
on the \emph{joint session history}. For example
if document $2$ in the previous example navigates, the
resulting state is:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
and then if document $1$ navigates, the state is:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active](4) at (4,-2){4};
\node[doc,jshactive,fully](5) at (5,-1){5};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(5)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(5);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
Note that node $4$ here is in an unusual state: it is active, but has
an inactive ancestor. The specification~\cite[\S7.7]{whatwg}
distinguishes between \emph{active} documents such as $4$, and
\emph{fully active} documents such as $0$, $3$ and $5$. Active
documents can become fully active by traversals involving their
ancestors. For example, after traversing by $-1$, document $4$ is
fully active:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active,fully](4) at (4,-2){4};
\node[doc](5) at (5,-1){5};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(5)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
As even a simple example like this shows, the combination of features
quickly results in a complex mix of session history, ordering, and
document hierarchy, which leads to the problems:
\begin{itemize}
\item \emph{Formally} there is no simple model,
and the model provided by the specification does
not satisfy the traverse-then-traverse property.
\item \emph{Experimentally} the browsers disagree
with each other, and with the HTML specification,
about the semantics of navigation.
\end{itemize}
In this paper, we address these:
\begin{itemize}
\item \S\ref{sec:model} provides a formal model of navigation history,
which is intended to align with the specification. We show, through
a series of examples, that it does not satisfy the
fundamental property, and give patches to the model for
each example. The final model does satisfy the
fundamental property.
\item \S\ref{sec:experiments} investigates how well the patched
model aligns with existing browser implementations. We show
ways in which the browsers exhibit behaviours which are not
aligned with the specification, and discuss how our proposed
model matches these behaviours.
\end{itemize}
Finally, we propose changed wording to the specification, which
would bring it in line with our patched model.
\section{Model}
\label{sec:model}
In this section, we present our formal model of navigation history.
\S\ref{sec:preliminaries} contains definitions of concepts such as
tree and total order, and may be skipped by most readers. The model,
together with some examples, is given in \S\ref{sec:defns}. In
\S\ref{sec:properties} we define the fundamental property of
traversal, show that the model does \emph{not} satisfy
it, but can be patched to do so.
\subsection{Preliminaries}
\label{sec:preliminaries}
A \emph{directed graph} $G=(\Verts,{\parentOf})$ consists of:
\begin{itemize}
\item a set $\Verts$ (the \emph{vertices}), and
\item a relation ${\parentOf} \subseteq (\Verts\times\Verts)$ (the \emph{edges}).
\end{itemize}
The \emph{transitive closure} of $\parentOf$ is defined as $\aVert\parentOf^+\aVert'$ whenever
there exists $\aVert_0,\ldots,\aVert_n$ such that:
\[
\aVert=\aVert_0\parentOf\cdots\parentOf\aVert_n=\aVert'
\]
The \emph{reflexive transitive closure} of $\parentOf$ is defined as $\aVert\parentOf^*\aVert'$ whenever
$\aVert\parentOf^+\aVert'$ or $\aVert=\aVert'$.
A \emph{forest} is a directed graph where:
\begin{itemize}
\item there is no $\aVert$ such that $\aVert\parentOf^+\aVert$ (\emph{acyclicity})
\item if $\aVert\parentOf\aVert'\childOf\aVert''$ then $\aVert=\aVert''$ (\emph{at most one parent}).
\end{itemize}
A \emph{root vertex} of a forest is a vertex $\aVert$ such that there is no $\bVert\parentOf\aVert$.
A \emph{tree} is a forest with a unique root vertex.
A \emph{preorder} is a directed graph $(\Verts, {\le})$ such that:
\begin{itemize}
\item every $\aVert$ has $\aVert\le\aVert$ (\emph{reflexivity}), and
\item if $\aVert\le\aVert'\le\aVert''$ then $\aVert\le\aVert''$ (\emph{transitivity}).
\end{itemize}
A \emph{partial order} is a preorder such that:
\begin{itemize}
\item for every $\aVert$ and $\aVert'$, if $\aVert\le\aVert'$ and $\aVert'\le\aVert$ then $\aVert=\aVert'$
(\emph{antisymmetry}).
\end{itemize}
A \emph{total order} is a partial order such that:
\begin{itemize}
\item for every $\aVert$ and $\aVert'$, either $\aVert\le\aVert'$ or $\aVert\ge\aVert'$ (\emph{totality}).
\end{itemize}
A \emph{equivalence} is a preorder $(\Verts,{\sim})$ such that:
\begin{itemize}
\item if $\aVert\sim\aVert'$ then $\aVert'\sim\aVert$ (\emph{symmetry}).
\end{itemize}
\subsection{Definitions}
\label{sec:defns}
We can now formalize our model of navigation history, together with
the operations of navigation and traversal. This formalizes the
navigation history specification~\cite{whatwg}, and has a pleasant
diagrammatic presentation, but as we shall see in
\S\ref{sec:properties}, it has unexpected properties.
\begin{definition}[Navigation history]
A \emph{navigation history} $\aNH=(\Docs,\Active,{\parentOf},{\leChron},{\eqSess})$ consists of:
\begin{itemize}
\item a finite set $\Docs$ (the \emph{documents}),
\item a subset $\Active \subseteq \Docs$ (the \emph{active} documents),
\item a forest $(\Docs,{\parentOf})$ (the \emph{child} relationship),
\item a total order $(\Docs,{\leChron})$ (the \emph{chronological} order), and
\item an equivalence relation $(\Docs,{\eqSess})$ (the \emph{same-session} equivalence).
\end{itemize}
such that:
\begin{itemize}
\item for every $\aDoc$ there is a unique $\aDoc'\in\Active$ such that $\aDoc \eqSess \aDoc'$,
\item for every $\aDoc \parentOf \bDoc \eqSess \bDoc'$
we have $\aDoc \parentOf \bDoc'$, and
\item for every $\aDoc \parentOf \bDoc$, we have $\aDoc \leChron \bDoc$.
\QED
\end{itemize}
\end{definition}
We present such navigation histories as diagrams, using
left-to-right position for chronological order, and grouping documents
in the same session. Since documents in the same session must have the
same parent, we only draw the document hierarchy for active children.
For example the diagram:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active,fully](4) at (4,-2){4};
\node[doc](5) at (5,-1){5};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(5)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
represents:
\[\begin{array}{l}
D = \{ 0,1,2,3,4,5 \} \\[\jot]
A = \{ 0,1,3,4 \} \\[\jot]
0 \parentOf 1 \quad 0 \parentOf 3 \quad 0 \parentOf 5 \quad 1 \parentOf 2 \quad 1 \parentOf 4 \\[\jot]
0 \leChron 1 \leChron 2 \leChron 3 \leChron 4 \leChron 5 \\[\jot]
1 \eqSess 5 \quad 2 \eqSess 4
\end{array}\]
In such a navigation history, we define:
\begin{itemize}
\item $\rootDoc$ is the unique active root document,
\item $\aDoc \parentOfActive \bDoc$ when $\aDoc \parentOf \bDoc$ and $\bDoc \in \Active$
(the \emph{active child} relationship),
\item $\FullyActive = \{ \aDoc \mid \rootDoc \parentOfActive^* \aDoc \}$
(the \emph{fully active} documents),
\item $\aDoc \ltSess \bDoc$ whenever $\aDoc \eqSess \bDoc$ and $\aDoc \ltChron \bDoc$,
\item the \emph{session future} of $\aDoc$ is $\{ \bDoc \mid \aDoc \ltSess \bDoc \}$,
\item the \emph{session past} of $\aDoc$ is $\{ \bDoc \mid \aDoc \gtSess \bDoc \}$,
\item the \emph{joint session future} is $\{ \bDoc \mid \exists \aDoc \in \FullyActive \st \aDoc \ltSess \bDoc \}$,
\item the \emph{joint session past} is $\{ \bDoc \mid \exists \aDoc \in \FullyActive \st \aDoc \gtSess \bDoc \}$,
\end{itemize}
These definitions are intended to align with the specification, for example
\cite[7.7.2]{whatwg} has the definition:
\begin{quote}
The \textbf{joint session history} of a top-level browsing context is the
union of all the session histories of all browsing contexts of all
the fully active \verb|Document| objects that share that top-level browsing
context, with all the entries that are current entries in their
respective session histories removed except for the current entry of
the joint session history.
\end{quote}
which (eliding the concept of ``current entry of the joint session history'')
corresponds to the above definitions of joint session future and past.
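To make these definitions concrete, the following Python sketch (ours; it is
not part of the formal development, and the encoding of sessions as integer
identifiers is an arbitrary choice) encodes the example history drawn above
and computes its fully active documents and its joint session future and past.
\begin{verbatim}
docs = [0, 1, 2, 3, 4, 5]                  # chronological order
active = {0, 1, 3, 4}
parent = {1: 0, 3: 0, 5: 0, 2: 1, 4: 1}    # child -> parent
session = {0: 0, 1: 1, 5: 1, 2: 2, 4: 2, 3: 3}

def fully_active():
    root = next(d for d in active if d not in parent)
    fa = {root}
    for d in docs:                         # parents precede children chronologically
        if d in active and parent.get(d) in fa:
            fa.add(d)
    return fa

def joint_session_future():
    fa = fully_active()
    return [d for d in docs
            if any(session[d] == session[a] and d > a for a in fa)]

def joint_session_past():
    fa = fully_active()
    return [d for d in docs
            if any(session[d] == session[a] and d < a for a in fa)]

print(fully_active())           # {0, 1, 3, 4}
print(joint_session_future())   # [5]
print(joint_session_past())     # [2]
\end{verbatim}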
%
We now consider how to formalize operations on navigation histories,
starting with \emph{navigation}, which is triggered by following hyperlinks
or by other actions which trigger document loading.
\begin{definition}[Navigation]
Define \emph{deleting $\aDoc$ from $\aNH$}, when $\aDoc$ is in the joint session future
to be $\aNH'=(\Docs',\Active,{\leChron},{\parentOf},{\eqSess})$ where:
\begin{itemize}
\item $\Docs' = \Docs \setminus \{ \bDoc \mid \aDoc\parentOf^* \bDoc \}$.
\end{itemize}
Define \emph{replacing $\aDoc$ by $\aDoc'$ in $\aNH$}, where $\aDoc\in\FullyActive$ and
$\aDoc'\notin\Docs$,
to be $\aNH'=(\Docs',\Active',{\leChron'},{\parentOf'},{\eqSess'})$ where:
\begin{itemize}
\item $\Docs' = \Docs \cup \{\aDoc'\}$,
\item $\bDoc \in \Active'$ whenever
$\bDoc \in \Active$ and $\bDoc\ne\aDoc$, or
$\bDoc=\aDoc'$,
\item $\bDoc \leChron' \cDoc$ whenever
$\bDoc \leChron \cDoc$, or $\cDoc = \aDoc'$,
\item $\bDoc \parentOf' \cDoc$ whenever
$\bDoc \parentOf \cDoc$, or
$\bDoc \parentOf \aDoc$ and $\cDoc = \aDoc'$, and
\item $\bDoc \eqSess' \cDoc$ whenever
$\bDoc \eqSess \cDoc$, or
$\bDoc=\cDoc$, or
$\bDoc \eqSess \aDoc$ and $\cDoc = \aDoc'$, or
$\aDoc \eqSess \cDoc$ and $\bDoc = \aDoc'$.
\end{itemize}
Define \emph{navigating from $\aDoc$ to $\aDoc'$ in $\aNH$}, where $\aDoc\in\FullyActive$ to be the result of:
\begin{itemize}
\item deleting the session future of $\aDoc$, and then
\item replacing $\aDoc$ by $\aDoc'$.
\QED
\end{itemize}
\end{definition}
There are two parts to navigation from $\aDoc$ to $\aDoc'$: deleting the session
future of $\aDoc$, followed by replacing $\aDoc$ by $\aDoc'$. For example,
navigating from $1$ to $6$ in:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active,fully](4) at (4,-2){4};
\node[doc](5) at (5,-1){5};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(5)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
we first delete the session future of $1$ (which is $5$):
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
then replace $1$ by $6$:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active](4) at (4,-2){4};
\node[doc,jshactive,fully](6) at (6,-1){6};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(6)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(6);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
We also define \emph{traversing the history}, which changes the active
documents.
\begin{definition}[Traversal]
Define \emph{traversing the history to $\aDoc$ in $\aNH$}, where $\aDoc \in \Docs$,
to be $\aNH'=(\Docs,\Active',{\leChron},{\parentOf},{\eqSess})$ where:
\begin{itemize}
\item $\bDoc\in\Active'$ whenever $\aDoc\not\eqSess\bDoc \in \Active$, or
$\aDoc=\bDoc$.
\end{itemize}
Define \emph{$\aNH$ traverses the history by $+\delta$ to $\aNH'$} when:
\begin{itemize}
\item the joint session future of $\aNH$ is $\aDoc_1 \ltChron \cdots \ltChron \aDoc_\delta \ltChron \cdots$,
\item $H$ traverses the history to $d_\delta$ in $H'$
\end{itemize}
Define \emph{$\aNH$ traverses the history by $-\delta$ to $\aNH'$} when:
\begin{itemize}
\item the joint session past of $\aNH$ is $\aDoc_1 \gtChron \cdots \gtChron \aDoc_\delta \gtChron \cdots$,
\item $H$ traverses the history to $d_\delta$ in $H'$
\end{itemize}
Define \emph{$\aNH$ traverses the history by $0$ to $\aNH'$} when $\aNH=\aNH'$.
\end{definition}
For example, to traverse the history by $-2$ in:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active](4) at (4,-2){4};
\node[doc,jshactive,fully](6) at (6,-1){6};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(6)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(6);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
we find the joint session past (which is $2 \gtChron 1$)
and traverse the history to the second item (which is $1$)
to arrive at:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,active,fully](3) at (3,0){3};
\node[doc,active,fully](4) at (4,-2){4};
\node[doc](6) at (6,-1){6};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(6)] {};
\node[draw,dotted,fit=(2)(4)] {};
\node[draw,dotted,fit=(3)] {};
\draw[->](0)--(1);
\draw[->](1)--(4);
\draw[->](0)--(3);
\end{tikzpicture}\]
These definitions are intended to formally capture the HTML
specification, for example \cite[\S7.7.2]{whatwg} includes:
\begin{quote}
To \textbf{traverse the history by a delta} $\delta$, the user agent
must append a task to this top-level browsing context's session
history traversal queue, the task consisting of running the
following steps:
\begin{enumerate}
\item If the index of the current entry of the joint session history
plus $\delta$ is less than zero or greater than or equal to the
number of items in the joint session history, then abort these
steps.
\item Let \emph{specified entry} be the entry in the joint session
history whose index is the sum of $\delta$ and the index of the
current entry of the joint session history.
\item Let \emph{specified browsing context} be the browsing context
of the specified entry.
\item If the specified browsing context's active document's unload a
document algorithm is currently running, abort these steps.
\item Queue a task that consists of running the following
substeps [\dots]
\begin{itemize}
\item[3.] Traverse the history of the specified browsing context
to the specified entry.
\end{itemize}
\end{enumerate}
\end{quote}
\subsection{Properties}
\label{sec:properties}
We now consider the fundamental property of navigation history:
\begin{definition}[Fundamental property]
\label{defn:fundamental}
$H$ satisfies the \emph{fundamental property of traversal} whenever
$H$ traverses the history by $\delta$ to $H'$
and $H'$ traverses the history by $\delta'$ to $H''$
implies $H$ traverses the history by $\delta+\delta'$ to $H''$.
\QED
\end{definition}
Unfortunately, navigation histories as specified do not always satisfy the fundamental property,
due to ways individual session histories are combined into the joint session history.
In this section, we give a series of counterexamples, and propose patches to
the model to address each counterexample.
\begin{counterexample}
\label{counterexample:intermediaries}
Let $H$ be:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-2){1};
\node[doc](3) at (3,-2){3};
\node[doc,active,fully](2) at (2,-1){2};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)--(1);
\draw[->](0)--(2);
\end{tikzpicture}\]
which traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-2){1};
\node[doc,jshactive,fully](3) at (3,-2){3};
\node[doc,active,fully](2) at (2,-1){2};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)to[out=300,in=180](3);
\draw[->](0)--(2);
\end{tikzpicture}\]
which traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-2){1};
\node[doc,active,fully](3) at (3,-2){3};
\node[doc](2) at (2,-1){2};
\node[doc,jshactive,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)to[out=300,in=180](3);
\draw[->](0)--(4);
\end{tikzpicture}\]
but $H$ traverses the history by $2$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-2){1};
\node[doc](3) at (3,-2){3};
\node[doc](2) at (2,-1){2};
\node[doc,jshactive,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)--(1);
\draw[->](0)--(4);
\end{tikzpicture}\]
\QED
\end{counterexample}
%
This counterexample is caused by the definition of `traverses the history by $\delta$' which
only traverses one document's session history. Instead, we should traverse
the history of all $\delta$ documents.
\begin{patch}[Traverse intermediaries]
Define \emph{$\aNH$ traverses the history by $+\delta$ to $\aNH'$} when:
\begin{itemize}
\item the joint session future of $\aNH$ is $\aDoc_1 \ltChron \cdots \ltChron \aDoc_\delta \ltChron \cdots$,
\item there is some $\aNH=\aNH_0,\ldots,\aNH_\delta=\aNH'$, such that
\item $H_{i-1}$ traverses the history to $d_i$ in $H_i$ for each $1 \le i \le \delta$.
\end{itemize}
Define \emph{$\aNH$ traverses the history by $-\delta$ to $\aNH'$} when:
\begin{itemize}
\item the joint session past of $\aNH$ is $\aDoc_1 \gtChron \cdots \gtChron \aDoc_\delta \gtChron \cdots$,
\item there is some $\aNH=\aNH_0,\ldots,\aNH_\delta=\aNH'$, such that
\item $H_{i-1}$ traverses the history to $d_i$ in $H_i$ for each $1 \le i \le \delta$.
\QED
\end{itemize}
\end{patch}
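To accompany the patch, here is a small executable sketch (ours, in Python;
the encoding mirrors the diagrams rather than the specification text) of the
unpatched and patched traversal rules, run on
Counterexample~\ref{counterexample:intermediaries}.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class History:
    docs: list      # documents in chronological order
    active: set
    parent: dict    # child -> parent
    session: dict   # document -> session identifier

    def fully_active(self):
        root = next(d for d in self.active if d not in self.parent)
        fa = {root}
        for d in self.docs:             # parents precede children chronologically
            if d in self.active and self.parent.get(d) in fa:
                fa.add(d)
        return fa

    def joint_session_future(self):
        fa = self.fully_active()
        return [d for d in self.docs
                if any(self.session[d] == self.session[a] and
                       self.docs.index(d) > self.docs.index(a) for a in fa)]

    def traverse_to(self, d):
        self.active = {e for e in self.active
                       if self.session[e] != self.session[d]} | {d}

    def traverse_by_unpatched(self, delta):
        # a single traversal, to the delta-th entry of the joint session future
        self.traverse_to(self.joint_session_future()[delta - 1])

    def traverse_by_patched(self, delta):
        # traverse to each of the first delta entries in turn (Patch 1)
        for d in self.joint_session_future()[:delta]:
            self.traverse_to(d)

def counterexample1():
    return History(docs=[0, 1, 2, 3, 4], active={0, 1, 2},
                   parent={1: 0, 2: 0, 3: 0, 4: 0},
                   session={0: 0, 1: 1, 3: 1, 2: 2, 4: 2})

h = counterexample1(); h.traverse_by_unpatched(1); h.traverse_by_unpatched(1)
print(sorted(h.active))   # [0, 3, 4]
h = counterexample1(); h.traverse_by_unpatched(2)
print(sorted(h.active))   # [0, 1, 4] -- differs, so the fundamental property fails
h = counterexample1(); h.traverse_by_patched(2)
print(sorted(h.active))   # [0, 3, 4] -- matches traversing by +1 twice
\end{verbatim}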
\begin{counterexample}
Let $H$ be:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc](2) at (2,-1){2};
\node[doc,active](3) at (3,-2){3};
\node[doc](4) at (4,-2){4};
\node[doc](5) at (5,0){5};
\node[draw,dotted,fit=(0)(5)] {};
\node[draw,dotted,fit=(1)(2)] {};
\node[draw,dotted,fit=(3)(4)] {};
\draw[->](0)--(1);
\draw[->](2)--(3);
\end{tikzpicture}\]
which traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-1){2};
\node[doc,active,fully](3) at (3,-2){3};
\node[doc](4) at (4,-2){4};
\node[doc](5) at (5,0){5};
\node[draw,dotted,fit=(0)(5)] {};
\node[draw,dotted,fit=(1)(2)] {};
\node[draw,dotted,fit=(3)(4)] {};
\draw[->](0)--(2);
\draw[->](2)--(3);
\end{tikzpicture}\]
which in turn traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,active,fully](2) at (2,-1){2};
\node[doc](3) at (3,-2){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[doc](5) at (5,0){5};
\node[draw,dotted,fit=(0)(5)] {};
\node[draw,dotted,fit=(1)(2)] {};
\node[draw,dotted,fit=(3)(4)] {};
\draw[->](0)--(2);
\draw[->](2)--(4);
\end{tikzpicture}\]
but $H$ traverses the history by $2$ to:
\[\begin{tikzpicture}
\node[doc](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,active](2) at (2,-1){2};
\node[doc,active](3) at (3,-2){3};
\node[doc](4) at (4,-2){4};
\node[doc,jshactive,fully](5) at (5,0){5};
\node[draw,dotted,fit=(0)(5)] {};
\node[draw,dotted,fit=(1)(2)] {};
\node[draw,dotted,fit=(3)(4)] {};
\draw[->](0)--(2);
\draw[->](2)--(3);
\end{tikzpicture}\]
\QED
\end{counterexample}
The problem this time is that the definition of `joint session history' only includes
the fully active documents, not all active documents.
\begin{patch}[Active joint session history]
Define:
\begin{itemize}
\item the \emph{joint session future} is $\{ \bDoc \mid \exists \aDoc \in \Active \st \aDoc \ltSess \bDoc \}$, and
\item the \emph{joint session past} is $\{ \bDoc \mid \exists \aDoc \in \Active \st \aDoc \gtSess \bDoc \}$.
\QED
\end{itemize}
\end{patch}
\begin{counterexample}
Let $H$ be:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-2){3};
\node[doc,active,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(3)]{};
\draw[->](0)--(4);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}\]
which traverses the history by $-1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc,active,fully](2) at (2,-2){2};
\node[doc](3) at (3,-2){3};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(3)]{};
\draw[->](0)--(1);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}\]
which traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,jshactive,fully](3) at (3,-2){3};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(3)]{};
\draw[->](0)--(1);
\draw[->](0)to[out=-20,in=120](3);
\end{tikzpicture}\]
which is not the same as $H$.
\QED
\end{counterexample}
This counterexample is caused by an asymmetry in the definition
of traversal: it is defined in terms of navigating \emph{to} a document
$d$, and not navigating \emph{from} a document. We fix this
by making the definition symmetric:
\begin{patch}[Symmetric traversal]
Define \emph{$\aNH$ traverses the history from $\aDoc'$} when there is some $\aDoc$ such that:
\begin{itemize}
\item $\aDoc\ltSess\aDoc'$,
\item for any $\bDoc\ltSess\aDoc'$ we have $\bDoc\leChron\aDoc$, and
\item $\aNH$ traverses the history to $\aDoc$.
\end{itemize}
Define \emph{$\aNH$ traverses the history by $-\delta$ to $\aNH'$} when:
\begin{itemize}
\item the joint session past and active documents of $\aNH$ are $\aDoc_1 \gtChron \cdots \gtChron \aDoc_\delta \gtChron \cdots$,
\item there is some $\aNH=\aNH_0,\ldots,\aNH_\delta=\aNH'$, such that
\item $H_{i-1}$ traverses the history from $d_i$ in $H_i$ for each $1 \le i \le \delta$.
\QED
\end{itemize}
\end{patch}
For example, to traverse the history by $-1$ from:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-2){3};
\node[doc,active,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(3)]{};
\draw[->](0)--(4);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}\]
we find the joint session past and active documents (which is $4 \gtChron 2 \gtChron 1 \gtChron 0$)
and traverse the history from the first item (which is $4$)
which is the same as traversing the history to $1$:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-1){1};
\node[doc,active,fully](2) at (2,-2){2};
\node[doc](3) at (3,-2){3};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(3)]{};
\draw[->](0)--(1);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}\]
\begin{counterexample}
\label{cex:not-well-formed}
Let $H$ be:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)(4)]{};
\draw[->](0)--(1);
\draw[->](1)--(4);
\end{tikzpicture}\]
which traverses the history by $-1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc ](4) at (4,-2){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)(4)]{};
\draw[->](0)--(1);
\draw[->](1)--(2);
\end{tikzpicture}\]
which traverses the history by $1$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,active](2) at (2,-2){2};
\node[doc,jshactive,fully](3) at (3,-1){3};
\node[doc](4) at (4,-2){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)(4)]{};
\draw[->](0)--(3);
\draw[->](1)--(2);
\end{tikzpicture}\]
which is not the same as $H$.
\QED
\end{counterexample}
%
The problem here is not the definition of `traversing by $\delta$', but the definition
of navigation histories themselves. They allow for states such as $H$ from
Counterexample~\ref{cex:not-well-formed}, which includes the problematic documents:
\[\begin{tikzpicture}
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)(4)]{};
\end{tikzpicture}\]
There are similar problems with documents:
\[\begin{tikzpicture}
\node[doc,active,fully](2) at (2,-1){2};
\node[doc](1) at (1,-2){1};
\node[doc](3) at (3,-1){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(2)(3)]{};
\node[draw,dotted,fit=(1)(4)]{};
\end{tikzpicture}\]
and with documents:
\[\begin{tikzpicture}
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](3) at (3,-2){3};
\node[doc](2) at (2,-1){2};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(1)(2)]{};
\node[draw,dotted,fit=(3)(4)]{};
\end{tikzpicture}\]
%
It turns out that these are the only remaining causes of counterexamples,
and we will call histories like these not \emph{well-formed}.
\begin{definition}[Well-formed]
A navigation history is \emph{well formed} whenever
for any $a \ltSess b$ and $c \ltSess d$,
if $a \in \Active$ and $d \in \Active$ then $d \leChron b$.
\end{definition}
%
We have that traversal preserves being well-formed: if $H$ is well-formed, and $H$ traverses
by $\delta$ to $H'$, then $H'$ is well-formed. Unfortunately, this is not true for navigation,
because of the way it clears the session future.
\begin{counterexample}
\label{cex:wf-nav}
Let $H$ be the well-formed history:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)]{};
\draw[->](0)--(1);
\draw[->](1)--(2);
\end{tikzpicture}\]
which navigates from $2$ to:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(3)]{};
\node[draw,dotted,fit=(2)(4)]{};
\draw[->](0)--(1);
\draw[->](1)--(4);
\end{tikzpicture}\]
which is not well-formed.
\QED
\end{counterexample}
%
Fortunately, we can patch navigation to address this, by requiring that
we clear the entire joint session future, not just the session future of the document
being navigated from.
\begin{patch}[Navigation deletes joint session future]
Define \emph{navigating from $\aDoc$ to $\aDoc'$ in $\aNH$}, where $\aDoc\in\FullyActive$ to be the result of:
\begin{itemize}
\item deleting the joint session future, and then
\item replacing $\aDoc$ by $\aDoc'$.
\QED
\end{itemize}
\end{patch}
%
For example in Counterexample~\ref{cex:wf-nav}, navigation from 2 now results in the well-formed history:
\[\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc,jshactive,fully](4) at (4,-2){4};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)]{};
\node[draw,dotted,fit=(2)(4)]{};
\draw[->](0)--(1);
\draw[->](1)--(4);
\end{tikzpicture}\]
%
With these patches, we can prove the fundamental property of traversal.
\begin{theorem}
\label{thm:fundamental}
For any well-formed navigation history $H$,
if $H$ traverses the history by $\delta$ to $H'$
and $H'$ traverses the history by $\delta'$ to $H''$
then $H$ traverses the history by $\delta+\delta'$ to $H''$.
\end{theorem}
\begin{proof}
In this paper, we give a proof sketch. The full details have been mechanically verified in Agda~\cite{AgdaProofs}.
Define:
\begin{itemize}
\item a document $d$ \emph{can go back} whenever there is some $c \ltSess d$,
\item the \emph{back target} $b$ is the $\le$-largest active document which can go back, and
\item the \emph{forward target} $f$ is the $\le$-smallest document in the joint session future.
\end{itemize}
We then show some lemmas:
\begin{enumerate}
\item $H$ traverses by $+(\delta+1)$ to $H'$ if and only if
$H$ traverses to $f$, then traverses by $+\delta$ to $H'$.
\item $H$ traverses by $-(\delta+1)$ to $H'$ if and only if
$H$ traverses from $b$, then traverses by $-\delta$ to $H'$.
\item If $H$ is well-formed and $H$ traverses to $f$ with result $H'$,
then $f$ is the back target of $H'$, and $H'$ traverses from $f$ with result $H$.
\item If $H$ is well-formed and $H$ traverses from $b$ with result $H'$,
then $b$ is the forward target of $H'$, and $H'$ traverses to $b$ with result $H$.
\item If $H$ is well-formed and $H$ traverses to $f$ to $H'$, then $H'$ is well-formed.
\item If $H$ is well-formed and $H$ traverses from $b$ to $H'$, then $H'$ is well-formed.
\end{enumerate}
The result is then an induction on $\delta$.
\QED
\end{proof}
\section{Experiments}
\label{sec:experiments}
In this section, we summarize our experiments to validate the conformance of browser
implementations with respect to the \textsc{whatwg} specification, to our proposed
changes, and to each other.
We give details of how to recreate Counterexample~\ref{counterexample:intermediaries};
the other counterexamples are similar. We create an \textsc{html} page for the parent,
containing two \verb|iframe|s, both of which start at \verb|page1.html|, with a hyperlink
to \verb|page2.html|:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/1.png}%
}~\raisebox{-.5\height}{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-2){1};
\node[doc,active,fully](2) at (2,-1){2};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)] {};
\node[draw,dotted,fit=(2)] {};
\draw[->](0)--(1);
\draw[->](0)--(2);
\end{tikzpicture}
}
\end{quote}
Clicking on both hyperlinks loads both copies of \verb|page2.html|:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/8.png}%
}~\raisebox{-.5\height}{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-2){1};
\node[doc,active,fully](3) at (3,-2){3};
\node[doc](2) at (2,-1){2};
\node[doc,jshactive,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)to[out=300,in=180](3);
\draw[->](0)--(4);
\end{tikzpicture}
}
\end{quote}
Pressing the ``back'' button twice takes us to the initial state of Counterexample~\ref{counterexample:intermediaries}:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/1.png}%
}~\raisebox{-.5\height}{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,jshactive,fully](1) at (1,-2){1};
\node[doc](3) at (3,-2){3};
\node[doc,active,fully](2) at (2,-1){2};
\node[doc](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)--(1);
\draw[->](0)--(2);
\end{tikzpicture}
}
\end{quote}
Now, the user can traverse the history by $+2$ (by holding down the ``forward'' button)
which results in state:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/8.png}%
}~\raisebox{-.5\height}{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-2){1};
\node[doc,active,fully](3) at (3,-2){3};
\node[doc](2) at (2,-1){2};
\node[doc,jshactive,fully](4) at (4,-1){4};
\node[draw,dotted,fit=(0)] {};
\node[draw,dotted,fit=(1)(3)] {};
\node[draw,dotted,fit=(2)(4)] {};
\draw[->](0)to[out=300,in=180](3);
\draw[->](0)--(4);
\end{tikzpicture}
}
\end{quote}
Experimentally, this shows that Firefox is aligned with our patched model, rather than
with the unpatched model. We can set up similar experiments for the other counterexamples,
and execute them in other browsers, which gives the following results\footnote{%
Recall that Counterexample~4 depends on a non-well-formed navigation history,
and that the patch for it is to make such states unreachable, and so
experimentally unverifiable.
}:
\begin{center}
{\sffamily
\begin{tabular}{crrrr}
\rowcolor{black!50!blue}
&&&& \llap{\color{white} Counterexample} \\
\rowcolor{black!50!blue}
& \color{white}1 & \color{white}2 & \color{white}3 & \color{white}5 \\
\rowcolor{white!90!blue}
Firefox & P & P & P & P \\
Chrome & P & P & P & P \\
\rowcolor{white!90!blue}
Safari & P & P & P & P \\
Internet Explorer & U & U & P & P \\
\rowcolor{white!90!blue}
Servo & P & P & P & P \\
\end{tabular}
}
\quad
\begin{tabular}{ll}
\textsf{P}:& aligned with patched model \\
\textsf{U}:& aligned with unpatched model \\
\end{tabular}
\end{center}
Most browsers are compatible with the patched model rather than the
unpatched model, with the exception of Internet Explorer, which has mixed behaviour
(Edge is similar). Servo was designed from the patched model.
Moreover, performing these experiments reveals some unexpected behaviours
in browser implementations. For example, in Firefox, starting in the state:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/5.png}
}~\raisebox{-.5\height}{\rlap{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,active,fully](4) at (4,-1){4};
\node[doc](5) at (5,-2){5};
\node[doc,jshactive,fully](6) at (6,-2){6};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(6)]{};
\draw[->](0)to[out=0,in=140](4);
\draw[->](0)to[out=0,in=120](6);
\end{tikzpicture}
}}
\end{quote}
and traversing by $-4$ results in state:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4/firefox/6.png}
}~\raisebox{-.5\height}{\rlap{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,active,fully](4) at (4,-1){4};
\node[doc](5) at (5,-2){5};
\node[doc](6) at (6,-2){6};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(6)]{};
\draw[->](0)to[out=0,in=140](4);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}
}}
\end{quote}
This state is unexpected, as document $4$ should have traversed to document $1$, and any state
showing \verb|page3.html| should be capable of going back.
In Safari, the use of \verb|pushState| and \verb|popState| for navigation has unexpected
results. We can use \verb|pushState| and \verb|popState| to construct state:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4state/safari/6.png}%
}~\raisebox{-.5\height}{\rlap{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc,active,fully](1) at (1,-1){1};
\node[doc,jshactive,fully](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc](4) at (4,-1){4};
\node[doc](5) at (5,-2){5};
\node[doc](6) at (6,-2){6};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(6)]{};
\draw[->](0)--(1);
\draw[->](0)to[out=-20,in=90](2);
\end{tikzpicture}
}}
\end{quote}
then traversing by $+4$ results in:
\begin{quote}
\raisebox{-.5\height}{
\includegraphics[width=.45\linewidth]{images/experiments/forwardback4state/safari/7.png}%
}~\raisebox{-.5\height}{\rlap{
\begin{tikzpicture}
\node[doc,active,fully](0) at (0,0){0};
\node[doc](1) at (1,-1){1};
\node[doc](2) at (2,-2){2};
\node[doc](3) at (3,-1){3};
\node[doc,jshactive,fully](4) at (4,-1){4};
\node[doc](5) at (5,-2){5};
\node[doc](6) at (6,-2){6};
\node[draw,dotted,fit=(0)]{};
\node[draw,dotted,fit=(1)(4)]{};
\node[draw,dotted,fit=(2)(6)]{};
\draw[->](0)to[out=0,in=140](4);
\end{tikzpicture}
}}
\end{quote}
After this traversal, we are unable to determine the active entry for one of the \verb|iframe|s
as its state is \verb|null|.
As these examples show, navigation history is difficult to implement: even major
browser implementations give unexpected behaviours when combining separate
\verb|iframe| session histories.
\section{Specification}
In this section, we discuss how the \textsc{whatwg}
specification~\cite[\S7.7.2]{whatwg} can be aligned with the model
from \S\ref{sec:model}. This is not a direct translation, due to some
of the features we elided in our model. In particular, we did not
discuss how documents are \emph{loaded} and \emph{unloaded}, which
includes downloading and allocating resources such as \textsc{html} or
\textsc{css}, and activating JavaScript content. Since
loading-then-unloading a document is wasteful, the specification
should be written to avoid loading intermediate pages when traversing
by a delta. This introduces complexity.
Our first proposed change addresses the fact that the current specification is defined in terms
of the ``joint session history'' and makes use of the ``current entry
of the joint session history'', neither of which is used by our model.
We propose to remove the definitions of ``joint session history''
and ``current entry of the joint session history'', and to add the following:
\begin{quote}
The \textbf{session past} of a browsing context is the entries
of the session history added before the current entry
(and does not include the current entry).
The \textbf{session future} of a browsing context is the entries
of the session history added after the current entry
(and does not include the current entry).
If an entry has a next entry in the chronologically ordered session
history, it is its \textbf{successor}.
If an entry has a previous entry in the chronologically ordered session
history, it is its \textbf{predecessor}.
The \textbf{joint session past} of a top-level browsing context is the
union of all the session pasts of all browsing contexts
that share that top-level browsing context.
Entries in the joint session past are in decreasing chronological order of
the time they were added to their respective session histories.
The \textbf{joint session future} of a top-level browsing context is the
union of all the session futures of all browsing contexts
that share that top-level browsing context.
Entries in the joint session future are in increasing chronological order of
the time their predecessors were added to their respective session
histories.
\end{quote}
The second proposed change is to replace the definition of how a user agent
should ``traverse the history by a delta'' by the following:
\begin{quote}
To \textbf{traverse the history by a delta} \emph{delta}, the user
agent must append a task to this top-level browsing context's session
history traversal queue, the task consisting of running the following
steps:
\begin{enumerate}
\item Define the \emph{entry sequence}
as follows:
\begin{enumerate}
\item If \emph{delta} is a positive integer $+n$, and the length of the
joint session future is greater than or equal to $n$, then let the \emph{entry sequence}
be the first $n$ entries of the joint session future.
\item If \emph{delta} is a negative integer $-n$, and the length of the
joint session past is greater than or equal to $n$, then let the \emph{entry sequence}
be the first $n$ entries of the joint session past.
\item Otherwise, abort traversing the history by a delta.
\end{enumerate}
\item A session entry is said to \textbf{become active} when
it is a member of the \emph{entry sequence}, and no
session entry after it in the \emph{entry sequence} has the same
browsing context.
\item A session entry is said to \textbf{stay active} when it is the
current entry of its browsing context, and there are no members of
the \emph{entry sequence} with the same browsing context.
\item A session entry is said to be \textbf{activating} when either
it will become active or stay active.
\textbf{Note:} the activating documents
will be active after traversal has finished.
\item A session entry is said to be \textbf{fully activating} if it
is activating, and either its browsing context is a top-level
browsing context, or it has a parent browsing context
and the session entry through which it is nested is itself fully activating.
\textbf{Note:} the fully activating documents
will be fully active after traversal has finished.
\item Queue a task that consists of running the following
substeps. The relevant event loop is that of the specified
browsing context's active document. The task source for the queued
task is the history traversal task source.
\begin{enumerate}
\item For each \emph{specified entry} in the \emph{entry sequence},
run the following substeps.
\begin{enumerate}
\item Let \emph{specified browsing context} be the browsing context of the \emph{specified entry}.
\item If there is an ongoing attempt to navigate \emph{specified
browsing context} that has not yet matured (i.e. it has not
passed the point of making its \texttt{Document} the active
document), then cancel that attempt to navigate the browsing
context.
\item If the \emph{specified browsing context}'s active document
is not the same \texttt{Document} as the \texttt{Document} of
the specified entry, then run these substeps:
\begin{enumerate}
\item Prompt to unload the active document of the
\emph{specified browsing context}. If the user refused to
allow the document to be unloaded, then abort these steps.
\item Unload the active document of the \emph{specified
browsing context} with the recycle parameter set to false.
\end{enumerate}
\item If the \emph{specified entry} is activating but not fully activating,
then set the current entry of the session history of \emph{specified browsing context}
to be the \emph{specified entry}.
\textbf{Note:} in this case, the current entry of the session history should
be updated, but the document will not be fully active, so should not be loaded.
\item If the \emph{specified entry} is fully activating, then
traverse the history of the \emph{specified browsing context}
to the \emph{specified entry}.
\textbf{Note:} in this case, the document will be fully active, so should be loaded.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{quote}
We believe that these changes bring the specification in line with our model,
so that it satisfies the fundamental property of navigation.
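To illustrate the first step of the proposed algorithm, the following Python
sketch (our own simplification, using dictionaries with a \texttt{context} key
rather than the normative data structures) selects the entry sequence and
classifies which entries become active:
\begin{verbatim}
def entry_sequence(joint_future, joint_past, delta):
    """Step 1: pick the entries to traverse, or None to abort."""
    if delta > 0:
        return joint_future[:delta] if len(joint_future) >= delta else None
    if delta < 0:
        n = -delta
        return joint_past[:n] if len(joint_past) >= n else None
    return []                            # delta == 0: nothing to traverse

def becomes_active(entry, seq):
    """Steps 2-3: active iff no later entry in the sequence shares
    the same browsing context."""
    i = seq.index(entry)
    return all(e["context"] != entry["context"] for e in seq[i + 1:])
\end{verbatim}
Entries that are activating but not fully activating only have their session
history's current entry updated; fully activating entries are traversed to, and
hence loaded.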
\section{Conclusion}
We have proposed a model of web navigation compatible with the
\textsc{whatwg} specification, and investigated its ``fundamental
property'': that traversing by $\delta$ then by $\delta'$ is the same
as traversing by $\delta+\delta'$. Unfortunately, the specified model
does not satisfy this property, but we have shown that a patched model
does. Experimentally, it appears that the patched model is closer to
the behaviour of existing browser implementations.
\bibliographystyle{plain}
\bibliography{notes}
\end{document}
| {
"alphanum_fraction": 0.65910852,
"avg_line_length": 38.7988546886,
"ext": "tex",
"hexsha": "e64eedc7bdce266facf49f5c1335f29f9e21176b",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "cea700aea1488df1d27092c8f3d79d5d99f7e09a",
"max_forks_repo_licenses": [
"CC-BY-4.0"
],
"max_forks_repo_name": "cbrewster/ServoNavigation",
"max_forks_repo_path": "notes/notes.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "cea700aea1488df1d27092c8f3d79d5d99f7e09a",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"CC-BY-4.0"
],
"max_issues_repo_name": "cbrewster/ServoNavigation",
"max_issues_repo_path": "notes/notes.tex",
"max_line_length": 128,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "cea700aea1488df1d27092c8f3d79d5d99f7e09a",
"max_stars_repo_licenses": [
"CC-BY-4.0"
],
"max_stars_repo_name": "cbrewster/ServoNavigation",
"max_stars_repo_path": "notes/notes.tex",
"max_stars_repo_stars_event_max_datetime": "2017-04-22T01:51:07.000Z",
"max_stars_repo_stars_event_min_datetime": "2017-04-22T01:51:07.000Z",
"num_tokens": 18246,
"size": 54202
} |
\section{Applications of DSSAT}
\label{sect:dssat-application}
In this section, we demonstrate two applications of DSSAT.
\subsection{Analyzing probabilistic/approximate partial design}
After formulating DSSAT and proving its NEXPTIME-completeness,
we show its application to the analysis of probabilistic design and approximate design.
Specifically, we consider the probabilistic version of the \textit{topologically constrained logic synthesis problem}~\cite{Sinha2002,Balabanov2014},
or equivalently the \textit{partial design problem}~\cite{Gitina2013}.
In the \textit{(deterministic) partial design problem},
we are given a specification function $G(X)$ over primary input variables $X$ and
a \textit{partial design} $C_F$ with black boxes to be synthesized.
The Boolean functions induced at the primary outputs of $C_F$ can be described by $F(X,T)$,
where $T$ corresponds to the variables of the black box outputs.
Each black box output $t_i$ is specified with its input variables (i.e., dependency set) $\dep{i}\subseteq X \cup Y$ in $C_F$,
where $Y$ represents the variables for intermediate gates in $C_F$ referred to by the black boxes.
The partial design problem aims at deriving the black box functions $\{h_1(\dep{1}),\ldots,h_{|T|}(\dep{|T|})\}$
such that substituting $t_i$ with $h_i$ in $C_F$ makes the resultant circuit function equal $G(X)$.
The above partial design problem can be encoded as a DQBF problem;
moreover, the partial equivalence checking problem is shown to be NEXPTIME-complete~\cite{Gitina2013}.
Specifically, the DQBF that encodes the partial equivalence checking problem is of the form:
\begin{align}
\label{eq:dssat-dqbf-partial-design}
\forall X,\forall Y,\exists T(D).(Y \equiv E(X)) \limply (F(X,T)\equiv G(X)),
\end{align}
where $D$ consists of $(\dep{1},\ldots,\dep{|T|})$,
$E$ corresponds to the defining functions of $Y$ in $C_F$,
and the operator ``$\equiv$'' denotes element-wise equivalence between its two operands.
\begin{figure}[t]
\centering
\includegraphics{fig/build/dssat-prob-miter.pdf}
\caption{A miter for the equivalence checking of probabilistic partial design}
\label{fig:dssat-prob-miter}
\end{figure}
The above partial design problem can be extended to its probabilistic variant,
which is illustrated by the circuit shown in~\cref{fig:dssat-prob-miter}.
The \textit{probabilistic partial design problem} is the same as the deterministic partial design problem except that
$C_F$ is a distilled probabilistic design~\cite{LeeTC18ProbDesign} with black boxes,
whose functions at the primary outputs can be described by $F(X,Z,T)$,
where $Z$ represents the variables for the auxiliary inputs that trigger errors in $C_F$
(including the errors of the black boxes) and
$T$ corresponds to the variables of the black box outputs.
Each black box output $t_i$ is specified with its input variables (i.e., dependency set)
$\dep{i} \subseteq X \cup Y$ in $C_F$.
When $t_i$ is substituted with $h_i$ in $C_F$,
the function of the resultant circuit is required to be sufficiently close to the specification with respect to some expected probability.
\begin{theorem}
The probabilistic partial design problem is NEXPTIME-complete.
\end{theorem}
\begin{proof}
To show that the probabilistic partial design problem is in the NEXPTIME complexity class,
we note that the black box functions can be guessed and validated in time exponential in the number of black box inputs.
To show completeness in the NEXPTIME complexity class,
we reduce the known NEXPTIME-complete DSSAT problem to the probabilistic partial design problem,
similar to the construction used in the previous work~\cite{Gitina2013}.
Given a DSSAT instance,
it can be reduced to a probabilistic partial design instance in polynomial time as follows.
Without loss of generality,
consider the DSSAT formula~\cref{eq:dssat}.
We create a probabilistic partial design instance by letting the specification $G$ be a tautology and
letting $C_F$ be a probabilistic design with black boxes,
which involves primary inputs $x_1,\ldots,x_n$ and black box outputs $y_1,\ldots,y_m$ to compute the matrix $\pf$.
The driving inputs of the black box output $y_j$ are specified by the dependency set $\dep{y_j}$ in~\cref{eq:dssat},
and the probability for primary input $x_i$ to evaluate to $\top$ is set to $p_i$.
The original DSSAT formula is satisfiable with respect to a target satisfying probability $\theta$ if and only if
there exist implementations of the black boxes such that the resultant circuit composed with the black box implementations behaves like a tautology with respect to the required expectation $\theta$.
\end{proof}
On the other hand,
the probabilistic partial design problem can be encoded with the following XDSSAT formula:
\begin{align}
\label{eq:dssat-partial-design}
\random{} X,\random{} Z,\forall Y,\exists T(D).(Y \equiv E(X)) \limply (F(X,Z,T)\equiv G(X)),
\end{align}
where each primary input variable $x_i \in X$ is randomly quantified with probability $p_{x_i}$ to reflect its weight,
and the error-triggering auxiliary input variables $Z$ are randomly quantified according to the pre-specified error rates of the erroneous gates in $C_F$.
Notice that the above formula takes advantage of the extension with universal quantifiers as discussed previously.
In approximate design,
a circuit implementation may deviate from its specification by a certain extent.
The amount of deviation can be characterized in a way similar to the error probability calculation in probabilistic design.
For approximate partial design,
the equivalence checking problem can be expressed by the XDSSAT formula:
\begin{align}
\label{eq:dssat-approximate-design}
\random{} X,\forall Y,\exists T(D).(Y \equiv E(X)) \limply (F(X,T)\equiv G(X)),
\end{align}
which differs from~\cref{eq:dssat-partial-design} only in requiring no auxiliary inputs.
The probabilities of the randomly quantified primary input variables are determined by the approximation criteria in measuring the deviation.
For example, when all the input assignments are of equal weight,
the probabilities of the primary inputs are all set to 0.5.
We note that as the engineering change order (ECO) problem~\cite{JiangDATE20ECOSurvey} heavily relies on partial design equivalence checking,
the above DSSAT formulations provide fundamental bases for ECOs of probabilistic and approximate designs.
\subsection{Modeling Dec-POMDP}
We demonstrate the descriptive power of DSSAT by constructing a polynomial-time reduction from Dec-POMDP to DSSAT.
Our reduction is an extension of that from POMDP to SSAT proposed by Salmon and Poupart~\cite{Salmon2020}.
In essence, given a Dec-POMDP $\mathcal{M}$,
we will construct in polynomial time a DSSAT formula $\Qf$
such that there is a joint policy $\vec{\pi}$ for $\mathcal{M}$ with value $V(\vec{\pi})$
if and only if there is a set $\skf$ of Skolem functions for $\Qf$
with satisfying probability $\spb{\pcf{\Qf}{\skf}}$, and $V(\vec{\pi})=\spb{\pcf{\Qf}{\skf}}$.
First we introduce the variables used in construction of the DSSAT formula and their domains.
To improve readability, we allow a variable $x$ to take values from a finite set $U=\{x_1,\ldots,x_K\}$.
Under this setting,
a randomized quantifier $\random{}$ over variable $x$ specifies a distribution $\Pr[x\equiv x_i]$ for each $x_i\in U$.
We also define a scaled reward function:
\begin{align*}
r(s,\vec{a})=\frac{\rho(s,\vec{a})-\min\limits_{s',\vec{a}'}\rho(s',\vec{a}')}{\sum\limits_{s'',\vec{a}''}[\rho(s'',\vec{a}'')-\min\limits_{s',\vec{a}'}\rho(s',\vec{a}')]}
\end{align*}
such that $r(s,\vec{a})$ forms a distribution over all pairs of $s$ and $\vec{a}$,
i.e., $\forall s,\forall \vec{a}.r(s,\vec{a})\geq 0$ and $\sum\limits_{s,\vec{a}}r(s,\vec{a})=1$.
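The scaling can be read as a two-step normalization, sketched below in Python
(our own illustration; it assumes the rewards are not all equal, so the
denominator is nonzero):
\begin{verbatim}
import numpy as np

def scale_rewards(rho: np.ndarray) -> np.ndarray:
    """rho[s, a] is the original reward; the result is non-negative
    and sums to 1 over all (state, joint action) pairs."""
    shifted = rho - rho.min()      # make every reward non-negative
    return shifted / shifted.sum()
\end{verbatim}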
We will use the following variables:
\begin{itemize}
\item $x_s^t\in S$: the state at stage $t$,
\item $x_a^{i,t}\in A_i$: the action taken by Agent~$i$ at stage $t$,
\item $x_o^{i,t}\in O_i$: the observation received by Agent~$i$ at stage $t$,
\item $x_r^t\in S\times (A_1\times\ldots\times A_n)$: the reward earned at stage $t$,
\item $x_T^t\in S$: transition distribution at stage $t$,
\item $x_\Omega^t\in O_1\times\ldots\times O_n$: observation distribution at stage $t$,
\item $x_p^t\in \mathbb{B}$: used to sum up rewards across stages.
\end{itemize}
We encode elements in sets $S$, $A_i$, and $O_i$ by integers,
i.e., $S=\{0,1,\ldots,|S|-1\}$, etc.,
and use indices $s$, $a_i$, and $o_i$ to iterate through them, respectively.
On the other hand, a special treatment is required for variables $x_r^t$ and $x_\Omega^t$,
as they range over Cartesian products of multiple sets.
We assign a unique number to an element in a product set as follows.
Consider $\vec{Q}=Q_1\times\ldots\times Q_n$, where each $Q_i$ is a finite set.
An element $\vec{q}=(q_1,\ldots,q_n)\in \vec{Q}$ is numbered by $N(q_1,\ldots,q_n)=\sum_{i=1}^n q_i(\prod_{j=1}^{i-1}|Q_j|)$.
In the following construction,
variables $x_r^t$ and $x_\Omega^t$ will take values from the numbers given to the elements in $S\times\vec{A}$ and $\vec{O}$ by $N_r(s,\vec{a})$ and $N_\Omega(\vec{o})$, respectively.
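The numbering is a standard mixed-radix encoding, with the set sizes $|Q_i|$ as
radices. A small Python sketch of the map and its inverse (our own illustration):
\begin{verbatim}
def number_of(q, sizes):
    """q[i] in {0, ..., sizes[i]-1}; returns N(q_1, ..., q_n)."""
    n, base = 0, 1
    for qi, size in zip(q, sizes):
        n += qi * base
        base *= size
    return n

def tuple_of(n, sizes):
    """Inverse of number_of, recovering (q_1, ..., q_n)."""
    q = []
    for size in sizes:
        q.append(n % size)
        n //= size
    return tuple(q)
\end{verbatim}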
We begin by constructing a DSSAT formula for a Dec-POMDP with $h=1$.
Under this setting,
the derivation of an optimal joint policy amounts to finding an action for each agent that maximizes the expected value of the reward function, i.e.,
\begin{align*}
\vec{a}^*=\argmax_{\vec{a}\in \vec{A}}\sum_{s\in S}\Delta_0(s)r(s,\vec{a}).
\end{align*}
The following DSSAT formula encodes the above optimization problem:
\begin{align*}
\random{} x_s^0,\random{} x_r^0,\exists x_a^{1,0}(D_{x_a^{1,0}}),\ldots,\exists x_a^{n,0}(D_{x_a^{n,0}}).\pf,
\end{align*}
where the distribution of $x_s^0$ follows $\Pr[x_s^0 \equiv s]=\Delta_0(s)$,
the distribution of $x_r^0$ follows $\Pr[x_r^0 \equiv N_r(s,\vec{a})]=r(s,\vec{a})$,
each $D_{x_a^{i,0}}=\emptyset$, and the matrix:
\begin{align*}
\pf=\bigwedge_{s\in S}\bigwedge_{\vec{a}\in\vec{A}}[x_s^0\equiv s\wedge\bigwedge_{i\in I} x_a^{i,0}\equiv a_i\rightarrow x_r^0\equiv N_r(s,\vec{a})].
\end{align*}
As the existentially quantified variables have no dependency on the randomly quantified variables,
the DSSAT formula is effectively an exist-random quantified SSAT formula.
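The optimization that this formula encodes can be sketched directly in Python
(our own illustration; \texttt{Delta0} has shape $(|S|,)$ and \texttt{r} has one
axis per agent after the state axis):
\begin{verbatim}
import itertools
import numpy as np

def best_joint_action(Delta0, r):
    """Return the joint action maximizing the expected scaled reward."""
    actions = itertools.product(*[range(k) for k in r.shape[1:]])
    return max(actions,
               key=lambda a: float(np.dot(Delta0, r[(slice(None),) + a])))
\end{verbatim}
The satisfying probability of the formula under the Skolem functions that pick
this joint action is exactly the maximized expected reward.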
For an arbitrary Dec-POMDP with $h>1$,
we follow the two steps proposed in the previous work~\cite{Salmon2020},
namely \textit{policy selection} and \textit{policy evaluation},
and adapt the policy selection step for the multi-agent setting in Dec-POMDP.
In the previous work~\cite{Salmon2020},
an agent's policy selection is encoded by the following prefix (using Agent~$i$ as an example):
\begin{align*}
\exists x_a^{i,0},\random{} x_p^0,\random{} x_o^{i,0},\ldots,\exists x_a^{i,h-2},\random{} x_p^{h-2},\random{} x_o^{i,h-2},\exists x_a^{i,h-1},\random{} x_p^{h-1}.
\end{align*}
In the above quantification,
variable $x_p^t$ is introduced to sum up rewards earned at different stages.
It takes values from $\booldom$ and follows a uniform distribution,
i.e., $\Pr[x_p^t\equiv\top]=\Pr[x_p^t\equiv\bot]=0.5$.
When $x_p^t\equiv\bot$,
the process is stopped and the reward at stage $t$ is earned;
when $x_p^t\equiv\top$,
the process is continued to stage $t+1$.
Note that variables $\{x_p^t\}$ are shared across all agents.
With the help of variable $x_p^t$,
rewards earned at different stages are summed up with an equal weight $2^{-h}$.
Variable $x_o^{i,t}$ also follows a uniform distribution $\Pr[x_o^{i,t} \equiv o_i]=|O_i|^{-1}$,
which scales the satisfying probability by $|O_i|^{-1}$ at each stage.
Therefore, we need to re-scale the satisfying probability accordingly in order to obtain the correct satisfying probability corresponding to the value of a joint policy.
The scaling factor will be derived in the proof of~\cref{thm:dssat-dec-pomdp-reduction}.
As the actions of the agents can only depend on their own observation histories,
it is not obvious how to combine the quantifications of the individual agents,
i.e., the selections of their individual policies,
into the linearly ordered prefix required by SSAT
without incurring an exponential translation cost.
On the other hand, DSSAT allows the dependency set of an existentially quantified variable to be specified freely, and is therefore well suited to encoding the selection of a joint policy.
In the prefix of the DSSAT formula,
variable $x_a^{i,t}$ depends on $D_{x_a^{i,t}}=\{x_o^{i,0},\ldots,x_o^{i,t-1},x_p^0,\ldots,x_p^{t-1}\}$.
Next, the policy evaluation step is exactly the same as that in the previous work~\cite{Salmon2020}.
The following quantification computes the value of a joint policy:
\begin{align*}
\random{} x_s^t, \random{} x_r^t, t=0,\ldots,h-1 \\
\random{} x_T^t, \random{} x_\Omega^t, t=0,\ldots,h-2
\end{align*}
Variables $x_s^t$ follow a uniform distribution $\Pr[x_s^t \equiv s]=|S|^{-1}$ except for variable $x_s^0$,
which follows the initial distribution specified by $\Pr[x_s^0 \equiv s]=\Delta_0(s)$;
variables $x_r^t$ follow the distribution of the reward function $\Pr[x_r^t \equiv N_r(s,\vec{a})]=r(s,\vec{a})$;
variables $x_T^t$ follow the state transition distribution $\Pr[x_{T_{s,\vec{a}}}^t \equiv s']=T(s,\vec{a},s')$;
variables $x_\Omega^t$ follow the observation distribution $\Pr[x_{\Omega_{s',\vec{a}}}^t \equiv N_\Omega(\vec{o})]=\Omega(s',\vec{a},\vec{o})$.
Note that these variables encode the random mechanism of a Dec-POMDP and are hidden from the agents.
That is, variables $x_a^{i,t}$ do not depend on the above variables.
\begin{figure}[t]
\centering
\input{fig/dssat-dec-pomdp-formulas}
\caption{The formulas used to encode a Dec-POMDP $\mathcal{M}$}
\label{fig:dssat-dec-pomdp-formula}
\end{figure}
The formulas to encode $\mathcal{M}$ are listed in~\cref{fig:dssat-dec-pomdp-formula}.
\Cref{eq:dssat-dec-pomdp-formulas-xp-next} encodes that when $x_p^t \equiv \bot$, i.e., the process is stopped, the observation $x_o^{i,t}$ and next state $x_s^{t+1}$ are set to a reserved value $0$, and $x_p^{t+1} \equiv \bot$.
\Cref{eq:dssat-dec-pomdp-formulas-xp-stop} ensures the process is stopped at the last stage.
\Cref{eq:dssat-dec-pomdp-formulas-xr-0} ensures the reward at the first stage is earned when the process is stopped, i.e., $x_p^0 \equiv \bot$.
\Cref{eq:dssat-dec-pomdp-formulas-xr-t} requires the reward at stage $t>0$ is earned when $x_p^{t-1} \equiv \top$ and $x_p^t \equiv \bot$.
\Cref{eq:dssat-dec-pomdp-formulas-x-T} encodes the transition distribution from state $s$ to state $s'$ given actions $\vec{a}$ are taken.
\Cref{eq:dssat-dec-pomdp-formulas-x-omega} encodes the observation distribution to receive observation $\vec{o}$ under the situation that state $s'$ is reached after actions $\vec{a}$ are taken.
\begin{figure}[t]
\centering
\input{fig/dssat-dec-pomdp-derivation}
\caption{The derivation of the induction case in the proof of~\cref{thm:dssat-dec-pomdp-reduction}}
\label{fig:dssat-dec-pomdp-derivation}
\end{figure}
The following theorem states the correctness of the reduction.
\begin{theorem}\label{thm:dssat-dec-pomdp-reduction}
The above reduction maps a Dec-POMDP $\mathcal{M}$ to a DSSAT formula $\Qf$,
such that a joint policy $\vec{\pi}$ exists for $\mathcal{M}$ if and only if
a set $\skf$ of Skolem functions exists for $\Qf$, with $V(\vec{\pi})=\spb{\pcf{\Qf}{\skf}}$.
\end{theorem}
\begin{proof}
Given an arbitrary Dec-POMDP $\mathcal{M}$,
we prove the statement via mathematical induction over the planning horizon $h$ as follows.
For the base case $h=1$,
to prove the ``only if'' direction,
consider a joint policy $\vec{\pi}$ for $\mathcal{M}$ that specifies $\vec{a}=(a_1,\ldots,a_n)$
where Agent~$i$ takes action $a_i$.
For this joint policy,
the value is computed as $V(\vec{\pi})=\sum_{s \in S}\Delta_0(s)r(s,\vec{a})$.
Based on $\vec{\pi}$,
we construct a set $\skf$ of Skolem functions where $x_a^{i,0}=a_i$ for each $i\in I$.
To compute $\spb{\pcf{\Qf}{\skf}}$,
we cofactor the matrix with $\skf$ and arrive at the following CNF formula:
\begin{align*}
\bigwedge_{s\in S}[x_s^0\neq s \vee x_r^0 \equiv N_r(s,\vec{a})],
\end{align*}
and the satisfying probability of $\Qf$ with respect to $\skf$ is
\begin{align*}
\spb{\pcf{\Qf}{\skf}}=
\sum_{s\in S}\Pr[x_s^0 \equiv s]\Pr[x_r^0 \equiv N_r(s,\vec{a})]=
\sum_{s\in S}\Delta_0(s)r(s,\vec{a})=V(\vec{\pi}).
\end{align*}
Note that only equalities are involved in the above argument.
The reasoning steps can hence be reversed to prove the ``if'' direction.
For the induction step,
first assume that the statement holds for a planning horizon $h>1$.
For a planning horizon of $h+1$,
consider a joint policy $\vec{\pi}_{h+1}$ with value $V(\vec{\pi}_{h+1})$.
Note that as a joint policy is a mapping from observation histories to actions,
we can build a corresponding set of Skolem functions $\skf_{h+1}$ to simulate joint policy $\vec{\pi}_{h+1}$ for the DSSAT formula.
The derivation of satisfying probability with respect to $\skf_{h+1}$ is shown in~\cref{fig:dssat-dec-pomdp-derivation}.
Note that to obtain the correct value of the joint policy,
we need to re-scale the satisfying probability by a scaling factor $\kappa_{h+1}=2^{h+1}(|\vec{O}||S|)^{h}$.
As only equalities are involved in the derivation in~\cref{fig:dssat-dec-pomdp-derivation},
the ``if'' direction is also proved.
Because $\spb{\pcf{\Qf}{\skf_{h+1}}}=V(\vec{\pi}_{h+1})$ is established,
the theorem is proved according to the principle of mathematical induction.
\end{proof}
\subsubsection{Discussion}
Below we count the numbers of variables and clauses in the resulting DSSAT formula with respect to the input size of the given Dec-POMDP.
For a stage,
there are $3+2(|I|+|S||\vec{A}|)$ variables,
and therefore the total number of variables is $O(h(|I|+|S||\vec{A}|))$.
On the other hand,
the number of clauses per stage is $2+|I|+|S||\vec{A}|+|S|^2|\vec{A}|+|S||\vec{A}||\vec{O}|$,
and hence the total number of clauses is $O(h(|I|+|S||\vec{A}|(|S|+|\vec{O}|)))$.
Overall, we show that the proposed reduction is polynomial-time with respect to the input size of the Dec-POMDP.
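These counts can be computed directly; the following Python sketch (our own
illustration, treating every stage as an interior stage, so the figures are
asymptotic rather than exact) evaluates them for given problem dimensions, where
\texttt{A} and \texttt{O} denote $|\vec{A}|$ and $|\vec{O}|$:
\begin{verbatim}
def dssat_size(h, I, S, A, O):
    """Approximate (variables, clauses) of the reduction."""
    vars_per_stage = 3 + 2 * (I + S * A)
    clauses_per_stage = 2 + I + S * A + S * S * A + S * A * O
    return h * vars_per_stage, h * clauses_per_stage
\end{verbatim}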
Below we demonstrate the reduction with an example.
\begin{figure}[t]
\centering
\includegraphics{fig/build/dssat-dec-pomdp-example.pdf}
\caption{A Dec-POMDP example with two agents and $h=2$}
\label{fig:dssat-dec-pomdp-example}
\end{figure}
\begin{example}
Consider a Dec-POMDP with two agents and planning horizon $h=2$.
Given a joint policy $(\pi_1,\pi_2)$ for Agent~$1$ and Agent~$2$,
let the actions taken at $t=0$ be $\vec{a}^0=(a_1^0,a_2^0)$ and
the actions taken at $t=1$ under observations $\vec{o}^0=(o_1^0,o_2^0)$ be $\vec{a}^1=(a_1^1,a_2^1)$.
The value of this joint policy is computed by~\cref{eq:dssat-bellman} as:
\begin{align*}
V(\pi)=
\sum_{s^0\in S}\Delta_0(s^0)[r(s^0,\vec{a}^0)+
\sum_{\vec{o}^0\in\vec{O}}\sum_{s^1\in S}T(s^0,\vec{a}^0,s^1)\Omega(s^1,\vec{a}^0,\vec{o}^0)r(s^1,\vec{a}^1)].
\end{align*}
The decision tree of the converted DSSAT formula is shown in~\cref{fig:dssat-dec-pomdp-example}.
At $t=0$,
after taking actions $\vec{a}^0$,
variable $x_p^0$ splits into two cases:
when $x_p^0\equiv\bot$ (left branch),
the expected reward $\Delta_0(s^0)r(s^0,\vec{a}^0)$ will be earned for $t=0$;
on the other hand,
when $x_p^0\equiv\top$ (right branch),
observation $\vec{o}^0$ is received,
based on which the agents will select their actions $\vec{a}^1$ at $t=1$.
Again, variable $x_p^1$ will split into two cases,
but this time $x_p^1$ is forced to be $\bot$ as it is the last stage.
The expected reward $\Delta_0(s^0)T(s^0,\vec{a}^0,s^1)\Omega(s^1,\vec{a}^0,\vec{o}^0)r(s^1,\vec{a}^1)$ will be earned under the branch of $x_p^1\equiv\bot$ for $t=1$.
Note that the randomized quantifiers over variables $x_p^t$, $x_s^t$, and $x_o^t$ will scale the satisfying probability by the factors labelled on the edges, respectively.
Therefore, we have to re-scale the satisfying probability by $2^2|S||O_1\times O_2|$,
which is predicted by the scaling factor $\kappa_{h}=2^h(|\vec{O}||S|)^{h-1}$ calculated in the proof of~\cref{thm:dssat-dec-pomdp-reduction}.
\end{example} | {
"alphanum_fraction": 0.7182686684,
"avg_line_length": 61.9818731118,
"ext": "tex",
"hexsha": "b6969a8bf9627bfd583ede3de6501e34ba6889d9",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "nianzelee/PhD-Dissertation",
"max_forks_repo_path": "paper/dependency-ssat/application.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "nianzelee/PhD-Dissertation",
"max_issues_repo_path": "paper/dependency-ssat/application.tex",
"max_line_length": 229,
"max_stars_count": 1,
"max_stars_repo_head_hexsha": "061e22dd55b4e58b3de3b0e58bb1cbe11435decd",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "nianzelee/PhD-Dissertation",
"max_stars_repo_path": "paper/dependency-ssat/application.tex",
"max_stars_repo_stars_event_max_datetime": "2022-03-11T19:38:13.000Z",
"max_stars_repo_stars_event_min_datetime": "2022-03-11T19:38:13.000Z",
"num_tokens": 6226,
"size": 20516
} |
\documentclass{article}
\usepackage{amsmath}
\usepackage{listings}
\usepackage{xcolor}
\lstset { %
language=C++,
backgroundcolor=\color{black!5}, % set backgroundcolor
basicstyle=\footnotesize,% basic font setting
}
\title{Knowledge Graph Based on Bayesian Network}
\date{2018-9-8}
\author{Lanting Guo}
\begin{document}
\maketitle
\newpage
\tableofcontents
\newpage
\pagenumbering{gobble}
\section{Why Bayesian Network}
To empower both data and human knowledge.
\section{Definitions}
\subsection{Node}
\begin{lstlisting}
#include <map>
#include <string>
#include <vector>

using NodeName = std::string;
using AttributeName = std::string;
using Value = std::string;             // values kept as strings for simplicity
using Attributes = std::map<AttributeName, Value>;

class Node {
  NodeName node_name;                  // unique name of this node
  Node* parent;                        // parent node (nullptr for a root)
  std::vector<Node*> children;         // child nodes
  Attributes attributes;               // arbitrary key/value attributes
};
\end{lstlisting}
Nodes represent entities such as Question, Person, and Concept.
\subsection{Edge}
\begin{lstlisting}
class Edge {
  Node* head;   // head endpoint of the directed edge
  Node* tail;   // tail endpoint of the directed edge
};
\end{lstlisting}
\section{Goals}
\subsection{students' perspective}
philosophy: Accelerate learning speed.
definition: At time $t_1$, Student $s$ understands Concept $c$ with probability $p_1$; after a fixed period of training on the platform,
at time $t_2$, Student $s$ understands Concept $c$ with probability $p_2$.
details:
\subsection{company's perspective}
\section{Graph Architecture}
\subsection{baisc definition}
\subsection{how to add a node}
\subsection{how to delete a node}
\subsection{how to add a parent node}
\subsection{how to add a child node}
\section{Evaluation, Metrics}
\section{Inference}
\section{Parameters: Learning, Hand Coding}
\section{Scalability}
\section{Conclusion}
\end{document}
| {
"alphanum_fraction": 0.7451984635,
"avg_line_length": 22,
"ext": "tex",
"hexsha": "946eec2144eb55ee0db74c6151a4ca6c286bf305",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "1d833d8773833d7b371cc55ab8f5af5924ec7938",
"max_forks_repo_licenses": [
"MIT"
],
"max_forks_repo_name": "braincodercn/SimpleBayesianNetwork",
"max_forks_repo_path": "report.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "1d833d8773833d7b371cc55ab8f5af5924ec7938",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"MIT"
],
"max_issues_repo_name": "braincodercn/SimpleBayesianNetwork",
"max_issues_repo_path": "report.tex",
"max_line_length": 119,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "1d833d8773833d7b371cc55ab8f5af5924ec7938",
"max_stars_repo_licenses": [
"MIT"
],
"max_stars_repo_name": "braincodercn/SimpleBayesianNetwork",
"max_stars_repo_path": "report.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 409,
"size": 1562
} |
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Short Sectioned Assignment
% LaTeX Template
% Version 1.0 (5/5/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Frits Wenneker (http://www.howtotex.com)
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size
\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
\usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default
\usepackage[english]{babel} % English language/hyphenation
\usepackage{sectsty} % Allows customizing section commands
\allsectionsfont{\centering \normalfont\scshape} % Make all sections centered, the default font and small caps
% Support for Romanian diacritics
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{combelow}% provides \cb to place comma below character
\usepackage{newunicodechar}
% Allows links
\usepackage{hyperref}
\sloppy
\hypersetup{
colorlinks = true,
urlcolor = blue, % External URLs color.
linkcolor = blue, % Internal links color.
}
% Helps reduce spaces
\usepackage{enumitem}
\usepackage{fancyhdr} % Custom headers and footers
\pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers
\fancyhead{} % No page header - if you want one, create it in the same way as the footers below
\fancyfoot[L]{} % Empty left footer
\fancyfoot[C]{} % Empty center footer
%\fancyfoot[R]{\thepage} % Page numbering for right footer
\renewcommand{\headrulewidth}{0pt} % Remove header underlines
\renewcommand{\footrulewidth}{0pt} % Remove footer underlines
\setlength{\headheight}{0pt} % Customize the height of the header
%\setlength\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text
%----------------------------------------------------------------------------------------
% TITLE SECTION
%----------------------------------------------------------------------------------------
\newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Create horizontal rule command with 1 argument of height
\title{
\normalfont \normalsize
\textsc{University ``Alexandru Ioan Cuza'' of Iaşi} \\ [0pt] % Your university, school and/or department name(s)
\horrule{0.5pt} \\[0.4cm] % Thin top horizontal rule
\huge K syntax highlighter for IntelliJ IDEA\\ % The assignment title
\horrule{2pt} \\[0.5cm] % Thick bottom horizontal rule
}
\author{Bîrsan Ioana (căs. Amariei), B5} % Your name
\date{} % Doesn't allow automatic date appearance
%----------------------------------------------------------------------------------------
% DOCUMENT
%----------------------------------------------------------------------------------------
\begin{document}
\maketitle % Print the title
\section*{Objectives}
My project proposal consists of developing a plugin for IntelliJ IDEA that provides:
\begin{enumerate}
\item{K syntax highlighting}
\item{support for autocompletion}
\item{recognition of files with the \texttt{.k} extension}
\end{enumerate}
Future directions:
\begin{itemize}
\item{support for compiling programs}
\end{itemize}
\parskip = \baselineskip % one line between paragraphs
\section*{Bibliography}
\begin{enumerate}[noitemsep,nolistsep]
\item[$-$] \url{https://en.wikipedia.org/wiki/Service_provider_interface}
\item[$-$] \url{https://www.jetbrains.org/intellij/sdk/docs/basics/types_of_plugins.html}
\item[$-$] \url{http://www.jetbrains.org/intellij/sdk/docs/tutorials/custom_language_support_tutorial.html}
\item[$-$] \url{https://www.jetbrains.org/intellij/sdk/docs/basics/plugin_structure/plugin_services.html}
\item[$-$] \url{https://www.cqse.eu/en/blog/intellij-plugin-tutorial/}
\item[$-$] \url{http://www.latextemplates.com/template/short-sectioned-assignment}
\end{enumerate}
\end{document} | {
"alphanum_fraction": 0.6431279621,
"avg_line_length": 38.7155963303,
"ext": "tex",
"hexsha": "2e30f6011a6c1c2d807143be3ef6ac5d64ee702a",
"lang": "TeX",
"max_forks_count": null,
"max_forks_repo_forks_event_max_datetime": null,
"max_forks_repo_forks_event_min_datetime": null,
"max_forks_repo_head_hexsha": "dd03f69b264d0161c53ab41b58ae034a737da752",
"max_forks_repo_licenses": [
"Apache-2.0"
],
"max_forks_repo_name": "ioanabirsan/K-Syntax-Highlighter-for-Intellij-IDEA",
"max_forks_repo_path": "project-prerequisites/documentation/syntax-highlighter.tex",
"max_issues_count": null,
"max_issues_repo_head_hexsha": "dd03f69b264d0161c53ab41b58ae034a737da752",
"max_issues_repo_issues_event_max_datetime": null,
"max_issues_repo_issues_event_min_datetime": null,
"max_issues_repo_licenses": [
"Apache-2.0"
],
"max_issues_repo_name": "ioanabirsan/K-Syntax-Highlighter-for-Intellij-IDEA",
"max_issues_repo_path": "project-prerequisites/documentation/syntax-highlighter.tex",
"max_line_length": 124,
"max_stars_count": null,
"max_stars_repo_head_hexsha": "dd03f69b264d0161c53ab41b58ae034a737da752",
"max_stars_repo_licenses": [
"Apache-2.0"
],
"max_stars_repo_name": "ioanabirsan/K-Syntax-Highlighter-for-Intellij-IDEA",
"max_stars_repo_path": "project-prerequisites/documentation/syntax-highlighter.tex",
"max_stars_repo_stars_event_max_datetime": null,
"max_stars_repo_stars_event_min_datetime": null,
"num_tokens": 1046,
"size": 4220
} |
Roots is a holistic system for application performance monitoring (APM),
performance anomaly detection, and bottleneck identification.
The key intuition behind the system is that, as an intrinsic PaaS service, Roots
has visibility into all activities of the PaaS cloud, across layers.
Moreover, since the PaaS applications we have observed spend most of their time in
PaaS kernel services~\cite{Jayathilaka:2015:RTS:2806777.2806842}, we hypothesize
that we can reason about application performance from observations of how
the application uses the platform, i.e. by monitoring the time spent in
PaaS kernel services. If we are able to do so, then we can avoid application
instrumentation and its downsides while detecting performance anomalies, and
identifying their root cause in near real time with low overhead.
The PaaS model that we assume with Roots is one
in which the clients of a web application engage in a
``service level agreement'' (SLA)~\cite{Keller:2003:WFS:635430.635442}
with the ``owner'' or operator of the application that is hosted in a PaaS cloud. The SLA
stipulates a response-time ``service level objective'' (SLO) that, if violated,
constitutes a breech of the agreement.
If the performance of an application deteriorates to the
point that at least one of its SLOs is violated, we treat it
as an \textit{anomaly}. Moreover, we refer to the process
of diagnosing the reason for
an anomaly as \textit{root cause analysis}.
For a given anomaly, the root cause could be a change in the application workload or
a \textit{bottleneck} in the application runtime. Bottlenecks may occur in the
application code, or in the PaaS kernel services that the application depends on.
Roots collects performance data across the cloud platform stack, and aggregates it based on
request/response. It uses this data to infer application performance, and to identify
SLO violations (performance anomalies). Roots can further handle different types of anomalies
in different ways. We overview each of these functionalities in the remainder of this section.
\subsection{Data Collection and Correlation}
We must address two issues when designing a monitoring framework for
a system as complex as a PaaS cloud.
\begin{enumerate}
\item Collecting data from multiple different layers.
\item Correlating data collected from different layers.
\end{enumerate}
Each layer of the cloud platform is only able to collect data regarding the
state changes that are local to it. A layer cannot monitor state changes
in other layers due to the level of encapsulation provided by layers. However,
processing an application request involves cooperation of multiple layers.
To facilitate system-wide monitoring and
bottleneck identification, we must gather data from all the different layers involved
in processing a request. To combine the information across layers,
we correlate the data, and link events related to the same client request together.
To enable this, we augment the front-end server of the cloud platform.
Specifically, we have it tag incoming application requests with unique identifiers.
This request identifier is added to the HTTP request as a header, which is visible to all
internal components of the PaaS cloud. Next, we configure data collecting agents
within the platform to record the request identifiers along with any events they capture.
This way we record the relationship between application requests, and the resulting
local state changes in different layers of the cloud, without breaking the existing level
of abstraction in the cloud architecture. This approach is also scalable, since the events are
recorded in a distributed manner without having to maintain any state at the data collecting agents.
Roots aggregates the recorded events by request
identifier to efficiently group the related events as required during analysis.
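A minimal sketch of this correlation scheme in Python (names such as
\texttt{X-Roots-Request-Id} and \texttt{record\_event} are our own illustrative
choices, not the actual Roots interfaces):
\begin{verbatim}
import uuid
from collections import defaultdict

REQUEST_ID_HEADER = "X-Roots-Request-Id"
event_store = defaultdict(list)          # stand-in for the Roots data storage

def tag_request(headers):
    """Front-end server: attach a unique identifier to the incoming request."""
    headers[REQUEST_ID_HEADER] = str(uuid.uuid4())
    return headers

def record_event(request_id, layer, payload):
    """Data-collecting agent in any layer: record a local event with the id."""
    event_store[request_id].append({"layer": layer, **payload})

def events_for(request_id):
    """Analysis side: the 'group by request identifier' query."""
    return event_store[request_id]
\end{verbatim}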
\begin{figure}
\centering
\includegraphics[scale=0.5]{apm_architecture}
\caption{Roots APM architecture.}
\label{fig:apm_architecture}
\end{figure}
Figure~\ref{fig:apm_architecture} illustrates the high-level architecture of Roots, and how
it fits into the PaaS stack. APM components are shown in grey.
The small grey boxes attached to the PaaS components represent the
agents used to instrument the cloud platform.
In the diagram, a user request is tagged with the identifier value
$R$ at the front-end server. This identifier is passed down to the lower layers of the cloud
along with the request. Events that occur in the lower layers while processing this request
are recorded with the request identifier $R$, so Roots can correlate them later. For example, in the
data analysis component we can run a filter query to select all the events related to a particular
request (as shown in the pseudo query in the diagram). Similarly, Roots can run a ``group by''
query to select all events, and aggregate them by the request identifier.
Figure~\ref{fig:apm_architecture} also
depicts Roots data collection across all layers in the
PaaS stack (i.e. full stack monitoring).
From the front-end server layer we gather information related to incoming application
requests. This involves scraping the HTTP server access logs, which are
readily available in most technologies used as front-end
servers (e.g. Apache HTTPD, Nginx).
From the application server layer, we collect application logs and
metrics from the application runtime that are easily accessible, e.g. process level
metrics indicating resource usage of the individual application instances. Additionally, Roots
employs a set of per-application benchmarking processes that periodically probe
different applications
to measure their performance. These are lightweight, stateless processes managed by the Roots framework.
Data collected by these processes is sent to the data storage component, and is available
for analysis as per-application time series data.
At the PaaS kernel layer we collect information regarding all kernel invocations
made by the applications. This requires intercepting the PaaS kernel invocations
at runtime. This must be done carefully so as not to introduce significant
overhead on application execution. For each PaaS kernel invocation, we capture the
following parameters.
\begin{itemize}
\item Source application making the kernel invocation
\item Timestamp
\item A sequence number indicating the order of PaaS kernel invocations within an application request
\item Target kernel service and operation
\item Execution time of the invocation
\item Request size, hash and other parameters
\end{itemize}
Collecting PaaS kernel invocation details enables tracing the execution of application
requests without requiring that the application code be instrumented.
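The interception can be pictured as a thin wrapper around each kernel call that
records the parameters listed above into a local buffer; the sketch below is our
own illustration (the buffer, names, and per-thread sequence counter are
assumptions, not the actual agent implementation):
\begin{verbatim}
import threading
import time

_seq = threading.local()                 # rough per-request sequence counter
invocation_buffer = []                   # flushed later by a background task

def traced_kernel_call(app, request_id, service, operation, fn, *args):
    seq = getattr(_seq, "n", 0)
    _seq.n = seq + 1
    start = time.time()
    try:
        return fn(*args)                 # the actual PaaS kernel invocation
    finally:
        invocation_buffer.append({
            "app": app, "request": request_id, "seq": seq,
            "service": service, "operation": operation,
            "timestamp": start, "elapsed": time.time() - start,
        })
\end{verbatim}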
Finally, at the lowest level we can collect information related to virtual machines, containers
and their resource usage. We gather metrics on network usage by individual components, which
is useful for traffic engineering use cases.
We also scrape
hypervisor and container manager logs to learn how resources are allocated and released over time.
To avoid introducing delays to the application request processing flow, we implement
Roots data collecting agents as asynchronous tasks. That is, none of them
suspend application request processing to report data to the data storage components.
To enable this, we collect data into log files or memory buffers that are local to the
components being monitored. This locally collected (or buffered) data is periodically sent
to the data storage components of Roots using separate background tasks and batch communication
operations. We also isolate the activities in the cloud platform from potential
failures in the Roots data collection or storage components.
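A sketch of such an asynchronous, buffered reporter (our own illustration;
\texttt{send\_batch} is a placeholder for whatever batch write the storage
component exposes):
\begin{verbatim}
import threading
import time

class BufferedReporter:
    def __init__(self, send_batch, interval=5.0):
        self._buf, self._lock = [], threading.Lock()
        self._send, self._interval = send_batch, interval
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def report(self, event):
        """Called on the request path: append only, no I/O."""
        with self._lock:
            self._buf.append(event)

    def _flush_loop(self):
        while True:
            time.sleep(self._interval)
            with self._lock:
                batch, self._buf = self._buf, []
            if batch:
                try:
                    self._send(batch)    # batch write to the data storage
                except Exception:
                    pass                 # storage failures must not affect the platform
\end{verbatim}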
\subsection{Data Storage}
The Roots data storage is a database that supports persistently storing monitoring data, and running
queries on them.
%Cloud providers have the freedom to implement this component in any way they see fit, as long
%as it scales to the number of applications deployed in the cloud platform.
Most data retrieval queries executed
by Roots use application and time intervals as indices. Therefore a database that can index monitoring
data by application and timestamp will greatly improve the query performance.
It is also acceptable to remove old monitoring data to make room for more recent events, since Roots
performs anomaly detection using the most recent data in near real time.
\subsection{Data Analysis}
Roots data analysis components use two basic abstractions: \textit{anomaly detectors}
and \textit{anomaly handlers}.
Anomaly detectors are processes that periodically analyze the data collected for
each deployed application. Roots supports multiple detector implementations, where each implementation
uses a different statistical method to look for performance anomalies. Detectors are configured
per-application, making it possible for different applications to use different anomaly
detectors. Roots also supports multiple concurrent anomaly detectors on the same application, which can be used
to evaluate the efficiency of different detection strategies for any given application. Each
anomaly detector has an execution schedule (e.g. run every 60 seconds), and a sliding window
(e.g. from 10 minutes ago to now)
associated with it. The boundaries of the window determine the time range
of the data processed by the detector at any round of execution. The window is updated
after each round of execution.
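A simple SLO-based detector can be sketched as follows (our own illustration;
\texttt{fetch\_response\_times} and \texttt{emit\_anomaly} stand in for the
Roots storage query and the shared-memory event bus):
\begin{verbatim}
import time

def slo_detector(app, slo_ms, allowed_violation_rate, window_s, period_s,
                 fetch_response_times, emit_anomaly):
    while True:
        now = time.time()
        samples = fetch_response_times(app, now - window_s, now)
        if samples:
            rate = sum(1 for t in samples if t > slo_ms) / len(samples)
            if rate > allowed_violation_rate:
                emit_anomaly({"app": app, "timestamp": now,
                              "window": (now - window_s, now)})
        time.sleep(period_s)             # fixed schedule; the window slides
\end{verbatim}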
When an anomaly detector finds an anomaly in application performance, it sends an event
to a collection of anomaly handlers. The event encapsulates a unique anomaly identifier,
a timestamp, an application identifier, and the source detector's sliding window corresponding to the
anomaly. Anomaly handlers are configured globally (i.e. each handler
receives events from all detectors), but each handler can be programmed to handle only
certain types of events. Furthermore, they can fire their own events, which are also delivered to
all the listening anomaly handlers. Similar to detectors, Roots supports multiple anomaly handler
implementations -- one for logging anomalies, one for sending alert emails, one
for updating a dashboard, etc. Additionally, Roots provides two special anomaly handler
implementations: a workload change analyzer, and a bottleneck identifier.
We implement the communication between detectors and handlers
using shared memory.
The ability of anomaly handlers to fire their own events, coupled with their support
for responding to a filtered subset of incoming events enables constructing
elaborate event flows with sophisticated logic. For example, the workload
change analyzer can run some analysis upon receiving an anomaly event
from any anomaly detector. If an anomaly cannot be associated with a workload
change, it can fire a different type of event. The bottleneck identifier, can
be programmed to only execute its analysis upon receiving this second type of event.
This way we perform the workload change analysis first, and perform the
system-wide bottleneck identification only when it is necessary.
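The event flow can be sketched with a small publish/subscribe bus (our own
illustration; the event types and the analysis helpers are hypothetical):
\begin{verbatim}
class EventBus:
    def __init__(self):
        self.handlers = []               # (filter predicate, callback) pairs

    def subscribe(self, accepts, handler):
        self.handlers.append((accepts, handler))

    def publish(self, event):
        for accepts, handler in self.handlers:
            if accepts(event):
                handler(event, self)

def workload_changed(event):
    return False                         # placeholder for the workload analysis

def workload_change_analyzer(event, bus):
    if not workload_changed(event):      # no workload change: escalate
        bus.publish({**event, "type": "non-workload-anomaly"})

def bottleneck_identifier(event, bus):
    print("bottleneck analysis for", event["app"])

bus = EventBus()
bus.subscribe(lambda e: e["type"] == "anomaly", workload_change_analyzer)
bus.subscribe(lambda e: e["type"] == "non-workload-anomaly",
              bottleneck_identifier)
\end{verbatim}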
Both the anomaly detectors and anomaly handlers work with fixed-sized sliding windows.
Therefore the amount of state these entities must keep in memory has
a strict upper bound.
The extensibility of Roots is primarily achieved through the abstractions of anomaly
detectors and handlers. Roots makes it simple to implement new detectors and handlers,
and plug them into the system. Both the detectors and the handlers are executed
as lightweight processes that do not interfere with the rest of the processes in
the cloud platform.
\subsection{Roots Process Management}
\label{sec:process_mgt}
\begin{figure}
\centering
\includegraphics[scale=0.45]{roots_pod}
\caption{Anatomy of a Roots pod. The diagram shows 2 application benchmarking processes (B),
3 anomaly detectors (D), and 2 handlers (H). Processes communicate via a shared
memory communication bus local to the pod.}
\label{fig:roots_pod}
\end{figure}
Most data collection activities in Roots can be treated as passive -- i.e. they
take place automatically as the applications receive and process requests in the cloud
platform. They do not require explicit scheduling or management. In contrast,
application benchmarking and data analysis are active processes that require
explicit scheduling and management. This is achieved by grouping benchmarking
and data analysis processes into units called Roots pods.
Each Roots pod is responsible for starting and maintaining a preconfigured set of
benchmarkers and data analysis processes (i.e. anomaly detectors and handlers).
These processes are lightweight enough that a large number of them can be packed
into a single pod. Pods are self-contained entities, and there is no inter-communication
between pods. Processes in a pod can efficiently communicate with each other
using shared memory, and call out to the central Roots data storage to retrieve
collected performance data for analysis.
This enables starting and stopping
Roots pods with minimal impact on the overall monitoring system.
Furthermore, pods
can be replicated for high availability, and application load can be distributed
among multiple pods for scalability.
Figure~\ref{fig:roots_pod} illustrates a Roots pod monitoring two applications.
It consists of two benchmarking processes, three anomaly detectors and
two anomaly handlers. The anomaly detectors and handlers are shown communicating
via an internal shared memory communication bus.
%To automate the process of managing pods, they can be tied into the core
%process management framework of the PaaS cloud. That way whenever the cloud
%platform initializes, a collection of pods can be started automatically.
%Application deployment process of the PaaS cloud can be augmented
%to register each new application with one of the available pods, so that the
%benchmarkers and anomaly detectors can start running on the application.
%Moreover, pods can be moved around or restarted as needed in response
%to errors and autoscaling events that occur in the cloud platform.
\documentclass[twocolumn,letterpaper,12pt,notitlepage]{article}
\usepackage{graphicx}
\usepackage[font=bf]{caption}
\graphicspath{ {./images/} }
\begin{document}
\title{Lord Stanley Seekers}
\author{
Crooks, Kevin\\
\texttt{[email protected]}
\and
Lim, Brian\\
\texttt{[email protected]}
\and
May, Arthur\\
\texttt{[email protected]}
}
\twocolumn[{%
\renewcommand\twocolumn[1][]{#1}%
\maketitle
\begin{center}
\centering
\includegraphics[width=1\textwidth,height=5cm]{penaltybox}
\captionof{figure}{A full penalty box during an NHL Game.\cite{penaltybox}}
\end{center}%
}]
\section{Problem Space}
In the world of sports, predicting a game’s outcome has myriad benefits. It can help coaches and players
learn what areas to focus on. It can help sportswriters write articles outlining the predictions for season-final
standings. It can help oddsmakers set profitable betting lines. Unfortunately for these professionals, the current best NHL game prediction accuracy is less than 70 percent. This is generally attributed to simple
randomness in sports, the general perception of higher parity in the NHL as opposed to other sports (we
all know the Patriots are going to be in the postseason, right?), and of course the randomness of ice
conditions: bad bounces, lucky breaks, millimetre differences in skate angle, and so on. Nonetheless, 70 percent isn’t bad,
but naturally everyone wants these predictions to be better. \newline
Rather than retread an old problem space, which would simply amount to wondering whether the science or the training algorithms have improved in the past few years, we decided to look at a new
problem: period prediction. An NHL game is divided into 3 periods of twenty minutes each. If the score
is still tied after these three regulation periods, a 4th overtime period is played. Overtime rules have
changed a few times in the past couple of decades, but the current standard is a single 5-minute period.
We included these in our analyses as well. Although the statistics will differ between a full
20-minute period and a shorter 5-minute period, we are looking at predictors for each period individually,
so our overall models will not be affected by this bundling. \newline
The advantage of such a prediction might be immediately obvious to a coach, although probably
not as useful for Vegas. If your team enters the second period trailing by 2, how do you overcome that
deficit? If hits have a high correlation with success, maybe play more aggressively. If blocked shots have a
low correlation, don’t worry about taking too many bad shots at the risk of turning the puck over. Our
goal is to see whether we can find which statistics, taken from the standard metrics used in NHL statkeeping,
can be beneficial in predicting the outcome of a period.\newline
Similar research has been done in this space, but not using the same approach. In one study, players are ranked according to how much their actions impact their team's chance of scoring the next goal.\cite{scoringimpact} In another study done on NHL data, researchers attempt to predict game outcomes based on pre-game data.\cite{pregame} Lastly, one study used Google's PageRank algorithm to predict the NHL playoffs.\cite{pagerank}
\section{Approach}
Our approach to the problem iterated through three steps over multiple phases. The first step was to prepare the data. Once the data was prepared, we produced a number of visualizations to gain an understanding of the relationships between the features and the end-of-period goal differential. Once we understood those relationships, we ran the features through different machine learning classifiers based on the results we were seeking. We did not seek a binary prediction but a goal differential, which is a bit more challenging. Furthermore, as we settled on our models, we experimented with different normalization approaches and ran a grid search on our classifier to help fine-tune the results. We performed forward stepwise selection to determine the best subset of features for predicting the outcome of each period, and used the best subset per period to train and evaluate the models for the individual periods.
Several classifiers were tested, including gradient boosting, random forest, and KNN. The classifier that performed best was used to present the results.
Once results were produced, we generated confusion matrices to help us interpret how our model was performing relative to the other potential outcomes.
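The following sketch (Python with scikit-learn) illustrates the shape of this pipeline on random stand-in data; the parameter grid and split shown here are hypothetical and are not the exact settings used in this work.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# X: per-period feature matrix; y: end-of-period goal differential
# binned to {-1, 0, 1}. Random stand-in data is used here.
rng = np.random.default_rng(0)
X = rng.random((1000, 19))
y = rng.integers(-1, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

pipeline = Pipeline([
    ("scale", MinMaxScaler()),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
param_grid = {"clf__n_estimators": [100, 200],
              "clf__max_depth": [2, 3]}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
\end{verbatim}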
\section{Data}
The data for this work was obtained from Kaggle and comprises 3.6 million plays over 11,400 games across
the course of 3 full seasons. The plays include the standard stats recorded by the NHL – goals,
penalties, shots on goal, blocked shots, hits, faceoffs, takeaways, giveaways, and missed shots. This data
was synchronized with 8.9 million shifts for the same games (a shift is defined as a consecutive time on
ice for a player, typically 20-40 seconds, and a player might have 40 shifts over the course of a single
game). This data was then split by home and away, the artificial split we used as the easiest way to
differentiate teams. Correspondingly, the predicted outcome, final period goal differential, was also given as
Away minus Home. Thus to determine the final goal differential for the home team, simply take the negative
of the result. The resultant feature matrix was 37589x19: 37,589 periods for which we had valid
statistics and 19 statistical measurements, originally given as raw counts to be later normalized and
scaled depending on the statistic in question. A sample of the data can be found in Table \ref{tab:1}. \newline
A small handful of periods had to be rejected for having missing or obviously incorrect
statistics. This generally happened in overtime periods, for reasons unknown. Additionally, we noted
a minor statistical error, probably attributable to typographical errors at the input level, in which a player’s
shift length was given as 1-2 seconds, followed by a regular 30-second shift. The number of these was
small enough that they were included regardless, reasoning that they would have little effect on the overall
average shift length feature, and indeed on the final outcome itself.
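To make the preparation step concrete, the sketch below shows one way to aggregate play-by-play events into per-period home/away counts and an Away-minus-Home goal differential with pandas; the column names and values are hypothetical stand-ins and do not match the Kaggle schema exactly.
\begin{verbatim}
import pandas as pd

# Tiny stand-in for the play-by-play data (hypothetical columns).
plays = pd.DataFrame({
    "period_id": ["g1_1"] * 5 + ["g1_2"] * 4,
    "side":  ["home", "away", "home", "away", "away",
              "home", "home", "away", "away"],
    "event": ["Shot", "Goal", "Hit", "Shot", "Goal",
              "Goal", "Shot", "Blocked Shot", "Goal"],
})

# Per-period counts of each event type, split by home and away.
features = (plays.groupby(["period_id", "side", "event"])
                 .size()
                 .unstack(["side", "event"], fill_value=0))

# Target: end-of-period goal differential, Away minus Home.
goals = plays[plays["event"] == "Goal"]
diff = (goals.groupby("period_id")["side"]
             .apply(lambda s: (s == "away").sum() - (s == "home").sum()))
print(features.join(diff.rename("goal_differential")))
\end{verbatim}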
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{||c c c c c c c c||}
\hline
periodID & period initial goal differential & blocked shots away & blocked shots home & faceoff away & faceoff home & ... & period final goal differential \\ [0.5ex]
\hline\hline
2010020001\_1 & 0 & 7 & 6 & 9 & 5 & ... & 1 \\
\hline
2010020001\_2 & 1 & 8 & 13 & 5 & 8 & ... & 0 \\
\hline
2010020001\_3 & 1 & 6 & 3 & 9 & 7 & ... & 0 \\ [1ex]
\hline
\end{tabular}}
\caption{NHL game data prepared sample}
\label{tab:1}
\end{table*}
After preparing the data, a number of visualizations were performed to better understand the relationships.
Figure \ref{fig:2} contains a Seaborn violin plot. In this particular plot, the difference between the home and away value of each attribute is plotted against the change in goal differential between the beginning and end of the period. The penalty differential and takeaway differential appeared to have a strong relationship, which makes sense: if you are on the powerplay more often than your opponent in a period, you will score more goals. The same argument can be made for takeaways. Nonetheless, some attributes had more impact on the outcome than others.
The same analysis was done attribute by attribute using a Seaborn joint plot. A sample plot can be found in Figure \ref{fig:3}.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{violin}
\caption{Violin plot of difference between home and away attributes vs. Outcome}
\label{fig:2}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{joint}
\caption{Joint plot of penalty differential and goal differential for the period}
\label{fig:3}
\end{figure}
\section{Results}
A number of multi-class classifiers were tested, including KNN, Random Forest, and Gradient Boost. No classifier performed particularly well relative to our 60$\%$ accuracy target, with results in the 35-45$\%$ accuracy range for the first three periods and increasing accuracy for each successive period. These results can be found in Figure \ref{fig:4}, Figure \ref{fig:5}, and Figure \ref{fig:6}.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{period1cm}
\caption{Period 1 accuracy score and confusion matrix heat map.}
\label{fig:4}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{period2cm}
\caption{Period 2 accuracy score and confusion matrix heat map.}
\label{fig:5}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{period3cm}
\caption{Period 3 accuracy score and confusion matrix heat map.}
\label{fig:6}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{periodOTcm}
\caption{OT accuracy score and confusion matrix heat map.}
\label{fig:7}
\end{figure}
In search of a more conclusive result, we included shift data and overtime results in our features and data. The shift data did not do much to sway our results. However, when running our classifier for overtime, the results were much more favorable, with accuracy scores of approximately 75$\%$. Details of the accuracy score and confusion matrix can be found in Figure \ref{fig:7}.
To achieve our results with the gradient boosting classifier, score differentials were binned to $\{-1,0,1\}$. A MinMaxScaler was applied to all attributes to normalize the data and help with the interpretation and comparison of models. A grid search was performed to help identify and understand the impact of each of the parameters on the model. Once a model was found, different iterations were performed using different subsets of the features, guided by our initial visualization analysis of features that correlated well. As described in the Approach section, forward stepwise selection determined the best subset of features per period, and that subset was used to train and evaluate the model for each individual period.
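A minimal sketch of the feature selection step, here using scikit-learn's SequentialFeatureSelector as a stand-in for our forward stepwise procedure, is shown below; the number of retained features and the estimator settings are assumptions for illustration.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# X, y as in the earlier sketch: per-period features and the binned
# goal differential for one period (random stand-in data).
rng = np.random.default_rng(0)
X = rng.random((500, 19))
y = rng.integers(-1, 2, size=500)

selector = SequentialFeatureSelector(
    GradientBoostingClassifier(n_estimators=50, random_state=0),
    n_features_to_select=8,      # assumed cut-off, tuned per period
    direction="forward",
    cv=5)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
\end{verbatim}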
\section{Discussion}
The ability to predict the outcome for overtime is considered a significant contribution. Perhaps predicting a goal differential in terms of a number of goals was too arduous for the duration of this research. However, there are a number of reasons why the overtime prediction was as accurate as it was.
The overtime period is a win, lose, or draw scenario, so there are fewer classes of goal differential. Furthermore, the score is always tied going into overtime, so the conditions of the game are completely different from those of the second and third periods. It is understood that the first period starts off as a tie as well; however, the features for the game in the first period are not yet established, and neither is the dynamic of the game.
In terms of predicting goal differential, we may have had better results if we had attempted to calculate a goal differential and allowed non-integer values as a result. It is understood that you can't have a partial goal differential; however, a fractional goal differential extrapolated over an entire season could have a significant impact on a team's placement in the standings.
When performing the forward stepwise feature selection, it was interesting to see that different features proved more important when trying to predict the outcomes of each period. This can be interpreted to mean that, depending on the phase of the game, there are different strategies that can be employed to obtain a desired outcome. Furthermore, some features were important across all the periods, meaning that there are some general strategies that are independent of the game situation. For example, the penalty differential was an influential feature across all periods, meaning that it is a sound strategy to avoid taking penalties and putting your team at a disadvantage in all phases of the game. While penalties were important across all the periods, stats like blocked shots were most important in the third period, which can be attributed to a more defensive style of play used when a team is trying to preserve a lead. Likewise, the number of hits proved important in the first period and overtime periods, which likely means that using a more aggressive style of play to obtain possession of the puck leads to more positive outcomes in these periods. Leveraging this type of feature analysis could allow teams to adjust their strategies in real time to try to obtain the desired outcome.
\twocolumn[{%
\renewcommand\twocolumn[1][]{#1}%
\begin{thebibliography}{9}
\bibitem{penaltybox}
Predators Hockey, USA Today, 2015
\\\texttt{https://usatftw.files.wordpress.com/2015/11/ap\_jets\_predators\_hockey\_77540060.jpg}
\bibitem{scoringimpact}
Oliver Schulte, Zeyu Zhao, Mehrsan Javan, Philippe Desaulniers,
\textit{Apples-to-Apples: Clustering and Ranking NHL Players
Using Location Information and Scoring Impact}
\\\texttt{http://www.sloansportsconference.com/wp-content/uploads/2017/02/1625.pdf}
\bibitem{pregame}
Josh Weissbock and Diana Inkpen,
\textit{Combining Textual Pre-game Reports and Statistical Data for Predicting Success in the National Hockey League}
\\\texttt{https://link.springer.com/chapter/10.1007/978-3-319-06483-3\_22}
\bibitem{pagerank}
Nathan Swanson, Donald Koban, Patrick Brundage,
\textit{Predicting the NHL playoffs with PageRank}
\\\texttt{https://www.degruyter.com/view/journals/jqas/13/4/article-p131.xml}
\end{thebibliography}
}]
\end{document}
\documentclass{article}
\usepackage{arxiv}
\usepackage[utf8]{inputenc} % allow utf-8 input
\usepackage[T1]{fontenc} % use 8-bit T1 fonts
\usepackage{hyperref} % hyperlinks
\usepackage{url} % simple URL typesetting
\usepackage{booktabs} % professional-quality tables
\usepackage{amsfonts} % blackboard math symbols
\usepackage{nicefrac} % compact symbols for 1/2, etc.
\usepackage{microtype} % microtypography
\usepackage{lipsum} % Can be removed after putting your text content
\usepackage{graphicx}
\usepackage[numbers]{natbib}
\usepackage{doi}
\usepackage{amsmath}
\title{Lagrange-ng: The next generation of the DEC model}
\author{\href{https://orcid.org/0000-0002-9130-6878}{\includegraphics[scale=0.06]{orcid.pdf}\hspace{1mm}Ben Bettisworth}\\
Computational Molecular Evolution\\
Heidelberg Institute for Theoretical Studies\\
Heidelberg, Germany \\
\texttt{[email protected]} \\
%% examples of more authors
\And
Alexandros Stamatakis\\
Computational Molecular Evolution\\
Heidelberg Institute for Theoretical Studies\\
Heidelberg, Germany\\
\texttt{[email protected]} \\
}
\renewcommand{\shorttitle}{Lagrange-ng: The next generation of the DEC model}
%%% Add PDF metadata to help others organize their library
%%% Once the PDF is generated, you can check the metadata with
%%% $ pdfinfo template.pdf
\hypersetup{
pdftitle={Lagrange-ng Supplemental Material},
pdfsubject={q-bio.NC, q-bio.QM},
pdfauthor={Ben Bettisworth, Alexandros Stamatakis},
pdfkeywords={},
}
\begin{document}
\maketitle
\section{Background}
Computation of the likelihood of a particular set of parameters under the DEC model proceeds in a fashion similar to the
Felsenstein algorithm. In this algorithm, the computation starts from the tips and moves towards the root, storing
intermediate results in buffers called conditional likelihood vectors. Please see \cite{yang2006computational} for a
much more detailed explanation, including a discussion of the savings involved with such a scheme. What is
relevant for the discussion here is that the Felsenstein algorithm avoids excess computation by noticing that, at
certain points in the computation of a likelihood on a tree, the only relevant number is the likelihood
\textit{conditioned on the current state}.
\section{Methods and Algorithms}
\label{sec:methods}
Lagrange-ng utilizes a task-based parallelization scheme in which each node of the tree is assigned as a task. In order to
compute the results for a generic node of the tree, the results for its two children must first be computed. This involves
computing
\begin{enumerate}
\item The right and left rate matrices: $Q_r$, $Q_l$,
\item The right and left transition matrices: $P_r = e^{Q_r}$ and $P_l = e^{Q_l}$ respectively,
\item The result of the Markov process along the left and right branches: $w_r = P_r v_r$ and $w_l = P_l v_l$
respectively,
\item The weighted combination of $w_r$ and $w_l$, $v_t$.
\end{enumerate}
Together, these operations make up a single task for a worker. However, as can be seen above, a task can be
subdivided into smaller parts, which we will call operations. For the purposes of this paper, we will label each of the
operations as
\begin{enumerate}
\item Make Rate Matrix Operation,
\item Expm Operation,
\item Dispersion Operation, and
\item Split Operation.
\end{enumerate}
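The following sketch (Python, using \texttt{numpy} and \texttt{scipy}) illustrates these four operations for a single node. It is a simplification for illustration only: the toy rate matrix stands in for the actual DEC rate matrix, branch lengths are folded in as a scaling of the rate matrix, and the real split operation weights over the allowed cladogenetic scenarios rather than taking an elementwise product.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def make_rate_matrix(rate, n_states):
    """Operation 1 (sketch): a toy rate matrix with equal off-diagonal
    rates; the real DEC rate matrix is built from per-range dispersion
    and extinction parameters."""
    Q = np.full((n_states, n_states), rate, dtype=float)
    np.fill_diagonal(Q, -(n_states - 1) * rate)   # rows sum to zero
    return Q

def node_task(v_left, v_right, t_left, t_right, rate, n_states):
    """One task: the operations needed to produce a node's conditional
    likelihood vector from the vectors of its two children."""
    Q_left = make_rate_matrix(rate, n_states)     # 1. make rate matrices
    Q_right = make_rate_matrix(rate, n_states)
    P_left = expm(Q_left * t_left)                # 2. expm operations
    P_right = expm(Q_right * t_right)
    w_left = P_left @ v_left                      # 3. dispersion operations
    w_right = P_right @ v_right
    return w_left * w_right                       # 4. split (simplified)

v = np.array([0.0, 1.0, 0.0, 0.0])
print(node_task(v, v, t_left=1.0, t_right=1.0, rate=0.1, n_states=4))
\end{verbatim}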
In order to more easily support computation on other platforms, such as GPUs, we have separated the operations from the
memory buffers which are required to store the intermediate results needed for likelihood computation. For example, in
Figure~\ref{fig:simple-operations} we show an unspecified node and its associated operations. In each operation, there
is a set of indices which indicate the location of the assigned memory buffer. Additionally, they store the last
execution clock point, which is discussed later.
Each operation is typically too small to justify being its own separate task. Still, if the tree
topology and branch lengths are appropriate, these operations can be shared between two tasks. For example, consider the
tree topology in Figure~\ref{fig:ultra}. Here, two of the branches share the same length, $1.0$, and so if they also
share a rate matrix, then the result of this computation will be the same. Therefore, the computation of the likelihood
of this tree can be accelerated by computing $e^Q$ only once, and saving the result.
In order to avoid re-computation of the same result, Lagrange-ng allows for the sharing of operations between tasks.
However, when computing with multiple threads, this introduces the possibility of computing with stale values from
dependent operations when performing successive evaluations of the likelihood with changed model parameters, as in most
optimization routines.
To avoid stale data, we use a clock based method to enforce a total ordering on the computation of operations. Readers
familiar with vector clocks will recognize this as a vector clock with the number of distributed elements set to one.
After the evaluation of each operation, a time is recorded in the evaluated operation, and the clock incremented. When
we wish to know if an operation is ready, all that needs to be done is to check if the clocks of its immediate dependent
operations show a larger value. If this is the case, then the dependent operations have already been evaluated, and the
operation is ready.
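A minimal sketch of this readiness check follows (in Python); the actual implementation additionally has to handle thread synchronization and the numerical work itself, which are omitted here.
\begin{verbatim}
class Clock:
    """Single, monotonically increasing evaluation counter."""
    def __init__(self):
        self.time = 0
    def tick(self):
        self.time += 1
        return self.time

class Operation:
    """An operation records the clock value of its last evaluation; it is
    ready when every operation it depends on carries a newer value."""
    def __init__(self, dependencies=()):
        self.dependencies = list(dependencies)
        self.last_execution = 0
    def ready(self):
        return all(dep.last_execution > self.last_execution
                   for dep in self.dependencies)
    def evaluate(self, clock):
        # ... perform the numerical work for this operation here ...
        self.last_execution = clock.tick()

clock = Clock()
expm_op = Operation()
dispersion_op = Operation(dependencies=[expm_op])
expm_op.evaluate(clock)
print(dispersion_op.ready())   # True: its dependency is newer than itself
\end{verbatim}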
By carefully dividing the tasks into operations, merging identical operations between tasks, and enforcing a total order
on the computation of operations, we can implement an extremely effective task based parallelization scheme.
\begin{figure}
\centering
\includegraphics[width=.5\linewidth]{figures/simple-tree-ultrametric.pdf}
\caption{A simple ultrametric tree.}
\label{fig:ultra}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/simple-operations.pdf}
\caption{An example set of operations for a generic, unspecified node.}
\label{fig:simple-operations}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/full-operations.pdf}
\caption{Tree in Figure~\ref{fig:ultra} decomposed into the operations used to compute the likelihood.}
\label{fig:full-operations}
\end{figure}
\section{Comparing Distributions on Trees}
When comparing the results of Lagrange-ng and Lagrange, it can be difficult to know whether or not the results are
``close enough'' to be the same. By way of example, scientific software will typically, instead of asking if two numbers
are exactly equal to one another, ask if they are closer than some small number called $\epsilon$. This is due to the
limitations in the IEEE 754 format, which is how real numbers are encoded in nearly all modern computers. A more
complete explanation of this phenomenon is outside the scope of this paper, and is extensively discussed in any textbook
on numerical computing, but for our purposes it is sufficient to say that differences in the order of operations will
generally yield different results, even when the results should be algebraically equal.
Lagrange-ng does not escape this limitation of the IEEE 754 standard. So, in order to fairly compare results, we
must take this into account. One can examine each distribution individually by eye and compare the distributions by
hand, but this is time consuming and error prone. Therefore, it is desirable to construct a metric between distributions
on trees in order to automatically compare the results.
Consider an example where we have two sets of ancestral range distributions computed by differing methods using the
same tree. Let us call those distributions $d_1$ and $d_2$. The distribution for node $n$ is then represented by the
notation $d_i(n)$. Since the tree is shared between the two distributions, we can match the node-level distributions in a
one-to-one mapping between the two sets of distributions. We will index the individual elements of the distribution
either by the list of region names or by the binary notation for regions. For instance, if we have a distribution over
regions $A, B,$ and $C$, then an entry of the distribution for node $n$ might be indexed as $d_1(n, AB)$. Equivalently, we
can use a binary notation to write $d_1(n, 110)$.
In this example, the first thought would be to treat $d_1(n)$ and $d_2(n)$ as vectors and simply compute the cosine
distance between the distributions. This will indeed produce a metric, but it has some undesirable qualities. For
example, suppose we have a distribution over 5 regions: $A, B, C, D$, and $E$, where $d_1(n, AB) = 1.0$. If we use the
cosine distance to compute the distance between $d_1(n)$ and $d_2(n)$ where $d_2(n, ABC) = 1.0$, then the resulting
distance will be 1.0, as the vectors are orthogonal. However, this is the same distance as if instead $d_2(n, CDE) =
1.0$, as it is also orthogonal to $d_1(n)$. But there is a very real sense in which the prediction $AB$ is much closer
to $ABC$ than to $CDE$. In the first case, the predicted ranges differ by only one region, whereas in the second case, the
predicted ranges differ by \textit{every} region.
Since the cosine distance doesn't account for the available transitions between states in the DEC model, we should pick
a distance that does. To do this, we first embed the two distributions into a hypercube graph. A hypercube is a graph
with $2^n$ nodes in which each node is connected to $n$ other nodes (see Figure~\ref{fig:hypercube} for an example).
Importantly, the edges of a hypercube graph correspond to the valid transitions between states in the DEC
model\footnotemark.
\footnotetext{Some readers might have noticed in Figure~\ref{fig:hypercube} that the edges don't indicate a direction,
which means that transitions \textit{out} of the extinct state appear to be valid. While conceptually this is a
problem, for the purposes of computing a distance it will not affect the results, as taking a path through the
extinction state is equivalent to taking any other path of equal length, of which there necessarily is one due to the
nature of the hypercube.}
Once the distributions are embedded in a hypercube graph, we can compute the distance between them as the
``amount of effort required to turn one distribution into the other''. This is the Wasserstein metric, also known as the
Earthmover's distance, and it is what we base our distance on. Suppose we have the distributions $D1$ and $D2$ from
Figure~\ref{fig:distributions-example}. In order to turn $D1$ into $D2$, we need to find a way to move $0.25$ ``earth''
from node \texttt{10} to node \texttt{01}. Two example solutions can be seen in Figure~\ref{fig:earthmovers-example}.
While the distance computation in Figure~\ref{fig:earthmovers-example} is straightforward, in general finding the
minimum distance requires the use of an optimization routine.
To find the minimum distance, we elected to express the problem as a linear programming problem. Specifically, we solve
\begin{align}
& \min_x \textstyle \sum_i x_i \nonumber \\
\text{such } & \text{that } Ax = b \\
& x_i \geq 0 \nonumber
\end{align}
Where $A$ is the constraint matrix induced by the hypercube, and $b = d_1(n) - d_2(n)$, i.e., the difference between the two
distributions. If we have a distribution with $s$ states, then we can produce $A$ by creating $s-1$ rows and $s(2^{s}-1)$
columns. The rows represent the nodes of the graph, and the columns represent the edges of the graph, split in two for
each direction of flow. The entries of $A$ are defined as:
\begin{equation}
A(n, e) = \begin{cases}
1 & \text{If } e \text{ points to } n \\
-1 & \text{If } e \text{ points away from } n \\
0 & \text{Otherwise.}
\end{cases}
\end{equation}
Please note that there are only $s-1$ rows. This is because the final row is expressible as a linear
combination of the previous rows, and so adds no constraints to the problem. Additionally, we choose to suppress
edges leading away from the extinct state, to be consistent with the model. By suppressing these edges, we remove $s$
columns from the matrix as well.
As an example, the distance between the distributions in Figure~\ref{fig:distributions-example} can be computed with the
matrix
\begin{equation*}
A = \begin{pmatrix}
1 & 0 & 1 & 0 & 0 & 0 \\
-1 & -1 & 0 & 0 & 1 & 0 \\
0 & 0 & -1 & -1 & 0 & 1 \\
\end{pmatrix}
\end{equation*}
and the vector
\begin{equation*}
b = \begin{pmatrix}
0 \\
0.25 \\
0.25 \\
\end{pmatrix}.
\end{equation*}
Finally, in order to normalize the distance, we divide the result by the maximum possible path length, which happens to be
the number of \textit{regions}.
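As an illustration, the following sketch solves this small linear program for the two-region example above with \texttt{scipy.optimize.linprog}; this is only an example of the formulation and is not necessarily the solver used inside Lagrange-ng.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Constraint matrix and right-hand side for the two-region example.
A = np.array([[ 1,  0,  1,  0, 0, 0],
              [-1, -1,  0,  0, 1, 0],
              [ 0,  0, -1, -1, 0, 1]], dtype=float)
b = np.array([0.0, 0.25, 0.25])

# Minimize the total flow subject to Ax = b, x >= 0.
result = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=b, bounds=(0, None))

n_regions = 2
print(result.fun)              # total flow: 0.5, as in the figure
print(result.fun / n_regions)  # normalized distance: 0.25
\end{verbatim}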
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/hypercube-distribution.pdf}
\end{center}
\caption{An example 3 dimensional hypercube with associated region names in binary notation.}
\label{fig:hypercube}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/example-distributions.pdf}
\end{center}
\caption{Two example distributions, displayed as 2d-hypercubes (squares).}
\label{fig:distributions-example}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/example-earthmovers.pdf}
\end{center}
\caption{Distributions from Figure~\ref{fig:distributions-example} with the Wasserstein metric applied. On the left,
we choose to move $0.25$ units of ``mass'' from node \texttt{10} to node \texttt{01}. This requires 2 transitions,
shown in blue, for a total of $0.5$ distance. On the right, we choose to transfer mass from node \texttt{01} to node
\texttt{10}, which yields the same result.}
\label{fig:earthmovers-example}
\end{figure}
\section{Experiments}
\label{sec:experiments}
\subsection{Investigating the Optimal Threading Configuration}
Because Lagrange-ng implements both coarse- and fine-grained parallelization, we need to investigate the optimal
threading configuration for a given dataset. To this end, we generated datasets with 50 or 100 taxa and 5, 6, 7, or 8
regions. This yielded a total of 8 dataset configurations. In addition to the dataset configurations, we also generated
the 6 threading configurations with a total of 32 threads. For each of these dataset and threading
configurations, we ran 100 trials and recorded the times. The results from these runs can be seen in
Figure~\ref{fig:threading-configurations}.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/threading_violin.png}
\end{center}
\caption{Plot of the threading configurations on various dataset sizes. 100 datasets were generated for each combination of
taxa, region, and threading configuration parameters. Each dataset was generated randomly, similar to how datasets are
constructed in the rest of this work.}
\label{fig:threading-configurations}
\end{figure}
\subsection{Determining the parallel efficiency of Lagrange-ng}
Given the results from the previous experiment to determine the optimal threading configuration, we chose to determine
the parallel efficiency of Lagrange-ng using only workers. This is to say, we only increased the number of threads
allocated to the coarse-grained tasks. To this end we tested Lagrange-ng with 1, 4, 8, 16, and 32 threads by generating
100 datasets for each threading configuration. We did this for datasets with 6 regions and 100 or 500 taxa. We
computed the mean of the execution times for the runs with a single thread, and used this value to compute the
realized speedups.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/peff_100_taxa.png}
\end{center}
\caption{Parallel efficiency plot for datasets with 100 taxa and 6 regions. Please note the log-log scaling. The
actual values plotted are 3.0, 4.1, 4.7, 4.8 for 4, 8, 16, 32 threads, respectively. The ratio of the realized
speedup to the optimal speedup is 0.742873, 0.513696, 0.295814, 0.149044 for 4, 8, 16 and 32 threads respectively.}
\label{fig:peff-100-taxa}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/peff_500_taxa.png}
\end{center}
\caption{Parallel efficiency plot for datasets with 500 taxa and 6 regions. Please note the log-log scaling. The
actual values plotted are 3.3, 5.5, 7.0, 9.3 for 4, 8, 16, and 32 threads, respectively. The ratio of the realized
speedup to the optimal speedup is 0.83, 0.69, 0.49, 0.29 for 4, 8, 16, and 32 threads, respectively.}
\label{fig:peff-500-taxa}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Regarding the optimal threading configuration, Figure~\ref{fig:threading-configurations} shows that allocating all the
threads to workers is normally optimal. Occasionally, allocating 2 threads per worker is slightly faster. This is
slightly surprising, and might indicate that the linear algebra library used has issues with lock contention. If this is
the case, then changing the library might improve results. However, there are still sequential parts of the likelihood
computation which do not benefit from the fine-grained parallelization, so this improvement will have a limit.
The threading efficiency of Lagrange-ng's coarse-grained parallelization ranges between 0.74 and 0.14. This is
expected, as the method of parallelization is based on the tree topology. Child nodes must be evaluated before parent
nodes can be evaluated, which leads to dependencies that prevent perfect parallel efficiency from being achieved.
This means that, for every likelihood evaluation, there is a period where there is less work available than threads. In
this case the threads idle, and do no meaningful work. However, we expect that as the number of taxa grows, the
efficiency of this method should increase, as the proportion of time spent in this ``work starved'' period is lower.
Results from the threading configuration experiments suggest that this efficiency can be improved further on some
datasets by allocating 2 threads per worker.
\bibliographystyle{acm}
\bibliography{references}
\end{document}
\subsection{Heteroskedasticity and Autocorrelation (HAC) adjusted standard errors}
{
\begin{doublespacing}
\begin{flushleft}
\section{Contributions to knowledge}
This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge. This is a section of your contribution to knowledge.
\end{flushleft}
\end{doublespacing}
}
\chapter{Concluding Remarks}
\label{chap:conculusion}
\section{Conclusion}
In this study, we built a complete, reusable pipeline composed of preprocessing, training, evaluation and explanation to detect dementia from raw MRI scans. The models obtained by training on the OASIS dataset did not attain state-of-the-art performance, but they have the advantage of providing not only a diagnosis but also an explanation of which region of the MRI led the model to such a prediction. The model explanations we obtained coincide with the consequences of dementia that specialists observe. In particular, FullGrad was able to spot the importance of the hippocampus in the input image when predicting dementia.
In addition to that, we built a responsive and easy-to-use web application that allows any clinician without a machine learning background to quickly gain insight into the prediction our model made, thus reducing the black-box image from which machine learning suffers in the field.
In conclusion, we show with this thesis that deep learning techniques have reached a sufficient level of technical maturity to provide interpretable predictions that can be integrated into research activities and, soon, into clinical practice.
\section{Future Works}
Despite the already interesting results and the helpful insight into dementia gained from the visualizations, this project was conducted using only a small amount of labeled data due to the difficulty of accessing a custom dataset. To obtain even more interesting results, the entire pipeline would have to be applied to a much larger dataset for which labels are provided.
We would also like to try the pipeline with other modalities instead of using the T1 only. For example, it has been shown that the iron density inside a brain is correlated with dementia\footnote{\href{https://www.medicalnewstoday.com/articles/measuring-iron-in-the-brain-can-point-to-dementia}{https://www.medicalnewstoday.com/articles/measuring-iron-in-the-brain-can-point-to-dementia}}. In fact, we think that the cause of dementia might be invisible in a T1-weighted image and that we might currently only be looking at the consequences.
In section~\ref{sec:OASIS} we decided to label all the scans of a patient as having dementia if the patient was diagnosed with dementia at their latest check-up. In future work, we would like to analyze the output of the model on their early scans and check whether the model is able to detect something.
During the project we had the opportunity to show our results to one clinician. A next step would be to present them to more clinicians, gather their feedback, and further improve the pipeline according to their needs.
Section~\ref{sec:model_out_exp} highlights intriguing results about the right hippocampus; further analysis should be performed to better understand the reason for this higher attention on the right side compared to the left.
% This is samplepaper.tex, a sample chapter demonstrating the
% LLNCS macro package for Springer Computer Science proceedings;
% Version 2.20 of 2017/10/04
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% HEADER and TITLE stuffs. (Sep 7, 2018)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[runningheads]{llncs}
%
\usepackage{graphicx}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
% \renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
%
\title{Synopsis-based Sampling for Cardinality Estimation of SPARQL queries}
%
%\titlerunning{Abbreviated paper title}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
% \author{Lei Gai\inst{1}\orcidID{0000-1111-2222-3333} \and
% Tengjiao Wang\inst{1}\orcidID{1111-2222-3333-4444} \and
% Wei Chen\inst{1}\orcidID{2222--3333-4444-5555}}
\author{Lei Gai\inst{1} \and
Tengjiao Wang\inst{1} \and
Wei Chen\inst{1}}
%
\authorrunning{L. Gai et al.}
% First names are abbreviated in the running head.
% If there are more than two authors, 'et al.' is used.
%
% \institute{Princeton University, Princeton NJ 08544, USA \and
% Springer Heidelberg, Tiergartenstr. 17, 69121 Heidelberg, Germany
% \email{[email protected]}\\
% \url{http://www.springer.com/gp/computer-science/lncs} \and
% ABC Institute, Rupert-Karls-University Heidelberg, Heidelberg, Germany\\
% \email{\{abc,lncs\}@uni-heidelberg.de}}
\institute{Peking University, P.R.China \email{\{lei.gai, tjwang, pekingchenwei\}@pku.edu.cn}}
%
\maketitle % typeset the header of the contribution
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% ABSTRACT and KEYWORDS
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
The abstract
\keywords{First keyword \and Second keyword \and Another keyword.}
\end{abstract}
%
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% INTRODUCTION (Sep 7, 2018)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%
SPARQL is a formal way to structurally express users' analytical and semantic tasks. For many of today's tasks, users often need to pose ad hoc complex queries involving huge tables.
The research community has long realized the need for accurate cardinality estimation in query optimization. Unfortunately, despite many eminent works on this topic, it is still not a well-solved problem considering the trade-offs between several factors.
Two lines of work can be seen: synopsis-based approaches and sampling-based approaches.
Due to the multiplicative nature of joins, errors in cardinality estimation typically propagate exponentially through a larger query plan (\underline{cite SIGMOD91, ioannidis}). This means even a small improvement can dramatically improve the quality of the estimates available to the query optimizer, and thus boost the overall query performance (\underline{cite VLDB2015, how good are query optimizers, really?})
Joins are expensive, especially on large data with complicated correlations. The sample operator cannot be pushed through a join, i.e., $sample(R \bowtie S) \neq sample(R) \bowtie sample(S)$. This means a sample is not reusable for the next sampling. (cite SIGMOD2018 random sampling over joins revisited, CIDR2017, cardinality estimation done right-index-based join sampling).
The optimizer's cost model relies heavily upon the underlying cardinality model.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% PRELIMINARIES (Sep 8, 2018)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Preliminaries}
\subsection{Sequential Sampling}
\subsection{multi-way join sampling}
In a \textbf{simple random sample} of size $k$, each element in the underlying population is picked with equal probability (uniform), and the procedure is repeated $k$ times independently (independent).
The fundamental challenge for the problem is that the sampling operation cannot be pushed down through a join operator.
To obtain uniform and independent samples, two methods were proposed (\underline{cite Chaudhuri}). One is to reject samples from $sample(R) \bowtie sample(S)$ with appropriate probability. The other takes samples from $R$ non-uniformly, guided by statistical information on $S$. (However, they only considered the problem over two-relation joins.)
(\underline{cite SIGMOD 2018}) gives a generic framework to handle multi-way joins, with selection predicates. It leverages the join size upper bound (cite AGM method), depending on what prior information is available about the underlying data, and offers different trade-offs between sample production latency and throughput.
common join attribute, join predicate.
If the sample size is $n$ and the dataset size is $N$, SRS generates a sample by taking every $k$-th data item ($k=\lceil\frac{N}{n}\rceil$ ).
\subsection{cardinality estimation}
(\underline{cite VLDB2015, how good are query optimzier, really?})
\begin{itemize}
\item \textbf{estimate for triple pattern (base table)}
\item \textbf{estimate for Joins}
\end{itemize}
\subsection{self-join, sampling}
(AGMS) Alon, N., Gibbons, P.B., Matias, Y., Szegedy, M. Tracking
Join and Self-Join Sizes in Limited Storage. In Proceedings
of the eighteenth ACM SIGMOD-SIGACT-SIGART
symposium on Principles of database systems (PODS '99),
ACM Press, New York, NY, 1999, 10-20.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% MODELING (Sep 8, 2018)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Data Model}
Idea: use an oracle to tell when a key is heavy [Kumar Xu 06], and adjust the sampling probability accordingly.
\begin{itemize}
\item A ``sketch'' data structure can play the role of the oracle: like a hash table with collisions, it tracks approximate frequencies, e.g., (Counting) Bloom Filters or the Count-Min Sketch.
\item Track the probability with which each key is sampled, and use Horvitz-Thompson (HT) estimators.
\item Set the probability of sampling a key with (estimated) weight $w$ to $1/(1 + \epsilon w)$ for a parameter $\epsilon$: the probability decreases as $w$ increases.
\item Decreasing $\epsilon$ improves accuracy but increases the sample size.
\end{itemize}
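As a rough sketch of this idea (not the implementation of any cited system), the following Python code samples each key with probability $1/(1+\epsilon w)$, using an exact counter in place of the sketch oracle, and uses the recorded inclusion probabilities for a Horvitz-Thompson estimate of the stream size.
\begin{verbatim}
import random
from collections import Counter

class FrequencyOracle:
    """Stand-in for a sketch (e.g. Count-Min) that would return
    approximate frequencies; an exact Counter is used for simplicity."""
    def __init__(self):
        self.counts = Counter()
    def update(self, key):
        self.counts[key] += 1
    def estimate(self, key):
        return self.counts[key]

def weighted_sample(stream, eps=0.01, seed=0):
    """Include each item with probability 1/(1 + eps * w), where w is the
    oracle's current weight estimate for its key, and remember that
    probability for Horvitz-Thompson estimation."""
    random.seed(seed)
    oracle = FrequencyOracle()
    sample = []
    for key in stream:
        oracle.update(key)
        p = 1.0 / (1.0 + eps * oracle.estimate(key))
        if random.random() < p:
            sample.append((key, p))
    return sample

stream = ["a"] * 500 + ["b"] * 50 + list("cdefg") * 10
sample = weighted_sample(stream)
# Horvitz-Thompson estimate of the stream length from the sample.
print(sum(1.0 / p for _, p in sample))
\end{verbatim}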
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% SAMPLING (Sep 11, 2018)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Sampling}
For each tuple in $R$, choose it for the sample with probability $f$, independently of other tuples. In a real-world DB, examining each tuple in $R$ is prohibitively costly. We can easily convert this to sampling with/without replacement (SWoR): sample a slightly larger fraction $\hat{f}$ to ensure that we get at least an $f$-fraction, as can
be shown by the Chernoff bound, and then reject an appropriate number of the samples to ensure that we get exactly an $f$-fraction.
Given an RDF dataset $\mathcal{D}$ of $d$ triples, and a conjunctive SPARQL query with $k$ triple patterns $TP=\{P_i\}$, $i=1,\ldots,k$, we use $\sigma_{P_{i}}(\mathcal{D})$ to represent the set of triples that match triple pattern $P_{i}$. For two triple patterns $P_{1}$ and $P_{2}$ with a join attribute $A$, let $n_1(v)$ and $n_2(v)$ be the number of triples in $\sigma_{P_{1}}(\mathcal{D})$ and $\sigma_{P_{2}}(\mathcal{D})$, respectively, that contain value $v$ in $A$. Clearly, $\sum_{v \in A} n_1(v) = |\sigma_{P_{1}}(\mathcal{D})|$, and $\sum_{v \in A} n_2(v) = |\sigma_{P_{2}}(\mathcal{D})|$. The cardinality of $P_{1} \bowtie P_{2}$ can be estimated as
\begin{equation}
n = |P_{1} \bowtie P_{2}| = \sum\limits_{v \in A} n_1(v)\cdot n_2(v)
\end{equation}
For $n_i(v)$, if there is a synopsis available that captures the correlation among triple elements, such as a multi-dimensional histogram or an index like RDF-3X, $n_i(v)$ can be estimated directly or even obtained exactly; otherwise, in the more common case, we must rely on applying the \textit{independence assumption} for S, P and O.
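A minimal sketch of this estimator, assuming the join-attribute values matched by each triple pattern are available as plain lists, is:
\begin{verbatim}
from collections import Counter

def estimate_join_cardinality(values_p1, values_p2):
    """Estimate |P1 join P2| on a shared join attribute A as
    sum over v of n1(v) * n2(v)."""
    n1 = Counter(values_p1)
    n2 = Counter(values_p2)
    return sum(count * n2[value] for value, count in n1.items())

# Toy example: join-attribute values in sigma_P1(D) and sigma_P2(D).
print(estimate_join_cardinality(["x", "x", "y"], ["x", "y", "y", "z"]))
# 2*1 + 1*2 = 4
\end{verbatim}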
To compute \textsc{Sampling}$(TR_{1} \bowtie TR_{2}, f)$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% BIB: generate by generatebibitem.tex in 'genbib' sub-folder.
% Copy the content of appropriate .bbl content and paste there.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% ---- Bibliography ----
%
% BibTeX users should specify bibliography style 'splncs04'.
% References will then be sorted and formatted in the correct style.
%
% \bibliographystyle{splncs04}
% \bibliography{mybibliography}
%
\begin{thebibliography}{8}
\end{thebibliography}
\end{document}
% $Id$
% ObitTalk User Documentation
%
\documentclass[11pt]{report}
\usepackage{psfig}
\usepackage{graphicx}
\usepackage{verbatim}
\font\ttlfont=cmbxsl10 at 30pt
\font\secfont=cmbxsl10 at 15pt
\raggedbottom
\topmargin=-.9cm
\topskip=-2.5cm
\textheight=9.0in
\textwidth=6.5in
\oddsidemargin=0cm
\evensidemargin=-1cm
\parindent=0.25in
\topsep=0.5ex
\itemsep=0.5ex
\begin{document}
\setcounter{chapter}{1}
%\setcounter{section}{1}
% Title page
\topskip 1.5in
\centerline{\ttlfont ObitTalk User Documentation }
\vskip 1cm
\centerline{\LARGE\it Obit: Merx mollis mortibus nuper}
\vskip 3cm
\centerline{\secfont Version: 1.6 \today}
\vskip 1cm
% Abstract
\centerline{\secfont Abstract}
This document describes the ObitTalk interface to AIPS and Obit
software.
ObitTalk is a python package which allows running AIPS and Obit tasks
and scripts and direct access to astronomical data using Obit.
Obit implements multiple data types, currently AIPS and FITS data.
Most of the material in this document is also available in the on-line
documentation.
This document assumes familiarity with python and with AIPS and radio
astronomical techniques.
Differences from AIPS/POPS usage are explained.
\clearpage
\topskip 0in
% Table of contents
\newpage
\tableofcontents
%\listoffigures
%\listoftables
\newpage
\section {Introduction}
ObitTalk is derived from the ParselTongue project at JIVE and
provides a scripting and interactive command line interfaces to
astronomical data and processing software. In particular, AIPS and
FITS data structures as used in the AIPS and Obit software packages
are supported as well as AIPS tasks and Obit tasks and other python
enabled software.
Obit is intended as an environment optimized for the development and
evaluation of new data processing algorithms.
As such, it is not a full featured data processing system.
However, with the interoperability of Obit and AIPS, the ObitTalk
interface to both Obit and AIPS does present the user with a full
featured data processing environment for radio interferometry.
This utility package facilitates the access to data and images from
python as well as various interactive features.
The details of the functions in this package are given later.
Many of these functions have equivalents in POPS although adapted to
python.
AIPS tasks will use the AIPS XAS TV which must be started separately.
Obit tasks and ObitTalk use the ObitView image display and/or the
ObitMess task message server, each of which must also be started
independently.
If AIPS is not available, ObitTalk still can work using FITS or AIPS
files.
ObitTalk can start tasks or scripts either locally or on a remote
machine which has an ObitTalkServer process running.
Some remote data access is supported through the AIPSUVData,
AIPSImage, FITSUVData and FITSImage classes.
Currently other python functions only work interactively locally or
remotely using the ObitScript class.
Tasks, scripts and more detailed access to and manipulation of data
are available. These are described briefly below, along with methods of
obtaining more detailed descriptions.
This document contains both tutorial and reference material.
New users should read the first few sections; later sections are
mostly for reference.
\section {Obtaining Software}
Obit and related software is available from
http://www.cv.nrao.edu/$\sim$bcotton/Obit.html.
%At present stable releases are issued roughly yearly.
For up to date versions of the software the
anonymous SVN interface
(https://svn.cv.nrao.edu/view/ObitInstall/) is recommended.
Binary distributions for Linux are also available.
Obit depends heavily on third party software which is described on
the Obit page.
Support of the Obit package is extremely limited.
The components of the Obit/ObitTalk package are:
\begin{itemize}
\item Obit\\
Basic Obit package and the support for radio interferometry
\item ObitSD\\
Obit ``On The Fly'' (OTF) single dish imaging package.
\item ObitView\\
Image display used by Obit.
\item ObitMess\\
Task message display server used by ObitTalk.
\item ObitTalk\\
Scripting and interactive interface to Obit software.
\end{itemize}
These software packages come with installation instructions and config
scripts to build them.
A simplified binary installation for Linux systems is available at
http://svn.cv.nrao.edu/obit/ under linux\_distro.
The binary distribution is a tarball that can be unpacked, added to
your \$PATH and directly executed.
\section {Starting ObitTalk}
The operation of ObitTalk is influenced by the values of a number of
environment variables to specify the locations of directories with
python modules, data directories and Obit and AIPS task documentation
and executable files.
With the exception of the AIPS startup values, some of these are set
to standard values by the ObitTalk startup script; other environment
variables may need to be set using the script setup.sh or setup.csh
(depending on your shell) generated by the Obit installation from
source procedure.
Obit related values may be set by the ObitTalk script used by the
binary installation.
If the AIPS shell variables AIPS\_ROOT and AIPS\_VERSION are
previously set by an AIPS startup script no further action needs to be
taken to use AIPS.
If you wish to use python modules not in one of the standard locations,
set PYTHONPATH to include the directories.
For example, using tcsh and setting PYTHONPATH to use modules in both
directories pointed to by myPython1 and myPython2:
\begin{verbatim}
% setenv PYTHONPATH "$myPython1":"$myPython2"
\end{verbatim}
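For users of bash or another Bourne--type shell, the equivalent
setting would be:
\begin{verbatim}
% export PYTHONPATH="$myPython1":"$myPython2"
\end{verbatim}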
Custom setups can be implemented using an ObitTalk startup script as
discussed below.
If you wish to use the ObitView image display or the ObitMess task
message window, you can start them before ObitTalk.
If ObitView is in your path:
\begin{verbatim}
% ObitView &
\end{verbatim}
will start the display server.
If this fails to start the display, see the discussion of ObitView
below.
ObitMess can be started in the same fashion; see sections \ref{ObitView} and
\ref{ObitMess} for more details.
Then, if the script ObitTalk is in your path:
\begin{verbatim}
% ObitTalk [scriptname]
\end{verbatim}
should start ObitTalk.
%Likewise, for asynchronous display of task messages the ObitMess
%server must also be started.
If the environment variables AIPS\_ROOT and AIPS\_VERSION are
defined, or an .obitrc.py startup script file is found defining them,
ObitTalk will make AIPS tasks and data available.
If the optional scriptname is given, then the python interpreter will
do some simple AIPS initialization and execute the python script
``scriptname''.
If no script is specified then ObitTalk will ask for your AIPS number
and do its AIPS initialization (if AIPS is available) and go into an
interactive python session.
Outside of the NRAO, AIPS user numbers are relatively arbitrary and
can be used to separate different projects.
Note: AIPS number 1 is a bad idea if you plan on using AIPS/POPS.
The python prompt is: \begin{verbatim}>>> \end{verbatim}
\subsection{AIPS Setup}
Obit can use AIPS format data whether or not AIPS is available; most
operations involving visibility data are more efficient using AIPS
than FITS format.
In order to use AIPS format, AIPS directories are needed.
Purely Obit use of AIPS format places no restrictions on these
directories but AIPS use requires a SPACE file.
To create a directory for AIPS data in /export/data/DATA\_1:
\begin{verbatim}
% mkdir /export/data/DATA_1
% touch /export/data/DATA_1/SPACE
\end{verbatim}
The names of the AIPS directories must be provided either using the
AIPS or Obit startup scripts.
In order to run AIPS tasks, the location of the AIPS help and
executable files needs to be specified; these are under
\$AIPS\_ROOT/\$AIPS\_VERSION.
This definition can be done in either standard AIPS setup scripts or
in the Obit startup script (see next section).
Furthermore, AIPS tasks read their parameters from a file named
\$DA00/TDD000004;.
The path \$DA00 needs to be provided either by the AIPS or the Obit
Startup scripts.
\subsection{Startup Script}
When ObitTalk starts, it looks for a startup script named .obitrc.py in
either the user's home directory or the current working directory (the
latter has priority).
If found, it is executed as python code.
This can be used to define the AIPS and Obit setups and can be used in
the absence of AIPS startup scripts.
The following startup script fragment shows how to define AIPS tasks
and data, Obit tasks and FITS data directories.
This can be used to define both local and remote data directories; see
section \ref{remote_data} for a discussion of defining data
directories on remote systems.
Example startup scripts can be found in
\$OBIT/share/scripts/obitrc.py, /usr/share/obit/scripts/obitrc.py, or
dot.obitrc.py in the top level of the binary distribution.
\begin{verbatim}
# Startup script
print "Executing startup script "
import ObitTalkUtil
###################### Define ###################################
# Define AIPS_ROOT and AIPS_VERSION for access to AIPS Software
AIPS_ROOT = "/export/data_1/users/aips/"
AIPS_VERSION = "31DEC06/"
# Define directory for AIPS TDD000004; file
DA00 = "/export/data/aips/DA00/SMEAGLE/"
# Define OBIT_EXEC for access to Obit Software
OBIT_EXEC = None # (def /usr/lib/obit/bin)
OBIT_EXEC = "/export/data_1/users/bcotton/Software.dir/SVN/ObitInstall/ObitSystem/Obit/"
# Define AIPS directories (URL, disk name)
# URL = None for local disks
aipsdirs = [ \
(None, "/export/data_1/aips/DATA/SMEAGLE_1"), \
(None, "/export/data_1/aips/DATA/SMEAGLE_2"), \
(None, "/export/data_1/aips/DATA/SMEAGLE_3"), \
(None, "/export/data_2/aips/DATA/SMEAGLE_4"), \
(None, "/export/data_2/aips/DATA/SMEAGLE_5"), \
(None, "/export/data_2/aips/DATA/SMEAGLE_6"), \
(None, "/export/data_2/bcotton/SMEAGLE_7")]
# Define FITS directories (URL, disk name)
# URL = None for local disks
fitsdirs = [ \
(None, "/export/data_1/users/bcotton/Software.dir/AIPS/FITS")]
# setup environment
ObitTalkUtil.SetEnviron(AIPS_ROOT=AIPS_ROOT, AIPS_VERSION=AIPS_VERSION, \
OBIT_EXEC=OBIT_EXEC, DA00=DA00, \
aipsdirs=aipsdirs, fitsdirs=fitsdirs)
# List directories
ObitTalkUtil.ListAIPSDirs()
ObitTalkUtil.ListFITSDirs()
# Any other customization goes here
\end{verbatim}
\section {Object--orientation for POPS users}
Many of the differences between AIPS/POPS and ObitTalk are because the
latter is generally object--oriented.
``Object--oriented'' in this context means little more than that
variables can be more substantial than the floats, strings and simple
arrays of POPS (although these also exist).
A python (hence ObitTalk) variable is a relatively arbitrary thing and
can be a scalar number, string, an array or list of variables or the
interface to a dataset such as an image or uv data.
In ObitTalk, the interface to a data set is assigned to a variable and
this variable is used to specify operations in a way not very
different from INNAME, INCLASS, INDISK, INSEQ ... are used to specify
a dataset in POPS.
This allows having an arbitrary number of such data objects while
avoiding the conflicts in usage of INNAME... in POPS.
The usual object--oriented syntax is that ``class methods'' (functions
which can operate on an object) are invoked like this:
\begin{verbatim}
>>> object.function(arguments)
\end{verbatim}
where ``object'' is the python object, ``function'' is the function
name and ``arguments'' are any additional arguments; the object itself
is implicitly an argument, by convention called ``self'' in python.
In python documentation of function interfaces, ``self'' appears as the
first argument of the function although it is invoked as shown above.
As a convenience to POPS users many of these functions are also
implemented in the more traditional procedural form, for instance, the
following produce the same result:
\begin{verbatim}
>>> myTask.explain()
\end{verbatim}
or
\begin{verbatim}
>>> explain(myTask)
\end{verbatim}
\subsection {Data objects}
ObitTalk uses Obit to access the external (i.e. disk) representations
of datasets and Obit allows multiple ``native'' data representations.
At present AIPS and FITS (as practiced by AIPS) external
representations are supported.
(Note, the old style random groups FITS for UV data as written by AIPS
task FITTP is NOT supported but the tables format written by FITAB is.)
The distinction between external representations is largely hidden
except for the process of creating (``instantiation'' in computerese)
the interface object in which its representation must be specified.
For example, to create an interface object to an AIPS image described
by the strings Aname (AIPS Name), Aclass (AIPS class), and integers
disk (AIPS disk number) and seq (AIPS sequence number):
\begin{verbatim}
>>> myImage=Image.newPAImage("myImage", Aname, Aclass, disk, seq, exists, err)
\end{verbatim}
where exists is True if the image is expected to previously exist and
False otherwise.
Messages and error conditions are registered in err (defined at
ObitTalk startup) and any error messages can be viewed by:
\begin{verbatim}
>>> ShowErr(err)
\end{verbatim}
%\newpage
Thereafter the variable myImage is used to access the AIPS image but
beyond this point, it is largely irrelevant if the underlying file is
an AIPS or FITS (or other) format.
For instance, the header can be displayed:
\begin{verbatim}
>>> imhead(myImage)
Object: J0555+39
Observed: 2001-01-25 Telescope: VLBA Created: 2006-04-18
Observer: BC111 Instrument: VLBA
Minimum = -7.5144e-06 Maximum = 1.5197e-05 JY/BEAM
--------------------------------------------------------------
Type Pixels Coord value at Pixel Coord incr Rotat
RA---SIN 164 5 55 30.80561 78.00 -5e-05 0.00
DEC--SIN 167 39 48 49.1650 87.00 5e-05 0.00
STOKES 3 IPol 1.00 1 0.00
FREQ 1 4.2826e+10 1.00 4e+06 0.00
--------------------------------------------------------------
Coordinate equinox 2000.0 Coordinate epoch 2000.00
Observed RA 5 55 30.80561 Observed Dec 39 48 49.1650
no. Comp 200
Clean Beam 0.001 x 0.001 asec, PA 0.0 deg.
Rest freq 0 Vel type: LSR, wrt radio
Alt ref value 0 wrt pixel 1.00
\end{verbatim}
In this sense, objects can have members (other objects) or functions
which operate on the object.
For instance, the ``header'' of myImage, which is referred to as an
ImageDescriptor in ObitTalk, is referenced as myImage.Desc, and the
function which destroys the object as well as its external
representation is myImage.Zap() (functions are denoted with parentheses
even if there are no arguments).
Note that the names of variables are arbitrary, so ``myImage'' could as
well be ``Judy''; these names are used in error and other informative
messages.
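A minimal illustrative sketch of member and function access (the exact
call signatures should be checked with help(Image)):
\begin{verbatim}
>>> desc = myImage.Desc       # ImageDescriptor member (the "header")
>>> # myImage.Zap()           # would delete myImage and its disk file
\end{verbatim}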
Local disk numbers in AIPS data files have the same meaning as in
POPS.
FITS disk numbers correspond to the directories pointed to by the
environment variables \$FITS, \$FITS01, \$FITS02.... FITS disk 0 has
a special meaning in which the filename is either relative to the
current working directory or a full path to the file.
Disk numbers may also be defined on remote computers.
\subsection{Tasks}
An important type of object in ObitTalk is the Task object.
This object defines the interface to tasks (parameters, documentation,
etc.)
Currently, interfaces to AIPS tasks and Obit tasks are supported.
Tasks have the same meaning as in POPS and are programs that run
independently of the python process and are generally compiled Fortran
or C programs.
In order to run a task, a task object is first created; at this point
AIPS or Obit needs to be specified, but after the object is created the
type of task matters relatively little.
One difference between POPS and python is that the final single quote
around a POPS string causes it to be converted to upper case whereas
no case conversion is done in python.
If you want an AIPS file name or class which contains upper case
letters, you must type it that way.
Tasks may have output as well as input parameters.
If tasks are run synchronously (using the task\_obj.go() syntax), a
python runtime exception will be thrown if the task does not complete
normally, i.e. it either detects an uncorrectable problem or aborts.
In any mode of running an Obit task, the output parameter ``retCode'' will have
a value of 0 if the task terminated normally without detecting a problem and
-1 otherwise. Note: this value will be -999 during the task execution.
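For example, after running an Obit task object myTask synchronously,
the return code can be checked (a hedged sketch; output parameters are
accessed as members of the task object as described later):
\begin{verbatim}
>>> myTask.go()        # run synchronously
>>> myTask.retCode     # 0 => normal completion, -1 => problem detected
0
\end{verbatim}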
\subsubsection{Task functions}
There are a number of common task functions which can be invoked from
a task object.
These functions also have a short version to simplify typing.
For example:
\begin{verbatim}
>>> myTask.i
\end{verbatim}
is equivalent to
\begin{verbatim}
>>> myTask.inputs()
\end{verbatim}
These common task functions and the short form are explained in the
following:
\begin{itemize}
\item inputs (short i)\\
This function is to list the current values of the task's input
parameters with a short description.
\item outputs (short o)\\
This function is to list the current values of the task's output
parameters with a short description.
\item help (short h)\\
This function is to list the help documentation for the task.
\item explain (short e)\\
This function is to list any extended documentation for the task.
\item go (short g)\\
This function starts the task in synchronous mode using the current
input parameters.
\item abort (short a)\\
This function aborts the task.
A ``Control C'' while running a task synchronously has the same effect.
\item wait (short w)\\
This function suspends operations pending the completion of the
task.
\end{itemize}
\subsubsection{Arrays in AIPS Tasks}
The main difference between Obit and AIPS tasks as well as a major
difference between POPS and python is that array indexing in POPS
arrays is one relative whereas in python indexing is zero relative.
In other words, the first element of array parm in POPS is parm(1) and
in python it is parm[0] (also note the parentheses and square
brackets).
Since the AIPS documentation describes array values by their one
relative indices, using zero relative addressing is a serious
potential source of trouble; aparm(3) in POPS is aparm[2] in python.
To avoid this problem, ObitTalk adds an extra, unused, element at the
beginning of each array to keep the indexing consistent with AIPS
documentation.
To enforce this scheme, ObitTalk does not allow you to modify the
first element of an array.
This causes an additional problem: you cannot set an AIPS task
array parameter as:
\begin{verbatim}
>>> AIPStaskObj.ArrayParm = [1,2,3,4] # Fails
\end{verbatim}
Instead, there are two options, using slicing of the parameter array:
\begin{verbatim}
>>> AIPStaskObj.ArrayParm[1:] = [1,2,3,4] # OK
\end{verbatim}
or using the AIPSList class:
\begin{verbatim}
>>> AIPStaskObj.ArrayParm = AIPSList([1,2,3,4]) # OK
\end{verbatim}
Multidimensional arrays can be set:
\begin{verbatim}
>>> AIPStaskObj.Array2DParm = AIPSList([[1,2,3,4],[5,6,7,8]]) # OK
\end{verbatim}
(Note the double square brackets).
\subsubsection{Arrays in Obit Tasks}
Obit task array parameters use zero--relative indexing, so statements
like
\begin{verbatim}
>>> ObitTaskobj.ArrayParm = [1,2,3,4] # OK
\end{verbatim}
work as expected.
\subsubsection{Examples}
An example of creating a task object named im to run AIPS task IMEAN
is:
\begin{verbatim}
>>> im=AIPSTask("IMEAN")
\end{verbatim}
The parameters of the task can then be set:
\begin{verbatim}
>>> im.inname='07030+51396'; im.inclass='PCUBE'; im.indisk=1; im.inseq=2
>>> im.BLC=AIPSList([10,10]); im.TRC=AIPSList([100,100])
\end{verbatim}
The Inputs can be reviewed:
\begin{verbatim}
>>> im.i
IMEAN: Task to print the mean, rms and extrema in an image
Adverbs Values Comments
--------------------------------------------------------------------------------
dohist -1.0 True (1.0) do histogram plot.
= 2 => flux on x axis
userid 0.0 User ID. 0=>current user
32000=>all users
inname 07030+51396 Image name (name)
inclass PCUBE Image name (class)
inseq 2.0 Image name (seq. #)
indisk 1.0 Disk drive #
blc 10.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0 Bottom left corner of image
0=>entire image
trc 100.0, 100.0, 0.0, 0.0, 0.0, 0.0, 0.0 Top right corner of image
0=>entire image
nboxes 0.0 No. of ranges for histogram.
pixrange 0.0, 0.0 Min and max range for hist.
functype 'LG' => do log10 plot of #
samples, else linear
pixavg 0.0 Estimate of mean noise value
pixstd 0.0 Estimate of true noise rms
< 0 => don't do one
= 0 => 2-passes to get
docat 1.0 Put true RMS in header
ltype 3.0 Type of labeling: 1 border,
2 no ticks, 3 - 6 standard,
7 - 10 only tick labels
<0 -> no date/time
outfile Name of output log file,
No output to file if blank
dotv -1.0 > 0 Do plot on the TV, else
make a plot file
grchan 0.0 Graphics channel 0 => 1.
\end{verbatim}
and the task run:
\begin{verbatim}
>>> im.g
IMEAN2: Task IMEAN (release of 31DEC02) begins
IMEAN2: Initial guess for PIXSTD taken from ACTNOISE inheader
IMEAN2: Image= 07030+51396 .PCUBE . 2 1 xywind= 1 1 241 241
IMEAN2: Mean and rms found by fitting peak in histogram:
IMEAN2: Mean=-3.1914E-06 Rms= 2.7893E-04 **** from histogram
IMEAN2: Mean and rms found by including all data:
IMEAN2: Mean= 1.8295E-05 Rms= 5.2815E-04 JY/BEAM over 174243 pixels
IMEAN2: Flux density = 2.0006E-01 Jy. beam area = 15.93 pixels
IMEAN2: Minimum=-1.5441E-03 at 164 180 1 1
IMEAN2: Skypos: RA 07 02 04.303 DEC 51 51 23.18
IMEAN2: Skypos: IPOL 1400.000 MHZ
IMEAN2: Maximum= 4.0180E-02 at 93 159 1 1
IMEAN2: Skypos: RA 07 03 36.211 DEC 51 47 11.65
IMEAN2: Skypos: IPOL 1400.000 MHZ
IMEAN2: returns adverbs to AIPS
IMEAN2: Appears to have ended successfully
IMEAN2: smeagle 31DEC06 TST: Cpu= 0.0 Real= 0
\end{verbatim}
\subsection{functions = verbs}
In addition to tasks, ObitTalk allows POPS verb--like functionality by
means of functions using data interface objects.
This allows access to headers and data values and, unlike POPS, access
to much of the high--level functionality in the Obit class libraries as
well as all of the functionality of python.
Numerous operations which in POPS require tasks can be performed by
ObitTalk functions.
Examples are the conversions between AIPS and FITS types (functions
imlod, uvlod, imtab, uvtab).
Much of the POPS functionality is implemented in ObitTalk functions.
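The calling sequence of any of these functions can be inspected with
the standard python help, e.g.:
\begin{verbatim}
>>> help(imlod)
\end{verbatim}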
\section{ObitView Image Display \label{ObitView}}
While AIPS tasks can use the AIPS TV, the image display used by
ObitTalk and Obit tasks is ObitView which is run as an independent
program.
ObitView can be used as an image browser independently of ObitTalk.
To display image myImage on a running ObitView simply:
\begin{verbatim}
>>> tvlod(myImage)
\end{verbatim}
A screen shot of the ObitView window is shown in Figure \ref{ObitViewFig}.
\begin{figure}
\centering
\includegraphics[height=4in]{ObitView.eps}
\caption{
Screenshot of ObitView window.
}
\label{ObitViewFig}
\end{figure}
ObitView uses the xmlrpc protocols to communicate between tasks and as
such allows communication between different computers by means of the
internet.
Parts of this protocol involve fixed port numbers which means that only
a single ObitView can run on a given computer using a given port
number.
An attempt to start a second will fail with a ``can't bind'' message.
By default port 8765 is used but others may be used as well.
For instance to use port 8888, start ObitView as follows
\begin{verbatim}
% ObitView -port 8888 &
\end{verbatim}
Then ObitTalk can be told to use this port by:
\begin{verbatim}
>>> newDisplay(8888)
\end{verbatim}
Obit tasks which use the display have a parameter dispURL which should
be set to\\
"http://localhost:8888/RPC2" to use the new display.
If the display is running on a machine on which the data is not
visible, use ``http://myhost:port/RPC2'' where myhost is the network
name and port is the port number (usually 8765). For example, to set
the display on a task object named task:
\begin{verbatim}
>>> task.dispURL="http://canis.cv.nrao.edu:8765/RPC2"
\end{verbatim}
When a remote process displays on an ObitView display, it first copies
the image as a compressed FITS image to the display which saves the
file in its current working directory as ObitDisplayFITS.fits.gz.
It is useful to start ObitView from a directory where it is both
possible and desirable to write these temporary files.
If there is trouble connecting to the display server port
(e.g. firewall, address translation) and you have ssh login access
between the relevant hosts then it is possible to use ssh port
forwarding through the secure connection.
From the command shell on the client side (as seen by ObitView)
issue:
\begin{verbatim}
% ssh -L localport:host:hostport user@host
\end{verbatim}
where localport is the local port number (typically 8765 for
ObitView), host is the host on which the ObitView process is running
and hostport is the port on host that the target ObitView is
watching.
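As a concrete illustration (user and host names hypothetical), to
forward local port 8765 to an ObitView listening on port 8765 on host
remhost:
\begin{verbatim}
% ssh -L 8765:remhost:8765 myuser@remhost
\end{verbatim}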
Then, give the task or ObitTalk on the client end (again as seen by
ObitView) a url for itself other than localhost; this will cause the
file to be transmitted.
For instance if the result of the shell ``hostname'' command is
``smeagle'' create an ObitTalk display:
\begin{verbatim}
>>> newDisplay(URL="http://smeagle:8765/RPC2")
\end{verbatim}
A tvlod should then cause the image to be displayed on the host
specified in the ssh command.
ObitView is used by ObitTalk and Obit tasks to display images and
perform some interactive operations such as specifying CLEAN boxes.
ObitView gives much more control over the display of an image than is
possible in the AIPS TV.
Once an image is loaded, all of the display controls are available;
there is extensive online help.
When an interactive CLEAN box or other window setting session begins,
a RequestBox dialog appears with the image displayed overlaid by the
current CLEAN window.
The radio buttons at the top of this dialog specify what action is to
be taken by the calling program when the ``OK'' button on the bottom
is hit and the calling program resumes.
These options are:
\begin{itemize}
\item Continue\\
Continue with the edited window.
\item Abort\\
Shutdown immediately.
\item Quit Operation\\
Terminate the current operation and continue/shutdown in an orderly
fashion.
For a simple CLEAN, this means stop the clean here and do whatever
component restoration/flattening operations were requested.
If this command is given in a CLEAN as part of a self--calibration
cycle, the current CLEAN is terminated and the self--calibration
continues.
If this command is given at the end of a self--calibration cycle then
the self-calibration proceeds as if it were converged.
\item Turn Off TV\\
No more displays of the image.
Inside a CLEAN this causes no more displays during the current CLEAN
but this does not affect displays in some outer operation (e.g. self
calibration).
If the TV display is turned off in one CLEAN of a self--calibration
loop then it is turned off in subsequent CLEANs.
\item View Field\\
If the image is a multi--facet image, then the display (with possible
editing of its CLEAN window) of another facet is requested by this
option and the facet number (1-relative) entered in the text box labeled
``Request field''.
\end{itemize}
If editing of the window displayed is desired, then the ``Clear''
button deletes the current window (not normally needed) and the ``Edit''
button puts the display in window editing mode.
The message dialog appears with detailed instruction about editing.
To exit editing mode hit the ``d'' or ``D'' button.
When all editing of the window is complete, the ``OK'' button causes the
calling program to resume with the specified operation and the edited
window.
The ``Cancel'' button is like ``OK'' except that any editing of the
window is discarded.
There are several types of boxes used by Obit CLEANing and these are
shown in different colors (subject to some user selection).
Not all types are always used.
The types of CLEAN boxes are:
\begin{itemize}
\item ``Inner'' boxes \\
These are the traditional CLEAN window boxes specifying the regions in
which components may be selected.
\item ``Inner'' unboxes \\
Specifies regions in which components are NOT to be selected.
Used in the autoCenter mode.
\item ``Outer'' boxes \\
Specifies regions inside of which the autoWindow algorithm is allowed
to place Inner boxes.
For multi--facet images these generally correspond to the region of
the facet to be used when the image is flattened.
\end{itemize}
The program timeout (length of time ObitView will wait before sending
a program the default response, i.e. ``Continue'') can be set using
the ``Options'' menu.
The default timeout is infinite but it can be set to a finite period.
The minimum is 5 seconds, to give time to respond interactively; any
activity on the editing dialog disables the timeout for that instance.
\section{ObitMess Task Message Display\label{ObitMess}}
The ObitMess server is used to display task messages and to
provide user input for tasks running asynchronously.
Use of this facility is described in Section \ref{asynctasks}.
To be used in an ObitTalk session, it must be started independently.
Like ObitView, ObitMess uses the xmlrpc protocols to communicate
with ObitTalk and as such allows communication between different
computers by means of the internet.
Parts of this protocol involve fixed port numbers which means that only
a single ObitMess can run on a given computer using a given port
number.
An attempt to start a second will fail with a ``can't bind'' message.
By default port 8777 is used but others may be used as well.
For instance to use port 8889, start ObitMess as follows
\begin{verbatim}
% ObitMess -port 8889 &
\end{verbatim}
Then ObitTalk can be told to use this port when starting a task
(myTask) by:
\begin{verbatim}
>>> tw=go(myTask, URL="http://localhost:8889/RPC2")
\end{verbatim}
If there is trouble connecting between ObitTalk and the message server
port (e.g. firewall, address translation) and you have ssh login access
between the relevant hosts then it is possible to use ssh port
forwarding through the secure connection.
From the command shell on the client side (as seen by ObitMess)
issue:
\begin{verbatim}
% ssh -L localport:host:hostport user@host
\end{verbatim}
where localport is the local port number (typically 8777 for
ObitMess), host is the host on which the ObitMess process is running
and hostport is the port on host that the target ObitMess is
watching.
When ObitMess is started a window will appear with the label ``Obit
task message server'' and a Quit button.
Additional windows will be produced as needed.
Only hit the ``Quit'' button when you are through with the message server.
\section {ObitTalk Basics}
Obit consists of class libraries and a number of prepackaged
tasks similar to AIPS tasks. The classes are implemented in c but
there are python bindings to much of the high-level functionality
allowing python scripts a high degree of flexibility in accessing
and manipulating data.
ObitTalk can execute Obit Tasks and functions as well as AIPS tasks
but not POPS verbs.
Obit can support multiple physical data formats as long as they are
uniquely mappable to a common data model. Above a data access level,
the underlying physical data representation is (mostly) hidden.
Currently, AIPS and FITS (as practiced by AIPS) are supported.
Only FITS format OTF data is supported.
AIPS and Obit tasks (mostly) are completely interoperable and may be
mixed.
Data objects generally have a ``descriptor'' member, e.g. each
Image has an ImageDesc giving the ``header'' information.
These can be accessed by conversion to and from a python dict
(dictionary) in the relevant Descriptor class function.
An example of an AIPS image in catalog slot 2 of AIPS disk 2:
\begin{verbatim}
>>> indisk=2
>>> image=getname(2,indisk)
>>> dict = image.Desc.Dict
\end{verbatim}
Or, the function Header will display the contents in a human readable
form:
\begin{verbatim}
>>> image.Header()
\end{verbatim}
Note: function imhead(image) is a different path to the same end.
Catalogs in AIPS data directories can be viewed using the functions
Acat(), AMcat(), AUcat() for all, image and uv entries; there are
numerous optional arguments, an explanation of which can be obtained
by
\begin{verbatim}
>>> help(Acat)
Acat(disk=None, first=1, last=1000, Aname=None, Aclass=None, Aseq=0, giveList=False)
Catalog listing of AIPS files on disk disk
The class remembers the last disk accessed
Strings use AIPS wild cards:
blank => any
'?' => one of any character
"*" => arbitrary string
If giveList then return list of CNOs
disk = AIPS disk number to list
first = lowest slot number to list
last = highest slot number to list
Aname = desired AIPS name, using AIPS wildcards, None -> don't check
Aclass = desired AIPS class, using AIPS wildcards, None -> don't check
Aseq = desired AIPS sequence, 0=> any
giveList = If true, return list of CNOs matching
\end{verbatim}
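For example, to list all entries on AIPS disk 1 whose names begin with
``3C'' (wildcard usage as described above):
\begin{verbatim}
>>> Acat(1, Aname="3C*")
\end{verbatim}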
Directories in FITS ``disks'' can be displayed by Fdir
\begin{verbatim}
>>> help(Fdir)
Fdir(disk=None, dir=None)
Catalog listing of FITS files on disk disk
The class remembers the last disk accessed
disk = AIPS disk number to list
dir = relative or abs. path of directory, def. = cwd
Only used if disk == 0
\end{verbatim}
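For example, to list the FITS files in the current working directory
(FITS ``disk'' 0):
\begin{verbatim}
>>> Fdir(0)
\end{verbatim}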
\subsection{Tasks}
Following are lists of tasks available through ObitTalk.\\
{\bf AIPS Tasks}
\begin{itemize}
\item All AIPS tasks
\end{itemize}
{\bf Obit Tasks}
\begin{itemize}
\item {\bf AutoFlag} Radio interferometry data editing software
\item {\bf BeamCor} Imaging software correcting for tabulated beamshape
\item {\bf BPass} Simple UV bandpass calibration
\item {\bf Calib} Calibrate visibility data (amp \& phase)
\item {\bf CLCal} Apply gain solutions to a CL table
\item {\bf Convol} Convolve images
\item {\bf CubeClip} Remove insignificant pixels from 3D cube
\item {\bf CubeVel} Flux weighted velocity image from 3D cube
\item {\bf Feather} Task to feather together images
\item {\bf FndSou} Task to generate a source catalog from an image
\item {\bf GetJy} Determine calibrator flux densities
\item {\bf HGeom} Task to make an image consistent with another image
\item {\bf IDIin} Read IDI format UV data
\item {\bf IDIout} Write IDI format UV data
\item {\bf Imager} Radio interferometry imaging task
\item {\bf IonCal } Low frequency Field Based calibration
\item {\bf IonImage} Low frequency Field Based calibration and imaging
\item {\bf IonMovie} Make a movie of ionospheric phase from an SN table
\item {\bf IonSF} Convert Ion. movie to 2D structure func (distance,time)
\item {\bf Lister} Listing of data and calibration tables
\item {\bf LowFRFI} Low Frequency Radio Interferometry RFI removal
\item {\bf MapBeam} Map beam polarization
\item {\bf MCube} Task to accumulate image planes into a cube
\item {\bf MednFlag} Automated UV flagging about a median value
\item {\bf MFImage} Wideband imaging
\item {\bf Ringer} Fit rings to SiO maser cubes
\item {\bf noFQId} Set FqIDs in continuum data to 1
\item {\bf Quack} Flags specified portion of scans of UV data
\item {\bf SCMap} Interferometry self calibration imaging
\item {\bf SetJy} Modify SoUrce (SU) table
\item {\bf SNCor} Modify visibility gain (AIPS SN) table
\item {\bf SNFilt} Fits for instrumental phases in SN table.
\item {\bf SNSmo} Smooth visibility gain (AIPS SN) table
\item {\bf Split} Split multi--source UV data to single source
\item {\bf SplitCh} Split UV data to multiple channels
\item {\bf Squint} VLA beam squint correcting imaging software
\item {\bf Squish} Compress image cube along third axis
\item {\bf SubImage} Task to copy a sub region of an image
\item {\bf TabCopy} Task to copy one or more tables
\item {\bf Template} Task to print the mean, rms and extrema in an image
\item {\bf UVBlAvg} Baseline dependent time and/or frequency averaging
\item {\bf UVCopy} Copy UV data
\item {\bf UVPolCor} Correct off-axis instrumental polarization in UV data
\item {\bf UVSim} Simulate UV data
\item {\bf UVSubBC} Correct off-axis instrumental polarization in UV data
\item {\bf UVSub} Task to subtract a clean model from a uv data base
\item {\bf VL2VZ} Convert VL (survey catalog) table to VZ table
\item {\bf VLSSFix} Corrects residual geometry in low frequency images
\end{itemize}
{\bf Obit SD Tasks:}
\begin{itemize}
\item {\bf CCBCalib} Calibrate GBT CCB OTF format data
\item {\bf CCBFix} Strip bad data from post-lobotomy GBT CCB data
\item {\bf OTFImage} Image OTF format data
\item {\bf OTFSCal} Image and self calibrate OTF format data
\end{itemize}
To see task documentation either a python task object may first
be created and its documentation viewed, or more directly:
\begin{verbatim}
AIPSHelp("AIPS_task_name")
or
ObitHelp("Obit_task_name")
\end{verbatim}
To create a task object:
\begin{verbatim}
>>> im=AIPSTask("IMEAN")
\end{verbatim}
to create an AIPS task object for task IMEAN, or
\begin{verbatim}
>>> fe=ObitTask("Feather")
\end{verbatim}
to create an Obit Task object for Feather.
Note the names of the objects are arbitrary.
Task parameters can be set using the form object.parameter=value:
\begin{verbatim}
>>> im.inname="MY FILE"
\end{verbatim}
where the parameter names are subject to minimum match and tab completion.
Array values are given in square brackets "[ ]", the usual form
for a python list. AIPS array values are indexed 1-relative and Obit
arrays 0-relative but this is largely transparent.
Note: unlike POPS, ALL strings are case sensitive.
There are convenience functions setname, set2name and setoname to copy
the name information to a task object for the first and second input
objects and the output object:
\begin{verbatim}
>>> setname (myImage, im)
\end{verbatim}
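By analogy (a hedged sketch; check help(set2name) and help(setoname)
for the exact calling sequences), the second input and output objects
would be set as:
\begin{verbatim}
>>> set2name (mySecondImage, fe)   # name info of the second input object
>>> setoname (myOutImage, fe)      # name info of the output object
\end{verbatim}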
Task parameters can be reviewed using the inputs() function:
\begin{verbatim}
>>> im.inputs()
\end{verbatim}
or
\begin{verbatim}
>>> inputs(im)
\end{verbatim}
Note: there is NO minimum match on functions but there is tab
completion and you must give the parentheses.
POPS style help can be viewed:
\begin{verbatim}
>>> im.help()
\end{verbatim}
or
\begin{verbatim}
>>> help(im)
\end{verbatim}
or EXPLAIN (if available) by:
\begin{verbatim}
>>> im.explain()
\end{verbatim}
or
\begin{verbatim}
>>> explain(im)
\end{verbatim}
Tasks can be run using the go function:
\begin{verbatim}
>>> im.go()
\end{verbatim}
The above form of the go function runs synchronously and does not return
until the task finishes.
Log messages will appear on the screen; if logging to a file is
desired, set the name of the file (relative or full path) on the task
object's logFile member:
\begin{verbatim}
>>> im.logFile="myLog.log"
\end{verbatim}
For Obit tasks, there is an alternative logging method, writing
messages directly to a file and NOT displaying them on the terminal or
Message Server; this is useful for batch, script driven processing.
The logging file is specified as:
\begin{verbatim}
>>> im.taskLog="myLog.log"
\end{verbatim}
This avoids problems with logging through ObitTalk, which include
missed or mangled messages and the task hanging due to a full message
buffer.
After a task is run which generates output values, these can be
viewed using the outputs function:
\begin{verbatim}
>>> im.outputs()
\end{verbatim}
and the values can be accessed through the task parameter.
The task functions work for both AIPS and Obit tasks.
Obit tasks have an output parameter ``retCode'' which will have a
value of -999 until the task completes without detecting a problem.
After such a completion, the value will be 0.
\subsection{Asynchronous Tasks \label{asynctasks}}
If the ObitMess message server is running and the doWait
parameter on the task (or script) object is set to False, it is
possible to execute asynchronously:
\begin{verbatim}
>>> window = go(TaskObj)
\end{verbatim}
If TaskObj.doWait==True, the task is run synchronously with messages
written to the python command window.
When a task is run asynchronously (TaskObj.doWait=False), a new
ObitMess window with a scrolling text box will appear on the screen;
the task messages will appear in this window and the task can be
controlled from this window.
If the task accepts terminal input, then this text can be entered into
the text box below the message box, one line at a time, hitting the
Enter key after each line.
If the window is expecting user input, the status becomes ``Awaiting
user input'' and the task will suspend until the response is typed
into the response line and the Enter key hit.
The task status shown at the bottom of this window gives ``Running'',
``Finished'' and there are buttons that allow aborting the task, saving
the messages in a text file or closing the window.
A screen shot of a message window is shown in Figure
\ref{MessWinFig}.
\begin{figure}
\centering
\includegraphics[height=3in]{ObitMessScreen.eps}
\caption{
Message window for Obit task Template after completion.
}
\label{MessWinFig}
\end{figure}
The TaskWindow object returned when an asynchronous task is started can
be used to suspend python operations until the task completes:
\begin{verbatim}
>>> window.wait()
\end{verbatim}
or abort the task:
\begin{verbatim}
>>> window.abort()
\end{verbatim}
Tasks (or scripts) can also be run asynchronously without a runtime
display of the messages.
This is done using the MsgBuf argument to the go function which will then
execute the task (or script) and save the messages.
In this case the go function returns a TaskMsgBuffer.
The task messages can be saved to a logfile or obtained from the
TaskMsgBuffer object:
\begin{verbatim}
>>> buffer = go(myTask,MsgBuf=True)
>>> buffer.wait()
>>> messages = buffer.Messages()
\end{verbatim}
TaskMsgBuffer objects also have an abort function.
\subsection{Disk Numbers and Task Execution \label{Disks}}
``Data disks'' as defined in ObitTalk include the information about
the location of the data and ObitTalk will attempt to execute the task
where the data defined for it resides.
This means that disk numbers cannot be defaulted as then ObitTalk
cannot decide where to execute the task or tell if all data reside on
the same host.
For Obit Tasks operating on FITS files, disk 0 has a special meaning,
that the filename given is relative to the current working directory.
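For example, a hedged sketch using the common Obit task parameter names
inFile and inDisk (check the task inputs, e.g. fe.i, for the actual
parameter names of a given task):
\begin{verbatim}
>>> fe.inFile = "myImage.fits"   # path relative to the current working directory
>>> fe.inDisk = 0                # FITS disk 0 => cwd or full path
\end{verbatim}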
\subsection{Scripts}
Scripts can be executed locally directly as a command line argument to
ObitTalk or interactively using the ObitScript class.
When a script is executed from the command line, there is no prompt for
the AIPS user number which must be supplied by the script.
To use all features of the Obit python interface, a full
initialization of Obit is also needed.
An example python fragment in a script given on the command line
initializing Obit for user 100 is the following:
\begin{verbatim}
# Initialize Obit for AIPS user 100
user=100
from OTObit import *
AIPS.AIPS.userno=user
OSystem.PSetAIPSuser (user)
err=OErr.OErr()
\end{verbatim}
Scripts can run from an interactive ObitTalk session (started with no
arguments) and can be run synchronously or asynchronously and either
locally or remotely using the ObitScript class.
\subsection{Task logs}
Messages from running AIPS or Obit tasks will appear in the python
window if the task is being run synchronously or in the Task Window
if run asynchronously.
Each Task window has a button that allows writing the contents into a
file; otherwise the logging messages are lost when the window closes.
If logging to a file is desired, set the name of the file (relative or
full path) on the task object's logFile member:
\begin{verbatim}
>>> im.logFile="myLog.log"
\end{verbatim}
This will cause the messages to be logged as the task runs.
For batch, script--driven processing it may be desirable to write
messages directly to the log file from the task and not to the
terminal output or Message Server.
This also avoids the problems of ObitTalk occasionally losing or
mangling messages or causing the task to hang due to a full I/O
buffer.
Obit tasks can invoke direct logging using the taskLog task object
member:
\begin{verbatim}
>>> im.taskLog="myLog.log"
\end{verbatim}
\subsection{ObitTalk/Obit routines}
ObitTalk has python bindings to the Obit c library that allow access to
data and many high level functions.
Thus, scripts or interactive use can be controlled by data values in
files.
(Note: in general functions which manipulate data require that the
data be visible locally whereas tasks and the ObitTalk Data classes do
not).
Control parameters to Obit (c) routines are largely passed in an
InfoList structure (a type of associative array similar to a python
dict) but many of the python interface routines take care of this
detail and their parameters are passed through a python dictionary.
Details are available via the python help command.
Use of ObitTalk routines is described below.
\subsection{Messages and error handling}
In ObitTalk error handling and messages use the OErr class.
ObitTalk defines a variable err at startup for this purpose.
Python functions bound to Obit routines which can generate either
messages or error conditions are passed an OErr argument.
Messages are generally not shown until explicitly requested; this
allows suppressing messages when necessary.
{\bf Note: if an error condition is indicated on err and has not been
cleared and/or messages displayed, then subsequent functions passed
err will simply return without performing their function.}
OErr functions include:
\begin{itemize}
\item {\bf ShowErr(err)} Display any messages and clear any error conditions.
\item {\bf OErr.PClear(err)} Clear Obit error stack err and error condition
\item {\bf OErr.PIsErr(err)} Tells if an error condition exists
\item {\bf OErr.PLog(err, eCode, message)} Add message to Obit
Error/message stack err
\item {\bf OErr.PSet(err)} Set Obit error flag
\item {\bf OErr.printErr(err)} Prints Obit error/message stack
\item {\bf OErr.printErrMsg(err, message='Error')} Prints Obit error
stack and throws runtime exception on error
\item {\bf OErr.OErrIsA(err)} Tells if object thinks it's a Python ObitErr
\end{itemize}
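A typical hedged usage pattern combining these functions (err is the
message/error stack defined at ObitTalk startup):
\begin{verbatim}
>>> if OErr.PIsErr(err):       # is an error condition set?
...     OErr.printErr(err)     # display the accumulated messages
...     OErr.PClear(err)       # clear the condition so later calls proceed
\end{verbatim}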
Each OErr message has a severity level:
\begin{itemize}
\item {\bf OErr.Info} Informative message
\item {\bf OErr.Warn} Warning message (not an error)
\item {\bf OErr.Traceback} Traceback information from c routines.
\item {\bf OErr.MildError} Error (but may not be serious)
\item {\bf OErr.Error} Error message
\item {\bf OErr.StrongError} Serious error
\item {\bf OErr.Fatal} Program cannot continue
\end{itemize}
\subsection{Lock and Parameter Files}
ObitTalk uses files in /tmp to indicate that resources are allocated
and for input and output parameter files for ObitTasks.
If problems occur then these files may not be properly disposed of and
may need to be deleted by hand.
These will have names like Obit\_pops\_no\_.pid (e.g. Obit3.5942)
indicating an allocated ``POPS number'' or ObitTask\_Input.pops\_no
(e.g. SCMapInput.1) indicating the input parameter file to an Obit
Task (SCMap).
\subsection{Modifying Data Headers}
The Obit/python interface can be used to modify data headers through
the Descriptor classes (ImageDesc, UVDesc, etc).
The actual memory resident structure is a c structure which can be
translated to and from a python dict.
The general procedure is
\begin{enumerate}
\item Open the object Read/Write
\begin{verbatim}
>>> help(x.Open)
Open(self, access, err, blc=None, trc=None) method of Image.Image instance
Open an image persistent (disk) form
self = Python Image object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
blc = if given and a list of integers (min 2) giving
bottom left corner (1-rel) of subimage
trc = if given and a list of integers (min 2) giving
top right corner (1-rel) of subimage
\end{verbatim}
\item Obtain the descriptor in python dict form through the x.Desc.Dict
member.
\item Modify the contents of the dict making sure to maintain its
structure, format of date strings and data types.
\item Update the Descriptor using an x.Desc.Dict = dict type statement
\item Update descriptor in external representation using the data
object's UpdateDesc function.
\begin{verbatim}
UpdateDesc(self, err, Desc=None) method of Image.Image instance
Update any disk resident structures about descriptor
self = Python Image object
err = Python Obit Error/message stack
Desc = Descriptor, if None then use current descriptor
Contents can be accessed through the Dict member
\end{verbatim}
\item Close object
\end{enumerate}
An example is shown in the following in which the value of
``observer'' is changed from ``Axxxx'' to ``my code'':
\begin{verbatim}
>>> x=getname(17)
AIPS Image W3 VLA 1 1
>>> imhead(x)
AIPS Image Name: W3 Class: VLA seq: 1 disk: 1
Object: W3
Observed: 1992-07-17 Telescope: VLA Created: 2006-09-25
Observer: Axxxx Instrument: VLA
Minimum = -0.018 Maximum = 2.452 JY/BEAM
--------------------------------------------------------------
Type Pixels Coord value at Pixel Coord incr Rotat
RA---SIN 320 2 25 36.44334 161.00 -1.72914 0.00
DEC--SIN 320 62 6 11.2407 161.00 1.72914 -0.35
FREQ 1 8.6697e+09 1.00 6.05469e+06 0.00
STOKES 1 IPol 1.00 1 0.00
--------------------------------------------------------------
Coordinate equinox 2000.0 Coordinate epoch 2000.00
Observed RA 2 25 36.44334 Observed Dec 62 6 11.2407
no. Comp 1
Clean Beam 6.99984 x 6.99984 asec, PA 0.0 deg.
Rest freq 0 Vel type: Observer, wrt Optical
Alt ref value 1.1704e+05 wrt pixel 16.00
Maximum version number of AIPS CC tables is 1
Maximum version number of AIPS HI tables is 1
>>> x.Open(Image.READWRITE,err)
>>> d=x.Desc.Dict
>>> d["observer"]
'Axxxx '
>>> d["observer"]="my code"
>>> x.Desc.Dict=d
>>> x.UpdateDesc(err)
>>> x.Close(err)
>>> imhead(x)
AIPS Image Name: W3 Class: VLA seq: 1 disk: 1
Object: W3
Observed: 1992-07-17 Telescope: VLA Created: 2006-09-25
Observer: my code Instrument: VLA
Minimum = -0.018 Maximum = 2.452 JY/BEAM
--------------------------------------------------------------
Type Pixels Coord value at Pixel Coord incr Rotat
RA---SIN 320 2 25 36.44334 161.00 -1.72914 0.00
DEC--SIN 320 62 6 11.2407 161.00 1.72914 -0.35
FREQ 1 8.6697e+09 1.00 6.05469e+06 0.00
STOKES 1 IPol 1.00 1 0.00
--------------------------------------------------------------
Coordinate equinox 2000.0 Coordinate epoch 2000.00
Observed RA 2 25 36.44334 Observed Dec 62 6 11.2407
no. Comp 1
Clean Beam 6.99984 x 6.99984 asec, PA 0.0 deg.
Rest freq 0 Vel type: Observer, wrt Optical
Alt ref value 1.1704e+05 wrt pixel 16.00
Maximum version number of AIPS CC tables is 1
Maximum version number of AIPS HI tables is 1
\end{verbatim}
\subsection{Object parameter lists\label{ParmList}}
It is frequently necessary to pass parameters to Obit functions to
control their behavior.
These are sometimes explicit arguments of python functions but in
other cases they are passed through the InfoList member of the object.
This is particularly used for data selection and calibration
parameters.
An InfoList is conceptually similar to a python dict structure
although less flexible.
An InfoList is a list of labeled data items; each item is a scalar or
an array of a given data type.
The data types supported are int, long (explicitly 32 bit in c),
float, double (explicitly 64 bit in c), boolean and strings.
More details can be obtained by viewing the help function on the
class.
Obit data (and other) objects will have an InfoList member which can
generally be accessed through the List member.
Conversion to and from python dict structures is by means of the Dict
member of the InfoList class.
Simple access to entries in an InfoList is through the set and get
functions.
\begin{verbatim}
set(self, name, value, ttype=None) method of InfoList.InfoList instance
Save a value in an InfoList
Set an entry in an InfoList, possibly redefining its type and dimension
self = input Python InfoList
name = name of desired entry
value = value to save, either a scalar integer, float, boolean or string
or a 1D array of one of these types
Type and dimensionality determined from value unless ttype is set
ttype = data type, "double", "long", None=>type of value
\end{verbatim}
\begin{verbatim}
get(self, name) method of InfoList.InfoList instance
Retrieve a value from an InfoList
returns python list containing data:
0 - return code, 0=OK else failed
1 - name
2 - type
int=1, oint=3, long=4, float=9, double=10, string=13, boolean=14
3 - dimension array as list, e.g. [1,1,1,1,1] for scalar
4 - data array
self = input Python InfoList
name = name of desired entry
\end{verbatim}
Usage of these functions is shown in the following, in which x is an Obit data
object.
\begin{verbatim}
>>> x.List.set("fvalue",1.234)
>>> x.List.get("fvalue")
[0, 'fvalue', 9, [1, 1, 1, 1, 1], [1.2339999675750732]]
>>> x.List.set("farray",[1.234,4.567,7.890])
>>> x.List.get("farray")
[0, 'farray', 9, [3, 1, 1, 1, 1], [1.2339999675750732, 4.5669999122619629, 7.8899998664855957]]
>>> x.List.set("darray",[1.234,4.567,7.890],"double")
>>> x.List.get("darray")
[0, 'darray', 10, [3, 1, 1, 1, 1], [1.234, 4.5670000000000002,
7.8899999999999997]]
>>> x.List.Dict
{'DISK': [2, [1, 1, 1, 1, 1], [1]], 'FileType': [2, [1, 1, 1, 1, 1],
[1]],
'fvalue': [9, [1, 1, 1, 1, 1], [1.2339999675750732]],
'darray': [10, [3, 1, 1, 1, 1], [1.234, 4.5670000000000002,
7.8899999999999997]],
'User': [2, [1, 1, 1, 1, 1], [100]],
'farray': [9, [3, 1, 1, 1, 1], [1.2339999675750732,
4.5669999122619629, 7.8899998664855957]],
'CNO': [2, [1, 1, 1, 1, 1], [40]],
'Disk': [2, [1, 1, 1, 1, 1], [1]], 'nVisPIO': [2, [1, 1, 1, 1, 1], [1]]}
>>>
\end{verbatim}
\subsection{Accessing UV Data \label {UVData}}
There are a number of Obit Class functions that perform high--level
operations on uv data sets (UV objects) in the CleanVis,
UVImager, and UVSelfCal classes.
For details, import these classes and view the help documentation.
Visibility data can be read from and written to data objects using the
UV ReadVis and WriteVis functions employing objects of the UVVis
class.
The selection, calibration and editing of visibility data can be
controlled by setting parameters on the InfoList member of the UV data
object.
Many of these are set using the interface to high--level class
functionality, but for a given parameter which is not part of the
class function interface definition, the value can be set directly
through the InfoList (see section \ref{ParmList}).
A complete list of the UV data selection/calibration/editing
parameters follows.
\begin{itemize}
\item doCalSelect boolean scalar\\
Select/calibrate/edit data?
\item Stokes string (4,1,1) \\
Selected output Stokes parameters:
`` "$\Rightarrow$ no translation,"I ","V ","Q ", "U ",
"IQU ", "IQUV", "IV ", "RR ", "LL ", "RL ", "LR ",
"HALF" = RR,LL, "FULL"=RR,LL,RL,LR. [default " "]
In the above 'F' can substitute for "formal" 'I' (both RR+LL).
\item BChan int scalar \\
First spectral channel (1-rel) selected. [def all]
\item EChan int scalar \\
Highest spectral channel (1-rel) selected. [def all]
\item BIF int scalar \\
First IF (1-rel) selected. [def all]
\item EIF int scalar \\
Highest IF (1-rel) selected. [def all]
\item doPol int scalar \\
$>0 \Rightarrow$ calibrate polarization.
\item doCalib int scalar \\
$>0 \Rightarrow$ calibrate, $2 \Rightarrow$ also calibrate Weights
\item gainUse int scalar \\
SN/CL table version number, $0 \Rightarrow$ use highest
\item flagVer int scalar \\
Flag table version, $0 \Rightarrow$ use highest, $<0 \Rightarrow$ none
\item BLVer int scalar \\
BL table version, $0 \Rightarrow$ use highest, $<0 \Rightarrow$ none
\item BPVer int scalar \\
Band pass (BP) table version, $0 \Rightarrow$ use highest
\item Subarray int scalar \\
Selected subarray, $\leq 0 \Rightarrow$ all [default all]
\item dropSubA bool scalar \\
Drop subarray info?
\item FreqID int scalar \\
Selected Frequency ID, $\leq 0 \Rightarrow$ all [default all]
\item timeRange float (2,1,1) \\
Selected timerange in days.
\item UVRange float (2,1,1) \\
Selected UV range in kilowavelengths.
\item InputAvgTime float scalar \\
Input data averaging time (sec). Used for fringe rate decorrelation correction.
\item Sources string (?,?,1) \\
Source names selected unless any starts with
a '-' in which case all are deselected (with '-' stripped).
\item souCode string (4,1,1) \\
Source Cal code desired,
\begin{itemize}
\item ' ' $\Rightarrow$ any code selected
\item '* ' $\Rightarrow$ any non blank code (calibrators only)
\item '-CAL' $\Rightarrow$ blank codes only (no calibrators)
\end{itemize}
\item Qual int scalar \\
Source qualifier, -1 [default] = any
\item Antennas int (?,1,1) \\
a list of selected antenna numbers, if any is negative
then the absolute values are used and the specified antennas are deselected.
\item corrType int scalar \\
Correlation type, 0=cross corr only, 1=both, 2=auto only.
\item passAll bool scalar \\
If True, pass along all data when selecting/calibrating
even if it's all flagged.
Data deselected by time, source, antenna etc. is not passed.
\item doBand int scalar \\
Band pass application type $<0 \Rightarrow$ none:
\begin{enumerate}
\item If = 1 then all the bandpass data for each antenna
will be averaged to form a composite bandpass
spectrum, this will then be used to correct the data.
\item If = 2 the bandpass spectra nearest in time (in a weighted
sense) to the uv data point will be used to correct the data.
\item If = 3 the bandpass data will be interpolated in time using the
solution weights to form a composite bandpass spectrum, this
interpolated spectrum will then be used to correct the data.
\item If = 4 the bandpass spectra nearest in time (neglecting weights)
to the uv data point will be used to correct the data.
\item If = 5 the bandpass data will be interpolated in time ignoring
weights to form a composite bandpass spectrum, this interpolated
spectrum will then be used to correct the data.
\end{enumerate}
\item Smooth float (3,1,1) \\
specifies the type of spectral smoothing
\begin{itemize}
\item Smooth[0] = type of smoothing to apply:
\begin{itemize}
\item 0 $\Rightarrow$ no smoothing
\item 1 $\Rightarrow$ Hanning
\item 2 $\Rightarrow$ Gaussian
\item 3 $\Rightarrow$ Boxcar
\item 4 $\Rightarrow$ Sinc (i.e. sin(x)/x)
\end{itemize}
\item Smooth[1] = the "diameter" of the function, i.e. width between
first nulls of Hanning triangle and sinc function, FWHM of Gaussian,
width of Boxcar. Defaults (if $<$ 0.1) are 4, 2, 2 and 3 channels for
Smooth[0] = 1 - 4.
\item Smooth[2] = the diameter over which the convolving function has
value - in channels. Defaults: 1, 3, 1, 4 times Smooth[1] used when
\end{itemize}
\item SubScanTime float scalar \\
\{Optional\} if given, this is the desired time (days) of a sub scan.
This is used by the selector to suggest a value close to this which will
evenly divide the current scan. 0 $\Rightarrow$ Use scan average.
This is only useful for ReadSelect operations on indexed ObitUVs.
\end{itemize}
As an example of the data selection usage, to specify that only
autocorrelations are desired in UV data object myUV in subsequent
operations:
\begin{verbatim}
>>> myUV.List.set('corrType',2)
\end{verbatim}
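As a further hedged sketch, the same mechanism can request calibration
with the highest gain table and select a channel range (parameter names
taken from the list above):
\begin{verbatim}
>>> myUV.List.set('doCalSelect', True)
>>> myUV.List.set('doCalib', 1)     # apply calibration
>>> myUV.List.set('gainUse', 0)     # 0 => highest SN/CL table
>>> myUV.List.set('BChan', 2)       # first selected channel (1-rel)
>>> myUV.List.set('EChan', 30)      # last selected channel
\end{verbatim}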
\section{Parallel Processing}
ObitTalk and Obit tasks implement some basic aspects of parallel
processing.
These include using multiple cores and/or processors with shared
memory in a computer using multi--threading and distributing tasks
across nodes of a cluster or workstations on a LAN.
These are described in the following sections.
\subsection{Multi--threading}
Many of the more expensive operations in Obit allow using multiple
processors/cores which share memory.
The technique of multi--threading is used for this.
Obit tasks which support multi--threading have a parameter, nThreads,
giving the maximum number of threads to allow in a parallel operation.
In general, this should not be more than the actual number of
processors/cores available but may be fewer if multiple tasks are to
be run using threading or the particular task execution cannot make
good use of more than a given number of threads.
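For example, to let an Obit task object use up to four threads (a
hedged sketch; SCMap is used purely as an illustration):
\begin{verbatim}
>>> sc = ObitTask("SCMap")
>>> sc.nThreads = 4        # maximum number of parallel threads
\end{verbatim}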
Threading in functions called from scripts can be invoked as in the
following example of allowing two parallel threads.
\begin{verbatim}
>>> # Allow multiple threads
>>> OSystem.PAllowThreads(2) # 2 threads max.
\end{verbatim}
\subsection{Cluster Nodes}
ObitTalk can start parallel, independent processes on multiple nodes
of a cluster of workstations on a network; these can be either tasks or
ObitScripts.
Execution is initiated on the node/workstation on which the data
disks are defined.
See sections \ref{Disks} and \ref{Remote} for more details.
\section{Examples}
The following sections give simple examples of using ObitTalk.
\subsection{Display AIPS Catalog}
To examine your AIPS image catalog on disk 7:
\begin{verbatim}
>>> AMcat(7)
AIPS Directory listing for disk 7
1 CYG A 74 MHz.MODEL . 1 MA 13-Apr-2004 10:25:32
\end{verbatim}
\subsection{Create Python Image Object}
To create a python object for the AIPS image in slot 1 and name it ``x'':
\begin{verbatim}
>>> x=getname(1,7)
AIPS Image CYG A 74 MHz MODEL 7 1
\end{verbatim}
\subsection{Display Data Header}
To view the image header of x:
\begin{verbatim}
>>> imhead(x)
AIPS Image Name: CYG A 74 MHz Class: MODEL seq: 1 disk: 7
Object: 3C405
Observed: 2001-01-19 Telescope: VLA Created: 2001-03-03
Observer: AD441 Instrument: VLA
Minimum = -25 Maximum = 4638.2 JY/BEAM
--------------------------------------------------------------
Type Pixels Coord value at Pixel Coord incr Rotat
RA---SIN 512 19 59 28.35406 256.00 -5 0.00
DEC--SIN 512 40 44 2.0862 257.00 5 0.32
FREQ 1 7.38e+07 1.00 1.55029e+06 0.00
STOKES 1 IPol 1.00 1 0.00
--------------------------------------------------------------
Coordinate equinox 2000.0 Coordinate epoch 2000.00
Observed RA 19 59 28.35406 Observed Dec 40 44 2.0862
no. Comp 697
Clean Beam 24.9998 x 24.9998 asec, PA 0.0 deg.
Rest freq 0 Vel type: Observer, wrt Optical
Alt ref value 0 wrt pixel 0.00
Maximum version number of AIPS CC tables is 1
Maximum version number of AIPS HI tables is 1
\end{verbatim}
\subsection{Display an Image}
To display image x in ObitView:
\begin{verbatim}
>>> tvlod(x)
\end{verbatim}
Note: if ObitTalk thinks something has gone wrong with the image
display, the python object may need to be recreated.
To recreate the default image display:
\begin{verbatim}
>>> newDisplay()
\end{verbatim}
\subsection{Image Pixel Access}
Access to arrays of image pixel values is through the FArray class.
Images can be read into or written from FArray objects which can be
manipulated in many ways.
See help(FArray) for details.
In the following, the pixel array of an image is read and several
operations are performed.
\begin{verbatim}
>>> # Create image object from AIPS catalog entry
>>> x = Image.newPAImage("Swan","Cygnus A","J2000",1,1,True,err)
>>> ShowErr(err) # Check for errors
>>> x.Open(Image.READONLY,err) # Open image
>>> x.Read(err) # Read plane
>>> pixels=x.FArray # python FArray object from image
>>> pixels.Mean # Display Mean of pixel values
49.573715209960938
>>> pixels.RMS # Display RMS of pixel values
4.758549690246582
>>> FArray.PSMul(pixels, 5.0) # Scale all pixels by 5
>>> pixels.Mean # Display new mean
247.86857604980469
>>> x.Close(err) # Close image
>>> pixels.get(100,100) # Display (0-rel) pixel [100,100]
8.0
>>> pixels.set(3.1415926,100,100) # set value of pixel [100,100]
>>> pixels.get(100,100) # See new value
3.1415925025939941
\end{verbatim}
\subsection{Run an AIPS task}
To run AIPS task IMEAN on x and view the values returned:
\begin{verbatim}
>>> imean=AIPSTask("imean") # Define task object
>>> setname(x,imean) # Fill in info on x to task object
>>> imean.i # View inputs
Adverbs Values Comments
--------------------------------------------------------------------------------
dohist -1.0 True (1.0) do histogram plot.
= 2 => flux on x axis
userid 0.0 User ID. 0=>current user
32000=>all users
inname CYG A 74 MHz Image name (name)
inclass MODEL Image name (class)
inseq 1.0 Image name (seq. #)
indisk 7.0 Disk drive #
blc 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 Bottom left corner of image
0=>entire image
trc 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 Top right corner of image
0=>entire image
nboxes 0.0 No. of ranges for histogram.
pixrange 0.0, 0.0 Min and max range for hist.
functype 'LG' => do log10 plot of #
samples, else linear
pixavg 0.0 Estimate of mean noise value
pixstd 0.0 Estimate of true noise rms
< 0 => don't do one
= 0 => 2-passes to get
docat 1.0 Put true RMS in header
ltype 3.0 Type of labeling: 1 border,
2 no ticks, 3 - 6 standard,
7 - 10 only tick labels
<0 -> no date/time
outfile Name of output log file,
No output to file if blank
dotv -1.0 > 0 Do plot on the TV, else
make a plot file
grchan 0.0 Graphics channel 0 => 1.
>>> imean.g # Execute task
IMEAN1: Task IMEAN (release of 31DEC02) begins
IMEAN1: Image= CYG A 74 MHz.MODEL . 1 7 xywind= 1 1 512 512
IMEAN1: Mean and rms found by fitting peak in histogram:
IMEAN1: Mean= 4.8010E-02 Rms= 4.7438E+00 **** from histogram
IMEAN1: Mean and rms found by including all data:
IMEAN1: Mean= 1.8457E+00 Rms= 6.0963E+01 JY/BEAM over 262144 pixels
IMEAN1: Flux density = 1.7080E+04 Jy. beam area = 28.33 pixels
IMEAN1: Minimum=-2.5000E+01 at 59 145 1 1
IMEAN1: Skypos: RA 20 00 55.087 DEC 40 34 45.54
IMEAN1: Maximum= 4.6382E+03 at 252 256 1 1
IMEAN1: Skypos: RA 19 59 30.116 DEC 40 43 57.20
IMEAN1: Skypos: IPOL 73.800 MHZ
IMEAN1: returns adverbs to AIPS
IMEAN1: Appears to have ended successfully
IMEAN1: smeagle 31DEC06 TST: Cpu= 0.0 Real= 0
>>> imean.o # Examine outputs
Adverbs Values Comments
--------------------------------------------------------------------------------
pixavg 0.0480099283159 Estimate of mean noise value
pixstd 4.74377298355 Estimate of true noise rms
< 0 => don't do one
= 0 => 2-passes to get
\end{verbatim}
\subsection{Run an Obit task (FndSou)}
To run Obit task FndSou on an image, x, containing multiple sources to
generate a source catalog (use sf.h for detailed help):
\begin{verbatim}
>>> sf=ObitTask("FndSou")
>>> setname(x,sf)
>>> sf.outDisk=1
>>> sf.NGauss=20 # Max. number of sources (islands)
>>> sf.CutOff=2 # Minimum pixel brightness to consider
>>> sf.Retry=1 # Try multiple components if residuals exceed this
>>> sf.doMult=True # Allow using multiple Gaussians per source
>>> sf.doWidth=True # Fit widths
>>> sf.Parms=[2., 5., 0., 1]
>>> sf.RMSsize=50 # Size of window to use to determine image RMS
>>> sf.prtLv=1 # Some diagnostic output
>>> sf.doVL=True # Generate VL table
>>> sf.i # Display inputs
FndSou: Task to fit Gaussian models to an image by least-squares
Adverbs Values Comments
--------------------------------------------------------------------------------
DataType AIPS FITS" or "AIPS" type of input
inName 1400+208 Image Name (Name) 1
inClass ICLEAN Image Name (Class) 1
inSeq 1 Image Name (Seq. #) 1
inDisk 1 Disk drive # 1
inFITS Filename 1 if FITS image
BLC 0, 0, 0, 0, 0, 0, 0 Bottom left corner of image
0=>entire image
TRC 0, 0, 0, 0, 0, 0, 0 Top right corner of image
0=>entire image
doVL True Convert to VL table?
doPBCorr False PB correction to VL table?
asize 25.0 antenna diam. for PB corr.
doResid False Catalog residual map?
outName Output Image Name
outClass Output Image Class
outSeq 0 Output Image Seq. #
outDisk 1 output Disk drive
outFITS Output Filename if FITS image
NGauss 20 Max. Number of islands
NPass 1 Number of passes through resid.
CutOff 2.0 Flux cutoff level
Retry 1.0 Retry level
Sort Sort Order of output ' '=RA
OutPrint Printer disk file to save
doMult True >0 => fit multiple peaks
doWidth True >0 => fit widths
Gain 0.05 Amp-dependent part of retry
and warning levels
Parms 2.0, 5.0, 0.0, 1.0, 0.0 Components constraints
[0] flux < Parms[0]
[1] widths>Parms[1] cells
[2] peaks>Parms[2] cells
outside fitting region
[3] if >0 don't allow Gauss
smaller than CLEAN beam
RMSsize 50 Size of region to determine RMS
prtLv 1 Debug print level
>>> sf.g # run task
** Message: info : FndSou Begins
** Message: info : Date/Time: 2007-10-11 13:47:51
** Message: info : Found 23 islands pass 1
** Message: info : Successfully fitted 20 components
** Message: info : Attempt to break 0 islands into multiple
** Message: info : 0 Attempts to break islands failed
** Message: info : 0 components rejected for low peak
** Message: info : 0 fits hit iteration limit
Found 23 islands in 1 passes
Successfully fitted 20 components
Attempt to break 0 islands into multiple
0 Attempts to break islands failed
0 components rejected for low peak
0 fits hit iteration limit
...
** Message: info : FndSou Ends
** Message: info : Date/Time: 2007-10-11 13:47:58
\end{verbatim}
\subsection{Table Access (print contents of VL table)}
To create a python object from the VL table created in the previous example
and display its contents using the Catalog module utility PVLPrint:
\begin{verbatim}
>>> import Catalog
>>> vltab=x.NewTable(Table.READONLY, "AIPS VL",1,err)
>>> Catalog.PVLPrint(vltab,x,err)
>>> ShowErr(err) # Display any error messages
Listing of fitted VL table values
Fitted sizes in asec, Peak, Flux, IRMS in mJy, residual values relative to Peak
Error estimates (asec, mJy, deg) given under value
RA Dec Peak Flux IRMS Fit Maj Fit min
1 13 43 56.1032 22 18 21.163 3666.31 4806.81 90.186 104.886 80.000
1.80 1.12 93.92 123.13 3.021 1.822
2 13 48 14.5900 24 15 57.461 5508.17 5917.91 87.870 85.951 80.000
0.82 0.81 89.29 95.93 1.442 1.253
3 13 48 51.8784 26 35 44.011 2742.81 3484.55 77.721 98.355 82.667
1.60 1.69 81.18 103.13 3.143 2.266
4 13 49 39.0137 21 07 29.926 13631.90 13820.04 112.454 81.104 80.000
0.41 0.40 112.83 114.39 0.676 0.658
5 13 50 58.3986 15 51 55.507 2479.43 2733.85 91.272 88.209 80.000
1.94 1.88 93.20 102.76 3.473
...
\end{verbatim}
\subsection{Table Row Data}
In the following example, the header of an AIPS CC (Clean Components)
table is converted to a dict and printed, and the first few rows
are read into python dict structures and printed.
\begin{verbatim}
>>> imDict=x.Desc.Dict
>>> xinc = abs(imDict['cdelt'][0]) # X Cell spacing
>>> yinc = abs(imDict['cdelt'][1]) # Y Cell spacing
>>> cctab=x.NewTable(Table.READONLY,"AIPS CC",1,err)
>>> thead=cctab.Desc.Dict
>>> thead # Display contents of python dict
{'repeat': [1, 1, 1, 1], 'nrow': 114, 'dim1': [1, 1, 1, 1],
'sortOrder2': 0, 'sortOrder1': 0, 'dim2': [1, 1, 1, 1],
'dim0': [1, 1, 1, 1], 'version': 1, 'lrow': 16, 'Table name': 'AIPS CC',
'FieldName': ['FLUX', 'DELTAX', 'DELTAY', '_status'],
'type': [9, 9, 9, 2], 'FieldUnit': ['JY', 'DEGREES', 'DEGREES', '']}
>>> cctab.Open(Table.READONLY,err)
>>> ShowErr(err) # Display any error messages
>>> for i in range(1,5): # Loop over first 4 rows printing
... row = cctab.ReadRow(i, err) # Read row i (1-rel)
... xcell = row["DELTAX"][0]/xinc # X position in cells
... ycell = row["DELTAY"][0]/yinc # Y position in cells
... flux = row["FLUX"][0] # Flux
... print "%5d %5.2f %5.2f %10.2f" % (i,xcell, ycell,flux)
...
1 -16.00 6.00 1260.95
2 -47.00 16.00 646.20
3 -16.00 5.00 626.66
4 -46.00 16.00 527.65
>>> cctab.Close(err) # Close table
\end{verbatim}
\subsection{Writing to a History}
The following example writes a timestamp and a comment into an image's
processing history and then prints the history.
\begin{verbatim}
>>> hi = x.History(Image.READWRITE, err) # Extract history object from image
>>> r=hi.Open(History.READWRITE, err) # Open history
>>> hi.TimeStamp(" Start Obit "+ObitSys.pgmName,err) # Timestamp
>>> r=hi.WriteRec(-1,"Some comment",err) # write comment
>>> r=hi.Close(err) # Close
>>> OErr.printErrMsg(err, "Error with history")# Error test
>>> PrintHistory(x) # Show history
History for AIPS:Image:Cygnus A.J2000.1.1
1 --------------------------------------------------------------------
2 --------------------------------------------------------------------
3 /Begin "HISTORY" information found in fits tape header by IMLOD
...
1553 / 2007-10-11T21:12:11 Start Obit ObitPython
1554 Some comment
\end{verbatim}
\subsection{Modify Visibility Data}
The UV functions ReadVis and WriteVis read and write single visibility
records in the form of python UVVis objects which contain the
following members:
\begin{itemize}
\item {\bf u} u coordinate (lambda)
\item {\bf v} v coordinate (lambda)
\item {\bf w} w coordinate (lambda)
\item {\bf time} Visibility time in days since 0 h on reference day
\item {\bf ant1} antenna 1 of baseline
\item {\bf ant2} antenna 2 of baseline
\item {\bf vis } visibilities as list of tuples (vis, wt) as (complex, float)
\end{itemize}
The visibilities are in the order defined in the data descriptor:
\begin{itemize}
\item {\bf jlocs} 0-rel axis order: Stokes' parameters
\item {\bf incs} Increment in data: Stokes (in floats)
\item {\bf jlocf} 0-rel axis order: Frequency
\item {\bf incf} Increment in data: Frequency (in floats)
\item {\bf jlocif} 0-rel axis order: IF
\item {\bf incif} Increment in data: IF (in floats)
\end{itemize}
The following example uses the UVVis class to read the records in a
UV data file, multiply the complex visibilities by 2.0 and the weights
by 0.5.
To specify data selection, calibration and editing to be applied to
data as it is read, see section \ref{UVData}.
\begin{verbatim}
# Input AIPS file
x = UV.newPAUV("inUV", "RX_Tau", "IF2", 1, 1, True,err, nvis=1)
x.Open(UV.READONLY, err)
OErr.printErrMsg(err, "Error with input image")
# Output AIPS file
y = UV.newPAUV("outUV", "RX_Tau", "Copy", 1, 1, False,err, nvis=1)
UV.PClone(x, y, err)
y.Open(UV.WRITEONLY, err)
OErr.printErrMsg(err, "Error with output image")
# Get information about data
nvis = x.Desc.Dict["nvis"] # Number of visibilities
jstok = x.Desc.Dict["jlocs"] # Order in data of Stokes
nstok = x.Desc.Dict["inaxes"][jstok] # Number of Stokes (polarizations)
stokinc = x.Desc.Dict["incs"]/3 # Increment between Stokes in vis
jfreq = x.Desc.Dict["jlocf"] # Order in data of Frequency
nfreq = x.Desc.Dict["inaxes"][jfreq] # Number of Frequencies
freqinc = x.Desc.Dict["incf"]/3 # Increment between channels in vis
jif = x.Desc.Dict["jlocif"] # Order in data of IF
nif = x.Desc.Dict["inaxes"][jif] # Number of IFs
ifinc = x.Desc.Dict["incif"]/3 # Increment between IFs in vis
# Loop over input file
for i in range(0, nvis):
# read to UVVis
v = x.ReadVis(err)
vlist = v.vis # array of tuples (complex vis, float weight)
# Multiply each vis by two, multiply weight by 0.5
# Loop over IF
for iif in range (0, nif):
# Loop over Frequency channel
for ifreq in range (0, nfreq):
# Loop over Stokes
for istok in range (0, nstok):
indx = istok*stokinc + ifreq*freqinc + iif*ifinc
# Extract visibility tuple
tup = vlist[indx]
vlist[indx] = (2.0*tup[0],tup[1]*0.5) # multiply/replace
# Write data to output
y.WriteVis(v, err)
OErr.printErrMsg(err, "Error copying file")
# Close files
x.Close(err)
y.Close(err)
OErr.printErrMsg(err, "Error closing file")
\end{verbatim}
\subsection{Write Quantized FITS image}
The following example reads an AIPS image and writes an integerized FITS
image with the pixel values truncated at a set fraction of the RMS ``noise''
in the image. This operation creates an image which is more compressible
but with a controlled loss of precision.
Note: in practice it is better to use the ObitTalk function imtab, as
it is simpler to use and will also copy tables; this example is given
to show how to access images in ObitTalk.
\begin{verbatim}
# Specify input and output
inDisk = 1
Aname = "INPUT IMAGE"
Aclass = "CLASS"
Aseq = 1
outDisk = 1
outFile = "Quantized.fits"
# Create Images
inImage = Image.newPAImage("Input image", Aname, Aclass, inDisk, Aseq, True, err)
# Note: inImage can also be created using getname(cno,disk)
outImage = Image.newPFImage("Output image", outFile, outDisk, False, err)
Image.PClone(inImage, outImage, err) # Same structure etc.
OErr.printErrMsg(err, "Error initializing")
# Fraction of RMS
fract = 0.25
# Copy to quantized integer image with history
inHistory = History.History("history", inImage.List, err)
Image.PCopyQuantizeFITS (inImage, outImage, err, fract=fract, inHistory=inHistory)
OErr.printErrMsg(err, "Writing to FITS")
\end{verbatim}
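For comparison, the equivalent (and, in practice, preferred) operation
with imtab, described with the OTObit functions below, reduces to a
single call using the objects and values defined above:
\begin{verbatim}
# Quantized copy to FITS, also copying tables and history
imtab(inImage, outFile, outDisk, err, fract=fract)
\end{verbatim}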
\subsection{Image Gaussian Fitting}
Fitting of Gaussians to an image over a large area can be performed by
task FndSou and over more limited areas using the ImageFit class
function Fit.
This function takes an image and a FitRegion which defines the fitting
area of the image and the initial set of values defining the Gaussians
to be fit.
Image class functions TVFit and GaussFit provide a simplified
interface to the fitting routines.
The following is an example of an interactive model fitting session;
a screen shot of the ObitView window after the fitting region and model
have been specified is shown in figure \ref{ImageFitFig}.
\begin{figure}
\centering
\includegraphics[angle=-90,origin=c,height=3in]{ImageFit.eps}
\caption{
Screenshot of ObitView window specifying fitting region with model.
}
\label{ImageFitFig}
\end{figure}
\begin{verbatim}
>>> # Define image
>>> x=Image.newPAImage("image","3C84","PennAr",1,1,True,err)
>>> # Interactively set fitting region followed by fitting
>>> fr = x.TVFit(x,disp,err)
\end{verbatim}
The image will be loaded to the display; hit the ``edit'' button on
the RequestBox, then specify the region to fit on the display with a
rectangular box, followed by circular boxes to mark the initial
locations and sizes of the Gaussian components; instructions are given
in the ObitView Message Box.
When done, hit ``d'' and then ``OK'' on the bottom of the RequestBox.
Example results:
\begin{verbatim}
Model fit for 3C84
RA 3 19 47.73316 ( 0.518 asec), pixel 131.441 ( 0.259)
Dec 41 30 36.7370 ( 0.594 asec), pixel 116.772 ( 0.297)
Peak Flux density 0.0109 (0.000725) JY/BEAM
Integrated Flux density 0.0164 ( 0.00109) Jy
Fitted Major axis 15.148 ( 1.13) asec, 7.574 ( 0.33) pixels
Fitted Minor axis 11.228 ( 0.661) asec, 5.614 ( 0.33) pixels
Fitted Position angle -36.995 ( 7.84) deg
\end{verbatim}
\begin{verbatim}
Deconvolved model
Deconvolved Major axis 10.8 ( 1.12) asec, 5.385 ( 0.814) pixels
Deconvolved Minor axis 3.55 ( 1.63) asec, 1.776 ( 0.814) pixels
Deconvolved Position angle 143.01 ( 5.49) deg
\end{verbatim}
Image class function GaussFit can be used for noninteractive fitting.
The defaults are generally adequate for a single source near the
reference pixel.
Both TVFit and GaussFit return a FitRegion object.
Additional functionality can be obtained by using the ImageFit
functions directly; first import the relevant modules:
\begin{verbatim}
>>> import ImageFit, FitRegion, FitModel
\end{verbatim}
The ImageFit.Fit function is described in the following:
\begin{verbatim}
Fit(self, err, input={'FluxLow': 0.0, 'GMajLow': 0.0, 'GMajUp': 1e+20,
'GMinLow': 0.0, 'GMinUp': 1e+20, 'MaxIter': 0, 'PosGuard': 0.0,
'fitImage': None, 'fitRegion': None, 'prtLv': 0, ...})
Fit a model to an image
Resultant model left in FitRegion reg
inImageFit = Python ImageFit object
image = ObitImage to be fitted
reg = Fit region defining what is to be fitted and initial guess
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
fitImage Image to be fitted
fitRegion FitRegion to be fitted
MaxIter int Maximum number of iterations [def. 10 per fitted parameter]
prtLv int Message level, 0=>none [def 0]
PosGuard float Distance (cells) from edge to allow center [def no bound]
FluxLow float Lower bounds on Flux density [def no bound]
GMajUp float Major axis upper bound (cells) [def no bound]
GMajLow float Major axis lower bound (cells) [def no bound]
GMinUp float Minor axis upper bound (cells) [def no bound]
GMinLow float Minor axis lower bound (cells) [def no bound]
\end{verbatim}
A FitRegion can be created interactively using the image viewer and
FitRegion.PSetup():
\begin{verbatim}
PSetup(inImage, disp, err)
Interactive initial definition of fitting region
Interactively allows the user to set the region of the image
to be fitted and the initial model.
The fitting region is first specified with a rectangular window
and then the initial models to be fitted with circular windows.
Returns FitRegion, leaves image pixel array on inImage
image = image to be fitted
disp = image display to use
err = Obit Error/message stack
\end{verbatim}
Fitted models can then be viewed on the screen or written to a file by
FitRegion.Print()
\begin{verbatim}
Print(self, ImDesc, file=None)
Display human readable contents
self = object with Model to display
ImDesc = Image Descriptor with Beam, etc.
file = if present, the name of a file into which to write
the information rather than displaying it on the screen
\end{verbatim}
or they can be accessed in python through the array of FitModel objects
in the FitRegion.
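Putting these pieces together, a noninteractive session might look like
the following sketch; the ImageFit constructor call and the hand-built
input dictionary are assumptions based on the Fit description above, so
treat this as illustrative rather than definitive.
\begin{verbatim}
>>> import ImageFit, FitRegion, FitModel
>>> # Define fitting region and initial model on the display
>>> fr = FitRegion.PSetup(x, disp, err)
>>> imf = ImageFit.ImageFit("fitter")        # assumed constructor name
>>> inp = {'fitImage':x, 'fitRegion':fr,     # documented input entries
...        'MaxIter':100, 'prtLv':1, 'PosGuard':0.0, 'FluxLow':0.0,
...        'GMajUp':1.0e20, 'GMajLow':0.0, 'GMinUp':1.0e20, 'GMinLow':0.0}
>>> imf.Fit(err, input=inp)                  # fit; results left in fr
>>> fr.Print(x.Desc)                         # display the fitted model
\end{verbatim}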
\subsection{Subtract a CLEAN model from UV Data}
The following python script fragment subtracts the Fourier transform
of a CLEAN model, multiplied by 0.5, from one uv data set and writes
another.
Several steps are necessary to create a SkyModel from an image mosaic
containing a single image.
Then, control parameters are entered into the input dict for
SkyModel.PSubUV which is used to perform the operation.
The input and output data are all FITS files with names inFile,
inModel, outFile on FITS ``disks'' inDisk and outDisk.
Note: this operation is also performed by task UVSub.
\begin{verbatim}
import SkyModel, ImageMosaic
# Set data
inData = UV.newPFUV("Input uv data", inFile, inDisk, True, err)
inImage = Image.newPFImage("Input image",inModel, inDisk, True, err)
outData = UV.newPFUV("Output uv data", outFile, outDisk, False, err)
OErr.printErrMsg(err, "Error initializing")
# Make Image Mosaic with a single image
mosaic = ImageMosaic.newObit("Mosaic", 1, err)
OErr.printErrMsg(err, "Error making mosaic")
# Add image to mosaic
ImageMosaic.PSetImage(mosaic, 0, inImage)
# Make SkyModel from mosaic
model = SkyModel.PCreate("SkyModel", mosaic)
OErr.printErrMsg(err, "Error making SkyModel")
# Control parameters to input dict, most defaulted
Input = SkyModel.UVSubInput
Input['InData'] = inData # Input uv data
Input['SkyModel'] = model # SkyModel
Input['OutData'] = outData # output uv data
Input['doCalSelect'] = False # No calibration or data selection
Input['Stokes'] = ' ' # No conversion of Stokes
Input['Factor'] = 0.5 # Multiply model FT by 0.5
Input['Mode'] = 0 # Fastest FT type (DFT or Grid)
Input['Type'] = 0 # Use CLEAN model from CC table
Input['CCVer'] = [2] # Use CC table 2 (array of 1 per image)
# Subtract Fourier transform of sky model from inData, write outData
SkyModel.PSubUV(err, Input)
OErr.printErrMsg(err, "Error subtracting")
\end{verbatim}
\section{Obit classes and utility packages with python interfaces}
There are a number of Obit functions with high level python
interfaces. To see more details import and view the help for each:
\begin{verbatim}
>>> import History
>>> help(History)
\end{verbatim}
Obit/AIPS/Radio Interferometry/Image classes and utilities
\begin{itemize}
\item {\bf AIPSDir} AIPS directory class
\item {\bf CArray} Complex array class
\item {\bf Catalog} Source catalog class
\item {\bf CleanImage} Image CLEAN
\item {\bf CleanVis} Visibility based CLEAN
\item {\bf ConvUtil} Image convolution utilities
\item {\bf FArray} float array class
\item {\bf FArrayUtil} FArray utilities
\item {\bf FeatherUtil} Image feathering utilities
\item {\bf FFT} Fast Fourier Transform class
\item {\bf FInterpolate} Float array interpolator
\item {\bf FITSDir} FITS directory routines
\item {\bf FitModel} Source fitting model
\item {\bf FitRegion} Source fitting region
\item {\bf History} History class
\item {\bf ImageDesc} Image Descriptor (header)
\item {\bf ImageMosaic} Image Mosaic class
\item {\bf Image} Image class
\item {\bf ImageFit} Image fitting class
\item {\bf ImageUtil} Image utilities
\item {\bf InfoList} Obit associative array for control info
\item {\bf IonCal} Ionospheric calibration
\item {\bf MergeCal} Partial fix for screwed up VLBA cal. data
\item {\bf MosaicUtil} Image mosaicing utilities
\item {\bf OData} Base Data (image, UV, OTF) class
\item {\bf ODisplay} Interface to ObitView display
\item {\bf OErr} Obit message/error class
\item {\bf OPlot} Plotting interface
\item {\bf OSystem} Obit System class
\item {\bf OWindow} (CLEAN) image window class
\item {\bf ParserUtil} Obit task input/output file parser
\item {\bf SkyGeom} Celestial geometry
\item {\bf SkyModel} Sky model class
\item {\bf SkyModelVMBeam} Tabulated beam Sky model class
\item {\bf SkyModelVMIon} Ionospheric Sky Model class
\item {\bf SpectrumFit} Spectrum fitting class
\item {\bf TableDesc} Table descriptor (header) class
\item {\bf TableList} Table list for data object (Image, UVData, OTF)
\item {\bf Table } Table class
\item {\bf TableUtil} Table utilities
\item {\bf TableSTar} manipulate AIPS STar tables
\item {\bf TaskWindow} Task message window class
\item {\bf TimeFilter} Time filtering class
\item {\bf UVDesc} UV data descriptor (header)
\item {\bf UVGSolve} UV gain solutions
\item {\bf UVImager} UV data imager class
\item {\bf UV } UV data class
\item {\bf UVRFIXize} RFI Excision class
\item {\bf UVSelfCal} UV Self calibration class
\item {\bf UVSoln2Cal} UV SN to CL table routines.
\item {\bf UVVis} UV visibility access class
\item {\bf VLACal} VLA calibration/pipeline utilities
\item {\bf ZernikeUtil} Zernike polynomial utilities
\end{itemize}
Single dish/OTF imaging classes and utilities.
These require the ObitSD python directory in the PYTHONPATH.
\begin{itemize}
\item {\bf CCBUtil} GBT CCB utility package
\item {\bf CleanOTF} Single dish (Hogbom) CLEAN
\item {\bf CleanOTFRec} Single dish record based CLEAN
\item {\bf GBTDCROTF} Convert GBT DCR data to OTF format
\item {\bf GBTUtil} Utilities for GBT data
\item {\bf OTFDesc} OTF Descriptor
\item {\bf OTFGetAtmCor} OTF Atmospheric correction utilities
\item {\bf OTFGetSoln} OTF calibration solution utilities
\item {\bf OTF} OTF (``On the Fly'') data
\item {\bf OTFRec} OTF record access class
\item {\bf OTFSoln2Cal} Utilities to convert OTF solutions to calibration tables
\item {\bf OTFUtil} OTF Utilities
\item {\bf PARUtil} Utilities for GBT Mustang (Penn Array) data
\end{itemize}
\section {OTObit Functions}
The following are functions available from OTObit, all of which are
automatically imported when ObitTalk is started.
\subsection{AIPSHelp}
\begin{verbatim}
AIPSHelp(Task)
Give Help for AIPS task Task
Task = AIPSTask name to give (e.g. "IMEAN")
\end{verbatim}
\subsection{AllDest}
\begin{verbatim}
AllDest(disk=None, Atype=' ', Aname=' ', Aclass=' ', Aseq=0)
Delete AIPS files matching a pattern
Strings use AIPS wild cards:
blank => any
'?' => one of any character
"*" => arbitrary string
disk = AIPS disk number, 0=>all
Atype = AIPS entry type, 'MA' or 'UV'; ' ' => all
Aname = desired AIPS name, using AIPS wildcards, None -> don't check
Aclass = desired AIPS class, using AIPS wildcards, None -> don't check
Aseq = desired AIPS sequence, 0=> any
\end{verbatim}
\subsection{AMcat}
\begin{verbatim}
AMcat(disk=1, first=1, last=1000)
Catalog listing of AIPS Image files on disk disk
Strings use AIPS wild cards:
blank => any
'?' => one of any character
"*" => arbitrary string
If giveList then return list of CNOs
disk = AIPS disk number to list
first = lowest slot number to list
last = highest slot number to list
Aname = desired name, using AIPS wildcards, None -> don't check
Aclass = desired class, using AIPS wildcards, None -> don't check
Aseq = desired sequence, 0=> any
giveList = If true, return list of CNOs matching
\end{verbatim}
\subsection{AUcat}
\begin{verbatim}
AUcat(disk=1, first=1, last=1000)
Catalog listing of AIPS UV data files on disk disk
Strings use AIPS wild cards:
blank => any
'?' => one of any character
"*" => arbitrary string
If giveList then return list of CNOs
disk = AIPS disk number to list
first = lowest slot number to list
last = highest slot number to list
Aname = AIPS desired name, using AIPS wildcards, None -> don't check
Aclass = AIPS desired class, using AIPS wildcards, None -> don't check
Aseq = AIPS desired sequence, 0=> any
giveList = If true, return list of CNOs matching
\end{verbatim}
\subsection{Acat}
\begin{verbatim}
Acat(disk=1, first=1, last=1000)
Catalog listing of AIPS files on disk disk
The class remembers the last disk accessed
Strings use AIPS wild cards:
blank => any
'?' => one of any character
"*" => arbitrary string
If giveList then return list of CNOs
disk = AIPS disk number to list
first = lowest slot number to list
last = highest slot number to list
Aname = desired AIPS name, using AIPS wildcards, None -> don't check
Aclass = desired AIPS class, using AIPS wildcards, None -> don't check
Aseq = desired AIPS sequence, 0=> any
giveList = If true, return list of CNOs matching
\end{verbatim}
\subsection{ClearErr}
\begin{verbatim}
ClearErr(err=<C OErr instance>)
Print any errors and clear stack
err = Python Obit Error/message stack, default is OTObit version
\end{verbatim}
\subsection{Fdir}
\begin{verbatim}
Fdir(disk=None, dir=None)
Catalog listing of FITS files on disk disk
The class remembers the last disk accessed
disk = FITS disk number to list
dir = relative or abs. path of directory, def. = cwd
Only used if disk == 0
\end{verbatim}
\subsection{ObitHelp}
\begin{verbatim}
ObitHelp(Task)
Give Help for OBIT task Task
Task = ObitTask name to give (e.g. "Feather")
\end{verbatim}
\subsection{PrintHistory}
\begin{verbatim}
PrintHistory(ObitObj, hiStart=1, hiEnd=1000000, file=None)
Display history log or write to file
Reads selected history records and displays with "more"
ObitObj = Python Obit object with history
err = Python Obit Error/message stack
hiStart = if given the first (1-rel) history record
hiEnd = if given the highest (1-rel) history record
file = if present, the name of a file into which to write
the history rather than displaying it on the screen
\end{verbatim}
\subsection{ShowErr}
\begin{verbatim}
ShowErr(err=<C OErr instance>)
Print any errors and clear stack
err = Python Obit Error/message stack, default of OTObit version
\end{verbatim}
\subsection{alldest}
\begin{verbatim}
alldest(Aname='.*', Aclass='.*', Atype='.?', Adisk=0, Aseq=0, test=False)
Delete AIPS files matching a pattern
Uses regular expression matching for strings
Note: "+" values are escaped
Clears any status before deleting
Aname = AIPS file name , " " => any
Aclass = AIPS class name, " " => any
Atype = 'MA', 'UV' or any
Adisk = AIPS disk number, 0=> any
Aseq = AIPS sequence number; 0=> any
test = if true only list and not delete
\end{verbatim}
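For example, to list (without deleting) the AIPS images on disk 1 whose
names begin with ``TEMP'':
\begin{verbatim}
>>> alldest(Aname='TEMP.*', Atype='MA', Adisk=1, test=True)
\end{verbatim}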
\subsection{altswitch}
\begin{verbatim}
altswitch(inImage)
Switch frequency and velocity
Algorithm lifted from AIPS AU7.FOR
inImage = Python Image object, created with getname, getFITS
\end{verbatim}
\subsection{clearstat}
\begin{verbatim}
clearstat(o, code=4)
Clear status of AIPS catalog entry
Clears AIPS status of object o,
Optionally sets status using code parameter
o = Obit AIPS Data object
code = status code:
0 = Add write status
1 = Clear write status
2 = Increment Read Status
3 = Decrement Read Status
4 = Clear All Status
\end{verbatim}
\subsection{copyInputs}
\begin{verbatim}
copyInputs(inTask, outTask)
Copy values from one task object to another
Copies parameter values from inTask to outTask which are in both the
inTask and outTask _input_list.
Need not be the same task.
inTask = Task object to copy from
outTask = Task object to copy to
\end{verbatim}
\subsection{day2dhms}
\begin{verbatim}
day2dhms(tim)
convert a time in days to a string as d/hh:mm:ss.s
Returns time as string: "d/hh:mm:ss.s"
tim time in days
\end{verbatim}
\subsection{dhms2day}
\begin{verbatim}
dhms2day(st)
convert a time string in d/hh:mm:ss.s to days
Returns time in days
st time string as "d/hh:mm:ss.s"
\end{verbatim}
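For example, 12 hours is half a day (the exact output formatting may
differ slightly):
\begin{verbatim}
>>> dhms2day("0/12:00:00.0")   # -> 0.5 days
>>> day2dhms(0.5)              # -> "0/12:00:00.0" or similar
\end{verbatim}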
\subsection{explain}
\begin{verbatim}
explain(TaskObj)
Give explanation for a task if available
TaskObj = Task object whose inputs to list
\end{verbatim}
\subsection{getFITS}
\begin{verbatim}
getFITS(file, disk=1, Ftype='Image')
Return Obit object for FITS file in file on disk
file = FITS file name
disk = FITS disk number
Ftype = FITS data type: 'Image', 'UV'
\end{verbatim}
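For example, to create an Image object for a FITS file (the file name
here is only a placeholder) in the directory assigned to FITS disk 1:
\begin{verbatim}
>>> y = getFITS("myimage.fits", 1, Ftype='Image')
\end{verbatim}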
\subsection{getname}
\begin{verbatim}
getname(cno, disk=1)
Return Obit object for AIPS file in cno on disk
cno = AIPS catalog slot number
disk = AIPS disk number
\end{verbatim}
\subsection{go}
\begin{verbatim}
go(TaskObj, MsgBuf=False, URL="http://localhost:8777/RPC2")
Execute task
Returns TaskWindow object if run asynchronously (doWait=True)
or the task message log if run synchronously (doWait=False)
The wait() function on the TaskWindow will hang until the task finishes
TaskObj = Task object to execute
If doWait member is true run synchronously,
else run with messages in a separate Message window
MsgBuf = if true and TaskObj.doWait=False run asynchronously
using a TaskMsgBuffer
URL = URL of ObitMess message server if MsgBuf=False
\end{verbatim}
\subsection{imhead}
\begin{verbatim}
imhead(ObitObj)
List header
ObitObj = Obit or ObitTalk data object
\end{verbatim}
\subsection{imlod}
\begin{verbatim}
imlod(filename, inDisk, Aname, Aclass, Adisk, Aseq, err)
Load FITS Image data to AIPS
Read an ImageTAB FITS Image data file and write an AIPS data set
filename = name of FITS file
inDisk = FITS directory number
Aname = AIPS name of file
Aclass = AIPS class of file
Aseq = AIPS sequence number of file
Adisk = AIPS disk number
err = Python Obit Error/message stack
returns AIPS Image data object
\end{verbatim}
\subsection{imstat}
\begin{verbatim}
imstat(inImage, blc=[1, 1, 1, 1, 1], trc=[0, 0, 0, 0, 0])
Get statistics in a specified region of an image plane
Returns dictionary with statistics of selected region with entries:
Mean = Mean value
RMSHist = RMS value from a histogram analysis
RMS = Simple RMS value
Max = maximum value
MaxPos = pixel of maximum value
Min = minimum value
MinPos = pixel of minimum value
inImage = Python Image object, created with getname, getFITS
\end{verbatim}
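A sketch of typical usage, selecting an inner window of an image plane
and examining the returned statistics:
\begin{verbatim}
>>> stats = imstat(x, blc=[100,100,1,1,1], trc=[400,400,1,1,1])
>>> print stats["Mean"], stats["RMS"]
\end{verbatim}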
\subsection{imtab}
\begin{verbatim}
imtab(inImage, filename, outDisk, err, fract=None, quant=None,
exclude=['AIPS HI', 'AIPS PL', 'AIPS SL'], include=['AIPS CC'],
      headHi=False)
Write Image data as FITS file
Write an Image data set as an integer FITAB format file
History written to header
inImage = Image data to copy
filename = name of FITS file
outDisk = FITS directory number
err = Python Obit Error/message stack
fract = Fraction of RMS to quantize
quant = quantization level in image units, has precedence over fract
None or <= 0 => use fract.
exclude = List of table types NOT to copy
NB: "AIPS HI" isn't really a table and gets copied anyway
include = List of table types to copy
headHi = if True move history to header, else leave in History table
returns FITS Image data object
\end{verbatim}
\subsection{inputs}
\begin{verbatim}
inputs(TaskObj)
List task inputs
TaskObj = Task object whose inputs to list
\end{verbatim}
\subsection{newDisplay}
\begin{verbatim}
newDisplay(port=8765, URL=None)
Recreate display to another display server
port = port number on local machine
URL = Full URL (e.g. http://localhost:8765/RPC2)
\end{verbatim}
\subsection{setname}
\begin{verbatim}
setname(inn, out)
Copy file definition from inn to out as in...
Supports both FITS and AIPS
Copies Data type and file name, disk, class etc
inn = Obit data object, created with getname, getFITS
out = ObitTask object,
\end{verbatim}
\subsection{set2name}
\begin{verbatim}
set2name(in2, out)
Copy file definition from in2 to out as in2...
Supports both FITS and AIPS
Copies Data type and file name, disk, class etc
in2 = Obit data object, created with getname, getFITS
out = ObitTask object,
\end{verbatim}
\subsection{set3name}
\begin{verbatim}
set3name(in3, out)
Copy file definition from in3 to out as in3...
Supports both FITS and AIPS
Copies Data type and file name, disk, class etc
in3 = Obit data object, created with getname, getFITS
out = ObitTask object,
\end{verbatim}
\subsection{set4name}
\begin{verbatim}
set4name(in4, out)
Copy file definition from in4 to out as in4...
Supports both FITS and AIPS
Copies Data type and file name, disk, class etc
in4 = Obit data object, created with getname, getFITS
out = ObitTask object,
\end{verbatim}
\subsection{setoname}
\begin{verbatim}
setoname(inn, out)
Copy file definition from inn to out as outdisk...
Supports both FITS and AIPS
Copies Data type and file name, disk, class etc
inn = Obit data object, created with getname, getFITS
out = ObitTask object,
\end{verbatim}
\subsection{setwindow}
\begin{verbatim}
setwindow(w, out)
Set BLC and TRC members on out from OWindow w
Uses first window in first field on w which must be a rectangle
This may be set interactively using tvlod
w = OWindow object
out = ObitTask object, BLC and TRC members [0] and [1] are modified
\end{verbatim}
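A sketch of the intended flow: create a window for an image, set it
interactively on the display with tvlod, and copy the result to an Obit
task object (here the FndSou object sf from the earlier example):
\begin{verbatim}
>>> w = window(x)     # OWindow object for image x
>>> tvlod(x, w)       # set a rectangular window interactively
>>> setwindow(w, sf)  # copy its BLC/TRC to Obit task object sf
\end{verbatim}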
\subsection{tabdest}
\begin{verbatim}
tabdest(ObitObj, tabType, tabVer)
Delete a table
Deletes associated tables
ObitObj = Python Obit object with tables
tabType = Table type, NB AIPS tables names start with "AIPS "
e.g. "AIPS CC"
tabVer = table version, 0=> highest, <0 => all
\end{verbatim}
\subsection{tget}
\begin{verbatim}
tget(inn, file=None)
Restore task object from disk
Restore values in task object
inn = task name, or a task object of the desired type
in the latter case, the input object will NOT be modified
file = optional file name, the default is <task_name>.pickle
in the current working directory
\end{verbatim}
\subsection{tput}
\begin{verbatim}
tput(to, file=None)
save task object
save values in task object
to = task object to save
file = optional file name, the default is <task_name>.pickle
in the current working directory
\end{verbatim}
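For example, to save the inputs of the FndSou task object from the
earlier example and restore them in a later session (assuming, as the
description above suggests, that tget returns the restored object):
\begin{verbatim}
>>> tput(sf)             # save inputs to FndSou.pickle in the cwd
>>> # ... later, possibly in a new ObitTalk session ...
>>> sf = tget("FndSou")  # restore the saved task object
\end{verbatim}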
\subsection{tvlod}
\begin{verbatim}
tvlod(image, window=None)
display image
image = Obit Image, created with getname, getFITS
window = Optional window for image to edit
\end{verbatim}
\subsection{tvstat}
\begin{verbatim}
tvstat(inImage)
Set region in an image using the display and tell mean, rms
Returns dictionary with statistics of selected region with entries:
Mean = Mean value
RMSHist = RMS value from a histogram analysis
RMS = Simple RMS value
Max = maximum value
MaxPos = pixel of maximum value
Min = minimum value
MinPos = pixel of minimum value
inImage = Python Image object, created with getname, getFITS
\end{verbatim}
\subsection{uvTabSave}
\begin{verbatim}
uvTabSave(inUV, filename, outDisk, err, \
exclude=['AIPS HI', 'AIPS_AN', 'AIPS FQ', 'AIPS PL', 'AIPS SL'],\
include=[])
Write UV data tables (but not data) to a FITS file
Write tables associated with UV data set as a FITAB format file
History written to header
inUV = UV data to copy
filename = name of FITS file
outDisk = FITS directory number
err = Python Obit Error/message stack
exclude = List of table types NOT to copy
NB: "AIPS HI" isn't really a table and gets copied anyway
include = List of table types to copy (FQ, AN always done )
returns FITS UV data object
\end{verbatim}
\subsection{uvlod}
\begin{verbatim}
uvlod(filename, inDisk, Aname, Aclass, Adisk, Aseq, err)
Load FITS UV data to AIPS
Read a UVTAB FITS UV data file and write an AIPS data set
filename = name of FITS file
inDisk = FITS directory number
Aname = AIPS name of file
Aclass = AIPS class of file
Aseq = AIPS sequence number of file
Adisk = AIPS disk number
err = Python Obit Error/message stack
returns AIPS UV data object
\end{verbatim}
\subsection{uvtab}
\begin{verbatim}
uvtab(inUV, filename, outDisk, err, compress=False,
exclude=['AIPS HI', 'AIPS AN', 'AIPS FQ', 'AIPS SL', 'AIPS PL'],
include=[], headHi=False)
Write UV data as FITS file
Write a UV data set as a FITAB format file
History written to header
inUV = UV data to copy
filename = name of FITS file
outDisk = FITS directory number
err = Python Obit Error/message stack
exclude = List of table types NOT to copy
NB: "AIPS HI" isn't really a table and gets copied anyway
include = List of table types to copy (FQ, AN always done )
Exclude has precedence over include
headHi = if True move history to header, else leave in History table
returns FITS UV data object
\end{verbatim}
\subsection{window}
\begin{verbatim}
window(image)
Make a window object for an image
Returns OWindow object
image = Obit image object
\end{verbatim}
\subsection{zap}
\begin{verbatim}
zap(o)
Zap object o
Delete Image, UV or OTF data files
Removes all external components (files)
o = Obit Data object to delete
\end{verbatim}
% Bibliography if any
%\bibliographystyle{aa} % style aa.bst
%\bibliography{Report}
\section{OTObit Data}
The OTObit environment contains a number of useful pieces of
information concerning your current session.
These are all imported into the scripting or interactive
environment at startup.
\begin{verbatim}
AIPSdisks = ['/usr/AIPS/DATA/GOLLUM_1', '/usr/AIPS/DATA/GOLLUM_2', '/u...
Adisk = 1
FITSdisks = ['/usr/AIPS/FITS']
Fdisk = 1
ObitSys = <C OSystem instance>
dir = None
disp = <C ODisplay instance> ObitView
dsk = 'DA10'
err = <C OErr instance>
nAIPS = 8
nFITS = 1
popsno = 1
userno = 103
\end{verbatim}
\section{Remote Usage\label{Remote}}
In order to run tasks or scripts or access data on a remote machine,
an ObitTalkServer must be running on the remote host and the client
ObitTalk must be told the URL of the remote server and the list of
directory names on the remote host.
\subsection{ObitTalkServer}
The target host machine must have the AIPS and Obit systems installed.
Remote access is provided through an ObitTalkServer process which can
be started once the initial AIPS processes have been run to define the
standard AIPS directories.
Note: this does NOT include the AIPS data directories \$DA01 ....
By default ObitTalkServer listens on port 8000, although this can
be modified in the ObitTalkServer script.
The xmlrpc URL of this server process is then
'http://mymachine.org:8000/RPC2' where mymachine.org is a suitable
network name for the host.
The host must allow client access to port 8000.
An example of creating a remote AIPSImage is:
\begin{verbatim}
>>> ai=AIPSImage("3C43","PCube",disk,1)
\end{verbatim}
This can then be displayed on a running ObitView by either:
\begin{verbatim}
>>> tvlod(ai)
\end{verbatim}
to display on the current ObitView display, or
\begin{verbatim}
>>> ai.display(url)
\end{verbatim}
where url is the optional url of an ObitView server.
Note: if url is not specified, the default (local) ObitView server
display will be used; this is seldom the desired effect, so you should
normally use the second form and give the url of your ObitView as seen
by the remote server.
\subsection{Remote data directories\label{remote_data}}
The set of AIPS data directories on a machine depends on a number of
factors: the login name, user number, and system configuration files,
as well as command line arguments.
Due to this complexity, the current configuration of ObitTalk does not
allow automated discovery of these directories and they must be
supplied explicitly.
After the ObitTalk startup has initialized the local data directories,
remote AIPS directories can be defined:
\begin{verbatim}
>>> url = 'http://mymachine.org:8000/RPC2'
>>> dirname = '/export/data_1/aips/DATA/MINE_1'
>>> disk = len(AIPS.AIPS.disks)
>>> AIPS.AIPS.disks.append(AIPS.AIPSDisk(url, disk, dirname))
\end{verbatim}
This directory will then be accessible as disk number disk.
Note: to define an additional local AIPS disk, set url to None.
The function AIPSCat(disk) will give a listing of this
directory; tasks and the AIPSUVData and AIPSImage classes can access
data in these directories.
For a task to use remote data, all ``disks'' specified must be on the
same host.
Disk numbers on the task object will automatically be translated to
the local numbers on the remote host.
Note: ObitTalk uses disks to determine where a task is to be run so NO
disk numbers may be defaulted.
Example usage follows:
\begin{verbatim}
>>> url='http://192.168.1.140:8000/RPC2'
>>> dirname='/export/data_1/aips/DATA/VINO_1'
>>> disk = len(AIPS.AIPS.disks)
>>> AIPS.AIPS.disks.append(AIPS.AIPSDisk(url, disk, dirname))
>>> t=ObitTask("Template")
>>> t.DataType='AIPS'
>>> t.inDisk=disk
>>> t.inName='0319+415'
>>> t.inClass='IClean'
>>> t.inSeq=1
>>> t.g
[1, '** Message: info : TEMPLATE Begins']
[1, '** Message: info : TEMPLATE: mean -0.000005 RMS 0.000736']
[1, '** Message: info : TEMPLATE Ends']
\end{verbatim}
or an AIPS task:
\begin{verbatim}
>>> AIPS.AIPS.disks.append(AIPS.AIPSDisk(url, disk, dirname))
>>> im=AIPSTask("imean")
>>> im.indisk=disk
>>> im.inname='0319+415'
>>> im.inclass='IClean'
>>> im.inseq=1
>>> im.g
IMEAN1: Task IMEAN (release of 31DEC05) begins
IMEAN1: Initial guess for PIXSTD taken from ACTNOISE inheader
IMEAN1: Image= 0319+415 .IClean. 1 1 xywind= 1 1 397 397
IMEAN1: Mean and rms found by fitting peak in histogram:
IMEAN1: Mean=-1.7323E-05 Rms= 7.2413E-04 **** from histogram
IMEAN1: Mean and rms found by including all data:
IMEAN1: Mean=-4.8774E-06 Rms= 7.3894E-04 JY/BEAM over 20441 pixels
IMEAN1: Flux density = -5.3379E-03 Jy. beam area = 18.68 pixels
IMEAN1: Minimum=-2.4419E-03 at 397 350 1 1
IMEAN1: Skypos: RA 03 20 09.53788 DEC 41 26 27.4046
IMEAN1: Maximum= 2.8951E-03 at 300 378 1 1
IMEAN1: Skypos: RA 03 20 12.14383 DEC 41 26 35.8283
IMEAN1: Skypos: IPOL 4860.100 MHZ
IMEAN1: returns adverbs to AIPS
IMEAN1: Appears to have ended successfully
IMEAN1: vino 31DEC05 TST: Cpu= 0.0 Real= 0
\end{verbatim}
Note: since the task definition is likely obtained from the client host, be
sure the versions of Obit and AIPS are compatible.
\subsection{ObitScript class}
Any file containing python instructions can be fed to ObitTalk as a
command line argument in a non-interactive session.
Scripts can also be used in interactive sessions using the ObitScript class.
The ObitScript class allows defining scripts that can be executed
either locally or remotely on a host with a running ObitTalkServer.
Scripts are similar to tasks and share many properties like
synchronous or asynchronous operation.
Scripts may use all Obit classes with python bindings for data local
to the host on which they are executing, and have all the task and
remote data access that is available interactively.
Note: before a script can be run on a remote machine, the AIPS data directories
on the remote host must be entered into the list of disks as described above.
Scripts are text strings containing valid commands.
Note: the script must follow python indentation rules; a backslash-n
(``\textbackslash n'') indicates a line break.
Scripts can be supplied as simple strings, a list of strings or the
name of a file containing the text of the script.
An example usage follows:
\begin{verbatim}
>>> import ObitScript
>>> script = \
... 'im=Image.newPAImage("image","0900+398III","IClean",1,23,True,err)\n'+ \
... 'im.Header(err)\n'
>>> s=ObitScript.ObitScript("myScript", script=script)
>>> s.i # Show script text
Listing of script myScript
im=Image.newPAImage("image","0900+398III","IClean",1,23,True,err)
im.Header(err)
>>> s.g
** Message: info : myScript Begins
User 100
AIPS Image Name: 0900+398III Class: IClean seq: 23 disk: 1
Object: 0900+398
Observed: 2005-04-04 Telescope: VLA Created: 2007-02-09
Observer: AP452 Instrument: VLA
Minimum = -0.74624 Maximum = 33.584 JY/BEAM
--------------------------------------------------------------
Type Pixels Coord value at Pixel Coord incr Rotat
RA---SIN 256 9 9 33.38948 129.00 -20 0.00
DEC--SIN 256 42 53 47.3748 129.00 20 0.00
FREQ 1 7.3794e+07 1.00 1.46484e+06 0.00
STOKES 1 IPol 1.00 1 0.00
--------------------------------------------------------------
Coordinate equinox 2000.0 Coordinate epoch 2000.00
Observed RA 9 0 0.00000 Observed Dec 39 47 60.0000
Phase shifted in X 1.836 in Y 3.096
no. Comp 1
Clean Beam 76.3171 x 71.8424 asec, PA -68.5 deg.
Rest freq 0 Vel type: Observer, wrt Optical
Alt ref value 0 wrt pixel 0.00
Maximum version number of AIPS CC tables is 1
Maximum version number of AIPS HI tables is 1
** Message: info : myScript Ends
\end{verbatim}
The execution of a script is done by wrapping the script in Obit
initialization and shutdown code and writing it to a disk file in /tmp
where it is fed as the command line input to ObitTalk.
If the ObitScript object member debug is set to True then a copy
of the script file will be saved.
The following describes the ObitScript class and can be obtained
online by:
\begin{verbatim}
>>> help(ObitScript)
\end{verbatim}
\begin{verbatim}
DESCRIPTION
This module provides the ObitScript class.
This class allows running Obit/python scripts either
locally or remotely
ObitScripts are derived from Task and share most of its execution properties.
In particular, ObitScripts can be executed either locally or remotely.
In this context a script is a character string containing a sequence of
ObitTalk or other python commands and may be included when the script
object is created or attached later.
An example:
script="import OSystem
print 'Welcome user',OSystem.PGetAIPSuser()
"
CLASSES
ObitScriptMessageLog
Task.Task(MinimalMatch.MinimalMatch)
ObitScript
class ObitScript(Task.Task)
This class implements running Obit/python Script
The ObitScript class handles client-side script related operations.
Actual script operations are handled by server-side proxies.
For local operations, the server-side functionality is
implemented in the same address space but remote operation is
through an xmlrpc interface.
An ObitScript has an associated proxy, either local or remote.
A proxy is a module with interface functions,
local proxies are class modules from subdirectory Proxy with the
same name (i.e. ObitScript) and the server functions are implemented
there. Remote proxies are specified by a URL and a proxy from the
xmlrpclib module is used.
Method resolution order:
ObitScript
Task.Task
MinimalMatch.MinimalMatch
Methods defined here:
__call__(self)
__getattr__(self, name)
__init__(self, name, **kwds)
Create ObitScript task object
Creates Script Object.
name = name of script object
Optional Keywords:
script = Script to execute as string or list of strings
file = Name of text file containing script
URL = URL on which the script is to be executed
Default = None = local execution
AIPSDirs = List of AIPS directories on URL
Default = current AIPS directories on url
FITSDirs = List of FITS directories on URL
Default = current FITS directories on url
AIPSUser = AIPS user number for AIPS data files
Default is current
version = AIPS version string, Default = current
Following is a list of class members:
url = URL of execution server, None=Local
proxy = Proxy for URL
script = Script as text string
userno = AIPS user number
AIPSDirs = List of AIPS directories on URL
FITSDirs = List of FITS directories on URL
AIPSUser = AIPS user number for AIPS data files
version = AIPS version string
_message_list = messages from Script execution
__setattr__(self, name, value)
abort(self, proxy, tid, sig=15)
Abort the script specified by PROXY and TID.
Calls abort function for task tid on proxy.
None return value
proxy = Proxy giving access to server
tid = Task id in pid table of process to be terminated
sig = signal to sent to the task
explain(self)
List script
feed(self, proxy, tid, banana)
Feed the script a BANANA.
Pass a message to a running script's stdin
proxy = Proxy giving access to server
tid = Script task id in pid table of process
banana = text message to pass to script input
finished(self, proxy, tid)
Determine if script has finished
Determine whether the script specified by PROXY and TID has
finished.
proxy = Proxy giving access to server
tid = Task id in pid table of process
go(self)
Execute the script.
Writes task input parameters in the task parameter file and
starts the task synchronously returning only when the task
terminates. Messages are displayed as generated by the task,
saved in an array returned from the call and, if the task
member logFile is set, written to this file.
help(self)
List script.
inputs(self)
List script
messages(self, proxy=None, tid=None)
Return task messages
Returns list of messages and appends them to the object's
message list.
proxy = Proxy giving access to server
tid = Task id in pid table of process
outputs(self)
Not defined.
spawn(self)
Spawn the script.
Starts script asynchronously returning immediately
Messages must be retrieved calling messages.
Returns (proxy, tid)
wait(self, proxy, tid)
Wait for the script to finish.
proxy = Proxy giving access to server
tid = Task id in pid table of process
----------------------------------------------------------------------
Data and other attributes defined here:
AIPSDirs = []
FITSDirs = []
debug = False
doWait = False
isbatch = 32000
logFile = ''
msgkill = 0
proxy = <module 'LocalProxy' from '/export/users/bcotton/share/obittal...
script = ''
url = None
userno = 0
version = 'TST'
----------------------------------------------------------------------
Methods inherited from MinimalMatch.MinimalMatch:
__repr__(self)
class ObitScriptMessageLog
Methods defined here:
__init__(self)
zap(self)
Zap message log.
----------------------------------------------------------------------
Data and other attributes defined here:
userno = -1
\end{verbatim}
\section{Local Python Data Interface Classes}
Data access in local and remote script execution is provided through
the direct python bindings to the data classes.
These classes are Image, UV (radio interferometric data), and OTF
(radio single dish ``On-the-Fly'' data), which are all derived from the
base OData class.
Most of the top level class functionality, e.g. making an image from a
data set, is available through these classes.
The online documentation for these classes can be obtained by
\begin{verbatim}
>>> help(Image)
>>> help(UV)
>>> import OTF; help(OTF)
\end{verbatim}
Class members are accessed using the ``object\_name.value'' form, as in
\begin{verbatim}
>>> header=uv.Desc.Dict
\end{verbatim}
to get the ``header'' from uv data uv as a python dict.
Class functions (those which have ``self'' as an argument) are called as
\begin{verbatim}
>>> uv.Header(err)
\end{verbatim}
Note that ``self'' is not included explicitly in the argument list.
Functions which do not have ``self'' as an argument (these usually have
names starting with 'P') need to include
the class:
\begin{verbatim}
>>> UV.PHeader(uv, err)
\end{verbatim}
All data objects have a Descriptor (the ``Desc'' member) which can be
read and written (this requires an open and close of the data object).
Conversion between the C memory resident form and a python dict is by
means of the ``Dict'' member of the descriptor classes:
\begin{verbatim}
>>> d=uv.Desc.Dict
>>> d
{'origin': 'Obit ', 'jlocr': 4, 'obsdat': '1996-11-16', 'equinox': 2000.0,
'observer': 'AC473 ',
'ptype': ['UU-L-SIN', 'VV-L-SIN', 'WW-L-SIN', 'BASELINE', 'TIME1 '],
'ilocid': -1, 'obsdec': 30.2984147222, 'xshift': 0.0, 'ilocws': -1,
'jlocd': 5, 'restFreq': 0.0, 'ilocsu': -1, 'nvis': 1594634, 'ilocb': 3,
'ilocv':1, 'ilocw': 2, 'iloct': 4, 'ilocu': 0, 'nrparm': 5, 'instrume': 'VLA',
'epoch':2000.0, 'isort': 'TB', 'VelDef': 0, 'inaxes': [3, 2, 30, 1, 1, 1, 0],
'yshift': 0.0, 'ilocit': -1, 'object': 'MCFIELD ',
'ctype': ['COMPLEX ', 'STOKES ', 'FREQ ', 'IF ', 'RA ', 'DEC '],
'cdelt': [1.0, -1.0, 97656.25, 1.0, 1.0, 1.0, 0.0], 'jlocif': 3,
'JDObs': 2450403.5, 'date': '2007-07-07', 'ilocfq': -1, 'jlocf': 2, 'VelReference': 3,
'ncorr': 60, 'jlocc': 0, 'crpix': [1.0, 1.0, 16.0, 1.0, 1.0, 1.0, 1.0], 'jlocs': 1,
'name': 'AIPS UV data', 'teles': 'VLA ', 'altRef': 125100.0,
'numVisBuff': 0, 'naxis': 6, 'crota': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
'bunit':'UNCALIB ', 'firstVis': 0, 'altCrpix': 16.0, 'obsra': 195.75129125000001,
'crval': [1.0, -1.0, 316562500.0, 1.0, 195.75129125000001, 30.2984147222, 0.0]}
\end{verbatim}
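As a sketch of updating a descriptor on disk (the assignment back to
Desc.Dict, and the need to have the object open read/write, are
assumptions based on the description above):
\begin{verbatim}
>>> x.Open(Image.READWRITE, err)  # descriptor updates need the object open
>>> d = x.Desc.Dict               # header as a python dict
>>> d["object"] = "NEW NAME"      # modify a keyword
>>> x.Desc.Dict = d               # put the dict back (assumed settable)
>>> x.UpdateDesc(err)             # update the disk resident form
>>> x.Close(err)
\end{verbatim}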
\subsection{Obit python Image class}
The interface to Images uses FArray objects to store the pixel data.
The FArray class allows efficient pixel manipulation and knows about
magic value blanking of pixels.
The data arrays in memory can also be accessed for use with NumPy.
Further functions are available in the python modules ImageUtil,
CleanImage, ConvUtil, ImageMosaic, MosaicUtil and Feather.
The following describes the Image class.
\begin{verbatim}
NAME
Image - Python Obit Image class
DESCRIPTION
This class contains an astronomical image and allows access.
An ObitImage is the front end to a persistent disk resident structure.
Magic value blanking is supported, blanked pixels have the value
OBIT_MAGIC (ObitImageDesc.h).
Pixel data are kept in an FArray structure which is how Python accesses the data.
There may be associated tables (e.g. "AIPS CC" tables).
Both FITS and AIPS cataloged images are supported.
Image Members with python interfaces:
exist - True if object previously existed prior to object creation
InfoList - used to pass instructions to processing
ImageDesc - Astronomical labeling of the image Member Desc
FArray - Container used for pixel data Member FArray
PixBuf - memory pointer into I/O Buffer
Additional Functions are available in ImageUtil.
CLASSES
OData.OData(OData.ODataPtr)
Image
class Image(OData.OData)
Python Obit Image class
Additional Functions are available in ImageUtil.
Method resolution order:
Image
OData.OData
OData.ODataPtr
Methods defined here:
Clone(self, outImage, err)
Make a copy of a object but do not copy the actual data
This is useful to create an Image similar to the input one.
self = Python Image object
outImage = Output Python Image object, must be defined
err = Python Obit Error/message stack
Close(self, err)
Close an image persistent (disk) form
self = Python Image object
err = Python Obit Error/message stack
Copy(self, outImage, err)
Make a deep copy of input object.
Makes structure the same as self, copies data, tables
self = Python Image object to copy
outImage = Output Python Image object, must be defined
err = Python Obit Error/message stack
GetPlane(self, array, plane, err)
Read an image persistent (disk) form to an (optional) specified FArray
The data to be read is specified in the InfoList member as modified by plane
self = Python Image object
array = Python FArray to accept data, if None use inImage buffer
plane = array of 5 integers giving (1-rel) pixel numbers
err = Python Obit Error/message stack
Header(self, err)
Write image header on output
self = Python Obit Image object
err = Python Obit Error/message stack
ImageIsA(self)
Tells if input really a Python Obit Image
return true, false (1,0)
self = Python UV object
Info(self, err)
Get underlying data file info
self = Python Obit Image object
err = Python Obit Error/message stack
Open(self, access, err, blc=None, trc=None)
Open an image persistent (disk) form
self = Python Image object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
blc = if given and a list of integers (min 2) giving
bottom left corner (1-rel) of subimage
trc = if given and a list of integers (min 2) giving
top right corner (1-rel) of subimage
PutPlane(self, array, plane, err)
Write an image persistent (disk) form from an (optional) specified FArray
The data to be written is specified in the InfoList member as modified by plane
self = Python Image object
array = Python FArray to provide data, if None use inImage buffer
plane = array of 5 integers giving (1-rel) pixel numbers
err = Python Obit Error/message stack
Read(self, err)
Read an image persistent (disk) form
        The data to be read is specified in the InfoList member
Uses FArray member as buffer.
self = Python Image object
err = Python Obit Error/message stack
ReadFA(self, array, err)
Read an image persistent (disk) form to a specified FArray
The data to be read is specified in the InfoList member
self = Python Image object
array = Python FArray to accept data
err = Python Obit Error/message stack
ReadPlane(self, err, blc=None, trc=None)
Read an image plane into the FArray
Reads the plane specified by blc, trc
into the FArray associated with the image
self = Python Image object
err = Python Obit Error/message stack
blc = if given and a list of integers (min 2) giving
bottom left corner (1-rel) of subimage
trc = if given and a list of integers (min 2) giving
top right corner (1-rel) of subimage
returns Python FArray from Image with data read
Scratch(self, err)
Create a scratch file suitable for accepting the data to be read from self
A scratch Image is more or less the same as a normal Image except that it is
automatically deleted on the final unreference.
self = Python Image object
err = Python Obit Error/message stack
UpdateDesc(self, err, Desc=None)
Update any disk resident structures about descriptor
self = Python Image object
err = Python Obit Error/message stack
Desc = Descriptor, if None then use current descriptor
        Contents can be accessed through the Dict member
Write(self, err)
Write an image persistent (disk) form
The data to be written is specified in the InfoList member
Uses FArray member as buffer.
self = Python Image object
err = Python Obit Error/message stack
WriteFA(self, array, err)
Write an image persistent (disk) form from a specified FArray
The data to be written is specified in the InfoList member
self = Python Image object
array = Python FArray to write
err = Python Obit Error/message stack
WritePlane(self, imageData, err)
Write an image plane.
Writes the plane specified by blc, trc on image infoList
        Checks if the current FArray on Image is compatible with
imageData.
self = Python Image object
imageData = Python FArray with data to write
err = Python Obit Error/message stack
__del__(self)
__getattr__(self, name)
__init__(self, name)
__repr__(self)
__setattr__(self, name, value)
cast(self, toClass)
Casts object pointer to specified class
self = object whose cast pointer is desired
toClass = Class string to cast to ("ObitImage")
----------------------------------------------------------------------
Methods inherited from OData.OData:
CopyTables(self, outOData, exclude, include, err)
Copy Tables from one OData to another
self = Python OData object
outOData = Output Python OData object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
Dirty(self)
Mark OData as needing a header update to disk file
self = Python OData object
FullInstantiate(self, access, err)
Fully instantiate an OData by opening and closing
return 0 on success, else failure
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
GetHighVer(self, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
self = Python OData object
tabType = Table type, e.g. "OTFSoln"
GetName(self)
Tells OData object name (label)
returns name as character string
self = Python OData object
History(self, access, err)
Return the associated History
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
IsScratch(self)
Tells if OData is a scratch object
return true, false (1,0)
self = Python OData object
NewTable(self, access, tabType, tabVer, err, numOrb=0,
numPCal=3, numIF=1, numPol=1, numTerm=0, numChan=1,
numTones=1, numBand=1, numTabs=1, npoly=1, numCoef=5, noParms=0)
Return the specified associated table
Table will be created if necessary.
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "AIPS AN"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
Optional parameters, values only used if table created
numOrb = Number of orbital parameters (AN)
numPCal = Number of polarization parameters (AN)
numIF = Number of IFs (FQ, SN, CL, BP, BL, TY, CQ)
numPol = Number of Stokes' (SN, CL, BP, BL, PC, TY, GC, MC, IM)
numTerm = Number of terms in model polynomial (CL)
numChan = Number of spectral channels (BP)
        numTones = Number of Phase cal tones (PC)
numTabs = Number of ??? (GC)
numCoef = Number of polynomial coefficents (NI)
numBand = Number of Bands(?) (IM, GC)
npoly = number of polynomial terms (IM)
noParms = Number of parameters in CC table model
maxis1-5 = Dimension of axes of IDI data matrix
ODataIsA(self)
Tells if input really a Python Obit OData
return true, false (1,0)
self = Python OData object
Rename(self, err, newFITSName=None, newAIPSName=' ',
newAIPSClass=' ', newAIPSSeq=0)
Rename underlying files
self = Python OData object
err = Python Obit Error/message stack
For FITS files:
newFITSName = new name for FITS file
For AIPS:
newAIPSName = New AIPS Name (max 12 char) Blank => don't change.
newAIPSClass = New AIPS Class (max 6 char) Blank => don't change.
newAIPSSeq = New AIPS Sequence number, 0 => unique value
UpdateTables(self, err)
Update any disk resident structures about the current tables
Returns 0 on success
self = Python Image object
err = Python Obit Error/message stack
Zap(self, err)
Delete underlying files and the basic object.
self = Python OData object
err = Python Obit Error/message stack
ZapTable(self, tabType, tabVer, err)
Destroy specified table
Returns 0 on success
self = Python OData object
tabType = Table type, e.g. "AIPS CC"
tabVer = table version, integer
err = Python Obit Error/message stack
FUNCTIONS
ObitName(ObitObject)
Return name of an Obit object or input if not an Obit Object
PClone(inImage, outImage, err)
Make a copy of a object but do not copy the actual data
This is useful to create an Image similar to the input one.
inImage = Python Image object
outImage = Output Python Image object, must be defined
err = Python Obit Error/message stack
PClone2(inImage1, inImage2, outImage, err)
Make a copy of a object but do not copy the actual data
inImage1 = Python Image object to clone
inImage2 = Python Image object whose geometry is to be used
outImage = Output Python Image object, must be defined,
will be defined as Memory only
err = Python Obit Error/message stack
PCloneMem(inImage, outImage, err)
Make a Memory only clone of an Image structure
This is useful for temporary structures
inImage = Python Image object
outImage = Output Python Image object, must be defined
err = Python Obit Error/message stack
PClose(inImage, err)
Close an image persistent (disk) form
inImage = Python Image object
err = Python Obit Error/message stack
PCompare(in1Image, in2Image, err, plane=[1, 1, 1, 1, 1])
Compare a plane of two images
returns list [max. abs in1Image, max abs difference, RMS difference]
in1Image = Python Image object
in2Image = Python Image object, on output, the FArray contains the difference.
err = Python Obit Error/message stack
plane = plane to compare
PCopy(inImage, outImage, err)
Make a deep copy of input object.
Makes structure the same as inImage, copies data, tables
inImage = Python Image object to copy
outImage = Output Python Image object, must be defined
err = Python Obit Error/message stack
PCopyQuantizeFITS(inImage, outImage, err, fract=0.25, quant=None, inHistory=None)
Make a copy of an image quantizing to a 16 or 32 bit integer
FITS image
inImage = Python Image object
outImage = Output Python Image object, must be defined
but not fully created
err = Python Obit Error/message stack
fract = quantization level as a fraction of the plane min. RMS
quant = quantization level in image units, has precedence over fract
None or <= 0 => use fract.
inHistory = if given a History object to copy to the output FITS header
PCopyTables(inImage, outImage, exclude, include, err)
        Copy Tables from one image to another
inImage = Python Image object
outImage = Output Python Image object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
PDirty(inImage)
Mark Image as needing a header update to disk file
inImage = Python Image object
PFArray2FITS(inArray, outFile, err, outDisk=1, oDesc=None)
Write an FArray to a FITS image
Very rudimentary header attached
Returns image object
inArray = Python FArray object
outFile = Name of FITS file
outDisk = FITS disk number
oDesc = None or ImageDescriptor to be written
err = Python Obit Error/message stack
PFArray2Image(inArray, outImage, err)
Attach an FArray to an image and write it
Very rudimentary header attached
inArray = Python Image object
outImage = Python Image to write
err = Python Obit Error/message stack
PFullInstantiate(inImage, access, err)
Fully instantiate an Image by opening and closing
return 0 on success, else failure
inImage = Python Image object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PGetBeam(inImage)
Return Beam attached to Image
returns Beam with image pixel data
inImage = Python Image object
PGetDesc(inImage)
Return the member ImageDesc
returns ImageDesc as a Python Dictionary
inImage = Python Image object
PGetFArray(inImage)
Return FArray used to buffer Image data
returns FArray with image pixel data
inImage = Python Image object
PGetHighVer(inImage, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
inImage = Python Image object
tabType = Table type, e.g. "OTFSoln"
PGetList(inImage)
Return the member InfoList
returns InfoList
inImage = Python Image object
PGetName(inImage)
Tells Image object name (label)
returns name as character string
inImage = Python Image object
PGetPixBuf(inImage)
Return python memory buffer for pixel array in memory
inImage = Python Image object
PGetPlane(inImage, array, plane, err)
Read an image persistent (disk) form to an (optional) specified FArray
The data to be read is specified in the InfoList member as modified by plane
inImage = Python Image object
array = Python FArray to accept data, if None use inImage buffer
plane = array of 5 integers giving (1-rel) pixel numbers
err = Python Obit Error/message stack
PGetTable(inImage, access, tabType, tabVer, err, noParms=0)
Return (create) the specified associated table
Specific table types are recognized and the appropriate constructor
called, these may have additional parameters. This allows creating
new tables of the appropriate type.
returns Python Obit Table
inImage = Python Image object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "AIPS AN", or "OTFSoln"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
noParms = Number of parameters in CC table model
PGetTableList(inImage)
Return the member tableList
returns tableList
inImage = Python Image object
PHeader(inImage, err)
Print image descriptor
inImage = Python Image object
err = Python Obit Error/message stack
PImageGetTable(inImage, access, tabType, tabVer, err)
Obsolete use PGetTable
PIsA(inImage)
Tells if input really a Python Obit Image
return True, False (1,0)
inImage = Python Image object
PIsScratch(inImage)
Tells if Image is a scratch object
return true, false (1,0)
inImage = Python Image object
POpen(inImage, access, err, blc=None, trc=None)
Open an image persistent (disk) form
inImage = Python Image object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
blc = if given and a list of integers (min 2) giving
bottom left corner (1-rel) of subimage
trc = if given and a list of integers (min 2) giving
top right corner (1-rel) of subimage
PPutPlane(inImage, array, plane, err)
Write an image persistent (disk) form from an (optional) specified FArray
The data to be written is specified in the InfoList member as modified by plane
inImage = Python Image object
array = Python FArray to provide data, if None use inImage buffer
plane = array of 5 integers giving (1-rel) pixel numbers
err = Python Obit Error/message stack
PRead(inImage, err)
Read an image persistent (disk) form
        The data to be read is specified in the InfoList member
Uses FArray member as buffer.
inImage = Python Image object
err = Python Obit Error/message stack
PReadFA(inImage, array, err)
Read an image persistent (disk) form to a specified FArray
The data to be read is specified in the InfoList member
inImage = Python Image object
array = Python FArray to accept data
err = Python Obit Error/message stack
PReadPlane(inImage, err, blc=None, trc=None)
Read an image plane into the FArray
Reads the plane specified by blc, trc
into the FArray associated with the image
inImage = Python Image object
err = Python Obit Error/message stack
blc = if given and a list of integers (min 2) giving
bottom left corner (1-rel) of subimage
trc = if given and a list of integers (min 2) giving
top right corner (1-rel) of subimage
returns Python FArray from Image with data read
PScratch(inImage, err)
Create a scratch file suitable for accepting the data to be read from inImage
A scratch Image is more or less the same as a normal Image except that it is
automatically deleted on the final unreference.
inImage = Python Image object
err = Python Obit Error/message stack
PSetBeam(inImage, beam)
Replace the Beam attached to an Image
inImage = Python Image object
beam = Python Beam Image to attach
PSetFArray(inImage, array)
Replace the FArray on an Image
inImage = Python Image object
array = Python FArray to attach
PSwapAxis(inImage, err, ax1=3, ax2=4)
Swap axes on an image
The order of two adjacent axes may be swapped if the dimensionality
of at least one of them is 1
inImage = Image whose axes are to be swapped
err = Python Obit Error/message stack
ax1 = first (1-rel) axis number
ax2 = second (1-rel) axis number
PUnref(inImage)
Decrement reference count
Decrement reference count which will destroy object if it goes to zero
Python object stays defined.
inImage = Python Image object
PUpdateDesc(inImage, err, Desc=None)
Update external representation of descriptor
inImage = Python Image object
err = Python Obit Error/message stack
Desc = Image descriptor, if None then use current descriptor
PUpdateTables(inImage, err)
Update any disk resident structures about the current tables
inImage = Python Image object
err = Python Obit Error/message stack
PWrite(inImage, err)
Write an image persistent (disk) form
The data to be written is specified in the InfoList member
Uses FArray member as buffer.
inImage = Python Image object
err = Python Obit Error/message stack
PWriteFA(inImage, array, err)
Write an image persistent (disk) form from a specified FArray
The data to be written is specified in the InfoList member
inImage = Python Image object
array = Python FArray to write
err = Python Obit Error/message stack
PWritePlane(Image, imageData, err)
Write an image plane.
Writes the plane specified by blc, trc on image infoList
        Checks if the current FArray on Image is compatible with
imageData.
Image = Python Image object
imageData = Python FArray with data to write
err = Python Obit Error/message stack
PZap(inImage, err)
Delete underlying files and the basic object.
inImage = Python Image object
err = Python Obit Error/message stack
PZapTable(inImage, tabType, tabVer, err)
Destroy specified table
inImage = Python Image object
tabType = Table type, e.g. "AIPS CC"
tabVer = table version, integer
err = Python Obit Error/message stack
input(inputDict)
Print the contents of an input Dictionary
inputDict = Python Dictionary containing the parameters for a routine
newObit(name, filename, disk, exists, err)
Create and initialize an Image structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
Returns the Python Image object
name = name desired for object (labeling purposes)
filename = name of FITS file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
newPACNO(disk, cno, exists, err, verbose=True)
Create and initialize an AIPS based Image structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
Returns the Python Image object
isOK member set to indicate success
disk = AIPS directory number
cno = AIPS catalog number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
newPAImage(name, Aname, Aclass, disk, seq, exists, err, verbose=True)
Create and initialize an AIPS based Image structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
Returns the Python Image object
isOK member set to indicate success
name = name desired for object (labeling purposes)
Aname = AIPS name of file
Aclass = AIPS class of file
seq = AIPS sequence number of file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
newPFImage(name, filename, disk, exists, err, verbose=True)
Create and initialize an FITS based Image structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
isOK member set to indicate success
Returns the Python Image object
name = name desired for object (labeling purposes)
filename = name of FITS file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
\end{verbatim}
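As a concrete illustration of the functions above, the following sketch
attaches to an existing FITS image, reads its first plane into the FArray
buffer and closes it again.  The file name and disk number are
hypothetical and err is the usual error/message stack; in ObitTalk the
Image module is normally already imported, otherwise import it first.
\begin{verbatim}
>>> x = Image.newPFImage("im", "myimage.fits", 1, True, err)
>>> x.Open(1, err)                       # 1 = READONLY
>>> x.GetPlane(None, [1,1,1,1,1], err)   # read plane 1 into the FArray buffer
>>> pixels = x.FArray                    # FArray holding the pixel data
>>> x.Close(err)
\end{verbatim}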
\subsection{Obit python UV class}
Further utilities are available in the SkyModel, IonCal, CleanVis,
UVSelfCal, UVGSolve, UVImager, and UVSoln2Cal python modules.
The following describes the UV class; a short usage sketch follows the
class description.
\begin{verbatim}
NAME
    UV - Python Obit interferometer (UV) data class
DESCRIPTION
    This class contains interferometric data and allows access.
An ObitUV is the front end to a persistent disk resident structure.
    There may be (usually are) associated tables which either describe
the data or contain calibration and/or editing information.
Both FITS (as Tables) and AIPS cataloged data are supported.
Most access to UV data is through functions as the volume of the data is
inappropriate to be processed directly in python.
UV Members with python interfaces:
exist - True if object previously existed prior to object creation
List - used to pass instructions to processing
Desc - Astronomical labeling of the data
TableList - List of tables attached
VisBuf - memory pointer into I/O Buffer
Data selection, calibration and editing parameters on List member:
"doCalSelect" bool (1,1,1) Select/calibrate/edit data?
"Stokes" string (4,1,1) Selected output Stokes parameters:
" "=> no translation,"I ","V ","Q ", "U ",
"IQU ", "IQUV", "IV ", "RR ", "LL ", "RL ", "LR ",
"HALF" = RR,LL, "FULL"=RR,LL,RL,LR. [default " "]
In the above 'F' can substitute for "formal" 'I' (both RR+LL).
"BChan" int (1,1,1) First spectral channel selected. [def all]
"EChan" int (1,1,1) Highest spectral channel selected. [def all]
"BIF" int (1,1,1) First "IF" selected. [def all]
"EIF" int (1,1,1) Highest "IF" selected. [def all]
"doPol" int (1,1,1) >0 -> calibrate polarization.
"doCalib" int (1,1,1) >0 -> calibrate, 2=> also calibrate Weights
"gainUse" int (1,1,1) SN/CL table version number, 0-> use highest
"flagVer" int (1,1,1) Flag table version, 0-> use highest, <0-> none
"BLVer" int (1,1,1) BL table version, 0> use highest, <0-> none
"BPVer" int (1,1,1) Band pass (BP) table version, 0-> use highest
"Subarray" int (1,1,1) Selected subarray, <=0->all [default all]
"dropSubA" bool (1,1,1) Drop subarray info?
"FreqID" int (1,1,1) Selected Frequency ID, <=0->all [default all]
"timeRange" float (2,1,1) Selected timerange in days.
"UVRange" float (2,1,1) Selected UV range in kilowavelengths.
"InputAvgTime" float (1,1,1) Input data averaging time (sec).
used for fringe rate decorrelation correction.
"Sources" string (?,?,1) Source names selected unless any starts with
a '-' in which case all are deselected (with '-' stripped).
"souCode" string (4,1,1) Source Cal code desired, ' ' => any code selected
'* ' => any non blank code (calibrators only)
'-CAL' => blank codes only (no calibrators)
"Qual" int (1,1,1) Source qualifier, -1 [default] = any
"Antennas" int (?,1,1) a list of selected antenna numbers, if any is negative
then the absolute values are used and the specified antennas are deselected.
"corrtype" int (1,1,1) Correlation type, 0=cross corr only, 1=both, 2=auto only.
"passAll" bool (1,1,1) If True, pass along all data when selecting/calibration
even if it's all flagged,
data deselected by time, source, antenna etc. is not passed.
"doBand" int (1,1,1) Band pass application type <0-> none
(1) if = 1 then all the bandpass data for each antenna
will be averaged to form a composite bandpass
spectrum, this will then be used to correct the data.
(2) if = 2 the bandpass spectra nearest in time (in a weighted
sense) to the uv data point will be used to correct the data.
(3) if = 3 the bandpass data will be interpolated in time using
the solution weights to form a composite bandpass spectrum,
this interpolated spectrum will then be used to correct the
data.
(4) if = 4 the bandpass spectra nearest in time (neglecting
weights) to the uv data point will be used to correct the
data.
(5) if = 5 the bandpass data will be interpolated in time ignoring
weights to form a composite bandpass spectrum, this
interpolated spectrum will then be used to correct the data.
"Smooth" float (3,1,1) specifies the type of spectral smoothing
Smooth(1) = type of smoothing to apply:
0 => no smoothing
1 => Hanning
2 => Gaussian
3 => Boxcar
4 => Sinc (i.e. sin(x)/x)
Smooth(2) = the "diameter" of the function, i.e.
width between first nulls of Hanning triangle
and sinc function, FWHM of Gaussian, width of
Boxcar. Defaults (if < 0.1) are 4, 2, 2 and 3
channels for Smooth(1) = 1 - 4.
Smooth(3) = the diameter over which the convolving
function has value - in channels.
Defaults: 1, 3, 1, 4 times Smooth(2) used when
"SubScanTime" float scalar [Optional] if given, this is the
desired time (days) of a sub scan. This is used by the
selector to suggest a value close to this which will
evenly divide the current scan.
0 => Use scan average.
This is only useful for ReadSelect operations on indexed ObitUVs.
CLASSES
OData.OData(OData.ODataPtr)
UV
class UV(OData.OData)
        Python Obit interferometer (UV) data class
UV Members with python interfaces:
List - used to pass instructions to processing
TableList - List of tables attached
Desc - Astronomical labeling of the data
VisBuf - memory pointer into I/O Buffer
Method resolution order:
UV
OData.OData
OData.ODataPtr
Methods defined here:
Clone(self, outUV, err)
Make a copy of a object but do not copy the actual data
This is useful to create an UV similar to the input one.
self = Python UV object
outUV = Output Python UV object, must be defined
err = Python Obit Error/message stack
Close(self, err)
Close a UV persistent (disk) form
returns 0 on success, else failure
self = Python UV object
err = Python Obit Error/message stack
Copy(self, outUV, err)
Make a deep copy of input object.
Makes structure the same as self, copies data, tables
self = Python UV object to copy
outUV = Output Python UV object, must be defined
err = Python Obit Error/message stack
Header(self, err)
Write image header on output
self = Python Obit UV object
err = Python Obit Error/message stack
Info(self, err)
Get underlying data file info
self = Python Obit UV object
err = Python Obit Error/message stack
Open(self, access, err)
Open a UV data persistent (disk) form
Returns 0 on success, else failure
self = Python UV object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
Read(self, err)
Read a UV persistent (disk) form
Reads into buffer attached to UV data, use VisBuf for access
Returns 0 on success, else failure
self = Python UV object
err = Python Obit Error/message stack
Scratch(self, err)
Create a scratch file suitable for accepting the data to be read from self
A scratch UV is more or less the same as a normal UV except that it is
automatically deleted on the final unreference.
self = Python UV object
err = Python Obit Error/message stack
UVIsA(self)
Tells if input really a Python Obit UV
return true, false (1,0)
self = Python UV object
UpdateDesc(self, err, Desc=None)
Update any disk resident structures about descriptor
self = Python UV object
err = Python Obit Error/message stack
Desc = Descriptor, if None then use current descriptor
        Contents can be accessed through the Dict member
Write(self, err)
Write a UV persistent (disk) form
Writes buffer attached to UV data, use VisBuf for access
returns 0 on success, else failure
self = Python UV object
err = Python Obit Error/message stack
__del__(self)
__getattr__(self, name)
__init__(self, name)
__repr__(self)
__setattr__(self, name, value)
cast(self, toClass)
Casts object pointer to specified class
self = object whose cast pointer is desired
toClass = Class string to cast to ("ObitUV")
----------------------------------------------------------------------
Methods inherited from OData.OData:
CopyTables(self, outOData, exclude, include, err)
Copy Tables from one OData to another
self = Python OData object
outOData = Output Python OData object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
Dirty(self)
Mark OData as needing a header update to disk file
self = Python OData object
FullInstantiate(self, access, err)
Fully instantiate an OData by opening and closing
return 0 on success, else failure
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
GetHighVer(self, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
self = Python OData object
tabType = Table type, e.g. "OTFSoln"
GetName(self)
Tells OData object name (label)
returns name as character string
self = Python OData object
History(self, access, err)
Return the associated History
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
IsScratch(self)
Tells if OData is a scratch object
return true, false (1,0)
self = Python OData object
NewTable(self, access, tabType, tabVer, err, numOrb=0,
numPCal=3, numIF=1, numPol=1, numTerm=0, numChan=1,
numTones=1, numBand=1, numTabs=1, npoly=1, numCoef=5, noParms=0)
Return the specified associated table
Table will be created if necessary.
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "AIPS AN"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
Optional parameters, values only used if table created
numOrb = Number of orbital parameters (AN)
numPCal = Number of polarization parameters (AN)
numIF = Number of IFs (FQ, SN, CL, BP, BL, TY, CQ)
numPol = Number of Stokes' (SN, CL, BP, BL, PC, TY, GC, MC, IM)
numTerm = Number of terms in model polynomial (CL)
numChan = Number of spectral channels (BP)
        numTones = Number of Phase cal tones (PC)
numTabs = Number of ??? (GC)
numCoef = Number of polynomial coefficents (NI)
numBand = Number of Bands(?) (IM, GC)
npoly = number of polynomial terms (IM)
noParms = Number of parameters in CC table model
maxis1-5 = Dimension of axes of IDI data matrix
ODataIsA(self)
Tells if input really a Python Obit OData
return true, false (1,0)
self = Python OData object
Rename(self, err, newFITSName=None, newAIPSName=' ',
newAIPSClass=' ', newAIPSSeq=0)
Rename underlying files
self = Python OData object
err = Python Obit Error/message stack
For FITS files:
newFITSName = new name for FITS file
For AIPS:
newAIPSName = New AIPS Name (max 12 char) Blank => don't change.
newAIPSClass = New AIPS Class (max 6 char) Blank => don't change.
newAIPSSeq = New AIPS Sequence number, 0 => unique value
UpdateTables(self, err)
Update any disk resident structures about the current tables
Returns 0 on success
self = Python OData object
err = Python Obit Error/message stack
Zap(self, err)
Delete underlying files and the basic object.
self = Python OData object
err = Python Obit Error/message stack
ZapTable(self, tabType, tabVer, err)
Destroy specified table
Returns 0 on success
self = Python OData object
tabType = Table type, e.g. "AIPS CC"
tabVer = table version, integer
err = Python Obit Error/message stack
FUNCTIONS
PClone(inUV, outUV, err)
Make a copy of a object but do not copy the actual data
This is useful to create an UV similar to the input one.
inUV = Python UV object
outUV = Output Python UV object, must be defined
err = Python Obit Error/message stack
PClose(inUV, err)
Close an image persistent (disk) form
inUV = Python UV object
err = Python Obit Error/message stack
PCopy(inUV, outUV, err)
Make a deep copy of input object.
Makes structure the same as inUV, copies data, tables
inUV = Python UV object to copy
outUV = Output Python UV object, must be defined
err = Python Obit Error/message stack
PCopyTables(inUV, outUV, exclude, include, err)
        Copy Tables from one image to another
inUV = Python UV object
outUV = Output Python UV object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
PDirty(inUV)
Mark UV as needing a header update to disk file
inUV = Python UV object
PEditClip(inUV, scratch, outUV, err)
Clip raw visibilities
control parameters on inUV info member
"maxAmp" OBIT_float (1,1,1) Maximum allowed amplitude
"oper" OBIT_string (4,1,1) operation type:
"flag" flag data with amplitudes in excess of maxAmp
"clip" clip amplitudes at maxAmp and preserve phase
default is "flag"
returns UV data object
inUV = Python UV object to clip/flag
scratch= True if this is to be a scratch file (same type as inUV)
outUV = Predefined UV data if scratch is False, may be inUV
ignored if scratch True.
err = Python Obit Error/message stack
PEditClipStokes(inUV, scratch, outUV, err)
Flag visibilities by Stokes
Clip a uv data set. Data with amplitudes of the selected stokes
in excess of maxAmp are flagged. Optionally all correlations associated
may be flagged. Stokes conversion as needed for test.
Control parameters are on the inUV info member:
"clipStok" OBIT_string (1,1,1) Stokes value to clip (I, Q, U, V, R, L)
default = "I"
"flagAll" Obit_bool (1,1,1) if true, flag all associated correlations
default = True
"maxAmp" OBIT_float (1,1,1) Maximum allowed amplitude
returns UV data object
inUV = Python UV object to clip/flag
scratch= True if this is to be a scratch file (same type as inUV)
outUV = Predefined UV data if scratch is False, may be inUV
ignored if scratch True.
err = Python Obit Error/message stack
PEditFD(inUV, outUV, err)
Frequency-domain editing of UV data - produces FG table
Editing is done independently for each visibility channel.
First clipping is done on correlator and Vpol amplitudes.
Following this, an average and RMS is determined for each channel
in each timeAvg period and a spectral baseline is established
for the average values, either using a median window filter (FDwidMW>0)
or a linear baseline fit (FDwidMW<=0) to specified channels.
Channels with excessive RMSes or residual amplitudes are flagged.
Flagging is done by entering the offending data in FG table flagTab
on outUV.
Control parameters on inUV info member
"flagTab" OBIT_int (1,1,1) FG table version number [ def. 1]
"timeAvg" OBIT_float (1,1,1) Time interval over which to average
data to be flagged (days) [def = 1 min.]
"FDmaxAmp" OBIT_float (1,1,1) Maximum average amplitude allowed in the
spectrum before fitting. Any channel exceeding this is
flagged in advance of the baseline fitting or median
            filtering. default = infinite
"FDmaxV" OBIT_float (1,1,1) Maximum average amplitude allowed in V
polarization; any channel exceeding this is flagged in
advance of the baseline fitting or median filtering,
Calculates V from difference in amplitudes.
default = infinite
"FDwidMW" OBIT_int (1,1,1) If > 0 the width of the median window in channels.
An odd number (5) is recommended, default or 0 => linear baseline
"FDmaxRMS" OBIT_float (2,1,1) Flag all channels having RMS
values > maxRMS[0] of the channel median sigma.[default = 6.]
plus maxRMS[1] (default 0.1) of the channel average in quadrature
"FDmaxRes" OBIT_float (1,1,1) Max. residual flux in sigma allowed for
channels outside the baseline fitting regions.
default = 6.
"FDmaxResBL" OBIT_float (1,1,1) Max. residual flux in sigma allowed for
channels within the baseline fitting regions.
Default = FDmaxRes
"FDbaseSel" OBIT_int (4,*,1) Channel selection to define spectral baseline
Used only for linear baseline fitting.
Select groups of channels/IF(s) to fit as sets
of (Start,end,inc,IF), i.e., chanSel = 6,37,1,0,
92,123,1,0 for two regions applying to all IFs.
Channel and IF numbers 1 -rel
The first group for which the end channel == 0 terminates the list
Channel increments defaults to 1
If the IF==0 then the group applies to all IF.
Default is channels 2 => nchan-1 all IFs
inUV = Python UV object to flag
Any prior selection and editing is applied.
outUV = UV data onto which the FG table is to be attached.
May be the same as inUV.
err = Python Obit Error/message stack
PEditStokes(inUV, outUV, err)
Stokes editing of UV data, FG table out
All data on a given baseline/correlator are flagged if the
amplitude of the datatype "FlagStok" exceeds maxAmp.
If a fraction of bad baselines on any antenna/channel/IF exceeds
maxBad, then all data to that correlator is flagged.
Flagging entries are written into FG table flagTab.
Results are unpredictable for uncalibrated data.
Control parameters on info member of inUV:
"flagStok" OBIT_string (1,1,1) Stokes value to clip (I, Q, U, V, R, L)
default = "V"
"flagTab" OBIT_int (1,1,1) FG table version number [ def. 1]
NB: this should not also being used to flag the input data!
"timeAvg" OBIT_float (1,1,1) Time interval over which to determine
data to be flagged (days) [def = 1 min.]
"maxAmp" OBIT_float (1,1,1) Maximum VPol allowed
"maxBad" OBIT_float (1,1,1) Fraction of allowed flagged baselines
to an antenna above which all baselines are flagged.
[default 0.25]
inUV = Python UV object to clip/flag
outUV = UV data onto which the FG table is to be attached.
May be the same as inUV.
err = Python Obit Error/message stack
PEditTD(inUV, outUV, err)
Time-domain editing of UV data - produces FG table
Fill flagging table with clipping by RMS values of the real and imaginary
parts. All correlations are clipped on each baseline if the RMS is
larger than the maximum. The clipping is done independently in
each time interval defined by timeAvg.
The clipping level is given by MIN (A, MAX (B,C)) where:
A = sqrt (maxRMS[0]**2 + (avg_amp * maxRMS[1])**2)
and avg_amp is the average amplitude on each baseline.
B = median RMS + 3 * sigma of the RMS distribution.
C = level corresponding to 3% of the data.
All data on a given baseline/correlator are flagged if the RMS
exceeds the limit. If a fraction of bad baselines on any correlator
exceeds maxBad, then all data to that correlator is flagged. In
addition, if the offending correlator is a parallel hand correlator
then any corresponding cross hand correlations are also flagged.
Flagging entries are written into FG table flagTab.
Control parameters on inUV info member
"flagTab" OBIT_int (1,1,1) FG table version number [ def. 1]
"timeAvg" OBIT_float (1,1,1) Time interval over which to determine
data to be flagged (days) [def = 1 min.]
NB: this should be at least 2 integrations.
"maxRMS" OBIT_float (2,1,1) Maximum RMS allowed, constant plus
amplitude coefficient.
"maxBad" OBIT_float (1,1,1) Fraction of allowed flagged baselines
[default 0.25]
inUV = Python UV object to clip/flag
outUV = UV data onto which the FG table is to be attached.
May be the same as inUV.
err = Python Obit Error/message stack
PFullInstantiate(inUV, access, err)
Fully instantiate an UV by opening and closing
return 0 on success, else failure
inUV = Python UV object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PGetDesc(inUV)
Return the member UVDesc
returns UVDesc as a Python Dictionary
inUV = Python UV object
PGetFreq(inUV, err)
Get Frequency information
inUV = Python UV object
err = Python Obit Error/message stack
PGetHighVer(inUV, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
inUV = Python UV object
tabType = Table type, e.g. "OTFSoln"
PGetList(inUV)
Return the member InfoList
returns InfoList
inUV = Python UV object
PGetName(inUV)
Tells UV object name (label)
returns name as character string
inUV = Python UV object
PGetSubA(inUV, err)
Get Subarray information
returns 0 on success, else 1
inUV = Python UV object
err = Python Obit Error/message stack
PGetTable(inUV, access, tabType, tabVer, err, numOrb=0, numPCal=3,
numIF=1, numPol=1, numTerm=0, numChan=1, numTones=1,
numBand=1, numTabs=1, npoly=1, numCoef=5, maxis1=2, maxis2=1,
maxis3=1, maxis4=1, maxis5=1)
Return (create)the specified associated table
Specific table types are recognized and the appropriate constructor
called, these may have additional parameters. This allows creating
new tables of the appropriate type.
returns Python Obit Table
inUV = Python UV object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "AIPS AN"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
Optional parameters, values only used if table created
numOrb = Number of orbital parameters (AN)
numPCal = Number of polarization parameters (AN)
numIF = Number of IFs (FQ, SN, CL, BP, BL, TY, CQ)
numPol = Number of Stokes' (SN, CL, BP, BL, PC, TY, GC, MC, IM)
numTerm = Number of terms in model polynomial (CL)
numChan = Number of spectral channels (BP)
        numTones = Number of Phase cal tones (PC)
numTabs = Number of ??? (GC)
numCoef = Number of polynomial coefficents (NI)
numBand = Number Bands(?) (IM, GC)
npoly = number of polynomial terms (IM)
maxis1-5 = Dimension of axes of IDI data matrix
PGetTableList(inUV)
Return the member tableList
returns tableList
inUV = Python UV object
PGetVisBuf(inUV)
PHeader(inUV, err)
Print data descriptor
inUV = Python Obit UV object
err = Python Obit Error/message stack
PIsA(inUV)
Tells if input really a Python Obit UV
return true, false (1,0)
inUV = Python UV object
PIsScratch(inUV)
Tells if UV is a scratch object
return true, false (1,0)
inUV = Python UV object
PNewUVTable(inUV, access, tabType, tabVer, err)
Obsolete use PGetTable
POpen(inUV, access, err)
Open an image persistent (disk) form
inUV = Python UV object
access = access 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PRename(inUV, err, newFITSName=None, newAIPSName=' ',
newAIPSClass=' ', newAIPSSeq=0)
Rename underlying files
inUV = Python UV object
err = Python Obit Error/message stack
For FITS files:
newFITSName = new name for FITS file
For AIPS:
newAIPSName = New AIPS Name (max 12 char) Blank => don't change.
newAIPSClass = New AIPS Class (max 6 char) Blank => don't change.
newAIPSSeq = New AIPS Sequence number, 0 => unique value
PScratch(inUV, err)
Create a scratch file suitable for accepting the data to be read from inUV
A scratch UV is more or less the same as a normal UV except that it is
automatically deleted on the final unreference.
inUV = Python UV object
err = Python Obit Error/message stack
PUVInfo(inUV, err)
Get file info for extant uv data object
Fills in information on object, useful for scratch files
inUV = Python UV object
err = Python Obit Error/message stack
PUpdateDesc(inUV, err, Desc=None)
Update external representation of descriptor
inUV = Python UV object
err = Python Obit Error/message stack
Desc = UV descriptor, if None then use current descriptor
        Contents can be accessed through the Dict member
PUpdateTables(inUV, err)
Update any disk resident structures about the current tables
inUV = Python UV object
err = Python Obit Error/message stack
PUtilAvgF(inUV, outUV, err, scratch=False, NumChAvg=0, doAvgAll=False, ChanSel=None)
Average A UV data set in Frequency
returns Averaged UV data object
inUV = Python UV object to copy
Any selection editing and calibration applied before average.
outUV = Predefined UV data if scratch is False, ignored if
scratch is True.
err = Python Obit Error/message stack
scratch = True if this is to be a scratch file (same type as inUV)
NumChAvg = Number of channels to average, [def.0 = all]
doAvgAll = If TRUE then average all channels and IF.
ChanSel = Groups of channels to consider (relative to channels &
IFs selected by BChan, EChan, BIF, EIF)
(start, end, increment, IF) as array of tuples
where start and end at the beginning and ending
channel numbers (1-rel) of the group to be included,
increment is the increment between selected channels
and IF is the IF number (1-rel)
default increment is 1, IF=0 means all IF.
Default is all channels in each IF.
Example [(3,14,1,0),(25,30,1,0)] averages channels
3 through 14 and 25 through 30 in each IF.
PUtilAvgT(inUV, outUV, err, scratch=False, timeAvg=1.0)
Average A UV data set in Time
returns Averaged UV data object
inUV = Python UV object to copy
Any selection editing and calibration applied before average.
outUV = Predefined UV data if scratch is False, ignored if
scratch is True.
err = Python Obit Error/message stack
scratch = True if this is to be a scratch file (same type as inUV)
timeAvg = Averaging time in min
PUtilCopyZero(inUV, scratch, outUV, err)
Copy a UV data set replacing data by zero, weight 1
returns UV data object
inUV = Python UV object to copy
scratch= True if this is to be a scratch file (same type as inUV)
outUV = Predefined UV data if scratch is False
ignored if scratch True.
err = Python Obit Error/message stack
PUtilCount(inUV, err, timeInt=1440.0)
Count data values by interval in a UV dataset
Each new source starts a new interval
        returns a dict with entries:
numTime = Number of time intervals
numCorr = Number of Correlations per vis
Count = Number of good correlation/visibilities
Bad = Number of flagged correlation/visibilities
Source = Source ID per interval (or 0 if no source ID)
LST = Average LST (days) per interval
inUV = Python UV object to copy
Any selection editing and calibration applied before average.
err = Python Obit Error/message stack
timeInt = interval in min (max 500 intervals)
PUtilIndex(inUV, err, maxScan=None, maxGap=None)
Indexes a uv data
inUV = Python UV object to index
err = Python Obit Error/message stack
maxScan = max. scan length in min. [def. long]
maxGap = max. scan gap in min. [def. long]
PUtilUVWExtrema(inUV, err)
Get UV coverage information
returns array [0]=maximum baseline length (in U,V), [1] = maximum W
inUV = Python UV object
err = Python Obit Error/message stack
PUtilVisCompare(in1UV, in2UV, err)
Compares the visibilites in in1UV with those in in2UV
returns RMS real, imaginary parts/amplitude
in1UV = Numerator Python UV object
in2UV = Denominator Python UV object
err = Python Obit Error/message stack
PUtilVisDivide(in1UV, in2UV, outUV, err)
Divides the visibilites in in1UV by those in in2UV
outUV = in1UV / in2UV
in1UV = Numerator Python UV object, no calibration/selection
in2UV = Denominator Python UV object
outUV = Output python UV object
err = Python Obit Error/message stack
PUtilVisSub(in1UV, in2UV, outUV, err)
Subtracts the visibilites in in2UV from those in in1UV
outUV = in1UV - in2UV
in1UV = First python UV object, no calibration/selection
in2UV = Second python UV object, calibration allowed
outUV = Output Python UV object, may be same as in1UV
err = Python Obit Error/message stack
PZap(inUV, err)
Delete underlying files and the basic object.
inUV = Python UV object
err = Python Obit Error/message stack
PZapTable(inUV, tabType, tabVer, err)
Destroy specified table
Returns 0 on success
inUV = Python UV object
tabType = Table type, e.g. "AIPS AN"
tabVer = table version, integer
err = Python Obit Error/message stack
newPACNO(disk, cno, exists, err, verbose=True, nvis=1000)
Create and initialize an AIPS based UV structure
Create, set initial access information
and if exists verifies the file.
Sets buffer to hold 1000 vis.
Returns the Python UV object
isOK member set to indicate success
disk = AIPS directory number
cno = AIPS catalog number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
nvis = Number of visibilities read/written per call
newPAUV(name, Aname, Aclass, disk, seq, exists, err, verbose=True, nvis=1000)
Create and initialize an AIPS based UV structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
Sets buffer to hold 1000 vis.
Returns the Python UV object
isOK member set to indicate success
name = name desired for object (labeling purposes)
Aname = AIPS name of file
Aclass = AIPS class of file
seq = AIPS sequence number of file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
nvis = Number of visibilities read/written per call
newPFUV(name, filename, disk, exists, err, verbose=True, nvis=1000)
Create and initialize an FITS based UV structure
Create, set initial access information (full image, plane at a time)
and if exists verifies the file.
Sets buffer to hold 1000 vis.
Returns the Python UV object
isOK member set to indicate success
name = name desired for object (labeling purposes)
filename = name of FITS file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
        verbose = If true give error messages, else suppress
nvis = Number of visibilities read/written per call
\end{verbatim}
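A corresponding sketch for UV data, using only functions documented
above, attaches to an existing AIPS data set, creates a new output file
and averages the data in time; the AIPS name, classes, disk and sequence
numbers are hypothetical.
\begin{verbatim}
>>> uv  = UV.newPAUV("in",  "MYDATA", "UVDATA", 1, 1, True,  err)  # existing file
>>> avg = UV.newPAUV("out", "MYDATA", "UVAVG",  1, 1, False, err)  # new output file
>>> avg = UV.PUtilAvgT(uv, avg, err, timeAvg=0.5)  # average to 0.5 min
>>> avg.Header(err)                                # list the resulting header
\end{verbatim}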
\subsection{Obit python OTF class}
To access the OTF class, your PYTHONPATH variable should include the
ObitSD/python directory before the Obit/python directory.
Then in ObitTalk:
\begin{verbatim}
>>> import OTF
\end{verbatim}
to make the OTF classes available.
Further functions are available in the OTFUtil, CCBUtil, CleanOTF,
CleanOTFRec, GBTDCROTF, OTFGetAtmCor, OTFGetSoln, and OTFSoln2Cal
python modules.
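Several routines in the OTF module take their control parameters as
python dictionaries named after the routine with ``Input'' appended, as
described in the class documentation below.  A minimal, hypothetical
sketch of that pattern for the AtmCal routine, assuming an OTF object
otf and the usual err stack, is:
\begin{verbatim}
>>> OTF.input(OTF.AtmCalInput)    # list the control parameters and defaults
>>> ci = OTF.AtmCalInput          # default dictionary; edit values, not types
>>> ci["InData"] = otf            # OTF data to calibrate
>>> ci["solint"] = 30.0           # solution interval (sec)
>>> ci["tau0"]   = 0.01           # zenith opacity (nepers)
>>> ver = OTF.AtmCal(err, ci)     # returns OTFSoln table version on success
\end{verbatim}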
The following describes the OTF class.
\begin{verbatim}
NAME
OTF - Python Obit "On-the-fly" (OTF) single dish data class
DESCRIPTION
This class contains single dish data and allows access.
An ObitOTF is the front end to a persistent disk resident structure.
    There may be (usually are) associated tables which either describe
the data or contain calibration and/or editing information.
OTF Members with python interfaces:
List - used to pass instructions to processing
Desc - Astronomical labeling of the image
TableList - List of tables attached
RecBuf - memory pointer into I/O Buffer
Additional Functions are available in OTFUtil, OTFSoln2Cal, OTFGetSoln,
OTFGetAtmCor, CleanOTF
There are a number of utility routines in this module which take
control parameters in the form of python dictionaries
(e.g. AtmCal, Clean, Concat, Image, ResidCal, Soln2Cal, Split)
which each have defined dictionaries with default values and names of the
routine and "Input" appended.
    Care should be taken not to change the data types of the entries in these
dictionaries.
These dictionaries can be listed in semi human readable form using the OTF.input
function.
Data selection, calibration and editing parameters on List member
"doCalSelect" bool (1,1,1) Select/calibrate/edit data?
"doCalib" int (1,1,1) >0 -> calibrate,
"gainUse" int (1,1,1) SN/CL table version number, 0-> use highest
"flagVer" int (1,1,1) Flag table version, 0-> use highest, <0-> none
"BChan" int (1,1,1) First spectral channel selected. [def all]
"EChan" int (1,1,1) Highest spectral channel selected. [def all]
"Targets" string (?,?,1) Target names selected. [def all]
"timeRange" float (2,1,1) Selected timerange in days. [def all]
"Scans" int (2,1,1) Lowest and highest selected scan numbers. [def all]
"Feeds" int (?,1,1) a list of selected feed numbers, [def all.]
"keepCal" bool (1,1,1) If true keep cal-on data, otherwise drop [def True.]
CLASSES
OData.OData(OData.ODataPtr)
OTF
class OTF(OData.OData)
Python Obit "On-the-fly" (OTF) single dish data class
This class contains single dish data and allows access.
An ObitOTF is the front end to a persistent disk resident structure.
        There may be (usually are) associated tables which either describe
the data or contain calibration and/or editing information.
OTF Members with python interfaces:
List - used to pass instructions to processing
Desc - Astronomical labeling of the image
TableList - List of tables attached
RecBuf - memory pointer into I/O Buffer
Method resolution order:
OTF
OData.OData
OData.ODataPtr
Methods defined here:
Clone(self, outOTF, err)
Make a copy of a object but do not copy the actual data
This is useful to create an OTF similar to the input one.
self = Python OTF object
outOTF = Output Python OTF object, must be defined
err = Python Obit Error/message stack
Close(self, err)
Close a OTF persistent (disk) form
returns 0 on success, else failure
self = Python OTF object
err = Python Obit Error/message stack
Copy(self, outOTF, err)
Make a deep copy of input object.
Makes structure the same as self, copies data, tables
self = Python OTF object to copy
outOTF = Output Python OTF object, must be defined
err = Python Obit Error/message stack
Header(self, err)
Write image header on output
self = Python Obit OTF object
err = Python Obit Error/message stack
Info(self, err)
Get underlying data file info
self = Python Obit OTF object
err = Python Obit Error/message stack
NewTable(self, access, tabType, tabVer, err, numDet=1,
numPoly=0, numParm=0)
Return the specified associated table
self = Python OTF object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "OTFSoln"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
Optional parameters, values only used if table created
numDet = Number of Detectors (OTFCal, OTFSoln, OTFScanData)
numPoly = Number of polynomial terms (OTFCal, OTFSoln)
numParm = Number of model parameters (OTFModel)
OTFIsA(self)
Tells if input really a Python Obit OTF
return true, false (1,0)
self = Python OTF object
Open(self, access, err)
Open a OTF data persistent (disk) form
Returns 0 on success, else failure
self = Python OTF object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
Read(self, err)
Read a OTF persistent (disk) form
Reads into buffer attached to OTF data, use VisBuf for access
Returns 0 on success, else failure
self = Python OTF object
err = Python Obit Error/message stack
ReadRec(self, err)
Read a OTF persistent (disk) form
Returns OTFRec structure from next record
self = Python OTF object
err = Python Obit Error/message stack
Scratch(self, err)
Create a scratch file suitable for accepting the data to be read from self
A scratch OTF is more or less the same as a normal OTF except that it is
automatically deleted on the final unreference.
self = Python OTF object
err = Python Obit Error/message stack
UpdateDesc(self, err, Desc=None)
Update any disk resident structures about descriptor
self = Python OTF object
err = Python Obit Error/message stack
Desc = Descriptor, if None then use current descriptor
        Contents can be accessed through the Dict member
Write(self, err)
Write a OTF persistent (disk) form
Writes buffer attached to OTF data, use VisBuf for access
returns 0 on success, else failure
self = Python OTF object
err = Python Obit Error/message stack
WriteRec(self, outRec, err)
Write a OTF persistent (disk) form
Writes buffer attached to OTF data, use VisBuf for access
returns 0 on success, else failure
self = Python OTF object
outRec = OTFRec structure to write
err = Python Obit Error/message stack
__del__(self)
__getattr__(self, name)
__init__(self, name)
__repr__(self)
__setattr__(self, name, value)
cast(self, toClass)
Casts object pointer to specified class
self = object whose cast pointer is desired
toClass = Class string to cast to ("ObitOTF")
----------------------------------------------------------------------
Methods inherited from OData.OData:
CopyTables(self, outOData, exclude, include, err)
Copy Tables from one OData to another
self = Python OData object
outOData = Output Python OData object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
    Dirty(self)
        Mark OData as needing a header update to disk file
self = Python OData object
FullInstantiate(self, access, err)
Fully instantiate an OData by opening and closing
return 0 on success, else failure
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
GetHighVer(self, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
self = Python OData object
tabType = Table type, e.g. "OTFSoln"
GetName(self)
Tells OData object name (label)
returns name as character string
self = Python OData object
History(self, access, err)
Return the associated History
self = Python OData object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
IsScratch(self)
Tells if OData is a scratch object
return true, false (1,0)
self = Python OData object
ODataIsA(self)
Tells if input really a Python Obit OData
return true, false (1,0)
self = Python OData object
Rename(self, err, newFITSName=None, newAIPSName=' ',
newAIPSClass=' ', newAIPSSeq=0)
Rename underlying files
self = Python OData object
err = Python Obit Error/message stack
For FITS files:
newFITSName = new name for FITS file
For AIPS:
newAIPSName = New AIPS Name (max 12 char) Blank => don't change.
newAIPSClass = New AIPS Class (max 6 char) Blank => don't change.
newAIPSSeq = New AIPS Sequence number, 0 => unique value
UpdateTables(self, err)
Update any disk resident structures about the current tables
Returns 0 on success
self = Python Image object
err = Python Obit Error/message stack
Zap(self, err)
Delete underlying files and the basic object.
self = Python OData object
err = Python Obit Error/message stack
ZapTable(self, tabType, tabVer, err)
Destroy specified table
Returns 0 on success
self = Python OData object
tabType = Table type, e.g. "AIPS CC"
tabVer = table version, integer
err = Python Obit Error/message stack
FUNCTIONS
AtmCal(err, input= AtmCalInput )
Basic atmospheric calibration.
Applies Atmospheric calibration and optionally gross pointing offsets
Returns the version number of the Soln Table on success.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = input Python OTF to calibrate
solint = solution interval (sec)
tau0 = zenith opacity (nepers)
minEl = minimum elevation (deg)
tTemp = effective atmospheric temperature (per detector)
tRx = Receiver temperature per detector (K)
calJy = Noise cal value in Jy per detector
raOff = RA pointing offset (deg)
decOff = Dec pointing offset (deg)
ClearCal(inOTF, err)
Delete calibration tables on an OTF
Removes all OTFSoln and OTFCal tables
inOTF = Extant Python OTF
err = Python Obit Error/message stack
Concat(err, input={'InData': None, 'OutData': None})
Concatenates OTFs.
Copies InData to the end of OutData.
The files must be compatible (not checked)
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = Python input OTF to calibrate
OutData = Python output OTF, must be previously defined
MBBaseCal(err, input=MBBaseCalInput)
Continuum baseline fitting for multibeam instrument.
Fit a single-term, time-variable, common atmospheric polynomial and a single offset
per detector.
Since the different detectors each have an individual multiplicative term, the
atmospheric term + offset are placed in the detector's additive term and the
polynomial is set to zero.
Scans in excess of 5000 samples will be broken into several.
Returns the version number of the Soln Table on success.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = input Python OTF to calibrate
solint = solution interval (sec), entries 4 times per SolInt
order = polynomial order
clipsig = Data outside of +/- clipsig ignored [def large]
plotdet = Detector number (1-rel) to plot per scan [def =-1 = none]
minEl = minimum elevation (deg)
gainuse = version number of prior table (Soln or Cal) to apply, -1 is none
flagver = version number of flagging table to apply, -1 is none
ObitName(ObitObject)
Return name of an Obit object or input if not an Obit Object
PClone(inOTF, outOTF, err)
Make a copy of a object but do not copy the actual data
This is useful to create an OTF similar to the input one.
inOTF = Python OTF object
outOTF = Output Python OTF object, must be defined
err = Python Obit Error/message stack
PClose(inOTF, err)
Close an OTF persistent (disk) form
inOTF = Python OTF object
err = Python Obit Error/message stack
PConcat(inOTF, outOTF, err)
Copy data from inOTF to the end of outOTF
inOTF = Python OTF object
outOTF = Output Python OTF object, must be defined
err = Python Obit Error/message stack
PCopy(inOTF, outOTF, err)
Make a deep copy of input object.
Makes structure the same as inOTF, copies data, tables
inOTF = Python OTF object to copy
outOTF = Output Python OTF object, must be defined
err = Python Obit Error/message stack
PCopyTables(inOTF, outOTF, exclude, include, err)
Copy Tables from one OTF to another
inOTF = Python OTF object
outOTF = Output Python OTF object, must be defined
exclude = list of table types to exclude (list of strings)
has priority
include = list of table types to include (list of strings)
err = Python Obit Error/message stack
PDirty(inOTF)
Mark OTF as needing a header update to disk file
inOTF = Python OTF object
PFullInstantiate(inOTF, access, err)
Fully instantiate an OTF by opening and closing
return 0 on success, else failure
inOTF = Python OTF object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PGetDesc(inOTF)
Return the member OTFDesc
returns OTFDesc as a Python Dictionary
inOTF = Python OTF object
PGetHighVer(inOTF, tabType)
Get highest version number of a specified Table
returns highest tabType version number, 0 if none.
inOTF = Python OTF object
tabType = Table type, e.g. "OTFSoln"
PGetList(inOTF)
Return the member InfoList
returns InfoList
inOTF = Python OTF object
PGetName(inOTF)
Tells OTF object name (label)
returns name as character string
inOTF = Python OTF object
PGetRecBuf(inOTF)
PGetTableList(inOTF)
Return the member tableList
returns tableList
inOTF = Python OTF object
PHeader(inOTF, err)
Print data descriptor
inOTF = Python Obit OTF object
err = Python Obit Error/message stack
PIsA(inOTF)
Tells if input really a Python Obit OTF
return true, false (1,0)
inOTF = Python OTF object
PIsScratch(inOTF)
Tells if OTF is a scratch object
return true, false (1,0)
inOTF = Python OTF object
PNewOTFTable(inOTF, access, tabType, tabVer, err, numDet=1, numPoly=0, numParm=0)
Return the specified associated table
inOTF = Python OTF object
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
tabType = Table type, e.g. "OTFSoln"
tabVer = table version, if > 0 on input that table returned,
if 0 on input, the highest version is used.
err = Python Obit Error/message stack
Optional parameters, values only used if table created
numDet = Number of Detectors (OTFCal, OTFSoln, OTFScanData)
numPoly = Number of polynomial terms (OTFCal, OTFSoln)
numParm = Number of model parameters (OTFModel)
POTFInfo(inOTF, err)
Get file info for an extant OTF data object
Fills in information on object, useful for scratch files
inOTF = Python OTF object
err = Python Obit Error/message stack
POpen(inOTF, access, err)
Open an OTF persistent (disk) form
Returns 0 on success, else failure
inOTF = Python OTF object
access = access 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PRename(inOTF, err, newFITSName=None)
Rename underlying files
inOTF = Python OTF object
err = Python Obit Error/message stack
For FITS files:
newFITSName = new name for FITS file
PScratch(inOTF, err)
Create a scratch file suitable for accepting the data to be read from inOTF
A scratch OTF is more or less the same as a normal OTF except that it is
automatically deleted on the final unreference.
inOTF = Python OTF object
err = Python Obit Error/message stack
PSetTarget(inOTF, Target, Flux, RA, Dec, err)
Set target flux density and position
inOTF = Python OTF object
Target = Target name
Flux = Target Flux density
RA = RA in deg at mean equinox and epoch
Dec = Dec in deg at mean equinox and epoch
err = Python Obit Error/message stack
PUpdateDesc(inOTF, err, Desc=None)
Update external representation of descriptor
inOTF = Python OTF object
err = Python Obit Error/message stack
Desc = OTF descriptor, if None then use current descriptor
PUpdateTables(inOTF, err)
Update any disk resident structures about the current tables
inOTF = Python OTF object
err = Python Obit Error/message stack
PZap(inOTF, err)
Delete underlying files and the basic object.
inOTF = Python OTF object
err = Python Obit Error/message stack
PZapTable(inOTF, tabType, tabVer, err)
Destroy specified table
inOTF = Python OTF object
tabType = Table type, e.g. "OTFSoln"
tabVer = table version, integer
err = Python Obit Error/message stack
PolyBLCal(err, input=PolyBLCalInput)
Polynomial baseline fit to residual data
Each solution interval in a scan is median averaged
(average of 9 points around the median) and then a polynomial fitted.
Returns the version number of the Soln Table on success.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = input Python OTF to calibrate
solint = solution interval (sec)
order = polynomial order
minEl = minimum elevation (deg)
gainuse = version number of prior table (Soln or Cal) to apply, -1 is none
flagver = version number of flagging table to apply, -1 is none
ResidCal(err, input=ResidCalInput)
Determine residual calibration for an OTF.
Determines a solution table for an OTF by one of a number of techniques using
residuals from a model image.
Returns the version number of the Soln Table on success.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = Python input OTF to calibrate
Model = Python input model FArray, "None" means do not subtract model image
ModelDesc= Python input model ImageDesc
minFlux = Minimum brightness in model
solint = solution interval (sec)
solType = solution type:
"Gain" solve for multiplicative term from "cals" in data.
(solint, minRMS, minEl, calJy)
"Offset" Solve for additive terms from residuals to the model.
(solint, minEl)
"GainOffset" Solve both gain and offset
(solint, minRMS, minEl, calJy)
"Filter" Additive terms from filters residuals to the model.
(solint, minEl)
"multiBeam" Multibeam solution
(solint, minEl)
minEl = minimum elevation (deg)
minRMS = Minimum RMS residual to solution
calJy = Noise cal value in Jy per detector
gainuse = version number of prior table (Soln or Cal) to apply, -1 is none
flagver = version number of flagging table to apply, -1 is none
SelfCal(err, ImageInp=ImageInput, Soln2CalInp=Soln2CalInput)
Self calibrate an OTF
Image an OTF, optionally Clean, determine residual calibration,
apply to Soln to Cal table. If the Clean is done, then the CLEAN result is
used as the model in the ResidCal, otherwise the dirty image from Image is.
err = Python Obit Error/message stack
ImageInp = input parameter dictionary for Image
CleanInp = input parameter dictionary for Clean, "None"-> no Clean requested
May be modified to point to the result of the Image step
ResidCalInp = input parameter dictionary for ResidCal
Will be modified to give correct derived model image
Soln2CalInp = input parameter dictionary for Soln2Cal
Soln2Cal(err, input=Soln2CalInput)
Apply a Soln (solution) table to a Cal (calibration) table.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = Python input OTF to calibrate
soln = Soln table version number to apply, 0-> high
oldCal = input Cal table version number, -1 means none, 0->high
newCal = output Cal table version number, 0->new
Split(err, input=SplitInput)
Select and calibrate an OTF writing a new one.
Applies calibration and editing/selection to inData and writes outData.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = input Python OTF to calibrate
OutData = output Python OTF, must be previously defined
average = if true average in frequency
gainuse = version number of prior table (Soln or Cal) to apply, -1 is none
flagver = version number of flagging table to apply, -1 is none
input(inputDict)
Print the contents of an input Dictionary
inputDict = Python Dictionary containing the parameters for a routine
There should be a member of the dictionary ('structure') with a value
being a list containing:
1) The name for which the input is intended (string)
2) a list of tuples consisting of (parameter name, doc string)
with an entry for each parameter in the dictionary.
The display of the inputs dictionary will be in the order of
the tuples, with the doc string shown after each value.
An example:
Soln2CalInput={'structure':['Soln2Cal',[('InData','Input OTF'),
('soln','input soln table version'),
('oldCal','input cal table version, -1=none'),
('newCal','output cal table')]],
'InData':None, 'soln':0, 'oldCal':-1, 'newCal':0}
makeImage(err, input=ImageInput)
Image an OTF.
Data is convolved and resampled onto the specified grid.
Image is created and returned on success.
err = Python Obit Error/message stack
input = input parameter dictionary
Input dictionary entries:
InData = input Python OTF to image
OutName = name of output image file
Disk = disk number for output image file
ra = center RA (deg)
dec = center Dec (deg)
nx = number of pixels in "x" = RA
ny = number of pixels in 'Y' = dec
xCells = Cell spacing in x (asec)
yCells = Cell spacing in y (asec)
minWt = minimum summed weight in gridded image [def 0.1]
ConvType= Convolving function Type 0=pillbox,3=Gaussian,4=exp*sinc,5=Sph wave
ConvParm= Convolving function parameters depends on ConvType
Type 2 = Sinc, (poor function - don't use)
Parm[0] = halfwidth in cells,
Parm[1] = Expansion factor
Type 3 = Gaussian,
Parm[0] = halfwidth in cells,[def 3.0]
Parm[1] = Gaussian width as fraction of raw beam [def 1.0]
Type 4 = Exp*Sinc
Parm[0] = halfwidth in cells, [def 2.0]
Parm[1] = 1/sinc factor (cells) [def 1.55]
Parm[2] = 1/exp factor (cells) [def 2.52]
Parm[3] = exp power [def 2.0]
Type 5 = Spheroidal wave
Parm[0] = halfwidth in cells [def 3.0]
Parm[1] = Alpha [def 5.0]
Parm[2] = Expansion factor [not used]
gainuse = version number of prior table (Soln or Cal) to apply, -1 is none
flagver = version number of flagging table to apply, -1 is none
doBeam = Beam convolved with convolving Fn image desired? [def True]
Beam = Actual instrumental Beam to use, else Gaussian [def None]
newPOTF(name, filename, disk, exists, err, nrec=1000)
Create and initialize an OTF structure
Create, set initial access information (nrec records)
and if exists verifies the file.
Returns the Python OTF object
name = name desired for object (labeling purposes)
filename = name of FITS file
disk = FITS directory number
exists = if true then the file is opened and closed to verify
err = Python Obit Error/message stack
nrec = Number of records read/written per call
DATA
AtmCalInput = {'InData': None, 'aTemp': [0.0, 0.0], 'calJy': [1.0, 1.0...
ConcatInput = {'InData': None, 'OutData': None}
ImageInput = {'Beam': None, 'ConvParm': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0,...
MBBaseCalInput = {'InData': None, 'clipsig': 1e+20, 'flagver': -1, 'ga...
PolyBLCalInput = {'InData': None, 'flagver': -1, 'gainuse': -1, 'minEl...
ResidCalInput = {'Clip': 1e+20, 'InData': None, 'Model': None, 'ModelD...
Soln2CalInput = {'InData': None, 'newCal': 0, 'oldCal': -1, 'soln': 0,...
SplitInput = {'InData': None, 'OutData': None, 'average': 0, 'flagver'...
\end{verbatim}
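To illustrate how these pieces fit together, here is a minimal sketch (not taken from the
Obit distribution; the file name and parameter values are invented, and the routines are
assumed to live in the OTF module as in the help listing above) that attaches to an OTF
FITS file and runs the basic atmospheric calibration:
\begin{verbatim}
# Assumes an ObitTalk/Obit python session in which the OTF and OErr modules
# are available and err is the standard Obit error/message stack.
inOTF = OTF.newPOTF("GCdata", "OTFdata.fits", 1, True, err, nrec=1000)
OErr.printErrMsg(err, "Error creating OTF object")

atmInput = OTF.AtmCalInput        # default input dictionary (modified in place)
atmInput['InData'] = inOTF
atmInput['solint'] = 30.0         # solution interval (sec)
atmInput['tau0']   = 0.05         # zenith opacity (nepers)
atmInput['minEl']  = 10.0         # minimum elevation (deg)
solnVer = OTF.AtmCal(err, atmInput)   # returns OTFSoln table version
OErr.printErrMsg(err, "Error in AtmCal")
\end{verbatim}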
\subsection{Obit python Table Class}
Obit Table class objects can be created as shown in the following:
\begin{verbatim}
inUV=UV.newPAUV("UV", "20050415", "LINE", 1, 1, True,err)
tabType="AIPS SU"
tabVer=1
access=UV.READONLY
su = inUV.NewTable(access,tabType,tabVer,err)
\end{verbatim}
If a new table is being created, some optional parameters may be needed
depending on the table type (see help(UV) description of NewTable).
The table header (descriptor) can be obtained as a python Dict:
\begin{verbatim}
h = su.Desc.Dict
\end{verbatim}
Data from a row in the table can be obtained as a python Dict:
\begin{verbatim}
su.Open(access,err)
row1 = su.ReadRow(1, err)
OErr.printErrMsg(err, "Error reading")
su.Close(err)
print "row1",row1
\end{verbatim}
Note: these dict structures are independent of the underlying data structures.
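A modified row can be written back with WriteRow. The following sketch (the column name
and value are purely illustrative) reopens the table read/write, updates row 1 and
rewrites it:
\begin{verbatim}
su.Open(UV.READWRITE, err)
row1 = su.ReadRow(1, err)        # row contents as a python Dict
row1['EPOCH'] = [2000.0]         # column values are lists; name is illustrative
su.WriteRow(1, row1, err)
su.Close(err)
OErr.printErrMsg(err, "Error updating table row")
\end{verbatim}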
The following describes the Obit Table class.
\begin{verbatim}
NAME
Table - Python Obit Table class
DESCRIPTION
This class contains tabular data and allows access.
An ObitTable is the front end to a persistent disk resident structure.
Both FITS (as Tables) and AIPS cataloged data are supported.
Table Members with python interfaces:
InfoList - used to pass instructions to processing
Table header keywords for specific table types are available in the keys
member of a Table after the table has been opened. These will be updated
to disk when the table is closed.
CLASSES
TablePtr
Table
class Table(TablePtr)
Methods defined here:
Close(self, err)
Close a table persistent (disk) form
Specific table type keywords are written from the "keys" dict member
self = Python Table object
err = Python Obit Error/message stack
Open(self, access, err)
Open a table persistent (disk) form
Specific table type keywords are written to the "keys" dict member
self = Python Table object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
ReadRow(self, rowno, err)
Read a specified row in a table and return it as a python Dict
self = Python Table object
rowno = row number (1-rel) to read
err = Python Obit Error/message stack
WriteRow(self, rowno, rowDict, err)
Write a table persistent (disk) form from a specified Dict
Writes a single row
self = Python Table object
rowno = row number (1-rel) to write
rowDict = Python Dict of same form as returned by PReadRow
err = Python Obit Error/message stack
Zap(self, err)
Delete underlying files and the basic object.
self = Python Table object
err = Python Obit Error/message stack
__del__(self)
__init__(self, name)
----------------------------------------------------------------------
Methods inherited from TablePtr:
__getattr__(self, name)
__repr__(self)
__setattr__(self, name, value)
class TablePtr
Methods defined here:
__getattr__(self, name)
__init__(self, this)
__repr__(self)
__setattr__(self, name, value)
FUNCTIONS
PClone(inTab, outTab)
Copy the structure of a Table
inTab = input Python Table
outTab = extant output Python Obit Table or None
PClose(inTab, err)
Close a table persistent (disk) form
Specific table type keywords are written from the "keys" dict member
inTab = Python Table object
err = Python Obit Error/message stack
PConcat(inTab, outTab, err)
Copy row data from inTab to the end of outTab
inTab = input Python Obit Table
outTab = extant output Python Obit Table
err = Python Obit Error/message stack
PCopy(inTab, outTab, err)
Copy a Table including persistent forms
inTab = input Python Obit Table
outTab = extant output Python Obit Table
err = Python Obit Error/message stack
PDirty(inTable)
Mark Table as needing a header update to disk file
inTable = Python Table object
PFullInstantiate(inTab, access, err)
Open and close to fully instantiate
return 0 on success, else failure
inTab = input Python Table
access = access code 1=READONLY, 2=WRITEONLY, 3=READWRITE
err = Python Obit Error/message stack
PGetDesc(inTab)
Return the TableDesc from a Table
returns TableDesc
inTab = input Python Table
PGetIODesc(inTab)
Return the TableDesc from a Table's IO member
returns TableDesc from IO member (disk resident version)
if the IO member is not defined a None is returned.
For most reliable results, this routine should be called when
the table is opened with Write allowed.
inTab = input Python Table
PGetIOList(inTab)
Return the InfoList from a Table's IO member
returns InfoList from IO member (disk resident version)
if the IO member is not defined a None is returned.
For most reliable results, this routine should be called when
the table is opened with Write allowed.
inTab = input Python Table
PGetList(inTab)
Return the InfoList from a Table
returns InfoList
inTab = input Python Table
PGetName(inTab)
Returns object name (label)
return name string
inTab = input Python Table
PGetVer(inTab)
Get table version number
returns table version number
inTab = input Python Table
PIsA(inTab)
Tells if object thinks it's a Python Obit Table
return true, false (1,0)
inTab = input Python Table
POpen(inTab, access, err)
Open a table persistent (disk) form
Specific table type keywords are written to the "keys" dict member
inTab = Python Table object
access = access READONLY (1), WRITEONLY (2), READWRITE(3)
err = Python Obit Error/message stack
PReadRow(inTab, rowno, err)
Read a specified row in a table and returns as a python Dict
Dict has keys:
"Table name" to give the name of the table
Field named (column labels)
data are returned as a list of the field data type.
inTab = Python Table object
rowno = row number (1-rel) to read
err = Python Obit Error/message stack
PSort(inTab, colName, desc, err)
Sort a table by contents of a column
inTab = input Python Obit Table to sort
colName = Column name (e.g. "Time")
desc = if true sort in descending order, else ascending
err = Python Obit Error/message stack
PUnref(inTab)
Decrement reference count
Decrement reference count which will destroy object if it goes to zero
Python object stays defined.
inTab = Python Table object
PWriteRow(inTab, rowno, rowDict, err)
Write a table persistent (disk) form from a specified Dict
Writes a single row
inTab = Python Table object
rowno = row number (1-rel) to write
rowDict = Python Dict of same form as returned by PReadRow
err = Python Obit Error/message stack
PZap(inTab, err)
Destroy the persistent form of a Table
inTab = input Python Obit Table
err = Python Obit Error/message stack
DATA
READONLY = 1
READWRITE = 3
WRITEONLY = 2
\end{verbatim}
\section{ObitTalk Data Classes}
The ObitTalk classes AIPSUVData, AIPSImage, FITSUVData and
FITSImage allow local or remote access to AIPS and FITS Images
and UV data.
Functions in these data classes work for data on remote nodes.
Details of these class interfaces can be viewed using:
\begin{verbatim}
>>> help(AIPSUVData)
>>> help(AIPSImage)
>>> help(FITSUVData)
>>> help(FITSImage)
\end{verbatim}
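For example, a short hypothetical session (the catalogue name, class, disk and sequence
are invented, and the usual (name, class, disk, seq) constructor is assumed) might look
like:
\begin{verbatim}
>>> uvdata = AIPSUVData("20050415", "LINE", 1, 1)
>>> uvdata.exists()                  # True if the catalogue entry is present
>>> hd = uvdata.header()             # header as a dictionary
>>> tabs = uvdata.tables()           # list of extension tables
>>> uvdata.zap_table("AIPS CL", -1)  # delete all versions of the CL table
\end{verbatim}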
\subsection{AIPSUVData}
\begin{verbatim}
class AIPSUVData(_AIPSData)
This class describes an AIPS UV data set.
Methods inherited from _AIPSData:
exists(self)
Check whether this image or data set exists.
Returns True if the image or data set exists, False otherwise.
getrow_table(self, type, version, rowno)
Get a row from an extension table.
Returns row ROWNO from version VERSION of extension table TYPE
as a dictionary.
header(self)
Get the header for this image or data set.
Returns the header as a dictionary.
header_table(self, type, version)
Get the header of an extension table.
Returns the header of version VERSION of the extension table
TYPE.
table(self, type, version)
table_highver(self, type)
Get the highest version of an extension table.
Returns the highest available version number of the extension
table TYPE.
tables(self)
Get the list of extension tables.
verify(self)
Verify whether this image or data set can be accessed.
zap(self)
Destroy this image or data set.
zap_table(self, type, version)
Destroy an extension table.
Deletes version VERSION of the extension table TYPE. If
VERSION is 0, delete the highest version of table TYPE. If
VERSION is -1, delete all versions of table TYPE.
Properties inherited from _AIPSData:
disk
Disk where this data set is stored.
klass
Class of this data set.
name
Name of this data set.
seq
Sequence number of this data set.
userno
User number used to access this data set.
\end{verbatim}
\subsection{AIPSImage}
\begin{verbatim}
class AIPSImage(_AIPSData)
This class describes an AIPS image.
Methods defined here:
display(self, dispURL='http://localhost:8765/RPC2')
Display an image.
Displays image on ObitView server on dispURL
dispURL = URL of ObitView server on which to display
Returns True if successful
Methods inherited from _AIPSData:
exists(self)
Check whether this image or data set exists.
Returns True if the image or data set exists, False otherwise.
getrow_table(self, type, version, rowno)
Get a row from an extension table.
Returns row ROWNO from version VERSION of extension table TYPE
as a dictionary.
header(self)
Get the header for this image or data set.
Returns the header as a dictionary.
header_table(self, type, version)
Get the header of an extension table.
Returns the header of version VERSION of the extension table
TYPE.
table(self, type, version)
table_highver(self, type)
Get the highest version of an extension table.
Returns the highest available version number of the extension
table TYPE.
tables(self)
Get the list of extension tables.
verify(self)
Verify whether this image or data set can be accessed.
zap(self)
Destroy this image or data set.
zap_table(self, type, version)
Destroy an extension table.
Deletes version VERSION of the extension table TYPE. If
VERSION is 0, delete the highest version of table TYPE. If
VERSION is -1, delete all versions of table TYPE.
Properties inherited from _AIPSData:
disk
Disk where this data set is stored.
klass
Class of this data set.
name
Name of this data set.
seq
Sequence number of this data set.
userno
User number used to access this data set.
\end{verbatim}
\subsection{FITSUVData}
\begin{verbatim}
class FITSUVData(_FITSData)
This class describes a FITS UV data set.
Methods inherited from _FITSData:
exists(self)
Check whether this image or data set exists.
Returns True if the image or data set exists, False otherwise.
getrow_table(self, type, version, rowno)
Get a row from an extension table.
Returns row ROWNO from version VERSION of extension table TYPE
as a dictionary.
header(self)
Get the header for this image or data set.
Returns the header as a dictionary.
header_table(self, type, version)
Get the header of an extension table.
Returns the header of version VERSION of the extension table
TYPE.
table(self, type, version)
table_highver(self, type)
Get the highest version of an extension table.
Returns the highest available version number of the extension
table TYPE.
tables(self)
Get the list of extension tables.
verify(self)
Verify whether this image or data set can be accessed.
zap(self)
Destroy this image or data set.
zap_table(self, type, version)
Destroy an extension table.
Deletes version VERSION of the extension table TYPE. If
VERSION is 0, delete the highest version of table TYPE. If
VERSION is -1, delete all versions of table TYPE.
Properties inherited from _FITSData:
disk
Disk where this data set is stored.
filename
Filename of this data set.
\end{verbatim}
\subsection{FITSImage}
\begin{verbatim}
class FITSImage(_FITSData)
This class describes a FITS image.
Methods inherited from _FITSData:
exists(self)
Check whether this image or data set exists.
Returns True if the image or data set exists, False otherwise.
getrow_table(self, type, version, rowno)
Get a row from an extension table.
Returns row ROWNO from version VERSION of extension table TYPE
as a dictionary.
header(self)
Get the header for this image or data set.
Returns the header as a dictionary.
header_table(self, type, version)
Get the header of an extension table.
Returns the header of version VERSION of the extension table
TYPE.
table(self, type, version)
table_highver(self, type)
Get the highest version of an extension table.
Returns the highest available version number of the extension
table TYPE.
tables(self)
Get the list of extension tables.
verify(self)
Verify whether this image or data set can be accessed.
zap(self)
Destroy this image or data set.
zap_table(self, type, version)
Destroy an extension table.
Deletes version VERSION of the extension table TYPE. If
VERSION is 0, delete the highest version of table TYPE. If
VERSION is -1, delete all versions of table TYPE.
Properties inherited from _FITSData:
disk
Disk where this data set is stored.
filename
Filename of this data set.
\end{verbatim}
\end{document}
% for lists
\begin{enumerate}
\item \hfil\break
\end{enumerate}
% for figures
\begin{figure}
\centering
\includegraphics[angle=-90,height=3in]{graphic.eps}
\caption{
}
\label{graphic}
\end{figure}
% Example figures
\begin{figure}
\centering
\includegraphics[height=3.5in]{ZerkFig1.eps}
\includegraphics[height=3.5in]{ZerkFig2.eps}
\centerline{
\psfig{figure=ZerkFig3.eps,height=3.5in}
\psfig{figure=ZerkFig4.eps,height=3.5in}
}
\caption{
Top: Fitted model ionospheric OPD screen rendered as a plane in 3-D
viewed from different angles.
The surface representing the OPD screen is shown with its projection
onto the bottom of the box.\hfill\break
Bottom: As above but without the linear gradients to emphasize the
curvature.
The radius shown is 10$^\circ$.
Models computed and plotted by
http://wyant.opt-sci.arizona.edu/zernikes/zernikes.htm.
}
\label{IonModel}
\end{figure}
% Example table
\begin{table}[t]
\caption{Observing dates}
\vskip 0.1in
\begin{center}
\begin{tabular}{|l|c|c|} \hline
\hline
Date & Start IAT & End IAT \\
\hline
12 October 1998 & 21 00 & 21 15\\
22 January 2000 & 17 00 & 29 00$^1$\\
23 January 2000 & 17 00 & 29 00$^2$\\
\hline
\end{tabular}
\end{center}
\hfill\break
Notes:\hfill\break
$^1$ Times beyond 24 are the next day\hfill\break
$^2$ Not including 20 30 to 23 00\hfill\break
\label{Observations}
\end{table}
\chapter{WAN concepts}
\section{WAN Technologies overview}
\subsection{Topology}
A WAN operates beyond the geographic scope of a LAN. WANs are used to interconnect the enterprise LAN to remote LANs in branch sites and telecommuter sites. A WAN is owned by a service provider whereas a LAN is typically owned by an organization. An organization must pay a fee to use the WAN service provider’s network services to connect remote sites.\\
\begin{figure}[hbtp]
\centering
\subfigure[Point-to-point]
{
\includegraphics[width=0.4\textwidth]{pictures/WANtopology1.PNG}
\label{WANtopology1}
}
\subfigure[Hub-and-Spoke]
{
\includegraphics[width=0.4\textwidth]{pictures/WANtopology2.PNG}
\label{WANtopology2}
}
\subfigure[Full Mesh]
{
\includegraphics[width=0.4\textwidth]{pictures/WANtopology3.PNG}
\label{WANtopology3}
}
\subfigure[Dual-homed]
{
\includegraphics[width=0.4\textwidth]{pictures/WANtopology4.PNG}
\label{WANtopology4}
}
\caption{Four common WAN topologies}
\end{figure}
\paragraph{Point-to-Point topology} employs a point-to-point circuit between two endpoints (Figure \ref{WANtopology1}). Typically involves a dedicated leased-line connection such as a T1/E1 line.
\paragraph{Hub-and-Spoke} An example of a single-homed topology. Applicable when a private network connection between multiple sites is required. A single interface to the hub can be shared by all spoke circuits (Figure \ref{WANtopology2}).
\paragraph{Full Mesh} A disadvantage of the hub-and-spoke topology is that all communication has to go through the hub. With a full mesh topology using virtual circuits, any site can communicate directly with any other site (Figure \ref{WANtopology3}). A disadvantage is the large number of virtual circuits that need to be configured and maintained: a full mesh of $n$ sites requires $n(n-1)/2$ circuits, e.g.\ 45 circuits for 10 sites.
\paragraph{Dual-homed Topology} Provides redundancy and load balancing, but is more expensive to implement than single-homed topologies (Figure \ref{WANtopology4}). It requires additional networking hardware, including routers and switches, and is more difficult to implement because it requires more complex configurations.
\subsection{Terminology}
WAN operations focus primarily on Layers 1 and 2 of the OSI model. One primary difference between a WAN and a LAN is that a company must subscribe to an outside WAN service provider to use WAN carrier network services.\\
\begin{figure}[hbtp]
\caption{Common WAN terminology}\label{terminology}
\centering
\includegraphics[ width=0.7\textwidth ]{pictures/DCE.PNG}
\end{figure}
Terminology commonly used to describe WAN connections (Figure \ref{terminology}):
\begin{itemize}
\item \textbf{Customer Premises Equipment (CPE)} Consists of devices and inside wiring located on the enterprise edge connecting to a carrier.
\item \textbf{Central Office (CO)} is the local service provider facility that connects the CPE to the provider network.
\item \textbf{Local Loop (last mile)} is the actual copper or fiber cable that connects the CPE to the CO.
\item \textbf{Data Terminal Equipment (DTE)} is usually a router that passes the data from the customer network to the DCE.
\item \textbf{Data Communications Equipment (DCE)} is usually a modem that puts data on the local loop by converting digital signals into analog signals. It connects the subscriber to the service provider.
\item \textbf{Demarcation Point} is a point established in a building to separate customer equipment from service provider equipment. It is the place where the responsibility for the connection changes from the user to the service provider.
\item \textbf{Toll network} consists of the long-haul, all-digital, fiber-optic communications lines and other equipment inside the WAN provider network.
\end{itemize}
\begin{figure}[hbtp]
\caption{WAN devices}\label{Device}
\centering
\includegraphics[scale=1]{pictures/Device.PNG}
\end{figure}
There are many types of devices that are specific to WAN environments (Figure \ref{Device}):
\begin{itemize}
\item \textbf{Dialup modem} converts (modulates) the digital signals (produced by a computer) into analog signals (voice frequencies).
\item \textbf{Broadband modem} converts the digital signals into analog signals transferred via high-speed DSL or cable Internet service.
\item \textbf{Access server} controls and coordinates dialup modem, dial-in and dial-out user communications.
\item \textbf{CSU/DSU} is used only for \textbf{leased lines}. The CSU provides termination for the digital signal and ensures connection integrity through error correction and line monitoring. The DSU converts line frames into frames that the LAN can interpret and vice versa.
\end{itemize}
WAN technologies are either circuit-switched or packet-switched:
\paragraph{Circuit Switching} dynamically establishes a \emph{dedicated circuit} for voice or data between a sender and a receiver. Communication cannot start until the connection is established through the service provider network. The two most common types of circuit-switched WAN technologies are \textbf{PSTN} and \textbf{ISDN}.
\paragraph{Packet Switching} splits traffic data into packets that are routed over a shared network. A circuit does not need to be established and many pairs of nodes can communicate over the same channel. Packet switching costs less than circuit switching, however, latency and jitter are greater in packet-switching networks. There are two approaches to packet-switched network link determination:
\begin{itemize}
\item \textbf{Connectionless systems:} Full addressing information must be carried in each packet. The \textbf{Internet} is an example of a connectionless system.
\item \textbf{Connection-oriented systems:} The network predetermines the route for a packet, and each packet only has to carry an identifier. An example of a connection-oriented system is \textbf{Frame Relay} (DLCIs are the identifiers).
\end{itemize}
\section{WAN connection}
There are several WAN access connection options (figure \ref{WANaccess}) that ISPs can use to connect the local loop to the enterprise edge.\\
\begin{figure}[hbtp]
\caption{WAN access options}\label{WANaccess}
\centering
\includegraphics[ width=0.7\textwidth ]{pictures/WANaccess.PNG}
\end{figure}
Service provider networks are complex and consist mostly of high-bandwidth fiber-optic media, using the SONET and SDH standards. A newer fiber-optic media development for long-range communications is called dense wavelength division multiplexing (DWDM).
\subsection{Private WAN Infrastructures}
\paragraph{Leased lines} are \emph{permanent dedicated point-to-point} connections from the customer premises to the provider network. The organization pays a monthly lease fee to a service provider to use the line. Leased lines require little installation and maintenance expertise, and offer high quality and availability. However, they are expensive and have limited flexibility.
\paragraph{Dialup} transports binary computer data through the voice telephone network using a modem. Dialup access is suitable when intermittent, low-volume data transfers are needed. The advantages of modem and analog lines are simplicity, availability, and low implementation cost. The disadvantages are the low data rates and a relatively long connection time.
\paragraph{ISDN} is a \emph{circuit-switching} technology that enables the local loop of a PSTN (Public Switched Telephone Network) to carry digital signals. It can provide additional capacity as needed on a leased-line connection or can be used as a backup. ISDN has declined in popularity due to DSL and other broadband services. There are two types of ISDN interfaces: BRI (2 B-channels, 1 D-channel) and PRI (23 B-channels, 1 D-channel).
\paragraph{Frame Relay} is a Layer-2 WAN technology used to interconnect enterprise LANs. Frame Relay creates PVCs (Permanent Virtual Circuits) to connect multiple sites and carry voice and data traffic. PVCs are uniquely identified by a DLCI (Data-Link Connection Identifier). The PVCs and DLCIs ensure bidirectional communication between one DTE device and another.
\paragraph{ATM} is built on a \emph{cell-based} architecture rather than on a frame-based architecture. ATM cells are always a \emph{fixed} length of \textbf{53 bytes}. ATM is well-suited for voice and video traffic because this traffic is intolerant of delay.
\paragraph{Ethernet WAN} Originally, Ethernet was not suitable as a WAN access technology because the maximum cable length was one kilometer. However, \emph{fiber-optic} cables have made Ethernet a reasonable WAN access option. There are several benefits to an Ethernet WAN: reduced expense and administration, easy integration with existing networks, and enhanced business productivity. Ethernet WANs have replaced Frame Relay and ATM.
\paragraph{MPLS} is a \emph{multiprotocol} high-performance WAN technology that directs data from one router to the next. MPLS is based on \emph{short path labels} rather than IP network addresses. It uses labels which tell a router what to do with a packet. The labels identify paths between distant routers rather than endpoints, and while MPLS actually routes IPv4 and IPv6 packets, everything else is switched. Furthermore, MPLS can deliver any type of packet between sites, encapsulating packets of various network protocols.
\paragraph{VSAT} is a solution that creates a private WAN using \emph{satellite} communications in remote locations where there are no service providers that offer WAN service.
\subsection{Public WAN Infrastructures}
\paragraph{DSL} is an always-on connection technology that uses existing \emph{twisted-pair telephone} lines to transport high-bandwidth data. A DSL modem converts an Ethernet signal from the user device to a DSL signal. Key components in the DSL connection: \emph{DSL modem (subscriber end)} and \emph{DSLAM (ISP end)}. The advantage that DSL has over cable technology is that DSL is not a shared medium -- each user has a separate direct connection to the DSLAM.
\paragraph{Cable} is widely used in urban areas to distribute television signals. Network access is available from television providers. This allows for greater bandwidth than the conventional telephone local loop. Two types of equipment are required: \emph{Cable Modem (subscriber end)} and \emph{CMTS (ISP end)}.
\paragraph{WiMAX} is a new technology that operates in a similar way to Wi-Fi, but at higher speeds, over greater distances, and for a greater number of users. It uses a network of WiMAX towers that are similar to cell phone towers.
\paragraph{Satellite Internet} Typically used by rural users where cable and DSL are not available. Cable and DSL have higher download speeds, but satellite systems are about 10 times faster than an analog modem.
\paragraph{VPN} is an encrypted connection between private networks over the Internet. VPN uses virtual connections called VPN tunnels, which are routed through the Internet from the company's private network to the remote site or employee host. There are several benefits to using VPN: cost savings, security, scalability, and compatibility with broadband technology. There are two types of VPN access:
\begin{itemize}
\item \textbf{Site-to-site VPN:} connect entire networks to each other, for example, connecting a branch
office network to a company headquarters network.
\item \textbf{Remote-access VPN:} enable individual hosts, such as extranet consumers, to access a company network securely over the Internet.
\end{itemize}
\paragraph{Dynamic Multipoint VPN (DMVPN)} is a Cisco software solution for building multiple VPNs. DMVPN is built on three protocols: NHRP, IPsec, and mGRE. NHRP is the distributed address \emph{mapping} protocol for VPN tunnels. IPsec \emph{encrypts} communications on VPN tunnels. The mGRE protocol allows the dynamic creation of \emph{multiple spoke tunnels} from one permanent VPN \emph{hub}.
\chapter{\proj Geometric Registration}
\label{ch_register}
\index{registration}
% \chapterhead{Programs}
\markright{Geometric registration}
\section{Introduction}
Image registration is a procedure which determines
the best spatial fit between two or more images that overlap the same
scene, and were acquired at the same or at a different time, by identical or
different sensors.
Thus, registration is required for processing a new set of data
in such a way that its
image under an appropriate transform is in a proper geometrical
relationship with the previous set of data.
\bigskip
Several digital techniques have been used for automatic registration of
images such as cross-correlation, normal cross-correlation and minimum
distance criteria.
The advantage of the wavelet transform is that it produces both
spatial and frequency domain information which allow the study of
the image by frequency bands \cite{reg:djamdji1,reg:djamdji2,reg:djamdji3}. \\
An automatic image registration procedure can be helpful for several
applications:
\begin{enumerate}
\item Comparison between two images obtained at the same wavelength.
\item Comparison between two images obtained at different wavelengths.
\item Pixel field of view distortion estimation.
\end{enumerate}
\bigskip
The geometrical correction is usually performed by three operations:
\begin{itemize}
\item The measure of a set of well-defined ground control points (GCPs), which are
features well located both in the input image and in the reference image.
\item The determination of the warping or deformation model, by specifying a
mathematical deformation model defining the relation between the
coordinates $(x,y)$ and $(X,Y)$ in the reference and input image respectively.
\item The construction of the corrected image by output-to-input mapping.
\end{itemize}
\bigskip
The main difficulty lies in the automated localization of the corresponding
GCPs, since the accuracy of their determination will affect the overall
quality of the registration. In fact, there are always ambiguities in
matching two sets of points, as a given point corresponds to a small region
{\em D}, which takes into account the prior geometric uncertainty between
the two images and many objects could be contained in this region.
\bigskip
One property of the wavelet transform is to have a sampling step proportional
to the scale. When we compare the images in the wavelet transform space,
we can choose a scale corresponding to the size of the region {\em D}, so
that no more than one object can be detected in this area, and the matching
is done automatically.
\section{Deformation model}%Polynomial Transformation
Geometric correction requires a spatial transformation to invert an unknown
distortion function. A general model for characterizing misregistration
between two sets of remotely sensed data is a pair of bivariate polynomials
of the form:
\begin{eqnarray*}
x_i = \displaystyle{ \sum^{N}_{p=0} \sum^{N-p}_{q=0} a_{pq}
X^{p}_{i} Y^{q}_{i} = Q(X_{i}, Y_{j}) } \\
y_i = \displaystyle{ \sum^{N}_{p=0} \sum^{N-p}_{q=0} b_{pq}
X^{p}_{i} Y^{q}_{i} = R(X_{i}, Y_{j}) }
\end{eqnarray*}
where $(X_{i},Y_{i})$ are the coordinates of the $i^{th}$ GCP in the
reference image, $(x_{i},y_{i})$ the corresponding GCP in the input
image and $N$ is the degree of the polynomial.
Usually, for images taken
under the same {\em imaging direction}, polynomials of degree one or two
are sufficient as they can model most of the usual deformations like shift,
scale, skew, perspective and rotation (see Table~\ref{table:deformations}).
We then compute the unknown parameters
($(N+1)(N+2)/2$ for each polynomial) using the least mean square
estimator. \\
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c} \hline
Shift & $x = a_0 + X$ \\
& $y = b_0 + Y$ \\ \hline
Scale & $x = a_1 X$ \\
& $y = b_2 Y$ \\ \hline
Skew & $x = X + a_2 Y$ \\
& $y = Y$ \\ \hline
Perspective & $x = a_3 X Y$ \\
& $y = Y$ \\ \hline
Rotation & $x = \cos \theta X + \sin \theta Y$ \\
            & $y = -\sin \theta X + \cos \theta Y$ \\ \hline
\end{tabular}
\caption{Some common deformations.}
\end{center}
\label{table:deformations}
\end{table}
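The least-squares estimation itself is straightforward. As a purely illustrative sketch
(it is not part of the package; the function and array names are invented), the
coefficients of a first-order ($N=1$) model of the form $x = aX + bY + c$,
$y = dX + eY + f$ can be obtained from the matched GCPs with a standard linear
least-squares solver, for example in Python:
\begin{verbatim}
import numpy as np

def fit_first_order(X, Y, x, y):
    """Fit x = aX + bY + c and y = dX + eY + f from matched GCPs.

    X, Y : GCP coordinates in the reference image (length-n arrays)
    x, y : corresponding GCP coordinates in the input image
    """
    A = np.column_stack([X, Y, np.ones_like(X)])   # design matrix
    coef_x = np.linalg.lstsq(A, x, rcond=None)[0]  # a, b, c
    coef_y = np.linalg.lstsq(A, y, rcond=None)[0]  # d, e, f
    return coef_x, coef_y
\end{verbatim}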
\section{Image registration: mr\_fusion}
\index{mr\_fusion}
Program {\em mr\_fusion} performs the geometrical registration
of two images having the same size, and same resolution. Four deformation
models are available (``-d" option). The program may fail to register the image
when not enough control points are detected. In this case, the user can try
another deformation model which may be more adapted to his data, or modify
the noise model parameters. Here the noise model is only used for structure
detection. If the image contains strong features, the threshold parameter
``-s"
can be set at a greater value. Hence, the registration will be performed only
from the strongest feature of the image. The number of scales is also very
important. Indeed, the maximum distance between two pixels of the same point
in the two images must be less than or equal to the size of the wavelet at the
last scale. The number of scales can be fixed either using
directly the ``-n" option,
or using the ``-D" option.
By choosing the latter, the user gives the maximum distance
between two pixels of the same point, and the program calculates automatically
the correct number of scales.
{\bf
\begin{center}
USAGE: mr\_fusion option image\_ref image\_in image\_out
\end{center}}
Options are:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item {\bf [-p]} \\
Poisson Noise. Default is no Poisson component (just Gaussian).
\item {\bf [-g SigmaNoise]} \\
SigmaNoise = Gaussian noise standard deviation. Default is automatically estimated.
\item {\bf [-c gain,sigma,mean]} \\
See section~\ref{sect_support}.
\item {\bf [-n number\_of\_scales]} \\
Number of scales used in the multiresolution transform. Default is 4.
\item {\bf [-s NSigma]} \\
Thresholding at NSigma * SigmaNoise. Default is 5.
\item{\bf [-r res\_min]} \\
Minimum resolution for the reconstruction.
The registration procedure is stopped at scale res\_min and
the resulting deformation model is used to register the input image.
Default value is 1.
\item{\bf [-D dist\_max]} \\
Maximum estimated distance between two identical points in both images.
This value is used to estimate the number of scales for the wavelet transform.
% \item{\bf [-l]} \\
% Sub-scene and scene registration:
% the sub-scene is considered to be part of a larger scene.
% image\_in is registered, and the resulting deformation model is used to
% register the larger scene.
\item{\bf [-i Interpolation type]} \\
Type of interpolation:
\begin{itemize}
\baselineskip=0.4truecm
\item 0: Zero order interpolation -- nearest neighbor.
\item 1: First order interpolation -- bilinear.
\item 2: Second order interpolation -- bicubic.
\end{itemize}
Default is 2.
\item{\bf [-d DeforModel] } \\
Type of registration deformation model: \\
The type of polynomial model used for the geometrical registration. Three
types
are available:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item 0: Polynomial of the first order of type I:
\begin{eqnarray}
x^{'} & = & aX - bY + c_x \\
y^{'} & = & bX + aY + c_y
\end{eqnarray}
\item 1: Polynomial of the first order of type II:
\begin{eqnarray}
x^{'} & = & aX + bY + c \\
y^{'} & = & dX + eY + f
\end{eqnarray}
\item 2: Polynomial of the second order:
\begin{eqnarray}
x^{'} & = & aX^{2} + bY^{2} + cXY + dX + eY + f \\
y^{'} & = & gX^{2} + hY^{2} + iXY + jX + kY + l
\end{eqnarray}
\item 3: Polynomial of the third order.
\end{itemize}
Default is 1.
\item{\bf [-o]} \\
Manual Options specifications:\\
A few options are provided in order to have more control on the procedure.
Using manual options, the following parameters can be fixed by the user
for each scale:
\begin{itemize}
\baselineskip=0.4truecm
\item Matching distance: \\
The distance used for the matching procedure. The procedure looks for each control
point candidate in the reference image which is the corresponding control point
candidate in the input image within a radius of {\em Matching distance}.
\item Threshold level: \\
The threshold level used for thresholding the wavelet transform.
\item Type of registration deformation model (same as option -d).
\item Type of interpolation (same as option -i).
\end{itemize}
The available manual options are the following:
\begin{itemize}
\baselineskip=0.4truecm
\itemsep=0.1truecm
\item 0: Everything is taken care of by the program.
\item 1: The matching distance is specified manually for each resolution.
\item 2: The threshold level is specified manually for each resolution and for
both the reference image and the input image.
\item 3: The type of deformation model is specified manually for each resolution.
\item 4: The matching distance, the Threshold level and the Type of deformation model
are specified manually for each resolution.
\item 5: The matching distance, the threshold level, the type of deformation model
and the type of interpolation are specified manually for each resolution.
\end{itemize}
The default is none (0).
\item{\bf [-w]} \\
The following files are written to disk:
\begin{itemize}
\baselineskip=0.4truecm
\item deform\_model.txt: contains the calculated coefficients of the
deformation model, allowing us to calculate the coordinates in the
second image of a given point from its coordinates in the reference image.
\item scale\_j\_control\_points.dat: contains the set of control points for
each scale. The first line contains the number of control points, and all
other lines the values Xr,Yr,Xi,Yi,
where Xr and Yr are the coordinates of the control point in the reference
image (origin is (0,0)), and Xi and Yi are the coordinates of the
corresponding control point in the input image (second image).
\item xx\_grill\_in: an artificial image which contains a ``grilled'' image.
\item xx\_grill\_out: the resulting image after applying the deformation
model to the artificial one.
\end{itemize}
The default is none.
\end{itemize}
\begin{figure}[htb]
\centerline{
\vbox{
\hbox{
\psfig{figure=ch5_dec_ngc.ps,bbllx=1.8cm,bblly=7cm,bburx=19.2cm,bbury=24.3cm,width=8cm,height=8cm}
\psfig{figure=ch5_diff_ngc_dec.ps,bbllx=1.8cm,bblly=7cm,bburx=19.2cm,bbury=24.3cm,width=8cm,height=8cm}
}
\hbox{
\psfig{figure=ch5_rec_ngc.ps,bbllx=1.8cm,bblly=7cm,bburx=19.2cm,bbury=24.3cm,width=8cm,height=8cm}
\psfig{figure=ch5_diff_ngc_rec.ps,bbllx=1.8cm,bblly=7cm,bburx=19.2cm,bbury=24.3cm,width=8cm,height=8cm}
}
}}
\caption{Synthetic image (upper left) and difference between the original
and the synthetic image (upper right). Registered image (bottom left)
and difference between NGC2997 and the registered image (bottom right). }
\label{fig_ngc_register}
\end{figure}
A strong distortion was applied to the galaxy NGC2997 (see
Figure \ref{fig_ngc}).
A synthetic image was made by shifting it by 5 and 10 pixels
in each axis direction. Then this image
was rotated by 10 degrees and Gaussian noise was added.
Figure \ref{fig_ngc_register} shows the synthetic image
(upper left panel), and also the difference
between the original image and the synthetic one (upper right panel).
Figure \ref{fig_ngc_register} (bottom left and right) shows the
corrected image and the residual between the original image
and the corrected image.
The two images have been correctly -- and automatically -- registered.
\subsubsection*{Example:}
\begin{itemize}
\item mr\_fusion -n 6 -s 10 -d 1 ngc2997.fits dec\_ngc register\_ima\\
Register the image dec\_ngc on ngc2997, with a 10 sigma detection, 6 scales,
and using the first order deformation model of type 2. The
result is presented
in Figure \ref{fig_ngc_register}.
\end{itemize}
\clearpage
\newpage
If the deformation model is not sophisticated enough to resolve a specific
problem, the result will certainly not be correct, but the user can still
use the GCPs, which do not depend on any model.
\section{Results and Discussion}
\label{sec:results}
\subsection{The period-\teff\ relations, revealed}
\label{sec:the_reveal}
To explore the relationship between rotation period, effective temperature
(\teff) and velocity dispersion, we calculated \sigmavb\footnote{\sigmavb\
was calculated as 1.5$\times$ the median absolute deviation of velocities, to
mitigate sensitivity to outliers.} for groups of stars with similar rotation
periods and temperatures, and presumed similar age.
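Written out, for the $v_{b}$ velocities of the stars that fall in a given
$\log_{10}$(period) and \teff\ bin, the dispersion is
\begin{equation}
    \sigma_{v_{b}} = 1.5 \times
    {\rm median}\left(\left|v_{b} - {\rm median}(v_{b})\right|\right),
\end{equation}
i.e.\ 1.5 times the median absolute deviation of the velocities in that bin.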
The top panel of figure \ref{fig:vplot} shows rotation period versus effective
temperature for the \mct\ sample, coloured by \sigmavb, where \sigmavb\ was
calculated for groups of stars over a grid in $\log_{10}$(period) and
temperature.
If we assume that mass dependent heating does not strongly affect this sample
and \vb\ at low galactic latitudes is an unbiased tracer of \vz, then \vb\
velocity dispersion can be interpreted as an age proxy, and stars plotted in a
similar color in figure \ref{fig:vplot} are similar ages.
In the appendix of this paper, we show that this assumption appears valid for
stars with Galactic latitude $<$ 15\degrees.
\begin{figure}
\caption{
Top: Rotation period vs effective temperature for stars in the \mct\
sample, colored by the velocity dispersions of stars calculated over a
grid in $\log_{10}$(period) and \teff\ (this grid causes the quantized
appearance).
Black lines show gyrochrones from a gyrochronology model that projects the
rotation-color relation of
Praesepe to longer rotation periods over time \citep{angus2019}.
These gyrochrones do not appear to reflect the evolution of field stars at
long rotation periods/old ages because they do not trace lines of constant
velocity dispersion.
Gyrochrones are plotted at 0.5, 1, 1.5, 2, 2.5, 4 and 4.57 Gyr (Solar age)
in both top and bottom panels.
Bottom: Same as top panel with rotation period vs {\it mass}
\citep[from][]{berger2020}.
White lines show gyrochrones from a model that includes mass and
age-dependent angular momentum transport between the core and envelope
\citep{spada2019}.
Qualitatively, these gyrochrones reflect the evolution of field
stars at long rotation periods/old ages: they trace lines of constant
velocity dispersion by reproducing periods of `stalled' surface rotational
evolution for K-dwarfs.
}
\centering
\includegraphics[width=1\textwidth]{main_figure}
\label{fig:vplot}
\end{figure}
Overall, figure \ref{fig:vplot} shows that velocity dispersion increases with
rotation period across all temperatures, implying that rotation period
increases with age, as expected.
This result is insensitive to the choice of bin position and size.
Black lines show gyrochrones from the \citet{angus2019} gyrochronology model,
which projects the rotation-color relation of Praesepe to longer rotation
periods over time.
These gyrochrones are plotted at 0.5, 1, 1.5, 2, 2.5, 4 and 4.57 (Solar age)
Gyr.
At the youngest ages, these gyrochrones describe the data well: the palest
yellow (youngest) stars with the lowest velocity dispersions all fall close to
the 0.5 Gyr gyrochrone.
However, although the 0.5 Gyr and 1 Gyr gyrochrones also trace constant
velocity dispersion/age among the field stars, by 1.5 Gyr the gyrochrones
start to {\it cross} different velocity dispersion regimes.
For example, the 1.5 Gyr gyrochrone lies on top of stars with velocity
dispersions of around 10--11 \kms\ at 5000--5500 K and stars with velocity
dispersions of $\sim$15 \kms\ at 4000--4500 K.
The gyrochrones older than 1.5 Gyr also cross a range of velocity dispersions.
If these were true isochrones they would follow lines of constant velocity
dispersion.
At ages older than around 1 Gyr, it appears that gyrochrones should have a
flatter, or even inverted, shape in rotation period-\teff\ space than
these Praesepe-based models predict.
The bottom panel of figure \ref{fig:vplot} shows velocity dispersion as a
function of rotation period and {\it mass} \citep[from][]{berger2020}, with
gyrochrones from the \citet{spada2019} model shown in white.
These gyrochrones are also plotted at 0.5, 1, 1.5, 2, 2.5, 4 and 4.57 Gyr.
Each point plotted in the top panel also appears in the bottom panel with the
same color.
Because velocity dispersion was calculated in bins of \teff, not mass, bin
outlines are clearly visible in the top panel but appear smeared-out in the
bottom panel.
In the bottom panel of figure \ref{fig:vplot}, the \citet{spada2019} models
{\it do} trace lines of constant velocity dispersion, and reproduce the trends
in the data at all ages.
These models qualitatively agree with the data and reproduce the apparent
flattening and inversion in the rotation period-\teff/mass relations.
The results shown in figure \ref{fig:vplot} indicate that stars of spectral
type ranging from late G to late K ($\sim$5500-3500 K) follow a braking law
that changes over time.
In particular, the relationship between rotation period and effective
temperature appears to flatten out and eventually invert.
These results provide further evidence for `stalled' surface rotational
evolution of K dwarfs, like that observed in open clusters \citep{curtis2019}
and reproduced by models that vary angular momentum transport between stellar
core and envelope with time and mass \citep{spada2019}.
The velocity dispersions of stars in the \mct\ sample, shown in figure
\ref{fig:vplot}, provide the following picture of rotational evolution.
At young ages \citep[younger than around 1 Gyr but still old enough to be on
the main sequence and to have transitioned from the `C' sequence to the `I'
sequence][]{barnes2003}, stellar rotation period {\it decreases} with {\it
increasing} mass.
This is likely because lower-mass stars with deeper convection zones have
stronger magnetic fields, larger Alfv\'en radii, and therefore experience
greater angular momentum loss rates \citep[\eg][]{schatzman1962, kraft1967,
parker1970, kawaler1988, charbonneau2010}.
This aligns with the current assumptions about stellar spin-down, dynamo
theory, and the gyrochronology paradigm that has been in place for decades
\citep[\eg][]{skumanich1972, noyes1984, kawaler1988, barnes2003, angus2019}.
According to the \citet{spada2019} model, the radiative cores and convective
envelopes of stars are decoupled at these young ages, \ie\ the transport of
angular momentum between the surface and the core of the star is reduced, so
the surface slows down due to wind braking while the core keeps spinning rapidly.
According to the data presented in figure \ref{fig:vplot}, at intermediate
ages, the rotation periods of K dwarfs appear {\it constant} with mass, and at
late ages rotation period {\it increases} with {\it increasing} mass.
The interpretation of this, according to the \citet{spada2019} model, is that
lower-mass stars are still braking more efficiently at these intermediate and
old ages but their cores are more tightly coupled to their envelopes, allowing
angular momentum transport between the two layers.
Angular momentum resurfaces and prevents the stellar envelopes from
spinning down rapidly, and this effect is strongest for late K-dwarfs with
effective temperatures of $\sim$4000--4500 K and masses of $\sim$0.5--0.7 M$_\odot$.
A period of core-envelope decoupling in the evolution of cool dwarfs has been
explored in theoretical models for decades \citep[\eg][]{endal1981,
macgregor1991, denissenkov2010, gallet2013}.
In such models, the angular momenta of the radiative core and convective
envelope are permitted to evolve separately once a radiative core develops on
the pre-main sequence.
A decoupled core and envelope is required to reproduce observations of young
clusters and star forming regions \citep[\eg][]{irwin2007, bouvier2008,
denissenkov2010, spada2011, reiners2012} and has become an established element
of theoretical gyrochronology models.
During this phase, angular momentum transport between radiative core and
convective envelope is reduced.
Over time, models increase the efficiency of angular momentum transport
between the core and envelope in order to reproduce the close-to solid body
rotation observed for the Sun \citep[\eg][]{thompson1996}.
The core-envelope coupling timescale affects the predicted surface rotation
periods of young and intermediate-age stars and is usually constrained using
observations of open clusters.
The \citet{lanzafame2015} gyrochronology model uses a mass-dependent
core-envelope coupling timescale, and \citet{spada2019} fit this model to
open-cluster observations, including new rotation period measurements for
K dwarfs in the NGC 6811 cluster \citep{curtis2019}.
A similar mass-dependent core-envelope coupling timescale was also found to
explain the observed lithium depletion trends in open clusters by an
independent study \citep{somers2016}.
Although variable angular momentum transport between the surfaces and cores of
stars has been an essential ingredient of stellar evolution models for
decades, the transport mechanism is still unknown.
Among the proposed mechanisms are magneto-hydrodynamical waves resulting from
various magnetic field geometries, and gravity waves \citep[see,
\eg][]{charbonneau1993, ruediger1996, spruit2002, talon2003, spada2010,
brun2011, oglethorpe2013}.
% These different processes may make subtly different predictions for the
% rotational evolution of stars, and could potentially be tested with the large
% number of field stars with measured ages and rotation periods which could be
% provided by kinematic ages for the \mct\ sample.
Figure \ref{fig:vplot} reveals another phenomenon of magnetic evolution:
K and M dwarfs remain magnetically active for longer than G dwarfs.
The mass dependence of magnetic activity lifetimes has been demonstrated
previously \citep[\eg][]{west2008, newton2017, kiman2019}, and if the
detectability of a rotation period is considered to be a magnetic activity
proxy, then our results provide further evidence for a mass-dependent activity
lifetime.
Figure \ref{fig:vplot} shows that the groups of stars with the largest
velocity dispersions are cooler than 4500 K.
This implies that the oldest stars with detectable rotation periods are
cooler than 4500 K, \ie\ these low-mass stars stay active for longer than more
massive stars.
To investigate this idea further, we compared the velocity dispersions of
stars with measured rotation periods, to the velocity dispersions of the
overall \kepler\ sample.
We calculated the velocity dispersions for all stars in the \kepler\ field,
after removing visual binaries, subgiants, stars fainter than 16th magnitude,
and high Galactic latitude stars, following the method described in section
\ref{sec:method}.
We then compared these velocity dispersions to the velocity dispersions of
stars in the \mct\ sample.
If the rotation periods of G stars are only detectable when they are young, G
stars with measured periods should have smaller velocity dispersions than G
stars in the overall \kepler\ field.
We calculated the ratio of \sigmavb\ for the entire \kepler\ sample to
\sigmavb\ for stars with rotation periods published in \mct, as a function of
\teff\ (see figure \ref{fig:compare}).
A larger ratio means the rotating star sample is {\it younger}, on average,
than the overall \kepler\ population, and a ratio of 1 means that the rotating
stars have the {\it same} age distribution as the overall \kepler\ sample.
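A minimal sketch of this comparison is shown below; it assumes the quality
cuts described above have already been applied and uses placeholder file and
column names rather than the exact choices made in this work.
\begin{verbatim}
# Illustrative sketch: ratio of sigma_vb for the full Kepler sample to
# sigma_vb for the rotation-period sample, in Teff bins.
import numpy as np
import pandas as pd

def sigma_vb(v):
    v = np.asarray(v, dtype=float)
    return 1.5 * np.median(np.abs(v - np.median(v)))

kepler = pd.read_csv("kepler_field.csv")   # all targets, cuts applied
rotators = pd.read_csv("mct_sample.csv")   # stars with measured periods

teff_edges = np.arange(3500.0, 6250.0, 250.0)
ratios = []
for lo, hi in zip(teff_edges[:-1], teff_edges[1:]):
    s_all = sigma_vb(kepler.loc[kepler["teff"].between(lo, hi), "v_b"])
    s_rot = sigma_vb(rotators.loc[rotators["teff"].between(lo, hi), "v_b"])
    ratios.append(s_all / s_rot)   # > 1: rotators are younger, on average
\end{verbatim}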
Figure \ref{fig:compare} shows that this ratio is largest for G stars and
approaches unity for K and early M dwarfs.
This indicates that the G stars with detectable rotation periods are, on
average, {\it younger} than the total population of G stars in the \kepler\
field.
On the other hand, the late K and early M dwarfs with detectable rotation
periods have a similar age distribution to the overall \kepler\ population,
which suggests that the oldest K and M dwarfs are represented in the \mct\ sample.
This result bolsters the evidence that M dwarf rotation periods are measurable
at older ages than G dwarf rotation periods.
In other words, G stars become magnetically inactive and have fewer active
surface regions {\it at a younger age than M dwarfs}.
\begin{figure}
\caption{
Velocity dispersions for the entire \kepler\ field divided by the velocity
dispersions of stars with measured rotation periods in \mct,
as a function of effective temperature.
A larger ratio indicates that the overall \kepler\ field is older, on
average, than stars in the \mct\ catalog.
As this ratio approaches unity the two populations have similar kinematic
ages.
The large ratio for the hottest stars indicates that G dwarfs become
inactive at young ages.
This ratio approaches unity at low temperatures, showing that K and early
M dwarf rotation periods are measurable over a large range of ages.
}
\centering
\includegraphics[width=1\textwidth]{field_comparison}
\label{fig:compare}
\end{figure}
\subsection{Synchronized binaries and the \kepler\ period gap}
\label{sec:gap}
\begin{figure}
\caption{
Top: rotation period vs. effective temperature for stars in the \mct\
sample, separated into three groups. Blue circles
show stars with rotation periods longer than the
period gap, orange squares show stars with rotation periods shorter than
the gap, but longer than the lower edge of the main rotation period
distribution, and green triangles show stars with rotation periods shorter
than this lower edge.
Stars were separated into these three groups using \citet{angus2019}
gyrochronology models, with the scheme shown in the legend.
Bottom: the velocities of these groups of stars (in the direction of
Galactic latitude, $b$) are shown as a function of rotation period.
Only stars cooler than 5000 K are plotted in the bottom panel in order to
isolate populations above and below the period gap, which only extends up
to temperatures of $\sim$4600 K.
The black line indicates the velocity standard deviation as a function of
period.
}
\centering
\includegraphics[width=1\textwidth]{gap}
\label{fig:gap}
\end{figure}
In this section, we explored the kinematic properties of the \mct\ sample in
more detail, investigating the velocity dispersions of stars on either side of
the \kepler\ period gap, and identifying rapidly rotating stars that may be
synchronized binaries.
There is a sharp gap in the population of rotation periods (often called the
\kepler\ period gap), which lies just above the 1 Gyr gyrochrone in the upper
panel of figure \ref{fig:vplot}; its origin is unknown and is the subject of
much speculation \citep{mcquillan2014, davenport2017, davenport2018,
reinhold2019, reinhold2020}.
This gap was first identified by \mct, and roughly follows a line of constant
gyrochronal age of around 1.1 Gyr \citep[according to the][gyrochronology
relation]{angus2019}.
Several explanations for the gap's origin have been proposed, including a
discontinuous star formation history \citep{mcquillan2013, davenport2017,
davenport2018} and a change in magnetic field structure causing a brief period
where rotational variability is reduced and rotation periods cannot be
measured \citep{reinhold2019, reinhold2020}.
The top panel of figure \ref{fig:vplot} suggests that the \citet{angus2019}
Praesepe-based gyrochronology model is valid below the gap but not above it.
Gyrochrones follow lines of constant velocity dispersion below the gap, but
{\it cross} lines of constant velocity dispersion above the gap.
This phenomenon is robust to the choice of bin size and position.
Although we do not provide an in-depth analysis here (and more data may be
needed to confirm a connection) these data suggest that the gap may indeed
separate a young regime where stellar cores are decoupled from their envelopes
from an old regime where these layers are more tightly coupled.
If so, this could indicate that the phenomenon responsible for changing the
shape of gyrochrones in rotation-\teff\ space is related to the phenomenon that
produces the gap.
An alternate explanation for the gap is that the \mct\ sample contains two
distinct stellar populations: one young and one old.
If so, the kinematic properties of stars above and below the gap are likely to
be distinctly different.
The bottom panel of figure \ref{fig:gap} shows the velocity dispersions of
stars in the \mct\ sample, with stars subdivided into three groups: those that
rotate more quickly than the main rotation period population (green
triangles), those with rotation periods shorter than the gap (orange squares),
and those with rotation periods longer than the gap (blue circles).
Stars were separated into these three groups using the \citet{angus2019}
gyrochronology model, according to the scheme shown in the legend.
Only stars cooler than 5000 K are included in the bottom panel in order to
isolate populations above and below the period gap, which only extends up to a
temperature of $\sim$4600 K in our sample, although \citet{davenport2017}
found that the gap extends to temperatures as hot as 6000 K.
In general, velocity dispersion increases with rotation period because both
quantities increase with age.
Previously, only the overall velocity dispersions of all stars above and below
the gap have been compared, leading to the assumption that these groups belong
to two distinct populations \citep{mcquillan2014}.
However, figure \ref{fig:gap} shows a smooth increase in velocity dispersion
with rotation period across the gap (from orange squares to blue circles),
suggesting that these groups are part of the same Galactic population.
% The smooth increase in velocity dispersion across the gap shown here suggests
% that stars above and below are part of the same stellar population.
However, this observation does not rule out the possibility that a brief
cessation of star formation in the Solar neighborhood, around 1 Gyr ago,
caused this gap.
In the final part of our analysis, we investigated the potential for using
kinematics to identify synchronized binaries in the \mct\ sample.
Synchronized binaries are pairs of stars whose rotation periods are equal to
their orbital period.
Since synchronization appears to happen at rotation periods of 7 days or
shorter \citep{simonian2019}, and most isolated stars have rotation periods
longer than 7 days, the rotation periods of synchronized binaries are likely
to be {\it shorter} than they would be if they were isolated stars.
For this reason, their rotation periods do not reflect their ages and the
gyrochronal age of a synchronized binary is likely to be much younger than the
true age of the system.
Synchronized binaries are therefore a source of contamination for
gyrochronology and should be removed from samples before performing a
gyrochronal age analysis.
Figure \ref{fig:gap} shows that some of the most rapidly rotating stars in the
\mct\ sample have relatively large absolute velocities, indicating that they
are likely synchronized binaries.
For this reason, the velocity dispersions of stars with rotation periods
shorter than the lower edge of the rotation period distribution (green
triangles in figure \ref{fig:gap}) are not significantly smaller than those of
the presumably older stars plotted as orange squares.
In general, stars with rotation periods less than $\sim$10 days have an
increased chance of being synchronized binaries.
This result is in agreement with a recent study which found that a large
fraction of photometric binaries were rapid rotators, and the probability of a
star being a synchronized binary system substantially increased below rotation
periods of around 7 days \citep{simonian2019}.
We caution users of rotation period catalogs that rapid rotators with large
absolute velocities should be flagged as potential synchronized binaries
before applying any gyrochronal analysis.
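As a purely illustrative sketch of this recommendation, such a flag could be
constructed as follows; the period and velocity thresholds here are assumed
placeholders, not values derived in this work.
\begin{verbatim}
# Illustrative sketch: flag potential synchronized binaries as rapid
# rotators with large absolute velocities. Thresholds are assumed.
import numpy as np
import pandas as pd

stars = pd.read_csv("mct_sample.csv")   # needs period [days], v_b [km/s]

sigma = 1.5 * np.median(np.abs(stars["v_b"] - np.median(stars["v_b"])))
fast = stars["period"] < 10.0
kinematically_hot = stars["v_b"].abs() > 2.0 * sigma
stars["sync_binary_candidate"] = fast & kinematically_hot
\end{verbatim}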
\documentstyle[11pt]{article}
\title{R/I\_SOLVE: Rational/Integer Polynomial Solvers}
\author{Francis J. Wright \\
School of Mathematical Sciences \\
Queen Mary and Westfield College \\
University of London \\
Mile End Road, London E1 4NS, UK. \\
E-mail: {\tt [email protected]}}
\date{27 January 1995}
\begin{document}
\maketitle
\begin{abstract}
This package provides the operators \verb|r/i_solve| that compute
respectively the exact rational or integer zeros of a single
univariate polynomial using fast modular methods.
\end{abstract}
\section{Introduction}
This package provides operators that compute the exact rational zeros
of a single univariate polynomial using fast modular methods. The
algorithm used is that described by R. Loos (1983): Computing rational
zeros of integral polynomials by $p$-adic expansion, {\it SIAM J.
Computing}, {\bf 12}, 286--293. The operator \verb|r_solve| computes
all rational zeros whereas the operator \verb|i_solve| computes only
integer zeros in a way that is slightly more efficient than extracting
them from the rational zeros. The \verb|r_solve| and \verb|i_solve|
interfaces are almost identical, and are intended to be completely
compatible with that of the general \verb|solve| operator, although
\verb|r_solve| and \verb|i_solve| give more convenient output when
only rational or integer zeros respectively are required. The current
implementation appears to be faster than \verb|solve| by a factor that
depends on the example, but is typically up to about 2.
I plan to extend this package to compute Gaussian integer and rational
zeros and zeros of polynomial systems.
\section{The user interface}
The first argument is required and must simplify to either a
univariate polynomial expression or equation with integer, rational or
rounded coefficients. Symbolic coefficients are not allowed (and
currently complex coefficients are not allowed either). The argument
is simplified to a quotient of integer polynomials and the denominator
is silently ignored.
Subsequent arguments are optional. If the polynomial variable is to
be specified then it must be the first optional argument, and if the
first optional argument is not a valid option (see below) then it is
(mis-)interpreted as the polynomial variable. However, since the
variable in a non-constant univariate polynomial can be deduced from
the polynomial it is unnecessary to specify it separately, except in
the degenerate case that the first argument simplifies to either 0 or
$0 = 0$. In this case the result is returned by \verb|i_solve| in
terms of the operator \verb|arbint| and by \verb|r_solve| in terms of
the (new) analogous operator \verb|arbrat|. The operator
\verb|i_solve| will generally run slightly faster than \verb|r_solve|.
The (rational or integer) zeros of the first argument are returned as
a list and the default output format is the same as that used by
\verb|solve|. Each distinct zero is returned in the form of an
equation with the variable on the left and the multiplicities of the
zeros are assigned to the variable \verb|root_multiplicities| as a
list. However, if the switch \verb|multiplicities| is turned on then
each zero is explicitly included in the solution list the appropriate
number of times (and \verb|root_multiplicities| has no value).
\begin{sloppypar}
Optional keyword arguments acting as local switches allow other output
formats. They have the following meanings:
\begin{description}
\item[\verb|separate|:] assign the multiplicity list to the global
variable \verb|root_multiplicities| (the default);
\item[\verb|expand| or \verb|multiplicities|:] expand the solution
list to include multiple zeros multiple times (the default if the
\verb|multiplicities| switch is on);
\item[\verb|together|:] return each solution as a list whose second
element is the multiplicity;
\item[\verb|nomul|:] do not compute multiplicities (thereby saving
some time);
\item[\verb|noeqs|:] do not return univariate zeros as equations but
just as values.
\end{description}
\end{sloppypar}
\section{Examples}
\begin{verbatim}
r_solve((9x^2 - 16)*(x^2 - 9), x);
\end{verbatim}
\[
\left\{x=\frac{-4}{3},x=3,x=-3,x=\frac{4}{3}\right\}
\]
\begin{verbatim}
i_solve((9x^2 - 16)*(x^2 - 9), x);
\end{verbatim}
\[
\{x=3,x=-3\}
\]
See the test/demonstration file \verb|rsolve.tst| for more examples.
\section{Tracing}
The switch {\tt trsolve} turns on tracing of the algorithm. It is off
by default.
\end{document}
\section{Multi-Asset Support}
\label{sec:multicurrency}
In Bitcoin's ledger model~\cite{Nakamoto,formal-model-of-bitcoin-transactions,Zahnentferner18-UTxO}, transactions spend as yet \emph{unspent transaction outputs \textup{(}\!\UTXO{}s\textup{)}}, while supplying new unspent outputs to be consumed by subsequent transactions.
Each individual \UTXO\ locks a specific \emph{quantity} of cryptocurrency by imposing specific conditions that need to be met to spend that quantity, such as for example signing the spending transaction with a specific secret cryptographic key, or passing some more sophisticated conditions enforced by a \emph{validator script}.
Quantities of cryptocurrency in a transaction output are represented as an integral number of the smallest unit of that particular cryptocurrency --- in Bitcoin, these are Satoshis.
To natively support multiple currencies in transaction outputs, we generalise those integral quantities to natively support the dynamic creation of new user-defined \emph{assets} or \emph{tokens}. Moreover, we require a means to forge tokens in a manner controlled by an asset's \emph{forging policy}.
We achieve all this by the following three extensions to the basic \UTXO\ ledger model that are further detailed in
the remainder of this section.
%
\begin{enumerate}
\item Transaction outputs lock a \emph{heterogeneous token bundle} instead of only an integral value of one cryptocurrency.
\item We extend transactions with a \emph{forge} field. This is a token bundle of tokens that are created (minted) or destroyed (burned) by that transaction.
\item We introduce \emph{forging policy scripts \textup{(}FPS\textup{)}} that govern the creation and destruction of assets in forge fields. These scripts are not unlike the validators locking outputs in \UTXO.
\end{enumerate}
\subsection{Token bundles}
We can regard transaction outputs in an \UTXO\ ledger as pairs \((\val, \nu)\) consisting of a locked value $\val$ and a validator script $\nu$ that encodes the spending condition. The latter may be proof of ownership by way of signing the spending transaction with a specific secret cryptographic key, or a temporal condition that allows an output to be spent only when the blockchain has reached a certain height (i.e. a certain number of blocks have been produced).
To conveniently use multiple currencies in transaction outputs, we want each output to be able to lock varying quantities of multiple different currencies at once in its $\val$ field.
This suggests using finite maps from some kind of \emph{asset identifier} to an integral quantity as a concrete representation, e.g. $\texttt{Coin} \mapsto 21$.
Looking at the standard \UTXO\ ledger rules~\cite{Zahnentferner18-UTxO}, it becomes apparent that cryptocurrency quantities need to be monoids.
It is a little tricky to make finite maps into a monoid, but the solution is to think of them as \emph{finitely supported functions} (see Section~\ref{sec:fsfs} for details).
If we want to use \emph{finitely supported functions} to achieve a uniform
representation that can handle groups of related, but \emph{non-fungible}, tokens, we need to go a step further.
In order to not lose the grouping of related non-fungible tokens (all house tokens issued by a specific entity, for example) though, we need to move to a two-level structure --- i.e., finitely-supported functions of finitely-supported functions. Let's consider an example. Trading of rare in-game items is popular in modern, multi-player computer games. How about representing ownership of such items and trading of that ownership on our multi-asset \UTXO\ ledger? We might need tokens for ``hats'' and ``swords'', which form two non-fungible assets with possibly multiple tokens of each asset --- a hat is interchangeable with any other hat, but not with a sword, and also not with the currency used to purchase these
items. Here our two-level structure pays off in its full generality, and we can represent currency to purchase items together with sets of items, where some can be multiples, e.g.,
%
\begin{align*}
& \{\mathsf{Coin} \mapsto \{\mathsf{Coin} \mapsto 2\}, \mathsf{Game} \mapsto \{\mathsf{Hat} \mapsto 1, \mathsf{Sword} \mapsto 4\}\} \\
+ \ & \{\mathsf{Coin} \mapsto \{\mathsf{Coin} \mapsto 1\}, \mathsf{Game} \mapsto \{\mathsf{Sword} \mapsto 1, \mathsf{Owl} \mapsto 1\}\} \\
= \ & \{\mathsf{Coin} \mapsto \{\mathsf{Coin} \mapsto 3\}, \mathsf{Game} \mapsto \{\mathsf{Hat} \mapsto 1, \mathsf{Sword} \mapsto 5, \mathsf{Owl} \mapsto 1\}\} \ .
\end{align*}
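%
The following Python sketch is purely illustrative (it is not the ledger implementation): it models such a two-level token bundle as nested finite maps and implements the pointwise addition used in the example above. The same structure, with negative quantities permitted, can also represent the forge field introduced in the next subsection.
\begin{verbatim}
# Illustrative sketch: two-level token bundles as nested maps from asset
# identifier to token name to quantity, with pointwise (monoidal) addition.
from collections import defaultdict

def add_bundles(a, b):
    """Pointwise sum of two token bundles; zero quantities are dropped so
    the result stays finitely supported."""
    total = defaultdict(lambda: defaultdict(int))
    for bundle in (a, b):
        for asset, tokens in bundle.items():
            for token, quantity in tokens.items():
                total[asset][token] += quantity
    result = {}
    for asset, tokens in total.items():
        tokens = {t: q for t, q in tokens.items() if q != 0}
        if tokens:
            result[asset] = tokens
    return result

coins_and_items = {"Coin": {"Coin": 2}, "Game": {"Hat": 1, "Sword": 4}}
more_items      = {"Coin": {"Coin": 1}, "Game": {"Sword": 1, "Owl": 1}}
assert add_bundles(coins_and_items, more_items) == \
    {"Coin": {"Coin": 3}, "Game": {"Hat": 1, "Sword": 5, "Owl": 1}}
\end{verbatim}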
\subsection{Forge fields}
If new tokens are frequently generated (such as issuing new hats whenever an in-game achievement has been reached) and destroyed (a player may lose a hat forever if the wind picks up), these operations need to be lightweight and cheap. We achieve this by adding a forge field to every transaction. It is a token bundle (just like the $\val$ in an output), but admits positive quantities (for minting new tokens) and negative quantities (for burning existing tokens). Of course, minting and burning needs to be strictly controlled.
\subsection{Forging policy scripts}
The script validation mechanism for locking \UTXO\ outputs is as follows:
in order for a transaction to spend an output \((\val, \nu)\), the validator
script $\nu$ needs to be executed and approve of the spending transaction.
Similarly, the forging
policy scripts associated with the tokens being minted or burned by a transaction
are run in order to validate those actions.
In the spirit of the Bitcoin Miniscript approach, we chose to include a simple
scripting language supporting forging policies for several common use cases,
such as single-issuer, non-fungible, or one-time issue tokens
(see Section~\ref{sec:fps-language} for the full list of use cases).
In order to establish a permanent association between the forging policy and the
assets controlled by it, we propose a hashing approach, as opposed to a global registry
lookup. Such a registry requires a specialized access control scheme, as well
as a scheme for cleaning up unused entries.
In the representation of custom assets we propose, each token is associated with the
hash of the forging policy script that must validate when the token is forged,
e.g.
in order to forge the value
\(\{\mathsf{HASHVALUE} \mapsto \{\mathsf{Owl} \mapsto 1\}\}\), a script whose
hash is $\mathsf{HASHVALUE}$ will be run.
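The following Python sketch outlines this check for illustration only; the function names, the choice of hash, and the validation interface are assumptions rather than the actual ledger rules.
\begin{verbatim}
# Illustrative sketch: every asset group being forged must be accompanied
# by a policy script whose hash matches the asset key, and that script
# must approve the transaction. All names here are invented.
import hashlib

def script_hash(script_bytes):
    """Stand-in for the ledger's script hashing function."""
    return hashlib.sha256(script_bytes).hexdigest()

def forge_allowed(forge_field, policy_scripts, tx, run_policy):
    """forge_field maps a policy hash to the token bundle being minted or
    burned; policy_scripts are the scripts carried by the transaction;
    run_policy evaluates a script against the transaction."""
    scripts_by_hash = {script_hash(s): s for s in policy_scripts}
    for policy_hash in forge_field:
        script = scripts_by_hash.get(policy_hash)
        if script is None or not run_policy(script, tx):
            return False
    return True
\end{verbatim}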
Relying on permanent hash associations to identify asset forging policies and their assets also has its disadvantages.
For example, policy hashes are long strings that, in our model, will have multiple copies stored on the ledger.
Such strings are not human-readable, take up valuable ledger real estate, and increase transaction-size-based fees.